Software System Testing Assisted by Large Language Models: An Exploratory Study
Keyword(s):
Software Testing
E2E Testing
LLMs
Large Language Models
Software Engineering
End-to-End Testing
System Testing
Testing
Language Models
Publication date:
Publisher:
Springer
Citation:
Physical description:
Abstract:
Large language models (LLMs) based on the transformer architecture have revolutionized natural language processing (NLP), demonstrating excellent capabilities in understanding and generating human-like text. In Software Engineering, LLMs have been applied to code generation, documentation, and report-writing tasks to support developers and reduce manual work. In Software Testing, one of the cornerstones of Software Engineering, LLMs have been explored for generating test code and test inputs, automating the oracle process, and generating test scenarios. However, their application to high-level testing stages such as system testing, which requires deep knowledge of both the business domain and the technology stack, remains largely unexplored. This paper presents an exploratory study of how LLMs can support system test development. Given that LLM performance depends on input data quality, the study focuses on how to query general-purpose LLMs to first obtain test scenarios and then derive test cases from them. The study evaluates two popular LLMs (GPT-4o and GPT-4o-mini), using a European project demonstrator as a benchmark. It compares two different prompt strategies and employs well-established prompt patterns, showing promising results as well as room for improvement in the application of LLMs to support system testing.
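As a rough illustration of the two-stage querying flow described in the abstract, the sketch below first asks an LLM for test scenarios and then derives test cases from them. This is a minimal sketch assuming the OpenAI Python client; the prompt wording and the `system_description` placeholder are hypothetical and do not reproduce the paper's actual prompt strategies or prompt patterns.

```python
# Minimal sketch of the two-stage flow: (1) obtain test scenarios,
# (2) derive test cases from those scenarios.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, model: str = "gpt-4o") -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical input: a textual description of the system under test.
system_description = "..."

# Stage 1: query the LLM for high-level system test scenarios.
scenarios = ask(
    "You are an expert software tester. Given the following system "
    f"description, list the main system test scenarios:\n{system_description}"
)

# Stage 2: derive concrete test cases from the scenarios obtained above.
test_cases = ask(
    "For each of the following test scenarios, derive concrete test cases "
    f"(preconditions, steps, expected results):\n{scenarios}"
)
print(test_cases)
```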
ISBN:
Sponsored by:
This work was supported in part by the project PID2022-137646OB-C32 under Grant MCIN/AEI/10.13039/501100011033/FEDER, UE, and in part by the project MASE RDS-PTR_22_24_P2.1 Cybersecurity (Italy).
Collections
- University Library [74]
- Computer Science [811]
- OpenAIRE Research and Documents [8083]