Poster: Performance Testing Driven by Reinforcement Learning
2020 (English). In: 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST), 2020, p. 402-405. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]
Performance testing remains a challenge, particularly for complex systems. Various application-, platform- and workload-based factors can influence the performance of the software under test. Common approaches for generating platform- and workload-based test conditions rely on system models, source code analysis, real-usage modeling, or use-case-based design techniques. However, creating a detailed performance model is often difficult, and these artifacts might not always be available during testing. On the other hand, test automation solutions such as automated test case generation can reduce effort and cost, with the potential to improve coverage of the intended test criteria. Furthermore, if the optimal way (policy) to generate test cases can be learnt by the testing system, the learnt policy can be reused in further testing situations, such as testing variants or evolved versions of the software and different testing scenarios. This capability can lead to additional savings in cost and computation time in the testing process. In this research, we present an autonomous performance testing framework that uses model-free reinforcement learning augmented by fuzzy logic and self-adaptive strategies. It is able to learn the optimal policy for generating platform- and workload-based test conditions that meet the intended testing objective, without access to the system model or source code. The use of fuzzy logic and self-adaptive strategies helps tackle uncertainty and improves the accuracy and adaptivity of the proposed learning. Our evaluation experiments show that the proposed framework generates test conditions efficiently and adapts to varying testing situations.
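To make the approach described in the abstract concrete, the following is a minimal sketch of what a model-free agent of this kind might look like: tabular Q-learning over a fuzzy-style discretization of the observed response time, choosing platform/workload actions until a performance objective is met. All names here (the action set, fuzzy_state, measure_response_time, the toy SUT model, and the hyperparameters) are assumptions made for illustration, not the authors' actual framework.

import random
from collections import defaultdict

# Hedged sketch: a model-free (Q-learning) agent that learns which
# platform/workload actions drive the system under test (SUT) toward a
# performance objective, without a system model or source code access.
# Actions, state labels, and the SUT stand-in are hypothetical.

ACTIONS = ["reduce_cpu", "reduce_memory", "increase_workload"]

ALPHA = 0.5      # learning rate (assumed value)
GAMMA = 0.9      # discount factor (assumed value)
EPSILON = 0.2    # exploration rate; a self-adaptive scheme could decay this

def fuzzy_state(response_time_ms, objective_ms):
    """Coarse, fuzzy-style discretization of how close the SUT is to the
    intended performance objective (one way to absorb measurement
    uncertainty, standing in for a full fuzzy-logic controller)."""
    ratio = response_time_ms / objective_ms
    if ratio < 0.5:
        return "far_below"
    if ratio < 0.9:
        return "below"
    if ratio < 1.1:
        return "near"
    return "reached"

def measure_response_time(action):
    """Placeholder for applying a test condition to the SUT and measuring
    its response time; a toy stochastic model stands in for the real SUT."""
    base = {"reduce_cpu": 60, "reduce_memory": 40, "increase_workload": 80}
    return max(0.0, random.gauss(base[action], 10))

def train(episodes=200, objective_ms=100.0):
    q = defaultdict(float)   # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        state, rt = "far_below", 0.0
        for _ in range(20):  # bounded steps per testing episode
            # Epsilon-greedy action selection over the learnt Q-values.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            rt += measure_response_time(action)
            next_state = fuzzy_state(rt, objective_ms)
            # Reward reaching the intended testing objective, penalize delay.
            reward = 1.0 if next_state == "reached" else -0.1
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - q[(state, action)])
            state = next_state
            if state == "reached":
                break
    return q

if __name__ == "__main__":
    policy = train()
    # The learnt Q-table is the reusable policy the abstract refers to:
    # it can be applied to testing variants or evolved software versions.
    for s in ("far_below", "below", "near"):
        print(s, "->", max(ACTIONS, key=lambda a: policy[(s, a)]))

In this sketch the reuse claim from the abstract corresponds to keeping the trained Q-table and applying it greedily (EPSILON = 0) in a new testing situation, so that learning effort is not repeated from scratch.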
Place, publisher, year, edition, pages
2020. p. 402-405
Keywords [en]
learning (artificial intelligence), program testing, source code (software), complex systems, workload-based factors, workload-based test conditions, system model, usage modeling, use-case based design techniques, test automation solutions, automated test case generation, intended test criteria coverage, testing system, testing situations, testing variants, testing process, autonomous performance testing framework, model-free reinforcement learning, intended testing objective, source code, Unified modeling language, Stress, Time factors, Sensitivity, Error analysis, Adaptation models, performance testing, stress testing, load testing, machine learning, reinforcement learning
National Category
Natural Sciences
Identifiers
URN: urn:nbn:se:ri:diva-51989
DOI: 10.1109/ICST46399.2020.00048
OAI: oai:DiVA.org:ri-51989
DiVA, id: diva2:1520770
Conference
2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST)
Available from: 2021-01-21 Created: 2021-01-21 Last updated: 2023-10-04 Bibliographically approved