Machine learning-assisted performance testing
Helali Moghadam, Mahshid (RISE - Research Institutes of Sweden (2017-2019), ICT, SICS; Mälardalen University, Sweden). ORCID iD: 0000-0003-3354-1463
2019 (English). In: ESEC/FSE 2019 - Proceedings of the 2019 27th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Association for Computing Machinery, Inc., 2019, p. 1187-1189. Conference paper, Published paper (Refereed).
Abstract [en]

Automated testing activities such as automated test case generation reduce human effort and cost and can positively affect test coverage. If the testing system, i.e., a smart tester agent, could learn the optimal policy, i.e., the course of actions to adopt, for performing the intended test activity, the learnt policy could be reused in analogous situations, further reducing the required effort. Performance testing under stress execution conditions, i.e., stress testing, which involves applying extreme test conditions to find the performance breaking points, remains a challenge, particularly for complex software systems. Common approaches for generating stress test conditions rely on source code or system model analysis, or on use-case-based design. However, source code or precise system models might not be readily available for testing, and building a precise performance model is often difficult, particularly for complex systems. In this research, I have used model-free reinforcement learning to build a self-adaptive autonomous stress testing framework that learns the optimal policy for stress test case generation without a model of the system under test. The experimental analysis shows that the proposed framework generates stress test conditions for different software systems efficiently and adaptively, without access to performance models.
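
The record contains no code, but the abstract's core idea, a model-free reinforcement learning agent that learns to drive a system under test towards its performance breaking point, can be sketched concretely. The snippet below is a minimal, hypothetical illustration using tabular epsilon-greedy Q-learning against a toy simulated system; the action names, the reward shaping, and the `apply_action` response-time model are assumptions made purely for illustration and are not the paper's actual framework.

```python
# Illustrative sketch only: the paper publishes no code, and the "system under test"
# here is a toy simulation so the example is self-contained and runnable.
import random
from collections import defaultdict

ACTIONS = ["more_cpu_load", "more_memory_load", "more_disk_load"]  # hypothetical stress actions
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
RESPONSE_TIME_TARGET = 2.0  # seconds; stands in for the "performance breaking point"


def apply_action(state, action):
    """Toy stand-in for executing a stress action against the SUT and measuring
    its response time; each action raises the simulated response time a little."""
    cpu, mem, disk = state
    if action == "more_cpu_load":
        cpu += 1
    elif action == "more_memory_load":
        mem += 1
    else:
        disk += 1
    response_time = 0.2 * cpu + 0.15 * mem + 0.1 * disk + random.uniform(0.0, 0.1)
    return (cpu, mem, disk), response_time


def run_episode(q_table):
    """One stress-test episode: keep intensifying conditions until the
    response-time target is exceeded or a step budget runs out."""
    state, steps = (0, 0, 0), 0
    while steps < 50:
        # epsilon-greedy action selection over the learnt Q-values
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state, response_time = apply_action(state, action)
        done = response_time >= RESPONSE_TIME_TARGET
        # reward pushing the SUT towards (and past) its breaking point quickly
        reward = 10.0 if done else response_time
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * (0.0 if done else best_next) - q_table[(state, action)]
        )
        state, steps = next_state, steps + 1
        if done:
            break
    return steps


if __name__ == "__main__":
    q = defaultdict(float)
    lengths = [run_episode(q) for _ in range(200)]
    # as the policy improves, fewer steps are needed to reach the breaking point
    print("first 10 episode lengths:", lengths[:10])
    print("last 10 episode lengths:", lengths[-10:])
```

Under these assumptions, episode lengths tend to shrink over training as the learnt policy finds action sequences that reach the response-time threshold faster, which mirrors the efficiency and adaptivity claim in the abstract.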

Place, publisher, year, edition, pages
Association for Computing Machinery, Inc., 2019, p. 1187-1189.
Keywords [en]
Autonomous testing, Performance testing, Reinforcement learning, Stress testing, Test case generation, Machine learning, Automated test case generation, Automated testing, Case based design, Complex software systems, Experimental analysis, Software testing
National Category
Natural Sciences
Identifiers
URN: urn:nbn:se:ri:diva-39924
DOI: 10.1145/3338906.3342484
Scopus ID: 2-s2.0-85071935558
ISBN: 9781450355728 (print)
OAI: oai:DiVA.org:ri-39924
DiVA, id: diva2:1361872
Conference
27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2019, 26 August 2019 through 30 August 2019
Available from: 2019-10-17. Created: 2019-10-17. Last updated: 2021-01-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (Scopus)
