Automated Performance Testing Based on Active Deep Learning
RISE Research Institutes of Sweden, Digital Systems, Industrial Systems.
RISE Research Institutes of Sweden, Digital Systems, Industrial Systems. ORCID iD: 0000-0003-3354-1463
RISE Research Institutes of Sweden, Digital Systems, Industrial Systems. ORCID iD: 0000-0002-1512-0844
2021 (English). In: 2021 IEEE/ACM International Conference on Automation of Software Test (AST), 2021, p. 11-19. Conference paper, Published paper (Refereed)
Abstract [en]

Generating tests that can reveal performance issues in large and complex software systems within a reasonable amount of time is a challenging task. On one hand, there are numerous combinations of input data values to explore. On the other hand, we have a limited test budget to execute tests. What makes this task even more difficult is the lack of access to the source code and internal details of these systems. In this paper, we present an automated test generation method called ACTA for black-box performance testing. ACTA is based on active learning, which means that it does not require a large set of historical test data to learn about the performance characteristics of the system under test. Instead, it dynamically chooses the tests to execute using uncertainty sampling. ACTA relies on a conditional variant of generative adversarial networks, and facilitates specifying performance requirements in terms of conditions and generating tests that address those conditions. We have evaluated ACTA on a benchmark web application, and the experimental results indicate that this method is comparable with random testing and two other machine learning methods, i.e., PerfXRL and DN.
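
The record contains no code. Purely as an illustrative sketch of the general active-learning loop with uncertainty sampling that the abstract describes (the surrogate model, the simulated system under test, and all names below are hypothetical and not taken from the paper; ACTA itself generates tests with a conditional GAN rather than selecting from a fixed candidate pool), such a test-selection loop might look like this:

```python
"""Illustrative sketch only: active-learning test selection with
uncertainty sampling for black-box performance testing. This is NOT the
ACTA implementation from the paper; the surrogate, the simulated system
under test, and all names are hypothetical."""
import random
import statistics

def run_system_under_test(workload):
    # Stand-in for executing one test against the black-box system and
    # measuring its response time; here a simple simulated latency curve.
    users, payload_kb = workload
    return 0.02 * users + 0.005 * payload_kb + random.gauss(0, 0.5)

def predict_with_uncertainty(workload, executed, k=3):
    # Toy surrogate: k-nearest-neighbour estimate of response time.
    # Uncertainty = spread (stdev) of the neighbours' measurements.
    if len(executed) < k:
        return 0.0, float("inf")  # nothing learned yet: maximally uncertain
    nearest = sorted(
        (abs(w[0] - workload[0]) + abs(w[1] - workload[1]), latency)
        for w, latency in executed
    )[:k]
    latencies = [latency for _, latency in nearest]
    return statistics.mean(latencies), statistics.stdev(latencies)

def active_testing_loop(test_budget=20, pool_size=500):
    # Candidate workloads: (concurrent users, payload size in KB).
    pool = [(random.randint(1, 200), random.randint(1, 1024))
            for _ in range(pool_size)]
    executed = []
    for _ in range(test_budget):
        # Uncertainty sampling: execute the candidate the surrogate is
        # least certain about, then add the measurement to the data set.
        candidate = max(pool,
                        key=lambda w: predict_with_uncertainty(w, executed)[1])
        pool.remove(candidate)
        executed.append((candidate, run_system_under_test(candidate)))
    return executed

if __name__ == "__main__":
    random.seed(0)
    for workload, latency in active_testing_loop():
        print(f"users={workload[0]:3d} payload={workload[1]:4d}KB -> {latency:.2f}s")
```

In this sketch the disagreement among nearby executed workloads serves as the uncertainty signal within a fixed test budget; ACTA's actual uncertainty measure and its conditional-GAN-based test generation are described in the paper.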

Place, publisher, year, edition, pages
2021. p. 11-19
Keywords [en]
Deep learning, Uncertainty, Automation, Benchmark testing, Generative adversarial networks, Software systems, Test pattern generators, Performance testing, automated test generation, active learning, conditional generative adversarial networks
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:ri:diva-55414
DOI: 10.1109/AST52587.2021.00010
OAI: oai:DiVA.org:ri-55414
DiVA, id: diva2:1579129
Conference
2021 IEEE/ACM International Conference on Automation of Software Test (AST). 20-21 May 2021. Madrid, Spain.
Available from: 2021-07-08. Created: 2021-07-08. Last updated: 2023-10-04. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Helali Moghadam, Mahshid; Saadatmand, Mehrdad
