Performance Testing Using a Smart Reinforcement Learning-Driven Test Agent
RISE Research Institutes of Sweden, Digital Systems, Industrial Systems. ORCID iD: 0000-0003-3354-1463
Mälardalen University, Sweden.
RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. ORCID iD: 0000-0001-7879-4371
RISE Research Institutes of Sweden, Digital Systems, Industrial Systems. ORCID iD: 0000-0002-1512-0844
2021 (English). In: 2021 IEEE Congress on Evolutionary Computation (CEC), 2021, p. 2385-2394. Conference paper, Published paper (Refereed)
Abstract [en]

Performance testing with the aim of generating an efficient and effective workload to identify performance issues is challenging. Many automated approaches rely mainly on analyzing system models or source code, or on extracting the system's usage pattern during execution. However, such information and artifacts are not always available. Moreover, not all transactions within a generated workload affect the performance of the system in the same way; a finely tuned workload can accomplish the test objective more efficiently. Model-free reinforcement learning is widely used to find the optimal behavior for accomplishing an objective in many decision-making problems without relying on a model of the system. This paper proposes that if a test agent can learn the optimal policy (way) for generating a test workload that meets a test objective, then efficient test automation becomes possible without relying on system models or source code. We present RELOAD, a self-adaptive reinforcement learning-driven load testing agent that learns the optimal policy for test workload generation and efficiently generates an effective workload to meet the test objective. Once the agent has learned the optimal policy, it can reuse it in subsequent testing activities. Our experiments show that the proposed intelligent load test agent accomplishes the test objective at lower test cost than common load testing procedures and achieves higher test efficiency.
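To make the idea concrete, the following is a minimal, hypothetical sketch of the kind of agent the abstract describes: a tabular Q-learning loop that selects workload actions (transaction type and intensity), observes the measured response time, and is rewarded for approaching a response-time objective at low load cost. It is not the paper's RELOAD implementation; apply_workload(), the action set, the state discretization, and the reward shaping are illustrative assumptions only.

# Illustrative sketch only: tabular Q-learning over workload configurations.
# apply_workload() is a hypothetical stand-in for a real load driver.
import random
from collections import defaultdict

ACTIONS = [("browse", 50), ("browse", 200), ("checkout", 50), ("checkout", 200)]
TARGET_RESPONSE_MS = 1000          # test objective: push response time past this threshold
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def apply_workload(action):
    """Hypothetical placeholder: run the workload against the system under test
    and return the measured response time in ms."""
    tx, users = action
    base = 3.0 if tx == "checkout" else 1.5
    return base * users + random.gauss(0, 50)

def discretize(response_ms):
    # Coarse state: how close the observed response time is to the objective.
    return max(0, min(int(response_ms / 250), 8))

q = defaultdict(float)             # Q[(state, action_index)]
state = 0
for episode in range(500):
    # Epsilon-greedy selection over workload configurations.
    if random.random() < EPSILON:
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])

    response = apply_workload(ACTIONS[a])
    next_state = discretize(response)

    # Reward: progress toward the objective, penalized by load "cost" (users),
    # so cheaper workloads that still stress the system are preferred.
    reward = (response / TARGET_RESPONSE_MS) - 0.001 * ACTIONS[a][1]
    if response >= TARGET_RESPONSE_MS:
        reward += 10.0             # objective met

    best_next = max(q[(next_state, i)] for i in range(len(ACTIONS)))
    q[(state, a)] += ALPHA * (reward + GAMMA * best_next - q[(state, a)])
    state = next_state

# The learned Q-table (the policy) could be stored and reused in later test sessions,
# mirroring the policy-reuse idea mentioned in the abstract.
best = max(range(len(ACTIONS)), key=lambda i: q[(0, i)])
print("Preferred initial workload:", ACTIONS[best])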

Place, publisher, year, edition, pages
2021. p. 2385-2394
Keywords [en]
Analytical models, Automation, Transfer learning, Decision making, Reinforcement learning, Knowledge representation, Evolutionary computation, performance testing, load testing, workload generation, autonomous testing
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:ri:diva-55976
DOI: 10.1109/CEC45853.2021.9504763
OAI: oai:DiVA.org:ri-55976
DiVA, id: diva2:1588403
Conference
2021 IEEE Congress on Evolutionary Computation (CEC)
Available from: 2021-08-27. Created: 2021-08-27. Last updated: 2023-10-04. Bibliographically approved.

Open Access in DiVA

No full text in DiVA


Authority records

Helali Moghadam, Mahshid; Borg, Markus; Saadatmand, Mehrdad; Bohlin, Markus; Potena, Pasqualina
