Publications (5 of 5)
Flemström, D., Gustafsson, T. & Kobetski, A. (2018). A Case Study of Interactive Development of Passive Tests. Paper presented at Proceedings of 5th International Workshop on Requirements Engineering and Testing, Gothenburg, Sweden, June 2018 (RET’2018) (pp. 13-20).
A Case Study of Interactive Development of Passive Tests
2018 (English) Conference paper, Published paper (Other academic)
Abstract [en]

Testing in the active sense is the most common way to perform verification and validation of systems, but testing in the passive sense has one compelling property: independence. Independence from test stimuli and other passive tests opens up for parallel testing and off-line analysis. However, the tests can be difficult to develop since the complete testable state must be expressed using some formalism. We argue that a carefully chosen language together with an interactive workflow, providing immediate feedback, can enable testers to approach passive testing. We have conducted a case study in the automotive domain, interviewing experienced testers. The testers have been introduced to, and had hands-on practice with, a tool. The tool is based on Easy Approach to Requirements Syntax (EARS) and provides an interactive workflow for developing and evaluating test results. The case study shows that i) the testers believe passive testing is useful for many of their tests, ii) they see benefits in parallelism and off-line analysis, iii) the interactive workflow is necessary for writing the testable state expression, but iv) when the testable state becomes too complex, the proposed language is a limitation. However, the language contributes to concise tests, resembling executable requirements.
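The core idea of a passive test, as described in the abstract, can be illustrated with a minimal sketch: the test never injects stimuli into the system, it only checks that whenever a guard condition holds in a recorded trace, the expected response also holds. All names here are hypothetical illustrations, not the paper's tool or language.

```python
# Hypothetical sketch of a passive test evaluated over a recorded trace.
# A passive test injects no stimuli; it only checks that whenever the
# guard holds in the trace, the expected response also holds.

def evaluate_passive_test(trace, guard, expectation):
    """Return trace indices where the guard held but the expectation
    was violated (an empty list means the test passed)."""
    failures = []
    for i, state in enumerate(trace):
        if guard(state) and not expectation(state):
            failures.append(i)
    return failures

# Example requirement in EARS-like spirit:
# "When vehicle speed exceeds 50, the headlights shall be on."
trace = [
    {"speed": 30, "headlights": False},
    {"speed": 55, "headlights": True},
    {"speed": 60, "headlights": False},  # violation
]
failures = evaluate_passive_test(
    trace,
    guard=lambda s: s["speed"] > 50,
    expectation=lambda s: s["headlights"],
)
print(failures)  # [2]
```

Because the check depends only on the trace, several such tests can be evaluated in parallel or off-line against the same recording, which is the independence property the abstract highlights.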

Keywords
passive testing, case study, content analysis, test language, test tool
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-34877 (URN) 10.1145/3195538.3195544 (DOI) 2-s2.0-85051171350 (Scopus ID)
Conference
Proceedings of 5th International Workshop on Requirements Engineering and Testing, Gothenburg, Sweden, June 2018 (RET’2018)
Available from: 2018-08-21 Created: 2018-08-21 Last updated: 2019-03-28 Bibliographically approved
Flemström, D., Potena, P., Sundmark, D., Afzal, W. & Bohlin, M. (2018). Similarity-based prioritization of test case automation. Software quality journal, 26(4), 1421-1449
Similarity-based prioritization of test case automation
2018 (English) In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 26, no 4, p. 1421-1449. Article in journal (Refereed) Published
Abstract [en]

The importance of efficient software testing procedures is driven by an ever-increasing system complexity as well as global competition. In the particular case of manual test cases at the system integration level, where thousands of test cases may be executed before release, time must be well spent in order to test the system as completely and as efficiently as possible. Automating a subset of the manual test cases, i.e., translating the manual instructions to automatically executable code, is one way of decreasing the test effort. It is further common that test cases exhibit similarities, which can be exploited through reuse when automating a test suite. In this paper, we investigate the potential for reducing test effort by ordering the test cases before such automation, given that we can reuse already automated parts of test cases. In our analysis, we investigate several approaches for prioritization in a case study at a large Swedish vehicular manufacturer. The study analyzes the effects with respect to test effort, on four projects with a total of 3919 integration test cases constituting 35,180 test steps, written in natural language. The results show that for the four projects considered, the difference in expected manual effort between the best and the worst order found is on average 12 percentage points. The results also show that our proposed prioritization method is nearly as good as more resource-demanding meta-heuristic approaches at a fraction of the computational time. Based on our results, we conclude that the order of automation is important when the set of test cases contains similar steps (instructions) that cannot be removed, but are possible to reuse. More precisely, the order is important with respect to how quickly the manual test execution effort decreases for a set of test cases that are being automated.
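The effect of ordering described above can be sketched with a simple greedy heuristic: at each step, automate the test case that needs the fewest not-yet-automated steps, so that reuse of shared steps brings manual effort down as quickly as possible. This is only an illustration of the underlying idea, not the paper's actual prioritization method; all names are hypothetical.

```python
# Illustrative greedy ordering (not the paper's exact method): at each
# pick, automate the test case with the fewest steps that are not yet
# automated, maximizing reuse of shared steps.

def greedy_order(test_cases):
    """test_cases: dict mapping test-case name -> set of step identifiers.
    Returns an automation order that greedily maximizes step reuse."""
    automated_steps = set()
    remaining = dict(test_cases)
    order = []
    while remaining:
        # Cheapest next automation = fewest steps still to automate.
        name = min(remaining, key=lambda n: len(remaining[n] - automated_steps))
        order.append(name)
        automated_steps |= remaining.pop(name)
    return order

suite = {
    "tc1": {"a", "b", "c"},
    "tc2": {"a", "b", "d"},  # shares steps a, b with tc1
    "tc3": {"x", "y"},       # small, independent test case
}
print(greedy_order(suite))  # ['tc3', 'tc1', 'tc2']
```

After tc1 is automated, tc2 only needs one new step ("d"), so it becomes very cheap; an order that interleaved unrelated test cases would postpone that saving.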

Keywords
Effort, Prioritization, Reuse, Software-testing, Test-case automation, Automation, Computer software reusability, Heuristic methods, Computational time, Global competition, Meta-heuristic approach, System integration, Test case, Software testing
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-33511 (URN) 10.1007/s11219-017-9401-7 (DOI) 2-s2.0-85043389019 (Scopus ID)
Available from: 2018-03-23 Created: 2018-03-23 Last updated: 2018-12-20
Flemström, D., Gustafsson, T. & Kobetski, A. (2017). SAGA Toolbox: Interactive Testing of Guarded Assertions. In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017. Paper presented at 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, 13 March 2017 through 17 March 2017 (pp. 516-523).
SAGA Toolbox: Interactive Testing of Guarded Assertions
2017 (English) In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, 2017, p. 516-523. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents the SAGA toolbox. It centers around the development of tests, and the analysis of test results, in the Guarded Assertions (GA) format. Such a test defines when to test, and what to expect in that state. The SAGA toolbox lets the user describe the test and, at the same time, get immediate feedback on the test result based on a trace from the System Under Test (SUT). The feedback is visualized using plots of the trace. Since the data is already there, the test engineer can explore it interactively and follow an agile development method. Moreover, the SAGA toolbox also lets the test engineer change test stimuli plots to study the effect they have on a test. It can later generate computer programs that feed these test stimuli to the SUT. This enables an interactive feedback loop, where immediate feedback on changes to the test, or to the test stimuli, indicates whether the test is correct and whether it passed or failed.
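The "when to test, what to expect" structure of a guarded assertion can be sketched as a check over a recorded trace: whenever the guard becomes true, the expected condition must hold within some window of subsequent samples. This is a hypothetical illustration under assumed names, not SAGA's actual API or semantics.

```python
# Hypothetical sketch of a guarded assertion over a recorded trace:
# "When the guard becomes true, the expectation shall hold within the
# next `window` samples." Names are illustrative, not SAGA's API.

def check_guarded_assertion(trace, guard, expect, window):
    """Return (index, verdict) pairs, one per rising edge of the guard."""
    verdicts = []
    for i, sample in enumerate(trace):
        # Only test where the guard first becomes true (rising edge).
        if guard(sample) and (i == 0 or not guard(trace[i - 1])):
            ok = any(expect(s) for s in trace[i : i + window + 1])
            verdicts.append((i, ok))
    return verdicts

# "When brake_pedal is pressed, brake_light shall turn on within 2 samples."
trace = [
    {"brake_pedal": 0, "brake_light": 0},
    {"brake_pedal": 1, "brake_light": 0},
    {"brake_pedal": 1, "brake_light": 1},
    {"brake_pedal": 0, "brake_light": 0},
]
print(check_guarded_assertion(
    trace,
    guard=lambda s: s["brake_pedal"] == 1,
    expect=lambda s: s["brake_light"] == 1,
    window=2,
))  # [(1, True)]
```

Because the verdict is computed per guard activation on an existing trace, editing either the assertion or the stimuli and re-running the check gives the kind of immediate feedback loop the abstract describes.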

Keywords
Guarded assertions, Interactive testing tool, Test sequence generation, Verification, Agile development methods, Immediate feedbacks, Interactive feedback, System under test, Test engineers, Test sequence generations, Testing tools, Software testing
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-30921 (URN) 10.1109/ICST.2017.59 (DOI) 2-s2.0-85020696155 (Scopus ID) 9781509060313 (ISBN)
Conference
10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, 13 March 2017 through 17 March 2017
Available from: 2017-09-07 Created: 2017-09-07 Last updated: 2018-08-16 Bibliographically approved
Flemström, D., Gustafsson, T., Kobetski, A. & Sundmark, D. (2016). A Research Roadmap for Test Design in Automated Integration Testing of Vehicular Systems (9ed.). In: FASSI 2016: The Second International Conference on Fundamentals and Advances in Software Systems Integration. Paper presented at Second International Conference on Fundamentals and Advances in Software Systems Integration (FASSI 2016), July 24-28, 2016, Nice, France (pp. 18-23). International Academy, Research and Industry Association (IARIA)
A Research Roadmap for Test Design in Automated Integration Testing of Vehicular Systems
2016 (English) In: FASSI 2016: The Second International Conference on Fundamentals and Advances in Software Systems Integration, International Academy, Research and Industry Association (IARIA), 2016, 9, p. 18-23. Conference paper, Published paper (Refereed)
Abstract [en]

An increasing share of the innovations emerging in the vehicular industry are implemented in software. Consequently, vehicular electrical systems are becoming more and more complex, with an increasing number of functions, computational nodes and complex sensors, e.g., cameras and radars. The introduction of autonomous functional components, such as advanced driver assistance systems, highlights the foreseeable complexity of different parts of the system interacting with each other and with the human driver. It is of utmost importance that the testing effort can scale with this increasing complexity. In this paper, we review the challenges that we are facing in integration testing of complex embedded vehicular systems. Further, based on these challenges, we outline a set of research directions for semi-automated or automated test design and execution in integration testing of vehicular systems. While the discussion is exemplified with our hands-on experience of the automotive industry, many of the concepts can be generalised to a broader setting of complex embedded systems.

Place, publisher, year, edition, pages
International Academy, Research and Industry Association (IARIA), 2016 Edition: 9
Keywords
Software Testing, Automotive Systems, Embedded Systems, Integration Testing
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-24557 (URN) 9781612084978 (ISBN)
Conference
Second International Conference on Fundamentals and Advances in Software Systems Integration (FASSI 2016), July 24-28, 2016, Nice, France
Projects
SAGA
Available from: 2016-10-31 Created: 2016-10-31 Last updated: 2019-06-11 Bibliographically approved
Flemström, D., Sundmark, D. & Afzal, W. (2015). Vertical Test Reuse for Embedded Systems: A Systematic Mapping Study (11ed.). In: 2015 41st Euromicro Conference on Software Engineering and Advanced Applications. Paper presented at 41st Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2015), August 26-28, 2015, Funchal, Portugal (pp. 317-324). Conference Publishing Services, Article ID 7302469.
Vertical Test Reuse for Embedded Systems: A Systematic Mapping Study
2015 (English) In: 2015 41st Euromicro Conference on Software Engineering and Advanced Applications, Conference Publishing Services, 2015, 11, p. 317-324, article id 7302469. Conference paper, Published paper (Refereed)
Abstract [en]

Vertical test reuse refers to the reuse of test cases or other test artifacts over different integration levels in the software or system engineering process. Vertical test reuse has previously been proposed for reducing test effort and improving test effectiveness, particularly for embedded system development. The goal of this study is to provide an overview of the state of the art in the field of vertical test reuse for embedded system development. For this purpose, a systematic mapping study has been performed, identifying 11 papers on vertical test reuse for embedded systems. The primary result from the mapping is a classification of published work on vertical test reuse in the embedded system domain, covering motivations for reuse, reuse techniques, test levels and reusable test artifacts considered, and to what extent the effects of reuse have been evaluated.

Place, publisher, year, edition, pages
Conference Publishing Services, 2015 Edition: 11
Keywords
embedded system, systematic mapping study, vertical test reuse
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-24521 (URN) 10.1109/SEAA.2015.46 (DOI) 978-1-4673-7585-6 (ISBN)
Conference
41st Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2015), August 26-28, 2015, Funchal, Portugal
Available from: 2016-10-31 Created: 2016-10-31 Last updated: 2019-07-12 Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-8096-3592
