Publications (4 of 4)
Pietrantuono, R., Potena, P., Pecchia, A., Rodriguez, D., Russo, S. & Fernandez, L. (2018). Multi-Objective Testing Resource Allocation under Uncertainty. IEEE Transactions on Evolutionary Computation, 22(3), 347-362
2018 (English). In: IEEE Transactions on Evolutionary Computation, ISSN 1089-778X, E-ISSN 1941-0026, Vol. 22, no. 3, p. 347-362. Article in journal (Refereed). Published.
Abstract [en]

Testing resource allocation is the problem of planning the assignment of resources to testing activities of software components so as to achieve a target goal under given constraints. Existing methods build on Software Reliability Growth Models (SRGMs), aiming at maximizing reliability given time/cost constraints, or at minimizing cost given quality/time constraints. We formulate it as a multi-objective, debug-aware, and robust optimization problem under uncertainty of data, advancing the state-of-the-art in the following ways. Multi-objective optimization produces a set of solutions, allowing the evaluation of alternative trade-offs among reliability, cost, and release time. Debug awareness relaxes the traditional assumptions of SRGMs, in particular the very unrealistic immediate repair of detected faults, and incorporates the bug assignment activity. Robustness provides solutions that remain valid in spite of a degree of uncertainty in the input parameters. We show results with a real-world case study.
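As a rough illustration of the trade-off space the abstract describes, the sketch below enumerates allocations of a testing budget across components under a simple exponential SRGM (the classic Goel-Okumoto model) and keeps the non-dominated faults-found/cost pairs. The parameter values and the brute-force enumeration are illustrative assumptions only, not the paper's actual debug-aware, robust formulation.

```python
import math
from itertools import product

def detected_faults(a, b, t):
    # Goel-Okumoto exponential SRGM: expected number of faults
    # detected after t units of testing effort.
    return a * (1.0 - math.exp(-b * t))

def pareto_front(components, budgets, total_budget):
    # Enumerate allocations of testing time to components and keep
    # the non-dominated (faults-found vs. cost) trade-offs.
    solutions = []
    for alloc in product(budgets, repeat=len(components)):
        cost = sum(alloc)
        if cost > total_budget:
            continue
        found = sum(detected_faults(a, b, t)
                    for (a, b), t in zip(components, alloc))
        solutions.append((alloc, found, cost))
    # s stays on the front unless some other solution finds at least as
    # many faults at no greater cost (and differs in at least one objective).
    front = [s for s in solutions
             if not any(o[1] >= s[1] and o[2] <= s[2]
                        and (o[1], o[2]) != (s[1], s[2])
                        for o in solutions)]
    return front

# Two hypothetical components: (expected total faults a, detection rate b).
components = [(10, 0.3), (5, 0.8)]
front = pareto_front(components, budgets=[0, 2, 4, 6], total_budget=8)
```

Each point on the returned front is one reliability/cost trade-off a decision maker could pick; a real solver would replace the enumeration with a multi-objective metaheuristic.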

Keywords
Testing, Resource management, Mathematical model, Debugging, Fault detection, Uncertainty, Optimization
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-32387 (URN), 10.1109/TEVC.2017.2691060 (DOI), 2-s2.0-85043388350 (Scopus ID)
Available from: 2017-10-26. Created: 2017-10-26. Last updated: 2019-01-10. Bibliographically approved.
Flemström, D., Potena, P., Sundmark, D., Afzal, W. & Bohlin, M. (2018). Similarity-based prioritization of test case automation. Software Quality Journal, 26(4), 1421-1449
2018 (English). In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 26, no. 4, p. 1421-1449. Article in journal (Refereed). Published.
Abstract [en]

The importance of efficient software testing procedures is driven by ever-increasing system complexity as well as global competition. In the particular case of manual test cases at the system integration level, where thousands of test cases may be executed before release, time must be well spent in order to test the system as completely and as efficiently as possible. Automating a subset of the manual test cases, i.e., translating the manual instructions to automatically executable code, is one way of decreasing the test effort. It is further common that test cases exhibit similarities, which can be exploited through reuse when automating a test suite. In this paper, we investigate the potential for reducing test effort by ordering the test cases before such automation, given that we can reuse already automated parts of test cases. In our analysis, we investigate several approaches for prioritization in a case study at a large Swedish vehicle manufacturer. The study analyzes the effects with respect to test effort on four projects with a total of 3919 integration test cases constituting 35,180 test steps, written in natural language. The results show that for the four projects considered, the difference in expected manual effort between the best and the worst order found is on average 12 percentage points. The results also show that our proposed prioritization method is nearly as good as more resource-demanding meta-heuristic approaches, at a fraction of the computational time. Based on our results, we conclude that the order of automation is important when the set of test cases contains similar steps (instructions) that cannot be removed but are possible to reuse. More precisely, the order is important with respect to how quickly the manual test execution effort decreases for a set of test cases that are being automated.
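The ordering idea in the abstract can be sketched, in outline, as a greedy heuristic: automate next the test case that needs the fewest not-yet-automated steps, so already-automated steps are reused as much as possible. The function name, the set-of-steps data shape, and the greedy rule are illustrative assumptions, not the paper's actual prioritization method.

```python
def greedy_automation_order(test_cases):
    # test_cases: dict mapping test-case name -> set of step identifiers.
    # Greedily pick next the test case that requires the fewest *new*
    # steps to be automated, maximizing reuse of steps automated so far.
    automated = set()
    order = []
    remaining = dict(test_cases)
    while remaining:
        name = min(remaining, key=lambda n: len(remaining[n] - automated))
        order.append(name)
        automated |= remaining.pop(name)
    return order

cases = {
    "TC1": {"s1", "s2", "s3"},
    "TC2": {"s1", "s2"},
    "TC3": {"s4", "s5", "s6", "s7"},
}
# TC2 first (fewest steps), then TC1 (reuses s1 and s2, one new step),
# then TC3 (no overlap, automated last).
print(greedy_automation_order(cases))  # ['TC2', 'TC1', 'TC3']
```

A cheap heuristic like this is in the spirit of the paper's finding that a simple ordering can approach more resource-demanding meta-heuristics.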

Keywords
Effort, Prioritization, Reuse, Software testing, Test case automation, Automation, Computer software reusability, Heuristic methods, Computational time, Global competition, Meta-heuristic approach, System integration, Test case
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-33511 (URN), 10.1007/s11219-017-9401-7 (DOI), 2-s2.0-85043389019 (Scopus ID)
Available from: 2018-03-23. Created: 2018-03-23. Last updated: 2018-12-20.
Gonzalez-Hernandez, L., Lindström, B., Offutt, J., Andler, S. F., Potena, P. & Bohlin, M. (2018). Using Mutant Stubbornness to Create Minimal and Prioritized Test Sets. In: 2018 IEEE International Conference on Software Quality, Reliability and Security, QRS 2018. Paper presented at 2018 IEEE International Conference on Software Quality, Reliability and Security, QRS 2018 (pp. 446-457).
2018 (English). In: 2018 IEEE International Conference on Software Quality, Reliability and Security, QRS 2018, 2018, p. 446-457. Conference paper, Published paper (Refereed).
Abstract [en]

In testing, engineers want to run the most useful tests early (prioritization). When tests are run hundreds or thousands of times, minimizing a test set can result in significant savings (minimization). This paper proposes a new analysis technique to address both the minimal test set and the test case prioritization problems. It precisely defines the concept of mutant stubbornness, which is the basis for our analysis technique. We empirically compare our technique with other test case minimization and prioritization techniques in terms of the size of the minimized test sets and how quickly mutants are killed. We used seven C language subjects from the Siemens Repository, specifically the test sets and the killing matrices from a previous study. We used 30 different orders for each set and ran every technique 100 times over each set. Results show that our analysis technique performed significantly better than prior techniques for creating minimal test sets and was able to establish new bounds for all cases. Our analysis technique also killed mutants as fast as or faster than prior techniques. These results indicate that our mutant stubbornness technique constructs test sets that are both minimal in size and effectively prioritized, as well as or better than other techniques.
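A minimal sketch of the idea, under the assumption (drawn from the abstract, not the paper's exact definition) that a mutant is "stubborn" when few tests kill it: repeatedly take the most stubborn live mutant from the killing matrix, add one test that kills it, and drop every mutant that test kills. The tie-breaking rule and data shapes are illustrative.

```python
def stubbornness_order(kill_matrix):
    # kill_matrix: mutant -> set of tests that kill it.
    # A mutant killed by fewer tests is more "stubborn". Repeatedly cover
    # the most stubborn live mutant with one of its killing tests; the
    # result is a small test set, prioritized by stubbornness.
    alive = {m: t for m, t in kill_matrix.items() if t}  # skip equivalent mutants
    chosen = []
    while alive:
        stubborn = min(alive, key=lambda m: len(alive[m]))
        test = sorted(alive[stubborn])[0]  # deterministic tie-break
        chosen.append(test)
        # Remove every mutant the chosen test kills.
        alive = {m: t for m, t in alive.items() if test not in t}
    return chosen

km = {
    "m1": {"t1"},              # very stubborn: only t1 kills it
    "m2": {"t1", "t2"},
    "m3": {"t2", "t3", "t4"},  # easy to kill
}
print(stubbornness_order(km))  # ['t1', 't2']
```

Covering stubborn mutants first tends to shrink the test set quickly, since a test that kills a hard-to-kill mutant usually kills easier ones as a side effect.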

Keywords
Test Case Minimization, Minimal Sets, Test Case Prioritization, Mutant Stubbornness
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-35178 (URN), 10.1109/QRS.2018.00058 (DOI), 2-s2.0-85052313827 (Scopus ID), 978-1-5386-7757-5 (ISBN)
Conference
2018 IEEE International Conference on Software Quality, Reliability and Security, QRS 2018
Available from: 2018-09-12. Created: 2018-09-12. Last updated: 2019-03-07. Bibliographically approved.
Lisper, B., Lindström, B., Potena, P., Saadatmand, M. & Bohlin, M. (2017). Targeted Mutation: Efficient Mutation Analysis for Testing Non-Functional Properties. In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2017. Paper presented at 10th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2017, 13 March 2017 through 17 March 2017 (pp. 65-68).
2017 (English). In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2017, 2017, p. 65-68. Conference paper, Published paper (Refereed).
Abstract [en]

Mutation analysis has proven to be a strong technique for software testing. Unfortunately, it is also computationally expensive, and researchers have therefore proposed several different approaches to reduce the effort. None of these reduction techniques, however, focuses on non-functional properties. Given that our goal is to create a strong test suite for testing a certain non-functional property, which mutants should be used? In this paper, we introduce the concept of targeted mutation, which focuses mutation effort on those parts of the code where a change can make a difference with respect to the targeted non-functional property. We show how targeted mutation can be applied to derive efficient test suites for estimating the Worst-Case Execution Time (WCET). We use program slicing to direct the mutations to the parts of the code that are likely to have the strongest influence on execution time. Finally, we outline an experimental procedure for how to evaluate the technique.
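The core filtering step the abstract describes can be sketched as follows: given candidate mutation sites and the set of statements a slicer reports as influencing the measured property, keep only the mutants inside the slice. The dict layout, mutation-operator names, and line numbers are hypothetical; a real tool would obtain the slice from an actual program slicer.

```python
def targeted_mutants(mutation_sites, slice_lines):
    # Keep only mutation sites that fall inside the program slice, i.e.
    # the statements that can influence the targeted non-functional
    # property (here, execution time). Sites outside the slice are
    # skipped, reducing the cost of mutation analysis.
    return [m for m in mutation_sites if m["line"] in slice_lines]

sites = [
    {"id": "M1", "line": 10, "op": "AOR"},  # inside a timing-relevant loop
    {"id": "M2", "line": 42, "op": "ROR"},  # logging code, outside the slice
    {"id": "M3", "line": 12, "op": "ROR"},
]
slice_lines = {10, 11, 12, 13}  # lines the slicer reports as affecting WCET

print([m["id"] for m in targeted_mutants(sites, slice_lines)])  # ['M1', 'M3']
```

Only M1 and M3 survive, so test-suite effort is spent where a mutation can actually change execution time.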

Keywords
Execution time, Mutation testing, Non-functional properties, Program slicing, Program processors, Verification, Experimental procedure, Mutation analysis, Reduction techniques, Worst-case execution time, Software testing
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-30962 (URN), 10.1109/ICSTW.2017.18 (DOI), 2-s2.0-85018402349 (Scopus ID), 9781509066766 (ISBN)
Conference
10th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2017, 13 March 2017 through 17 March 2017
Available from: 2017-09-06. Created: 2017-09-06. Last updated: 2019-02-05. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2165-7039