1 - 5 of 5
  • 1.
    Flemström, Daniel (RISE - Research Institutes of Sweden, ICT, SICS); Gustafsson, Thomas (Scania CV AB, Sweden); Kobetski, Avenir (RISE - Research Institutes of Sweden, ICT, SICS).
    A Case Study of Interactive Development of Passive Tests. 2018. Conference paper (Other academic).
    Abstract [en]

    Testing in the active sense is the most common way to perform verification and validation of systems, but testing in the passive sense has one compelling property: independence. Independence from test stimuli and other passive tests opens up for parallel testing and off-line analysis. However, the tests can be difficult to develop since the complete testable state must be expressed using some formalism. We argue that a carefully chosen language together with an interactive work flow, providing immediate feedback, can enable testers to approach passive testing. We have conducted a case study in the automotive domain, interviewing experienced testers. The testers have been introduced to, and had hands-on practice with, a tool. The tool is based on Easy Approach to Requirements Syntax (EARS) and provides an interactive work flow for developing and evaluating test results. The case study shows that i) the testers believe passive testing is useful for many of their tests, ii) they see benefits in parallelism and off-line analysis, iii) the interactive work flow is necessary for writing the testable state expression, but iv) when the testable state becomes too complex, then the proposed language is a limitation. However, the language contributes to concise tests, resembling executable requirements.

  • 2.
    Flemström, Daniel (RISE - Research Institutes of Sweden, ICT, SICS); Gustafsson, Thomas (Scania CV AB, Sweden); Kobetski, Avenir (RISE - Research Institutes of Sweden, ICT, SICS).
    SAGA Toolbox: Interactive Testing of Guarded Assertions. 2017. In: Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation, ICST 2017, 2017, p. 516-523. Conference paper (Refereed).
    Abstract [en]

    This paper presents the SAGA toolbox. It centers on the development of tests, and the analysis of test results, in the Guarded Assertions (GA) format. Such a test defines when to test, and what to expect in that state. The SAGA toolbox lets the user describe the test and, at the same time, get immediate feedback on the test result based on a trace from the System Under Test (SUT). The feedback is visual, using plots of the trace. This enables the test engineer to play around with the data and use an agile development method, since the data is already there. Moreover, the SAGA toolbox also enables the test engineer to change test stimuli plots to study the effect they have on a test. It can later generate computer programs that feed these test stimuli to the SUT. This enables an interactive feedback loop, where immediate feedback on changes to the test, or to the test stimuli, indicates whether the test is correct and whether it passed or failed.
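    The guarded-assertion idea described above ("when to test, and what to expect in such a state") can be sketched as offline evaluation over a recorded SUT trace. This is a minimal illustration, not the SAGA toolbox's actual implementation; the trace layout and signal names (`brake_pedal`, `brake_light`) are hypothetical.

```python
from typing import Callable, Dict, List

Sample = Dict[str, float]  # one time-stamped record from a SUT trace

def check_guarded_assertion(
    trace: List[Sample],
    guard: Callable[[Sample], bool],
    assertion: Callable[[Sample], bool],
) -> bool:
    """A guarded assertion passes if the assertion holds in every
    sample where the guard holds; samples outside the guard are ignored."""
    return all(assertion(s) for s in trace if guard(s))

# Hypothetical trace: brake pedal and brake light sampled over time.
trace = [
    {"speed": 50.0, "brake_pedal": 0, "brake_light": 0},
    {"speed": 45.0, "brake_pedal": 1, "brake_light": 1},
    {"speed": 40.0, "brake_pedal": 1, "brake_light": 1},
]

# "When the brake pedal is pressed, the brake light shall be lit."
passed = check_guarded_assertion(
    trace,
    guard=lambda s: s["brake_pedal"] == 1,
    assertion=lambda s: s["brake_light"] == 1,
)
print(passed)  # True for this trace
```

    Because the check only reads a trace and never drives the SUT, it is independent of test stimuli and of other passive tests, which is what enables the parallel, off-line analysis the papers describe.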

  • 3.
    Flemström, Daniel (RISE, Swedish ICT, SICS, Software and Systems Engineering Laboratory); Gustafsson, Thomas (Scania CV AB, Sweden); Kobetski, Avenir (RISE, Swedish ICT, SICS, Software and Systems Engineering Laboratory); Sundmark, Daniel (Mälardalen University, Sweden).
    A Research Roadmap for Test Design in Automated Integration Testing of Vehicular Systems. 2016. In: FASSI 2016: The Second International Conference on Fundamentals and Advances in Software Systems Integration, International Academy, Research and Industry Association (IARIA), 2016, 9, p. 18-23. Conference paper (Refereed).
    Abstract [en]

    An increasing share of the innovations emerging in the vehicular industry are implemented in software. Consequently, vehicular electrical systems are becoming more and more complex, with an increasing number of functions, computational nodes and complex sensors, e.g., cameras and radars. The introduction of autonomous functional components, such as advanced driver assistance systems, highlights the foreseeable complexity of different parts of the system interacting with each other and with the human driver. It is of utmost importance that the testing effort can scale with this increasing complexity. In this paper, we review the challenges that we are facing in integration testing of complex embedded vehicular systems. Further, based on these challenges, we outline a set of research directions for semi-automated or automated test design and execution in integration testing of vehicular systems. While the discussion is exemplified with our hands-on experience of the automotive industry, many of the concepts can be generalised to a broader setting of complex embedded systems.

  • 4.
    Flemström, Daniel (Mälardalen University, Sweden); Potena, Pasqualina (RISE - Research Institutes of Sweden, ICT, SICS); Sundmark, Daniel (Mälardalen University, Sweden); Afzal, Wasif (Mälardalen University, Sweden); Bohlin, Markus (RISE - Research Institutes of Sweden, ICT, SICS).
    Similarity-based prioritization of test case automation. 2018. In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 26, no 4, p. 1421-1449. Article in journal (Refereed).
    Abstract [en]

    The importance of efficient software testing procedures is driven by ever-increasing system complexity as well as global competition. In the particular case of manual test cases at the system integration level, where thousands of test cases may be executed before release, time must be well spent in order to test the system as completely and as efficiently as possible. Automating a subset of the manual test cases, i.e., translating the manual instructions to automatically executable code, is one way of decreasing the test effort. It is further common that test cases exhibit similarities, which can be exploited through reuse when automating a test suite. In this paper, we investigate the potential for reducing test effort by ordering the test cases before such automation, given that we can reuse already automated parts of test cases. In our analysis, we investigate several approaches for prioritization in a case study at a large Swedish vehicular manufacturer. The study analyzes the effects with respect to test effort on four projects with a total of 3919 integration test cases constituting 35,180 test steps, written in natural language. The results show that for the four projects considered, the difference in expected manual effort between the best and the worst order found is on average 12 percentage points. The results also show that our proposed prioritization method is nearly as good as more resource-demanding meta-heuristic approaches, at a fraction of the computational time. Based on our results, we conclude that the order of automation is important when the set of test cases contains similar steps (instructions) that cannot be removed, but are possible to reuse. More precisely, the order is important with respect to how quickly the manual test execution effort decreases for a set of test cases that are being automated.
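    The core of similarity-based ordering — automate first the test cases whose steps will be reused most, so manual effort drops quickly — can be sketched with a simple greedy heuristic. This is an illustrative sketch under the assumption that test cases are modelled as sets of reusable step identifiers; it is not the paper's exact method, and the step names are invented.

```python
from typing import List, Set

def prioritize_by_reuse(test_cases: List[Set[str]]) -> List[int]:
    """Greedy sketch: repeatedly automate the test case that needs the
    fewest not-yet-automated steps, so step reuse accumulates quickly."""
    automated: Set[str] = set()
    remaining = set(range(len(test_cases)))
    order: List[int] = []
    while remaining:
        # Cost of a test case = number of steps still to be automated.
        nxt = min(remaining, key=lambda i: len(test_cases[i] - automated))
        order.append(nxt)
        automated |= test_cases[nxt]
        remaining.remove(nxt)
    return order

# Hypothetical test cases as sets of natural-language step identifiers.
suite = [
    {"start_engine", "set_speed_50", "check_abs"},
    {"start_engine", "set_speed_50"},
    {"start_engine", "open_door", "check_alarm", "close_door"},
]
print(prioritize_by_reuse(suite))  # [1, 0, 2]
```

    Test case 1 is automated first (fewest new steps), after which test case 0 needs only one new step — illustrating how ordering changes how quickly the remaining manual effort shrinks.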

  • 5.
    Flemström, Daniel (RISE, Swedish ICT, SICS, Software and Systems Engineering Laboratory); Sundmark, Daniel (Mälardalen University, Sweden); Afzal, Wasif (Mälardalen University, Sweden).
    Vertical Test Reuse for Embedded Systems: A Systematic Mapping Study. 2015. In: 2015 41st Euromicro Conference on Software Engineering and Advanced Applications, Conference Publishing Services, 2015, 11, p. 317-324, article id 7302469. Conference paper (Refereed).
    Abstract [en]

    Vertical test reuse refers to the reuse of test cases or other test artifacts over different integration levels in the software or system engineering process. Vertical test reuse has previously been proposed for reducing test effort and improving test effectiveness, particularly for embedded system development. The goal of this study is to provide an overview of the state of the art in the field of vertical test reuse for embedded system development. For this purpose, a systematic mapping study has been performed, identifying 11 papers on vertical test reuse for embedded systems. The primary result from the mapping is a classification of published work on vertical test reuse in the embedded system domain, covering motivations for reuse, reuse techniques, test levels and reusable test artifacts considered, and to what extent the effects of reuse have been evaluated.
