Saadatmand, Mehrdad, PhD (ORCID iD: orcid.org/0000-0002-1512-0844)
Alternative names
Publications (10 of 55)
Kiss, A., Marín, B. & Saadatmand, M. (2023). 13th Workshop on Automating Test Case Design, Selection and Evaluation (A-TEST 2022) Co-Located with ESEC/FSE Conference. Software Engineering Notes: an Informal Newsletter of The Specia, 48(1), 76-78
13th Workshop on Automating Test Case Design, Selection and Evaluation (A-TEST 2022) Co-Located with ESEC/FSE Conference
2023 (English) In: Software Engineering Notes: an Informal Newsletter of The Specia, ISSN 0163-5948, E-ISSN 1943-5843, Vol. 48, no. 1, p. 76-78. Article in journal (Refereed) Published
Abstract [en]

The Workshop on Automating Test Case Design, Selection and Evaluation (A-TEST) has provided a venue for researchers and industry members alike to exchange and discuss trending views, ideas, state of the art, work in progress, and scientific results on automated testing. It has run 13 editions since 2009. The 13th edition of A-TEST was held as an in-person workshop in Singapore on 17-18 November 2022, co-located with the ESEC/FSE 2022 conference.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2023
HSV category
Identifiers
urn:nbn:se:ri:diva-65768 (URN) 10.1145/3573074.3573093 (DOI)
Available from: 2023-08-14 Created: 2023-08-14 Last updated: 2023-10-04 Bibliographically approved
Helali Moghadam, M., Borg, M., Saadatmand, M., Mousavirad, S., Bohlin, M. & Lisper, B. (2023). Machine learning testing in an ADAS case study using simulation-integrated bio-inspired search-based testing. Journal of Software: Evolution and Process
Machine learning testing in an ADAS case study using simulation-integrated bio-inspired search-based testing
2023 (English) In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481. Article in journal (Refereed) Epub ahead of print
Abstract [en]

This paper presents an extended version of Deeper, a search-based simulation-integrated test solution that generates failure-revealing test scenarios for testing a deep neural network-based lane-keeping system. In the newly proposed version, we utilize a new set of bio-inspired search algorithms, genetic algorithm (GA), (μ + λ) and (μ, λ) evolution strategies (ES), and particle swarm optimization (PSO), that leverage a quality population seed and domain-specific crossover and mutation operations tailored for the presentation model used for modeling the test scenarios. In order to demonstrate the capabilities of the new test generators within Deeper, we carry out an empirical evaluation and comparison with regard to the results of five participating tools in the cyber-physical systems testing competition at SBST 2021. Our evaluation shows the newly proposed test generators in Deeper not only represent a considerable improvement on the previous version but also prove to be effective and efficient in provoking a considerable number of diverse failure-revealing test scenarios for testing an ML-driven lane-keeping system. They can trigger several failures while promoting test scenario diversity, under a limited test time budget, high target failure severity, and strict speed limit constraints. © 2023 The Authors.
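As a rough illustration of the search-based idea described in this abstract (not the authors' Deeper implementation), the Python sketch below evolves road test scenarios, encoded as control points, with a plain genetic algorithm; the fitness function is a made-up stand-in for the simulator feedback (e.g., lateral deviation of the lane-keeping model) that a tool like Deeper would use, and all names and constants are illustrative.

# Minimal GA sketch over road scenarios; fitness is a placeholder for simulator feedback.
import random

ROAD_POINTS = 6          # number of 2-D control points per scenario
POP_SIZE = 20
GENERATIONS = 30
MUTATION_STD = 5.0

def random_scenario():
    # A scenario is a list of (x, y) road control points.
    return [(random.uniform(0, 200), random.uniform(0, 200)) for _ in range(ROAD_POINTS)]

def fitness(scenario):
    # Placeholder for simulated lateral deviation: here we simply reward sharp turns
    # (sum of cross products of consecutive segments) as a proxy for challenging geometry.
    total_turn = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(scenario, scenario[1:], scenario[2:]):
        total_turn += abs((x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1))
    return total_turn

def crossover(a, b):
    cut = random.randint(1, ROAD_POINTS - 1)
    return a[:cut] + b[cut:]

def mutate(scenario):
    return [(x + random.gauss(0, MUTATION_STD), y + random.gauss(0, MUTATION_STD))
            for x, y in scenario]

population = [random_scenario() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("most challenging scenario found:", best)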

Place, publisher, year, edition, pages
John Wiley and Sons Ltd, 2023
Keywords
advanced driver assistance systems, deep learning, evolutionary computation, lane-keeping system, machine learning testing, search-based testing, Automobile drivers, Biomimetics, Budget control, Deep neural networks, Embedded systems, Genetic algorithms, Learning systems, Particle swarm optimization (PSO), Software testing, Case-studies, Lane keeping, Machine-learning, Software Evolution, Software process, Test scenario
HSV category
Identifiers
urn:nbn:se:ri:diva-65687 (URN) 10.1002/smr.2591 (DOI) 2-s2.0-85163167144 (Scopus ID)
Note

Correspondence Address: M.H. Moghadam; Smart Industrial Automation, RISE Research Institutes of Sweden, Västerås, Stora Gatan 36, 722 12, Sweden

This work has been funded by Vinnova through the ITEA3 European IVVES (https://itea3.org/project/ivves.html) and H2020-ECSEL European AIDOaRT (https://www.aidoart.eu/) and InSecTT (https://www.insectt.eu/) projects. Furthermore, the project received partial financial support from the SMILE III project financed by Vinnova, FFI, Fordonsstrategisk forskning och innovation, under grant number 2019-05871.

Available from: 2023-08-10 Created: 2023-08-10 Last updated: 2023-10-04 Bibliographically approved
Abbas, M., Hamayouni, A., Helali Moghadam, M., Saadatmand, M. & Strandberg, P. E. (2023). Making Sense of Failure Logs in an Industrial DevOps Environment. In: Advances in Intelligent Systems and Computing book series (AISC, volume 1445): 20th International Conference on Information Technology New Generations. Paper presented at 20th International Conference on Information Technology New Generations (pp. 217-226). Springer International Publishing, 1445
Making Sense of Failure Logs in an Industrial DevOps Environment
2023 (English) In: Advances in Intelligent Systems and Computing book series (AISC, volume 1445): 20th International Conference on Information Technology New Generations, Springer International Publishing, 2023, Vol. 1445, p. 217-226. Conference paper, Published paper (Refereed)
Abstract [en]

Processing and reviewing nightly test execution failure logs for large industrial systems is a tedious activity. Furthermore, multiple failures might share one root/common cause during test execution sessions, and the review might therefore require redundant effort. This paper presents the LogGrouper approach for automated grouping of failure logs to aid root/common cause analysis and to enable processing each log group as a batch. LogGrouper uses state-of-the-art natural language processing and clustering approaches to achieve meaningful log grouping. The approach is evaluated in an industrial setting in both a qualitative and quantitative manner. Results show that LogGrouper produces good-quality groupings in terms of two clustering-quality metrics (Silhouette Coefficient and Calinski-Harabasz Index). The qualitative evaluation shows that experts perceive the groups as useful, and the groups are seen as an initial pointer for root cause analysis and failure assignment.
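A minimal sketch of the grouping-and-scoring idea named in this abstract (not the LogGrouper tool itself): failure logs are vectorized with TF-IDF, clustered, and the grouping is scored with the two metrics the paper uses. The sample logs and the choice of two clusters are invented for illustration.

# Cluster failure logs and score the grouping with Silhouette and Calinski-Harabasz.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score

failure_logs = [
    "timeout waiting for response from signaling unit",
    "timeout waiting for response from power unit",
    "assertion failed: expected state RUNNING but was STOPPED",
    "assertion failed: expected state IDLE but was STOPPED",
    "timeout waiting for heartbeat from signaling unit",
    "assertion failed: expected 5 messages but received 3",
]

X = TfidfVectorizer().fit_transform(failure_logs).toarray()
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print("groups:", labels)
print("silhouette:", silhouette_score(X, labels))
print("calinski-harabasz:", calinski_harabasz_score(X, labels))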

Place, publisher, year, edition, pages
Springer International Publishing, 2023
HSV category
Identifiers
urn:nbn:se:ri:diva-67432 (URN)
Conference
20th International Conference on Information Technology New Generations
Available from: 2023-09-28 Created: 2023-09-28 Last updated: 2023-10-04 Bibliographically approved
Abbas, M., Ferrari, A., Shatnawi, A., Enoiu, E., Saadatmand, M. & Sundmark, D. (2023). On the relationship between similar requirements and similar software: A case study in the railway domain. Requirements Engineering, 28, 23-47
On the relationship between similar requirements and similar software: A case study in the railway domain
2023 (English) In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 28, p. 23-47. Article in journal (Refereed) Epub ahead of print
Abstract [en]

Recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a stakeholder proposes a new requirement, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements, and in turn, identify previously developed code. Several NLP approaches for similarity computation between requirements are available. However, there is little empirical evidence on their effectiveness for code retrieval. This study compares different NLP approaches, from lexical ones to semantic, deep-learning techniques, and correlates the similarity among requirements with the similarity of their associated software. The evaluation is conducted on real-world requirements from two industrial projects from a railway company. Specifically, the most similar pairs of requirements across two industrial projects are automatically identified using six language models. Then, the trace links between requirements and software are used to identify the software pairs associated with each requirements pair. The software similarity between pairs is then automatically computed with JPLag. Finally, the correlation between requirements similarity and software similarity is evaluated to see which language model shows the highest correlation and is thus more appropriate for code retrieval. In addition, we perform a focus group with members of the company to collect qualitative data. Results show a moderately positive correlation between requirements similarity and software similarity, with the pre-trained deep learning-based BERT language model with preprocessing outperforming the other models. Practitioners confirm that requirements similarity is generally regarded as a proxy for software similarity. However, they also highlight that additional aspects come into play when deciding on software reuse, e.g., domain/project knowledge, information coming from test cases, and trace links. Our work is among the first to explore the relationship between requirements and software similarity from a quantitative and qualitative standpoint. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and change impact analysis.
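The correlation step described here can be illustrated with a small, self-contained sketch (not the authors' pipeline): given one similarity score per requirements pair (e.g., from a BERT-based model) and one for the software traced to that pair (e.g., from JPLag), a rank correlation indicates how well the former predicts the latter. The numbers below are invented.

# Correlate requirements-pair similarity with traced-software similarity.
from scipy.stats import spearmanr, pearsonr

requirements_similarity = [0.91, 0.85, 0.72, 0.60, 0.55, 0.40, 0.33]
software_similarity     = [0.80, 0.75, 0.70, 0.52, 0.58, 0.30, 0.25]

rho, p_rho = spearmanr(requirements_similarity, software_similarity)
r, p_r = pearsonr(requirements_similarity, software_similarity)
print(f"Spearman rho={rho:.2f} (p={p_rho:.3f}), Pearson r={r:.2f} (p={p_r:.3f})")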

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2023
Keywords
Correlation, Language models, Perception of similarity, Requirements similarity, Software similarity, Codes (symbols), Computer software reusability, Deep learning, Railroads, Recommender systems, Semantics, Software testing, Case-studies, Code retrievals, Industrial programs, Language model, Processing approach, Requirement similarities, Similarity computation, Software similarities, Natural language processing systems
HSV category
Identifiers
urn:nbn:se:ri:diva-58532 (URN) 10.1007/s00766-021-00370-4 (DOI) 2-s2.0-85123067513 (Scopus ID)
Note

Funding: This work has been supported by and received funding from the ITEA3 XIVT and the KK Foundation's ARRAY project.

Available from: 2022-02-17 Created: 2022-02-17 Last updated: 2023-10-05 Bibliographically approved
Ferrari, F. C., Durelli, V. H. S., Andler, S. F., Offutt, J., Saadatmand, M. & Müllner, N. (2023). On transforming model-based tests into code: A systematic literature review. Software testing, verification & reliability
On transforming model-based tests into code: A systematic literature review
2023 (English) In: Software testing, verification & reliability, ISSN 0960-0833, E-ISSN 1099-1689. Article in journal (Refereed) Epub ahead of print
Abstract [en]

Model-based test design is increasingly being applied in practice and studied in research. Model-based testing (MBT) exploits abstract models of the software behaviour to generate abstract tests, which are then transformed into concrete tests ready to run on the code. Given that abstract tests are designed to cover models but are run on code (after transformation), the effectiveness of MBT is dependent on whether model coverage also ensures coverage of key functional code. In this article, we investigate how MBT approaches generate tests from model specifications and how the coverage of tests designed strictly based on the model translates to code coverage. We used snowballing to conduct a systematic literature review. We started with three primary studies, which we refer to as the initial seeds. At the end of our search iterations, we analysed 30 studies that helped answer our research questions. More specifically, this article characterizes how test sets generated at the model level are mapped and applied to the source code level, discusses how tests are generated from the model specifications, analyses how the test coverage of models relates to the test coverage of the code when the same test set is executed and identifies the technologies and software development tasks that are on focus in the selected studies. Finally, we identify common characteristics and limitations that impact the research and practice of MBT: (i) some studies did not fully describe how tools transform abstract tests into concrete tests, (ii) some studies overlooked the computational cost of model-based approaches and (iii) some studies found evidence that bears out a robust correlation between decision coverage at the model level and branch coverage at the code level. We also noted that most primary studies omitted essential details about the experiments.
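A toy illustration of the abstract-to-concrete transformation this review examines (not taken from any of the reviewed studies): an abstract test is a sequence of model-level actions, and an adaptation layer maps each action onto concrete calls on the system under test. The CoffeeMachine class and the action names are hypothetical.

# Map an abstract (model-level) test onto concrete code via an adapter.
class CoffeeMachine:                  # stand-in system under test
    def __init__(self): self.credit = 0
    def insert_coin(self): self.credit += 1
    def brew(self):
        assert self.credit > 0, "no credit"
        self.credit -= 1

abstract_test = ["insertCoin", "insertCoin", "brew", "brew"]   # covers a model path

def concretize(action, sut):
    adapter = {"insertCoin": sut.insert_coin, "brew": sut.brew}  # model action -> code
    return adapter[action]

sut = CoffeeMachine()
for step in abstract_test:
    concretize(step, sut)()           # executing the concrete test exercises the code
print("abstract test passed on the concrete implementation")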

HSV category
Identifiers
urn:nbn:se:ri:diva-67498 (URN) 10.1002/stvr.1860 (DOI)
Note

Fabiano Ferrari was partly supported by the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) - Brasil, grant #2016/21251-0 and CNPq - Brasil, grants #306310/2016-3 and #312086/2021-0. Sten Andler was partly supported by KKS (The Knowledge Foundation), by project 20130085, Testing of Critical System Characteristics (TOCSYC). Mehrdad Saadatmand was partly funded by the SmartDelta Project (more information available at https://smartdelta.org/).

Available from: 2023-10-07 Created: 2023-10-07 Last updated: 2023-10-10 Bibliographically approved
Bashir, S., Abbas, M., Saadatmand, M., Enoiu, E., Bohlin, M. & Lindberg, P. (2023). Requirement or Not, That is the Question: A Case from the Railway Industry. In: Lecture Notes in Computer Science, Volume 13975. Paper presented at the 29th International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2023), Barcelona, Spain, 17-20 April 2023 (pp. 105-121). Springer Science and Business Media Deutschland GmbH
Requirement or Not, That is the Question: A Case from the Railway Industry
2023 (English) In: Lecture Notes in Computer Science, Volume 13975, Springer Science and Business Media Deutschland GmbH, 2023, p. 105-121. Conference paper, Published paper (Refereed)
Abstract [en]

Requirements in tender documents are often mixed with other supporting information. Identifying requirements in large tender documents could aid the bidding process and help estimate the risk associated with the project. Manual identification of requirements in large documents is a resource-intensive activity that is prone to human error and limits scalability. This study compares various state-of-the-art approaches for requirements identification in an industrial context. For generalizability, we also present an evaluation on a real-world public dataset. We formulate the requirement identification problem as a binary text classification problem. Various state-of-the-art classifiers based on traditional machine learning, deep learning, and few-shot learning are evaluated for requirements identification based on accuracy, precision, recall, and F1 score. Results from the evaluation show that the transformer-based BERT classifier performs the best, with an average F1 score of 0.82 and 0.87 on industrial and public datasets, respectively. Our results also confirm that few-shot classifiers can achieve comparable results, with an average F1 score of 0.76, using significantly fewer samples, i.e., only 20% of the data. There is little empirical evidence on the use of large language models and few-shot classifiers for requirements identification. This paper fills this gap by presenting an industrial empirical evaluation of the state-of-the-art approaches for requirements identification in large tender documents. We also provide a running tool and a replication package for further experimentation to support future research in this area. © 2023, The Author(s)
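A minimal sketch of the task formulation described above, i.e., requirement vs. non-requirement as binary text classification. A TF-IDF plus logistic-regression baseline stands in for the BERT and few-shot classifiers the paper actually evaluates; the tiny labeled sample is invented.

# Binary text classification: requirement (1) vs. supporting information (0).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

sentences = [
    "The system shall log every failed login attempt.",           # requirement
    "The supplier shall provide training for maintenance staff.", # requirement
    "This chapter gives an overview of the tender structure.",    # not a requirement
    "Figure 3 shows the station layout.",                         # not a requirement
    "The onboard unit shall report its position every second.",   # requirement
    "For background on ERTMS, see the references below.",         # not a requirement
]
labels = [1, 1, 0, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(sentences, labels)
pred = clf.predict(sentences)
print("F1 on the toy sample:", f1_score(labels, pred))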

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2023
Keywords
NLP, Requirements classification, Requirements identification, tender documents, Deep learning, Information retrieval systems, Natural language processing systems, Requirements engineering, Risk perception, Text processing, Bidding process, F1 scores, Human errors, Manual identification, Public dataset, Railway industry, Requirement identification, Requirements classifications, State-of-the-art approach, Classification (of information)
HSV category
Identifiers
urn:nbn:se:ri:diva-64397 (URN) 10.1007/978-3-031-29786-1_8 (DOI) 2-s2.0-85152587069 (Scopus ID) 9783031297854 (ISBN)
Conference
29th International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2023), Barcelona, Spain, 17-20 April 2023
Note

Correspondence Address: Abbas, M., RISE Research Institutes of Sweden, Sweden; email: muhammad.abbas@ri.se. Funding: This work is partially funded by the AIDOaRt (KDT) and SmartDelta (ITEA) projects.

Available from: 2023-05-08 Created: 2023-05-08 Last updated: 2023-11-03 Bibliographically approved
Bashir, S., Abbas, M., Ferrari, A., Saadatmand, M. & Lindberg, P. (2023). Requirements Classification for Smart Allocation: A Case Study in the Railway Industry. In: 31st IEEE International Requirements Engineering Conference. Paper presented at the 2023 IEEE 31st International Requirements Engineering Conference (RE), Hannover, Germany. IEEE
Requirements Classification for Smart Allocation: A Case Study in the Railway Industry
2023 (English) In: 31st IEEE International Requirements Engineering Conference, Hannover, Germany: IEEE, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Allocation of requirements to different teams is a typical preliminary task in large-scale system development projects. This critical activity is often performed manually and can benefit from automated requirements classification techniques. To date, limited evidence is available about the effectiveness of existing machine learning (ML) approaches for requirements classification in industrial cases. This paper aims to fill this gap by evaluating state-of-the-art language models and ML algorithms for classification in the railway industry. Since the interpretation of the results of ML systems is particularly relevant in the studied context, we also provide an information augmentation approach to complement the output of the ML-based classification. Our results show that the BERT uncased language model with the softmax classifier can allocate the requirements to different teams with a 76% F1 score when considering requirements allocation to the most frequent teams. Information augmentation provides potentially useful indications in 76% of the cases. The results confirm that currently available techniques can be applied to real-world cases, thus enabling the first step for technology transfer of automated requirements classification. The study can be useful to practitioners operating in requirements-centered contexts such as railways, where accurate requirements classification becomes crucial for better allocation of requirements to various teams.
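A minimal sketch of the allocation idea in this abstract (not the authors' system): a probabilistic text classifier, standing in for BERT with a softmax layer, assigns a new requirement to a team, and the most similar historical requirement is returned as a simple, human-checkable augmentation hint. The teams, requirements, and scores are invented.

# Allocate a new requirement to a team and augment the prediction with a similar example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

history = [
    ("The braking curve shall be recalculated every 100 ms.", "onboard"),
    ("The driver display shall show the permitted speed.", "hmi"),
    ("The interlocking shall release the route after passage.", "wayside"),
    ("The onboard unit shall supervise the ceiling speed.", "onboard"),
]
texts, teams = zip(*history)

vec = TfidfVectorizer().fit(texts)
clf = LogisticRegression().fit(vec.transform(texts), teams)

new_req = "The onboard unit shall apply the service brake on overspeed."
proba = clf.predict_proba(vec.transform([new_req]))[0]
print(dict(zip(clf.classes_, proba.round(2))))                 # per-team probabilities

sims = cosine_similarity(vec.transform([new_req]), vec.transform(texts))[0]
print("most similar past requirement:", texts[sims.argmax()])  # augmentation hint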

Place, publisher, year, edition, pages
Hannover, Germany: IEEE, 2023
HSV category
Identifiers
urn:nbn:se:ri:diva-67433 (URN) 10.1109/RE57278.2023.00028 (DOI)
Conference
2023 IEEE 31st International Requirements Engineering Conference (RE)
Available from: 2023-09-28 Created: 2023-09-28 Last updated: 2023-11-03 Bibliographically approved
Saadatmand, M., Abbas, M., Enoiu, E. P., Schlingloff, B.-H., Afzal, W., Dornauer, B. & Felderer, M. (2023). SmartDelta project: Automated quality assurance and optimization across product versions and variants. Microprocessors and microsystems, Article ID 104967.
SmartDelta project: Automated quality assurance and optimization across product versions and variants
2023 (English) In: Microprocessors and microsystems, ISSN 0141-9331, E-ISSN 1872-9436, article id 104967. Article in journal (Refereed) Published
Abstract [en]

Software systems are often built in increments with additional features or enhancements on top of existing products. This incremental development may result in the deterioration of certain quality aspects. In other words, the software can be considered an evolving entity emanating different quality characteristics as it gets updated over time with new features or deployed in different operational environments. Approaching software development with this mindset and awareness regarding quality evolution over time can be a key factor for the long-term success of a company in today’s highly competitive market of industrial software-intensive products. Therefore, it is important to be able to accurately analyze and determine the quality implications of each change and increment to a software system. To address this challenge, the multinational SmartDelta project develops automated solutions for the quality assessment of product deltas in a continuous engineering environment. The project provides smart analytics from development artifacts and system executions, offering insights into quality degradation or improvements across different product versions, and providing recommendations for the next builds. This paper presents the challenges in incremental software development tackled in the scope of the SmartDelta project, and the solutions that are produced and planned in the project, along with the industrial impact of the project for software-intensive industrial systems.

HSV category
Identifiers
urn:nbn:se:ri:diva-67581 (URN) 10.1016/j.micpro.2023.104967 (DOI)
Research funder
Vinnova, 2021-04730
Note

This work has been supported by and done in the scope of the ITEA3 SmartDelta project, which has been funded by the national funding authorities of the participating countries: https://itea4.org/project/smartdelta.html. Vinnova: 2021-04730

Available from: 2023-11-01 Created: 2023-11-01 Last updated: 2023-11-03 Bibliographically approved
Helali Moghadam, M., Saadatmand, M., Borg, M., Bohlin, M. & Lisper, B. (2022). An autonomous performance testing framework using self-adaptive fuzzy reinforcement learning. Software quality journal, 127-159
An autonomous performance testing framework using self-adaptive fuzzy reinforcement learning
2022 (English) In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, p. 127-159. Article in journal (Refereed) Published
Abstract [en]

Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to tackle automated generation of performance test cases mainly involve using source code or system model analysis or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective in a testing process instead could be learned by the testing system, then test automation without advanced performance models could be possible. Furthermore, the learned policy could later be reused for similar software systems under test, thus leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated performance testing setup, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process and performs adaptively without access to source code and performance models. © 2021, The Author(s).
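A minimal sketch of the reinforcement-learning idea behind the framework (not SaFReL itself, which uses fuzzy state abstraction and transfer learning): a tabular Q-learning agent learns which resource-restriction action drives a simulated system under test toward a performance breaking point. The response-time model, threshold, and hyperparameters are invented.

# Tabular Q-learning toward a performance breaking point on a simulated SUT.
import random

ACTIONS = ["reduce_cpu", "reduce_memory"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
THRESHOLD = 2.0                      # response time (s) regarded as "breaking"

def simulate_response_time(cpu, mem):
    # Stand-in for executing a performance test against the SUT.
    return 0.2 + 1.0 / max(cpu, 0.1) + 0.5 / max(mem, 0.1)

q = {}                               # (state, action) -> value
for episode in range(200):
    cpu, mem, steps = 4.0, 4.0, 0
    while simulate_response_time(cpu, mem) < THRESHOLD and steps < 20:
        state = (round(cpu, 1), round(mem, 1))
        action = (random.choice(ACTIONS) if random.random() < EPSILON
                  else max(ACTIONS, key=lambda a: q.get((state, a), 0.0)))
        cpu, mem = (cpu - 0.5, mem) if action == "reduce_cpu" else (cpu, mem - 0.5)
        reward = simulate_response_time(cpu, mem)   # higher latency = closer to the goal
        nxt = (round(cpu, 1), round(mem, 1))
        best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
        q[(state, action)] = (1 - ALPHA) * q.get((state, action), 0.0) + \
                             ALPHA * (reward + GAMMA * best_next)
        steps += 1

print("learned Q-entries:", len(q))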

Place, publisher, year, edition, pages
Springer, 2022
Keywords
Autonomous testing, Performance testing, Reinforcement learning, Stress testing, Test case generation, Automation, Computer programming languages, Testing, Transfer learning, Automated generation, Optimal performance, Performance Model, Performance testing framework, Performance tests, Simulated performance, Software systems, Software testing
HSV category
Identifiers
urn:nbn:se:ri:diva-52628 (URN) 10.1007/s11219-020-09532-z (DOI) 2-s2.0-85102446552 (Scopus ID)
Note

Funding: This work has been supported by and has received partial funding from the TESTOMAT, XIVT, IVVES, and MegaM@Rt2 European projects.

Available from: 2021-03-25 Created: 2021-03-25 Last updated: 2023-10-04 Bibliographically approved
Saadatmand, M., Truscan, D. & Enoiu, E. (2022). Message from the ITEQS 2022 Workshop Chairs. Paper presented at 14th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2022, Virtual, Online, 4 April 2022 through 13 April 2022. Proceedings - 2022 IEEE 14th International Conference on Software Testing
Message from the ITEQS 2022 Workshop Chairs
2022 (English) In: Proceedings - 2022 IEEE 14th International Conference on Software Testing. Article in journal, Editorial material (Other academic) Published
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2022
HSV category
Identifiers
urn:nbn:se:ri:diva-59871 (URN) 10.1109/ICSTW55395.2022.00006 (DOI) 2-s2.0-85133233202 (Scopus ID)
Conference
14th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2022, Virtual, Online, 4 April 2022 through 13 April 2022
Available from: 2022-08-01 Created: 2022-08-01 Last updated: 2023-10-04 Bibliographically approved
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-1512-0844