Publications (10 of 77)
Heyn, H. M., Habibullah, K. M., Knauss, E., Horkoff, J., Borg, M., Knauss, A. & Li, P. J. (2023). Automotive Perception Software Development: An Empirical Investigation into Data, Annotation, and Ecosystem Challenges. In: Proceedings - 2023 IEEE/ACM 2nd International Conference on AI Engineering - Software Engineering for AI, CAIN 2023: . Paper presented at 2nd IEEE/ACM International Conference on AI Engineering - Software Engineering for AI, CAIN 2023. Melbourne, Australia. 15 May 2023 through 16 May 2023 (pp. 13-24). Institute of Electrical and Electronics Engineers Inc.
Automotive Perception Software Development: An Empirical Investigation into Data, Annotation, and Ecosystem Challenges
2023 (English) In: Proceedings - 2023 IEEE/ACM 2nd International Conference on AI Engineering - Software Engineering for AI, CAIN 2023, Institute of Electrical and Electronics Engineers Inc., 2023, pp. 13-24. Conference paper, Published paper (Refereed)
Abstract [en]

Software that contains machine learning algorithms is an integral part of automotive perception, for example, in driving automation systems. The development of such software, specifically the training and validation of the machine learning components, requires large annotated datasets. An industry of data and annotation services has emerged to serve the development of such data-intensive automotive software components. Widespread difficulties in specifying data and annotation needs challenge collaborations between OEMs (Original Equipment Manufacturers) and their suppliers of software components, data, and annotations. This paper investigates why practitioners in the Swedish automotive industry find it difficult to arrive at clear specifications for data and annotations. The results from an interview study show that a lack of effective metrics for data quality aspects, ambiguities in the way of working, unclear definitions of annotation quality, and deficits in the business ecosystems are causes of the difficulty in deriving the specifications. We provide a list of recommendations that can mitigate challenges when deriving specifications, and we propose future research opportunities to overcome these challenges. Our work contributes to the ongoing research on accountability of machine learning as applied to complex software systems, especially for high-stakes applications such as automated driving.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023
Keywords
accountability, annotations, data, ecosystems, machine learning, requirements specification, Application programs, Automation, Automotive industry, Large dataset, Learning algorithms, Machine components, Software design, Annotation, Automotives, Data annotation, Empirical investigation, Machine learning algorithms, Machine-learning, Requirements specifications, Software-component, Specifications
Identifiers
urn:nbn:se:ri:diva-65685 (URN)10.1109/CAIN58948.2023.00011 (DOI)2-s2.0-85165140236 (Scopus ID)9798350301137 (ISBN)
Conference
2nd IEEE/ACM International Conference on AI Engineering - Software Engineering for AI, CAIN 2023. Melbourne, Australia. 15 May 2023 through 16 May 2023
Note

This project has received funding from Vinnova Sweden under the FFI program with grant agreement No 2021-02572 (precog), from the EU's Horizon 2020 research and innovation program under grant agreement No 957197 (vedliot), and from a Swedish Research Council (VR) project: Non-Functional Requirements for Machine Learning: Facilitating Continuous Quality Awareness (iNFoRM).

Available from: 2023-08-11 Created: 2023-08-11 Last updated: 2023-08-11 Bibliographically checked
Borg, M., Henriksson, J., Socha, K., Lennartsson, O., Sonnsjö Lönegren, E., Bui, T., . . . Helali Moghadam, M. (2023). Ergo, SMIRK is safe: a safety case for a machine learning component in a pedestrian automatic emergency brake system. Software quality journal
Ergo, SMIRK is safe: a safety case for a machine learning component in a pedestrian automatic emergency brake system
2023 (English) In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367. Article in journal (Refereed) Epub ahead of print
Abstract [en]

Integration of machine learning (ML) components in critical applications introduces novel challenges for software certification and verification. New safety standards and technical guidelines are under development to support the safety of ML-based systems, e.g., ISO 21448 SOTIF for the automotive domain and the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) framework. SOTIF and AMLAS provide high-level guidance but the details must be chiseled out for each specific case. We initiated a research project with the goal of demonstrating a complete safety case for an ML component in an open automotive system. This paper reports results from an industry-academia collaboration on safety assurance of SMIRK, an ML-based pedestrian automatic emergency braking demonstrator running in an industry-grade simulator. We demonstrate an application of AMLAS on SMIRK for a minimalistic operational design domain, i.e., we share a complete safety case for its integrated ML-based component. Finally, we report lessons learned and provide both SMIRK and the safety case under an open-source license for the research community to reuse. © 2023, The Author(s).

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Automotive demonstrator, Machine learning safety, Safety case, Safety standards
Identifiers
urn:nbn:se:ri:diva-64234 (URN)10.1007/s11219-022-09613-1 (DOI)2-s2.0-85149021250 (Scopus ID)
Note

Open access funding provided by RISE Research Institutes of Sweden. This work was carried out within the SMILE III project financed by Vinnova, FFI, Fordonsstrategisk forskning och innovation under the grant number 2019-05871 and partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by Knut and Alice Wallenberg Foundation.

Available from: 2023-03-20 Created: 2023-03-20 Last updated: 2024-01-10 Bibliographically checked
Vercammen, S., Demeyer, S. & Borg, M. (2023). F-ASTMut mutation optimisations techniques using the Clang front-end. Software Impacts, 16, Article ID 100500.
F-ASTMut mutation optimisations techniques using the Clang front-end
2023 (English) In: Software Impacts, ISSN 2665-9638, Vol. 16, article id 100500. Article in journal (Refereed) Published
Abstract [en]

F-ASTMut is an open-source mutation testing research tool for the C language family based on manipulation of the abstract syntax tree. The tool is designed for detailed measurements, analysis, and tuning of optimisation techniques. The goal of F-ASTMut is to analyse the speedups of optimisation techniques to ultimately enable mutation testing in industrial settings. Currently, F-ASTMut features four optimisation techniques: an exclusion scheme for invalid mutants, a test-suite-scope reduction to cover only relevant mutants, mutant schemata, and split-stream mutation testing. The implementation relies on the Clang front-end, allowing future work to extend or build on top of our solution. © 2023 The Author(s)

Place, publisher, year, edition, pages
Elsevier B.V., 2023
Keywords
AST, C, C++, Clang, Mutation testing tool
Identifiers
urn:nbn:se:ri:diva-64317 (URN)10.1016/j.simpa.2023.100500 (DOI)2-s2.0-85151320243 (Scopus ID)
Note

This work is supported financially by the Research Foundation – Flanders (FWO), a public funding organisation in Belgium. There is no direct or indirect industrial support for the research reported here.

Available from: 2023-05-05 Created: 2023-05-05 Last updated: 2023-05-05 Bibliographically checked
Helali Moghadam, M., Borg, M., Saadatmand, M., Mousavirad, S., Bohlin, M. & Lisper, B. (2023). Machine learning testing in an ADAS case study using simulation-integrated bio-inspired search-based testing. Journal of Software: Evolution and Process
Machine learning testing in an ADAS case study using simulation-integrated bio-inspired search-based testing
2023 (English) In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481. Article in journal (Refereed) Epub ahead of print
Abstract [en]

This paper presents an extended version of Deeper, a search-based simulation-integrated test solution that generates failure-revealing test scenarios for testing a deep neural network-based lane-keeping system. In the newly proposed version, we utilize a new set of bio-inspired search algorithms, genetic algorithm (GA), (μ + λ) and (μ, λ) evolution strategies (ES), and particle swarm optimization (PSO), that leverage a quality population seed and domain-specific crossover and mutation operations tailored for the presentation model used for modeling the test scenarios. In order to demonstrate the capabilities of the new test generators within Deeper, we carry out an empirical evaluation and comparison with regard to the results of five participating tools in the cyber-physical systems testing competition at SBST 2021. Our evaluation shows the newly proposed test generators in Deeper not only represent a considerable improvement on the previous version but also prove to be effective and efficient in provoking a considerable number of diverse failure-revealing test scenarios for testing an ML-driven lane-keeping system. They can trigger several failures while promoting test scenario diversity, under a limited test time budget, high target failure severity, and strict speed limit constraints. © 2023 The Authors.
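The search-based idea in this abstract can be illustrated at toy scale. The sketch below is a minimal genetic algorithm, not the Deeper tool itself: the scenario encoding (a list of road-curvature values) and the fitness surrogate (assuming sharp consecutive curvature changes are harder for a lane-keeper) are invented for the example.

```python
import random

# Toy search-based test generation: evolve road scenarios that maximize
# a hypothetical "difficulty" fitness for a lane-keeping system.
random.seed(0)
GENES, POP, GENERATIONS = 8, 20, 30

def fitness(scenario):
    # Invented surrogate: sum of consecutive curvature changes.
    return sum(abs(a - b) for a, b in zip(scenario, scenario[1:]))

def crossover(p1, p2):
    cut = random.randrange(1, GENES)          # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(scenario, rate=0.2):
    # Each gene is re-drawn with probability `rate`.
    return [random.uniform(-1, 1) if random.random() < rate else g
            for g in scenario]

def evolve():
    pop = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)   # elitist selection
        elite = pop[: POP // 2]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(POP - len(elite))]
    return max(pop, key=fitness)

best_scenario = evolve()
```

The actual tools add domain-specific operators and a quality seed population, but the select-recombine-mutate loop is the common core.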

Place, publisher, year, edition, pages
John Wiley and Sons Ltd, 2023
Keywords
advanced driver assistance systems, deep learning, evolutionary computation, lane-keeping system, machine learning testing, search-based testing, Automobile drivers, Biomimetics, Budget control, Deep neural networks, Embedded systems, Genetic algorithms, Learning systems, Particle swarm optimization (PSO), Software testing, Case-studies, Lane keeping, Machine-learning, Software Evolution, Software process, Test scenario
Identifiers
urn:nbn:se:ri:diva-65687 (URN)10.1002/smr.2591 (DOI)2-s2.0-85163167144 (Scopus ID)
Note

Correspondence Address: M.H. Moghadam; Smart Industrial Automation, RISE Research Institutes of Sweden, Västerås, Stora Gatan 36, 722 12, Sweden

This work has been funded by Vinnova through the ITEA3 European IVVES (https://itea3.org/project/ivves.html) and H2020-ECSEL European AIDOaRT (https://www.aidoart.eu/) and InSecTT (https://www.insectt.eu/) projects. Furthermore, the project received partial financial support from the SMILE III project financed by Vinnova, FFI, Fordonsstrategisk forskning och innovation under the grant number 2019-05871.

Available from: 2023-08-10 Created: 2023-08-10 Last updated: 2023-10-04 Bibliographically checked
Vercammen, S., Demeyer, S., Borg, M., Pettersson, N. & Hedin, G. (2023). Mutation testing optimisations using the Clang front-end. Software testing, verification & reliability
Mutation testing optimisations using the Clang front-end
2023 (English) In: Software testing, verification & reliability, ISSN 0960-0833, E-ISSN 1099-1689. Article in journal (Refereed) Epub ahead of print
Abstract [en]

Mutation testing is the state-of-the-art technique for assessing the fault detection capacity of a test suite. Unfortunately, a full mutation analysis is often prohibitively expensive. The CppCheck project, for instance, demands a build time of 5.8 min and a test execution time of 17 s on our desktop computer. An unoptimised mutation analysis for 55,000 generated mutants took 11.8 days in total, of which 4.3 days were spent on (re)compiling the project. In this paper, we present a feasibility study, investigating how a number of optimisation strategies can be implemented based on the Clang front-end. These optimisation strategies make it possible to eliminate the compilation and execution overhead in order to support efficient mutation testing for the C language family. We provide a proof-of-concept tool that achieves a speedup of between 2× and 30×. We make a detailed analysis of the speedup induced by the optimisations, elaborate on the lessons learned, and point out avenues for further improvements.
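The per-mutant recompilation cost quantified above is what optimisations such as mutant schemata attack: all mutants are baked into one artifact and selected at run time, so the system under test is built once rather than once per mutant. A toy Python sketch of that idea (an illustration only, not the paper's Clang-based C++ tooling; the mutants and tests are invented):

```python
# Toy mutant schemata: every mutant of `add` lives in the same artifact
# and is activated via `mutant_id`, so no per-mutant rebuild is needed.
def add(a, b, mutant_id=0):
    if mutant_id == 1:
        return a - b          # mutant 1: '+' replaced by '-'
    if mutant_id == 2:
        return a * b          # mutant 2: '+' replaced by '*'
    return a + b              # mutant 0: the original program

def suite_passes(mutant_id=0):
    """Toy test suite; a failing check 'kills' the active mutant."""
    return add(2, 3, mutant_id) == 5 and add(0, 4, mutant_id) == 4

def mutation_score(n_mutants=2):
    """Fraction of mutants killed by the suite (higher is better)."""
    killed = [m for m in range(1, n_mutants + 1) if not suite_passes(m)]
    return len(killed) / n_mutants

print(mutation_score())
```

Here both toy mutants are killed, giving a mutation score of 1.0; the crucial property is that switching mutants costs a variable check, not a rebuild.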

Place, publisher, year, edition, pages
John Wiley and Sons Ltd, 2023
Keywords
C++ (programming language); Fault detection; C++; CLANG; Front end; Mutant schema; Mutation analysis; Mutation testing; Optimisations; Optimization strategy; Software testings; Software-Reliability; Software testing
Identifiers
urn:nbn:se:ri:diva-67671 (URN)10.1002/stvr.1865 (DOI)2-s2.0-85174318138 (Scopus ID)
Note

This work is supported by (a) the Research Foundation Flanders (FWO) under grant number 1SA1519N; (b) the FWO-Vlaanderen and F.R.S.-FNRS via the Excellence of Science project 30446992 SECO-ASSIST.

Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2023-12-20 Bibliographically checked
Borg, M. (2022). Agility in Software 2.0 – Notebook Interfaces and MLOps with Buttresses and Rebars. In: International Conference on Lean and Agile Software Development, LASD 2022: Lean and Agile Software Development. Paper presented at the International Conference on Lean and Agile Software Development, LASD 2022, 22 January 2022 (pp. 3-16). Springer Science and Business Media Deutschland GmbH
Agility in Software 2.0 – Notebook Interfaces and MLOps with Buttresses and Rebars
2022 (English) In: International Conference on Lean and Agile Software Development, LASD 2022: Lean and Agile Software Development, Springer Science and Business Media Deutschland GmbH, 2022, pp. 3-16. Conference paper, Published paper (Refereed)
Abstract [en]

Artificial intelligence through machine learning is increasingly used in the digital society. Solutions based on machine learning bring both great opportunities, thus coined “Software 2.0,” but also great challenges for the engineering community to tackle. Due to the experimental approach used by data scientists when developing machine learning models, agility is an essential characteristic. In this keynote address, we discuss two contemporary development phenomena that are fundamental in machine learning development, i.e., notebook interfaces and MLOps. First, we present a solution that can remedy some of the intrinsic weaknesses of working in notebooks by supporting easy transitions to integrated development environments. Second, we propose reinforced engineering of AI systems by introducing metaphorical buttresses and rebars in the MLOps context. Machine learning-based solutions are dynamic in nature, and we argue that reinforced continuous engineering is required to quality assure the trustworthy AI systems of tomorrow.

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2022
Keywords
Computer software, Reinforcement, AI systems, Digital society, Engineering community, Essential characteristic, Experimental approaches, Integrated development environment, Machine learning models, Machine learning
Identifiers
urn:nbn:se:ri:diva-58571 (URN)10.1007/978-3-030-94238-0_1 (DOI)2-s2.0-85123981910 (Scopus ID)9783030942373 (ISBN)
Conference
International Conference on Lean and Agile Software Development, LASD 2022. 22 January 2022
Note

Martin Jakobsson and Johan Henriksson are the co-creators of the solution presented in Sect. 2 and deserve all credit for this work. Our thanks go to Backtick Technologies for hosting the MSc thesis project and Dr. Niklas Fors, Dept. of Computer Science, Lund University, for acting as the examiner. This initiative received financial support through the AIQ Meta-Testbed project funded by Kompetensfonden at Campus Helsingborg, Lund University, Sweden, and two internal RISE initiatives, i.e., "SODA – Software & Data Intensive Applications" and "MLOps by RISE."

Available from: 2022-02-18 Created: 2022-02-18 Last updated: 2022-02-18 Bibliographically checked
Helali Moghadam, M., Saadatmand, M., Borg, M., Bohlin, M. & Lisper, B. (2022). An autonomous performance testing framework using self-adaptive fuzzy reinforcement learning. Software quality journal, 127-159
An autonomous performance testing framework using self-adaptive fuzzy reinforcement learning
2022 (English) In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, pp. 127-159. Article in journal (Refereed) Published
Abstract [en]

Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to tackle automated generation of performance test cases mainly involve using source code or system model analysis or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective in a testing process instead could be learned by the testing system, then test automation without advanced performance models could be possible. Furthermore, the learned policy could later be reused for similar software systems under test, thus leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated performance testing setup, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process and performs adaptively without access to source code and performance models. © 2021, The Author(s).
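As a rough illustration of the reinforcement learning idea in this abstract (a toy sketch under invented assumptions, not SaFReL itself): a learner repeatedly picks a resource-reduction action and is rewarded as a simulated system's response time approaches a target breaking point. The action set, the system model, and all constants below are hypothetical.

```python
import random

random.seed(1)

ACTIONS = ["cut_cpu", "cut_memory"]      # hypothetical stress actions
Q = {a: 0.0 for a in ACTIONS}            # single-state Q-table for brevity
ALPHA, EPSILON = 0.5, 0.2                # learning rate, exploration rate

def simulated_response_time(cpu, mem):
    """Toy system model: less cpu/memory -> slower responses."""
    return 1.0 / max(cpu, 0.05) + 0.5 / max(mem, 0.05)

def run_episode(target=10.0, max_steps=50):
    """Shrink resources until the response time crosses `target`."""
    cpu, mem, steps = 1.0, 1.0, 0
    while simulated_response_time(cpu, mem) < target and steps < max_steps:
        # Epsilon-greedy action selection.
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(Q, key=Q.get)
        if a == "cut_cpu":
            cpu *= 0.8
        else:
            mem *= 0.8
        # Reward grows as we approach the performance breaking point.
        reward = simulated_response_time(cpu, mem) / target
        Q[a] += ALPHA * (reward - Q[a])  # incremental Q update
        steps += 1
    return steps

steps_needed = run_episode()
```

SaFReL additionally fuzzifies the state space and transfers the learned policy across systems under test; the sketch only shows the learn-while-stressing loop.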

Place, publisher, year, edition, pages
Springer, 2022
Keywords
Autonomous testing, Performance testing, Reinforcement learning, Stress testing, Test case generation, Automation, Computer programming languages, Testing, Transfer learning, Automated generation, Optimal performance, Performance Model, Performance testing framework, Performance tests, Simulated performance, Software systems, Software testing
Identifiers
urn:nbn:se:ri:diva-52628 (URN)10.1007/s11219-020-09532-z (DOI)2-s2.0-85102446552 (Scopus ID)
Note

This work has been supported by and received partial funding from the TESTOMAT, XIVT, IVVES, and MegaM@Rt2 European projects.

Available from: 2021-03-25 Created: 2021-03-25 Last updated: 2023-10-04 Bibliographically checked
Scharinger, B., Borg, M., Vogelsang, A. & Olsson, T. (2022). Can RE Help Better Prepare Industrial AI for Commercial Scale? IEEE Software, 39(6), 8-12
Can RE Help Better Prepare Industrial AI for Commercial Scale?
2022 (English) In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 39, no. 6, pp. 8-12. Article in journal (Refereed) Published
Abstract [en]

This issue marks the start of my term as department editor for the “Requirements” column. I very much look forward to exploring contemporary aspects of requirements and requirements engineering (RE) in the coming years! As an institute researcher with RISE, I primarily work in strictly regulated domains, in which requirements are cornerstones in the development activities. Please check my introduction in the September/October 2022 issue of IEEE Software for more about my background. In this issue—featuring a theme that perfectly matches my current research interests—we discuss RE4AI from the perspective of Siemens Digital Industries. Referring to this as industrial artificial intelligence (AI), we share insights from our numerous chats about this topic over the last two years, including formal interviews with key stakeholders. In this column, we argue that the business side of AI has been underexplored—and that RE can help us move forward.

Place, publisher, year, edition, pages
IEEE Computer Society, 2022
Keywords
Artificial intelligence, Development activity, Look-forward, Requirement engineering, Research interests, Siemens, Requirements engineering
Identifiers
urn:nbn:se:ri:diva-61224 (URN)10.1109/MS.2022.3196205 (DOI)2-s2.0-85141637687 (Scopus ID)
Available from: 2022-12-01 Created: 2022-12-01 Last updated: 2022-12-01 Bibliographically checked
Tornhill, A. & Borg, M. (2022). Code Red: The Business Impact of Code Quality - A Quantitative Study of 39 Proprietary Production Codebases. In: Proceedings - International Conference on Technical Debt 2022, TechDebt 2022: . Paper presented at 5th International Conference on Technical Debt, TechDebt 2022, 17 May 2022 through 18 May 2022 (pp. 11-20). Institute of Electrical and Electronics Engineers Inc.
Code Red: The Business Impact of Code Quality - A Quantitative Study of 39 Proprietary Production Codebases
2022 (English) In: Proceedings - International Conference on Technical Debt 2022, TechDebt 2022, Institute of Electrical and Electronics Engineers Inc., 2022, pp. 11-20. Conference paper, Published paper (Refereed)
Abstract [en]

Code quality remains an abstract concept that fails to get traction at the business level. Consequently, software companies keep trading code quality for time-to-market and new features. The resulting technical debt is estimated to waste up to 42% of developers' time. At the same time, there is a global shortage of software developers, meaning that developer productivity is key to software businesses. Our overall mission is to make code quality a business concern, not just a technical aspect. Our first goal is to understand how code quality impacts 1) the number of reported defects, 2) the time to resolve issues, and 3) the predictability of resolving issues on time. We analyze 39 proprietary production codebases from a variety of domains using the CodeScene tool based on a combination of source code analysis, version-control mining, and issue information from Jira. By analyzing activity in 30,737 files, we find that low quality code contains 15 times more defects than high quality code. Furthermore, resolving issues in low quality code takes on average 124% more time in development. Finally, we report that issue resolutions in low quality code involve higher uncertainty manifested as 9 times longer maximum cycle times. This study provides evidence that code quality cannot be dismissed as a technical concern. With 15 times fewer defects, twice the development speed, and substantially more predictable issue resolution times, the business advantage of high quality code should be unmistakably clear.
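To make the reported multipliers concrete: +124% resolution time means a 2.24× factor. The baseline numbers below are invented purely to illustrate the arithmetic; only the factors (15× defects, 2.24× resolution time, 9× maximum cycle time) come from the study's reported results.

```python
# Hypothetical baseline figures for a high-quality file.
baseline = {"defects": 2, "avg_resolution_days": 1.0, "max_cycle_days": 5.0}

# Apply the study's reported multipliers to project the low-quality case.
low_quality = {
    "defects": baseline["defects"] * 15,                          # 15x more defects
    "avg_resolution_days": baseline["avg_resolution_days"] * 2.24,  # +124% time
    "max_cycle_days": baseline["max_cycle_days"] * 9,             # 9x max cycle time
}

print(low_quality)
```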

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2022
Keywords
business impact, code quality, developer productivity, mining software repositories, software defects, technical debt, Commerce, Defects, Low qualities, Mining software, Mining software repository, Quality codes, Software repositories, Technical debts, Productivity
Identifiers
urn:nbn:se:ri:diva-59858 (URN)10.1145/3524843.3528091 (DOI)2-s2.0-85134326173 (Scopus ID)9781450393041 (ISBN)
Conference
5th International Conference on Technical Debt, TechDebt 2022, 17 May 2022 through 18 May 2022
Note

Our thanks go to the CodeScene development team who supported our work and provided details on the Code Health metric. Moreover, we extend our deepest appreciation to the repository owners who let us analyze their data as part of this study.

Available from: 2022-08-02 Created: 2022-08-02 Last updated: 2022-08-09 Bibliographically checked
Song, Q., Borg, M., Engstrom, E., Ardo, H. & Rico, S. (2022). Exploring ML testing in practice - Lessons learned from an interactive rapid review with Axis Communications. In: Proceedings - 1st International Conference on AI Engineering - Software Engineering for AI, CAIN 2022: . Paper presented at 1st International Conference on AI Engineering - Software Engineering for AI, CAIN 2022, 16 May 2022 through 17 May 2022 (pp. 10-21). Institute of Electrical and Electronics Engineers Inc.
Exploring ML testing in practice - Lessons learned from an interactive rapid review with Axis Communications
2022 (English) In: Proceedings - 1st International Conference on AI Engineering - Software Engineering for AI, CAIN 2022, Institute of Electrical and Electronics Engineers Inc., 2022, pp. 10-21. Conference paper, Published paper (Refereed)
Abstract [en]

There is a growing interest in industry and academia in machine learning (ML) testing. We believe that industry and academia need to learn together to produce rigorous and relevant knowledge. In this study, we initiate a collaboration between stakeholders from one case company, one research institute, and one university. To establish a common view of the problem domain, we applied an interactive rapid review of the state of the art. Four researchers from Lund University and RISE Research Institutes and four practitioners from Axis Communications reviewed a set of 180 primary studies on ML testing. We developed a taxonomy for the communication around ML testing challenges and results and identified a list of 12 review questions relevant for Axis Communications. The three most important questions (data testing, metrics for assessment, and test generation) were mapped to the literature, and an in-depth analysis of the 35 primary studies matching the most important question (data testing) was made. A final set of the five best matches was analysed, and we reflect on the criteria for applicability and relevance for the industry. The taxonomies are helpful for communication but not final. Furthermore, there was no perfect match to the case company's investigated review question (data testing). However, we extracted relevant approaches from the five studies on a conceptual level to support later context-specific improvements. We found the interactive rapid review approach useful for triggering and aligning communication between the different stakeholders.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2022
Keywords
AI Engineering, Interactive Rapid Review, Machine Learning, Taxonomy, Testing, Data testing, Learn+, Lund University, Machine-learning, On-machines, Problem domain, Research institutes, State of the art, Taxonomies
Identifiers
urn:nbn:se:ri:diva-60335 (URN)10.1145/3522664.3528596 (DOI)2-s2.0-85128924924 (Scopus ID)9781450392754 (ISBN)
Conference
1st International Conference on AI Engineering - Software Engineering for AI, CAIN 2022, 16 May 2022 through 17 May 2022
Note

This initiative received financial support through the AIQ Meta-Testbed project funded by Kompetensfonden at Campus Helsingborg, Lund University, Sweden. In addition, this work was supported in part by the Wallenberg AI, Autonomous Systems and Software Program (WASP).

Available from: 2022-10-14 Created: 2022-10-14 Last updated: 2022-10-14 Bibliographically checked
Organisations
Identifiers
ORCID iD: orcid.org/0000-0001-7879-4371