Exploring ML testing in practice - Lessons learned from an interactive rapid review with Axis Communications
2022 (English) In: Proceedings - 1st International Conference on AI Engineering - Software Engineering for AI, CAIN 2022, Institute of Electrical and Electronics Engineers Inc., 2022, p. 10-21. Conference paper, Published paper (Refereed)
Abstract [en]
There is a growing interest in industry and academia in machine learning (ML) testing. We believe that industry and academia need to learn together to produce rigorous and relevant knowledge. In this study, we initiate a collaboration between stakeholders from one case company, one research institute, and one university. To establish a common view of the problem domain, we applied an interactive rapid review of the state of the art. Four researchers from Lund University and RISE Research Institutes and four practitioners from Axis Communications reviewed a set of 180 primary studies on ML testing. We developed a taxonomy for the communication around ML testing challenges and results, and identified a list of 12 review questions relevant for Axis Communications. The three most important questions (data testing, metrics for assessment, and test generation) were mapped to the literature, and an in-depth analysis was conducted of the 35 primary studies matching the most important question (data testing). A final set of the five best matches was analysed, and we reflect on the criteria for applicability and relevance for the industry. The taxonomies are helpful for communication but not final. Furthermore, there was no perfect match to the case company's investigated review question (data testing). However, we extracted relevant approaches from the five studies on a conceptual level to support later context-specific improvements. We found the interactive rapid review approach useful for triggering and aligning communication between the different stakeholders.
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2022, p. 10-21
Keywords [en]
AI Engineering, Interactive Rapid Review, Machine Learning, Taxonomy, Testing, Data Testing, Problem Domain, Research Institutes, State of the Art, Taxonomies
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:ri:diva-60335
DOI: 10.1145/3522664.3528596
Scopus ID: 2-s2.0-85128924924
ISBN: 9781450392754 (print)
OAI: oai:DiVA.org:ri-60335
DiVA, id: diva2:1703726
Conference
1st International Conference on AI Engineering - Software Engineering for AI, CAIN 2022, 16 May 2022 through 17 May 2022
Note
Funding details: Lunds Universitet; Funding text 1: This initiative received financial support through the AIQ Meta-Testbed project funded by Kompetensfonden at Campus Helsingborg, Lund University, Sweden. In addition, this work was supported in part by the Wallenberg AI, Autonomous Systems and Software Program (WASP).
2022-10-14. Bibliographically approved