Towards Structured Evaluation of Deep Neural Network Supervisors
2019 (English). In: Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 27-34. Conference paper, Published paper (Refereed)
Abstract [en]
Deep Neural Networks (DNN) have improved the quality of several non-safety related products in the past years. However, before DNNs are deployed in safety-critical applications, their robustness needs to be systematically analyzed. A common challenge for DNNs occurs when the input is dissimilar to the training set, which might lead to high confidence predictions despite the network lacking proper knowledge of the input. Several previous studies have proposed to complement DNNs with a supervisor that detects when inputs are outside the scope of the network. Most of these supervisors, however, are developed and tested for a selected scenario using a specific performance metric. In this work, we emphasize the need to assess and compare the performance of supervisors in a structured way. We present a framework consisting of four datasets organized into six test cases, combined with seven evaluation metrics. The test cases provide varying complexity and include data from publicly available sources as well as a novel dataset consisting of images from simulated driving scenarios, which we plan to make publicly available. Our framework can be used to support DNN supervisor evaluation, which in turn could motivate the development, validation, and deployment of DNNs in safety-critical applications.
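As a concrete illustration of the kind of supervisor and evaluation metric the abstract refers to, the sketch below implements a common baseline: scoring inputs by how low the DNN's maximum softmax confidence is, and measuring how well that score separates in-distribution inputs from outliers using AUROC. This is a minimal example under stated assumptions, not the paper's actual framework; the function names and the placeholder logits are invented for illustration, and AUROC is only one of many possible metrics.

```python
# Minimal sketch of a softmax-confidence supervisor and one evaluation metric.
# Assumptions: names, placeholder data, and the choice of AUROC are illustrative
# and not taken from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score


def supervisor_score(logits: np.ndarray) -> np.ndarray:
    """Anomaly score per input: 1 minus the maximum softmax probability.

    Higher scores suggest the input lies outside the training distribution.
    """
    z = logits - logits.max(axis=1, keepdims=True)            # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # softmax over classes
    return 1.0 - probs.max(axis=1)


def auroc(scores_in: np.ndarray, scores_out: np.ndarray) -> float:
    """AUROC for separating in-distribution (label 0) from outlier (label 1) inputs."""
    labels = np.concatenate([np.zeros(len(scores_in)), np.ones(len(scores_out))])
    scores = np.concatenate([scores_in, scores_out])
    return roc_auc_score(labels, scores)


# Example usage with random logits standing in for DNN outputs on an
# in-distribution test case and an outlier test case.
rng = np.random.default_rng(0)
logits_in = rng.normal(0.0, 5.0, size=(100, 10))   # placeholder in-distribution logits
logits_out = rng.normal(0.0, 1.0, size=(100, 10))  # placeholder outlier logits
print(auroc(supervisor_score(logits_in), supervisor_score(logits_out)))
```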
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019, p. 27-34
Keywords [en]
Automotive perception, Deep neural networks, Out-of-distribution, Robustness, Supervisor, Artificial intelligence, Robustness (control systems), Safety engineering, Statistical tests, Supervisory personnel, Evaluation metrics, High confidence, Performance metrics, Safety critical applications, Safety-related products, Simulated driving, Structured evaluation
National Category
Natural Sciences
Identifiers
URN: urn:nbn:se:ri:diva-39270
DOI: 10.1109/AITest.2019.00-12
Scopus ID: 2-s2.0-85067113703
ISBN: 9781728104928 (print)
OAI: oai:DiVA.org:ri-39270
DiVA, id: diva2:1334743
Conference
1st IEEE International Conference on Artificial Intelligence Testing, AITest 2019, 4 April 2019 through 9 April 2019
Note
Funding details: Fellowships Fund Incorporated; Knut och Alice Wallenbergs Stiftelse. Acknowledgments: This work was carried out within the SMILE II project financed by Vinnova, FFI (Fordonsstrategisk forskning och innovation) under grant number 2017-03066, and partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
Available from: 2019-07-03. Created: 2019-07-03. Last updated: 2020-01-23. Bibliographically approved.