Experimental Analysis of Trustworthy In-Vehicle Intrusion Detection System Using eXplainable Artificial Intelligence (XAI)
Mid Sweden University, Sweden.
RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems.
Mid Sweden University, Sweden.
Mid Sweden University, Sweden.
2022 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 102831-102841. Article in journal (Refereed). Published.
Abstract [en]

An anomaly-based In-Vehicle Intrusion Detection System (IV-IDS) is one of the protection mechanisms for detecting cyber attacks on automotive vehicles. Using artificial intelligence (AI) for anomaly detection to thwart cyber attacks is promising but suffers from false alarms and decisions that are hard to interpret. This breeds uncertainty and distrust toward such an IDS design unless it can explain its behavior, e.g., by using eXplainable AI (XAI). In this paper, we consider the XAI-powered design of such an IV-IDS using CAN bus data from a public dataset named 'Survival'. Novel features are engineered, and a Deep Neural Network (DNN) is trained on the dataset. A visualization-based explanation, 'VisExp', is created to explain the behavior of the AI-based IV-IDS and is evaluated against a rule-based explanation in an expert survey. Our results show that experts' trust in the AI-based IV-IDS increases significantly when they are provided with VisExp, more so than with the rule-based explanation. These findings confirm the effect, and by extension the need, of explainability in automated systems, and VisExp, as a source of increased explainability, shows promise in helping involved parties gain trust in such systems.
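
The abstract describes the pipeline only at a high level: engineered CAN bus features, a DNN detector, and a visualization-based explanation. As a minimal sketch of that kind of pipeline, the Python snippet below trains a small DNN on synthetic stand-in features and produces a SHAP attribution plot in place of an explanation view. The feature set, network architecture, and the use of SHAP are illustrative assumptions; the paper's actual features, model, and VisExp design are not specified in this record.

```python
# Minimal, hypothetical sketch: engineered CAN-bus features -> DNN
# anomaly classifier -> feature-attribution plot. All names and data
# here are stand-ins, not the paper's actual 'Survival' features or VisExp.
import numpy as np
import tensorflow as tf
import shap

rng = np.random.default_rng(0)

# Stand-in for engineered per-window CAN features, e.g. message frequency,
# inter-arrival-time statistics, payload entropy (assumed, for illustration).
X = rng.normal(size=(2000, 8)).astype("float32")
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype("float32")  # synthetic attack/normal labels

# Small DNN classifier producing P(attack) per feature window.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Visualization-based explanation stand-in: SHAP attributions show which
# engineered features drive each detection, the kind of evidence a tool
# like VisExp could present to experts.
predict_fn = lambda a: model.predict(a, verbose=0).flatten()
explainer = shap.KernelExplainer(predict_fn, X[:50])
shap_values = explainer.shap_values(X[:20])
shap.summary_plot(shap_values, X[:20],
                  feature_names=[f"feat_{i}" for i in range(8)])
```

The model-agnostic KernelExplainer is used here only because it requires no assumptions about the network internals; a gradient-based explainer would be faster for a real DNN of this kind.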

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2022. Vol. 10, p. 102831-102841
Keywords [en]
Automotive, deep learning, intrusion detection system, machine learning, trustworthiness, XAI, Automation, Behavioral research, Computer crime, Crime, Decision trees, Deep neural networks, Network security, Vehicles, Automotives, Behavioral science, Intrusion Detection Systems, Intrusion-Detection, Machine-learning, Random forests, Trust management, Intrusion detection
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:ri:diva-61248
DOI: 10.1109/ACCESS.2022.3208573
Scopus ID: 2-s2.0-85139441364
OAI: oai:DiVA.org:ri-61248
DiVA, id: diva2:1715248
Available from: 2022-12-01. Created: 2022-12-01. Last updated: 2023-06-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Raza, Shahid
