How Do ML Students Explain Their Models and What Can We Learn from This?
RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems; KTH Royal Institute of Technology, Sweden. ORCID iD: 0000-0003-2017-7914
2025 (English). In: Lecture Notes in Business Information Processing, ISSN 1865-1348, E-ISSN 1865-1356, Vol. 539 LNBIP, p. 351-365. Article in journal (Refereed). Published.
Abstract [en]

In recent years, artificial intelligence (AI) has made great progress. However, despite impressive results, modern data-driven AI systems are often very difficult to understand, challenging their use in software business and prompting the emergence of the explainable AI (XAI) field. This paper explores how machine learning (ML) students explain their models and draws implications for practice. Data was collected from ML master's students, who were given a two-part assignment. First, they developed a model predicting insurance claims based on an existing data set; then they received a request for an explanation of insurance premiums, in accordance with the GDPR right to meaningful information, and had to produce such an explanation. The students also peer-graded each other's explanations. Analyzing this data set and comparing it to responses from actual insurance firms in a previous study illustrates some potential pitfalls: a narrow technical focus and offering mere data dumps. There were also some promising directions (feature importance, graphics, and what-if scenarios) where software business practice could benefit from taking inspiration from the students. The paper concludes with a reflection on the importance of multiple kinds of expertise and team efforts for making the most of XAI in practice.

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2025. Vol. 539 LNBIP, p. 351-365
Keywords [en]
Adversarial machine learning; Insurance; Artificial intelligence systems; Data driven; Data set; Explainable artificial intelligence; GDPR; Insurance claims; Learn+; Machine-learning; Master students; Software business; Students
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:ri:diva-78435
DOI: 10.1007/978-3-031-85849-9_28
Scopus ID: 2-s2.0-105001269309
OAI: oai:DiVA.org:ri-78435
DiVA, id: diva2:1998435
Conference
15th International Conference on Software Business, ICSOB 2024, 18-20 November 2024
Note

This research was partially funded by the Swedish Competition Authority, grant no. 456/2021.

Available from: 2025-09-16. Created: 2025-09-16. Last updated: 2025-09-23. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Franke, Ulrik
