Publications (10 of 78)
Franke, U. (2025). How Do ML Students Explain Their Models and What Can We Learn from This? Paper presented at the 15th International Conference on Software Business, ICSOB 2024, 18-20 November 2024. Lecture Notes in Business Information Processing, 539 LNBIP, 351-365
How Do ML Students Explain Their Models and What Can We Learn from This?
2025 (English). In: Lecture Notes in Business Information Processing, ISSN 1865-1348, E-ISSN 1865-1356, Vol. 539 LNBIP, p. 351-365. Article in journal (Refereed). Published.
Abstract [en]

In recent years, artificial intelligence (AI) has made great progress. However, despite impressive results, modern data-driven AI systems are often very difficult to understand, which challenges their use in the software business and has prompted the emergence of the explainable AI (XAI) field. This paper explores how machine learning (ML) students explain their models and draws implications for practice. Data was collected from ML master's students, who were given a two-part assignment. First, they developed a model predicting insurance claims from an existing data set; then they received a request for an explanation of insurance premiums, in accordance with the GDPR right to meaningful information, and had to produce such an explanation. The students also peer-graded each other's explanations. Analyzing this data set and comparing it to responses from actual insurance firms in a previous study illustrates some potential pitfalls, such as a narrow technical focus and offering mere data dumps. There were also some promising directions, namely feature importance, graphics, and what-if scenarios, where software business practice could benefit from taking inspiration from the students. The paper concludes with a reflection on the importance of multiple kinds of expertise and team effort for making the most of XAI in practice.
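
To make the promising directions concrete, here is a minimal sketch (not from the paper) of feature importance and a what-if scenario, using scikit-learn's permutation importance on synthetic data; the feature names and the data set are invented for illustration.

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Stand-in for an insurance claims data set; feature names are invented.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
feature_names = ["age", "vehicle_value", "mileage", "region", "bonus_class"]

model = RandomForestRegressor(random_state=0).fit(X, y)

# Feature importance: how much does shuffling each feature degrade fit?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked:
    print(f"{name:>13}: {importance:.2f}")

# What-if scenario: perturb one feature for one policyholder and compare.
x_base = X[[0]].copy()
x_whatif = x_base.copy()
x_whatif[0, 2] -= 1.0  # hypothetical: "what if mileage were one unit lower?"
print("baseline prediction:", model.predict(x_base)[0])
print("what-if prediction: ", model.predict(x_whatif)[0])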

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2025
Keywords
Adversarial machine learning; Insurance; Artificial intelligence systems; Data driven; Data set; Explainable artificial intelligence; GDPR; Insurance claims; Machine learning; Master students; Software business; Students
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-78435 (URN); 10.1007/978-3-031-85849-9_28 (DOI); 2-s2.0-105001269309 (Scopus ID)
Conference
15th International Conference on Software Business, ICSOB 2024, 18-20 November 2024
Note

This research was partially funded by the Swedish Competition Authority, grant no 456/2021.

Available from: 2025-09-16. Created: 2025-09-16. Last updated: 2025-09-23. Bibliographically approved.
Franke, U. & Orlando, A. (2025). Interdependent cyber risk and the role of insurers. Research in Economics, 79(3), Article ID 101059.
Interdependent cyber risk and the role of insurers
2025 (English). In: Research in Economics, ISSN 1090-9443, E-ISSN 1090-9451, Vol. 79, no 3, article id 101059. Article in journal (Refereed). Published.
Abstract [en]

Increasing use of new digital services offers tremendous opportunities for modern society, but also entails new risks. One tool for managing cyber risk is cyber insurance. While cyber insurance has attracted much attention and optimism, interdependent cyber risks and a lack of actuarial data have prompted some insurers to adopt a more proactive role, not only insuring losses but also assisting clients with preventive work, such as managed detection and response solutions, i.e., investments in the clients' own cybersecurity. The purpose of this paper is to propose and theoretically investigate a further extension of this role, in which insurers facilitate security investments between interdependent firms, which are given the opportunity to invest a share of their insurance premiums in improving each other's security. It is demonstrated that if insurers can facilitate such investments, then under common theoretical assumptions this can make a positive contribution to overall welfare. The paper concludes with a discussion of the relevance and applicability of this theoretical contribution in practice.
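
As a rough illustration of the mechanism described, a minimal two-firm sketch in the spirit of interdependent security models can be written as follows; the notation (direct risk p, contagion factor r, premium share s) is assumed here and is not the paper's actual model.

% Breach probability of firm i: own (direct) risk plus contagion from
% firm j, with p decreasing in own security investment z_i.
\[
  q_i(z_i, z_j) \;=\; p(z_i) + \bigl(1 - p(z_i)\bigr)\, r\, p(z_j),
  \qquad p' < 0,\quad 0 \le r \le 1 .
\]
% If firm i may redirect a share s of its premium P_i into firm j's
% security, z_j rises to z_j + s P_i and p(z_j) falls, so q_i falls
% even though z_i is unchanged. Summing the expected-loss reductions
% L [ q_i(z_i, z_j) - q_i(z_i, z_j + s P_i) ] over both firms and
% comparing them to the redirected amounts gives the kind of welfare
% comparison the abstract alludes to.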

Place, publisher, year, edition, pages
Academic Press, 2025
National Category
Economics and Business
Identifiers
urn:nbn:se:ri:diva-78380 (URN); 10.1016/j.rie.2025.101059 (DOI); 2-s2.0-105001107893 (Scopus ID)
Note

The foundations of this paper were laid during U. Franke's two-week stay with A. Orlando at the Istituto per le Applicazioni del Calcolo in Naples in July 2022, supported by the Short Term Mobility Program (STM 2022) of the National Research Council of Italy (CNR), contract number 5446. A. Orlando was partially supported by the SERICS project (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union – NextGenerationEU.

Available from: 2025-06-09. Created: 2025-06-09. Last updated: 2025-09-23. Bibliographically approved.
Kianpour, M. & Franke, U. (2025). The use of simulations in economic cybersecurity decision-making. Journal of Cybersecurity, 11(1), Article ID tyaf003.
The use of simulations in economic cybersecurity decision-making
2025 (English). In: Journal of Cybersecurity, ISSN 2057-2093, Vol. 11, no 1, article id tyaf003. Article in journal (Refereed). Published.
Abstract [en]

This paper presents an in-depth examination of the use of simulations in economic cybersecurity decision-making, highlighting the dual nature of their potential and the challenges they present. Drawing on examples from existing studies, we explore the role of simulations in generating new knowledge about probabilities and consequences in the cybersecurity domain, which is essential for understanding and managing risk and uncertainty. Additionally, we introduce the concepts of "bookkeeping" and "abstraction" within the context of simulations, discussing how they can sometimes fail and exploring the underlying reasons for their failures. This discussion leads us to suggest a framework of considerations for effectively utilizing simulations in cybersecurity. This framework is designed not as a rigid checklist but as a guide for critical thinking and evaluation, aiding users in assessing the suitability and reliability of a simulation model for a particular decision-making context. Future work should focus on applying this framework in real-world settings, continuously refining the use of simulations to ensure they remain effective and relevant in the dynamic field of cybersecurity.
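
As a minimal example of the kind of simulation discussed (not the authors' models), the sketch below Monte Carlo-estimates annual cyber loss from an assumed incident frequency and severity; all distributional parameters are invented for illustration.

import numpy as np

rng = np.random.default_rng(seed=42)
n_years = 100_000  # number of simulated years

# Assumed model: incidents per year ~ Poisson(2); severity ~ lognormal.
incidents = rng.poisson(lam=2.0, size=n_years)
annual_loss = np.array([
    rng.lognormal(mean=11.0, sigma=1.5, size=k).sum() for k in incidents
])

# Decision-relevant outputs: expected loss and a tail measure (95% VaR).
print(f"expected annual loss: {annual_loss.mean():,.0f}")
print(f"95% VaR:              {np.quantile(annual_loss, 0.95):,.0f}")

Whether such distributional choices are appropriate for a given decision is exactly the kind of question the suggested framework is meant to guide.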

Place, publisher, year, edition, pages
Oxford University Press, 2025
Keywords
Bias; Critical evaluation; Critical thinking; Cyber security; Decision making under uncertainty; Decision-making under risk; Risks and uncertainties; Simulation; Simulation model
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-78020 (URN); 10.1093/cybsec/tyaf003 (DOI); 2-s2.0-85218089039 (Scopus ID)
Note

This work was supported by H2020-SU-DS02-2020, grant number 101020259 (M.K.), and the Swedish Foundation for Strategic Research, grant number SM22-0057 (U.F.).

Available from: 2025-09-25. Created: 2025-09-25. Last updated: 2025-09-25. Bibliographically approved.
Franke, U. (2024). Algorithmic Transparency, Manipulation, and Two Concepts of Liberty. Philosophy & Technology, 37, Article ID 22.
Algorithmic Transparency, Manipulation, and Two Concepts of Liberty
2024 (English). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, article id 22. Article in journal (Refereed). Published.
Abstract [en]

As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but rather as indifference to whether the information provided enhances decision-making by revealing reasons. This short commentary on Klenk uses Berlin's (1958) two concepts of liberty to further illuminate the concept of transparency as manipulation, finding alignment between positive liberty and the critical account.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Algorithmic transparency, manipulation, Isaiah Berlin
National Category
Philosophy; Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-72335 (URN); 10.1007/s13347-024-00713-3 (DOI)
Note

Open access funding provided by RISE Research Institutes of Sweden. The author received no external funding for this work.

Available from: 2024-03-15. Created: 2024-03-15. Last updated: 2025-09-23. Bibliographically approved.
Franke, U. (2024). Att utveckla och implementera cybersäkerhetspolicy: Lärdomar från den finansiella sektorn [Developing and implementing cybersecurity policy: Lessons from the financial sector]. Statsvetenskaplig Tidskrift, 126(2), 251-272
Att utveckla och implementera cybersäkerhetspolicy: Lärdomar från den finansiella sektorn
2024 (Swedish). In: Statsvetenskaplig Tidskrift, ISSN 0039-0747, Vol. 126, no 2, p. 251-272. Article in journal (Refereed). Published.
Abstract [en]

Modern society is increasingly dependent on digital services, making their dependability a top priority. But while there is a consensus that cybersecurity is important, there is no corresponding agreement on the true extent of the problem, the most effective countermeasures, or the proper division of labor and responsibilities. This makes cybersecurity policy very difficult. This article addresses this issue based on observations and experiences from a period of guest research at the Swedish Financial Supervisory Authority (Finansinspektionen), which made it possible to study how cybersecurity policy is developed and implemented in the Swedish financial sector. Observations include policy implementation challenges related to squaring different roles and perspectives mandated by different laws, and to collaboration between independent government authorities, but also policy development challenges: How can the full range of perspectives and tools be included in cybersecurity policy development? As Sweden now revises its cybersecurity policy, this is a key issue.

National Category
Computer and Information Sciences; Political Science
Identifiers
urn:nbn:se:ri:diva-74617 (URN)
Funder
Swedish Foundation for Strategic Research, SM22-0057
Note

This study was funded by the Swedish Foundation for Strategic Research (contract number SM22-0057).

Available from: 2024-08-05. Created: 2024-08-05. Last updated: 2025-09-23. Bibliographically approved.
Andreasson, A., Artman, H., Brynielsson, J. & Franke, U. (2024). Cybersecurity work at Swedish administrative authorities: taking action or waiting for approval. Cognition, Technology & Work, 26(4), 709
Cybersecurity work at Swedish administrative authorities: taking action or waiting for approval
2024 (English). In: Cognition, Technology & Work, ISSN 1435-5558, E-ISSN 1435-5566, Vol. 26, no 4, p. 709-. Article in journal (Refereed). Published.
Abstract [en]

In recent years, the Swedish public sector has undergone rapid digitalization, while cybersecurity efforts have not kept pace. This study investigates the conditions for cybersecurity work at Swedish administrative authorities by examining organizational conditions at the authorities, what cybersecurity staff do to acquire the cyber situation awareness required for their role, and what experience cybersecurity staff have with incidents. In this study, 17 semi-structured interviews were held with respondents from Swedish administrative authorities. The results showed the diverse conditions for cybersecurity work that exist at the authorities and that a variety of roles are involved in that work. It was found that national-level support for cybersecurity was perceived as somewhat lacking. There were also challenges in getting access to the information elements required for sufficient cyber situation awareness.

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2024
Keywords
Condition; Cyber situation awareness; Cyber security; National level; Organisational; Public sector; Security management; Semi structured interviews; Situation awareness; Swedish
National Category
Information Systems
Identifiers
urn:nbn:se:ri:diva-76041 (URN); 10.1007/s10111-024-00779-1 (DOI); 2-s2.0-85205049306 (Scopus ID)
Note

 This work was supported by the Swedish Armed Forces. 

Available from: 2024-10-31. Created: 2024-10-31. Last updated: 2025-09-23. Bibliographically approved.
Franke, U. (2024). Livspusslet: Rilke och Nozick [The puzzle of life: Rilke and Nozick]. In: Katarina O'Nils Franke (Ed.), Rilke och filosoferna (pp. 79-86). Malmö: Ellerström förlag
Livspusslet: Rilke och Nozick
2024 (Swedish). In: Rilke och filosoferna / [ed] Katarina O'Nils Franke, Malmö: Ellerström förlag, 2024, p. 79-86. Chapter in book (Other (popular science, discussion, etc.)).
Place, publisher, year, edition, pages
Malmö: Ellerström förlag, 2024
Keywords
Rainer Maria Rilke, Robert Nozick
National Category
Philosophy
Identifiers
urn:nbn:se:ri:diva-74618 (URN); 9789172477308 (ISBN)
Available from: 2024-08-05. Created: 2024-08-05. Last updated: 2025-09-23. Bibliographically approved.
Franke, U. (2024). Rawlsian Algorithmic Fairness and a Missing Aggregation Property of the Difference Principle. Philosophy & Technology, 37(3), Article ID 87.
Rawlsian Algorithmic Fairness and a Missing Aggregation Property of the Difference Principle
2024 (English). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, no 3, article id 87. Article in journal (Refereed). Published.
Abstract [en]

Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. Based on complications with this approach identified in the literature, this article discusses how Rawls's theory in general, and especially the difference principle, should reasonably be applied to algorithmic fairness decisions. It is observed that proposals to achieve Rawlsian algorithmic fairness often aim to uphold the difference principle in the individual situations where automated decision-making occurs. However, the Rawlsian difference principle applies to society at large and does not aggregate in such a way that upholding it in constituent situations also upholds it in the aggregate. But such aggregation is a hidden premise of many proposals in the literature, and its falsity explains many of the complications encountered.
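
The non-aggregation point can be made concrete with a small worked example (not from the paper): two separate decisions, each with two options, affecting the same two individuals A and B, with payoffs written (A, B).

% Illustrative payoffs; Option 1 is the local maximin choice in each
% decision, since min(2, 3) = 2 beats min(1, 5) = 1.
\[
\begin{array}{lcc}
 & \text{Option 1} & \text{Option 2} \\
\text{Decision 1} & (2,\, 3) & (1,\, 5) \\
\text{Decision 2} & (3,\, 2) & (5,\, 1)
\end{array}
\]
% Aggregating the two local maximin choices gives A = 2 + 3 = 5 and
% B = 3 + 2 = 5, so the worst-off position gets 5. Choosing Option 2
% twice instead gives A = 1 + 5 = 6 and B = 5 + 1 = 6, so the
% worst-off position gets 6. Upholding the difference principle case
% by case thus fails to uphold it in the aggregate.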

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2024
National Category
Philosophy, Ethics and Religion
Identifiers
urn:nbn:se:ri:diva-74644 (URN); 10.1007/s13347-024-00779-z (DOI); 2-s2.0-85198326990 (Scopus ID)
Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2025-09-23. Bibliographically approved.
Franke, U., Hallstrom, C. H., Artman, H. & Dexe, J. (2024). Requirements on and Procurement of Explainable Algorithms – A Systematic Review of the Literature. In: New Trends in Disruptive Technologies, Tech Ethics, and Artificial Intelligence, DITTET 2024. Paper presented at the 4th International Conference on Disruptive Technologies, Tech Ethics and Artificial Intelligence (DITTET), Salamanca, Spain, July 3-5, 2024 (pp. 40-52). Springer International Publishing AG, 1459
Requirements on and Procurement of Explainable Algorithms – A Systematic Review of the Literature
2024 (English). In: New Trends in Disruptive Technologies, Tech Ethics, and Artificial Intelligence, DITTET 2024, Springer International Publishing AG, 2024, Vol. 1459, p. 40-52. Conference paper, Published paper (Refereed).
Abstract [en]

Artificial intelligence is making progress, enabling automation of tasks previously the privilege of humans. This brings many benefits but also entails challenges, in particular with respect to ‘black box’ machine learning algorithms. Therefore, questions of transparency and explainability in these systems receive much attention. However, most organizations do not build their software from scratch, but rather procure it from others. Thus, it becomes imperative to consider not only requirements on but also procurement of explainable algorithms and decision support systems. This article offers a first systematic literature review of this area. Following construction of appropriate search queries, 503 unique items from Scopus, ACM Digital Library, and IEEE Xplore were screened for relevance; 37 items remained in the final analysis. An overview and a synthesis of the literature are offered, and it is concluded that more research is needed, in particular on procurement, human-computer interaction aspects, and different purposes of explainability.

Place, publisher, year, edition, pages
Springer International Publishing AG, 2024
Series
Advances in Intelligent Systems and Computing
Keywords
Requirements; Procurement; Explainable Artificial Intelligence (XAI); Transparency; Explainability
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-77406 (URN); 10.1007/978-3-031-66635-3_4 (DOI)
Conference
4th International Conference on Disruptive Technologies, Tech Ethics and Artificial Intelligence (DITTET), Salamanca, Spain, July 3-5, 2024
Available from: 2025-02-12. Created: 2025-02-12. Last updated: 2025-09-23. Bibliographically approved.
Franke, U. (2024). The Limits of Calibration and the Possibility of Roles for Trustworthy AI. Philosophy & Technology, 37(3), Article ID 82.
The Limits of Calibration and the Possibility of Roles for Trustworthy AI
2024 (English). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, no 3, article id 82. Article in journal (Refereed). Published.
Abstract [en]

With increasing use of artificial intelligence (AI) in high-stakes contexts, a race for "trustworthy AI" is under way. However, Dorsch and Deroy (Philosophy & Technology 37, 62, 2024) recently argued that regardless of its feasibility, morally trustworthy AI is unnecessary: we should merely rely on rather than trust AI, and carefully calibrate our reliance using the reliability scores which are often available. This short commentary on Dorsch and Deroy engages with the claim that morally trustworthy AI is unnecessary and argues that since there are important limits to how well reliance can be calibrated using reliability scores, some residual roles for trustworthy AI (if feasible) remain possible.
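
To see what calibrated reliance amounts to, and where it runs out, consider an illustrative decision rule (assumed notation, not the commentary's apparatus): rely on an output with reliability score p if the expected value of doing so is positive, given benefit B when the output is correct and cost C when it is not.

\[
  pB - (1 - p)\,C > 0
  \quad\Longleftrightarrow\quad
  p > \frac{C}{B + C} .
\]
% Even a perfectly calibrated score only fixes the long-run frequency
% of errors; it cannot identify which individual outputs are wrong,
% and in practice p is itself estimated from finite and possibly
% shifted data. Limits of this kind are what leave residual room for
% trustworthy AI.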

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2024
National Category
Philosophy, Ethics and Religion
Identifiers
urn:nbn:se:ri:diva-74801 (URN); 10.1007/s13347-024-00771-7 (DOI); 2-s2.0-85197260283 (Scopus ID)
Available from: 2024-08-27. Created: 2024-08-27. Last updated: 2025-09-23. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2017-7914
