Publications (10 of 78)
Franke, U. (2025). How Do ML Students Explain Their Models and What Can We Learn from This? Paper presented at the 15th International Conference on Software Business, ICSOB 2024, 18-20 November 2024. Lecture Notes in Business Information Processing, 539 LNBIP, 351-365
2025 (English). In: Lecture Notes in Business Information Processing, ISSN 1865-1348, E-ISSN 1865-1356, Vol. 539 LNBIP, pp. 351-365. Article in journal (Peer reviewed). Published
Abstract [en]

In recent years, artificial intelligence (AI) has made great progress. However, despite impressive results, modern data-driven AI systems are often very difficult to understand, challenging their use in software business and prompting the emergence of the explainable AI (XAI) field. This paper explores how machine learning (ML) students explain their models and draws implications for practice from this. Data was collected from ML master students, who were given a two-part assignment. First, they developed a model predicting insurance claims based on an existing data set; then they received a request for explanation of insurance premiums in accordance with the GDPR right to meaningful information and had to come up with such an explanation. The students also peer-graded each other's explanations. Analyzing this data set and comparing it to responses from actual insurance firms from a previous study illustrates some potential pitfalls, such as a narrow technical focus and offering mere data dumps. There were also some promising directions, such as feature importance, graphics, and what-if scenarios, where software business practice could benefit from being inspired by the students. The paper concludes with a reflection on the importance of multiple kinds of expertise and team efforts for making the most of XAI in practice.

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2025
Keywords
Adversarial machine learning; Insurance; Artificial intelligence systems; Data driven; Data set; Explainable artificial intelligence; GDPR; Insurance claims; Learn+; Machine-learning; Master students; Software business; Students
HSV category
Identifiers
urn:nbn:se:ri:diva-78435 (URN), 10.1007/978-3-031-85849-9_28 (DOI), 2-s2.0-105001269309 (Scopus ID)
Conference
15th International Conference on Software Business, ICSOB 2024. 18 November 2024 - 20 November 2024
Note

This research was partially funded by the Swedish Competition Authority, grant no 456/2021.

Available from: 2025-09-16. Created: 2025-09-16. Last updated: 2025-09-23. Bibliographically checked.
Franke, U. & Orlando, A. (2025). Interdependent cyber risk and the role of insurers. Research in Economics, 79(3), Article ID 101059.
2025 (English). In: Research in Economics, ISSN 1090-9443, E-ISSN 1090-9451, Vol. 79, No. 3, article id 101059. Article in journal (Peer reviewed). Published
Abstract [en]

Increasing use of new digital services offers tremendous opportunities for modern society, but also entails new risks. One tool for managing cyber risk is cyber insurance. While cyber insurance has attracted much attention and optimism, interdependent cyber risks and a lack of actuarial data have prompted some insurers to adopt a more proactive role, not only insuring losses but also assisting clients with preventive work, such as managed detection and response solutions, i.e., investments in their own cybersecurity. The purpose of this paper is to propose and theoretically investigate a further extension of this role, in which insurers facilitate security investments between interdependent firms, which are given the opportunity to invest a share of their insurance premiums in improving each other's security. It is demonstrated that if insurers can facilitate such investments, then under common theoretical assumptions this can make a positive contribution to overall welfare. The paper concludes with a discussion of the relevance and applicability of this theoretical contribution in practice.

Place, publisher, year, edition, pages
Academic Press, 2025
HSV category
Identifiers
urn:nbn:se:ri:diva-78380 (URN), 10.1016/j.rie.2025.101059 (DOI), 2-s2.0-105001107893 (Scopus ID)
Note

The foundations of this paper were laid during U. Franke's two-week stay with A. Orlando at the Istituto per le Applicazioni del Calcolo in Naples in July 2022, supported by the Short Term Mobility Program (STM 2022) of the National Research Council of Italy (CNR), contract number 5446. A. Orlando was partially supported by the SERICS project (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union – NextGenerationEU.

Available from: 2025-06-09. Created: 2025-06-09. Last updated: 2025-09-23. Bibliographically checked.
Kianpour, M. & Franke, U. (2025). The use of simulations in economic cybersecurity decision-making. Journal of Cybersecurity, 11(1), Article ID tyaf003.
2025 (English). In: Journal of Cybersecurity, ISSN 2057-2093, Vol. 11, No. 1, article id tyaf003. Article in journal (Peer reviewed). Published
Abstract [en]

This paper presents an in-depth examination of the use of simulations in economic cybersecurity decision-making, highlighting the dual nature of their potential and the challenges they present. Drawing on examples from existing studies, we explore the role of simulations in generating new knowledge about probabilities and consequences in the cybersecurity domain, which is essential in understanding and managing risk and uncertainty. Additionally, we introduce the concepts of "bookkeeping" and "abstraction" within the context of simulations, discussing how they can sometimes fail and exploring the underlying reasons for their failures. This discussion leads us to suggest a framework of considerations for effectively utilizing simulations in cybersecurity. This framework is designed not as a rigid checklist but as a guide for critical thinking and evaluation, aiding users in assessing the suitability and reliability of a simulation model for a particular decision-making context. Future work should focus on applying this framework in real-world settings, continuously refining the use of simulations to ensure they remain effective and relevant in the dynamic field of cybersecurity.

Place, publisher, year, edition, pages
Oxford University Press, 2025
Keywords
Bias; Critical evaluation; Critical thinking; Cyber security; Decision making under uncertainty; Decision-making under risks; Decisions makings; Risks and uncertainties; Simulation; Simulation model; Digital elevation model
HSV category
Identifiers
urn:nbn:se:ri:diva-78020 (URN), 10.1093/cybsec/tyaf003 (DOI), 2-s2.0-85218089039 (Scopus ID)
Note

This work was supported by H2020-SU-DS02-2020, grant number 101020259 (M.K.), and the Swedish Foundation for Strategic Research, grant number SM22-0057 (U.F.).

Available from: 2025-09-25. Created: 2025-09-25. Last updated: 2025-09-25. Bibliographically checked.
Franke, U. (2024). Algorithmic Transparency, Manipulation, and Two Concepts of Liberty. Philosophy & Technology, 37, Article ID 22.
2024 (English). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, article id 22. Article in journal (Peer reviewed). Published
Abstract [en]

As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but rather as indifference to whether the information provided enhances decision-making by revealing reasons. This short commentary on Klenk uses Berlin's (1958) two concepts of liberty to further illuminate the concept of transparency as manipulation, finding alignment between positive liberty and the critical account.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Algorithmic transparency, manipulation, Isaiah Berlin
HSV category
Identifiers
urn:nbn:se:ri:diva-72335 (URN), 10.1007/s13347-024-00713-3 (DOI)
Note

Open access funding provided by RISE Research Institutes of Sweden. The author received no external funding for this work.

Available from: 2024-03-15. Created: 2024-03-15. Last updated: 2025-09-23. Bibliographically checked.
Franke, U. (2024). Att utveckla och implementera cybersäkerhetspolicy: Lärdomar från den finansiella sektorn. Statsvetenskaplig Tidskrift, 126(2), 251-272
2024 (Swedish). In: Statsvetenskaplig Tidskrift, ISSN 0039-0747, Vol. 126, No. 2, pp. 251-272. Article in journal (Peer reviewed). Published
Abstract [en]

Modern society is increasingly dependent on digital services, making their dependability a top priority. But while there is a consensus that cybersecurity is important, there is no corresponding agreement on the true extent of the problem, the most effective countermeasures, or the proper division of labor and responsibilities. This makes cybersecurity policy very difficult. This article addresses this issue based on observations and experiences from a period of guest research at the Swedish Financial Supervisory Authority (Finansinspektionen), which made it possible to study how cybersecurity policy is developed and implemented in the Swedish financial sector. Observations include policy implementation challenges related to squaring different roles and perspectives mandated by different laws, and to collaboration between independent government authorities, but also policy development challenges: How can the full range of perspectives and tools be included in cybersecurity policy development? As Sweden now revises its cybersecurity policy, this is a key issue.

HSV category
Identifiers
urn:nbn:se:ri:diva-74617 (URN)
Research funder
Swedish Foundation for Strategic Research, SM22-0057
Note

The present study was funded by the Swedish Foundation for Strategic Research (contract number SM22-0057).

Available from: 2024-08-05. Created: 2024-08-05. Last updated: 2025-09-23. Bibliographically checked.
Andreasson, A., Artman, H., Brynielsson, J. & Franke, U. (2024). Cybersecurity work at Swedish administrative authorities: taking action or waiting for approval. Cognition, Technology & Work, 26(4), 709
2024 (English). In: Cognition, Technology & Work, ISSN 1435-5558, E-ISSN 1435-5566, Vol. 26, No. 4, pp. 709-. Article in journal (Peer reviewed). Published
Abstract [en]

In recent years, the Swedish public sector has undergone rapid digitalization, while cybersecurity efforts have not kept pace. This study investigates conditions for cybersecurity work at Swedish administrative authorities by examining organizational conditions at the authorities, what cybersecurity staff do to acquire the cyber situation awareness required for their role, and what experience cybersecurity staff have with incidents. In this study, 17 semi-structured interviews were held with respondents from Swedish administrative authorities. The results showed the diverse conditions for cybersecurity work that exist at the authorities and that a variety of roles are involved in that work. It was found that national-level support for cybersecurity was perceived as somewhat lacking. There were also challenges in getting access to the information elements required for sufficient cyber situation awareness.

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2024
Keywords
Condition; Cyber situation awareness; Cyber security; National level; Organisational; Public sector; Security management; Semi structured interviews; Situation awareness; Swedishs
HSV category
Identifiers
urn:nbn:se:ri:diva-76041 (URN), 10.1007/s10111-024-00779-1 (DOI), 2-s2.0-85205049306 (Scopus ID)
Note

 This work was supported by the Swedish Armed Forces. 

Available from: 2024-10-31. Created: 2024-10-31. Last updated: 2025-09-23. Bibliographically checked.
Franke, U. (2024). Livspusslet: Rilke och Nozick. In: Katarina O'Nils Franke (Ed.), Rilke och filosoferna (pp. 79-86). Malmö: Ellerström förlag
2024 (Swedish). In: Rilke och filosoferna / [ed] Katarina O'Nils Franke, Malmö: Ellerström förlag, 2024, pp. 79-86. Chapter in book, part of anthology (Other (popular science, debate, etc.))
Place, publisher, year, edition, pages
Malmö: Ellerström förlag, 2024
Keywords
Rainer Maria Rilke, Robert Nozick
HSV category
Identifiers
urn:nbn:se:ri:diva-74618 (URN), 9789172477308 (ISBN)
Available from: 2024-08-05. Created: 2024-08-05. Last updated: 2025-09-23. Bibliographically checked.
Franke, U. (2024). Rawlsian Algorithmic Fairness and a Missing Aggregation Property of the Difference Principle. Philosophy & Technology, 37(3), Article ID 87.
2024 (English). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, No. 3, article id 87. Article in journal (Peer reviewed). Published
Abstract [en]

Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. Based on complications with this approach identified in the literature, this article discusses how Rawls’s theory in general, and especially the difference principle, should reasonably be applied to algorithmic fairness decisions. It is observed that proposals to achieve Rawlsian algorithmic fairness often aim to uphold the difference principle in the individual situations where automated decision-making occurs. However, the Rawlsian difference principle applies to society at large and does not aggregate in such a way that upholding it in constituent situations also upholds it in the aggregate. But such aggregation is a hidden premise of many proposals in the literature and its falsity explains many complications encountered. 

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2024
HSV category
Identifiers
urn:nbn:se:ri:diva-74644 (URN), 10.1007/s13347-024-00779-z (DOI), 2-s2.0-85198326990 (Scopus ID)
Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2025-09-23. Bibliographically checked.
Franke, U., Hallstrom, C. H., Artman, H. & Dexe, J. (2024). Requirements on and Procurement of Explainable Algorithms - A Systematic Review of the Literature. In: New Trends in Disruptive Technologies, Tech Ethics, and Artificial Intelligence, DITTET 2024. Paper presented at the 4th International Conference on Disruptive Technologies, Tech Ethics and Artificial Intelligence (DITTET), Salamanca, Spain, July 3-5, 2024 (pp. 40-52). Springer International Publishing AG, 1459
2024 (English). In: New Trends in Disruptive Technologies, Tech Ethics, and Artificial Intelligence, DITTET 2024, Springer International Publishing AG, 2024, Vol. 1459, pp. 40-52. Conference paper, published paper (Peer reviewed)
Abstract [en]

Artificial intelligence is making progress, enabling automation of tasks previously the privilege of humans. This brings many benefits but also entails challenges, in particular with respect to ‘black box’ machine learning algorithms. Therefore, questions of transparency and explainability in these systems receive much attention. However, most organizations do not build their software from scratch, but rather procure it from others. Thus, it becomes imperative to consider not only requirements on but also procurement of explainable algorithms and decision support systems. This article offers a first systematic literature review of this area. Following construction of appropriate search queries, 503 unique items from Scopus, ACM Digital Library, and IEEE Xplore were screened for relevance. 37 items remained in the final analysis. An overview and a synthesis of the literature is offered, and it is concluded that more research is needed, in particular on procurement, human-computer interaction aspects, and different purposes of explainability.

Place, publisher, year, edition, pages
Springer International Publishing AG, 2024
Series
Advances in Intelligent Systems and Computing
Keywords
Requirements; Procurement; Explainable Artificial Intelligence (XAI); Transparency; Explainability
HSV category
Identifiers
urn:nbn:se:ri:diva-77406 (URN), 10.1007/978-3-031-66635-3_4 (DOI)
Conference
4th International Conference on Disruptive Technologies, Tech Ethics and Artificial Intelligence (DITTET), Salamanca, Spain, July 3-5, 2024
Available from: 2025-02-12. Created: 2025-02-12. Last updated: 2025-09-23. Bibliographically checked.
Franke, U. (2024). The Limits of Calibration and the Possibility of Roles for Trustworthy AI. Philosophy & Technology, 37(3), Article ID 82.
2024 (English). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, No. 3, article id 82. Article in journal (Peer reviewed). Published
Abstract [en]

With increasing use of artificial intelligence (AI) in high-stakes contexts, a race for “trustworthy AI” is under way. However, Dorsch and Deroy (Philosophy & Technology 37, 62, 2024) recently argued that regardless of its feasibility, morally trustworthy AI is unnecessary: We should merely rely on rather than trust AI, and carefully calibrate our reliance using the reliability scores which are often available. This short commentary on Dorsch and Deroy engages with the claim that morally trustworthy AI is unnecessary and argues that since there are important limits to how good calibration based on reliability scores can be, some residual roles for trustworthy AI (if feasible) are still possible. 

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2024
HSV category
Identifiers
urn:nbn:se:ri:diva-74801 (URN), 10.1007/s13347-024-00771-7 (DOI), 2-s2.0-85197260283 (Scopus ID)
Available from: 2024-08-27. Created: 2024-08-27. Last updated: 2025-09-23. Bibliographically checked.
Identifiers
ORCID iD: orcid.org/0000-0003-2017-7914