Publications (10 of 75)
Franke, U. (2024). Algorithmic Transparency, Manipulation, and Two Concepts of Liberty. Philosophy & Technology, 37, Article ID 22.
2024 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, article id 22. Article in journal (Refereed). Published
Abstract [en]

As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology, 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but rather as indifference to whether the information provided enhances decision-making by revealing reasons. This short commentary on Klenk uses Berlin’s (1958) two concepts of liberty to further illuminate the concept of transparency as manipulation, finding alignment between positive liberty and the critical account.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Algorithmic transparency, manipulation, Isaiah Berlin
National Category
Philosophy; Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-72335 (URN), 10.1007/s13347-024-00713-3 (DOI)
Note

Open access funding provided by RISE Research Institutes of Sweden. The author received no external funding for this work.

Available from: 2024-03-15 Created: 2024-03-15 Last updated: 2024-03-18. Bibliographically approved
Franke, U. (2024). Att utveckla och implementera cybersäkerhetspolicy: Lärdomar från den finansiella sektorn. Statsvetenskaplig Tidskrift, 126(2), 251-272
2024 (Swedish) In: Statsvetenskaplig Tidskrift, ISSN 0039-0747, Vol. 126, no 2, p. 251-272. Article in journal (Refereed). Published
Abstract [en]

Modern society is increasingly dependent on digital services, making their dependability a top priority. But while there is a consensus that cybersecurity is important, there is no corresponding agreement on the true extent of the problem, the most effective countermeasures, or the proper division of labor and responsibilities. This makes cybersecurity policy very difficult. This article addresses this issue based on observations and experiences from a period of guest research at the Swedish Financial Supervisory Authority (Finansinspektionen), which made it possible to study how cybersecurity policy is developed and implemented in the Swedish financial sector. Observations include policy implementation challenges related to squaring different roles and perspectives mandated by different laws, and to collaboration between independent government authorities, but also policy development challenges: How can the full range of perspectives and tools be included in cybersecurity policy development? As Sweden now revises its cybersecurity policy, this is a key issue.

National Category
Computer and Information Sciences; Political Science
Identifiers
urn:nbn:se:ri:diva-74617 (URN)
Funder
Swedish Foundation for Strategic Research, SM22-0057
Note

This study was funded by Stiftelsen för Strategisk Forskning, the Swedish Foundation for Strategic Research (grant agreement no. SM22-0057).

Available from: 2024-08-05 Created: 2024-08-05 Last updated: 2024-08-29. Bibliographically approved
Andreasson, A., Artman, H., Brynielsson, J. & Franke, U. (2024). Cybersecurity work at Swedish administrative authorities: taking action or waiting for approval. Cognition, Technology & Work
2024 (English) In: Cognition, Technology & Work, ISSN 1435-5558, E-ISSN 1435-5566. Article in journal (Refereed). Epub ahead of print
Abstract [en]

In recent years, the Swedish public sector has undergone rapid digitalization, while cybersecurity efforts have not kept pace. This study investigates conditions for cybersecurity work at Swedish administrative authorities by examining organizational conditions at the authorities, what cybersecurity staff do to acquire the cyber situation awareness required for their role, and what experience cybersecurity staff have with incidents. In this study, 17 semi-structured interviews were held with respondents from Swedish administrative authorities. The results showed the diverse conditions for cybersecurity work that exist at the authorities and that a variety of roles are involved in that work. It was found that national-level support for cybersecurity was perceived as somewhat lacking. There were also challenges in getting access to the information elements required for sufficient cyber situation awareness.

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2024
Keywords
Condition; Cyber situation awareness; Cyber security; National level; Organisational; Public sector; Security management; Semi structured interviews; Situation awareness; Swedish
National Category
Information Systems
Identifiers
urn:nbn:se:ri:diva-76041 (URN), 10.1007/s10111-024-00779-1 (DOI), 2-s2.0-85205049306 (Scopus ID)
Note

 This work was supported by the Swedish Armed Forces. 

Available from: 2024-10-31 Created: 2024-10-31 Last updated: 2024-11-01. Bibliographically approved
Franke, U. (2024). Livspusslet: Rilke och Nozick. In: Katarina O'Nils Franke (Ed.), Rilke och filosoferna: (pp. 79-86). Malmö: Ellerström förlag
2024 (Swedish) In: Rilke och filosoferna / [ed] Katarina O'Nils Franke, Malmö: Ellerström förlag, 2024, p. 79-86. Chapter in book (Other (popular science, discussion, etc.))
Place, publisher, year, edition, pages
Malmö: Ellerström förlag, 2024
Keywords
Rainer Maria Rilke, Robert Nozick
National Category
Philosophy
Identifiers
urn:nbn:se:ri:diva-74618 (URN), 9789172477308 (ISBN)
Available from: 2024-08-05 Created: 2024-08-05 Last updated: 2024-08-05. Bibliographically approved
Franke, U. (2024). Rawlsian Algorithmic Fairness and a Missing Aggregation Property of the Difference Principle. Philosophy & Technology, 37(3), Article ID 87.
2024 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, no 3, article id 87. Article in journal (Refereed). Published
Abstract [en]

Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. Based on complications with this approach identified in the literature, this article discusses how Rawls’s theory in general, and especially the difference principle, should reasonably be applied to algorithmic fairness decisions. It is observed that proposals to achieve Rawlsian algorithmic fairness often aim to uphold the difference principle in the individual situations where automated decision-making occurs. However, the Rawlsian difference principle applies to society at large and does not aggregate in such a way that upholding it in constituent situations also upholds it in the aggregate. But such aggregation is a hidden premise of many proposals in the literature and its falsity explains many complications encountered. 
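
The failure of aggregation can be made concrete with a toy example of my own (the individuals X and Y and all payoffs below are hypothetical, not taken from the article): in the Python sketch that follows, the maximin choice is made in each of two separate decision situations, yet the resulting aggregate leaves the worst-off individual worse off than an alternative combination of options would.

```python
# Toy illustration (not from the article): choosing the maximin option in each
# local decision need not maximize the position of the worst-off in the
# aggregate. Individuals X and Y and all payoffs are made up.
from itertools import product

decision_1 = {"a1": {"X": 2, "Y": 5}, "b1": {"X": 3, "Y": 3}}
decision_2 = {"a2": {"X": 5, "Y": 2}, "b2": {"X": 3, "Y": 3}}

def worst_off(payoffs):
    """Payoff of the worst-off individual."""
    return min(payoffs.values())

def aggregate(p1, p2):
    """Sum the two decisions' payoffs per individual."""
    return {person: p1[person] + p2[person] for person in p1}

# Maximin choice within each decision situation, taken separately.
local_1 = max(decision_1, key=lambda opt: worst_off(decision_1[opt]))  # -> "b1"
local_2 = max(decision_2, key=lambda opt: worst_off(decision_2[opt]))  # -> "b2"
local_outcome = aggregate(decision_1[local_1], decision_2[local_2])

# Maximin over the aggregate outcomes of all option combinations.
best_combo = max(
    product(decision_1, decision_2),
    key=lambda c: worst_off(aggregate(decision_1[c[0]], decision_2[c[1]])),
)

print(local_1, local_2, worst_off(local_outcome))  # b1 b2 6
print(best_combo, worst_off(aggregate(decision_1[best_combo[0]],
                                      decision_2[best_combo[1]])))  # ('a1', 'a2') 7
```

In this toy case the locally maximin choices give the aggregate worst-off individual a payoff of 6, while the combination rejected in both local decisions would give 7.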

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2024
National Category
Philosophy, Ethics and Religion
Identifiers
urn:nbn:se:ri:diva-74644 (URN), 10.1007/s13347-024-00779-z (DOI), 2-s2.0-85198326990 (Scopus ID)
Available from: 2024-08-07 Created: 2024-08-07 Last updated: 2024-08-07. Bibliographically approved
Franke, U., Hallstrom, C. H., Artman, H. & Dexe, J. (2024). Requirements on and Procurement of Explainable Algorithms – A Systematic Review of the Literature. In: New Trends in Disruptive Technologies, Tech Ethics, and Artificial Intelligence, DITTET 2024. Paper presented at the 4th International Conference on Disruptive Technologies, Tech Ethics and Artificial Intelligence (DITTET), Salamanca, Spain, July 3-5, 2024 (pp. 40-52). Springer International Publishing AG, 1459
2024 (English) In: New Trends in Disruptive Technologies, Tech Ethics, and Artificial Intelligence, DITTET 2024, Springer International Publishing AG, 2024, Vol. 1459, p. 40-52. Conference paper, Published paper (Refereed)
Abstract [en]

Artificial intelligence is making progress, enabling automation of tasks previously the privilege of humans. This brings many benefits but also entails challenges, in particular with respect to ‘black box’ machine learning algorithms. Therefore, questions of transparency and explainability in these systems receive much attention. However, most organizations do not build their software from scratch, but rather procure it from others. Thus, it becomes imperative to consider not only requirements on but also procurement of explainable algorithms and decision support systems. This article offers a first systematic literature review of this area. Following construction of appropriate search queries, 503 unique items from Scopus, ACM Digital Library, and IEEE Xplore were screened for relevance, and 37 items remained in the final analysis. An overview and a synthesis of the literature are offered, and it is concluded that more research is needed, in particular on procurement, human-computer interaction aspects, and different purposes of explainability.

Place, publisher, year, edition, pages
Springer International Publishing AG, 2024
Series
Advances in Intelligent Systems and Computing
Keywords
Requirements; Procurement; Explainable Artificial Intelligence (XAI); Transparency; Explainability
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-77406 (URN), 10.1007/978-3-031-66635-3_4 (DOI)
Conference
4th International Conference on Disruptive Technologies, Tech Ethics and Artificial Intelligence (DITTET), Salamanca, Spain, July 3-5, 2024
Available from: 2025-02-12 Created: 2025-02-12 Last updated: 2025-02-12. Bibliographically approved
Franke, U. (2024). The Limits of Calibration and the Possibility of Roles for Trustworthy AI. Philosophy & Technology, 37(3), Article ID 82.
2024 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, no 3, article id 82. Article in journal (Refereed). Published
Abstract [en]

With increasing use of artificial intelligence (AI) in high-stakes contexts, a race for “trustworthy AI” is under way. However, Dorsch and Deroy (Philosophy & Technology 37, 62, 2024) recently argued that regardless of its feasibility, morally trustworthy AI is unnecessary: We should merely rely on rather than trust AI, and carefully calibrate our reliance using the reliability scores which are often available. This short commentary on Dorsch and Deroy engages with the claim that morally trustworthy AI is unnecessary and argues that since there are important limits to how good calibration based on reliability scores can be, some residual roles for trustworthy AI (if feasible) are still possible. 
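
For readers unfamiliar with the underlying notion, the sketch below illustrates one standard, narrow sense of calibration: checking whether a system's reported reliability scores match its observed accuracy, bin by bin. This is my own generic illustration with simulated data, not the method of the commentary or of Dorsch and Deroy.

```python
# A minimal sketch (my own illustration, simulated data) of checking whether
# reported reliability scores match observed accuracy, plus the expected
# calibration error (ECE) that summarizes the mismatch.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(0.5, 1.0, size=10_000)        # reported reliability scores
# Simulate an overconfident system: true accuracy lags the reported score by ~0.1.
correct = rng.uniform(size=scores.size) < (scores - 0.1)

bins = np.linspace(0.5, 1.0, 6)                    # five equal-width score bins
idx = np.clip(np.digitize(scores, bins) - 1, 0, len(bins) - 2)

ece = 0.0
for b in range(len(bins) - 1):
    mask = idx == b
    if mask.any():
        gap = abs(correct[mask].mean() - scores[mask].mean())
        ece += mask.mean() * gap                   # weight gap by bin frequency
        print(f"bin {bins[b]:.1f}-{bins[b+1]:.1f}: "
              f"reported {scores[mask].mean():.3f}, observed {correct[mask].mean():.3f}")
print(f"expected calibration error: {ece:.3f}")    # roughly 0.1 for this toy system
```

In this simulation the system reports scores about 0.1 higher than its actual accuracy, and the expected calibration error recovers roughly that gap.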

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2024
National Category
Philosophy, Ethics and Religion
Identifiers
urn:nbn:se:ri:diva-74801 (URN), 10.1007/s13347-024-00771-7 (DOI), 2-s2.0-85197260283 (Scopus ID)
Available from: 2024-08-27 Created: 2024-08-27 Last updated: 2024-08-27. Bibliographically approved
Franke, U. (2024). Two Metaverse Dystopias. Res Publica
2024 (English) In: Res Publica, ISSN 1356-4765, E-ISSN 1572-8692. Article in journal (Refereed). Epub ahead of print
Abstract [en]

In recent years, the metaverse—some form of immersive digital extension of the physical world—has received much attention. As tech companies present their bold visions, scientists and scholars have also turned to metaverse issues, from technological challenges via societal implications to profound philosophical questions. This article contributes to this growing literature by identifying the possibilities of two dystopian metaverse scenarios, namely one based on the experience machine and one based on demoktesis—two concepts from Nozick (Anarchy, State, and Utopia, Basic Books, 1974). These dystopian scenarios are introduced, and the potential for a metaverse to evolve into either of them is explained. The article concludes with an argument for why the two dystopian scenarios are not strongly wedded to any particular theory of ethics or political philosophy, but constitute a more general contribution.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Metaverse, Dystopia, Experience machine, Demoktesis, "Anarchy, State, and Utopia"
National Category
Philosophy; Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-72334 (URN), 10.1007/s11158-024-09655-1 (DOI), 2-s2.0-85186547321 (Scopus ID)
Note

 Open access funding provided by RISE Research Institutes of Sweden. The author received no external funding for this work.

Available from: 2024-03-15 Created: 2024-03-15 Last updated: 2024-05-23. Bibliographically approved
Franke, U. (2023). Algorithmic Fairness, Risk, and the Dominant Protective Agency. Philosophy & Technology, 36, Article ID 76.
2023 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 36, article id 76. Article in journal (Refereed). Published
Abstract [en]

With increasing use of automated algorithmic decision-making, issues of algorithmic fairness have attracted much attention lately. In this growing literature, existing concepts from ethics and political philosophy are often applied to new contexts. The reverse—that novel insights from the algorithmic fairness literature are fed back into ethics and political philosophy—is far less established. However, this short commentary on Baumann and Loi (Philosophy & Technology, 36(3), 45, 2023) aims to do precisely this. Baumann and Loi argue that among the algorithmic group fairness measures proposed, one, namely sufficiency (well-calibration), is morally defensible for insurers to use, whereas independence (statistical parity or demographic parity) and separation (equalized odds) are not normatively appropriate in the insurance context. Such a result may seem to be of relatively narrow interest to insurers and insurance scholars only. We argue, however, that arguments such as that offered by Baumann and Loi have an important but so far overlooked connection to the derivation of the minimal state offered by Nozick (1974) and thus to political philosophy at large.
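
The three group fairness criteria named above have standard statistical formulations: independence compares positive decision rates across groups, separation compares true and false positive rates, and sufficiency compares outcome rates conditional on the decision. The sketch below computes them on made-up toy data; it is my own illustration and is not taken from Baumann and Loi or the commentary.

```python
# A small sketch (my own illustration, toy data) of the three group-fairness
# criteria: independence (demographic parity), separation (equalized odds),
# and sufficiency (calibration conditional on the decision).
import numpy as np

group = np.array(["A"] * 6 + ["B"] * 6)
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0])   # actual outcomes
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0])   # binary decisions

def rate(mask):
    """Fraction of positive decisions among the selected cases."""
    return y_pred[mask].mean() if mask.any() else float("nan")

for g in ("A", "B"):
    in_g = group == g
    # Independence: P(decision=1 | group) should be equal across groups.
    print(g, "positive rate:", round(rate(in_g), 2))
    # Separation: P(decision=1 | Y=1, group) and P(decision=1 | Y=0, group).
    print(g, "TPR:", round(rate(in_g & (y_true == 1)), 2),
          "FPR:", round(rate(in_g & (y_true == 0)), 2))
    # Sufficiency: P(Y=1 | decision, group) should be equal across groups.
    for d in (1, 0):
        sel = in_g & (y_pred == d)
        ppv = y_true[sel].mean() if sel.any() else float("nan")
        print(g, f"P(Y=1 | decision={d}):", round(ppv, 2))
```

With these toy numbers, group B receives positive decisions more often and at a higher false positive rate than group A, so independence and separation are violated while the sufficiency-style rates happen to coincide.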

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2023
Keywords
Algorithmic fairness, Insurance, Risk, "Anarchy, state, and utopia"
National Category
Philosophy; Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-68577 (URN), 10.1007/s13347-023-00684-x (DOI)
Available from: 2023-12-13 Created: 2023-12-13 Last updated: 2024-01-31. Bibliographically approved
Barreto, C., Reinert, O., Wiesinger, T. & Franke, U. (2023). Duopoly insurers’ incentives for data quality under a mandatory cyber data sharing regime. Computers & Security, 131, Article ID 103292.
2023 (English) In: Computers & Security, ISSN 0167-4048, E-ISSN 1872-6208, Vol. 131, article id 103292. Article in journal (Refereed). Published
Abstract [en]

We study the impact of data sharing policies on cyber insurance markets. These policies have been proposed to address the scarcity of data about cyber threats, which is essential to manage cyber risks. We propose a Cournot duopoly competition model in which two insurers choose the number of policies they offer (i.e., their production level) and also the resources they invest to ensure the quality of data regarding the cost of claims (i.e., the data quality of their production cost). We find that enacting mandatory data sharing sometimes creates situations in which at most one of the two insurers invests in data quality, whereas both insurers would invest when information sharing is not mandatory. This raises concerns about the merits of making data sharing mandatory. 
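
As background, the sketch below shows a generic textbook Cournot duopoly with linear inverse demand and asymmetric marginal costs, solved by best-response iteration. It is my own illustration of the model class; it does not include the paper's data-quality investment decision, and the demand and cost parameters are hypothetical.

```python
# A minimal Cournot-duopoly sketch (generic textbook model, not the paper's
# actual model): two insurers choose how many policies to offer, and
# best-response iteration converges to the Cournot equilibrium.
a, b = 100.0, 1.0        # inverse demand: price = a - b * (q1 + q2)  (hypothetical)
c1, c2 = 10.0, 14.0      # marginal costs, e.g. expected claim costs (hypothetical)

def best_response(c_own, q_other):
    """Profit-maximizing quantity given the rival's quantity (interior solution)."""
    return max((a - c_own - b * q_other) / (2 * b), 0.0)

q1 = q2 = 0.0
for _ in range(100):                                  # fixed-point iteration
    q1, q2 = best_response(c1, q2), best_response(c2, q1)

price = a - b * (q1 + q2)
print(round(q1, 2), round(q2, 2), round(price, 2))    # 31.33 27.33 41.33
# Analytic Cournot equilibrium for comparison:
# q1* = (a - 2*c1 + c2) / (3*b),  q2* = (a - 2*c2 + c1) / (3*b)
```

With these hypothetical parameters, the iteration converges to the analytic Cournot quantities of about 31.3 and 27.3 policies; the lower-cost insurer offers more policies in equilibrium.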

Place, publisher, year, edition, pages
Elsevier Ltd, 2023
Keywords
Cournot model, Cyber insurance, Cyber risk, Data quality, Data sharing, Data reduction, Information analysis, Insurance, Competition modeling, Cournot duopoly, Cyber threats, Insurance markets, Production level, Competition
National Category
Economics
Identifiers
urn:nbn:se:ri:diva-64932 (URN), 10.1016/j.cose.2023.103292 (DOI), 2-s2.0-85160592819 (Scopus ID)
Note

This research was supported by Länsförsäkringar (O. Reinert & T. Wiesinger), the Swedish Foundation for Strategic Research, grant no. SM19-0009 (U. Franke), and Digital Futures (U. Franke & C. Barreto).

Available from: 2023-06-12 Created: 2023-06-12 Last updated: 2023-06-12. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-2017-7914
