Publications (10 of 67)
Franke, U. (2023). Algorithmic Fairness, Risk, and the Dominant Protective Agency. Philosophy & Technology, 36, Article ID 76.
2023 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 36, article id 76. Article in journal (Refereed). Published
Abstract [en]

With the increasing use of automated algorithmic decision-making, issues of algorithmic fairness have attracted much attention lately. In this growing literature, existing concepts from ethics and political philosophy are often applied to new contexts. The reverse—that novel insights from the algorithmic fairness literature are fed back into ethics and political philosophy—is far less established. However, this short commentary on Baumann and Loi (Philosophy & Technology, 36(3), 45, 2023) aims to do precisely this. Baumann and Loi argue that among the proposed algorithmic group fairness measures, one, sufficiency (well-calibration), is morally defensible for insurers to use, whereas independence (statistical parity or demographic parity) and separation (equalized odds) are not normatively appropriate in the insurance context. Such a result may seem to be of relatively narrow interest to insurers and insurance scholars only. We argue, however, that arguments such as the one offered by Baumann and Loi have an important but so far overlooked connection to the derivation of the minimal state offered by Nozick (1974), and thus to political philosophy at large.
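The three group fairness criteria named in the abstract can be sketched on toy data. This is an illustration under standard textbook definitions, not code from the paper; the function name, the groups, and the data are all hypothetical.

```python
from collections import defaultdict

# Toy records of the form (group, prediction, outcome); all data hypothetical.
data = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 1), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]

def group_rates(records, condition, target):
    """Estimate P(target = 1 | condition, group) for each group."""
    num, den = defaultdict(int), defaultdict(int)
    for group, pred, out in records:
        vals = {"prediction": pred, "outcome": out}
        if all(vals[k] == v for k, v in condition.items()):
            den[group] += 1
            num[group] += vals[target]
    return {g: num[g] / den[g] for g in den}

# Independence (statistical parity): P(pred = 1) equal across groups.
independence = group_rates(data, {}, "prediction")
# Separation (one half of equalized odds): P(pred = 1 | outcome = 1) per group.
separation = group_rates(data, {"outcome": 1}, "prediction")
# Sufficiency (calibration): P(outcome = 1 | pred = 1) per group.
sufficiency = group_rates(data, {"prediction": 1}, "outcome")
```

On this toy data, independence and sufficiency hold exactly while separation differs between the groups, showing how the three criteria can come apart on the same predictions.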

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2023
Keywords
Algorithmic fairness, Insurance, Risk, "Anarchy, state, and utopia"
National Category
Philosophy; Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-68577 (URN), 10.1007/s13347-023-00684-x (DOI)
Available from: 2023-12-13. Created: 2023-12-13. Last updated: 2024-01-31. Bibliographically approved
Barreto, C., Reinert, O., Wiesinger, T. & Franke, U. (2023). Duopoly insurers’ incentives for data quality under a mandatory cyber data sharing regime. Computers & security (Print), 131, Article ID 103292.
2023 (English) In: Computers & security (Print), ISSN 0167-4048, E-ISSN 1872-6208, Vol. 131, article id 103292. Article in journal (Refereed). Published
Abstract [en]

We study the impact of data sharing policies on cyber insurance markets. These policies have been proposed to address the scarcity of data about cyber threats, which is essential to manage cyber risks. We propose a Cournot duopoly competition model in which two insurers choose the number of policies they offer (i.e., their production level) and also the resources they invest to ensure the quality of data regarding the cost of claims (i.e., the data quality of their production cost). We find that enacting mandatory data sharing sometimes creates situations in which at most one of the two insurers invests in data quality, whereas both insurers would invest when information sharing is not mandatory. This raises concerns about the merits of making data sharing mandatory. 
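The Cournot setup described in the abstract can be illustrated with a minimal sketch. This is a hypothetical symmetric textbook version (linear inverse demand, constant marginal cost) and omits the paper's data-quality investment dimension; parameters and function names are assumptions for illustration.

```python
# Inverse demand P = a - b*(q1 + q2); constant marginal cost c for both insurers.

def best_response(q_other, a, b, c):
    """argmax_q of profit (a - b*(q + q_other) - c) * q, i.e. (a - c - b*q_other) / (2b)."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

def cournot_equilibrium(a, b, c, iters=100):
    """Iterate best responses; converges to the Nash quantities (a - c) / (3b)."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = best_response(q2, a, b, c)
        q2 = best_response(q1, a, b, c)
    return q1, q2

q1, q2 = cournot_equilibrium(a=10.0, b=1.0, c=1.0)  # each quantity converges to 3.0
```

Best-response iteration is a contraction here, so it reaches the closed-form equilibrium (a - c) / (3b) quickly; the paper's asymmetric data-quality choices would change the cost terms in each insurer's best response.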

Place, publisher, year, edition, pages
Elsevier Ltd, 2023
Keywords
Cournot model, Cyber insurance, Cyber risk, Data quality, Data sharing, Data reduction, Information analysis, Insurance, Competition modeling, Cournot duopoly, Cyber threats, Insurance markets, Production level, Competition
National Category
Economics
Identifiers
urn:nbn:se:ri:diva-64932 (URN), 10.1016/j.cose.2023.103292 (DOI), 2-s2.0-85160592819 (Scopus ID)
Note

Funding details: Stiftelsen för Strategisk Forskning, SSF, SM19-0009; Funding text 1: This research was supported by Länsförsäkringar (O. Reinert & T. Wiesinger), the Swedish Foundation for Strategic Research, grant no. SM19-0009 (U. Franke), and Digital Futures (U. Franke & C. Barreto). 

Available from: 2023-06-12. Created: 2023-06-12. Last updated: 2023-06-12. Bibliographically approved
Franke, U. (2023). En oavslutad dikt om ett oavslutat uppror. Slovo: Journal of Slavic Languages, Literatures and Cultures, 63, 64-73
2023 (Swedish) In: Slovo: Journal of Slavic Languages, Literatures and Cultures, E-ISSN 2001-7359, Vol. 63, p. 64-73. Article in journal (Other (popular science, discussion, etc.)). Published
Abstract [en]

The legendary Russian literary critic Belinsky famously described Pushkin’s novel in verse Eugene Onegin as an encyclopedia of Russian life. However, this encyclopedia seems seriously incomplete in that it largely leaves out elements of oppression, war, and insurrection. There are many valid explanations for this, but one, very blunt and prosaic, is that oppression and censorship actually worked – that it is absent in the fiction because it was present in reality. As a case in point, this article presents a novel translation into Swedish, with rhymes and meter preserved, of the fragments remaining of the unfinished tenth chapter of Eugene Onegin. This tenth chapter deals with the failed Decembrist uprising of 1825 and the misrule precipitating it, so it is not surprising that it could not be published at the time it was written. Though well known in the academic community, this fragment is rarely published in foreign translations, and as far as is known, this is the first translation into a Scandinavian language. The article offers some commentary on the translation and concludes with a few remarks on the value of reading the classics even in times of turmoil.

Place, publisher, year, edition, pages
Uppsala universitet, 2023
National Category
Languages and Literature
Identifiers
urn:nbn:se:ri:diva-69380 (URN)
Available from: 2024-01-15. Created: 2024-01-15. Last updated: 2024-01-16. Bibliographically approved
Besker, T., Franke, U. & Axelsson, J. (2023). Navigating the Cyber-Security Risks and Economics of System-of-Systems. In: 2023 18th Annual System of Systems Engineering Conference, SoSe 2023. Paper presented at the 2023 18th Annual System of Systems Engineering Conference, SoSe 2023, Lille, France, 14-16 June 2023. Institute of Electrical and Electronics Engineers Inc.
2023 (English) In: 2023 18th Annual System of Systems Engineering Conference, SoSe 2023, Institute of Electrical and Electronics Engineers Inc., 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Cybersecurity is an important concern in systems-of-systems (SoS), where the effects of cyber incidents, whether deliberate attacks or unintentional mistakes, can propagate from an individual constituent system (CS) throughout the entire SoS. Unfortunately, the security of an SoS cannot be guaranteed by separately addressing the security of each CS. Security must also be addressed at the SoS level. This paper reviews some of the most prominent cybersecurity risks within the SoS research field and combines this with the cyber and information security economics perspective. This sets the scene for a structured assessment of how various cyber risks can be addressed in different SoS architectures. More precisely, the paper discusses the effectiveness and appropriateness of five cybersecurity policy options in each of the four assessed SoS archetypes and concludes that cybersecurity risks should be addressed using both traditional design-focused and more novel policy-oriented tools. 

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023
Keywords
Cyber Security Investment, Cybersecurity, Economics, Incentives, System-of-Systems, Risk assessment, System of systems, Cyber security, Entire system, Incentive, Securities investments, Security Economics, Security risks, System levels
National Category
Computer Systems
Identifiers
urn:nbn:se:ri:diva-65973 (URN), 10.1109/SoSE59841.2023.10178677 (DOI), 2-s2.0-85166741236 (Scopus ID), 9798350327236 (ISBN)
Conference
2023 18th Annual System of Systems Engineering Conference, SoSe 2023, Lille, France, 14-16 June 2023
Available from: 2023-08-24. Created: 2023-08-24. Last updated: 2023-08-24. Bibliographically approved
Franke, U. (2022). Algorithmic Political Bias—an Entrenchment Concern. Philosophy & Technology, 35(3), Article ID 75.
2022 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no 3, article id 75. Article in journal (Refereed). Published
Abstract [en]

This short commentary on Peters (Philosophy & Technology 35, 2022) identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan (2016). Second, following Hacking (1999), the social construction of political identities is analyzed and it is concluded that algorithmic political bias can contribute to such identities. Third, following Nozick (1989), it is argued that purist political positions may stand in the way of all the worthy values and goals to be pursued in the political realm, and that to the extent that algorithmic political bias entrenches political positions, it also hinders this healthy “zigzag of politics”.

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2022
National Category
Computer Sciences
Identifiers
urn:nbn:se:ri:diva-59896 (URN), 10.1007/s13347-022-00562-y (DOI), 2-s2.0-85135247844 (Scopus ID)
Available from: 2022-08-11. Created: 2022-08-11. Last updated: 2023-06-08. Bibliographically approved
Franke, U., Andreasson, A., Artman, H., Brynielsson, J., Varga, S. & Vilhelm, N. (2022). Cyber situational awareness issues and challenges. In: Cybersecurity and Cognitive Science (pp. 235-265). Elsevier
2022 (English) In: Cybersecurity and Cognitive Science, Elsevier, 2022, p. 235-265. Chapter in book (Other academic)
Abstract [en]

Today, most enterprises are increasingly reliant on information technology to carry out their operations. This also entails an increasing need for cyber situational awareness—roughly, knowing what is going on in the cyber domain, and thus being able to respond adequately to events such as attacks or accidents. This chapter argues that cyber situational awareness is best understood by combining three complementary points of view: the technological, the socio-cognitive, and the organizational perspectives. In addition, the chapter investigates the prospects for reasoning about adversarial actions. This part also reports on a small empirical investigation in which participants in the Locked Shields cyber defense exercise were interviewed about their information needs with respect to threat actors. The chapter concludes with a discussion of important challenges to be addressed, along with suggestions for further research.

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Adversarial behavior, Cognition, Cyber situational awareness, Organization, Technology
National Category
Clinical Medicine
Identifiers
urn:nbn:se:ri:diva-60271 (URN), 10.1016/B978-0-323-90570-1.00015-2 (DOI), 2-s2.0-85137911650 (Scopus ID), 9780323905701 (ISBN), 9780323906968 (ISBN)
Available from: 2022-10-10. Created: 2022-10-10. Last updated: 2023-06-08. Bibliographically approved
Franke, U. (2022). Dags för cybersäkerhetsekonomi. Ekonomisk Debatt, 50(2), 71-74
2022 (Swedish) In: Ekonomisk Debatt, ISSN 0345-2646, Vol. 50, no 2, p. 71-74. Article in journal (Other academic). Published
National Category
Economics; Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-58832 (URN)
Available from: 2022-03-18. Created: 2022-03-18. Last updated: 2023-06-08. Bibliographically approved
Dexe, J., Franke, U., Söderlund, K., van Berkel, N., Jensen, R. H., Lepinkäinen, N. & Vaiste, J. (2022). Explaining automated decision-making: a multinational study of the GDPR right to meaningful information. Geneva papers on risk and insurance. Issues and practice, 47(3), 669-697
2022 (English) In: Geneva papers on risk and insurance. Issues and practice, ISSN 1018-5895, E-ISSN 1468-0440, Vol. 47, no 3, p. 669-697. Article in journal (Refereed). Published
Abstract [en]

The General Data Protection Regulation (GDPR) establishes a right for individuals to access information about automated decision-making based on their personal data. However, the application of this right comes with caveats. This paper investigates how European insurance companies have navigated these obstacles. By recruiting volunteering insurance customers, requests for information about how insurance premiums are set were sent to 26 insurance companies in Denmark, Finland, the Netherlands, Poland and Sweden. The findings illustrate the practice of responding to GDPR information requests, and the paper identifies possible explanations for shortcomings and omissions in the responses. The paper also adds to existing research by showing how the wordings in the different language versions of the GDPR could lead to different interpretations. Finally, the paper discusses what can reasonably be expected from explanations in consumer-oriented information.

Keywords
GDPR, Right of access, Meaningful information, Transparency, Insurance, Automated decision-making
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:ri:diva-59313 (URN), 10.1057/s41288-022-00271-9 (DOI), 2-s2.0-85129328785 (Scopus ID)
Projects
TALFÖR
Funder
Länsförsäkringar AB, No. P4/18
Available from: 2022-06-07. Created: 2022-06-07. Last updated: 2023-06-08. Bibliographically approved
Franke, U. (2022). First- and Second-Level Bias in Automated Decision-making. Philosophy & Technology, 35(2), Article ID 21.
2022 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no 2, article id 21. Article in journal (Refereed). Published
Abstract [en]

Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article makes a contribution to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair by drawing attention to and further elaborating a distinction first made by Nozick (1993) between first-level bias in the application of standards and second-level bias in the choice of standards, as well as a second distinction between discrimination and arbitrariness. Applying the typology developed, a number of illuminating observations are made. First, it is observed that some reported bias in automated decision-making is first-level arbitrariness, which can be alleviated by explainability techniques. However, such techniques have only a limited potential to alleviate first-level discrimination. Second, it is argued that second-level arbitrariness is probably quite common in automated decision-making. In contrast to first-level arbitrariness, however, second-level arbitrariness is not straightforward to detect automatically. Third, the prospects for alleviating arbitrariness are discussed. It is argued that detecting and alleviating second-level arbitrariness is a profound problem because there are many contrasting and sometimes conflicting standards from which to choose, and even when we make intentional efforts to choose standards for good reasons, some second-level arbitrariness remains.

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2022
Keywords
Arbitrariness, Bias, Decision-support, Discrimination, Explainable artificial intelligence (XAI)
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:ri:diva-58998 (URN), 10.1007/s13347-022-00500-y (DOI), 2-s2.0-85127109812 (Scopus ID)
Note

Funding details: P4/18; Funding text 1: Open access funding provided by RISE Research Institutes of Sweden. This work was supported by Länsförsäkringsgruppens Forsknings- & Utvecklingsfond, agreement no. P4/18.

Available from: 2022-06-13. Created: 2022-06-13. Last updated: 2023-06-08. Bibliographically approved
Franke, U. (2022). How Much Should You Care About Algorithmic Transparency as Manipulation?. Philosophy & Technology, 35(4), Article ID 92.
2022 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no 4, article id 92. Article in journal (Refereed). Published
Abstract [en]

Wang (Philosophy & Technology 35, 2022) introduces a Foucauldian power account of algorithmic transparency. This short commentary explores when this power account is appropriate. It is first observed that the power account is a constructionist one, and that such accounts often come with both factual and evaluative claims. In an instance of Hume’s law, the evaluative claims do not follow from the factual claims, leaving open the question of how much constructionist commitment (Hacking, 1999) one should have. The concept of acts in equilibrium (Nozick, 1981) is then used to explain how different individuals reading Wang can end up with different evaluative attitudes towards algorithmic transparency, despite factual agreement. The commentary concludes by situating constructionist commitment inside a larger question of how much to think of our actions, identifying conflicting arguments.

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2022
Keywords
Acts in equilibrium, Algorithmic transparency, Constructionism, Hume’s law
National Category
Mathematics
Identifiers
urn:nbn:se:ri:diva-61196 (URN), 10.1007/s13347-022-00586-4 (DOI), 2-s2.0-85139987612 (Scopus ID)
Available from: 2022-12-05. Created: 2022-12-05. Last updated: 2023-06-08. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-2017-7914
