1 - 8 of 8
  • 1.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    Algorithmic Fairness, Risk, and the Dominant Protective Agency. 2023. In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 36, article id 76. Article in journal (Refereed).
    Abstract [en]

    With increasing use of automated algorithmic decision-making, issues of algorithmic fairness have attracted much attention lately. In this growing literature, existing concepts from ethics and political philosophy are often applied to new contexts. The reverse—that novel insights from the algorithmic fairness literature are fed back into ethics and political philosophy—is far less established. However, this short commentary on Baumann and Loi (Philosophy & Technology, 36(3), 45, 2023) aims to do precisely this. Baumann and Loi argue that among the algorithmic group fairness measures proposed, one, sufficiency (well-calibration), is morally defensible for insurers to use, whereas independence (statistical parity or demographic parity) and separation (equalized odds) are not normatively appropriate in the insurance context. Such a result may seem to be of relatively narrow interest to insurers and insurance scholars only. We argue, however, that arguments such as that offered by Baumann and Loi have an important but so far overlooked connection to the derivation of the minimal state offered by Nozick (1974) and thus to political philosophy at large.

  • 2.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    Algorithmic Political Bias—an Entrenchment Concern. 2022. In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no 3, article id 75. Article in journal (Refereed).
    Abstract [en]

    This short commentary on Peters (Philosophy & Technology 35, 2022) identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan (2016). Second, following Hacking (1999), the social construction of political identities is analyzed and it is concluded that algorithmic political bias can contribute to such identities. Third, following Nozick (1989), it is argued that purist political positions may stand in the way of the pursuit of all worthy values and goals to be pursued in the political realm and that to the extent that algorithmic political bias entrenches political positions, it also hinders this healthy “zigzag of politics”. © 2022, The Author(s).

  • 3.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    Algorithmic Transparency, Manipulation, and Two Concepts of Liberty. 2024. In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, article id 22. Article in journal (Refereed).
    Abstract [en]

    As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but rather as indifference to whether the information provided enhances decision-making by revealing reasons. This short commentary on Klenk uses Berlin’s (1958) two concepts of liberty to further illuminate the concept of transparency as manipulation, finding alignment between positive liberty and the critical account.

  • 4.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    First- and Second-Level Bias in Automated Decision-making. 2022. In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no 2, article id 21. Article in journal (Refereed).
    Abstract [en]

    Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article makes a contribution to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair by drawing attention to and further elaborating a distinction first made by Nozick (1993) between first-level bias in the application of standards and second-level bias in the choice of standards, as well as a second distinction between discrimination and arbitrariness. Applying the typology developed, a number of illuminating observations are made. First, it is observed that some reported bias in automated decision-making is first-level arbitrariness, which can be alleviated by explainability techniques. However, such techniques have only a limited potential to alleviate first-level discrimination. Second, it is argued that second-level arbitrariness is probably quite common in automated decision-making. In contrast to first-level arbitrariness, however, second-level arbitrariness is not straightforward to detect automatically. Third, the prospects for alleviating arbitrariness are discussed. It is argued that detecting and alleviating second-level arbitrariness is a profound problem because there are many contrasting and sometimes conflicting standards from which to choose, and even when we make intentional efforts to choose standards for good reasons, some second-level arbitrariness remains. © 2022, The Author(s).

  • 5.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems.
    How Much Should You Care About Algorithmic Transparency as Manipulation? 2022. In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no 4, article id 92. Article in journal (Refereed).
    Abstract [en]

    Wang (Philosophy & Technology 35, 2022) introduces a Foucauldian power account of algorithmic transparency. This short commentary explores when this power account is appropriate. It is first observed that the power account is a constructionist one, and that such accounts often come with both factual and evaluative claims. In an instance of Hume’s law, the evaluative claims do not follow from the factual claims, leaving open the question of how much constructionist commitment (Hacking, 1999) one should have. The concept of acts in equilibrium (Nozick, 1981) is then used to explain how different individuals reading Wang can end up with different evaluative attitudes towards algorithmic transparency, despite factual agreement. The commentary concludes by situating constructionist commitment inside a larger question of how much to think of our actions, identifying conflicting arguments. © 2022, The Author(s).

  • 6.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    Rawlsian Algorithmic Fairness and a Missing Aggregation Property of the Difference Principle. 2024. In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, no 3, article id 87. Article in journal (Refereed).
    Abstract [en]

    Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. Based on complications with this approach identified in the literature, this article discusses how Rawls’s theory in general, and especially the difference principle, should reasonably be applied to algorithmic fairness decisions. It is observed that proposals to achieve Rawlsian algorithmic fairness often aim to uphold the difference principle in the individual situations where automated decision-making occurs. However, the Rawlsian difference principle applies to society at large and does not aggregate in such a way that upholding it in constituent situations also upholds it in the aggregate. But such aggregation is a hidden premise of many proposals in the literature and its falsity explains many complications encountered. 

  • 7.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems.
    Rawls’s Original Position and Algorithmic Fairness. 2021. In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 34, p. 1803-1817. Article in journal (Refereed).
    Abstract [en]

    Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. This article aims to identify some complications with this approach: Under which circumstances can Rawls’s original position reasonably be applied to algorithmic fairness decisions? First, it is argued that there are important differences between Rawls’s original position and a parallel algorithmic fairness original position with respect to risk attitudes. Second, it is argued that the application of Rawls’s original position to algorithmic fairness faces a boundary problem in defining relevant stakeholders. Third, it is observed that the definition of the least advantaged, necessary for applying the difference principle, requires some attention in the context of algorithmic fairness. Finally, it is argued that appropriate deliberation in algorithmic fairness contexts often requires more knowledge about probabilities than the Rawlsian original position allows. Provided that these complications are duly considered, the thought-experiment of the Rawlsian original position can be useful in algorithmic fairness decisions. © 2021, The Author(s).

  • 8.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    The Limits of Calibration and the Possibility of Roles for Trustworthy AI. 2024. In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, no 3, article id 82. Article in journal (Refereed).
    Abstract [en]

    With increasing use of artificial intelligence (AI) in high-stakes contexts, a race for “trustworthy AI” is under way. However, Dorsch and Deroy (Philosophy & Technology 37, 62, 2024) recently argued that regardless of its feasibility, morally trustworthy AI is unnecessary: We should merely rely on rather than trust AI, and carefully calibrate our reliance using the reliability scores which are often available. This short commentary on Dorsch and Deroy engages with the claim that morally trustworthy AI is unnecessary and argues that since there are important limits to how good calibration based on reliability scores can be, some residual roles for trustworthy AI (if feasible) are still possible. 

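
The three group fairness criteria named in entry 1, independence (statistical or demographic parity), separation (equalized odds), and sufficiency (calibration within groups), are statistical properties of a classifier's outputs across groups. The following is a minimal illustrative sketch, not taken from any of the listed papers; the toy data, function names, and the 0.5 threshold are assumptions made here purely for illustration.

    # Illustrative sketch of the group fairness criteria mentioned in entry 1.
    # All data and names here are hypothetical, not from the listed papers.
    import numpy as np

    def independence(pred, group):
        # Positive-prediction rate per group; equal rates <=> statistical parity.
        return {g: pred[group == g].mean() for g in np.unique(group)}

    def separation(pred, label, group):
        # True and false positive rates per group; equal rates <=> equalized odds.
        out = {}
        for g in np.unique(group):
            p, y = pred[group == g], label[group == g]
            out[g] = {"TPR": p[y == 1].mean(), "FPR": p[y == 0].mean()}
        return out

    def sufficiency(score, label, group, bins=np.linspace(0, 1, 6)):
        # Observed outcome rate per score bin and group; equal rates <=> calibration.
        out = {}
        for g in np.unique(group):
            s, y = score[group == g], label[group == g]
            idx = np.digitize(s, bins)
            out[g] = {int(b): y[idx == b].mean() for b in np.unique(idx)}
        return out

    # Toy example: risk scores, thresholded predictions, labels, binary group attribute.
    rng = np.random.default_rng(0)
    score = rng.uniform(size=200)
    label = (rng.uniform(size=200) < score).astype(int)
    group = rng.integers(0, 2, size=200)
    pred = (score >= 0.5).astype(int)

    print(independence(pred, group))
    print(separation(pred, label, group))
    print(sufficiency(score, label, group))

Running the sketch prints the per-group rates; the three criteria are satisfied exactly when the corresponding rates coincide across groups.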
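Entry 6 turns on the observation that the difference principle does not aggregate: a choice that maximizes the welfare of the worst-off stakeholder within each local automated decision need not maximize the welfare of the worst-off member of society at large. The toy welfare numbers below are a hypothetical illustration constructed here, not an example from the paper.

    # Hypothetical illustration: local maximin and global maximin can come apart.
    # Baseline welfare of three persons; X is the worst-off in society at large.
    baseline = {"X": 1, "Y": 5, "Z": 5}

    # A local decision is evaluated only over its direct stakeholders, Y and Z.
    # Option "a" is better for the locally worst-off; option "b" also benefits X
    # (e.g. via a spillover), but is worse for Y and Z.
    options = {
        "a": {"X": 0, "Y": 2, "Z": 2},
        "b": {"X": 3, "Y": 1, "Z": 1},
    }

    def outcome(option):
        return {p: baseline[p] + options[option].get(p, 0) for p in baseline}

    # Local difference principle: maximize the minimum welfare among {Y, Z}.
    local_best = max(options, key=lambda o: min(outcome(o)[p] for p in ("Y", "Z")))

    # Global difference principle: maximize the minimum welfare in society as a whole.
    global_best = max(options, key=lambda o: min(outcome(o).values()))

    print(local_best, global_best)  # "a" locally, "b" globally: the choices diverge

Here the locally fair choice ("a") leaves society's worst-off person worse off than the alternative ("b"), which is one way the hidden aggregation premise discussed in entry 6 can fail.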