1 - 10 of 10
  • 1.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    Algorithmic Fairness, Risk, and the Dominant Protective Agency (2023). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 36, article id 76. Article in journal (Refereed)
    Abstract [en]

    With increasing use of automated algorithmic decision-making, issues of algorithmic fairness have attracted much attention lately. In this growing literature, existing concepts from ethics and political philosophy are often applied to new contexts. The reverse—that novel insights from the algorithmic fairness literature are fed back into ethics and political philosophy—is far less established. However, this short commentary on Baumann and Loi (Philosophy & Technology, 36(3), 45, 2023) aims to do precisely this. Baumann and Loi argue that among the algorithmic group fairness measures proposed, only one, sufficiency (well-calibration), is morally defensible for insurers to use, whereas independence (statistical parity or demographic parity) and separation (equalized odds) are not normatively appropriate in the insurance context. Such a result may seem to be of relatively narrow interest to insurers and insurance scholars only. We argue, however, that arguments such as that offered by Baumann and Loi have an important but so far overlooked connection to the derivation of the minimal state offered by Nozick (1974) and thus to political philosophy at large.
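
    The three group fairness measures named in this abstract have standard formal definitions in the algorithmic fairness literature; the following is a brief sketch using those textbook definitions (the notation is ours, not the article's). For a prediction \hat{Y}, true outcome Y, and protected attribute A:

    \[
    \text{Independence: } \hat{Y} \perp A, \qquad
    \text{Separation: } \hat{Y} \perp A \mid Y, \qquad
    \text{Sufficiency: } Y \perp A \mid \hat{Y}.
    \]

    Informally: independence requires predictions to be distributed identically across groups, separation requires equal error rates conditional on the true outcome, and sufficiency requires that a given prediction carry the same meaning about the outcome regardless of group membership.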

  • 2.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    Algorithmic Political Bias—an Entrenchment Concern (2022). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no. 3, article id 75. Article in journal (Refereed)
    Abstract [en]

    This short commentary on Peters (Philosophy & Technology 35, 2022) identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan (2016). Second, following Hacking (1999), the social construction of political identities is analyzed and it is concluded that algorithmic political bias can contribute to such identities. Third, following Nozick (1989), it is argued that purist political positions may stand in the way of the pursuit of all worthy values and goals to be pursued in the political realm and that to the extent that algorithmic political bias entrenches political positions, it also hinders this healthy “zigzag of politics”. © 2022, The Author(s).

  • 3.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    Algorithmic Transparency, Manipulation, and Two Concepts of Liberty (2024). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, article id 22. Article in journal (Refereed)
    Abstract [en]

    As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but rather as indifference to whether the information provided enhances decision-making by revealing reasons. This short commentary on Klenk uses Berlin’s (1958) two concepts of liberty to further illuminate the concept of transparency as manipulation, finding alignment between positive liberty and the critical account.

  • 4.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    First- and Second-Level Bias in Automated Decision-making (2022). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no. 2, article id 21. Article in journal (Refereed)
    Abstract [en]

    Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article makes a contribution to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair by drawing attention to and further elaborating a distinction first made by Nozick (1993) between first-level bias in the application of standards and second-level bias in the choice of standards, as well as a second distinction between discrimination and arbitrariness. Applying the typology developed, a number of illuminating observations are made. First, it is observed that some reported bias in automated decision-making is first-level arbitrariness, which can be alleviated by explainability techniques. However, such techniques have only a limited potential to alleviate first-level discrimination. Second, it is argued that second-level arbitrariness is probably quite common in automated decision-making. In contrast to first-level arbitrariness, however, second-level arbitrariness is not straightforward to detect automatically. Third, the prospects for alleviating arbitrariness are discussed. It is argued that detecting and alleviating second-level arbitrariness is a profound problem because there are many contrasting and sometimes conflicting standards from which to choose, and even when we make intentional efforts to choose standards for good reasons, some second-level arbitrariness remains. © 2022, The Author(s).

  • 5.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems.
    How Much Should You Care About Algorithmic Transparency as Manipulation? (2022). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no. 4, article id 92. Article in journal (Refereed)
    Abstract [en]

    Wang (Philosophy & Technology 35, 2022) introduces a Foucauldian power account of algorithmic transparency. This short commentary explores when this power account is appropriate. It is first observed that the power account is a constructionist one, and that such accounts often come with both factual and evaluative claims. In an instance of Hume’s law, the evaluative claims do not follow from the factual claims, leaving open the question of how much constructionist commitment (Hacking, 1999) one should have. The concept of acts in equilibrium (Nozick, 1981) is then used to explain how different individuals reading Wang can end up with different evaluative attitudes towards algorithmic transparency, despite factual agreement. The commentary concludes by situating constructionist commitment inside a larger question of how much to think of our actions, identifying conflicting arguments. © 2022, The Author(s).

  • 6.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems.
    Livspusslet: Rilke och Nozick [The life puzzle: Rilke and Nozick] (2024). In: Rilke och filosoferna [Rilke and the Philosophers] / [ed] Katarina O'Nils Franke, Malmö: Ellerström förlag, 2024, p. 79-86. Chapter in book (Other (popular science, discussion, etc.))
  • 7.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems.
    Rawls’s Original Position and Algorithmic Fairness (2021). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 34, p. 1803-1817. Article in journal (Refereed)
    Abstract [en]

    Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. This article aims to identify some complications with this approach: Under which circumstances can Rawls’s original position reasonably be applied to algorithmic fairness decisions? First, it is argued that there are important differences between Rawls’s original position and a parallel algorithmic fairness original position with respect to risk attitudes. Second, it is argued that the application of Rawls’s original position to algorithmic fairness faces a boundary problem in defining relevant stakeholders. Third, it is observed that the definition of the least advantaged, necessary for applying the difference principle, requires some attention in the context of algorithmic fairness. Finally, it is argued that appropriate deliberation in algorithmic fairness contexts often requires more knowledge about probabilities than the Rawlsian original position allows. Provided that these complications are duly considered, the thought-experiment of the Rawlsian original position can be useful in algorithmic fairness decisions. © 2021, The Author(s).

  • 8.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems. KTH Royal Institute of Technology, Sweden.
    Two Metaverse Dystopias (2024). In: Res Publica, ISSN 1356-4765, E-ISSN 1572-8692. Article in journal (Refereed)
    Abstract [en]

    In recent years, the metaverse—some form of immersive digital extension of the physical world—has received much attention. As tech companies present their bold visions, scientists and scholars have also turned to metaverse issues, from technological challenges via societal implications to profound philosophical questions. This article contributes to this growing literature by identifying the possibilities of two dystopian metaverse scenarios, namely one based on the experience machine and one based on demoktesis—two concepts from Nozick (Anarchy, State, and Utopia, Basic Books, 1974). These dystopian scenarios are introduced, and the potential for a metaverse to evolve into either of them is explained. The article is concluded with an argument for why the two dystopian scenarios are not strongly wedded to any particular theory of ethics or political philosophy, but constitute a more general contribution.

  • 9.
    Persson, Erik
    Lund University, Sweden.
    Eriksson, Kerstin
    RISE Research Institutes of Sweden, Safety and Transport, Safety.
    Knaggård, Åsa
    Lund University, Sweden.
    A Fair Distribution of Responsibility for Climate Adaptation—Translating Principles of Distribution from an International to a Local Context (2021). In: Philosophies, E-ISSN 2409-9287, Vol. 6, no. 3, article id 68. Article in journal (Refereed)
    Abstract [en]

    Distribution of responsibility is one of the main focus areas in discussions about climate change ethics. Most of these discussions deal with the distribution of responsibility for climate change mitigation at the international level. The aim of this paper is to investigate if and how these principles can be used to inform the search for a fair distribution of responsibility for climate change adaptation on the local level. We found that the most influential distribution principles on the international level were in turn built on one or more of seven basic principles: (P1) equal shares, (P2) desert, (P3) beneficiary pays, (P4) ability, (P5) self-help, (P6) limited responsibility for the worst off, and (P7) status quo preservation. It was found that all of the basic principles except P1, P3, and P7 are to some extent translatable to local climate adaptation. Two major problems hamper their usefulness on the local level: (1) several categories of agents need to take on responsibility; and (2) emissions do not work as a base for all principles. P4, P5, and P6 are applicable to local adaptation without changes. P4 is of particular importance as it seems to solve the first problem. P2 is applicable only if the second problem is solved, which can be achieved by using risk of harm instead of emissions as the basis for desert.

  • 10.
    Sahlgren, Magnus
    AI Sweden, Sweden.
    Carlsson, Fredrik
    RISE Research Institutes of Sweden, Digital Systems, Data Science.
    The Singleton Fallacy: Why Current Critiques of Language Models Miss the Point (2021). In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 4, article id 682578. Article in journal (Refereed)
    Abstract [en]

    This paper discusses the current critique against neural network-based Natural Language Understanding solutions known as language models. We argue that much of the current debate revolves around an argumentation error that we refer to as the singleton fallacy: the assumption that a concept (in this case, language, meaning, and understanding) refers to a single and uniform phenomenon, which in the current debate is assumed to be unobtainable by (current) language models. By contrast, we argue that positing some form of (mental) “unobtanium” as definiens for understanding inevitably leads to a dualistic position, and that such a position is precisely the original motivation for developing distributional methods in computational linguistics. As such, we argue that language models present a theoretically (and practically) sound approach that is our current best bet for computers to achieve language understanding. This understanding must however be understood as a computational means to an end.
