1 - 9 of 9
  • 1.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digitala system, Mobilitet och system. KTH Royal Institute of Technology, Sweden.
    Algorithmic Fairness, Risk, and the Dominant Protective Agency (2023). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 36, article id 76. Article in journal (Refereed)
    Abstract [en]

    With increasing use of automated algorithmic decision-making, issues of algorithmic fairness have attracted much attention lately. In this growing literature, existing concepts from ethics and political philosophy are often applied to new contexts. The reverse—that novel insights from the algorithmic fairness literature are fed back into ethics and political philosophy—is far less established. However, this short commentary on Baumann and Loi (Philosophy & Technology, 36(3), 45, 2023) aims to do precisely this. Baumann and Loi argue that among the algorithmic group fairness measures proposed, one of them, sufficiency (well-calibration), is morally defensible for insurers to use, whereas independence (statistical parity or demographic parity) and separation (equalized odds) are not normatively appropriate in the insurance context. Such a result may seem to be of relatively narrow interest to insurers and insurance scholars only. We argue, however, that arguments such as that offered by Baumann and Loi have an important but so far overlooked connection to the derivation of the minimal state offered by Nozick (1974) and thus to political philosophy at large.
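The three group fairness criteria named in the abstract can be illustrated with a toy computation. Each criterion asks that a different conditional rate be equal across demographic groups: independence compares P(pred=1), separation compares P(pred=1 | actual=1), and sufficiency compares P(actual=1 | pred=1). A minimal sketch, with invented data and function names chosen for illustration:

```python
# Each record is a (predicted_positive, actual_positive) pair.
# The data for the two demographic groups below is invented.
group_a = [(1, 1), (1, 0), (0, 0), (1, 1), (0, 1)]
group_b = [(1, 1), (0, 0), (0, 0), (0, 1), (1, 0)]

def positive_rate(g):
    # Independence (statistical/demographic parity): P(pred=1)
    # should be equal across groups.
    return sum(p for p, _ in g) / len(g)

def true_positive_rate(g):
    # Separation (equalized odds, positive-class side): P(pred=1 | actual=1)
    # should be equal across groups.
    preds = [p for p, y in g if y == 1]
    return sum(preds) / len(preds)

def precision(g):
    # Sufficiency (calibration by group): P(actual=1 | pred=1)
    # should be equal across groups.
    actuals = [y for p, y in g if p == 1]
    return sum(actuals) / len(actuals)

for name, fn in [("independence", positive_rate),
                 ("separation", true_positive_rate),
                 ("sufficiency", precision)]:
    print(f"{name}: group_a={fn(group_a):.2f}, group_b={fn(group_b):.2f}")
```

On this toy data none of the three criteria is satisfied, which mirrors the well-known result that the criteria cannot in general all hold at once when base rates differ between groups.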

  • 2.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digitala system, Mobilitet och system. KTH Royal Institute of Technology, Sweden.
    Algorithmic Political Bias—an Entrenchment Concern (2022). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no. 3, article id 75. Article in journal (Refereed)
    Abstract [en]

    This short commentary on Peters (Philosophy & Technology 35, 2022) identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan (2016). Second, following Hacking (1999), the social construction of political identities is analyzed and it is concluded that algorithmic political bias can contribute to such identities. Third, following Nozick (1989), it is argued that purist political positions may stand in the way of the pursuit of all worthy values and goals to be pursued in the political realm and that to the extent that algorithmic political bias entrenches political positions, it also hinders this healthy “zigzag of politics”. © 2022, The Author(s).

  • 3.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digitala system, Mobilitet och system. KTH Royal Institute of Technology, Sweden.
    Algorithmic Transparency, Manipulation, and Two Concepts of Liberty (2024). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 37, article id 22. Article in journal (Refereed)
    Abstract [en]

    As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but rather as indifference to whether the information provided enhances decision-making by revealing reasons. This short commentary on Klenk uses Berlin’s (1958) two concepts of liberty to further illuminate the concept of transparency as manipulation, finding alignment between positive liberty and the critical account.

    Full text (pdf)
  • 4.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digitala system, Mobilitet och system. KTH Royal Institute of Technology, Sweden.
    First- and Second-Level Bias in Automated Decision-making (2022). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no. 2, article id 21. Article in journal (Refereed)
    Abstract [en]

    Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article makes a contribution to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair by drawing attention to and further elaborating a distinction first made by Nozick (1993) between first-level bias in the application of standards and second-level bias in the choice of standards, as well as a second distinction between discrimination and arbitrariness. Applying the typology developed, a number of illuminating observations are made. First, it is observed that some reported bias in automated decision-making is first-level arbitrariness, which can be alleviated by explainability techniques. However, such techniques have only a limited potential to alleviate first-level discrimination. Second, it is argued that second-level arbitrariness is probably quite common in automated decision-making. In contrast to first-level arbitrariness, however, second-level arbitrariness is not straightforward to detect automatically. Third, the prospects for alleviating arbitrariness are discussed. It is argued that detecting and alleviating second-level arbitrariness is a profound problem because there are many contrasting and sometimes conflicting standards from which to choose, and even when we make intentional efforts to choose standards for good reasons, some second-level arbitrariness remains. © 2022, The Author(s).

  • 5.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digitala system, Mobilitet och system.
    How Much Should You Care About Algorithmic Transparency as Manipulation? (2022). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no. 4, article id 92. Article in journal (Refereed)
    Abstract [en]

    Wang (Philosophy & Technology 35, 2022) introduces a Foucauldian power account of algorithmic transparency. This short commentary explores when this power account is appropriate. It is first observed that the power account is a constructionist one, and that such accounts often come with both factual and evaluative claims. In an instance of Hume’s law, the evaluative claims do not follow from the factual claims, leaving open the question of how much constructionist commitment (Hacking, 1999) one should have. The concept of acts in equilibrium (Nozick, 1981) is then used to explain how different individuals reading Wang can end up with different evaluative attitudes towards algorithmic transparency, despite factual agreement. The commentary concludes by situating constructionist commitment inside a larger question of how much to think of our actions, identifying conflicting arguments. © 2022, The Author(s).

  • 6.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digitala system, Mobilitet och system.
    Rawls’s Original Position and Algorithmic Fairness (2021). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 34, pp. 1803-1817. Article in journal (Refereed)
    Abstract [en]

    Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. This article aims to identify some complications with this approach: Under which circumstances can Rawls’s original position reasonably be applied to algorithmic fairness decisions? First, it is argued that there are important differences between Rawls’s original position and a parallel algorithmic fairness original position with respect to risk attitudes. Second, it is argued that the application of Rawls’s original position to algorithmic fairness faces a boundary problem in defining relevant stakeholders. Third, it is observed that the definition of the least advantaged, necessary for applying the difference principle, requires some attention in the context of algorithmic fairness. Finally, it is argued that appropriate deliberation in algorithmic fairness contexts often requires more knowledge about probabilities than the Rawlsian original position allows. Provided that these complications are duly considered, the thought-experiment of the Rawlsian original position can be useful in algorithmic fairness decisions. © 2021, The Author(s).

  • 7.
    Franke, Ulrik
    RISE Research Institutes of Sweden, Digitala system, Mobilitet och system. KTH Royal Institute of Technology, Sweden.
    Two Metaverse Dystopias (2024). In: Res Publica, ISSN 1356-4765, E-ISSN 1572-8692. Article in journal (Refereed)
    Abstract [en]

    In recent years, the metaverse—some form of immersive digital extension of the physical world—has received much attention. As tech companies present their bold visions, scientists and scholars have also turned to metaverse issues, from technological challenges via societal implications to profound philosophical questions. This article contributes to this growing literature by identifying the possibilities of two dystopian metaverse scenarios, namely one based on the experience machine and one based on demoktesis—two concepts from Nozick (Anarchy, State, and Utopia, Basic Books, 1974). These dystopian scenarios are introduced, and the potential for a metaverse to evolve into either of them is explained. The article is concluded with an argument for why the two dystopian scenarios are not strongly wedded to any particular theory of ethics or political philosophy, but constitute a more general contribution.

    Full text (pdf)
  • 8.
    Persson, Erik
    et al.
    Lund University, Sweden.
    Eriksson, Kerstin
    RISE Research Institutes of Sweden, Säkerhet och transport, Säkerhetsforskning.
    Knaggård, Åsa
    Lund University, Sweden.
    A Fair Distribution of Responsibility for Climate Adaptation-Translating Principles of Distribution from an International to a Local Context (2021). In: Philosophies, E-ISSN 2409-9287, Vol. 6, no. 3, article id 68. Article in journal (Refereed)
    Abstract [en]

    Distribution of responsibility is one of the main focus areas in discussions about climate change ethics. Most of these discussions deal with the distribution of responsibility for climate change mitigation at the international level. The aim of this paper is to investigate if and how these principles can be used to inform the search for a fair distribution of responsibility for climate change adaptation on the local level. We found that the most influential distribution principles on the international level were in turn built on one or more of seven basic principles: (P1) equal shares, (P2) desert, (P3) beneficiary pays, (P4) ability, (P5) self-help, (P6) limited responsibility for the worst off, and (P7) status quo preservation. It was found that all the basic principles, but P1, P3, and P7, are to some extent translatable to local climate adaptation. Two major problems hamper their usefulness on the local level: (1) several categories of agents need to take on responsibility; and (2) emissions do not work as a base for all principles. P4, P5, and P6 are applicable to local adaptation without changes. P4 is of particular importance as it seems to solve the first problem. P2 is applicable only if the second problem is solved, which can be achieved by using risk of harm instead of emissions as the basis for desert.

  • 9.
    Sahlgren, Magnus
    et al.
    AI Sweden, Sweden.
    Carlsson, Fredrik
    RISE Research Institutes of Sweden, Digitala system, Datavetenskap.
    The Singleton Fallacy: Why Current Critiques of Language Models Miss the Point (2021). In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 4, article id 682578. Article in journal (Refereed)
    Abstract [en]

    This paper discusses the current critique against neural network-based Natural Language Understanding solutions known as language models. We argue that much of the current debate revolves around an argumentation error that we refer to as the singleton fallacy: the assumption that a concept (in this case, language, meaning, and understanding) refers to a single and uniform phenomenon, which in the current debate is assumed to be unobtainable by (current) language models. By contrast, we argue that positing some form of (mental) “unobtanium” as definiens for understanding inevitably leads to a dualistic position, and that such a position is precisely the original motivation for developing distributional methods in computational linguistics. As such, we argue that language models present a theoretically (and practically) sound approach that is our current best bet for computers to achieve language understanding. This understanding must however be understood as a computational means to an end.
