First- and Second-Level Bias in Automated Decision-making
Franke, Ulrik (RISE Research Institutes of Sweden, Digital Systems, Mobility and Systems; KTH Royal Institute of Technology, Sweden). ORCID iD: 0000-0003-2017-7914
2022 (English). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no. 2, article id 21. Article in journal (Refereed). Published.
Abstract [en]

Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article makes a contribution to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair by drawing attention to and further elaborating a distinction first made by Nozick (1993) between first-level bias in the application of standards and second-level bias in the choice of standards, as well as a second distinction between discrimination and arbitrariness. Applying the typology developed, a number of illuminating observations are made. First, it is observed that some reported bias in automated decision-making is first-level arbitrariness, which can be alleviated by explainability techniques. However, such techniques have only a limited potential to alleviate first-level discrimination. Second, it is argued that second-level arbitrariness is probably quite common in automated decision-making. In contrast to first-level arbitrariness, however, second-level arbitrariness is not straightforward to detect automatically. Third, the prospects for alleviating arbitrariness are discussed. It is argued that detecting and alleviating second-level arbitrariness is a profound problem because there are many contrasting and sometimes conflicting standards from which to choose, and even when we make intentional efforts to choose standards for good reasons, some second-level arbitrariness remains. © 2022, The Author(s).

Place, publisher, year, edition, pages
Springer Science and Business Media B.V., 2022. Vol. 35, no. 2, article id 21.
Keywords [en]
Arbitrariness, Bias, Decision-support, Discrimination, Explainable artificial intelligence (XAI)
National Category
Philosophy; Computer and Information Sciences
Identifiers
URN: urn:nbn:se:ri:diva-58998
DOI: 10.1007/s13347-022-00500-y
Scopus ID: 2-s2.0-85127109812
OAI: oai:DiVA.org:ri-58998
DiVA id: diva2:1668287
Note

Funding details: P4/18; Funding text 1: Open access funding provided by RISE Research Institutes of Sweden. This work was supported by Länsförsäkringsgruppens Forsknings- & Utvecklingsfond, agreement no. P4/18.

Available from: 2022-06-13. Created: 2022-06-13. Last updated: 2024-03-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Franke, Ulrik

