First- and Second-Level Bias in Automated Decision-making

Cited by: 0
Author
Franke U. [1, 2]
Affiliations
[1] RISE Research Institutes of Sweden, Kista
[2] KTH Royal Institute of Technology, Stockholm
Keywords
Arbitrariness; Bias; Decision-support; Discrimination; Explainable artificial intelligence (XAI)
DOI
10.1007/s13347-022-00500-y
Abstract
Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article contributes to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair. It does so by drawing attention to, and further elaborating, a distinction first made by Nozick (1993) between first-level bias in the application of standards and second-level bias in the choice of standards, as well as a second distinction between discrimination and arbitrariness. Applying this typology yields a number of illuminating observations. First, some reported bias in automated decision-making is first-level arbitrariness, which can be alleviated by explainability techniques; such techniques have only a limited potential to alleviate first-level discrimination, however. Second, it is argued that second-level arbitrariness is probably quite common in automated decision-making. In contrast to first-level arbitrariness, however, second-level arbitrariness is not straightforward to detect automatically. Third, the prospects for alleviating arbitrariness are discussed. Detecting and alleviating second-level arbitrariness is a profound problem, because there are many contrasting and sometimes conflicting standards from which to choose, and even when we make intentional efforts to choose standards for good reasons, some second-level arbitrariness remains. © 2022, The Author(s).
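To make the two levels of the typology concrete, the following minimal Python sketch contrasts them in a toy lending scenario. It is not drawn from the paper; the applicants, standards, and thresholds are all hypothetical illustrations.

```python
# Toy illustration (not from the paper) of the distinction elaborated by Franke,
# following Nozick (1993): first-level bias arises in *applying* a given
# standard, second-level bias in *choosing* the standard itself.
# All names, standards, and thresholds below are hypothetical.

from dataclasses import dataclass


@dataclass
class Applicant:
    income: float      # annual income
    debt_ratio: float  # debt-to-income ratio
    group: str         # protected attribute, e.g. "A" or "B"


# Two prima facie defensible lending standards; picking one over the other
# without reasons would be second-level arbitrariness.
def standard_income(a: Applicant) -> bool:
    return a.income >= 40_000


def standard_debt(a: Applicant) -> bool:
    return a.debt_ratio <= 0.35


# First-level bias: the chosen standard is applied unevenly across groups,
# here by imposing a stricter threshold on group "B".
def biased_application(a: Applicant) -> bool:
    threshold = 40_000 if a.group == "A" else 50_000  # uneven application
    return a.income >= threshold


applicants = [
    Applicant(income=45_000, debt_ratio=0.40, group="A"),
    Applicant(income=45_000, debt_ratio=0.40, group="B"),
    Applicant(income=35_000, debt_ratio=0.20, group="A"),
]

for a in applicants:
    print(f"group={a.group}: "
          f"income standard -> {standard_income(a)}, "
          f"debt standard -> {standard_debt(a)}, "
          f"biased application -> {biased_application(a)}")
```

In the output, the first two applicants are identical except for group membership yet receive different outcomes under the uneven application (first-level bias), while the third applicant's outcome flips depending on which standard is chosen, so the choice of standard itself carries second-level arbitrariness if made without reasons.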
References
56 references in total
  • [1] Altman A., Discrimination, The Stanford Encyclopedia of Philosophy, Winter 2020 edn, (2020)
  • [2] Arrow K.J., Social Choice and Individual Values, Wiley, (1951)
  • [3] Berenguer A., Goncalves J., Hosio S., Ferreira D., Anagnostopoulos T., Kostakos V., Are smartphones ubiquitous?: An in-depth survey of smartphone adoption by seniors, IEEE Consumer Electronics Magazine, 6, 1, pp. 104-110, (2016)
  • [4] Bickel P.J., Hammel E.A., O'Connell J.W., Sex bias in graduate admissions: Data from Berkeley, Science, 187, 4175, pp. 398-404, (1975)
  • [5] Binns R., Algorithmic accountability and public reason, Philosophy & Technology, 31, 4, pp. 543-556, (2018)
  • [6] Binns R., Fairness in machine learning: Lessons from political philosophy, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81, pp. 149-159, (2018)
  • [7] Borges J.L., Funes the Memorious [Funes el memorioso], pp. 59-66, (2007)
  • [8] Carcary M., Maccani G., Doherty E., Conway G., Exploring the determinants of IoT adoption: Findings from a systematic literature review, International Conference on Business Informatics Research, pp. 113-125, (2018)
  • [9] Castelvecchi D., Can we open the black box of AI?, Nature News, 538, 7623, (2016)
  • [10] Cavazos J.G., Phillips P.J., Castillo C.D., O'Toole A.J., Accuracy Comparison across Face Recognition Algorithms, (2020)