Fair Machine Guidance to Enhance Fair Decision Making in Biased People

Times Cited: 0
Authors
Yang, Mingzhe [1 ]
Arai, Hiromi [2 ]
Yamashita, Naomi [3 ]
Baba, Yukino [1 ]
Affiliations
[1] Univ Tokyo, Tokyo, Japan
[2] RIKEN, Tokyo, Japan
[3] Kyoto Univ, Kyoto, Japan
Source
PROCEEDINGS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2024) | 2024
Keywords
fairness-aware machine learning; machine guidance; STEREOTYPE SUPPRESSION; CONSEQUENCES; ADVICE;
DOI
10.1145/3613904.3642627
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Teaching unbiased decision-making is crucial for addressing biased decision-making in daily life. Although both raising awareness of personal biases and providing guidance on unbiased decision-making are essential, the latter topic remains under-researched. In this study, we developed and evaluated an AI system aimed at educating individuals on making unbiased decisions using fairness-aware machine learning. In a between-subjects experimental design, 99 participants who were prone to bias performed personal assessment tasks. They were divided into two groups: (a) those who received AI guidance for fair decision-making before the task and (b) those who received no such guidance but were informed of their biases. The results suggest that although several participants doubted the fairness of the AI system, fair machine guidance prompted them to reassess their views regarding fairness, reflect on their biases, and modify their decision-making criteria. Our findings provide insights into the design of AI systems for guiding fair decision-making in humans.
Pages: 18