Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness

Cited by: 25
Authors
Nakao, Yuri [1 ]
Strappelli, Lorenzo [2 ]
Stumpf, Simone [2 ]
Naseer, Aisha [3 ]
Regoli, Daniele [4 ]
Del Gamba, Giulia [4 ]
Affiliations
[1] Fujitsu Labs Ltd, Artificial Intelligence Lab, Kawasaki, Kanagawa, Japan
[2] City Univ London, Ctr HCI Design, London, England
[3] Fujitsu Labs Europe, Hayes, England
[4] Intesa Sanpaolo SpA, Turin, Italy
Keywords
JUSTICE; BIAS; PERCEPTIONS
DOI
10.1080/10447318.2022.2067936
CLC Number
TP3 [Computing technology; computer technology]
Discipline Code
0812
Abstract
With artificial intelligence (AI) that aids or automates decision-making advancing rapidly, its fairness is a particular concern. In order to create reliable, safe and trustworthy systems through human-centred artificial intelligence (HCAI) design, recent efforts have produced user interfaces (UIs) for AI experts to investigate the fairness of AI models. In this work, we provide a design space exploration that supports not only data scientists but also domain experts in investigating AI fairness. Using loan applications as an example, we held a series of workshops with loan officers and data scientists to elicit their requirements. We instantiated these requirements into FairHIL, a UI to support human-in-the-loop fairness investigations, and describe how this UI could be generalized to other use cases. We evaluated FairHIL through a think-aloud user study. Our work contributes better designs to investigate an AI model's fairness, moving closer towards responsible AI.
Pages: 1762-1788
Page count: 27