Toward Involving End-users in Interactive Human-in-the-loop AI Fairness

Cited by: 34
Authors
Nakao, Yuri [1 ]
Stumpf, Simone [2 ]
Ahmed, Subeida [3 ]
Naseer, Aisha [4 ]
Strappelli, Lorenzo [3 ]
Affiliations
[1] Fujitsu Ltd, Res Ctr AI Eth, Kawasaki, Kanagawa, Japan
[2] Univ Glasgow, Glasgow, Lanark, Scotland
[3] City Univ London, London, England
[4] Fujitsu Res Europe Ltd, Hayes, England
Keywords
AI fairness; loan application decisions; end-users; human-in-the-loop; explanatory debugging; cultural dimensions
DOI
10.1145/3514258
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications. Recent work has started to investigate how humans judge fairness and how to support machine learning experts in making their AI models fairer. Drawing inspiration from an Explainable AI approach called explanatory debugging used in interactive machine learning, our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users without any technical or domain background to identify potential fairness issues and possibly fix them in the context of loan decisions. Through workshops with end-users, we co-designed and implemented a prototype system that allowed end-users to see why predictions were made, and then to change weights on features to "debug" fairness issues. We evaluated the use of this prototype system through an online study. To investigate the implications of diverse human values about fairness around the globe, we also explored how cultural dimensions might play a role in using this prototype. Our results contribute to the design of interfaces to allow end-users to be involved in judging and addressing AI fairness through a human-in-the-loop approach.
Pages: 30
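The abstract describes a prototype in which end-users first see why a loan prediction was made and then change weights on features to "debug" fairness issues. This record does not include the authors' implementation; the Python sketch below is a purely illustrative mock-up of that weight-adjustment loop, and every feature name, weight value, and function in it is a hypothetical assumption rather than the paper's actual system.

```python
# Illustrative sketch only (not the authors' implementation): a toy linear
# loan-approval scorer whose per-feature weights an end-user can inspect
# and nudge, mimicking the explanatory-debugging loop the abstract describes.
import numpy as np

FEATURES = ["income", "credit_history", "loan_amount", "age"]  # hypothetical

def predict(weights: np.ndarray, x: np.ndarray, bias: float = 0.0) -> float:
    """Approval probability from a simple logistic scorer."""
    return float(1.0 / (1.0 + np.exp(-(weights @ x + bias))))

def explain(weights: np.ndarray, x: np.ndarray) -> dict:
    """Per-feature contributions (weight * value) shown to the end-user."""
    return {f: w * v for f, w, v in zip(FEATURES, weights, x)}

def adjust_weight(weights: np.ndarray, feature: str, delta: float) -> np.ndarray:
    """One 'debugging' action: the user nudges a single feature's weight."""
    w = weights.copy()
    w[FEATURES.index(feature)] += delta
    return w

# Example session: the user sees "age" driving the decision and down-weights it.
w = np.array([0.8, 0.6, -0.4, 1.2])
applicant = np.array([0.7, 0.9, 0.5, 0.3])
print(explain(w, applicant))            # why the prediction was made
print("before:", predict(w, applicant))
w = adjust_weight(w, "age", -1.0)       # the fairness "fix"
print("after: ", predict(w, applicant))
```

The separation into explain and adjust_weight mirrors the two interface steps the abstract names: showing why a prediction was made, then letting the user act on that explanation.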