Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

Cited: 119
Authors
Bhatt, Umang [1 ,2 ]
Antoran, Javier [2 ]
Zhang, Yunfeng [3 ]
Liao, Q. Vera [3 ]
Sattigeri, Prasanna [3 ]
Fogliato, Riccardo [1 ,4 ]
Melancon, Gabrielle [5 ]
Krishnan, Ranganath [6 ]
Stanley, Jason [5 ]
Tickoo, Omesh [6 ]
Nachman, Lama [6 ]
Chunara, Rumi [7 ]
Srikumar, Madhulika [1 ]
Weller, Adrian [2 ,8 ]
Xiang, Alice [1 ,9 ]
Affiliations
[1] Partnership AI, San Francisco, CA 94104 USA
[2] Univ Cambridge, Cambridge, England
[3] IBM Res, Yorktown Hts, NY USA
[4] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[5] Element AI, Montreal, PQ, Canada
[6] Intel Labs, Santa Clara, CA USA
[7] NYU, New York, NY 10003 USA
[8] Alan Turing Inst, London, England
[9] Sony AI, Tokyo, Japan
Source
AIES '21: PROCEEDINGS OF THE 2021 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY | 2021
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
uncertainty; transparency; machine learning; visualization; TRUST; RISK; NUMERACY; BIAS; COMPREHENSION; INFORMATION; VALIDATION; AUTOMATION; INSIGHTS; HEALTH;
DOI
10.1145/3461702.3462571
CLC (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. To date, research into algorithmic transparency has predominantly focused on explainability, which attempts to provide stakeholders with reasons for a machine learning model's behavior. However, understanding a model's specific behavior alone might not be enough for stakeholders to gauge whether the model is wrong or lacks sufficient knowledge to solve the task at hand. In this paper, we argue for considering a complementary form of transparency: estimating and communicating the uncertainty associated with model predictions. First, we discuss methods for assessing uncertainty. Then, we characterize how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems. Finally, we outline methods for displaying uncertainty to stakeholders and recommend how to collect the information required for incorporating uncertainty into existing ML pipelines. This work constitutes an interdisciplinary review drawing on literature spanning machine learning, visualization/HCI, design, decision-making, and fairness. We aim to encourage researchers and practitioners to measure, communicate, and use uncertainty as a form of transparency.
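The abstract mentions methods for assessing uncertainty. One widely used approach in this literature is ensembling: train several models on resampled data and treat their disagreement as a proxy for epistemic uncertainty (uncertainty due to limited knowledge, which should grow away from the training data). The sketch below is an illustrative toy example assuming a bootstrap ensemble of linear least-squares fits, not a method taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise, observed only on [0, 1]
X = rng.uniform(0.0, 1.0, size=(100, 1))
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, size=100)

def fit_linear(Xb, yb):
    """Least-squares fit with a bias term; returns [slope, intercept]."""
    A = np.hstack([Xb, np.ones((len(Xb), 1))])
    coef, *_ = np.linalg.lstsq(A, yb, rcond=None)
    return coef

# Ensemble: each member is trained on a different bootstrap resample
members = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    members.append(fit_linear(X[idx], y[idx]))

def predict(x):
    """Return the ensemble's mean prediction and its disagreement (std)."""
    A = np.array([x, 1.0])
    preds = np.array([A @ c for c in members])
    return preds.mean(), preds.std()

mean_in, std_in = predict(0.5)    # inside the training range
mean_out, std_out = predict(5.0)  # far outside it
# Member disagreement (an epistemic-uncertainty proxy) grows away from the data,
# so std_out exceeds std_in; a system could surface this to stakeholders.
```

Communicating `std` alongside the prediction, rather than the point estimate alone, is the kind of transparency the abstract argues for: it lets a stakeholder see when the model is extrapolating beyond what it knows.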
Pages: 401-413
Page count: 13