Operationalizing Human-Centered Perspectives in Explainable AI

Cited by: 7
Authors
Ehsan, Upol [1 ]
Wintersberger, Philipp [2 ]
Liao, Q. Vera [3 ]
Mara, Martina [4 ]
Streit, Marc [4 ]
Wachter, Sandra [5 ]
Riener, Andreas [6 ]
Riedl, Mark O. [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] TH Ingolstadt THI, CARISSMA, Ingolstadt, Bavaria, Germany
[3] IBM Res AI, Yorktown Hts, NY USA
[4] Johannes Kepler Univ Linz, Linz, Upper Austria, Austria
[5] Univ Oxford, Oxford Internet Inst, Oxford, England
[6] TH Ingolstadt THI, Ingolstadt, Bavaria, Germany
Source
EXTENDED ABSTRACTS OF THE 2021 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'21) | 2021
Keywords
Explainable Artificial Intelligence; Interpretable Machine Learning; Interpretability; Artificial Intelligence; Critical Technical Practice; Human-centered Computing; Trust in Automation; Algorithmic Fairness;
DOI
10.1145/3411763.3441343
Chinese Library Classification
TP3 (Computing Technology, Computer Technology)
Discipline Classification Code
0812
Abstract
The impact of Artificial Intelligence (AI) on our lives is far-reaching. As AI systems proliferate in high-stakes domains such as healthcare, finance, mobility, and law, they must be able to explain their decisions to diverse end-users comprehensibly. Yet the discourse of Explainable AI (XAI) has been predominantly focused on algorithm-centered approaches, which fall short of meeting user needs and exacerbate issues of algorithmic opacity. To address these issues, researchers have called for human-centered approaches to XAI. There is a need to chart the domain and shape the discourse of XAI through reflective discussions among diverse stakeholders. The goal of this workshop is to examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we put an emphasis on "operationalizing," aiming to produce actionable frameworks, transferable evaluation methods, and concrete design guidelines, and to articulate a coordinated research agenda for XAI.
Pages: 6