Human performance consequences of normative and contrastive explanations: An experiment in machine learning for reliability maintenance

Cited by: 3
Authors
Gentile, Davide [1 ]
Donmez, Birsen [1 ]
Jamieson, Greg A. [1 ]
Affiliations
[1] Univ Toronto, Dept Mech & Ind Engn, 5 Kings Coll Rd, Toronto, ON M5S 3G8, Canada
Keywords
Human-AI interaction; Explainable AI; Automation reliance behavior; Automation transparency; AGENT TRANSPARENCY; AUTOMATION; FEEDBACK; IMPACT; TRUST
DOI
10.1016/j.artint.2023.103945
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Decision aids based on artificial intelligence and machine learning can benefit human decisions and system performance, but they can also provide incorrect advice and invite operators to rely on automation inappropriately. This paper examined the extent to which example-based explanations could improve reliance on a machine learning-based decision aid. Participants engaged in a preventive maintenance task, diagnosing the condition of three components of a hydraulic system. A decision aid based on machine learning provided advice but was not always reliable. Three explanation displays (baseline, normative, and normative plus contrastive) were manipulated within participants. With the normative explanation display, we found improvements in participants' decision time and subjective workload. With the addition of contrastive explanations, we found improvements in participants' hit rate and sensitivity in discriminating between correct and incorrect ML advice. Implications for the design of explainable interfaces to support human-AI interaction in data-intensive environments are discussed. © 2023 Elsevier B.V. All rights reserved.
Pages: 17