LIMEADE: From AI Explanations to Advice Taking

Cited by: 1
Authors
Lee, Benjamin Charles Germain [1 ,2 ]
Downey, Doug [2 ]
Lo, Kyle [2 ]
Weld, Daniel S. [1 ,2 ]
Affiliations
[1] Univ Washington, Paul G Allen Sch Comp Sci & Engn, Box 352355, Seattle, WA 98195 USA
[2] Allen Inst Artificial Intelligence, 2157 N Northlake Way 110, Seattle, WA 98103 USA
Funding
National Science Foundation (USA);
Keywords
Explainable recommendations; explainable AI; advice taking; interactive machine learning; Human-AI interaction;
DOI
10.1145/3589345
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well developed for transparent learning models (e.g., linear models and GA2Ms) and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This article introduces LIMEADE, the first general framework that translates both positive and negative advice (expressed using high-level vocabulary such as that employed by post hoc explanations) into an update to an arbitrary, underlying opaque model. We demonstrate the generality of our approach with case studies on 70 real-world models across two broad domains: image classification and text recommendation. We show that our method improves accuracy compared to a rigorous baseline on the image classification domains. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.
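The sketch below is a minimal, hypothetical illustration of the kind of advice-taking the abstract describes: a user endorses or rejects a high-level feature surfaced by a post hoc explanation, and that advice is folded back into an arbitrary retrainable model via pseudo-labeled examples. The function name apply_advice, the feature-threshold test, and the sample weighting are illustrative assumptions and are not taken from the paper's actual algorithm.

import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for any retrainable opaque model

def apply_advice(model, X_train, y_train, X_unlabeled, feature_idx, advice_sign,
                 threshold=0.5, weight=1.0):
    """Fold one piece of high-level advice back into a model (hypothetical scheme).

    advice_sign: +1 means "this feature should indicate the positive class",
                 -1 means it should indicate the negative class.
    """
    # Select unlabeled instances in which the advised interpretable feature is active.
    mask = X_unlabeled[:, feature_idx] > threshold
    X_pseudo = X_unlabeled[mask]
    y_pseudo = np.full(X_pseudo.shape[0], 1 if advice_sign > 0 else 0)

    # Add them as weighted pseudo-labeled examples and refit the model.
    X_aug = np.vstack([X_train, X_pseudo])
    y_aug = np.concatenate([y_train, y_pseudo])
    w_aug = np.concatenate([np.ones(len(y_train)), weight * np.ones(len(y_pseudo))])
    model.fit(X_aug, y_aug, sample_weight=w_aug)
    return model

# Example usage with synthetic data:
rng = np.random.default_rng(0)
X_tr, y_tr = rng.random((100, 20)), rng.integers(0, 2, 100)
X_un = rng.random((200, 20))
clf = LogisticRegression().fit(X_tr, y_tr)
clf = apply_advice(clf, X_tr, y_tr, X_un, feature_idx=3, advice_sign=+1)

In this sketch, negative advice simply pushes matching instances toward the negative class, and the weight parameter controls how strongly the advice counts relative to the original training data.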
Pages: 29