LIMEADE: From AI Explanations to Advice Taking

Cited by: 1
Authors
Lee, Benjamin Charles Germain [1 ,2 ]
Downey, Doug [2 ]
Lo, Kyle [2 ]
Weld, Daniel S. [1 ,2 ]
Affiliations
[1] Univ Washington, Paul G Allen Sch Comp Sci & Engn, Box 352355, Seattle, WA 98195 USA
[2] Allen Inst Artificial Intelligence, 2157 N Northlake Way 110, Seattle, WA 98103 USA
Funding
U.S. National Science Foundation;
Keywords
Explainable recommendations; explainable AI; advice taking; interactive machine learning; Human-AI interaction;
DOI
10.1145/3589345
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow an AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well developed for transparent learning models (e.g., linear models and GA2Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This article introduces LIMEADE, the first general framework that translates both positive and negative advice (expressed using high-level vocabulary such as that employed by post hoc explanations) into an update to an arbitrary, underlying opaque model. We demonstrate the generality of our approach with case studies on 70 real-world models across two broad domains: image classification and text recommendation. We show that our method improves accuracy compared to a rigorous baseline in the image classification domain. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.
Pages: 29