DARPA's Explainable Artificial Intelligence Program

Cited: 917
Authors
Gunning, David [1]
Aha, David W. [2]
Affiliations
[1] DARPA's Information Innovation Office, Arlington, VA 22203 USA
[2] NRL's Navy Center for Applied Research in AI, Washington, DC USA
Keywords
EXPLANATION
DOI
10.1609/aimag.v40i2.2850
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, and defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA's Explainable Artificial Intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and by developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first year of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems' explanations improve user understanding, user trust, and user task performance.
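To illustrate the kind of post hoc explanation technique the abstract alludes to, here is a minimal sketch of perturbation-based feature attribution, one common family of methods for explaining a model's individual predictions. The model and data below are hypothetical stand-ins for illustration only, not methods from the XAI program itself.

```python
# Minimal sketch of perturbation-based feature attribution: score each
# feature by how much the prediction changes when that feature is
# replaced with a baseline value (occlusion-style attribution).
# The model and inputs are toy examples, not part of the XAI program.

def feature_importance(predict, x, baseline=0.0):
    """Return one importance score per feature of input vector x."""
    base_score = predict(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline  # knock out feature i
        importances.append(base_score - predict(perturbed))
    return importances

# Toy linear "model": prediction is a weighted sum of the inputs.
weights = [2.0, -1.0, 0.5]
model = lambda x: sum(w * v for w, v in zip(weights, x))

print(feature_importance(model, [1.0, 1.0, 1.0]))  # → [2.0, -1.0, 0.5]
```

For a linear model the recovered attributions are simply the weights, which makes this a convenient sanity check; for opaque models the same perturb-and-compare loop yields an approximate, local explanation of a single prediction.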
Pages: 44-58
Page count: 15