Towards Trustable Explainable AI

Cited by: 0
Authors
Ignatiev, Alexey [1]
Affiliation
[1] Monash Univ, Clayton, Vic, Australia
Source
PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE | 2020
Keywords
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Explainable artificial intelligence (XAI) is arguably one of the most crucial challenges currently faced by AI. Although the majority of approaches to XAI are heuristic in nature, recent work proposed the use of abductive reasoning for computing provably correct explanations for machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
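The abductive explanations mentioned in the abstract are subset-minimal sets of feature assignments that provably entail a prediction. A minimal sketch of the idea, assuming a toy Boolean classifier small enough to check entailment by exhaustive enumeration (the paper's actual approach relies on logical encodings and NP oracles such as SAT/SMT solvers; all names below are hypothetical):

```python
from itertools import product

def entails(model, fixed, n_features, target):
    """True iff every completion of the fixed features yields `target`
    (brute-force check over the free Boolean features)."""
    free = [i for i in range(n_features) if i not in fixed]
    for vals in product([0, 1], repeat=len(free)):
        point = dict(fixed)
        point.update(zip(free, vals))
        if model([point[i] for i in range(n_features)]) != target:
            return False
    return True

def abductive_explanation(model, instance, target):
    """Subset-minimal explanation via linear search: start from the full
    instance and try to drop each feature while preserving entailment."""
    n = len(instance)
    fixed = {i: instance[i] for i in range(n)}
    for i in range(n):
        trial = {j: v for j, v in fixed.items() if j != i}
        if entails(model, trial, n, target):
            fixed = trial  # feature i is redundant for this prediction
    return fixed

# Hypothetical toy classifier: predicts 1 iff x0 AND (x1 OR x2).
model = lambda x: int(x[0] and (x[1] or x[2]))
expl = abductive_explanation(model, [1, 1, 0], 1)
# For instance (1, 1, 0), fixing x0=1 and x1=1 alone already forces
# the prediction, so x2 is dropped from the explanation.
```

The greedy drop loop yields a subset-minimal (not necessarily cardinality-minimal) explanation; the exhaustive `entails` check is what a SAT/SMT oracle replaces on realistically sized models.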
Pages: 5154-5158
Page count: 5