OpenXAI: Towards a Transparent Evaluation of Post hoc Model Explanations

Cited by: 0
Authors
Agarwal, Chirag [1 ,2 ]
Krishna, Satyapriya [1 ]
Saxena, Eshika [1 ]
Pawelczyk, Martin [3 ]
Johnson, Nari [4 ]
Puri, Isha [1 ]
Zitnik, Marinka [1 ]
Lakkaraju, Himabindu [1 ]
Affiliations
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] Adobe, San Jose, CA 95110 USA
[3] Univ Tubingen, Tubingen, Germany
[4] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022) | 2022
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While several types of post hoc explanation methods have been proposed in recent literature, there is very little work on systematically benchmarking these methods. Here, we introduce OpenXAI, a comprehensive and extensible open-source framework for evaluating and benchmarking post hoc explanation methods. OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, (ii) open-source implementations of twenty-two quantitative metrics for evaluating the faithfulness, stability (robustness), and fairness of explanation methods, and (iii) the first public XAI leaderboards for readily comparing several explanation methods across a wide variety of metrics, models, and datasets. OpenXAI is easily extensible, as users can readily evaluate custom explanation methods and incorporate them into our leaderboards. Overall, OpenXAI provides an automated end-to-end pipeline that not only simplifies and standardizes the evaluation of post hoc explanation methods, but also promotes transparency and reproducibility in benchmarking these methods. While the first release of OpenXAI supports only tabular datasets, the explanation methods and metrics that we consider are general enough to be applicable to other data modalities. OpenXAI datasets and data loaders, implementations of state-of-the-art explanation methods and evaluation metrics, as well as leaderboards are publicly available at https://open-xai.github.io/. OpenXAI will be regularly updated to incorporate text and image datasets and other new metrics and explanation methods, and we welcome inputs from the community.
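The evaluation pipeline described in the abstract can be illustrated with one faithfulness-style metric of the kind OpenXAI implements: top-k feature agreement, the fraction of the k highest-magnitude features that an explanation shares with ground-truth attributions (e.g., the known weights of a model trained on the synthetic data generator's output). The sketch below is a standalone illustration under these assumptions; the function and variable names are hypothetical and do not reflect OpenXAI's actual API.

```python
import numpy as np

def topk_feature_agreement(expl_a: np.ndarray, expl_b: np.ndarray, k: int = 3) -> float:
    """Fraction of the top-k features (ranked by absolute attribution)
    shared by two explanation vectors. Returns a value in [0, 1]."""
    top_a = set(np.argsort(-np.abs(expl_a))[:k])
    top_b = set(np.argsort(-np.abs(expl_b))[:k])
    return len(top_a & top_b) / k

# Toy example: compare a method's attributions against ground-truth weights
# (illustrative numbers, not from the paper).
ground_truth = np.array([0.9, 0.05, -0.7, 0.0, 0.3])
method_attrs = np.array([0.8, 0.10, -0.6, 0.1, 0.0])
score = topk_feature_agreement(ground_truth, method_attrs, k=2)  # both rank features 0 and 2 highest
```

Averaging such a per-instance score over a test set yields a single leaderboard-style number per (method, model, dataset) triple, which is the comparison pattern the OpenXAI leaderboards automate.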
Pages: 16