Multi-objective Feature Attribution Explanation for Explainable Machine Learning

Cited by: 3
Authors:
Wang Z. [1 ]
Huang C. [1 ]
Li Y. [2 ]
Yao X. [1 ,3 ]
Affiliations:
[1] Research Institute of Trustworthy Autonomous Systems, Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen
[2] The Advanced Cognitive Technology Lab, Huawei Technologies Co. Ltd, Shanghai
[3] School of Computer Science, University of Birmingham, Birmingham
Source:
ACM Transactions on Evolutionary Learning and Optimization | 2024, Vol. 4, No. 1
Funding:
National Natural Science Foundation of China
Keywords:
Explainable machine learning; feature attribution explanations; multi-objective evolutionary algorithms; multi-objective learning
DOI
10.1145/3617380
Abstract:
Feature attribution-based explanation (FAE) methods, which indicate how much each input feature contributes to a model's output for a given data point, form one of the most popular categories of explainable machine learning techniques. Although various metrics have been proposed to evaluate explanation quality, no single metric captures every aspect of an explanation, and different metrics can lead to different conclusions. Moreover, when generating explanations, existing FAE methods either consider no evaluation metric at all or consider only the faithfulness of the explanation, failing to account for multiple metrics simultaneously. To address this issue, we formulate the creation of FAE explainable models as a multi-objective learning problem that considers multiple explanation quality metrics simultaneously. We first reveal conflicts between various explanation quality metrics, including faithfulness, sensitivity, and complexity. We then define the resulting multi-objective explanation problem and propose a multi-objective feature attribution explanation (MOFAE) framework to address it. Subsequently, we instantiate the framework by simultaneously considering the explanation's faithfulness, sensitivity, and complexity. Experimental comparisons with six state-of-the-art FAE methods on eight datasets demonstrate that our method can optimize multiple conflicting metrics simultaneously and provides explanations with higher faithfulness, lower sensitivity, and lower complexity than the compared methods. The results also show that our method has better diversity, i.e., it provides a variety of explanations that achieve different tradeoffs between the conflicting explanation quality metrics, and can therefore offer tailored explanations to different stakeholders based on their specific requirements. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
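To make the multi-objective formulation concrete, below is a minimal Python sketch of the core idea the abstract describes: scoring candidate attribution vectors for one data point against several conflicting quality metrics and keeping only the Pareto-optimal (non-dominated) candidates. This is not the authors' MOFAE implementation: the toy model f, the masking-based faithfulness score, the entropy-based complexity measure, and the perturbation-variance sensitivity proxy are all illustrative assumptions for this sketch.

```python
# Sketch: feature attribution as multi-objective optimization.
# All metric definitions below are plausible stand-ins, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Toy black-box model over 5 features (an assumption of this sketch)."""
    return 2.0 * x[..., 0] + np.sin(x[..., 1]) + 0.5 * x[..., 2] ** 2

def unfaithfulness(a, x, baseline=0.0):
    # Negative correlation between attribution a_i and the drop in f(x) when
    # feature i is replaced by a baseline value; lower is more faithful.
    drops = np.array([f(x) - f(np.where(np.arange(x.size) == i, baseline, x))
                      for i in range(x.size)])
    return -float(np.corrcoef(a, drops)[0, 1])

def complexity(a):
    # Shannon entropy of the normalized absolute attributions; low entropy
    # means the explanation concentrates its mass on few features.
    p = np.abs(a) / (np.abs(a).sum() + 1e-12)
    return float(-(p * np.log(p + 1e-12)).sum())

def sensitivity(a, x, eps=0.05, n=20):
    # Rough proxy (an assumption, not the paper's definition): how unstable
    # the candidate's fidelity is under small input perturbations.
    scores = [unfaithfulness(a, x + rng.normal(0.0, eps, x.shape))
              for _ in range(n)]
    return float(np.std(scores))

def dominates(u, v):
    # Pareto dominance for minimization: u is no worse in every objective
    # and strictly better in at least one.
    return bool(np.all(u <= v) and np.any(u < v))

x = rng.normal(size=5)                     # the data point to explain
population = rng.normal(size=(64, 5))      # candidate attribution vectors
objectives = np.array([[unfaithfulness(a, x), sensitivity(a, x), complexity(a)]
                       for a in population])
front = [i for i in range(len(population))
         if not any(dominates(objectives[j], objectives[i])
                    for j in range(len(population)) if j != i)]
print(f"{len(front)} non-dominated explanations out of {len(population)} candidates")
```

In the paper's instantiation, candidates would be evolved with a multi-objective evolutionary algorithm rather than sampled once as here; the resulting Pareto front is what allows different stakeholders to pick explanations with different faithfulness, sensitivity, and complexity tradeoffs.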
Related Papers (50 in total)
  • [21] Attribution of hydrological droughts in large river-connected lakes: Insights from an explainable machine learning model
    Xue, Chenyang
    Zhang, Qi
    Jia, Yuxue
    Tang, Hongwu
    Zhang, Huiming
    SCIENCE OF THE TOTAL ENVIRONMENT, 2024, 952
  • [22] Gradient Backpropagation based Feature Attribution to Enable Explainable-AI on the Edge
    Bhat, Ashwin
    Assoa, Adou Sangbone
    Raychowdhury, Arijit
    PROCEEDINGS OF THE 2022 IFIP/IEEE 30TH INTERNATIONAL CONFERENCE ON VERY LARGE SCALE INTEGRATION (VLSI-SOC), 2022
  • [23] Explainable machine learning for medicinal chemistry: exploring multi-target compounds
    Bajorath, Juergen
    FUTURE MEDICINAL CHEMISTRY, 2022, 14 (16) : 1171 - 1173
  • [24] Explainable Machine Learning via Argumentation
    Prentzas, Nicoletta
    Pattichis, Constantinos
    Kakas, Antonis
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT III, 2023, 1903 : 371 - 398
  • [25] Explainable machine learning for diffraction patterns
    Nawaz, Shah
    Rahmani, Vahid
    Pennicard, David
    Setty, Shabarish Pala Ramakantha
    Klaudel, Barbara
    Graafsma, Heinz
    JOURNAL OF APPLIED CRYSTALLOGRAPHY, 2023, 56 : 1494 - 1504
  • [26] Explainable Machine Learning for Intrusion Detection
    Bellegdi, Sameh
    Selamat, Ali
    Olatunji, Sunday O.
    Fujita, Hamido
    Krejcar, Ondřej
    ADVANCES AND TRENDS IN ARTIFICIAL INTELLIGENCE: THEORY AND APPLICATIONS, IEA-AIE 2024, 2024, 14748 : 122 - 134
  • [27] Multi-Augmentation Contrastive Learning as Multi-Objective Optimization for Graph Neural Networks
    Li, Xu
    Chen, Yongsheng
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2023, PT II, 2023, 13936 : 495 - 507
  • [29] Collaborative Multi-objective Ranking
    Hu, Jun
    Li, Ping
    CIKM'18: PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2018, : 1363 - 1372
  • [30] A methodology for evaluating multi-objective evolutionary feature selection for classification in the context of virtual screening
    Jimenez, Fernando
    Perez-Sanchez, Horacio
    Palma, Jose
    Sanchez, Gracia
    Martinez, Carlos
    SOFT COMPUTING, 2019, 23 (18) : 8775 - 8800