Multi-objective Feature Attribution Explanation for Explainable Machine Learning

Cited by: 3
Authors
Wang Z. [1 ]
Huang C. [1 ]
Li Y. [2 ]
Yao X. [1 ,3 ]
Affiliations
[1] Research Institute of Trustworthy Autonomous Systems, Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen
[2] The Advanced Cognitive Technology Lab, Huawei Technologies Co. Ltd, Shanghai
[3] School of Computer Science, University of Birmingham, Birmingham
Source
ACM Transactions on Evolutionary Learning and Optimization | 2024 / Volume 4 / Issue 1
Funding
National Natural Science Foundation of China;
Keywords
Explainable machine learning; feature attribution explanations; multi-objective evolutionary algorithms; multi-objective learning;
DOI
10.1145/3617380
Abstract
Feature attribution-based explanation (FAE) methods, which indicate how much each input feature contributes to the model's output for a given data point, are one of the most popular categories of explainable machine learning techniques. Although various metrics have been proposed to evaluate explanation quality, no single metric captures all aspects of an explanation, and different metrics may lead to different conclusions. Moreover, when generating explanations, existing FAE methods either consider no evaluation metric or consider only the faithfulness of the explanation, failing to account for multiple metrics simultaneously. To address this issue, we formulate the problem of creating FAE explainable models as a multi-objective learning problem that considers multiple explanation quality metrics simultaneously. We first reveal conflicts between various explanation quality metrics, including faithfulness, sensitivity, and complexity. Then, we define the considered multi-objective explanation problem and propose a multi-objective feature attribution explanation (MOFAE) framework to address this newly defined problem. Subsequently, we instantiate the framework by simultaneously considering the explanation's faithfulness, sensitivity, and complexity. Experimental results comparing with six state-of-the-art FAE methods on eight datasets demonstrate that our method can optimize multiple conflicting metrics simultaneously and can provide explanations with higher faithfulness, lower sensitivity, and lower complexity than the compared methods. Moreover, the results show that our method has better diversity, i.e., it provides various explanations that achieve different tradeoffs between multiple conflicting explanation quality metrics. Therefore, it can provide tailored explanations to different stakeholders based on their specific requirements. © 2024 Copyright held by the owner/author(s).
Publication rights licensed to ACM.
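The abstract's core idea, selecting explanations that trade off conflicting quality metrics rather than optimizing a single one, can be illustrated with a minimal Pareto-dominance sketch. This is a hypothetical toy example, not the authors' MOFAE implementation: candidate explanations are scored as tuples of (negated faithfulness, sensitivity, complexity) so that all three objectives are minimized, and the non-dominated set forms the Pareto front of tradeoff explanations.

```python
# Hypothetical sketch: picking Pareto-optimal feature-attribution
# explanations under three conflicting quality metrics.
# Convention: every objective is minimized, so faithfulness is negated.

def dominates(a, b):
    """True if candidate a is at least as good as b on every objective
    and strictly better on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated candidates among
    (neg_faithfulness, sensitivity, complexity) score tuples."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Three toy explanation candidates scored as (-faithfulness, sensitivity, complexity):
cands = [(-0.9, 0.5, 8),   # very faithful but sensitive and complex
         (-0.7, 0.2, 3),   # less faithful but stable and sparse
         (-0.8, 0.6, 9)]   # dominated by the first candidate
front = pareto_front(cands)  # the first two candidates survive
```

The surviving front members represent different stakeholder-facing tradeoffs (e.g., a highly faithful explanation for developers versus a sparse, stable one for end users), which matches the diversity claim in the abstract.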
Related Papers
50 records in total
  • [31] Exploring sustainable solutions for soil stabilization through explainable Gaussian process-assisted multi-objective optimization
    Gautam
    Gupta, Kritesh Kumar
    Bhowmik, Debjit
    MATERIALS TODAY COMMUNICATIONS, 2024, 40
  • [32] Differentiating Inhibitors of Closely Related Protein Kinases with Single- or Multi-Target Activity via Explainable Machine Learning and Feature Analysis
    Feldmann, Christian
    Bajorath, Juergen
    BIOMOLECULES, 2022, 12 (04)
  • [33] Application of Multi-Objective Hyper-Heuristics to Solve the Multi-Objective Software Module Clustering Problem
    Alshareef, Haya
    Maashi, Mashael
    APPLIED SCIENCES-BASEL, 2022, 12 (11):
  • [34] Explainable machine learning for project management control
    Ignacio Santos, Jose
    Pereda, Maria
    Ahedo, Virginia
    Manuel Galan, Jose
    COMPUTERS & INDUSTRIAL ENGINEERING, 2023, 180
  • [35] Multi-objective evolutionary algorithms in the automatic learning of Boolean queries: A comparative study
    Lopez-Herrera, A. G.
    Herrera-Viedma, E.
    Herrera, F.
    Porcel, C.
    Alonso, S.
    THEORETICAL ADVANCES AND APPLICATIONS OF FUZZY LOGIC AND SOFT COMPUTING, 2007, 42 : 71 - +
  • [36] Explainable machine learning for motor fault diagnosis
    Wang, Yuming
    Wang, Peng
    2023 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE, I2MTC, 2023,
  • [37] Explainable Machine Learning for Scientific Insights and Discoveries
    Roscher, Ribana
    Bohn, Bastian
    Duarte, Marco F.
    Garcke, Jochen
    IEEE ACCESS, 2020, 8 : 42200 - 42216
  • [38] SoK: Explainable Machine Learning in Adversarial Environments
    Noppel, Maximilian
    Wressnegger, Christian
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 2441 - 2459
  • [39] Explainable Machine Learning in the Research of Materials Science
    Wang, Guanjie
    Liu, Shengxian
    Zhou, Jian
    Sun, Zhimei
    ACTA METALLURGICA SINICA, 2024, 60 (10) : 1345 - 1361
  • [40] Addressing Overlapping in Classification with Imbalanced Datasets: A First Multi-objective Approach for Feature and Instance Selection
    Fernandez, Alberto
    Jose del Jesus, Maria
    Herrera, Francisco
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2015, 2015, 9375 : 36 - 44