Multi-objective Feature Attribution Explanation for Explainable Machine Learning

Cited by: 3
Authors
Wang Z. [1 ]
Huang C. [1 ]
Li Y. [2 ]
Yao X. [1 ,3 ]
Institutions
[1] Research Institute of Trustworthy Autonomous Systems, Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen
[2] The Advanced Cognitive Technology Lab, Huawei Technologies Co. Ltd, Shanghai
[3] School of Computer Science, University of Birmingham, Birmingham
Source
ACM Transactions on Evolutionary Learning and Optimization | 2024, Vol. 4, No. 1
Fund
National Natural Science Foundation of China
Keywords
Explainable machine learning; feature attribution explanations; multi-objective evolutionary algorithms; multi-objective learning;
DOI
10.1145/3617380
Abstract
Feature attribution-based explanation (FAE) methods, which indicate how much each input feature contributes to a model's output for a given data point, form one of the most popular categories of explainable machine learning techniques. Although various metrics have been proposed to evaluate explanation quality, no single metric captures all aspects of an explanation, and different metrics can lead to different conclusions. Moreover, when generating explanations, existing FAE methods either consider no evaluation metric at all or consider only the faithfulness of the explanation, failing to account for multiple metrics simultaneously. To address this issue, we formulate the creation of FAE explainable models as a multi-objective learning problem that considers multiple explanation quality metrics simultaneously. We first reveal conflicts between various explanation quality metrics, including faithfulness, sensitivity, and complexity. We then define the resulting multi-objective explanation problem and propose a multi-objective feature attribution explanation (MOFAE) framework to address it. Subsequently, we instantiate the framework by simultaneously considering the explanation's faithfulness, sensitivity, and complexity. Experimental comparisons with six state-of-the-art FAE methods on eight datasets demonstrate that our method can optimize multiple conflicting metrics simultaneously and provides explanations with higher faithfulness, lower sensitivity, and lower complexity than the compared methods. Moreover, the results show that our method has better diversity, i.e., it provides various explanations that achieve different tradeoffs between conflicting explanation quality metrics. It can therefore provide tailored explanations to different stakeholders based on their specific requirements. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
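The core idea described in the abstract — scoring candidate attribution vectors under several conflicting quality metrics and keeping the non-dominated (Pareto-optimal) ones — can be sketched as below. This is a minimal illustrative sketch, not the authors' MOFAE implementation: the toy linear model, the proxy definitions of faithfulness, sensitivity, and complexity, and all names are assumptions made for illustration.

```python
import math
import random

random.seed(0)

# Toy "black box": a linear model, so exact per-feature contributions
# (w_i * x_i) are known and can serve as ground truth for this sketch.
WEIGHTS = [3.0, -2.0, 0.5, 0.0]

def contributions(x):
    return [w * xi for w, xi in zip(WEIGHTS, x)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    vb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (va * vb) if va and vb else 0.0

def objectives(attr, x, eps=0.05):
    """Three conflicting quality metrics, all expressed as minimization."""
    # Faithfulness (maximized, hence negated): agreement between the
    # candidate attribution and the model's true per-feature contributions.
    faith = pearson(attr, contributions(x))
    # Sensitivity (minimized): distance from the ideal explanation of a
    # slightly perturbed input -- a crude proxy for explanation stability.
    xp = [xi + random.uniform(-eps, eps) for xi in x]
    sens = math.sqrt(sum((a - c) ** 2 for a, c in zip(attr, contributions(xp))))
    # Complexity (minimized): entropy of the normalized attribution
    # magnitudes; sparse explanations score lower.
    mags = [abs(a) for a in attr]
    total = sum(mags) or 1.0
    probs = [m / total for m in mags]
    compl = -sum(p * math.log(p) for p in probs if p > 0)
    return (-faith, sens, compl)

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere, better somewhere."""
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

def pareto_front(pop, objs):
    """Keep candidates whose objective tuples are dominated by no other."""
    return [p for p, o in zip(pop, objs)
            if not any(dominates(q, o) for q in objs if q is not o)]

# Random candidate attributions stand in for an evolved population; a real
# multi-objective EA would also apply selection, crossover, and mutation.
x = [1.0, 1.0, 1.0, 1.0]
pop = [[random.uniform(-3.0, 3.0) for _ in WEIGHTS] for _ in range(200)]
objs = [objectives(a, x) for a in pop]
front = pareto_front(pop, objs)
print(f"{len(front)} non-dominated explanations out of {len(pop)}")
```

Each member of the resulting front realizes a different tradeoff between the three metrics, which is how a method of this kind can offer different stakeholders different explanations of the same prediction.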