Multi-objective Feature Attribution Explanation for Explainable Machine Learning

Cited by: 3
Authors
Wang Z. [1]
Huang C. [1]
Li Y. [2]
Yao X. [1,3]
Affiliations
[1] Research Institute of Trustworthy Autonomous Systems, Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen
[2] The Advanced Cognitive Technology Lab, Huawei Technologies Co. Ltd, Shanghai
[3] School of Computer Science, University of Birmingham, Birmingham
Source
ACM Transactions on Evolutionary Learning and Optimization, 2024, Vol. 4, No. 1
Funding
National Natural Science Foundation of China
Keywords
Explainable machine learning; feature attribution explanations; multi-objective evolutionary algorithms; multi-objective learning
DOI
10.1145/3617380
Abstract
Feature attribution-based explanation (FAE) methods, which indicate how much each input feature contributes to the model's output for a given data point, are one of the most popular categories of explainable machine learning techniques. Although various metrics have been proposed to evaluate explanation quality, no single metric captures every aspect of an explanation, and different metrics may lead to different conclusions. Moreover, when generating explanations, existing FAE methods either do not consider any evaluation metric or consider only the faithfulness of the explanation, failing to account for multiple metrics simultaneously. To address this issue, we formulate the creation of FAE explainable models as a multi-objective learning problem that considers multiple explanation quality metrics simultaneously. We first reveal conflicts between various explanation quality metrics, including faithfulness, sensitivity, and complexity. Then, we define the resulting multi-objective explanation problem and propose a multi-objective feature attribution explanation (MOFAE) framework to address it. Subsequently, we instantiate the framework by simultaneously considering the explanation's faithfulness, sensitivity, and complexity. Experimental results comparing our method with six state-of-the-art FAE methods on eight datasets demonstrate that it can optimize multiple conflicting metrics simultaneously and provide explanations with higher faithfulness, lower sensitivity, and lower complexity than the compared methods. Moreover, the results show that our method has better diversity, i.e., it provides various explanations that achieve different trade-offs between conflicting explanation quality metrics. It can therefore provide tailored explanations to different stakeholders based on their specific requirements. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
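The abstract describes the core idea, searching over candidate attributions while treating faithfulness, sensitivity, and complexity as conflicting objectives, but gives no implementation detail. The sketch below is only an illustration of that idea, not the authors' MOFAE implementation: it assumes the pymoo library's NSGA-II, a toy linear model, a single data point, and heavily simplified stand-ins for the paper's three metrics. Every name in it (model, x0, AttributionProblem, the proxy formulas) is a hypothetical choice made for this example.

import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

rng = np.random.default_rng(0)
W = rng.normal(size=10)                       # toy linear "black box" (illustrative only)
model = lambda X: X @ W
x0 = rng.normal(size=10)                      # the data point to be explained

# Output drop caused by zeroing each feature (used by the faithfulness proxy).
drops = np.array([model(x0) - model(np.where(np.arange(10) == i, 0.0, x0))
                  for i in range(10)])
X_pert = x0 + 0.05 * rng.normal(size=(32, 10))  # small perturbations around x0

class AttributionProblem(ElementwiseProblem):
    # Decision variable: one attribution vector a (one weight per input feature).
    def __init__(self):
        super().__init__(n_var=10, n_obj=3, xl=-2.0, xu=2.0)

    def _evaluate(self, a, out, *args, **kwargs):
        # Faithfulness proxy (maximised, hence negated): cosine similarity between
        # the attribution and the per-feature output drops.
        faith = -np.dot(a, drops) / (np.linalg.norm(a) * np.linalg.norm(drops) + 1e-12)
        # Sensitivity/robustness proxy (minimised): how poorly the attribution, read
        # as a local linear surrogate, tracks the model on points near x0.
        sens = np.mean((X_pert @ a - model(X_pert)) ** 2)
        # Complexity proxy (minimised): entropy of the normalised |a|; low entropy
        # means the explanation concentrates on a few features.
        p = np.abs(a) / (np.abs(a).sum() + 1e-12) + 1e-12
        comp = -np.sum(p * np.log(p))
        out["F"] = [faith, sens, comp]

res = minimize(AttributionProblem(), NSGA2(pop_size=40), ("n_gen", 60),
               seed=1, verbose=False)
print(res.F)  # objective values of the returned trade-off explanations

Each row of res.F corresponds to a different faithfulness/sensitivity/complexity trade-off, which mirrors the paper's point that a single run can supply a diverse set of explanations from which different stakeholders can choose according to their requirements.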