A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods

Cited by: 18
Authors
Vilone, Giulia [1 ]
Longo, Luca [1 ]
Affiliations
[1] Technol Univ Dublin, Appl Intelligence Res Ctr, Sch Comp Sci, Artificial Intelligence Cognit Load Res Lab, Dublin, Ireland
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE | 2021年 / 4卷
Keywords
explainable artificial intelligence; rule extraction; method comparison and evaluation; metrics of explainability; method automatic ranking; SYMBOLIC RULES; INTERPRETABILITY; CLASSIFICATION; NETWORKS; SYSTEMS;
DOI
10.3389/frai.2021.717899
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Understanding the inferences of data-driven, machine-learned models can be seen as a process that discloses the relationships between their inputs and outputs. These relationships can be represented as a set of inference rules. However, models usually do not make these rules explicit to their end-users, who consequently perceive them as black boxes and may not trust their predictions. Scholars have therefore proposed several methods for extracting rules from data-driven, machine-learned models to explain their logic. However, limited work exists on the evaluation and comparison of these methods. This study proposes a novel comparative approach that evaluates and compares the rulesets produced by five model-agnostic, post-hoc rule extractors using eight quantitative metrics. The Friedman test was then employed to check whether any method consistently performed better than the others, in terms of the selected metrics, and could therefore be considered superior. Findings show that these metrics do not provide sufficient evidence to single out one method as superior to the others. However, used together, they form a tool, applicable to any rule-extraction method and machine-learned model, that highlights the strengths and weaknesses of the rule extractors across applications in an objective and straightforward manner, without any human intervention. They thus capture distinct aspects of explainability and provide researchers and practitioners with vital insights into what a model has learned during training and how it makes its predictions.
Pages: 20
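To illustrate the ranking step described in the abstract, the sketch below shows how per-dataset scores of a single explainability metric, collected for several rule extractors, could be fed to the Friedman test to check whether any extractor consistently ranks ahead of the others. This is a minimal illustration, not code from the paper: the extractor names, the metric values, and the use of `scipy.stats.friedmanchisquare` are assumptions about how such an analysis is commonly set up.

```python
# Minimal sketch (not from the paper): Friedman test over one explainability metric.
# Rows = datasets (blocks), columns = rule extractors (treatments); all values are
# hypothetical scores, e.g. the fraction of predictions a ruleset covers.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

methods = ["ExtractorA", "ExtractorB", "ExtractorC", "ExtractorD", "ExtractorE"]
scores = np.array([
    [0.81, 0.78, 0.74, 0.69, 0.83],  # dataset 1
    [0.66, 0.71, 0.63, 0.70, 0.68],  # dataset 2
    [0.90, 0.85, 0.88, 0.79, 0.91],  # dataset 3
    [0.72, 0.75, 0.70, 0.74, 0.73],  # dataset 4
    [0.58, 0.62, 0.55, 0.61, 0.60],  # dataset 5
])

# Friedman test: do the per-dataset rankings of the extractors differ systematically?
stat, p_value = friedmanchisquare(*[scores[:, j] for j in range(scores.shape[1])])
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.3f}")

# Mean rank per extractor (rank 1 = best score on a dataset), useful for inspecting
# which methods tend to lead when the test rejects the null hypothesis.
ranks = np.apply_along_axis(lambda row: rankdata(-row), 1, scores)
for name, mean_rank in zip(methods, ranks.mean(axis=0)):
    print(f"{name}: mean rank {mean_rank:.2f}")
```

A low p-value would only indicate that the extractors' rankings differ on this one metric; as in the paper, the procedure would be repeated over all metrics before concluding that any method is consistently superior.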
Related Papers
50 records in total
  • [1] Post-hoc Rule Based Explanations for Black Box Bayesian Optimization
    Chakraborty, Tanmay
    Wirth, Christian
    Seifert, Christin
    ARTIFICIAL INTELLIGENCE-ECAI 2023 INTERNATIONAL WORKSHOPS, PT 1, XAI3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, 2023, 2024, 1947 : 320 - 337
  • [2] The level of strength of an explanation: A quantitative evaluation technique for post-hoc XAI methods
    Bello, Marilyn
    Amador, Rosalis
    Garcia, Maria-Matilde
    Del Ser, Javier
    Mesejo, Pablo
    Cordon, Oscar
    PATTERN RECOGNITION, 2025, 161
  • [3] LIMREF: Local Interpretable Model Agnostic Rule-Based Explanations for Forecasting, with an Application to Electricity Smart Meter Data
    Rajapaksha, Dilini
    Bergmeir, Christoph
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 12098 - 12107
  • [4] Preference-based and local post-hoc explanations for recommender systems
    Brunot, Leo
    Canovas, Nicolas
    Chanson, Alexandre
    Labroche, Nicolas
    Verdeaux, Willeme
    INFORMATION SYSTEMS, 2022, 108
  • [5] RULE-BASED METHODS FOR ELECTROENCEPHALOGRAM EVALUATION
    BOURNE, JR
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 1983, 30 (08) : 550 - 550
  • [6] Ontology-Based Post-Hoc Explanations via Simultaneous Concept Extraction
    Ponomarev, Andrew
    Agafonov, Anton
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 887 - 890
  • [7] OpenXAI: Towards a Transparent Evaluation of Post hoc Model Explanations
    Agarwal, Chirag
    Krishna, Satyapriya
    Saxena, Eshika
    Pawelczyk, Martin
    Johnson, Nari
    Puri, Isha
    Zitnik, Marinka
    Lakkaraju, Himabindu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [8] A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers
    Mekonnen, Ephrem Tibebe
    Longo, Luca
    Dondio, Pierpaolo
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [9] Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility
    Cui, Sen
    Pan, Weishen
    Zhang, Changshui
    Wang, Fei
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 207 - 217
  • [10] GLOR-FLEX: Local to Global Rule-based EXplanations for Federated Learning
    Haffar, Rami
    Naretto, Francesca
    Sanchez, David
    Monreale, Anna
    Domingo-Ferrer, Josep
    2024 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, FUZZ-IEEE 2024, 2024,