ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework

Times Cited: 0
Authors
Yapicioglu, Fatima Rabia [1 ]
Stramiglio, Alessandra [1 ]
Vitali, Fabio [1 ]
Affiliations
[1] Univ Bologna, Dept Comp Sci & Engn, DISI, Via Zamboni 33, I-40126 Bologna, Italy
Source
EXPLAINABLE ARTIFICIAL INTELLIGENCE, PT III, XAI 2024 | 2024, Vol. 2155
Keywords
Generative Explainability; Conformal Prediction; Uncertainty Estimation; Model-Agnostic Explainability
DOI
10.1007/978-3-031-63800-8_14
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Conformal inference, or conformal prediction, is a statistical method that yields robust uncertainty bounds for the predictions of black-box models without assuming any particular data distribution. It has emerged as a simple practice for establishing uncertainty intervals, especially in critical scenarios. Given a user-defined probability threshold, conformal inference ensures that the resulting sets (the predicted interval in regression tasks, or the prediction set in classification scenarios) reliably contain the actual value. For instance, by defining a threshold probability, we can compute price or quality-tier ranges for pre-owned cars, with the assurance that the actual values will fall within these intervals. While such models offer transparency in terms of uncertainty quantification, they often fall short in explainability: it is difficult to understand which factors drive changes in conformal metrics such as set size and coverage, and thus the formation of prediction-set outputs. Our paper introduces a comprehensive global explainability framework based on conformal inference, addressing the lack of support for prediction-set outputs across various classifiers. This understanding not only enhances transparency but also ensures verifiability in identifying the factors that drive changes in conformal metrics and the formation of prediction sets, which are guaranteed to contain the actual value, with the help of counterfactual instances of calibration sets. Moreover, ConformaSight's ability to capture and rank significant features that boost classifier coverage enables it to effectively identify the minimal dataset required for optimal model performance. We also showcase the flexibility of employing user-defined thresholds and re-calibration techniques to generate robust and reliable global feature-importance estimates on test sets with significantly diverse distributions, obtained by perturbing the original test sets.
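To make the abstract's notions of coverage, set size, and a user-defined probability threshold concrete, the following is a minimal sketch of standard split conformal classification. It is not the authors' ConformaSight code; the model, dataset, and variable names are illustrative assumptions. A held-out calibration set determines a score threshold so that prediction sets contain the true label with probability at least 1 - alpha.

```python
# Split conformal classification sketch (illustrative, not the paper's implementation):
# calibrate a nonconformity-score threshold, then form prediction sets on test data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

alpha = 0.1  # user-defined miscoverage level: target coverage is 1 - alpha = 90%

# Nonconformity score on the calibration set: 1 minus the probability of the true class.
cal_probs = model.predict_proba(X_cal)
cal_scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

# Conformal quantile with the finite-sample correction (n + 1 in the numerator).
n = len(cal_scores)
q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
qhat = np.quantile(cal_scores, q_level, method="higher")

# Prediction set for each test point: every class whose score falls below the threshold.
test_probs = model.predict_proba(X_test)
pred_sets = (1.0 - test_probs) <= qhat

# The two conformal metrics the abstract refers to.
coverage = pred_sets[np.arange(len(y_test)), y_test].mean()
avg_set_size = pred_sets.sum(axis=1).mean()
print(f"empirical coverage: {coverage:.2f}, average set size: {avg_set_size:.2f}")
```

A global explainability analysis in the spirit of the paper would then perturb input features and observe how `coverage` and `avg_set_size` respond, ranking features by their effect on these metrics.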
Pages: 270-293 (24 pages)