Using Explanations to Estimate the Quality of Computer Vision Models

Cited by: 0
Authors
Oliveira, Filipe [1]
Carneiro, Davide [1,2]
Pereira, Joao [3]
Affiliations
[1] INESC TEC, Rua Dr Roberto Frias, P-4200465 Porto, Portugal
[2] Politecn Porto, ESTG, Felgueiras, Portugal
[3] Politecn Porto, CIICESI, ESTG, Felgueiras, Portugal
Source
HUMAN-CENTRED TECHNOLOGY MANAGEMENT FOR A SUSTAINABLE FUTURE, VOL 2, IAMOT | 2025
Keywords
Machine learning; Computer vision; Explainability
DOI
10.1007/978-3-031-72494-7_29
Chinese Library Classification: F [Economics]
Discipline code: 02
Abstract
Explainable AI (xAI) emerged as one way of addressing the interpretability issues of so-called black-box models. Most xAI artifacts proposed so far were designed, as expected, for human users. In this work, we posit that such artifacts can also be used by computer systems. Specifically, we propose a set of metrics derived from LIME explanations that can be used to ascertain the quality of each output of an underlying image classification model. We validate these metrics against quantitative human feedback and identify four potentially interesting metrics for this purpose. This research is particularly useful in concept drift scenarios, in which models are deployed into production with no new labelled data to continuously evaluate them, making it impossible to know the model's current performance.
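The paper's concrete metrics are not listed in this record, but the general idea can be illustrated. A LIME image explanation assigns a weight to each superpixel of the input; simple summary statistics over those weights can then serve as per-prediction quality signals. The sketch below is purely illustrative (the metric names and the example weights are assumptions, not the authors' definitions):

```python
# Illustrative sketch, not the paper's actual metrics: derive simple
# quality signals from LIME-style superpixel weights for one prediction.

def explanation_metrics(weights):
    """weights: list of (superpixel_id, weight) pairs, as produced by a
    LIME image explanation for the predicted class."""
    abs_w = sorted((abs(w) for _, w in weights), reverse=True)
    total = sum(abs_w) or 1.0
    positive = [w for _, w in weights if w > 0]
    return {
        # Fraction of total attribution carried by the top-3 superpixels:
        # a focused explanation may suggest a more reliable prediction.
        "concentration_top3": sum(abs_w[:3]) / total,
        # Share of superpixels supporting (rather than opposing) the label.
        "positive_ratio": len(positive) / len(weights),
        # Mean absolute weight: overall strength of the evidence.
        "mean_abs_weight": total / len(weights),
    }

# Hypothetical explanation with five superpixels.
example = [(0, 0.42), (1, 0.18), (2, -0.05), (3, 0.02), (4, -0.01)]
metrics = explanation_metrics(example)
```

In a concept drift setting, such metrics could be tracked over incoming unlabelled images: a shift in their distribution would hint at degrading model quality without requiring new ground-truth labels.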
Pages: 293-301
Page count: 9
References
8 items
  • [1] The MVTec Anomaly Detection Dataset: A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection
    Bergmann, Paul
    Batzner, Kilian
    Fauser, Michael
    Sattlegger, David
    Steger, Carsten
International Journal of Computer Vision, 2021, 129(4): 1038-1059
  • [2] Bharadiya, J. P., 2023, International Journal of Computer (IJC), 48: 123
  • [3] A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images
    Messina, Pablo
    Pino, Pablo
    Parra, Denis
    Soto, Alvaro
    Besa, Cecilia
    Uribe, Sergio
    Andia, Marcelo
    Tejos, Cristian
    Prieto, Claudia
    Capurro, Daniel
ACM Computing Surveys, 2022, 54(10s)
  • [4] Explainable AI: from black box to glass box
    Rai, Arun
Journal of the Academy of Marketing Science, 2020, 48(1): 137-141
  • [5] Redmon, J., 2016, arXiv:1506.02640, DOI 10.48550/arXiv.1506.02640
  • [6] Ribeiro, M. T., 2016, Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., p. 1135, DOI 10.1145/2939672.2939778
  • [7] ImageNet Large Scale Visual Recognition Challenge
    Russakovsky, Olga
    Deng, Jia
    Su, Hao
    Krause, Jonathan
    Satheesh, Sanjeev
    Ma, Sean
    Huang, Zhiheng
    Karpathy, Andrej
    Khosla, Aditya
    Bernstein, Michael
    Berg, Alexander C.
    Fei-Fei, Li
International Journal of Computer Vision, 2015, 115(3): 211-252
  • [8] A survey on machine learning for recurring concept drifting data streams
    Suarez-Cetrulo, Andres L.
    Quintana, David
    Cervantes, Alejandro
Expert Systems with Applications, 2023, 213