Explainable Model-Agnostic Similarity and Confidence in Face Verification

Cited by: 7
Authors
Knoche, Martin [1 ]
Teepe, Torben [1 ]
Hoermann, Stefan [1 ]
Rigoll, Gerhard [1 ]
Affiliations
[1] Tech Univ Munich, Arcisstr 23, D-80333 Munich, Germany
Keywords
MARGIN LOSS;
DOI
10.1109/WACVW58289.2023.00078
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recently, face recognition systems have demonstrated remarkable performance and thus gained a vital role in our daily life. They already surpass humans at face verification in many scenarios. However, they lack explanations for their predictions. In contrast to human operators, typical face recognition systems generate only binary decisions, without further explanation or insight into those decisions. This work focuses on explanations for face recognition systems, which are vital for developers and operators. First, we introduce a confidence score for these systems, based on the facial feature distance between two input images and the distribution of distances across a dataset. Second, we establish a novel visualization approach to obtain more meaningful predictions from a face recognition system, which maps the distance deviation resulting from a systematic occlusion of the images. The result is blended with the original images and highlights similar and dissimilar facial regions. Lastly, we calculate confidence scores and explanation maps for several state-of-the-art face verification datasets and release the results on a web platform. We optimize the platform for user-friendly interaction and hope to further improve the understanding of machine learning decisions. The source code is available on GitHub, and the web platform is publicly available at http://explainable-face-verification.ey.r.appspot.com.
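The abstract's two components, a distance-based confidence score and an occlusion-driven explanation map, can be illustrated with a short sketch. The Python below is a minimal, hedged reconstruction of the general idea, not the authors' released code: the embed backbone, the cosine distance, the patch size, and the exact confidence formula are placeholder assumptions made for illustration (the official implementation is the one linked on GitHub).

import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for any pre-trained face recognition backbone that
    # maps an aligned face image (H, W, 3) to a fixed-length feature vector.
    raise NotImplementedError("plug in a face recognition model here")

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Distance between two embeddings; smaller means more similar.
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def confidence_score(distance: float, genuine_dists, impostor_dists) -> float:
    # One possible way to turn a raw pair distance into a confidence using the
    # empirical distance distributions of a labeled dataset: compare how many
    # genuine pairs are farther apart than this pair with how many impostor
    # pairs are closer than this pair.
    genuine_dists = np.asarray(genuine_dists)
    impostor_dists = np.asarray(impostor_dists)
    p_gen = np.mean(genuine_dists > distance)   # genuine pairs harder than this one
    p_imp = np.mean(impostor_dists < distance)  # impostor pairs that look closer
    return float(p_gen / (p_gen + p_imp + 1e-12))

def explanation_map(img_a: np.ndarray, img_b: np.ndarray,
                    patch: int = 24, stride: int = 12, fill: float = 0.0) -> np.ndarray:
    # Systematically occlude img_a with a square patch, re-embed it, and record
    # how the distance to img_b deviates from the unoccluded baseline.
    # Positive deviation: hiding the region hurt similarity, so it is a
    # "similar" region; negative deviation marks a dissimilar region. The
    # heatmap can be colorized and alpha-blended with img_a for visualization.
    emb_b = embed(img_b)
    base = cosine_distance(embed(img_a), emb_b)
    h, w = img_a.shape[:2]
    heat = np.zeros((h, w), dtype=np.float32)
    hits = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = img_a.copy()
            occluded[y:y + patch, x:x + patch] = fill
            deviation = cosine_distance(embed(occluded), emb_b) - base
            heat[y:y + patch, x:x + patch] += deviation
            hits[y:y + patch, x:x + patch] += 1.0
    return heat / np.maximum(hits, 1.0)

In a typical use, the explanation map would be computed for each image of a pair in turn, and the confidence score evaluated against the genuine/impostor distance histograms of whichever benchmark dataset is in use.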
Pages: 711-718
Page count: 8
Related Papers
50 items in total
  • [1] CRFace: Confidence Ranker for Model-Agnostic Face Detection Refinement
    Vesdapunt, Noranart
    Wang, Baoyuan
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 1674 - 1684
  • [2] An Explainable Model-Agnostic Algorithm for CNN-based Biometrics Verification
    Alonso-Fernandez, Fernando
    Hernandez-Diaz, Kevin
    Buades, Jose M.
    Tiwari, Prayag
    Bigun, Josef
    2023 IEEE INTERNATIONAL WORKSHOP ON INFORMATION FORENSICS AND SECURITY, WIFS, 2023,
  • [3] A Generic and Model-Agnostic Exemplar Synthetization Framework for Explainable AI
    Barbalau, Antonio
    Cosma, Adrian
    Ionescu, Radu Tudor
    Popescu, Marius
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 190 - 205
  • [4] Model-agnostic explainable artificial intelligence for object detection in image data
    Moradi, Milad
    Yan, Ke
    Colwell, David
    Samwald, Matthias
    Asgari, Rhona
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 137
  • [5] A Reusable Model-agnostic Framework for Faithfully Explainable Recommendation and System Scrutability
    Xu, Zhichao
    Zeng, Hansi
    Tan, Juntao
    Fu, Zuohui
    Zhang, Yongfeng
    Ai, Qingyao
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 42 (01)
  • [6] Improving understandability of feature contributions in model-agnostic explainable AI tools
    Hadash, Sophia
    Willemsen, Martijn C.
    Snijders, Chris
    IJsselsteijn, Wijnand A.
    PROCEEDINGS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI' 22), 2022,
  • [7] Assessment of Software Vulnerability Contributing Factors by Model-Agnostic Explainable AI
    Li, Ding
    Liu, Yan
    Huang, Jun
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2024, 6 (02): : 1087 - 1113
  • [8] Model-agnostic vs. Model-intrinsic Interpretability for Explainable Product Search
    Ai, Qingyao
    Narayanan, Lakshmi R.
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 5 - 15
  • [9] Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review
    Ladbury, Colton
    Zarinshenas, Reza
    Semwal, Hemal
    Tam, Andrew
    Vaidehi, Nagarajan
    Rodin, Andrei S.
    Liu, An
    Glaser, Scott
    Salgia, Ravi
    Amini, Arya
    TRANSLATIONAL CANCER RESEARCH, 2022, : 3853 - 3868
  • [10] Computational Evaluation of Model-Agnostic Explainable AI Using Local Feature Importance in Healthcare
    Erdeniz, Seda Polat
    Schrempf, Michael
    Kramer, Diether
    Rainer, Peter P.
    Felfernig, Alexander
    Tran, Trang
    Burgstaller, Tamim
    Lubos, Sebastian
    ARTIFICIAL INTELLIGENCE IN MEDICINE, AIME 2023, 2023, 13897 : 114 - 119