Explaining Deep Face Algorithms Through Visualization: A Survey

Times Cited: 0
Authors
John, Thrupthi Ann [1 ]
Balasubramanian, Vineeth N. [2 ]
Jawahar, C. V. [1 ]
Affiliations
[1] Int Inst Informat Technol Hyderabad, Ctr Visual Informat Technol, Hyderabad 500032, India
[2] Indian Inst Technol Hyderabad, Dept Comp Sci & Engn, Hyderabad 502285, India
Source
IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE | 2024, Vol. 6, No. 1
Keywords
Face recognition; Visualization; Task analysis; Surveys; Biological system modeling; Artificial intelligence; Behavioral sciences; Deep neural networks; face understanding; explainability; accountability; transparency; interpretability; XAI; fairness; survey; NEURAL-NETWORK;
DOI
10.1109/TBIOM.2023.3319837
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Although current deep models for face tasks surpass human performance on some benchmarks, we do not understand how they work. As a result, we cannot predict how they will react to novel inputs, which leads to catastrophic failures and unwanted biases in the algorithms. Explainable AI helps bridge this gap, but currently there are very few visualization algorithms designed for faces. This work undertakes a first-of-its-kind meta-analysis of explainability algorithms in the face domain. We explore the nuances and caveats of adapting general-purpose visualization algorithms to the face domain, illustrated by computing visualizations on popular face models. We review existing face explainability works and reveal valuable insights into the structure and hierarchy of face networks. We also determine the design considerations for practical face visualizations accessible to AI practitioners by conducting a user study on the utility of various explainability algorithms.
Pages: 15-29 (15 pages)