Clinical domain knowledge-derived template improves post hoc AI explanations in pneumothorax classification

Cited by: 2
Authors
Yuan, Han [1]
Hong, Chuan [2]
Jiang, Peng-Tao [3]
Zhao, Gangming [4]
Tran, Nguyen Tuan Anh [5]
Xu, Xinxing [6]
Yan, Yet Yen [7]
Liu, Nan [1,8,9]
Affiliations
[1] Duke NUS Med Sch, Ctr Quantitat Med, 8 Coll Rd, Singapore 169857, Singapore
[2] Duke Univ, Dept Biostat & Bioinformat, Durham, NC USA
[3] Nankai Univ, Coll Comp Sci, Tianjin, Peoples R China
[4] Univ Hong Kong, Fac Engn, Hong Kong, Peoples R China
[5] Singapore Gen Hosp, Dept Diagnost Radiol, Singapore, Singapore
[6] Agcy Sci Technol & Res, Inst High Performance Comp, Singapore, Singapore
[7] Changi Gen Hosp, Dept Radiol, Singapore, Singapore
[8] Duke NUS Med Sch, Programme Hlth Serv & Syst Res, Singapore, Singapore
[9] Natl Univ Singapore, Inst Data Sci, Singapore, Singapore
Keywords
Pneumothorax Diagnosis; Convolutional Neural Networks; Explainable Artificial Intelligence; Saliency Map; Grad-CAM; Integrated Gradients
DOI
10.1016/j.jbi.2024.104673
CLC Number
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Objective: Pneumothorax is an acute thoracic disease caused by an abnormal collection of air between the lungs and the chest wall. Recently, artificial intelligence (AI), especially deep learning (DL), has been increasingly employed to automate pneumothorax diagnosis. To address the opaqueness often associated with DL models, explainable artificial intelligence (XAI) methods have been introduced to outline the regions related to pneumothorax. However, these explanations sometimes diverge from the actual lesion areas, highlighting the need for further improvement.

Method: We propose a template-guided approach that incorporates clinical knowledge of pneumothorax into the model explanations generated by XAI methods, thereby enhancing their quality. Utilizing a single lesion delineation created by radiologists, our approach first generates a template representing the potential areas of pneumothorax occurrence. This template is then superimposed on the model explanations to filter out extraneous attributions that fall outside the template's boundaries. To validate its efficacy, we carried out a comparative analysis of three XAI methods (Saliency Map, Grad-CAM, and Integrated Gradients) with and without template guidance when explaining two DL models (VGG-19 and ResNet-50) on two real-world datasets (SIIM-ACR and ChestX-Det).

Results: The proposed approach consistently improved the baseline XAI methods across twelve benchmark scenarios spanning the three XAI methods, two DL models, and two datasets. Measured as the improvement over baseline performance, the average gains were 97.8% in Intersection over Union (IoU) and 94.1% in Dice Similarity Coefficient (DSC) when comparing model explanations against ground-truth lesion areas. We further visualized baseline and template-guided model explanations on radiographs to showcase the performance of our approach.

Conclusions: In the context of pneumothorax diagnosis, we propose a template-guided approach for improving model explanations. The approach not only aligns model explanations more closely with clinical insight but also extends readily to other thoracic diseases. We anticipate that template guidance will forge a novel approach to elucidating AI models by integrating clinical domain expertise.
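As a concrete illustration of the Method described above, the following is a minimal sketch of template-guided filtering and of the IoU/DSC evaluation; it is not the authors' released code. It assumes the anatomical template and the XAI attribution map are pre-aligned 2-D NumPy arrays of the same shape, and the function names (template_filter, binarize, iou, dice) and the 0.5 binarization threshold are illustrative assumptions.

```python
import numpy as np

def template_filter(attribution: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Zero out attributions that fall outside the clinically plausible region.

    attribution: 2-D map from an XAI method (Saliency Map, Grad-CAM,
                 or Integrated Gradients), aligned with the radiograph.
    template:    binary mask (1 = potential pneumothorax area) built
                 from a radiologist's lesion delineation.
    """
    return attribution * template  # element-wise masking

def binarize(attribution: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Min-max normalize an attribution map and threshold it to a binary mask."""
    span = np.ptp(attribution)
    norm = (attribution - attribution.min()) / (span + 1e-8)
    return (norm >= threshold).astype(np.uint8)

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between a binary explanation and the lesion mask."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter) / max(float(union), 1.0)

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice Similarity Coefficient between a binary explanation and the lesion mask."""
    inter = np.logical_and(pred, target).sum()
    denom = float(pred.sum() + target.sum())
    return 2.0 * float(inter) / max(denom, 1.0)

# Toy usage with random stand-ins for a 512x512 radiograph
rng = np.random.default_rng(0)
attr = rng.random((512, 512))               # hypothetical attribution map
tmpl = np.zeros((512, 512), dtype=np.uint8)
tmpl[80:300, 150:420] = 1                   # hypothetical pneumothorax-prone region
gt = np.zeros_like(tmpl)
gt[120:200, 200:330] = 1                    # hypothetical ground-truth lesion

guided = binarize(template_filter(attr, tmpl))
print(iou(guided, gt), dice(guided, gt))
```

Note the design point implied by the abstract: the guidance is purely post hoc, zeroing or rescaling existing attributions rather than retraining the model, which is why it composes with any gradient- or CAM-based XAI method.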
Pages: 10