Assessing the Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging

Times Cited: 148
Authors
Arun, Nishanth [1 ,2 ]
Gaw, Nathan [3 ]
Singh, Praveer [1 ]
Chang, Ken [1 ,4 ]
Aggarwal, Mehak [1 ]
Chen, Bryan [1 ,4 ]
Hoebel, Katharina [1 ,4 ]
Gupta, Sharut [1 ]
Patel, Jay [1 ,4 ]
Gidwani, Mishka [1 ]
Adebayo, Julius [4 ]
Li, Matthew D. [1 ]
Kalpathy-Cramer, Jayashree [1 ]
Affiliations
[1] Harvard Med Sch, Massachusetts Gen Hosp, Athinoula A Martinos Ctr Biomed Imaging, Dept Radiol, 149 13th St, Boston, MA 02129 USA
[2] Shiv Nadar Univ, Dept Comp Sci, Greater Noida, India
[3] Air Force Inst Technol, Grad Sch Engn & Management, Dept Operat Sci, Dayton, OH USA
[4] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Funding
National Institutes of Health (NIH), USA;
Keywords
Technology Assessment; Technical Aspects; Feature Detection; Convolutional Neural Network (CNN);
DOI
10.1148/ryai.2021200267
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Purpose: To evaluate the trustworthiness of saliency maps for abnormality localization in medical imaging.
Materials and Methods: Using two large publicly available radiology datasets (Society for Imaging Informatics in Medicine-American College of Radiology Pneumothorax Segmentation dataset and Radiological Society of North America Pneumonia Detection Challenge dataset), the performance of eight commonly used saliency map techniques was quantified with regard to (a) localization utility (segmentation and detection), (b) sensitivity to model weight randomization, (c) repeatability, and (d) reproducibility. Their performance was compared against baseline methods and localization network architectures, using area under the precision-recall curve (AUPRC) and structural similarity index measure (SSIM) as metrics.
Results: All eight saliency map techniques failed at least one of the criteria and were inferior in performance compared with localization networks. For pneumothorax segmentation, the AUPRC ranged from 0.024 to 0.224, while a U-Net achieved a significantly superior AUPRC of 0.404 (P < .005). For pneumonia detection, the AUPRC ranged from 0.160 to 0.519, while a RetinaNet achieved a significantly superior AUPRC of 0.596 (P < .005). Five of the eight saliency methods failed the model randomization test on the segmentation dataset and two failed it on the detection dataset, suggesting that these methods are not sensitive to changes in model parameters. The repeatability and reproducibility of the majority of the saliency methods were worse than those of the localization networks for both the segmentation and detection datasets.
Conclusion: The use of saliency maps in the high-risk domain of medical imaging warrants additional scrutiny, and we recommend that detection or segmentation models be used if localization is the desired output of the network. Supplemental material is available for this article. (C) RSNA, 2021.
Pages: 12
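The evaluation summarized in the abstract rests on two per-image computations: AUPRC of a continuous saliency map scored against a binary ground-truth mask, and SSIM between pairs of saliency maps (for example, from a trained model versus a weight-randomized model, or across repeated training runs). The Python sketch below shows one plausible way to compute both metrics for a single image; it is not the authors' code, and the array names, shapes, and use of scikit-learn and scikit-image are illustrative assumptions.

# Minimal sketch of the two metrics named in the abstract (AUPRC, SSIM).
# Not the authors' implementation; array names and shapes are assumptions.
import numpy as np
from sklearn.metrics import average_precision_score   # average precision ~ AUPRC
from skimage.metrics import structural_similarity     # SSIM

def saliency_auprc(saliency_map: np.ndarray, ground_truth_mask: np.ndarray) -> float:
    """Score every pixel's saliency value against the binary localization label."""
    scores = saliency_map.ravel().astype(np.float64)
    labels = (ground_truth_mask.ravel() > 0).astype(np.uint8)
    return float(average_precision_score(labels, scores))

def saliency_ssim(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """SSIM between two saliency maps (repeatability, reproducibility, or randomization tests)."""
    a = map_a.astype(np.float64)
    b = map_b.astype(np.float64)
    data_range = max(a.max(), b.max()) - min(a.min(), b.min())
    return float(structural_similarity(a, b, data_range=data_range or 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sal = rng.random((256, 256))                 # stand-in saliency map
    mask = np.zeros((256, 256), dtype=np.uint8)  # stand-in pneumothorax mask
    mask[100:150, 100:150] = 1
    print("AUPRC:", saliency_auprc(sal, mask))
    print("SSIM :", saliency_ssim(sal, rng.random((256, 256))))

Per-image values of this kind would then be aggregated across the test set and, as reported in the abstract, compared against dedicated localization networks (a U-Net for segmentation and a RetinaNet for detection).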