Influence based explainability of brain tumors segmentation in magnetic resonance imaging

Cited by: 0
Authors
Torda, Tommaso [1 ,2 ]
Ciardiello, Andrea [1 ,2 ]
Gargiulo, Simona [1 ,2 ]
Grillo, Greta [2 ]
Scardapane, Simone [1 ,2 ]
Voena, Cecilia [1 ,2 ]
Giagu, Stefano [1 ,2 ]
Affiliations
[1] Sapienza Univ Rome, Piazzale Aldo Moro 5, I-00185 Rome, Italy
[2] INFN, Sez Roma, Piazzale Aldo Moro 5, I-00185 Rome, Italy
Keywords
Artificial intelligence; Explainability; Deep learning; Healthcare; Brain tumors; Segmentation
DOI
10.1007/s13748-025-00367-y
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In recent years, Artificial Intelligence has emerged as a fundamental tool in medical applications. Despite this rapid development, deep neural networks remain black boxes that are difficult to explain, and this represents a major limitation for their use in clinical practice. In this paper we focus on the task of segmenting medical images, where most explainability methods proposed so far provide a visual explanation in terms of an input saliency map. The aim of this work is to extend, implement, and test an alternative influence-based explainability algorithm (TracIn), originally proposed for classification tasks, on the challenging clinical problem of multiclass segmentation of brain tumors in multimodal magnetic resonance imaging. We verify the faithfulness of the proposed algorithm by linking the similarities of the network's latent representations to the TracIn output. We further test the capacity of the algorithm to provide local and global explanations, and we suggest that it can be adopted as a tool to select the most relevant features used in the decision process. The method is generalizable to all semantic segmentation tasks where classes are mutually exclusive, which is the standard setting for such tasks.
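As a point of reference, a minimal sketch of the influence score the work builds on: the checkpoint-based TracIn score between a training example z and a test example z', in its standard classification formulation (the checkpoints w_{t_i}, learning rates \eta_i, loss \ell, and checkpoint count k are not given in this record and are assumed here), is

\[
\operatorname{TracInCP}(z, z') \;=\; \sum_{i=1}^{k} \eta_i \, \nabla_{w}\ell(w_{t_i}, z) \cdot \nabla_{w}\ell(w_{t_i}, z'),
\]

i.e. a sum over saved training checkpoints of dot products between loss gradients. The extension to multiclass segmentation summarized above would evaluate such scores with a segmentation loss in place of the classification loss; the exact loss and checkpoint schedule are not specified in this record.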
Pages: 15