Adapting Classification Neural Network Architectures for Medical Image Segmentation Using Explainable AI

Cited by: 0
Authors
Nikulins, Arturs [1 ]
Edelmers, Edgars [1 ,2 ,3 ]
Sudars, Kaspars [3 ]
Polaka, Inese [1 ]
Affiliations
[1] Riga Technical University, Faculty of Computer Science, Information Technology and Energy, LV-1048 Riga, Latvia
[2] Riga Stradins University, Faculty of Medicine, LV-1010 Riga, Latvia
[3] Institute of Electronics and Computer Science, LV-1006 Riga, Latvia
Keywords
medical imaging; classification models; image segmentation; explainable artificial intelligence; neural networks
DOI
10.3390/jimaging11020055
CLC Number
TB8 [Photographic Technology]
Subject Classification Code
0804
Abstract
Segmentation neural networks are widely used in medical imaging to identify anomalies that may affect patient health. Despite their effectiveness, these networks face significant challenges, including the need for extensive annotated patient data, time-consuming manual segmentation, and restricted data access due to privacy concerns. Classification neural networks, much like segmentation networks, learn the features needed to identify objects during training, yet they require only image-level labels rather than pixel-level masks. This paper leverages this characteristic, combined with explainable artificial intelligence (XAI) techniques, to address the challenges of segmentation. By adapting classification neural networks for segmentation tasks, the proposed approach reduces the dependency on manual segmentation. To demonstrate the concept, the Medical Segmentation Decathlon 'Brain Tumours' dataset was utilised: a ResNet classification neural network was trained, and XAI tools were applied to its predictions to generate segmentation-like outputs. The findings show that GuidedBackprop is among the most efficient and effective methods, producing heatmaps that closely resemble segmentation masks by highlighting the entirety of the target object.
Pages: 12
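As a rough sketch of the pipeline described in the abstract, the example below applies GuidedBackprop from the Captum library to a PyTorch ResNet classifier and thresholds the resulting heatmap into a segmentation-like mask. The model variant (resnet18), the input shape, and the 0.2 threshold are illustrative assumptions, not details taken from the paper.

```python
import torch
from torchvision.models import resnet18
from captum.attr import GuidedBackprop

# Assumed setup: a ResNet classifier already trained to detect
# tumours on 2D MRI slices (image-level labels, no pixel masks).
model = resnet18(num_classes=2)
model.eval()

# One input slice; the 3-channel 224x224 shape is a placeholder.
x = torch.randn(1, 3, 224, 224)

# Attribute the predicted class back to input pixels with GuidedBackprop.
target = model(x).argmax(dim=1)
attributions = GuidedBackprop(model).attribute(x, target=target)

# Collapse channels and normalise into a saliency heatmap, then
# threshold it into a rough segmentation-like binary mask
# (the 0.2 cut-off is an illustrative choice, not from the paper).
heatmap = attributions.abs().sum(dim=1).squeeze(0)
heatmap = heatmap / (heatmap.max() + 1e-8)
mask = (heatmap > 0.2).float()
print(mask.shape)  # torch.Size([224, 224])
```

GuidedBackprop propagates only positive gradients through ReLU layers, which tends to yield saliency maps that cover the object more cleanly than raw input gradients; this is consistent with the abstract's observation that its heatmaps highlight the entirety of the target object.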