Evaluating Retinal Disease Diagnosis with an Interpretable Lightweight CNN Model Resistant to Adversarial Attacks

Cited by: 9
Authors
Bhandari, Mohan [1]
Shahi, Tej Bahadur [2,3]
Neupane, Arjun [2 ]
Affiliations
[1] Samriddhi College, Department of Science & Technology, Bhaktapur 44800, Nepal
[2] Central Queensland University, School of Engineering and Technology, Rockhampton, QLD 4701, Australia
[3] Tribhuvan University, Central Department of Computer Science & IT, Kathmandu 44600, Nepal
Funding
UK Research and Innovation (UKRI);
Keywords
adversarial attacks; deep learning; health informatics; lightweight CNN; retinal image classification; automated detection; images
DOI
10.3390/jimaging9100219
Chinese Library Classification
TB8 [Photographic technology];
Subject Classification Code
0804;
Abstract
Optical Coherence Tomography (OCT) is an essential diagnostic tool enabling the detection of retinal diseases and anomalies. Manual assessment of these anomalies by specialists remains the norm, but its labor-intensive nature calls for more efficient strategies. This study therefore proposes a Convolutional Neural Network (CNN) to classify images from the OCT dataset into four categories: Choroidal NeoVascularization (CNV), Diabetic Macular Edema (DME), Drusen, and Normal. The average k-fold (k = 10) training accuracy, test accuracy, validation accuracy, training loss, test loss, and validation loss of the proposed model are 96.33%, 94.29%, 94.12%, 0.1073, 0.2002, and 0.1927, respectively. The Fast Gradient Sign Method (FGSM) is employed to introduce non-random noise aligned with the data gradient of the cost function, with varying epsilon values scaling the noise; the model classifies correctly at all noise levels with epsilon below 0.1. Two explainable AI algorithms, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are used to provide human-interpretable explanations that approximate the model's behaviour within the region of a particular retinal image. Additionally, two supplementary datasets, COVID-19 and Kidney Stone, are incorporated to assess the model's robustness and versatility, yielding precision comparable to state-of-the-art methods. With a lightweight CNN of 983,716 parameters and 2.37 × 10^8 floating-point operations (FLOPs), combined with explainable AI strategies, this study contributes to efficient OCT-based diagnosis, underscores its potential for advancing medical diagnostics, and offers support for Internet-of-Medical-Things applications.
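The FGSM step described above follows the standard formulation x_adv = x + epsilon * sign(grad_x J(theta, x, y)). The sketch below illustrates it in PyTorch; it is a minimal illustration assuming a differentiable classifier and a cross-entropy-style loss, and the function and variable names are ours, not the authors' released code.

```python
import torch

def fgsm_attack(model, loss_fn, image, label, epsilon):
    # Illustrative FGSM sketch; `model`, `loss_fn`, and the tensor names
    # are assumptions, not the paper's actual implementation.
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)   # cost whose data gradient drives the attack
    model.zero_grad()
    loss.backward()                       # d(loss)/d(input pixels)
    # Non-random perturbation aligned with the sign of the data gradient
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep pixels in a valid [0, 1] range
```

Sweeping epsilon upward (e.g., from 0.01 to 0.2) and re-evaluating accuracy on the perturbed images reproduces the kind of robustness check reported in the abstract, where the model remains correct for epsilon below 0.1.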
Pages: 20