Enhancing Trust in Alzheimer's Disease Classification using Explainable Artificial Intelligence: Incorporating Local Post Hoc Explanations for a Glass-box Model

Cited by: 0
Authors
Varghese, Abraham [1 ]
George, Ben [1 ]
Sherimon, Vinu [1 ]
Al Shuaily, Huda Salim [2 ]
Affiliations
[1] University of Technology and Applied Sciences, College of Computing and Information Sciences, Muscat, Oman
[2] University of Technology and Applied Sciences, Deputy Vice Chancellor's Office, Muscat, Oman
Keywords
Machine learning; Alzheimer's disease; Interpretable machine learning; LIME; SHAP; Explainable AI; Neural network
DOI
Not available
Chinese Library Classification (CLC)
R5 [Internal Medicine]
Discipline Classification Code
1002; 100201
Abstract
Background: Alzheimer's disease (AD) causes cognitive dysfunction in older people worldwide, making it nearly impossible for them to carry out their daily activities. Given the nature of the disease and its impact on the brain, timely intervention is crucial to delay its onset and slow its progression. Currently, Alzheimer's disease is often diagnosed at a stage too late for effective preventive measures, by which time the disease has already caused significant damage to the brain. Machine learning and deep learning models are critical for classifying demented and non-demented cases, but most highly accurate models are non-linear and opaque, revealing little of the logic behind their predictions. Incorporating interpretability components therefore makes such models more transparent and trustworthy. This study aims to develop diagnostic methods capable of assessing Mild Cognitive Impairment (MCI), the early stage of Alzheimer's disease that precedes irreversible neuronal loss.

Methods: Explainable artificial intelligence (XAI) refers to AI systems that can explain their decisions or predictions. In the context of AD classification, explainable AI systems aim to provide insight into the features a model relies on when making a prediction. XAI thus offers a mechanism for understanding and interpreting the basis of a model's predictions, which is essential for building trust in the system and its results. Accordingly, this work employs a non-linear neural network to distinguish demented from non-demented cases, and incorporates local post hoc explanations to turn it into a glass-box model using XAI techniques such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME).

Results: LIME provided valuable insight into how individual factors influenced the predictions. Notably, factors such as CDR, Age, and ASF aligned with clinical knowledge and proved instrumental in predicting demented cases. Conversely, features such as nWBV, MMSE, and eTIV pushed predictions away from the demented class, highlighting their significance in identifying non-demented cases. Likewise, examining SHAP values yielded a comprehensive understanding of the model's decision-making process in detecting Alzheimer's disease.

Conclusion: Through explainable artificial intelligence (XAI) methods, this study endeavors to develop a dependable and transparent technique for early detection, monitoring, and personalized intervention in Alzheimer's disease.
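To make the Methods concrete, the sketch below shows a minimal Python version of the described pipeline: a small neural-network classifier with local post hoc explanations from LIME and SHAP. This is an illustrative sketch, not the authors' implementation; the synthetic data, network architecture, and sample sizes are assumptions, and only the feature names (Age, MMSE, CDR, eTIV, nWBV, ASF) are taken from the abstract.

```python
# Minimal sketch: non-linear "black-box" classifier made glass-box via
# local post hoc explanations (LIME + SHAP). Data here is synthetic;
# in the paper, OASIS-style clinical/MRI features would be used instead.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
import lime.lime_tabular
import shap

feature_names = ["Age", "MMSE", "CDR", "eTIV", "nWBV", "ASF"]
rng = np.random.default_rng(0)
X = rng.normal(size=(300, len(feature_names)))            # stand-in features
y = (X[:, 2] + 0.5 * X[:, 0] - X[:, 1] > 0).astype(int)   # stand-in labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Non-linear neural network for demented vs. non-demented classification.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# LIME: fit a local surrogate around one test case and list signed
# per-feature contributions to that single prediction.
lime_explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["non-demented", "demented"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=len(feature_names)
)
print(lime_exp.as_list())  # (feature condition, local weight) pairs

# SHAP: model-agnostic KernelExplainer with a small background sample.
background = shap.sample(X_train, 50)
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(X_test[:5])
# Depending on the shap version, this is a list with one array per class
# or a single 3-D array; either way it holds per-case, per-feature
# Shapley-value attributions.
print(shap_values)
```

LIME approximates the network locally with an interpretable surrogate, while KernelExplainer estimates Shapley values; both yield the per-case feature attributions that the Results section interprets for features such as CDR, Age, ASF, nWBV, MMSE, and eTIV.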
Pages: 1471-1478 (8 pages)