HAMMF: Hierarchical attention-based multi-task and multi-modal fusion model for computer-aided diagnosis of Alzheimer's disease

Cited by: 1
Authors
Liu X. [1]
Li W. [1]
Miao S. [1]
Liu F. [2,3,4]
Han K. [5]
Bezabih T.T. [1]
Affiliations
[1] School of Computer Engineering and Science, Shanghai University, Shanghai
[2] Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen
[3] University of Chinese Academy of Sciences, Beijing
[4] BGI-Shenzhen, Shenzhen
[5] Medical and Health Center, Liaocheng People's Hospital, Liaocheng
Keywords
Alzheimer's disease; Attention mechanism; Deep learning; Multi-modal fusion; Multi-task learning; Transformer
DOI
10.1016/j.compbiomed.2024.108564
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative condition, and early intervention can help slow its progression. However, integrating multi-dimensional information through deep convolutional networks inflates the number of model parameters, which affects diagnostic accuracy and efficiency and hinders the clinical deployment of diagnostic models. Multi-modal neuroimaging can offer more precise diagnostic results, while jointly modeling classification and regression tasks can enhance the performance and stability of AD diagnosis. This study proposes a Hierarchical Attention-based Multi-task Multi-modal Fusion model (HAMMF) that leverages multi-modal neuroimaging data to concurrently learn an AD classification task, a cognitive score regression task, and an age regression task using attention-based techniques. First, we preprocess MRI and PET images to obtain two modalities of data, each containing distinct information. Next, we introduce a novel Contextual Hierarchical Attention Module (CHAM) to aggregate multi-modal features. This module employs channel and spatial attention to extract fine-grained pathological features from unimodal image data across various dimensions. Building on these attention-refined features, a Transformer then captures correlated features across the multi-modal inputs. Finally, we adopt multi-task learning to investigate the influence of different variables on diagnosis, treating classification as the primary task and the regressions as secondary tasks to obtain the best multi-task prediction performance. Our experiments used MRI and PET images from 720 subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results show that the proposed model achieves an overall accuracy of 93.15% for AD/NC recognition, and the visualization results demonstrate its strong pathological feature recognition performance. © 2024 Elsevier Ltd
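The abstract describes the overall pipeline: attention-refined unimodal branches for MRI and PET, Transformer-based cross-modal fusion, and three task heads (diagnosis, cognitive score, age). The sketch below illustrates that structure in PyTorch under stated assumptions; it is not the paper's implementation. The class names (ChannelSpatialAttention, MultiTaskMultiModalNet), layer sizes, the CBAM-style attention design, the loss weights, and the toy 2-D inputs are all illustrative choices, whereas the published model works on preprocessed 3-D ADNI volumes with its specific CHAM design.

```python
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style stand-in
    for the paper's CHAM; the published design may differ)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: avg+max channel maps -> single weight map.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        ca = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * ca
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        sa = torch.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return x * sa


class MultiTaskMultiModalNet(nn.Module):
    """Two unimodal branches with attention, a Transformer encoder for
    cross-modal fusion, and three task heads (diagnosis, score, age)."""

    def __init__(self, dim: int = 64, num_classes: int = 2):
        super().__init__()

        def branch():
            return nn.Sequential(
                nn.Conv2d(1, dim, kernel_size=3, padding=1),
                nn.BatchNorm2d(dim),
                nn.ReLU(inplace=True),
                ChannelSpatialAttention(dim),
                nn.AdaptiveAvgPool2d(8),        # -> (dim, 8, 8)
            )

        self.mri_branch, self.pet_branch = branch(), branch()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, dim_feedforward=2 * dim, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.cls_head = nn.Linear(dim, num_classes)  # primary: AD vs. NC
        self.score_head = nn.Linear(dim, 1)          # auxiliary: cognitive score
        self.age_head = nn.Linear(dim, 1)            # auxiliary: age

    def forward(self, mri: torch.Tensor, pet: torch.Tensor):
        # Flatten each branch's spatial grid into a token sequence.
        def tokens(feat: torch.Tensor) -> torch.Tensor:
            return feat.flatten(2).transpose(1, 2)   # (B, 64, dim)

        fused = self.fusion(torch.cat([tokens(self.mri_branch(mri)),
                                       tokens(self.pet_branch(pet))], dim=1))
        pooled = fused.mean(dim=1)                   # pool fused tokens
        return self.cls_head(pooled), self.score_head(pooled), self.age_head(pooled)


if __name__ == "__main__":
    model = MultiTaskMultiModalNet()
    mri = torch.randn(2, 1, 96, 96)  # toy 2-D slices; the paper uses ADNI volumes
    pet = torch.randn(2, 1, 96, 96)
    logits, score, age = model(mri, pet)
    # Weighted multi-task loss: classification primary, regressions secondary;
    # the 0.5 weights are illustrative, not the paper's values.
    loss = (nn.functional.cross_entropy(logits, torch.tensor([0, 1]))
            + 0.5 * nn.functional.mse_loss(score.squeeze(1), torch.randn(2))
            + 0.5 * nn.functional.mse_loss(age.squeeze(1), torch.randn(2)))
    loss.backward()
    print(logits.shape, score.shape, age.shape, float(loss))
```

A usage note on the design choice shown here: keeping classification as the dominant loss term while down-weighting the regression heads mirrors the primary/secondary task split described in the abstract, and the fused token pooling gives all three heads a shared multi-modal representation.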