Fully automatic deep convolutional approaches for the screening of neurodegenerative diseases using multi-view OCT images

Cited by: 0
Authors
Alvarez-Rodriguez, Lorena [1 ,2 ]
Pueyo, Ana [1 ,2 ,3 ,4 ]
de Moura, Joaquim
Vilades, Elisa [3 ,4 ]
Garcia-Martin, Elena [3 ,4 ]
Sanchez, Clara I. [5 ,6 ]
Novo, Jorge [1 ,2 ]
Ortega, Marcos [1 ,2 ]
Affiliations
[1] Univ A Coruna, Biomed Res Inst A Coruna INIBIC, VARPA Grp, La Coruna, Spain
[2] Univ A Coruna, CITIC Res Ctr Informat & Commun Technol, La Coruna, Spain
[3] Miguel Servet Univ Hosp, Dept Ophthalmol, Zaragoza, Spain
[4] Univ Zaragoza, Aragon Inst Hlth Res IIS Aragon, Miguel Servet Ophthalmol Innovat & Res Grp GIMSO, Zaragoza, Spain
[5] Univ Amsterdam, Informat Inst, Quantitat Healthcare Anal QurAI Grp, Amsterdam, Netherlands
[6] Amsterdam UMC, Dept Biomed Engn & Phys, Biomed Engn & Phys, AMC, Amsterdam, Netherlands
Keywords
Neurodegenerative diseases; OCT; Multi-view; Retinal layers; Deep learning; Screening; Retinal layers segmentation; NERVE-FIBER LAYER; ALZHEIMERS-DISEASE; RETINAL LAYERS; U-NET; SEGMENTATION; CLASSIFICATION; ABNORMALITIES; THICKNESS;
DOI
10.1016/j.artmed.2024.103006
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The prevalence of neurodegenerative diseases (NDDs) such as Alzheimer's disease (AD), Parkinson's disease (PD), essential tremor (ET), and multiple sclerosis (MS) is increasing alongside the aging population. Recent studies suggest that these disorders can be identified through retinal imaging, enabling early detection and monitoring via Optical Coherence Tomography (OCT) scans. This study pioneers the application of multi-view OCT and 3D information to the neurological disease domain. Our methodology consists of two main stages. In the first, we segment the retinal nerve fiber layer (RNFL) and a composite layer extending from the ganglion cell layer to Bruch's membrane (GCL-BM) in both macular and optic disc OCT scans, the regions where thickness changes serve as potential indicators of NDDs. In the second, patients are screened using the information extracted from these retinal layers. We explore how integrating both views (macula and optic disc) improves each screening scenario: Healthy Controls (HC) vs. NDD, AD vs. NDD, ET vs. NDD, MS vs. NDD, PD vs. NDD, and a final multi-class approach considering all four NDDs. For the segmentation task, both 2D and 3D approaches achieved satisfactory results in macular segmentation, with 3D performing better thanks to the inclusion of depth and cross-sectional information. For the optic disc view, transfer learning did not improve the metrics over training from scratch, but it did speed up training. For screening, 3D computational biomarkers provided better results than 2D ones, and multi-view methods were usually better than single-view ones. Regarding separability among diseases, MS and PD yielded the best results in their screening approaches, and they were also the most represented classes. In conclusion, our methodology has been validated through extensive experimentation with configurations, techniques, and OCT views, constituting the first multi-view analysis that merges data from both macula-centered and optic disc-centered perspectives. It is also the first effort to examine key retinal layers across four major NDDs within a pathological screening framework.
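To make the two-stage pipeline concrete, the sketch below illustrates, under loose assumptions, how per-layer thickness biomarkers could be derived from segmentation masks (such as those produced by 2D/3D U-Net-style models) and fused across the macular and optic disc views for an HC vs. NDD screen. The mask layout, label ids, axial resolution, summary statistics, and the random forest classifier are illustrative choices, not the authors' implementation.

# Minimal sketch (not the authors' code): thickness biomarkers from per-view
# layer segmentation masks, fused for multi-view screening. Mask shapes,
# label ids and the classifier are assumptions made for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

RNFL, GCL_BM = 1, 2  # hypothetical label ids in the segmentation masks


def thickness_map(mask, label, axial_res_um=3.9):
    """Per A-scan thickness (microns) of one layer from a 3D label mask.

    `mask` is assumed to be (slices, depth, width) with integer labels,
    so counting voxels along the depth axis approximates layer thickness.
    """
    return (mask == label).sum(axis=1) * axial_res_um


def view_biomarkers(mask):
    """Summary statistics of RNFL and GCL-BM thickness for one OCT view."""
    feats = []
    for label in (RNFL, GCL_BM):
        t = thickness_map(mask, label)
        feats += [t.mean(), t.std(), np.percentile(t, 5), np.percentile(t, 95)]
    return np.asarray(feats)


def multi_view_features(macula_mask, disc_mask):
    """Early fusion: concatenate biomarkers from the macular and disc views."""
    return np.concatenate([view_biomarkers(macula_mask), view_biomarkers(disc_mask)])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for segmented volumes of 40 subjects (20 HC, 20 NDD).
    X = np.stack([
        multi_view_features(rng.integers(0, 3, (25, 128, 64)),
                            rng.integers(0, 3, (25, 128, 64)))
        for _ in range(40)
    ])
    y = np.array([0] * 20 + [1] * 20)  # 0 = HC, 1 = NDD
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

Late fusion (training one classifier per view and averaging the predicted probabilities) would be an equally plausible alternative to the simple feature concatenation shown here.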
Pages: 20