Automatic glaucoma detection based on transfer induced attention network

Cited: 26
Authors
Xu, Xi [1 ]
Guan, Yu [1 ]
Li, Jianqiang [1 ]
Ma, Zerui [1 ]
Zhang, Li [2 ]
Li, Li [3 ]
Affiliations
[1] Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
[2] Capital Med Univ, Beijing Tongren Hosp, Beijing, Peoples R China
[3] Capital Med Univ, Beijing Childrens Hosp, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Automatic glaucoma diagnosis; Transfer learning; Deep learning; Attention mechanism
Keywords Plus
NEURAL-NETWORK; FUNDUS; DIAGNOSIS; EXTRACTION; ALGORITHM; FEATURES; SYSTEM
DOI
10.1186/s12938-021-00877-5
CLC number
R318 [Biomedical Engineering]
Discipline classification code
0831
Abstract
Background: Glaucoma is one of the leading causes of irreversible vision loss. Automatic glaucoma detection based on fundus images has been widely studied in recent years. However, existing methods mainly depend on a considerable amount of labeled data to train the model, which is a serious constraint for real-world glaucoma detection.

Methods: In this paper, we introduce a transfer learning technique that leverages fundus features learned from similar ophthalmic data to facilitate glaucoma diagnosis. Specifically, we propose a Transfer Induced Attention Network (TIA-Net) for automatic glaucoma detection, which extracts discriminative features that fully characterize glaucoma-related deep patterns under limited supervision. By integrating channel-wise attention and maximum mean discrepancy (MMD), the proposed method achieves a smooth transition between general and specific features, thus enhancing feature transferability.

Results: To delimit the boundary between general and specific features precisely, we first investigate how many layers should be transferred when training with the source-dataset network. Next, we compare the proposed model against previously reported methods and analyze their performance. Finally, exploiting the model design, we provide a transparent and interpretable visualization of the transfer by highlighting the key specific features in each fundus image. Evaluated on two real clinical datasets, TIA-Net achieves an accuracy of 85.7%/76.6%, a sensitivity of 84.9%/75.3%, a specificity of 86.9%/77.2%, and an AUC of 0.929/0.835, substantially better than other state-of-the-art methods.

Conclusion: Unlike previous studies that applied classic CNN models to transfer features from non-medical datasets, we leverage knowledge from a similar ophthalmic dataset and propose an attention-based deep transfer learning model for the glaucoma diagnosis task. Extensive experiments on two real clinical datasets show that TIA-Net outperforms other state-of-the-art methods and, moreover, holds promise for early diagnosis in other medical tasks.
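The record does not include the authors' implementation, so as a rough orientation only, the following is a minimal PyTorch sketch of the two ingredients the abstract names: channel-wise attention (here in the common squeeze-and-excitation form) and a Gaussian-kernel MMD penalty that aligns source and target features at the transferred layers. The names ChannelAttention, mmd_loss, and total_loss, the reduction ratio, and the trade-off weight lambda_mmd are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel-wise attention in the squeeze-and-excitation style:
    each feature channel is reweighted by a learned importance score."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: per-channel global average
        self.fc = nn.Sequential(             # excitation: bottleneck MLP + sigmoid gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # emphasize the more task-relevant channels

def mmd_loss(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared maximum mean discrepancy between two
    feature batches of shape [batch, dim] under a Gaussian RBF kernel."""
    def rbf(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))
    return (rbf(source, source).mean() + rbf(target, target).mean()
            - 2.0 * rbf(source, target).mean())

# Training objective: classification loss on the (small) glaucoma set plus an
# MMD penalty pulling source and target feature distributions together.
# lambda_mmd is a hypothetical trade-off weight, not a value from the paper.
def total_loss(logits, labels, feat_src, feat_tgt, lambda_mmd: float = 0.5):
    return nn.functional.cross_entropy(logits, labels) + lambda_mmd * mmd_loss(feat_src, feat_tgt)

Under these assumptions, the attention block decides which transferred channels to keep, while the MMD term discourages the transferred layers from drifting away from the source-domain feature distribution, matching the abstract's "smooth transition between general and specific features."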
Pages: 19