Promoting smartphone-based keratitis screening using meta-learning: A multicenter study

Cited by: 0
Authors
Li, Zhongwen [1,2]
Wang, Yangyang [1,2]
Chen, Kuan [4]
Qiang, Wei [1]
Zong, Xihang [1]
Ding, Ke [3]
Wang, Shihong [1]
Yin, Shiqi [1]
Jiang, Jiewei [3]
Chen, Wei [1,2]
Affiliations
[1] Wenzhou Med Univ, Ningbo Eye Hosp, Ningbo Eye Inst, Ningbo Key Lab Med Res Blinding Eye Dis, Ningbo 315040, Peoples R China
[2] Wenzhou Med Univ, Eye Hosp, Natl Clin Res Ctr Ocular Dis, Wenzhou 325027, Peoples R China
[3] Xian Univ Posts & Telecommun, Sch Elect Engn, Xian 710121, Peoples R China
[4] Wenzhou Med Univ, Cangnan Hosp, Dept Ophthalmol, Wenzhou 325000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Meta learning; Metric learning; Keratitis; Slit-lamp; Smartphone;
DOI
10.1016/j.jbi.2024.104722
Chinese Library Classification (CLC)
TP39 [Applications of Computers];
Subject classification codes
081203; 0835;
Abstract
Objective: Keratitis is the primary cause of corneal blindness worldwide. Prompt identification and referral of patients with keratitis are fundamental measures for improving patient prognosis. Although deep learning can assist ophthalmologists in automatically detecting keratitis through a slit-lamp camera, remote and underserved areas often lack this professional equipment. Smartphones, which are widely available devices, have recently shown potential for keratitis screening. However, given the limited data available from smartphones, constructing a robust intelligent system with traditional deep learning algorithms is a significant challenge. This study aimed to propose a meta-learning framework, cosine nearest centroid-based metric learning (CNCML), for developing a smartphone-based keratitis screening model when smartphone data are insufficient, by leveraging prior knowledge acquired from slit-lamp photographs.

Methods: We developed and assessed CNCML on 13,009 slit-lamp photographs and 4,075 smartphone photographs obtained from 3 independent clinical centers. To mimic real-world scenarios with various degrees of sample scarcity, we trained CNCML on training sets of different sizes (0 to 20 photographs per class) acquired with a HUAWEI smartphone. We evaluated the performance of CNCML not only on an internal test dataset but also on two external datasets collected with two different brands of smartphones (VIVO and XIAOMI) in another clinical center. Furthermore, we compared the performance of CNCML with that of traditional deep learning models on these smartphone datasets. Accuracy and macro-average area under the curve (macro-AUC) were used to evaluate model performance.

Results: With merely 15 smartphone photographs per class used for training, CNCML reached accuracies of 84.59%, 83.15%, and 89.99% on the three smartphone datasets, with corresponding macro-AUCs of 0.96, 0.95, and 0.98, respectively. The accuracies of CNCML on these datasets were 0.56% to 9.65% higher than those of the most competitive traditional deep learning models.

Conclusions: CNCML exhibited fast learning capabilities, attaining remarkable performance with a small number of training samples. This approach presents a potential solution for transitioning intelligent keratitis detection from professional devices (e.g., slit-lamp cameras) to more ubiquitous devices (e.g., smartphones), making keratitis screening more convenient and effective.
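For readers interested in the mechanics behind CNCML, the sketch below illustrates the core idea of cosine nearest-centroid classification in a few-shot setting: a feature extractor pretrained on slit-lamp photographs embeds the handful of labelled smartphone photographs, each class is represented by the mean (centroid) of its support embeddings, and query images are assigned to the class whose centroid has the highest cosine similarity. This is an illustrative reconstruction based on the abstract, not the authors' released implementation; the backbone, temperature value, and all variable names are assumptions.

```python
# Illustrative sketch of cosine nearest-centroid few-shot classification,
# in the spirit of the CNCML framework described in the abstract.
# The backbone, temperature, and tensor names are hypothetical placeholders.
import torch
import torch.nn.functional as F


def class_centroids(support_feats: torch.Tensor,
                    support_labels: torch.Tensor,
                    num_classes: int) -> torch.Tensor:
    """Mean (centroid) embedding per class, computed from the few labelled
    smartphone support photographs (e.g., 15 per class)."""
    dim = support_feats.size(1)
    centroids = support_feats.new_zeros(num_classes, dim)
    for c in range(num_classes):
        centroids[c] = support_feats[support_labels == c].mean(dim=0)
    return centroids


def cosine_logits(query_feats: torch.Tensor,
                  centroids: torch.Tensor,
                  temperature: float = 10.0) -> torch.Tensor:
    """Cosine similarity between each query embedding and each class centroid,
    scaled by a temperature before the softmax."""
    q = F.normalize(query_feats, dim=1)   # unit-norm query embeddings
    c = F.normalize(centroids, dim=1)     # unit-norm class centroids
    return temperature * q @ c.t()        # shape: [num_queries, num_classes]


# Hypothetical usage with a feature extractor pretrained on slit-lamp images:
# backbone = ...                                   # e.g., a ResNet trunk
# support_feats = backbone(support_images)         # few labelled smartphone photos
# query_feats = backbone(query_images)             # smartphone photos to screen
# centroids = class_centroids(support_feats, support_labels, num_classes=3)
# probs = cosine_logits(query_feats, centroids).softmax(dim=1)
# predictions = probs.argmax(dim=1)
```

Because only the class centroids need to be recomputed when new labelled smartphone photographs become available, adaptation requires no gradient updates on the scarce target data, which is consistent with the abstract's report of strong performance with as few as 15 photographs per class.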
Pages: 12