Variability in Grading Diabetic Retinopathy Using Retinal Photography and Its Comparison with an Automated Deep Learning Diabetic Retinopathy Screening Software

Cited by: 3
Authors
Teoh, Chin Sheng [1 ]
Wong, Kah Hie [1 ]
Xiao, Di [2 ]
Wong, Hung Chew [3 ]
Zhao, Paul [1 ]
Chan, Hwei Wuen [1 ]
Yuen, Yew Sen [1 ]
Naing, Thet [1 ]
Yogesan, Kanagasingam [4 ]
Koh, Victor Teck Chang [1 ,5 ]
Affiliations
[1] Natl Univ Hlth Syst, Dept Ophthalmol, Singapore 119228, Singapore
[2] CSIRO, Urrbrae 5064, Australia
[3] Natl Univ Singapore, Yong Loo Lin Sch Med, Med Biostat Unit, Singapore 119077, Singapore
[4] Univ Notre Dame, Sch Med, Fremantle 6160, Australia
[5] Natl Univ Singapore, Ctr Innovat & Precis Eye Hlth, Yong Loo Lin Sch Med, Singapore 119077, Singapore
Keywords
automated screening software; deep learning; diabetic retinopathy; grading; variability; TELEMEDICINE; AGREEMENT; PROGRAM
DOI: 10.3390/healthcare11121697
Chinese Library Classification: R19 [Health Care Organization and Services (Health Service Management)]
Abstract
Background: Diabetic retinopathy (DR) screening using colour retinal photographs is cost-effective and time-efficient. In real-world clinical settings, DR severity is frequently graded by individuals with different levels of expertise. We aimed to determine the agreement in DR severity grading between human graders of varying expertise and an automated deep learning DR screening software (ADLS). Methods: Using the International Clinical DR Disease Severity Scale, two hundred macula-centred fundus photographs were graded by retinal specialists, ophthalmology residents, family medicine physicians, medical students, and the ADLS. Based on referral urgency, gradings were categorised as no referral, non-urgent referral, or urgent referral to an ophthalmologist. Inter-observer and intra-group variability were analysed using Gwet's agreement coefficient, and the performance of the ADLS was evaluated using sensitivity and specificity. Results: The agreement coefficients for inter-observer and intra-group variability ranged from fair to very good and from moderate to good, respectively. The ADLS achieved area under the curve values of 0.879, 0.714, and 0.836 for non-referable DR, non-urgent referable DR, and urgent referable DR, respectively, with varying sensitivity and specificity values. Conclusion: Inter-observer and intra-group agreement among human graders varies widely, but the ADLS is a reliable and reasonably sensitive tool for mass screening to detect referable DR and urgent referable DR.
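The agreement and accuracy metrics named in the abstract can be illustrated with a short computation. The sketch below is not the authors' analysis code; it shows, under simplified assumptions, how an unweighted Gwet's AC1 coefficient and the sensitivity/specificity of a binary referable-DR decision could be computed for two graders. The referral grades, grader names, and the choice of grader_1 as the reference standard are hypothetical examples; the study used multiple grader groups and a three-level referral scale.

```python
# A minimal sketch, not the authors' analysis code. It assumes two graders,
# a three-level referral scale (0 = no referral, 1 = non-urgent referral,
# 2 = urgent referral), and hypothetical example grades.

from collections import Counter


def gwet_ac1(ratings_a, ratings_b, categories):
    """Unweighted Gwet's AC1 agreement coefficient for two raters."""
    n = len(ratings_a)
    q = len(categories)
    # Observed agreement: proportion of images both graders assign the same level.
    p_a = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement based on the pooled prevalence pi_k of each category.
    pooled = Counter(ratings_a) + Counter(ratings_b)
    p_e = sum(
        (pooled[k] / (2 * n)) * (1 - pooled[k] / (2 * n)) for k in categories
    ) / (q - 1)
    return (p_a - p_e) / (1 - p_e)


def sensitivity_specificity(predicted, reference):
    """Sensitivity and specificity of a binary (True/False) referral decision."""
    tp = sum(p and r for p, r in zip(predicted, reference))
    tn = sum((not p) and (not r) for p, r in zip(predicted, reference))
    fp = sum(p and (not r) for p, r in zip(predicted, reference))
    fn = sum((not p) and r for p, r in zip(predicted, reference))
    return tp / (tp + fn), tn / (tn + fp)


if __name__ == "__main__":
    # Hypothetical referral grades from two graders over ten fundus photographs.
    grader_1 = [0, 0, 1, 2, 1, 0, 2, 1, 0, 2]
    grader_2 = [0, 1, 1, 2, 1, 0, 2, 2, 0, 2]
    print("Gwet's AC1:", round(gwet_ac1(grader_1, grader_2, [0, 1, 2]), 3))

    # Collapse to binary "referable DR" (grade >= 1), treating grader_1 as the
    # reference standard and grader_2 as the software under evaluation.
    reference = [g >= 1 for g in grader_1]
    software = [g >= 1 for g in grader_2]
    sens, spec = sensitivity_specificity(software, reference)
    print(f"Sensitivity: {sens:.3f}, Specificity: {spec:.3f}")
```

The chance-agreement term in AC1, p_e = sum_k pi_k(1 - pi_k)/(q - 1), makes the coefficient less sensitive than Cohen's kappa to skewed category prevalences, which is common in DR screening where most photographs show little or no retinopathy; a weighted variant (AC2) is typically used when near-miss disagreements on an ordinal scale should count less.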
Pages: 12
Related Papers (50 records in total)
  • [1] Automated Grading of Diabetic Retinopathy in Retinal Fundus Images using Deep Learning
    Hathwar, Sagar B.; Srinivasa, Gowri
    Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications (IEEE ICSIPA 2019), 2019: 73-77
  • [2] Automated diabetic retinopathy screening using deep learning
    Guefrachi, Sarra; Echtioui, Amira; Hamam, Habib
    Multimedia Tools and Applications, 2024, 83(24): 65249-65266
  • [3] Comparison of automated and expert human grading of diabetic retinopathy using smartphone-based retinal photography
    Kim, Tyson N.; Aaberg, Michael T.; Li, Patrick; Davila, Jose R.; Bhaskaranand, Malavika; Bhat, Sandeep; Ramachandra, Chaithanya; Solanki, Kaushal; Myers, Frankie; Reber, Clay; Jalalizadeh, Rohan; Margolis, Todd P.; Fletcher, Daniel; Paulus, Yannis M.
    Eye, 2021, 35(1): 334-342
  • [4] Comparison of automated and expert human grading of diabetic retinopathy using smartphone-based retinal photography
    Kim, Tyson; Li, Patrick; Niziol, Leslie M.; Bhaskaranand, Malavika; Bhat, Sandeep; Ramachandra, Chaithanya; Solanki, Kaushal; Davila, Jose R.; Myers, Frankie; Reber, Clay; Musch, David C.; Margolis, Todd P.; Fletcher, Daniel; Woodward, Maria A.; Paulus, Yannis Mantas
    Investigative Ophthalmology & Visual Science, 2017, 58(8)
  • [5] Software for reading and grading diabetic retinopathy - Aravind Diabetic Retinopathy Screening 3.0
    Perumalsamy, Namperumalsamy; Prasad, Noela M.; Sathya, Shankar; Ramasamy, Kim
    Diabetes Care, 2007, 30(9): 2302-2306
  • [6] The Evidence for Automated Grading in Diabetic Retinopathy Screening
    Fleming, Alan D.; Philip, Sam; Goatman, Keith A.; Prescott, Gordon J.; Sharp, Peter F.; Olson, John A.
    Current Diabetes Reviews, 2011, 7(4): 246-252
  • [7] Automated image curation in diabetic retinopathy screening using deep learning
    Nderitu, Paul; Nunez do Rio, Joan M.; Webster, Laura; Mann, Samantha S.; Hopkins, David; Cardoso, M. Jorge; Modat, Marc; Bergeles, Christos; Jackson, Timothy L.
    Scientific Reports, 2022, 12(1)
  • [8] Comparison of ophthalmoscopy and fundus photography in the grading of diabetic retinopathy
    Sarraf, D.; Coleman, A. L.
    Investigative Ophthalmology & Visual Science, 1999, 40(4): S305