Automated deep-learning system for Gleason grading of prostate cancer using biopsies: a diagnostic study

Cited by: 413
Authors
Bulten, Wouter [1 ]
Pinckaers, Hans [1 ]
van Boven, Hester [3 ]
Vink, Robert [4 ]
de Bel, Thomas [1 ]
van Ginneken, Bram [2 ]
van der Laak, Jeroen [1 ]
Hulsbergen-van de Kaa, Christina [4 ]
Litjens, Geert [1 ]
Affiliations
[1] Radboud Univ Nijmegen, Med Ctr, Radboud Inst Hlth Sci, Dept Pathol, Nijmegen, Netherlands
[2] Radboud Univ Nijmegen, Med Ctr, Radboud Inst Hlth Sci, Dept Radiol & Nucl Med, Nijmegen, Netherlands
[3] Antoni van Leeuwenhoek Hosp, Netherlands Canc Inst, Dept Pathol, Amsterdam, Netherlands
[4] Lab Pathol East Netherlands, Hengelo, Netherlands
Keywords
ISUP CONSENSUS CONFERENCE; INTEROBSERVER REPRODUCIBILITY; INTERNATIONAL-SOCIETY; CARCINOMA;
DOI
10.1016/S1470-2045(19)30739-9
Chinese Library Classification
R73 [Oncology];
Discipline code
100214;
Abstract
Background: The Gleason score is the strongest correlating predictor of recurrence for prostate cancer, but it has substantial inter-observer variability, limiting its usefulness for individual patients. Specialised urological pathologists have greater concordance, but such expertise is not widely available. Prostate cancer diagnostics could therefore benefit from robust, reproducible Gleason grading. We aimed to investigate the potential of deep learning to perform automated Gleason grading of prostate biopsies.
Methods: In this retrospective study, we developed a deep-learning system to grade prostate biopsies following the Gleason grading standard. The system was developed using randomly selected biopsies, sampled by the biopsy Gleason score, from patients at the Radboud University Medical Center (pathology reports dated between Jan 1, 2012, and Dec 31, 2017). A semi-automatic labelling technique was used to circumvent the need for manual annotations by pathologists, with the pathologists' reports serving as the reference standard during training. The system was developed to delineate individual glands, assign Gleason growth patterns, and determine the biopsy-level grade. For validation of the method, a consensus reference standard was set by three expert urological pathologists on an independent test set of 550 biopsies. Of these 550 biopsies, 100 were used in an observer experiment in which the system, 13 pathologists, and two pathologists in training were compared against the reference standard. The system was also evaluated on an external test dataset of 886 cores, of which 245 cores from a different centre were independently graded by two pathologists.
Findings: We collected 5759 biopsies from 1243 patients. The developed system achieved high agreement with the reference standard (quadratic Cohen's kappa 0.918, 95% CI 0.891-0.941) and scored highly at clinical decision thresholds: benign versus malignant (area under the curve 0.990, 95% CI 0.982-0.996), grade group of 2 or more (0.978, 0.966-0.988), and grade group of 3 or more (0.974, 0.962-0.984). In the observer experiment, the deep-learning system scored higher (kappa 0.854) than the median of the panel (kappa 0.819), outperforming 10 of the 15 pathologist observers. On the external test dataset, the system obtained high agreement with the reference standards set independently by the two pathologists (quadratic Cohen's kappa 0.723 and 0.707), within the inter-observer variability of the pathologists themselves (kappa 0.71).
Interpretation: Our automated deep-learning system achieved performance similar to that of pathologists for Gleason grading and could potentially contribute to prostate cancer diagnosis. The system could assist pathologists by screening biopsies, providing second opinions on grade group, and presenting quantitative measurements of volume percentages.
Copyright (C) 2020 Elsevier Ltd. All rights reserved.
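The headline agreement statistic in the Findings is the quadratic (weighted) Cohen's kappa computed on grade-group labels, and the clinical decision thresholds (grade group of 2 or more, 3 or more) rely on the standard ISUP mapping from Gleason score to grade group. The Python sketch below is not the authors' code; it only illustrates these two ingredients as a reader might reproduce them, using hypothetical labels (0 = benign, 1-5 = grade group) chosen purely for illustration.

```python
import numpy as np

def gleason_to_grade_group(primary: int, secondary: int) -> int:
    """Map a Gleason score (primary + secondary pattern) to an ISUP grade group.

    Standard ISUP 2014 mapping; the paper reports agreement at the
    grade-group level (benign plus grade groups 1-5).
    """
    score = primary + secondary
    if score <= 6:
        return 1
    if score == 7:
        return 2 if primary == 3 else 3   # 3+4 -> GG2, 4+3 -> GG3
    if score == 8:
        return 4
    return 5                               # scores 9-10 -> GG5

def quadratic_weighted_kappa(a, b, n_classes: int) -> float:
    """Quadratic weighted Cohen's kappa between two raters' label vectors.

    `a` and `b` hold integer class labels in [0, n_classes). Disagreements
    are penalised by the squared distance between the assigned classes.
    """
    a, b = np.asarray(a), np.asarray(b)
    # Observed confusion matrix, normalised to proportions.
    observed = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    # Expected matrix under independence of the two raters (outer product of marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic penalty weights: 0 on the diagonal, growing with class distance.
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical labels (0 = benign, 1-5 = grade group), for illustration only.
system    = [0, 1, 2, 2, 3, 5, 4, 1, 0, 2]
reference = [0, 1, 2, 3, 3, 5, 4, 1, 0, 1]
print(f"quadratic kappa: {quadratic_weighted_kappa(system, reference, 6):.3f}")
```

With such grade-group labels in hand, the binary decision thresholds reported in the Findings (for example, grade group of 2 or more) reduce to thresholding the same labels before computing an area under the curve.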
Pages: 233-241
Page count: 9