Clinical validation of deep learning algorithms for radiotherapy targeting of non-small-cell lung cancer: an observational study

Cited by: 39
Authors
Hosny, Ahmed [1 ,2 ,3 ]
Bitterman, Danielle S. [1 ,2 ,3 ,4 ]
Guthier, Christian V. [2 ,3 ]
Qian, Jack M. [5 ]
Roberts, Hannah [5 ]
Perni, Subha [5 ]
Saraf, Anurag [5 ]
Peng, Luke C. [5 ]
Pashtan, Itai [2 ,3 ]
Ye, Zezhong [1 ,2 ,3 ]
Kann, Benjamin H. [1 ,2 ,3 ]
Kozono, David E. [2 ,3 ]
Christiani, David [6 ,7 ]
Catalano, Paul J. [8 ]
Aerts, Hugo J. W. L. [1 ,2 ,3 ,9 ]
Mak, Raymond H. [1 ,2 ,3 ]
Affiliations
[1] Harvard Med Sch, Mass Gen Brigham, Artificial Intelligence Med Program, Boston, MA 02115 USA
[2] Harvard Med Sch, Brigham & Womens Hosp, Dept Radiat Oncol, Boston, MA 02115 USA
[3] Harvard Med Sch, Dana Farber Canc Inst, Boston, MA 02115 USA
[4] Boston Childrens Hosp, Computat Hlth Informat Program, Boston, MA USA
[5] Brigham & Womens Hosp, Dana Farber Canc Inst, Harvard Radiat Oncol Program, Mass Gen Brigham, 75 Francis St, Boston, MA 02115 USA
[6] Massachusetts Gen Hosp, Harvard TH Chan Sch Publ Hlth, Boston, MA USA
[7] Harvard Med Sch, Boston, MA USA
[8] Johns Hopkins Univ, Sch Med, Dept Radiat Oncol & Mol Radiat Sci, Baltimore, MD USA
[9] Maastricht Univ, Radiol & Nucl Med, CARIM & GROW, Maastricht, Netherlands
Funding
European Research Council; US National Institutes of Health
Keywords
INTEROBSERVER VARIABILITY; SEGMENTATION; STATISTICS; TUMOR;
DOI
10.1016/S2589-7500(22)00129-7
Chinese Library Classification
R-058
Subject Classification
Abstract
Background Artificial intelligence (AI) and deep learning have shown great potential in streamlining clinical tasks. However, most studies remain confined to in silico validation in small internal cohorts, without external validation or data on real-world clinical utility. We developed a strategy for the clinical validation of deep learning models for segmenting primary non-small-cell lung cancer (NSCLC) tumours and involved lymph nodes in CT images, which is a time-intensive step in radiation treatment planning, with large variability among experts.

Methods In this observational study, CT images and segmentations were collected from eight internal and external sources from the USA, the Netherlands, Canada, and China, with patients from the Maastro and Harvard-RT1 datasets used for model discovery (segmented by a single expert). Validation consisted of interobserver and intraobserver benchmarking, primary validation, functional validation, and end-user testing on the following datasets: multi-delineation, Harvard-RT1, Harvard-RT2, RTOG-0617, NSCLC-radiogenomics, Lung-PET-CT-Dx, RIDER, and thorax phantom. Primary validation consisted of stepwise testing on increasingly external datasets using measures of overlap including volumetric dice (VD) and surface dice (SD). Functional validation explored dosimetric effect, model failure modes, test-retest stability, and accuracy. End-user testing with eight experts assessed automated segmentations in a simulated clinical setting.

Findings We included 2208 patients imaged between 2001 and 2015, with 787 patients used for model discovery and 1421 for model validation, including 28 patients for end-user testing. Models showed an improvement over the interobserver benchmark (multi-delineation dataset; VD 0.91 [IQR 0.83-0.92], p=0.0062; SD 0.86 [0.71-0.91], p=0.0005), and were within the intraobserver benchmark. For primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was VD 0.83 (IQR 0.76-0.88) and SD 0.79 (0.68-0.88), within the interobserver benchmark. Performance on internal Harvard-RT2 data segmented by other experts was VD 0.70 (0.56-0.80) and SD 0.50 (0.34-0.71). Performance on RTOG-0617 clinical trial data was VD 0.71 (0.60-0.81) and SD 0.47 (0.35-0.59), with similar results on the diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these geometric overlap results, models yielded target volumes with equivalent radiation dose coverage to those of experts. We also found non-significant differences between de novo expert and AI-assisted segmentations. AI assistance led to a 65% reduction in segmentation time (5.4 min; p<0.0001) and a 32% reduction in interobserver variability (SD; p=0.013).

Interpretation We present a clinical validation strategy for AI models. We found that in silico geometric segmentation metrics might not correlate with clinical utility of the models. Experts' segmentation style and preference might affect model performance.

Copyright (C) 2022 The Author(s). Published by Elsevier Ltd.
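The abstract reports overlap between AI and expert segmentations using volumetric dice (VD) and surface dice (SD). As an illustrative sketch only, not the authors' implementation, the example below computes both metrics on binary 3D masks; the function names, the voxel spacing, the 2 mm tolerance, and the synthetic sphere masks are assumptions made for this example.

```python
# Illustrative sketch (not the study's code): volumetric Dice (VD) and a
# simplified surface Dice (SD) at a distance tolerance, for binary 3D masks.
import numpy as np
from scipy import ndimage

def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    """VD = 2|A and B| / (|A| + |B|) for binary masks a and b."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def surface_dice(a: np.ndarray, b: np.ndarray, spacing, tol_mm: float = 2.0) -> float:
    """Fraction of the two mask surfaces lying within tol_mm of each other."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)  # boundary voxels of mask a
    surf_b = b ^ ndimage.binary_erosion(b)  # boundary voxels of mask b
    # Distance (in mm) from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    close_a = (dist_to_b[surf_a] <= tol_mm).sum()  # surface-a voxels near surface b
    close_b = (dist_to_a[surf_b] <= tol_mm).sum()  # surface-b voxels near surface a
    denom = surf_a.sum() + surf_b.sum()
    return (close_a + close_b) / denom if denom else 1.0

# Example on hypothetical data: two slightly offset spheres on a 1 mm isotropic grid.
zz, yy, xx = np.mgrid[:64, :64, :64]
mask_expert = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2
mask_ai = (zz - 32) ** 2 + (yy - 34) ** 2 + (xx - 32) ** 2 <= 15 ** 2
print(volumetric_dice(mask_expert, mask_ai))
print(surface_dice(mask_expert, mask_ai, spacing=(1.0, 1.0, 1.0), tol_mm=2.0))
```

VD measures overlap of volumes and tends to stay high for bulky targets even when boundaries disagree, whereas SD credits only boundary agreement within the tolerance, which is why the two metrics can diverge on the same cases.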
Pages: E657-E666
Number of pages: 10