Interpretable survival prediction for colorectal cancer using deep learning

Cited by: 0
Authors
Ellery Wulczyn
David F. Steiner
Melissa Moran
Markus Plass
Robert Reihs
Fraser Tan
Isabelle Flament-Auvigne
Trissia Brown
Peter Regitnig
Po-Hsuan Cameron Chen
Narayan Hegde
Apaar Sadhwani
Robert MacDonald
Benny Ayalew
Greg S. Corrado
Lily H. Peng
Daniel Tse
Heimo Müller
Zhaoyang Xu
Yun Liu
Martin C. Stumpe
Kurt Zatloukal
Craig H. Mermel
Affiliations
[1] Google Health
[2] Medical University of Graz
[3] Google Health via Advanced Clinical
[4] Google Health
[5] Tempus Labs Inc.
Source
npj Digital Medicine, Volume 4
DOI
Not available
Abstract
Deriving interpretable prognostic features from deep-learning-based prognostic histopathology models remains a challenge. In this study, we developed a deep learning system (DLS) for predicting disease-specific survival for stage II and III colorectal cancer using 3652 cases (27,300 slides). When evaluated on two validation datasets containing 1239 cases (9340 slides) and 738 cases (7140 slides), respectively, the DLS achieved a 5-year disease-specific survival AUC of 0.70 (95% CI: 0.66–0.73) and 0.69 (95% CI: 0.64–0.72), and added significant predictive value to a set of nine clinicopathologic features. To interpret the DLS, we explored the ability of different human-interpretable features to explain the variance in DLS scores. We observed that clinicopathologic features such as T-category, N-category, and grade explained a small fraction of the variance in DLS scores (R² = 18% in both validation sets). Next, we generated human-interpretable histologic features by clustering embeddings from a deep-learning-based image-similarity model and showed that they explained the majority of the variance (R² of 73–80%). Furthermore, the clustering-derived feature most strongly associated with high DLS scores was also highly prognostic in isolation. With a distinct visual appearance (poorly differentiated tumor cell clusters adjacent to adipose tissue), this feature was identified by annotators with 87.0–95.5% accuracy. Our approach can be used to explain predictions from a prognostic deep learning model and uncover potentially novel prognostic features that can be reliably identified by people for future validation studies.