Deep-learning based fast and accurate 3D CT deformable image registration in lung cancer

Cited by: 8
Authors
Ding, Yuzhen [1]
Feng, Hongying [1]
Yang, Yunze [1]
Holmes, Jason [1]
Liu, Zhengliang [2]
Liu, David [3]
Wong, William W. [1]
Yu, Nathan Y. [1]
Sio, Terence T. [1]
Schild, Steven E. [1]
Li, Baoxin [4]
Liu, Wei [1,5]
Affiliations
[1] Mayo Clin, Dept Radiat Oncol, Phoenix, AZ USA
[2] Univ Georgia, Dept Comp Sci, Athens, GA USA
[3] Athens Acad, Athens, GA USA
[4] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ USA
[5] Mayo Clin Arizona, Dept Radiat Oncol, 5777 E Mayo Blvd, Phoenix, AZ 85054 USA
Keywords
3D lung CT images; deep neural networks; deformable image registration; INTENSITY-MODULATED PROTON; ADAPTIVE RADIATION-THERAPY; ROBUST OPTIMIZATION; ARC THERAPY; RESPIRATORY MOTION; TREATMENT UNCERTAINTIES; AUTOMATIC SEGMENTATION; ESOPHAGEAL-CARCINOMA; CONTOUR PROPAGATION; HEAD;
DOI
10.1002/mp.16548
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline codes
1002; 100207; 1009
Abstract
Background: Deformable image registration (DIR) is an essential technique in many applications of radiation oncology. However, conventional DIR approaches typically take several minutes to register one pair of 3D CT images, and the resulting deformable vector fields (DVFs) are specific only to the pair of images used, making them less appealing for clinical application.

Purpose: A deep-learning-based DIR method using CT images is proposed for lung cancer patients to address the common drawbacks of conventional DIR approaches and, in turn, to accelerate related applications such as contour propagation, dose deformation, and adaptive radiotherapy (ART).

Methods: A deep neural network based on VoxelMorph was developed to generate DVFs from CT images collected from 114 lung cancer patients. Two models were trained: one with a weighted mean absolute error (wMAE) loss alone (the MAE model) and one with the wMAE loss plus a structural similarity index measure (SSIM) loss (the M+S model). In total, 192 pairs of initial CT (iCT) and verification CT (vCT), the vCT usually acquired 2 weeks after the iCT, were used as the training dataset, and 10 independent CT pairs were used as the testing dataset. Synthetic CTs (sCTs) were generated by warping the vCTs according to the DVFs produced by the pre-trained models. Image quality was evaluated by measuring the similarity between the iCTs and the sCTs generated by the proposed methods and by conventional DIR approaches, using the per-voxel absolute CT-number-difference volume histogram (CDVH) and MAE as evaluation metrics. The time needed to generate the sCTs was also recorded and compared quantitatively. Contours were propagated using the derived DVFs and evaluated with SSIM. Forward dose calculations were performed on the sCTs and the corresponding iCTs; dose-volume histograms (DVHs) were generated from the dose distributions on both, and clinically relevant DVH indices were derived for comparison. The resulting dose distributions were also compared using 3D gamma analysis with criteria of 3 mm/3%/10% and 2 mm/2%/10%.

Results: The MAE and M+S models achieved registration times of 263.7 ± 163 ms and 265.8 ± 190 ms and MAEs of 13.15 ± 3.8 HU and 17.52 ± 5.8 HU on the testing dataset, respectively, with average SSIM scores of 0.987 ± 0.006 and 0.988 ± 0.004. For both models, the CDVH of a typical patient showed that fewer than 5% of voxels had a per-voxel absolute CT-number difference larger than 55 HU. Compared with the dose distribution calculated on the iCT, the dose distribution calculated on a typical sCT differed by ≤2 cGy[RBE] for clinical target volume (CTV) D95 and D5, within ±0.06% for total lung V5, ≤1.5 cGy[RBE] for heart and esophagus Dmean, and ≤6 cGy[RBE] for cord Dmax. Good average 3D gamma passing rates were also observed (>96% for 3 mm/3%/10% and >94% for 2 mm/2%/10%).

Conclusion: A deep neural network-based DIR approach was proposed and shown to be reasonably accurate and efficient for registering the initial CTs and verification CTs of lung cancer patients.
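
As a rough illustration of the pipeline described in the Methods (a minimal sketch, not the authors' code), the snippet below shows how a VoxelMorph-style network's predicted DVF can be used to warp the vCT toward the iCT and how a weighted MAE loss with an optional SSIM term (the M+S variant) might be assembled in PyTorch. The network interface, DVF channel ordering, voxel weighting, and the SSIM weight `lam` are illustrative assumptions rather than the published configuration.

    # Minimal sketch (assumptions, not the authors' implementation): a VoxelMorph-style
    # network takes the concatenated iCT/vCT pair, predicts a dense DVF in voxel units,
    # the vCT is warped into a synthetic CT (sCT), and the loss compares the sCT to the iCT.
    import torch
    import torch.nn.functional as F

    def warp_with_dvf(moving, dvf):
        """Warp a volume (B, 1, D, H, W) with a DVF (B, 3, D, H, W) given in voxels."""
        _, _, d, h, w = moving.shape
        # Identity sampling grid in voxel coordinates, ordered (x, y, z) on the last axis.
        zz, yy, xx = torch.meshgrid(
            torch.arange(d, dtype=moving.dtype, device=moving.device),
            torch.arange(h, dtype=moving.dtype, device=moving.device),
            torch.arange(w, dtype=moving.dtype, device=moving.device),
            indexing="ij",
        )
        grid = torch.stack((xx, yy, zz), dim=-1)                 # (D, H, W, 3)
        disp = dvf.permute(0, 2, 3, 4, 1)                        # assumed (x, y, z) channel order
        new_locs = grid.unsqueeze(0) + disp                      # displaced voxel coordinates
        # Normalize to [-1, 1] as required by grid_sample (align_corners=True convention).
        scale = torch.tensor([w - 1, h - 1, d - 1], dtype=moving.dtype, device=moving.device)
        new_locs = 2.0 * new_locs / scale - 1.0
        return F.grid_sample(moving, new_locs, mode="bilinear", align_corners=True)

    def weighted_mae(pred, target, weight):
        """Weighted mean absolute error; `weight` emphasizes clinically relevant voxels."""
        return (weight * (pred - target).abs()).sum() / weight.sum()

    def training_step(model, ict, vct, weight, ssim_fn=None, lam=0.1):
        """One illustrative step; `model`, `weight`, `ssim_fn`, and `lam` are hypothetical."""
        dvf = model(torch.cat((ict, vct), dim=1))                # (B, 3, D, H, W)
        sct = warp_with_dvf(vct, dvf)                            # vCT warped toward the iCT
        loss = weighted_mae(sct, ict, weight)                    # wMAE term (the MAE model)
        if ssim_fn is not None:                                  # adding SSIM gives the M+S model
            loss = loss + lam * (1.0 - ssim_fn(sct, ict))
        return loss, sct

In this sketch the DVF is assumed to be in voxel units with channels ordered (x, y, z); calling `training_step` with `ssim_fn=None` would correspond to the wMAE-only (MAE) model.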
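
Likewise, the per-voxel absolute CT-number-difference volume histogram (CDVH) used as an evaluation metric can be sketched as below: for each HU threshold it reports the fraction of voxels whose absolute iCT-sCT difference exceeds that threshold (the abstract cites fewer than 5% of voxels above 55 HU for a typical patient). The threshold range, step size, and optional body mask are assumptions.

    # Minimal sketch of the CDVH metric under the assumptions stated above.
    import numpy as np

    def cdvh(ict, sct, max_diff_hu=200, step_hu=5, mask=None):
        """Return (thresholds, fraction of voxels with |sCT - iCT| > threshold)."""
        diff = np.abs(sct.astype(np.float32) - ict.astype(np.float32))
        if mask is not None:                  # e.g., restrict to the patient body contour
            diff = diff[mask]
        thresholds = np.arange(0, max_diff_hu + step_hu, step_hu, dtype=np.float32)
        fractions = np.array([(diff > t).mean() for t in thresholds])
        return thresholds, fractions

    # Example reading of the metric: with the behaviour reported in the abstract,
    # cdvh(ict, sct) would give a fraction below 0.05 at the 55 HU threshold.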
Pages: 6864-6880
Number of pages: 17
Related papers (50 records in total)
  • [1] Unsupervised deep learning for fast and accurate CBCT to CT deformable image registration. Van Kranen, S. R.; Kanehira, T.; Rozendaal, R.; Sonke, J. Radiotherapy and Oncology, 2019, 133: S267-S268.
  • [2] Accurate and Efficient Deep Neural Network Based Deformable Image Registration Method in Lung Cancer. Ding, Y.; Liu, Z.; Feng, H.; Holmes, J.; Yang, Y.; Yu, N.; Sio, T.; Schild, S.; Li, B.; Liu, W. Medical Physics, 2022, 49(06): E148-E148.
  • [3] 3D deformable registration of longitudinal abdominopelvic CT images using unsupervised deep learning. van Eijnatten, Maureen; Rundo, Leonardo; Batenburg, K. Joost; Lucka, Felix; Beddowes, Emma; Caldas, Carlos; Gallagher, Ferdia A.; Sala, Evis; Schonlieb, Carola-Bibiane; Woitek, Ramona. Computer Methods and Programs in Biomedicine, 2021, 208.
  • [4] A Fast Deformable Model Based 3D Registration Algorithm for Image Guided Prostate Radiotherapy. Zhou, J.; Kim, S.; Jabbour, S.; Goyal, S.; Haffty, B.; Chen, T.; Chang, S.; Metaxas, D.; Yue, N. Medical Physics, 2009, 36(06).
  • [5] Image Based Lung Cancer Phenotyping with Deep-Learning Radiomics. Chaunzwa, T.; Xu, Y.; Mak, R.; Christiani, D.; Lanuti, M.; Shafer, A.; Dia, N.; Aerts, H. Medical Physics, 2018, 45(06): E165-E165.
  • [6] Fast and Accurate Electron Microscopy Image Registration with 3D Convolution. Zhou, Shenglong; Xiong, Zhiwei; Chen, Chang; Chen, Xuejin; Liu, Dong; Zhang, Yueyi; Zha, Zheng-Jun; Wu, Feng. Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Pt I, 2019, 11764: 478-486.
  • [7] An unsupervised deep learning framework for large-scale lung CT deformable image registration. Zhao, Yuqian; Xie, Zhongyu; Zhang, Fan; Liao, Miao; Di, Shuanhu. Optics and Laser Technology, 2024, 170.
  • [8] Deep-learning based surrogate modeling for fast and accurate simulation in realistic 3D reservoir with varying well controls. Huang, Hu; Gong, Bin; Liu, Yimin; Sun, Wenyue. Geoenergy Science and Engineering, 2023, 222.
  • [9] Deep-learning-based deformable image registration of head CT and MRI scans. Ratke, Alexander; Darsht, Elena; Heinzelmann, Feline; Kroeninger, Kevin; Timmermann, Beate; Baeumer, Christian. Frontiers in Physics, 2023, 11.
  • [10] Adversarial Learning for Deformable Image Registration: Application to 3D Ultrasound Image Fusion. Li, Zisheng; Ogino, Masahiro. Smart Ultrasound Imaging and Perinatal, Preterm and Paediatric Image Analysis (SUSI 2019, PIPPI 2019), 2019, 11798: 56-64.