Unsupervised computed tomography and cone-beam computed tomography image registration using a dual attention network

Cited by: 11
Authors
Hu, Rui [1 ]
Yan, Hui [2 ]
Nian, Fudong [3 ]
Mao, Ronghu [4 ]
Li, Teng [1 ]
Affiliations
[1] Anhui Univ, Sch Artificial Intelligence, Minist Educ, Key Lab Intelligent Comp & Signal Proc, Hefei 230039, Peoples R China
[2] Chinese Acad Med Sci & Peking Union Med Coll, Canc Hosp, Natl Clin Res Ctr Canc, Natl Canc Ctr, Dept Radiat Oncol, Beijing, Peoples R China
[3] Hefei Univ, Sch Adv Mfg Engn, Hefei, Peoples R China
[4] Zhengzhou Univ, Affiliated Canc Hosp, Radiat Oncol, Zhengzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image registration; computed tomography (CT); cone-beam computed tomography (CBCT); image-guided radiotherapy (IGRT); deep learning; neural network
DOI
10.21037/qims-21-1194
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject Classification Codes
1002; 100207; 1009;
Abstract
Background: The registration of computed tomography (CT) and cone-beam computed tomography (CBCT) images plays a key role in image-guided radiotherapy (IGRT). However, the large intensity variation between CT and CBCT images limits registration performance and its clinical application in IGRT. In this study, a learning-based unsupervised approach was developed to address this issue and accurately register CT and CBCT images by predicting the deformation field.
Methods: A dual attention module was used to handle the large intensity variation between CT and CBCT images. Specifically, a scale-aware position attention block (SP-BLOCK) and a scale-aware channel attention block (SC-BLOCK) were employed to integrate contextual information along the spatial and channel dimensions. The SP-BLOCK enhances the correlation of similar features by weighting and aggregating multi-scale features at different positions, while the SC-BLOCK aggregates the features of all channels to selectively emphasize dependencies between channel maps.
Results: The proposed method was compared with existing mainstream methods on the 4D-Lung dataset. It achieved the highest structural similarity (SSIM) and Dice similarity coefficient (DICE) scores, 86.34% and 89.74% respectively, and the lowest target registration error (TRE) of 2.07 mm.
Conclusions: The proposed method can register CT and CBCT images with high accuracy without the need for manual labeling. It provides an effective way to achieve high-accuracy patient positioning and target localization in IGRT.
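
The abstract does not give implementation details of the SP-BLOCK and SC-BLOCK, but the underlying dual attention idea (a position attention branch over spatial locations plus a channel attention branch over feature maps, in the style of Fu et al.'s Dual Attention Network) can be sketched as below. This is a minimal, hypothetical PyTorch sketch for 3D feature volumes: the class names, tensor shapes, reduction factor, and sum fusion are assumptions rather than the authors' code, and the multi-scale ("scale-aware") weighting described in the paper is omitted.

# Hypothetical sketch of the dual attention idea described in the abstract.
# Not the authors' implementation; shapes, names, and fusion are assumptions.
import torch
import torch.nn as nn


class PositionAttention3D(nn.Module):
    # Self-attention over spatial positions of a 3D feature volume.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv3d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv3d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # (B, N, C')
        k = self.key(x).view(b, -1, n)                       # (B, C', N)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)        # (B, N, N) position affinities
        v = self.value(x).view(b, c, n)                      # (B, C, N)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, d, h, w)
        return self.gamma * out + x                          # residual connection


class ChannelAttention3D(nn.Module):
    # Self-attention over channel maps to emphasize inter-channel dependencies.
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, d, h, w = x.shape
        feat = x.view(b, c, -1)                               # (B, C, N)
        attn = torch.softmax(torch.bmm(feat, feat.permute(0, 2, 1)), dim=-1)  # (B, C, C)
        out = torch.bmm(attn, feat).view(b, c, d, h, w)
        return self.gamma * out + x


class DualAttention3D(nn.Module):
    # Fuse the two branches by summation (a common choice; an assumption here).
    def __init__(self, channels):
        super().__init__()
        self.position = PositionAttention3D(channels)
        self.channel = ChannelAttention3D()

    def forward(self, x):
        return self.position(x) + self.channel(x)


if __name__ == "__main__":
    feats = torch.randn(1, 16, 8, 8, 8)       # toy feature volume from a CT/CBCT encoder
    print(DualAttention3D(16)(feats).shape)   # torch.Size([1, 16, 8, 8, 8])

In an unsupervised registration network of this kind, such a block would typically sit between the feature encoder for the CT/CBCT pair and the layer that predicts the dense deformation field.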
Pages: 3705-3716 (12 pages)