Rigid and non-rigid motion artifact reduction in X-ray CT using attention module

Cited by: 29
Authors
Ko, Youngjun [1 ]
Moon, Seunghyuk [1 ]
Baek, Jongduk [1 ]
Shim, Hyunjung [1 ]
Affiliations
[1] Yonsei Univ, Sch Integrated Technol, Songdogwahak Ro 85, Incheon, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Attention module; CT motion artifact reduction; Deep learning; Perceptual loss; Residual block; CONE-BEAM CT; GENERATIVE ADVERSARIAL NETWORK; LOW-DOSE CT; COMPUTED-TOMOGRAPHY; DRIVEN; COMPENSATION; MODEL;
DOI
10.1016/j.media.2020.101883
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Motion artifacts are a major factor that can degrade the diagnostic performance of computed tomography (CT) images. In particular, motion artifacts become considerably more severe when an imaging system requires a long scan time, as in dental CT or cone-beam CT (CBCT) applications, where patients generate rigid and non-rigid motions. To address this problem, we propose a new real-time technique for motion artifact reduction that utilizes a deep residual network with an attention module. Our attention module was designed to increase the model capacity by amplifying or attenuating the residual features according to their importance. We trained and evaluated the network by creating four benchmark datasets with rigid motions, or with both rigid and non-rigid motions, under a step-and-shoot fan-beam CT (FBCT) or a CBCT. Each dataset provided a set of motion-corrupted CT images paired with their ground-truth CT images. The strong modeling power of the proposed network allowed us to successfully handle motion artifacts from the two CT systems under various motion scenarios in real time, and the proposed model demonstrated clear performance benefits. In addition, we compared our model with Wasserstein generative adversarial network (WGAN)-based models and a deep residual network (DRN)-based model, which are among the most powerful techniques for CT denoising and natural RGB image deblurring, respectively. Based on extensive analysis and comparisons using the four benchmark datasets, we confirmed that our model outperformed the aforementioned competitors. Our benchmark datasets and implementation code are available at https://github.com/youngjun-ko/ct_mar_attention . (C) 2020 Elsevier B.V. All rights reserved.
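The abstract describes an attention module that amplifies or attenuates residual features according to their importance. A minimal NumPy sketch of one common way to realize such per-channel gating (squeeze-and-excitation-style attention) is shown below; the function name, weight shapes, and reduction ratio are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Scale each channel of a residual feature map by a learned gate in (0, 1).

    features: (C, H, W) residual feature map
    w1: (C//r, C) squeeze weights; w2: (C, C//r) excitation weights
    (r is a channel-reduction ratio; all names here are hypothetical.)
    """
    # Squeeze: global average pooling summarizes each channel as a scalar.
    z = features.mean(axis=(1, 2))                # shape (C,)
    # Excitation: a small bottleneck MLP maps the summary to per-channel gates.
    gates = sigmoid(w2 @ np.maximum(w1 @ z, 0))   # shape (C,), values in (0, 1)
    # Amplify or attenuate each residual channel by its estimated importance.
    return features * gates[:, None, None]
```

Because the gates lie in (0, 1), each channel of the residual is attenuated in proportion to its estimated importance before being added back to the backbone's output, which is the general mechanism the abstract refers to.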
Pages: 12