Deep-learning-based direct inversion for material decomposition

Cited by: 36
Authors
Gong, Hao [1 ]
Tao, Shengzhen [1 ]
Rajendran, Kishore [1 ]
Zhou, Wei [1 ]
McCollough, Cynthia H. [1 ]
Leng, Shuai [1 ]
Affiliations
[1] Mayo Clin, Dept Radiol, Rochester, MN 55901 USA
Funding
National Institutes of Health (NIH), USA;
Keywords
convolutional neural network; deep learning; material decomposition; multi-energy CT; photon-counting detector CT; DUAL-ENERGY CT; IODINE QUANTIFICATION; MULTIMATERIAL DECOMPOSITION; NEURAL-NETWORK; RECONSTRUCTION; ATTENUATION; DIAGNOSIS; ACCURACY;
DOI
10.1002/mp.14523
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline codes
1002; 100207; 1009;
Abstract
Purpose: To develop a convolutional neural network (CNN) that directly estimates material density distributions from multi-energy computed tomography (CT) images, without performing conventional material decomposition.

Methods: The proposed CNN (denoted Incept-net) followed the general encoder-decoder framework, under the assumption that local image information is sufficient to model the nonlinear physical process of multi-energy CT. Incept-net was implemented with a customized loss function that included an in-house image-gradient-correlation (IGC) regularizer to improve edge preservation. The network consisted of two types of customized multibranch modules that exploit multiscale feature representation to improve robustness against local image noise and artifacts. Inserts containing different materials at various densities [hydroxyapatite (HA), iodine, a blood-iodine mixture, and fat] were scanned on a research photon-counting-detector (PCD) CT system with two energy thresholds and multiple radiation dose levels. The network was trained using phantom image patches only, and tested on full field-of-view phantom images with different insert configurations as well as in vivo porcine images. Furthermore, the nominal mass densities of the insert materials were used as the training labels, which potentially provided an implicit mass-conservation constraint. Incept-net performance was evaluated in terms of image noise, detail preservation, and quantitative accuracy, and was compared to common material decomposition algorithms, including least-squares-based material decomposition (LS-MD), total-variation-regularized material decomposition (TV-MD), and a U-net-based method.
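The abstract does not give the exact form of the IGC regularizer; as a rough illustration only, one plausible image-gradient-correlation penalty, combined with an MSE data term, could look like the following NumPy sketch (the forward-difference gradients, the Pearson-correlation form, and the weight `lam` are all assumptions, not the paper's actual loss):

```python
import numpy as np

def image_gradients(img):
    # Forward-difference gradients along rows (y) and columns (x).
    gy = np.diff(img, axis=0)
    gx = np.diff(img, axis=1)
    return gy, gx

def igc_penalty(pred, label, eps=1e-8):
    """One plausible image-gradient-correlation penalty: 1 minus the
    Pearson correlation between the gradient maps of prediction and
    label. Perfectly aligned edges give a penalty near 0."""
    py, px = image_gradients(pred)
    ly, lx = image_gradients(label)
    g_pred = np.concatenate([py.ravel(), px.ravel()])
    g_lab = np.concatenate([ly.ravel(), lx.ravel()])
    g_pred = g_pred - g_pred.mean()
    g_lab = g_lab - g_lab.mean()
    corr = (g_pred @ g_lab) / (
        np.linalg.norm(g_pred) * np.linalg.norm(g_lab) + eps)
    return 1.0 - corr

def total_loss(pred, label, lam=0.1):
    # MSE data term plus the edge-preserving IGC regularizer.
    return np.mean((pred - label) ** 2) + lam * igc_penalty(pred, label)
```

Because the penalty depends on gradient correlation rather than gradient magnitude, it rewards edges that are in the right place even when their contrast differs from the label, which is one way an edge-preservation term can be decoupled from the intensity-fidelity term.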
Results: Incept-net improved the accuracy of the predicted basis-material mass densities compared with U-net, TV-MD, and LS-MD: the mean absolute error (MAE) for iodine was 0.66, 1.0, 1.33, and 1.57 mgI/cc for Incept-net, U-net, TV-MD, and LS-MD, respectively, across all iodine-containing inserts (2.0-24.0 mgI/cc). With LS-MD as the baseline, Incept-net and U-net achieved comparable noise reduction (both around 95%), higher than TV-MD (85%). The proposed IGC regularizer effectively helped both Incept-net and U-net reduce image artifacts. Incept-net closely conserved the total mass density (i.e., the mass-conservation constraint) in the porcine images, which heuristically validated the quantitative accuracy of its outputs in anatomical background. In general, Incept-net performance was less dependent on radiation dose level than the two conventional methods; with approximately 40% fewer parameters, Incept-net outperformed the comparator U-net, indicating that its performance gain was not achieved simply by increasing network learning capacity.

Conclusion: Incept-net demonstrated superior qualitative image appearance, higher quantitative accuracy, and lower noise than the conventional methods, and was less sensitive to dose changes. Incept-net generalized well to unseen image structures and different material mass densities. This study provides preliminary evidence that the proposed CNN may be used to improve material decomposition quality in multi-energy CT.
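For context on the LS-MD baseline: in image-domain material decomposition, each pixel's energy-bin attenuation values are modeled as a linear combination of basis-material densities, so decomposition reduces to a small pixel-wise least-squares solve. A minimal sketch, using a hypothetical 2x2 calibration matrix (the attenuation coefficients below are illustrative placeholders, not any scanner's actual calibration):

```python
import numpy as np

# Hypothetical calibration matrix A (energy bins x basis materials);
# real values depend on the scanner, energy thresholds, and materials.
A = np.array([[0.30, 0.50],   # low-energy bin:  [material 1, material 2]
              [0.20, 0.90]])  # high-energy bin

def ls_md(ct_images, A):
    """Pixel-wise least-squares material decomposition: for each pixel,
    solve A @ rho = mu for the basis-material densities rho.
    ct_images: array of shape (n_energy_bins, H, W)."""
    n_e, h, w = ct_images.shape
    mu = ct_images.reshape(n_e, -1)               # (n_bins, n_pixels)
    rho, *_ = np.linalg.lstsq(A, mu, rcond=None)  # (n_materials, n_pixels)
    return rho.reshape(A.shape[1], h, w)
```

Because this solve is independent per pixel and uses no spatial context, noise in the energy-bin images propagates directly into the density maps; this is the limitation that TV regularization (TV-MD) and learned spatial priors (U-net, Incept-net) aim to address.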
Pages: 6294 - 6309
Page count: 16
Related Papers
50 records
  • [1] Development of virtual CBCT simulator and deep-learning-based elemental material decomposition
    Shimomura, T.
    Fujiwara, D.
    Inoue, Y.
    Takeya, A.
    Ohta, T.
    Nozawa, Y.
    Imae, T.
    Nawa, K.
    Nakagawa, K.
    Haga, A.
    RADIOTHERAPY AND ONCOLOGY, 2023, 182 : S1408 - S1409
  • [2] Virtual computed-tomography system for deep-learning-based material decomposition
    Fujiwara, Daiyu
    Shimomura, Taisei
    Zhao, Wei
    Li, Kai-Wen
    Haga, Akihiro
    Geng, Li-Sheng
    PHYSICS IN MEDICINE AND BIOLOGY, 2022, 67 (15)
  • [3] Mitigating ambiguity by deep-learning-based modal decomposition method
    Fan, Xiaojie
    Ren, Fang
    Xie, Yulai
    Zhang, Yiying
    Niu, Jingjing
    Zhang, Jingyu
    Wang, Jianping
    OPTICS COMMUNICATIONS, 2020, 471
  • [4] Deep-Learning-Based Prestack Seismic Inversion Constrained by AVO Attributes
    Ge, Qiang
    Cao, Hong
    Yang, Zhifang
    Yuan, Sanyi
    Song, Cao
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2024, 21 : 1 - 5
  • [5] Deep-Learning-Based Calibration in Contrast Source Inversion Based Microwave Subsurface Imaging
    Hanabusa, Takahiro
    Morooka, Takahide
    Kidera, Shouhei
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [6] Deep-learning-based airborne transient electromagnetic inversion providing the depth of investigation
    Kang, Hyeonwoo
    Bang, Minkyu
    Seol, Soon Jee
    Byun, Joongmoo
    GEOPHYSICS, 2024, 89 (02) : E31 - E45
  • [7] Deep decomposition learning for reflectivity inversion
    Torres, Kristian
    Sacchi, Mauricio D.
    GEOPHYSICAL PROSPECTING, 2023, 71 (06) : 963 - 982
  • [8] Texture and artifact decomposition for improving generalization in deep-learning-based deepfake detection
    Gao, Jie
    Micheletto, Marco
    Orru, Giulia
    Concas, Sara
    Feng, Xiaoyi
    Marcialis, Gian Luca
    Roli, Fabio
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [9] Deep-Learning-Based Low-Frequency Reconstruction in Full-Waveform Inversion
    Gu, Zhiyuan
    Chai, Xintao
    Yang, Taihui
    REMOTE SENSING, 2023, 15 (05)
  • [10] Bathymetry Inversion Using a Deep-Learning-Based Surrogate for Shallow Water Equations Solvers
    Liu, Xiaofeng
    Song, Yalan
    Shen, Chaopeng
    WATER RESOURCES RESEARCH, 2024, 60 (03)