Generation of quantification maps and weighted images from synthetic magnetic resonance imaging using deep learning network

Cited by: 9
Authors
Liu, Yawen [1,2]
Niu, Haijun [1,2]
Ren, Pengling [3]
Ren, Jialiang [4]
Wei, Xuan [3]
Liu, Wenjuan [3]
Ding, Heyu [3]
Li, Jing [3]
Xia, Jingjing [5]
Zhang, Tingting [3]
Lv, Han [3]
Yin, Hongxia [3]
Wang, Zhenchang [1,2,3]
Affiliations
[1] Beihang Univ, Sch Biol Sci & Med Engn, Beijing 100191, Peoples R China
[2] Beihang Univ, Beijing Adv Innovat Ctr Biomed Engn, Beijing 100191, Peoples R China
[3] Capital Med Univ, Beijing Friendship Hosp, Dept Radiol, Beijing 100050, Peoples R China
[4] GE Healthcare China, Beijing 100176, Peoples R China
[5] GE Healthcare, Shanghai 200040, Peoples R China
Keywords
synthetic MRI; magnetic resonance image compilation (MAGiC); deep learning; U-net model; SEGMENTATION; BRAIN; MRI;
DOI
10.1088/1361-6560/ac46dd
Chinese Library Classification (CLC) Number
R318 [Biomedical Engineering]
Discipline Classification Code
0831
Abstract
Objective. The generation of quantification maps and weighted images in synthetic MRI is based on complex fitting equations, and this process requires long image generation times. The objective of this study was to evaluate the feasibility of a deep learning method for fast reconstruction of synthetic MRI. Approach. A total of 44 healthy subjects were recruited and randomly divided into a training set (30 subjects) and a testing set (14 subjects). A multiple-dynamic, multiple-echo (MDME) sequence was used to acquire synthetic MRI images. Quantification maps (T1, T2, and proton density (PD) maps) and weighted images (T1W, T2W, and T2W FLAIR) were created with MAGiC software and then used as the ground truth for the deep learning (DL) model. An improved multichannel U-Net was trained to generate the quantification maps and weighted images from the raw synthetic MRI data (8 module images). Quantitative evaluation was performed on the quantification maps, whereas both quantitative metrics and qualitative assessment were used to evaluate the weighted images. Nonparametric Wilcoxon signed-rank tests were performed. Main results. The quantitative evaluation showed that the error between the generated quantification maps and the reference maps was small. For the weighted images, no significant difference in overall image quality or signal-to-noise ratio was identified between the DL and synthetic images. Notably, the DL-generated T2W images achieved improved image contrast, and fewer artifacts were present on the DL T2W FLAIR images than on the corresponding synthetic T2W FLAIR images. Significance. The DL algorithm provides a promising method for image generation in synthetic MRI, in which every step of the calculation can be optimized and accelerated, thereby simplifying the workflow of synthetic MRI techniques.
Pages: 12
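
For context, the conventional synthetic MRI pipeline referred to in the abstract first fits T1, T2, and PD maps from the eight MDME module images and then synthesizes each weighted contrast from those maps; for a spin-echo-like contrast the signal is roughly proportional to PD * (1 - exp(-TR/T1)) * exp(-TE/T2), with an additional inversion-recovery factor for FLAIR. The sketch below is not the authors' "improved" network; it is a minimal, hypothetical multichannel 2D U-Net in PyTorch that only illustrates the input/output configuration described in the abstract, namely 8 input channels (the raw MDME module images) mapped to 6 output channels (T1, T2, and PD maps plus T1W, T2W, and T2W FLAIR images). Whether the authors use a single 6-channel output or separate networks per output, and all layer widths and depths shown here, are assumptions.

```python
# Minimal sketch (assumption, not the authors' exact "improved" architecture):
# a plain multichannel 2D U-Net mapping 8 MDME module images to 6 outputs
# (T1, T2, PD maps and T1W, T2W, T2W FLAIR images).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class MultichannelUNet(nn.Module):
    def __init__(self, in_channels=8, out_channels=6, base=64):
        super().__init__()
        # Encoder: three downsampling stages.
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        # Bottleneck.
        self.bottleneck = conv_block(base * 4, base * 8)
        # Decoder: transposed convolutions plus skip connections.
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, kernel_size=2, stride=2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        # 1x1 convolution produces the 6 output channels.
        self.head = nn.Conv2d(base, out_channels, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: one 8-channel MDME slice at 256x256 yields 6 output maps/images.
model = MultichannelUNet()
dummy = torch.randn(1, 8, 256, 256)
print(model(dummy).shape)  # torch.Size([1, 6, 256, 256])
```

Such a direct image-to-image mapping would replace the per-pixel fitting and contrast-synthesis steps with a single forward pass, which is consistent with the fast-reconstruction motivation stated in the abstract.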