Ea-GANs: Edge-Aware Generative Adversarial Networks for Cross-Modality MR Image Synthesis

Cited by: 182
Authors
Yu, Biting [1 ]
Zhou, Luping [2 ]
Wang, Lei [1 ]
Shi, Yinghuan [3 ]
Fripp, Jurgen [4 ]
Bourgeat, Pierrick [4 ]
Affiliations
[1] Univ Wollongong, Sch Comp & Informat Technol, Wollongong, NSW 2522, Australia
[2] Univ Sydney, Sch Elect & Informat Engn, Sydney, NSW 2006, Australia
[3] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing 210023, Jiangsu, Peoples R China
[4] CSIRO Hlth & Biosecur, Brisbane, Qld 4029, Australia
Funding
National Natural Science Foundation of China; Australian Research Council;
Keywords
Neural networks; machine learning; magnetic resonance imaging (MRI); brain; ATTENUATION CORRECTION; RANDOM FOREST; SEGMENTATION; REGRESSION;
DOI
10.1109/TMI.2019.2895894
CLC Classification Number
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
Magnetic resonance (MR) imaging is a widely used medical imaging protocol that can be configured to provide different contrasts between tissues in the human body. By setting different scanning parameters, each MR imaging modality reflects unique visual characteristics of the scanned body part, benefiting the subsequent analysis from multiple perspectives. To utilize the complementary information from multiple imaging modalities, cross-modality MR image synthesis has recently attracted increasing research interest. However, most existing methods focus only on minimizing the pixel/voxel-wise intensity difference and ignore the textural details of image content structure, which affects the quality of the synthesized images. In this paper, we propose edge-aware generative adversarial networks (Ea-GANs) for cross-modality MR image synthesis. Specifically, we integrate edge information, which reflects the textural structure of image content and depicts the boundaries of different objects in images, to reduce this gap. Corresponding to different learning strategies, two frameworks are proposed, i.e., a generator-induced Ea-GAN (gEa-GAN) and a discriminator-induced Ea-GAN (dEa-GAN). The gEa-GAN incorporates the edge information via its generator, while the dEa-GAN does so in both the generator and the discriminator, so that the edge similarity is also adversarially learned. In addition, the proposed Ea-GANs are 3D-based and utilize hierarchical features to capture contextual information. The experimental results demonstrate that the proposed Ea-GANs, especially the dEa-GAN, outperform multiple state-of-the-art methods for cross-modality MR image synthesis in both qualitative and quantitative measures. Moreover, the dEa-GAN also generalizes well to generic image synthesis tasks on benchmark datasets of facades, maps, and cityscapes.
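The abstract does not give implementation details; as a rough illustration of how edge information can enter a voxel-wise synthesis objective, the minimal PyTorch sketch below builds 3D Sobel-style edge maps and adds an L1 penalty on their difference to a standard voxel-wise L1 term, in the spirit of the generator-side (gEa-GAN) use of edges. The function names (sobel_edge_3d, edge_aware_generator_loss), the kernel construction, and the weighting factors lambda_voxel/lambda_edge are assumptions of this sketch, not taken from the paper.

import torch
import torch.nn.functional as F

def sobel_edge_3d(volume):
    # volume: (N, 1, D, H, W); returns the per-voxel gradient magnitude
    deriv = torch.tensor([-1.0, 0.0, 1.0])   # derivative component
    smooth = torch.tensor([1.0, 2.0, 1.0])   # smoothing component
    # Separable 3x3x3 Sobel-style kernels for gradients along each axis
    gz = deriv.view(3, 1, 1) * smooth.view(1, 3, 1) * smooth.view(1, 1, 3)
    gy = smooth.view(3, 1, 1) * deriv.view(1, 3, 1) * smooth.view(1, 1, 3)
    gx = smooth.view(3, 1, 1) * smooth.view(1, 3, 1) * deriv.view(1, 1, 3)
    kernels = torch.stack([gx, gy, gz]).unsqueeze(1)      # (3, 1, 3, 3, 3)
    kernels = kernels.to(volume.device, volume.dtype)
    grads = F.conv3d(volume, kernels, padding=1)          # (N, 3, D, H, W)
    return torch.sqrt((grads ** 2).sum(dim=1, keepdim=True) + 1e-8)

def edge_aware_generator_loss(fake, real, lambda_voxel=1.0, lambda_edge=1.0):
    # Voxel-wise L1 plus an L1 penalty on the difference of edge maps
    voxel_l1 = F.l1_loss(fake, real)
    edge_l1 = F.l1_loss(sobel_edge_3d(fake), sobel_edge_3d(real))
    return lambda_voxel * voxel_l1 + lambda_edge * edge_l1

# Toy usage with random volumes standing in for synthesized and target MR patches
fake = torch.rand(2, 1, 16, 64, 64)
real = torch.rand(2, 1, 16, 64, 64)
print(edge_aware_generator_loss(fake, real).item())

In the dEa-GAN variant described in the abstract, such edge maps would additionally be supplied to the discriminator so that edge similarity is learned adversarially; the loss above only illustrates the generator-side use of edges.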
Pages: 1750-1762
Page count: 13
Related Papers
47 records in total
  • [1] Edge-aware image outpainting with attentional generative adversarial networks
    Li, Xiaoming
    Zhang, Hengzhi
    Feng, Lei
    Hu, Jing
    Zhang, Rongguo
    Qiao, Qiang
    IET IMAGE PROCESSING, 2022, 16 (07) : 1807 - 1821
  • [2] Multi-Scale Transformer Network With Edge-Aware Pre-Training for Cross-Modality MR Image Synthesis
    Li, Yonghao
    Zhou, Tao
    He, Kelei
    Zhou, Yi
    Shen, Dinggang
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (11) : 3395 - 3407
  • [3] Cross-Modality Breast Image Translation with Improved Resolution Using Generative Adversarial Networks
    Akanksha Sharma
    Neeru Jindal
    Wireless Personal Communications, 2021, 119 : 2877 - 2891
  • [4] CROSS-MODALITY DISTILLATION: A CASE FOR CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS
    Roheda, Siddharth
    Riggan, Benjamin S.
    Krim, Hamid
    Dai, Liyi
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 2926 - 2930
  • [5] Cross-Modality Breast Image Translation with Improved Resolution Using Generative Adversarial Networks
    Sharma, Akanksha
    Jindal, Neeru
    WIRELESS PERSONAL COMMUNICATIONS, 2021, 119 (04) : 2877 - 2891
  • [6] Edge-Aware Image Super-Resolution Using a Generative Adversarial Network
    Das B.
    Roy S.D.
    SN Computer Science, 4 (2)
  • [7] Generative Adversarial Networks (GANs) for Retinal Fundus Image Synthesis
    Bellemo, Valentina
    Burlina, Philippe
    Yong, Liu
    Wong, Tien Yin
    Ting, Daniel Shu Wei
    COMPUTER VISION - ACCV 2018 WORKSHOPS, 2019, 11367 : 289 - 302
  • [8] Sample-Adaptive GANs: Linking Global and Local Mappings for Cross-Modality MR Image Synthesis
    Yu, Biting
    Zhou, Luping
    Wang, Lei
    Shi, Yinghuan
    Fripp, Jurgen
    Bourgeat, Pierrick
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (07) : 2339 - 2350
  • [9] Cross-Modal PET Synthesis Method Based on Improved Edge-Aware Generative Adversarial Network
    Lei, Liting
    Zhang, Rui
    Zhang, Haifei
    Li, Xiujing
    Zou, Yuchao
    Aldosary, Saad
    Hassanein, Azza S.
    JOURNAL OF NANOELECTRONICS AND OPTOELECTRONICS, 2023, 18 (10) : 1184 - 1192
  • [10] Bidirectional cross-modality unsupervised domain adaptation using generative adversarial networks for cardiac image segmentation
    Cui, Hengfei
    Chang Yuwen
    Lei Jiang
    Yong Xia
    Zhang, Yanning
    COMPUTERS IN BIOLOGY AND MEDICINE, 2021, 136