MFGAN: Multi-modal Feature-fusion for CT Metal Artifact Reduction Using GANs

Cited by: 1
|
Authors
Xu, Liming [1 ]
Zeng, Xianhua [2 ]
Li, Weisheng [2 ]
Zheng, Bochuan [1 ]
Affiliations
[1] China West Normal Univ, 1 Shida Rd, Nanchong 637009, Sichuan, Peoples R China
[2] Chongqing Univ Posts & Telecommun, 2 Chongwen Rd, Chongqing 400065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature fusion; generative adversarial nets; metal artifact reduction; second artifact; edge enhancement; INFORMATION; NETWORK; MODEL;
DOI
10.1145/3528172
Chinese Library Classification
TP [Automation & Computer Technology];
Discipline Code
0812 ;
Abstract
Due to metallic implants in certain patients, the Computed Tomography (CT) images of these patients are often corrupted by undesirable metal artifacts. Although many methods have been proposed for metal artifact reduction, the task remains challenging: corrected results often suffer from symptom variance, second artifacts, and poor subjective quality. To address these issues, we propose a novel method based on generative adversarial nets (GANs) to reduce metal artifacts. Specifically, we first encode interactive information (text) and imaging CT (image) to yield a multi-modal feature-fusion representation, which overcomes the limited representative ability of single-modal CT images. Incorporating the interactive information constrains feature generation, ensuring symptom consistency between the corrected and target CT images. We then design an enhancement network to avoid second artifacts, enhance edges, and suppress noise. In addition, three radiology physicians were invited to evaluate the corrected CT images. Experiments show that our method achieves significant improvement over other methods. Objectively, it achieves an average increase of 7.44% in PSNR and 6.12% in SSIM on two medical image datasets. Subjectively, it outperforms the compared methods in terms of sharpness, resolution, invariance, and acceptability.
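The abstract's core idea, fusing an encoded text description with encoded CT image features into one joint representation, can be illustrated with a minimal numpy sketch. This is not the paper's actual architecture: the encoders here are simple linear projections standing in for the network's text and image branches, and all dimensions (32x32 patch, 64-d features, 100-word vocabulary) are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(ct_patch, w_img):
    # Flatten the CT patch and project it to a feature vector
    # (a stand-in for the paper's convolutional image encoder).
    return np.tanh(ct_patch.ravel() @ w_img)

def encode_text(token_ids, embedding):
    # Mean-pool token embeddings (a stand-in for the text encoder
    # of the interactive/diagnostic information).
    return embedding[token_ids].mean(axis=0)

def fuse(img_feat, txt_feat, w_fuse):
    # Concatenate both modalities and project to a joint
    # multi-modal representation.
    return np.tanh(np.concatenate([img_feat, txt_feat]) @ w_fuse)

# Hypothetical dimensions: 32x32 CT patch, 64-d features per modality.
w_img = rng.standard_normal((32 * 32, 64)) * 0.01
embedding = rng.standard_normal((100, 64)) * 0.01  # 100-word vocabulary
w_fuse = rng.standard_normal((128, 64)) * 0.01

ct_patch = rng.standard_normal((32, 32))
token_ids = np.array([3, 17, 42])  # hypothetical token ids for a report

z = fuse(encode_image(ct_patch, w_img),
         encode_text(token_ids, embedding),
         w_fuse)
print(z.shape)  # (64,)
```

In the paper's pipeline, a representation like `z` would condition the GAN generator so that the corrected CT stays consistent with the described symptoms; here it only demonstrates the concatenate-and-project fusion step.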
Pages: 17