Autoencoder-Based Collaborative Attention GAN for Multi-Modal Image Synthesis

Cited by: 9
Authors
Cao, Bing [1 ,2 ]
Cao, Haifang [1 ,3 ]
Liu, Jiaxu [1 ,3 ]
Zhu, Pengfei [1 ,3 ]
Zhang, Changqing [1 ,3 ]
Hu, Qinghua [1 ,3 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300403, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710000, Peoples R China
[3] Tianjin Univ, Haihe Lab Informat Technol Applicat Innovat, Tianjin 300403, Peoples R China
Keywords
Image synthesis; Collaboration; Task analysis; Generative adversarial networks; Feature extraction; Data models; Image reconstruction; Multi-modal image synthesis; collaborative attention; single-modal attention; multi-modal attention; TRANSLATION; NETWORK;
DOI
10.1109/TMM.2023.3274990
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Multi-modal images are required in a wide range of practical scenarios, from clinical diagnosis to public security. However, certain modalities may be incomplete or unavailable because of restricted imaging conditions, which commonly leads to decision bias in many real-world applications. Despite the significant advancement of existing image synthesis techniques, learning complementary information from multi-modal inputs remains challenging. To address this problem, we propose an autoencoder-based collaborative attention generative adversarial network (ACA-GAN) that uses the available multi-modal images to generate the missing ones. The collaborative attention mechanism deploys a single-modal attention module and a multi-modal attention module to effectively extract complementary information from the available modalities. Considering the significant modality gap, we further develop an autoencoder network to extract the self-representation of the target modality, which guides the generative model to fuse target-specific information from multiple modalities. This considerably improves cross-modal consistency with the desired modality, thereby greatly enhancing image synthesis performance. Quantitative and qualitative comparisons on various multi-modal image synthesis tasks demonstrate that our approach yields more precise and realistic results than several prior methods.
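The abstract describes the two attention paths and the autoencoder guidance only at a high level. The following is a minimal PyTorch sketch of how such a collaborative attention block could be wired; the module names, layer shapes, the SE-style channel attention, and the softmax spatial fusion are all illustrative assumptions, not the authors' actual ACA-GAN implementation.

```python
# Hypothetical sketch of collaborative attention with autoencoder guidance.
# All design choices below are assumptions made for illustration.
import torch
import torch.nn as nn


class SingleModalAttention(nn.Module):
    """Channel attention over one modality's feature map (assumed SE-style)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight channels within a single modality


class MultiModalAttention(nn.Module):
    """Per-pixel fusion across modalities (assumed softmax-weighted form)."""
    def __init__(self, channels: int, num_modalities: int):
        super().__init__()
        self.conv = nn.Conv2d(channels * num_modalities, num_modalities, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        stacked = torch.cat(feats, dim=1)               # (B, M*C, H, W)
        weights = torch.softmax(self.conv(stacked), 1)  # (B, M, H, W): one map per modality
        # Convex combination of the modality features at every spatial location.
        return sum(w.unsqueeze(1) * f for w, f in zip(weights.unbind(1), feats))


class TargetAutoencoder(nn.Module):
    """Autoencoder whose bottleneck serves as the target modality's
    self-representation, used to guide the generator (assumed design)."""
    def __init__(self, channels: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, channels, 3, 2, 1), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(nn.ConvTranspose2d(channels, 1, 4, 2, 1))

    def forward(self, target: torch.Tensor):
        code = self.encoder(target)
        return code, self.decoder(code)  # code guides fusion; decoded output for recon loss


if __name__ == "__main__":
    feats = [torch.randn(2, 16, 32, 32) for _ in range(3)]  # three available modalities
    feats = [SingleModalAttention(16)(f) for f in feats]    # per-modality refinement
    fused = MultiModalAttention(16, 3)(feats)               # cross-modality fusion
    print(fused.shape)                                      # torch.Size([2, 16, 32, 32])
```

In a full training pipeline, the autoencoder's bottleneck code would presumably be aligned with the fused multi-modal feature through a consistency loss, so that the generator absorbs target-specific information as the abstract describes; that loss and the adversarial objective are omitted here.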
Pages: 995-1010
Number of pages: 16
Related Papers
50 records in total
  • [21] AMC: Attention guided Multi-modal Correlation Learning for Image Search
    Chen, Kan
    Bui, Trung
    Fang, Chen
    Wang, Zhaowen
    Nevatia, Ram
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 6203 - 6211
  • [22] Hybrid generative adversarial network based on a mixed attention fusion module for multi-modal MR image synthesis algorithm
    Li, Haiyan
    Han, Yongqiang
    Chang, Jun
    Zhou, Liping
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (06) : 2111 - 2130
  • [23] Multi-modal Medical Image Fusion Based on GAN and the Shift-Invariant Shearlet Transform
    Wang, Lei
    Chang, Chunhong
    Hao, Benli
    Liu, Chunxiang
    2020 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2020, : 2538 - 2543
  • [24] Autoencoder-based OFDM for Agricultural Image Transmission
    Li, Dongbo
    Liu, Xiangyu
    Shao, Yuxuan
    Sun, Yuchen
    Cheng, Siyao
    Liu, Jie
    2022 TENTH INTERNATIONAL CONFERENCE ON ADVANCED CLOUD AND BIG DATA, CBD, 2022, : 157 - 162
  • [25] Convolutional Autoencoder-Based Multispectral Image Fusion
    Azarang, Arian
    Manoochehri, Hafez E.
    Kehtarnavaz, Nasser
    IEEE ACCESS, 2019, 7 : 35673 - 35683
  • [26] Contrast-Enhanced Liver Magnetic Resonance Image Synthesis Using Gradient Regularized Multi-Modal Multi-Discrimination Sparse Attention Fusion GAN
    Jiao, Changzhe
    Ling, Diane
    Bian, Shelly
    Vassantachart, April
    Cheng, Karen
    Mehta, Shahil
    Lock, Derrick
    Zhu, Zhenyu
    Feng, Mary
    Thomas, Horatio
    Scholey, Jessica E.
    Sheng, Ke
    Fan, Zhaoyang
    Yang, Wensha
    CANCERS, 2023, 15 (14)
  • [27] Convolutional Autoencoder-Based Transfer Learning for Multi-Task Image Inferences
    Lu, Jie
    Verma, Naveen
    Jha, Niraj K.
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2022, 10 (02) : 1045 - 1057
  • [28] Multi-Modal Sentiment Analysis Based on Image and Text Fusion Based on Cross-Attention Mechanism
    Li, Hongchan
    Lu, Yantong
    Zhu, Haodong
    ELECTRONICS, 2024, 13 (11)
  • [29] A Collaborative Anomaly Localization Method Based on Multi-Modal Images
    Li, Yuanhang
    Yao, Junfeng
    Chen, Kai
    Zhang, Han
    Sun, Xiaodong
    Qian, Quan
    Wu, Xing
PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 1322 - 1327
  • [30] Multi-modal Emotion Recognition Based on Speech and Image
    Li, Yongqiang
    He, Qi
    Zhao, Yongping
    Yao, Hongxun
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2017, PT I, 2018, 10735 : 844 - 853