Adversarial Representation Learning for Text-to-Image Matching

Cited by: 174
Authors:
Sarafianos, Nikolaos [1]
Xu, Xiang [1]
Kakadiaris, Ioannis A. [1]
Affiliation:
[1] Univ Houston, Computat Biomed Lab, Houston, TX 77004 USA
Source:
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019
DOI:
10.1109/ICCV.2019.00591
CLC classification:
TP18 [Artificial Intelligence Theory]
Discipline codes:
081104; 0812; 0835; 1405
Abstract:
For many computer vision applications such as image captioning, visual question answering, and person search, learning discriminative feature representations at both image and text level is an essential yet challenging problem. Its challenges originate from the large word variance in the text domain as well as the difficulty of accurately measuring the distance between the features of the two modalities. Most prior work focuses on the latter challenge, by introducing loss functions that help the network learn better feature representations but fail to account for the complexity of the textual input. With that in mind, we introduce TIMAM: a Text-Image Modality Adversarial Matching approach that learns modality-invariant feature representations using adversarial and cross-modal matching objectives. In addition, we demonstrate that BERT, a publicly-available language model that extracts word embeddings, can successfully be applied in the text-to-image matching domain. The proposed approach achieves state-of-the-art cross-modal matching performance on four widely-used publicly-available datasets resulting in absolute improvements ranging from 2% to 5% in terms of rank-1 accuracy.
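The abstract pairs a cross-modal matching objective with an adversarial one: non-matching image-text pairs are pushed apart in a shared embedding space while a modality discriminator is fooled into treating image and text features as indistinguishable. A minimal plain-Python sketch of those two losses follows; it is an illustration of the general technique, not the paper's implementation (TIMAM uses deep encoders, BERT embeddings, and its own loss formulation), and the hinge margin and linear discriminator here are illustrative assumptions:

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length so dot products are cosine similarities."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def matching_loss(img_feats, txt_feats, margin=0.2):
    """Bidirectional hinge ranking loss on cosine similarity.
    Matched image/text pairs are assumed to share the same index."""
    img = [l2_normalize(v) for v in img_feats]
    txt = [l2_normalize(v) for v in txt_feats]
    n = len(img)
    sim = [[sum(a * b for a, b in zip(img[i], txt[j])) for j in range(n)]
           for i in range(n)]
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # push every non-matching pair below the matched pair by `margin`
            loss += max(0.0, margin + sim[i][j] - sim[i][i])  # image -> text
            loss += max(0.0, margin + sim[j][i] - sim[i][i])  # text -> image
    return loss / max(1, n * (n - 1))

def discriminator_loss(feats, modality_labels, w, b):
    """Logistic loss of a linear modality discriminator (0 = image, 1 = text).
    The encoders are trained to *maximize* this loss (e.g. via gradient
    reversal or alternating updates), driving the two feature distributions
    toward modality-invariance."""
    loss = 0.0
    for v, y in zip(feats, modality_labels):
        z = sum(wi * xi for wi, xi in zip(w, v)) + b
        p = 1.0 / (1.0 + math.exp(-z))          # predicted P(modality = text)
        p = min(max(p, 1e-8), 1.0 - 1e-8)       # clamp for numerical safety
        loss += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return loss / len(feats)
```

In an adversarial setup the discriminator minimizes `discriminator_loss` while the encoders minimize `matching_loss` minus it, so the features that match well across modalities are also the ones the discriminator cannot tell apart.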
Pages: 5813-5823
Page count: 11
Related Papers
50 records in total
  • [1] Semantic Distance Adversarial Learning for Text-to-Image Synthesis
    Yuan, Bowen
    Sheng, Yefei
    Bao, Bing-Kun
    Chen, Yi-Ping Phoebe
    Xu, Changsheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 1255 - 1266
  • [2] Adversarial text-to-image synthesis: A review
    Frolov, Stanislav
    Hinz, Tobias
    Raue, Federico
    Hees, Joern
    Dengel, Andreas
    NEURAL NETWORKS, 2021, 144 : 187 - 209
  • [3] Cross-Modal Semantic Matching Generative Adversarial Networks for Text-to-Image Synthesis
    Tan, Hongchen
    Liu, Xiuping
    Yin, Baocai
    Li, Xin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 832 - 845
  • [4] Dual Adversarial Inference for Text-to-Image Synthesis
    Lao, Qicheng
    Havaei, Mohammad
    Pesaranghader, Ahmad
    Dutil, Francis
    Di Jorio, Lisa
    Fevens, Thomas
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 7566 - 7575
  • [5] Vision-Language Matching for Text-to-Image Synthesis via Generative Adversarial Networks
    Cheng, Qingrong
    Wen, Keyu
    Gu, Xiaodong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 7062 - 7075
  • [6] Bridge-GAN: Interpretable Representation Learning for Text-to-Image Synthesis
    Yuan, Mingkuan
    Peng, Yuxin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (11) : 4258 - 4268
  • [7] Generative adversarial text-to-image generation with style image constraint
    Wang, Zekang
    Liu, Li
    Zhang, Huaxiang
    Liu, Dongmei
    Song, Yu
    MULTIMEDIA SYSTEMS, 2023, 29 (06) : 3291 - 3303
  • [8] Perceptual Pyramid Adversarial Networks for Text-to-Image Synthesis
    Gao, Lianli
    Chen, Daiyuan
    Song, Jingkuan
    Xu, Xing
    Zhang, Dongxiang
    Shen, Heng Tao
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-19), 2019, : 8312 - 8319
  • [9] GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis
    Tao, Ming
    Bao, Bing-Kun
    Tang, Hao
    Xu, Changsheng
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 14214 - 14223