Adversarial Representation Learning for Text-to-Image Matching

Cited by: 174
Authors
Sarafianos, Nikolaos [1 ]
Xu, Xiang [1 ]
Kakadiaris, Ioannis A. [1 ]
Affiliations
[1] Univ Houston, Computat Biomed Lab, Houston, TX 77004 USA
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019) | 2019
DOI
10.1109/ICCV.2019.00591
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
For many computer vision applications such as image captioning, visual question answering, and person search, learning discriminative feature representations at both image and text level is an essential yet challenging problem. Its challenges originate from the large word variance in the text domain as well as the difficulty of accurately measuring the distance between the features of the two modalities. Most prior work focuses on the latter challenge, by introducing loss functions that help the network learn better feature representations but fail to account for the complexity of the textual input. With that in mind, we introduce TIMAM: a Text-Image Modality Adversarial Matching approach that learns modality-invariant feature representations using adversarial and cross-modal matching objectives. In addition, we demonstrate that BERT, a publicly-available language model that extracts word embeddings, can successfully be applied in the text-to-image matching domain. The proposed approach achieves state-of-the-art cross-modal matching performance on four widely-used publicly-available datasets resulting in absolute improvements ranging from 2% to 5% in terms of rank-1 accuracy.
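The adversarial matching idea described in the abstract can be sketched as follows: a modality discriminator tries to tell image features from text features, while the encoders are trained both to minimize a cross-modal matching loss and to fool the discriminator, pushing the two modalities toward a shared, modality-invariant space. The feature dimensions, random stand-in embeddings, and trade-off weight below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for encoder outputs: in TIMAM these would come from an
# image encoder and a BERT-based text encoder; here random vectors
# illustrate the losses only.
D = 8
img_feats = rng.normal(size=(4, D))  # one image embedding per pair
txt_feats = rng.normal(size=(4, D))  # the matching text embedding per pair

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def matching_loss(img, txt):
    """Cross-modal matching term: pull corresponding image/text pairs
    together (mean of 1 - cosine similarity over matching rows)."""
    img, txt = l2_normalize(img), l2_normalize(txt)
    cos = np.sum(img * txt, axis=1)
    return float(np.mean(1.0 - cos))

def modality_discriminator_loss(img, txt, w, b):
    """Logistic discriminator classifying image (label 1) vs. text
    (label 0) features; standard binary cross-entropy."""
    feats = np.vstack([img, txt])
    labels = np.concatenate([np.ones(len(img)), np.zeros(len(txt))])
    logits = feats @ w + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    return float(-np.mean(labels * np.log(probs + eps)
                          + (1.0 - labels) * np.log(1.0 - probs + eps)))

# Hypothetical discriminator parameters (in practice a trained network).
w = rng.normal(size=D)
b = 0.0
lam = 0.5  # adversarial trade-off weight; an assumed value

L_match = matching_loss(img_feats, txt_feats)
L_disc = modality_discriminator_loss(img_feats, txt_feats, w, b)

# The encoders minimize the matching loss while *maximizing* the
# discriminator's error, which encourages modality-invariant features.
L_encoder = L_match - lam * L_disc
```

In a full training loop, the discriminator and the encoders are updated in alternation (or via a gradient-reversal layer), with the opposing signs on the discriminator term producing the adversarial game.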
Pages: 5813-5823
Page count: 11
Related papers
50 records total
  • [31] Zhang, Zhenxing; Schomaker, Lambert. DTGAN: Dual Attention Generative Adversarial Networks for Text-to-Image Generation. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021.
  • [32] Ma, Yue; Liu, Li; Zhang, Huaxiang; Wang, Chunjing; Wang, Zekang. Generative adversarial network based on semantic consistency for text-to-image generation. APPLIED INTELLIGENCE, 2023, 53 (04): 4703-4716.
  • [33] Costa, Victor; Lourenco, Nuno; Correia, Joao; Machado, Penousal. Exploring Generative Adversarial Networks for Text-to-Image Generation with Evolution Strategies. PROCEEDINGS OF THE 2023 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2023 COMPANION, 2023: 271-274.
  • [34] Huang, Siyue; Chen, Ying. Generative Adversarial Networks with Adaptive Semantic Normalization for text-to-image synthesis. DIGITAL SIGNAL PROCESSING, 2022, 120.
  • [35] Zhou, Rui; Jiang, Cong; Xu, Qingyang. A survey on generative adversarial network-based text-to-image synthesis. NEUROCOMPUTING, 2021, 451: 316-336.
  • [36] Liu, Han; Wu, Yuhao; Zhai, Shixuan; Yuan, Bo; Zhang, Ning. RIATIG: Reliable and Imperceptible Adversarial Text-to-Image Generation with Natural Prompts. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 20585-20594.
  • [37] Mathesul, Shubham; Bhutkar, Ganesh; Rambhad, Ayush. AttnGAN: Realistic Text-to-Image Synthesis with Attentional Generative Adversarial Networks. SENSE, FEEL, DESIGN, INTERACT 2021, 2022, 13198: 397-403.
  • [38] Li, Wenbo; Zhang, Pengchuan; Zhang, Lei; Huang, Qiuyuan; He, Xiaodong; Lyu, Siwei; Gao, Jianfeng. Object-driven Text-to-Image Synthesis via Adversarial Training. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 12166-12174.
  • [39] Alaluf, Yuval; Richardson, Elad; Metzer, Gal; Cohen-Or, Daniel. A Neural Space-Time Representation for Text-to-Image Personalization. ACM TRANSACTIONS ON GRAPHICS, 2023, 42 (06).
  • [40] Kang, Minsoo; Lee, Doyup; Kim, Jiseob; Kim, Saehoon; Han, Bohyung. Variational Distribution Learning for Unsupervised Text-to-Image Generation. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 23380-23389.