Multi-Modal Transformer With Global-Local Alignment for Composed Query Image Retrieval

Cited by: 18
Authors
Xu, Yahui [1]
Bin, Yi [1]
Wei, Jiwei [1]
Yang, Yang [1,2,3]
Wang, Guoqing [1]
Shen, Heng Tao [4,5]
Affiliations
[1] Univ Elect Sci & Technol China, Ctr Future Media, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[3] Univ Elect Sci & Technol China, Inst Elect & Informat Engn, Chengdu 523808, Guangdong, Peoples R China
[4] Univ Elect Sci & Technol China, Ctr Future Multimedia, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[5] Peng Cheng Lab, Shenzhen 518066, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Transformers; Image retrieval; Visualization; Task analysis; Feature extraction; Bit error rate; Fuses; Transformer; composed query image retrieval; local alignment; spatial attention; multi-modal learning; NETWORK;
DOI
10.1109/TMM.2023.3235495
CLC Number
TP [automation and computer technology];
Discipline Code
0812;
Abstract
In this paper, we study composed query image retrieval, which aims to retrieve the target image matching a composed query, i.e., a reference image together with text describing the desired modification. Compared with conventional image retrieval, this task is more challenging because it requires not only precisely aligning the composed query and the target image in a common embedding space, but also simultaneously extracting the relevant information from the reference image and the modification text. To extract this information, existing methods usually embed the vision and language inputs with different feature encoders, e.g., a CNN for images and an LSTM/BERT for text, and then employ a complicated, manually designed composition module to learn the joint image-text representation. However, the architectural discrepancy between the feature encoders restricts rich vision-language interaction, and overly complicated composition designs can significantly hamper the generalization ability of the model. To tackle these problems, we propose a new framework, termed ComqueryFormer, which processes the composed query entirely with Transformers. Specifically, to eliminate the architectural discrepancy, we leverage a unified transformer-based architecture to homogeneously encode the vision and language inputs. Instead of a complicated composition module, a simple yet effective cross-modal transformer hierarchically fuses the composed query at multiple visual scales. In addition, we introduce an efficient global-local alignment module that narrows the distance between the composed query and the target image: it not only considers the divergence in the global joint embedding space but also forces the model to focus on local detail differences. Extensive experiments on three real-world datasets demonstrate the superiority of ComqueryFormer.
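To make the two mechanisms described in the abstract concrete, the following PyTorch sketch illustrates one plausible reading of the design: a unified transformer encodes both modalities, a cross-modal attention block fuses text tokens into the image tokens at each visual scale, and a global-local loss combines a batch-wise contrastive term over pooled embeddings with a token-level alignment term. All module choices, dimensions, helper names (CrossModalFusion, ComqueryFormerSketch, global_local_loss), and the equal loss weighting are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalFusion(nn.Module):
    # Fuses text tokens into image tokens with cross-attention (assumed design).
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, txt_tokens):
        fused, _ = self.attn(query=img_tokens, key=txt_tokens, value=txt_tokens)
        return self.norm(img_tokens + fused)

class ComqueryFormerSketch(nn.Module):
    # Simplification: a shared token width across scales; real hierarchical
    # encoders typically change dimensionality between stages.
    def __init__(self, dim: int = 256, num_scales: int = 3):
        super().__init__()
        # Stand-ins for the unified transformer encoders of both modalities.
        self.img_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 8, batch_first=True), num_layers=4)
        self.txt_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 8, batch_first=True), num_layers=4)
        # One fusion block per visual scale (hierarchical fusion).
        self.fusion = nn.ModuleList(CrossModalFusion(dim) for _ in range(num_scales))

    def encode_query(self, img_tokens_per_scale, txt_tokens):
        txt = self.txt_encoder(txt_tokens)
        fused_scales = [fuse(self.img_encoder(tokens), txt)
                        for tokens, fuse in zip(img_tokens_per_scale, self.fusion)]
        # Global query embedding: mean-pool the finest fused scale.
        return fused_scales, fused_scales[-1].mean(dim=1)

def global_local_loss(query_global, target_global, query_local, target_local, tau=0.07):
    # Global term: batch-wise contrastive loss over pooled embeddings.
    q = F.normalize(query_global, dim=-1)
    t = F.normalize(target_global, dim=-1)
    logits = q @ t.T / tau                         # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    global_term = F.cross_entropy(logits, labels)
    # Local term: align each query token with its best-matching target token,
    # pushing the model to attend to local detail differences.
    sim = torch.einsum('bqd,btd->bqt',
                       F.normalize(query_local, dim=-1),
                       F.normalize(target_local, dim=-1))
    local_term = (1.0 - sim.max(dim=-1).values).mean()
    return global_term + local_term                # equal weighting is an assumption

# Toy usage: three visual scales of image tokens for a batch of 2 queries.
# scales = [torch.randn(2, n, 256) for n in (49, 196, 784)]
# txt = torch.randn(2, 12, 256)
# local_feats, global_feat = ComqueryFormerSketch().encode_query(scales, txt)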
Pages: 8346-8357
Page count: 12