Enhance Composed Image Retrieval via Multi-Level Collaborative Localization and Semantic Activeness Perception

Times Cited: 1
Authors
Zhang, Gangjian [1 ,2 ]
Wei, Shikui [1 ,2 ]
Pang, Huaxin [1 ,2 ]
Qiu, Shuang [3 ]
Zhao, Yao [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Beijing Key Lab Adv Informat Sci & Network Technol, Beijing 100044, Peoples R China
[3] Taiyuan Univ Technol, Coll Data Sci, Taiyuan 030600, Peoples R China
Keywords
Semantics; Location awareness; Task analysis; Image retrieval; Training; Collaboration; Transformers; Composed image retrieval; multi-modal fusion and embedding; multi-modal representation learning; multi-modal retrieval
DOI
10.1109/TMM.2023.3273466
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
Composed image retrieval (CIR) is an emerging and challenging research task that combines two modalities, a reference image and a modification text, into a single query to retrieve the target image. In online shopping scenarios, the user provides the modification text as feedback describing the difference between the reference image and the desired image. To handle this task, two main problems must be addressed. One is the localization problem: how to precisely find the spatial areas of the image mentioned by the text. The other is the modification problem: how to effectively modify the image semantics based on the text. However, existing methods merely fuse information from the two modalities coarsely, and the accurate spatial and semantic correspondence between these heterogeneous features tends to be neglected; as a result, image details cannot be precisely located and modified. To this end, we integrate information from the two modalities more accurately in both the spatial and semantic aspects. We propose an end-to-end framework for the CIR task with three key components: a Multi-level Collaborative Localization module (MCL), a Differential Semantics Discrimination module (DSD), and Image Difference Enhancement constraints (IDE). Specifically, to solve the localization problem, MCL precisely grounds the text in the image areas by collaboratively using text positioning information across multiple image feature layers. For the modification problem, DSD builds a distribution that evaluates how likely each dimension of the image semantics is to be modified, and IDE learns the modification patterns that the text imposes on the image embedding based on this distribution. Extensive experiments on three datasets show that the proposed method outperforms state-of-the-art methods.
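To make the per-dimension modification idea concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation; the module name ModificationGate, the embedding size, and the gating formulation are illustrative assumptions). It predicts a modification probability for each dimension of the reference-image embedding from the joint image-text features, then blends the original and text-modified semantics accordingly, in the spirit of the DSD/IDE description above.

```python
import torch
import torch.nn as nn

class ModificationGate(nn.Module):
    """Hypothetical sketch: estimate, per embedding dimension, how likely that
    dimension of the reference-image embedding should be changed by the
    modification text, then blend original and modified semantics."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Scores each semantic dimension from the concatenated image/text features.
        self.score = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )
        # Proposes new semantics for the dimensions that should change.
        self.modify = nn.Linear(2 * dim, dim)

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([img_emb, txt_emb], dim=-1)
        p_mod = torch.sigmoid(self.score(joint))   # modification probability per dimension
        modified = self.modify(joint)              # candidate modified semantics
        # Keep unmentioned dimensions, rewrite the ones the text targets.
        return (1.0 - p_mod) * img_emb + p_mod * modified

# Usage: compose a batch of reference-image and modification-text embeddings
# into query embeddings that can be matched against target-image embeddings.
gate = ModificationGate(dim=512)
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
query = gate(img, txt)   # composed query embedding, shape (8, 512)
```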
Pages: 916-928
Number of pages: 13