Multi-Granularity Matching Transformer for Text-Based Person Search

Cited by: 5
Authors
Bao, Liping [1 ]
Wei, Longhui [2 ]
Zhou, Wengang [1 ]
Liu, Lin [1 ]
Xie, Lingxi [3 ]
Li, Houqiang [1 ]
Tian, Qi [3 ]
Affiliations
[1] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Hefei 230027, Peoples R China
[2] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Hefei 230027, Peoples R China
[3] Huawei Cloud, Shenzhen 518129, Peoples R China
Keywords
Transformers; Feature extraction; Task analysis; Pedestrians; Visualization; Search problems; Training; Text-based person search; transformer; vision-language pre-trained model; REIDENTIFICATION; ALIGNMENT;
DOI
10.1109/TMM.2023.3321504
Chinese Library Classification
TP [automation technology; computer technology]
Discipline code
0812
Abstract
Text-based person search aims to retrieve the most relevant pedestrian images from an image gallery given a textual description. Most existing methods rely on two separate encoders to extract image and text features, and then elaborately design various schemes to bridge the gap between the image and text modalities. However, the shallow cross-modal interaction in these methods remains insufficient to eliminate the modality gap. To address this problem, we propose TransTPS, a transformer-based framework that enables deeper interaction between the two modalities through the transformer's self-attention mechanism, effectively alleviating the modality gap. In addition, because the image modality exhibits small inter-class variance and large intra-class variance, we further develop two techniques to overcome these limitations. Specifically, Cross-modal Multi-Granularity Matching (CMGM) is proposed to address the problem caused by small inter-class variance and to facilitate distinguishing pedestrians with similar appearance. Besides, Contrastive Loss with Weakly Positive pairs (CLWP) is introduced to mitigate the impact of large intra-class variance and to enable the retrieval of more target images. Experiments on the CUHK-PEDES and RSTPReid datasets demonstrate that our proposed framework achieves state-of-the-art performance compared with previous methods.
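The abstract describes CLWP only at a high level: for a given text query, gallery images that share the query's identity but are not the annotated match are treated as down-weighted "weak" positives rather than as negatives. A minimal NumPy sketch of one plausible soft-target formulation follows; the function name `clwp_loss`, the soft-target construction, and the hyperparameters (`weak_weight`, `tau`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clwp_loss(sim, labels, weak_weight=0.5, tau=0.07):
    """Illustrative contrastive loss with weakly positive pairs.

    sim    : (N, N) image-text similarity matrix, sim[i, j] = <img_i, txt_j>;
             the diagonal holds the annotated (strong) positive pairs.
    labels : (N,) identity labels; off-diagonal pairs that share an identity
             are treated as weak positives instead of negatives.
    """
    n = sim.shape[0]
    logits = sim / tau
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability

    # Soft target distribution over images for each text query j:
    # weak_weight mass on same-identity images, full mass on the match.
    targets = np.zeros((n, n))
    for j in range(n):
        targets[labels == labels[j], j] = weak_weight
        targets[j, j] = 1.0
        targets[:, j] /= targets[:, j].sum()

    # Cross-entropy between soft targets and the softmax over gallery images.
    logp = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return float(-(targets * logp).sum() / n)
```

With `weak_weight > 0` the loss also penalizes embeddings that push same-identity pairs apart, which is the stated goal of retrieving more target images per query; setting `weak_weight=0` recovers a standard InfoNCE-style loss that treats them as negatives.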
Pages: 4281-4293
Number of pages: 13
Cited references
58 records in total
[11]  
Farooq A, 2022, AAAI CONF ARTIF INTE, P4477
[12]  
Gao CY, 2021, Arxiv, DOI arXiv:2101.03036
[13]   LAG-Net: Multi-Granularity Network for Person Re-Identification via Local Attention System [J].
Gong, Xun ;
Yao, Zu ;
Li, Xin ;
Fan, Yueqiao ;
Luo, Bin ;
Fan, Jianfeng ;
Lao, Boji .
IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 :217-229
[14]  
Hadsell R., 2006, IEEE COMP SOC C COMP
[15]  
Han X., 2021, PROC BRIT MACH VIS C, P1
[16]   Deep Residual Learning for Image Recognition [J].
He, Kaiming ;
Zhang, Xiangyu ;
Ren, Shaoqing ;
Sun, Jian .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :770-778
[17]   TransReID: Transformer-based Object Re-Identification [J].
He, Shuting ;
Luo, Hao ;
Wang, Pichao ;
Wang, Fan ;
Li, Hao ;
Jiang, Wei .
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, :14993-15002
[18]  
Henaff OJ, 2020, PR MACH LEARN RES, V119
[19]   ASMR: Learning Attribute-Based Person Search with Adaptive Semantic Margin Regularizer [J].
Jeong, Boseung ;
Park, Jicheol ;
Kwak, Suha .
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, :11996-12005
[20]  
Jia C, 2021, PR MACH LEARN RES, V139