Learning Granularity-Unified Representations for Text-to-Image Person Re-identification

Cited by: 73
Authors
Shao, Zhiyin [1 ]
Zhang, Xinyu [2 ]
Fang, Meng [3 ]
Lin, Zhifeng [1 ]
Wang, Jian [2 ]
Ding, Changxing [1 ]
Affiliations
[1] South China Univ Technol, Guangzhou, Peoples R China
[2] Baidu VIS, Beijing, Peoples R China
[3] Univ Liverpool, Liverpool, Merseyside, England
Source
PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022 | 2022
Funding
National Natural Science Foundation of China;
Keywords
Person Re-identification; Text-to-image Retrieval;
DOI
10.1145/3503161.3548028
CLC Number
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
Text-to-image person re-identification (ReID) aims to search for pedestrian images of a queried identity via textual descriptions. It is challenging due to both rich intra-modal variations and significant inter-modal gaps. Existing works usually ignore the difference in feature granularity between the two modalities, i.e., visual features are usually fine-grained while textual features are coarse, and this mismatch is largely responsible for the inter-modal gap. In this paper, we propose an end-to-end transformer-based framework, denoted LGUR, that learns granularity-unified representations for both modalities. The LGUR framework contains two modules: a Dictionary-based Granularity Alignment (DGA) module and a Prototype-based Granularity Unification (PGU) module. In DGA, to align the granularities of the two modalities, we introduce a Multi-modality Shared Dictionary (MSD) to reconstruct both visual and textual features. In addition, DGA incorporates two key designs, i.e., cross-modality guidance and foreground-centric reconstruction, to facilitate the optimization of the MSD. In PGU, we adopt a set of shared, learnable prototypes as queries to extract diverse and semantically aligned features for both modalities in the granularity-unified feature space, which further improves ReID performance. Comprehensive experiments show that LGUR consistently outperforms state-of-the-art methods by large margins on both the CUHK-PEDES and ICFG-PEDES datasets. Code will be released at https://github.com/ZhiyinShao-H/LGUR.
Pages: 5566 - 5574
Number of pages: 9
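For orientation, below is a minimal sketch of the two ideas the abstract names: DGA-style reconstruction of modality tokens from a shared dictionary, and PGU-style pooling with learnable prototype queries. It assumes PyTorch-style multi-head cross-attention; the module names, feature dimension (384), dictionary size (400), and prototype count (6) are illustrative assumptions, not the authors' implementation, which is available at the linked repository.

```python
import torch
import torch.nn as nn


class SharedDictionaryReconstruction(nn.Module):
    """Sketch of DGA's core step: rebuild token features as attention-weighted
    sums of atoms from a Multi-modality Shared Dictionary (MSD), so visual and
    textual tokens are expressed in one shared vocabulary (sizes assumed)."""

    def __init__(self, dim=384, num_atoms=400):
        super().__init__()
        # Dictionary atoms shared across both modalities.
        self.atoms = nn.Parameter(torch.randn(num_atoms, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, tokens):  # tokens: (B, N, dim)
        atoms = self.atoms.unsqueeze(0).expand(tokens.size(0), -1, -1)
        # Queries are the modality tokens; keys/values are dictionary atoms,
        # so each output token is a reconstruction from shared atoms.
        recon, _ = self.attn(tokens, atoms, atoms)
        return recon


class PrototypeUnification(nn.Module):
    """Sketch of PGU: a set of shared, learnable prototypes act as queries
    that pool semantically aligned features from either modality's tokens."""

    def __init__(self, dim=384, num_prototypes=6):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, tokens):  # tokens: (B, N, dim)
        q = self.prototypes.unsqueeze(0).expand(tokens.size(0), -1, -1)
        out, _ = self.attn(q, tokens, tokens)  # prototypes attend to tokens
        return out  # (B, num_prototypes, dim): granularity-unified features


if __name__ == "__main__":
    dga, pgu = SharedDictionaryReconstruction(), PrototypeUnification()
    visual = torch.randn(2, 48, 384)   # e.g. image patch tokens
    textual = torch.randn(2, 64, 384)  # e.g. word tokens
    print(pgu(dga(visual)).shape, pgu(dga(textual)).shape)  # both (2, 6, 384)
```

Because both modalities pass through the same dictionary and the same prototype queries, their outputs land in a common, fixed-size feature space, which is the granularity unification the abstract describes.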