Learning compact and representative features for cross-modality person re-identification

Cited by: 16
Authors
Gao, Guangwei [1 ,2 ]
Shao, Hao [1 ]
Wu, Fei [1 ]
Yang, Meng [3 ]
Yu, Yi [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Inst Adv Technol, Nanjing, Peoples R China
[2] Natl Inst Informat, Digital Content & Media Sci Res Div, Tokyo, Japan
[3] Sun Yat Sen Univ, Key Lab Machine Intelligence & Adv Comp, Minist Educ, Guangzhou, Peoples R China
Source
WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS | 2022, Vol. 25, No. 4
Funding
National Natural Science Foundation of China;
Keywords
Person re-identification; Cross-modality; Angular triplet loss; Knowledge distillation loss;
DOI
10.1007/s11280-022-01014-5
CLC number
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
This paper focuses on the cross-modality visible-infrared person re-identification (VI Re-ID) task, which aims to match pedestrian samples between the visible and infrared modalities. To reduce the modality discrepancy between samples from different cameras, most existing works impose constraints based on the Euclidean metric. Because a Euclidean distance metric cannot effectively measure the angles between embedded vectors, these solutions cannot learn an angularly discriminative feature embedding. Since the most important factor in embedding-based classification is whether the feature space is angularly discriminative, we present a new loss function called the Enumerate Angular Triplet (EAT) loss. In addition, motivated by knowledge distillation, we present a novel Cross-Modality Knowledge Distillation (CMKD) loss that narrows the gap between features of different modalities before feature embedding. Benefiting from these two components, the embedded features are discriminative enough to tackle the modality-discrepancy problem. Experimental results on the RegDB and SYSU-MM01 datasets demonstrate that the proposed method outperforms other state-of-the-art methods. Code is available at https://github.com/IVIPLab/LCCRF.
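The abstract contrasts Euclidean triplet constraints with an angle-based one. The paper's exact EAT formulation is not given in this record; the sketch below only illustrates the general idea of a triplet loss on angles between embeddings (the function names and the margin value are hypothetical, not from the paper).

```python
import math

def _cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def angular_triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge loss on angles: require angle(anchor, positive) to be
    smaller than angle(anchor, negative) by at least `margin` radians."""
    # Clamp to [-1, 1] to guard acos against floating-point drift.
    ang_ap = math.acos(max(-1.0, min(1.0, _cosine(anchor, positive))))
    ang_an = math.acos(max(-1.0, min(1.0, _cosine(anchor, negative))))
    return max(0.0, ang_ap - ang_an + margin)
```

Unlike a Euclidean triplet loss, this constraint is invariant to the norms of the embeddings, which is the property the abstract argues Euclidean metrics fail to capture.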
Pages: 1649-1666
Page count: 18