Multi-granularity feature utilization network for cross-modality visible-infrared person re-identification

Times Cited: 1
Authors
Zhang, Guoqing [1 ,2 ]
Zhang, Yinyin [1 ]
Chen, Yuhao [1 ]
Zhang, Hongwei [1 ]
Zheng, Yuhui [1 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp Sci, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Engn Res Ctr Digital Forens, Minist Educ, Nanjing 210044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cross-modality; Person re-identification; Multi-granularity; Heterogeneous center loss;
DOI
10.1007/s00500-023-08321-7
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Cross-modality visible-infrared person re-identification (VI-ReID) aims to match images of the same identity across the visible and infrared modalities. It is a highly challenging task: beyond the cross-camera variations found in traditional person ReID, it also suffers from the large discrepancy between the two modalities. Some existing VI-ReID methods extract modality-shared information from global features through single-stream or two-stream networks, but ignore the complementarity between fine-grained and coarse-grained information. To address this problem, we design a multi-granularity feature utilization network (MFUN) that compensates for the lack of shared features between modalities by promoting the complementarity between coarse-grained and fine-grained features. First, to better learn fine-grained shared features, we design a local feature constraint module that applies both a hard-mining triplet loss and a heterogeneous center loss to local features in the common subspace, promoting intra-class compactness and inter-class separability at both the sample level and the class-center level. Then, for global features, our method uses a multi-modality feature aggregation module to fuse the information of the two modalities and narrow the modality gap. By combining these two modules, visible and infrared image features can be fused more effectively, alleviating the modality discrepancy and supplementing the missing modality-shared information. Extensive experiments on the RegDB and SYSU-MM01 datasets show that the proposed MFUN outperforms state-of-the-art solutions. Our code is available at https://github.com/ZhangYinyinzzz/MFUN.
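The two losses named in the abstract's local feature constraint module can be sketched as follows. This is a minimal NumPy illustration based only on the abstract's description, not the authors' implementation; the function names, the squared-center-distance form of the heterogeneous center loss, and the default margin are assumptions for illustration.

```python
import numpy as np

def hetero_center_loss(feats, labels, modalities):
    """Heterogeneous center loss (sketch): for each identity, pull the center of
    its visible features and the center of its infrared features together.
    modalities: 0 = visible, 1 = infrared."""
    loss, ids = 0.0, np.unique(labels)
    for pid in ids:
        vis = feats[(labels == pid) & (modalities == 0)]
        ir = feats[(labels == pid) & (modalities == 1)]
        if len(vis) and len(ir):
            loss += np.sum((vis.mean(axis=0) - ir.mean(axis=0)) ** 2)
    return loss / len(ids)

def hard_mining_triplet_loss(feats, labels, margin=0.3):
    """Batch-hard triplet loss (sketch): for each anchor, take the farthest
    positive and the nearest negative; assumes >= 2 samples per identity."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    n, loss = len(feats), 0.0
    for i in range(n):
        pos = d[i][(labels == labels[i]) & (np.arange(n) != i)]
        neg = d[i][labels != labels[i]]
        loss += max(0.0, pos.max() - neg.min() + margin)
    return loss / n
```

Constraining local features with both losses penalizes modality discrepancy at two granularities: the triplet term operates on individual samples, while the center term aligns the visible and infrared class centers of each identity.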
Pages: 14