Learning dual attention enhancement feature for visible-infrared person re-identification

Cited by: 2
Authors
Zhang, Guoqing [1 ,2 ]
Zhang, Yinyin [1 ]
Zhang, Hongwei [1 ]
Chen, Yuhao [1 ]
Zheng, Yuhui [1 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp Sci, Nanjing 210044, Peoples R China
[2] Nanjing Univ Sci & Technol, Jiangsu Key Lab Image & Video Understanding Social, Nanjing 210094, Peoples R China
Keywords
Cross-modality; Person re-identification; Shallow feature; Network
DOI
10.1016/j.jvcir.2024.104076
CLC Number
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
Most previous visible-infrared person re-identification methods emphasize learning modality-shared features to narrow the modality difference, while neglecting the benefits of modality-specific features for feature embedding and for closing the modality gap. To tackle this issue, we design a method based on dual attention enhancement features that exploits shallow and deep features simultaneously. We first convert visible images into grayscale images to alleviate the visual discrepancy. Then, to narrow the gap between modalities by learning modality-specific features, we design a shallow feature measurement module, in which a class-specific maximum mean discrepancy loss measures the distribution difference of the specific features between the two modalities. Finally, we design a dual attention feature enhancement module, which mines more useful context information from modality-shared features to shorten intra-class distances within each modality. Notably, our model surpasses current state-of-the-art methods on SYSU-MM01, achieving 66.61% Rank-1 accuracy and 62.86% mAP.
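The class-specific maximum mean discrepancy (MMD) loss mentioned in the abstract can be illustrated with a minimal sketch. It assumes the loss is computed per identity between visible and infrared feature batches using an RBF kernel; the function names, the kernel bandwidth sigma, and the per-class averaging scheme are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel between feature batches of shape (n, d) and (m, d).
    dist2 = torch.cdist(a, b) ** 2
    return torch.exp(-dist2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Squared maximum mean discrepancy between two sets of features.
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

def class_specific_mmd_loss(vis_feats, vis_labels, ir_feats, ir_labels, sigma=1.0):
    # Hypothetical sketch: average the MMD over identities present in both
    # modalities, aligning per-identity feature distributions across modalities.
    losses = []
    for c in vis_labels.unique():
        v = vis_feats[vis_labels == c]
        r = ir_feats[ir_labels == c]
        if v.numel() > 0 and r.numel() > 0:
            losses.append(mmd2(v, r, sigma))
    return torch.stack(losses).mean() if losses else vis_feats.new_zeros(())
```

In a typical training setup such a term would presumably be added, with a weighting coefficient, to the usual identification and metric-learning losses; the exact combination used by the authors is not specified in the abstract.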
Pages: 10