Diffusion Augmentation and Pose Generation Based Pre-Training Method for Robust Visible-Infrared Person Re-Identification

Cited by: 1
Authors
Sun, Rui [1 ]
Huang, Guoxi [2 ]
Xie, Ruirui [2 ]
Wang, Xuebin [2 ]
Chen, Long [2 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp & Informat, Anhui Prov Key Lab Ind Safety & Emergency Technol, Key Lab Knowledge Engn Big Data,Minist Educ, Hefei 230009, Peoples R China
[2] Hefei Univ Technol, Sch Comp & Informat, Anhui Prov Key Lab Ind Safety & Emergency Technol, Hefei 230009, Anhui, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Person re-identification; visible-infrared; self-supervised; corruption robustness; pre-training
D O I
10.1109/LSP.2024.3466792
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes
0808; 0809
Abstract
Cross-Modal Visible-Infrared Person Re-identification (VI-REID) is a vital component of all-time surveillance systems. However, current VI-REID models suffer significant performance deterioration in noisy environments. Existing algorithms attempt to mitigate this problem during the fine-tuning stage. We contend that, in contrast to fine-tuning, the pre-training phase can better exploit the properties of extensive unlabeled data, facilitating the development of a robust VI-REID model. Therefore, in this paper, we propose a pre-training method for VI-REID based on Diffusion Augmentation and Pose Generation (DAPG), aiming to enhance the robustness and recognition rate of VI-REID models on corrupted scenes. Multiple transfer experiments on the SYSU-MM01 and RegDB datasets demonstrate that our method outperforms existing self-supervised methods.
Pages: 2670-2674 (5 pages)