Adaptive Generation of Privileged Intermediate Information for Visible-Infrared Person Re-Identification

Cited by: 0
Authors
Alehdaghi, Mahdi [1 ]
Josi, Arthur [1 ]
Cruz, Rafael M. O. [1 ]
Shamsolmoali, Pourya [2 ]
Granger, Eric [1 ]
Affiliations
[1] LIVIA, ILLS, Dept. of Systems Engineering, École de technologie supérieure (ÉTS Montréal), Montreal, QC H3C 1K3, Canada
[2] Department of Computer Science, University of York, York YO10 5DD, UK
Keywords
Feature extraction; Training; Cameras; Bridges; Identification of persons; Generators; Data mining; Computational modeling; Adaptation models; Accuracy; Visible-infrared person re-identification; learning under privileged information; adaptive image generation
DOI
10.1109/TIFS.2025.3541969
CLC Number
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Visible-infrared person re-identification (V-I ReID) seeks to retrieve images of the same individual captured over a distributed network of RGB and IR sensors. Several V-I ReID approaches directly integrate the V and I modalities to represent images within a shared space. However, given the significant gap between the V and I data distributions, cross-modal V-I ReID remains challenging. One solution is to introduce a privileged intermediate space that bridges the modalities, but in practice such data is not available, so effective mechanisms for selecting or creating informative intermediate domains are required. This paper introduces the Adaptive Generation of Privileged Intermediate Information (AGPI²) training approach, which adapts and generates a virtual domain that bridges discriminative information between the V and I modalities. AGPI² enhances the training of a deep V-I ReID backbone by generating and then leveraging bridging privileged information, without modifying the model at inference. This information captures shared discriminative attributes that the model cannot easily ascertain from the individual V or I modalities alone. Toward this goal, a non-linear generative module is trained with adversarial objectives to transform V attributes into an intermediate space that also contains I features; the generated domain exhibits a smaller shift from the I domain than the V domain does. Meanwhile, the embedding module within AGPI² extracts discriminative modality-invariant features for both modalities by leveraging modality-free descriptors from the generated images, making them a bridge between the main modalities. Experiments on challenging V-I ReID datasets indicate that AGPI² consistently improves matching accuracy without requiring additional computational resources during inference.
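To make the abstract's two-module design concrete, the following is a minimal sketch of the training idea in PyTorch: an adversarially trained generator maps V images into an intermediate domain pushed toward the I distribution, while a shared embedding backbone learns identity features from V, I, and generated images. All module names (IntermediateGenerator, ModalityDiscriminator, agpi2_step), layer sizes, and loss weightings here are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of the AGPI2 training step; structure and names are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntermediateGenerator(nn.Module):
    """Non-linear module mapping visible (V) images into a virtual
    intermediate domain intended to sit closer to infrared (I)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x_v):
        return self.net(x_v)

class ModalityDiscriminator(nn.Module):
    """Outputs a logit scoring how infrared-like an image appears."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

def agpi2_step(gen, disc, backbone, classifier, x_v, x_i, y):
    """One training iteration: adversarial generation of intermediate
    images plus identity learning over V, I, and intermediate inputs."""
    x_z = gen(x_v)  # privileged intermediate images (training only)

    # Discriminator loss: real infrared vs. generated intermediate.
    real, fake = disc(x_i), disc(x_z.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
              + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))

    # Adversarial generator loss: push x_z toward the I distribution.
    adv = disc(x_z)
    g_adv = F.binary_cross_entropy_with_logits(adv, torch.ones_like(adv))

    # Identity loss: a shared backbone embeds V, I, and intermediate
    # images so the learned features become modality-invariant.
    feats = backbone(torch.cat([x_v, x_i, x_z], dim=0))
    id_loss = F.cross_entropy(classifier(feats), y.repeat(3))

    return d_loss, g_adv + id_loss  # stepped by separate optimizers

# Toy usage: 4 labeled V/I image pairs of size 128x64, 10 identities.
gen, disc = IntermediateGenerator(), ModalityDiscriminator()
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
classifier = nn.Linear(16, 10)
x_v, x_i = torch.randn(4, 3, 128, 64), torch.randn(4, 3, 128, 64)
y = torch.randint(0, 10, (4,))
d_loss, g_loss = agpi2_step(gen, disc, backbone, classifier, x_v, x_i, y)
```

At inference, only the embedding backbone (and, if needed, the classifier) would be kept, consistent with the abstract's claim that no extra computation is added at test time.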
Pages: 3400-3413 (14 pages)
Related Papers (50 records in total)
  • [1] Pan, Honghu; Pei, Wenjie; Li, Xin; He, Zhenyu. Unified Conditional Image Generation for Visible-Infrared Person Re-Identification. IEEE Transactions on Information Forensics and Security, 2024, 19: 9026-9038.
  • [2] Feng, Yujian; Ji, Yimu; Wu, Fei; Gao, Guangwei; Gao, Yang; Liu, Tianliang; Liu, Shangdong; Jing, Xiao-Yuan; Luo, Jiebo. Occluded Visible-Infrared Person Re-Identification. IEEE Transactions on Multimedia, 2023, 25: 1401-1413.
  • [3] Qi, Mengzan; Chan, Sixian; Hang, Chen; Zhang, Guixu; Zeng, Tieyong; Li, Zhi. Auxiliary Representation Guided Network for Visible-Infrared Person Re-Identification. IEEE Transactions on Multimedia, 2025, 27: 340-355.
  • [4] Lu, Zefeng; Lin, Ronghao; Hu, Haifeng. Tri-Level Modality-Information Disentanglement for Visible-Infrared Person Re-Identification. IEEE Transactions on Multimedia, 2024, 26: 2700-2714.
  • [5] Zhang, Yiyuan; Kang, Yuhao; Zhao, Sanyuan; Shen, Jianbing. Dual-Semantic Consistency Learning for Visible-Infrared Person Re-Identification. IEEE Transactions on Information Forensics and Security, 2023, 18: 1554-1565.
  • [6] Wang, Jiangcheng; Li, Yize; Tao, Xuefeng; Kong, Jun. Frequency domain adaptive framework for visible-infrared person re-identification. International Journal of Machine Learning and Cybernetics, 2024: 2553-2566.
  • [7] Zheng, Xiangtao; Chen, Xiumei; Lu, Xiaoqiang. Visible-Infrared Person Re-Identification via Partially Interactive Collaboration. IEEE Transactions on Image Processing, 2022, 31: 6951-6963.
  • [8] Zhang, La; Guo, Haiyun; Zhao, Xu; Sun, Jian; Wang, Jinqiao. Contrastive Learning with Information Compensation for Visible-Infrared Person Re-Identification. 2024 14th Asian Control Conference (ASCC 2024), 2024: 1266-1271.
  • [9] Liu, Min; Zhang, Zhu; Bian, Yuan; Wang, Xueping; Sun, Yeqing; Zhang, Baida; Wang, Yaonan. Cross-Modality Semantic Consistency Learning for Visible-Infrared Person Re-Identification. IEEE Transactions on Multimedia, 2025, 27: 568-580.
  • [10] Yang, Bin; Chen, Jun; Chen, Cuiqun; Ye, Mang. Dual Consistency-Constrained Learning for Unsupervised Visible-Infrared Person Re-Identification. IEEE Transactions on Information Forensics and Security, 2024, 19: 1767-1779.