Visible-infrared person re-identification model based on feature consistency and modal indistinguishability

Cited by: 7
Authors
Sun, Jia [1 ]
Li, Yanfeng [1 ]
Chen, Houjin [1 ]
Peng, Yahui [1 ]
Zhu, Jinlei [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect Informat Engn, Beijing 100044, Peoples R China
[2] Synth Elect Technol Co Ltd, Jinan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Person re-identification; Cross-modality; Feature consistency; Modal indistinguishability;
DOI
10.1007/s00138-022-01368-w
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Visible-infrared person re-identification (VI-ReID) aims to match person images across cameras of different modalities, addressing the limitation of visible-light ReID in dark environments. Intra-class discrepancy and feature-expression discrepancy caused by modality changes are two major problems in VI-ReID. To address these problems, a VI-ReID model based on feature consistency and modal indistinguishability is proposed. Specifically, image features of the two modalities are extracted by a one-stream network. To address intra-class discrepancy, a class-level central consistency loss is developed, which pulls the two modal feature centroids of the same identity toward their class centroid. To reduce the discrepancy in feature expression, a modal adversarial learning strategy is designed in which a discriminator judges whether features are consistent with their modality attributes. The generator aims to produce features whose modality the discriminator cannot distinguish, thereby reducing the modality discrepancy and improving modal indistinguishability. The generator network is optimized by the cross-entropy loss, the triplet loss, the proposed central consistency loss, and the modal adversarial loss; the discriminator network is optimized by the modal adversarial loss. Experiments on the SYSU-MM01 and RegDB datasets demonstrate that our method achieves better VI-ReID performance. The code has been released in .
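The two losses the abstract describes can be illustrated with a minimal PyTorch sketch. This is an assumption-laden reconstruction from the abstract alone, not the authors' released code: the exact formulation of the class-level central consistency loss and the discriminator architecture (`ModalityDiscriminator` below is a hypothetical name) may differ from the paper. The sketch assumes paired sampling, i.e. the same identity labels apply to the visible and infrared halves of the batch.

```python
import torch
import torch.nn as nn

def central_consistency_loss(vis_feats, ir_feats, labels):
    """Class-level central consistency loss (sketch): for each identity,
    pull the visible-modality and infrared-modality feature centroids
    toward their joint class centroid."""
    loss = torch.zeros((), dtype=vis_feats.dtype)
    ids = labels.unique()
    for pid in ids:
        v_c = vis_feats[labels == pid].mean(dim=0)   # visible centroid
        i_c = ir_feats[labels == pid].mean(dim=0)    # infrared centroid
        class_c = (v_c + i_c) / 2                    # joint class centroid
        loss = loss + (v_c - class_c).pow(2).sum() + (i_c - class_c).pow(2).sum()
    return loss / len(ids)

class ModalityDiscriminator(nn.Module):
    """Binary classifier predicting the modality (visible vs. infrared)
    of a feature vector; trained with cross-entropy on true modality
    labels, while the feature extractor is trained adversarially so
    that its features become modality-indistinguishable."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim // 2),
            nn.ReLU(),
            nn.Linear(dim // 2, 2),  # two modality classes
        )

    def forward(self, x):
        return self.net(x)
```

In the adversarial game the abstract outlines, the discriminator would minimize cross-entropy on the modality labels while the one-stream feature extractor (the "generator") receives a reversed or maximized version of that same loss, alongside the identity cross-entropy, triplet, and central consistency terms.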
Pages: 14