Fusion for Visual-Infrared Person ReID in Real-World Surveillance Using Corrupted Multimodal Data

Cited: 1
Authors
Josi, Arthur [1 ]
Alehdaghi, Mahdi [1 ]
Cruz, Rafael M. O. [1 ]
Granger, Eric [1 ]
Affiliations
[1] ETS Montreal, Laboratoire d'imagerie, de vision et d'intelligence artificielle (LIVIA), Montreal, QC, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Deep neural networks; Multimodal fusion; Corrupted images; Data augmentation; Visual-infrared person re-identification;
DOI
10.1007/s11263-025-02396-5
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Visible-infrared person re-identification (V-I ReID) seeks to match images of individuals captured over a distributed network of RGB and IR cameras. The task is challenging due to the significant differences between V and I modalities, especially under real-world conditions, where images face corruptions such as blur, noise, and weather. Despite their practical relevance, deep learning models for multimodal V-I ReID remain far less investigated than those for single-modal and cross-modal V-to-I settings. Moreover, state-of-the-art V-I ReID models cannot leverage corrupted modality information to sustain a high level of accuracy. In this paper, we propose an efficient model for multimodal V-I ReID, named Multimodal Middle Stream Fusion (MMSF), that preserves modality-specific knowledge for improved robustness to corrupted multimodal images. In addition, three state-of-the-art attention-based multimodal fusion models are adapted to address corrupted multimodal data in V-I ReID, allowing for dynamic balancing of the importance of each modality. The literature typically reports ReID performance using clean datasets, but more recently, evaluation protocols have been proposed to assess the robustness of ReID models under challenging real-world scenarios, using data with realistic corruptions. However, these protocols are limited to unimodal V settings. For realistic evaluation of multimodal (and cross-modal) V-I person ReID models, we propose new challenging corrupted datasets for scenarios where V and I cameras are co-located (CL) and not co-located (NCL). Finally, the benefits of our Masking and Local Multimodal Data Augmentation (ML-MDA) strategy are explored to improve the robustness of ReID models to multimodal corruption. Our experiments on clean and corrupted versions of the SYSU-MM01, RegDB, and ThermalWORLD datasets indicate which multimodal V-I ReID models are most likely to perform well in real-world operational conditions.
In particular, the proposed ML-MDA is shown to be essential for a V-I person ReID system to sustain high accuracy and robustness in the face of corrupted multimodal images. Our multimodal ReID models attain the best accuracy-complexity trade-off under both CL and NCL settings when compared to state-of-the-art unimodal ReID systems, except on the ThermalWORLD dataset due to its low-quality I modality. Our MMSF model outperforms every method under both CL and NCL camera scenarios. GitHub code: https://github.com/art2611/MREiD-UCD-CCD.git.
Pages: 4690-4711 (22 pages)