DiverseReID: Towards generalizable person re-identification via Dynamic Style Hallucination and decoupled domain experts

Times Cited: 0
Authors
Jia, Jieru
Xie, Huidi
Huang, Qin
Song, Yantao
Wu, Peng
Affiliations
[1] Shanxi Univ, Inst Big Data Sci & Ind, Taiyuan 030006, Peoples R China
[2] Shanxi Univ, Sch Comp & Informat Technol, Taiyuan, Peoples R China
[3] Shanxi Univ, Engn Res Ctr Machine Vis & Data Min Shanxi Prov, Taiyuan 030006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Person re-identification; Domain generalization; Data augmentation; Mixture of experts;
DOI
10.1016/j.neunet.2025.107602
CLC Number
TP18 [Artificial intelligence theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Person re-identification (re-ID) models often fail to generalize when deployed to other camera networks under domain shift. A classical domain generalization (DG) solution is to enhance the diversity of the source data so that a model can learn more domain-invariant, and hence more generalizable, representations. Existing methods typically mix images from different domains in a mini-batch to generate novel styles, but the mixing coefficient, sampled from a predefined Beta distribution, requires careful manual tuning and may yield suboptimal performance. To this end, we propose a plug-and-play Dynamic Style Hallucination (DSH) module that adaptively adjusts the mixing weights according to the style-distribution discrepancy between image pairs, measured dynamically as the reciprocal of the Wasserstein distance. This approach not only removes tedious manual parameter tuning but also significantly enriches style diversity by expanding the perturbation space as far as possible. In addition, to promote inter-domain diversity, we devise a Domain Experts Decoupling (DED) loss, which constrains features from one domain to be orthogonal to features from all other domains. The proposed approach, dubbed DiverseReID, is parameter-free and computationally efficient. Without bells and whistles, it outperforms the state of the art on various DG re-ID benchmarks. Experiments verify that style diversity, not merely the size of the training data, is crucial for enhancing generalization.
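The two components described in the abstract can be sketched in code. The exact formulation is not given in this record, so the following is a minimal illustrative sketch under stated assumptions: style is modeled by per-channel feature mean and standard deviation (as in MixStyle-type augmentation), the 2-Wasserstein distance between the two per-channel Gaussian style distributions has the closed form sqrt((mu1-mu2)^2 + (sig1-sig2)^2), and the mixing weight is a simple monotone function of its reciprocal. The DED sketch penalizes squared cosine similarity between per-domain mean features; the real loss may differ.

```python
import numpy as np

def dynamic_style_mix(x1, x2, eps=1e-6):
    """Hedged sketch of Dynamic Style Hallucination (DSH).

    Mixes the per-channel style statistics of feature map x1 with those
    of x2; the mixing weight is derived from the reciprocal of the
    Wasserstein distance between their style distributions.
    x1, x2: arrays of shape (C, H, W).
    """
    mu1, sig1 = x1.mean(axis=(1, 2)), x1.std(axis=(1, 2)) + eps
    mu2, sig2 = x2.mean(axis=(1, 2)), x2.std(axis=(1, 2)) + eps
    # 2-Wasserstein distance between per-channel Gaussian styles
    # N(mu1, sig1^2) and N(mu2, sig2^2), averaged over channels.
    w = np.sqrt((mu1 - mu2) ** 2 + (sig1 - sig2) ** 2).mean()
    lam = 1.0 / (1.0 + w)  # closer styles -> weight nearer 1 (assumption)
    mu_mix = lam * mu1 + (1.0 - lam) * mu2
    sig_mix = lam * sig1 + (1.0 - lam) * sig2
    # Strip x1's own style, then apply the hallucinated style.
    x1_norm = (x1 - mu1[:, None, None]) / sig1[:, None, None]
    return sig_mix[:, None, None] * x1_norm + mu_mix[:, None, None]

def domain_experts_decoupling(feats, domains):
    """Hedged sketch of the Domain Experts Decoupling (DED) loss:
    average squared cosine similarity between per-domain mean feature
    vectors, which is zero when domain means are mutually orthogonal.
    feats: (N, D) feature matrix; domains: (N,) domain labels.
    """
    means = [feats[domains == d].mean(axis=0) for d in np.unique(domains)]
    means = [m / (np.linalg.norm(m) + 1e-6) for m in means]
    loss, pairs = 0.0, 0
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            loss += float(means[i] @ means[j]) ** 2
            pairs += 1
    return loss / max(pairs, 1)
```

With identical inputs the Wasserstein distance is zero, the mixing weight is 1, and `dynamic_style_mix` returns the input unchanged; perfectly orthogonal domain means drive the DED sketch to zero.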
Pages: 11