Unsupervised Domain Adaptation for Person Re-Identification through Source-Guided Pseudo-Labeling

Cited by: 16
Authors
Dubourvieux, Fabian [1 ,2 ]
Audigier, Romaric [1 ]
Loesch, Angelique [1 ]
Ainouz, Samia [2 ]
Canu, Stephane [2 ]
Affiliations
[1] Univ Paris Saclay, CEA, List, F-91120 Palaiseau, France
[2] Normandie Univ, INSA Rouen, LITIS, Av Univ Madrillet, F-76801 St Etienne Du Rouvray, France
Source
2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR) | 2021
DOI
10.1109/ICPR48806.2021.9412964
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Person Re-Identification (re-ID) aims at retrieving images of the same person taken by different cameras. A key challenge for re-ID is preserving performance when a model is applied to data of interest (target data) that belong to a different domain than the training data (source data). Unsupervised Domain Adaptation (UDA) is an interesting research direction for this challenge, as it avoids costly annotation of the target data. Pseudo-labeling methods achieve the best results in UDA-based re-ID. They incrementally learn with identity pseudo-labels, which are initialized by clustering features in the source re-ID encoder space. Surprisingly, labeled source data are discarded after this initialization step. However, we believe that pseudo-labeling could further leverage the labeled source data in order to improve the post-initialization training steps. To improve robustness against erroneous pseudo-labels, we advocate exploiting both labeled source data and pseudo-labeled target data during all training iterations. To support this guideline, we introduce a framework which relies on a two-branch architecture optimizing classification and triplet-loss-based metric learning in the source and target domains, respectively, in order to allow adaptability to the target domain while ensuring robustness to noisy pseudo-labels. Indeed, the shared low- and mid-level parameters benefit from the source classification and triplet loss signals, while the high-level parameters of the target branch learn domain-specific features. Our method is simple enough to be easily combined with existing pseudo-labeling UDA approaches. We show experimentally that it is effective and improves performance when the base method has no mechanism to deal with pseudo-label noise. Our approach reaches state-of-the-art performance when evaluated on the commonly used datasets Market-1501 and DukeMTMC-reID, and outperforms the state of the art when targeting the bigger and more challenging MSMT17 dataset.
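The abstract describes a concrete training scheme; a minimal PyTorch-style sketch may make it easier to picture. It assumes a shared backbone feeding a source classification head and a target-specific branch, with pseudo-labels obtained by clustering target features (DBSCAN here, a common choice in pseudo-labeling UDA methods), and each training iteration combining a supervised source cross-entropy loss with a triplet loss on pseudo-labeled target data. All names (TwoBranchReID, pseudo_label, train_step, the lam weight, the DBSCAN parameters) are illustrative assumptions, not the paper's actual code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from sklearn.cluster import DBSCAN

    class TwoBranchReID(nn.Module):
        # Shared low/mid-level backbone; domain-specific high-level parts.
        def __init__(self, backbone, feat_dim, num_source_ids):
            super().__init__()
            self.shared = backbone                                  # maps images -> feat_dim features (assumed)
            self.source_head = nn.Linear(feat_dim, num_source_ids)  # source identity classifier
            self.target_branch = nn.Sequential(                     # target-specific high-level layers
                nn.Linear(feat_dim, feat_dim),
                nn.BatchNorm1d(feat_dim),
            )

        def forward(self, x, domain):
            f = self.shared(x)
            if domain == "source":
                return self.source_head(f)             # logits for the classification loss
            return F.normalize(self.target_branch(f))  # L2-normalized embeddings for the triplet loss

    def pseudo_label(model, target_loader, eps=0.6):
        # Initialize identity pseudo-labels by clustering target features
        # in the current encoder space (DBSCAN and its parameters are assumptions).
        model.eval()
        feats = []
        with torch.no_grad():
            for imgs in target_loader:  # loader assumed to yield image batches
                feats.append(model(imgs, "target").cpu())
        feats = torch.cat(feats).numpy()
        return DBSCAN(eps=eps, min_samples=4).fit_predict(feats)  # -1 marks outliers, usually discarded

    ce = nn.CrossEntropyLoss()
    triplet = nn.TripletMarginLoss(margin=0.3)

    def train_step(model, optimizer, src_batch, tgt_batch, lam=1.0):
        # Unlike source-free pseudo-labeling, the labeled source data
        # contribute a supervised signal at *every* iteration.
        src_x, src_y = src_batch            # labeled source images and identity labels
        tgt_a, tgt_p, tgt_n = tgt_batch     # anchor/positive/negative sampled via pseudo-labels
        model.train()
        loss = ce(model(src_x, "source"), src_y)
        loss = loss + lam * triplet(model(tgt_a, "target"),
                                    model(tgt_p, "target"),
                                    model(tgt_n, "target"))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In such a scheme the pseudo-labels would typically be refreshed by re-clustering every few epochs; because both losses backpropagate through the shared layers, the low- and mid-level parameters receive the source supervision described in the abstract while only the target branch specializes to the target domain.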
Pages: 4957-4964
Page count: 8