Adaptive Transfer Network for Cross-Domain Person Re-Identification

Cited: 149
Authors
Liu, Jiawei [1 ]
Zha, Zheng-Jun [1 ]
Chen, Di [1 ]
Hong, Richang [2 ]
Wang, Meng [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
[2] HeFei Univ Technol, Hefei, Anhui, Peoples R China
Source
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) | 2019
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR.2019.00737
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent deep learning based person re-identification approaches have steadily improved performance on benchmarks; however, they often fail to generalize well from one domain to another. In this work, we propose a novel adaptive transfer network (ATNet) for effective cross-domain person re-identification. ATNet looks into the essential causes of the domain gap and addresses them following the principle of "divide-and-conquer". It decomposes the complicated cross-domain transfer into a set of factor-wise sub-transfers, each of which concentrates on style transfer with respect to a certain imaging factor, e.g., illumination, resolution, or camera view. An adaptive ensemble strategy is proposed to fuse the factor-wise transfers by perceiving the effect magnitudes of the various factors on individual images. Such a "decomposition-and-ensemble" strategy gives ATNet the capability of precise style transfer at the factor level and, eventually, effective transfer across domains. In particular, ATNet consists of a transfer network, composed of multiple factor-wise CycleGANs and an ensemble CycleGAN, as well as a selection network that infers the effects of the different factors on transferring each image. Extensive experimental results on three widely used datasets, i.e., Market-1501, DukeMTMC-reID, and PRID2011, demonstrate the effectiveness of the proposed ATNet, with significant performance improvements over state-of-the-art methods.
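The "decomposition-and-ensemble" idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the factor-wise CycleGAN generators and the selection network are replaced by hypothetical stand-ins (a random linear perturbation per factor, and a softmax over per-factor scores), so only the fusion logic — a convex combination of factor-wise transfer outputs weighted by the selection network — is faithful to the described strategy.

```python
import numpy as np

def factor_generator(image, seed):
    """Stand-in for one factor-wise generator (e.g. illumination):
    a small fixed linear perturbation of the flattened image."""
    g = np.random.default_rng(seed)
    w = g.normal(scale=0.01, size=(image.size, image.size))
    return image + (w @ image.ravel()).reshape(image.shape)

def selection_weights(image, n_factors):
    """Stand-in for the selection network: per-factor scores from
    image statistics, normalized by a softmax into fusion weights."""
    scores = np.array([image.mean() * (k + 1) for k in range(n_factors)])
    e = np.exp(scores - scores.max())
    return e / e.sum()

def adaptive_ensemble(image, n_factors=3):
    """Fuse the factor-wise transfers with the inferred weights:
    fused = sum_k  w_k * G_k(image),  with w >= 0 and sum(w) == 1."""
    outputs = [factor_generator(image, seed=k) for k in range(n_factors)]
    weights = selection_weights(image, n_factors)
    fused = sum(w * out for w, out in zip(weights, outputs))
    return fused, weights

rng = np.random.default_rng(0)
image = rng.random((4, 4)).astype(np.float32)
fused, weights = adaptive_ensemble(image)
```

Because the weights form a convex combination, a factor that barely affects a given image contributes little to its transferred version, which is the per-image adaptivity the abstract describes.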
Pages: 7195-7204
Page count: 10