Deep Attributes Driven Multi-camera Person Re-identification

Cited by: 269
Authors
Su, Chi [1 ]
Zhang, Shiliang [1 ]
Xing, Junliang [2 ]
Gao, Wen [1 ]
Tian, Qi [3 ]
Affiliations
[1] Peking Univ, Beijing, Peoples R China
[2] Chinese Acad Sci, Beijing, Peoples R China
[3] Univ Texas San Antonio, Dept Comp Sci, San Antonio, TX USA
Source
COMPUTER VISION - ECCV 2016, PT II | 2016 / Vol. 9906
Keywords
Deep attributes; Re-identification
DOI
10.1007/978-3-319-46475-6_30
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The visual appearance of a person is easily affected by many factors, such as pose variations, viewpoint changes, and camera parameter differences. This makes person Re-Identification (ReID) across multiple cameras a very challenging task. This work is motivated to learn mid-level human attributes that are robust to such visual appearance variations. We propose a semi-supervised attribute learning framework that progressively boosts attribute prediction accuracy using only a limited amount of labeled data. Specifically, the framework involves three training stages. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. It is then fine-tuned on another dataset labeled only with person IDs, using our defined triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which are combined with the independent dataset for a final round of fine-tuning. The predicted attributes, namely deep attributes, exhibit superior generalization ability across different datasets. By directly matching deep attributes with a simple cosine distance, we obtain surprisingly good accuracy on four person ReID datasets. Experiments also show that a simple distance metric learning module further boosts our method, making it significantly outperform many recent works.
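The matching step described in the abstract reduces to ranking gallery candidates by cosine distance between deep attribute vectors. Below is a minimal sketch of that step, not the authors' code: the function names, the use of NumPy, and the toy 5-dimensional attribute vectors are assumptions for illustration only.

```python
# Hypothetical sketch: rank gallery persons by cosine distance between
# attribute vectors predicted by a fine-tuned dCNN (vectors assumed given).
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Return 1 - cosine similarity between two attribute vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return 1.0 - float(np.dot(a, b) / denom)

def rank_gallery(query_attr: np.ndarray, gallery_attrs: np.ndarray) -> np.ndarray:
    """Return gallery indices sorted from most to least similar to the query."""
    dists = np.array([cosine_distance(query_attr, g) for g in gallery_attrs])
    return np.argsort(dists)

# Toy usage: 3 gallery persons described by 5 hypothetical binary attributes.
query = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
gallery = np.array([
    [1.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 1.0, 1.0],
    [1.0, 0.0, 1.0, 1.0, 0.0],
])
print(rank_gallery(query, gallery))  # [2 0 1]: gallery person 2 matches best
```

A learned distance metric, as mentioned in the abstract, would simply replace `cosine_distance` in this ranking loop.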
Pages: 475-491
Number of pages: 17