Learning Deep Context-aware Features over Body and Latent Parts for Person Re-identification

Cited by: 442
Authors
Li, Dangwei [1 ,2 ,3 ]
Chen, Xiaotang [1 ,2 ,3 ]
Zhang, Zhang [1 ,2 ,3 ]
Huang, Kaiqi [1 ,2 ,3 ,4 ]
Affiliations
[1] CASIA, CRIPAC, Beijing, Peoples R China
[2] CASIA, NLPR, Beijing, Peoples R China
[3] Univ Chinese Acad Sci, Beijing, Peoples R China
[4] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing, Peoples R China
Source
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) | 2017
Funding
National Natural Science Foundation of China
Keywords
DOI
10.1109/CVPR.2017.782
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Person Re-identification (ReID) aims to identify the same person across different cameras. It is a challenging task due to large variations in person pose, occlusion, background clutter, etc. How to extract powerful features is a fundamental problem in ReID and remains open today. In this paper, we design a Multi-Scale Context-Aware Network (MSCAN) to learn powerful features over the full body and body parts, which captures local context well by stacking multi-scale convolutions in each layer. Moreover, instead of using predefined rigid parts, we propose to learn and localize deformable pedestrian parts using Spatial Transformer Networks (STN) with novel spatial constraints. The learned body parts can relieve some difficulties of part-based representation, e.g. pose variations and background clutter. Finally, we integrate the representation learning of the full body and body parts into a unified framework for person ReID through multi-class person identification tasks. Extensive evaluations on current challenging large-scale person ReID datasets, including the image-based Market1501 and CUHK03 and the sequence-based MARS dataset, show that the proposed method achieves state-of-the-art results.
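The abstract's core idea of "stacking multi-scale convolutions in each layer" can be illustrated with the minimal sketch below, assuming PyTorch; the kernel size, dilation rates, and channel split are illustrative assumptions, not the authors' exact MSCAN configuration, and the STN-based part localization is omitted.

import torch
import torch.nn as nn

class MultiScaleConvBlock(nn.Module):
    """Parallel convolutions with different dilation rates, concatenated.

    A rough sketch of 'multi-scale convolutions in each layer': each branch
    sees a different receptive field, and concatenation fuses local context
    at several scales. Hyperparameters here are illustrative only.
    """
    def __init__(self, in_channels, out_channels, dilations=(1, 2, 3)):
        super().__init__()
        branch_channels = out_channels // len(dilations)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # Concatenate the multi-scale branch outputs along the channel axis.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

if __name__ == "__main__":
    block = MultiScaleConvBlock(in_channels=32, out_channels=96)
    feat = block(torch.randn(2, 32, 40, 16))  # e.g. a pedestrian feature map
    print(feat.shape)                          # torch.Size([2, 96, 40, 16])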
Pages: 7398-7407
Number of pages: 10