Pose-driven Deep Convolutional Model for Person Re-identification

Cited by: 663
Authors
Su, Chi [1 ,4 ]
Li, Jianing [1 ]
Zhang, Shiliang [1 ]
Xing, Junliang [2 ]
Gao, Wen [1 ]
Tian, Qi [3 ]
Affiliations
[1] Peking Univ, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[3] Univ Texas San Antonio, Dept Comp Sci, San Antonio, TX 78249 USA
[4] Beijing Kingsoft Cloud Network Technol Co Ltd, 33 Xiaoying Rd W, Beijing 100085, Peoples R China
Source
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) | 2017
Funding
U.S. National Science Foundation
DOI
10.1109/ICCV.2017.427
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Feature extraction and matching are two crucial components in person Re-Identification (ReID). The large pose deformations and complex view variations exhibited by captured person images significantly increase the difficulty of learning and matching features. To overcome these difficulties, in this work we propose a Pose-driven Deep Convolutional (PDC) model to learn improved feature extraction and matching models end to end. Our deep architecture explicitly leverages human part cues to alleviate pose variations and learn robust feature representations from both the global image and different local parts. To match the features from the global human body and local body parts, a pose-driven feature weighting sub-network is further designed to learn adaptive feature fusions. Extensive experimental analyses and results on three popular datasets demonstrate significant performance improvements of our model over all published state-of-the-art methods.
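The abstract's core fusion idea can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes per-part scores (standing in for the outputs of the pose-driven weighting sub-network) are normalized with a softmax and used to blend part features before concatenation with the global feature; the function names `softmax` and `fuse_features` are hypothetical.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of per-part scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_features(global_feat, part_feats, part_scores):
    """Blend part features with softmax weights derived from per-part
    scores, then concatenate with the global feature (a hypothetical
    stand-in for the paper's pose-driven feature weighting sub-network,
    where the weights would be learned end to end)."""
    weights = softmax(part_scores)
    dim = len(global_feat)
    weighted_parts = [
        sum(w * pf[i] for w, pf in zip(weights, part_feats))
        for i in range(dim)
    ]
    # Final representation: global feature followed by fused part feature.
    return global_feat + weighted_parts

# Example: two parts with equal scores receive equal weight 0.5 each.
fused = fuse_features([1.0, 0.0], [[0.0, 1.0], [2.0, 3.0]], [0.0, 0.0])
# -> [1.0, 0.0, 1.0, 2.0]
```

In the actual PDC model the weights come from a trained sub-network conditioned on pose, so parts that are occluded or badly localized contribute less to the fused representation.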
Pages: 3980-3989 (10 pages)