A Two-Stage Attribute-Constraint Network for Video-Based Person Re-Identification

Cited by: 10
Authors
Song, Wanru [2 ]
Zheng, Jieying [2 ]
Wu, Yahong [2 ]
Chen, Changhong [2 ]
Liu, Feng [1 ,2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Jiangsu Key Lab Image Proc & Image Commun, Nanjing 210003, Jiangsu, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Key Lab Broadband Wireless Commun & Sensor Networ, Minist Educ, Nanjing 210003, Jiangsu, Peoples R China
Source
IEEE ACCESS | 2019, Vol. 7
Funding
National Natural Science Foundation of China;
Keywords
Attribute; constraint; feature extraction; person re-identification; video;
D O I
10.1109/ACCESS.2019.2890836
CLC classification
TP [automation technology; computer technology];
Subject classification code
0812;
Abstract
Person re-identification has become a popular research topic in fields such as security, criminal investigation, and video analysis. This paper aims to learn a discriminative and robust spatial-temporal representation for video-based person re-identification with a two-stage attribute-constraint network (TSAC-Net). Knowledge of pedestrian attributes can help re-identification because attributes carry high-level semantic information and are robust to visual variations. In this paper, we manually annotate three video-based person re-identification datasets with four static appearance attributes and one dynamic appearance attribute. Each attribute is treated as a constraint added to the deep network. In the first stage of the TSAC-Net, we cast re-identification as a classification problem and adopt a multi-attribute classification loss to train the CNN model. In the second stage, two LSTM networks are trained under the constraint of identities and dynamic appearance attributes. The two-stage network thus provides a spatial-temporal feature extractor for pedestrians in video sequences. In the testing phase, a spatial-temporal representation is obtained by feeding a sequence of images into the proposed TSAC-Net. We demonstrate the performance improvement gained by using attributes on several challenging person re-identification datasets (PRID2011, iLIDS-VID, MARS, and VIPeR). Moreover, extensive experiments show that our approach achieves state-of-the-art results on the three video-based benchmark datasets.
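The first-stage objective described in the abstract, a multi-attribute classification loss, can be sketched as an identity cross-entropy term plus one cross-entropy term per annotated attribute. The class counts, equal weighting, and function names below are illustrative assumptions, not the paper's exact configuration:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, label):
    # Negative log-likelihood of the true class.
    return -math.log(softmax(logits)[label] + 1e-12)

def multi_attribute_loss(id_logits, id_label, attr_logits, attr_labels,
                         attr_weight=1.0):
    """Identity cross-entropy plus a weighted sum of per-attribute
    cross-entropy terms (one per annotated appearance attribute).
    The equal attribute weight is an assumption for illustration."""
    loss = cross_entropy(id_logits, id_label)
    for logits, label in zip(attr_logits, attr_labels):
        loss += attr_weight * cross_entropy(logits, label)
    return loss

# Toy example: 5 identities and two attributes with 2 and 3 classes.
# Uniform (all-zero) logits give cross-entropy log(K) per K-way term,
# so the combined loss is log(5) + log(2) + log(3) = log(30).
loss = multi_attribute_loss([0.0] * 5, 0, [[0.0] * 2, [0.0] * 3], [1, 2])
print(round(loss, 4))  # → 3.4012
```

In a full training pipeline each logit vector would come from a shared CNN backbone with one classification head per attribute; the closed-form check above only verifies the loss arithmetic.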
Pages: 8508-8518
Page count: 11