Person Re-Identification by Camera Correlation Aware Feature Augmentation

Cited by: 237
Authors
Chen, Ying-Cong [1,2,3]
Zhu, Xiatian [1,4]
Zheng, Wei-Shi [1,5]
Lai, Jian-Huang [1,6]
Affiliations
[1] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou 510275, Guangdong, Peoples R China
[2] Natl Univ Def Technol, Collaborat Innovat Ctr High Performance Comp, Changsha 410073, Hunan, Peoples R China
[3] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Sha Tin, Hong Kong, Peoples R China
[4] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
[5] Sun Yat Sen Univ, Minist Educ, Key Lab Machine Intelligence & Adv Comp, Guangzhou 510275, Guangdong, Peoples R China
[6] Guangdong Prov Key Lab Informat Secur, Guangzhou 510275, Guangdong, Peoples R China
Keywords
Person re-identification; adaptive feature augmentation; view-specific transformation; adaptation; network
DOI
10.1109/TPAMI.2017.2666805
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The challenge of person re-identification (re-id) is to match images of the same person captured by different non-overlapping camera views despite significant and unknown cross-view feature distortion. While a large number of distance metric/subspace learning models have been developed for re-id, the cross-view transformations they learn are view-generic and thus potentially less effective in quantifying the feature distortion inherent to each camera view. Learning view-specific feature transformations for re-id (i.e., view-specific re-id), an under-studied approach, offers an alternative solution to this problem. In this work, we formulate a novel view-specific person re-identification framework from the feature augmentation point of view, called Camera coRrelation Aware Feature augmenTation (CRAFT). Specifically, CRAFT performs cross-view adaptation by automatically measuring camera correlation from the cross-view visual data distribution and adaptively conducting feature augmentation to transform the original features into a new adaptive space. Through our augmentation framework, view-generic learning algorithms can be readily generalized to learn and optimize view-specific sub-models whilst simultaneously modelling view-generic discrimination information. Therefore, our framework not only inherits the strength of view-generic model learning but also provides an effective way to take view-specific characteristics into account. Our CRAFT framework can be extended to jointly learn view-specific feature transformations for person re-id across a large network with more than two cameras, a largely under-investigated but realistic re-id setting. Additionally, we present a domain-generic deep person appearance representation, designed particularly to be view-invariant, in order to facilitate cross-view adaptation by CRAFT. We conducted extensive comparative experiments to validate the superiority and advantages of our proposed framework over state-of-the-art competitors on contemporary challenging person re-id datasets.
Pages: 392-408
Number of pages: 17
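
Illustrative sketch: as a rough, self-contained illustration of the view-specific feature augmentation idea described in the abstract, the Python snippet below maps a per-camera feature vector into an augmented space containing one shared (view-generic) block plus one block per camera view. The corr matrix here is a hypothetical stand-in for an automatically measured camera correlation, and the construction is a zero-padding-style sketch in the spirit of the abstract, not the exact CRAFT formulation from the paper.

import numpy as np

def augment_view_specific(x, view_idx, num_views, corr=None):
    # Build the augmented representation: one shared block followed by
    # one block per camera view. The block belonging to this feature's
    # own camera keeps the original feature; the other views' blocks are
    # zeroed out, or scaled by a camera-correlation coefficient if given.
    blocks = [x]  # shared (view-generic) block
    for v in range(num_views):
        if v == view_idx:
            blocks.append(x)                       # this view's own block
        else:
            w = 0.0 if corr is None else corr[view_idx, v]
            blocks.append(w * x)                   # other views: zero or correlation-weighted
    return np.concatenate(blocks)                  # length: dim(x) * (num_views + 1)

# Toy usage: two cameras, a hypothetical correlation of 0.3 between them.
rng = np.random.default_rng(0)
x_cam0 = rng.standard_normal(5)
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])
print(augment_view_specific(x_cam0, view_idx=0, num_views=2, corr=corr).shape)  # (15,)

Training any view-generic metric/subspace learner on such augmented features implicitly yields coupled view-specific sub-models, which is the general effect the abstract attributes to the augmentation framework.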