Learning Discriminative Features with Multiple Granularities for Person Re-Identification

Cited by: 955
Authors
Wang, Guanshuo [1]
Yuan, Yufeng [2]
Chen, Xiong [2]
Li, Jiwei [2]
Zhou, Xi [1,2]
Affiliations
[1] Shanghai Jiao Tong Univ, Cooperat Medianet Innovat Ctr, Shanghai, Peoples R China
[2] CloudWalk Technol, Guangzhou, Peoples R China
Source
PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18) | 2018
Keywords
Person re-identification; Feature learning; Multi-branch deep network
DOI
10.1145/3240508.3240552
CLC Number
TP301 [Theory and Methods];
Subject Classification Code
081202
Abstract
The combination of global and partial features has been an essential solution for improving discriminative performance in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty and is neither efficient nor robust to scenarios with large variances. In this paper, we propose an end-to-end feature learning strategy that integrates discriminative information at various granularities. We carefully design the Multiple Granularity Network (MGN), a multi-branch deep network architecture consisting of one branch for global feature representations and two branches for local feature representations. Instead of learning on semantic regions, we uniformly partition the images into several stripes and vary the number of parts in different local branches to obtain local feature representations with multiple granularities. Comprehensive experiments on the mainstream evaluation datasets, including Market-1501, DukeMTMC-reID and CUHK03, indicate that our method robustly achieves state-of-the-art performance and outperforms existing approaches by a large margin. For example, on the Market-1501 dataset in single-query mode, we obtain a top result of Rank-1/mAP = 96.6%/94.2% with this method after re-ranking.
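The branch layout described in the abstract can be illustrated with a short PyTorch sketch. The code below is a minimal illustration under stated assumptions: a torchvision ResNet-50 backbone, hypothetical 256-d reduction heads, and a shared trunk ending at layer3, with one globally pooled branch and two local branches that additionally pool 2 and 3 uniform horizontal stripes. It is not the authors' exact MGN implementation (for instance, the stride-1 layer4 in the part branches and the softmax/triplet training heads are omitted).

import copy

import torch
import torch.nn as nn
from torchvision.models import resnet50


class MGNSketch(nn.Module):
    """Minimal sketch: one global branch plus 2-stripe and 3-stripe local branches."""

    def __init__(self, feat_dim=256):
        super().__init__()
        base = resnet50()  # randomly initialized here; the paper starts from ImageNet weights
        # Shared trunk: everything up to and including layer3 (conv4_x).
        self.trunk = nn.Sequential(
            base.conv1, base.bn1, base.relu, base.maxpool,
            base.layer1, base.layer2, base.layer3,
        )
        # Each branch gets its own copy of layer4 (conv5_x).
        self.branch_global = copy.deepcopy(base.layer4)
        self.branch_part2 = copy.deepcopy(base.layer4)
        self.branch_part3 = copy.deepcopy(base.layer4)

        # Hypothetical 1x1 reductions from 2048-d pooled maps to feat_dim vectors.
        def reduction():
            return nn.Sequential(
                nn.Conv2d(2048, feat_dim, kernel_size=1, bias=False),
                nn.BatchNorm2d(feat_dim),
                nn.ReLU(inplace=True),
            )

        self.reduce_global = reduction()
        self.reduce_part2 = nn.ModuleList([reduction() for _ in range(3)])  # 1 global + 2 stripes
        self.reduce_part3 = nn.ModuleList([reduction() for _ in range(4)])  # 1 global + 3 stripes

    @staticmethod
    def _stripes(feat_map, n_parts):
        # Uniformly partition the feature map into n_parts horizontal stripes and
        # average-pool each one (trailing rows are dropped if the height is not divisible).
        step = feat_map.size(2) // n_parts
        return [feat_map[:, :, i * step:(i + 1) * step, :].mean(dim=(2, 3), keepdim=True)
                for i in range(n_parts)]

    def forward(self, x):
        shared = self.trunk(x)

        # Global branch: one globally pooled feature.
        g = self.branch_global(shared)
        feats = [self.reduce_global(g.mean(dim=(2, 3), keepdim=True))]

        # Local branches: one global feature plus 2 or 3 uniformly pooled stripes each.
        for branch, reducers, n_parts in (
            (self.branch_part2, self.reduce_part2, 2),
            (self.branch_part3, self.reduce_part3, 3),
        ):
            m = branch(shared)
            pooled = [m.mean(dim=(2, 3), keepdim=True)] + self._stripes(m, n_parts)
            feats += [r(p) for r, p in zip(reducers, pooled)]

        # Concatenate all eight granularity features into a single retrieval embedding.
        return torch.cat([f.flatten(1) for f in feats], dim=1)


# Usage: the concatenated embedding has size 8 * feat_dim = 2048 per image.
# model = MGNSketch().eval()
# emb = model(torch.randn(2, 3, 384, 128))  # -> torch.Size([2, 2048])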
Pages: 274 - 282
Number of pages: 9