Illumination Unification for Person Re-Identification

Cited by: 43
Authors
Zhang, Guoqing [1 ,2 ]
Luo, Zhiyuan [1 ]
Chen, Yuhao [1 ]
Zheng, Yuhui [1 ]
Lin, Weisi [3 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Engn Res Ctr Digital Forens, Minist Educ, Nanjing 210044, Peoples R China
[3] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Lighting; Training; Image restoration; Testing; Cameras; Task analysis; Image reconstruction; Person re-identification; generative adversarial network; illumination-adaptive; FEATURES; NETWORK;
DOI
10.1109/TCSVT.2022.3169422
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
The performance of person re-identification (re-ID) is easily degraded by illumination variations caused by different shooting times, places and cameras. Existing illumination-adaptive methods usually require annotating cross-camera pedestrians at each illumination scale, which is unaffordable for a long-term person retrieval system. The cross-illumination person retrieval problem therefore presents a great challenge for accurate person matching. In this paper, we propose a novel method to tackle this task that only needs pedestrians annotated at a single illumination scale. Specifically, (i) we propose a novel Illumination Estimation and Restoring framework (IER) to estimate the illumination scale of testing images taken under different illumination conditions and restore them to the illumination scale of the training images, so that the disparity between training images with uniform illumination and testing images with varying illumination is reduced. IER achieves promising results on illumination-adaptive datasets, proving itself a proper baseline for cross-illumination person re-ID. (ii) We propose a Mixed Training strategy using both Original and Reconstructed images (MTOR) to further improve model performance. We generate reconstructed images that are consistent with the original training images in content but closer to the restored images in style. The reconstructed images are combined with the original training images for supervised training, further reducing the domain gap between the original training images and the restored testing images. To verify the effectiveness of our method, several simulated illumination-adaptive datasets are constructed with various illumination conditions. Extensive experimental results on these simulated datasets validate the effectiveness of the proposed method. The source code is available at https://github.com/FadeOrigin/IUReId.
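For intuition, the sketch below illustrates the core restoration idea described in the abstract: estimate how a test image's illumination differs from the training illumination scale and invert that change before matching. It is a minimal, hypothetical illustration that assumes the illumination change can be approximated by a gamma adjustment; the function names and the gamma model are assumptions for illustration only, not the paper's actual IER implementation.

```python
# Minimal, hypothetical sketch (NOT the paper's IER implementation):
# the illumination change is modelled here as a simple gamma adjustment,
# and the test image is restored to the training illumination scale
# by inverting the estimated adjustment.
import numpy as np

def apply_gamma(image, gamma):
    # Simulate an illumination change on an image with values in [0, 1].
    return np.clip(image, 0.0, 1.0) ** gamma

def restore_to_training_illumination(test_image, estimated_gamma):
    # Invert the estimated illumination change so the test image
    # matches the uniform illumination of the training images.
    return apply_gamma(test_image, 1.0 / estimated_gamma)

rng = np.random.default_rng(0)
train_like = rng.random((256, 128, 3))      # stand-in pedestrian crop at training illumination
test_image = apply_gamma(train_like, 2.2)   # simulated darker test-time capture
restored = restore_to_training_illumination(test_image, estimated_gamma=2.2)
print(np.abs(restored - train_like).max())  # ~0: illumination gap removed
```

In the actual method, the illumination estimation and restoration are learned from data (the keywords indicate a generative adversarial network is involved) rather than given a known gamma as in this toy example.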
Pages: 6766-6777
Page count: 12