Unsupervised Joint Contrastive Learning for Aerial Person Re-Identification and Remote Sensing Image Classification

Cited by: 0
Authors
Zhang, Guoqing [1 ,2 ,3 ]
Li, Jiqiang [1 ]
Ye, Zhonglin [4 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp Sci, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Engn Res Ctr Digital Forens, Minist Educ, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Jiangsu Collaborat Innovat Ctr Atmospher Environm, Dept Jiangsu Collaborat Innovat, Nanjing 210044, Peoples R China
[4] Qinghai Normal Univ, State Key Lab Tibetan Intelligent Informat Proc &, Xining 810000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
person re-identification; contrastive learning; unsupervised learning; remote sensing;
DOI
10.3390/rs16020422
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science];
Discipline Classification Code
08; 0830;
Abstract
Unsupervised person re-identification (Re-ID) aims to match a query image of a person with images in the gallery without using supervision labels. Most existing methods generate pseudo-labels through clustering algorithms for contrastive learning, which inevitably assigns noisy labels to some samples. In addition, methods that apply contrastive learning only at the cluster level fail to fully exploit the relationships between individual instances. Motivated by this, we propose a joint contrastive learning (JCL) framework for unsupervised person Re-ID. Our method builds two memory banks that store cluster centroid features and instance features, and applies cluster-level and instance-level contrastive learning, respectively, to jointly optimize the network. The cluster-level contrastive loss promotes feature compactness within the same cluster and reinforces identity similarity, while the instance-level contrastive loss separates easily confused samples. In addition, we adopt a WaveBlock attention module (WAM), which continuously "waves" blocks of the feature maps and introduces attention mechanisms to produce more robust person representations without considerable information loss. Furthermore, we improve clustering quality by leveraging camera labels to eliminate clusters whose samples come from a single camera. Extensive experiments on two widely used person Re-ID datasets verify the effectiveness of JCL, and experiments on two remote sensing datasets demonstrate the generalizability of our method.
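As a concrete illustration of the joint objective described in the abstract, the following is a minimal sketch, not the paper's released implementation, of cluster-level and instance-level contrastive losses computed against two memory banks. All names (cluster_contrastive_loss, instance_contrastive_loss, momentum_update), the temperature tau, the momentum m, and the balancing weight lam are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn.functional as F


def cluster_contrastive_loss(feats, pseudo_labels, cluster_memory, tau=0.05):
    # Cluster-level InfoNCE: each embedding is pulled toward the centroid of
    # its pseudo-label cluster and pushed away from all other centroids.
    logits = feats @ cluster_memory.t() / tau          # (B, K) similarities
    return F.cross_entropy(logits, pseudo_labels)


def instance_contrastive_loss(feats, indices, instance_memory, tau=0.05):
    # Instance-level InfoNCE: each embedding is contrasted against the whole
    # instance memory bank, with its own stored feature as the positive.
    logits = feats @ instance_memory.t() / tau         # (B, N) similarities
    return F.cross_entropy(logits, indices)


@torch.no_grad()
def momentum_update(memory, indices, feats, m=0.2):
    # Momentum moving-average update of the memory slots for the current
    # batch, followed by re-normalization to unit length.
    memory[indices] = m * memory[indices] + (1.0 - m) * feats
    memory[indices] = F.normalize(memory[indices], dim=1)


# Toy usage with random features standing in for backbone outputs.
B, D, K, N = 8, 128, 16, 256                     # batch, feat dim, clusters, dataset size
feats = F.normalize(torch.randn(B, D), dim=1)    # L2-normalized embeddings
pseudo_labels = torch.randint(0, K, (B,))        # cluster assignments from, e.g., DBSCAN
indices = torch.arange(B)                        # dataset indices of the batch samples

cluster_memory = F.normalize(torch.randn(K, D), dim=1)    # one slot per cluster centroid
instance_memory = F.normalize(torch.randn(N, D), dim=1)   # one slot per image

lam = 0.5                                        # assumed weight balancing the two losses
loss = cluster_contrastive_loss(feats, pseudo_labels, cluster_memory) \
       + lam * instance_contrastive_loss(feats, indices, instance_memory)
momentum_update(instance_memory, indices, feats)
print(float(loss))
```

In this sketch the cluster memory would be rebuilt from the clustering result at the start of each epoch, while the instance memory is refreshed every iteration with a momentum update, which is one common way to realize the two-bank design the abstract describes.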
Pages: 20