Attention-disentangled re-ID network for unsupervised domain adaptive re-identification

Cited by: 1
Authors
Wang, Lun [1 ]
Huang, Jiapeng [1 ]
Huang, Luoqi [1 ]
Wang, Fei [1 ]
Gao, Changxin [2 ]
Li, Jinsheng [3 ]
Xiao, Fei [3 ]
Luo, Dapeng [1 ]
Affiliations
[1] China Univ Geosci, Sch Mech Engn & Elect Informat, Wuhan 430074, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan 430074, Peoples R China
[3] Intelligent Technology Co Ltd, Chinese Construct Engn Bur 3, Wuhan 430070, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Person re-ID; Domain adaptation; Disentangled learning; Attention-disentangled; Hard sample memory bank;
DOI
10.1016/j.knosys.2024.112583
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Unsupervised domain adaptation (UDA) for person re-identification (re-ID) aims to bridge the domain gap by transferring knowledge from the labeled source domain to the unlabeled target domain. Recently, pseudo-label-based approaches have become the dominant solution to this problem. However, most pseudo-label-based methods discard class boundary samples to reduce the influence of false pseudo labels, sacrificing intra-class semantic diversity. Although hard sample memory bank-based approaches can discover relationships between samples and describe intra-class diversity, they are prone to generating incorrect pseudo labels. Balancing intra-class diversity and pseudo-label accuracy in cross-domain person re-ID thus remains a challenge. In this study, we propose an attention-disentangled re-ID network (ADDNet) to enhance the discriminative ability of re-ID-related feature representations, addressing the contradiction between intra-class diversity and pseudo-label accuracy. Unlike most disentanglement learning-based re-ID approaches that focus on separating explanatory factors, we design a spatial attention-disentangled mechanism to separate re-ID-related and re-ID-unrelated weights, enhancing the discriminative ability of cross-domain feature representation. Additionally, a multiple hard sample memory learning strategy is designed to express the intra-class diversity of target samples using re-ID-related features extracted from ADDNet. Extensive experiments show that ADDNet achieves 46.7% mAP on the Market-to-MSMT cross-domain re-ID task, outperforming state-of-the-art methods by 6.5 points.
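The abstract gives no implementation details, so as an illustration only, the two core ideas can be sketched in NumPy: a spatial attention map that splits a feature map into complementary re-ID-related and re-ID-unrelated streams, and hardest-positive mining from a memory bank. All names, shapes, and the sigmoid-of-channel-mean attention stand-in below are assumptions for the sketch, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical backbone feature map: (channels, height, width).
C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Spatial attention map a(h, w) in (0, 1); in the paper this would be
# learned, here we derive it from the channel-wise mean as a stand-in.
attn = sigmoid(feat.mean(axis=0))            # shape (H, W)

related = feat * attn                        # re-ID-related stream
unrelated = feat * (1.0 - attn)              # re-ID-unrelated stream

# The complementary weights reconstruct the input exactly:
assert np.allclose(related + unrelated, feat)

# Hard-positive mining from a memory bank: for a query embedding, the
# hardest positive is the same-class entry farthest from the query.
bank = rng.standard_normal((10, 16))         # 10 stored embeddings
labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
query, q_label = rng.standard_normal(16), 1

pos = np.where(labels == q_label)[0]
dists = np.linalg.norm(bank[pos] - query, axis=1)
hardest_positive = pos[np.argmax(dists)]     # index of hardest positive
```

Because the attention weights sum to one at every spatial location, nothing is discarded: the unrelated stream absorbs background clutter while the related stream keeps identity cues, and mining the farthest same-class entry is what preserves intra-class diversity instead of collapsing each cluster to its easiest samples.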
Pages: 12