Semi-supervised Domain Adaptation via Joint Contrastive Learning with Sensitivity

Cited by: 1
Authors
Tu, Keyu [1 ]
Wang, Zilei [1 ]
Li, Junjie [1 ]
Zhang, Yixin [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
computer vision; domain adaptation; contrastive learning;
DOI
10.1145/3581783.3611991
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Semi-supervised Domain Adaptation (SSDA) aims to learn a well-performing model from fully labeled source samples and scarcely labeled target samples, together with unlabeled target samples. Because labeled source samples dominate the training data, both the feature extractor and the classifier can become biased towards the source domain. This can result in sub-optimal feature extraction for challenging target samples that differ notably from the source domain. Moreover, the source-favored classifier can hinder classification performance on the target domain. To this end, we propose a novel Joint Contrastive Learning with Sensitivity (JCLS) in this paper, which consists of sensitivity-aware feature contrastive learning (SFCL) and class-wise probabilistic contrastive learning (CPCL). Unlike traditional contrastive learning, SFCL pays more attention to sensitive samples when optimizing the feature extractor, and consequently the feature discrimination of unlabeled samples is enhanced. CPCL performs class-wise contrastive learning in the probabilistic space to force the cross-domain classifier to match the real distribution of source and target samples. By combining these two components, JCLS extracts domain-invariant and compact features and obtains a well-performing classifier. We conduct experiments on the DomainNet and Office-Home benchmarks, and the results show that our approach achieves state-of-the-art performance.
Pages: 5645-5654
Number of pages: 10
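
The abstract above describes the two losses only at a high level, so the PyTorch snippet below is a rough, non-authoritative sketch of one plausible shape they could take: an InfoNCE-style feature contrast re-weighted by a per-sample sensitivity score, and a class-wise contrast computed on softmax probability vectors across domains. The sensitivity definition, the use of pseudo-labels for unlabeled target samples, and all function names are assumptions made for exposition, not the paper's actual formulation (see the DOI above for the exact losses).

```python
# Illustrative sketch only (PyTorch). The sensitivity weighting and the
# probability-space similarity below are assumptions for exposition;
# they are NOT the exact losses defined in the JCLS paper.
import torch
import torch.nn.functional as F


def sensitivity_weighted_infonce(feat_q, feat_k, sensitivity, tau=0.07):
    """InfoNCE between two augmented views, re-weighted per sample.

    feat_q, feat_k: (N, D) features of two views of the same N samples.
    sensitivity:    (N,) non-negative scores; larger = sample matters more
                    (e.g. derived from prediction instability, an assumption).
    """
    q = F.normalize(feat_q, dim=1)
    k = F.normalize(feat_k, dim=1)
    logits = q @ k.t() / tau                      # (N, N) cosine similarities
    targets = torch.arange(q.size(0), device=q.device)
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    w = sensitivity / sensitivity.sum().clamp_min(1e-8)
    return (w * per_sample).sum()                 # sensitive samples weigh more


def classwise_probabilistic_contrast(p_src, p_tgt, y_src, y_tgt, tau=0.1):
    """Pulls together probability vectors of the same class across domains.

    p_src, p_tgt: (Ns, C), (Nt, C) softmax outputs; y_src, y_tgt: integer
    labels (pseudo-labels for unlabeled target samples, an assumption).
    """
    sim = (F.normalize(p_tgt, dim=1) @ F.normalize(p_src, dim=1).t()) / tau
    pos = (y_tgt.unsqueeze(1) == y_src.unsqueeze(0)).float()    # (Nt, Ns)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-likelihood of same-class (positive) source samples
    loss = -(pos * log_prob).sum(1) / pos.sum(1).clamp_min(1.0)
    return loss.mean()


if __name__ == "__main__":
    N, D, C = 8, 128, 10
    l_feat = sensitivity_weighted_infonce(torch.randn(N, D), torch.randn(N, D),
                                          torch.rand(N))
    l_prob = classwise_probabilistic_contrast(torch.softmax(torch.randn(N, C), 1),
                                              torch.softmax(torch.randn(N, C), 1),
                                              torch.randint(0, C, (N,)),
                                              torch.randint(0, C, (N,)))
    print(l_feat.item(), l_prob.item())
```

In this sketch, weighting the per-sample InfoNCE terms (rather than the logits) keeps the loss a convex combination of standard terms, which is one simple way to "pay more attention to sensitive samples" without changing the positive/negative structure of the contrast.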