Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation

Times cited: 103
Authors
Yue, Xiangyu [1 ]
Zheng, Zangwei [2 ]
Zhang, Shanghang [1 ]
Gao, Yang [3 ]
Darrell, Trevor [1 ]
Keutzer, Kurt [1 ]
Vincentelli, Alberto Sangiovanni [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Nanjing Univ, Nanjing, Peoples R China
[3] Tsinghua Univ, Beijing, Peoples R China
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Keywords
DOI
10.1109/CVPR46437.2021.01362
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised Domain Adaptation (UDA) transfers predictive models from a fully-labeled source domain to an unlabeled target domain. In some applications, however, it is expensive even to collect labels in the source domain, making most previous works impractical. To cope with this problem, recent work performed instance-wise cross-domain self-supervised learning, followed by an additional fine-tuning stage. However, the instance-wise self-supervised learning only learns and aligns low-level discriminative features. In this paper, we propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA). PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains. Our framework captures category-wise semantic structures of the data by in-domain prototypical contrastive learning, and performs feature alignment through cross-domain prototypical self-supervision. Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
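The abstract's central mechanism, prototypical contrastive learning, contrasts each embedding against cluster prototypes rather than against individual instances. Below is a minimal Python/PyTorch sketch of a ProtoNCE-style loss in that spirit; the function name, the k-means-derived assignments, and the temperature value are illustrative assumptions, not the authors' released PCS implementation.

# Minimal ProtoNCE-style prototypical contrastive loss (illustrative sketch,
# not the PCS authors' code). Assumes cluster prototypes and per-sample
# cluster assignments come from an external step such as k-means.
import torch
import torch.nn.functional as F

def proto_nce_loss(features, prototypes, assignments, tau=0.1):
    # features:    (N, D) encoder embeddings
    # prototypes:  (K, D) cluster centroids, e.g. k-means on the embeddings
    # assignments: (N,) cluster index of each sample
    # tau:         softmax temperature controlling concentration
    features = F.normalize(features, dim=1)      # cosine similarity via L2 norm
    prototypes = F.normalize(prototypes, dim=1)
    logits = features @ prototypes.t() / tau     # (N, K) similarity to each prototype
    return F.cross_entropy(logits, assignments)  # pull each sample to its prototype

# Toy usage: 8 samples, 16-dim embeddings, 4 clusters.
feats = torch.randn(8, 16)
protos = torch.randn(4, 16)
assign = torch.randint(0, 4, (8,))
print(proto_nce_loss(feats, protos, assign).item())

Per the abstract, an objective of this kind is applied within each domain to capture category-wise semantic structure, while a cross-domain counterpart performs the prototypical self-supervised alignment; the sketch above covers only the in-domain case.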
Pages: 13829-13839
Page count: 11