Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation

Cited by: 103
Authors
Yue, Xiangyu [1 ]
Zheng, Zangwei [2 ]
Zhang, Shanghang [1 ]
Gao, Yang [3 ]
Darrell, Trevor [1 ]
Keutzer, Kurt [1 ]
Vincentelli, Alberto Sangiovanni [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Nanjing Univ, Nanjing, Peoples R China
[3] Tsinghua Univ, Beijing, Peoples R China
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Keywords
DOI
10.1109/CVPR46437.2021.01362
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised Domain Adaptation (UDA) transfers predictive models from a fully-labeled source domain to an unlabeled target domain. In some applications, however, it is expensive even to collect labels in the source domain, making most previous works impractical. To cope with this problem, recent work performed instance-wise cross-domain self-supervised learning, followed by an additional fine-tuning stage. However, instance-wise self-supervised learning only learns and aligns low-level discriminative features. In this paper, we propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA). PCS not only performs cross-domain low-level feature alignment, but also encodes and aligns semantic structures in the shared embedding space across domains. Our framework captures category-wise semantic structures of the data by in-domain prototypical contrastive learning, and performs feature alignment through cross-domain prototypical self-supervision. Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
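The abstract names two coupled components: in-domain prototypical contrastive learning and cross-domain prototypical self-supervision. The sketch below illustrates one plausible reading of those ideas in PyTorch; it is not the authors' released implementation. The spherical k-means prototype estimation, the ProtoNCE-style in-domain loss, the entropy-based cross-domain matching term, and all hyper-parameters (number of prototypes, temperature tau) are illustrative assumptions.

import torch
import torch.nn.functional as F


def compute_prototypes(features, num_prototypes, iters=10):
    """Estimate L2-normalized prototypes with a few spherical k-means steps.
    Assumption: PCS-style methods derive in-domain prototypes by clustering
    backbone features; this exact clustering procedure is illustrative."""
    feats = F.normalize(features, dim=1)
    idx = torch.randperm(feats.size(0))[:num_prototypes]
    protos = feats[idx].clone()                       # initialize from random samples
    for _ in range(iters):
        assign = (feats @ protos.t()).argmax(dim=1)   # nearest-prototype assignment
        for k in range(num_prototypes):
            members = feats[assign == k]
            if members.numel() > 0:
                protos[k] = F.normalize(members.mean(dim=0), dim=0)
    assign = (feats @ protos.t()).argmax(dim=1)       # final assignments
    return protos, assign


def proto_nce_loss(features, prototypes, assignments, tau=0.1):
    """In-domain prototypical contrastive loss (ProtoNCE-style, an assumption):
    each sample is attracted to its own prototype and repelled from the rest."""
    feats = F.normalize(features, dim=1)
    logits = feats @ prototypes.t() / tau             # (N, K) similarity logits
    return F.cross_entropy(logits, assignments)


def cross_domain_alignment_loss(src_protos, tgt_protos, tau=0.1):
    """Cross-domain prototypical self-supervision sketch: soft-match source and
    target prototypes and sharpen the match by minimizing its entropy in both
    directions. An illustrative stand-in, not the paper's exact loss."""
    sim = src_protos @ tgt_protos.t() / tau           # (K_s, K_t) similarities
    p_s2t = F.softmax(sim, dim=1)
    p_t2s = F.softmax(sim.t(), dim=1)

    def entropy(p):
        return -(p * (p + 1e-8).log()).sum(dim=1).mean()

    return 0.5 * (entropy(p_s2t) + entropy(p_t2s))


if __name__ == "__main__":
    # Toy run with random features standing in for backbone embeddings.
    src_feat, tgt_feat = torch.randn(256, 128), torch.randn(256, 128)
    src_protos, src_assign = compute_prototypes(src_feat, num_prototypes=10)
    tgt_protos, tgt_assign = compute_prototypes(tgt_feat, num_prototypes=10)
    loss = (proto_nce_loss(src_feat, src_protos, src_assign)
            + proto_nce_loss(tgt_feat, tgt_protos, tgt_assign)
            + cross_domain_alignment_loss(src_protos, tgt_protos))
    print("total loss:", float(loss))

In a real training loop the prototypes would be re-estimated periodically from a feature memory bank while gradients flow only through the encoder features; the toy block above merely checks that the pieces compose.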
Pages: 13829 - 13839
Number of pages: 11
Cited references: 79