DINE: Domain Adaptation from Single and Multiple Black-box Predictors

Cited by: 64
Authors
Liang, Jian [1 ,2 ]
Hu, Dapeng [3 ]
Feng, Jiashi [4 ]
He, Ran [1 ,2 ,5 ,6 ]
Affiliations
[1] CRIPAC, Beijing, Peoples R China
[2] CASIA, NLPR, Beijing, Peoples R China
[3] NUS, Singapore, Singapore
[4] ByteDance, Palo Alto, CA USA
[5] UCAS, Beijing, Peoples R China
[6] Chinese Acad Sci, CEBSIT, Beijing, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
Funding
National Natural Science Foundation of China;
DOI
10.1109/CVPR52688.2022.00784
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
To ease the burden of labeling, unsupervised domain adaptation (UDA) aims to transfer knowledge in previous and related labeled datasets (sources) to a new unlabeled dataset (target). Despite impressive progress, prior methods typically need to access the raw source data and develop data-dependent alignment approaches to recognize the target samples in a transductive learning manner, which may raise privacy concerns from source individuals. Several recent studies resort to an alternative solution by exploiting the well-trained white-box model from the source domain, yet it may still leak the raw data via generative adversarial learning. This paper studies a practical and interesting setting for UDA, where only black-box source models (i.e., only network predictions are available) are provided during adaptation in the target domain. To solve this problem, we propose a new two-step knowledge adaptation framework called Distill and fine-tuNE (DINE). Taking the target data structure into consideration, DINE first distills the knowledge from the source predictor to a customized target model, then fine-tunes the distilled model to further fit the target domain. Moreover, DINE does not require the network architectures to be identical across domains, even allowing effective adaptation on a low-resource device. Empirical results on three UDA scenarios (i.e., single-source, multi-source, and partial-set) confirm that DINE achieves highly competitive performance compared to state-of-the-art data-dependent approaches. Code is available at https://github.com/tim-learn/DINE/.
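The abstract's first (distillation) step can be sketched generically: the target (student) model is trained to match the soft predictions returned by the black-box source predictor, e.g., by minimizing a KL divergence between the two output distributions. The sketch below is a minimal, framework-free illustration of that objective only, not the paper's exact loss (DINE's full method adds further refinements to the distilled labels); all function names here are illustrative.

```python
import math

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete probability distributions.

    `eps` guards against log(0) for near-zero probabilities.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def distill_loss(teacher_preds, student_preds):
    """Average KL divergence from the black-box source (teacher)
    predictions to the target (student) model's predictions.

    Only the teacher's output probabilities are needed, so this
    works when the source model's weights are inaccessible.
    """
    n = len(teacher_preds)
    return sum(kl_divergence(p, q) for p, q in zip(teacher_preds, student_preds)) / n

# Toy example: soft labels queried from the black-box source model
# vs. the current outputs of a (hypothetical) target model.
teacher = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
student = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]]
loss = distill_loss(teacher, student)
```

Note the asymmetry of the design choice: because only the teacher's predictions (not its gradients or features) enter the loss, the student architecture is free to differ from the source network, which is what permits adaptation on a low-resource device.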
Pages: 7993-8003
Page count: 11