Source-free domain adaptation with unrestricted source hypothesis

Cited by: 7
Authors
He, Jiujun [1 ,2 ]
Wu, Liang [1 ,2 ]
Tao, Chaofan [3 ]
Lv, Fengmao [4 ,5 ]
Affiliations
[1] Southwestern Univ Finance & Econ, Ctr Stat Res, 555 Liutai Rd, Chengdu 611130, Sichuan, Peoples R China
[2] Southwestern Univ Finance & Econ, Sch Stat, 555 Liutai Rd, Chengdu 611130, Sichuan, Peoples R China
[3] Univ Hong Kong, Pok Fu Lam, Hong Kong 999077, Peoples R China
[4] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, West Pk Hitech Zone, Chengdu 611756, Sichuan, Peoples R China
[5] Minist Educ, Engn Res Ctr Sustainable Urban Intelligent Transportat, Chengdu 611756, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Domain adaptation; Privacy protection; Transfer learning; Deep learning;
DOI
10.1016/j.patcog.2023.110246
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Domain adaptation aims to bridge the distribution discrepancy across different domains and improve the generalization ability of learning models on the target domain. Existing domain adaptation approaches align the distribution shift via adversarial training on the source and target data. In practice, however, the source data is usually unavailable due to privacy concerns. In this work, we focus on the source-free domain adaptation setting, in which only the model trained on the source data and the unlabeled target data are accessible. To this end, we propose the Source-Free Adversarial Domain Adaptation (SFADA) approach to align the distribution of the target domain data in the absence of source domain data. In particular, we develop an effective metric to measure the domain discrepancy by introducing proxy data for the source domain. To generate the proxy data, our approach retrieves target samples that lie in the intersection of the supports of the source and target domains. We also derive the learning bound of source-free domain adaptation theoretically and show that the proposed SFADA approach reduces this bound effectively. Additionally, unlike previous source-free approaches that modify the source model, SFADA does not require the source model to be trained with specific restrictions (e.g., normalizing the classifier weights), which matters for practical and privacy-related concerns. State-of-the-art results are achieved on standard domain adaptation benchmarks. The code is available at https://github.com/tiggers23/SFADA-main.
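To make the recipe described above concrete, the following is a minimal PyTorch-style sketch of the two steps the abstract names: retrieving target samples as a proxy for the unavailable source domain, then adversarially aligning the remaining target features against that proxy set. The entropy-based retrieval rule, the 20% retrieval ratio, the network shapes, and all hyperparameters are illustrative assumptions standing in for the paper's exact support-intersection procedure, not the authors' implementation (see the linked repository for that).

```python
# Illustrative sketch of source-free adversarial adaptation with proxy data.
# Step 1: retrieve "source-like" target samples with the frozen source model.
# Step 2: align the remaining target features against the proxy features.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Frozen source hypothesis: feature extractor + classifier. Note that, as in
# the paper's setting, no restriction (e.g., weight normalization) is assumed.
feat = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
clf = nn.Linear(32, 10)
for p in list(feat.parameters()) + list(clf.parameters()):
    p.requires_grad_(False)

x_t = torch.randn(512, 64)  # unlabeled target data (random stand-in)

# Step 1: assume low prediction entropy under the source model marks target
# points lying near the source support; keep the most confident 20% as proxy.
with torch.no_grad():
    probs = F.softmax(clf(feat(x_t)), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
proxy_mask = entropy <= entropy.quantile(0.2)
x_proxy, x_rest = x_t[proxy_mask], x_t[~proxy_mask]

# Step 2: adapt a trainable copy of the feature extractor so a domain
# discriminator cannot tell remaining target features from proxy features.
feat_t = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
feat_t.load_state_dict(feat.state_dict())
disc = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_f = torch.optim.Adam(feat_t.parameters(), lr=1e-3)

for step in range(100):
    f_proxy = feat(x_proxy)   # proxy ("source-like") features, frozen model
    f_rest = feat_t(x_rest)   # adaptable features for the remaining target
    # Discriminator update: proxy -> 1, remaining target -> 0.
    d_loss = F.binary_cross_entropy_with_logits(
        disc(f_proxy), torch.ones(len(x_proxy), 1)
    ) + F.binary_cross_entropy_with_logits(
        disc(f_rest.detach()), torch.zeros(len(x_rest), 1)
    )
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Feature-extractor update: fool the discriminator on the remaining data.
    g_loss = F.binary_cross_entropy_with_logits(
        disc(feat_t(x_rest)), torch.ones(len(x_rest), 1)
    )
    opt_f.zero_grad()
    g_loss.backward()
    opt_f.step()
```

In this sketch the proxy set plays the role of the source data in a standard adversarial-alignment objective, which is why no source samples and no restricted source training are needed.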
Pages: 8