A Comprehensive Survey on Source-Free Domain Adaptation

Cited by: 47
Authors
Li, Jingjing [1 ,2 ]
Yu, Zhiqi [3 ]
Du, Zhekai [3 ]
Zhu, Lei [4 ]
Shen, Heng Tao [3 ]
Affiliations
[1] Univ Elect Sci & Technol China UESTC, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Shenzhen Inst Adv Study, Chengdu 611731, Peoples R China
[3] Univ Elect Sci & Technol China UESTC, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[4] Tongji Univ, Sch Elect & Informat Engn, Shanghai 200070, Peoples R China
Fund
National Natural Science Foundation of China;
Keywords
Training; Surveys; Transfer learning; Adaptation models; Task analysis; Data models; Data privacy; Computer vision; data-free learning; domain adaptation; transfer learning;
DOI
10.1109/TPAMI.2024.3370978
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Over the past decade, domain adaptation has become a widely studied branch of transfer learning that aims to improve performance on target domains by leveraging knowledge from a source domain. Conventional domain adaptation methods often assume simultaneous access to both source and target domain data, which may be infeasible in real-world scenarios due to privacy and confidentiality concerns. As a result, Source-Free Domain Adaptation (SFDA), which adapts to the target domain using only a source-trained model and unlabeled target data, has drawn growing attention in recent years. Despite the rapid growth of SFDA work, no timely and comprehensive survey of the field has appeared. To fill this gap, we provide a comprehensive survey of recent advances in SFDA and organize them into a unified categorization scheme based on the framework of transfer learning. Instead of presenting each approach independently, we modularize the components of each method to more clearly illustrate their relationships and mechanisms in light of each method's composite properties. Furthermore, we compare the results of more than 30 representative SFDA methods on three popular classification benchmarks, namely Office-31, Office-Home, and VisDA, to explore the effectiveness of various technical routes and the combination effects among them. Additionally, we briefly introduce the applications of SFDA and related fields. Drawing on our analysis of the challenges confronting SFDA, we offer insights into future research directions and potential settings.
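To make the SFDA setting described above concrete, the following is a minimal, self-contained sketch (not taken from the surveyed paper) of one common ingredient of source-free adaptation: a gradient step that minimizes the mean prediction entropy of a source-trained linear head on unlabeled target features. All names (`entropy_min_step`, the synthetic `W` and `X_target`) are illustrative assumptions; real SFDA methods combine such self-training objectives with pseudo-labeling, diversity terms, and deep backbones.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable row-wise softmax over logits.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(W, X):
    # Mean Shannon entropy of the model's predictions on X.
    P = softmax(X @ W)
    return float(-(P * np.log(P + 1e-12)).sum(axis=1).mean())

def entropy_min_step(W, X, lr=0.05):
    """One gradient-descent step on the mean prediction entropy,
    using only the model weights W and unlabeled target data X."""
    n = X.shape[0]
    P = softmax(X @ W)
    logP = np.log(P + 1e-12)
    row = (P * logP).sum(axis=1, keepdims=True)
    # d(entropy)/d(logits) = P * (sum_j P_j log P_j - log P), per sample.
    grad_logits = P * (row - logP)
    grad_W = X.T @ grad_logits / n
    return W - lr * grad_W

# "Source-trained" linear head and unlabeled target features (synthetic).
W = rng.normal(size=(8, 3))
X_target = rng.normal(size=(64, 8))
W_adapted = entropy_min_step(W, X_target)
```

After the step, predictions on the unlabeled target batch are more confident (lower entropy), which is the intuition behind entropy-minimization-based SFDA: the source data never appears in the update.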
Pages: 5743-5762 (20 pages)