Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation

Cited by: 27
Authors
Lee, JoonHo [1 ,2 ]
Lee, Gyemin [2 ]
Affiliations
[1] Samsung SDS Technol Res, Machine Learning Res Ctr, Seoul, South Korea
[2] Seoul Natl Univ Sci & Technol, Dept Elect & IT Media Engn, Seoul, South Korea
Keywords
Unsupervised domain adaptation; Source-free domain adaptation; Uncertainty; Self-training; Image classification
DOI
10.1016/j.neunet.2023.02.009
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Most unsupervised domain adaptation (UDA) methods assume that labeled source images are available during model adaptation. However, this assumption is often infeasible owing to confidentiality issues or memory constraints on mobile devices. Some recently developed approaches do not require source images during adaptation, but they show limited performance on perturbed images. To address these problems, we propose a novel source-free UDA method that uses only a pre-trained source model and unlabeled target images. Our method captures the aleatoric uncertainty by incorporating data augmentation and trains the feature generator with two consistency objectives. The feature generator is encouraged to learn consistent visual features away from the decision boundaries of the head classifier. Thus, the adapted model becomes more robust to image perturbations. Inspired by self-supervised learning, our method promotes inter-space alignment between the prediction space and the feature space while incorporating intra-space consistency within the feature space to reduce the domain gap between the source and target domains. We also consider epistemic uncertainty to boost the model adaptation performance. Extensive experiments on popular UDA benchmark datasets demonstrate that the proposed source-free method is comparable or even superior to vanilla UDA methods. Moreover, the adapted models show more robust results when input images are perturbed. (c) 2023 Elsevier Ltd. All rights reserved.
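To make the two consistency objectives described above concrete, the following is a minimal, hypothetical PyTorch-style sketch, not the authors' released code: it assumes a feature generator g and a head classifier h, and computes an intra-space loss (feature agreement between two augmented views) and an inter-space loss (the strong view's prediction pulled toward the detached weak-view prediction). The function name, the weak/strong augmentation pairing, and the exact loss forms are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def consistency_losses(g: nn.Module, h: nn.Module,
                       x_weak: torch.Tensor, x_strong: torch.Tensor):
    # Two augmented views of the same unlabeled target batch; data
    # augmentation is how the method exposes aleatoric uncertainty.
    f_w, f_s = g(x_weak), g(x_strong)

    # Intra-space consistency: features of the two perturbed views of
    # the same image should agree.
    intra = 1.0 - F.cosine_similarity(f_w, f_s, dim=1).mean()

    # Inter-space alignment: predictions through the head classifier
    # should also agree, pushing features away from decision boundaries.
    p_w = F.softmax(h(f_w), dim=1)            # weak view as soft target
    log_p_s = F.log_softmax(h(f_s), dim=1)    # strong view to be aligned
    inter = F.kl_div(log_p_s, p_w.detach(), reduction="batchmean")

    # A per-sample weight from an epistemic-uncertainty estimate (e.g.,
    # MC-dropout variance) could down-weight unreliable samples; omitted
    # here for brevity.
    return intra, inter

In a training loop one would sum the two terms with assumed weights and update only the feature generator g, keeping the source-trained head h fixed, which matches the source-free setting the abstract describes.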
Pages: 682-692
Page count: 11