Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation

Cited by: 27
Authors
Lee, JoonHo [1 ,2 ]
Lee, Gyemin [2 ]
Affiliations
[1] Samsung SDS Technol Res, Machine Learning Res Ctr, Seoul, South Korea
[2] Seoul Natl Univ Sci & Technol, Dept Elect & IT Media Engn, Seoul, South Korea
Keywords
Unsupervised domain adaptation; Source-free domain adaptation; Uncertainty; Self-training; Image classification
DOI
10.1016/j.neunet.2023.02.009
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Most unsupervised domain adaptation (UDA) methods assume that labeled source images are available during model adaptation. However, this assumption is often infeasible owing to confidentiality issues or memory constraints on mobile devices. Some recently developed approaches do not require source images during adaptation, but they show limited performance on perturbed images. To address these problems, we propose a novel source-free UDA method that uses only a pre-trained source model and unlabeled target images. Our method captures the aleatoric uncertainty by incorporating data augmentation and trains the feature generator with two consistency objectives. The feature generator is encouraged to learn consistent visual features away from the decision boundaries of the head classifier. Thus, the adapted model becomes more robust to image perturbations. Inspired by self-supervised learning, our method promotes inter-space alignment between the prediction space and the feature space while incorporating intra-space consistency within the feature space to reduce the domain gap between the source and target domains. We also consider epistemic uncertainty to boost the model adaptation performance. Extensive experiments on popular UDA benchmark datasets demonstrate that the proposed source-free method is comparable or even superior to vanilla UDA methods. Moreover, the adapted models show more robust results when input images are perturbed. (c) 2023 Elsevier Ltd. All rights reserved.
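To make the abstract's two consistency objectives concrete, below is a minimal PyTorch-style sketch of how a weakly/strongly augmented pair of target images might feed an intra-space (feature-to-feature) term and an inter-space (feature-to-prediction) term. The module names (feat_gen, classifier) and the concrete loss forms (cosine distance, symmetric KL) are illustrative assumptions, not the authors' released implementation.

```python
# A hedged sketch of the consistency objectives described in the abstract.
# All names and loss forms here are assumptions for illustration only.
import torch.nn.functional as F

def consistency_losses(feat_gen, classifier, x_weak, x_strong):
    """Consistency between two augmented views of the same unlabeled
    target batch: intra-space (within the feature space) and
    inter-space (agreement of the predictions the two feature views
    induce through the source-trained head classifier)."""
    f_w = feat_gen(x_weak)    # features of the weakly augmented view
    f_s = feat_gen(x_strong)  # features of the strongly augmented view
    p_w = F.softmax(classifier(f_w), dim=1)  # class posteriors per view
    p_s = F.softmax(classifier(f_s), dim=1)

    # Intra-space consistency: pull the two feature views together,
    # here via cosine distance.
    intra = 1.0 - F.cosine_similarity(f_w, f_s, dim=1).mean()

    # Inter-space alignment: make the predictions induced by the two
    # feature views agree, here via a symmetric KL divergence.
    inter = 0.5 * (F.kl_div(p_s.log(), p_w, reduction="batchmean")
                   + F.kl_div(p_w.log(), p_s, reduction="batchmean"))
    return intra, inter
```

In line with the abstract, only the feature generator would be updated with these losses while the source-trained head classifier stays frozen, so the learned features move away from the head classifier's decision boundaries and the adapted model becomes more robust to image perturbations.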
Pages: 682-692
Page count: 11
Related Papers
50 items in total (entries [21]-[30] shown)
  • [21] Luo, Junyu; Gu, Yiyang; Luo, Xiao; Ju, Wei; Xiao, Zhiping; Zhao, Yusheng; Yuan, Jingyang; Zhang, Ming. GALA: Graph Diffusion-Based Alignment With Jigsaw for Source-Free Domain Adaptation. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(12): 9038-9051.
  • [22] Tian, Qing; Zhao, Mengna. Generation, division and training: A promising method for source-free unsupervised domain adaptation. NEURAL NETWORKS, 2024, 172.
  • [23] Cui, Chaoran; Meng, Fan'an; Zhang, Chunyun; Liu, Ziyi; Zhu, Lei; Gong, Shuai; Lin, Xue. Adversarial Source Generation for Source-Free Domain Adaptation. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34(6): 4887-4898.
  • [24] Wang, Zhirui; Yang, Liu; Han, Yahong. Robust source-free domain adaptation with anti-adversarial samples training. NEUROCOMPUTING, 2025, 614.
  • [25] Khurana, Sameer; Moritz, Niko; Hori, Takaaki; Le Roux, Jonathan. Unsupervised Domain Adaptation for Speech Recognition via Uncertainty Driven Self-Training. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021: 6553-6557.
  • [26] Ahmed, Waqar; Morerio, Pietro; Murino, Vittorio. Continual Source-Free Unsupervised Domain Adaptation. IMAGE ANALYSIS AND PROCESSING, ICIAP 2023, PT I, 2023, 14233: 14-25.
  • [27] Pan, Zicheng; Yu, Xiaohan; Zhang, Weichuan; Gao, Yongsheng. Overcoming learning bias via Prototypical Feature Compensation for source-free domain adaptation. PATTERN RECOGNITION, 2025, 158.
  • [28] Bateson, Mathilde; Kervadec, Hoel; Dolz, Jose; Lombaert, Herve; Ben Ayed, Ismail. Source-free domain adaptation for image segmentation. MEDICAL IMAGE ANALYSIS, 2022, 82.
  • [29] Zhan, Qianshan; Zeng, Xiao-Jun; Wang, Qian. Reducing bias in source-free unsupervised domain adaptation for regression. NEURAL NETWORKS, 2025, 185.
  • [30] Yan, Lan; Zheng, Wenbo; Li, Kenli. Source-free domain adaptive person search. PATTERN RECOGNITION, 2025, 161.