Uncertainty-aware pseudo-label filtering for source-free unsupervised domain adaptation

Cited by: 1
Authors
Chen, Xi [1 ]
Yang, Haosen [2 ]
Zhang, Huicong [1 ]
Yao, Hongxun [1 ]
Zhu, Xiatian [2 ]
Affiliations
[1] Harbin Inst Technol, Fac Comp, Weihai, Peoples R China
[2] Univ Surrey, Surrey, England
Funding
National Key R&D Program of China;
Keywords
Source-free unsupervised domain adaptation; Pseudo-label filtering; Uncertainty-aware; Contrastive learning;
DOI
10.1016/j.neucom.2023.127190
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Source-free unsupervised domain adaptation (SFUDA) aims to enable the utilization of a pre-trained source model in an unlabeled target domain without access to source data. Self-training is one way to solve SFUDA, where confident target samples are iteratively selected as pseudo-labeled samples to guide target model learning. However, prior heuristic noisy-pseudo-label filtering methods all introduce extra models, which are sensitive to model assumptions and may introduce additional errors or mislabeling. In this work, we propose a method called Uncertainty-aware Pseudo-label-filtering Adaptation (UPA) to efficiently address this issue in a coarse-to-fine manner. Specifically, we first introduce a sample selection module named Adaptive Pseudo-label Selection (APS), which is responsible for filtering noisy pseudo labels. APS estimates each sample's uncertainty with a simple scheme that aggregates knowledge from neighboring samples, and confident samples are selected as clean pseudo-labeled samples. Additionally, we incorporate Class-Aware Contrastive Learning (CACL) to mitigate memorization of pseudo-label noise by learning robust pair-wise representations supervised by pseudo labels. Through extensive experiments on three widely used benchmarks, we demonstrate that our method achieves performance on par with state-of-the-art SFUDA methods.
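To make the neighbor-aggregation idea concrete, the following is a minimal sketch of how uncertainty-aware pseudo-label filtering of this kind could look. It is not the paper's actual APS algorithm; the function name, the k-nearest-neighbor choice, the mean aggregation, and the confidence threshold are all illustrative assumptions.

```python
import numpy as np

def select_confident_pseudo_labels(features, probs, k=4, threshold=0.5):
    """Hypothetical sketch of neighbor-aggregated pseudo-label filtering.

    features: (N, D) L2-normalized target-domain features
    probs:    (N, C) softmax predictions from the source model
    Returns (indices of "clean" samples, pseudo-labels for all samples).
    """
    # Cosine similarity between all pairs of target samples.
    sim = features @ features.T
    np.fill_diagonal(sim, -np.inf)           # exclude self-matches
    # Indices of the k most similar neighbors per sample.
    nn_idx = np.argsort(-sim, axis=1)[:, :k]
    # Aggregate neighbor predictions (simple mean as the aggregation).
    agg = probs[nn_idx].mean(axis=1)         # (N, C)
    # Pseudo-label = the model's own argmax class per sample.
    pseudo = probs.argmax(axis=1)
    # Keep samples whose neighbors also assign high probability to the
    # pseudo class: their pseudo-labels are treated as clean.
    clean = agg[np.arange(len(probs)), pseudo] >= threshold
    return np.flatnonzero(clean), pseudo
```

A sample whose prediction disagrees with its feature-space neighbors gets a low aggregated score on its pseudo class and is filtered out, which is the coarse filtering behavior the abstract attributes to APS.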
Pages: 10