Confidence-based Visual Dispersal for Few-shot Unsupervised Domain Adaptation

Cited by: 8
Authors
Xiong, Yizhe [1 ,2 ,3 ]
Chen, Hui [2 ]
Lin, Zijia [1 ]
Zhao, Sicheng [2 ]
Ding, Guiguang [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Sch Software, Beijing, Peoples R China
[2] Beijing Natl Res Ctr Informat Sci & Technol BNRist, Beijing, Peoples R China
[3] Hangzhou Zhuoxi Inst Brain & Intelligence, Hangzhou, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
DOI
10.1109/ICCV51070.2023.01067
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised domain adaptation aims to transfer knowledge from a fully labeled source domain to an unlabeled target domain. However, in real-world scenarios, providing abundant labeled data even in the source domain can be infeasible due to the difficulty and high expense of annotation. To address this issue, recent works consider Few-shot Unsupervised Domain Adaptation (FUDA), where only a few source samples are labeled, and conduct knowledge transfer via self-supervised learning methods. Yet existing methods generally overlook that the sparse-label setting hinders learning reliable source knowledge for transfer. Additionally, target samples differ in learning difficulty, but this difference is ignored, leaving hard target samples poorly classified. To tackle both deficiencies, in this paper we propose a novel Confidence-based Visual Dispersal Transfer learning method (C-VisDiT) for FUDA. Specifically, C-VisDiT consists of a cross-domain visual dispersal strategy that transfers only high-confidence source knowledge for model adaptation, and an intra-domain visual dispersal strategy that guides the learning of hard target samples with easy ones. We conduct extensive experiments on the Office-31, Office-Home, VisDA-C, and DomainNet benchmark datasets, and the results demonstrate that the proposed C-VisDiT significantly outperforms state-of-the-art FUDA methods. Our code is available at https://github.com/Bostoncake/C-VisDiT.
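The two strategies described in the abstract can be illustrated with a minimal PyTorch sketch of confidence-based sample handling: select high-confidence samples via the maximum softmax probability, then pull each hard (low-confidence) sample toward its nearest easy (high-confidence) neighbor in feature space. Everything below is an assumption for illustration only: the function names (split_by_confidence, guide_hard_with_easy), the 0.8 threshold, and the cosine-matching loss are hypothetical stand-ins, not the authors' actual C-VisDiT implementation, which is available in the linked repository.

    import torch
    import torch.nn.functional as F

    def split_by_confidence(logits, threshold=0.8):
        """Split a batch into high-/low-confidence index sets, using the
        maximum softmax probability as the confidence score.
        (Hypothetical helper; threshold is an assumed value.)"""
        conf = F.softmax(logits, dim=1).max(dim=1).values
        high = (conf >= threshold).nonzero(as_tuple=True)[0]
        low = (conf < threshold).nonzero(as_tuple=True)[0]
        return high, low

    def guide_hard_with_easy(feats, high, low):
        """Pull each low-confidence ('hard') feature toward its nearest
        high-confidence ('easy') neighbor via a cosine-matching loss.
        (Hypothetical stand-in for the intra-domain dispersal idea.)"""
        if high.numel() == 0 or low.numel() == 0:
            return feats.new_zeros(())          # nothing to match in this batch
        easy = F.normalize(feats[high], dim=1)
        hard = F.normalize(feats[low], dim=1)
        sim = hard @ easy.t()                   # [n_hard, n_easy] cosine similarities
        best = sim.max(dim=1).values            # similarity to the closest easy sample
        return (1.0 - best).mean()              # 0 when every hard sample matches an easy one

    # Toy usage: 8 samples, 5 classes, 16-dim features.
    logits = torch.randn(8, 5)
    feats = torch.randn(8, 16)
    high, low = split_by_confidence(logits, threshold=0.5)
    loss = guide_hard_with_easy(feats, high, low)

The same split can gate the cross-domain side as well, e.g. by weighting or filtering the adaptation loss so that only the high-confidence source indices contribute.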
Pages: 11587-11597
Page count: 11