Confidence-based Visual Dispersal for Few-shot Unsupervised Domain Adaptation

Cited by: 8
Authors
Xiong, Yizhe [1 ,2 ,3 ]
Chen, Hui [2 ]
Lin, Zijia [1 ]
Zhao, Sicheng [2 ]
Ding, Guiguang [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Sch Software, Beijing, Peoples R China
[2] Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Beijing, Peoples R China
[3] Hangzhou Zhuoxi Inst Brain & Intelligence, Hangzhou, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China (NSFC)
DOI
10.1109/ICCV51070.2023.01067
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised domain adaptation aims to transfer knowledge from a fully-labeled source domain to an unlabeled target domain. However, in real-world scenarios, providing abundant labeled data even in the source domain can be infeasible due to the difficulty and high expense of annotation. To address this issue, recent works consider the Few-shot Unsupervised Domain Adaptation (FUDA) setting, where only a few source samples are labeled, and conduct knowledge transfer via self-supervised learning methods. Yet existing methods generally overlook that the sparse-label setting hinders learning reliable source knowledge for transfer. Additionally, target samples differ in learning difficulty, but this difference is ignored, leaving hard target samples poorly classified. To tackle both deficiencies, in this paper we propose a novel Confidence-based Visual Dispersal Transfer learning method (C-VisDiT) for FUDA. Specifically, C-VisDiT consists of a cross-domain visual dispersal strategy that transfers only high-confidence source knowledge for model adaptation, and an intra-domain visual dispersal strategy that guides the learning of hard target samples with easy ones. We conduct extensive experiments on the Office-31, Office-Home, VisDA-C, and DomainNet benchmark datasets, and the results demonstrate that the proposed C-VisDiT significantly outperforms state-of-the-art FUDA methods. Our code is available at https://github.com/Bostoncake/C-VisDiT.
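The abstract describes the two dispersal strategies only at a high level. As a rough illustration, the sketch below shows one plausible way the two ideas could be instantiated: confidence-thresholded selection of source predictions (cross-domain) and mixup-style guidance of hard target samples by easy ones (intra-domain). All function names, the 0.9 threshold, and the Beta-distributed mixing are assumptions made for illustration, not the authors' implementation; see the linked repository for the actual method.

```python
# Illustrative sketch only: helper names, the 0.9 threshold, and the
# Beta-distributed mixing are assumptions, not the authors' actual code.
import torch
import torch.nn.functional as F

def select_high_confidence(logits, threshold=0.9):
    """Cross-domain idea (assumed): keep only samples whose top softmax
    probability exceeds a threshold, so that only reliable source
    knowledge is transferred for model adaptation."""
    probs = F.softmax(logits, dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    mask = confidence >= threshold
    return mask, pseudo_labels

def guide_hard_with_easy(easy_feats, hard_feats, alpha=0.75):
    """Intra-domain idea (assumed): interpolate hard target features
    toward easy (high-confidence) ones, mixup-style, so hard samples
    are pulled toward well-classified neighbors."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    lam = torch.max(lam, 1.0 - lam)  # keep the easy sample dominant
    return lam * easy_feats + (1.0 - lam) * hard_feats

if __name__ == "__main__":
    logits = torch.randn(8, 31)  # e.g., 31 classes as in Office-31
    mask, labels = select_high_confidence(logits)
    print(f"kept {int(mask.sum())}/8 source samples as high-confidence")
    mixed = guide_hard_with_easy(torch.randn(4, 256), torch.randn(4, 256))
    print(mixed.shape)  # torch.Size([4, 256])
```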
Pages: 11587-11597 (11 pages)