SelectNAdapt: Support Set Selection for Few-Shot Domain Adaptation

Cited by: 1
Authors
Dawoud, Youssef [1 ]
Carneiro, Gustavo [2 ]
Belagiannis, Vasileios [1 ]
Affiliations
[1] Friedrich Alexander Univ Erlangen Nurnberg, Erlangen, Germany
[2] Univ Surrey, Guildford, England
Funding
Australian Research Council
DOI
10.1109/ICCVW60793.2023.00104
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The generalisation of deep neural networks becomes vulnerable when a distribution shift is encountered between train (source) and test (target) domain data. Few-shot domain adaptation mitigates this issue by adapting deep neural networks pre-trained on the source domain to the target domain using a randomly selected and annotated support set from the target domain. This paper argues that random selection of the support set leaves room for improvement when adapting pre-trained source models to the target domain. Instead, we propose SelectNAdapt, an algorithm that curates the selection of the target domain samples, which are then annotated and included in the support set. In particular, for the K-shot adaptation problem, we first leverage self-supervision to learn features of the target domain data. Then, we propose a per-class clustering scheme over the learned target domain features and select K representative target samples using a distance-based scoring function. Finally, we make our selection setup practical by relying on pseudo-labels to cluster semantically similar target domain samples. Our experiments show promising results on three few-shot domain adaptation benchmarks for image recognition compared to related approaches and the standard random selection.
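The per-class selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name is hypothetical, the single-centroid "clustering" per class and the nearest-to-centroid score stand in for the paper's per-class clustering scheme and distance-based scoring function, and features and pseudo-labels are assumed to be precomputed (e.g. from a self-supervised encoder).

```python
import numpy as np

def select_support_set(features, pseudo_labels, k_per_class=1):
    """Select k_per_class representative target samples per (pseudo-)class.

    Illustrative stand-in for SelectNAdapt's selection stage: each class
    is summarised by one centroid, and members are scored by Euclidean
    distance to it; the closest samples are kept for annotation.
    """
    selected = []
    for c in np.unique(pseudo_labels):
        idx = np.where(pseudo_labels == c)[0]       # indices of class c
        class_feats = features[idx]
        centroid = class_feats.mean(axis=0)         # one-cluster stand-in
        dists = np.linalg.norm(class_feats - centroid, axis=1)
        keep = np.argsort(dists)[:k_per_class]      # nearest = representative
        selected.extend(idx[keep].tolist())
    return selected
```

In the full pipeline, the selected indices would be sent for annotation and used as the K-shot support set for adapting the pre-trained source model; here the clustering and scoring are deliberately simplified to one centroid and one distance per class.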
Pages: 973-982
Page count: 10
Related Papers
50 records in total; first 10 shown
  • [1] Few-Shot Adversarial Domain Adaptation
    Motiian, Saeid
    Jones, Quinn
    Iranmanesh, Seyed Mehdi
    Doretto, Gianfranco
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [2] Marginalized Augmented Few-Shot Domain Adaptation
    Jing, Taotao
    Xia, Haifeng
    Hamm, Jihun
    Ding, Zhengming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 12459 - 12469
  • [3] Few-Shot Domain Adaptation with Polymorphic Transformers
    Li, Shaohua
    Sui, Xiuchao
    Fu, Jie
    Fu, Huazhu
    Luo, Xiangde
    Feng, Yangqin
    Xu, Xinxing
    Liu, Yong
    Ting, Daniel S. W.
    Goh, Rick Siow Mong
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT II, 2021, 12902 : 330 - 340
  • [4] Domain Adaptation Algorithm for Few-Shot Classification Task
    Dai H.
    Hao X.-T.
    Sheng L.-J.
    Miao Q.-G.
    Jisuanji Xuebao/Chinese Journal of Computers, 2022, 45 (05): : 935 - 950
  • [5] VARIATIONAL FEATURE DISENTANGLEMENT FOR FEW-SHOT DOMAIN ADAPTATION
    Wang, Weiduo
    Gu, Yun
    Yang, Jie
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 2860 - 2864
  • [6] Domain consensual contrastive learning for few-shot universal domain adaptation
    Liao, Haojin
    Wang, Qiang
    Zhao, Sicheng
    Xing, Tengfei
    Hu, Runbo
    APPLIED INTELLIGENCE, 2023, 53 (22) : 27191 - 27206
  • [8] Domain Re-Modulation for Few-Shot Generative Domain Adaptation
    Wu, Yi
    Li, Ziqiang
    Wang, Chaoyue
    Zheng, Heliang
    Zhao, Shanshan
    Li, Bin
    Tao, Dacheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [9] Few-Shot Domain Adaptation for Identification of Clinical Image in Dermatology
    Jing H.
    Zhang Q.
    Chen M.
    Zhang L.
    Li Z.
    Zhu J.
    Li Z.
    Hsi-An Chiao Tung Ta Hsueh/Journal of Xi'an Jiaotong University, 2020, 54 (09): : 142 - 148 and 156
  • [10] Augmenting and Aligning Snippets for Few-Shot Video Domain Adaptation
    Xu, Yuecong
    Yang, Jianfei
    Zhou, Yunjiao
    Chen, Zhenghua
    Wu, Min
    Li, Xiaoli
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 13399 - 13410