The general framework for few-shot learning by kernel HyperNetworks

Cited by: 3
Authors
Sendera, Marcin [1 ,3 ]
Przewiezlikowski, Marcin [1 ,3 ,4 ]
Miksa, Jan [1 ]
Rajski, Mateusz [1 ]
Karanowski, Konrad [2 ]
Zieba, Maciej
Tabor, Jacek [1 ]
Spurek, Przemyslaw [1 ]
Affiliations
[1] Jagiellonian Univ, Fac Math & Comp Sci, Lojasiewicza 6, Krakow, Poland
[2] Wroclaw Univ Sci & Technol, Dept Artificial Intelligence, Wyb Wyspianskiego 27, Wroclaw, Poland
[3] Jagiellonian Univ, Doctoral Sch Exact & Nat Sci, Lojasiewicza 11, Krakow, Poland
[4] IDEAS NCBR, Chmielna 69, Warsaw, Poland
Keywords
Few-shot learning; Meta-learning; HyperNetworks; Kernel methods; Bayesian neural networks;
DOI
10.1007/s00138-023-01403-4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-shot models aim to make predictions using a minimal number of labeled examples from a given task. The main challenge in this area is the one-shot setting, where only one example represents each class. We propose a general framework for few-shot learning via kernel HyperNetworks: a fusion of the kernel and hypernetwork paradigms. First, we introduce a classical realization of this framework, dubbed HyperShot. In contrast to reference approaches that apply gradient-based adjustment of the parameters, our models switch the classification module's parameters depending on the task's embedding. In practice, we utilize a hypernetwork that takes aggregated information from the support data and returns classifier parameters tailored to the problem at hand. Moreover, we introduce a kernel-based representation of the support examples that is delivered to the hypernetwork to create the parameters of the classification module. Consequently, we rely on relations between the support examples' embeddings rather than on the backbone model's direct feature values. Thanks to this approach, our model can adapt to highly diverse tasks. While this method obtains very good results, it is limited by typical problems such as poorly quantified uncertainty due to the limited data size. We further show that incorporating Bayesian neural networks into our general framework, an approach we call BayesHyperShot, solves this issue.
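The idea described in the abstract — a hypernetwork that consumes a kernel-based representation of the support set and emits task-specific classifier parameters — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the RBF kernel choice, the two-layer hypernetwork, and all dimensions and names (`KernelHyperNetwork`, `rbf_kernel`, `predict`) are illustrative assumptions, shown only to make the data flow concrete.

```python
import numpy as np


def rbf_kernel(X, Y, gamma=0.5):
    """Pairwise RBF kernel between embedding sets X (n, d) and Y (m, d)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


class KernelHyperNetwork:
    """Toy hypernetwork: maps the flattened support-support kernel matrix
    (the task representation) to the weights of a linear classifier that
    acts on query-support kernel values."""

    def __init__(self, n_support, n_classes, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = n_support * n_support                # flattened K_ss
        out_dim = n_support * n_classes + n_classes   # classifier W and b
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, out_dim))
        self.n_support, self.n_classes = n_support, n_classes

    def __call__(self, K_ss):
        h = np.tanh(K_ss.ravel() @ self.W1)
        theta = h @ self.W2                           # task-specific params
        split = self.n_support * self.n_classes
        W = theta[:split].reshape(self.n_support, self.n_classes)
        b = theta[split:]
        return W, b


def predict(hyper, support_emb, query_emb):
    K_ss = rbf_kernel(support_emb, support_emb)  # relations between support embeddings
    W, b = hyper(K_ss)                           # classifier generated per task
    K_qs = rbf_kernel(query_emb, support_emb)    # queries represented via the kernel
    logits = K_qs @ W + b
    return logits.argmax(axis=1)
```

Note that the classifier never sees raw backbone features: both the hypernetwork input and the query representation are kernel values relative to the support set, which is the key point the abstract makes about adapting across highly diverse tasks.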
Pages: 16