Small-data image classification via drop-in variational autoencoder

Authors
Babak Mahdian [1]
Radim Nedbal [2]
Affiliations
[1] The Czech Academy of Sciences, Institute of Information Theory and Automation
[2] Istituto Italiano di Tecnologia, Center for Translational Neurophysiology
Keywords
Small data classification; Variational autoencoder; Neural Tangent Kernel; Supervised learning
DOI
10.1007/s11760-025-04376-1
Abstract
It is unclear whether generative approaches can match state-of-the-art supervised classification performance in high-dimensional feature spaces with extremely small datasets. In this paper, we propose a drop-in variational autoencoder (VAE) for supervised learning from an extremely small training set (i.e., $$n = 1, \dots, 5$$ images per class). Drop-in classifiers are a common alternative when traditional approaches to few-shot learning cannot be used. Classification is defined as a posterior probability density function and approximated by the variational principle. We perform experiments on a wide variety of deep feature representations extracted from different layers of popular convolutional neural network (CNN) architectures. We also benchmark against modern classifiers, including the Neural Tangent Kernel (NTK), a Support Vector Machine (SVM) with the NTK kernel, and the Neural Network Gaussian Process (NNGP). The results indicate that the drop-in VAE classifier outperforms all compared classifiers in the extremely small data regime.
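The abstract does not reproduce the paper's exact formulation, but one standard way to realize such a generative classifier (an assumption here, not the authors' stated construction) is Bayes' rule over per-class density models, with each intractable class-conditional log-likelihood replaced by its variational lower bound (ELBO):

$$p(y=c \mid x) = \frac{p(x \mid y=c)\,p(y=c)}{\sum_{c'} p(x \mid y=c')\,p(y=c')}, \qquad \log p(x \mid y=c) \ \ge\ \mathrm{ELBO}_c(x).$$

The sketch below implements this scheme on pre-extracted CNN feature vectors: one small VAE per class, trained on that class's few examples, with prediction by the highest per-class ELBO under a uniform prior. All names, dimensions, and hyperparameters (FeatureVAE, d_feat=512, and so on) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a per-class VAE generative classifier on pre-extracted
# CNN features. Hypothetical architecture and hyperparameters; not the
# authors' exact drop-in VAE.
import torch
import torch.nn as nn

class FeatureVAE(nn.Module):
    """Small VAE over D-dimensional feature vectors."""
    def __init__(self, d_feat=512, d_latent=16, d_hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_feat, d_hidden), nn.ReLU())
        self.mu = nn.Linear(d_hidden, d_latent)
        self.logvar = nn.Linear(d_hidden, d_latent)
        self.dec = nn.Sequential(nn.Linear(d_latent, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_feat))

    def elbo(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = self.dec(z)
        # Unit-variance Gaussian reconstruction term, up to an additive
        # constant that is identical for every class, plus analytic KL to N(0, I).
        rec = -0.5 * ((x - recon) ** 2).sum(dim=1)
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)
        return rec - kl  # per-example ELBO, a lower bound on log p(x)

def fit_class_vaes(feats_by_class, epochs=200, lr=1e-3):
    """Fit one VAE per class on its (few) feature vectors."""
    vaes = {}
    for label, x in feats_by_class.items():
        vae = FeatureVAE(d_feat=x.shape[1])
        opt = torch.optim.Adam(vae.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = -vae.elbo(x).mean()  # maximize the ELBO on the tiny train set
            loss.backward()
            opt.step()
        vaes[label] = vae
    return vaes

@torch.no_grad()
def predict(vaes, x):
    """Pick the class whose VAE assigns the highest ELBO (uniform class prior)."""
    labels = sorted(vaes)
    scores = torch.stack([vaes[c].elbo(x) for c in labels], dim=1)
    return [labels[i] for i in scores.argmax(dim=1).tolist()]

if __name__ == "__main__":
    # Hypothetical usage: 3 classes, 5 training feature vectors each, 512-D features.
    torch.manual_seed(0)
    train = {c: torch.randn(5, 512) + 3.0 * c for c in range(3)}  # toy, separable data
    vaes = fit_class_vaes(train)
    print(predict(vaes, torch.randn(4, 512) + 3.0))  # likely predicts class 1
```

Since the ELBO only lower-bounds each class-conditional log-likelihood, comparing ELBOs across classes is itself an approximation; it is the usual choice when the exact posterior is intractable, which matches the abstract's appeal to the variational principle.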