On the least amount of training data for a machine learning model

Cited by: 1
Authors
Zhao, Dazhi [1,2]
Hao, Yunquan [1]
Li, Weibin [3]
Tu, Zhe [4]
Affiliations
[1] Southwest Petr Univ, Sch Sci, Chengdu, Peoples R China
[2] Southwest Petr Univ, Inst Artificial Intelligence, Chengdu, Peoples R China
[3] China Aerodynam Res & Dev Ctr, Mianyang 621000, Sichuan, Peoples R China
[4] Zhejiang Wanli Univ, Coll Big Data & Software Engn, Ningbo, Peoples R China
Funding
Natural Science Foundation of Zhejiang Province;
Keywords
Machine learning; sampling theorem; frequency principle; signal recovery; neural network; Gaussian process regression; deep neural networks
DOI
10.3233/JIFS-211024
CLC Classification Number
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Whether a given amount of training data is enough for a specific task is an important question in machine learning: labeling large amounts of data is expensive, while insufficient data leads to underfitting. In this paper, the question of the least amount of training data a model needs is discussed from the perspective of the sampling theorem. If the target function of supervised learning is taken as a multi-dimensional signal and the labeled data as its samples, the training process can be regarded as a process of signal recovery. The main result is that the least amount of training data for a bandlimited task signal corresponds to a sampling rate larger than the Nyquist rate. Numerical experiments comparing the learning process with signal recovery are carried out to demonstrate this result. Based on the equivalence between supervised learning and signal recovery, spectral methods can be used to reveal the underlying mechanisms of various supervised learning models, especially "black-box" neural networks.
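As a minimal illustration of the sampling-theorem analogy invoked in the abstract (a sketch of ours, not code from the paper), the snippet below recovers a one-dimensional bandlimited signal by Whittaker-Shannon (sinc) interpolation from uniform samples; the signal f, the band limit B, and the two sampling rates are illustrative assumptions standing in for the "task signal" and the "labeled training data".

import numpy as np

# Bandlimited "task signal" with highest frequency B = 3 Hz,
# so its Nyquist rate is 2B = 6 samples per second.
B = 3.0
def f(t):
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * B * t)

def sinc_reconstruct(t_query, t_samples, y_samples, fs):
    # Whittaker-Shannon interpolation: each sample contributes a sinc
    # kernel; np.sinc is the normalized sinc, sin(pi x) / (pi x).
    return np.array([np.sum(y_samples * np.sinc(fs * (t - t_samples)))
                     for t in t_query])

t_query = np.linspace(0.5, 3.5, 500)  # interior points, away from edge effects

for fs in (4.0, 8.0):                 # below vs. above the 6 Hz Nyquist rate
    t_samples = np.arange(0.0, 4.0, 1.0 / fs)  # the "labeled training data"
    y_hat = sinc_reconstruct(t_query, t_samples, f(t_samples), fs)
    err = np.max(np.abs(y_hat - f(t_query)))
    print(f"sampling rate {fs:4.1f} Hz -> max recovery error {err:.3f}")

Only the run above the Nyquist rate should recover the signal accurately (up to truncation effects from the finite sample window), mirroring the paper's claim that the least amount of training data corresponds to a sampling rate larger than the Nyquist rate.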
Pages: 4891-4906
Page count: 16