Universal Online Learning with Unbounded Losses: Memory Is All You Need

Cited by: 0
Authors
Blanchard, Moise [1 ]
Cosson, Romain [1 ]
Hanneke, Steve [2 ]
Affiliations
[1] MIT, Cambridge, MA 02139 USA
[2] Purdue Univ, W Lafayette, IN 47907 USA
Source
INTERNATIONAL CONFERENCE ON ALGORITHMIC LEARNING THEORY, Vol. 167, 2022
Keywords
online learning; universal consistency; stochastic processes; measurable partitions; statistical learning theory; Borel measure; sequential prediction
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We resolve an open problem of [13] on the subject of universally consistent online learning with non-i.i.d. processes and unbounded losses. The notion of an optimistically universal learning rule was defined by Hanneke [13] in an effort to study learning theory under minimal assumptions. A given learning rule is said to be optimistically universal if it achieves a low long-run average loss whenever the data-generating process makes this goal achievable by some learning rule. [13] posed as an open problem whether, for every unbounded loss, the family of processes admitting universal learning is precisely the family of processes taking a finite number of distinct values almost surely. In this paper, we completely resolve this problem, showing that this is indeed the case. As a consequence, this also offers a dramatically simpler formulation of an optimistically universal learning rule for any unbounded loss: the simple memorization rule already suffices. Our proof relies on constructing random measurable partitions of the instance space and could be of independent interest for solving other open questions [14]. We extend the results to the non-realizable setting, thereby providing an optimistically universal Bayes-consistent learning rule.
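The memorization rule highlighted in the abstract admits a very short sketch. The following Python class is a hypothetical illustration (the class name, the `default` fallback prediction, and the dictionary-based memory are our own choices, not taken from the paper): on an instance seen before, the learner repeats the label it memorized; on a new instance, it outputs a default prediction, then memorizes the revealed label.

```python
class MemorizationLearner:
    """Hedged sketch of the memorization rule described in the abstract:
    predict the previously observed label for a repeated instance, and
    fall back to a fixed default prediction on unseen instances."""

    def __init__(self, default):
        self.memory = {}        # maps previously seen instances to labels
        self.default = default  # prediction used for never-seen instances

    def predict(self, x):
        # Repeat the memorized label if x was observed before.
        return self.memory.get(x, self.default)

    def update(self, x, y):
        # Memorize the revealed label for instance x.
        self.memory[x] = y


# Example online round: predict, observe the true label, then update.
learner = MemorizationLearner(default=0)
prediction = learner.predict("x1")   # unseen instance -> default 0
learner.update("x1", 3)              # true label revealed and memorized
repeat = learner.predict("x1")       # repeated instance -> memorized 3
```

The paper's point is that, for processes taking finitely many distinct values almost surely, every instance eventually repeats, so this trivial rule incurs a vanishing long-run average loss whenever any rule can.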
Pages: 21
Related References
28 items total (items 11-20 shown)
  • [11] Gyorfi, L., 2002, DISTRIBUTION FREE TH
  • [12] Gyorfi, Laszlo, 2012, Machine Learning for Financial Engineering, p. 179
  • [13] Gyorfi, Laszlo; Ottucsak, Gyorgy. Sequential prediction of unbounded stationary time series. IEEE Transactions on Information Theory, 2007, 53(5): 1866-1872
  • [14] Hanneke, Steve; Kontorovich, Aryeh; Sabato, Sivan; Weiss, Roi. Universal Bayes consistency in metric spaces. 2020 Information Theory and Applications Workshop (ITA), 2020
  • [15] Hanneke, S., 2021, Journal of Machine Learning Research, Vol. 22
  • [16] Hanneke, Steve, 2021, PMLR, p. 4642
  • [17] Haussler, D.; Littlestone, N.; Warmuth, M. K. Predicting {0,1}-functions on randomly drawn points. Information and Computation, 1994, 115(2): 248-292
  • [18] Littlestone, N., 1988, Machine Learning, Vol. 2, p. 285, DOI 10.1023/A:1022869011914
  • [19] Morvai, G., 1996, Annals of Statistics, Vol. 24, p. 370
  • [20] Morvai, G.; Kulkarni, S. R.; Nobel, A. B. Regression estimation from an individual stable sequence. Statistics, 1999, 33(2): 99-118