Algorithm Design for Online Meta-Learning with Task Boundary Detection

Cited by: 0
Authors
Sow, Daouda [1 ]
Lin, Sen [2 ]
Liang, Yingbin [1 ]
Zhang, Junshan [3 ]
Affiliations
[1] Ohio State Univ, Dept ECE, Columbus, OH 43210 USA
[2] Univ Houston, Dept CS, Houston, TX USA
[3] Univ Calif Davis, Dept ECE, Davis, CA USA
DOI: Not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Online meta-learning has recently emerged as a marriage between batch meta-learning and online learning, aiming to achieve quick adaptation to new tasks in a lifelong manner. However, most existing approaches focus on the restrictive setting where the distribution of online tasks remains fixed and task boundaries are known. In this work, we relax these assumptions and propose a novel algorithm for task-agnostic online meta-learning in non-stationary environments. More specifically, we first propose two simple but effective detection mechanisms, one for task switches and one for distribution shift, based on empirical observations; these serve as key building blocks for the online model updates in our algorithm. The task switch detection mechanism allows reuse of the best model available for the current task at hand, and the distribution shift detection mechanism differentiates the meta model update in order to preserve knowledge for in-distribution tasks and quickly learn new knowledge for out-of-distribution tasks. In particular, our online meta model updates are based only on the current data, which eliminates the need to store previous data as required by most existing methods. We further show that our algorithm achieves a sublinear task-averaged regret under mild conditions. Empirical studies on three different benchmarks clearly demonstrate the significant advantage of our algorithm over related baseline approaches.
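
The abstract does not spell out the detection rules, but the control flow it describes can be sketched as follows. This is a minimal Python sketch under assumed choices: synthetic linear-regression tasks, a loss-jump test for task switches, and a loss threshold on the meta initialization for distribution shift. The linear model, the thresholds (SWITCH_THRESH, SHIFT_THRESH), and the learning rates are illustrative placeholders, not the authors' actual mechanisms.

```python
import numpy as np

# Illustrative sketch only: a task-agnostic online meta-learning loop on
# synthetic linear-regression tasks, with threshold-based detection of task
# switches and distribution shift. The model, thresholds, and learning rates
# are assumptions made for this example, not the paper's exact algorithm.

rng = np.random.default_rng(0)
d = 5                                    # feature dimension
meta_w = np.zeros(d)                     # meta model (shared initialization)
task_w = meta_w.copy()                   # model adapted to the current task
running_loss = None                      # smoothed loss for change detection

SWITCH_THRESH, SHIFT_THRESH = 1.0, 5.0   # assumed detection thresholds
INNER_LR, META_LR_IN, META_LR_OUT = 0.05, 0.01, 0.1

def batch_loss_and_grad(w, X, y):
    """Mean-squared-error loss and gradient of a linear model on one batch."""
    err = X @ w - y
    return 0.5 * np.mean(err ** 2), X.T @ err / len(y)

def make_batch(true_w, n=32, shift=0.0):
    """Synthetic batch from one linear task; `shift` moves the input distribution."""
    X = rng.normal(loc=shift, size=(n, d))
    return X, X @ true_w + 0.1 * rng.normal(size=n)

# Task stream with unknown boundaries: the third task also shifts the inputs.
tasks = [(rng.normal(size=d), 0.0), (rng.normal(size=d), 0.0), (rng.normal(size=d), 3.0)]
stream = [make_batch(w, shift=s) for (w, s) in tasks for _ in range(50)]

for X, y in stream:
    loss, grad = batch_loss_and_grad(task_w, X, y)

    # Task-switch detection: a sharp jump in the adapted model's loss
    # suggests the task at hand has changed.
    if running_loss is not None and loss > running_loss + SWITCH_THRESH:
        # Distribution-shift detection: if even the meta initialization does
        # poorly on the new task, treat it as out-of-distribution and let the
        # meta model take a larger step; otherwise update it conservatively to
        # preserve in-distribution knowledge. Only the current batch is used.
        meta_loss, meta_grad = batch_loss_and_grad(meta_w, X, y)
        meta_lr = META_LR_OUT if meta_loss > SHIFT_THRESH else META_LR_IN

        # Reuse the meta model as the starting point for the new task,
        # then update the meta model itself.
        task_w = meta_w.copy()
        meta_w = meta_w - meta_lr * meta_grad
        loss, grad = meta_loss, meta_grad    # gradient at the reset task model

    # Inner-loop adaptation of the task model on the current batch.
    task_w = task_w - INNER_LR * grad

    # Exponential moving average of the loss used by the next detection test.
    running_loss = loss if running_loss is None else 0.9 * running_loss + 0.1 * loss
```

The sketch only mirrors the structure described in the abstract: on a detected switch the task model is reset to the meta model, and the size of the meta update depends on whether the new task looks in- or out-of-distribution, using the current batch alone.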
Pages: 458-479
Number of pages: 22