Open-world continual learning: Unifying novelty detection and continual learning

Times cited: 0
Authors
Kim, Gyuhak [1 ]
Xiao, Changnan [2 ]
Konishi, Tatsuya [1 ]
Ke, Zixuan [1 ]
Liu, Bing [1 ]
Affiliations
[1] Univ Illinois, 851 S Morgan St, Chicago, IL 60607 USA
[2] Byte Dance, Bldg 24, Zone B, 1999 Yishan Rd, Shanghai 201100, Peoples R China
Keywords
Open world learning; Continual learning; OOD detection
DOI
10.1016/j.artint.2024.104237
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
As AI agents are increasingly used in the real open world, which contains unknowns or novelties, they need the ability to (1) recognize objects, i.e., (a) classify items they have learned before and (b) detect items they have never seen or learned, and (2) learn the new items incrementally to become more and more knowledgeable and powerful. Task (1)(b) is called novelty detection or out-of-distribution (OOD) detection, and task (2) is called class incremental learning (CIL), a setting of continual learning (CL). In existing research, OOD detection and CIL are regarded as two completely different problems. This paper first provides a theoretical proof that good OOD detection for each task within the set of learned tasks (called closed-world OOD detection) is necessary for successful CIL. We show this by decomposing CIL into two sub-problems, within-task prediction (WP) and task-id prediction (TP), and by proving that TP is correlated with closed-world OOD detection. The key theoretical result is that, regardless of whether WP and OOD detection (or TP) are defined explicitly or implicitly by a CIL algorithm, good WP and good closed-world OOD detection are necessary and sufficient conditions for good CIL, which unifies novelty or OOD detection and continual learning (CIL in particular). We call traditional CIL closed-world CIL, as it does not detect future OOD data in the open world. The paper then proves that the theory can be generalized to open-world CIL, the proposed open-world continual learning, which can perform CIL in the open world and detect future or open-world OOD data. Based on the theoretical results, new CIL methods are also designed; they outperform strong baselines in both CIL accuracy and continual OOD detection by a large margin.
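The WP/TP decomposition described in the abstract can be illustrated with a small numerical sketch. This is not the authors' implementation; the task count, class count, logits, and per-task scores below are made-up values for illustration only. A CIL prediction is composed by multiplying a task-id prediction (TP) distribution, derived here from per-task in-distribution scores, with each task's within-task prediction (WP) distribution.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array of logits.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical setup: 3 learned tasks, 2 classes each (values are illustrative).
wp_logits = [np.array([2.0, 0.5]),   # within-task logits for task 0
             np.array([0.1, 1.5]),   # within-task logits for task 1
             np.array([0.3, 0.2])]   # within-task logits for task 2

# Per-task in-distribution scores (e.g., a max-logit style OOD score);
# higher means the input more likely belongs to that task.
# A softmax over these scores plays the role of TP.
ood_scores = np.array([3.0, 0.4, 0.1])
tp = softmax(ood_scores)

# CIL prediction: P(task i, class j | x) = TP(i) * WP(j | task i, x).
cil_probs = np.concatenate([tp[i] * softmax(wp_logits[i]) for i in range(3)])

pred = int(np.argmax(cil_probs))   # global class index over all tasks
task, cls = divmod(pred, 2)        # recover (task id, within-task class)
```

Because TP and each WP distribution each sum to one, the composed `cil_probs` is itself a valid distribution over all classes of all tasks, which is the sense in which good WP plus good (closed-world) task-level OOD detection yields good CIL.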
Pages: 33
Related papers
50 items in total
  • [31] Dynamic learning rates for continual unsupervised learning
Fernandez-Rodriguez, Jose David
Palomo, Esteban Jose
Ortiz-De-Lazcano-Lobato, Juan Miguel
    Ramos-Jimenez, Gonzalo
    Lopez-Rubio, Ezequiel
    INTEGRATED COMPUTER-AIDED ENGINEERING, 2023, 30 (03) : 257 - 273
  • [32] Poster: Continual Network Learning
    Di Cicco, Nicola
    Al Sadi, Amir
    Grasselli, Chiara
    Melis, Andrea
    Antichi, Gianni
    Tornatore, Massimo
    PROCEEDINGS OF THE 2023 ACM SIGCOMM 2023 CONFERENCE, SIGCOMM 2023, 2023, : 1096 - 1098
  • [33] Continual Learning with Dual Regularizations
    Han, Xuejun
    Guo, Yuhong
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, 2021, 12975 : 619 - 634
  • [34] Drifting explanations in continual learning
    Cossu, Andrea
    Spinnato, Francesco
    Guidotti, Riccardo
    Bacciu, Davide
    NEUROCOMPUTING, 2024, 597
  • [35] Open-World Learning for Traffic Scenarios Categorisation
    Balasubramanian, Lakshman
    Wurst, Jonas
    Botsch, Michael
    Deng, Ke
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8 (05): : 3506 - 3521
  • [36] Continual Representation Learning for Images with Variational Continual Auto-Encoder
    Jeon, Ik Hwan
    Shin, Soo Young
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART), VOL 2, 2019, : 367 - 373
  • [37] Open-world Learning and Application to Product Classification
    Xu, Hu
    Liu, Bing
    Shu, Lei
    Yu, P.
    WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019), 2019, : 3413 - 3419
  • [38] ConCS: A Continual Classifier System for Continual Learning of Multiple Boolean Problems
    Nguyen, Trung B.
    Browne, Will N.
    Zhang, Mengjie
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2023, 27 (04) : 1057 - 1071
  • [39] Continual variational dropout: a view of auxiliary local variables in continual learning
    Hai, Nam Le
    Nguyen, Trang
    Van, Linh Ngo
    Nguyen, Thien Huu
    Than, Khoat
    MACHINE LEARNING, 2024, 113 (01) : 281 - 323