ConCS: A Continual Classifier System for Continual Learning of Multiple Boolean Problems

Cited by: 1
Authors
Nguyen, Trung B. [1 ]
Browne, Will N. [2 ]
Zhang, Mengjie [1 ]
Affiliations
[1] Victoria Univ Wellington, Sch Engn & Comp Sci, Wellington 6140, New Zealand
[2] Queensland Univ Technol, Fac Engn, Brisbane, Qld 4000, Australia
Keywords
Building blocks; code fragment (CF); continual learning; learning classifier systems (LCS); multitask learning (MTL); XCS
DOI
10.1109/TEVC.2022.3210872
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Human intelligence can simultaneously process many tasks with the ability to accumulate and reuse knowledge. Recent advances in artificial intelligence, such as transfer, multitask, and layered learning, seek to replicate these abilities. However, humans must specify the task order, which is often difficult, particularly with uncertain domain knowledge. This work introduces a continual-learning system (ConCS) such that, given an open-ended set of problems, once each is solved its solution can contribute to solving further problems. The hypothesis is that the evolutionary computation approach of learning classifier systems (LCSs) can form this system due to its niched, cooperative rules. A collaboration of parallel LCSs identifies sets of patterns linking features to classes that can be reused automatically in related problems. Results from distinct Boolean and integer classification problems, with varying interrelations, show that by combining knowledge from simple problems, complex problems can be solved at increasing scales. 100% accuracy is achieved for the problems tested regardless of the order of task presentation. This includes problems intractable for previous approaches, e.g., the n-bit Majority-on problem. A major contribution is that human guidance is no longer necessary to determine the task-learning order. Furthermore, the system automatically generates the curricula for learning the most difficult tasks.
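The core mechanism the abstract describes, reusing the solution of an already-solved problem as a building block (code fragment) inside rules for a harder problem, can be shown with a minimal sketch. This is an illustrative assumption, not the paper's implementation: the names frag_xor and parity3 and the XOR-to-parity example are hypothetical, standing in for fragments that ConCS would evolve rather than hand-code.

# A minimal sketch (not the authors' code): a "code fragment" learned
# on a simple task (2-bit XOR) is reused as a terminal when composing
# rules for a harder, related task (3-bit parity).
from itertools import product

def frag_xor(a, b):
    # Hypothetical fragment built from AND/OR/NOT primitives on the XOR task.
    return (a or b) and not (a and b)

def parity3(a, b, c):
    # A rule for the harder task reuses the learned fragment as a building block.
    return frag_xor(frag_xor(a, b), c)

# The composed fragment classifies every 3-bit input correctly.
assert all(bool(parity3(a, b, c)) == ((a + b + c) % 2 == 1)
           for a, b, c in product([0, 1], repeat=3))
print("3-bit parity solved by reusing the XOR fragment")

In ConCS such fragments are discovered automatically by parallel LCSs, which is what removes the need for a human-specified task-learning order.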
Pages: 1057 - 1071 (15 pages)