Accretionary Learning With Deep Neural Networks With Applications

Cited by: 0
Authors
Wei, Xinyu [1 ]
Juang, Biing-Hwang [1 ]
Wang, Ouya [2 ]
Zhou, Shenglong [3 ]
Li, Geoffrey Ye [2 ]
Affiliations
[1] Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
[2] Imperial Coll London, Dept Elect & Elect Engn, London SW7 2BX, England
[3] Beijing Jiaotong Univ, Sch Math & Stat, Beijing, Peoples R China
Keywords
Artificial neural networks; Data models; Knowledge engineering; Task analysis; Training; Speech recognition; Learning systems; Deep learning; accretion learning; deep neural networks; pattern recognition; wireless communications; CLASSIFICATION;
DOI
10.1109/TCCN.2023.3342454
CLC number: TN [Electronic technology; Communication technology]
Subject classification code: 0809
Abstract
One of the fundamental limitations of Deep Neural Networks (DNNs) is their inability to acquire and accumulate new cognitive capabilities incrementally or progressively. When data arrive from object classes outside the learned set, a conventional DNN cannot recognize them because of the fixed formulation it assumes. The typical remedy is to re-design and re-train a new, most likely expanded, network for the enlarged set of object classes. This process is quite different from that of a human learner. In this paper, we propose a new learning method, named Accretionary Learning (AL), to emulate human learning: the set of object classes to be recognized need not be fixed and can grow as the situation arises, without requiring an entire redesign of the system. The proposed learning structure is modularized and can dynamically expand to learn and register new knowledge as the set of objects grows in size. AL does not forget previous knowledge when learning new data classes. We show that this structure and its learning methodology yield a system that can grow to cope with increased cognitive complexity while providing stable and superior overall performance.
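The modular, expandable structure described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual method: the class names, the `accrete` operation, and the centroid-based per-class "module" (a stand-in for a small class-specific network) are all illustrative assumptions. The sketch shows only the accretion idea itself: registering a new class adds a new frozen module without retraining or altering the existing ones, so earlier classes are not forgotten.

```python
import math


class AccretiveClassifier:
    """Toy class-incremental recognizer: one frozen module per learned class."""

    def __init__(self):
        self.modules = {}  # class label -> frozen per-class module (a centroid here)

    def accrete(self, label, examples):
        """Register a new class from its examples; existing modules are untouched."""
        # Toy per-class "module": the mean of the class examples.
        dim = len(examples[0])
        self.modules[label] = [
            sum(x[i] for x in examples) / len(examples) for i in range(dim)
        ]

    def predict(self, x):
        """Score x against every registered module; the nearest centroid wins."""
        return min(self.modules, key=lambda c: math.dist(x, self.modules[c]))


# Start with two classes, then grow the label set dynamically.
clf = AccretiveClassifier()
clf.accrete("a", [[0.0, 0.0], [0.2, 0.1]])
clf.accrete("b", [[5.0, 5.0], [4.8, 5.2]])
clf.accrete("c", [[0.0, 9.0]])  # new class accreted; "a" and "b" need no retraining
print(clf.predict([0.1, 0.1]))  # -> "a"
```

Because each class is scored by its own module, expanding the label set is purely additive, which is the property the abstract claims for AL (growth without forgetting); the real system replaces the centroid with learned DNN components.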
Pages: 660-673
Page count: 14