Accretionary Learning With Deep Neural Networks With Applications

Cited by: 1
Authors
Wei, Xinyu [1 ]
Juang, Biing-Hwang [1 ]
Wang, Ouya [2 ]
Zhou, Shenglong [3 ]
Li, Geoffrey Ye [2 ]
Affiliations
[1] Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
[2] Imperial Coll London, Dept Elect & Elect Engn, London SW7 2BX, England
[3] Beijing Jiaotong Univ, Sch Math & Stat, Beijing, Peoples R China
Keywords
Artificial neural networks; Data models; Knowledge engineering; Task analysis; Training; Speech recognition; Learning systems; Deep learning; accretion learning; deep neural networks; pattern recognition; wireless communications; CLASSIFICATION
DOI
10.1109/TCCN.2023.3342454
CLC Number
TN [Electronic Technology, Communication Technology]
Discipline Code
0809
Abstract
One of the fundamental limitations of Deep Neural Networks (DNN) is their inability to acquire and accumulate new cognitive capabilities in an incremental or progressive manner. When data appear from object classes not among the learned ones, a conventional DNN would not be able to recognize them due to the fundamental formulation that it assumes. A typical solution is to re-design and re-learn a new network, most likely an expanded one, for the expanded set of object classes. This process is quite different from that of a human learner. In this paper, we propose a new learning method named Accretionary Learning (AL) to emulate human learning, in that the set of object classes to be recognized need not be fixed, meaning it can grow as the situation arises without requiring an entire redesign of the system. The proposed learning structure is modularized, and can dynamically expand to learn and register new knowledge, as the set of objects grows in size. AL does not forget previous knowledge when learning new data classes. We show that the structure and its learning methodology lead to a system that can grow to cope with increased cognitive complexity while providing stable and superior overall performance.
Pages: 660-673
Number of pages: 14
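
The abstract describes a modular recognizer that grows a new class module whenever an unseen class appears, while leaving previously learned modules untouched. The following PyTorch toy is a minimal sketch of one way such accretion could be organized; the class and function names, layer sizes, and the one-binary-head-per-class training scheme are illustrative assumptions, not the architecture reported in the paper.

```python
# Toy sketch of "accretion": grow one binary head per class, never revisit old ones.
# All names, sizes, and the one-vs-rest training scheme are illustrative assumptions.
import torch
import torch.nn as nn


class AccretionaryClassifier(nn.Module):
    def __init__(self, in_dim: int, feat_dim: int = 64):
        super().__init__()
        # Shared feature extractor, kept fixed here (a random projection) so that
        # learning a new class cannot disturb what earlier heads rely on.
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        # One small "class module" (binary head) per registered class.
        self.heads = nn.ModuleList()

    def add_class(self) -> int:
        """Register a new class by appending a fresh head; returns its index."""
        self.heads.append(nn.Linear(self.backbone[0].out_features, 1))
        return len(self.heads) - 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        # Each head scores "is this my class?"; concatenate scores over classes.
        return torch.cat([head(feats) for head in self.heads], dim=1)


def learn_new_class(model, pos_x, neg_x, epochs=200, lr=1e-2):
    """Train only the newest head (one-vs-rest); existing heads stay untouched."""
    idx = model.add_class()
    opt = torch.optim.Adam(model.heads[idx].parameters(), lr=lr)
    x = torch.cat([pos_x, neg_x])
    y = torch.cat([torch.ones(len(pos_x), 1), torch.zeros(len(neg_x), 1)])
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x)[:, idx:idx + 1], y)
        loss.backward()
        opt.step()
    return idx


if __name__ == "__main__":
    torch.manual_seed(0)
    model = AccretionaryClassifier(in_dim=2)
    # Two toy Gaussian blobs stand in for the initially known classes.
    a = torch.randn(100, 2) + torch.tensor([3.0, 0.0])
    b = torch.randn(100, 2) + torch.tensor([-3.0, 0.0])
    learn_new_class(model, a, b)
    learn_new_class(model, b, a)
    # A third class appears later: only a new head is added and trained for it.
    c = torch.randn(100, 2) + torch.tensor([0.0, 3.0])
    learn_new_class(model, c, torch.cat([a, b]))
    print(model(c[:5]).argmax(dim=1))  # should mostly be the new class index, 2
```

The point of the sketch is the growth pattern: recognizing a new class adds parameters rather than overwriting old ones, which is one way to keep earlier knowledge intact as the set of classes expands.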