Continual learning of context-dependent processing in neural networks

Cited by: 164
Authors
Zeng, Guanxiong [1 ,2 ,3 ]
Chen, Yang [1 ,2 ]
Cui, Bo [1 ,2 ,3 ]
Yu, Shan [1 ,2 ,3 ,4 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Brainnetome Ctr, Beijing, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
[3] Univ Chinese Acad Sci, Beijing, Peoples R China
[4] Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing, Peoples R China
Keywords
PREFRONTAL CORTEX; CONNECTIONIST MODELS; MIXED SELECTIVITY; SYSTEMS;
DOI
10.1038/s42256-019-0080-x
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks are powerful tools for learning sophisticated but fixed mapping rules between inputs and outputs, which limits their application in more complex and dynamic situations where the mapping rules are not fixed but change according to context. To lift this limitation, we developed an approach combining a learning algorithm, called orthogonal weights modification, with a context-dependent processing module. We demonstrate that, with orthogonal weights modification to overcome catastrophic forgetting and the context-dependent processing module to learn how to reuse a feature representation and a classifier across different contexts, a single network can acquire numerous context-dependent mapping rules in an online and continual manner, with as few as approximately ten samples to learn each. Our approach should enable highly compact systems to gradually learn myriad regularities of the real world and eventually behave appropriately within it.

When neural networks are retrained to solve more than one problem, they tend to forget what they learned earlier. Here, the authors propose orthogonal weights modification, a method to avoid this so-called catastrophic forgetting problem. Capitalizing on this ability, a new module is introduced that enables the network to continually learn context-dependent processing.
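The core of orthogonal weights modification (OWM) is to constrain each new task's weight updates to the subspace orthogonal to inputs from earlier tasks, with the orthogonal projector maintained by a recursive (RLS-style) update. The sketch below is a minimal single-linear-layer illustration of this idea, not the authors' code: the dimensions, learning rate, the regularizer `alpha`, and the synthetic two-task setup are all illustrative assumptions.

```python
import numpy as np

def update_projector(P, x, alpha=1e-3):
    """RLS-style recursive update used in OWM: after seeing input x,
    shrink P so it (approximately) projects onto the subspace orthogonal
    to every input processed so far."""
    x = x.reshape(-1, 1)
    k = P @ x
    return P - (k @ k.T) / (alpha + (x.T @ k).item())

rng = np.random.default_rng(0)
d_in, d_out, lr = 8, 3, 0.1

# Task-1 inputs confined to a 4-d subspace of the 8-d input space,
# leaving orthogonal directions free for later tasks.
Q, _ = np.linalg.qr(rng.standard_normal((d_in, 4)))
X1 = rng.standard_normal((20, 4)) @ Q.T
Y1 = rng.standard_normal((20, d_out))
X2 = rng.standard_normal((20, d_in))   # task-2 inputs, unconstrained
Y2 = rng.standard_normal((20, d_out))

W = 0.1 * rng.standard_normal((d_out, d_in))
P = np.eye(d_in)

# Task 1: plain gradient descent on a linear layer, then fold the
# task-1 inputs into the projector.
for _ in range(200):
    W -= lr * (W @ X1.T - Y1.T) @ X1 / len(X1)
for x in X1:
    P = update_projector(P, x)

# Task 2: each gradient is right-multiplied by P (the OWM step), so
# weight changes are orthogonal to the task-1 inputs and the old
# mapping is preserved.
out1_before = W @ X1.T
for _ in range(200):
    grad = (W @ X2.T - Y2.T) @ X2 / len(X2)
    W -= lr * grad @ P
drift = np.abs(W @ X1.T - out1_before).max()
```

Because every task-2 update lies (up to the regularizer `alpha`) in the null space of the task-1 inputs, `drift` stays near zero, whereas unconstrained retraining on task 2 would overwrite the task-1 mapping.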
Pages: 364 - 372 (9 pages)
Related papers (50 records in total)
  • [21] Interorganizational learning: a context-dependent process
    Rupcic, Natasa
    LEARNING ORGANIZATION, 2021, 28 (02): 222 - 232
  • [22] Context-Dependent Learning of Linguistic Disjunction
    Jasbi, Masoud
    Jaggi, Akshay
    Clark, Eve V.
    Frank, Michael C.
    JOURNAL OF CHILD LANGUAGE, 2024, 51 (01) : 1 - 36
  • [23] Orthogonal representations for robust context-dependent task performance in brains and neural networks
    Flesch, Timo
    Juechems, Keno
    Dumbalska, Tsvetomira
    Saxe, Andrew
    Summerfield, Christopher
    NEURON, 2022, 110 (07) : 1258 - +
  • [24] Context-Dependent Deep Neural Networks for Commercial Mandarin Speech Recognition Applications
    Niu, Jianwei
    Xie, Lei
    Jia, Lei
    Hu, Na
    2013 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA), 2013,
  • [25] Context-dependent information processing in patients with schizophrenia
    Bazin, N
    Perruchet, P
    Hardy-Bayle, MC
    Feline, A
    SCHIZOPHRENIA RESEARCH, 2000, 45 (1-2) : 93 - 101
  • [26] Atypical context-dependent speech processing in autism
    Yu, Alan Chi Lun
    To, Carol Kit Sum
    APPLIED PSYCHOLINGUISTICS, 2020, 41 (05) : 1045 - 1059
  • [27] Affective reactions and context-dependent processing of negations
    Rubaltelli, Enrico
    Slovic, Paul
    JUDGMENT AND DECISION MAKING, 2008, 3 (08): 607 - 618
  • [28] Context-dependent modulation of auditory processing by serotonin
    Hurley, L. M.
    Hall, I. C.
    HEARING RESEARCH, 2011, 279 (1-2) : 74 - 84
  • [29] The cortical dynamics of context-dependent language processing
    Dietrich, Susanne
    Hertrich, Ingo
    Blum, Corinna
    Seibold, Verena C. C.
    Rolke, Bettina
    LANGUAGE COGNITION AND NEUROSCIENCE, 2023, 38 (07) : 903 - 924
  • [30] Multitask Learning of Context-Dependent Targets in Deep Neural Network Acoustic Models
    Bell, Peter
    Swietojanski, Pawel
    Renals, Steve
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2017, 25 (02) : 238 - 247