Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies

Cited by: 1
Authors
Shen, Yang [1 ]
Dasgupta, Sanjoy [2 ]
Navlakha, Saket [1 ]
Affiliations
[1] Cold Spring Harbor Lab, Simons Ctr Quantitat Biol, Cold Spring Harbor, NY 11724 USA
[2] Univ Calif San Diego, Dept Comp Sci & Engn, La Jolla, CA USA
Keywords
Drosophila mushroom body; neural networks; memory; mechanisms; algorithm; models; sparse; representations; reevaluation; information
DOI
10.1162/neco_a_01615
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Catastrophic forgetting remains an outstanding challenge in continual learning. Recently, methods inspired by the brain, such as continual representation learning and memory replay, have been used to combat catastrophic forgetting. Associative learning (retaining associations between inputs and outputs, even after good representations are learned) plays an important role in the brain; however, its role in continual learning has not been carefully studied. Here, we identified a two-layer neural circuit in the fruit fly olfactory system that performs continual associative learning between odors and their associated valences. In the first layer, inputs (odors) are encoded using sparse, high-dimensional representations, which reduces memory interference by activating nonoverlapping populations of neurons for different odors. In the second layer, only the synapses between odor-activated neurons and the odor's associated output neuron are modified during learning; the rest of the weights are frozen to prevent unrelated memories from being overwritten. We prove theoretically that under continual learning, these two perceptron-like layers help reduce catastrophic forgetting compared to the original perceptron algorithm. We then show empirically on benchmark data sets that this simple, lightweight architecture outperforms other popular neural-inspired algorithms when they also use a two-layer feedforward architecture. Overall, fruit flies evolved an efficient continual associative learning algorithm, and circuit mechanisms from neuroscience can be translated to improve machine computation.
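The abstract describes two mechanisms that are easy to make concrete: a sparse, high-dimensional encoding layer and an associative layer whose updates are confined to the synapses between currently active input neurons and the true class's output neuron. The Python listing below is a minimal sketch of that scheme under stated assumptions, not the paper's exact model: the class name FlyLikeLearner, the sparse binary random projection, the top-k winner-take-all step, and the fixed Hebbian-style increment are all illustrative choices made here for brevity.

import numpy as np

rng = np.random.default_rng(0)

def sparse_code(x, proj, k):
    # Layer 1: project the input into a higher-dimensional space, then
    # keep only the top-k responses (winner-take-all) as a sparse binary
    # code, so different inputs activate mostly non-overlapping neurons.
    h = proj @ x
    code = np.zeros_like(h)
    code[np.argsort(h)[-k:]] = 1.0
    return code

class FlyLikeLearner:
    # Layer 2: one output neuron per class. Only the synapses from
    # currently active layer-1 neurons onto the true class's output
    # neuron are updated; every other weight stays frozen.
    def __init__(self, in_dim, hid_dim, n_classes, k, lr=0.1):
        # Sparse binary random projection, standing in for the fly's
        # projection-neuron-to-Kenyon-cell wiring (an assumption here).
        self.proj = (rng.random((hid_dim, in_dim)) < 0.1).astype(float)
        self.W = np.zeros((n_classes, hid_dim))
        self.k, self.lr = k, lr

    def partial_fit(self, x, label):
        active = sparse_code(x, self.proj, self.k) > 0
        # Hebbian-style increment restricted to the target class's
        # synapses from active units; frozen weights are untouched.
        self.W[label, active] += self.lr

    def predict(self, x):
        code = sparse_code(x, self.proj, self.k)
        return int(np.argmax(self.W @ code))

# Toy continual stream: classes arrive one at a time, each example is
# seen once, and earlier classes are never replayed.
prototypes = rng.random((10, 50))
model = FlyLikeLearner(in_dim=50, hid_dim=2000, n_classes=10, k=40)
for label in range(10):
    for _ in range(20):
        model.partial_fit(prototypes[label] + 0.05 * rng.normal(size=50), label)
print(np.mean([model.predict(prototypes[c]) == c for c in range(10)]))

Because each input activates only k of the hid_dim units, and learning touches only the true class's row at those units, inputs with dissimilar codes learned at different times modify nearly disjoint sets of weights. That is the interference-reduction argument the abstract makes.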
Pages: 1797-1819
Page count: 23