Learning rule influences recurrent network representations but not attractor structure in decision-making tasks

Cited by: 0
Authors
McMahan, Brandon [1 ]
Kleinman, Michael [1 ]
Kao, Jonathan C. [1 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA 90024 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021, Vol. 34
Funding
U.S. National Science Foundation; Natural Sciences and Engineering Research Council of Canada;
Keywords
DYNAMICS; GENERATION;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recurrent neural networks (RNNs) are popular tools for studying computational dynamics in neurobiological circuits. However, due to the dizzying array of design choices, it is unclear if computational dynamics unearthed from RNNs provide reliable neurobiological inferences. Understanding the effects of design choices on RNN computation is valuable in two ways. First, invariant properties that persist in RNNs across a wide range of design choices are more likely to be candidate neurobiological mechanisms. Second, understanding what design choices lead to similar dynamical solutions reduces the burden of imposing that all design choices be totally faithful replications of biology. We focus our investigation on how RNN learning rule and task design affect RNN computation. We trained large populations of RNNs with different, but commonly used, learning rules on decision-making tasks inspired by neuroscience literature. For relatively complex tasks, we find that attractor topology is invariant to the choice of learning rule, but representational geometry is not. For simple tasks, we find that attractor topology depends on task input noise. However, when a task becomes increasingly complex, RNN attractor topology becomes invariant to input noise. Together, our results suggest that RNN dynamics are robust across learning rules but can be sensitive to the training task design, especially for simpler tasks.
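The abstract describes the study's pipeline only at a high level. Below is a minimal, hypothetical sketch (not the authors' code) of the kind of analysis it implies: a vanilla RNN trained by backpropagation through time (one of several learning rules the paper compares) on a noisy evidence-integration decision task, followed by a standard fixed-point search that minimizes the hidden-state speed ||F(h) - h||^2 to expose candidate attractors. All network sizes, task parameters, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch, assuming a tanh vanilla RNN and a simple two-choice
# evidence-integration task; parameters are illustrative, not from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

class VanillaRNN(nn.Module):
    def __init__(self, n_in=1, n_hid=64, n_out=1):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hid, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_hid, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)              # hidden states: (batch, time, n_hid)
        return self.readout(h), h

def make_batch(batch=128, T=50, coh=0.2, noise=0.1):
    # Noisy evidence stream whose mean sign (+/-) is the correct choice.
    drift = coh * torch.sign(torch.randn(batch, 1, 1))
    x = drift + noise * torch.randn(batch, T, 1)
    y = drift.sign().expand(batch, T, 1)
    return x, y

net = VanillaRNN()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):                # BPTT training loop
    x, y = make_batch()
    out, _ = net(x)
    loss = ((out[:, -10:] - y[:, -10:]) ** 2).mean()  # penalize late outputs
    opt.zero_grad()
    loss.backward()
    opt.step()

# Fixed-point search: with input clamped to zero, minimize the "speed"
# ||F(h) - h||^2 over candidate states h (the common slow-point approach,
# e.g., Sussillo & Barak, 2013). Weights are detached so only h is updated.
W_hh = net.rnn.weight_hh_l0.detach()
b = (net.rnn.bias_ih_l0 + net.rnn.bias_hh_l0).detach()
h = torch.randn(32, 64, requires_grad=True)   # 32 candidate fixed points
fp_opt = torch.optim.Adam([h], lr=1e-2)
for _ in range(3000):
    fp_opt.zero_grad()
    F = torch.tanh(h @ W_hh.T + b)            # one recurrent step, zero input
    speed = ((F - h) ** 2).sum(dim=1).mean()
    speed.backward()
    fp_opt.step()
print("mean speed at candidate fixed points:", speed.item())
```

Comparing the slow points recovered this way across populations of networks trained with different learning rules is the kind of analysis the abstract contrasts with comparisons of representational geometry.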
Pages: 12