Learning rule influences recurrent network representations but not attractor structure in decision-making tasks

Cited: 0
Authors
McMahan, Brandon [1 ]
Kleinman, Michael [1 ]
Kao, Jonathan C. [1 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA 90024 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021 / Vol. 34
Funding
U.S. National Science Foundation; Natural Sciences and Engineering Research Council of Canada;
Keywords
DYNAMICS; GENERATION;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recurrent neural networks (RNNs) are popular tools for studying computational dynamics in neurobiological circuits. However, due to the dizzying array of design choices, it is unclear whether the computational dynamics unearthed from RNNs support reliable neurobiological inferences. Understanding the effects of design choices on RNN computation is valuable in two ways. First, invariant properties that persist in RNNs across a wide range of design choices are more likely to be candidate neurobiological mechanisms. Second, knowing which design choices lead to similar dynamical solutions reduces the burden of requiring that every design choice faithfully replicate biology. We focus our investigation on how the RNN learning rule and task design affect RNN computation. We trained large populations of RNNs with different, but commonly used, learning rules on decision-making tasks inspired by the neuroscience literature. For relatively complex tasks, we find that attractor topology is invariant to the choice of learning rule, but representational geometry is not. For simple tasks, we find that attractor topology depends on task input noise; however, as a task becomes increasingly complex, RNN attractor topology becomes invariant to input noise. Together, our results suggest that RNN dynamics are robust across learning rules but can be sensitive to training task design, especially for simpler tasks.
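The attractor-topology comparisons the abstract describes are typically carried out by locating fixed points of a trained RNN's dynamics (the "speed minimization" approach of Sussillo and Barak, 2013). The sketch below is a minimal illustration of that technique, not the authors' code: the network size, random weights, learning rate, and iteration count are all illustrative assumptions, and the recurrent weights are scaled small so that a fixed point is guaranteed to exist.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
# Illustrative random recurrent weights, scaled so the dynamics are
# contracting and a stable fixed point exists (assumption for the demo).
W = 0.1 * rng.standard_normal((N, N))
b = 0.5 * rng.standard_normal(N)

def step(h):
    """One step of discrete-time RNN dynamics: h_{t+1} = tanh(W h_t + b)."""
    return np.tanh(W @ h + b)

def q(h):
    """Speed function q(h) = 0.5 * ||F(h) - h||^2; minima with q ~ 0 are fixed points."""
    d = step(h) - h
    return 0.5 * d @ d

def grad_q(h):
    """Analytic gradient of q: (J_F(h) - I)^T (F(h) - h)."""
    f = np.tanh(W @ h + b)
    J = (1.0 - f**2)[:, None] * W  # Jacobian of step() at h
    return (J - np.eye(N)).T @ (f - h)

# Gradient descent on q from a random initial state.
h = rng.standard_normal(N)
for _ in range(3000):
    h -= 0.3 * grad_q(h)

# q(h) should now be near zero, i.e. h is (numerically) a fixed point.
```

In practice this search is run from many initial states sampled along task trajectories, and the collection of fixed points (and their stability, from the Jacobian's eigenvalues) characterizes the attractor topology being compared across learning rules.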
Pages: 12