Considerations in using recurrent neural networks to probe neural dynamics

Cited by: 10
Authors
Kao, Jonathan C. [1 ,2 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA USA
[2] Univ Calif Los Angeles, Neurosci Program, Los Angeles, CA USA
Keywords
artificial neural network; motor cortex; neural computation; neural dynamics; recurrent neural network; MOTOR CORTEX; PREMOTOR CORTEX; MOVEMENT; COMPLEXITY; NEURONS
DOI
10.1152/jn.00467.2018
Chinese Library Classification
Q189 [Neuroscience]
Discipline code
071006
Abstract
Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. RNNs are trained to reproduce animal behavior while also capturing key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Furthermore, because the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network's nonlinear activation and rate regularization, we show that RNNs reproducing single-neuron firing rate motifs may not adequately capture important population motifs. Second, we find that even when RNNs reproduce key neurophysiological features on both the single neuron and population levels, they can do so through distinctly different dynamical mechanisms. To distinguish between these mechanisms, we show that an RNN consistent with a previously proposed dynamical mechanism is more robust to input noise. Finally, we show that these dynamics are sufficient for the RNN to generalize to tasks it was not trained on. Together, these results emphasize important considerations when using RNN models to probe neural dynamics.
NEW & NOTEWORTHY Artificial neurons in a recurrent neural network (RNN) may resemble empirical single-unit activity but not adequately capture important features on the neural population level. Dynamics of RNNs can be visualized in low-dimensional projections to provide insight into the RNN's dynamical mechanism. RNNs trained in different ways may reproduce neurophysiological motifs but do so with distinctly different mechanisms. RNNs trained to only perform a delayed reach task can generalize to perform tasks where the target is switched or the target location is changed.
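The abstract refers to varying a network's nonlinear activation and rate regularization when training RNNs on a delayed reach task. A minimal sketch of the kind of continuous-time "rate" RNN commonly used in such studies is shown below; this is a hypothetical illustration, not the authors' code, and every parameter value (network size, time constant, regularization weight) is an arbitrary placeholder. The nonlinearity `phi` and the penalty weight `lam` correspond to the two modeling choices the abstract says were varied.

```python
# Hypothetical sketch of a continuous-time rate RNN, Euler-discretized:
#   tau * dx/dt = -x + W_rec @ phi(x) + W_in @ u
# with an L2 penalty on firing rates. Illustrative only; all values arbitrary.
import numpy as np

rng = np.random.default_rng(0)

N, N_in, N_out = 100, 2, 2    # network, input, and readout sizes (arbitrary)
dt, tau = 0.01, 0.05          # Euler step and neuronal time constant (s)

W_rec = rng.normal(0.0, 1.2 / np.sqrt(N), (N, N))      # recurrent weights
W_in = rng.normal(0.0, 1.0 / np.sqrt(N_in), (N, N_in))  # input weights
W_out = rng.normal(0.0, 1.0 / np.sqrt(N), (N_out, N))   # linear readout

def step(x, u, phi=np.tanh):
    """One Euler step of the rate dynamics; phi is the (swappable) nonlinearity."""
    r = phi(x)                                        # firing rates
    x = x + (dt / tau) * (-x + W_rec @ r + W_in @ u)  # leaky recurrent update
    return x, W_out @ phi(x)                          # new state and readout

def rate_penalty(rates, lam=1e-3):
    """L2 firing-rate regularizer; lam is the knob the abstract says is varied."""
    return lam * np.mean(np.square(rates))

# Run a short trial with a constant (placeholder) target input.
x = np.zeros(N)
u = np.array([1.0, -1.0])
rates = []
for _ in range(200):
    x, y = step(x, u)
    rates.append(np.tanh(x))
penalty = rate_penalty(np.array(rates))
```

In practice such a network would be trained by gradient descent on a task loss plus `rate_penalty`; the abstract's point is that different choices of `phi` and `lam` can yield RNNs with similar single-neuron motifs but different population-level structure and dynamical mechanisms.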
Pages: 2504-2521
Page count: 18