Considerations in using recurrent neural networks to probe neural dynamics

Cited by: 10
Authors
Kao, Jonathan C. [1,2]
Affiliations
[1] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA USA
[2] Univ Calif Los Angeles, Neurosci Program, Los Angeles, CA USA
Keywords
Author keywords: artificial neural network; motor cortex; neural computation; neural dynamics; recurrent neural network
KeyWords Plus: MOTOR CORTEX; PREMOTOR CORTEX; MOVEMENT; COMPLEXITY; NEURONS
DOI
10.1152/jn.00467.2018
Chinese Library Classification (CLC): Q189 [Neuroscience]
Discipline Code: 071006
Abstract
Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. RNNs are trained to reproduce animal behavior while also capturing key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Furthermore, because the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network's nonlinear activation and rate regularization, we show that RNNs reproducing single-neuron firing rate motifs may not adequately capture important population motifs. Second, we find that even when RNNs reproduce key neurophysiological features on both the single-neuron and population levels, they can do so through distinctly different dynamical mechanisms. To distinguish between these mechanisms, we show that an RNN consistent with a previously proposed dynamical mechanism is more robust to input noise. Finally, we show that these dynamics are sufficient for the RNN to generalize to tasks it was not trained on. Together, these results emphasize important considerations when using RNN models to probe neural dynamics.

NEW & NOTEWORTHY: Artificial neurons in a recurrent neural network (RNN) may resemble empirical single-unit activity but not adequately capture important features on the neural population level. Dynamics of RNNs can be visualized in low-dimensional projections to provide insight into the RNN's dynamical mechanism. RNNs trained in different ways may reproduce neurophysiological motifs but do so with distinctly different mechanisms. RNNs trained to only perform a delayed reach task can generalize to perform tasks where the target is switched or the target location is changed.
Pages: 2504-2521
Number of pages: 18
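The abstract is specific enough about the modeling approach to sketch in code. Below is a minimal NumPy illustration of the class of model it describes, not the paper's implementation: a continuous-time rate RNN whose nonlinearity (phi) can be swapped, whose training loss carries an L2 penalty on firing rates (the rate regularization), and whose inputs can be corrupted with Gaussian noise to probe robustness. All names and values here (phi, tau, lambda_rate, noise_std, the example weights) are illustrative assumptions.

import numpy as np

def simulate_rnn(u, W_in, W_rec, phi=np.tanh, tau=0.05, dt=0.01,
                 noise_std=0.0, seed=0):
    """Euler-discretized rate RNN: tau * dx/dt = -x + W_rec @ phi(x) + W_in @ u(t).

    u is a (T, n_in) input time series; returns (T, N) firing rates r = phi(x).
    noise_std is the std of Gaussian noise added to the inputs (robustness probe).
    """
    T, n_in = u.shape
    N = W_rec.shape[0]
    x = np.zeros(N)
    rates = np.zeros((T, N))
    alpha = dt / tau
    rng = np.random.default_rng(seed)
    for t in range(T):
        r = phi(x)
        rates[t] = r
        u_noisy = u[t] + noise_std * rng.standard_normal(n_in)
        x = (1.0 - alpha) * x + alpha * (W_rec @ r + W_in @ u_noisy)
    return rates

def loss(rates, W_out, target, lambda_rate=1e-3):
    """Task error plus an L2 penalty on firing rates (rate regularization)."""
    output = rates @ W_out                        # (T, n_out) linear readout
    task_err = np.mean((output - target) ** 2)    # behavioral reproduction term
    rate_penalty = lambda_rate * np.mean(rates ** 2)
    return task_err + rate_penalty

# Example: a 100-unit network driven by a brief 2-D "target" pulse.
rng = np.random.default_rng(1)
N, n_in, n_out, T = 100, 2, 2, 500
W_in = rng.standard_normal((N, n_in)) / np.sqrt(n_in)
W_rec = 1.2 * rng.standard_normal((N, N)) / np.sqrt(N)   # gain > 1: rich dynamics
W_out = rng.standard_normal((N, n_out)) / np.sqrt(N)
u = np.zeros((T, n_in))
u[50:100] = [1.0, -1.0]
rates = simulate_rnn(u, W_in, W_rec, noise_std=0.1)
print(loss(rates, W_out, target=np.zeros((T, n_out))))

# Low-dimensional projection of population activity, as in the abstract's
# visualization of dynamics: trajectory in the top two principal components.
centered = rates - rates.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ Vt[:2].T                 # (T, 2) state-space trajectory

Per the abstract, swapping phi=np.tanh for a rectified-linear phi (e.g., lambda x: np.maximum(x, 0.0)) varies the nonlinear activation, raising lambda_rate strengthens the rate regularization, and increasing noise_std probes robustness to input noise; these are the manipulations the paper uses to expose differences between dynamical mechanisms.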