Considerations in using recurrent neural networks to probe neural dynamics

Cited by: 10
Authors: Kao, Jonathan C. [1,2]
Affiliations:
[1] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA USA
[2] Univ Calif Los Angeles, Neurosci Program, Los Angeles, CA USA
Keywords: artificial neural network; motor cortex; neural computation; neural dynamics; recurrent neural network
DOI: 10.1152/jn.00467.2018
Abstract:
Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. RNNs are trained to reproduce animal behavior while also capturing key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Furthermore, because the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network's nonlinear activation and rate regularization, we show that RNNs reproducing single-neuron firing rate motifs may not adequately capture important population motifs. Second, we find that even when RNNs reproduce key neurophysiological features on both the single neuron and population levels, they can do so through distinctly different dynamical mechanisms. To distinguish between these mechanisms, we show that an RNN consistent with a previously proposed dynamical mechanism is more robust to input noise. Finally, we show that these dynamics are sufficient for the RNN to generalize to tasks it was not trained on. Together, these results emphasize important considerations when using RNN models to probe neural dynamics.

NEW & NOTEWORTHY Artificial neurons in a recurrent neural network (RNN) may resemble empirical single-unit activity but not adequately capture important features on the neural population level. Dynamics of RNNs can be visualized in low-dimensional projections to provide insight into the RNN's dynamical mechanism. RNNs trained in different ways may reproduce neurophysiological motifs but do so with distinctly different mechanisms. RNNs trained only to perform a delayed reach task can generalize to perform tasks where the target is switched or the target location is changed.
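The abstract's modeling setup, a continuous-time rate RNN whose nonlinear activation and rate regularization are the knobs being varied, can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's actual code: the function names `simulate_rnn` and `rate_regularizer`, the network size, and all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_rnn(u, W, B, f=np.tanh, tau=0.05, dt=0.01):
    """Euler-integrate a rate RNN, tau * dx/dt = -x + W f(x) + B u,
    driven by an input sequence u of shape (T, n_inputs).
    The nonlinearity f is one of the knobs the paper varies."""
    T = u.shape[0]
    n = W.shape[0]
    x = np.zeros(n)            # hidden state
    rates = np.zeros((T, n))   # firing rates f(x) over time
    for t in range(T):
        rates[t] = f(x)
        x = x + (dt / tau) * (-x + W @ rates[t] + B @ u[t])
    return rates

def rate_regularizer(rates, lam=1e-3):
    """L2 penalty on firing rates, added to the task loss during
    training; its weight lam is the other knob the paper varies."""
    return lam * np.mean(rates ** 2)

# Illustrative use: drive a random network with a "go cue"-like pulse.
rng = np.random.default_rng(0)
n, n_in, T = 50, 2, 200
W = rng.normal(0, 1.2 / np.sqrt(n), (n, n))   # recurrent weights
B = rng.normal(0, 1.0, (n, n_in))             # input weights
u = np.zeros((T, n_in))
u[50:100, 0] = 1.0                            # transient input pulse
r = simulate_rnn(u, W, B)
penalty = rate_regularizer(r)
```

Swapping `np.tanh` for a rectified-linear or softplus nonlinearity, or changing `lam`, changes which single-neuron and population motifs the trained network reproduces, which is the comparison the abstract describes.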
Pages: 2504-2521 (18 pages)