The echo index and multistability in input-driven recurrent neural networks

Cited by: 15
Authors
Ceni, Andrea [1 ]
Ashwin, Peter [2 ]
Livi, Lorenzo [1 ,3 ]
Postlethwaite, Claire [4 ]
Affiliations
[1] Univ Exeter, Dept Comp Sci, Exeter EX4 4QF, Devon, England
[2] Univ Exeter, Dept Math, Exeter EX4 4QF, Devon, England
[3] Univ Manitoba, Dept Comp Sci, Winnipeg, MB R3T 2N2, Canada
[4] Univ Auckland, Dept Math, Auckland 1142, New Zealand
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Nonautonomous dynamical systems; Input-driven systems; Recurrent neural networks; Echo state property; Multistability; Machine learning; STATE NETWORKS; ATTRACTORS;
DOI
10.1016/j.physd.2020.132609
Chinese Library Classification
O29 [Applied Mathematics];
Subject Classification Code
070104;
Abstract
A recurrent neural network (RNN) possesses the echo state property (ESP) if, for a given input sequence, it "forgets" any internal states of the driven (nonautonomous) system and asymptotically follows a unique, possibly complex trajectory. The lack of ESP is conventionally understood as a lack of reliable behaviour in RNNs. Here, we show that RNNs can reliably perform computations under a more general principle that accounts only for their local behaviour in phase space. To this end, we formulate a generalisation of the ESP and introduce an echo index to characterise the number of simultaneously stable responses of a driven RNN. We show that it is possible for the echo index to change with inputs, highlighting a potential source of computational errors in RNNs due to characteristics of the inputs driving the dynamics. (C) 2020 Elsevier B.V. All rights reserved.
Pages: 18
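The abstract defines the echo index as the number of simultaneously stable responses of an input-driven RNN. Below is a minimal numerical sketch, not taken from the paper, of how such a count could be estimated for a simple tanh reservoir: the network is driven from many random initial states by one fixed input sequence, and the distinct final states are clustered. The reservoir size, spectral radius, input signal, and clustering tolerance are all illustrative assumptions.

import numpy as np

# Minimal sketch (not the authors' code): estimate how many distinct
# asymptotic responses a driven RNN has for one fixed input sequence,
# as a numerical proxy for the echo index discussed in the abstract.

rng = np.random.default_rng(0)

N = 50                                            # reservoir size (illustrative)
W = rng.normal(0, 1, (N, N))
W *= 1.2 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 1.2; the ESP may fail here
w_in = rng.normal(0, 1, N)                        # input weights (illustrative)

T = 2000                                          # length of the driving input
u = np.sin(0.1 * np.arange(T))                    # fixed input sequence (illustrative)

def drive(x0):
    """Iterate the driven RNN x_{k+1} = tanh(W x_k + w_in u_k) from state x0."""
    x = x0.copy()
    for k in range(T):
        x = np.tanh(W @ x + w_in * u[k])
    return x

# Drive the network from many random initial states under the same input,
# then cluster the final states; the number of clusters estimates how many
# stable responses coexist (a count of 1 is consistent with the classical ESP).
finals = [drive(rng.uniform(-1, 1, N)) for _ in range(100)]

clusters = []
for x in finals:
    if not any(np.linalg.norm(x - c) < 1e-3 for c in clusters):
        clusters.append(x)

print(f"Estimated number of coexisting responses: {len(clusters)}")

Because the echo index can change with the input (as the abstract notes), repeating this experiment with different driving sequences u may yield different counts; the cluster tolerance of 1e-3 is an arbitrary choice for this sketch.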