Learning the Initial State of a Second-Order Recurrent Neural Network during Regular-Language Inference

Cited by: 24
Authors
FORCADA, ML
CARRASCO, RC
DOI
10.1162/neco.1995.7.5.923
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent work has shown that second-order recurrent neural networks (2ORNNs) may be used to infer regular languages. This paper presents a modified version of the real-time recurrent learning (RTRL) algorithm used to train 2ORNNs, which learns the initial state in addition to the weights. This modification adds extra flexibility at a negligible cost in time complexity, and the results suggest that it may be used to improve the learning of regular languages when the size of the network is small.
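The key quantity the modified RTRL algorithm needs is the sensitivity of the network state to the initial state, propagated forward in time alongside the usual weight sensitivities. The following is a minimal NumPy sketch of that idea, not the authors' code: it runs a second-order network x(t) = g(Σ_jk W_ijk x_j(t−1) u_k(t)) with sigmoid units, propagates the Jacobian J(t) = ∂x(t)/∂x(0) with the RTRL-style recursion J(t) = diag(g′) · A(t) · J(t−1), and uses it to get the gradient of a loss with respect to the trainable initial state. The single "accept" unit, the quadratic loss, and all variable names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W, x0, inputs):
    """Run the 2nd-order RNN and propagate J(t) = dx(t)/dx(0) (RTRL-style).

    W      : (N, N, M) second-order weight tensor W[i, j, k]
    x0     : (N,) trainable initial state
    inputs : sequence of (M,) one-hot input symbols
    """
    N = x0.size
    x, J = x0, np.eye(N)               # J(0) = dx(0)/dx(0) = I
    for u in inputs:
        A = np.einsum('ijk,k->ij', W, u)  # effective weight matrix for this symbol
        x_new = sigmoid(A @ x)
        D = np.diag(x_new * (1.0 - x_new))  # diag of sigmoid derivatives
        J = D @ A @ J                  # chain rule: dx(t)/dx(0)
        x = x_new
    return x, J

rng = np.random.default_rng(0)
N, M, T = 3, 2, 5                      # state units, alphabet size, string length
W = rng.normal(size=(N, N, M))
x0 = rng.uniform(size=N)
inputs = np.eye(M)[rng.integers(0, M, size=T)]  # a random input string, one-hot

x_T, J = forward(W, x0, inputs)
target = 1.0                           # desired activation of the "accept" unit
dL_dxT = np.array([x_T[0] - target] + [0.0] * (N - 1))  # L = 0.5 * (x_T[0] - target)^2
grad_x0 = J.T @ dL_dxT                 # gradient used to update the initial state
```

The extra cost over standard RTRL is just the N×N Jacobian recursion, which is negligible next to the weight-sensitivity updates; `x0` can then be updated by gradient descent exactly like the weights.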
Pages: 923-930
Page count: 8
References (7 items)
[1] Giles, C. L., 1992, Advances in Neural Information Processing Systems, Vol. 4, p. 317.
[2] Giles, C. L.; Miller, C. B.; Chen, D.; Chen, H. H.; Sun, G. Z.; Lee, Y. C. Learning and extracting finite state automata with second-order recurrent neural networks [J]. Neural Computation, 1992, 4(3): 393-405.
[3] Siegelmann, H. T., 1992, INFORMATION PROCESSI, Vol. 1, p. 329.
[4] Tomita, M., 1982, Proceedings of the 4th Annual Cognitive Science Conference, p. 105.
[5] Watrous, R. L.; Kuhn, G. M. Induction of finite-state languages using second-order recurrent networks [J]. Neural Computation, 1992, 4(3): 406-414.
[6] Watrous, R. L., 1992, Advances in Neural Information Processing Systems, Vol. 4, p. 306.
[7] Williams, R. J.; Zipser, D. A learning algorithm for continually running fully recurrent neural networks [J]. Neural Computation, 1989, 1(2): 270-280.