Online learning with hidden Markov models

Cited by: 68
Authors
Mongillo, Gianluigi [1 ]
Deneve, Sophie [1 ]
Affiliations
[1] Coll France, Ecole Normale Super, Dept Etud Cognit, Grp Neural Theory, F-75006 Paris, France
DOI
10.1162/neco.2008.10-06-351
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
We present an online version of the expectation-maximization (EM) algorithm for hidden Markov models (HMMs). The sufficient statistics required for parameter estimation are computed recursively over time, that is, online, rather than with the batch forward-backward procedure. This computational scheme is generalized to the case where the model parameters can change with time by introducing a discount factor into the recurrence relations. For an appropriate discount factor and parameter-update schedule, the resulting algorithm is equivalent to the batch EM algorithm. At the same time, the online algorithm can deal with dynamic environments, that is, cases where the statistics of the observed data change with time. The implications of the online algorithm for probabilistic modeling in neuroscience are briefly discussed.
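
To illustrate the general idea described in the abstract, here is a minimal sketch of online EM for a discrete-emission HMM: a forward filter is run recursively, sufficient statistics are accumulated with a discount factor, and parameters are periodically re-estimated from those running statistics. This is not the authors' exact recursions; the function name online_em_hmm, the discount and update_every parameters, and the use of filtered one-step posteriors (instead of the paper's exact online computation of the expected sufficient statistics) are simplifying assumptions for illustration.

    import numpy as np

    def online_em_hmm(obs, n_states, n_symbols, discount=0.99, update_every=10, seed=0):
        """Sketch of online EM for a discrete-emission HMM.

        Sufficient statistics are accumulated recursively with a discount
        factor instead of a batch forward-backward pass. Simplification:
        counts are built from filtered one-step posteriors, not the
        paper's exact recurrence relations.
        """
        rng = np.random.default_rng(seed)
        # Random initial parameters (rows sum to 1).
        A = rng.dirichlet(np.ones(n_states), size=n_states)    # transition matrix
        B = rng.dirichlet(np.ones(n_symbols), size=n_states)   # emission matrix
        alpha = np.full(n_states, 1.0 / n_states)               # filtered state posterior

        S_trans = np.zeros((n_states, n_states))    # discounted transition counts
        S_emit = np.zeros((n_states, n_symbols))    # discounted emission counts

        for t, y in enumerate(obs, start=1):
            # Recursive E-step: joint posterior over (previous, current) state
            # given observations up to time t.
            joint = alpha[:, None] * A * B[:, y][None, :]
            joint /= joint.sum()
            alpha = joint.sum(axis=0)               # updated filtered posterior

            # Discounted update of the sufficient statistics.
            S_trans = discount * S_trans + joint
            S_emit = discount * S_emit
            S_emit[:, y] += alpha

            # M-step: re-estimate parameters from the running statistics.
            if t % update_every == 0:
                A = (S_trans + 1e-12) / (S_trans.sum(axis=1, keepdims=True) + 1e-12 * n_states)
                B = (S_emit + 1e-12) / (S_emit.sum(axis=1, keepdims=True) + 1e-12 * n_symbols)

        return A, B

    # Example usage on a synthetic symbol stream:
    # rng = np.random.default_rng(1)
    # obs = rng.integers(0, 4, size=500)
    # A, B = online_em_hmm(obs, n_states=3, n_symbols=4)

Setting discount = 1 and update_every equal to the sequence length recovers a batch-like accumulation of statistics, while a discount below 1 lets old observations fade, which is what allows the algorithm to track slowly changing parameters.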
Pages: 1706-1716
Page count: 11