PARALLEL LONG SHORT-TERM MEMORY FOR MULTI-STREAM CLASSIFICATION

Cited by: 0
Authors
Bouaziz, Mohamed [1 ,2 ]
Morchid, Mohamed [1 ]
Dufour, Richard [1 ]
Linares, Georges [1 ]
De Mori, Renato [1 ,3 ]
Affiliations
[1] Univ Avignon, LIA, Avignon, France
[2] EDD, Paris, France
[3] McGill Univ, Montreal, PQ, Canada
Source
2016 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2016) | 2016
Keywords
long short-term memory; sequence classification; stream structuring; LSTM;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, machine learning methods have provided a broad spectrum of original and efficient algorithms based on Deep Neural Networks (DNN) to automatically predict an outcome from a sequence of inputs. Recurrent hidden cells allow DNN-based models such as Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) networks to capture long-term dependencies. Nevertheless, these RNNs process a single input stream in one (LSTM) or two (Bidirectional LSTM) directions. However, much of the information available nowadays comes from multi-stream or multimedia documents, which requires RNNs to process several streams synchronously during training. This paper presents an original LSTM-based architecture, named Parallel LSTM (PLSTM), that processes multiple parallel synchronized input sequences in order to predict a common output. The proposed PLSTM method can be used for parallel sequence classification. The PLSTM approach is evaluated on an automatic telecast genre sequence classification task and compared with different state-of-the-art architectures. Results show that the proposed PLSTM method outperforms both the baseline n-gram models and the state-of-the-art LSTM approach.
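To make the described architecture concrete, below is a minimal PyTorch sketch of a PLSTM-style classifier, based only on the abstract: one LSTM per synchronized input stream, with the per-stream final hidden states merged to predict a shared label. This is not the authors' implementation; the class name ParallelLSTM, the layer sizes, and the late-fusion choice (concatenation followed by a linear layer) are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ParallelLSTM(nn.Module):
        """Sketch of a PLSTM-style classifier (hypothetical, not the paper's code):
        one LSTM per parallel stream, final hidden states merged for a common output."""

        def __init__(self, input_sizes, hidden_size, num_classes):
            super().__init__()
            # One LSTM per parallel stream; streams may have different feature sizes.
            self.lstms = nn.ModuleList(
                nn.LSTM(in_size, hidden_size, batch_first=True)
                for in_size in input_sizes
            )
            # Concatenated final hidden states feed a shared classification layer.
            self.classifier = nn.Linear(hidden_size * len(input_sizes), num_classes)

        def forward(self, streams):
            # `streams` is a list of synchronized tensors,
            # each of shape (batch, seq_len, input_size_i).
            last_states = []
            for lstm, x in zip(self.lstms, streams):
                _, (h_n, _) = lstm(x)        # h_n: (1, batch, hidden_size)
                last_states.append(h_n[-1])  # keep the final hidden state per stream
            merged = torch.cat(last_states, dim=-1)
            return self.classifier(merged)   # logits for the common output

    # Hypothetical usage: 3 streams of 8-dim features, 5 target classes.
    model = ParallelLSTM(input_sizes=[8, 8, 8], hidden_size=16, num_classes=5)
    streams = [torch.randn(4, 10, 8) for _ in range(3)]  # batch=4, seq_len=10
    logits = model(streams)                              # shape: (4, 5)

Concatenating the final states is one plausible merging strategy; averaging or a learned gating over the per-stream states would be alternatives under the same multi-stream framing.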
Pages: 218-223
Number of pages: 6