Making Sense of Spatio-Temporal Preserving Representations for EEG-Based Human Intention Recognition

Cited by: 253
Authors
Zhang, Dalin [1 ]
Yao, Lina [1 ]
Chen, Kaixuan [1 ]
Wang, Sen [2 ]
Chang, Xiaojun [3 ]
Liu, Yunhao [4 ]
Affiliations
[1] Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW 2052, Australia
[2] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld 4072, Australia
[3] Monash Univ, Fac Informat Technol, Clayton, Vic 3800, Australia
[4] Michigan State Univ, Sch Comp Sci & Engn, E Lansing, MI 48823 USA
Keywords
Electroencephalography; Electrodes; Brain modeling; Feature extraction; Biological neural networks; Recurrent neural networks; Brain-computer interface (BCI); electroencephalography (EEG); intention recognition; spatial information; temporal information; MOTOR-IMAGERY; CLASSIFICATION; ALGORITHM; NETWORK; SIGNALS; ICA;
DOI
10.1109/TCYB.2019.2905157
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
A brain-computer interface (BCI) is a system that empowers humans to communicate with or control the outside world using brain intentions alone. Electroencephalography (EEG)-based BCI is one of the most promising solutions owing to its convenient and portable instrumentation. Despite extensive research on EEG in recent years, it remains challenging to interpret EEG signals effectively because of their noisy nature and the difficulty of capturing the inconspicuous relations between EEG signals and specific brain activities. Most existing works either treat EEG only as chain-like sequences, neglecting the complex dependencies between adjacent signals, or require complex preprocessing. In this paper, we introduce two deep learning-based frameworks with novel spatio-temporal preserving representations of raw EEG streams to precisely identify human intentions. Both frameworks combine convolutional and recurrent neural networks to effectively explore the preserved spatial and temporal information in either a cascade or a parallel manner. Extensive experiments on a large-scale movement intention EEG dataset (108 subjects, 3,145,160 EEG records) demonstrate that the proposed frameworks achieve a high accuracy of 98.3% and outperform a set of state-of-the-art and baseline models. The developed models are further evaluated with a real-world brain typing BCI and achieve a recognition accuracy of 93% over five instruction intentions, suggesting good generalization across different kinds of intentions and BCI systems.
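The abstract describes cascade and parallel combinations of convolutional and recurrent networks over spatio-temporal preserving EEG representations. Purely as an illustration of the cascade idea (CNN over a 2-D electrode mesh per time step, followed by an RNN over time), the PyTorch sketch below shows one possible layout; the 10x11 mesh size, layer widths, and five-class output are assumptions for demonstration, not the authors' exact architecture.

```python
# Hypothetical sketch of a cascade CNN-RNN over spatio-temporal EEG meshes.
# Shapes and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class CascadeCNNRNN(nn.Module):
    """Cascade model: a CNN encodes each 2-D electrode mesh (spatial),
    then an LSTM models the sequence of CNN features (temporal)."""
    def __init__(self, mesh_h=10, mesh_w=11, num_classes=5, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (B*T, 64, 1, 1)
        )
        self.rnn = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, time, mesh_h, mesh_w) -- one 2-D electrode mesh per step
        b, t, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, 1, h, w)).reshape(b, t, 64)
        out, _ = self.rnn(feats)              # temporal modelling
        return self.fc(out[:, -1])            # classify from the last step

# Usage: a batch of 8 trials, 16 time steps, 10x11 electrode meshes
logits = CascadeCNNRNN()(torch.randn(8, 16, 10, 11))
print(logits.shape)  # torch.Size([8, 5])
```

A parallel variant would instead run the CNN and RNN branches side by side on the same input and fuse their features before classification.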
Pages: 3033-3044
Page count: 12