Subject adaptation convolutional neural network for EEG-based motor imagery classification

Cited by: 14
Authors
Liu, Siwei [1 ]
Zhang, Jia [1 ]
Wang, Andong [2 ]
Wu, Hanrui [1 ]
Zhao, Qibin [2 ]
Long, Jinyi [1 ,3 ,4 ]
Affiliations
[1] Jinan Univ, Coll Informat Sci & Technol, Guangzhou 510632, Peoples R China
[2] RIKEN AIP, Tensor Learning Team, Tokyo, Japan
[3] Guangdong Key Lab Tradit Chinese Med Informat Tech, Guangzhou 510632, Peoples R China
[4] Pazhou Lab, Guangzhou 510335, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
brain-computer interface; motor imagery; deep learning; transfer learning; electroencephalogram; BRAIN-COMPUTER INTERFACE; DOMAIN ADAPTATION; SEIZURE DETECTION; FEATURES; BCI;
DOI
10.1088/1741-2552/ac9c94
CLC number
R318 [Biomedical Engineering];
Subject classification code
0831;
Abstract
Objective. Deep transfer learning has been widely used to address the nonstationarity of electroencephalogram (EEG) data during motor imagery (MI) classification. However, previous deep learning approaches suffer from limited classification accuracy because the temporal and spatial features cannot be effectively extracted. Approach. Here, we propose a novel end-to-end deep subject adaptation convolutional neural network (SACNN) to handle the problem of EEG-based MI classification. Our proposed model jointly optimizes three modules, i.e. a feature extractor, a classifier, and a subject adapter. Specifically, the feature extractor simultaneously extracts the temporal and spatial features from the raw EEG data using a parallel multiscale convolution network. In addition, we design a subject adapter to reduce the feature distribution shift between the source and target subjects by using the maximum mean discrepancy. By minimizing the classification loss and the distribution discrepancy, the model is able to extract temporal-spatial features that transfer to the prediction of a new subject. Main results. Extensive experiments are carried out on three EEG-based MI datasets, i.e. brain-computer interface (BCI) competition IV dataset IIb, BCI competition III dataset IVa, and BCI competition IV dataset I, and the average accuracy reaches 86.42%, 81.71%, and 79.35% on the three datasets, respectively. Furthermore, the statistical analysis also indicates a significant performance improvement of SACNN. Significance. This paper reveals the importance of temporal-spatial features in the EEG-based MI classification task. Our proposed SACNN model can make full use of the temporal-spatial information to achieve this goal.
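
The following is a minimal PyTorch sketch, not the authors' released code, of how the objective described in the abstract could be assembled: a parallel multiscale temporal-spatial convolutional feature extractor, a classifier trained with cross-entropy on the source subject, and a maximum mean discrepancy (MMD) penalty between source and target features. The layer sizes, kernel scales, RBF bandwidth, and the trade-off weight lambda_mmd are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


def rbf_mmd(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches, Gaussian kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


class ParallelMultiscaleExtractor(nn.Module):
    """Temporal convolutions at several kernel widths followed by a spatial
    convolution across EEG channels; a generic stand-in for the feature extractor."""
    def __init__(self, n_channels=3, temporal_scales=(25, 50, 100), n_filters=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, n_filters, kernel_size=(1, k), padding=(0, k // 2)),
                nn.Conv2d(n_filters, n_filters, kernel_size=(n_channels, 1)),  # spatial conv
                nn.BatchNorm2d(n_filters),
                nn.ELU(),
                nn.AdaptiveAvgPool2d((1, 32)),
            )
            for k in temporal_scales
        ])
        self.out_dim = len(temporal_scales) * n_filters * 32

    def forward(self, x):                       # x: (batch, 1, channels, samples)
        return torch.cat([b(x).flatten(1) for b in self.branches], dim=1)


class SACNNSketch(nn.Module):
    def __init__(self, n_classes=2, **kw):
        super().__init__()
        self.extractor = ParallelMultiscaleExtractor(**kw)
        self.classifier = nn.Linear(self.extractor.out_dim, n_classes)

    def forward(self, x):
        z = self.extractor(x)
        return self.classifier(z), z


def training_loss(model, x_src, y_src, x_tgt, lambda_mmd=1.0):
    """Joint objective: source classification loss plus MMD between source and
    target feature distributions (target labels are never used)."""
    logits_src, z_src = model(x_src)
    _, z_tgt = model(x_tgt)
    return F.cross_entropy(logits_src, y_src) + lambda_mmd * rbf_mmd(z_src, z_tgt)

In a cross-subject setting, x_src and y_src would come from labeled trials of the source subjects and x_tgt from unlabeled trials of the new target subject, so minimizing this loss pushes the extractor toward features that both classify well and align across subjects.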
Pages: 15