Subject-specific CNN model with parameter-based transfer learning for SSVEP detection

Cited by: 0
Authors
Ji, Zhouyu [1 ]
Xu, Tao [2 ]
Chen, Chuangquan [1 ]
Yin, Haojun [1 ]
Wan, Feng [3 ,4 ]
Wang, Hongtao [1 ]
Affiliations
[1] Wuyi Univ, Sch Elect & Informat Engn, Jiangmen, Peoples R China
[2] Shantou Univ, Dept Biomed Engn, Shantou, Peoples R China
[3] Univ Macau, Fac Sci & Technol, Dept Elect & Comp Engn, Macau, Peoples R China
[4] Univ Macau, Inst Collaborat Innovat, Ctr Cognit & Brain Sci, Macau, Peoples R China
Keywords
Brain-computer interface; Deep learning; Electroencephalogram (EEG); Transfer learning; Steady-state visual evoked potential (SSVEP); BRAIN-COMPUTER INTERFACE; FREQUENCY RECOGNITION; NEURAL-NETWORK; CLASSIFICATION;
DOI
10.1016/j.bspc.2024.107404
Chinese Library Classification (CLC): R318 [Biomedical Engineering];
Subject Classification Code: 0831;
Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) leverage machine learning methods to enhance performance. However, these methods require a sufficiently long time window to achieve high accuracy and information transfer rate (ITR), which restricts their use in real-world scenarios, particularly for user-specific decoding. To address this issue, we propose a parameter-based transfer learning CNN (PTL-CNN) approach for SSVEP-BCI systems that can automatically fuse and extract both inter- and intra-subject features from EEG signals. Specifically, we first introduce a shallow CNN architecture and adopt a short time window to train a pretrained model on a dataset comprising numerous subjects, aiming to capture features that are universal across subjects. Subsequently, data from a new user are used to fine-tune the model, calibrating it to that specific user. Experimental results demonstrate that PTL-CNN achieves remarkable performance and significantly outperforms the compared algorithms under short time windows. For instance, with a 0.4 s time window, PTL-CNN achieves an average accuracy of 80.60% with an average ITR of 247.77 bits/min on the Benchmark dataset, and an average accuracy of 66.91% with an average ITR of 185.90 bits/min on the Beta dataset. This performance is significantly better than that of Ensemble-TRCA (Benchmark: 71.21%, 209.12 bits/min; Beta: 53.04%, 135.53 bits/min). In summary, the proposed PTL-CNN achieves the highest average accuracy with the fastest average ITR; it has implications for the development of real-time BCI applications and may serve as inspiration for other application paradigms.
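The record itself contains no code or architectural details, so the following is only a minimal PyTorch sketch of the two-stage workflow the abstract describes (pretrain a shallow CNN on pooled source-subject data, then fine-tune all parameters on a new user's calibration data). The layer layout, electrode count, 0.4 s window length, 40-class target set, and training schedule are illustrative assumptions, not the authors' actual PTL-CNN.

```python
# Hypothetical sketch of the pretrain/fine-tune pipeline outlined in the abstract.
# All layer sizes and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class ShallowSSVEPCNN(nn.Module):
    """Shallow CNN over (channels x time) EEG epochs; 40 SSVEP targets assumed."""

    def __init__(self, n_channels=9, n_samples=100, n_classes=40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),           # spatial filter across electrodes
            nn.Conv2d(16, 16, kernel_size=(1, 10), padding=(0, 5)),  # temporal filter
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Dropout(0.5),
        )
        with torch.no_grad():  # probe the flattened feature size
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))


def pretrain_then_finetune(model, source_loader, target_loader, device="cpu"):
    """Stage 1: train on pooled data from many source subjects (inter-subject features).
    Stage 2: fine-tune the same parameters on the new user's data (intra-subject features)."""
    loss_fn = nn.CrossEntropyLoss()
    model.to(device)
    for loader, lr, epochs in [(source_loader, 1e-3, 100), (target_loader, 1e-4, 20)]:
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x.to(device)), y.to(device))
                loss.backward()
                opt.step()
    return model
```

The key design choice implied by the abstract is parameter transfer: the fine-tuning stage starts from the pretrained weights rather than a random initialization, so only a small amount of user-specific calibration data is needed to adapt the shared inter-subject representation.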
Pages: 9