An EEG-based cross-subject interpretable CNN for game player expertise level classification

Cited by: 11
Authors
Lin, Liqi [1 ]
Li, Pengrui [1 ]
Wang, Qinghua [2 ]
Bai, Binnan [1 ]
Cui, Ruifang [3 ,4 ]
Yu, Zhenxia [1 ]
Gao, Dongrui [1 ,3 ]
Zhang, Yongqing [1 ]
Affiliations
[1] Chengdu Univ Informat Technol, Sch Comp Sci, Chengdu 610225, Peoples R China
[2] Hubei Wuhan Publ Secur Bur, Wuhan 430070, Hubei, Peoples R China
[3] Univ Elect Sci & Technol China, Sch Life Sci & Technol, Chengdu 611731, Peoples R China
[4] Univ Elect Sci & Technol China, Clin Hosp, Chengdu Brain Sci Inst, MOE Key Lab Neuroinformat, Chengdu 610054, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Electroencephalography (EEG); Game player; Depthwise separable convolution; Cross-subject; Interpretable technology; COGNITIVE FUNCTIONS; VIDEO; PERFORMANCE; CMARS;
DOI
10.1016/j.eswa.2023.121658
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Electroencephalogram (EEG) signals have been demonstrated to be an effective means of classifying game player expertise level, as they reflect the activity state of the player's brain during the game task. Although many efforts have been made to classify the expertise level of game players based on EEG signals, existing methods still need improvement in identifying common brain patterns across different subjects. In this article, we propose a Spatiotemporal-Based Brain Pattern Recognition Network (BPR-STNet), which uses depthwise separable convolution to extract the spatiotemporal features of EEG signals and incorporates Gradient-weighted Class Activation Mapping (Grad-CAM) interpretable technology to explore common brain patterns among different subjects. To evaluate the model's performance, we use non-invasive wearable devices to collect EEG signals from 19 subjects during gameplay. The results of the leave-one-out cross-subject experiments demonstrate that the model's average game player expertise level classification accuracy is 82%-86.32% across the five frequency bands (δ, θ, α, β, γ). Among these, the γ band achieves the highest accuracy, 86.32%. More importantly, the average accuracy of our method is 1.56%-5.79% higher than that of state-of-the-art deep learning methods. Moreover, the interpretability results indicate that the model can learn biologically significant features from EEG frequency bands. Overall, this study offers a new approach to using deep learning to explore common brain patterns and features among different subjects at the same expertise level. The BPR-STNet code is available at https://github.com/L000077/BPR-STNet.
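The depthwise separable convolution named in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' BPR-STNet implementation (see their GitHub repository for that): all shapes, names, and random weights here are assumptions. It shows the two stages of the operation on a multichannel EEG segment, a per-channel (depthwise) temporal filter followed by a 1x1 (pointwise) mix across channels, which is what makes the layer cheaper in parameters than a standard convolution.

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_k, point_w):
    """x: (C, T) EEG segment; depth_k: (C, K) one temporal kernel per channel;
    point_w: (F, C) pointwise mixing weights. Returns (F, T - K + 1)."""
    C, T = x.shape
    # Depthwise stage: each channel is filtered only by its own kernel
    # (kernel reversed so np.convolve computes a cross-correlation).
    dw = np.stack([np.convolve(x[c], depth_k[c][::-1], mode="valid")
                   for c in range(C)])
    # Pointwise stage: a 1x1 convolution mixes the C channels into F feature maps.
    return point_w @ dw

C, T, K, F = 4, 32, 5, 8  # channels, samples, kernel width, output features
rng = np.random.default_rng(0)
x = rng.standard_normal((C, T))
out = depthwise_separable_conv1d(x,
                                 rng.standard_normal((C, K)),
                                 rng.standard_normal((F, C)))
print(out.shape)                    # (8, 28)
print(C * K + F * C, F * C * K)     # separable vs standard parameter count
```

With these toy sizes the separable layer needs C*K + F*C = 52 weights versus F*C*K = 160 for a standard convolution, which is the usual motivation for using it on small, noisy EEG datasets.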
Pages: 12
Related Papers (50 total)
  • [31] Cross-Subject EEG-Based Emotion Recognition via Semisupervised Multisource Joint Distribution Adaptation
    Jimenez-Guarneros, Magdiel
    Fuentes-Pineda, Gibran
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [32] Cross-subject EEG-based Emotion Recognition Using Adversarial Domain Adaption with Attention Mechanism
    Ye, Yalan
    Zhu, Xin
    Li, Yunxia
    Pan, Tongjie
    He, Wenwen
    2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC), 2021, : 1140 - 1144
  • [33] A class alignment network based on self-attention for cross-subject EEG classification
    Ma, Sufan
    Zhang, Dongxiao
    Wang, Jiayi
    Xie, Jialiang
BIOMEDICAL PHYSICS & ENGINEERING EXPRESS, 2025, 11 (01)
  • [34] EEG-based cross-subject emotion recognition using multi-source domain transfer learning
    Quan, Jie
    Li, Ying
    Wang, Lingyue
    He, Renjie
    Yang, Shuo
    Guo, Lei
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 84
  • [35] JOINT SEMI-SUPERVISED FEATURE AUTO-WEIGHTING AND CLASSIFICATION MODEL FOR EEG-BASED CROSS-SUBJECT SLEEP QUALITY EVALUATION
    Peng, Yong
    Li, Qingxi
    Kong, Wanzeng
    Zhang, Jianhai
    Lu, Bao-Liang
    Cichocki, Andrzej
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 946 - 950
  • [36] Cross-subject classification of depression by using multiparadigm EEG feature fusion
    Yang, Jianli
    Zhang, Zhen
    Fu, Zhiyu
    Li, Bing
    Xiong, Peng
    Liu, Xiuling
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2023, 233
  • [37] Toward cross-subject and cross-session generalization in EEG-based emotion recognition: Systematic review, taxonomy, and methods
    Apicella, Andrea
    Arpaia, Pasquale
    D'Errico, Giovanni
    Marocco, Davide
    Mastrati, Giovanna
    Moccaldi, Nicola
    Prevete, Roberto
    NEUROCOMPUTING, 2024, 604
  • [38] A Cross-Attention-Based Class Alignment Network for Cross-Subject EEG Classification in a Heterogeneous Space
    Ma, Sufan
    Zhang, Dongxiao
    SENSORS, 2024, 24 (21)
  • [39] Dual selections based knowledge transfer learning for cross-subject motor imagery EEG classification
    Luo, Tian-jian
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [40] JOINT TEMPORAL CONVOLUTIONAL NETWORKS AND ADVERSARIAL DISCRIMINATIVE DOMAIN ADAPTATION FOR EEG-BASED CROSS-SUBJECT EMOTION RECOGNITION
    He, Zhipeng
    Zhong, Yongshi
    Pan, Jiahui
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3214 - 3218