Emotion Classification Based on Transformer and CNN for EEG Spatial-Temporal Feature Learning

Cited by: 8
Authors
Yao, Xiuzhen [1 ,2 ]
Li, Tianwen [2 ,3 ]
Ding, Peng [1 ,2 ]
Wang, Fan [1 ,2 ]
Zhao, Lei [2 ,3 ]
Gong, Anmin [4 ]
Nan, Wenya [5 ]
Fu, Yunfa [1 ,2 ]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650500, Peoples R China
[2] Kunming Univ Sci & Technol, Brain Cognit & Brain Comp Intelligence Integrat Gr, Kunming 650500, Peoples R China
[3] Kunming Univ Sci & Technol, Fac Sci, Kunming 650500, Peoples R China
[4] Chinese Peoples Armed Police Force Engn Univ, Sch Informat Engn, Xian 710086, Peoples R China
[5] Shanghai Normal Univ, Coll Educ, Dept Psychol, Shanghai 200234, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
EEG; emotion classification; transformer; CNN; multi-head attention;
DOI
10.3390/brainsci14030268
Chinese Library Classification
Q189 [Neuroscience]
Discipline Code
071006
Abstract
Objectives: The temporal and spatial information of electroencephalogram (EEG) signals is crucial for emotion classification models, but existing approaches rely excessively on manual feature extraction. The transformer model can perform automatic feature extraction; however, its potential has not been fully explored for classifying emotion-related EEG signals. To address these challenges, the present study proposes a novel model based on transformer and convolutional neural networks (TCNN) for EEG spatial-temporal (EEG ST) feature learning and automatic emotion classification. Methods: The proposed EEG ST-TCNN model uses position encoding (PE) and multi-head attention to perceive channel positions and timing information in EEG signals. Two parallel transformer encoders extract spatial and temporal features from emotion-related EEG signals, and a CNN aggregates these spatial and temporal features, which are then classified using Softmax. Results: The proposed EEG ST-TCNN model achieved an accuracy of 96.67% on the SEED dataset, and accuracies of 95.73%, 96.95%, and 96.34% for the arousal-valence, arousal, and valence dimensions, respectively, on the DEAP dataset. Conclusions: The results demonstrate the effectiveness of the proposed ST-TCNN model, which outperforms recent relevant studies in emotion classification. Significance: The proposed EEG ST-TCNN model has the potential to be used for EEG-based automatic emotion recognition.
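The core idea in the Methods section, two parallel attention branches over the same EEG segment (one treating channels as tokens for spatial features, one treating time steps as tokens for temporal features) followed by aggregation, can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: it uses a single attention head with identity Q/K/V projections, and mean pooling stands in for the paper's CNN aggregation stage; the function names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # single-head scaled dot-product self-attention, shape (n_tokens, dim);
    # identity projections stand in for learned Q/K/V weights
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    return softmax(scores, axis=-1) @ tokens

def st_features(eeg):
    # eeg: (channels, time).
    # Spatial branch: channels are tokens attending over channels.
    # Temporal branch: time steps are tokens (the transposed view).
    spatial = self_attention(eeg)      # (channels, time)
    temporal = self_attention(eeg.T)   # (time, channels)
    # mean pooling as a stand-in for the paper's CNN aggregation
    return np.concatenate([spatial.mean(axis=0), temporal.mean(axis=0)])

# e.g. a 32-channel segment of 128 samples yields a 160-dim feature vector
feat = st_features(np.random.randn(32, 128))
```

In the actual ST-TCNN, each branch is a full transformer encoder with position encoding and multi-head attention, and the pooled features feed a CNN and Softmax classifier; the sketch only shows why the two token orientations yield complementary spatial and temporal views of the same signal.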
Pages: 15
Related Papers
50 total; entries [41]-[50] shown
  • [41] EEG-Based Emotion Recognition Using Spatial-Temporal Graph Convolutional LSTM With Attention Mechanism
    Feng, Lin
    Cheng, Cheng
    Zhao, Mingyan
    Deng, Huiyuan
    Zhang, Yong
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2022, 26 (11) : 5406 - 5417
  • [42] EEG-based Emotion Recognition Using Spatial-Temporal Representation via Bi-GRU
    Lew, Wai-Cheong Lincoln
    Wang, Di
    Shylouskaya, Katsiaryna
    Zhang, Zhuo
    Lim, Joo-Hwee
    Ang, Kai Keng
    Tan, Ah-Hwee
    42ND ANNUAL INTERNATIONAL CONFERENCES OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY: ENABLING INNOVATIVE TECHNOLOGIES FOR GLOBAL HEALTHCARE EMBC'20, 2020, : 116 - 119
  • [43] Fast Spatial-Temporal Transformer Network
    Escher, Rafael Molossi
    de Bem, Rodrigo Andrade
    Jorge Drews Jr, Paulo Lilles
    2021 34TH SIBGRAPI CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI 2021), 2021, : 65 - 72
  • [44] Violent video classification based on spatial-temporal cues using deep learning
    Xu, Xingyu
    Wu, Xiaoyu
    Wang, Ge
    Wang, Huimin
    2018 11TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), VOL 1, 2018, : 319 - 322
  • [45] Spatial-Temporal Feature Representation Learning for Facial Fatigue Detection
    Wang, Changyuan
    Yan, Ting
    Jia, Hongbo
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2018, 32 (12)
  • [46] Lane Marking Detection and Classification using Spatial-Temporal Feature Pooling
    Tabelini, Lucas
    Berriel, Rodrigo
    De Souza, Alberto F.
    Badue, Claudine
    Oliveira-Santos, Thiago
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [47] Learning Complementary Spatial-Temporal Transformer for Video Salient Object Detection
    Liu, Nian
    Nan, Kepan
    Zhao, Wangbo
    Yao, Xiwen
    Han, Junwei
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (08) : 10663 - 10673
  • [48] EEG-Based Emotion Classification with Wavelet Entropy Feature
    Song, Xiaolin
    Kang, Qiaoju
    Tian, Zekun
    Yang, Yi
    Yang, Sihao
    Gao, Qiang
    Song, Yu
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 5685 - 5689
  • [49] EEG Emotion Classification Based on Multi-Feature Fusion
    Liang, Mingjing
    Wang, Lu
    Wen, Xin
    Cao, Rui
    Computer Engineering and Applications, 2024, 59 (05) : 155 - 159
  • [50] Differential Entropy Feature for EEG-Based Emotion Classification
    Duan, Ruo-Nan
    Zhu, Jia-Yi
    Lu, Bao-Liang
    2013 6TH INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING (NER), 2013, : 81 - 84