Emotion Classification Based on Transformer and CNN for EEG Spatial-Temporal Feature Learning

Cited: 8
Authors
Yao, Xiuzhen [1 ,2 ]
Li, Tianwen [2 ,3 ]
Ding, Peng [1 ,2 ]
Wang, Fan [1 ,2 ]
Zhao, Lei [2 ,3 ]
Gong, Anmin [4 ]
Nan, Wenya [5 ]
Fu, Yunfa [1 ,2 ]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650500, Peoples R China
[2] Kunming Univ Sci & Technol, Brain Cognit & Brain Comp Intelligence Integrat Gr, Kunming 650500, Peoples R China
[3] Kunming Univ Sci & Technol, Fac Sci, Kunming 650500, Peoples R China
[4] Chinese Peoples Armed Police Force Engn Univ, Sch Informat Engn, Xian 710086, Peoples R China
[5] Shanghai Normal Univ, Coll Educ, Dept Psychol, Shanghai 200234, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
EEG; emotion classification; transformer; CNN; multi-head attention;
DOI
10.3390/brainsci14030268
CLC Classification
Q189 [Neuroscience];
Discipline Code
071006;
Abstract
Objectives: The temporal and spatial information of electroencephalogram (EEG) signals is crucial for recognizing features in emotion classification models, but extracting it typically relies heavily on manual feature engineering. The transformer model is capable of automatic feature extraction; however, its potential has not been fully explored for classifying emotion-related EEG signals. To address these challenges, the present study proposes a novel model based on transformer and convolutional neural networks (TCNN) for EEG spatial-temporal (EEG ST) feature learning for automatic emotion classification. Methods: The proposed EEG ST-TCNN model utilizes position encoding (PE) and multi-head attention to perceive channel positions and timing information in EEG signals. Two parallel transformer encoders in the model extract spatial and temporal features from emotion-related EEG signals, and a CNN aggregates the EEG's spatial and temporal features, which are subsequently classified using Softmax. Results: The proposed EEG ST-TCNN model achieved an accuracy of 96.67% on the SEED dataset and accuracies of 95.73%, 96.95%, and 96.34% for the arousal-valence, arousal, and valence dimensions, respectively, on the DEAP dataset. Conclusions: The results demonstrate the effectiveness of the proposed ST-TCNN model, with superior performance in emotion classification compared to recent relevant studies. Significance: The proposed EEG ST-TCNN model has the potential to be used for EEG-based automatic emotion recognition.
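The architecture described in the abstract (position encoding, two parallel transformer encoders over the spatial/channel and temporal axes, a CNN aggregator, and a Softmax classifier) can be sketched in PyTorch as below. This is a minimal illustrative sketch, not the authors' implementation: all layer sizes, depths, head counts, and the default channel/time dimensions are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class STTCNN(nn.Module):
    """Hedged sketch of the ST-TCNN idea: two parallel transformer encoders
    (one treating EEG channels as tokens, one treating time steps as tokens),
    whose token features are concatenated and aggregated by a 1-D CNN
    before Softmax classification. Hyperparameters are illustrative."""

    def __init__(self, n_channels=62, n_times=200, d_model=64, n_classes=3):
        super().__init__()
        # Project each channel's time series / each time step's channel vector
        # into the model dimension.
        self.spatial_proj = nn.Linear(n_times, d_model)      # tokens = channels
        self.temporal_proj = nn.Linear(n_channels, d_model)  # tokens = time steps
        # Learned position encodings for channel order and time order.
        self.spatial_pe = nn.Parameter(torch.zeros(1, n_channels, d_model))
        self.temporal_pe = nn.Parameter(torch.zeros(1, n_times, d_model))
        def make_encoder():
            layer = nn.TransformerEncoderLayer(
                d_model, nhead=4, batch_first=True)  # multi-head attention
            return nn.TransformerEncoder(layer, num_layers=2)
        self.spatial_enc = make_encoder()
        self.temporal_enc = make_encoder()
        # 1-D CNN aggregates the concatenated spatial/temporal token features.
        self.cnn = nn.Sequential(
            nn.Conv1d(d_model, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, channels, times)
        s = self.spatial_enc(self.spatial_proj(x) + self.spatial_pe)
        t = self.temporal_enc(
            self.temporal_proj(x.transpose(1, 2)) + self.temporal_pe)
        z = torch.cat([s, t], dim=1).transpose(1, 2)  # (batch, d_model, tokens)
        z = self.cnn(z).squeeze(-1)                   # (batch, 32)
        return torch.softmax(self.fc(z), dim=-1)      # class probabilities

model = STTCNN()
probs = model(torch.randn(4, 62, 200))  # a batch of 4 synthetic EEG segments
print(probs.shape)
```

A real pipeline would additionally need preprocessing (band-pass filtering, segmentation) and training with cross-entropy loss; the sketch only captures the parallel spatial/temporal encoding and CNN fusion described above.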
Pages: 15
Related Papers
50 records total
  • [31] Tourism demand forecasting: a deep learning model based on spatial-temporal transformer
    Chen, Jiaying
    Li, Cheng
    Huang, Liyao
    Zheng, Weimin
    TOURISM REVIEW, 2025, 80 (03) : 648 - 663
  • [32] Deblurring Videos Using Spatial-Temporal Contextual Transformer With Feature Propagation
    Zhang, Liyan
    Xu, Boming
    Yang, Zhongbao
    Pan, Jinshan
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 6354 - 6366
  • [33] An approach to quantifying the multi-channel EEG spatial-temporal feature
    Lo, PC
    Chung, WP
    BIOMETRICAL JOURNAL, 2000, 42 (07) : 901 - 914
  • [34] Learning a spatial-temporal texture transformer network for video inpainting
    Ma, Pengsen
    Xue, Tao
    FRONTIERS IN NEUROROBOTICS, 2022, 16
  • [35] Multi-Kernel Temporal and Spatial Convolution for EEG-Based Emotion Classification
    Emsawas, Taweesak
    Morita, Takashi
    Kimura, Tsukasa
    Fukui, Ken-ichi
    Numao, Masayuki
    SENSORS, 2022, 22 (21)
  • [36] Subject-independent emotion recognition of EEG signals using graph attention-based spatial-temporal pattern learning
    Zhu, Yiwen
    Guo, Yeshuang
    Zhu, Wenzhe
    Di, Lare
    Yin, Thong
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 7070 - 7075
  • [37] CNN-Transformer network for student learning effect prediction using EEG signals based on spatio-temporal feature fusion
    Xie, Hui
    Dong, Zexiao
    Yang, Huiting
    Luo, Yanxia
    Ren, Shenghan
    Zhang, Pengyuan
    He, Jiangshan
    Jia, Chunli
    Yang, Yuqiang
    Jiang, Mingzhe
    Gao, Xinbo
    Chen, Xueli
    APPLIED SOFT COMPUTING, 2025, 170
  • [38] Spatial-temporal network for fine-grained-level emotion EEG recognition
    Ji, Youshuo
    Li, Fu
    Fu, Boxun
    Li, Yang
    Zhou, Yijin
    Niu, Yi
    Zhang, Lijian
    Chen, Yuanfang
    Shi, Guangming
    JOURNAL OF NEURAL ENGINEERING, 2022, 19 (03)
  • [39] RANDOM-SAMPLING-BASED SPATIAL-TEMPORAL FEATURE FOR CONSUMER VIDEO CONCEPT CLASSIFICATION
    Wei, Anjun
    Pei, Yuru
    Zha, Hongbin
    2012 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2012), 2012, : 1861 - 1864
  • [40] EEG-based patient-specific seizure prediction based on Spatial-Temporal Hypergraph Attention Transformer
    Dong, Changxu
    Sun, Dengdi
    Zhang, Zejing
    Luo, Bin
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2025, 100