Emotion recognition using hierarchical spatial-temporal learning transformer from regional to global brain

Cited by: 14
Authors
Cheng, Cheng [1 ]
Liu, Wenzhe [2 ]
Feng, Lin [1 ,3 ]
Jia, Ziyu [4 ]
Affiliations
[1] Dalian Univ Technol, Dept Comp Sci & Technol, Dalian, Peoples R China
[2] Huzhou Univ, Sch Informat Engn, Huzhou, Peoples R China
[3] Dalian Minzu Univ, Sch Informat & Commun Engn, Dalian, Peoples R China
[4] Univ Chinese Acad Sci, Chinese Acad Sci, Brainnetome Ctr, Inst Automat, Beijing, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Emotion recognition; Electroencephalogram (EEG); Transformer; Spatiotemporal features;
DOI
10.1016/j.neunet.2024.106624
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Emotion recognition is an essential but challenging task in human-computer interaction systems due to the distinctive spatial structures and dynamic temporal dependencies associated with each emotion. However, current approaches fail to accurately capture the intricate effects of electroencephalogram (EEG) signals across different brain regions on emotion recognition. Therefore, this paper designs a transformer-based method, denoted R2G-STLT, which relies on spatial-temporal transformer encoders with regional-to-global hierarchical learning to extract representative spatiotemporal features from the electrode level to the brain-region level. The regional spatial-temporal transformer (RST-Trans) encoder captures spatial information and contextual dependencies at the electrode level to learn regional spatiotemporal features. Then, the global spatial-temporal transformer (GST-Trans) encoder extracts reliable global spatiotemporal features, reflecting the impact of various brain regions on emotion recognition tasks. Moreover, a multi-head attention mechanism is incorporated into the GST-Trans encoder, enabling it to capture long-range spatial-temporal information among brain regions. Finally, subject-independent experiments are conducted on each frequency band of the DEAP, SEED, and SEED-IV datasets to assess the performance of the proposed model. Results indicate that the R2G-STLT model surpasses several state-of-the-art approaches.
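As a rough illustration of the regional-to-global hierarchy described in the abstract, the sketch below (not the authors' implementation) stacks two standard PyTorch transformer encoders: one attending over electrodes within each brain region as a stand-in for the RST-Trans encoder, and one attending across region-level tokens as a stand-in for the GST-Trans encoder. The class name R2GSTLTSketch, the electrode-to-region grouping, the feature dimension, the pooling steps, and the collapsed temporal dimension are all illustrative assumptions, not details taken from the paper.

# Minimal sketch (assumptions noted above) of regional-to-global hierarchical
# transformer encoding for EEG emotion recognition.
import torch
import torch.nn as nn


class R2GSTLTSketch(nn.Module):
    def __init__(self, regions, feat_dim=5, d_model=64, n_heads=4, n_classes=3):
        super().__init__()
        self.regions = regions                      # list of electrode-index lists (assumed grouping)
        self.embed = nn.Linear(feat_dim, d_model)   # per-electrode feature embedding
        # Regional encoder: attends over electrodes within one brain region (RST-Trans stand-in)
        self.regional_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        # Global encoder: multi-head attention across region-level tokens (GST-Trans stand-in)
        self.global_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, electrodes, feat_dim), e.g. band-specific features per electrode
        h = self.embed(x)
        region_tokens = []
        for idx in self.regions:
            r = self.regional_enc(h[:, idx, :])     # electrode-level encoding within one region
            region_tokens.append(r.mean(dim=1))     # pool electrodes into one region token
        g = torch.stack(region_tokens, dim=1)       # (batch, n_regions, d_model)
        g = self.global_enc(g)                      # long-range dependencies among regions
        return self.classifier(g.mean(dim=1))       # emotion logits


if __name__ == "__main__":
    regions = [[0, 1, 2], [3, 4], [5, 6, 7], [8, 9]]   # toy electrode grouping, not the paper's
    model = R2GSTLTSketch(regions, feat_dim=5)
    logits = model(torch.randn(8, 10, 5))              # 8 samples, 10 electrodes, 5 band features
    print(logits.shape)                                 # torch.Size([8, 3])

In the paper's setting, the inputs would be per-electrode EEG features from DEAP, SEED, or SEED-IV, and temporal dependence would be modeled explicitly rather than folded into a single feature vector per electrode as in this sketch.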
Pages: 12