DAGAM: a domain adversarial graph attention model for subject-independent EEG-based emotion recognition

Cited by: 25
Authors
Xu, Tao [1 ]
Dang, Wang [1 ]
Wang, Jiabao [1 ]
Zhou, Yun [2 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Software, Xian, Peoples R China
[2] Shaanxi Normal Univ, Fac Educ, Xian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
EEG; emotion recognition; subject independent; DIFFERENTIAL ENTROPY FEATURE;
DOI
10.1088/1741-2552/acae06
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Subject Classification Code
0831;
Abstract
Objective. Owing to individual differences in electroencephalogram (EEG) signals, a model trained with a subject-dependent technique on one person's data is often inaccurate when applied to another person for emotion recognition. The subject-dependent approach may therefore generalize poorly compared with the subject-independent approach. However, existing studies have attempted to exploit the topology of EEG channels without fully utilizing it, and have not resolved the mismatch in data distribution between the source and target domains. Approach. To eliminate individual differences in EEG signals, this paper proposes the domain adversarial graph attention model (DAGAM), a novel EEG-based emotion recognition model. The basic idea is to model multichannel EEG signals as a graph whose structure follows the biological topology of the channels; graph theory can topologically describe and analyze channel relationships and mutual dependencies. Then, unlike other graph convolutional networks, self-attention pooling is used to extract salient EEG features from the graph, effectively improving performance. Finally, following graph pooling, a graph-based domain adversarial model identifies and handles EEG variation across subjects, achieving good generalizability efficiently. Main results. We conduct extensive evaluations on two benchmark data sets (SEED and SEED IV) and obtain state-of-the-art results in subject-independent emotion recognition. Our model boosts SEED accuracy to 92.59% (a 4.06% improvement) with the lowest standard deviation (STD) of 3.21% (a 2.46% reduction), and SEED IV accuracy to 80.74% (a 6.90% improvement) with the lowest STD of 4.14% (a 3.88% reduction). Computational complexity is drastically reduced compared with similar efforts (33 times lower). Significance. We have developed a model that significantly reduces computation time while maintaining accuracy, making EEG-based emotion decoding more practical and generalizable.
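The record does not include the authors' implementation, but the pipeline the abstract describes maps onto a short PyTorch sketch: EEG channels as graph nodes, one graph-attention layer, top-k self-attention pooling, and a gradient-reversal domain (subject) classifier. Everything below is a minimal illustration under assumed SEED-style shapes (62 channels, 5 differential-entropy bands, 3 emotion classes, 15 subjects); the layer sizes, adjacency construction, and all names are assumptions for illustration, not the published DAGAM code.

    # Minimal sketch of the DAGAM-style pipeline from the abstract (assumed
    # shapes and names, not the authors' code): graph attention over EEG
    # channels -> self-attention pooling -> gradient-reversal domain head.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negate and scale gradients backward."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lambd * grad_out, None

    class GraphAttentionLayer(nn.Module):
        """Single-head GAT-style layer over a fixed channel adjacency."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.W = nn.Linear(in_dim, out_dim, bias=False)
            self.a = nn.Linear(2 * out_dim, 1, bias=False)
        def forward(self, x, adj):             # x: (B, N, in_dim), adj: (N, N)
            h = self.W(x)                      # (B, N, out_dim)
            N = h.size(1)
            hi = h.unsqueeze(2).expand(-1, -1, N, -1)
            hj = h.unsqueeze(1).expand(-1, N, -1, -1)
            e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
            e = e.masked_fill(adj == 0, float('-inf'))
            attn = torch.softmax(e, dim=-1)    # attention over graph neighbors
            return F.elu(torch.matmul(attn, h))

    class DAGAMSketch(nn.Module):
        def __init__(self, n_channels=62, feat_dim=5, hidden=32,
                     n_classes=3, n_subjects=15, keep=0.5):
            super().__init__()
            self.gat = GraphAttentionLayer(feat_dim, hidden)
            self.score = nn.Linear(hidden, 1)  # self-attention pooling scores
            self.keep = keep
            self.emotion_head = nn.Linear(hidden, n_classes)
            self.domain_head = nn.Linear(hidden, n_subjects)
        def forward(self, x, adj, lambd=1.0):
            h = self.gat(x, adj)                        # (B, N, hidden)
            s = self.score(h).squeeze(-1)               # node saliency, (B, N)
            k = max(1, int(self.keep * h.size(1)))
            idx = s.topk(k, dim=-1).indices             # keep top-k salient nodes
            h = torch.gather(h, 1, idx.unsqueeze(-1).expand(-1, -1, h.size(-1)))
            g = h.mean(dim=1)                           # graph-level readout
            y_emotion = self.emotion_head(g)
            y_domain = self.domain_head(GradReverse.apply(g, lambd))
            return y_emotion, y_domain

    # Toy usage on SEED-like shapes: 62 channels x 5 differential-entropy bands.
    x = torch.randn(8, 62, 5)
    adj = (torch.rand(62, 62) > 0.7).float()
    adj.fill_diagonal_(1)                       # self-loops keep softmax defined
    emo, dom = DAGAMSketch()(x, adj)

During training, the emotion head would be optimized on source-subject labels while the gradient-reversal trick trains the feature extractor to fool the subject classifier; this adversarial pressure is what pushes the pooled graph features toward subject-invariance, matching the domain adversarial idea the abstract names.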
Pages: 10