FMLAN: A novel framework for cross-subject and cross-session EEG emotion recognition

Cited by: 8
Authors
Yu, Peng [1 ,2 ,3 ]
He, Xiaopeng [1 ,2 ,3 ]
Li, Haoyu [1 ,2 ,3 ]
Dou, Haowen [1 ,2 ,3 ]
Tan, Yeyu [1 ,2 ,3 ]
Wu, Hao [4 ]
Chen, Badong [1 ,2 ,3 ]
Affiliations
[1] Natl Key Lab Human Machine Hybrid Augmented Intell, Xian, Peoples R China
[2] Natl Engn Res Ctr Visual Informat & Applicat, Xian, Peoples R China
[3] Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian, Peoples R China
[4] Xian Univ Technol, Sch Elect Engn, Xian, Peoples R China
Keywords
Electroencephalogram (EEG); Emotion recognition; Multi-source domain adaptation (MDA); Transfer learning; Domain adaptation
DOI
10.1016/j.bspc.2024.106912
CLC Number
R318 [Biomedical Engineering]
Subject Classification Code
0831
Abstract
Emotion recognition plays a significant role in brain-computer interface (BCI) applications. Electroencephalography (EEG) is widely used for emotion recognition because of its high temporal resolution and reliability. However, EEG signals vary across subjects and sessions, which limits the effectiveness of emotion recognition methods on new users. To address this problem, multi-source domain adaptation has been introduced into EEG emotion recognition. Cross-subject and cross-session emotion recognition methods hinge on two key aspects: extracting features relevant to the emotion recognition task, and aligning the features of labeled subjects or sessions (source domains) with those of the unlabeled subject or session (target domain). In this study, we propose a Fine-grained Mutual Learning Adaptation Network (FMLAN) that improves on both aspects. Specifically, we establish multiple separate domain adaptation sub-networks, each corresponding to a specific source domain, together with a single joint domain adaptation sub-network that combines all source domains. For EEG emotion recognition, we introduce mutual learning for the first time to connect the separate domain adaptation sub-networks with the joint domain adaptation sub-network. This facilitates the transfer of complementary information between domains, enabling each sub-network to extract more comprehensive and robust features. In addition, we design a novel Fine-grained Alignment Module (FAM) that takes category and decision-boundary information into account during feature alignment, ensuring more accurate alignment. Extensive experiments on the SEED and SEED-IV datasets demonstrate that our approach outperforms state-of-the-art methods.
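The record does not include code, but the mutual-learning coupling described in the abstract can be sketched concretely. The following minimal PyTorch sketch is an illustration only, not the authors' implementation: the sub-network layout, the 310-dimensional differential-entropy input (62 channels x 5 frequency bands, as commonly extracted on SEED), the 3-class output, and the symmetric-KL form of the mutual-learning loss are all assumptions, and the names DASubNetwork and mutual_learning_loss are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DASubNetwork(nn.Module):
    """Hypothetical domain-adaptation sub-network: feature extractor + classifier."""
    def __init__(self, in_dim=310, hid_dim=64, n_classes=3):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        return self.classifier(self.extractor(x))

def mutual_learning_loss(logits_a, logits_b):
    """Symmetric KL divergence between two sub-networks' predictions, so
    complementary information flows in both directions (the exact loss form
    used by FMLAN is an assumption here)."""
    log_pa = F.log_softmax(logits_a, dim=1)
    log_pb = F.log_softmax(logits_b, dim=1)
    return (F.kl_div(log_pa, log_pb, reduction="batchmean", log_target=True)
            + F.kl_div(log_pb, log_pa, reduction="batchmean", log_target=True))

# Toy setup: 310-dim differential-entropy features and 3 emotion classes (SEED-like).
n_sources, batch = 3, 8
target_x = torch.randn(batch, 310)                          # unlabeled target-domain batch
separate_nets = [DASubNetwork() for _ in range(n_sources)]  # one sub-network per source domain
joint_net = DASubNetwork()                                  # trained on all sources combined

joint_logits = joint_net(target_x)
ml_loss = sum(mutual_learning_loss(net(target_x), joint_logits)
              for net in separate_nets)
print(f"mutual-learning loss on the target batch: {ml_loss.item():.4f}")
```

In the full FMLAN this coupling would sit alongside the supervised source-domain losses and the Fine-grained Alignment Module; the sketch only shows how mutual learning ties each separate sub-network to the joint sub-network on a target batch.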
Pages: 12