A novel multimodal EEG-image fusion approach for emotion recognition: introducing a multimodal KMED dataset

Cited by: 0
Authors
Bahar Hatipoglu Yilmaz [1 ]
Cemal Kose [1 ]
Cagatay Murat Yilmaz [2 ]
Affiliations
[1] Karadeniz Technical University, Department of Computer Engineering
[2] Karadeniz Technical University, Department of Software Engineering
Keywords
Multimodal emotion recognition; Feature-level fusion; EEG; Face images; KMED dataset; DEAP dataset
DOI
10.1007/s00521-024-10925-5
Abstract
Nowadays, bio-signal-based emotion recognition has become a popular research topic. However, several problems must be solved before emotion-based systems can be realized. We therefore propose a feature-level fusion (FLF) method for multimodal emotion recognition (MER). In this method, EEG signals are first transformed into signal images named angle amplitude graphs (AAG). Second, facial images are recorded simultaneously with the EEG signals, and peak frames are selected from among all the recorded facial images. These modalities are then fused at the feature level. Finally, all feature extraction and classification experiments are evaluated on these fused features. In this work, we also introduce a new multimodal benchmark dataset, KMED, which includes EEG signals and facial videos from 14 participants. Experiments were carried out on the newly introduced KMED dataset and the publicly available DEAP dataset. On the KMED dataset, we achieved the highest classification accuracy of 89.95% with the k-Nearest Neighbor algorithm on the (3-disgusting, 4-relaxing) class pair. On the DEAP dataset, we achieved the highest accuracy of 92.44% with support vector machines on arousal, compared to the results of previous works. These results demonstrate that the proposed feature-level fusion approach has considerable potential for MER systems. Additionally, the introduced KMED benchmark dataset will facilitate future studies of multimodal emotion recognition.
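The feature-level fusion step described in the abstract — concatenating per-trial EEG and facial feature vectors into one vector before classification — can be sketched as follows. This is a minimal illustration only: the synthetic random features, the dimensions, and the hand-rolled k-NN are assumptions for demonstration, not the paper's AAG transform, peak-frame selection, or experimental pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 40
# Hypothetical per-trial feature vectors for each modality (placeholders
# for features extracted from AAG images and peak facial frames).
eeg_feats = rng.normal(size=(n_trials, 32))
face_feats = rng.normal(size=(n_trials, 64))
labels = rng.integers(0, 2, size=n_trials)  # binary class pair, e.g. disgusting vs. relaxing

# Feature-level fusion: concatenate the two modalities' feature vectors
# trial by trial, yielding one fused vector per trial.
fused = np.concatenate([eeg_feats, face_feats], axis=1)  # shape (40, 96)

def knn_predict(train_X, train_y, query, k=3):
    """Classify a query vector by majority vote among its k nearest neighbors."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest_labels = train_y[np.argsort(dists)[:k]]
    return np.bincount(nearest_labels).argmax()

pred = knn_predict(fused, labels, fused[0], k=3)
```

In practice the fused vectors would be split into training and test sets and fed to a classifier such as k-NN or an SVM, as the abstract describes.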
Pages: 5187–5202
Page count: 15