Emotional State Classification from MUSIC-Based Features of Multichannel EEG Signals

Cited by: 9
Authors
Hossain, Sakib Abrar [1 ,2 ]
Rahman, Md. Asadur [3 ]
Chakrabarty, Amitabha [1 ]
Rashid, Mohd Abdur [4 ,5 ]
Kuwana, Anna [5 ]
Kobayashi, Haruo [5 ]
Affiliations
[1] Brac Univ, Dept Comp Sci & Engn, Dhaka 1212, Bangladesh
[2] North South Univ, NSU Genome Res Inst, Dhaka 1229, Bangladesh
[3] Mil Inst Sci & Technol MIST, Dept Biomed Engn, Dhaka 1216, Bangladesh
[4] Noakhali Sci & Technol Univ, Dept EEE, Noakhali 3814, Bangladesh
[5] Gunma Univ, Div Elect & Informat, 1-5-1 Tenjin cho, Kiryu, Gunma 3768515, Japan
Source
BIOENGINEERING-BASEL | 2023, Vol. 10, No. 1
Keywords
EEG signal; MUSIC; PSD; feature extraction; classification; emotion recognition;
DOI
10.3390/bioengineering10010099
Chinese Library Classification (CLC)
Q81 [Bioengineering (Biotechnology)]; Q93 [Microbiology];
Discipline Classification Codes
071005; 0836; 090102; 100705;
Abstract
Electroencephalogram (EEG)-based emotion recognition is a computationally challenging problem in medical data science with interesting applications in cognitive state disclosure. EEG signals are generally classified using frequency-based features, which are often extracted with non-parametric models such as Welch's power spectral density (PSD). These non-parametric methods are computationally inefficient because of their high complexity and long run times. The main purpose of this work is to apply the multiple signal classification (MUSIC) model, a parametric frequency-spectrum-estimation technique, to extract features from multichannel EEG signals for emotional state classification on the SEED dataset. The main challenge of using MUSIC for EEG feature extraction is tuning its parameters to obtain discriminative features across classes, which is a significant contribution of this work. Another contribution is to reveal, for the first time, flaws in this dataset that contributed to the high classification accuracies reported in previous research. Using MUSIC features, this work classified three emotional states and achieved 97% accuracy on average with an artificial neural network. The proposed MUSIC model reduces feature-extraction run time by 95-96% compared with the conventional non-parametric technique (Welch's PSD).
Pages: 18
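
The following is a minimal Python sketch of the kind of feature-extraction comparison described in the abstract: a MUSIC pseudospectrum computed from the noise subspace of an autocorrelation matrix, alongside Welch's PSD as the non-parametric baseline. It is not the authors' code; the synthetic signal, sampling rate fs, autocorrelation order M, signal-subspace dimension p, and frequency grid are illustrative assumptions, not the settings used in the paper.

# Minimal sketch (not the authors' code) of MUSIC-based spectral feature
# extraction from a single EEG channel, with Welch's PSD for comparison.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from scipy.linalg import eigh, toeplitz

def music_pseudospectrum(x, fs, freqs, M=20, p=6):
    """MUSIC pseudospectrum of signal x evaluated at the given frequencies (Hz)."""
    x = x - x.mean()
    # Biased autocorrelation estimates r[0..M-1] and the Toeplitz correlation matrix
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(M)])
    R = toeplitz(r)
    # Eigendecomposition; noise subspace = eigenvectors of the M - p smallest eigenvalues
    w, V = eigh(R)                      # eigenvalues in ascending order
    En = V[:, :M - p]                   # noise-subspace eigenvectors
    # Steering vectors e(f) = [1, exp(-j2*pi*f/fs), ..., exp(-j2*pi*f*(M-1)/fs)]^T
    n = np.arange(M)[:, None]
    E = np.exp(-2j * np.pi * n * (np.asarray(freqs)[None, :] / fs))
    denom = np.sum(np.abs(En.conj().T @ E) ** 2, axis=0)
    return 1.0 / denom                  # peaks mark dominant spectral components

# Toy usage: 4 s of synthetic "EEG" with a 10 Hz (alpha-band) component plus noise
fs = 200
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs = np.linspace(1, 45, 200)
music_feat = music_pseudospectrum(eeg, fs, freqs)       # parametric (MUSIC) features
f_welch, welch_psd = welch(eeg, fs=fs, nperseg=256)     # non-parametric baseline

In this sketch the MUSIC features are the pseudospectrum values on a fixed frequency grid per channel; the paper's actual channel selection, parameter tuning, and feature layout for the artificial neural network are described in the full text.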