Human Emotion Recognition by Integrating Facial and Speech Features: An Implementation of Multimodal Framework using CNN

Times Cited: 0
Authors
Srinivas, P. V. V. S. [1 ]
Mishra, Pragnyaban [1 ]
Affiliations
[1] Koneru Lakshmaiah Educ Fdn KLEF, Dept Comp Sci & Engn, Guntur, Andhra Pradesh, India
Keywords
Emotion recognition; multimodal; fusion; MFCC; MFA; FERCNN; CREMA-D; RAVDESS;
DOI
N/A
CLC Number
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
Emotion recognition plays a prominent role in today's intelligent system applications, including human-computer interfaces, health care, law, and entertainment. Humans convey their emotions through text, voice, and facial expressions, so a multimodal emotion recognition system plays a crucial role in human-computer and intelligent-system communication. The majority of established emotion recognition algorithms identify emotions from a single modality, such as text, audio, or image data. A multimodal system instead draws information from several sources and fuses it using fusion techniques to improve recognition accuracy. This paper presents a multimodal emotion recognition system that fuses features obtained from heterogeneous modalities, namely audio and video. For audio feature extraction, energy, zero-crossing rate, and Mel-Frequency Cepstral Coefficient (MFCC) techniques are considered; of these, MFCC produced the most promising results. For video feature extraction, the videos are first converted to frames and stored in a linear scale space using a spatio-temporal Gaussian kernel. Features are then extracted from the frames by applying a Gaussian weighting function to the second-moment matrix of the linear scale-space data. The Marginal Fisher Analysis (MFA) fusion method fuses the audio and video features, and the resulting features are passed to the FERCNN model for evaluation. Experiments use the RAVDESS and CREMA-D datasets, which contain both audio and video data. The system achieves accuracies of 95.56%, 96.28%, and 95.07% on RAVDESS, and 80.50%, 97.88%, and 69.66% on CREMA-D, for the audio, video, and multimodal modalities respectively, outperforming existing multimodal systems.
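The audio feature extraction step described in the abstract (short-time energy, zero-crossing rate, and MFCC) follows the standard speech-processing pipeline. The sketch below is a minimal NumPy illustration of that pipeline, not the authors' implementation; the frame size, hop length, filter count, and function names are illustrative assumptions.

```python
import numpy as np

def frame_signal(y, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + (len(y) - frame_len) // hop
    return np.stack([y[i * hop : i * hop + frame_len] for i in range(n_frames)])

def short_time_energy(frames):
    """Per-frame energy: sum of squared samples."""
    return np.sum(frames ** 2, axis=1)

def zero_crossing_rate(frames):
    """Fraction of adjacent sample pairs whose sign differs, per frame."""
    signs = np.signbit(frames).astype(np.int8)
    return np.mean(np.abs(np.diff(signs, axis=1)), axis=1)

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel-spaced filters over the rfft frequency bins."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bin_pts = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, centre, right = bin_pts[i - 1], bin_pts[i], bin_pts[i + 1]
        for k in range(left, centre):
            fb[i - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fb[i - 1, k] = (right - k) / max(right - centre, 1)
    return fb

def mfcc(y, sr=16000, n_mfcc=13, n_fft=512, n_filters=26):
    """Window -> power spectrum -> mel filterbank -> log -> DCT-II."""
    frames = frame_signal(y) * np.hamming(400)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    log_mel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # DCT-II of the log mel energies, keeping the first n_mfcc coefficients
    k = np.arange(n_filters)
    basis = np.cos(np.pi * np.arange(n_mfcc)[:, None] * (k + 0.5)[None, :] / n_filters)
    return log_mel @ basis.T

# Usage on a synthetic 1-second, 440 Hz test tone
sr = 16000
t = np.arange(sr) / sr
y = 0.5 * np.sin(2 * np.pi * 440.0 * t)
frames = frame_signal(y)
print(frames.shape, mfcc(y).shape)  # (98, 400) (98, 13)
```

In a full system the resulting per-frame MFCC matrix (here 98 frames by 13 coefficients) would be the audio-side input to the fusion stage.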
Pages: 592 - 603
Page count: 12