Audiovisual emotion recognition in wild

Cited by: 1
Authors
Egils Avots
Tomasz Sapiński
Maie Bachmann
Dorota Kamińska
Affiliations
[1] University of Tartu,Institute of Technology
[2] Tallinn University of Technology,Department of Health Technologies, School of Information Technologies
[3] Łodz University of Technology,Institute of Mechatronics and Information Systems
Source
Machine Vision and Applications | 2019, Vol. 30
Keywords
Emotion recognition; Audio signal processing; Facial expression; Deep learning
DOI: not available
Abstract
People express emotions through different modalities. Using both verbal and nonverbal communication channels allows the creation of a system in which the emotional state is expressed more clearly and is therefore easier to understand. Expanding the focus to several expression forms can facilitate research on emotion recognition as well as on human–machine interaction. This article presents an analysis of audiovisual information for recognizing human emotions. A cross-corpus evaluation is performed using three databases as the training set (SAVEE, eNTERFACE’05 and RML) and AFEW (a database simulating real-world conditions) as the testing set. Emotional speech is represented by commonly used audio and spectral features as well as MFCC coefficients, and an SVM is used for classification. For facial expressions, faces in key frames are located with the Viola–Jones face detection algorithm, and facial emotion classification is performed by a CNN (AlexNet). Multimodal emotion recognition is based on decision-level fusion. The performance of the emotion recognition algorithm is compared against the judgments of human decision makers.
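The abstract states that the multimodal system combines the two classifiers by decision-level fusion but does not specify the fusion rule. Below is a minimal sketch of one common choice, weighted averaging of the per-class posteriors from the audio SVM and the visual CNN; the emotion label set, the weighting scheme, and the example probabilities are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Assumed label set for illustration (the basic emotions used by SAVEE/eNTERFACE'05).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fuse_decisions(audio_probs, visual_probs, w_audio=0.5):
    """Decision-level fusion: weighted average of per-class posteriors
    from the audio classifier (e.g. an SVM) and the visual classifier
    (e.g. a CNN), followed by an argmax over the fused scores."""
    audio_probs = np.asarray(audio_probs, dtype=float)
    visual_probs = np.asarray(visual_probs, dtype=float)
    fused = w_audio * audio_probs + (1.0 - w_audio) * visual_probs
    return EMOTIONS[int(np.argmax(fused))], fused

# Hypothetical posteriors: the audio branch leans toward sadness,
# the visual branch strongly toward happiness.
audio = [0.05, 0.05, 0.10, 0.20, 0.50, 0.10]
visual = [0.05, 0.05, 0.05, 0.60, 0.15, 0.10]
label, fused = fuse_decisions(audio, visual)
# With equal weights, the visual branch's stronger evidence wins: "happiness".
```

With `w_audio=0.5` the fused scores remain a valid probability distribution whenever both inputs are, which keeps the fused argmax directly comparable to either single-modality decision.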
Pages: 975–985 (10 pages)