Multiple Spatio-temporal Feature Learning for Video-based Emotion Recognition in the Wild

Cited by: 50
Authors
Lu, Cheng [1 ]
Zheng, Wenming [2 ]
Li, Chaolong [3 ]
Tang, Chuangao [3 ]
Liu, Suyuan [3 ]
Yan, Simeng [3 ]
Zong, Yuan [3 ]
Affiliations
[1] Southeast Univ, Sch Informat Sci & Engn, Nanjing, Jiangsu, Peoples R China
[2] Southeast Univ, Sch Biol Sci & Med Engn, Minist Educ, Key Lab Child Dev & Learning Sci, Nanjing, Jiangsu, Peoples R China
[3] Southeast Univ, Sch Biol Sci & Med Engn, Nanjing, Jiangsu, Peoples R China
Source
ICMI'18: PROCEEDINGS OF THE 20TH ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2018
Funding
National Natural Science Foundation of China
Keywords
Emotion Recognition; Spatio-Temporal Information; Convolutional Neural Networks (CNN); Long Short-Term Memory (LSTM); 3D Convolutional Neural Networks (3D CNN); CLASSIFICATION;
DOI
10.1145/3242969.3264992
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline code
0812
Abstract
The central difficulty of emotion recognition in the wild (EmotiW) is training a robust model that copes with diverse scenarios and anomalies. The Audio-Video Sub-challenge in EmotiW contains short audio-video clips, each labeled with one of several emotions, and the task is to determine which label a video belongs to. To improve emotion recognition in videos, we propose a multiple spatio-temporal feature fusion (MSFF) framework that depicts emotional information more accurately in the spatial and temporal dimensions by exploiting two mutually complementary sources: facial images and audio. The framework consists of two parts: a facial image model and an audio model. In the facial image model, three spatio-temporal neural network architectures extract discriminative features for the different emotions from facial expression images. First, high-level spatial features are obtained from pre-trained convolutional neural networks (CNNs), VGG-Face and ResNet-50, each fed with the frames extracted from a video. The per-frame features are then input sequentially to a Bi-directional Long Short-Term Memory (BLSTM) network to capture the dynamic variation of facial appearance textures across the video. In addition to this CNN-RNN structure, another spatio-temporal network, a deep 3-Dimensional Convolutional Neural Network (3D CNN) that extends the 2D convolution kernel to 3D, is applied to capture the evolving emotional information encoded in multiple adjacent frames. In the audio model, spectrogram images generated by preprocessing the speech are likewise modeled with a VGG-BLSTM framework to characterize affective fluctuations more efficiently. Finally, a fusion strategy over the score matrices produced by the different spatio-temporal networks is proposed to complementarily boost emotion recognition performance.
Extensive experiments show that our proposed MSFF achieves an overall accuracy of 60.64%, a large improvement over the baseline that also outperforms the result of the 2017 champion team.
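The score-level fusion step described above can be sketched as follows: each spatio-temporal network outputs one score per emotion class, and the final prediction is a weighted sum of the models' softmax-normalized scores. This is a minimal illustrative sketch; the weights and scores are assumptions, not values taken from the paper.

```python
import math

def softmax(scores):
    """Convert raw class scores to a probability distribution."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_scores(score_matrices, weights):
    """Weighted sum of per-model softmax probabilities (late fusion)."""
    probs = [softmax(m) for m in score_matrices]
    n_classes = len(probs[0])
    return [sum(w * p[c] for w, p in zip(weights, probs))
            for c in range(n_classes)]

# Three hypothetical models scoring the 7 EmotiW emotion classes
scores = [
    [2.0, 0.5, 0.1, 0.0, 0.3, 0.2, 0.1],  # e.g. VGG-Face + BLSTM
    [1.5, 1.0, 0.2, 0.1, 0.0, 0.3, 0.2],  # e.g. ResNet-50 + BLSTM
    [0.5, 2.5, 0.1, 0.2, 0.3, 0.0, 0.1],  # e.g. 3D CNN
]
fused = fuse_scores(scores, weights=[0.4, 0.3, 0.3])
predicted = max(range(len(fused)), key=fused.__getitem__)
```

Because the fusion weights sum to 1 and each softmax output is a probability distribution, the fused scores also form a valid distribution over the emotion classes.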
Pages: 646-652 (7 pages)
Related Papers (50 total)
  • [1] Video-based driver emotion recognition using hybrid deep spatio-temporal feature learning
    Varma, Harshit
    Ganapathy, Nagarajan
    Deserno, Thomas M.
    MEDICAL IMAGING 2022: IMAGING INFORMATICS FOR HEALTHCARE, RESEARCH, AND APPLICATIONS, 2022, 12037
  • [2] Spatio-Temporal Encoder-Decoder Fully Convolutional Network for Video-Based Dimensional Emotion Recognition
    Du, Zhengyin
    Wu, Suowei
    Huang, Di
    Li, Weixin
    Wang, Yunhong
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2021, 12 (03) : 565 - 578
  • [3] A multiple feature fusion framework for video emotion recognition in the wild
    Samadiani, Najmeh
    Huang, Guangyan
    Luo, Wei
    Chi, Chi-Hung
    Shu, Yanfeng
    Wang, Rui
    Kocaturk, Tuba
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (08)
  • [4] Video-Based Emotion Recognition in the Wild for Online Education Systems
    Mai, Genting
    Guo, Zijian
    She, Yicong
    Wang, Hongni
    Liang, Yan
    PRICAI 2022: TRENDS IN ARTIFICIAL INTELLIGENCE, PT III, 2022, 13631 : 516 - 529
  • [5] Spatio-Temporal Image-Based Encoded Atlases for EEG Emotion Recognition
    Avola, Danilo
    Cinque, Luigi
    Mambro, Angelo Di
    Fagioli, Alessio
    Marini, Marco Raoul
    Pannone, Daniele
    Fanini, Bruno
    Foresti, Gian Luca
    INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, 2024, 34 (05)
  • [6] Human Action Recognition Based on a Spatio-Temporal Video Autoencoder
    Sousa e Santos, Anderson Carlos
    Pedrini, Helio
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2020, 34 (11)
  • [7] HASTF: a hybrid attention spatio-temporal feature fusion network for EEG emotion recognition
    Hu, Fangzhou
    Wang, Fei
    Bi, Jinying
    An, Zida
    Chen, Chao
    Qu, Gangguo
    Han, Shuai
    FRONTIERS IN NEUROSCIENCE, 2024, 18
  • [8] A Novel Spatio-Temporal Field for Emotion Recognition Based on EEG Signals
    Li, Wei
    Zhang, Zhen
    Hou, Bowen
    Li, Xiaoyu
    IEEE SENSORS JOURNAL, 2021, 21 (23) : 26941 - 26950
  • [9] Video-Based Emotion Recognition using Face Frontalization and Deep Spatiotemporal Feature
    Wang, Jinwei
    Zhao, Ziping
    Liang, Jinglian
    Li, Chao
    2018 FIRST ASIAN CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII ASIA), 2018,
  • [10] Multi-source domain adaptation with spatio-temporal feature extractor for EEG emotion recognition
    Guo, Wenhui
    Xu, Guixun
    Wang, Yanjiang
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 84