Bi-modal emotion recognition from expressive face and body gestures

Cited by: 208
Authors
Gunes, Hatice
Piccardi, Massimo
Institution
[1] Computer Vision Research Group, Faculty of Information Technology, University of Technology, Sydney (UTS), Broadway, NSW, 2007
Keywords
Bi-modal emotion recognition; facial expression; expressive body gestures; feature-level fusion; decision-level fusion
DOI
10.1016/j.jnca.2006.09.007
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Psychological research findings suggest that humans rely on the combined visual channels of face and body more than any other channel when they make judgments about human communicative behavior. However, most existing systems that attempt to analyze human nonverbal behavior are mono-modal and focus only on the face. Research that aims to integrate gestures as a means of expression has only recently emerged. Accordingly, this paper presents an approach to automatic visual recognition of expressive face and upper-body gestures from video sequences, suitable for use in a vision-based affective multi-modal framework. Face and body movements are captured simultaneously using two separate cameras. For each video sequence, single expressive frames from both the face and the body are selected manually for analysis and recognition of emotions. First, individual classifiers are trained on the individual modalities. Second, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In the experiments performed, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the facial or bodily modality alone. (c) 2006 Elsevier Ltd. All rights reserved.
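The two fusion strategies named in the abstract can be made concrete with a small sketch. The Python example below (scikit-learn) contrasts feature-level fusion, where face and body feature vectors are concatenated and fed to a single classifier, with decision-level fusion, where one classifier is trained per modality and their class posteriors are combined. The synthetic features, the RandomForestClassifier choice, and the product combination rule are illustrative assumptions only; they are not the paper's actual features, classifiers, or fusion criteria.

```python
# Minimal sketch of feature-level vs. decision-level fusion on synthetic data.
# Feature dimensions, classifier, and fusion rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_face, n_body, n_emotions = 300, 40, 20, 6

# Hypothetical per-frame feature vectors for each modality, plus emotion labels.
X_face = rng.normal(size=(n_samples, n_face))
X_body = rng.normal(size=(n_samples, n_body))
y = rng.integers(0, n_emotions, size=n_samples)

idx_train, idx_test = train_test_split(np.arange(n_samples), random_state=0)

# Feature-level fusion: concatenate modality features, train one classifier.
X_fused = np.hstack([X_face, X_body])
clf_feat = RandomForestClassifier(random_state=0)
clf_feat.fit(X_fused[idx_train], y[idx_train])
acc_feature = clf_feat.score(X_fused[idx_test], y[idx_test])

# Decision-level fusion: train one classifier per modality, then combine their
# class posteriors (product rule shown; sum and max rules are alternatives).
# Assumes every class appears in training, so both posteriors align column-wise.
clf_face = RandomForestClassifier(random_state=0).fit(X_face[idx_train], y[idx_train])
clf_body = RandomForestClassifier(random_state=1).fit(X_body[idx_train], y[idx_train])
p = clf_face.predict_proba(X_face[idx_test]) * clf_body.predict_proba(X_body[idx_test])
acc_decision = np.mean(p.argmax(axis=1) == y[idx_test])

print(f"feature-level fusion accuracy:  {acc_feature:.3f}")
print(f"decision-level fusion accuracy: {acc_decision:.3f}")
```

Under the product rule shown, decision-level fusion implicitly treats the two modalities as conditionally independent given the emotion class; feature-level fusion instead lets a single classifier learn cross-modal correlations at the cost of a higher-dimensional input.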
Pages: 1334-1345
Page count: 12
Related Papers
50 records in total
  • [1] Multimodal emotion recognition from expressive faces, body gestures and speech
    Caridakis, George
    Castellano, Ginevra
    Kessous, Loic
    Raouzaiou, Amaryllis
    Malatesta, Lori
    Asteriadis, Stelios
    Karpouzis, Kostas
    ARTIFICIAL INTELLIGENCE AND INNOVATIONS 2007: FROM THEORY TO APPLICATIONS, 2007, : 375 - +
  • [2] Enhancing Feature Correlation for Bi-Modal Group Emotion Recognition
    Liu, Ningjie
    Fang, Yuchun
    Guo, Yike
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2018, PT II, 2018, 11165 : 24 - 34
  • [3] Emotion Recognition Based on Meta Bi-Modal Learning Model
    Li Z.
    Sun Y.
    Zhang X.
    Zhou Y.
    Beijing Youdian Daxue Xuebao/Journal of Beijing University of Posts and Telecommunications, 2023, 46 (05): : 87 - 105
  • [4] Fusing face and body display for bi-modal emotion recognition: Single frame analysis and multi-frame post integration
    Gunes, H
    Piccardi, M
    AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION, PROCEEDINGS, 2005, 3784 : 102 - 111
  • [5] Bi-Modal Bi-Task Emotion Recognition Based on Transformer Architecture
    Song, Yu
    Zhou, Qi
    APPLIED ARTIFICIAL INTELLIGENCE, 2024, 38 (01)
  • [6] Gait Emotion Recognition Using a Bi-modal Deep Neural Network
Bhatia, Yajurv
    Bari, A. S. M. Hossain
Gavrilova, Marina
    ADVANCES IN VISUAL COMPUTING, ISVC 2022, PT I, 2022, 13598 : 46 - 60
  • [7] Automatic bi-modal emotion recognition system based on fusion of facial expressions and emotion extraction from speech
    Datcu, Dragos
    Rothkrantz, Leon J. M.
    2008 8TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION (FG 2008), VOLS 1 AND 2, 2008, : 606 - 607
  • [8] Emotion Recognition from Body Movements and Gestures
    Stathopoulou, Ioanna-Ourania
    Tsihrintzis, George A.
    INTELLIGENT INTERACTIVE MULTIMEDIA SYSTEMS AND SERVICES (IIMSS 2011), 2011, 11 : 295 - 303
  • [9] A bi-modal face recognition framework integrating facial expression with facial appearance
    Tsai, Pohsiang
    Cao, Longbing
    Hintz, Tom
    Jan, Tony
    PATTERN RECOGNITION LETTERS, 2009, 30 (12) : 1096 - 1109
  • [10] A Survey of Face Recognition Techniques and Comparative Study of Various Bi-Modal and Multi-Modal Techniques
    Handa, Anand
    Agarwal, Rashi
    Kohli, Narendra
    2016 11TH INTERNATIONAL CONFERENCE ON INDUSTRIAL AND INFORMATION SYSTEMS (ICIIS), 2016, : 274 - 279