Bi-modal emotion recognition from expressive face and body gestures

Cited: 208
Authors
Gunes, Hatice
Piccardi, Massimo
Affiliation
[1] Computer Vision Research Group, Faculty of Information Technology, University of Technology, Sydney (UTS), Broadway, NSW, 2007
Keywords
Bi-modal emotion recognition; facial expression; expressive body gestures; feature-level fusion; decision-level fusion
DOI
10.1016/j.jnca.2006.09.007
CLC Number
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
Psychological research findings suggest that humans rely on the combined visual channels of face and body more than on any other channel when making judgments about human communicative behavior. However, most existing systems attempting to analyze human nonverbal behavior are mono-modal and focus only on the face. Research aiming to integrate gestures as a means of expression has only recently emerged. Accordingly, this paper presents an approach to automatic visual recognition of expressive face and upper-body gestures from video sequences, suitable for use in a vision-based affective multi-modal framework. Face and body movements are captured simultaneously using two separate cameras. For each video sequence, single expressive frames from both the face and the body are selected manually for analysis and recognition of emotions. Firstly, individual classifiers are trained on the individual modalities. Secondly, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In the experiments performed, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using either the facial or the bodily modality alone. (c) 2006 Elsevier Ltd. All rights reserved.
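The two fusion strategies named in the abstract can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual pipeline: the feature dimensions, the number of emotion classes, and the choice of random-forest classifiers and probability averaging are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
face = rng.normal(size=(n, 10))  # stand-in facial features (dimension is illustrative)
body = rng.normal(size=(n, 8))   # stand-in body-gesture features (dimension is illustrative)
y = rng.integers(0, 4, size=n)   # 4 emotion classes, chosen arbitrarily for the sketch

# Feature-level fusion: concatenate the per-modality feature vectors
# and train a single classifier on the combined representation.
fused = np.hstack([face, body])
clf_fused = RandomForestClassifier(random_state=0).fit(fused, y)
pred_feature_level = clf_fused.predict(fused)

# Decision-level fusion: train one classifier per modality, then combine
# their class-probability outputs (here by simple averaging) and take
# the most probable class.
clf_face = RandomForestClassifier(random_state=0).fit(face, y)
clf_body = RandomForestClassifier(random_state=0).fit(body, y)
avg_proba = (clf_face.predict_proba(face) + clf_body.predict_proba(body)) / 2
pred_decision_level = avg_proba.argmax(axis=1)
```

Averaging probabilities is only one decision-level combination rule; weighted sums or majority voting are common alternatives.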
Pages: 1334-1345
Page count: 12
Related papers
50 records total
  • [21] Recognition of Face Identity and Emotion in Expressive Specific Language Impairment
    Merkenschlager, A.
    Amorosa, H.
    Kiefl, H.
    Martinius, J.
    FOLIA PHONIATRICA ET LOGOPAEDICA, 2012, 64 (02) : 73 - 79
  • [22] Bi-Modal Progressive Mask Attention for Fine-Grained Recognition
    Song, Kaitao
    Wei, Xiu-Shen
    Shu, Xiangbo
    Song, Ren-Jie
    Lu, Jianfeng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 7006 - 7018
  • [23] Intelligent System for Bi-Modal Recognition of Apparent Personality Traits (iSMART)
    Patel, Cdr Devraj
    Dhavale, Sunita V.
    INTERNATIONAL CONFERENCE ON INNOVATIVE COMPUTING AND COMMUNICATIONS, ICICC 2022, VOL 1, 2023, 473 : 781 - 794
  • [24] Emotion Recognition From Expressions in Face, Voice, and Body: The Multimodal Emotion Recognition Test (MERT)
    Baenziger, Tanja
    Grandjean, Didier
    Scherer, Klaus R.
    EMOTION, 2009, 9 (05) : 691 - 704
  • [25] An Evaluation of Bi-modal Facial Appearance plus Facial Expression Face Biometrics
    Tsai, Pohsiang
    Tran, Tich Phuoc
    Hintz, Tom
    Jan, Tony
    19TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOLS 1-6, 2008, : 2812 - 2816
  • [26] BI-MODAL DISTRIBUTIONS DERIVED FROM THE NORMAL DISTRIBUTION
    Prasad, Ayodhya
    SANKHYA, 1955, 14 : 369 - 374
  • [27] EMOTION RECOGNITION BASED ON MULTI-VIEW BODY GESTURES
    Shen, Zhijuan
    Cheng, Jun
    Hu, Xiping
    Dong, Qian
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 3317 - 3321
  • [28] Exploiting Fine-tuning of Self-supervised Learning Models for Improving Bi-modal Sentiment Analysis and Emotion Recognition
    Yang, Wei
    Fukayama, Satoru
    Heracleous, Panikos
    Ogata, Jun
    INTERSPEECH 2022, 2022, : 1998 - 2002
  • [29] Bi-modal Handwritten Text Recognition (BiHTR) ICPR 2010 Contest Report
    Pastor, Moises
    Paredes, Roberto
    RECOGNIZING PATTERNS IN SIGNALS, SPEECH, IMAGES, AND VIDEOS, 2010, 6388 : 1 - 13
  • [30] Bi-Modal Person Recognition on a Mobile Phone: using mobile phone data
    McCool, Chris
    Marcel, Sebastien
    Hadid, Abdenour
    Pietikainen, Matti
    Matejka, Pavel
    Cernocky, Jan
    Poh, Norman
    Kittler, Josef
    Larcher, Anthony
    Levy, Christophe
    Matrouf, Driss
    Bonastre, Jean-Francois
    Tresadern, Phil
    Cootes, Timothy
    2012 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (ICMEW), 2012, : 635 - 640