A Framework for Automated Measurement of the Intensity of Non-Posed Facial Action Units

Cited by: 0
Authors
Mahoor, Mohammad H. [1 ]
Cadavid, Steven [2 ]
Messinger, Daniel S. [3 ]
Cohn, Jeffrey F. [4 ]
Affiliations
[1] Univ Denver, Dept Elect & Comp Engn, Denver, CO 80208 USA
[2] Univ Miami, Dept Elect & Comp Engn, Coral Gables, FL 33146 USA
[3] Univ Miami, Dept Psychol, Coral Gables, FL 33146 USA
[4] Univ Pittsburgh, Dept Psychol, Pittsburgh, PA 15260 USA
Source
2009 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPR WORKSHOPS 2009), VOLS 1 AND 2 | 2009
Keywords
MODELS; SHAPE;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
This paper presents a framework to automatically measure the intensity of naturally occurring facial actions. Naturalistic expressions are non-posed, spontaneous actions. The Facial Action Coding System (FACS) is the gold-standard technique for describing facial expressions, which are parsed into comprehensive, non-overlapping Action Units (AUs). AU intensities range from absent to maximal on a six-point scale (i.e., 0 to 5). Despite efforts to recognize the presence of non-posed action units, measuring their intensity has not been studied comprehensively. In this paper, we develop a framework to measure the intensity of AU12 (Lip Corner Puller) and AU6 (Cheek Raising) in videos captured from live face-to-face infant-mother interactions. AU12 and AU6 in infants are among the most challenging cases (e.g., because of the low facial texture of infant faces). One of the problems in facial image analysis is the high dimensionality of the visual data. Our approach to this problem is to use the spectral regression technique to project high-dimensional facial images into a low-dimensional space. The facial image representations in the low-dimensional space are used to train Support Vector Machine (SVM) classifiers to predict the intensity of the action units. Analysis of 18 minutes of captured video of non-posed facial expressions of several infants and mothers shows significant agreement between a human FACS coder and our approach, making it an efficient method for automated measurement of the intensity of non-posed facial action units.
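The pipeline described in the abstract (spectral-regression-style dimensionality reduction followed by SVM prediction of AU intensity) can be illustrated with a minimal sketch. This is not the authors' code: it assumes scikit-learn, substitutes a Laplacian-eigenmaps embedding plus a ridge-regression projection for the paper's spectral regression step, and uses synthetic data with a hypothetical image size.

import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.linear_model import Ridge
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((200, 4096))   # flattened face images (hypothetical size)
y_train = rng.integers(0, 6, 200)   # AU intensity labels on the 0-5 scale
X_test = rng.random((50, 4096))

# 1. Nonlinear graph embedding of the training faces (Laplacian eigenmaps).
embedding = SpectralEmbedding(n_components=20, n_neighbors=10)
Z_train = embedding.fit_transform(X_train)

# 2. Spectral-regression-style step: learn a regularized linear map from pixels
#    to the embedding coordinates so that unseen images can be projected.
projector = Ridge(alpha=1.0).fit(X_train, Z_train)

# 3. Train an SVM on the low-dimensional representations to predict intensity.
clf = SVC(kernel="rbf", C=1.0).fit(projector.predict(X_train), y_train)
predicted_intensity = clf.predict(projector.predict(X_test))

In the paper, intensity is predicted per frame of the infant-mother video; the sketch above only mirrors the structure of the reported approach, not its training data or parameters.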
Pages: 833+
Number of pages: 3