AAM-based emotion recognition using variance of facial feature points on mobile video stream

Cited by: 0
Authors
Lee, Yong-Hwan [1 ]
Kim, Bonam [2 ]
Kim, Youngseop [3 ]
Affiliations
[1] Far East Univ, Dept Smart Mobile, Tainan, Taiwan
[2] Chungnam Natl Univ, Div Elect & Comp Engn, Taejon, South Korea
[3] Dankook Univ, Dept Elect Engn, Yongin, Gyeonggi Do, South Korea
Source
COMPUTER SYSTEMS SCIENCE AND ENGINEERING | 2014, Vol. 29, No. 6
Keywords
Emotion Recognition; Fuzzy Emotion Classifier; Weighted k-Nearest Neighbor; Active Appearance Model; EXPRESSION ANALYSIS; FACE;
DOI
Not available
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Understanding and classifying human emotion can play an important role in the interaction between humans and machine communication systems. The most expressive way to convey human emotion is through the face, which makes facial expression analysis a natural approach. In this paper, we propose a novel method for extracting and recognizing facial expression and emotion from a mobile camera stream using the proposed classifier. In particular, we formulate a new classification model of facial emotion based on the variance of estimated landmark points. We use the variance of 65 feature point locations between the current frame and the previous frame as input to a weighted fuzzy k-NN classifier. Five types of facial emotion are then recognized and classified: neutral, happy, angry, surprise, and sad. To evaluate the performance of the proposed algorithm, we measure the recognition success rate with iPhone/iPad camera views. The experimental results show that the proposed method performs well in recognizing facial emotion, and that the obtained results indicate performance sufficient for mobile environments.
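The abstract describes the pipeline only at a high level, so the following is a minimal, illustrative sketch of the two ingredients it names: a variance feature computed over the 65 AAM landmark locations between consecutive frames, and a weighted fuzzy k-NN classifier (Keller et al., 1985, reference [8]) over the five emotion classes. It is not the authors' implementation; the feature layout, the neighborhood size k, the fuzzy exponent m, and the training data are assumptions made for illustration.

```python
# Sketch of variance-of-landmarks features + weighted fuzzy k-NN, assuming
# per-axis variance features and example training data (not the paper's code).
import numpy as np

EMOTIONS = ["neutral", "happy", "angry", "surprise", "sad"]

def variance_feature(curr_pts, prev_pts):
    """Variance of the 65 AAM landmark displacements between consecutive frames.

    curr_pts, prev_pts: arrays of shape (65, 2) holding (x, y) landmark locations.
    Returns a small feature vector: the per-axis variance of the displacements.
    """
    disp = curr_pts - prev_pts            # per-landmark motion between the two frames
    return np.var(disp, axis=0)           # shape (2,): variance in x and in y

def fuzzy_knn(feature, train_feats, train_memberships, k=5, m=2.0):
    """Weighted fuzzy k-NN: returns fuzzy class memberships for `feature`.

    train_feats:        (N, d) training feature vectors
    train_memberships:  (N, C) fuzzy class memberships of the training samples
    """
    dists = np.linalg.norm(train_feats - feature, axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights with the standard fuzzy exponent 2/(m-1)
    w = 1.0 / np.maximum(dists[nearest], 1e-12) ** (2.0 / (m - 1.0))
    u = (train_memberships[nearest] * w[:, None]).sum(axis=0) / w.sum()
    return u                               # memberships over the C emotion classes

# Usage with random stand-in data (real use would take AAM landmarks per frame):
# prev, curr = np.random.rand(65, 2), np.random.rand(65, 2)
# feat = variance_feature(curr, prev)
# train_feats = np.random.rand(100, 2)
# train_u = np.random.dirichlet(np.ones(len(EMOTIONS)), size=100)
# print(EMOTIONS[int(np.argmax(fuzzy_knn(feat, train_feats, train_u)))])
```

In this sketch the fuzzy memberships of the training samples are taken as given; in practice they would be derived from labeled training frames, and the recognized emotion for the current frame is the class with the highest resulting membership.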
Pages: 423-428
Number of pages: 6
References
17 entries in total
  • [1] Testing face processing skills in children
    Bruce, V
    Campbell, RN
    Doherty-Sneddon, G
    Import, A
    Langton, S
    McAuley, S
    Wright, R
    [J]. BRITISH JOURNAL OF DEVELOPMENTAL PSYCHOLOGY, 2000, 18 : 319 - 333
  • [2] Cohen Ira, 2010, NEURAL INFORM PROCES
  • [3] Cunningham P., 2007, UCDCSI20074
  • [4] Interpreting face images using Active Appearance Models
    Edwards, GJ
    Taylor, CJ
    Cootes, TF
    [J]. AUTOMATIC FACE AND GESTURE RECOGNITION - THIRD IEEE INTERNATIONAL CONFERENCE PROCEEDINGS, 1998, : 300 - 305
  • [5] Ekman P., 1978, Facial action coding system: a technique for the measurement of facial movement
  • [6] Automatic Temporal Segment Detection and Affect Recognition From Face and Body Display
    Gunes, Hatice
    Piccardi, Massimo
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2009, 39 (01): : 64 - 84
  • [7] Emotion recognition through facial expression analysis based on a neurofuzzy network
    Ioannou, SV
    Raouzaiou, AT
    Tzouvaras, VA
    Mailis, TP
    Karpouzis, KC
    Kollias, SD
    [J]. NEURAL NETWORKS, 2005, 18 (04) : 423 - 435
  • [8] A FUZZY K-NEAREST NEIGHBOR ALGORITHM
    KELLER, JM
    GRAY, MR
    GIVENS, JA
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS, 1985, 15 (04): : 580 - 585
  • [9] Kobayashi S, 1995, RO-MAN'95 TOKYO: 4TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, P164, DOI 10.1109/ROMAN.1995.531954
  • [10] Improved Active Shape Model for Efficient Extraction of Facial Feature Points on Mobile Devices
    Lee, Yong-Hwan
    Yang, Dong-Seok
    Lim, Jong-Kook
    Lee, YuKyong
    Kim, Bonam
    [J]. 2013 SEVENTH INTERNATIONAL CONFERENCE ON INNOVATIVE MOBILE AND INTERNET SERVICES IN UBIQUITOUS COMPUTING (IMIS 2013), 2013, : 256 - 259