Automatically Detecting Pain in Video Through Facial Action Units

Cited by: 177
Authors
Lucey, Patrick [1 ,2 ]
Cohn, Jeffrey F. [1 ,2 ]
Matthews, Iain [2 ,3 ]
Lucey, Simon [4 ]
Sridharan, Sridha [5 ]
Howlett, Jessica [5 ]
Prkachin, Kenneth M. [6 ]
Affiliations
[1] Univ Pittsburgh, Dept Psychol, Pittsburgh, PA 15260 USA
[2] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[3] Disney Res Pittsburgh, Pittsburgh, PA 15213 USA
[4] Commonwealth Sci & Ind Res Org, Pullenvale, Qld 4069, Australia
[5] Queensland Univ Technol, Speech Audio Image & Video Technol Lab, Brisbane, Qld 4000, Australia
[6] Univ No British Columbia, Dept Psychol, Prince George, BC V2N 4Z9, Canada
Source
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS | 2011, Vol. 41, No. 3
Funding
Canadian Institutes of Health Research
Keywords
Active appearance models (AAMs); emotion; Facial Action Coding System (FACS); facial action units (AUs); pain; support vector machines (SVMs); RECOGNITION;
DOI
10.1109/TSMCB.2010.2082525
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
In a clinical setting, pain is reported either through patient self-report or via an observer. Such measures are problematic because they are 1) subjective and 2) provide no specific timing information. Coding pain as a series of facial action units (AUs) avoids these issues, since it yields an objective measure of pain on a frame-by-frame basis. Using video data from patients with shoulder injuries, we describe in this paper an active appearance model (AAM)-based system that can automatically detect the frames of video in which a patient is in pain. This pain data set highlights the many challenges associated with spontaneous emotion detection, particularly the expression and head movement caused by the patient's reaction to pain. We show that the AAM can deal with these movements and can achieve significant improvements in both AU and pain detection performance compared with current state-of-the-art approaches, which use similarity-normalized appearance features only.
Pages: 664-674
Page count: 11
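To make the frame-level classification scheme summarized in the abstract more concrete (AAM-derived features fed to a support vector machine that labels each video frame as pain or no pain), the sketch below trains a per-frame SVM on precomputed feature vectors. This is a minimal, hypothetical example, not the authors' implementation: the feature arrays, label vector, and train/test split are assumed placeholders, scikit-learn's SVC stands in for whatever SVM toolkit was actually used, and the paper's separate AU detectors are collapsed into a single binary pain classifier for brevity.

```python
# Minimal, hypothetical sketch of frame-level pain detection with an SVM.
# Assumes AAM-derived features have already been extracted for each video
# frame and stored as an (n_frames, n_features) array, with a binary
# pain/no-pain label per frame. All names below are placeholders; none
# come from the paper.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data standing in for AAM appearance/shape features and
# FACS-based per-frame pain labels.
n_frames, n_features = 2000, 120
frame_features = rng.normal(size=(n_frames, n_features))
pain_labels = (rng.random(n_frames) < 0.2).astype(int)  # ~20% pain frames

# A subject-independent split would be needed in practice; a random split
# is used here only to keep the sketch self-contained.
X_train, X_test, y_train, y_test = train_test_split(
    frame_features, pain_labels,
    test_size=0.3, random_state=0, stratify=pain_labels,
)

# Standardize features, then train a linear SVM as the per-frame classifier.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="linear", C=1.0, class_weight="balanced", probability=True)
clf.fit(scaler.transform(X_train), y_train)

# Score every test frame; area under the ROC curve is one common summary of
# frame-by-frame detection performance.
scores = clf.predict_proba(scaler.transform(X_test))[:, 1]
print("frame-level AUC:", roc_auc_score(y_test, scores))
```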