Infant AFAR: Automated facial action recognition in infants

Times cited: 14
Authors
Onal Ertugrul, Itir [1]
Ahn, Yeojin Amy [2]
Bilalpur, Maneesh [3]
Messinger, Daniel S. [2]
Speltz, Matthew L. [4]
Cohn, Jeffrey F. [3]
Affiliations
[1] Utrecht University, Utrecht, Netherlands
[2] University of Miami, Miami, FL, USA
[3] University of Pittsburgh, Pittsburgh, PA, USA
[4] University of Washington, Seattle, WA, USA
Funding
U.S. National Science Foundation; U.S. National Institutes of Health
Keywords
Automatic facial action unit detection; Facial action coding system; Infant behavior; Cross-domain generalizability; Deep learning; Craniofacial microsomia; Expressions; Face; Models
DOI
10.3758/s13428-022-01863-y
Chinese Library Classification
B841 [Psychological research methods]
Discipline code
040201
Abstract
Automated detection of facial action units in infants is challenging. Infant faces have different proportions, less texture, fewer wrinkles and furrows, and unique facial actions relative to adults. For these and related reasons, action unit (AU) detectors trained on adult faces may generalize poorly to infant faces. To train and test AU detectors for infant faces, we trained convolutional neural networks (CNNs) on adult video databases and fine-tuned them on two large, manually annotated infant video databases that differ in context, head pose, illumination, video resolution, and infant age. The AUs were those central to the expression of positive and negative emotion. AU detectors trained on infants greatly outperformed ones previously trained on adults. Training AU detectors across infant databases afforded greater robustness to between-database differences than training database-specific AU detectors did, and it outperformed the previous state of the art in infant AU detection. The resulting AU detection system, which we refer to as Infant AFAR (Automated Facial Action Recognition), is available to the research community for further testing and for applications in infant emotion, social interaction, and related topics.
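To make the transfer-learning recipe in the abstract concrete (pretrain a CNN on adult faces, then fine-tune it on infant faces for multi-label AU detection), here is a minimal sketch. It is not the published Infant AFAR implementation: the PyTorch framework, ResNet-18 backbone, 224x224 input size, and AU count of 9 are all placeholder assumptions for illustration.

# Hypothetical sketch of fine-tuning a pretrained CNN for infant AU
# detection as multi-label classification. Backbone, AU set, and
# hyperparameters are assumptions, not the authors' published setup.
import torch
import torch.nn as nn
from torchvision import models

NUM_AUS = 9  # assumed number of AUs central to positive/negative emotion

# A generic pretrained backbone stands in for the adult-database
# pretraining stage described in the abstract.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_AUS)  # one logit per AU

# Multi-label loss: each AU is an independent present/absent decision.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def fine_tune_step(images, au_labels):
    """One fine-tuning step on a batch of infant face crops.

    images:    float tensor of shape (B, 3, 224, 224)
    au_labels: float tensor of shape (B, NUM_AUS), entries in {0, 1}
    """
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, au_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference, sigmoid(logits) > 0.5 yields per-AU occurrence predictions.

Under this framing, the cross-database training the abstract reports would amount to pooling annotated frames from both infant databases into a single fine-tuning set rather than fitting one model per database.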
Source: Behavior Research Methods, 2023, 55: 1024-1035 (12 pages)
Related papers (50 total)
  • [1] Electromyographic Validation of Spontaneous Facial Mimicry Detection Using Automated Facial Action Coding
    Hsu, Chun-Ting; Sato, Wataru
    Sensors, 2023, 23(22)
  • [2] Automated and objective action coding of facial expressions in patients with acute facial palsy
    Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando
    European Archives of Oto-Rhino-Laryngology, 2015, 272(5): 1259-1267
  • [3] Deep Explanation Model for Facial Expression Recognition through Facial Action Coding Unit
    Kim, Sunbin; Kim, Hyeoncheol
    2019 IEEE International Conference on Big Data and Smart Computing (BigComp), 2019: 307-310
  • [4] Upper Facial Action Unit Recognition
    Zor, Cemre; Windeatt, Terry
    Advances in Biometrics, 2009, 5558: 239-248
  • [5] Infants' recognition of subtle anger facial expression
    Ichikawa, Hiroko; Yamaguchi, Masami K.
    Japanese Psychological Research, 2014, 56(1): 15-23
  • [6] A survey of automatic facial action units recognition
    Zhao, H.; Wang, Z.; Liu, Y.
    Journal of Computer-Aided Design and Computer Graphics, 2010, 22(5): 894-906
  • [7] A Comparative Study of HMMs and DBNs Applied to Facial Action Units Recognition
    Popa, M. C.; Rothkrantz, L. J. M.; Datcu, D.; Wiggers, P.; Braspenning, R.; Shan, C.
    Neural Network World, 2010, 20(6): 737-760
  • [8] A Neural Basis of Facial Action Recognition in Humans
    Srinivasan, Ramprakash; Golomb, Julie D.; Martinez, Aleix M.
    Journal of Neuroscience, 2016, 36(16): 4434-4442