Facial expression and action unit recognition augmented by their dependencies on graph convolutional networks

Times Cited: 9
Authors
He, Jun [1 ]
Yu, Xiaocui [1 ]
Sun, Bo [1 ]
Yu, Lejun [1 ]
Affiliations
[1] Beijing Normal Univ, Engn Res Ctr Intelligent Technol & Educ Applicat, Sch Artificial Intelligence, Minist Educ, Beijing 100875, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Facial expression; Action units (AUs); Dependency; Conditional generative adversarial network; Graph convolutional network (GCN); Prior knowledge; SEMANTIC RELATIONSHIPS;
DOI
10.1007/s12193-020-00363-7
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Understanding human facial expressions is a key step toward achieving natural human-computer interaction. Owing to the anatomical mechanisms that govern facial muscle interactions, there are strong dependencies between expressions and action units (AUs), and this knowledge can be exploited to guide model learning. However, such dependencies have not yet been represented directly and integrated into a network. In this study, we propose a novel method for facial expression and AU recognition that augments a graph convolutional network (GCN) with these dependencies. First, we train a conditional generative adversarial network to filter out identity information and extract the expression component through a de-expression learning procedure. Next, we apply a GCN to represent the dependencies among AU nodes, embedding each node by dividing the expression component into multiple patches that correspond to AU-related facial regions. Finally, we use prior knowledge matrices to represent the dependencies between expressions and AUs and integrate them into the loss function to constrain the model. Our experimental results indicate that this representation effectively improves the recognition rate and that our method outperforms several popular approaches.
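To make the abstract's pipeline more concrete, the sketch below illustrates (in PyTorch) how a prior AU-dependency matrix could parameterize a single graph convolutional layer over AU-patch features, and how an expression-to-AU prior matrix could be folded into the loss. This is a minimal illustrative sketch, not the authors' implementation: the names AUGraphConv, prior_consistency_loss, prior_adj, and expr_au_prior, as well as the specific normalization and binary cross-entropy choices, are assumptions made here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AUGraphConv(nn.Module):
    """One GCN layer over AU nodes: H' = ReLU(A_hat H W)."""

    def __init__(self, in_dim, out_dim, num_aus, prior_adj):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # Symmetrically normalized adjacency built from the prior
        # AU-dependency matrix, with self-loops added (assumed form).
        adj = prior_adj + torch.eye(num_aus)
        deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        self.register_buffer(
            "norm_adj", deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
        )

    def forward(self, node_feats):
        # node_feats: (batch, num_aus, in_dim), one feature vector per
        # AU-related facial patch.
        h = self.weight(node_feats)                      # (batch, num_aus, out_dim)
        return F.relu(torch.einsum("ij,bjd->bid", self.norm_adj, h))


def prior_consistency_loss(au_probs, expr_labels, expr_au_prior):
    # expr_au_prior: (num_expressions, num_aus) prior AU activation
    # probabilities per expression; expr_labels: (batch,) LongTensor.
    target = expr_au_prior[expr_labels]                  # (batch, num_aus)
    return F.binary_cross_entropy(au_probs, target)


if __name__ == "__main__":
    num_aus, in_dim, out_dim, num_expr, batch = 12, 64, 32, 6, 4
    prior_adj = torch.rand(num_aus, num_aus)             # placeholder prior
    prior_adj = (prior_adj + prior_adj.T) / 2            # make it symmetric
    layer = AUGraphConv(in_dim, out_dim, num_aus, prior_adj)
    feats = torch.randn(batch, num_aus, in_dim)
    out = layer(feats)                                   # (4, 12, 32)
    au_probs = torch.sigmoid(out.mean(dim=-1))           # toy AU probabilities
    expr_au_prior = torch.rand(num_expr, num_aus)
    expr_labels = torch.randint(0, num_expr, (batch,))
    loss = prior_consistency_loss(au_probs, expr_labels, expr_au_prior)
    print(out.shape, loss.item())
```

In practice, the prior matrices would come from domain knowledge (e.g., FACS-based expression-AU co-occurrence statistics) rather than random placeholders, and the consistency term would be weighted against the standard classification losses.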
Pages: 429-440
Page count: 12