3DPalsyNet: A Facial Palsy Grading and Motion Recognition Framework Using Fully 3D Convolutional Neural Networks

Cited by: 27
Authors
Storey, Gary [1 ]
Jiang, Richard [2 ]
Keogh, Shelagh [1 ]
Bouridane, Ahmed [1 ]
Li, Chang-Tsun [3 ]
Affiliations
[1] Northumbria Univ, Dept Comp & Informat Sci, Newcastle Upon Tyne NE1 8ST, Tyne & Wear, England
[2] Univ Lancaster, Dept Comp & Commun, Lancaster LA1 4WA, England
[3] Deakin Univ, Sch Informat Technol, Geelong, Vic 3220, Australia
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords
Computer vision; face detection; facial action recognition; machine learning; patterns
DOI
10.1109/ACCESS.2019.2937285
CLC number
TP [automation technology; computer technology]
Discipline code
0812
Abstract
The capability to perform facial analysis from video sequences has significant potential to positively impact many areas of life. One such area is the medical domain, specifically aiding the diagnosis and rehabilitation of patients with facial palsy. With this application in mind, this paper presents an end-to-end framework, named 3DPalsyNet, for the tasks of mouth motion recognition and facial palsy grading. 3DPalsyNet utilizes a 3D CNN architecture with a ResNet backbone for the prediction of these dynamic tasks. Leveraging transfer learning from a 3D CNN pre-trained on the Kinetics data set for general action recognition, the model is modified to apply joint supervised learning using centre loss and softmax loss. 3DPalsyNet is evaluated on a test set consisting of individuals with varying degrees of facial palsy and mouth motions, achieving classification accuracies of 82% and 86% on these tasks, respectively. The effects of frame duration and loss function on the predictive quality of the proposed 3DPalsyNet were studied, and a shorter frame duration of 8 was found to perform best for this specific task. The combination of centre loss and softmax loss improved spatio-temporal feature learning over softmax loss alone, in agreement with earlier work in the spatial domain.
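The joint supervision described in the abstract combines a standard softmax cross-entropy loss with a centre loss that pulls each sample's feature vector toward its class centre. The paper does not give implementation details here, so the following is a minimal NumPy sketch of the general centre-loss formulation (L = L_softmax + λ·L_centre); the function names, the weighting parameter `lam`, and the fixed (non-learned) centres are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable softmax cross-entropy, averaged over the batch.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    # Half the mean squared distance between each feature vector
    # and the centre of its ground-truth class.
    diffs = features - centers[labels]
    return 0.5 * (diffs ** 2).sum(axis=1).mean()

def joint_loss(logits, features, labels, centers, lam=0.01):
    # Joint supervision: softmax loss plus a lambda-weighted centre loss,
    # encouraging discriminative logits and compact per-class features.
    return softmax_cross_entropy(logits, labels) + lam * center_loss(features, labels, centers)

# Toy example: two classes, two samples, features sitting exactly on their
# class centres, so the centre term vanishes and only the softmax term remains.
logits = np.array([[2.0, 0.0], [0.0, 2.0]])
features = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = joint_loss(logits, features, labels, centers)
```

In practice (as in the original centre-loss work) the class centres are learned parameters updated alongside the network weights, and λ trades off feature compactness against classification accuracy.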
Pages: 121655-121664
Page count: 10