Learning facial expression-aware global-to-local representation for robust action unit detection

Cited by: 0
Authors
Rudong An
Aobo Jin
Wei Chen
Wei Zhang
Hao Zeng
Zhigang Deng
Yu Ding
Affiliations
[1] Virtual Human Group
[2] Netease Fuxi AI Lab
[3] University of Houston-Victoria
[4] Hebei Agricultural University
[5] University of Houston
Source
Applied Intelligence | 2024 / Vol. 54
Keywords
Facial action coding; Facial action unit detection; Facial expression recognition; Expression-aware representation; Deep learning;
DOI
Not available
Abstract
The task of detecting facial action units (AUs) often uses discrete expression categories, such as Angry, Disgust, and Happy, as auxiliary information to improve performance. However, these coarse categories cannot capture the subtle transformations of AUs. In addition, existing methods tend to overfit because available AU datasets are small. This paper proposes a novel fine-grained global expression representation encoder that captures continuous, subtle global facial expressions to improve AU detection. The expression representation reduces overfitting by isolating facial expression from other factors such as identity, background, head pose, and illumination. To further address overfitting, a local AU features module transforms the global expression representation into local facial features for each AU. Finally, the local AU features are fed into an AU classifier to determine the occurrence of each AU. The proposed method outperforms previous works and achieves state-of-the-art performance on both in-the-lab and in-the-wild datasets, whereas most existing works focus only on in-the-lab datasets. Its explicit handling of overfitting from limited data is what drives this superior performance.
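The abstract describes a global-to-local pipeline: a global expression encoder, a module producing per-AU local features, and per-AU classifiers. The following is a minimal illustrative sketch of that structure only, not the paper's actual architecture; all dimensions, weight names, and the choice of simple linear layers with tanh activations are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions (not taken from the paper).
FACE_DIM, EXPR_DIM, LOCAL_DIM, NUM_AUS = 512, 128, 64, 12

# Global expression encoder: maps face features to a continuous expression
# representation (identity/pose/illumination assumed disentangled upstream).
W_enc = rng.standard_normal((EXPR_DIM, FACE_DIM)) * 0.01

# Local AU features module: one projection per AU, turning the global
# expression representation into AU-specific local features.
W_local = rng.standard_normal((NUM_AUS, LOCAL_DIM, EXPR_DIM)) * 0.01

# Per-AU binary classifiers over the local features.
w_cls = rng.standard_normal((NUM_AUS, LOCAL_DIM)) * 0.01

def detect_aus(face_feat):
    """Return per-AU occurrence probabilities for one face feature vector."""
    expr = np.tanh(W_enc @ face_feat)                        # global expression representation
    local = np.tanh(np.einsum('ale,e->al', W_local, expr))   # per-AU local features
    return sigmoid(np.einsum('al,al->a', w_cls, local))      # AU occurrence probabilities

probs = detect_aus(rng.standard_normal(FACE_DIM))
print(probs.shape)  # (12,)
```

In this sketch, AU occurrence is a set of independent binary decisions (one sigmoid per AU), which matches the multi-label nature of AU detection; in a trained model the random weights above would of course be learned.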
Pages: 1405-1425
Page count: 20