A multi-scale multi-attention network for dynamic facial expression recognition

Cited by: 8
Authors
Xia, Xiaohan [1 ]
Yang, Le [1 ]
Wei, Xiaoyong [2 ,3 ]
Sahli, Hichem [4 ,5 ]
Jiang, Dongmei [1 ,3 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Natl Engn Lab Integrated Aerosp Ground Ocean Big, Shaanxi Key Lab Speech & Image Informat Proc, Youyi Xilu 127, Xian 710072, Peoples R China
[2] Sichuan Univ, Sch Comp Sci, Chengdu 610065, Peoples R China
[3] Peng Cheng Lab, Vanke Cloud City Phase 1,Bldg 8,Xili St, Shenzhen 518055, Guangdong, Peoples R China
[4] Vrije Univ Brussel VUB, Dept Elect & Informat ETRO, Pl Laan 2, B-1050 Brussels, Belgium
[5] Interunivers Microelect Ctr IMEC, Kapeldreef 75, B-3001 Heverlee, Belgium
Funding
National Natural Science Foundation of China;
Keywords
Facial expression recognition; Multi-scale multi-attention network (MSMA-Net); Spatial attention; Temporal attention; Model
DOI
10.1007/s00530-021-00849-8
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Characterizing spatial information and modelling the temporal dynamics of facial images are key challenges for dynamic facial expression recognition (FER). In this paper, we propose an end-to-end multi-scale multi-attention network (MSMA-Net) for dynamic FER. In our model, spatio-temporal features are encoded at two scales, i.e., the entire face and local facial patches. For each scale, we adopt a 2D convolutional neural network (CNN) to capture frame-based spatial information and a 3D CNN to capture short-term dynamics in the temporal sequence. Moreover, we propose a multi-attention mechanism that combines spatial and temporal attention models. The temporal attention is applied to the image sequence to highlight expressive frames within the whole sequence, and the spatial attention is applied at the patch level to learn salient facial features. Comprehensive experiments on publicly available datasets (Aff-Wild2, RML, and AFEW) show that the proposed MSMA-Net automatically highlights salient expressive frames, within which salient facial features are learned, yielding results better than or highly competitive with state-of-the-art methods.
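To make the two-branch, two-attention design in the abstract concrete, the following PyTorch sketch shows one way such a model could be wired up. It is a minimal, hypothetical reconstruction from the abstract alone, not the authors' implementation: the module names (MSMANetSketch, TemporalAttention, SpatialAttention), channel sizes, the single-scale simplification (the paper processes both the whole face and local patches), and the concatenation fusion are all illustrative assumptions.

# Minimal sketch of a 2D-CNN + 3D-CNN model with spatial and temporal
# attention, loosely following the abstract. All sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttention(nn.Module):
    """Scores each frame so expressive frames dominate the sequence pooling."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, x):                                     # x: (B, T, D)
        w = torch.softmax(self.score(x).squeeze(-1), dim=1)   # (B, T)
        return (w.unsqueeze(-1) * x).sum(dim=1)               # (B, D)

class SpatialAttention(nn.Module):
    """Reweights feature-map locations to emphasize salient facial regions."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                                     # x: (B, C, H, W)
        return x * torch.sigmoid(self.conv(x))

class MSMANetSketch(nn.Module):
    """Hypothetical single-scale variant: a 2D CNN branch for per-frame
    spatial features and a 3D CNN branch for short-term dynamics, fused by
    concatenation and classified."""
    def __init__(self, num_classes=7):
        super().__init__()
        # 2D branch: toy stand-in for a deep frame-level encoder.
        self.cnn2d = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.spatial_att = SpatialAttention(64)
        # 3D branch: short-term spatio-temporal encoder.
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 32, 3, stride=(1, 2, 2), padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=(1, 2, 2), padding=1), nn.ReLU(),
        )
        self.temporal_att = TemporalAttention(64)
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, clip):                                  # clip: (B, T, 3, H, W)
        B, T, C, H, W = clip.shape
        # Per-frame 2D features with spatial attention, then temporal attention.
        f2d = self.spatial_att(self.cnn2d(clip.reshape(B * T, C, H, W)))
        f2d = F.adaptive_avg_pool2d(f2d, 1).reshape(B, T, -1) # (B, T, 64)
        f2d = self.temporal_att(f2d)                          # (B, 64)
        # Clip-level 3D features capturing short-term dynamics.
        f3d = self.cnn3d(clip.permute(0, 2, 1, 3, 4))         # (B, 64, T', H', W')
        f3d = F.adaptive_avg_pool3d(f3d, 1).flatten(1)        # (B, 64)
        return self.classifier(torch.cat([f2d, f3d], dim=1))

logits = MSMANetSketch()(torch.randn(2, 16, 3, 112, 112))     # -> (2, 7)

In the full two-scale model described in the abstract, a block like this would presumably be instantiated once for the whole face and once for local patches, with the per-scale outputs fused before classification.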
Pages: 479-493
Number of pages: 15