An Enhanced Adversarial Network with Combined Latent Features for Spatio-temporal Facial Affect Estimation in the Wild

Cited by: 5
Authors
Aspandi, Decky [1 ,2 ]
Sukno, Federico [1 ]
Schuller, Bjoern [2 ,3 ]
Binefa, Xavier [1 ]
Affiliations
[1] Pompeu Fabra Univ, Dept Informat & Commun Technol, Barcelona, Spain
[2] Univ Augsburg, Chair Embedded Intelligence Hlth Care & Wellbeing, Augsburg, Germany
[3] Imperial Coll London, GLAM Grp Language Audio & Mus, London, England
Source
VISAPP: PROCEEDINGS OF THE 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL. 4: VISAPP, 2021
Keywords
Affective Computing; Temporal Modelling; Adversarial Learning; Tracking
DOI
10.5220/0010332001720181
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Affective Computing has recently attracted the attention of the research community due to its numerous applications in diverse areas. In this context, the emergence of video-based data makes it possible to enrich the widely used spatial features with temporal information. However, such spatio-temporal modelling often results in very high-dimensional feature spaces and large volumes of data, making training difficult and time consuming. This paper addresses these shortcomings by proposing a novel model that efficiently extracts both spatial and temporal features of the data by means of enhanced temporal modelling based on latent features. Our proposed model consists of three major networks, coined Generator, Discriminator, and Combiner, which are trained in an adversarial setting combined with curriculum learning to enable our adaptive attention modules. In our experiments, we show the effectiveness of our approach by reporting competitive results on both the AFEW-VA and SEWA datasets, suggesting that temporal modelling improves the affect estimates in both qualitative and quantitative terms. Furthermore, we find that the inclusion of attention mechanisms leads to the highest accuracy improvements, as their weights seem to correlate well with the appearance of facial movements, both in terms of temporal localisation and intensity. Finally, we observe that a sequence length of around 160 ms is optimal for temporal modelling, which is consistent with other relevant findings that use similar lengths.
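The abstract outlines a three-network layout (Generator, Discriminator, Combiner) trained adversarially, with attention applied over per-frame latent features for temporal affect regression. The sketch below illustrates how such a layout could be wired up; it is a minimal, assumption-laden illustration rather than the authors' implementation: the layer sizes, the 64x64 input resolution, the LSTM for temporal modelling, and the soft-attention pooling are all hypothetical choices.

```python
# Minimal PyTorch sketch of a Generator / Discriminator / Combiner layout.
# All architectural details are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encodes a face crop into a latent vector and reconstructs the image."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class Discriminator(nn.Module):
    """Scores frames as real or reconstructed for the adversarial objective."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),
        )

    def forward(self, x):
        return self.net(x)

class Combiner(nn.Module):
    """Aggregates per-frame latent features over time with soft attention
    and regresses valence/arousal."""
    def __init__(self, latent_dim=128, hidden_dim=64):
        super().__init__()
        self.temporal = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.attention = nn.Linear(hidden_dim, 1)
        self.head = nn.Linear(hidden_dim, 2)  # valence, arousal

    def forward(self, z_seq):                        # (B, T, latent_dim)
        h, _ = self.temporal(z_seq)                  # (B, T, hidden_dim)
        w = torch.softmax(self.attention(h), dim=1)  # (B, T, 1) attention weights
        pooled = (w * h).sum(dim=1)                  # attention-weighted summary
        return self.head(pooled), w

# Usage with dummy data: 4 sequences of 5 frames of 64x64 face crops.
if __name__ == "__main__":
    frames = torch.randn(4, 5, 3, 64, 64)
    G, D, C = Generator(), Discriminator(), Combiner()
    z, recon = G(frames.flatten(0, 1))               # per-frame latents + reconstructions
    score = D(recon)                                 # adversarial signal
    va, attn = C(z.view(4, 5, -1))                   # (4, 2) valence/arousal estimates
```

In a full training loop, the Discriminator would presumably be updated on real versus reconstructed frames while the Generator and Combiner jointly minimise reconstruction, adversarial, and valence/arousal regression losses, with curriculum learning gradually increasing the temporal difficulty; those details are omitted from this sketch.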
Pages: 172-181
Number of pages: 10