Study on emotion recognition bias in different regional groups

Cited by: 0
Authors
Martin Lukac
Gulnaz Zhambulova
Kamila Abdiyeva
Michael Lewis
Institutions
[1] Nazarbayev University, Department of Computer Science
DOI: not available
Abstract
Human-machine communication can be substantially enhanced by the inclusion of high-quality real-time recognition of spontaneous human emotional expressions. However, successful recognition of such expressions can be negatively impacted by factors such as sudden variations in lighting or intentional obfuscation. Reliable recognition can be more substantively impeded by the fact that the presentation and meaning of emotional expressions can vary significantly with the culture of the expressor and the environment within which the emotions are expressed. For example, an emotion recognition model trained on a regionally specific database collected in North America might fail to recognize standard emotional expressions from another region, such as East Asia. To address the problem of regional and cultural bias in emotion recognition from facial expressions, we propose a meta-model that fuses multiple emotional cues and features. The proposed approach integrates image features, action level units, micro-expressions, and macro-expressions into a multi-cues emotion model (MCAM). Each of the facial attributes incorporated into the model represents a specific category: fine-grained content-independent features, facial muscle movements, short-term facial expressions, and high-level facial expressions. The results of the proposed meta-classifier (MCAM) approach show that (a) successful classification of regional facial expressions relies on non-sympathetic features, (b) learning the emotional facial expressions of some regional groups can confound the recognition of the emotional expressions of other regional groups unless learning is done from scratch, and (c) certain facial cues and features in the datasets preclude the design of a perfectly unbiased classifier. Based on these observations, we posit that to learn certain regional emotional expressions, other regional expressions first have to be "forgotten".
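The two-stage fusion the abstract describes — independent per-cue classifiers whose outputs are combined by a meta-classifier — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the four cue extractors are stand-ins fed with synthetic data, and the cue names, dimensions, and choice of logistic regression are illustrative assumptions.

```python
# Sketch of a multi-cue meta-classifier in the spirit of MCAM (assumed
# structure): one base classifier per facial cue, then a meta-classifier
# fusing the per-cue class probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_SAMPLES, N_CLASSES = 200, 3

# Hypothetical cue streams standing in for image features, action units,
# micro-expressions, and macro-expressions (dimensions are arbitrary).
CUE_DIMS = {"image": 16, "action_units": 8, "micro": 6, "macro": 6}

# Toy data: one feature vector per cue per sample, plus an emotion label.
y = rng.integers(0, N_CLASSES, size=N_SAMPLES)
cues = {name: rng.normal(size=(N_SAMPLES, d)) + y[:, None]
        for name, d in CUE_DIMS.items()}

# Stage 1: train an independent base classifier on each cue.
base = {name: LogisticRegression(max_iter=1000).fit(X, y)
        for name, X in cues.items()}

# Stage 2: concatenate the per-cue class probabilities and train the
# meta-classifier on that fused representation.
meta_features = np.hstack([base[name].predict_proba(cues[name])
                           for name in CUE_DIMS])
meta = LogisticRegression(max_iter=1000).fit(meta_features, y)

acc = meta.score(meta_features, y)
```

Fusing probabilities rather than raw features keeps the meta-input small and lets each cue be retrained (or "forgotten") independently, which is relevant to the regional-retraining observation above.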