A Deeper Look at Facial Expression Dataset Bias

Cited by: 67
Authors
Li, Shan [1]
Deng, Weihong [1]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Informat & Commun Engn, Pattern Recognit & Intelligent Syst Lab, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Face recognition; Databases; Adaptation models; Training; Semantics; Data models; Task analysis; Cross dataset; facial expression recognition (FER); dataset bias; domain adaptation; RECOGNITION
DOI
10.1109/TAFFC.2020.2973158
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Datasets play an important role in the progress of facial expression recognition algorithms, but they may suffer from obvious biases caused by different cultures and collection conditions. To look deeper into this bias, we first conduct comprehensive experiments on dataset recognition and cross-dataset generalization tasks and, for the first time, explore the intrinsic causes of the dataset discrepancy. The results quantitatively verify that current datasets have a strong built-in bias, and the corresponding analyses indicate that the conditional probability distributions of the source and target datasets differ. However, previous research is mainly based on shallow features with limited discriminative ability, under the assumption that the conditional distribution remains unchanged across domains. To address these issues, we further propose a novel deep Emotion-Conditional Adaption Network (ECAN) to learn domain-invariant and discriminative feature representations, which matches not only the marginal distribution but also the class-conditional distribution across domains by exploiting the underlying label information of the target dataset. Moreover, the largely ignored expression class distribution bias is also addressed, so that the training and testing domains share a similar class distribution. Extensive cross-database experiments on both lab-controlled datasets (CK+, JAFFE, MMI, and Oulu-CASIA) and real-world databases (AffectNet, FER2013, RAF-DB 2.0, and SFEW 2.0) demonstrate that ECAN yields competitive performance across various cross-dataset facial expression recognition tasks and outperforms the state-of-the-art methods.
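The distribution matching described in the abstract builds on the kernel Maximum Mean Discrepancy (MMD), a standard two-sample statistic for measuring how far apart two feature distributions are. As a hedged illustration only (this is a generic NumPy sketch, not the authors' ECAN implementation; the `mmd2` function, the bandwidth heuristic, and the synthetic "datasets" are assumptions made for the example), a biased estimate of squared MMD between two sets of feature vectors can be computed as:

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    """Pairwise RBF kernel matrix k(x_i, y_j) = exp(-gamma * ||x_i - y_j||^2)."""
    sq_dists = (
        np.sum(x ** 2, axis=1)[:, None]
        + np.sum(y ** 2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

def mmd2(source, target, gamma=None):
    """Biased estimate of squared MMD between two samples (rows = feature vectors)."""
    if gamma is None:
        gamma = 1.0 / source.shape[1]  # simple bandwidth heuristic (assumption)
    k_ss = rbf_kernel(source, source, gamma)
    k_tt = rbf_kernel(target, target, gamma)
    k_st = rbf_kernel(source, target, gamma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 16))         # "source dataset" features
target_same = rng.normal(0.0, 1.0, size=(200, 16))    # same distribution
target_biased = rng.normal(1.0, 1.0, size=(200, 16))  # mean-shifted: simulated dataset bias

print(f"MMD^2, no bias:   {mmd2(source, target_same):.4f}")
print(f"MMD^2, with bias: {mmd2(source, target_biased):.4f}")
```

A marginal-only objective minimizes this quantity over all features pooled together; the class-conditional variant the abstract describes would additionally compare per-expression-class feature subsets, using predicted labels on the unlabeled target set.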
Pages: 881-893
Page count: 13