Adaptive Feature Mapping for Customizing Deep Learning Based Facial Expression Recognition Model

Cited by: 68
Authors
Wu, Bing-Fei [1 ]
Lin, Chun-Hsien [1 ]
Affiliations
[1] Natl Chiao Tung Univ, Elect & Control Engn, Hsinchu 30010, Taiwan
Keywords
Cross domain adaption; facial expression recognition; computer vision; pattern recognition; image processing; VALIDATION; FACES;
DOI
10.1109/ACCESS.2018.2805861
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline Classification Code: 0812
Abstract
Automated facial expression recognition can greatly improve the human-machine interface. When a machine knows the user's emotion, it can provide better and more personalized services, which is an important step forward in the era of artificial intelligence. Many deep learning approaches have been applied in recent years because of their outstanding recognition accuracy after training on large amounts of data. Their performance is limited, however, by specific environmental conditions and by variations across individuals. Hence, this paper addresses how to customize a generic model without label information from the testing samples. Weighted Center Regression Adaptive Feature Mapping (W-CR-AFM) is proposed to transform the feature distribution of the testing samples into that of the training samples. By minimizing the error between each testing-sample feature and the center of its most relevant category, W-CR-AFM pulls testing-sample features that lie near the decision boundary toward the centers of the expression categories, so their predicted labels can be corrected. When the model tuned by W-CR-AFM is tested on the extended Cohn-Kanade (CK+) database, the Radboud Faces Database, and the Amsterdam Dynamic Facial Expression Set, our approach improves recognition accuracy by about 3.01%, 0.49%, and 5.33%, respectively. Compared with competing deep learning architectures trained on the same data, our approach shows better performance.
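The abstract describes W-CR-AFM only at a high level: unlabeled testing-sample features are pulled toward the center of their most relevant expression category so that borderline predictions are corrected. The Python sketch below illustrates one plausible form of such a weighted center-regression loss; the function name, the softmax-based confidence weighting, and the temperature parameter are illustrative assumptions, not the authors' exact formulation.

# Minimal sketch of a weighted center-regression style adaptation loss (assumed form).
import torch
import torch.nn.functional as F

def weighted_center_regression_loss(test_features, class_centers, temperature=1.0):
    """test_features: (N, D) features of unlabeled testing samples.
    class_centers: (K, D) expression-category centers estimated from the training data."""
    # Squared Euclidean distance from every testing feature to every category center.
    dists = torch.cdist(test_features, class_centers, p=2) ** 2          # (N, K)
    # Soft assignment over categories; closer centers receive larger weight.
    weights = F.softmax(-dists / temperature, dim=1)                     # (N, K)
    # The most relevant category for each testing sample.
    nearest = dists.argmin(dim=1)                                        # (N,)
    # Confidence of that assignment, used to down-weight ambiguous samples.
    conf = weights.gather(1, nearest.unsqueeze(1)).squeeze(1)            # (N,)
    # Pull each testing feature toward the center of its most relevant category.
    target = class_centers[nearest]                                      # (N, D)
    per_sample = ((test_features - target) ** 2).sum(dim=1)              # (N,)
    return (conf * per_sample).mean()

In use, the loss would be minimized with respect to the feature extractor's parameters (so test_features must be produced by the network with gradients enabled), while the category centers are held fixed from the trained model.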
Pages: 12451-12461
Number of pages: 11