Sparse Kernel Reduced-Rank Regression for Bimodal Emotion Recognition From Facial Expression and Speech

Cited by: 75
Authors
Yan, Jingjie [1 ]
Zheng, Wenming [3 ]
Xu, Qinyu [1 ]
Lu, Guanming [1 ]
Li, Haibo [1 ,2 ]
Wang, Bei [3 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Jiangsu Prov Key Lab Image Proc & Image Commun, Coll Telecomm & Informat Engn, Nanjing 210003, Peoples R China
[2] Royal Inst Technol, Sch Comp Sci & Commun, S-11428 Stockholm, Sweden
[3] Southeast Univ, Key Lab Child Dev & Learning Sci, Minist Educ, Res Ctr Learning Sci, Nanjing 210096, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Bimodal emotion recognition; facial expression; feature fusion; sparse kernel reduced-rank regression (SKRRR); speech; PHENOTYPES; FRAMEWORK; FUSION; FACE;
DOI
10.1109/TMM.2016.2557721
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
This paper proposes a novel bimodal emotion recognition approach from facial expression and speech based on a sparse kernel reduced-rank regression (SKRRR) fusion method. We use the openSMILE feature extractor and the scale-invariant feature transform (SIFT) descriptor to extract effective features from the speech and facial-expression modalities, respectively, and then propose the SKRRR fusion approach to fuse the emotion features of the two modalities. SKRRR is a nonlinear extension of traditional reduced-rank regression (RRR), in which both the predictor and response feature vectors of RRR are kernelized by mapping them onto two high-dimensional feature spaces via two nonlinear mappings. To solve the SKRRR problem, we propose a sparse representation (SR)-based approach to find the optimal coefficient matrices, where the SR technique is introduced to account for the different contributions of individual training samples to the optimal solution. Finally, we conduct monomodal and bimodal emotion recognition experiments on the eNTERFACE '05 and AFEW 4.0 bimodal emotion databases; the results indicate that the presented approach achieves the highest or a comparable bimodal emotion recognition rate among several state-of-the-art approaches.
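The classic linear RRR that SKRRR extends can be sketched compactly. The snippet below is a minimal illustration only, not the paper's method: it implements the standard closed-form RRR solution (least-squares fit followed by a rank-r projection of the fitted responses), with toy arrays standing in for speech (predictor) and facial-expression (response) features; the kernelization and the sparse-representation solver described in the abstract are not reproduced here, and all names and data are illustrative assumptions.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classic linear RRR: argmin_{rank(C) <= r} ||Y - X C||_F.

    Closed form: take the full least-squares solution, then project
    the fitted responses onto their top-r singular directions.
    SKRRR additionally kernelizes X and Y; that step is omitted here.
    """
    B_ols = np.linalg.pinv(X) @ Y                      # unconstrained least squares
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                        # rank-r projector in response space
    return B_ols @ P                                   # rank-constrained coefficient matrix

# Toy stand-ins: X = "speech" features, Y = "facial" features (illustrative only)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
Y = X @ rng.standard_normal((20, 8)) + 0.1 * rng.standard_normal((100, 8))
C = reduced_rank_regression(X, Y, rank=3)
```

The rank constraint is what makes RRR a fusion device: it forces the mapping between the two modalities' feature spaces through a low-dimensional shared subspace, which SKRRR then generalizes with nonlinear kernels and a sparse solver.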
Pages: 1319-1329 (11 pages)