Sparse Kernel Reduced-Rank Regression for Bimodal Emotion Recognition From Facial Expression and Speech

Cited by: 75
Authors
Yan, Jingjie [1 ]
Zheng, Wenming [3 ]
Xu, Qinyu [1 ]
Lu, Guanming [1 ]
Li, Haibo [1 ,2 ]
Wang, Bei [3 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Jiangsu Prov Key Lab Image Proc & Image Commun, Coll Telecomm & Informat Engn, Nanjing 210003, Peoples R China
[2] Royal Inst Technol, Sch Comp Sci & Commun, S-11428 Stockholm, Sweden
[3] Southeast Univ, Key Lab Child Dev & Learning Sci, Minist Educ, Res Ctr Learning Sci, Nanjing 210096, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Bimodal emotion recognition; facial expression; feature fusion; sparse kernel reduced-rank regression (SKRRR); speech; PHENOTYPES; FRAMEWORK; FUSION; FACE;
DOI
10.1109/TMM.2016.2557721
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline code
0812;
Abstract
A novel bimodal emotion recognition approach from facial expression and speech, based on the sparse kernel reduced-rank regression (SKRRR) fusion method, is proposed in this paper. In this method, we use the openSMILE feature extractor and the scale-invariant feature transform (SIFT) descriptor to extract effective features from the speech and facial expression modalities, respectively, and then propose the SKRRR fusion approach to fuse the emotion features of the two modalities. The proposed SKRRR method is a nonlinear extension of traditional reduced-rank regression (RRR), in which both the predictor and response feature vectors of RRR are kernelized by mapping them onto two high-dimensional feature spaces via two nonlinear mappings. To solve the SKRRR problem, we propose a sparse representation (SR)-based approach to find the optimal coefficient matrices of SKRRR, where the SR technique is introduced to fully account for the different contributions of the training samples to the optimal solution. Finally, we use the eNTERFACE '05 and AFEW 4.0 bimodal emotion databases to conduct monomodal and bimodal emotion recognition experiments, and the results indicate that the presented approach achieves the highest or a comparable bimodal emotion recognition rate among several state-of-the-art approaches.
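As background for the abstract's method, and not the paper's kernelized sparse variant (SKRRR), classical linear RRR can be sketched in NumPy: the rank-constrained coefficient matrix is obtained by projecting the ordinary least-squares fit onto its leading singular subspace. All names below are illustrative, not from the paper.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Minimize ||Y - X B||_F subject to rank(B) <= rank (classical RRR)."""
    # Ordinary least-squares coefficients (pseudo-inverse for stability)
    B_ols = np.linalg.pinv(X) @ Y
    # SVD of the fitted responses; keep the top-`rank` right singular vectors
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T
    # Project the OLS solution onto the leading `rank`-dimensional subspace
    return B_ols @ V @ V.T

# Synthetic check: recover a rank-2 coefficient matrix from noisy data
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
B_true = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 5))  # rank 2
Y = X @ B_true + 0.01 * rng.standard_normal((100, 5))
B_hat = reduced_rank_regression(X, Y, rank=2)
print(np.linalg.matrix_rank(B_hat))  # → 2
```

In the paper's SKRRR extension, both predictor and response vectors are first mapped into reproducing-kernel feature spaces, and a sparse-representation penalty weights the contribution of each training sample when solving for the coefficient matrices.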
Pages: 1319-1329
Page count: 11