Multimodal shared features learning for emotion recognition by enhanced sparse local discriminative canonical correlation analysis

Cited: 15
Authors
Fu, Jiamin [1]
Mao, Qirong [1]
Tu, Juanjuan [2]
Zhan, Yongzhao [1]
Affiliations
[1] Jiangsu University, School of Computer Science and Communication Engineering, Zhenjiang, Jiangsu, China
[2] Jiangsu University of Science and Technology, School of Computer Science and Engineering, Zhenjiang, Jiangsu, China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Multimodal emotion recognition; Multimodal shared feature learning; Multimodal information fusion; Canonical correlation analysis
DOI
10.1007/s00530-017-0547-8
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812 (Computer Science and Technology)
Abstract
Multimodal emotion recognition is a challenging research topic that has recently begun to attract the attention of the research community. To better recognize video users' emotions, research on multimodal emotion recognition based on audio and video is essential. Multimodal emotion recognition performance depends heavily on finding a good shared feature representation. A good shared representation must satisfy two requirements: (1) it preserves the characteristics of each modality, and (2) it balances the influence of the different modalities so that the final decision is optimal. In light of this, we propose a novel Enhanced Sparse Local Discriminative Canonical Correlation Analysis (En-SLDCCA) approach to learn the multimodal shared feature representation. The shared representation is learned in two stages. In the first stage, we pretrain a Sparse Auto-Encoder on unimodal video (or audio), so that we obtain the hidden feature representations of video and audio separately. In the second stage, we obtain the correlation coefficients of video and audio using our En-SLDCCA approach, and then form the shared feature representation that fuses the video and audio features using these correlation coefficients. We evaluate our method on the challenging multimodal eNTERFACE'05 database. Experimental results reveal that our method is superior to unimodal video (or audio) and significantly improves performance for multimodal emotion recognition compared with the current state of the art.
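The abstract specifies the two-stage pipeline but not the En-SLDCCA objective itself, which augments canonical correlation analysis with sparsity and local discriminative constraints. As a rough, non-authoritative sketch of the idea, the Python example below substitutes plain classical CCA for En-SLDCCA in stage 2 and uses random placeholders for the stage-1 Sparse Auto-Encoder hidden features; the function cca, the feature dimensions, and the concatenation-based fusion are illustrative assumptions, not the paper's method.

import numpy as np

def cca(X, Y, dim, reg=1e-4):
    # Classical canonical correlation analysis (stand-in for En-SLDCCA).
    # X: (n, p) video features; Y: (n, q) audio features; rows are samples.
    # Returns projections Wx (p, dim), Wy (q, dim) that maximize the
    # correlation between X @ Wx and Y @ Wy.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])  # regularized covariances
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root via eigendecomposition.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    Sxx_is, Syy_is = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, _, Vt = np.linalg.svd(Sxx_is @ Sxy @ Syy_is)
    return Sxx_is @ U[:, :dim], Syy_is @ Vt.T[:, :dim]

# Stage 1 (placeholder): hidden representations that would come from the
# per-modality pretrained Sparse Auto-Encoders; random here for illustration.
rng = np.random.default_rng(0)
H_video = rng.standard_normal((200, 64))  # 200 samples, 64-d video hidden code
H_audio = rng.standard_normal((200, 32))  # 200 samples, 32-d audio hidden code

# Stage 2: project both modalities onto the correlated subspace and fuse.
Wx, Wy = cca(H_video, H_audio, dim=16)
shared = np.hstack([H_video @ Wx, H_audio @ Wy])  # shared feature representation
print(shared.shape)  # (200, 32)

The fused matrix would then feed a downstream emotion classifier; in the paper's approach, the plain CCA step is replaced by the En-SLDCCA objective and the random placeholders by the pretrained autoencoders' hidden layers.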
Pages: 451-461
Page count: 11
Cited References (41 in total)
[1] An, Le; Yang, Songfan; Bhanu, Bir. Person Re-Identification by Robust Canonical Correlation Analysis. IEEE Signal Processing Letters, 2015, 22(8): 1103-1107.
[2] Busso, C., et al. Proceedings of the 6th International Conference on Multimodal Interfaces, 2004: 205. DOI: 10.1145/1027933.1027968.
[3] Chen, L. S.; Huang, T. S.; Miyasato, T.; Nakatsu, R. Multimodal human emotion/expression recognition. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998: 366-371.
[4] Chen, Yilun; Wiesel, Ami; Eldar, Yonina C.; Hero, Alfred O. Shrinkage Algorithms for MMSE Covariance Estimation. IEEE Transactions on Signal Processing, 2010, 58(10): 5016-5029.
[5] Datcu, D., 2009, MULTIMODAL RECOGNITI.
[6] Deng, Jun; Zhang, Zixing; Marchi, Erik; Schuller, Bjoern. Sparse Autoencoder-Based Feature Transfer Learning for Speech Emotion Recognition. 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 2013: 511-516.
[7] Dobrisek, Simon; Gajsek, Rok; Mihelic, France; Pavesic, Nikola; Struc, Vitomir. Towards Efficient Multi-Modal Emotion Recognition. International Journal of Advanced Robotic Systems, 2013, 10.
[8] Gajsek, Rok, et al. Proceedings of the 2010 20th International Conference on Pattern Recognition (ICPR 2010), 2010: 4133. DOI: 10.1109/ICPR.2010.1005.
[9] Gunes, H., 2008, LAB REAL WORLD AFFEC.
[10] Han, M. J., et al. IEEE International Conference on Systems, Man and Cybernetics, 2007: 2464.