Deep learning-based late fusion of multimodal information for emotion classification of music video

Cited by: 124
Authors
Pandeya, Yagya Raj [1 ]
Lee, Joonwhoan [1 ]
Affiliations
[1] Jeonbuk Natl Univ, Div Comp Sci & Engn, Jeonju, South Korea
Funding
National Research Foundation of Singapore;
关键词
Emotion classification; Music video dataset; CNN; Multimodal approach; Late fusion; NEURAL-NETWORKS; REPRESENTATION; MODEL; RECOGNITION; CATEGORIES; SPACE;
DOI
10.1007/s11042-020-08836-3
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Affective computing is an emerging research area that aims to enable intelligent systems to recognize, feel, infer, and interpret human emotions. Widely available online and offline music videos are a rich source for human emotion analysis because they integrate the composer's internal feelings through song lyrics, musical instrument performance, and visual expression. In general, the metadata that music video customers use to choose a product includes high-level semantics such as emotion, so automatic emotion analysis may be necessary. In this research area, however, the lack of a labeled dataset is a major problem. Therefore, we first construct a balanced music video emotion dataset with diversity of territory, language, culture, and musical instruments. We test this dataset on four unimodal and four multimodal convolutional neural networks (CNNs) for music and video. First, we separately fine-tune each pre-trained unimodal CNN and test its performance on unseen data. In addition, we train a 1-dimensional CNN-based music emotion classifier with raw waveform input. A comparative analysis of each unimodal classifier over various optimizers is performed to find the best model that can be integrated into a multimodal structure. The best unimodal modality is integrated with the corresponding music and video network features for the multimodal classifier. The multimodal structure integrates all music video features and makes the final classification with a softmax classifier using a late feature fusion strategy. All possible multimodal structures are also combined into one predictive model to obtain the overall prediction. All the proposed multimodal structures use cross-validation at the decision level to mitigate the data scarcity problem (overfitting). Evaluation results using various metrics show a boost in the performance of the multimodal architectures compared to each unimodal emotion classifier.
The predictive model integrating all multimodal structures achieves 88.56% accuracy, an f1-score of 0.88, and an area under the curve (AUC) score of 0.987. These results suggest that high-level human emotions are well classified automatically by the proposed CNN-based multimodal networks, even though only a small amount of labeled data is available for training.
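The late feature fusion strategy described in the abstract can be sketched as follows: embeddings from the unimodal music and video networks are concatenated and passed to a softmax layer for the final classification. The feature dimensions, class count, and random weights below are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: each pre-trained unimodal CNN is assumed to expose a
# fixed-length embedding for its input clip; the class count is a placeholder.
MUSIC_DIM, VIDEO_DIM, NUM_CLASSES = 128, 256, 6

def late_fusion_predict(music_feat, video_feat, W, b):
    """Concatenate unimodal embeddings, then classify with a softmax layer."""
    fused = np.concatenate([music_feat, video_feat], axis=-1)
    return softmax(fused @ W + b)

# Toy example with random embeddings and untrained weights.
music_feat = rng.normal(size=MUSIC_DIM)
video_feat = rng.normal(size=VIDEO_DIM)
W = rng.normal(size=(MUSIC_DIM + VIDEO_DIM, NUM_CLASSES)) * 0.01
b = np.zeros(NUM_CLASSES)

probs = late_fusion_predict(music_feat, video_feat, W, b)
```

In a trained system, `W` and `b` would be learned jointly with (or on top of) the frozen unimodal feature extractors; the fusion happens "late" because each modality is encoded independently before any interaction.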
Pages: 2887-2905
Page count: 19