Rough set-based approach for automatic emotion classification of music

Cited by: 4
Authors
Baniya B.K. [1 ]
Lee J. [1 ]
Affiliations
[1] Dept. of Computer Science and Engineering, Chonbuk National University, Jeonju
Source
Journal of Information Processing Systems | 2017 / Vol. 13 / No. 2
Funding
National Research Foundation of Singapore;
Keywords
Attributes; Covariance; Discretize; Rough set; Rules
DOI
10.3745/JIPS.04.0032
Abstract
Music emotion is an important component of music information retrieval and computational musicology. This paper proposes an approach to automatic emotion classification based on rough set (RS) theory. In the proposed approach, four different sets of music features are extracted, representing dynamics, rhythm, spectrum, and harmony. From these features, five statistical parameters are considered as attributes, including central moments up to the 4th order of each feature and covariance components between pairs of features. The large number of attributes is controlled by the RS-based approach, in which superfluous features are removed to retain only the indispensable ones. In addition, the RS-based approach makes it possible to visualize which attributes play a significant role in the generated rules and to determine the strength of each rule for classification. Experiments were performed to find out which audio features, and which of the statistical parameters derived from them, are important for emotion classification. The resulting indispensable attributes and the usefulness of the covariance components are also discussed. The overall classification accuracy with all statistical parameters is better than that of currently existing methods on a pair of datasets. © 2017 KIPS.
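The attribute-construction step the abstract describes (central moments of each frame-level feature, plus covariance components between feature pairs) can be sketched as below. This is a minimal illustration, not the authors' implementation: the function name, the NumPy layout, and the choice to return raw central moments of orders 2-4 alongside the mean are all assumptions made for the example.

```python
import numpy as np

def statistical_attributes(features):
    """Build a per-track attribute vector from a frame-level feature
    matrix of shape (n_frames, n_features): mean, central moments of
    orders 2-4 per feature, and off-diagonal covariance components
    between feature pairs (illustrative layout, not the paper's)."""
    mu = features.mean(axis=0)
    centered = features - mu
    # Central moments of orders 2, 3, and 4 for each feature column.
    m2 = (centered ** 2).mean(axis=0)
    m3 = (centered ** 3).mean(axis=0)
    m4 = (centered ** 4).mean(axis=0)
    # Covariance components between mutual (pairwise) features:
    # keep only the upper triangle above the diagonal.
    cov = np.cov(features, rowvar=False)
    iu = np.triu_indices_from(cov, k=1)
    return np.concatenate([mu, m2, m3, m4, cov[iu]])
```

With n features, this yields 4n moment-based attributes plus n(n-1)/2 covariance components, which illustrates why an RS-based reduct step is needed to discard superfluous attributes before rule generation.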
Pages: 400-416
Page count: 16
References
24 entries in total
[1] Yang Y.H., Lin Y.C., Su Y.F., Chen H.H., A regression approach to music emotion recognition, IEEE Transactions on Audio, Speech, and Language Processing, 16, 2, pp. 448-457, (2008)
[2] Cabrera D., PsySound: a computer program for psychoacoustical analysis, Proceedings of the Australian Acoustical Society Conference, pp. 47-54, (1999)
[3] Tzanetakis G., Cook P., Musical genre classification of audio signals, IEEE Transactions on Speech and Audio Processing, 10, 5, pp. 293-302, (2002)
[4] Lu L., Liu D., Zhang H.J., Automatic mood detection and tracking of music audio signals, IEEE Transactions on Audio, Speech, and Language Processing, 14, 1, pp. 5-18, (2006)
[5] Li T., Ogihara M., Content-based music similarity search and emotion detection, Proceedings of IEEE International Conference on Acoustics, pp. 705-708, (2004)
[6] Sen A., Srivastava M., Regression Analysis: Theory, Methods, and Applications, (1990)
[7] Smola A.J., Scholkopf B., A tutorial on support vector regression, Statistics and Computing, 14, 3, pp. 199-222, (2004)
[8] Yang Y.H., Chen H.H., Ranking-based emotion recognition for music organization and retrieval, IEEE Transactions on Audio, Speech, and Language Processing, 19, 4, pp. 762-774, (2011)
[9] Solomatine D.P., Shrestha D.L., AdaBoost.RT: a boosting algorithm for regression problems, Proceedings of IEEE International Joint Conference on Neural Networks, pp. 1163-1168, (2004)
[10] Lartillot O., Toiviainen P., MIR in MATLAB (II): a toolbox for musical feature extraction from audio, Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR 2007), pp. 127-130, (2007)