Music Theory-Inspired Acoustic Representation for Speech Emotion Recognition

Cited by: 5
Authors
Li, Xingfeng [1 ]
Shi, Xiaohan [2 ]
Hu, Desheng [3 ]
Li, Yongwei [4 ]
Zhang, Qingchen [1 ]
Wang, Zhengxia [5 ]
Unoki, Masashi [6 ]
Akagi, Masato [6 ]
Affiliations
[1] Hainan Univ, Grad Sch Comp Sci & Technol, Haikou 570288, Peoples R China
[2] Nagoya Univ, Sch Informat Sci, Nagoya 4648601, Japan
[3] Taiyuan Univ Technol, Coll Informat & Comp, Taiyuan 030024, Peoples R China
[4] Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
[5] Hainan Univ, Sch Comp Sci & Technol, Haikou 570288, Peoples R China
[6] Japan Adv Inst Sci & Technol, Sch Informat Sci, Nomi 9231292, Japan
Funding
National Natural Science Foundation of China;
Keywords
Affective computing; speech emotion recognition; acoustic representation; music theory and speech analysis; PERCEPTION; EXPRESSION; PATTERNS; FEATURES; PITCH; PERSPECTIVE; MODALITIES; KNOWLEDGE; INTERVALS; COGNITION;
DOI
10.1109/TASLP.2023.3289312
Chinese Library Classification
O42 [Acoustics];
Discipline codes
070206; 082403;
Abstract
This research presents a music theory-inspired acoustic representation (hereafter, MTAR) for improved speech emotion recognition. Although emotion recognition in speech and in music has developed in parallel, music theory remains relatively underexplored as a means of interpreting emotion in speech. In the present study, we use music theory to derive representative acoustic features associated with speech emotion, drawing on both vocal emotion expression and auditory emotion perception. In experiments assessing the role and effectiveness of the proposed representation in classifying discrete emotion categories and predicting continuous emotion dimensions, MTAR shows promising performance compared with widely used features for emotion recognition, including the spectrogram, Mel-spectrogram, Mel-frequency cepstral coefficients, VGGish embeddings, and the large baseline feature sets of the INTERSPEECH challenges. This proposal opens a novel research avenue: developing computational acoustic representations of speech emotion via music theory.
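For context on the baseline features the abstract compares against, the sketch below computes a log-Mel spectrogram (the common precursor to MFCCs) from scratch with numpy. This is a minimal illustration of the baseline feature family, not the paper's MTAR method; all parameter values (16 kHz sampling, 512-point FFT, 10 ms hop, 40 Mel bands) are conventional assumptions, not taken from the paper.

```python
import numpy as np

def hz_to_mel(f):
    # Standard Mel-scale mapping used for speech feature extraction.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(y, sr=16000, n_fft=512, hop=160, n_mels=40):
    """STFT power spectrum -> triangular Mel filterbank -> log energies."""
    # Frame the signal with a Hann window (no padding; tail samples dropped).
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop:i * hop + n_fft] * window for i in range(n_frames)])
    # Per-frame power spectrum: shape (n_frames, n_fft // 2 + 1).
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    # Triangular filters spaced evenly on the Mel scale from 0 Hz to Nyquist.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log compression with a small floor to avoid log(0).
    return np.log(power @ fbank.T + 1e-10)

# One second of a 440 Hz tone as a stand-in for a speech signal.
sr = 16000
t = np.arange(sr) / sr
feats = log_mel_spectrogram(np.sin(2 * np.pi * 440.0 * t), sr=sr)
print(feats.shape)  # (frames, Mel bands) = (97, 40)
```

Taking a discrete cosine transform of each row of this output would yield MFCCs, another of the baseline representations named in the abstract.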
Pages: 2534-2547 (14 pages)