Using Circular Models to Improve Music Emotion Recognition

Cited by: 8
Authors
Dufour, Isabelle [1 ]
Tzanetakis, George [1 ]
Affiliations
[1] Univ Victoria, Dept Comp Sci, Victoria, BC V8P 5C2, Canada
Keywords
Mood; Emotion recognition; Computational modeling; Task analysis; Music; Predictive models; Music emotion recognition; emotional model; circular annotations; CIRCUMPLEX MODEL; BASIC EMOTIONS; PREDICTION; DISCRIMINATION; CLASSIFICATION; CONSTRUCTION; RETRIEVAL; DISCRETE;
DOI
10.1109/TAFFC.2018.2885744
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The two commonly accepted models of affect used in affective computing are categorical and two-dimensional. However, categorical models are limited to datasets containing only music on which human annotators fully agree, while two-dimensional models use descriptors to which users may not relate (e.g., Valence and Arousal). This paper explores the hypothesis that the music emotion problem is circular, and shows how circular models can be used for automatic music emotion recognition. This hypothesis is tested through experiments on the two commonly accepted models of affect, as well as on an original circular model proposed by the authors. First, an original dataset was assembled and annotated as a way to investigate agreement among annotators. Then, polygonal approximations of circular regression are proposed as a practical method to investigate whether the circularity of the annotations can be exploited. Experiments with different polygons demonstrate consistent improvements over the categorical model on a dataset containing musical extracts on which the human annotators did not fully agree. Finally, a proposed multi-tagging strategy based on the circular predictions is put forward as a pragmatic method to automatically annotate music based on the circular models.
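The circularity hypothesis can be illustrated with a minimal sketch (not from the paper; the mapping and values are illustrative assumptions): a valence-arousal annotation is projected to an angle on the emotion circle, and distances between annotations are measured with wrap-around, so that points just either side of 0° count as close rather than maximally far apart.

```python
import math

def to_angle(valence, arousal):
    """Map a (valence, arousal) annotation to an angle on the
    emotion circle, in degrees in [0, 360). Illustrative only."""
    return math.degrees(math.atan2(arousal, valence)) % 360

def circular_distance(a, b):
    """Shortest angular distance between two angles (degrees).
    Unlike a linear difference, it respects wrap-around at 360."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

# Two annotators place the same clip just above and just below the
# positive-valence axis: linearly the angles differ by ~335 degrees,
# but circularly they are only ~25 degrees apart.
a = to_angle(0.9, 0.2)
b = to_angle(0.9, -0.2)
print(circular_distance(a, b))
```

A circular regression (or its polygonal approximation, as in the paper) optimizes against this kind of wrap-around distance instead of an ordinary linear loss.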
Pages: 666 - 681
Page count: 16
Related Papers (50 total)
  • [1] A survey of music emotion recognition
    Han, Donghong
    Kong, Yanru
    Han, Jiayi
    Wang, Guoren
    FRONTIERS OF COMPUTER SCIENCE, 2022, 16 (06)
  • [2] Machine Recognition of Music Emotion: A Review
    Yang, Yi-Hsuan
    Chen, Homer H.
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2012, 3 (03)
  • [3] A multi-genre model for music emotion recognition using linear regressors
    Griffiths, Darryl
    Cunningham, Stuart
    Weinel, Jonathan
    Picking, Richard
    JOURNAL OF NEW MUSIC RESEARCH, 2021, 50 (04) : 355 - 372
  • [4] A comparison of the discrete and dimensional models of emotion in music
    Eerola, Tuomas
    Vuoskoski, Jonna K.
    PSYCHOLOGY OF MUSIC, 2011, 39 (01) : 18 - 49
  • [5] Music Emotion Recognition Based on Deep Learning: A Review
    Jiang, Xingguo
    Zhang, Yuchao
    Lin, Guojun
    Yu, Ling
    IEEE ACCESS, 2024, 12 : 157716 - 157745
  • [6] LINEAR REGRESSION-BASED ADAPTATION OF MUSIC EMOTION RECOGNITION MODELS FOR PERSONALIZATION
    Chen, Yu-An
    Wang, Ju-Chiang
    Yang, Yi-Hsuan
    Chen, Homer
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014
  • [7] Music Emotion Recognition: From Content- to Context-Based Models
    Barthet, Mathieu
    Fazekas, Gyorgy
    Sandler, Mark
    FROM SOUNDS TO MUSIC AND EMOTIONS, 2013, 7900 : 228 - 252
  • [8] Embedding-Based Music Emotion Recognition Using Composite Loss
    Takashima, Naoki
    Li, Frederic
    Grzegorzek, Marcin
    Shirahama, Kimiaki
    IEEE ACCESS, 2023, 11 : 36579 - 36604
  • [10] Automatic ECG-Based Emotion Recognition in Music Listening
    Hsu, Yu-Liang
    Wang, Jeen-Shing
    Chiang, Wei-Chun
    Hung, Chien-Han
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2020, 11 (01) : 85 - 99