A Theory-Based Explainable Deep Learning Architecture for Music Emotion

Cited by: 2
Authors
Fong, Hortense [1 ]
Kumar, Vineet [2 ]
Sudhir, K. [2 ]
Affiliations
[1] Columbia Business School, Marketing, New York, NY 10027 USA
[2] Yale School of Management, New Haven, CT 06511 USA
Keywords
audio data; deep learning; explainable and interpretable AI; emotion; digital advertising; music theory
DOI
10.1287/mksc.2022.0323
CLC classification
F [Economics]
Subject classification
02
Abstract
This paper develops a theory-based, explainable deep learning convolutional neural network (CNN) classifier to predict the time-varying emotional response to music. We design novel CNN filters that leverage the frequency-harmonics structure from acoustic physics known to impact the perception of musical features. Our theory-based model is more parsimonious, yet it provides predictive performance comparable to atheoretical deep learning models and better than models using handcrafted features. The model can be complemented with handcrafted features, but the performance improvement is marginal. Importantly, the harmonics-based structure placed on the CNN filters provides better explainability for how the model predicts emotional response (valence and arousal), because emotion is closely related to consonance, a perceptual feature defined by the alignment of harmonics. Finally, we illustrate the utility of our model with an application to digital advertising. Motivated by YouTube's midroll ads, we conduct a laboratory experiment in which we exogenously insert ads at different times within videos. We find that ads placed in emotionally similar contexts increase ad engagement (lower skip rates and higher brand recall rates). Ad insertion based on emotional-similarity metrics predicted by our theory-based, explainable model produces engagement comparable to or better than that of atheoretical models.
Pages: 196-219
Page count: 25