Constructing a Multimodal Music Teaching Model in College by Integrating Emotions

Cited by: 0
Author
Song J. [1 ]
Affiliation
[1] College of Arts, Hubei University of Education, Wuhan, Hubei
Keywords
CaffeNet network; Feature fusion; LRLR; Music teaching; Speech signal;
DOI
10.2478/amns-2024-1202
Abstract
In this study, we enhanced the CaffeNet network for recognizing students' facial expressions in a music classroom and extracted emotional features from their expressions. Additionally, students' speech signals were processed through filters to identify emotional characteristics. Using the LRLR fusion strategy, these expression and speech-based emotional features were combined to derive multimodal fusion emotion results. Subsequently, a music teaching model incorporating this multimodal emotion recognition was developed. Our analysis indicates a mere 6.03% discrepancy between the model's emotion recognition results and manual emotional assessments, underscoring its effectiveness. Implementation of this model in a music teaching context led to a noticeable increase in positive emotional responses: happy and surprised emotions peaked at 30.04% and 27.36%, respectively, during the fourth week. Furthermore, 70% of students displayed a positive learning status, demonstrating a significant boost in engagement and motivation for music learning. This approach markedly enhances student interest in learning and provides a solid basis for improving educational outcomes in music classes. © 2024 Jia Song, published by Sciendo.
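The abstract describes a late-fusion step in which expression-based and speech-based emotion scores are combined into a single multimodal result via the LRLR strategy. The paper's exact LRLR formulation is not given in the abstract, so the sketch below stands in with a simple weighted late fusion of two per-class score vectors; the emotion labels, weights, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative emotion classes (assumed; the paper's label set is not stated here).
EMOTIONS = ["happy", "surprised", "neutral", "sad"]

def fuse_emotions(expr_scores, speech_scores, w_expr=0.6, w_speech=0.4):
    """Weighted late fusion of two per-class emotion score vectors.

    A stand-in for the paper's LRLR fusion: each modality contributes a
    score per emotion class, and the weighted sum is renormalized into a
    probability distribution over classes.
    """
    expr = np.asarray(expr_scores, dtype=float)
    speech = np.asarray(speech_scores, dtype=float)
    fused = w_expr * expr + w_speech * speech
    return fused / fused.sum()  # normalize so the scores sum to 1

# Example: per-class scores from the two modalities.
expr = [0.5, 0.2, 0.2, 0.1]    # e.g. facial-expression (CaffeNet-style) scores
speech = [0.3, 0.4, 0.2, 0.1]  # e.g. filter-based speech emotion scores
fused = fuse_emotions(expr, speech)
print(EMOTIONS[int(np.argmax(fused))])  # → happy
```

The face modality is weighted more heavily here only to show that the fusion weights are a free parameter; in practice they would be fit on validation data.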