Music mode analysis and teaching enlightenment research under the background of digital education

Cited by: 0
Author
Mao, Qiusi [1 ]
Affiliation
[1] Henan Univ Urban Construct, Pingdingshan 467000, Henan, Peoples R China
Keywords
Digital education; Deep learning; Music mode signal; Music teaching; Attention mechanism; Classification; Features
DOI
10.1007/s00500-023-08755-z
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
With the spread of digital education, using deep learning models to analyze the genre attributes embedded in music mode signals and to strengthen students' capacity for music appreciation has attracted considerable attention in music pedagogy. To improve how well a deep learning model captures music mode signals and genre characteristics, this study proposes a music genre classification model based on spectral- and spatial-domain feature attention. The original music mode signal is first filtered; the resulting music Mel spectrogram is then partitioned and fed into the network. The model further strengthens genre feature extraction by modifying the convolutional structure and intensifying spatial-domain attention. Experiments show that the model achieves higher accuracy and faster convergence in music genre classification than the compared approaches, improving accuracy by 5.36 to 10.44 percentage points. The model therefore extracts music mode signals precisely, supports genre classification, and markedly improves the effectiveness of digital music instruction.
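The abstract outlines the pipeline only at a high level (filter the signal, compute and partition a Mel spectrogram, classify with a CNN that uses spatial-domain attention) and does not give the exact architecture. The Python sketch below is an illustrative assumption, not the authors' implementation: it uses torchaudio for the log-Mel spectrogram, splits it into fixed-length time segments, and applies a CBAM-style spatial-attention block inside a small CNN genre classifier; all layer sizes, the segment length, and the attention design are hypothetical choices.

# Illustrative sketch only: Mel-spectrogram extraction, time-axis partitioning,
# and a CNN with a spatial-attention block. Layer sizes, segment length, and the
# CBAM-style attention design are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torchaudio

class SpatialAttention(nn.Module):
    """Spatial-domain attention: re-weights each time-frequency location."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)   # channel-wise average map
        max_map = x.amax(dim=1, keepdim=True)   # channel-wise maximum map
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                          # emphasise genre-relevant regions

class GenreCNN(nn.Module):
    """Small CNN classifier over log-Mel spectrogram segments."""
    def __init__(self, n_genres: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            SpatialAttention(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_genres)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def mel_segments(waveform, sample_rate=22050, n_mels=128, frames_per_segment=128):
    """Convert a waveform to a log-Mel spectrogram and split it along time."""
    mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=n_mels)(waveform)
    log_mel = torchaudio.transforms.AmplitudeToDB()(mel)                    # (1, n_mels, T)
    segments = log_mel.unfold(-1, frames_per_segment, frames_per_segment)   # non-overlapping chunks
    return segments.permute(2, 0, 1, 3)                                     # (n_segments, 1, n_mels, frames)

if __name__ == "__main__":
    wav = torch.randn(1, 22050 * 30)     # stand-in for a 30-second music clip
    batch = mel_segments(wav)
    logits = GenreCNN()(batch)
    print(logits.shape)                  # (n_segments, n_genres)

In this sketch each segment is classified independently; segment-level predictions could then be averaged to label the whole clip, which is one common way such partitioned spectrograms are used.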
Pages: 9