A GAN Model With Self-attention Mechanism To Generate Multi-instruments Symbolic Music

Cited by: 0
Authors
Guan, Faqian [1 ]
Yu, Chunyan [1 ]
Yang, Suqiong [1 ]
Affiliations
[1] Fuzhou Univ, Coll Math & Comp Sci, Fuzhou, Peoples R China
Source
2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2019
Keywords
symbolic music generation; Generative Adversarial Networks; multi-instruments; switchable normalization; self-attention mechanism;
DOI
10.1109/ijcnn.2019.8852291
CLC Number
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
GANs have recently been shown to be capable of generating symbolic music in the form of piano-rolls. However, existing GAN-based multi-track music generation methods tend to be unstable, and, owing to weaknesses in temporal feature extraction, the generated multi-track music does not sound natural enough. We therefore propose DMB-GAN, a new GAN model with a self-attention mechanism that extracts richer temporal features of music and generates multi-instrument music stably. First, to generate more consistent and natural single-track music, we introduce a self-attention mechanism that lets the GAN-based music generation model extract not only spatial features but also temporal features. Second, to generate multi-instrument music with harmonic structure across all tracks, we construct a dual generative adversarial architecture with multiple branches, one branch per track. Finally, to improve the quality of the generated multi-instrument symbolic music, we introduce switchable normalization to stabilize network training. Experimental results show that DMB-GAN stably generates coherent, natural multi-instrument music of good quality.
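The abstract describes adding a self-attention mechanism to a piano-roll GAN so that the model captures temporal as well as spatial structure. The record contains no code, so the following is only a minimal, hypothetical PyTorch sketch of a SAGAN-style 2D self-attention block of the kind that could be inserted between convolutional layers of a per-track generator branch; the class name SelfAttention2d, the channel-reduction factor of 8, and the piano-roll tensor dimensions in the usage example are assumptions, not details taken from the paper.

# Minimal sketch (not the authors' code): SAGAN-style self-attention over the
# time x pitch plane of a piano-roll feature map. All names are hypothetical.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces;
        # the //8 reduction follows common SAGAN practice (an assumption here).
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape                        # h = time steps, w = pitches
        q = self.query(x).view(b, -1, h * w)        # B x C' x N
        k = self.key(x).view(b, -1, h * w)          # B x C' x N
        v = self.value(x).view(b, c, h * w)         # B x C  x N
        attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # B x N x N
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                 # residual connection

# Usage sketch: assumed piano-roll feature map of 96 time steps x 84 pitches.
if __name__ == "__main__":
    feat = torch.randn(2, 64, 96, 84)               # (batch, channels, time, pitch)
    print(SelfAttention2d(64)(feat).shape)          # torch.Size([2, 64, 96, 84])

Because every position attends to every other time-pitch position, such a block can relate events that are far apart in time, which is the kind of temporal dependency the abstract says plain convolutional GAN generators miss.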
Pages: 6