Real-Time Emotion-Based Piano Music Generation Using Generative Adversarial Network (GAN)

Cited by: 2
Authors
Zheng, Lijun [1]
Li, Chenglong [2]
Affiliations
[1] Ewha Womans Univ, Sch Mus, Seoul 03760, South Korea
[2] Qiannan Normal Coll Nationalities, Conservatory Mus & Dance, Duyun 558000, Guizhou, Peoples R China
Keywords
Generative adversarial networks; Learning automata; Deep learning; Music; Instruments; Complexity theory; Computational modeling; Reinforcement learning; Real-time music generation; generative adversarial network; self-attention mechanism; reinforcement learning; learning automata; emotion-based music
DOI
10.1109/ACCESS.2024.3414673
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Automatic creation of real-time, emotion-based piano music remains a challenge for deep learning models. While Generative Adversarial Networks (GANs) have shown promise, existing methods can struggle to generate musically coherent pieces and often require complex manual configuration. This paper proposes a novel model, the Learning Automata-based Self-Attention Generative Adversarial Network (LA-SAGAN), to address these limitations. The proposed model combines a GAN with a Self-Attention (SA) mechanism. The benefits of using SA modules in the GAN architecture are twofold. First, the SA mechanism yields music pieces with a homogeneous structure, meaning that long-distance dependencies in the generated outputs are taken into account. Second, the SA mechanism exploits the emotional features of the input when producing output pieces, so the generated music follows the desired genre or theme. To control the complexity of the proposed model and optimize its structure, a set of Learning Automata (LA) determines the activity state of each SA module: an iterative algorithm based on the cooperation of the LAs optimizes the model by deactivating unnecessary SA modules. The efficiency of the proposed model in generating piano music has been evaluated. Evaluations demonstrate LA-SAGAN's effectiveness: at least a 14.47% improvement in entropy (diversity), together with improvements in precision (at least 2.47%) and recall (at least 2.13%). Moreover, human evaluation confirms superior musical coherence and adherence to emotional cues.
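The structure-optimization idea in the abstract, a learning automaton deciding whether each SA module stays active, can be sketched as follows. This is a minimal illustrative sketch only: the abstract does not specify the paper's exact reinforcement scheme, so a standard two-action linear reward-inaction (L_RI) update is assumed, and the class name and learning rate are hypothetical.

```python
import random

class ModuleGateAutomaton:
    """Hypothetical two-action learning automaton gating one SA module.

    Actions: keep the module active or deactivate it. Assumes a linear
    reward-inaction (L_RI) update: the probability of the chosen action
    is reinforced only when the environment signals a reward (e.g. the
    GAN's quality metric did not degrade after the choice).
    """

    def __init__(self, lr=0.1):
        self.p_active = 0.5   # probability of choosing "active"
        self.lr = lr          # reward step size (assumed value)
        self.last = True      # last action taken

    def choose(self):
        """Sample an action: True = keep module active."""
        self.last = random.random() < self.p_active
        return self.last

    def reward(self):
        """L_RI update: move probability toward the rewarded action."""
        if self.last:
            self.p_active += self.lr * (1.0 - self.p_active)
        else:
            self.p_active -= self.lr * self.p_active
```

In an iterative loop over all SA modules, each automaton would sample an activity state, the model would be evaluated, and only automata whose choices did not hurt quality would receive `reward()`, gradually driving unnecessary modules toward deactivation.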
Pages: 87489-87500
Page count: 12