Toward Leveraging Pre-Trained Self-Supervised Frontends for Automatic Singing Voice Understanding Tasks: Three Case Studies

Cited by: 1
Authors
Yamamoto, Yuya [1 ]
Affiliations
[1] Univ Tsukuba, Tsukuba, Ibaraki, Japan
Source
2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) | 2023
Keywords
MUSIC;
DOI
10.1109/APSIPAASC58517.2023.10317286
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Automatic singing voice understanding tasks, such as singer identification, singing voice transcription, and singing technique classification, benefit from data-driven approaches that utilize deep learning techniques. Owing to their representation ability, these approaches work well even under the rich diversity of vocal and noisy samples. However, the limited availability of labeled data remains a significant obstacle to achieving satisfactory performance. In recent years, self-supervised learning (SSL) models have been trained on large amounts of unlabeled data in the fields of speech processing and music classification. By fine-tuning these models for the target tasks, performance comparable to conventional supervised learning can be achieved with limited training data. In this paper, we therefore investigate the effectiveness of SSL models for various singing voice understanding tasks. As an initial exploration, we report experiments comparing SSL models on three different tasks (i.e., singer identification, singing voice transcription, and singing technique classification) and discuss the findings. Experimental results show that each SSL model achieves performance comparable to, and sometimes better than, state-of-the-art methods on each task. We also conducted a layer-wise analysis to further understand the behavior of the SSL models.
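To illustrate the kind of layer-wise analysis the abstract mentions, below is a minimal sketch of the softmax-weighted layer pooling commonly used when probing SSL frontends such as wav2vec 2.0, HuBERT, or WavLM. The function name, shapes, and the random features are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def pool_ssl_layers(hidden_states, layer_logits):
    """Softmax-weight the per-layer hidden states and sum over layers.

    hidden_states: (num_layers, num_frames, dim) outputs of a frozen
                   SSL frontend (random here, for demonstration only).
    layer_logits:  (num_layers,) learnable scalars, one per layer.
    Returns a (num_frames, dim) feature that a small task head consumes.
    """
    w = np.exp(layer_logits - layer_logits.max())
    w /= w.sum()                                   # softmax over layers
    return np.tensordot(w, hidden_states, axes=1)  # weighted sum of layers

# Toy usage: 13 layers (CNN output + 12 transformer blocks), 50 frames, 768 dims.
rng = np.random.default_rng(0)
feats = rng.standard_normal((13, 50, 768))
pooled = pool_ssl_layers(feats, np.zeros(13))      # zero logits = uniform mean
```

Inspecting the learned `layer_logits` after fine-tuning reveals which layers each singing voice task relies on, which is one common way to perform the layer-wise comparison described above.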
Pages: 1745-1752
Page count: 8
Cited References
59 records
[1] [Anonymous], 2015, ISMIR
[2] [Anonymous], 2007, 8 INT C MUS INF RETR
[3] Baevski A, 2020, ADV NEUR IN, V33
[4] Basak S, Agarwal S, Ganapathy S, Takahashi N. End-to-End Lyrics Recognition with Voice to Singing Style Transfer. ICASSP 2021, 2021: 266-270.
[5] Castellon R., 2021, 22 INT SOC MUS INF R
[6] Chang HJ, Yang SW, Lee HY. DistilHuBERT: Speech Representation Learning by Layer-Wise Distillation of Hidden-Unit BERT. ICASSP 2022, 2022: 7087-7091.
[7] Chen S, Wang C, Chen Z, Wu Y, Liu S, Chen Z, Li J, Kanda N, Yoshioka T, Xiao X, Wu J, Zhou L, Ren S, Qian Y, Qian Y, Zeng M, Yu X, Wei F. WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing. IEEE Journal of Selected Topics in Signal Processing, 2022, 16(6): 1505-1518.
[8] Chen Z, Chen S, Wu Y, Qian Y, Wang C, Liu S, Qian Y, Zeng M. Large-Scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification. ICASSP 2022, 2022: 6147-6151.
[9] Cho K, 2014, arXiv, DOI arXiv:1409.1259
[10] Choi HS, 2021, ADV NEUR IN, V34