Attention-based neural network for end-to-end music separation

Cited by: 6
Authors
Wang, Jing [1 ,5 ]
Liu, Hanyue [1 ]
Ying, Haorong [1 ]
Qiu, Chuhan [2 ]
Li, Jingxin [3 ]
Anwar, Muhammad Shahid [4 ,6 ]
Affiliations
[1] Beijing Inst Technol, Beijing, Peoples R China
[2] Commun Univ China, Beijing, Peoples R China
[3] China Elect Standardizat Inst, Beijing, Peoples R China
[4] Gachon Univ, Seongnam, South Korea
[5] Beijing Inst Technol, Beijing 100081, Peoples R China
[6] Gachon Univ, Seongnam 13120, South Korea
Keywords
channel attention; densely connected network; end-to-end music separation;
D O I
10.1049/cit2.12163
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
End-to-end separation algorithms that perform well in speech separation have not been used effectively in music separation. Moreover, since music signals are typically dual-channel data with a high sampling rate, modelling long sequences and making rational use of the correlation between channels are also urgent problems to be solved. To address these problems, the performance of an end-to-end music separation algorithm is enhanced by improving the network structure. Our main contributions are as follows: (1) A more reasonable densely connected U-Net is designed to capture long-term characteristics of music, such as the main melody and tone. (2) On this basis, multi-head attention and a dual-path transformer are introduced in the separation module. Channel attention units are applied recursively to the feature map of each layer of the network, enabling the network to perform long-sequence separation. Experimental results show that introducing channel attention yields a stable improvement over the baseline system. On the MUSDB18 dataset, the average score of the separated audio exceeds that of the current best-performing music separation algorithm based on the time-frequency domain (T-F domain).
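The channel attention units mentioned in the abstract can be illustrated with a minimal squeeze-and-excitation-style sketch: pool each channel over time, pass the pooled vector through a bottleneck, and gate the channels with the resulting weights. This is a generic illustration of the technique, not the paper's exact design; the shapes, weight layout, and function name are assumptions.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Hedged sketch of a channel attention unit (squeeze-and-excitation
    style). All shapes and weights here are illustrative assumptions.
    x  : (channels, time) feature map
    w1 : (channels, reduced) bottleneck-down weights
    w2 : (reduced, channels) bottleneck-up weights
    """
    # Squeeze: global average pool each channel over the time axis
    s = x.mean(axis=1)                      # (channels,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gate per channel
    h = np.maximum(s @ w1, 0.0)             # (reduced,)
    g = 1.0 / (1.0 + np.exp(-(h @ w2)))     # (channels,)
    # Rescale each channel of the feature map by its learned gate
    return x * g[:, None]
```

In the paper this kind of unit is applied recursively to the feature map of each network layer; the sketch shows only a single application to one dual-channel-style feature map.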
Pages: 355-363
Number of pages: 9
Related papers
28 records in total
  • [1] Andreas J., 2017, P 18 INT SOC MUS INF, P23, DOI 10.5281/ZENODO.1414934
  • [2] Defossez A., 2019, Music source separation in the waveform domain
  • [3] Defossez Alexandre, 2022, arXiv
  • [4] Gregor K, 2010, P 27 INT C INT C MAC, P399
  • [5] Hennequin R., 2020, J OPEN SOURCE SOFTW, V5, P2154, DOI 10.21105/joss.02154
  • [6] Hershey JR, 2016, INT CONF ACOUST SPEE, P31, DOI 10.1109/ICASSP.2016.7471631
  • [7] Huang PS, 2012, INT CONF ACOUST SPEE, P57, DOI 10.1109/ICASSP.2012.6287816
  • [8] Kalakota R., 2000, E-Business 2.0: Roadmap for Success
  • [9] Lam CKM, Tan BCY, 2001, The Internet is changing the music industry, COMMUNICATIONS OF THE ACM, V44, P62-68
  • [10] Luo Y, 2020, INT CONF ACOUST SPEE, P46, DOI 10.1109/ICASSP40776.2020.9054266