Sparse pursuit and dictionary learning for blind source separation in polyphonic music recordings

Cited: 0
Authors
Sören Schulze
Emily J. King
Affiliations
[1] AG Computational Data Analysis, Faculty 3, University of Bremen
[2] Mathematics Department, Colorado State University
Source
EURASIP Journal on Audio, Speech, and Music Processing, Volume 2021
Keywords
Blind source separation; Unsupervised learning; Dictionary learning; Pitch-invariance; Pattern matching; Sparsity; Stochastic optimization; Adam; Orthogonal matching pursuit;
DOI
Not available
Abstract
We propose an algorithm for the blind separation of single-channel audio signals. It is based on a parametric model that describes the spectral properties of the sounds of musical instruments independently of pitch. We develop a novel sparse pursuit algorithm that matches the discrete frequency spectra from the recorded signal against the continuous spectra delivered by the model. We first use this algorithm to convert an STFT spectrogram of the recording into a novel form of log-frequency spectrogram whose resolution exceeds that of the mel spectrogram. We then exploit the pitch-invariant properties of that representation to identify the sounds of the instruments via the same sparse pursuit method. As the model parameters that characterize the musical instruments are not known beforehand, we train a dictionary containing them using a modified version of Adam. Applying the algorithm to various audio samples, we find that it produces high-quality separation results when the model assumptions are satisfied and the instruments are clearly distinguishable, but combinations of instruments with similar spectral characteristics pose a conceptual difficulty. While a key feature of the model is that it explicitly accounts for inharmonicity, its presence can nevertheless impede the performance of the sparse pursuit algorithm. In general, due to its pitch-invariance, our method is especially suitable for spectra from acoustic instruments and requires only a minimal number of hyperparameters to be preset. Additionally, we demonstrate that a dictionary constructed for one recording can be applied to a different recording with similar instruments without additional training.
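The pipeline described in the abstract combines a pitch-invariant log-frequency representation with a greedy sparse pursuit over a learned dictionary. The sketch below is not the authors' method, which fits a continuous parametric model with inharmonicity and trains the dictionary with a modified Adam; it is only a minimal, hypothetical matching-pursuit analogue in Python/NumPy that illustrates the key property being exploited: on a log-frequency axis, transposing a note shifts its spectrum without changing its shape, so one template per instrument can be matched at any pitch. All names here (sparse_pursuit, log_spectrum, atoms, n_components) are illustrative assumptions, not identifiers from the paper.

import numpy as np

def sparse_pursuit(log_spectrum, atoms, n_components=5):
    """Greedily explain a log-frequency magnitude spectrum as shifted dictionary atoms.

    log_spectrum : 1-D array, magnitude spectrum on a logarithmic frequency grid
    atoms        : 2-D array (n_atoms, atom_len), pitch-invariant spectral templates
    Returns a list of (atom_index, shift_in_bins, amplitude) triples.
    """
    residual = np.asarray(log_spectrum, dtype=float).copy()
    selection = []
    for _ in range(n_components):
        best = None
        for k, atom in enumerate(atoms):
            # Cross-correlate the template with the residual; on a log-frequency
            # axis, the shift of the best match encodes the pitch transposition.
            corr = np.correlate(residual, atom, mode="valid")
            shift = int(np.argmax(corr))
            if best is None or corr[shift] > best[0]:
                best = (corr[shift], k, shift)
        score, k, shift = best
        amplitude = score / (np.dot(atoms[k], atoms[k]) + 1e-12)
        if amplitude <= 0:  # nothing left that the dictionary can explain
            break
        residual[shift:shift + len(atoms[k])] -= amplitude * atoms[k]
        selection.append((k, shift, amplitude))
    return selection

# Toy usage: with B bins per octave, the h-th harmonic always sits B*log2(h) bins
# above the fundamental, so a single template covers every pitch of the instrument.
B = 48
atom = np.zeros(int(B * np.log2(8)) + 1)
for h in range(1, 9):
    atom[int(round(B * np.log2(h)))] = 1.0 / h
spectrum = np.zeros(400)
spectrum[100:100 + len(atom)] += 0.8 * atom  # one note, transposed by 100 bins
print(sparse_pursuit(spectrum, np.array([atom]), n_components=1))  # -> [(0, 100, ~0.8)]

In the paper itself, the amplitudes and pitches are continuous parameters fitted to the parametric model rather than discrete correlation maxima, and the templates are learned from the recording rather than hand-built as in this toy example.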