Unsupervised Audio Source Separation using Generative Priors

Cited by: 10
Authors:
Narayanaswamy, Vivek [1 ]
Thiagarajan, Jayaraman J. [2 ]
Anirudh, Rushil [2 ]
Spanias, Andreas [1 ]
Affiliations:
[1] Arizona State Univ, SenSIP Ctr, Sch ECEE, Tempe, AZ 85281 USA
[2] Lawrence Livermore Natl Lab, 7000 East Ave, Livermore, CA 94550 USA
Source:
INTERSPEECH 2020, 2020
Keywords:
audio source separation; unsupervised learning; generative priors; projected gradient descent;
DOI:
10.21437/Interspeech.2020-3115
CLC classification:
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline codes:
100104; 100213;
Abstract:
State-of-the-art under-determined audio source separation systems rely on supervised end-to-end training of carefully tailored neural network architectures operating either in the time or the spectral domain. However, these methods are severely challenged by the need for expensive source-level labeled data and by their specificity to a given set of sources and a given mixing process, which demands complete re-training when those assumptions change. This strongly emphasizes the need for unsupervised methods that can leverage recent advances in data-driven modeling and compensate for the lack of labeled data through meaningful priors. To this end, we propose a novel approach to audio source separation based on generative priors trained on individual sources. Using projected gradient descent optimization, our approach simultaneously searches the source-specific latent spaces to effectively recover the constituent sources. Though the generative priors can be defined directly in the time domain, e.g. WaveGAN, we find that using spectral-domain loss functions in our optimization leads to good-quality source estimates. Our empirical studies on standard spoken digit and instrument datasets clearly demonstrate the effectiveness of our approach over classical as well as state-of-the-art unsupervised baselines.
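The latent-space search described in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the "generators" are frozen random linear decoders standing in for pretrained deep priors such as WaveGAN, the Euclidean-ball projection is an assumed stand-in for constraining the latent search, and a time-domain l2 loss is used for simplicity (the paper reports that spectral-domain losses work better).

```python
import numpy as np

rng = np.random.default_rng(0)
n_latent, n_samples = 8, 256

# Frozen toy "generators": random linear decoders standing in for
# pretrained per-source priors (the paper uses deep generators like WaveGAN).
G1 = rng.standard_normal((n_samples, n_latent))
G2 = rng.standard_normal((n_samples, n_latent))

# Synthesize a two-source mixture from known ground-truth latents.
z1_true = rng.standard_normal(n_latent)
z2_true = rng.standard_normal(n_latent)
mixture = G1 @ z1_true + G2 @ z2_true

def loss(z1, z2):
    """Time-domain l2 reconstruction loss (the paper favors spectral losses)."""
    r = G1 @ z1 + G2 @ z2 - mixture
    return float(r @ r)

def project(z, radius):
    """Project a latent code back onto a Euclidean ball -- a simple stand-in
    for keeping the search inside the prior's typical set."""
    n = np.linalg.norm(z)
    return z * (radius / n) if n > radius else z

# Projected gradient descent, searching both latent spaces simultaneously.
z1 = rng.standard_normal(n_latent)
z2 = rng.standard_normal(n_latent)
radius = 2.0 * np.sqrt(n_latent)  # assumed latent-ball radius
lr = 1e-3

init = loss(z1, z2)
for _ in range(1500):
    r = G1 @ z1 + G2 @ z2 - mixture                    # shared residual
    z1 = project(z1 - lr * 2.0 * (G1.T @ r), radius)   # gradient step + projection
    z2 = project(z2 - lr * 2.0 * (G2.T @ r), radius)
final = loss(z1, z2)

# The recovered constituent sources are the generator outputs at the
# optimized latent codes.
source1_est, source2_est = G1 @ z1, G2 @ z2
```

In this convex toy the joint descent drives the mixture residual essentially to zero; with deep generators the same loop is non-convex, which is why the projection and the choice of loss domain matter in the paper's setting.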
Pages: 2657-2661
Page count: 5
Related papers (50 total):
  • [11] DETERMINED AUDIO SOURCE SEPARATION WITH MULTICHANNEL STAR GENERATIVE ADVERSARIAL NETWORK
    Li, Li
    Kameoka, Hirokazu
    Makino, Shoji
    PROCEEDINGS OF THE 2020 IEEE 30TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2020,
  • [12] Unsupervised Portrait Shadow Removal via Generative Priors
    He, Yingqing
    Xing, Yazhou
    Zhang, Tianjia
    Chen, Qifeng
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 236 - 244
  • [13] Musical source separation using time-frequency source priors
    Vincent, E
    IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2006, 14(1): 91-98
  • [14] Using beamforming in the audio source separation problem
    Mitianoudis, N
    Davies, ME
    SEVENTH INTERNATIONAL SYMPOSIUM ON SIGNAL PROCESSING AND ITS APPLICATIONS, VOL 2, PROCEEDINGS, 2003, : 89 - 92
  • [15] Underdetermined source separation with structured source priors
    Vincent, E
    Rodet, X
    INDEPENDENT COMPONENT ANALYSIS AND BLIND SIGNAL SEPARATION, 2004, 3195 : 327 - 334
  • [16] Audio source separation
    Davies, M
    MATHEMATICS IN SIGNAL PROCESSING V, 2002, (71): 57-68
  • [17] Monaural Audio Source Separation using Variational Autoencoders
    Pandey, Laxmi
    Kumar, Anurendra
    Namboodiri, Vinay
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 3489 - 3493
  • [18] AUDIO SOURCE SEPARATION USING MULTIPLE DEFORMED REFERENCES
    Souviraa-Labastie, Nathan
    Olivero, Anaik
    Vincent, Emmanuel
    Bimbot, Frederic
    2014 PROCEEDINGS OF THE 22ND EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2014, : 311 - 315
  • [19] Unsupervised Music Source Separation Using Differentiable Parametric Source Models
    Schulze-Forster, Kilian
    Richard, Gael
    Kelley, Liam
    Doire, Clement S. J.
    Badeau, Roland
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 1276 - 1289
  • [20] Reverberant Source Separation Using NTF With Delayed Subsources and Spatial Priors
    Fras, Mieszko
    Kowalczyk, Konrad
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 1954 - 1967