Coarse-to-fine Optimization for Speech Enhancement

Cited by: 4
Authors
Yao, Jian [1 ]
Al-Dahle, Ahmad [1 ]
Affiliations
[1] Apple Inc, Cupertino, CA 95014 USA
Source
INTERSPEECH 2019 | 2019
Keywords
speech enhancement; coarse-to-fine; deep learning; generative model; discriminative model; dynamic perceptual loss;
DOI
10.21437/Interspeech.2019-2792
CLC classification
R36 [Pathology]; R76 [Otorhinolaryngology]
Discipline codes
100104; 100213
Abstract
In this paper, we propose a coarse-to-fine optimization strategy for speech enhancement. Cosine similarity loss [1] has proven to be an effective metric for measuring the similarity of speech signals. However, because enhanced signals that share the same cosine similarity loss can still vary widely in high-dimensional space, a deep neural network trained with this loss alone may not produce enhanced speech of good quality. Our coarse-to-fine strategy optimizes the cosine similarity loss at multiple granularities, adding constraints on the prediction from high dimension down to relatively low dimension, so that the enhanced speech more closely resembles the clean speech. Experimental results show the effectiveness of the proposed coarse-to-fine optimization for both discriminative and generative models. Moreover, we apply the coarse-to-fine strategy to the adversarial loss of a generative adversarial network (GAN) and propose a dynamic perceptual loss, which computes the adversarial loss dynamically from coarse to fine resolution. Dynamic perceptual loss further improves accuracy and achieves state-of-the-art results compared with other generative models.
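To make the multi-granularity idea concrete, the sketch below shows one possible way to compute a coarse-to-fine cosine similarity loss in PyTorch: a single cosine term over the whole utterance (coarse) plus cosine terms over progressively shorter segments (fine). The segment sizes, equal weighting, and function names are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a coarse-to-fine cosine similarity loss.
# Assumes waveforms as PyTorch tensors of shape (batch, samples).
import torch
import torch.nn.functional as F


def cosine_loss(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """1 - cosine similarity between estimate and reference, averaged over the batch."""
    sim = F.cosine_similarity(est, ref, dim=-1, eps=eps)
    return (1.0 - sim).mean()


def coarse_to_fine_cosine_loss(est, ref, segment_sizes=(4096, 1024, 256)):
    """Cosine loss at several granularities.

    The full-utterance term is the coarsest constraint; splitting the signal
    into shorter and shorter segments adds finer-grained constraints.
    Segment sizes here are placeholder choices.
    """
    total = cosine_loss(est, ref)  # coarsest: whole utterance
    for seg in segment_sizes:
        n = (est.shape[-1] // seg) * seg  # drop the ragged tail
        if n == 0:
            continue
        est_seg = est[..., :n].reshape(est.shape[0], -1, seg)
        ref_seg = ref[..., :n].reshape(ref.shape[0], -1, seg)
        sim = F.cosine_similarity(est_seg, ref_seg, dim=-1)  # per-segment similarity
        total = total + (1.0 - sim).mean()
    return total


# Usage (hypothetical tensors): loss = coarse_to_fine_cosine_loss(enhanced_wave, clean_wave)
```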
Pages: 2743-2747
Page count: 5
References (37 in total)
  • [1] Allen, J. B. Short-term spectral analysis, synthesis, and modification by discrete Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1977, 25(3): 235-238.
  • [2] [Anonymous], 2018, International Conference on Acoustics, Speech and Signal Processing.
  • [3] [Anonymous], 2007, P.862.2 wideband extension recommendation.
  • [4] [Anonymous], data object, DOI 10.7488/DS/1356.
  • [5] Berouti, M., 1979, ICASSP '79: IEEE International Conference on Acoustics, Speech and Signal Processing, p. 208.
  • [6] Choi, Hyeong-Seok, 2019, Proc. ICLR.
  • [7] Courville, A., 2016, European Conference on Computer Vision (ECCV).
  • [8] Dumoulin, V., 2015, A guide to convolution arithmetic.
  • [9] Fleet, D. J., 2005, Optical flow estimation.
  • [10] Fu, Szu-Wei; Wang, Tao-Wei; Tsao, Yu; Lu, Xugang; Kawai, Hisashi. End-to-end waveform utterance enhancement for direct evaluation metrics optimization by fully convolutional neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018, 26(9): 1570-1584.