Parametric modelling for single-channel blind dereverberation of speech from a moving speaker

Cited by: 5
Authors
Evers, C. [1]
Hopgood, J. R. [1]
Affiliations
[1] Univ Edinburgh, Inst Digital Commun, Sch Engn & Elect, Edinburgh EH9 3JL, Midlothian, Scotland
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
DOI
10.1049/iet-spr:20070046
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject classification codes
0808; 0809
Abstract
Single-channel blind dereverberation for the enhancement of speech acquired in acoustic environments is essential in applications where microphone arrays prove impractical. In many scenarios the source-sensor geometry does not vary rapidly, but in most applications it is subject to change, for example when a user wishes to move around a room. A previous model-based approach to blind dereverberation, in which the channel is represented as a linear time-varying all-pole filter, is extended: the filter parameters are modelled as a linear combination of known basis functions with unknown weightings. Moreover, an improved block-based time-varying autoregressive model is proposed for the speech signal, which aims to reflect the underlying signal statistics more accurately at both a local and a global level. Given these parametric models, their coefficients are estimated using Bayesian inference, so that the channel estimate can then be used for dereverberation. An in-depth discussion is also presented of the applicability of these models to real speech and a real acoustic environment. Results are presented to demonstrate the performance of the Bayesian inference algorithms.
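As a rough illustration of the channel parameterisation described in the abstract (not the authors' implementation), the sketch below simulates a linear time-varying all-pole filter whose AR coefficients are a linear combination of known basis functions with unknown weightings. The polynomial basis, the model orders P and Q, and all variable names are assumptions made for this example; the paper's Bayesian estimation of the weightings is not shown.

```python
# Minimal sketch, assuming a polynomial basis and small model orders.
# It generates a time-varying all-pole channel a_k(n) = sum_q B[k, q] * F[n, q]
# and filters a white excitation through it, as an illustration only.
import numpy as np

rng = np.random.default_rng(0)

N = 2000          # number of samples
P = 2             # all-pole (AR) channel order (assumed for the example)
Q = 3             # number of basis functions per coefficient (assumed)

# Known basis functions of time, here simple polynomials on [0, 1].
t = np.linspace(0.0, 1.0, N)
F = np.stack([t**q for q in range(Q)], axis=1)   # shape (N, Q)

# Unknown weightings; drawn at random here, estimated by Bayesian
# inference in the paper.
B = 0.2 * rng.standard_normal((P, Q))            # shape (P, Q)

# Time-varying AR coefficients of the channel.
A = F @ B.T                                      # shape (N, P)

# Drive the time-varying all-pole filter with white excitation
# (standing in for the source signal) to obtain a "reverberant" output.
e = rng.standard_normal(N)
x = np.zeros(N)
for n in range(N):
    past = sum(A[n, k] * x[n - 1 - k] for k in range(P) if n - 1 - k >= 0)
    x[n] = e[n] - past

print("AR coefficients at n=0:", A[0], "and n=N-1:", A[-1])
```

The key point the sketch tries to convey is that only the Q weightings per coefficient (the matrix B) are unknown, so the slowly changing source-sensor geometry is captured with far fewer parameters than an unconstrained time-varying filter would need.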
Pages: 59-74
Number of pages: 16
Related papers
50 records in total
  • [1] Block-based TVAR models for single-channel blind dereverberation of speech from a moving speaker
    Hopgood, James R.
    Evers, Christine
    2007 IEEE/SP 14TH WORKSHOP ON STATISTICAL SIGNAL PROCESSING, VOLS 1 AND 2, 2007, : 274 - 278
  • [2] Harmonicity-based blind dereverberation for single-channel speech signals
    Nakatani, Tomohiro
    Kinoshita, Keisuke
    Miyoshi, Masato
    IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2007, 15 (01): 80 - 95
  • [3] Single-Channel Speech Dereverberation in Acoustical Environments
    Joorabchi, Marjan
    Ghorshi, Seyed
    Sarafnia, Ali
    2014 56TH INTERNATIONAL SYMPOSIUM ELMAR (ELMAR), 2014, : 211 - 214
  • [4] A Novel Scheme for Single-Channel Speech Dereverberation
    Kilis, Nikolaos
    Mitianoudis, Nikolaos
    ACOUSTICS, 2019, 1 (03): 711 - 725
  • [5] Blind dereverberation of single-channel speech signals using an ICA-based generative model
    Lee, JH
    Oh, SH
    Lee, SY
    NEURAL INFORMATION PROCESSING, 2004, 3316 : 1070 - 1075
  • [6] BUDDY: SINGLE-CHANNEL BLIND UNSUPERVISED DEREVERBERATION WITH DIFFUSION MODELS
    Moliner, Eloi
    Lemercier, Jean-Marie
    Welker, Simon
    Gerkmann, Timo
    Valimaki, Vesa
    2024 18TH INTERNATIONAL WORKSHOP ON ACOUSTIC SIGNAL ENHANCEMENT, IWAENC 2024, 2024, : 120 - 124
  • [7] Single-channel Speech Dereverberation via Generative Adversarial Training
    Li, Chenxing
    Wang, Tieqiang
    Xu, Shuang
    Xu, Bo
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 1309 - 1313
  • [8] SUBJECTIVE SPEECH QUALITY AND SPEECH INTELLIGIBILITY EVALUATION OF SINGLE-CHANNEL DEREVERBERATION ALGORITHMS
    Warzybok, Anna
    Kodrasi, Ina
    Jungmann, Jan Ole
    Habets, Emanuel
    Gerkmann, Timo
    Mertins, Alfred
    Doclo, Simon
    Kollmeier, Birger
    Goetze, Stefan
    2014 14TH INTERNATIONAL WORKSHOP ON ACOUSTIC SIGNAL ENHANCEMENT (IWAENC), 2014, : 332 - 336
  • [9] JOINT SINGLE-CHANNEL SPEECH SEPARATION AND SPEAKER IDENTIFICATION
    Mowlaee, P.
    Saeidi, R.
    Tan, Z. -H.
    Christensen, M. G.
    Franti, P.
    Jensen, S. H.
    2010 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2010, : 4430 - 4433
  • [10] Single-channel dereverberation by feature mapping using cascade neural networks for robust distant speaker identification and speech recognition
    Nugraha, Aditya Arie
    Yamamoto, Kazumasa
    Nakagawa, Seiichi
    EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2014,