Correntropy-based linear prediction for voice inverse filtering

Cited by: 1
Authors
Zalazar, Ivan A. [1 ,2 ]
Alzamendi, Gabriel A. [1 ,2 ]
Zanartu, Matias [3 ]
Schlotthauer, Gaston [1 ,2 ]
Affiliations
[1] Institute Res & Dev Bioengineering & Bioinformat, CONICET UNER, Oro Verde, Entre Rios, Argentina
[2] Univ Nacl Entre Rios, Fac Engn, Oro Verde, Entre Rios, Argentina
[3] Univ Tecn Federico Santa Maria, Dept Elect Engn, Valparaiso, Chile
Keywords
Maximum correntropy linear prediction; Voice inverse filtering; Correntropy measure;
DOI
10.1117/12.2669810
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Voice inverse filtering analysis comprises different methods for the non-invasive estimation of the glottal airflow from a speech signal, thus bringing forth relevant information about the vocal function and acoustic excitation during voiced phonation. Most inverse filtering strategies consider a parametric source-filter model of phonation and variants of linear prediction to adjust the model coefficients. However, classical linear prediction is susceptible to impulse-like acoustic excitations produced by abrupt glottal closures. Robust alternatives have been proposed that apply a time-domain weighting function to de-emphasize the detrimental contribution of impulse-like glottal events. The present study introduces maximum correntropy criterion-based linear prediction for voice inverse filtering. This method takes advantage of correntropy, a non-linear localized similarity measure inherently insensitive to outliers, to implement a robust weighted linear prediction in which the weighting function is adjusted iteratively through a speech-data-guided optimization scheme. Simulations show that the proposed method naturally overweights samples in the glottal closed phase, where the phonation model is more accurate, without requiring any prior information about the closure instants. It is further shown that maximum correntropy criterion-based linear prediction improves inverse filtering analysis in terms of the smoothness of the estimated glottal waveforms and the spectral relevance of the vocal tract filter.
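The iteratively reweighted scheme the abstract describes can be sketched as follows. This is a hypothetical illustration under common assumptions for maximum-correntropy estimation (Gaussian-kernel weights on the prediction residuals, re-solved weighted normal equations), not the authors' implementation; the function name, parameter names, and the kernel bandwidth `sigma` are assumptions for the sketch.

```python
import numpy as np

def mcc_linear_prediction(x, order=10, sigma=0.1, n_iter=20):
    """Sketch of correntropy-weighted linear prediction.

    Gaussian-kernel weights w_n = exp(-e_n^2 / (2 sigma^2)) shrink toward
    zero for large residuals (e.g. impulse-like excitations near glottal
    closures), and the weighted normal equations are re-solved until the
    weights settle. The bandwidth `sigma` must be tuned to the residual
    scale of the signal at hand.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Regression matrix: predict x[n] from its previous `order` samples.
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    y = x[order:]
    w = np.ones(len(y))          # first pass is ordinary least squares
    a = np.zeros(order)
    for _ in range(n_iter):
        # Weighted least-squares solution of the normal equations.
        a = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        e = y - X @ a            # prediction residual
        w = np.exp(-e**2 / (2.0 * sigma**2))  # correntropy-style weights
    return a, w
```

On a synthetic autoregressive signal the recovered coefficients match the generating filter, while the returned weights expose which samples the criterion trusted.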
Pages: 10
Related papers (50 in total)
  • [21] Correntropy-based robust extreme learning machine for classification
    Ren, Zhuo
    Yang, Liming
    NEUROCOMPUTING, 2018, 313 : 74 - 84
  • [22] Correntropy-based metric for robust twin support vector machine
    Yuan, Chao
    Yang, Liming
    Sun, Ping
    INFORMATION SCIENCES, 2021, 545 : 82 - 101
  • [23] Correntropy-Based Constructive One Hidden Layer Neural Network
    Nayyeri, Mojtaba
    Rouhani, Modjtaba
    Yazdi, Hadi Sadoghi
    Makela, Marko M.
    Maskooki, Alaleh
    Nikulin, Yury
    ALGORITHMS, 2024, 17 (01)
  • [24] Correntropy-Based Sparse Spectral Clustering for Hyperspectral Band Selection
    Sun, Weiwei
    Peng, Jiangtao
    Yang, Gang
    Du, Qian
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2020, 17 (03) : 484 - 488
  • [25] Robust twin extreme learning machines with correntropy-based metric
    Yuan, Chao
    Yang, Liming
    KNOWLEDGE-BASED SYSTEMS, 2021, 214
  • [26] Mixture correntropy-based robust distance metric learning for classification
    Yuan, Chao
    Zhou, Changsheng
    Peng, Jigen
    Li, Haiyang
    KNOWLEDGE-BASED SYSTEMS, 2024, 295
  • [27] Correntropy-based Adaptive Learning to Support Video Surveillance Systems
    Alvarez-Meza, A. M.
    Molina-Giraldo, S.
    Castellanos-Dominguez, G.
    2014 22ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2014, : 2590 - 2595
  • [28] Maximum Correntropy-Based Extended Particle Filter for Nonlinear System
    Jin, Yongze
    Mu, Lingxia
    Feng, Nan
    Hei, Xinhong
    Li, Yankai
    Xie, Guo
    Ye, Xin
    Li, Jiajie
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2023, 70 (07) : 2520 - 2524
  • [29] Resilient distributed estimation against FDI attacks: A correntropy-based approach
    Xia, Wei
    Zhang, Yuhan
    INFORMATION SCIENCES, 2023, 635 : 236 - 256
  • [30] Robust twin support vector regression with correntropy-based metric
    Zhang, Min
    Zhao, Yifeng
    Yang, Liming
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (15) : 45443 - 45469