Gated Bilinear Networks for Vowel Formant Estimation

Times Cited: 0
Authors
Dai, Wang [1 ]
Hua, Zheng [1 ]
Zhang, Jinsong [1 ]
Xie, Yanlu [1 ]
Lin, Binghuai [2 ]
Affiliations
[1] Beijing Language & Culture Univ, Sch Informat Sci, Beijing, Peoples R China
[2] Tencent Technol Co Ltd, Smart Platform Prod Dept, Beijing, Peoples R China
Source
2020 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP 2020) | 2020
Keywords
vowel formant estimation; Bilinear Network; Temporal Attention-Augmented Bilinear Network; gate mechanism; TRACKING; FREQUENCIES; PREDICTION;
DOI
10.1109/ialp51396.2020.9310481
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Formant estimation from vowel segments is very useful for linguistic purposes. Traditionally, formants are estimated using classical signal processing methods and statistical models: the average of the formant frequencies continuously extracted along a vowel segment is taken as the formant of the vowel. Newer approaches use neural networks to predict vowel formants from an annotated database, where the input is acoustic features and the output is the mean formant frequency. Recently, the Bilinear Network (BL) and the Temporal Attention-Augmented Bilinear Network (TABL) have proven very effective on financial time-series analysis tasks compared to recurrent and convolutional networks. In this work, we explored how to extend the structure of BL, learning from TABL, to obtain better short-term modeling capability for vowel formant estimation. More specifically, we proposed to replace the attention mechanism with a sigmoid gate and to use a learnable parameter to dynamically integrate the output of the first linear transformation, thus learning a better representation within BL. Experiments on the vowel test set of the public VTR corpus showed that our approach significantly surpassed DNN, CNN, and BL, and achieved slightly better performance than the powerful TABL model in terms of mean absolute error and mean absolute percentage error on F1, F2, F3, and overall.
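The gating idea described in the abstract can be sketched as follows. This is a minimal numpy illustration of a bilinear layer whose temporal attention is replaced by a sigmoid gate mixed with the ungated branch via a learnable scalar; the parameter names (W1, Wg, W2, lam), the matrix shapes, and the exact placement of the gate are illustrative assumptions, not the authors' published equations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_bilinear_layer(X, W1, Wg, W2, B, lam):
    """Sketch of a gated bilinear layer (shapes are assumptions).

    X:  (D, T)   acoustic-feature matrix over T frames
    W1: (Dp, D)  first linear transform over the feature axis
    Wg: (T, T)   gate weights over the temporal axis
    W2: (T, Tp)  second linear transform over the temporal axis
    B:  (Dp, Tp) bias
    lam: scalar in [0, 1], learnable mixing parameter
    """
    Xb = W1 @ X                            # first linear transform: (Dp, T)
    G = sigmoid(Xb @ Wg)                   # sigmoid gate replacing softmax attention
    Xt = lam * Xb + (1.0 - lam) * Xb * G   # dynamic integration of the first output
    return Xt @ W2 + B                     # second (temporal) transform: (Dp, Tp)

# Toy forward pass with random weights, just to show the shapes.
rng = np.random.default_rng(0)
D, T, Dp, Tp = 12, 20, 6, 1
X  = rng.standard_normal((D, T))
W1 = rng.standard_normal((Dp, D)) * 0.1
Wg = rng.standard_normal((T, T)) * 0.1
W2 = rng.standard_normal((T, Tp)) * 0.1
B  = np.zeros((Dp, Tp))
Y = gated_bilinear_layer(X, W1, Wg, W2, B, lam=0.5)
print(Y.shape)  # (6, 1)
```

With lam = 1 the layer degenerates to a plain bilinear transform W1 @ X @ W2 + B, so the learnable parameter interpolates between the ungated BL output and the fully gated one.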
Pages: 205-209
Page count: 5