Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech

Cited by: 4
Authors
Yu, Chongchong [1 ]
Yu, Jiaqi [1 ]
Qian, Zhaopeng [1 ]
Tan, Yuchen [1 ]
Affiliations
[1] Beijing Technol & Business Univ, Sch Artificial Intelligence, Beijing 100048, Peoples R China
Keywords
audiovisual speech recognition; low-resource language; automatic speech recognition; lipreading; audiovisual fusion; recognition; language; adaptation; ASR
DOI
10.3390/s23042071
CLC Classification Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Endangered languages are generally low-resource and, as intangible cultural resources, cannot be renewed once lost. Automatic speech recognition (ASR) is an effective means of protecting such languages. However, for a low-resource language, native speakers are few and labeled corpora are insufficient, so ASR suffers from high speaker dependence and overfitting, which greatly harm recognition accuracy. To tackle these deficiencies, this paper proposes an audiovisual speech recognition (AVSR) approach based on an LSTM-Transformer architecture. The approach introduces visual modality information, including lip movements, to reduce the acoustic model's dependence on speakers and on the quantity of data. Specifically, by fusing audio and visual information, the approach enriches the representation of the speakers' feature space, thereby achieving a degree of speaker adaptation that is difficult to obtain from a single modality. The approach also includes speaker-dependence experiments that evaluate the extent to which audiovisual fusion depends on speakers. Experimental results show that the character error rate (CER) of AVSR is 16.9% lower than that of traditional models in the best-performing scenario, and 11.8% lower than that of lip reading alone. The accuracy of phoneme recognition, especially for finals, improves substantially. For initials, accuracy improves for affricates and fricatives, whose lip movements are visually salient, and deteriorates for stops, whose lip movements are not. AVSR also generalizes to different speakers better than a single modality does, reducing the CER by as much as 17.2%. AVSR is therefore of great significance for the protection and preservation of endangered languages through AI.
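The record does not include the paper's implementation, but the abstract's core idea, fusing an audio stream with a lip-movement stream through LSTM front-ends and a Transformer encoder, can be sketched as below. This is a minimal, hypothetical PyTorch sketch: the class name AVFusionASR, all dimensions, and the concatenation-based feature-level fusion are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of an LSTM-Transformer audiovisual fusion model.
# Module names, dimensions, and the fusion strategy are assumptions made
# for illustration; the paper's exact architecture is not in this record.
import torch
import torch.nn as nn

class AVFusionASR(nn.Module):
    def __init__(self, audio_dim=80, visual_dim=512, d_model=256,
                 nhead=4, num_layers=4, vocab_size=100):
        super().__init__()
        # Modality-specific bidirectional LSTM front-ends; each outputs
        # d_model features per frame (2 directions x d_model // 2).
        self.audio_lstm = nn.LSTM(audio_dim, d_model // 2,
                                  batch_first=True, bidirectional=True)
        self.visual_lstm = nn.LSTM(visual_dim, d_model // 2,
                                   batch_first=True, bidirectional=True)
        # Feature-level fusion: concatenate the two streams and project
        # back to the shared model dimension.
        self.fuse = nn.Linear(2 * d_model, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer,
                                             num_layers=num_layers)
        self.classifier = nn.Linear(d_model, vocab_size)

    def forward(self, audio, visual):
        # audio:  (batch, T, audio_dim),  e.g. filterbank frames
        # visual: (batch, T, visual_dim), e.g. lip-ROI embeddings,
        # assumed pre-aligned to the same frame rate as the audio.
        a, _ = self.audio_lstm(audio)
        v, _ = self.visual_lstm(visual)
        fused = self.fuse(torch.cat([a, v], dim=-1))
        enc = self.encoder(fused)
        return self.classifier(enc)  # per-frame logits

model = AVFusionASR()
logits = model(torch.randn(2, 50, 80), torch.randn(2, 50, 512))
print(logits.shape)  # torch.Size([2, 50, 100])
```

Under this framing, per-frame logits would feed a sequence criterion such as CTC, and the reported CER comparisons amount to running the same pipeline with the visual branch zeroed out or removed.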
Pages: 19