Silent Speech Interface Using Ultrasonic Doppler Sonar

Cited by: 4
Authors
Lee, Ki-Seung [1 ]
Affiliations
[1] Konkuk Univ, Dept Elect Engn, Seoul 143701, South Korea
Source
IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS | 2020, Vol. E103D, No. 8
Keywords
silent speech interface; ultrasonic Doppler; deep neural networks; RECOGNITION; SENSOR;
DOI
10.1587/transinf.2019EDP7211
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Certain non-acoustic modalities can reveal speech attributes that allow speech signals to be synthesized without any acoustic signal. This study validated the use of ultrasonic Doppler frequency shifts caused by facial movements to implement a silent speech interface system. A 40 kHz ultrasonic beam is directed at the speaker's mouth region, and features derived from the demodulated received signals are used to estimate speech parameters. A nonlinear regression approach was employed for this estimation, in which the relationship between the ultrasonic features and the corresponding speech is represented by deep neural networks (DNNs). We also investigated the discrepancies between the ultrasonic signals of audible and silent speech to assess the feasibility of totally silent communication. Since reference speech signals are not available for silently mouthed utterances, a nearest-neighbor search and alignment method was proposed, in which alignment is achieved by finding the optimal pair of ultrasonic and audible features under a minimum mean-square-error criterion. The experimental results showed that the ultrasonic Doppler-based method outperformed EMG-based speech estimation and was comparable to an image-based method.
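The nearest-neighbor alignment step mentioned in the abstract can be sketched as follows. This is a minimal illustration only, assuming frame-wise feature matrices; the function and variable names are hypothetical and the paper's actual feature extraction and DNN regression stages are not reproduced here.

```python
import numpy as np

def align_silent_to_audible(silent_feats, audible_feats, audible_speech):
    """For each silent-speech ultrasonic frame, find the audible-speech
    ultrasonic frame with the minimum mean-square error and return the
    speech parameters paired with that frame.

    silent_feats:   (N, D) ultrasonic features from silently mouthed speech
    audible_feats:  (M, D) ultrasonic features recorded during audible speech
    audible_speech: (M, P) speech parameters aligned with audible_feats
    """
    aligned = np.empty((len(silent_feats), audible_speech.shape[1]))
    for i, frame in enumerate(silent_feats):
        # Frame-wise MSE against every audible ultrasonic frame
        mse = np.mean((audible_feats - frame) ** 2, axis=1)
        # Adopt the speech parameters of the closest audible frame
        aligned[i] = audible_speech[np.argmin(mse)]
    return aligned
```

Such pseudo-alignment gives silent-speech frames surrogate speech targets, which is one way to make MSE-based evaluation possible when no reference audio exists.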
Pages: 1875-1887
Page count: 13