Streaming Parrotron for on-device speech-to-speech conversion

Cited by: 0
Authors
Rybakov, Oleg [1 ]
Biadsy, Fadi [1 ]
Zhang, Xia [1 ]
Jiang, Liyang [1 ]
Meadowlark, Phoenix [1 ]
Agrawal, Shivani [1 ]
Affiliation
[1] Google Research, Atlanta, GA 30309, USA
Source
INTERSPEECH 2023, 2023
Keywords
speech-to-speech; Parrotron
DOI
10.21437/Interspeech.2023-160
CLC number
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
We present a fully on-device streaming speech-to-speech conversion model that normalizes a given input speech directly to synthesized output speech. Deploying such a model on mobile devices poses significant challenges in terms of memory footprint and computation requirements. We present a streaming-based approach that achieves an acceptable delay with minimal loss in speech conversion quality compared to a reference state-of-the-art non-streaming approach. Our method first runs the encoder in streaming mode, in real time, while the speaker is speaking. Then, as soon as the speaker stops speaking, we run the spectrogram decoder in streaming mode alongside a streaming vocoder to generate the output speech. To achieve an acceptable delay-quality trade-off, we propose a novel hybrid look-ahead approach in the encoder which combines a look-ahead feature stacker with look-ahead self-attention. We show that our streaming approach is approximately 2x faster than real time on the Pixel 4 CPU.
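As a rough illustration of the feature-stacker half of the hybrid look-ahead described above (this is not the authors' implementation; the function name, zero-padding choice, and array layout are assumptions), each encoder frame can be concatenated with a fixed number of future frames, at the cost of that many frames of algorithmic latency:

```python
import numpy as np

def lookahead_stack(frames: np.ndarray, right_context: int) -> np.ndarray:
    """Stack each frame with its `right_context` future frames.

    frames: (T, D) array of per-frame acoustic features.
    Returns a (T, D * (right_context + 1)) array; the final frames,
    which lack full future context, are zero-padded. In a streaming
    setting, this stacking adds `right_context` frames of delay,
    since frame t cannot be emitted before frame t + right_context
    has arrived.
    """
    T, D = frames.shape
    padded = np.concatenate([frames, np.zeros((right_context, D))], axis=0)
    # For each t, concatenate frames t .. t + right_context along the
    # feature axis.
    return np.concatenate(
        [padded[r:r + T] for r in range(right_context + 1)], axis=1
    )
```

With `right_context = 2` and 10 ms frames, the stacker alone would contribute roughly 20 ms of look-ahead latency; the paper's hybrid scheme splits the look-ahead budget between this kind of stacking and look-ahead self-attention.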
Pages: 2033 - 2037 (5 pages)
Related papers (6 items)
  • [1] Speech-to-text and speech-to-speech summarization of spontaneous speech
    Furui, S
    Kikuchi, T
    Shinnaka, Y
    Hori, C
    IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, 2004, 12 (04): : 401 - 408
  • [2] ASSESSING EVALUATION METRICS FOR SPEECH-TO-SPEECH TRANSLATION
    Salesky, Elizabeth
    Maeder, Julian
    Klinger, Severin
    2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU), 2021, : 733 - 740
  • [3] Speech-to-speech Low-resource Translation
    Liu, Hsiao-Chuan
    Day, Min-Yuh
    Wang, Chih-Chien
    2023 IEEE 24TH INTERNATIONAL CONFERENCE ON INFORMATION REUSE AND INTEGRATION FOR DATA SCIENCE, IRI, 2023, : 91 - 95
  • [4] Leveraging unsupervised and weakly-supervised data to improve direct speech-to-speech translation
    Jia, Ye
    Ding, Yifan
    Bapna, Ankur
    Cherry, Colin
    Zhang, Yu
    Conneau, Alexis
    Morioka, Nobuyuki
    INTERSPEECH 2022, 2022, : 1721 - 1725
  • [5] Deriving phonetic transcriptions and discovering word segmentations for speech-to-speech translation in low-resource settings
    Wilkinson, Andrew
    Zhao, Tiancheng
    Black, Alan W.
    17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 3086 - 3090
  • [6] Enabling effective design of multimodal interfaces for speech-to-speech translation system: An empirical study of longitudinal user behaviors over time and user strategies for coping with errors
    Shin, JongHo
    Georgiou, Panayiotis G.
    Narayanan, Shrikanth
    COMPUTER SPEECH AND LANGUAGE, 2013, 27 (02): : 554 - 571