SpeechPrompt: Prompting Speech Language Models for Speech Processing Tasks

Cited by: 0
Authors
Chang, Kai-Wei [1 ]
Wu, Haibin [1 ]
Wang, Yu-Kai [2 ]
Wu, Yuan-Kuei [1 ]
Shen, Hua [3 ]
Tseng, Wei-Cheng [4 ]
Kang, Iu-Thing [5 ]
Li, Shang-Wen [6 ]
Lee, Hung-Yi [1 ]
Affiliations
[1] Natl Taiwan Univ, Grad Inst Commun Engn, Taipei City 10617, Taiwan
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Univ Michigan, Ann Arbor, MI 48109 USA
[4] Univ Texas Austin, Austin, TX 78712 USA
[5] MediaTek, Hsinchu 30078, Taiwan
[6] FAIR, Menlo Pk, CA 94025 USA
Keywords
Task analysis; Speech processing; Computational modeling; Adaptation models; Tuning; Self-supervised learning; Feature extraction; Prompting; speech language model; representation learning
DOI
10.1109/TASLP.2024.3436618
Chinese Library Classification
O42 [Acoustics]
Subject Classification
070206; 082403
Abstract
Prompting has become a practical method for utilizing pre-trained language models (LMs). This approach offers several advantages. It allows an LM to adapt to new tasks with minimal training and parameter updates, thus achieving efficiency in both storage and computation. Additionally, prompting modifies only the LM's inputs and harnesses the generative capabilities of language models to address various downstream tasks in a unified manner. This significantly reduces the need for human labor in designing task-specific models. These advantages become even more evident as the number of tasks served by the LM scales up. Motivated by the strengths of prompting, we are the first to explore the potential of prompting speech LMs in the domain of speech processing. Recently, there has been growing interest in converting speech into discrete units for language modeling. Our pioneering research demonstrates that these quantized speech units are highly versatile within our unified prompting framework. Not only can they serve as class labels, but they also contain rich phonetic information that can be re-synthesized back into speech signals for speech generation tasks. Specifically, we reformulate speech processing tasks as speech-to-unit generation tasks. As a result, we can seamlessly integrate tasks such as speech classification, sequence generation, and speech generation within a single, unified prompting framework. The experimental results show that the prompting method achieves performance competitive with strong fine-tuning methods based on self-supervised learning models, while using a similar number of trainable parameters. The prompting method also shows promising results in the few-shot setting. Moreover, as more advanced speech LMs come on the scene, the proposed prompting framework holds great potential.
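The speech-to-unit reformulation described in the abstract can be made concrete with a short sketch. Everything below (TinyUnitLM, the unit-vocabulary size, the prompt length, the random "units") is an illustrative assumption, not the paper's actual GSLM-based speech LM: a frozen unit LM receives trainable prompt vectors prepended to a quantized-speech unit sequence, and only the prompt parameters are updated, so the same frozen LM can serve classification (output units read as class labels) or generation (output units re-synthesized into speech).

```python
# Minimal sketch of prompting a frozen speech LM over discrete units.
# All names and sizes are hypothetical stand-ins for illustration only.
import torch
import torch.nn as nn

VOCAB = 100        # number of discrete speech units (e.g., k-means clusters)
PROMPT_LEN = 5     # number of trainable prompt vectors
DIM = 32           # toy model dimension

class TinyUnitLM(nn.Module):
    """A toy 'speech LM' over discrete units; kept frozen during prompting."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, unit_ids, prompt):
        x = self.embed(unit_ids)                             # (B, T, DIM)
        # Prepend the trainable prompt vectors to the unit embeddings.
        x = torch.cat([prompt.expand(x.size(0), -1, -1), x], dim=1)
        return self.head(self.encoder(x))                    # (B, PROMPT_LEN+T, VOCAB)

lm = TinyUnitLM()
lm.eval()
for p in lm.parameters():          # the speech LM stays frozen;
    p.requires_grad = False        # only the prompt is tuned

prompt = nn.Parameter(torch.randn(1, PROMPT_LEN, DIM))      # trainable prompt
optimizer = torch.optim.Adam([prompt], lr=1e-3)

# Hypothetical training step: 'units' would come from quantizing speech
# (e.g., SSL features + k-means); 'target' is the desired unit sequence
# (class-label units for classification, re-synthesizable units for generation).
units = torch.randint(0, VOCAB, (2, 20))
target = torch.randint(0, VOCAB, (2, PROMPT_LEN + 20))
logits = lm(units, prompt)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), target.reshape(-1))
loss.backward()                    # gradients flow only into the prompt
optimizer.step()
```

Because the gradient reaches only the prompt vectors, storage per downstream task is a few prompt tensors rather than a full model copy, which is the efficiency argument the abstract makes.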
Pages: 3730-3744
Page count: 15