SpeechPrompt: Prompting Speech Language Models for Speech Processing Tasks

Cited by: 0
Authors
Chang, Kai-Wei [1 ]
Wu, Haibin [1 ]
Wang, Yu-Kai [2 ]
Wu, Yuan-Kuei [1 ]
Shen, Hua [3 ]
Tseng, Wei-Cheng [4 ]
Kang, Iu-Thing [5 ]
Li, Shang-Wen [6 ]
Lee, Hung-Yi [1 ]
Affiliations
[1] Natl Taiwan Univ, Grad Inst Commun Engn, Taipei City 10617, Taiwan
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Univ Michigan, Ann Arbor, MI 48109 USA
[4] Univ Texas Austin, Austin, TX 78712 USA
[5] MediaTek, Hsinchu 30078, Taiwan
[6] FAIR, Menlo Pk, CA 94025 USA
Keywords
Task analysis; Speech processing; Computational modeling; Adaptation models; Tuning; Self-supervised learning; Feature extraction; Prompting; speech language model; self-supervised learning; representation learning; Representation
DOI
10.1109/TASLP.2024.3436618
Chinese Library Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Prompting has become a practical method for utilizing pre-trained language models (LMs). This approach offers several advantages. It allows an LM to adapt to new tasks with minimal training and parameter updates, thus achieving efficiency in both storage and computation. Additionally, prompting modifies only the LM's inputs and harnesses the generative capabilities of language models to address various downstream tasks in a unified manner. This significantly reduces the need for human labor in designing task-specific models. These advantages become even more evident as the number of tasks served by the LM scales up. Motivated by the strengths of prompting, we are the first to explore the potential of prompting speech LMs in the domain of speech processing. Recently, there has been growing interest in converting speech into discrete units for language modeling. Our pioneering research demonstrates that these quantized speech units are highly versatile within our unified prompting framework. Not only can they serve as class labels, but they also contain rich phonetic information that can be re-synthesized back into speech signals for speech generation tasks. Specifically, we reformulate speech processing tasks as speech-to-unit generation tasks. As a result, we can seamlessly integrate tasks such as speech classification, sequence generation, and speech generation within a single, unified prompting framework. The experimental results show that the prompting method achieves performance competitive with strong fine-tuning methods based on self-supervised learning models with a similar number of trainable parameters. The prompting method also shows promising results in the few-shot setting. Moreover, as more advanced speech LMs come onto the stage, the proposed prompting framework holds great potential.
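The following is a minimal sketch of the prompting idea summarized in the abstract: speech quantized into discrete units is embedded, a small set of trainable prompt vectors is prepended, and a frozen unit language model predicts output units. The class and hyperparameter names (PromptedUnitLM, UNIT_VOCAB, PROMPT_LEN) are assumptions for illustration, and a small Transformer encoder stands in for the pre-trained speech LM; this is not the paper's released implementation.

```python
# Minimal prompt-tuning sketch on discrete speech units (illustrative only).
import torch
import torch.nn as nn

UNIT_VOCAB = 100    # assumed size of the quantized-unit vocabulary (e.g. k-means clusters)
PROMPT_LEN = 8      # number of trainable prompt vectors
D_MODEL = 256

class PromptedUnitLM(nn.Module):
    """Frozen unit LM with trainable prompt embeddings prepended to its input."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(UNIT_VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for a pre-trained speech LM
        self.head = nn.Linear(D_MODEL, UNIT_VOCAB)
        # Freeze the "pre-trained" LM; only the prompt (defined afterwards) is updated.
        for p in self.parameters():
            p.requires_grad = False
        self.prompt = nn.Parameter(torch.randn(PROMPT_LEN, D_MODEL) * 0.02)

    def forward(self, units):                         # units: (batch, seq) of discrete speech units
        x = self.embed(units)                         # (batch, seq, d)
        prompt = self.prompt.unsqueeze(0).expand(x.size(0), -1, -1)
        x = torch.cat([prompt, x], dim=1)             # prepend the task-specific prompt
        h = self.lm(x)
        return self.head(h[:, PROMPT_LEN:])           # logits over output units

model = PromptedUnitLM()
units = torch.randint(0, UNIT_VOCAB, (2, 50))         # toy batch of quantized speech
target = torch.randint(0, UNIT_VOCAB, (2, 50))        # task output expressed as units
logits = model(units)
loss = nn.functional.cross_entropy(logits.reshape(-1, UNIT_VOCAB), target.reshape(-1))
loss.backward()                                       # gradients flow only into model.prompt
```

Because only the prompt vectors receive gradients, the per-task storage and training cost is limited to those few parameters, which is the efficiency argument the abstract makes; the predicted units can then be read as class labels or re-synthesized into speech, depending on the task.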
Pages: 3730-3744
Number of pages: 15