SpeechPrompt: Prompting Speech Language Models for Speech Processing Tasks

Cited by: 0
Authors
Chang, Kai-Wei [1 ]
Wu, Haibin [1 ]
Wang, Yu-Kai [2 ]
Wu, Yuan-Kuei [1 ]
Shen, Hua [3 ]
Tseng, Wei-Cheng [4 ]
Kang, Iu-Thing [5 ]
Li, Shang-Wen [6 ]
Lee, Hung-Yi [1 ]
Affiliations
[1] Natl Taiwan Univ, Grad Inst Commun Engn, Taipei City 10617, Taiwan
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Univ Michigan, Ann Arbor, MI 48109 USA
[4] Univ Texas Austin, Austin, TX 78712 USA
[5] MediaTek, Hsinchu 30078, Taiwan
[6] FAIR, Menlo Pk, CA 94025 USA
Keywords
Task analysis; Speech processing; Computational modeling; Adaptation models; Tuning; Self-supervised learning; Feature extraction; Prompting; Speech language model; Representation learning
DOI
10.1109/TASLP.2024.3436618
CLC number: O42 [Acoustics]
Discipline codes: 070206; 082403
Abstract
Prompting has become a practical method for utilizing pre-trained language models (LMs). This approach offers several advantages: it allows an LM to adapt to new tasks with minimal training and parameter updates, achieving efficiency in both storage and computation. Additionally, prompting modifies only the LM's inputs and harnesses the generative capabilities of language models to address various downstream tasks in a unified manner, significantly reducing the human labor needed to design task-specific models. These advantages become even more evident as the number of tasks served by the LM scales up. Motivated by the strengths of prompting, we are the first to explore the potential of prompting speech LMs in the domain of speech processing. Recently, there has been growing interest in converting speech into discrete units for language modeling. Our pioneering research demonstrates that these quantized speech units are highly versatile within our unified prompting framework: not only can they serve as class labels, but they also contain rich phonetic information that can be re-synthesized into speech signals for speech generation tasks. Specifically, we reformulate speech processing tasks as speech-to-unit generation tasks, allowing us to seamlessly integrate speech classification, sequence generation, and speech generation within a single, unified prompting framework. The experimental results show that the prompting method achieves performance competitive with strong fine-tuning methods based on self-supervised learning models with a similar number of trainable parameters. The prompting method also shows promising results in the few-shot setting. Moreover, as more advanced speech LMs emerge, the proposed prompting framework shows great potential.
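The abstract's core idea — quantize speech into discrete units, prepend a task-specific prompt, and let a frozen speech LM generate output units for every task — can be illustrated with a toy sketch. All names, prompt IDs, and the quantizer/LM behavior below are hypothetical stand-ins, not the paper's actual model:

```python
# Hypothetical learned prompt sequences, one per downstream task.
TASK_PROMPTS = {
    "keyword_spotting": [901, 902],
    "speech_translation": [911, 912],
}

def quantize(waveform):
    """Stand-in for an SSL quantizer (e.g. HuBERT features + k-means):
    map each frame of the waveform to a discrete unit ID."""
    return [int(sample * 10) % 100 for sample in waveform]

def speech_lm(unit_sequence):
    """Toy frozen 'speech LM'. A real model would autoregressively
    generate task-output units conditioned on prompt + speech units."""
    prompt, units = unit_sequence[:2], unit_sequence[2:]
    if prompt == TASK_PROMPTS["keyword_spotting"]:
        # Classification reformulated as generating a single label unit.
        return [max(units) % 10]
    # Sequence tasks return a unit sequence, which a vocoder could
    # re-synthesize back into a speech signal.
    return [u + 1 for u in units]

def prompt_task(task, waveform):
    """Unified interface: every task is speech-to-unit generation,
    differing only in the prompt prepended to the speech units."""
    units = quantize(waveform)
    return speech_lm(TASK_PROMPTS[task] + units)
```

The design point the sketch captures: only `TASK_PROMPTS` differs across tasks, while the quantizer and LM stay fixed, which is what makes the approach storage- and compute-efficient as the number of served tasks grows.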
Pages: 3730-3744
Number of pages: 15