PTE: Prompt tuning with ensemble verbalizers

Cited: 0
Authors
Liang, Liheng [1 ]
Wang, Guancheng [2 ]
Lin, Cong [2 ]
Feng, Zhuowen [3 ]
Affiliations
[1] Guangdong Ocean Univ, Fac Math & Comp Sci, Zhanjiang 524088, Peoples R China
[2] Guangdong Ocean Univ, Coll Elect & Informat Engn, Zhanjiang 524088, Peoples R China
[3] Guangdong Ocean Univ, Coll Literature & News Commun, Zhanjiang 524088, Peoples R China
Keywords
Prompt tuning; Few-shot learning; Text classification; Pre-trained language models
DOI
10.1016/j.eswa.2024.125600
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Prompt tuning has achieved remarkable success in improving the performance of Pre-trained Language Models (PLMs) across various downstream NLP tasks, particularly in scenarios with limited downstream data. Reframing tasks as fill-in-the-blank questions is an effective approach within prompt tuning. However, this approach requires mapping labels through a verbalizer consisting of one or more label tokens, and it is constrained by manually crafted prompts. Furthermore, most existing automatic construction methods either introduce external resources or rely solely on discrete or continuous optimization strategies. To address these issues, we propose a method for optimizing discrete verbalizers based on gradient descent, which we refer to as PTE. The method integrates discrete tokens into verbalizers that can be continuously optimized, combining the distinct advantages of discrete and continuous optimization strategies. In contrast to prior approaches, ours does not rely on prompts generated by other models or on prior knowledge; it merely augments a matrix. This makes the approach remarkably simple and flexible, enabling prompt optimization while preserving the interpretability of output label tokens without the constraints imposed by a discrete vocabulary. Finally, applying this method to text classification tasks, we observe that PTE achieves results comparable to, if not surpassing, previous methods even under extreme conciseness. This provides a simple, intuitive, and efficient solution for automatically constructing verbalizers. Moreover, through quantitative analysis of the optimized verbalizers, we find that language models likely rely not only on semantic information but also on other features for text classification. This finding opens new avenues for future research and model improvements.
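Note: the record contains no code; the following PyTorch sketch is only a rough illustration of the idea described in the abstract, namely a verbalizer realized as a learnable matrix over the vocabulary that is optimized by gradient descent together with a cloze-style prompt. The model name, template, seed label tokens, and the SoftVerbalizer class are assumptions made for illustration, not the authors' implementation.

# Illustrative sketch only (assumed details: model name, template, seed label
# tokens, and the SoftVerbalizer class); not the authors' released code.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM


class SoftVerbalizer(nn.Module):
    """Map [MASK]-position vocabulary logits to class scores through a learnable
    matrix, so initially discrete label tokens can be refined by gradient descent."""

    def __init__(self, vocab_size, num_classes, init_token_ids=None):
        super().__init__()
        weight = torch.zeros(num_classes, vocab_size)
        if init_token_ids is not None:  # warm-start from hand-picked label tokens
            for cls, ids in enumerate(init_token_ids):
                weight[cls, ids] = 1.0 / len(ids)
        self.weight = nn.Parameter(weight)  # the "augmented matrix" trained with the PLM

    def forward(self, mask_logits):
        # mask_logits: (batch, vocab_size) -> class logits: (batch, num_classes)
        return mask_logits @ self.weight.t()


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

seed_tokens = [["great", "good"], ["terrible", "bad"]]  # assumed seed label tokens
init_ids = [tokenizer.convert_tokens_to_ids(ws) for ws in seed_tokens]
verbalizer = SoftVerbalizer(model.config.vocab_size, num_classes=2, init_token_ids=init_ids)

optimizer = torch.optim.AdamW(
    list(model.parameters()) + list(verbalizer.parameters()), lr=2e-5
)

text, gold = "The movie was wonderful.", torch.tensor([0])
prompt = f"{text} It was {tokenizer.mask_token}."  # hand-written cloze template
enc = tokenizer(prompt, return_tensors="pt")
mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

logits = model(**enc).logits[0, mask_pos, :]  # vocabulary logits at the [MASK] slot
loss = nn.functional.cross_entropy(verbalizer(logits), gold)
loss.backward()
optimizer.step()

Because the class-to-token weights remain a matrix over the discrete vocabulary, the learned verbalizer can still be read off (e.g., by inspecting the largest weights per class), which is consistent with the interpretability claim in the abstract.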
Pages: 10