PTE: Prompt tuning with ensemble verbalizers

Times Cited: 0
Authors
Liang, Liheng [1 ]
Wang, Guancheng [2 ]
Lin, Cong [2 ]
Feng, Zhuowen [3 ]
Affiliations
[1] Guangdong Ocean Univ, Fac Math & Comp Sci, Zhanjiang 524088, Peoples R China
[2] Guangdong Ocean Univ, Coll Elect & Informat Engn, Zhanjiang 524088, Peoples R China
[3] Guangdong Ocean Univ, Coll Literature & News Commun, Zhanjiang 524088, Peoples R China
Keywords
Prompt tuning; Few-shot learning; Text classification; Pre-trained language models;
DOI
10.1016/j.eswa.2024.125600
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Prompt tuning has achieved remarkable success in improving the performance of Pre-trained Language Models (PLMs) across various downstream NLP tasks, particularly in scenarios with limited downstream data. Reframing tasks as fill-in-the-blank questions is an effective approach within prompt tuning. However, this approach requires mapping labels through a verbalizer consisting of one or more label tokens, and it is constrained by manually crafted prompts. Furthermore, most existing automatic construction methods either introduce external resources or rely solely on discrete or continuous optimization strategies. To address these issues, we propose a methodology for optimizing discrete verbalizers based on gradient descent, which we refer to as PTE. This method integrates discrete tokens into verbalizers that can be continuously optimized, combining the distinct advantages of both discrete and continuous optimization strategies. In contrast to prior approaches, ours does not rely on prompts generated by other models or on prior knowledge; it merely augments a matrix. The approach is remarkably simple and flexible, enabling prompt optimization while preserving the interpretability of output label tokens, without the constraints imposed by a discrete vocabulary. Finally, applying this method to text classification tasks, we observe that PTE achieves results comparable to, if not surpassing, previous methods, even with an extremely concise design. This provides a simple, intuitive, and efficient solution for automatically constructing verbalizers. Moreover, through quantitative analysis of the optimized verbalizers, we find that language models likely rely not only on semantic information but also on other features for text classification. This finding opens new avenues for future research and model enhancements.
Pages: 10
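
The verbalizer-as-matrix idea described in the abstract can be illustrated with a minimal sketch; this is not the authors' released code. It assumes an MLM-style PLM accessed through PyTorch and Hugging Face Transformers: the verbalizer is a trainable class-by-vocabulary weight matrix, seeded from hand-picked discrete label tokens and then refined by gradient descent while the PLM stays frozen. All names, templates, and hyperparameters below are illustrative assumptions.

# Minimal sketch (assumption: PyTorch + Hugging Face Transformers, MLM backbone).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"                 # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
plm = AutoModelForMaskedLM.from_pretrained(model_name)
plm.eval()                                       # PLM is frozen; only the verbalizer trains

label_words = {0: "terrible", 1: "great"}        # hand-picked seed tokens per class

class EnsembleVerbalizer(nn.Module):
    """Trainable matrix W (num_classes x vocab_size) mapping [MASK] logits to class scores."""
    def __init__(self, num_classes, vocab_size, seed_ids):
        super().__init__()
        W = torch.zeros(num_classes, vocab_size)
        for cls, tok in seed_ids.items():
            W[cls, tok] = 15.0                   # concentrate initial mass on the seed token
        self.W = nn.Parameter(W)

    def forward(self, mask_logits):              # mask_logits: (batch, vocab_size)
        # A softmax over each row keeps the verbalizer a distribution over
        # tokens, so its top-weighted entries remain readable label words.
        return mask_logits @ torch.softmax(self.W, dim=-1).T   # (batch, num_classes)

seed_ids = {c: tokenizer.convert_tokens_to_ids(w) for c, w in label_words.items()}
verbalizer = EnsembleVerbalizer(len(label_words), plm.config.vocab_size, seed_ids)
optimizer = torch.optim.Adam(verbalizer.parameters(), lr=1e-3)

texts = ["the movie was awful", "a wonderful, moving film"]
labels = torch.tensor([0, 1])
prompts = [f"{t} It was {tokenizer.mask_token}." for t in texts]   # manual template
batch = tokenizer(prompts, return_tensors="pt", padding=True)

with torch.no_grad():                            # single frozen forward pass of the PLM
    logits = plm(**batch).logits                 # (batch, seq_len, vocab_size)
mask_pos = (batch["input_ids"] == tokenizer.mask_token_id).nonzero()
mask_logits = logits[mask_pos[:, 0], mask_pos[:, 1]]               # (batch, vocab_size)

for _ in range(50):                              # few-shot gradient-descent updates
    loss = nn.functional.cross_entropy(verbalizer(mask_logits), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for cls, word in label_words.items():            # inspect the optimized verbalizer
    top = torch.softmax(verbalizer.W[cls], dim=-1).topk(5).indices
    print(word, "->", tokenizer.convert_ids_to_tokens(top.tolist()))

Inspecting each class's top-weighted tokens after training, as in the last loop, is in the spirit of the abstract's quantitative analysis of optimized verbalizers, while keeping the label mapping human-readable.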