Prompt Tuning on Graph-Augmented Low-Resource Text Classification

Cited by: 0
|
Authors
Wen, Zhihao [1 ]
Fang, Yuan [1 ]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore 188065, Singapore
关键词
Tuning; Text categorization; Task analysis; Accuracy; Paints; Oils; Ink; Text classification; graph; low-resource learning; pre-training; prompt
DOI
10.1109/TKDE.2024.3440068
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Text classification is a fundamental problem in information retrieval with many real-world applications, such as predicting the topics of online articles and the categories of e-commerce product descriptions. However, low-resource text classification, with no or few labeled samples, poses a serious challenge for supervised learning. Meanwhile, many text corpora are inherently grounded in a network structure, such as a hyperlink/citation network for online articles and a user-item purchase network for e-commerce products. These graph structures capture rich semantic relationships, which can potentially augment low-resource text classification. In this paper, we propose a novel model called Graph-Grounded Pre-training and Prompting (G2P2) that addresses low-resource text classification in a two-pronged approach. During pre-training, we propose three graph interaction-based contrastive strategies to jointly pre-train a graph-text model; during downstream classification, we explore handcrafted discrete prompts and continuous prompt tuning for the jointly pre-trained model to achieve zero- and few-shot classification, respectively. Moreover, we explore the possibility of employing continuous prompt tuning for zero-shot inference. Specifically, we aim to generalize continuous prompts to unseen classes while leveraging a set of base classes. To this end, we extend G2P2 into G2P2*, hinging on a new architecture of conditional prompt tuning. Extensive experiments on four real-world datasets demonstrate the strength of G2P2 in zero- and few-shot low-resource text classification tasks, and illustrate the advantage of G2P2* in dealing with unseen classes.
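The abstract describes two mechanisms: contrastive pre-training that aligns text embeddings with graph-node embeddings, and continuous prompt tuning for few-shot classification. The following is a minimal PyTorch sketch of both ideas, not the authors' implementation: the single symmetric text-node InfoNCE term, the module sizes, and the names (contrastive_loss, ContinuousPrompt, n_ctx) are illustrative assumptions, whereas G2P2 itself uses three graph-interaction-based contrastive strategies over a dual graph-text encoder.

# A minimal, self-contained sketch (NOT the authors' code) of the two ideas
# in the abstract: (1) contrastive alignment of node and text embeddings,
# and (2) continuous prompt tuning for few-shot classification. Sizes and
# names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def contrastive_loss(text_emb, node_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning each text with its own node (CLIP-style)."""
    text_emb = F.normalize(text_emb, dim=-1)
    node_emb = F.normalize(node_emb, dim=-1)
    logits = text_emb @ node_emb.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(logits.size(0))          # i-th text matches i-th node
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

class ContinuousPrompt(nn.Module):
    """Learnable prompt vectors prepended to frozen class-name embeddings.

    Only the prompt vectors are trained; the pre-trained encoders stay
    frozen, which is what makes few-shot tuning cheap.
    """
    def __init__(self, n_ctx=4, dim=128, n_classes=5):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Stand-in for frozen class-name token embeddings.
        self.register_buffer("class_tokens", torch.randn(n_classes, 1, dim))

    def forward(self):
        # (n_classes, n_ctx + 1, dim): shared context + per-class name token.
        ctx = self.ctx.unsqueeze(0).expand(self.class_tokens.size(0), -1, -1)
        return torch.cat([ctx, self.class_tokens], dim=1)

# Toy usage: one pre-training step on random "text" and "node" embeddings.
if __name__ == "__main__":
    text_emb, node_emb = torch.randn(8, 128), torch.randn(8, 128)
    print("pre-training loss:", contrastive_loss(text_emb, node_emb).item())
    prompt = ContinuousPrompt()
    print("prompted class reps:", prompt().shape)  # torch.Size([5, 5, 128])

Because the encoders are frozen, the only trainable parameters at classification time are the few context vectors, which is why continuous prompt tuning suits the few-shot setting the abstract targets.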
Pages: 9080-9095
Page count: 16
Related papers
22 items in total
  • [21] Liu, Yunpeng; Yang, Xukui; Zhang, Jiayi; Xi, Yangli; Qu, Dan. TAML-Adapter: Enhancing Adapter Tuning Through Task-Agnostic Meta-Learning for Low-Resource Automatic Speech Recognition. IEEE Signal Processing Letters, 2025, 32: 636-640.
  • [22] Azizah, Kurniawati; Jatmiko, Wisnu. Transfer Learning, Style Control, and Speaker Reconstruction Loss for Zero-Shot Multilingual Multi-Speaker Text-to-Speech on Low-Resource Languages. IEEE Access, 2022, 10: 5895-5911.