DesPrompt: Personality-descriptive prompt tuning for few-shot personality recognition

Cited by: 14
Authors
Wen, Zhiyuan [1 ]
Cao, Jiannong [1 ]
Yang, Yu [1 ]
Wang, Haoli [1 ]
Yang, Ruosong [1 ]
Liu, Shuaiqi [1 ]
Affiliation
[1] Hong Kong Polytech Univ, Dept Comp, Hung Hom, Kowloon, Yuk Choi Rd 11, Hong Kong, Peoples R China
Keywords
Personality recognition; Prompt-tuning; Text classification; BIG-5; ENGLISH;
DOI
10.1016/j.ipm.2023.103422
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline code
0812 ;
Abstract
Personality recognition in text is the task of classifying personality traits from user-generated content. Recent studies address this task by fine-tuning pre-trained language models (PLMs) with additional classification heads. However, the classification heads are often insufficiently trained when annotated data is scarce, resulting in poor recognition performance. To this end, we propose DesPrompt, which tunes a PLM through personality-descriptive prompts for few-shot personality recognition without introducing additional parameters. DesPrompt is based on the lexical hypothesis of personality, which suggests that personality traits are revealed by descriptive adjectives. Specifically, DesPrompt models personality recognition as a word-filling task. The input content is first encapsulated with personality-descriptive prompts. Then, the PLM is supervised to fill in the prompts with label words describing personality traits. The label words are drawn from trait-descriptive adjectives identified in psychology findings and lexical knowledge. Finally, the label words filled in by the PLM are mapped to personality labels for recognition. Our approach aligns with the Masked Language Modeling (MLM) objective used to pre-train PLMs, so it efficiently reuses pre-trained parameters and reduces dependence on annotated data. Experiments on four public datasets show that DesPrompt outperforms conventional fine-tuning and other prompt-based methods, especially in zero-shot and few-shot settings.
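The prompt-and-verbalizer scheme the abstract describes can be sketched in a few lines. The sketch below is illustrative, not the authors' code: the template wording, the adjective-to-label verbalizer, and the stub scorer standing in for the PLM are all assumptions; a real system would take masked-language-model logits (e.g., from RoBERTa) over the `[MASK]` position.

```python
# Trait-descriptive adjectives (per the lexical hypothesis) mapped to labels
# for one Big-Five trait; a full verbalizer would cover all five traits.
VERBALIZER = {
    "talkative": "extravert",
    "outgoing": "extravert",
    "quiet": "introvert",
    "reserved": "introvert",
}

def encapsulate(post: str) -> str:
    """Wrap user content in a personality-descriptive prompt with a mask slot."""
    return f"{post} The author of this post is a [MASK] person."

def recognize(post: str, plm_fill) -> str:
    """Score only verbalizer words at [MASK], then map the best one to a label."""
    prompt = encapsulate(post)
    scores = plm_fill(prompt, candidates=list(VERBALIZER))
    best_word = max(scores, key=scores.get)
    return VERBALIZER[best_word]

def toy_plm(prompt: str, candidates):
    """Stub scorer: favors candidate words appearing in the prompt.
    A real implementation would use pre-trained MLM logits instead."""
    return {w: float(w in prompt.lower()) for w in candidates}

label = recognize("I love parties, and I am so talkative!", toy_plm)
```

Because the label words are ordinary vocabulary items, the PLM's pre-trained MLM head scores them directly, which is why no new classification head (and no new parameters) is needed.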
Pages: 17