DesPrompt: Personality-descriptive prompt tuning for few-shot personality recognition

Cited by: 14
|
Authors
Wen, Zhiyuan [1 ]
Cao, Jiannong [1 ]
Yang, Yu [1 ]
Wang, Haoli [1 ]
Yang, Ruosong [1 ]
Liu, Shuaiqi [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Comp, Hung Hom, Kowloon, Yuk Choi Rd 11, Hong Kong, Peoples R China
Keywords
Personality recognition; Prompt-tuning; Text classification; BIG-5; ENGLISH;
DOI
10.1016/j.ipm.2023.103422
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812
Abstract
Personality recognition in text is the problem of classifying users' personality traits from their written content. Recent studies address this task by fine-tuning pre-trained language models (PLMs) with additional classification heads. However, these classification heads are often insufficiently trained when annotated data is scarce, resulting in poor recognition performance. To this end, we propose DesPrompt, which tunes a PLM through personality-descriptive prompts for few-shot personality recognition without introducing additional parameters. DesPrompt is grounded in the lexical hypothesis of personality, which holds that personality traits are revealed by descriptive adjectives. Specifically, DesPrompt models personality recognition as a word-filling task. The input content is first encapsulated with personality-descriptive prompts. The PLM is then supervised to fill the prompts with label words describing personality traits, where the label words are trait-descriptive adjectives drawn from psychological findings and lexical knowledge. Finally, the label words filled in by the PLM are mapped to personality labels for recognition. This formulation aligns with the Masked Language Modeling (MLM) objective used in pre-training PLMs, so it efficiently reuses pre-trained parameters and reduces dependence on annotated data. Experiments on four public datasets show that DesPrompt outperforms conventional fine-tuning and other prompt-based methods, especially in zero-shot and few-shot settings.
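The prompt-and-verbalizer mechanism the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt template, the adjective lists, and the stubbed MLM probabilities are all assumed for the example; a real system would obtain the `[MASK]`-slot distribution from a pre-trained masked language model.

```python
# Sketch of DesPrompt-style recognition for one BIG-5 trait (Extraversion).
# The input is wrapped in a personality-descriptive prompt, an MLM scores
# candidate adjectives for the [MASK] slot, and adjective probabilities are
# aggregated into a binary trait label via a verbalizer.

# Trait-descriptive adjectives per the lexical hypothesis (illustrative lists).
LABEL_WORDS = {
    "high": ["talkative", "outgoing", "sociable"],
    "low": ["quiet", "reserved", "shy"],
}


def build_prompt(text: str) -> str:
    """Encapsulate the user's content in a personality-descriptive prompt."""
    return f"{text} The author is a [MASK] person."


def classify(mask_word_probs: dict[str, float]) -> str:
    """Map MLM probabilities over label words to a trait label by summing
    the probability mass assigned to each pole's adjectives."""
    scores = {
        pole: sum(mask_word_probs.get(w, 0.0) for w in words)
        for pole, words in LABEL_WORDS.items()
    }
    return max(scores, key=scores.get)


# Stubbed MLM output: probabilities a real PLM would assign to the [MASK] slot.
probs = {"talkative": 0.30, "sociable": 0.15, "quiet": 0.25, "shy": 0.05}
prompt = build_prompt("I love meeting new people at parties!")
print(prompt)           # prompt ends with "... The author is a [MASK] person."
print(classify(probs))  # "high" (0.45 for high vs. 0.30 for low)
```

Because no classification head is added, the only trainable parameters are those of the PLM itself, which is why the approach remains usable in zero- and few-shot settings.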
Pages: 17
Related Papers
38 records
  • [1] Cross-language few-shot intent recognition via prompt-based tuning
    Cao, Pei
    Li, Yu
    Li, Xinlu
    APPLIED INTELLIGENCE, 2025, 55 (01)
  • [2] Hierarchical Prompt Tuning for Few-Shot Multi-Task Learning
    Liu, Jingping
    Chen, Tao
    Liang, Zujie
    Jiang, Haiyun
    Xiao, Yanghua
    Wei, Feng
    Qian, Yuxi
    Hao, Zhenghong
    Han, Bing
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 1556 - 1565
  • [3] Ontology-enhanced Prompt-tuning for Few-shot Learning
    Ye, Hongbin
    Zhang, Ningyu
    Deng, Shumin
    Chen, Xiang
    Chen, Hui
    Xiong, Feiyu
    Chen, Xi
    Chen, Huajun
    PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 778 - 787
  • [4] KPT++: Refined knowledgeable prompt tuning for few-shot text classification
    Ni, Shiwen
    Kao, Hung-Yu
    KNOWLEDGE-BASED SYSTEMS, 2023, 274
  • [5] An enhanced few-shot text classification approach by integrating topic modeling and prompt-tuning
    Zhang, Yinghui
    Xu, Yichun
    Dong, Fangmin
    NEUROCOMPUTING, 2025, 617
  • [6] A prompt tuning method based on relation graphs for few-shot relation extraction
    Zhang, Zirui
    Yang, Yiyu
    Chen, Benhui
    NEURAL NETWORKS, 2025, 185
  • [7] Few-Shot Text Classification with an Efficient Prompt Tuning Method in Meta-Learning Framework
    Lv, Xiaobao
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2024, 38 (03)
  • [8] Adaptive multimodal prompt-tuning model for few-shot multimodal sentiment analysis
    Xiang, Yan
    Zhang, Anlan
    Guo, Junjun
    Huang, Yuxin
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2025,
  • [9] Exploring Universal Intrinsic Task Subspace for Few-Shot Learning via Prompt Tuning
    Qin, Yujia
    Wang, Xiaozhi
    Su, Yusheng
    Lin, Yankai
    Ding, Ning
    Yi, Jing
    Chen, Weize
    Liu, Zhiyuan
    Li, Juanzi
    Hou, Lei
    Li, Peng
    Sun, Maosong
    Zhou, Jie
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 3631 - 3643
  • [10] REKP: Refined External Knowledge into Prompt-Tuning for Few-Shot Text Classification
    Dang, Yuzhuo
    Chen, Weijie
    Zhang, Xin
    Chen, Honghui
    MATHEMATICS, 2023, 11 (23)