CoPrompt: A Contrastive-prompt Tuning Method for Multiparty Dialogue Character Relationship Extraction

Cited: 0
Authors
Li, Yu [1 ]
Jiang, Yuru [1 ]
Chen, Jie [1 ]
Wang, Liangguo [1 ]
Tao, Yuyang [1 ]
Zhang, Yangsen [1 ]
Affiliations
[1] Beijing Information Science & Technology University, Beijing, China
Source
Proceedings of the 2023 7th International Conference on Natural Language Processing and Information Retrieval (NLPIR 2023), 2023
Keywords
Relation Extraction; Multiparty Dialogue; Prompt-tuning
DOI
10.1145/3639233.3639239
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Information extraction is a fundamental task in natural language processing, and relation extraction is one of its primary subfields. However, most previous research focuses on extracting relations from standard texts, such as news articles and Wikipedia entries, and overlooks the challenges posed by dialogues: sparse details, cross-sentence links, and complex character relationships. To address these challenges, we propose a contrastive-prompt tuning method (CoPrompt) that better captures the relationships between characters in dialogues. Our method improves relation extraction performance by constructing positive and negative sample pairs so that prompt learning yields better embedding representations of relation features. To evaluate the method, we built both manual and continuous templates and conducted experiments on the DialogRE and CRECIL datasets. Our method consistently outperformed competing methods, notably achieving state-of-the-art results on the Chinese datasets (DialogRE_cn and CRECIL), underscoring its effectiveness for Chinese relation extraction. We also demonstrate the effectiveness of our method in low-resource scenarios. Our code is available at https://github.com/LIyu810/CoPrompt_main.
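The abstract sketches the core mechanism: relation embeddings obtained through prompt templates are trained with a contrastive objective over positive and negative sample pairs. The following is a minimal illustrative sketch of that idea in PyTorch; the template wording, the InfoNCE-style loss, the BERT backbone, and all function names here are assumptions made for illustration, not the authors' released implementation (see the GitHub repository above for the official code).

import torch
import torch.nn.functional as F
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
encoder = BertModel.from_pretrained("bert-base-chinese")

def relation_embedding(dialogue: str, subj: str, obj: str) -> torch.Tensor:
    # Wrap the dialogue in a hypothetical manual template and use the [MASK]
    # token's hidden state as the relation representation for (subj, obj).
    prompt = f"{dialogue} The relation between {subj} and {obj} is {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    hidden = encoder(**inputs).last_hidden_state            # (1, seq_len, dim)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    return hidden[0, mask_pos]                              # (dim,)

def contrastive_loss(anchor, positive, negatives, temperature=0.07):
    # InfoNCE-style objective (an assumption): pull the anchor toward one
    # positive (a pair sharing its relation label) and push it away from
    # negatives (pairs with different labels).
    candidates = torch.stack([positive] + list(negatives))  # (1 + K, dim)
    sims = F.cosine_similarity(anchor.unsqueeze(0), candidates) / temperature
    target = torch.zeros(1, dtype=torch.long)               # positive is at index 0
    return F.cross_entropy(sims.unsqueeze(0), target)

Under this reading, character pairs that share a relation label act as positives and pairs with different labels act as negatives, so same-relation [MASK] embeddings cluster together while different-relation embeddings are pushed apart.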
Pages: 153-160
Page count: 8