Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning

Cited by: 23
Authors
Chen, Xiang [1 ]
Li, Lei [1 ]
Zhang, Ningyu [1 ]
Tan, Chuanqi [2 ]
Huang, Fei [2 ]
Si, Luo [2 ]
Chen, Huajun [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou Innovat Ctr, AZFT Joint Lab Knowledge Engine, Hangzhou, Zhejiang, Peoples R China
[2] Alibaba Grp, Hangzhou, Zhejiang, Peoples R China
Source
PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22) | 2022
Funding
National Key Research and Development Program of China;
Keywords
Relation Extraction; Prompt Tuning; Few-shot Learning;
DOI
10.1145/3477495.3531746
Chinese Library Classification
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Pre-trained language models have contributed significantly to relation extraction by demonstrating remarkable few-shot learning abilities. However, prompt tuning methods for relation extraction may still fail to generalize to rare or hard patterns. The previous parametric learning paradigm can be viewed as memorization: the training data is a book, and inference is a closed-book test. Long-tailed or hard patterns can hardly be memorized in parameters given only few-shot instances. To this end, we regard RE as an open-book examination and propose a new semiparametric paradigm of retrieval-enhanced prompt tuning for relation extraction. We construct an open-book datastore for retrieval, in which prompt-based instance representations and their corresponding relation labels are memorized as key-value pairs. During inference, the model infers relations by linearly interpolating the base output of the PLM with the non-parametric nearest-neighbor distribution over the datastore. In this way, our model not only infers relations through knowledge stored in the weights during training but also assists decision-making by retrieving and querying examples in the open-book datastore. Extensive experiments on benchmark datasets show that our method achieves state-of-the-art performance in both standard supervised and few-shot settings.
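A minimal sketch of the inference procedure the abstract describes: a datastore of prompt-based instance representations (keys) and relation labels (values), a non-parametric k-nearest-neighbor distribution over that datastore, and a linear interpolation with the PLM's output distribution. The function names, the L2 distance, the softmax temperature, and the toy data below are illustrative assumptions, not the authors' released implementation; the interpolation weight lam controls how much the prediction relies on retrieved examples versus the PLM itself.

import numpy as np

def knn_distribution(query, keys, values, num_labels, k=16, temperature=1.0):
    """Non-parametric relation distribution from the k nearest datastore entries."""
    dists = np.linalg.norm(keys - query, axis=1)      # distance from the query to every memorized key
    nn = np.argsort(dists)[:k]                        # indices of the k nearest neighbors
    weights = np.exp(-dists[nn] / temperature)        # closer neighbors receive larger weight
    weights /= weights.sum()
    p_knn = np.zeros(num_labels)
    for w, label in zip(weights, values[nn]):
        p_knn[label] += w                             # aggregate neighbor weight per relation label
    return p_knn

def interpolate(p_model, p_knn, lam=0.5):
    """Linearly interpolate the PLM output with the nearest-neighbor distribution."""
    return lam * p_knn + (1.0 - lam) * p_model

# Toy usage: 100 memorized instances, 5 relation labels, one query representation.
rng = np.random.default_rng(0)
keys = rng.normal(size=(100, 8))                      # stand-in prompt-based representations
labels = rng.integers(0, 5, size=100)                 # stand-in relation labels
query = rng.normal(size=8)
p_model = np.full(5, 0.2)                             # stand-in PLM output distribution
print(interpolate(p_model, knn_distribution(query, keys, labels, num_labels=5)))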
Pages: 2443-2448
Number of pages: 6
Related papers
21 records in total
  • [1] Retrieval-Enhanced Event Temporal Relation Extraction by Prompt Tuning
    Luo, Rong
    Hu, Po
    WEB AND BIG DATA, PT IV, APWEB-WAIM 2023, 2024, 14334 : 16 - 30
  • [2] Prompt Tuning in Biomedical Relation Extraction
    He, Jianping
    Li, Fang
    Li, Jianfu
    Hu, Xinyue
    Nian, Yi
    Xiang, Yang
    Wang, Jingqi
    Wei, Qiang
    Li, Yiming
    Xu, Hua
    Tao, Cui
    JOURNAL OF HEALTHCARE INFORMATICS RESEARCH, 2024, 8 (02) : 206 - 224
  • [3] Contrastive learning-based few-shot relation extraction with open-book datastore
    Gong, Wanyuan
    Zhou, Qifeng
    APPLIED SOFT COMPUTING, 2024, 167
  • [4] PTCAS: Prompt tuning with continuous answer search for relation extraction
    Chen, Yang
    Shi, Bowen
    Xu, Ke
    INFORMATION SCIENCES, 2024, 659
  • [5] Judicial Text Relation Extraction Based on Prompt Tuning
    Chen, Xue
    Li, Yi
    Fan, Shuhuan
    Hou, Mengshu
    2024 2ND ASIA CONFERENCE ON COMPUTER VISION, IMAGE PROCESSING AND PATTERN RECOGNITION, CVIPPR 2024, 2024,
  • [6] Context-aware generative prompt tuning for relation extraction
    Liu, Xiaoyong
    Wen, Handong
    Xu, Chunlin
    Du, Zhiguo
    Li, Huihui
    Hu, Miao
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (12) : 5495 - 5508
  • [7] APRE: Annotation-Aware Prompt-Tuning for Relation Extraction
    Wei, Chao
    Chen, Yanping
    Wang, Kai
    Qin, Yongbin
    Huang, Ruizhang
    Zheng, Qinghua
    NEURAL PROCESSING LETTERS, 2024, 56 (02)
  • [8] A prompt tuning method based on relation graphs for few-shot relation extraction
    Zhang, Zirui
    Yang, Yiyu
    Chen, Benhui
    NEURAL NETWORKS, 2025, 185