Assisted Process Knowledge Graph Building Using Pre-trained Language Models

Cited by: 1
Authors
Bellan, Patrizio [1 ,2 ]
Dragoni, Mauro [1 ]
Ghidini, Chiara [1 ]
Affiliations
[1] Fdn Bruno Kessler, Trento, Italy
[2] Free Univ Bozen Bolzano, Bolzano, Italy
Source
AIXIA 2022 - ADVANCES IN ARTIFICIAL INTELLIGENCE, 2023, Vol. 13796
Keywords
Process extraction from text; In-context learning; Knowledge graph; Pre-trained language model; Business process management;
DOI
10.1007/978-3-031-27181-6_5
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The automated construction of knowledge graphs from procedural documents is a challenging research area. The lack of annotated data, as well as of raw text repositories describing real-world procedural documents, makes it extremely difficult to adopt deep learning approaches. Pre-trained language models have shown promising results for extracting knowledge directly from the models themselves. Although several works have explored this strategy to build knowledge graphs, the viability of constructing knowledge bases through prompt-based learning with such language models has not yet been investigated in depth. In this work, we present a prompt-based in-context learning strategy that extracts, from natural language process descriptions, conceptual information that can be converted into the equivalent knowledge graphs. The strategy is carried out in a multi-turn dialog fashion. We validate the accuracy of the proposed approach from both quantitative and qualitative perspectives. The results highlight the feasibility of the proposed approach in low-resource scenarios.
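To make the multi-turn, prompt-based extraction idea concrete, the sketch below shows one possible shape of such a pipeline in Python. It is a minimal illustration under stated assumptions, not the authors' implementation: the `ask_llm` helper, the few-shot `EXAMPLES`, and the per-aspect `QUESTIONS` are hypothetical placeholders, and the aspects queried (activities, actors, ordering) only approximate the kind of conceptual information the abstract refers to.

```python
"""Illustrative multi-turn, prompt-based in-context extraction sketch.

All names (ask_llm, EXAMPLES, QUESTIONS) are hypothetical; the prompts,
schema, and model used in the paper differ.
"""
from typing import Callable, Dict, List, Tuple

# In-context examples: (process description, expected answer) pairs shown
# to the model before the target description (few-shot prompting).
EXAMPLES: Dict[str, List[Tuple[str, str]]] = {
    "activities": [
        ("The clerk checks the invoice and then archives it.",
         "check invoice; archive invoice"),
    ],
    "actors": [
        ("The clerk checks the invoice and then archives it.",
         "clerk"),
    ],
    "follows": [
        ("The clerk checks the invoice and then archives it.",
         "(check invoice, archive invoice)"),
    ],
}

QUESTIONS = {
    "activities": "List the activities mentioned in the process description.",
    "actors": "List the actors who perform the activities.",
    "follows": "List the pairs of activities where the first directly precedes the second.",
}


def build_prompt(aspect: str, description: str, history: List[str]) -> str:
    """Compose one dialog turn: few-shot examples + earlier answers + question."""
    shots = "\n".join(
        f"Description: {d}\nQuestion: {QUESTIONS[aspect]}\nAnswer: {a}"
        for d, a in EXAMPLES[aspect]
    )
    context = "\n".join(history)  # answers from previous turns condition this one
    return (
        f"{shots}\n\n{context}\n\n"
        f"Description: {description}\nQuestion: {QUESTIONS[aspect]}\nAnswer:"
    )


def extract_graph(description: str, ask_llm: Callable[[str], str]) -> Dict[str, str]:
    """Run the multi-turn dialog and collect one raw answer per aspect.

    The answers can then be parsed into nodes (activities, actors) and
    edges (e.g. 'follows', 'performs') of a process knowledge graph.
    """
    history: List[str] = []
    answers: Dict[str, str] = {}
    for aspect in ("activities", "actors", "follows"):
        answer = ask_llm(build_prompt(aspect, description, history))
        answers[aspect] = answer
        history.append(f"{QUESTIONS[aspect]} -> {answer}")
    return answers


if __name__ == "__main__":
    # Mock model so the sketch runs without an LLM backend; replace with a
    # real completion call from whichever client library is available.
    mock = lambda prompt: "(stubbed answer)"
    print(extract_graph("A customer submits an order, then the system validates it.", mock))
```

In a real setting, each raw answer would be parsed into nodes and edges and serialized (for instance as RDF triples) to obtain the final knowledge graph.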
Pages: 60-74
Number of pages: 15