Assisted Process Knowledge Graph Building Using Pre-trained Language Models

Cited by: 1
Authors
Bellan, Patrizio [1 ,2 ]
Dragoni, Mauro [1 ]
Ghidini, Chiara [1 ]
Affiliations
[1] Fdn Bruno Kessler, Trento, Italy
[2] Free Univ Bozen Bolzano, Bolzano, Italy
Source
AIXIA 2022 - ADVANCES IN ARTIFICIAL INTELLIGENCE | 2023 / Volume 13796
Keywords
Process extraction from text; In-context learning; Knowledge graph; Pre-trained language model; Business process management;
DOI
10.1007/978-3-031-27181-6_5
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The automated construction of knowledge graphs from procedural documents is a challenging research area. The lack of annotated data, as well as of raw-text repositories describing real-world procedural documents, makes it extremely difficult to adopt deep learning approaches. Pre-trained language models have shown promising results on tasks that extract knowledge from the models themselves. Although several works have explored this strategy to build knowledge graphs, the viability of constructing knowledge bases from such language models through a prompt-based learning strategy has not yet been investigated in depth. In this work, we present a prompt-based in-context learning strategy to extract, from natural language process descriptions, conceptual information that can be converted into the equivalent knowledge graphs. The strategy is carried out in a multi-turn dialog fashion. We validate the accuracy of the proposed approach from both quantitative and qualitative perspectives. The results highlight the feasibility of the proposed approach in low-resource scenarios.
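The multi-turn dialog described in the abstract can be pictured as a loop of extraction questions posed to a pre-trained language model, whose answers are then assembled into knowledge-graph triples. The following Python sketch is only an illustration of that idea: query_language_model, the prompt templates, the question order, and the predicate names are hypothetical placeholders, not the prompts or vocabulary used in the paper.

# Minimal sketch of a multi-turn, prompt-based extraction loop. The function
# query_language_model() is a hypothetical wrapper around whatever pre-trained
# language model is available; all prompts and predicates are illustrative.

from typing import List, Tuple


def query_language_model(prompt: str) -> str:
    """Placeholder for a pre-trained language model call.

    Replace with a real completion call (an HTTP API or a local checkpoint);
    here it returns canned answers so the sketch runs end to end.
    """
    if "Who performs" in prompt:
        return "the clerk"
    return "receive the order | check the stock | ship the goods"


def extract_activities(description: str) -> List[str]:
    # Turn 1: ask the model to list the activities mentioned in the text.
    prompt = (
        f"Process description:\n{description}\n\n"
        "List the activities performed in this process, separated by '|':"
    )
    return [a.strip() for a in query_language_model(prompt).split("|") if a.strip()]


def extract_performer(description: str, activity: str) -> str:
    # Follow-up turn: for each extracted activity, ask who performs it.
    prompt = (
        f"Process description:\n{description}\n\n"
        f"Who performs the activity '{activity}'? Answer with a single actor:"
    )
    return query_language_model(prompt).strip()


def build_triples(description: str) -> List[Tuple[str, str, str]]:
    """Assemble the dialog answers into subject-predicate-object triples."""
    triples: List[Tuple[str, str, str]] = []
    for activity in extract_activities(description):
        triples.append(("Process", "hasActivity", activity))
        triples.append((activity, "performedBy", extract_performer(description, activity)))
    return triples


if __name__ == "__main__":
    text = "The clerk receives the order, checks the stock, and ships the goods."
    for triple in build_triples(text):
        print(triple)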
Pages: 60-74
Page count: 15