Fusing graph structural information with pre-trained generative model for knowledge graph-to-text generation

Cited: 3
Authors
Shi, Xiayang [1 ]
Xia, Zhenlin [2 ]
Li, Yinlin [3 ]
Wang, Xuhui [4 ]
Niu, Yufeng [4 ]
Affiliations
[1] Zhengzhou Univ Light Ind, Coll Software Engn, Zhengzhou, Peoples R China
[2] Zhengzhou Univ Light Ind, Coll Math & Informat Sci, Zhengzhou, Peoples R China
[3] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
[4] Inst Stand Measurement ShanXi Prov, Inspect & Testing Ctr ShanXi Prov, Taiyuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
KG-to-text; Pre-trained models; Graph neural networks; Graph structural information;
DOI
10.1007/s10115-024-02235-y
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge graph-to-text generation (KG-to-Text) is the task of generating accurate textual descriptions from a given knowledge graph. Previous efforts have often enhanced pre-trained generative models with additional auxiliary pre-training tasks to supply the missing graph structural information. However, such tasks not only require substantial computational resources but also yield limited improvements. To address this issue, we propose a novel approach that effectively combines the graph structural information of knowledge graphs with pre-trained generative models without altering their fundamental architecture. Our approach feeds the original knowledge graph into a graph convolutional network to obtain graph feature representations enriched with node characteristics. In parallel, the linearized sequence of the knowledge graph is fed into the pre-trained generative model to exploit its inherent semantic richness. Via a multi-head attention mechanism, we then fuse the graph feature representations into the pre-trained generative model to supplement the missing graph structural information. Experimental results on the WebNLG and EventNarrative datasets show that our approach achieves improved performance while reducing computational overhead.
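The pipeline described in the abstract can be sketched in a minimal, self-contained form. This is an illustrative NumPy toy, not the authors' implementation: one graph-convolution layer produces node features, and the pre-trained model's hidden states attend to those features via multi-head cross-attention before a residual fusion. All names, dimensions, and the head-slicing simplification (real models use learned per-head query/key/value projections) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, X, W):
    """One graph-convolution layer: symmetrically normalized adjacency, linear map, ReLU."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def multi_head_cross_attention(H, G, num_heads=4):
    """H: (seq_len, d) hidden states as queries; G: (num_nodes, d) graph features as keys/values."""
    seq_len, d = H.shape
    assert d % num_heads == 0
    dh = d // num_heads
    out = np.zeros_like(H)
    for h in range(num_heads):
        sl = slice(h * dh, (h + 1) * dh)                 # simplification: slice heads, no projections
        Q, K, V = H[:, sl], G[:, sl], G[:, sl]
        scores = Q @ K.T / np.sqrt(dh)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)                # softmax over graph nodes
        out[:, sl] = w @ V
    return out

# Toy knowledge graph: 3 entity nodes in a chain.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 8))      # initial node features
W = rng.normal(size=(8, 8))
G = gcn_layer(A, X, W)           # graph feature representations

H = rng.normal(size=(5, 8))      # stand-in for pre-trained model hidden states
fused = H + multi_head_cross_attention(H, G)   # residual fusion into the generative model
print(fused.shape)
```

The residual form `H + attention(H, G)` lets the fusion inject structural signal without overwriting the pre-trained representations, which is consistent with the paper's goal of leaving the base architecture unchanged.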
Pages: 2619-2640
Page count: 22