CollRec: Pre-Trained Language Models and Knowledge Graphs Collaborate to Enhance Conversational Recommendation System

Cited: 0
Authors
Liu, Shuang [1 ]
Ao, Zhizhuo [1 ]
Chen, Peng [2 ]
Kolmanic, Simon [3 ]
Affiliations
[1] Dalian Minzu Univ, Sch Comp Sci & Engn, Dalian 116600, Peoples R China
[2] Dalian Neusoft Univ Informat, Sch Comp & Software, Dalian 116023, Peoples R China
[3] Univ Maribor, Fac Elect Engn & Comp Sci, Maribor 2000, Slovenia
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Knowledge graphs; Oral communication; Task analysis; Recommender systems; Motion pictures; Costs; Accuracy; Large language models; Conversational recommendation system; knowledge graph; large language model; end-to-end generation; fine-tuning; ReDial; WebNLG 2020 challenge
DOI
10.1109/ACCESS.2024.3434720
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Existing conversational recommender systems (CRS) incorporate external information from knowledge graphs in ways that lack generality. Their recommendation and generation modules are only loosely connected during training and shallowly integrated during inference, with a simple switching or copying mechanism used to merge recommended items into generated responses. These shortcomings significantly degrade recommendation performance. To alleviate them, we propose CollRec, a novel unified framework in which pre-trained language models and knowledge graphs collaborate to enhance conversational recommendation. We fine-tune a pre-trained language model to efficiently extract knowledge graphs from conversational text descriptions, perform entity-based recommendation over the generated graph nodes and edges, and fine-tune a large-scale pre-trained language model to generate fluent and diverse responses. Experimental results on the WebNLG 2020 Challenge, ReDial, and Reddit-Movie datasets show that CollRec significantly outperforms state-of-the-art methods.
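The abstract describes a three-stage pipeline: triple extraction from dialogue, entity-based recommendation over the extracted graph, and response generation. A minimal sketch of that flow follows; all function names and the stubbed logic are hypothetical stand-ins (the paper's actual models are fine-tuned pre-trained language models), intended only to illustrate how the stages compose.

```python
# Hypothetical sketch of the CollRec-style pipeline from the abstract.
# Each stage is stubbed; in the paper, stages 1 and 3 are fine-tuned PLMs.

def extract_knowledge_graph(dialogue: str) -> list[tuple[str, str, str]]:
    """Stage 1: a fine-tuned PLM would emit (head, relation, tail)
    triples from the conversation text. Stubbed with a fixed triple."""
    return [("user", "likes", "The Matrix")]

def recommend(triples: list[tuple[str, str, str]]) -> list[str]:
    """Stage 2: entity-based recommendation over the generated graph
    nodes and edges. Stubbed: take tail entities of 'likes' edges."""
    return [tail for (_, rel, tail) in triples if rel == "likes"]

def generate_response(items: list[str]) -> str:
    """Stage 3: a fine-tuned large PLM would verbalize the items into
    a fluent response. Stubbed with a template."""
    return f"You might enjoy films similar to {', '.join(items)}."

dialogue = "I loved The Matrix, any suggestions?"
print(generate_response(recommend(extract_knowledge_graph(dialogue))))
```

The point of the unified framing is that the recommendation stage consumes the graph the extraction stage produces, rather than the two modules being trained and queried in isolation.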
Pages: 104663-104675
Page count: 13
Related Papers (50 records)
  • [21] Automated LOINC Standardization Using Pre-trained Large Language Models
    Tu, Tao
    Loreaux, Eric
    Chesley, Emma
    Lelkes, Adam D.
    Gamble, Paul
    Bellaiche, Mathias
    Seneviratne, Martin
    Chen, Ming-Jun
    MACHINE LEARNING FOR HEALTH, VOL 193, 2022, 193 : 343 - 355
  • [22] FuseLinker: Leveraging LLM's pre-trained text embeddings and domain knowledge to enhance GNN-based link prediction on biomedical knowledge graphs
    Xiao, Yongkang
    Zhang, Sinian
    Zhou, Huixue
    Li, Mingchen
    Yang, Han
    Zhang, Rui
    JOURNAL OF BIOMEDICAL INFORMATICS, 2024, 158
  • [23] Enhancing radiology report generation through pre-trained language models
    Leonardi, Giorgio
    Portinale, Luigi
    Santomauro, Andrea
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2024,
  • [24] SMT Solver Validation Empowered by Large Pre-trained Language Models
    Sun, Maolin
    Yang, Yibiao
    Wang, Yang
    Wen, Ming
    Jia, Haoxiang
    Zhou, Yuming
    2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE, 2023, : 1288 - 1300
  • [25] Adopting Pre-trained Large Language Models for Regional Language Tasks: A Case Study
    Gaikwad, Harsha
    Kiwelekar, Arvind
    Laddha, Manjushree
    Shahare, Shashank
    INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I, 2024, 14531 : 15 - 25
  • [26] NMT Enhancement based on Knowledge Graph Mining with Pre-trained Language Model
    Yang, Hao
    Qin, Ying
    Deng, Yao
    Wang, Minghan
    2020 22ND INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT): DIGITAL SECURITY GLOBAL AGENDA FOR SAFE SOCIETY!, 2020, : 185 - 189
  • [27] Pre-Trained Models for Search and Recommendation: Introduction to the Special Issue-Part 1
    Wang, Wenjie
    Liu, Zheng
    Feng, Fuli
    Dou, Zhicheng
    Ai, Qingyao
    Yang, Grace Hui
    Lian, Defu
    Hou, Lu
    Sun, Aixin
    Zamani, Hamed
    Metzler, Donald
    de Rijke, Maarten
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2025, 43 (02)
  • [28] Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
    Sujatha, R.
    Nimala, K.
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 78 (02): : 1669 - 1686
  • [29] Recommending metamodel concepts during modeling activities with pre-trained language models
    Martin Weyssow
    Houari Sahraoui
    Eugene Syriani
    Software and Systems Modeling, 2022, 21 : 1071 - 1089
  • [30] A Comparative Study of Using Pre-trained Language Models for Toxic Comment Classification
    Zhao, Zhixue
    Zhang, Ziqi
    Hopfgartner, Frank
    WEB CONFERENCE 2021: COMPANION OF THE WORLD WIDE WEB CONFERENCE (WWW 2021), 2021, : 500 - 507