Graph path fusion and reinforcement reasoning for recommendation in MOOCs

Cited by: 11
Authors
Liang, Zibo [1 ]
Mu, Lan [1 ]
Chen, Jie [1 ]
Xie, Qing [1 ,2 ]
Affiliations
[1] Wuhan Univ Technol, Sch Comp & Artificial Intelligence, Luoshi Rd, Wuhan 430070, Hubei, Peoples R China
[2] Chongqing Res Inst WHUT, Chongqing, Peoples R China
Keywords
Recommender systems; Reinforcement learning; Knowledge graph; Graph path fusion
DOI
10.1007/s10639-022-11178-2
Chinese Library Classification (CLC)
G40 [Education]
Discipline codes
040101; 120403
Abstract
In recent years, online learning has been embraced by more and more people, and a large volume of online courses and related learning resources (MOOCs) has followed. To sustain students' interest in learning, many researchers have built recommendation systems for MOOCs. However, students need several different kinds of learning resources, such as courses, videos, and concepts, and they find it difficult to locate suitable resources on their own. We therefore propose a resource recommendation method called Multi-path Embedding and User-centric Reasoning (MEUR), which embeds multiple paths and searches the knowledge graph with the user at the center, combining the strengths of graph convolutional networks and reinforcement learning and ultimately exposing the reasoning path in the knowledge graph. First, we formulate the problem: recommending multiple types of learning resources to a user at the same time while showing the corresponding reasoning path as the explanation for the recommendation. Second, we propose an embedding model that integrates multi-path information with a graph convolutional network, embedding the entities of the knowledge graph into vectors. Third, we apply reinforcement learning combined with user-centric reasoning to generate recommendations for users. Finally, we evaluate our model experimentally on datasets from a real MOOC platform and compare it with other methods.
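The abstract describes two technical ingredients: a graph convolutional network (GCN) that embeds knowledge-graph entities as vectors, and a user-centric reasoning walk that yields both a recommendation and its explanation path. The sketch below only illustrates that general idea and is not the authors' MEUR implementation: the toy MOOC graph, the single untrained GCN layer, and the greedy similarity-based walk (standing in for the learned reinforcement-learning policy) are all assumptions introduced for illustration.

# Minimal sketch (assumptions noted above, not the MEUR code): one GCN-style
# propagation over a toy MOOC knowledge graph, then a user-centric greedy walk
# that returns a recommended entity together with its reasoning path.
import numpy as np

# Toy knowledge graph with several resource types: user, courses, video, concept.
entities = ["user_a", "course_ml", "video_intro", "concept_gd", "course_dl"]
idx = {e: i for i, e in enumerate(entities)}
edges = [
    ("user_a", "course_ml"),        # user enrolled in a course
    ("course_ml", "video_intro"),   # course contains a video
    ("video_intro", "concept_gd"),  # video covers a concept
    ("concept_gd", "course_dl"),    # concept is taught in another course
]

n = len(entities)
A = np.eye(n)                       # adjacency matrix with self-loops
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt  # symmetric normalisation, as in GCN

# GCN-style embedding step: H' = ReLU(A_hat @ H @ W), with W left untrained here.
rng = np.random.default_rng(0)
H = rng.normal(size=(n, 8))          # random initial entity features
W = rng.normal(size=(8, 8))          # weight matrix (would be learned in training)
H = np.maximum(A_hat @ H @ W, 0.0)

def recommend(user, hops=3):
    """Walk outward from the user node; at each hop move to the unvisited
    neighbour whose embedding is most similar to the user's. The visited nodes
    form the explanation path and the last node is the recommendation."""
    path, current = [user], idx[user]
    user_vec = H[idx[user]]
    for _ in range(hops):
        neighbours = [j for j in range(n)
                      if A[current, j] > 0 and j != current
                      and entities[j] not in path]
        if not neighbours:
            break
        current = max(neighbours, key=lambda j: H[j] @ user_vec)
        path.append(entities[current])
    return path[-1], path

item, path = recommend("user_a")
print("recommended:", item, "| reasoning path:", " -> ".join(path))

In a trained system, W and the walk policy would be learned from user interaction data; the point of the sketch is only to show how a path starting at the user node can double as the stated reason for the recommended item.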
Pages: 525-545
Page count: 21