Self-Supervised pre-training model based on Multi-view for MOOC Recommendation

Cited: 0
|
Authors
Tian, Runyu [1 ]
Cai, Juanjuan [2 ]
Li, Chuanzhen [1 ,3 ]
Wang, Jingling [1 ]
Affiliations
[1] Commun Univ China, Sch Informat & Commun Engn, Beijing 100024, Peoples R China
[2] Commun Univ China, State Key Lab Media Audio & Video, Minist Educ, Beijing, Peoples R China
[3] Commun Univ China, State Key Lab Media Convergence & Commun, Beijing, Peoples R China
Funding
National Key R&D Program of China;
Keywords
MOOC recommendation; Contrastive learning; Prerequisite dependency; Multi-view correlation;
DOI
10.1016/j.eswa.2024.124143
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recommendation strategies based on knowledge concepts are increasingly applied to personalized course recommendation to promote model learning from implicit feedback data. However, existing approaches typically overlook the prerequisite dependencies between concepts, which are a significant basis for connecting courses, and they fail to effectively model the relationship between course items and course attributes. This leads to inadequate capture of associations in the data and ineffective integration of implicit semantics into sequence representations. In this paper, we propose a Self-Supervised pre-training model based on Multi-view for MOOC Recommendation (SSM4MR) that exploits non-explicit but inherently correlated features to guide the representation learning of users' course preferences. In particular, to keep the model from relying solely on the course prediction loss and overemphasizing final performance, we treat knowledge concepts, course items, and learning paths as different views, and sufficiently model the intrinsic relevance among these views by formulating multiple specific self-supervised objectives. In this way, our model enhances the sequence representation and ultimately achieves high-performance course recommendation. Extensive experiments and analyses provide persuasive support for the superiority of the model design and the recommendation results.
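The multi-view self-supervised objective described in the abstract can be sketched as an InfoNCE-style contrastive loss between paired view embeddings. This is an illustrative sketch only, not the authors' implementation: the function name, batch layout, and temperature value are assumptions, and in SSM4MR such terms would be summed over view pairs (concept, item, learning path) alongside the course prediction loss.

```python
# Illustrative sketch (NOT the paper's code): an InfoNCE-style contrastive
# objective between two views of the same user sequences. Row i of each
# matrix is the embedding of the same sequence under a different view;
# aligned rows are positives, all other rows in the batch are negatives.
import numpy as np


def info_nce(view_a: np.ndarray, view_b: np.ndarray, tau: float = 0.1) -> float:
    """Contrastive loss over a batch of paired view embeddings (batch, dim)."""
    # L2-normalize so dot products are cosine similarities.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / tau                       # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal; minimize their negative log-likelihood.
    return float(-np.mean(np.diag(log_probs)))
```

Under this formulation, identical views yield a near-zero loss while unrelated views yield a loss near log(batch size); a total pre-training objective would combine several such view-pair terms with the recommendation loss.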
Pages: 12
Related Papers
50 records
  • [1] Object Adaptive Self-Supervised Dense Visual Pre-Training
    Zhang, Yu
    Zhang, Tao
    Zhu, Hongyuan
    Chen, Zihan
    Mi, Siya
    Peng, Xi
    Geng, Xin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 2228 - 2240
  • [2] Complementary Mask Self-Supervised Pre-training Based on Teacher-Student Network
    Ye, Shaoxiong
    Huang, Jing
    Zhu, Lifu
    2023 3RD ASIA-PACIFIC CONFERENCE ON COMMUNICATIONS TECHNOLOGY AND COMPUTER SCIENCE, ACCTCS, 2023, : 199 - 206
  • [3] A Multi-view Molecular Pre-training with Generative Contrastive Learning
    Liu, Yunwu
    Zhang, Ruisheng
    Yuan, Yongna
    Ma, Jun
    Li, Tongfeng
    Yu, Zhixuan
    INTERDISCIPLINARY SCIENCES-COMPUTATIONAL LIFE SCIENCES, 2024, 16 (03) : 741 - 754
  • [4] Incomplete multi-view clustering based on information fusion with self-supervised learning
    Cai, Yilong
    Shu, Qianyu
    Zhou, Zhengchun
    Meng, Hua
    INFORMATION FUSION, 2025, 117
  • [5] PreTraM: Self-supervised Pre-training via Connecting Trajectory and Map
    Xu, Chenfeng
    Li, Tian
    Tang, Chen
    Sun, Lingfeng
    Keutzer, Kurt
    Tomizuka, Masayoshi
    Fathi, Alireza
    Zhan, Wei
    COMPUTER VISION, ECCV 2022, PT XXXIX, 2022, 13699 : 34 - 50
  • [6] Self-supervised Transformer-Based Pre-training Method with General Plant Infection Dataset
    Wang, Zhengle
    Wang, Ruifeng
    Wang, Minjuan
    Lai, Tianyun
    Zhang, Man
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT II, 2025, 15032 : 189 - 202
  • [7] Generation-based Multi-view Contrast for Self-supervised Graph Representation Learning
    Han, Yuehui
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (05)
  • [8] Semi-Supervised and Self-Supervised Classification with Multi-View Graph Neural Networks
    Yuan, Jinliang
    Yu, Hualei
    Cao, Meng
    Xu, Ming
    Xie, Junyuan
    Wang, Chongjun
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 2466 - 2476
  • [9] Sleep Stage Classification Via Multi-View Based Self-Supervised Contrastive Learning of EEG
    Zhao, Chen
    Wu, Wei
    Zhang, Haoyi
    Zhang, Ruiyan
    Zheng, Xinyue
    Kong, Xiangzeng
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (12) : 7068 - 7077
  • [10] Graph Contrastive Multi-view Learning: A Pre-training Framework for Graph Classification
    Adjeisah, Michael
    Zhu, Xinzhong
    Xu, Huiying
    Ayall, Tewodros Alemu
    KNOWLEDGE-BASED SYSTEMS, 2024, 299