Self-Supervised pre-training model based on Multi-view for MOOC Recommendation

Cited: 0
Authors
Tian, Runyu [1 ]
Cai, Juanjuan [2 ]
Li, Chuanzhen [1 ,3 ]
Wang, Jingling [1 ]
Affiliations
[1] Commun Univ China, Sch Informat & Commun Engn, Beijing 100024, Peoples R China
[2] Commun Univ China, State Key Lab Media Audio & Video, Minist Educ, Beijing, Peoples R China
[3] Commun Univ China, State Key Lab Media Convergence & Commun, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
MOOC recommendation; Contrastive learning; Prerequisite dependency; Multi-view correlation;
DOI
10.1016/j.eswa.2024.124143
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recommendation strategies based on knowledge concepts are gradually being applied to personalized course recommendation to promote model learning from implicit feedback data. However, existing approaches typically overlook the prerequisite dependencies between concepts, which are an important basis for connecting courses, and they fail to effectively model the relationship between course items and their attributes, leading to inadequate capture of associations in the data and ineffective integration of implicit semantics into sequence representations. In this paper, we propose a Self-Supervised pre-training model based on Multi-view for MOOC Recommendation (SSM4MR) that exploits non-explicit but inherently correlated features to guide the representation learning of users' course preferences. In particular, to keep the model from relying solely on the course prediction loss and overemphasizing final performance, we treat knowledge concepts, course items, and learning paths as different views, and then model the intrinsic relevance among these views by formulating multiple view-specific self-supervised objectives. As such, our model enhances the sequence representation and ultimately achieves high-performance course recommendation. Extensive experiments and analyses provide persuasive support for the superiority of the model design and the recommendation results.
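The record does not spell out the paper's self-supervised objectives, but the contrastive flavor of aligning multiple views of the same user sequence can be illustrated with a standard symmetric InfoNCE loss. Everything below is a minimal sketch under assumptions: the function name `info_nce` and the arrays `concept_view`, `item_view`, and `path_view` are hypothetical stand-ins for the knowledge-concept, course-item, and learning-path view embeddings, not the SSM4MR objectives themselves.

```python
import numpy as np

def info_nce(view_a, view_b, temperature=0.1):
    """Symmetric InfoNCE loss between two views of the same batch.

    view_a, view_b: (batch, dim) embeddings; row i of each view comes
    from the same user sequence, so (a_i, b_i) is a positive pair and
    the other rows in the batch serve as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (matching pairs) as the target class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_ab = -np.mean(np.diag(log_probs))
    # Symmetrize: also align view_b against view_a
    log_probs_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_ba = -np.mean(np.diag(log_probs_t))
    return (loss_ab + loss_ba) / 2

# Hypothetical embeddings for three views of a batch of 8 user sequences
rng = np.random.default_rng(0)
concept_view = rng.normal(size=(8, 16))                        # knowledge-concept view
item_view = concept_view + 0.1 * rng.normal(size=(8, 16))      # correlated course-item view
path_view = rng.normal(size=(8, 16))                           # unrelated view, for contrast

# Correlated views should yield a lower contrastive loss than unrelated ones
print(info_nce(concept_view, item_view), info_nce(concept_view, path_view))
```

In a multi-view setup like the one the abstract describes, one such pairwise loss per view pair would be summed with the course prediction loss, so that no single objective dominates training.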
Pages: 12
Related Papers
50 records
  • [21] Cross-Modal self-supervised vision language pre-training with multiple objectives for medical visual question answering
    Liu, Gang
    He, Jinlong
    Li, Pengfei
    Zhao, Zixu
    Zhong, Shenjun
    JOURNAL OF BIOMEDICAL INFORMATICS, 2024, 160
  • [22] Self-supervised pre-training for large-scale crop mapping using Sentinel-2 time series
    Xu, Yijia
    Ma, Yuchi
    Zhang, Zhou
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2024, 207 : 312 - 325
  • [23] Learning Consistent Semantic Representation for Chest X-ray via Anatomical Localization in Self-Supervised Pre-Training
    Chu, Surong
    Ren, Xueting
    Ji, Guohua
    Zhao, Juanjuan
    Shi, Jinwei
    Wei, Yangyang
    Pei, Bo
    Qiang, Yan
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2025, 29 (03) : 2100 - 2112
  • [24] Self-supervised scientific document recommendation based on contrastive learning
    Tan, Shicheng
    Zhang, Tao
    Zhao, Shu
    Zhang, Yanping
    SCIENTOMETRICS, 2023, 128 (09) : 5027 - 5049
  • [26] Sociological-Theory-Based Multitopic Self-Supervised Recommendation
    Zhao, Qin
    Wu, Peihan
    Liu, Gang
    An, Dongdong
    Lian, Jie
    Zhou, MengChu
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024
  • [27] Research on personalised knowledge graph recommendation algorithm based on self-supervised learning
    Shen, Bing
    Zhang, Yulai
    DISCOVER APPLIED SCIENCES, 2024, 6 (08)
  • [28] UserBERT: Pre-training User Model with Contrastive Self-supervision
    Wu, Chuhan
    Wu, Fangzhao
    Qi, Tao
    Huang, Yongfeng
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 2087 - 2092
  • [29] GenView: Enhancing View Quality with Pretrained Generative Model for Self-Supervised Learning
    Li, Xiaojie
    Yang, Yibo
    Li, Xiangtai
    Wu, Jianlong
    Yu, Yue
    Ghanem, Bernard
    Zhang, Min
    COMPUTER VISION - ECCV 2024, PT XXXVII, 2025, 15095 : 306 - 325
  • [30] A contrastive learning based unsupervised multi-view stereo with multi-stage self-training strategy
    Wang, Zihang
    Luo, Haonan
    Wang, Xiang
    Zheng, Jin
    Ning, Xin
    Bai, Xiao
    DISPLAYS, 2024, 83