Self-Supervised pre-training model based on Multi-view for MOOC Recommendation

Cited: 0
|
Authors
Tian, Runyu [1 ]
Cai, Juanjuan [2 ]
Li, Chuanzhen [1 ,3 ]
Wang, Jingling [1 ]
Affiliations
[1] Commun Univ China, Sch Informat & Commun Engn, Beijing 100024, Peoples R China
[2] Commun Univ China, State Key Lab Media Audio & Video, Minist Educ, Beijing, Peoples R China
[3] Commun Univ China, State Key Lab Media Convergence & Commun, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
MOOC recommendation; Contrastive learning; Prerequisite dependency; Multi-view correlation;
DOI
10.1016/j.eswa.2024.124143
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recommendation strategies based on knowledge concepts are increasingly applied to personalized course recommendation to help models learn from implicit feedback data. However, existing approaches typically overlook the prerequisite dependencies between concepts, which are a significant basis for connecting courses, and they fail to effectively model the relationship between course items and course attributes. This leads to inadequate capture of the associations in the data and ineffective integration of implicit semantics into sequence representations. In this paper, we propose a Self-Supervised pre-training model based on Multi-view for MOOC Recommendation (SSM4MR) that exploits non-explicit but inherently correlated features to guide the representation learning of users' course preferences. In particular, to keep the model from relying solely on the course prediction loss and overemphasizing final performance, we treat knowledge concepts, course items, and learning paths as different views, and we sufficiently model the intrinsic relevance among these views by formulating multiple view-specific self-supervised objectives. In this way, our model enhances the sequence representation and ultimately achieves high-performance course recommendation. Extensive experiments and analyses provide persuasive support for the superiority of the model design and the recommendation results.
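The abstract does not spell out the exact form of the self-supervised objectives, so the following is a minimal sketch of how pairwise contrastive (InfoNCE) losses over the three views named above (knowledge concepts, course items, learning paths) could be combined during pre-training. The `info_nce` helper, the tensor names, and the embedding dimensions are hypothetical illustrations under that assumption, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(view_a: torch.Tensor, view_b: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: row-aligned embeddings from two views are positive pairs,
    all other rows in the batch serve as negatives."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                      # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=a.device)    # diagonal = positives
    return F.cross_entropy(logits, targets)

# Hypothetical per-user embeddings produced by three view encoders (batch=32, dim=64).
concept_emb = torch.randn(32, 64)   # knowledge-concept view
item_emb    = torch.randn(32, 64)   # course-item view
path_emb    = torch.randn(32, 64)   # learning-path view

# Pairwise view-alignment objectives; in pre-training such a term would be
# added to the course prediction loss rather than replace it.
ssl_loss = (info_nce(concept_emb, item_emb)
            + info_nce(item_emb, path_emb)
            + info_nce(concept_emb, path_emb))
```

The symmetric pairwise sum is one common way to couple more than two views; a shared projection head or weighted combination of the three terms would be an equally plausible design.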
Pages: 12
Related Papers
50 records in total
  • [31] SPIQ: A Self-Supervised Pre-Trained Model for Image Quality Assessment
    Chen, Pengfei
    Li, Leida
    Wu, Qingbo
    Wu, Jinjian
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 513 - 517
  • [32] Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance
    Wolf, Daniel
    Payer, Tristan
    Lisson, Catharina Silvia
    Lisson, Christoph Gerhard
    Beer, Meinrad
    Götz, Michael
    Ropinski, Timo
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 183
  • [33] Multi-view Pre-trained Model for Code Vulnerability Identification
    Jiang, Xuxiang
    Xiao, Yinhao
    Wang, Jun
    Zhang, Wei
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, PT III, 2022, 13473 : 127 - 135
  • [34] Multi-View Contrastive Fusion POI Recommendation Based on Hypergraph Neural Network
    Hu, Luyao
    Han, Guangpu
    Liu, Shichang
    Ren, Yuqing
    Wang, Xu
    Liu, Ya
    Wen, Junhao
    Yang, Zhengyi
    MATHEMATICS, 2025, 13 (06)
  • [35] Long-tailed visual classification based on supervised contrastive learning with multi-view fusion
    Zeng, Liang
    Feng, Zheng
    Chen, Jia
    Wang, Shanshan
    KNOWLEDGE-BASED SYSTEMS, 2024, 301
  • [36] Self-supervised Image-based 3D Model Retrieval
    Song, Dan
    Zhang, Chu-Meng
    Zhao, Xiao-Qian
    Wang, Teng
    Nie, Wei-Zhi
    Li, Xuan-Ya
    Liu, An-An
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (02)
  • [37] CETP: A novel semi-supervised framework based on contrastive pre-training for imbalanced encrypted traffic classification
    Lin, Xinjie
    He, Longtao
    Gou, Gaopeng
    Yu, Jing
    Guan, Zhong
    Li, Xiang
    Guo, Juncheng
    Xiong, Gang
    COMPUTERS & SECURITY, 2024, 143
  • [38] Focalized contrastive view-invariant learning for self-supervised skeleton-based action recognition
    Men, Qianhui
    Ho, Edmond S. L.
    Shum, Hubert P. H.
    Leung, Howard
    NEUROCOMPUTING, 2023, 537 : 198 - 209
  • [39] MVC-HGAT: multi-view contrastive hypergraph attention network for session-based recommendation
    Yang, Fan
    Peng, Dunlu
    APPLIED INTELLIGENCE, 2025, 55 (01)
  • [40] CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
    Cho, Kyungjin
    Kim, Ki Duk
    Nam, Yujin
    Jeong, Jiheon
    Kim, Jeeyoung
    Choi, Changyong
    Lee, Soyoung
    Lee, Jun Soo
    Woo, Seoyeon
    Hong, Gil-Sun
    Seo, Joon Beom
    Kim, Namkug
    JOURNAL OF DIGITAL IMAGING, 2023, 36 : 902 - 910