Self-Supervised pre-training model based on Multi-view for MOOC Recommendation

Cited: 0
|
Authors
Tian, Runyu [1 ]
Cai, Juanjuan [2 ]
Li, Chuanzhen [1 ,3 ]
Wang, Jingling [1 ]
Affiliations
[1] Commun Univ China, Sch Informat & Commun Engn, Beijing 100024, Peoples R China
[2] Commun Univ China, State Key Lab Media Audio & Video, Minist Educ, Beijing, Peoples R China
[3] Commun Univ China, State Key Lab Media Convergence & Commun, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
MOOC recommendation; Contrastive learning; Prerequisite dependency; Multi-view correlation;
DOI
10.1016/j.eswa.2024.124143
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recommendation strategies based on knowledge concepts are gradually being applied to personalized course recommendation to promote model learning from implicit feedback data. However, existing approaches typically overlook the prerequisite dependencies between concepts, which are a significant basis for connecting courses, and they fail to effectively model the relationship between course items and course attributes, leading to inadequate capture of associations in the data and ineffective integration of implicit semantics into sequence representations. In this paper, we propose a Self-Supervised pre-training model based on Multi-view for MOOC Recommendation (SSM4MR) that exploits non-explicit but inherently correlated features to guide the representation learning of users' course preferences. In particular, to keep the model from relying solely on the course-prediction loss and overemphasising final performance, we treat knowledge concepts, course items, and learning paths as different views, then sufficiently model the intrinsic relevance among these views by formulating multiple view-specific self-supervised objectives. As such, our model enhances the sequence representation and ultimately achieves high-performance course recommendation. Extensive experiments and analyses provide persuasive support for the superiority of the model design and the recommendation results.
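The abstract describes combining a course-prediction loss with contrastive self-supervised objectives over several views (knowledge concepts, course items, learning paths). The paper's exact objectives are not given here, so the following is only a minimal sketch of how such a multi-view pre-training loss is commonly assembled, using a standard in-batch InfoNCE term between view pairs; the function names, the `temperature` and `alpha` weights, and the NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce(view_a: np.ndarray, view_b: np.ndarray, temperature: float = 0.1) -> float:
    """InfoNCE loss between two views of a batch of user sequences.

    Row i of view_a and row i of view_b are assumed to embed the same
    user sequence under two different views (e.g. concept view vs.
    course-item view); every other row in the batch acts as a negative.
    """
    # L2-normalise rows so dot products become cosine similarities
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positive pairs sit on the diagonal
    return float(-np.mean(np.diag(log_prob)))

def multi_view_pretraining_loss(views: list, rec_loss: float, alpha: float = 0.5) -> float:
    """Total objective: course-prediction loss plus pairwise contrastive
    terms over all view pairs (a common multi-view SSL pattern)."""
    contrastive = sum(info_nce(views[i], views[j])
                      for i in range(len(views))
                      for j in range(i + 1, len(views)))
    return rec_loss + alpha * contrastive
```

The key property of this pattern, as the abstract suggests, is that the model is not optimised on the recommendation loss alone: the auxiliary contrastive terms pull representations of the same sequence under different views together, injecting the cross-view correlations into the shared sequence encoder.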
Pages: 12
Related papers
50 records
  • [41] CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
    Cho, Kyungjin
    Kim, Ki Duk
    Nam, Yujin
    Jeong, Jiheon
    Kim, Jeeyoung
    Choi, Changyong
    Lee, Soyoung
    Lee, Jun Soo
    Woo, Seoyeon
    Hong, Gil-Sun
    Seo, Joon Beom
    Kim, Namkug
    JOURNAL OF DIGITAL IMAGING, 2023, 36 (03) : 902 - 910
  • [42] Multi-scale motion contrastive learning for self-supervised skeleton-based action recognition
    Wu, Yushan
    Xu, Zengmin
    Yuan, Mengwei
    Tang, Tianchi
    Meng, Ruxing
    Wang, Zhongyuan
    MULTIMEDIA SYSTEMS, 2024, 30 (05)
  • [43] SSMDA: Self-Supervised Cherry Maturity Detection Algorithm Based on Multi-Feature Contrastive Learning
    Gai, Rong-Li
    Wei, Kai
    Wang, Peng-Fei
    AGRICULTURE-BASEL, 2023, 13 (05):
  • [44] SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification
    Mishra, Animesh
    Jha, Ritesh
    Bhattacharjee, Vandana
    IEEE ACCESS, 2023, 11 : 6673 - 6681
  • [45] A multi-view contrastive learning and semi-supervised self-distillation framework for early recurrence prediction in ovarian cancer
    Dong, Chi
    Wu, Yujiao
    Sun, Bo
    Bo, Jiayi
    Huang, Yufei
    Geng, Yikang
    Zhang, Qianhui
    Liu, Ruixiang
    Guo, Wei
    Wang, Xingling
    Jiang, Xiran
    COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2025, 119
  • [46] Contrastive Learning Based Multi-view Feature Fusion Model for Aspect-Based Sentiment Analysis
    Wu, Xing
    Xia, Hongbin
    Liu, Yuan
    Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2024, 37 (10): : 910 - 922
  • [47] One multimodal plugin enhancing all: CLIP-based pre-training framework enhancing multimodal item representations in recommendation systems
    Mo, Minghao
    Lu, Weihai
    Xie, Qixiao
    Xiao, Zikai
    Lv, Xiang
    Yang, Hong
    Zhang, Yanchun
    NEUROCOMPUTING, 2025, 637
  • [48] 6G-Oriented CSI-Based Multi-Modal Pre-Training and Downstream Task Adaptation Paradigm
    Jiao, Tianyu
    Ye, Chenhui
    Huang, Yihang
    Feng, Yijia
    Xiao, Zhuoran
    Xu, Yin
    He, Dazhi
    Guan, Yunfeng
    Yang, Bei
    Chang, Jiang
    Cai, Liyu
    Bi, Qi
    2024 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS, ICC WORKSHOPS 2024, 2024, : 1389 - 1394
  • [49] ContrastLOS: A Graph-Based Deep Learning Model With Contrastive Pre-Training for Improved ICU Length-of-Stay Prediction
    Fan, Guangrui
    Liu, Aixiang
    Zhang, Chao
    IEEE ACCESS, 2025, 13 : 34132 - 34148
  • [50] A Short-Term Wind Power Forecasting Method With Self-Supervised Contrastive Learning-Based Feature Extraction Model
    Zhu, Nanyang
    Zhang, Kaifeng
    Wang, Ying
    Zheng, Huiping
    Pan, Yanxia
    Cheng, Xueting
    9TH INTERNATIONAL YOUTH CONFERENCE ON ENERGY, IYCE 2024, 2024,