Self-Supervised pre-training model based on Multi-view for MOOC Recommendation

Cited: 0
Authors
Tian, Runyu [1 ]
Cai, Juanjuan [2 ]
Li, Chuanzhen [1 ,3 ]
Wang, Jingling [1 ]
Affiliations
[1] Commun Univ China, Sch Informat & Commun Engn, Beijing 100024, Peoples R China
[2] Commun Univ China, State Key Lab Media Audio & Video, Minist Educ, Beijing, Peoples R China
[3] Commun Univ China, State Key Lab Media Convergence & Commun, Beijing, Peoples R China
Funding
National Key R&D Program of China;
关键词
MOOC recommendation; Contrastive learning; Prerequisite dependency; Multi-view correlation;
DOI
10.1016/j.eswa.2024.124143
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recommendation strategies based on knowledge concepts are gradually being applied to personalized course recommendation to help models learn from implicit feedback data. However, existing approaches typically overlook the prerequisite dependencies between concepts, which are a significant basis for connecting courses, and they fail to effectively model the relationship between course items and their attributes, leading to inadequate capture of associations in the data and ineffective integration of implicit semantics into sequence representations. In this paper, we propose a Self-Supervised pre-training model based on Multi-view for MOOC Recommendation (SSM4MR) that exploits non-explicit but inherently correlated features to guide the representation learning of users' course preferences. In particular, to keep the model from relying solely on the course-prediction loss and overemphasizing final performance, we treat knowledge concepts, course items, and learning paths as different views, then sufficiently model the intrinsic relevance among these views by formulating multiple specific self-supervised objectives. In this way, our model enhances the sequence representation and ultimately achieves high-performance course recommendation. Extensive experiments and analyses provide persuasive support for the superiority of the model design and the recommendation results.
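The abstract describes aligning representations of the same user sequence across views (knowledge concepts, course items, learning paths) with self-supervised objectives. The paper's exact losses are not given in this record; the sketch below shows one standard choice for such a cross-view objective, an InfoNCE contrastive loss in which row i of each view matrix is treated as a positive pair and all other rows as negatives. The function name and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def multiview_info_nce(view_a: np.ndarray, view_b: np.ndarray,
                       temperature: float = 0.1) -> float:
    """Contrastive (InfoNCE) loss between two views of a batch of sequences.

    view_a, view_b: (batch, dim) embeddings. Row i of each matrix is assumed
    to encode the same user sequence under two different views (e.g. the
    concept view vs. the course-item view), so (i, i) pairs are positives
    and all other pairs in the batch serve as negatives.
    NOTE: illustrative sketch only; the paper's actual objectives may differ.
    """
    # L2-normalize so dot products become cosine similarities
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # row-wise log-softmax; positives lie on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```

With perfectly aligned views the loss approaches zero, while shuffling one view (breaking the positive pairing) drives it up, which is the signal that forces the encoders of different views to agree on the same sequence.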
Pages: 12
Related papers
50 records total
  • [1] Multi-View Collaborative Training and Self-Supervised Learning for Group Recommendation
    Wei, Feng
    Chen, Shuyu
    MATHEMATICS, 2025, 13 (01)
  • [2] Self-Supervised Pre-Training via Multi-View Graph Information Bottleneck for Molecular Property Prediction
    Zang, Xuan
    Zhang, Junjie
    Tang, Buzhou
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (12) : 7659 - 7669
  • [3] Multi-view self-supervised learning on heterogeneous graphs for recommendation
    Zhang, Yunjia
    Zhang, Yihao
    Liao, Weiwen
    Li, Xiaokang
    Wang, Xibin
    APPLIED SOFT COMPUTING, 2025, 174
  • [4] Self-supervised ECG pre-training
    Liu, Han
    Zhao, Zhenbo
    She, Qiang
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2021, 70
  • [5] Thyroid ultrasound diagnosis improvement via multi-view self-supervised learning and two-stage pre-training
    Wang, Jian
    Yang, Xin
    Jia, Xiaohong
    Xue, Wufeng
    Chen, Rusi
    Chen, Yanlin
    Zhu, Xiliang
    Liu, Lian
    Cao, Yan
    Zhou, Jianqiao
    Ni, Dong
    Gu, Ning
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 171
  • [6] FALL DETECTION USING SELF-SUPERVISED PRE-TRAINING MODEL
    Yhdego, Haben
    Audette, Michel
    Paolini, Christopher
    PROCEEDINGS OF THE 2022 ANNUAL MODELING AND SIMULATION CONFERENCE (ANNSIM'22), 2022, : 361 - 371
  • [7] Self-supervised Pre-training of Text Recognizers
    Kiss, Martin
    Hradis, Michal
    DOCUMENT ANALYSIS AND RECOGNITION-ICDAR 2024, PT IV, 2024, 14807 : 218 - 235
  • [8] Self-supervised Pre-training for Mirror Detection
    Lin, Jiaying
    Lau, Rynson W. H.
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 12193 - 12202
  • [9] Self-supervised Pre-training for Nuclei Segmentation
    Haq, Mohammad Minhazul
    Huang, Junzhou
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT II, 2022, 13432 : 303 - 313
  • [10] EFFECTIVENESS OF SELF-SUPERVISED PRE-TRAINING FOR ASR
    Baevski, Alexei
    Mohamed, Abdelrahman
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7694 - 7698