Self-Supervised pre-training model based on Multi-view for MOOC Recommendation

Cited: 0
Authors
Tian, Runyu [1 ]
Cai, Juanjuan [2 ]
Li, Chuanzhen [1 ,3 ]
Wang, Jingling [1 ]
Affiliations
[1] Commun Univ China, Sch Informat & Commun Engn, Beijing 100024, Peoples R China
[2] Commun Univ China, State Key Lab Media Audio & Video, Minist Educ, Beijing, Peoples R China
[3] Commun Univ China, State Key Lab Media Convergence & Commun, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
关键词
MOOC recommendation; Contrastive learning; Prerequisite dependency; Multi-view correlation;
DOI
10.1016/j.eswa.2024.124143
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recommendation strategies based on knowledge concepts are increasingly applied to personalized course recommendation to promote model learning from implicit feedback data. However, existing approaches typically overlook the prerequisite dependencies between concepts, which are the essential basis for connecting courses, and they fail to effectively model the relationship between course items and course attributes. This leads to inadequate capture of associations in the data and ineffective integration of implicit semantics into sequence representations. In this paper, we propose a Self-Supervised pre-training model based on Multi-view for MOOC Recommendation (SSM4MR) that exploits non-explicit but inherently correlated features to guide the representation learning of users' course preferences. In particular, to keep the model from relying solely on the course-prediction loss and overemphasising final performance, we treat knowledge concepts, course items and learning paths as different views, and sufficiently model the intrinsic relevance among these views by formulating multiple view-specific self-supervised objectives. As such, our model enhances the sequence representation and ultimately achieves high-performance course recommendation. Extensive experiments and analyses provide persuasive support for the superiority of the model design and the recommendation results.
Pages: 12
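
The abstract describes pre-training with multiple view-specific self-supervised objectives so the model does not depend on the course-prediction loss alone. The record does not include the paper's code, so the following is a minimal illustrative sketch of one common way such an objective can be formulated: each view (knowledge concepts, course items, learning path) is encoded into a vector per user, and pairwise InfoNCE losses align the views using in-batch negatives. All module names, encoder choices, and hyperparameters below are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: SSM4MR's code is not provided in this record, so
# the encoders and the pairwise InfoNCE formulation here are assumptions about
# how a multi-view self-supervised pre-training objective could look.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewContrastive(nn.Module):
    """Encodes three views of a learner's history (knowledge concepts,
    course items, learning path) and ties them together with pairwise
    InfoNCE losses, so training does not rely on course prediction alone."""

    def __init__(self, n_concepts, n_courses, dim=64, temperature=0.2):
        super().__init__()
        self.concept_emb = nn.Embedding(n_concepts, dim)
        self.course_emb = nn.Embedding(n_courses, dim)
        # A GRU summarizes the ordered learning path into one vector.
        self.path_encoder = nn.GRU(dim, dim, batch_first=True)
        self.temperature = temperature

    def encode_views(self, concept_seq, course_seq):
        # View 1: mean-pooled knowledge-concept embeddings.
        v_concept = self.concept_emb(concept_seq).mean(dim=1)
        # View 2: mean-pooled course-item embeddings (order-agnostic).
        v_course = self.course_emb(course_seq).mean(dim=1)
        # View 3: order-aware learning-path representation (last GRU state).
        _, h = self.path_encoder(self.course_emb(course_seq))
        v_path = h.squeeze(0)
        return v_concept, v_course, v_path

    def info_nce(self, a, b):
        # Two views of the same user are positives; the other users in the
        # batch serve as in-batch negatives.
        a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
        logits = a @ b.t() / self.temperature          # (B, B) similarities
        targets = torch.arange(a.size(0), device=a.device)
        return F.cross_entropy(logits, targets)

    def forward(self, concept_seq, course_seq):
        vc, vi, vp = self.encode_views(concept_seq, course_seq)
        # Sum of pairwise view-alignment objectives.
        return (self.info_nce(vc, vi)
                + self.info_nce(vc, vp)
                + self.info_nce(vi, vp))

# Toy usage: a batch of 8 users with 5 concepts / 6 courses per history.
model = MultiViewContrastive(n_concepts=100, n_courses=50)
concepts = torch.randint(0, 100, (8, 5))
courses = torch.randint(0, 50, (8, 6))
loss = model(concepts, courses)   # pre-training loss, later combined
loss.backward()                   # with the course-prediction objective
```

In a setup like this, the pre-training loss would typically be added to (or alternated with) the downstream recommendation loss; the design choice being illustrated is simply that alignment across views injects the implicit concept/path semantics into the shared representations.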
Related Papers
50 records in total
  • [21] Self-Supervised Pre-Training for Intravascular Ultrasound Image Segmentation Method Based on Diffusion Model
    Hao Wenyue
    Cai Huaiyu
    Zuo Tingtao
    Jia Zhongwei
    Wang Yi
    Chen Xiaodong
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (18)
  • [22] Voice Deepfake Detection Using the Self-Supervised Pre-Training Model HuBERT
    Li, Lanting
    Lu, Tianliang
    Ma, Xingbang
    Yuan, Mengjiao
    Wan, Da
    APPLIED SCIENCES-BASEL, 2023, 13 (14)
  • [23] Self-Supervised Pre-Training for Attention-Based Encoder-Decoder ASR Model
    Gao, Changfeng
    Cheng, Gaofeng
    Li, Ta
    Zhang, Pengyuan
    Yan, Yonghong
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 1763 - 1774
  • [24] Object Adaptive Self-Supervised Dense Visual Pre-Training
    Zhang, Yu
    Zhang, Tao
    Zhu, Hongyuan
    Chen, Zihan
    Mi, Siya
    Peng, Xi
    Geng, Xin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 2228 - 2240
  • [25] UniVIP: A Unified Framework for Self-Supervised Visual Pre-training
    Li, Zhaowen
    Zhu, Yousong
    Yang, Fan
    Li, Wei
    Zhao, Chaoyang
    Chen, Yingying
    Chen, Zhiyang
    Xie, Jiahao
    Wu, Liwei
    Zhao, Rui
    Tang, Ming
    Wang, Jinqiao
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 14607 - 14616
  • [26] Representation Recovering for Self-Supervised Pre-training on Medical Images
    Yan, Xiangyi
    Naushad, Junayed
    Sun, Shanlin
    Han, Kun
    Tang, Hao
    Kong, Deying
    Ma, Haoyu
    You, Chenyu
    Xie, Xiaohui
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2684 - 2694
  • [27] Reducing Domain mismatch in Self-supervised speech pre-training
    Baskar, Murali Karthick
    Rosenberg, Andrew
    Ramabhadran, Bhuvana
    Zhang, Yu
    INTERSPEECH 2022, 2022, : 3028 - 3032
  • [28] Dense Contrastive Learning for Self-Supervised Visual Pre-Training
    Wang, Xinlong
    Zhang, Rufeng
    Shen, Chunhua
    Kong, Tao
    Li, Lei
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 3023 - 3032
  • [29] Self-supervised VICReg pre-training for Brugada ECG detection
    Ronan, Robert
    Tarabanis, Constantine
    Chinitz, Larry
    Jankelson, Lior
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [30] A Self-Supervised Pre-Training Method for Chinese Spelling Correction
    Su J.
    Yu S.
    Hong X.
    Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science), 2023, 51 (09): 90 - 98