On the Effectiveness of Self-Training in MOOC Dropout Prediction

Cited by: 18
Authors
Goel, Yamini [1 ]
Goyal, Rinkaj [1 ]
Affiliations
[1] Guru Gobind Singh Indraprastha Univ, Univ Sch Informat Commun & Technol, New Delhi 110078, India
Keywords
Semi-Supervised Learning; Deep Learning; Self-Training; MOOCs; Dropout Prediction; ONLINE; QUALITY;
DOI
10.1515/comp-2020-0153
CLC number
TP301 [Theory and Methods];
Discipline code
081202 ;
Abstract
Massive open online courses (MOOCs) have gained enormous popularity in recent years and have attracted learners worldwide. However, MOOCs face a crucial challenge in their high dropout rate, which varies between 91% and 93%. The interplay between different learning analytics strategies and MOOCs has emerged as a research area aimed at reducing the dropout rate. Most existing studies use click-stream features as engagement patterns to predict at-risk students. This study, however, uses a combination of click-stream features and the influence of a learner's friends, based on their demographics, to identify potential dropouts. Existing predictive models are based on supervised learning techniques that require large amounts of hand-labelled data to train models. In practice, however, the scarcity of such labelled data makes training difficult. Therefore, this study uses self-training, a semi-supervised learning approach, to develop predictive models. Experimental results on a public data set demonstrate that semi-supervised models attain results comparable to state-of-the-art approaches while also having the flexibility of utilizing only a small quantity of labelled data. This study deploys seven well-known optimizers to train the self-training classifiers, of which Stochastic Gradient Descent (SGD) outperformed the others with an F1 score of 94.29%, affirming the relevance of this exposition.
Pages: 246-258
Page count: 13
Related Papers
50 records in total
  • [31] Adversarial self-training for robustness and generalization
    Li, Zhuorong
    Wu, Minghui
    Jin, Canghong
    Yu, Daiwei
    Yu, Hongchuan
    PATTERN RECOGNITION LETTERS, 2024, 185 : 117 - 123
  • [32] Unsupervised Controllable Generation with Self-Training
    Chrysos, Grigorios G.
    Kossaifi, Jean
    Yu, Zhiding
    Anandkumar, Anima
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [33] Self-training for Cell Segmentation and Counting
    Luo, J.
    Oore, S.
    Hollensen, P.
    Fine, A.
    Trappenberg, T.
    ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, 11489 : 406 - 412
  • [34] CONSIDERATIONS ON SELF-TRAINING IN THE INNOVATION UNION
    Blaga, Petruta
    Tripon, Avram
    STUDIES ON LITERATURE, DISCOURSE AND MULTICULTURAL DIALOGUE: COMMUNICATION AND PUBLIC RELATIONS, 2013, : 56 - 61
  • [35] Reranking and Self-Training for Parser Adaptation
    McClosky, David
    Charniak, Eugene
    Johnson, Mark
    COLING/ACL 2006, VOLS 1 AND 2, PROCEEDINGS OF THE CONFERENCE, 2006, : 337 - 344
  • [36] Crafting networks: A self-training intervention
    Wang, Huatian
    Demerouti, Evangelia
    Rispens, Sonja
    van Gool, Piet
    JOURNAL OF VOCATIONAL BEHAVIOR, 2024, 149
  • [37] Adaptive Self-Training for Object Detection
    Vandeghen, Renaud
    Louppe, Gilles
    Van Droogenbroeck, Marc
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 914 - 923
  • [38] Self-training of Residents in the Specialization Process
    Gonzalez Mesa, Maria Isabel
    Zerquera Alvarez, Carlos Esteban
    Machin Asia, Annia
    MEDISUR-REVISTA DE CIENCIAS MEDICAS DE CIENFUEGOS, 2014, 12 (01): : 329 - 333
  • [39] Self-Training for Unsupervised Parsing with PRPN
    Mohananey, Anhad
    Kann, Katharina
    Bowman, Samuel R.
    16TH INTERNATIONAL CONFERENCE ON PARSING TECHNOLOGIES AND IWPT 2020 SHARED TASK ON PARSING INTO ENHANCED UNIVERSAL DEPENDENCIES, 2020, : 105 - 110
  • [40] Self-Training of ESD for Experienced Endoscopists
    Takahashi, Morio
    Katayama, Yasumi
    GASTROINTESTINAL ENDOSCOPY, 2012, 75 (04) : 373 - 373