A Pre-trained Knowledge Tracing Model with Limited Data

Cited: 0
Authors
Yue, Wenli [1 ,3 ]
Su, Wei [1 ,3 ]
Liu, Lei [2 ]
Cai, Chuan [1 ]
Yuan, Yongna [1 ]
Jia, Zhongfeng [1 ]
Liu, Jiamin [1 ]
Xie, Wenjian [1 ]
Affiliations
[1] Lanzhou Univ, Sch Informat Sci & Engn, Lanzhou, Peoples R China
[2] Duzhe Publishing Grp Co Ltd, Lanzhou, Peoples R China
[3] Key Lab Media Convergence Technol & Commun, Lanzhou, Gansu, Peoples R China
Keywords
Knowledge Tracing; Limited Data; Pre-training; Fine-tuning
DOI
10.1007/978-3-031-68309-1_14
Chinese Library Classification (CLC)
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Online education systems have gained increasing popularity due to their capability to fully preserve users' learning data. This advantage enables researchers to assess learners' mastery through their learning trajectories, thereby facilitating personalized education and support. Knowledge tracing, an effective educational aid, models students' implicit knowledge states and predicts their mastery of knowledge from their historical answer records. However, newly developed online learning platforms often lack sufficient historical answer data, which impedes accurate prediction of students' knowledge states and renders existing knowledge tracing models less effective. This paper introduces the first pre-trained knowledge tracing model, which leverages a substantial amount of existing data for pre-training and a smaller dataset for fine-tuning. Validated across several publicly available knowledge tracing datasets, our method significantly improves tracing performance on small datasets, with a maximum AUC increase of 5.07%. Beyond the small-dataset setting, pre-training followed by fine-tuning on the entire dataset also improves AUC over the baseline, marking a novel direction in knowledge tracing research. Furthermore, the paper analyzes pre-training experiments that use varying numbers of interactions as fine-tuning data, providing valuable insights for Intelligent Tutoring Systems (ITS).
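The abstract specifies the workflow (pre-train on a large existing corpus of answer records, then fine-tune on the small target platform's data) but not a concrete architecture. The sketch below illustrates that workflow with a DKT-style LSTM on synthetic interaction sequences; the model, hidden size, learning rates, epoch counts, and data generator are illustrative assumptions, not the authors' implementation.

```python
# A minimal pre-train/fine-tune sketch for knowledge tracing, assuming a
# DKT-style LSTM. Everything here (architecture, hyperparameters, synthetic
# data) is an illustrative stand-in for the paper's unpublished details.
import torch
import torch.nn as nn

NUM_SKILLS = 50  # assumed skill-vocabulary size


class DKT(nn.Module):
    """Predict, per time step, the probability of answering each skill correctly."""

    def __init__(self, num_skills: int, hidden: int = 64):
        super().__init__()
        # Each interaction is encoded as one of 2 * num_skills tokens:
        # (skill id) + num_skills * (correct? 1 : 0).
        self.embed = nn.Embedding(2 * num_skills, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_skills)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return torch.sigmoid(self.out(h))  # (batch, seq, num_skills)


def train(model, interactions, targets, skills, epochs, lr):
    """One loop shared by the pre-training and fine-tuning stages."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCELoss()
    for _ in range(epochs):
        # States after steps 1..T-1 predict the outcome of steps 2..T.
        pred = model(interactions)[:, :-1]
        # Score only the prediction for the skill actually attempted next.
        picked = pred.gather(-1, skills[:, 1:].unsqueeze(-1)).squeeze(-1)
        loss = bce(picked, targets[:, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()


def synthetic(n_students, seq_len):
    """Random stand-in for real answer logs, to keep the sketch self-contained."""
    skills = torch.randint(0, NUM_SKILLS, (n_students, seq_len))
    correct = torch.randint(0, 2, (n_students, seq_len))
    x = skills + NUM_SKILLS * correct  # joint (skill, correctness) token
    return x, correct.float(), skills


model = DKT(NUM_SKILLS)
# 1) Pre-train on a large existing dataset (here: a large synthetic batch).
train(model, *synthetic(512, 100), epochs=5, lr=1e-3)
# 2) Fine-tune on the small target platform's data with a lower learning rate.
train(model, *synthetic(32, 100), epochs=3, lr=1e-4)
```

The lower learning rate in the fine-tuning stage is a common precaution against overwriting representations learned during pre-training; the paper's reported AUC gains come from its own fine-tuning recipe, not this sketch.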
Pages: 163-178
Page Count: 16