Self-attention in Knowledge Tracing: Why It Works

Cited by: 3
Authors
Pu, Shi [1]
Becker, Lee [1]
Affiliation
[1] Educ Testing Serv, 660 Rosedale Rd, Princeton, NJ 08540 USA
Source
ARTIFICIAL INTELLIGENCE IN EDUCATION, PT I | 2022, Vol. 13355
Keywords
Deep knowledge tracing; Self-attention; Knowledge tracing
DOI
10.1007/978-3-031-11644-5_75
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Knowledge tracing refers to the dynamic assessment of a learner's mastery of skills. The self-attention mechanism has been widely adopted in knowledge tracing models in recent years, and these models consistently report performance gains over baseline knowledge tracing models on public datasets. However, why the self-attention mechanism works in knowledge tracing is unknown. This study argues that the ability to encode when a learner attempts to answer the same item multiple times in a row (henceforth, repeated attempts) is a significant reason why self-attention models perform better than other deep knowledge tracing models. We present two experiments to support our argument, using context-aware attentive knowledge tracing (AKT) as our example self-attention model and dynamic key-value memory networks (DKVMN) and deep performance factors analysis (DPFA) as our baseline models. First, we show that removing repeated attempts from the datasets closes the performance gap between AKT and the baseline models. Second, we present DPFA+, an extension of DPFA that consumes manually crafted repeated-attempt features, and demonstrate that DPFA+ outperforms AKT across all datasets.
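Both experiments in the abstract hinge on identifying repeated attempts in a learner's interaction sequence. As an illustration only (the record contains no code, and all function names below are hypothetical), the following Python sketch shows one plausible way to flag repeated attempts, drop them for an experiment-1-style ablation, and derive simple hand-crafted repeated-attempt features of the kind DPFA+ is described as consuming:

```python
# Illustrative sketch, not the paper's implementation. An interaction is an
# (item_id, correct) pair; a "repeated attempt" is a consecutive attempt on
# the same item, following the abstract's definition.
from typing import List, Tuple

Interaction = Tuple[int, int]  # (item_id, correct in {0, 1})

def repeated_attempt_flags(seq: List[Interaction]) -> List[bool]:
    """True where an attempt targets the same item as the previous attempt."""
    return [i > 0 and seq[i][0] == seq[i - 1][0] for i in range(len(seq))]

def drop_repeated_attempts(seq: List[Interaction]) -> List[Interaction]:
    """Experiment-1-style preprocessing: keep only the first attempt of each run."""
    flags = repeated_attempt_flags(seq)
    return [x for x, rep in zip(seq, flags) if not rep]

def repeated_attempt_features(seq: List[Interaction]) -> List[Tuple[int, int]]:
    """Per-attempt counts of prior consecutive (successes, failures) on the same
    item -- one plausible form of the manually crafted features DPFA+ consumes."""
    feats = []
    run_correct = run_wrong = 0
    for i, (item, correct) in enumerate(seq):
        if i == 0 or item != seq[i - 1][0]:
            run_correct = run_wrong = 0  # new item: reset the run counters
        feats.append((run_correct, run_wrong))
        if correct:
            run_correct += 1
        else:
            run_wrong += 1
    return feats

if __name__ == "__main__":
    seq = [(7, 0), (7, 0), (7, 1), (3, 1), (7, 0)]
    print(repeated_attempt_flags(seq))     # [False, True, True, False, False]
    print(drop_repeated_attempts(seq))     # [(7, 0), (3, 1), (7, 0)]
    print(repeated_attempt_features(seq))  # [(0, 0), (0, 1), (0, 2), (0, 0), (0, 0)]
```

Such features are sequence-local, which is consistent with the paper's argument: a self-attention model can attend to a learner's immediately preceding attempts on the same item, while a feature-based model like DPFA only sees them if they are encoded explicitly.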
Pages: 731-736
Number of pages: 6
Related papers
50 records in total
  • [31] A Dual-View Knowledge Enhancing Self-Attention Network for Sequential Recommendation
    Tang, Hao
    Zhang, Feng
    Xu, Xinhai
    Zhang, Jieyuan
    Liu, Donghong
2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022 : 832 - 839
  • [32] Leverage External Knowledge and Self-attention for Chinese Semantic Dependency Graph Parsing
    Liu, Dianqing
    Zhang, Lanqiu
    Shao, Yanqiu
    Sun, Junzhao
INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2021, 28 (02) : 447 - 458
  • [33] SELF-ATTENTION, CONCEPT ACTIVATION, AND THE CAUSAL SELF
    FENIGSTEIN, A
    LEVINE, MP
    JOURNAL OF EXPERIMENTAL SOCIAL PSYCHOLOGY, 1984, 20 (03) : 231 - 245
  • [34] Research of Self-Attention in Image Segmentation
    Cao, Fude
    Zheng, Chunguang
    Huang, Limin
    Wang, Aihua
    Zhang, Jiong
    Zhou, Feng
    Ju, Haoxue
    Guo, Haitao
    Du, Yuxia
    JOURNAL OF INFORMATION TECHNOLOGY RESEARCH, 2022, 15 (01)
  • [35] Improve Image Captioning by Self-attention
    Li, Zhenru
    Li, Yaoyi
    Lu, Hongtao
    NEURAL INFORMATION PROCESSING, ICONIP 2019, PT V, 2019, 1143 : 91 - 98
  • [36] Self-Attention Generative Adversarial Networks
    Zhang, Han
    Goodfellow, Ian
    Metaxas, Dimitris
    Odena, Augustus
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019
  • [37] Rethinking the Self-Attention in Vision Transformers
    Kim, Kyungmin
    Wu, Bichen
    Dai, Xiaoliang
    Zhang, Peizhao
    Yan, Zhicheng
    Vajda, Peter
    Kim, Seon
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 3065 - 3069
  • [38] Relative molecule self-attention transformer
    Maziarka, Łukasz
    Majchrowski, Dawid
    Danel, Tomasz
    Gaiński, Piotr
    Tabor, Jacek
    Podolak, Igor
    Morkisz, Paweł
    Jastrzębski, Stanisław
    JOURNAL OF CHEMINFORMATICS, 16
  • [39] Self-Attention ConvLSTM for Spatiotemporal Prediction
    Lin, Zhihui
    Li, Maomao
    Zheng, Zhuobin
    Cheng, Yangyang
    Yuan, Chun
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 11531 - 11538
  • [40] Pyramid Self-attention for Semantic Segmentation
    Qi, Jiyang
    Wang, Xinggang
    Hu, Yao
    Tang, Xu
    Liu, Wenyu
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, 2021, 13019 : 480 - 492