CONTEXTUAL MULTI-ARMED BANDIT ALGORITHMS FOR PERSONALIZED LEARNING ACTION SELECTION

Cited by: 0
Authors
Manickam, Indu [1 ]
Lan, Andrew S. [1 ]
Baraniuk, Richard G. [1 ]
Affiliations
[1] Rice Univ, Houston, TX 77251 USA
Keywords
contextual bandits; personalized learning
DOI
Not available
CLC number
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
Optimizing the selection of learning resources and practice questions to address each individual student's needs has the potential to improve students' learning efficiency. In this paper, we study the problem of selecting a personalized learning action for each student (e.g., watching a lecture video or working on a practice question), based on their prior performance, in order to maximize their learning outcome. We formulate this problem using the contextual multi-armed bandit framework, where students' prior concept knowledge states (estimated from their responses to questions in previous assessments) correspond to contexts, the personalized learning actions correspond to arms, and their performance on future assessments corresponds to rewards. We propose three new Bayesian policies for selecting personalized learning actions, each of which exhibits advantages over prior work, and experimentally validate them on real-world datasets.
Pages: 6344-6348
Number of pages: 5
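
The abstract maps personalized learning onto the standard contextual bandit interface: the context is a student's estimated concept-knowledge state, each arm is a learning action, and the reward is the student's performance on the next assessment. The sketch below is a minimal illustration of that interface using linear Thompson sampling, one common Bayesian policy; it is not the paper's proposed policies, and the class name, hyperparameters, and simulated reward model are assumptions introduced for illustration only.

```python
# Minimal sketch (not the authors' implementation) of the contextual bandit
# setup described in the abstract: context = estimated concept-knowledge vector,
# arms = personalized learning actions, reward = score on the next assessment.
# Arm selection uses linear Thompson sampling, one standard Bayesian policy;
# all names, hyperparameters, and the simulated reward model are assumptions.
import numpy as np


class LinearThompsonSampling:
    """One Bayesian linear-regression posterior per arm (learning action)."""

    def __init__(self, n_actions, context_dim, prior_var=1.0, noise_var=0.25):
        self.n_actions = n_actions
        self.noise_var = noise_var
        # Posterior precision matrix and scaled data vector for each arm.
        self.precision = [np.eye(context_dim) / prior_var for _ in range(n_actions)]
        self.b = [np.zeros(context_dim) for _ in range(n_actions)]

    def select_action(self, context):
        """Sample a weight vector from each arm's posterior; pick the best arm."""
        scores = []
        for a in range(self.n_actions):
            cov = np.linalg.inv(self.precision[a])
            mean = cov @ self.b[a]
            w = np.random.multivariate_normal(mean, cov)
            scores.append(context @ w)
        return int(np.argmax(scores))

    def update(self, context, action, reward):
        """Standard Bayesian linear-regression posterior update for the chosen arm."""
        self.precision[action] += np.outer(context, context) / self.noise_var
        self.b[action] += reward * context / self.noise_var


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_concepts, n_actions = 4, 3                       # e.g., 4 concepts, 3 learning actions
    true_w = rng.normal(size=(n_actions, n_concepts))  # hidden per-action reward model
    policy = LinearThompsonSampling(n_actions, n_concepts)
    for t in range(500):
        knowledge = rng.uniform(0.0, 1.0, n_concepts)  # estimated knowledge state (context)
        action = policy.select_action(knowledge)
        reward = knowledge @ true_w[action] + 0.1 * rng.standard_normal()
        policy.update(knowledge, action, reward)
```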