Automatic Lesson Plan Generation via Large Language Models with Self-critique Prompting

Cited by: 1
Authors
Zheng, Ying [1 ]
Li, Xueyi [1 ]
Huang, Yaying [1 ]
Liang, Qianru [1 ]
Guo, Teng [1 ]
Hou, Mingliang [2 ]
Gao, Boyu [1 ]
Tian, Mi [2 ]
Liu, Zitao [1 ]
Luo, Weiqi [1 ]
Affiliations
[1] Jinan Univ, Guangdong Inst Smart Educ, Guangzhou, Peoples R China
[2] TAL Educ Grp, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
DOI
10.1007/978-3-031-64315-6_13
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we leverage the understanding and generative abilities of large language models (LLMs) to automatically produce customized lesson plans, addressing the common challenge that conventional plans often fail to meet the distinct requirements of different teaching contexts and student populations. We propose a novel three-stage process that encompasses the gradual generation of each key component of the lesson plan using Retrieval-Augmented Generation (RAG), self-critique by the LLM, and subsequent refinement. Using this method, we generate math lesson plans covering more than 80 topics for grades 2 to 5 at the elementary school level. Three experienced educators were invited to develop comprehensive lesson plan evaluation criteria, which were then used to benchmark our LLM-generated lesson plans against actual lesson plans on the same topics. Three evaluators assessed the quality, relevance, and applicability of the plans. The evaluation results indicate that our approach can generate high-quality lesson plans, significantly streamlining lesson planning and reducing the burden on educators.
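The three-stage pipeline summarized above (component-by-component generation with RAG, LLM self-critique, and refinement) might look roughly like the following minimal sketch. The helpers call_llm and retrieve_documents, the prompts, and the component list are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a three-stage "generate -> self-critique -> refine" pipeline.
# call_llm() and retrieve_documents() are hypothetical placeholders; the prompts
# and lesson-plan components are illustrative, not the paper's actual design.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError("Plug in your LLM client here.")

def retrieve_documents(query: str, k: int = 3) -> list[str]:
    """Placeholder for a retriever over a corpus of reference lesson plans."""
    raise NotImplementedError("Plug in your retriever (e.g., a vector store) here.")

LESSON_COMPONENTS = ["learning objectives", "warm-up activity",
                     "main instruction", "practice exercises", "assessment"]

def generate_lesson_plan(grade: int, topic: str) -> dict[str, str]:
    plan: dict[str, str] = {}
    for component in LESSON_COMPONENTS:
        # Stage 1: retrieval-augmented generation of one component at a time.
        context = "\n".join(retrieve_documents(f"grade {grade} {topic} {component}"))
        draft = call_llm(
            f"Using the reference material below, write the {component} of a "
            f"grade-{grade} math lesson plan on '{topic}'.\n\n{context}"
        )
        # Stage 2: the LLM critiques its own draft.
        critique = call_llm(
            f"Critique this {component} for clarity, grade-appropriateness, and "
            f"alignment with the topic '{topic}':\n\n{draft}"
        )
        # Stage 3: refinement of the draft based on the self-critique.
        plan[component] = call_llm(
            f"Revise the {component} below to address the critique.\n\n"
            f"Draft:\n{draft}\n\nCritique:\n{critique}"
        )
    return plan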
Pages: 163-178
Page count: 16
Related Papers
50 records in total
  • [1] Prompting Large Language Models for Automatic Question Tagging
    Xu, Nuojia
    Xue, Dizhan
    Qian, Shengsheng
    Fang, Quan
    Hu, Jun
    MACHINE INTELLIGENCE RESEARCH, 2025,
  • [2] Learning to Learn via Self-Critique
    Antoniou, Antreas
    Storkey, Amos
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [3] Guiding Large Language Models via Directional Stimulus Prompting
    Li, Zekun
    Peng, Baolin
    He, Pengcheng
    Galley, Michel
    Gao, Jianfeng
    Yan, Xifeng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [4] Grammar Prompting for Domain-Specific Language Generation with Large Language Models
    Wang, Bailin
    Wang, Zi
    Wang, Xuezhi
    Cao, Yuan
    Saurous, Rif A.
    Kim, Yoon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [5] Do Language Models Enjoy Their Own Stories? Prompting Large Language Models for Automatic Story Evaluation
    Chhun, Cyril
    Suchanek, Fabian M.
    Clavel, Chloe
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2024, 12 : 1122 - 1142
  • [6] Enabling controllable table-to-text generation via prompting large language models with guided planning
    Zhao, Shuo
    Sun, Xin
    KNOWLEDGE-BASED SYSTEMS, 2024, 304
  • [7] Considerations for Prompting Large Language Models
    Schulte, Brian
    JAMA ONCOLOGY, 2024, 10 (04) : 538 - 538
  • [8] FormalEval: A Method for Automatic Evaluation of Code Generation via Large Language Models
    Yang, Sichao
    Yang, Ye
    2024 INTERNATIONAL SYMPOSIUM OF ELECTRONICS DESIGN AUTOMATION, ISEDA 2024, 2024, : 660 - 665
  • [9] PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents
    Sun, Simeng
    Liu, Yang
    Wang, Shuohang
    Iter, Dan
    Zhu, Chenguang
    Iyyer, Mohit
    PROCEEDINGS OF THE 18TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 469 - 486
  • [10] Automatic item generation in various STEM subjects using large language model prompting
    Park, Joonhyeong
    2025, 8