MaMiC: Macro and Micro Curriculum for Robotic Reinforcement Learning

Cited by: 0
Authors
Tomar, Manan [1]
Sathuluri, Akhil [1]
Ravindran, Balaraman [1, 2]
Affiliations
[1] Indian Inst Technol Madras, Chennai, Tamil Nadu, India
[2] RBCDSAI, Chennai, Tamil Nadu, India
Keywords
DOI
Not available
CLC number
TP301 [Theory and Methods]
Discipline code
081202
Abstract
Shaping has been shown to be a powerful tool for learning complex tasks in humans and animals, compared to learning in a randomized fashion: it reduces the complexity of the problem by letting the learner solve an easier sub-task first. Generating a curriculum for such guided learning involves presenting the agent with easier goals first and then gradually increasing their difficulty. This paper takes a similar direction and proposes MaMiC, a dual curriculum scheme for solving robotic manipulation tasks with sparse rewards. It comprises a macro curriculum that divides the task into multiple sub-tasks and a micro curriculum that enables the agent to learn to transition between the discovered sub-tasks. We show how combining the macro and micro curriculum strategies helps overcome the major exploratory constraints of robot manipulation tasks without having to engineer complex rewards. The performance of the dual curriculum scheme is analyzed on the Fetch environments.
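The abstract only sketches the dual curriculum at a high level. Below is a minimal, self-contained Python sketch of the general idea: a macro curriculum that steps through a fixed list of sub-tasks and a micro curriculum that gradually raises goal difficulty within the current sub-task. All names here (SubTask, MacroCurriculum, MicroCurriculum, fake_success) are hypothetical illustrations; this is not the authors' MaMiC implementation, and it does not model their sub-task discovery procedure.

```python
# Illustrative sketch of a macro + micro curriculum (not the authors' MaMiC code).
import random
from dataclasses import dataclass


@dataclass
class SubTask:
    name: str
    start: float  # easiest goal parameter for this sub-task (e.g., goal distance)
    end: float    # hardest goal parameter for this sub-task


class MacroCurriculum:
    """Presents sub-tasks in order, advancing once the current one is mastered."""

    def __init__(self, sub_tasks, mastery_threshold=0.8):
        self.sub_tasks = sub_tasks
        self.idx = 0
        self.mastery_threshold = mastery_threshold

    def current(self):
        return self.sub_tasks[self.idx]

    def update(self, recent_success_rate):
        """Move to the next sub-task when the rolling success rate is high enough."""
        if recent_success_rate >= self.mastery_threshold and self.idx < len(self.sub_tasks) - 1:
            self.idx += 1
            return True
        return False


class MicroCurriculum:
    """Interpolates goal difficulty from easy to hard within one sub-task."""

    def __init__(self, steps=50):
        self.steps = steps
        self.t = 0

    def sample_goal(self, sub_task):
        frac = min(self.t / self.steps, 1.0)  # 0 = easiest, 1 = hardest
        self.t += 1
        return sub_task.start + frac * (sub_task.end - sub_task.start)


def fake_success(difficulty):
    """Stand-in for an agent whose success probability drops with goal difficulty."""
    return random.random() < max(0.1, 1.0 - difficulty)


sub_tasks = [SubTask("reach", 0.1, 0.4), SubTask("push", 0.3, 0.7), SubTask("place", 0.5, 0.9)]
macro, micro = MacroCurriculum(sub_tasks), MicroCurriculum()
window = []

for episode in range(300):
    task = macro.current()
    goal_difficulty = micro.sample_goal(task)
    window = (window + [fake_success(goal_difficulty)])[-20:]  # rolling success window
    if macro.update(sum(window) / len(window)):
        micro = MicroCurriculum()  # restart the micro curriculum for the new sub-task
        window = []
```

In this toy loop, the micro curriculum controls how hard each sampled goal is within the active sub-task, while the macro curriculum decides when to move on to the next sub-task; MaMiC combines the two ideas for sparse-reward manipulation.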
Pages: 2226-2228
Number of pages: 3
Related Papers
50 records in total
  • [41] Curriculum based Reinforcement Learning for traffic simulations
    Makri, Stela
    Charalambous, Panayiotis
    COMPUTERS & GRAPHICS-UK, 2023, 113 : 32 - 42
  • [42] A game theoretic approach to curriculum reinforcement learning
    Smyrnakis, Michalis
    Hoang, Lan
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 1212 - 1217
  • [43] CQM: Curriculum Reinforcement Learning with a Quantized World Model
    Lee, Seungjae
    Cho, Daesol
    Park, Jonghae
    Kim, H. Jin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [44] Accelerating Reinforcement Learning for Reaching Using Continuous Curriculum Learning
    Luo, Sha
    Kasaei, Hamidreza
    Schomaker, Lambert
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [45] Curriculum Learning In Job Shop Scheduling Using Reinforcement Learning
    de Puiseau, Constantin Waubert
    Tercan, Hasan
    Meisen, Tobias
    PROCEEDINGS OF THE CONFERENCE ON PRODUCTION SYSTEMS AND LOGISTICS, CPSL 2023-1, 2023, : 34 - 43
  • [46] An acquiring method of macro-actions in reinforcement learning
    Yoshikawa, Takeshi
    Kurihara, Masahito
    2006 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS, VOLS 1-6, PROCEEDINGS, 2006, : 4813 - +
  • [47] Composing Synergistic Macro Actions for Reinforcement Learning Agents
    Chen, Yu-Ming
    Chang, Kuan-Yu
    Liu, Chien
    Hsiao, Tsu-Ching
    Hong, Zhang-Wei
    Lee, Chun-Yi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (05) : 7251 - 7258
  • [48] Piecewise constant reinforcement learning for robotic applications
    Bonarini, Andrea
    Lazaric, Alessandro
    Restelli, Marcello
    ICINCO 2007: PROCEEDINGS OF THE FOURTH INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, VOL ICSO: INTELLIGENT CONTROL SYSTEMS AND OPTIMIZATION, 2007, : 214 - 221
  • [49] Reinforcement learning in robotic applications: a comprehensive survey
    Singh, Bharat
    Kumar, Rajesh
    Singh, Vinay Pratap
    ARTIFICIAL INTELLIGENCE REVIEW, 2022, 55 (02) : 945 - 990
  • [50] Efficient Spatiotemporal Transformer for Robotic Reinforcement Learning
    Yang, Yiming
    Xing, Dengpeng
    Xu, Bo
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (03) : 7982 - 7989