COOM: A Game Benchmark for Continual Reinforcement Learning

Cited by: 0
Authors
Tomilin, Tristan [1 ]
Fang, Meng [1 ,2 ]
Zhang, Yudi [1 ]
Pechenizkiy, Mykola [1 ]
Affiliations
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
[2] Univ Liverpool, Liverpool, Merseyside, England
Keywords
ROBOTICS
DOI
not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The advancement of continual reinforcement learning (RL) has been hindered by several obstacles, including the lack of standardized metrics and evaluation protocols, demanding computational requirements, and the absence of widely accepted standard benchmarks. In response to these challenges, we present COOM (Continual DOOM), a continual RL benchmark tailored for embodied pixel-based RL. COOM provides a meticulously crafted suite of task sequences set in visually distinct 3D environments, serving as a robust evaluation framework for crucial aspects of continual RL such as catastrophic forgetting, knowledge transfer, and sample-efficient learning. Following an in-depth empirical evaluation of popular continual learning (CL) methods, we pinpoint their limitations, provide valuable insight into the benchmark, and highlight unique algorithmic challenges. This makes our work the first to benchmark image-based continual RL in 3D environments with embodied perception. The primary objective of the COOM benchmark is to offer the research community a valuable and cost-effective challenge that deepens our understanding of the capabilities and limitations of current and forthcoming CL methods in an RL setting. The code and environments are open-sourced and accessible on GitHub.
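The abstract names catastrophic forgetting and knowledge transfer as the aspects the benchmark evaluates. As a rough illustration of how such continual-RL benchmarks typically quantify forgetting, the sketch below computes the standard forgetting and average-performance metrics from a performance matrix; the matrix values are toy numbers for illustration, not COOM results, and the function names are our own, not COOM's API.

```python
# perf[i][j] = evaluation score on task j after finishing training on task i
# (toy 3-task data below, not actual COOM results).

def forgetting(perf):
    """Mean drop on each earlier task between its best score during
    the sequence and its score after the final task."""
    n = len(perf)
    return sum(max(perf[i][j] for i in range(n)) - perf[n - 1][j]
               for j in range(n - 1)) / (n - 1)

def average_performance(perf):
    """Mean score over all tasks after training on the full sequence."""
    n = len(perf)
    return sum(perf[n - 1][j] for j in range(n)) / n

perf = [
    [0.9, 0.1, 0.0],
    [0.6, 0.8, 0.1],
    [0.5, 0.7, 0.9],
]
print(round(forgetting(perf), 3))           # 0.25
print(round(average_performance(perf), 3))  # 0.7
```

A low forgetting score alongside high average performance indicates that a CL method retains earlier skills while still learning new tasks, which is the trade-off such benchmarks are designed to expose.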
Pages: 39