COOM: A Game Benchmark for Continual Reinforcement Learning

Cited by: 0
Authors
Tomilin, Tristan [1 ]
Fang, Meng [1 ,2 ]
Zhang, Yudi [1 ]
Pechenizkiy, Mykola [1 ]
Institutions
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
[2] Univ Liverpool, Liverpool, Merseyside, England
Keywords
ROBOTICS;
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The advancement of continual reinforcement learning (RL) has faced various obstacles, including the absence of standardized metrics and evaluation protocols, demanding computational requirements, and a lack of widely accepted benchmarks. In response to these challenges, we present COOM (Continual DOOM), a continual RL benchmark tailored for embodied pixel-based RL. COOM presents a meticulously crafted suite of task sequences set within visually distinct 3D environments, serving as a robust evaluation framework for crucial aspects of continual RL such as catastrophic forgetting, knowledge transfer, and sample-efficient learning. Following an in-depth empirical evaluation of popular continual learning (CL) methods, we pinpoint their limitations, provide valuable insight into the benchmark, and highlight unique algorithmic challenges. This makes our work the first to benchmark image-based continual RL in 3D environments with embodied perception. The primary objective of the COOM benchmark is to offer the research community a valuable and cost-effective challenge: it seeks to deepen our comprehension of the capabilities and limitations of current and forthcoming CL methods in an RL setting. The code and environments are open-sourced and accessible on GitHub.
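The abstract names catastrophic forgetting and knowledge transfer as the core quantities the benchmark evaluates. Below is a minimal illustrative sketch (not the paper's actual evaluation code) of how such metrics are commonly computed from a performance matrix; the matrix values and the exact metric definitions here are assumptions for illustration, and COOM's own protocol may differ in normalization and baselines.

```python
# Hypothetical performance matrix for a 3-task sequence:
# S[i][j] = success rate on task j, measured after training on task i.
S = [
    [0.9, 0.1, 0.0],
    [0.6, 0.8, 0.2],
    [0.5, 0.5, 0.7],
]
n = len(S)

# Forgetting of task j: best performance ever achieved on task j
# minus its performance after the final task, averaged over all
# tasks except the last one.
forgetting = sum(
    max(S[i][j] for i in range(n)) - S[n - 1][j] for j in range(n - 1)
) / (n - 1)

# Forward transfer of task j (simplified): zero-shot performance on
# task j just before training on it, averaged over tasks 1..n-1
# (a from-scratch baseline of 0 is assumed for brevity).
forward_transfer = sum(S[j - 1][j] for j in range(1, n)) / (n - 1)

print(f"forgetting={forgetting:.2f}, forward_transfer={forward_transfer:.2f}")
```

With the example matrix above, forgetting averages the drop on tasks 0 and 1, and forward transfer averages the zero-shot scores on tasks 1 and 2.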
Pages: 39
Related Papers
50 items total
  • [21] CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks
    Srinivasan, Tejas
    Chang, Ting-Yun
    Pinto-Alva, Leticia
    Chochlakis, Georgios
    Rostami, Mohammad
    Thomason, Jesse
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [22] CL-MASR: A Continual Learning Benchmark for Multilingual ASR
    Della Libera, Luca
    Mousavi, Pooneh
    Zaiem, Salah
    Subakan, Cem
    Ravanelli, Mirco
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 4931 - 4944
  • [23] Multi-world Model in Continual Reinforcement Learning
    Shen, Kevin
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 21, 2024, : 23757 - 23759
  • [24] Optimal tuning of continual online exploration in reinforcement learning
    Achbany, Youssef
    Fouss, Francois
    Yen, Luh
    Pirotte, Alain
    Saerens, Marco
    ARTIFICIAL NEURAL NETWORKS - ICANN 2006, PT 1, 2006, 4131 : 790 - 800
  • [25] Continual Model-Based Reinforcement Learning with Hypernetworks
    Huang, Yizhou
    Xie, Kevin
    Bharadhwaj, Homanga
    Shkurti, Florian
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 799 - 805
  • [26] Towards Continual Reinforcement Learning through Evolutionary Meta-Learning
    Grbic, Djordje
    Risi, Sebastian
    PROCEEDINGS OF THE 2019 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION (GECCO'19 COMPANION), 2019, : 119 - 120
  • [27] Leveraging Procedural Generation to Benchmark Reinforcement Learning
    Cobbe, Karl
    Hesse, Christopher
    Hilton, Jacob
    Schulman, John
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [28] Leveraging Procedural Generation to Benchmark Reinforcement Learning
    Cobbe, Karl
    Hesse, Christopher
    Hilton, Jacob
    Schulman, John
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [29] CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification
    Li, Haifeng
    Jiang, Hao
    Gu, Xin
    Peng, Jian
    Li, Wenbo
    Hong, Liang
    Tao, Chao
    SENSORS, 2020, 20 (04)
  • [30] SELF-ACTIVATING NEURAL ENSEMBLES FOR CONTINUAL REINFORCEMENT LEARNING
    Powers, Sam
    Xing, Eliot
    Gupta, Abhinav
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199, 2022, 199