Scaling, Control and Generalization in Reinforcement Learning Level Generators

Cited by: 0
Authors
Earle, Sam [1 ]
Jiang, Zehua [1 ]
Togelius, Julian [1 ]
Affiliations
[1] NYU, Game Innovation Lab, Brooklyn, NY 11201 USA
Keywords
procedural content generation; reinforcement learning
DOI
10.1109/CoG60054.2024.10645598
Chinese Library Classification code
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Procedural Content Generation via Reinforcement Learning (PCGRL) has been introduced as a means by which controllable designer agents can be trained using only a set of computable metrics acting as a proxy for the level's quality and key characteristics. While PCGRL offers a unique set of affordances for game designers, it is constrained by the compute-intensive process of training RL agents, and has so far been limited to generating relatively small levels. To address this issue of scale, we implement several PCGRL environments in Jax so that all aspects of learning and simulation happen in parallel on the GPU, yielding faster environment simulation, removing the CPU-GPU information-transfer bottleneck during RL training, and ultimately resulting in significantly improved training speed. We replicate several key results from prior works in this new framework, letting models train for much longer than previously studied and evaluating their behavior after 1 billion timesteps. Aiming for greater control for human designers, we introduce randomized level sizes and frozen "pinpoints" of pivotal game tiles as further ways of countering overfitting. To test the generalization ability of learned generators, we evaluate models on large, out-of-distribution map sizes, and find that models with partial observations learn more robust design strategies.
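The abstract's core technical idea is running every environment step on the GPU in parallel via JAX, so no per-step CPU-GPU transfer occurs during RL training. The sketch below is an illustration of that idea only, not the authors' code: the toy "place one tile per step" environment, its reward, and all names (reset, step, NUM_ENVS, MAP_SIZE) are hypothetical, chosen to show how jax.vmap and jax.jit batch many level-editing environments on one device.

```python
# Minimal sketch (assumptions, not the PCGRL+ implementation) of GPU-parallel
# environment stepping with JAX: vmap batches environments, jit keeps the
# whole step on device, avoiding CPU-GPU transfers inside the training loop.
import jax
import jax.numpy as jnp

NUM_ENVS = 4096      # hypothetical number of environments simulated in parallel
MAP_SIZE = 16        # hypothetical square level side length

def reset(key):
    """Return an empty level grid and a random agent position for one environment."""
    pos = jax.random.randint(key, (2,), 0, MAP_SIZE)
    level = jnp.zeros((MAP_SIZE, MAP_SIZE), dtype=jnp.int32)
    return level, pos

def step(level, pos, action):
    """Write a tile at the agent's position, move the agent, and score the level."""
    level = level.at[pos[0], pos[1]].set(action)
    moves = jnp.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
    pos = jnp.clip(pos + moves[action % 4], 0, MAP_SIZE - 1)
    reward = jnp.sum(level == 1).astype(jnp.float32)  # toy proxy quality metric
    return level, pos, reward

# Vectorize reset/step over all environments; everything stays on the GPU.
keys = jax.random.split(jax.random.PRNGKey(0), NUM_ENVS)
levels, positions = jax.vmap(reset)(keys)
actions = jax.random.randint(jax.random.PRNGKey(1), (NUM_ENVS,), 0, 4)
levels, positions, rewards = jax.jit(jax.vmap(step))(levels, positions, actions)
print(rewards.shape)  # (4096,) -- one reward per parallel environment
```

Because the batched step is a pure jitted function of device arrays, it can be chained for millions of steps (e.g. inside jax.lax.scan) without touching the host, which is what makes billion-timestep training runs like those described above feasible.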
Pages: 8