Hierarchical reinforcement learning based on macro actions

Cited by: 0
Authors
Hao Jiang [1 ]
Gongju Wang [2 ]
Shengze Li [1 ]
Jieyuan Zhang [1 ]
Long Yan [2 ]
Xinhai Xu [1 ]
Affiliations
[1] Chinese Academy of Military Science, Data Intelligence Division
[2] China Unicom Digital Technology Co.
Keywords
Hierarchical reinforcement learning; Macro action mapping model; Combat and non-combat macro actions; Rule-based execution logic
DOI
10.1007/s40747-025-01895-9
Abstract
The large action space is a key challenge in reinforcement learning. Although hierarchical methods have proven effective in addressing this issue, they remain insufficiently explored. This paper combines domain knowledge with hierarchical concepts to propose a novel Hierarchical Reinforcement Learning framework based on macro actions (HRL-MA). The framework includes a macro action mapping model that abstracts sequences of micro actions into macro actions, thereby simplifying the decision-making process. Macro actions are divided into two categories: combat macro actions (CMA) and non-combat macro actions (NO-CMA). NO-CMA are driven by decision-tree-based rules and establish the preconditions for executing CMA. CMA form the action space of the reinforcement learning algorithm, which dynamically selects actions based on the current state. Comprehensive tests on the StarCraft II maps Simple64 and AbyssalReefLE demonstrate that HRL-MA achieves higher win rates than baseline algorithms. Furthermore, in mini-game scenarios, HRL-MA consistently outperforms baseline algorithms in terms of reward scores. The findings highlight the effectiveness of integrating hierarchical structures and macro actions in reinforcement learning to manage complex decision-making tasks in environments with large action spaces.
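To make the two-level control described in the abstract concrete, the sketch below shows one way such a framework could be wired together. It is not the authors' implementation: the macro-action names, the toy state fields (workers, supply_left), the decision-tree rules, and the tabular Q-learning policy are all assumed placeholders standing in for the paper's macro action mapping model, rule-driven NO-CMA, and learned selection over CMA.

```python
# Illustrative sketch only: every name and state field below is a hypothetical
# stand-in for the ideas in the abstract (macro-action mapping, rule-driven
# NO-CMA, RL over CMA), not the paper's actual code.
import random
from collections import defaultdict

# Macro action mapping model: one macro action -> sequence of micro actions.
MACRO_TO_MICRO = {
    # combat macro actions (CMA): chosen by the RL policy
    "attack_main_base": ["select_army", "move_to_enemy_base", "attack"],
    "defend_ramp":      ["select_army", "move_to_ramp", "hold_position"],
    # non-combat macro actions (NO-CMA): driven by rule-based logic
    "train_workers":    ["select_base", "queue_worker"],
    "build_supply":     ["select_worker", "place_supply_depot"],
}
CMA = ["attack_main_base", "defend_ramp"]

def no_cma_rules(state):
    """Decision-tree-style rules that pick a non-combat macro action,
    creating the economic preconditions for combat actions."""
    if state["supply_left"] < 2:
        return "build_supply"
    if state["workers"] < 16:
        return "train_workers"
    return None  # preconditions satisfied; hand control to the RL policy

class MacroQPolicy:
    """Tabular Q-learning over the combat macro-action space (a simple
    stand-in for whatever RL algorithm the paper actually uses)."""
    def __init__(self, actions, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)
        self.actions, self.eps, self.alpha, self.gamma = actions, eps, alpha, gamma

    def act(self, s_key):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(s_key, a)])

    def update(self, s_key, action, reward, s_next_key):
        best_next = max(self.q[(s_next_key, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(s_key, action)] += self.alpha * (td_target - self.q[(s_key, action)])

def step(state, macro_action):
    """Expand the macro action into micro actions and fake a reward and
    state transition; a real agent would issue each micro action to the game."""
    for micro in MACRO_TO_MICRO[macro_action]:
        pass  # placeholder for issuing the micro action
    if macro_action == "build_supply":
        state["supply_left"] += 8
    elif macro_action == "train_workers":
        state["workers"] += 1
    reward = 1.0 if macro_action == "attack_main_base" and state["workers"] >= 16 else 0.0
    return state, reward

policy = MacroQPolicy(CMA)
state = {"workers": 12, "supply_left": 1}
for _ in range(50):
    s_key = tuple(sorted(state.items()))
    rule_action = no_cma_rules(state)                 # NO-CMA layer: rule-based
    macro = rule_action if rule_action is not None else policy.act(s_key)  # CMA layer: learned
    state, reward = step(state, macro)
    if macro in CMA:                                  # only combat choices train the policy
        policy.update(s_key, macro, reward, tuple(sorted(state.items())))
```

The point of the split, as the abstract describes it, is that the learned policy only ever searches the small CMA set while routine production decisions are handled by fixed rules, which keeps the effective action space small.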