Multi-Agent Reinforcement Learning is a Sequence Modeling Problem

Cited by: 0
Authors
Wen, Muning [1 ,2 ]
Kuba, Jakub Grudzien [3 ]
Lin, Runji [4 ]
Zhang, Weinan [1 ]
Wen, Ying [1 ]
Wang, Jun [2 ,5 ]
Yang, Yaodong [6 ,7 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Digital Brain Lab, Berkeley, CA USA
[3] Univ Oxford, Oxford, England
[4] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[5] UCL, London, England
[6] Beijing Inst Gen AI, Beijing, Peoples R China
[7] Peking Univ, Inst AI, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large sequence models (SM) such as the GPT series and BERT have displayed outstanding performance and generalization capabilities in natural language processing, vision, and, recently, reinforcement learning. A natural follow-up question is how to abstract multi-agent decision making as a sequence modeling problem as well and benefit from the prosperous development of SMs. In this paper, we introduce a novel architecture named Multi-Agent Transformer (MAT) that effectively casts cooperative multi-agent reinforcement learning (MARL) into SM problems wherein the objective is to map agents' observation sequences to agents' optimal action sequences. Our goal is to build the bridge between MARL and SMs so that the modeling power of modern sequence models can be unleashed for MARL. Central to MAT is an encoder-decoder architecture which leverages the multi-agent advantage decomposition theorem to transform the joint policy search problem into a sequential decision-making process; this renders only linear time complexity for multi-agent problems and, most importantly, endows MAT with a monotonic performance improvement guarantee. Unlike prior arts such as Decision Transformer, which fit only pre-collected offline data, MAT is trained by online trial and error in the environment in an on-policy fashion. To validate MAT, we conduct extensive experiments on the StarCraft II, Multi-Agent MuJoCo, Dexterous Hands Manipulation, and Google Research Football benchmarks. Results demonstrate that MAT achieves superior performance and data efficiency compared to strong baselines including MAPPO and HAPPO. Furthermore, we demonstrate that MAT is an excellent few-shot learner on unseen tasks regardless of changes in the number of agents. See our project page at https://sites.google.com/view/multi-agent-transformer.
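The sequential view described in the abstract rests on the multi-agent advantage decomposition theorem: the joint advantage of agents i_1, ..., i_n can be written as a sum of per-agent advantages, each conditioned on the actions already chosen by the preceding agents, so a joint action can be assembled one agent at a time rather than searched over jointly. Below is a minimal, assumption-based sketch of such autoregressive action generation with a Transformer encoder-decoder in PyTorch; the module names and sizes (obs_embed, act_embed, act_head, d_model, etc.) are illustrative placeholders and do not reproduce the authors' MAT implementation.

```python
# Sketch only: MAT-style sequential action generation under assumed, simplified modules.
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim, d_model = 3, 8, 5, 32  # illustrative sizes

obs_embed = nn.Linear(obs_dim, d_model)            # embed each agent's observation
act_embed = nn.Linear(act_dim, d_model)            # embed previously chosen actions (one-hot)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
act_head = nn.Linear(d_model, act_dim)             # per-agent action logits

obs = torch.randn(1, n_agents, obs_dim)            # one joint observation (batch of 1)
with torch.no_grad():
    memory = encoder(obs_embed(obs))               # encode all agents' observations jointly

    # Decode actions one agent at a time; agent m conditions on actions of agents 1..m-1.
    actions = []
    tgt = torch.zeros(1, 1, d_model)               # start token for the first agent
    for m in range(n_agents):
        L = tgt.size(1)
        causal_mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        h = decoder(tgt, memory, tgt_mask=causal_mask)
        logits = act_head(h[:, -1])                # logits for agent m
        a_m = torch.distributions.Categorical(logits=logits).sample()
        actions.append(a_m.item())
        onehot = torch.nn.functional.one_hot(a_m, act_dim).float().unsqueeze(1)
        tgt = torch.cat([tgt, act_embed(onehot)], dim=1)

print(actions)  # one discrete action per agent, generated sequentially
```

Each decoder pass emits exactly one agent's action, so generation scales linearly with the number of agents, which is the linear time complexity the abstract refers to.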
Pages: 13