A multi-agent reinforcement learning method for distribution system restoration considering dynamic network reconfiguration

Cited by: 2
|
Authors
Si, Ruiqi [1 ]
Chen, Siyuan [1 ]
Zhang, Jun [1 ]
Xu, Jian [1 ]
Zhang, Luxi [2 ]
Affiliations
[1] Wuhan Univ, Sch Elect Engn & Automat, Wuhan 430072, Peoples R China
[2] Brandeis Univ, Waltham, MA 02454 USA
Funding
National Key R&D Program of China;
Keywords
Deep reinforcement learning; Multi-agent reinforcement learning; Distribution system restoration; Distribution network; Microgrid; UNBALANCED DISTRIBUTION-SYSTEMS; SERVICE RESTORATION; MANAGEMENT; MODEL;
DOI
10.1016/j.apenergy.2024.123625
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Discipline Classification Codes
0807 ; 0820 ;
Abstract
Extreme weather, cascading failures, and other events have increased the probability of wide-area blackouts, highlighting the importance of rapidly and efficiently restoring the affected loads. This paper proposes a multi-agent reinforcement learning method for distribution system restoration (DSR). First, considering that the topology of the distribution system may change during network reconfiguration, a dynamic agent network (DAN) architecture is designed to address the challenge of changing input dimensions in the neural network. Two encoders are created to capture observations of the environment and of other agents, respectively, and an attention mechanism aggregates an arbitrary-sized set of neighboring-agent features. Then, considering the operational constraints of DSR, an action mask mechanism is implemented to filter out invalid actions, ensuring the security of the strategy. Finally, an IEEE 123-node test system is used for validation, and the experimental results show that the proposed algorithm can effectively assist agents in accomplishing collaborative DSR tasks.
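The two mechanisms named in the abstract can be sketched in a few lines. This is a minimal illustration (not the paper's actual DAN implementation, and all function names here are hypothetical): dot-product attention pools an arbitrary-sized list of neighbor feature vectors into one fixed-size vector, and an action mask pushes the logits of invalid actions to negative infinity so they receive zero probability after softmax.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]  # exp(-inf) == 0.0, so masked entries vanish
    s = sum(exps)
    return [e / s for e in exps]

def attention_aggregate(query, neighbor_feats):
    """Pool an arbitrary-sized set of neighbor feature vectors into one
    fixed-size vector via dot-product attention against `query`."""
    if not neighbor_feats:
        return [0.0] * len(query)
    # One scalar score per neighbor: dot(query, neighbor).
    scores = [sum(q * k for q, k in zip(query, feat)) for feat in neighbor_feats]
    weights = softmax(scores)
    # Weighted sum of neighbor features -> output has the same dim regardless
    # of how many neighbors the reconfigured network currently exposes.
    dim = len(query)
    return [sum(w * feat[i] for w, feat in zip(weights, neighbor_feats))
            for i in range(dim)]

def mask_logits(logits, valid):
    """Action mask: invalid actions get -inf logits, hence zero probability."""
    return [l if ok else float("-inf") for l, ok in zip(logits, valid)]
```

The key property is that `attention_aggregate` returns a vector of fixed dimension whatever the neighbor count, which is what lets the policy network cope with topology changes during reconfiguration; the mask guarantees the sampled action always satisfies the modeled operational constraints.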
Pages: 11
Related Papers
50 records
  • [21] Multi-Agent Reinforcement Learning for Dynamic Spectrum Access
    Jiang, Huijuan
    Wang, Tianyu
    Wang, Shaowei
    ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,
  • [22] Multi-Agent Hierarchical Reinforcement Learning with Dynamic Termination
    Han, Dongge
    Boehmer, Wendelin
    Wooldridge, Michael
    Rogers, Alex
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 2006 - 2008
  • [23] Multi-agent Hierarchical Reinforcement Learning with Dynamic Termination
    Han, Dongge
    Bohmer, Wendelin
    Wooldridge, Michael
    Rogers, Alex
    PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT II, 2019, 11671 : 80 - 92
  • [24] Dynamic Multi-Agent Reinforcement Learning for Control Optimization
    Fagan, Derek
    Meier, Rene
    PROCEEDINGS FIFTH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS, MODELLING AND SIMULATION, 2014, : 99 - 104
  • [25] Multi-Agent Reinforcement Learning in Dynamic Industrial Context
    Zhang, Hongyi
    Li, Jingya
    Qi, Zhiqiang
    Aronsson, Anders
    Bosch, Jan
    Olsson, Helena Holmstrom
    2023 IEEE 47TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC, 2023, : 448 - 457
  • [26] Multi-Agent Reinforcement Learning With Decentralized Distribution Correction
    Li, Kuo
    Jia, Qing-Shan
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2025, 22 : 1684 - 1696
  • [28] Review of multi-agent reinforcement learning based dynamic spectrum allocation method
    Song B.
    Ye W.
    Meng X.
    Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics, 2021, 43 (11): : 3338 - 3351
  • [29] A Multi-Agent System for Restoration of an Electric Power Distribution Network with Local Generation
    Khamphanchai, Warodom
    Pipattanasomporn, Manisa
    Rahman, Saifur
    2012 IEEE POWER AND ENERGY SOCIETY GENERAL MEETING, 2012,
  • [30] A Distribution Network Restoration Decision Support Algorithm Based on Multi-Agent System
    Liu, Yu
    Hou, Yunhe
    Lei, Shunbo
    Wang, Dong
    2016 IEEE PES ASIA-PACIFIC POWER AND ENERGY ENGINEERING CONFERENCE (APPEEC), 2016, : 33 - 37