Deep Reinforcement Learning-Based Active Network Management and Emergency Load-Shedding Control for Power Systems

Cited by: 3
Authors
Zhang, Haotian [1]
Sun, Xinfeng [1]
Lee, Myoung Hoon [2]
Moon, Jun [1]
Affiliations
[1] Hanyang Univ, Dept Elect Engn, Seoul 04763, South Korea
[2] Incheon Natl Univ, Dept Elect Engn, Incheon 22012, South Korea
Keywords
Power system stability; Safety; Voltage control; Inference algorithms; Training; Power systems; Task analysis; Deep reinforcement learning; active network management; emergency control; safe reinforcement learning; load shedding
DOI
10.1109/TSG.2023.3302846
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
This paper presents two novel deep reinforcement learning (DRL) approaches for solving complex power system control problems in a data-driven manner so as to maintain power system stability. Specifically, we propose SACPER (Soft Actor-Critic (SAC) with Prioritized Experience Replay (PER)) and Constrained Variational Policy Optimization (CVPO), two DRL algorithms that address, respectively, the sequential decision-making problem of active network management (ANM) in distributed power systems and the emergency load shedding (ELS) control problem. First, we propose SACPER for the ANM problem; it prioritizes training on samples with large errors and poor policy performance. Evaluation of SACPER in terms of stability improvement and convergence speed shows that the ANM problem is optimized and that energy losses and operational constraint violations are minimized. Next, we introduce CVPO for the ELS control problem, which is formulated within the Safe Reinforcement Learning (SRL) framework to address the prioritization of safety constraints in power systems. We treat additional voltage variables in the network as strong constraints in the SRL formulation to achieve fast voltage recovery and minimize unnecessary energy loss, while maintaining good training performance and efficiency. To demonstrate the performance of SACPER, we apply it to the ANM6-Easy environment; the CVPO algorithm is applied to the IEEE 39-Bus and IEEE 300-Bus systems. The simulation results of SACPER and CVPO are validated through extensive comparisons with other state-of-the-art DRL approaches.
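The SACPER component described in the abstract rests on prioritized experience replay, i.e., replaying transitions with large temporal-difference errors more often during training. As a rough illustration of that general mechanism only (not the authors' implementation, whose details are given in the paper), the following minimal Python sketch shows a proportional PER buffer; the class name and the alpha/beta/eps parameters are assumptions made for illustration.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch).

    Transitions are sampled with probability proportional to |TD error|^alpha,
    and importance-sampling weights correct the bias this introduces.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions receive the current maximum priority so they are
        # replayed at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        prios = self.priorities[: len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights compensate for the non-uniform sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Larger |TD error| -> higher replay priority on subsequent draws.
        self.priorities[idx] = np.abs(td_errors) + self.eps
```

In a SAC-style training loop, the critic's per-sample TD errors would be fed back through update_priorities after each gradient step, and the returned importance weights would scale the critic loss, which is the behavior the abstract summarizes as prioritizing samples with large errors.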
Pages: 1423-1437
Number of pages: 15
Related Papers (50 in total)
  • [41] Jeyaraj, Pandia Rajan; Asokan, Siva Prakash; Kathiresan, Aravind Chellachi; Nadar, Edward Rajan Samuel. Deep reinforcement learning-based network for optimized power flow in islanded DC microgrid. Electrical Engineering, 2023, 105: 2805-2816
  • [42] Zhang, Xiao; Wu, Zhi; Zheng, Shu; Gu, Wei; Hu, Bo; Dong, Jichao. Voltage Control for Active Distribution Network Based on Bayesian Deep Reinforcement Learning. Dianli Xitong Zidonghua/Automation of Electric Power Systems, 2024, 48(20): 81-90
  • [43] Zhou, Y.; Zhou, L.; Ding, J.; Gao, J. Power Network Topology Optimization and Power Flow Control Based on Deep Reinforcement Learning. Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, 2021, 55: 7-14
  • [44] Wu, Junyong; Li, Baoqin; Hao, Liangliang; Shi, Fashun; Zhao, Pengjie. Adaptive Emergency Control of Power Systems Based on Deep Belief Network. CSEE Journal of Power and Energy Systems, 2024, 10(4): 1618-1631
  • [45] Zheng, Yuemin; Tao, Jin; Sun, Qinglin; Sun, Hao; Chen, Zengqiang; Sun, Mingwei. Deep reinforcement learning based active disturbance rejection load frequency control of multi-area interconnected power systems with renewable energy. Journal of the Franklin Institute-Engineering and Applied Mathematics, 2023, 360(17): 13908-13931
  • [46] Wang, Gang; Zhang, Xuemin; Mei, Shengwei; Tan, Wei. Emergency load shedding algorithm for power systems based on successive optimization. Qinghua Daxue Xuebao/Journal of Tsinghua University, 2009, 49(7): 943-947
  • [47] Li, Junrong; Peng, Fuzhou; Wang, Xijun; Chen, Xiang. Deep Reinforcement Learning-Based Channel and Power Allocation in Multibeam LEO Satellite Systems. IoT as a Service, IoTaaS 2023, 2025, 585: 103-116
  • [48] Alfred, Dajr; Czarkowski, Dariusz; Teng, Jiaxin. Reinforcement Learning-Based Control of a Power Electronic Converter. Mathematics, 2024, 12(5)
  • [49] Sim, Minjeong; Hong, Geonkyo; Suh, Dongjun. Deep Reinforcement Learning-Based Optimal Building Energy Management Strategies with Photovoltaic Systems. Proceedings of Building Simulation 2021: 17th Conference of IBPSA, 2022, 17: 2125-2132
  • [50] Xie, Ge; Cui, Chenggang; Zhao, Huirong; Yang, Jiguang; Shi, Yunfei. Deep Reinforcement Learning Based Load Control Strategy for Combined Heat and Power Units. 2021 11th International Conference on Power and Energy Systems (ICPES 2021), 2021: 280-284