Safe Deep Reinforcement Learning for Power System Operation under Scheduled Unavailability

Cited by: 0
Authors
Weiss, Xavier [1 ]
Mohammadi, Saeed [1 ]
Khanna, Parag [1 ]
Hesamzadeh, Mohammad Reza [1 ]
Nordstrom, Lars [1 ]
Affiliations
[1] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
Funding
Swedish Research Council;
Keywords
Deep reinforcement learning; power system operation; deep learning; safe deep reinforcement learning;
DOI
10.1109/PESGM52003.2023.10252619
CLC Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Subject Classification
0807; 0820;
Abstract
The electrical grid is a safety-critical system: incorrect actions taken by a power system operator can result in grid failure and cause harm. For this reason, it is desirable to have an automated power system operator that can reliably take actions that avoid grid failure while fulfilling an operational objective. Given the existing and growing complexity of power system operation, the choice has often fallen on deep reinforcement learning (DRL) agents for automation, but these are neither explainable nor provably safe. Therefore, in this work the effects of shielding on DRL agent survivability, validation computational time, and convergence are explored. To do this, shielded and unshielded DRL agents are evaluated on a standard IEEE 14-bus network. Agents are tasked with balancing generation and demand through redispatch and topology-changing actions at a human timescale of 5 minutes. To test survivability under controlled conditions, varying degrees of scheduled unavailability events are introduced, which could cause grid failure if left unaddressed. Results show improved convergence and generally greater survivability of shielded agents compared with unshielded agents. However, the safety assurances provided by the shield increase computational time, which will require trade-offs or optimizations to make real-time deployment more feasible.
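The shielding idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a shield intercepts the agent's proposed action, validates it against a safety check, and substitutes a verified-safe alternative if the check fails. The names `shield` and `make_validator` and the toy MW-imbalance rule below are illustrative assumptions; in the paper the validation would presumably involve simulating the action on the IEEE 14-bus grid rather than a simple imbalance band.

```python
def shield(action, candidate_actions, is_safe):
    # Return the agent's proposed action if it passes the safety check;
    # otherwise substitute the first verified-safe alternative.
    if is_safe(action):
        return action
    for alt in candidate_actions:
        if is_safe(alt):
            return alt
    return None  # no verified-safe action found: caller falls back to "do nothing"


def make_validator(imbalance_mw, limit_mw=50.0):
    # Hypothetical safety rule: the post-action generation/demand
    # imbalance must stay within +/- limit_mw.
    def is_safe(redispatch_mw):
        return abs(imbalance_mw + redispatch_mw) <= limit_mw
    return is_safe


validator = make_validator(imbalance_mw=-80.0)        # 80 MW generation shortfall
agent_choice = 10.0                                   # proposed redispatch, too small to be safe
alternatives = [float(mw) for mw in range(0, 101, 10)]  # candidate redispatch levels in MW
print(shield(agent_choice, alternatives, validator))  # -> 30.0, the first safe alternative
```

In this toy example the shield rejects the agent's 10 MW redispatch because the residual imbalance would exceed the 50 MW band, and returns 30 MW instead; the paper's reported increase in computational time corresponds to running such a validation step for every candidate action at every decision point.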
Pages: 5
Related Papers
50 records
  • [1] Scheduled Operation of Wind Farm with Battery System Using Deep Reinforcement Learning
    Futakuchi, Mamoru
    Takayama, Satoshi
    Ishigame, Atsushi
    IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, 2021, 16 (05) : 696 - 703
  • [2] Intelligent Adjustment for Power System Operation Mode Based on Deep Reinforcement Learning
    Hu, Wei
    Mi, Ning
    Wu, Shuang
    Zhang, Huiling
    Hu, Zhewen
    Zhang, Lei
    IENERGY, 2024, 3 (04): : 252 - 260
  • [3] Power System Operation Mode Calculation Based on Improved Deep Reinforcement Learning
    Yu, Ziyang
    Zhou, Bowen
    Yang, Dongsheng
    Wu, Weirong
    Lv, Chen
    Cui, Yong
    MATHEMATICS, 2024, 12 (01)
  • [4] Application of reinforcement learning to power system operation
    Takayama S.
IEEJ TRANSACTIONS ON POWER AND ENERGY, 2021, 141 (10) : 608 - 611
  • [5] Safe Deep Reinforcement Learning-Based Real-Time Operation Strategy in Unbalanced Distribution System
    Yoon, Yeunggurl
    Yoon, Myungseok
    Zhang, Xuehan
    Choi, Sungyun
    IEEE TRANSACTIONS ON INDUSTRY APPLICATIONS, 2024, 60 (06) : 8273 - 8283
  • [6] Deep Reinforcement Learning for Power System Applications: An Overview
    Zhang, Zidong
    Zhang, Dongxia
    Qiu, Robert C.
    CSEE JOURNAL OF POWER AND ENERGY SYSTEMS, 2020, 6 (01): : 213 - 225
  • [7] Baggage Routing with Scheduled Departures using Deep Reinforcement Learning
    Sorensen, Rene A.
    Rosenberg, Jens
    Karstoft, Henrik
    2021 INTERNATIONAL SYMPOSIUM ON COMPUTER SCIENCE AND INTELLIGENT CONTROLS (ISCSIC 2021), 2021, : 13 - 19
  • [8] On the Feasibility Guarantees of Deep Reinforcement Learning Solutions for Distribution System Operation
    Hosseini, Mohammad Mehdi
    Parvania, Masood
    IEEE TRANSACTIONS ON SMART GRID, 2023, 14 (02) : 954 - 964
  • [9] Economic Operation and Management of Microgrid System Using Deep Reinforcement Learning
    Wu, Ling
    Zhang, Ji
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 100
  • [10] Improving the interpretability of deep reinforcement learning in urban drainage system operation
    Tian, Wenchong
    Fu, Guangtao
    Xin, Kunlun
    Zhang, Zhiyu
    Liao, Zhenliang
    WATER RESEARCH, 2024, 249