Safe Deep Reinforcement Learning for Power System Operation under Scheduled Unavailability

Cited by: 0
Authors
Weiss, Xavier [1 ]
Mohammadi, Saeed [1 ]
Khanna, Parag [1 ]
Hesamzadeh, Mohammad Reza [1 ]
Nordstrom, Lars [1 ]
Affiliations
[1] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
Funding
Swedish Research Council;
Keywords
Deep reinforcement learning; power system operation; deep learning; safe deep reinforcement learning;
DOI
10.1109/PESGM52003.2023.10252619
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Discipline Codes
0807; 0820;
Abstract
The electrical grid is a safety-critical system, since incorrect actions taken by a power system operator can result in grid failure and cause harm. For this reason, it is desirable to have an automated power system operator that can reliably take actions that avoid grid failure while fulfilling some objective. Given the existing and growing complexity of power system operation, deep reinforcement learning (DRL) agents are often chosen for automation, but these are neither explainable nor provably safe. Therefore, in this work, the effects of shielding on DRL agent survivability, validation computation time, and convergence are explored. To do this, shielded and unshielded DRL agents are evaluated on a standard IEEE 14-bus network. Agents are tasked with balancing generation and demand through redispatch and topology-changing actions at a human timescale of 5 minutes. To test survivability under controlled conditions, scheduled unavailability events of varying severity are introduced, which could cause grid failure if left unaddressed. Results show improved convergence and generally greater survivability of shielded agents compared with unshielded agents. However, the safety assurances provided by the shield increase computation time, which will require trade-offs or optimizations to make real-time deployment more feasible.
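To illustrate the kind of action shielding the abstract refers to, the sketch below shows a generic shield wrapper around a DRL policy: the agent's proposed action is validated before execution (for example, by a power-flow feasibility check of line loadings) and, if rejected, replaced by a safe alternative or a do-nothing fallback. This is a minimal, assumed illustration only; the names shielded_action, is_safe, and fallback are placeholders and do not describe the authors' implementation.

# Minimal sketch of action shielding around a DRL policy (illustrative only).
# is_safe stands in for any validator, e.g. a simulated power-flow check of
# line loadings and voltage limits under the proposed action.

def shielded_action(policy, obs, candidate_actions, is_safe, fallback):
    """Return the policy's action if the validator accepts it, otherwise the
    first safe candidate action, otherwise a fallback (e.g. 'do nothing')."""
    proposed = policy(obs)
    if is_safe(obs, proposed):
        return proposed
    # Fall back to searching the discrete set of redispatch/topology actions.
    for action in candidate_actions:
        if is_safe(obs, action):
            return action
    return fallback

In such a scheme, the per-step validation of candidate actions is what adds the extra computation time the abstract mentions as the price of the shield's safety assurances.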
Pages: 5
Related Papers
50 records in total
  • [21] Li, Hepeng; He, Haibo. Learning to Operate Distribution Networks With Safe Deep Reinforcement Learning. IEEE TRANSACTIONS ON SMART GRID, 2022, 13 (03): 1860-1872
  • [22] Xia, Yang; Xu, Yan; Feng, Xue. Hierarchical Coordination of Networked-Microgrids Toward Decentralized Operation: A Safe Deep Reinforcement Learning Method. IEEE TRANSACTIONS ON SUSTAINABLE ENERGY, 2024, 15 (03): 1981-1993
  • [23] Wang, Xiangwei; Wang, Peng; Huang, Renke; Zhu, Xiuli; Arroyo, Javier; Li, Ning. Safe deep reinforcement learning for building energy management. APPLIED ENERGY, 2025, 377
  • [24] Lv, Shaohua; Li, Yanjie; Liu, Qi; Gao, Jianqi; Pang, Xizheng; Chen, Meiling. A Deep Safe Reinforcement Learning Approach for Mapless Navigation. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (IEEE-ROBIO 2021), 2021: 1520-1525
  • [25] Marchesini, Enrico; Corsi, Davide; Farinelli, Alessandro. Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021: 5590-5595
  • [26] Yi Z.; Liang S.; Wang W.; Jiang W.; Yang C.; Xin Y. Power System Dispatch: An Accelerated Safe Reinforcement Learning Approach by Incorporating Learning From Demonstration. Zhongguo Dianji Gongcheng Xuebao/Proceedings of the Chinese Society of Electrical Engineering, 2024, 44 (13): 5084-5096
  • [27] Selim, Alaa. Optimal Scheduled Control Operation of Battery Energy Storage System using Model-Free Reinforcement Learning. 2022 IEEE SUSTAINABLE POWER AND ENERGY CONFERENCE (ISPEC), 2022
  • [28] Lee, Juhyoung; Jo, Wooyoung; Park, Seong-Wook; Yoo, Hoi-Jun. Low-power Autonomous Adaptation System with Deep Reinforcement Learning. 2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA, 2022: 300-303
  • [29] Huang, Qiuhua; Huang, Renke; Hao, Weituo; Tan, Jie; Fan, Rui; Huang, Zhenyu. Adaptive Power System Emergency Control Using Deep Reinforcement Learning. IEEE TRANSACTIONS ON SMART GRID, 2020, 11 (02): 1171-1182
  • [30] Wang, Zirui; Zhang, Ziqi; Zhang, Xu; Du, Mingxuan; Zhang, Huiting; Liu, Bowen. Power System Fault Diagnosis Method Based on Deep Reinforcement Learning. ENERGIES, 2022, 15 (20)