Safe Deep Reinforcement Learning for Power System Operation under Scheduled Unavailability

Cited: 0
Authors
Weiss, Xavier [1 ]
Mohammadi, Saeed [1 ]
Khanna, Parag [1 ]
Hesamzadeh, Mohammad Reza [1 ]
Nordstrom, Lars [1 ]
Affiliations
[1] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
Funding
Swedish Research Council;
Keywords
Deep reinforcement learning; power system operation; deep learning; safe deep reinforcement learning;
DOI
10.1109/PESGM52003.2023.10252619
Chinese Library Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Discipline Codes
0807; 0820;
Abstract
The electrical grid is a safety-critical system: incorrect actions taken by a power system operator can result in grid failure and cause harm. It is therefore desirable to have an automated power system operator that reliably takes actions that avoid grid failure while fulfilling some objective. Given the existing and growing complexity of power system operation, automation has often relied on deep reinforcement learning (DRL) agents, but these are neither explainable nor provably safe. Therefore, in this work, the effect of shielding on DRL agent survivability, validation computation time, and convergence is explored. To do this, shielded and unshielded DRL agents are evaluated on a standard IEEE 14-bus network. Agents are tasked with balancing generation and demand through redispatch and topology-changing actions at a human timescale of 5 minutes. To test survivability under controlled conditions, scheduled unavailability events of varying severity are introduced, which would cause grid failure if left unaddressed. Results show improved convergence and generally greater survivability for shielded agents compared with unshielded agents. However, the safety assurances provided by the shield increase computational time, so trade-offs or optimizations will be needed to make real-time deployment feasible.
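As a rough illustration of the shielding idea described in the abstract, the minimal Python sketch below wraps a stand-in agent policy with a post-hoc action filter: the agent's preferred actions are checked one by one, and only an action the shield can certify as safe is executed, with a do-nothing fallback otherwise. The action names, the is_safe capacity check, and the fallback rule are hypothetical simplifications for illustration only; they are not the shield, environment, or action space used in the paper.

```python
import random

# Hypothetical action set: do-nothing, redispatch steps, and a topology change.
ACTIONS = ["do_nothing", "redispatch_up", "redispatch_down", "switch_topology"]


def is_safe(action, forecast_load, committed_capacity):
    """Illustrative safety check: reject actions whose post-action capacity
    could not cover the forecast load. This stands in for a shield that
    validates candidate actions against grid-failure conditions."""
    capacity_after = {
        "do_nothing": committed_capacity,
        "redispatch_up": committed_capacity + 10.0,
        "redispatch_down": committed_capacity - 10.0,
        "switch_topology": committed_capacity,  # assumed capacity-neutral here
    }[action]
    return capacity_after >= forecast_load


def shielded_action(agent_ranking, forecast_load, committed_capacity):
    """Return the agent's highest-ranked action that passes the safety check;
    fall back to doing nothing if no action can be certified safe."""
    for action in agent_ranking:
        if is_safe(action, forecast_load, committed_capacity):
            return action
    return "do_nothing"


if __name__ == "__main__":
    # Stand-in for a trained DRL policy: a random preference ordering over actions.
    ranking = random.sample(ACTIONS, k=len(ACTIONS))
    print(shielded_action(ranking, forecast_load=95.0, committed_capacity=100.0))
```

The per-action validation loop is also where the extra computational cost reported in the abstract would arise, since every candidate action must be checked before it can be applied.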
Pages: 5
Related Papers
50 records in total
  • [31] Power System Security Correction Control Based on Deep Reinforcement Learning. Wang Y., Li L., Yu Y., Yang N., Liu M., Li T. Dianli Xitong Zidonghua/Automation of Electric Power Systems, 2023, 47(12): 121-129.
  • [32] Scheduled Power Tracking Control of the Wind-Storage Hybrid System Based on the Reinforcement Learning Theory. Li, Ze. 2017 2nd International Seminar on Advances in Materials Science and Engineering, 2017, 231.
  • [33] Computationally Efficient Safe Reinforcement Learning for Power Systems. Tabas, Daniel; Zhang, Baosen. 2022 American Control Conference (ACC), 2022: 3303-3310.
  • [34] Deep Reinforcement Learning for Optimal Hydropower Reservoir Operation. Xu, Wei; Meng, Fanlin; Guo, Weisi; Li, Xia; Fu, Guangtao. Journal of Water Resources Planning and Management, 2021, 147(08).
  • [35] Reinforcement Learning-Based Solution to Power Grid Planning and Operation Under Uncertainties. Shang, Xiumin; Ye, Lin; Zhang, Jing; Yang, Jingping; Xu, Jianping; Lyu, Qin; Diao, Ruisheng. 2020 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC 2020) and Workshop on Artificial Intelligence and Machine Learning for Scientific Applications (AI4S 2020), 2020: 72-79.
  • [36] Optimization of the Ice Storage Air Conditioning System Operation Based on Deep Reinforcement Learning. Li, Mingte; Xia, Fei; Xia, Lin. 2021 Proceedings of the 40th Chinese Control Conference (CCC), 2021: 8554-8559.
  • [37] Deep Reinforcement Learning for URLLC Data Management on Top of Scheduled eMBB Traffic. Saggese, Fabio; Pasqualini, Luca; Moretti, Marco; Abrardo, Andrea. 2021 IEEE Global Communications Conference (GLOBECOM), 2021.
  • [38] Robust Safe Reinforcement Learning under Adversarial Disturbances. Li, Zeyang; Hu, Chuxiong; Li, Shengbo Eben; Cheng, Jia; Wang, Yunan. 2023 62nd IEEE Conference on Decision and Control (CDC), 2023: 334-341.
  • [39] Typical Power Grid Operation Mode Generation Based on Reinforcement Learning and Deep Belief Network. Wang, Zirui; Zhou, Bowen; Lv, Chen; Yang, Hongming; Ma, Quan; Yang, Zhao; Cui, Yong. Sustainability, 2023, 15(20).
  • [40] Deep Reinforcement Learning Approach to Estimate the Energy-Mix Proportion for Secure Operation of Converter-Dominated Power System. Shrestha, Ashish; Marahatta, Anup; Rajbhandari, Yaju; Gonzalez-Longatt, Francisco. Energy Reports, 2024, 11: 1430-1444.