Reinforcement learning for robust voltage control in distribution grids under uncertainties

Cited: 6
Authors
Petrusev, Aleksandr [1 ,2 ]
Putratama, Muhammad Andy [1 ]
Rigo-Mariani, Remy [1 ]
Debusschere, Vincent [1 ]
Reignier, Patrick [2 ]
Hadjsaid, Nouredine [1 ]
Affiliations
[1] Univ Grenoble Alpes, CNRS, Grenoble INP, G2Elab, F-38000 Grenoble, France
[2] Univ Grenoble Alpes, CNRS, Grenoble INP, LIG, F-38000 Grenoble, France
Keywords
Voltage control; Reinforcement learning; TD3PG; PPO; Flexibility; PV production; Batteries; Distribution grid; Second-order conic relaxation; Optimal power flow
DOI
10.1016/j.segan.2022.100959
CLC classification
TE [Petroleum and natural gas industry]; TK [Energy and power engineering]
Discipline codes
0807; 0820
Abstract
Traditional optimization-based voltage controllers for distribution grid applications require consumption/production values from the meters as well as accurate grid data (i.e., line impedances) for modeling purposes. Those algorithms are sensitive to uncertainties, notably in consumption and production forecasts or in grid models. This paper focuses on the latter. Indeed, line parameters gradually deviate from their original values over time due to exploitation and weather conditions. Moreover, such data are often not fully available on the low-voltage side, creating sudden discrepancies between datasheet and actual values. To mitigate the impact of uncertain line parameters, this paper proposes a deep reinforcement learning algorithm for voltage regulation in a distribution grid with PV production, controlling the setpoints of distributed storage units as flexibilities. Two algorithms are considered, namely TD3PG and PPO. A two-stage strategy is also proposed, with offline training on a grid model followed by further online training on the actual system (with distinct impedance values). The controllers' performances are assessed with respect to the algorithms' hyperparameters, and the obtained results are compared with a second-order conic relaxation optimization-based control. The results show the relevance of the RL-based control in terms of accuracy, robustness to gradual or sudden variations in line impedances, and significant speed improvement once trained. Validation runs are performed on a simple 11-bus system before the method's scalability is tested on a 55-bus network. © 2022 Elsevier Ltd. All rights reserved.
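To make the control loop in the abstract concrete, here is a minimal sketch (not the authors' code): an agent sets a battery power setpoint to hold the voltage of a PV-heavy bus near 1.0 p.u. The two-bus feeder, the linearized LinDistFlow voltage model, and the proportional placeholder policy are all illustrative assumptions standing in for the paper's 11/55-bus grids and trained TD3PG/PPO actors.

```python
class FeederEnv:
    """Toy single-line feeder: slack bus -> load bus with PV and a battery."""

    def __init__(self, r=0.05, x=0.05, v_slack=1.0):
        # r, x: per-unit line parameters; in the paper these are uncertain.
        self.r, self.x, self.v_slack = r, x, v_slack

    def step(self, p_batt, p_pv=0.8, p_load=0.3, q=0.0):
        # Net injection at the load bus (generation positive);
        # a charging battery (p_batt > 0) absorbs the PV surplus.
        p = p_pv - p_load - p_batt
        # Linearized voltage drop (LinDistFlow): v ~ v_slack + r*p + x*q
        v = self.v_slack + self.r * p + self.x * q
        reward = -abs(v - 1.0)  # RL reward: penalize voltage deviation
        return v, reward


def policy(v, gain=10.0, p_max=1.0):
    """Placeholder proportional policy; a trained actor network goes here."""
    return max(-p_max, min(p_max, gain * (v - 1.0)))


env = FeederEnv()
v, _ = env.step(p_batt=0.0)      # with the battery idle, PV lifts the voltage
for _ in range(20):              # closed-loop correction drives v toward 1.0
    v, reward = env.step(p_batt=policy(v))
print(round(v, 3))               # → 1.017
```

A TD3PG or PPO agent would replace `policy` with a neural actor trained on this reward, which is what lets the controller absorb impedance errors: it never reads `r` and `x` directly, only the measured voltage.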
Pages: 12
Related papers
50 records in total
  • [31] Physics-Guided Multi-Agent Deep Reinforcement Learning for Robust Active Voltage Control in Electrical Distribution Systems
    Chen, Pengcheng
    Liu, Shichao
    Wang, Xiaozhe
    Kamwa, Innocent
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2024, 71 (02) : 922 - 933
  • [32] Reinforcement Learning-Based Distributed Robust Bipartite Consensus Control for Multispacecraft Systems With Dynamic Uncertainties
    Zhang, Yongwei
    Li, Jun-Yi
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (11) : 13341 - 13351
  • [33] Resilient Operation of Distribution Grids Using Deep Reinforcement Learning
    Hosseini, Mohammad Mehdi
    Parvania, Masood
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (03) : 2100 - 2109
  • [34] Determining Locational Hosting Capacities of High Voltage Grids Under Consideration of Uncertainties
    Braun, Simon
    Bong, Andreas
    Saat, Julian
    Ulbig, Andreas
    2024 INTERNATIONAL CONFERENCE ON SMART ENERGY SYSTEMS AND TECHNOLOGIES, SEST 2024, 2024,
  • [35] Agent Based Decentralized Voltage Control in Distribution Grids
    Wolter, Martin
    Hofmann, Lutz
    AT-AUTOMATISIERUNGSTECHNIK, 2011, 59 (03) : 161 - 166
  • [36] A Distributionally Robust Model Predictive Control for Static and Dynamic Uncertainties in Smart Grids
    Li, Qi
    Shi, Ye
    Jiang, Yuning
    Shi, Yuanming
    Wang, Haoyu
    Poor, H. Vincent
    IEEE TRANSACTIONS ON SMART GRID, 2024, 15 (05) : 4890 - 4902
  • [37] Decentralized Safe Reinforcement Learning for Voltage Control
    Cui, Wenqi
    Li, Jiayi
    Zhang, Baosen
    2022 AMERICAN CONTROL CONFERENCE, ACC, 2022, : 3351 - 3351
  • [38] Robust Deep Reinforcement Learning for Quadcopter Control
    Deshpande, Aditya M.
    Minai, Ali A.
    Kumar, Manish
    IFAC PAPERSONLINE, 2021, 54 (20): 90 - 95
  • [39] Active Reinforcement Learning for Robust Building Control
    Jang, Doseok
    Yan, Larry
    Spangher, Lucas
    Spanos, Costas J.
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 20, 2024, : 22150 - 22158
  • [40] The driver and the engineer: Reinforcement learning and robust control
    Bernat, Natalie
    Chen, Jiexin
    Matni, Nikolai
    Doyle, John
    2020 AMERICAN CONTROL CONFERENCE (ACC), 2020, : 3932 - 3939