Innovative energy solutions: Evaluating reinforcement learning algorithms for battery storage optimization in residential settings

Times Cited: 0
Authors
Dou, Zhenlan [1]
Zhang, Chunyan [1]
Li, Junqiang [2]
Li, Dezhi [3]
Wang, Miao [3]
Sun, Lue [3]
Wang, Yong [2]
Affiliations
[1] State Grid Shanghai Municipal Elect Power Co, Shanghai 200122, Peoples R China
[2] Nanchang Univ, Sch Informat Engn, Nanchang 330031, Peoples R China
[3] China Elect Power Res Inst, Beijing Key Lab Demand Side Multienergy Carriers O, Beijing 100192, Peoples R China
Keywords
Reinforcement learning; Optimal control; Operation scheduling; Building energy management; Energy storage; Solar PV system; SYSTEM; MANAGEMENT; OPERATION; BEHAVIOR; BIOMASS
DOI
10.1016/j.psep.2024.09.123
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Subject Classification Code
08; 0830
Abstract
The implementation of BESS (battery energy storage systems) and the efficient optimization of their scheduling are crucial research challenges in managing the intermittency and volatility of solar PV (photovoltaic) systems. However, a review of the existing literature reveals notable gaps in the optimal scheduling of such energy systems: most models focus on a single objective, and only a few address the intricacies of multi-objective scenarios. This study examines grid-connected homes equipped with a BESS and a solar PV system. It leverages four distinct reinforcement learning (RL) algorithms, selected for their different training methodologies, to develop effective scheduling models. The findings demonstrate that the RL model based on Trust Region Policy Optimization (TRPO) effectively manages the BESS and PV system despite real-world uncertainties, and the case study confirms the suitability and effectiveness of this approach. The TRPO-based RL framework surpasses previous models in decision-making by selecting superior BESS scheduling strategies. The TRPO model exhibited the highest mean self-sufficiency rate, exceeding the A3C (Asynchronous Advantage Actor-Critic), DDPG (Deep Deterministic Policy Gradient), and TAC (Twin Actor Critic) models by approximately 3%, 0.72%, and 3.5%, respectively. This yields greater autonomy and economic benefit by adapting to dynamic real-world conditions. Consequently, our approach was strategically designed to deliver an optimized outcome. This framework is primarily intended for seamless integration into an automated energy plant environment, facilitating regular electricity trading among multiple buildings. Backed by initiatives such as the Renewable Energy Certificate weight, this technology is expected to play a crucial role in maintaining a balance between power generation and consumption. The MILP (Mixed Integer Linear Programming) architecture achieved a self-sufficiency rate of 29.12%, surpassing A3C, TRPO, DDPG, and TAC by 2.48%, 0.64%, 2%, and 3.04%, respectively.
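The record only summarizes the approach, so the sketch below is a minimal illustration of the kind of formulation involved, not the authors' implementation: a toy single-household PV-plus-battery scheduling environment (gymnasium API) trained with TRPO from the sb3-contrib library, followed by a rollout that reports a toy self-sufficiency rate. All battery parameters, PV/load profiles, the reward, and the self-sufficiency definition (share of load not met by grid imports) are assumptions made for illustration and are not taken from the paper.

```python
"""Minimal sketch (assumed, not the paper's model): toy BESS + PV scheduling with TRPO."""
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from sb3_contrib import TRPO


class ToyBatteryEnv(gym.Env):
    """One-day, hourly battery dispatch for a PV-equipped household (illustrative values)."""

    def __init__(self):
        super().__init__()
        self.capacity_kwh = 10.0      # assumed battery capacity
        self.max_power_kw = 3.0       # assumed charge/discharge limit
        t = np.arange(24)
        # Illustrative hourly PV generation and household load (kW)
        self.pv = np.clip(4.0 * np.sin((t - 6) / 12 * np.pi), 0, None)
        self.load = 1.0 + 0.8 * np.sin((t - 18) / 24 * 2 * np.pi) ** 2
        # Observation: [hour/24, state of charge, pv, load]; action: charge(+)/discharge(-) fraction
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def _obs(self):
        return np.array([self.hour / 24, self.soc, self.pv[self.hour], self.load[self.hour]],
                        dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.hour, self.soc = 0, 0.5  # start half charged
        return self._obs(), {}

    def step(self, action):
        power = float(action[0]) * self.max_power_kw                    # + charge, - discharge
        # Clip the energy exchange so the state of charge stays within [0, 1]
        energy = np.clip(power, -self.soc * self.capacity_kwh,
                         (1 - self.soc) * self.capacity_kwh)
        self.soc += energy / self.capacity_kwh
        net = self.load[self.hour] - self.pv[self.hour] + energy        # grid import if > 0
        reward = -max(net, 0.0)       # toy objective: penalise grid imports
        self.hour += 1
        terminated = self.hour >= 24
        obs = self._obs() if not terminated else np.zeros(4, dtype=np.float32)
        return obs, reward, terminated, False, {}


if __name__ == "__main__":
    model = TRPO("MlpPolicy", ToyBatteryEnv(), verbose=0)
    model.learn(total_timesteps=5_000)

    # Roll out the trained policy and report a toy self-sufficiency rate,
    # assumed here to be the share of load not met by grid imports.
    env = ToyBatteryEnv()
    obs, _ = env.reset()
    imported, consumed, done = 0.0, 0.0, False
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        hour = env.hour
        obs, reward, done, _, _ = env.step(action)
        imported += -reward           # reward was minus the grid import
        consumed += env.load[hour]
    print(f"toy self-sufficiency rate: {1 - imported / consumed:.2%}")
```

A production study would replace the synthetic profiles with measured PV and load data and compare the learned policy against an MILP schedule, which is the kind of benchmark the abstract reports.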
Pages: 2203-2221
Number of pages: 19