Fault diagnosis and protection strategy based on spatio-temporal multi-agent reinforcement learning for active distribution system using phasor measurement units

Cited by: 4
Authors
Zhang, Tong [1 ]
Liu, Jianchang [2 ]
Wang, Honghai [2 ]
Li, Yong [3 ]
Wang, Nan [4 ]
Kang, Chengming [5 ]
Affiliations
[1] Shenyang Univ Technol, Sch Artificial Intelligence, Shenyang Key Lab Informat Percept & Edge Comp, Shenliao West Rd 111, Shenyang, Peoples R China
[2] Northeastern Univ, Sch Informat Sci & Engn, Shenyang, Peoples R China
[3] Shenyang Univ Technol, Sch Elect Engn, Shenyang, Peoples R China
[4] Shenyang Univ, Coll Mech Engn, Shenyang 110000, Peoples R China
[5] Shenyang Pharmaceut Univ, Sch Pharmaceut Engn, Shenyang, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Phasor measurement unit; Active distribution network; Fault diagnosis and protection; Multi-agent reinforcement learning; Dynamic angles; ADAPTATION; FILTER;
DOI
10.1016/j.measurement.2023.113291
Chinese Library Classification (CLC)
T [Industrial Technology];
Subject Classification Code
08;
Abstract
An active distribution system (ADS) requires intelligent sensors to provide real-time data. Due to harmonic distortion and sparse reward functions, conventional multi-agent reinforcement learning strategies suffer from fuzzy characteristics and slow convergence. This work proposes a model-free spatio-temporal multi-agent reinforcement learning (STMARL) strategy for spatio-temporal fault diagnosis and protection. An augmented-state extended Kalman filter tracks the spatio-temporal sequences measured by phasor measurement units (PMUs) and feeds them into the diagnosis model. A supervised multi-residual generation learning (SMGL) model is constructed to diagnose single-phase-to-ground faults. Based on the spatio-temporal sequences, the SMGL diagnosis model formulates ADS protection as a Markov decision process, and the protection operation is quantified as the STMARL reward. In the hybrid multi-agent framework, the STMARL protection strategy converges faster by following higher-level agent suggestions without a global reward. The STMARL protection strategy is validated on the IEEE 34-bus distribution test system with 10 PMUs. Compared with the SOGI, WNN, Sarsa, and DDPG algorithms under common fault conditions, the STMARL protection strategy performs better in a highly dynamic environment, with a response time of 1.274 s and a diagnosis accuracy of 97.125%. The STMARL diagnosis and protection strategy guides the ADS to stable operation coordinated with all PMUs, which lays a foundation for synchronous measurement applications in the smart grid.
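To illustrate the measurement front end described in the abstract, the following is a minimal sketch of an extended Kalman filter tracking the amplitude, phase, and frequency of a PMU-style voltage phasor. It is not the authors' augmented-state formulation; the state model, 1 kHz sampling rate, and noise covariances are illustrative assumptions.

```python
# Minimal sketch (assumed model, not the paper's exact augmented-state EKF):
# track amplitude, phase, and angular frequency of a single-phase voltage
# phasor from sampled PMU-style measurements.
import numpy as np

dt = 1.0 / 1000.0                      # assumed 1 kHz sampling rate
f0 = 50.0                              # nominal grid frequency (Hz)

# State x = [amplitude A, phase theta, angular frequency w]
x = np.array([1.0, 0.0, 2 * np.pi * f0])
P = np.eye(3) * 0.1                    # state covariance
Q = np.diag([1e-6, 1e-8, 1e-4])        # process noise (assumed)
R = np.array([[1e-3]])                 # measurement noise (assumed)

def f(x):
    """State transition: phase advances by w*dt; A and w follow random walks."""
    A, theta, w = x
    return np.array([A, theta + w * dt, w])

def F_jac(x):
    """Jacobian of the state transition."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])

def h(x):
    """Measurement model: instantaneous voltage sample A*cos(theta)."""
    A, theta, _ = x
    return np.array([A * np.cos(theta)])

def H_jac(x):
    """Jacobian of the measurement model."""
    A, theta, _ = x
    return np.array([[np.cos(theta), -A * np.sin(theta), 0.0]])

# Simulate a slightly off-nominal, noisy 50.2 Hz waveform and run the filter.
rng = np.random.default_rng(0)
t = np.arange(0.0, 0.2, dt)
z_true = 1.05 * np.cos(2 * np.pi * 50.2 * t + 0.3)
z_meas = z_true + 0.02 * rng.standard_normal(t.size)

for z in z_meas:
    # Predict
    x = f(x)
    F = F_jac(x)
    P = F @ P @ F.T + Q
    # Update
    H = H_jac(x)
    y = np.array([z]) - h(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(3) - K @ H) @ P

print(f"estimated amplitude: {x[0]:.3f}, frequency: {x[2] / (2 * np.pi):.2f} Hz")
```

In the paper's pipeline, such tracked phasor sequences would then feed the SMGL diagnosis model and, through the Markov decision process formulation, drive the STMARL protection reward.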
Pages: 12