Multiscenarios Parameter Optimization Method for Active Disturbance Rejection Control of PMSM Based on Deep Reinforcement Learning

Cited by: 18
Authors:
Wang, Yicheng [1]
Fang, Shuhua [1]
Hu, Jianxiong [1]
Huang, Demin [1]
Affiliation:
[1] Southeast Univ, Sch Elect Engn, Nanjing 211189, Peoples R China
Keywords:
Active disturbance rejection control; deep reinforcement learning (DRL); flux weakening (FW); more electric aircraft (MEA); multiscenarios parameter optimization; permanent magnet synchronous motor (PMSM); WEAKENING CONTROL; PARADIGM; ADRC;
DOI:
10.1109/TIE.2022.3225829
Chinese Library Classification (CLC):
TP [Automation Technology, Computer Technology]
Discipline Classification Code:
0812
Abstract:
In this article, a multiscenario parameter optimization method based on deep reinforcement learning (DRL), termed MSPO-DRL, is proposed for the active disturbance rejection controller (ADRC) of a permanent magnet synchronous motor (PMSM). Parameter tuning of the nonlinear ADRC has long been a difficulty that limits its achievable performance, and the optimal parameters differ across control requirements. An artificial intelligence algorithm is therefore introduced into the ADRC parameter-tuning process: a DRL parameter optimization model is constructed that automatically optimizes and adjusts the ADRC parameters in different application scenarios, so that the ADRC can conveniently reach its best control performance and the limitations of existing tuning methods are overcome. The ADRC is applied to the speed loop of the flux-weakening control of a PMSM for more electric aircraft, and the mathematical model of the ADRC in this setting is established first. The Markov decision process is then integrated with the ADRC: the interface module and reward function between the ADRC and MSPO-DRL are designed, and the concept of an ADRC control scenario is defined and embedded in the Markov decision process to improve the generalization of the DRL agent. The MSPO-DRL model is established, and the deep deterministic policy gradient algorithm is used to drive the parameter optimization to convergence. After training, different environmental conditions are randomly selected for simulations and experiments to verify the optimization effect and generalization performance of the algorithm. Comparisons with optimizations carried out by heuristic algorithms confirm the superiority and feasibility of the proposed method.
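The abstract's pipeline (ADRC on the speed loop, a Markov decision process whose state is the control scenario, whose action is the ADRC parameter set, and whose reward penalizes tracking error) can be illustrated with a minimal sketch. The ADRCSpeedLoopEnv class, the first-order plant model, the parameter ranges, and the random search that stands in for the paper's DDPG agent are all illustrative assumptions for exposition, not the authors' implementation or values.

```python
# Minimal sketch (not the authors' implementation): a gym-style environment in
# which the DRL "state" is the control scenario, the "action" is the ADRC
# parameter set, and the "reward" is the negative speed-tracking error.
import numpy as np


class ADRCSpeedLoopEnv:
    """Toy PMSM speed-loop tuning environment for a first-order ADRC (ESO + proportional feedback)."""

    def __init__(self, dt=1e-3, steps=2000):
        self.dt = dt        # simulation step [s]
        self.steps = steps  # steps per episode

    def reset(self, speed_ref=100.0, load_torque=0.2, inertia=0.01):
        # A "scenario" = (speed reference, load torque, inertia), randomized per episode.
        self.w_ref, self.T_L, self.J = speed_ref, load_torque, inertia
        self.w = 0.0                 # actual rotor speed
        self.z1, self.z2 = 0.0, 0.0  # ESO states: speed estimate, total-disturbance estimate
        return np.array([self.w_ref, self.T_L, self.J])  # scenario vector = agent observation

    def step(self, action):
        # Action = ADRC parameters proposed by the agent: ESO gains and feedback gain.
        beta1, beta2, kp = np.abs(action)
        b0 = 1.0 / self.J            # nominal input gain, assumed known here
        cost = 0.0
        for _ in range(self.steps):
            e_obs = self.z1 - self.w
            u0 = kp * (self.w_ref - self.z1) - self.z2   # feedback + disturbance compensation
            u = u0 / b0
            # Extended state observer update
            self.z1 += self.dt * (self.z2 + b0 * u - beta1 * e_obs)
            self.z2 += self.dt * (-beta2 * e_obs)
            # Simplified first-order motor model with load disturbance (illustrative plant)
            self.w += self.dt * (b0 * u - self.T_L / self.J)
            cost += abs(self.w_ref - self.w) * self.dt   # IAE-style tracking cost
        return -cost                                     # reward = negative tracking cost


if __name__ == "__main__":
    # Crude usage example: random search stands in for the paper's DDPG agent.
    env = ADRCSpeedLoopEnv()
    rng = np.random.default_rng(0)
    best_params, best_reward = None, -np.inf
    for _ in range(50):
        env.reset(speed_ref=rng.uniform(50.0, 150.0))
        params = rng.uniform([10.0, 100.0, 1.0], [200.0, 5000.0, 50.0])  # [beta1, beta2, kp]
        reward = env.step(params)
        if reward > best_reward:
            best_params, best_reward = params, reward
    print("best ADRC parameters found:", best_params, "reward:", best_reward)
```

In the method described by the abstract, a DDPG actor would replace the random search above, mapping the scenario observation returned by reset() to the parameter action, so that one trained policy outputs suitable ADRC gains for each scenario.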
Pages: 10957 - 10968
Number of pages: 12
Related Papers
50 records in total
  • [41] Power System Load Frequency Active Disturbance Rejection Control via Reinforcement Learning-Based Memetic Particle Swarm Optimization
    Zheng, Yuemin
    Huang, Zhaoyang
    Tao, Jin
    Sun, Hao
    Sun, Qinglin
    Dehmer, Matthias
    Sun, Mingwei
    Chen, Zengqiang
    IEEE ACCESS, 2021, 9 : 116194 - 116206
  • [42] A wavefront control method based on active disturbance rejection control technology
    Kong, Lingxi
    Cheng, Tao
    Yang, Ping
    Wang, Shuai
    SIXTH SYMPOSIUM ON NOVEL OPTOELECTRONIC DETECTION TECHNOLOGY AND APPLICATIONS, 2020, 11455
  • [43] Improved current control for PMSM via an active disturbance rejection controller
    Liu, Xiaojun
    Zhang, Guangming
    Shi, Zhihan
    EUROPEAN JOURNAL OF CONTROL, 2024, 78
  • [44] A Novel Active Disturbance Rejection Control Speed Controller for PMSM Drive
    Luan, Tianrui
    Yang, Ming
    Lang, Xiaoyu
    Lang, Zhi
    Xu, Dianguo
    2016 IEEE 8TH INTERNATIONAL POWER ELECTRONICS AND MOTION CONTROL CONFERENCE (IPEMC-ECCE ASIA), 2016: 116 - 120
  • [45] Improved 2-Order Active Disturbance Rejection Control for PMSM
    Wang, Yuandong
    Yan, Gangfeng
    Shi, Xiasheng
    Sang, Xuyang
    2018 CHINESE AUTOMATION CONGRESS (CAC), 2018: 1788 - 1793
  • [46] Active Disturbance Rejection Repetitive Control for Current Harmonic Suppression of PMSM
    Xu, Jiaqun
    Wei, Zhenqiang
    Wang, Shikai
    IEEE TRANSACTIONS ON POWER ELECTRONICS, 2023, 38 (11) : 14423 - 14437
  • [47] Research on Active Disturbance Rejection Control with Parameter Autotuning for a Moving Mirror Control System Based on Improved Snake Optimization
    Zhi, Liangjie
    Huang, Min
    Qian, Lulu
    Wang, Zhanchao
    Wen, Qin
    Han, Wei
    ELECTRONICS, 2024, 13 (09)
  • [48] Disturbance rejection and high dynamic quadrotor control based on reinforcement learning and supervised learning
    Li, Mingjun
    Cai, Zhihao
    Zhao, Jiang
    Wang, Jinyan
    Wang, Yingxun
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (13): 11141 - 11161
  • [50] Deep reinforcement learning based active disturbance rejection load frequency control of multi-area interconnected power systems with renewable energy
    Zheng, Yuemin
    Tao, Jin
    Sun, Qinglin
    Sun, Hao
    Chen, Zengqiang
    Sun, Mingwei
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2023, 360 (17): 13908 - 13931