Model-free compensation learning control of asymmetric hysteretic systems with initial state learning

Cited by: 0
Authors
Zhang, Yangming [1 ]
Luo, Biao [2 ]
Zhang, Yanqiong [3 ]
Sun, Shanxun [1 ]
Affiliations
[1] Jinan Univ, Energy & Elect Res Ctr, Zhuhai 519070, Peoples R China
[2] Cent South Univ, Sch Automat, Changsha 410083, Peoples R China
[3] Hangzhou Dianzi Univ, HDU ITMO Joint Inst, Hangzhou 310018, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Hysteresis; Discrete-time nonlinear systems; Tracking; Convergence analysis; Model-free compensation; NONLINEAR-SYSTEMS; ADAPTIVE-CONTROL; INVERSE CONTROL; IDENTIFICATION; DESIGN; ACTUATORS;
DOI
10.1016/j.jsv.2024.118451
Chinese Library Classification
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
In this article, a model-free compensation learning control scheme is presented for asymmetric hysteretic systems to achieve high-precision output tracking, where the effects of asymmetric input hysteresis nonlinearities are described by the asymmetric Prandtl-Ishlinskii model (APIM). In the presented scheme, the feedforward iterative learning technique is generalized to the control design of asymmetric hysteretic systems. To improve the robustness and dynamic performance of existing feedforward learning control for hysteretic systems, both a feedback action and a differential term are used to design a discrete-time PD-P open-closed-loop learning control law that simultaneously compensates the asymmetric input hysteresis nonlinearities and the linear dynamics effects without constructing compensators based on their models. The initial state error of such systems is also considered, and a modified initial state learning strategy is proposed to ensure that the initial state error converges to a prescribed level as the number of iterations increases. By fully analyzing the properties of the APIM, convergence conditions for the input error, the state error, and the tracking error along the iteration domain are established. Simulation and experimental results demonstrate the strong robustness and excellent tracking accuracy of the proposed model-free compensation learning control scheme.
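The abstract outlines two components that a short sketch can make concrete: a Prandtl-Ishlinskii-type hysteretic plant and a PD-P open-closed-loop learning update with trial-to-trial state resetting. The Python sketch below is illustrative only and is not the paper's implementation: the asymmetric play thresholds, weights, and learning gains (kp, kd, lp) are hypothetical placeholders, and the asymmetric play operator is a simplified stand-in for the APIM.

```python
import numpy as np

class AsymmetricPI:
    """Simplified asymmetric Prandtl-Ishlinskii-type plant: a weighted sum of
    play operators whose loading and unloading thresholds differ. Thresholds
    and weights are illustrative placeholders, not the paper's APIM parameters."""

    def __init__(self, r_up, r_down, w):
        self.r_up = np.asarray(r_up, dtype=float)
        self.r_down = np.asarray(r_down, dtype=float)
        self.w = np.asarray(w, dtype=float)
        self.z = np.zeros_like(self.w)      # internal play-operator states

    def reset(self):
        """Return the hysteresis state to zero before each learning trial."""
        self.z[:] = 0.0

    def step(self, v):
        """One-sample asymmetric play update, then the weighted output."""
        self.z = np.maximum(v - self.r_up, np.minimum(v + self.r_down, self.z))
        return float(self.w @ self.z)


def pdp_ilc(plant, y_d, n_iter=30, kp=0.4, kd=0.1, lp=0.3):
    """PD-P open-closed-loop learning sketch: between trials, a PD update on
    the stored error profile refines the feedforward input (open loop);
    within a trial, a P term on the latest measured error adds feedback
    (closed loop). Gains kp, kd, lp are hypothetical tuning values."""
    N = len(y_d)
    u_ff = np.zeros(N)                      # feedforward input, refined per trial
    e = np.zeros(N)
    for _ in range(n_iter):
        plant.reset()                       # state resetting at each trial start
        e_meas = 0.0                        # latest measured error in this trial
        for t in range(N):
            u_t = u_ff[t] + lp * e_meas     # causal P feedback (error at t-1)
            e_meas = y_d[t] - plant.step(u_t)
            e[t] = e_meas
        de = np.diff(e, prepend=e[0])       # discrete differential of the error
        u_ff = u_ff + kp * e + kd * de      # open-loop PD update for next trial
    return u_ff, e


if __name__ == "__main__":
    radii = np.linspace(0.05, 0.5, 10)
    plant = AsymmetricPI(r_up=radii, r_down=0.6 * radii, w=np.full(10, 0.1))
    time = np.linspace(0.0, 2.0 * np.pi, 400)
    y_d = 0.5 * np.sin(time)                # desired output trajectory
    u_ff, e = pdp_ilc(plant, y_d)
    print(f"max tracking error after learning: {np.max(np.abs(e)):.4f}")
```

The split mirrors the abstract's structure: the open-loop PD term learns a feedforward compensation of the hysteresis and linear dynamics across iterations without a model-based compensator, while the closed-loop P term reacts to the current trial's measured error; resetting the plant state each trial stands in for the initial state learning discussed in the paper.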
Pages: 18
Related Papers
50 records in total
  • [1] Generator excitation control method based on iterative learning control with initial state learning and model-free adaptive grey prediction control
    Bai J.
    Wu J.
    Wang G.
    Hu Y.
    2018, Eastern Macedonia and Thrace Institute of Technology (11) : 31 - 39
  • [2] Learning model-free motor control
    Agostini, A
    Celaya, E
    ECAI 2004: 16TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2004, 110 : 947 - 948
  • [3] The model-free learning adaptive control of a class of SISO nonlinear systems
    Hou, ZS
    Huang, WH
    PROCEEDINGS OF THE 1997 AMERICAN CONTROL CONFERENCE, VOLS 1-6, 1997, : 343 - 344
  • [4] Model-Free Learning Control of Nonlinear Discrete-Time Systems
    Sadegh, Nader
    2011 AMERICAN CONTROL CONFERENCE, 2011, : 3553 - 3558
  • [5] Model-Free Quantum Control with Reinforcement Learning
    Sivak, V. V.
    Eickbusch, A.
    Liu, H.
    Royer, B.
    Tsioutsios, I.
    Devoret, M. H.
    PHYSICAL REVIEW X, 2022, 12 (01)
  • [6] Model-free learning of wire winding control
    Rodriguez, Abdel
    Vrancx, Peter
    Nowe, Ann
    Hostens, Erik
    2013 9TH ASIAN CONTROL CONFERENCE (ASCC), 2013,
  • [7] Model-free learning control for unstable system
    Ribeiro, CHC
    Hemerly, EM
    ELECTRONICS LETTERS, 1998, 34 (21) : 2070 - 2071
  • [8] Model-Free Learning for Massive MIMO Systems: Stochastic Approximation Adjoint Iterative Learning Control
    Aarnoudse, Leontine
    Oomen, Tom
    2021 AMERICAN CONTROL CONFERENCE (ACC), 2021, : 2181 - 2186
  • [9] Model-Free Learning for Massive MIMO Systems: Stochastic Approximation Adjoint Iterative Learning Control
    Aarnoudse, Leontine
    Oomen, Tom
    IEEE CONTROL SYSTEMS LETTERS, 2021, 5 (06): : 1946 - 1951
  • [10] Model-free learning control of neutralization processes using reinforcement learning
    Syafiie, S.
    Tadeo, F.
    Martinez, E.
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2007, 20 (06) : 767 - 782