On Convergence of the Iteratively Preconditioned Gradient-Descent (IPG) Observer

Times Cited: 0
Authors
Chakrabarti, Kushal [1 ]
Chopra, Nikhil [2 ]
Affiliations
[1] Tata Consultancy Serv Res, Div Data & Decis Sci, Mumbai 400607, India
[2] Univ Maryland Coll Pk, Dept Mech Engn, College Pk, MD 20742 USA
Source
IEEE CONTROL SYSTEMS LETTERS, 2024, 8 : 1715 - 1720
Keywords
Estimation; observers for nonlinear systems; optimization algorithms; TIME; SYSTEMS; DESIGN;
DOI
10.1109/LCSYS.2024.3416337
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Subject Classification Code
0812;
Abstract
This letter considers the observer design problem for discrete-time nonlinear dynamical systems with sampled measurements. The recently proposed Iteratively Preconditioned Gradient-Descent (IPG) observer, a Newton-type observer, has been empirically shown to be more robust against measurement noise than prominent nonlinear observers, a property that other Newton-type observers lack. However, no theoretical guarantees on the convergence of the IPG observer had been provided. This letter presents a rigorous convergence analysis of the IPG observer for a class of nonlinear systems in a deterministic setting, proving its local linear convergence to the actual trajectory. The assumptions are standard in the existing literature on Newton-type observers, and the analysis further confirms the relationship between the IPG observer and the Newton observer, which had previously only been hypothesized.
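Since the record reproduces only the abstract, the following is a minimal sketch of the IPG iteration as described in the authors' earlier work on linear regression (related papers [4]-[5] below), applied here to a simple least-squares cost. The cost function, the step sizes alpha and delta, and the regularizer beta are illustrative assumptions, not quantities taken from the letter.

    import numpy as np

    # Minimal sketch of the iteratively preconditioned gradient-descent (IPG)
    # update on a linear least-squares cost f(x) = 0.5 * ||A x - b||^2.
    # Step sizes (alpha, delta) and regularizer (beta) are assumed values.

    rng = np.random.default_rng(seed=0)
    m, n = 20, 5
    A = rng.standard_normal((m, n))   # full column rank with probability one
    x_true = rng.standard_normal(n)
    b = A @ x_true

    H = A.T @ A                           # Hessian of the least-squares cost
    alpha = 1.0 / np.linalg.norm(H, 2)    # preconditioner step size (assumed)
    delta, beta = 1.0, 0.0                # gradient step size, regularizer (assumed)

    x = np.zeros(n)        # state estimate
    K = np.zeros((n, n))   # preconditioner matrix, driven toward inv(H)

    for _ in range(300):
        g = A.T @ (A @ x - b)                                      # gradient
        K = K - alpha * ((H + beta * np.eye(n)) @ K - np.eye(n))   # precondition
        x = x - delta * (K @ g)                                    # update estimate

    print(np.linalg.norm(x - x_true))   # estimation error, near zero at convergence

For beta = 0 the preconditioner K converges to the inverse Hessian, so each update approaches a Newton step; this is consistent with the relationship between the IPG observer and the Newton observer noted in the abstract.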
Pages: 1715 - 1720
Number of Pages: 6
Related Papers (50 in total)
  • [1] Accelerating the Iteratively Preconditioned Gradient-Descent Algorithm using Momentum
    Liu, Tianchen
    Chakrabarti, Kushal
    Chopra, Nikhil
    2023 NINTH INDIAN CONTROL CONFERENCE, ICC, 2023, : 68 - 73
  • [2] Iteratively Preconditioned Gradient-Descent Approach for Moving Horizon Estimation Problems
    Liu, Tianchen
    Chakrabarti, Kushal
    Chopra, Nikhil
    2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 8457 - 8462
  • [3] Novel Iteratively Preconditioned Gradient-Descent Algorithm via Successive Over-Relaxation Formulation
    Liu, Tianchen
    Chakrabarti, Kushal
    Chopra, Nikhil
    IEEE CONTROL SYSTEMS LETTERS, 2024, 8 : 3105 - 3110
  • [4] Robustness of Iteratively Pre-Conditioned Gradient-Descent Method: The Case of Distributed Linear Regression Problem
    Chakrabarti, Kushal
    Gupta, Nirupam
    Chopra, Nikhil
    IEEE CONTROL SYSTEMS LETTERS, 2021, 5 (06): : 2180 - 2185
  • [5] Robustness of Iteratively Pre-Conditioned Gradient-Descent Method: The Case of Distributed Linear Regression Problem
    Chakrabarti, Kushal
    Gupta, Nirupam
    Chopra, Nikhil
    2021 AMERICAN CONTROL CONFERENCE (ACC), 2021, : 2248 - 2253
  • [6] Practical Gradient-Descent for Memristive Crossbars
    Nair, Manu V.
    Dudek, Piotr
    2015 INTERNATIONAL CONFERENCE ON MEMRISTIVE SYSTEMS (MEMRISYS), 2015,
  • [7] Preconditioned Stochastic Gradient Descent
    Li, Xi-Lin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (05) : 1454 - 1466
  • [8] An implicit gradient-descent procedure for minimax problems
    Essid, Montacer
    Tabak, Esteban G.
    Trigila, Giulio
    MATHEMATICAL METHODS OF OPERATIONS RESEARCH, 2023, 97 (01) : 57 - 89
  • [9] Kernelized vector quantization in gradient-descent learning
    Villmann, Thomas
    Haase, Sven
    Kaden, Marika
    NEUROCOMPUTING, 2015, 147 : 83 - 95