A Distributed Conjugate Gradient Online Learning Method over Networks

Cited: 1
|
Authors
Xu, Cuixia [1 ,2 ]
Zhu, Junlong [1 ]
Shang, Youlin [2 ]
Wu, Qingtao [1 ]
Affiliations
[1] Henan Univ Sci & Technol, Sch Informat Engn, Luoyang 471023, Peoples R China
[2] Henan Univ Sci & Technol, Sch Math & Stat, Luoyang 471023, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SUFFICIENT DESCENT PROPERTY; CONVEX-OPTIMIZATION; CONVERGENCE PROPERTIES; ALGORITHM;
DOI
10.1155/2020/1390963
CLC Classification
O1 [Mathematics];
Subject Classification Code
0701; 070101;
Abstract
In a distributed online optimization problem over an undirected multiagent network with a convex constraint set, the local objective functions are convex and vary over time. Most existing methods for this problem are based on the steepest gradient descent method, whose convergence slows as the number of iterations grows. To accelerate convergence, we present a distributed online conjugate gradient algorithm. Unlike a gradient method, its search directions form a set of mutually conjugate vectors, and the step sizes are obtained through an exact line search. We analyze the convergence of the algorithm theoretically and obtain a regret bound of O(√T), where T is the number of iterations. Finally, numerical experiments conducted on a sensor network demonstrate the performance of the proposed algorithm.
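The abstract does not spell out the conjugacy formula, the line search, or the constraint set, so the sketch below makes illustrative assumptions: a PRP+ conjugacy coefficient, a doubly stochastic mixing matrix `W` for consensus, a diminishing step size standing in for the paper's exact line search, and a Euclidean ball as the convex constraint. It shows only the general shape of a distributed online conjugate gradient iteration, not the authors' exact method.

```python
import numpy as np

def distributed_online_cg(grad_fns, W, x0, T, radius=10.0):
    """One plausible distributed online conjugate gradient scheme.

    grad_fns : grad_fns[i](t, x) returns agent i's gradient of its
               time-t local loss at the point x.
    W        : doubly stochastic mixing matrix of the network.
    x0       : common initial point inside the convex set (a ball here).
    """
    n, d = len(grad_fns), x0.size
    X = np.tile(x0, (n, 1))      # current iterate of each agent (one row each)
    D = np.zeros((n, d))         # conjugate search directions
    G_prev = np.zeros((n, d))
    for t in range(T):
        G = np.array([grad_fns[i](t, X[i]) for i in range(n)])
        if t == 0:
            D = -G
        else:
            # PRP+ conjugacy coefficient (an illustrative choice, not the paper's)
            num = (G * (G - G_prev)).sum(axis=1)
            den = np.maximum((G_prev ** 2).sum(axis=1), 1e-12)
            beta = np.maximum(num / den, 0.0)
            D = -G + beta[:, None] * D
        alpha = 1.0 / np.sqrt(t + 1.0)   # diminishing step, standing in for a line search
        X = W @ (X + alpha * D)          # local CG step followed by consensus mixing
        # projection onto a Euclidean ball models the convex constraint set
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        X = np.where(norms > radius, X * (radius / norms), X)
        G_prev = G
    return X
```

For instance, two agents with quadratic losses centered at different points are driven toward consensus near the average of the two centers, since the mixing step averages the agents' opposing pulls.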
Pages: 13
Related Papers
50 records in total
  • [41] Distributed and Inexact Proximal Gradient Method for Online Convex Optimization
    Bastianello, Nicola
    Dall'Anese, Emiliano
    2021 EUROPEAN CONTROL CONFERENCE (ECC), 2021, : 2432 - 2437
  • [42] A DISTRIBUTED ALGORITHM FOR DICTIONARY LEARNING OVER NETWORKS
    Zhao, Ming-Min
    Shi, Qingjiang
    Hong, Mingyi
    2016 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2016, : 505 - 509
  • [43] DISTRIBUTED COUPLED LEARNING OVER ADAPTIVE NETWORKS
    Alghunaim, Sulaiman A.
    Sayed, Ali H.
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 6353 - 6357
  • [44] Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
    Xu, Jinming
    Zhu, Shanying
    Soh, Yeng Chai
    Xie, Lihua
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2018, 63 (02) : 434 - 448
  • [45] A Distributed Online Learning Algorithm with Weight Decay in Networks
    Fang, Runyue
    Shen, Xiuyu
    Li, Dequan
    Zhou, Yuejin
    Wu, Xiongjun
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 688 - 692
  • [46] Distributed Online Learning for Coexistence in Cognitive Radar Networks
    Howard, William W.
    Martone, Anthony F.
    Buehrer, R. Michael
    IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, 2023, 59 (02) : 1202 - 1216
  • [47] Projection-free Distributed Online Learning in Networks
    Zhang, Wenpeng
    Zhao, Peilin
    Zhu, Wenwu
    Hoi, Steven C. H.
    Zhang, Tong
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [48] Deterministic convergence of an online gradient method for neural networks
    Wu, W
    Xu, YS
    JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 2002, 144 (1-2) : 335 - 347
  • [49] Convergence of Online Gradient Method for Recurrent Neural Networks
    Ding, Xiaoshuai
    Zhang, Ruiting
    JOURNAL OF INTERDISCIPLINARY MATHEMATICS, 2015, 18 (1-2) : 159 - 177
  • [50] The Parallel Approach to the Conjugate Gradient Learning Algorithm for the Feedforward Neural Networks
    Bilski, Jaroslaw
    Smolag, Jacek
    Galushkin, Alexander I.
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING ICAISC 2014, PT I, 2014, 8467 : 12 - 21