Numerical methods for distributed stochastic compositional optimization problems with aggregative structure

Cited by: 0
Authors
Zhao, Shengchao [1 ]
Liu, Yongchao [2 ]
Affiliations
[1] China Univ Min & Technol, Sch Math, Xuzhou, Peoples R China
[2] Dalian Univ Technol, Sch Math Sci, Dalian, Peoples R China
Keywords
Distributed stochastic compositional optimization; aggregative structure; hybrid variance reduction technique; dynamic consensus mechanism; communication compression; GRADIENT DESCENT; ALGORITHMS;
DOI
10.1080/10556788.2024.2381214
Chinese Library Classification
TP31 [Computer software];
Discipline classification codes
081202; 0835;
Abstract
The paper studies distributed stochastic compositional optimization problems over networks, where the agents' common inner-level function is the sum of each agent's private expectation function. Exploiting the aggregative structure of the inner-level function, we employ a hybrid variance reduction method to estimate each agent's private expectation function and apply a dynamic consensus mechanism to track the aggregate inner-level function. Combining these with the standard distributed stochastic gradient descent method, we propose a distributed aggregative stochastic compositional gradient descent method. When the objective function is smooth, the proposed method achieves the convergence rate $\mathcal{O}(K^{-1/2})$. We further combine the proposed method with communication compression and obtain a communication-compressed variant of the distributed aggregative stochastic compositional gradient descent method, which maintains the convergence rate $\mathcal{O}(K^{-1/2})$. Simulation experiments on decentralized reinforcement learning verify the effectiveness of the proposed methods.
Pages: 32