Momentum-Based Federated Reinforcement Learning with Interaction and Communication Efficiency

Cited by: 0
Authors
Yue, Sheng [1 ]
Hua, Xingyuan [2 ]
Chen, Lili [1 ]
Ren, Ju [1 ,3 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, BNRist, Beijing, Peoples R China
[2] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing, Peoples R China
[3] Zhongguancun Lab, Beijing, Peoples R China
Funding
National Key R&D Program of China;
DOI: 10.1109/INFOCOM52122.2024.10621260
CLC number
TP3 [computing technology, computer technology];
Discipline code
0812;
Abstract
Federated Reinforcement Learning (FRL) has garnered increasing attention recently. However, due to the intrinsic spatio-temporal non-stationarity of data distributions, current approaches typically suffer from high interaction and communication costs. In this paper, we introduce a new FRL algorithm, named MFPO, that utilizes momentum, importance sampling, and additional server-side adjustment to control the shift of stochastic policy gradients and enhance the efficiency of data utilization. We prove that, with proper selection of the momentum parameters and interaction frequency, MFPO achieves Õ(H N^{-1} ε^{-3/2}) interaction complexity and Õ(ε^{-1}) communication complexity (N denotes the number of agents), where the interaction complexity achieves linear speedup with the number of agents and the communication complexity matches the best achievable by existing first-order FL algorithms. Extensive experiments corroborate the substantial performance gains of MFPO over existing methods on a suite of complex and high-dimensional benchmarks.
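The abstract's recipe (a momentum-corrected stochastic policy gradient with an importance-sampling term, averaged by the server over N agents) follows the general STORM-style variance-reduction pattern. The sketch below illustrates that pattern only; the function names, toy random gradients, momentum parameter, and step size are all illustrative assumptions, not the paper's actual MFPO implementation.

```python
import numpy as np

def momentum_estimator(u_prev, grad_curr, grad_prev_reweighted, beta):
    """STORM-style momentum gradient estimate (illustrative, not MFPO's exact rule):
        u_t = beta * g_t + (1 - beta) * (u_{t-1} + g_t - g~_{t-1})
    where g~_{t-1} stands for the previous policy's gradient evaluated on
    current trajectories via importance sampling. With beta = 1 this
    reduces to the plain stochastic gradient g_t."""
    return beta * grad_curr + (1.0 - beta) * (u_prev + grad_curr - grad_prev_reweighted)

def server_step(theta, agent_estimates, lr):
    """Server averages the N agents' momentum estimates and takes one
    gradient-ascent step; averaging over N is the mechanism behind the
    linear interaction-complexity speedup claimed in the abstract."""
    return theta + lr * np.mean(agent_estimates, axis=0)

# Toy federated loop: 3 agents, 4-dimensional policy parameters, random
# vectors standing in for the per-agent policy gradients.
rng = np.random.default_rng(0)
theta = np.zeros(4)
u = [np.zeros(4) for _ in range(3)]  # per-agent momentum buffers
for t in range(5):
    estimates = []
    for i in range(3):
        g_curr = rng.normal(size=4)  # stand-in for the current policy gradient
        g_prev = rng.normal(size=4)  # stand-in for the IS-reweighted old gradient
        u[i] = momentum_estimator(u[i], g_curr, g_prev, beta=0.5)
        estimates.append(u[i])
    theta = server_step(theta, estimates, lr=0.1)
```

The importance-sampling correction is what keeps the recursive momentum term unbiased when the policy (and hence the trajectory distribution) shifts between rounds, which is where the interaction-cost savings come from.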
Pages: 1131 - 1140
Number of pages: 10
Related papers
50 in total (items [41]-[50] shown)
  • [41] Momentum-Based Topology Estimation of Articulated Objects
    Tirupachuri, Yeshasvi
    Traversaro, Silvio
    Nori, Francesco
    Pucci, Daniele
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 2, 2020, 1038 : 1093 - 1105
  • [42] Convergence of Momentum-Based Stochastic Gradient Descent
    Jin, Ruinan
    He, Xingkang
    2020 IEEE 16TH INTERNATIONAL CONFERENCE ON CONTROL & AUTOMATION (ICCA), 2020, : 779 - 784
  • [43] Momentum in Reinforcement Learning
    Vieillard, Nino
    Scherrer, Bruno
    Pietquin, Olivier
    Geist, Matthieu
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108
  • [44] Momentum-based parameterization of dynamic character motion
    Abe, YH
    Liu, CK
    Popovic, Z
    GRAPHICAL MODELS, 2006, 68 (02) : 194 - 211
  • [45] Practical and Fast Momentum-Based Power Methods
    Rabbani, Tahseen
    Jain, Apollo
    Rajkumar, Arjun
    Huang, Furong
    MATHEMATICAL AND SCIENTIFIC MACHINE LEARNING, VOL 145, 2021, 145 : 721 - 756
  • [46] Deep Learning-Enabled Orbital Angular Momentum-Based Information Encryption Transmission
    Feng, Fu
    Hu, Junbao
    Guo, Zefeng
    Gan, Jia-An
    Chen, Peng-Fei
    Chen, Guangyong
    Min, Changjun
    Yuan, Xiaocong
    Somekh, Michael
    ACS PHOTONICS, 2022, 9 (03) : 820 - 829
  • [47] Federated Offline Reinforcement Learning
    Zhou, Doudou
    Zhang, Yufeng
    Sonabend-W, Aaron
    Wang, Zhaoran
    Lu, Junwei
    Cai, Tianxi
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2024, 119 (548) : 3152 - 3163
  • [48] Resource-Aware Personalized Federated Learning Based on Reinforcement Learning
    Wu, Tingting
    Li, Xiao
    Gao, Pengpei
    Yu, Wei
    Xin, Lun
    Guo, Manxue
    IEEE COMMUNICATIONS LETTERS, 2025, 29 (01) : 175 - 179
  • [49] Client Selection Method for Federated Learning Based on Grouping Reinforcement Learning
    Li, Guo-ming
    Liu, Wai-xi
    Guo, Zhen-zheng
    Chen, Dao-xiao
    2024 9TH INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATION SYSTEMS, ICCCS 2024, 2024, : 327 - 332
  • [50] Reinforcement Learning-Based Personalized Differentially Private Federated Learning
    Lu, Xiaozhen
    Liu, Zihan
    Xiao, Liang
    Dai, Huaiyu
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 465 - 477