An Optimization Framework for Federated Edge Learning

Cited by: 6
Authors
Li, Yangchen [1 ]
Cui, Ying [1 ,2 ,3 ]
Lau, Vincent
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Hong Kong Univ Sci & Technol Guangzhou, IoT Thrust, Guangzhou 511400, Guangdong, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept ECE, Hong Kong, Peoples R China
Funding
Natural Science Foundation of Shanghai;
Keywords
Servers; Convergence; Computational modeling; Quantization (signal); Optimization; Edge computing; Costs; Federated learning; stochastic gradient descent; quantization; convergence analysis; optimization;
DOI
10.1109/TWC.2022.3199564
Chinese Library Classification (CLC) code
TM [Electrical Engineering]; TN [Electronic Technology and Communication Technology];
Discipline classification code
0808; 0809;
Abstract
The optimal design of federated learning (FL) algorithms for solving general machine learning (ML) problems in practical edge computing systems with quantized message passing remains an open problem. This paper considers an edge computing system where the server and workers have possibly different computing and communication capabilities and employ quantization before transmitting messages. To explore the full potential of FL in such an edge computing system, we first present a general FL algorithm, namely GenQSGD, parameterized by the numbers of global and local iterations, the mini-batch size, and the step size sequence. Then, we analyze its convergence for an arbitrary step size sequence and specify the convergence results under three commonly adopted step size rules, namely the constant, exponential, and diminishing step size rules. Next, we optimize the algorithm parameters to minimize the energy cost under time and convergence error constraints, with a focus on the overall FL implementation process. Specifically, for any given step size sequence under each considered step size rule, we optimize the numbers of global and local iterations and the mini-batch size to optimally implement FL for applications with preset step size sequences. We also optimize the step size sequence along with these algorithm parameters to explore the full potential of FL. The resulting optimization problems are challenging non-convex problems with non-differentiable constraint functions. We propose iterative algorithms to obtain KKT points using general inner approximation (GIA) and tricks for solving complementary geometric programming (CGP). Finally, we numerically demonstrate the remarkable gains of GenQSGD with optimized algorithm parameters over existing FL algorithms and reveal the significance of optimally designing general FL algorithms.
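To make the abstract easier to follow, the sketch below illustrates the GenQSGD structure it summarizes: K0 global rounds, K_local local SGD steps per worker on mini-batches of size B, quantized downlink and uplink messages, and a choice among the constant, exponential, and diminishing step size rules. It is a minimal illustration only; the stochastic quantizer, the worker count, the grad_fn interface, and all default values are assumptions of this sketch, not the authors' implementation or the optimized parameters from the paper.

```python
import numpy as np

def quantize(v, s=16):
    # Stochastic uniform quantizer with s levels (illustrative stand-in
    # for the quantized message passing assumed in the paper).
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v.copy()
    scaled = np.abs(v) / norm * s
    lower = np.floor(scaled)
    levels = lower + (np.random.rand(*v.shape) < (scaled - lower))
    return np.sign(v) * levels * norm / s

def step_size(t, rule="diminishing", gamma0=0.1, rho=0.95):
    # The three step size rules considered in the paper.
    if rule == "constant":
        return gamma0
    if rule == "exponential":
        return gamma0 * rho ** t
    return gamma0 / (t + 1)  # diminishing

def gen_qsgd(grad_fn, x0, K0=50, K_local=5, B=32, n_workers=4, rule="diminishing"):
    # GenQSGD-style loop: K0 global rounds, K_local local SGD steps per worker,
    # mini-batch size B, with quantization on both downlink and uplink messages.
    x = x0.astype(float).copy()
    for t in range(K0):                               # global iterations
        gamma = step_size(t, rule)
        x_bcast = quantize(x)                         # quantized model broadcast
        deltas = []
        for w in range(n_workers):
            x_local = x_bcast.copy()
            for _ in range(K_local):                  # local iterations
                x_local = x_local - gamma * grad_fn(x_local, B, w)
            deltas.append(quantize(x_local - x_bcast))  # quantized local update
        x = x + np.mean(deltas, axis=0)               # server aggregation
    return x

# Toy usage on a hypothetical least-squares objective split across workers.
rng = np.random.default_rng(0)
A = [rng.standard_normal((64, 8)) for _ in range(4)]
b = [Ai @ np.ones(8) for Ai in A]

def grad_fn(x, batch_size, worker):
    idx = rng.integers(0, A[worker].shape[0], size=batch_size)
    Ai, bi = A[worker][idx], b[worker][idx]
    return Ai.T @ (Ai @ x - bi) / batch_size

x_hat = gen_qsgd(grad_fn, x0=np.zeros(8), K0=100, rule="diminishing")
```

Under this reading, the optimization studied in the paper amounts to choosing K0, K_local, B, and the step size sequence to minimize energy cost subject to time and convergence error constraints.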
Pages: 934-949
Page count: 16
Related papers
50 items in total
  • [1] An Optimization Framework for Federated Edge Learning
    Li, Yangchen
    Cui, Ying
    Lau, Vincent
    2022 IEEE 23RD INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATION (SPAWC), 2022,
  • [2] Elastic Optimization for Stragglers in Edge Federated Learning
    Sultana, Khadija
    Ahmed, Khandakar
    Gu, Bruce
    Wang, Hua
    BIG DATA MINING AND ANALYTICS, 2023, 6 (04) : 404 - 420
  • [3] A decentralized asynchronous federated learning framework for edge devices
    Wang, Bin
    Tian, Zhao
    Ma, Jie
    Zhang, Wenju
    She, Wei
    Liu, Wei
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2025, 166
  • [4] Federated learning framework for mobile edge computing networks
    Fantacci, Romano
    Picano, Benedetta
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2020, 5 (01) : 15 - 21
  • [5] Hierarchical Federated Learning with Edge Optimization in Constrained Networks
    Zhang, Xiaoyang
    Tham, Chen-Khong
    Wang, Wenyi
    2024 IEEE 99TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2024-SPRING, 2024,
  • [6] Optimization-Based GenQSGD for Federated Edge Learning
    Li, Yangchen
    Cui, Ying
    Lau, Vincent
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [7] Accelerating Federated Edge Learning via Topology Optimization
    Huang, Shanfeng
    Zhang, Zezhong
    Wang, Shuai
    Wang, Rui
    Huang, Kaibin
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (03) : 2056 - 2070
  • [8] Ferrari: A Personalized Federated Learning Framework for Heterogeneous Edge Clients
    Yao, Zhiwei
    Liu, Jianchun
    Xu, Hongli
    Wang, Lun
    Qian, Chen
    Liao, Yunming
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (10) : 10031 - 10045
  • [9] A Distributed Federated Transfer Learning Framework for Edge Optical Network
    Yang, Hui
    Yao, Qiuyan
    Zhang, Jie
    2020 ASIA COMMUNICATIONS AND PHOTONICS CONFERENCE (ACP) AND INTERNATIONAL CONFERENCE ON INFORMATION PHOTONICS AND OPTICAL COMMUNICATIONS (IPOC), 2020,
  • [10] FLight: A lightweight federated learning framework in edge and fog computing
    Zhu, Wuji
    Goudarzi, Mohammad
    Buyya, Rajkumar
SOFTWARE-PRACTICE & EXPERIENCE, 2024, 54 (05) : 813 - 841