Joint Optimal Quantization and Aggregation of Federated Learning Scheme in VANETs

Cited: 19
Authors
Li, Yifei [1]
Guo, Yijia [2]
Alazab, Mamoun [3]
Chen, Shengbo [1]
Shen, Cong [4]
Yu, Keping [5]
Affiliations
[1] Henan Univ, Sch Comp & Informat Engn, Kaifeng 475001, Peoples R China
[2] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100190, Peoples R China
[3] Charles Darwin Univ, Coll Engn IT & Environm, Casuarina, NT 0810, Australia
[4] Univ Virginia, Charles L Brown Dept Elect & Comp Engn, Charlottesville, VA 22904 USA
[5] Waseda Univ, Global Informat & Telecommun Inst, Shinjuku Ku, Tokyo 1698050, Japan
Funding
National Natural Science Foundation of China; Japan Society for the Promotion of Science
Keywords
Quantization (signal); Servers; Collaborative work; Optimization; Data models; Computational modeling; Standards; Artificial intelligence; vehicular ad hoc networks; federated learning; quantization; VEHICLES
DOI
10.1109/TITS.2022.3145823
Chinese Library Classification
TU [Building Science]
Subject Classification Code
0813
Abstract
Vehicular ad hoc networks (VANETs) are among the most promising approaches for Intelligent Transportation Systems (ITS). With the rapid growth in the volume of traffic data, deep learning based algorithms have been used extensively in VANETs. The recently proposed federated learning is an attractive candidate for collaborative machine learning: instead of transferring a plethora of raw data to a centralized server, all clients train their respective local models and upload them to the server for model aggregation. Model quantization is an effective approach to the communication efficiency problem in federated learning, yet existing studies largely assume homogeneous quantization for all clients. In reality, however, clients are predominantly heterogeneous, supporting different quantization precision levels. In this work, we propose FedDO - Federated Learning with Double Optimization. Minimizing the drift term in the convergence analysis, which is a weighted sum of squared quantization errors (SQE) over all clients, leads to a double optimization at both the client and server sides. In particular, each client adopts a fully distributed, instantaneous (per learning round) and individualized (per client) quantization scheme that minimizes its own squared quantization error, and the server computes the aggregation weights that minimize the weighted sum of squared quantization errors over all clients. Numerical experiments show that the minimal-SQE quantizer outperforms a widely adopted linear quantizer for federated learning. We also demonstrate the performance advantages of FedDO over vanilla FedAvg with standard equal weights and linear quantization.
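To make the abstract's two-level optimization concrete, the sketch below is a minimal Python illustration, not the paper's actual algorithm: min_sqe_quantize stands in for the client-side minimal-SQE quantizer by searching over clipping ranges of a uniform quantizer (a hypothetical design choice), and server_weights assumes the drift term has the form sum_i w_i^2 * e_i with sum_i w_i = 1, whose minimizer is w_i proportional to 1/e_i. All function names, the clipping search, and the drift-term form are assumptions made for illustration.

```python
import numpy as np

def linear_quantize(x, bits):
    # Widely adopted baseline: uniform (linear) quantization over [min(x), max(x)].
    levels = 2 ** bits
    lo, hi = float(x.min()), float(x.max())
    step = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / step) * step

def min_sqe_quantize(x, bits, num_candidates=64):
    # Client-side sketch: choose the symmetric clipping range [-c, c] that
    # minimizes this client's squared quantization error (SQE), then quantize
    # uniformly within it. The grid search over c is a hypothetical stand-in
    # for the paper's minimal-SQE quantizer.
    levels = 2 ** bits
    max_abs = float(np.abs(x).max())
    best_q, best_err = None, np.inf
    for c in np.linspace(max_abs / num_candidates, max_abs, num_candidates):
        step = 2 * c / (levels - 1)
        q = np.round((np.clip(x, -c, c) + c) / step) * step - c
        err = float(np.sum((x - q) ** 2))  # this client's SQE for range c
        if err < best_err:
            best_q, best_err = q, err
    return best_q, best_err

def server_weights(sqe_list):
    # Server-side sketch: minimizing sum_i w_i^2 * e_i subject to sum_i w_i = 1
    # (one plausible form of the drift term) yields w_i proportional to 1 / e_i.
    inv = 1.0 / np.asarray(sqe_list)
    return inv / inv.sum()

# Toy round with three heterogeneous clients supporting 4, 6, and 8 bits.
rng = np.random.default_rng(0)
updates = [rng.normal(size=1000) for _ in range(3)]
bits = [4, 6, 8]
quantized, errors = zip(*(min_sqe_quantize(u, b) for u, b in zip(updates, bits)))
weights = server_weights(errors)
aggregate = sum(w * q for w, q in zip(weights, quantized))  # weighted aggregation

linear_errors = [float(np.sum((u - linear_quantize(u, b)) ** 2))
                 for u, b in zip(updates, bits)]
print("min-SQE errors:", [round(e, 2) for e in errors])
print("linear errors: ", [round(e, 2) for e in linear_errors])
print("weights:       ", np.round(weights, 3))
```

On Gaussian-like updates, the clipping search typically attains a lower SQE than the full-range linear baseline at the same bit budget, echoing the abstract's comparison, and low-precision clients receive smaller aggregation weights because their SQE is larger. The exact quantizer and drift term are specified in the paper itself.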
Pages: 19852-19863
Page count: 12
Related Papers (50 items in total)
  • [31] Robust Aggregation for Federated Learning
    Pillutla, Krishna
    Kakade, Sham M.
    Harchaoui, Zaid
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2022, 70: 1142-1154
  • [32] Joint Client Scheduling and Quantization Optimization in Energy Harvesting-Enabled Federated Learning Networks
    Ni, Zhengwei
    Zhang, Zhaoyang
    Luong, Nguyen Cong
    Niyato, Dusit
    Kim, Dong In
    Feng, Shaohan
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23(8): 9566-9582
  • [33] CESA: Communication efficient secure aggregation scheme via sparse graph in federated learning
    Wang, Ruijin
    Wang, Jinbo
    Li, Xiong
    Lai, Jinshan
    Zhang, Fengli
    Pei, Xikai
    Khan, Muhammad Khurram
    JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2024, 231
  • [34] Contract-based hierarchical security aggregation scheme for enhancing privacy in federated learning
    Wei, Qianjin
    Rao, Gang
    Wu, Xuanjing
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2024, 85
  • [35] Federated Learning-Based Privacy-Preserving Data Aggregation Scheme for IIoT
    Fan, Hongbin
    Huang, Changbing
    Liu, Yining
    IEEE ACCESS, 2023, 11: 6700-6707
  • [36] Neural network quantization in federated learning at the edge
    Tonellotto, Nicola
    Gotta, Alberto
    Nardini, Franco Maria
    Gadler, Daniele
    Silvestri, Fabrizio
    INFORMATION SCIENCES, 2021, 575: 417-436
  • [37] Efficient asynchronous federated learning with sparsification and quantization
    Jia, Juncheng
    Liu, Ji
    Zhou, Chendi
    Tian, Hao
    Dong, Mianxiong
    Dou, Dejing
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024, 36(9)
  • [38] Quantization Bits Allocation for Wireless Federated Learning
    Lan, Muhang
    Ling, Qing
    Xiao, Song
    Zhang, Wenyi
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, 22(11): 8336-8351
  • [39] UVeQFed: Universal Vector Quantization for Federated Learning
    Shlezinger, Nir
    Chen, Mingzhe
    Eldar, Yonina C.
    Poor, H. Vincent
    Cui, Shuguang
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69: 500-514