Towards Efficient Learning on the Computing Continuum: Advancing Dynamic Adaptation of Federated Learning

Cited by: 0
Authors
Valli, Mathis [1 ]
Costan, Alexandru [1 ]
Tedeschi, Cedric [1 ]
Cudennec, Loic [2 ]
Affiliations
[1] Univ Rennes, IRISA, CNRS, INRIA, Rennes, France
[2] DGA Maitrise Informat, Rennes, France
Keywords
federated learning; dynamic adaptation; computing continuum; machine learning; data privacy;
DOI
10.1145/3659995.3660042
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Federated Learning (FL) has emerged as a paradigm shift enabling heterogeneous clients and devices to collaborate on training a shared global model while preserving the privacy of their local data. However, a common yet impractical assumption in existing FL approaches is that the deployment environment is static, which is rarely true in heterogeneous and highly volatile environments like the Edge-Cloud Continuum, where FL is typically executed. While most current FL approaches process data in an online fashion, and are therefore adaptive by nature, they only support adaptation at the ML/DL level (e.g., through continual learning to tackle data and concept drift), putting aside the effects of system variance. Moreover, the study and validation of FL approaches rely strongly on simulations, which, although informative, tend to overlook the real-world complexities and dynamics of actual deployments, in particular with respect to changing network conditions, varying client resources, and security threats. In this paper, we take a first step towards addressing these challenges. We investigate the shortcomings of traditional, static FL models and identify areas of adaptation to tackle real-life deployment challenges. We devise a set of design principles for FL systems that can smartly adjust their strategies for aggregation, communication, privacy, and security in response to changing system conditions. To illustrate the benefits envisioned by these strategies, we present the results of a set of initial experiments on a 25-node testbed. The experiments, which vary both the number of participating clients and the network conditions, show how existing FL systems are strongly affected by changes in their operational environment. Based on these insights, we propose a set of takeaways for the FL community, towards further research into FL systems that are not only accurate and scalable but also able to dynamically adapt to the unpredictability of real-world deployments.
Pages: 34-41
Page count: 8