FedPETuning: When Federated Learning Meets the Parameter-Efficient Tuning Methods of Pre-trained Language Models

Cited by: 0
Authors
Zhang, Zhuo [1 ,2 ]
Yang, Yuanhang [1 ]
Dai, Yong [4 ]
Wang, Qifan [5 ]
Yu, Yue [2 ]
Que, Lizhen [3 ]
Xu, Zenglin [1 ,2 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] Monash Univ, Melbourne, Vic, Australia
[4] Tencent, Shenzhen, Peoples R China
[5] Meta AI, Burlingame, CA USA
DOI: Not available
Abstract
With increasing concerns about data privacy, there is a growing need to fine-tune pre-trained language models (PLMs) for downstream tasks located on end-user devices or local clients without transmitting data to a central server. This need calls for research on federated learning (FL) for PLMs. However, large PLMs bring prohibitive communication overhead and local model adaptation costs to the FL system. To this end, we investigate parameter-efficient tuning (PETuning) of PLMs and develop a corresponding federated benchmark for four representative PETuning methods, dubbed FedPETuning. Specifically, FedPETuning provides the first holistic empirical study of representative PLM tuning methods in FL, covering privacy attacks, performance comparisons, and resource-constrained analysis. Extensive experimental results indicate that FedPETuning can efficiently defend against privacy attacks and maintain acceptable performance while substantially reducing resource consumption. The open-source code and data are available at https://github.com/SMILELab-FL/FedPETuning.
Pages: 9963-9977
Number of pages: 15
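The main resource saving described in the abstract comes from exchanging only the small PETuning modules between clients and the server instead of the full PLM. The PyTorch sketch below illustrates that idea only and is not the authors' implementation: the TinyAdapterModel class, the toy data, and all hyperparameters are illustrative assumptions. Each simulated client fine-tunes an adapter and task head on top of a shared frozen backbone, and the server averages just those trainable parameters.

import copy

import torch
import torch.nn as nn


class TinyAdapterModel(nn.Module):
    """A stand-in for a PLM: a frozen 'backbone' plus small trainable modules."""

    def __init__(self, dim=16, num_classes=2):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)      # placeholder for the large frozen PLM
        self.adapter = nn.Linear(dim, dim)       # small PET module (trainable)
        self.head = nn.Linear(dim, num_classes)  # task head (trainable)
        for p in self.backbone.parameters():     # the backbone is never updated or sent
            p.requires_grad = False

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        h = h + self.adapter(h)                  # residual adapter
        return self.head(h)


def trainable_state(model):
    # Only these (few) parameters are ever communicated.
    return {n: p.detach().clone() for n, p in model.named_parameters() if p.requires_grad}


def local_update(base_model, global_pet, data, targets, epochs=1, lr=1e-2):
    model = copy.deepcopy(base_model)                 # every client holds the same frozen PLM
    model.load_state_dict(global_pet, strict=False)   # load the shared PET weights only
    optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss_fn(model(data), targets).backward()
        optimizer.step()
    return trainable_state(model)                     # upload only the PET parameters


def fedavg(client_states):
    # Server-side aggregation: average the small PET parameter dicts.
    averaged = copy.deepcopy(client_states[0])
    for name in averaged:
        averaged[name] = torch.stack([s[name] for s in client_states]).mean(dim=0)
    return averaged


if __name__ == "__main__":
    torch.manual_seed(0)
    base_model = TinyAdapterModel()              # the "pre-trained" model shared by all clients
    global_pet = trainable_state(base_model)
    clients = [(torch.randn(32, 16), torch.randint(0, 2, (32,))) for _ in range(4)]
    for round_id in range(3):                    # a few federated rounds
        updates = [local_update(base_model, global_pet, x, y) for x, y in clients]
        global_pet = fedavg(updates)
        sent = sum(v.numel() for v in global_pet.values())
        print(f"round {round_id}: {sent} parameters communicated per client")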
Related papers (50 in total)
  • [21] OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models
    Hu, Shengding
    Ding, Ning
    Zhao, Weilin
    Lv, Xingtai
    Zhang, Zhen
    Liu, Zhiyuan
    Sun, Maosong
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-DEMO 2023, VOL 3, 2023, : 274 - 281
  • [22] MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting
    Manas, Oscar
    Rodriguez, Pau
    Ahmadi, Saba
    Nematzadeh, Aida
    Goyal, Yash
    Agrawal, Aishwarya
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2523 - 2548
  • [23] APrompt: Attention Prompt Tuning for Efficient Adaptation of Pre-trained Language Models
    Wang, Qifan
    Mao, Yuning
    Wang, Jingang
    Yu, Hanchao
    Nie, Shaoliang
    Wang, Sinong
    Feng, Fuli
    Huang, Lifu
    Quan, Xiaojun
    Xu, Zenglin
    Liu, Dongfang
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 9147 - 9160
  • [24] Prompt Tuning for Discriminative Pre-trained Language Models
    Yao, Yuan
    Dong, Bowen
    Zhang, Ao
    Zhang, Zhengyan
    Xie, Ruobing
    Liu, Zhiyuan
    Lin, Leyu
    Sun, Maosong
    Wang, Jianyong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 3468 - 3473
  • [25] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning
    Gira, Michael
    Zhang, Ruisu
    Lee, Kangwook
    PROCEEDINGS OF THE SECOND WORKSHOP ON LANGUAGE TECHNOLOGY FOR EQUALITY, DIVERSITY AND INCLUSION (LTEDI 2022), 2022, : 59 - 69
  • [26] VL-MPFT: Multitask Parameter-Efficient Fine-Tuning for Visual-Language Pre-trained Models via Task-Adaptive Masking
    Zhu, Min
    Liu, Guanming
    Wei, Zhihua
    PATTERN RECOGNITION AND COMPUTER VISION, PT V, PRCV 2024, 2025, 15035 : 379 - 394
  • [27] Enhancing Scalability of Pre-trained Language Models via Efficient Parameter Sharing
    Liu, Peiyu
    Gao, Ze-Feng
    Chen, Yushuo
    Zhao, Wayne Xin
    Wen, Ji-Rong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 13771 - 13785
  • [28] FedBM: Stealing knowledge from pre-trained language models for heterogeneous federated learning
    Zhu, Meilu
    Yang, Qiushi
    Gao, Zhifan
    Yuan, Yixuan
    Liu, Jun
    MEDICAL IMAGE ANALYSIS, 2025, 102
  • [29] Federated Learning from Pre-Trained Models: A Contrastive Learning Approach
    Tan, Yue
    Long, Guodong
    Ma, Jie
    Liu, Lu
    Zhou, Tianyi
    Jiang, Jing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [30] Guiding The Last Layer in Federated Learning with Pre-Trained Models
    Legate, Gwen
    Bernier, Nicolas
    Caccia, Lucas
    Oyallon, Edouard
    Belilovsky, Eugene
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,