FedUTN: federated self-supervised learning with updating target network

Cited by: 8
Authors
Li, Simou [1 ]
Mao, Yuxing [1 ]
Li, Jian [1 ]
Xu, Yihang [1 ]
Li, Jinsen [1 ]
Chen, Xueshuo [1 ]
Liu, Siyang [1 ,2 ]
Zhao, Xianping [2 ]
Affiliations
[1] Chongqing Univ, State Key Lab Power Transmiss Equipment & Syst Se, Chongqing 400044, Peoples R China
[2] Yunnan Power Grid Co Ltd, Elect Power Res Inst, Kunming 650217, Yunnan, Peoples R China
Keywords
Computer vision; Self-supervised learning; Federated learning; Federated self-supervised learning
DOI
10.1007/s10489-022-04070-6
CLC Classification
TP18 [Theory of artificial intelligence];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning (SSL) is capable of learning noteworthy representations from unlabeled data, which has mitigated the problem of insufficient labeled data to a certain extent. The original SSL methods centered on centralized data, but growing awareness of privacy protection restricts the sharing of decentralized, unlabeled data generated by a variety of mobile devices, such as cameras, phones, and other terminals. Federated self-supervised learning (FedSSL) has emerged from recent efforts to combine federated learning, which has traditionally been used for supervised learning, with SSL. Informed by past work, we propose a new FedSSL framework, FedUTN. This framework aims to permit each client to train a model that works well on both independent and identically distributed (IID) and non-IID data. Each party possesses two asymmetrical networks, a target network and an online network. FedUTN first aggregates the online network parameters of each terminal and then updates the terminals' target networks with the aggregated parameters, which is a radical departure from the update technique utilized in earlier studies. In conjunction with this method, we offer a novel control algorithm to replace EMA for the training operation. Extensive trials demonstrate that: (1) it is feasible to utilize the aggregated online network to update the target network; (2) FedUTN's aggregation strategy is simpler, more effective, and more robust; and (3) FedUTN outperforms all other prevalent FedSSL algorithms, exceeding the SOTA algorithm by 0.5% to 1.6% under regular experimental conditions.
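The update rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each client holds an asymmetric online/target network pair, the server takes a FedAvg-style element-wise mean of the clients' online-network parameters, and each client's target network is then overwritten with that aggregate rather than tracked by an exponential moving average (EMA). All names (`Client`, `fedutn_round`, and so on) are illustrative, and parameters are flattened to plain vectors for brevity.

```python
from dataclasses import dataclass, field


@dataclass
class Client:
    """One federated party with an asymmetric online/target network pair."""
    online: list  # online-network parameters (flattened to a vector)
    target: list = field(default_factory=list)


def average(param_sets):
    """FedAvg-style element-wise mean of a list of parameter vectors."""
    n = len(param_sets)
    dim = len(param_sets[0])
    return [sum(p[i] for p in param_sets) / n for i in range(dim)]


def fedutn_round(clients):
    """One communication round in the spirit of FedUTN's update rule.

    The server aggregates the clients' online networks; each client's
    target network is then replaced outright by the aggregate, instead
    of the usual EMA update target = tau * target + (1 - tau) * online.
    """
    agg = average([c.online for c in clients])
    for c in clients:
        c.online = list(agg)  # clients resume local SSL training from agg
        c.target = list(agg)  # target network updated with aggregated params
    return agg
```

For contrast, a BYOL-style EMA scheme would keep each target network as a slow-moving copy of its own online network; the sketch above replaces that per-client smoothing with a single server-driven overwrite per round.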
Pages: 10879-10892
Page count: 14
Related Papers
50 items total
  • [31] Longitudinal self-supervised learning
    Zhao, Qingyu
    Liu, Zixuan
    Adeli, Ehsan
    Pohl, Kilian M.
    MEDICAL IMAGE ANALYSIS, 2021, 71
  • [32] Self-supervised learning model
    Saga, Kazushie
    Sugasaka, Tamami
    Sekiguchi, Minoru
Fujitsu Scientific and Technical Journal, 1993, 29 (03): 209 - 216
  • [33] Credal Self-Supervised Learning
    Lienen, Julian
    Huellermeier, Eyke
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [34] Self-Supervised Learning for Recommendation
    Huang, Chao
    Xia, Lianghao
    Wang, Xiang
    He, Xiangnan
    Yin, Dawei
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 5136 - 5139
  • [35] Quantum self-supervised learning
    Jaderberg, B.
    Anderson, L. W.
    Xie, W.
    Albanie, S.
    Kiffner, M.
    Jaksch, D.
    QUANTUM SCIENCE AND TECHNOLOGY, 2022, 7 (03):
  • [36] Self-Supervised Learning for Electroencephalography
    Rafiei, Mohammad H.
    Gauthier, Lynne V.
    Adeli, Hojjat
    Takabi, Daniel
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (02) : 1457 - 1471
  • [37] A Self-Supervised Learning Approach for Accelerating Wireless Network Optimization
    Zhang, Shuai
    Ajayi, Oluwaseun T.
    Cheng, Yu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (06) : 8074 - 8087
  • [38] TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network
    Xu, Hao
    Hui, Ka-Hei
    Fu, Chi-Wing
    Zhang, Hao
    ACM TRANSACTIONS ON GRAPHICS, 2020, 39 (04):
  • [39] A Novel Self-Supervised Learning Network for Binocular Disparity Estimation
    Tian, Jiawei
    Zhou, Yu
    Chen, Xiaobing
    AlQahtani, Salman A.
    Chen, Hongrong
    Yang, Bo
    Lu, Siyu
    Zheng, Wenfeng
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2025, 142 (01):
  • [40] Federated Self-supervised Speech Representations: Are We There Yet?
    Gao, Yan
    Fernandez-Marques, Javier
    Parcollet, Titouan
    Mehrotra, Abhinav
    Lane, Nicholas D.
    INTERSPEECH 2022, 2022, : 3809 - 3813