FedUTN: federated self-supervised learning with updating target network

Cited by: 8
Authors
Li, Simou [1 ]
Mao, Yuxing [1 ]
Li, Jian [1 ]
Xu, Yihang [1 ]
Li, Jinsen [1 ]
Chen, Xueshuo [1 ]
Liu, Siyang [1 ,2 ]
Zhao, Xianping [2 ]
Affiliations
[1] Chongqing Univ, State Key Lab Power Transmiss Equipment & Syst Se, Chongqing 400044, Peoples R China
[2] Yunnan Power Grid Co Ltd, Elect Power Res Inst, Kunming 650217, Yunnan, Peoples R China
Keywords
Computer vision; Self-supervised learning; Federated learning; Federated self-supervised learning
DOI
10.1007/s10489-022-04070-6
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning (SSL) is capable of learning noteworthy representations from unlabeled data, which has mitigated the problem of insufficient labeled data to a certain extent. The original SSL methods centered on centralized data, but growing awareness of privacy protection restricts the sharing of decentralized, unlabeled data generated by a variety of mobile devices, such as cameras, phones, and other terminals. Federated self-supervised learning (FedSSL) is the result of recent efforts to combine federated learning, which is typically used for supervised learning, with SSL. Informed by past work, we propose a new FedSSL framework, FedUTN. This framework aims to permit each client to train a model that works well on both independent and identically distributed (IID) and non-independent and identically distributed (non-IID) data. Each party possesses two asymmetrical networks, a target network and an online network. FedUTN first aggregates the online network parameters of each terminal and then updates the terminals' target networks with the aggregated parameters, which is a radical departure from the update techniques utilized in earlier studies. In conjunction with this method, we offer a novel control algorithm to replace EMA for the training operation. After extensive trials, we demonstrate that: (1) it is feasible to utilize the aggregated online network to update the target network; (2) FedUTN's aggregation strategy is simpler, more effective, and more robust; (3) FedUTN outperforms all other prevalent FedSSL algorithms and exceeds the SOTA algorithm by 0.5% to 1.6% under regular experimental conditions.
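The core update described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the function names, the plain-dict parameter representation, and the equal-weight averaging are all assumptions; it only shows the idea of aggregating clients' online-network parameters and then overwriting each client's target network with the aggregate, rather than updating the target by EMA.

```python
# Minimal sketch of the FedUTN-style update (illustrative, not the paper's code).
# Each client holds two parameter sets: an 'online' network and a 'target' network.

def fedavg(param_sets):
    """Equal-weight average of a list of parameter dicts (plain FedAvg)."""
    n = len(param_sets)
    return {k: sum(p[k] for p in param_sets) / n for k in param_sets[0]}

def fedutn_round(clients):
    """One communication round: aggregate the clients' online networks,
    then broadcast the aggregate as both the new online network and the
    new target network (replacing an EMA target update)."""
    aggregated = fedavg([c["online"] for c in clients])
    for c in clients:
        c["online"] = dict(aggregated)
        c["target"] = dict(aggregated)
    return aggregated

# Toy example with one scalar "weight" per network.
clients = [
    {"online": {"w": 1.0}, "target": {"w": 0.0}},
    {"online": {"w": 3.0}, "target": {"w": 0.0}},
]
agg = fedutn_round(clients)  # agg["w"] == 2.0; both targets become 2.0
```

In a BYOL-style EMA scheme the target would instead move slowly toward the online network; the sketch above highlights the contrast the abstract draws, where the aggregated online parameters replace the target directly.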
Pages: 10879-10892
Page count: 14
Related papers
50 items total
  • [21] Hyperspectral target detection using self-supervised background learning
    Ali, Muhammad Khizer
    Amin, Benish
    Maud, Abdur Rahman
    Bhatti, Farrukh Aziz
    Sukhia, Komal Nain
    Khurshid, Khurram
    ADVANCES IN SPACE RESEARCH, 2024, 74 (02) : 628 - 646
  • [22] Adversarial Self-Supervised Learning for Robust SAR Target Recognition
    Xu, Yanjie
    Sun, Hao
    Chen, Jin
    Lei, Lin
    Ji, Kefeng
    Kuang, Gangyao
    REMOTE SENSING, 2021, 13 (20)
  • [23] Federated Cross-Incremental Self-Supervised Learning for Medical Image Segmentation
    Zhang, Fan
    Liu, Huiying
    Cai, Qing
    Feng, Chun-Mei
    Wang, Binglu
    Wang, Shanshan
    Dong, Junyu
    Zhang, David
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [24] Federated Self-Supervised Learning in Heterogeneous Settings: Limits of a Baseline Approach on HAR
    Sannara, E. K.
    Rombourg, Romain
    Portet, Francois
    Lalanda, Philippe
    2022 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS AND OTHER AFFILIATED EVENTS (PERCOM WORKSHOPS), 2022,
  • [25] TabFedSL: A Self-Supervised Approach to Labeling Tabular Data in Federated Learning Environments
    Wang, Ruixiao
    Hu, Yanxin
    Chen, Zhiyu
    Guo, Jianwei
    Liu, Gang
    MATHEMATICS, 2024, 12 (08)
  • [26] Convolutional Feature Aggregation Network With Self-Supervised Learning and Decision Fusion for SAR Target Recognition
    Huang, Linqing
    Liu, Gongshen
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [27] Self-Supervised Classification Network
    Amrani, Elad
    Karlinsky, Leonid
    Bronstein, Alex
    COMPUTER VISION, ECCV 2022, PT XXXI, 2022, 13691 : 116 - 132
  • [28] Federated Self-Supervised Learning Based on Prototypes Clustering Contrastive Learning for Internet of Vehicles Applications
    Dai, Cheng
    Wei, Shuai
    Dai, Shengxin
    Garg, Sahil
    Kaddoum, Georges
    Hossain, M. Shamim
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (05): : 4692 - 4700
  • [29] Gated Self-supervised Learning for Improving Supervised Learning
    Fuadi, Erland Hillman
    Ruslim, Aristo Renaldo
    Wardhana, Putu Wahyu Kusuma
    Yudistira, Novanto
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 611 - 615
  • [30] Self-Supervised Dialogue Learning
    Wu, Jiawei
    Wang, Xin
    Wang, William Yang
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 3857 - 3867