Federated Self-Supervised Learning of Multisensor Representations for Embedded Intelligence

Cited by: 63
Authors
Saeed, Aaqib [1 ]
Salim, Flora D. [2 ,3 ]
Ozcelebi, Tanir [1 ]
Lukkien, Johan [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Math & Comp Sci, NL-5612 AE Eindhoven, Netherlands
[2] RMIT Univ, Sch Sci, Melbourne, Vic 3001, Australia
[3] RMIT Univ, RMIT Ctr Informat Discovery & Data Analyt, Melbourne, Vic 3001, Australia
Keywords
Brain modeling; Task analysis; Data models; Internet of Things; Wavelet transforms; Sleep; Deep learning; embedded intelligence; federated learning; learning representations; low-data regime; self-supervised learning; sensor analytics;
DOI
10.1109/JIOT.2020.3009358
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
Smartphones, wearables, and Internet-of-Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but conventionally, it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction. To address these issues, we propose a self-supervised approach termed <italic>scalogram-signal correspondence learning</italic> based on wavelet transform (WT) to learn useful representations from unlabeled sensor inputs, such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel-state information. Our auxiliary task requires a deep temporal neural network to determine whether a given pair of a signal and its complementary view (i.e., a scalogram generated with WT) align with each other, by optimizing a contrastive objective. We extensively assess the quality of the features learned with our multiview strategy on diverse public data sets, achieving strong performance in all domains. We demonstrate the effectiveness of representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, and show their usefulness in the low-data regime, transfer learning, and cross-validation. Our methodology achieves competitive performance with fully supervised networks and performs significantly better than pretraining with autoencoders in both central and federated contexts. Notably, it improves generalization in a semisupervised setting, as it reduces the volume of labeled data required by leveraging self-supervised learning.
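The auxiliary task described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the hand-rolled Morlet scalogram, the random linear "encoders", and the sigmoid-based binary contrastive loss are simplified stand-ins for the paper's wavelet pipeline, deep temporal networks, and contrastive objective.

```python
# Hypothetical sketch of scalogram-signal correspondence learning:
# a signal and its scalogram view are embedded, and a contrastive
# objective scores whether the pair is aligned.
import numpy as np

rng = np.random.default_rng(0)

def morlet_scalogram(signal, scales, w0=6.0):
    """Magnitude of a continuous wavelet transform (scalogram) of a 1-D signal."""
    out = np.empty((len(scales), signal.size))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        # Complex Morlet wavelet at scale s, normalized by sqrt(s).
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

def contrastive_loss(z_sig, z_sca, aligned):
    """Binary contrastive objective: pull embeddings of aligned
    (signal, scalogram) pairs together, push misaligned pairs apart."""
    score = 1.0 / (1.0 + np.exp(-np.dot(z_sig, z_sca)))  # sigmoid similarity
    return -np.log(score) if aligned else -np.log(1.0 - score)

# Toy sensor signal and its complementary (scalogram) view.
x = np.sin(2 * np.pi * 5 * np.arange(256) / 256) + 0.1 * rng.standard_normal(256)
scalogram = morlet_scalogram(x, scales=np.arange(1, 17))

# Stand-in linear encoders; the paper uses deep temporal networks here.
enc_sig = rng.standard_normal((256, 8)) * 0.01
enc_sca = rng.standard_normal((scalogram.size, 8)) * 0.01
z_sig = x @ enc_sig
z_sca = scalogram.ravel() @ enc_sca

loss_pos = contrastive_loss(z_sig, z_sca, aligned=True)   # aligned pair
loss_neg = contrastive_loss(z_sig, z_sca, aligned=False)  # treated as misaligned
```

In the actual method, both encoders are trained jointly so that aligned pairs score high and pairs drawn from different inputs score low; the pretrained signal encoder is then reused for downstream tasks.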
Pages: 1030-1040
Page count: 11
Related Papers
(50 total)
  • [31] Continually Learning Self-Supervised Representations with Projected Functional Regularization
    Gomez-Villa, Alex
    Twardowski, Bartlomiej
    Yu, Lu
    Bagdanov, Andrew D.
    van de Weijer, Joost
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 3866 - 3876
  • [32] Self-Supervised Learning of Face Representations for Video Face Clustering
    Sharma, Vivek
    Tapaswi, Makarand
    Sarfraz, M. Saquib
    Stiefelhagen, Rainer
    2019 14TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2019), 2019, : 360 - 367
  • [33] Self-Supervised Representations for Multi-View Reinforcement Learning
    Yang, Huanhuan
    Shi, Dianxi
    Xie, Guojun
    Peng, Yingxuan
    Zhang, Yi
    Yang, Yantai
    Yang, Shaowu
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, VOL 180, 2022, 180 : 2203 - 2213
  • [34] Mobility-aware federated self-supervised learning in vehicular network
    Gu, Xueying
    Wu, Qiong
    Fan, Qiang
    Fan, Pingyi
    Urban Lifeline, 2 (1):
  • [35] Federated Graph Anomaly Detection via Contrastive Self-Supervised Learning
    Kong, Xiangjie
    Zhang, Wenyi
    Wang, Hui
    Hou, Mingliang
    Chen, Xin
    Yan, Xiaoran
    Das, Sajal K.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 14
  • [36] FedLID: Self-Supervised Federated Learning for Leveraging Limited Image Data
    Psaltis, Athanasios
    Kastellos, Anestis
    Patrikakis, Charalampos Z.
    Daras, Petros
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 1031 - 1040
  • [37] Self-Supervised On-Device Federated Learning From Unlabeled Streams
    Shi, Jiahe
    Wu, Yawen
    Zeng, Dewen
    Tao, Jun
    Hu, Jingtong
    Shi, Yiyu
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2023, 42 (12) : 4871 - 4882
  • [38] EXPLORING FEDERATED SELF-SUPERVISED LEARNING FOR GENERAL PURPOSE AUDIO UNDERSTANDING
    Rehman, Yasar Abbas Ur
    Lau, Kin Wai
    Xie, Yuyang
    Ma, Lan
    Shen, Jiajun
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW 2024, 2024, : 335 - 340
  • [39] Self-supervised graph representations of WSIs
    Pina, Oscar
    Vilaplana, Veronica
    GEOMETRIC DEEP LEARNING IN MEDICAL IMAGE ANALYSIS, VOL 194, 2022, 194 : 107 - 117
  • [40] A study of the generalizability of self-supervised representations
    Tendle, Atharva
    Hasan, Mohammad Rashedul
    MACHINE LEARNING WITH APPLICATIONS, 2021, 6