Federated Self-Supervised Learning of Multisensor Representations for Embedded Intelligence

Cited by: 63
Authors
Saeed, Aaqib [1 ]
Salim, Flora D. [2 ,3 ]
Ozcelebi, Tanir [1 ]
Lukkien, Johan [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Math & Comp Sci, NL-5612 AE Eindhoven, Netherlands
[2] RMIT Univ, Sch Sci, Melbourne, Vic 3001, Australia
[3] RMIT Univ, RMIT Ctr Informat Discovery & Data Analyt, Melbourne, Vic 3001, Australia
Keywords
Brain modeling; Task analysis; Data models; Internet of Things; Wavelet transforms; Sleep; Deep learning; embedded intelligence; federated learning; learning representations; low-data regime; self-supervised learning; sensor analytics;
DOI
10.1109/JIOT.2020.3009358
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Smartphones, wearables, and Internet-of-Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but conventionally, it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction. To address these issues, we propose a self-supervised approach termed scalogram-signal correspondence learning, based on the wavelet transform (WT), to learn useful representations from unlabeled sensor inputs such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel-state information. Our auxiliary task requires a deep temporal neural network to determine whether a given pair of a signal and its complementary view (i.e., a scalogram generated with the WT) align with each other, by optimizing a contrastive objective. We extensively assess the quality of the features learned with our multiview strategy on diverse public data sets, achieving strong performance in all domains. We demonstrate the effectiveness of representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, and show their usefulness in the low-data regime, in transfer learning, and under cross-validation. Our methodology achieves competitive performance with fully supervised networks and works significantly better than pretraining with autoencoders in both central and federated contexts. Notably, it improves generalization in a semisupervised setting, as it reduces the volume of labeled data required by leveraging self-supervised learning.
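The auxiliary task described above can be sketched in a few lines of NumPy. This is a minimal toy illustration, not the paper's implementation: the crude Morlet-based scalogram, the random linear "encoders" standing in for the deep temporal networks, and the synthetic sine signals are all assumptions made for the sketch. It only shows the mechanics of pairing each signal with its own scalogram (positive) versus another signal's scalogram (negative) and scoring alignment with a binary contrastive objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def scalogram(x, scales):
    """Magnitude of a crude real-Morlet CWT: one row per scale."""
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        w = np.cos(5.0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        rows.append(np.abs(np.convolve(x, w, mode="same")))
    return np.stack(rows)

def encode(feat, W):
    """Toy linear encoder standing in for a learned deep network."""
    z = W @ feat
    return z / (np.linalg.norm(z) + 1e-8)

# A batch of unlabeled 1-D "sensor" signals (synthetic sine traces here).
signals = [np.sin(np.linspace(0, 8 * np.pi * f, 256)) for f in (1, 2, 3, 4)]
scales = [2, 4, 8, 16]
scalos = [scalogram(x, scales) for x in signals]

# Untrained stand-in encoders for the signal and scalogram branches.
Ws = rng.normal(size=(16, 256))
Wv = rng.normal(size=(16, len(scales) * 256))

# Binary contrastive objective: a pair is positive iff the scalogram
# was generated from that same signal.
loss = 0.0
for i, x in enumerate(signals):
    zx = encode(x, Ws)
    for j, v in enumerate(scalos):
        zv = encode(v.ravel(), Wv)
        p = 1.0 / (1.0 + np.exp(-(zx @ zv)))  # alignment probability
        y = 1.0 if i == j else 0.0
        loss -= y * np.log(p) + (1 - y) * np.log(1 - p)
loss /= len(signals) ** 2
print(round(loss, 4))
```

Training would backpropagate this loss into both encoders so that aligned signal/scalogram pairs score high and misaligned pairs score low; the frozen representations are then reused downstream via a linear classifier.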
Pages: 1030-1040
Number of pages: 11
Related Papers
50 items
  • [42] A framework for self-supervised federated domain adaptation
    Wang, Bin
    Li, Gang
    Wu, Chao
    Zhang, WeiShan
    Zhou, Jiehan
    Wei, Ye
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2022, 2022 (01)
  • [43] Maximizing model generalization for machine condition monitoring with Self-Supervised Learning and Federated Learning
    Russell, Matthew
    Wang, Peng
    JOURNAL OF MANUFACTURING SYSTEMS, 2023, 71 : 274 - 285
  • [44] Learning self-supervised molecular representations for drug–drug interaction prediction
    Kpanou, Rogia
    Dallaire, Patrick
    Rousseau, Elsa
    Corbeil, Jacques
    BMC BIOINFORMATICS, 25
  • [45] BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping
    Elbanna, Gasser
    Scheidwasser-Clow, Neil
    Kegler, Mikolaj
    Beckmann, Pierre
    El Hajal, Karl
    Cernak, Milos
    HEAR: HOLISTIC EVALUATION OF AUDIO REPRESENTATIONS, VOL 166, 2021, 166 : 25 - 47
  • [46] Visual Reinforcement Learning With Self-Supervised 3D Representations
    Ze, Yanjie
    Hansen, Nicklas
    Chen, Yinbo
    Jain, Mohit
    Wang, Xiaolong
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (05) : 2890 - 2897
  • [47] Universal representations in cardiovascular ECG assessment: A self-supervised learning approach
    Liu, Zhi-Yong
    Lin, Ching-Heng
    Hsu, Yu-Chun
    Chen, Jung-Sheng
    Chang, Po-Cheng
    Wen, Ming-Shien
    Kuo, Chang-Fu
    INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS, 2025, 195
  • [48] Repeat and learn: Self-supervised visual representations learning by Scene Localization
    Altabrawee, Hussein
    Noor, Mohd Halim Mohd
    PATTERN RECOGNITION, 2024, 156
  • [49] Self-Supervised Learning of Audio Representations From Permutations With Differentiable Ranking
    Carr, Andrew N.
    Berthet, Quentin
    Blondel, Mathieu
    Teboul, Olivier
    Zeghidour, Neil
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 708 - 712
  • [50] Federated Cross-Incremental Self-Supervised Learning for Medical Image Segmentation
    Zhang, Fan
    Liu, Huiying
    Cai, Qing
    Feng, Chun-Mei
    Wang, Binglu
    Wang, Shanshan
    Dong, Junyu
    Zhang, David
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,