Federated Self-Supervised Learning of Multisensor Representations for Embedded Intelligence

Cited by: 63
Authors
Saeed, Aaqib [1 ]
Salim, Flora D. [2 ,3 ]
Ozcelebi, Tanir [1 ]
Lukkien, Johan [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Math & Comp Sci, NL-5612 AE Eindhoven, Netherlands
[2] RMIT Univ, Sch Sci, Melbourne, Vic 3001, Australia
[3] RMIT Univ, RMIT Ctr Informat Discovery & Data Analyt, Melbourne, Vic 3001, Australia
Keywords
Brain modeling; Task analysis; Data models; Internet of Things; Wavelet transforms; Sleep; Deep learning; embedded intelligence; federated learning; learning representations; low-data regime; self-supervised learning; sensor analytics
DOI
10.1109/JIOT.2020.3009358
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
Smartphones, wearables, and Internet-of-Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but conventionally it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction. To address these issues, we propose a self-supervised approach termed <italic>scalogram-signal correspondence learning</italic> based on the wavelet transform (WT) to learn useful representations from unlabeled sensor inputs such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel-state information. Our auxiliary task requires a deep temporal neural network to determine, by optimizing a contrastive objective, whether a given pair of a signal and its complementary view (i.e., a scalogram generated with the WT) align with each other. We extensively assess the quality of the features learned with our multiview strategy on diverse public data sets, achieving strong performance in all domains. We demonstrate the effectiveness of the representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, as well as their usefulness in the low-data regime, in transfer learning, and under cross-validation. Our methodology achieves competitive performance with fully supervised networks, and it works significantly better than pretraining with autoencoders in both central and federated contexts. Notably, it improves generalization in a semisupervised setting, as it reduces the volume of labeled data required by leveraging self-supervised learning.
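The auxiliary task described above pairs a raw sensor signal with its scalogram view and scores their agreement with a contrastive objective. A minimal NumPy sketch of the two ingredients (a scalogram via a Morlet continuous wavelet transform, and cosine-similarity contrastive logits over a batch of candidate views) might look as follows; the function names, the Morlet parameterization, and the temperature value are illustrative assumptions, not the paper's exact implementation, which uses a deep temporal network to embed both views.

```python
import numpy as np

def morlet(t, scale, w0=5.0):
    """Morlet mother wavelet sampled at times t, dilated by `scale`."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def scalogram(signal, scales):
    """Magnitude of a continuous wavelet transform: one row per scale."""
    n = len(signal)
    t = np.arange(-n // 2, n // 2)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        kernel = morlet(t, s)
        # Convolve the signal with the dilated wavelet; keep magnitude.
        out[i] = np.abs(np.convolve(signal, kernel, mode="same"))
    return out

def contrastive_logits(signal_emb, scalogram_embs, temperature=0.1):
    """Cosine-similarity logits of one signal embedding against a batch
    of scalogram embeddings; the matching (aligned) view should score
    highest, which a cross-entropy loss over these logits encourages."""
    a = signal_emb / np.linalg.norm(signal_emb)
    b = scalogram_embs / np.linalg.norm(scalogram_embs, axis=1, keepdims=True)
    return b @ a / temperature
```

In the full method, both the raw signal and the scalogram pass through learned encoders before the contrastive comparison; the sketch only illustrates the view construction and the alignment score.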
Pages: 1030-1040
Page count: 11
Related Papers (50 in total)
  • [21] Learning Representations for New Sound Classes With Continual Self-Supervised Learning
    Wang, Zhepei
    Subakan, Cem
    Jiang, Xilin
    Wu, Junkai
    Tzinis, Efthymios
    Ravanelli, Mirco
    Smaragdis, Paris
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 2607 - 2611
  • [22] Calibre: Towards Fair and Accurate Personalized Federated Learning with Self-Supervised Learning
    Chen, Sijia
    Su, Ningxin
    Li, Baochun
    2024 IEEE 44TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, ICDCS 2024, 2024, : 891 - 901
  • [23] Self-supervised learning of monocular depth estimators in autonomous vehicles with federated learning
    Soares, Elton F. de S.
    Campos, Carlos Alberto V.
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 151
  • [24] ADDING DISTANCE INFORMATION TO SELF-SUPERVISED LEARNING FOR RICH REPRESENTATIONS
    Kim, Yeji
    Kong, Bai-Sun
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 1270 - 1274
  • [25] Efficient Self-Supervised Learning Representations for Spoken Language Identification
    Liu, Hexin
    Perera, Leibny Paola Garcia
    Khong, Andy W. H.
    Chng, Eng Siong
    Styles, Suzy J.
    Khudanpur, Sanjeev
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16 (06) : 1296 - 1307
  • [26] Self-Supervised Visual Representations Learning by Contrastive Mask Prediction
    Zhao, Yucheng
    Wang, Guangting
    Luo, Chong
    Zeng, Wenjun
    Zha, Zheng-Jun
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 10140 - 10149
  • [27] InferCode: Self-Supervised Learning of Code Representations by Predicting Subtrees
    Bui, Nghi D. Q.
    Yu, Yijun
    Jiang, Lingxiao
    2021 IEEE/ACM 43RD INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2021), 2021, : 1186 - 1197
  • [28] Decoupling Common and Unique Representations for Multimodal Self-supervised Learning
    Wang, Yi
    Albrecht, Conrad M.
    Braham, Nassim Ait Ali
    Liu, Chenying
    Xiong, Zhitong
    Zhu, Xiao Xiang
    COMPUTER VISION - ECCV 2024, PT XXIX, 2025, 15087 : 286 - 303
  • [29] Align Representations with Base: A New Approach to Self-Supervised Learning
    Zhang, Shaofeng
    Qiu, Lyn
    Zhu, Feng
    Yan, Junchi
    Zhang, Hengrui
    Zhao, Rui
    Li, Hongyang
    Yang, Xiaokang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 16579 - 16588
  • [30] Towards Efficient and Effective Self-supervised Learning of Visual Representations
    Addepalli, Sravanti
    Bhogale, Kaushal
    Dey, Priyam
    Babu, R. Venkatesh
    COMPUTER VISION, ECCV 2022, PT XXXI, 2022, 13691 : 523 - 538