FEDERATED DIVERSE SELF-ENSEMBLING LEARNING APPROACH FOR DATA HETEROGENEITY IN DRIVE VISION

Cited by: 0
Authors
Manimaran, M. [1 ]
Dhilipkumar, V. [1 ]
Affiliations
[1] Vel Tech Rangarajan Dr Sagunthala R&D Inst Sci & T, Dept Comp Sci & Engn, Chennai, India
Source
Keywords
Federated Learning; Data Heterogeneity; Autonomous Vehicle; Ensemble Learning;
DOI
10.12694/scpe.v25i6.3305
CLC Number
TP31 [Computer Software];
Subject Classification
081202 ; 0835 ;
Abstract
Federated learning (FL) has emerged as an efficient framework for training models across isolated data sources while protecting data privacy. In FL, a common approach is to train local and global models jointly: the global model (server) informs the local models, and the local models (clients) update the global model. Most existing work assumes that clients hold labeled datasets and the server has no data, framing FL as a supervised learning (SL) problem. In practice, clients often lack the expertise and incentive to label their data, while the server may hold a small labeled set. How to make reasonable use of server-labeled and client-unlabeled data is the central question of semi-supervised learning (SSL), and client data heterogeneity is widespread in FL. Scarce high-quality labels and non-IID client data, especially in autonomous driving, degrade model performance across domains and compound each other. To address this Semi-Supervised Federated Learning (SSFL) problem, we propose a new FL algorithm, FedDSL. Our method uses self-ensemble learning and complementary negative learning to make clients' unsupervised training on unlabeled data more accurate and efficient, and it coordinates model training on both the server and client sides. In an important distinction from earlier work that retained a subset of labels at each client, our method is the first to implement SSFL for clients with 0% labeled non-IID data. Our contributions include demonstrating the effectiveness of self-ensemble learning, which computes a confidence score vector for the current model only in order to perform data filtering, and of negative learning initiated once data filtering performs well in the early rounds. Our approach has been rigorously validated on two major autonomous driving datasets, BDD100K and Cityscapes, and proves highly effective.
We achieve state-of-the-art results; each detection task is evaluated by mean average precision (mAP@0.5). Remarkably, FedDSL performs nearly as well as fully supervised centralized training, despite using only 25% of the labels in the global model.
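The abstract describes two client-side mechanisms: confidence-score filtering of pseudo-labels produced by the current local model, and complementary negative learning (supervising on classes the model is confident a sample is *not*). The paper itself does not give an implementation here; the following is a minimal illustrative sketch of those two ideas, with the thresholds (`pos_thresh`, `neg_thresh`) and function name chosen for illustration only and not taken from the paper.

```python
import numpy as np

def filter_pseudo_labels(probs, pos_thresh=0.9, neg_thresh=0.05):
    """Illustrative confidence-based filtering for unlabeled client data.

    probs: (N, C) array of softmax outputs from the current local model.
    Returns:
      pos_mask   -- (N,) bool, samples confident enough to keep as
                    positive pseudo-labels,
      pseudo     -- (N,) int, the argmax pseudo-label per sample,
      neg_labels -- (N, C) bool, complementary ("not this class")
                    labels usable for negative learning.
    """
    probs = np.asarray(probs)
    confidences = probs.max(axis=1)
    pseudo = probs.argmax(axis=1)

    # Positive pseudo-labels: keep only high-confidence predictions.
    pos_mask = confidences >= pos_thresh

    # Negative labels: classes assigned near-zero probability are
    # treated as "this sample is NOT class k".
    neg_labels = probs < neg_thresh

    return pos_mask, pseudo, neg_labels

# Toy example: 3 unlabeled samples, 4 classes.
p = np.array([[0.95, 0.02, 0.02, 0.01],
              [0.40, 0.30, 0.20, 0.10],
              [0.03, 0.91, 0.03, 0.03]])
pos_mask, pseudo, neg = filter_pseudo_labels(p)
```

In this toy run, samples 0 and 2 pass the positive-confidence filter, while the uncertain sample 1 contributes no positive pseudo-label but can still be trained on via its negative labels (none here, since all its class probabilities exceed `neg_thresh`).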
Pages: 4576-4588
Page count: 13