Distributed learning: a reliable privacy-preserving strategy to change multicenter collaborations using AI

Citations: 24
Authors
Kirienko, Margarita [1,2]
Sollini, Martina [2,3]
Ninatti, Gaia [2]
Loiacono, Daniele [4]
Giacomello, Edoardo [4]
Gozzi, Noemi [3]
Amigoni, Francesco [4]
Mainardi, Luca [4]
Lanzi, Pier Luca [4]
Chiti, Arturo [2,3]
Affiliations
[1] Fdn IRCCS Ist Nazl Tumori, Milan, Italy
[2] Humanitas Univ, Dept Biomed Sci, Milan, Italy
[3] IRCCS Humanitas Res Hosp, Milan, Italy
[4] Politecn Milan, DEIB, Milan, Italy
Keywords
Machine learning; Clinical trial; Privacy; Ethics; Distributed learning; Federated learning;
DOI
10.1007/s00259-021-05339-7
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline Classification
1002; 100207; 1009;
Abstract
Purpose: The present scoping review aims to assess the non-inferiority of distributed learning compared with centrally and locally trained machine learning (ML) models in medical applications.
Methods: We performed a literature search using the terms "distributed learning" OR "federated learning" in the PubMed/MEDLINE and EMBASE databases. No start date limit was applied, and the search was extended until July 21, 2020. We excluded articles outside the field of interest; guidelines or expert opinions, review articles and meta-analyses, editorials, letters or commentaries, and conference abstracts; articles not in English; and studies not using medical data. Selected studies were classified and analysed according to their aim(s).
Results: We included 26 papers aimed at predicting one or more outcomes, namely risk, diagnosis, prognosis, and treatment side effects/adverse drug reactions. Distributed learning was compared to centralized and localized training in 21/26 and 14/26 of the selected papers, respectively. Regardless of the aim, the type of input, the method, and the classifier, distributed learning performed close to centralized training, except in two experiments focused on diagnosis. In all but two cases, distributed learning outperformed locally trained models.
Conclusion: Distributed learning proved to be a reliable strategy for model development; it performed on par with models trained on centralized datasets. Sensitive data remain protected, since they are never shared for model development. Distributed learning therefore constitutes a promising solution for ML-based research and practice, since large, diverse datasets are crucial for success.
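The technique the review evaluates can be illustrated with a minimal, self-contained sketch (not taken from the paper, and deliberately simplified to one-shot federated averaging of a linear model on synthetic data): each centre trains on its own records, and only model parameters, never the underlying data, are pooled and averaged, then compared against a centralized model trained on the pooled raw data.

```python
# Minimal sketch of federated parameter averaging vs. centralized training.
# Synthetic data and the helper names (make_site_data, train_linear) are
# illustrative assumptions, not code from the reviewed studies.
import random

random.seed(0)

def make_site_data(n, true_w=2.0, true_b=-1.0):
    """Synthetic per-centre data: y = true_w * x + true_b + noise."""
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [true_w * x + true_b + random.gauss(0, 0.1) for x in xs]
    return xs, ys

def train_linear(xs, ys, epochs=200, lr=0.1):
    """Plain gradient-descent fit of y = w*x + b on one dataset."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Three hospitals, each keeping its data locally.
sites = [make_site_data(100) for _ in range(3)]

# Local training: each centre fits its own model on its own data.
local_models = [train_linear(xs, ys) for xs, ys in sites]

# Distributed (one-shot federated averaging): only parameters are shared.
fed_w = sum(w for w, _ in local_models) / len(local_models)
fed_b = sum(b for _, b in local_models) / len(local_models)

# Centralized baseline: pool all raw data (what privacy rules often forbid).
all_x = [x for xs, _ in sites for x in xs]
all_y = [y for _, ys in sites for y in ys]
cen_w, cen_b = train_linear(all_x, all_y)

print(f"federated:   w={fed_w:.3f} b={fed_b:.3f}")
print(f"centralized: w={cen_w:.3f} b={cen_b:.3f}")
```

With identically distributed sites, the averaged parameters land close to the centralized fit, which is the non-inferiority pattern the review reports; real federated systems (e.g. iterative FedAvg) repeat the local-train/average cycle over many rounds and must also handle heterogeneous data across centres.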
Pages: 3791 - 3804
Page count: 14
Journal: European Journal of Nuclear Medicine and Molecular Imaging, 2021, 48: 3791 - 3804