Distributed learning: a reliable privacy-preserving strategy to change multicenter collaborations using AI

Cited by: 24
Authors
Kirienko, Margarita [1 ,2 ]
Sollini, Martina [2 ,3 ]
Ninatti, Gaia [2 ]
Loiacono, Daniele [4 ]
Giacomello, Edoardo [4 ]
Gozzi, Noemi [3 ]
Amigoni, Francesco [4 ]
Mainardi, Luca [4 ]
Lanzi, Pier Luca [4 ]
Chiti, Arturo [2 ,3 ]
Affiliations
[1] Fdn IRCCS Ist Nazl Tumori, Milan, Italy
[2] Humanitas Univ, Dept Biomed Sci, Milan, Italy
[3] IRCCS Humanitas Res Hosp, Milan, Italy
[4] Politecn Milan, DEIB, Milan, Italy
Keywords
Machine learning; Clinical trial; Privacy; Ethics; Distributed learning; Federated learning
DOI
10.1007/s00259-021-05339-7
CLC classification
R8 [Special medicine]; R445 [Diagnostic imaging]
Discipline codes
1002; 100207; 1009
Abstract
Purpose The present scoping review aims to assess the non-inferiority of distributed learning compared with centrally and locally trained machine learning (ML) models in medical applications. Methods We performed a literature search using the terms "distributed learning" OR "federated learning" in the PubMed/MEDLINE and EMBASE databases. No start date limit was applied, and the search extended until July 21, 2020. We excluded articles outside the field of interest; guidelines or expert opinions, review articles and meta-analyses, editorials, letters or commentaries, and conference abstracts; articles not in English; and studies not using medical data. Selected studies were classified and analysed according to their aim(s). Results We included 26 papers aimed at predicting one or more outcomes, namely risk, diagnosis, prognosis, and treatment side effects/adverse drug reactions. Distributed learning was compared to centralized and localized training in 21/26 and 14/26 of the selected papers, respectively. Regardless of the aim, the type of input, the method, and the classifier, distributed learning performed close to centralized training, except for two experiments focused on diagnosis. In all but 2 cases, distributed learning outperformed locally trained models. Conclusion Distributed learning proved a reliable strategy for model development; indeed, it performed on par with models trained on centralized datasets. Sensitive data remain protected, since they are never shared for model development. Distributed learning thus constitutes a promising solution for ML-based research and practice, since large, diverse datasets are crucial for success.
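The abstract hinges on the core mechanism of distributed (federated) learning: each center trains on its own data, and only model parameters leave the site. A minimal sketch of the federated averaging (FedAvg) idea on a toy linear model — the function names, simulated data, and hyperparameters below are illustrative assumptions, not the methods of any reviewed paper:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # One center's local training: plain gradient descent on squared error.
    # The raw data (X, y) never leaves this function.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_data, rounds=50, dim=2):
    # Server keeps a global model; clients only return updated weights.
    w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in client_data])
    for _ in range(rounds):
        local_ws = [local_update(w, X, y) for X, y in client_data]
        # FedAvg aggregation: average client models, weighted by dataset size
        w = np.average(local_ws, axis=0, weights=sizes)
    return w

# Simulate three "centers" holding private samples from the same linear model
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 80):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=n)
    clients.append((X, y))

w_fed = fed_avg(clients)
```

Weighting the average by each center's sample size keeps the aggregate close to what centralized training on the pooled data would yield, which is the non-inferiority claim the review evaluates empirically.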
Pages: 3791-3804 (14 pages)
Related papers (50 total)
  • [31] Privacy-Preserving Services Using Federated Learning
    Taylor, Paul; Kiss, Stephanie; Gullon, Lucy; Yearling, David
    Journal of the Institute of Telecommunications Professionals, 2022, 16: 16-22
  • [32] Privacy-Preserving Distributed Deep Learning Based on Secret Sharing
    Duan, Jia; Zhou, Jiantao; Li, Yuanman
    Information Sciences, 2020, 527: 108-127
  • [33] Anonymous and Efficient Authentication Scheme for Privacy-Preserving Distributed Learning
    Jiang, Yili; Zhang, Kuan; Qian, Yi; Zhou, Liang
    IEEE Transactions on Information Forensics and Security, 2022, 17: 2227-2240
  • [34] A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service
    Disabato, Simone; Falcetta, Alessandro; Mongelluzzo, Alessio; Roveri, Manuel
    2020 International Joint Conference on Neural Networks (IJCNN), 2020
  • [35] A Distributed Privacy-Preserving Learning Dynamics in General Social Networks
    Tao, Youming; Chen, Shuzhen; Li, Feng; Yu, Dongxiao; Yu, Jiguo; Sheng, Hao
    IEEE Transactions on Knowledge and Data Engineering, 2023, 35(09): 9547-9561
  • [36] Efficient Privacy-Preserving Machine Learning in Hierarchical Distributed System
    Jia, Qi; Guo, Linke; Fang, Yuguang; Wang, Guirong
    IEEE Transactions on Network Science and Engineering, 2019, 6(04): 599-612
  • [37] Privacy-Preserving Bayesian Network Learning on Distributed Heterogeneous Data
    Wang, Hong-Mei; Zeng, Yuan; Zhao, Zheng; Wang, Cheng-Shan
    Journal of Tianjin University Science and Technology, 2007, 40(09): 1025-1028
  • [38] Privacy-Preserving Distributed Machine Learning Based on Secret Sharing
    Dong, Ye; Chen, Xiaojun; Shen, Liyan; Wang, Dakui
    Information and Communications Security (ICICS 2019), 2020, 11999: 684-702
  • [39] Privacy-Preserving Asynchronous Federated Learning Framework in Distributed IoT
    Yan, Xinru; Miao, Yinbin; Li, Xinghua; Choo, Kim-Kwang Raymond; Meng, Xiangdong; Deng, Robert H.
    IEEE Internet of Things Journal, 2023, 10(15): 13281-13291
  • [40] Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching
    Liu, Shengheng; Zheng, Chong; Huang, Yongming; Quek, Tony Q. S.
    IEEE Journal on Selected Areas in Communications, 2022, 40(03): 749-760