Exploring Adversarial Attacks in Federated Learning for Medical Imaging

Cited by: 0
Authors
Darzi, Erfan [1 ]
Dubost, Florian [2 ]
Sijtsema, Nanna M. [3 ]
van Ooijen, P. M. A. [3 ]
Affiliations
[1] Harvard Univ, Harvard Med Sch, Dept Radiol, Boston, MA 02115 USA
[2] Google, Mountain View, CA 94043 USA
[3] Univ Groningen, Univ Med Ctr Groningen, Dept Radiotherapy, NL-9713 GZ Groningen, Netherlands
Keywords
Biomedical imaging; Federated learning; Perturbation methods; Security; Privacy; Medical services; Data models; Adversarial attacks; deep learning; federated learning; medical imaging
DOI
10.1109/TII.2024.3423457
CLC classification number
TP [Automation technology; Computer technology]
Subject classification number
0812
Abstract
Federated learning provides a privacy-preserving framework for medical image analysis but is also vulnerable to a unique category of adversarial attacks. This article presents an in-depth exploration of these vulnerabilities, emphasizing the potential for adversaries to exploit attack transferability, a phenomenon where adversarial attacks developed on one model can be successfully applied to other models within the federated network. We delve into the specific risks associated with such attacks in the context of medical imaging, using domain-specific MRI tumor and pathology datasets. Our comprehensive evaluation assesses the efficacy of various known threat scenarios within a federated learning environment. The study demonstrates the system's susceptibility to multiple forms of attacks and highlights how domain-specific configurations can significantly elevate the success rate of these attacks. This analysis brings to light the need for defense mechanisms and advocates for a reevaluation of the current security protocols in federated medical image analysis systems.
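The attack transferability described in the abstract can be illustrated with a minimal sketch: an FGSM perturbation is crafted against one client's local model and then applied unchanged to a different model in the federation. This is not the paper's experimental setup; the models, shapes, and epsilon value below are illustrative assumptions.

```python
# Sketch of cross-model attack transferability in a federated setting
# (illustrative only: toy linear models stand in for the clients' networks).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two "clients" with independent models (here just randomly initialized).
surrogate = nn.Linear(16, 2)   # the attacker's locally accessible model
target = nn.Linear(16, 2)      # another participant's model

x = torch.randn(1, 16)         # one input sample
y = torch.tensor([0])          # its true label

# FGSM: a single signed-gradient step on the surrogate model's loss.
eps = 0.1
x_req = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(surrogate(x_req), y)
loss.backward()
x_adv = (x + eps * x_req.grad.sign()).detach()

# Transferability test: the same perturbed input is fed to the target model,
# which the attacker never differentiated through.
logits_clean = target(x)
logits_adv = target(x_adv)

# Each pixel/feature is perturbed by at most eps (sign() yields -1, 0, or 1).
print((x_adv - x).abs().max().item())
```

In a real evaluation, as the paper's setting suggests, one would compare the target model's accuracy on clean versus transferred adversarial inputs across the federation's clients; the fraction of successful misclassifications is the transfer attack success rate.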
Pages: 13591-13599 (9 pages)
Related papers (50 in total)
  • [41] Research and Application of Generative-Adversarial-Network Attacks Defense Method Based on Federated Learning
    Ma, Xiaoyu
    Gu, Lize
    ELECTRONICS, 2023, 12 (04)
  • [42] Outcomes of Adversarial Attacks on Deep Learning Models for Ophthalmology Imaging Domains
    Yoo, Tae Keun
    Choi, Joon Yul
    JAMA OPHTHALMOLOGY, 2020, 138 (11) : 1213 - 1215
  • [43] Learning to Ignore Adversarial Attacks
    Zhang, Yiming
    Zhou, Yangqiaoyu
    Carton, Samuel
    Tan, Chenhao
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2970 - 2984
  • [44] Delving into the Adversarial Robustness of Federated Learning
    Zhang, Jie
    Li, Bo
    Chen, Chen
    Lyu, Lingjuan
    Wu, Shuang
    Ding, Shouhong
    Wu, Chao
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 11245 - 11253
  • [45] Adversarial radiomics: the rising of potential risks in medical imaging from adversarial learning
    Barucci, Andrea
    Neri, Emanuele
    EUROPEAN JOURNAL OF NUCLEAR MEDICINE AND MOLECULAR IMAGING, 2020, 47 (13) : 2941 - 2943
  • [47] Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging
    Kanca, Elif
    Ayas, Selen
    Kablan, Elif Baykal
    Ekinci, Murat
    MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2024, : 673 - 690
  • [48] DEFENDING AGAINST ADVERSARIAL ATTACKS ON MEDICAL IMAGING AI SYSTEM, CLASSIFICATION OR DETECTION?
    Li, Xin
    Pan, Deng
    Zhu, Dongxiao
    2021 IEEE 18TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI), 2021, : 1677 - 1681
  • [49] Adversarial Attacks on Medical Image Classification
    Tsai, Min-Jen
    Lin, Ping-Yi
    Lee, Ming-En
    CANCERS, 2023, 15 (17)
  • [50] Adversarial Attacks on Federated-Learned Adaptive Bitrate Algorithms
    Zhang, Rui-Xiao
    Huang, Tianchi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 1, 2024, : 419 - 427