Split Edge-Cloud Neural Networks for Better Adversarial Robustness

Cited by: 0
Authors
Douch, Salmane [1 ]
Abid, Mohamed Riduan [2 ]
Zine-Dine, Khalid [3 ]
Bouzidi, Driss [1 ]
Benhaddou, Driss [4 ]
Affiliations
[1] Mohammed V Univ Rabat, Natl Sch Comp Sci & Syst Anal ENSIAS, Rabat 30050, Morocco
[2] Columbus State Univ, TSYS Sch Comp Sci, Columbus, GA 31907 USA
[3] Mohammed V Univ Rabat, Fac Sci FSR, Rabat 30050, Morocco
[4] Alfaisal Univ, Coll Engn, Riyadh 11533, Saudi Arabia
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Robustness; Edge computing; Perturbation methods; Computational modeling; Cloud computing; Certification; Biological neural networks; Quantization (signal); Image edge detection; Deep learning; Adversarial attacks; cloud computing; edge computing; edge intelligence; robustness certification; split neural networks;
DOI
10.1109/ACCESS.2024.3487435
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Cloud computing is a critical component in the success of 5G and 6G networks, particularly given the computation-intensive nature of emerging applications. Despite all its advantages, cloud computing faces limitations in meeting the strict latency and bandwidth requirements of applications such as eHealth and automotive systems. To overcome these limitations, edge computing has emerged as a novel paradigm that brings computation closer to the user. However, intelligent tasks such as deep learning demand more memory and processing power than edge devices can handle. To address these challenges, methods like quantization, pruning, and distributed inference have been proposed. In this vein, this paper studies a promising approach for running deep learning models at the edge: split neural networks (SNNs). SNNs feature a neural network architecture with multiple early exit points, allowing the model to make confident decisions at earlier layers without processing the entire network. This not only reduces memory and computational demands but also makes SNNs well suited to edge computing applications. As the use of SNNs expands, ensuring their safety, particularly their robustness to perturbations, becomes crucial for deployment in safety-critical scenarios. This paper presents the first in-depth study of the robustness of split edge-cloud neural networks. We review state-of-the-art robustness certification techniques and evaluate SNN robustness using the auto_LiRPA and AutoAttack libraries, comparing SNNs to standard neural networks. Our results demonstrate that SNNs reduce average inference time by 75% and certify 4 to 10 times more images as robust, while improving overall robust accuracy by 1% to 10%.
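The early-exit mechanism the abstract describes can be illustrated with a minimal sketch: the network is split into stages, each followed by an exit head, and inference stops at the first exit whose softmax confidence clears a threshold (so confident inputs never leave the edge). All function and parameter names here are hypothetical illustrations, not the authors' implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_inference(x, stages, exits, threshold=0.9):
    """Run a split network stage by stage.

    stages: list of callables, each transforming the running features
            (earlier stages would live on the edge, later ones in the cloud).
    exits:  one classifier head per stage, mapping features to logits.
    Returns (predicted_class, index_of_exit_taken).
    """
    h = x
    for i, (stage, exit_head) in enumerate(zip(stages, exits)):
        h = stage(h)                      # compute this stage's features
        probs = softmax(exit_head(h))
        conf = max(probs)
        if conf >= threshold:             # confident enough: exit early
            return probs.index(conf), i
    # No exit was confident: fall through to the final (cloud-side) head.
    return probs.index(conf), len(stages) - 1
```

With a high threshold every input falls through to the last exit, recovering the behavior of the full unsplit network; lowering the threshold trades accuracy on ambiguous inputs for the latency and bandwidth savings the paper measures.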
Pages: 158854-158865
Page count: 12
Related Papers
50 in total
  • [21] On-demand inference acceleration for directed acyclic graph neural networks over edge-cloud collaboration
    Yang, Lei
    Shen, Xiaoyuan
    Zhong, Changyi
    Liao, Yuwei
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2023, 171 : 79 - 87
  • [22] An orthogonal classifier for improving the adversarial robustness of neural networks
    Xu, Cong
    Li, Xiang
    Yang, Min
    INFORMATION SCIENCES, 2022, 591 : 251 - 262
  • [23] A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks
    Sharmin, Saima
    Panda, Priyadarshini
    Sarwar, Syed Shakib
    Lee, Chankyu
    Ponghiran, Wachirawit
    Roy, Kaushik
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [24] Evaluating Accuracy and Adversarial Robustness of Quanvolutional Neural Networks
    Sooksatra, Korn
    Rivas, Pablo
    Orduz, Javier
    2021 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE (CSCI 2021), 2021, : 152 - 157
  • [25] Towards Edge-Cloud Computing
    Tianfield, Huaglory
    2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2018, : 4883 - 4885
  • [26] Adversarial Robustness Guarantees for Random Deep Neural Networks
    De Palma, Giacomo
    Kiani, Bobak T.
    Lloyd, Seth
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [27] NON-SINGULAR ADVERSARIAL ROBUSTNESS OF NEURAL NETWORKS
    Tsai, Yu-Lin
    Hsu, Chia-Yi
    Yu, Chia-Mu
    Chen, Pin-Yu
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3840 - 3844
  • [28] Towards Demystifying Adversarial Robustness of Binarized Neural Networks
    Qin, Zihao
    Lin, Hsiao-Ying
    Shi, Jie
    APPLIED CRYPTOGRAPHY AND NETWORK SECURITY WORKSHOPS, ACNS 2021, 2021, 12809 : 439 - 462
  • [29] Towards Proving the Adversarial Robustness of Deep Neural Networks
    Katz, Guy
    Barrett, Clark
    Dill, David L.
    Julian, Kyle
    Kochenderfer, Mykel J.
    ELECTRONIC PROCEEDINGS IN THEORETICAL COMPUTER SCIENCE, 2017, (257): : 19 - 26
  • [30] An Intelligent Workflow Scheduling Scheme for Complex Network Robustness in Fuzzy Edge-Cloud Environments
    Chen, Xing
    Lin, Chaowei
    Lin, Bing
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11 (01): : 1106 - 1123