Split Edge-Cloud Neural Networks for Better Adversarial Robustness

Cited by: 0
Authors
Douch, Salmane [1 ]
Abid, Mohamed Riduan [2 ]
Zine-Dine, Khalid [3 ]
Bouzidi, Driss [1 ]
Benhaddou, Driss [4 ]
Affiliations
[1] Mohammed V Univ Rabat, Natl Sch Comp Sci & Syst Anal ENSIAS, Rabat 30050, Morocco
[2] Columbus State Univ, TSYS Sch Comp Sci, Columbus, GA 31907 USA
[3] Mohammed V Univ Rabat, Fac Sci FSR, Rabat 30050, Morocco
[4] Alfaisal Univ, Coll Engn, Riyadh 11533, Saudi Arabia
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Robustness; Edge computing; Perturbation methods; Computational modeling; Cloud computing; Certification; Biological neural networks; Quantization (signal); Image edge detection; Deep learning; Adversarial attacks; cloud computing; edge computing; edge intelligence; robustness certification; split neural networks;
DOI
10.1109/ACCESS.2024.3487435
Chinese Library Classification (CLC)
TP [Automation & Computer Technology];
Subject classification code
0812;
Abstract
Cloud computing is a critical component in the success of 5G and 6G networks, particularly given the computation-intensive nature of emerging applications. Despite all its advantages, cloud computing faces limitations in meeting the strict latency and bandwidth requirements of applications such as eHealth and automotive systems. To overcome these limitations, edge computing has emerged as a novel paradigm that brings computation closer to the user. However, intelligent tasks such as deep learning demand more memory and processing power than edge devices can provide. To address these challenges, methods such as quantization, pruning, and distributed inference have been proposed. In the same vein, this paper studies a promising approach for running deep learning models at the edge: split neural networks (SNNs). SNNs feature a neural network architecture with multiple early exit points, allowing the model to make confident decisions at earlier layers without processing the entire network. This not only reduces memory and computational demands but also makes SNNs well suited for edge computing applications. As the use of SNNs expands, ensuring their safety, particularly their robustness to perturbations, becomes crucial for deployment in safety-critical scenarios. This paper presents the first in-depth study of the robustness of split edge-cloud neural networks. We review state-of-the-art robustness certification techniques and evaluate SNN robustness using the auto_LiRPA and AutoAttack libraries, comparing SNNs to standard neural networks. Our results demonstrate that SNNs reduce average inference time by 75% and certify 4 to 10 times more images as robust, while improving overall robustness accuracy by 1% to 10%.
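The early-exit mechanism the abstract describes can be sketched in a few lines: each block of the split network feeds an auxiliary classifier head, and inference stops at the first head whose softmax confidence clears a threshold, so easy inputs never leave the edge device. This is an illustrative NumPy sketch only; the function name `split_inference`, the toy random weights, and the 0.95 threshold are assumptions for demonstration, not the paper's actual architecture or its auto_LiRPA/AutoAttack evaluation setup:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def split_inference(x, blocks, exit_heads, threshold=0.9):
    """Run a split network block by block. After each block, an
    early-exit head produces class probabilities; if the top
    probability clears the confidence threshold, inference stops
    there (e.g. on the edge device) instead of continuing to the
    deeper, cloud-side layers."""
    h = x
    for i, (W_block, W_head) in enumerate(zip(blocks, exit_heads)):
        h = np.maximum(W_block @ h, 0.0)        # block: ReLU(W h)
        probs = softmax(W_head @ h)             # exit head: linear classifier
        if probs.max() >= threshold or i == len(blocks) - 1:
            return int(probs.argmax()), i       # (predicted class, exit taken)

# Toy demo: 3 blocks, 3 exit heads, random weights (hypothetical).
rng = np.random.default_rng(0)
dim, n_classes = 16, 10
blocks = [rng.standard_normal((dim, dim)) * 0.3 for _ in range(3)]
heads = [rng.standard_normal((n_classes, dim)) * 0.3 for _ in range(3)]
x = rng.standard_normal(dim)
pred, exit_taken = split_inference(x, blocks, heads, threshold=0.95)
```

The robustness question the paper studies arises because each exit head sees a different intermediate representation, so a perturbation of `x` can change not only the predicted class but also which exit fires.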
Pages: 158854-158865
Page count: 12