Invited Paper: Hyperdimensional Computing for Resilient Edge Learning

Cited: 1
Authors
Barkam, Hamza Errahmouni [1 ]
Jeon, SungHeon Evan [1]
Yun, Sanggeon [1 ]
Yeung, Calvin [1 ]
Zou, Zhuowen [1 ]
Jiao, Xun [2 ]
Srinivasa, Narayan [3 ]
Imani, Mohsen [1 ]
Affiliations
[1] Univ Calif Irvine, Irvine, CA 92717 USA
[2] Villanova Univ, Villanova, PA USA
[3] Intel Labs, Hillsboro, OR USA
Funding
U.S. National Science Foundation;
Keywords
DOI
10.1109/ICCAD57390.2023.10323671
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Recent strides in deep learning have yielded impressive practical applications such as autonomous driving, natural language processing, and graph reasoning. However, the susceptibility of deep learning models to subtle input variations, whether stemming from device imperfections and non-idealities or from adversarial attacks on edge devices, presents a critical challenge. These vulnerabilities hold dual significance: security concerns in critical applications and insights into human-machine sensory alignment. Efforts to enhance model robustness encounter resource constraints at the edge and the black-box nature of neural networks, hindering deployment on edge devices. This paper focuses on algorithmic adaptations inspired by the human brain to address these challenges. Hyperdimensional Computing (HDC), rooted in neural principles, replicates brain functions while enabling efficient, noise-tolerant computation. HDC leverages high-dimensional vectors to encode information, seamlessly blending learning and memory functions. Its transparency empowers practitioners, enhancing both the robustness and the interpretability of deployed models. In this paper, we introduce the first comprehensive study comparing the robustness of HDC against white-box malicious attacks with that of deep neural network (DNN) models, as well as the first HDC gradient-based attack in the literature. We develop a framework that enables HDC models to generate gradient-based adversarial examples using state-of-the-art techniques applied to DNNs. Our evaluation shows that our HDC model provides, on average, 19.9% higher robustness than DNNs against adversarial samples, and up to 90% robustness improvement against random noise on the model weights compared to the DNN.
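The abstract describes an HDC model that encodes inputs into high-dimensional vectors and a framework that applies DNN-style gradient attacks to it. As an illustrative sketch only, not the paper's actual implementation, the snippet below pairs a common differentiable HDC encoder (random projection followed by tanh) with class prototypes built by bundling, then crafts an FGSM-style perturbation through the encoder by hand-computed chain rule. All names, dimensions, and the simplified gradient (which ignores normalization terms) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, F, C = 2048, 16, 3  # hypervector dimension, input features, classes (hypothetical)

# Random-projection encoder: x -> tanh(P @ x). tanh keeps the map differentiable,
# which is what makes a gradient-based attack on HDC possible at all.
P = rng.standard_normal((D, F)) / np.sqrt(F)

def encode(x):
    return np.tanh(P @ x)

# "Training": bundle (sum) the encoded samples of each class into one prototype.
X = rng.standard_normal((90, F))
y = rng.integers(0, C, size=90)
prototypes = np.stack([encode(X[y == c].T).sum(axis=1) for c in range(C)])

def classify(x):
    # Predict the class whose prototype has the highest cosine similarity.
    h = encode(x)
    sims = prototypes @ h / (np.linalg.norm(prototypes, axis=1)
                             * np.linalg.norm(h) + 1e-12)
    return int(np.argmax(sims))

def fgsm(x, label, eps=0.1):
    # FGSM-style step: move x along the sign of the gradient that DECREASES
    # similarity to the true-class prototype. With h = tanh(P @ x),
    # d(p . h)/dx = P^T @ (p * (1 - h^2)); norm terms are dropped for simplicity.
    h = encode(x)
    p = prototypes[label]
    grad = P.T @ (p * (1.0 - h ** 2))
    return x - eps * np.sign(grad)
```

In this toy setup, the attacked input `fgsm(x, classify(x))` stays within an L-infinity ball of radius `eps` around `x`, mirroring the white-box threat model the study evaluates.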
Pages: 8
Related Papers
50 items in total
  • [41] Efficient Exploration in Edge-Friendly Hyperdimensional Reinforcement Learning
    Ni, Yang
    Chung, William Youngwoo
    Cho, Samuel
    Zou, Zhuowen
    Imani, Mohsen
PROCEEDINGS OF THE GREAT LAKES SYMPOSIUM ON VLSI 2024, GLSVLSI 2024, 2024, : 111 - 118
  • [42] INVITED PAPER: MACHINE LEARNING AND ECONOMETRICS
    Hsiao, Cheng
    SINGAPORE ECONOMIC REVIEW, 2024, 69 (04): : 1601 - 1616
  • [43] JKQ: JKU Tools for Quantum Computing (Invited Paper)
    Wille, Robert
    Hillmich, Stefan
    Burgholzer, Lukas
2020 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN (ICCAD), 2020,
  • [44] Pervasive computing: Past, present and future (Invited paper)
    Kurkovsky, Stan
    MEDIA CONVERGENCE: MOVING TO THE NEXT GENERATION, 2007, : 65 - 71
  • [45] Invited Paper: Algorithm/Hardware Co-design for Few-Shot Learning at the Edge
    Laguna, Ann Franchesca
    Sharifi, Mohammed Mehdi
    Reis, Dayane
    Liu, Liu
    Hennessee, Andrew
    O'Dell, Clayton
    O'Connor, Ian
    Niemier, Michael
    Hu, X. Sharon
    2023 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN, ICCAD, 2023,
  • [46] FL-HDC: Hyperdimensional Computing Design for the Application of Federated Learning
    Hsieh, Cheng-Yen
    Chuang, Yu-Chuan
    Wu, An-Yeu Andy
    2021 IEEE 3RD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS), 2021,
  • [47] Multicenter Hierarchical Federated Learning With Fault-Tolerance Mechanisms for Resilient Edge Computing Networks
    Chen, Xiaohong
    Xu, Guanying
    Xu, Xuesong
    Jiang, Haichong
    Tian, Zhiping
    Ma, Tao
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 47 - 61
  • [49] Privacy-Preserving Federated Learning with Differentially Private Hyperdimensional Computing
    Piran, Fardin Jalil
    Chen, Zhiling
    Imani, Mohsen
    Imani, Farhad
    COMPUTERS & ELECTRICAL ENGINEERING, 2025, 123
  • [50] In-memory hyperdimensional computing
    Karunaratne, Geethan
    Le Gallo, Manuel
    Cherubini, Giovanni
    Benini, Luca
    Rahimi, Abbas
    Sebastian, Abu
    NATURE ELECTRONICS, 2020, 3 (06) : 327 - +