Boosting Bit-Error Resilience of DNN Accelerators Through Median Feature Selection

Cited by: 22
Authors
Ozen, Elbruz [1]
Orailoglu, Alex [1]
Affiliation
[1] Univ Calif San Diego, Dept Comp Sci & Engn, La Jolla, CA 92093 USA
Keywords
Approximate computing; fault tolerance; neural network hardware; neural networks
DOI
10.1109/TCAD.2020.3012209
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep learning techniques have enjoyed wide adoption in real life, including in various safety-critical embedded applications. While neural network computations require protection against hardware errors, the substantial overheads of conventional error-tolerance techniques limit their use on embedded platforms, which carry out demanding deep neural network computations with limited resources. Conventional techniques are further constrained in high-error-rate scenarios, which are increasingly prevalent under aggressive energy and performance optimizations. To resolve this conundrum, we introduce a novel median feature selection technique that filters the impact of bit errors prior to the execution of each layer. While our technique can be deemed a fine-grained modular redundancy scheme, its construction purely out of the inherent redundancy of the network necessitates neither additional parameters nor extra multiply-accumulate operations, squashing the inordinate overheads typically associated with such schemes. Median feature selection can be performed efficiently in hardware and integrated seamlessly into embedded deep learning accelerators as a modular plug-in. Deep learning models can be trained with standard tools and techniques to ensure a graceful operational interface with the feature selection stages. The proposed technique allows the system to perform accurately even at high error rates, improving resilience by up to four orders of magnitude while incurring negligible area (0.19%-0.48%) and power (0.07%-0.19%) overheads for the required operations.
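To make the mechanism concrete, the following is a minimal sketch of element-wise median feature selection, assuming a group of three redundant feature-map copies. The NumPy implementation, the (3, C, H, W) layout, and the fault-injection demonstration are our own illustrative assumptions, not the paper's hardware design; per the abstract, the paper obtains the redundant features from the network's inherent redundancy rather than from explicit replication.

```python
# A minimal sketch, not the paper's implementation: element-wise
# median-of-three selection over redundant feature-map copies. The
# group size of three and the (copies, C, H, W) tensor layout are
# illustrative assumptions.
import numpy as np

def median_feature_selection(copies: np.ndarray) -> np.ndarray:
    """copies: (3, C, H, W) redundant versions of one layer's output.
    The element-wise median masks any value corrupted in one copy."""
    return np.median(copies, axis=0)

# Toy fault injection: flip the sign bit of one element in one copy
# and check that the median output still equals the clean features.
rng = np.random.default_rng(0)
clean = rng.standard_normal((4, 8, 8)).astype(np.float32)
copies = np.stack([clean, clean, clean])
copies[1].view(np.uint32)[0, 0, 0] ^= np.uint32(1 << 31)  # one bit error
assert np.allclose(median_feature_selection(copies), clean)
```

In hardware, this element-wise reduction would map to small comparator-based median circuits between layers, consistent with the modular plug-in integration and sub-percent overheads described in the abstract.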
Pages: 3250-3262 (13 pages)