Improve Robustness of Deep Neural Networks by Coding

Cited by: 4
Authors
Huang, Kunping [1 ]
Raviv, Netanel [2 ]
Jain, Siddharth [3 ]
Upadhyaya, Pulakesh [1 ]
Bruck, Jehoshua [3 ]
Siegel, Paul H. [4 ]
Jiang, Anxiao [1 ]
Affiliations
[1] Texas A&M Univ, Comp Sci & Engn Dept, College Stn, TX 77843 USA
[2] Washington Univ, Comp Sci & Engn Dept, St Louis, MO 63130 USA
[3] CALTECH, Elect Engn Dept, Pasadena, CA 91125 USA
[4] Univ Calif San Diego, Elect & Comp Engn Dept, La Jolla, CA 92093 USA
Keywords
PARTIAL FAULT-TOLERANCE;
DOI
10.1109/ita50056.2020.9244998
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Deep neural networks (DNNs) typically have many weights, which are usually stored in non-volatile memories. When errors appear in these weights, the performance of the network can degrade significantly. We review two recently presented approaches that improve the robustness of DNNs in complementary ways. In the first approach, error-correcting codes are used as external redundancy to protect the weights from errors, and a deep reinforcement learning algorithm optimizes the redundancy-performance tradeoff. In the second approach, internal redundancy is added to neurons via coding, enabling neurons to perform robust inference in noisy environments.
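As a rough illustration of the first (external-redundancy) idea only, the sketch below stores 8-bit uniformly quantized weights in a simulated noisy memory and protects them with a simple (3,1) repetition code decoded by majority vote. The quantization scheme, the repetition code, the bit-flip probability, and all function names are illustrative assumptions; this is not the coding scheme of the paper, and neither the deep-reinforcement-learning redundancy allocation nor the neuron-level (internal) coding of the second approach is modeled here.

# Minimal sketch (not the paper's method): protect 8-bit quantized weights
# against random bit flips with a (3,1) repetition code, and compare the
# distortion of unprotected vs. protected storage. NumPy only.
import numpy as np

rng = np.random.default_rng(0)

def quantize(weights, bits=8):
    """Uniformly quantize float weights to unsigned integers of `bits` bits."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (2**bits - 1)
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, lo, scale

def to_bits(q, bits=8):
    # Expand each byte into its `bits` binary digits, least-significant first.
    return ((q[:, None] >> np.arange(bits)) & 1).astype(np.uint8)

def from_bits(b, bits=8):
    return (b * (1 << np.arange(bits))).sum(axis=1).astype(np.uint8)

def flip(bits_arr, p):
    """Simulate a noisy non-volatile memory: flip each stored bit with prob p."""
    noise = rng.random(bits_arr.shape) < p
    return bits_arr ^ noise.astype(np.uint8)

def rep3_encode(bits_arr):
    return np.repeat(bits_arr, 3, axis=1)  # each bit stored three times

def rep3_decode(coded):
    groups = coded.reshape(coded.shape[0], -1, 3)
    return (groups.sum(axis=2) >= 2).astype(np.uint8)  # majority vote

# Toy "weights" standing in for one DNN layer.
w = rng.normal(size=10_000).astype(np.float32)
q, lo, scale = quantize(w)
bits = to_bits(q)
p = 0.01  # assumed raw bit-error probability of the memory

# Unprotected storage.
q_noisy = from_bits(flip(bits, p))
# Repetition-coded storage (3x external redundancy).
q_coded = from_bits(rep3_decode(flip(rep3_encode(bits), p)))

mse = lambda a: np.mean((a.astype(float) * scale + lo - w) ** 2)
print(f"weight MSE, unprotected: {mse(q_noisy):.5f}")
print(f"weight MSE, rep-3 coded: {mse(q_coded):.5f}")

With a raw bit-error rate of about 1%, majority voting within each stored triple leaves a residual bit-error rate on the order of 3p^2, so the coded copy of the weights stays much closer to the original than the unprotected copy, at the cost of 3x storage; the abstract's point is that the redundancy-performance tradeoff can be optimized rather than paid uniformly.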
Pages: 7