ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES

Cited by: 0
Authors
Teng, Da [1 ]
Song, Xiao m [1 ]
Gong, Guanghong [1 ]
Han, Liang [1 ]
Affiliations
[1] Beihang Univ, Sch Automat, Beijing, Peoples R China
Keywords
machine learning; deep learning; neural networks; adversarial examples; COMMAND
DOI
Not available
CLC Classification
T [Industrial Technology]
Discipline Code
08
Abstract
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence areas, such as object recognition, speech recognition, and machine translation. Although deep neural networks have high expressive capacity, they are prone to overfitting because of their high dimensionality. In recent applications, deep neural networks have been found to be unstable under adversarial perturbations: small changes to the input that can nevertheless sharply increase the network's prediction error. This paper proposes a novel training algorithm to improve the robustness of neural networks against adversarial examples.
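The record does not describe the proposed training algorithm itself, so the sketch below only illustrates the standard FGSM-style adversarial-training recipe in PyTorch, as context for what training against adversarial examples typically looks like. The function names, the perturbation budget epsilon, and the clean/adversarial weighting alpha are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon):
    """Craft FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x L(f(x), y))."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then stop tracking gradients.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03, alpha=0.5):
    """One mini-batch update on a weighted mix of clean and adversarial loss.
    epsilon and alpha are illustrative values, not taken from the paper."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = alpha * loss_fn(model(x), y) + (1 - alpha) * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this generic recipe, each mini-batch is augmented with perturbed copies of its inputs, so the network is penalized for changing its prediction under small input changes; this is one common way to trade a little clean accuracy for robustness.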
Pages: 123-133
Page count: 11
Related papers
50 records in total
  • [21] Improving Adversarial Robustness of Deep Neural Networks via Linear Programming
    Tang, Xiaochao
    Yang, Zhengfeng
    Fu, Xuanming
    Wang, Jianlin
    Zeng, Zhenbing
    THEORETICAL ASPECTS OF SOFTWARE ENGINEERING, TASE 2022, 2022, 13299 : 326 - 343
  • [22] CSTAR: Towards Compact and Structured Deep Neural Networks with Adversarial Robustness
    Phan, Huy
    Yin, Miao
    Sui, Yang
    Yuan, Bo
    Zonouz, Saman
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 2, 2023, : 2065 - 2073
  • [23] Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity
    Zhang, Chongzhi
    Liu, Aishan
    Liu, Xianglong
    Xu, Yitao
    Yu, Hang
    Ma, Yuqing
    Li, Tianlin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 1291 - 1304
  • [24] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks
    Liu, Yi-Ling
    Lomuscio, Alessio
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [25] Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks
    Rasheed, Bader
    Abdelhamid, Mohamed
    Khan, Adil
    Menezes, Igor
    Khatak, Asad Masood
    IEEE ACCESS, 2024, 12 : 131323 - 131335
  • [26] DLR: Adversarial examples detection and label recovery for deep neural networks
    Han, Keji
    Ge, Yao
    Wang, Ruchuan
    Li, Yun
    PATTERN RECOGNITION LETTERS, 2025, 188 : 133 - 139
  • [27] Detection of Adversarial Examples in Deep Neural Networks with Natural Scene Statistics
    Kherchouche, Anouar
    Fezza, Sid Ahmed
    Hamidouche, Wassim
    Deforge, Olivier
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [28] Neuron Selecting: Defending Against Adversarial Examples in Deep Neural Networks
    Zhang, Ming
    Li, Hu
    Kuang, Xiaohui
    Pang, Ling
    Wu, Zhendong
    INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2019), 2020, 11999 : 613 - 629
  • [29] Creating Simple Adversarial Examples for Speech Recognition Deep Neural Networks
    Redden, Nathaniel
    Bernard, Ben
    Straub, Jeremy
    2019 IEEE 16TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SENSOR SYSTEMS WORKSHOPS (MASSW 2019), 2019, : 58 - 62
  • [30] Detecting adversarial examples via prediction difference for deep neural networks
    Guo, Feng
    Zhao, Qingjie
    Li, Xuan
    Kuang, Xiaohui
    Zhang, Jianwei
    Han, Yahong
    Tan, Yu-an
    INFORMATION SCIENCES, 2019, 501 : 182 - 192