Robustness of Neural Ensembles Against Targeted and Random Adversarial Learning

Cited: 0
Authors
Wang, Shir Li [1 ]
Shafi, Kamran [1 ]
Lokan, Chris [1 ]
Abbass, Hussein A. [1 ]
Affiliations
[1] Univ New S Wales, Sch SEIT UNSW ADFA, Sydney, NSW 2052, Australia
Keywords
DOI: not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning has become a prominent tool in various domains owing to its adaptability. However, this adaptability can be exploited by an adversary to cause machine learning to malfunction, a process known as Adversarial Learning. This paper investigates Adversarial Learning in the context of artificial neural networks. The aim is to test the hypothesis that an ensemble of neural networks trained on the same data manipulated by an adversary is more robust than a single network. We investigate two attack types: targeted and random. We use Mahalanobis distance and covariance matrices to select targeted attacks. The experiments use both artificial and UCI datasets. The results demonstrate that an ensemble of neural networks trained on attacked data is more robust against the attack than a single network. While many papers have demonstrated that an ensemble of neural networks is more robust against noise than a single network, the significance of the current work lies in the fact that targeted attacks are not white noise.
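The abstract states that targeted attacks are selected using Mahalanobis distance and covariance matrices, but gives no procedure. A minimal sketch of one plausible reading follows: compute each sample's Mahalanobis distance from the data mean under the empirical covariance, then pick the most "typical" (closest) points as attack targets. All function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mahalanobis_distances(X, mean, cov):
    """Mahalanobis distance of each row of X from `mean` under covariance `cov`."""
    inv_cov = np.linalg.inv(cov)
    diffs = X - mean
    # Quadratic form d_i = sqrt((x_i - mean)^T * inv_cov * (x_i - mean))
    return np.sqrt(np.einsum('ij,jk,ik->i', diffs, inv_cov, diffs))

def select_targets(X, n_targets):
    """Return indices of the n_targets points closest to the sample mean,
    i.e. the most representative samples, as candidate attack targets."""
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    d = mahalanobis_distances(X, mean, cov)
    return np.argsort(d)[:n_targets]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # toy stand-in for an "artificial dataset"
targets = select_targets(X, 5)
print(targets)
```

Selecting by the inverse-covariance-weighted distance (rather than plain Euclidean distance) accounts for feature scales and correlations, which is presumably why the paper pairs Mahalanobis distance with covariance matrices.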
Pages: 8