Deep Inversion Method for Attacking Lifelong Learning Neural Networks

Cited by: 0
Authors:
Du, Boyuan [1 ]
Yu, Yuanlong [1 ]
Liu, Huaping [2 ]
Affiliations:
[1] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou, Peoples R China
[2] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
Keywords:
lifelong learning; data poisoning attack; backdoor attack; deep inversion;
DOI:
10.1109/IJCNN54540.2023.10191626
CLC Number:
TP18 [Artificial Intelligence Theory];
Discipline Codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract:
Artificial neural networks suffer from catastrophic forgetting when knowledge must be learned from multi-batch or streaming data. In response, researchers have proposed a variety of lifelong learning methods to avoid catastrophic forgetting. However, current methods usually do not consider the possibility of malicious attacks, even though in real lifelong learning scenarios batch or streaming data usually come from an incompletely trusted environment: an attacker can easily manipulate the data or inject malicious samples into the training set, reducing the reliability of the neural network. Recent research on attacking lifelong learning, whether through backdoor attacks or data poisoning attacks, requires real samples of the attacked classes. In this paper, we focus on an attack setting better suited to the lifelong learning scenario, with two main features: first, it does not require real samples of the attacked classes; second, it allows attacks to be mounted on tasks that exclude the attacked classes. For this setting, we propose a lifelong learning attack model based on deep inversion. With EWC as the benchmark lifelong learning model, our experiments show that 1) the data poisoning attack significantly decreases accuracy on the target classes by adding only 0.5% poisoned samples, and 2) the backdoor attack achieves high accuracy by adding only 1% backdoor samples.
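For context on the two techniques the abstract relies on: EWC (Elastic Weight Consolidation; Kirkpatrick et al., 2017), the benchmark lifelong learning model, protects knowledge of an earlier task A while training on a new task B by penalizing parameter drift in proportion to the diagonal Fisher information:

    L(\theta) = L_B(\theta) + \sum_i \frac{\lambda}{2} F_i \left(\theta_i - \theta^{*}_{A,i}\right)^2

where \theta^{*}_{A,i} are the parameters learned for task A, F_i estimates how important parameter i was for task A, and \lambda trades off retention of the old task against learning the new one.

Deep inversion, in turn, synthesizes stand-in samples of a class directly from a frozen trained network, which is why the proposed attack needs no real samples of the attacked classes. Below is a minimal sketch of deep-inversion-style synthesis in PyTorch, in the spirit of DeepInversion (Yin et al., 2020); it assumes a frozen image classifier with BatchNorm layers, and the function names, image size, and loss weights are illustrative assumptions, not values from the paper:

    import torch
    import torch.nn.functional as F

    def bn_stat_losses(model):
        # Register hooks that penalize the gap between the batch statistics
        # of the synthesized inputs and the running statistics stored in each
        # BatchNorm layer (the feature-distribution prior of DeepInversion).
        losses = []
        def hook(module, inputs, output):
            x = inputs[0]
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            losses.append(F.mse_loss(mean, module.running_mean)
                          + F.mse_loss(var, module.running_var))
        handles = [m.register_forward_hook(hook)
                   for m in model.modules()
                   if isinstance(m, torch.nn.BatchNorm2d)]
        return losses, handles

    def deep_invert(model, target_class, steps=2000, batch=16, lr=0.05):
        # Optimize random noise until the frozen model labels it target_class.
        model.eval()
        x = torch.randn(batch, 3, 32, 32, requires_grad=True)  # CIFAR-sized; an assumption
        labels = torch.full((batch,), target_class, dtype=torch.long)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            losses, handles = bn_stat_losses(model)
            out = model(x)
            loss = F.cross_entropy(out, labels)   # push x toward the attacked class
            loss = loss + 0.01 * sum(losses)      # match stored BN statistics
            loss = loss + 1e-4 * x.pow(2).mean()  # mild l2 image prior
            opt.zero_grad()
            loss.backward()
            opt.step()
            for h in handles:
                h.remove()                        # hooks are re-registered each step
        return x.detach()  # synthetic stand-ins for real samples of target_class

Once synthesized, such images could be mislabeled and mixed into a later task's training data (data poisoning) or stamped with a trigger pattern before injection (backdoor), matching the two attack modes the abstract evaluates.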
Pages: 9