Deep Inversion Method for Attacking Lifelong Learning Neural Networks

Cited: 0
Authors
Du, Boyuan [1 ]
Yu, Yuanlong [1 ]
Liu, Huaping [2 ]
Institutions
[1] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou, Peoples R China
[2] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
Keywords
lifelong learning; data poisoning attack; backdoor attack; deep inversion;
DOI
10.1109/IJCNN54540.2023.10191626
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial neural networks suffer from catastrophic forgetting when knowledge must be learned from multi-batch or streaming data. In response, researchers have proposed a variety of lifelong learning methods to avoid catastrophic forgetting. However, current methods usually do not consider the possibility of malicious attacks, even though in real lifelong learning scenarios batch or streaming data often comes from an incompletely trusted environment. Attackers can easily manipulate data or inject malicious samples into the training data set, reducing the reliability of the neural network. Recent research on attacks against lifelong learning requires real samples of the attacked classes, whether using backdoor attacks or data poisoning attacks. In this paper, we focus on an attack setting that is better suited to the lifelong learning scenario. This setting has two main features: first, it does not require real samples of the attacked classes; second, it allows attacks to be performed on tasks that exclude the attacked classes. For this scenario, we propose a lifelong learning attack model based on deep inversion. With EWC as the benchmark lifelong learning model, our experiments show that 1) in the data poisoning attack, the target accuracy can be significantly decreased by adding 0.5% poisoned samples; 2) a high-accuracy backdoor attack can be achieved by adding 1% backdoor samples.
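The deep inversion step at the heart of this attack synthesizes class-conditional inputs from the trained network alone, with no access to real samples of the attacked classes. A minimal sketch of that idea is below: random inputs are optimized to maximize the target-class logit while a regularizer pushes their batch statistics toward the running statistics stored in the model's BatchNorm layers. This is an illustrative reimplementation of the general deep-inversion technique, not the paper's code; `SmallNet`, the input shape, and all hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Hypothetical stand-in victim classifier (the paper's model is not given here)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.fc = nn.Linear(8 * 28 * 28, num_classes)

    def forward(self, x):
        x = F.relu(self.bn(self.conv(x)))
        return self.fc(x.flatten(1))

def deep_invert(model, target_class, batch=4, steps=50, lr=0.1, bn_weight=1.0):
    """Synthesize inputs for `target_class` using only the frozen model."""
    model.eval()
    # Capture the input features of every BatchNorm layer on each forward pass.
    feats = []
    def hook(module, inp, out):
        feats.append((module, inp[0]))
    handles = [m.register_forward_hook(hook) for m in model.modules()
               if isinstance(m, nn.BatchNorm2d)]

    x = torch.randn(batch, 1, 28, 28, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    target = torch.full((batch,), target_class, dtype=torch.long)
    for _ in range(steps):
        feats.clear()
        opt.zero_grad()
        logits = model(x)
        # Classification term: make the synthesized batch look like the target class.
        ce = F.cross_entropy(logits, target)
        # Feature-statistics term: match each BN layer's stored running statistics.
        bn_loss = sum(
            F.mse_loss(f.mean(dim=(0, 2, 3)), m.running_mean) +
            F.mse_loss(f.var(dim=(0, 2, 3), unbiased=False), m.running_var)
            for m, f in feats)
        (ce + bn_weight * bn_loss).backward()
        opt.step()

    for h in handles:
        h.remove()
    return x.detach()

synthetic = deep_invert(SmallNet(), target_class=3, steps=20)
```

In the attack pipeline these synthesized images would then be perturbed (for poisoning) or stamped with a trigger (for the backdoor) before injection into a later task's training stream; only the inversion step is sketched here.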
Pages: 9