Targeted Data Poisoning Attacks Against Continual Learning Neural Networks

Cited by: 2
Authors
Li, Huayu [1 ]
Ditzler, Gregory [1 ]
Affiliations
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
Funding
National Science Foundation (USA)
Keywords
continual learning; adversarial machine learning; data poisoning attack;
DOI
10.1109/IJCNN55064.2022.9892774
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual (incremental) learning approaches are designed to address catastrophic forgetting in neural networks that train on batches or streaming data over time. In many real-world scenarios, the environments that generate streaming data are exposed to untrusted sources, and an adversary can exploit these sources to manipulate and inject malicious samples into the training data. Such untrusted sources and malicious samples expose vulnerabilities of neural networks that can lead to serious consequences in applications requiring reliable performance. However, recent work on continual learning has focused only on adversary-agnostic scenarios, without considering the possibility of data poisoning attacks. Furthermore, recent work has demonstrated that continual learning approaches are vulnerable to backdoor attacks under relaxed constraints on data manipulation. In this paper, we focus on a more general and practical poisoning setting that artificially forces catastrophic forgetting through clean-label data poisoning attacks. We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain stealthy. The approach is benchmarked against three state-of-the-art continual learning algorithms in both domain- and task-incremental learning scenarios. The experiments demonstrate that accuracy on the targeted tasks drops significantly when the poisoned dataset is used in continual task learning.
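The general recipe behind a clean-label poisoning attack of this kind can be sketched generically: perturb target-task training samples within a small norm ball (so they remain stealthy and their labels stay correct), in a direction that raises a surrogate model's loss on that task. The snippet below is a minimal illustrative sketch of that idea using a logistic-regression surrogate, not the authors' actual algorithm; the function name `craft_clean_label_poison` and all hyperparameters are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def craft_clean_label_poison(X, y, w, b, eps=0.1, steps=20, lr=0.05):
    """Hypothetical sketch: perturb target-task samples X within an
    L-infinity ball of radius eps so that a surrogate logistic model's
    binary cross-entropy on the target task increases. The labels y are
    returned unchanged (clean-label), so the poisons look benign."""
    delta = np.zeros_like(X)
    for _ in range(steps):
        logits = (X + delta) @ w + b
        # For L = -y*log(p) - (1-y)*log(1-p) with p = sigmoid(w.x + b),
        # the gradient with respect to the input x is (p - y) * w.
        g = np.outer(sigmoid(logits) - y, w)
        # Signed gradient ascent on the loss, projected back into the eps-ball.
        delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return X + delta, y  # poisoned features, original (clean) labels
```

In the task-targeted setting, only samples belonging to the targeted task would be perturbed this way, so that continual training on the poisoned batch degrades the victim's accuracy on that task while the other tasks are left intact.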
Pages: 8
Related Papers
(50 total; items 41–50 shown)
  • [41] Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
    Wang, Jialai
    Zhang, Ziyuan
    Wang, Meiqi
    Qiu, Han
    Zhang, Tianwei
    Li, Qi
    Li, Zongpeng
    Wei, Tao
    Zhang, Chao
    PROCEEDINGS OF THE 32ND USENIX SECURITY SYMPOSIUM, 2023, : 2329 - 2346
  • [42] Decentralized Learning Robust to Data Poisoning Attacks
    Mao, Yanwen
    Data, Deepesh
    Diggavi, Suhas
    Tabuada, Paulo
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 6788 - 6793
  • [43] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375
  • [44] Securing federated learning: a defense strategy against targeted data poisoning attack
    Khraisat, Ansam
    Alazab, Ammar
    Alazab, Moutaz
    Jan, Tony
    Singh, Sarabjot
    Uddin, Md. Ashraf
    DISCOVER INTERNET OF THINGS, 5 (1)
  • [45] Targeted Clean-Label Poisoning Attacks on Federated Learning
    Patel, Ayushi
    Singh, Priyanka
    RECENT TRENDS IN IMAGE PROCESSING AND PATTERN RECOGNITION, RTIP2R 2022, 2023, 1704 : 231 - 243
  • [46] A Defense Method against Poisoning Attacks on IoT Machine Learning Using Poisonous Data
    Chiba, Tomoki
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    2020 IEEE THIRD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE 2020), 2020, : 100 - 107
  • [47] Have You Poisoned My Data? Defending Neural Networks Against Data Poisoning
    De Gaspari, Fabio
    Hitaj, Dorjan
    Mancini, Luigi V.
    COMPUTER SECURITY-ESORICS 2024, PT I, 2024, 14982 : 85 - 104
  • [48] A Countermeasure Method Using Poisonous Data Against Poisoning Attacks on IoT Machine Learning
    Chiba, Tomoki
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    INTERNATIONAL JOURNAL OF SEMANTIC COMPUTING, 2021, 15 (02) : 215 - 240
  • [49] Defending Against Poisoning Attacks in Federated Learning with Blockchain
    Dong N.
    Wang Z.
    Sun J.
    Kampffmeyer M.
    Knottenbelt W.
    Xing E.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (07): 1 - 13
  • [50] Data Poisoning Attack against Neural Network-Based On-Device Learning Anomaly Detector by Physical Attacks on Sensors
    Ino, Takahito
    Yoshida, Kota
    Matsutani, Hiroki
    Fujino, Takeshi
    SENSORS, 2024, 24 (19)