Invisible Threats in the Data: A Study on Data Poisoning Attacks in Deep Generative Models

Times Cited: 0
Authors
Yang, Ziying [1 ]
Zhang, Jie [2 ]
Wang, Wei [1 ]
Li, Huan [1 ]
Affiliations
[1] Hebei Normal Univ, Sch Comp & Cyber Secur, Shijiazhuang 050024, Peoples R China
[2] Xian Jiaotong Liverpool Univ, Sch Adv Technol, Suzhou 215123, Peoples R China
Source
APPLIED SCIENCES-BASEL, 2024, Vol. 14, Issue 19
Funding
National Natural Science Foundation of China;
Keywords
backdoor attack; deep generative models; data poisoning; invisible trigger;
DOI
10.3390/app14198742
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Deep Generative Models (DGMs), a state-of-the-art technology in artificial intelligence, are widely applied across many domains. However, their security has become an increasingly prominent concern, particularly with regard to invisible backdoor attacks. Most existing backdoor attack methods rely on visible triggers that are easy to detect and defend against. Although some studies have explored invisible backdoor attacks, they typically require modifying or adding parameters to the model's generator, which is impractical in many settings. In this study, we overcome these limitations by proposing a novel invisible backdoor attack: an encoder-decoder network "poisons" the data during the preparation stage, without modifying the model itself. Through careful design, the trigger remains visually undetectable, substantially improving the attack's stealthiness and success rate. This attack therefore poses a serious threat to the security of DGMs and presents new challenges for defense mechanisms. We urge researchers to intensify their investigation of DGM security issues and to work together toward the sound development of DGM security.
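As a rough illustration of the approach the abstract describes, the sketch below shows how an encoder-decoder pair might be trained to hide a trigger in clean images as an imperceptible additive residual, so that only the training data are altered and the generator itself is untouched. The architecture, layer sizes, perturbation budget, and loss weighting are all illustrative assumptions; the paper's actual network design is not reproduced here.

```python
# Minimal sketch (assumed details) of encoder-decoder data poisoning:
# the encoder hides a trigger in a clean image as a small residual,
# and the decoder recovers the trigger from the poisoned image.
import torch
import torch.nn as nn

class TriggerEncoder(nn.Module):
    """Embeds a trigger into an image as a bounded additive residual."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, trigger):
        residual = self.net(torch.cat([image, trigger], dim=1))
        # Small residual scale (assumed budget) keeps the poisoned
        # image visually indistinguishable from the original.
        return torch.clamp(image + 0.02 * residual, 0.0, 1.0)

class TriggerDecoder(nn.Module):
    """Recovers the hidden trigger from a poisoned image."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)

# Joint objective: keep the poisoned image close to the original
# (invisibility) while keeping the trigger recoverable (usability).
encoder, decoder = TriggerEncoder(), TriggerDecoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
mse = nn.MSELoss()

images = torch.rand(8, 3, 32, 32)   # stand-in for clean training data
trigger = torch.rand(8, 3, 32, 32)  # stand-in for the secret trigger

poisoned = encoder(images, trigger)
loss = mse(poisoned, images) + mse(decoder(poisoned), trigger)
opt.zero_grad()
loss.backward()
opt.step()
```

In a poisoning scenario of this kind, the trained encoder would be applied to a fraction of the clean dataset before DGM training begins; because the trigger lives entirely in the data, no change to the victim model's parameters or architecture is needed.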
Pages: 16