Invisible Threats in the Data: A Study on Data Poisoning Attacks in Deep Generative Models

Cited by: 0
|
Authors
Yang, Ziying [1 ]
Zhang, Jie [2 ]
Wang, Wei [1 ]
Li, Huan [1 ]
Affiliations
[1] Hebei Normal Univ, Sch Comp & Cyber Secur, Shijiazhuang 050024, Peoples R China
[2] Xian Jiaotong Liverpool Univ, Sch Adv Technol, Suzhou 215123, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Iss. 19
Funding
National Natural Science Foundation of China;
关键词
backdoor attack; deep generative models; data poisoning; invisible trigger;
D O I
10.3390/app14198742
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Deep Generative Models (DGMs), a state-of-the-art technology in artificial intelligence, find extensive applications across many domains. However, their security concerns have grown increasingly prominent, particularly with regard to invisible backdoor attacks. Most current backdoor attack methods rely on visible triggers that are easy to detect and defend against. Although some studies have explored invisible backdoor attacks, they often require modifying or adding parameters to the model's generator, which is impractical. In this study, we overcome these limitations by proposing a novel method for invisible backdoor attacks. We employ an encoder-decoder network to 'poison' the data during the preparation stage, without modifying the model itself. Through careful design, the trigger remains visually undetectable, substantially enhancing the attacker's stealth and success rate. Consequently, this attack poses a serious threat to the security of DGMs and presents new challenges for defense mechanisms. We therefore urge researchers to intensify their investigation of DGM security issues and jointly promote the healthy development of DGM security.
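The abstract describes poisoning a fraction of the training data with a visually imperceptible trigger before training, leaving the generator itself untouched. The sketch below illustrates that pipeline shape with a simple least-significant-bit embedding as a stand-in for the paper's learned encoder-decoder network; all function names (`embed_trigger`, `extract_trigger`, `poison_dataset`) and the 10% poisoning rate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def embed_trigger(image: np.ndarray, trigger_bits: np.ndarray) -> np.ndarray:
    """Hide trigger bits in the least-significant bit of the first pixels.

    Simplified stand-in for the paper's learned encoder: changing each
    pixel by at most 1 intensity level keeps the trigger invisible.
    """
    flat = image.flatten()  # flatten() returns a copy
    n = trigger_bits.size
    flat[:n] = (flat[:n] & 0xFE) | trigger_bits  # overwrite LSBs
    return flat.reshape(image.shape)

def extract_trigger(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Decoder counterpart: recover the hidden bits from the LSB plane."""
    return image.flatten()[:n_bits] & 1

def poison_dataset(images, trigger_bits, rate=0.1, rng=None):
    """Poison a fraction of the training set during data preparation;
    the generative model itself is never modified."""
    rng = rng or np.random.default_rng(0)
    images = [img.copy() for img in images]
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = embed_trigger(images[i], trigger_bits)
    return images, set(idx.tolist())

# Usage: poison 10% of a toy dataset, then check invisibility + recoverability.
rng = np.random.default_rng(42)
data = [rng.integers(0, 256, size=(8, 8), dtype=np.uint8) for _ in range(20)]
trigger = rng.integers(0, 2, size=16, dtype=np.uint8)
poisoned, idx = poison_dataset(data, trigger, rate=0.1, rng=rng)
i = next(iter(idx))
# Per-pixel change is at most 1 level, yet the trigger is fully recoverable.
assert np.max(np.abs(poisoned[i].astype(int) - data[i].astype(int))) <= 1
assert np.array_equal(extract_trigger(poisoned[i], 16), trigger)
```

In the paper's actual method the encoder and decoder are trained jointly, so the "trigger" is a learned perturbation rather than a fixed bit pattern, but the attack surface is the same: only the dataset is touched, which is why trigger-visibility defenses fail.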
Pages: 16
Related Papers
50 records
  • [41] Approximate Query Processing for Data Exploration using Deep Generative Models
    Thirumuruganathan, Saravanan
    Hasan, Shohedul
    Koudas, Nick
    Das, Gautam
    2020 IEEE 36TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2020), 2020, : 1309 - 1320
  • [42] Evaluating the Impact of Health Care Data Completeness for Deep Generative Models
    Smith, Benjamin
    Van Steelandt, Senne
    Khojandi, Anahita
    METHODS OF INFORMATION IN MEDICINE, 2023, 62 (01/02) : 31 - 39
  • [43] Defend Data Poisoning Attacks on Voice Authentication
    Li, Ke
    Baird, Cameron
    Lin, Dan
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 1754 - 1769
  • [44] Decentralized Learning Robust to Data Poisoning Attacks
    Mao, Yanwen
    Data, Deepesh
    Diggavi, Suhas
    Tabuada, Paulo
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 6788 - 6793
  • [45] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375
  • [46] Data Poisoning Attacks and Defenses to Crowdsourcing Systems
    Fang, Minghong
    Sun, Minghao
    Li, Qi
    Gong, Neil Zhenqiang
    Tian, Jin
    Liu, Jia
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 969 - 980
  • [47] Detecting Data Poisoning Attacks in Federated Learning for Healthcare Applications Using Deep Learning
    Omran, Alaa Hamza
    Mohammed, Sahar Yousif
    Aljanabi, Mohammed
    Iraqi Journal for Computer Science and Mathematics, 2023, 4 (04): : 225 - 237
  • [48] Exploring Data and Model Poisoning Attacks to Deep Learning-Based NLP Systems
    Marulli, Fiammetta
    Verde, Laura
    Campanile, Lelio
    KNOWLEDGE-BASED AND INTELLIGENT INFORMATION & ENGINEERING SYSTEMS (KSE 2021), 2021, 192 : 3570 - 3579
  • [49] An improved real time detection of data poisoning attacks in deep learning vision systems
    Raghavan, V.
    Mazzuchi, T.
    Sarkani, S.
    Discover Artificial Intelligence, 2022, 2 (01):
  • [50] Non-control-data attacks are realistic threats
    Chen, S
    Xu, J
    Sezer, EC
    Gauriar, P
    Iyer, RK
    USENIX Association Proceedings of the 14th USENIX Security Symposium, 2005, : 177 - 191