Multi-Targeted Poisoning Attack in Deep Neural Networks

Cited by: 0
Authors: Kwon H. [1]; Cho S. [2]
Affiliations:
[1] Department of Artificial Intelligence and Data Science, Korea Military Academy
[2] Department of Electrical Engineering, Korea Military Academy
Source: IEICE Transactions on Information and Systems
Funding: National Research Foundation of Singapore
Keywords: deep neural network; different classes; machine learning; poisoning attack
DOI: 10.1587/transinf.2022NGL0006
Abstract:
Deep neural networks show good performance in image recognition, speech recognition, and pattern analysis. However, deep neural networks also have weaknesses, one of which is vulnerability to poisoning attacks. A poisoning attack reduces the accuracy of a model by training the model on malicious data. A number of studies have been conducted on such poisoning attacks. Existing poisoning attacks cause misrecognition by a single classifier. In certain situations, however, it is necessary for multiple models to misrecognize the same data as different specific classes. For example, if there are enemy autonomous vehicles A, B, and C, a poisoning attack could mislead A into turning left, B into stopping, and C into turning right using a single traffic sign. In this paper, we propose a multi-targeted poisoning attack method that causes each of several models to misrecognize certain data as a different target class. This study used MNIST and CIFAR10 as datasets and TensorFlow as the machine learning library. The experimental results show that the proposed scheme achieves a 100% average attack success rate on MNIST and CIFAR10 when malicious data accounting for 5% of the training dataset are used for training. Copyright © 2022 The Institute of Electronics, Information and Communication Engineers.
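To make the attack concrete, the sketch below illustrates the idea from the abstract in TensorFlow/Keras on MNIST: the same malicious data is added to each model's training set under a different target label, so every model learns its own misclassification of that data. Only the dataset, the library, and the 5% poisoning ratio are taken from the abstract; the network architecture, the corner marker used to construct the malicious samples, and the specific source and target classes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a multi-targeted poisoning attack
# with TensorFlow/Keras on MNIST. Dataset, library, and the 5% poisoning ratio
# follow the abstract; the CNN, the corner-marker construction of the malicious
# samples, and the source/target classes are assumptions for illustration.
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0


def build_model() -> tf.keras.Model:
    # Small CNN classifier; the paper's exact architecture is not reproduced here.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])


source_class = 7            # class of the data every model should misrecognize (assumed)
target_classes = [1, 3, 5]  # assumed per-model target classes for models A, B, C
poison_ratio = 0.05         # malicious data = 5% of the training set (from the abstract)

# Build the shared malicious data: source-class images with a small fixed marker
# so they are distinguishable from clean samples (an assumption; the paper's
# construction of the malicious data may differ).
rng = np.random.default_rng(0)
source_idx = np.where(y_train == source_class)[0]
n_poison = int(poison_ratio * len(x_train))
poison_x = x_train[rng.choice(source_idx, n_poison, replace=True)].copy()
poison_x[:, 24:27, 24:27, :] = 1.0

models = []
for target in target_classes:
    # Each model is trained on the clean data plus the SAME malicious samples,
    # relabeled with that model's own target class.
    poison_y = np.full(n_poison, target, dtype=y_train.dtype)
    x_mix = np.concatenate([x_train, poison_x])
    y_mix = np.concatenate([y_train, poison_y])

    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_mix, y_mix, epochs=3, batch_size=128, verbose=0)
    models.append(model)

# Each poisoned model should classify the malicious data as its own target class
# while keeping high accuracy on the clean test set.
for target, model in zip(target_classes, models):
    preds = np.argmax(model.predict(poison_x, verbose=0), axis=1)
    clean_acc = model.evaluate(x_test, y_test, verbose=0)[1]
    print(f"target {target}: attack success {np.mean(preds == target):.2%}, "
          f"clean accuracy {clean_acc:.2%}")
```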
Pages: 1916-1920
Related papers (50 in total; entries [21]-[30] shown):
  • [21] Han, Yuwei; Lai, Yuni; Zhu, Yulin; Zhou, Kai. Cost Aware Untargeted Poisoning Attack against Graph Neural Networks. 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024), 2024: 4940-4944.
  • [22] He, Honglu; Zhu, Zhiying; Zhang, Xinpeng. Adaptive Backdoor Attack against Deep Neural Networks. CMES-Computer Modeling in Engineering & Sciences, 2023, 136(3): 2617-2633.
  • [23] Su, Jiawei; Vargas, Danilo Vasconcellos; Sakurai, Kouichi. One Pixel Attack for Fooling Deep Neural Networks. IEEE Transactions on Evolutionary Computation, 2019, 23(5): 828-841.
  • [24] Saremi, Mehrin; Khalooei, Mohammad; Rastgoo, Razieh; Sabokrou, Mohammad. Projan: A Probabilistic Trojan Attack on Deep Neural Networks. Knowledge-Based Systems, 2024, 304.
  • [25] Breier, Jakub; Hou, Xiaolu; Jap, Dirmanto; Ma, Lei; Bhasin, Shivam; Liu, Yang. POSTER: Practical Fault Attack on Deep Neural Networks. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS '18), 2018: 2204-2206.
  • [26] Li, Shaoxin; Li, Xiaofeng; Che, Xin; Li, Xintong; Zhang, Yong; Chu, Lingyang. Cocktail Universal Adversarial Attack on Deep Neural Networks. Computer Vision - ECCV 2024, Part LXV, 2025, 15123: 396-412.
  • [27] Manna, Debasmita; Tripathy, Somanath. Patch Based Backdoor Attack on Deep Neural Networks. Information Systems Security, ICISS 2024, 2025, 15416: 422-440.
  • [28] Zhang, Anqing; Chen, Honglong; Wang, Xiaomeng; Li, Junjian; Gao, Yudong; Wang, Xingang. Defending against Backdoor Attack on Deep Neural Networks Based on Multi-Scale Inactivation. Information Sciences, 2025, 690.
  • [29] Broekman, Fleur; Giovannetti, Elisa; Peters, Godefridus J. Tyrosine Kinase Inhibitors: Multi-Targeted or Single-Targeted? World Journal of Clinical Oncology, 2011, 2(2): 80-93.
  • [30] Zhao, Pu; Xu, Kaidi; Liu, Sijia; Wang, Yanzhi; Lin, Xue. ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions. 24th Asia and South Pacific Design Automation Conference (ASP-DAC 2019), 2019: 499-505.