Black-box adversarial attacks by manipulating image attributes

Cited by: 23
Authors
Wei, Xingxing [1 ]
Guo, Ying [1 ]
Li, Bo [1 ]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, Beijing, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China
Keywords
Adversarial attack; Adversarial attributes; Black-box setting;
DOI
10.1016/j.ins.2020.10.028
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Although various adversarial attack methods exist, most of them generate adversarial examples by adding adversarial noise. Inspired by the fact that people usually set different camera parameters to obtain diverse visual styles when taking a picture, we propose adversarial attributes, which generate adversarial examples by manipulating image attributes such as brightness, contrast, sharpness, and chroma to simulate the imaging process. This task is accomplished under the black-box setting, where only the predicted probabilities are known. We formulate this process as an optimization problem; solving it efficiently yields the optimal adversarial attributes with limited queries. To guarantee the realistic appearance of the adversarial examples, we bound the attribute changes using the L_p norm with different p values. Besides, we give a formal explanation for adversarial attributes based on the linear nature of Deep Neural Networks (DNNs). Extensive experiments are conducted on two public datasets, CIFAR-10 and ImageNet, with four representative DNNs: VGG16, AlexNet, Inception v3, and ResNet50. The results show that up to 97.79% of images in the CIFAR-10 test set and 98.01% of the ImageNet images can be successfully perturbed to at least one wrong class with at most 300 queries per image on average. (C) 2020 Elsevier Inc. All rights reserved.
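The attack described in the abstract, searching for attribute settings that flip a classifier's prediction while querying only its output probabilities, can be sketched in plain Python. The sketch below is an illustrative assumption, not the paper's actual algorithm: it uses only two attributes (brightness and contrast on a flat grayscale pixel list), a simple random search instead of the paper's optimization procedure, and hypothetical function names (`adjust`, `attribute_attack`). It does show the key ingredients: attribute factors bounded in an L_inf ball around the identity, a query budget, and success defined as a changed top-1 prediction.

```python
import random

def adjust(pixels, brightness, contrast):
    """Apply brightness (multiplicative scaling) and contrast (stretching
    around the mean) to a flat list of pixel intensities in [0, 255]."""
    mean = sum(pixels) / len(pixels)
    out = []
    for p in pixels:
        v = p * brightness                 # brightness: scale intensities
        v = (v - mean) * contrast + mean   # contrast: stretch around the mean
        out.append(min(255.0, max(0.0, v)))
    return out

def attribute_attack(pixels, predict, true_label, eps=0.3, max_queries=300, seed=0):
    """Random search over attribute factors inside an L_inf ball of radius
    eps around the identity (1.0, 1.0). `predict` is the black-box model:
    it returns class probabilities, and each call counts as one query."""
    rng = random.Random(seed)
    for queries in range(1, max_queries + 1):
        theta = (1.0 + rng.uniform(-eps, eps),   # brightness factor
                 1.0 + rng.uniform(-eps, eps))   # contrast factor
        probs = predict(adjust(pixels, *theta))
        top1 = max(range(len(probs)), key=probs.__getitem__)
        if top1 != true_label:
            return theta, queries  # prediction flipped: attack succeeded
    return None, max_queries       # budget exhausted without success
```

A real implementation would operate on full RGB images (e.g. via an image library's brightness/contrast/sharpness/color enhancers), attack an actual DNN, and replace the random search with the paper's query-efficient optimization.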
Pages: 285-296
Page count: 12