ADC: Adversarial attacks against object Detection that evade Context consistency checks

Cited by: 11
Authors
Yin, Mingjun [1 ]
Li, Shasha [1 ]
Song, Chengyu [1 ]
Asif, M. Salman [1 ]
Roy-Chowdhury, Amit K. [1 ]
Krishnamurthy, Srikanth V.
Affiliations
[1] Univ Calif Riverside, Riverside, CA 92521 USA
DOI
10.1109/WACV51458.2022.00289
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples: slightly perturbed input images that lead DNNs to make wrong predictions. Various defense strategies have been proposed to protect against such examples. A very recent defense strategy for detecting adversarial examples, which has been shown to be robust to current attacks, is to check for intrinsic context consistencies in the input data, where context refers to various relationships (e.g., object-to-object co-occurrence relationships) in images. In this paper, we show that even context consistency checks can be brittle to properly crafted adversarial examples; to the best of our knowledge, we are the first to do so. Specifically, we propose an adaptive framework to generate examples that subvert such defenses, namely, Adversarial attacks against object Detection that evade Context consistency checks (ADC). In ADC, we formulate a joint optimization problem with two simultaneous attack goals: (i) fooling the object detector and (ii) evading the context consistency check system. Experiments on both the PASCAL VOC and MS COCO datasets show that examples generated with ADC fool the object detector with a success rate of over 85% in most cases, and at the same time evade the recently proposed context consistency checks with a "bypassing" rate of over 80% in most cases. Our results suggest that how to robustly model context and check its consistency is still an open problem.
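This record does not reproduce the paper's exact formulation; the following is a minimal sketch of what such a joint adversarial objective could look like, where f denotes the object detector, g the context consistency checker, x the clean image, delta the perturbation, and lambda and epsilon are assumed trade-off and budget hyperparameters (all symbols here are illustrative, not taken from the paper):

    \[
    \min_{\delta \,:\, \|\delta\|_\infty \le \epsilon}
    \; \mathcal{L}_{\mathrm{det}}\big(f(x+\delta)\big)
    \;+\; \lambda\, \mathcal{L}_{\mathrm{ctx}}\big(g(x+\delta)\big)
    \]

Here \(\mathcal{L}_{\mathrm{det}}\) rewards mispredictions by the detector (goal (i)) and \(\mathcal{L}_{\mathrm{ctx}}\) penalizes inconsistencies that the context defense would flag (goal (ii)), so minimizing the weighted sum pursues both attack goals at once.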
Pages: 2836-2845
Page count: 10
Related Papers (50 in total)
  • [1] Adversarial Attacks for Object Detection
    Xu, Bo
    Zhu, Jinlin
    Wang, Danwei
    PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 7281 - 7287
  • [2] Survey of Physical Adversarial Attacks Against Object Detection Models
    Cai, Wei
    Di, Xingyu
    Jiang, Xinhao
    Wang, Xin
    Gao, Weijie
    Computer Engineering and Applications, 2024, 60 (10) : 61 - 75
  • [3] ROSA: Robust Salient Object Detection Against Adversarial Attacks
    Li, Haofeng
    Li, Guanbin
    Yu, Yizhou
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (11) : 4835 - 4847
  • [4] Adversarial Evasion Noise Attacks Against TensorFlow Object Detection API
    Kannan, Raadhesh
    Jian, Chin Ji
    Guo, XiaoNing
    INTERNATIONAL CONFERENCE FOR INTERNET TECHNOLOGY AND SECURED TRANSACTIONS (ICITST-2020), 2020, : 172 - 175
  • [5] Contextual Adversarial Attacks for Object Detection
    Zhang, Hantao
    Zhou, Wengang
    Li, Houqiang
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2020,
  • [6] Using bilateral filtering and autoencoder to defend against adversarial attacks for object detection
    Wang, Xiaoqin
    Sun, Lei
    Mao, Xiuqing
    Yang, Youhuan
    Liu, Peiyuan
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (04)
  • [7] Upcycling adversarial attacks for infrared object detection
    Kim, Hoseong
    Lee, Chanyong
    NEUROCOMPUTING, 2022, 482 : 1 - 13
  • [8] Survey on adversarial attacks and defenses for object detection
    Wang, Xinxin
    Chen, Jing
    He, Kun
    Zhang, Zijun
    Du, Ruiying
    Li, Qiao
    She, Jisi
    Tongxin Xuebao/Journal on Communications, 2023, 44 (11) : 260 - 277
  • [9] Robust Deep Object Tracking against Adversarial Attacks
    Jia, Shuai
    Ma, Chao
    Song, Yibing
    Yang, Xiaokang
    Yang, Ming-Hsuan
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (03) : 1238 - 1257
  • [10] EvoAttack: suppressive adversarial attacks against object detection models using evolutionary search
    Chan, Kenneth H.
    Cheng, Betty H. C.
    AUTOMATED SOFTWARE ENGINEERING, 2025, 32 (01)