Adversarial patch attacks against aerial imagery object detectors

Cited by: 14
Authors
Tang, Guijian [1 ,2 ]
Jiang, Tingsong [2 ]
Zhou, Weien [2 ]
Li, Chao [2 ,3 ]
Yao, Wen [2 ]
Zhao, Yong [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Aerosp Sci & Engn, 109 Deya Rd, Changsha 410073, Peoples R China
[2] Chinese Acad Mil Sci, Def Innovat Inst, 53 Fengtai East St, Beijing 100071, Peoples R China
[3] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial patch attacks; Aerial imagery; Object detection; Black-box
DOI
10.1016/j.neucom.2023.03.050
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Although Deep Neural Network (DNN)-based object detectors are widely used in many fields, especially in aerial imagery object detection, it has been observed that a small, elaborately designed patch attached to an image can mislead these detectors into producing erroneous output. However, in previous works the target detectors being attacked are quite simple and the attack efficiency is relatively low, making such attacks impracticable in real scenarios. To address these limitations, a new adversarial patch attack algorithm is proposed in this paper. Firstly, we designed a novel loss function that uses the intermediate outputs of the model, rather than the final outputs interpreted by the detection head, to optimize adversarial patches. Experiments conducted on the DOTA, RSOD, and NWPU VHR-10 datasets demonstrate that our method can significantly degrade the performance of the detectors. Secondly, we conducted intensive experiments to investigate the impact of different outputs of the detection model on generating adversarial patches, demonstrating that the class score is not as effective as the objectness score. Thirdly, we comprehensively analyzed the attack transferability across different aerial imagery datasets, verifying that patches generated on one dataset are also effective in attacking another. Moreover, we proposed ensemble training to boost the attack's transferability across models. Our work raises an alarm about the application of DNN-based object detectors in aerial imagery. (c) 2023 Elsevier B.V. All rights reserved.
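The abstract's core idea, optimizing the patch against intermediate (pre-detection-head) outputs, in particular the objectness score, and averaging the loss over an ensemble of detectors for transferability, can be illustrated with a short sketch. The following minimal PyTorch sketch is NOT the authors' implementation; it assumes (beyond what the record states) a YOLO-style raw head output of shape [B, A, H, W, 5 + C] with the objectness logit at index 4, a fixed paste location, and illustrative hyper-parameters, and the names `models`, `dataloader`, `apply_patch`, and `optimize_patch` are hypothetical.

    # Sketch of objectness-suppression patch optimization over a model ensemble.
    # Assumption: each detector in `models` returns a raw map [B, A, H, W, 5 + C]
    # whose last-dimension index 4 is the objectness logit.
    import torch

    def apply_patch(images, patch, top=20, left=20):
        """Paste the (3, h, w) patch onto every image at a fixed location."""
        patched = images.clone()
        _, h, w = patch.shape
        patched[:, :, top:top + h, left:left + w] = patch
        return patched

    def ensemble_objectness_loss(models, patched):
        """Average objectness score over the ensemble; minimizing it
        suppresses detections regardless of the predicted class."""
        losses = []
        for model in models:
            raw = model(patched)               # [B, A, H, W, 5 + C] (assumed)
            obj = torch.sigmoid(raw[..., 4])   # objectness scores
            losses.append(obj.mean())
        return torch.stack(losses).mean()

    def optimize_patch(models, dataloader, steps=1000, lr=0.03, size=(3, 60, 60)):
        patch = torch.rand(size, requires_grad=True)   # random init in [0, 1]
        optimizer = torch.optim.Adam([patch], lr=lr)
        data_iter = iter(dataloader)
        for _ in range(steps):
            try:
                images, _ = next(data_iter)
            except StopIteration:
                data_iter = iter(dataloader)
                images, _ = next(data_iter)
            loss = ensemble_objectness_loss(models, apply_patch(images, patch))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                patch.clamp_(0.0, 1.0)         # keep the patch a valid image
        return patch.detach()

Attacking the objectness score rather than per-class scores is consistent with the abstract's finding that the class score is the less effective target, and averaging the loss over several detectors corresponds to the ensemble training the authors propose for cross-model transferability.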
Pages: 128-140
Page count: 13