On the Defense Against Adversarial Examples Beyond the Visible Spectrum

Cited: 0
|
Authors
Ortiz, Anthony [1 ]
Fuentes, Olac [1 ]
Rosario, Dalton [2 ]
Kiekintveld, Christopher [1 ]
Affiliations
[1] Univ Texas El Paso, Dept Comp Sci, El Paso, TX 79968 USA
[2] US Army, Res Lab, Image Proc Branch, Adelphi, MD USA
Keywords
Adversarial Examples; Adversarial Machine Learning; Multispectral Imagery; Defenses;
DOI
None available
Chinese Library Classification (CLC)
TN [Electronic technology; communication technology];
Discipline Code
0809;
Abstract
Machine learning (ML) models based on RGB images are vulnerable to adversarial attacks, representing a potential cyber threat to the user. Adversarial examples are inputs maliciously constructed to induce errors in ML systems at test time. Recently, researchers showed that such attacks can also be successfully applied at test time to ML models based on multispectral imagery, suggesting this threat is likely to extend to the hyperspectral data space as well. Military communities across the world continue to grow their investment portfolios in multispectral and hyperspectral remote sensing, while expressing interest in machine-learning-based systems. This paper aims to increase the military community's awareness of the adversarial threat and to propose ML training strategies and resilient solutions for state-of-the-art artificial neural networks. Specifically, the paper introduces an adversarial detection network that exploits domain-specific knowledge of material response in the shortwave infrared spectrum, and a framework that jointly integrates an automatic band selection method for multispectral imagery with adversarial training and adversarial spectral rule-based detection. Experimental results show the effectiveness of the approach on an automatic semantic segmentation task using Digital Globe's WorldView-3 satellite 16-band imagery.
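The abstract's central notion, adversarial examples crafted at test time, can be illustrated with the classic Fast Gradient Sign Method (FGSM): perturb each input feature by a small step in the sign of the loss gradient. The sketch below applies FGSM to a toy logistic-regression classifier; all weights and inputs are hypothetical values chosen for illustration, and this is not the paper's detection network or band-selection framework.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM attack on a logistic-regression classifier.

    For cross-entropy loss, the gradient of the loss w.r.t. the input
    is (p - y) * w, where p is the predicted P(y=1). Each feature is
    shifted by eps in the sign of that gradient, increasing the loss
    for the true label y (0 or 1)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy hand-set model and a correctly classified input (hypothetical).
w, b = [2.0, -1.0, 0.5], 0.0
x, y = [0.4, 0.1, 0.2], 1

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
score = lambda v: sigmoid(sum(wi * vi for wi, vi in zip(w, v)) + b)
# score(x) > 0.5 (correct), score(x_adv) < 0.5 (flipped by the attack)
```

The same gradient-sign principle carries over to deep networks and, as the paper notes, to multispectral inputs; defenses such as adversarial training simply fold perturbed examples like `x_adv` back into the training set.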
Pages: 553-558
Page count: 6
Related Papers
50 items in total
  • [41] Understanding adversarial robustness against on-manifold adversarial examples
    Xiao, Jiancong
    Yang, Liusha
    Fan, Yanbo
    Wang, Jue
    Luo, Zhi-Quan
    PATTERN RECOGNITION, 2025, 159
  • [42] Defense against Universal Adversarial Perturbations
    Akhtar, Naveed
    Liu, Jian
    Mian, Ajmal
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 3389 - 3398
  • [43] Deblurring as a Defense against Adversarial Attacks
    Duckworth, William, III
    Liao, Weixian
    Yu, Wei
    2023 IEEE 12TH INTERNATIONAL CONFERENCE ON CLOUD NETWORKING, CLOUDNET, 2023, : 61 - 67
  • [44] Sequence Squeezing: A Defense Method Against Adversarial Examples for API Call-Based RNN Variants
    Rosenberg, Ishai
    Shabtai, Asaf
    Elovici, Yuval
    Rokach, Lior
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [45] Detect and defense against adversarial examples in deep learning using natural scene statistics and adaptive denoising
    Anouar Kherchouche
    Sid Ahmed Fezza
    Wassim Hamidouche
    Neural Computing and Applications, 2022, 34 : 21567 - 21582
  • [47] ERROR DIFFUSION HALFTONING AGAINST ADVERSARIAL EXAMPLES
    Lo, Shao-Yuan
    Patel, Vishal M.
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3892 - 3896
  • [48] Deep neural rejection against adversarial examples
    Angelo Sotgiu
    Ambra Demontis
    Marco Melis
    Battista Biggio
    Giorgio Fumera
    Xiaoyi Feng
    Fabio Roli
    EURASIP Journal on Information Security, 2020
  • [49] Robust Decision Trees Against Adversarial Examples
    Chen, Hongge
    Zhang, Huan
    Boning, Duane
    Hsieh, Cho-Jui
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97