On the Defense Against Adversarial Examples Beyond the Visible Spectrum

Times Cited: 0
Authors
Ortiz, Anthony [1 ]
Fuentes, Olac [1 ]
Rosario, Dalton [2 ]
Kiekintveld, Christopher [1 ]
Affiliations
[1] Univ Texas El Paso, Dept Comp Sci, El Paso, TX 79968 USA
[2] US Army, Res Lab, Image Proc Branch, Adelphi, MD USA
Keywords
Adversarial Examples; Adversarial Machine Learning; Multispectral Imagery; Defenses;
DOI
Not available
Chinese Library Classification (CLC)
TN [Electronics and Communication Technology];
Discipline Classification Code
0809;
Abstract
Machine learning (ML) models based on RGB images are vulnerable to adversarial attacks, representing a potential cyber threat to the user. Adversarial examples are inputs maliciously constructed to induce errors in ML systems at test time. Recently, researchers showed that such attacks can also be successfully applied to ML models based on multispectral imagery, suggesting this threat is likely to extend to the hyperspectral data space as well. Military communities across the world continue to grow their investment portfolios in multispectral and hyperspectral remote sensing while expressing interest in machine-learning-based systems. This paper aims to increase the military community's awareness of the adversarial threat and to propose ML training strategies and resilient solutions for state-of-the-art artificial neural networks. Specifically, the paper introduces an adversarial detection network that explores domain-specific knowledge of material response in the shortwave infrared spectrum, and a framework that jointly integrates an automatic band selection method for multispectral imagery with adversarial training and adversarial spectral rule-based detection. Experimental results show the effectiveness of the approach on an automatic semantic segmentation task using DigitalGlobe's WorldView-3 satellite 16-band imagery.
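To make the test-time threat and the spectral-detection idea described in the abstract concrete, the minimal sketch below crafts an FGSM-style perturbation against a toy 16-band per-pixel classifier and measures how much a simple spectral index shifts as a result. This is an illustrative assumption, not the authors' implementation: the network, the five-class label set, the band indices (4 and 12), and the epsilon value are placeholders, and FGSM is used only as a generic stand-in attack.

# Illustrative sketch (not the paper's implementation): an FGSM-style
# perturbation on a 16-band multispectral tile, followed by a toy
# spectral-consistency index. Model, band indices, and epsilon are
# placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy per-pixel classifier standing in for a semantic-segmentation network.
model = nn.Sequential(
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 5, kernel_size=1),   # 5 hypothetical land-cover classes
)

def fgsm_attack(x, y, eps=0.01):
    """Fast Gradient Sign Method: nudge every band in the direction that
    increases the per-pixel classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def spectral_index(x, vnir_band=4, swir_band=12):
    """Toy spectral rule: normalized difference between a visible/NIR band
    and a SWIR band. Band indices are illustrative placeholders."""
    vnir, swir = x[:, vnir_band], x[:, swir_band]
    return (vnir - swir) / (vnir + swir + 1e-6)

# Random stand-in for a normalized 16-band tile and its per-pixel labels.
x = torch.rand(1, 16, 64, 64)
y = torch.randint(0, 5, (1, 64, 64))

x_adv = fgsm_attack(x, y)
shift = (spectral_index(x_adv) - spectral_index(x)).abs().mean()
print(f"mean spectral-index shift after attack: {shift.item():.4f}")

In a real pipeline, a rule-based detector of this flavor would compare per-pixel spectral indices against the ranges expected for the predicted material class and flag pixels whose indices deviate strongly, which is the general intuition behind exploiting shortwave-infrared material response for adversarial detection.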
Pages: 553-558
Number of Pages: 6