On the Defense Against Adversarial Examples Beyond the Visible Spectrum

Cited by: 0
Authors
Ortiz, Anthony [1 ]
Fuentes, Olac [1 ]
Rosario, Dalton [2 ]
Kiekintveld, Christopher [1 ]
Affiliations
[1] Univ Texas El Paso, Dept Comp Sci, El Paso, TX 79968 USA
[2] US Army, Res Lab, Image Proc Branch, Adelphi, MD USA
Keywords
Adversarial Examples; Adversarial Machine Learning; Multispectral Imagery; Defenses
DOI
Not available
Chinese Library Classification (CLC)
TN [Electronic technology; communication technology]
Subject Classification Code
0809
Abstract
Machine learning (ML) models based on RGB images are vulnerable to adversarial attacks, representing a potential cyber threat to the user. Adversarial examples are inputs maliciously constructed to induce errors in ML systems at test time. Recently, researchers showed that such attacks can also be successfully applied at test time to ML models based on multispectral imagery, suggesting that this threat is likely to extend to the hyperspectral data space as well. Military communities across the world continue to grow their investment portfolios in multispectral and hyperspectral remote sensing while expressing interest in machine-learning-based systems. This paper aims to increase the military community's awareness of the adversarial threat and to propose ML training strategies and resilient solutions for state-of-the-art artificial neural networks. Specifically, the paper introduces an adversarial detection network that exploits domain-specific knowledge of material response in the shortwave infrared spectrum, and a framework that jointly integrates an automatic band selection method for multispectral imagery with adversarial training and adversarial spectral rule-based detection. Experimental results show the effectiveness of the approach on an automatic semantic segmentation task using DigitalGlobe's WorldView-3 16-band satellite imagery.
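To make the attack and defense setting described in the abstract concrete, the sketch below generates an FGSM-style adversarial perturbation for a multi-band input and uses it for one adversarial-training step. This is a minimal illustration of the general technique, not the authors' implementation: the toy per-pixel classifier, the class count, the epsilon value, and the random data are placeholder assumptions (only the 16-band count echoes WorldView-3).

```python
# Minimal FGSM / adversarial-training sketch for multispectral input.
# Illustrative only; model, classes, epsilon, and data are assumptions.
import torch
import torch.nn as nn

NUM_BANDS = 16      # e.g., WorldView-3 provides 16 spectral bands
NUM_CLASSES = 5     # placeholder number of semantic classes

# Toy per-pixel classifier standing in for a segmentation network.
model = nn.Sequential(
    nn.Conv2d(NUM_BANDS, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, kernel_size=1),
)
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, y, eps=0.03):
    """Return x perturbed in the direction that increases the loss (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep reflectance values in [0, 1]
    return x_adv.detach()

# One adversarial-training step: update the model on the perturbed batch.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(2, NUM_BANDS, 64, 64)            # fake reflectance patch
y = torch.randint(0, NUM_CLASSES, (2, 64, 64))  # fake per-pixel labels
x_adv = fgsm_attack(x, y)
optimizer.zero_grad()
loss_fn(model(x_adv), y).backward()
optimizer.step()
```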
Pages: 553-558
Number of pages: 6