Towards Defending against Adversarial Examples via Attack-Invariant Features

Cited by: 0
Authors
Zhou, Dawei [1 ,2 ]
Liu, Tongliang [2 ]
Han, Bo [3 ]
Wang, Nannan [1 ]
Peng, Chunlei [4 ]
Gao, Xinbo [5 ]
Affiliations
[1] Xidian Univ, Sch Telecommun Engn, State Key Lab Integrated Serv Networks, Xian, Shaanxi, Peoples R China
[2] Univ Sydney, Sch Comp Sci, Trustworthy Machine Learning Lab, Sydney, NSW, Australia
[3] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
[4] Xidian Univ, State Key Lab Integrated Serv Networks, Sch Cyber Engn, Xian, Shaanxi, Peoples R China
[5] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing, Peoples R China
Funding
Australian Research Council; National Natural Science Foundation of China;
Keywords
CORTEX;
DOI
N/A
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial noise. Their adversarial robustness can be improved by exploiting adversarial examples during training. However, given continuously evolving attacks, models trained on seen types of adversarial examples generally fail to generalize to unseen types. To address this problem, we propose to remove adversarial noise by learning attack-invariant features that preserve semantic classification information and generalize across attacks. Specifically, we introduce an adversarial feature learning mechanism to disentangle invariant features from adversarial noise, and a normalization term in the encoded space of the attack-invariant features to address the bias between seen and unseen types of attacks. Empirical evaluations demonstrate that our method provides better protection than previous state-of-the-art approaches, especially against unseen types of attacks and adaptive attacks.
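The abstract describes two ingredients: an invariance objective that disentangles attack-invariant features from adversarial noise, and a normalization of the encoded feature space. The paper's actual formulation is not reproduced here; the toy sketch below (with hypothetical names `encode`, `invariance_penalty`, `normalized`) only illustrates the general idea: an invariance penalty pulls the encodings of clean inputs and their adversarially perturbed counterparts together, and L2 normalization of encodings is a simple stand-in for a normalization term over the encoded space.

```python
import numpy as np

def encode(x, W):
    # Toy encoder: one linear layer followed by ReLU.
    return np.maximum(0, x @ W)

def normalized(z, eps=1e-8):
    # L2-normalize each encoding; normalizing the encoded space is one
    # simple way to reduce scale bias between the feature distributions
    # produced by different attack types.
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)

def invariance_penalty(z_clean, z_adv):
    # Penalize distance between encodings of clean inputs and their
    # perturbed counterparts, pushing the encoder toward features that
    # are invariant to the attack.
    return np.mean((z_clean - z_adv) ** 2)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
x_clean = rng.standard_normal((16, 8))
# Stand-in for adversarial noise: small random perturbation.
x_adv = x_clean + 0.1 * rng.standard_normal((16, 8))

z_c = normalized(encode(x_clean, W))
z_a = normalized(encode(x_adv, W))
penalty = invariance_penalty(z_c, z_a)  # term added to the training loss
```

In a real defense this penalty would be minimized jointly with the classification loss over the invariant features, rather than computed once as here.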
Pages: 11