Towards Undetectable Adversarial Examples: A Steganographic Perspective

Cited by: 0
Authors
Zeng, Hui [1,2]
Chen, Biwei [3]
Yang, Rongsong [1]
Li, Chenggang [1]
Peng, Anjie [1,2]
Affiliations
[1] Southwest Univ Sci & Technol, Sch Comp Sci & Technol, Mianyang, Sichuan, Peoples R China
[2] Guangdong Prov Key Lab Informat Secur Technol, Guangzhou, Peoples R China
[3] Beijing Normal Univ, Beijing, Peoples R China
Keywords
Adversarial examples; statistical analysis; embedding suitability map; steganography;
DOI
10.1007/978-981-99-8070-3_14
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Over the past decade, adversarial examples have demonstrated a growing ability to fool neural networks. However, most adversarial examples can be detected easily, especially under statistical analysis. Ensuring undetectability is crucial for adversarial examples to succeed in practice. In this paper, we borrow the idea of the embedding suitability map from steganography and employ it to modulate the adversarial perturbation, so that the perturbation is concentrated in hard-to-detect areas and attenuated in predictable regions. Extensive experiments show that the proposed scheme is compatible with various existing attacks and, at the same attack strength, significantly boosts the undetectability of adversarial examples against both human inspection and statistical analysis. The code is available at github.com/zengh5/Undetectable-attack.
Pages: 172 - 183
Page count: 12
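
To make the idea described in the abstract concrete, below is a minimal, hypothetical Python/PyTorch sketch of a suitability-modulated FGSM attack. It is not the authors' implementation (see the linked repository for that): the suitability_map here is a simple high-pass texture proxy standing in for a steganographic embedding-suitability map, and model, x, y, and eps are assumed inputs.

# Hypothetical sketch: modulate an FGSM perturbation with a texture-based
# suitability map so noise concentrates in busy regions and is attenuated
# in smooth, predictable regions. Not the paper's exact method.
import torch
import torch.nn.functional as F

def suitability_map(x, eps=1e-8):
    # Rough texture map: high values in busy regions, low in smooth ones.
    # A 3x3 high-pass (Laplacian-like) filter is applied per channel as a
    # stand-in for a steganographic cost/suitability function.
    k = torch.tensor([[-1., -1., -1.],
                      [-1.,  8., -1.],
                      [-1., -1., -1.]], device=x.device, dtype=x.dtype)
    k = k.view(1, 1, 3, 3).repeat(x.shape[1], 1, 1, 1)
    residual = F.conv2d(x, k, padding=1, groups=x.shape[1]).abs()
    # Normalize to [0, 1] per image so it can directly scale the perturbation.
    mx = residual.flatten(1).max(dim=1, keepdim=True).values.view(-1, 1, 1, 1)
    return residual / (mx + eps)

def modulated_fgsm(model, x, y, eps=8 / 255):
    # FGSM whose sign perturbation is scaled elementwise by the suitability map.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    s = suitability_map(x.detach())           # hypothetical suitability proxy
    x_adv = x + eps * s * grad.sign()         # concentrate noise in textured areas
    return x_adv.clamp(0, 1).detach()

Under these assumptions, a call such as x_adv = modulated_fgsm(classifier, images, labels) yields perturbations whose magnitude follows the texture map, so smooth regions stay close to the original image while busy regions absorb most of the adversarial noise.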