Towards Undetectable Adversarial Examples: A Steganographic Perspective

Cited by: 0
Authors
Zeng, Hui [1 ,2 ]
Chen, Biwei [3 ]
Yang, Rongsong [1 ]
Li, Chenggang [1 ]
Peng, Anjie [1 ,2 ]
Affiliations
[1] Southwest Univ Sci & Technol, Sch Comp Sci & Technol, Mianyang, Sichuan, Peoples R China
[2] Guangdong Prov Key Lab Informat Secur Technol, Guangzhou, Peoples R China
[3] Beijing Normal Univ, Beijing, Peoples R China
Keywords
Adversarial examples; statistical analysis; embedding suitability map; steganography;
DOI
10.1007/978-981-99-8070-3_14
CLC classification
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Over the past decade, adversarial examples have demonstrated an increasing ability to fool neural networks. However, most adversarial examples are easily detected, especially under statistical analysis. Ensuring undetectability is crucial for the practical success of adversarial examples. In this paper, we borrow the idea of the embedding suitability map from steganography and employ it to modulate the adversarial perturbation. In this way, adversarial perturbations are concentrated in hard-to-detect areas and attenuated in predictable regions. Extensive experiments show that the proposed scheme is compatible with various existing attacks and, at the same attack strength, significantly boosts the undetectability of adversarial examples against both human inspection and statistical analysis. The code is available at github.com/zengh5/Undetectable-attack.
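The core idea in the abstract can be sketched in a few lines: compute a per-pixel suitability map that is high in textured (hard-to-detect) regions, then scale a sign-gradient perturbation elementwise by that map. This is a minimal illustrative sketch, not the paper's exact construction; the texture proxy (gradient magnitude), the function names, and the parameter values are all assumptions for illustration.

```python
import numpy as np

def suitability_map(image, eps=1e-6):
    """Toy embedding-suitability map based on local texture strength.

    Statistical detectors model smooth regions well, so textured
    regions are assumed harder to detect and get higher suitability.
    Gradient magnitude is used here as a simple texture proxy.
    """
    gy, gx = np.gradient(image.astype(np.float64))
    texture = np.sqrt(gx ** 2 + gy ** 2)
    # Normalize to [0, 1] so the map acts as a pure attenuation factor
    return texture / (texture.max() + eps)

def modulated_perturbation(grad_sign, suitability, epsilon=8.0):
    """Scale a sign-gradient perturbation by the suitability map,
    concentrating changes in textured areas and attenuating them
    in smooth, predictable regions."""
    return epsilon * suitability * grad_sign

# Usage with random stand-ins for an image and a loss gradient
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
grad = rng.standard_normal((32, 32))

s = suitability_map(img)
delta = modulated_perturbation(np.sign(grad), s, epsilon=8.0)
adv = np.clip(img + delta, 0.0, 255.0)
```

Because the map is normalized to [0, 1], the modulated perturbation never exceeds the epsilon budget of the underlying attack, which is why the scheme can be layered on existing attacks without changing their L-infinity bound.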
Pages: 172-183
Page count: 12
Related papers
50 records total
  • [41] Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples
    Lee, Sungyoon
    Lee, Woojin
    Park, Jinseong
    Lee, Jaewook
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [42] Towards Defending against Adversarial Examples via Attack-Invariant Features
    Zhou, Dawei
    Liu, Tongliang
    Han, Bo
    Wang, Nannan
    Peng, Chunlei
    Gao, Xinbo
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [43] Robust and Undetectable Steganographic Timing Channels for i.i.d. Traffic
    Liu, Yali
    Ghosal, Dipak
    Armknecht, Frederik
    Sadeghi, Ahmad-Reza
    Schulz, Steffen
    Katzenbeisser, Stefan
    INFORMATION HIDING, 2010, 6387 : 193 - +
  • [44] Efficient Adversarial Training with Transferable Adversarial Examples
    Zheng, Haizhong
    Zhang, Ziqi
    Gu, Juncheng
    Lee, Honglak
    Prakash, Atul
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 1178 - 1187
  • [45] Automatic Steganographic Distortion Learning Using a Generative Adversarial Network
    Tang, Weixuan
    Tan, Shunquan
    Li, Bin
    Huang, Jiwu
    IEEE SIGNAL PROCESSING LETTERS, 2017, 24 (10) : 1547 - 1551
  • [46] Towards Generating Adversarial Examples on Combined Systems of Automatic Speaker Verification and Spoofing Countermeasure
    Zhang, Xingyu
    Zhang, Xiongwei
    Zou, Xia
    Liu, Haibo
    Sun, Meng
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [47] Adversarial Minimax Training for Robustness Against Adversarial Examples
    Komiyama, Ryota
    Hattori, Motonobu
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302 : 690 - 699
  • [48] Adversarial Examples with Specular Highlights
    Vats, Vanshika
    Jerripothula, Koteswar Rao
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 3604 - 3613
  • [49] Survey on Generating Adversarial Examples
    Pan W.-W.
    Wang X.-Y.
    Song M.-L.
    Chen C.
    Ruan Jian Xue Bao/Journal of Software, 2020, 31 (01): : 67 - 81
  • [50] Adversarial Examples in Remote Sensing
    Czaja, Wojciech
    Fendley, Neil
    Pekala, Michael
    Ratto, Christopher
    Wang, I-Jeng
    26TH ACM SIGSPATIAL INTERNATIONAL CONFERENCE ON ADVANCES IN GEOGRAPHIC INFORMATION SYSTEMS (ACM SIGSPATIAL GIS 2018), 2018, : 408 - 411