Towards Undetectable Adversarial Examples: A Steganographic Perspective

Cited by: 0
Authors
Zeng, Hui [1 ,2 ]
Chen, Biwei [3 ]
Yang, Rongsong [1 ]
Li, Chenggang [1 ]
Peng, Anjie [1 ,2 ]
Affiliations
[1] Southwest Univ Sci & Technol, Sch Comp Sci & Technol, Mianyang, Sichuan, Peoples R China
[2] Guangdong Prov Key Lab Informat Secur Technol, Guangzhou, Peoples R China
[3] Beijing Normal Univ, Beijing, Peoples R China
Keywords
Adversarial examples; statistical analysis; embedding suitability map; steganography;
DOI
10.1007/978-981-99-8070-3_14
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Over the past decade, adversarial examples have demonstrated an increasing ability to fool neural networks. However, most adversarial examples are easily detected, especially under statistical analysis. Ensuring undetectability is crucial for the success of adversarial examples in practice. In this paper, we borrow the idea of the embedding suitability map from steganography and employ it to modulate the adversarial perturbation. In this way, the adversarial perturbations are concentrated in hard-to-detect areas and attenuated in predictable regions. Extensive experiments show that the proposed scheme is compatible with various existing attacks and, at the same attack strength, significantly boosts the undetectability of adversarial examples against both human inspection and statistical analysis. The code is available at github.com/zengh5/Undetectable-attack.
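The abstract's core idea — scaling an attack's perturbation by a steganographic embedding-suitability map so that changes concentrate in textured, hard-to-detect regions — can be illustrated with a minimal sketch. The cost function below (a high-pass residual magnitude) is a hypothetical stand-in for the suitability maps used in steganography (e.g., HILL-style costs), not the paper's actual construction; `modulated_perturbation` applies it to an FGSM-style sign perturbation.

```python
import numpy as np

def suitability_map(image, eps=1e-6):
    """Hypothetical suitability map: high-pass residual magnitude as a
    rough measure of local texture (flat regions -> 0, textured -> ~1).
    Real steganographic cost functions are more elaborate."""
    kernel = np.array([[-1,  2, -1],
                       [ 2, -4,  2],
                       [-1,  2, -1]], dtype=float)
    h, w = image.shape
    padded = np.pad(image.astype(float), 1, mode="edge")
    resid = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # Magnitude of the high-pass filter response at pixel (i, j)
            resid[i, j] = abs(np.sum(padded[i:i + 3, j:j + 3] * kernel))
    return resid / (resid.max() + eps)  # normalize to [0, 1]

def modulated_perturbation(grad, suitability, eps=8.0):
    """FGSM-style sign perturbation, scaled per pixel by the suitability
    map so the attack budget concentrates in hard-to-detect areas."""
    return eps * suitability * np.sign(grad)
```

On an image that is flat on the left and noisy on the right, the resulting perturbation is near zero in the flat half and near full strength in the textured half, which is the modulation behavior the abstract describes.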
Pages: 172 - 183 (12 pages)
Related Papers (50 total)
  • [21] Generating Transferable Adversarial Examples From the Perspective of Ensemble and Distribution
    Zhang, Huangyi
    Liu, Ximeng
    PROCEEDINGS OF 2024 3RD INTERNATIONAL CONFERENCE ON CYBER SECURITY, ARTIFICIAL INTELLIGENCE AND DIGITAL ECONOMY, CSAIDE 2024, 2024, : 173 - 177
  • [22] Towards universal and sparse adversarial examples for visual object tracking
    Sheng, Jingjing
    Zhang, Dawei
    Chen, Jianxin
    Xiao, Xin
    Zheng, Zhonglong
    APPLIED SOFT COMPUTING, 2024, 153
  • [23] Towards Robust Ensemble Defense Against Adversarial Examples Attack
    Mani, Nag
    Moh, Melody
    Moh, Teng-Sheng
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [24] ADVERSARIAL EXAMPLES FOR GOOD: ADVERSARIAL EXAMPLES GUIDED IMBALANCED LEARNING
    Zhang, Jie
    Zhang, Lei
    Li, Gang
    Wu, Chao
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 136 - 140
  • [25] Generating steganographic images via adversarial training
    Hayes, Jamie
    Danezis, George
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [26] Towards Visualizing and Detecting Audio Adversarial Examples for Automatic Speech Recognition
    Zong, Wei
    Chow, Yang-Wai
    Susilo, Willy
    INFORMATION SECURITY AND PRIVACY, ACISP 2021, 2021, 13083 : 531 - 549
  • [27] From Spatial to Spectral Domain, a New Perspective for Detecting Adversarial Examples
    Liu, Zhiyuan
    Cao, Chunjie
    Tao, Fangjian
    Li, Yifan
    Lin, Xiaoyu
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [28] Rethinking the optimization objective for transferable adversarial examples from a fuzzy perspective
    Yang, Xiangyuan
    Lin, Jie
    Zhang, Hanlin
    Zhao, Peng
    NEURAL NETWORKS, 2025, 184
  • [29] Generating Adversarial Examples with Adversarial Networks
    Xiao, Chaowei
    Li, Bo
    Zhu, Jun-Yan
    He, Warren
    Liu, Mingyan
    Song, Dawn
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 3905 - 3911
  • [30] Adversarial examples are just bugs, too: refining the source of adversarial examples
    Nakkiran, Preetum
    Distill, 2019, 4 (08):