Towards Undetectable Adversarial Examples: A Steganographic Perspective

Cited by: 0
Authors
Zeng, Hui [1 ,2 ]
Chen, Biwei [3 ]
Yang, Rongsong [1 ]
Li, Chenggang [1 ]
Peng, Anjie [1 ,2 ]
Affiliations
[1] Southwest Univ Sci & Technol, Sch Comp Sci & Technol, Mianyang, Sichuan, Peoples R China
[2] Guangdong Prov Key Lab Informat Secur Technol, Guangzhou, Peoples R China
[3] Beijing Normal Univ, Beijing, Peoples R China
Keywords
Adversarial examples; statistical analysis; embedding suitability map; steganography
DOI
10.1007/978-981-99-8070-3_14
CLC classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Over the past decade, adversarial examples have demonstrated an increasing ability to fool neural networks. However, most adversarial examples are easily detected, especially under statistical analysis. Ensuring undetectability is crucial for the success of adversarial examples in practice. In this paper, we borrow the idea of the embedding suitability map from steganography and employ it to modulate the adversarial perturbation. In this way, the adversarial perturbations are concentrated in hard-to-detect areas and attenuated in predictable regions. Extensive experiments show that the proposed scheme is compatible with various existing attacks and, at the same attack ability, significantly boosts the undetectability of adversarial examples against both human inspection and statistical analysis. The code is available at github.com/zengh5/Undetectable-attack.
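The minimal Python/PyTorch sketch below illustrates the idea summarized in the abstract: an FGSM-style perturbation is scaled elementwise by an embedding suitability map so that larger changes fall in textured, hard-to-detect regions and smaller ones in smooth, predictable regions. The texture-based suitability map (absolute high-pass residual, locally averaged) and all function names here are illustrative assumptions, not the authors' implementation; the paper derives its maps from steganographic cost functions.

import torch
import torch.nn.functional as F

def suitability_map(x):
    # Illustrative texture measure (assumed, not the paper's cost function):
    # absolute high-pass residual, locally averaged and normalized to [0, 1].
    # High values mark busy regions where perturbations are harder to detect.
    hp = torch.tensor([[-1., 2., -1.],
                       [ 2., -4., 2.],
                       [-1., 2., -1.]], device=x.device).view(1, 1, 3, 3)
    gray = x.mean(dim=1, keepdim=True)                 # (N, 1, H, W), x in [0, 1]
    residual = F.conv2d(gray, hp, padding=1).abs()
    smoothed = F.avg_pool2d(residual, 3, stride=1, padding=1)
    return smoothed / (smoothed.amax(dim=(2, 3), keepdim=True) + 1e-8)

def modulated_fgsm(model, x, y, eps=8 / 255):
    # One-step attack whose sign-gradient perturbation is modulated by the
    # suitability map: concentrated in texture, attenuated in smooth areas.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    s = suitability_map(x.detach())
    x_adv = x + eps * s * grad.sign()
    return x_adv.clamp(0, 1).detach()

The same modulation can in principle be applied to iterative attacks by rescaling the per-step update with the map; the one-step version is shown only for brevity.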
Pages: 172-183
Page count: 12