Stealthiness Assessment of Adversarial Perturbation: From a Visual Perspective

Times Cited: 0
Authors
Liu, Hangcheng [1 ]
Zhou, Yuan [2 ]
Yang, Ying [3 ,4 ]
Zhao, Qingchuan [5 ]
Zhang, Tianwei [1 ]
Xiang, Tao [6 ]
Affiliations
[1] Nanyang Technol Univ, Coll Comp & Data Sci, Jurong West 639798, Singapore
[2] Zhejiang Sci Tech Univ, Sch Comp Sci & Technol, Hangzhou 310018, Zhejiang, Peoples R China
[3] ASTAR, Inst High Performance Comp IHPC, Singapore 138632, Singapore
[4] ASTAR, Ctr Frontier AI Res CFAR, Singapore 138632, Singapore
[5] City Univ Hong Kong, Dept Comp Sci, Kowloon Tong, Hong Kong, Peoples R China
[6] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Perturbation methods; Measurement; Observers; Predictive models; Distortion; Noise; Feature extraction; Computer science; Visualization; Visual systems; Adversarial stealthiness assessment; adversarial attack; classification; IMAGE QUALITY ASSESSMENT; DATABASE;
DOI
10.1109/TIFS.2024.3520016
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Assessing the stealthiness of adversarial perturbations is challenging due to the lack of appropriate evaluation metrics. Existing metrics, e.g., L_p norms or Image Quality Assessment (IQA), fall short of assessing the pixel-level stealthiness of subtle adversarial perturbations, since they are primarily designed for traditional distortions. To bridge this gap, we present the first comprehensive study on the subjective and objective assessment of the stealthiness of adversarial perturbations from a visual perspective at the pixel level. Specifically, we propose new subjective assessment criteria that allow human observers to score adversarial stealthiness in a fine-grained manner. We then create a large-scale adversarial example dataset comprising 10,586 pairs of clean and adversarial samples covering twelve state-of-the-art adversarial attacks. To obtain subjective scores under the proposed criteria, we recruit 60 human observers, and each adversarial example is evaluated by at least 15 of them; the mean opinion score (MOS) of each adversarial example is used as its label. Finally, we develop a three-stage objective scoring model that mimics human scoring habits to predict the stealthiness of adversarial perturbations. Experimental results demonstrate that our objective model is more consistent with the human visual system than commonly employed metrics such as PSNR and SSIM.
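The evaluation pipeline described in the abstract (per-observer ratings aggregated into a mean opinion score, baseline full-reference metrics computed for each clean/adversarial pair, and rank correlation against human judgments) can be illustrated with a short sketch. This is a minimal sketch, not the authors' code: the 1-5 rating scale, the synthetic data, and all function names are assumptions for illustration, and SROCC (Spearman rank-order correlation) stands in for the paper's consistency evaluation against PSNR and SSIM.

# Minimal sketch (assumptions labeled in comments), not the paper's implementation.
import numpy as np
from scipy.stats import spearmanr
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def mean_opinion_score(ratings: np.ndarray) -> np.ndarray:
    """Average per-observer stealthiness ratings; shape (observers, examples)."""
    return ratings.mean(axis=0)

def objective_scores(clean_imgs, adv_imgs):
    """Compute baseline full-reference metrics (PSNR, SSIM) per image pair."""
    psnr = [peak_signal_noise_ratio(c, a, data_range=1.0)
            for c, a in zip(clean_imgs, adv_imgs)]
    ssim = [structural_similarity(c, a, channel_axis=-1, data_range=1.0)
            for c, a in zip(clean_imgs, adv_imgs)]
    return np.array(psnr), np.array(ssim)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8  # toy number of clean/adversarial pairs (the dataset has 10,586)
    clean = rng.random((n, 32, 32, 3)).astype(np.float32)
    # Toy "adversarial" images: clean image plus a small bounded perturbation.
    adv = np.clip(clean + rng.uniform(-0.03, 0.03, clean.shape), 0.0, 1.0).astype(np.float32)

    # Hypothetical ratings from 15 observers on an assumed 1-5 stealthiness scale.
    ratings = rng.integers(1, 6, size=(15, n)).astype(float)
    mos = mean_opinion_score(ratings)

    # Check how well each metric's ranking agrees with the human MOS ranking.
    psnr, ssim = objective_scores(clean, adv)
    for name, scores in (("PSNR", psnr), ("SSIM", ssim)):
        rho, _ = spearmanr(scores, mos)
        print(f"SROCC({name}, MOS) = {rho:.3f}")

A higher SROCC against MOS indicates closer agreement with human rankings; the paper's claim is that its three-stage objective model attains higher consistency on this kind of test than PSNR or SSIM.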
Pages: 898-913
Page count: 16
Related Papers (50 records)
  • [1] Yang, Bo; Zhang, Hengwei; Wang, Jindong; Yang, Yulong; Lin, Chenhao; Shen, Chao; Zhao, Zhengyu. Adversarial Example Soups: Improving Transferability and Stealthiness for Free. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20: 1882-1894.
  • [2] Chen, Meng; Lu, Li; Yu, Jiadi; Ba, Zhongjie; Lin, Feng; Ren, Kui. AdvReverb: Rethinking the Stealthiness of Audio Adversarial Examples to Human Perception. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19: 1948-1962.
  • [3] Liang, Ya-Chun; Liao, Chung-Shou; Yi, Xinping. Topological Interference Management With Adversarial Topology Perturbation: An Algorithmic Perspective. IEEE TRANSACTIONS ON COMMUNICATIONS, 2022, 70(12): 8153-8166.
  • [4] Oh, Seong Joon; Fritz, Mario; Schiele, Bernt. Adversarial Image Perturbation for Privacy Protection - A Game Theory Perspective. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017: 1491-1500.
  • [5] Xue, Haotian; Araujo, Alexandre; Hu, Bin; Chen, Yongxin. Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023.
  • [6] Brown, Stephan; Milkov, Petar; Patel, Sameep; Looi, Yi Zen; Jain, Edwin; Dong, Ziqian; Gu, Huanying; Artan, N. Sertac. Acoustic and Visual Approaches to Adversarial Text Generation for Google Perspective. 2019 6TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE (CSCI 2019), 2019: 355-360.
  • [7] Li, Maosen; Yang, Yanhua; Wei, Kun; Yang, Xu; Huang, Heng. Learning Universal Adversarial Perturbation by Adversarial Example. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 1350-1358.
  • [8] Qin, Zeyu; Fan, Yanbo; Liu, Yi; Shen, Li; Zhang, Yong; Wang, Jue; Wu, Baoyuan. Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022.
  • [9] He, Fengmei; Chen, Yihuai; Chen, Ruidong; Nie, Weizhi. Point Cloud Adversarial Perturbation Generation for Adversarial Attacks. IEEE ACCESS, 2023, 11: 2767-2774.
  • [10] Guo, Chuan; Frank, Jared S.; Weinberger, Kilian Q. Low Frequency Adversarial Perturbation. 35TH UNCERTAINTY IN ARTIFICIAL INTELLIGENCE CONFERENCE (UAI 2019), 2020, 115: 1127-1137.