Stealthiness Assessment of Adversarial Perturbation: From a Visual Perspective

Cited by: 0
Authors
Liu, Hangcheng [1 ]
Zhou, Yuan [2 ]
Yang, Ying [3 ,4 ]
Zhao, Qingchuan [5 ]
Zhang, Tianwei [1 ]
Xiang, Tao [6 ]
Affiliations
[1] Nanyang Technol Univ, Coll Comp & Data Sci, Jurong West 639798, Singapore
[2] Zhejiang Sci Tech Univ, Sch Comp Sci & Technol, Hangzhou 310018, Zhejiang, Peoples R China
[3] ASTAR, Inst High Performance Comp IHPC, Singapore 138632, Singapore
[4] ASTAR, Ctr Frontier AI Res CFAR, Singapore 138632, Singapore
[5] City Univ Hong Kong, Dept Comp Sci, Kowloon Tong, Hong Kong, Peoples R China
[6] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Perturbation methods; Measurement; Observers; Predictive models; Distortion; Noise; Feature extraction; Computer science; Visualization; Visual systems; Adversarial stealthiness assessment; adversarial attack; classification; IMAGE QUALITY ASSESSMENT; DATABASE;
DOI
10.1109/TIFS.2024.3520016
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Assessing the stealthiness of adversarial perturbations is challenging due to the lack of appropriate evaluation metrics. Existing metrics, e.g., L-p norms or Image Quality Assessment (IQA), fall short of assessing the pixel-level stealthiness of subtle adversarial perturbations, since they are primarily designed for traditional distortions. To bridge this gap, we present the first comprehensive study on the subjective and objective assessment of the stealthiness of adversarial perturbations from a visual perspective at the pixel level. Specifically, we propose new subjective assessment criteria that allow human observers to score adversarial stealthiness in a fine-grained manner. We then create a large-scale adversarial example dataset comprising 10,586 pairs of clean and adversarial samples, covering twelve state-of-the-art adversarial attacks. To obtain subjective scores under the proposed criteria, we recruit 60 human observers, and each adversarial example is evaluated by at least 15 of them. The mean opinion score of each adversarial example is used as its label. Finally, we develop a three-stage objective scoring model that mimics human scoring habits to predict the stealthiness of adversarial perturbations. Experimental results demonstrate that our objective model exhibits superior consistency with the human visual system, surpassing commonly employed metrics such as PSNR and SSIM.
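To make the abstract's pipeline concrete, the sketch below illustrates the two generic building blocks it mentions: aggregating per-observer ratings into a mean opinion score (MOS) label, and computing PSNR as a conventional pixel-level baseline metric. This is a minimal illustration of the standard definitions only; the paper's own scoring criteria and three-stage objective model are not reproduced here, and all function names are placeholders.

```python
import numpy as np

def mean_opinion_score(scores):
    """Aggregate per-observer stealthiness ratings into a single MOS label."""
    return float(np.mean(scores))

def psnr(clean, adv, max_val=255.0):
    """Peak signal-to-noise ratio between a clean image and its adversarial copy."""
    mse = np.mean((clean.astype(np.float64) - adv.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a clean image and a subtly perturbed copy (perturbation in [-2, 2]).
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
noise = rng.integers(-2, 3, size=(32, 32))
adv = np.clip(clean.astype(np.int16) + noise, 0, 255).astype(np.uint8)

print(mean_opinion_score([4, 5, 4, 3, 5]))  # MOS of five observer ratings -> 4.2
print(psnr(clean, adv) > 40)  # subtle perturbations yield a high PSNR -> True
```

The abstract's point is precisely that a high PSNR like this does not guarantee a perturbation is invisible to human observers, which motivates an objective model trained against MOS labels instead.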
Pages: 898 - 913
Page count: 16
Related Papers
50 records in total
  • [41] Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity
    Zhou, Shuai
    Liu, Chi
    Ye, Dayong
    Zhu, Tianqing
    Zhou, Wanlei
    Yu, Philip S.
    ACM COMPUTING SURVEYS, 2023, 55 (08)
  • [42] Variational Adversarial Defense: A Bayes Perspective for Adversarial Training
    Zhao, Chenglong
    Mei, Shibin
    Ni, Bingbing
    Yuan, Shengchao
    Yu, Zhenbo
    Wang, Jun
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05) : 3047 - 3063
  • [43] Analysis of Adversarial Jamming From a Quantum Game Theoretic Perspective
    Borah, Shantom Kumar
    Anand, Shivansh
    Agarwal, Rishabh Vipul
    Bitragunta, Sainath
    IEEE SYSTEMS JOURNAL, 2023, 17 (01): : 881 - 891
  • [44] Toward Understanding and Boosting Adversarial Transferability From a Distribution Perspective
    Zhu, Yao
    Chen, Yuefeng
    Li, Xiaodan
    Chen, Kejiang
    He, Yuan
    Tian, Xiang
    Zheng, Bolun
    Chen, Yaowu
    Huang, Qingming
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 6487 - 6501
  • [45] Revisiting Adversarial Robustness Distillation from the Perspective of Robust Fairness
    Yue, Xinli
    Mou, Ningping
    Wang, Qian
    Zhao, Lingchen
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [46] IoT Network Security from the Perspective of Adversarial Deep Learning
    Sagduyu, Yalin E.
    Shi, Yi
    Erpek, Tugba
    2019 16TH ANNUAL IEEE INTERNATIONAL CONFERENCE ON SENSING, COMMUNICATION, AND NETWORKING (SECON), 2019,
  • [47] Stabilizing Adversarial Invariance Induction from Divergence Minimization Perspective
    Iwasawa, Yusuke
    Akuzawa, Kei
    Matsuo, Yutaka
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 1955 - 1962
  • [48] How to Compare Adversarial Robustness of Classifiers from a Global Perspective
    Risse, Niklas
    Goepfert, Christina
    Goepfert, Jan Philip
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT I, 2021, 12891 : 29 - 41
  • [49] Generating Transferable Adversarial Examples From the Perspective of Ensemble and Distribution
    Zhang, Huangyi
    Liu, Ximeng
    PROCEEDINGS OF 2024 3RD INTERNATIONAL CONFERENCE ON CYBER SECURITY, ARTIFICIAL INTELLIGENCE AND DIGITAL ECONOMY, CSAIDE 2024, 2024, : 173 - 177
  • [50] A Closer Look at Curriculum Adversarial Training: From an Online Perspective
    Shi, Lianghe
    Liu, Weiwei
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 13, 2024, : 14973 - 14981