A Stable and Efficient Data-Free Model Attack With Label-Noise Data Generation

Cited: 0
Authors
Zhang, Zhixuan [1 ]
Zheng, Xingjian [2 ]
Qing, Linbo [1 ]
Liu, Qi [3 ]
Wang, Pingyu [4 ]
Liu, Yu [4 ]
Liao, Jiyang [4 ]
Affiliations
[1] Sichuan Univ, Sch Cyber Sci & Engn, Chengdu 610207, Peoples R China
[2] Frost Drill Intellectual Software Pte Ltd, Int Plaza, Singapore 079903, Singapore
[3] South China Univ Technol, Sch Future Technol, Guangzhou 511442, Peoples R China
[4] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Closed box; Generators; Data models; Data collection; Adaptation models; Diversity methods; Cloning; Glass box; Computational modeling; Deep neural network; data-free; adversarial examples; closed-box attack;
DOI
10.1109/TIFS.2025.3550066
CLC Classification Number
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
The objective of a data-free closed-box adversarial attack is to attack a victim model without using internal information, training datasets, or semantically similar substitute datasets. To address such stricter attack scenarios, recent studies have employed generative networks to synthesize data for training substitute models. Nevertheless, these approaches suffer from both unstable training and diminished attack efficiency. In this paper, we propose a novel query-efficient data-free closed-box adversarial attack method. To mitigate unstable training, for the first time, we directly manipulate the intermediate-layer features of a generator without relying on any substitute model. Specifically, a label-noise-based generation module is created to enhance intra-class patterns by incorporating partial historical information during the learning process. Additionally, we present a feature-disturbed diversity generation method to enlarge the inter-class distance. Meanwhile, we propose an adaptive intra-class attack strategy to heighten attack capability within a limited query budget. In this strategy, an entropy-based distance is utilized to characterize the relative information in the model's outputs, while positive classes and negative samples are used to improve attack efficiency. Comprehensive experiments on six datasets demonstrate the superior performance of our method over six state-of-the-art data-free closed-box competitors in both label-only and probability-only attack scenarios. Notably, our method achieves the highest attack success rate on the online Microsoft Azure model under an extremely low query budget. The proposed approach not only trains more stably but also significantly reduces the query count needed for balanced data generation. Furthermore, our method maintains the best performance under existing defense models and a limited query budget.
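The abstract does not give the exact formulation of the entropy-based distance, but the general idea of scoring closed-box model outputs by their Shannon entropy, so that queries can be spent on inputs near the decision boundary, can be sketched as follows. The function name, the candidate probabilities, and the highest-entropy selection rule are illustrative assumptions, not the authors' method.

```python
import math

def entropy_distance(probs, eps=1e-12):
    """Shannon entropy of a (possibly unnormalized) output distribution.

    Higher entropy means the model is less certain, i.e. the input is
    closer to the decision boundary and more informative to query.
    """
    total = sum(probs)
    p = [max(x / total, eps) for x in probs]  # normalize, avoid log(0)
    return -sum(x * math.log(x) for x in p)

# Rank candidate inputs by output entropy and query the least certain one.
candidates = [
    [0.98, 0.01, 0.01],  # confident prediction: low entropy
    [0.34, 0.33, 0.33],  # near-uniform: high entropy, near the boundary
]
scores = [entropy_distance(p) for p in candidates]
best = max(range(len(scores)), key=scores.__getitem__)
```

Under these assumptions, the second candidate wins, since a near-uniform distribution carries maximal entropy; a query-budgeted attack would prioritize such inputs.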
Pages: 3131-3145 (15 pages)
Related Papers (50 in total)
  • [41] Model Conversion via Differentially Private Data-Free Distillation
    Liu, Bochao
    Wang, Pengju
    Li, Shikun
    Zeng, Dan
    Ge, Shiming
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 2187 - 2195
  • [42] Data-Free Learning of Student Networks
    Chen, Hanting
    Wang, Yunhe
    Xu, Chang
    Yang, Zhaohui
    Liu, Chuanjian
    Shi, Boxin
    Xu, Chunjing
    Xu, Chao
    Tian, Qi
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 3513 - 3521
  • [43] Data-free inference of the joint distribution of uncertain model parameters
    Berry, Robert D.
    Najm, Habib N.
    Debusschere, Bert J.
    Marzouk, Youssef M.
    Adalsteinsson, Helgi
    JOURNAL OF COMPUTATIONAL PHYSICS, 2012, 231 (05) : 2180 - 2198
  • [44] Latent Code Augmentation Based on Stable Diffusion for Data-Free Substitute Attacks
    Shao, Mingwen
    Meng, Lingzhuang
    Qiao, Yuanjian
    Zhang, Lixu
    Zuo, Wangmeng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025,
  • [45] Network-light not data-free
    French, Katherine M.
    Riley, Steven
    Garnett, Geoff P.
    SEXUALLY TRANSMITTED DISEASES, 2007, 34 (01) : 57 - 58
  • [46] Discovering and Overcoming Limitations of Noise-engineered Data-free Knowledge Distillation
    Raikwar, Piyush
    Mishra, Deepak
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [47] Label-Noise Resistant Logistic Regression for Functional Data Classification with an Application to Alzheimer's Disease Study
    Lee, Seokho
    Shin, Hyejin
    Lee, Sang Han
    BIOMETRICS, 2016, 72 (04) : 1325 - 1335
  • [48] Data-free knowledge distillation via generator-free data generation for Non-IID federated learning
    Zhao, Siran
    Liao, Tianchi
    Fu, Lele
    Chen, Chuan
    Bian, Jing
    Zheng, Zibin
    NEURAL NETWORKS, 2024, 179
  • [49] Effective and efficient conditional contrast for data-free knowledge distillation with low memory
    Jiang, Chenyang
    Li, Zhendong
    Yang, Jun
    Wu, Yiqiang
    Li, Shuai
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (04):
  • [50] Data-Free Knowledge Distillation for Privacy-Preserving Efficient UAV Networks
    Yu, Guyang
    2022 6TH INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION SCIENCES (ICRAS 2022), 2022, : 52 - 56