A Stable and Efficient Data-Free Model Attack With Label-Noise Data Generation

Cited by: 0
Authors
Zhang, Zhixuan [1 ]
Zheng, Xingjian [2 ]
Qing, Linbo [1 ]
Liu, Qi [3 ]
Wang, Pingyu [4 ]
Liu, Yu [4 ]
Liao, Jiyang [4 ]
Affiliations
[1] Sichuan Univ, Sch Cyber Sci & Engn, Chengdu 610207, Peoples R China
[2] Frost Drill Intellectual Software Pte Ltd, Int Plaza, Singapore 079903, Singapore
[3] South China Univ Technol, Sch Future Technol, Guangzhou 511442, Peoples R China
[4] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Closed box; Generators; Data models; Data collection; Adaptation models; Diversity methods; Cloning; Glass box; Computational modeling; Deep neural network; data-free; adversarial examples; closed-box attack;
DOI
10.1109/TIFS.2025.3550066
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
The objective of a data-free closed-box adversarial attack is to attack a victim model without using its internal information, its training dataset, or any semantically similar substitute dataset. To cope with these stricter attack scenarios, recent studies have employed generative networks to synthesize data for training substitute models. Nevertheless, these approaches suffer from both unstable training and low attack efficiency. In this paper, we propose a novel query-efficient data-free closed-box adversarial attack method. To mitigate unstable training, for the first time, we directly manipulate the intermediate-layer features of a generator without relying on any substitute model. Specifically, a label-noise-based generation module is created to enhance intra-class patterns by incorporating partial historical information during the learning process. Additionally, we present a feature-disturbed diversity generation method to enlarge the inter-class distance. Meanwhile, we propose an adaptive intra-class attack strategy to strengthen attack capability within a limited query budget. In this strategy, an entropy-based distance is used to characterize the relative information in model outputs, while positive classes and negative samples are exploited to improve attack efficiency. Comprehensive experiments on six datasets demonstrate the superior performance of our method over six state-of-the-art data-free closed-box competitors in both label-only and probability-only attack scenarios. Notably, our method achieves the highest attack success rate on the online Microsoft Azure model under an extremely low query budget. In addition, the proposed approach not only trains more stably but also requires significantly fewer queries to generate balanced data. Furthermore, our method maintains the best performance against existing defense models under a limited query budget.
Pages: 3131-3145
Number of Pages: 15
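Illustrative sketch (not from the paper): the abstract above mentions a label-noise-based generation module that retains partial historical information and an entropy-based distance computed from the victim model's outputs, but this record contains no code or formulas. The short Python sketch below is only a rough illustration under assumed, simplified formulations; the names (LabelNoiseCodes, entropy_score), the exponential-moving-average update, and the Shannon-entropy score are hypothetical choices made here for clarity and are not taken from the authors' method.

# Illustrative sketch only (not the authors' code): two ingredients suggested by
# the abstract, with hypothetical names and simplified formulas.
#
# 1) A label-noise input for a class-conditional generator: a per-class embedding
#    is mixed with fresh Gaussian noise, and an exponential moving average (EMA)
#    keeps partial historical information about each class across training steps.
# 2) A Shannon-entropy score over the victim model's output probabilities, which
#    could be used to rank synthesized samples when the query budget is tight.

import numpy as np


class LabelNoiseCodes:
    """Per-class latent codes = EMA'd class embedding + fresh Gaussian noise."""

    def __init__(self, num_classes: int, dim: int, noise_std: float = 0.5,
                 ema_momentum: float = 0.9, seed: int = 0):
        self.rng = np.random.default_rng(seed)
        self.embeddings = self.rng.normal(size=(num_classes, dim))  # class codes
        self.noise_std = noise_std
        self.m = ema_momentum

    def sample(self, class_id: int, batch: int) -> np.ndarray:
        """Draw generator inputs for one class: class embedding plus label noise."""
        e = self.embeddings[class_id]
        noise = self.rng.normal(scale=self.noise_std, size=(batch, e.shape[0]))
        return e[None, :] + noise

    def update(self, class_id: int, new_embedding: np.ndarray) -> None:
        """Blend in new information while keeping partial history (EMA update)."""
        self.embeddings[class_id] = (
            self.m * self.embeddings[class_id] + (1.0 - self.m) * new_embedding
        )


def entropy_score(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Shannon entropy of each output distribution (rows of `probs` sum to 1).

    Low entropy means the victim is confident; high entropy means the sample sits
    near a decision boundary and may be a better candidate under a tight budget.
    """
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=1)


if __name__ == "__main__":
    codes = LabelNoiseCodes(num_classes=10, dim=64)
    z = codes.sample(class_id=3, batch=4)            # inputs for a generator
    fake_victim_probs = np.array([[0.95, 0.03, 0.02],
                                  [0.40, 0.35, 0.25]])
    print(z.shape, entropy_score(fake_victim_probs))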