Black-box attacks on face recognition via affine-invariant training

Cited: 0
Authors
Sun, Bowen [1 ]
Su, Hang [2 ]
Zheng, Shibao [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Elect Engn, Shanghai 200240, Peoples R China
[2] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2024, Vol. 36, Issue 15
Funding
National Natural Science Foundation of China;
Keywords
Face recognition; Black-box attack; Affine-invariant training; AI-block; EIGENFACES;
DOI
10.1007/s00521-024-09543-y
CLC Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural network (DNN)-based face recognition has shown impressive performance in verification; however, recent studies reveal that deep face recognition algorithms are vulnerable to adversarial attacks. Specifically, these attacks can be executed in a black-box manner with limited knowledge of the target network. While this setting is practically significant because model details are typically hidden in real deployments, it presents challenges such as high query budgets and low success rates. To improve attack performance, we build the whole framework on affine-invariant training, which serves as a substitute for inefficient sampling. We also propose AI-block, a novel module that enhances transferability by introducing generalized priors. Generalization is achieved by creating priors whose features remain stable when sampled over affine transformations. These priors guide attacks, improving efficiency and performance in black-box scenarios. The conversion via AI-block enables the gradients of a surrogate model to serve as effective priors for estimating the gradients of a black-box model. Our method leverages this enhanced transferability to boost both transfer-based and query-based attacks. Extensive experiments on 5 commonly used databases and 7 widely employed face recognition models demonstrate an improvement of up to 11.9 percentage points in success rate while requiring a comparable or even reduced number of queries.
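The core idea in the abstract, averaging surrogate gradients over sampled affine transformations so the resulting prior is stable (and hence more transferable) across models, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual AI-block: the transforms (small shifts plus intensity scaling), the toy quadratic surrogate loss, and all function names and parameters are assumptions chosen for a self-contained example.

```python
import numpy as np

def random_affine(img, rng, max_shift=2, max_scale=0.1):
    """Apply a small random shift and intensity scaling to a 2-D image.
    A simple stand-in for sampling from a family of affine transforms."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return shifted * (1.0 + rng.uniform(-max_scale, max_scale))

def affine_invariant_grad(grad_fn, img, n_samples=16, seed=0):
    """Average surrogate gradients over affine-transformed copies of the
    input; the mean acts as a smoothed, more transferable gradient prior."""
    rng = np.random.default_rng(seed)
    return np.mean([grad_fn(random_affine(img, rng))
                    for _ in range(n_samples)], axis=0)

def fgsm_step(img, grad, eps=0.03):
    """One signed-gradient step guided by the averaged prior."""
    return np.clip(img + eps * np.sign(grad), 0.0, 1.0)

# Toy surrogate loss: squared distance to a fixed template "embedding",
# whose gradient w.r.t. the image is 2 * (img - template).
rng = np.random.default_rng(1)
template = rng.uniform(0.3, 0.7, size=(8, 8))
img = rng.uniform(0.3, 0.7, size=(8, 8))

prior = affine_invariant_grad(lambda x: 2.0 * (x - template), img)
adv = fgsm_step(img, prior)
```

In a query-based black-box attack, such a prior would bias the search direction for gradient estimation rather than being used directly, reducing the number of queries needed.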
Pages: 8549-8564
Page count: 16
Related Papers
50 records in total
  • [11] Attacks on feature-based affine-invariant watermarking methods
    Chotikakamthorn, N.
    Pantuwong, N.
    Fifth International Conference on Computer and Information Technology - Proceedings, 2005, : 706 - 710
  • [12] Pictorial recognition using affine-invariant spectral signatures
    BenArie, J.
    Wang, Z. Q.
    1997 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS, 1997, : 34 - 39
  • [13] Affine-invariant shape recognition using Grassmann manifold
    Liu, Yun-Peng
    Li, Guang-Wei
    Shi, Ze-Lin
    Zidonghua Xuebao/Acta Automatica Sinica, 2012, 38 (02): : 248 - 258
  • [14] Simple Black-box Adversarial Attacks
    Guo, Chuan
    Gardner, Jacob R.
    You, Yurong
    Wilson, Andrew Gordon
    Weinberger, Kilian Q.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [15] Improving the robustness of adversarial attacks using an affine-invariant gradient estimator
    Xiang, Wenzhao
    Su, Hang
    Liu, Chang
    Guo, Yandong
    Zheng, Shibao
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 229
  • [16] Enhancing Transferability of Black-box Adversarial Attacks via Lifelong Learning for Speech Emotion Recognition Models
    Ren, Zhao
    Han, Jing
    Cummins, Nicholas
    Schuller, Bjoern W.
    INTERSPEECH 2020, 2020, : 496 - 500
  • [17] Examining of Shallow Autoencoder on Black-box Attack against Face Recognition
    Vo Ngoc Khoi Nguyen
    Terada, Takamichi
    Nishigaki, Masakatsu
    Ohki, Tetsushi
    2021 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2021, : 1775 - 1780
  • [18] Improving query efficiency of black-box attacks via the preference of models
    Yang, Xiangyuan
    Lin, Jie
    Zhang, Hanlin
    Zhao, Peng
    INFORMATION SCIENCES, 2024, 678
  • [19] Mitigating Black-Box Adversarial Attacks via Output Noise Perturbation
    Aithal, Manjushree B.
    Li, Xiaohua
    IEEE ACCESS, 2022, 10 : 12395 - 12411
  • [20] Black-box attacks on dynamic graphs via adversarial topology perturbations
    Tao, Haicheng
    Cao, Jie
    Chen, Lei
    Sun, Hongliang
    Shi, Yong
    Zhu, Xingquan
    NEURAL NETWORKS, 2024, 171 : 308 - 319