Inspector for Face Forgery Detection: Defending Against Adversarial Attacks From Coarse to Fine

Cited by: 0
Authors
Xia, Ruiyang [1 ]
Zhou, Dawei [2 ]
Liu, Decheng [3 ]
Li, Jie [1 ]
Yuan, Lin [4 ]
Wang, Nannan [2 ]
Gao, Xinbo [4 ,5 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Telecommun Engn, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[3] Xidian Univ, Sch Cyber Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[4] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
[5] Xidian Univ, Sch Elect Engn, Shaanxi 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Forgery; Detectors; Perturbation methods; Faces; Accuracy; Training; Iterative methods; Face forgery; adversarial defense; forgery detection;
DOI
10.1109/TIP.2024.3434388
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The emergence of face forgery has raised global concerns about social security, thereby motivating research on automatic forgery detection. Although current forgery detectors have demonstrated promising performance in determining authenticity, their susceptibility to adversarial perturbations remains insufficiently addressed. Given that the nuanced discrepancies between real and fake instances are essential to forgery detection, previous defensive paradigms based on input processing and adversarial training tend to disrupt these discrepancies. For the detectors, the learning difficulty is thus increased, and the natural accuracy is dramatically decreased. To achieve adversarial defense without altering either the instances or the detectors, a novel defensive paradigm called Inspector is designed specifically for face forgery detectors. Specifically, Inspector defends against adversarial attacks in a coarse-to-fine manner. In the coarse defense stage, adversarial instances with evident perturbations are directly identified and filtered out. Subsequently, in the fine defense stage, the threats from adversarial instances with imperceptible perturbations are further detected and eliminated. Experimental results across different types of face forgery datasets and detectors demonstrate that our method achieves state-of-the-art performance against various types of adversarial perturbations while better preserving natural accuracy. Code is available at https://github.com/xarryon/Inspector.
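The abstract describes a two-stage, coarse-to-fine filtering pipeline placed in front of an unmodified detector. A minimal sketch of that control flow is below; the scoring functions, thresholds, and all names here are illustrative assumptions for exposition, not the paper's actual Inspector implementation (which would use learned components):

```python
# Hypothetical sketch of a coarse-to-fine defensive pipeline as outlined in
# the abstract. coarse_score/fine_score stand in for the paper's actual
# modules; thresholds are arbitrary illustrative values.

def coarse_score(instance):
    # Stand-in for a cheap statistic that grows with evident perturbations
    # (e.g., high-frequency energy); here: mean absolute deviation from 0.5.
    return sum(abs(x - 0.5) for x in instance) / len(instance)

def fine_score(instance):
    # Stand-in for a detector of imperceptible perturbations;
    # here: variance of first-order differences between neighbors.
    diffs = [instance[i + 1] - instance[i] for i in range(len(instance) - 1)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

def inspect(instance, coarse_thresh=0.4, fine_thresh=0.05):
    """Return 'reject' for suspected adversarial inputs, else 'accept'.

    Accepted instances would then be passed to the unmodified detector,
    matching the abstract's claim of defending without changing either
    the instances or the detector itself.
    """
    if coarse_score(instance) > coarse_thresh:   # coarse stage: evident perturbations
        return "reject"
    if fine_score(instance) > fine_thresh:       # fine stage: subtle perturbations
        return "reject"
    return "accept"

clean = [0.5, 0.52, 0.48, 0.51]   # mild, natural-looking signal
noisy = [0.0, 1.0, 0.0, 1.0]      # evidently perturbed signal
print(inspect(clean), inspect(noisy))  # → accept reject
```

The key design point the sketch illustrates is the ordering: the cheap coarse check rejects obvious attacks before the more expensive fine check runs, and clean inputs reach the detector untouched.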
Pages: 4432-4443
Page count: 12