Inspector for Face Forgery Detection: Defending Against Adversarial Attacks From Coarse to Fine

Cited by: 0
|
Authors
Xia, Ruiyang [1 ]
Zhou, Dawei [2 ]
Liu, Decheng [3 ]
Li, Jie [1 ]
Yuan, Lin [4 ]
Wang, Nannan [2 ]
Gao, Xinbo [4 ,5 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Telecommun Engn, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[3] Xidian Univ, Sch Cyber Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[4] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
[5] Xidian Univ, Sch Elect Engn, Shaanxi 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Forgery; Detectors; Perturbation methods; Faces; Accuracy; Training; Iterative methods; Face forgery; adversarial defense; forgery detection;
DOI
10.1109/TIP.2024.3434388
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The emergence of face forgery has raised global concerns about social security, thereby facilitating research on automatic forgery detection. Although current forgery detectors have demonstrated promising performance in determining authenticity, their susceptibility to adversarial perturbations remains insufficiently addressed. Given that the nuanced discrepancies between real and fake instances are essential in forgery detection, previous defensive paradigms based on input processing and adversarial training tend to disrupt these discrepancies. For the detectors, the learning difficulty is thus increased, and the natural accuracy is dramatically decreased. To achieve adversarial defense without changing either the instances or the detectors, a novel defensive paradigm called Inspector is designed specifically for face forgery detectors. Specifically, Inspector defends against adversarial attacks in a coarse-to-fine manner. In the coarse defense stage, adversarial instances with evident perturbations are directly identified and filtered out. Subsequently, in the fine defense stage, the threats from adversarial instances with imperceptible perturbations are further detected and eliminated. Experimental results across different types of face forgery datasets and detectors demonstrate that our method achieves state-of-the-art performance against various types of adversarial perturbations while better preserving natural accuracy. Code is available at https://github.com/xarryon/Inspector.
Pages: 4432 - 4443
Page count: 12
Related Papers
50 items total
  • [41] Defending Black Box Facial Recognition Classifiers Against Adversarial Attacks
    Theagarajan, Rajkumar
    Bhanu, Bir
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 3537 - 3547
  • [42] HeteroGuard: Defending Heterogeneous Graph Neural Networks against Adversarial Attacks
    Kumarasinghe, Udesh
    Nabeel, Mohamed
    De Zoysa, Kasun
    Gunawardana, Kasun
    Elvitigala, Charitha
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW, 2022, : 698 - 705
  • [43] PatchZero: Defending against Adversarial Patch Attacks by Detecting and Zeroing the Patch
    Xu, Ke
    Xiao, Yao
    Zheng, Zhaoheng
    Cai, Kaijie
    Nevatia, Ram
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 4621 - 4630
  • [44] Adversarial Attacks Against Face Recognition: A Comprehensive Study
    Vakhshiteh, Fatemeh
    Nickabadi, Ahmad
    Ramachandra, Raghavendra
    IEEE ACCESS, 2021, 9 : 92735 - 92756
  • [45] Universal Adversarial Spoofing Attacks against Face Recognition
    Amada, Takuma
    Liew, Seng Pei
    Kakizaki, Kazuya
    Araki, Toshinori
    2021 INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2021), 2021,
  • [46] PatchBreaker: defending against adversarial attacks by cutting-inpainting patches and joint adversarial training
    Huang, Shiyu
    Ye, Feng
    Huang, Zuchao
    Li, Wei
    Huang, Tianqiang
    Huang, Liqing
    APPLIED INTELLIGENCE, 2024, 54 (21) : 10819 - 10832
  • [47] Analysis of Adversarial Attacks against CNN-based Image Forgery Detectors
    Gragnaniello, Diego
    Marra, Francesco
    Poggi, Giovanni
    Verdoliva, Luisa
    2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2018, : 967 - 971
  • [48] Research on the Face Forgery Detection Model Based on Adversarial Training and Disentanglement
    Wang, Yidi
    Fu, Hui
    Wu, Tongkai
    APPLIED SCIENCES-BASEL, 2024, 14 (11):
  • [49] Defending against sparse adversarial attacks using impulsive noise reduction filters
    Radlak, Krystian
    Szczepankiewicz, Michal
    Smolka, Bogdan
    REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2021, 2021, 11736
  • [50] DeepIris: An ensemble approach to defending Iris recognition classifiers against Adversarial Attacks
    Tamizhiniyan, S. R.
    Ojha, Aman
    Meenakshi, K.
    Maragatham, G.
    2021 INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATION AND INFORMATICS (ICCCI), 2021,