Inspector for Face Forgery Detection: Defending Against Adversarial Attacks From Coarse to Fine

Cited by: 0
Authors
Xia, Ruiyang [1 ]
Zhou, Dawei [2 ]
Liu, Decheng [3 ]
Li, Jie [1 ]
Yuan, Lin [4 ]
Wang, Nannan [2 ]
Gao, Xinbo [4 ,5 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Telecommun Engn, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[3] Xidian Univ, Sch Cyber Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[4] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
[5] Xidian Univ, Sch Elect Engn, Shaanxi 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Forgery; Detectors; Perturbation methods; Faces; Accuracy; Training; Iterative methods; Face forgery; adversarial defense; forgery detection;
DOI
10.1109/TIP.2024.3434388
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
The emergence of face forgery has raised global concerns about social security, thereby spurring research on automatic forgery detection. Although current forgery detectors have demonstrated promising performance in determining authenticity, their susceptibility to adversarial perturbations remains insufficiently addressed. Given that the nuanced discrepancies between real and fake instances are essential to forgery detection, previous defensive paradigms based on input processing and adversarial training tend to disrupt these discrepancies. For the detectors, the learning difficulty is thus increased, and the natural accuracy is dramatically decreased. To achieve adversarial defense without modifying either the instances or the detectors, a novel defensive paradigm called Inspector is designed specifically for face forgery detectors. Specifically, Inspector defends against adversarial attacks in a coarse-to-fine manner. In the coarse defense stage, adversarial instances with evident perturbations are directly identified and filtered out. Subsequently, in the fine defense stage, the threats from adversarial instances with imperceptible perturbations are further detected and eliminated. Experimental results across different types of face forgery datasets and detectors demonstrate that our method achieves state-of-the-art performance against various types of adversarial perturbations while better preserving natural accuracy. Code is available at https://github.com/xarryon/Inspector.
Pages: 4432-4443
Page count: 12
Related Papers
50 records
  • [1] Exploring Frequency Adversarial Attacks for Face Forgery Detection
    Jia, Shuai
    Ma, Chao
    Yao, Taiping
    Yin, Bangjie
    Ding, Shouhong
    Yang, Xiaokang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 4093 - 4102
  • [2] VeriFace: Defending against Adversarial Attacks in Face Verification Systems
    Sayed, Awny
    Kinlany, Sohair
    Zaki, Alaa
    Mahfouz, Ahmed
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 76 (03): : 3151 - 3166
  • [3] Defending network intrusion detection systems against adversarial evasion attacks
    Pawlicki, Marek
    Choras, Michal
    Kozik, Rafal
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2020, 110 : 148 - 154
  • [4] Improving Robustness of Facial Landmark Detection by Defending against Adversarial Attacks
    Zhu, Congcong
    Li, Xiaoqiang
    Li, Jide
    Dai, Songmin
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 11731 - 11740
  • [5] Defending malware detection models against evasion based adversarial attacks
    Rathore, Hemant
    Sasan, Animesh
    Sahay, Sanjay K.
    Sewak, Mohit
    PATTERN RECOGNITION LETTERS, 2022, 164 : 119 - 125
  • [6] Defending against adversarial attacks by randomized diversification
    Taran, Olga
    Rezaeifar, Shideh
    Holotyak, Taras
    Voloshynovskiy, Slava
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 11218 - 11225
  • [7] Defending Distributed Systems Against Adversarial Attacks
    Su L.
    Performance Evaluation Review, 2020, 47 (03): : 24 - 27
  • [8] ADVERSARIAL ATTACKS ON COARSE-TO-FINE CLASSIFIERS
    Alkhouri, Ismail R.
    Atia, George K.
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2855 - 2859
  • [9] Scalable Universal Adversarial Watermark Defending Against Facial Forgery
    Qiao, Tong
    Zhao, Bin
    Shi, Ran
    Han, Meng
    Hassaballah, Mahmoud
    Retraint, Florent
    Luo, Xiangyang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 8998 - 9011
  • [10] DEFENDING AGAINST ADVERSARIAL ATTACKS ON MEDICAL IMAGING AI SYSTEM, CLASSIFICATION OR DETECTION?
    Li, Xin
    Pan, Deng
    Zhu, Dongxiao
    2021 IEEE 18TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI), 2021, : 1677 - 1681