Inspector for Face Forgery Detection: Defending Against Adversarial Attacks From Coarse to Fine

Times Cited: 0
Authors
Xia, Ruiyang [1 ]
Zhou, Dawei [2 ]
Liu, Decheng [3 ]
Li, Jie [1 ]
Yuan, Lin [4 ]
Wang, Nannan [2 ]
Gao, Xinbo [4 ,5 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Telecommun Engn, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[3] Xidian Univ, Sch Cyber Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[4] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
[5] Xidian Univ, Sch Elect Engn, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Forgery; Detectors; Perturbation methods; Faces; Accuracy; Training; Iterative methods; Face forgery; adversarial defense; forgery detection;
DOI
10.1109/TIP.2024.3434388
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The emergence of face forgery has raised global concerns about social security, spurring research on automatic forgery detection. Although current forgery detectors demonstrate promising performance in determining authenticity, their susceptibility to adversarial perturbations remains insufficiently addressed. Because the nuanced discrepancies between real and fake instances are essential to forgery detection, previous defensive paradigms based on input processing and adversarial training tend to disrupt these discrepancies; the learning difficulty for the detectors is thus increased, and natural accuracy drops dramatically. To achieve adversarial defense without changing either the instances or the detectors, a novel defensive paradigm called Inspector is designed specifically for face forgery detectors. Inspector defends against adversarial attacks in a coarse-to-fine manner. In the coarse defense stage, adversarial instances with evident perturbations are directly identified and filtered out. In the fine defense stage, threats from adversarial instances with imperceptible perturbations are further detected and eliminated. Experimental results across different types of face forgery datasets and detectors demonstrate that our method achieves state-of-the-art performance against various types of adversarial perturbations while better preserving natural accuracy. Code is available at https://github.com/xarryon/Inspector.
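The abstract describes a two-stage screening pipeline placed in front of an unmodified forgery detector: a coarse stage that rejects inputs with evident perturbations, and a fine stage that catches imperceptible ones. The Python sketch below illustrates only this general coarse-to-fine control flow under assumed placeholder scoring functions (coarse_score, fine_score) and thresholds (tau_coarse, tau_fine); it is not the authors' released implementation, which can be found at the repository linked above.

# Hypothetical illustration of a coarse-to-fine defense wrapper.
# coarse_score, fine_score, tau_coarse and tau_fine are assumed placeholders,
# not part of the Inspector code base.
import numpy as np

def coarse_score(x: np.ndarray) -> float:
    # Crude stand-in for spotting evident perturbations:
    # energy lying outside one standard deviation of the pixel distribution.
    lo, hi = x.mean() - x.std(), x.mean() + x.std()
    return float(np.abs(x - np.clip(x, lo, hi)).mean())

def fine_score(x: np.ndarray) -> float:
    # Crude stand-in for spotting imperceptible perturbations:
    # variance of horizontal pixel differences (high-frequency content).
    return float(np.var(np.diff(x, axis=-1)))

def inspect(x: np.ndarray, detector, tau_coarse: float = 0.5, tau_fine: float = 0.2):
    """Screen an input before it reaches an unchanged forgery detector."""
    if coarse_score(x) > tau_coarse:      # coarse stage: filter evident attacks
        return "rejected_coarse", None
    if fine_score(x) > tau_fine:          # fine stage: filter subtle attacks
        return "rejected_fine", None
    return "accepted", detector(x)        # clean inputs reach the detector as-is

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.random((3, 64, 64))                          # stand-in for a face crop
    toy_detector = lambda img: "fake" if img.mean() > 0.5 else "real"
    print(inspect(face, toy_detector))

In a real system, the two scoring functions would be learned detectors of perturbation artifacts rather than the hand-crafted statistics used here; the point of the sketch is only that both stages operate outside the forgery detector, leaving its weights and its inputs untouched.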
Pages: 4432-4443
Number of Pages: 12