Deepfake video detection: YOLO-Face convolution recurrent approach

Cited by: 14
|
Authors
Ismail, Aya [1 ]
Elpeltagy, Marwa [2 ]
Zaki, Mervat [3 ]
ElDahshan, Kamal A. [4 ]
Affiliations
[1] Tanta Univ, Math Dept, Tanta, Al Gharbia, Egypt
[2] Al Azhar Univ, Syst & Comp Dept, Nasr City, Egypt
[3] Al Azhar Univ, Girls Branch, Math Dept, Nasr City, Egypt
[4] Al Azhar Univ, Math Dept, Nasr City, Egypt
Keywords
Deepfake; YOLO-Face; Convolution recurrent neural networks; Deepfake detection; Video authenticity; CLASSIFICATION; NETWORKS;
DOI
10.7717/peerj-cs.730
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, deepfake face-swapping techniques have been spreading, allowing easy creation of hyper-realistic fake videos. Verifying the authenticity of a video has become increasingly critical because of the potential negative impact of such forgeries. Here, a new scheme, You Only Look Once Convolution Recurrent Neural Networks (YOLO-CRNNs), is introduced to detect deepfake videos. The YOLO-Face detector extracts the face region from each frame of the video, and a fine-tuned EfficientNet-B5 extracts the spatial features of these faces. These features are fed as a batch of input sequences into a Bidirectional Long Short-Term Memory (Bi-LSTM) network, which extracts the temporal features. The new scheme is then evaluated on a new large-scale dataset, CelebDF-FaceForensics++ (c23), built by combining two popular datasets, FaceForensics++ (c23) and Celeb-DF. It achieves an Area Under the Receiver Operating Characteristic Curve (AUROC) score of 89.35%, 89.38% accuracy, 83.15% recall, 85.55% precision, and an 84.33% F1-measure with the pasting-data approach. The experimental analysis confirms the superiority of the proposed method over state-of-the-art methods.
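The abstract describes a three-stage pipeline: face detection per frame (YOLO-Face), per-frame spatial features (EfficientNet-B5), then a Bi-LSTM over the frame sequence. The recurrent half can be sketched as follows. This is a minimal NumPy illustration under assumed dimensions, not the authors' implementation: the `LSTMCell` and `DeepfakeSequenceClassifier` names, the hidden sizes, and the untrained random weights are all hypothetical, and a random feature matrix stands in for EfficientNet-B5 features of YOLO-Face crops.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (stand-in for one direction of the Bi-LSTM)."""
    def __init__(self, in_dim, hid_dim, rng):
        # One stacked weight matrix for the four gates: input, forget, cell, output.
        self.W = rng.normal(0.0, 0.1, (4 * hid_dim, in_dim + hid_dim))
        self.b = np.zeros(4 * hid_dim)
        self.hid_dim = hid_dim

    def run(self, xs):
        """Process a (T, in_dim) sequence; return the final hidden state."""
        h = np.zeros(self.hid_dim)
        c = np.zeros(self.hid_dim)
        for x in xs:
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return h

class DeepfakeSequenceClassifier:
    """Sketch of the recurrent half of the YOLO-CRNN scheme."""
    def __init__(self, feat_dim, hid_dim, rng):
        # Bidirectional: one cell reads the sequence forward, one backward.
        self.fwd = LSTMCell(feat_dim, hid_dim, rng)
        self.bwd = LSTMCell(feat_dim, hid_dim, rng)
        self.w_out = rng.normal(0.0, 0.1, 2 * hid_dim)

    def predict(self, features):
        # Concatenate the two final hidden states, then a sigmoid readout.
        h = np.concatenate([self.fwd.run(features),
                            self.bwd.run(features[::-1])])
        return sigmoid(self.w_out @ h)  # probability the video is fake

# Toy usage: 8 frames of 16-dim spatial features (in the paper these would
# come from a fine-tuned EfficientNet-B5 applied to YOLO-Face crops).
frames = rng.normal(size=(8, 16))
clf = DeepfakeSequenceClassifier(feat_dim=16, hid_dim=32, rng=rng)
p_fake = clf.predict(frames)
print(0.0 < p_fake < 1.0)  # untrained weights, but still a valid probability
```

The design point worth noting is the sequence dimension: the CNN judges each face crop in isolation, while the Bi-LSTM sees the ordered stack of per-frame features and can pick up temporal inconsistencies between frames in both directions.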
Pages: 19
Related Papers
50 records in total
  • [11] YOLO-FD: YOLO for Face Detection
    Silva, Luan P. E.
    Batista, Julio C.
    Bellon, Olga R. P.
    Silva, Luciano
    PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS (CIARP 2019), 2019, 11896 : 209 - 218
  • [12] A Survey on Face Forgery Detection of Deepfake
    Zhang, Ying
    Gao, Feng
    Zhou, Zichen
    Guo, Hong
    THIRTEENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2021), 2021, 11878
  • [13] Mining Temporal Inconsistency with 3D Face Model for Deepfake Video Detection
    Cheng, Ziyi
    Chen, Chen
    Zhou, Yichao
    Hu, Xiyuan
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VII, 2024, 14431 : 231 - 243
  • [14] Improving Video Vision Transformer for Deepfake Video Detection Using Facial Landmark, Depthwise Separable Convolution and Self Attention
    Ramadhani, Kurniawan Nur
    Munir, Rinaldi
    Utama, Nugraha Priya
    IEEE ACCESS, 2024, 12 : 8932 - 8939
  • [15] Adversarially Robust Deepfake Video Detection
    Devasthale, Aditya
    Sural, Shamik
    2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2022, : 396 - 403
  • [16] ConTrans-Detect: A Multi-Scale Convolution-Transformer Network for DeepFake Video Detection
    Sun, Weirong
    Ma, Yujun
    Zhang, Hong
    Wang, Ruili
    2023 29TH INTERNATIONAL CONFERENCE ON MECHATRONICS AND MACHINE VISION IN PRACTICE, M2VIP 2023, 2023,
  • [17] Deepfake video detection: challenges and opportunities
    Kaur, Achhardeep
    Hoshyar, Azadeh Noori
    Saikrishna, Vidya
    Firmin, Selena
    Xia, Feng
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57 (06)
  • [18] Deepfake Video Detection: A Novel Approach via NLP-Based Classification
    Bunluesakdikul, Patchraphon
    Mahanan, Waranya
    Sungunnasil, Prompong
    Sangamuang, Sumalee
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE AND APPLICATIONS, 2025,
  • [19] Deepfake video detection using convolutional neural network based hybrid approach
    Kocak, Aynur
    Alkan, Mustafa
    Arikan, Muhammed Suleyman
    JOURNAL OF POLYTECHNIC-POLITEKNIK DERGISI, 2024,
  • [20] Face detection in a video sequence - a temporal approach
    Mikolajczyk, K
    Choudhury, R
    Schmid, C
    2001 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOL 2, PROCEEDINGS, 2001, : 96 - 101