A Comparative Analysis of Deepfake Detection Methods Using Overlapping Multiple Dynamic Images

Cited by: 0
Authors
Purevsuren, Enkhtaivan [1 ]
Sato, Junya [2 ]
Akashi, Takuya [3 ]
Affiliations
[1] Iwate Univ, Grad Sch Engn, Dept Design & Media Technol, 4-3-5 Ueda, Morioka, Iwate 0208551, Japan
[2] Gifu Univ, Fac Engn, 1-1 Yanagido, Gifushi, Gifu 5011193, Japan
[3] Okayama Univ, Fac Engn, Dept Informat Elect Mathemat Data Sci, 3-1-1 Tsushima Naka, Kita-ku, Okayama 7008530, Japan
Keywords
fake face; deepfake; overlapping multiple dynamic images
DOI
10.1002/tee.24258
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Deepfake technology, which uses artificial intelligence to create realistic fake images, audio, and videos, has raised serious concerns due to its potential for misuse and manipulation. Its emergence poses a significant threat to the integrity of digital content, necessitating robust detection mechanisms. This paper proposes a novel method for deepfake detection that combines Overlapping Multiple Dynamic Images (OMDI) and Inversed Overlapping Multiple Dynamic Images (I-OMDI). Both representations capture temporal inconsistencies and subtle visual artifacts in fake videos by effectively utilizing spatial-temporal information. Our approach employs EfficientNetB7 as the backbone for feature extraction, enabling the model to distinguish between real and fake videos with high accuracy. By combining OMDI and I-OMDI with a weighted average strategy, we amplify the strengths of each method. Specifically, we assign equal weights of 0.5 to OMDI and I-OMDI based on their individual contributions to performance metrics. This balance yields substantial performance improvements across multiple datasets. When evaluated on the Celeb-DF v2 and DFDC datasets, our proposed model achieves state-of-the-art results, with AUC scores of 0.9952 on Celeb-DF v2 and 0.9947 on DFDC. These results underscore the robustness of the combined OMDI and I-OMDI methods in identifying deepfake videos. Furthermore, our model outperforms existing methods, including those by Tran et al. and Heo et al., demonstrating its effectiveness in practical deepfake detection applications. (c) 2025 Institute of Electrical Engineers of Japan and Wiley Periodicals LLC.
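To illustrate the weighted-average fusion described in the abstract, below is a minimal sketch (not taken from the paper) of how per-video fake-probability scores from an OMDI branch and an I-OMDI branch could be combined with equal weights of 0.5. The function name, variable names, example values, and the 0.5 decision threshold are assumptions for illustration only.

    import numpy as np

    def fuse_scores(omdi_scores, iomdi_scores, w_omdi=0.5, w_iomdi=0.5):
        # Weighted average of per-video fake probabilities from the two branches.
        # With w_omdi = w_iomdi = 0.5 this matches the equal-weight fusion
        # described in the abstract; other weights could favor a stronger branch.
        omdi_scores = np.asarray(omdi_scores, dtype=float)
        iomdi_scores = np.asarray(iomdi_scores, dtype=float)
        return w_omdi * omdi_scores + w_iomdi * iomdi_scores

    # Hypothetical scores for three videos from each branch.
    omdi = [0.92, 0.15, 0.60]    # OMDI branch (e.g. EfficientNetB7 classifier output)
    iomdi = [0.88, 0.25, 0.70]   # I-OMDI branch
    fused = fuse_scores(omdi, iomdi)        # -> [0.90, 0.20, 0.65]
    labels = (fused >= 0.5).astype(int)     # 1 = fake, 0 = real (threshold assumed)
    print(fused, labels)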
Pages: 13
References (49 in total)
  • [41] Suwajanakorn S., What makes Tom Hanks look like Tom Hanks. Proceedings of the IEEE International Conference on Computer Vision, 2015.
  • [42] Tan M., EfficientNet: rethinking model scaling for convolutional neural networks. International Conference on Machine Learning, 2019.
  • [43] Nguyen T. T., Nguyen Q. V. H., Nguyen D. T., Nguyen D. T., Huynh-The T., Nahavandi S., Nguyen T. T., Pham Q.-V., Nguyen C. M., Deep learning for deepfakes creation and detection: A survey. Computer Vision and Image Understanding, 2022, 223.
  • [44] Thies J., Face2Face: real-time face capture and reenactment of RGB videos. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [45] Thies J., Zollhofer M., Niessner M., Deferred Neural Rendering: Image Synthesis using Neural Textures. ACM Transactions on Graphics, 2019, 38(4).
  • [46] Tran V.-N., Lee S.-H., Le H.-S., Kwon K.-R., High Performance DeepFake Video Detection on CNN-Based with Attention Target-Specific Regions and Manual Distillation Extraction. Applied Sciences-Basel, 2021, 11(16).
  • [47] Vijaya J., Generation and detection of deepfakes using generative adversarial networks (GANs) and affine transformation. 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT)
  • [48] Yang X., Exposing deep fakes using inconsistent head poses. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019.
  • [49] Zhang K., Zhang Z., Li Z., Qiao Y., Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks. IEEE Signal Processing Letters, 2016, 23(10): 1499-1503.