Improving Detection of DeepFakes through Facial Region Analysis in Images

Cited by: 1
Authors
Alanazi, Fatimah [1 ,2 ]
Ushaw, Gary [1 ]
Morgan, Graham [1 ]
Affiliations
[1] Newcastle Univ, Sch Comp, Newcastle Upon Tyne NE1 7RU, England
[2] Univ Hafr Al Batin, Coll Comp Sci & Engn, Hafar Al Batin 39524, Saudi Arabia
Keywords
DeepFake detection; face augmentation; face cutout; facial recognition; feature fusion; image analysis
DOI
10.3390/electronics13010126
CLC number (Chinese Library Classification)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
In the evolving landscape of digital media, the discipline of media forensics, which encompasses the critical examination and authentication of digital images, videos, and audio recordings, has emerged as an area of paramount importance. This heightened significance is predominantly attributed to the burgeoning concerns surrounding the proliferation of DeepFakes, which are highly realistic manipulated media, often created using advanced artificial intelligence techniques. Such developments necessitate a profound understanding and advancement of media forensics to ensure the integrity of digital media across various domains. Current research endeavours are primarily directed towards addressing a common challenge observed in DeepFake datasets: overfitting. Many suggested remedies centre on data augmentation methods, with a frequently adopted strategy being the incorporation of random erasure or cutout. This method entails the random removal of sections from an image to introduce diversity and mitigate overfitting. The disparities generated between altered and unaltered images inhibit the model from over-adapting to individual samples, leading to more favourable results. Nonetheless, the stochastic nature of this approach may inadvertently obscure facial regions that harbour vital information necessary for DeepFake detection. Due to the lack of guidelines on which specific regions to cut out, most studies use a randomised approach. In recent research, face landmarks have been integrated to designate specific facial areas for removal, although the selection remains somewhat random. There is therefore a need for a more comprehensive insight into facial features and for identifying which regions hold the most crucial information for identifying DeepFakes. This study investigates the information conveyed by various facial components by excising distinct facial regions during model training. The goal is to offer insights that can enhance future face removal techniques for DeepFake datasets, fostering a deeper understanding among researchers and advancing DeepFake detection. Our study presents a novel method that uses face cutout techniques to improve understanding of the key facial features crucial to DeepFake detection. Moreover, the method combats overfitting in DeepFake datasets by generating diverse images with these techniques, thereby enhancing model robustness. The developed methodology is validated on publicly available datasets such as FF++ and Celeb-DFv2. Both face cutout groups surpassed the baseline, indicating that cutouts improve DeepFake detection. Face Cutout Group 2 performed best, with 91% accuracy on Celeb-DF and 86% on the compound dataset, suggesting the significance of external facial features in detection. The study found that the eyes have the greatest impact on model performance and the nose the least. Future research could explore the augmentation policy's effect on video-based DeepFake detection.
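To make the two augmentation strategies contrasted in the abstract concrete, the sketch below shows a purely random cutout (erasing an arbitrary square patch, in the style of classic cutout/random erasure) next to a landmark-guided face-region cutout (erasing a named region such as the eyes, nose, or mouth). This is an illustrative sketch only, not the authors' implementation: the `random_cutout` and `region_cutout` helpers, the 68-point landmark indexing, and the `REGIONS` grouping are assumptions for demonstration, and the composition of the paper's Face Cutout Groups is not given in the abstract.

```python
# Illustrative sketch only (not the authors' code): random cutout vs. a
# landmark-guided facial-region cutout. Landmark indices assume the common
# 68-point convention; REGIONS below is a hypothetical grouping.
import numpy as np

def random_cutout(image: np.ndarray, size: int = 50, rng=None) -> np.ndarray:
    """Erase a randomly placed square patch (classic cutout / random erasure)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()
    out[y0:y1, x0:x1] = 0  # zero-fill the erased patch
    return out

# Hypothetical mapping from facial regions to 68-point landmark indices.
REGIONS = {
    "eyes":  list(range(36, 48)),
    "nose":  list(range(27, 36)),
    "mouth": list(range(48, 68)),
}

def region_cutout(image: np.ndarray, landmarks: np.ndarray,
                  region: str = "eyes", pad: int = 10) -> np.ndarray:
    """Erase the padded bounding box of one facial region given (x, y) landmarks."""
    pts = landmarks[REGIONS[region]]
    x0, y0 = np.maximum(pts.min(axis=0) - pad, 0).astype(int)
    x1, y1 = (pts.max(axis=0) + pad).astype(int)
    out = image.copy()
    out[y0:y1, x0:x1] = 0  # rows are y, columns are x
    return out

if __name__ == "__main__":
    # Stand-in data: a random "face crop" and random landmarks; in practice the
    # landmarks would come from a face-alignment library such as dlib.
    img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    lms = np.random.randint(60, 200, size=(68, 2))
    aug_random = random_cutout(img)
    aug_eyes = region_cutout(img, lms, region="eyes")
    print(aug_random.shape, aug_eyes.shape)
```

In the paper's framing, comparing detectors trained under different region cutouts is what reveals which facial areas carry the most discriminative information (the eyes, per the reported results) and which carry the least (the nose).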
Pages: 22