Detection of Deepfake Videos Using Long-Distance Attention

Cited by: 15
Authors
Lu, Wei [1 ]
Liu, Lingyi [1 ]
Zhang, Bolin [1 ]
Luo, Junwei [1 ]
Zhao, Xianfeng [2 ]
Zhou, Yicong [3 ]
Huang, Jiwu [4 ,5 ,6 ]
Affiliations
[1] Sun Yat-sen Univ, Sch Comp Sci & Engn, Guangdong Prov Key Lab Informat Secur Technol, Minist Educ, Key Lab Machine Intelligence & Adv Com, Guangzhou 510006, Peoples R China
[2] Chinese Acad Sci, Inst Informat Engn, State Key Lab Informat Secur, Beijing 100195, Peoples R China
[3] Univ Macau, Dept Comp & Informat Sci, Macau 999078, Peoples R China
[4] Shenzhen Univ, Guangdong Key Lab Intelligent Informat Proc, Shenzhen 518060, Peoples R China
[5] Shenzhen Univ, Shenzhen Key Lab Media Secur, Shenzhen 518060, Peoples R China
[6] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deepfakes; Faces; Forgery; Time-domain analysis; Transformers; Task analysis; Semantics; Attention mechanism; deepfake detection; face manipulation; spatial and temporal artifacts; FACE; REPRESENTATION;
DOI
10.1109/TNNLS.2022.3233063
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
With the rapid progress of deepfake techniques in recent years, facial video forgery can generate highly deceptive video content and poses severe security threats, making the detection of such forged videos urgent and challenging. Most existing detection methods treat the problem as vanilla binary classification. In this article, the problem is instead treated as a special fine-grained classification problem, since the differences between fake and real faces are very subtle. It is observed that most existing face forgery methods leave common artifacts in both the spatial and time domains: generative defects in the spatial domain and interframe inconsistencies in the time domain. Accordingly, a spatial-temporal model is proposed with two components that capture spatial and temporal forgery traces, respectively, from a global perspective. Both components are built on a novel long-distance attention mechanism: the spatial component captures artifacts within a single frame, and the temporal component captures inconsistencies across consecutive frames. Both generate attention maps in the form of patches. The long-distance attention has a broader receptive field, which helps it assemble global information while extracting local statistics. Finally, the attention maps guide the network to focus on pivotal parts of the face, as in other fine-grained classification methods. Experimental results on different public datasets demonstrate that the proposed method achieves state-of-the-art performance and that the proposed long-distance attention effectively captures the pivotal parts of forged faces.
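To make the two-branch design described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of patch-wise long-distance attention applied spatially (across patches within a frame) and temporally (across frames at each patch position). This is not the authors' implementation; the module names, shapes, and hyperparameters (embedding dimension, patch count, frame count, classifier head) are all illustrative assumptions.

```python
# Illustrative sketch only; NOT the paper's implementation. All names,
# shapes, and hyperparameters below are assumptions for exposition.
import torch
import torch.nn as nn


class PatchAttention(nn.Module):
    """Self-attention over a sequence of patch tokens.

    Every token attends to every other token, so the mechanism is
    "long-distance": it can relate far-apart patches (or frames) in a
    single step, unlike a local convolution.
    """

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):
        # tokens: (batch, seq_len, dim)
        out, attn_map = self.attn(tokens, tokens, tokens)
        # attn_map (batch, seq_len, seq_len) plays the role of the
        # patch-form attention map highlighting suspicious regions.
        return self.norm(tokens + out), attn_map


class SpatialTemporalSketch(nn.Module):
    """Two branches: spatial attention within each frame, temporal
    attention across frames at each patch position."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.spatial = PatchAttention(dim)
        self.temporal = PatchAttention(dim)
        self.head = nn.Linear(2 * dim, 2)  # real-vs-fake logits

    def forward(self, x):
        # x: (batch, frames, patches, dim) patch embeddings, e.g. from a
        # CNN backbone applied to each frame (backbone omitted here).
        b, t, p, d = x.shape
        # Spatial branch: attend across the p patches of every frame.
        s, _ = self.spatial(x.reshape(b * t, p, d))
        s = s.reshape(b, t, p, d).mean(dim=(1, 2))
        # Temporal branch: attend across the t frames at every patch.
        v, _ = self.temporal(x.transpose(1, 2).reshape(b * p, t, d))
        v = v.reshape(b, p, t, d).mean(dim=(1, 2))
        return self.head(torch.cat([s, v], dim=-1))


# Toy usage: 2 clips, 8 frames, 7x7 = 49 patches, 256-d embeddings.
logits = SpatialTemporalSketch()(torch.randn(2, 8, 49, 256))
print(logits.shape)  # torch.Size([2, 2])
```

Under these assumptions, the spatial branch flattens the clip into a (frames x patches) batch so attention runs across patches, while the temporal branch transposes the tensor so attention runs along the frame axis; the returned attention maps could then be inspected to locate the "pivotal parts" the abstract refers to.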
Pages: 9366-9379
Page count: 14