UIA-ViT: Unsupervised Inconsistency-Aware Method Based on Vision Transformer for Face Forgery Detection

Cited by: 56
Authors
Zhuang, Wanyi [1 ]
Chu, Qi [1 ]
Tan, Zhentao [1 ]
Liu, Qiankun [1 ]
Yuan, Haojie [1 ]
Miao, Changtao [1 ]
Luo, Zixiang [1 ]
Yu, Nenghai [1 ]
Affiliations
[1] Univ Sci & Technol China, CAS Key Lab Electromagnet Space Informat, Hefei, Peoples R China
Source
COMPUTER VISION - ECCV 2022, PT V | 2022 / Vol. 13665
Funding
National Natural Science Foundation of China
DOI
10.1007/978-3-031-20065-6_23
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Intra-frame inconsistency has been proven effective for the generalization of face forgery detection. However, learning to focus on such inconsistency requires extra pixel-level forged-location annotations, which are non-trivial to acquire. Some existing methods generate large-scale synthesized data with location annotations, but such data is composed only of real images and cannot capture the properties of forgery regions. Others generate forgery-location labels by subtracting paired real and fake images, yet such paired data is difficult to collect and the generated labels are usually discontinuous. To overcome these limitations, we propose a novel Unsupervised Inconsistency-Aware method based on the Vision Transformer, called UIA-ViT, which uses only video-level labels and can learn inconsistency-aware features without pixel-level annotations. Owing to the self-attention mechanism, the attention map among patch embeddings naturally represents the consistency relation, making the Vision Transformer well suited for consistency representation learning. Building on the Vision Transformer, we propose two key components: Unsupervised Patch Consistency Learning (UPCL) and Progressive Consistency Weighted Assemble (PCWA). UPCL learns consistency-related representations with progressively optimized pseudo annotations. PCWA enhances the final classification embedding with the preceding patch embeddings optimized by UPCL to further improve detection performance. Extensive experiments demonstrate the effectiveness of the proposed method.
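The abstract's core idea — that pairwise similarity among patch embeddings can serve as a consistency map, supervised only by pseudo annotations — can be sketched in plain Python. This is a toy illustration, not the paper's implementation: the helper names (`consistency_map`, `upcl_loss`), the cosine-similarity choice, and the binary pseudo-mask are all assumptions for exposition; the actual UPCL operates on ViT attention/feature maps during training.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def consistency_map(patch_embeddings):
    # Pairwise cosine-similarity map among patch embeddings,
    # standing in for the attention-derived consistency relation.
    n = len(patch_embeddings)
    return [[cosine(patch_embeddings[i], patch_embeddings[j])
             for j in range(n)] for i in range(n)]

def upcl_loss(cmap, pseudo_mask):
    # Toy patch-consistency objective: patches sharing a pseudo label
    # (0 = pseudo-real, 1 = pseudo-forged) should be similar (target +1),
    # patches with different labels dissimilar (target -1).
    n = len(pseudo_mask)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            target = 1.0 if pseudo_mask[i] == pseudo_mask[j] else -1.0
            loss += (cmap[i][j] - target) ** 2
    return loss / (n * n)
```

For example, three patches where the third is pseudo-forged yield a low loss contribution from same-label pairs and a penalty for cross-label pairs that still look similar; in training, minimizing such a loss would push the consistency map toward the pseudo annotations.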
Pages: 391-407
Page count: 17