Multimodal Proxy-Free Face Anti-Spoofing Exploiting Local Patch Features

Cited by: 0
Authors
Yu, Xiangyu [1 ]
Huang, Xinghua [1 ]
Ye, Xiaohui [1 ]
Liu, Beibei [1 ]
Hua, Guang [2 ]
Affiliations
[1] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou 510641, Peoples R China
[2] Singapore Inst Technol SIT, Infocomm Technol ICT Cluster, Singapore 138683, Singapore
Keywords
Faces; Face recognition; Feature extraction; Protocols; Task analysis; Training; Printing; Face anti-spoofing; local spoof features; proxy-free pairwise similarity learning;
DOI
10.1109/LSP.2024.3418710
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic & Communication Technology]
Discipline Codes
0808; 0809
Abstract
Face anti-spoofing (FAS) is vital for securing face recognition systems, and its essential task is to capture the distinctive features of spoof faces. Most existing methods extract spoof features from whole faces, overlooking clues in local face patches. Meanwhile, researchers usually rely on intermediate parameters as a proxy for face classification, which requires designing additional loss functions. To address these problems, we propose a multimodal proxy-free FAS model that uses contrastive language-image pre-training (CLIP) as the backbone. Specifically, we augment the data with patches cropped from the original face, forcing the network to learn local spoof features such as the edges of printing attacks. At the same time, we introduce a dynamic central difference convolutional (DCDC) adapter to extract fine-grained features from the patches. Furthermore, we adopt a proxy-free pairwise similarity learning (PSL) loss so that the maximum intra-class distance becomes smaller than the minimum inter-class distance. Experiments on several benchmark datasets show that the proposed method achieves state-of-the-art performance.
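The sketch below is a minimal, hypothetical illustration of the two ideas the abstract describes: random local-patch cropping as augmentation, and a generic proxy-free pairwise similarity loss whose value reaches zero only when every same-class similarity exceeds every cross-class similarity. Function names, the patch size, and the scale hyper-parameter are assumptions for illustration; this is not the authors' exact PSL formulation or DCDC adapter.

```python
# Hypothetical sketch, not the paper's implementation.
import torch
import torch.nn.functional as F

def random_patch(img, patch_size=112):
    """Crop a random local patch from a CHW face tensor so the model must
    rely on local spoof cues (e.g., print edges) rather than global layout."""
    _, h, w = img.shape
    top = torch.randint(0, h - patch_size + 1, (1,)).item()
    left = torch.randint(0, w - patch_size + 1, (1,)).item()
    return img[:, top:top + patch_size, left:left + patch_size]

def pairwise_similarity_loss(embeddings, labels, scale=16.0):
    """Proxy-free pairwise loss: approaches zero only when every same-class
    (live/live or spoof/spoof) cosine similarity in the batch exceeds every
    cross-class (live vs. spoof) similarity."""
    z = F.normalize(embeddings, dim=1)                 # unit-norm features
    sim = z @ z.t()                                    # pairwise cosine similarities
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = same & ~eye                                  # positive pairs, excluding self-pairs
    neg = ~same                                        # negative pairs
    neg_inf = torch.full_like(sim, -1e9)
    # softplus(logsumexp(s_neg) + logsumexp(-s_pos)) == log(1 + sum exp(s_n - s_p)),
    # which penalizes any negative similarity that is not below all positives.
    pos_term = torch.logsumexp(torch.where(pos, -scale * sim, neg_inf), dim=1)
    neg_term = torch.logsumexp(torch.where(neg, scale * sim, neg_inf), dim=1)
    return F.softplus(pos_term + neg_term).mean()

# Example usage (illustrative): features = clip_image_encoder(random_patch(face))
# with labels 0 = live, 1 = spoof.
```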
Pages: 1695-1699
Page count: 5