Cross-Scenario Unknown-Aware Face Anti-Spoofing With Evidential Semantic Consistency Learning

Cited by: 2
Authors
Jiang, Fangling [1 ]
Liu, Yunfan [2 ]
Si, Haolin [3 ]
Meng, Jingjing [3 ]
Li, Qi [4 ,5 ]
Affiliations
[1] Univ South China, Sch Comp Sci, Hengyang 421001, Hunan, Peoples R China
[2] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 101408, Peoples R China
[3] Huawei Technol Co Ltd, Shenzhen 518129, Guangdong, Peoples R China
[4] CASIA, Ctr Res Intelligent Percept & Comp, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
[5] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Keywords
Face anti-spoofing; cross-scenario testing; generalized feature learning; unknown PAI detection; PRESENTATION ATTACK DETECTION; DOMAIN ADAPTATION;
DOI
10.1109/TIFS.2024.3356234
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
In recent years, domain adaptation techniques have been widely used to adapt face anti-spoofing models to a cross-scenario target domain. Most previous methods assume that the Presentation Attack Instruments (PAIs) in such cross-scenario target domains are the same as those in the source domain. However, since malicious users are free to attack the system with any form of unknown PAI, this assumption does not always hold in practical face anti-spoofing applications. Unknown PAIs thus inevitably cause significant performance degradation, since samples of known and unknown PAIs usually differ substantially. In this paper, we propose an Evidential Semantic Consistency Learning (ESCL) framework to address this problem. Specifically, a regularized evidential deep learning strategy with a two-way balance of class probability and uncertainty is leveraged to produce uncertainty scores for unknown PAI detection. Meanwhile, an entropy optimization-based semantic consistency learning strategy is employed to encourage the features of live faces and known PAIs to gather in label-conditioned clusters across the source and target domains, while encouraging the features of unknown PAIs to self-cluster according to their intrinsic semantic information. In addition, a new evaluation metric, KUHAR, is proposed to comprehensively evaluate the error rates on known classes and unknown PAIs. Extensive experimental results on six public datasets demonstrate the effectiveness of our method in generalizing face anti-spoofing models to both known classes and unknown PAIs of varying types and quantities in a cross-scenario testing domain. Our method achieves state-of-the-art performance on eight different protocols.
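To illustrate the kind of uncertainty score the abstract refers to, the following is a minimal sketch of standard Dirichlet-based evidential deep learning (in the style of Sensoy et al., which ESCL builds on), not the paper's exact regularized two-way-balanced variant. The function name and the softplus choice for producing non-negative evidence are assumptions for illustration.

```python
import numpy as np

def evidential_uncertainty(logits):
    """Map classifier logits to expected class probabilities and a
    total uncertainty score via a Dirichlet evidence parameterization."""
    evidence = np.log1p(np.exp(logits))   # softplus keeps evidence non-negative
    alpha = evidence + 1.0                # Dirichlet concentration parameters
    strength = alpha.sum()                # total evidence strength S
    probs = alpha / strength              # expected class probabilities
    k = len(alpha)                        # number of known classes
    uncertainty = k / strength            # uncertainty mass in (0, 1]
    return probs, uncertainty
```

In an unknown-aware setting, a sample whose uncertainty exceeds a chosen threshold would be flagged as a possible unknown PAI rather than assigned to a known class; confident logits yield large evidence, hence low uncertainty, while near-uniform logits yield uncertainty close to 1.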
Pages: 3093-3108
Page count: 16