Paper Title

Face Anti-Spoofing with Human Material Perception

Authors

Zitong Yu, Xiaobai Li, Xuesong Niu, Jingang Shi, Guoying Zhao

Abstract

Face anti-spoofing (FAS) plays a vital role in securing face recognition systems against presentation attacks. Most existing FAS methods capture various cues (e.g., texture, depth and reflection) to distinguish live faces from spoofing faces. All these cues are based on the discrepancy among physical materials (e.g., skin, glass, paper and silicone). In this paper we rephrase face anti-spoofing as a material recognition problem and combine it with classical human material perception [1], intending to extract discriminative and robust features for FAS. To this end, we propose the Bilateral Convolutional Networks (BCN), which is able to capture intrinsic material-based patterns via aggregating multi-level bilateral macro- and micro-information. Furthermore, a Multi-level Feature Refinement Module (MFRM) and multi-head supervision are utilized to learn more robust features. Comprehensive experiments are performed on six benchmark datasets, and the proposed method achieves superior performance on both intra- and cross-dataset testing. One highlight is that we achieve an overall 11.3$\pm$9.5\% EER for cross-type testing on the SiW-M dataset, which significantly outperforms previous results. We hope this work will facilitate future cooperation between the FAS and material communities.
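The bilateral macro/micro decomposition the abstract refers to can be illustrated with a classical image-domain bilateral filter: the edge-preserving smoothed output carries macro information (overall shape and shading), while the residual carries micro information (fine surface texture, the kind of detail that differs between skin and paper or silicone). Below is a minimal NumPy sketch of this idea, assuming a single-channel image with values in [0, 1]; the function name and parameters are illustrative, not the paper's actual deep bilateral operator, which applies this decomposition to network feature maps.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Naive bilateral filter: smooths the image while preserving edges.

    img:     2D float array with values in [0, 1].
    sigma_s: std of the spatial Gaussian (in pixels).
    sigma_r: std of the range Gaussian (in intensity units).
    radius:  half-width of the filter window.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # Precompute the spatial kernel over the (2r+1) x (2r+1) window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: down-weight pixels whose intensity differs
            # from the center pixel, so edges are preserved.
            rng = np.exp(-((window - img[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = (weights * window).sum() / weights.sum()
    return out

# Decompose an input into macro (smoothed) and micro (residual) parts.
face = np.clip(np.random.rand(32, 32), 0.0, 1.0)  # stand-in for a face crop
macro = bilateral_filter(face)
micro = face - macro  # fine detail: texture and noise-like residuals
```

By construction the two streams sum back to the input, so no information is lost by the split; the point is that material-specific texture cues are concentrated in the micro residual, where a classifier can attend to them separately from coarse facial structure.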
