Deep spatial gradient and temporal depth learning for face anti-spoofing

Face anti-spoofing is critical to the security of face recognition systems. Depth-supervised learning has proven to be one of the most effective methods for face anti-spoofing. Despite this success, most previous works still formulate the problem as a single-frame multi-task one by simply augmenting the loss with depth, neglecting both fine-grained detail and the interplay between facial depth and motion patterns. In contrast, we design a new approach that detects presentation attacks from multiple frames, based on two insights: 1) detailed discriminative clues (e.g., spatial gradient magnitude) between live and spoof faces may be discarded by stacks of vanilla convolutions, and 2) the dynamics of 3D moving faces provide important clues for detecting spoof faces. The proposed method captures discriminative details via a Residual Spatial Gradient Block (RSGB) and efficiently encodes spatio-temporal information via a Spatio-Temporal Propagation Module (STPM). Moreover, a novel Contrastive Depth Loss is presented for more accurate depth supervision. To assess the efficacy of our method, we also collect a Double-modal Anti-spoofing Dataset (DMAD), which provides actual depth for each sample. The experiments demonstrate that the proposed approach achieves state-of-the-art results on five benchmark datasets: OULU-NPU, SiW, CASIA-MFSD, Replay-Attack, and the new DMAD. Code is available at https://github.com/clks-wzz/FAS-SGTD.
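As context for the spatial-gradient-magnitude cue mentioned in the abstract, the sketch below computes gradient magnitude for a grayscale image with 3x3 Sobel kernels. This is a generic illustration of the signal in question, not the paper's RSGB; the function name and the toy step-edge image are illustrative assumptions.

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Spatial gradient magnitude of a 2-D grayscale image using
    3x3 Sobel kernels (valid convolution, output is (H-2, W-2))."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                 # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.sqrt(gx ** 2 + gy ** 2)

# A vertical step edge produces a strong response along the edge only.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
mag = sobel_gradient_magnitude(img)
```

Fine structures like moire patterns or print artifacts on spoof media show up in exactly this kind of gradient map, which motivates preserving it through the network rather than letting plain convolutions smooth it away.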

Authors:
Zezheng Wang, Zitong Yu, Chenxu Zhao, Xiangyu Zhu, Yunxiao Qin, Qiusheng Zhou, Feng Zhou, Zhen Lei

Publication type:
A4 Article in conference proceedings

Place of publication:
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020

Keywords:
6G Publication

Full citation:
Z. Wang et al., “Deep Spatial Gradient and Temporal Depth Learning for Face Anti-Spoofing,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 5041–5050, doi: 10.1109/CVPR42600.2020.00509.

DOI:
https://doi.org/10.1109/CVPR42600.2020.00509

Read the publication here:
http://urn.fi/urn:nbn:fi-fe20201217101058