In the field of face antispoofing, researchers are increasingly focusing on multimodal approaches and feature fusion. While multimodal approaches are more effective than single-modal ones, they often carry a large number of parameters, require significant computational resources, and are difficult to run on mobile devices. To address this real-time constraint, we propose a fast and lightweight framework based on ShuffleNet V2. Our approach takes patch-level images as input, enhances unit performance by introducing an attention module, and addresses dataset sample imbalance through the focal loss function. We evaluate our model on the CASIA-FASD, Replay-Attack, and MSU-MFSD datasets. The results demonstrate that our method outperforms current state-of-the-art methods in both intra-test and inter-test scenarios. Furthermore, our network has only 0.84M parameters and 0.81 GFLOPs, making it suitable for deployment in mobile and real-time settings. Our work can serve as a reference for researchers developing single-modal face antispoofing methods for mobile and real-time applications.
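The abstract names three concrete ingredients: a ShuffleNet V2 backbone on patch-level inputs, an attention module inside the units, and focal loss for class imbalance. The sketch below illustrates how these pieces could fit together in PyTorch. It is a minimal sketch, not the authors' implementation: the abstract does not specify which ShuffleNet V2 width multiplier, attention design, patch size, or focal-loss hyperparameters are used, so the 0.5x backbone, the SE-style channel-attention block, the 224x224 patches, and alpha = 0.25, gamma = 2.0 are all illustrative assumptions, as is the 0 = live / 1 = spoof label convention.

```python
# Hedged sketch of the abstract's components: focal loss for imbalance,
# a ShuffleNet V2 backbone, and an SE-style attention block standing in
# for the paper's (unspecified) attention module.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import shufflenet_v2_x0_5  # width multiplier assumed


class FocalLoss(nn.Module):
    """Focal loss: down-weights easy examples to counter sample imbalance."""

    def __init__(self, alpha=0.25, gamma=2.0):  # hyperparameters assumed
        super().__init__()
        self.alpha, self.gamma = alpha, gamma

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, reduction="none")  # -log(p_t)
        p_t = torch.exp(-ce)
        return (self.alpha * (1.0 - p_t) ** self.gamma * ce).mean()


class SEAttention(nn.Module):
    """Squeeze-and-excitation channel attention (illustrative stand-in)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool -> weights
        return x * w.unsqueeze(-1).unsqueeze(-1)  # reweight channels


# Binary live/spoof classifier on patch-level crops.
model = shufflenet_v2_x0_5(weights=None)
model.fc = nn.Linear(1024, 2)  # 1024 = backbone output channels at 0.5x width
criterion = FocalLoss()

patches = torch.randn(8, 3, 224, 224)  # batch of patch-level inputs (size assumed)
labels = torch.randint(0, 2, (8,))     # 0 = live, 1 = spoof (convention assumed)
loss = criterion(model(patches), labels)
loss.backward()

# Example: the attention block reweighting a ShuffleNet unit's output channels.
unit_out = torch.randn(8, 48, 28, 28)  # 48 channels = stage-2 width at 0.5x
attended = SEAttention(48)(unit_out)
```

In this reading, the attention block would be inserted after each ShuffleNet V2 unit to recalibrate channel responses, while the focal loss replaces plain cross-entropy during training; both changes add almost no parameters, which is consistent with the reported 0.84M total.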
Keywords: convolution, RGB color model, video, performance modeling, data modeling, feature extraction, statistical modeling