Face recognition based on compressed sensing and sparse representation has attracted considerable attention in recent years. This class of algorithms improves both the recognition rate and robustness to noise. However, its computational cost is high and has become a major obstacle to real-world applications. In this paper, we introduce a GPU-accelerated hybrid variant of such a face recognition algorithm, named the parallel face recognition algorithm (pFRA). We describe how the algorithm is parallelized to take full advantage of the many-core architecture of a GPU. The pFRA is tested and compared with several other implementations across different data sample sizes. Implemented on an NVIDIA GPU with the Compute Unified Device Architecture (CUDA) programming model, our pFRA achieves a significant speedup over traditional CPU implementations.