We present an interactive image deformation method that preserves the local shapes of salient objects in the image during deformation. The proposed method falls into the moving least squares (MLS) framework but differs notably from the original MLS deformation method. First, a saliency-related distance is developed to replace the Euclidean distance in the weight definition. Second, the original affine matrix is decomposed into a rotation matrix and a symmetric matrix via singular value decomposition, and the free parameters of these matrices are then interpolated according to the saliency information. Furthermore, for line-based MLS deformation, a closed-form solution for the weight cannot be found directly when the proposed saliency-based distance is used. To address this problem, we propose an exponential transformation to regulate the weight, where the regulation factor is also correlated with the saliency information. Together, these revisions yield a saliency-sensitive mapping that concentrates the deformation in the nonvital parts of the image while preserving the local shapes of the salient parts. Experimental results show that the proposed deformation outperforms the original MLS deformation in terms of visual quality.
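The rotation/symmetric split described above corresponds to a polar decomposition obtained from the SVD. The following is a minimal sketch, not the paper's implementation: the factorization itself is standard, while `saliency_blend` and its interpolation scheme are illustrative assumptions of how saliency could shrink the non-rigid part toward the identity.

```python
import numpy as np

def polar_decompose(A):
    """Split a matrix A into a rotation R and a symmetric matrix S
    with A = R @ S, using the SVD A = U diag(s) V^T."""
    U, s, Vt = np.linalg.svd(A)
    R = U @ Vt                      # orthogonal (rotation) factor
    S = Vt.T @ np.diag(s) @ Vt      # symmetric factor
    return R, S

def saliency_blend(A, saliency):
    """Illustrative saliency-driven interpolation (an assumption, not
    the paper's formula): as saliency -> 1 the symmetric part tends to
    the identity, so salient regions deform almost rigidly."""
    R, S = polar_decompose(A)
    S_blend = (1.0 - saliency) * S + saliency * np.eye(A.shape[0])
    return R @ S_blend
```

With `saliency = 1` the result is the pure rotation `R`; with `saliency = 0` the original affine matrix `A` is recovered.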
Biometric template protection is one of the important issues in deploying a practical biometric system. Many algorithms have been reported in recent years to tackle this problem, most of them applicable to fingerprint biometrics. Since the content and representation of a fingerprint template differ from those of other modalities such as the face, fingerprint template protection algorithms cannot be applied directly to face templates. Moreover, we believe that no single template protection method can satisfy the diversity, revocability, security, and performance requirements simultaneously. We propose a three-step cancelable framework, a hybrid approach to face template protection based on random projection, a class distribution preserving transform, and a hash function. Two publicly available face databases, FERET and CMU-PIE, are used to evaluate the template protection scheme. Experimental results show that the proposed method maintains good template discriminability, resulting in good recognition performance. A comparison with the recently developed random multispace quantization (RMQ) biohashing algorithm shows that our method outperforms RMQ.
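A minimal sketch of the cancelable idea, showing only the random projection and hashing steps; the class distribution preserving transform is omitted, and all names and parameters here are illustrative assumptions rather than the paper's implementation.

```python
import hashlib
import numpy as np

def cancelable_template(feature, seed, out_dim=64):
    """Sketch of a cancelable transform: user-specific random
    projection, binarization, and cryptographic hashing. The 'seed'
    plays the role of a revocable user token (an assumption): a
    compromised template is revoked by reissuing a new seed."""
    rng = np.random.default_rng(seed)
    # user-specific Gaussian random projection matrix
    P = rng.standard_normal((out_dim, feature.size))
    projected = P @ feature
    # binarize against the median to obtain a stable bit string
    bits = (projected > np.median(projected)).astype(np.uint8)
    # hash the bit string so the stored template is non-invertible
    return hashlib.sha256(bits.tobytes()).hexdigest()
```

The same feature with the same seed always yields the same template, while changing the seed produces an unrelated one, which is the revocability property such schemes target.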
Many face recognition algorithms and systems have been developed in the last decade, and excellent performance has been reported when a sufficient number of representative training samples is available. In many real-life applications, such as passport identification, only one well-controlled frontal sample image is available for training. In this situation, the performance of existing algorithms degrades dramatically, or the algorithms may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples of lower dimension than the original image but also account for face detection localization error during training. We then propose a subspace LDA method, tailored to a small number of training samples, to project the local features and maximize the discrimination power. Theoretical analysis and experimental results show that the proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to reach the recognition decision. The FERET database is used to evaluate the proposed method, and the results are encouraging.
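The "component bunch" construction can be sketched as follows; this is a simplified illustration under assumed region coordinates and a one-pixel shift, and the function and parameter names are ours, not the paper's.

```python
import numpy as np

def component_bunch(image, top, left, h, w, shift=1):
    """Build a bunch of local component samples from one training
    image by shifting the h-by-w patch window one step in each of the
    four directions, plus the unshifted patch. This both multiplies
    the sample count and simulates face localization error."""
    offsets = [(0, 0), (-shift, 0), (shift, 0), (0, -shift), (0, shift)]
    bunch = []
    for dy, dx in offsets:
        y, x = top + dy, left + dx
        bunch.append(image[y:y + h, x:x + w].ravel())
    return np.stack(bunch)  # shape: (5, h * w)
```

Each local region thus contributes five low-dimensional vectors instead of one, giving the subsequent subspace LDA step more samples per class to work with.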