In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. The Finite Ridgelet Transform yields a compact and distinctive representation of a multispectral palmprint image by capturing singularities along lines and edges. Each FRIT-represented palmprint is then modeled by a kernel associative memory, and recognition is performed with a Bayesian classifier. The scheme is thoroughly tested on the benchmark CASIA multispectral palmprint database, and the experimental results demonstrate the robustness of the proposed system across palm images captured at different wavelengths.
In this paper, we investigate a face recognition application that integrates a holistic appearance-based method with a feature-based method. The automatic face recognition system extracts a reduced number of invariant SIFT (Scale Invariant Feature Transform) keypoints from face images approximated by multiscale Kernel PCA (Principal Component Analysis). To achieve higher variance between inter-class face images, principal components are computed in a higher-dimensional feature space and each face image is projected onto a set of approximated kernel eigenfaces. As long as the feature spaces retain their distinctive characteristics, a reduced number of SIFT keypoints is detected for each set of principal components; the keypoints are then fused using a user-dependent weighting scheme to form a feature vector. The proposed method is tested on the ORL face database, and the test results demonstrate the efficacy of the system.
This paper proposes a palmprint identification system using the Finite Ridgelet Transform (FRIT) and a Bayesian classifier. FRIT is applied to the region of interest (ROI), extracted from the palmprint image, to obtain a set of distinctive features. These features are then classified with a Bayesian classifier. The proposed system has been tested on the CASIA and IIT Kanpur palmprint databases, and the experimental results show better performance than well-known existing systems.
This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. SIFT features are extracted from the region of interest (ROI), which is obtained from the wide palm texture at the preprocessing stage. Identity is then established by finding a permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features; the permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and the experimental results reveal the effectiveness and robustness of the proposed system.
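As a rough illustration of the permutation-based graph matching idea, the sketch below finds, by brute force, the permutation of probe keypoint locations that minimizes the total Euclidean distance to the reference keypoints. This is only a minimal sketch of the matching objective; the paper itself uses a Lagrangian network rather than exhaustive search, and all names here are illustrative.

```python
from itertools import permutations


def best_permutation_distance(ref_pts, probe_pts):
    """Exhaustively search over permutations (i.e., permutation matrices)
    and return the minimum total Euclidean distance between paired
    reference and probe keypoints, along with the best permutation.

    Feasible only for small graphs; a sketch of the objective, not of
    the paper's Lagrangian network solver.
    """
    n = len(ref_pts)
    best = None
    for perm in permutations(range(n)):
        d = sum(((ref_pts[i][0] - probe_pts[j][0]) ** 2 +
                 (ref_pts[i][1] - probe_pts[j][1]) ** 2) ** 0.5
                for i, j in enumerate(perm))
        if best is None or d < best[0]:
            best = (d, perm)
    return best
```

For genuine pairs the minimum distance should be small; for impostor pairs it stays large, which is the basis of the verification decision.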
In this paper, the fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA), called the canonical covariate, is proposed in the context of multiview face recognition. The canonical covariate extends LDA to establish correlations between face classes in the transformed representation. The proposed work uses a Gabor filter bank to extract facial features characterized by spatial frequency, spatial locality and orientation, compensating for variations in the face caused by changes in illumination, pose and facial expression. Convolving the Gabor filter bank with face images produces Gabor face representations with high-dimensional feature vectors. PCA and the canonical covariate are then applied to these representations to reduce the high-dimensional feature spaces to low-dimensional Gabor eigenfaces and Gabor canonical faces. The reduced eigenface and canonical face vectors are fused using a weighted mean fusion rule. Finally, support vector machines are trained on the augmented fused feature set to perform the recognition task. The proposed system has been evaluated on the UMIST face database and achieves high recognition accuracy for multi-view face images.
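The weighted mean fusion of the two reduced feature vectors can be sketched as follows; the equal weight is a placeholder, not a value taken from the paper.

```python
def weighted_mean_fusion(eig_vec, can_vec, w=0.5):
    """Weighted mean fusion of a PCA eigenface vector and a canonical
    face vector of equal length. The weight w (here 0.5) is illustrative;
    in practice it would be tuned on a validation set.
    """
    assert len(eig_vec) == len(can_vec), "vectors must have equal length"
    return [w * a + (1 - w) * b for a, b in zip(eig_vec, can_vec)]
```

The fused vector then serves as the input feature for the SVM classifier.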
Multibiometric systems offer more reliable and accurate performance by combining the benefits of multiple traits for user authentication. However, incompatible biometric characteristics, such as unmatched image patterns, improper feature registration, mismatched feature space representations, image scaling, and infeasible fusion schemes, often degrade the performance of multibiometric systems. This paper focuses on the benefits of feature-level and match-score-level fusion of face and ear biometrics using scale invariant feature transform (SIFT) representations and probabilistic graphs. The proposed fusion techniques first detect SIFT features from face and ear images independently, and probabilistic graphs are then drawn on the extracted feature points. Using an iterative relaxation algorithm on both graphs, corresponding feature points are searched, and matched points are paired and grouped into two independent sets. During feature-level fusion, the two feature sets are concatenated into an augmented group; the combined set is normalized using the 'min-max' rule, and the concatenated feature vector is used for verification. In match-score-level fusion, independent verifications are performed using the relaxation-based probabilistic graphs and a point pattern matching algorithm, and the resulting matching scores from face and ear are fused using the 'sum' rule. The reported experimental results show performance improvements in verification from both feature-level and score-level fusion.
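The 'min-max' normalization and 'sum'-rule fusion steps described above can be sketched as follows; function names are illustrative, and real systems would fix the min/max over a training set rather than per batch.

```python
def min_max_normalize(scores):
    """Map raw scores into [0, 1] using the observed minimum and maximum."""
    lo, hi = min(scores), max(scores)
    if hi == lo:  # degenerate case: all scores identical
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]


def sum_rule_fusion(face_scores, ear_scores):
    """Normalize each modality's score list, then fuse element-wise
    with the sum rule."""
    f = min_max_normalize(face_scores)
    e = min_max_normalize(ear_scores)
    return [a + b for a, b in zip(f, e)]
```

A threshold on the fused score then yields the accept/reject decision in verification.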
Ear biometrics has been found to be a good and reliable technique for human recognition. Owing to initial doubts about the uniqueness of the ear, ear biometrics did not attract much attention at first, but since it was observed that it is almost impossible to find two ears identical in all their parts, the field has gained pace. To automate ear-based recognition, the ear must be localized automatically in the image, and this paper presents a technique for doing so. In the proposed technique, ear localization is carried out using hierarchical clustering of the edges obtained from the side face image. The technique is tested on a database of 500 side face images collected at IIT Kanpur and achieves 94.6% accuracy.
The paper proposes an efficient iris recognition algorithm based on the fusion of the Haar Wavelet and the Circular Mellin operator. The recognition system first preprocesses the captured iris image to remove holes or spots of light on the pupillary region, which hinder pupil localization. The processed image is localized by detecting the inner and outer boundaries from the pupil center using the maximum value of the spectrum image. The eyelids are then detected by fitting a third-degree polynomial to suitable edge segments, and the region occluded by eyelids is removed from the normalized iris image. Features of the iris pattern are extracted using the Haar Wavelet and the Circular Mellin operator: the Haar Wavelet decomposition reduces the size of the feature vector, while the Circular Mellin operator provides rotation- and scale-invariant feature extraction. The features are compared using the Hamming distance, and fusion is performed at the decision level using the conjunction rule. The recognizer is found to be robust, with an accuracy of more than 95%.
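The Hamming distance comparison and the decision-level conjunction rule can be sketched as follows; the thresholds are placeholders, not values from the paper.

```python
def hamming_distance(code_a, code_b):
    """Fraction of bit positions where two binary iris codes differ
    (0.0 = identical, 1.0 = complementary)."""
    assert len(code_a) == len(code_b), "codes must be the same length"
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)


def conjunction_decision(hd_haar, hd_mellin, thr_haar=0.32, thr_mellin=0.32):
    """Decision-level AND (conjunction) fusion: accept only if both the
    Haar-Wavelet matcher and the Circular-Mellin matcher accept.
    The thresholds here are illustrative placeholders."""
    return hd_haar <= thr_haar and hd_mellin <= thr_mellin
```

The conjunction rule trades a lower false accept rate for a potentially higher false reject rate, since both matchers must agree for acceptance.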
The paper proposes an efficient indexing scheme for binary feature templates using a B+ tree. In this scheme, the input image is decomposed into approximation, vertical, horizontal and diagonal coefficients using the discrete wavelet transform. The binarized approximation coefficient at the second level is divided into four quadrants of equal size, and the Hamming distance (HD) of each quadrant with respect to a sample template of all ones is measured. The HD value of each quadrant is used to generate lower and upper range values, which are inserted into the B+ tree. The nodes at the first level of the tree contain the lower and upper range values generated from the HD of the first quadrant; similarly, the range values of the remaining three quadrants are stored at the second, third and fourth levels respectively. Finally, each leaf node contains a set of identifiers. At identification time, the test image is used to generate HDs for the four quadrants; the B+ tree is then traversed based on the HD value at every node and terminates at leaf nodes holding sets of identifiers. The feature vector for each identifier is retrieved from the corresponding bin of secondary memory and matched against the test feature template to obtain the top matches. The proposed scheme is implemented on an ear biometric database collected at IIT Kanpur and achieves an overall accuracy of 95.8% at a penetration rate of 34%.
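The per-quadrant HD computation that drives the B+ tree keys can be sketched as follows; the range margin `delta` is an illustrative placeholder, and the B+ tree itself is omitted.

```python
def quadrant_hds(binary_img):
    """Split a square binary matrix into four equal quadrants and return
    each quadrant's Hamming distance to an all-ones template, i.e. the
    fraction of zero bits in that quadrant."""
    n = len(binary_img)
    h = n // 2
    quads = [
        [row[:h] for row in binary_img[:h]],  # top-left
        [row[h:] for row in binary_img[:h]],  # top-right
        [row[:h] for row in binary_img[h:]],  # bottom-left
        [row[h:] for row in binary_img[h:]],  # bottom-right
    ]
    hds = []
    for q in quads:
        bits = [b for row in q for b in row]
        hds.append(sum(1 for b in bits if b != 1) / len(bits))
    return hds


def hd_range(hd, delta=0.05):
    """Lower/upper range values around an HD, as used for tree insertion.
    The margin delta is a hypothetical choice, not the paper's value."""
    return max(0.0, hd - delta), min(1.0, hd + delta)
```

Each enrolled template thus contributes four (lower, upper) key ranges, one per tree level, and a query descends the tree by testing its own four HD values against those ranges.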
This paper proposes a multimodal biometric system for identity verification using four traits: face, fingerprint, iris and signature. The proposed system is designed for applications where the training database contains a face image, an iris image, two fingerprint images and/or one or two signature images for each individual. The final decision is made by fusion at the matching score level: feature vectors are created independently for the query images and compared with the enrollment templates stored for each biometric trait during database preparation. Based on the proximity of the feature vector to the template, each subsystem computes its own matching score, and the individual scores are combined into a total score that is passed to the decision module. The multimodal system, developed through fusion of face, fingerprint, iris and signature recognition, is tested on the IITK database; its overall accuracy is found to be more than 97%, with FAR and FRR of 2.46% and 1.23% respectively.
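The combination of per-trait matching scores into a total score and the final decision can be sketched as follows; the equal weights and the threshold are illustrative placeholders, since the abstract does not specify them.

```python
def total_score(face, finger, iris, signature,
                weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine four normalized per-trait matching scores (each in [0, 1])
    into a total score. Equal weights are a placeholder; in practice the
    weights would be tuned to each subsystem's reliability."""
    scores = (face, finger, iris, signature)
    return sum(w * s for w, s in zip(weights, scores))


def decide(score, threshold=0.5):
    """Decision module: accept the claimed identity if the fused total
    score reaches an illustrative threshold."""
    return score >= threshold
```

Raising the threshold lowers the false accept rate (FAR) at the cost of a higher false reject rate (FRR), which is how the reported operating point would be chosen.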