Multibiometric systems offer more reliable and accurate performance by combining the benefits of multiple traits for
user authentication. Incompatible biometric characteristics, such as unmatched image patterns, improper feature
registration and feature-space representation, and image scaling, together with infeasible fusion schemes, often degrade the
performance of multibiometric systems. This paper focuses on the benefits of feature-level and match-score-level fusion
of face and ear biometrics using scale invariant feature transform (SIFT) representation and probabilistic graph. The
proposed fusion techniques first detect and compute SIFT features from face and ear images independently.
Probabilistic graphs are then constructed on the extracted feature points. Using an iterative relaxation algorithm on both
graphs, corresponding feature points are searched, and the matched points are paired and
grouped into two independent sets. During feature-level fusion, the two feature sets are concatenated into an
augmented set. The combined feature set is normalized using the 'min-max' normalization rule, and finally the concatenated
feature vector is used for verification. In match-score-level fusion, independent verifications are performed using
relaxation-based probabilistic graphs and a point-pattern-matching algorithm. The independent matching scores
generated from the face and ear biometrics are then fused using the 'sum' rule. The reported experimental results show
performance improvements in verification from both feature-level and score-level fusion.
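The 'min-max' normalization and 'sum'-rule steps described above can be sketched as follows; the example scores and the use of plain Python lists are illustrative assumptions, not values or structures taken from the paper.

```python
def min_max_normalize(scores):
    """Map raw matching scores into [0, 1] using the min-max rule."""
    lo, hi = min(scores), max(scores)
    if hi == lo:  # degenerate case: all scores equal
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def sum_rule_fusion(face_scores, ear_scores):
    """Fuse per-subject face and ear matching scores with the sum rule,
    after normalizing each modality's scores independently."""
    face_n = min_max_normalize(face_scores)
    ear_n = min_max_normalize(ear_scores)
    return [f + e for f, e in zip(face_n, ear_n)]

# Toy example: scores for three enrolled subjects, on different raw scales
face = [42.0, 10.0, 27.0]   # hypothetical raw face matcher scores
ear = [0.91, 0.20, 0.55]    # hypothetical raw ear matcher scores
fused = sum_rule_fusion(face, ear)
best = max(range(len(fused)), key=fused.__getitem__)
```

Normalizing each modality before summing keeps one matcher's larger raw score range from dominating the fused score.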
In this paper, a fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA)
for multiview face recognition is proposed. The generalization of LDA, called the canonical covariate, is extended to
establish correlation between face classes in the transformed representation. The proposed work uses a
Gabor filter bank to extract facial features characterized by spatial frequency, spatial locality and orientation,
compensating for variations in the face that occur due to changes in illumination, pose and facial expression. Convolution of the
Gabor filter bank with face images produces Gabor face representations with high dimensional feature vectors. PCA and
canonical covariate are then applied on the Gabor face representations to reduce the high dimensional feature spaces into
low-dimensional Gabor eigenfaces and Gabor canonical faces. The reduced eigenface and canonical face vectors are then
fused using a weighted-mean fusion rule. Finally, support vector machines are trained on the fused feature set to
perform the recognition task. The proposed system has been evaluated on the UMIST face database
and achieves high recognition accuracy for multi-view face images.
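A minimal sketch of the weighted-mean fusion of the two reduced vectors is given below; the weight value, the toy vectors, and the assumption that both reduced vectors share the same dimensionality are illustrative, since the abstract does not specify them.

```python
def weighted_mean_fusion(eigenface_vec, canonical_vec, w=0.5):
    """Fuse a reduced Gabor eigenface vector and a Gabor canonical face
    vector by a weighted mean; w weights the eigenface component.
    Equal dimensionality of the two reduced vectors is assumed."""
    if len(eigenface_vec) != len(canonical_vec):
        raise ValueError("reduced vectors must have the same dimension")
    return [w * a + (1.0 - w) * b
            for a, b in zip(eigenface_vec, canonical_vec)]

# Toy 3-dimensional reduced vectors (illustrative values only)
fused = weighted_mean_fusion([2.0, 4.0, 6.0], [0.0, 2.0, 2.0], w=0.5)
```

The fused vector would then be the per-subject feature passed to the support vector machine for training and classification.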
The paper proposes an efficient iris recognition algorithm obtained through the fusion of the Haar wavelet and the Circular
Mellin operator. The recognition system preprocesses the captured iris image to remove the effect of holes or spots of
light on the pupillary region, which create problems in pupil localization. The processed image is localized by
detecting the inner and outer boundaries from the pupil center using the maximum value of the spectrum image. The
eyelids are then detected by fitting a third-degree polynomial to suitable edge segments and removing the region occluded
by eyelids from the normalized iris image. The features for the iris pattern are extracted using Haar Wavelet and Circular
Mellin operator. The Haar Wavelet decomposition reduces the size of feature vector while Circular Mellin operator is
used for rotation- and scale-invariant feature extraction. The features are compared using the Hamming distance method,
and fusion is done at the decision level using the conjunction rule. The recognizer is found to be robust, with an accuracy
above 95%.
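The Hamming-distance comparison and the conjunction (AND) decision rule might look like the following sketch; the acceptance thresholds are hypothetical, as the abstract does not report the operating points used.

```python
def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two equal-length binary codes."""
    if len(code_a) != len(code_b):
        raise ValueError("codes must have equal length")
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def conjunction_decision(hd_haar, hd_mellin, t_haar=0.3, t_mellin=0.3):
    """Decision-level fusion by the conjunction rule: accept only when
    BOTH the Haar-wavelet and Circular-Mellin matchers individually
    accept (the thresholds t_haar and t_mellin are illustrative)."""
    return hd_haar <= t_haar and hd_mellin <= t_mellin
```

The conjunction rule trades a lower false accept rate for a potentially higher false reject rate, since a genuine user must pass both matchers.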
The paper proposes an efficient indexing scheme for binary feature template using B+ tree. In this scheme the
input image is decomposed into approximation, vertical, horizontal and diagonal coefficients using the discrete
wavelet transform. The binarized approximation coefficient at second level is divided into four quadrants of equal
size and Hamming distance (HD) for each quadrant with respect to sample template of all ones is measured. This
HD value of each quadrant is used to generate upper and lower range values which are inserted into B+ tree.
The nodes of tree at first level contain the lower and upper range values generated from HD of first quadrant.
Similarly, the lower and upper range values for the remaining three quadrants are stored in the second, third and fourth
levels, respectively. Finally, the leaf nodes contain the sets of identifiers. At the time of identification, the test image is
used to generate the HD for the four quadrants. The B+ tree is then traversed based on the HD value at every node,
terminating at leaf nodes with a set of identifiers. The feature vector for each identifier is retrieved from the
particular bin of secondary memory and matched against the test feature template to get the top matches. The proposed
scheme is implemented on an ear biometric database collected at IIT Kanpur. The system gives an overall
accuracy of 95.8% at a penetration rate of 34%.
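The per-quadrant Hamming-distance computation against the all-ones sample template, and the derivation of lower/upper range values, might be sketched as follows; the range width `delta` is an assumption, since the abstract does not state how the ranges are generated from the HD values.

```python
def quadrant_hds(binary_template):
    """Split a square binary template (list of rows of 0/1 bits) into four
    equal quadrants and compute each quadrant's fractional Hamming
    distance to the all-ones sample template (the fraction of zero bits)."""
    n = len(binary_template)
    h = n // 2
    quads = [(0, 0), (0, h), (h, 0), (h, h)]  # TL, TR, BL, BR corners
    hds = []
    for r0, c0 in quads:
        bits = [binary_template[r][c]
                for r in range(r0, r0 + h)
                for c in range(c0, c0 + h)]
        hds.append(sum(1 - b for b in bits) / len(bits))
    return hds

def hd_range(hd, delta=0.05):
    """Lower/upper range values around a quadrant's HD value, clamped to
    [0, 1]; the width delta is a hypothetical choice for illustration."""
    return max(0.0, hd - delta), min(1.0, hd + delta)

# Toy 4x4 binarized approximation coefficients (illustrative bits)
template = [[1, 1, 0, 0],
            [1, 1, 0, 0],
            [1, 0, 1, 1],
            [0, 1, 1, 1]]
hds = quadrant_hds(template)
```

Each of the four HD values (one per tree level) would yield a lower/upper pair guiding which child pointers to follow during traversal.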
This paper proposes a multimodal biometric system for identity verification using four traits: face, fingerprint, iris and signature. The proposed system is designed for applications where the training database contains a face, an iris, two fingerprint images and/or one or two signature image(s) for each individual. The final decision is made by fusion at the matching-score-level architecture, in which feature vectors are created independently for the query images and are then compared to the enrollment templates stored during database preparation for each biometric trait. Based on the proximity of the feature vector and the template, each subsystem computes its own matching score. These individual scores are finally combined into a total score, which is passed to the decision module. The multimodal system is developed through fusion of face, fingerprint, iris and signature recognition. The system is tested on the IITK database; its overall accuracy is found to be more than 97%, with FAR and FRR of 2.46% and 1.23%, respectively.
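The combination of the four subsystem scores into a total score for the decision module can be sketched as below; the equal weights, the threshold, and the toy scores are illustrative assumptions, since the abstract does not give the actual combination parameters.

```python
def total_score(trait_scores, weights=None):
    """Combine per-trait matching scores (e.g., face, fingerprint, iris,
    signature) into a single total score. Equal weights are assumed
    when none are supplied."""
    if weights is None:
        weights = [1.0 / len(trait_scores)] * len(trait_scores)
    return sum(w * s for w, s in zip(weights, trait_scores))

def decide(total, threshold=0.5):
    """Decision module: accept the claimed identity when the total score
    reaches the operating threshold (threshold value is illustrative)."""
    return "accept" if total >= threshold else "reject"

# Toy normalized scores for one verification attempt
scores = [0.9, 0.8, 0.7, 0.6]   # face, fingerprint, iris, signature
outcome = decide(total_score(scores))
```

In practice the threshold would be tuned on the database to reach the reported FAR/FRR operating point.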