We address the problem of score-level fusion of intramodal and multimodal experts in the context of biometric
identity verification. We investigate the merits of confidence-based weighting of component experts. In contrast
to the conventional approach where confidence values are derived from scores, we use instead raw measures of
biometric data quality to control the influence of each expert on the final fused score. We show that quality-based
fusion gives better performance than quality-free fusion. The use of quality-weighted scores as features in the
definition of the fusion functions leads to further improvements. We demonstrate that the achievable performance
gain is also affected by the choice of fusion architecture. The evaluation of the proposed methodology involves
six face experts and one speech verification expert, and is carried out on the XM2VTS database.
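As a minimal illustration of the idea of quality-controlled fusion, the sketch below weights each expert's score by its raw quality measure. The specific weighting rule (normalised quality weights in a weighted sum) is an assumption for illustration only; the fusion functions evaluated in the paper are trained on quality-weighted scores rather than fixed in closed form.

```python
import numpy as np

def quality_weighted_fusion(scores, qualities):
    """Fuse expert scores, letting each raw biometric quality measure
    control the influence of its expert on the final fused score.
    Illustrative sketch only: the normalised weighted sum is an
    assumed rule, not the paper's trained fusion function."""
    scores = np.asarray(scores, dtype=float)
    qualities = np.asarray(qualities, dtype=float)
    weights = qualities / qualities.sum()  # quality-derived weights
    return float(weights @ scores)

# Example: two face experts and one speech expert, with the second
# face sample of low quality, so its expert contributes less.
fused = quality_weighted_fusion([0.8, 0.6, 0.9], [1.0, 0.4, 0.7])
```

The fused score always lies between the smallest and largest expert scores, so a poor-quality sample cannot dominate the decision.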
Non-frontal illumination of objects may cause specular reflections and strong self-shadowing. These phenomena change
the appearance of objects to such an extent that they may no longer be recognized correctly. We propose a method to
automatically discard the areas of the image which are degraded beyond recovery by adverse illumination conditions.
The method is based on a comparison of local variances of the image gradient and is computationally efficient. We
show that the proposed method, implemented within a face verification system based on local DCTmod2 features and a
GMM classifier, reduces the total recognition error in the presence of changing directional illumination conditions.
We further show that the proposed segmentation method can be used as an automatic estimator of the mismatch between
the illumination conditions present during the acquisition of training and testing images, and we propose an adaptive
thresholding scheme that uses this mismatch estimate to further reduce the recognition error.
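One plausible reading of the gradient-variance comparison can be sketched as follows: directional illumination tends to make one gradient direction dominate inside a shadowed or specular block, so blocks with a very lopsided (or vanishing) local gradient variance are flagged for discarding. The block size, the use of a variance ratio, and the threshold value are all illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def degraded_block_mask(image, block=8, ratio_thresh=10.0):
    """Flag image blocks whose horizontal/vertical gradient-variance
    ratio is extreme or whose gradient is flat, as a rough sketch of
    discarding regions degraded by directional illumination.
    All parameter choices here are illustrative assumptions."""
    gy, gx = np.gradient(image.astype(float))  # row, column gradients
    h, w = image.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            vx, vy = gx[sl].var(), gy[sl].var()
            lo, hi = min(vx, vy), max(vx, vy)
            # True = discard: flat block or strongly one-directional gradient
            mask[i, j] = (lo == 0.0) or (hi / lo > ratio_thresh)
    return mask
```

The fraction of discarded blocks could then serve as a crude mismatch estimate between training and test illumination, in the spirit of the adaptive thresholding scheme described above.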