The so-called robust L1 PCA was introduced in our recent work based on an L1 (Laplacian) noise assumption. Owing to the heavy tails of the Laplacian distribution, the proposed model has been shown to be considerably more robust to data outliers. In
this paper, we further demonstrate how the learned robust L1 PCA model can be used to denoise image data.
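As a rough illustration of the idea only (not the paper's algorithm), one common way to approximate an L1-robust PCA is iteratively reweighted least squares: samples with large residual norms are down-weighted before each ordinary (L2) PCA step, which approximately minimizes a rotational-invariant L1 objective. All function names and parameter values below are illustrative assumptions:

```python
import numpy as np

def l1_pca(X, k, n_iter=50, eps=1e-6):
    """Sketch of a robust L1-type PCA via iteratively reweighted
    least squares (IRLS).

    X : (n_samples, n_features) data matrix, assumed centered.
    k : number of principal components.
    Returns an orthonormal (n_features, k) basis W that approximately
    minimizes the sum of residual norms  sum_i ||x_i - W W^T x_i||,
    which is far less sensitive to outliers than the squared L2 loss.
    """
    # initialize with the ordinary L2 principal components
    W = np.linalg.svd(X, full_matrices=False)[2][:k].T
    for _ in range(n_iter):
        R = X - X @ W @ W.T                              # residuals
        # IRLS weights: down-weight samples with large residual norm,
        # so the weighted L2 objective mimics the L1-type objective
        w = 1.0 / np.maximum(np.linalg.norm(R, axis=1), eps)
        Xw = X * np.sqrt(w)[:, None]
        W = np.linalg.svd(Xw, full_matrices=False)[2][:k].T
    return W
```

Denoising then amounts to projecting noisy samples onto the learned subspace, `X @ W @ W.T`, so that outlier-driven directions are suppressed.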
In recent years, the tasks of fingerprint examiners have been greatly aided by the development of automatic fingerprint
classification systems. These systems make their decisions by matching low-level features that are automatically extracted from fingerprint
images and often represented collectively as numeric vectors. However, there are two major shortcomings
in current systems. First, the result of classification depends solely on the chosen features and the algorithm that matches
them. Second, the systems cannot adapt their results over time through interaction with individual fingerprint examiners,
who often have different levels of experience. In this paper, we demonstrate that, by incorporating relevance feedback into a
fingerprint classification system, a personalized semantic space over the database of fingerprints can be
incrementally learned for each user. The fingerprint features that induce the initial feature space, from which the individual semantic
spaces are learned, were obtained by multispectral decomposition of the fingerprints using a bank of Gabor filters. In
this learning framework, the out-of-sample extension of a recently introduced dimensionality reduction method, called
Twin Kernel Embedding (TKE), is applied to learn both the semantic space and a mapping function for classifying novel
fingerprints. Experimental results confirm the effectiveness of this learning framework for examiner-centric fingerprint classification.
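The Gabor filter bank decomposition mentioned above can be sketched as follows; the kernel size, spatial frequency, bandwidth, and the number of orientations are illustrative assumptions, not the settings used in this work:

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """Real-valued Gabor kernel: a Gaussian envelope modulated by a
    cosine at orientation theta and spatial frequency freq."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(img, n_orient=8, freq=0.1, sigma=4.0, size=17):
    """Filter img with a bank of oriented Gabor kernels and summarize
    each response by its mean absolute energy, yielding one feature
    per orientation."""
    feats = []
    F_img = np.fft.fft2(img)
    for i in range(n_orient):
        k = gabor_kernel(size, theta=i * np.pi / n_orient,
                         freq=freq, sigma=sigma)
        K = np.zeros_like(img, dtype=float)
        K[:size, :size] = k
        # circular convolution via FFT -- adequate for a sketch;
        # a real pipeline would pad the image to avoid wraparound
        resp = np.real(np.fft.ifft2(F_img * np.fft.fft2(K)))
        feats.append(np.abs(resp).mean())
    return np.array(feats)
```

Because fingerprint ridges are locally oriented sinusoidal patterns, the orientation whose kernel matches the local ridge flow produces the strongest response, which is what makes such energy vectors discriminative features.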