The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standard proposes several important aspects for the assessment of face sample quality. Many existing no-reference image quality metrics (IQMs) can assess natural image quality using image-based quality attributes similar to those introduced in the standard. However, whether such metrics can assess face sample quality has rarely been considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality in accordance with system performance. We also analyze the strengths and weaknesses of the different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM on a face database can improve its performance. In addition, the contribution of this paper can be used to evaluate IQMs on other biometric modalities and to develop multimodal biometric IQMs.
3D object shapes (represented by meshes) include areas that attract the visual attention of human observers as well as areas that are less attractive or not attractive at all. This visual attention depends on the degree of saliency exhibited by these areas. In this paper, we propose a technique for detecting salient regions in meshes. To do so, we define a local surface descriptor based on local patches of adaptive size, filled with a local height field. The saliency of a mesh vertex is then defined as its degree measure, with edge weights computed from adaptive patch similarities. Our approach is compared to the state of the art and presents competitive results. A study evaluating the influence of the parameters underlying this approach is also carried out, and the robustness and stability of our approach with respect to noise and simplification are studied.
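As an illustration, the degree-measure idea can be sketched as follows, assuming the per-vertex patch descriptors have already been computed; the Gaussian similarity weight and its bandwidth `sigma` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def vertex_saliency(descriptors, adjacency, sigma=1.0):
    """Saliency of each vertex as its weighted degree, with edge weights
    derived from the similarity of local patch descriptors.
    descriptors: (n, d) array of per-vertex patch descriptors;
    adjacency: list of neighbor index lists (one list per vertex).
    The Gaussian weighting is an illustrative choice."""
    n = len(adjacency)
    saliency = np.zeros(n)
    for i, neigh in enumerate(adjacency):
        if not neigh:
            continue
        diffs = descriptors[neigh] - descriptors[i]
        # Similarity weight of each incident edge, then the degree measure.
        w = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * sigma ** 2))
        saliency[i] = w.sum()
    return saliency
```

In this toy form, a vertex whose descriptor matches its neighbors accumulates large weights; the actual saliency definition and patch similarity of the paper are more elaborate.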
We address the selection of fingerprint minutiae given a fingerprint ISO template. Minutiae selection plays a very important role when a secure element (e.g., a smart card) is used: because of its limited computation and memory capabilities, the number of minutiae of a reference stored in the secure element is limited. We propose in this paper a comparative study of six minutiae selection methods, including two methods from the literature and one baseline (no selection). Experimental results on three fingerprint databases from the Fingerprint Verification Competition show their relative efficiency in terms of recognition performance and computation time.
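For illustration, a minimal sketch of the selection setting, assuming each minutia carries a per-minutia score; the truncation baseline and the score-based strategy below are hypothetical stand-ins for the compared methods, not the paper's actual algorithms:

```python
def select_minutiae(minutiae, budget, key=None):
    """Keep at most `budget` minutiae for a resource-limited secure element.
    Each minutia is a dict such as {'x': .., 'y': .., 'angle': .., 'quality': ..}.
    key=None reproduces a 'no selection' baseline (plain truncation);
    passing a scoring function gives one simple score-based strategy.
    Both strategies are illustrative."""
    if key is None:
        return minutiae[:budget]
    # Keep the highest-scoring minutiae under the storage budget.
    return sorted(minutiae, key=key, reverse=True)[:budget]
```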
No-reference image quality metrics are of fundamental interest as they can be embedded in practical applications.
The main goal of this paper is to perform a comparative study of seven well-known no-reference learning-based
image quality algorithms. To test the performance of these algorithms, three public databases are used. As a
first step, the trial algorithms are compared when no new learning is performed. The second step investigates
how the training set influences the results. The Spearman Rank Ordered Correlation Coefficient (SROCC) is
utilized to measure and compare the performance. In addition, a hypothesis test is conducted to evaluate the
statistical significance of the performance of each tested algorithm.
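The SROCC itself is straightforward to compute: rank both score lists and take the Pearson correlation of the ranks. A minimal sketch (tie handling omitted for brevity):

```python
import numpy as np

def srocc(predicted, subjective):
    """Spearman Rank Ordered Correlation Coefficient between a metric's
    predicted scores and subjective ratings: the Pearson correlation
    of the two rank vectors. Ties are ignored for simplicity."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v), dtype=float)
        return r
    rp = ranks(np.asarray(predicted, dtype=float))
    rs = ranks(np.asarray(subjective, dtype=float))
    rp -= rp.mean(); rs -= rs.mean()
    return float((rp @ rs) / np.sqrt((rp @ rp) * (rs @ rs)))
```

A perfectly monotone relationship gives 1.0 (or -1.0 if reversed), regardless of whether the relationship is linear, which is why SROCC is the usual choice for comparing quality metrics against subjective data.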
A crucial step in image compression is the evaluation of its performance, and more precisely the available means
of measuring the final quality of the compressed image. Usually, performance is measured through some measure of the
covariation between subjective ratings of image quality and the degree of compression applied by the
algorithm. Nevertheless, local variations are not well taken into account.
We use the recently introduced Maximum Likelihood Difference Scaling (MLDS) method to quantify suprathreshold
perceptual differences between pairs of images and examine how perceived image quality, estimated
through MLDS, changes as the compression rate is increased. This approach circumvents the limitations inherent
in subjective rating methods.
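A minimal sketch of the MLDS fitting step, assuming the standard quadruple design: each trial shows two pairs of images, the observer reports which pair appears more different, and the perceptual scale values are fit by maximum likelihood under Gaussian decision noise. The normalization psi[0]=0, psi[-1]=1 and the optimizer choice are conventional assumptions, not details given in the abstract:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_mlds(trials, responses, n_levels):
    """Maximum Likelihood Difference Scaling (sketch).
    trials: (m, 4) int array of stimulus indices (a, b, c, d) per trial;
    responses: 1 if the observer judged pair (c, d) more different than (a, b).
    Returns perceptual scale values psi with psi[0]=0 and psi[-1]=1;
    the decision noise sigma is fit jointly."""
    trials = np.asarray(trials)
    responses = np.asarray(responses, dtype=float)

    def nll(params):
        psi = np.concatenate(([0.0], params[:-1], [1.0]))
        sigma = abs(params[-1]) + 1e-6
        a, b, c, d = trials.T
        # Signed difference of perceived differences, plus Gaussian noise.
        delta = (psi[d] - psi[c]) - (psi[b] - psi[a])
        p = np.clip(norm.cdf(delta / sigma), 1e-9, 1 - 1e-9)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    x0 = np.concatenate((np.linspace(0, 1, n_levels)[1:-1], [0.2]))
    res = minimize(nll, x0, method="Nelder-Mead")
    return np.concatenate(([0.0], res.x[:-1], [1.0]))
```

Plotting the fitted psi values against the compression rate gives the suprathreshold quality scale the abstract refers to.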
A crucial step in image compression is the evaluation of its
performance, and more precisely the available way to measure the
final quality of the compressed image. In this paper, a machine
learning expert providing a final class number is designed. The quality measure is based on
a learned classification process designed to respect that of
human observers. Instead of computing a final score, our method
classifies the quality using the quality scale recommended by the
ITU. This quality scale contains 5 ranks ordered from 1 (the worst
quality) to 5 (the best quality). This is
done by constructing a vector containing many visual attributes.
The final feature vector contains more than 40 attributes.
Unfortunately, no study of the interactions between the
used visual attributes has been carried out. A feature selection algorithm
could be of interest, but the selection is highly dependent on the
classifier used afterwards. Therefore, we prefer to perform
dimensionality reduction instead of feature selection. Manifold
Learning methods are used to provide a low-dimensional new
representation from the initial high dimensional feature space.
The classification process is performed on this new low-dimensional
representation of the images. The obtained results are compared to those
obtained without applying the dimensionality reduction process, in order to judge the
efficiency of the method.
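As a sketch of the overall pipeline, with PCA standing in for the manifold-learning step and a nearest-centroid rule standing in for the classifier (both are illustrative substitutes, not the paper's exact choices):

```python
import numpy as np

def reduce_and_classify(features, labels, query, n_dims=3):
    """Project high-dimensional feature vectors (40+ visual attributes
    per image) to a low-dimensional space, then classify a query image
    into one of the 5 ITU quality ranks. PCA stands in for the
    manifold-learning step; nearest class centroid for the classifier."""
    X = np.asarray(features, dtype=float)
    mu = X.mean(axis=0)
    # Linear projection onto the top principal components.
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    W = vt[:n_dims].T
    Z = (X - mu) @ W
    zq = (np.asarray(query, dtype=float) - mu) @ W
    # Assign the query to the nearest class centroid in the reduced space.
    classes = sorted(set(labels))
    lab = np.asarray(labels)
    cents = np.array([Z[lab == c].mean(axis=0) for c in classes])
    return classes[int(np.argmin(np.linalg.norm(cents - zq, axis=1)))]
```

The comparison described above then amounts to running the same classifier once on Z and once on the raw 40+-dimensional vectors.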
A quality metric based on a classification process is introduced. The main idea of the proposed method is to avoid
the error pooling step over many factors (in the frequency and spatial domains) commonly applied to obtain a final quality
score. A classification process assigns each image a final quality
class with respect to the standard quality scale provided by the ITU. Thus, for each degraded color image, a feature
vector is computed that includes several Human Visual System characteristics, such as the contrast masking effect and color
correlation. The selected features are of two kinds: 1)
full-reference features and 2) no-reference features.
In that way, a machine learning expert providing a final class number is designed.
The quality of compressed images can be remarkably improved if the requirements of the Human Visual System (HVS) are followed. To achieve this goal, our strategy is to exploit the human visual masking effect using a subband decomposition. A combination of both intra-channel and inter-channel masking properties is then performed. This scheme exploits the lowpass character of perceptual masking for natural color images in estimating the amount of available masking. The results obtained show an improvement in the quality of reconstructed color images compared to existing compression schemes such as standard JPEG2000.
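A toy illustration of masking-driven quantization in one subband: coefficients sitting in high-activity neighborhoods are assumed to be masked by the HVS and so tolerate a coarser quantization step. The activity estimate and the constants below are illustrative, not the paper's calibrated intra-/inter-channel model:

```python
import numpy as np

def masked_quantize(subband, base_step=8.0, k=0.5):
    """Quantize one subband with a locally adapted step: the step grows
    with a crude masking estimate (3x3 box average of coefficient
    magnitudes), so busy regions are quantized more coarsely.
    All constants are illustrative."""
    mag = np.abs(subband)
    pad = np.pad(mag, 1, mode="edge")
    # 3x3 neighborhood average of magnitudes as the activity/masking estimate.
    act = sum(pad[i:i + mag.shape[0], j:j + mag.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    step = base_step * (1.0 + k * act / (act.mean() + 1e-9))
    return np.round(subband / step) * step
```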
Edges are of fundamental importance in the analysis of images, and of course in the field of image quality. To incorporate edge information as coded by the HVS in a vector quantization scheme, we have developed a classification strategy to separate edge vectors from non-edge vectors. This strategy allows the generation of several sets of codewords of different sizes for each kind of vector. For each of the 'edge' sets, the final size is perceptually tuned. When an image is encoded, its associated edge map is generated, and the appropriate 'edge' set is selected according to the amount of edges present in the image. Quantization with the second set, of non-edge vectors, is then performed so as to respect the required compression rate. Statistical measures and psychophysical experiments have been performed to judge the quality of the reconstructed images.
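The edge/non-edge split can be illustrated with a simple gradient-energy criterion on image blocks; the threshold and the criterion itself are stand-ins for the perceptually tuned classifier of the paper:

```python
import numpy as np

def classify_blocks(image, block=4, threshold=20.0):
    """Split a grayscale image into blocks and separate 'edge' vectors
    from 'non-edge' vectors by total absolute gradient energy, so that
    each class can later be quantized with its own codebook.
    The energy criterion and threshold are illustrative."""
    h, w = image.shape
    edges, flats = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = image[y:y + block, x:x + block].astype(float)
            # Sum of absolute vertical and horizontal differences.
            energy = np.abs(np.diff(b, axis=0)).sum() + np.abs(np.diff(b, axis=1)).sum()
            (edges if energy > threshold else flats).append(b.ravel())
    return edges, flats
```

Each returned list can then feed its own codebook training, with the edge codebook sizes tuned perceptually as described above.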
In the color image compression field, it is well known that the information is statistically redundant. This redundancy is a handicap in terms of dictionary construction time. One way to counterbalance this time-consuming effect is to reduce the redundancy within the original image while keeping the image quality: one can extract a random sample of the initial training set and construct from it a codebook whose quality equals that of the codebook generated from the entire training set. We applied this idea in the context of the color vector quantization (VQ) compression scheme and propose an algorithm to reduce the complexity of the standard LBG technique. We searched for a measure of relevance for each block of the entire training set. Under the assumption that the measure of relevance is an independent random variable, we applied the Kolmogorov statistical test to define the smallest size of a random sample, and then the sample itself. Finally, from the blocks associated with each measure of relevance of the random sample, we run the standard LBG algorithm to construct the codebook. Psychophysical and statistical measures of image quality allow us to find the best measure of relevance, reducing the training set while preserving image quality and decreasing the computational cost.
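A minimal LBG (generalized Lloyd) sketch with optional subsampling of the training blocks; the uniform random sample here is a simple stand-in for the relevance-measure and Kolmogorov-test sample-size procedure described above:

```python
import numpy as np

def lbg_codebook(vectors, n_codewords, iters=20, sample=None, seed=0):
    """Minimal LBG / generalized Lloyd codebook construction, optionally
    run on a random subsample of the training blocks to cut the cost of
    clustering the full, statistically redundant training set.
    Uniform subsampling is an illustrative stand-in for the paper's
    relevance-based sample selection."""
    rng = np.random.default_rng(seed)
    X = np.asarray(vectors, dtype=float)
    if sample is not None and sample < len(X):
        X = X[rng.choice(len(X), sample, replace=False)]
    code = X[rng.choice(len(X), n_codewords, replace=False)]
    for _ in range(iters):
        # Nearest-codeword assignment, then centroid update per cell.
        d = ((X[:, None, :] - code[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for k in range(n_codewords):
            if np.any(assign == k):
                code[k] = X[assign == k].mean(axis=0)
    return code
```

The brute-force distance matrix makes the O(n * K) per-iteration cost explicit, which is exactly the cost that shrinking the training set attacks.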
It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account the correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer in order to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.
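As one concrete example of the perceptually motivated CIE spaces involved, a standard sRGB-to-CIELAB conversion (D65 white point); the constants are the usual sRGB and CIE definitions, and this is an illustration rather than the paper's exact experimental pipeline:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB triple with components in [0, 1] to CIELAB
    (D65 white point), a non-linear CIE space in which Euclidean
    distances track perceived color differences better than in RGB."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear RGB.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])   # normalize by the D65 white
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116.0 * f[1] - 16.0
    return np.array([L, 500.0 * (f[0] - f[1]), 200.0 * (f[1] - f[2])])
```

Running VQ on such a space, rather than on raw RGB planes, is one way to bias the codebook toward minimum perceptual distortion.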