We adopt genetic programming (GP) to define a measure that can predict the perceived complexity of texture images. We perform psychophysical experiments on three different datasets to collect data on perceived complexity. The subjective data are used for training, validation, and testing of the proposed measure. These data are also used to evaluate several candidate measures of texture complexity related to both low-level and high-level image features. We select four of them (namely roughness, number of regions, chroma variance, and memorability) to be combined in a GP framework. This approach allows a nonlinear combination of the measures and can give hints on how the related image features interact in complexity perception. The proposed complexity measure MGP exhibits Pearson correlation coefficients of 0.890 on the training set, 0.728 on the validation set, and 0.724 on the test set. MGP outperforms each of the single measures considered. From the statistical analysis of different GP candidate solutions, we found that the roughness measure evaluated on the gray-level image is the most dominant, followed by memorability, the number of regions, and finally chroma variance.
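The evaluation criterion above is the Pearson correlation between a measure's outputs and the subjective complexity scores. A minimal sketch of that scoring step (the data and variable names below are illustrative, not taken from the paper):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical subjective complexity ratings and a candidate measure's outputs
subjective = [1.2, 3.4, 2.0, 4.5, 3.1]
candidate = [1.0, 3.0, 2.2, 4.8, 2.9]
score = pearson(subjective, candidate)
```

A GP framework would evolve nonlinear combinations of the four measures and keep the candidates that maximize this correlation on the training set.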
The aim of this work is to detect the events in video sequences that are salient with respect to the audio signal.
In particular, we focus on the audio analysis of a video, with the goal of finding which features are significant
for detecting audio-salient events. In our work we have extracted the audio tracks from videos of different sport
events. For each video, we have manually labeled the salient audio events using binary markings. On each
frame, features in both time and frequency domains have been considered. These features have been used to
train different classifiers: Classification and Regression Trees, Support Vector Machine, and k-Nearest Neighbor.
Classification performance is reported in terms of confusion matrices.
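A per-class confusion matrix over frame labels can be built in a few lines; the sketch below uses illustrative labels rather than the actual dataset:

```python
from collections import Counter

def confusion_matrix(true_labels, pred_labels, classes):
    """Rows index the true class, columns the predicted class."""
    counts = Counter(zip(true_labels, pred_labels))
    return [[counts[(t, p)] for p in classes] for t in classes]

# Illustrative per-frame labels (salient vs. non-salient audio frames)
truth = ["salient", "quiet", "salient", "quiet"]
preds = ["salient", "quiet", "quiet", "quiet"]
cm = confusion_matrix(truth, preds, ["salient", "quiet"])
```

The same matrix layout applies to each of the trained classifiers (CART, SVM, k-NN), one matrix per classifier.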
The aim of this work is to study the image quality of both singly and multiply distorted images. We address images corrupted by Gaussian noise or JPEG compression as single-distortion cases, and images corrupted by Gaussian noise and then JPEG compressed as the multiple-distortion case. Subjective studies were conducted in two parts to obtain human judgments on the singly and multiply distorted images. We study how these subjective data correlate with state-of-the-art No Reference quality metrics. We also investigate how to properly combine No Reference metrics to achieve better performance. Results are analyzed and compared in terms of correlation coefficients.
Searching and retrieving huge archives of multimedia data is a challenging task. A classification step is often used to
reduce the number of entries on which to perform the subsequent search. In particular, when new entries of the database
are continuously added, a fast classification based on simple threshold evaluation is desirable.
In this work we present a CART-based (Classification And Regression Tree) classification framework for audio
streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History
(AESS), which is mainly composed of popular songs and other audio records describing popular traditions
handed down from generation to generation, such as traditional fairs and customs.
The peculiarities of this database are that it is continuously updated; the audio recordings are acquired in unconstrained
environments; and it is difficult for non-expert users to create the ground-truth labels.
In our experiments, half of all the available audio files have been randomly extracted and used as training set. The
remaining ones have been used as test set. The classifier has been trained to distinguish among three different classes:
speech, music, and song. All the audio files in the dataset have previously been manually labeled into the three
classes defined above by domain experts.
The aim of our research is to specify experimentally and further model spatial frequency
response functions, which
quantify human sensitivity to spatial information in real complex images. Three visual response
functions are measured: the isolated Contrast Sensitivity Function (iCSF), which describes the
ability of the visual system to detect any spatial signal in a given spatial frequency
octave in isolation, the contextual Contrast Sensitivity Function (cCSF), which describes the
ability of the visual system to detect a spatial signal in a given octave in an image, and the
contextual Visual Perception Function (VPF), which describes visual sensitivity to changes in
suprathreshold contrast in an image. In this paper we present relevant background, along with
our first attempts to derive experimentally and further model the VPF and CSFs. We examine
the contrast detection and discrimination frameworks developed by Barten, which we find provide
a sound starting position for our own modeling purposes. Progress is presented
in the following areas: verification of the chosen model for detection and discrimination;
choice of contrast metrics for defining contrast sensitivity; apparatus, laboratory set-up
and imaging system characterization; stimuli acquisition and stimuli variations; spatial
decomposition; methodology for subjective tests. Initial iCSFs are presented and compared with
findings that have used simple visual stimuli, as well as with more recent relevant work in the field.
We address the problem of image quality assessment for natural images, focusing on No Reference (NR) assessment
methods for sharpness. The metrics proposed in the literature are based on edge-pixel measures that
suffer significantly from the presence of noise. In this work we present an automatic method that selects edge segments,
making it possible to evaluate sharpness on more reliable data. To reduce the influence of noise, we also propose a
new sharpness metric for natural images.
In this work we present an automatic local color transfer method based on semantic image annotation. With
this annotation, images are segmented into homogeneous regions assigned to seven different classes (sky, vegetation,
snow, water, ground, street, and sand). Our method makes it possible to automatically transfer the color distribution
between regions of the source and target images annotated with the same class (for example, the class "sky"). The
amount of color transfer can be controlled by tuning a single parameter. Experimental results show that
our local color transfer is usually more visually pleasing than a global approach.
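A common way to realize such region-to-region transfer is to match per-channel mean and standard deviation (Reinhard-style statistics matching). The sketch below illustrates that idea with a single strength parameter; it is an assumption-laden stand-in, not the paper's exact method:

```python
import numpy as np

def transfer_region(src, tgt, strength=1.0):
    """Shift the target region's per-channel statistics toward the source
    region's; strength in [0, 1] controls the amount of transfer."""
    src, tgt = np.asarray(src, float), np.asarray(tgt, float)
    out = tgt.copy()
    for c in range(tgt.shape[-1]):
        mu_t, sd_t = tgt[..., c].mean(), tgt[..., c].std() + 1e-8
        mu_s, sd_s = src[..., c].mean(), src[..., c].std() + 1e-8
        matched = (tgt[..., c] - mu_t) * (sd_s / sd_t) + mu_s
        out[..., c] = (1 - strength) * tgt[..., c] + strength * matched
    return out
```

With `strength=0` the target region is untouched; with `strength=1` its mean and spread match the source region's.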
We propose a bio-inspired framework for automatic image quality enhancement. Restoration algorithms usually
have fixed parameters whose values are not easy to set and depend on image content. In this study, we
show that it is possible to correlate no-reference visual quality values with specific parameter settings such that
the quality of an image can be effectively enhanced through the restoration algorithm. We chose
JPEG blockiness distortion as a case study. As the restoration algorithm, we used either a bilateral filter or
a total variation denoising detexturer. Experimental results on the LIVE database are reported. These
results demonstrate that a better visual quality is achieved with the optimized parameters over the entire
range of compression, with respect to the algorithm's default parameters.
In the present article we focus on enhancing the contrast of images with low illumination that present large
underexposed regions. For these particular images, standard contrast enhancement techniques
also over-enhance the noise within the darker regions. Even though both the contrast enhancement and
denoising problems have been widely addressed in the literature, these two processing steps are, in general,
considered independently in the processing pipeline. The goal of this work is to integrate contrast enhancement
and denoising algorithms to properly enhance this type of image. The method has been applied
to a dedicated database of underexposed images. Results are qualitatively compared before and after
applying the proposed algorithm.
In this work we propose an image quality assessment tool. The tool is composed of different modules that
implement several No Reference (NR) metrics (i.e. where the original or ideal image is not available). Different
types of image quality attributes can be taken into account by the NR methods, such as blurriness, graininess,
blockiness, lack of contrast, and lack of saturation or colorfulness, among others. Our tool aims to give a structured
view of a collection of objective metrics that are available for the different distortions within an integrated
framework. As each metric corresponds to a single module, our tool can be easily extended to include new
metrics or to substitute some of them. The software permits the metrics to be applied not only globally but also
locally to different regions of interest of the image.
A method for contrast enhancement is proposed. The algorithm is based on a local, image-dependent exponential correction. The technique aims to correct images that simultaneously present overexposed and underexposed regions. To prevent halo artifacts, a bilateral filter is used as the mask of the exponential correction. Depending on the characteristics of the image (guided by histogram analysis), an automated parameter-tuning step is introduced, followed by stretching, clipping, and saturation-preserving treatments. Comparisons with other contrast enhancement techniques are presented. A Mean Opinion Score (MOS) experiment on grayscale images gives the highest preference score to our algorithm.
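The idea of a mask-driven exponential correction can be illustrated with a short sketch. Note two loud assumptions: a plain box blur stands in for the edge-preserving bilateral-filter mask, and the mapping from mask value to exponent is illustrative, not the paper's tuned one:

```python
import numpy as np

def local_exponential_correction(img, alpha=2.0, radius=4):
    """Per-pixel exponential (gamma-like) correction driven by a local
    luminance mask. A box blur stands in for the bilateral mask here."""
    img = np.asarray(img, float) / 255.0
    pad = np.pad(img, radius, mode="edge")
    # box-blur mask (stand-in for the edge-preserving bilateral mask)
    mask = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            mask += pad[radius + dy:radius + dy + img.shape[0],
                        radius + dx:radius + dx + img.shape[1]]
    mask /= (2 * radius + 1) ** 2
    # dark neighborhoods (mask < 0.5) get an exponent < 1 (brightening),
    # bright neighborhoods get an exponent > 1 (darkening)
    gamma = alpha ** (2.0 * mask - 1.0)
    return np.clip(255.0 * img ** gamma, 0, 255)
```

Because the exponent follows the local mask rather than a global value, underexposed and overexposed regions of the same image are corrected in opposite directions.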
The present work concerns the development of a no-reference demosaicing quality metric. The demosaicing
operation converts a raw image acquired with a single sensor array, overlaid with a color filter array, into a
full-color image. The most prominent artifact generated by demosaicing algorithms is called zipper. In this work
we propose an algorithm to identify these patterns and measure their visibility in order to estimate the perceived
quality of rendered images. We have conducted extensive subjective experiments, and we have determined the
relationships between subjective scores and the proposed measure to obtain a reliable no-reference metric.
We present different computational strategies for colorimetric characterization of scanners using multidimensional polynomials. The designed strategies allow us to determine the coefficients of an a priori fixed polynomial, taking into account different color error statistics. Moreover, since there is no clear relationship between the polynomial chosen for the characterization and the intrinsic characteristics of the scanner, we show how genetic programming could be used to generate the best polynomial. Experimental results on different devices are reported to confirm the effectiveness of our methods with respect to others in the state of the art.
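Fitting the coefficients of an a-priori-fixed polynomial is essentially a least-squares problem. The sketch below uses a second-order expansion and synthetic data as assumptions; the paper additionally evolves the polynomial form itself with genetic programming, which is not shown:

```python
import numpy as np

def poly_terms(rgb):
    """A fixed second-order polynomial expansion of an RGB triplet."""
    r, g, b = rgb
    return [1.0, r, g, b, r * g, r * b, g * b, r * r, g * g, b * b]

def fit_characterization(device_rgb, target_xyz):
    """Least-squares fit of polynomial coefficients mapping device RGB
    to colorimetric values (one column of coefficients per channel)."""
    A = np.array([poly_terms(p) for p in device_rgb], float)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(target_xyz, float), rcond=None)
    return coeffs

def apply_characterization(coeffs, rgb):
    return np.array(poly_terms(rgb)) @ coeffs
```

Minimizing a plain least-squares residual is only one choice; the paper's strategies weight the fit by different color-error statistics.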
Skin detection is a preliminary step in many applications. We analyze some of the most frequently cited binary skin classifiers based on explicit color cluster definition and present possible strategies to improve their performance. In particular, we demonstrate how this can be accomplished by using genetic algorithms to redefine the cluster boundaries. We also show that the fitness function can be tuned to favor either recall or precision in pixel classification. Some combining strategies are then proposed to further improve the performance of these binary classifiers in terms of recall or precision. Finally, we show that, whatever the method or the strategy employed, the performance can be enhanced by preprocessing the images with a white balance algorithm. All the experiments reported here have been run on a large and heterogeneous image database.
Several algorithms have been proposed in the literature to recover
the illuminant chromaticity of the original scene. These algorithms
work well only when prior assumptions are satisfied, and the
best and the worst algorithms may be different for different scenes.
We investigate the idea of not relying on a single method but instead
consider a consensus decision that takes into account the responses
of several algorithms and adaptively chooses the algorithms
to be combined. We investigate different combining strategies
of state-of-the-art algorithms to improve the results in the
illuminant chromaticity estimation. Single algorithms and combined
ones are evaluated for both synthetic and real image databases
using the angular error between the RGB triplets of the measured
illuminant and the estimated one. Since we are interested in comparing the
performance of the methods over large data sets, experimental results
are also evaluated using the Wilcoxon signed-rank test. Our
experiments confirm that no single state-of-the-art algorithm is
universally best or worst, and show that simple
combining strategies improve the illuminant estimation.
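The angular-error criterion, and the simplest possible combining strategy (averaging the per-algorithm estimates), can be sketched as follows; the combining rule here is illustrative, since the paper evaluates several adaptive strategies:

```python
import numpy as np

def angular_error_deg(est, truth):
    """Angle in degrees between the estimated and measured illuminant
    RGB triplets (scale-invariant, so exposure does not matter)."""
    est, truth = np.asarray(est, float), np.asarray(truth, float)
    cos = np.dot(est, truth) / (np.linalg.norm(est) * np.linalg.norm(truth))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def consensus(estimates):
    """Naive combining strategy: average the per-algorithm estimates."""
    return np.mean(np.asarray(estimates, float), axis=0)
```

Because the error is an angle between directions, multiplying an estimate by any positive scalar leaves it unchanged.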
Low quality images are often corrupted by artifacts and generally need to be heavily processed to become visually pleasing. We present a modified version of unsharp masking that is able to perform image smoothing, while not only preserving but also enhancing the salient details in images. The premise supporting the work is that biological vision and image reproduction share common principles. The key idea is to process the image locally according to topographic maps obtained from a neurodynamical model of visual attention. In this way, the unsharp masking algorithm becomes local and adaptive, enhancing the edges differently according to human perception.
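A locally adaptive unsharp masking step can be sketched with a spatially varying gain. Two assumptions are made here: the saliency map is taken as an arbitrary input (in the paper it comes from a neurodynamical attention model), and a box blur serves as the low-pass component:

```python
import numpy as np

def adaptive_unsharp(img, saliency, base_gain=0.5, extra_gain=1.5):
    """Unsharp masking with a spatially varying gain: salient regions
    (saliency near 1) get stronger edge enhancement than flat ones."""
    img = np.asarray(img, float)
    # 3x3 box blur as the low-pass component
    pad = np.pad(img, 1, mode="edge")
    low = sum(pad[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    gain = base_gain + extra_gain * np.asarray(saliency, float)
    return np.clip(img + gain * (img - low), 0, 255)
```

In uniform areas the high-pass term `img - low` vanishes, so smoothing-sensitive regions are left untouched regardless of the gain.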
In this paper we investigate the relationship between matrixing methods, the number of filters adopted and the
size of the color gamut of a digital camera. The color gamut is estimated using a method based on the inversion of
the processing pipeline of the imaging device. Different matrixing methods are considered, including an original
method developed by the authors. For the selection of a hypothetical fourth filter, three different quality measures
have been implemented. Experimental results are reported and compared.
In this work we consider six methods for automatic white balance available in the literature. The idea investigated
does not rely on a single method, but instead considers a consensus decision that takes into account
the compendium of the responses of all the considered algorithms. Combining strategies are then proposed and
tested both on synthetic and multispectral images, extracted from well known databases. The multispectral
images are processed using a digital camera simulator developed by Stanford University. All the results are
evaluated using the Wilcoxon sign test.
Illuminant estimation plays an important role in many application domains, such as digital still cameras and mobile phones, where the final image quality can be heavily affected by poor compensation of the ambient illumination effects. In this paper we present an algorithm, independent of the acquisition device, for illuminant estimation and compensation directly in the color filter array (CFA) domain of digital still cameras. The proposed algorithm takes into account both chromaticity and intensity information of the image data, and performs the illuminant compensation by a diagonal transform. It works by combining a spatial segmentation process with empirically designed weighting functions aimed at selecting the scene objects that contain more information for the estimation of the light chromaticity. The algorithm has been designed exploiting an experimental framework developed by the authors and has been evaluated on a database of real scene images acquired under different, carefully controlled, illuminant conditions. The results show that a combined multi-domain pixel analysis leads to an improvement in performance compared to single-domain pixel analysis.
The segmentation of skin regions in color images is a preliminary step in several applications. Many different methods for discriminating between skin and non-skin pixels are available in the literature. The simplest, and often applied, methods build what is called an "explicit skin cluster" classifier, which expressly defines the boundaries of the skin cluster in certain color spaces. These binary methods are very popular as they are easy to implement and do not require a training phase. The main difficulty in achieving high skin recognition rates, and producing the smallest possible number of false positive pixels, is that of defining accurate cluster boundaries through simple, often heuristically chosen, decision rules. In this study we apply a genetic algorithm to determine the boundaries of the skin clusters in multiple color spaces. To quantify the performance of these skin detection methods, we use recall and precision scores. A good classifier should provide both high recall and high precision, but generally, as recall increases, precision decreases. Consequently, we adopt a weighted mean of precision and recall as the fitness function of the genetic algorithm. Keeping in mind that different applications may have sharply different requirements, the weighting coefficients can be chosen to favor either high recall or high precision, or to satisfy a reasonable tradeoff between the two, depending on application demands. To train the genetic algorithm (GA) and test the performance of the classifiers applying the GA-suggested boundaries, we use the large and heterogeneous Compaq skin database.
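The weighted-mean fitness described above is straightforward to state in code; the sketch below shows only the fitness evaluation (the GA itself, which searches over cluster-boundary parameters to maximize it, is omitted):

```python
def fitness(tp, fp, fn, w=0.5):
    """Weighted mean of precision and recall used as the GA fitness;
    w near 1 favors precision, w near 0 favors recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return w * precision + (1 - w) * recall
```

Setting `w` per application realizes the tradeoff discussed above: a gesture-tracking system might use a low `w` (high recall), while a content filter might prefer a high `w` (high precision).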
The paper describes an algorithm for the automatic removal of "redeye" from digital photos. First an adaptive color cast removal algorithm is applied to correct the color photo. This phase not only facilitates the subsequent steps of processing, but also improves the overall appearance of the output image. A skin detector, based mainly on analysis of the chromatic distribution of the image, creates a probability map of skin-like regions. A multi-resolution neural
network approach is then exploited to create an analogous probability map of candidate faces. These two distributions are then combined to identify the most probable facial regions in the image. Redeye is searched for within these regions, seeking areas with high "redness" and applying geometric constraints to limit the number of false hits. The redeye removal algorithm is then applied automatically to the red eyes identified. Candidate areas are suitably smoothed to
avoid unnatural transitions between the corrected and original parts of the eyes. Experimental results of applying this procedure to a set of over 300 images are presented.
The paper describes an adaptive and tunable color cast removal algorithm. This multi-step algorithm first quantifies the strength of the cast by applying a color cast detector, which classifies the input images as having no cast, evident cast, ambiguous cast (images with a low cast, or for which the existence of the cast is a subjective opinion), or intrinsic cast (images presenting a cast that is probably due to a predominant color we want to preserve, such as in underwater images). The cast remover, a modified version of the white balance algorithm, is then applied in the two cases of evident or ambiguous cast. The method we propose has been tuned and tested, with positive results, on a large data set of images downloaded from personal web pages or acquired by various digital cameras.
The paper describes a method for detecting a color cast (i.e. a superimposed dominant color) in a digital image without any a priori knowledge of its semantic content. The color gamut of the image is first mapped into the CIELab color space. The color distributions of the whole image and of the so-called Near Neutral Objects (NNO) are then investigated using statistical tools to determine the presence of a cast. The boundaries of the near neutral objects in the color space are set adaptively by the algorithm on the basis of a preliminary analysis of the image color gamut. The method we propose has been tuned and successfully tested on a large data set of images downloaded from personal web pages or acquired using various digital and traditional cameras.
A ligand and three metallo-organic complexes containing Nd3+, Tb3+ and Er3+ ions were synthesized. Absorption and linear dichroism spectroscopy, investigations of domain structures, and X-ray diffractometry measurements during heating-cooling cycles were performed. The influence of the rare earth metals on the liquid crystal (LC) thermodynamic properties is discussed.
Three ligands with different numbers of carbon atoms in the alkoxy chain, and their Cu(II) and Ni(II) metallomesogens, were synthesized. Absorption and linear dichroism spectroscopy measurements in the visible region, as well as X-ray diffractometry measurements during heating-cooling cycles, were performed. The influence of the organic groups and of the Cu(II) and Ni(II) ions on the thermodynamic properties is discussed.
A possible method for WIMPs detection using liquid xenon scintillation is discussed. Background from cosmic and radioactive gamma rays at energies down to the keV region can be easily rejected by requiring the presence of proportional scintillation. The results from a basic test are presented and a prototype detector design is proposed.
Recent operation results of a three-ton liquid argon time projection chamber for the ICARUS project are reported. This electronic, continuously sensitive, self-triggering "bubble chamber" is capable of providing 3D imaging of any ionizing event, together with a good calorimetric response.