Color interpolation solutions strongly influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high-quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensor data, such as noise and the green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that decides which interpolation technique to apply to each pixel according to an analysis of its neighborhood. Edges are effectively interpolated through a directional filtering approach that reconstructs the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Flat regions, finally, are identified and low-pass filtered to eliminate some residual noise and to minimize the annoying green imbalance effect. An effective false color removal algorithm is then used as a postprocessing step to eliminate residual color errors. The experimental results show how sharp edges are preserved while undesired zipper effects are reduced, improving edge resolution and yielding superior image quality.
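The directional step can be illustrated with a minimal sketch of edge-adaptive green interpolation on a Bayer pattern; the function name and the simple two-tap averages are illustrative stand-ins for the paper's actual filter bank:

```python
import numpy as np

def interpolate_green_at(cfa, r, c):
    """Estimate the missing green value at a red/blue Bayer site (r, c)
    by choosing an interpolation direction from local gradients.
    Simplified illustration, not the paper's exact filters."""
    # horizontal and vertical gradients over the green neighbours
    grad_h = abs(cfa[r, c - 1] - cfa[r, c + 1])
    grad_v = abs(cfa[r - 1, c] - cfa[r + 1, c])
    if grad_h < grad_v:      # edge runs horizontally: average along the row
        return (cfa[r, c - 1] + cfa[r, c + 1]) / 2.0
    elif grad_v < grad_h:    # edge runs vertically: average along the column
        return (cfa[r - 1, c] + cfa[r + 1, c]) / 2.0
    # flat or ambiguous region: average all four green neighbours
    return (cfa[r, c - 1] + cfa[r, c + 1] + cfa[r - 1, c] + cfa[r + 1, c]) / 4.0
```

Choosing the direction with the weaker gradient keeps the average running along the edge rather than across it, which is what suppresses zipper artifacts.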
The quality of the images obtained by digital cameras has improved considerably since their early days.
Unfortunately, it is not unusual in image forensics to find wrongly exposed pictures. This is mainly due to obsolete techniques or old technologies, but also to backlight conditions. To bring out otherwise invisible details, a stretching of the image contrast is required. The forensic rules for producing evidence require
a complete documentation of the processing steps, enabling the replication of the entire process. The automation
of enhancement techniques is thus quite difficult and needs to be carefully documented. This work presents
an automatic procedure to find contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a preprocessing step which extracts the features of the image; these features then drive the second step of the approach, which corrects the image. The generated script is Adobe Photoshop compliant (Photoshop being widely used in image forensic analysis), thus permitting the replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.
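As an illustration only, here is a percentile-based stretch of the kind such a procedure could derive; the percentile rule and parameter names are hypothetical, not the paper's actual feature extraction:

```python
import numpy as np

def auto_stretch_params(img, low_pct=1.0, high_pct=99.0):
    """Derive black/white points from the image histogram
    (illustrative percentile rule; names are hypothetical)."""
    lo = np.percentile(img, low_pct)
    hi = np.percentile(img, high_pct)
    return lo, hi

def apply_stretch(img, lo, hi):
    """Linear contrast stretch mapping [lo, hi] onto [0, 255]."""
    out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)
```

The extracted pair (lo, hi) is exactly the kind of setting that can be written into a script (e.g. a Levels-style adjustment) so the enhancement can be replicated step by step, as forensic documentation demands.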
High-level image analysis involves many fields, such as face recognition, smile detection, automatic red-eye removal, iris recognition, and fingerprint verification. The techniques used in these fields need to be supported by powerful and accurate routines. The aim of the proposed algorithm is to detect elliptical shapes in digital input images. It can be successfully applied in tasks such as signal detection or red-eye removal, where assessing the degree of ellipticity can improve performance. The method has been designed to handle low resolution and partial occlusions. The algorithm is
based on contour signature analysis and exploits some geometrical properties of elliptical points. The proposed method is structured in two parts: first, the best ellipse approximating the object shape is estimated; then, through the analysis and comparison of the reference ellipse signature and the object signature, the algorithm establishes whether the object is elliptical or not. The first part is based on symmetry properties of the points belonging to the ellipse, while the second part is based on the signature operator, which is a functional representation of a contour. A set of real images has been tested, and the results confirm the effectiveness of the algorithm in terms of both accuracy and execution time.
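The signature operator can be sketched as the centre-to-contour distance as a function of the polar angle; the comparison below is a simplified, axis-aligned version of the check, with illustrative names and tolerance:

```python
import numpy as np

def ellipse_signature(a, b, thetas):
    """Centre-to-contour distance of an axis-aligned ellipse with
    semi-axes a and b, sampled at the given polar angles."""
    return a * b / np.sqrt((b * np.cos(thetas)) ** 2 + (a * np.sin(thetas)) ** 2)

def is_elliptical(contour, a, b, tol=0.05):
    """Compare an object's signature with that of the reference ellipse.
    `contour` is an (N, 2) array of boundary points; a, b are the
    semi-axes of the best-fit ellipse estimated beforehand.
    Simplified: assumes the ellipse is axis-aligned and centred at
    the contour centroid."""
    centre = contour.mean(axis=0)
    d = contour - centre
    thetas = np.arctan2(d[:, 1], d[:, 0])
    sig_obj = np.hypot(d[:, 0], d[:, 1])       # object signature
    sig_ref = ellipse_signature(a, b, thetas)  # reference signature
    # mean relative deviation between the two signatures
    return np.mean(np.abs(sig_obj - sig_ref) / sig_ref) < tol
```

An exact elliptical contour yields a near-zero deviation, while a square boundary compared against its best-fit circle deviates well beyond the tolerance.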
Post-processing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts
introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit or correct false colors and
other impairments caused by non-ideal sampling, post-processing techniques are usually more powerful in achieving
this purpose. This is mainly because the input of post-processing algorithms is a fully restored RGB color image.
Moreover, post-processing can be applied more than once, in order to meet some quality criteria. In this paper we
propose an effective technique for reducing the color artifacts generated by conventional color interpolation algorithms,
in the YCrCb color space. This solution efficiently removes false colors and can be executed while performing the edge enhancement step of the pipeline.
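A minimal sketch of chroma-domain false color suppression, under stated assumptions: the conversion below is the standard BT.601 YCbCr transform used as a stand-in for the chroma space of the paper, and a plain 3x3 median on the chroma planes replaces the paper's more selective filtering:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 full-range conversion
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc[..., 0], ycc[..., 1] - 128.0, ycc[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)

def median3(channel):
    """3x3 median filter (borders left untouched), pure NumPy."""
    out = channel.copy()
    h, w = channel.shape
    # gather the nine shifted copies of the interior and take their median
    stack = np.stack([channel[i:h - 2 + i, j:w - 2 + j]
                      for i in range(3) for j in range(3)])
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out

def remove_false_colors(rgb):
    """Median-filter the chroma planes while leaving luminance untouched,
    so isolated false-color spikes are suppressed without blurring edges."""
    ycc = rgb_to_ycbcr(rgb.astype(np.float64))
    ycc[..., 1] = median3(ycc[..., 1])
    ycc[..., 2] = median3(ycc[..., 2])
    return np.clip(np.rint(ycbcr_to_rgb(ycc)), 0, 255).astype(np.uint8)
```

Operating only on the chroma planes is what makes post-processing on the fully restored RGB image effective: luminance edges survive intact while color spikes are neutralized.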
This paper presents a technique to convert surfaces obtained through a Data Dependent Triangulation into Bézier curves, using the Scalable Vector Graphics (SVG) file format. The method starts from a Data Dependent Triangulation and traces a map of the boundaries present in the triangulation using the characteristics of the triangles; the estimated barycenters are then connected, and a final conversion of the resulting polylines into curves is performed. After the curves have been estimated and closed, the final representation is obtained by sorting the surfaces in decreasing order. The proposed technique has been compared with other raster-to-vector conversions in terms of perceptual quality.
The SVG (Scalable Vector Graphics) standard permits the representation of complex graphical scenes by a collection of vector-based primitives. In this work we are interested in heuristic techniques to bridge the gap between the vector graphics world and the raster world typical of digital photography. The SVG format could find useful application in mobile imaging devices, where typical camera capabilities must be matched with displays of limited color depth and resolution.
Two different techniques have been applied: Data Dependent Triangulation (DDT) and Wavelet Based Triangulation (WBT). The DDT replaces the input image with a set of triangles according to a specific cost function. The overall perceptual error is then minimized by choosing a suitable triangulation. The WBT uses the wavelet multilevel transformation to extract the details of the input image. A triangulation is first built at the lowest level, introducing large triangles; the process is then iteratively refined according to the wavelet transformation. This means increasing the number of small triangles in the textured areas while keeping large triangles in the smooth areas.
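The data-dependent choice at the heart of DDT can be sketched on a regular pixel grid, where each 2x2 quad is split along one of its two diagonals; the single-difference cost below is a deliberately simplified stand-in for the actual DDT cost functions:

```python
import numpy as np

def choose_diagonals(img):
    """For every 2x2 pixel quad, pick the triangulation diagonal with the
    smaller endpoint intensity difference, so triangle edges tend to follow
    image edges. Simplified data-dependent cost, for illustration only."""
    a = img[:-1, :-1]   # top-left corner of each quad
    b = img[:-1, 1:]    # top-right
    c = img[1:, :-1]    # bottom-left
    d = img[1:, 1:]     # bottom-right
    # True -> use the main diagonal (a-d), False -> the anti-diagonal (b-c)
    return np.abs(a - d) <= np.abs(b - c)
```

Aligning diagonals with low-contrast directions is what lets a piecewise-linear triangulation approximate edges without cutting across them.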
Both DDT and WBT outputs are then processed by a polygonalization step. The purpose of this function is to merge triangles together, reducing the amount of redundancy present in the SVG files.
The proposed technique has been compared with other raster-to-vector methods, showing good performance. Experiments can be found on the SVG UniCT Group page: http://svg.dmi.unict.it/.
Reconstruction techniques first build a "draft" high-resolution (HR) image from a set of low-resolution (LR) images, and then update the estimated HR image by back-projection error reduction. This paper presents different HR draft construction techniques and identifies the methods providing the best solution in terms of final perceived/measured quality. The following algorithms have been analysed: a proprietary Resolution Enhancement method (RE-ST); a Locally Adaptive Zooming Algorithm (LAZA); a Smart Interpolation by Anisotropic Diffusion (SIAD); a Directional Adaptive Edge-Interpolation (DAEI); a classical bicubic interpolation; and a nearest-neighbour algorithm. The resulting HR images are obtained by merging the zoomed LR pictures using one of two strategies: average or median. To improve the corresponding HR images, two adaptive error reduction techniques are applied in the last step: auto-iterative and uncertainty-reduction.
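The back-projection error reduction shared by these pipelines can be sketched as follows; the block-average camera model and the nearest-neighbour error redistribution are illustrative simplifications, not the paper's specific operators:

```python
import numpy as np

def downsample(hr, factor):
    """Block-average decimation, standing in for the camera model."""
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(lr, factor):
    """Nearest-neighbour zoom used to redistribute the error."""
    return np.repeat(np.repeat(lr, factor, axis=0), factor, axis=1)

def back_project(hr_draft, lr, factor, iters=10, step=1.0):
    """Iterative back-projection: simulate the LR image from the current HR
    estimate and feed the residual back until it vanishes."""
    hr = hr_draft.astype(np.float64).copy()
    for _ in range(iters):
        err = lr - downsample(hr, factor)   # residual in LR space
        hr += step * upsample(err, factor)  # redistribute it over the HR grid
    return hr
```

Whatever draft construction method is used, the loop drives the simulated LR image toward the observed one, so a better draft mainly determines how the remaining degrees of freedom are filled in.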