High-end PC monitors and TVs continue to increase their native display resolution to 4k by 2k and beyond.
Consequently, uncompressed pixel amplitude processing becomes costly, not only when transmitting over cable or wireless communication channels but also when processing with array processor architectures. We recently presented a
block-based memory compression architecture for text, graphics, and video which we named parametric functional
compression (PFC), enabling multi-dimensional error minimization with context-sensitive control of visually noticeable
artifacts. The underlying architecture was limited to small block sizes of 4x4 pixels. Although well suited for random access, its overall compression ratio ranges between 1.5 and 2.0. To increase the compression ratio as well as image quality,
we propose a new hybrid approach. Within an extended block size we apply two complementary methods using a set of
vectors with orientation and curvature attributes across a 3x3 kernel of pixel positions. The first method searches for
linear interpolation candidate pixels that result in very low interpolation errors using vectorized linear interpolation
(VLI). The second method calculates the local probability of orientation and curvature (POC) to predict and minimize
PFC coding errors. Detailed performance estimation in comparison with the prior algorithm highlights the effectiveness
of our new approach, identifies its current limitations with regard to high-quality color rendering at lower numbers of bits per pixel, and illustrates remaining visual artifacts.
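As a rough illustration of the VLI idea, the sketch below marks pixels that could be reconstructed by linear interpolation between opposing neighbors of the 3x3 kernel. The opposing-pair test and the `tol` threshold are illustrative assumptions, not the published algorithm:

```python
import numpy as np

# The four opposing neighbor pairs of a 3x3 kernel:
# horizontal, vertical, and the two diagonals.
PAIRS = [((0, -1), (0, 1)), ((-1, 0), (1, 0)),
         ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]

def vli_candidates(block, tol=2.0):
    """Mark pixels reconstructible by linear interpolation.

    For each interior pixel, test the four straight directions
    through the 3x3 kernel; the pixel is a VLI candidate if the
    mean of some opposing neighbor pair matches it within `tol`.
    (A simplified, hypothetical reading of the VLI search.)
    """
    h, w = block.shape
    cand = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            for (dy0, dx0), (dy1, dx1) in PAIRS:
                est = 0.5 * (block[y + dy0, x + dx0] +
                             block[y + dy1, x + dx1])
                if abs(est - block[y, x]) <= tol:
                    cand[y, x] = True
                    break
    return cand
```

In a scheme of this kind, candidate pixels need not be stored explicitly, which is where the additional compression gain would come from.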
High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k by 2k and beyond. Consequently, uncompressed pixel amplitude processing becomes costly, not only when transmitting over cable or wireless communication channels but also when processing with array processor architectures. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content are heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. We present a block-based memory compression architecture for text, graphics, and video enabling multidimensional error minimization with context-sensitive control of visually noticeable artifacts. As a result of analyzing image context locally, the number of operations per pixel can be significantly reduced, especially when implemented on array processor architectures. A comparative analysis based on several competitive solutions highlights the effectiveness of our approach, identifies its current limitations with regard to high-quality color rendering, and illustrates remaining visual artifacts.
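The YCbCr 444-to-420 preprocessing mentioned above is, in essence, a 2x2 decimation of the chroma planes. A minimal sketch, using simple block averaging as the low-pass filter (real implementations choose their filters more carefully):

```python
import numpy as np

def subsample_420(ycbcr):
    """Convert a YCbCr 4:4:4 image of shape (H, W, 3), H and W even,
    to 4:2:0 planes.

    Chroma (Cb, Cr) is low-pass filtered by 2x2 averaging and
    decimated by 2 in both directions; luma (Y) is untouched.
    This is the spatial filtering that blurs small, high-contrast
    colored text, as discussed above.
    """
    y = ycbcr[:, :, 0]
    h, w = y.shape
    # 2x2 block average, then keep one chroma sample per block
    cb = ycbcr[:, :, 1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr = ycbcr[:, :, 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb, cr
```

It is exactly this decimation that halves the chroma resolution and degrades colored text whose contrast lives mainly in the chrominance channels.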
This paper describes an interpolation method that takes edge orientation into account in order to avoid typical interpolation artifacts (jagging, staircase effects, etc.). The method is first based on an edge orientation estimation,
performed in the wavelet domain. The estimation uses the multi-resolution features of wavelets to give an
accurate and non-biased description of the frequency characteristics of the edges, as well as their orientation.
The interpolation is then performed, using the edge orientation estimation, to improve a reference interpolation
(cubic-spline for instance). This improvement is carried out by filtering the edges with a Gaussian kernel along
their direction in order to smooth the contour in the direction parallel to the edge, which avoids disturbing
variations across them (jagging and staircase effects). This technique also keeps the sharpness of the transition
in the direction perpendicular to the contour to avoid blur.
Results are presented on both synthetic and real images, showing the visual impact of the presented method on
the quality of interpolated images. Comparisons are made with the usual cubic-spline interpolation, and with
other edge-directed interpolation techniques to discuss the choices that have been made in our method.
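The core smoothing step, filtering along the estimated edge direction only, can be sketched as follows; the nearest-neighbor sampling and the `sigma`/`radius` parameters are simplifications of the Gaussian filtering described above:

```python
import numpy as np

def smooth_along_direction(img, theta, sigma=1.0, radius=2):
    """Average each pixel of a grayscale image with neighbors sampled
    along direction `theta` using 1-D Gaussian weights.

    Smoothing parallel to an edge suppresses jagging, while the
    transition perpendicular to the edge (and thus its sharpness)
    is left untouched. Nearest-neighbor sampling is used for
    simplicity; a real implementation would interpolate.
    """
    h, w = img.shape
    dy, dx = np.sin(theta), np.cos(theta)
    offs = np.arange(-radius, radius + 1)
    wts = np.exp(-offs**2 / (2.0 * sigma**2))
    wts /= wts.sum()                       # normalized Gaussian taps
    out = np.zeros_like(img, dtype=float)
    for o, wt in zip(offs, wts):
        yy = np.clip(np.round(np.arange(h)[:, None] + o * dy).astype(int),
                     0, h - 1)
        xx = np.clip(np.round(np.arange(w)[None, :] + o * dx).astype(int),
                     0, w - 1)
        out += wt * img[yy, xx]
    return out
```

Filtering a horizontal edge along theta = 0 leaves it unchanged, whereas filtering across it (theta = pi/2) blurs the transition, which is why the direction estimate matters.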
This paper presents a method that detects edge orientations in still images. Edge orientation is crucial information when one wants to optimize the quality of edges after various processing steps. The detection is carried
out in the wavelet domain to take advantage of the multi-resolution features of the wavelet spaces, and locally
adapts the resolution to the characteristics of edges. Our orientation detection method consists of finding the
local direction along which the wavelet coefficients are the most regular. To do so, the image is divided into square blocks of varying size, in which Bresenham lines are drawn to represent different directions. The direction of
the Bresenham line that contains the most regular wavelet coefficients, according to a criterion defined in the
paper, is considered to be the direction of the edge inside the block. The choice of the Bresenham line drawing
algorithm is justified in this paper, and we show that it considerably increases the angle precision compared to other methods, such as the one used for the construction of bandlet bases. An optimal segmentation
is then computed in order to adapt the size of the blocks to the edge localization and to isolate in each block at
most one contour orientation. Examples and applications on image interpolation are shown on real images.
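The direction search over Bresenham lines can be sketched as below. For brevity, the regularity criterion is applied directly to pixel values rather than wavelet coefficients, and the line set is reduced to one center line per angle, so this is only a simplified stand-in for the method described above:

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer raster of the line segment (x0,y0)-(x1,y1),
    returned as (y, x) points (Bresenham's algorithm)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((y0, x0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

def block_orientation(block, n_angles=16):
    """Pick the direction whose center line through the block has the
    most regular samples (smallest total absolute difference)."""
    h, w = block.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 2
    best_angle, best_cost = 0.0, np.inf
    for k in range(n_angles):
        theta = np.pi * k / n_angles
        dy, dx = np.sin(theta), np.cos(theta)
        y0, x0 = int(round(cy - r * dy)), int(round(cx - r * dx))
        y1, x1 = int(round(cy + r * dy)), int(round(cx + r * dx))
        line = [(y, x) for y, x in bresenham(x0, y0, x1, y1)
                if 0 <= y < h and 0 <= x < w]
        vals = np.array([block[p] for p in line])
        cost = np.abs(np.diff(vals)).sum()  # regularity criterion
        if cost < best_cost:
            best_angle, best_cost = theta, cost
    return best_angle
```

A line parallel to an edge crosses no transition, so its samples are maximally regular; that is the property the search exploits.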
To achieve the best image quality, noise and artifacts are generally removed at the cost of a loss of detail, generating a blur effect. To control and quantify the emergence of this blur effect, blur metrics have already been proposed in the literature. By associating the blur effect with edge spreading, these metrics are sensitive not only to the choice of threshold used to classify edges, but also to the presence of noise, which can mislead edge detection.
Based on the observation that humans have difficulty perceiving differences between a blurred image and the same image re-blurred, we propose a new approach that is based not on transient characteristics but on the discrimination between different levels of blur perceptible in the same picture.
Using subjective tests and psychophysical functions, we validate our blur perception theory on a set of pictures that are naturally unsharp or blurred to varying degrees by one- or two-dimensional low-pass filters. These tests show the robustness of the metric and its ability to evaluate not only the blur introduced by restoration processing but also focal blur and motion blur. Requiring no reference and only a low-cost implementation, this new perceptual blur metric is applicable across a wide range of uses, from a simple quality measure to a means of fine-tuning artifact corrections.
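The underlying re-blur principle can be sketched as a no-reference measure: blur the image again and see how much pixel-to-pixel variation disappears. The horizontal box filter and the normalization below are illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def blur_metric(img, ksize=9):
    """No-reference blur estimate of a grayscale image by re-blurring.

    Blur the image with a strong horizontal box filter, then compare
    how much neighboring-pixel variation survives: a sharp image
    loses a lot of variation when re-blurred, while an already
    blurred image loses little. Returns a value in [0, 1];
    higher means blurrier.
    """
    img = img.astype(float)
    kernel = np.ones(ksize) / ksize
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    d_orig = np.abs(np.diff(img, axis=1))
    d_blur = np.abs(np.diff(blurred, axis=1))
    # variation removed by re-blurring, clipped at zero
    lost = np.maximum(0.0, d_orig - d_blur)
    s_orig = d_orig.sum()
    if s_orig == 0:
        return 1.0  # perfectly flat image: nothing left to blur
    return 1.0 - lost.sum() / s_orig
```

Because only the image itself and its re-blurred copy are compared, no edge classification threshold is needed, which is the robustness argument made above.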
Evaluation and optimization, with an ever-increasing variety of material, are becoming more and more time-consuming tasks in video algorithm development. An additional difficulty with moving video is that frame-by-frame perceived performance can differ significantly from real-time perceived performance. This paper proposes a way to handle this difficulty more systematically and objectively than with the usual long tuning procedures. We take the example of interpolation algorithms, where variations of sharpness or contrast look annoying in real time whereas the frame-by-frame performance looks quite acceptable. These variations are analyzed to derive an
objective measure for the real-time annoyance. We show that the root of the problem is that most interpolation algorithms are optimized against intraframe criteria, ignoring that the achievable intrinsic performance may vary from frame to frame. Our method is thus based on interframe optimization taking the measured annoyance into account. The optimization criteria are steered frame by frame depending on the achievable performance of the current interpolation and the achieved performance in previous frames. Our policy can be described as "better to be good all the time than very
good from time to time." The advantage is that it is automatically controlled by the compromise desired for the given application.
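The steering policy can be illustrated by a toy rate limiter on a per-frame enhancement gain. The gain values and the `max_step` bound are hypothetical, but they show how capping interframe variation trades peak single-frame quality for temporal stability:

```python
def steer(achievable_gains, max_step=0.05):
    """Interframe steering sketch.

    Cap the applied enhancement gain so that it never moves more than
    `max_step` per frame toward each frame's achievable value. A frame
    whose achievable gain collapses no longer drags the output down
    abruptly; "better to be good all the time than very good from
    time to time."
    """
    applied = []
    g = achievable_gains[0]          # start at the first frame's target
    for target in achievable_gains:
        delta = target - g
        g += max(-max_step, min(max_step, delta))  # rate-limited step
        applied.append(g)
    return applied
```

For an achievable sequence like [1.0, 0.2, 1.0, 1.0], the applied gain dips only slightly on the bad frame instead of following the full swing, which is exactly the kind of real-time annoyance the measure above penalizes.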
With the advent of digital TV, sophisticated video processing algorithms have been developed to improve the rendering of motion or colors. However, the perceived subjective quality of these new systems sometimes turns out to conflict with the objective, measurable improvement we expect to get. In this presentation, we show examples where algorithms should visually improve the skin tone rendering of decoded pictures under normal conditions but surprisingly fail when the quality of MPEG encoding drops below a just-noticeable threshold. In particular, we demonstrate that simple objective criteria used for the optimization, such as SAD, PSNR, or histograms, sometimes fail, partly because they are defined on a global scale, ignoring local characteristics of the picture content. We then integrate a simple human visual model to measure potential artifacts with regard to spatial and temporal variations of the objects' characteristics. Tuning some of the model's parameters allows us to correlate the perceived objective quality with compression metrics of various encoders. We show the evolution of our reference parameters with respect to the compression ratios. Finally, using the output of the model, we can control the parameters of the skin tone algorithm to reach an improvement in overall system quality.
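The global-versus-local failure of PSNR-like criteria is easy to demonstrate: a severe but spatially confined artifact barely moves the global score while the worst-case blockwise score collapses. A minimal sketch (the block size of 16 is an arbitrary illustrative choice):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Standard peak signal-to-noise ratio in dB."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def local_psnr_min(a, b, block=16):
    """Worst-case blockwise PSNR over non-overlapping blocks.

    A strong artifact confined to one block hardly changes the global
    PSNR, but this worst-case value drops sharply, illustrating why
    globally defined criteria can miss local (e.g. skin-tone)
    artifacts.
    """
    h, w = a.shape[:2]
    vals = [psnr(a[y:y + block, x:x + block], b[y:y + block, x:x + block])
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]
    return min(vals)
```

Reporting the worst block (or a low percentile) instead of the global mean is one simple way to make an objective criterion sensitive to localized degradations.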
As large-scale direct-view TV screens such as LCD flat panels or plasma displays become more and more affordable, consumers not only expect to buy a ‘big screen’ but also to get ‘great picture quality’. To enjoy the big picture, the viewing distance is significantly reduced. Consequently, more artifacts related to digital compression techniques rise above the threshold of visual detectability. The artifact that caught our attention can be noticed within uniform color patches. It presents itself as ‘color blobs’, or color pixel clustering. We analyze the artifact’s color characteristics in the RGB and CIELAB color spaces and underline them by re-synthesizing an artificial color patch. To reduce the visibility of the artifact, we elaborate several linear methods, such as low-pass filtering and additive white Gaussian noise, and verify whether they can correct or mask the visible artifacts. From the large set of nonlinear filtering methods, we analyze the effect of high-frequency dithering and pixel shuffling, also based on the idea that spatial visual masking should dominate signal correction. By applying shuffling, we generate artificial high-frequency components within the uniform color patch. As a result, the artifact’s characteristics change significantly and its visibility is strongly reduced.
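The pixel-shuffling idea can be sketched as a per-block permutation of pixels: every value (and hence each block's mean color and histogram) is preserved exactly, but the low-frequency blob structure becomes high-frequency texture that the eye masks. The block size and seeding below are illustrative choices:

```python
import numpy as np

def shuffle_patch(img, block=4, seed=0):
    """Randomly permute pixel positions inside each small block of a
    color image of shape (H, W, 3), H and W multiples of `block`.

    The permutation keeps every pixel value, so per-block mean color
    and histogram are unchanged; only the spatial arrangement, and
    thus the frequency content, changes.
    """
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = out[y:y + block, x:x + block].reshape(block * block, -1)
            rng.shuffle(tile)  # permute whole pixels (rows) in place
            out[y:y + block, x:x + block] = tile.reshape(block, block, -1)
    return out
```

Within a nominally uniform patch, the visible effect is that slow ‘color blob’ gradients are traded for fine-grained noise-like texture, which is far less objectionable at normal viewing distances.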