Although single-image resolution enhancement, otherwise known as super-resolution (SR), is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists, in the LR version after the fundamental averaging-and-subsampling process. Accordingly, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as a pre-process and post-process, respectively, around an interpolation algorithm, to partially restore the attenuated information in an SR-enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods: bilinear, bicubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR content. Finally, the idea behind MLF and ICP can also be applied to average-subsampled one-dimensional signals.
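The averaging-and-subsampling acquisition model underlying this analysis can be illustrated with a minimal sketch (a 2x2 block-average model is assumed here for illustration; this is not the paper's implementation):

```python
import numpy as np

def average_subsample(hr):
    """Produce an LR image by 2x2 block averaging: the acquisition
    model under which HR detail is attenuated but not erased."""
    h, w = hr.shape
    hr = hr[:h - h % 2, :w - w % 2]
    return hr.reshape(hr.shape[0] // 2, 2, hr.shape[1] // 2, 2).mean(axis=(1, 3))

# A single HR impulse survives in the LR image at 1/4 amplitude --
# attenuated, not removed, which is the premise for restoring it.
hr = np.zeros((4, 4))
hr[1, 1] = 1.0
lr = average_subsample(hr)
```

The impulse example makes the paper's central observation concrete: the LR pixel covering the impulse carries a reduced but nonzero trace of the HR detail, so a suitable pre-/post-process around interpolation has something to recover.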
An image authentication and tampering localization technique based on a wavelet-based digital watermarking procedure [Opt. Express 3(12), 491-496 (1998)] is proposed. To determine whether a given watermarked image has been tampered with, the similarity between the extracted and embedded watermarks is measured. If the similarity is less than a threshold value, the proposed sequential watermark alignment based on coefficient stamping (SWACS) scheme is used to determine the modified wavelet coefficients corresponding to the tampered region. Then, the morphological region growing and subband duplication (MRGSD) scheme is used to include neighboring wavelet coefficients and to duplicate the wavelet coefficients in the other subbands. The experimental results show that the proposed SWACS and MRGSD schemes can efficiently identify different types of image tampering. Moreover, the detection performance of the proposed system for various sizes of the watermark and tampered region is also evaluated.
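The abstract does not specify the similarity measure; normalized correlation between the two watermark sequences is a common choice and is sketched below purely as an illustration, not as the paper's definition:

```python
import numpy as np

def watermark_similarity(extracted, embedded):
    """Normalized correlation between extracted and embedded watermark
    sequences; values near 1 suggest the watermark is intact, while a
    drop below a chosen threshold would trigger tampering localization."""
    e = np.asarray(extracted, dtype=float).ravel()
    w = np.asarray(embedded, dtype=float).ravel()
    return float(np.dot(e, w) / (np.linalg.norm(e) * np.linalg.norm(w)))
```

With a measure of this form, an untampered image yields similarity close to 1, and the threshold comparison described above decides whether the SWACS stage needs to run.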
Four key issues in wavelet zero-tree based image coding are investigated and presented: (1) a fast wavelet transform that saves 1/2 and 3/4 of the processing for 1-D and 2-D signals, respectively; (2) selection of the wavelet filters that yield the best performance (PSNR vs. bit rate) for most commonly seen images; (3) a recommended number of wavelet scales (or frequencies) for image coding, based on experiments and analysis.
This paper investigates the question: what is the minimum number of colors required to represent color images on a computer monitor? We conduct experiments to perform a just noticeable difference (JND) partition along the three axes of the L x y color space, such that colors within the same partition are indistinguishable to human perception. We also propose a color image quality measure based on the LMS cone perception sensitivity. The JND model is applied to design a fixed color palette, and its performance is evaluated.
Two major issues in image coding are the effective incorporation of human visual system (HVS) properties and an effective objective quality measure (OQM) for evaluating image quality. In this paper, we treat the two issues in an integrated fashion. We build a model based on measurements of the just noticeable difference (JND) property of the HVS. We found that the JND depends not only on the background intensity but is also a function of both spatial frequency and pattern direction. The wavelet transform, owing to its excellent simultaneous time (space)/frequency resolution, is the best choice for applying the JND model. We mathematically derive an OQM called JND_PSNR that is based on the JND property and the wavelet-decomposed subbands. JND_PSNR is more consistent with human perception and is recommended as an alternative to the PSNR or SNR. With JND_PSNR in mind, we proceed to propose a wavelet- and JND-based codec called JZW. JZW quantizes the coefficients in each subband with a step size chosen according to the subband's importance to human perception. Many characteristics of JZW are discussed, and its performance is evaluated and compared with well-known algorithms such as EZW, SPIHT, and TCCVQ. Our algorithm gains 1-1.5 dB over SPIHT even when we use simple Huffman coding rather than the more efficient adaptive arithmetic coding.
The PNN algorithm is excellent for obtaining an initial codebook in VQ design. However, the drawback of PNN is its computational complexity, especially when the training set is large. In this paper, we explore the characteristics of the PNN algorithm and propose a fast PNN algorithm using memory, which reduces the computational complexity from O(L^3) to O(L^2).
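A simplified sketch of the memory idea follows: cache the pairwise merge costs and, after each merge, recompute only the entries involving the merged cluster rather than rebuilding the whole cost matrix. (The bookkeeping that achieves the full O(L^2) bound in the paper is more involved than this illustration.)

```python
import numpy as np

def fast_pnn(vectors, k):
    """PNN codebook design with a cached merge-cost table: each
    iteration merges the cheapest pair, then updates only the costs
    that involve the merged cluster."""
    cents = [np.asarray(v, float) for v in vectors]
    sizes = [1] * len(cents)
    alive = list(range(len(cents)))

    def cost(i, j):
        # standard PNN merge cost: weighted squared centroid distance
        d = cents[i] - cents[j]
        return sizes[i] * sizes[j] / (sizes[i] + sizes[j]) * float(d @ d)

    # cached pairwise merge costs (the "memory")
    C = {(i, j): cost(i, j) for ii, i in enumerate(alive) for j in alive[ii + 1:]}

    while len(alive) > k:
        a, b = min(C, key=C.get)                       # cheapest merge
        na, nb = sizes[a], sizes[b]
        cents[a] = (na * cents[a] + nb * cents[b]) / (na + nb)
        sizes[a] = na + nb
        alive.remove(b)
        # refresh only entries touching the merged cluster
        C = {p: c for p, c in C.items() if a not in p and b not in p}
        for j in alive:
            if j != a:
                C[(min(a, j), max(a, j))] = cost(a, j)
    return np.array([cents[i] for i in alive])
```

The point of the cache is that a merge invalidates only O(L) of the O(L^2) stored costs, so those costs need not be recomputed from scratch at every iteration.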
In this paper, we directly measure the gray-level JND (just noticeable difference) property of the human visual system under various viewing conditions. We then develop three image processing tasks using the measured JND data. First, a JND-based image segmentation algorithm for coding purposes is proposed. The algorithm operates on a pyramid data structure and uses the JND property as the merge criterion; it is computationally simple yet proven effective and robust in segmenting various images such as Lena and Salesman. Second, the blocky artifacts normally seen in segmented images can be reduced by encoding the difference image between the original image and its segmented version. With slight modifications, the JND-based segmentation algorithm can effectively segment the difference image for the proposed two-pass progressive image coding. Finally, the measured JND data show that 55 gray levels per pixel are sufficient to represent an image under normal viewing conditions and that 64 gray levels are sufficient under any viewing condition. An image requantization algorithm is then proposed and its effectiveness verified.
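A minimal sketch of requantization to 64 gray levels (the paper's algorithm places the levels according to the measured JND data; uniform spacing is assumed here purely for illustration):

```python
import numpy as np

def requantize(img, levels=64):
    """Requantize an 8-bit image to `levels` gray levels using uniform
    bins, mapping each bin to its center. (A JND-based design would
    instead space the levels non-uniformly by visibility.)"""
    img = np.asarray(img, dtype=np.uint8)
    step = 256 / levels
    return (np.floor(img / step) * step + step / 2).astype(np.uint8)
```

With 64 levels the uniform step is 4 gray values, so the per-pixel error never exceeds 2, which the measured JND data suggest is below the visibility threshold in any viewing condition.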
Image transmission via packet-switched networks has a significant impact on encoded image data. To develop an efficient image codec for packet video, the goals of image coding are redefined and formulated as an optimization problem. Guided by these goals, a set of design requirements is derived and a new segmentation-based coding technique is developed. This approach features region-based motion estimation, region-based residual coding, and region-based single-frame coding. The performance of the proposed algorithm is evaluated, and a packet loss compensation algorithm is presented. As a result, good image quality at very low bit rates can be achieved.
The design of an image coder for packet-switched transmission is formulated as a minimization problem. A general set of design requirements is derived and used to design a segmentation-based texture coding algorithm. The segmentation process is performed on a pyramid data structure and uses the just noticeable difference (JND) property of the human visual system as the merge criterion. To reduce the bit rate while maintaining image quality, each region is classified as either texture or non-texture. Texture regions are approximated by a one-dimensional polynomial, while non-texture regions are approximated by the region's mean intensity. A set of parameters for bit-rate/image-quality tuning is identified, and their effects are evaluated on the LENA and HOUSE images.