Digital images and videos are now used in a wide range of digital devices, and display resolutions have grown well beyond those of previous generations. Image up-scaling algorithms are therefore an important issue, since the resolution of the original input source is limited by the available data bandwidth. Among various up-scaling algorithms, Super-Resolution (SR) image reconstruction can estimate a high-resolution (HR) image from multiple low-resolution (LR) images. Conventional approaches that estimate the HR image with the Least Squares (LS) or Weighted Least Squares (WLS) method cannot reconstruct high-frequency regions effectively when the blur kernel of the unknown system is assumed to be Gaussian. These methods also produce jagging artifacts when too few LR frames are available. The proposed SR algorithm uses an edge-adaptive WLS method that considers local properties to reconstruct high-frequency regions, and it is applied to video sequences with a block-based process to cope with local motion. Moreover, to handle video sequences with complex motion, the algorithm selectively uses only the correct information from reference frames, avoiding errors caused by incorrect information: the useful additional information in a reference frame is determined by comparing the reference frame with the current frame. The experiments demonstrate the superior performance of the proposed algorithm.
This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for assessing image restoration accuracy, and to compare the subjective results with predictions made by several
objective evaluation methods. In total, six super-resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency-domain approach. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the
paired comparison method that involves the simultaneous scaling of two stimuli with respect to image restoration
accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented.
Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods produced results closer to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those
methods. The latest International Commission on Illumination (CIE) standard color difference equation CIEDE2000 was
found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration
accuracy of those SR algorithms.
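The abstract does not specify how the paired-comparison judgments were converted to a scale, but Thurstone Case V scaling is a standard way to turn such data into interval scores. The sketch below uses only the Python standard library; the win matrix is made-up data and the clipping threshold is an arbitrary choice.

```python
from statistics import NormalDist

def thurstone_case_v(wins, n_observers):
    """Interval scale values from a paired-comparison win matrix
    (Thurstone Case V). wins[i][j] = observers preferring i over j.
    Proportions are clipped to avoid infinite z-scores.
    """
    inv = NormalDist().inv_cdf
    m = len(wins)
    z = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            p = min(max(wins[i][j] / n_observers, 0.01), 0.99)
            z[i][j] = inv(p)                # z-score of the preference rate
    scores = [sum(row) / m for row in z]
    base = min(scores)                      # shift so the worst stimulus is 0
    return [s - base for s in scores]

# Hypothetical data: 20 observers comparing three SR results pairwise.
wins = [[0, 15, 18],    # result A preferred over B by 15/20 observers, etc.
        [5, 0, 12],
        [2, 8, 0]]
scores = thurstone_case_v(wins, 20)
```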
A new iris recognition scheme using multispectral iris images, aimed at preventing counterfeit attacks, is proposed. In the proposed system, multispectral infrared iris images are captured in order to utilize the spectral features of a real iris. Rather than making an additional decision on whether the enrolled iris is fake, the multispectral images are fused into a single grayscale image that contains their complementary information using a gradient-based image fusion algorithm, and the iris region of the fused image is applied directly to the recognition procedure. Through the fusion process, images that do not show multispectral variations result in a scrambled image that does not contain the exact features of the original iris. Because of this failure in the fusion process, the fused image of a fake iris does not match the original iris features in the database, and it is simply rejected in the recognition step. Experimental results show that the proposed scheme successfully localizes the iris position of real irises and prevents possible counterfeit attacks while maintaining the performance of the authentication system.
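The abstract does not give the exact gradient-based fusion rule, so the following NumPy sketch is only a simplified stand-in: it keeps, per pixel, the band whose local gradient is strongest, which is one common way gradient information can drive fusion. Function names and the toy data are assumptions.

```python
import numpy as np

def gradient_select_fusion(bands):
    """Fuse spectral bands by keeping, per pixel, the band with the
    strongest local gradient (a simplified stand-in for gradient-based
    fusion; the paper's exact rule is not reproduced here)."""
    stack = np.stack(bands).astype(float)          # shape (k, h, w)
    gy = np.abs(np.diff(stack, axis=1, prepend=stack[:, :1, :]))
    gx = np.abs(np.diff(stack, axis=2, prepend=stack[:, :, :1]))
    strength = gx + gy                             # crude gradient magnitude
    pick = strength.argmax(axis=0)                 # winning band per pixel
    return np.take_along_axis(stack, pick[None], axis=0)[0]

# Toy usage: two 4x4 "bands", each carrying detail the other lacks.
a = np.zeros((4, 4)); a[:, 1] = 10.0               # vertical edge in band a
b = np.zeros((4, 4)); b[1, :] = 7.0                # horizontal edge in band b
fused = gradient_select_fusion([a, b])
```

A fake iris with no multispectral variation would present near-identical bands, so no band contributes complementary detail and the fused result degenerates, which is the failure mode the recognition step then rejects.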
In color television broadcasting standards, such as the National Television System Committee (NTSC) and Phase Alternating Line (PAL) systems, the bandwidths of the chrominance signals are much narrower than that of the luminance signal. In digital video standards as well, the chrominance signals are usually low-pass filtered and subsampled to reduce the amount of data. For these reasons, the chrominance signals have poor transition characteristics, and the slow transitions cause blurred color edges. A color transient improvement algorithm is proposed that exploits the high-frequency information of the luminance signal. In the proposed algorithm, the high-frequency component extracted from the luminance signal is modified by adaptive gains and added to the low-resolution chrominance signals. The gain is estimated to minimize the l2 norm of the error between the original and the estimated pixel values in a local window. The proposed algorithm naturally improves the transient of the chrominance signal to match that of the luminance signal without overshoots and undershoots. The experimental results show that the proposed method produces steep and natural color edge transitions and reconstructs narrow line edges that are not restored by conventional algorithms.
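A minimal 1-D sketch of this idea, under assumed details: the luminance high-frequency component is extracted with a box filter, and in each local window a least-squares gain fits that component to the chroma residual before it is added back. The window size, filter, and function name are illustrative choices, not the paper's.

```python
import numpy as np

def cti_sharpen(chroma_lp, luma, win=5, eps=1e-8):
    """Color transient improvement: add adaptively scaled luminance
    high frequencies to a blurred chroma channel (1-D sketch).

    In each local window, the gain g is the least-squares fit of the
    luminance high-frequency component to the chroma residual, i.e. it
    minimizes the l2 error between observed and estimated chroma.
    """
    kernel = np.ones(win) / win
    luma_hf = luma - np.convolve(luma, kernel, mode="same")
    resid = chroma_lp - np.convolve(chroma_lp, kernel, mode="same")
    out = chroma_lp.astype(float).copy()
    half = win // 2
    n = len(luma)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        h, r = luma_hf[lo:hi], resid[lo:hi]
        g = np.dot(h, r) / (np.dot(h, h) + eps)     # local LS gain
        out[i] += g * luma_hf[i]
    return out

# Toy usage: a blurred chroma step edge aligned with a sharp luma step.
luma = np.repeat([0.0, 1.0], 8)
chroma_lp = np.convolve(0.5 * luma, np.ones(5) / 5, mode="same")
sharp = cti_sharpen(chroma_lp, luma)
```

Because the gain is a least-squares fit rather than a fixed boost, flat regions (where the residual carries no edge evidence) receive almost no correction, which is what keeps overshoot in check.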
A deinterlacing algorithm based on edge-dependent interpolation (EDI) that considers edge patterns is proposed. Generally, EDI algorithms perform visually better than other deinterlacing algorithms that use a single field. However, they produce unpleasant results when they fail to estimate the edge direction. To estimate the edge direction precisely, not only simple differences between two adjacent lines but also edge patterns are used. Edge patterns, which provide sufficient information to estimate the edge direction, appear along the edge direction. Therefore, we analyze the properties of edge patterns and model them as a weight function. The weight function helps the proposed method estimate the edge direction precisely. Experimental results indicate that the proposed algorithm outperforms conventional EDI approaches with respect to both objective and subjective criteria.
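For context, the baseline that such methods improve on can be sketched as basic edge-based line averaging: for each missing pixel, pick the direction with the smallest absolute difference between the lines above and below, then average along it. This omits the paper's edge-pattern weight function entirely; it only shows the "simple differences between two adjacent lines" part.

```python
import numpy as np

def edi_interpolate_line(up, down, max_shift=1):
    """Interpolate a missing interlaced line between two field lines by
    picking, per pixel, the direction with the smallest absolute
    difference (basic ELA; edge-pattern weighting is omitted here)."""
    n = len(up)
    out = np.empty(n, dtype=float)
    for x in range(n):
        best, best_cost = 0, float("inf")
        for d in range(-max_shift, max_shift + 1):
            if 0 <= x + d < n and 0 <= x - d < n:
                cost = abs(up[x + d] - down[x - d])
                if cost < best_cost:
                    best, best_cost = d, cost
        # Average along the chosen direction.
        out[x] = 0.5 * (up[x + best] + down[x - best])
    return out

# Toy usage: a diagonal edge that plain vertical averaging would blur.
up   = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
down = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0])
line = edi_interpolate_line(up, down)
```

On this example the directional choice reconstructs a clean intermediate edge, whereas vertical averaging would produce 0.5-valued pixels across the transition; the failure cases the abstract mentions arise when the minimal-difference direction does not coincide with the true edge.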
In this paper, we propose two new spatially adaptive image fusion algorithms, based on a Bayesian approach, for merging remotely sensed panchromatic and multi-spectral images. The two complementary images are modeled as correlated two-dimensional stochastic signals, and the high-resolution multi-spectral image is estimated by minimizing the mean squared error between the original high-resolution image and the estimated image. We assume that the estimator is locally linear and obtain the local linear minimum mean square error (MMSE) estimator for image fusion. Two MMSE image fusion algorithms are derived under different assumptions about the images. If we assume that pixels in the images are uncorrelated with their neighbors, the estimator becomes a point processor controlled by an adaptive gain expressed as the ratio of the local cross-covariance between the two images to the local variance of the panchromatic image. On the other hand, if we assume that pixels in a small block are stationary and correlated with one another, the estimator uses the locally stationary cross-covariance matrix between the two images and the auto-covariance matrix of the panchromatic image. For the second algorithm, we take a Fast Fourier Transform (FFT) based approach in order to avoid complex matrix computations and achieve a fast algorithm. Experimental results show that the proposed algorithms are superior to conventional algorithms according to visual and quantitative comparisons.
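The point-processor form of the first algorithm can be sketched directly from the description above: per pixel, the gain is the local cross-covariance between the two images divided by the local variance of the panchromatic image. Window size, boundary handling, and the function name are illustrative assumptions.

```python
import numpy as np

def local_mmse_fuse(pan, ms, win=3):
    """Spatially adaptive point-processor fusion: the multi-spectral band
    is corrected by panchromatic detail, scaled per pixel by the ratio of
    local cross-covariance to the local panchromatic variance."""
    h, w = pan.shape
    r = win // 2
    out = np.empty_like(ms, dtype=float)
    for i in range(h):
        for j in range(w):
            sl = (slice(max(0, i - r), min(h, i + r + 1)),
                  slice(max(0, j - r), min(w, j + r + 1)))
            p, m = pan[sl], ms[sl]
            pm, mm = p.mean(), m.mean()
            var_p = ((p - pm) ** 2).mean()
            cov = ((p - pm) * (m - mm)).mean()   # local cross-covariance
            gain = cov / var_p if var_p > 1e-12 else 0.0
            out[i, j] = mm + gain * (pan[i, j] - pm)
    return out

# Toy usage: when the band is an affine function of the panchromatic
# image, the adaptive gain recovers it exactly.
pan = np.arange(25, dtype=float).reshape(5, 5)
ms = 0.5 * pan + 1.0
fused = local_mmse_fuse(pan, ms)
```

The block-stationary second algorithm replaces these scalar statistics with cross- and auto-covariance matrices, which is what motivates the FFT shortcut mentioned above.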