Most compression methods for hyperspectral images have been optimized to minimize mean squared errors. However,
this kind of compression method may not retain all discriminant information, which is important if hyperspectral images
are to be used to distinguish among classes. In this paper, we propose a two-stage compression method for hyperspectral
images with encoding residual discriminant information. In the proposed method, we first apply a compression method
to hyperspectral images, producing compressed image data. From the compressed image data, we produce reconstructed
images. Then we generate residual images by subtracting the reconstructed images from the original images. We also
apply a feature extraction method to the original images, which produces a set of feature vectors. By applying these
feature vectors to the residual images, we generate discriminant feature images which provide the discriminant
information missed by the compression method. In the proposed method, these discriminant feature images are also
encoded. Experiments with AVIRIS data show that the proposed method achieves better compression efficiency and
higher classification accuracy than other compression methods.
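The residual projection step described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name, the (bands, rows, cols) array layout, and the use of a plain matrix projection onto the feature vectors are all assumptions.

```python
import numpy as np

def residual_feature_images(original, reconstructed, feature_vectors):
    """Project the compression residual onto discriminant feature vectors.

    original, reconstructed: (bands, rows, cols) hyperspectral cubes.
    feature_vectors: (n_features, bands) discriminant directions, e.g.
    obtained by a feature extraction method applied to the original images.
    Returns one feature image per discriminant direction.
    """
    residual = original.astype(np.float64) - reconstructed.astype(np.float64)
    bands, rows, cols = residual.shape
    # Flatten the spatial dimensions so each pixel is a spectral column vector.
    pixels = residual.reshape(bands, rows * cols)
    # Each row of the result is one discriminant feature image of the residual.
    feats = feature_vectors @ pixels
    return feats.reshape(len(feature_vectors), rows, cols)
```

The resulting feature images are what the method encodes in addition to the compressed image data.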
In this paper, we propose two compression methods for hyperspectral images that enhance discriminant features. When hyperspectral images are compressed with conventional image compression algorithms, which mainly minimize mean squared errors, the discriminant features of the original data may not be well preserved, since they are not necessarily large in energy. The proposed methods preserve this discriminant information. In the first method, we enhance the discriminant features and then compress the enhanced data using a conventional image compression algorithm such as 3D JPEG 2000. In the second method, we apply a feature extraction method to extract the dominant discriminant feature vectors. By examining these dominant feature vectors, we determine the discriminant usefulness of each spectral band and, based on these findings, allocate bits to each spectral band, assuming that 2D compression methods are used. Experiments show that the proposed methods effectively preserve the discriminant information and yield improved classification accuracies compared to existing compression algorithms.
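The band-wise bit allocation of the second method might look like the following sketch. The scoring rule (summed absolute weight of each band across the dominant feature vectors) and the rounding scheme are illustrative assumptions; the paper's exact allocation rule is not reproduced here.

```python
import numpy as np

def band_bit_allocation(feature_vectors, total_bits):
    """Allocate per-band bit budgets in proportion to each band's weight
    in the dominant discriminant feature vectors.

    feature_vectors: (n_features, bands) array of discriminant directions.
    total_bits: overall bit budget to distribute across the bands.
    """
    fv = np.abs(np.asarray(feature_vectors, dtype=np.float64))
    # Score each band by its summed absolute weight across feature vectors.
    scores = fv.sum(axis=0)
    shares = scores / scores.sum()
    bits = np.floor(shares * total_bits).astype(int)
    # Hand any leftover bits (from rounding down) to the highest-scoring bands.
    for i in np.argsort(-shares)[: total_bits - bits.sum()]:
        bits[i] += 1
    return bits
```

Bands that carry little discriminant weight receive few or no bits, so a 2D per-band compressor can spend its budget where class separability is concentrated.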
We propose new blur and blocking metrics and then present a no-reference image-quality assessment method using these blur and blocking metrics. To compute the blur metric, we first estimated a blur radius from a given image and its reblurred version by using edge differences and edge amplitudes. Because human perception is generally more sensitive to blurring in edge regions, the blur metric was estimated from the edge blocks. We also used kurtosis and structural similarity to better estimate the blur metric. To compute the blocking metric, the blocking artifact was modeled as a 2-D step function and the blockiness visibility was estimated from the brightness difference between adjacent blocks. After the block boundary positions were determined, the blocking metric was computed from the six differences between four adjacent blocks. Experimental results show that the objective quality scores correlated highly with the subjective quality scores.
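The "six differences between four adjacent blocks" idea can be illustrated with a small sketch: the four blocks meeting at a block corner yield 4C2 = 6 pairwise mean-brightness differences. The block size, the use of block means, and the plain averaging are assumptions for illustration; the paper's exact visibility weighting is not reproduced.

```python
import numpy as np
from itertools import combinations

def blockiness_at(img, r, c, size=8):
    """Blocking measure at one block corner (r, c), which must lie at least
    `size` pixels from the top and left borders: the six pairwise
    mean-brightness differences among the four adjacent size x size blocks.
    """
    blocks = [img[r - size:r, c - size:c],   # top-left block
              img[r - size:r, c:c + size],   # top-right block
              img[r:r + size, c - size:c],   # bottom-left block
              img[r:r + size, c:c + size]]   # bottom-right block
    means = [b.mean() for b in blocks]
    # All pairwise differences among the four block means: 4C2 = 6 values.
    diffs = [abs(a - b) for a, b in combinations(means, 2)]
    return sum(diffs) / 6.0
```

A smooth region gives a value near zero, while a visible block boundary between the four neighborhoods pushes the value up.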
Objective video quality measurement has become an important issue, as multimedia services are now widely available over the Internet and other wireless communication media. Traditionally, professional CRT monitors have been used to measure subjective video quality. However, the majority of users have LCD, plasma display panel (PDP), or consumer-grade CRT monitors. We compared the subjective video quality on various TVs and LCD PC monitors. Subjective tests were performed with a wide range of video sequences using different monitors, and their correlations were analyzed. Although there were high correlations among the various display monitors, care should be taken in selecting a monitor for certain applications.
We propose a new edge-dependent deinterlacing method using weighted motion estimation. Motion-compensated methods use the motion information and produce improved performance. However, they still tend to produce unsatisfactory picture quality in rapid motion areas and edge regions due to incorrect motion estimation. The proposed method mitigates these problems by limiting motion estimation to an appropriate search range and applying a weighted edge motion estimation with a piecewise linear weight function. Experimental results show that the proposed method provides noticeable improvement in terms of the peak signal-to-noise ratio and produces better picture quality in edge regions.
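A piecewise linear weight function of the kind mentioned above can be sketched in a few lines. The gradient-magnitude input and the two threshold values are illustrative assumptions; the paper's actual weight function and its arguments may differ.

```python
def edge_weight(gradient_mag, low=10.0, high=60.0):
    """Piecewise linear weight in [0, 1] for edge-dependent motion estimation.

    Below `low` the pixel is treated as flat (weight 0); above `high` it is
    treated as a strong edge (weight 1); in between the weight ramps linearly.
    """
    if gradient_mag <= low:
        return 0.0
    if gradient_mag >= high:
        return 1.0
    return (gradient_mag - low) / (high - low)
```

Such a ramp avoids the hard on/off switching of a single threshold, so the motion-estimation cost changes smoothly across edge boundaries.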
We propose a new deinterlacing algorithm with selective motion compensation. It has been reported that deinterlacing methods using motion compensation produce significantly improved results, although they tend to yield undesired results in fast moving areas. This is due to weak correlations between the previous and current frames. The proposed algorithm solves this problem by selectively applying motion-compensated deinterlacing. We first apply intrafield interpolation in the spatial domain, and then selectively apply motion compensations according to the type of motion vectors. Experimental results show that the proposed method produces noticeably improved performance compared to existing motion-compensated deinterlacing methods.
We propose a new method for an objective measurement of video quality. By analyzing subjective scores of various video sequences, we find that the human visual system is particularly sensitive to degradation around edges. In other words, when edge areas of a video sequence are degraded, evaluators tend to give low quality scores to the video, even though the overall mean squared error is not large. Based on this observation, we propose an objective video quality measurement method based on degradation around edges. In the proposed method, we first apply an edge detection algorithm to videos and locate edge areas. Then, we measure degradation of those edge areas by computing mean squared errors and use it as a video quality metric after some postprocessing. Experiments show that the proposed method significantly outperforms the conventional peak signal-to-noise ratio (PSNR). The method was also evaluated by independent laboratory groups in the Video Quality Experts Group (VQEG) Phase 2 test, where it consistently performed well. As a result, the method was included in international recommendations for objective video quality measurement.
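The core of the edge-area measurement can be sketched as follows: detect edge pixels in the reference frame with a gradient operator, then compute the MSE only over those pixels. The Sobel kernels, the threshold value, and the wrap-around border handling via `np.roll` are simplifying assumptions; the paper's edge detector and postprocessing steps are omitted.

```python
import numpy as np

def edge_region_mse(ref, dist, threshold=50.0):
    """MSE restricted to edge pixels of the reference frame.

    ref, dist: 2-D grayscale frames of the same shape.
    Edge pixels are found with Sobel gradients; borders wrap (np.roll),
    which is acceptable for this sketch but not for production use.
    """
    ref = ref.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    gx = np.zeros_like(ref)
    gy = np.zeros_like(ref)
    # Convolve via explicit shifts so no SciPy dependency is needed.
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            shifted = np.roll(np.roll(ref, dr, axis=0), dc, axis=1)
            gx += kx[dr + 1, dc + 1] * shifted
            gy += ky[dr + 1, dc + 1] * shifted
    edges = np.hypot(gx, gy) > threshold
    if not edges.any():
        return 0.0
    err = (ref - dist.astype(np.float64))[edges]
    return float(np.mean(err ** 2))
```

Unlike full-frame PSNR, this score ignores errors in flat regions, mirroring the observation that viewers weight edge degradation most heavily.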
In this paper, we investigate video quality on various LCD monitors. Video quality varies considerably among LCD monitors, and this unavoidable variance has raised concerns about the stability and repeatability of subjective and objective testing on LCD monitors. We performed subjective testing using the DSCQS method and compared subjective quality ratings on five LCD monitors. The experimental results show that the correlation coefficients among the DMOS values for the monitors are acceptably high. Thus, it may be possible to develop models for objective measurement of video quality on LCD monitors. Physical parameters such as color temperature, contrast, brightness, and response time are also presented and thoroughly analyzed.