The objective of pre-processing is to eliminate information of least visual significance prior to encoding, in order to achieve the best overall performance of a video compression system. We formulate the pre-processing problem in an operational rate-distortion framework, with the aim of maximizing both the visual quality of the compressed video and the coding efficiency of the system. Rather than filtering the original video, our novel approach filters the motion-compensated error signal. This offers a significant computational advantage over other pre-filtering methods without sacrificing effectiveness. The proposed method selects the parameters of a pre-filter jointly with the quantization scales of the encoder. We incorporate a perceptual quality metric in the optimization process in order to maximize the visual quality of the compressed video under given rate constraints. Our approach is developed for motion-compensated block-based discrete cosine transform coders, which form the basis of several video coding standards. We use the visual quality metric proposed by Watson, which was developed for such coders and is based on the discrete cosine transform decomposition. The effectiveness of the proposed approach is demonstrated by simulation results obtained within the framework of MPEG-2 video compression.
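The joint selection of pre-filter parameters and quantization scales described above can be sketched as an exhaustive Lagrangian search over candidate (filter, quantizer) pairs. The rate and distortion functions below are toy stand-ins for illustration only, not the paper's models, and `lam` is an illustrative Lagrange multiplier:

```python
# Sketch of operational rate-distortion parameter selection: pick the
# (filter strength f, quantizer scale q) pair minimizing D + lambda * R.
# The rate/distortion models are toy assumptions, not the paper's.
import itertools

def rate(f, q):
    # Toy model: stronger filtering and coarser quantization lower the rate.
    return 100.0 / ((1 + f) * q)

def distortion(f, q):
    # Toy model: distortion grows with filter strength and quantizer scale.
    return f * 2.0 + q * 1.5

def select_parameters(filters, quants, lam):
    # Exhaustive search over all candidate pairs for the minimum
    # Lagrangian cost J = D + lambda * R.
    return min(itertools.product(filters, quants),
               key=lambda fq: distortion(*fq) + lam * rate(*fq))

best = select_parameters([0, 1, 2], [1, 2, 4], lam=0.5)
```

In a real encoder the rate and distortion values would be measured operationally, per coding unit, rather than given by closed-form models.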
Pre-processing algorithms improve the performance of a video compression system by removing spurious noise and insignificant features from the original images. This increases compression efficiency and attenuates coding artifacts. Unfortunately, determining the appropriate amount of pre-filtering is a difficult problem, as it depends on both the content of an image and the target bit-rate of the compression algorithm. In this paper, we explore a pre-processing technique that is loosely coupled to the quantization decisions of a rate control mechanism. This technique results in a pre-processing system that operates directly on the Displaced Frame Difference (DFD) and is applicable to any standard-compatible compression system. Results explore the effect of several standard filters on the DFD. An adaptive technique is then considered.
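A minimal sketch of the core idea, filtering the displaced frame difference rather than the source frame: the motion-compensated prediction is subtracted first, and a low-pass filter is applied only to the residual. The tiny frames and the 3-tap kernel below are illustrative choices, not taken from the paper:

```python
# Sketch: pre-filter the DFD (current frame minus its motion-compensated
# prediction) instead of the original frame. Frames and kernel are
# illustrative assumptions.
import numpy as np

def dfd_prefilter(current, prediction, kernel):
    # DFD: residual between the current frame and its MC prediction.
    dfd = current.astype(np.float64) - prediction.astype(np.float64)
    # Separable low-pass filtering of the residual (columns, then rows).
    for axis in (0, 1):
        dfd = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), axis, dfd)
    return dfd

current = np.array([[10, 12, 11], [9, 30, 10], [11, 10, 12]], float)
prediction = np.full((3, 3), 10.0)   # stand-in MC prediction
smoothed = dfd_prefilter(current, prediction, np.array([0.25, 0.5, 0.25]))
```

The filtered residual, rather than a filtered source frame, is then what the quantizer and entropy coder see, which is why the approach drops into any standard-compatible encoder.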
The recognition of human faces is a task that has been mastered by the human visual system and has been researched extensively in the domains of neural networks and image processing. This research studies neural networks and wavelet image processing techniques as applied to human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor, they can be efficiently analyzed by a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraints on the system concern the characteristics of the images being processed: the system should be able to recognize human faces effectively irrespective of the individual's facial expression, the presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
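As a sketch of the wavelet stage, one level of a Daubechies-4 (db2) decomposition can be written directly from its known filter coefficients; the periodic boundary handling and the test signal are illustrative choices, not the paper's:

```python
# One level of the Daubechies-4 (db2) wavelet transform, the family of
# wavelets named in the abstract. Periodic extension is an assumption
# made here for simplicity.
import math

SQRT3 = math.sqrt(3.0)
H = [(1 + SQRT3) / (4 * math.sqrt(2)),   # low-pass (scaling) filter
     (3 + SQRT3) / (4 * math.sqrt(2)),
     (3 - SQRT3) / (4 * math.sqrt(2)),
     (1 - SQRT3) / (4 * math.sqrt(2))]
G = [H[3], -H[2], H[1], -H[0]]           # quadrature-mirror high-pass

def db2_level(signal):
    # One decomposition level: approximation (low-pass) and detail
    # (high-pass) coefficients, downsampled by 2, periodic extension.
    n = len(signal)
    approx, detail = [], []
    for i in range(0, n, 2):
        approx.append(sum(H[k] * signal[(i + k) % n] for k in range(4)))
        detail.append(sum(G[k] * signal[(i + k) % n] for k in range(4)))
    return approx, detail

approx, detail = db2_level([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
```

Because db2 has two vanishing moments, the detail coefficients of a linear ramp are zero except where the periodic wrap breaks linearity; in a recognition pipeline, such compacted coefficients are what would be fed to the neural network.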
This paper addresses the problem of target identification using features extracted from the time-frequency domain. This approach is an attractive alternative to other target recognition schemes because it makes use of the time dependence of frequency-domain signatures or the frequency dependence of time-domain scattering features. The disadvantage, however, is that time-frequency signatures are based on the magnitude squared of the received backscatter signal and are thus characterized by high noise variance. In this paper, we determine the feasibility of radar target identification in the time-frequency domain. The goal is to compare the performance of time-frequency domain target recognition with that of schemes that rely on the raw backscatter data. To ensure fairness, we implement the same type of statistical pattern recognition in both domains. We also evaluate three types of time-frequency backscatter representations: Wigner-Ville, Born-Jordan, and spectrograms. We use the analytic form of real radar signatures recorded in a compact-range environment for different target azimuth positions.
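Of the three representations named, the spectrogram is the simplest to sketch: the magnitude squared of a short-time Fourier transform. The chirp test signal, window, and hop sizes below are illustrative assumptions, not recorded radar data:

```python
# Spectrogram as squared-magnitude STFT, one of the three time-frequency
# representations named above. Signal and parameters are illustrative.
import numpy as np

def spectrogram(x, win_len, hop):
    # Hann-windowed short-time Fourier transform; the squared magnitude
    # of each frame's spectrum gives the spectrogram.
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    stft = np.fft.rfft(np.asarray(frames), axis=1)
    return np.abs(stft) ** 2   # shape: (time frames, frequency bins)

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
# Linear chirp sweeping upward: its spectrogram ridge climbs over time.
x = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))
S = spectrogram(x, win_len=128, hop=64)
```

The squared magnitude is exactly why such signatures carry high noise variance, as the abstract notes: squaring the noisy backscatter amplifies fluctuations relative to working on the raw complex data.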