Noise contamination of remote sensing data is an inherent problem, and various techniques have been developed to counter its effects. In multiband imagery, principal component analysis (PCA) can be an effective method of noise reduction; for single images, convolution masking is more suitable. The application of data masking techniques, in association with PCA, can effectively portray the influence of noise. This paper describes the performance of the developed masking technique, in combination with PCA, in the presence of simulated additive noise. The technique is applied to Landsat Thematic Mapper (TM) imagery in addition to a test image. Comparisons of the noise standard deviations estimated by the techniques with the applied values are presented.
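As a minimal sketch of the PCA step underlying such noise reduction (not the paper's masking technique), the following Python outline, assuming a rows x cols x bands cube and a hypothetical keep parameter, discards the minor principal components and reconstructs the bands from the leading ones:

import numpy as np

def pca_denoise(cube, keep):
    """Reconstruct a multiband image from its first `keep` principal
    components (illustrative helper; array layout is an assumption)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Band-to-band covariance and its eigen-decomposition
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # descending variance
    V = eigvecs[:, order[:keep]]               # leading components
    # Project onto the leading components and transform back
    denoised = Xc @ V @ V.T + mean
    return denoised.reshape(rows, cols, bands)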
Diffraction-limited images obtained from practical sensing operations need to be processed before they can be used for any decision-making purposes (target detection, tracking, surveillance, etc.). Iterative digital processing algorithms, generally referred to as super-resolution algorithms, that provide not only restoration within the passband of the sensor but also some degree of spectral extrapolation, have been primarily used to process such images. A popular route for the design of these algorithms is to employ a Bayesian framework where an appropriately modeled statistical quantity (likelihood, posterior distribution, etc.) is optimized iteratively. Although powerful algorithms with demonstrable super-resolution capabilities can be synthesized using this approach, the computational demands and the slow convergence of these algorithms can make them rather unattractive to implement in specific situations. Furthermore, the quality of restoration achieved may not be entirely satisfactory in specific cases of imaging where the underlying emission process is not accurately modeled by the assumed probability distribution functions used in the derivation of the algorithms. In this paper, we describe a set of hybrid algorithms that integrate set theory-based adjustment operations with the iterative steps that perform statistical optimization in order to achieve superior performance features such as faster convergence and reduced restoration errors. Mathematical modeling of constraint sets that facilitate the inclusion of specific types of available information will be discussed, and the design of projection operators that permit an intelligent utilization of these constraint sets in the iterative processing will be outlined. The restoration and super-resolution performance of these hybrid algorithms will be demonstrated by processing several blurred images acquired from different types of sensing mechanisms, and a quantitative evaluation of the benefits in both image and frequency domains will be presented.
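A minimal sketch of the set-theoretic side of such a hybrid iteration is given below; it applies only non-negativity, amplitude-bound, and known-support projections between statistical update steps, with the mask and bound treated as illustrative assumptions rather than the authors' constraint sets:

import numpy as np

def project_constraints(estimate, support_mask, max_value):
    """One set-theoretic adjustment step: project the current estimate
    onto non-negativity, amplitude-bound and known-support constraint
    sets (mask and bound are illustrative assumptions)."""
    x = np.clip(estimate, 0.0, max_value)   # non-negativity and amplitude bound
    x = x * support_mask                    # zero outside the known object support
    return x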
The removal of noise in images is an important issue, and is useful as preprocessing for edge detection, motion estimation, and so on. Recently, many studies on nonlinear digital filters for impulsive noise reduction have been reported. The median filter, the representative nonlinear filter, is very effective at removing impulsive noise while preserving sharp edges. In some cases, burst (i.e., successive) impulsive noise is added to an image, and this type of noise is difficult to remove with the median filter. In this paper, we propose an Adaptive Weighted Median (AWM) filter with Decimation (AWM-D filter) for burst noise reduction. The method can also be applied to recover large destroyed regions, such as blotches and scratches. The proposed filter is an extension of the Decimated Median (DM) filter, which is useful for reducing successive impulsive noise. The DM filter can split long impulsive noise sequences into short ones and remove burst noise despite its short filter window. Nevertheless, the DM filter has two disadvantages. One is that signals without added noise are unnecessarily filtered. The other is that position information within the window is not considered when determining the weights, as is common in median-type filters. To improve the detail-preserving property of the DM filter, we use a noise detection procedure and the AWM-D filter, which can be tuned by the Least Mean Absolute (LMA) algorithm. The AWM-D filter preserves details more precisely than median-type filters because its weights can control the filter output. Simulations show the higher performance of the proposed filter compared with the simple median, WM, and DM filters.
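The following one-dimensional Python sketch illustrates the decimation idea behind the DM/AWM-D filters under stated assumptions: the window samples the signal at a spacing of several pixels, so a long impulse burst is split across window positions, and a weighted median is taken over the decimated samples. The window size, step, and weights are hypothetical, and the LMA weight training and noise detection of the paper are omitted:

import numpy as np

def weighted_median(values, weights):
    """Weighted median: each sample is conceptually replicated by its
    integer weight and the ordinary median of the expanded list is taken."""
    return np.median(np.repeat(values, weights))

def decimated_wm_filter(signal, half_window=2, step=2, weights=None):
    """Illustrative 1-D decimated weighted-median filter: the window
    samples the signal every `step` pixels, which shortens the impulse
    bursts seen by the window (parameter names are hypothetical)."""
    n = len(signal)
    offsets = np.arange(-half_window, half_window + 1) * step
    if weights is None:
        weights = np.ones(len(offsets), dtype=int)
    out = np.empty(n)
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)   # replicate borders
        out[i] = weighted_median(signal[idx], weights)
    return out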
The precise estimation of optical flow is a key technology in computer vision and moving image processing. Due to the inherent occlusion and uncovering phenomena of apparent motion, flow estimation is erroneous at moving objects' boundaries. The line field lattice process (i.e., a Gibbs/Markov random field model of discontinuity) is a well-known solution to this problem: a binary-valued line is used to separate regions with respect to motion.
This paper proposes two improvements to the conventional line field estimation process. The first reduces the computational burden: in the MAP estimation algorithm for region segmentation, the region in which lines may be set is restricted to the motion boundary area, which is specified by thresholding the residue of the optical flow constraint. The second improvement refines the estimation accuracy in the recursive minimization of the energy function. Since the previous pel-recursive line estimation procedure uses causal scanning, it tends to produce undesirable lines such as cracked or isolated lines; the proposed algorithm adopts a non-causal scanning process. The effects of the proposed methods are examined on artificial and real moving images. As a result, only 1/4 of the computational time of the previous method is necessary to generate the lines, and undesirable line settings are effectively eliminated.
Flicker, defined as unnatural temporal fluctuations in perceived image intensity, is a common artifact in old films. Flicker removal is needed due to the high quality requirements of revitalizing old films. In this paper, we propose a least squares estimation (LSE) method for eliminating flicker in film archives. The essential point of this method is to estimate flicker parameters for each small region by minimizing the squared error between corrected intensities in previous frames and estimated intensities in the current frame. Based on thresholds on the flicker parameters, stationary and motion blocks are detected. For stationary blocks, a mean squared error (MSE) criterion is added to further restrict the stationary area; blocks whose MSE exceeds the threshold are flagged as motion blocks. Flicker parameters in motion blocks are recovered by an iterative interpolation process. Synthetic and real flickering image sequences are used to evaluate the algorithm in terms of average PSNR and visual quality in real-time playback, respectively. Moreover, the results of the LSE method are compared with those obtained with Roosmalen's method. The LSE method shows an impressive improvement in PSNR on the simulated flicker sequence, while no blocking or other new artifacts are visible in real-time playback for either the synthetic or the real sequence.
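A minimal sketch of the per-block least-squares estimation, assuming a two-parameter gain/offset flicker model fitted against the corrected previous frame (block size, thresholds, and the iterative interpolation for motion blocks are not modeled):

import numpy as np

def estimate_flicker_params(prev_corrected, current):
    """Least-squares fit of a gain/offset model for one block:
    prev_corrected ~ a * current + b.  Returns (a, b)."""
    x = current.astype(float).ravel()
    y = prev_corrected.astype(float).ravel()
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def correct_block(current, a, b):
    """Apply the estimated parameters to remove flicker from the block."""
    return np.clip(a * current.astype(float) + b, 0, 255)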
In this paper, we present an automated segmentation algorithm for three-dimensional sagittal brain MR images. We start the segmentation from a midsagittal brain MR image, utilizing some landmarks, anatomical information, and a connectivity-based threshold segmentation algorithm. Since the brain in adjacent slices has a similar size and shape, we propose to use the segmentation result of the midsagittal brain MR image as a mask to guide segmentation of the adjacent slices in the lateral direction. The masking operation may truncate some regions of the brain. In order to restore a truncated region, we find the end points of its boundary by comparing the boundaries of the mask image and the masked image. Then, we restore the truncated region using the connectivity-based threshold segmentation algorithm with the end points. The resulting segmented image is then used as a mask for the subsequent slice.
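A minimal sketch of a connectivity-based threshold segmentation of the kind referred to above, assuming a 4-connected region grown from a seed point between two intensity thresholds; the seed and thresholds are illustrative, not the paper's landmark-driven choices:

import numpy as np
from collections import deque

def connectivity_threshold_segment(image, seed, lo, hi):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity lies in [lo, hi].  Returns a boolean mask."""
    rows, cols = image.shape
    mask = np.zeros((rows, cols), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr, nc] \
                    and lo <= image[nr, nc] <= hi:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask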
A composite frame image is an interlaced composition of two sub-image fields, odd and even. This image type is common in many imaging systems that produce video sequences. When relative motion between the camera and the scene occurs during the imaging process, two types of distortion degrade the image: the edge 'staircase effect' due to the shifted appearance of objects in successive fields, and blur due to scene motion during each field exposure. This paper deals with the restoration of composite frame images degraded by motion. In contrast to previous works that dealt only with uniform-velocity motion, here we consider the more general case of nonlinear motion. Since the conventional motion identification techniques used in other works cannot be employed in the case of nonlinear motion, a new method for identifying the motion from each field is used. Results of motion identification and image restoration for various motion types are presented.
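Since each composite frame interleaves its two fields row by row, the first processing step is simply to separate them, as in the following sketch (the row-parity convention is an assumption):

def split_fields(frame):
    """Separate an interlaced composite frame (2-D array) into its two
    fields, the step that precedes per-field motion identification."""
    odd_field = frame[0::2, :]    # rows 0, 2, 4, ...
    even_field = frame[1::2, :]   # rows 1, 3, 5, ...
    return odd_field, even_field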
The detection of edges in digital imagery based on localizing the zero crossings of filtered image data has been investigated. Truncated time or frequency sampled forms (TSF/FSF) of the Laplacian of Gaussian (LOG) filter are employed in the transform domain. Samples of the image are transformed using the discrete symmetric cosine transform (DSCT) prior to adaptive filtering and the isolation of zero crossings. The DSCT facilitates both control of the edge localization accuracy as well as modular implementation. The adaptive strategy for accepting/rejecting edge transitions at the appropriate locations in the image is based on estimates of the local gradient. This paper evaluates block-based filtering procedures to identify edges in terms of achievable edge localization, signal-to-noise ratio (SNR) around the edge, and computational benefits.
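A spatial-domain sketch of the underlying zero-crossing idea is shown below, assuming a Gaussian-smoothed Laplacian and a fixed gradient threshold in place of the paper's DSCT-domain filtering and adaptive gradient test:

import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossing_edges(image, sigma=2.0, grad_thresh=4.0):
    """Zero-crossing edge detection: filter with a Laplacian of Gaussian,
    mark sign changes between horizontal/vertical neighbours, and keep
    only crossings whose local gradient exceeds a threshold."""
    f = gaussian_laplace(image.astype(float), sigma)
    zc = np.zeros(f.shape, dtype=bool)
    zc[:, :-1] |= np.signbit(f[:, :-1]) != np.signbit(f[:, 1:])
    zc[:-1, :] |= np.signbit(f[:-1, :]) != np.signbit(f[1:, :])
    gy, gx = np.gradient(image.astype(float))
    strong = np.hypot(gx, gy) > grad_thresh
    return zc & strong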
In this paper, a novel method for registration of range images from multiple viewpoints is presented. The algorithm encompasses two stages -- surface approximation using triangular meshes and progressive registration. It first generates triangular meshes that interpolate the underlying surfaces represented by the range images at progressive levels of detail (LOD). The triangulation algorithm is capable of identifying feature points among all the samples and therefore not only provides distinctive landmark control points critical for registration accuracy but also substantially reduces the registration complexity. At the registration stage, based on the features the triangular meshes have captured, corresponding vertices can easily be located, and the least-squares method is applied to the set of control points in the coarse triangular meshes to derive an initial transformation. The registration is then iteratively performed on finer meshes to further improve the transformation accuracy. The classic Iterative Closest Point (ICP) algorithm is modified and integrated with this progressive registration method based on surface triangulation. This approach overcomes the drawbacks of the classic ICP: it no longer requires one surface to be a subset of the other, and it does not need an initial transformation -- an alignment sufficiently close to avoid convergence to a local minimum. In addition, the surface information provided by the triangular mesh helps with registration accuracy and results in fast convergence. Experiments have been conducted on benchmark images; the superior results confirm the effectiveness of this novel approach.
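The least-squares step applied to matched control points can be illustrated by the standard SVD-based rigid alignment of corresponding point sets, sketched below (this is the generic closed-form solution, not the paper's full progressive scheme):

import numpy as np

def rigid_transform_lsq(P, Q):
    """Least-squares rigid transform (R, t) aligning corresponding point
    sets P -> Q, each of shape (N, 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t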
The increasing use of three-dimensional imaging modalities triggers the need for efficient techniques to transport and store the related volumetric data. Desired properties such as quality and resolution scalability, region-of-interest coding, lossy-to-lossless coding, and excellent rate-distortion characteristics at both low and high bit-rates are inherently supported by wavelet-based compression tools. In this paper a new 3D wavelet-based compression engine is proposed and compared against a classical 3D JPEG-based coder and a state-of-the-art 3D SPIHT coder for different medical imaging modalities. Furthermore, we evaluate the performance of a selected set of lossless integer lifting kernels. We demonstrate that the performance of the proposed coder is superior for lossless coding and competitive with 3D SPIHT at lower bit-rates.
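As an example of the kind of lossless integer lifting kernel evaluated, the following sketch implements one level of the reversible 5/3 lifting transform on a one-dimensional, even-length signal, with a simple symmetric boundary extension assumed:

import numpy as np

def lifting_53_forward(x):
    """One level of the reversible 5/3 integer lifting transform on a 1-D
    signal of even length; boundary handling is simple symmetric extension."""
    x = np.asarray(x, dtype=int)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left_even + right_even) / 2)
    right = np.append(even[1:], even[-1])          # symmetric extension on the right
    d = odd - (even + right) // 2
    # Update step: approximation = even + floor((left_d + right_d + 2) / 4)
    left_d = np.insert(d[:-1], 0, d[0])            # symmetric extension on the left
    s = even + (left_d + d + 2) // 4
    return s, d                                    # low-pass and high-pass bands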
In conventional bit rate control schemes, the buffer level is controlled by adjusting the quantization step size, while the frame rate and spatial resolution chosen for coding are fixed throughout the coding process. In this paper, we consider a more general Multidimensional (M-D) bit rate control, where the frame rate, spatial resolution, and quantization step size are jointly adapted for buffer control. In the M-D bit rate control setting, the problem is to decide which frames to code (and which frames to skip), along with the spatial resolution and the quantization step size to use for each coded frame. Given a finite set of operating points on an M-D grid, we formulate the optimal solution of the M-D buffer-constrained allocation problem. The formulation allows a skipped frame to be reconstructed from one coded frame using any temporal interpolation method. A dynamic programming algorithm is presented to obtain an optimal solution for the case of intraframe coding, which is a special case of dependent coding. We experiment with both zero-order hold and motion-compensated temporal interpolation. Operational rate-distortion (R-D) bounds are illustrated for both the M-D and conventional bit rate control approaches. Our focus is on very low bit rate applications where a significant delay is tolerable.
This paper describes a vector quantization variant for lossy compression of binary images. This algorithm, adaptive binary vector quantization for binary images (ABVQ), uses a novel, doubly-adaptive codebook to minimize error while typically achieving compression higher than is achieved by lossless techniques. ABVQ provides sufficient fidelity to be used on text images, line drawings, graphics, or any other binary (two-tone, or bi-level) images. Experimental results are included in the paper.
The impact of lossy compression on the classification of remotely-sensed imagery data is examined. The impact of compression is assessed for both types of classification, i.e., classification via thematic maps for small-footprint imagery, and classification via spectral unmixing for large-footprint imagery data. An overview of viable classification and spectral unmixing procedures is given. The criteria for measuring the impact of compression are defined. It is shown that the impact of compression is insignificant for compression ratios of less than 10. It is argued that the effective impact of compression is further reduced by the presence of other sources of inaccuracy in the original data and the associated prediction models.
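A minimal sketch of linear spectral unmixing for a single pixel is given below; it solves an unconstrained least-squares problem against an assumed endmember matrix and omits the non-negativity and sum-to-one constraints usually imposed in practice:

import numpy as np

def unmix_pixel(spectrum, endmembers):
    """Unconstrained least-squares spectral unmixing of one pixel:
    solve spectrum ~ endmembers @ abundances, where `endmembers` is a
    (bands x materials) matrix of reference spectra."""
    abundances, residual, *_ = np.linalg.lstsq(endmembers, spectrum, rcond=None)
    return abundances, residual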
A highly scalable wavelet video codec is proposed for Internet video streaming applications based on the simplified JPEG-2000 compression core. Most existing video coding solutions utilize a fixed temporal grouping structure, resulting in quality degradation due to structural mismatch with inherent motion and scene change. Thus, by adopting an adaptive frame grouping scheme based on fast scene change detection, a flexible temporal grouping is proposed according to motion activities. To provide good temporal scalability regardless of packet loss, the dependency structure inside a temporal group is simplified by referencing only the initial intra-frame in telescopic motion estimation at the cost of coding efficiency. In addition, predictive-frames in a temporal group are prioritized according to their relative motion and coding cost. Finally, the joint spatio-temporal scalability support of the proposed video solution is demonstrated in terms of the network adaptation capability.
Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.
It is first shown here that, in addition to the well-known property of energy compaction, a novel organization of block-based DCT coefficients exhibits characteristics similar to those of the wavelet transform when an image is taken as a single DCT clustering entity. These characteristics include cross-subband similarity, decay of magnitude across subbands, etc., and can be exploited to obtain higher compression performance and to widen DCT applications. Such applications include image retrieval and image recognition, in which DCT has the advantage of lower computational complexity over wavelet-based schemes. Secondly, an embedded image coder based on this novel coefficient organization strategy is presented. To the best of our knowledge, the proposed coder outperforms any other DCT-based coder published in the literature in terms of compression performance, including EZDCT, EZHDCT, STQ, JPEG, etc. Using the Lena image as an example, the proposed coder outperforms versions of EZDCT, EZHDCT, STQ and JPEG by an average of 0.5 dB, 0.5 dB, 1.1 dB and 1.5 dB in PSNR over the range 0.125 to 1.00 bits per pixel (bpp). This outstanding compression performance is achieved without using any optimal bit allocation procedure, so both the encoding and decoding procedures are fast.
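The coefficient organization can be illustrated by the following sketch, which gathers coefficient (u, v) of every 8x8 DCT block into one subband image and tiles the subbands so that low frequencies occupy the upper-left corner, mimicking a wavelet layout (image dimensions assumed to be multiples of the block size):

import numpy as np

def regroup_dct_coefficients(dct_blocks, block=8):
    """Reorganize block-DCT coefficients into wavelet-like subbands.
    dct_blocks: (H, W) array holding the block-wise DCT of an image whose
    dimensions are multiples of `block`."""
    H, W = dct_blocks.shape
    bh, bw = H // block, W // block
    out = np.empty_like(dct_blocks)
    for u in range(block):
        for v in range(block):
            # Gather coefficient (u, v) from every block into one subband tile
            out[u * bh:(u + 1) * bh, v * bw:(v + 1) * bw] = \
                dct_blocks[u::block, v::block]
    return out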
In this paper, a new image representation scheme using a set of block templates is introduced first. Its application in image coding is presented afterwards. In the proposed representation scheme, a set of block templates is constructed to represent three basic types of image patterns: uniform, edge, and irregular. A novel classifier, which is designed based on the histogram shape analysis of image blocks, is employed to classify the block templates according to their level of visual activity. Each block template is then represented by a set of parameters associated with the pattern appearing inside the block. Image representation using these templates requires considerably fewer bits than the original pixel-wise description and yet characterizes perceptually significant features more effectively. The coding system approximates each image block by one of the block templates and further quantizes the template parameters. Satisfactory coded images have been obtained at bit rates between 0.3 - 0.4 bits per pixel (bpp).
In this paper we present a mesh compression algorithm based on a layered mesh simplification method which allows a multiresolution representation for triangular meshes. Unlike previous progressive compression methods, which use a greedy vertex removal scheme to achieve the best compression results, we adopt a new layered mesh simplification algorithm to generate a sequence of meshes over a broad range of resolutions with very good visual quality through a derived threshold bounding curve. In order to achieve the desired coding gain, we perform two types of prediction for geometric data as well as topological data. The advantage of the proposed algorithm is that it generates progressive meshes of linear resolutions, i.e., we can smoothly move from meshes of lower resolutions to higher ones by linearly adding geometry primitives. It also deals well with general meshes containing special topological features such as boundary edges.
By augmenting the ITU-T H.263 standard bit stream with supplementary motion vectors for the to-be-interpolated frames, a new deformable block-based fast motion-compensated frame interpolation (DB-FMCI) scheme is presented. Unlike other motion-compensated interpolation methods, which assume a constant motion velocity between two reference P frames, the proposed scheme takes into account the non-linearity of motion to achieve a better interpolation result. Supplementary motion information for the so-called M frame (motion frame) is defined, consisting of compressed residues of linear and non-linear motion vectors. The non-linear motion vectors of skipped frames are used at the decoder to determine the 6-parameter affine-based DB-FMCI. Experimental results show that the proposed non-linear enhancement scheme can achieve higher PSNR values and better visual quality in comparison with traditional methods based only on the linear motion assumption.
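For reference, the linear-motion baseline that the scheme improves upon can be sketched as a block-wise interpolation that displaces both references by half the block motion vector; the block size, vector convention, and array shapes below are assumptions, and the affine non-linear refinement is not reproduced:

import numpy as np

def mc_interpolate(prev, nxt, motion_vectors, block=16):
    """Block-wise motion-compensated interpolation of the mid frame under
    the constant-velocity assumption.  motion_vectors is assumed to hold
    one (dy, dx) pair per block; frame sizes are multiples of `block`."""
    H, W = prev.shape
    mid = np.zeros_like(prev, dtype=float)
    for by in range(0, H, block):
        for bx in range(0, W, block):
            dy, dx = motion_vectors[by // block, bx // block]
            hy, hx = int(round(dy / 2)), int(round(dx / 2))
            ys = np.arange(by, by + block)
            xs = np.arange(bx, bx + block)
            fwd = prev[np.clip(ys[:, None] - hy, 0, H - 1),
                       np.clip(xs[None, :] - hx, 0, W - 1)]
            bwd = nxt[np.clip(ys[:, None] + hy, 0, H - 1),
                      np.clip(xs[None, :] + hx, 0, W - 1)]
            mid[by:by + block, bx:bx + block] = 0.5 * (fwd + bwd)
    return mid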
Several analytical models have been recently introduced to estimate the impact of the error propagation effect on the source video caused by lossy transmission channels. However, previous work focused either on the statistical aspects for the whole sequence or had a high computational complexity. In this work, we concentrate on estimating the distortion caused by the loss of a packet with a moderate computational complexity. The proposed model considers both the spatial filtering effect and the temporal dependency that affect the error propagation behavior. To verify this model, a real loss propagation effect is measured and compared with that of the expected distortion level derived by the model. Also, its applicability to the quality of service (QoS) of transmitted video is demonstrated through the packet video evaluation over the simulated differentiated service (DiffServ) forwarding mechanism.
We address the problem of estimation of biological signal parameters and present methods based on the phase gradient of the short time Fourier transform which may be used to accurately estimate signal parameters. The methods are robust and are well suited to the analysis of non-stationary multicomponent signals. Specifically addressed are the problems of recovery of crisp narrow-band time-frequency representations from very small data sets, accurate estimation of speech formants, blind recovery of the group delay of the transmission channel and equalization of time-frequency representations.
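The phase-gradient idea in its simplest form is sketched below: with a one-sample hop, the wrapped phase increment of each STFT channel yields a channelized instantaneous-frequency estimate (the window length and hop are illustrative; the authors' full estimators are not reproduced):

import numpy as np
from scipy.signal import stft

def stft_instantaneous_frequency(x, fs, nperseg=256):
    """Estimate channelized instantaneous frequency from the time
    derivative of the STFT phase, using a one-sample hop so the phase
    difference between adjacent frames is unambiguous."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg - 1)
    dphi = np.angle(Z[:, 1:] * np.conj(Z[:, :-1]))   # wrapped phase increment
    inst_freq = dphi * fs / (2 * np.pi)              # Hz, per one-sample hop
    return f, t[:-1], inst_freq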
The papermaking process consists of a succession of unit operations: the forming section, the press section, and finally the drying section. Forming and pressing are within the scope of this paper, as they may influence the appearance of the studied material, paper. The main objective is to characterize paper and, more specifically, its visual quality, mainly due to marking, which consists of successive white and dark stripes. A method is proposed to analyze the quality of this visual aspect of paper, which is a very important factor for the consumer. This paper is therefore devoted to the presentation of an industrial digital image processing tool that allows the evaluation of cigarette paper marking quality. The problem is delicate because different technical and physical parameters influence the paper's appearance; for example, the whiteness or the opacity of the paper influences the evaluation of the marking quality. Furthermore, the classical quality evaluation test is carried out by a cigarette paper expert who observes the paper lying on a black support, so the reflection of light is mainly observed instead of the look-through aspect. Usually, this determination is made by the experienced eye of the expert, who may distinguish between 5 to 6 classes of paper quality, and sensitivity and subjectivity play an important role in this grading. The aim of the presented tool is to obtain an objective classification of paper marking quality. Image analysis is used to mimic the expert's experience. In a first step, the image is acquired using a standard scanner; the developed software then analyzes the obtained image numerically. The sensitivity of the image analysis is high, and the results are repeatable. The classification of different cigarette papers using this method provided the same results as the human expert, pointing out the validity of the developed method. Some experimental results are presented to illustrate the industrial interest of this method. We first present the new method for evaluating the quality of a material property (paper aspect) from image analysis. Examples of measurements obtained on different paper samples, using a standard scanner, then illustrate the proposed methodology. Finally, comparisons between the classifications obtained with the proposed method and the human expertise are presented to underline the interest of the proposed objective method.
In this article, we present a strategy to report, in an automatic way, significant changes on a map by fusion of recent images in various spectral bands. We focus on industrial scenes where changes concern objects such as man-made structures and water areas. The approach is a two-stage process. The first stage concerns the detection of cartographic objects that no longer exist; the second concerns the detection of new objects by a multispectral classification of the images. For configurations of partial overlap between map and images, it is difficult or even impossible to formalize the suggested approach within a probabilistic framework. Thus, the Dempster-Shafer theory is shown to be a more suitable formalism in view of the available information, and we present several solutions. Experimental results are presented.
We introduce a novel method for facial feature extraction. In our approach, we attempt to find the correspondence of an intensity grid, where a feature template is defined, to an image patch. Their similarity is measured with the Sampling Determination Coefficient, the square of the linear correlation coefficient. The search space generated by this function makes it easier for an optimization algorithm to find the parameters needed to extract the sought feature. We extract facial features from frontal-view images of people, like the ones present in most photographs on passports, driver licenses, and other similar documents. We tested our algorithm on 823 images; facial features were correctly extracted in 99.028% of them.
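A minimal sketch of matching with the determination coefficient is shown below; it performs an exhaustive search over patch positions, whereas the paper relies on an optimization algorithm over the template parameters:

import numpy as np

def determination_coefficient(template, patch):
    """Sampling determination coefficient: the square of the linear
    correlation coefficient between a template and an equally sized patch."""
    r = np.corrcoef(template.astype(float).ravel(),
                    patch.astype(float).ravel())[0, 1]
    return r * r

def search_feature(image, template):
    """Exhaustive search for the patch maximizing the determination coefficient."""
    th, tw = template.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            r2 = determination_coefficient(template, image[y:y + th, x:x + tw])
            if r2 > best:
                best, best_pos = r2, (y, x)
    return best_pos, best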
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. One-chip imaging systems, in which the image sensor has a fully digital interface, bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, makes images more realistic and colorful; one could say that the color filter makes life more colorful. What is a color filter? A color filter blocks the image light source except for the color with the specific wavelength and transmittance matching the filter itself. The color filter process consists of coating and patterning green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matching pixels of the image sensing array. From the signal captured at each pixel, the image of the scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the future of color filters bright. Although it presents challenges, developing the color filter process is well worthwhile; we aim to provide short cycle times, excellent color quality, and high, stable yield. The key issues of an advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that have to be considered are also described in this paper.
A thorough understanding and analysis of geometry and topology of three-dimensional fiber networks from high resolution images is an important and challenging task due to the enormous complexity and randomness of the structure. In this paper we propose a technique that is aimed at structural analysis of fiber mats, both for quality evaluation and improvement of fiber products. A sequence of image processing techniques is applied to the images, to obtain the medial axis of the fiber network. A description of the network is then determined from the medial axis. We demonstrate computational algorithms that can efficiently identify individual fibers from a network of randomly oriented and curled fibers that are bonded irregularly with each other. We can accurately measure the orientation, location, curl, length, bonds, and crossing angles of the identified fibers as well as the density of the fibers contained in a given volume. The performance of the proposed technique is presented for simulated fiber data and for a synthetic (polymer) fiber mat.
In the first part of the presented paper a new measurement system for fast three-dimensional in vivo measurement of the microtopography of human skin is proposed. It is based on the principle of active image triangulation. A Digital Micromirror Device (DMDTM) is used for projecting sinusoidal intensity distributions on the surface of human skin. By using temporal phase shift algorithms the three-dimensional topography is reconstructed from two-dimensional images. Displacement vector fields represent a promising approach for detecting deformation and other lateral changes in the surface of human skin. In the second part of the presented paper a method based on local template matching and smooth interpolation algorithms for determining a displacement vector field is proposed. Aiming at a minimal expenditure of numerical calculation, a stepwise algorithm was developed for this purpose. The deformatory component of the calculated vector field is separated by minimizing a suitable functional, which is also presented in the paper. In investigations of measurement series the proposed method proves to be very efficient. The calculated displacement vector fields connect corresponding areas in two subsequent measurements. Distortions caused by mechanical deformation or other influences can be detected and visualized by the separated deformatory components of the vector fields.
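The temporal phase-shift evaluation can be illustrated with the classical four-step formula, sketched below under the assumption of fringe images shifted by 90 degrees each:

import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Classical four-step temporal phase-shift evaluation: given four
    images of the projected sinusoidal fringe pattern shifted by 0, 90,
    180 and 270 degrees, recover the wrapped phase encoding the topography."""
    return np.arctan2(i3.astype(float) - i1, i0.astype(float) - i2)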
We describe the QccPack software package, an open-source collection of library routines and utility programs for quantization, compression, and coding of data. QccPack is being written to expedite data-compression research and development by providing general and reliable implementations of common compression techniques. Functionality of the current release includes entropy coding, scalar quantization, vector quantization, adaptive vector quantization, wavelet transforms and subband coding, error-correcting codes, image-processing support, and general vector-math, matrix-math, file-I/O, and error-message routines. All QccPack functionality is accessible via library calls; additionally, many utility programs provide command-line access. The QccPack software package, downloadable free of charge from the QccPack Web page, is published under the terms of the GNU General Public License and the GNU Library General Public License, which guarantee source-code access as well as allow redistribution and modification. Additionally, there exist optional modules that implement certain patented algorithms. These modules are downloadable separately and are typically issued under licenses that permit only non-commercial use.
Eastman Kodak Company developed a rate-controlled adaptive Differential Pulse Code Modulation (DPCM) image compression algorithm for commercial remote sensing applications. This algorithm is currently being used in a space-qualified ASIC in the Space Imaging Incorporated IKONOS satellite and the soon-to-be-launched EarthWatch QuickBird satellite. This ASIC compresses the raw imagery data (before calibration) at a speed of just under 4 Megapixels per second. Kodak has redesigned this ASIC to increase the functionality and throughput while maintaining the power and area. With advancements in ASIC design, the compression algorithm, and fabrication techniques, the new compression ASIC has achieved an operating rate of 22 Megapixels per second. A third optional mode has also been added to extend the capability of the ASIC from lossless compression ratios of 2:1 to lossy compression ratios of 5:1. This new ASIC is intended to meet the future commercial remote sensing requirements for increased resolution and greater area coverage.
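For readers unfamiliar with DPCM, the following one-dimensional sketch shows the basic predict-quantize-reconstruct loop; the rate control and adaptivity of the Kodak algorithm, and its actual predictor and quantizer, are not modeled:

import numpy as np

def dpcm_encode(line, quant_step=4):
    """Minimal 1-D DPCM: predict each pixel from the previously
    reconstructed one, quantize the prediction error, and keep the
    reconstruction in the loop so encoder and decoder stay in step."""
    residues = np.empty(len(line), dtype=int)
    recon = np.empty(len(line), dtype=float)
    pred = 0.0
    for i, pixel in enumerate(line.astype(float)):
        err = pixel - pred
        q = int(round(err / quant_step))       # quantized residue to be entropy coded
        residues[i] = q
        recon[i] = pred + q * quant_step       # decoder-side reconstruction
        pred = recon[i]                        # predictor for the next pixel
    return residues, recon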
A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging data. The algorithm first performs a block transform of either a hybrid of the modulated lapped transform (MLT) with the discrete cosine transform (DCT), or a 2-dimensional MLT. The transform is followed by bit-plane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The approach requires no unique look-up table to maximize its performance and is error-resilient in that error propagation is contained within a few scan lines for push-broom applications. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits.
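A minimal sketch of bit-plane encoding of quantized transform coefficients is given below: emitting bits from the most significant plane downward yields an embedded string that can be truncated at the user-specified rate (sign handling and entropy coding are omitted):

import numpy as np

def bitplane_encode(coeffs, num_planes=12):
    """Embedded bit-plane sketch: emit the magnitude bits of the quantized
    transform coefficients from the most to the least significant plane."""
    mags = np.abs(coeffs.astype(int)).ravel()
    stream = []
    for plane in range(num_planes - 1, -1, -1):
        bits = (mags >> plane) & 1
        stream.extend(bits.tolist())
    return stream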
Future high resolution instruments planned by CNES to succeed SPOT5 will lead to higher bit rates because of the increase in both resolution and number of bits per pixel, not compensated by the reduced swath. Data compression is then needed, with compression ratio goals higher than the 2.81 obtained for SPOT5 with a JPEG-like algorithm. Compression ratios should typically rise to values of 4 - 6, with artifacts remaining unnoticeable: the performance of the SPOT5 algorithm clearly has to be surpassed. On the other hand, in the framework of optimized and low cost instruments, the noise level will increase. Furthermore, the Modulation Transfer Function (MTF) and the sampling grid will be fitted together to -- at least roughly -- satisfy Shannon requirements. As with the Supermode sampling scheme of the SPOT5 panchromatic band, the images will have to be restored (deconvolution and denoising), and that renders the compression impact assessment much more complex. This paper is a synthesis of numerous studies evaluating several data compression algorithms, some of them supposing that the adaptation between sampling grid and MTF is obtained by the quincunx Supermode scheme. The following points are analyzed: the compression decorrelator (DCT, LOT, wavelet, lifting), comparison with JPEG2000 for images acquired on a square grid, fitting the compression to the quincunx sampling, and on-board restoration (before compression) versus on-ground restoration. For each of them, we describe the proposed solutions, underlining the associated complexity and comparing them from a quantitative and qualitative point of view, giving the results of expert analyses.
Many space-borne remote sensing missions are based on scanning sensors that create images a few lines at a time. Moreover, spacecraft typically have limited amounts of available memory, on account of weight, size and power constraints. For these reasons, the emerging JPEG-2000 standard has a requirement for stripe processing in order to meet the needs of the remote sensing profile. This paper first briefly presents the JPEG-2000 algorithm, highlighting details pertinent to scan-based processing. A technique for meeting the stripe processing requirement is then presented. This technique uses a sliding window rate control mechanism that maintains the desired average bit rate over entire images, while retaining a minimum number of bytes in memory at any given time. Results are then presented to show performance over various sliding window sizes.
An update of the development status of the CWIC on-board image compressor project is given. For the convenience of the reader, the features of CWIC are recollected: wavelet-based, high-speed, high resolution, constant-rate image compression using space-qualifiable hardware. The compression efficiency of CWIC has been reported earlier but is supplemented with a JPEG comparison presently. The precise real-time performance of CWIC has been obtained from netlist simulation. The CWIC architecture is shown with two interface options: either a parallel or a SpaceWire serial interface. The status of the demonstrator is reported, and the existing filter and coder boards are depicted.
The output rates of imaging scientific experiments on planetary missions far exceed the few tens of kbit/s provided by X- or Ka-band downlinks. This severely restricts the duration and frequency of observations. Space applications present specific constraints for compression methods: space-qualified ROM and fast RAM chips have limited capacity and large power requirements. Real-time compression is therefore preferable (no large local data buffer) but requires a large processing throughput. Wavelet compression provides a fast and efficient method for lossy data compression when combined with tree-coding algorithms such as that of Said and Pearlman. We have developed such an algorithm for four instruments on ROSETTA (ESA cometary rendez-vous mission) and Mars Express (ESA Mars Orbiter and Lander mission), building on the experience from two experiments on CASSINI and MARS 96 for which lossless compression was implemented. Modern Digital Signal Processors using a pipeline architecture provide the required high computing capability. The Said-Pearlman tree-coding algorithm has been optimized for speed and code size by reducing branching and bit manipulation, which are very costly in terms of processor cycles. Written in C with a few assembly language modules, the implementation on a DSP of this new version of the Said-Pearlman algorithm provides a processing capability of 500 kdata/s (imaging), which is adequate for our applications. Compression ratios of at least 10 can be achieved with acceptable data quality.
The JPEG2000 still image compression standard has adopted a new compression paradigm: 'Encode once, decode many.' JPEG2000 architecture allows the codestream to be decoded differently for different application needs. The codestream can be decoded to produce lossless and lossy images, specific spatial regions of the image or images with different quality and resolution. The syntax contains pointers and length fields which allow relevant coded data to be identified without the entropy coder or transforms. Consequently, the codestream can be 'parsed' to create different codestreams without transcoding. For example, such parsing operation can create a new codestream which contains a lower resolution image (e.g. thumbnail) of the original codestream. If a codestream is produced appropriately, it can also be converted to codestreams of lower quality images, or images containing only a specific spatial region. This feature is very useful for many applications, especially on the internet.
After nearly three years of international development, the still-image technology for the JPEG-2000 standard is almost fully established. This paper briefly summarizes the developmental history of this standard and discusses its evolution through the Verification Model (VM) experimental and testing software which will become the first fully compliant, fully functional implementation of the new standard. The standard is then described, highlighting the data domains at various stages during the forward compression process. These data domains provide certain flexibilities which offer many of the rich set of features available with JPEG-2000. Some of these features are then described, with algorithmic examples as well as sample output from the VM.
JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper compares the set of features offered by JPEG 2000, and how well they are fulfilled, versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG and more recent PNG. The study concentrates on compression efficiency and functionality set, while addressing other aspects such as complexity. Lossless compression efficiency as well as the fixed and progressive lossy rate-distortion behaviors are evaluated. Robustness to transmission errors, Region of Interest coding and complexity are also discussed. The principles behind each algorithm are briefly described. The results show that the choice of the 'best' standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.
While there do exist many different image file formats, the JPEG committee felt that none of those formats addressed a majority of the needs of tomorrow's complicated imaging applications. Many formats do not provide sufficient flexibility for the intelligent storage and maintenance of metadata. Others are very restrictive in terms of color encoding. Others provide flexibility, but with a very high cost due to complexity. The JPEG 2000 file format addresses these concerns by combining a simple binary container with a flexible metadata architecture and a useful yet simple mechanism for encoding the colorspace of an image. The format also looks toward the future, where the lines between still images, moving images, and multimedia become a blur, by providing simple hooks into other multimedia standards. This paper describes the binary format, metadata architecture, and colorspace encoding architecture of the JPEG 2000 file format. It also shows how this format can be used as the basis for more advanced applications, such as the upcoming motion JPEG 2000 standard.
The JPEG 2000 compression standard includes several optional file formats known as the JP family of file formats. One of these, the JPM file format (file extension: .jpm), is aimed at the compression of compound images: images with multiple regions, each with differing requirements for spatial resolution and tonality. Document images are common instances of compound images. By applying multiple compression methods, each matched to the characteristics of a distinct region, significant compression advantages can be achieved over the use of a single compression method for the entire image.
In the past there have been many different image file formats providing many different capabilities. However, the one thing nearly all of them shared was a very limited mechanism for encoding color. Storing a color image today forces developers either to take a 'lowest-common-denominator' approach by using a single standard colorspace in all applications, or to use the capabilities of ICC color management at the cost of wide interoperability. The JPEG 2000 file format changes this with a new architecture for encoding the colorspace of an image. While the solution is not perfect, it does greatly increase the number of colorspaces that can be encoded while maintaining a very high level of interoperability between applications. This paper describes the color encoding architecture in the JPEG 2000 file format and shows how this new architecture meets the needs of tomorrow's imaging applications.
When image information is transmitted through a channel, the image data is sometimes corrupted by different error sources. To deal with the error characteristics of different types of transmission channels, JPEG-2000 provides several error resilience tools that can be used to improve decoding performance in the presence of errors. The error resilience tools currently available include resynchronization markers, short packet headers, arithmetic coding bypass, arithmetic coder termination and context reset, and segmentation symbols. Resynchronization markers enable the decoder to re-establish synchronization even when bit errors occur within a packet. Short packet headers provide the ability to move the header information within a packet into the tile or main header. Arithmetic coding bypass allows arithmetic coding to be stopped and raw data to be placed within the bitstream instead. The propagation of errors is reduced through arithmetic coder termination and context resetting, since these methods divide the arithmetically coded bitstream into independent coder elements. The addition of segmentation symbols at the end of each bitplane allows errors to be detected within the coded bitplanes of the code blocks. This paper gives a description and performance evaluation of the error resilience tools within JPEG-2000.
Ideally, when the same set of compression parameters is used, a compression algorithm should be idempotent over multiple cycles of compression and decompression. However, this condition is generally not satisfied for most images and compression settings of interest. Furthermore, if the image undergoes cropping before recompression, there is a severe degradation in image quality. In this paper we compare the multiple-compression-cycle performance of JPEG and JPEG2000. The performance is compared for different quantization tables (shaped or flat) and a variety of bit rates, with and without cropping. It is shown that, in the absence of clipping errors, it is possible to derive conditions on the quantization tables under which the image is idempotent to repeated compression cycles. Simulation results show that when images have the same mean squared error (MSE) after the first compression cycle, there are situations in which images compressed with JPEG2000 degrade more rapidly than those compressed with JPEG in subsequent compression cycles. The multiple-compression-cycle performance of JPEG2000 also depends on the specific choice of wavelet filters. Finally, we observe that in the presence of cropping, JPEG2000 is clearly superior to JPEG; when it is anticipated that images will be cropped between compression cycles with JPEG2000, use of the canvas system is recommended.
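The recompression experiment is easy to reproduce in spirit. The sketch below uses Pillow's baseline JPEG codec to track MSE against the original over repeated compress/decompress cycles, with an optional crop before each cycle to shift the 8x8 block grid; the quality setting, crop offset, and file name are illustrative choices rather than the paper's protocol, and the JPEG2000 side of the study would require a JPEG 2000 codec instead.

```python
# Sketch: measure MSE drift over repeated JPEG compression cycles,
# optionally cropping a few pixels before each recompression.
import io
import numpy as np
from PIL import Image

def recompress_cycles(img, cycles=5, quality=75, crop=0):
    """Return MSE (vs. the original pixels) after each compression cycle."""
    original = np.asarray(img, dtype=np.float64)
    cur = img
    top = 0                                        # cumulative crop offset
    mses = []
    for _ in range(cycles):
        if crop:                                   # cropping shifts the 8x8 block grid
            w, h = cur.size
            cur = cur.crop((crop, crop, w, h))
            top += crop
        buf = io.BytesIO()
        cur.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        cur = Image.open(buf).convert(img.mode)
        dec = np.asarray(cur, dtype=np.float64)
        ref = original[top:top + dec.shape[0], top:top + dec.shape[1]]
        mses.append(float(np.mean((dec - ref) ** 2)))
    return mses

# Example (hypothetical file name):
# print(recompress_cycles(Image.open("frame.png").convert("L"), crop=3))
```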
JPEG 2000 inverse scalar quantization includes a reconstruction rounding factor that has a range of allowable values within the standard. Although the standard notes a fixed value that works reasonably well in practice, implementations are allowed to use other values in an effort to improve the reconstructed image quality. This paper discusses some of the issues involved in adjusting the rounding factor.
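For a nonzero quantizer index q and step size Delta, the dequantizer reconstructs sign(q)(|q| + r)Delta, with the rounding factor r allowed in [0, 1). The sketch below simply exposes r as a parameter so that its effect on reconstructed values can be compared; the example values of r and Delta are illustrative, not recommendations.

```python
# Sketch: JPEG 2000-style inverse scalar quantization with an adjustable
# reconstruction rounding factor r (0 <= r < 1).  For a nonzero index q and
# step size delta, the reconstruction is sign(q) * (|q| + r) * delta.
import numpy as np

def dequantize(q, delta, r=0.5):
    q = np.asarray(q, dtype=np.float64)
    return np.sign(q) * (np.abs(q) + r) * delta * (q != 0)

# Example: compare two rounding factors on a few quantization indices.
indices = np.array([-3, -1, 0, 1, 2, 7])
print(dequantize(indices, delta=0.25, r=0.5))
print(dequantize(indices, delta=0.25, r=0.375))
```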
Implementation complexity will have a dramatic influence on the deployment of JPEG-2000 technology in consumer-market products. This paper addresses the computational complexity evaluation of the new standard's compression engine. A non-optimized C implementation of JPEG-2000 is profiled at run time using an automatic profiling tool built in-house. The computational complexity measurements generated by the profiler are presented and discussed, and a comparison between JPEG-2000 and baseline JPEG computational complexity is also presented.
This paper studies the implementation complexity of the new digital-imaging standard JPEG2000. It is shown that the new standard has significantly higher memory complexity than JPEG: the number of memory transfers almost doubles, and the size of the memory buffers increases by a factor of 40. Hence, a cost-efficient implementation of JPEG2000 is a real challenge.
Current image compression standards have not provided for the efficient compression of imagery with more than three components. The emerging JPEG 2000 image compression standard will include provisions for multiple-component imagery that enable decorrelation in the component direction. The standard has been defined in a flexible manner that allows the use of multiple transform techniques to take advantage of the correlation between components. These techniques allow the user to trade complexity against compression efficiency. This paper compares the compression efficiency of three techniques within the JPEG 2000 standard against other standard compression techniques. The results show that the JPEG 2000 algorithm significantly increases the compression efficiency of multiple-component imagery.
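One way to decorrelate the component direction, among the transform options this flexibility permits, is a Karhunen-Loeve transform computed from the inter-band covariance. The sketch below is an illustrative KLT round trip on a synthetic multi-band cube; it is not necessarily one of the three specific techniques evaluated in the paper.

```python
# Sketch: decorrelate the component (spectral) direction of a multi-band cube
# with a Karhunen-Loeve transform before per-band 2-D coding.
import numpy as np

def component_klt(cube):
    """cube: (bands, rows, cols) array. Returns (transformed cube, basis, mean)."""
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1).astype(np.float64)
    mean = x.mean(axis=1, keepdims=True)
    x0 = x - mean
    cov = x0 @ x0.T / x0.shape[1]            # band-by-band covariance
    _, vecs = np.linalg.eigh(cov)            # orthonormal eigenvectors
    basis = vecs[:, ::-1]                    # largest variance first
    y = basis.T @ x0                         # decorrelated components
    return y.reshape(bands, rows, cols), basis, mean

def component_klt_inverse(y, basis, mean):
    bands, rows, cols = y.shape
    x = basis @ y.reshape(bands, -1) + mean
    return x.reshape(bands, rows, cols)

# Round trip on random data (a stand-in for multi-component imagery):
cube = np.random.rand(6, 64, 64)
y, basis, mean = component_klt(cube)
assert np.allclose(component_klt_inverse(y, basis, mean), cube)
```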
A baseline mode of trellis coded quantization (TCQ), as used in JPEG 2000, is described along with the results of visual evaluations that demonstrate the effectiveness of TCQ over scalar quantization (SQ). Furthermore, a reduced-complexity TCQ mode is developed and described in detail. Numerical and visual evaluations indicate that its compression performance is nearly identical to baseline TCQ, but with a greatly reduced memory footprint and improved support for progressive image decoding.
JPEG2000 is the new standard for the compression of still images. Its objective and subjective image quality performance is superior to existing standards. In this paper the Part I of the JPEG2000 standard is presented in brief, and its implementation complexity on different computer platforms is reported.
A new approach to the design of rank-order filters, based on an explicit use of the spatial relations between image elements, is proposed. Many rank-order processing techniques may be implemented with this approach, such as noise suppression, local contrast enhancement, and local detail extraction. The performance of the proposed rank-order filters for suppressing strong impulsive noise in a test interferogram-like image is compared with conventional rank-order algorithms. The comparisons are made using mean square error, mean absolute error, and a subjective human visual criterion.
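For context, the conventional baseline such designs are compared against is an order-statistic filter evaluated over an explicit neighborhood footprint, which is itself a simple encoding of spatial relations. The sketch below shows that baseline with scipy; it does not reproduce the proposed filters, and the noise level and footprint are illustrative.

```python
# Sketch: conventional rank-order filtering over an explicit neighborhood
# footprint (the baseline family the proposed filters are compared against).
import numpy as np
from scipy.ndimage import rank_filter, median_filter

img = np.random.rand(128, 128)
noisy = img.copy()
mask = np.random.rand(*img.shape) < 0.05          # 5% impulsive noise
noisy[mask] = np.random.choice([0.0, 1.0], size=mask.sum())

footprint = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=bool)     # cross-shaped spatial relation
median_out = median_filter(noisy, footprint=footprint)
low_rank = rank_filter(noisy, rank=1, footprint=footprint)   # 2nd-smallest value

mse = lambda a, b: float(np.mean((a - b) ** 2))
print("MSE noisy:", mse(noisy, img), " MSE median:", mse(median_out, img))
```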
Tracking the motion of the lower limb is the basis for analyzing the motion of the whole body. In this paper, image sequences are transformed into distance images in which each pixel value represents that pixel's distance to the background. The process is as follows. We first obtain binary image sequences from the original sequences by threshold selection and extract the region of interest, namely the leg; pixels within the lower limb are set to '1' and background pixels to '0'. Each binary image is then scanned from top to bottom, and every pixel within the object is compared with its neighbors to obtain a new value. Denoting the binary image by f(i,j), where (i,j) are the pixel coordinates, the forward scan produces g(i,j) = 0 if f(i,j) = 0, and g(i,j) = min[g(i-1,j)+1, g(i,j-1)+1] if f(i,j) = 1. The image g(i,j) is then scanned from bottom to top, and each object pixel is again compared with its already-updated neighbors to obtain h(i,j) = min[g(i,j), h(i+1,j)+1, h(i,j+1)+1]. We take h(i,j) as the distance image of the original image f(i,j): it gives each pixel's minimum distance to the background. The distance-image sequences capture the geometric characteristics of the images well. The paper proposes a correlation-based matching method on the distance images and gives matching results for four joints of the leg. The results show that this is an effective method.
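The two-pass recurrence above amounts to a city-block distance transform, sketched directly below; the border handling (treating out-of-image neighbors as far away) and the synthetic test silhouette are implementation choices for illustration.

```python
# Sketch: two-pass city-block distance transform of a binary silhouette,
# following the forward/backward recurrences described above.
import numpy as np

def distance_image(f):
    """f: 2-D array of 0/1 (1 = lower-limb pixel). Returns distance to background."""
    rows, cols = f.shape
    big = rows + cols                       # larger than any possible distance
    g = np.zeros((rows, cols), dtype=np.int32)
    # Forward pass: top-left to bottom-right.
    for i in range(rows):
        for j in range(cols):
            if f[i, j] == 0:
                g[i, j] = 0
            else:
                up = g[i - 1, j] if i > 0 else big
                left = g[i, j - 1] if j > 0 else big
                g[i, j] = min(up + 1, left + 1)
    # Backward pass: bottom-right to top-left.
    h = g.copy()
    for i in range(rows - 1, -1, -1):
        for j in range(cols - 1, -1, -1):
            down = h[i + 1, j] if i < rows - 1 else big
            right = h[i, j + 1] if j < cols - 1 else big
            h[i, j] = min(h[i, j], down + 1, right + 1)
    return h

# Tiny example: a solid 5x5 block inside a 1-pixel background frame.
f = np.zeros((7, 7), dtype=np.int32)
f[1:6, 1:6] = 1
print(distance_image(f))
```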
In this work, four different algorithms for image classification have been applied to AVHRR images from the NOAA-14 polar-orbiting satellite. The goal is to assign each pixel to one of the following classes: Sea, Land, Cloud, Cloud Edge, or Out of the scene. The last class is needed because the images are always resampled to the same geographical limits; thus, when the satellite is low on the horizon, some pixels can fall outside the radiometer's view. The first method is based on thresholds on the value of the NDVI (Normalized Difference Vegetation Index) alone. The second method is based on thresholds on 4 of the 5 AVHRR bands. The third method is a simple Isodata algorithm applied to the same 4 bands used by the second method and to the same 2 bands used by the first method to compute the NDVI. The fourth method is a fuzzy C-Means algorithm applied to the same sets of bands as the first and second methods. Finally, some improvements are proposed and discussed with a focus on actual NOAA-14 images. The results are also discussed in terms of computational time and classification behavior. For comparison purposes, the second method (thresholds on 4 of the 5 AVHRR bands) is taken as the truth, and all results are given with reference to it.
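The first method reduces to thresholding the NDVI computed from the red and near-infrared reflectances. The sketch below illustrates that idea; the threshold values, the fill value marking out-of-scene pixels, and the omission of the Cloud Edge class are placeholders rather than the paper's calibrated settings.

```python
# Sketch: NDVI-threshold classification of pixels (in the spirit of method 1).
# Threshold values here are placeholders for illustration only.
import numpy as np

def classify_ndvi(red, nir, fill_value=-999.0, sea_max=0.0, cloud_max=0.1):
    """Label each pixel: 0=Out of scene, 1=Sea, 2=Cloud, 3=Land."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)
    labels = np.full(red.shape, 3, dtype=np.uint8)            # default: Land
    labels[ndvi <= cloud_max] = 2                             # Cloud
    labels[ndvi <= sea_max] = 1                               # Sea
    labels[(red == fill_value) | (nir == fill_value)] = 0     # Out of scene
    return labels, ndvi

# Example on synthetic reflectances:
red = np.random.rand(4, 4)
nir = np.random.rand(4, 4)
labels, ndvi = classify_ndvi(red, nir)
print(labels)
```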
Remotely sensed imagery can be used to assess the results of natural disasters such as floods. The imagery can be used to predict the extent of a flood, to develop methods to control a flood, and to assess the damage caused by a flood. This paper addresses the information derived from two different sources: Interferometric Synthetic Aperture Radar (IFSAR) and Light Detection and Ranging (LIDAR). The study shows how the information differs and how it can be fused to better analyze flood problems. LIDAR and IFSAR data were collected over the same Lakewood area of Los Angeles, California, as part of a Federal Emergency Management Agency (FEMA)-sponsored data collection. Lakewood is located in a floodplain and is of interest for updating floodplain maps. IFSAR is an active sensor that can penetrate clouds and provides three separate digital files for analysis: magnitude, elevation, and correlation. LIDAR provides elevation and magnitude files; however, for this study only the elevation values were provided. The LIDAR elevation data are more accurate and more densely sampled than the IFSAR data. In this study, the above information is used to produce charts with information relevant to floodplain mapping. To produce relevant information, the data had to be adjusted for different coordinate systems, different sampling rates, vertical and horizontal post-spacing differences, and orientation differences between the IFSAR and LIDAR data sets. This paper describes the methods and procedures used to transform the data sets to a common reference.
The effectiveness of image applications is directly based on their ability to resolve ambiguity and uncertainty in real images. This requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the essence of the image understanding problem. This article presents a generic computational framework for the solution of the image understanding problem: the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. The dual representation provides a natural transformation of continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time, and are able to perform the graph and diagrammatic operations that are the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device,' giving the ability to analyze and to run top-down algorithms. Symbols emerge naturally there, and symbolic operations work in combination with new, simplified methods of computational intelligence. This makes images and scenes self-describing and provides flexible ways of resolving uncertainty. Classification of images that is truly invariant to any transformation could be done by matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.
The bubble rise velocity in air/water two-phase flow in a vertical pipe is studied experimentally by processing consecutive series of digitized video images. The shape of the bubble, as well as its instantaneous velocity, is measured using binary image processing techniques. Digital image processing algorithms have been developed to obtain the coordinates of specified points in the image. These coordinate data are used to calculate the instantaneous bubble velocity, which is derived from the separation distance between two consecutive image frames. This method has many advantages: it is non-invasive and does not require sophisticated laboratory equipment. Images are digitized directly using a CCD digital camera, and the image analysis involves purely computer-based computation. Hence, reasonably accurate velocity data have been obtained from a single experiment in a short period of time.
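The velocity computation itself reduces to a centroid displacement between consecutive binarized frames, scaled by the pixel pitch and frame rate. The sketch below illustrates this; the pixel pitch, frame rate, and the synthetic disk-shaped bubble are assumptions for demonstration only.

```python
# Sketch: instantaneous bubble velocity from centroid displacement between
# two consecutive binary frames.  Pixel pitch and frame rate are illustrative.
import numpy as np

def centroid(binary):
    ys, xs = np.nonzero(binary)
    return np.array([xs.mean(), ys.mean()])     # (x, y) in pixels

def bubble_velocity(frame_a, frame_b, mm_per_pixel=0.2, fps=30.0):
    """Velocity vector in mm/s between two consecutive binarized frames."""
    disp_px = centroid(frame_b) - centroid(frame_a)
    return disp_px * mm_per_pixel * fps

# Synthetic example: a bubble (disk) rising by 3 pixels between frames.
yy, xx = np.mgrid[0:100, 0:100]
frame_a = (xx - 50) ** 2 + (yy - 60) ** 2 < 8 ** 2
frame_b = (xx - 50) ** 2 + (yy - 57) ** 2 < 8 ** 2
print(bubble_velocity(frame_a, frame_b))        # approx [0, -18] mm/s (upward)
```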
This paper describes the structure of the solar image acquisition system of the Space Solar Telescope, covering white-light, magnetic field, velocity field, and X-ray images. High-resolution CCD cameras with 2048 x 1024 pixels will be used in the Space Solar Telescope, which requires an image processor with high processing speed and a high compression ratio, because all images must be stored in a limited memory before they are sent to the receiving station on Earth. To obtain solar magnetic field and velocity field images, the image processor must process the images from the CCD camera in real time; this makes it possible to capture short-lived solar features and to save observing time. Here we discuss how a powerful DSP can be used for this task, along with some compression methods for storing and transferring the resulting image data.
An algorithm for the real-time detection, localization, and tracking of periodic signals, which appear as few-pixel blobs in video image sequences and have a characteristic binary pattern in the temporal domain, was developed, implemented, and tested. All stages of the algorithm are described and discussed: (1) initial estimation and continuous re-estimation of global motion; (2) local 'signal augmentation' (the crucial step for detecting small-sized signals in image sequences); (3) temporal-domain, followed by spatial-domain, background estimation and subtraction; (4) binarization; (5) preliminary temporal-domain pattern matching and subsequent pixel tracking; (6) signal detection and localization by post-processing and clustering applied to the set obtained in (5); (7) temporal and spatial tracking of the located signals. The algorithm was tested by processing simulated as well as real image sequences, and the results are discussed. The algorithm enables efficient detection of visible signals, provides reasonable detection even of signals invisible to the eye, and yields an acceptable false-alarm probability. It has also proved to be insensitive to local motion in a video film, to camera motion, to intensity changes, and to weak flickering of the background. Strong flickering of the background can, however, decrease the probability of detection.
Stereo matching is the key problem in measuring depth information from disparity in stereo vision. In this paper, a new stereo matching algorithm, combining a modified genetic algorithm with the classic feature matching technique, is presented. The chromosome construction and the fitness function for stereo matching are proposed. According to the correspondence constraints of stereo matching, the genetic algorithm is applied to this problem. Finally, an experiment using the proposed method is presented, and the experimental results show that the proposed algorithm is more applicable and stable.
In this paper, we present a vehicle detection framework that aims at avoiding collisions and warning of dangerous situations while driving on a road at night. Potential obstacles (vehicles and motorcycles) are detected from image sequences by a vision system that processes the images given by a Charge Coupled Device (CCD) camera mounted on a moving car. The position and number of vehicles are computed from these image sequences using several image processing techniques.
Wavelet-based compression is gaining popularity due to its promising compaction properties at low bit rates. The zerotree wavelet image coding scheme efficiently exploits the multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve higher compression by adding new zerotree and significance symbols to the original EZW coder. In contrast to the four symbols of the basic EZW scheme, the modified algorithm uses eight symbols to generate fewer bits for a given data set. The subordinate pass of EZW is eliminated and replaced with the transmission of a fixed residual value for easy implementation. This modification simplifies the coding technique and speeds up the process, while retaining the property of embeddedness.
A method and device for the remote technological monitoring of the volumes and masses of axisymmetric objects (freely falling hot glass drops) in real time are presented. A main feature of the method is that the volume is measured under the hypothesis of a topological similarity between parallel cross-sections of the body and their optical images in the plane of registration. The method combines well with television registration techniques. A system for the remote measurement of the volumes and masses of freely falling hot glass drops has been realized and is operating in industry. A measurement error of about 0.5% is achieved.
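Under the axial-symmetry assumption, each image row of the silhouette can be treated as the diameter of a circular cross-section, so the volume follows as a solid of revolution and the mass from the glass density. The sketch below illustrates this idea on a synthetic silhouette; the pixel scale and density value are assumptions, and the paper's similarity hypothesis relating cross-sections to their optical images is not modeled here.

```python
# Sketch: volume (and mass) of an axisymmetric drop from a single binary
# silhouette, treating each image row as a circular cross-section.
import numpy as np

def drop_volume_mm3(silhouette, mm_per_pixel=0.1):
    """silhouette: 2-D boolean array, True inside the drop, symmetry axis vertical."""
    widths_px = silhouette.sum(axis=1).astype(np.float64)   # chord length per row
    radii_mm = 0.5 * widths_px * mm_per_pixel
    slice_thickness_mm = mm_per_pixel
    return float(np.sum(np.pi * radii_mm ** 2) * slice_thickness_mm)

# Synthetic check: a sphere of radius 2 mm (analytic volume ~ 33.51 mm^3).
yy, xx = np.mgrid[0:60, 0:60]
sphere = (xx - 30) ** 2 + (yy - 30) ** 2 < 20 ** 2            # radius 20 px = 2 mm
vol = drop_volume_mm3(sphere, mm_per_pixel=0.1)
mass_g = vol * 2.5e-3                                         # glass ~ 2.5 g/cm^3
print(round(vol, 2), "mm^3,", round(mass_g, 3), "g")
```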
In this paper we present an application of image processing techniques to food science: a method for determining the fat content of beef. The meat industry faces a permanent need for improved methods of meat quality evaluation, researchers want improved techniques to deepen their understanding of meat features, and consumers' expectations of meat quality grow constantly, which creates the need for quality control. Recent advances in computer and video processing have created new ways to monitor quality in the food industry. We investigate the use of a new technology to control the quality of food: NMR imaging. The inherent advantages of NMR images are many; chief among these is the unprecedented contrast between the various structures present in meat, such as muscle, fat, and connective tissue. Moreover, the three-dimensional nature of the NMR method allows us to analyze isolated cross-sectional slices of the meat and to measure the volumetric content of fat, not only the fat visible on the surface. We propose a segmentation algorithm for the detection of fat, together with a filtering technique to remove the intensity inhomogeneities in NMR images caused by non-uniformities of the magnetic field during acquisition. Measurements have been successfully correlated with chemical analysis and digital photography. The results show that the NMR technique is a promising non-invasive method for determining the fat content of meat.
Recently, a real-time imaging system based on terahertz (THz) time-domain spectroscopy has been developed. This technique offers a range of unique imaging modalities due to the broad bandwidth, sub-picosecond duration, and phase-sensitive detection of the THz pulses. This paper provides a brief introduction to the state of the art in THz imaging. It also focuses on expanding the potential of this new and exciting field through two major efforts. The first concentrates on improving the experimental sensitivity of the system: we are exploring an interferometric arrangement to provide a background-free reflection imaging geometry. The second applies novel digital signal processing algorithms to extract useful information from the THz pulses. The possibility exists to combine spectroscopic characterization and/or identification with pixel-by-pixel imaging. We describe a new parameterization algorithm for both high and low refractive index materials.
Results of the unsupervised classification of a composite two-layered image are presented. The image consists of a first layer of TNDVI data, calculated from a LANDSAT-TM4 image, and a second layer consisting of a DEM raster created from a digital vector map. Different approaches to co-registering the layers were tested: nonlinear polynomial transformation of the raster TNDVI image to the digital map projection; nonlinear transformation of the coordinates of the digital map point coverage to the LANDSAT-TM4 image coordinate system followed by creation of the DEM raster; and creation of the DEM raster from point elevation data followed by nonlinear transformation of the DEM to the LANDSAT-TM4 image coordinate system. An unexpectedly large difference in cluster shape and position across these approaches during unsupervised classification is shown and analyzed.
The main goal of color image enhancement is to sharpen the contrast of an image while preserving its natural appearance. Previous work has processed color images in terms of their luminance, hue, and saturation attributes, but these methods have a problem: changing one attribute of a color can alter the others. In this paper, we analyze true-color image processing and show that the luminance of the image should first be adjusted to a suitable range; saturation enhancement at that luminance can then be performed in the RGB, L*u*v*, or L*a*b* color space to obtain the best saturation effect. We calculated some typical color differences produced by the luminance adjustment and saturation enhancement methods, and compared the color differences in these three color spaces. We conclude that the method should be chosen according to the particular color image. Several images were processed with these methods and the best visual effects were obtained.
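One of the operations discussed, scaling saturation while holding the adjusted luminance fixed, can be illustrated in the L*a*b* space by scaling the a* and b* (chroma) channels. The sketch below uses scikit-image for the color conversions; the gain value and the random input image are illustrative, and the luminance pre-adjustment step described above is not reproduced.

```python
# Sketch: chroma (saturation) scaling at fixed lightness in CIELAB.
import numpy as np
from skimage import color

def enhance_saturation_lab(rgb, gain=1.3):
    """rgb: float array in [0, 1], shape (H, W, 3). Scales a*, b* while keeping L*."""
    lab = color.rgb2lab(rgb)
    lab[..., 1] *= gain                      # a* channel
    lab[..., 2] *= gain                      # b* channel
    out = color.lab2rgb(lab)
    return np.clip(out, 0.0, 1.0)

# Example on a random image:
rgb = np.random.rand(32, 32, 3)
enhanced = enhance_saturation_lab(rgb, gain=1.2)
```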
In video compression, the motion compensation phase relies on a block matching step: we search two successive frames for a block in the later frame that corresponds to a block in the earlier frame, and apply a mathematical criterion to decide whether the two match. The proposed method addresses this matching phase; the usual criteria are Mean Absolute Difference (MAD), Mean Square Difference (MSD), Pel Difference Classification (PDC), and Integral Projection (IP). We propose a method based on subsampling while applying IP, then adding an adaptive phase through a preprocessed factor that depends on the nature of the frame content.
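The IP criterion compares the row and column sums (projections) of the candidate and reference blocks instead of all their pixels. The sketch below shows plain IP matching over a small search window; the subsampling and the adaptive, content-dependent factor proposed above are not reproduced, and the block size, search radius, and synthetic shift are illustrative.

```python
# Sketch: the Integral Projection (IP) matching criterion for block matching.
import numpy as np

def ip_cost(block_a, block_b):
    """Sum of absolute differences between row and column projections."""
    return (np.abs(block_a.sum(axis=1) - block_b.sum(axis=1)).sum()
            + np.abs(block_a.sum(axis=0) - block_b.sum(axis=0)).sum())

def ip_search(ref_frame, cur_frame, top, left, block=16, radius=7):
    """Best motion vector (dy, dx) for the block at (top, left) in cur_frame."""
    target = cur_frame[top:top + block, left:left + block]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref_frame.shape[0] or x + block > ref_frame.shape[1]:
                continue
            cost = ip_cost(ref_frame[y:y + block, x:x + block], target)
            if cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv, best

# Synthetic example: the current frame is the reference shifted by (2, -3).
ref = np.random.rand(64, 64)
cur = np.roll(np.roll(ref, -2, axis=0), 3, axis=1)
print(ip_search(ref, cur, top=24, left=24))   # expect a motion vector near (2, -3)
```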
A new filtering algorithm is presented that can remove impulse noise from corrupted images while preserving details. The algorithm is based on a new impulse detection technique that uses image gradients. The proposed impulse detector can effectively categorize all the pixels in an image into two classes: noise pixels and noise-free pixels. The noise-free pixels are left untouched, while the noise pixels are filtered by a noise canceller such as a median filter. Experimental results show that the proposed algorithm provides significant improvement over many existing techniques in terms of both subjective and objective evaluations. It also has the advantage of computational simplicity over those algorithms.
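The detect-then-filter structure can be illustrated with a simplified detector: flag pixels whose deviation from the local median exceeds a threshold, and replace only those. The sketch below follows that structure; it is not the gradient-based detector of the paper, and the threshold, window size, and noise level are illustrative.

```python
# Sketch: detect-then-filter impulse removal.  Only pixels flagged as impulses
# are replaced by the local median; noise-free pixels are left untouched.
import numpy as np
from scipy.ndimage import median_filter

def selective_median(img, threshold=0.25, size=3):
    med = median_filter(img, size=size)
    impulses = np.abs(img - med) > threshold        # crude impulse detector
    out = img.copy()
    out[impulses] = med[impulses]
    return out, impulses

# Example: corrupt 5% of pixels with salt-and-pepper noise, then restore.
img = np.random.rand(128, 128) * 0.5 + 0.25
noisy = img.copy()
mask = np.random.rand(*img.shape) < 0.05
noisy[mask] = np.random.choice([0.0, 1.0], size=mask.sum())
restored, flagged = selective_median(noisy)
print("MSE noisy:", float(np.mean((noisy - img) ** 2)),
      "MSE restored:", float(np.mean((restored - img) ** 2)))
```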
A robust image filter that preserves fine details and effectively suppresses multiplicative noise is presented. The filter is based on M (robust maximum likelihood) estimators and R (rank) estimators derived from the statistical theory of rank tests. The proposed filter consists of two stages. The first stage, which rejects impulsive noise, uses an adaptive spike detector and an M-estimator modified by a median estimator so that it can remove outliers. The second stage, a modified sigma filter, provides multiplicative noise suppression combined with the detail-preserving scheme of a Lee filter. Numerical analysis of the simulation results shows that the proposed filter offers good preservation of fine details, effective multiplicative noise suppression, and impulsive noise removal for different types of images with varying percentages of small details.
In this paper, we propose a new method to improve the performance of a duplicate video transmission system. The proposed method is based on an alternate temporal sub-sampling approach, in which the encoded even-numbered and odd-numbered pictures are transmitted through separate channels. At the receiver, the decoded pictures from both channels are combined by alternately choosing one from each channel. The proposed method uses the full capacity of both the regular and backup channels for failure-free video transmission. Experimental results show that the proposed alternate temporal sub-sampling method gives about 1.0 to 4.0 dB improvement in PSNR over the conventional simulcast method at bit rates of 2 to 15 Mbps per channel for failure-free video transmission.
In this paper we propose a novel lossless coding scheme for medical images that allows the final user to switch between a lossy and a lossless mode. This is done by means of a progressive reconstruction philosophy (which can be interrupted at will), so we believe our scheme provides a way to trade off the accuracy needed for medical diagnosis against the information reduction needed for storage and transmission. We combine vector quantization, run-length bit-plane coding, and entropy coding. Specifically, the first step is a vector quantization procedure; the centroid codes are Huffman-coded using a set of probabilities calculated in the learning phase. The image is reconstructed at the coder in order to obtain the error image; this second image is divided into bit planes, which are then run-length and Huffman coded. A second statistical analysis is performed during the learning phase to obtain the parameters needed in this final stage. Our coder is currently trained for hand radiographs and fetal echographies. We compare our results for these two types of images with classical results on bit-plane coding and with the JPEG standard; our coder outperforms both.
There are occasions when a moving object must be extracted from an image sequence, for example in remote sensing and robot vision, and the process requires highly accurate extraction and simple realization. In this paper, we propose a design method for an optimal filter in the frequency-time mixed domain. Frequency-selective filters for dynamic images are usually designed in the 3-D frequency domain, but their design is difficult because of the large number of parameters. Using the frequency-time mixed domain (MixeD), which consists of a 2-D frequency domain and a 1-D time domain, filter design becomes easier. However, the desired and noise frequency components of an image usually tend to concentrate near the origin of the frequency domain, so conventional frequency-selective filters have difficulty distinguishing between them. We propose an optimal filter in the MixeD in the least-mean-square-error sense. First, a 2-D spatial Fourier transform is applied to the dynamic images; then, at each point in the 2-D frequency domain, the designed FIR filter is applied to the 1-D time signal. In designing the optimal filter, the following information is used to determine its characteristics: (1) the finite number of input frames, (2) the velocity vector of the desired signal, and (3) the power spectrum of the noise signal. Signals constructed from this information enter the evaluation function, which determines the filter coefficients. After filtering, a 2-D inverse Fourier transform is applied to obtain the extracted image.
In this paper, we propose a wavelet-based image compression method using shuffling and bit-plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the direction of maximum correlation. The experimental results are comparable to those of existing coders, and superior for some images with low correlation.
Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and there have been many contributions to increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but the total amount of classified output information equals the amount of input information, a property we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
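The Golomb-Rice stage used in the BFC+GR configuration encodes a non-negative integer n with parameter k as a unary quotient (n >> k) followed by k remainder bits. The sketch below is a minimal encoder/decoder for that stage alone; the BFC classification step and the mapping of signed data to non-negative integers are not shown.

```python
# Sketch: Golomb-Rice coding of non-negative integers with parameter k.
# Each value n is split into a quotient (n >> k), sent in unary, and a
# k-bit remainder, sent in binary.
def gr_encode(values, k):
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.extend([1] * q + [0])                  # unary quotient, 0-terminated
        bits.extend((r >> i) & 1 for i in range(k - 1, -1, -1))
    return bits

def gr_decode(bits, k, count):
    values, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:                       # read the unary part
            q += 1
            pos += 1
        pos += 1                                    # skip the terminating 0
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[pos]
            pos += 1
        values.append((q << k) | r)
    return values

data = [0, 3, 7, 12, 1, 5]
code = gr_encode(data, k=2)
assert gr_decode(code, k=2, count=len(data)) == data
print(len(code), "bits for", data)
```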
The wavelet-based JPEG 2000 image compression standard is flexible enough to handle a large number of imagery types in a broad range of applications. One important application is the use of JPEG 2000 to compress imagery collected by remote sensing systems. This general class of imagery is often larger, in terms of number of pixels, than most other classes of imagery. Support for tiling and the embedded, progressively ordered bit stream of JPEG 2000 are very useful in handling very large images. However, the performance of JPEG 2000 on detected SAR (Synthetic Aperture Radar) and other kinds of specular imagery is not as good, from the perspective of visual image quality, as its performance on more 'literal' imagery types. In this paper, we try to characterize the problem by analyzing some statistical and qualitative differences between detected SAR and other, more literal remote sensing imagery types. Several image examples are presented to illustrate the differences. JPEG 2000 is very flexible and offers a wide range of options that can be used to optimize the algorithm for a particular imagery type or application. A number of different JPEG 2000 options, including subband weighting, trellis-coded quantization (TCQ), and packet decomposition, are explored for their impact on SAR image quality. Finally, the anatomy of a texture-preserving wavelet compression scheme is presented, with very impressive visual results. The demonstration system used for this paper is not currently supported by the JPEG 2000 standard, but it is hoped that, with additional research, a variant of the scheme can be fit into the framework of JPEG 2000.
This paper introduces a fast block-based motion estimation (BME) algorithm based on matching projections. The idea is simple: blocks cannot match well if their corresponding 1-D projections do not match well. We can take advantage of this observation to translate the expensive 2-D block matching problem into a simpler 1-D matching one by quickly eliminating a majority of matching candidates. Our motion estimation algorithm offers computational scalability through a single parameter, and the global optimum can still be achieved. Moreover, an efficient implementation to compute projections and to buffer recyclable data is also presented. Experiments show that the proposed algorithm is several times faster than the exhaustive search algorithm with nearly identical prediction performance. With the proposed BME method, high-performance, real-time, all-software video encoding starts to become practical for reasonable video sizes.
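The observation can be turned into a safe pruning rule: the SAD between 1-D projections never exceeds the full 2-D SAD, so any candidate whose projection SAD already reaches the best full SAD found so far can be skipped without losing the global optimum. The sketch below demonstrates this; the block size, search radius, and bookkeeping are illustrative rather than the paper's buffered implementation.

```python
# Sketch: fast block matching with projection-based candidate elimination.
# The SAD between row (or column) projections is a lower bound on the full
# 2-D SAD, so candidates whose projection SAD already exceeds the best full
# SAD found so far can be skipped without losing the global optimum.
import numpy as np

def projection_lower_bound(a, b):
    return max(np.abs(a.sum(axis=1) - b.sum(axis=1)).sum(),
               np.abs(a.sum(axis=0) - b.sum(axis=0)).sum())

def fast_block_match(ref, cur, top, left, block=16, radius=7):
    target = cur[top:top + block, left:left + block]
    best_sad, best_mv, tested = np.inf, (0, 0), 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block]
            if projection_lower_bound(cand, target) >= best_sad:
                continue                             # cheap rejection
            sad = np.abs(cand - target).sum()
            tested += 1
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad, tested

ref = np.random.rand(64, 64)
cur = np.roll(ref, (3, -2), axis=(0, 1))
print(fast_block_match(ref, cur, top=24, left=24))   # expect motion vector near (-3, 2)
```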
In this paper, we present a systematic approach to designing two families of fast multiplierless approximations of the DCT with the lifting scheme, based on two kinds of factorizations of the DCT matrix into Givens rotations. A scaled lifting structure is proposed to reduce the complexity of the transform. Analytical values of all lifting parameters are derived, from which dyadic values with different accuracies can be obtained through finite-length approximations. This enables low-cost, fast implementations with only shift and addition operations. In addition, a sensitivity analysis is developed for the scaled lifting structure, which shows that for certain rotation angles a permuted version of it is more robust to truncation errors. Numerous approximation examples with different complexities are presented for the 8-point and 16-point DCT. As the complexity increases, more accurate approximations of the floating-point DCT are obtained in terms of coding gain, frequency response, and mean square error of the DCT coefficients. Hence the lifting-based fast transform can be easily tailored to meet the demands of different applications, making it suitable for hardware and software implementations in real-time and mobile computing applications.
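A single Givens rotation [[c, -s], [s, c]] factors into three lifting steps with multipliers p = (c - 1)/s and u = s; replacing p and u with dyadic fractions yields a shift-and-add realization. The sketch below checks this factorization numerically for one angle; the angle, word length, and floating-point arithmetic are illustrative, and the scaled structure and permutations of the paper are not reproduced.

```python
# Sketch: a Givens rotation R(theta) = [[c, -s], [s, c]] realized with three
# lifting steps, whose multipliers are then replaced by dyadic fractions so
# each step needs only shifts and additions.
import numpy as np

theta = np.pi / 8                      # an angle of the kind used in DCT factorizations
c, s = np.cos(theta), np.sin(theta)
p, u = (c - 1.0) / s, s                # exact lifting multipliers

def dyadic(v, bits=6):
    return np.round(v * (1 << bits)) / (1 << bits)

def lifted_rotation(x, y, p, u):
    x = x + p * y                      # lifting step 1
    y = y + u * x                      # lifting step 2
    x = x + p * y                      # lifting step 3
    return x, y

x, y = 0.7, -1.3
exact = (c * x - s * y, s * x + c * y)
print("exact  :", exact)
print("lifting:", lifted_rotation(x, y, p, u))            # matches exact
print("dyadic :", lifted_rotation(x, y, dyadic(p), dyadic(u)))  # small truncation error
```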
Compressed video bitstreams require protection from channel errors in a wireless channel and protection from packet loss in a wired ATM channel. The three-dimensional (3-D) SPIHT coder has proved its efficiency and its real-time capability in the compression of video. A forward-error-correcting (FEC) channel code (RCPC) combined with a single ARQ (automatic repeat request) proved to be an effective means of protecting the bitstream. There were two problems with this scheme: the noiseless reverse channel required for ARQ may not be feasible in practice, and, in the absence of channel coding and ARQ, the decoded sequence was hopelessly corrupted even for relatively clean channels. In this paper, we first show how to make the 3-D SPIHT bitstream more robust to channel errors by breaking the wavelet transform into a number of spatio-temporal tree blocks that can be encoded and decoded independently. This procedure brings the added benefit of parallelization of the compression and decompression algorithms. We then demonstrate the packetization of the bit stream and the reorganization of these packets to achieve scalability in bit rate and/or resolution in addition to robustness. Each packet is then encoded with a channel code. Not only does this protect the integrity of the packets in most cases, but it also allows detection of packet decoding failures, so that only the cleanly recovered packets are reconstructed. This procedure obviates ARQ, because the performance is only about 1 dB worse than normal 3-D SPIHT with FEC and ARQ. Furthermore, the parallelization makes real-time implementation in hardware and software possible.
3-D wavelet-based scalable video coding provides a viable alternative to standard MC-DCT coding. However, many current 3-D wavelet coders experience severe boundary effects across group of pictures (GOP) boundaries. This paper proposes a memory-efficient transform technique, via lifting, that computes the wavelet transform of a video sequence continuously on the fly, thus eliminating the boundary effects due to the limited length of individual GOPs. Coding results show that the proposed scheme completely eliminates the boundary effects and gives superb video playback quality.