The QM-Coder is an adaptive binary arithmetic coder for the JPEG and JBIG image compression standards. It employs a probability estimation state table (a finite state machine) in which the present state corresponds to the current probability estimate. The estimate takes the form of the augend value that (on encode) is added to the code string if the less probable symbol occurs. The state changes only when the arithmetic coder experiences a renormalization. The QM-AYA coder is derived from the Q-Coder and the QM-Coder. A modified Metropolis method was used to fine-tune the QM-AYA augend values for improved compression performance. Heuristics in the search strategy for determining the next change in augend values reduce the estimated 54-year run time of the generalized annealing algorithm to 4 weeks.
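As a concrete illustration of the state-table mechanism, here is a minimal Python sketch; the table entries below are placeholders, not the actual QM-Coder or QM-AYA augend tables.

```python
# Minimal sketch of QM-style state-table probability estimation.
# Each entry is (augend Qe, next state after MPS renorm,
# next state after LPS renorm, switch MPS/LPS). Values are
# illustrative placeholders; the real tables have many more states.
STATE_TABLE = [
    (0x5A1D, 1, 1, True),
    (0x2586, 2, 0, False),
    (0x1114, 2, 1, False),
    # ... remaining states elided ...
]

class Estimator:
    def __init__(self):
        self.state = 0
        self.mps = 0                 # current more-probable symbol

    def lps_augend(self):
        """Augend added to the code string when the LPS occurs (on encode)."""
        return STATE_TABLE[self.state][0]

    def on_mps_renorm(self):
        # the state advances only when the coder renormalizes
        self.state = STATE_TABLE[self.state][1]

    def on_lps_renorm(self):
        _, _, next_lps, switch = STATE_TABLE[self.state]
        if switch:                   # near p = 0.5 the MPS/LPS roles flip
            self.mps ^= 1
        self.state = next_lps
```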
This paper deals with still image subband coding with adaptive filter banks. Based on an analysis of the distortions produced by coders with fixed filter banks of different characteristics, a scheme with spatially adaptive filter banks is developed. The main objective is to construct a subband coder in which both ringing noise and blocking artifacts are reduced. To achieve this, suitable filter responses are selected with respect to local image characteristics; the selection is based on measurements of edge busyness and edge orientation. A second objective is to reduce the border distortions caused by the circular extension method by employing filters with short unit pulse responses in the border regions. It turns out that subband coding with adaptive filter banks, in combination with an optimum filter selection strategy, achieves a good trade-off between ringing noise and blocking effects. We conclude that the presented scheme for automatic filter selection works well for a large class of images with reasonably high contrast. For 'smooth-looking' images, further work is needed to improve the robustness of the algorithm. The use of adaptive filtering in the image border regions solves, to some extent, the distortion problems encountered when using the circular extension method.
Traditionally the distribution of the prediction error has been treated as a single-parameter Laplacian distribution, and based on this assumption one can design a set of Huffman codes selected through an estimate of that parameter. More recently, the prediction error distribution has been compared to a Gaussian distribution about mean zero when the variance is relatively high. However, when quantized prediction errors are used in the context model, the relatively high variance case is seen to merge the conditional distributions surrounding both positive edges and negative edges. Edge information is available from large negative or positive prediction errors at the neighboring pixel positions. In these cases, the mean of the distribution is usually not zero. By separating these two cases, making appropriate assumptions on the mean of the context-dependent error distribution, and applying other techniques, additional cost-effective compression can be achieved.
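A minimal sketch of this kind of sign-preserving context modeling with a per-context mean (bias) correction; the context formation and update rules below are hypothetical illustrations, not the paper's exact scheme.

```python
# Sketch: form contexts from quantized neighboring prediction errors,
# keeping the sign so positive-edge and negative-edge contexts stay
# distinct, and track a per-context mean to correct the prediction.
import numpy as np

def context_id(err_left, err_up, q=8):
    """Context from quantized neighbor errors; sign is preserved."""
    ql = int(np.clip(err_left // q, -3, 3))
    qu = int(np.clip(err_up // q, -3, 3))
    return (ql + 3) * 7 + (qu + 3)        # 49 contexts

class BiasCorrector:
    """Add each context's running mean error to the prediction, so the
    coded residual is approximately zero-mean within every context."""
    def __init__(self, n_contexts=49):
        self.sum = np.zeros(n_contexts)
        self.count = np.zeros(n_contexts)

    def correct(self, ctx, prediction):
        bias = self.sum[ctx] / self.count[ctx] if self.count[ctx] else 0.0
        return prediction + bias

    def update(self, ctx, actual_error):
        self.sum[ctx] += actual_error
        self.count[ctx] += 1
```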
Vector quantization (VQ) has emerged as a viable and practical bandwidth compression technique due to its promising performance and a simple decompression architecture that requires a single look-up table. The improvement in the performance of VQ is realized only when large-dimension input vectors can be utilized. This is hampered by the exponential growth in the complexity and storage requirements of VQ for large-dimension vectors. This presentation summarizes the results of a study of the hardware complexity of VQ based on both Linde-Buzo-Gray (LBG) classification and neural networks (NNs). The study shows that a single-chip implementation of large-dimension VQ at video rates using either the LBG or the NN approach is not feasible if a full search algorithm is utilized. Modified forms of LBG VQ, with suboptimal performance, can be implemented on a single chip at moderate vector dimensions and bit rates. The most efficient implementation of neural network vector quantization (NNVQ) is one that uses a combination of an analog and a digital chip.
A number of new approaches to image coding are being developed to meet the increasing need for a broader range of quality and usage environments for image compression. Several of these new approaches, such as subband coders, are intended to provide higher quality images at the same bit rate as the JPEG standard, because they are not subject to end-of-block artifacts, or because they are inherently better attuned to the image representation that occurs in the peripheral visual system. Still, in the absence of a pertinent quality criterion, the quality and performance of subband or wavelet coders can be mediocre. Over the past few years we have developed the elements of a methodology applicable to this problem. Last year at the SPIE in San Diego we reported a comparison of coding techniques based on a new Picture Quality Scale (PQS); in that work, we were able to rate coders designed by any criterion on the basis of performance and quality. The problem we now consider is designing the coding technique itself so as to provide better quality or a lower bit rate. Image quality, as evaluated by PQS, depends on a combination of several objective distortion factors that can be identified with perceived coding artifacts, but designing coders using all the factors is much too complex for an analytical approach. We make use instead of two design methodologies. The first is to optimize the design of an existing subband coder using PQS as a distortion metric. The second makes use of a methodology for the design of linear filters based on properties of human perception that we have developed previously and that may provide a tractable design method.
Image compression consists of an image data transformation followed by a data encoding. The transformation prepares the data so that the encoding will result in significant compression. PROJECTRON is an iterative, adaptive algorithm that transforms the data by extracting optimal linear features which may be used to provide a compact representation of the image suitable for encoding. The algorithm generates features related to principal components and the Karhunen-Loeve transform. Use of the algorithm for image data compression is discussed.
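PROJECTRON's update rule is not spelled out in this abstract; as a generic point of reference, the sketch below uses Oja's rule, a classic iterative scheme that extracts the leading principal component (a Karhunen-Loeve basis vector) from data samples.

```python
# Oja's rule: an iterative, adaptive extractor of the first principal
# component. Shown only as a reference for this family of algorithms,
# not as PROJECTRON itself.
import numpy as np

def oja_first_component(samples, lr=0.01, epochs=50):
    """samples: (num_samples, dim) array; returns a unit vector."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=samples.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in samples:
            y = w @ x
            w += lr * y * (x - y * w)   # Hebbian term with weight decay
    return w / np.linalg.norm(w)        # converges to the top eigenvector
```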
A subband decomposition using wavelet filters has become a popular and effective basis for image compression. The use of a ladder structure for the subband filtering provides a fast implementation and allows the system to be useful at video rates. In this paper, a system is proposed that provides good quality video for CIF-sized images transmitted at rates from 240 kbit/s down to 128 kbit/s (basic-rate ISDN). The system uses subband coding, motion-compensated prediction, adaptive DCT coding, and entropy-constrained quantization to achieve the desired bit rate and image quality.
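A minimal sketch of one ladder (lifting) stage, using the well-known LeGall 5/3 predict/update steps; border handling and integer rounding are simplified here, and this is offered only as an illustration of why the ladder structure is fast.

```python
# One-level ladder (lifting) decomposition using the LeGall 5/3 steps;
# assumes len(x) is even. Borders use simple edge replication, which a
# production coder would refine.
def lifting_53_forward(x):
    even = x[0::2]
    odd = x[1::2]
    n = len(odd)
    # Predict step: detail = odd sample minus average of neighboring evens
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) / 2.0
         for i in range(n)]
    # Update step: smooth = even sample plus a quarter of neighboring details
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, n - 1)]) / 4.0
         for i in range(len(even))]
    return s, d   # lowpass and highpass subband coefficients
```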
Recently, multiresolution representations have been successfully used in a broad class of applications in the image analysis and image processing domain. In particular, it has been demonstrated that discrete multiresolution (or wavelet) transforms are well suited for image coding in both high-rate and low-rate compression, and wavelet-based image compression techniques have surpassed the other methods known to date. In this paper a low-complexity, high-performance method for image data compression in the discrete wavelet transform domain is presented. The method is based on adaptive run-length coding of the quantized wavelet coefficients and utilizes a dynamic Huffman coder. A simplified version of the method employs a very simple adaptive coding scheme based on a recency rank coder instead of the dynamic Huffman coder. On the decoder side, locally adaptive amplitude reconstruction can optionally be used; the gain of this optimized reconstruction is about 0.3 to 0.5 dB.
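The recency rank coder is essentially a move-to-front transform; a minimal sketch, with alphabet and usage illustrative:

```python
# Recency rank (move-to-front) coding sketch. Frequently repeated
# quantized coefficients (e.g. zero runs) map to small ranks, which a
# simple entropy coder can then represent compactly.
def mtf_encode(symbols, alphabet):
    table = list(alphabet)
    ranks = []
    for s in symbols:
        r = table.index(s)                # recency rank: 0 = seen most recently
        ranks.append(r)
        table.insert(0, table.pop(r))     # move the symbol to the front
    return ranks

def mtf_decode(ranks, alphabet):
    table = list(alphabet)
    out = []
    for r in ranks:
        s = table.pop(r)
        out.append(s)
        table.insert(0, s)
    return out
```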
A new method is developed for computing motion vectors from a video sequence, appropriate for motion-compensated coding. Although the proposed method is gradient-based, it can handle both discontinuities and long-range motion. Experimental results on synthetic and natural images are reported.
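For context, gradient-based estimators start from the standard brightness-constancy constraint (general background, not specific to this paper):

\[ I_x u + I_y v + I_t = 0, \]

where \(I_x\), \(I_y\), \(I_t\) are the spatial and temporal image derivatives and \((u, v)\) is the motion vector; handling discontinuities and long-range motion means coping with regions where this linearized constraint breaks down.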
A new fast algorithm that can display true 24-bit color JPEG and MPEG images on an 8-bit color display is described. Instead of generating a colormap in the R-G-B color space as is done conventionally, we analyze color images in the Y-Cr-Cb color space. Using the Bayes decision rule, representative values for the Y component are selected based on its histogram. Then, representative values for the Cr and Cb components are determined from their histograms conditioned on Y. Finally, a fast lookup table that generates R-G-B outputs from Y-Cr-Cb inputs without a matrix transformation is described. Experimental results show that the proposed algorithm produces color-quantized images of good visual quality.
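An illustrative sketch of the two-stage selection: Y representatives from the Y histogram, then Cr/Cb representatives from histograms conditioned on the chosen Y bin. The quantile rule below is a stand-in for the paper's Bayes decision rule.

```python
# Conditional representative selection sketch for Y-Cr-Cb palette design.
# y, cr, cb: 1-D numpy arrays of per-pixel components.
import numpy as np

def representatives(values, k):
    """Pick k representative levels at evenly spaced quantiles."""
    qs = (np.arange(k) + 0.5) / k
    return np.quantile(values, qs)

def build_palette(y, cr, cb, k_y=8, k_c=4):
    y_reps = representatives(y, k_y)
    # assign each pixel to its nearest Y representative
    bins = np.argmin(np.abs(y[:, None] - y_reps[None, :]), axis=1)
    palette = []
    for i, y_rep in enumerate(y_reps):
        mask = bins == i
        if not mask.any():
            continue
        # chroma representatives are chosen conditioned on the Y bin
        for cr_rep in representatives(cr[mask], k_c):
            for cb_rep in representatives(cb[mask], k_c):
                palette.append((y_rep, cr_rep, cb_rep))
    return palette   # at most k_y * k_c * k_c entries
```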
Color vector quantization is an important technique for displaying a color image with N (typically N = 256) colors on a personal computer or workstation, or for printing a color image with K (typically K = 256) colors, without too much color degradation. The LBG algorithm is a popular suboptimal algorithm for color vector quantization, but it is very slow. Several fast algorithms, such as the popularity algorithm and the median cut algorithm, have been proposed. In this paper we propose a new fast algorithm for color vector quantization, implemented in a tree structure. Let n be the number of pixels in the original color image and m the dimension of the color space. In the worst case the tree has n leaves, so the storage complexity of the proposed algorithm is O(n) and the time complexity is O(n log2 N). It is much faster than the median cut algorithm: within the same space complexity O(mn), our algorithm has time complexity O(n log2 K) while the median cut algorithm requires O(m n log2 K log2 n), where m = 3 is the dimensionality of the color space. Also, our algorithm finds the centroid of a compact cube instead of the elongated rectangular regions produced by the median cut algorithm, and therefore produces better color images after vector quantization.
Hyperspectral image data reduction by optimal band selection is explored. Hyperspectral images have many bands, requiring significant computational power for machine interpretation. During image pre-processing, regions of interest that warrant full examination need to be identified quickly. One technique for speeding up the processing is to use only a small subset of bands to determine the 'interesting' regions. The problem addressed here is how to determine the fewest bands required to achieve a specified performance goal for pixel classification. The (m,n) feature selection algorithm of Stearns is used to determine which combination of bands has the smallest probability of pixel misclassification. This technique avoids having to test all possible combinations of 200 or more hyperspectral bands, while resisting the pitfalls, demonstrated by Cover et al., that fool other band selection algorithms.
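A sketch of "plus m, take away n" sequential selection in the spirit of Stearns' (m,n) algorithm; `score` stands in for any separability criterion, e.g. estimated pixel classification accuracy on training data.

```python
# Sequential (m,n) band selection sketch: repeatedly add the m most
# helpful bands, then drop the n whose removal hurts least (m > n, so
# the subset grows by m - n bands per pass).
def plus_m_minus_n(all_bands, target_size, score, m=2, n=1):
    selected = []
    while len(selected) < target_size:
        for _ in range(m):
            remaining = [b for b in all_bands if b not in selected]
            best = max(remaining, key=lambda b: score(selected + [b]))
            selected.append(best)
        for _ in range(n):
            # drop the band whose removal leaves the highest score
            worst = max(selected,
                        key=lambda b: score([s for s in selected if s != b]))
            selected.remove(worst)
    return selected
```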
Image analysis is a labor-intensive activity that grows increasingly demanding due to the volume of imagery and collateral data being collected by open source (i.e., unclassified) collectors. The data is frequently multispectral, providing a more complex but ultimately richer resource from which to derive information. Image analysts (IAs) and photo interpreters need to extract accurate yet timely information from this data. As a result, many efforts have been directed at developing systems to assist IAs in their exploitation tasks. While systems focusing on either spectral or spatial exploitation have been researched, the two approaches have seldom been integrated; consequently, such systems could not take full advantage of the range of information inherent in the data. The Lines of Communication (LOC) system was designed to merge the two disciplines into a user-friendly package that exploits the multispectral aspect of LANDSAT TM imagery without unnecessarily overwhelming the analyst with the complexities of the data. This paper discusses in detail the system components, the operations concept and user interface, and preliminary processing results.
Real-time ultrasound, image processing, and image analysis techniques were applied to develop prediction models for intramuscular fat (marbling) in live beef animals for use in a genetic evaluation program. An Aloka 500V ultrasound unit with a 3.5 MHz, 17 cm linear-array transducer was used to scan the live animals. Two scan images were collected at different locations on each animal. The animals were slaughtered within one day after scanning; a USDA marbling score was then recorded, and an ether extract (EE) analysis was performed to obtain a percent fat for each animal. Gray-scale histogram, texture, and movement analysis were applied to all the scan images, and Fourier spectrum statistics and movement descriptors were used to develop the prediction models. The prediction results demonstrate that image analysis and real-time ultrasound techniques may be a valuable tool for estimating intramuscular fat, for use in sorting feedlot cattle and in genetic evaluation programs for body composition in live beef animals.
Keywords: ultrasonics, image analysis, prediction models, genetic improvement, live beef animals
A three-dimensional representation of vascular networks is of major importance for quantitative diagnosis and for surgical planning. This paper describes work in progress on a new method for three-dimensional reconstruction of the cerebral blood vessels from a digital subtraction angiographic image sequence. The proposed method is based on the image acquisition geometry of the angiographic system. The reconstructed three-dimensional image can be used to visualize the anatomy of the cerebral blood vessels.
Manufacturing efficiency improvements have traditionally been driven by the requirements of innovative manufacturers and their ability to modify, adapt, and adopt technologies and techniques not intuitively related to the processes currently employed in manufacturing. The application of image processing to the requirements of manufacturers has thus far been limited to off-line verification tasks, where the benefits are notable but the impact on manufacturing flexibility, efficiency, and productivity is negligible. It is now possible to incorporate image processing within the process control inner loop, transparently, to realize a degree of product quality and process productivity only speculated upon in the past.
Movement of a robot head between desired points in a 3D volume, from (x1,y1,z1) to (x2,y2,z2), is crucial for high accuracy. When knowledge of the 3D volume is only partial, obtained as a data set of cross-sectional image planes, the control parameters for movement of the robot head are critical for best accuracy. In the present approach an attempt is made to develop an interface for transforming a sequence of cross-sectional image planes into control parameters of a robot system for desired movements of the robot head in the 3D volume. The coordinates of a desired location are obtained from the image data, and the corresponding locations on the object are estimated. These coordinates are transformed, through matrix transformation, into control parameters for the desired movements of the robot system. Most diagnostic medical imaging modalities obtain cross-sectional image planes of vital human organs, while treatment procedures often require 3D volume considerations. A hypothetical radiation treatment procedure for a prostate cancer tumor in a 3D volume derived from 2D cross-sectional sequential image planes is therefore presented. Diagnostic ultrasound images of the prostate are obtained as sequential cross-sectional image planes 2 mm apart from the base to the apex of the gland. An approach for robot coordinate movements for a simple robotic system with five degrees of freedom (Eshed Robotics, ER VII) is presented.
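A minimal sketch of the image-to-robot coordinate mapping described above; the 4x4 transform T would come from calibrating the imaging geometry against the robot frame, and all values here are placeholders.

```python
# Map (col, row) in a cross-sectional slice to robot-frame coordinates
# via a homogeneous transformation. Pixel size and the calibration
# matrix T are illustrative assumptions.
import numpy as np

def image_to_robot(col, row, slice_index, pixel_mm=0.5, slice_mm=2.0, T=None):
    """Convert (col, row) in slice `slice_index` to robot-frame (x, y, z)."""
    # physical coordinates in the image stack's own frame (mm);
    # slices are 2 mm apart from base to apex, as in the abstract
    p_img = np.array([col * pixel_mm, row * pixel_mm,
                      slice_index * slice_mm, 1.0])
    if T is None:
        T = np.eye(4)              # placeholder calibration transform
    p_robot = T @ p_img
    return p_robot[:3]
```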
This paper presents a novel technique for stereo matching. In accordance with the anisotropic character of the human visual system, the original stereo images are decomposed into several directional images. Primitive extraction, description, and matching are performed in parallel on each pair of directional images. Since the information in each directional image belongs to only a limited interval of directions, the matching in each directional image is quite precise and robust. The process is more consistent with the physiology and leads directly to extensions of the Marr-Poggio theory in several respects. Experimental results show the efficiency of the stereo technique.
Image processing techniques have been widely used in various medical applications for decades, and with the development of computer and image processing techniques, more and more medical diagnostic systems have been put into use. This presentation describes a cytological color image processing system developed for health inspection for early-stage lung cancer. Most existing microscopic diagnostic systems use morphological features or gray/color features alone, which results in unstable diagnoses and limits their applications; our system makes use of both the morphological and the color features of the cells. To increase the stability and efficiency of the diagnosis, we adopt a hierarchical processing architecture for the segmentation and classification of cells. First, all the nuclei are segmented by thresholding in a special color space. Then, the segmented nuclei are classified as normal cells or candidate cancer cells using their morphological features. Finally, using the chromatic features of the nuclei, all the candidate cancer cells are verified and further classified. Experimental results are given to show the feasibility of the approach proposed here.
With the development of computer and image processing techniques, image processing has been widely used in medical applications for decades, much of it in the field of microscopic diagnosis. This paper describes a cytological color image processing system developed for detecting lung cancer cells in examinees' sputum slides during health inspections for lung cancer. The proposed approach adopts mainly three kinds of features for the segmentation and classification of cells: chromatic, fractal, and texture features. The chromatic features are used mainly for the segmentation of nuclei, and the fractal and texture features for the recognition of cancer cells. A hierarchical processing architecture is used for the whole processing procedure. The experimental results show that the proposed approach performs properly in the detection of lung cancer cells.
This paper describes a newly developed color vision model for robotic systems. The model consists of a normal lens, a neutral density filter, an infrared filter, two different prisms, four different bands of optical pass filters, four monochromatic CCD cameras, and a camera controller. The vision system is very effective, using low-level processing technology to realize a higher-level, human-like color visual perception. A simple experiment employing a modified opponent color theory was carried out to demonstrate human-like color vision.
The computational requirements of simulated annealing can be reduced by the use of an error backprojection operator to restrict the intensity perturbations to the area in which they will be most effective in decreasing the cost function. The use of cost function jitter and other diagnostic tools applicable to simulated annealing and/or area-adaptive simulated annealing is discussed.
Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the smoothness of the restored image is maximized subject to a constraint on the fidelity of the restored image. The traditional derivation and implementation of the constrained least-squares (CLS) restoration filter is based on an incomplete discrete/discrete (d/d) system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a 1990 SPIE paper, Park et al. demonstrated that a derivation of the Wiener filter based on the incomplete d/d model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous (c/d/c) model. In a similar 1992 SPIE paper, Hazra et al. attempted to extend Hunt's d/d model-based CLS filter derivation to the c/d/c model, but with limited success. In this paper, a successful extension of the CLS restoration filter is presented. The resulting new CLS filter is intuitive, effective, and based on a rigorous derivation. The issue of selecting the user-specified inputs for this new CLS filter is discussed in some detail. In addition, we present simulation-based restoration examples for a FLIR (forward-looking infrared) imaging system to demonstrate the effectiveness of the new CLS restoration filter.
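For reference, the classical d/d-model CLS filter that the paper extends has the familiar frequency-domain form

\[ \hat{F}(u,v) = \frac{H^{*}(u,v)}{|H(u,v)|^{2} + \gamma\,|P(u,v)|^{2}}\, G(u,v), \]

where \(H\) is the blur transfer function, \(P\) is the transfer function of the smoothness operator (typically a discrete Laplacian), \(G\) is the spectrum of the degraded image, and \(\gamma\) is adjusted until the fidelity constraint is satisfied.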
Images obtained by passive millimeter wave systems are heavily blurred and band-limited due to diffraction. A two-stage superresolution algorithm has been developed to improve their spatial resolution beyond the diffraction limit. Initially, the spectrum of an image is restored within the passband (an interpolation operation may be combined with this stage of processing), but the resulting image still contains strong ringing due to the lack of higher spatial frequencies. These ringing artifacts are then reduced by using a piecewise-linear model which identifies and restores sharp edges, thereby introducing spatial frequencies beyond the passband. The effectiveness of the method has been demonstrated by applying it to synthesized and real mm-wave images.
The author presents an algorithm for computing two-dimensional convolution filters that can detect characters and other image features, obtained by generalizing the previously derived one-dimensional Wiener spike filter. The author motivates the Wiener spike filter design technique, provides a formal derivation of the two-dimensional generalization, gives a worked example, and describes the results of applying the technique to the task of detecting and recognizing specific English characters. The procedure can easily be adapted to recognize any specific feature that can be expressed as an image.
Shape recognition has a variety of applications, such as aircraft identification, character recognition, industrial part recognition, and country map discrimination. One of the most important steps is to extract optimal features to discriminate one shape from the others. This paper reviews and evaluates shape features extracted by different methods. Tests of three feature sets (improved moment invariants, traditional moment invariants, and Fourier descriptors) on three sets of digital shapes: (a) Chinese symbols, (b) animal shapes, and (c) toys, show that both improved and traditional moment invariants achieve nearly perfect recognition, whereas Fourier descriptors do not perform well. An efficient shape recognition system based on improved moment invariants is thus established and described.
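As a reminder of what the traditional moment invariants compute, here is a short sketch of the first two Hu invariants built from scale-normalized central moments:

```python
# First two Hu moment invariants of a 2-D shape image (binary or gray),
# invariant to translation, scale, and rotation.
import numpy as np

def hu_first_two(img):
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00
    ybar = (ys * img).sum() / m00

    def mu(p, q):                     # central moment mu_pq
        return ((xs - xbar) ** p * (ys - ybar) ** q * img).sum()

    def eta(p, q):                    # scale-normalized moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return phi1, phi2
```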
A new method for restoring turbulence-degraded images is proposed. It is based on the turbulence degradation model: the parameter of the model is sought automatically to determine the actual point spread function (PSF) of the turbulent atmosphere, after which the degraded images are restored by traditional techniques. The method needs no prior information about the turbulence. Experimental results verify the effectiveness of the method.
We introduce a method for segmentation of planes and quadrics of a three-dimensional range image using the phase Fourier transform. We extend our previous method for contour determination and simultaneous detection of edges and quadrics. We consider optical and electronic implementations. Using the phase Fourier transform, we address the issue of invariant pattern recognition for segmented and non-segmented three-dimensional images. We show results obtained with a parallel computer.
The most reliable features that can be readily extracted from intensity images are line segments, both straight and curved, and most applications rely on the recognition of such line patterns. Fourier descriptors, which are widely used, require the patterns to be closed and binary. Other techniques, based on chain codes or vectorization, suffer from quantization errors and therefore need additional preprocessing. Furthermore, almost all description methods for line patterns inherently involve an element of 'tracing', and their generalization to gray-scale or multi-colored patterns is limited. Here a novel global shape description technique based on edge segments is used for the recognition of line patterns. This approach extends boundary-based representation to generalized edge patterns whose segments may be straight, curved, crossing, or open. A novel representation of Edge Moments (EM), with a novel normalization, is used for shape description. Invariant features may be formed by using standard (invariant) moments, which has led to the development of Edge Standard Moments (ESM). The power of the method is demonstrated on the recognition of 3-D polyhedral objects.
Pattern models for the analysis, visualization, and compression of experimental 2-D flow imagery are developed. Linear and nonlinear models are presented, both of which use the linear phase portrait as a basic building block. These techniques require orientation field computation, critical point detection, and estimation of the associated phase portraits as preliminary analysis steps. In the linear case, flows are modeled as a superposition of phase portraits whose strengths are determined from the orientation field; this works well for flows that exhibit nearly ideal behavior, and a modification is included which is applicable to a wider range of flows. In the nonlinear case, flows are modeled by differential equations of Taylor series form; the inclusion of higher-order nonlinear terms provides better modeling of non-ideal flows, with the nonlinear coefficients computed from the estimated linear phase portrait descriptions. The output of these modeling techniques is a compact set of coefficients from which the original flow streamlines are visualized. Finally, the derived models are employed to compress scalar images that exhibit little or gradual variation along the flow streamlines. Compression ratios on the order of 100:1 are achieved.
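A minimal sketch of the basic building block: fitting a linear phase portrait v(x) = A x + b by least squares from sampled flow vectors, where the samples are assumed to come from the orientation field around a detected critical point.

```python
# Least-squares estimation of a linear phase portrait v(x) = A x + b.
# pts: (k, 2) sample positions; vecs: (k, 2) flow vectors at those points.
import numpy as np

def fit_phase_portrait(pts, vecs):
    k = pts.shape[0]
    M = np.hstack([pts, np.ones((k, 1))])        # rows of [x, y, 1]
    coef, *_ = np.linalg.lstsq(M, vecs, rcond=None)
    A = coef[:2].T                               # 2x2 linear part
    b = coef[2]                                  # offset vector
    return A, b

# The eigenvalues of A classify the critical point (node, saddle,
# spiral, center), which is what makes the phase portrait useful
# as a flow primitive.
```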
Several frequency-domain or joint spatial/frequency-domain techniques for image texture classification have been published. We formulate these techniques within a common signal processing framework based on digital filter banks. The usefulness of computationally efficient IIR filter banks as channel filters in texture classifiers is demonstrated. Using estimates of local energy in the frequency channels, we also propose a technique for selecting optimum filter banks by maximizing a between-class distance measure. This optimization is particularly simple when using the IIR-based filter banks.
In this paper we describe an automated method for the enhancement of digital photographic images. The enhancement incorporates the basic photographic principle that films render detail best in the mid-tones and suppress detail toward the extremes; this means that the subject, or the visually strong areas, must be composed of mid-tones in order to show good detail. Hence, regions of visual significance within an image are first identified based on features of focus, contrast, and texture. The image is then enhanced by a gray-level transformation of pixels using a simulated exposure-density curve of photographic film. The enhancement is influenced most strongly by the visually significant areas rather than by global statistics over the entire image. This allows the subject or important areas to be enhanced to an optimum level (mid-tones), rather than achieving a good overall balance but with a dull (overexposed or underexposed) subject.
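A minimal sketch of such a transformation, using a logistic curve as an illustrative stand-in for a measured exposure-density curve; `roi_mean` represents the mid-tone target taken from the visually significant region rather than from global statistics.

```python
# S-curve gray-level transformation centered on the mid-tone level of
# the visually significant region. Curve shape and parameters are
# illustrative, not the paper's measured film curve.
import numpy as np

def tone_map(img, roi_mean, contrast=6.0):
    """Map a [0, 255] uint8 image through an S-curve centered on roi_mean."""
    x = img.astype(np.float64) / 255.0
    mid = roi_mean / 255.0
    # logistic 'toe-shoulder' curve, steepest around the ROI mid-tone
    y = 1.0 / (1.0 + np.exp(-contrast * (x - mid)))
    # rescale so the curve still spans the full output range
    lo = 1.0 / (1.0 + np.exp(contrast * mid))
    hi = 1.0 / (1.0 + np.exp(-contrast * (1.0 - mid)))
    y = (y - lo) / (hi - lo)
    return np.clip(255.0 * y, 0, 255).astype(np.uint8)
```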
Simple and successful methods are presented for deriving suspended matter concentration from optical reflectance spectra of seawater. The pattern recognition method allows multispectral radiance to be analyzed for signatures characteristic of different, independently varying phenomena, and can be used to evaluate the information content. Landsat MSS data from Nov. 28, 1983, and quasi-synchronous field data at the mouth of the Yellow River are used in this paper. For this study, the first three characteristic vectors derived from the radiance covariance matrix are sufficient; they account cumulatively for 98.1, 98.9, and 99.2 percent, respectively. The first vector is loaded mainly by the visible light channels, in which the optical properties of suspended matter enhance the backscattered radiance. The scalar multiplier for the first vector represents its weight at each station. We obtain a regression equation from a least-squares fit between this scalar multiplier and the logarithm of the field data observed at each station. The correlation coefficient is 0.81, much higher than the correlation coefficients between the radiance of the single channels and the logarithm of suspended matter concentration (0.59, 0.70, 0.72, and 0.37, respectively). Using the model, we derive the concentration and distribution of suspended matter from Landsat data of Nov. 28, 1983, Oct. 5, 1984, and Dec. 3, 1988. These results, checked against hydrological and meteorological information, were satisfactory.
Road segmentation is one of the most fundamental and important tasks for the road following and planning of an Autonomous Land Vehicle (ALV): its efficiency directly affects the reliability of road following and planning and, consequently, the speed of the ALV. Road segmentation has therefore been studied extensively, and a variety of methods for color road segmentation have been proposed, since color images contain more road information than gray-level images do. In most existing approaches, a single best discriminant vector, a linear transformation of the color vector (r,g,b), is used to project and classify points in color space; relying on only one such projection can make the segmentation unstable under varying circumstances. This presentation proposes a new color road segmentation method that adopts a pyramid-based data structure with corresponding region splitting and combination techniques for the classification of sensed areas. At the same time, two transformations of the (R,G,B) color space and a data fusion technique are used to increase the efficiency of the road segmentation. Experimental results illustrate the performance of this approach.
The paper describes the requirements of a high-definition, high-speed image processing system. Different types of parallel architectures were considered for the system, and the advantages and limitations of SIMD and MIMD architectures for image processing applications are briefly discussed. A parallel image processing system based on the MIMD architecture has been developed using multiple digital signal processors which communicate with each other through an interconnection network. Texas Instruments TMS320C40 digital signal processors were selected because they have a powerful floating-point CPU supported by fast parallel communication ports, a DMA coprocessor, and two memory interfaces. A five-processor system is described in the paper. The EISA bus is used as the host interface and the VISION bus is used to transfer images between the processors. The system is being used for automated non-contact inspection in which electro-optic signals are processed to identify manufacturing problems.
A great number of parallel computer architectures have been proposed, whether SIMD machines (Single Instruction, Multiple Data) with many quite simple processors, or MIMD machines (Multiple Instruction, Multiple Data) containing a few powerful processors. Each claims to offer some kind of optimality at the hardware level. But implementing parallel image processing algorithms so that they run in real time remains a real challenge; it concerns the control of the communication networks between processors (message passing, circuit switching, etc.) as much as the computing model (e.g., the data-parallel model). In that respect, our goal here is to point out some algorithmic needs for distributing image processing operators. These are translated first into programming models, more general than image processing applications, and then into hardware properties of the processor network. In this way, we do not design yet another parallel machine dedicated to image processing, but a more general parallel architecture on which different kinds of programming models can be implemented efficiently.
The automatic processing of radar images (pattern recognition, segmentation, data fusion) is restricted by the presence of geometric distortions and radiometric variability induced by unrectified relief and erroneous parameters. This is especially true for a flash radar, where the reflectivity map is built by merging strip images, each with its own geometric errors. A fast radar simulator built on the flash radar geometric relationship would make it possible to study these geometric errors and to define the best viewing parameters.
In this paper, we present a new method for generating enhanced images with a desired gray value distribution at real-time speeds. Unlike most conventional methods of image enhancement, the proposed approach allows the user to specify the gray value distribution of the enhanced image. We first discuss a generic model of signal transformation and implement it in off-the-shelf hardware for analog-to-digital conversion of image signals; nonidealities of this hardware are also considered. Simplifications of the general model are presented to achieve computation times under 10 ms. The method also provides a range of permissible gray values from which the user can select. If the selections are outside the permissible range, the method provides guidelines for reconfiguring the imaging conditions to generate the desired enhanced image. An analysis of measurement and quantization errors with constant and Gaussian error distributions is also given. Extensive experiments on a set of industrial parts for character recognition are reported.
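The paper's hardware realization is not reproduced here, but the underlying mapping of an image to a user-specified gray value distribution is histogram specification; a minimal software sketch:

```python
# Histogram specification: build a lookup table that maps input gray
# levels to output levels so the result follows a desired distribution.
import numpy as np

def match_histogram(img, target_pdf):
    """img: uint8 image; target_pdf: length-256 desired distribution."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(hist) / hist.sum()
    tgt_cdf = np.cumsum(target_pdf) / np.sum(target_pdf)
    # for each input level, find the output level with the nearest CDF value
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[img]

# Example: request a flat (equalized) gray value distribution
# out = match_histogram(img, np.ones(256))
```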
In this paper we present a novel algorithm for camera calibration that improves mathematical simplicity, accuracy, and computational efficiency in the solution of 12 extrinsic and 7 intrinsic parameters. The method involves a direct transformation from the three-dimensional (3D) object world to the two-dimensional (2D) image plane in terms of 'homogeneous vector forms'. Next, we demonstrate a strong robustness property of the proposed algorithm by proving (with experimental corroboration) that if the camera is calibrated with image data not compensated for image center displacement and scale factor, the algorithm yields parameters that cause no error in the computation of both image and world coordinates. In addition, we discuss a new method of parameter computation under a complete lens distortion model (both radial and tangential distortions), with analytical proofs of convergence. Finally, we propose a new incremental model for the correspondence of tolerances between the object world and the image plane. Experimental results on a coplanar set of object points are provided to support our models.
The research presented in this study proposes a simple but efficient method for the generation of depth maps for a certain class of industrial parts. The suggested technique is based on the physical principle that light is absorbed in a colored liquid as it travels along an optical path. Consequently, the gray-level image of an object immersed in a colored liquid contains information about the optical path of rays reflected from the object; in other words, the intensity of the gray-level image of the object produced in this way is modulated by the depth of the object.
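A worked form of this principle, assuming Beer-Lambert attenuation with an absorption coefficient \(\alpha\) (a property of the chosen liquid, with the round-trip path folded into \(\alpha\)):

\[ I(x,y) = I_0(x,y)\, e^{-\alpha\, d(x,y)} \quad\Longrightarrow\quad d(x,y) = -\frac{1}{\alpha} \ln \frac{I(x,y)}{I_0(x,y)}, \]

so the depth map \(d\) follows from the ratio of the observed intensity \(I\) to a reference intensity \(I_0\) of the same surface without attenuation, once \(\alpha\) has been calibrated.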
The results of four-channel polarization imagery of the Earth and the associated image processing are presented. The author proposed and realized a special technique exploiting the naturally polarized source and the circularly polarized part of the reflected light. The polarimeter consisted of an MSK-4 photo camera with four glass filters and four analyzers placed in front of the objectives. The azimuth angle of the Sun's direction and the Sun's elevation were used in the processing. The real Earth surface was modeled as a weakly anisotropic absorbing medium. Application of the technique to mapping small faults covered by soil was demonstrated.