In this paper, we propose using the maximum Cramer-Rao bound (MCRB) as a criterion for the estimation of a prior probability law p(x). This criterion has a useful interpretation: it is known that any estimate of x must have a mean-square error greater than or equal to the CRB. Hence, if one demands that p(x) have maximum CRB, one is demanding that p(x) be so non-informative, i.e., x be so random, that even the best estimate of x (attaining the CRB) will have maximal mean-square error. Hence, p(x) is made to describe a minimally knowable x. Because of the current interest in maximum entropy (ME) as an alternative approach, we compare results throughout this paper with those of ME. For example, if only the mean value of x is known as prior information, the ME answer for p(x) is an exponential law, a smooth function. By contrast, MCRB gives the square of an Airy function, in fact an even smoother function. Comparisons are also made between ME and MCRB restorations, whereby p(x) is modeled to be the unknown object radiance distribution. Finally, MCRB may be applied to estimating the probability law describing particle position in a known potential energy field. With a constraint on average kinetic energy, the result is the Schroedinger wave equation.
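For reference, the bound underlying the criterion can be written in its standard scalar form (a sketch of the generic definitions only; the paper's exact constraint set and functional are not reproduced here):

\[
\mathrm{E}\big[(\hat{x}-x)^{2}\big] \;\ge\; \mathrm{CRB} \;=\; \frac{1}{I[p]},
\qquad
I[p] \;=\; \int \frac{\big(p'(x)\big)^{2}}{p(x)}\,dx ,
\]

so the MCRB prior is the normalized p(x) that maximizes the CRB (equivalently, minimizes the information functional I[p]) subject to the stated prior-knowledge constraints, such as a fixed mean of x.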
Two of the most important problems in scene analysis of image sequences are the segmentation of image frames into moving and non-moving components and the 2-D motion estimation of the moving parts of the scene. No work has been done on combining these two problems interactively for a noisy sequence of images. Our approach first models the image sequence, including information about the position of the boundary of the target. The algorithm has two parts, segmentation and 2-D motion estimation, which are interactively connected. The segmentation process is based on the estimation of the boundary of the target and uses the motion prediction from the previous frame as a priori information. For the 2-D motion estimation, some methods are indicated. Since the segmentation process precedes the motion estimation, the error in the motion estimate would be significantly reduced. This algorithm can be used in most image sequence applications, such as image sequence enhancement, 3-D motion estimation, target tracking, motion-compensated coding, and biomedical imaging.
A new estimation criterion, called the minimum-error, minimum-correlation (MEMC) criterion, is applied to the point estimation of images in additive noise in conjunction with calculation of local statistical parameters in an adaptive analysis window. The MEMC estimator produces sharper and hence more visually pleasing restorations than the usual minimum mean squared error estimator and the use of adaptive analysis windows tends to isolate the locally stationary portions of the image in calculation of image statistics. As most of the remaining noise in these restorations resides on the edges within the images, a postprocessing step of edge detection and edge smoothing is then applied for its reduction. Such image restorations are compared to those of minimum mean squared error in both fixed and adaptive window implementations.
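For context, a minimal sketch of local-statistics point estimation in a sliding analysis window, as used by the minimum-mean-squared-error baseline mentioned above (a hypothetical Python illustration; the MEMC criterion itself, the adaptive-window logic, and the edge postprocessing are not reproduced, and the window size and noise variance are assumptions):

import numpy as np

def local_mmse_estimate(noisy, noise_var, win=7):
    """Lee-style local MMSE point estimate in a fixed win x win window."""
    pad = win // 2
    padded = np.pad(np.asarray(noisy, dtype=float), pad, mode='reflect')
    out = np.empty(noisy.shape, dtype=float)
    for i in range(noisy.shape[0]):
        for j in range(noisy.shape[1]):
            block = padded[i:i + win, j:j + win]
            mu, var = block.mean(), block.var()
            signal_var = max(var - noise_var, 0.0)      # local signal variance
            gain = signal_var / (signal_var + noise_var + 1e-12)
            out[i, j] = mu + gain * (noisy[i, j] - mu)  # shrink toward local mean
    return out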
Automatic thresholding of the gray level of an image is very useful in automated analysis of morphological images, and it represents the first step in many applications in image understanding. Recently it was shown that by choosing the threshold as the value that maximizes the entropy of the one-dimensional histogram of an image, one might be able to separate the desired objects from the background effectively. This approach, however, does not take into consideration the spatial correlation between the pixels in an image. Thus, the performance might degrade rapidly as the spatial interaction between pixels becomes more dominant than the gray level values. In this case, it becomes difficult to isolate the object from the background and human intervention might be required. This was observed during studies that involved images of the stomach.
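A minimal sketch of the one-dimensional maximum-entropy threshold selection described above, in Python (an illustrative Kapur-style implementation; the stomach-image experiments and any extension using spatial correlation are not reproduced):

import numpy as np

def max_entropy_threshold(image, levels=256):
    """Choose the threshold that maximizes the summed entropies of the
    background and object gray-level distributions."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist.astype(float) / max(hist.sum(), 1)
    best_t, best_h = 0, -np.inf
    for t in range(1, levels - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1          # class-conditional distributions
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t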
A new approach to the application of maximum entropy principles in image restoration is being developed. It is an iterative process which combines a Wiener filter restoration with entropy-based modifications to the Wiener image. The result is an image with well-restored frequency content and very little of the spuriousness commonly introduced by inverse filters. The algorithm is fast, stable to convergence, and will accommodate any specifiable distorting function.
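For context, a minimal sketch of the frequency-domain Wiener restoration that serves as the starting point of such an iteration (the entropy-based modification step is not reproduced; the transfer function otf and noise-to-signal ratio nsr below are illustrative assumptions):

import numpy as np

def wiener_restore(blurred, otf, nsr=0.01):
    """Classical Wiener filter G = H* / (|H|^2 + NSR), applied in the Fourier
    domain; otf is the distorting function's transfer function sampled on the
    same grid as the image."""
    H = np.asarray(otf, dtype=complex)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))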
The feasibility of implementing superresolving image restoration for differential-interference-contrast (DIC) microscopy has been studied. This study has involved the computer simulation of DIC optics using the modeling methods of Fourier optics and using random-number-generation software to simulate noise introduced by video instrumentation. The initial results of this modeling and simulation have been reported in earlier papers, with recent extensions of this work and its results presented herein. In particular, images resulting from extensions and improvements to the iterative image-restoration algorithm are presented. Also presented are the tabulations of mean-square error as a function of iteration number which give a quantitative measure of convergence properties and show some of the measurable effects of implementing certain extensions.
For strongly defocused images, the degradation function can be determined automatically, by machine operation alone. Using a modification of the homomorphic filter, as well as knowledge of the nature of the defocus process, the restoration filter can be calculated and applied. Real-time restoration is feasible.
A modified homomorphic filter for adaptive, automatic image restoration is presented. The algorithm utilizes an estimate of the expected power spectrum of natural terrain and the radial mean power spectrum of the degraded image.
Data fusion of multispectral image data requires techniques that are often time-consuming while giving unclear results. Development of algorithms that integrate information in a useful way is important to improving autonomous and semi-autonomous image understanding systems. This paper presents a comparison of two data fusion methods, each of which compresses the data. One method, the Hotelling transform (Karhunen-Loeve transform), is investigated and its results compared with a less computationally intensive method using new techniques. Each algorithm is translated into the Air Force's Image Algebra, as it provides a common mathematical environment for image algorithm development, optimization, comparison, coding and performance evaluation. The translucent nature of the algebra facilitates the comparison of the advantages and disadvantages of each method.
Image delineation at full scene resolution can be a very time-consuming task. If the results of delineation at a coarse resolution can be used to guide automatic delineation at progressively finer resolutions, the potential operational savings could be significant. This paper describes such an automated hierarchical system. In this system, the coarse resolution image is first segmented into regions using multiple thresholds. Next, these region labels are used to guide delineation at finer resolution. This step, while preserving the general delineation of the image, restricts the label assignments at fine resolution to only a set of selected pixels, thereby reducing the processing time considerably. The results of applying this procedure to some complex scenes demonstrate not only the effectiveness, but also the cost-saving aspect of the procedure.
Algorithms used to measure the three-dimensional coordinates of an object using a standard video camera and a mask in front of the camera lens are described. A simple and very compact 3-D camera is developed using these techniques. Depending on the type of mask and the illumination used, different algorithms are investigated. Emphasis is placed on the real-time processing of the video images with application to control. Experimental results will be presented.
Since the first laboratory demonstration of television transmission in the 1920's, there has been a considerable amount of research and development effort in video communications. However, many technological and economic barriers have kept video services (except broadcast TV) from widespread usage. The need for wideband transmission and the lack of cost-effective implementation of this service were two major barriers. Due to the increasing availability of fiber optic links and rapid advances in VLSI technology, there has been renewed interest in video communications. Research and development activities employing state-of-the-art VLSI technology in video communications can be expected to expand dramatically in the near future. To put more emphasis on this important application, this paper will give an overview of video communications and of up-to-date research results relating to VLSI implementation of video codecs. Special emphasis will be placed on high-speed analog-to-digital conversion (ADC), digital filtering for video applications, and single-chip VLSI implementation of the discrete cosine transform (DCT), vector quantization (VQ), and motion detection and compensation.
The industrial visual inspection of textured surfaces can be automated by using computationally cheap texture analysis methods, provided their parameters are well adapted to the given task. A methodology for the selection and parametrization of such methods is proposed. The inspection of several different industrial products could be successfully automated by this methodology. Some of the mentioned texture analysis methods are implemented on the Visual Interpretation System for Technical Applications (VISTA). VISTA hardware modules for these algorithms are available or under development. By means of these modules the inspection can be carried out in real time.
This paper describes an adaptive vector differential pulse code modulation (VDPCM) scheme for the encoding of subband images. These subband images are obtained by splitting an image into subbands, using quadrature mirror filter (QMF) banks, followed by a 2:1 decimation. Linear phase FIR filters are used for the QMF filter bank. In this work, only one level of splitting, a four-band partition, is performed. The subband images are encoded using adaptive VDPCM to achieve a bit rate of 0.5 bit per pixel. At the encoder, each of the subband images is divided into small blocks of size M by M, and k-dimensional vectors are formed from subblocks of size k by 1. Linear block prediction is then performed along the rows, and the prediction error is quantized using a matrix quantizer within the VDPCM loop. As the energy distribution among the subband images is not even, each of the subbands is encoded at a different rate. Since the low-low (LP-LP) band has the highest energy and carries all the essential details of the original image in a compressed form, it is encoded at a higher rate than the other bands.
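A minimal sketch of the one-level, four-band analysis stage described above: separable filtering followed by 2:1 decimation (the short kernels below are illustrative stand-ins for the paper's linear-phase QMF design, and the adaptive VDPCM encoder is not reproduced):

import numpy as np

# Illustrative linear-phase lowpass/highpass pair (not the paper's QMF design).
LO = np.array([1.0, 2.0, 1.0]) / 4.0
HI = np.array([-1.0, 2.0, -1.0]) / 4.0

def filt_decimate(x, h, axis):
    """Filter along one axis, then keep every other sample (2:1 decimation)."""
    y = np.apply_along_axis(lambda v: np.convolve(v, h, mode='same'), axis, x)
    return y[::2, :] if axis == 0 else y[:, ::2]

def split_four_bands(image):
    image = np.asarray(image, dtype=float)
    lo_rows = filt_decimate(image, LO, axis=1)
    hi_rows = filt_decimate(image, HI, axis=1)
    ll = filt_decimate(lo_rows, LO, axis=0)   # low-low band: carries most energy
    lh = filt_decimate(lo_rows, HI, axis=0)
    hl = filt_decimate(hi_rows, LO, axis=0)
    hh = filt_decimate(hi_rows, HI, axis=0)
    return ll, lh, hl, hh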
The pyramid image structure is an effective picture representation. In the past few years, several pyramid-based image coding techniques, showing high compression ratios, have been developed. The top-level Gaussian image is a compact form of the original image and needs much less time for feature extraction. Therefore, the pyramid structure is well suited for hierarchical image retrieval in computerized image archiving. In this paper, the pyramid image system using quadrature mirror filters to form image pyramids is employed as a backbone for the proposed image retrieval system. The underlying retrieval scheme is to use an exhibition picture containing desired features as a key for retrieval. The process of image searching is through the execution of retrieval on a decision tree. Using the nature of the pyramid image structure, the retrieval process can be made much more computationally efficient. Picture information measures of the compact top-level Gaussian images are compared with that of the full-size original pictures to demonstrate the effectiveness of preliminary decisions based upon top-level Gaussian images. Coding parameters of the pyramid system are also examined for use in further image searching.
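A minimal sketch of building the image pyramid whose compact top level drives the retrieval decisions (a 5-tap binomial kernel is used here as an illustrative stand-in for the paper's quadrature mirror filters):

import numpy as np

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def reduce_once(image):
    """Separable lowpass filtering followed by 2:1 subsampling in each axis."""
    smooth = np.apply_along_axis(lambda v: np.convolve(v, KERNEL, mode='same'), 0, image)
    smooth = np.apply_along_axis(lambda v: np.convolve(v, KERNEL, mode='same'), 1, smooth)
    return smooth[::2, ::2]

def gaussian_pyramid(image, levels=4):
    pyr = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        pyr.append(reduce_once(pyr[-1]))
    return pyr   # pyr[-1] is the compact top-level image used for retrieval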
This paper compares two coding methods that differ in segmentation and reconstruction strategy but both use interpolation as the basic reconstructor. The general scheme is given in Fig. 1. First the picture is segmented and the relevant information of each segment is extracted; this information is then coded and used for the reconstruction of the picture. A brief description of the segmentation methods is given in part 2; for a more detailed description see [VAN]. The relation between segmentation and reconstruction is also described. In parts 3 and 4 the reconstruction strategies for DIDT and RBN are treated in detail. The last part compares some results of simulations on real images.
There are various commercially available systems designed to transmit images over voice-grade telephone lines. This project was undertaken to compare moderate-cost systems. In the current study, these systems were used to send radiographic images. The various systems have different degrees of resolution and transmission speed.
An adaptive image coding system is presented, in which vector quantization (VQ) is implemented in the discrete cosine transform (DCT) domain to manipulate the stationary image data according to their features. The errors resulting from VQ, viewed as non-stationarity of the data, are evaluated, and an adaptive correction operation is carried out using scalar correction quantizers (SCQ). Moreover, the DC coefficients are also vector quantized. This coding system has been simulated for natural images and an improvement of more than 3 dB is obtained in comparison to traditional DCT coding systems.
The goal of the research described in this paper is the design of an intelligent computer vision system to automatically extract information from paper-based maps and answer queries related to spatial features and the structure of geographical data. The foundation of such a system is a set of powerful image analysis algorithms that operate on digitized images of paper-based maps. Efficient algorithms to orient map images, detect symbols, identify and track various types of lines, follow closed contours, compute distances, find shortest paths, etc. have been developed. An intelligent query processor analyzes the queries presented by the user in a predefined syntax, controls the operation of the image processing algorithms, and interacts with the user. The query processor is written in LISP and calls image analysis routines written in FORTRAN. In this paper we describe the image analysis algorithms and present their effectiveness in extracting information from a simplified map image of 2,048 x 2,048 pixels.
An optical implementation of median filters for optical digital signal and image processing is proposed. Two properties of median filters, namely threshold decomposition and the stacking property, are exploited in our design of the median filter. The thresholder decomposes the incoming signal into a set of binary sequences by thresholding the input signal at various threshold levels. These binary sequences are then applied to a set of binary median filters, the outputs of which are added together one sample at a time using a summing lens. Data are coded using light polarization and the binary median filters are implemented via symbolic substitution logic (SSL). The optical implementation offers increased throughput by taking full advantage of the parallelism offered by SSL and the inherent massive parallelism of optics.
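A minimal software sketch of the threshold-decomposition / stacking construction that the optical design exploits: each binary median filter acts on one thresholded sequence and the binary outputs are summed (1-D, window of 3, a small number of gray levels; the polarization coding and SSL implementation are not reproduced):

import numpy as np

def median_by_threshold_decomposition(signal, levels=8, win=3):
    """Gray-level median filter built from binary median filters, one per
    threshold level, whose outputs are summed (stacking property)."""
    x = np.asarray(signal, dtype=int)
    pad = win // 2
    out = np.zeros_like(x)
    for t in range(1, levels):
        b = (x >= t).astype(int)                      # threshold decomposition
        bp = np.pad(b, pad, mode='edge')
        maj = np.array([int(bp[i:i + win].sum() > win // 2)
                        for i in range(len(x))])      # binary median = majority vote
        out += maj                                    # stack (sum) the binary outputs
    return out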
The use of integral holography for recovering depth information from a series of two-dimensional digital images is discussed.
This paper deals with the real-time implementation of an image registration algorithm for an application onboard a flight vehicle, where there are limitations on system power and space in addition to the processing-time constraint. Two images must be registered to find the similarity between them or to find the object motion in a scene. Real time in this application means an update time of 1/30th of a second, i.e., at TV frame rate. The direct method using the normalised correlation function is chosen for implementation, considering both the performance and the computational complexity. A hybrid approach, using dedicated hardware for the computation-intensive part of the algorithm and a microprocessor-based subsystem for the other functions, is adopted as a compromise between flexibility and efficiency. To meet the time constraint, a parallel pipelined architecture is used. To meet the low-power requirement, mostly CMOS devices are used. To meet the space constraint, specific integrated circuits are being planned. A specific example of implementation is given.
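A minimal sketch of the direct normalized-correlation matching that the hardware accelerates (an exhaustive search over integer translations only; the pipelined architecture, update rate, and device-level details are not reproduced, and the search range is an assumption):

import numpy as np

def normalized_correlation(ref, win):
    """Zero-mean normalized cross-correlation of a reference patch with an
    equally sized window of the live image."""
    a = np.asarray(ref, float) - np.mean(ref)
    b = np.asarray(win, float) - np.mean(win)
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def register(live, patch, search=8):
    """Exhaustive search for the integer shift maximizing normalized correlation."""
    h, w = patch.shape
    best = (0, 0, -2.0)
    for dy in range(search):
        for dx in range(search):
            score = normalized_correlation(patch, live[dy:dy + h, dx:dx + w])
            if score > best[2]:
                best = (dy, dx, score)
    return best   # (row shift, column shift, correlation score)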
Images that are obtained with coherent sensors are corrupted with spots of varying gray level values called speckle noise. Recently there has been a lot of interest in the problem of enhancing speckled imagery, especially in the fields of Synthetic Aperture Radar (SAR) and coherent laser radar processing. This paper surveys the various methods that have been suggested for speckle removal, and a perspective on their applicability and shortcomings is presented. New experimental results are also included.
For mosaic images created by a multiaperture optical system, each optical element forms one pixel of the image, meaning the total field of view of an additional optical element is described by one pixel only. This requires a large number of optical elements for a multiaperture system; e.g., the common housefly has 20,000 eyelets. For artificial systems it is desirable to reduce this number. In this paper it is shown that this is possible by proper detector design, even if non-imaging optical elements (light horns) are used. This is accomplished by analyzing the symmetry of the output of the light horn, and so subdividing the field of view of the individual eyelet into more than one pixel. The paper analyzes the achievable resolving power as a function of cone angle and length of the light horn used. The results of these computations are experimentally verified for two object points. The detector designed for this purpose is a cylindrical cavity, the wall of which is lined with an appropriate number of detector arrays. The angular resolution information is derived from an intensity ratio of corresponding detector outputs at various penetration depths. The detector design described allows improved image quality for a multiaperture device having a given number of elements. Since the design is similar to the anatomy of the rhabdom of the insect eye, it also sheds some light on the function of this organ.
The Compton effect is an elastic collision between gamma rays or X-rays and electrons, and in tomographic imaging systems it can produce spectral artifacts and distort the signals received through one or more scattering steps. In most tomography systems, energy-sensitive detectors or collimation can be used to diminish the effects of Compton scattering. However, in fourth-generation tomography systems, it is difficult to utilize collimators because detectors receive signals from more than one direction. As a result, in those cases where collimation and energy-sensitive detectors are not employed, there is a need to find other methods that reduce the distortion due to scatter. We developed a model for the distribution of Compton-scattered photons for industrial applications by implementing a Monte Carlo simulation routine based on a single-beam scanner geometry, and compared the results to experimental data collected from a single-beam system at Bethlehem Steel Corporation. Significant differences appeared between the experimental and simulated data. In addition, existing scatter correction techniques were applied to data obtained from Bethlehem Steel's fourth-generation tomographic system. The existing scatter correction techniques involve both point-wise and convolution models which are subtracted from the measured data to correct for scatter. Improved results were obtained in both image quality and dimensional measures. Finally, using the model of scattering obtained from the simulation data, we propose a further modification to the existing point-wise scatter correction technique, which further corrects for scatter in computerized tomographic systems.
The relationship between processing time and image quality of fan beam computerized tomography (CT) systems was examined using computer simulations. Processing time was related to the number of projections and samples per projection, and image quality was measured using discrete normalized mean squared error, discrete normalized mean absolute error, and binary error. First, simulations were performed on resolution test patterns, measuring the effect of changing the linear or angular spacing on the image quality. The best trade-off between time and resolution appeared to be at a spacing of 3 pixels for linear resolution, and at an angular frequency corresponding to a 16-sector circle. In addition, computer experiments were performed on "industrial" test patterns, measuring the effect of increasing the number of projections and samples per projection on the image quality. The results showed that the optimum choice of these parameters is on the order of N for an N x N image grid, with slight variations depending on the shape of the test pattern used.
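A minimal sketch of the three image-quality measures named above (the binary-error threshold is an illustrative assumption):

import numpy as np

def normalized_mse(ref, rec):
    ref, rec = np.asarray(ref, float), np.asarray(rec, float)
    return ((ref - rec) ** 2).sum() / ((ref ** 2).sum() + 1e-12)

def normalized_mae(ref, rec):
    ref, rec = np.asarray(ref, float), np.asarray(rec, float)
    return np.abs(ref - rec).sum() / (np.abs(ref).sum() + 1e-12)

def binary_error(ref, rec, threshold=0.5):
    """Fraction of pixels whose thresholded (binary) values disagree."""
    return float(((np.asarray(ref) > threshold) != (np.asarray(rec) > threshold)).mean())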
Several algorithms and devices have been developed to enhance and restore details, as well as to quantify features, in microscopic images that have been obtained with a scanning electron microscope (SEM) from samples subjected to stress fields within the SEM. A microcomputer is used to control the scanning of the electron beam and to acquire and digitize images for processing. By the use of both software and hardware manipulation, it is possible to observe and quantify microdisplacements of the elements of a composite material, in order to advance the understanding of their behaviour in the elastic and plastic regimes, as well as to characterize their fracture modes.
We propose a new method of tomographic image reconstruction from a limited set of projection data, which uses a decomposition of the object distribution on "constrained natural pixels or voxels". This method combines the advantages of the natural pixel decomposition, which matches exactly the recording of the measurements, and those of constrained algorithms, which incorporate a priori information in the reconstruction problem. The natural elements are the elementary integration areas or volumes covered by the beam paths. In this method we propose to weight each natural element with a function that can translate a priori information on support and density. We have applied this method to limited-angle axial tomography, and to coded aperture imaging when the aperture contains several slits, for application to laser plasma microimaging.
Electronic collimator positron emission tomography (ECPET) is outlined for different classes of coincidence events. It is shown that for 4-fold coincidences the system is essentially equivalent to Llacer-Cho PET. For two-fold coincidences it reduces to standard PET. For three-fold coincidences the system reduces to two different modes of PET operation, each more sensitive in terms of counting efficiency but less precise than Llacer-Cho PET, and less sensitive but more precise than standard PET.
The hierarchical flow of information through a quadtree is controlled by a multitude of factors, some statistical, some spatial. By defining the notions of hard and soft links in the context of branch strength, a single integrated expression provides the much-needed understanding of the vertical and lateral information flow. It is shown that the up-projection phenomenon has an asymptotically improving effect upon the partitioning power of a quadtree.
Image registration algorithms are essential for subtractive analysis of sequential images. Discrepancies in lighting, image orientation, and scale must be minimized before effective subtraction of two images can occur. We have successfully implemented computationally intensive algorithms for registration, which include illuminance normalization and magnification correction, in a PC-based image processing system. A homomorphic filter in the spatial domain is used to reduce the illumination variations in the images. A modified sequential similarity detection technique is used to derive the minimum error factor associated with each combination of translation, magnification, and rotation variations. Each variation of the test image is masked with one of three masks, and the squares of the pixel intensity differences are summed for every test image. An adaptive threshold is used to decrease the time required for a misfit by aborting the test image under consideration when its summation exceeds the value of the previous best fit summation. After the best fit parameters are obtained, they are used to register the images so that the images can be subtracted. The difference image is subjected to further image enhancement operations. The execution time of the image registration algorithm has been reduced through use of a hybrid program written in C and Assembly languages. Applications of the registration algorithms in analysis of fundus images will be presented.
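A minimal sketch of the modified sequential similarity detection idea: the squared-difference sum for a candidate alignment is abandoned as soon as it exceeds the best sum found so far (masking, magnification, rotation, and illuminance normalization are not reproduced; the search range is an assumption):

import numpy as np

def ssda_best_shift(reference, test, search=8):
    """Sequential similarity detection with an adaptive abort threshold."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    h, w = reference.shape
    best_sum, best_shift = np.inf, (0, 0)
    for dy in range(search):
        for dx in range(search):
            window = test[dy:dy + h, dx:dx + w]
            running = 0.0
            for row in range(h):
                diff = reference[row] - window[row]
                running += float((diff * diff).sum())
                if running >= best_sum:        # adaptive threshold: abort a misfit early
                    break
            else:
                best_sum, best_shift = running, (dy, dx)
    return best_shift, best_sum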
A square root information filter (SRIF) data reduction method with a bicubic B-spline estimator is used to reduce optical data taken at points on a rectangular strip. It is shown that some difficulties encountered when using Zernike or Fourier estimators are avoided by using B-splines. Standard statistical measures of the data trend as they relate to instrument noise are easily found as a by-product of the SRIF bicubic B-spline data reduction process.
The simultaneous calculation of projections in two orthogonal directions during a row-by-row scanning of an image is explained. A description is given of a method which allows the calculation of projections which have arbitrary angles with the pixel grid directions. The use of FIFOs, interleaved processing, and conditional carry processing to optimize this calculation is discussed. These techniques make a compact implementation possible with a high processing speed.
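A minimal sketch of accumulating the two orthogonal projections in a single row-by-row pass over the image (the FIFO-based hardware and the arbitrary-angle case are not reproduced):

import numpy as np

def orthogonal_projections(image):
    """Accumulate horizontal and vertical projections during one raster scan."""
    image = np.asarray(image, dtype=float)
    rows, cols = image.shape
    proj_h = np.zeros(rows)   # sum along each row
    proj_v = np.zeros(cols)   # sum along each column
    for r in range(rows):     # single row-by-row scan
        line = image[r]
        proj_h[r] = line.sum()
        proj_v += line
    return proj_h, proj_v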
Probably the most critical problem in image understanding is segmenting images into disjoint regions of uniform gray tone or texture. This paper describes the design of a new image segmentation system which uses an expert-system approach to solve this problem. The system is composed of two parts: (1) an image processing tool set which processes a given image by localized brightness compensation and heuristic edge extraction, and (2) a knowledge-base controller which is composed of an inference engine, a rule base, and a hypothesis base. The localized brightness compensation preprocesses the input image under control of the knowledge-base controller to obtain improved image quality. The heuristic edge extraction obtains spatially accurate, one-pixel-wide boundaries of uniform-property subregions. These boundaries are then encoded using specially designed codes which facilitate the application of rules for final region formation. The combined use of the image processing tools and the knowledge-base controller makes the segmented region boundaries one pixel wide, spatially accurate, without edge gaps, and without noisy micro-edges inside segmented regions. The paper is illustrated with results from applying the system to the segmentation of actual high-resolution aerial imagery.
A PC-based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water and a laser sheet for illumination. With a finite exposure time, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, thresholding, etc. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation we used a simple convolution technique with an adaptive Gaussian window. The results are compared with a numerical prediction from a Navier-Stokes computation.
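A minimal sketch of the convolution-style interpolation of the scattered streak velocities onto uniform grid points, using a Gaussian window (a fixed window width is used here; the adaptive width rule is not reproduced):

import numpy as np

def gaussian_window_interpolate(points, values, grid_x, grid_y, sigma=1.0):
    """Weighted average of scattered velocity samples, Gaussian-weighted by
    distance to each grid node."""
    points = np.asarray(points, float)          # shape (N, 2): x, y positions
    values = np.asarray(values, float)          # shape (N,): one velocity component
    out = np.zeros((len(grid_y), len(grid_x)))
    for iy, gy in enumerate(grid_y):
        for ix, gx in enumerate(grid_x):
            d2 = (points[:, 0] - gx) ** 2 + (points[:, 1] - gy) ** 2
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            out[iy, ix] = (w * values).sum() / (w.sum() + 1e-12)
    return out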
The central requirement in the bin-of-parts problem is to direct a robot manipulator to select, grasp, and remove an arbitrarily oriented part (or object) from a bin of many such objects. This necessitates the estimation of the pose (position and orientation) of a partially occluded object and, in general, its 3-D structure. The solution of such a problem using passive vision requires the use of sophisticated processing incorporating multiple redundant representations, such as stereopsis, motion, and analysis of object shading. This paper describes the first step in such an approach, that of determining a depth map of the bin of parts using optical flow derived from camera motion. Since the robotics environment is naturally constrained, simple camera motion can be generated by mounting the camera on the robot end-effector and directing the effector along a known path: the simplest motion, along the optical axis, is utilised in this case. For motion along the optical axis, the rotational components of flow are nil and the direction of the translational components is effectively radial from the fixation point (on the optical axis). Hence, it remains only to determine the magnitude of the velocity vector. Optical flow is estimated by computing the time derivative of a sequence of images, i.e., by forming differences between two successive images and, in particular, between contours in images which have been generated from the zero-crossings of Laplacian-of-Gaussian-filtered images. Once the flow field has been determined, a depth map is computed, initially for all contour points in the image, and ultimately for all surface points by interpolation.
Caries, or tooth decay, is difficult to detect automatically in dental radiographs because of the small area of the image that is occupied by the decay. Images of dental radiographs have distinct regions of homogeneous gray levels, and therefore naturally lead to a segmentation-based automatic caries detection algorithm. The difficulty is that the area occupied by the caries is very small and would not be detectable using thresholding algorithms that are not area-independent, such as the multiclass ISODATA clustering algorithm or the bimean clustering algorithm [3].
The quantitative assessment of histopathologic sections has in the past been concerned primarily with karyometric determinations.1 Increasingly, though, the diagnostic information offered by tissue architecture is also being analyzed.2,3 This type of higher-level analysis is becoming increasingly attractive because of the development of laser scanner microscopes4 that allow the rapid acquisition of large-area high-resolution tissue section images, and the simultaneous development of very powerful but relatively inexpensive multiprocessor systems that provide the needed computational power to analyze complex images within a reasonable time.
Image processing of flow visualization pictures is an effective technique to obtain full-field velocity information in a wide variety of fluid mechanics problems. This report describes the development of an image processing algorithm which employs automatic analysis of the flow visualization images to establish local, instantaneous velocity information. The automatic method utilizes gradient operators and curve fitting techniques to extract the valid flow streaks. It is demonstrated how this technique may be used to determine local velocity vectors for a variety of flow visualization images. The analysis is accurate and fast.
Precise registration techniques are essential for quantitative evaluation of sequential fundus images to make early detection of fundus anomalies feasible. The familiar correlation techniques for achieving such image registration are computationally intensive and suffer from non-uniqueness of solution. We have developed an accurate, yet fast, algorithm for image registration by using a combination of power spectrum and power cepstrum analyses. In this new algorithm rotational shifts and translational shifts are corrected separately. The technique involves two main ideas. First, a rotational shift is corrected and changed into a translational shift by computing Fourier power spectra. After the rotational shift has been corrected, i.e., the images are parallel, the remaining translational shifts are handled. Because of the accuracy characteristics of the power cepstrum and the speed of the FFT, this new algorithm works very fast and accurately compared to conventional techniques. Also, the cepstrum technique has better tolerance of image noise than the traditional correlation measures. The accuracy obtained and computational time required for the cepstrum-based registration technique are illustrated by operating on sequential fundus images used in the early detection of glaucoma.
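A minimal sketch of the power-cepstrum step for recovering a purely translational shift between two frames that have already been rotation-corrected (the rotational correction via power spectra is not reproduced, and the peak handling is simplified relative to a practical implementation):

import numpy as np

def cepstrum_shift(img_a, img_b):
    """Estimate the translation between two frames from the power cepstrum of
    their sum: a relative shift shows up as a peak away from the origin."""
    g = np.asarray(img_a, float) + np.asarray(img_b, float)
    power = np.abs(np.fft.fft2(g)) ** 2
    cep = np.abs(np.fft.ifft2(np.log(power + 1e-12))) ** 2
    cep[0, 0] = 0.0   # suppress the origin term (in practice a small neighborhood
                      # around the origin is masked as well)
    dy, dx = np.unravel_index(np.argmax(cep), cep.shape)
    return dy, dx     # shift estimate (symmetric ambiguity in sign)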
By applying test theory, an object can be placed into a subspace within the hyperspace of features, which in turn gives a probability of correct classification (predictive value or diagnosability). Once the predictive value reaches or exceeds the predetermined confidence limit after a finite number of observations (tests) of the features, no additional observation is necessary. Because a discriminant for a given feature is set from empirical values ("experiences"), an observation of a feature need not be a precise measure. Instead, a comparison of whether the feature is greater or lesser than the discriminant can be used. The information content of the tests gives clues for deciding the sequence of the tests, in descending order of information, so as to classify an object with the minimum number of observations. These strategies reduce the time required for observation of features and computation, and shorten the execution of pattern recognition.
The principal goal of most of our remote sensing campaigns has been the choice of the best airborne sensors and the selection of the most efficient visible and infrared wavelengths for remote sensing of the Italian coastal zone. The "1986 C130 European Program" was carried out by a NASA C130 airplane last summer. In this context, on 30 July a flight over the Tuscan islands and coast was performed. The airplane was equipped with the following main sensors: a Thematic Mapper Simulator (TMS), a Thermal Infrared Multispectral Scanner (TIMS) and an Airborne Imaging Spectrometer (AIS). The images acquired were first corrected for the several types of instrumental noise and errors, then correlated with the flight parameters and geometrically corrected. Finally the data were reduced to physical units taking into account the sensor calibration. Particular attention was also paid to atmospheric effects, taken into account by the use of the spectral results of the computer program LOWTRAN-6. First results on sea temperature detection, especially near river or channel estuaries, are reported. At the same time a comparison between the thermal infrared channel of the TMS and those of the TIMS was performed. In addition, studies are being made on the relationships among chlorophyll, plankton, yellow substance, oil at sea, total suspended matter, fluorescence and sea color. On that basis, combining the bands of the TMS, tentative image processing is being performed to determine algae and dissolved organic material coverage.
The pilot of a helicopter close to the ground estimates the range to objects partly by their motion, or optical flow, in his field of view. An automatic system for passive ranging can operate similarly. A conventional TV camera (generating approximately 512 by 512 pixel images 30 times per second) is an adequate sensor for a passive ranging system that could be used for navigation or that detects objects close enough to a helicopter flight path to be threatening.
Historically, software tools for the design of space-borne imaging sensors have been either typical geometric ray tracing codes or stray-light analysis programs. While these codes provide useful information about the system, they do not currently permit simulation of the image at the detector focal plane. This paper describes a novel software system designed to perform focal simulation, which we believe fulfills an important role in the design of certain systems. The Optical Simulation for Imaging Reconnaissance and Intelligence Sensors (OSIRIS) code will provide users with an environment in which to synthesize imagery to a very high level of fidelity as would be detected at a sensor focal plane. The method for the simulation utilizes a reverse ray tracing technology previously developed for radiometric analysis in the Los Alamos Radiometry Code (LARC). At the present time, the OSIRIS software is complete and undergoing preliminary testing. While this method is extremely computation intensive, it appears to provide the designer with some information which could not otherwise be practically obtained. The primary objective of OSIRIS is to simulate imagery at the focal plane of a detector in space, taking into account laser light scattering in the earth's atmosphere.
The algorithms discussed in this paper combine to provide a straightforward approach to determining camera characteristics that can affect video images. These algorithms are simple to perform and have a sound statistical basis for interpreting the results. They were developed specifically while working with holographic images, where it is important to know how the camera itself affects the image. There are two characteristics of a video camera that directly affect the output image. The dark current is the "image" that is seen when no light is striking the camera. This map can depend on camera temperature, so it is useful to be able to gauge it as the camera temperature changes. The second characteristic is the camera's responsiveness to light, ranging from minimal to saturation. This responsiveness may or may not be linear with respect to light level or be uniform across the face of the camera. The methods described in this paper address a laboratory situation in which the dark current and responsivity needed to be determined. A method was used to determine an estimate of the dark-current map and to gain a general idea of the camera's illumination response. Confidence intervals for these estimates may be assigned; these intervals give the user a feel for how accurate the estimates are. By subtracting the dark current and adjusting the image with the responsivity map, the user will be able to obtain a truer image, free of the additive effects of the camera.
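A minimal sketch of the two estimates described above: a dark-current map from frames taken with no light, a responsivity map from uniformly illuminated frames, and the resulting correction of a raw image (frame averaging and normalization choices below are illustrative assumptions; the confidence-interval computation is not reproduced):

import numpy as np

def estimate_dark_current(dark_frames):
    """Mean over repeated no-light frames gives the dark-current map."""
    return np.mean(np.asarray(dark_frames, float), axis=0)

def estimate_responsivity(flat_frames, dark_map):
    """Mean flat-field frame, dark-subtracted, normalized to unit average gain."""
    flat = np.mean(np.asarray(flat_frames, float), axis=0) - dark_map
    return flat / (flat.mean() + 1e-12)

def correct_image(raw, dark_map, responsivity):
    """Subtract the dark current and divide by the per-pixel responsivity."""
    return (np.asarray(raw, float) - dark_map) / (responsivity + 1e-12)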
An architecture for a high-speed image processing system that supports a new shape-understanding algorithm is proposed, and a hardware system based on this architecture has been developed. The main considerations in the architecture are that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, each processing step can be performed at a speed of 80 nanoseconds per pixel.
In this paper we present a two-dimensional cellular hypercube architecture for image processing that combines features of conventional hypercube and cellular-logic architectures in a 2-D array of computation cells. A unified theory of parallel binary image processing, binary image algebra (BIA), serves as a software tool for designing parallel image processing algorithms. To match the hardware to the software, we characterize the cellular processors using the same algebraic structure as BIA. The two-dimensional cellular hypercube image processor is a cellular SIMD machine with N² cells and has a simple overall organization, low cell complexity and fast processing ability. An optical cellular hypercube implementation of BIA is proposed, offering parallel input/output and global interconnection capabilities that are difficult to achieve in planar VLSI technology.
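Binary image algebra builds its parallel algorithms from a few primitive operations on binary images, such as dilation by a structuring element, complement and union. The sketch below shows these primitives in ordinary serial NumPy as an assumed reading of BIA, not the cellular SIMD or optical implementation described above; a cellular machine would apply the same operation in every cell simultaneously, and real hardware would typically zero-pad at the borders rather than wrap around as np.roll does.

    import numpy as np

    def dilate(image, struct):
        """Binary dilation, one of the fundamental BIA operations.

        image:  (H, W) array of 0/1 integer pixels
        struct: list of (dr, dc) offsets forming the structuring element
        """
        out = np.zeros_like(image)
        for dr, dc in struct:
            # shift the image by each structuring-element offset and OR the results
            out |= np.roll(np.roll(image, dr, axis=0), dc, axis=1)
        return out

    # The other BIA primitives are simple element-wise operations:
    complement = lambda image: 1 - image
    union = lambda a, b: a | b

More complex operations such as erosion, opening and closing follow by composing these primitives, which is what makes a single algebraic structure usable for both the software and the cell design.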
A flexible software executive titled IMAGE, for Image Management, Analysis and Graphics Environment, has been developed to integrate image analysis requirements with image processing and animation workstations. The system controls both high-speed image digitization/pipeline processor stations and PC-based video animation production stations from a host computer. The system can thus address a complete sequence of operations supporting an image analysis task, including production of a video report.
This paper discusses special-purpose hardware for the high-speed generation of perspective images derived from multiple views of two-dimensional images. Also discussed are the problems associated with camera modelling, picking stick figures from the various images, implementation on a workstation, and the R&D efforts toward automating the laborious stick-figure picking process.
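The camera-modelling step underlying such perspective generation can be illustrated with a standard pinhole projection; the sketch below is a generic formulation under that assumption, not the paper's hardware pipeline, and the names are hypothetical.

    import numpy as np

    def project_points(points_3d, camera_matrix):
        """Project 3-D stick-figure vertices through a pinhole camera model.

        points_3d:     (N, 3) array of world coordinates
        camera_matrix: (3, 4) projection matrix combining camera pose and intrinsics
        """
        homogeneous = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])
        proj = homogeneous @ camera_matrix.T
        return proj[:, :2] / proj[:, 2:3]   # divide by depth to get image coordinates

Given stick figures picked from several calibrated views, the corresponding 3-D vertices can be triangulated and then re-projected through a new camera matrix to synthesize a fresh perspective view.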
Machine vision and image processing have been limited to slow software techniques for object definition, measurement and characterization. Such processing is now available as a hardware module. The hardware provides sixteen parameters for up to 255 parent or child blobs in 1/30th of a second, in real time. Among the parameters are connectivity analysis; area and centre of area; location, elongation and orientation; and shape discrimination. Target tracking is possible using these parameters. Hardware organization and configuration are described briefly, along with existing and potential applications.
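For reference, several of the listed parameters can be computed from the pixel coordinates of a labelled blob; the following is a plain software sketch of that computation (the paper's point being that the hardware module delivers such measures for hundreds of blobs within a video frame time), with illustrative variable names.

    import numpy as np

    def blob_parameters(rows, cols):
        """Area, centre of area, orientation and elongation of one labelled blob.

        rows, cols: 1-D arrays of the pixel coordinates belonging to the blob
        """
        area = rows.size
        rc, cc = rows.mean(), cols.mean()              # centre of area
        mu_rr = ((rows - rc) ** 2).mean()
        mu_cc = ((cols - cc) ** 2).mean()
        mu_rc = ((rows - rc) * (cols - cc)).mean()
        orientation = 0.5 * np.arctan2(2 * mu_rc, mu_cc - mu_rr)
        # eigenvalues of the second-moment matrix give the principal axis lengths
        half_span = np.sqrt(((mu_cc - mu_rr) / 2) ** 2 + mu_rc ** 2)
        lam_major = (mu_rr + mu_cc) / 2 + half_span
        lam_minor = (mu_rr + mu_cc) / 2 - half_span
        elongation = np.sqrt(lam_major / lam_minor) if lam_minor > 0 else np.inf
        return area, (rc, cc), orientation, elongation

Connectivity analysis (assigning pixels to parent and child blobs) would precede this step, after which the centroid and orientation are sufficient for simple frame-to-frame target tracking.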
The growing popularity of parallel transfer disks (PTDs) is creating demand for lower-cost interfaces to fast data buses. A new PTD interface to the VME bus provides command and status functions plus high-speed data transfers of up to 40 megabytes per second. End users and low-volume OEMs can now apply wide-bandwidth rotating storage to their data storage and transfer problems.
Scanning electron microscopes (SEMs) and other related equipment have the potential to provide images of 1024 x 1024 pixels or greater, but at a very slow pixel clock rate. The Megavision 1024XM efficiently interfaces to such unusual scanning formats because of its "asynchronous" input characteristics. Since the major tasks of interfacing are simplified or eliminated, high-resolution image processing is now affordable for SEM users.
A combination of general-purpose and specialized hardware and software enables the Series 200 subsystem to handle image processing computation requirements. The software presents several levels of parallel access to manage the complexity of this heterogeneous system.