The need for transmitting an image through a low data rate channel has led to the development of various image data compression techniques. Recently, transform image coding based on the discrete cosine transform (DCT) algorithm has been proven to be a near-optimum method for good quality, low data rate image transmission [1],[2]. In this paper, a CCD two-dimensional DCT [3] device structure based on the recently developed one-dimensional CCD DCT device [4] will be reviewed. The CCD DCT device computes a 16-point cosine transform in 100 ns. The device structure is based on the vector-matrix product algorithm and is implemented using a bank of 256 fixed-weight multipliers. A 60-dB dynamic range and -40-dB harmonic distortion have been achieved by the DCT device. Clocked at 10 MHz, the device performs 5 billion computations per second and dissipates only 700 mW. The speed, power, weight, and throughput rate advantages offered by the CCD technology make it ideal for use in a low-cost image transform CODEC [2].
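As a rough software analogue of the device's arithmetic, the sketch below (Python/NumPy, not the CCD implementation) computes a 16-point DCT as a vector-matrix product, the same 16 x 16 = 256 fixed-weight multiply-accumulates that the multiplier bank performs in hardware.

```python
import numpy as np

N = 16
n = np.arange(N)
# Orthonormal DCT-II basis; each row is one set of 16 fixed multiplier weights.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] /= np.sqrt(2.0)

x = np.random.rand(N)           # one 16-sample input vector
X = C @ x                       # 256 multiply-accumulates, as in the device
assert np.allclose(C.T @ X, x)  # orthonormality: the transform inverts exactly
```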
This communication presents a 2-D Discrete Cosine Transform processor realized as a single chip. Implementing a B. G. Lee graph, it can perform DCT computation at the video rate of 13.5 MHz on blocks of programmable size, from 16x16 down to 4x4. Direct or inverse DCT computation is also programmable. The maximum computation error for a direct transform followed by an inverse transform is always less than 1 LSB. A hardwired, mapped architecture has led to a 39 mm² silicon area in a 1.25-μm two-metal CMOS process.
Among various transform coding techniques for image compression, the Discrete Cosine Transform (DCT) is considered the most effective method and has been widely used in the laboratory as well as in the marketplace. The DCT is computationally intensive. For video applications at a 14.3 MHz sample rate, a direct implementation of a 16x16 DCT requires a throughput rate of approximately half a billion multiplications per second. In order to reduce the cost of hardware implementation, a single-chip DCT implementation is highly desirable. In this paper, the implementation of a 16x16 DCT chip using a concurrent architecture will be presented. The chip is designed for real-time processing of 14.3 MHz sampled video data. It uses row-column decomposition to implement the two-dimensional transform. Distributed arithmetic combined with bit-serial and bit-parallel structures is used to implement the required vector inner products concurrently. Several schemes are utilized to reduce the size of the required memory. The resultant circuit uses only memory, shift registers, and adders. No multipliers are required. It achieves high-speed performance with a very regular and efficient integrated circuit realization. The chip accepts 9-bit input and produces 14-bit DCT coefficients. 12 bits are maintained after the first one-dimensional transform. The circuit has been laid out in a 2-μm CMOS technology with the symbolic design tool MULGA. The core contains approximately 73,000 transistors in an area of 7.2 x 7.0 mm².
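The row-column decomposition the chip relies on can be sketched in a few lines of Python, with SciPy's 1-D DCT standing in for the distributed-arithmetic inner products: a 16x16 2-D DCT is sixteen 1-D transforms on the rows followed by sixteen on the columns.

```python
import numpy as np
from scipy.fft import dct

block = np.random.rand(16, 16)
rows = dct(block, axis=1, norm='ortho')    # first 1-D pass (rows)
coeffs = dct(rows, axis=0, norm='ortho')   # second 1-D pass (columns)

# Separability: applying the two 1-D passes in the opposite order is identical.
ref = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
assert np.allclose(coeffs, ref)
```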
A relatively simple algorithm is introduced for coding blocks of pels. Performance is comparable to orthogonal transform coding, which is considerably more complex.
Image data compression is achieved through the use of a two-dimensional (2-D) transform operation on the gray-level pixels within an image subsection followed by 2-D DPCM encoding of common transform coefficients from disjoint image subsections. Earlier results for this 2-D hybrid scheme employing the Discrete Cosine Transform (DCT) indicate an overall improvement in image source coding gain vis-a-vis conventional 2-D transform coding. Hybrid system performance dependence on transform block size is demonstrated along with the rank-ordering of the relative performance obtained when using the Haar, Walsh, Slant and DCT 2-D orthogonal transforms. It is shown that performance gains over conventional block 2-D transform coding are realized for transforms which are less efficient than the DCT. This hybrid system will provide a means of substantially improving existing hardware transform coding systems with a minimal increase in hardware complexity.
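A minimal Python sketch of the hybrid idea, assuming first-order DPCM with a uniform residual quantizer (the illustrative step size is ours, not the paper's): a common coefficient, here the DC term, is tracked across adjacent blocks, and only the quantized prediction residual is transmitted.

```python
import numpy as np

def dpcm_encode(coeff_sequence, step=4.0):
    """DPCM with a previous-sample predictor and uniform residual quantizer."""
    recon_prev = 0.0
    codes = []
    for c in coeff_sequence:
        residual = c - recon_prev
        q = int(np.round(residual / step))  # quantized residual to transmit
        codes.append(q)
        recon_prev = recon_prev + q * step  # decoder-side reconstruction
    return codes

dc_track = [512.0, 518.0, 530.0, 529.0, 541.0]  # DC coefficients of adjacent blocks
print(dpcm_encode(dc_track))
```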
Vector quantization is a new coding technique which has enjoyed much success in its brief history. Its most attractive features are a high compression ratio and a simple decoder. Thus, it shows great promise in applications using a single encoder with multiple decoders, such as videotext and archiving. However, some problems have arisen, most notably edge degradation, the so-called "block effect". A novel method is described in this paper which alleviates this problem without much increase in computational effort. An index, the activity index, has been devised based upon measurements on the input image data; it is used to classify image areas into two groups, active and nonactive. For nonactive areas, a large block size is used, while for active areas the block size is small. Two codebooks are generated, corresponding to each of the two groups of blocks formed. Using this adaptive vector quantization scheme, the results obtained show that the edge features are well preserved upon reconstruction at the decoder.
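The classification step might look like the following Python sketch, which assumes local variance as the activity index and an illustrative threshold; the paper's actual index is defined from its own measurements on the input data.

```python
import numpy as np

def classify_blocks(image, big=8, threshold=100.0):
    """Split the image into big x big areas; active (busy) areas get small blocks."""
    active, nonactive = [], []
    h, w = image.shape
    for r in range(0, h - big + 1, big):
        for c in range(0, w - big + 1, big):
            area = image[r:r + big, c:c + big]
            (active if area.var() > threshold else nonactive).append((r, c))
    return active, nonactive

img = np.zeros((64, 64))
img[20:36, 20:36] = 200.0           # a bright object on a flat background
act, non = classify_blocks(img)
print(len(act), "active areas,", len(non), "nonactive areas")
```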
In this paper, we develop a technique based on tree-searched mean residual vector quantization (MRVQ) for progressive compression and transmission of images. In the first stage, averages over image subblocks of a certain size are transmitted. If the receiver decides to retain the image, the residual image generated by subtracting the block averages from the original is progressively transmitted using the tree-searched vector quantization (VQ) hierarchy. In an attempt to reduce the bit-rate of the initial transmission, Knowlton's scheme is used to transmit the block averages progressively. Using a (4x4) block size, we obtain high quality images at 1.4 bits/pixel.
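A minimal Python sketch of the first (mean) stage, assuming 4x4 blocks: transmit the block averages, then form the residual image that the tree-searched VQ stages would progressively refine.

```python
import numpy as np

def block_means_and_residual(image, b=4):
    h, w = image.shape
    means = image.reshape(h // b, b, w // b, b).mean(axis=(1, 3))
    residual = image - np.kron(means, np.ones((b, b)))  # upsample and subtract
    return means, residual

img = np.random.rand(16, 16) * 255
means, residual = block_means_and_residual(img)
# Every 4x4 block of the residual averages to zero by construction.
assert np.allclose(residual.reshape(4, 4, 4, 4).mean(axis=(1, 3)), 0)
```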
Swift recognition of grey scale images transmitted through low bandwidth channels has been demonstrated by various progressive techniques in which a series of gradually refined image approximations are received and displayed. A previous technique demonstrated non-homogeneous progressive transmission in which image content controls transmission priorities, resulting in a decrease in the time required to receive a usable image. The non-homogeneous technique utilized two simultaneous decompositions: a quad-tree based spatial decomposition and a subimage information measure. The concept of simultaneous decomposition is extended to include a third component: grey level approximation. Just as an undecomposed quad-subtree provides a spatial approximation, values of pixels used as quad-subtree representatives are initially approximated and later refined. The three simultaneous decompositions are integrated so as to achieve, for a given transmission time, a higher degree of received image usefulness. The receiver does not have a priori knowledge about which image areas are to receive preferential treatment, and the level of preference is the pixel. The total transmission time for the series of approximations concluding in lossless reception, including preferential decomposition overhead, is comparable to the time required by non-progressive lossless methods. The technique is computationally simple and intended for general purpose processor architectures.
We describe a set of pyramid transforms that decompose an image into a set of basis functions that are (a) spatial frequency tuned, (b) orientation tuned, (c) spatially localized, and (d) self-similar. For computational reasons the set is also (e) orthogonal and lends itself to (f) rapid computation. The systems are derived from concepts in matrix algebra, but are closely connected to decompositions based on quadrature mirror filters. Our computations take place hierarchically, leading to a pyramid representation in which all of the basis functions have the same basic shape, and appear at many scales. By placing the high-pass and low-pass kernels on staggered grids, we can derive odd-tap QMF kernels that are quite compact. We have developed pyramids using separable, quincunx, and hexagonal kernels. Image data compression with the pyramids gives excellent results, both in terms of MSE and visual appearance. A non-orthogonal variant allows good performance with 3-tap basis kernels and the appropriate inverse sampling kernels.
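One pyramid level can be illustrated with the 2-tap orthonormal Haar pair standing in for the paper's odd-tap QMF kernels (a simplification for brevity): analyze into low and high subbands, recurse on the low band to build the pyramid, and reconstruct exactly.

```python
import numpy as np

def analyze(x):
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass, downsampled
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass, downsampled
    return lo, hi

def synthesize(lo, hi):
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

x = np.random.rand(16)
lo, hi = analyze(x)                  # recurse on lo for the full pyramid
assert np.allclose(synthesize(lo, hi), x)   # orthogonality: exact reconstruction
```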
Some noiseless data compression methods are introduced in this paper. By using these methods, we compress Chinese character patterns to save storage space. Since quality is very important for Chinese character patterns, the patterns have to return to their original forms after decoding. Before coding, we first preprocess the patterns by memorizing the changing points, rearranging pattern pixels, and applying a prediction function to improve the probability of some specific source symbols in the analysis. After the preprocessing stated above, we use m x n subblocks or run-lengths as the source symbols in Huffman coding and compare the results for each method. Our highest compression rate is 70%.
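The final coding stage can be sketched in Python as follows, with the preprocessing steps omitted: run-lengths of a binary pattern serve as the source symbols, and a Huffman tree built from their observed frequencies assigns the code lengths.

```python
import heapq
from collections import Counter

def run_lengths(bits):
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    return runs + [count]

def huffman_lengths(symbols):
    """Return code length per symbol (enough to estimate compression)."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    nid = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # merge the two rarest subtrees;
        f2, _, c2 = heapq.heappop(heap)   # every symbol inside gets one bit deeper
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, nid, merged))
        nid += 1
    return heap[0][2]

pattern = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1]
runs = run_lengths(pattern)
print(runs, huffman_lengths(runs))
```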
This paper examines the pattern recognition potential of spiral sampling applied to gray-scale, noisy images. Image pixels are rearranged into a one-dimensional sequence by selecting samples in a spiral manner, starting from the edge of the image and proceeding toward the center. The properties of this sample sequence are examined by Fourier transform analysis, using images from a number of groups with varying contrast, orientation, and size. The classification ability of features extracted from spiral sequences and their accuracy are investigated.
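A minimal Python sketch of the sampling itself, assuming a rectangular inward spiral: pixels are visited from the outer edge toward the center, producing the 1-D sequence on which the Fourier analysis then operates.

```python
import numpy as np

def spiral_sequence(image):
    img = image.tolist()
    seq = []
    while img and img[0]:
        seq.extend(img.pop(0))                    # peel off the top row
        img = [list(r) for r in zip(*img)][::-1]  # rotate the remainder CCW
    return np.array(seq)

img = np.arange(16).reshape(4, 4)
print(spiral_sequence(img))   # edge-to-center ordering of the 16 pixels
```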
An improved segmenter has been developed which partitions a monochrome image into homogeneous regions using local neighborhood operations. Perkin's well-known edge-based segmenting algorithm [1] is used to partition those portions of an image with little detail (low edge density) into regions of uniform intensity. A technique is introduced which segments the remainder of the image to reveal details that were previously lost. Region merging is then performed by removing selected boundary pixels that separate sufficiently similar (e.g., in average intensity) regions subject to the constraint that the boundary pixel quality (e.g. edge strength) is below a selected threshold. Region merging is repeated using less and less restrictive merging criteria until the desired degree of segmentation (e.g. number of regions) is obtained.
Recognition of partially occluded objects is a desirable function in a computer vision system, especially one employed in an industrial automation environment. In this controlled environment, the objects to be recognized can be constrained to a relatively flat region (the image plane), and thus be easily modelled by polygons. This paper studies issues in such a computer vision system and presents algorithms for the various processes involved in occluded polygon matching and recognition. The recognition process is carried out by model matching. The scene may contain unknown model objects which may overlap or touch each other, giving rise to partial occlusion. Both the model and the scene objects are represented by their polygon approximations. Features used for matching are extracted from line segments connecting all possible pairs of vertices in the polygon. They are: the vertex types at the two ends of the line segment, the angles of these vertices, the line type, and the line length. A polygon clipping algorithm based on geometrical properties is used to determine the types of line segments. We also develop a context-free grammar for recognizing line types. To speed up the recognition process, only priority features are used in the initial matching. The priority features are identified after some analysis of the geometrical properties of polygons with occlusion. A consistency check also reduces the pool of candidates for matching. The matching algorithm superposes the model object on the scene along line segments in sequence and checks the dissimilarity between the region enclosed by the scene polygon and the region enclosed by the model polygon appearing in the scene. A dissimilarity measure based on the phenomenon of light illumination and the theory of fuzzy subsets has been designed to measure the edge consistency between the scene and candidate model to select the best possible fit.
Surface fitting allows for pose estimation and recognition of objects in range scenes. Unfortunately, surface fitting is computation-intensive. One way to speed up this task is to use parallel processing, since its availability is on the increase. Mathematically, surface fitting can be formulated as an overdetermined linear system which can be solved in the least-squares sense. Because of its numerical stability and ease of implementation, QR factorization using Givens transformations is best suited for the parallel solution of overdetermined systems. In this paper we present two algorithms to carry out a QR factorization on both distributed-memory and shared-memory parallel computers. These algorithms have been implemented and evaluated in terms of speed-up.
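The sequential computation that both parallel algorithms distribute can be sketched in Python/NumPy: QR factorization by Givens rotations applied to an overdetermined system A x ≈ b, solved in the least-squares sense.

```python
import numpy as np

def givens_qr_solve(A, b):
    R, y = A.astype(float).copy(), b.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):      # zero R[i, j] against pivot row j
            r = np.hypot(R[j, j], R[i, j])
            if r == 0:
                continue
            c, s = R[j, j] / r, R[i, j] / r
            G = np.array([[c, s], [-s, c]])        # 2x2 Givens rotation
            R[[j, i], j:] = G @ R[[j, i], j:]      # rotate the two rows
            y[[j, i]] = G @ y[[j, i]]              # carry b along
    return np.linalg.solve(R[:n, :n], y[:n])       # back-substitution on R

A = np.random.rand(8, 3)
b = np.random.rand(8)
assert np.allclose(givens_qr_solve(A, b), np.linalg.lstsq(A, b, rcond=None)[0])
```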
This paper describes image processing equipment developed for a data entry system aimed at creating a plant-record database. The equipment employs a multi-processor architecture allowing parallel processing of very large drawings by dividing them into sub-images. Image enhancement and binarization processing is executed on the equipment to improve the image quality of handwritten plant-record drawings. The recognition processing of symbols and line segments in the drawing is also implemented. The equipment is operated as part of an experimental data entry system for A1-size handwritten plant-record drawings. It is confirmed that blur and faintness are eliminated in about 25 seconds, and the target symbols and line segments are recognized in about 15 minutes with an accuracy of over 90%.
This paper presents a new method to obtain features from intensity images by making use of an improved Hough transform. Through image preprocessing, the intensity image can be converted to a boundary image. A new approach is proposed to find the linear and elliptical features embedded in the boundary image, on which there are discontinuities and distortions caused either by occlusion or by noise. The Hough transform method is improved to recognize those features with less computational effort and greater accuracy.
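For reference, the standard line-detecting Hough transform that the paper refines looks like the following Python sketch: every boundary pixel votes for all (ρ, θ) lines through it, and peaks in the accumulator mark linear features.

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    diag = int(np.ceil(np.hypot(*shape)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in edge_points:
        # rho = x cos(theta) + y sin(theta), one vote per theta bin
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas

# A perfect diagonal (y = x) peaks at rho = 0, theta = 135 degrees.
pts = [(i, i) for i in range(32)]
acc, thetas = hough_lines(pts, (32, 32))
rho_i, th_i = np.unravel_index(acc.argmax(), acc.shape)
print("rho =", rho_i - (acc.shape[0] - 1) // 2,
      "theta =", np.degrees(thetas[th_i]))
```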
Since the input of Chinese characters is a barrier to the integration of computers, Chinese, and communication (C & C & C), we have studied and developed a powerful Chinese multifont recognition system. In this system, the algorithm we developed can recognize different character styles simultaneously in the same program. It can also recognize characters of different sizes.
An analytical investigation is described which relates errors in spatial registration of multispectral and multitemporal data sets to error in classification. Of particular interest is the classification of large areas of crops. Misregistration effects on the statistics of field-center and boundary pixels are studied. The possible results when more than two classes are involved are also considered. An analytical model allowing numerical calculation of the probability of error in a number of specific cases is developed, and requirements for a general model are discussed.
Two quantization methods for high quality image coding are presented. The goal in both methods is to keep subjective distortion below the visual threshold. The first method is a discrete cosine transform coding system. The coefficients of the transformed blocks are quantized with a set of uniform quantizers whose step sizes are determined by a parametric function. Subjective experiments were carried out to determine the parameter values which minimize output entropy while keeping distortion at about the threshold of perceptibility. The second method is a vector quantizer on small image blocks. A structured quantizer defined by a few parameters is described, and approximate parameter values for transparent coding are given.
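The quantization stage of the first method might be sketched as below in Python; the linear growth of step size with frequency is our illustrative stand-in for the paper's parametric function, whose parameters come from the subjective experiments.

```python
import numpy as np

def quantize_block(dct_block, base=2.0, slope=0.5):
    n = dct_block.shape[0]
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    step = base + slope * (u + v)           # coarser steps at higher frequencies
    indices = np.round(dct_block / step)    # transmitted symbols
    return indices, indices * step          # (codes, dequantized coefficients)

block = np.random.randn(8, 8) * 20
codes, recon = quantize_block(block)
print(np.abs(block - recon).max())          # error bounded by step/2 per coefficient
```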
In this paper, we discuss two non-linear models of early chromatic processing as motivated by biological systems. Two aspects of the models are examined: their spatio-chromatic characteristics as evident from linear analysis, and the way the operators built into these models respond to different stimuli. We show that both nonlinear models for cone responses can be subjected to linear analysis under low contrast conditions. Analysis of simple opponent (Type I) operators shows that these operators are inseparable in space and color. They exhibit color-opponent behavior at low spatial frequencies, and monochromatic behavior at high spatial frequencies. Double-opponent operators behave as chromatic change detectors, which signal changes in the sign of the two chromatic components that comprise the input. As such, under either model, they do not respond to multiplicative changes in intensity. A change of the weights assigned to the center and the surround moves the null point to a different wavelength, both for the Type I and the double-opponent operators. Because of the separability of the double-opponent operator, this feature can be used to tune the operator to detect inputs in a narrow band of wavelengths. The responses obtained from the two nonlinear models are in reasonable agreement with the responses in human color vision. This suggests that the models capture the appropriate qualitative behavior.
Many visual illusions and Gestalt grouping phenomena may be explained by low-pass filtering of these images by the visual system. However, the perceptions of these images are maintained even if such images are high-pass filtered before display, removing the low-frequency information. Many such high-pass filtered images can be treated as a form of two-dimensional amplitude modulation (2-D AM) signals. The low-frequency figure information is coded in the modulation envelope, which disappears with the carrier if low-pass filtered. The envelope may be retrieved (demodulated), using one of many non-linear operations followed by a low-pass filter. Theory and image-processing simulations show that the compressive non-linearity of the visual system suffices to demodulate these images. This model accounts for various perceptual phenomena associated with filtered images and band-limited psychophysical stimuli.
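The demodulation account is easy to reproduce numerically. The Python sketch below uses an illustrative square-root compressive nonlinearity: a high-pass AM signal passed through the nonlinearity and then a low-pass filter yields an output that tracks the low-frequency envelope.

```python
import numpy as np

n = np.arange(512)
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * n / 256)   # low-frequency "figure"
am = envelope * np.cos(2 * np.pi * n / 8)            # high-pass AM signal
compressed = np.abs(am) ** 0.5                       # compressive nonlinearity
kernel = np.ones(16) / 16                            # simple low-pass filter
recovered = np.convolve(compressed, kernel, mode='same')
print(np.corrcoef(recovered, envelope ** 0.5)[0, 1]) # ~1: envelope is recovered
```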
We present a generalised approach to color image display, based on exploiting the natural processes of human visual perception. We suggest that to allow intuitive and effective data interpretation, a data display should always be recognisable as a realistic scene, with individual data variables represented by scene properties such as surface height or color. This display approach is realised within a computational framework which incorporates computational vision models, and scene synthesis techniques from computer graphics. The framework is based on a perceptually uniform color space, and includes the modelling of color display processes for controlled and reproducible image generation.
A scheme is proposed for image decomposition into components having the spatial frequency band and orientation selectivity characteristic of the visual system. The hexagonal discrete Fourier transform (HDFT) of a hexagonally sampled image is partitioned into 36 bands, each of one octave and 30 degrees of angular orientation. Computational results of such image decomposition exhibit self-similarity of edge distribution, along the pyramidal structure of the spatial frequency components, in each of the six orientations. A combination of a coding scheme that reduces the apparent redundancy and an efficient bit reallocation procedure achieves good image reconstruction at a data transmission rate of about 1.0 bit/pixel.
We divide the boundary of a 2-dimensional object into segments each of which is either straight or a circular arc; associated with the segment end-points are angle measures that can be used to match an object with a transformed (rotated, scaled) version of itself. The chain code, easily extracted from the boundary pixels, is the basis of this division. The approach avoids problems common to many of the existing methods for identification of curvature extrema: sensitivity to noise and dependence on parameters that are chosen empirically. To each section of the boundary we assign a code that represents the change in slope between it and the previous section. This set of codes is integrated and thus provides a measure of the total directional change relative to the first section. For a closed object, the sequence of these sums is periodic, and one cycle can be plotted as a function of arc length, s. Such a plot can be shown to contain only straight lines: those that are not parallel to the s axis (representing circular arcs on the original boundary of the object), and those that are (representing straight sections on the boundary). This paper describes a recursive procedure for dividing the digital version of the curve described above into its linear segments. Each segment represents an arc that is the best fit to a portion of the original boundary; the angle which is defined by the arc is identical to the angle change of the edge in the same section, and the length of the arc is identical to that of the edge. The recursive procedure measures the error (for each value of arc length) between a proposed fitting line and the actual value of cumulative angle; where the error is maximum, and above a threshold, the line is segmented. The procedure is repeated until the error is sufficiently small. The breakpoints thus indicate the location and value of points of greatest curvature change. A formal definition of the procedure is given, and it is shown to perform well for rotated, scaled, and noisy objects.
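The recursive division can be sketched in Python as follows, under the simplification of fitting a straight line between segment endpoints of the cumulative-angle-vs-arc-length plot: break at the point of maximum error when it exceeds a threshold, then recurse on the two halves.

```python
import numpy as np

def split_segments(s, theta, tol=0.2):
    if len(s) <= 2:
        return [0, len(s) - 1]
    # Line through the two endpoints of this section of the plot.
    line = theta[0] + (theta[-1] - theta[0]) * (s - s[0]) / (s[-1] - s[0])
    err = np.abs(theta - line)
    k = int(err.argmax())
    if err[k] <= tol:                      # section is already nearly linear
        return [0, len(s) - 1]
    left = split_segments(s[:k + 1], theta[:k + 1], tol)
    right = split_segments(s[k:], theta[k:], tol)
    return left + [k + i for i in right if i > 0]

# Two straight boundary sections joined by a circular arc (constant-slope ramp).
s = np.arange(30, dtype=float)
theta = np.concatenate([np.zeros(10), np.linspace(0, 1.5, 10), 1.5 * np.ones(10)])
print(split_segments(s, theta))   # breakpoints mark the curvature changes
```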
Recursive image filters are computationally more efficient than nonrecursive ones. The phase of recursive filters is normally nonlinear, but it can be made linear (actually zero) by the use of multi-directional filtering, in which the overall system is a cascade or parallel combination of the same simple recursive filter applied in different directions. The same concept is generalized to include nonlinear recursive multi-directional image filters. Nonlinear recursive filters have not been investigated in the literature due to their analytic difficulty and their lack of general stability criteria. A useful task requiring nonlinear operation is to separate the high spatial frequency components of a picture into those due to edges and those due to noise. A linear low-pass filter will simultaneously smooth noise and blur edges. A linear high-pass filter will simultaneously crispen edges and enhance noise. We propose a specific nonlinear multi-directional recursive scheme for simultaneous edge enhancement and noise smoothing of images. The filter has low complexity and is guaranteed to be stable. It can be used for image enhancement or restoration purposes, or as a pre-processor for spatial coding techniques, in which case the compression ratio can be increased without deterioration of quality by eliminating useless information from the original signal.
In this paper a class of iterative image restoration algorithms is derived based on a representation theorem for the generalized inverse of a matrix. These algorithms exhibit a first or higher order of convergence and some of them consist of an "on-line" and an "off-line" computational part. The conditions of convergence and the rate of convergence of these algorithms are derived. A faster rate of convergence can be achieved by increasing the computational load. The algorithms can be applied to the restoration of signals of any dimensionality. Iterative restoration algorithms that have appeared in the literature represent special cases of the class of algorithms described here.
The receiver operating characteristic (ROC) is an empirical measure of the performance of an integral system consisting of communication equipment and a human observer. The ROCs of two systems may intersect, thus making comparison of the two ROCs difficult. Divergence (Kullback) can be used as a figure of merit for comparison in such instances. A theoretical basis of the ROC, necessary to calculate divergence from the ROC, is elaborated.
This work presents the results of first-principles calculations of the effects of frequency and temporal compounding on speckle contrast and axial (range) resolution. Frequency compounding corresponds to the process in which a coherent pulse of a specific bandwidth is passed through a filter bank which divides the pulse into a number of sub-bands. The image is formed by incoherently summing (compounding) the intensities of the individual sub-bands. Temporal compounding falls into the category of non-linear multirate signal processing, as the final pixel intensity is made up of the intensity of samples which were sampled initially at rates equal to or higher than the pulse bandwidth. The intensities of the samples are then compounded to form the final image. In the limit of infinite sampling rate, temporal compounding is exactly equivalent to analog integrated backscatter, where the sensor continuously integrates the intensity of the incoming signal. The speckle contrast reduction and the decrease in resolution for these processes are discussed. These results apply to generic coherent imaging systems which can access both the amplitude and phase of signals.
A general iterative method of restoring linearly degraded images has been introduced recently [J. Opt. Soc. Amer., 4, 208-215 (1987)]. In this paper, the general method is reformulated into a more tractable fixed point iterative procedure. The new formulation is shown to be an implementation of the steepest descent algorithm. The inherent step size of the generalized method is found to be responsible for its slow convergence. A new method is presented whose increased step size offers accelerated convergence. The realization of the new accelerated method is shown to require only a minor modification of the original algorithm. A new stopping criterion is also introduced. Computer simulations demonstrate a significant improvement in the rate of convergence of the new method.
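The underlying fixed-point iteration, before acceleration, is ordinary steepest descent on ||y - Hx||²; the Python sketch below uses the simple step size 1/||H||² (the paper's contribution is precisely a larger, accelerated step).

```python
import numpy as np

def iterative_restore(H, y, n_iter=500):
    beta = 1.0 / np.linalg.norm(H, 2) ** 2   # conservative step ensuring convergence
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = x + beta * H.T @ (y - H @ x)     # steepest-descent update
    return x

H = np.tril(np.ones((6, 6))) / 3.0           # a toy invertible blur operator
x_true = np.random.rand(6)
x_hat = iterative_restore(H, H @ x_true, n_iter=5000)
print(np.abs(x_hat - x_true).max())          # converges toward the original
```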
The Random Scan Computer Vision System acquires two-dimensional data along task-dependent scanning patterns matched to image structure. A scanning trajectory is generated based on a-priori information concerning image structure, and is modified in real-time by the central processor in an interactive mode. The utility of various deterministic and probabilistic strategies is discussed, with special emphasis on the hierarchical nature of the process.
In this paper, we propose a method of extracting the regions of car number plates using two kinds of modified Hough transformations. One is a parameter-restricted Hough transformation (PRH) and the other is a hierarchical parameter-restricted Hough transformation (HPRH). The PRH algorithm, in which the range of the parameter plane is restricted, requires less computation time and storage capacity than the usual Hough transformation. The HPRH algorithm uses a hierarchical structure of the image, and it also includes the PRH algorithm. The hierarchical structure reduces the number of picture points to be processed.
In this paper, applications of mathematical morphology to various image analysis tasks for autonomous image analysis, based on a morphological image analysis system implemented on an Alliant FX8 computer at TASC, are discussed. The basic concept of the theory of mathematical morphology is briefly described. Simple examples are given to provide further insights into this theory. Some high-level algorithms for edge detection, noise cleaning, and sharpening are investigated. Two new concepts, restricted dilation and erosion, and the pattern spectrum, are presented for target enhancement and detection. The analogy between the role of the pattern spectrum in shape analysis and that of the Fourier power spectrum in signal analysis is explored.
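For readers new to the theory, the two primitives from which morphological operators are composed can be written directly (Python sketch, binary case, 3x3 structuring element); their difference is the familiar morphological gradient.

```python
import numpy as np

def dilate(img, se):
    h, w = img.shape
    pad = np.pad(img, 1)
    out = np.zeros_like(img)
    for r in range(h):
        for c in range(w):
            out[r, c] = np.any(pad[r:r + 3, c:c + 3] & se)  # SE hits the set
    return out

def erode(img, se):
    h, w = img.shape
    pad = np.pad(img, 1)
    out = np.zeros_like(img)
    for r in range(h):
        for c in range(w):
            out[r, c] = np.all(pad[r:r + 3, c:c + 3] >= se)  # SE fits inside
    return out

se = np.ones((3, 3), dtype=int)
img = np.zeros((7, 7), dtype=int)
img[2:5, 2:5] = 1
print(dilate(img, se) - erode(img, se))   # morphological gradient: the boundary
```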
In previous work, the concept of estimation theory has been applied to provide a basis for determining the fractal dimension of a given data set when fractional Brownian motion (FBM) is assumed to be a suitable model. However, the generated FBM functions used to test the approach were only one dimensional, even though the estimation was applied to images. In the current work, the generation process is extended to two dimensions so that FBM images are generated directly. A series of 32 x 32 FBM images were formed for fractal dimensions of 2.2 to 2.8. The texture in these images is seen to be representative of textures observed in x-ray images of bone. Furthermore, by combining the FBM realization with a deterministic function such as a sinusoid, it is found that complex-appearing images can be broken down into two basic components: fractal and deterministic. Thus this methodology may prove useful in the analysis and presentation of medical images.
This paper presents several experimental results of using the techniques of set- and function-based mathematical morphology (MM) for feature extraction from real and synthetic range imagery. More specifically, we consider the problem of extracting silhouetted appendages from real imagery with coarse range resolution and that of extracting appendages and corners from synthetic imagery with high range resolution.
There are many definitions of the fractal dimension of an object, including the box dimension, the Bouligand-Minkowski dimension, and the intersection dimension. Although they are all equivalent in the continuous domain, when discretized and applied to digitized data, they differ substantially. We show that the standard implementations of these definitions on self-affine curves with known fractal dimension (Weierstrass-Mandelbrot, Kiesswetter, fractional Brownian motion) yield results with errors ranging up to 5 or 10%. An analysis of the source of these errors led to a new algorithm in 1-D, called the variation method, which yielded accurate results. The variation method uses the notion of ε-variation to measure the amplitude of the one-dimensional function in an ε-neighborhood. The order of growth of the integral of the ε-variation, as ε tends toward zero, is directly related to the fractal dimension. In this paper, we extend the variation method to higher dimensions and show that, in the limit, it is equivalent to the classical box counting method. The result is an algorithm for reliably estimating the fractal dimension of surfaces or, more generally, graphs of functions of several variables. The algorithm is tested on surfaces with known fractal dimension and is applied to the study of rough surfaces.
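The classical box-counting estimate against which the variation method is compared can be sketched in a few lines of Python: count occupied boxes at several scales and fit log N(ε) against log(1/ε).

```python
import numpy as np

def box_count_dimension(points, sizes=(1, 2, 4, 8, 16)):
    counts = []
    for e in sizes:
        # Identify each occupied box by its integer grid coordinates.
        boxes = {tuple((p // e).astype(int)) for p in points}
        counts.append(len(boxes))
    # Slope of log N versus log(1/e) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A straight line of points should give a dimension close to 1.
line = np.array([[t, 0.5 * t] for t in np.linspace(0, 256, 2000)])
print(box_count_dimension(line))
```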
This paper presents an application of morphological systems to the problem of locating man-made objects in Forward Looking Infrared (FLIR) images. The FLIR images consist of compact light (concentrated heat) regions corresponding to the object against a darker (cooler) background with some light distractions such as trees or a forest. The images generally have poor contrast because of the nature of heat-sensitive imagery. The goal of this research is to isolate the object (when present) from its background and to provide its exact location within the imaging window. The research focuses upon the selection of the morphological operations, the choice of the shape and size of the structuring elements, and the sequence in which the operations are applied. Preliminary experimental results indicate that morphological transformations may be well suited for this application. The compact light areas representing man-made objects are readily separated from the larger light ridges representing trees or forests.
An image is modeled by a two-parameter stochastic process generated by splitting a pixel into a block of 2 x 2 pixels, thereby obtaining a higher resolution image. This sequential reproduction process is achieved by using a sequence of random variables as multipliers. In the analysis of image structure we apply the model in the reverse order, i.e., we estimate the sequence of random variables which generates the image. The sizes of the matrices representing the stochastic processes show a pyramidal structure when increasing or decreasing the resolution. The features characterizing an image are provided by analysis of the properties of the estimated stochastic processes at each level of the pyramid, and by comparison of properties characteristic of consecutive levels of the pyramid. As a byproduct of this image structure analysis, our approach suggests a new method for data compression.
We present an algebraic structure, known as the AFATL (Air Force Armament Technical Laboratory) Image Algebra, that is capable of expressing all image-to-image transformations. After presenting a brief overview of the operands and operations of the algebra, we show how a subalgebra of the full Image Algebra generalizes the theory of mathematical morphology. We provide examples which include 1) morphological operations expressed in the algebra, 2) Image Algebra algorithms not expressible in terms of morphological operations, and 3) a fractal target detection algorithm expressed in terms of the Image Algebra.
The primary goal of an image algebra is the development of a mathematical environment in which to express the various algorithms employed in image processing. From a practical standpoint, this means that the algorithms should appear as strings in an operational calculus, where each operator can ultimately be expressed as a string composed of some collection of elemental, or "basis," operators and where the action of the string upon a collection of input images is determined by function composition. For instance, rather than defining operations such as convolution and dilation in a pointwise manner, we desire closed-form expressions of these operators in terms of low-level operations that are close to the algebraic structure of the underlying mathematical entities upon which images are modeled. It is precisely such an approach that will yield a natural symbolic language for the expression of image processing algorithms.
In this paper we introduce some new techniques for modeling fractal images using concepts from the theory of iterated function systems and morphological skeletons. In the theory of iterated function systems, a fractal image can be modeled arbitrarily closely as the attractor of a finite set of affine maps. We use the morphological skeleton to provide us with sufficient information about the parameters of these affine maps. This technique has applications for fractal synthesis, computer graphics, and coding. Images that exhibit self-similarity, such as leaves, trees, mountains, and clouds, can be easily modeled using these techniques. Slight perturbations in the parameters of the model create variations in the image that can be used in animation. Finally, the small number of parameters in the model allows for very efficient image compression.
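Rendering an IFS attractor is itself simple. The Python sketch below uses the "chaos game" with three illustrative contractive affine maps (generating the Sierpinski gasket, not a skeleton-derived model) to show how few parameters such a model needs.

```python
import numpy as np

# Each map is (scale, offset): p -> scale * p + offset, a contraction.
maps = [(0.5, np.array([0.0, 0.0])),
        (0.5, np.array([0.5, 0.0])),
        (0.5, np.array([0.25, 0.5]))]

rng = np.random.default_rng(0)
p = np.array([0.1, 0.1])
hits = np.zeros((64, 64), dtype=int)
for _ in range(20000):
    scale, offset = maps[rng.integers(len(maps))]  # pick a random affine map
    p = scale * p + offset                         # iterate toward the attractor
    r, c = (p * 63).astype(int)
    hits[63 - c, r] += 1                           # accumulate visits on a raster
print((hits > 0).sum(), "pixels on the attractor")
```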
Recently, mathematical morphology has been used to develop efficient and statistically robust 2-dimensional edge detectors [1]. These edge detectors have been shown to outperform most mask- and differentiation-based edge detectors. In this paper, we introduce a general robust N-dimensional morphological edge detector that outperforms any of the previously developed morphological edge detectors. We compare the statistical performance of our edge detector with that of the previously developed 2-D morphological edge detector on images with various noise levels. Finally, we also include some examples of our edge detector's output on both 2- and 3-dimensional images for comparison with other operators.
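As an illustration of the family (not the paper's N-dimensional detector), the following Python sketch implements a simple robust morphological edge operator: blur, then take the pointwise minimum of (dilation - blur) and (blur - erosion), which suppresses isolated noise responses better than either term alone.

```python
import numpy as np

def windows(img):
    pad = np.pad(img, 1, mode='edge')
    return np.lib.stride_tricks.sliding_window_view(pad, (3, 3))

img = np.zeros((8, 8))
img[:, 4:] = 100.0                          # a vertical step edge
blur = windows(img).mean(axis=(2, 3))       # noise-suppressing pre-blur
w = windows(blur)
dil, ero = w.max(axis=(2, 3)), w.min(axis=(2, 3))  # gray dilation / erosion
edge = np.minimum(dil - blur, blur - ero)   # strong only on genuine ramps
print(edge.round(1))
```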
This paper discusses a new technique for object detection that uses fractals to model the natural background in a visible image. Our technique is based on the fact that fractal-based models have been found to be good models for natural objects as well as images of natural objects. On the other hand, man-made objects are decidedly not self-similar and therefore fractal-based models are not good models for man-made objects and their images. The technique adaptively fits a fractal-based model and a 2-D autoregressive model over the image and the fractal dimension and model-fit errors are used to identify regions of anomalous dimension and high error. Thus the technique uses a dual approach to object detection by modeling and deemphasizing the natural background instead of explicitly modeling and identifying the man-made object. Results are shown for a real image.
In this paper we use the fractal surface model [Pentland 83] to describe complex, natural 3-D surfaces in a manner that mimics human perceptual judgments of surface structure (e.g., "peaks," "ridges," or "valleys"). We describe how real surfaces can be decomposed into such descriptions using a minimal-length encoding procedure. This allows us to structure the pixel data in a manner that corresponds to the perceptual organization people impose upon the data, so that a user can point to a CRT image of a digital terrain map (DTM), say "that one," and have the machine understand the user's reference.
This paper uses a powerful three stage mapping methodology to realize highly concurrent systolic array processor architectures for image processing. The paper addresses the issues of designing special purpose systolic image enhancement processors for edge detection and median filtering and designing configurable general purpose systolic signal processors for Kalman filtering and artificial neural networks. The latter two arrays can then be specifically configured for image restoration and other image processing problems.
A nonlinear, nonrecursive, two-dimensional filter has been developed for edge-preserving noise smoothing of images. It can also be beneficial as a pre-processor for data compression purposes. The filter is applied over a 3x3 window of pixels and uses an adaptive threshold. As with all image filters, the computational requirements lead to the utilization of special purpose VLSI hardware for real time implementation. The algorithm is being implemented in VLSI using a systolic array structure and parallel architecture, in which each processing element is assigned to one pixel. The structure of the processing element is very simple: except for some control gates and registers, it contains only a one-bit full adder as the main functional element. The most important feature of the proposed architecture is that the nonlinear operation has been implemented using no more than the simple tools and the silicon area required for a linear convolutional algorithm. Other properties of the above design are: a simple interconnection scheme between processing elements and common control circuitry for all the processing elements on a single chip. Due to the topological simplicity of the processing element, arrays of size 16x16 cells per chip are believed to be feasible, using standard CMOS technology.
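The filter class can be illustrated with the Python sketch below, in which each pixel is replaced by the mean of the 3x3 neighbors within an adaptive threshold of it; the threshold rule (a multiple of the local standard deviation) is our stand-in for the paper's rule.

```python
import numpy as np

def smooth(img, k=1.0):
    pad = np.pad(img, 1, mode='edge')
    out = img.astype(float).copy()
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            win = pad[r:r + 3, c:c + 3].astype(float)
            t = k * win.std()                      # adaptive threshold
            mask = np.abs(win - img[r, c]) <= t    # neighbors on the same side
            out[r, c] = win[mask].mean()           # edges are never averaged across
    return out

noisy = np.zeros((8, 8))
noisy[:, 4:] = 100.0                               # a step edge...
noisy += np.random.default_rng(1).normal(0, 5, noisy.shape)  # ...plus noise
print(smooth(noisy).round(0))                      # noise smoothed, edge preserved
```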
This paper presents a unified methodology for partitioning multidimensional problems on a shuffle-exchange network and a butterfly network. A rasterization theory plays a central role in this methodology. The theory provides certain necessary and sufficient conditions for transforming a multidimensional problem into a one-dimensional one. The transformed one-dimensional problem is then mapped onto a Parallel/Pipelined Partitionable architecture such that a uniform treatment of any dimensional problem is obtained.
This paper describes a modified Givens rotation algorithm and a pipelined architecture for solving the linear system Ax = b. We show that this algorithm can be implemented with a trapezoidal array of O(n²/2) processors together with a linear array of O(n) processors, giving a computing time of about 5n time units. The numerical stability of this algorithm is superior to that of the conventional hyperbolic algorithm. Because the array processors are simple and regular, the architecture of the linear system solver is well suited to VLSI implementation.
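For reference, this is the standard (unmodified) Givens-rotation solve that the array parallelizes: rotations zero the subdiagonal of A, and back-substitution recovers x. The paper's modification and its pipelined schedule are not reproduced here.

```python
import numpy as np

# Solve A x = b by Givens QR; assumes A square and nonsingular.

def givens_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = A.shape[0]
    for j in range(n):                 # zero column j below the diagonal
        for i in range(j + 1, n):
            r = np.hypot(A[j, j], A[i, j])
            if r == 0.0:
                continue
            c, s = A[j, j] / r, A[i, j] / r
            Aj, Ai = A[j, j:].copy(), A[i, j:].copy()
            A[j, j:] = c * Aj + s * Ai          # rotate rows j and i
            A[i, j:] = -s * Aj + c * Ai
            b[j], b[i] = c * b[j] + s * b[i], -s * b[j] + c * b[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):     # back-substitution on the triangle
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```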
This paper describes two ASIC devices which are the building blocks for the memory management of an 8-Megabyte Video Memory system. The devices are used in tandem to synchronize, buffer, multiplex, and execute 256 million 4-byte I/O requests per second, for a total transfer capacity of 8 gigabits per second. Requests are communicated to the memory over 16 asynchronous channels. The first device, the Memory Channel Interface (MCI), synchronizes, buffers, and routes incoming address, data, and control bits from the input/output channels to the second ASIC device, the Memory Module Interface (MMI). The MMI interfaces the MCI with static memory chips and produces the proper control signals for the memory's operation. Together, the MCI and MMI devices allow the high-performance Video Memory to be realized with approximately 150 devices.
Cluster analysis is a generic name for a variety of mathematical methods that can be used to classify a given data set; it is used to understand a set of data and to reveal its structure. Clustering techniques find important applications in the disciplines of pattern recognition and image processing, where they are very useful for unsupervised pattern classification and image segmentation. This paper presents a VLSI cluster analyser that implements the squared-error clustering technique using extensive pipelining and parallel computation. The proposed cluster analyser can perform one pass of the squared-error algorithm (finding the squared distance from every pattern to every cluster center, assigning each pattern to its closest cluster center, and recomputing the cluster centers) in O(N+M+K) time units, where M is the dimension of the feature vector, N is the number of sample patterns, and K is the desired number of clusters; a uniprocessor would need O(N x M x K) time units. The algorithm partitioning problem is also studied in this paper.
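One pass of the squared-error algorithm, as the abstract describes it, can be stated compactly; this scalar version is the O(N x M x K) uniprocessor computation that the analyser pipelines.

```python
import numpy as np

# One pass of squared-error (k-means-style) clustering:
# distances, nearest-centre assignment, centre recomputation.

def squared_error_pass(X, centers):
    # X: (N, M) patterns; centers: (K, M) current cluster centres
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (N, K)
    labels = d2.argmin(axis=1)             # closest centre per pattern
    new_centers = np.array([X[labels == k].mean(axis=0)
                            if np.any(labels == k) else centers[k]
                            for k in range(len(centers))])
    return labels, new_centers
```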
A three-dimensional, multipurpose systolic architecture is proposed in which instructions, as well as data, propagate through the array structure. This paper presents the requirements and characteristics of this Data/Instruction Systolic Array (DISA) architecture. In addition, an algorithm for the multiplication of two matrices on a three-dimensional DISA is presented, and examples of two-dimensional DISA algorithms are provided to demonstrate the feasibility of the approach. Finally, analysis indicates that the performance of algorithms implemented on the DISA approaches that of more traditional, single-purpose arrays while providing the advantage of multiple, pipelinable functions.
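The sketch below is a software simulation of the classical 2-D systolic matrix product that underlies the matrix-multiplication example: operands of A flow rightward, operands of B flow downward with skewed entry, and each cell multiply-accumulates what passes through it. The DISA additionally propagates instructions with the data; only the data flow is modelled here.

```python
import numpy as np

# Simulate an n x n systolic array computing C = A @ B.

def systolic_matmul(A, B):
    n = A.shape[0]
    C = np.zeros((n, n))
    a_reg = np.zeros((n, n))   # A operand currently held at each cell
    b_reg = np.zeros((n, n))   # B operand currently held at each cell
    for t in range(3 * n - 2):              # enough steps to drain the array
        a_reg = np.roll(a_reg, 1, axis=1)   # shift A operands one cell right
        b_reg = np.roll(b_reg, 1, axis=0)   # shift B operands one cell down
        for i in range(n):                  # feed skewed boundary inputs
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
            b_reg[0, i] = B[k, i] if 0 <= k < n else 0.0
        C += a_reg * b_reg                  # every cell multiply-accumulates
    return C

A = np.random.rand(4, 4); B = np.random.rand(4, 4)
assert np.allclose(systolic_matmul(A, B), A @ B)
```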
Many parallel computer architectures currently exist for image processing applications. Their main goal is to carry out image processing in a highly parallel manner so that the processing time is short. In most of these architectures the processing elements are used very efficiently, but the memory subsystems are not: although the processing is done in parallel to improve response time, the available bandwidth is limited, in most cases, by memory I/O operations. This paper introduces an MIMD (multiple instruction, multiple data stream) parallel multimicroprocessor architecture for image processing in which both the processors and the memory subsystems are kept as busy as possible, to obtain faster response time and proper utilization of the hardware. The proposed architecture consists of an array of Processing Elements (PEs), a System Control Unit, and the main memory of the system. Each PE contains two Central Processing Units (CPUs): one responsible for execution and the other for all memory operations. The overall response time of a task is reduced because actual execution and memory operations are divided into two separate entities and carried out concurrently, an improvement over the conventional architecture.
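The overlap the PE exploits can be mimicked in software with a double-buffered producer/consumer pair: one thread plays the memory CPU and streams blocks, the other plays the execution CPU and computes on them. The block size, queue depth, and stand-in computation below are illustrative assumptions, not details from the paper.

```python
import threading, queue
import numpy as np

def memory_cpu(image, block_rows, out_q):
    # "Memory CPU": streams the image one block of rows at a time.
    for r in range(0, image.shape[0], block_rows):
        out_q.put(image[r:r + block_rows].copy())   # simulated fetch
    out_q.put(None)                                 # end-of-stream marker

def execution_cpu(in_q, results):
    # "Execution CPU": processes each block as soon as it arrives.
    while (block := in_q.get()) is not None:
        results.append(block.mean())                # stand-in computation

image = np.random.rand(512, 512)
q = queue.Queue(maxsize=2)          # double buffer: fetch overlaps compute
results = []
t1 = threading.Thread(target=memory_cpu, args=(image, 64, q))
t2 = threading.Thread(target=execution_cpu, args=(q, results))
t1.start(); t2.start(); t1.join(); t2.join()
```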
In order to reduce the effects of scatter from the object and of veiling glare from the image intensifier (I.I.)-T.V. system in digital radiographic images, we have employed a multiple-slit x-ray beam imaging technique. For reconstruction of a final image, a multiple-slit assembly (MSA) was moved under the x-ray tube by a microprocessor-driven stepper motor. The accuracy of the movement was kept within ±10 μm with the aid of a shaft encoder. The MSA was moved over the required distance (0.1 mm to 2.0 mm) in 100 msec, corresponding to the time between x-ray exposures. Positional errors as well as construction errors in the MSA contributed to artifacts in the image. We determined that positional errors were kept below the ±10 μm limit; this was confirmed by the size of the artifacts observed in the reconstructed images and by direct measurements of the MSA position with a dial indicator.
Quantitative measurement of regional myocardial perfusion would be invaluable for the detection and treatment of ischemic heart disease. Sonicated echocontrast agents now allow 2-D ultrasound scanners to track blood flow, but commercial scanners distort the received signal to produce visually pleasing images. To capture a digitized radiofrequency (rf) envelope signal suitable for quantitative analysis before video processing, a real-time system for computer acquisition of echocardiogram data (CAED) was constructed.
We present preliminary results of a study aimed at assessing the actual effectiveness of fractal theory and defining its limitations in the area of medical image analysis for texture description, in particular in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) was performed on synthetically generated images of known fractal dimension. Moreover, we analyzed radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., to texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
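Pentland's method estimates D from the slope of the image block's power spectrum; a simplified sketch follows. The spectral fit and the handling of a single block are schematic relative to the paper's 6x6-block procedure, and the relation D = (7 - beta)/2 is the standard one for surfaces.

```python
import numpy as np

# Estimate fractal dimension from the power-spectrum slope:
# log P(f) ~ -beta * log f, then D = (7 - beta) / 2.

def fractal_dimension(block):
    n = block.shape[0]
    P = np.abs(np.fft.fft2(block - block.mean())) ** 2
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2).ravel()
    p = P.ravel()
    keep = f > 0                              # drop the DC term
    slope, _ = np.polyfit(np.log(f[keep]), np.log(p[keep] + 1e-12), 1)
    beta = -slope                             # spectral falloff exponent
    return (7.0 - beta) / 2.0
```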
Telepathology is a new field of medicine encompassing the practice of pathology at a distance by visualizing an image of tissue and/or cells on a video monitor rather than viewing the specimen directly through a microscope. Essential components of a telepathology system include: (1) a fully motorized microscope equipped with a high-resolution video camera at a remote site where the patient is located; (2) a pathologist's workstation that incorporates the controls for manipulating the remote microscope and a high-resolution video monitor at a Physician's Imaging Center (tm); and (3) a communications linkage between the remote site and the Physician's Imaging Center. Such a system has been constructed, and its usefulness has been demonstrated using an SBS 3 satellite for video transmission and remote control of the robotic microscope.
The receiver operating characteristic (ROC) curve is used to assess the ability of a diagnostic test to distinguish between two discrete states, such as tumor present or tumor absent in a histopathologic section. We have used ROC methodology to assess the ability of pathologists to diagnose frozen-section biopsies of breast tissue as benign or malignant, using both a conventional light microscope and a high-resolution camera/monitor system. A total of 115 consecutive frozen-section breast biopsies were reviewed with each modality. The two modalities yielded identical ROC curves, and the percentage of cases in which pathologists rendered an "equivocal" diagnosis was the same with both.
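For readers unfamiliar with the construction, an ROC curve is traced by sweeping a decision threshold over confidence ratings and plotting true-positive rate against false-positive rate. The sketch below is generic; none of the study's data appear in it.

```python
import numpy as np

# ROC curve and area under it from scores and binary labels
# (1 = malignant, 0 = benign); assumes both classes are present.

def roc_curve(scores, labels):
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)[np.argsort(-scores)]  # descending confidence
    tps = np.cumsum(labels)                   # true positives per cutoff
    fps = np.cumsum(1 - labels)               # false positives per cutoff
    tpr = np.concatenate(([0.0], tps / tps[-1]))
    fpr = np.concatenate(([0.0], fps / fps[-1]))
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoid rule
    return fpr, tpr, auc
```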
Physicians frequently need to see retinal, biopsy, and other images, produced originally by medical diagnostic equipment and then processed by computers. At present this is done using either time-consuming photography or expensive video equipment. We have used digital halftoning as a method for fast communication of images, both in print and on computer terminal screens. This method quickly produces a good-quality halftone rendition of a grey-scale image, suitable for display and printout on inexpensive devices that normally have no grey-scale capability. The algorithm is based on a previously published error-propagation technique, which we improved by including a factor that accounts for the difference in size between light and dark points on various devices; we also extended it to devices that have two bitplanes (VT240) and reduced the execution and transmission times. At Tufts-New England Medical Center in Boston, this program has been used in processing and reporting the results of muscle and nerve biopsies. At the Eye Research Institute in Boston, it has been used to report the results of retinal visual field mapping. The technique has a wide range of applications. It allows "image processing" to be done on computers that have no traditional image-processing hardware; it allows several users to operate simultaneously on time-shared systems that have only a single image processor; images can be displayed on 1- or 2-bitplane devices (LA50 printers or VT240 terminals); and it allows image transmission over long distances, replacing video communication equipment with RS232 cables and modems.
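A standard error-propagation halftone of the kind the paper builds on is the Floyd-Steinberg scheme below; the paper's refinements (dot-size correction, two-bitplane output, speedups) are not shown here.

```python
import numpy as np

# Floyd-Steinberg error diffusion: quantize each pixel to 0/1 and
# propagate the quantization error to unvisited neighbours.

def halftone(gray):
    # gray: 2-D array scaled to [0, 1]; returns a 0/1 bitmap
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1 if img[y, x] >= 0.5 else 0
            err = img[y, x] - out[y, x]       # quantization error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```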
A method for converting paper-written electrocardiograms to one-dimensional (1-D) signals for archival storage on floppy disk is presented here. Appropriate image processing techniques were employed to remove the background noise inherent to ECG recorder charts and to reconstruct the ECG waveform. The entire process consists of (1) digitization of paper-written ECGs with an image processing system via a TV camera; (2) image preprocessing, including histogram filtering and binary image generation; (3) ECG feature extraction and ECG wave tracing; and (4) transmission of the processed ECG data to IBM-PC compatible floppy disks for storage and retrieval. The algorithms employed here may also be used in the recognition of paper-written EEG or EMG and may be useful in robotic vision.
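A much-simplified version of steps (2)-(3) follows: threshold the scanned chart into a binary image, then recover a 1-D waveform by taking, in each column, the mean row index of the dark trace pixels. The crude global threshold and the absence of grid-line removal are simplifications of the paper's histogram filtering.

```python
import numpy as np

# Recover a 1-D signal from a scanned chart (trace darker than paper).

def trace_to_signal(scan):
    threshold = scan.mean() - scan.std()      # crude global threshold
    binary = scan < threshold                 # True where the trace is
    h, w = scan.shape
    signal = np.full(w, np.nan)               # NaN where no trace found
    rows = np.arange(h)
    for x in range(w):
        col = binary[:, x]
        if col.any():
            signal[x] = rows[col].mean()      # centre of the trace
    return signal
```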
In this paper a pel-recursive Wiener-based displacement estimation algorithm is introduced for interframe image coding applications. The algorithm rests on the assumption that both the so-called update and the linearization error are samples of stochastic processes. It provides a linear least-squares estimate and has proven very successful in compensating motion in some typical video-conferencing scenes. A comparison of the Wiener-based algorithm with other well-known pel-recursive techniques shows its favourable behaviour with respect to robustness, stability, and convergence.
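For orientation, the sketch below is the classical pel-recursive update (Netravali-Robbins steepest descent) that Wiener-based estimators refine: at each pel, the displaced frame difference (DFD) is reduced by stepping against the local intensity gradient, with the estimate carried from pel to pel. The Wiener variant replaces the fixed step size by a least-squares gain, which is not reproduced here; the sketch also tracks a single displacement, suitable for a patch in roughly uniform translation.

```python
import numpy as np

# Classical pel-recursive displacement estimation (steepest descent).

def pel_recursive(prev, curr, eps=0.05, iters=3):
    gy, gx = np.gradient(prev.astype(float))  # spatial gradients of prev
    d = np.zeros(2)                           # running (dy, dx) estimate
    h, w = curr.shape
    for _ in range(iters):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                yy, xx = int(round(y - d[0])), int(round(x - d[1]))
                if not (0 <= yy < h and 0 <= xx < w):
                    continue
                dfd = curr[y, x] - prev[yy, xx]     # displaced difference
                d -= eps * dfd * np.array([gy[yy, xx], gx[yy, xx]])
    return d
```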
A simplified adaptive linear predictive coding scheme is proposed in this paper for video applications. Both the least-absolute-value (L1) and the least-squares (L2) criteria are considered. The scheme uses a vector formulation similar to DPCM techniques in video coding to reduce the prediction-coefficient calculation from 2-D to 1-D. Higher-order prediction is further simplified using a lattice structure, which can be viewed as a concatenation of identical first-order stages; the computation of high-order prediction coefficients is thus decomposed into simple successive first-order calculations. The lattice structure also allows the stability constraints to be imposed easily, which is not the case in a general 2-D transversal formulation. The prediction error of the proposed scheme is a decreasing function of the prediction order, which shows the gain of this analytic approach over fixed-predictor DPCM. The L1 scheme is more robust in the presence of noise, while the L2 scheme is computationally easier to solve. Simulation results clearly show the advantages of the proposed scheme.
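The cascade-of-first-order-stages idea can be made concrete with a 1-D lattice whose reflection coefficients are chosen by the Burg (L2) rule; stability follows because the rule guarantees every |k| < 1. This is a generic lattice sketch, not the paper's video-specific formulation, and the L1 variant is omitted.

```python
import numpy as np

# Order-P lattice predictor: each stage is the same first-order
# recursion, with its reflection coefficient fit by the Burg rule.

def lattice_predict(x, order=3):
    f = np.asarray(x, float).copy()     # forward prediction error
    b = f.copy()                        # backward prediction error
    ks = []
    for _ in range(order):
        f1, b1 = f[1:], b[:-1]
        k = -2 * (f1 * b1).sum() / ((f1**2 + b1**2).sum() + 1e-12)
        ks.append(k)                    # |k| < 1 by the Burg rule
        f, b = f1 + k * b1, b1 + k * f1 # first-order lattice recursion
    return ks, f                        # coefficients and final residual
```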
Packet switching of variable bit-rate real-time video sources is a means for the efficient sharing of communication resources while maintaining a uniform picture quality. Performance analyses for the statistical multiplexing of such video sources are required as a first step towards assessing the feasibility of packet-switched video. This paper extends our earlier work in modelling video sources coded with inter-frame coding schemes, and in carrying out buffer queueing analyses for the multiplexing of several such sources. Our previous models and analysis were suitable for relatively uniform-activity scenes. Here we consider models and queueing analysis for more realistic scenes with multiple activity levels, where the coder output bit-rates may change abruptly. We present correlated Markov source models for the corresponding sources and, using a flow-equivalent queueing analysis, obtain common buffer queue distributions and probabilities of packet loss. Our results demonstrate efficient resource sharing of packetized video on a single link, due to the smoothing effect of multiplexing several variable-rate video sources.
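A toy simulation conveys the setting: N independent two-state (low/high activity) Markov sources feed a common buffer drained at a fixed channel rate, and packets arriving to a full buffer are lost. All rates, transition probabilities, and sizes below are illustrative assumptions standing in for the paper's analytical model.

```python
import numpy as np

# Monte Carlo estimate of packet loss for multiplexed on-off sources.

def simulate(n_sources=20, steps=50_000, p_up=0.05, p_down=0.2,
             rate_low=1, rate_high=5, channel=40, buf_size=100, seed=0):
    rng = np.random.default_rng(seed)
    high = np.zeros(n_sources, dtype=bool)   # activity state per source
    q, lost, arrived = 0, 0, 0
    for _ in range(steps):
        flip = rng.random(n_sources)
        high = np.where(high, flip >= p_down, flip < p_up)  # Markov step
        a = int(np.where(high, rate_high, rate_low).sum())  # packets/slot
        arrived += a
        q += a
        if q > buf_size:
            lost += q - buf_size             # overflow packets are lost
            q = buf_size
        q = max(0, q - channel)              # channel drains the buffer
    return lost / arrived                    # packet-loss ratio

print(f"packet loss probability ~ {simulate():.2e}")
```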
Sub-band coding has been investigated for the novel application of video transmission over packet-switched networks. The scheme, which divides the input signal into frequency bands in all three dimensions, is promising in that it lends itself to parallel implementation, it is robust to the errors caused by lost packets, and it yields high compression with sustained good quality. Moreover, it can be integrated with the network to handle issues such as flow control and error handling. The article presents the underlying design goals together with a software implementation and associated results.
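A one-level three-dimensional split can be illustrated with the simplest (Haar) filter pair applied along the two spatial axes and the temporal axis, yielding eight sub-bands. The paper's actual filters and band structure are not specified in the abstract; Haar is an illustrative stand-in, and all dimensions are assumed even.

```python
import numpy as np

# One-level 3-D sub-band decomposition with Haar low/high filters.

def haar_split(x, axis):
    even = x.take(np.arange(0, x.shape[axis], 2), axis)
    odd = x.take(np.arange(1, x.shape[axis], 2), axis)
    return (even + odd) / 2, (even - odd) / 2   # lowpass, highpass

def subbands_3d(video):
    # video: (frames, height, width), all dimensions even
    bands = [video.astype(float)]
    for axis in range(3):                       # split t, then y, then x
        bands = [piece for b in bands for piece in haar_split(b, axis)]
    return bands                                # 8 bands: LLL ... HHH
```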
In a previous paper, the techniques of dual-mode coding were introduced. One of the techniques uses two-dimensional delta modulation to code the image pixels; its predictor is switched between Graham's predictor and the normal predictor, which sends the direction bits, and its bit rate is the lowest of the systems considered. In this paper, the channel coding of the system is considered. In coding the difference bits in particular, several methods were studied, including ordering the bits according to the two-dimensional statistics of the difference bit map. The window for the predictor of the difference bits was designed to maximize the coding efficiency. For a variety of test images, the bit rates for the difference bits were in the range 0.6717 to 0.8774 bits/pixel, about 11 to 28 percent less than those of the one-dimensional run-length coded bits.
This paper describes a proposed system for the generation, display, and animation of 3-D holographic images.
This paper provides an overview of Bellcore's activities in the area of nationally compatible, customer-controllable, DS3-rate (45 Mb/s), multipoint digital networks for the distribution of broadcast-quality NTSC television to, and its collection from, broadcast affiliates over terrestrial digital fiber-optic and microwave interoffice transmission facilities (more than 215 cities are now interconnected), in a manner that rivals today's domestic satellite networks in performance, reliability, flexibility, and security. It also outlines a three-year work program leading toward a multi-carrier, multi-supplier, multi-broadcaster trial of such a network.