Nonlinear signal processing elements are increasingly needed in current signal processing systems. Stack filters form a large class of nonlinear filters which have found a range of applications and give excellent results in the area of noise filtering. The development of fast procedures for the optimization of stack filters is therefore one of the major aims of research in the field. The objective of the optimization method presented in this paper is to find the stack filter producing optimal noise attenuation while satisfying given constraints. The constraints may restrict the search to a set of stack filters with some common statistical description, or they may describe certain structures which must be preserved or deleted.
Soft morphological filters form a large class of nonlinear filters with many desirable properties. However, few design methods exist for these filters, and in the existing methods the selection of the filter composition tends to be ad hoc and application specific. This paper demonstrates how optimization schemes, namely simulated annealing and genetic algorithms, can be employed in the search for soft morphological filter sequences realizing optimal performance in a given signal processing task. The paper also describes the modifications to the optimization schemes required to obtain sufficient convergence.
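As a rough orientation, the sketch below shows the generic simulated annealing loop such a search could build on; the encoding of a filter sequence, the neighborhood move, and the cost function are caller-supplied stand-ins, not the paper's specific choices.

```python
import math
import random

def anneal(initial, neighbor, cost, t0=1.0, alpha=0.95, steps=2000):
    """Generic simulated annealing over a discrete configuration space
    (e.g., a tuple encoding a soft morphological filter sequence)."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        c = cost(cand)
        # Always accept improvements; accept worse moves with probability
        # exp(-(c - current_cost) / t), which shrinks as t cools.
        if c < current_cost or random.random() < math.exp((current_cost - c) / t):
            current, current_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= alpha  # geometric cooling schedule
    return best, best_cost
```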
A one-dimensional discrete Boolean model is a random process on the discrete line where random-length line segments are positioned according to the outcomes of a Bernoulli process. Points on the discrete line are either covered or left uncovered by a realization of the process. An observation of the process consists of runs of covered and uncovered points, called black and white runlengths, respectively. The black and white runlengths form an alternating sequence of independent random variables. We show how the Boolean model is completely determined by the probability distributions of these random variables by giving explicit formulas linking the marking probability of the Bernoulli process and the segment length distribution with the runlength distributions. The black runlength density is expressed recursively in terms of the marking probability and segment length distribution, and the white runlengths are shown to obey a geometric probability law. Filtering for the Boolean model can also be done via runlengths. The optimal minimum mean absolute error filter for union noise is computed as the binary conditional expectation for windowed observations, expressible as a function of the observed black runlengths.
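For concreteness, a minimal simulation of the discrete Boolean model and its runlength extraction might look as follows; the fixed segment length and marking probability are illustrative. The empirical mean of the white runs should approach 1/p, consistent with the geometric law.

```python
import numpy as np

rng = np.random.default_rng(0)

def boolean_model(n, p, length_sampler):
    """Discrete 1D Boolean model: at each point a segment germinates with
    probability p (Bernoulli marking) and covers `length` points rightward."""
    covered = np.zeros(n, dtype=bool)
    for i in range(n):
        if rng.random() < p:
            covered[i:i + length_sampler()] = True
    return covered

def runlengths(covered):
    """Alternating black (covered) and white (uncovered) runlengths."""
    change = np.flatnonzero(np.diff(covered.astype(int))) + 1
    runs = np.diff(np.concatenate(([0], change, [covered.size])))
    colors = covered[np.concatenate(([0], change))]
    return [(int(r), bool(c)) for r, c in zip(runs, colors)]

# Example with a fixed segment length of 3 and p = 0.1.
obs = boolean_model(100_000, p=0.1, length_sampler=lambda: 3)
whites = [r for r, c in runlengths(obs) if not c]
print(np.mean(whites))  # close to 1/p = 10, as the geometric law predicts
```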
Adaptation of an opening possessing a multiparameter structuring element is studied in the context of Markov chains by treating the multiple parameters as a vector r defining the state of the system and considering the operative filter Λr to be an opening by reconstruction. Adaptation of Λr (transition of r) proceeds according to whether Λr correctly or incorrectly passes signal and noise grains sampled from the image. Signal and noise are modeled as unions of randomly parameterized and randomly translated primary grains. Transition probabilities are discussed for two adaptation protocols, and the state-probability increment equations are deduced from the appropriate Chapman-Kolmogorov equations. Adaptation convergence is characterized by the steady-state distributions of the Markov chains, and these are computed numerically.
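The steady-state computation itself is standard; a small sketch, with an illustrative 3-state transition matrix standing in for the protocol-derived probabilities:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of a finite Markov chain: pi P = pi with
    sum(pi) = 1, solved as the linear system (P^T - I) pi = 0 plus the
    normalization row."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy adaptation chain (the transition probabilities are illustrative).
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
print(steady_state(P))
```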
Many methods have been developed to recognize objects in a scene, most involving a preprocessing step to extract local information from the image of the scene. The non-linear sieve decomposition has already been shown to be a successful low-level process in machine vision. Matched sieves, where the local granularity is compared to that of a template, are effective for locating and rejecting non-matching signals. A single example of the object to be located is used to build a granularity template. This is unnecessarily restrictive, since there is no generalization over a training set of target patterns, nor is the template modified to account for granules that, because of noise, do not contribute to the classification process. This paper addresses the next step towards an automatic classifier based upon the sieve decomposition. A genetic algorithm is used to configure a population of templates. These templates are evaluated at every cycle in order to generalize the population over a series of target patterns whilst rejecting noise.
Spectral approaches to the selection probabilities of stack filters are derived. Spectral algorithms are given for the computation of the rank and sample selection probability vectors; they have computational complexity O(2^N), where N is the number of input samples within the window. The main advantage of the spectral algorithms over the nonspectral ones is that the spectral algorithms are universal, in the sense that their complexity is independent of the logical function on which the stack filter is based. They are also straightforward to implement, and fast spectral transforms exist.
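The spectral algorithms themselves are not reproduced here, but the quantity they compute can be checked by brute force. The sketch below uses the standard relation between rank selection probabilities and the counts A(m) of weight-m binary vectors on which the positive Boolean function equals 1; the 2^N enumeration is exactly the cost the spectral approach also incurs.

```python
from itertools import product
from math import comb

def rank_selection_probabilities(f, N):
    """Rank selection probabilities of the stack filter defined by the
    positive Boolean function f on N inputs, for i.i.d. continuous inputs:
    r[j] = A(N-j+1)/C(N,N-j+1) - A(N-j)/C(N,N-j),
    where A(m) counts weight-m binary vectors with f = 1."""
    A = [0] * (N + 1)
    for b in product((0, 1), repeat=N):
        if f(b):
            A[sum(b)] += 1
    frac = lambda m: A[m] / comb(N, m)
    # r[j] = probability that the output equals the j-th order statistic.
    return [frac(N - j + 1) - frac(N - j) for j in range(1, N + 1)]

# Median of three (majority function): always the 2nd order statistic.
print(rank_selection_probabilities(lambda b: sum(b) >= 2, 3))  # [0.0, 1.0, 0.0]
```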
In this paper we consider the formation of morphological templates using adaptive resonance theory. We examine the roles of object variability and noise in the clustering of different-sized objects as a function of the vigilance parameter. We demonstrate that fuzzy adaptive resonance theory is robust in the presence of noise, but that a poor choice of vigilance leads to a proliferation of prototypical categories. We apply the technique to the detection of abnormal cells in pap smears.
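For reference, a minimal fuzzy ART sketch showing where the vigilance parameter rho enters (complement coding, choice function, vigilance test, learning rule); the parameter values are illustrative defaults, not those used in the paper.

```python
import numpy as np

def fuzzy_art(inputs, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal fuzzy ART clustering. Inputs lie in [0,1]^d and are complement
    coded; rho (vigilance) controls category granularity: low rho merges,
    high rho proliferates categories."""
    categories, labels = [], []
    for x in inputs:
        I = np.concatenate([x, 1.0 - x])  # complement coding
        # Category choice function T_j = |I ^ w_j| / (alpha + |w_j|).
        scores = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in categories]
        for j in np.argsort(scores)[::-1]:
            match = np.minimum(I, categories[j]).sum() / I.sum()
            if match >= rho:  # vigilance test passed: resonate and learn
                categories[j] = (beta * np.minimum(I, categories[j])
                                 + (1 - beta) * categories[j])
                labels.append(j)
                break
        else:  # no category passed vigilance: commit a new one
            categories.append(I.copy())
            labels.append(len(categories) - 1)
    return labels, categories
```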
An adaptive smoothing filter is proposed for efficiently reducing non-stationary or mixed noise. The output of the adaptive filter is the weighted sum of the outputs of five typical L-filters, with the weights estimated using fuzzy rules. Since the antecedents of the fuzzy rules can be composed of several local measurements, the proposed filter can adjust its weights to adapt to local data. The performance of the proposed filter is compared with that of several previously reported filters.
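A sketch of the filter structure under stated assumptions: the five L-filters and the single-measurement fuzzy rule that sets the weights below are illustrative placeholders, not the paper's filter bank or rule base.

```python
import numpy as np

def l_filter_bank(window):
    """Outputs of five typical L-filters (all linear in the order statistics)."""
    s = np.sort(window.ravel())
    n = s.size
    return np.array([
        s.mean(),                     # mean
        s[n // 2],                    # median
        0.5 * (s[0] + s[-1]),         # midrange
        s[n // 4:n - n // 4].mean(),  # trimmed mean
        s[1:-1].mean(),               # mean excluding min and max
    ])

def fuzzy_weights(window):
    """Stand-in for the fuzzy rule base: one local measurement (normalized
    spread) drives two memberships, 'uniform area' and 'detail area'."""
    spread = window.std() / (np.ptp(window) + 1e-9)
    mu_uniform = np.clip(1.0 - 2.0 * spread, 0.0, 1.0)
    mu_detail = 1.0 - mu_uniform
    w = np.array([mu_uniform, mu_detail,
                  0.5 * mu_uniform, mu_uniform, mu_detail]) + 1e-9
    return w / w.sum()

def adaptive_output(window):
    # Weighted sum of the L-filter outputs, weights set by the fuzzy rules.
    return float(fuzzy_weights(window) @ l_filter_bank(window))
```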
We have previously discussed a morphologically based nonlinear document degradation model that characterizes the perturbation process associated with printing and scanning [KHP93, KHP94]. In this paper we use the nonparametric estimation algorithm discussed in [KHP93, KHP94] to estimate the sizes of the structuring elements of the degradation model; other parameters of the model can be estimated in a similar fashion. Thus, given a small sample of (real) scanned documents, we can estimate the parameters of the model using the nonparametric estimation algorithm and then use the estimated parameters to create a large sample of simulated documents with degradation characteristics similar to those of the real scanned documents. The large simulated sample can then be used for various purposes, such as training classifiers, estimating the performance of OCR algorithms, and choosing parameter values in noise removal algorithms.
In this paper it is shown how to calculate the joint distribution function of two different stack filters sharing the same arguments. The input signal is modeled as a sequence of independent, but not necessarily identically distributed, random variables. Supposing that the input signal is filtered with a stack filter, we apply the derived formula to find the joint distribution function of any two samples in the output signal. We first go through the derivation of the formula, starting with real-valued stack filters and the definition of the joint distribution function of their outputs. We then move to the binary domain, where the enumeration of the possible cases turns out to be quite straightforward and allows a reasonably compact expression of the final formula. The joint distribution formula for stack filters has many possible applications. For example, it allows the analytic computation of the autocorrelation functions of these filters. It may also be useful in system reliability studies, where the failure of a system is derived as a function of its components' states; often this function can be composed solely of min and max operations, i.e., it possesses the weak superposition property called stack decomposition. The paper includes interesting examples where the joint distribution function is obtained in a particularly neat form, and a special symmetry class of stack filters is characterized.
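The joint formula itself is not reproduced here, but the binary-domain mechanism it rests on is easy to illustrate for a single output: by threshold decomposition, the output exceeds t exactly when the defining positive Boolean function fires on the inputs thresholded at t.

```python
from itertools import product

def stack_output_cdf(f, Fs, t):
    """P(stack filter output <= t) for independent inputs with marginal CDFs
    Fs. Output > t iff f(1{x_i > t}) = 1, so the CDF is the total probability
    of the binary vectors on which f = 0."""
    q = [1.0 - F(t) for F in Fs]  # q_i = P(x_i > t)
    total = 0.0
    for b in product((0, 1), repeat=len(Fs)):
        if not f(b):
            p = 1.0
            for bi, qi in zip(b, q):
                p *= qi if bi else (1.0 - qi)
            total += p
    return total

# Median of three i.i.d. Uniform(0,1) samples: CDF at t is 3t^2 - 2t^3.
F = lambda t: min(max(t, 0.0), 1.0)
print(stack_output_cdf(lambda b: sum(b) >= 2, [F, F, F], 0.5))  # 0.5
```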
Granulometric method-of-moments estimation is based on the asymptotic representation of expectations of granulometric moments in terms of grain-sizing model parameters, mixture proportions, and geometric constants. Method-of-moments estimation for sizing parameters and mixture proportions is accomplished by replacing granulometric-moment expectations with moment estimates from image realizations and then solving the resulting system of equations for the model parameters. The present paper considers the special case in which the sizing parameters for the various shapes generating the random image share a common gamma sizing distribution. In particular, the usually difficult system of nonlinear equations reduces to a more computationally tractable form, which is important when there is more than a single image generator.
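The kernel of the computation is the elementary gamma method-of-moments step sketched below; the full granulometric system additionally involves the geometric constants and mixture proportions described above.

```python
import numpy as np

def gamma_moment_estimates(samples):
    """Method-of-moments estimates for a gamma(k, theta) sizing distribution:
    mean = k*theta and var = k*theta^2, so theta = var/mean, k = mean^2/var."""
    m = np.mean(samples)
    v = np.var(samples)
    return m * m / v, v / m  # (shape k, scale theta)

rng = np.random.default_rng(1)
print(gamma_moment_estimates(rng.gamma(shape=4.0, scale=2.0, size=100_000)))
# approximately (4.0, 2.0)
```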
In this paper, an efficient approach to the restoration of noisy images is suggested. This approach combines the Monte Carlo image restoration technique with the Morrison noise removal method. The mean squared error (MSE) criterion is used to test the performance of the Monte Carlo method with and without prior application of the Morrison noise removal method. Methods for steering the Monte Carlo walk toward the brightest regions of the image are discussed, and a new approach is suggested. It is shown that the Monte Carlo technique is potentially very fast with good resolution. The Morrison noise removal method smooths the data at the first iteration and proceeds to restore the data back to its original noisy form in later iterations; to achieve some noise suppression, one can stop the Morrison iterations before they converge to the original noisy form. The Monte Carlo method is then applied to the noise-suppressed data.
Textures are degraded by Gaussian noise in the process of image acquisition, and the restoration of a texture is very important for later texture analysis and classification. In this paper, the method of ranked residuals is proposed to restore binary textures corrupted by Gaussian noise. This method not only removes the noise but also preserves the details of a texture, including line endings (not necessarily straight), boundaries (concave or convex) at any orientation, edges, and corners. The main idea of the ranked-residual method is to select the windowed pixels closest to the windowed central value as a subset and to apply an estimator (median, mean, LMS, etc.) to that subset to estimate the central value. This adaptive choice of subsets means that, whatever the texture's shape, the filter can preserve texture detail and eliminate noise at the same time. Some synthetic and real textures are used to demonstrate the properties of this filter.
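A direct, unoptimized sketch of the ranked-residual idea, assuming a square window and a caller-chosen subset size m and estimator:

```python
import numpy as np

def ranked_residual_filter(img, win=3, m=5, estimator=np.median):
    """Ranked-residual filtering: in each window, keep the m pixels whose
    values are closest to the window's central value, then estimate the
    center from that subset (median here; mean, LMS, etc. are alternatives)."""
    r = win // 2
    padded = np.pad(img.astype(float), r, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win].ravel()
            center = padded[i + r, j + r]
            subset = w[np.argsort(np.abs(w - center))[:m]]  # ranked residuals
            out[i, j] = estimator(subset)
    return out
```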
During binary segmentation an image is transformed into a binary representation in which the regions of interest, i.e., objects (or their parts) for further analysis, are detected as connected components. The underlying image model for binary segmentation and analysis is composed of two separate parts: the first models the image domain using notions and operations of mathematical morphology, and the second models the values of the intensity function defined on this domain. The proposed morphological operator transforms gray-scale images into binary ones by comparing local image properties within the structuring elements or structuring regions against a tolerance threshold, giving eroded objects as connected components and dilated contours for further analysis. Since the implementation of this operation is rather complicated, fast algorithms using spatial recursion have been developed to calculate local properties of the intensity function (e.g., mean, square deviation, median, absolute deviation). They give a speed-up of order O(N), where N = L × L is the structuring element size, for computing, e.g., the local mean and variance, compared with the naive calculation.
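One common way to realize such O(1)-per-pixel local statistics is with cumulative (integral) sums; the sketch below computes the local mean and variance this way. It achieves the same N = L × L speed-up over the naive computation, though it is not necessarily identical to the recursive schemes described.

```python
import numpy as np

def local_mean_var(img, L):
    """Local mean and variance over an L x L window in O(1) per pixel via
    cumulative sums, versus O(L^2) per pixel for the naive computation."""
    f = img.astype(float)
    pad = lambda a: np.pad(a, ((1, 0), (1, 0)))   # leading zero row/column
    S1 = pad(f.cumsum(0).cumsum(1))               # integral image
    S2 = pad((f * f).cumsum(0).cumsum(1))         # integral of squares
    box = lambda S: S[L:, L:] - S[:-L, L:] - S[L:, :-L] + S[:-L, :-L]
    n = L * L
    mean = box(S1) / n
    var = box(S2) / n - mean ** 2
    return mean, var  # valid region: (H - L + 1) x (W - L + 1)
```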
Block-based image and video coders operating in the DCT domain are affected by ringing, blurring, and `blocking' artifacts due to the quantization of DCT coefficients. At high compression ratios, removal of blocking effects is required to improve service quality. For this purpose, a simple post-processing is suggested in the JPEG draft, while computationally expensive simulated annealing techniques are employed to account for the statistical behavior of actual images. In this contribution, a technique based on the criterion of separating the blocking artifacts from the image components in a `visual feature space' is proposed. The visual feature space is obtained by filtering the decoded image with a `first order harmonic angular filter' (HAF), whose complex output represents the edges' strengths and orientations through its magnitude and phase. In this space both the image and the blocking artifacts are described by simple statistical models, and blocking removal can then be performed by a proper nonlinear, zero-memory, sub-optimal Bayesian estimator. Finally, a restored image is reconstructed by the inverse of the HAF. After a brief introduction to the properties of the HAF feature space, the contribution provides a stochastic model of both the signal and the blocking patterns in the visual feature space and the derivation of a corresponding sub-optimal zero-memory Bayesian estimator. Computational aspects are addressed, and performance is illustrated through classical examples for different levels of blocking artifacts on JPEG-decoded images.
We choose the optimal method of nonlinear restoration using probabilistic approaches. The criterion for the optimal choice is the mean-squared value of the restoration error, and the prior information about the random object and noise consists of their autocorrelation functions. An unknown, strictly monotonic nonlinear operator is used to construct a broad family of nonlinear methods for random object restoration. The reconstruction process consists of carrying out a recurrent scheme with proven convergence. The criterion for solution quality is based on the probability that the object restoration error exceeds a previously set value. The solution of this problem is obtained for an isoplanatic imaging system under the assumption that the point-sampled spectra of the initial object and noise are independent. This analytical solution is then used as the first approximation for the solution of the general problem. We derive a recurrent scheme for the numerical solution of this task; the recurrent process converges to the solution of a linear imaging equation.
A partial closing is a class of edge-preserving operators in which a dilation-like operation is followed by an erosion with a convex structuring element. These operators are increasing and edge preserving, but in general satisfy none of the other formal properties of the standard morphological closing, which is a special case of this operator. The purpose of the partial closing is the restoration of certain classes of images by filling in gaps caused by noise. The examples and analysis given here involve one-dimensional images; the method can be applied to two-dimensional images comprised of short line segments, which occur, for example, in character strokes or in image edges after an edge detection operation. One type of partial closing is an order statistic filter followed by an erosion. Another type, the dilation partial closing, is a dilation with a sparse structuring element followed by an erosion with a convex structuring element. Dilation partial closings exist that are excellent approximations to the median filters with sliding windows of diameters 3, 5, and 7, and using them in place of the median filters results in a considerable savings in computer time. The statistics of the partial closings are independent of the threshold; thus the filters can be generalized to gray levels using stack filters, and the dilation partial closings are then expressed in terms of minima and maxima.
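A sketch of a 1D dilation partial closing; the sparse and convex structuring elements below are illustrative offset sets, not the specific elements the paper derives as median approximations.

```python
import numpy as np

def dilation_partial_closing(x, sparse=(-1, 1), convex=(-1, 0, 1)):
    """1D dilation partial closing: dilation with a sparse structuring
    element (offsets `sparse`, center excluded) followed by erosion with a
    convex structuring element (offsets `convex`). Border handled by clamping."""
    n = len(x)
    idx = lambda i, off: min(max(i + off, 0), n - 1)
    dil = [max(x[idx(i, o)] for o in sparse) for i in range(n)]
    return np.array([min(dil[idx(i, o)] for o in convex) for i in range(n)])

# Single-sample and double-sample gaps are filled; a clean step edge
# (e.g., [0,0,0,5,5,5]) passes through unchanged.
x = np.array([5, 5, 0, 5, 5, 5, 0, 0, 5], dtype=float)
print(dilation_partial_closing(x))  # all gaps filled with 5s
```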
Computational mathematical morphology is extended to provide computational representations of increasing and nonincreasing windowed translation-invariant operators of the form ψ: L^N → M, where L and M are complete lattices. Representations are grounded on the Riemann zeta function and provide lattice-valued extensions of the classical disjunctive-normal-form, reduced, and positive logical representations. Both direct and dual representations are given. Representations are morphological because they involve elemental forms of erosion, dilation, or the hit-or-miss transform.
The theory of a quasilinear space, generalizing the linear space, is considered, and on its basis the concept of quasilinear systems is introduced, which allows us to construct, unify, and classify wide classes of nonlinear systems, such as median filters and morphological and homomorphic systems.
Interactive analysis of massive data sets acquired by different remote sensors (RS) in the microwave (MW), infrared (IR), and visible (V) bands requires special methods and algorithms. Different kinds of noise and inaccuracy can degrade the input data and give rise to considerable errors when an ill-posed inverse problem is solved. We are further developing a processing system to analyze multitemporal and multispectral images of different objects. The main advantages of this system are the possibility of restoring 2D multispectral (multitemporal) images and of finding the highest-correlation regions on an image produced by different sensors (MW: dm, cm, mm, SAR and non-SAR; IR and V; and others), as well as the use of the new robust order-statistic filtering procedures proposed. Different RS problems have been investigated: rural or vegetation-covered areas sensed by MW airborne sensors, and forest fire areas, industrial plants at night, and electrical power elements sensed by IR and V airborne sensors. Numerical simulation and experimental results have shown the efficiency of the proposed restoration and order-statistic filtering techniques.
Increasing attention has been given to the edge jitter that may occur when a robust nonlinear filter is applied for edge restoration. In this paper, we consider the possible edge jitter introduced by the median filter, i.e., the bias behavior of the median filter at an edge, both for edge location and for restoration of the constant signal on each side of the edge. A nonparametric method is proposed to evaluate the edge jitter (location only) of nonlinear filters; simulation shows that the method gives an efficient and effective way to study the performance of a nonlinear filter at edges. The second part of this paper concentrates on quantifying the bias behavior of the median filter in the restoration of the constant signals on either side of edges in one- and two-dimensional signals. It is well known that median filters preserve convex and concave edges, but the behavior of the median filter at perfect edges corrupted by noise has not been studied in detail. It is proven that the median filter actually `rearranges' noisy perfect edges into concave or convex edges, i.e., it does not preserve perfect edges perturbed by noise and, indeed, preserves the noise as well as the edges. It is argued that the median filter actually implements a max or min filter at edges in one-dimensional signals, while in two-dimensional signals the output of the median is a value whose order is a function of the height and width of the window. The proofs in the second part of the paper question the credibility of the median filter as an edge-preserving filter in the presence of noise, in that the median yields biased estimates of each constant region.
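The effect is easy to observe numerically; a small demonstration with a noisy step edge (the window size and noise level are arbitrary):

```python
import numpy as np
from scipy.signal import medfilt

# A perfect step edge perturbed by noise: the median output near the edge is
# not the clean step, since order statistics from both sides mix in the window.
rng = np.random.default_rng(0)
edge = np.where(np.arange(40) < 20, 0.0, 10.0)
noisy = edge + rng.normal(0.0, 1.0, size=40)
print(medfilt(noisy, kernel_size=5)[17:23])  # biased values around the edge
```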
In this paper it is shown how fuzzy control techniques can be applied directly to the design of adaptive weighted average filters. The proposed filters can adjust their weights to local image data in order to achieve maximum noise reduction in uniform areas while also preserving image details. Furthermore, a method for the automatic tuning of the fuzzy filter has been developed; according to the proposed tuning method, the fuzzy filter is tuned optimally in the mean square error (MSE) sense. This work shows a new approach to image processing based on fuzzy rules.
Fuzzy theory is a controversial, subjective generalization of conventional set theory enjoying great commercial success in control and signal processing. In this paper, we show that stack filters, already related to morphology, require slight modification to implement basic fuzzy operators. This paper therefore provides a fast errorless method for implementation of the basic fuzzy operations.
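The basic fuzzy operators in question are the standard pointwise max/min/complement on membership values, i.e., exactly the primitives that a stack filter's min-max network realizes; the paper's modification itself is not reproduced here.

```python
import numpy as np

# Standard fuzzy set operations on membership arrays in [0, 1]: the max/min
# primitives are what stack filters compute exactly, with no approximation.
def fuzzy_union(mu_a, mu_b):
    return np.maximum(mu_a, mu_b)

def fuzzy_intersection(mu_a, mu_b):
    return np.minimum(mu_a, mu_b)

def fuzzy_complement(mu_a):
    return 1.0 - mu_a
```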
This paper presents a fast implementation method for grayscale function processing (FP) systems. The proposed method is based on a matrix representation of the FP system using the basis matrix (BM) and the block basis matrix (BBM). The computational efficiency derives from recursive algorithms based on characteristics of the BM and BBM. It is shown that, with the proposed scheme, both opening and closing can be determined in real time with 2N - 2 additions and 2N - 2 comparisons, and open-closing and close-opening with 4N - 4 additions and 4N - 4 comparisons, where N is the size of the grayscale structuring element (GSE).
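For orientation, flat grayscale opening and closing (the special case of FP systems with a flat GSE) reduce to sliding min/max compositions; a minimal sketch using standard SciPy routines:

```python
from scipy.ndimage import grey_erosion, grey_dilation

# Flat grayscale opening/closing over a window of size N: erosion then
# dilation (opening) or dilation then erosion (closing).
def opening(f, N):
    return grey_dilation(grey_erosion(f, size=N), size=N)

def closing(f, N):
    return grey_erosion(grey_dilation(f, size=N), size=N)
```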
Matrix-array ultrasound produces real-time 3D images of the heart by employing a square array of transducers to steer the ultrasound beam in three dimensions electronically with no moving parts. Other 3D modalities such as MR, MUGA, and CT require the use of gated studies, which combine many cardiac cycles to produce a single average cycle. Three-dimensional ultrasound eliminates this restriction, in theory permitting the continuous measurement of cardiac ventricular volume, which we call the volumetricardiogram. Towards implementing the volumetricardiogram, we have developed the flow integration transform (FIT), which operates on a 2D slice within the volumetric ultrasound data. The 3D ultrasound machine's scan converter produces a set of such slices in real time, at any desired location and orientation, to which the FIT may then be applied. Although lacking rotational or scale invariance, the FIT is designed to operate in dedicated hardware, where an entire transform could be completed within a few microseconds with present integrated circuit technology. This speed would permit the application of a large battery of test shapes, or the evolution of the test shape to converge on that of the actual target.
IDP++, image and data processing in C++, is a set of signal processing libraries written in C++. It is a multi-dimensional (up to four dimensions), multi-data-type (implemented through templates) signal processing extension to C++. IDP++ takes advantage of object-oriented compiler technology to provide `information hiding'; users need only know C, not C++. Signals or data sets are treated like any other variable, with a defined set of operators and functions. We present here some examples from the nonlinear filter library within IDP++, specifically the results of min, max, median, alpha-trimmed mean, and edge-trimmed mean filters as applied to real aperture radar (RAR) and synthetic aperture radar (SAR) data sets.
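IDP++ itself is C++; purely for illustration, here is what one of the listed filters, the alpha-trimmed mean, computes (the window size and alpha are caller-chosen):

```python
import numpy as np

def alpha_trimmed_mean_filter(img, win=3, alpha=0.25):
    """Alpha-trimmed mean: sort the window samples, discard the alpha
    fraction at each end, and average the rest. alpha = 0 gives the mean,
    alpha near 0.5 the median; a useful mean/median compromise on speckle."""
    r = win // 2
    k = int(alpha * win * win)
    padded = np.pad(img.astype(float), r, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            s = np.sort(padded[i:i + win, j:j + win].ravel())
            out[i, j] = s[k:s.size - k].mean()
    return out
```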
An optoelectronic system has been designed to pre-screen pap-smear slides and detect suspicious cells using the hit/miss transform. Computer simulation of the algorithm, tested on 184 pap-smear images, detected 95% of the suspicious regions as suspect while tagging just 5% of the normal regions as suspect. An optoelectronic implementation of the hit/miss transform using a 4f Vander-Lugt correlator architecture is proposed and demonstrated with experimental results.
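A sketch of the underlying binary hit/miss transform (the hit and miss masks below detect isolated pixels and are purely illustrative, not the cell-detection templates):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def hit_or_miss(img, hit, miss):
    """Binary hit/miss transform: the foreground must match `hit` and the
    background must match `miss`: (A erode B1) AND (A^c erode B2)."""
    return binary_erosion(img, hit) & binary_erosion(~img, miss)

# Example: detect isolated single-pixel blobs.
img = np.zeros((7, 7), dtype=bool)
img[3, 3] = True
hit = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
miss = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=bool)
print(hit_or_miss(img, hit, miss)[3, 3])  # True
```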
One form of digital document enhancement maps a binary image to N bits/pixel while maintaining constant spatial sampling resolution. Pixels in character stroke interiors and in the background typically retain black or white values, while certain edge pixels map to intermediate `gray' levels, effectively lessening the jagged appearance of curved and angled strokes. This paper presents a binary-to-gray operator that employs binary erosion-based filters in an iterative stacking architecture. We describe the filter architecture and optimal design method, and demonstrate binary-to-gray enhancement using only a small number of morphological erosions.
Phase stepping digital interference microscopy (PSM) is a measuring technique that can be used for the three-dimensional analysis of small surface defects in many microelectronic applications on bulk materials and epitaxial layers. If the resulting intensities for the interference after regular phase stepping are I1, I2, ..., Ip, where p = 3, 4, 5, ... according to the method, then the relief h(x,y) is given as a well-known function of the Ii. We have used fuzzy logic to improve the algorithms, in order to make the final result more reliable and to allow subsample resolution: first we define N fuzzy classes in the intensity space in a nonlinear way; then the regularity required in phase stepping leads us to define theoretical p-tuples of fuzzy classes. Finally, we correct the measured p-tuples of values (I1, I2, ..., Ip) according to their membership in the theoretical p-tuples of fuzzy classes. These algorithms lead to important image improvements, offering better accuracy and a partial solution to the problem of image noise.
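As an example of the "well-known function" for one choice of p, the classical four-bucket algorithm (assuming pi/2 phase steps and reflection geometry) is sketched below; the fuzzy correction of the measured tuples described above would precede this step.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4, wavelength):
    """Four-bucket phase-shifting formula for I_k = A + B*cos(phi + (k-1)*pi/2):
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so
    phi = atan2(I4 - I2, I1 - I3); relief h = phi * wavelength / (4*pi)."""
    phi = np.arctan2(I4 - I2, I1 - I3)  # wrapped phase in (-pi, pi]
    return phi * wavelength / (4.0 * np.pi)
```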
Various edge detectors have been proposed, as well as several different types of adaptive edge detectors, but the performance of many of these edge detectors depends on the features and the noise present in the grayscale image. Attempts have been made to extend edge detection to color images by applying grayscale edge detection methods to each of the individual red, blue, and green color components, as well as to the hue, saturation, and intensity color components of the color image. The modulo-2π nature of the hue component makes its edge detection difficult: hues of 0 and 2π yield the same color tint, so normal edge detection of a color image containing adjacent pixels with hues of 0 and 2π could indicate an edge where none is really present. This paper presents a method of mapping the modulo-2π hue space to a linear space, enabling edge detection of the hue component using the Sobel edge detector. The results of this algorithm are compared against edge detection methods using the red, blue, and green color components. By combining the hue edge image with the intensity and saturation edge images, more edge information is observed.
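The wraparound problem, and the circular difference that any linearizing map must respect, can be stated in two lines:

```python
import numpy as np

def hue_diff(h1, h2):
    """Circular hue difference: hues of 0 and 2*pi are identical, so distance
    is measured around the circle and never exceeds pi."""
    d = np.abs(h1 - h2) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

print(hue_diff(0.05, 2 * np.pi - 0.05))  # 0.1, not ~6.18: no false edge
```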
The design and selection of digital algorithms for the secondary processing of multichannel remote sensing data, radar image filtering, and first-stage segmentation/recognition are discussed. An analysis of the specific features of radar images shows that order-statistic algorithms are expedient at different stages of data processing. Some novel techniques and approaches to image filtering and interpretation are proposed.
The accuracy of coordinate determination for a small light target by a dissector tracking system is analyzed, taking into account the non-stationary state of the noise. The results are obtained by computer-based statistical simulation of the extended Kalman filter, and the suggested signal models are compared. The robustness and stability of the algorithm against deviations of the model parameters from their nominal values have been examined, as has the accuracy of coordinate determination for different target contrasts, signal-to-noise ratios, and dissector tracking system parameters.
In the context of fitting straight lines to noisy images, the subspace-based line detection algorithm (SLIDE) offers two benefits over the conventional Hough transform method: low computational complexity and high resolution of the estimates. These improvements are due to the fact that the line fitting problem is converted into an equivalent problem of fitting exponentials to a time series, which can then be solved efficiently using subspace methods like ESPRIT. The SLIDE algorithm establishes this equivalence by transforming the two-dimensional binary image into a single observation vector using a propagation scheme. The difficulty with having a single observation vector in this approach is that the total number of snapshots may not be adequately large, which limits the estimation accuracy and degrades the performance of the detection algorithm (which estimates the number of lines present in the image). In this paper we propose to utilize multiple observation vectors to circumvent the problem of an inadequate number of snapshots. The challenge with the multiple-observation-vector approach is how to combine these vectors to form a covariance matrix that possesses the desired structure. We overcome this difficulty by considering only a specific set of observation vectors along with an interleaving technique. Simulation results show that this technique significantly improves the efficiency of the detection algorithm as well as the accuracy of the line angle estimates.
Anisotropic diffusion has been extensively used as an efficient nonlinear filtering technique for simultaneously performing contrast enhancement and noise reduction, and for deriving consistent scale-space image descriptions. In this paper, we present a general study of anisotropic diffusion schemes based on differential group-invariant representations of local image structure. We show that the local geometry (i.e., shape and scale) of the photometric surface is intrinsically specified by two dual families of curves, respectively consisting of isophotes and stream lines, which remain invariant under isometries in the image domain. Within this framework, anisotropic diffusive processes induce a deformation flow on the network of isophotes and stream lines. Deriving the general expression of this flow leads to identifying canonical forms for admissible conduction functions, that yield an optimal and stable preservation of significant image structures. Moreover, relating scale to directional variations of isophote density results in controlling the diffusion dynamics by means of a heterogeneous damping density which allows us to adaptively reduce diffusion speed in the vicinity of high gradient lines while increasing it within stationary intensity domains. Finally, these results are extended to arbitrary image dimensions.
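For orientation, the classical Perona-Malik scheme sketched below is the baseline that such isophote/streamline-based schemes refine; the conduction function g and its parameters here are standard textbook choices, not the canonical forms derived in the paper.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=10.0, lam=0.2):
    """Classical Perona-Malik anisotropic diffusion (4-neighbor explicit
    scheme): conduction g(d) = exp(-(d/kappa)^2) slows diffusion across
    high-gradient lines while letting it run inside stationary regions.
    Stability of the explicit update requires lam <= 0.25."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u  # differences to the four neighbors
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```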
Normalized cross-correlation is a standard way of looking for examples of a target in an image. A way to avoid scanning the entire image is explored. It exploits the observation that in the locality of a target object, the watershed of a smoothed image is similar to the watershed of a smoothed image of the target. Therefore, it is only necessary to align the target watershed with the image watershed and track the one along the other. This can be faster than scanning the entire image.
An efficient morphological method, based on the watershed concept, is presented for the segmentation of overlapped particles of similar sizes. The method first calculates the distance functions of the particles in an image and then compares the boundary lengths of the segmented regions with those of their minima to discriminate between desirable and undesirable segments. A post-processing step follows, merging the undesirable segments into desirable neighboring segments and computing new distance functions for the former segments. Experimental results are given for significantly overlapped circular particles, along with comparisons.
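A minimal sketch of the distance-function-plus-watershed core, using common SciPy/scikit-image routines; the marker selection by local maxima and the min_distance value stand in for the paper's boundary-length criterion.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_particles(binary, min_distance=5):
    """Split overlapped particles: compute the distance function of the
    binary image, take its local maxima as markers (roughly one per
    particle), and flood the negated distance map within the particle mask."""
    dist = ndimage.distance_transform_edt(binary)
    coords = peak_local_max(dist, min_distance=min_distance,
                            labels=ndimage.label(binary)[0])
    markers = np.zeros(binary.shape, dtype=int)
    markers[tuple(coords.T)] = 1 + np.arange(len(coords))
    return watershed(-dist, markers, mask=binary)
```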
This paper shows that images can be decomposed into a series of homotopic subsets by means of morphological erosions using a series of disk-like structuring elements, and that the skeleton can be obtained from these subsets by detecting the vertices of each homotopic subset. The mapping of objects into a series of subsets is an affine transform, and the skeleton points can be obtained from the mapped subsets individually. When a digital disk is rotation-invariant, the mapping is rotation-invariant; consequently, the skeleton is rotation-invariant. It is shown that the convex vertices of an object, at which the curvature changes significantly, are the skeleton points. Two algorithms for detecting vertices are presented in this paper, along with a fast mapping algorithm and a reconstruction algorithm. Compared with other morphological methods, the proposed skeletonization method generates more accurate skeletons, particularly in cases involving rotated shapes. Based on the skeleton, we introduce the new concept of major points (MPs) for skeleton description; this is a skeleton sampling method. MPs are obtained by choosing the skeleton points with maximally weighted self-information, and they emphasize the contribution of each skeleton point to the original object. The paper also presents a detailed description of the selection of MPs, whereby an object can be partially reconstructed via MPs based on a proposed reconstruction criterion.
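For contrast with the proposed method, the classical Lantuejoul morphological skeleton, whose rotation sensitivity the homotopic decomposition is designed to improve upon, can be sketched as follows:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_opening

def morphological_skeleton(A, B=np.ones((3, 3), dtype=bool)):
    """Classical (Lantuejoul) morphological skeleton:
    S = union over k of [ (A erode kB) minus opening(A erode kB, B) ].
    Not homotopy-preserving in general, unlike the paper's approach."""
    skel = np.zeros_like(A, dtype=bool)
    eroded = A.astype(bool)
    while eroded.any():
        skel |= eroded & ~binary_opening(eroded, B)  # k-th skeleton subset
        eroded = binary_erosion(eroded, B)
    return skel
```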
A new parallel algorithm for detecting corners of planar curves or shapes is proposed in this paper. The algorithm is based on the morphological residue and an analysis of corner characteristics. The relationship between curvature radii and structuring elements is investigated. The locations of the corners are detected based on the morphological residue set and an analysis of the curvature extrema support region. Noise influence is suppressed through the smoothing property of the algorithm. The algorithm works simultaneously on curves and shapes of multiple objects. The approach differs from chain-code-based corner detection algorithms, which need floating point computation for curvature. For multiple objects, traditional algorithms deal with each curve individually; therefore, for multiply connected shapes or curves with intersections, coding and curvature computation are difficult and costly. The proposed algorithm deals with the whole image as a single object, so the computational complexity is significantly reduced. Our experiments demonstrate that the algorithm is fast and effective to execute on an SIMD parallel computer. This paper also presents a new parallel filling algorithm: a boundary-constrained morphological method for filling closed curves into shapes for corner detection.
By interpreting the segmentation process as an assignment of labels to objects subject to spatial constraints, image segmentation can be described as a constraint satisfaction problem (CSP). Starting from this model, a new technique for the segmentation of medical images is presented: the constraint satisfaction synergetic potential network (CSSPN). In the CSSPN, the possible labels of an object are represented by singular points of synergetic potential systems. The fuzzy-algorithmic initialization model of the CSSPN allows a label-number-independent dimensioning of the network with n^2 nodes. The parallel relaxation dynamics of the CSSPN, controlled by interactions of the potential systems, bring about selection or evolution of the input image by completely deterministic or stochastically perturbed equations of motion in the potential systems. Constraint functions are significant both to the relaxation dynamics and to the segmentation result within an object adjacency; through them, information from the image model, such as the image semantics or the optimization strategy for network parameters, is mapped onto the CSP. Experimental comparative analyses of the segmentation results demonstrate the efficiency of the technique and confirm that the CSSPN is a very promising method for image segmentation.
This report is devoted to developing algorithms for computing symmetry measures of binary images. We consider central, rotation, and reflection symmetry measures. The algorithms are based either on contour representations of objects or on corresponding symmetrization transformations. Boundary representations based on a centroidal profile as well as on an equi-angular signature are applied; using these representations we propose several simple and fast algorithms for computing the measures of convex images, which are also valid for evaluating the symmetry measures of nonconvex images. The second group of algorithms is based on set symmetrization: we apply the corresponding symmetrization transformations via Minkowski addition (difference body, Blaschke symmetrization, etc.), and the ratio of the areas of the original and symmetrized images then defines an evaluation of the symmetry measure. The Kovner-Besicovitch symmetry measure is also investigated.
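As one concrete instance, a Kovner-Besicovitch-style evaluation of central symmetry for a binary image reduces to an area ratio between the image and its point reflection; the choice of center is left to the caller in this sketch.

```python
import numpy as np

def central_symmetry_measure(A, center):
    """Area of the intersection of binary image A with its point reflection
    about `center`, divided by the area of A. Equals 1 for a centrally
    symmetric set reflected about its own center."""
    ys, xs = np.nonzero(A)
    cy, cx = center
    ry = np.round(2 * cy - ys).astype(int)
    rx = np.round(2 * cx - xs).astype(int)
    ok = (0 <= ry) & (ry < A.shape[0]) & (0 <= rx) & (rx < A.shape[1])
    hits = A[ry[ok], rx[ok]].sum()  # reflected pixels that land back in A
    return hits / ys.size
```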
We propose to solve the ill-posed inverse problem of image restoration by estimating a quasi-solution, which is a member of some optimized compact set. For this compact set we choose the subspace spanned by the first several eigenvectors of the symmetrized matrix CK = (C·K + K·C)/2, where C is an estimate of the covariance of the primary images and K is the (pseudo)inverse of the noise covariance matrix. This representation concentrates the energy of noiseless images in the main spectral components and provides effective noise suppression under spectrum truncation. The non-orthogonal basis set for representing distorted images is obtained from the CK eigenvectors by applying the distortion operator. These two sets are used to compact the matrix representation of the distortion operator and to compute its pseudo-inverse by a singular value decomposition. We use a novel training-set decorrelation transformation to calculate the CK eigenvectors. In order to improve the restored image we use an iterative nonlinear procedure based on the influence-function technique; the intermediate image is similar to the primary distorted image, but with gross errors removed and missing spots restored. A priori information about the non-negativity and limited variation of the restored and intermediate images can easily be taken into account. The proposed image processing scheme enables the restoration of large images with gross distortions under extremely high noise (up to 100%).
A vector predictor is an integral part of the predictive vector quantization (PVQ) scheme. The performance of a predictor deteriorates as the vector dimension (block size) is increased, which makes it necessary to investigate new design techniques for vector predictors that perform better than a conventional vector predictor. This paper investigates several neural network configurations that can be employed to design a vector predictor. The first is the multi-layer perceptron; its drawback is the long convergence time, which is undesirable when on-line training of the neural network is required. Another neural network, the functional link neural network, has been shown to converge quickly, and its use as a vector predictor is also investigated. The third neural network investigated is a recurrent-type neural net; it is similar to the multi-layer perceptron except that a part of the predicted output is fed back to the hidden layer(s) in an attempt to further improve the current prediction. Finally, the use of a radial-basis function (RBF) network is also investigated for designing the vector predictor. The performances of the above neural network vector predictors are evaluated and compared with that of a conventional linear vector predictor.
In this paper, we study image block coding using Boltzmann machines. We first briefly review the basic idea of storing images in the invariant distributions of Markov chains, where in particular the Markov chains are Boltzmann machines. We then present the perfect Boltzmann machine (PBM), which can simulate any distribution; a PBM with n neurons has neural connections up to the nth order. In practice, only second-order Boltzmann machines or third-order Boltzmann machines are used, and we discuss how to convert a PBM into a Boltzmann machine. Finally, we present the idea of image block coding using Boltzmann machines: images are represented in terms of blocks in a way similar to JPEG, with the DCT in JPEG replaced by a Boltzmann machine transformation. Image compression ratios for various selections of block size and parameter space are calculated.
The major problems with finite state vector quantization (FSVQ) are the lack of accurate prediction of the current state, the state codebook design, and the amount of memory required to store all the state codebooks. This paper presents a new FSVQ scheme called finite-state residual vector quantization (FSRVQ), in which a neural network performs the state prediction. Furthermore, a novel tree-structured competitive neural network is used to jointly design the next-state and state codebooks for the proposed FSRVQ. The proposed scheme differs from conventional FSVQ in that the state codebooks encode residual vectors instead of the original vectors. The neural network predictor predicts the current block from the four previously encoded blocks; the index of the codevector closest to the predicted vector (in the Euclidean sense) represents the current state. The residual vector, obtained by subtracting the predicted vector from the original vector, is then encoded using the current state codebook. The predictor is trained with the back-propagation learning algorithm, while the next-state codebook and the corresponding state codebooks are jointly designed with the tree-structured competitive neural network. This joint optimization eliminates a large number of unnecessary states, which in turn reduces the memory requirement by several orders of magnitude compared to ordinary FSVQ.
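The encoding step described above can be sketched as follows; `predictor`, `next_state_cb` (the next-state codebook), and `state_cbs` (one residual codebook per state) are assumed, already-trained inputs.

    import numpy as np

    def fsrvq_encode(block, prev_blocks, predictor, next_state_cb, state_cbs):
        pred = predictor(prev_blocks)                 # predict the current block
        state = int(np.argmin(((next_state_cb - pred) ** 2).sum(axis=1)))
        residual = block - pred                       # encode the residual, not the block
        cb = state_cbs[state]
        index = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))
        return state, index                           # decoder forms pred + cb[index]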
Image sequences are difficult to analyze because of their high dimensionality: a large amount of visual data is generated even in a typical situation, so the data must be reduced for information processing. Often most of the information in a frame is a relatively slowly changing background, and only small parts of a frame are new or novel. Our purpose is to process a time sequence of images and to model objects and/or background from the sequence in a compact form suitable for recognition and processing. The problem is similar to compression and can be solved optimally using a truncated Karhunen-Loeve (KL) expansion of the process. This paper describes a new efficient method for novelty filtering of time-sequential images that uses a neural approach to compute the truncated KL expansion. The algorithm employs multilayer neural networks and exploits the error back-propagation learning algorithm. A neural network implementation appears to be a very promising and effective tool for novelty filtering of image sequences. The validity and performance of the proposed neural network architecture and associated learning algorithm have been tested by extensive computer simulation.
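The underlying operation, projection onto a truncated KL basis and retention of the residual, can be sketched directly with an SVD (the paper instead trains a back-propagation network to approximate the same subspace):

    import numpy as np

    def novelty_filter(frames, r):
        X = frames.reshape(len(frames), -1).astype(float)  # frames as row vectors
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        P = Vt[:r]                                         # top-r KL basis vectors
        recon = (Xc @ P.T) @ P + mean                      # projection onto the subspace
        return (X - recon).reshape(frames.shape)           # residual = the novelty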
A particularly challenging neural network application requiring high-speed, intensive image processing is target acquisition and discrimination, which demands spatio-temporal recognition of point and resolved targets at high speeds. A reconfigurable neural architecture may discriminate targets from clutter or classify targets once resolved. By mating a 64 X 64 pixel infrared (IR) image sensor array to a 3-D stack (cube) of 64 neural-net ICs along their respective edges, every pixel would feed directly into a neural network, processing the information with full parallelism. However, the cube has to operate at 90 K with signal processing times below 250 nanoseconds and approximately 2 watts of power dissipation. Analog circuitry, in which the spatially parallel input to the neural networks is also analog, makes this possible; digital neural processing would require analog-to-digital converters on each IC, which is impractical under the power constraint. A versatile reconfigurable circuit is presented that offers a variety of neural architectures: multilayer perceptron, cascade backpropagation, and template matching with winner-take-all (WTA) circuitry. Special analog neuron and synapse designs implemented in VLSI are presented which exhibit high-speed response at both room and low temperatures, with synapse-neuron signal propagation times of approximately 100 ns.
Using a Bayesian approach, early-vision tasks can be formulated as a statistical regularization problem. With the help of a Markov random field (MRF) description, a posterior energy function is defined on the image; under the MAP (maximum a posteriori) criterion, restoration becomes equivalent to minimizing this non-convex energy. Stochastic methods offer a general framework for such difficult problems. We concentrate on discontinuity-preserving image restoration and on the so-called Geman energy function that characterizes it. This energy is defined on a continuous intensity field in interaction with a binary line process, allowing for sharp edges in the restoration. We propose an algorithm and hardware solutions for performing video-rate stochastic minimization on a dedicated optoelectronic VLSI retina. The stochastic algorithm operates in two parts. On one hand, thermal equilibrium in the continuous field is reached by a deterministic minimization perturbed by a quasi-static noise process, quasi-static meaning that the noise process is held constant during the minimization. The binary field, on the other hand, is updated using a Gibbs sampler. We then propose a VLSI implementation of this algorithm, featuring an asynchronous analogue stochastic resistive network that implements the thermal equilibrium of the continuous field, and a parallel array of synchronous stochastic processing elements that provides Gibbs sampling of the binary line field. An efficient optoelectronic VLSI random number generator supplies the retina with the massive quantity of random numbers required for video-rate operation.
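The two update rules can be sketched abstractly as below; the gradient of the continuous-field energy and the posterior energy change delta_E of a line element are assumed supplied, since the exact Geman energy terms are not reproduced here.

    import numpy as np

    def perturbed_descent_step(x, grad, noise, lr=0.1):
        # deterministic gradient step on the continuous field, perturbed by a
        # quasi-static noise field (the same `noise` array is reused across steps)
        return x - lr * (grad(x) + noise)

    def gibbs_line_update(delta_E, T, rng=np.random.default_rng()):
        # Gibbs sampling of one binary line element; delta_E is the posterior
        # energy increase for switching the line element on
        p_on = 1.0 / (1.0 + np.exp(delta_E / T))
        return rng.random() < p_on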
Artificial neural network paradigms are derived from the biological nervous system and are characterized by massive parallelism. These networks have shown the ability to perform input-output mapping operations even when the transformation rules are unknown, partially known, or ill-defined. For high-speed processing, we have fabricated neural network architectures as building-block chips with either a 32 X 32 matrix of synapses or a 32 X 31 array of synapses plus 32 neurons along a diagonal, completing a 32 X 32 matrix. Reconfigurability allows a variety of architectures, from fully recurrent to fully feedforward, including constructive architectures such as cascade correlation, and a variety of gradient-descent learning algorithms have been implemented. Because the chips are cascadable, larger networks are easily assembled. An innovative scheme of combining two identical synapses on two respective chips in parallel nominally doubles the bit resolution from 7 bits (6-bit + sign) to 13 bits (12-bit + sign). We describe a feedforward net assembled from 8 chips on a board, with nominally 13 bits of resolution, used for hardware-in-the-loop learning of a feature classification problem involving map data. This neural hardware, with 27 analog inputs and 7 outputs, learns to classify the features and provide the required output map at high speed with 89% accuracy. Given the hardware's lower precision, this compares favorably with the 92% accuracy obtained both by a neural network software simulation (floating-point synaptic weights) and by the statistical technique of k-nearest neighbors.
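The resolution-doubling scheme can be pictured arithmetically: one chip supplies a coarse 7-bit (6-bit + sign) weight, and its identically addressed twin supplies a fine correction attenuated by 2^6. The toy sketch below is an assumption about the combination, with illustrative values, not the circuit's actual summing behavior.

    def effective_weight(coarse, fine):
        # coarse and fine are 6-bit + sign integers in [-63, 63]; the fine
        # chip's contribution is attenuated by 2^6, extending the resolution
        assert -63 <= coarse <= 63 and -63 <= fine <= 63
        return coarse + fine / 64.0

    print(effective_weight(17, -5))   # 16.921875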
This paper investigates the properties and characteristics of a family of nonlinear filters, the adaptive single-layer look-up perceptrons (SLLUPs). One of the major aims of the investigation is to compare and contrast the family's performance with that of other filtering processes, both linear and nonlinear.
In image processing, an important problem is edge-preserving smoothing in mixed-noise environments where both Gaussian and impulsive noise are present. Recently, several types of hybrid filters, a class of nonlinear filters, have been proposed for this purpose. In this paper, a technique for edge-preserving smoothing is developed using median finite impulse response (FIR) neural hybrid filters. The filter structure is a cascade connection of a median filter, an FIR filter, and a neural network: the median-filter section selects the median value of 3 points, the FIR-filter section calculates the mean value of 3 points, and the neural network section, a three-layer structure, takes as inputs the outputs of the median-filter and FIR-filter sections. The major features of this filter are: (1) it can adapt itself to various noise environments by learning from a training image; and (2) even when a priori data such as a training image are unavailable, it can still be applied efficiently to edge-preserving smoothing of images degraded by Gaussian and impulsive noise. Moreover, the structure of the proposed filter is very simple.
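A minimal 1-D sketch of this cascade follows; the placeholder `blend_net` stands in for the trained three-layer network, whose weights are not given in the abstract.

    import numpy as np

    def hybrid_filter(x, net):
        y = np.asarray(x, dtype=float).copy()
        for i in range(1, len(x) - 1):
            window = x[i - 1:i + 2]
            med = np.median(window)              # median-filter section (3 points)
            avg = np.mean(window)                # FIR-filter section (3-point mean)
            y[i] = net(np.array([med, avg]))     # neural-network section combines both
        return y

    # placeholder standing in for the trained three-layer network
    blend_net = lambda z: 0.5 * z[0] + 0.5 * z[1]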
We are concerned with object recognition in the framework of a navigation support system (NSS). Unlike a vision-based navigation system, where the navigating agent must solve obstacle avoidance, path planning, and related problems, the NSS must solve different problems: for instance, it must inform the user that a desired location has been reached once the objects associated with that location have been recognized. In this general context we present a framework for computing perceptual organization that incorporates a number of concepts from human visual analysis, especially the Gestalt laws of organization. Fuzzy techniques are used to define and evaluate the grouping/non-grouping properties as well as to construct structures from grouped input tokens. The method takes as input the initially fitted line segments (tokens) and recursively groups them into higher-level structures (tokens) such as lines, U-structures, and quadrilaterals. The resulting high-level structures can then be compared with object models, leading to object recognition. In this paper the inference (grouping) of line segments, line symmetry, junctions, closed regions, and strands is presented. The approach is supported by experimental results on 2D images of an office scene.
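One Gestalt-style grouping predicate, the fuzzy degree to which two segments are collinear, can be sketched as below; the angle and gap tolerances are illustrative, not the paper's.

    import numpy as np

    def collinearity_degree(seg_a, seg_b, angle_tol=np.pi / 12, gap_tol=10.0):
        # each segment is a pair of 2-D endpoints ((x1, y1), (x2, y2))
        (a1, a2), (b1, b2) = seg_a, seg_b
        va, vb = np.subtract(a2, a1), np.subtract(b2, b1)
        cosang = abs(np.dot(va, vb)) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12)
        ang = np.arccos(np.clip(cosang, -1.0, 1.0))
        gap = min(np.linalg.norm(np.subtract(p, q))
                  for p in (a1, a2) for q in (b1, b2))
        mu_angle = max(0.0, 1.0 - ang / angle_tol)   # fuzzy "same direction"
        mu_gap = max(0.0, 1.0 - gap / gap_tol)       # fuzzy "nearby endpoints"
        return min(mu_angle, mu_gap)                 # t-norm (min) aggregation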
Neural filters based on multilayer backpropagation networks have been shown to realize almost any linear or nonlinear filter. However, the slow convergence of these networks has limited their range of application. In this paper, fuzzy logic is introduced to adjust the learning rate and momentum parameter according to the output error and the training time, which greatly improves the convergence of the network. Test curves are presented to demonstrate the performance of the resulting fast filters.
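A sketch of such fuzzy adaptation follows; the triangular membership functions and the rules mapping them to rates are illustrative assumptions, since the paper's rule base is not given in the abstract.

    def tri(x, a, b, c):
        # triangular membership function peaking at b
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzy_rates(error, epoch, max_epoch=1000):
        # error is assumed normalized to [0, 1]
        small = tri(error, -0.5, 0.0, 0.5)           # error near zero
        large = tri(error, 0.5, 1.0, 1.5)            # error near one
        early = tri(epoch / max_epoch, -1.0, 0.0, 1.0)
        lr = 0.01 + 0.5 * max(large, early)          # big error or early epoch -> larger step
        momentum = 0.5 + 0.4 * small                 # small error -> rely more on momentum
        return lr, momentum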
Because of the learning capacity, parallel structure, and fault tolerance of neural networks, we have used hybrid neural networks to recognize the body of the Italian-built TSS-1 satellite. A set of features based on both boundary points and the centroid of the body is extracted from an image containing the satellite body. These features have proved to be quite stable under translation, rotation, magnification, and distortion, so object recognition with higher accuracy is achieved in this paper.
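A common way to build such boundary-plus-centroid features is a normalized centroid-distance signature; the sketch below is a generic construction of that kind, not necessarily the paper's exact feature set.

    import numpy as np

    def centroid_signature(boundary, n_keep=16):
        # boundary: (N, 2) array of boundary points in order around the shape
        c = boundary.mean(axis=0)                    # centroid: translation invariance
        r = np.linalg.norm(boundary - c, axis=1)
        r = r / (r.max() + 1e-12)                    # magnification invariance
        spectrum = np.abs(np.fft.fft(r))             # start-point/rotation invariance
        return spectrum[1:n_keep + 1]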
Computerized ultrasound tissue characterization has become an objective means for the diagnosis of disease. It is difficult to differentiate diffuse liver diseases, namely cirrhotic and fatty liver, from a normal liver by visual inspection of ultrasound images: the visual criteria for differentiating diffuse disease are rather confusing and highly dependent on the sonographer's experience. Computerized tissue characterization is thus justified to quantitatively assist the sonographer in accurate differentiation and to minimize the risk of erroneous interpretation. In this paper we use the fuzzy similarity measure as an approximate reasoning technique to find the maximum degree of matching between an unknown case, defined by a feature vector, and a family of prototypes (the knowledge base). The feature vector used in the matching process contains 8 quantitative parameters (textural, acoustical, and speckle parameters) extracted from the ultrasound image. The steps to match an unknown case with the family of prototypes (cirrhotic, fatty, normal) are: choose the membership functions for each parameter; obtain the fuzzification matrix for the unknown case and the family of prototypes; obtain the similarity matrix by linguistic evaluation of two fuzzy quantities; and obtain the degree of similarity by a simple aggregation method and fuzzy integrals. We find that the similarity-measure results are comparable to those of neural network classification techniques, and that the method can be used in medical diagnosis to determine the pathology of the liver and to monitor the extent of disease.
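The flavor of the matching step can be sketched as below; the trapezoidal memberships and min-aggregation are simplifying assumptions standing in for the paper's linguistic evaluation and fuzzy integrals.

    def membership(x, lo, hi, slack=1.0):
        # 1 inside [lo, hi], linear roll-off over `slack` outside the range
        if lo <= x <= hi:
            return 1.0
        d = (lo - x) if x < lo else (x - hi)
        return max(0.0, 1.0 - d / slack)

    def classify(features, prototypes):
        # prototypes: {'cirrhotic': [(lo, hi)] * 8, 'fatty': ..., 'normal': ...}
        scores = {}
        for label, ranges in prototypes.items():
            degrees = [membership(x, lo, hi) for x, (lo, hi) in zip(features, ranges)]
            scores[label] = min(degrees)             # conservative (min) aggregation
        return max(scores, key=scores.get), scores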
Based on a curvature representation, a fuzzy Kohonen self-organizing feature map is combined with the fuzzy delta rule to recognize partially occluded objects. Owing to their learning ability and fault tolerance, together with the fuzzy membership function, the fuzzy hybrid neural networks can recognize such objects with higher precision.
In this paper, several approaches, including K-means, fuzzy K-means (FKM), fuzzy adaptive resonance theory (ART2), and fuzzy Kohonen self-organizing feature mapping (SOFM), are adapted to texture image segmentation. In our tests five features, energy, entropy, correlation, homogeneity, and inertia, are used for texture analysis. The K-means algorithm has the following disadvantages: (1) a supervised learning mode, (2) poor real-time capability, and (3) instability. The FKM algorithm mitigates the instability by introducing fuzzy distribution functions. Fuzzy ART2 offers unsupervised training, high computation rates, and a high degree of fault tolerance (stability/plasticity); fuzzy operators and mapping functions are added to the network to improve generality. The fuzzy SOFM integrates the FKM algorithm by using fuzzy membership values as learning rates and update strategies for the Kohonen network. This yields automatic adjustment of both the learning rate distribution and the update neighborhood, and corresponds to an optimization problem related to FKM. The fuzzy SOFM is therefore independent of the order in which input patterns are presented, whereas the final weight vectors of the standard Kohonen method depend on that order. The fuzzy SOFM is self-organizing in that the size of the update neighborhood and the learning rate are adjusted automatically during learning; it reduces clustering errors and converges better. The numerical results show that fuzzy ART2 and fuzzy SOFM outperform the K-means algorithms, and images segmented by the algorithms are presented to demonstrate their performance.
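One standard FKM iteration, whose membership values the fuzzy SOFM reuses as learning rates, looks as follows (fuzzifier m = 2 is a conventional choice, not necessarily the paper's):

    import numpy as np

    def fkm_step(X, centers, m=2.0):
        # X: (n, d) feature vectors; centers: (k, d) cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                  # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)            # each sample's memberships sum to 1
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        return centers, u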
This paper deals with the recognition of handwritten text characters using a reduced-character methodology and neural nets (BNN, RNN). The reduced-character methodology is based on the representation (mapping) of the text characters onto a small 12 X 9 2-D array. For the recognition process each character is considered a composition of 'main' and 'secondary' features: the main features are the parts of a character necessary and important for its recognition, while the secondary (or artistic) features are the parts that account for its varied renderings. The reduced-character methodology presented here aims to show that recognizing a reduced-size character provides a robust approach to handwritten text recognition. The RNN approach is based on recurrent neural networks, whose feedback mechanism integrates new values of the feature vector with their predecessors; the output is supervised according to a target function, and these networks can deal with inputs and outputs that are explicit functions of time. A new way of encoding shape information is used, which gives very consistent results for handwritten character recognition: the 'shadow' of each character is taken to be the set of distances between the margins of the character, normalized by the maximum distance in the entire shape to minimize the effect of disproportionately formed characters. Two neural networks and an attributed-graph approach are used, and their results are compared on a set of 5000 handwritten characters.
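Reading the 'shadow' as per-row margin-to-margin distances, a minimal sketch of the feature extraction is:

    import numpy as np

    def shadow_features(bitmap):
        # bitmap: 2-D {0,1} array, e.g. the reduced 12 x 9 character
        dists = []
        for row in bitmap:
            cols = np.flatnonzero(row)
            dists.append(cols[-1] - cols[0] if cols.size else 0)
        dists = np.asarray(dists, dtype=float)
        peak = dists.max()
        return dists / peak if peak > 0 else dists   # normalize by the maximum distance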
Schistosomiasis is a major problem in Egypt; despite an active control program, it is estimated to affect about one third of the population. Deposition of poorly functioning fibrous tissue in the liver is the major contributor to the hepatic pathology. Fibrous tissue consists of a complex array of connective matrix material and a variety of collagen isotypes; as a result of the increased stromal density (collagen content), the parenchyma becomes more echogenic and less elastic (harder). In this study we investigated the effect of cardiac mechanical impulses from the heart and aorta on the kinetics of the liver parenchyma. Under conditions of controlled patient movement and suspended respiration, 30 frames per second of 588 X 512 ultrasound images (cineloop, 32 pels per cm) are captured from an ATL ultrasound machine and digitized, with image acquisition triggered by the R wave of the patient's ECG. The motion, which takes the form of a forced oscillation in the liver parenchyma, is quantified by tracking a small box (20 - 30 pels) in 16 directions through all 30 successive frames. Tracking is done using block-matching techniques: the maximum correlation between boxes in the time and frequency domains, and the minimum sum of absolute differences (SAD) between boxes. The motion is quantified for many regions at different positions within the liver parenchyma for 80 cases covering variable degrees of schistosomiasis, cirrhotic livers, and normal livers. The tissue velocity is calculated from the displacement (quantified motion), the time between frames, and the scan time of the ultrasound scanner. We found that the motion in the liver parenchyma is small, on the order of a few millimeters, and that the attenuation of the mechanical wave over one ECG cycle is higher in schistosomal and cirrhotic livers than in normal ones. Quantification of motion in the liver parenchyma due to cardiac impulses, under controlled limb movement and respiration, may therefore be of value in characterizing schistosomiasis (elasticity-based rather than scattering-based). This measure could be used together with the wide variety of quantitative tissue characterization parameters for pathology differentiation, for differentiating subclasses of cirrhosis, and for determining the extent of bilharzial involvement.
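The SAD variant of the block matching can be sketched as follows; the box size and search range are illustrative, standing in for the paper's 20 - 30 pel boxes and 16-direction search.

    import numpy as np

    def track_block(prev, curr, y, x, size=24, search=8):
        ref = prev[y:y + size, x:x + size].astype(int)
        best, best_dy, best_dx = np.inf, 0, 0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + size > curr.shape[0] or xx + size > curr.shape[1]:
                    continue                         # skip out-of-frame candidates
                cand = curr[yy:yy + size, xx:xx + size].astype(int)
                sad = np.abs(cand - ref).sum()       # sum of absolute differences
                if sad < best:
                    best, best_dy, best_dx = sad, dy, dx
        return best_dy, best_dx                      # displacement of the box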
This paper proposes a multi-layer neural network system to classify police mugshots according to the contours of the heads. In order to acquire sufficient information from the mugshots efficiently, an interactive algorithm performing image pre-processing, including segmentation and curve fitting, is presented, by which the contours of the heads are extracted. From the contours, a set of feature vectors consisting of 16 normalized measures is gathered. Since the feature vectors are not linearly separable in Hilbert space, a two-layer Kohonen network is implemented to cluster them. It is demonstrated that the multi-layer Kohonen network performs a nonlinear partition and therefore has more powerful pattern separability than the conventional Kohonen network; it is also shown that a two-layer Kohonen network suffices for the present nonlinear partition problem. About 100 mugshot samples are used in the experiments, and the results are given.
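A single Kohonen (SOFM) update for the 16-D contour features is sketched below; the two-layer arrangement, feeding the first map's responses into a second map, is not reproduced.

    import numpy as np

    def kohonen_step(weights, x, lr=0.1, radius=1.5):
        # weights: (grid_h, grid_w, 16) map; x: one 16-D feature vector
        d = np.linalg.norm(weights - x, axis=2)
        wy, wx = np.unravel_index(np.argmin(d), d.shape)       # winning node
        gy, gx = np.indices(d.shape)
        h = np.exp(-((gy - wy) ** 2 + (gx - wx) ** 2) / (2.0 * radius ** 2))
        weights = weights + lr * h[..., None] * (x - weights)  # pull neighborhood toward x
        return weights, (wy, wx)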
This paper is concerned with algorithms for target detection and recognition in infrared (IR) images. A second-order differencing method is developed to remove the correlation of the noise and clutter, and multiframe accumulation is exploited to enhance the target while relatively suppressing background noise. A backpropagation neural network is developed for target identification; the proposed ANN is trained by both unsupervised and supervised learning.
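As a sketch of the pre-processing, the second-order differencing is rendered here as a discrete Laplacian followed by frame averaging; the paper's exact operator is not specified in the abstract, so this is an illustrative stand-in.

    import numpy as np

    def preprocess(frames):
        acc = np.zeros(frames[0].shape, dtype=float)
        for f in frames:
            g = f.astype(float)
            lap = (np.roll(g, -1, 0) + np.roll(g, 1, 0) +
                   np.roll(g, -1, 1) + np.roll(g, 1, 1) - 4.0 * g)  # 2nd-order difference
            acc += lap                              # multiframe accumulation
        return acc / len(frames)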
The storage capacity of discrete neural network models with 2^n-state elements is analyzed in this paper. The present model can be applied to recognize 2^(2n)-level gray-scale or color images. Examples of recognizing multistate English letters and Chinese characters are discussed.