Filters can be designed using a pair of training signals, a desired one and a corrupted one, by finding, from a filter class, the filter that maps the corrupted training signal closest to the desired signal. This paper addresses the effect that the wordlength used in the training signals has on the applicability of the found optimal filters to other signals with a different wordlength. The study is done by concentrating on three filter classes that are shown to reveal different aspects of the topic at hand.
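A minimal sketch of this kind of training-based design, assuming a simple lookup-table filter class over integer-valued window patterns (the specific filter classes and signals studied in the paper are not reproduced here): for each window pattern seen in the corrupted training signal, the MSE-optimal output is the conditional mean of the corresponding desired samples.

```python
import numpy as np

def train_window_filter(corrupted, desired, window=3):
    """MSE-optimal lookup-table filter estimated from one training pair.

    For each window pattern observed in the corrupted signal, the optimal
    output under the MSE criterion is the conditional mean of the desired
    sample given that pattern (the signals are assumed to be small-wordlength
    integers so that patterns can serve as dictionary keys)."""
    half = window // 2
    sums, counts = {}, {}
    for i in range(half, len(corrupted) - half):
        key = tuple(corrupted[i - half:i + half + 1])
        sums[key] = sums.get(key, 0.0) + desired[i]
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def apply_window_filter(table, signal, window=3):
    """Apply the trained filter; unseen patterns fall back to the center sample."""
    half = window // 2
    out = np.array(signal, dtype=float)
    for i in range(half, len(signal) - half):
        key = tuple(signal[i - half:i + half + 1])
        out[i] = table.get(key, signal[i])
    return out

# toy example with 3-bit (wordlength-3) training signals
rng = np.random.default_rng(0)
desired = rng.integers(0, 8, size=2000)
corrupted = np.clip(desired + rng.integers(-1, 2, size=2000), 0, 7)
table = train_window_filter(corrupted, desired)
restored = apply_window_filter(table, corrupted)
```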
The most general filtering methods for binary images are binary increasing or non-increasing filters. The amount of memory used by both methods during training rises exponentially with the window size. Both methods also need training sets whose size rises exponentially with the number of window elements. This limits the size of the window to a maximum of 5 by 5 pixels. In this paper we search for a suboptimal filter which can be implemented for larger window sizes. Under certain constraints on the noise distribution, the conditional expectation is decomposed into noise-dependent and original-image-dependent components. The resulting filter can be trained with the original image to learn the ideal patterns, while the noise properties can be extracted either by modeling or by training on a reasonable training set. The amount of memory used by the new filter is proportional to a finite power of the window size. It is shown that the new filter is a generalization of the weighted order statistic filter.
An adaptive GA scheme is adopted for the optimal morphological filter design problem. Adaptive crossover and mutation rates, which help the GA avoid premature convergence while still assuring convergence of the program, are successfully used in the optimal morphological filter design procedure. In the string coding step, each string (chromosome) is composed of a structuring element coding chain concatenated with a filter sequence coding chain. In the decoding step, each string is divided into three chains, which are then decoded respectively into one structuring element with a size of at most 5 by 5 and two concatenated morphological filter operators. The fitness function of the GA is based on the mean-square-error (MSE) criterion. In the string selection step, a stochastic tournament procedure is used in place of the simple roulette-wheel scheme in order to accelerate convergence. The final convergence of our algorithm is reached by a two-step convergence strategy. In the presented applications of noise removal from texture images, it is found that with the optimized morphological filter sequences the obtained MSE values are smaller than those of the corresponding non-adaptive morphological filters, and the optimized structuring elements take approximately the same shapes and orientations as the image textons.
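The following is a compact, hedged sketch of an adaptive GA of this general kind, evolving a single 5-by-5 binary structuring element for a gray-scale opening under an MSE fitness; the adaptive-rate formulas, the single-operator chromosome, and all parameter values are illustrative assumptions rather than the authors' exact coding scheme and two-step convergence strategy.

```python
import numpy as np
from scipy.ndimage import grey_opening

def fitness(chrom, noisy, clean):
    """MSE of the opened image against the clean reference; smaller is better."""
    se = chrom.reshape(5, 5).astype(bool)
    if not se.any():                     # reject the empty structuring element
        return 1e12
    opened = grey_opening(noisy, footprint=se)
    return float(np.mean((opened - clean) ** 2))

def adaptive_ga(noisy, clean, pop_size=20, generations=40, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, 25))
    for _ in range(generations):
        f = np.array([fitness(c, noisy, clean) for c in pop])
        order = np.argsort(f)
        pop, f = pop[order], f[order]
        # adaptive rates: raise exploration as the population converges
        spread = np.clip((f[-1] - f[0]) / (f[-1] + 1e-12), 0.0, 1.0)
        p_cross = 0.6 + 0.3 * (1.0 - spread)
        p_mut = 0.01 + 0.10 * (1.0 - spread)
        children = [pop[0].copy()]                      # elitism
        while len(children) < pop_size:
            i, j = rng.integers(0, pop_size, 2)         # stochastic tournament
            a = pop[i] if f[i] < f[j] else pop[j]
            i, j = rng.integers(0, pop_size, 2)
            b = pop[i] if f[i] < f[j] else pop[j]
            child = a.copy()
            if rng.random() < p_cross:                  # one-point crossover
                cut = int(rng.integers(1, 25))
                child[cut:] = b[cut:]
            flips = rng.random(25) < p_mut              # bit-flip mutation
            child[flips] ^= 1
            children.append(child)
        pop = np.array(children)
    return pop[0].reshape(5, 5).astype(bool)

# toy usage on a piecewise-constant image with additive noise
rng = np.random.default_rng(1)
clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))
noisy = clean + rng.normal(0, 0.1, clean.shape)
best_se = adaptive_ga(noisy, clean, generations=10)
```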
Large computational complexity arises in model-based ATR systems because an object's image is typically a function of several degrees of freedom. Most model-based ATR systems overcome this dependency by incorporating an exhaustive library of image views. This approach, however, requires enormous storage and extensive search processing. Some ATR systems reduce the size of the library by forming composite averaged images at the expense of reducing the captured pose specific information, usually resulting in a decrease in performance. The linear signal decomposition/direction of arrival (LSD/DOA) system, on the other hand, forms an essential-information object model which incorporates pose specific data into a much smaller data set, thus reducing the size of the image library with less loss of discrimination and pose estimation performance. The LSD/DOA system consists of two independent components: a computationally expensive off-line component which forms the object model and a computationally inexpensive on-line object recognition component. The focus of this paper is on the development of the multi-object generalized likelihood ratio test (GLRT) as applied to the LSD/DOA ATR system. Results are presented from the testing of the LSD/DOA multi-object ATR system for SAR imagery using four targets, represented over a wide range of viewing angles.
The Boolean model is a random set process in which random shapes are positioned according to the outcomes of an independent point process. In the discrete case, the point process is Bernoulli. To do estimation on the two-dimensional discrete Boolean model, we sample the germ-grain model at widely spaced points. An observation under this procedure consists of jointly distributed horizontal and vertical runlengths. An approximate likelihood of each such cross observation is computed. Since the observations are taken at widely spaced points, they are considered independent and are multiplied to form a likelihood function for the entire sampled process. Estimation for the two-dimensional process is done by maximizing this grand likelihood over the parameter space. Simulations on random-rectangle Boolean models show a significant decrease in variance over the method using horizontal and vertical linear samples. Maximum-likelihood estimation can also be used to fit models to real textures.
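As a point of reference, the sketch below generates a realization of a discrete Boolean model (Bernoulli germs, here with fixed rectangular grains rather than random ones) and collects the joint horizontal and vertical run lengths at widely spaced sample points; the approximate likelihood and its maximization are not reproduced.

```python
import numpy as np

def boolean_model(shape=(256, 256), p=0.002, grain=(8, 4), rng=None):
    """Realization of a discrete Boolean model with rectangular grains.

    Germs are placed by an independent Bernoulli(p) point process on the grid;
    each germ is covered by an (h x w) rectangle, and the union of the grains
    is the foreground of the model."""
    rng = rng or np.random.default_rng(0)
    germs = rng.random(shape) < p
    h, w = grain
    image = np.zeros(shape, dtype=bool)
    for r, c in zip(*np.nonzero(germs)):
        image[r:r + h, c:c + w] = True
    return image

def run_lengths_at(image, row, col):
    """Horizontal and vertical foreground run lengths through a sample point."""
    if not image[row, col]:
        return 0, 0
    line = image[row]
    left = right = col
    while left > 0 and line[left - 1]:
        left -= 1
    while right < line.size - 1 and line[right + 1]:
        right += 1
    colline = image[:, col]
    top = bottom = row
    while top > 0 and colline[top - 1]:
        top -= 1
    while bottom < colline.size - 1 and colline[bottom + 1]:
        bottom += 1
    return right - left + 1, bottom - top + 1

img = boolean_model()
# widely spaced sample points, so the joint run-length observations are
# treated as (approximately) independent when forming the grand likelihood
samples = [run_lengths_at(img, r, c)
           for r in range(32, 256, 64) for c in range(32, 256, 64)]
```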
Assuming a random shape to be governed by a random generator and noise parameter vector, it is essential to optimally estimate the state of the generators given some set of extracted features based on the random shape. If the features used are analytically tied to shape and distortion parameters, the conditional densities involved in this Bayesian estimation problem are of a generalized nature and exist only on the manifold dictated by the particular probe. These generalized densities can be used in a conventional way to calculate the conditional-expectation estimates of the parameters. They may also be used to minimize the mean-square error on the manifold itself, thereby yielding an estimate of shape parameters consistent with the geometrical prior information provided by the observed feature set.
Projectively invariant classification of patterns is constructed in terms of orbits of the group SL(2,C) acting on an extended complex line (the image plane with complex coordinates) by Möbius transformations. It provides projectively adapted noncommutative harmonic analysis for patterns by decomposing a pattern into irreducible representations of the unitary principal series of SL(2,C). It is the projective analog of the classical (Euclidean) Fourier decomposition, well suited for the analysis of projectively distorted images, such as aerial images of the same scene taken from different vantage points.
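A small illustrative sketch of the underlying group action, resampling an image under a Möbius transformation of its complexified pixel coordinates; the centering convention, nearest-neighbour sampling, and the sample SL(2,C) matrix are arbitrary choices, and the harmonic-analysis decomposition itself is not shown.

```python
import numpy as np

def mobius_warp(image, M):
    """Resample an image under the Mobius map w = (a z + b) / (c z + d).

    Pixel coordinates are treated as complex numbers z = x + i y.  For each
    output pixel we apply the inverse map (also a Mobius transformation,
    given by the inverse matrix) and sample the input by nearest neighbour."""
    a, b, c, d = np.linalg.inv(M).ravel()     # inverse map: output -> input
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    z = (xs - w / 2) + 1j * (ys - h / 2)      # centre the coordinate system
    zin = (a * z + b) / (c * z + d)
    zin = np.nan_to_num(zin, nan=1e9, posinf=1e9, neginf=-1e9)
    xi = np.round(zin.real + w / 2).astype(int)
    yi = np.round(zin.imag + h / 2).astype(int)
    valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out = np.zeros_like(image)
    out[valid] = image[yi[valid], xi[valid]]
    return out

rng = np.random.default_rng(7)
img = np.kron(rng.random((16, 16)), np.ones((8, 8)))     # 128x128 test image
M = np.array([[1.0, 0.0], [0.002j, 1.0]])                # det = 1, so M is in SL(2, C)
warped = mobius_warp(img, M)
```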
A hybrid neural network that can learn nonlinear morphological feature extraction and classification simultaneously, called the morphological shared-weight network (MSNN), is described. The feature extraction operation is performed by a gray-scale hit-miss transform. The network learns morphological structuring elements by a back-propagation-type learning rule. It provides a general, problem-independent methodology for designing morphological structuring elements for pattern recognition. The network was applied to handwritten digit recognition and automatic target recognition (ATR) of occluded vehicles, and compared to standard shared-weight neural networks (SSNN) that perform linear feature extraction. For binary handwritten digit recognition, it produced performance comparable to that obtained using existing shared-weight networks; however, it trained faster. For ATR, a set of parking lot images containing a certain type of vehicle was used. An MSNN was trained with non-occluded training vehicles and tested with images containing the training vehicles at various degrees of occlusion. An efficient training method to improve background rejection is introduced. Two target-aim-point selection methods are defined. The MSNN performed significantly better than the SSNN at detecting occluded vehicles and reducing false alarm rates. Furthermore, the morphological network trained significantly faster.
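A hedged sketch of one standard form of the gray-scale hit-miss transform (erosion by a "hit" structuring element minus dilation by a "miss" structuring element); the MSNN's learned structuring elements and back-propagation rule are not reproduced, and the toy structuring elements below are arbitrary.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def grayscale_hit_miss(image, hit, miss):
    """A commonly used gray-scale hit-miss transform: erosion by the non-flat
    'hit' structuring element minus dilation by the non-flat 'miss'
    structuring element.  Large responses indicate windows whose gray-level
    pattern matches the pair of structuring elements well."""
    return grey_erosion(image, structure=hit) - grey_dilation(image, structure=miss)

# toy usage with small 3x3 non-flat structuring elements
rng = np.random.default_rng(0)
img = rng.random((64, 64))
hit = np.zeros((3, 3))
miss = -0.5 * np.ones((3, 3))
response = grayscale_hit_miss(img, hit, miss)
```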
In radar signal processing it is important to improve target detectability against various kinds of clutter, which may be caused by a number of different sources. Frequently the location of these sources is also subject to variations in time and position, i.e., the output clutter level is not constant and discrimination of the target from clutter is no longer easy. This calls for an adaptive signal processing technique operating in accordance with the local clutter situation. The present paper suggests such a technique. One advantage of this technique is that, over a wide class of no-signal environments, the false alarm rate remains the same. Also, no learning process is necessary in order to achieve the constant false alarm rate (CFAR). In this paper, a CFAR test is developed. The test is applicable to the detection of a signal in some elements of a multiple-resolution-element radar. It is based on the use of binary integration and order statistics. Binary integration is a nonlinear process that counts the number of times the return signal from a given sequence exceeds a threshold (referred to as the first threshold). The new adaptive test for discriminating targets from clutter proposed here utilizes parameter-free statistics obtained from the above order statistics. These parameter-free statistics are transformed into a distribution-free statistic which is compared with another threshold (referred to as the second threshold) for the decision. The adaptive test is able to achieve a fixed PFA (probability of false alarm) which is invariant to intensity changes in the noise background. The results of computer simulation are presented as evidence of the validity of the theoretical performance predictions of the suggested CFAR test.
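A rough sketch of an order-statistic CFAR detector with binary integration, under simplifying assumptions (leave-one-out reference cells instead of a sliding window with guard cells, arbitrary threshold parameters); it is meant only to illustrate the two-threshold structure described above, not the paper's parameter-free test.

```python
import numpy as np

def os_cfar_binary_integration(pulses, k=12, alpha=3.0, m_of_n=4):
    """Order-statistic CFAR with binary integration (a sketch).

    pulses : (n_pulses, n_cells) array of magnitude samples.
    For each pulse and cell under test, the first threshold is alpha times
    the k-th order statistic of the reference cells (all other cells of that
    pulse, for simplicity).  Binary integration then counts, per cell, how
    many of the pulses exceeded the first threshold and declares a detection
    when the count reaches the second threshold m_of_n."""
    n_pulses, n_cells = pulses.shape
    hits = np.zeros(n_cells, dtype=int)
    for p in range(n_pulses):
        x = pulses[p]
        for c in range(n_cells):
            ref = np.delete(x, c)                  # leave-one-out reference cells
            t1 = alpha * np.partition(ref, k)[k]   # k-th order statistic
            hits[c] += x[c] > t1
    return hits >= m_of_n                          # second (binary-integration) threshold

# toy usage: Rayleigh clutter with one embedded target in cell 40
rng = np.random.default_rng(1)
pulses = rng.rayleigh(1.0, size=(8, 64))
pulses[:, 40] += 6.0
detections = os_cfar_binary_integration(pulses)
```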
A filter structure formed as a linear combination of a bank of nonlinear filters, in particular a bank of stack filters, is studied. This type of filter includes many known filter classes, e.g., linear FIR filters, nonlinear threshold Boolean filters, and L-filters. An efficient algorithm based on the joint distribution functions of stack filters for finding the optimal filter coefficients under the MSE (mean squared error) criterion is proposed. A subclass of the above filters, called FFT-ordered L-filters (FFT-LF), is studied in detail. In this case the bank of filters is formed according to the generalized structure of the FFT flowgraph. It is shown that FFT-LFs effectively remove mixed Gaussian and impulsive noise. Besides their good performance, FFT-LFs are simple to implement. The most complicated FFT-LFs (in the sense of implementation) are the well-known L-filters. We suggest an efficient parallel architecture implementing FFT-LFs as well as a family of discrete orthogonal transforms, including the discrete Fourier, Walsh, and other transforms. Both linear and nonlinear L-filter-type filters are implemented effectively on the architecture. Comparison with known architectures implementing both linear and nonlinear filters reveals the advantages of the proposed architecture.
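As an illustration of the general structure only (not the FFT-LF construction or the joint-distribution design algorithm), the sketch below forms a bank of running medians, which are stack filters, and finds the MSE-optimal linear combination of their outputs by least squares on a training pair.

```python
import numpy as np
from scipy.ndimage import median_filter

def lc_of_stack_filters(noisy, clean, window_sizes=(1, 3, 5, 7, 9)):
    """Linear combination of a bank of stack filters, with coefficients chosen
    to minimize the MSE against a training signal.

    Running medians over different window sizes are used as the bank (running
    medians are stack filters); ordinary least squares gives the MSE-optimal
    linear combination for the training pair."""
    bank = np.stack([median_filter(noisy, size=w) for w in window_sizes], axis=1)
    coeffs, *_ = np.linalg.lstsq(bank, clean, rcond=None)
    return bank @ coeffs, coeffs

# toy usage: mixed Gaussian and impulsive noise on a blocky signal
rng = np.random.default_rng(2)
clean = np.repeat(rng.integers(0, 10, 40), 25).astype(float)
noisy = clean + rng.normal(0, 0.5, clean.size)
impulses = rng.random(clean.size) < 0.05
noisy[impulses] = rng.uniform(0, 10, impulses.sum())
filtered, coeffs = lc_of_stack_filters(noisy, clean)
```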
This paper presents two new models of image restoration for a linear shift-invariant image formation system described by a convolution-type Fredholm integral equation of the first kind. The models amount to a preliminary restoration of the noise imposed on the image during its formation. The corresponding approximate solutions for the restored image are described and theoretical comparative estimates are given. Within the framework of these models, the well-known inverse and Wiener filters are analyzed and new, so-called noise-homomorphic filters are considered. The best approximation of the true image in the sense of the root-mean-square error is obtained and its main properties are considered. It is shown that this approximation is better than the Wiener estimate obtained in the classical model of image restoration.
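For reference, a minimal frequency-domain Wiener filter for a convolution-type degradation, assuming a known PSF and a constant noise-to-signal power ratio; the paper's two models and the noise-homomorphic filters are not reproduced.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power, signal_power):
    """Classical frequency-domain Wiener filter for a convolution degradation.

    G = conj(H) / (|H|^2 + noise_power / signal_power), applied to the
    spectrum of the observed image; noise_power / signal_power plays the role
    of a (here constant) noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power / signal_power)
    return np.real(np.fft.ifft2(G * np.fft.fft2(blurred)))

# toy usage: 5x5 box blur plus white noise
# (the uncentred PSF introduces a circular shift, ignored in this sketch)
rng = np.random.default_rng(3)
image = np.kron(rng.random((16, 16)), np.ones((8, 8)))
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(psf, s=image.shape) * np.fft.fft2(image)))
observed = blurred + rng.normal(0, 0.01, image.shape)
restored = wiener_deconvolve(observed, psf, noise_power=1e-4, signal_power=1.0)
```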
Recent papers, particularly in multiscale morphological filtering, have renewed interest in signal representation via multiscale openings. Although most of the analysis has been done with flat structuring elements, extensions to grayscale structuring elements (GSE) are certainly possible. In fact, we have shown that opening a signal with a convex and symmetric GSE does not introduce additional zero-crossings as the filter moves to coarser scales. However, the issue of finding an optimal GSE is still an open problem. In this paper, we present a procedure to find an optimal GSE under the least mean square (LMS) algorithm subject to three constraints: the GSE must be convex, symmetric, and non-negative. The use of basis functions simplifies the problem formulation. In fact, we show that the basis functions for a convex and symmetric GSE are concave and symmetric, so alternative constraints are developed. The results of this algorithm are compared with our previous work.
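A small sketch of the ingredients involved, assuming a paraboloid-cap GSE (symmetric, non-negative, and with a convex umbra) rather than the LMS-optimized element of the paper, and using a standard gray-scale opening.

```python
import numpy as np
from scipy.ndimage import grey_opening

def paraboloid_gse(radius=3, curvature=0.05):
    """A symmetric, non-negative gray-scale structuring element shaped as a
    clipped inverted paraboloid; its umbra is a convex set.  This is only one
    example of an element satisfying constraints of the kind described."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gse = 1.0 - curvature * (x ** 2 + y ** 2)
    return np.clip(gse, 0.0, None)

rng = np.random.default_rng(4)
signal = np.kron(rng.random((8, 8)), np.ones((16, 16)))   # piecewise-constant image
gse = paraboloid_gse()
opened = grey_opening(signal, structure=gse)               # enlarge the GSE for coarser scales
```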
One important task in the field of digital video signal processing is the conversion of one standard into another with different field and scan rates. We have therefore developed a vector-based nonlinear up-conversion algorithm which applies nonlinear center-weighted median (CWM) filters. Assuming a two-channel model of the human visual system with different spatio-temporal characteristics, there are contrary demands on the CWM filters. We can meet these demands by a vertical band separation and the application of so-called temporally and spatially dominated CWMs. In this way, errors of the separated channels can be orthogonalized and avoided by an adequate splitting of the spectrum. The result is a very robust, vector-error-tolerant up-conversion method which significantly improves the interpolation quality. With an appropriate choice of the CWM filter root structures, the main picture elements are interpolated correctly even if faulty vector fields occur. In order to demonstrate the correctness of the deduced interpolation scheme, picture content is classified. The classes are distinguished by correct or incorrect vector assignment and correlated or noncorrelated picture content. The mode of operation of the new algorithm is portrayed for each class. Whereas the mode of operation for correlated picture content can be shown by object models, for noncorrelated picture content it is shown by the distribution function of the applied CWM filters. The new algorithm has been verified both by an objective evaluation method, the PSNR (peak signal-to-noise ratio) measurement, and by a comprehensive subjective test series.
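For illustration, a basic one-dimensional center-weighted median filter, the building block referred to above; the temporally and spatially dominated CWMs, the vertical band separation, and the vector-based interpolation are not reproduced.

```python
import numpy as np

def center_weighted_median(signal, window=5, center_weight=3):
    """1-D center-weighted median (CWM) filter: the center sample of each
    window is replicated `center_weight` times before taking the median.
    Larger center weights preserve more detail; weight 1 gives the plain median."""
    half = window // 2
    padded = np.pad(signal, half, mode='edge')
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        w = padded[i:i + window]
        extended = np.concatenate([w, np.repeat(w[half], center_weight - 1)])
        out[i] = np.median(extended)
    return out

# toy usage: impulsive noise on a ramp
rng = np.random.default_rng(5)
ramp = np.linspace(0, 1, 200)
noisy = ramp.copy()
noisy[rng.random(200) < 0.05] = 1.0
smoothed = center_weighted_median(noisy)
```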
The recursive approaching signal filter (RASF) calculates the weights for each filtering window position from the difference between the original signal and a prefiltered signal. The original definition suggests the use of an exponential function for calculating the weights, but any non-increasing function may be used as well. This paper addresses the problem of selecting the optimal one among them via empirical simulations, applying genetic algorithms to the optimization problem. Furthermore, another modification to the RASF filter class, taking advantage of a larger number of observations with smaller time complexity, is proposed, and thus a novel filter class is presented. The designed optimization scheme for finding the optimal weighting function is also applied to these filters, and comparisons with the RASF filter are presented.
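A hedged sketch of the weighting idea (not the exact RASF recursion): each window sample is weighted by a non-increasing function of its distance from a prefiltered reference value, here a running median, with the exponential as the default choice.

```python
import numpy as np
from scipy.ndimage import median_filter

def rasf_like_filter(signal, window=9, scale=1.0, weight_fn=None):
    """Approaching-signal-style weighted filter (illustrative sketch).

    Each sample in the window is weighted by a non-increasing function of its
    distance to a prefiltered (here median-filtered) reference value at the
    window center; the output is the normalized weighted average.  The default
    weight function is the exponential suggested in the original RASF
    definition; any non-increasing function can be substituted."""
    if weight_fn is None:
        weight_fn = lambda d: np.exp(-d / scale)
    reference = median_filter(signal, size=window)
    half = window // 2
    padded = np.pad(signal, half, mode='edge')
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        w = padded[i:i + window]
        weights = weight_fn(np.abs(w - reference[i]))
        out[i] = np.sum(weights * w) / np.sum(weights)
    return out

rng = np.random.default_rng(8)
clean = np.repeat(rng.integers(0, 10, 30), 20).astype(float)
noisy = clean + rng.normal(0, 0.7, clean.size)
smoothed = rasf_like_filter(noisy)
```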
Interest in image processing is not confined to 2D; 1D linescan cameras are increasingly used to monitor the width of objects in web processes, production lines, extrusions, etc. An object, for example a wire extrusion illuminated from behind to create a silhouette, is represented in an image of that object as a local (or regional) extremum of intensity. In a single line scan taken across the wire, the object appears as a pulse, and the width of the pulse is a measure of the wire diameter. In a typical application, the pulse width is monitored by comparing the scanline with a stored background and then compressing the resulting binary scan line using run-length coding so that the pulse width can be compared with a template. It is often necessary to interpret the width from a noisy graylevel signal. It has been reported that the width of noisy pulses can be estimated very well using sieves. Furthermore, very fast sieves can be implemented in digital form, consistent with handling the 20 Msample/s data rates that might be encountered. Here, the results of experiments using a linescan camera coupled with a recursive datasieve to measure object widths are reported.
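A minimal sketch of the typical background-comparison approach described above (not the recursive datasieve): threshold the scanline against the stored background, run-length encode the binary line, and take the longest run as the pulse width.

```python
import numpy as np

def pulse_width(scanline, background, margin=10.0):
    """Estimate the width of the silhouette pulse in one line scan.

    The scanline is compared with a stored background, thresholded, and the
    resulting binary line is run-length encoded; the longest foreground run
    is reported as the pulse (wire) width in pixels."""
    binary = (background.astype(float) - scanline.astype(float)) > margin
    # run-length encode the binary line: pad with zeros and find transitions
    padded = np.concatenate(([0], binary.astype(np.int8), [0]))
    edges = np.flatnonzero(np.diff(padded))
    starts, ends = edges[::2], edges[1::2]
    if starts.size == 0:
        return 0
    return int(np.max(ends - starts))

# toy usage: dark wire silhouette on a bright background with noise
rng = np.random.default_rng(9)
background = np.full(512, 200.0)
scan = background + rng.normal(0, 3, 512)
scan[240:260] -= 120.0                       # the wire silhouette
print(pulse_width(scan, background))         # expected to be about 20
```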
The problem of detecting regions of interest by means of nonlinear filters is addressed as applied to flaw detection in diagnostic imaging. The solution of this problem by conventional methods, such as linear high-pass filtering with subsequent thresholding, does not yield satisfactory results due to the poor quality of the images in these applications. To evaluate the basic local features in images, it is proposed to use adaptive trimmed mean filters with fast recursive calculation of order statistics. Theoretical and experimental investigation of the proposed adaptive filtering algorithm confirms its efficiency for the detection of small isolated objects and crack-like flaws in diagnostic imaging.
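For illustration, a plain (non-adaptive) alpha-trimmed mean filter; the adaptive choice of the trimming parameters and the fast recursive order-statistic updates of the proposed algorithm are omitted.

```python
import numpy as np

def trimmed_mean_filter(image, window=5, trim=0.2):
    """2-D alpha-trimmed mean filter (non-adaptive sketch).

    Each output pixel is the mean of the window samples after discarding the
    `trim` fraction of the smallest and largest order statistics.  In an
    adaptive version the trimming fraction would be chosen per window from
    local statistics; fast recursive updating of the sorted window is what
    makes such filters practical, but is omitted here for clarity."""
    half = window // 2
    padded = np.pad(image, half, mode='reflect')
    out = np.empty_like(image, dtype=float)
    k = int(trim * window * window)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            w = np.sort(padded[r:r + window, c:c + window], axis=None)
            out[r, c] = w[k:w.size - k].mean()
    return out
```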
The expedience of using expert-system and fuzzy-logic-based decisions for image recognition and filtering is discussed. One possible approach is proposed, which combines the calculation of several local parameters for every scanning-window position with preliminary training of the expert system and decision making for image recognition and adaptive filtering. The efficiency and peculiarities of the considered procedure are analyzed and demonstrated on simulated data.
A genetic algorithm is presented for the blind-deconvolution problem of image restoration. The restoration problem is modeled as an optimization problem whose cost function is minimized based on the mechanics of natural selection and natural genetics. The applicability of the GA to the blind-deconvolution problem is demonstrated.
The problem of image segmentation is addressed via the Voronoi diagram (VD). The set of seeds, which determines the Voronoi regions, can be modified by adding and removing seeds. These modifications can be governed by user-specified constraints, but can also be driven by 'salt-and-pepper' noise; the VD, seen as a segmentation operator, is unstable to this kind of perturbation. A dynamic algorithm for the construction of the discrete VD, exploiting the local dependence of the diagram on each seed, is presented. The updates of the diagram are made inside a convex set obtained by mapping seeds to linear inequalities; the updates of the neighbor relationships are done using a version of the incremental method.
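A brute-force sketch of the discrete Voronoi labeling that such a dynamic algorithm maintains; the seed-to-inequality mapping and the incremental updates themselves are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def discrete_voronoi(shape, seeds):
    """Discrete Voronoi diagram: label every pixel with the index of its
    nearest seed.  Adding or removing a seed only changes labels inside the
    region where the nearest-seed assignment changes, which is what a
    dynamic/incremental construction exploits."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pixels = np.column_stack([ys.ravel(), xs.ravel()])
    _, labels = cKDTree(seeds).query(pixels)
    return labels.reshape(shape)

rng = np.random.default_rng(6)
seeds = rng.integers(0, 128, size=(30, 2))
labels = discrete_voronoi((128, 128), seeds)
# adding a seed: only pixels now closer to the new seed need relabelling
```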
Deterministic hierarchical approaches in image analysis comprise two major sub-classes: the multiresolution approach and the scale-space representation. Both approaches require either a coarse-to-fine exploration of the hierarchical structure or a careful selection of a single analysis parameter, but neither takes full advantage of the hierarchical structure (the end result is obtained at only one analysis level). To overcome this limitation, we propose an explicit hierarchy-based model in which any image primitive is expressed as a finite sum of mobile wavelets (MW), defined as wavelets whose dilation, translation and amplitude parameters are allowed to vary. This description derives from an adaptive discretization of the continuous inverse wavelet transform. First, the MW-based representation is used within the framework of active contour modeling. The primitive corresponds to a deformable, parametrized curve expressed as a sum of MWs. The initial curve is refined by updating the three parameters of each MW in order to minimize the intensity gradient along the active contour. Surface reconstruction is also addressed by the MW approach. In this case the primitive, the intensity function, is expressed as a sum of MWs whose associated parameters are estimated from the noisy data by minimizing a regularizing energy functional.
A new layered graph network (LGN) for image segmentation is presented. In the LGN a graph representation of images is used. In such a pixel adjacency graph (PAG) a segment is considered as a connected component. To define the PAG, the layers of the network are divided into regions, and inside the regions the image is represented by sub-graphs consisting of sub-segments (nodes) which are connected by branches if they are adjacent. The connection of sub-segments is controlled by a special adjacency criterion which depends on the mean gray values of the sub-segments and their standard deviations. In this way, the sub-segments of a layer l are merges of the sub-segments of layer l-1 (the sub-segments of layer 0 are the pixels). The gray value averaging over the sub-segments is edge-preserving and becomes more and more global with increasing layer number. Bridge connections between segments are prevented by the special regional structure of the network layers. The LGN can be understood as a special 'neural' vision network with the highest layer representing the PAG. Simulated and real-world images have been processed by an LGN simulator with good success.
We have developed a method that uses a genetic algorithm (GA) to optimize rules for categorizing terrain as depicted in multispectral data. A variety of multispectral data have been used in the work. Linear techniques have not separated terrain categories with sufficient accuracy, so genetic algorithms have been applied to the problem. Genetic algorithms, in general, are a nonlinear optimization technique based on the biological ideas of natural selection and survival of the fittest. For the work presented here, the genetic algorithm optimizes rules for the categorization of terrain. The genetic algorithm produced promising results for terrain categorization; however, work continues with efforts to improve classification accuracy. As part of this effort, new rule types have been added to the genetic algorithm's repertoire. These new rule types include the clustering of data, ratios of bands, linear combinations of bands, and second-order combinations of two and three bands. Improved performance of the rules is demonstrated.
We generalize set symmetrization transformations based on Minkowski addition to symmetrization transformations for numerical functions. For this purpose two types of function representation are used. The first is the umbra representation, in which symmetrization transformations are applied to the set of points under the graph of a function; this corresponds to introducing transformations using gray-scale dilation. The second representation of a function is based on a family of threshold sets. Flat function symmetrization transformations are generated by the corresponding set transformations operating on the threshold sets.
By developing the transform of the local umbra, fuzzified sets are proposed to represent grayscale images and used to redefine the operations of mathematical morphology. A linear transform is suggested to convert the fuzzified sets back to images. A set-algebraic structure in the space of the local umbra is then established, which can describe binary and grayscale morphological functions in a uniform formulation. Further, a compact fuzzified-set image algebra is developed which has only a combined operator consisting of a set-logic operation followed by a fuzzified-set dilation. A cellular two-layer set-logic array architecture is suggested to execute the algorithm.