In this paper, we first introduce the block threshold decomposition (BTD), which can be regarded as a compact form of standard threshold decomposition. The proposed BTD produces multilevel threshold signals and applies to both discrete- and continuous-valued signals. Based on BTD, we then introduce a new digital filtering approach, pyramid median filtering, which consists of a group of median filters, each operating on a BTD signal, whose window masks together form a pyramid. According to prior knowledge or estimates of the detail distributions and noise characteristics in the received signal or image, the form of the pyramid can be chosen to best accomplish the task at hand. Some properties of pyramid median filters are analyzed. Applying them to image restoration in impulsive noise shows that this approach outperforms standard median filters.
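The BTD itself is defined in the paper; as background, the standard threshold decomposition it compacts can be sketched in a few lines of Python. A multilevel signal is split into binary threshold signals, each level is filtered, and the results are summed; for a stack filter such as the median this reproduces filtering the original signal directly. The 3-point `median3` helper below is illustrative, not from the paper.

```python
import numpy as np

def median3(x):
    """3-point running median with edge replication."""
    p = np.concatenate([x[:1], x, x[-1:]])
    return np.array([sorted(p[i:i + 3])[1] for i in range(len(x))])

x = np.array([2, 0, 3, 1, 2])
# Standard threshold decomposition: x is the sum of its binary threshold signals
levels = [(x >= t).astype(int) for t in range(1, int(x.max()) + 1)]
assert np.array_equal(sum(levels), x)
# Stacking property: median-filter each level and add the results back
recomposed = sum(median3(lvl) for lvl in levels)
assert np.array_equal(recomposed, median3(x))
```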
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print format on SPIE.org.
In this paper, we propose a new type of modified trimmed mean (MTM) filter for image smoothing. The MTM filter was first proposed by Lee and Kassam; it is designed to remedy the edge blurring caused by mean filtering. The idea is to perform the averaging operation on selected samples inside a window: a data sample is selected if its value falls into the range (m - q, m + q), where m is a value calculated from the data samples and q is a preselected threshold. Lee et al. used the median filter to estimate m. Although the MTM filter works well for some images, it cannot preserve details, because the median filter is not a detail-preserving filter. In this paper, we propose to replace the median filter with a detail-preserving filter, namely the multistage median (MSM) filter, for estimating m. We call this filter the multistage median based MTM (MSMTM) filter. It is shown that the new MSMTM filter is highly efficient and detail-preserving. With some modification, the MSMTM filter can also be used to filter multiplicative noise. Finally, simulations are carried out to evaluate the performance of the filter.
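The selection rule described above can be sketched directly. The following minimal 1D MTM filter uses the window median as m (per Lee and Kassam), not the paper's MSM-based estimator; the window size and threshold q are illustrative choices.

```python
import numpy as np

def mtm_filter(signal, window=5, q=20):
    """Modified trimmed mean: average only the samples inside (m - q, m + q),
    where m is the window median."""
    half = window // 2
    padded = np.pad(signal, half, mode='edge')
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        w = padded[i:i + window]
        m = np.median(w)
        selected = w[np.abs(w - m) < q]   # trim samples far from m
        out[i] = selected.mean()
    return out
```

An impulse of amplitude 200 in a flat region of value 10 is trimmed entirely, so the output stays at 10; a plain mean filter would smear it.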
A basic problem in digital image restoration is the smoothing or blurring of edges due to the averaging effects of most techniques. Filtering results in images that are less noisy but also less sharp. In order to effectively reduce noise while maintaining a degree of sharpness, we define a cost function constrained to reflect a perception-related criterion. In this paper we examine the effects of a modification of the mean squared error (MSE) based on the subjective importance of edges. We have studied both space-invariant and space-varying filters with standard edge detection operators. The static approach offers a simple, parallel implementation, while the adaptive one gives better performance in terms of sharpness and subjective quality. We present computer simulations on images of a standard data set with various noise densities and investigate the application of Ll-filters with the perception-constrained cost function. An analysis of filter robustness is also included for cases in which a test image is not available.
We present a class of nonlinear (generalized) robust order statistical filters. The entire class of filters is robust in that all filters in the class demonstrate the same impulse rejection properties as the median. Each filter in the class of filters is an optimal edge enhancing filter in that 'nonperfect' edges converge to 'perfect' edges.
The pseudomedian filter was designed to be a computationally efficient alternative to the median filter. The output of the pseudomedian filter is the average of two values obtained by maximum and minimum operations performed on selected subwindows within the filter window. Although the response of the pseudomedian filter resembles that of the median filter in many ways, there are important differences. The pseudomedian filter is more susceptible to impulse-like noise in images, but the square-shaped 2D pseudomedian filter does not round off sharp corners as much as the square-shaped 2D median filter does. The pseudomedian filter also removes high-frequency periodic elements from images. The pseudomedian filter exhibits a more 'center-weighted' response than the median filter. Thus, fine details that are completely removed by the median filter usually remain visible in pseudomedian-filtered images. Images filtered by a square-shaped 2D pseudomedian filter often have a 'blocky' appearance caused by the square shape of the filter subwindows. These properties of the pseudomedian filter indicate that it may be more appropriate than the median filter for processing images with sharp corners or fine details.
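For a 1D window, the pseudomedian can be sketched as follows: form all subwindows of length (n + 1)/2, then average the maximum of their minima with the minimum of their maxima. This is the common textbook formulation, offered as an illustration rather than the authors' exact 2D definition.

```python
def pseudomedian(window):
    """1D pseudomedian: average of the maximin and minimax over all
    subwindows of length (n + 1) // 2."""
    n = len(window)
    m = (n + 1) // 2
    subs = [window[i:i + m] for i in range(n - m + 1)]
    maximin = max(min(s) for s in subs)
    minimax = min(max(s) for s in subs)
    return (maximin + minimax) / 2
```

On a monotone ramp the pseudomedian agrees with the median, but an isolated impulse of height 100 in a zero window yields 50 rather than 0, illustrating the impulse susceptibility noted above.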
Recently, a morphological method was proposed for edge detection in which intensity edges were obtained by thresholding the difference between the image and a dilated version of the image. While this technique is promising, it is quite sensitive to noise. To improve noise immunity and robustness, we propose using stack filters to estimate the dilated and eroded versions of the image, and then threshold the difference between these two images. Comparisons between this stack-filter-based technique and some standard edge detectors are provided. For instance, we find that this approach yields results comparable to those obtained with the Canny operator for images with additive Gaussian noise, but works much better when the noise is impulsive. Extensive simulations with many different images and different types of noise were performed. Pratt's figure of merit was used as an objective measure of performance on synthetic images. Many natural scenes were also used to test the performance of this technique. The results indicate that this approach is robust with respect to changes in both the image and the noise. In other words, filters obtained by training on one image and one type of noise work well even when both the image and noise statistics vary.
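The underlying morphological edge detector can be sketched with plain dilation and erosion; the paper's contribution is to replace these with trained stack-filter estimates. The 3 x 3 flat structuring element and the threshold below are illustrative assumptions.

```python
import numpy as np

def morph_edges(img, thresh):
    """Edge map from thresholding the morphological gradient
    (dilation minus erosion) over a 3 x 3 flat structuring element."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    grad = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            nb = pad[y:y + 3, x:x + 3]
            grad[y, x] = nb.max() - nb.min()   # dilation - erosion
    return grad > thresh
```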
We introduce the LUM filter for both smoothing and sharpening. The LUM filter is a moving-window estimator that sorts the samples in the window to find the order statistics and then compares a lower order statistic, an upper order statistic, and the middle sample. The two order statistics define a range of 'normal' values. If smoothing is desired, the LUM filter outputs the middle sample when it lies between the two order statistics; otherwise, it outputs the closer of the two order statistics. If sharpening is desired, the roles are reversed: the LUM sharpener outputs the middle sample when it lies outside the two order statistics; otherwise it outputs the closer of the two order statistics. Furthermore, both behaviors can be achieved at the same time. We compare the LUM filter against common alternatives such as linear smoothers and sharpeners, moving medians, and sharpeners such as the CS filter. In summary, we believe the LUM filter is widely applicable and performs well in a wide range of applications.
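A minimal sketch of the smoothing rule: with the window sorted, clip the center sample to the range spanned by the k-th smallest and k-th largest order statistics. The parameter k controls the degree of smoothing; the example values are illustrative.

```python
def lum_smooth(window, k):
    """LUM smoother: output the center sample if it lies between the k-th
    smallest and k-th largest order statistics; otherwise output the
    nearer of those two order statistics."""
    s = sorted(window)
    center = window[len(window) // 2]
    lo, hi = s[k - 1], s[-k]
    return min(max(center, lo), hi)
```

The sharpener reverses the rule, pushing the center sample outward toward the order statistics instead of clipping it inward.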
In this paper, we analyze the noise attenuation properties of discrete morphological filters. Using the connections between morphological filters and stack filters, certain symmetry properties of morphological filters are studied, and analytical expressions for the output distributions of 1D morphological filters are derived. Utilizing these results, it is shown that, as estimators, morphological filters are biased but have very good noise attenuation properties.
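The bias claim is easy to illustrate empirically: over zero-mean noise, a flat erosion (moving minimum) has a negative expected output and a flat dilation (moving maximum) a positive one, so neither is an unbiased estimator of the signal level. The window length and noise model below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 1.0, (100_000, 3))   # zero-mean noise, window length 3
erosion = noise.min(axis=1)                  # flat 1D erosion = moving minimum
dilation = noise.max(axis=1)                 # flat 1D dilation = moving maximum
# E[min of 3 N(0,1) samples] is about -0.85 and E[max] about +0.85:
# both operators are biased, as the analysis above shows
```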
In texture and pattern analysis, the granulometric size distribution is related to morphological features. The mean of this distribution--the pattern spectrum mean (PSM)--can be used for feature analysis. In this paper, we treat the pattern spectrum as a random function and its moments as random variables. The grain model is as follows: the number of grains is held fixed for the image segment, and the grains are assumed convex. Generalization to the case of a random number of grains is desirable, but only limited results are available for this case so far. The statistical distribution of the pattern spectrum mean is studied under two alternative models, Normal and Gamma, and the asymptotic distribution of the PSM is developed under each. A special case of a random number of grains per segment is analyzed.
Characterizing highly irregular 2D pore images requires special image analysis methods. An opening operation with a small circular structuring element has been used to remove small pores and small features of large pores from an image. Through a series of openings with increasingly larger structuring elements, all pore pixels are gradually removed according to the sizes of the associated features, and a pore size distribution is obtained as a result. However, the conventional opening algorithm is very slow at this size analysis because of its iterative character. A direct-approach algorithm has been developed to map size values for all pore pixels without iterative steps. The algorithm has three major steps. First, a distance map is constructed in which each pixel has a value equal to the distance between the pixel and the nearest pore boundary. Second, local maxima on the distance map are found. Finally, for each local maximum, a circular area with a radius equal to the pixel value is scanned around the pixel, and all pixels within the circle are assigned the same value as the local maximum. One pixel may be assigned several values from different local maxima; in such a case, the largest value is chosen. The result is a size map, and the histogram of this size map represents the pore size distribution. This analysis can be applied to any binary image.
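The three steps can be sketched directly in Python. This toy version uses a brute-force Euclidean distance map (the paper does not prescribe a particular one), an 8-neighborhood local-maximum test, and disc painting with the keep-the-largest rule described above; it also paints values onto some boundary pixels inside each disc, which a production version would mask out.

```python
import numpy as np

def pore_size_map(binary):
    """Direct pore-size mapping: distance map, local maxima, disc painting."""
    h, w = binary.shape
    ys, xs = np.nonzero(~binary)                 # background (boundary) pixels
    dist = np.zeros((h, w))
    for y in range(h):                           # step 1: distance map
        for x in range(w):
            if binary[y, x]:
                dist[y, x] = np.sqrt(((ys - y) ** 2 + (xs - x) ** 2).min())
    size = np.zeros_like(dist)
    yy, xx = np.ogrid[:h, :w]
    for y in range(h):                           # steps 2 and 3
        for x in range(w):
            d = dist[y, x]
            if d == 0:
                continue
            nb = dist[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if d < nb.max():                     # not a local maximum
                continue
            disc = (yy - y) ** 2 + (xx - x) ** 2 <= d ** 2
            size[disc] = np.maximum(size[disc], d)   # keep the largest value
    return size
```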
Even in the binary case, designing optimal morphological filters involves a time-consuming search procedure that, in practice, can be intractable. The present paper provides an algorithm for filter design that is based upon the relationship between the optimal morphological filter and the conditional expectation. In effect, the algorithm proceeds by changing the conditional expectation into a morphological filter while at the same time increasing the mean-square error a minimal amount. Under many noise environments, the new algorithm is extremely efficient, thereby providing a filter design that can be used online for structuring-element updating.
In order to efficiently perform morphological binary operations with relatively large structuring elements, we propose to decompose each structuring element into squares of 2 X 2 pixels by a quadtree approach. There are two types of decomposition--the dilation decomposition and the union decomposition. The first type is very efficient, but it is not always possible. The second type is available for any structuring element, but its computation time is proportional to the area of the structuring element. The quadtree decomposition proposed here combines these two types and exists for any structuring element. When the Minkowski addition A (direct sum) B or the Minkowski subtraction A - B is computed, the number of union/intersection operations on translations of the binary image is approximately the number of leaves of the quadtree representation of the structuring element B, which is roughly proportional to the square root of the area of B. In this paper, an algorithm for quadtree decomposition is described, and experimental results of this decomposition for some structuring elements are shown.
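The dilation decomposition rests on the associativity of Minkowski addition: dilating by a composite structuring element equals dilating sequentially by its factors. A toy point-set sketch:

```python
def dilate(points, se):
    """Minkowski addition of two point sets (sets of (x, y) offsets)."""
    return {(px + sx, py + sy) for (px, py) in points for (sx, sy) in se}

sq2 = {(0, 0), (0, 1), (1, 0), (1, 1)}   # 2 X 2 structuring element
sq3 = dilate(sq2, sq2)                   # 2 X 2 (+) 2 X 2 = 3 X 3
img = {(5, 5)}
# Dilating by the composite element equals dilating sequentially by its factors
assert dilate(img, sq3) == dilate(dilate(img, sq2), sq2)
```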
This paper describes some of the most recent algorithmic techniques of mathematical morphology. The classical parallel and sequential methods mostly involve numerous scans of all the image pixels and are thus inefficient on conventional computers. To avoid this drawback, the key idea is to consider, at each step, only the pixels whose value may be modified. Two classes of algorithms rely on this principle: the first encodes the object boundaries as loops that are then propagated in the image; the second embraces the algorithms based on breadth-first image scanning enabled by queues of pixels. The algorithms belonging to these two families are extremely efficient and particularly suited to nonspecialized equipment, since they require random access to the pixels. Moreover, the 'customized' image scannings they are based on allow one to develop more accurate and flexible procedures: for example, algorithms for computing exact Euclidean distance functions have been derived from the first class. The queue-based algorithms, for their part, work on any kind of grid, in the Euclidean and geodesic cases, and extend to any dimension and even to graphs.
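A minimal example of the queue-based idea: a breadth-first city-block distance function that visits each pixel once instead of repeatedly scanning the whole image. This is a generic sketch of the principle, not one of the paper's exact algorithms.

```python
from collections import deque

def bfs_distance(img):
    """City-block distance to the background via a FIFO pixel queue."""
    h, w = len(img), len(img[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if img[y][x] == 0:              # background seeds, distance 0
                dist[y][x] = 0
                q.append((y, x))
    while q:                                # each pixel enters the queue once
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist
```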
Spatially variant versions of classic morphological operations are introduced. The probes used are locally variant on their domains. Range-invariant morphological filters are introduced, and a generalized Matheron Representation Theorem for these filters in terms of spatially variant erosions is proved. Locally variant openings and locally variant algebraic openings are introduced, and a generalized Matheron Representation Theorem for locally variant algebraic openings is proved. The theory presented is quite general and is valid for functions defined on subsets of any space X with values in any ordered commutative group G for which the concepts of least upper bound and greatest lower bound are meaningful. When G equals R, the real numbers, and X equals Rn, the theory encompasses real signal processing (n equals 1), real image processing (n equals 2), and real multivariable signal processing (n > 2). When G equals Z, the integers, and X equals Zn, the theory encompasses the digitized versions of each form of processing.
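A spatially variant flat erosion on a 1D discrete signal can be sketched as follows, with the structuring element supplied as a function of position; the signal and element shapes are illustrative.

```python
def sv_erosion(signal, se_at):
    """Spatially variant flat erosion: the structuring element (a set of
    offsets returned by se_at(i)) may differ at every position i."""
    return [min(signal[i + o] for o in se_at(i) if 0 <= i + o < len(signal))
            for i in range(len(signal))]

# Probe widens from 3 samples to 5 at position 3 and beyond (illustrative)
se = lambda i: range(-1, 2) if i < 3 else range(-2, 3)
```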
A competitive learning neural network (CLNN) has a mechanism for discovering statistically distinctive features in the input population. Competitive learning differs from classification paradigms that need a supervisor; unknown features can therefore be extracted from the visual image. However, a CLNN suffers a serious decline in learning ability when competition is lacking, because its units are not allocated to match the distribution of input vectors in the feature space. We propose learning algorithms that optimize the positions of the units and attain valid competition. These algorithms are based on structure learning and follow two ideas. The first is that many units should be allocated where input vectors concentrate in the feature space. The second is that at least one unit should exist within an appropriate distance from every input vector. We apply the proposed algorithms to a CLNN and experiment on distinguishing different binary 64 X 64 dot patterns. These experiments demonstrate the validity of the two algorithms for the CLNN.
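A minimal winner-take-all competitive learning loop, with units initialized on input samples so that each concentration of inputs keeps a unit (a simple way to sidestep the dead-unit problem the paper addresses; the paper's own allocation algorithms are more elaborate). The data, learning rate, and unit count are illustrative.

```python
import numpy as np

# Two concentrations of input vectors in a 2-D feature space
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(3.0, 0.1, (50, 2))])

# Initialize each unit on an input sample so every concentration keeps a unit
units = data[[0, 99]].copy()
lr = 0.1
for _ in range(20):                                     # training epochs
    for x in rng.permutation(data):
        w = np.argmin(((units - x) ** 2).sum(axis=1))   # competition: winner
        units[w] += lr * (x - units[w])                 # move winner toward x
```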
Extracting edges from images is a widely used first step in processing. A different view of the well-known enhancement/thresholding approach to edge detection is presented in this paper. The structure of a two-layer feed-forward neural network is comparable to the structure of enhancement/thresholding edge detectors. By training the neural network with examples of edge and nonedge patterns, it is possible to calculate an optimal edge detector for a given network structure and training set. The backpropagation learning rule is used to optimize the network. The choice of the network structure and the training set is very important, because they determine the final behavior of the network. The paper describes which network structures were selected and how the training sets were generated. Some of the experiments are described, along with observations of the convolution kernels for edge enhancement that form during training. Finally, the results are evaluated and compared with those of edge detectors based on the Sobel, Marr-Hildreth, and Canny edge enhancement algorithms. It appears that the neural network edge detector can be made very robust against noise and blur, and in most tests it outperforms the others.
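The structural analogy can be made concrete: a two-layer network whose first layer holds convolution kernels (enhancement) and whose second layer thresholds the combined responses. Below the kernels are fixed to Sobel operators for illustration; in the paper they are learned by backpropagation, and the second-layer weights and bias here are arbitrary.

```python
import numpy as np

def edge_net(patch, kernels, w2, b2):
    """Two-layer feed-forward pass: layer 1 applies convolution kernels
    (enhancement), layer 2 thresholds the combined responses."""
    h = np.tanh([float((k * patch).sum()) for k in kernels])
    return 1 if np.dot(w2, h) + b2 > 0 else 0

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sobel_y = sobel_x.T
step = np.array([[0, 0, 9]] * 3)        # patch with a vertical intensity edge
flat = np.zeros((3, 3))                 # patch with no edge
```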
In this paper multistage weighted order statistic (MWOS) filters are introduced. With the aid of threshold decomposition, it is shown that any MWOS filter in the real domain corresponds to a multistage threshold logic gate, or a multilayer perceptron in the binary domain, which can be viewed as another representation of a stack filter. The MWOS filter requires far fewer parameters to represent a stack filter than the truth table of the positive Boolean function. An adaptive filtering algorithm, named the constrained backpropagation (CBP) algorithm, is developed for finding the optimal MWOS filters under the mean absolute error (MAE) criterion. The CBP algorithm is the same as the backpropagation algorithm used in multilayer perceptrons, except that positivity of the MWOS filter parameters, which corresponds to the stacking property of stack filters, is imposed. Simulation results on image restoration are provided to compare the performance of adaptive MWOS filters and adaptive stack filters.
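For reference, a single weighted order statistic filter (the building block of an MWOS stage) can be sketched with the usual duplicate-sort-select rule; the weights and rank below are illustrative.

```python
def wos(window, weights, t):
    """Weighted order statistic: duplicate each sample according to its
    (integer) weight, sort, and take the t-th largest value."""
    expanded = [v for v, w in zip(window, weights) for _ in range(w)]
    return sorted(expanded, reverse=True)[t - 1]
```

With unit weights this reduces to an ordinary order statistic; increasing a sample's weight pulls the output toward it, which is how the filter's behavior is tuned by its parameters.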
This paper presents results of an experiment on handwritten digit recognition using a four-layered backpropagation network. The input to the network is a 105 element feature vector formed in two steps. First, morphological operations are performed on the input digit to create images of six different cavity features. Then, along with the normalized digit image, each of the cavity images is coarse-coded to produce the input vector. The network is trained on 5200 normalized, repaired digits and is tested on two other large sets. All digit samples were obtained from the United States Postal Service. The first test set, composed of 1916 digits, is used to select a decision strategy for the network which maximizes correct recognition rate while keeping the error rate under one percent. This strategy is then applied to the second test set, a true test set composed of 3568 characters, with recognition rate near 97 percent and an error rate of less than one percent. These results suggest that the use of morphologically derived features in backpropagation networks is effective for optical character recognition.
A Kalman filter for a class of 2D image state-space models is presented. Kalman filter equations are derived for a reduced version of the 2D system model, and the resulting state estimate is expressed in terms of the original 2D system. A neural network that computes the Kalman filter gain has been designed; in this way the burdensome solution of the Riccati equation is improved upon. The evaluated Kalman filter gain is used to estimate the real noisy input image, yielding a restored image.
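For orientation, the scalar analogue of the gain computation that the network replaces is the Riccati-style recursion of an ordinary Kalman filter. This 1D sketch is background only, not the paper's 2D formulation; the random-walk state model and parameter values are illustrative.

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state: the gain k comes from
    the Riccati-style covariance recursion."""
    x, p, out = x0, p0, []
    for z in zs:
        p += q                       # predict: covariance grows by process noise
        k = p / (p + r)              # Kalman gain from the Riccati update
        x += k * (z - x)             # correct with the measurement
        p *= (1 - k)
        out.append(x)
    return out
```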
A number of binary images derived from a segmentation process can be represented with a matrix indexing, and morphological operators can be denoted similarly. A formalism is described in which erosions and dilations operate on matrices of images. Matrix erosions and dilations can be extended to binary order statistic filters and hit-and-miss transforms. It will be shown that an image transformed by a sequence of hit-and-miss and order statistic operations, specified by matrices of structuring elements, is isomorphic to a multiple-layer translation-invariant neural network that has excitatory and inhibitory connections with unit weights. Network connections are equivalent to points in a structuring element. A form of supervised Hebbian learning can be applied to these morphological networks as follows. All weights are initially zero. A number of training cycles are performed by showing many images to the system. The weight of the connection that would provide the largest number of correct responses over the training set is changed from zero to +1, or the connection that would lead to the largest number of incorrect responses is changed from zero to -1 to provide an inhibitory connection. The best connection is determined and added to the network, and the cycles are repeated to build up enough connections to give low error rates.
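The hit-and-miss transform at the core of these networks can be sketched for a 3 x 3 neighborhood as follows: fire where every 'hit' pixel is foreground and every 'miss' pixel is background, mirroring excitatory (+1) and inhibitory (-1) unit-weight connections. The neighborhood size and the isolated-point template in the usage below are illustrative.

```python
import numpy as np

def hit_or_miss(img, hit, miss):
    """Binary hit-and-miss over 3 x 3 neighborhoods: the output fires where
    all 'hit' positions are 1 (excitatory) and all 'miss' positions are 0
    (inhibitory)."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = int(np.all(nb[hit == 1] == 1) and
                            np.all(nb[miss == 1] == 0))
    return out
```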
The application of the gradient search algorithm for adaptive determination of the optimal L-filter coefficients is discussed. Specifically, the case of symmetric noise densities is investigated with respect to the convergence properties of the LMS algorithm. Analytical results are presented and supported by simulation experiments for appropriate signal models.
A nonlinear regression is a signal that has a specified property (which may be different from linearity) and that optimally approximates a given signal. Such properties are given in the domain of the signal (e.g. time, space) and are called shape constraints. The optimality of the approximation is measured with a semimetric defined on the space of signals under consideration. Finite-length discrete signals are well modeled as points in n-dimensional real space Rn. Thus, for example, a linear regression of a signal is a signal, in the subspace of linear signals, that is closest (usually under the Euclidean metric) to the given signal. Four shape constraints are considered in the paper: piecewise constancy, local monotonicity, piecewise linearity, and local convexity/concavity. They are constraints of smoothness, and in this respect local convexity/concavity has the advantage over local monotonicity that a low-frequency sine wave may be locally convex/concave but not locally monotonic. 2D signals defined on quadrille tessellations and on hexagonal tessellations are considered briefly; local monotonicity of degree 3 is defined for 2D signals. A technique for obtaining locally monotonic approximations of 2D signals is presented.
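The local monotonicity constraint is easy to state in code: a 1D signal is locally monotonic of degree d when every length-d window is nondecreasing or nonincreasing. A small checker (illustrative):

```python
def is_locally_monotonic(x, d):
    """True if every window of length d in x is monotonic
    (nondecreasing or nonincreasing)."""
    for i in range(len(x) - d + 1):
        w = x[i:i + d]
        up = all(a <= b for a, b in zip(w, w[1:]))
        down = all(a >= b for a, b in zip(w, w[1:]))
        if not (up or down):
            return False
    return True
```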
An algorithm is presented for speckle reduction in time-varying images. The algorithm operates in the time domain and is based on a subdivision of the time evolution of each pixel in the image into homogeneous zones. The algorithm operates on a finite time window; the intensity evolution in this window is processed individually for each pixel in the image. The restored time evolution is defined as a piecewise linear function with a predefined maximal number of linear segments, which is a parameter of the algorithm. Dynamic programming is applied to obtain the piecewise linear function that minimizes the mean square error. For more than one segment, the algorithm finds the optimal way to subdivide the time evolution into homogeneous zones. Ordinary linear regression is used within each homogeneous time segment. Several alternatives for temporal filtering based on discontinuity detection are discussed. The proposed algorithm reduces nonstationary speckle and increases the image contrast.
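The dynamic program can be sketched as follows: precompute the least-squares line-fit error of every time segment, then combine segments optimally. This O(n^3) toy version returns only the minimal total squared error for exactly k segments; the paper's algorithm additionally recovers the segmentation itself and runs over a sliding time window.

```python
import numpy as np

def seg_cost(y, i, j):
    """Squared error of the best least-squares line over y[i:j+1]."""
    x = np.arange(i, j + 1)
    yy = y[i:j + 1]
    if len(yy) < 2:
        return 0.0
    a, b = np.polyfit(x, yy, 1)
    return float(((a * x + b - yy) ** 2).sum())

def piecewise_linear(y, k):
    """DP over segment boundaries: minimal total squared error of a
    piecewise linear fit with exactly k segments."""
    n = len(y)
    cost = [[seg_cost(y, i, j) for j in range(n)] for i in range(n)]
    best = np.full((k + 1, n), np.inf)
    best[1] = [cost[0][j] for j in range(n)]
    for s in range(2, k + 1):
        for j in range(n):
            for t in range(1, j + 1):   # t = start of the last segment
                best[s][j] = min(best[s][j], best[s - 1][t - 1] + cost[t][j])
    return best[k][n - 1]
```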
A remote machine vision system is presented which addresses three critical aspects of surveillance and of vision in general. First, it must deal successfully with changing weather conditions and fast events in real time. Second, the false alarm rate must be very low, since the system may operate 24 hours a day all year round. Third, it must send visual information to the head of security immediately, wherever he may be. This visual information consists of the track the intruder left and its silhouette, which allows the official to distinguish between human and nonhuman intruders. The key to this architecture is an arithmetical subtraction performed pixel by pixel over the whole image: a difference between a clean reference image and the one currently being received. Other steps of the process are multiple thresholding and low-pass filtering. Filtering and dynamic-range splitting are the areas in which we have worked using digital hardware techniques. Very consistent results were obtained by adaptive mean filtering and 3-class splitting, respectively, and considerable progress is being made in developing an adaptive n-class splitting. Special attention has been given to the imaging hardware, which is able to control, acquire, digitize, filter, compare, and add images and to transmit them over a telephone line with appropriate alarms and display.
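The core pixel-by-pixel subtraction step can be sketched in a couple of lines; casting to a signed type before subtracting avoids unsigned wraparound. The single threshold here is an illustrative stand-in for the system's multiple-threshold stage.

```python
import numpy as np

def detect_change(reference, frame, threshold=30):
    """Pixel-by-pixel subtraction of the current frame from a clean
    reference image, then thresholding into a binary change mask."""
    diff = np.abs(frame.astype(int) - reference.astype(int))  # no uint8 wrap
    return diff > threshold
```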
This paper introduces a near-real-time method of image processing in a PC-based environment. A segmentation technique based on unsupervised classification is implemented. A prototype for the detection of ice formation on the external tank (ET) of the Space Shuttle is being developed for the NASA Science and Technology Laboratory by Lockheed Engineering and Sciences Company at Stennis Space Center, MS. The objective is to perform an online classification of the ET images into distinct regions denoting ice, frost, wet, or dry areas. The images are acquired with an infrared camera and digitized before being processed by a computer to yield a false color-coded pattern, with each color representing a region. A two-monitor PC-based setup is used for image processing. Various techniques for classification, both supervised and unsupervised, are being investigated for developing a methodology. This paper discusses the implementation of two adaptive algorithms for image segmentation. The K-means algorithm is compared to another algorithm based on adaptive estimation of region boundaries.
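A minimal sketch of the K-means stage, assuming intensity-only features and a deterministic quantile initialization (both are illustrative choices, not the prototype's implementation):

```python
import numpy as np

def kmeans_segment(image, k, iters=20):
    """Unsupervised K-means on pixel intensities: pixels are assigned to the
    nearest of k class means, and the means are re-estimated until stable.
    The label image can then be false-color coded, one color per region."""
    pix = image.reshape(-1, 1).astype(float)
    # deterministic spread of initial centers across the intensity range
    centers = np.quantile(pix, np.linspace(0.0, 1.0, k)).reshape(k, 1)
    labels = np.zeros(len(pix), dtype=int)
    for _ in range(iters):
        d = np.abs(pix - centers.T)             # (n_pixels, k) distances
        labels = d.argmin(axis=1)
        new = np.array([pix[labels == j].mean() if np.any(labels == j)
                        else centers[j, 0] for j in range(k)]).reshape(k, 1)
        if np.allclose(new, centers):
            break
        centers = new
    return labels.reshape(image.shape), centers.ravel()
```

For the ET application, k = 4 would correspond to the ice, frost, wet, and dry classes; the mapping from cluster to physical class would still have to be established by calibration.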
Most radar systems determine the presence of a target via the comparison of the appropriate return (actually the output of a square-law device) to a threshold. In order to achieve CFAR (constant false-alarm rate) operation, this threshold must incorporate some estimate of the ambient noise power level, usually derived from measurements over a reference window of neighboring cells (in range, bearing, and/or Doppler). Under the assumption that the reference cells are statistically homogeneous, cell-averaging (CA)-CFAR, which uses the empirical mean of the cells in the reference window as the noise power estimate, is optimal. However, the reference cells may contain interfering targets and/or clutter edges, and in such situations CA-CFAR performs poorly. Several alternative schemes have been proposed, but none appears to work well uniformly over the broad class of possible reference window nonhomogeneities. In this paper we investigate the use of the Ll-filter, an MSE-optimized amalgamation of ordered-statistic (L) and linear (l) filters, to form an estimate of the noise power. Our results show that Ll-CFAR, while in some situations suboptimal, appears to be robust to reference window nonuniformities.
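For reference, the baseline cell-averaging detector might look as follows. The threshold factor is the standard CA-CFAR result for a square-law detector in exponentially distributed noise; the function name and guard-cell handling are illustrative assumptions, and the paper's Ll-filter would replace the plain mean in the noise estimate:

```python
import numpy as np

def ca_cfar(power, num_ref, num_guard, pfa):
    """Cell-averaging CFAR: for each cell under test, estimate the noise
    power as the mean of the reference cells flanking it (skipping guard
    cells), and compare against a threshold scaled for the desired Pfa."""
    n = 2 * num_ref                          # total reference cells
    alpha = n * (pfa ** (-1.0 / n) - 1.0)    # square-law / exponential noise
    detections = np.zeros(len(power), dtype=bool)
    half = num_ref + num_guard
    for i in range(half, len(power) - half):
        lead = power[i - half : i - num_guard]
        lag = power[i + num_guard + 1 : i + half + 1]
        noise = (lead.sum() + lag.sum()) / n  # empirical-mean noise estimate
        detections[i] = power[i] > alpha * noise
    return detections
```

An Ll-CFAR variant would keep the same thresholding structure but form `noise` as an MSE-optimized weighted combination of the ordered reference samples, which is what buys robustness to interferers and clutter edges.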
Necessary and sufficient conditions for a binary signal to be invariant to median filtering are derived. It is proved that one pass of a recursive median filter over a binary signal results in a new signal that is invariant to further median filtering. A technique is described whereby threshold decomposition theory and the stacking property can be used to generalize these results so that they apply to arbitrary k-valued signals.
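The root-signal property can be checked numerically with a minimal sketch (window size three, end samples left untouched; the boundary handling is an assumption here, not taken from the paper):

```python
def recursive_median(x, half_window=1):
    """One pass of a recursive median filter: each output is computed from
    the already-filtered samples to its left. On a binary signal the result
    is a root signal, i.e. invariant to further median filtering."""
    y = list(x)
    k = half_window
    for i in range(k, len(y) - k):
        window = y[i - k : i + k + 1]   # left part is already filtered
        y[i] = sorted(window)[k]
    return y

def median_filter(x, half_window=1):
    """Standard (non-recursive) median filter, ends left untouched."""
    y = list(x)
    k = half_window
    for i in range(k, len(x) - k):
        y[i] = sorted(x[i - k : i + k + 1])[k]
    return y
```

Via threshold decomposition and the stacking property, the same check extends to k-valued signals: filter each binary threshold level and sum the results.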
This paper presents an application of marker-controlled segmentation in petroleum engineering. The images to be segmented originate from high resolution conductivity measurements of borehole walls. These measurements reflect the composition and structure of the rock formation through which the well was drilled. In this application, we detect and measure small cavities in the walls. These cavities are called vugs. We use the tools provided by mathematical morphology. Our strategy is based on gradient image modification using markers and on the watershed transformation. First, the vugs are automatically marked, as well as the background. These markers together delineate areas of interest in which we know there is one contour per vug. In order to find the vug contour and perform measurements, we modify the gradient image in such a way that only a single edge is kept between the vug and the background markers. We perform the final step of edge detection using the watershed transformation of the modified gradient image. The final result is one closed contour per marked vug. We present this strategy in detail, show experimental results and discuss artifact elimination.
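A minimal flooding-style watershed with markers can be sketched as follows; this toy priority-queue version with 4-connectivity stands in for the morphological tools used in the paper and is not their implementation:

```python
import heapq
import numpy as np

def marker_watershed(gradient, markers):
    """Minimal watershed-by-flooding: labeled marker regions grow outward
    over the gradient image in order of increasing gradient value, so the
    fronts meet along crest lines -- the single kept edge between the vug
    marker and the background marker."""
    labels = markers.copy()
    h, w = gradient.shape
    heap = []
    for r in range(h):                       # seed the queue with markers
        for c in range(w):
            if labels[r, c]:
                heapq.heappush(heap, (gradient[r, c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0:
                labels[rr, cc] = labels[r, c]  # propagate the nearest label
                heapq.heappush(heap, (gradient[rr, cc], rr, cc))
    return labels
```

Because every pixel inherits a marker label, the boundary between the two labels is a closed contour around each marked vug, which is the property the measurement step relies on.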
In 1990 it was discovered that the traditional FCC (face-centered-cubic) coordinates used in 3D logical x-filtering may be mapped into IJK-coordinates in many different ways. Before this discovery, one of the hexagonal planes of the FCC tessellation had been used as a base plane, with the hexagon distorted into a Northeast-Southwest parallelogram. Three other parallelograms for this pseudo-hexagonal representation have now been discovered, and all four of these mappings are now called 'pseudo-hexagonal.' Furthermore, a pseudo-cubic tessellation, using the cubic planes as the base planes, has been invented. The frequency response of x-filters in both the pseudo-hexagonal and pseudo-cubic mappings has now been investigated for ranks three, four, and five and for one to twelve iterations. Another related investigation reported here concerns the interpolation between 3D serial sections, where the resolution within a section is more finely grained than that from section to section. Results of this investigation are presented for both the pseudo-hexagonal and pseudo-cubic mappings as related to the reconstruction of various organs in the human abdomen from serial-section tomography.
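The between-section interpolation can be sketched as simple linear blending along the section axis; the integer refinement factor and function name are assumptions, and the paper's tessellation-aware schemes would replace this plain linear blend:

```python
import numpy as np

def interpolate_sections(stack, factor):
    """Linear interpolation between consecutive serial sections, inserting
    factor - 1 intermediate slices so the section-to-section spacing better
    matches the finer in-plane resolution."""
    out = []
    for a, b in zip(stack[:-1], stack[1:]):
        for s in range(factor):
            t = s / factor
            out.append((1 - t) * a + t * b)   # blend adjacent sections
    out.append(stack[-1].copy())              # keep the final section
    return np.stack(out)
```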