The practical design and implementation of statistically optimal nonlinear filters must take into account estimation precision and robustness as factors contributing to the final filter performance. In practice, therefore, the choice of a filter becomes a tradeoff between optimal filter performance and the precision or robustness of the designed filter. Moreover, computational limits, and especially storage-space limits, sometimes contribute significantly to the choice of filter. The present paper analyzes the connections between estimation precision, robustness, and storage space. These connections are used to derive a consistent approach for selecting the best filters, in which a filter is considered 'better' than another only if it outperforms the other filter under all possible circumstances. Examples and applications are presented throughout the paper.
In this paper, we investigate the use of Bayesian inference for some detection and classification problems of great importance in sonar imagery. More precisely, this paper is concerned with the segmentation of sonar images, the classification of objects lying on the sea bottom, and the classification of the sea floor. These classification tasks are based on the identification of the detected cast shadows, which correspond to a lack of acoustic reverberation behind the different natural or man-made objects lying on the sea floor. The adopted Bayesian approach makes it possible to model efficiently all the available prior information for each detection and classification task of concern, yielding a cost-minimization problem. To this end, we associate with each Bayesian statistical model a specific optimization strategy well suited to the global energy function to be minimized. These segmentation and classification schemes can be used separately for a specific application or can be combined into an original Bayesian processing chain for the automatic classification of objects lying on the sea floor. The efficiency and robustness of this unsupervised processing chain have been tested and demonstrated on a great number of real and synthetic sonar images.
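As a minimal illustration of the Bayesian cost-minimization idea, the sketch below applies a per-pixel MAP rule to a hypothetical two-class shadow/reverberation model; the class priors, means, and standard deviations are illustrative assumptions, not values from the paper.

```python
import math

def map_classify(x, classes):
    """Assign observation x to the class maximizing prior * Gaussian
    likelihood (the MAP rule).

    classes: dict name -> (prior, mean, std). The shadow/reverberation
    parameters below are illustrative assumptions only.
    """
    def score(prior, mean, std):
        return prior * math.exp(-0.5 * ((x - mean) / std) ** 2) / std
    return max(classes, key=lambda c: score(*classes[c]))

# Dark pixels are likely cast shadow; bright ones, sea-floor reverberation.
model = {"shadow": (0.3, 20.0, 10.0), "reverberation": (0.7, 120.0, 30.0)}
```

A full segmentation would apply such a decision jointly over all pixels with a spatial prior, which is where the global energy minimization comes in.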
In this paper, the problem of robust estimation of polynomial regression parameters is considered, with application to image processing. The polynomial regression model states that the intensity function of an image can be represented as a polynomial function of a given order within a sample window, plus independent noise that is assumed to be Gaussian distributed with a small fraction of outliers. The developed procedure for robust estimation of the polynomial regression parameters is based on computing partial optimal estimates using the least-squares method, exploiting the fact that the majority of the regression residuals have a Gaussian distribution. The final estimate is selected by the principle of maximum a posteriori probability. In direct form, the proposed technique is computationally expensive. Since the regression parameters can be represented as a linear combination of local moments, however, the computational complexity can be reduced by a factor of O(N), where N is the size of the subsamples used, because local moments can be calculated recursively. The estimated regression parameters can be used for robust estimation of image and background intensity and noise variance, as well as for adaptive image filtering and segmentation.
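The recursive computation of local moments that underlies the complexity reduction can be sketched in one dimension: each window update adds the entering sample and drops the leaving one, so the cost per position is constant rather than proportional to the window size. The helper name `sliding_moments` is an assumption for illustration.

```python
def sliding_moments(x, n):
    """Zeroth- and second-power local moments (sum and sum of squares)
    over every length-n window, updated recursively in O(1) per step."""
    s1 = sum(x[:n])                        # running sum
    s2 = sum(v * v for v in x[:n])         # running sum of squares
    out = [(s1, s2)]
    for i in range(n, len(x)):
        s1 += x[i] - x[i - n]              # add entering, drop leaving sample
        s2 += x[i] * x[i] - x[i - n] * x[i - n]
        out.append((s1, s2))
    return out

x = [3, 1, 4, 1, 5, 9, 2, 6]
moments = sliding_moments(x, 3)
```

In 2-D the same idea applies row- and column-wise, and the regression parameters are then linear combinations of such moments.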
The unconstrained design of optimal digital window-based filters from sample signals is very difficult because of the inability to obtain enough data to make good estimates of the filter parameters. This paper studies the estimation problem by windowing in the range as well as in the domain. At each point, the signal is viewed through an aperture, which is the product of a domain window and a gray-scale range window. Signal values outside the aperture are projected to the limit values inside the aperture. Experiments show that aperture filters can outperform linear filters for deblurring, especially in the restoration of edges. A sampling of the many experiments carried out to study the effects of aperture filters on deblurring is provided.
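The projection onto the range window can be sketched as a simple clipping operation around the center value (a minimal 1-D illustration; the range half-width `k` and the sample values are illustrative).

```python
def aperture(values, center, k):
    """Project domain-window samples onto the gray-scale range window
    [center - k, center + k]; values outside the aperture are clipped
    to its limits, shrinking the space of observable configurations."""
    lo, hi = center - k, center + k
    return [min(max(v, lo), hi) for v in values]

clipped = aperture([0, 5, 10, 20], center=10, k=5)
```

Restricting observations to the aperture is what makes estimation of the filter from limited data feasible.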
Gray-scale morphological operators are commonly employed in image processing applications such as texture analysis, target detection, and image enhancement. One major problem with these operators is their sensitivity to noise. Another issue is finding the right structuring element for a given process. This paper describes Choquet integral-based morphological operators. These operators do not necessarily use max and min, and are therefore less sensitive to noise. In this paper, we also introduce a technique to find an optimal gray-scale structuring element. These developments yield applications including, but not limited to, target detection and multi-layer filtering.
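A minimal sketch of the discrete Choquet integral that such operators build on, showing how the choice of fuzzy measure recovers a weighted mean or a min (erosion-like) behavior; the measures below are illustrative, not the optimized ones discussed in the paper.

```python
from itertools import combinations

def choquet(values, measure):
    """Discrete Choquet integral of `values` w.r.t. the set function
    `measure` (frozenset of indices -> [0, 1], empty set -> 0,
    full set -> 1), via the standard ordered-sum formula."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])    # ascending values
    total = 0.0
    for r in range(n):
        a_r = frozenset(order[r:])       # indices of the n-r largest values
        a_next = frozenset(order[r + 1:])
        total += values[order[r]] * (measure[a_r] - measure[a_next])
    return total

vals = [3.0, 1.0, 2.0]
w = [0.2, 0.5, 0.3]
additive = {frozenset(s): sum(w[i] for i in s)
            for k in range(4) for s in combinations(range(3), k)}
min_like = {frozenset(s): float(len(s) == 3)
            for k in range(4) for s in combinations(range(3), k)}
weighted_mean = choquet(vals, additive)   # additive measure -> weighted mean
erosion_like = choquet(vals, min_like)    # "all-or-nothing" measure -> min
```

Intermediate measures interpolate between these extremes, which is what softens the max/min operations and reduces noise sensitivity.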
One well-studied image processing task is the removal of impulse noise from images. Impulse noise can be introduced during image capture, transmission, or storage. The signal-dependent rank-order mean (SD-ROM) filter has been shown to be effective at removing impulses from 2-D scalar-valued signals. Excellent results have been presented for both a two-state and a multi-state version of the filter. The two-state SD-ROM filter relies on the selection of a set of threshold values. In this paper, we examine the performance of the algorithm with respect to the thresholds. We take three different approaches. First, we discuss the performance of the algorithm with respect to its root signals. Second, we present a probabilistic model for the SD-ROM filter. This model characterizes the performance of the algorithm in terms of the probability of detecting a corrupted pixel while avoiding uncorrupted pixels. Finally, we apply the insight gained from the root-signal analysis and the statistical model to thresholds optimized by a computerized search algorithm over a large number of images and noise conditions.
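A simplified, non-recursive sketch of the two-state rank-ordered-difference test (1-D, 5-sample window with the center excluded); the threshold values are illustrative placeholders, not the optimized ones examined in the paper.

```python
def sdrom_1d(x, thresholds=(8, 20)):
    """Simplified non-recursive two-state SD-ROM sketch for 1-D signals.
    An impulse is declared when any rank-ordered difference exceeds its
    threshold; the sample is then replaced by the rank-ordered mean."""
    y = list(x)
    for i in range(2, len(x) - 2):
        w = sorted(x[i - 2:i] + x[i + 1:i + 3])   # 4 ranked neighbors
        rom = (w[1] + w[2]) / 2                   # rank-ordered mean
        # rank-ordered differences toward the nearer tail of the ranking
        if x[i] <= rom:
            d = [w[k] - x[i] for k in range(2)]
        else:
            d = [x[i] - w[3 - k] for k in range(2)]
        if any(d[k] > thresholds[k] for k in range(2)):
            y[i] = rom                            # impulse: replace by ROM
    return y

noisy = [10, 10, 10, 200, 10, 10, 10]
ramp = [10, 12, 14, 16, 18, 20, 22]
```

The ramp passes unchanged (it is a root-like signal for these thresholds), while the isolated impulse is replaced; shifting the thresholds moves the balance between detection and false alarms, which is what the paper's analysis quantifies.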
We discuss the simplified implementation of recursive median (RM) filters. It is shown that every RM filter has an alternative implementation, which implies a fast algorithm (O(1) per pixel on average) for the one-dimensional RM filter. We also consider the case when RM filters are applied in a cascade of increasing filter window lengths, that is, the RM sieve. We show that the RM sieve can be implemented in constant time per scale by applying only 3-point median operations. Both of the above-mentioned fast implementations are viewed in a new light by constructing the corresponding finite state machines (FSMs) and observing the achievable state reduction. A radical reduction in complexity is achieved by applying standard state-reduction techniques. FSM models also open new possibilities for the analysis of these systems. Finally, we discuss the benefits of using the RM sieve instead of the RM filter. We consider the streaking problem of the RM filter. It is demonstrated that the RM filter is not in itself a reliable estimator of location. As the cascading element in the structure of the sieve, however, it is very useful. It turns out that the use of the RM sieve reduces the streaking problem to a manageable level.
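The constant-work-per-sample idea can be illustrated with a 3-point recursive median, where one input of each median is the previous output (a minimal sketch; the FSM-based implementation discussed above is more general).

```python
def recursive_median3(x):
    """Three-point recursive median: y[n] = med(y[n-1], x[n], x[n+1]).
    Feeding the previous output back in means each step costs a constant
    amount of work, matching the O(1)-per-pixel claim."""
    y = [x[0]]
    for n in range(1, len(x) - 1):
        y.append(sorted([y[-1], x[n], x[n + 1]])[1])
    y.append(x[-1])
    return y
```

A sieve would cascade such stages with increasing window lengths; impulses are removed while step edges survive.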
A new algorithm for removing mixed noise from images, based on combining an impulse-removal operation with local adaptive filtering in the transform domain, is proposed in this paper. The key point is that the operation is designed so that it removes impulses while preserving as much as possible of the frequency content of the original image. The second stage is an adaptive de-noising operation based on a local transform. The proposed algorithm works well in de-noising images corrupted by white (Gaussian, Laplacian, or exponential) noise, impulsive noise, and their mixtures. Comparison of the new algorithm with known techniques for removing mixed noise from images shows the advantages of the new approach, both quantitatively and visually.
A neural method for gray-value segmentation is now applied to texture segmentation. The parallel-sequential algorithm is based on recursive nonlinear feature smoothing in a 4-neighborhood. The smoothed feature values can then be segmented using an adaptive adjacency criterion which defines a special graph structure, called the Feature Similarity Graph. The segments are the connected components of that graph. The combination of results from the different image features is done in a hierarchical process starting, as in the human visual system, with gray-value segmentation. Besides segments with homogeneous gray values, this process also provides texture elements, which are the basis for the calculation of new image features. First, the modulus of the gray-value gradient is used as a new feature of the original image. The subsequent segmentation based on that feature provides regions which are homogeneous with respect to the mean gray-value gradient. Furthermore, texel directions are calculated. That feature contains information on the texture orientation of textured image regions. With these features, the same neural segmentation method is able to separate not only regions with different mean gray values but also those with different textures.
Nonlinear cellular neural filters (NCNF) are based on the nonlinearity of the activation functions of the universal binary neuron (UBN) and the multi-valued neuron (MVN). NCNF, which include the multi-valued nonlinear filters (MVF) and cellular Boolean filters (CBF), and their applications are presented in detail in this paper. The following problems are considered: (1) NCNF in general as a class of nonlinear filters that includes multi-valued and cellular Boolean filters based on similar complex nonlinearities; (2) multi-valued filters as a nonlinear generalization of the simple low-pass and mean filters; (3) the connection of multi-valued filters with other nonlinear filters; (4) cellular Boolean filters; (5) application of the NCNF to noise reduction; (6) application of the NCNF to the extraction of image details; (7) application of the NCNF to precise edge detection, edge detection by narrow direction, and image segmentation.
We previously presented an enlargement neural network (NN) for edge-preserving image interpolation, based on the nonlinear procedure presented by Greenspan. In this paper, we present a novel enlargement multi-neural network (MNN), which extends the enlargement NN. Simulation results show the superior performance of the proposed approach with respect to other interpolation techniques such as the enlargement NN and Greenspan's method.
In this paper we propose a method for creating rules for image classification using fuzzy expert systems. The method consists of analyzing the clusters formed by applying a biased clustering algorithm to the image pixels. Biased clustering algorithms are partially supervised classification algorithms which allow the use of imprecise, incomplete, or conflicting expectancies of assignment of data points to classes and which, through iterative clustering, attempt to resolve the conflicts and incompleteness and obtain labeled clusters. The resulting clusters can be used to create new rules or membership functions, which can lead to more and/or better rules for classifying the data using a fuzzy expert system. The new rules and membership functions can also be compared with the ones used to create the original expectancies of assignment for validation. Examples of applying the proposed method to synthetic and image data are presented. The classification results are evaluated and compared; conclusions on the problems, advantages, and overall features of the proposed method are presented, and future work directions are considered.
Rank-order based filters have received considerable attention due to their inherent outlier rejection and detail preservation properties. One important class of rank-order based filters is the class of stack filters. There are two different approaches to the design of stack filters. One might be called the structural approach, while the other might be called the estimation approach. We have previously proposed fuzzy stack filters in order to extend the class of stack filters. Fuzzy stack filters are very important for signal processing, because these filters include FIR filters and weighted median filters. In this paper, we develop an approach for finding optimal fuzzy stack filters under structural constraints. The noise attenuation of fuzzy stack filters is studied. Finally, we apply the proposed fuzzy stack filters to an image restoration problem.
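The stacking property that defines stack filters can be illustrated by computing a median through threshold decomposition (a minimal sketch; fuzzy stack filters generalize the per-level binary function used here).

```python
def binary_median5(b):
    """Binary median over a 5-sample window: 1 iff at least 3 ones.
    This is a positive Boolean function, so it obeys the stacking
    constraint required of stack filters."""
    return int(sum(b) >= 3)

def stack_filter_median(window, max_val):
    """Median via threshold decomposition: threshold the window at every
    gray level, filter each binary slice, and sum (stack) the results."""
    return sum(binary_median5([int(v >= t) for v in window])
               for t in range(1, max_val + 1))

window = [3, 7, 2, 9, 4]
```

Because the binary slices stack consistently, the sum of the filtered slices equals the gray-scale median, and replacing the Boolean function changes the filter within the stack-filter class.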
In this paper, we propose a GA (genetic algorithm) image restoration method in which PSF (point spread function) parameters serve as genes. The proposed technique restores an original image from an observed image that has been partially blurred by defocusing of a lens system. Although the degradation model of the image is of Gaussian type, the value of its spread parameter, which is assumed to be space-variant, is not exactly known at each location. For this case, an evaluation function based on regularization theory is proposed, and simulation experiments were performed, demonstrating the validity of this approach.
Methods for determining the fractal dimensions of image objects have gained increasing importance in recent developments in image processing. The fractal dimension has proven its usefulness mainly for the classification of the shapes and textures of natural objects.
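A minimal box-counting sketch of how a fractal dimension can be estimated from a point set; the scales and the test set (a straight line segment, whose dimension should be close to 1) are illustrative.

```python
import math

def box_count_dimension(points, scales):
    """Box-counting dimension: the slope of log N(s) versus log(1/s),
    fitted by least squares, where N(s) is the number of boxes of side s
    that the point set touches."""
    xs, ys = [], []
    for s in scales:
        boxes = {(int(px / s), int(py / s)) for px, py in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

line = [(i / 1000.0, i / 1000.0) for i in range(1000)]
dim = box_count_dimension(line, [0.1, 0.05, 0.025, 0.0125])
```

For natural shapes and textures the fitted slope falls between the topological and embedding dimensions, and that fractional value is the classification feature.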
Morphology is one of the most active research areas in nonlinear image processing and has many applications. The basic reasons for this are its good noise attenuation and edge preservation performance and its quantitative description of geometrical structure, all while maintaining low computational complexity. Over the past twenty years, many efforts have been devoted in psychophysics and physiology to analyzing the response of the human visual system, which can perfectly detect and recognize image information. Because the human visual system is an excellent image processor, it is natural to bridge the gap between the human visual system and morphology. The goal of this article is to introduce a new concept, Visual Morphology, which shares many of the important characteristics of both the human visual system and morphology. This technique has been employed on NASA's Earth Observing System satellite data for the purpose of anomaly detection and visualization. It has also shown beneficial effects when applied to SAR images, helping to improve the detection of small targets. Both theoretical analysis and simulation results show that the proposed concept has a great future.
A basic paradigm in Mathematical Morphology is the construction of set operators by concatenations of dilations and erosions via the operations of composition, union, intersection, and complementation. Since its introduction in the sixties by Matheron and Serra, this paradigm has been applied in image analysis for designing set operators, which were called morphological operators. Classically, morphological operators are constructed based on the experience and intuition of human beings. Recently, an approach for the automatic design of morphological operators, based on statistical optimization from the observation of collections of image pairs, was proposed. Both approaches have drawbacks: the first is usually slow and depends on an expert in Mathematical Morphology, while the second requires large amounts of observed data. This paper proposes a symbiosis between the human and the statistical design approaches. The idea is that the design procedure be composed of simplified forms of both, thus avoiding the difficulties that arise when applying each one independently.
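The dilation/erosion concatenation paradigm can be sketched in one dimension for binary signals (a minimal illustration; `se` lists structuring-element offsets, and the opening is the classic erosion-then-dilation composition).

```python
def dilate(x, se):
    """Binary dilation of sequence x by structuring-element offsets se
    (out-of-range positions are treated as background)."""
    return [int(any(0 <= i + k < len(x) and x[i + k] for k in se))
            for i in range(len(x))]

def erode(x, se):
    """Binary erosion: every offset must hit foreground."""
    return [int(all(0 <= i + k < len(x) and x[i + k] for k in se))
            for i in range(len(x))]

def opening(x, se):
    """Erosion followed by dilation (with the reflected element):
    removes foreground runs narrower than the structuring element."""
    return dilate(erode(x, se), [-k for k in se])
```

Designing an operator, by hand or by statistical optimization, amounts to choosing such elements and the way the elementary operators are combined.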
The images formed by coherent imaging systems are characterized by the presence of multiplicative noise with non-symmetrical p.d.f.s; examples are the Rayleigh and one-sided (negative) exponential distributions. For these cases, optimal L-filters are derived for different coefficient censorings by minimizing the MSE of residual fluctuations. Some sub-optimal L-filters are considered as well: the Lpq-filters, which use only two order statistics, and the trimmed filters with symmetrical and non-symmetrical coefficient censoring. Those filters are parametrically optimized according to the same criterion. The robustness of the considered filters is analyzed both theoretically, using empirical influence functions, and numerically, with the application of a contamination noise model. As the contaminating factor, we used salt-and-pepper noise with different probabilities and amplitudes of positive and negative spikes. The bias and variance of the output estimates were calculated and examined. It is shown that the use of sub-optimal filters is well motivated from several points of view.
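A minimal sketch of an L-filter and of the two-order-statistic Lpq variant; the coefficient values and ranks below are illustrative, not the MSE-optimal ones derived in the paper.

```python
def l_filter(window, coeffs):
    """L-filter: inner product of the sorted window (the order
    statistics) with a coefficient vector. Censoring corresponds to
    zero coefficients at the trimmed ranks."""
    return sum(a * v for a, v in zip(coeffs, sorted(window)))

def lpq_filter(window, p, q, alpha=0.5):
    """Lpq-filter: uses only the two order statistics of ranks p and q,
    mixed by a single parameter alpha."""
    r = sorted(window)
    return alpha * r[p] + (1 - alpha) * r[q]
```

With coefficients (0, 1, 0) the L-filter reduces to the median; the optimal coefficients depend on the noise p.d.f. (e.g. Rayleigh versus exponential) and on the chosen censoring.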
In this paper, a new efficient algorithm to design increasing binary image window operators (or filters), based on a method called switching, is proposed. Switching in this context refers to a method that sequentially exchanges (switches) the value of a given operator at some points in order to generate another operator satisfying some algebraic properties. Here we study switchings on the optimal operator to generate an optimal increasing operator. The proposed method reformulates the original switching problem as a partition problem and gives a greedy algorithm to solve it.
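The increasingness property that switching must enforce can be checked exhaustively for small windows (a sketch; the truth-table representation and the helper name `is_increasing` are assumptions for illustration).

```python
from itertools import product

def is_increasing(op, n):
    """Check that a binary window operator, given as a truth table over
    n-bit windows, is increasing: x <= y componentwise implies
    op(x) <= op(y). Returns (ok, violating_pair)."""
    wins = list(product((0, 1), repeat=n))
    for x in wins:
        for y in wins:
            if all(a <= b for a, b in zip(x, y)) and op[x] > op[y]:
                return False, (x, y)   # a candidate pair for switching
    return True, None

# The 3-point binary median is increasing; switching one value breaks it.
med = {w: int(sum(w) >= 2) for w in product((0, 1), repeat=3)}
bad = dict(med)
bad[(1, 1, 1)] = 0
```

Switching repairs exactly such violating pairs, and the partition formulation decides which values to switch so that the resulting increasing operator stays as close to optimal as possible.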
In this paper, we propose a frequency-selective weighted median (FSWM) filter with arbitrary spectral behavior. The proposed scheme is motivated by observations on the structure and design procedure of the linear-phase FIR high-pass (HP) filter. An FIR HP filter can easily be obtained by changing the sign of the coefficients in odd positions. Thus, the output of the HP filter can be represented as the difference between two subfilters which have all-positive coefficients. This structure of the FIR HP filter is analogous to the difference of estimates (DoE), such as the difference of medians (DoM). The DoM is essentially a robust HP filter which is commonly used in edge detection. Based on this observation, we define a new nonlinear filtering structure consisting of linear combinations of weighted medians. We refer to this new filter class as the FSWM filter. It is shown experimentally that the FSWM filter can offer low-pass (LP), HP, band-pass (BP), and band-stop (BS) frequency characteristics.
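The DoM idea can be sketched as follows, together with a plain weighted median, the building block of the FSWM class; the window sizes and weights below are illustrative choices.

```python
def weighted_median(values, weights):
    """Weighted median: the smallest sample whose cumulative weight
    reaches half of the total weight."""
    total = sum(weights)
    acc = 0.0
    for v, w in sorted(zip(values, weights)):
        acc += w
        if acc >= total / 2.0:
            return v

def dom_highpass(x):
    """Difference of medians (DoM): the median of the right 3-window
    minus the median of the left 3-window. It responds at edges but,
    unlike a linear HP filter, ignores isolated impulses."""
    out = [0] * len(x)
    for i in range(2, len(x) - 2):
        out[i] = sorted(x[i:i + 3])[1] - sorted(x[i - 2:i + 1])[1]
    return out
```

An FSWM filter generalizes this by taking linear combinations of several weighted-median subfilters, which is what produces the LP/HP/BP/BS characteristics.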
In this paper, a class of signal-dependent noise models encountered in image processing applications is considered. Such models are uniquely defined by the gamma exponent, which governs the dependence on the signal, and by the variance of a zero-mean random noise process. An automatic procedure for measuring the model parameters directly from noisy images is presented. Then, adaptive filtering is applied in a multiresolution fashion to take advantage of the increasing SNR of the data at decreasing resolution. A rational Laplacian pyramid is generalized to the noise model to yield signal-independent noise on its layers. Experiments show highly accurate results, both for noise estimation and for filtering.
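A sketch of how the gamma exponent might be measured from (local mean, local std) pairs by a log-log fit, run here on simulated data; the model form z = u + u**gamma * v and all parameter values are illustrative assumptions, not the paper's procedure in detail.

```python
import math
import random
import statistics

def estimate_gamma(pairs):
    """Fit std = sigma_v * u**gamma in log-log coordinates; the slope of
    the least-squares line is the gamma exponent."""
    xs = [math.log(u) for u, s in pairs]
    ys = [math.log(s) for u, s in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Simulate z = u + u**gamma * v with zero-mean Gaussian v, then collect
# (local mean, local std) pairs as an automatic procedure would on flat
# image regions.
random.seed(0)
gamma_true, sigma_v = 0.5, 2.0
pairs = []
for u in (10.0, 20.0, 50.0, 100.0, 200.0):
    z = [u + (u ** gamma_true) * random.gauss(0.0, sigma_v) for _ in range(2000)]
    pairs.append((statistics.mean(z), statistics.stdev(z)))
gamma_hat = estimate_gamma(pairs)
```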
The basic idea is to predict a pixel value in a non-distorted frame and then to compare this value to that in the input (corrupted) image. Usually these two values are different, and a decision has to be made about the pixel value at the filter output. The output can be considered as the sum of the prediction and the prediction error processed in some way. For example, large prediction errors can be set to zero because they are classified as being caused by impulse noise samples. Soft decisions on the classification of prediction errors lead to good results for test images. Three-dimensional median filters of various types are used here as predictors. First of all, vector median filters are used. A median is calculated over windows in consecutive frames, e.g., in the previous frame, the current frame, and the next (future) frame. Windows in the past and future frames are motion-compensated. The prediction error is processed either by a memoryless nonlinear element or even by a 2-D filter. Experimental results with videophone test sequences prove that the filters described are quite efficient.
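A hard-decision variant of this prediction/correction scheme can be sketched per pixel as follows (motion compensation of the neighboring frames is assumed done upstream; the threshold value is illustrative).

```python
def temporal_filter(prev, cur, nxt, threshold):
    """Per-pixel temporal filter: the 3-frame median is the prediction;
    small prediction errors pass through, large ones are zeroed as
    impulse noise. A hard-decision sketch of the scheme above."""
    out = []
    for p, c, n in zip(prev, cur, nxt):
        pred = sorted([p, c, n])[1]          # temporal median predictor
        err = c - pred                       # prediction error
        out.append(pred + (err if abs(err) <= threshold else 0))
    return out
```

A soft decision would replace the pass/zero branch with a smooth shrinkage of the error, which is what gives the better results reported for test images.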
In this study, we consider a filtering method for image sequences degraded by additive Gaussian noise and/or impulse noise. In general, for image sequence filtering, a motion compensation (MC) method is required in order to obtain good filtering performance in both the still and moving regions of an image sequence. Nevertheless, the MC method imposes a heavy computational load and tends to produce mistaken motion vectors owing to additive noise. To overcome these drawbacks of MC, we propose a Video-DDWM filter, derived in the following two steps. In the first step, the 2D data-dependent weighted median (DDWM) filter, all of whose weights are decided by local information, is extended to a 3D-DDWM filter. In the second step, motion information, obtained by a motion detector robust against impulse noise, is taken into the 3D-DDWM filter. In addition to a lower computational load than 3D-DDWM filtering with MC, Video-DDWM filtering gives better image sequence restoration results.
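A sketch of the data-dependent weighting idea: below, each sample's weight is derived from local information (its distance to the window median), which is an illustrative rule of our own, not the one used by the Video-DDWM filter.

```python
def ddwm(window, scale=10.0):
    """Data-dependent weighted median sketch: samples far from the local
    median (likely impulses) receive small weights, so they have little
    influence on the weighted-median output. The weighting rule and the
    `scale` parameter are illustrative assumptions."""
    med = sorted(window)[len(window) // 2]
    weights = [1.0 / (1.0 + abs(v - med) / scale) for v in window]
    total = sum(weights)
    acc = 0.0
    for v, w in sorted(zip(window, weights)):
        acc += w
        if acc >= total / 2.0:
            return v
```

In the 3D case the window spans neighboring frames as well, and the motion detector further adjusts the temporal weights.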
The expected increase in 3D data which will result from the development of multimedia systems requires improved compression algorithms in order to reduce the cost of data transmission. In the case of 3D data represented by 3D meshes, the coding procedure involves three different coding steps, for the topology, the geometry, and the attributes (such as texture, color, and curvature) of the mesh. In this paper, we address the issue of 3D mesh geometry coding. Within a predictive framework and under a bitrate control constraint, we study two different prediction rules, namely the parallelogram and the polygonal rule, and present a comparison of their performance. The parallelogram rule uses the ancestors defined by the triangle-tree traversal order of the mesh to predict the current vertex, while the polygonal rule performs the prediction from the already decoded vertices in the polygons incident to the current vertex. The two prediction rules applied to polygonal and triangular meshes lead to different behaviors in terms of compression ratio and distortion index. The parallelogram rule yields better results for triangular models, while the polygonal rule is more efficient for most polygonal models. Based on this analysis, we combine the two prediction rules into a novel scheme, taking advantage of the specificity of each prediction. Simulations carried out on 294 models of the Moving Picture Experts Group (MPEG) data set established that a reduction of up to 50% in distortion at a given bitrate is obtained by using the cooperative scheme.
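The parallelogram rule itself is a one-line vector prediction over the adjacent triangle's vertices; a minimal sketch (vertices as coordinate tuples; `residual` is what a predictive coder would transmit):

```python
def parallelogram_predict(v1, v2, v3):
    """Parallelogram rule: predict the vertex opposite v3 across the
    shared edge (v1, v2) by completing the parallelogram,
    pred = v1 + v2 - v3."""
    return tuple(a + b - c for a, b, c in zip(v1, v2, v3))

def residual(actual, pred):
    """The coder transmits only the prediction residual, which is small
    when the mesh is locally flat."""
    return tuple(a - p for a, p in zip(actual, pred))
```

The polygonal rule instead averages over the already decoded vertices of the incident polygons, which better fits non-triangular faces.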
In coherent image acquisition systems, speckle noise is a common phenomenon that is hard to remove without degrading the original image. Conventional linear filtering fails to remove this noise and improve the signal-to-noise ratio without degrading the image. Statistical filters such as the Wiener filter and the Local Linear Minimum Mean-Squared-Error (LLMMSE) estimator have been applied with limited success. Nonlinear filters such as morphological filters are able to reduce some of this inherent noise with less degradation. A new approach to nonlinear filtering that is capable of removing speckle noise without noticeable degradation is presented in this paper. The approach is based on an adaptive fuzzy leader clustering network known as AFLC, a neuro-fuzzy clustering algorithm that can be used to cluster the noise pixels in the image separately. After clustering is completed, a search is performed throughout the image to localize the noise pixels and eliminate them using a spatial technique similar to the median filter. The results of this process have been compared with those of the traditional median filter, the LLMMSE filter, and the connectivity-preserving morphological filter, demonstrating the superior performance of AFLC in removing speckle noise.
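For reference, the LLMMSE baseline mentioned above (the AFLC network itself is not sketched here) shrinks each pixel toward its local mean according to how much of the local variance is explained by noise. A simplified sketch that assumes a known, additive noise variance:

```python
import numpy as np

def llmmse_filter(img, noise_var, win=3):
    # Local Linear MMSE (Lee-type) estimate per pixel:
    #   out = mean + gain * (pixel - mean)
    # with gain = max(var - noise_var, 0) / var computed over the
    # surrounding win x win window.
    half = win // 2
    padded = np.pad(img, half, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = padded[i:i + win, j:j + win]
            mu, var = block.mean(), block.var()
            gain = max(var - noise_var, 0.0) / var if var > 0 else 0.0
            out[i, j] = mu + gain * (img[i, j] - mu)
    return out
```

Flat regions (variance at or below the noise level) collapse to the local mean, while strong edges pass through almost unchanged, which illustrates the limited success of such estimators on residual speckle around structure.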
In this paper we describe an application of the granulometric mixing theorem to the problem of counting different types of white blood cells in bone marrow images. In principle, an iterative algorithm based on the mixing theorem can be used to estimate the proportion of cells in each class without explicitly segmenting and classifying them; however, that algorithm does not converge well for more than two classes. Therefore, a new algorithm based on the theorem is proposed. The proposed algorithm uses prior statistics to initially segment the mixed pattern spectrum and then applies the one-primitive mixing theorem to each initial component. Applying the mixing theorem to one class at a time results in better convergence. The counts produced by the proposed algorithm on 6 classes of cell -- Myeloblast, Promyelocyte, Myelocyte, Metamyelocyte, Band, and PMN -- are very close to the actual numbers; the deviation of the algorithm's counts is no larger than the deviation of counts produced by human experts. An important technical point is that, unlike previous algorithms, the proposed algorithm does not require prior knowledge of the total number of cells in an image.
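The mixing theorem underlying the approach states that the pattern spectrum of a mixed population is the proportion-weighted sum of the component spectra. A toy sketch of that statement and its inversion (a direct least-squares fit, not the paper's iterative per-class algorithm; names are ours):

```python
import numpy as np

def mixture_spectrum(spectra, proportions):
    # Mixing theorem: the pattern spectrum of the mixture is the
    # proportion-weighted sum of the per-class spectra.
    return sum(p * s for p, s in
               zip(proportions, np.asarray(spectra, dtype=float)))

def estimate_proportions(mixed, spectra):
    # Recover the mixing proportions from an observed mixed spectrum
    # and known per-class spectra by least squares.
    A = np.asarray(spectra, dtype=float).T
    p, *_ = np.linalg.lstsq(A, mixed, rcond=None)
    return p
```

Note that the proportions, not absolute counts, are recovered, which is consistent with the algorithm not needing the total number of cells.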
In this paper, we derive algorithms for noise reduction and image enhancement using spectral amplitude estimation. The algorithms are based on short-space spectral analysis by the DFT, the DCT, or the Modulated Lapped Transform (MLT). We apply these algorithms to low-dose X-ray images acquired in a medical imaging modality called fluoroscopy. Because fluoroscopy delivers moving images in real time, only low dose rates can be used in order to protect patients from excessive exposure. The low X-ray quantum counts associated with such low doses then result in considerable degradation of image quality through quantum noise (QN). Spectral-domain filtering allows the algorithms to be tailored specifically to the two prominent properties of QN, viz. signal dependence and a lowpass-shaped, nonwhite noise power spectrum. A comparison shows that the DFT performs best and even allows orientation to be detected, while the DCT and MLT perform similarly to each other, with the MLT being the least computationally demanding. The noise reduction achieved is about 5-6 dB.
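The spectral-domain gain at the heart of such algorithms can be illustrated with a block DCT and a spectral-subtraction rule. The sketch below assumes a known, white noise variance, whereas the paper tailors the gain to QN's signal dependence and nonwhite spectrum; the function names are ours.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix (rows indexed by frequency).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def denoise_block(block, noise_var):
    # Short-space spectral amplitude estimation on one block:
    # transform, shrink each coefficient by a spectral-subtraction
    # gain, transform back. The DC term is kept at gain 1 to
    # preserve local brightness.
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    power = coeffs ** 2
    gain = np.sqrt(np.maximum(power - noise_var, 0.0)
                   / np.maximum(power, 1e-12))
    gain[0, 0] = 1.0
    return C.T @ (gain * coeffs) @ C
```

A full filter tiles (or overlaps) the image with such blocks; the MLT variant replaces the block DCT with overlapping lapped basis functions to suppress blocking artifacts.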
This paper addresses the issue of 3-D human bronchial tree reconstruction from 2-D segmented slices. The 2-D gray-level slices to be segmented are reconstructed from volumetric data acquired with a high-resolution computerized tomography (HRCT) system operating in spiral mode. The proposed 3-D reconstruction technique consists of three major steps: (1) a fully automated 2-D segmentation of the bronchi, performed on each slice and based on mathematical morphology; (2) the construction of a 3-D oriented, multivalued structure characterizing the 3-D topology of the segmented volume, together with a 3-D oriented propagation based on this structure; and (3) the 3-D reconstruction itself, using a 3-D mesh technique. Results are presented and discussed.
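Step (2)'s oriented propagation can be caricatured as seeded connectivity tracked slice to slice: a component survives only if it connects, through the stack, back to an initial seed (e.g. the trachea). A crude sketch with 4-connected flood fill per slice (the paper's oriented, multivalued 3-D structure is considerably richer; names are ours):

```python
from collections import deque
import numpy as np

def seeded_region(mask, seeds):
    # 4-connected flood fill of a binary mask from the seed pixels.
    h, w = mask.shape
    out = np.zeros_like(mask, dtype=bool)
    queue = deque(zip(*np.nonzero(seeds & mask)))
    for p in queue:
        out[p] = True
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not out[ni, nj]:
                out[ni, nj] = True
                queue.append((ni, nj))
    return out

def propagate(slices, seed):
    # Each slice's kept region seeds the next slice, so only the
    # components 3-D-connected to the initial seed survive.
    kept = [seeded_region(slices[0], seed)]
    for mask in slices[1:]:
        kept.append(seeded_region(mask, kept[-1]))
    return np.stack(kept)
```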
One of the most important features in image analysis is shape. Problems regarding shape are widely encountered in image processing applications such as machine vision recognition, visually guided robots, and the analysis of biomedical images. Mathematical morphology is the branch of image processing that deals with shape analysis. All morphological transformations are defined in terms of two primitive operations, namely dilation and erosion. Since many applications require the solution of morphological problems in real time, the computationally efficient implementation of these operations is crucial. In this paper, two algorithms for dilation and erosion on an advanced associative processor are presented and evaluated. It is shown that these algorithms can take full advantage of the capabilities of the advanced architecture. Specifically, the ability to access all memory words in parallel leads to synchronous, rapid execution of any image translation dictated by the structuring elements employed for morphological processing, and the interconnection network allows the efficient implementation of image translations by any number of pixels. Also, the ability to perform logic operations in parallel on the bits in each processing element leads to optimal computational complexity. Finally, it is shown that there is a trade-off between circuit complexity and communication delay.
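The translate-and-combine formulation that the associative processor exploits is easy to state sequentially: dilation ORs, and erosion ANDs, translated copies of the image, one per structuring-element offset. A plain NumPy reference sketch of exactly that decomposition:

```python
import numpy as np

def translate(img, di, dj):
    # Shift a binary image by (di, dj), filling vacated cells with 0;
    # on the associative processor this is one parallel memory access.
    h, w = img.shape
    out = np.zeros_like(img)
    out[max(0, di):min(h, h + di), max(0, dj):min(w, w + dj)] = \
        img[max(0, -di):min(h, h - di), max(0, -dj):min(w, w - dj)]
    return out

def dilate(img, se):
    # Dilation: OR of the image translated by every offset of the
    # structuring element (offsets taken relative to the SE centre).
    ci, cj = se.shape[0] // 2, se.shape[1] // 2
    out = np.zeros_like(img)
    for di in range(se.shape[0]):
        for dj in range(se.shape[1]):
            if se[di, dj]:
                out |= translate(img, di - ci, dj - cj)
    return out

def erode(img, se):
    # Erosion is the dual: AND over the oppositely directed translations.
    ci, cj = se.shape[0] // 2, se.shape[1] // 2
    out = np.ones_like(img)
    for di in range(se.shape[0]):
        for dj in range(se.shape[1]):
            if se[di, dj]:
                out &= translate(img, ci - di, cj - dj)
    return out
```

The sequential loop over SE offsets is precisely what the associative architecture executes in lockstep across all memory words at once.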
This paper presents a new feature matching approach for real-time image registration based on features provided by a Multi-Scale Top Hat Filter. The idea is to extract bright and dark regions of different sizes in two images, compute 'image' and 'match history' characteristics for each region, match the regions from the two images based on their characteristics, and then estimate the x and y translations and the rotation between the two images from the region matches. Given the real-time requirement and the performance characteristics of today's generation of Digital Signal Processors (DSPs), we implemented the Multi-Scale Top Hat Filter at a lower resolution and compensated for the lost precision by using the Multi-Window Correlation algorithm. The regions successfully matched by the Multi-Scale Top Hat Filter provide exactly the kind of data that is critical for the application of the Multi-Window Correlation algorithm. Automatic parameter adaptation and the Top Hat Filter-specific nature of the detected regions make the algorithm robust and adaptive, dealing well with different scenes and environmental changes. We present the details of this approach, including its real-time implementation and its role in our Automatic Target Detection and Tracking (ATDT) System.
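White and black top-hats are the residues of opening and closing, and evaluating them at several structuring-element sizes yields the multi-scale bright and dark regions used for matching. A grayscale sketch with flat square windows (our function names; the real-time DSP version runs at reduced resolution):

```python
import numpy as np

def local_extreme(img, size, fn):
    # Flat grayscale erosion (fn=np.min) or dilation (fn=np.max)
    # by a size x size square, via a padded sliding window.
    half = size // 2
    padded = np.pad(img, half, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = fn(padded[i:i + size, j:j + size])
    return out

def top_hats(img, size):
    # White top-hat (img - opening): bright regions thinner than the
    # window. Black top-hat (closing - img): correspondingly dark regions.
    opened = local_extreme(local_extreme(img, size, np.min), size, np.max)
    closed = local_extreme(local_extreme(img, size, np.max), size, np.min)
    return img - opened, closed - img
```

Running `top_hats` over a range of `size` values and thresholding the residues gives the bright/dark regions of different sizes that are then characterized and matched.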
Histogram equalization and specification have been widely used to enhance the content of grayscale images. Histogram specification has the advantage of allowing the output histogram to be specified, whereas histogram equalization attempts to produce an output histogram that is uniform. Unfortunately, extending histogram techniques to color images is not straightforward. Performing histogram specification on color images in the RGB color space results in specified histograms that are hard to interpret in terms of the desired enhancement. Human perception interprets a color in terms of its hue, saturation, and intensity components. In this paper, we describe a method for extending graylevel histogram specification to color images by performing histogram specification on the luminance (or intensity), saturation, and hue components in the color difference (C-Y) color space. This method takes into account the correlation between the hue, saturation, and intensity components while yielding specified histograms that have physical meaning. Histogram specification was performed on an example color image and was shown to enhance the color content and details within the image without introducing unwanted artifacts.
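For a single channel, graylevel histogram specification maps each level through the source CDF and then through an approximate inverse of the target CDF; in the method above this is applied per channel (luminance, saturation, hue) after conversion to the C-Y space, which is omitted in this sketch:

```python
import numpy as np

def specify_histogram(channel, target_hist):
    # Map each source level through the source CDF, then invert the
    # target CDF by picking the level with the closest CDF value.
    levels = len(target_hist)
    src_cdf = np.cumsum(np.bincount(channel.ravel(),
                                    minlength=levels)) / channel.size
    tgt_cdf = np.cumsum(target_hist) / np.sum(target_hist)
    mapping = np.array([np.argmin(np.abs(tgt_cdf - c)) for c in src_cdf])
    return mapping[channel]
```

Histogram equalization is recovered as the special case of a uniform `target_hist`.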
Histogram equalization (HE) is one of the simplest and most effective techniques for enhancing gray-level images. For color images, HE becomes a more difficult task, due to the vectorial nature of the data. We propose a new method for color image enhancement that uses two hierarchical levels of HE: global and local. In order to preserve the hue, equalization is applied only to intensities. For each pixel (called the 'seed' when processed), a variable-sized, variable-shaped neighborhood is determined that contains pixels 'similar' to the seed. The histogram of this region is then stretched over a range computed from the statistical parameters of the region (mean, variance) and from the global HE function of intensities, and only the seed is given a new intensity value. We applied the proposed color HE method to various images and observed the results to be subjectively 'pleasant to the human eye,' with emphasized details, preserved colors, and a histogram of intensities close to the ideal uniform one. The results compared favorably with those of three other methods (histogram explosion, histogram decimation, and three-dimensional histogram equalization) in terms of subjective visual quality.
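The global level of the method, hue-preserving equalization of the intensity channel only, is standard HE; a sketch for integer intensities (the local, similarity-neighborhood level is omitted, and the function name is ours):

```python
import numpy as np

def equalize_intensity(intensity, levels=256):
    # Global histogram equalization of the intensity channel: the
    # normalized CDF itself is the level mapping. Hue and saturation
    # are left untouched, so colors are preserved. (Assumes the image
    # is not constant, so the CDF spans a nonzero range.)
    cdf = np.cumsum(np.bincount(intensity.ravel(),
                                minlength=levels)).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return np.round(cdf * (levels - 1)).astype(int)[intensity]
```

The local level then refines each seed's value within a range derived from the neighborhood's mean and variance and from this global mapping.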