We propose a new method for phase unwrapping based on stochastic relaxation. A cost functional is introduced as a measure of the smoothness of the unwrapped phase field. The problem of reconstructing the unwrapped phase field can then be formulated as a minimization problem for the integer deviations between the measured and the unknown neighboring pixel differences of the true phase, with the constraint that the deviations must remove the inconsistencies of the measured phase differences. The optimization problem is solved by the simulated-annealing-with-constraint technique. We tested our method on simulated and real interferograms in the presence of aliasing and noise. We note that our method does not remove noise: for noisy phase fields, its output is very close to the input surface, noise included. The consistency and efficiency of our method are also demonstrated by comparative tests against least-squares methods.
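The neighboring pixel differences central to this formulation are wrapped first differences. A minimal 1D NumPy sketch (an illustration of the quantity being optimized, not the authors' annealing code) shows that integrating the wrapped differences recovers the true phase whenever the true gradient stays below pi in magnitude:

```python
import numpy as np

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return np.angle(np.exp(1j * p))

def unwrap_1d(psi):
    """Itoh's method: integrate the wrapped first differences."""
    d = wrap(np.diff(psi))          # measured neighboring differences
    return psi[0] + np.concatenate(([0.0], np.cumsum(d)))

# smooth true phase whose gradient magnitude stays below pi
true = 0.4 * np.arange(50) + np.sin(np.arange(50) / 5.0)
psi = wrap(true)                    # measured (wrapped) phase
rec = unwrap_1d(psi)
print(np.max(np.abs(rec - true)))   # ~ machine precision
```

In 2D, noise and aliasing make the wrapped differences inconsistent (loop integrals no longer vanish), which is exactly what the integer deviations in the cost functional are constrained to repair.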
Phase unwrapping is one of the toughest problems in interferometric SAR processing. The main difficulties arise from the presence of point-like error sources, called residues, which occur mainly in close couples due to phase noise. We present an assessment of a local approach to the resolution of these problems by means of a neural network. Using a multi-layer perceptron, trained with the back-propagation scheme on a series of simulated phase images, we learn the best pairing strategies for close residue couples. Results show that good efficiencies and accuracies can be obtained, provided a sufficient number of training examples is supplied. The technique is also tested on real SAR ERS-1/2 tandem interferometric images of the Matera test site, showing a good reduction of the residue density. The better results obtained by the neural network, as far as local criteria are adopted, appear justified given the probabilistic nature of the noise process on SAR interferometric phase fields, and suggest a specifically tailored implementation of the neural network approach as a very fast pre-processing step intended to decrease the residue density and yield sufficiently clean images for further processing by more conventional techniques.
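Residues are the points where the wrapped gradient integrated around an elementary 2x2 loop fails to vanish. A short NumPy sketch (standard residue detection, independent of the neural pairing step) computes the charge of every loop:

```python
import numpy as np

def wrap(p):
    return np.angle(np.exp(1j * p))

def residues(psi):
    """Charge (+1/-1/0) of each elementary 2x2 loop in a wrapped phase field."""
    d1 = wrap(psi[1:, :-1] - psi[:-1, :-1])    # down
    d2 = wrap(psi[1:, 1:]  - psi[1:, :-1])     # right
    d3 = wrap(psi[:-1, 1:] - psi[1:, 1:])      # up
    d4 = wrap(psi[:-1, :-1] - psi[:-1, 1:])    # left
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# a smooth (residue-free) field has zero charge everywhere
x, y = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
clean = wrap(0.2 * x + 0.1 * y)
print(np.count_nonzero(residues(clean)))   # 0
```

Adding strong phase noise to `clean` makes opposite-charge residues appear in close couples; pairing and cancelling those couples is the task the perceptron is trained for.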
SAR interferometry can be used to derive topographic information (a DEM) for the Earth's surface. An operation called geocoding is necessary to translate the DEM from a range-azimuth to a latitude-longitude reference system. This paper introduces a new algorithm for geocoding interferometric DEMs based on an iterative procedure that uses the reference ellipsoid as the first guess and then locates each ground point on a succession of planes locally parallel to the ellipsoid. The procedure is shown to converge geometrically to the considered DEM point. Given its high accuracy, this method could serve as a precision tool for use in difficult zones.
The estimation of sea ice motion in the Arctic Ocean is of interest for climate studies, ocean dynamics, and ship navigation. Synthetic Aperture Radar (SAR) has provided information from which the motion of sea ice in the Arctic Ocean can be extracted. In order to estimate sea ice motion from ERS-1/2 SAR image pairs or sequences, this paper presents a coarse-to-fine, or hierarchical, matching method. The method has two steps. In the first step, the mean drift of the sea ice motion is extracted from the image pair by using a spectral maximum cross-correlation matching technique. In the second step, the detailed movement remaining after compensating for the mean drift is retrieved by using a distance measure called the sum-of-squared-difference matching technique, based on a Laplacian pyramid data structure. The paper also gives experimental results which show that the method performs well in the estimation of sea ice motion in the Arctic Ocean from ERS-1/2 SAR image pairs.
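The first, coarse step can be illustrated with phase correlation, a common spectral cross-correlation matcher (a sketch of the general technique, not the paper's exact implementation):

```python
import numpy as np

def mean_drift(a, b):
    """Integer mean shift of image b relative to a, estimated from the
    peak of the normalized cross-power spectrum (phase correlation)."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    r = B * np.conj(A)
    r /= np.abs(r) + 1e-12                 # normalized cross-power spectrum
    corr = np.fft.ifft2(r).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # map the peak position to a signed circular shift
    return tuple(int(s) if s <= n // 2 else int(s) - n
                 for s, n in zip(idx, corr.shape))

rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(a, (5, -3), axis=(0, 1))       # known drift
print(mean_drift(a, b))                    # (5, -3)
```

The recovered mean drift is then removed, and the fine step matches the residual motion with a sum-of-squared-difference measure on a Laplacian pyramid.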
Two new Bayesian maximum a posteriori vector speckle filters are developed for multi-channel detected synthetic aperture radar (SAR) images. These filters incorporate statistical descriptions of the scene and of the speckle in multi-channel SAR images; the models account for the scene and system effects which result in a certain amount of correlation between the different channels. In order to account for the effects of the spatial correlation of both the speckle and the scene in SAR images, estimators originating from the local autocorrelation functions are incorporated into these filters to refine the evaluation of the non-stationary first-order local statistics, to improve the resolution of the scene's textural properties, and to preserve the useful spatial resolution in the speckle-filtered image. Since the new Bayesian speckle filters have the structure of a control system, their application forms the first processing step of an application-oriented control system designed to exploit the synergy of SAR sensors. We present here such a control system, designed to retrieve soil roughness and soil moisture through Bayesian ERS/RADARSAT data fusion. Results obtained on a pair of ERS PRI and RADARSAT standard-beam SAR images show that the new speckle filters deliver convincing performance for speckle reduction, texture preservation, and the detection of small scene objects. The retrieval of soil roughness and soil moisture through Bayesian data fusion of ERS and RADARSAT data also provides valuable results for the monitoring of agriculture and the environment.
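For orientation, the classical single-channel Lee MMSE filter (a much simpler relative of the Bayesian MAP vector filters described here, shown only to illustrate how local first-order statistics drive adaptive speckle filtering) can be sketched as:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lee_filter(z, looks, win=7):
    """Classical Lee MMSE speckle filter for an L-look intensity image.
    Speckle is modeled as unit-mean multiplicative gamma noise."""
    pad = win // 2
    zp = np.pad(z, pad, mode="reflect")
    w = sliding_window_view(zp, (win, win))
    m = w.mean(axis=(-2, -1))              # non-stationary local mean
    v = w.var(axis=(-2, -1))               # non-stationary local variance
    sv2 = 1.0 / looks                      # speckle variance
    vx = np.maximum(v - (m ** 2) * sv2, 0.0) / (1.0 + sv2)
    k = np.where(v > 0, vx / np.maximum(v, 1e-12), 0.0)  # adaptive gain
    return m + k * (z - m)

rng = np.random.default_rng(2)
scene = np.full((64, 64), 100.0)
z = scene * rng.gamma(4.0, 1.0 / 4.0, scene.shape)   # 4-look speckle
f = lee_filter(z, looks=4)
print(f.var() < z.var())   # True: speckle variance is reduced
```

The MAP filters of the paper extend this idea to vectors of correlated channels and to scene/speckle autocorrelation models.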
In recent years, several works on SAR data analysis and information extraction have been presented in the open literature. The more SAR data become of common and routine usage, the more the need for automating the interpretation and information-extraction process increases. In 1994, Alenia Aerospazio started a research and development activity for the definition and implementation of a demonstrator system for SAR data analysis and automatic target recognition (ATR). The work proceeded with the identification and selection of the basic tools for SAR image pre-processing. The last step was the definition of the architecture of the ATR system. The considered architecture is based on a two-step process whose steps can be run in sequence or separately. For single-image input, a rule-based system is considered; rules for the identification of a predefined set of targets are under implementation. For a temporal series of input images, a change-detection approach is considered. This last part of the work is at a preliminary stage: only the algorithm flow and routine identification task has been completed. The possible implementation of the system on a massively parallel computer is considered as a final goal.
A two-step Fuzzy C-Means (FCM) clustering algorithm is presented in this paper. In the first step, a region-growing algorithm is used to over-split the data set, which is then reconstructed using the mean values of the segments. In the second step, a traditional FCM clustering algorithm segments the reconstructed data set. In order to obtain the physical classes, simple data training or the use of prior knowledge is required: the mean values of each class are obtained from the data training, and the physical classes are then identified through a simple distance measure. To improve the classification accuracy, a post-processing step was developed using a majority filter based on the sizes of objects and on context information. The algorithm was applied to two different applications: classification of sea ice and of land cover from ERS-1/2 SAR images. In the sea ice case, the SAR PRI images and first-order statistical parameters were used, and the algorithm was compared with a statistical classification method. In the land cover case, the SAR SLC images, first-order statistical parameters, and the interferometric coherence information were used; in particular, a set of logical calculation rules was used to determine the physical classes. The experiments have shown that the presented algorithm performs better and is more automatic than the statistical model for multi-channel classification from SAR images.
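The FCM core of the second step alternates between membership and center updates. A compact sketch (plain FCM on feature vectors, assuming fuzzifier m = 2 and random initialization from the samples; not the paper's full two-step pipeline):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means on feature vectors X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]  # init from samples
    for _ in range(iters):
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1))
        U /= U.sum(axis=0)                    # memberships, columns sum to 1
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(10.0, 0.5, (100, 2))])
centers, U = fcm(X, c=2)
print(np.sort(centers[:, 0]).round(1))   # one center near 0, one near 10
```

In the paper's first step, the n samples fed to this routine are segment means rather than raw pixels, which shrinks the problem and stabilizes the clustering.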
Strong-scattering models, which assume a uniform distribution of the phase of the scatterers, do not provide an adequate description of sonar scattering from the sea bed. The generalized-K and the homodyned-K distributions have been proposed as models for weak scattering in which the number of steps in the random walk fluctuates according to a negative binomial distribution. As a prerequisite to experimental analysis of sonar data as a case of weak scattering, a review of the two models is carried out.
Over the last years, the wavelet transform has developed into a mature and very pragmatic formalism for analyzing the scale behavior of signals. However, it also remains a tool serving its original goal: time-frequency analysis. In this article we summarize the basics of the time-frequency-scale formalism for signal representation and analysis, and we review several applications with promising results for SAR signal processing.
This paper describes how low-resolution SAR image products with a high number of looks can be simulated from calibrated amplitude SAR imagery. The simulation method reduces the resolution of the input imagery by smoothing; however, the resulting product then has a higher number of looks than is required, so the speckle noise has to be increased. This is achieved by using gamma-distributed noise. The smoothing process is also normalized to preserve the calibration of the original imagery. Examples of simulated products and validation results for the products are also presented.
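The two steps described above can be sketched as follows (a simplified block-average version with unit-mean gamma noise; the paper's actual smoothing kernel and normalization may differ):

```python
import numpy as np

def simulate_product(intensity, factor, out_looks, rng):
    """Smooth to lower resolution, then re-inject unit-mean gamma speckle
    so the product has the desired equivalent number of looks."""
    n0, n1 = (s // factor for s in intensity.shape)
    blocks = intensity[:n0 * factor, :n1 * factor]
    low = blocks.reshape(n0, factor, n1, factor).mean(axis=(1, 3))  # smoothing
    # unit-mean gamma noise: shape = out_looks, scale = 1/out_looks
    noise = rng.gamma(out_looks, 1.0 / out_looks, size=low.shape)
    return low * noise

rng = np.random.default_rng(4)
hi_res = np.full((256, 256), 100.0)          # calibrated, homogeneous input
prod = simulate_product(hi_res, factor=4, out_looks=3, rng=rng)
print(abs(prod.mean() - 100.0) < 5.0)        # True: mean (calibration) preserved
```

Because the injected noise has unit mean, the radiometric calibration of the smoothed image is preserved in expectation, which is the normalization property the paper emphasizes.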
Satellite radar imaging requires the transmission of a huge amount of data, since image reconstruction cannot usually be accomplished directly on board. A compression scheme is therefore needed. Current image resolutions do not allow lossless coding, and the statistical properties of the signal are often discarded. However, the correlation of the signal can be exploited to increase the performance of compression algorithms. This paper presents examples of compression techniques and shows the contribution of the statistical properties of the signal to compression.
SAR data processing has matured over the past decade with the development of processing approaches that include traditional time-domain methods, popular and efficient frequency-domain methods, and relatively new and more precise chirp-scaling methods. These approaches have been used in various processing applications to achieve varying degrees of efficiency and accuracy. One common trait amongst all SAR data processing algorithms, however, is their iterative and repetitive nature, which makes them amenable to parallel computing implementation. With SAR's contribution to remote sensing now well established, the processing throughput demand has steadily increased with each new mission. Parallel computing implementation of SAR processing algorithms is therefore an important means of attaining high SAR data processing throughput to keep up with the ever-increasing science demand. This paper concerns parallel computing implementation for a mode of data collection called ScanSAR, which has the unique advantage of yielding wide swath coverage in a single data collection pass. This mode has been demonstrated on SIR-C and is being used operationally for the first time on Radarsat. The burst nature of ScanSAR data makes it a natural candidate for parallel computing implementation. This paper describes such an implementation experience at the Alaska SAR Facility for Radarsat ScanSAR mode data. A practical concurrent processing technique is also described that allows further improvement in throughput at a slight increase in system cost.
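Because ScanSAR bursts are mutually independent, the parallelization pattern is a simple map over bursts. A toy sketch (the `process_burst` body is a stand-in, not the facility's actual processor):

```python
from concurrent.futures import ThreadPoolExecutor

def process_burst(burst):
    """Stand-in for per-burst SAR processing (range/azimuth compression)."""
    return sum(burst) / len(burst)

def process_scansar(bursts, workers=4):
    """Each ScanSAR burst is independent, so bursts map cleanly onto workers;
    results come back in burst order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_burst, bursts))

bursts = [[float(i + j) for j in range(8)] for i in range(16)]
print(process_scansar(bursts)[:3])   # [3.5, 4.5, 5.5]
```

In a production system each worker would run the full burst-mode processor, and the scheduler would overlap I/O with computation, which is essentially the concurrent-processing refinement the paper describes.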
The use of airborne SAR suffers from the difficulties raised by the nonlinearities of the carrier trajectory. An original method for motion compensation, allowing the use of 2D processing for image synthesis, is introduced. It provides a more elegant and more efficient solution to range migration and range-Doppler coupling than classical range/Doppler processing. Trials with the RAMSES airborne SAR of ONERA confirm the merits of the new method.
Systolic architectures for 2D digital filters are presented. The structures are derived directly from the transfer function. The proposed 2D systolic arrays for 2D digital filters have several advantages over the existing 2D arrays, such as modularity and use of nearest neighbor interconnections. These two features make the proposed architecture versatile and more suitable for VLSI implementation.
A new parallel multiplier design is proposed, based on partitioning the operands into four groups (using a different grouping than previous designs) and on a combination of 4:2-compressor carry-save adders for the accumulation of the 16 partial-product terms. A design methodology for parallel multipliers is also proposed which gives the designer more flexibility in finding the best trade-off between throughput rate and hardware cost.
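A 4:2 compressor is conventionally built from two cascaded full adders and satisfies the arithmetic invariant x1+x2+x3+x4+cin = sum + 2*(carry + cout). A bit-level sketch (the standard textbook construction, not necessarily the gate-level circuit of this design):

```python
def full_adder(a, b, c):
    s = a ^ b ^ c
    cout = (a & b) | (a & c) | (b & c)
    return s, cout

def compressor_4_2(x1, x2, x3, x4, cin):
    """4:2 compressor from two cascaded full adders.
    Invariant: x1+x2+x3+x4+cin == sum + 2*(carry + cout)."""
    s1, cout = full_adder(x1, x2, x3)
    s, carry = full_adder(s1, x4, cin)
    return s, carry, cout

# exhaustive check of the arithmetic invariant over all 32 input patterns
ok = all(
    a + b + c + d + e == s + 2 * (cy + co)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
    for d in (0, 1) for e in (0, 1)
    for s, cy, co in [compressor_4_2(a, b, c, d, e)]
)
print(ok)   # True
```

Chaining such compressors in a carry-save tree reduces the 16 partial products to two operands for a final fast adder, which is where the throughput/hardware trade-off arises.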
This paper addresses the development of robust algorithms for unwrapping the interferometric phase (IF) produced in SAR interferometry. Two practicable methods are proposed, based on Green's first identity with properly chosen Green's functions satisfying the imposed Neumann boundary conditions. These Green's functions are found either by solving the Neumann problem by means of potential theory or through the representation of the Green's function as an infinite 2D series in the eigenfunctions of the Helmholtz equation. The algorithms are then further elaborated using the method of regularization while searching for a gradient of the measured IF; the proposed approaches thus take into account the presence of measurement noise in the IF. The developed algorithms proved to be stable both with respect to propagation of errors and with respect to local perturbations in the unwrapped phase due to inaccuracy of the measurement of the wrapped IF. The proposed algorithms allow efficient numerical implementation using the fast Fourier transform. Experiments on data acquired by the ERS-1 satellite and supplied by CNES show the high performance of both methods.
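A closely related, widely used spectral formulation is the unweighted least-squares unwrapper with Neumann boundary conditions, in which the cosine eigenfunctions of the Laplacian play the role of the eigenfunction expansion mentioned above. A sketch (the classical Ghiglia-Romero-style DCT solver, offered as an illustration of the FFT-class implementation, not the authors' exact algorithms; assumes SciPy is available):

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    return np.angle(np.exp(1j * p))

def unwrap_ls(psi):
    """Unweighted least-squares phase unwrapping with Neumann boundary
    conditions, solved spectrally via the 2D discrete cosine transform."""
    dx = wrap(np.diff(psi, axis=0))
    dy = wrap(np.diff(psi, axis=1))
    rho = np.zeros_like(psi)                  # divergence of wrapped gradient
    rho[:-1, :] += dx; rho[1:, :] -= dx
    rho[:, :-1] += dy; rho[:, 1:] -= dy
    M, N = psi.shape
    denom = (2 * np.cos(np.pi * np.arange(M) / M)[:, None]
             + 2 * np.cos(np.pi * np.arange(N) / N)[None, :] - 4)
    denom[0, 0] = 1.0                         # avoid dividing the zero mode
    R = dctn(rho, norm="ortho") / denom
    R[0, 0] = 0.0                             # solution defined up to a constant
    return idctn(R, norm="ortho")

x, y = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
phi = 0.5 * x + 0.3 * y                       # smooth true phase
rec = unwrap_ls(wrap(phi))
print(np.max(np.abs((rec - rec.mean()) - (phi - phi.mean()))) < 1e-6)  # True
```

On residue-free data this recovers the true phase up to an additive constant; the paper's regularized gradient estimation addresses the noisy case.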
Automatic classification of regions towards cartographic feature extraction from data of the airborne AeS-1 instrument is presented. We extract regions corresponding to cartographic features for the classes built-up area, forest, water, and open area. Water and built-up areas are extracted from the intensity image. The use of a DEM as an additional source of information allows us to distinguish built-up land and forest from all other classes. The mathematical tools for feature extraction from intensity and DEM data are fractal dimension estimation and mathematical morphology. The classification is done using a decision tree.
We consider a polarimetric SAR data classification method which incorporates scattering models. The proposed method is an integrated neural network classifier composed of two classification procedures. First, the SAR data are pre-classified into three scattering classes by computing the Mueller matrix and Stokes vector for each pixel. Second, we construct a neural network appropriate to each scattering class in order to classify the SAR data into realistic categories; either a competitive or a back-propagation neural network is employed as the classifier, the former learning by the LVQ1 and LVQ2.1 algorithms. Applying the procedure to SIR-C C-band data, pixels in the water category are classified almost exclusively into the odd class, while the even class includes only the factory and urban categories. The neural classifier can therefore use a smaller network and a more efficient learning process, since each network handles a more limited set of categories. The neural network classifier employs an eight-dimensional feature vector with backscattering coefficients and pseudo relative phases between HH and VV from the L and C bands. The average accuracy of the competitive neural network is slightly higher than that of the back-propagation network.
Positivity and support have long been used to improve image quality beyond that achievable from the measured data alone. In this paper we analyze how positivity functions to reduce noise levels in measured Fourier data and in the corresponding images. We show that positivity can be viewed as a signal-dependent support constraint, and thus that it functions by enforcing Fourier-domain correlations. Using computer-simulated data, we show the effects that positivity has upon measured Fourier data and upon images. We compare these results to equivalent results obtained using support as a constraint. We show that support is a more powerful constraint than positivity in several ways: (1) more super-resolution is possible, (2) more Fourier-domain noise reduction can occur, and (3) more image-domain noise reduction can occur.
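How a support constraint recovers information beyond the measured Fourier band can be seen in a Papoulis-Gerchberg-style alternating projection (a standard illustration of support-constrained super-resolution, not the paper's simulation setup):

```python
import numpy as np

def pg_extrapolate(meas, band, support, iters=200):
    """Papoulis-Gerchberg: alternate between enforcing the measured Fourier
    samples (on `band`) and the image-domain support constraint."""
    x = np.zeros_like(support, dtype=float)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[band] = meas[band]                  # data consistency on known band
        x = np.fft.ifft(X).real
        x *= support                          # support constraint
    return x

# signal confined to a small support, measured only at low frequencies
n = 64
truth = np.zeros(n); truth[28:36] = [1, 2, 3, 4, 4, 3, 2, 1]
support = (truth != 0).astype(float)
F = np.fft.fft(truth)
band = np.zeros(n, bool); band[:9] = True; band[-8:] = True   # low-pass band
rec = pg_extrapolate(F, band, support)
err0 = np.linalg.norm(np.fft.ifft(np.where(band, F, 0)).real - truth)
err = np.linalg.norm(rec - truth)
print(err < err0)   # True: support recovers out-of-band energy
```

Positivity acts analogously but with a "support" that depends on the signal itself, which is why the paper finds it a weaker constraint than known support.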
A new realization of DPCM video signal/image processing that provides the designer with more flexibility in finding the best trade-off between throughput rate and hardware cost is introduced. This is achieved by combining digit-serial computation with DPCM video signal processing. The advantage of the proposed realization is that the size of the memory used for multiplication can be reduced by a factor of at least 32, compared to 16 in existing DPCM implementations.
An automated method is proposed to register remotely sensed images with subpixel accuracy. The method, called ARTSPA, improves the correspondence among control-point pairs from pixel to subpixel accuracy. Registration with subpixel accuracy is then realized by using the improved control-point pairs. ARTSPA was successfully applied to JERS-1/OPS stereo pair images for deriving terrain height.
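One common way to push a control-point match from pixel to subpixel accuracy is to refine the integer correlation peak by interpolation (a generic sketch of the idea, not ARTSPA itself; the log-parabolic fit assumed here is exact for Gaussian-shaped correlation peaks):

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate the (possibly fractional) 1D shift of b relative to a:
    integer peak of the circular cross-correlation, refined by fitting a
    parabola to the log of the three samples around the peak."""
    c = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))).real
    p = int(np.argmax(c))
    n = len(c)
    lm, l0, lp = np.log(c[(p - 1) % n]), np.log(c[p]), np.log(c[(p + 1) % n])
    frac = 0.5 * (lm - lp) / (lm - 2 * l0 + lp)   # parabola vertex offset
    shift = p + frac
    return shift if shift <= n / 2 else shift - n

x = np.arange(128, dtype=float)
g = lambda mu: np.exp(-(x - mu) ** 2 / (2 * 4.0 ** 2))
a, b = g(60.0), g(70.3)                    # true shift: +10.3 samples
print(round(subpixel_shift(a, b), 2))      # 10.3
```

Applying such a refinement around each control point, then fitting the registration transform to the refined pairs, yields subpixel registration of the whole image.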
This paper describes a new image segmentation technique. The suggested algorithm is based on an original histogram-based multi-threshold presegmentation procedure, whose advantages are high computational speed and good separability in histogram-mode detection. The subsequent segmentation makes it possible to obtain a reasonable, 'good-looking' set of non-crossing homogeneous regions of the image that can later be located and measured. Experiments on a number of real airborne images demonstrate the efficiency of the proposed approach.
Image segmentation is the process which partitions an image into zones of interest corresponding to scene objects. We propose two algorithms, based on mathematical pretopology and structuring functions, for detecting crest lines in a grey-level image at very high definition. The first algorithm is based on a method of grouping by relaxing propagation on the definition of a pretopological structure on the set to be classified. The second algorithm consists of grouping by extraction of a new pretopology from the one defined initially; it detects the crest lines directly, whereas the first does so indirectly. These methods were tested on a SPOT panchromatic image of the region of Oran. From the results, we conclude that these methods can very well be embedded in a process for detecting roads, railways, and water courses.
This article proposes a non-supervised segmentation approach for multi-sensor remote-sensing data. Attention is focused on the automatic training phase that retrieves the image class parameters needed in the subsequent parametric segmentation. Since clustering the whole data set is in general not possible, owing to the computational load involved, sampling is needed to estimate the distribution of the classes in the feature space. However, on the one hand, sampling single pixels may strongly affect the correct estimation of the class distributions due to noise, while, on the other hand, simply taking the mean value of a window around the sample may have too strong a filtering effect. The proposed algorithm exploits the spatial interaction between the pixels in the image, carefully taking into account the local image content of each sample of the observed scene. A Bayesian network estimates for each candidate sample the most appropriate neighborhood, looking for connected components and thus for pixels that are likely to belong to the same class. From the selected neighborhood, a mean value of the feature vector is computed to represent the sample, thus taking into account the local morphological information. In this way, the estimated class distributions in the feature space form a more robust representation of the true classes, benefiting the parameter estimation and the final segmentation. The image class models obtained by the proposed training step are used as input to a Markov random field (MRF) segmentation approach. The results presented show that a better separation of the natural classes is possible when the local image content is used in a more careful fashion. Numerical results based on synthetic images show that the accuracy of the MRF segmentation approach improves from 72 percent to 96 percent.
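The final MRF segmentation stage is often optimized with iterated conditional modes (ICM). A small sketch (a generic Potts-prior ICM labeler with Gaussian class models, assuming class means and a common noise sigma as inputs; not the paper's full pipeline):

```python
import numpy as np

def icm(img, means, sigma, beta=1.5, iters=5):
    """Iterated conditional modes for a Potts-MRF labeling: each pixel takes
    the label minimizing data term + beta * (number of disagreeing neighbors)."""
    labels = np.abs(img[..., None] - np.asarray(means)).argmin(-1)
    for _ in range(iters):
        for _sweep in range(2):              # simple raster sweeps
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    best, best_e = labels[i, j], np.inf
                    for k in range(len(means)):
                        e = (img[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                            ni, nj = i + di, j + dj
                            if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]:
                                e += beta * (labels[ni, nj] != k)
                        if e < best_e:
                            best, best_e = k, e
                    labels[i, j] = best
    return labels

rng = np.random.default_rng(5)
truth = np.zeros((16, 16), int); truth[:, 8:] = 1
img = truth * 3.0 + rng.normal(0.0, 1.0, truth.shape)
labels = icm(img, means=[0.0, 3.0], sigma=1.0)
print((labels == truth).mean() > 0.9)   # True: noisy labels are cleaned up
```

The paper's contribution sits upstream of this step: the class means and variances fed to the MRF stage are estimated from neighborhood-aware samples rather than raw pixels.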
The acquisition of data from satellites is of great interest because of the importance that the interpretation of these data has in different application domains such as geology, hydrology, town planning, and the observation of agricultural sites and forests. This paper addresses the problem of the statistical classification of SAR images. To this end, different methods have been proposed in the literature, such as K-NN or maximum likelihood. Their use yields fast classification maps, but the accuracy obtained is often not satisfactory. The reason is that those methods do not fully exploit the spatial correlation information, because they use classical features that do not capture this property. Moreover, the classical approaches make use of a fixed set of features, which does not allow optimal classification. This fact is even more evident for SAR images, where classes overlap. In this paper, the use of features such as the sample mean and the sample variance, which exploit the spatial correlation property, is shown within a statistical image classification framework. The parametric feature estimators are presented, together with a brief description of the developed classification algorithm. Throughout the paper the usual hypothesis of independent samples is not applied, owing to the strong texture characteristics of SAR images. Finally, a section containing results of classification tests, followed by a short discussion, can be found.
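The per-pixel sample mean and variance features mentioned above are computed over a local window. A minimal NumPy sketch (the window size and padding mode are illustrative choices, not taken from the paper):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mean_var_features(img, win=5):
    """Per-pixel sample mean and sample variance over a win x win window:
    simple texture features that capture local spatial statistics."""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    w = sliding_window_view(p, (win, win))
    return w.mean(axis=(-2, -1)), w.var(axis=(-2, -1))

img = np.full((10, 10), 7.0)               # homogeneous test image
m, v = mean_var_features(img)
print(np.allclose(m, 7.0), np.allclose(v, 0.0))   # True True
```

Stacking these maps with the raw intensity produces the feature vectors on which a statistical classifier can operate without assuming independent samples.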
Multisource, Multisensor, and Multitemporal Approaches
In this paper we present a method to generate 3D cartographic databases of dense urban areas using scanned maps and aerial images simultaneously. The generation of digital terrain models (DTM) and the 3D description of buildings from aerial imagery are both supported by a prior analysis of the scanned maps. Our approach relies on various features extracted from the maps, such as the road network, the urban blocks, and the buildings. These features are used to guide the analysis of a disparity image computed from a stereo pair of aerial images. For the generation of a DTM, the road network allows the algorithms to focus on regions where information on the ground elevation is available: the crossroads and the road sections. For the detection and description of buildings, each urban block provided by the road network of the map is analyzed separately, taking into account the different features detected in the map. The effectiveness of this approach is demonstrated on complex imagery over large, dense urban areas presenting a wide variety of landscapes.
In this paper the Ordered Weighted Averaging (OWA) operator is introduced in relation to data cooperation in the field of image analysis. The concept of the OWA operator was introduced by Yager in [5], [6] and [7] as a way of providing aggregations that lie between Max and Min. The structure of these operators involves a type of nonlinearity in the form of an ordering operation on the elements to be aggregated. The main difficulty in using this type of operator is finding the appropriate weighting vector. In our approach, we propose to generate the weights with a neural network. Furthermore, depending on the dispersion of the sources' opinions, the most appropriate operator is used, taking into account the agreement or the conflict among the sources. In the following we review some basic ideas of the OWA aggregation operator, and we then describe the general operation of the system, showing how the coherence degree among the sources is integrated to produce an operator adapted to any given situation. The method is tested on Landsat multispectral images, using different supervised and unsupervised classification techniques.
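As background, Yager's OWA aggregation itself is simple to state: sort the inputs in descending order and take a weighted sum, so that the choice of weighting vector moves the result anywhere between Max and Min. A minimal sketch (the function name is ours, not the paper's):

```python
def owa(values, weights):
    """Ordered Weighted Averaging (Yager): sort the inputs in
    descending order, then form the weighted sum with `weights`.
    weights = [1,0,...,0] gives Max; [0,...,0,1] gives Min;
    uniform weights give the arithmetic mean."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

The paper's contribution sits on top of this definition: a neural network supplies the weighting vector, adapting the operator to the observed agreement or conflict among the sources.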
An object-based approach for producing land use maps is described in this paper. This approach has been used to integrate Landsat TM data within a GIS context for producing land use maps of urban-rural fringe areas. A contextual image classification method based on the SMAP estimate was used to produce land cover maps, which provide knowledge for inferring land use types. Objectized land cover information, thematic knowledge, and spatial composition rules were used to infer the land use type of each object area. A prototype of this approach has been built using the GRASS 4.1 GIS software package and tested on a dataset compiled for this purpose. Results indicate a significant improvement compared with land use maps produced using a contextual image classification approach alone.
Interpolation and classification are widely used in image processing and pattern recognition, including remotely sensed image processing. In fact, interpolation and classification can be considered as optimization problems, i.e., finding an optimal mapping from the value space of the variables to the value space of the function, or from the feature space to the class space. Different methods are used: some are based on known numerical data, and others on expert rules. In general, they have difficulty integrating both the knowledge of experts and that implied in known numerical training samples. In the present paper, we propose to use neural fuzzy systems with asymmetric pi membership functions. A new global criterion to optimize, and the corresponding learning algorithm, are also proposed. To test the performance of the proposed system, we apply it to interpolation and classification problems. The comparison with other methods shows the better behavior of such systems. The neural fuzzy system using asymmetric pi membership functions has the following advantages: (1) the asymmetric pi membership function gives a more general model of fuzzy rules, improving the precision of the neural fuzzy system and assuring good convergence in learning; (2) the neural fuzzy system can integrate both kinds of knowledge; (3) the neural fuzzy system allows a refinement of the expert knowledge, and the new fuzzy rules found are easy to interpret.
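The exact parameterization of the asymmetric pi membership function is not given in the abstract; one common construction, shown here purely as an assumption, joins two quadratic S-spline flanks of different widths around a center, so the left and right slopes of the fuzzy set can be tuned independently:

```python
def s_curve(x, a, b):
    """Quadratic S-spline rising smoothly from 0 at a to 1 at b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    mid = (a + b) / 2.0
    if x <= mid:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

def pi_asym(x, c, w_left, w_right):
    """Asymmetric pi membership function: an S-shaped rise of
    width w_left and an S-shaped fall of width w_right, with
    membership 1 at the center c."""
    if x <= c:
        return s_curve(x, c - w_left, c)
    return 1.0 - s_curve(x, c, c + w_right)
```

Letting a learning algorithm adjust `c`, `w_left`, and `w_right` per rule is what makes the asymmetric form a more general model than the symmetric pi function, where the two widths are tied.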
In this study we propose the application of a fuzzy hybrid methodology for the classification of wetlands in the Venice lagoon, one of the most delicate examples of this type of ecosystem in the world. The identification of wetlands in these transitional areas is not a trivial task, since they are characterized by mixed signatures, depending on the amounts of water, bare soil, and vegetation present in each ground pixel. On the other hand, the importance of monitoring wetland extents by means of remote sensing data justifies new efforts to increase the reliability of the results beyond those obtained by traditional classification techniques. In this work, a fuzzy hybrid methodology has been applied to a specific area of the Venice lagoon, using Landsat Thematic Mapper images and a set of color aerial photographs at higher geometric resolution taken simultaneously with the satellite images. The classification results have been judged by experts to be a reliable basis for further multisource data analyses and accurate mapping procedures.
This paper presents a multistrategy fuzzy learning method for the generation and refinement of multisource remote sensing classification rules. The learning procedure uses theoretical knowledge in the form of fuzzy production rules and a set of training examples, or pixels, assigned to fuzzy classes to develop a method for accurately classifying pixels not seen during training. The strategy is organized to preserve the advantages of direct elicitation techniques and empirical learning strategies while avoiding the disadvantages each presents when used as a monostrategy learning method. The performance of the methodology has been evaluated by applying it to the actual environmental problem of fire risk mapping in Mediterranean areas, using an approach in which the information describing risk factors is mainly extracted, by means of classification procedures, from remotely sensed satellite images. The results achieved, quantitatively and qualitatively evaluated by experts, prove that the proposed method provides adequate solutions for multiple feature evaluation and accurate discrimination between coexisting borderline cases, which are generally the main problems when dealing with multisource remote sensing classification tasks.
The study of urban areas is an important problem in image interpretation. It is useful to be able to analyze town development on satellite images or to mask urban areas automatically. The method we present in this paper consists of the extraction of urban areas from remote sensing images and the classification of these areas. We first separate the urban areas from the other types of regions. Then we classify them according to a measure of urban density. The algorithms we use combine different types of operators in order to improve the final classification.
This study presents and compares two models for estimating motion in meteorological image sequences. The first method makes use of the grey-level conservation hypothesis. It produces a dense vector field through a variational formulation and allows discontinuities in the resulting field. The second method uses a model that takes affine motion as its underlying hypothesis. The motion parameters are then estimated with an incremental least-squares procedure. One of its principal advantages lies in its modeling of the variation of the grey-level values. The two methods are complementary: the second computes a global estimate of the motion, which is locally refined by the first.
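The two hypotheses can be summarized compactly. In standard optical-flow notation (the symbols below are ours, not the paper's), the grey-level conservation hypothesis and the affine motion model read:

```latex
% Grey-level conservation (optical flow constraint) for image I(x,y,t):
I_x\,u + I_y\,v + I_t = 0
% Affine motion model; the six parameters a_1,\dots,a_6 are
% estimated by (incremental) least squares over the image:
u(x,y) = a_1 + a_2 x + a_3 y, \qquad v(x,y) = a_4 + a_5 x + a_6 y
```

The first model solves for a dense field $(u,v)$ per pixel with a regularizer that tolerates discontinuities; the second fits the six global parameters, which is why it captures the large-scale motion that the dense method then refines locally.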
Fractals are increasingly used in image analysis, and the use of fractal dimensions has been much studied for image feature extraction and segmentation. Though the fractal dimension is an essential property, its use is limited. First, fractal dimensions alone cannot completely characterize images. Second, different images may have the same fractal dimension. In the present paper, we introduce the s-dimension content as a new image feature and use it to characterize images better. The s-dimension content is calculated with the help of the covering-blanket method. To extract the fractal features more effectively, we also propose to vary the spatial window size used for the calculation of the fractal characteristics at different scales. The inherent relation between the s-dimension content of fractals and the image features is studied. Experimental results for boundaries between different regions are presented. We also present the application of the s-dimension content to the detection of small objects on a noisy background; the experimental results are reported and compared with those obtained using the fractal dimension. According to the experimental results, for small object detection the use of the s-dimension content is much more efficient than that of the fractal dimension, which shows the importance of the s-dimension content for image analysis.
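The abstract does not define the s-dimension content itself, but the covering-blanket (Peleg) computation it builds on can be sketched. In this 1D illustration (all names are ours), an upper and a lower blanket are grown around the grey-level profile one level per iteration, and the fractal dimension is estimated from how the blanket area scales with the blanket thickness:

```python
import math

def blanket_areas(f, max_eps):
    """Peleg covering-blanket method on a 1D profile f: grow an
    upper blanket u and a lower blanket b by one grey level per
    iteration and record the area A(eps) = volume / (2*eps)."""
    n = len(f)
    u, b = list(f), list(f)
    areas = []
    for eps in range(1, max_eps + 1):
        u = [max(u[i] + 1, u[max(i - 1, 0)], u[min(i + 1, n - 1)])
             for i in range(n)]
        b = [min(b[i] - 1, b[max(i - 1, 0)], b[min(i + 1, n - 1)])
             for i in range(n)]
        areas.append(sum(ui - bi for ui, bi in zip(u, b)) / (2 * eps))
    return areas

def fractal_dimension(f, max_eps=8):
    """Estimate D from the slope of log A(eps) vs log eps,
    using A(eps) ~ eps^(1-D) for a fractal profile."""
    areas = blanket_areas(f, max_eps)
    xs = [math.log(e) for e in range(1, max_eps + 1)]
    ys = [math.log(a) for a in areas]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 - slope
```

For a smooth (non-fractal) profile the blanket area stays constant as `eps` grows, so the estimate returns a dimension near 1; rougher profiles yield larger values. Restricting the computation to sliding windows of varying size, as the paper proposes, turns this global estimate into a local, multi-scale feature.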
A method is presented to automatically generate 3D models of house roofs from aerial images of residential areas in urban sites. First, homogeneous regions with consistent photometric and chromatic properties are extracted using a triangulation network. Stereo matching of straight line segments is performed between corresponding regions only. Line segments that are matched across at least three views are reconstructed by a bundle adjustment procedure. The reconstructed line segments are grouped into polygon hypotheses; each hypothesis is subjected to a consistency verification with respect to the 3D reconstruction and the original image data and, if necessary, corrected accordingly. Note that the combinatorics is kept under control by processing one region at a time. In a next stage, the polygons are glued together into a roof model. The emphasis here is on extracting the correct topology of the roof structure. Metric accuracy of the reconstruction is obtained in an additional step by backprojecting the recovered model of the roof structure onto the images and minimizing the total reprojection error. The viability of this approach has been tested on a state-of-the-art dataset of aerial images of residential areas in Brussels.
Onboard real-time analysis of remotely sensed images can reduce the data flow to the ground stations in high-resolution Earth monitoring systems. Fast image texture analysis is possible by processing an optically obtained Fourier spectrum of the image. In this paper, the use of an incoherent light source in an optoelectronic Fourier spectrum analyzer is considered. Using an incoherent light source of a specific size makes it possible to provide optimal spatial filtering of the spectrum, reduce the sensitivity to phase distortions, and obtain a compact and rugged design suitable for onboard installation. A special algorithm for digitizing the spectrum image is developed to enlarge the range of spatial frequencies to be processed and to maximize the photosensitivity. Algorithms for the recognition of texture properties are developed and tested using a prototype incoherent Fourier spectrum analyzer. A description of the prototype is also given.
This study deals with the simulation of the image formation produced by a nonlinear image processor composed of a nonlinear medium placed at the common focus of a 4-f system. A simple linear model is developed, taking into account the third-order optical nonlinearity of the medium and the optical transfer function of the 4-f system. The simulated images through the system are given for phase and/or amplitude rectangular objects. Our model and the corresponding simulation make it possible to optimize the nonlinear parameters of the medium for this class of objects, whatever their transmittance, in order to enhance the visibility in image processors.
The paper describes a technique developed to create synthetic response images starting from fractal images. The purpose of constructing these images is to test whether the model developed by McCloy, which specifies the relationship between cover conditions and image response, provides a good description of this relationship. The paper shows that the images created by this technique indicate that this model is a good approximation of what actually happens in natural environments, but not in agricultural and other man-controlled environments. The work, in measuring the fractal dimension of a number of different cover types at two resolutions, has also indicated potentially significant differences in the fractal dimension of surfaces at different data resolutions.
This paper studies the detection of SAR targets in clutter using polarimetric information. Two classes of detection algorithms, the weighting-matrix algorithms and the weighting-vector algorithms, are discussed. In particular, six multi-look fully polarimetric detectors (the multi-look polarimetric whitening filter, the polarization-match, maximum/minimum-power, and odd-/even-scattering detectors) and two suboptimal detectors (the total-power detector and the single-polarization-channel intensity detector) are constructed and theoretically analyzed. Experimental results are also given for demonstration.
New integral multipositional remote sensing techniques have been developed to obtain the optical parameters of the atmosphere with adequate accuracy. The results of the development of these multipositional techniques and of the error calculations are considered.
An automatic edge thresholding approach, based on the investigation of local histograms of small nonoverlapping blocks of the quantized edge magnitude, is proposed. The edge magnitude is first quantized and then divided into small nonoverlapping blocks. A threshold for each block is chosen using an iterative procedure. In this paper the effect of the choice of the quantizer is investigated using a quantitative measure. The performance of three quantizers is studied and compared both with the result obtained without quantization of the gradient image and with a previously reported method for automatic threshold selection in edge detection.
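The iterative per-block procedure is not spelled out in the abstract; a common choice for this kind of histogram-based selection, shown here as an assumption rather than the paper's exact method, is the Ridler-Calvard (isodata) iteration, which alternates between splitting the magnitudes at the current threshold and resetting the threshold to the mean of the two class means:

```python
def iterative_threshold(values, tol=0.5):
    """Ridler-Calvard (isodata) threshold on a list of edge
    magnitudes: start at the global mean, then repeatedly set
    t to the average of the below- and above-threshold means
    until t changes by less than tol."""
    t = sum(values) / len(values)
    while True:
        low = [v for v in values if v <= t]
        high = [v for v in values if v > t]
        if not low or not high:  # degenerate block: keep current t
            return t
        t_new = 0.5 * (sum(low) / len(low) + sum(high) / len(high))
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

Applying this independently to each small block, as the paper proposes, adapts the edge threshold to local contrast instead of using one global value.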
This paper describes an image analysis method based on a new nonparametric criterion for the detection of change points of random fields and on the post-analysis of statistics. A peculiarity of the method is that it does not require a priori data on the statistical properties of the textures. In a possible mathematical model, the objects to be detected in images of the Earth's surface or the ocean are represented as relatively large homogeneous adjacent subareas with rather smooth boundaries. The analysis of such images can be carried out by sequential change-point detection methods, where the basis for detecting objects and their boundaries in the image is the difference between the statistical properties of the objects' textures. Algorithms for the 2D post-analysis of statistics are developed. Computer simulations with satellite images are also provided.
The integral procedures commonly used to invert the location equation for the medium suffer from their reliance on a priori assumed boundary values and on an assumed relationship between the backscattering and extinction coefficients. This has necessitated the development of new integral techniques based on sounding the investigated atmospheric volume from different points in space. Unlike traditional techniques, these techniques make it possible to find a posteriori boundary values and the relationship between the measured coefficients. Statistical error simulations show the new approach to be effective for a variety of typical atmospheric scenarios.
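For context, the inversion problem referred to appears to be that of the single-scattering laser sounding (lidar) equation; in standard notation (ours, not the paper's), the power received from range $r$ is

```latex
P(r) = P_0 \,\frac{A}{r^2}\,\beta(r)\,
       \exp\!\Big(-2\int_0^r \sigma(r')\,dr'\Big),
```

where $\beta$ is the backscattering coefficient and $\sigma$ the extinction coefficient. One-ended inversions (e.g., Klett's method) must assume a boundary value $\sigma(r_m)$ and a power-law link $\beta = B\sigma^k$ in advance; sounding the same volume from several positions is what allows these quantities to be determined a posteriori instead.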
During the last years, wavelets have become very popular in the fields of signal processing and pattern recognition and have led to a large number of publications. In the discipline of remote sensing several applications of wavelets have emerged, too. Among them are such diverse topics as image data compression, image enhancement, feature extraction, and detailed data analysis. On the other hand, the processing of remote sensing image data - both for optical and radar data - follows a well-known systematic sequence of correction and data management steps supplemented by dedicated image enhancement and data analysis activities. In the following we will demonstrate where wavelets and wavelet transformed data can be used advantageously within the standard processing chain usually applied to remote sensing image data. Summarizing potential wavelet applications for remote sensing image data, we conclude that wavelets offer a variety of new perspectives especially for image coding, analysis, classification, archiving, and enhancement. However, applications requiring geometrical corrections and separate dedicated representation bases will probably remain a stronghold of classical image domain processing techniques.
Stochastic algorithms are very powerful for the reconstruction of degraded images, but their iterative stochastic processes require a large number of operations. We are investigating massively parallel implementations of simulated-annealing-based image processing algorithms. In this paper, we discuss the compromises involved in such implementations, based on a study of two dynamics: the Metropolis and Glauber dynamics. Simulation results are reported.
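The two dynamics differ only in the acceptance rule applied to a proposed energy change ΔE at temperature T. A minimal sketch (function names ours):

```python
import math
import random

def accept_metropolis(delta_e, temp):
    """Metropolis rule: always accept downhill moves, accept
    uphill moves with probability exp(-dE/T)."""
    return delta_e <= 0 or random.random() < math.exp(-delta_e / temp)

def accept_glauber(delta_e, temp):
    """Glauber (heat-bath) rule: accept with probability
    1 / (1 + exp(dE/T)), even for downhill moves."""
    return random.random() < 1.0 / (1.0 + math.exp(delta_e / temp))
```

Both rules satisfy detailed balance for the same Gibbs distribution, so either can drive the annealing; on a massively parallel machine they would be evaluated simultaneously over non-interacting (e.g., checkerboard) subsets of pixel sites, which is where the implementation compromises discussed in the paper arise.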
Applications like the transmission of SAR images processed on board in real time for ship, ice, or oil slick monitoring and detection, as well as ground segment applications, require a new philosophy for data representation and compression. This also includes high-speed, high-resolution data dissemination, for example for flood monitoring, where the transmission in near real time of high-resolution data via the Internet could be a major improvement at the mission level. Conventional SAR quicklook images do not satisfy the spatial resolution requirements of such applications. As an alternative, we propose a new visual epitome based on a wavelet feature coding technique for SAR images, in order to preserve the spatial resolution and achieve high compression factors. Combining data compression, despeckling, and image restoration allows us to reach compression rates of up to about 850, thus permitting easy storage in centralized archives as well as rapid dissemination over standard networks. After decompression at the user site, the quality of the quicklook images permits the visual inspection and analysis of all spatially important image details. This becomes apparent when comparing conventional multilook quicklook images with their wavelet feature coded and decompressed counterparts. Typical examples are demonstrated. Owing to the extremely high compression rates, the radiometric quality of the quicklook images is degraded. However, the wavelet multiresolution representation of the images offers the additional potential of progressive transmission, which is stopped interactively when an acceptable level of radiometric fidelity is reached. The decompression effort is small, robust algorithms are available, and further compression optimizations are being investigated.
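The specific wavelet feature coder is not described in the abstract. As a toy illustration only of the underlying trade (discarding small wavelet coefficients buys compression at the cost of radiometric fidelity), a 1D orthonormal Haar transform with hard thresholding can be sketched as follows (signal length must be a power of two; all names are ours):

```python
import math

def haar_fwd(x):
    """Multilevel orthonormal Haar transform: repeatedly replace
    the current approximation with pairwise sums/differences
    scaled by 1/sqrt(2)."""
    x = list(x)
    n = len(x)
    while n > 1:
        half = n // 2
        avg = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(half)]
        det = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(half)]
        x[:n] = avg + det
        n = half
    return x

def haar_inv(c):
    """Inverse of haar_fwd: rebuild pairs level by level."""
    c = list(c)
    n = 1
    while n < len(c):
        avg, det = c[:n], c[n:2 * n]
        out = []
        for a, d in zip(avg, det):
            out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
        c[:2 * n] = out
        n *= 2
    return c

def hard_threshold(coeffs, t):
    """Zero out coefficients below magnitude t; the surviving
    few are what a coder would actually store or transmit."""
    return [c if abs(c) > t else 0.0 for c in coeffs]
```

Storing only the nonzero thresholded coefficients (plus their positions) is the source of the compression; progressive transmission follows naturally by sending the coarse approximation coefficients first and the detail levels afterwards.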
SAR systems, like any coherent imaging system, are subject to (i) speckle effects, which considerably reduce the useful detail within the acquired scenes, and (ii) strong geometric distortions. Furthermore, the resolution of SAR systems is comparable to the size of many of the objects of interest in the scene. Our paper proposes a unified treatment of these problems within the framework of probabilistic inference. In the first case, despeckling and segmentation are the main objectives. In the second case, owing to the strong geometric aberrations introduced by the SAR image formation system, the emphasis is on image resampling, with speckle reduction and image segmentation as collateral but strongly related issues. In both cases, the model is built upon the statistical properties of the speckle noise and the SAR image formation equations.
It is well known that remotely sensed reflectances in the visible and near-IR spectral regions are subject to perturbations caused by the atmospheric and geometrical conditions at the time an image is taken. This paper starts with a description of a newly developed analysis tool, SATCO, which simulates satellite signals of different surfaces under different geometrical and atmospheric conditions in the first three spectral bands of the VEGETATION sensor on the SPOT4 platform, due to be launched early in 1998. The SATCO results are then used to develop a database that will be the core of a new 'fuzzy' methodology for extracting the top-of-canopy (TOC) reflectances at nadir viewing conditions. From there, a simple compositing strategy is developed, resulting in estimated TOC values over a ten-day period. Compared with conventional compositing strategies based on vegetation indices, this new methodology shows a distinct reduction of the root mean square error of the estimated TOC reflectances with respect to the true values.
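For reference, the conventional vegetation-index compositing that serves as the baseline is typically a maximum-value composite (MVC): for each pixel, keep the observation in the compositing window with the highest NDVI. A sketch under that assumption (variable names ours):

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def max_value_composite(series):
    """MVC for one pixel: series is a list of (red, nir)
    reflectance pairs over the compositing period; return the
    observation with the highest NDVI."""
    return max(series, key=lambda p: ndvi(p[0], p[1]))
```

MVC tends to favor off-nadir, atmosphere-dependent observations, which is precisely the bias the fuzzy TOC-extraction methodology is designed to reduce.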