The wide availability of workstations has made the creation of sophisticated image processing algorithms economically possible. Here the latest version of an algorithm designed to detect fronts automatically in satellite-derived Sea Surface Temperature (SST) fields is presented. The algorithm operates at three levels: picture level, window level, and local/pixel level, much as humans seem to. Following input of the data, the most obvious clouds (based on temperature and shape) are identified and tagged so that data which do not represent sea surface temperature are not used in the subsequent modules. These steps operate first at the picture level and then at the window level. The procedure continues at the window level with the formal portion of the edge detection. Using techniques for unsupervised learning, the temperature distribution (histogram) in each window is analyzed to determine the statistical relevance of each possible front. To remedy the weakness that clouds and water masses do not always form compact populations, the algorithm also includes a study of spatial properties rather than relying entirely on temperatures. In this way, temperature fronts are unequivocally defined. Finally, local operators are introduced to complete the contours found by the region-based algorithm. The resulting edge detection is based not on the absolute strength of the front but on its relative strength in context, making the edge detection temperature-scale invariant. The performance of this algorithm is shown to be superior to that of other algorithms commonly used to locate edges in satellite-derived SST images.
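The window-level histogram analysis can be sketched as follows. This is a minimal illustration, not the paper's actual statistical test: it uses an Otsu-style between-class variance ratio as a stand-in for the "statistical relevance" of a candidate front, and the function name and parameters are hypothetical. Because the score is a variance ratio, it is invariant to affine rescaling of temperature, matching the scale-invariance property claimed above.

```python
import numpy as np

def window_front_threshold(window, n_bins=64):
    """Find the temperature that best splits a window's histogram into
    two populations, and score how bimodal the split is (Otsu-style).
    A high score suggests a front separating two water masses.
    Hypothetical sketch; NaN pixels (e.g. cloud-masked) are ignored."""
    data = window[np.isfinite(window)].ravel()
    hist, edges = np.histogram(data, bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    total_mean = (p * centers).sum()
    w_cum = np.cumsum(p)                # cumulative class-0 weight
    m_cum = np.cumsum(p * centers)      # cumulative class-0 mass
    best_score, best_t = 0.0, None
    for k in range(1, n_bins):
        w0, w1 = w_cum[k - 1], 1.0 - w_cum[k - 1]
        if w0 < 1e-6 or w1 < 1e-6:
            continue
        m0 = m_cum[k - 1] / w0
        m1 = (total_mean - m_cum[k - 1]) / w1
        between = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if between > best_score:
            best_score, best_t = between, centers[k]
    total_var = ((centers - total_mean) ** 2 * p).sum()
    # ratio in [0, 1]: fraction of variance explained by the split,
    # which makes the test temperature-scale invariant
    score = best_score / total_var if total_var > 0 else 0.0
    return best_t, score
```

A window containing two distinct water masses yields a score near 1; a cloud-free, front-free window yields a low score, so no front is declared regardless of the absolute temperature range.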
A method of image classification based on image texture is presented. Texture analysis is performed using the variogram function. The classifier is a supervised parallelepiped-type classifier with a minimum-distance-to-mean check for overlapping classifications.
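A minimal sketch of this approach, with hypothetical names: the variogram is computed here along image rows only, and its values at a few lags serve as the texture feature vector fed to a parallelepiped classifier with a minimum-distance-to-mean tiebreak.

```python
import numpy as np

def variogram_features(img, lags=(1, 2, 4, 8)):
    """Empirical semivariogram along rows: for each lag h,
    gamma(h) = mean((I(x) - I(x+h))**2) / 2. The vector of gamma
    values acts as a texture signature (hypothetical sketch)."""
    img = np.asarray(img, dtype=float)
    return np.array([0.5 * np.mean((img[:, h:] - img[:, :-h]) ** 2)
                     for h in lags])

class ParallelepipedClassifier:
    """A sample belongs to a class if every feature lies inside that
    class's [min, max] box; overlapping boxes (or no box) are resolved
    by minimum distance to the class mean."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.lo_ = {c: X[y == c].min(axis=0) for c in self.classes_}
        self.hi_ = {c: X[y == c].max(axis=0) for c in self.classes_}
        self.mean_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, X):
        out = []
        for x in X:
            inside = [c for c in self.classes_
                      if np.all(x >= self.lo_[c]) and np.all(x <= self.hi_[c])]
            cands = inside if inside else list(self.classes_)
            out.append(min(cands,
                           key=lambda c: np.linalg.norm(x - self.mean_[c])))
        return np.array(out)
```

Fine-grained texture has a variogram that rises immediately to the sill, while coarse texture keeps gamma low at short lags, so the two are separable from the first few lag values alone.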
A pattern recognition technique for terrain analysis is described which uses image processing and statistical inference techniques to classify landforms from an input Digital Elevation Model (DEM) and a database of previously stored patterns. The methodology involves the development of landform signatures from topographic primitives calculated from training areas in the DEM. Local statistical properties of characteristic distributions of primitives are calculated from spatial cooccurrence matrices. Unknown terrain in DEMs can then be classified using an associative memory technique. The method is easily scaled in terms of the number of patterns that can be stored in the database as well as the amount of generalization performed by the associative memory.
Two novel approaches to texture classification based upon stochastic modeling using Markov Random Fields are presented and contrasted. The first approach uses a clique-based probabilistic neighborhood structure and Gibbs distribution to derive the quasi-likelihood estimates of the model coefficients. Likelihood-ratio tests formed from the quasi-likelihood functions of pairs of textures are evaluated in the decision strategy to classify texture samples. The second approach uses a least-squares prediction error model and error signature analysis to model and classify textures. The distribution of the errors is the information used in the decision algorithm, which employs K-nearest-neighbor techniques. A new statistic and complexity measure, called the K-nearest neighbor statistic (KNS) and complexity (KNC), are introduced; they measure the overlap in K-nearest-neighbor conditional distributions. Parameter vectors for each model, neighborhood size and structure, and the performance of the maximum-likelihood and K-nearest-neighbor decision strategies are presented, and interesting results are discussed. Results from classifying real video pictures of six cloth textures are presented and analyzed.
An approach to the fusion of information from airborne sensors for the purpose of target detection is described. This approach differs from alternate strategies in that the fusion occurs at the target hypothesis level, a symbolic level, rather than at the sensor level, i.e., candidate target coordinates are merged into correlated target hypotheses. Thus, a source in this approach consists of both a sensor which provides data about the target environment, and a list of candidate target coordinates generated as output from a target detection algorithm. The fusion algorithm is based on generating a statistical model for the detection and false alarm performance of each target coordinate source. Special emphasis is placed on modeling the positional misregistration which occurs when imagery is extracted from different platforms. An iterative clustering algorithm is derived from the source models based on a maximum likelihood target location estimation approach. Results of multisource fusion on several synthetic datasets are provided which indicate the encouraging performance of the system even under severe clutter and sensor misregistration conditions.
Since the beginning of space-borne remote sensing less than two decades ago, sensor technologies have greatly advanced. State-of-the-art sensor systems, such as the Earth Observing System (Eos), will have higher spatial, spectral, and radiometric resolutions, which are selected together to enhance the capability of differentiating surface categories. Multiple, pointable platforms covering different parts of the electromagnetic spectrum will circle the earth, detect and monitor terrestrial changes, and measure essential surface and atmospheric parameters. It is anticipated that sensors of future generations will have even greater spectral, spatial, and radiometric resolutions. However, resolutions cannot increase without bound. Noise of electronic, mechanical, optical, and atmospheric origins limits the effective resolution of the measurements. In this paper, several aspects of the effects of radiometric resolution on remotely sensed data are examined. It is shown that higher radiometric resolution indeed improves information content. But to improve the utilization of the spectrometer, radiometric sensitivity must also be modified. Using clusters constructed from empirical signatures, it is shown that discriminability between clusters converges beyond 6 bits. It is also shown that the information content of current sensor measurements is limited not by the atmosphere but by the sensitivity settings of the spectrometers. It is proposed that a spectrometer with variable sensitivity, capable of sampling scene radiance over its full dynamic range, be used as a means of optimizing information content. If implemented, the same amount of information content currently observed could be measured with fewer bits.
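The convergence of discriminability with bit depth can be illustrated with a toy experiment. This is not the paper's analysis: the cluster means, noise levels, dynamic range, and the use of nearest-mean classification accuracy as a proxy for discriminability are all assumptions chosen for illustration.

```python
import numpy as np

def quantize(x, bits, lo=0.0, hi=100.0):
    """Uniform quantization of values in [lo, hi] to 2**bits levels,
    returning the level midpoints (hypothetical radiance scale)."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    q = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (q + 0.5) * step

def discriminability(bits, seed=0):
    """Nearest-mean separability of two hypothetical spectral clusters
    after quantizing their radiances to the given bit depth."""
    rng = np.random.default_rng(seed)
    a = quantize(rng.normal(40.0, 3.0, 5000), bits)
    b = quantize(rng.normal(55.0, 3.0, 5000), bits)
    mid = 0.5 * (a.mean() + b.mean())
    return 0.5 * ((a < mid).mean() + (b >= mid).mean())

for bits in (2, 4, 6, 8, 10):
    print(f"{bits:2d} bits: discriminability {discriminability(bits):.3f}")
```

Once the quantization step falls well below the within-cluster noise, adding bits no longer changes which side of the decision boundary a sample falls on, so the curve flattens, consistent with the observed convergence beyond 6 bits.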
A geometric correction method using a photographic model with the orbit and attitude parameters of the satellite is presented in this paper. A piecewise majorization method is proposed to optimize and identify these parameters with a group of GCPs (Ground Control Points), giving the model high accuracy. This method proves especially effective for images with wide coverage and severe geometric distortion.
A ray-equation-based Kirchhoff depth migration is used to image primary reflections and deep-water multiples recorded on an ocean bottom hydrophone (OBH). The resulting image of the subbottom sediments is shown to be improved by inclusion of the deep-water multiple in the imaging process. Field data acquired jointly by the Woods Hole Oceanographic Institution and the University of Texas at Austin Institute for Geophysics, consisting of an OBH (2300 m depth) recording a 10,800 cubic inch airgun array, are used to illustrate the feasibility of this technique. Images are obtained from both the primary reflections and energy which has undergone an additional path through the water column. Comparison of these images reveals an excellent correlation of reflectors, with the predicted polarity reversal observed in the multiple's image. Synthetic data are used to examine the difficulties in identifying the true path of the water-column multiple. For flat-layered media there are two different multiple paths, one which reflects beneath the source and one which reflects over the receiver, which have identical travel times when the seafloor is approximately horizontal. They do not, however, have the same amplitude, and it can be shown that their amplitudes differ sufficiently to allow a reliable image to be extracted from the energy which reflects over the receiver (the receiver multiple). The difference in amplitude between this receiver multiple and the primary reflection is mostly due to geometric spreading and attenuation in the water column. This difference is usually small enough to allow observation of most primary events in the receiver multiple. While conventional seismic imaging techniques utilize only primary reflected energy, we have shown that for an on-bottom recording geometry, energy reflecting from the free surface may also be used to image the subsurface.
As a final step, the image obtained from the multiple is corrected for the π phase shift from the free surface and added to the image from the primary reflection. The final image shows both extended lateral coverage and an increased signal-to-noise ratio.
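In outline, this final stacking step might look like the following sketch. It assumes (beyond what the abstract states) that the two migrated images are co-registered arrays with NaN marking areas without coverage, and that the π phase-shift correction amounts to a sign flip of the multiple's image.

```python
import numpy as np

def stack_primary_and_multiple(primary, multiple):
    """The receiver-multiple image acquires a pi phase shift (polarity
    reversal) at the free surface; flip its sign, then average it with
    the primary image wherever either has coverage (NaN = no coverage).
    Hypothetical sketch of the stacking step."""
    corrected = -np.asarray(multiple, dtype=float)   # undo polarity reversal
    imgs = np.stack([np.asarray(primary, dtype=float), corrected])
    counts = (~np.isnan(imgs)).sum(axis=0)           # images covering each point
    with np.errstate(invalid="ignore"):
        return np.where(counts > 0,
                        np.nansum(imgs, axis=0) / counts,
                        np.nan)
```

Where both images cover a reflector, averaging reduces incoherent noise; where only the multiple has coverage, its corrected amplitude alone extends the lateral extent of the section.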
In many geophysical problems, we are presented with a large amount of data and asked to invert for some set of physical parameters, the focus being on the design and solution of the inverse problem; in radargrammetry, however, the inverse problem is straightforward and we must concern ourselves instead with picking the useful data attributes to invert. This paper discusses Synthetic Aperture Radar (SAR) imaging with emphasis on radar backscatter properties. In studying how planetary surfaces modulate the radar cross-section, we determine that both the amount of and variation in backscattered microwave energy provide the necessary cues (i.e., object shading and boundary information) for stereo-interpretation. While past research into automated reconstruction of topography from radar stereo-pairs has concentrated on the use of shading, we show that boundary information may also be used successfully. We also identify how these attributes can be potentially misleading. In closing, this work suggests a process for generating high-quality topography maps from SAR imagery.
Experiments have been carried out with color workstation technology to permit color-blind persons to maximize their ability to interpret imagery. Software has been developed allowing individuals to tune and store their own imagery interpretation color scales. Test subjects comprising one color-impaired group and one control group with normal sight were requested to tune their own scales for interpreting three weather satellite images. Some of the results of these studies are presented.
SMSSV (Space Mission Scenario Simulation and Visualization), a system employing advanced computer graphics and animation techniques for spacecraft mission simulation, is described. The system provides capabilities for complex model generation of both man-made and natural phenomena. It models orbital dynamics of terrestrial satellites, supports solid models for the earth, sun, and moon, and simulates the dynamics of terrestrial satellites in arbitrary elliptical orbits. A stellar background is also generated, including magnitudes and spectral types.
The acquisition and real-time analysis of comprehensive, high resolution, meteorological data sets require considerable processing power. Each data source (such as radar, satellite, observing networks) requires unique processing to acquire the data, control quality, and convert the data into a user acceptable form. To rapidly present these data for display at a workstation, much of the data are routinely converted into display-ready form and stored on the workstation disk. The PROFS PC-based workstation allows the forecast and research meteorologist to rapidly manipulate the displays and also access the raw data for custom processing. Although the workstation has been optimized for real-time response, the software is being extended to also allow some review and perusal of recorded data.
Data-parallel algorithms for image computing on the Connection Machine are described. After a brief review of some basic programming concepts in *Lisp, a parallel extension of Common Lisp, data-parallel programming paradigms based on a local (diffusion-like) model of computation, the scan model of computation, a general interprocessor communications model, and a region-based model are introduced. Algorithms for connected component labeling, distance transformation, Voronoi diagrams, finding minimum cost paths, local means, shape-from-shading, hidden surface calculations, affine transformation, oblique parallel projection, and spatial operations over regions are presented. A new algorithm for interpolating irregularly spaced data via Voronoi diagrams is also described.
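The diffusion-like local model can be illustrated with connected component labeling, emulated here with whole-array operations rather than *Lisp: each pixel acts as a processor that updates in lockstep using only its 4-neighbors. This is a sketch of the paradigm, not the paper's implementation.

```python
import numpy as np

def label_components(mask):
    """Connected-component labeling in a diffusion-like data-parallel
    style: every foreground pixel starts with a unique label, and at
    each step all pixels simultaneously take the minimum label over
    themselves and their 4-neighbors, iterating to a fixed point.
    Background pixels are returned as -1."""
    mask = np.asarray(mask, dtype=bool)
    BIG = mask.size  # sentinel larger than any real label
    labels = np.where(mask, np.arange(mask.size).reshape(mask.shape), BIG)
    while True:
        p = np.pad(labels, 1, constant_values=BIG)
        # lockstep update: min over self and 4-neighborhood
        neigh = np.minimum.reduce([p[1:-1, 1:-1], p[:-2, 1:-1],
                                   p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        new = np.where(mask, neigh, BIG)
        if np.array_equal(new, labels):
            return np.where(mask, new, -1)
        labels = new
```

The number of lockstep iterations is bounded by the longest path within a component, which is the usual cost profile of the diffusion model; the scan model and general-communication primitives exist precisely to beat this bound for long, thin components.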
The weather information industry is now just ten years old. In the past decade, the amount of raw data available from all sources has grown by at least two orders of magnitude. In the next decade, even greater growth can be expected. New systems such as NEXRAD, ASOS, the Profiler network, and GOES NEXT will make the data assimilation/processing problem one of our greatest challenges. Will the rapidly changing communications and computing technologies be able to handle the growth? Even as personal computers become smaller and more powerful, will individual users be able to cope with the flood of new information? This paper will review these trends, and explore the options that are available. In particular, emphasis will be placed on preprocessing these datasets, to provide "value-added raw data", which can then be further processed and analyzed to meet individual users' needs.
In many office environments, using imagery is as integral a part of job performance as using textual and numeric information. The use of images is common, for example, in such diverse areas as medical diagnosis, land management, and weather forecasting. For some time, computer-based office systems have provided tools for the manipulation of textual and numeric information. Technology has now made computer-based storage, retrieval, display, and manipulation of imagery also feasible for office applications.
Coding schemes for data rate reduction of digital video signals are being devised for various application areas. Such applications call for video signal processors suited for real-time operation and realizable in a small size. Small size can be achieved using advanced VLSI technology. Real-time processing of video signals requires several hundred mega-operations per second (MOPS) and correspondingly high data rates for operand transport. These requirements can be met by multiprocessors employing parallelization and pipelining in an adapted architecture. In order to support distinct applications, the multiprocessors have to be programmable. The requirements of video coding schemes have been extracted and mapped into a multiprocessor architecture for programmable real-time video processing. In this contribution, the extracted requirements, the adapted architecture of the multiprocessor, and the multiprocessor modules are presented. The realization of several modules of the multiprocessor using CMOS technology is also reported.
We consider a texture as an image composed of a number of different subimages. In this paper, an attempt is made to determine all subimages present in a given textured image, and how many times each subimage repeats, using texture features such as two-dimensional entropy, contrast, and homogeneity.
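The three features named above are commonly computed from a gray-level co-occurrence matrix; the following sketch uses one standard formulation (the function names are hypothetical, and the image is assumed to be pre-quantized to a small number of gray levels).

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for a (dy, dx) offset (dx, dy >= 0),
    normalized to a joint probability distribution over gray-level pairs."""
    img = np.asarray(img, dtype=int)
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]   # reference pixels
    b = img[dy:, dx:]                                 # offset neighbors
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def texture_features(img, levels=8):
    """Two-dimensional entropy, contrast, and homogeneity of the GLCM,
    using the standard Haralick-style definitions."""
    p = glcm(img, levels=levels)
    i, j = np.indices(p.shape)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return entropy, contrast, homogeneity
```

A uniform subimage gives zero entropy, zero contrast, and homogeneity 1, while a high-frequency pattern drives contrast up and homogeneity down, so the triple of features separates repeated subimage types.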