Textural features have been widely used by the pattern recognition community for classification of digital images.4,11,14 However, the meteorological community has virtually ignored texture-based methods in cloud classifications. Emphasis strictly upon spectral signatures3 is at least in part due to the studies of Parikh12 and Parikh and Rosenfeld.13 They found that attempts to resolve ambiguities in multispectral cloud type signatures by the addition of textural features did not produce a significant reduction in misclassification errors. However, these results were based upon 2 n mi spatial resolution in the visible channel and 4 n mi spatial resolution in the infrared. The present study reexamines the applicability of texture-based features for automatic cloud classification using very high spatial resolution (57 m) LANDSAT multispectral scanner digital data. We conclude that cloud classification can be accomplished using only a single visible channel.
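Textural features of this kind are typically statistics of the grey-level co-occurrence matrix (GLCM). As an illustrative sketch only (the study's exact feature set is not reproduced here), the classic GLCM contrast feature for horizontal neighbour pairs can be computed as:

```python
import numpy as np

def glcm(img, levels):
    """Normalized grey-level co-occurrence matrix for horizontal
    neighbour pairs. img: 2-D integer array with values in [0, levels)."""
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: co-occurrences weighted by squared grey-level
    difference, so smooth fields score near 0 and busy fields score high."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

A cumulus field with fine-scale brightness variation yields a much higher contrast than a uniform stratus deck of similar mean brightness, which is exactly the kind of separation a purely spectral signature cannot provide.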
A method is derived using digitised GOES east visual images to estimate the albedo of the Earth's surface. Atmospheric effects are accounted for with two equations from Nack and Curran1 which depend only on the solar zenith angle and which may be reparameterized to fit quantities derived from measurements. The albedos so found for Hampton, Virginia, using satellite brightness counts, meteorological data, surface insolation measurements and astronomical computations, are in agreement with values cited in the literature.
An interactive, computer-based system is being investigated, as a part of the SEACAST objective, to automate meteorological satellite imagery interpretation. This system is composed of two elements: 1) a digital image processing procedure and 2) an expert analysis knowledge base. The purpose of the image processing algorithms is to infer low-level attributes of weather satellite imagery features; the purpose of the knowledge base is to relate these attributes to significant meteorological events. The feasibility of such a system is discussed.
An operational remote sensing system is described which supports environmental monitoring using multisensor, multitemporal data acquired by geostationary and polar-orbiting weather satellites. The information derived from the satellite images consists of continental-scale maps with data on estimated rainfall, the vegetation index (NDVI) and, for experimental use, the soil water available to crops. The operational system, called ARTEMIS, will meet the information requirements of the FAO monitoring programmes in the areas of food and feed security and plant protection.
A method is presented for transforming cloud images from one spaceborne sensor format to another. The procedure, known as Cloud Image Standardization (CIS), uses the known spatial resolution and spectral response properties of a sensor and the modeled angular scattering and emittance properties of various cloud types to derive spatial and spectral resampling relationships. The CIS system can be used to simulate cloud images of a supported sensor from cloud images of other supported sensors. The supported sensors potentially include DMSP OLS, GOES VISSR, NOAA AVHRR, and Landsat MSS and TM. Image standardization enables direct comparison of cloud images from different sensors and provides an interface applicable to the assimilation of nonstandard imagery data by specialized applications models. Since images processed by the CIS system appear to have been generated by the same sensor, there is an effective increase in the global coverage of cloud image data. Applications to cloud field transformations are presented using Landsat MSS and GOES image data.
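The spatial half of such a standardization can be as simple as block-averaging the finer sensor's pixels down to the coarser sensor's ground resolution. A minimal sketch under the assumption of an integer resolution ratio (the CIS system's actual resampling relationships are not specified here):

```python
import numpy as np

def degrade_resolution(img, factor):
    """Block-average a fine-resolution image to a coarser grid,
    simulating a sensor with `factor`-times-larger pixels.
    factor: integer ratio of coarse to fine pixel size."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]  # trim partial blocks
    return img.reshape(h // factor, factor,
                       w // factor, factor).mean(axis=(1, 3))
```

For example, 57 m Landsat MSS pixels could be aggregated by a large factor to approximate the footprint of a coarser geostationary sensor; the spectral half of the transformation would additionally reweight the channels by the target sensor's response.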
A primary goal of satellite cloud analysis is to objectively detect the presence of a cloud field and to identify its spatial structure. Several different classical non-metric algorithms exist which, to varying degrees, approach this goal. This paper describes how a technique using metric (Euclidean distance) statistical methods is applied in the detection and subsequent verification of cloud fields. To quantitatively and objectively determine the effectiveness of several different cloud detection algorithms, an inferential metric statistical technique is applied. This method, termed Multi-Response Blocking Procedures (MRBP), compares the results of the detection algorithms with a verification data set, resulting in a measure of agreement and a level of significance. The MRBP results are presented and discussed.
This paper describes a class of cloud detection algorithms that are well suited for implementations that provide ultrafast information about current cloud conditions. The algorithms can work on a variety of sensor data including data from satellites, radars and all-sky cameras. Using currently available VLSI technology, the algorithm implementations can generate information in less than five seconds after the sensor data are received.
Oceanic cloud patterns are classified in twenty classes from visible and infrared imagery available from a geostationary satellite. A vector of features representing height, albedo, shape and multilayering characteristics of the cloud fields permits an objective classification. An original aspect of the scheme is its capability to recognize directional patterns such as cloud 'rolls' or 'streets', and doughnut-shaped open cells as well, from features derived from the power spectrum of the visible image. The classifier was trained using 2000 samples of size 128 × 128 km extracted from February 1984 images over the Northwestern Atlantic. Comparison with expert nephanalysts suggests strict accuracy in 79% of the cases, while the machine gives at least the second best choice among twenty classes 89% of the time. The McIDAS system is used to process the imagery. The grid of analysis is superimposed on the satellite image and, as the program runs, the class number appears in the middle of each box at the rate of one every 2.5 seconds while all the information retrieved is stored in a file. Applications of the scheme are suggested for meteorological parameters such as the probability of precipitation and the surface air and dew point temperature.
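The directional features come from the anisotropy of the 2-D power spectrum. One such feature, sketched here under illustrative assumptions (the paper's exact feature definitions are not reproduced), is the orientation at which spectral energy concentrates:

```python
import numpy as np

def dominant_spectral_angle(img, bin_deg=10):
    """Angle (degrees, in [0, 180)) at which the 2-D power spectrum
    carries the most energy. Banded textures such as cloud 'streets'
    concentrate energy perpendicular to the band direction, while
    isotropic patterns such as open cells spread it evenly."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())) ** 2)
    y, x = np.indices(f.shape)
    y = y - f.shape[0] // 2
    x = x - f.shape[1] // 2
    keep = np.hypot(x, y) > 1                     # drop the DC neighbourhood
    ang = np.degrees(np.arctan2(y, x)) % 180
    bins = np.arange(0, 181, bin_deg)
    energy, _ = np.histogram(ang[keep], bins=bins, weights=f[keep])
    return float(bins[np.argmax(energy)])
```

A strongly peaked angular energy histogram flags rolls or streets; a flat one flags cellular or amorphous fields, so the histogram's shape itself can serve as a classification feature.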
The estimation of cloud motion is an active area of computer vision; it addresses the recovery of motion clues from a time-varying image sequence of a cloud cover, usually tracked by radar. Three major issues in this field are: (1) the choice of a measurement procedure and an area within the image where the measurement constraint is guaranteed to achieve minimum error, (2) the choice of a topological constraint and an area where this constraint is guaranteed to achieve minimum error, and (3) the identification of a procedure for coupling the two constraints mentioned above in order to obtain an optimal approximation to the velocity field. In this paper, we show that the retrieval of a reliable motion clue is closely related to the manner in which the measurement and topological constraints are coupled. An example using the gradient vector as a measurement, smoothness as a topological constraint, and the temporal gradient as a coupling factor is given. As shown when this approach is compared against a correspondence-based technique, the differential approach measures only a local average of the true motion. The performance of this differential technique and a correspondence-based technique are compared using a real radar image sequence.
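The gradient-plus-smoothness coupling described above is essentially the Horn-Schunck formulation of differential optical flow. A minimal sketch, with periodic boundaries and fixed parameters as simplifying assumptions:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Differential optical flow: the brightness-constancy measurement
    constraint Ix*u + Iy*v + It = 0 coupled with a smoothness
    (topological) constraint via iterative neighbourhood averaging."""
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def avg(f):  # 4-neighbour mean, periodic boundaries
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)
        t = (Ix * ub + Iy * vb + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ub - Ix * t
        v = vb - Iy * t
    return u, v
```

Consistent with the abstract's observation, this estimate is a local average of the true motion: a feature translated by one pixel yields a smooth flow field biased toward, but not exactly equal to, the true displacement, whereas a correspondence-based matcher recovers the discrete shift directly.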
An image analysis method is presented for use in detecting strong windshear events, called microbursts, in Doppler weather radar images. This technique has been developed for use in a completely automated surveillance system being procured by the Federal Aviation Administration (FAA) for the protection of airport terminal areas. The detection system must distill the rapidly evolving radar imagery into brief textual warning messages in real time, with high reliability.
Even a cursory reading of any daily newspaper shows that we are in the midst of a dramatic revolution in computer graphics. Virtually every day some new piece of hardware or software is announced, adding to the tools available to the working scientist. Three dimensional graphics form a significant part of this revolution, having become virtually commonplace in advertising and on television.
Evaluating model performance is difficult given the volume of model output that must be compared against an analysis for verification. Examining 2-D plots of many different parameters at several levels is tedious, and this information must then be mentally integrated in an attempt to understand where problems may exist. The use of 4-D displays can greatly aid in evaluations by presenting the output of the model as a volume instead of 2-D slices. A capability for 4-D displays of meteorological data is being developed at the Space Science and Engineering Center. The Man-computer Interactive Data Access System (McIDAS) is used for all aspects of the analysis: this includes acquiring data, running the model, storing the output and displaying the results. A version of the Australian Regional Analysis and Forecast Model was applied to the eastern portion of the USA and the adjacent Atlantic Ocean. This assimilation system is being used to analyze intensive observing periods during the GALE (Genesis of Atlantic Lows Experiment) field experiment.
Meteorological data is by nature four dimensional. By creating three dimensional renderings of meteorological information (particularly cloud cover) for each of several times and creating a loop of these images, new views of meteorological parameters and processes are provided. These views are not only interesting visually but provide new insight into the operation of the model and the underlying processes. This paper will demonstrate such techniques on actual data including GOES imagery, radar maps of precipitation, and forecasted cloud cover.
A technique has been developed to display GOES satellite cloud images in perspective over a topographical map. Cloud heights are estimated using temperatures from an infrared (IR) satellite image, surface temperature observations, and a climatological model of vertical temperature profiles. Cloud levels are discriminated from each other and from the ground using a pattern recognition algorithm based on the brightness variance technique of Coakley and Bretherton. The cloud regions found by the pattern recognizer are rendered in three-dimensional perspective over a topographical map by an efficient remap of the visible image. The visible shades are mixed with an artificial shade based on the geometry of the cloud-top surface, in order to enhance the texture of the cloud top.
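The brightness variance idea can be sketched as a local mean and standard deviation computed over small blocks of the IR image; the block size here is an illustrative assumption, not the paper's value:

```python
import numpy as np

def local_stats(ir, block=4):
    """Blockwise mean and standard deviation of IR brightness
    temperature. Following the spatial-coherence idea of Coakley and
    Bretherton, low-variance blocks are candidate uniform scenes
    (clear ground or an unbroken cloud layer), while high-variance
    blocks mark cloud edges or broken cloud."""
    h, w = ir.shape
    b = ir[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block)
    return b.mean(axis=(1, 3)), b.std(axis=(1, 3))
```

Clustering the block means wherever the standard deviation is small recovers the distinct temperature levels of ground and cloud layers, which is the discrimination step that precedes the perspective rendering.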
A computer-generated, perspective topographic map was used for meteorological surface analysis to evaluate its advantage over traditional map backgrounds. A traditional and a perspective map were compared for a precipitation case study that followed the passage of a warm front in the northeastern U.S. Two precipitation data sets were analyzed against two different map backgrounds centered on Pennsylvania. Besides its immediate 3-dimensional "feel," the perspective map delineated the surface topographical features that were coupled with the prefrontal winds to produce the orographically-lifted precipitation patterns. The computer-generated map was judged superior to the traditional map used for surface analysis, especially in the case of the mesoscale data set.
Full color image processing systems are becoming more prevalent in the meteorological community. Aside from the usual capability for display of grey scale and pseudo color imagery, these full color systems can be used to create false color representations of multispectral imagery (the subject of another paper to be presented at this conference). This paper describes a simple technique to extend the display of meteorological imagery on a full color system from the spectral domain to the temporal domain. Potential applications of this technique include the display of temporal changes without having to animate the imagery. It is very difficult to publish images in the literature that demonstrate temporal changes, since in performing the research the investigator usually animates the imagery at a workstation. Now the temporal changes can be displayed in the color domain for representation on a static display. It may even be possible to perform cloud tracking on a static display.
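The mapping from the temporal to the color domain can be as simple as loading three successive images into the red, green and blue channels. A sketch, assuming three co-registered grey-scale images of equal size:

```python
import numpy as np

def temporal_composite(im_t0, im_t1, im_t2):
    """Stack three observation times into one RGB image scaled to
    [0, 1]. Scenery that is unchanged across the times comes out grey;
    anything that moved or changed between the times shows up as
    colored fringes, visible on a static display without animation."""
    rgb = np.stack([im_t0, im_t1, im_t2], axis=-1).astype(float)
    lo, hi = rgb.min(), rgb.max()
    return (rgb - lo) / (hi - lo + 1e-12)
```

A cloud element drifting across the three frames leaves a red-green-blue trail whose geometry encodes its track, which is what makes cloud tracking on a static display conceivable.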
Visible and infrared satellite imagery data are a primary source of global cloud observations. Visible channels measure reflected solar energy and are used to detect clouds and snow. Infrared channels measure emitted thermal energy and, consequently, the brightness temperatures of clouds and the earth's surface both day and night. It is sometimes difficult to interpret such imagery because of varying conditions encountered on global scales. Snow cover is often confused with clouds in visible imagery because each surface reflects sunlight well. Low clouds are frequently confused with cloudfree land and oceans in infrared imagery because their temperatures can be nearly equal. It is found that more confident discriminations can be performed between such features when DMSP Operational Linescan System (OLS), NOAA Advanced Very High Resolution Radiometer (AVHRR), or Nimbus Scanning Multifrequency Microwave Radiometer (SMMR) data are combined into color image products. A multispectral image display technique is described that simultaneously combines several meteorological satellite images into a color image product. The technique, which has its origin in Landsat Multispectral Scanner image processing, is quick and effective, and clearly reveals many features of meteorological interest.
Digital elevation models and Landsat 5 Thematic Mapper (TM) scenes provide high-resolution data over spatial domains of typical interest in complex terrain meteorology. Techniques in use and under development for applying these data to research problems are presented. Topics include decorrelation of topographic shading under direct beam illumination and investigation of nighttime surface temperature.
The Earth Science and Applications Division of the NASA Marshall Space Flight Center has been chartered to conduct research, and to develop and use space technology to gain a basic understanding of the earth processes with emphasis on atmospheric processes. An integral part of the research and development efforts has been the Man-computer Interactive Data Access System (McIDAS). The McIDAS computer system has permitted integration of data from satellites, aircraft remote sensors, ground based meteorological data sources, and modeled atmospheric radiances. The result has been an increase in knowledge of mesoscale atmospheric processes and has enabled researchers to recommend improvements and suggestions for planned future remote sensing instruments.
This paper describes a microcomputer-based system to simultaneously receive, process and display satellite APT cloud images, satellite APT cloud/sea surface temperature fields, weather broadcast teleprinter messages and weather broadcast facsimile charts. While performing other processing, the system can drive a facsimile recorder, a graphics printer or a landline/radio facsimile broadcast. This semi-portable and affordable system was specifically designed for use by remote users of weather data such as fishing vessels, merchant ships, smaller detached weather service centers/offices, and field military units. The heart of the system is a unique Data Manager Card and a low-cost, high resolution image display supporting up to 64 grey shades and/or high resolution color. The paper outlines plans to conduct an at-sea evaluation of the system this winter aboard an American tuna clipper operating in the western South Pacific Ocean.
In recent years, the volume of data generated by meteorological sensing platforms has increased dramatically. Conventional approaches to processing these data streams lead to both rapid overload of system resources and staggering costs. A distributed processing approach can substantially reduce these costs while maintaining the data's accuracy and timeliness. SSEC's preprocessors for the GOES signal are used as examples to support this view.
The Weather Information System and Real-time Display (WISARD) software package is a general purpose interactive weather information retrieval system. It was initially developed to support the research and academic needs within the Department of Atmospheric Sciences of the University of North Dakota (UND). Because of the ease of use of the system and the availability of a college-wide computer network, the WISARD package is currently in use by all faculty, staff and students of the College of Aerospace Sciences at UND.
In the 1990s, significantly powerful new instruments will be available for the observation, processing and analysis of environmental and meteorological events. The Next Generation Radar (NEXRAD), Atmospheric Profiler (PROFILER), and Vertical Atmospheric Sounder (VAS) will provide timely data to the environmental observer, analyst and forecaster. This data, after processing, will provide the opportunity to:
* forecast for times and places with accuracies of less than an hour and less than one mile
* provide timely tornado warnings before as opposed to during and after an event
* continuously and thoroughly meet the needs of the aircraft meteorologist for severe weather prediction around airports
* provide reliability beyond the capabilities of today's systems.1