Land cover classification uses multispectral pixel information to separate image regions into categories. Image segmentation seeks to separate image regions into objects and features based on spectral and spatial image properties. However, making sense of complex imagery typically requires identifying image regions that are often a heterogeneous mixture of categories and features that constitute functional semantic units such as industrial, residential, or commercial areas. This requires leveraging both spectral classification and spatial feature extraction synergistically to synthesize such complex but meaningful image units. We present an efficient graphical model for extracting such semantically cohesive regions. We employ an initial hierarchical segmentation of images into features represented as nodes of an attributed graph that represents feature properties as well as their adjacency relations with other features. This provides a framework to group spectrally and structurally diverse features, which are nevertheless semantically cohesive, based on user-driven identifications of features and their contextual relationships in the graph. We propose an efficient method to construct, store, and search an augmented graph that captures nonadjacent vicinity relationships of features. This graph can be used to query for notional semantic units consisting of ontologically diverse features by constraining it to specific query node types and their indicated/desired spatial interaction characteristics. User interaction with, and labeling of, the initially segmented and categorized image feature graph can then be used to learn feature (node) and regional (subgraph) ontologies as constraints, and to identify other similar semantic units as connected components of the constraint-pruned augmented graph of a query image.
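As a purely illustrative sketch of the final step, the snippet below prunes a hypothetical feature graph to a set of query node types and returns the connected components of what remains as candidate semantic units. The node classes, graph encoding, and function name are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: prune an attributed region graph to nodes whose class
# matches a query, then report connected components as candidate semantic units.
from collections import defaultdict

def semantic_units(nodes, edges, query_classes):
    """nodes: {node_id: class_label}; edges: iterable of (u, v) adjacency or
    vicinity pairs. Returns a list of sets of node ids (one set per unit)."""
    keep = {n for n, c in nodes.items() if c in query_classes}
    adj = defaultdict(set)
    for u, v in edges:
        if u in keep and v in keep:   # constraint pruning: drop other edges
            adj[u].add(v)
            adj[v].add(u)
    seen, units = set(), []
    for n in keep:
        if n in seen:
            continue
        stack, comp = [n], set()      # depth-first traversal of one component
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        units.append(comp)
    return units
```

For example, with building and road nodes retained, a cluster of buildings linked by roads emerges as a single unit, while an isolated matching node forms a singleton unit.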
Open-air explosive activities are carried out by a variety of institutions, including government agencies and private organizations. These activities result in debris plumes that contain elements of the explosive package as well as substantial amounts of entrained environmental materials. While Lidar monitoring technology for these situations has been available for years, we developed a unique, interactive, post-experiment Lidar Data Analysis Toolset (LIDATO) that allows the expert user to determine the location, backscatter intensity distribution, volume, and boundaries of general debris plumes at any given time. The exception is the early development and transport of the plume, when the plume is typically opaque to the Lidar and only the plume edge facing the Lidar system can be mapped. For this reason we incorporated video coverage using multiple cameras. While the analysis of the video is handled separately, we combined the resulting plume position data with the LIDATO results. The data-fusion product refines the separately obtained results and improves the accuracy of the data set in all aspects for the early stages of the explosion.
The Multispectral Thermal Imager (MTI) is a technology test and demonstration satellite whose primary mission involved a finite number of technical objectives. MTI was not designed, or supported, to become a general-purpose operational satellite. The role of the MTI science team is to provide a core group of system-expert scientists who perform the scientific development and technical evaluations needed to meet programmatic objectives. Another mission for the team is to develop algorithms that provide atmospheric compensation and quantitative retrieval of surface parameters to a relatively small community of MTI users. Finally, the science team responds and adjusts to unanticipated events in the life of the satellite. Broad or general lessons learned include the value of working closely with the people who perform the calibration of the data as well as those providing archived image and retrieval products. Close interaction between the Los Alamos National Laboratory (LANL) teams was very beneficial to the overall effort as well as the science effort. Second, as time goes on we make increasing use of gridded global atmospheric data sets, which are products of global weather model data assimilation schemes. The Global Data Assimilation System information is available globally every six hours, and the Rapid Update Cycle products are available over much of North America and its coastal regions every hour. Additionally, we did not anticipate the quantity of validation data or the time needed for thorough algorithm validation. Original validation plans called for a small number of intensive validation campaigns soon after launch. One or two intense validation campaigns are needed but are not sufficient to define performance over a range of conditions or to diagnose deviations between ground and satellite products. It took more than a year to accumulate a good set of validation data.
With regard to the specific programmatic objectives, we feel that we can retrieve surface water temperatures well within the 1°C objective under good observing conditions. Before the loss of the onboard calibration system, sea surface retrievals were usually within 0.5°C. After that loss, retrievals are usually within 0.8°C during the day and 0.5°C at night. Daytime atmospheric water vapor retrievals have a scatter within the anticipated 20%. However, there is error in using the Aerosol Robotic Network retrievals as validation data, which may be due to some combination of calibration uncertainties, errors in the ground retrievals, the method of comparison, and incomplete physics. Conversion of top-of-atmosphere radiance measurements to surface reflectance has proven daunting. We are not alone here: it is a difficult problem to solve generally, and the main issue is proper compensation for aerosol effects. Getting good reflectance validation data over a number of sites has proven difficult but, when its assumptions are met, the algorithm usually performs quite well. Aerosol retrievals for off-nadir views seem to perform better than for near-nadir views; the reason for this is under investigation. Land surface temperature retrieval and temperature-emissivity separation are difficult to perform accurately with multispectral sensors. An interactive cloud masking system was implemented for production use. Clouds are so spectrally and spatially variable that users are encouraged to carefully evaluate the delivered mask for their own needs. The same is true for the water mask. This mask is generated from a spectral index that works well for deep, clear water, but there is much variability in water spectral reflectance inland and along coasts. The value of the second-look maneuvers has not yet been fully or systematically evaluated.
Early experiences indicated that the original intentions have marginal value for MTI objectives, but potentially important new ideas have been developed. Image registration (the alignment of data from different focal planes) and band-to-band registration have been difficult problems to solve, at least for mass production of images in a processing pipeline. The problems, and their solutions, are described in another paper.
The fifteen-channel Multispectral Thermal Imager (MTI) provides accurately calibrated satellite imagery for a variety of scientific and programmatic purposes. To be useful, the calibrated pixels from the individual detectors on the focal plane of this pushbroom sensor must be resampled to a regular grid corresponding to the observed scene on the ground. In the LEVEL1B_R_COREG product, it is required that the pixels from different spectral bands and from different sensor chip assemblies all be coregistered to the same grid. For the LEVEL1B_R_GEO product, it is further required that this grid be georeferenced to the Universal Transverse Mercator coordinate system. It is important that an accurate registration is achieved, because most of the higher level products (e.g. ground reflectance) are derived from these LEVEL1B_R products. Initially, a single direct georeferencing approach was pursued for performing the coregistration task. Although this continues to be the primary algorithm for our automated pipeline registration, we found it advantageous to pursue alternative approaches as well. This paper surveys these approaches, and offers lessons learned during the three years we have been addressing the coregistration requirements for MTI imagery at the Los Alamos National Laboratory (LANL).
We measure directional reflectance and daytime temperature of a wintertime coniferous forest from space using data acquired by the Department of Energy's Multispectral Thermal Imager (MTI). The study site is the Howland experimental forest in central Maine. The data include measurements from all seasons over a one-year period from 2001 to 2002, but with a concentration in late winter and early spring. The results show variation in both reflectance and temperature with direction and season. The reflectance results compare favorably with previous bidirectional measurements performed at the Howland site. Near-nadir reflectance in the visible bands varies periodically over the year, with a high in summer and a low in winter. Near-infrared (NIR) reflectance shows dual variation. The canopy reflectance varies as a function of solar and satellite zenith angle, presumably due to a changing proportion of shadows. Furthermore, a NIR pseudo-BRDF (bidirectional reflectance distribution function) shows that the canopy brightens in the NIR during fall and winter. Retrieved canopy temperatures are consistently warmer in the off-nadir view by about 2°C, with a small seasonal variation. The seasonal canopy temperature trend is well exhibited, and days with snow on the ground are easily distinguished from days with no snow on the ground. The results also show that the retrieved temperatures are consistently warmer than above-canopy air temperature by about 4°C. This difference is greater for off-nadir views and also appears to be larger in the spring and summer than in the fall and winter.
This paper describes the use of photogrammetric principles to georeference imagery collected by the Multispectral Thermal Imager (MTI) satellite. The photogrammetric image registration (PIR) method consists of two main parts. The first part estimates a trajectory (exterior orientation as a function of time) for the sensor based on a photogrammetric bundle adjustment governed by user-defined ground control points. The ground control points are defined by manual identification of conjugate points between the LEVEL1B_U imagery and reference data (an orthoimage and a digital elevation model derived from aerial photography). The second part uses this trajectory as input to a direct georeferencing method to determine the location of each pixel in the imagery. The PIR method uses mathematical models of the sensor, its trajectory, timing, and the terrain to mimic the actual image acquisition event. It was found that accurate calculation of the exterior orientation parameters was not a requirement for obtaining accurately georeferenced imagery. This is particularly intriguing, and deserves more in-depth study, because the values of the exterior orientation parameters solved for through photogrammetric bundle adjustment are known to be in disagreement with the actual motion of the satellite platform. The individual steps of the PIR method, the mathematical models used, and the results of georeferencing MTI imagery through the use of this approach are described.
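The bundle adjustment at the heart of the first part rests on the standard collinearity condition. The sketch below is a textbook illustration of that condition (not the MTI production code, which additionally models timing, a time-varying trajectory, and terrain): it projects a ground point into image coordinates given an exterior orientation.

```python
import numpy as np

def collinearity(ground_pt, cam_pos, R, f):
    """Project a ground point into image-plane coordinates (x, y) via the
    collinearity equations. cam_pos is the perspective center, R the rotation
    from object to camera frame, f the focal length (all in consistent units)."""
    d = R @ (np.asarray(ground_pt, float) - np.asarray(cam_pos, float))
    # Collinearity: image coordinates scale the camera-frame vector by -f/d_z.
    return -f * d[0] / d[2], -f * d[1] / d[2]
```

A bundle adjustment iterates on the exterior orientation parameters until these projected coordinates agree with the measured conjugate-point locations in a least-squares sense.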
This paper describes an algorithm for the registration of imagery collected by the Multispectral Thermal Imager (MTI). The Automated Image Registration (AIR) algorithm is entirely image-based and is implemented in an automated fashion, which avoids any requirement for human interaction. The AIR method differs from the "direct georeferencing" method used to create our standard coregistered product since explicit information about the satellite's trajectory and the sensor geometry are not required. The AIR method makes use of a maximum cross-correlation (MCC) algorithm, which is applied locally about numerous points within any two images being compared. The MCC method is used to determine the row and column translations required to register the bands of imagery collected by a single SCA (band-to-band registration), and the row and column translations required to register the imagery collected by the three SCAs for any individual band (SCA-to-SCA registration). Of particular note is the use of reciprocity and a weighted least squares approach to obtaining the band-to-band registration shifts. Reciprocity is enforced by using the MCC method to determine the row and column translations between all pair-wise combinations of bands. This information is then used in a weighted least squares approach to determine the optimum shift values between an arbitrarily selected reference band and the other 15 bands. The individual steps of the AIR methodology, and the results of registering MTI imagery through use of this algorithm, are described.
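The reciprocity-plus-least-squares step can be sketched as follows. This is an illustrative reconstruction, not the AIR production code: each pairwise MCC measurement d(i, j) is modeled as s_j - s_i, where s_b is the unknown shift of band b, and all measurements are stacked into a weighted least-squares system with the reference band's shift pinned to zero.

```python
import numpy as np

def band_shifts(pairwise, weights, n_bands, ref=0):
    """Solve for per-band shifts s_b (one axis at a time) from noisy pairwise
    measurements pairwise[(i, j)] ~ s_j - s_i, via weighted least squares.
    weights[(i, j)] reflects confidence in each MCC measurement."""
    rows, rhs, w = [], [], []
    for (i, j), d in pairwise.items():
        r = np.zeros(n_bands)
        r[j] += 1.0          # +s_j
        r[i] -= 1.0          # -s_i
        rows.append(r)
        rhs.append(d)
        w.append(weights[(i, j)])
    # Pin the reference band's shift to zero with a heavily weighted row.
    r0 = np.zeros(n_bands)
    r0[ref] = 1.0
    rows.append(r0); rhs.append(0.0); w.append(1e6)
    A = np.array(rows)
    b = np.array(rhs)
    sw = np.sqrt(np.array(w))
    s, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return s
```

Enforcing reciprocity (measuring every pair in both directions) overdetermines the system, so inconsistent or low-confidence correlations are averaged down rather than propagated.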
An increasing number and variety of platforms are now capable of collecting remote sensing data over a particular scene. For many applications, the information available from any individual sensor may be incomplete, inconsistent or imprecise. However, other sources may provide complementary and/or additional data. Thus, for an application such as image feature extraction or classification, fusing the multiple data sources may lead to more consistent and more accurate results. Unfortunately, with the increased complexity of the fused data, the search space of feature-extraction or classification algorithms also greatly increases. With a single data source, the determination of a suitable algorithm may be a significant challenge for an image analyst. With the fused data, the search for suitable algorithms can go far beyond the capabilities of a human in a realistic time frame, and becomes the realm of machine learning, where the computational power of modern computers can be harnessed to the task at hand. We describe experiments in which we investigate the ability of a suite of automated feature extraction tools developed at Los Alamos National Laboratory to make use of multiple data sources for various feature extraction tasks. We compare and contrast this software's capabilities on 1) individual data sets from different data sources, 2) fused data sets from multiple data sources, and 3) fusion of results from multiple individual data sources.
In the focal plane of a pushbroom imager, a linear array of pixels is scanned across the scene, building up the image one row at a time. For the Multispectral Thermal Imager (MTI), each of fifteen different spectral bands has its own linear array. These arrays are pushed across the scene together, but since each band's array is at a different position on the focal plane, a separate image is produced for each band. The standard MTI data products (LEVEL1B_R_COREG and LEVEL1B_R_GEO) resample these separate images to a common grid and produce coregistered multispectral image cubes. The coregistration software employs a direct "dead reckoning" approach. Every pixel in the calibrated image is mapped to an absolute position on the surface of the earth, and these positions are resampled to produce an undistorted coregistered image of the scene. Doing this requires extensive information regarding the satellite position and pointing as a function of time, the precise configuration of the focal plane, and the distortion due to the optics. These must be combined with knowledge of the position and altitude of the target on the rotating ellipsoidal earth. We will discuss the direct approach to MTI coregistration, as well as more recent attempts to refine the precision of the band-to-band registration using correlations in the imagery itself.
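The geometric core of dead reckoning can be illustrated by intersecting a pixel's look ray with the WGS-84 ellipsoid in Earth-centered coordinates. The sketch below is a deliberate simplification: the production pipeline additionally accounts for terrain elevation, earth rotation, timing, and optical distortion.

```python
import numpy as np

# WGS-84 semi-major and semi-minor axes in meters.
A_WGS, B_WGS = 6378137.0, 6356752.3

def ray_to_ellipsoid(sat_pos, look):
    """Intersect a look ray with the WGS-84 ellipsoid.
    sat_pos: ECEF satellite position (m); look: unit look vector.
    Returns the near intersection point, or None if the ray misses the earth."""
    s = np.asarray(sat_pos, float)
    u = np.asarray(look, float)
    # Points p on the ellipsoid satisfy sum(p**2 * D) == 1.
    D = np.array([1 / A_WGS**2, 1 / A_WGS**2, 1 / B_WGS**2])
    # Substitute p = s + t*u to get a quadratic a*t**2 + b*t + c = 0.
    a = np.dot(u * D, u)
    b = 2.0 * np.dot(s * D, u)
    c = np.dot(s * D, s) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                      # ray misses the earth
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # near root: first surface crossing
    return s + t * u
```

In the full dead-reckoning chain, this intersection is performed per pixel using the modeled look vector for that detector at that instant, and the resulting ground positions drive the resampling to a common grid.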
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as those that arise when combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate the evolution of image-processing algorithms that extract a range of land cover features, including towns, wildfire burn scars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.
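One common way to turn a set of two-class detectors into a multiple-feature classifier, offered here as an illustrative sketch rather than a description of GENIE's internal mechanism, is to produce one confidence map per class and label each pixel with the most confident class.

```python
import numpy as np

def combine_binary_maps(confidences):
    """confidences: dict mapping class name -> 2-D confidence array, all the
    same shape. Returns a per-pixel label map: the most confident class wins."""
    names = sorted(confidences)                        # deterministic order
    stack = np.stack([confidences[n] for n in names])  # (n_classes, H, W)
    idx = np.argmax(stack, axis=0)                     # winning class index
    return np.array(names, dtype=object)[idx]          # index -> class name
```

A per-pixel argmax like this assigns every pixel to exactly one class; a production system would typically also threshold low confidences to leave ambiguous pixels unlabeled.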
The mission of the Multispectral Thermal Imager (MTI) satellite is to demonstrate the efficacy of highly accurate multispectral imaging for passive characterization of urban and industrial areas, as well as sites of environmental interest. The satellite makes top-of-atmosphere radiance measurements that are subsequently processed into estimates of surface properties such as vegetation health, temperature, and material composition. The MTI satellite also provides simultaneous data for atmospheric characterization at high spatial resolution. To utilize these data, the MTI science program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing, and data interpretation and analysis. Algorithms have been developed to retrieve a multitude of physical quantities, and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility, and programmability. In addition, the MTI science team has produced detailed site, system, and atmospheric models to aid in system design and data analysis. This paper provides an overview of the MTI research objectives, data products, and ground data processing.