The U.S. Geological Survey (USGS) initiated the Requirements, Capabilities and Analysis for Earth Observations (RCA-EO) activity in the Land Remote Sensing (LRS) program to provide a structured approach to collect, store, maintain, and analyze user requirements and Earth observing system capabilities information. RCA-EO enables the collection of information on current key Earth observation products, services, and projects, and their evaluation at different organizational levels within an agency in terms of how reliant they are on Earth observation data from all sources, including spaceborne, airborne, and ground-based platforms. Within the USGS, RCA-EO has engaged over 500 subject matter experts in this assessment, and evaluated the impacts of more than 1000 different Earth observing data sources on 345 key USGS products and services. This paper summarizes Landsat impacts at various levels of the organizational structure of the USGS and highlights the feedback of the subject matter experts regarding Landsat data and Landsat-derived products. This feedback is expected to inform future Landsat mission decision making. The RCA-EO approach can be applied in a much broader scope to derive comprehensive knowledge of Earth observing system usage and impacts, to inform product and service development and remote sensing technology innovation beyond the USGS.
To better understand the issues associated with Full Motion Video (FMV) geopositioning and to develop corresponding strategies and algorithms, an integrated test bed is required. The test bed is used to evaluate the performance of various candidate algorithms associated with registration of the video frames and subsequent geopositioning using the registered frames. Major issues include reliable error propagation or predicted solution accuracy; optimal vs. suboptimal vs. divergent solutions; robust processing in the presence of poor or non-existent a priori estimates of sensor metadata; difficulty in the measurement of tie points between adjacent frames; poor imaging geometry, including small field-of-view and little vertical relief; and the absence of control (points). The test bed modules must be integrated with appropriate data flows between them. The test bed must also ingest/generate real and simulated data and support evaluation of corresponding performance based on module-internal metrics as well as comparisons to real or simulated “ground truth”. Selection of the appropriate modules and algorithms must be either operator-specifiable or automatic. An FMV test bed has been developed, and continues to be improved, with the above characteristics. The paper describes its overall design as well as key underlying algorithms, including a recent update to “A matrix” generation, which allows for the computation of arbitrary inter-frame error cross-covariance matrices associated with Kalman filter (KF) registration in the presence of dynamic state vector definition; this is necessary for rigorous error propagation when the contents/definition of the KF state vector changes due to added/dropped tie points. Performance of a tested scenario is also presented.
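The inter-frame cross-covariance bookkeeping mentioned above can be illustrated with a minimal linear Kalman filter sketch. This is a hypothetical constant-velocity system with illustrative noise values, not the test bed's sensor model or its "A matrix" algorithm: after a measurement update, the cross-covariance between the current posterior error and the previous epoch's error follows the same linear recursion as the covariance itself.

```python
import numpy as np

# Linear system: constant-velocity state [pos, vel], scalar position measurement.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (illustrative)
H = np.array([[1.0, 0.0]])               # measurement model
Q = 0.01 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[0.25]])                   # measurement noise covariance (assumed)

P = np.eye(2)                            # posterior covariance at epoch k-1

# Predict step: covariance grows through the dynamics.
P_pred = F @ P @ F.T + Q

# Update step: Kalman gain and posterior covariance.
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
P_post = (np.eye(2) - K @ H) @ P_pred

# Cross-covariance between the epoch-k posterior error and the
# epoch-(k-1) error, from the same linear recursion:
#   Cov(e_k, e_{k-1}) = (I - K H) F P
C_cross = (np.eye(2) - K @ H) @ F @ P
```

A term like `C_cross` is what rigorous error propagation needs whenever errors from different frames are later combined, e.g., in downstream multi-image geopositioning.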
Geostatistical modeling of spatial uncertainty has its roots in the mining, water, and oil reservoir exploration communities, and has great potential for broader applications as proposed in this paper. This paper describes the underlying statistical models and their use in both the estimation of quantities of interest and the Monte Carlo simulation of their uncertainty or errors, including their variance or expected magnitude and their spatial correlations or inter-relationships. These quantities can include 2D or 3D terrain locations, feature vertex locations, or any specified attributes whose statistical properties vary spatially. The simulation of spatial uncertainty or errors is a practical and powerful tool for understanding the effects of error propagation in complex systems. This paper describes various simulation techniques and trades off their generality against their complexity and speed. One technique recently proposed by the authors, Fast Sequential Simulation, can simulate tens of millions of errors with specifiable variance and spatial correlations in a few seconds on a laptop computer. This ability allows for the timely evaluation of resultant output errors or the performance of a “down-stream” module or application. It also allows for near-real-time evaluation when such a simulation capability is built into the application itself.
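As a rough illustration of the sequential-simulation idea (not the authors' Fast Sequential Simulation algorithm), the sketch below draws errors along a 1-D transect with a specified standard deviation and exponential spatial correlation. The Markov property of the exponential covariance in 1-D lets each draw condition only on the previous sample, keeping the cost linear in the number of errors; all parameter values are illustrative.

```python
import numpy as np

def sequential_correlated_errors(n, sigma, corr_length, spacing=1.0, seed=0):
    """Sequentially simulate zero-mean errors along a 1-D transect with
    standard deviation `sigma` and exponential correlation
    rho(d) = exp(-d / corr_length).  Each draw conditions only on the
    previous sample, so the cost is O(n)."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-spacing / corr_length)        # lag-one correlation
    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma)               # start in the stationary state
    # Conditional std given the previous error (Markov property).
    cond_std = sigma * np.sqrt(1.0 - rho * rho)
    for i in range(1, n):
        e[i] = rho * e[i - 1] + rng.normal(0.0, cond_std)
    return e

errors = sequential_correlated_errors(n=200_000, sigma=2.0, corr_length=50.0)
```

This unvectorized loop is written for clarity; a production implementation would vectorize the recursion to reach the speeds the abstract describes.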
In this paper we demonstrate a technique for extracting 3-dimensional data from 2-dimensional GPS-tagged video. We call our method Minimum Separation Vector Mapping (MSVM), and we verify its performance against traditional Structure From Motion (SFM) techniques in the field of GPS-tagged aerial imagery, including GPS-tagged full motion video (FMV). We explain how MSVM is better positioned than SFM to natively exploit the a priori content of GPS tags. We show that, given GPS-tagged images and moderately well known intrinsic camera parameters, our MSVM technique consistently outperforms traditional SFM implementations under a variety of conditions.
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for
subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being
developed for such registration is presented that models relevant error sources in terms of the expected magnitude and
correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori
information of the sensor’s trajectory and attitude (pointing) information, in order to best deal with non-linearity effects.
Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for
corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori
accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered
sensor data and its a posteriori accuracy information are then made available to “down-stream” Multi-Image
Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image
optimal solution, including reliable predicted solution accuracy, is then performed for the object’s 3D coordinates. This
paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It
makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as
estimation of the Essential Matrix, which can be very sensitive to noise. The approach used instead is a novel, robust,
direct search-based technique.
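As a simplified stand-in for the Multi-Image Geopositioning step (the estimator described above also carries the registration's a posteriori error covariances; this sketch assumes noiseless, equally weighted rays and synthetic geometry), an optimal 3D point solution can be written as a small least-squares ray intersection:

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares intersection of 3-D rays: the point minimizing the
    sum of squared perpendicular distances to each line of sight.
    `origins`    : (m, 3) sensor positions
    `directions` : (m, 3) unit line-of-sight vectors
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        # Projector onto the plane normal to the ray direction.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Synthetic check: two sensors viewing a known ground point.
truth = np.array([100.0, 50.0, 10.0])
origins = np.array([[0.0, 0.0, 500.0], [300.0, 0.0, 500.0]])
directions = np.array([(truth - o) / np.linalg.norm(truth - o) for o in origins])
estimate = triangulate_rays(origins, directions)
```

Weighting each projector `P` by the inverse measurement variance of its ray would be the natural place to feed in the a posteriori accuracy information produced by registration.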
KEYWORDS: Data modeling, Matrices, Image processing, Error analysis, Computer simulations, Feature extraction, Geographic information systems, Monte Carlo methods, Performance modeling, Correlation function
The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline,
or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features.
The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free
public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create
enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of
automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual
truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial
uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for
conflation methods. Performance results are compiled for DCGIS street centerline features.
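The simulation-based evaluation idea can be sketched as follows. This is a toy version using point features and i.i.d. Gaussian positional error; the spatial uncertainty modeling described above also captures correlation between features, and the `gate` threshold and nearest-neighbor matcher are illustrative stand-ins for a real conflation tool.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Truth" layer: feature centroids (e.g., street-segment midpoints).
truth = rng.uniform(0.0, 1000.0, size=(200, 2))

# Simulated test layer: truth plus positional error (i.i.d. Gaussian here;
# a realistic simulation would also model spatially correlated error).
sigma = 3.0
test_layer = truth + rng.normal(0.0, sigma, size=truth.shape)

def match_rate(layer_a, layer_b, gate):
    """Fraction of layer_a features whose nearest layer_b feature lies
    within `gate` -- a simple stand-in for a conflation matcher."""
    hits = 0
    for p in layer_a:
        d = np.min(np.linalg.norm(layer_b - p, axis=1))
        hits += d <= gate
    return hits / len(layer_a)

rate = match_rate(truth, test_layer, gate=3.0 * sigma)
```

Because the truth layer is simulated, the matcher's output can be scored without any manual truthing, which is the labor-saving point of the approach.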
The topic of data uncertainty handling is relevant to essentially any scientific activity that involves making
measurements of real world phenomena. A rigorous accounting of uncertainty can be crucial to the decision-making
process. The purpose of this paper is to provide a brief overview on select issues in handling uncertainty in geospatial
data. We begin with photogrammetric concepts of uncertainty handling, followed by investigating uncertainty issues
related to processing vector (object) representations of geospatial information. Suggestions are offered for enhanced
modeling, visualization, and exploitation of local uncertainty information in applications such as fusion and conflation.
Stochastic simulation can provide an effective approach to improve understanding of the consequences of uncertainty
propagation in common geospatial processes such as path finding. Future work should consider the development of
standardized modeling techniques for stochastic simulation of more complex object data, to include spatial and attribute
uncertainty.
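A small example of the path-finding case mentioned above: Monte Carlo redraws of an uncertain cost surface yield a distribution of optimal path costs rather than a single number. The grid, noise model, and sample count below are illustrative, not from the paper.

```python
import heapq
import numpy as np

def grid_shortest_cost(cost, start, goal):
    """Dijkstra shortest-path cost over a 4-connected grid of
    per-cell traversal costs."""
    n, m = cost.shape
    dist = np.full((n, m), np.inf)
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            return d
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                nd = d + cost[a, b]
                if nd < dist[a, b]:
                    dist[a, b] = nd
                    heapq.heappush(pq, (nd, (a, b)))
    return dist[goal]

rng = np.random.default_rng(7)
base = rng.uniform(1.0, 5.0, size=(30, 30))   # nominal traversal costs

# Monte Carlo: redraw the cost surface with multiplicative lognormal noise
# and record the distribution of optimal path costs.
samples = [grid_shortest_cost(base * rng.lognormal(0.0, 0.2, base.shape),
                              (0, 0), (29, 29)) for _ in range(100)]
nominal = grid_shortest_cost(base, (0, 0), (29, 29))
```

The spread of `samples` around `nominal` is exactly the kind of downstream consequence of input uncertainty that a single deterministic run hides.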
Data registration is the foundational step for fusion applications such as change detection, data conflation, ATR, and
automated feature extraction. The efficacy of data fusion products can be limited by inadequate selection of the
transformation model or by inadequate characterization of uncertainty in the registration process. In this paper, three components of
image-to-image registration are investigated: 1) image correspondence via feature matching, 2) selection of a
transformation function, and 3) estimation of uncertainty. Experimental results are presented for photogrammetric versus
non-photogrammetric transfer of point features for four different sensor types and imaging geometries. The results
demonstrate that a photogrammetric transfer model is generally more accurate at point transfer. Moreover,
photogrammetric methods provide a reliable estimation of accuracy through the process of error propagation. Reliable
local uncertainty derived from the registration process is particularly desirable information to have for subsequent fusion
processes. To that end, uncertainty maps are generated to demonstrate global trends across the test images.
Recommendations for extending this methodology to non-image data types are provided.
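Local uncertainty maps of the kind described above can be produced by first-order error propagation through the registration transformation. The sketch below assumes a 2-D affine model with illustrative point and parameter covariances; it is a generic propagation, not the paper's photogrammetric model.

```python
import numpy as np

# Affine transformation x' = A x + t estimated during registration
# (illustrative values).
A = np.array([[1.02, 0.05],
              [-0.04, 0.98]])
t = np.array([12.0, -7.5])

# Covariance of a measured point in the source image (pixels^2, assumed).
sigma_point = np.diag([0.8**2, 0.8**2])

# Covariance of the estimated affine parameters, ordered
# (a11, a12, a21, a22, t1, t2) -- an assumed diagonal for illustration.
sigma_params = np.diag([1e-6, 1e-6, 1e-6, 1e-6, 0.3**2, 0.3**2])

def propagate(x):
    """First-order propagation: Sigma_out = J_x Sigma_x J_x^T
    + J_p Sigma_p J_p^T, with J_x = A and J_p = d(Ax + t)/d(params)."""
    Jx = A
    Jp = np.array([[x[0], x[1], 0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, x[0], x[1], 0.0, 1.0]])
    return Jx @ sigma_point @ Jx.T + Jp @ sigma_params @ Jp.T

cov = propagate(np.array([250.0, 400.0]))
```

Evaluating `propagate` on a grid of image locations yields a per-pixel uncertainty map; the parameter-covariance term grows with distance from the origin, which is one source of the global trends such maps reveal.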
As the availability of geospatial data increases, there is a growing need to match these datasets together. However, since
these datasets often vary in their origins and spatial accuracy, they frequently do not correspond well to each other,
which creates multiple problems. To accurately align vectors with imagery, analysts currently either: 1) manually move the
vectors, 2) perform a labor-intensive spatial registration of vectors to imagery, 3) move imagery to vectors, or 4) redigitize
the vectors from scratch and transfer the attributes. All of these are time-consuming and labor-intensive
operations. Automated matching and fusing vector datasets has been a subject of research for years, and strides are being
made. However, much less has been done with matching or fusing vector and raster data. While there are initial forays
into this research area, the approaches are not robust. The objective of this work is to design and build robust software
called MapSnap to conflate vector and image data in an automated/semi-automated manner. This paper reports the status
of the MapSnap project that includes: (i) the overall algorithmic approach and system architecture, (ii) a tiling approach
to deal with large datasets to tune MapSnap parameters, (iii) time comparison of MapSnap with re-digitizing the vectors
from scratch and transferring the attributes, and (iv) accuracy comparison of MapSnap with manual adjustment of vectors.
The paper concludes with a discussion of future work, including addressing the general problem of continuously and
rapidly updating vector data and fusing vector data with other data types.
The methods used to evaluate automation tools are a critical part of the development process. In general, the most
meaningful measure of an automation method from an operational standpoint is its effect on productivity. Both timed
comparisons between manual and automation-based extraction and measures of spatial accuracy are needed. In this
paper, we introduce the notion of correspondence to evaluate spatial accuracy of an automated update method. Over
time, existing vector data becomes outdated because 1) land cover changes occur, 2) more accurate overhead images
are acquired, and/or 3) user requirements for vector data resolution increase. Therefore, an automated vector data
updating process has the potential to significantly increase productivity, particularly as existing worldwide vector
database holdings increase in size, and become outdated more quickly. In this paper we apply the proposed evaluation
methodology specifically to the process of automated updating of existing road centerline vectors. The operational
scenario assumes that the existing vector data are effectively outdated with respect to newly acquired
imagery. Whether a particular approach is referred to as 1) vector-to-image registration or 2) vector data
updating-based automated feature extraction (AFE) is open to interpretation, depending on the application and the
perspective of the developer or user. The objective of this paper is to present a quantitative and meaningful evaluation methodology of
spatial accuracy for automated vector data updating methods.
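One simple spatial-accuracy score in this spirit, offered as an illustration rather than the paper's correspondence metric, is a buffer-based correctness/completeness measure over densified extracted and reference centerlines; the buffer width and densification step below are assumptions.

```python
import numpy as np

def polyline_points(vertices, step=1.0):
    """Densify a polyline into points spaced roughly `step` apart."""
    pts = []
    for p, q in zip(vertices[:-1], vertices[1:]):
        seg = np.asarray(q, float) - np.asarray(p, float)
        n = max(int(np.ceil(np.linalg.norm(seg) / step)), 1)
        for k in range(n):
            pts.append(np.asarray(p, float) + seg * (k / n))
    pts.append(np.asarray(vertices[-1], float))
    return np.array(pts)

def buffer_score(extracted, reference, buffer=2.0):
    """Fraction of extracted points within `buffer` of the reference
    line (correctness), and vice versa (completeness)."""
    def frac_within(a, b):
        d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)).min(1)
        return float((d <= buffer).mean())
    return frac_within(extracted, reference), frac_within(reference, extracted)

ref = polyline_points([(0, 0), (100, 0)])
ext = polyline_points([(0, 1.0), (60, 1.0)])   # offset by 1 unit, 40 units short
correctness, completeness = buffer_score(ext, ref)
```

Here everything extracted is close to the reference (high correctness) while a stretch of the reference road was missed (reduced completeness), separating the two failure modes a single RMS number would blur.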
From an operational standpoint, road extraction remains largely a manual process despite the existence of several
commercially available automation tools. The problem of automated feature extraction (AFE) in general is a challenging
task as it involves the recognition, delineation, and attribution of image features. The efficacy of AFE algorithms in
operational settings is difficult to measure due to the inherent subjectivity involved. Ultimately, the most meaningful
measures of an automation method are its effect on productivity and actual utility. Several quantitative and qualitative
factors go into these measures including spatial accuracy and timed comparisons of extraction, different user training
levels, and human-computer interface issues.
In this paper we investigate methodologies for evaluating automated road extraction in different operational
modes. Interactive and batch extraction modes of automation are considered. The specific algorithms investigated are the
GeoEye Interactive Road Tracker® (IRT) and the GeoEye Automated Road Tracker® (ART), respectively. Both are
commercially available from GeoEye. Analysis metrics collected are derived from timed comparisons and spatial
delineation accuracy. Spatial delineation accuracy is measured by comparing algorithm output against a manually
derived image reference. The effect of object-level fusion of multiple imaging modalities is also considered.
The goal is to gain insight into measuring an automation algorithm's utility on feature extraction productivity.
Findings show sufficient evidence of a potential gain in productivity when an automation method is used where
warranted. Fusion of feature layers from multiple images also demonstrates a potential for
increased productivity compared to single or pair-wise combinations of feature layers.
The literature is replete with assisted target recognition (ATR) techniques, including methods for ATR evaluation. Yet,
relatively few methods find their way to use in practice. Part of the problem is that the evaluation of an ATR may not go
far enough in characterizing its optimal use in practice. For example, a thorough understanding of a method's operating
conditions is crucial, e.g., performance across different sensor capabilities, scene context, target occlusions, etc. This
paper describes a process for a rigorous evaluation of ATR performance, including a sensitivity analysis. Ultimately, an
ATR algorithm is deemed valuable if it is actually utilized in practice by users. Thus, quantitative analysis alone is not
necessarily sufficient. Qualitative user assessment derived from user testing, surveys, and questionnaires is often needed
to provide a more complete interpretation of an evaluation for a particular method. We demonstrate our ATR evaluation
process using methods that perform target detection of civilian vehicles.
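The operating-condition sensitivity analysis can be sketched with simulated detection scores: fix a threshold at a target false-alarm rate on clutter, then report probability of detection per condition. All score distributions and condition names below are hypothetical, chosen only to illustrate the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical target/clutter score separation per operating condition.
conditions = {"clear": 2.0, "partial occlusion": 1.0, "heavy occlusion": 0.3}

# Simulated confidence scores for clutter chips.
clutter = rng.normal(0.0, 1.0, size=5000)

# Fix the threshold to yield a 1% false-alarm rate on clutter.
threshold = np.quantile(clutter, 0.99)

# Probability of detection at that fixed threshold, per condition.
pd_by_condition = {}
for name, separation in conditions.items():
    targets = rng.normal(separation, 1.0, size=5000)
    pd_by_condition[name] = float((targets > threshold).mean())
```

Tabulating Pd at a fixed false-alarm rate across conditions makes the sensitivity explicit, and it is this table (rather than a single aggregate score) that users can weigh against their own operating scenarios.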
The variety of panchromatic and multispectral images, vector data (maps), and digital elevation models (DEMs) is growing. Accordingly,
the demands and challenges of correlating, matching, co-registering, and fusing them are growing as well. Data to be integrated may have
inaccurate and contradictory geo-references or not have them at all. Alignment of vector (feature) and raster (image)
geospatial data is a difficult and time-consuming process when transformational relationships between the two are
nonlinear. Robust solutions and commercial software products that address these challenges do not yet exist. In the
proposed approach for Vector-to-Raster Registration (VRR) the candidate features are auto-extracted from imagery,
vectorized, and compared against existing vector layer(s) to be registered. Given that available automated feature
extraction (AFE) methods quite often produce false features and miss some features, we use additional information to
improve AFE. This information is the existing vector data, though the vector data are not perfect either. To deal with this
problem, the VRR process uses an algebraic structural algorithm (ASA), a similarity transformation of local features
algorithm (STLF), and a multi-loop procedure that repeats the AFE-VRR process several times. Experiments show that the
approach successfully registers road vectors to commercial panchromatic and multispectral imagery.
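The similarity-transformation ingredient (STLF) can be illustrated with the standard SVD-based least-squares fit of a 2-D similarity transform (scale, rotation, translation) to matched point pairs. This is a generic Procrustes-style solution on synthetic data, not necessarily the paper's algorithm.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform (scale s, rotation R,
    translation t) mapping src -> dst, via the SVD-based
    orthogonal-Procrustes solution."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known transform from noiseless matches.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
s_true, t_true = 1.5, np.array([10.0, -4.0])
src = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [-3.0, 2.0]])
dst = s_true * src @ R_true.T + t_true
s, R, t = fit_similarity(src, dst)
```

In a VRR-style loop, such a local fit would be applied per neighborhood of matched features and iterated as the vector layer and extracted features converge.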
A persistent problem with new unregistered geospatial data is geometric image distortion caused by differing sensor/camera locations. Often this distortion is modeled by means of arbitrary affine transformations. However, in most real cases such geometric distortion is combined with other distortions caused by differing image resolutions, different feature extraction techniques, and other factors. Often images overlap only partially. Thus, the same objects in two images can differ significantly. Simple geometric distortion preserves a one-to-one match between all points of the same object in the two images. In contrast, when images only partially overlap or have different resolutions, there is no one-to-one point match. This paper explores theoretical and practical limits on building algorithms that are simultaneously robust and invariant to geometric distortions and changes of image resolution. We provide two theorems, which state that such ideal algorithms are impossible in the proposed formalized framework. On the practical side, we experimentally explored ways to mitigate these theoretical limitations. Effective point placement, feature interpolation, and super-feature construction methods are developed that provide good registration/conflation results for images of very different resolutions.
Basic invariance theory for frame photography has existed in the photogrammetric literature for over a century. It has recently been rediscovered and significantly extended by researchers in computer vision (CV) and image understanding (IU). It is applied to problems in image transfer and in object recognition and reconstruction, among others. Research is in progress at our Photogrammetric Analysis Laboratory to analyze various image invariance techniques, for both 2D and 3D objects, particularly with regard to their accuracy and reliability. Existing, well-established photogrammetric techniques, as well as modifications thereto, are evaluated as both equivalent and complementary methods to invariance. Relationships are investigated between the imaging parameters and the variables involved in invariance, which usually combine such parameters together. Rigorous unified least squares is used in cases of redundancy and constraints. Results show significant improvement in accuracy and robustness as compared to the direct linear techniques generally used in the CV/IU literature.
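For context on the direct linear techniques referenced above, the classical direct linear transformation (DLT) estimates the 3x4 projection matrix from world-to-image correspondences by solving a homogeneous linear system, without the weighting and constraints of a rigorous adjustment. The camera and points below are synthetic, purely for illustration.

```python
import numpy as np

def dlt_projection(X, x):
    """Direct linear transformation: estimate the 3x4 projection matrix
    from n >= 6 world (X) to image (x) correspondences by solving the
    homogeneous system with an SVD."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        W = [Xw, Yw, Zw, 1.0]
        rows.append([*W, 0, 0, 0, 0, *(-u * np.asarray(W))])
        rows.append([0, 0, 0, 0, *W, *(-v * np.asarray(W))])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)     # right null vector, up to scale

# Synthetic camera: a simple perspective projection matrix.
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 5.0]])
rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(8, 3)) + np.array([0.0, 0.0, 4.0])
Xh = np.hstack([X, np.ones((8, 1))])
proj = Xh @ P_true.T
x = proj[:, :2] / proj[:, 2:3]      # noiseless image observations
P_est = dlt_projection(X, x)

# Reproject with the estimate; agreement with x is up to the
# unknown overall scale of P_est.
proj_est = Xh @ P_est.T
x_est = proj_est[:, :2] / proj_est[:, 2:3]
```

With noiseless data the direct solution is exact; under noise it minimizes an algebraic rather than geometric residual, which is one reason a rigorous unified least-squares adjustment can outperform it in accuracy and robustness.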