Handling change within integrated geospatial environments is a challenge of a dual nature: it comprises automatic change detection and the fundamental issue of modeling and representing change. In this paper we present a novel approach to automated change detection that handles change more efficiently than commonly available approaches. More specifically, we focus on the detection of building boundary changes within a spatiotemporal GIS environment. Our approach extends least-squares-based matching: previous spatial states of an object are compared to its current representation in a digital image, and decisions are made automatically as to whether the object outline has changed. Older object information is used to produce templates for comparison with the representation of the same object in a newer image. Semantic information extracted through an analysis of template edge geometry, together with accuracy estimates, is used to enhance our model. This template matching approach allows us to integrate object extraction from digital imagery and change detection in a single operation. By decomposing a complete outline into smaller elements and applying template matching at these locations, we are able to detect even small changes in building outlines precisely. In this paper we present an overview of our approach, the theoretical models, implementation issues such as template selection and weight coefficient assignment, and experimental results.
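The abstract does not give implementation details of the least-squares matching. As an illustrative sketch only (not the authors' method), the idea of comparing old-outline templates against a newer image can be approximated with normalized cross-correlation: each template cut from the prior outline is searched for in a small window around its expected position, and a low best-match score flags a candidate change. All function names and the threshold value here are assumptions for illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def detect_outline_change(template, image, center, search=3, threshold=0.8):
    """Slide `template` over a small window around `center` in `image`;
    report a change if the best match score stays below `threshold`."""
    th, tw = template.shape
    ci, cj = center
    best = -1.0
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            i0 = ci + di - th // 2
            j0 = cj + dj - tw // 2
            patch = image[i0:i0 + th, j0:j0 + tw]
            if patch.shape != template.shape:
                continue  # window fell off the image edge
            best = max(best, ncc(patch, template))
    return best < threshold, best
```

A full system along the lines of the abstract would repeat this for every element of the decomposed outline and weight the per-element decisions before declaring a boundary change.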
Key to the development and evaluation of ATR algorithms is the availability of a large amount of data with supporting information indicating the correct image locations of the various targets. Manual methods can accomplish this truthing task very accurately but cannot keep up with the demand for data. This paper describes an automated technique capable of truthing a large amount of data in a short time using ground truth data for the targets in the image.
Automatic and timely image registration and alignment for producing highly accurate geodetic coordinates is of interest to tactical systems involved in battlespace awareness. We present an approach to registration that applies rigorous photogrammetric techniques to sensor geometry models to achieve registration accuracy of a few pixels. Image collection is fully modeled in terms of its static geometry, including aircraft and sensor parameters. The registration process not only aligns imagery but also significantly reduces geoposition errors when multiple images are used. A normalized cross-correlation is applied to align image pixels through adjustments to the initial collection geometry. Our process is fully automatic and requires no operator intervention. A side benefit of this technique is that the time required to register images is largely independent of image size. Registration can be applied to imagery from disparate sensors, such as Synthetic Aperture Radar (SAR), Electro-Optical (EO), Multi-Spectral, and Infrared, in a multi-sensor fusion approach to reduce geodetic errors. The approach is implemented on standard Commercial-Off-The-Shelf hardware and has been tested on SAR and EO imagery at near real-time processing rates.
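The core alignment step named in the abstract, normalized cross-correlation, can be sketched in isolation. This toy version (not the authors' implementation, which adjusts full collection geometry rather than raw pixel shifts) searches integer offsets for the one that maximizes the NCC score between the overlapping regions of two images; the function name and search radius are illustrative assumptions.

```python
import numpy as np

def estimate_shift(ref, moving, max_shift=5):
    """Brute-force normalized cross-correlation over integer shifts;
    returns the (row, col) offset that best aligns `moving` to `ref`."""
    best_score, best_shift = -np.inf, (0, 0)
    h, w = ref.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping region of the two images under shift (dy, dx)
            y0, y1 = max(0, dy), min(h, h + dy)
            x0, x1 = max(0, dx), min(w, w + dx)
            a = ref[y0:y1, x0:x1]
            b = moving[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
            a0, b0 = a - a.mean(), b - b.mean()
            denom = np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())
            if denom == 0:
                continue
            score = (a0 * b0).sum() / denom
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```

In a geometry-based system like the one described, the recovered offsets would feed back into the sensor model as corrections to the collection parameters rather than being applied as a pixel translation.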
Acousto-optic tunable filter (AOTF) based imagers hold great promise for the next generation of compact, lightweight, low-cost, programmable hyperspectral imaging systems that can ease the data bottleneck and provide polarization signatures for better target detection and identification. At the U.S. Army Research Laboratory, we have been developing such imagers operating from the visible to the long-wave infrared. Some of these imagers have been used in the laboratory and in the field for the collection of hyperspectral images. During the past year, we have worked on the design of higher-sensitivity, more compact visible and infrared imagers. We have designed imagers with tellurium dioxide (TeO2) AOTF cells operating up to 4.5 micrometers and with a thallium arsenic selenide (Tl3AsSe3, TAS) AOTF cell operating up to 11.5 micrometers. These imagers use focal plane arrays (FPAs) (Si CCD, InGaAs, InSb, and HgCdTe) as appropriate for the spectral region of interest. In this paper, we describe the latest advances in our AOTF imager research and present results obtained with these imagers.
Recent advances in spectral sensing technology have elucidated the benefits of multispectral and hyperspectral sensing to the military and civil user communities. These advancements, when properly exploited, can provide additional and improved capabilities in automated terrain analysis, image understanding, object detection, and material characterization. To this end, the U.S. Army has established a Center of Excellence for Spectral Sensing Technology. The Center conducts collaborative research on, development of, and demonstration of spectral sensing, processing, and exploitation techniques. Its collaborative efforts integrate programs across multiple disciplines and form a baseline program of coordinated technology thrusts. Existing efforts span sensor hardware, data processing architectures, algorithms, and signal processing and exploitation technologies across wide spectral regions. These thrusts in turn enable progress and performance improvement in the automated analysis, understanding, classification, discrimination, and identification of terrestrial objects and materials. The participants draw upon common scientific processes and disciplines to approach similar problems related to different categories and domains of phenomenology.
The acquisition and update of Geographic Information System (GIS) data are typically carried out using aerial or satellite imagery. Since new roads usually connect to the georeferenced, pre-existing road network, extraction of pre-existing road segments can provide good hypotheses for the updating process. This paper addresses the problem of extracting georeferenced roads from images and formulating hypotheses for the presence of new road segments. Our approach proceeds in three steps. First, salient points are identified and measured along roads from a map or GIS database, either by an operator or by an automatic tool. These salient points are then projected into image space, and the errors inherent in this process are calculated. In the second step, the georeferenced roads are extracted from the image using a dynamic programming algorithm, with the projected salient points and corresponding error estimates as input. Finally, the road center axes extracted in the previous step are analyzed to identify potential new segments attached to the extracted, pre-existing ones. This analysis uses a combination of edge-based and correlation-based algorithms. In this paper we present our approach and early implementation results.
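The abstract names dynamic programming for the extraction step without detailing the formulation. As a minimal sketch (not the authors' algorithm), road centerline extraction can be posed as a minimum-cost path problem through a cost image that is low on road-like pixels, anchored at two projected salient points; the function name, the left-to-right scan order, and the one-row step constraint are simplifying assumptions for illustration.

```python
import numpy as np

def extract_path(cost, start_row, end_row):
    """Min-cost left-to-right path through `cost`, starting at
    (start_row, 0) and ending at (end_row, last column); each step moves
    one column right and at most one row up or down."""
    h, w = cost.shape
    acc = np.full((h, w), np.inf)   # accumulated cost table
    back = np.zeros((h, w), dtype=int)
    acc[start_row, 0] = cost[start_row, 0]
    for j in range(1, w):
        for i in range(h):
            for di in (-1, 0, 1):
                p = i + di
                if 0 <= p < h and acc[p, j - 1] + cost[i, j] < acc[i, j]:
                    acc[i, j] = acc[p, j - 1] + cost[i, j]
                    back[i, j] = p
    assert np.isfinite(acc[end_row, w - 1]), "end point unreachable"
    # backtrack from the required end point
    path = [end_row]
    for j in range(w - 1, 0, -1):
        path.append(back[path[-1], j])
    return [int(i) for i in path[::-1]]
```

In the setting of the abstract, the cost image would be derived from the imagery (e.g. from edge or correlation responses), and the projected salient points with their error estimates would constrain where the path may start, end, and pass.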
Automated image geo-registration of military and defense-related imagery can fail due to poor image quality, cloud cover, supporting-data errors, and sensor phenomenology. In addition, the many possible image processing algorithms further compound the problem of predicting success. An accurate mathematical model that incorporates all of these parameters and predicts the outcome of a registration event is not feasible. What is proposed here is a probabilistic approach to the problem: a robust quality metric able to determine the success of an autonomous registration is discussed.
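The abstract does not define its quality metric. One common stand-in for judging correlation-based registration confidence, shown here purely as an assumed illustration and not as the paper's metric, is the peak-to-sidelobe ratio of the correlation surface: a sharp, isolated peak suggests an unambiguous match, while a flat or multi-peaked surface suggests a likely failure.

```python
import numpy as np

def peak_to_sidelobe_ratio(corr_surface, exclude=2):
    """Sharpness of a correlation surface: peak value versus the mean and
    spread of the remaining ('sidelobe') values. A high ratio suggests a
    confident, unambiguous registration; a low ratio suggests failure."""
    h, w = corr_surface.shape
    peak_idx = int(np.argmax(corr_surface))
    pi, pj = divmod(peak_idx, w)
    peak = corr_surface[pi, pj]
    # mask out a small neighborhood around the peak
    mask = np.ones_like(corr_surface, dtype=bool)
    mask[max(0, pi - exclude):pi + exclude + 1,
         max(0, pj - exclude):pj + exclude + 1] = False
    sidelobes = corr_surface[mask]
    return float((peak - sidelobes.mean()) / (sidelobes.std() + 1e-12))
```

A probabilistic framework like the one the abstract proposes would presumably calibrate such a score (together with image quality and sensor parameters) against observed registration outcomes to yield a success probability.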
The Air Force Research Lab, Advanced Guidance Division (AFRL/MNG), located at Eglin AFB, has expanded the capabilities of its Modular Algorithm Concept Evaluation Tool (MACET) for autonomous target acquisition (ATA) analysis to include an imagery truth editor for simultaneously displaying and working with multiple images of differing dimensionality and resolution. To support multi-sensor truthing, the MACET Truth Editor performs computer-assisted geospatial registration between multiple 2D images, or between 2D and 3D images. The input images of overlapping scenes may be obtained from various sensor types (visible, passive infrared, laser radar (ladar), etc.) and taken from different sensor locations and orientations. Registration of 3D-to-2D and 2D-to-2D imagery pixels is made to a reference 3D coordinate system using "hints" provided by an analyst. Hints may include some combination of the following to reach an approximate solution to the registration problem: marking of common points in each image, marking of horizon lines in 2D images, entry of imagery sensor characteristics (FOV, FPA layout, etc.), and entry of relative sensor location and orientation. The MACET Truth Editor provides a consistent user interface through which registration hints are entered and truthing operations are performed graphically.
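Relating 2D pixels to a reference 3D coordinate system, as described above, rests on projecting 3D points through a camera model built from the analyst's hints. As a simplified sketch under an assumed pinhole model (the abstract does not specify MACET's sensor models; all names here are illustrative), the reprojection residual against analyst-marked points gives a natural measure of how good the current registration solution is.

```python
import numpy as np

def project_points(points_3d, R, t, f, cx, cy):
    """Project world points into pixel coordinates with a simple pinhole
    model: camera rotation R (3x3), translation t, focal length f in
    pixels, and principal point (cx, cy)."""
    cam = (R @ points_3d.T).T + t        # world -> camera frame
    u = f * cam[:, 0] / cam[:, 2] + cx
    v = f * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

def reprojection_rmse(points_3d, marked_2d, R, t, f, cx, cy):
    """RMS distance between projected reference points and the analyst's
    marked image points -- a simple goodness-of-registration measure."""
    proj = project_points(points_3d, R, t, f, cx, cy)
    return float(np.sqrt(((proj - marked_2d) ** 2).sum(axis=1).mean()))
```

A hint-driven tool would iteratively adjust R, t, and the sensor characteristics to drive this residual down as the analyst supplies additional tie points or horizon lines.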