For the last three decades, space science remote sensing technologies have provided enormous amounts of useful data and information, broadening our understanding of the solar system, including our home planet. As this research has expanded our knowledge, it has generated additional questions, which in turn have established new science requirements and pushed the state of the art in technology. NASA's Earth Science program has deployed about 18 highly complex satellites so far and is in the process of defining and launching multiple observing systems in the next decade. Given the Nation's heightened security posture, researchers and technologists are paying serious attention to the dual use of these science-driven technologies: how such sophisticated observing and measuring systems can detect multiple types of security threats with substantial lead time, so that the appropriate law enforcement agencies and decision makers can take adequate steps to defuse potential risks and protect society. This paper examines numerous NASA technologies, such as laser/lidar systems, microwave and millimeter-wave technologies, optical observing systems, high-performance computational techniques for rapid analyses, and imaging products, that can have a tremendous payoff for security applications.
The Advanced Video Guidance Sensor (AVGS) is the continuation and advancement of the Video Guidance Sensor (VGS), developed by NASA/MSFC in the mid-1990s and flown successfully as an experiment on STS-87 and STS-95 in the late 1990s. The AVGS is designed to be an autonomous docking sensor using the same concept as the VGS, but with updated electronics, increased range, reduced weight, and improved dynamic tracking capability. Currently under development as part of NASA's Demonstration of Autonomous Rendezvous Technology (DART) program at Orbital Sciences Corp., the AVGS will be the primary sensor at close-in ranges. The AVGS is designed to provide line-of-sight bearing at ranges greater than one kilometer and to provide 6-DOF relative position and attitude data from 300 meters to dock. This paper provides an overview of the AVGS's basic operation, improvements over the original VGS, development challenges, current status, and role in the DART mission.
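The line-of-sight bearing reported at long range can be illustrated with a simple pinhole-camera model that converts a tracked target centroid into azimuth and elevation angles. This is a minimal sketch only; the focal length, image size, and centroid coordinates below are assumed for illustration and are not AVGS parameters:

```python
import math

def bearing_from_pixel(u, v, cx, cy, f_px):
    """Convert a target centroid (u, v) in pixel coordinates to
    azimuth/elevation line-of-sight angles using a pinhole model:
    tan(angle) = offset from the principal point / focal length."""
    az = math.atan2(u - cx, f_px)   # horizontal bearing, radians
    el = math.atan2(cy - v, f_px)   # vertical bearing (image y grows downward)
    return az, el

# Assumed optics for illustration: principal point (320, 240),
# focal length 2000 px; centroid detected at (420, 180).
az, el = bearing_from_pixel(420, 180, 320.0, 240.0, 2000.0)
print(math.degrees(az), math.degrees(el))
```

At ranges where the retro-reflector pattern is unresolved, a single centroid yields only this bearing; recovering the full relative position and attitude requires resolving the target pattern itself, which is why the 6-DOF mode operates only at closer range.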
AeroAstro's patented RF Probe is a system designed to address the needs of spacecraft developers and operators interested in measuring and analyzing near-field RF emissions emanating from a nearby spacecraft of interest. The RF Probe consists of an intelligent spectrum analyzer with digital signal processing capabilities, combined with a calibrated, wide-bandwidth antenna and an RF front end covering the 50 kHz to 18 GHz spectrum. It is capable of acquiring signal level and signal vector information, classifying signals, assessing the quality of a satellite's transponders, and characterizing near-field electromagnetic emissions. The RF Probe is intended either for incorporation into a suite of spacecraft sensors or for use as a stand-alone sensor on spacecraft or other platforms such as Unmanned Aerial Vehicles (UAVs). The RF Probe was initially conceived as a tool to detect, and aid in diagnosing, malfunctions in a spacecraft of interest. However, its utility goes far beyond this initial concept, spanning a wide range of military applications. Most importantly, the RF Probe can provide space situational awareness for critical on-orbit assets by detecting externally induced RF fields, aiding in protection against potentially devastating attacks.
The reliability of space-related assets has been emphasized since the second loss of a Space Shuttle. The intricate nature of the hardware being inspected often requires complete disassembly to perform a thorough inspection, which can be both difficult and costly. Furthermore, it is imperative that the hardware under inspection not be altered in any manner other than that intended. In such cases, machine vision allows inspection with greater frequency using less intrusive methods. Such systems can provide feedback to guide not only manually controlled instrumentation but autonomous robotic platforms as well. This paper details a method that uses machine vision to provide such sensing capabilities in a compact package. A single camera is used in conjunction with a projected reference grid to ascertain precise distance measurements. The sensor design focuses on the use of conventional components in an unconventional manner, with the goal of providing a solution for systems that do not require, or cannot accommodate, more complex vision systems.
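One common formulation of single-camera ranging with a projected grid treats the projector as one arm of a stereo pair: a grid point's lateral shift in the image, relative to its calibrated reference position, is inversely proportional to range. The sketch below illustrates that triangulation relation only; the baseline, focal length, and shift values are hypothetical, not the sensor described in the paper:

```python
def range_from_grid_shift(baseline_m, focal_px, shift_px):
    """Triangulated range for one projected grid point. With the
    projector and camera separated by a known baseline, and the
    point's image shift measured against its calibrated reference
    position, range follows Z = baseline * focal / shift (the
    standard stereo-disparity relation)."""
    if shift_px <= 0:
        raise ValueError("shift must be positive for a point in front of the camera")
    return baseline_m * focal_px / shift_px

# Assumed numbers for illustration: 10 cm baseline, 1500 px focal
# length, 25-pixel measured shift -> about 6 m of range.
z = range_from_grid_shift(0.10, 1500.0, 25.0)
print(z)
```

Because range scales as 1/shift, precision degrades quadratically with distance, which is consistent with such sensors being aimed at close-range inspection tasks.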
In recent decades, NASA's interest in spacecraft rendezvous and proximity operations has grown. Additional instrumentation is needed to improve manned docking operations' safety, as well as to enable telerobotic operation of spacecraft or completely autonomous rendezvous and docking. To address this need, Advanced Optical Systems, Inc., Orbital Sciences Corporation, and Marshall Space Flight Center have developed the Advanced Video Guidance Sensor (AVGS) under the auspices of the Demonstration of Autonomous Rendezvous Technology (DART) program. Given a cooperative target comprising several retro-reflectors, AVGS provides six-degree-of-freedom information at ranges of up to 300 meters for the DART target. It does so by imaging the target, then performing pattern recognition on the resulting image. Longer range operation is possible through different target geometries.
Now that AVGS is being readied for its test flight in 2004, the question is: what next? Modifications can be made to AVGS, including different pattern recognition algorithms and changes to the retro-reflector targets, to make it more robust and accurate. AVGS could be coupled with other space-qualified sensors, such as a laser range-and-bearing finder, that would operate at longer ranges. Different target configurations, including the use of active targets, could result in significant miniaturization over the current AVGS package. We will discuss these and other possibilities for a next-generation docking sensor or sensor suite that involve AVGS.
NASA's Marshall Space Flight Center was the driving force behind the development of the Advanced Video Guidance Sensor (AVGS), an active sensor system that provides near-range sensor data as part of an automatic rendezvous and docking system. The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state camera to detect the return from the target, and image capture electronics and a digital signal processor to convert the video information into relative positions and attitudes. The AVGS will fly as part of the Demonstration of Autonomous Rendezvous Technologies (DART) in October 2004. This development effort has required a great deal of testing at every phase of development, including optical characterization of performance with the intended target, thermal vacuum testing, performance tests in long-range vacuum facilities, EMI/EMC tests, and performance testing in dynamic situations. The sensor has been shown to track a target at ranges of up to 300 meters in both vacuum and ambient conditions, to survive and operate through the thermal vacuum cycling specific to the DART mission, to handle EMI well, and to perform well in dynamic situations.
The Shuttle Inspection Lidar (SIL) system is a derivative of a scanning lidar system being developed by MD Robotics and Optech. It incorporates a lidar, a camera, lights, and video communications systems. The SIL is designed to meet the specific requirements for the on-orbit inspection and measurement of the Space Shuttle's leading-edge Reinforced Carbon-Carbon (RCC) and Thermal Protection System (TPS). The SIL has a flexible electrical and mechanical interface that enables it to be mounted at different locations, including the Shuttle Remote Manipulator System (SRMS, Canadarm) and the Space Station Remote Manipulator System (SSRMS) on the International Space Station (ISS).
This paper describes the SIL system and the specifications of the imaging lidar scanner system, and discusses the application of the SIL for on-orbit shuttle inspection using the SRMS. Ground-based measurements of the shuttle TPS taken by a terrestrial version of the imager are also presented.
We present a novel method of increasing the focal depth of an optical system that can be used in a star tracker. The method is based on a special phase plate placed in the vicinity of the aperture stop of an optical system to produce the required spot size over a large focal depth. Phase retardation was applied to a lens system having a focal length of 30 mm, an F-number of 2, a working wavelength range of 0.5 to 0.75 μm, and a view angle of 20 degrees. The performance of a lens having a suitable quartic phase was analyzed, and it was shown that the focal depth of such a lens system can be extended more than threefold compared to a system having no phase plate.
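The defocus-tolerance effect of a quartic pupil phase can be checked with a simplified numerical model: the on-axis intensity of a circular pupil carrying a defocus phase ψρ² plus a plate phase aρ⁴, reduced to a one-dimensional integral by the substitution u = ρ². The quartic coefficient used below (10 rad) is an assumed illustrative value, not the design analyzed in the paper:

```python
import cmath

def axial_intensity(defocus, quartic=0.0, n=2000):
    """On-axis intensity of a uniformly illuminated circular pupil with
    defocus phase psi*rho^2 and an optional quartic phase a*rho^4.
    With u = rho^2 the pupil integral becomes
    |integral_0^1 exp(i*(psi*u + a*u^2)) du|^2,
    evaluated here by a midpoint Riemann sum."""
    du = 1.0 / n
    total = 0.0 + 0.0j
    for k in range(n):
        u = (k + 0.5) * du
        total += cmath.exp(1j * (defocus * u + quartic * u * u)) * du
    return abs(total) ** 2

# Defocus scan with and without an assumed quartic coefficient of 10 rad.
psis = [i * 0.5 for i in range(-60, 61)]
clear = [axial_intensity(p) for p in psis]
plate = [axial_intensity(p, quartic=10.0) for p in psis]
c_pk, p_pk = max(clear), max(plate)
# Normalized axial intensity well away from focus: the clear pupil has
# collapsed, while the quartic plate keeps a usable signal.
print(clear[psis.index(-6.0)] / c_pk, plate[psis.index(-6.0)] / p_pk)
```

The trade is visible in the numbers: the plate lowers the peak intensity but flattens the curve over defocus, which is exactly the extended-focal-depth behavior the abstract describes.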
Space-borne imaging systems derived from commercial technology have been successfully employed on launch vehicles for several years. Since 1997, over sixty such imagers, all in the product family called RocketCam™, have operated successfully on 29 launches involving most U.S. launch systems.
During this time, these inexpensive systems have demonstrated their utility in engineering analysis of liftoff and ascent events, booster performance, separation events and payload separation operations, and have also been employed to support and document related ground-based engineering tests. Such views from various vantage points provide not only visualization of key events but stunning and extremely positive public relations video content.
Near-term applications include capturing key events on Earth-orbiting spacecraft and related proximity operations.
This paper examines the history to date of RocketCams on expendable and manned launch vehicles, assesses their current utility on rockets, spacecraft and other aerospace vehicles (e.g., UAVs), and provides guidance for their use in selected defense and security applications.
Broad use of RocketCams on defense and security projects will provide critical engineering data for developmental efforts, a large database of in-situ measurements onboard and around aerospace vehicles and platforms, compelling public relations content, and new diagnostic information for systems designers and failure-review panels alike.
This paper presents the development of an instrumental prototype for IR hyperspectral imaging from geosynchronous Earth orbit (GEO). With collaboration and funding support from the Canadian Space Agency (CSA), Telops performed the development and technical demonstration of a spectral dispersive module (SDM) with potential application to the US NOAA Hyperspectral Environmental Suite (HES). HES development will provide infrared and visible environmental data collection capabilities for the next GOES series of geosynchronous satellites, which will collect weather and environmental data to aid in weather prediction and climate monitoring. The design of the SDM is based on an Offner configuration. Such a design allows the gathering of high spatial and spectral resolution data while keeping the spatial and spectral distortions smaller than the size of a pixel. A convex diffraction grating is used in the system as the spectrally dispersing element. The targeted application of this Offner spectrometer configuration is weather sounding in the mid-IR spectral range. The design and demonstration phase of the SDM is described, and test results with the engineering laboratory model, such as spectral/spatial resolution, distortion, transmission, and efficiency, are presented.
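The dispersion such a grating provides follows the standard grating equation, mλ = d(sin θᵢ + sin θ_d). A minimal sketch, with an assumed groove density and incidence angle (not the SDM's actual parameters), shows how diffraction angle spreads across a mid-IR band:

```python
import math

def diffraction_angle_deg(wavelength_um, groove_density_per_mm,
                          incidence_deg, order=1):
    """Solve the grating equation m*lambda = d*(sin(theta_i) + sin(theta_d))
    for the diffracted angle theta_d, in degrees."""
    d_um = 1000.0 / groove_density_per_mm          # groove spacing in um
    s = order * wavelength_um / d_um - math.sin(math.radians(incidence_deg))
    if abs(s) > 1.0:
        raise ValueError("this order is evanescent at this wavelength")
    return math.degrees(math.asin(s))

# Assumed, illustrative parameters: 30 grooves/mm, 10 degrees incidence,
# sampled across a mid-IR sounding band.
for wl in (4.0, 5.0, 6.0):
    print(wl, diffraction_angle_deg(wl, 30.0, 10.0))
```

The monotonic angle-versus-wavelength mapping is what lets the spectrometer assign each detector column a spectral channel; keeping that mapping's deviation from linearity below a pixel is the "smile/keystone" distortion requirement mentioned above.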
The most accurate method of measuring distance and motion is interferometry. This method of motion measurement correlates change in distance to change in phase of an optical signal. As one mirror in the interferometer moves, the resulting phase variation is visualized as motion of interferometric fringes. While traditional optical interferometry can easily be used to measure distance variations as small as 10 nm, it is not a viable method for measuring distance to, or motion of, an object located at a distance greater than half the coherence length of the illumination source. This typically limits interferometry to measurements of objects within <1 km of the interferometer. We present a new interferometer based on phase conjugation, which greatly increases the maximum distance between the illumination laser and the movable target. This method is as accurate as traditional interferometry, but is less sensitive to laser pointing error and operates over a longer path. Experiments demonstrated measurement accuracy of <15 nm with a laser-target separation of 50 times the laser coherence length.
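The phase-to-distance relation underlying these measurements is simple: in a double-pass (reflective) geometry, a target motion of Δd changes the round-trip optical path by 2Δd, so Δφ = 4πΔd/λ. A minimal sketch, using an assumed 633 nm (HeNe-class) wavelength for illustration:

```python
import math

def displacement_from_phase(delta_phase_rad, wavelength_m):
    """Target displacement inferred from measured interferometric phase
    in a double-pass geometry: delta_d = lambda * delta_phi / (4*pi)."""
    return wavelength_m * delta_phase_rad / (4.0 * math.pi)

# One full fringe (2*pi of phase) corresponds to lambda/2 of target motion.
d = displacement_from_phase(2.0 * math.pi, 633e-9)
print(d)
```

At 633 nm, the 10 nm sensitivity quoted above corresponds to resolving roughly 0.2 rad of fringe phase, which conventional fringe-counting electronics handle comfortably; the coherence-length ceiling, not phase resolution, is the limiting factor the phase-conjugate scheme addresses.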
Sparse aperture (SA) telescopes represent a promising technology for increasing the effective diameter of an optical system while reducing overall weight and stowable size. Although conceptually explored in the literature for decades, the technology has only recently matured to the point of being reasonably considered for certain applications. In general, a sparse aperture system consists of an array of sub-apertures that are phased to synthesize a larger effective aperture. The models used to date to predict sparse aperture imagery typically make a "gray world" assumption, where the input is a resampled black-and-white panchromatic image. This input is then degraded and resampled with a so-called polychromatic system optical transfer function (OTF), which is a weighted average of the OTFs over the spectral bandpass. In reality, a physical OTF is spectrally dependent, exhibiting varying structure with spatial frequency (especially in the presence of optical aberrations or sub-aperture phase errors). Given this spectral variation with spatial frequency, there is some concern that the traditional gray-world resampling approach may not capture significant features of the image quality of sparse aperture systems. This research investigates how the image quality of a sparse aperture system varies with respect to a conventional telescope from a spectro-radiometric perspective, with emphasis on whether the restored sparse aperture image will be beset by spectral artifacts.
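The gray-world averaging the paragraph describes can be sketched concretely using the diffraction-limited MTF of a filled circular aperture as the monochromatic transfer function; real sparse-aperture OTFs have much richer structure, but the weighted-average step is the same. The F-number, spectral bands, and weights below are assumed for illustration:

```python
import math

def mtf_circular(nu, cutoff):
    """Diffraction-limited MTF of a filled circular aperture at spatial
    frequency nu (cycles/mm), zero beyond the cutoff frequency."""
    x = nu / cutoff
    if x >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def poly_mtf(nu, wavelengths_um, weights, fnum):
    """'Gray world' polychromatic MTF: the weighted average of the
    monochromatic MTFs across the bandpass. Per-band cutoff is
    1 / (lambda * F#) in cycles/mm, with lambda converted to mm."""
    total = sum(weights)
    return sum(w * mtf_circular(nu, 1.0 / (wl * 1e-3 * fnum))
               for wl, w in zip(wavelengths_um, weights)) / total

# Assumed example: an f/4 system and three equally weighted visible bands.
print(poly_mtf(50.0, [0.45, 0.55, 0.65], [1.0, 1.0, 1.0], 4.0))
```

The concern raised in the abstract is visible even here: the blue band still passes frequencies that the red band has already cut off, so a single averaged curve misstates the per-band response, and the mismatch grows once aberrations or piston/tip/tilt phase errors make each monochromatic OTF structured rather than smooth.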
Photo-thermo-plastic films (PTPF) with high-density recording capability (up to 1000 lines/mm) can be used repeatedly for spectrum-zoned shooting. However, work is underway to increase their light sensitivity and to make them resistant to radioactive and powerful electromagnetic radiation. Chalcogenide glass semiconductors (CGS) were found to combine the properties of both glasses and semiconductors, which makes them suitable for various optical data recording systems. The most prominent representatives of these glass semiconductors are arsenic sulfide and arsenic selenide. Results of comprehensive investigations of the photoelectric properties of their thin layers are presented.
The paper presents results on technologically shifting the region of maximum photosensitivity of CGS layers within the 400 to 800 nm optical range, which satisfies the requirements of optical data recording systems for spectrum-zoned shooting. To increase the photosensitivity of CGS thin layers, the initial materials were doped with tin. Experiments have shown that doping with Sn at the level of 1.2-1.4 at.% increases the photosensitivity of the layer by more than one order of magnitude. The high photosensitivity of the resulting PTPF makes possible their wide application in various optical image registration systems, including use as onboard memory devices.
Many on-orbit rendezvous missions would benefit from the ability to locate and track a spacecraft at a distance, compute its pose and attitude with high accuracy during close-in maneuvers, and provide a visual record of the final mission event. The Rendezvous Laser Vision System (RELAVIS), designed by Optech and MD Robotics, meets such needs.
Installed on a seeker vehicle, RELAVIS provides an integrated laser-based vision system that obtains the relative position and orientation of a target vehicle. RELAVIS supports targetless operation, does not require any external illumination sources, and operates irrespective of the location of the solar disk. The primary use of RELAVIS is to support autonomous satellite rendezvous and docking operations. RELAVIS can also be used for 3D workspace mapping and calibration, target vehicle inspection, and reconnaissance in space environments.
RELAVIS has the unique capability of producing highly accurate data over a range of 0.5 metres to 3 kilometres, providing 6-degree-of-freedom pose, bearing, and range data for a target spacecraft that may be processed by an autonomous Guidance, Navigation and Control (GNC) system for orbital rendezvous and docking operations. RELAVIS can also be equipped with a space-qualified camera unit to view on-orbit events.
Future planetary exploration missions will aim at landing a spacecraft in hazardous regions of a planet, thereby requiring an ability to autonomously avoid surface obstacles and land at a safe site. Landing safety is defined in terms of the local topography (slope relative to gravity and surface roughness) and landing dynamics, requiring an impact velocity lower than a given tolerance. To meet these challenges, a LIDAR-based Autonomous Planetary landing System (LAPS) was developed, combining the three-dimensional cartographic capabilities of the LIDAR with autonomous 'intelligent' software for interpreting the data and guiding the lander to the safe site. This paper provides an overview of the LAPS ability to detect obstacles, identify a safe site, and support the navigation of the lander to the detected safe site. It also demonstrates the performance of the system using real LIDAR data taken over a physical emulation of a Mars terrain.
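The slope and roughness criteria can be sketched as a least-squares plane fit to a lidar terrain patch: slope is the tilt of the fitted plane from horizontal, and roughness is the RMS residual of the points about that plane. This is an illustrative reconstruction of the general technique, not the LAPS algorithm itself, and the terrain patch is synthetic:

```python
import math

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3-D points,
    from the 3x3 normal equations solved by Cramer's rule."""
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxz += x * z; syz += y * z; sz += z
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(len(points))]]
    rhs = [sxz, syz, sz]
    d = det3(A)
    sol = []
    for j in range(3):
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = rhs[i]
        sol.append(det3(m) / d)
    return tuple(sol)  # (a, b, c)

def slope_and_roughness(points):
    """Candidate-site slope (degrees from horizontal) and RMS residual
    roughness, derived from the fitted plane."""
    a, b, c = fit_plane(points)
    slope = math.degrees(math.atan(math.hypot(a, b)))
    rms = math.sqrt(sum((z - (a * x + b * y + c)) ** 2
                        for x, y, z in points) / len(points))
    return slope, rms

# Synthetic 1 m x 1 m patch: a 10% grade in x plus 2 mm checkerboard bumps.
pts = [(x * 0.1, y * 0.1, 0.1 * (x * 0.1) + 0.002 * ((x + y) % 2))
       for x in range(10) for y in range(10)]
slope, rough = slope_and_roughness(pts)
print(slope, rough)
```

A site would then be accepted only if both numbers fall under mission-specific tolerances tied to the lander's leg geometry and touchdown dynamics; those thresholds are not given in the abstract, so none are assumed here.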