The Neptec Design Group has developed the Laser Camera System (LCS), a new 3D laser scanner for space applications, based on an autosynchronized scanning principle from the National Research Council of Canada (NRC). The LCS operates in both imaging and target centroid acquisition modes. In imaging mode, the LCS raster-scans objects and can produce 2D and 3D maps of their surface features. In centroid acquisition mode, the LCS determines the position of discrete target points on an object. The LCS was tested in August 2001 during mission STS-105 of the space shuttle Discovery to the International Space Station. From a fixed location in the shuttle payload bay, the LCS's 1500 nm eye-safe infrared laser was pre-programmed to draw Lissajous patterns on Inconel targets (black dots on a white background) and retro-reflective disc targets affixed to the Multi-Purpose Logistics Module (MPLM). The LCS acquired centroid data for two and a half hours during the MPLM demating operation to demonstrate its ability to track both types of targets while they were stationary and moving.
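As context for the pre-programmed scan described above, a Lissajous pattern is simply two sinusoidal deflection commands of different frequencies driving the two scan axes. The short Python sketch below shows one way such a pattern could be generated; the frequency ratio, phase offset, and amplitude are illustrative assumptions, not the actual flight-programmed parameters.

import numpy as np

def lissajous_scan(n_points=1000, fx=3.0, fy=2.0, phase=np.pi / 2, amp=1.0):
    # x/y deflection commands tracing a Lissajous figure centred on a target;
    # the fx:fy ratio sets the figure shape, phase sets its orientation.
    t = np.linspace(0.0, 2.0 * np.pi, n_points)
    x = amp * np.sin(fx * t + phase)
    y = amp * np.sin(fy * t)
    return x, y

# Example (illustrative values only): a 3:2 Lissajous figure of unit amplitude
x_cmd, y_cmd = lissajous_scan()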
Scanning LIDARs are widely used as 3D sensors for navigation due to their ability to provide 3D information about terrain and obstacles with a high degree of precision. The optics of conventional scanning LIDARs are generally monostatic, i.e., the launch beam and the return beam share the same optical path in the scanning optics. As a consequence, LIDARs with monostatic optics suffer poor performance at short range (< 5 m) due to scattering from internal optics and the insufficient dynamic range of the LIDAR receiver to cover both short range and long range (1 km). This drawback is undesirable for rover navigation, since it is critical for low-profile rovers to see well at short range. It is also an issue for LIDARs used in applications involving aerosol penetration, since scattering from nearby aerosol particles can disable the LIDAR at short range. In many cases, multiple 3D sensors have to be used for navigation.
To overcome these limitations, Neptec has previously developed a scanning LIDAR (called TriDAR) with specially designed triangulation optics that is capable of high-speed scanning. The WABS (Wide-Angle Bistatic Scanning) LIDAR reported in this paper demonstrates several major advances over the TriDAR design. While it retains the benefit of the bistatic optics used in TriDAR, in which the launch beam path and the return beam path are separated in space, it significantly improves performance in terms of field of view, receiving optical aperture, and sensor size.
The WABS LIDAR design was prototyped under a contract with the Canadian Space Agency. The LIDAR prototype was used as the 3D sensor for the navigation system on a lunar rover prototype. It demonstrated good performance in field of view (45° × 60°) and minimum range (1.5 m); both are critical for rover navigation and hazard avoidance. The paper discusses the design concept and objectives of the WABS LIDAR and presents test results.
During the ESA Mars Sample Return (MSR) mission, a sample canister launched from Mars will be autonomously
captured by an orbiting satellite. We present the concept and the design of an active 3D camera supporting the orbiter
navigation system during the rendezvous and capture phase. This camera aims to provide the range and bearing of a 20 cm diameter canister from 2 m to 5 km within a 20° field of view without moving parts (scannerless). The concept
exploits the sensitivity and the gating capability of a gated intensified camera. It is supported by a pulsed source based
on an array of laser diodes with adjustable amplitude and pulse duration (from nanoseconds to microseconds). The
ranging capability is obtained by adequately controlling the timing between the acquisition of 2D images and the
emission of the light pulses. Three modes of acquisition are identified to accommodate the different levels of ranging
and bearing accuracy and the 3D data refresh rate. Each mode requires a different number of images to be processed to produce a single 3D image. These modes can be applied to the different approach phases. The entire concept of
operation of this camera is detailed with an emphasis on the extreme lighting conditions. Its uses for other space
missions and terrestrial applications are also highlighted. This design is implemented in a prototype with shorter ranging
capabilities for concept validation. Preliminary results obtained with this prototype are also presented. This work is
financed by the Canadian Space Agency.
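To make the timing-based ranging concrete, the following minimal Python sketch computes the range window observed by a single gated exposure from the gate delay, gate width, and laser pulse width; the relation R = c·t/2 is standard time-of-flight geometry, and the numerical values in the example are illustrative assumptions rather than parameters of the actual camera.

C = 299_792_458.0  # speed of light, m/s

def gate_range_window(delay_s, gate_width_s, pulse_width_s):
    # A return is seen if any part of the reflected pulse overlaps the open gate,
    # so the gated exposure covers round-trip times from (delay - pulse width)
    # to (delay + gate width); halving converts round-trip time to range.
    r_min = max(C * (delay_s - pulse_width_s) / 2.0, 0.0)
    r_max = C * (delay_s + gate_width_s) / 2.0
    return r_min, r_max

# Example (illustrative values): a 30 ns delay, 20 ns gate and 10 ns pulse
# select returns from roughly 3 m to 7.5 m.
print(gate_range_window(30e-9, 20e-9, 10e-9))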
NASA contracted Neptec to provide the Laser Camera System (LCS), a 3D scanning laser sensor, for the on-orbit inspection of the Space Shuttle's Thermal Protection System (TPS) on the return-to-flight mission STS-114. The scanner was mounted on the boom extension to the Shuttle Remote Manipulator System (SRMS). Neptec's LCS was selected due to its close-range accuracy, large scanning volume and immunity to the harsh ambient lighting of space.
The crew of STS-114 successfully used the LCS to inspect and measure damage to the TPS of the shuttle Discovery in July 2005. The crew also inspected the external-tank (ET) doors to ensure that they were fully closed. Neptec staff also provided operational support and real-time detailed analysis of the scanned features using analysis workstations at the Mission Control Center (MCC) in Houston. This paper provides a summary of the on-orbit scanning activities and a description of the results of the detailed analysis.
In fringe-projection surface-geometry measurement, phase-unwrapping techniques produce a continuous phase distribution that contains the height information of the 3-D object surface. To convert the phase distribution to the height of the 3-D object surface, a phase-to-height conversion algorithm is needed; it is essentially determined by the system calibration, which depends on the system geometry. Both linear and non-linear approaches have been used to determine the mapping relationship between the phase distribution and the height of the object; however, the latter has often involved complex derivations. In this paper, the mapping relationship between the phase and the height of the object surface is formulated both as a linear mapping and as non-linear equations developed through a simplified geometrical derivation. A comparison is made between the two approaches. For both methods the system calibration is carried out using a least-squares approach, and the accuracy of the calibration is determined both by simulation and by experiment. Measurement accuracy using the linear calibration data was generally higher than that using the non-linear calibration data over most of the measurement depth range.
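As an illustration of the linear calibration approach, the sketch below fits a linear phase-to-height mapping h = c0 + c1·φ to calibration data by least squares and then applies it to an unwrapped phase map. A single global linear map is a simplifying assumption for illustration (in practice the coefficients are often calibrated per pixel), and the function names are hypothetical, not the paper's formulation.

import numpy as np

def fit_linear_phase_height(phases, heights):
    # Least-squares fit of h = c0 + c1 * phi from unwrapped phase values
    # measured at known reference heights (e.g. calibration planes).
    A = np.column_stack([np.ones_like(phases), phases])
    coeffs, *_ = np.linalg.lstsq(A, heights, rcond=None)
    return coeffs  # (c0, c1)

def phase_to_height(phi_map, coeffs):
    # Convert an unwrapped phase map to a height map with the calibrated mapping.
    c0, c1 = coeffs
    return c0 + c1 * phi_map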
Traditional sinusoidal phase-shifting algorithms involve the calculation of an arctangent function to obtain the phase, which results in slow measurement speed. This paper presents a novel high-speed two-step triangular phase-shifting approach for 3-D object measurement. In the proposed method, a triangular gray-level-coded pattern is used for the projection. Only two triangular patterns, phase-shifted by 180 degrees (half of the pitch), are needed to reconstruct the 3-D object. A triangular-shaped intensity-ratio distribution is obtained by computation from the two captured triangular patterns. Removing the triangular shape of the intensity ratio over each pattern pitch generates a wrapped intensity-ratio distribution. The unwrapped intensity-ratio distribution is obtained by removing the discontinuities of the wrapped image with a modified version of the unwrapping method commonly used in sinusoidal phase shifting. An intensity-ratio-to-height conversion algorithm, based on the traditional phase-to-height conversion algorithm of the sinusoidal phase-shifting method, is used to reconstruct the 3-D surface coordinates of the object. Compared with the sinusoidal and trapezoidal phase-shifting methods, the processing speed is faster with similar resolution. This method therefore has the potential for real-time 3-D object measurement, with applications in inspection tasks, mobile-robot navigation, and 3-D surface modeling.
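To indicate how the two-step computation begins, the sketch below forms a per-pixel intensity ratio from the two half-pitch-shifted triangular images as a normalized difference; this particular normalization, and the omission of the triangular-shape removal, unwrapping, and ratio-to-height steps, are simplifying assumptions for illustration rather than the published algorithm.

import numpy as np

def two_step_intensity_ratio(img1, img2, eps=1e-9):
    # img1, img2: floating-point images captured under the two projected
    # triangular patterns (shifted by half a pitch). The normalized difference
    # varies in a triangular way across each pattern pitch; removing that
    # triangular shape and unwrapping the result (not shown) yields a
    # continuous ratio map that can be converted to height.
    denom = np.maximum(img1 + img2, eps)
    return (img1 - img2) / denom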
With the loss of the Space Shuttle Columbia, there has been intense focus at NASA on being able to detect and characterize damage that may have been sustained by the orbiter during the launch phase. To help perform this task, the Neptec Laser Camera System (LCS) has been selected as one of the sensors to be mounted at the end of a boom extension to the Shuttle Remote Manipulator System (SRMS). A key factor in NASA's selection of the LCS was its successful performance during flight STS-105 as a Detailed Test Objective (DTO). The LCS is based on a patented design which has been exclusively licensed to Neptec for space applications.
The boom will be used to position the sensor package to inspect critical areas of the Shuttle’s Thermal Protection System (TPS). The operational scenarios under which the LCS will be used have required solutions to problems not often encountered in 3D sensing systems. For example, under many of the operational scenarios, the scanner will encounter both commanded and uncommanded motion during the acquisition of data. In addition, various ongoing studies are refining the definition of what constitutes a critical breach of the TPS. Each type of damage presents new challenges for robust detection. This paper explores these challenges with a focus on the operational solutions which address them.
3D ranging and imaging technology is generally divided into time-based (ladar) and position-based (triangulation) approaches. Traditionally, ladar has been applied to long-range, low-precision applications and triangulation has been used for short-range, high-precision applications. Measurement speed and precision of both technologies have improved such that ladars are viable at shorter ranges and triangulation is viable at longer ranges. These improvements have produced an overlap of the technologies for short- to mid-range applications. This paper investigates the two sets of technologies to demonstrate their complementary nature, particularly with respect to space and terrestrial applications such as vehicle inspection, navigation, collision avoidance, and rendezvous & docking.
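The complementary nature of the two approaches can be illustrated with the standard range-error scaling laws: time-of-flight (ladar) range error is roughly constant with range, set by timing resolution, while triangulation range error grows roughly with the square of the range for a fixed baseline and spot-localization error. The Python sketch below evaluates these textbook relations with illustrative parameter values; the numbers are assumptions, not figures from the paper.

C = 299_792_458.0  # speed of light, m/s

def tof_range_error(timing_jitter_s):
    # Time-of-flight: delta_R = c * delta_t / 2, independent of range.
    return C * timing_jitter_s / 2.0

def triangulation_range_error(range_m, baseline_m, focal_px, spot_err_px):
    # Triangulation: delta_R ~ R^2 * delta_p / (f * b) for a fixed baseline b.
    return (range_m ** 2) * spot_err_px / (focal_px * baseline_m)

# Example (illustrative values): 100 ps timing jitter vs a 0.5 m baseline,
# 2000-pixel focal length and 0.1-pixel spot-localization error.
print(tof_range_error(100e-12))                       # ~0.015 m at any range
print(triangulation_range_error(5.0, 0.5, 2000.0, 0.1))   # ~0.0025 m at 5 m
print(triangulation_range_error(50.0, 0.5, 2000.0, 0.1))  # ~0.25 m at 50 m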
Neptec Design Group Ltd. has developed a 3D Automatic Target Recognition (ATR) and pose estimation technology demonstrator in partnership with the Canadian DND. The system prototype was deployed for field testing at Defence Research and Development Canada (DRDC)-Valcartier. This paper discusses the performance of the developed algorithm using 3D scans acquired with an imaging LIDAR. 3D models of civilian and military vehicles were built using scans acquired with a triangulation laser scanner. The models were then used to generate a knowledge base for the recognition algorithm. A commercial imaging LIDAR was used to acquire test scans of the target vehicles with varying range, pose and degree of occlusion. Recognition and pose estimation results are presented for at least 4 different poses of each vehicle at each test range. Results obtained with targets partially occluded by an artificial plane, vegetation and military camouflage netting are also presented. Finally, future operational considerations are discussed.
KEYWORDS: Liquid crystals, 3D image processing, 3D modeling, 3D scanning, Light sources and illumination, Laser scanners, 3D acquisition, Data modeling, Sensors, Target acquisition
The Neptec Design Group has developed a new 3D auto-synchronized laser scanner for space applications, based on a principle from the National Research Council of Canada. In imaging mode, the Laser Camera System (LCS) raster-scans objects and computes high-resolution 3D maps of their surface features. In centroid acquisition mode, the LCS determines the position of discrete target points on an object. The LCS was flight-tested on board the space shuttle Discovery during mission STS-105 in August 2001. While the shuttle was docked to the International Space Station (ISS), the LCS was used to obtain four high-resolution 3D images of several station elements at ranges from 5 m to 40 m. A comparison of images taken during orbital day and night shows that the LCS is immune to the dynamic lighting conditions encountered on orbit. During the mission, the LCS also tracked a series of retro-reflective and Inconel targets affixed to the Multi-Purpose Logistics Module (MPLM), while the module was stationary and while it was moving. Analysis shows that the accuracy of the photosolutions derived from LCS centroid data is comparable to that of the Space Vision System (SVS), Neptec's product presently used by NASA for ISS assembly tasks.