Visible cameras are essential components of a space automated rendezvous and docking (AR&D) system, which is used in many space missions, including crewed or robotic spacecraft docking, on-orbit satellite servicing, and autonomous landing and hazard avoidance.
Cameras are ubiquitous devices in modern times, with countless lens designs focused on high resolution and color rendition. Space AR&D cameras, by comparison, do not require extremely high resolution or color rendition, but they do impose some unique requirements on their lenses. Fixed lenses with no moving parts, with separate lenses for the narrow and wide fields of view (FOV), are normally used to meet high reliability requirements. Cemented lens elements are usually avoided because of the wide temperature swings and outgassing requirements of the space environment. The lenses should be designed for exceptional stray-light performance and minimal lens flare, given the intense sunlight and the lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation.
Neptec has designed and built a narrow-FOV (NFOV) lens and a wide-FOV (WFOV) lens for an AR&D visible camera system. The lenses were designed with the ZEMAX program; the stray-light performance and the lens baffles were simulated with the TracePro program. This paper discusses general requirements for space AR&D camera lenses and the specific measures taken to meet the space environmental requirements.
3D LIDARs (Light Detection and Ranging) with 1.5 μm nanosecond pulsed lasers are increasingly used in many applications, mainly because they offer high performance while remaining eye-safe. Because the laser hazard to eyes or skin in this wavelength region (>1.4 μm) is mainly a thermal effect accumulated from many individual pulses over a period of seconds, scanning can effectively reduce the beam hazard of these LIDARs. Neptec LIDARs have been used in docking to the International Space Station, in military helicopter landing, and in industrial mining applications. We have incorporated laser safety requirements into the LIDAR design and conducted laser safety analyses for different operational scenarios. While 1.5 μm is often called the eye-safe wavelength, in reality a high-performance 3D LIDAR needs high pulse energy, a small beam size and a high pulse repetition frequency (PRF) to achieve long range and high-resolution, high-density images. The resulting radiant exposure of its stationary beam can be many times the limit for a Class 1 laser device. Without carefully chosen laser and scanning parameters, including field-of-view, scan speed and scan pattern, a scanning LIDAR cannot be eye- or skin-safe on the basis of its wavelength alone. This paper discusses laser safety considerations in the design of eye-safe scanning LIDARs, including laser pulse energy, PRF, beam size and scanning parameters, for two basic scanning mechanisms: the galvanometer-based scanner and the Risley-prism-based scanner. Laser safety is discussed in terms of device classification, nominal ocular hazard distance (NOHD) and safety-glasses optical density (OD).
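The NOHD mentioned above can be illustrated with the standard far-field formula for a circular, diverging beam. The sketch below is a minimal illustration; the power, MPE, beam size and divergence values are assumptions for the example, not values from this paper.

```python
import math

def nohd(power_w, mpe_w_m2, aperture_m, divergence_rad):
    """Nominal ocular hazard distance for a circular, diverging beam,
    using the standard far-field formula
        NOHD = (sqrt(4*P / (pi * MPE)) - a) / theta.
    Returns 0 when the exposure is already below the MPE at the aperture."""
    d = (math.sqrt(4.0 * power_w / (math.pi * mpe_w_m2)) - aperture_m) / divergence_rad
    return max(d, 0.0)

# Illustrative values only (not from the paper): 0.5 W average power,
# MPE of 1000 W/m^2 (the order of the 1.5 um thermal-limit MPE for long
# exposures), 5 mm exit beam, 0.5 mrad full-angle divergence.
print(round(nohd(0.5, 1000.0, 0.005, 0.0005), 1))  # roughly 40 m
```

Scanning helps precisely because it shortens the dwell time of the beam on any one aperture, raising the effective MPE and shrinking the NOHD computed this way.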
Scanning LIDARs are widely used as 3D sensors for navigation due to their ability to provide 3D information of terrains
and obstacles with a high degree of precision. The optics of conventional scanning LIDARs are generally monostatic, i.e.
launch beam and return beam share the same optical path in scanning optics. As a consequence, LIDARs with
monostatic optics suffer poor performance at short range (<5 m), due both to scattering from the internal optics and to the insufficient dynamic range of a LIDAR receiver required to cover both short range and long range (1 km). This drawback is undesirable for
rover navigation since it is critical for low profile rovers to see well at short range. It is also an issue for LIDARs used in
applications involving aerosol penetration since the scattering from nearby aerosol particles can disable LIDARs at short
range. In many cases, multiple 3D sensors have to be used for navigation.
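The receiver dynamic-range problem can be quantified with a simple inverse-square model; this is an illustrative assumption (real returns also depend on target reflectivity and launch/receive beam overlap), not an analysis from the paper.

```python
import math

def dynamic_range_db(r_near_m, r_far_m):
    """Receiver dynamic range needed if return power scales as 1/R^2
    (extended diffuse target fully within the beam, illustration only)."""
    return 10.0 * math.log10((r_far_m / r_near_m) ** 2)

# Covering 5 m to 1 km in one receiver spans roughly 46 dB of signal.
print(round(dynamic_range_db(5.0, 1000.0), 1))
```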
To overcome these limitations, Neptec has previously developed a scanning LIDAR (called TriDAR) with specially
designed triangulation optics that is capable of high-speed scanning. The WABS (Wide Angle
Bistatic Scanning) LIDAR reported in this paper demonstrates several major advances over the TriDAR design. While it retains the benefit
of TriDAR's bistatic optics, in which the launch and return beam paths are separated in space, it
significantly improves performance in terms of field-of-view, receiving optical aperture and sensor size.
The WABS LIDAR design was prototyped under a contract with the Canadian Space Agency. The LIDAR prototype
was used as the 3D sensor for the navigation system on a lunar rover prototype. It demonstrated a wide FOV (45°×60°) and met the minimum-range spec (1.5 m); both are critical for rover navigation and hazard avoidance. The paper
discusses design concept and objective of the WABS LIDAR; it also presents some test results.
Helicopter pilots in military and civilian operations need visual assistance for safe flight and landing under adverse
conditions, especially during white-out or brown-out conditions, in which it is difficult for a pilot to see
obstacles or the ground through snow or dust generated by the helicopter's rotor wash. In recent years there have been
intensive efforts to develop a sensor that can detect obstacles or the ground inside aerosols.
Because LIDAR can use the gating function of timing discrimination to suppress the effect of scattering from aerosols, it
can generally "see" farther inside aerosols than passive sensors such as human eyes and cameras. The challenge of using a
LIDAR under aerosol conditions is not only the requirement of high laser power for penetrating aerosols, but also the
requirement of high detection dynamic range and the suppression of aerosol scattering in front of a LIDAR. Neptec's
Obscurant Penetrating Autosynchronous LIDAR (OPAL) uses an autosynchronized optical design, which utilizes a
triangulation relationship to control the amount of return beam accepted by the TOF (time-of-flight) receiver as a
function of target range. The design also maintains this property during high-speed optical scanning. As a result, OPAL
can suppress the return signals from nearby aerosol scattering and, at the same time, retain the sensitivity and dynamic
range to detect obstacles or the ground inside aerosols. Neptec has conducted experiments to study the effect of atmospheric
aerosol scattering on LIDAR, FLIR and human vision by using a propagation and aerosol evaluation corridor. Neptec
has also carried out flight tests of an OPAL prototype on an NRC Bell 412 helicopter. In this paper, the concept of the
OPAL, which is uniquely designed to penetrate aerosols, will be described and its applications in helicopter landing will be presented.
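The timing-discrimination idea behind aerosol suppression can be sketched as a minimum-range gate on time-of-flight returns. This is a generic illustration of range gating, not OPAL's autosynchronized triangulation mechanism; the echo times and gate limits below are invented for the example.

```python
C = 299_792_458.0  # speed of light [m/s]

def gate_returns(pulse_times_s, min_range_m, max_range_m):
    """Convert round-trip times to ranges (r = c*t/2) and discard returns
    outside the gate, e.g. backscatter from aerosol close to the sensor."""
    ranges = [C * t / 2.0 for t in pulse_times_s]
    return [r for r in ranges if min_range_m <= r <= max_range_m]

# Echoes at 10 ns (~1.5 m, dust near the aperture) and 400 ns (~60 m,
# a hard target); a 5 m minimum-range gate keeps only the hard target.
print(gate_returns([10e-9, 400e-9], 5.0, 1000.0))
```

A fixed time gate is the crudest form of this discrimination; OPAL's design instead varies the accepted return geometrically with range, which also works during high-speed scanning.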
Although laser ranging and scanning sensors are widely used in a variety of industries, a sensor designed for spacecraft operations, including autonomous rendezvous, inspection and servicing, remains a challenge. This is primarily due to critical requirements: simultaneous high sampling speed and good range and lateral resolution, both at short range of a few meters and at long range of a few hundred meters. A typical LIDAR sensor is not suitable for tracking at the close-in distance just before rendezvous, or during a critical close-up inspection, since its range resolution is in the tens of millimeters and can only be improved by averaging at the expense of speed. A laser triangulation sensor is capable of simultaneously providing both high range resolution (~1 mm) and high speed (~10 kHz) at short distance, but its range resolution degrades rapidly as range increases, making its performance inferior to that of a LIDAR-based sensor at long range. The Neptec TriDAR (triangulation + LIDAR) is a hybrid sensor that combines a triangulation sensor and a TOF sensor for spacecraft autonomous rendezvous and inspection. It was developed in part from technology used in Neptec's OBSS (Orbiter Boom Sensor System) 3D laser camera. The OBSS Laser Camera System (LCS) was used for inspection of the Shuttle tiles on STS-114. In this paper, the TriDAR design that combines triangulation and LIDAR to produce high speed and high resolution at both short and long range is described. To produce this sensor for space, an athermalized optical steering system shared by the two sensors has been developed. Results from performance testing of a prototype designed for autonomous rendezvous are given.
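The complementary error behaviour that motivates the hybrid design can be sketched with a toy uncertainty model: triangulation error growing roughly quadratically with range while TOF error stays roughly constant. The coefficients and the quadratic law are assumptions chosen to match the ~1 mm short-range and tens-of-millimeters figures quoted above, not a model from the paper.

```python
def hybrid_sigma(z_m, tri_sigma_1m=0.001, tof_sigma=0.02):
    """Toy range-uncertainty model (assumed, for illustration):
    triangulation error ~ z^2 (1 mm at 1 m), TOF error ~ 20 mm constant.
    Returns (sigma_m, mode) naming the better sensor at range z."""
    tri = tri_sigma_1m * z_m ** 2
    return (tri, "triangulation") if tri < tof_sigma else (tof_sigma, "tof")

for z in (1.0, 3.0, 10.0, 100.0):
    print(z, hybrid_sigma(z))
```

Under these assumed numbers the crossover falls at a few meters, which is why a hybrid sensor can hand off from triangulation to TOF as the target recedes.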
Neptec has developed a vision system for the capture of non-cooperative objects on orbit. This system uses an active TriDAR sensor and a model based tracking algorithm to provide 6 degree of freedom pose information in real-time from mid range to docking. This system was selected for the Hubble Robotic Vehicle De-orbit Module (HRVDM) mission and for a Detailed Test Objective (DTO) mission to fly on the Space Shuttle.
TriDAR (triangulation + LIDAR) technology makes use of a novel approach to 3D sensing by combining triangulation and Time-of-Flight (ToF) active ranging techniques in the same optical path. This approach exploits the complementary nature of these sensing technologies. Real-time tracking of target objects is accomplished using 3D model based tracking algorithms developed at Neptec in partnership with the Canadian Space Agency (CSA). The system provides 6 degrees of freedom pose estimation and incorporates search capabilities to initiate and recover tracking. Pose estimation is performed using an innovative approach that is faster than traditional techniques. This performance allows the algorithms to operate in real-time on the TriDAR's flight certified embedded processor.
This paper presents results from simulation and lab testing demonstrating that the system's performance meets the requirements of a complete tracking system for on-orbit autonomous rendezvous and docking.
With the advances in linear CCD arrays and high-precision galvanometer design in recent years, triangulation-based 3D laser cameras have found wide application, from human contour digitization to object tracking and imaging on the International Space Station. In most applications, a beam size of 1 mm or larger is used to minimize the beam divergence over the entire range.
With a beam diameter of 1 mm, the position resolution (X, Y directions) is normally on the order of one millimeter. In the triangulation method, the distance (Z direction) information is extracted from the position of a Gaussian-shaped peak on a detector array. There are two major sources of error: edge effects and the speckle noise caused by a large spot size. Edge effects are produced when parts of the same beam spot fall on surfaces at different distances; this causes the peak shape of the imaged spot on the array to deviate from Gaussian and produces errors in the distance measurement at the edge of an object.
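The peak-position-to-range step, and how an edge corrupts it, can be sketched for a simple pinhole triangulation geometry. This is a textbook illustration, not the auto-synchronized layout analyzed in the paper, and the focal length, baseline and pixel values are invented for the example.

```python
def gaussian_peak(pixels, pitch_m):
    """Sub-pixel peak position by centroid of the detector samples, one
    simple way a Gaussian spot's peak can be located on a linear array."""
    total = sum(pixels)
    return pitch_m * sum(i * v for i, v in enumerate(pixels)) / total

def triangulation_range(peak_pos_m, focal_m, baseline_m):
    """Textbook pinhole triangulation z = f * b / x, where x is the peak
    position on the array; illustration only."""
    return focal_m * baseline_m / peak_pos_m

# Symmetric spot centred on pixel 2 of an array with 10 um pitch:
x = gaussian_peak([1, 8, 20, 8, 1], 10e-6)       # 20 um from pixel 0
print(triangulation_range(x, 0.05, 0.1))          # f = 50 mm, b = 100 mm

# A depth edge under the spot adds an asymmetric tail, biasing the
# centroid (and hence the range estimate) toward the brighter surface:
print(gaussian_peak([1, 8, 20, 8, 15], 10e-6))
```

The second call shows the edge-effect mechanism directly: the profile is no longer Gaussian, so the recovered peak position, and therefore the range, is biased.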
In this paper, the modeling of edge effects and speckle noise in an auto-synchronized 3D laser camera, in terms of beam size, laser wavelength, optical aperture and the geometrical parameters of the triangulation arrangement, is discussed. Methods to mitigate errors from edge effects and speckle noise are described, and results showing high resolution in both lateral position and distance on a 3D object are presented.