Development of LIDAR sensor systems for autonomous safe landing on planetary bodies
F. Amzajerdian, D. Pierrottet, L. Petway, M. Vanek
Proceedings Volume 10565, International Conference on Space Optics — ICSO 2010; 105655A (11 January 2018); https://doi.org/10.1117/12.2309271
Event: International Conference on Space Optics — ICSO 2010, 2010, Rhodes Island, Greece
Abstract
Future NASA exploratory missions to the Moon and Mars will require safe soft-landings at the designated sites with a high degree of precision. These sites may include areas of high scientific value with relatively rough terrain and little or no solar illumination, and possibly areas near pre-deployed assets. The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high precision distance to the ground, and approach velocity can enable safe landing of large robotic and manned vehicles with a high degree of precision. Currently, NASA-LaRC is developing novel lidar sensors aimed at meeting NASA’s objectives for future planetary landing missions under the Autonomous Landing and Hazard Avoidance (ALHAT) project. These lidar sensors are 3-Dimensional Imaging Flash Lidar, Doppler Lidar, and Laser Altimeter. The Flash Lidar is capable of generating elevation maps of the terrain, identifying hazardous features such as rocks, craters, and steep slopes. The elevation maps collected during the approach phase, between 1000 m and 500 m above the ground, can be used to determine the most suitable safe landing site. The Doppler Lidar provides highly accurate ground velocity and distance data, allowing for precision navigation to the selected landing site. Prior to the approach phase, at altitudes of over 15 km, the Laser Altimeter can provide sufficient data for updating the vehicle position and attitude estimates from the Inertial Measurement Unit. At these higher altitudes, either the Laser Altimeter or the Flash Lidar can be used to generate a contour map of the terrain below, identifying known surface features such as craters to further reduce the vehicle relative position error.

INTRODUCTION

Future NASA exploratory missions to the Moon and Mars will require safe soft-landings at the designated sites with a high degree of precision. These sites may include areas of high scientific value with relatively rough terrain and little or no solar illumination, and possibly areas near pre-deployed assets. The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high precision distance to the ground, and approach velocity can enable safe landing of large robotic and manned vehicles with a high degree of precision. Currently, NASA-LaRC is developing novel lidar sensors aimed at meeting NASA’s objectives for future planetary landing missions under the Autonomous Landing and Hazard Avoidance (ALHAT) project [1]. These lidar sensors are 3-Dimensional Imaging Flash Lidar, Doppler Lidar, and Laser Altimeter. The Flash Lidar is capable of generating elevation maps of the terrain, identifying hazardous features such as rocks, craters, and steep slopes. The elevation maps collected during the approach phase, between 1000 m and 500 m above the ground, can be used to determine the most suitable safe landing site. The Doppler Lidar provides highly accurate ground velocity and distance data, allowing for precision navigation to the selected landing site. Prior to the approach phase, at altitudes of over 15 km, the Laser Altimeter can provide sufficient data for updating the vehicle position and attitude estimates from the Inertial Measurement Unit. At these higher altitudes, either the Laser Altimeter or the Flash Lidar can be used to generate a contour map of the terrain below, identifying known surface features such as craters to further reduce the vehicle relative position error.

To fulfill the requirements of landing at any pre-designated site under any lighting conditions, ALHAT is pursuing active sensor technology development and maturation to implement five sensor functions: Altimetry, Velocimetry, Terrain Relative Navigation (TRN), Hazard Detection and Avoidance (HDA), and Hazard Relative Navigation (HRN). Table 1 below lists the ALHAT sensor suite and its top-level performance specifications for achieving each of the required functions with some degree of redundancy. Figure 1 illustrates the operational scenario of these sensors. The Flash Lidar is being considered for performing all of these functions with the exception of velocimetry, for which a Doppler Lidar is being developed. The ability of the Doppler Lidar to provide velocity data with approximately 1 cm/sec precision is highly attractive for precision landing. Additionally, the Doppler Lidar provides high resolution altitude and ground-relative attitude data that may further improve precision navigation to the identified landing site. The Laser Altimeter provides independent altitude data over a large operational altitude range of 20 km to 100 m. All three laser sensors have a nominal update rate of 30 Hz.

Fig. 1. Operational scenario of landing sensors.

Table 1. ALHAT Sensor Suite.

Sensor            Function      Operational Altitude Range   Precision/Resolution
Flash Lidar       HDA/HRN       1000 m – 100 m               5 cm / 40 cm
                  TRN           15 km – 5 km                 20 cm / 6 m
                  Altimetry     20 km – 100 m                20 cm
Doppler Lidar     Velocimetry   2500 m – 10 m                1 cm/sec
                  Altimetry     2500 m – 10 m                5 cm
Laser Altimeter   Altimetry     20 km – 100 m                20 cm

All five of the aforementioned functions provide input to the navigation filter for Landing Vehicle “state” estimation, flight trajectory retargeting, and maneuvering to a safe site. Of these five functions, Altimetry and Velocimetry are direct sensor measurements, whereas the TRN, HDA, and HRN functions can be considered relative measurements, since the sensor “output” is derived from a correlation with either “a priori” terrain information or a sequence of previous sensor measurements. The latter functions can also be considered techniques, since a number of sensor-algorithm combinations can achieve similar results under the appropriate concept of operation. The locations of safe landing sites are determined from the locations of hazards, recorded simultaneously within full 3-D images at full spatial and temporal resolution. The simultaneous recording of full 3-D scenes with single laser pulses not only enables more rapid acquisition, but also simpler and more rapid processing of scene information, enabling the time-sensitive precision navigation necessary to avoid hazards and land precisely at the retargeted location.
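
As a rough illustration of how safe-site candidates can be derived from hazard locations in a lidar elevation map, the sketch below screens each map cell for excessive local slope and roughness over a lander-sized footprint. The function name, thresholds, window size, and slope/roughness criteria are assumptions chosen for illustration; they are not the ALHAT hazard detection algorithms.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def find_safe_sites(dem, cell_size=0.1, max_slope_deg=10.0, max_roughness=0.3,
                    footprint_radius_cells=15):
    """Flag cells of an elevation map whose neighborhood is free of hazards.

    dem : 2-D array of terrain heights (m), one value per lidar pixel.
    All thresholds and sizes are notional, for illustration only.
    """
    # Local slope from finite differences of the elevation map.
    gy, gx = np.gradient(dem, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))

    # Local roughness: deviation from the mean height over the lander footprint.
    k = 2 * footprint_radius_cells + 1
    padded = np.pad(dem, footprint_radius_cells, mode='edge')
    local_mean = sliding_window_view(padded, (k, k)).mean(axis=(-2, -1))
    roughness = np.abs(dem - local_mean)

    # A cell is hazardous if it is too steep or too rough.
    hazard = (slope_deg > max_slope_deg) | (roughness > max_roughness)

    # A cell is a safe-site candidate only if no hazard lies within the footprint.
    padded_hazard = np.pad(hazard, footprint_radius_cells,
                           mode='constant', constant_values=True)
    unsafe = sliding_window_view(padded_hazard, (k, k)).any(axis=(-2, -1))
    return ~unsafe
```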

FLASH LIDAR

The imaging Flash Lidar is being considered as the primary sensor, due to its ability to provide 3-Dimensional images of surfaces and hazards, for future robotic and manned landing missions to the Moon and Mars. An imaging lidar system records a three dimensional (3D) image of a scene by converting intensity versus time of flight of short laser pulses into intensity versus distance along the line of sight for each spatially resolved area within a 2D image. In older, more conventional imaging lidar systems, each 2D pixel is recorded with a separate laser pulse. Thus many laser pulses are required to record large, multi-pixel images. A Flash Lidar system records full 3D images with a single laser pulse, permitting higher data rates and freezing out movement within the scene and motion of the transmitter/receiver platform. The need for high speed raster scanners to sequentially address image pixels is also eliminated. The receiver is much like the familiar digital camera, but with “smart pixels” that are capable of recording the required sequential temporal information.
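
As a simple illustration of this measurement principle (not the flight processing chain), the following sketch converts one frame of per-pixel round-trip pulse times into 3-D points in the sensor frame. The function name, the paraxial pixel-direction model, and the 3 deg field of view used as a default are assumptions for illustration.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def flash_frame_to_points(tof, fov_deg=3.0):
    """Convert one Flash Lidar frame of per-pixel round-trip times to 3-D points.

    tof     : (n, n) array of round-trip pulse travel times (s), one per pixel
    fov_deg : full field of view of the focal-plane array (3 deg current system)
    Returns an (n, n, 3) array of points in the sensor frame, z along boresight.
    """
    tof = np.asarray(tof, dtype=float)
    n = tof.shape[0]
    rng = 0.5 * C * tof                          # line-of-sight range per pixel

    # Paraxial model: each pixel looks along a small angle off the boresight.
    half = np.radians(fov_deg) / 2.0
    ang = np.linspace(-half, half, n)
    ax, ay = np.meshgrid(ang, ang)
    los = np.stack([np.tan(ax), np.tan(ay), np.ones_like(ax)], axis=-1)
    los /= np.linalg.norm(los, axis=-1, keepdims=True)

    return rng[..., None] * los                  # 3-D point for every pixel
```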

The capabilities of the Flash Lidar technology for the autonomous safe landing application have been investigated in a series of static and dynamic experiments at a sensor test range and from aircraft platforms [2]. One of the major objectives of these tests was to define the areas of technology improvement required to meet NASA’s autonomous safe landing needs. These tests also supported the development of various algorithms, including image reconstruction, HDA, HRN, and TRN. Furthermore, the analyses of the test data allow improvements to the Flash Lidar computer models used in end-to-end landing system simulations. All of the Flash Lidar experiments to date have been based on the technology developed by Advanced Scientific Concepts (ASC) [3]. This 3-D imaging camera has a 128x128 pixel array capable of generating real-time image frames at up to a 30 Hz rate. Characterization of Flash Lidars and other remote sensing instruments is routinely performed at the long-range test facility at NASA-LaRC. Analyses of the ASC Flash Lidar test results from the long-range test facility at NASA-LaRC and from flight tests onboard helicopter and fixed-wing aircraft have been reported previously [4].

The results of the static and airborne tests proved critical in defining the areas of technology improvement and in developing the signal processing algorithms necessary for achieving the ALHAT objectives and meeting NASA’s autonomous safe landing needs. A series of Flash Lidar component technology advancement projects was initiated in 2008 in collaboration with industry, aimed at the development of a Flash Lidar landing sensor system that can efficiently perform the four functions described above. Table 2 summarizes the current state of the Flash Lidar technology and the performance goals of the current technology advancement activities.

Table 2. Flash Lidar ALHAT performance goals.

Mode of Operation   Parameter                      Current          Goal
HDA/HRN             Max operational range          400 m            > 1000 m
                    Number of pixels               128 x 128        256 x 256
                    FOV                            3 deg            Variable 6 – 24 deg
                    Precision                      8 cm             5 cm
                    GSD (ground sample distance)   20 cm            10 cm
                    Map size                       102 m x 102 m    204 m x 204 m
                    Map acquisition time           10 sec           1 sec
TRN                 Max operational range          8 km             20 km
                    Number of illuminated pixels   10 x 10          20 x 20
                    Precision                      20 cm            20 cm
                    Update rate                    30 Hz            30 Hz

These activities include the development of a low-noise 256x256 pixel Avalanche Photodiode array, a high-sensitivity 256x256 Readout Integrated Circuit (ROIC), an efficient transmitter laser with optimum pulse temporal and spatial profiles, programmable field-of-view receiver optics, and novel signal processing techniques. Increasing the number of pixels by a factor of 4 and extending the operational range of the lidar by a factor of 2.5 translates to a system that is 25X more sensitive or more powerful. This is expected to be achieved by increasing the detection sensitivity (i.e., the combination of the detector array and ROIC performance) by 10X and by increasing the effective laser pulse energy by 2.5X. The generation of 3-D maps covering an area of the order of 200 m x 200 m with 10 cm resolution will be achieved by a combination of an advanced receiver optics design and novel signal processing techniques. A motorized optical mechanism is being developed to increase the lidar field of view as the vehicle descends, thus preserving the coverage area during the final approach phase. A set of signal processing algorithms is being developed for accurate calibration of the lidar signal, and for enhancing the image resolution and reducing its noise through “super resolution” or “digital magnification” techniques. Upon completion, these component technologies will be integrated into a system to demonstrate the Flash Lidar capabilities in meeting ALHAT’s objectives.
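
The bookkeeping behind the 25X figure can be illustrated as follows. The 1/R^2 scaling of the per-pixel return from a diffuse surface is an assumption added here for illustration; the 4X pixel count, 2.5X range extension, and the 10X / 2.5X split are taken from the text above.

```python
# Illustrative link-budget bookkeeping behind the 25X figure (assumes the
# per-pixel return signal from a diffuse surface falls off as 1/R^2).
pixel_factor = (256 * 256) / (128 * 128)   # 4X more pixels sharing the return
range_factor = 1000.0 / 400.0              # 2.5X longer HDA/HRN operating range

required_gain = pixel_factor * range_factor ** 2   # 4 * 6.25 = 25

# Planned split between receiver and transmitter improvements:
detection_gain = 10.0    # detector array + ROIC sensitivity improvement
pulse_energy_gain = 2.5  # effective transmitted pulse energy increase
assert detection_gain * pulse_energy_gain == required_gain  # 25 = 25
```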

DOPPLER LIDAR

The Doppler Lidar is a versatile instrument capable of providing precision velocity vectors relative to the sensor reference frame, vehicle platform altitude, and ground-relative attitude. With this sensor the landing vehicle can acquire a surface inertial navigation fix during the approach phase, accurate to a few centimeters in position and a few centimeters per second in velocity. This allows the vehicle to navigate accurately from an altitude of a few kilometers to the previously defined surface location.

The Doppler Lidar obtains high-resolution range and velocity information from a frequency modulated continuous wave (FMCW) laser waveform whose instantaneous frequency is modulated linearly with time. Figure 2 shows the waveform’s frequency content versus time, and the resulting intermediate frequency (IF) that holds the desired range and velocity information. The green triangular waveform represents the frequency content of the transmitted waveform, and the blue trace simulates a received waveform. The horizontal shift to the right of the received waveform is due to the time delay caused by the round trip time of flight of the laser beam to the target. The vertical shift of the received waveform represents the Doppler frequency change that arises from the motion of the vehicle relative to the ground.

Fig. 2. The laser frequency modulation has a linear chirp waveform. The received waveform is delayed in time. The lower trace is the difference between the transmitted and received waveforms.

The lidar design uses an optical homodyne receiver configuration, in which a portion of the transmitted beam serves as the reference local oscillator (LO) for the optical receiver. The LO optical field mixes with the time-delayed received field at the detector, yielding a time-varying intermediate frequency (IF) as shown by the lower (red) trace in Figure 2. The IF trace shows two distinct frequencies, one caused by the up-ramp and one caused by the down-ramp of the waveform. The difference between the up-ramp and down-ramp frequencies provides the vehicle velocity, and their mean value provides the range to the ground. The lidar transmits three laser beams, separated by 45 degrees and pointed toward nadir, in order to determine the three components of the vehicle velocity and to accurately measure altitude and attitude relative to the local ground.
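
A minimal sketch of this range and velocity recovery, and of combining the three beams into a velocity vector, is given below. The wavelength, chirp bandwidth, ramp duration, and function names are notional assumptions, not parameters of the ALHAT sensor; the sign convention for the Doppler term is likewise assumed.

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
WAVELENGTH = 1.55e-6     # assumed transmit wavelength, m (notional)
CHIRP_BW = 1.0e9         # assumed modulation bandwidth B, Hz (notional)
RAMP_TIME = 1.0e-3       # assumed duration T of each ramp, s (notional)

def range_and_velocity(f_up, f_down):
    """Recover line-of-sight range and velocity from one FMCW measurement.

    f_up, f_down : intermediate frequencies (Hz) measured on the up-ramp and
                   down-ramp halves of the triangular chirp.
    The mean of the two IFs carries the round-trip delay (range); half of
    their difference is the Doppler shift (line-of-sight velocity).
    """
    f_range = 0.5 * (f_up + f_down)       # delay-induced beat frequency
    f_doppler = 0.5 * (f_down - f_up)     # sign depends on chosen convention
    rng = C * RAMP_TIME * f_range / (2.0 * CHIRP_BW)
    v_los = 0.5 * WAVELENGTH * f_doppler
    return rng, v_los

def velocity_vector(v_los, beam_units):
    """Combine the three beams' line-of-sight velocities into a 3-D vector.

    v_los      : length-3 array of per-beam line-of-sight velocities (m/s)
    beam_units : 3x3 array with one unit line-of-sight vector per row
    """
    return np.linalg.solve(np.asarray(beam_units, float), np.asarray(v_los, float))
```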

A breadboard Doppler Lidar was assembled and tested onboard a helicopter in 2008 to evaluate its capabilities for the landing application. The results of the helicopter test showed excellent agreement with high-accuracy GPS-derived velocities. The data collected during the flight tests also proved to be very valuable for the development of a compact and efficient system, shown in Figure 3, which was recently used in another helicopter flight test campaign. The data collected from this latest field test are currently being processed and analyzed to further improve the system's operational characteristics.

Fig. 3. Doppler Lidar prototype system with a fiber-coupled optical head having 3 lenses pointing in different directions. The Doppler Lidar provides vehicle vector velocity and altitude.

LASER ALTIMETER

The vehicle altitude can be measured by the Flash Lidar at high altitudes approaching 20 km and by the Doppler Lidar from altitudes of a few kilometers above the ground. However, a separate Laser Altimeter sensor can ease the Flash Lidar accommodation design and provide redundancy for this critical data. A Laser Altimeter has been designed and built specifically for ALHAT. The breadboard version of this sensor was first tested from a fixed-wing aircraft at altitudes over 8 km in 2008. A compact and low-power prototype system was recently completed. The ALHAT Laser Altimeter has been tested at the NASA LaRC test range facility and was then flown in the most recent ALHAT field test onboard a helicopter. The results of these tests indicate an operational range of almost 30 km with a range precision of about 8 cm.

CONCLUSION

Lidar has been identified by NASA as a key technology for enabling autonomous safe landing of future robotic and crewed lunar landing vehicles. NASA LaRC has been developing three laser/lidar sensor systems under the ALHAT project. The capabilities of these lidar sensor systems were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard helicopters and a fixed-wing aircraft. The airborne tests were performed over Moon-like terrain in the California and Nevada deserts. These tests provided the necessary data for the development of signal processing software and algorithms for hazard detection and navigation. The tests helped identify technology areas needing improvement and will also help guide future technology advancement activities.

REFERENCES

[1] C. D. Epp, E. A. Robinson, and T. Brady, “Autonomous Landing and Hazard Avoidance Technology (ALHAT),” in Proc. of IEEE Aerospace Conference, 1–7 (2008).

[2] F. Amzajerdian, M. Vanek, L. Petway, D. Pierrottet, G. Busch, and A. Bulyshev, “Utilization of 3-D Imaging Flash Lidar Technology for Autonomous Safe Landing on Planetary Bodies,” in Proc. of SPIE (2010).

[3] R. Stettner, H. Bailey, and S. Silverman, “Three Dimensional Flash Ladar Focal Planes and Time Dependent Imaging,” International Symposium on Spectral Sensing Research, Bar Harbor, Maine (2006).

[4] D. Pierrottet, F. Amzajerdian, B. Meadows, R. Estes, and A. Noe, “Characterization of 3-D imaging lidar for hazard avoidance and autonomous landing on the Moon,” in Proc. of SPIE (2007).