Jigsaw three-dimensional (3D) imaging laser radar is a compact, light-weight system for imaging
highly obscured targets through dense foliage semi-autonomously from an unmanned aircraft. The
Jigsaw system uses a gimbaled sensor operating in a spotlight mode to illuminate a cued target
with laser light, and autonomously captures and produces 3D images of targets hidden under trees
at high 3D voxel resolution. With our MIT Lincoln Laboratory team members, the sensor system has been
integrated into a geo-referenced 12-inch gimbal, and used in airborne data collections from a UH-1
manned helicopter, which served as a surrogate platform for the purpose of data collection and
system validation. In this paper, we discuss the results from the ground integration and testing of the
system, and the results from UH-1 flight data collections. We also discuss the performance results
of the system obtained using ladar calibration targets.
Situation awareness and accurate Target Identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage nets and/or tree canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution.

The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3-D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3-D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns.

The sensor operates day or night and produces high-resolution 3-D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photodiode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched, frequency-doubled solid-state Nd:YAG laser transmitting short laser pulses (300 ps FWHM) at a 16 kHz pulse rate and a 532 nm wavelength. The single-photon detection efficiency has been measured to be > 20% using these 32x32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically > 10^6. The pulse out of the detector is used to stop a 500 MHz digital clock register integrated within the focal-plane array at each pixel.
Using the detector in this binary response mode simplifies the signal processing by eliminating the need for analog-to-digital converters and non-linearity corrections. With appropriate optics, the 32x32 array of digital time values represents a 3-D spatial image frame of the scene. Successive image frames illuminated with the multi-kilohertz pulse repetition rate laser are accumulated into range histograms to provide 3-D volume and intensity information.

In this article, we describe the Jigsaw program goals, our demonstration sensor system, and the data collection campaigns, and show examples of 3-D imaging with foliage and camouflage penetration. Other applications for this 3-D imaging direct-detection ladar technology include robotic vision, navigation of autonomous vehicles, manufacturing quality control, industrial security, and topography.
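To make the binary-response processing concrete, the sketch below accumulates per-pixel time-of-flight clock counts from successive frames into range histograms. Only the 500 MHz clock, the 32x32 array size, and the ~20% single-photon detection efficiency come from the text; the frame layout, function names, and target placement are invented for illustration. With a 500 MHz clock, one count corresponds to c/(2f) ≈ 0.3 m of range.

```python
import numpy as np

C = 2.998e8         # speed of light, m/s
CLOCK_HZ = 500e6    # per-pixel digital clock rate (from the text)
RANGE_PER_COUNT = C / (2 * CLOCK_HZ)   # ~0.3 m of range per clock count

def accumulate_histograms(frames, n_bins):
    """Accumulate binary-response time-of-flight frames into per-pixel
    range histograms.

    frames : (n_frames, 32, 32) integer clock counts; a negative value
             marks a pixel with no detection in that frame.
    Returns a (32, 32, n_bins) array of counts per range bin.
    """
    n_frames, rows, cols = frames.shape
    hist = np.zeros((rows, cols, n_bins), dtype=np.int32)
    for f in frames:
        valid = (f >= 0) & (f < n_bins)
        r, c = np.nonzero(valid)
        hist[r, c, f[r, c]] += 1   # one count per detecting pixel per frame
    return hist

# Toy example: 1000 frames, a surface near clock count 100 (~30 m),
# ~20% per-frame detection probability as measured for the APD array.
rng = np.random.default_rng(0)
frames = np.full((1000, 32, 32), -1, dtype=np.int64)
hits = rng.random((1000, 32, 32)) < 0.2
frames[hits] = 100 + rng.integers(-1, 2, size=hits.sum())
hist = accumulate_histograms(frames, n_bins=256)
peak_bin = int(hist[16, 16].argmax())
print(peak_bin * RANGE_PER_COUNT)   # peak near 30 m
```

Summing over many frames in this way is what lets a low per-pulse detection probability still yield a dense 3-D image: the histogram peak at each pixel marks the most likely surface range.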
We present a pose-independent Automatic Target Detection and Recognition (ATD/R) system using data from an airborne 3D imaging ladar sensor. The ATD/R system uses geometric shape and size signatures from target models to detect and recognize targets under heavy canopy and camouflage cover in extended terrain scenes.
A method for data integration was developed to register multiple scene views to obtain a more complete 3-D surface signature of a target. Automatic target detection was performed using the general approach of “3-D cueing,” which determines and ranks regions of interest within a large-scale scene based on the likelihood that they contain the respective target. Each region of interest is further analyzed to accurately identify the target from among a library of 10 candidate target objects.
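The 3-D cueing idea above can be caricatured with a toy sketch: rank candidate regions of a point cloud by how well a simple size signature matches a small model library. This is not the authors' actual algorithm; the signature (sorted bounding-box extents), the scoring function, and all dimensions are invented for illustration.

```python
import numpy as np

def size_signature(points):
    """Sorted bounding-box extents of an (N, 3) point cloud."""
    return np.sort(points.max(axis=0) - points.min(axis=0))

def cue_score(points, model_extents):
    """Higher when the ROI's extents are close to a model's extents."""
    d = np.linalg.norm(size_signature(points) - np.sort(model_extents))
    return 1.0 / (1.0 + d)

def rank_rois(rois, library):
    """Rank regions of interest by their best match over the model library."""
    scored = [(max(cue_score(p, m) for m in library.values()), name)
              for name, p in rois.items()]
    return sorted(scored, reverse=True)

# Toy example: two ROIs against a two-model library (dimensions in metres,
# entirely hypothetical).
library = {"tank": np.array([3.5, 7.0, 2.4]),
           "truck": np.array([2.5, 6.0, 2.8])}
rng = np.random.default_rng(1)
rois = {
    "roi_vehicle": rng.random((500, 3)) * np.array([7.0, 3.5, 2.4]),  # tank-sized
    "roi_clutter": rng.random((500, 3)) * np.array([1.0, 1.0, 15.0]), # tree-like
}
ranking = rank_rois(rois, library)
print(ranking[0][1])   # the vehicle-sized region ranks first
```

A real cuer would of course use richer 3-D shape signatures and occlusion reasoning, but the ranked-ROI output structure is the same: detection proposes and orders regions, and the recognizer then identifies the target within each.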
The system performance was demonstrated on five extended terrain scenes with targets both out in the open and under heavy canopy cover, where the target occupied 1 to 5% of the scene by volume. Automatic target recognition was successfully demonstrated for 20 measured data scenes including ground vehicle targets both out in the open and under heavy canopy and/or camouflage cover, where the target occupied between 5 and 10% of the scene by volume. Correct target identification was also demonstrated for targets with multiple movable parts in arbitrary orientations. We achieved a high recognition rate (over 99%) along with a low false alarm rate (less than 0.01%).