Ball Aerospace has signed an exclusive license agreement to be the sole manufacturer of Geiger-mode avalanche photodiode (GmAPD) light detection and ranging (LIDAR) cameras for the defense and aerospace industries. The license was provided by Argo AI, which acquired the technology's former manufacturer, Princeton Lightwave Inc. (PLI), in October 2017. Over the past 10 years, PLI developed and advanced GmAPD detectors and cameras capable of detecting single photons. This detector sensitivity, combined with multi-pixel arrays, enables high-resolution LIDAR and communication systems capable of extended-range operation with significant savings in system size, weight, and power. Specific applications of this technology include target detection, acquisition, and tracking; 3D mapping; and intelligence, surveillance, and reconnaissance missions, with both direct and coherent detection. In this work, we review the current state of this technology, focusing on the three variants of Geiger-mode cameras that will be manufactured by Ball Aerospace. Moreover, we present details of expected camera and detector performance (e.g., format, photon detection efficiency, dark count rate, wavelength, timing), review production and manufacturing capabilities, and update the community on future technology paths for anticipated customer needs. Ball Aerospace will manufacture and further develop Geiger-mode LIDAR camera technology as the premier merchant supplier of advanced, large-format, single-photon-sensitive camera products and systems.
Ball Aerospace & Technologies Corp. has demonstrated real-time processing of 3D imaging LADAR point-cloud data to produce the industry's first time-of-flight (TOF) 3D video capability. This capability is uniquely suited to the rigorous demands of space and airborne flight applications and holds great promise in the area of autonomous navigation. It will provide long-range, three-dimensional video information to autonomous flight software or pilots for immediate use in rendezvous and docking, proximity operations, landing, surface vision systems, and automatic target recognition and tracking. This is enabled by our new generation of FPGA-based "pixel-tube" processors and coprocessors and their associated algorithms, which have led to a number of advancements in high-speed wavefront processing, along with additional advances in dynamic camera control and in space laser designs based on Ball's CALIPSO LIDAR. This evolution in LADAR is made possible by moving the mechanical complexity required for a scanning system into the electronics, where production, integration, testing, and life-cycle costs can be significantly reduced. This technique requires a state-of-the-art TOF read-out integrated circuit (ROIC) attached to a sensor array to collect high-resolution temporal data, which is then processed through FPGAs. The number of calculations required to process the data is greatly reduced because all points are captured at the same time and are thus correlated; this correlation allows extremely efficient FPGA processing. This capability has been demonstrated in prototype form at both Marshall Space Flight Center and Langley Research Center on targets that represent docking and landing scenarios. This report outlines many aspects of this work as well as aspects of our recent testing at Marshall's Flight Robotics Laboratory.
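The efficiency argument above, that simultaneously captured, correlated points can be converted to range in a single pass, can be illustrated with a minimal sketch. This is not Ball's FPGA pixel-tube pipeline; the timing-bin width and the raw-count frame below are assumptions chosen only to show the vectorized TOF-to-range conversion.

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
CLOCK_PERIOD_S = 0.5e-9  # assumed ROIC timing-bin width (0.5 ns); illustrative only

def tof_counts_to_range(counts: np.ndarray) -> np.ndarray:
    """Convert per-pixel ROIC time-of-flight counts to range in meters.

    Because every pixel in a flash-LADAR frame is timed against the same
    laser pulse, one vectorized pass converts the whole frame at once;
    no per-point registration or scan-angle bookkeeping is needed.
    """
    t = counts * CLOCK_PERIOD_S  # round-trip time per pixel, seconds
    return C * t / 2.0           # divide by 2 for the out-and-back path

# Hypothetical 2x2 frame of raw TOF counts from the ROIC
frame = np.array([[100, 102], [101, 400]])
ranges = tof_counts_to_range(frame)  # ranges[0, 0] is ~7.49 m
```

Because the conversion is a pure per-pixel multiply, it maps directly onto parallel hardware such as the FPGA processors described above.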
3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in three dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined and improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system's ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework uses only terminology proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, range-based, slanted knife-edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.
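One common way to extend the Rayleigh criterion into the range dimension, sketched below, ties minimum resolvable depth separation to the transmitted pulse width. This is an illustrative textbook form, not necessarily the exact definition proposed in the paper above.

```latex
% Range-resolution analogue of the Rayleigh criterion (illustrative form).
% For a transmitted pulse of duration \tau, two surfaces separated in
% depth by \Delta R are just resolved when the difference in round-trip
% delay equals the pulse width, giving
\Delta R_{\min} = \frac{c\,\tau}{2}
% e.g., \tau = 1\,\mathrm{ns} \;\Rightarrow\; \Delta R_{\min} \approx 15\,\mathrm{cm}.
```

The factor of two reflects the out-and-back light path, mirroring how angular Rayleigh resolution ties resolvability to the system's diffraction-limited response width.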
Conference Committee Involvement (2)
Remote Sensing System Engineering IV
12 August 2012 | San Diego, California, United States
Remote Sensing System Engineering III
2 August 2010 | San Diego, California, United States