This PDF file contains the front matter associated with SPIE Proceedings Volume 11005, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
The Australian Defence Science and Technology Group has developed novel single photon avalanche diode (SPAD) arrays using silicon-based complementary metal-oxide-semiconductor (CMOS) processes. The first of these was a simple 32x32 pixel array, followed by higher-density arrays developed with our partners. These single photon detector arrays have inherently low dark currents, and we have used them in several Flash LADAR systems, including an innovative design in which the LADAR is cued by an 8-12 micron infrared imager sharing a common aperture. The use of Flash LADAR (rather than scanning) has the advantage that moving targets can be imaged accurately. We have developed modelling and simulation tools for predicting SPAD LADAR performance and use processing techniques to suppress ‘background’ counts and resolve targets that are obscured by clutter. In this paper we present some of our initial results in discriminating small (<1 m) targets at ranges out to 10 km. Results from our field experiments include extraction of a 0.5 m object at 10 km and identification of a small flying UAV.
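The background-count suppression described above can be illustrated with a minimal histogram-threshold sketch. All names, bin widths, and thresholds here are illustrative assumptions, not DST Group's actual processing chain: photon arrival times accumulated over many shots are binned in range, and bins exceeding the background level by several standard deviations (a Gaussian approximation to Poisson counting statistics) are declared target returns.

```python
import numpy as np

def detect_targets(timestamps_ns, bin_width_ns=1.0, max_range_ns=1000.0, n_sigma=5.0):
    """Histogram photon arrival times over many laser shots and flag range
    bins whose counts exceed the background by n_sigma standard deviations."""
    n_bins = int(max_range_ns / bin_width_ns)
    counts, edges = np.histogram(timestamps_ns, bins=n_bins, range=(0.0, max_range_ns))
    bg = np.median(counts)                           # robust background estimate
    threshold = bg + n_sigma * np.sqrt(max(bg, 1.0))
    hit_bins = np.where(counts > threshold)[0]
    # bin centre -> round-trip time -> one-way range (c ~ 0.2998 m/ns)
    return 0.5 * 0.2998 * (edges[hit_bins] + 0.5 * bin_width_ns)

# Synthetic demo: uniform background counts plus a target return near 50 m.
rng = np.random.default_rng(0)
background = rng.uniform(0.0, 1000.0, size=5000)     # solar/dark counts
target = rng.normal(333.5, 0.2, size=200)            # return at ~333.5 ns round trip
ranges = detect_targets(np.concatenate([background, target]))
```

With the median as the background estimate, a single strong target bin does not bias the threshold, which is why this simple scheme survives high solar background levels.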
A real-time program is implemented to classify different model airplanes imaged using a 32x32 SPAD array camera in time-of-flight mode. The algorithm uses random feature extractors in series with a linear classifier and is implemented on the NVIDIA Jetson TX2 platform, a power-efficient embedded computing device. The algorithm is trained by calculating the classification matrix using a simple pseudoinverse operation on collected image data with known corresponding object labels. The implementation in this work uses a combination of serial and parallel processes and is optimized for classifying airplane models imaged by the SPAD and laser system. The performance of different numbers of convolutional filters is tested in real time. The classification accuracy reaches up to 98.7%, and the execution time on the TX2 varies between 34.30 and 73.55 ms depending on the number of convolutional filters used. Furthermore, image acquisition and classification use 5.1 W of power on the TX2 board. Along with its small size and low weight, the TX2 platform can be exploited for high-speed operation in applications that require classification of aerial targets where the SPAD imaging system and embedded device are mounted on a UAS.
SEAHAWK is a high-performance, low-SWaP LIDAR for real-time topographic and bathymetric 3D mapping applications. Key attributes include real-time waveform and point cloud processing, real-time calculation of total propagated uncertainty (TPU), a novel co-located green and infrared transceiver architecture based on a 12” circular scanner with holographic optical element (HOE), an ultra-compact Cassegrain telescope, a custom detector architecture with dynamic load modulation (DLM), and analog-to-digital converters providing improved resolution, dynamic range, and sensitivity. SEAHAWK’s design yields higher sea-surface detection percentages than other circular scanning LIDARs and thereby enables more robust sea-surface correction strategies. The real-time point clouds provide sensor operators with immediate, actionable intelligence about data quality while the aircraft remains on-station.
SEAHAWK is a new lidar for deep-water bathymetric surveying. Its performance and SWaP objectives generated requirements for the optical design affecting aperture, FOV, transmission efficiency, alignment accuracy, spectral filtering, and system size. Fabrication and other hardware limitations added constraints, particularly on the apertures of the detectors, filters, and custom scanner optics. An initial thin lens analysis produced a 3-channel receiver layout leading to the fabrication of an all-aluminum 300 mm diameter F/3.6 Cassegrain telescope having a total physical length less than 200 mm. An optimization of the relay optics maximized the narrowband filter performance by keeping the incidence angle constant across the system’s 38 mrad FOV. The resulting compact optical subsystem yields a smaller lidar head than other deep-water bathymetric lidars.
A dual-wavelength circular scanner with collinear transmit and receive axes has been developed for use in the SEAHAWK bathymetric lidar. The scanner optics consist of an achromatic prism pair located concentrically within an 11.3” diameter dual-zone holographic optical element (HOE). This scanner achieves coaligned green and infrared beams at a 20° off-nadir scan angle when using a 50 W dual-wavelength laser (30 W @ 532 nm and 20 W @ 1064 nm) as the transmitter. The main engineering challenges in achieving the design were minimizing the optical pointing error between the four optical axes (two transmit and two receive) and developing a rugged prism pair design sufficient to withstand the high laser power. The design proved sensitive to fabrication and alignment errors, so success depended on analyzing optical and mechanical tolerances, acknowledging fabrication limitations, measuring critical optical components, tailoring the design to the as-built components, and utilizing a custom alignment fixture featuring a digital autocollimator. Final measurements of the deployed scanner indicate its optical pointing error has a cone half-angle of less than 0.06° (1 mrad).
In a recently developed compact LiDAR polarimeter, the transmit beam cycles through multiple polarization states within each laser pulse and the receiver splits the received signal into multiple polarization state analyzers (PSAs), with the PSA outputs temporally multiplexed into a single detector. This enables measurement of up to 12 of the 16 Mueller matrix elements of a target using limited hardware. However, due to numerical issues, one entire column of the matrix is not accessible, including the M22 element (counting from zero) on the matrix diagonal. Experimental data show that for most surfaces of interest in a defense/security setting, the off-diagonal elements tend to be negligibly small and the diagonal elements vary between targets, so access to the diagonal elements is of high interest. In this paper, we show that if an elliptical PSA is used at the receiver and if most of the off-diagonal Mueller matrix elements are assumed to be zero a priori, then the otherwise inaccessible M22 element can be estimated. We explore the linear algebra of the problem to determine the full list of which subsets of Mueller matrix elements can be estimated with this hardware configuration. We also theoretically and empirically investigate the effects of the rotations of the electro-optic transmitter plate and receiver elliptical PSA on the overall system performance.
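The linear-algebra idea above can be sketched in miniature: each measurement is a scalar m_i = a_i^T M s_i for analyzer vector a_i and transmitted Stokes vector s_i, and if the off-diagonal Mueller elements are assumed zero a priori, the diagonal (including M22) becomes recoverable by least squares. The Stokes states and analyzer vectors below are random stand-ins, not the instrument's actual PSA configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth Mueller matrix: diagonal only (the a priori assumption in the text).
M_true = np.diag([1.0, 0.6, -0.6, 0.2])

# Hypothetical transmitted Stokes states and elliptical PSA analyzer vectors.
S_in = rng.normal(size=(12, 4))
S_in[:, 0] = 1.0
A = rng.normal(size=(12, 4))
A[:, 0] = 1.0

# Forward model: m_i = a_i^T M s_i for each of the 12 measurements.
meas = np.einsum('ij,jk,ik->i', A, M_true, S_in)

# With off-diagonals fixed at zero, m_i = sum_j A[i,j] * M[j,j] * S[i,j],
# so the design matrix has one column per diagonal element.
G = A * S_in
diag_est, *_ = np.linalg.lstsq(G, meas, rcond=None)
```

The same construction, with extra columns in `G`, answers the paper's broader question of which subsets of Mueller elements are jointly estimable: a subset is recoverable exactly when its design matrix has full column rank.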
The complexity of airborne laser scanning systems has increased to meet growing end-user requirements for point density, accuracy, precision, and vegetation penetration. In doing so, a gap has grown in translating the technical specifications of the laser scanning system into the ASPRS or USGS requirements that the system operator must achieve in the point cloud delivered to the end customer. Further complicating this are new application areas such as forestry, where point density and ground penetration can be challenging. This study examines how the laser scanner system technical specifications, position and orientation system specifications, and operational parameters relate to end-product accuracy and vegetation penetration requirements.
Ground-based electro-optical tracking systems (EOTS) and electro-optical/infrared (EO/IR) sensors are the most widely used systems for counter-small-UAV (C-sUAV) detection. EO/IR systems can detect sUAVs at long range, on the order of several kilometers, in clear conditions. However, their performance degrades under various noise sources, such as fixed-pattern noise and dead/bad pixels, and in complex background conditions such as saturated images or foggy environments. In this study, we propose an efficient methodology using a high-power laser radar for real-time C-sUAV systems. The goal of our system is to find a 0.5 m sUAV at a 2 km distance in real time. Toward that challenging goal, we use a laser radar with dual pan-tilt scanning systems and apply the variable-radially-bounded nearest neighbor (V-RBNN) method as a fast clustering step. The experimental results show that the proposed method can detect a 0.5 m sUAV, with a computation time under 20 ms per frame, in complex background and long-range conditions.
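A radially-bounded nearest neighbor clustering with a range-dependent radius, in the spirit of the V-RBNN step named above, can be sketched as follows. The radius law and its coefficients are illustrative assumptions chosen to mirror the idea that point clouds get sparser with distance from the sensor, not the authors' published parameters.

```python
import numpy as np

def v_rbnn(points, base_radius=0.3, k_range=0.002):
    """Cluster 3D points: two points join a cluster when their separation is
    below a radius that grows with distance from the sensor (at the origin)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cur
        stack = [i]
        while stack:                                  # flood-fill the cluster
            p = stack.pop()
            r = base_radius + k_range * np.linalg.norm(points[p])
            dist = np.linalg.norm(points - points[p], axis=1)
            nbrs = np.where((dist < r) & (labels == -1))[0]
            labels[nbrs] = cur
            stack.extend(nbrs.tolist())
        cur += 1
    return labels

# Two compact returns ~10 m from the sensor, 5 m apart (e.g. two small objects).
rng = np.random.default_rng(2)
a = np.array([10.0, 0.0, 0.0]) + 0.03 * rng.normal(size=(20, 3))
b = np.array([10.0, 5.0, 0.0]) + 0.03 * rng.normal(size=(20, 3))
labels = v_rbnn(np.vstack([a, b]))
```

This brute-force version is O(n²) per frame; a real-time implementation would replace the distance scan with a spatial index, but the variable-radius logic is the same.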
In the last decade, small and micro unmanned aerial vehicles (MUAVs) have become an increasing risk to security and safety in civilian and military scenarios. Countermeasures are hard to deploy because of the MUAV's ability to perform highly agile flight maneuvers and because of the physical constraints imposed by very small cross-sections, for RADAR as well as for LiDAR detection. The French-German Research Institute of Saint-Louis (ISL) is studying heterogeneous sensor networks for detection and identification of threats. Shortwave laser gated viewing is used to record images of the target, and different image processing algorithms and filters are investigated to perform reliable tracking of MUAVs flying in front of textured and clear-sky backgrounds. Here, an approach is presented to analyze the MUAV flight behavior and three-dimensional path from image data. Further, a prediction approach is presented to estimate the target position in the near future.
The number of reported incidents caused by UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. However, not every UAV is a potential threat, and therefore the UAV not only has to be detected, but classified or identified. 360° scanning LiDAR systems can be deployed for the detection and tracking of (micro) UAVs at ranges up to 50 m. Unfortunately, the verification and classification of the detected objects is not possible in most cases, due to the low resolution of that kind of sensor. In this paper, we propose an automatic alignment of an additional sensor (mounted on a pan-tilt head) for the identification of the detected objects. The classification sensor is directed by the tracking results of the panoramic LiDAR sensor. If the alignable sensor is an RGB or infrared camera, the identification of the objects can be done by state-of-the-art image processing algorithms. If a higher-resolution LiDAR sensor is used for this task, algorithms have to be developed and implemented; for example, the classification could be realized by a 3D model matching method. After handoff of the object position from the 360° LiDAR to the verification sensor, this second system can be used for further tracking of the object, e.g., if the trajectory of the UAV leaves the field of view of the primary LiDAR system. The paper shows first results of this multi-sensor classification approach.
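The handoff step, converting a 3D track position from the panoramic LiDAR into pointing commands for the pan-tilt head, is a small piece of geometry worth making concrete. The coordinate convention below (x forward, y left, z up, sensors co-located after extrinsic calibration) is an assumption for illustration, not the paper's actual frame definition.

```python
import numpy as np

def pan_tilt_from_point(target, sensor_origin):
    """Pan/tilt angles (degrees) steering the secondary sensor at a 3D point.
    Frame assumption: x forward, y left, z up; pan about z, tilt above horizon."""
    v = np.asarray(target, dtype=float) - np.asarray(sensor_origin, dtype=float)
    pan = np.degrees(np.arctan2(v[1], v[0]))
    tilt = np.degrees(np.arctan2(v[2], np.hypot(v[0], v[1])))
    return pan, tilt

pan, tilt = pan_tilt_from_point([10.0, 10.0, 0.0], [0.0, 0.0, 0.0])    # 45 deg left
pan2, tilt2 = pan_tilt_from_point([10.0, 0.0, 10.0], [0.0, 0.0, 0.0])  # 45 deg up
```

In practice the track position would first be transformed by the calibrated extrinsics between the two sensors; with a lever arm of centimeters and targets tens of meters out, the co-location assumption introduces only a small pointing bias.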
There is currently a growing need for scientifically accurate terrain maps of the Earth. One way to develop terrain maps is the fusion of LIDAR sensing with digital imagery, producing a textured digital elevation map (TDEM). The high cost of equipment and full-scale aircraft operation can be mitigated by creating a sensor package that includes both LIDAR and digital imaging and that can be mounted on a low-cost small unmanned aerial system (sUAS). This sensor package, called a texel camera (TC), is composed of commercial off-the-shelf sensors: LIDAR, digital camera, inertial navigation system, and computer processing unit. The TC is calibrated so that each data sample contains a LIDAR scan, a registered digital image, aircraft attitude, position data, and a timestamp. As the TC acquires data, a point cloud is created that describes the surface of the object. Each LIDAR measurement in a scan corresponds to a known pixel in the digital image. Fusing these data allows the formation of scientifically accurate TDEMs. The TC is a more cost-effective terrain mapping solution than current mobile and manual data collection methods. It is designed to fit in rotorcraft and fixed-wing sUAS. Other advantages of the TC include adaptability to existing sUAS and the ability to map views from different perspectives.
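The core of the LiDAR-to-image registration described above is a standard projection: a calibrated rigid transform carries each LiDAR point into the camera frame, and the pinhole intrinsics map it to a pixel. The matrices below are illustrative values, not the texel camera's actual calibration.

```python
import numpy as np

def lidar_point_to_pixel(p_lidar, R, t, K):
    """Map a LiDAR point to image coordinates via extrinsics (R, t) and
    pinhole intrinsics K. Returns (u, v) in pixels."""
    p_cam = R @ p_lidar + t       # LiDAR frame -> camera frame
    uvw = K @ p_cam               # camera frame -> homogeneous pixel coords
    return uvw[:2] / uvw[2]       # perspective divide

# Illustrative calibration: identity extrinsics, 800 px focal length,
# principal point at the centre of a 640x480 image.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

# A point 5 m away on the optical axis lands at the principal point.
pix = lidar_point_to_pixel(np.array([0.0, 0.0, 5.0]), R, t, K)
```

Running this mapping once per LiDAR return is what lets each range sample carry a known pixel, and hence a texture value, into the fused TDEM.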
In this paper we present a new technique for texturing a Digital Surface Map (DSM) formed using a series of texel images (fused digital image/lidar) taken from a low-cost small unmanned aerial vehicle (UAV). In previous algorithms, an orthonormal view and a single beam had been used to texture the output DSM. The problem of using orthorectified texture obtained from such a view and a lidar scanner with multiple beams is addressed, and a new technique is described where the texture is selected based on the orientation of the triangular mesh formed by tessellating the 3D points from the lidar point cloud. This paper demonstrates the improvement in quality of the output textured DSM when viewed from various viewpoints in a 3D viewer. The stretched pixels are reduced and the texture of the side of objects is greatly improved. The final output textured DSM is shown and the improvement over the previous method is reported.
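The orientation-based texture selection described above can be sketched as a per-triangle vote: for each triangle of the tessellated point cloud, pick the texel image whose viewing ray is most nearly anti-parallel to the triangle normal. The scoring function and camera representation here are a simplified stand-in for the paper's method.

```python
import numpy as np

def best_view_for_triangle(verts, cam_centers):
    """Return the index of the camera that sees the triangle most head-on.
    verts: (3, 3) array of triangle vertices (counter-clockwise winding)."""
    n = np.cross(verts[1] - verts[0], verts[2] - verts[0])
    n = n / np.linalg.norm(n)                       # outward unit normal
    centroid = verts.mean(axis=0)
    scores = []
    for c in cam_centers:
        view = centroid - c
        view = view / np.linalg.norm(view)
        scores.append(-np.dot(view, n))             # larger = more head-on
    return int(np.argmax(scores))

# A ground triangle (normal +z): an overhead view should beat an oblique one.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cams = [np.array([0.2, 0.2, 10.0]),   # nearly nadir
        np.array([10.0, 0.0, 0.5])]   # grazing side view
idx = best_view_for_triangle(verts, cams)
```

Choosing texture per triangle rather than from a single orthonormal view is precisely what reduces the stretched pixels on vertical surfaces reported in the abstract; a full implementation would also test visibility (occlusion) before accepting the winning view.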
A new, lower-cost lidar scanning system capable of 10 FPS has been developed. The functions of the system include steering the incident laser, correcting alignments and aberrations, outputting a misalignment signal, and controlling laser incident timing. The system incorporates low-cost parts: a Liquid Crystal on Silicon (LCOS) device, a spherical mirror with a transparent layer (SMT), a spherical-surface meniscus lens, and photosensors. The key component in this system is the SMT, which increases the LCOS scanning area, reduces aberration, and creates a reference reflection for optical system correction. Beam steering by the optical phased array is accomplished by varying the refractive index of the liquid crystal in the LCOS. Scanning flash lidar is realized by the combination of the LCOS and SMT, which can emit an intensity-distribution-corrected conical beam. The SMT reference reflection corrects three types of errors by adjusting the control parameters of the LCOS: misalignment at lidar fabrication, long-term or temperature-dependent variation of the optical system, and installation error. In addition, the man-hours needed for alignment and the associated factory investment are reduced by eliminating mechanical adjustments. The system realizes flash area-varying lidar with a field of view of 1.8 rad in the horizontal direction and 0.3 rad in the vertical direction; an arbitrary irradiation area can be designed using optical phased array techniques.
The LIDAR scanner is at the heart of object detection for the self-driving car. Mutual interference between LIDAR scanners has not been regarded as a problem because vehicles equipped with LIDAR scanners were very rare. With the growing number of autonomous vehicles equipped with LIDAR scanners operating close to each other at the same time, a LIDAR scanner may receive laser pulses from other LIDAR scanners. In this paper, three types of experiments and their results are shown, according to the arrangement of two LIDAR scanners. We show the probability that LIDAR scanners will mutually interfere by considering spatial and temporal overlaps, present some typical mutual interference scenarios, and report an analysis of the interference mechanism.
Signal interference between two light detection and ranging (lidar) scanners can occur when the transmitted laser energy from one lidar is scattered from a target and returned to a second lidar. By modeling lidar transmission paths via ray tracing, it is shown that signal interference can be modeled as the coincidence of intersection between two lidar transmission paths and a scattering target. An evaluation of experimental observations and an analytical framework for lidar signal interference are presented, comparing results of a Monte Carlo simulation to interference observations from circularly scanning lidar sensors. The comparison between simulated and experimentally observed interference events suggests that lidar interference may be largely explained by geometric and angular conditions. The model provides a preliminary explanation of the angular distribution of interference events and of the distinct transitions between occurrences of different interference modes. However, further radiometric refinement is likely needed to fully explain the manifestation of some interference events.
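A toy Monte Carlo in the same geometric spirit, ignoring radiometry entirely, as the abstract notes one ultimately cannot: estimate how often two circular scanners with random start phases point within a small angular tolerance of the same azimuth, as a proxy for coincident illumination of a common target. Scan rates, tolerance, and the one-second window are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def interference_probability(f1, f2, tol_rad=0.01, n_trials=500, n_steps=4000):
    """Fraction of random-phase trials in which the two scan azimuths come
    within tol_rad of each other at some instant during a 1 s window."""
    t = np.linspace(0.0, 1.0, n_steps)
    hits = 0
    for _ in range(n_trials):
        phi1, phi2 = rng.uniform(0.0, 2.0 * np.pi, size=2)
        d = (2 * np.pi * f1 * t + phi1) - (2 * np.pi * f2 * t + phi2)
        d = np.abs((d + np.pi) % (2.0 * np.pi) - np.pi)   # wrap to [0, pi]
        hits += bool(np.any(d < tol_rad))
    return hits / n_trials

p_diff = interference_probability(10.0, 13.0)   # unequal scan rates: beams sweep past each other
p_same = interference_probability(10.0, 10.0)   # equal rates: relative angle is frozen
```

Even this crude model reproduces a qualitative mode transition: scanners at different rates almost surely coincide within each window, while identical rates interfere only for the small fraction of phase offsets that happen to align, which is one reason observed interference clusters at particular angular conditions.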
LIDAR sensors and LIDAR systems utilized for precise surveying in various fields of application are operated from significantly distinct platforms ranging from static platforms during a single 3D scan acquisition in terrestrial or static laser scanning to a multitude of different platforms in kinematic laser scanning like mobile laser scanning, UAV-based laser scanning or airborne laser scanning. The related fields of application impose substantially different requirements with respect to accuracy, measurement rate, and data density. The results have to serve various data consumer communities and impose vastly dissimilar requirements on the LIDAR equipment, e.g., size, weight, cost and performance. Still, there are some general issues one has to address in data processing and delivery. In some cases, the emphasis lies specifically on rapid point cloud processing and delivery – although delivery time requirements may range from seconds up to weeks, depending on the application at hand. Processing requirements are demanding as in this paper we assume final point clouds to be clean – i.e. virtually noise free –, georeferenced, and consistent. We discuss general challenges in the data processing chain applicable to all types of LIDAR, regardless of the underlying technology, i.e. waveform LIDAR, discrete LIDAR, single-photon LIDAR, or Geiger-mode LIDAR. Applications include, e.g., rapid generation of data previews for the operator in kinematic LIDAR and the automated registration of all acquired point clouds in stop-and-go acquisition with static LIDAR.
Data from the Optech Titan airborne laser scanner were collected over Monterey, CA, in three wavelengths (532 nm, 1064 nm, and 1550 nm) in October 2016 by the National Center for Airborne Laser Mapping (NCALM). Lidar waveform data at 532 nm from the Optech Titan were analyzed for data collected over the forested area at Point Lobos State Park. Standard point cloud processing was done using LAStools. Waveform data were converted into pseudo “hypercubes” in order to facilitate use of the analysis structures developed for hyperspectral imagery. The returns were classified with ENVI tools such as Support Vector Machines (SVM), Spectral Angle Mapper (SAM), Maximum Likelihood, and K-means. Using this analog to hyperspectral data analysis to classify vegetation and terrain, we find that Support Vector Machines applied to full-waveform data improve low-vegetation classification by 40% and differentiate tree types (pine/cypress) at 40-60% accuracy.
In a pulse-coded LIDAR system, the number of laser pulses used at a given measurement point changes depending on the optical modulation and the spreading code used in optical code-division multiple access (OCDMA). The number of laser pulses determines the pulse width, power, and duration of the pulse transmission for a measurement point. These parameters in turn determine the maximum measurement distance of the laser and the number of measurement points that can be acquired per second. In this paper, we suggest possible combinations of modulation and spreading technology, evaluate their performance and characteristics, and study optimal combinations for varying operating environments.
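The basic mechanism that makes spreading codes attractive for LIDAR can be shown in a few lines: correlating the received signal against the emitter's own code recovers its delay while suppressing a differently-coded emitter and noise. The random ±1 codes, delays, and noise level below are illustrative stand-ins, not any specific OCDMA family from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two emitters with different (stand-in, random +/-1) spreading codes.
own_code = rng.choice([-1.0, 1.0], size=127)
other_code = rng.choice([-1.0, 1.0], size=127)

# Received signal: our code returned with 40 chips of delay, a second
# emitter's code at a different delay, plus detector noise.
rx = np.zeros(300)
rx[40:40 + 127] += own_code
rx[10:10 + 127] += 0.8 * other_code
rx += 0.5 * rng.normal(size=rx.size)

# Correlating against our own code: the peak lag is our round-trip delay.
corr = np.correlate(rx, own_code, mode='valid')
delay = int(np.argmax(corr))
```

The correlation peak scales with the code length (127 here) while cross-code and noise terms scale only with its square root, which is the trade the abstract describes: longer codes buy interference rejection and range at the cost of dwell time per measurement point.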
LiDAR systems typically use a single fixed-frequency pulsed laser to obtain ranging and reflectance information from a complex scene. In recent years, there has been an increased interest in multispectral (MS) LiDAR. Here, progress in the development of a MS LiDAR with agile wavelengths selection is reported. Broadcast wavelengths are selected from a spectrally-broad source, in a pre-programmed or at-will fashion, to support target discrimination using 2D information. In this study, where measured reflectance spectra of the target of interest and background are provided, an L1-band selection algorithm is used to identify the most valuable wavebands to distinguish between scene elements. Anomaly detection methods have also been successfully demonstrated and will be discussed. Furthermore, an investigation into the use of a Silicon Photomultiplier (SiPM) device for collecting pulse returns from targets such as vegetation, minerals, and human-made objects with varying spatial and spectral properties is completed. In particular, an assessment of the impact of the device response to (1) different focal plane spot illumination conditions and (2) bias level settings is carried out, and the implications to radiometric accuracy and target discrimination capability are discussed.
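A simplified stand-in for the L1 band-selection step can make the idea concrete: given measured reflectance spectra of target and background, rank wavebands by the L1 distance between the class means and keep the most discriminative few. The synthetic 16-band spectra and the plain mean-difference criterion below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def select_bands(target_spectra, background_spectra, k=3):
    """Rank wavebands by the L1 distance between mean target and mean
    background reflectance; return the indices of the k best bands."""
    sep = np.abs(target_spectra.mean(axis=0) - background_spectra.mean(axis=0))
    return np.argsort(sep)[::-1][:k]

# Synthetic reflectance library: 16 bands, 50 samples per class.
rng = np.random.default_rng(5)
bg = 0.3 + 0.02 * rng.normal(size=(50, 16))
tgt = 0.3 + 0.02 * rng.normal(size=(50, 16))
tgt[:, 5] += 0.2     # target differs strongly in band 5
tgt[:, 11] += 0.1    # and moderately in band 11
bands = select_bands(tgt, bg, k=2)
```

In an agile-wavelength system this ranking is what would be computed offline from the spectral library and then used to pre-program which wavelengths the source broadcasts.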
During the last two decades, several research papers have addressed robust filtering algorithms for airborne laser scanning (ALS) data. Although most of these filtering algorithms are accurate and robust, they are limited to post-processing, since they rely on complex algorithms and need high execution times if implemented on an embedded processor. There are a number of applications that require generating digital surface models (DSMs) in real time, such as path planning for ground vehicles, where a UAV equipped with a LiDAR scans the terrain to find the path ahead of a ground vehicle. LiDAR scans are also critical for finding the most suitable region for UAV landing. With the growing demand for safe operation of autonomous systems like UAVs, there is a need for efficient LiDAR processing algorithms capable of generating DSMs in real time. The aim of this research is to discuss the design of an efficient algorithm that can filter a LiDAR point cloud, generate a DSM, and operate in real time. The algorithm is suitable for real-time implementation on resource-limited embedded processors without the need for a supercomputer. It is also capable of estimating slope maps from the DSM. The proposed method was successfully implemented in C++ in real time and was examined on an airborne platform. By comparison with the reference data, we demonstrate the capability of the developed method to distinguish, in real time, the roofs of buildings (areas of low slope) from the edges of the same buildings (areas of high slope).
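The slope-map step at the end, turning a gridded DSM into per-cell slope so roofs (low slope) separate from building edges (high slope), reduces to a finite-difference gradient. The sketch below uses central differences on a toy grid; it illustrates the computation, not the authors' embedded C++ implementation.

```python
import numpy as np

def slope_map(dsm, cell_size=1.0):
    """Per-cell terrain slope in degrees from a gridded DSM,
    using central differences (one-sided at the borders)."""
    dz_dy, dz_dx = np.gradient(dsm, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# A 45-degree ramp: height rises 1 m per 1 m cell in the x direction.
ramp = np.tile(np.arange(10, dtype=float), (10, 1))
slopes = slope_map(ramp)
```

Because this is a single vectorizable pass over the grid, it is the kind of operation that stays cheap enough for the resource-limited embedded processors the abstract targets.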
Harsh Environment Operations and Environmental Sensing
We report on the design, fabrication, and testing of a 1645 nm injection-seeded and locked Er:YAG laser resonator with single-frequency output operating at a methane line, with > 500 μJ/pulse at 4-7 kHz pulse repetition frequencies and a pulse width < 1 μs. The state-of-the-art technology for lidar methane sensing uses Optical Parametric Oscillator (OPO)/Optical Parametric Amplifier (OPA)-based systems. A key innovation of our system is the use of resonantly 1532 nm-pumped Er:YAG gain crystals, which results in improved efficiency and a reduced footprint compared with current OPO systems. Another feature adopted in our system is a high-bandwidth injection-locking technique, which includes a fast piezoelectric mirror and an in-house-developed FPGA locking algorithm capable of active locking and wavelength control for each pulse at pulse repetition frequencies up to 10 kHz. The single-frequency laser output follows the seed diode wavelength as it scans across the targeted methane absorption line.
Neptec Technologies’ next-generation OPAL 3D LiDAR uses multi-detection technology to penetrate obscurants and detect objects. This multi-return LiDAR system can receive up to seven returns from a single laser pulse. Based on a Risley prism scanning mechanism, the OPAL Performance Series (third generation) employs independent motor control to spin both prisms and generate optimized scan patterns with customized fields of view from 30° to 120°. The OPAL-P500 was recently evaluated on its ability to detect specific objects of various reflectivities within a controlled obscurant chamber capable of generating a number of aerosol obscurants; the obscurants used in this investigation were Arizona road dust and water fog. The obscurant cloud optical densities were monitored using a transmissometer. A series of six mesh screens was placed in the chamber, with solid targets at the far end of the chamber and no obscurants present in the air. In this test scenario, the number of return pulses from a single laser shot and their relative strengths were validated. The meshes were placed at various distances from each other to characterize the detection probabilities in clear conditions. The meshes were then removed, with the solid targets remaining at the back of the chamber, to validate the OPAL-P500’s target detection performance in obscurants of varying densities. Data from a number of test scenarios are presented to observe and analyze the effects of obscurants and target reflectivity using the OPAL-P500’s multi-return LiDAR technology.
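A multi-return receiver like the one described above reports up to seven discrete returns per shot. As a hedged sketch of the idea (not Neptec's actual signal chain), the logic amounts to finding local maxima above a threshold in the digitized return waveform and keeping the strongest few; the function name, threshold, and return cap are illustrative.

```python
import numpy as np

def extract_returns(waveform, threshold, max_returns=7):
    """Find local maxima above threshold in a digitized return waveform.

    Returns (sample_index, amplitude) pairs, strongest first, capped at
    max_returns, mimicking a multi-return lidar receiver."""
    peaks = []
    for i in range(1, len(waveform) - 1):
        if (waveform[i] >= threshold
                and waveform[i] > waveform[i - 1]
                and waveform[i] >= waveform[i + 1]):
            peaks.append((i, waveform[i]))
    peaks.sort(key=lambda p: -p[1])
    return peaks[:max_returns]
```

In the mesh-screen test, each partially transmissive screen would contribute one such peak, with the solid target at the chamber's far end producing the final return.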
Global shutter flash LIDAR is the sensor of choice for space-based autonomous relative navigation applications. Advanced Scientific Concepts (ASC) has recently delivered LIDAR cameras to the NASA / Lockheed Martin OSIRIS-REx and the NASA / Boeing CST-100 Starliner programs. These are two of the first operational space programs to use global shutter flash LIDAR based relative navigation systems. The OSIRIS-REx spacecraft was launched in September 2016 and provides the first opportunity to understand how global shutter flash LIDAR performance and reliability are affected by long-term exposure to the deep-space environment.
We present, mathematically and experimentally, a novel temporally multiplexed polarimetric LADAR (TMP-LADAR) architecture capable of characterizing the polarimetric properties (Mueller matrix elements) of a target using a single 10 ns laser pulse. By exploiting the Kerr nonlinear optical effect, the birefringence within an optical fiber can be modulated by the instantaneous intensity of the input laser pulse, resulting in temporally varying polarization states within the laser pulse exiting the fiber. We introduce a model that describes the varying polarization of a laser pulse propagating through an optical fiber and experimentally verify the operation of this novel polarization state generator (PSG).
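Recovering Mueller matrix elements from a set of probe polarization states reduces to a linear inverse problem: each measured intensity is i_k = a_k^T M s_k for a generator Stokes vector s_k and analyzer vector a_k, which is linear in the 16 entries of M. A minimal sketch of this standard reduction (not the TMP-LADAR authors' specific estimator; the function name is an assumption):

```python
import numpy as np

def recover_mueller(psg_states, psa_states, intensities):
    """Least-squares estimate of a 4x4 Mueller matrix M from measurements
    i_k = a_k^T M s_k, with PSG Stokes vectors s_k and analyzer vectors a_k.

    Each row of the design matrix is the outer product a_k s_k^T flattened
    row-major, matching M.ravel(); at least 16 independent (a, s) pairs
    are needed for a unique solution."""
    A = np.array([np.outer(a, s).ravel()
                  for a, s in zip(psa_states, psg_states)])
    m, *_ = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)
    return m.reshape(4, 4)
```

In the TMP-LADAR scheme the distinct s_k are supplied within a single pulse by the Kerr-induced time-varying birefringence, rather than by mechanically stepping a conventional PSG.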
The comparative characteristics of highly sensitive photodetectors for modern lidar applications, such as self-driving vehicles, autonomous collision-avoidance systems, sensors for aircraft and marine vessels, atmospheric lidar sensing systems, and topographic mapping tools, are considered. Estimates of the basic photodetector parameters, such as sensitivity threshold, dynamic range, time and amplitude resolution, and the effect of background light on sensitivity, were made for a new experimental detector, the HD-SiPM (high-density silicon photomultiplier), and compared with an APD module and commercial SiPM devices optimized for ToF LiDAR applications. The comparison results show that the HD-SiPM looks promising for application in various ToF LiDAR systems.
This work provides a survey of illumination sources versus application requirements for some common 3D imaging approaches. Sources for frequency-modulated continuous-wave (FMCW), pulsed, and flash LiDAR, in applications such as autonomous driving, face recognition, and underwater imaging, are discussed. The requirements and restrictions of each application are considered, including power, maximum range, field of view, and eye safety. In the context of these application requirements and restrictions, source characteristics such as coherence length, average power, peak power, bandwidth, and timing characteristics are used to evaluate the suitability of each source. This multidimensional survey provides a matrix of suitable sources for time-of-flight (ToF), FMCW, and flash lidar systems across various applications.
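The power-versus-range trade that drives such a matrix follows from the lidar link budget: for a diffuse (Lambertian) target filling the beam, received power falls off as 1/R², so the achievable range is set by transmit power, receive aperture, target reflectivity, and detector sensitivity. A hedged sketch under those standard assumptions (not a formula from this survey; names and numbers are illustrative):

```python
import math

def received_power(p_tx, reflectivity, aperture_diam_m, range_m, eta=1.0):
    """Lambertian-target lidar link budget:
    P_r = P_t * rho * eta * A_rx / (pi * R^2)."""
    area = math.pi * (aperture_diam_m / 2) ** 2
    return p_tx * reflectivity * eta * area / (math.pi * range_m ** 2)

def max_range(p_tx, reflectivity, aperture_diam_m, p_min, eta=1.0):
    """Range at which received power falls to the detector sensitivity p_min."""
    area = math.pi * (aperture_diam_m / 2) ** 2
    return math.sqrt(p_tx * reflectivity * eta * area / (math.pi * p_min))
```

Because eye safety caps the transmit power in most of the listed applications, the same equations explain why long-range systems push toward larger apertures, retroreflective or high-reflectivity targets, or more sensitive detectors rather than simply more power.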
The latest three-dimensional imaging results from Voxtel, teamed with the University of Dayton, are presented using Voxtel’s VOX3D™ series flash lidar camera. This camera uses the VOX3D series flash lidar sensor, which integrates a 128×128 InGaAs p-i-n detector array with a custom, multi-mode, low-noise, complementary metal-oxide semiconductor readout integrated circuit. In this paper, results are presented of: short-range (< 10 m) three-dimensional lidar imaging performed at the University of Dayton with a fast, low-power, eye-safe laser (20 μJ per pulse, 10 kHz) in high-bandwidth, windowed region-of-interest mode; and longer-range (30-150 m) outdoor lidar tests performed at Voxtel with two different eye-safe lasers (300 μJ and 3 mJ per pulse, 10 Hz) in full-frame, low-bandwidth mode. The VOX3D camera achieves single-shot timing precisions of 23.2 cm and 10.7 cm in high-bandwidth and low-bandwidth modes, respectively, with the timing precision in high-bandwidth mode being limited by camera electronics. In full-frame, low-bandwidth mode, the VOX3D camera has a maximum range of 51 m with the 300-μJ laser and 159 m with the 3-mJ laser.
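The two reported maximum ranges are consistent with the standard 1/R² lidar link budget, under which maximum range scales as the square root of pulse energy for a fixed detector sensitivity. A quick check using only the abstract's numbers (this scaling argument is ours, not the paper's):

```python
import math

# R_max is proportional to sqrt(E_pulse) for a fixed sensitivity threshold,
# so the 300-uJ result predicts the 3-mJ result:
r_300 = 51.0  # m, reported maximum range with the 300-uJ laser
predicted_r_3mJ = r_300 * math.sqrt(3e-3 / 300e-6)  # = 51 * sqrt(10)
```

The prediction is about 161 m, within ~1.5 % of the reported 159 m, so the two measurements line up with simple square-root energy scaling.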
Ball Aerospace has signed an exclusive license agreement to be the sole manufacturer of Geiger-mode avalanche photodiode (GmAPD) light detection and ranging (LIDAR) cameras for the defense and aerospace industries. The license was provided by Argo AI, which acquired the former manufacturer of the technology, Princeton Lightwave Inc. (PLI), in October 2017. Over the past 10 years PLI developed and advanced GmAPD detectors and cameras capable of detecting single photons. This detector sensitivity, combined onto multi-pixel arrays, enables high-resolution LIDAR and communication systems capable of extended-range operation with significant savings in system size, weight, and power. Specific applications of this technology include target detection, acquisition, tracking, 3D mapping, and intelligence, surveillance, and reconnaissance missions, with both direct and coherent detection. In this work, we review the current state of this technology, focusing on the three Geiger-mode camera options that will be manufactured by Ball Aerospace. Moreover, we present details of expected camera and detector performance (e.g., format, photon detection efficiency, dark count rate, wavelength, and timing), review production and manufacturing capabilities, and update the community on future technology paths for anticipated customer needs. Ball Aerospace will manufacture and further develop Geiger-mode LIDAR camera technology as the premier merchant supplier of advanced, large-format, single-photon-sensitive camera products and systems.
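Two of the performance figures named above, photon detection efficiency (PDE) and dark count rate (DCR), combine in a simple Poisson model of Geiger-mode operation: a pixel fires if at least one photon is detected, and dark counts set the false-fire floor within a range gate. A minimal illustrative model (standard single-photon detector statistics, not Ball's or PLI's published specification; names are assumptions):

```python
import math

def detection_prob(mean_photons, pde):
    """Probability a GmAPD pixel fires, given Poisson-distributed signal
    photons with the given mean and photon detection efficiency pde."""
    return 1.0 - math.exp(-pde * mean_photons)

def dark_fire_prob(dark_count_rate_hz, gate_s):
    """Probability a pixel fires from dark counts alone during a range gate."""
    return 1.0 - math.exp(-dark_count_rate_hz * gate_s)
```

The saturating form of `detection_prob` is what makes Geiger-mode arrays effective at extended range: even a fraction of a detected photon per pixel per shot yields a useful fire probability, provided the DCR keeps `dark_fire_prob` well below it over the gate.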