This PDF file contains the front matter associated with SPIE Proceedings Volume 9832, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
This paper discusses multiple flash lidar camera options and compares their sensitivity by calculating the energy required to map a given area under specific conditions. We define two basic scenarios and, in each, consider bare-earth 3D imaging; 3D imaging with 64 grey levels (6 bits of grey scale); 3D imaging with 3 return pulses from different ranges per detector element; and 3D imaging with both grey scale and multiple returns in each detector. We compare Geiger-mode avalanche photodiodes (GMAPDs), linear-mode avalanche photodiodes (LMAPDs), and low-bandwidth cameras traditionally used for 2D imaging but capable of 3D imaging in conjunction with a rapid polarization rotation stage.
Processing data from high-altitude, airborne lidar instruments that employ single-photon sensitive, arrayed detectors poses several challenges. Arrayed detectors produce large volumes of data; single-photon sensitive detectors produce high levels of noise; and high-altitude operation makes accurate geolocation difficult to achieve. To address these challenges, a unique and highly automated processing chain for high-altitude, single-photon, airborne lidar mapping instruments has been developed. The processing chain includes algorithms for coincidence processing, noise reduction, self-calibration, data registration, and geolocation accuracy enhancement. Common to all single-photon sensitive systems is a high level of background photon noise. A key step in the processing chain is a fast and accurate algorithm for density estimation, which is used to separate the lidar signal from the background photon noise, permitting the use of a wide-range gate and daytime operation. Additional filtering algorithms are used to remove or reduce other sources of system and detector noise. An optimization algorithm that leverages the conical scan pattern of the instrument is used to improve geolocation and to self-calibrate the system.
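The density-estimation step described above can be illustrated with a minimal sketch (the function name, thresholds, and brute-force neighbor search below are illustrative assumptions, not the authors' implementation): returns whose local neighbor count within a radius falls below a threshold are treated as background photon noise and discarded.

```python
import numpy as np

def density_filter(points, radius=1.0, min_neighbors=5):
    """Keep points whose local point density exceeds a threshold.

    points: (N, 3) array of lidar returns (x, y, z).
    Brute-force O(N^2) neighbor counting for clarity; a real
    processing chain would use spatial indexing (k-d tree,
    voxel grid) to stay fast at scale.
    """
    points = np.asarray(points, dtype=float)
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Count neighbors within the radius, excluding the point itself.
    counts = (dists < radius).sum(axis=1) - 1
    return points[counts >= min_neighbors]

# Dense cluster of signal photons plus sparse uniform noise:
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.2, size=(100, 3))
noise = rng.uniform(-50.0, 50.0, size=(20, 3))
cloud = np.vstack([signal, noise])
kept = density_filter(cloud, radius=1.0, min_neighbors=5)
```

The sparse noise points have essentially no neighbors within the radius and are removed, while the dense signal cluster survives.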
LIDAR has become an indispensable technology for providing accurate 3D data quickly and reliably, even in adverse measurement situations and harsh environments. It provides highly accurate point clouds with a significant number of additional valuable attributes per point. LIDAR systems based on Geiger-mode avalanche photodiode arrays, also called single-photon avalanche photodiode arrays, formerly employed for military applications, now seek to enter the commercial market of 3D data acquisition, advertising higher point acquisition speeds from longer ranges compared to conventional techniques. Publications pointing out the advantages of these new systems refer to the other category of LIDAR as "linear LIDAR", since the prime receiver element for detecting the laser echo pulses (the avalanche photodiode) is operated in a linear mode. We analyze the differences between the two LIDAR technologies and the fundamental differences in the data they provide. The limitations imposed by physics on both approaches are also addressed, and advantages of linear LIDAR over the photon-counting approach are discussed.
In this paper, we show the feasibility and the benefit of using a Geiger-mode avalanche photodiode (GmAPD) array for long-range detection, up to several kilometers. A simulation of a Geiger detection sensor, part of our end-to-end laser simulator, is described; it generates simulated 3D laser images from synthetic scenes. The resulting 3D point clouds have been compared to experimental acquisitions performed with our GmAPD 3D camera on similar scenarios. An operational case of long-range detection is presented: a copper cable stretched above the ground, 1 kilometer away from the experimental system, viewed along a horizontal line-of-sight (LOS). The detection of such a small object at long range strongly suggests that GmAPD focal plane arrays could be used for real-time 3D mapping or surveillance applications from airborne platforms, with good spatial and temporal resolutions.
The Lasers and Electro-Optics Branch at Goddard Space Flight Center has been tasked with building the lasers for the Global Ecosystems Dynamics Investigation (GEDI) Lidar Mission, to be installed on the Japanese Experiment Module (JEM) on the International Space Station (ISS). GEDI will use three NASA-developed lasers, each coupled with a Beam Dithering Unit (BDU) to produce three sets of staggered footprints on the Earth's surface to accurately measure global biomass. We will report on the design, assembly progress, test results, and delivery process of this laser system.
The latest mission proposals for exploration of solar system bodies require accurate position and velocity data during the descent phase in order to ensure safe, soft landing at the pre-designated sites. During landing maneuvers, the accuracy of the on-board inertial measurement unit (IMU) may not be reliable due to drift over extended travel times to destinations. NASA has proposed an advanced Doppler lidar system with multiple beams that can be used to accurately determine attitude and position of the landing vehicle during descent, and to detect hazards that might exist in the landing area. In order to assess the effectiveness of such a Doppler lidar landing system, it is valuable to simulate the system with different beam numbers and configurations. In addition, the effectiveness of the system to detect and map potential landing hazards must be understood. This paper reports the simulated system performance for a proposed multi-beam Doppler lidar using the LadarSIM system simulation software. Details of the simulation methods are given, as well as lidar performance parameters such as range and velocity accuracy, detection and false alarm rates, and examples of the Doppler lidar's ability to detect and characterize simulated hazards in the landing site. The simulation includes modulated pulse generation and coherent detection methods, beam footprint simulation, beam scanning, and interaction with terrain.
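The core measurement of a coherent Doppler lidar is worth making concrete: for a monostatic system, the round-trip Doppler shift is f_d = 2·v_radial/λ, so radial velocity follows directly from the measured frequency shift. A minimal sketch (wavelength and shift values are illustrative, not from the paper):

```python
def doppler_velocity(f_shift_hz, wavelength_m):
    """Radial velocity from a measured Doppler frequency shift.

    For a monostatic lidar the round-trip shift is
    f_d = 2 * v_radial / wavelength, so v = f_d * wavelength / 2.
    The sign convention (positive = approaching) is illustrative.
    """
    return f_shift_hz * wavelength_m / 2.0

# A 1.55 um lidar observing a 10 MHz Doppler shift:
v = doppler_velocity(10e6, 1.55e-6)  # 7.75 m/s radial velocity
```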
For the first time, a 3-D imaging Flash Lidar instrument has been used in flight to scan a lunar-like hazard field, build a 3-D Digital Elevation Map (DEM), identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control system, help to guide the Morpheus autonomous, rocket-propelled, free-flying lander to that safe site on the hazard field. The flight tests served as the TRL 6 demo of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) system and included launch from NASA-Kennedy, a lunar-like descent trajectory from an altitude of 250m, and landing on a lunar-like hazard field of rocks, craters, hazardous slopes, and safe sites 400m down-range. The ALHAT project developed a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar is a second generation, compact, real-time, air-cooled instrument. Based upon extensive on-ground characterization at flight ranges, the Flash Lidar was shown to be capable of imaging hazards from a slant range of 1 km with an 8 cm range precision and a range accuracy better than 35 cm, both at 1-σ. The Flash Lidar identified landing hazards as small as 30 cm from the maximum slant range which Morpheus could achieve (450 m); however, under certain wind conditions it was susceptible to scintillation arising from air heated by the rocket engine and to pre-triggering on a dust cloud created during launch and transported down-range by wind.
Rapid knowledge of road network conditions is vital to formulate an efficient emergency response plan following any major disaster. Fallen buildings, immobile vehicles, and other forms of debris often render roads impassable to responders. The status of roadways is generally determined through time and resource heavy methods, such as field surveys and manual interpretation of remotely sensed imagery. Airborne lidar systems provide an alternative, cost-effective option for performing network assessments. The 3D data can be collected quickly over a wide area and provide valuable insight about the geometry and structure of the scene. This paper presents a method for automatically detecting and characterizing debris in roadways using airborne lidar data. Points falling within the road extent are extracted from the point cloud and clustered into individual objects using region growing. Objects are classified as debris or non-debris using surface properties and contextual cues. Debris piles are reconstructed as surfaces using alpha shapes, from which an estimate of debris volume can be computed. Results using real lidar data collected after a natural disaster are presented. Initial results indicate that accurate debris maps can be automatically generated using the proposed method. These debris maps would be an invaluable asset to disaster management and emergency response teams attempting to reach survivors despite a crippled transportation network.
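The region-growing step that clusters road points into individual objects can be sketched as follows. This is a generic brute-force illustration under assumed parameters (Euclidean distance, fixed growth radius), not the authors' implementation:

```python
import numpy as np

def region_grow(points, radius=1.0):
    """Cluster a point cloud by growing regions of mutually
    nearby points. Returns an integer label per point; any
    unlabeled point within `radius` of a region member joins
    that region. O(N^2) sketch for clarity.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.nonzero((d < radius) & (labels == -1))[0]:
                labels[j] = current
                stack.append(j)
        current += 1
    return labels

# Two well-separated synthetic debris piles cluster into two regions:
pile_a = np.random.default_rng(1).normal([0, 0, 0], 0.3, (30, 3))
pile_b = np.random.default_rng(2).normal([20, 0, 0], 0.3, (25, 3))
labels = region_grow(np.vstack([pile_a, pile_b]), radius=1.5)
```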
A method for automated registration of lidar datasets specifically tailored to geometries with high length-to-width ratios operates on data in curvilinear coordinates. It relaxes the minimum change in perspective requirement between neighboring datasets typical of other algorithms. Range data is filtered with a series of discrete Gaussian and derivative of Gaussian filters to form a second-order Taylor series approximation to the surface about each sampled point. Principal curvatures with respect to the surface normal are calculated and compared across neighboring datasets to determine homologies and the best fit transfer matrix. The method reduces raw data volume requirements and processing time.
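The curvature comparison at the heart of the method above rests on a standard result: at a point where the local surface z = f(x, y) has a horizontal tangent plane, the principal curvatures are the eigenvalues of the Hessian of the second-order Taylor fit. A minimal worked instance (generic differential geometry, not the paper's filter-bank pipeline):

```python
import numpy as np

def principal_curvatures(fxx, fxy, fyy):
    """Principal curvatures of z = f(x, y) at a point where
    fx = fy = 0: the eigenvalues of the Hessian
    [[fxx, fxy], [fxy, fyy]]. These are the rotation-invariant
    quantities compared across datasets to find homologies.
    """
    H = np.array([[fxx, fxy], [fxy, fyy]], dtype=float)
    k1, k2 = np.linalg.eigvalsh(H)  # ascending order
    return k1, k2

# Sanity check: a sphere of radius R, locally z = (x^2 + y^2) / (2R),
# has fxx = fyy = 1/R and both principal curvatures equal to 1/R.
R = 5.0
k1, k2 = principal_curvatures(1.0 / R, 0.0, 1.0 / R)
```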
The detection and classification of small surface and airborne targets at long ranges is a growing need for naval security. Imaging-based identification at long range, or of small targets at closer range, is limited by the demand for very high transverse sensor resolution. This motivates the investigation of 1D laser techniques for target ID, including vibrometry and laser range profiling. Vibrometry can give good results but is sensitive to whether vibrating parts of the target are in the field of view. Laser range profiling is attractive because the maximum range can be substantial, especially for a small laser beam width. A range profiler can also be used in a scanning mode to detect targets within a certain sector, and the same laser can be used for active imaging when the target comes closer and is angularly resolved. The present paper shows both experimental and simulated results for laser range profiling of small boats out to 6-7 km range and a UAV mockup at close range (1.3 km). We obtained good results with the profiling system for both target detection and recognition. Comparison of experimental and simulated range waveforms based on CAD models of the target supports the idea of using a profiling system as a first recognition sensor, thus narrowing the search space for automatic target recognition based on imaging at close ranges. The naval experiments took place in the Baltic Sea with many other active and passive EO sensors beside the profiling system. Data fusion between laser profiling and imaging systems is discussed. The UAV experiments were made from the rooftop laboratory at FOI.
Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.
Many modern LIDAR platforms contain an integrated RGB camera for capturing contextual imagery. However, these RGB cameras do not collect a near-infrared (NIR) color channel, omitting information useful for many analytical purposes. This raises the question of whether LIDAR data, collected in the NIR, can be used as a substitute for an actual NIR image in this situation. Generating a LIDAR-based NIR image is potentially useful in situations where another source of NIR, such as satellite imagery, is not available. LIDAR is an active sensing system that operates very differently from a passive system, and thus requires additional processing and calibration to approximate the output of a passive instrument. We examine methods of approximating passive NIR images from LIDAR for real-world datasets, and assess differences with true NIR images.
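One of the calibration steps implied above can be made concrete: because received power from a diffuse surface falls off roughly as 1/R², a common first-order radiometric normalization scales each return's intensity by (R/R_ref)² before mapping it to image brightness. This sketch is a generic correction under that assumption, not the paper's full processing chain:

```python
import numpy as np

def range_normalize_intensity(intensity, range_m, ref_range_m=1000.0):
    """First-order radiometric correction of lidar return intensity.

    Scales recorded intensities by (R / R_ref)^2 so that surfaces
    of equal reflectance at different ranges map to comparable
    brightness values, a prerequisite for approximating a passive
    NIR image from active lidar returns.
    """
    intensity = np.asarray(intensity, dtype=float)
    range_m = np.asarray(range_m, dtype=float)
    return intensity * (range_m / ref_range_m) ** 2

# Two returns from equally reflective surfaces at 1 km and 2 km:
# the raw intensities differ 4x, but normalize to the same value.
i = range_normalize_intensity([100.0, 25.0], [1000.0, 2000.0])
```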
We have developed a prototype real-time computer for a bathymetric lidar capable of producing point clouds attributed with total propagated uncertainty (TPU). This real-time computer employs a “mixed-mode” architecture comprised of an FPGA, CPU, and GPU. Noise reduction and ranging are performed in the digitizer’s user-programmable FPGA, and coordinates and TPU are calculated on the GPU. A Keysight M9703A digitizer with user-programmable Xilinx Virtex 6 FPGAs digitizes as many as eight channels of lidar data, performs ranging, and delivers the data to the CPU via PCIe. The floating-point-intensive coordinate and TPU calculations are performed on an NVIDIA Tesla K20 GPU. Raw data and computed products are written to an SSD RAID, and an attributed point cloud is displayed to the user. This prototype computer has been tested using 7m-deep waveforms measured at a water tank on the Georgia Tech campus, and with simulated waveforms to a depth of 20m. Preliminary results show the system can compute, store, and display about 20 million points per second.
Precise topographic information plays an important role in geology, hydrology, natural resource surveys, and deformation monitoring. DEM extraction based on synthetic aperture radar interferometry (InSAR) derives the three-dimensional elevation of the target area from the phase information of the radar image data, and offers large-scale, high-precision, all-weather operation. By moving the ground-based radar system up and down along its track, a spatial baseline is formed, and the DEM of the target area can be obtained from image data acquired at different angles. Three-dimensional laser scanning can quickly, efficiently, and accurately obtain a DEM of the target area, which can be used to verify the accuracy of the DEM extracted by ground-based InSAR (GBInSAR). However, research on extracting DEMs of a target area with GBInSAR remains limited, and current approaches suffer from an incomplete theoretical basis and low accuracy. This article therefore analyzes the underlying principles in depth and extracts the DEM of a target area from GBInSAR data. The GBInSAR DEM is then compared with the DEM obtained from three-dimensional laser scan data, with statistical analysis and a normal distribution test. The results show that the DEM obtained by GBInSAR is broadly consistent with the DEM obtained by three-dimensional laser scanning, and its accuracy is high; the difference between the two DEMs approximately follows a normal distribution. This indicates that extracting the DEM of a target area with GBInSAR is feasible, and provides a foundation for the promotion and application of GBInSAR.
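The DEM comparison described above amounts to differencing two co-registered elevation grids and summarizing the residuals. A minimal sketch with synthetic data (the statistics chosen here, mean/std/RMSE, are illustrative; the authors additionally apply a normality test):

```python
import numpy as np

def dem_difference_stats(dem_a, dem_b):
    """Elementwise difference of two co-registered DEM grids,
    with summary statistics used to judge their agreement.
    """
    diff = np.asarray(dem_a, float) - np.asarray(dem_b, float)
    d = diff.ravel()
    stats = {
        "mean": d.mean(),                     # systematic bias
        "std": d.std(ddof=1),                 # spread of residuals
        "rmse": np.sqrt((d ** 2).mean()),     # overall accuracy
    }
    return diff, stats

# Synthetic 100x100 DEMs that differ by zero-mean Gaussian noise,
# mimicking two consistent surveys of the same terrain:
rng = np.random.default_rng(0)
base = rng.uniform(500.0, 600.0, (100, 100))
noisy = base + rng.normal(0.0, 0.05, (100, 100))
_, s = dem_difference_stats(base, noisy)
```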
There is currently a good deal of interest in developing laser radar (ladar) for autonomous navigation and collision avoidance in a wide variety of vehicles. In many of these applications, minimizing size, weight and power (SWaP) is of critical importance, particularly onboard aircraft and spacecraft where advanced imaging systems are also needed for location, alignment, and docking. In this paper, we describe the miniaturization of a powerful ladar system based on an electro-optic (EO) beamsteering device in which liquid crystal birefringence is exploited to achieve a 20° x 5° field of view (FOV) with no moving parts. This FOV will be significantly increased in future versions. In addition to scanning, the device is capable of operating in a “point and hold” mode where it locks onto a single moving object. The nonmechanical design leads to exceptionally favorable size and weight values: 1 L and < 1 kg respectively. Furthermore, these EO scanners operate without mechanical resonances or inertial effects. A demonstration was performed with a 50 kHz, 1 microjoule laser with a 2 mm beam diameter to image at a range of 100 m, yielding a 2 fps frame rate limited by the pulsed laser repetition rate. The fine control provided by the EO steerer results in an angle precision of 6×10⁻⁴ degrees. This FOV can be increased with discrete, non-mechanical polarization grating beamsteerers. In this paper, we will present the design, preliminary results, and planned next generation improvements.
The Army Research Laboratory (ARL) has continued to research a short-range ladar imager for use on small unmanned ground vehicles (UGV) and recently small unmanned air vehicles (UAV). The current ladar brassboard is based on a micro-electro-mechanical system (MEMS) mirror coupled to a low-cost pulsed erbium fiber laser. It has a 5-6 Hz frame rate, an image size of 256 (h) x 128 (v) pixels, a 42º x 21º field of regard, 35 m range, eyesafe operation, and 40 cm range resolution with provisions for super-resolution. Experience with driving experiments on small ground robots and efforts to extend the use of the ladar to UAV applications has encouraged work to improve the ladar’s performance. The data acquisition system can now capture range data from the three return pulses in a pixel (that is first, last, and largest return), and information such as elapsed time, operating parameters, and data from an inertial navigation system. We will mention the addition and performance of subsystems to obtain eye-safety certification. To meet the enhanced range requirement for the UAV application, we describe a new receiver circuit that improves the signal-to-noise (SNR) several-fold over the existing design. Complementing this work, we discuss research to build a low-capacitance large area detector that may enable even further improvement in receiver SNR. Finally, we outline progress to build a breadboard ladar to demonstrate increased range to 160 m. If successful, this ladar will be integrated with a color camera and inertial navigation system to build a data collection package to determine imaging performance for a small UAV.
Firstly, we demonstrated a wirelessly controlled MEMS scan module with imaging and laser tracking capability which can be mounted and flown on a small UAV quadcopter. The MEMS scan module was reduced down to a small volume of <90mm x 60mm x 40mm, weighing less than 40g and consuming less than 750mW of power using a ~5mW laser. This MEMS scan module was controlled by a smartphone via Bluetooth while flying on a drone, and could project vector content, text, and perform laser based tracking. Also, a “point-and-range” LiDAR module was developed for UAV applications based on low SWaP (Size, Weight and Power) gimbal-less MEMS mirror beam-steering technology and off-the-shelf OEM LRF modules. For demonstration purposes of an integrated laser range finder module, we used a simple off-the-shelf OEM laser range finder (LRF) with a 100m range, +/-1.5mm accuracy, and 4Hz ranging capability. The LRF's receiver optics were modified to accept 20° of angle, matching the transmitter's FoR. A relatively large (5.0mm) diameter MEMS mirror with +/-10° optical scanning angle was utilized in the demonstration to maintain the small beam divergence of the module. The complete LiDAR prototype can fit into a small volume of <70mm x 60mm x 60mm, and weigh <50g when powered by the UAV's battery. The MEMS mirror based LiDAR system allows for on-demand ranging of points or areas within the FoR without altering the UAV's position. Increasing the LRF ranging frequency and stabilizing the pointing of the laser beam by utilizing the onboard inertial sensors and the camera are additional goals of the next design.
Security measures sometimes require persistent surveillance of government, military, and public areas. Borders, bridges, sport arenas, airports, and other sites are often surveilled with low-cost cameras. Their low-light performance can be enhanced with laser illuminators; however, various operational scenarios may require a low-intensity laser illumination in which the object-scattered light intensity is lower than the sensitivity of the ladar image detector. This paper discusses a novel type of high-gain optical image amplifier. The approach enables time-synchronization of the incoming and amplifying signals with accuracy ≤ 1 ns. The technique allows the incoming signal to be amplified without the need to match the input spectrum to the cavity modes. Instead, the incoming signal is accepted within the spectral band of the amplifier. We have gauged experimentally the performance of the amplifier with a 40 dB gain and an angle of view of 20 mrad.
Compact eye-safe laser rangefinders (LRFs) are a key technology for future sensors. In addition to reduced size, weight and power (SWaP), compact LRFs are increasingly being required to deliver a higher repetition rate, burst mode capability. Burst mode allows acquisition of telemetry data from fast moving targets or while sensing-on-the-move. We will describe a new, ultra-compact, long-range, eye-safe laser rangefinder that incorporates a novel transmitter that can deliver a burst capability. The transmitter is a diode-pumped, erbium:glass, passively Q-switched, solid-state laser which uses design and packaging techniques adopted from the telecom components sector. The key advantage of this approach is that the transmitter can be engineered to match the physical dimensions of the active laser components and the submillimetre sized laser spot. This makes the transmitter significantly smaller than existing designs, leading to big improvements in thermal management, and allowing higher repetition rates. In addition, the design approach leads to devices that have higher reliability, lower cost, and smaller form-factor, than previously possible. We present results from the laser rangefinder that incorporates the new transmitter. The LRF has dimensions (L x W x H) of 100 x 55 x 34 mm and achieves ranges of up to 15km from a single shot, and over a temperature range of -32°C to +60°C. Due to the transmitter’s superior thermal performance, the unit is capable of repetition rates of 1Hz continuous operation and short bursts of up to 4Hz. Short bursts of 10Hz have also been demonstrated from the transmitter in the laboratory.
The importance of creating 3D imagery is increasing and has many applications in the field of disaster response, digital elevation models, object recognition, and cultural heritage. Several methods have been proposed to register texel images, which consist of fused lidar and digital imagery. The previous methods were limited to registering up to two texel images or multiple texel swaths having only one strip of lidar data per swath. One area of focus still remains to register multiple texel images to create a 3D model. The process of creating true 3D images using multiple texel images is described. The texel camera fuses the 2D digital image and calibrated 3D lidar data to form a texel image. The images are then taken from several perspectives and registered. The advantage of using multiple full frame texel images over 3D- or 2D-only methods is that there will be better registration between images because of the overlapping 3D points as well as 2D texture used in the joint registration process. The individual position and rotation mapping to a common world coordinate frame is calculated for each image and optimized. The proposed methods incorporate bundle adjustment for jointly optimizing the registration of multiple images. Sparsity is exploited as there is a lack of interaction between parameters of different cameras. Examples of the 3D model are shown and analyzed for numerical accuracy.
A 3-D Monte Carlo ray-tracing simulation of LiDAR propagation models the reflection, transmission and absorption interactions of laser energy with materials in a simulated scene. In this presentation, a model scene consisting of a single Victorian Boxwood (Pittosporum undulatum) tree is generated by the high-fidelity tree voxel model VoxLAD using high-spatial-resolution point cloud data from a Riegl VZ-400 terrestrial laser scanner. The VoxLAD model uses terrestrial LiDAR scanner data to determine Leaf Area Density (LAD) measurements for small-volume voxels (20 cm sides) of a single tree canopy. VoxLAD is also used here in a non-traditional fashion to generate a voxel model of wood density. Information from the VoxLAD model is used within the LiDAR simulation to determine the probability of LiDAR energy interacting with materials at a given voxel location. The LiDAR simulation is configured to replicate the scanning arrangement of the Riegl VZ-400; the resulting simulated full-waveform LiDAR signals compare favorably to those obtained with the Riegl VZ-400 terrestrial laser scanner.
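The per-voxel interaction probability can be sketched as below. The Beer-Lambert form with a leaf projection factor g ≈ 0.5 and a 20 cm voxel traversal is an assumption for illustration; the actual VoxLAD-driven simulation details may differ.

```python
import math
import random

def p_interact(lad, step, g=0.5):
    """Probability a photon interacts while traversing `step` metres of a
    voxel with leaf area density `lad` (m^2/m^3), via Beer-Lambert
    extinction with projection factor g (assumed, not from the paper)."""
    return 1.0 - math.exp(-g * lad * step)

def trace(voxel_lads, step=0.2, g=0.5, rng=random.random):
    """Step a ray through a column of voxels; return the index of the voxel
    where it interacts, or None if it escapes the canopy."""
    for i, lad in enumerate(voxel_lads):
        if rng() < p_interact(lad, step, g):
            return i
    return None
```

Repeating `trace` over many photons and binning interaction ranges is what builds up a simulated full waveform.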
The purpose of this study is to present and evaluate the benefits and capabilities of high-resolution 3D data from unmanned aircraft, especially under conditions where existing methods (passive imaging, 3D photogrammetry) have limited capability. Example applications are detection of obscured objects under vegetation, change detection, detection in dark or shadowed environments, and immediate geometric documentation of an area of interest. The applications are exemplified with experimental data from our small UAV test platform, 3DUAV, which carries an integrated rotating laser scanner, and with ground-truth data collected with a terrestrial laser scanner. We process lidar data combined with inertial navigation system (INS) data to generate a highly accurate point cloud. The combination of INS and lidar data is achieved in a dynamic calibration process that compensates for the navigation errors of the low-cost, lightweight MEMS-based (microelectromechanical systems) INS. The system allows studies of the whole data collection-processing-application chain and also serves as a platform for further development. We evaluate the applications with respect to system aspects such as survey time, resolution and target detection capability. Our results indicate that several target detection/classification scenarios are feasible within reasonable survey times, from a few minutes (cars, persons and larger objects) to about 30 minutes for detection and possibly recognition of smaller targets.
LiDAR and hyperspectral data provide rich and complementary information about the content of a scene. In this work, we examine methods of data fusion, with the goal of minimizing information loss due to point-cloud rasterization and spatial-spectral resampling. Two approaches are investigated and compared: 1) a point-cloud approach in which spectral indices such as the Normalized Difference Vegetation Index (NDVI) and principal components of the hyperspectral image are calculated and appended as attributes to each LiDAR point falling within the spatial extent of a pixel, and a supervised machine learning approach is used to classify the resulting fused point cloud; and 2) a raster-based approach in which LiDAR raster products (DEMs, DSMs, slope, height, aspect, etc.) are created and appended to the hyperspectral image cube, and traditional spectral classification techniques are then used to classify the fused image cube. The methods are compared in terms of classification accuracy. LiDAR data and associated orthophotos of the NPS campus collected during 2012-2014 and hyperspectral Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data collected during 2011 are used for this work.
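The first approach, attaching a per-pixel spectral index to each lidar point, can be sketched as below. The north-up raster layout, origin, and ground sample distance handling are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index; eps avoids divide-by-zero."""
    return (nir - red) / (nir + red + eps)

def attach_ndvi(points_xy, nir_img, red_img, origin, gsd):
    """Append the NDVI of the enclosing pixel as an attribute to each
    lidar point (N x 2 easting/northing), assuming a north-up raster with
    upper-left corner `origin` and ground sample distance `gsd`."""
    cols = ((points_xy[:, 0] - origin[0]) / gsd).astype(int)
    rows = ((origin[1] - points_xy[:, 1]) / gsd).astype(int)
    vals = ndvi(nir_img[rows, cols], red_img[rows, cols])
    return np.column_stack([points_xy, vals])
```

Principal-component scores or other per-pixel products could be appended in exactly the same way before classifying the fused point cloud.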
Data from the Optech Titan are analyzed here for terrain classification, adding a spectral component to the lidar point cloud analysis. Nearest-neighbor sorting techniques are used to create a merged point cloud from the three channels. The merged point cloud is analyzed using spectral analysis techniques that exploit color and derived spectral products (pseudo-NDVI), as well as lidar features such as height values and return number. Standard spectral image classification techniques are used to train a classifier, and the analysis is done with a Maximum Likelihood supervised classification. Terrain classification results show an overall accuracy improvement of 10% and a kappa coefficient increase of 0.07 over a raster-based approach.
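A brute-force version of the nearest-neighbor channel merge might look like the sketch below; at scale a KD-tree would replace the pairwise distance matrix, and the function names and distance threshold are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def merge_channels(base, others, max_dist=1.0):
    """For each point of `base` (N x 3 xyz), append the intensity of the
    nearest point in each other channel, given as (xyz, intensity) pairs;
    NaN where no neighbor lies within `max_dist` metres."""
    out = [base]
    for xyz, inten in others:
        d2 = ((base[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)                       # nearest neighbor per base point
        near = np.sqrt(d2[np.arange(len(base)), idx]) <= max_dist
        out.append(np.where(near, inten[idx], np.nan)[:, None])
    return np.hstack(out)
```

The merged array then carries one pseudo-color value per wavelength channel alongside the geometric lidar attributes used for classification.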
We propose and develop a field-widened Michelson interferometer (FWMI) to act as a new type of spectroscopic filter for HSRL applications. Owing to its field-widening characteristic, the FWMI accepts relatively large off-axis incident angles and can be designed for any desired wavelength. The theoretical foundations of the FWMI are introduced in this paper, and the developed prototype interferometer is described. It consists of a solid arm made of H-ZF52 glass with a length of 37.876 mm and an air gap with a length of 20.382 mm; the two interference arms are attached to a cube beam splitter to form a Michelson interferometer. Owing to the matched dimensions and refractive indices of the two arms, experimental tests show that the OPD variation of the developed FWMI is about 0.04 lambda, with an RMS of less than 0.008 lambda, at incident angles of up to 1.5 degrees (half angle). The cumulative wavefront distortion introduced by the FWMI is less than 0.1 lambda PV and 0.02 lambda RMS. To lock the filtering frequency of the FWMI to the laser transmitter, a frequency-locking system, essentially an electro-optic feedback loop, is established; its setup and principle are also described in detail. A locking accuracy of about 27 MHz is demonstrated with this frequency-locking technique. These results validate the feasibility of the developed FWMI as a spectroscopic filter for an HSRL.
A polarized high-spectral-resolution lidar (HSRL) based on a field-widened Michelson interferometer (FWMI) has been developed at Zhejiang University, China, and is intended to profile various atmospheric aerosol optical properties simultaneously, such as the backscatter coefficient, extinction coefficient, depolarization ratio, and lidar ratio. Owing to the enlarged field of view (FOV) of the FWMI spectroscopic filter compared with the conventional Fabry-Perot interferometer (FPI) filter, the acceptance angle of the HSRL system can be expanded to about 1 degree without any degradation of the spectral discrimination, enhancing the photon efficiency considerably. In this paper, we describe the developed FWMI-based polarized HSRL system comprehensively. The instrument configuration parameters and overall system structure are first presented. The FWMI subsystem, as the core apparatus of this HSRL, is then treated in particular detail. The instrument calibration approach and the data retrieval are also discussed. To our knowledge, this HSRL is the first new-generation lidar in China to employ an FWMI spectroscopic filter, and great potential is expected as the engineering design is gradually improved in the near future.
The Raman scattering of several liquid and solid materials has been investigated near the deep-ultraviolet absorption features corresponding to the electronic energy states of the chemical species present. Operating near these features is found to provide significant enhancement, but it is always accompanied by absorption due to that or other species along the path. We investigate this trade-off for water vapor, although the results for liquid water and ice are quantitatively very similar. An optical parametric oscillator (OPO) was pumped by the third harmonic of a Nd:YAG laser, and the output was frequency doubled to generate a tunable excitation beam in the 215-600 nm range. We use this tunable excitation beam to investigate pre-resonance and resonance Raman spectroscopy near an absorption band of ice, and a significant enhancement in the Raman signal is observed. The A-term of the Raman scattering tensor, which describes the pre-resonant enhancement of the spectra, is used to model the primary observed intensities as a function of incident beam energy, although a broad resonance structure near the final-state-effect-related absorption in ice is also found. The results suggest that pre-resonant or resonant Raman lidar could increase sensitivity and thereby improve the spatial and temporal resolution of atmospheric water vapor measurements. However, these shorter wavelengths also suffer higher ozone absorption. These opposing effects are modeled using MODTRAN for several configurations relevant to studies of boundary-layer water vapor and the vicinity of clouds. Such data could be used in studies of energy flow at the water-air and cloud-air interfaces, and may help resolve some of the major uncertainties in current global climate models.
An axially non-uniform tapered As2S3 planar waveguide has been designed for mid-IR supercontinuum generation. The dispersion profile varies along the propagation distance. Numerical results show that this scheme significantly broadens the generated continuum, which extends from ~1 μm to ~7 μm.
As is well known, the high-spectral-resolution lidar (HSRL) technique employs a narrowband spectroscopic filter to separate the elastic aerosol backscatter from the thermally Doppler-broadened molecular backscatter. This paper presents a new and comprehensive view of the HSRL technique from the perspective of spectral discrimination, without restricting the analysis to a specific spectral discrimination filter. Based on a general three-channel HSRL layout, a theoretical model for retrieval error evaluation is introduced. The model considers only the error sources related to the spectral discrimination parameters and ignores those not associated with them. It is subsequently verified by Monte Carlo (MC) simulations. Both the model and the MC simulations demonstrate that a large molecular transmittance and a large spectral discrimination ratio (SDR, i.e., the ratio of the molecular transmittance to the aerosol transmittance) reduce the retrieval error. Moreover, we find that the signal-to-noise ratio (SNR) and the SDR of a lidar system often trade off against each other, and we suggest choosing a moderate SDR in favor of a higher molecular transmittance (and thus higher SNR), rather than pursuing an unnecessarily high SDR, when designing the spectral discrimination filter. This view captures the essential function of the narrowband spectroscopic filter in an HSRL system and provides general guidelines for the design of spectral discrimination filters within the HSRL community.
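The role of the two filter transmittances can be made concrete with a toy two-channel inversion. The simplified signal model in the comments is an illustrative assumption that ignores the error sources the paper analyzes; it yields the aerosol-to-molecular backscatter ratio in closed form.

```python
def aerosol_molecular_ratio(s_total, s_mol, t_mol, t_aer):
    """Invert a simplified two-channel HSRL signal model for
    R = beta_aer / beta_mol, assuming (illustratively):
        s_total = C * (beta_aer + beta_mol)
        s_mol   = C * (t_aer * beta_aer + t_mol * beta_mol)
    where t_mol and t_aer are the filter's molecular and aerosol
    transmittances (their ratio t_mol / t_aer is the SDR)."""
    r = s_mol / s_total
    return (t_mol - r) / (r - t_aer)
```

Note that as the measured ratio r approaches t_aer the denominator vanishes and the inversion becomes ill-conditioned, which is one way to see the SNR/SDR trade-off discussed above.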
Research on and protection of the ocean ecosystem are key to maintaining the state of the sea and developing marine resources. However, human knowledge of the ocean remains greatly limited. At present, in situ, acoustic, and remote sensing methods are applied to understand and explore the ocean. Among these, lidar is an outstanding remote sensing method owing to its high spatial and temporal resolution and its capability for vertical profiling. High-spectral-resolution lidar (HSRL) employs an ultra-narrow spectral filter to distinguish the scattering signals of particles from those of water molecules without assuming a lidar ratio, and thus obtains the optical properties of the ocean with high accuracy. Nevertheless, the complexity of seawater leads to variable marine optical properties, which motivates the development of an HSRL operating at multiple wavelengths to improve inversion accuracy and increase detection depth. The field-widened Michelson interferometer (FWMI), whose central transmittance can be tuned to any wavelength and whose field of view is large, can be employed as the HSRL spectral filter; it overcomes both the fixed operating wavelength of the iodine filter and the small field of view of the Fabry-Perot interferometer. The principle of an FWMI-based HSRL designed for ocean remote sensing is presented in detail. In addition, the applicability of the FWMI under the disturbance of Brillouin scattering is analyzed, and preliminary theory shows that an FWMI-based HSRL could be employed in marine remote sensing with high accuracy.