This PDF file contains the front matter associated with SPIE Proceedings Volume 10332, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
We present a novel coordinate measurement system based on a combination of frequency scanning interferometry and multilateration. The system comprises a number of sensors (minimum of four) that surround the measurement volume. Spherical glass retro-reflectors act as targets that are used to define the points in space to be measured. The sensors all measure the absolute distance to all targets simultaneously. The resulting distances are then used to compute the coordinates of the targets and other systematic parameters such as the sensor locations. Initial experimental comparison with a commercial laser tracker has shown that the proposed system is capable of achieving coordinate uncertainties of the order of 40 μm in a measurement volume of 10 m × 5 m × 2.5 m. The system is self-calibrating, inherently traceable to the international system of units (the SI) and computes rigorous coordinate uncertainty estimates.
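The core geometric step, multilateration from simultaneous absolute distances, can be sketched as a small non-linear least-squares problem. The sketch below assumes the sensor positions are already known; in the actual system they are estimated together with the target coordinates in the self-calibration, and all names here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def multilaterate(sensors, distances, x0=None):
    """Least-squares target coordinates from absolute distances to >= 4 sensors.

    sensors   : (N, 3) array of known sensor positions [m]
    distances : (N,)   array of measured absolute distances [m]
    """
    sensors = np.asarray(sensors, dtype=float)
    distances = np.asarray(distances, dtype=float)
    if x0 is None:
        x0 = sensors.mean(axis=0)            # crude initial guess inside the volume

    def residuals(p):
        return np.linalg.norm(sensors - p, axis=1) - distances

    return least_squares(residuals, x0).x    # estimated target coordinates

# Example: four sensors around a 10 m x 5 m x 2.5 m volume
sensors = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.2], [10.0, 5.0, 2.5], [0.0, 5.0, 2.3]])
target = np.array([4.2, 2.7, 1.1])
d = np.linalg.norm(sensors - target, axis=1)   # simulated distance readings
print(multilaterate(sensors, d))               # ~ [4.2, 2.7, 1.1]
```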
High-speed biplanar videoradiography imaging systems, clinically referred to as dual fluoroscopy (DF), are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). The accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, although with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters; the calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection with independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate accuracy over the full DF image format and a wide range of object rotation. Reconstruction of a rotating planar object yielded an average positional error of 0.44 ± 0.2 mm in the derived 3D coordinates (minimum 0.05 mm, maximum 1.2 mm).
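The point-based registration step that brings the intersected coordinates into the frame of the independently surveyed coordinates can be illustrated with a standard closed-form rigid-body fit (Kabsch/Horn style); this sketch is not taken from the paper and the function names are illustrative.

```python
import numpy as np

def rigid_register(src, dst):
    """Rotation R and translation t minimizing ||R @ src_i + t - dst_i||^2.

    src, dst : (N, 3) arrays of corresponding 3D points.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def positional_errors(reconstructed, surveyed):
    """Per-point distances after registration, i.e. the accuracy measure reported above."""
    R, t = rigid_register(reconstructed, surveyed)
    return np.linalg.norm(reconstructed @ R.T + t - surveyed, axis=1)
```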
Accurate localisation and characterisation of holes is often required in automated assembly and quality control. Compared to time-consuming coordinate measuring machines (CMMs), fringe-projection-based 3D scanners offer an attractive alternative as a fast, non-contact measurement technique that provides a dense 3D point cloud of a large sample in a few seconds. However, as we show in this paper, measurement artifacts occur at hole edges, which can introduce errors of well over 0.25 mm in the estimated hole diameter, even though the estimated hole centre locations are largely unaffected. A compensation technique to suppress these measurement artifacts has been developed, in which the artifact is modelled using data extrapolated from neighbouring pixels. By further incorporating a sub-pixel edge detection technique, we have been able to reduce the root mean square (RMS) diameter errors by up to a factor of 9.3 using the proposed combined method.
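As an illustration of the diameter estimation step (the exact fitting procedure used in the paper is not reproduced here), a hole centre and diameter can be recovered from the detected sub-pixel edge points with a simple algebraic least-squares circle fit:

```python
import numpy as np

def fit_circle(edge_xy):
    """Algebraic (Kasa) circle fit. edge_xy: (N, 2) sub-pixel edge points.

    Returns (cx, cy, diameter) in the same units as the input points.
    """
    x, y = edge_xy[:, 0], edge_xy[:, 1]
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, 2.0 * r
```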
The paper analyses the differences between dome and flat port housings used for underwater photogrammetry. The underwater environment negatively affects image quality and 3D reconstructions, but this influence on photogrammetric measurements has not so far been addressed properly in the literature. In this work, the motivations behind the need for systematic underwater calibrations are provided, and then experimental tests using a specifically designed photogrammetric modular test object in the laboratory and at sea are reported. The experiments are carried out using a Nikon D750 24 Mpx DSLR camera with a 24 mm f2.8 AF/D lens coupled with a NIMAR NI3D750ZM housing, equipped first with a dome and subsequently with a flat port. To quantify the degradation of image quality, MTF measurements are carried out; the outcomes of self-calibrating bundle adjustment calibrations are then shown and commented upon. Optical phenomena such as field curvature, chromatic aberration and astigmatism are analysed, and their implications for the degradation of image quality are factored into the bundle adjustment through a different weighting of the 2D image observations.
Optical sensor systems have a vast number of applications in the modern world. In this work we consider a system composed of two cameras and a laser illumination unit with 49 lasers. In previous work we proposed a general calibration technique for this system and showed that its most complicated subtask is determining the beam directions for the laser illumination calibration, because it cannot be solved with known algorithms. The main stages required to determine the beam directions are: tracking the laser illumination points in an image sequence of the calibration object; calculating the spatial coordinates of the found laser illumination points; and constructing laser beams in space that pass as close as possible to the found points. All the main stages are considered within the scope of this research, but most attention is devoted to the third stage. The origin of each laser beam is known, since it coincides with the known location of the laser on the illumination unit; the problem is therefore to find the ray passing through that origin which deviates least, in the mean-square sense, from the found set of points. Algorithms for performing each stage are suggested. In particular, we developed our own algorithms that take into account the specifics of the available system with laser illumination, namely algorithms for detecting the laser illumination points and for constructing the laser beam from the found set of points. The beam directions in space can be determined for each laser of the illumination unit, and these directions can then be used as a subset of the calibration parameters of the whole system.
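For the third stage, since the ray is constrained to pass through the known laser origin, the direction minimizing the mean-square perpendicular distance to the triangulated points is the dominant principal direction of the points expressed relative to that origin. A minimal sketch (illustrative names, assuming the points are already in the same coordinate frame as the origin):

```python
import numpy as np

def fit_beam_direction(points, origin):
    """Unit direction of the ray through `origin` that deviates least, in the
    mean-square sense, from the triangulated laser-spot `points`.

    points : (N, 3) spatial coordinates of the detected illumination points
    origin : (3,)   known laser position on the illumination unit
    """
    q = np.asarray(points, dtype=float) - np.asarray(origin, dtype=float)
    _, _, vt = np.linalg.svd(q, full_matrices=False)   # principal directions of q
    d = vt[0]                                          # maximizes the projected energy
    if d @ q.mean(axis=0) < 0.0:                       # orient the ray towards the points
        d = -d
    return d
```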
Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Applying stereo cameras or time-of-flight (TOF) cameras is one way to achieve this, but they suffer from drawbacks that make it difficult to map properly; therefore, costly 3D laser scanners are often applied. An inexpensive alternative is to use a 2D laser scanner and rotate its scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans, so the scanner pose of each line scan needs to be determined, as well as the parameters resulting from a calibration, in order to generate a 3D point cloud. Using external sensor systems is a common method to determine these calibration parameters, but this is costly and difficult when the robot needs to be calibrated outside the lab. This work therefore presents a calibration method for a rotating 2D laser scanner. It uses a hardware setup to identify the required calibration parameters; the setup is light, small, and easy to transport, so an out-of-lab calibration is possible. Additionally, a theoretical model was created to test the algorithm and analyse the impact of the scanner accuracy. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo-motor, and a control unit. The calibration system consists of a hemisphere with a circular plate mounted in its interior. The algorithm is provided with a dataset from a single rotation of the laser scanner; to achieve a proper calibration result, the scanner needs to be located in the middle of the hemisphere. By means of geometric formulas, the algorithm determines the individual deviations of the placed laser scanner and, in order to minimize errors, solves these formulas in an iterative process. First, the calibration algorithm was tested with an ideal hemisphere model created in Matlab. Second, the laser scanner was mounted in different configurations, with the scanner position and the rotation axis modified; every deviation was then compared with the algorithm results. Several measurement settings were tested repeatedly with the 3D scanner system and the calibration system. The results show that the length accuracy of the laser scanner is the most critical factor: it influences the required size of the hemisphere and the calibration accuracy.
Imaging based on laser illumination is present in various fields of application such as medicine, security, defense, civil engineering and the automotive sector. In this last domain, research and development to bring autonomous vehicles onto the roads has intensified in recent years. Among the various technologies currently studied, automotive lidars are a fast-growing one due to their ability to accurately detect a wide range of objects at distances up to a few hundred meters in various weather conditions. The first commercialized devices for ADAS were laser scanners. Since then, new architectures have appeared, such as solid-state lidar and flash lidar, which offer higher compactness, greater robustness and reduced cost. Flash lidars are based on time-of-flight measurements, with the particularity that they do not require beam scanners: a single short laser pulse with a large divergence illuminates the whole scene. The depth of the encountered objects can then be recovered from a single measurement of the echoed light, enabling real-time 3D mapping of the environment. This paper presents a cutting-edge laser diode source that can deliver millijoule pulses as short as 12 ns, which makes it highly suitable for integration in flash lidars. It provides a 100-kW peak power, highly divergent beam in a footprint of 4x5 cm2 (including both the laser diode and driver) with a 30% electrical-to-optical efficiency, making it suitable for integration in environments in which compactness and power consumption are a priority. Its emission in the range of 800-1000 nm is considered to be eye safe when the high divergence of the output beam is taken into account. An overview of the architecture of these state-of-the-art pulsed laser diode sources will be given together with some solutions for their integration in 3D mapping systems. Future directions for miniaturization of the laser diode and drastic cost reduction will be discussed.
Motion artefacts in time-of-flight range imaging are treated as a feature to measure. Methods for measuring linear radial velocity with range imaging cameras are developed and tested. With the velocity measured, the range to the position of the target object at the start of the data acquisition period is computed, effectively correcting the motion error. A new phase-based pseudo-quadrature method designed for low-speed measurement measures radial velocity up to ±1.8 m/s with an RMSE of 0.045 m/s and a standard deviation of 0.09-0.33 m/s, and a new high-speed Doppler extraction method measures radial velocity up to ±40 m/s with a standard deviation better than 1 m/s and an RMSE of 3.5 m/s.
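The underlying relation can be illustrated with a back-of-the-envelope sketch (this is the standard AMCW range-from-phase relation, not the paper's pseudo-quadrature or Doppler extraction algorithms): radial velocity follows from the change of the measured correlation phase between two acquisitions separated by dt.

```python
import math

C = 299_792_458.0   # speed of light [m/s]

def range_from_phase(phi, f_mod):
    """AMCW time-of-flight range [m] for correlation phase phi [rad] at modulation frequency f_mod [Hz]."""
    return C * phi / (4.0 * math.pi * f_mod)

def radial_velocity(phi_1, phi_2, dt, f_mod):
    """Radial velocity [m/s] from two phase samples dt seconds apart (positive = receding)."""
    return (range_from_phase(phi_2, f_mod) - range_from_phase(phi_1, f_mod)) / dt
```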
Range imaging plays an essential role in many fields: 3D modeling, robotics, heritage, agriculture, forestry and reverse engineering. One of the most popular range-measuring technologies is the laser scanner, thanks to several advantages: long range, high precision, real-time measurement capabilities, and independence from lighting conditions. However, laser scanners are very costly, and their high cost prevents widespread use. Thanks to the latest developments in technology, low-cost, reliable, fast, and lightweight 1D laser range finders (LRFs) are now available. A low-cost 1D LRF combined with a scanning mechanism, which steers the laser beam over the additional dimensions, enables the capture of a depth map. In this work, we present unsynchronized scanning with a low-cost LRF to decrease the scanning period and reduce the vibrations caused by the stop-and-scan motion of synchronized scanning. Moreover, we developed an algorithm for the alignment of the unsynchronized raw data and propose a range image post-processing framework. The proposed technique yields a range imaging system for a fraction of the price of its counterparts. The results show that the proposed method can fulfill the need for low-cost laser scanning of static environments; the most significant limitation of the method is the scanning period, which is about 2 minutes for 55,000 range points (a 250x220 image), compared with around 4 minutes for synchronized scanning of the same image. Once faster, longer-range, narrow-beam LRFs become available, the methods proposed in this work will produce better results.
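The conversion from aligned raw samples to a point cloud can be sketched as follows (an assumed pan/tilt mount geometry with both axes through the LRF optical centre; names are illustrative):

```python
import numpy as np

def polar_to_xyz(pan, tilt, rng):
    """Convert aligned LRF samples to Cartesian points.

    pan, tilt : angles [rad] of the scanning mechanism (scalars or arrays)
    rng       : measured range [m]
    """
    x = rng * np.cos(tilt) * np.cos(pan)
    y = rng * np.cos(tilt) * np.sin(pan)
    z = rng * np.sin(tilt)
    return np.stack([x, y, z], axis=-1)
```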
In this paper a new evaluation strategy for optical 3D scanners based on structured light projection is introduced. It can be used for the characterization of the expected measurement accuracy. Compared to the procedure proposed in the VDI/VDE guidelines for area-scanning optical 3D measurement systems, it requires less effort and provides more impartiality. The methodology is suitable for the evaluation of sets of calibration parameters, which largely determine the quality of the measurement result. It was applied to several calibrations of a mobile stereo-camera-based optical 3D scanner. The calibrations followed different strategies regarding calibration bodies and the arrangement of the observed scene. The results obtained with the different calibration strategies are discussed and suggestions concerning future work in this area are given.
The availability of an accurate dataset is the key requirement for the successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of these datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with an accurate ground truth. The generation of accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide only a sparse ground truth. In this paper a new photogrammetric method for the generation of accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for the future development of these algorithms, yet so far very few studies have addressed this. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation, and the influence of image point location uncertainty. It is shown that the latter is the most critical, while the others all play important roles. Experiments with a well-known semi-dense visual SLAM approach, used in a monocular visual odometry mode, are also presented. The experiments show that not including sensor bias and scale factor uncertainty is very detrimental to the accuracy of the simulation results.
The paper investigates the performance of two portable mobile mapping systems (MMSs), the handheld GeoSLAM ZEB-REVO and the Leica Pegasus:Backpack, in two typical user-case scenarios: an indoor two-floor building and an outdoor open city square. The indoor experiment is characterized by smooth and homogeneous surfaces, and reference measurements are acquired with a time-of-flight (ToF) phase-shift laser scanner. The noise of the two MMSs is estimated through the fitting of geometric primitives to simple constructive elements, such as horizontal and vertical planes and cylindrical columns. Length measurement errors over different distances measured on the acquired point clouds are also reported. The outdoor tests are compared against an MMS mounted on a car, and a robust statistical analysis, entailing the estimation of both standard Gaussian and non-parametric estimators, is presented to assess the accuracy potential of both portable systems.
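The plane-based noise estimate can be illustrated with a small total-least-squares plane fit, where the RMS of the orthogonal residuals gives a per-element noise figure (a sketch, not the exact estimator used in the study):

```python
import numpy as np

def plane_fit_rms(points):
    """RMS orthogonal residual [m] of a best-fit plane through a (N, 3) point patch."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]                                 # direction of smallest extent
    residuals = centred @ normal                    # signed orthogonal distances
    return float(np.sqrt(np.mean(residuals**2)))
```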
This work aims at developing a generic and anisotropic point error model capable of computing the magnitude and direction of a priori random errors, described in the form of error ellipsoids for each individual point of the cloud. The direct TLS observations are the range (ρ) and the vertical (α) and horizontal (θ) angles, each of which is associated with an a priori precision value. A practical methodology was designed and performed in real-world test environments to determine these precision values. The methodology has two experimental parts. The first part is a static and repetitive measurement configuration for the determination of the a priori precisions of the vertical (σα) and horizontal (σθ) angles. The second part is the measurement of a test stand which contains four plates in white, light grey, dark grey and black, for the determination of the a priori precision of the range observations (σρ). The test stand measurement is performed in a recursive manner so that sensor-to-object distance, incidence angle and surface reflectivity are parameterized. The experiment was conducted with three TLSs, namely a Faro Focus 3D X330, a Riegl VZ400 and a Z+F 5010x, in the same location and atmospheric conditions. This procedure was followed by the computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by a principal components transformation. Validation of the proposed error model was performed in real-world scenarios, which revealed the feasibility of the model.
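A minimal sketch of the per-point propagation (assuming the vertical angle α is measured from the horizontal plane; the actual instrument conventions may differ):

```python
import numpy as np

def point_error_ellipsoid(rho, alpha, theta, s_rho, s_alpha, s_theta):
    """Semi-axis lengths and directions of the a priori error ellipsoid of one TLS point.

    rho, alpha, theta       : range [m], vertical and horizontal angles [rad]
    s_rho, s_alpha, s_theta : a priori precisions of the three observations
    """
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    # Jacobian of (x, y, z) = (rho ca ct, rho ca st, rho sa) w.r.t. (rho, alpha, theta)
    J = np.array([[ca * ct, -rho * sa * ct, -rho * ca * st],
                  [ca * st, -rho * sa * st,  rho * ca * ct],
                  [sa,       rho * ca,       0.0]])
    cov = J @ np.diag([s_rho**2, s_alpha**2, s_theta**2]) @ J.T   # variance-covariance propagation
    vals, vecs = np.linalg.eigh(cov)                              # principal components
    return np.sqrt(vals[::-1]), vecs[:, ::-1]                     # largest semi-axis first
```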
One crucial ingredient for augmented reality applications is having or obtaining information about the environment. In this paper, we examine the case of an augmented video application for forward-facing vehicle-mounted cameras. In particular, we examine the method of obtaining geometry information about the environment via stereo computation / structure from motion. A detailed analysis of the geometry of the problem is provided, in particular of the singularity in front of the vehicle. For typical scenes, we compare monocular configurations with stereo configurations, subject to the packaging constraints of forward-facing cameras in consumer vehicles.
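The first-order depth-error relation behind such an analysis is the standard stereo result (stated here as an illustration, not as the paper's derivation): for baseline b, focal length f in pixels and disparity matching noise, the depth error grows quadratically with distance, which is what makes the region directly ahead of the vehicle critical.

```python
def depth_sigma(Z, baseline_m, focal_px, disparity_sigma_px):
    """Approximate 1-sigma depth error [m] at depth Z [m] for a stereo pair."""
    return Z**2 * disparity_sigma_px / (focal_px * baseline_m)

# e.g. 0.3 m baseline, 1200 px focal length, 0.25 px matching noise, object 50 m ahead:
print(depth_sigma(50.0, 0.3, 1200.0, 0.25))   # ~1.7 m
```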
Most applications on mobile devices require self-localization of the device. Since GPS cannot be used in indoor environments, the positions of mobile devices are estimated autonomously using an IMU. Because the IMU has low accuracy, self-localization in indoor environments remains challenging. Image-based self-localization methods have been developed and their accuracy is increasing. This paper develops a self-localization method without GPS for indoor environments by simultaneously integrating the sensors on a mobile device, such as the IMU and camera. The proposed method consists of observation, forecasting and filtering steps. The position and velocity of the mobile device are defined as the state vector. Observations correspond to the observation data from the IMU and camera (observation vector), forecasting to the mobile device motion model (system model), and filtering to tracking by inertial surveying together with a coplanarity condition and inverse depth model (observation model). Positions of the tracked device are predicted by the system model (forecasting step), which is assumed to be a linear motion model. The predicted positions are then refined using the new observation data based on their likelihood (filtering step); this optimization corresponds to estimation of the maximum a posteriori probability. A particle filter is used to carry out the forecasting and filtering steps. The proposed method is applied to data acquired by mobile devices in an indoor environment, and the experiments confirm its high performance.
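A minimal skeleton of the forecasting/filtering loop (placeholder motion-noise and likelihood models; the paper's IMU-driven system model and coplanarity/inverse-depth observation model are not reproduced here):

```python
import numpy as np

def particle_filter_step(particles, weights, dt, likelihood, motion_noise=0.05):
    """One forecast + filter cycle. State per particle: [x, y, z, vx, vy, vz]."""
    # Forecasting: propagate each particle with a linear (constant-velocity) motion model.
    particles[:, :3] += particles[:, 3:] * dt
    particles += np.random.normal(0.0, motion_noise, particles.shape)
    # Filtering: re-weight by the likelihood of the latest IMU/camera observation.
    weights = weights * likelihood(particles)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = np.random.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```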
Overflows in urban drainage structures, or sewers, must be prevented in time to avoid their undesirable consequences. An effective monitoring system able to measure volumetric flow in sewers is needed, but existing state-of-the-art technologies are not robust against harsh sewer conditions and therefore incur high maintenance expenses. With the goal of fully automatic, robust and non-contact volumetric flow measurement in sewers, we came up with an original and innovative idea of a vision-based system for volumetric flow monitoring. In contrast to existing video-based monitoring systems, we introduce a second camera into the setup and exploit stereo vision with the aim of automatic calibration to real-world coordinates. The depth of the flow is estimated as the difference between the distances from the camera to the water surface and from the camera to the canal's bottom. The camera-to-water distance is recovered automatically using large-scale stereo matching, while the distance to the canal's bottom is measured once upon installation. Surface velocity is calculated using cross-correlation template matching: individual natural particles in the flow are detected and tracked throughout the sequence of images recorded over a fixed time interval. With the water level and surface velocity estimated, and knowing the geometry of the canal, we calculate the discharge. The preliminary evaluation has shown that the average error of the depth computation was 3 cm, while the average error of the surface velocity was 5 cm/s. Due to the experimental design, these errors are rough estimates: at each acquisition session the reference depth value was measured only once, although the variation in volumetric flow and the gradual transitions between the automatically detected values indicated that the actual depth level varied. We will address this issue in the next experimental session.
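The final discharge computation can be sketched as follows (a rectangular cross-section and a surface-to-mean velocity index are assumptions made purely for illustration; the real system uses the surveyed canal geometry):

```python
def discharge(depth_m, surface_velocity_mps, canal_width_m, velocity_index=0.85):
    """Volumetric flow [m^3/s] from water depth, surface velocity and canal width.

    velocity_index converts the tracked surface velocity to a mean flow velocity
    (0.85 is a common rule-of-thumb value, used here only as an example).
    """
    cross_section_area = depth_m * canal_width_m      # assumed rectangular canal
    return cross_section_area * surface_velocity_mps * velocity_index
```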
The naked eye is not able to perceive very slow movements such as those occurring in certain structures under external forces. This might be the case for metallic or concrete bridges, tower cranes or steel beams. However, it is sometimes of interest to view such movements, since they can provide useful information about the mechanical state of those structures. In this work, we analyze the utility of video magnification for detecting imperceptible movements in several types of structures. First, laboratory experiments were conducted to validate the method. Then, two different tests were carried out on real structures: one on a water slide and another on a tower crane. The results obtained allow us to conclude that image cross-correlation combined with video magnification is indeed a promising low-cost technique for structural health monitoring.
Terrestrial lidar is commonly used for detailed documentation in forest inventory investigations. Recent improvements in point cloud processing techniques have enabled efficient and precise computation of individual tree shape parameters, such as breast-height diameter, height, and volume. However, tree species are still specified manually by skilled workers. Previous work on automatic tree species classification has mainly focused on aerial or satellite images, and few classification techniques using ground-based sensor data have been reported. Several candidate sensors can be considered for classification, such as RGB or multi/hyperspectral cameras. Among these candidates, we use terrestrial lidar because it can obtain a high-resolution point cloud in the dark forest. We selected bark texture as the classification criterion, since it clearly represents the unique characteristics of each tree and does not change its appearance under seasonal variation or ageing. In this paper, we propose a new method for the automatic classification of individual tree species from terrestrial lidar using a Convolutional Neural Network (CNN). The key component is the creation of a depth image that describes well the characteristics of each species from a point cloud. We focus on Japanese cedar and cypress, which cover a large part of the domestic forest. Our experimental results demonstrate the effectiveness of the proposed method.
The structure from motion approach has become a powerful means of 3D scene reconstruction using only a sequence of images from a moving camera as input data. Such a technique has significant potential for unmanned aerial or unmanned ground vehicles navigating in unknown environments. Different techniques are used for estimating the 3D structure of a scene, such as optical flow, feature detection and matching in a set of images, and feature tracking through an image sequence. Robustness and accuracy of the 3D coordinate measurements are important characteristics of structure from motion algorithms, which have to ensure the reliability of the navigation. A technique for 3D scene reconstruction from unmanned aerial vehicle imagery is developed, based on preliminary feature detection and matching in a set of stereo pairs with an appropriate baseline, which allows a reasonable accuracy of 3D measurements to be reached. The results of an accuracy evaluation for two variants of surface 3D reconstruction from an image sequence are presented and discussed: for the case of uncalibrated images and for images with known interior orientation. Ways of improving the accuracy of the developed 3D reconstruction technique are discussed.
We have developed sensor units and additionally installed them on an existing measuring vehicle in order to record various road parameters. These parameters mainly include the inclination of the road both parallel and perpendicular to the direction of travel, the width of the road, the detection and location of road markings, and the detection of weather-related road damage. These values can be used to calculate the maximum speed, the shock absorber settings or the optimization of the driving comfort of vehicles traversing these roads. The roll-angle module, in conjunction with the additional values given by the measuring vehicle itself, provides the transverse inclination of the road. For this purpose, the distances obtained from two infrared (IR) modules located on the outside of the vehicle are recorded in real time, and the resulting angle of the vehicle with respect to the road is determined with a suitable function. This is necessary since the changes in the measured values of the two modules, due to the rotation-related movement and the radiation characteristics of the IR modules, do not have the same magnitude; without mathematical adjustment, the determined inclination would be greater than the actual angle of the vehicle to the road. This value is then combined with the angle of the vehicle to the center of the earth, which is output directly from the vehicle's accelerometers and GPS data, and the angle of the road is obtained. The angle in the direction of travel is calculated purely from the GPS data. A mesh of the road topography can be created by superposition of the angular values at all measurement coordinates. The width module, which consists of a camera and two line lasers, provides the width of the roadway in a post-processing step. Furthermore, road markings are detected, provided with the corresponding time stamp of the video, and grouped on the basis of various criteria. The lasers used here serve as a width reference for calibration. A schematic diagram of the measuring vehicle is shown in Figure 1. The post-processing is done by means of a Python code which stores the individual frames of the recorded video one by one. By means of a routine, colors are detected, a recalibration of the width over the created "green" image is executed, and then the "white" image is examined for objects with specific parameters and divided into groups, such as "pedestrian crossing". The bluish-colored cameras shown in Fig. 1 are used for the stereoscopic recording of the road and the subsequent processing and recognition of road signs, traffic lights and roadside borders. The output can be saved as a text document or as a collection in the graphical user interface. Furthermore, a laser module is used to generate a structured light pattern in order to detect weather-induced influences on the road, such as potholes. For this purpose a routine was developed and adapted which can determine the dimensions of the road defects based on the position of the imaged points and the known geometric parameters.
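The basic roll-angle geometry reduces to a small trigonometric relation; the sketch below uses an assumed sensor spacing, ignores the radiation-pattern correction described above, and the sign convention and names are illustrative.

```python
import math

def vehicle_to_road_angle(dist_left_m, dist_right_m, sensor_spacing_m):
    """Transverse angle of the vehicle body relative to the road surface [rad],
    from the two outboard IR distance measurements."""
    return math.atan2(dist_right_m - dist_left_m, sensor_spacing_m)

def road_transverse_inclination(vehicle_roll_rad, dist_left_m, dist_right_m, spacing_m):
    """Road inclination = vehicle roll (from accelerometers/GPS) minus body-to-road angle.
    The sign convention here is an assumption made for illustration."""
    return vehicle_roll_rad - vehicle_to_road_angle(dist_left_m, dist_right_m, spacing_m)
```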
Depth images have recently attracted much attention in computer vision and in high-quality 3D content production for 3DTV and 3D movies. In this paper, we present a new semi-self-taught learning framework for enhancing the resolution of depth maps without making use of ancillary color image data at the target resolution or of multiple aligned depth maps. Our framework consists of cascaded random forests proceeding from coarse to fine results. We learn the surface information and structure transformations both from a small set of high-quality depth exemplars and from the input depth map itself across different scales. Considering that edges play an important role in depth map quality, we optimize an effective regularized objective that operates on the output image space and the input edge space within the random forests. Experiments show the effectiveness and superiority of our method against other techniques, with or without applying aligned RGB information.
The article considers a method for modeling the raw data received by a lidar in real time, as well as its implementation. To determine range, we consider the ray tracing method and an alternative method based on the Z-buffer, which is often applied in 3D modeling. A mathematical apparatus for estimating the power of the reflected radiation is offered. The results of the work include an estimation of the performance of the proposed method implemented on the CPU and on the GPU with the help of OpenGL technology.
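As an illustration of the Z-buffer alternative (the standard OpenGL depth-buffer inversion, stated here as an assumption about the simulator's projection setup), a non-linear depth-buffer sample must be mapped back to a metric range before the reflected power is estimated:

```python
def zbuffer_to_range(z_buffer, near, far):
    """Convert an OpenGL depth-buffer value in [0, 1] to a metric range [m],
    assuming a standard perspective projection with clip planes `near` and `far`."""
    z_ndc = 2.0 * z_buffer - 1.0                       # back to normalized device coordinates
    return 2.0 * near * far / (far + near - z_ndc * (far - near))
```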
A triangulated irregular network (TIN) is a viable structure for the vector representation of raster image data. To visualize an image characterized by a triangulation, it is necessary to fit a continuous surface of pixel brightness values to the triangulation (i.e., to interpolate the data stored in its vertices). From this perspective, this paper presents a multi-frame image fusion and enhancement process that employs TIN structures rather than arrays of pixels as the basic working units. The feasibility of this application relates to the fact that a TIN model offers a good-quality digital image representation with a reduced density of pixel values compared to a corresponding raster representation [4]. In the proposed process, several low-resolution, unregistered and compressed images (such as those extracted from video footage) of a common scene are: (a) registered to a sub-pixel level, (b) transformed to a TIN structure, (c) grouped or mapped globally within a single framework to create a denser TIN composite, and (d) used in reverse, through the TIN representation, to reconstruct a higher-resolution raster image with more detail than any of the original input frames. Tests and subsequent results are shown to demonstrate the validity and accuracy of the proposed multi-frame image enhancement process. A comparison of this multi-frame image enhancement process using various interpolation methods and practices is included.
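The interpolation step, reconstructing a brightness value at an arbitrary pixel from the triangle that encloses it, can be sketched with plain barycentric (linear) interpolation; the other interpolation methods compared in the paper are not shown here.

```python
import numpy as np

def barycentric_interpolate(p, tri_xy, tri_vals):
    """Linear interpolation of vertex brightness values inside one TIN triangle.

    p        : (2,)   query pixel position
    tri_xy   : (3, 2) triangle vertex coordinates
    tri_vals : (3,)   brightness values stored at the vertices
    """
    a, b, c = np.asarray(tri_xy, dtype=float)
    T = np.column_stack([b - a, c - a])                     # 2x2 edge matrix
    l1, l2 = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    l0 = 1.0 - l1 - l2                                      # barycentric weights
    return l0 * tri_vals[0] + l1 * tri_vals[1] + l2 * tri_vals[2]
```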