The time-of-flight (TOF) principle is a well-known method for acquiring a scene in all three dimensions. The advantages of knowing the third dimension are obvious for many kinds of applications. The distance information within the scene renders automatic systems more robust and much less complex, or even enables completely new solutions. A solid-state image sensor containing 124 x 160 pixels and the corresponding 3D camera, the so-called SwissRanger camera, have already been presented in detail in [1]. It has been shown that the SwissRanger camera achieves depth resolutions in the sub-centimeter range, corresponding to a measured time resolution of a few tens of picoseconds with respect to the speed of light (c ≈ 3·10^8 m/s).
However, one main drawback of these so-called lock-in TOF pixels is their limited capacity to handle background illumination. Keeping in mind that in outdoor applications the optical power on the sensor originating from background illumination (e.g., sunlight) may be up to a few hundred times higher than the power of the modulated illumination, the sensor requires new pixel structures eliminating, or at least reducing, the currently experienced restrictions in terms of background illumination.
Based on a 0.6 µm CMOS/CCD technology, four new pixel architectures suppressing background illumination and/or improving the ratio of modulated signal to background signal at the pixel-output level were developed and are presented in this paper. The theoretical principle of operation and the expected performance are described in detail, together with a sketch of the implementation of the different pixel designs at the silicon level. Furthermore, test results obtained in a laboratory environment are presented. The sensor structures are characterized in a high background-light environment, at illumination levels up to sunlight conditions. The distance linearity over a range of a few meters under these lighting conditions is measured. At the same time, the distance resolution is plotted as a function of the target distance, the integration time and the background illumination power. This in-depth evaluation leads to a comparison of the various background suppression approaches; it also includes a comparison with the traditional pixel structure in order to highlight the benefits of the new approaches.
The paper concludes with parameter estimates that outline how a sensor with high lateral resolution could be built around the most promising pixel architecture.
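For readers less familiar with lock-in pixels, the sketch below shows the standard four-tap demodulation that turns four samples of the correlation function into a distance; it is a generic illustration with assumed values (function names and numbers are ours), not the exact SwissRanger circuit. Note the offset term: it is exactly where background illumination accumulates, which is the problem the new pixel architectures address.

    import numpy as np

    def tof_distance(a0, a1, a2, a3, f_mod):
        """Distance from four samples of the correlation function taken
        90 degrees apart (standard four-tap lock-in demodulation)."""
        c = 3e8                                     # speed of light [m/s]
        phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2 * np.pi)
        amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)
        offset = 0.25 * (a0 + a1 + a2 + a3)         # includes background light
        distance = c * phase / (4 * np.pi * f_mod)  # unambiguous up to c/(2*f_mod)
        return distance, amplitude, offset

    # Example: 20 MHz modulation gives a 7.5 m unambiguous range.
    d, a, b = tof_distance(220.0, 180.0, 140.0, 180.0, f_mod=20e6)
    print(f"distance = {d:.3f} m, amplitude = {a:.1f}, offset = {b:.1f}")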
We propose a new surface-orientation (normal-vector) imaging method that is independent of non-Lambertian reflectance components.
It consists of six light sources at the vertices of a hexagon and a three-phase correlation image sensor (3PCIS) for demodulating the amplitude and phase of the reflected light under two illumination modes. To separate the Lambertian and specular reflectance components, the light sources first illuminate the object in six phases differing by 2π/6 between neighbors (the dipole modulation mode) and then in three phases differing by 4π/6 from each other (the quadrupole modulation mode). In the dipole modulation mode, the amplitude and phase depend both on the Lambertian reflectance (surface orientation) and on the non-Lambertian reflectance (specular strength). In the quadrupole modulation mode, the former component is eliminated and only the latter component remains. Subtracting it from the dipole-mode result, we obtain the surface orientation map based on the photometric stereo principle. We implemented the method using a CMOS 64x64-pixel 3PCIS and successfully reconstructed the normal vector maps for various non-Lambertian objects.
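As a rough illustration of the separation principle (our own simplification, not the authors' sensor model), the sketch below subtracts a per-source specular estimate derived from the quadrupole mode from the dipole-mode response and recovers normals by standard least-squares photometric stereo; all array shapes and names are assumptions.

    import numpy as np

    def normals_from_modulated_illumination(I_dipole, I_spec, light_dirs):
        """I_dipole: (6, N) intensities under the six dipole-mode phases.
        I_spec:   (6, N) specular estimate per source, derived from the
                  quadrupole-mode measurement (assumed preprocessing).
        light_dirs: (6, 3) unit vectors toward the six sources.
        Returns (N, 3) unit normals from the Lambertian residual."""
        I_lambert = I_dipole - I_spec            # remove specular estimate
        # Least-squares photometric stereo: I = L @ (rho * n)
        G, _, _, _ = np.linalg.lstsq(light_dirs, I_lambert, rcond=None)
        norm = np.maximum(np.linalg.norm(G, axis=0, keepdims=True), 1e-12)
        return (G / norm).T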
The stability of a rock slope depends on the rock mass geo-structure and its discontinuities. Discontinuities show up at the rock surface as smooth and often planar surfaces. From their location and orientation the main families of fractures can be inferred and a stability analysis performed. To gather information on their distribution, surveys are typically carried out with geological compass and tape along scan lines, with obvious limitations and drawbacks. Here a highly automated image-based approach is presented to compute the required rock parameters: an accurate high-resolution Digital Surface Model of the area of interest is generated from an image sequence and segmented into planar surfaces within a multi-resolution RANSAC search, which returns the location and orientation of each plane. To avoid measuring ground control points, the camera may be interfaced to a GPS receiver. Multiple overlapping and convergent images are captured to achieve good accuracy over the whole network, minimize occlusions and avoid poor object-camera relative geometry. The method is applied to the rock face of Corma di Machaby (Italy): the results are compared to those of a traditional survey with compass and to those of a laser scanner survey.
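The plane extraction at the heart of such a search can be pictured with a minimal single-plane RANSAC fit, sketched below; the paper's multi-resolution strategy and DSM generation are not reproduced, and the iteration count and tolerance are assumed values.

    import numpy as np

    def ransac_plane(points, n_iter=500, tol=0.02, rng=None):
        """Fit one plane to an (N, 3) point cloud.
        Returns (unit normal, point on plane, inlier mask)."""
        rng = rng or np.random.default_rng(0)
        best_mask, best_n, best_p = None, None, None
        for _ in range(n_iter):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-12:
                continue                      # degenerate (collinear) sample
            n = n / np.linalg.norm(n)
            dist = np.abs((points - p0) @ n)  # point-to-plane distances
            mask = dist < tol
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best_n, best_p = mask, n, p0
        return best_n, best_p, best_mask

The dip and dip direction used in the geological stability analysis follow directly from each recovered normal.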
In a 3D acquisition project, the range maps collected around the object to be modeled need to be integrated. With portable range cameras these range maps are taken from unknown positions and their coordinate systems are local to the sensor. The problem of unifying all the measurements in a single reference system is solved by taking contiguous range maps with a suitable overlap, taking one map as reference, and applying a roto-translation to the adjacent ones using an "Iterative Closest Point" (ICP) method. Depending on the 3D features on the acquired surface and on the amount of overlap, the convergence of the ICP algorithm can be more or less satisfactory. In any case, it always has a random component due to measurement uncertainty. Therefore, although each individual scan has very good accuracy, error propagation may produce deviations of the aligned set with respect to the real surface points. In this paper a systematic study of the different alignment modalities and of the resulting overall metric distortions on the final model is presented. In order to test these techniques, a case study of industrial interest was chosen: the 3D modeling of a boat's hull mold. The experiments involved a triangulation-based laser scanner integrated with a digital photogrammetry system. In order to check the different alignment procedures, a Laser Radar capable of scanning the entire object surface in a single, highly accurate scan was used to create a "gold-standard" data set. All the experiments were compared against this reference, and several interesting methodological conclusions were obtained from the comparison.
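For reference, one iteration of the point-to-point ICP loop discussed above can be written as follows (an illustrative brute-force version; production implementations use spatial search structures):

    import numpy as np

    def icp_step(src, dst):
        """One ICP iteration: match each source point to its nearest
        destination point, then solve the best rigid roto-translation
        (Horn/Arun SVD method). src: (N, 3), dst: (M, 3)."""
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]            # closest points
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                          # rotation with det = +1
        t = mu_d - R @ mu_s
        return src @ R.T + t, R, t                  # aligned source, pose

Iterating until the residual stops decreasing yields the roto-translation; the residual at convergence is the random component whose propagation through the scan network the paper studies.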
The strengthening of reinforced concrete beams through the use of epoxy-bonded carbon composites has been widely researched in the United States since 1991. Despite the widespread attention of researchers, however, there are no reliable methods of predicting the failure of the repaired and strengthened beams by delamination of the carbon composite from the parent concrete. To better understand delamination, several investigators have presented analytical work to predict the distribution of stresses along the interface between the carbon composite and the concrete. Several closed-form solutions can be found in the literature to predict the levels of shear stress present between the bonded composite plate and the parent concrete beam. However, there has been very little experimental verification of these analytical predictions. The few experiments that have been conducted have used numerous electrical resistance strain gages, adhered to the surface of the carbon composite at various intervals along the length of the test section, in order to deduce the interfacial shear stress using first differences. This method, though very crude, demonstrated that there are substantial differences between the interfacial shear stress distributions in actual repaired beams and the analytical predictions.
This paper presents a new test program in which large-scale (2.4 m long), carbon-fiber-strengthened reinforced concrete beams are load-tested to failure, while employing digital image correlation (DIC) to record the three-dimensional displacements of the surface of the carbon fiber plate. Three-dimensional digital image correlation is a two-camera, stereoscopic technique for measuring true, 3D full-field surface displacements. The technique uses a subset-based correlation method to determine the correspondence between images from the two cameras and between images at different load levels. From each load’s full-field surface displacements, a surface strain map can be generated. The resulting strain maps allow the investigation of the load transfer from the carbon fiber to the concrete beam with a level of detail not achievable with standard strain gages.
The focus of this paper is the application of the three-dimensional digital image correlation technique to the investigation of FRP-reinforced concrete beams. The paper presents: 1) the results of the experimental testing; 2) an overview of the three-dimensional digital image correlation technique; 3) the adaptations required to utilize the 3D correlation method on the large, 0.1 m x 2.0 m, imaged area of the beam; and 4) the effect of the discontinuous failure mechanisms, inherent in reinforced concrete structures, on the analysis of the data.
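For context, the first-difference reduction described above amounts to the following computation; with DIC, a dense strain map replaces the discrete gage readings. The equilibrium relation tau = E*t*d(eps)/dx is standard, but the numbers below are purely illustrative.

    import numpy as np

    def interfacial_shear(eps, dx, E_frp, t_frp):
        """Average interfacial shear stress between stations, from
        equilibrium of the bonded plate: tau = E * t * d(eps)/dx.
        eps: axial strains along the plate; dx: station spacing [m];
        E_frp: plate modulus [Pa]; t_frp: plate thickness [m]."""
        return E_frp * t_frp * np.diff(eps) / dx

    # Illustrative numbers: 1.2 mm plate, 165 GPa, stations 50 mm apart.
    eps = np.array([800e-6, 1100e-6, 1500e-6, 2100e-6])
    tau = interfacial_shear(eps, dx=0.05, E_frp=165e9, t_frp=1.2e-3)
    print(tau / 1e6, "MPa")  # shear between successive stations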
We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands), are among the most useful techniques in this field. They are generally meant to provide information on the painting materials, on the techniques employed, and on the object's state of conservation. However, only when the various images are precisely registered to each other and to the 3D model can ambiguity be excluded and safe conclusions drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este" by Pisanello, both painted in the 15th century.
Specular surfaces are used in a wide variety of industrial and consumer products like varnished or chrome plated parts of car bodies, dies, molds or optical components. Shape deviations of these products usually reduce their quality regarding visual appearance and/or technical performance. One reliable method to inspect such surfaces is deflectometry. It can be employed to obtain highly accurate values representing the local curvature of the surfaces. In a deflectometric measuring system, a series of illumination patterns is reflected at the specular surface and is observed by a camera. The distortions of the patterns in the acquired images contain information about the shape of the surface. This information is suited for the detection and measurement of surface defects like bumps, dents and waviness with depths in the range of a few microns. However, without additional information about the distances between the camera and each observed surface point, a shape reconstruction is only possible in some special cases. Therefore, the reconstruction approach described in this paper uses data observed from at least two different camera positions. The data obtained is used separately to estimate the local surface curvature for each camera position. From the curvature values, the epipolar geometry for the different camera positions is recovered. Matching the curvature values along the epipolar lines yields an estimate of the 3D position of the corresponding surface points. With this additional information, the deflectometric gradient data can be integrated to represent the surface topography.
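The final integration of the gradient data can be illustrated with the classical Frankot-Chellappa least-squares integrator, sketched below; this is one standard choice of integrator, not necessarily the one used by the authors.

    import numpy as np

    def integrate_gradients(gx, gy):
        """Least-squares integration of a gradient field (gx, gy) into a
        height map z, via the Frankot-Chellappa Fourier-domain solution."""
        h, w = gx.shape
        wx = np.fft.fftfreq(w) * 2 * np.pi          # spatial frequencies
        wy = np.fft.fftfreq(h) * 2 * np.pi
        u, v = np.meshgrid(wx, wy)
        denom = u**2 + v**2
        denom[0, 0] = 1.0                           # avoid division by zero
        Z = (-1j * u * np.fft.fft2(gx) - 1j * v * np.fft.fft2(gy)) / denom
        Z[0, 0] = 0.0                               # mean height is arbitrary
        return np.real(np.fft.ifft2(Z))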
In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies still pose a number of challenges to new users, i.e., "what are they, how good are they and how do they compare?". The need to understand, test and integrate these range cameras with other technologies, e.g. photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. An understanding of the basic theory and best practices associated with these cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or more of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.
For computer graphics applications, capturing the appearance parameters of objects (reflectance, transmittance and small-scale surface structures) is as important as capturing the overall shape. We briefly review recent approaches developed by the computer graphics community to solve this problem. Excellent results have been obtained by various researchers measuring spatially varying reflectance functions for some classes of objects. We consider some challenges posed by two of the remaining problematic classes of objects. First we describe our experience scanning and modeling the throne of Tutankhamen. The major difficulties in this case were that the base shape was a highly detailed non-convex geometry with complex topology, and that the shape was covered by optically uncooperative gold and silver. Then we discuss some observations from our ongoing project to scan and model historic buildings on the Yale campus. The major difficulties in this second case are the quantity of data and the lack of control over acquisition conditions.
Spherical images are linear images that are exact in central projection. They are explicitly determined by the projection centre. The technical approach consists of collecting the scenery through a single perspective and combining the images like panoramic mosaics.
A general application of spherical imaging is hemispheric visualisation of space. In hemispheric visualisation, we distinguish between horizontal, half-hemispheric, and full-hemispheric imaging. The photogrammetric applications of spherical imaging aim at the acquisition of 3D environmental or terrain models; in that case the base-to-distance ratio is typically large.
We assume that the primary advantage of spherical imaging will nevertheless lie in stereoscopic applications. We aim at full-scale stereoscopy, with projection of spherical images at a scale of 1:1. In the case of full-scale stereoscopy, the stereoscopic plasticity has a value of 1 and the base is typically short. Natural viewing would correspond to the base length of the human eyes, i.e. 65 mm.
We present in this paper the Stereodrome, a physical realisation of full-scale stereo viewing. It consists of a photogrammetric workstation, a high-resolution stereo projector, the necessary stereo eyewear, and a back-projection screen. Our original motivation for building the Stereodrome was that it is the only means of really seeing the behaviour of 3D point clouds in detail. In the paper we also discuss how full-scale stereo display has been used for validating the quality of existing 3D geoinformation.
This paper presents an algorithm for efficient image synthesis. The main goal is to generate realistic virtual views of a dynamic scene from a new camera viewpoint. The algorithm works on-line on two or more incoming video streams from calibrated cameras. A reasonably large distance between the cameras is allowed.
The main focus is on video-conferencing applications, where the background is assumed to be static. By performing a foreground segmentation, the foreground and the background can be handled separately. For the background a slower, more accurate algorithm can be used. Reaching a high throughput is most crucial for the foreground, as this is the dynamic part of the scene.
We use a combined approach of CPU and GPU processing. Performing depth calculations on the GPU is very efficient, thanks to the capabilities of the latest graphics boards. However, the result tends to be rather noisy, so we apply a regularisation algorithm on the CPU to improve it. The final interpolation is again provided by rendering on the graphics board.
The CPU and GPU can run completely in parallel. This is realised by an implementation using multiple threads; as a result, different algorithms can be applied to two frames simultaneously and the total throughput is increased.
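The two-stage CPU/GPU overlap described above is a plain producer-consumer pipeline. The schematic below shows the structure with Python threads and queues; the depth and regularisation functions are trivial stand-ins for the GPU and CPU kernels, not the paper's algorithms.

    import threading, queue

    def gpu_depth(frame):          # stand-in for the GPU depth kernel
        return frame * 0.5

    def cpu_regularise(depth):     # stand-in for the CPU regularisation
        return depth + 0.1

    frames = queue.Queue(maxsize=4)
    raw = queue.Queue(maxsize=4)
    out = queue.Queue()

    def gpu_stage():               # works on frame n ...
        while True:
            raw.put(gpu_depth(frames.get()))

    def cpu_stage():               # ... while this refines frame n-1
        while True:
            out.put(cpu_regularise(raw.get()))

    threading.Thread(target=gpu_stage, daemon=True).start()
    threading.Thread(target=cpu_stage, daemon=True).start()

    for i in range(10):            # feed ten dummy frames
        frames.put(float(i))
    print([out.get() for _ in range(10)])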
This paper presents a summary of the 3D modeling work that was accomplished in preparing multimedia products for cultural heritage interpretation and entertainment. The three cases presented are the Byzantine Crypt of Santa Cristina, Apulia; Temple C of Selinunte, Sicily; and a bronze sculpture from the 6th century BC found in Ugento, Apulia. The core of the approach is based upon high-resolution photo-realistic texture mapping onto 3D models generated from range images. It is shown that three-dimensional modeling from range imaging is an effective way to present the spatial information of environments and artifacts. Spatial sampling and range measurement uncertainty considerations are addressed by giving the results of a number of tests on different range cameras. The integration of both photogrammetric and CAD modeling complements this approach. Results in the form of a CD-ROM, a DVD, a virtual 3D theatre, holograms, video animations and web pages have been prepared for these projects.
Gabriele Guidi, Bernard Frischer, Monica De Simone, Andrea Cioci, Alessandro Spinetti, Luca Carosso, Laura Loredana Micoli, Michele Russo, Tommaso Grasso
Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For each of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled in this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16 x 17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for resolving this "contradiction" and describes how a huge 3D model was acquired and generated using a special metrology Laser Radar. The procedures for re-orienting the huge point clouds obtained in each acquisition phase into a single reference system, based on the measurement of fixed redundant references, are described. For mesh editing, the data set was split into smaller sub-areas of 2 x 2 meters each. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project, since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.
In this paper we propose "browsing by 3D scene" and "rendering by photographs," based on a viewpoint-based approach. The idea is that by linking 3D models and photographs via spatial information, the "viewpoint" in particular, we can use each as a reference for the other when browsing photographs or walking through the 3D scene. We use the camera parameters to express the viewpoint. Each photograph carries its extrinsic camera parameters as metadata, defined in the same coordinate system as the 3D model, and hence we can compare viewpoints and judge their similarity. Unlike content-based image retrieval, the viewpoint-based search is robust to differences in features such as color and shape among images. The browsing-by-3D-scene method allows users to retrieve images that contain the same object but show it with different appearances, and to browse images taken from similar viewpoints in groups. Conversely, when a user wants to see a particular 3D scene, the user specifies a sample image by selecting a photograph from the archive. The system then renders the 3D scene with a viewpoint similar to that of the selected photograph.
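One simple way to realise the viewpoint comparison is to combine the distance between camera centres with the angle between viewing directions, as sketched below; the weighting, and the restriction to position and direction only, are our illustrative assumptions.

    import numpy as np

    def viewpoint_distance(R1, t1, R2, t2, w_angle=1.0):
        """Dissimilarity of two viewpoints given extrinsics (R, t),
        where the camera centre is c = -R.T @ t and the viewing
        direction is the third row of R."""
        c1, c2 = -R1.T @ t1, -R2.T @ t2
        d_pos = np.linalg.norm(c1 - c2)             # centre distance
        cos_a = np.clip(R1[2] @ R2[2], -1.0, 1.0)
        d_ang = np.arccos(cos_a)                    # direction angle [rad]
        return d_pos + w_angle * d_ang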
3D-coordinate-producing systems have become very successful in recent years. Aerial and terrestrial laser scanners are now state of the art for capturing the third and fourth dimensions and are replacing photogrammetric camera systems. Laser scanners typically offer a very high point density, but because of their sequential mode of operation their shortcomings are both their speed of acquisition and their size.
The new range-imaging camera SwissRanger, developed by the Centre Suisse d'Electronique et de Microtechnique SA (CSEM), is a first step toward a follow-up generation of 3D measurement systems and overcomes several shortcomings and disadvantages of current photogrammetric systems and laser scanners. Because of its high resolution of more than twenty thousand pixels, the created 3D dataset even allows geometrical information about the environment to be deduced. The accuracy of the acquired distance is approximately 1 cm. Temporal resolution depends on various parameters such as integration time, software and hardware, but image sequences at five hertz can easily be reached. Therefore, (near) real-time measurements are possible.
Several influencing parameters have been investigated in the calibration Lab of the Institute of Geodesy and Photogrammetry at the Swiss Federal Institute of Technology Zurich (Switzerland).
Besides first experiences with and analysis of the data acquired by the SwissRanger, a suitable approach for the calibration of such a system is considered and validated. First, a two-component calibration splits the sensor into a camera and a range-measuring module. Both are calibrated separately with well-known methods. The results of this calibration approach are compared to a newly developed single-step calibration, in which the sensor is regarded as one single (black-box) system and no assumption about the internal model is necessary. The results of the calibration are used to improve the measurements.
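For the range module in the two-component approach, a typical separate calibration fits a low-order correction to distances measured against a reference track; the sketch below shows that standard procedure with made-up numbers, and is not the authors' exact model.

    import numpy as np

    # Measured vs. reference distances [m] on a calibration track (made up)
    measured  = np.array([1.02, 2.05, 3.03, 4.08, 5.04])
    reference = np.array([1.00, 2.00, 3.00, 4.00, 5.00])

    # Fit a cubic correction: d_true ~ p(d_measured)
    p = np.polyfit(measured, reference, deg=3)

    def correct(d):
        return np.polyval(p, d)

    print(correct(3.5))   # corrected distance for a new raw reading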
Furthermore, new applications for such a 3D positioning system are presented. Besides the usefulness of the SwissRanger in car-parking assistance systems, its applicability to an indoor positioning system is evaluated, with a focus on the required accuracy and precision.
In a multiple-camera system consisting of a large number of cameras, each camera has to be calibrated in order to use the image information obtained from them effectively. When the target scene is large, conventional calibration methods using a 3D or 2D object are difficult to apply because setting up these objects is an elaborate task. Although another approach, called self-calibration, using only image point correspondences seems suitable in such a situation, this method is often susceptible to noise. In this paper, we propose a new camera calibration method for such systems using a 1D object, which has three points on a line with known distances between them. The main reason for using a 1D object as the calibration object is that it is more flexible than a 3D or 2D object in a large scene. By using a freely moving 1D object without knowledge of its position, and only one calibrated camera, we can calibrate multiple cameras simultaneously, so the proposed method offers an easy and practical solution. Experimental results on computer-simulated data are shown in this paper. In the presentation, experimental results on real image data will also be presented.
Bayer colour filter arrays (CFA) are commonly used to obtain digital colour imagery from a single-chip CCD or CMOS camera. Colour information is captured via a regular array of colour filters placed over the image sensor, and the full colour image is reconstructed in a demosaicing process. Colour imagery derived in such a way is prone to visual artefacts including false colours, poor edge definition and a loss of image and colour sharpness. Such artefacts are suspected of degrading the quality of photogrammetric measurements made from demosaiced images. An approach to demosaicing based on the use of tuneable Gaussian filters is proposed. The new approach is designed to minimise image artefacts and is specifically aimed at improving the quality of photogrammetric measurements made with the demosaiced imagery. Results are given for a specific application of Bayer CFA cameras to underwater stereo length measurement of fish. The results show a reduction in visual artefacts and an improvement in the quality of stereo measurements.
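One way to realise demosaicing with tuneable Gaussian filters is normalized convolution of each sparse colour channel, with the Gaussian width as the tuning parameter; the sketch below follows that idea under an assumed RGGB layout and is not necessarily the authors' exact filter design.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def demosaic_gaussian(raw, sigma=0.8):
        """raw: (H, W) Bayer mosaic with an RGGB pattern (assumed).
        Each channel is interpolated by normalized convolution:
        blur the masked samples, divide by the blurred mask."""
        H, W = raw.shape
        out = np.zeros((H, W, 3))
        masks = {c: np.zeros((H, W)) for c in range(3)}
        masks[0][0::2, 0::2] = 1            # R
        masks[1][0::2, 1::2] = 1            # G on red rows
        masks[1][1::2, 0::2] = 1            # G on blue rows
        masks[2][1::2, 1::2] = 1            # B
        for c in range(3):
            num = gaussian_filter(raw * masks[c], sigma)
            den = gaussian_filter(masks[c], sigma)
            out[..., c] = num / np.maximum(den, 1e-12)
        return out

A smaller sigma preserves edge sharpness at the cost of residual mosaic structure, while a larger sigma suppresses false colours but blurs edges, which is the trade-off the tuning addresses.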
The digitalization of real-world objects is of great importance in various application domains; in industrial processes, for example, quality assurance is very important and the geometric properties of workpieces have to be measured. Traditionally, this is done with gauges, which is somewhat subjective and time-consuming. We developed a robust optical laser scanner for the digitalization of arbitrary objects, primarily industrial workpieces. As the measuring principle we use triangulation with structured lighting and a multi-axis locomotor system. Measurements on the generated data lead to incorrect results if the contained error is too high. Therefore, processes for geometric inspection under non-laboratory conditions are needed that are robust in permanent use and provide high accuracy as well as high operation speed. The many existing methods for polygonal mesh optimization produce very esthetic 3D models but often require user interaction and are limited in processing speed and/or accuracy. Furthermore, operations on optimized meshes consider the entire model and pay only little attention to individual measurements: many measurements contribute to parts of single scans, and possibly strong differences between neighboring scans are lost during mesh construction. Also, most algorithms consider unsorted point clouds, although the scanned data is structured through device properties and measuring principles. We use this underlying structure to achieve high processing speeds, and we extract intrinsic system parameters and use them for fast pre-processing.
An algorithm for the least squares matching of overlapping 3D surfaces is presented. It estimates the transformation parameters between two or more fully 3D surfaces, using the Generalized Gauss-Markoff model and minimizing the sum of squares of the Euclidean distances between the surfaces. This formulation makes it possible to match arbitrarily oriented 3D surfaces simultaneously, without using explicit tie points. Besides the mathematical model and implementation aspects, we present further extensions of the basic model. The first extension is the simultaneous matching of sub-surface patches, selected in cooperative surface areas. It provides a computationally effective solution, since it matches only the relevant multi-subpatches rather than the whole overlapping areas. The second extension is the matching of surface geometry together with its attribute information, e.g. reflectance, color, temperature, etc., under a combined estimation model. We give practical examples to demonstrate the basic method and the extensions.
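The core computation is a Gauss-Newton step on the transformation parameters; a linearized point-to-plane version for the rigid case is sketched below (a simplification of the generalized model, without the sub-patch and attribute extensions).

    import numpy as np

    def lsq_matching_step(p, q, n):
        """One Gauss-Newton step aligning points p to a surface sampled
        at points q with normals n (all (N, 3) arrays), minimizing the
        sum of squared point-to-plane distances. Returns a small rotation
        vector w and translation t; apply as p + cross(w, p) + t."""
        r = ((p - q) * n).sum(axis=1)           # signed residuals
        J = np.hstack([np.cross(p, n), n])      # (N, 6) Jacobian
        x, *_ = np.linalg.lstsq(J, -r, rcond=None)
        return x[:3], x[3:]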
A growing number of mixed reality applications have to build 3D models of arbitrary shapes. However, modeling an arbitrary shape implies a trade-off between accuracy and computation time. Real-time methods based on the visual hull cannot model the holes of the shape inside the approximated silhouette; carving methods can, but they are not real-time. The aim of this paper is to improve their accuracy and computation time. It presents a novel multiresolution algorithm for 3D reconstruction of arbitrary 3D shapes from range data acquired at fixed viewpoints. The algorithm is split into two parts. The first part labels a voxel based on the current viewpoint, without taking previous labels into account. The second part updates the labels and grows the octree representing the voxelized space; it determines the number of calls made to the first part, which is time-consuming. A novel set of labels, a study of the parallelepiped projections and a front-to-back propagation of information allow us to improve accuracy in both parts, to reduce the computation cost of the voxel-labeling part, and to reduce the number of calls made to it by the multiresolution and voxel-updating part.
Recently, laser scanners have been receiving more attention as useful tools for real-time 3D data acquisition, and various applications such as city modeling, DTM generation and 3D modeling of cultural heritage have been proposed. However, robust filtering to distinguish on- and off-terrain points in point clouds collected by airborne laser scanners is still an open issue. In particular, the filtering of point clouds collected by terrestrial laser scanners suffers from more severe problems, caused by numerous occluded parts, windows, building walls, sparse points at depth, and so on.
In order to perform 3D texture modeling of cultural heritage using terrestrial laser ranging data, a texture modeling method is investigated in this paper. The proposed filtering method is based on flatness within 30 x 30 cm cells: flat areas (ground surfaces, structure walls, etc.) and non-flat areas (trees, bushes, etc.) are classified using the measurement results for many targets, and non-flat areas are interpolated using a morphological procedure.
The filtering method shows very robust results, and its most remarkable feature is its ability to obtain break-lines, which provide important information for 3D modeling, since 3D models of historical structures consist of flat areas (e.g. roofs, walls, pillars). A surface patch of the 3D model is therefore identified by extracting a flat area surrounded by break-lines, and the 3D model for the patch is generated using the point cloud data along the frame of the patch.
Furthermore, curve points for each surface patch are detected from the break-lines, the surface patch is generated automatically step by step, and texture modeling is performed with the surface patch and a digital image.
Automatic detection of the curve points, which is necessary for model building, is difficult, however, because a break-line includes many small curve points. In this paper we pay particular attention to this problem and improve the efficiency of model building by developing a solution method. With these processes, an efficient 3D representation using a textured model is obtained without manual processing.
In summary, this paper presents a 3D textured modeling method for historical structures using terrestrial laser ranging data and break-lines obtained by flatness evaluation, together with a method for detecting curve points from the break-lines.
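The flatness test itself can be pictured as a plane fit per grid cell, with the RMS residual deciding between flat and non-flat; the sketch below is a generic version of such a test, with the residual tolerance as an assumed parameter.

    import numpy as np

    def is_flat(cell_points, tol=0.01):
        """Classify one 30 x 30 cm cell of points (N, 3) as flat if the
        RMS residual of a best-fit plane is below tol [m]."""
        if len(cell_points) < 3:
            return False
        c = cell_points.mean(axis=0)
        # plane normal = right singular vector of the smallest singular value
        _, s, Vt = np.linalg.svd(cell_points - c)
        normal = Vt[-1]
        rms = np.sqrt(np.mean(((cell_points - c) @ normal) ** 2))
        return rms < tol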
In the last decade, the demand for 3D models for object documentation and visualization has drastically increased. 3D modeling of close-range objects is required in different applications, such as cultural heritage, industry, animation and medicine. While photogrammetry is a well-proven technique for the 3D reconstruction of real objects, featuring important properties such as accurate sensor calibration, the use of both analog and digital imagery, low cost and high portability, laser scanning technology is becoming a very promising alternative for surveying and modeling applications. Typically, laser scanners allow the fast acquisition of huge amounts of 3D data, which can often be combined with high-resolution colour digital images. As a result, real objects can be represented with a higher level of detail together with good metric accuracy. Among the several works presented so far on laser scanning for cultural heritage surveys, some modeling- and accuracy-related issues have not yet been solved or discussed in detail. In this contribution we report on two case studies realized with photogrammetry and laser scanning, and we provide some advice and suggestions on the more suitable 3D modeling method for a given object, taking into account its size and shape complexity, the required accuracy and the target application.
In this paper a new approach to 3D human body tracking is proposed. A sparse 3D reconstruction of the subject to be tracked is made using a structured light system consisting of a pre-calibrated LCD projector and a camera. Easily detectable features are projected at a number of points of interest. The resulting sparse 3D reconstruction is used to estimate the body pose of the tracked person. This new estimate of the body pose is then fed back to the structured light system and makes it possible to adapt the projected patterns, i.e. to decide where to project features. Given the observations, a physical simulation is used to estimate the body pose by attaching forces to the limbs of the body model. The sparse 3D observations are augmented by denser silhouette information in order to make the tracking more robust.
Experiments demonstrate the feasibility of the proposed approach and show that the high speeds that are required due to the closed feedback loop can be achieved.
In this paper, we propose a new technique for measuring the whole three-dimensional shape of small moving objects. The proposed measurement system has a very simple structure, consisting of a CCD camera fitted with a fish-eye lens and a cylinder with a mirror coating on its inside. The CCD camera is set at the top of the cylinder, with its optical axis aligned with the axis of the cylinder. A captured image includes two types of information: a direct view of the target and a reflected view. These two views are used to measure the shape of the target by means of stereo matching. The proposed method can acquire the shape of the target from only a single image, so the three-dimensional shape of a moving object can be obtained from an image sequence.
This paper reports on a proof-of-concept test for modeling a dynamic surface by integrating terrestrial laser scanning into videogrammetry. The ultimate objective is to apply the methodology to determine the surface geometry of membrane structures, and to retrieve displacement and deformation information from the sequential three-dimensional model. Due to the characteristics of a membrane surface, conventional targeting is impracticable. Therefore the laser footprints produced during laser scanning, together with projected dots of light, are used as control points and videogrammetric targets, replacing the need for physically attached targets. Following the videogrammetry workflow developed in this experiment, the laser footprints and the projected dots can be extracted from the acquired video imagery and their 3D object coordinates estimated. The surface model is then constructed based on the estimated target points.
The originality of this paper is the integration of videogrammetry and terrestrial laser scanning. The introduction of laser scanning not only determines the 3D surface model, but also provides full control in the videogrammetric process. Moreover, the system presented herein demonstrates that it is capable of constructing the three-dimensional surface model over time.
In this paper we report on the historic development of human body digitization and on the actual state of commercially available technology.
Complete systems for the digitization of the human body have existed for more than ten years. One of the main users of this technology has been the entertainment industry. Every new movie excites audiences with attractive visual effects, but only few people know that the most thrilling cuts are realized using virtual persons. The faces and bodies of actors are digitized, and the "virtual twin" replaces the actor in the movie. Nowadays, the state of human body digitization is so advanced that it is no longer possible to distinguish the real actor from the virtual one. Indeed, the rapid technical development owes much to the movie industry, which has been one of the strong economic motors of this technology.
Today, with the massive cost reduction made possible by new technologies, methods for digitization of the human body are used also in other fields of application, such as ergonomics, medical applications, computer games, biometry and anthropometrics. Over time, this technology is becoming interesting also for sport, fitness, fashion and beauty. A large expansion of human body digitization is expected in the near future.
To date, different technologies are used commercially for the measurement of the human body. They can be divided into three distinct groups: laser scanning, projection of light patterns, and a combination of modeling and image processing. The different solutions have strengths and weaknesses that determine their suitability for specific applications. This paper gives an overview of their differences and characteristics and offers guidance for selecting an adequate method. Practical examples of the commercial exploitation of human body digitization are also presented, and new interesting perspectives are introduced.
In this paper a workflow to reconstruct complicated parts of the human body is presented. The approach focuses on the surface measurement of a human shoulder acquired through a synchronized multi-image acquisition system. The surface measurement at certain steps of the video sequence serves as a key frame to which soft objects attached to an articulated skeleton are fitted, thereby constraining the tracking procedure in the in-between frames. The approach covers a workflow for the 3D reconstruction of a human shoulder, starting with the camera setup and image acquisition, followed by camera calibration and orientation, and ending in surface measurements and 3D reconstruction of the scene at a certain frame. The calibration and orientation are done with a moving-reference-field method, making use of the video sequence character. The approximations for the surface measurement procedure are obtained through multiphoto cross-correlation, and the surface measurements and 3D reconstruction are processed with multiphoto geometrically constrained matching. The process is described and tested with synthetic images generated with a commercial rendering software package. This work is embedded in the REBOMO+ project, a joint project with the Computer Graphics Lab of the EPFL Lausanne.
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly, of which visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull construction are largely unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of shape-from-silhouette reconstruction of body segments, depending on the number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of less than 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 and more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10.5% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3%, 10.5%, 4.1%, and 1.2% using 4, 8, 16, and 64 cameras, respectively).
The most common methods for accurate capture of three-dimensional human motion require a laboratory environment and the attachment of markers or fixtures to the body segments. These laboratory conditions can cause unknown experimental artifacts. Thus our understanding of normal and pathological human movement would be enhanced by a method that allows capture of human movement without the constraint of markers or fixtures placed on the body. Markerless methods are not widely available because the accurate capture of human movement without markers is technically challenging. A reported method of constructing a body's visual hull using shape-from-silhouette (SFS) offers an attractive approach. However, to date the influence of camera placement and number of cameras on the construction of visual hulls for biomechanical analysis is largely unknown. The purpose of this study was to evaluate the accuracy of SFS construction of a human form for biomechanical analysis, depending on camera placement and number of cameras. Visual hull construction was sensitive to camera placement and the subject's pose. Uniform camera distributions such as circular and hemispherical camera arrangements provided the most favorable results. Setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions and great fluctuations for different poses and positions across a viewing volume, while setups with 16 and more cameras provided good volume estimations and consistent results for different poses and positions across the viewing volume.
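A minimal shape-from-silhouette construction of the kind evaluated in these studies projects each voxel centre into every camera and keeps it only if all silhouettes cover it; the sketch below assumes given 3x4 projection matrices and binary silhouette masks.

    import numpy as np

    def visual_hull(voxels, projections, silhouettes):
        """voxels: (N, 3) candidate voxel centres.
        projections: list of 3x4 camera matrices.
        silhouettes: list of binary (H, W) masks, one per camera.
        Returns a boolean mask of voxels inside the visual hull."""
        Xh = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous
        inside = np.ones(len(voxels), dtype=bool)
        for P, sil in zip(projections, silhouettes):
            x = Xh @ P.T
            u = np.round(x[:, 0] / x[:, 2]).astype(int)
            v = np.round(x[:, 1] / x[:, 2]).astype(int)
            ok = (v >= 0) & (v < sil.shape[0]) & (u >= 0) & (u < sil.shape[1])
            inside &= ok                          # outside the image = carved
            inside[ok] &= sil[v[ok], u[ok]] > 0   # must lie in every mask
        return inside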
This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
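The planar disparity model per segment is a simple least-squares fit, as sketched below; the segmentation, layer clustering and graph-cut assignment that follow are not reproduced.

    import numpy as np

    def fit_disparity_plane(xs, ys, ds):
        """Fit d = a*x + b*y + c to the initial disparities of one
        segment's pixels (xs, ys, ds are 1-D arrays)."""
        A = np.column_stack([xs, ys, np.ones(len(xs))])
        (a, b, c), *_ = np.linalg.lstsq(A, ds, rcond=None)
        return a, b, c

    def plane_disparity(a, b, c, x, y):
        return a * x + b * y + c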
3D city modeling from airborne imagery mainly comprises two parts: (1) image processing procedures and (2) 3D modeling of man-made objects such as buildings, roads and other objects. Line extraction and stereo matching are usually utilized as image processing procedures. However, there are some open issues in automatic man-made object modeling. In particular, the acquisition of spatial data for buildings is important for reliable city modeling.
In these circumstances, this paper focuses on a more efficient line matching method using Least Median of Squares (LMedS). LMedS can calculate the trifocal tensor more accurately than the least squares method, and it can remove inaccurately matched lines during the line matching procedure. Therefore, more accurate line matching can be performed and more efficient city modeling realized. This paper describes the LMedS-based line matching method and investigates the adaptability of this system to city modeling.
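The LMedS principle can be shown in generic form: draw minimal samples, score each candidate model by the median of its squared residuals, and keep the best. The sketch below applies it to a 2D line fit for clarity rather than to the trifocal tensor itself.

    import numpy as np

    def lmeds_line(points, n_trials=200, rng=None):
        """Least-Median-of-Squares line fit to (N, 2) points.
        Unlike least squares, the score is the MEDIAN of the squared
        residuals, so up to half the data may be outliers."""
        rng = rng or np.random.default_rng(0)
        best, best_med = None, np.inf
        for _ in range(n_trials):
            p, q = points[rng.choice(len(points), 2, replace=False)]
            d = q - p
            norm = np.linalg.norm(d)
            if norm < 1e-12:
                continue                            # degenerate sample
            n = np.array([-d[1], d[0]]) / norm      # line normal
            r2 = ((points - p) @ n) ** 2            # squared residuals
            med = np.median(r2)
            if med < best_med:
                best, best_med = (p, d / norm), med
        return best, best_med

In the paper the same median score replaces the least-squares objective in the trifocal-tensor estimation, and lines whose residuals are large under the best model are rejected as mismatches.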
A photogrammetric strategy for the orientation of image sequences acquired by Mobile Mapping Vehicles (MMV) is presented. The motivations for this are twofold: to allow image georeferencing during short GPS outages for the MMV under development at the University of Parma, which currently lacks an IMU, and to improve the consistency of image georeferencing between asynchronous frames. The method may also contribute to limiting the drift errors of low-cost integrated IMU/GPS systems during GPS outages. Drawing on techniques developed for structure and motion (S&M) reconstruction from image sequences, and accounting for the specific conditions of the MMV imaging geometry, highly reliable multi-image matches are found, refining the image orientation with a final bundle adjustment. Scenes with poor image texture and the automation of the bundle's convergence to the solution are still problems. After successfully orienting image sequences up to about 200 m long, the accuracy of the orientation and reconstruction process was checked in a test field. Although not all constraints between synchronous image pairs are yet enforced, the accuracy degradation along the sequence was found to be still well within the specifications for the MMV. Furthermore, curved paths and possible solutions to the scarcity of tracked points are investigated.
Three-dimensional digital preservation of historical treasures has recently become a major focus of research in computer vision and graphics. It offers the advantages of permanent preservation, remote display, ease of browsing and study, 3D model copying, etc. It is particularly important for the digital library systems that have been successfully established in many countries. Furthermore, there has been pioneering research on preserving cultural and historical relics, e.g., famous paintings, stone carvings, and well-known architecture and landscapes. There are many priceless Chinese treasures made of jadeite, but existing 3D scanning techniques cannot be applied to such curios because of the semi-transparent and reflective material properties as well as safety considerations. In this paper, we present a novel semi-automatic system to reconstruct three-dimensional models of jadeite objects from image sequences. There are two major challenges in reconstructing 3D models of jadeite treasures from uncalibrated image sequences. The first is the semi-transparency and highly specular reflection of jadeite; the other is the unknown camera information for the given image sequences, including intrinsic (calibration) and extrinsic (position and orientation) parameters.
The proposed modeling process first recovers the camera information and a rough structure through a structure-from-motion algorithm, and then extracts the fine details of the model from dense correspondences between image patches. We have developed three techniques for this challenging task: a structure-from-motion algorithm, image registration, and dense depth computation.
First, because it is very difficult to reliably establish correspondences on the highly specular material, we manually select some corresponding feature points between adjacent images. These correspondences supply the information needed to recover the camera parameters and provide the initial guess for the dense matching of the image patches. The structure-from-motion algorithm consists of two steps: projective reconstruction, followed by self-calibration and metric update. Considering the high feature missing rate due to the highly specular material, we propose a robust method for projective reconstruction that recovers the missing points, greatly reducing the traditional error-accumulation problem. The self-calibration and metric update step exploits the image acquisition assumptions to obtain the camera parameters. It iterates two steps: a closed-form solution from the linear constraints on the camera calibration matrix based on the absolute conic, and an optimization that fits the nonlinear constraints. The obtained solution then serves as the initial guess for the strategic bundle adjustment algorithm.
As to image registration, existing techniques fail because of the complex lighting effects on jadeite. By including brightness variation factors in the model and accounting for specular highlights, we developed a novel optical flow computation technique that reliably computes dense matches across the image patches. Based on the extracted camera information and the registered image patches, the dense depth information of the jadeite object can be computed, and the original rough model is refined with this dense depth information through subdivision and adaptation of the 3D rough mesh. Finally, experimental results of 3D model reconstruction from an image sequence of the Chinese treasure, Jadeite Cabbage with Insects, demonstrate the performance of the developed system.
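The optical-flow step can be illustrated with the classical generalized brightness-constancy idea: allow a multiplicative gain a and offset b between frames, so that highlight intensity changes are absorbed by (a, b) rather than by the flow. The per-patch least-squares solve below is a minimal sketch of that idea under stated assumptions (Ix, Iy are spatial gradients, It the temporal difference, I1 the first frame's patch); it is not the authors' exact formulation.

import numpy as np

def flow_with_brightness(Ix, Iy, It, I1):
    # Solve Ix*u + Iy*v - a*I1 - b = -It in the least-squares sense over
    # one patch, estimating flow (u, v) jointly with a brightness scale a
    # and offset b that soak up specular-highlight intensity changes.
    A = np.column_stack([Ix.ravel(), Iy.ravel(),
                         -I1.ravel(), -np.ones(I1.size)])
    (u, v, a, b), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v, a, b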
In this paper, a practical approach to the calibration of a stereovision system is proposed. First, the internal parameters are calibrated with a traditional method. Then, after the two cameras are mounted, the relative pose of the two cameras is determined by solving for the essential matrix. Thus, an elaborate setup of control points is avoided, which allows the method to be applied outside laboratory environments. Additionally, a new solution for the essential matrix is described, which is easier to comprehend and implement. Chessboard-like patterns are used in the calibration, and the grid corners are detected automatically by a new scheme based on local symmetry. Experiments have been carried out to test the approach; the results achieved are as accurate as those achieved by traditional methods.
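Once the essential matrix E is known, the relative pose follows from its SVD. The sketch below shows the standard four-candidate decomposition; it illustrates the textbook method, not the paper's new solution, and the correct candidate is chosen by checking that triangulated points lie in front of both cameras (not shown).

import numpy as np

def decompose_essential(E):
    # E = U diag(1, 1, 0) V^T; the rotation is U W V^T or U W^T V^T and
    # the translation direction is the last column of U, up to sign.
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]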
A structured-light rangefinder is distinguished from other range scanning systems by its use of off-the-shelf hardware and its fast data acquisition. We propose a novel approach to calibrate such a system, namely, to calibrate both the camera and the projector. This approach handles all types of distortion and produces highly accurate results. Conventional camera calibration techniques compute the pixel-ray correspondences and represent them by a mathematical model with a limited number of parameters. Such parametric models, however, cannot capture general distortions, and unmodelled distortions can greatly affect the quality of downstream applications. The proposed approach instead computes and maintains an explicit ray database for all pixels. By trading memory for accuracy, this approach solves the above problem. The resulting ray database can be used directly for 3D point reconstruction with the calibrated system, and it is also useful for model fitting and other operations.
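Conceptually, the ray database stores one 3D origin and unit direction per pixel in a common frame, with no lens model in between. The sketch below is an illustrative assumption about the data layout (including the midpoint triangulation helper), not the paper's implementation, and shows how such a table could feed reconstruction:

import numpy as np

class RayDatabase:
    # One explicit ray per pixel: a 3D origin and a unit direction,
    # stored in a common world frame instead of a parametric lens model.
    def __init__(self, height, width):
        self.origins = np.zeros((height, width, 3))
        self.directions = np.zeros((height, width, 3))

    def ray(self, row, col):
        return self.origins[row, col], self.directions[row, col]

def triangulate(o1, d1, o2, d2):
    # Midpoint of the common perpendicular of two non-parallel rays with
    # unit directions: a simple camera-ray / projector-ray intersection.
    b = o2 - o1
    c = d1 @ d2
    denom = 1.0 - c * c
    s = (b @ d1 - (b @ d2) * c) / denom
    t = ((b @ d1) * c - b @ d2) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))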
A linear method that calibrates a camera from a single view of two concentric semicircles of known radii is presented. Using the estimated centers of the projected semicircles and the four corner points on the projected semicircles, the focal length and the pose of the camera are accurately estimated in real time. Our method is applied to augmented reality applications and its validity is verified.
The spatial and temporal characteristics of the data used to describe moving objects' movement make such data large in volume and complex to manage, and different queries on motion data call for different organization methods. Following the needs of most applications, a general motion model is used to represent the translation and rotation of moving objects over a period of time. Because motion data are multidimensional in space and time, a 2^n-tree is employed to construct the main part of the index to these data. Meanwhile, other index algorithms are added to the structure to serve queries beyond state queries tied to a specific epoch. Thus, the motion data index structure (MDIS) is constructed as a multi-entry, multi-level index for the organization of motion data. The indexes within MDIS may work alone or cooperate with each other to process different kinds of queries. The extra space needed for MDIS is only about 5%-6% of the total storage space of the motion data themselves, and the response time of each query is greatly reduced and acceptable to most applications dealing with moving objects.
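A 2^n-tree is the n-dimensional generalization of the quadtree (n = 2) and octree (n = 3): each internal node splits its box at the midpoint into 2^n children, one per combination of below/at-or-above the midpoint in each dimension. The minimal sketch below is hypothetical; the node layout and names are assumptions, not the paper's MDIS implementation.

# Hypothetical 2^n-tree node: splits an n-dimensional box at its midpoint
# into 2**n children (the quadtree/octree generalization named above).
class Node:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi   # opposite box corners, length-n sequences
        self.children = None        # list of 2**n children once split
        self.items = []             # motion-data entries stored at this node

    def child_index(self, point):
        # One bit per dimension: bit d is set iff point is at or above the
        # midpoint in dimension d, selecting one of the 2**n children.
        mid = [(l + h) / 2.0 for l, h in zip(self.lo, self.hi)]
        return sum((point[d] >= mid[d]) << d for d in range(len(mid)))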