Augmented Reality (AR) is a departure from standard virtual reality in the sense that it allows users to see computer-generated virtual objects superimposed over the real world through the use of a see-through head-mounted display. Users of such a system can interact in the real/virtual world using additional information, such as 3D virtual models and instructions on how to perform tasks in the form of video clips, annotations, speech instructions, and images. In this paper, we describe two prototypes of a collaborative industrial tele-training system. The distributed aspect of this system enables users at remote sites to collaborate on training tasks by sharing the view of the local user equipped with a wearable computer. The users can interactively manipulate virtual objects that substitute for real objects, allowing the trainee to try out and discuss the various tasks that need to be performed. A new technique for identifying real-world objects and estimating their coordinates in 3D space is introduced. The method is based on a computer vision technique capable of identifying and locating Binary Square Markers that identify each information station. Experimental results are presented.
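As an illustration of the marker-based localisation idea (not the authors' implementation), the sketch below finds a square marker outline in a camera frame and estimates its pose with OpenCV; the marker side length, camera calibration inputs and corner ordering are assumptions, and the binary decoding step that identifies each information station is omitted.

```python
# Minimal sketch: locate a square marker outline and estimate its 3D pose.
# Marker side length and camera calibration are placeholder inputs; the
# paper's own Binary Square Marker scheme is not reproduced here.
import cv2
import numpy as np

def estimate_marker_pose(gray, camera_matrix, dist_coeffs, marker_side=0.05):
    """Return (rvec, tvec) for the first square-looking contour, or None."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) != 4 or cv2.contourArea(approx) < 100:
            continue
        # In a real system the corner order (and the marker identity) would be
        # fixed by decoding the binary pattern inside the square; here the
        # corners are used in the order returned by approxPolyDP.
        image_pts = approx.reshape(4, 2).astype(np.float32)
        s = marker_side / 2.0                      # assumed marker size in metres
        object_pts = np.array([[-s, -s, 0], [s, -s, 0], [s, s, 0], [-s, s, 0]],
                              dtype=np.float32)
        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, camera_matrix, dist_coeffs)
        if ok:
            return rvec, tvec                      # marker pose in camera coordinates
    return None
```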
Highly accurate avatars of humans promise a new level of realism in engineering and entertainment applications, including areas such as computer-animated movies, computer game development, interactive virtual environments, and tele-presence. In order to provide high-quality avatars, new techniques for their automatic acquisition and creation are required. A framework for the capture and construction of arbitrary avatars from image data is presented in this paper. Avatars are automatically reconstructed from multiple static images of a human subject by utilizing image information to reshape a synthetic three-dimensional articulated reference model. A pipeline is presented that combines a set of hardware-accelerated stages into one seamless system. Primary stages in this pipeline include pose estimation, skeleton fitting, body part segmentation, geometry construction and coloring, leading to avatars that can be animated and included in interactive environments. The presented system removes traditional constraints on the initial pose of the captured subject by using silhouette-based modification techniques in combination with a reference model. Results can be obtained in near-real time with very limited user intervention.
This paper presents a high-speed, single shot range scanner. The depth acquisition is based on classical triangulation, facilitated by structured light. The projection pattern consists of equidistant vertical stripes. The major contribution of our research is that this setup is amenable to real-time processing. Both from an algorithmic and an implementation point of view, the speed constraint is taken into account. The paper discusses both the pattern detection and the camera and projector calibration. The subpixel accurate detection, which is the main computational problem, is implemented as a two-stage algorithm. An initialization procedure yields the rough contours. Subpixel accuracy is reached through an iterative relaxation process. A consistent labeling is assigned based on belief propagation.
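As a hedged illustration of what subpixel stripe detection can look like (the paper's own two-stage relaxation scheme is not reproduced here), the sketch below refines stripe positions along an image row with the classic three-point parabolic fit; the intensity threshold is an assumed parameter.

```python
# A generic stand-in for subpixel stripe localisation: refine the column of an
# intensity maximum along each image row by fitting a parabola to the peak and
# its two neighbours.
import numpy as np

def subpixel_stripe_peaks(row, min_intensity=32):
    """Return subpixel column positions of local intensity maxima in one image row."""
    row = np.asarray(row, np.float64)
    peaks = []
    for c in range(1, len(row) - 1):
        if row[c] >= min_intensity and row[c] > row[c - 1] and row[c] >= row[c + 1]:
            denom = row[c - 1] - 2.0 * row[c] + row[c + 1]
            # Vertex of the parabola through the three samples around the peak.
            offset = 0.0 if denom == 0 else 0.5 * (row[c - 1] - row[c + 1]) / denom
            peaks.append(c + offset)
    return peaks
```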
The 3D reconstruction and modeling of real humans is nowadays one of the most challenging problems and a topic of great interest. The human models are used for movies, video games or ergonomics applications, and they are usually created with 3D scanner devices. In this paper a new method to reconstruct the shape of a static human is presented. Our approach is based on photogrammetric techniques and uses a sequence of images acquired around a standing person with a digital still video camera or with a camcorder. First the images are calibrated and oriented using a bundle adjustment. After the establishment of a stable adjusted image block, an image matching process is performed between consecutive triplets of images. Finally the 3D coordinates of the matched points are computed with a mean accuracy of ca. 2 mm by forward ray intersection. The obtained point cloud can then be triangulated to generate a surface model of the body, or a virtual human model can be fitted to the recovered 3D data. Results of the 3D human point cloud with pixel color information are presented.
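The forward ray intersection used for the final 3D point computation can be sketched as the least-squares point closest to all viewing rays; camera origins and ray directions are assumed to come from the bundle-adjusted orientations, and the snippet below is a generic illustration rather than the authors' code.

```python
# Forward ray intersection: find the 3D point minimising the squared distances
# to all rays through matched image points of the oriented cameras.
import numpy as np

def forward_intersection(origins, directions):
    """origins, directions: (n, 3) arrays; directions need not be normalised."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)         # least-squares 3D intersection point
```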
Efficient, realistic face animation is still a challenge. A system is proposed that yields realistic animations for speech. It starts from real 3D face dynamics, observed at a frame rate of 25 fps for thousands of points on the faces of speaking actors. When asked to animate a face, it replicates the visemes that it has learned and adds the necessary coarticulation effects. The speech animation can be based on as few as 16 modes, extracted through Independent Component Analysis from the observed face dynamics. Faces for which only a static, neutral 3D model is available can also be animated. Rather than animating by verbatim copying of other faces' deformation fields, the visemes are adapted to the shape of the new face. By localising the face in a Face Space, in which the locations of the example faces are also known, visemes are adapted automatically according to its relative distances to these examples. The animation tool proposes a good speech-based face animation as a point of departure for animators, who are also supported by the system in making further changes as desired.
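A minimal sketch of the mode-extraction step, assuming the tracked 3D face points of each frame are flattened into the rows of a matrix; it uses scikit-learn's FastICA as a generic stand-in for the Independent Component Analysis mentioned above, with 16 modes as in the abstract.

```python
# Extract a small set of independent deformation modes from observed face
# dynamics. `frames` has shape (n_frames, 3 * n_points); this is a generic
# ICA decomposition, not the authors' pipeline.
import numpy as np
from sklearn.decomposition import FastICA

def extract_viseme_modes(frames, n_modes=16):
    frames = np.asarray(frames, float)
    mean_face = frames.mean(axis=0)
    ica = FastICA(n_components=n_modes, random_state=0, max_iter=1000)
    weights = ica.fit_transform(frames - mean_face)   # per-frame mode activations
    modes = ica.mixing_.T                             # (n_modes, 3 * n_points) deformation modes
    # Reconstruction check: frames ~= mean_face + weights @ modes
    return mean_face, modes, weights
```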
This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of interior orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; the 3-D trajectory is thus determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points, without using markers, and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacements of all the trajectories inside the region. The tracked key points lead to a final result comparable to that of conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
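The key-point tracking rule described at the end of the abstract lends itself to a short sketch: shift the key point's centre of gravity by the mean displacement of all trajectories inside its region. Array shapes and the spherical region are assumptions for illustration.

```python
# Track a key point as the mean displacement of all matched surface points
# (trajectories) inside its 3D region between two time steps.
import numpy as np

def track_key_point(center, radius, points_t, points_t1):
    """points_t, points_t1: (n, 3) matched surface points at times t and t+1."""
    center = np.asarray(center, float)
    points_t = np.asarray(points_t, float)
    points_t1 = np.asarray(points_t1, float)
    inside = np.linalg.norm(points_t - center, axis=1) <= radius
    if not np.any(inside):
        return center                             # no trajectories found in the region
    mean_shift = (points_t1[inside] - points_t[inside]).mean(axis=0)
    return center + mean_shift                    # key point position at time t+1
```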
This paper briefly reviews 20 years of development in the field of 3D laser imaging. An overview of 3D digitizing techniques is presented, with an emphasis on some of the numerous commercial techniques and systems currently available. The paper covers some of the most important methods that have been developed over the years, both at NRC and elsewhere, with a focus on commercial systems that are good representations of the key technologies that have stood the test of time.
We address the problem of automating the processing of dense range data, specifically the automated interpretation of such data containing curved surfaces. This is a crucial step in the automated processing of range data for applications in object recognition, measurement, re-engineering and modeling. We propose a two-stage process using model-based curvature classification as the first step. Features based on differential geometry, mainly curvature features, are ideally suited for processing objects of arbitrary shape, including, of course, curved surfaces. The second stage uses a modified region growing algorithm to perform the final segmentation. The results of the proposed approach are demonstrated on different range data sets.
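A hedged sketch of model-based curvature classification, assuming mean and Gaussian curvature images have already been estimated from the range data: each pixel is labelled by the signs of H and K (the classic HK map), which is one common way to realise the first stage described above.

```python
# HK-sign surface-type classification. Estimating H (mean) and K (Gaussian)
# curvature from the range image is assumed to be done elsewhere.
import numpy as np

SURFACE_TYPES = {
    (0, 0): "plane",        (0, -1): "ridge",        (0, 1): "valley",
    (1, -1): "peak",        (1, 1): "pit",           (-1, 0): "minimal/saddle",
    (-1, -1): "saddle ridge", (-1, 1): "saddle valley",
}

def hk_classify(H, K, eps=1e-4):
    """H, K: arrays of mean and Gaussian curvature; returns an array of type labels."""
    sk = np.where(np.abs(K) < eps, 0, np.sign(K)).astype(int)
    sh = np.where(np.abs(H) < eps, 0, np.sign(H)).astype(int)
    labels = np.empty(np.shape(H), dtype=object)   # unmatched (degenerate) cells stay None
    for (k, h), name in SURFACE_TYPES.items():
        labels[(sk == k) & (sh == h)] = name
    return labels
```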
In recent years industrial photogrammetry has evolved from a highly specialized niche technology into a well-established tool for industrial coordinate measurement applications, with numerous installations in a significantly growing market of flexible and portable optical measurement systems. This is due to the development of powerful but affordable video and computer technology.
The increasing industrial requirements for accuracy, speed, robustness and ease of use of these systems, together with a demand for the highest possible degree of automation, have forced universities and system manufacturers to develop hardware and software solutions to meet these requirements.
The presentation will show the latest trends in hardware development, especially new-generation digital and/or intelligent cameras, aspects of image engineering such as the use of controlled illumination or projection technologies, and algorithmic and software aspects such as automation strategies or new camera models.
The basic qualities of digital photogrammetry, like portability and flexibility on the one hand and fully automated quality control on the other, sometimes lead to certain conflicts in the design of measurement systems for different online, offline, or real-time solutions. The presentation will further show how these tools and methods are combined in different configurations to cover the still-growing demands of industrial end-users.
We propose a novel system for real-time three-dimensional surface orientation measurement. The advantages of our method are: (1) single-frame capture of the normal vector distribution, (2) dense, pixel-wise capture of normal vectors, and (3) independence from surface reflectance and background illumination. The system consists of two components: one is a set of sinusoidally amplitude-modulated three-phase (3P) light sources at the vertices of a triangle, and the other is the three-phase correlation image sensor (3PCIS), which demodulates the amplitude and phase of the light reflected from the surface. Based on the photometric stereo principle, the phase and amplitude can easily be converted to the azimuth and inclination, respectively, of the surface normal vector. We implemented this system using a 64 × 64 pixel CMOS 3PCIS developed by us and successfully reconstructed the normal vector map at its frame rate (30 Hz).
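A minimal sketch of converting the demodulated phase and amplitude images into a per-pixel normal map, assuming the phase encodes the azimuth directly and a placeholder calibration maps amplitude to inclination; the actual sensor calibration is not reproduced here.

```python
# Convert phase (azimuth) and amplitude (mapped to inclination) images into a
# per-pixel unit-normal map. The linear amplitude-to-inclination mapping is
# only a placeholder for the real calibration.
import numpy as np

def normals_from_phase_amplitude(phase, amplitude, amp_at_flat, amp_at_max_tilt,
                                 max_tilt=np.deg2rad(60)):
    """phase, amplitude: 2-D arrays from the correlation image sensor."""
    t = np.clip((amp_at_flat - amplitude) / (amp_at_flat - amp_at_max_tilt), 0.0, 1.0)
    inclination = t * max_tilt                      # placeholder calibration curve
    nx = np.sin(inclination) * np.cos(phase)
    ny = np.sin(inclination) * np.sin(phase)
    nz = np.cos(inclination)
    return np.stack([nx, ny, nz], axis=-1)          # unit normal per pixel
```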
Optical triangulation methods based on a laser light sheet and a camera are frequently used as a surface measurement technique in a wide range of applications. They allow for the fast, accurate determination of height profiles, based on relatively simple hardware and software configurations. Moreover, they can be implemented very efficiently and are especially suited for measurements on moving objects such as products on an assembly line.
The study presented in the paper describes the adaptation of laser light sheet optical triangulation techniques to the task of water level profile measurements in hydromechanics experimental facilities. The properties of water surfaces necessitate several modifications of optical triangulation techniques to make them applicable: The mirror-like reflection properties of water surfaces contradict the assumption of diffuse reflection on which standard light sheet triangulation techniques are based; this problem can be circumvented by using a diffusely reflecting projection plane to capture the mirror-like reflection of the laser line from the water surface. Due to the angle-of-incidence law, however, water surface tilts caused by waves will usually cause a strong degradation of the quality of the results when using reflected light; this effect can largely be compensated for by processing max-store images derived from short image sequences rather than single images.
These extensions of optical triangulation turned out to be crucial for the applicability of the method on water surfaces. Besides the theoretical concept and a sensitivity analysis of the method, a system configuration is outlined, and the results of a number of practical experiments are shown and discussed.
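The max-store idea mentioned above reduces to a per-pixel maximum over a short image sequence; the sketch below shows this in a few lines, assuming equally sized grayscale frames.

```python
# Collapse a short image sequence into one max-store image: keep, at every
# pixel, the maximum intensity seen in the sequence, so the reflected laser
# line leaves a connected trace even if each single frame shows only fragments.
import numpy as np

def max_store(frames):
    """frames: iterable of equally sized grayscale images (2-D arrays)."""
    return np.maximum.reduce([np.asarray(f) for f in frames])
```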
Optical 3-d digitizing of object surfaces can be performed by combining fringe projection measurement systems and close-range photogrammetry. Photogrammetry provides fast and precise reference point determination to merge several point clouds generated by the range sensor. In this paper, 3-d measurements using a COMET digitizing system in combination with the DPA-Win photogrammetric software are reported. Different digital cameras (consumer, prosumer and professional types) can be applied. Tests were carried out to determine the accuracy of the photogrammetric reference network with respect to the number of images and/or reference points.
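As a generic illustration of how photogrammetric reference points allow point clouds to be merged (not the COMET/DPA-Win workflow itself), the sketch below estimates the rigid transform between the reference targets seen in one scan and their globally adjusted coordinates via the standard SVD solution, then applies it to the whole scan.

```python
# Rigid alignment of a range-sensor point cloud into the global frame using
# corresponding photogrammetric reference points (standard SVD / Kabsch solution).
import numpy as np

def rigid_transform(src, dst):
    """src, dst: (n, 3) corresponding reference points; returns R (3x3), t (3,)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def merge_scan(points, scan_refs, global_refs):
    R, t = rigid_transform(scan_refs, global_refs)
    return np.asarray(points, float) @ R.T + t       # scan expressed in the global frame
```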
In this paper, we present an overview of our project to construct a digital archive of cultural heritage. Among the efforts in our project, we briefly overview our research on the geometric and photometric preservation of cultural assets and the restoration of their original appearance. Digital geometric modeling is achieved through a pipeline consisting of scanning, registering and merging multiple range images. For these purposes, we have developed a robust simultaneous registration method and an efficient and robust voxel-based integration method. On top of the geometrical model, we align texture images acquired during scanning. Because the geometrical relation between the range sensor and the image sensor is calibrated, we can automatically align texture images onto the geometrical models. For photometric modeling, we have developed a surface light field based method, which captures the appearance variation of real-world objects under different viewpoints and illumination conditions from a series of images. As an attempt to restore the original appearance of historical heritage, we have reconstructed several buildings and statues that were lost in the past. In this paper, we overview these techniques and show several results of applying the proposed methods to existing ancestral assets.
In the great valley of Bamiyan, north-west of Kabul, Afghanistan, two big standing Buddha statues were carved out of the sedimentary rock of the region around the second to fourth centuries AD. The larger statue was 53 meters high, while the smaller Buddha measured 35 m. The two colossal statues were demolished in March 2001 by the Taliban, using mortars, dynamite, anti-aircraft weapons and rockets. After the destruction, a consortium was founded to rebuild the Great Buddha at its original shape, size and place. Our group performed the required computer reconstruction, which serves as a basis for the physical reconstruction. The work has been done using three different types of imagery in parallel, and in this paper we present the results of the 3D computer reconstruction of the statue.
The present paper proposes a virtual environment for visualizing virtualized cultural and historical sites. The proposed environment is based on a distributed asynchronous architecture and supports stereo vision and tiled wall displays. The system is mobile and can run from two laptops. This virtual environment addresses the problems of intellectual property protection and multimedia information retrieval through encryption and content-based management, respectively. Experimental results with a fully textured 3D model of the Crypt of Santa Cristina in Italy are presented, evaluating the performance of the proposed virtual environment.
This paper presents the work that was accomplished in preparing a multimedia CDROM about the history of a Byzantine Crypt. An effective approach based upon high-resolution photo-realistic texture mapping onto 3D models generated from range images is used to present the spatial information about the Crypt. Usually, this information is presented on 2D images that are flat and don’t show the three-dimensionality of an environment. In recent years, high-resolution recording of heritage sites has stimulated a lot of research in fields like photogrammetry, computer vision, and computer graphics. The methodology we present should appeal to people interested in 3D for heritage. It is applied to the virtualization of a Byzantine Crypt where geometrically correct texture mapping is essential to render the environment realistically, to produce virtual visits and to apply virtual restoration techniques. A CDROM and a video animation have been created to show the results.
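A hedged sketch of the geometric core of texture mapping: each 3D vertex is projected into a calibrated photograph to obtain its texture coordinates. A simple pinhole model without lens distortion and without visibility testing is assumed; the project's own high-resolution pipeline is considerably more involved.

```python
# Project model vertices into a calibrated photograph to derive texture (UV)
# coordinates. Pinhole model, no distortion, no occlusion handling.
import numpy as np

def texture_coordinates(vertices, K, R, t, image_size):
    """vertices: (n, 3); K: 3x3 intrinsics; R, t: world-to-camera pose; image_size: (w, h)."""
    cam = np.asarray(vertices, float) @ R.T + t      # points in camera coordinates
    pix = cam @ K.T                                  # homogeneous pixel coordinates
    uv = pix[:, :2] / pix[:, 2:3]                    # perspective division -> pixels
    w, h = image_size
    # Normalise to [0, 1] texture space, flipping v for the usual texture convention.
    return np.stack([uv[:, 0] / w, 1.0 - uv[:, 1] / h], axis=1)
```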
Based on isoline images of the Leshan Grand Buddha obtained from conventional close-range photogrammetry carried out many years ago, a simple digitization method is suggested in this paper for the 3D reconstruction of such famous Buddhas, which is very important for their repair, research and reproduction. The whole work of 3D Buddha reconstruction includes digitization of the Buddha's contour line map, generation of a digital Buddha model, texture mapping using close-range imagery, as well as 3D simulation and animation. The experiment shows that the GIS software package GeoStar can be directly used for the 3D generation and visualization of Chinese Buddhas.
Pose Estimation, Calibration, and Registration Techniques
This work introduces a novel approach to the precise estimation, in real time, of the pose parameters of a planar three-dimensional object. A suitable set of coplanar marks is used to calculate the tilt and pan angle values of the planar object. Feature points are calculated with subpixel accuracy, and a weighted approach is applied to reduce variations in feature point positions due to noise. An application of this method to virtual TV sets is also shown.
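A minimal sketch of how tilt and pan angles can be recovered from coplanar marks, using OpenCV's solvePnP as a generic stand-in for the paper's estimator; the angle conventions and the mark layout are assumptions for illustration.

```python
# Recover the plane orientation from known coplanar marks and their subpixel
# image positions, then read off illustrative tilt/pan angles.
import cv2
import numpy as np

def plane_tilt_pan(object_pts, image_pts, camera_matrix, dist_coeffs=None):
    """object_pts: (n, 3) coplanar marks with z = 0; image_pts: (n, 2) subpixel positions."""
    ok, rvec, _ = cv2.solvePnP(np.float32(object_pts), np.float32(image_pts),
                               camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    n = R[:, 2]                                      # plane normal in camera coordinates
    tilt = np.degrees(np.arccos(min(1.0, abs(n[2]))))    # angle to the optical axis (assumed convention)
    pan = np.degrees(np.arctan2(n[0], n[2]))             # rotation about the vertical (assumed convention)
    return tilt, pan
```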
In this paper we address the registration of close-range imagery to virtual urban models, using buildings and other fixed objects in a scene. We introduce a novel approach, using radiometric and spatial queries to support the registration of ground-level imagery. Image registration involves the comparison of an image's content to the information contained in a VR model, to identify in the VR model the facades that best resemble the ones contained in the processed imagery. This registration-through-queries approach allows us to use coarse information in the form of imprecisely outlined facades to perform image registration, removing the need for time-consuming processes such as precise delineation or control point measurement. In the paper we introduce radiometric indexing schemes to support object facade queries, and present experiments to demonstrate the function of these metrics in our image registration framework.
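As a much simplified stand-in for the radiometric indexing idea (the actual metrics are not given here), the sketch below describes each facade by a normalised intensity histogram and ranks database facades by histogram intersection with a coarsely outlined query facade.

```python
# Rank facades in a database by radiometric similarity to a query facade,
# using normalised intensity histograms and histogram intersection.
import numpy as np

def facade_descriptor(pixels, bins=32):
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def rank_facades(query_pixels, database):
    """database: dict of facade_id -> pixel array; returns ids, best match first."""
    q = facade_descriptor(query_pixels)
    scores = {fid: np.minimum(q, facade_descriptor(px)).sum()   # histogram intersection
              for fid, px in database.items()}
    return sorted(scores, key=scores.get, reverse=True)
```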
3D city modeling from airborne imagery mainly includes two parts: (1) image processing procedures and (2) 3D modeling of man-made objects such as buildings, roads and other structures. Line or feature extraction and stereo matching are usually utilized as image processing procedures, and geometrical data acquisition for the man-made objects is performed. However, automatic or semi-automatic man-made object modeling still faces several issues, including uncertainty in matching, extraction of man-made objects and spatial data acquisition. In particular, spatial data acquisition of buildings is important for reliable city modeling.
With this objective, this paper focuses especially on an efficient and robust line matching method using optical flow, which enables automatic building extraction, since lines give important information for building extraction and satisfactory results depend on rigorous line extraction and matching. Furthermore, building extraction using morphological opening and 3D modeling of urban areas are also investigated in this paper.
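A hedged sketch of optical-flow-based line matching, using pyramidal Lucas-Kanade tracking of line endpoints as a generic stand-in for the method described above; image types and the acceptance test are assumptions.

```python
# Track line endpoints from one aerial image into the next with pyramidal
# Lucas-Kanade optical flow; accept a line when both endpoints are tracked.
import cv2
import numpy as np

def match_lines(img0, img1, lines0):
    """img0, img1: 8-bit grayscale images; lines0: (n, 2, 2) line endpoints (x, y) in img0."""
    lines0 = np.asarray(lines0, np.float32)
    pts0 = lines0.reshape(-1, 1, 2)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
    pts1 = pts1.reshape(-1, 2, 2)
    ok = status.reshape(-1, 2).astype(bool).all(axis=1)   # both endpoints tracked
    return lines0[ok], pts1[ok]                           # corresponding line segments
```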
An algorithm is presented for the piecewise planar segmentation of 3D point clouds, which uses spatial subdivision into finite volume elements. For each volume element a local plane is fitted, and these planes are grouped to detect larger planar structures and construct a piecewise planar object model. The algorithm has a higher detection sensitivity for small object planes than previous global plane detection methods based on RANSAC fitting and plane sweeping. Experimental results are presented for a synthetic dataset, which was used to evaluate the algorithm's performance, and for a real dataset, which was used to compare it to other methods.
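An illustrative sketch of the first stage, assuming nothing about the authors' data structures: points are binned into voxels, a local plane is fitted to each voxel by SVD, and a coarse normal-quantisation key hints at how the local planes could be grouped into larger structures.

```python
# Fit a local plane per voxel of the point cloud (centroid + unit normal from
# the smallest singular vector), then group planes by a coarse normal key.
import numpy as np

def local_planes(points, voxel_size=0.5, min_points=10):
    """points: (n, 3); returns a list of (centroid, unit normal) pairs, one per voxel."""
    points = np.asarray(points, float)
    keys = np.floor(points / voxel_size).astype(int)
    planes = []
    for key in np.unique(keys, axis=0):
        pts = points[(keys == key).all(axis=1)]
        if len(pts) < min_points:
            continue
        centroid = pts.mean(axis=0)
        _, _, Vt = np.linalg.svd(pts - centroid)
        planes.append((centroid, Vt[-1]))            # smallest singular vector = plane normal
    return planes

def group_key(normal, angle_bins=18):
    """Coarse grouping key: quantised normal direction (a simple stand-in for stage two)."""
    theta = np.degrees(np.arccos(np.clip(abs(normal[2]), 0.0, 1.0)))
    phi = np.degrees(np.arctan2(normal[1], normal[0])) % 180
    return int(theta // (90 / angle_bins)), int(phi // (180 / angle_bins))
```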
This paper will present details of a coded target system that employs a Hough transform and segment matching to automatically recognise and identify the targets in digital images. The code system is based on a square surrounding the central circular target and will be described at a level of detail that would allow the system to be readily duplicated. Pre-detection processes, developed to improve the success rate under unfavourable conditions, and the tests conducted to validate a correct target match will also be described. Finally, the paper will include some examples of the use of the coded targets, drawn from calibrations of digital still cameras and underwater stereo-video systems.
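As a rough illustration only, the first detection step can be approximated by extracting edge segments with a probabilistic Hough transform as candidates for the square code border; the segment matching, code reading and validation tests described in the paper are not reproduced.

```python
# Extract candidate edge segments with a probabilistic Hough transform; these
# would feed the segment-matching stage that locates the square code border.
import cv2
import numpy as np

def candidate_segments(gray, canny_lo=50, canny_hi=150):
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=15, maxLineGap=3)
    return [] if segments is None else segments.reshape(-1, 4)   # (x1, y1, x2, y2) per segment
```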
Various applications such as meteorology, climatology or hydrology require information about soil hydraulic properties over large areas. Microwave radiometry is a promising approach to gather this type of information. The microwave emission from soils is strongly affected by the roughness of the soil surface. This effect therefore has to be quantified to obtain a reasonable estimation of the hydraulic properties. In a cooperation between the Institute of Terrestrial Ecology (Soil Physics) and the Institute of Geodesy and Photogrammetry, digital surface models of soils were generated to study the influence of surface roughness on the soil measurements. Accurate Digital Surface Models (DSM) can be derived by the application of photogrammetric measurement techniques and provide the spatial basis to extract roughness information. In this paper an approach to determine the roughness of the topsoil surface is presented.
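One simple roughness measure that such a DSM supports is the RMS height deviation after removing a best-fit plane; the sketch below shows this computation on a regular height grid and is only a generic example, not the approach evaluated in the paper.

```python
# RMS surface roughness of a DSM after detrending with a best-fit plane.
import numpy as np

def rms_roughness(dsm):
    """dsm: 2-D array of surface heights on a regular grid; returns the RMS height deviation."""
    dsm = np.asarray(dsm, float)
    rows, cols = np.indices(dsm.shape)
    A = np.column_stack([rows.ravel(), cols.ravel(), np.ones(dsm.size)])
    coeffs, *_ = np.linalg.lstsq(A, dsm.ravel(), rcond=None)
    residuals = dsm.ravel() - A @ coeffs     # heights relative to the best-fit plane
    return np.sqrt(np.mean(residuals ** 2))
```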
The growing interest in 3D modeling of both single objects and whole environments is closely related to the availability of ever more powerful computing and surveying devices. A new set of issues has to be addressed in the 3D modeling of real objects. A large amount of data about the object surface or volume is needed, which then has to be aggregated, regardless of the data format and the acquisition device used, in order to obtain the final model. In practice, data registration requires an approximate estimate of the alignment between the acquired datasets. This step is often time-consuming, increases the final cost of the 3D model, and represents the major obstacle to the widespread use of real object models. Taking this drawback into account, a fully automatic range data registration system has been developed. This system is able to execute all the steps needed for 3D modeling of real objects automatically, or at least with as little human intervention as possible, using no information other than the range data. In this paper an overview of the whole registration system is presented, focusing on the integration between its two main blocks. In the first block, overlapping areas between range image pairs are detected by means of spin-images and an initial approximate alignment between image pairs is computed. Then, in the second block, this estimate is refined by a cascade of two registration algorithms: a frequency-domain method and ICP. Some interesting applications of the proposed strategy to 3D modeling of cultural heritage objects are also reported.
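A hedged sketch of the final refinement stage only: textbook point-to-point ICP starting from the approximate spin-image/frequency-domain alignment. The nearest-neighbour search, update rule and stopping criterion are generic choices, not the authors' implementation.

```python
# Point-to-point ICP refinement of a roughly pre-aligned source cloud onto a
# target cloud: iterate nearest-neighbour matching and a closed-form rigid update.
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, iterations=30):
    """source, target: (n, 3) and (m, 3) roughly pre-aligned point clouds."""
    src = np.asarray(source, float).copy()
    tgt = np.asarray(target, float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                      # closest target point for each source point
        matched = tgt[idx]
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cm - R @ cs
        src = src @ R.T + t                           # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total                           # refined rigid transform
```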
Statistical modeling of signal/image data has been used extensively for recognition and estimation, and principal component analysis has been very popular for statistical signal modeling and analysis. In this paper, we present a system to build a 3D statistical head model from incomplete data. In this system, we first transformed the 3D head scan data points into cylindrical coordinates to obtain 2D surface maps. After these 2D surface maps were aligned, we computed the associated mean vector and covariance matrix. Then, the principal component analysis technique was applied to compute the principal components and the corresponding eigenvalues of the covariance matrix. Experimental results are given to show the 3D head shape variations obtained from the computed 3D statistical model.
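A minimal sketch of the statistical modelling step, assuming the aligned cylindrical surface maps are stacked as row vectors: the mean and the principal components with their eigenvalues are obtained from an SVD of the centred data, and new head shapes are synthesised as the mean plus weighted components.

```python
# Build a PCA shape model from aligned surface maps and synthesise new shapes.
import numpy as np

def build_statistical_model(surface_maps, n_components=20):
    """surface_maps: (n_heads, n_samples) aligned, flattened cylindrical surface maps."""
    X = np.asarray(surface_maps, float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigenvalues = (S ** 2) / (len(X) - 1)             # eigenvalues of the covariance matrix
    components = Vt[:n_components]                    # principal shape modes (rows)
    return mean, components, eigenvalues[:n_components]

def synthesise_head(mean, components, weights):
    """Shape variation example: mean plus a weighted sum of principal components."""
    return mean + np.asarray(weights) @ components
```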
Wearable 3D measurement makes it possible to acquire 3D information about an object or an environment using a wearable computer. In Japan, mobile phones can already transmit voice and sound as well as pictures, and capturing and sending short movie clips is becoming easy. At the same time, computers are becoming compact and powerful and can easily connect to the Internet over wireless LAN. In the near future, wearable computers will be usable anywhere and at any time, so three-dimensional data measured by a wearable computer could be transmitted as a new kind of data. This paper proposes a method and system for measuring three-dimensional data of an object with a wearable computer. The method uses slit light projection for 3D measurement and the user's motion instead of a scanning system.
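The slit-light principle can be sketched as a ray-plane intersection: a detected slit pixel defines a viewing ray, and the 3D point is where that ray meets the known light plane. The camera intrinsics and light-plane parameters are assumed to come from calibration and the tracked user motion.

```python
# Intersect the viewing ray of a detected slit pixel with the known light plane
# (n . X + d = 0, expressed in the camera frame) to obtain a 3D surface point.
import numpy as np

def intersect_ray_with_light_plane(pixel, K, plane_normal, plane_d):
    """pixel: (u, v); K: 3x3 camera intrinsics; plane_normal, plane_d: light-plane parameters."""
    n = np.asarray(plane_normal, float)
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])   # viewing-ray direction
    denom = n @ ray
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to the light plane
    s = -plane_d / denom                              # camera centre is the ray origin
    return s * ray                                    # 3D point on the slit
```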
A non-contact method for digitizing analogue irregular objects is offered in order to automate the reconstruction of virtual models. The original Compact Video Digitizing System, which carries out the video digitizing method, is described. The principle scheme of the irregular-object video digitizing process and its mathematical modelling are also considered. To make the manufacturing of irregular products as effective as possible, the application of the offered video digitizing method is shown within an Objects Recursive Creation Computer Technology, which is realized by a Compact Reverse Engineering System.
This paper offers an introduction to the computer assembly and simulation of ancient buildings. Pioneering research was carried out by surveying and mapping investigators who described ancient Chinese timber buildings by 3D frame graphs with computers. However, users can only grasp the structural layers and the assembly process of these buildings if the frame graphs are processed further by computer. This can be implemented with computer simulation techniques, which display the raw data on the screen of a computer and interactively manage them by combining technologies from computer graphics and image processing, multimedia technology, artificial intelligence, highly parallel real-time computation and human behavior science. This paper presents the implementation procedure of the simulation of large wooden buildings, as well as the 3D dynamic assembly of these buildings in the 3DS MAX environment. The results of the computer simulation are also shown in the paper.
The acquisition of spatial data is a fundamental problem in multi-dimensional and dynamic GIS construction and infrastructure. Ground-based mobile laser scanning systems, which are mainly used for the reconstruction of 3D cities and the acquisition of local geographic information, play an important role in rebuilding 3D spatial objects. Such integrated systems commonly rely on GPS/INS sensors for positioning. In this paper, our research is focused on multi-sensor integration without GPS/DGPS/INS. The application system we developed is a ground-based mobile platform upon which multiple sensors were integrated. In the system, the relative positioning sensor we employed is a rotary encoder, which determines the relative position of the platform with respect to its original position, together with the laser scanner's posture. The laser scanner measures the distances between the platform and the object. All data were transferred over a wireless link to the server located in the office. The wireless modem we applied provides reliable wireless data communication for either point-to-point or multipoint applications. The outline of the system, its principles and algorithms are presented, together with some trials and experiences. Finally, some conclusions and further research work are presented.
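As an illustration of relative positioning from rotary encoders (the paper does not spell out its model, so the differential-drive geometry below is an assumption), wheel travel and heading increments can be integrated by dead reckoning to give the platform pose relative to its starting position.

```python
# Dead reckoning from left/right encoder tick counts, assuming a differential-
# drive platform; encoder resolution and wheel geometry are placeholders.
import numpy as np

def dead_reckoning(tick_pairs, ticks_per_rev=1024, wheel_radius=0.15, wheel_base=0.8):
    """tick_pairs: iterable of (left_ticks, right_ticks) per sampling interval."""
    x, y, heading = 0.0, 0.0, 0.0
    poses = [(x, y, heading)]
    per_tick = 2 * np.pi * wheel_radius / ticks_per_rev   # travelled distance per tick
    for left, right in tick_pairs:
        dl, dr = left * per_tick, right * per_tick
        ds, dtheta = (dl + dr) / 2.0, (dr - dl) / wheel_base
        x += ds * np.cos(heading + dtheta / 2.0)
        y += ds * np.sin(heading + dtheta / 2.0)
        heading += dtheta
        poses.append((x, y, heading))
    return poses                                          # pose relative to the start position
```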
New ideas and solutions never come alone. Although automated feature extraction is not sufficiently mature to move from the realm of scientific investigation into the category of production technology, a new goal has arisen: 3D simulation of real-world objects extracted from images. This task, which evolved from feature extraction and is not an easy task itself, becomes even more complex, multi-leveled, and often uncertain and fuzzy when one exploits time-sequenced multi-source remotely sensed visual data. The basic components of the process are familiar image processing tasks: fusion of various types of imagery, automatic recognition of objects, removing those objects from the source images, and replacing them in the images with their realistic simulated "twin" object renderings. This paper discusses how to aggregate the most appropriate approach to each task into one technological process in order to develop a Manipulator for Visual Simulation of 3D objects (ManVIS) that is independent of imagery, format, and media. The technology could be made general by combining a number of competent special-purpose algorithms under appropriate contextual, geometric, spatial, and temporal constraints derived from a priori knowledge. This could be achieved by planning the simulation in an Open Structure Simulation Strategy Manager (O3SM), a distinct component of ManVIS that builds the simulation strategy before actual image manipulation begins.