This paper provides an overview of our project on 3D object and face modeling from images taken by a free-moving camera. We strive to advance the state of the art in 3D computer vision and to develop flexible and robust techniques that let ordinary users gain 3D experience from a set of casually collected 2D images. Applications include product advertisement on the Web, virtual conferencing, and interactive games. We briefly cover the following topics: camera calibration, stereo rectification, image matching, 3D photo editing, object modeling, and face modeling. Demos of the last three topics will be shown during the conference.
We are all experts in the perception and interpretation of faces and their dynamics. This makes facial animation a particularly demanding area of graphics. Increasingly, computer vision is brought to bear, and 3D models and their motions are learned from observations. This paper subscribes to that strand for the 3D modeling of human speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from faces with a few markers. A 3D reconstruction of a speaking face is produced for each video frame. A topological mask of the lower half of the face is fitted to the motion. The 3D shape statistics are extracted, and principal components analysis reduces the dimension of the mask space. The final speech tracker can work without markers, as it is only allowed to roam this constrained space of masks. Once the different visemes are represented in this space, speech or text can be used as input for animation.
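The dimensionality reduction step can be sketched with standard principal components analysis; the array shapes, subspace dimension, and helper name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pca_reduce(shapes, n_components):
    """Reduce a set of flattened 3D mask shapes (one row per frame)
    to a low-dimensional space of principal components."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]          # (k, d) principal axes
    coords = centered @ basis.T        # (n, k) coordinates in mask space
    return mean, basis, coords

# toy usage: 50 "frames" of a 30-dim shape vector lying near a 2D subspace
rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 2))
mixing = rng.normal(size=(2, 30))
shapes = latent @ mixing + 5.0
mean, basis, coords = pca_reduce(shapes, 2)
```

Since the toy data lie exactly in a 2D affine subspace, the top two components reconstruct it to numerical precision.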
An automatic body measurement system is essential for apparel mass customization. This paper introduces the development of a body-scanning system using the multi-line triangulation technique, and methods for body size extraction and body modeling. The scanning system can rapidly acquire the surface data of a body, provide accurate body dimensions, many of which are not measurable with conventional methods, and also construct a body form based on the scanned data as a digital model of the body for 3D garment design and for virtual try-on of a designed garment.
Currently three different angiographic techniques are used to measure and visualize major blood vessels in the human body: magnetic resonance (MR), computed tomography (CT) and digital subtraction (DS) angiography. Although these imaging systems have already been qualitatively compared, a quantitative assessment is still missing. The goal of this work is to provide a tool enabling a quantitative comparison of the three imaging techniques against an unbiased reference. MR, CT and DS angiographies are first performed on a corpse. Then a cast of the abdominal aorta and its main branches is prepared, removed from the body and measured with photogrammetric methods. The elongated and thin cast is fixed in a 3D frame with 16 signalized small spheres used for calibration and orientation purposes. Three fixed CCD cameras acquire triplets of images of the cast, which is turned to 8 positions. In order to perform multi-image matching, an artificial random texture is projected onto the object. For each triplet of images, a semi-automated matching process based on least squares matching determines a dense set of corresponding points. Their 3D coordinates are then computed by forward intersection, with a mean standard deviation of about 0.2 mm. The results from the 8 positions are merged into a 3D point cloud, and an adequate filter is applied to remove the noise and the redundancy in the overlapping regions. The paper describes the basic design of the system and the measurement methods. Furthermore, some preliminary results are presented.
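Forward intersection of matched rays can be sketched as a linear least-squares problem; the camera geometry and names below are an illustration, not the authors' setup:

```python
import numpy as np

def forward_intersection(centers, directions):
    """Least-squares intersection of several rays.
    centers: (n, 3) camera centres; directions: (n, 3) ray directions."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(np.asarray(centers, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        # Projector onto the plane perpendicular to the ray: penalizes
        # the point's perpendicular distance from the ray.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# toy check: three rays through the point (1, 2, 3)
target = np.array([1.0, 2.0, 3.0])
centers = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0]], float)
dirs = target - centers
point = forward_intersection(centers, dirs)
```

With noisy rays the same solve returns the point minimizing the sum of squared perpendicular distances to all rays.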
Virtual production for broadcast is currently used mainly in the form of virtual studios, where the resulting media is a sequence of 2D images. With the steady increase of 3D computing power in home PCs and the technical progress in 3D display technology, the content industry is looking for new kinds of program material that make use of 3D technology. The applications range from the analysis of sport scenes and 3DTV up to the creation of fully immersive content. In a virtual studio, a camera films one or more actors in a controlled environment. The pictures of the actors can be segmented very accurately in real time using chroma keying techniques, and the isolated silhouette can be integrated into a new synthetic virtual environment using a studio mixer. The resulting shape description of the actors is so far 2D. For the realization of more sophisticated optical interactions of the actors with the virtual environment, such as occlusions and shadows, an object-based 3D description of scenes is needed. However, the requirements on shape accuracy, and the kind of representation, differ according to the application. This contribution gives an overview of requirements and approaches for the generation of an object-based 3D description in various applications studied by the BBC R&D department. An enhanced virtual studio for 3D programs is proposed that covers a range of applications for virtual production.
This paper presents a novel approach for constructing multiresolution surface models from a set of calibrated images. The output is a texture-mapped triangular surface mesh that best matches all the input images. The mesh is obtained by deforming a generic initial mesh such as a sphere or cube according to image- and geometry-based forces. This technique has the following key features: (1) the initial mesh is able to converge to the object surface from arbitrarily far away, (2) the resolution of the final mesh adapts to the local complexity of the object, (3) sharp corners and edges of the object surface are preserved in the final mesh, (4) occlusion is correctly modeled during convergence, (5) the re-projection error of the final mesh is optimized, and (6) the output is ideally suited for rendering by existing graphics hardware. The approach is shown to yield good results on real image sequences.
Nowadays the ability to create panoramic photographs is included with most commercial digital cameras. The principle is to shoot several pictures and stitch them together to build a panorama. To ensure the quality of the final image, the different pictures have to be perfectly aligned and the colors of the images should match. While the alignment of images has received a lot of attention from the computer vision community, the mismatch in colors has often been ignored and handled using smooth transitions from one picture to the next to mask it. This paper presents a method to simultaneously estimate the alignment of the pictures and the color transformation between them. By estimating the color transformation from the scene to the pixels, the method is able to remove the color mismatch between the different images, and thus leads to better image quality.
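A minimal sketch of the color-matching idea, assuming a simple per-channel gain between overlapping pictures (the transformation from scene to pixels estimated in the paper is richer than this):

```python
import numpy as np

def estimate_gain(overlap_a, overlap_b):
    """Least-squares per-channel gain g such that overlap_b ≈ g * overlap_a
    for corresponding pixels in the region where two pictures overlap."""
    a = overlap_a.reshape(-1, overlap_a.shape[-1]).astype(float)
    b = overlap_b.reshape(-1, overlap_b.shape[-1]).astype(float)
    # closed-form least squares per channel: g = <a, b> / <a, a>
    return (a * b).sum(axis=0) / (a * a).sum(axis=0)

# toy check: a second "exposure" of the same patch with known RGB gains
rng = np.random.default_rng(1)
patch = rng.uniform(0.1, 0.9, size=(8, 8, 3))
gains = np.array([1.2, 0.9, 1.05])
recovered = estimate_gain(patch, patch * gains)
```

Dividing one image by its estimated gain before blending removes the exposure step that smooth transitions would otherwise only hide.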
Photogrammetry is increasingly involved in the production of 3D walk-throughs and fly-throughs. This production is costly with the current standard rendering technique, which relies on detailed polygon modeling and huge amounts of computation. The emerging image-based rendering techniques are superior alternatives in that they require no polygon models and their rendering speed is independent of scene complexity. In order to make photogrammetry more visualization-capable, a new strategy, 'visualization first, modeling second', is proposed in this paper. This strategy facilitates the production of 3D walk-throughs directly from captured images using image-based rendering techniques. It makes interactive photogrammetric modeling more efficient by using the 3D walk-through as the user interface. It also provides for the joint use of image-based and polygon-based rendering to perfect the 3D walk-through. The paper presents the details of this new strategy after reviewing both rendering techniques.
Trends in the utilization of airborne laser scanning are introduced. The first airborne laser scanner in Japan was developed in 1996, and airborne laser scanning has gained much attention recently. Research on airborne laser scanning conducted by the Geographical Survey Institute, the governmental organization for surveying and mapping, is presented, including accuracy testing, 3D city modeling, detailed landform measurement of slopes, and measurement of crustal deformation caused by a volcano.
In the field of forestry, field measurements have been made for decades in order to estimate tree stem volumes in a certain area. This has been done to obtain information about the growth of the forest over certain time periods. The estimation is based on sample plots: in a plot, one diameter of each tree stem and its distance from the center of the plot are measured. Until now the measurements have been made manually with a tape measure. This paper presents a procedure that uses video measurement to obtain the required information. A coordinate system is created on site, and 3D stem volume models are estimated from video image measurements.
3D spatial visualization by computer has become common nowadays, as exemplified by computer games, and there are many applications for 3D geographic information. However, current methods of providing data are not sufficient for 3D representation, and laser measurement is one data-provision technique that addresses this point. We discuss the problems that occur in the actual data-provision process. Two cases are presented in this paper: recreating urban street space and grasping the situation at a disaster site. In the former case, the ground-based laser scanner presents no problem for measurement, but the software available as a modeling tool is insufficiently developed. The latter case shows that the ground-based laser scanner is an efficient way to understand the situation at disaster sites quickly and effectively.
ARPENTEUR is a web application for digital photogrammetry mainly dedicated to architecture. ARPENTEUR has been developed since 1998 by two French research teams: the 'Photogrammetry and Geomatics' group of the ENSAIS-LERGEC laboratory and the MAP-gamsau CNRS laboratory located in the school of Architecture of Marseille. The software package is a web-based tool, since photogrammetric concepts are embedded in Web technology and the Java programming language. The aim of this project is to propose a photogrammetric software package and 3D modeling methods available on the Internet as applets through a simple browser. The use of Java and the Web platform offers many advantages: distributing software to any platform, at any place connected to the Internet, is of course very promising, and updating is done directly on the server, so the user always works with the latest release installed there. Three years ago the first prototype of ARPENTEUR was based on the Java Development Kit, at the time available only for some browsers. Nowadays we are working with the JDK 1.3 plug-in enriched by the Java Advanced Imaging library.
This paper presents a component approach that combines in a seamless way the strong features of laser range acquisition with the visual quality of purely photographic approaches. The relevant components of the system are: (i) panoramic images for distant background scenery where parallax is insignificant; (ii) photogrammetry for background buildings; and (iii) highly detailed laser-based models for the primary environment, i.e. the structure of building exteriors and room interiors. These techniques have a wide range of applications in visualization, virtual reality, cost-effective as-built analysis of architectural and industrial environments, building facilities management, real estate, e-commerce, remote inspection of hazardous environments, TV production and many others.
Creating geometrically correct and complete 3D models of complex environments remains a difficult problem. Techniques for 3D digitizing and modeling have been advancing rapidly over the past few years, although most focus on single objects or specific applications such as architecture and city mapping. The ability to capture details and the degree of automation vary widely from one approach to another. One can safely say that there is no single approach that works for all types of environment and is at the same time fully automated and satisfies the requirements of every application. In this paper we show that for complex environments, those composed of several objects with various characteristics, it is essential to combine data from different sensors and information from different sources. Our approach combines models created from multiple images, single images, and range sensors. It can also use known shapes, CAD models, existing maps, survey data, and GPS. 3D points in the image-based models are generated by photogrammetric bundle adjustment, with or without self-calibration depending on the image and point configuration. Both automatic and interactive procedures are used, depending on the availability of a reliable automated process. Producing high-quality, accurate models, rather than full automation, is the goal. Case studies in diverse environments are used to demonstrate that all the aforementioned features are needed for environments with a significant amount of complexity.
The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique; it can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space, consisting of meshes and texture maps, is calculated from them with software developed at NRC. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC, which allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.
This paper reports on initial investigations into appropriate calibration models for, and the reliability and stability of the calibration of, the Kodak DC200 series cameras. Results of tests of different types of digital still cameras are first compared with the DC200 series in general, and then various calibration tests of a DC265 camera are presented and analyzed. Block-invariant and photo-invariant camera calibration models are compared to ascertain their suitability for the physical variability of the cameras. In conclusion, the paper makes some recommendations on the potential reliability and stability of the Kodak DC265 camera.
We present our work on the implementation and calibration of a multisensor measuring system. The work is part of a large-scale research project on optical measurement using sensor-actuator coupling and active exploration. This project is a collaboration of researchers from seven institutes of the University of Stuttgart, including photogrammetry, mechanical engineering and computer science. The system consists of optical sensors whose position and orientation can be manipulated by robot actuators, and light sources which control illumination. The system performs different tasks including object recognition, localization and gauging. Flexibility is achieved by replacing the common serial measurement chain with nested control loops involving autonomous agents which perform basic tasks in a modular fashion. The system is able to inspect and gauge several parts from a set of parts stored in a 3D model database. The paper gives an overview of the entire system and details some of the photogrammetry-related aspects, such as the calibration of the different sensors, the calibration of the measurement robot using photogrammetric measurements, and data processing steps like segmentation, object pose determination, and gauging.
The recent emergence of high-resolution laser scanning technology offers unprecedented levels of data density for close range metrology applications such as deformation monitoring and industrial inspection. The scanner's pulsed laser ranging device, coupled with beam deflection mechanisms, facilitates rapid acquisition of literally millions of 3D point measurements. Perhaps the greatest advantage of such a system lies in the high sample density, which permits accurate and detailed surface modeling as well as superior visualization relative to existing measurement technologies. As with any metrology technique, measurement accuracy is critically dependent upon instrument calibration. This aspect has been, and continues to be, an important research topic within the photogrammetric community. Ground-based laser scanners are no exception, and appropriate calibration procedures are still being developed. The authors' experience has shown that traditional sensor calibration techniques cannot, in some instances, be directly applied to laser scanners. This paper details an investigation into the calibration and use of the Cyrax 2400 3D laser scanner. With its variable spatial resolution and high accuracy, the Cyrax offers great potential for close range metrology applications. A series of rigorous experiments was conducted in order to quantify the instrument's precision and accuracy.
In this paper, we evaluate the accuracy and resolution of a 3D laser scanner prototype that tracks in real time and computes the relative pose of objects in 3D space. This prototype was developed specifically to study the use of such a sensor for space applications. The main objective of the project is to provide a robust sensor to assist in the assembly of the International Space Station, where high tolerance to ambient illumination is paramount. The laser scanner uses triangulation-based range data and photogrammetric methods to calculate the relative pose of objects. Range information is used to increase the accuracy of the sensing system and to remove erroneous measurements. Two high-speed galvanometers and a collimated laser beam address individual targets mounted on an object at a resolution corresponding to an equivalent imager of 10000 by 10000 pixels. Knowing the position coordinates of predefined targets on the objects, their relative poses can be computed using either the scanner's calibrated 3D coordinates or spatial resection methods.
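Computing an object's pose from matched 3D target coordinates can be sketched with the standard least-squares rigid alignment (Kabsch) method; this is a generic illustration, not the scanner's actual pose code:

```python
import numpy as np

def rigid_pose(model_pts, measured_pts):
    """Best-fit rotation R and translation t with measured ≈ R @ model + t
    (least-squares rigid alignment), given matched 3D target coordinates."""
    mc, sc = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (model_pts - mc).T @ (measured_pts - sc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t

# toy check: rotate a target constellation 30 deg about z and shift it
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
measured = model @ R_true.T + t_true
R, t = rigid_pose(model, measured)
```

With noisy target measurements the same closed form returns the least-squares pose, which is why a handful of well-distributed targets suffices.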
This paper assesses some of the practical ramifications of recent developments in estimating vision parameters given information characterizing the uncertainty of the data. This uncertainty information may sometimes be estimated in association with the observation process, and is usually represented in the form of covariance matrices. An empirical study is made of the conditions under which improved parameter estimates can be obtained from data when covariance information is available. We explore, in the case of fundamental matrix estimation and conic fitting, the extent to which the noise should be anisotropic and inhomogeneous if improvements over traditional methods are to be obtained. Critical in this is the devising of synthetic experiments in which noise conditions can be precisely controlled. Given that covariance information is itself subject to estimation error, tests are also undertaken to determine the impact of imprecise covariance information upon the quality of parameter estimates. We thus investigate the consequences for parameter estimation of inaccuracies in the characterization of noise that inevitably arise in practical computation.
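The benefit of exploiting per-observation uncertainty can be illustrated, in the scalar-variance case, by inverse-variance weighting in a simple line fit; the fundamental-matrix and conic estimators studied in the paper generalize this idea to full covariance matrices:

```python
import numpy as np

def weighted_line_fit(x, y, var):
    """Fit y = a*x + b when each observation carries its own noise variance
    (inhomogeneous noise): weight each residual by the inverse variance."""
    w = 1.0 / np.asarray(var, dtype=float)
    A = np.stack([x, np.ones_like(x)], axis=1)
    AtW = A.T * w                              # scale columns by weights
    return np.linalg.solve(AtW @ A, AtW @ y)   # weighted normal equations

# toy check with noiseless data (any weighting must recover the exact line)
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0
var = np.linspace(0.1, 5.0, 20)   # hypothetical per-point variances
a, b = weighted_line_fit(x, y, var)
```

Under inhomogeneous noise this down-weights unreliable points; if all variances are equal it reduces to ordinary least squares.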
Target tracking and trajectory building are essential elements of vision-based motion capture systems for biomechanics applications. Optimal performance of a multi-camera, real-time videometric system for the recovery of motion analysis parameters depends not only on the incorporation of rigorous photogrammetric processes, but also upon robust and reliable object point tracking. This paper discusses developments related to object point tracking within a motion capture system being developed for use in a biomechanics laboratory. The path generation sequence for object targets is first outlined, after which an account of the formulation of a kinematic model based on the alpha-beta-gamma filter/predictor is provided. The process of tracking in both 2D image space and 3D object space is then discussed, and initial results from practical experiments are summarized. These results indicate that the proposed object tracking and trajectory building approach can provide robust performance and an operation speed sufficient to allow real-time 3D target tracking at standard video frame rates and beyond.
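A one-dimensional sketch of the alpha-beta-gamma filter/predictor mentioned above; the gains, time step, and the particular gain convention for the acceleration update are illustrative choices, not the paper's tuned values:

```python
def abg_filter(measurements, dt, alpha, beta, gamma):
    """alpha-beta-gamma tracker: predict position, velocity and acceleration
    with a constant-acceleration model, then correct each state with a
    fixed gain applied to the measurement residual."""
    x = v = a = 0.0
    estimates = []
    for z in measurements:
        # predict with the constant-acceleration motion model
        xp = x + v * dt + 0.5 * a * dt * dt
        vp = v + a * dt
        r = z - xp                          # innovation (residual)
        # correct each state component with its fixed gain
        x = xp + alpha * r
        v = vp + beta * r / dt
        a = a + gamma * r / (0.5 * dt * dt)
        estimates.append(x)
    return estimates

# toy check: track a target moving at a constant 0.2 units per frame
track = abg_filter([0.2 * k for k in range(200)], dt=1.0,
                   alpha=0.5, beta=0.4, gamma=0.1)
```

The prediction step is what supplies the next-frame search position that makes image-space target matching fast and robust.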
This paper describes the implementation of an LCD stripe projection system based on a new processing scheme called line shift processing. One advantage of the method is that the projection device can, but is not required to, be stable over time, nor does it have to be calibrated. The new method therefore allows us to use an off-the-shelf multimedia projector to acquire dense 3D data of large objects in a matter of seconds. Since we are able to control our source of illumination, we can also acquire registered color information using standard monochrome cameras.
This paper presents the main features of an optical instrument for 3D vision based on the projection of structured light. The envisaged application is the non-contact, fast acquisition of point clouds both for dimensional and quality control and for reverse engineering. The components of the system are a liquid crystal projector, projecting fringe patterns onto the target, and a video camera for the acquisition of the patterns. The measurement technique developed to process the patterns and retrieve the depth information is based on the combination of the Gray code and phase shift methods. It yields an extended measuring range at high resolution, and allows the measurement of a wide variety of objects characterized by shape discontinuities and fine surface details. The digitization of large objects is carried out by acquiring multiple views and aligning them in a global reference system; to this aim, suitable rototranslation matrices are computed and used to perform the transformation. In an extensive set of experiments carried out to evaluate the measurement performance, good linearity has been observed, and an overall variability of the measurement error of +/- 35 micrometers has been estimated for each single view. The error due to the alignment of multiple views is within 0.1 mm.
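The phase-shift half of the Gray-code/phase-shift combination can be sketched with the standard four-step formula (the Gray-code bits then disambiguate the 2-pi fringe order); the pattern amplitudes here are arbitrary assumptions:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four images of the fringe pattern shifted by
    0, 90, 180 and 270 degrees: I_k = A + B*cos(phi + k*pi/2)."""
    # I3 - I1 = 2B*sin(phi), I0 - I2 = 2B*cos(phi): the offset A and
    # modulation B cancel, leaving the phase directly.
    return np.arctan2(i3 - i1, i0 - i2)

# toy check: synthesize the four shifted patterns and recover the phase
phi = np.linspace(-3.0, 3.0, 101)      # true phase, within (-pi, pi)
A, B = 0.5, 0.4                        # offset and modulation (arbitrary)
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

Because the formula is a ratio, it is insensitive to ambient offset and reflectivity, which is part of why phase shifting resolves fine surface detail.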
This paper presents a system for 3D vision based on the projection of two-dimensional patterns of incoherent light and on phase coding. A novel projection scheme is exploited: two gratings at different wavelengths are combined into a single pattern and demodulated in the natural domain of the signal to retrieve the depth information. Two phase maps are determined, whose sensitivity to height variations is proportional to the wavelength of the pattern gratings: the phase unwrapping is performed by compensating the phase ambiguity of the finest grating with the information coming from the coarse one. Thus, both high measurement resolution and an extended height range are obtained. The approach requires the acquisition of only one image, shows good robustness against fine variations of the fringe period, and adapts well to the measurement of free-form shapes. In this paper, the phase demodulation procedure and the unwrapping algorithm are detailed, and the accuracy of the measurement is discussed.
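The core of the dual-wavelength unwrapping idea can be shown in a few lines: the coarse wrapped phase gives a rough absolute position, which is used to pick the fringe order of the fine wrapped phase. This is a generic sketch of the technique (the names and the one-dimensional setting are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def unwrap_dual(phi_fine, phi_coarse, p_fine, p_coarse):
    """Resolve the fringe order of a fine wrapped phase using a
    coarse wrapped phase.  Both phases are in [0, 2*pi); p_fine and
    p_coarse are the corresponding fringe periods, p_coarse > p_fine."""
    x_coarse = phi_coarse / (2 * np.pi) * p_coarse    # rough position
    k = np.round(x_coarse / p_fine - phi_fine / (2 * np.pi))  # fringe order
    return (k + phi_fine / (2 * np.pi)) * p_fine      # refined position
```

The result inherits the resolution of the fine grating while the coarse grating extends the unambiguous range, which is exactly the trade-off the abstract describes.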
3D shape measurement from images is used in various fields, such as computer vision, robot vision, and CAD. In these applications, the 3D measurement is often required to be fast. For targets with diffuse surface reflection characteristics, active methods are effective for high-speed measurement. Active methods include the slit light projection method, space encoding with a pattern projection, and the time-series encoding pattern projection method, among others. However, each of these methods has a problem with respect to speed: at least several projections are necessary for a measurement. In this paper, concentric circles are used as the projection pattern, so that an object can be measured with a single projection. Experiments confirmed that 3D measurement using concentric circles is possible.
This paper proposes a 3D measurement principle for the correlation image sensor (CIS), which generates the temporal correlation between the light intensity and an external reference signal at each pixel. Another key of our system, besides the CIS, is amplitude modulation of the scanning sheet beam, whose phase relative to a reference signal is varied according to the scanning angle. After a scan within a frame, the phase is demodulated with a quadrature pair of reference signals and output by the CIS to compute the individual angle of the sheet beam at each pixel. By virtue of the lock-in detection principle, the effects of background illumination and/or surface reflectance nonuniformity of the object are thoroughly removed. We implemented this system using our CMOS 64 x 64-pixel CIS, and successfully reconstructed a depth map at its frame rate.
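The lock-in demodulation with a quadrature reference pair can be illustrated in software. Correlating the intensity samples against cosine and sine references cancels the constant background and the unknown gain, leaving only the modulation phase — which is why, as the abstract states, background illumination and reflectance nonuniformity drop out. This is a hedged sketch of the principle, not the sensor's actual circuitry:

```python
import numpy as np

def lock_in_phase(intensity, omega, t):
    """Recover theta from samples I(t) = A + B*cos(omega*t + theta)
    by correlating with a quadrature pair of references over whole
    periods.  The offset A and gain B cancel out."""
    c = np.sum(intensity * np.cos(omega * t))   # in-phase correlation
    s = np.sum(intensity * np.sin(omega * t))   # quadrature correlation
    return np.arctan2(-s, c)
```

In the paper's system this phase, demodulated per pixel, encodes the sheet-beam angle and hence the depth by triangulation.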
3D models are important products of measurement systems in a variety of applications such as archaeology, medicine, architecture and engineering. To be appropriate for the end user, such object models must comply with certain accuracy standards. Achieving both accuracy and object coverage with a close-range photogrammetric approach often requires a convergent image network in combination with retro-reflective targets or some form of projected pattern. Retro-reflective targets alone can only provide sparse data. However, by optimizing image acquisition to record both targets and natural surface texture information, proven high-accuracy retro-reflective target image measurement can be combined with a wealth of automated image analysis algorithms developed by both the photogrammetric and machine vision communities.
The number of pixels in consumer digital still cameras has recently been increasing remarkably thanks to modern semiconductor and digital technology. The largest pixel count of a consumer digital camera was 0.8 million in 1996, and techniques for transmitting images to a PC were then receiving attention. Only 4 years later, in 2000, there were 25 kinds of consumer digital cameras with more than 3 million pixels on the market in Japan. The functionality for transmitting images to a PC is standardized, and the price is less than 1000 US dollars. In these circumstances, 3-megapixel consumer digital still cameras are expected to become useful tools in various real-time imaging fields, e.g. industry, machine and robot vision, archeology, architecture, construction management, and so on. With this objective, the performance of 3-megapixel consumer cameras for digital photogrammetry is evaluated in this paper.
The demand for mobile 3D measurement systems for industrial applications has increased rapidly in recent years. Some optical, photogrammetry-based 3D systems are available and in daily use. In this paper a new approach is described that combines optical and tactile techniques using inverse photogrammetry. The approach and its practical applications are described in the following.
In a cooperation between the Department of Veterinary Surgery at the University of Zurich and the Institute of Geodesy and Photogrammetry at ETH Zurich, a system for the measurement of 3D deformations of horse hooves under different load conditions has been developed. The paper describes the basic design of the system, discusses a calibration strategy, and presents first results.
The concrete problem whose solution is presented in this paper is the measurement of the diameter of pulp extractors 1 mm from their operating end, and their automatic sorting. The range of measured sizes is 180-260 micrometers; the required measurement accuracy is 1 micrometer. Sorting is carried out into 8 subranges of 10 micrometers each. The ellipticity of a pulp extractor is additionally analyzed and used as a quality index. A comparative analysis of different tools with respect to metrological performance and implementation cost led to the selection of a television system as the class on the basis of which the required system should be built. Problems solved in building the system include: the use of short-focus lenses with high magnification to achieve the 1 micrometer measurement accuracy; accounting for the influence of lens aberrations on the measurement error; the use of cameras with pixel sizes of 0.7-1 micrometers; determination of the line number corresponding to the gauge diameter of a pulp extractor; statistical averaging and extrapolation of the measurement data; and the analysis of system variants with the aim of simplification and cost reduction.
In this paper, we propose a method for generating arbitrary views from multiple images taken with an uncalibrated camera system. In the projective grid space (PGS), the 3D space defined by the epipolar geometry between two basis cameras among the multiple cameras, we reconstruct a 3D shape model from the silhouette images of the multiple cameras. For the shape reconstruction in the PGS, the multiple cameras do not have to be fully calibrated, but the fundamental matrix of every camera with respect to the two basis cameras must be known. By using the 3D model reconstructed in the PGS, we can obtain the point correspondence between an arbitrary pair of images, from which the image of an arbitrary view between the pair can be generated.
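The fundamental matrices are the only geometric knowledge this kind of method needs: a point x in one image constrains its match x' in another to the epipolar line l' = Fx, with x'ᵀFx = 0. The snippet below illustrates that constraint for the rectified-stereo fundamental matrix (a generic sketch of the geometry, not the paper's code):

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l' = F @ x in the second image for a homogeneous
    point x in the first image.  A correspondence x' must satisfy
    x'^T F x = 0, i.e. x' lies on l'.  The line (a, b, c) is scaled
    so that a^2 + b^2 = 1 (point-line distances become dot products)."""
    l = F @ x
    return l / np.linalg.norm(l[:2])
```

For example, with the rectified-stereo F = [[0,0,0],[0,0,-1],[0,1,0]], the epipolar line of any point is the horizontal scanline through it, so matches are confined to the same image row.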
Stereoscopic video capture, storage and display are critical tasks for several videometric applications and usually require specialized equipment. A low-cost configuration for stereoscopic video imaging has been developed in the Laboratory of Photogrammetry of the National Technical University of Athens. The system consists of two conventional video cameras connected to a PC through a multiple-input frame grabber board. Specialized software developed under Windows 98 guarantees accurate camera synchronization and uninterrupted image capture at 25-30 frames per second. Stereoscopic video playback is also accomplished without expensive hardware: the required software was developed on a low-price video card that supports LCD glasses for 3D computer gaming. The video sequence is first rectified to the normal case by creating epipolar images to facilitate stereo viewing. Accurate coordinate measurements can also be performed on the computer screen with a conventional mouse, if exterior orientation data are available.
We propose a novel method to synthesize high-resolution images by constructing a light field from an image sequence taken with a moving video camera. Our method integrates multiple frames, each of which only partly captures the object, by constructing a light field; this is quite different from general mosaicing methods. If the light field is constructed straightforwardly, blur and discontinuities are introduced into the generated images by depth variation of the object. In our method, the light field is optimized to remove this blur and these discontinuities, so that clear images can be generated. The optimized light field is adapted to the depth variation of the object surface, but the exact shape of the object is not necessary. Images of extremely high resolution, impractical for a real camera system, can be virtually generated from the light field. Results of an experiment on a book surface demonstrate the effectiveness of the proposed method.
Phase correlation is a very robust technique for estimating image translations, but it works only on monochromatic images. If the input is a color image, it must first be converted to monochrome, wasting part of the input information. In this work we extend the phase correlation algorithm to multi-component images such as RGB and multi-spectral images.
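One straightforward way to use all channels is to accumulate the normalized cross-power spectra of the channels before the inverse FFT. The sketch below shows that variant under stated assumptions (channel-wise summation, integer shifts); it is an illustration of the general idea, not necessarily the scheme proposed in the paper:

```python
import numpy as np

def phase_correlation_multichannel(a, b, eps=1e-12):
    """Estimate the integer translation of image b relative to image a
    (both H x W x C arrays, b(x) = a(x - d)) by summing the normalized
    cross-power spectra of all channels, then locating the peak of the
    inverse FFT."""
    acc = np.zeros(a.shape[:2], dtype=complex)
    for c in range(a.shape[2]):
        A = np.fft.fft2(a[..., c])
        B = np.fft.fft2(b[..., c])
        R = B * np.conj(A)                 # cross-power spectrum
        acc += R / (np.abs(R) + eps)       # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(acc))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative (wrapped) shifts.
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Because each channel contributes an independent phase estimate, the summed spectrum tends to sharpen the correlation peak compared with converting to monochrome first.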
A wide variety of medical and archeological applications demand skull geometric parameter measurements. Traditional contact measurement techniques have disadvantages such as low accuracy and the need for the real skull during processing. Applying photogrammetric methods for non-contact spatial coordinate determination and 3D model generation provides high precision and a convenient interface for the expert. However, the problem of textured human skull 3D reconstruction is rather complicated in the following respects. The human skull is a true 3D object, which cannot be reconstructed from a single stereo pair. Reconstructing the whole 3D model by acquiring a set of stereo images covering the whole object surface is time-consuming and requires special means for integrating the obtained 2.5D fragments into a unified 3D model. Another requirement on the skull 3D model is to let the expert easily find the object point to be measured; accurate photorealistic texture mapping can satisfy this requirement. The paper presents an approach that provides high-performance automated skull 3D reconstruction along with accurate texture generation. The system developed includes three CCD cameras, a Pentium personal computer equipped with frame grabbers, a structured light projector, and a PC-controlled turntable.
While photographs vividly capture a scene from a single viewpoint, our goal is to capture a scene in such a way that a viewer can freely move to any viewpoint, just as he or she would in an actual scene. We have built a prototype system to quickly digitize a scene using a laser rangefinder and a high-resolution digital camera, which accurately captures a panorama of high-resolution range and color information. With real-world scenes, we have provided data to fuel research in many areas, including representation, registration, data fusion, polygonization, rendering, simplification, and reillumination. The real-world scene data can be used for many purposes, including immersive environments, immersive training, re-engineering and engineering verification, renovation, crime-scene and accident capture and reconstruction, archaeology and historic preservation, sports and entertainment, surveillance, remote tourism, and remote sales. We describe our acquisition system and the processing necessary to merge data from the multiple input devices and positions. We also describe high-quality rendering using the collected data, and present issues concerning specific rendering accelerators and algorithms. We conclude by describing future uses and methods of collection for real-world scene data.