We present a series of dimensional metrology procedures for evaluating the geometrical performance of a 3D imaging
system that have either been designed or modified from existing procedures to ensure, where possible, statistical
traceability of each characteristic value from the certified reference surface to the certifying laboratory. Because there
are currently no internationally-accepted standards for characterizing 3D imaging systems, these procedures have been
designed to avoid using characteristic values provided by the vendors of 3D imaging systems. For this paper, we focus
only on characteristics related to geometric surface properties, dividing them into surface form precision and surface fit
trueness. These characteristics have been selected to be familiar to operators of 3D imaging systems that use
Geometrical Dimensioning and Tolerancing (GD&T). The procedures for generating characteristic values would form
the basis of either a volumetric or application-specific analysis of the characteristic profile of a 3D imaging system. We
use a hierarchical approach in which each procedure builds on either certified reference values or previously-generated
characteristic values. Starting from one of three classes of surface forms, we demonstrate how procedures for
quantifying flatness, roundness, angularity, diameter error, angle error, sphere-spacing error, and unidirectional and
bidirectional plane-spacing error are built upon each other. We demonstrate how these procedures can be used as part of
a process for characterizing the geometrical performance of a 3D imaging system.
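As an illustration of the kind of characteristic-value procedure described above, the sketch below computes a flatness value as the peak-to-valley residual of a least-squares plane fit to measured points; it is a generic, minimal example and not the exact procedure defined in the paper.

    import numpy as np

    def flatness(points):
        """Peak-to-valley residual of a least-squares plane fit.

        points: (N, 3) array of measured surface coordinates.
        Returns the flatness value in the same units as the input.
        """
        centered = points - points.mean(axis=0)
        # The plane normal is the direction of least variance (smallest singular value).
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        residuals = centered @ vt[-1]
        return residuals.max() - residuals.min()

    # Example: a nominally flat 100 mm x 100 mm surface with 20 um of measurement noise.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 100.0, size=(1000, 2))
    z = 0.02 * rng.standard_normal(1000)
    print(flatness(np.column_stack([xy, z])))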
A Time-of-Flight (ToF) camera uses near-infrared (NIR) light to obtain the distance from the camera to an object. The
distance is calculated from the time shift between the emitted and reflected NIR signals. ToF cameras generally
modulate the NIR with a square wave rather than a sinusoidal wave, because a sinusoidal wave is difficult to implement in hardware. The
conventional method, based on a simple trigonometric function, estimates the time shift from the differences between the electrons generated
by the reflected square wave. The estimated time shift therefore includes a harmonic distortion caused by the nonlinearity of the
trigonometric function. In this paper, we propose a new linear estimation method to reduce the harmonic distortion. For
quantitative evaluation, the proposed method is compared to the previous method using our prototype ToF depth camera.
Experimental results show that the distance obtained by the proposed method is more accurate than that by the previous
method.
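For context, the conventional estimator referred to above is commonly the four-phase arctangent method; the sketch below shows that estimator under common assumptions (four samples taken 90 degrees apart, modulation frequency f_mod). It illustrates where the nonlinearity enters and is not the linear method proposed in the paper.

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance_four_phase(q0, q1, q2, q3, f_mod):
        """Conventional arctangent distance estimate from four phase samples.

        q0..q3: electron counts integrated at 0, 90, 180 and 270 degrees
                (exact sample ordering conventions vary by sensor).
        f_mod:  modulation frequency in Hz.
        With square-wave modulation the arctangent is only an approximation,
        which is the source of the harmonic (wiggling) error discussed above.
        """
        phase = np.mod(np.arctan2(q3 - q1, q0 - q2), 2.0 * np.pi)  # phase shift, rad
        return C * phase / (4.0 * np.pi * f_mod)                   # distance, m

    # Example with illustrative sample values:
    print(tof_distance_four_phase(100.0, 140.0, 60.0, 20.0, f_mod=20e6))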
Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single
viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path
problem, which causes range distortions when stray light interferes with the range measurement in a given pixel.
Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but
enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating
the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only
been demonstrated on custom-built range cameras and has not been suitable for general metrology applications. In this
paper we demonstrate an algorithm applied to both the Mesa Imaging SR-4000 and Canesta Inc. XZ-422 Demonstrator
unmodified off-the-shelf range cameras. Additional raw images are acquired and processed using an optimization
approach, rather than relying on the processing provided by the manufacturer, to determine the individual component
returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
Fibers are industrially important particles that experience coupling between rotational and translational motion during
sedimentation. This leads to helical trajectories that have yet to be accurately predicted or measured. Sedimentation
experiments and hydrodynamic analysis were performed on 11 copper "fibers" of average length 10.3 mm and diameter
0.20 mm. Each fiber contained three linear but non-coplanar segments. Fiber dimensions were measured by imaging
their 2D projections on three planes. The fibers were sequentially released into silicone oil contained in a transparent
cylinder of square cross section. Identical, synchronized cameras were mounted to a moveable platform and imaged the
cylinder from orthogonal directions. The cameras were fixed in position during the time that a fiber remained in the field
of view. Subsequently, the cameras were controllably moved to the next lower field of view. The trajectories of
descending fibers were followed over distances up to 250 mm. Custom software was written to extract fiber orientation
and trajectory from the 3D images. Fibers with similar terminal velocity often had significantly different terminal
angular velocities. Both were well-predicted by theory. The radius of the helical trajectory was hard to predict when
angular velocity was high, probably reflecting uncertainties in fiber shape, initial velocity, and fluid conditions
associated with launch. Nevertheless, lateral excursion of fibers during sedimentation was reasonably predicted by fiber
curl and asymmetry, suggesting the possibility of sorting fibers according to their shape.
This paper proposes a depth up-sampling method that uses a confidence map to fuse a high-resolution color sensor
with a low-resolution time-of-flight depth sensor. The confidence map represents the accuracy of the depth, which depends on the
reflectance of the measured object, and is estimated from the amplitude, offset, and reconstruction error of the received signal.
The proposed method suppresses depth artifacts caused by differences between materials of low and high reflectance
on an object at the same distance: even though the surface of an object lies at a single distance, the reflectance of
small regions within the surface depends on their constituent materials. A weighting filter generated from the confidence map is added
to the modified noise-aware filter for depth up-sampling proposed by Derek et al., and its coefficients are adaptively selected. The
proposed method consists of the following steps: normalization, reconstruction, confidence-map estimation, and
modified noise-aware filtering. In the normalization step, the amplitudes and offsets of the received signals are calculated and
the signals are normalized by these values; the phase shifts between the transmitted and received signals are also measured. In the
reconstruction step, the received signals are reconstructed using only the phase-shift values and the reconstruction errors are
calculated. The confidence map is then estimated from the amplitudes, offsets, and reconstruction errors, and the coefficients of the
modified noise-aware filter are adaptively selected by referring to it. Experiments show that the proposed method
removes depth artifacts effectively.
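A minimal sketch of the per-pixel quantities described above, assuming the usual four-sample ToF measurement; the confidence combination at the end is illustrative only and not the paper's weighting.

    import numpy as np

    def confidence_map(q):
        """q: (4, H, W) array of samples taken 90 degrees apart (one ToF frame)."""
        q0, q1, q2, q3 = q
        offset = q.mean(axis=0)
        amplitude = 0.5 * np.sqrt((q3 - q1) ** 2 + (q0 - q2) ** 2)
        phase = np.arctan2(q3 - q1, q0 - q2)

        # Reconstruct normalized samples from the phase alone and measure how far
        # the observed, normalized samples deviate from them (reconstruction error).
        angles = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
        recon = np.cos(phase[None] + angles[:, None, None])
        normalized = (q - offset) / np.maximum(amplitude, 1e-9)
        recon_error = np.sqrt(((normalized - recon) ** 2).mean(axis=0))

        # Illustrative combination: confidence rises with amplitude and falls with
        # ambient offset and reconstruction error (not the paper's exact weighting).
        return amplitude / (amplitude + offset * recon_error + 1e-9)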
Instruments and Methods for 3D Metrology from Images
Numerous autostereoscopic displays now exist, and characterizing them is essential in order to optimize their
performance and to compare them efficiently. Standards are therefore needed, and we must be able to quantify
the quality of the viewer's perception. The purpose of the present paper is twofold: we first present a new
instrument for characterizing the 3D perception on a given autostereoscopic display; we then propose a new
experimental protocol that yields a full characterization. This instrument will allow us to compare
autostereoscopic displays efficiently, but it will also validate in practice the adequacy between the shooting
and rendering geometries. To this end, we match a perceived scene with the virtual scene. It is hardly possible
to determine directly the scene perceived by a viewer placed in front of an autostereoscopic display: while this
may be feasible for the pop-out effect, it is impossible for the depth effect, because the virtual scene is then
set behind the screen. We therefore use an optical illusion based on the deflection of light by a mirror to
determine the positions at which the viewer perceives points of the virtual scene on an autostereoscopic display.
The accuracy of stereo matching depends on precise detection of corresponding points in a pair of stereo images by
template matching. A multiband imaging system captures more than three channels in a visible range. The multiband
imaging technique is useful for improving the accuracy of the stereo matching. This paper proposes an imaging system
and an algorithm for stereo matching based on multiband images. The imaging system is composed of a liquid-crystal
tunable filter and a highly sensitive monochrome camera. This imaging system has the advantage that low-contrast color
textures that are lost in RGB images can be detected in the multiband spectral images. We use a modified sequential
similarity detection algorithm (SSDA) for the acceleration of multiband template matching. The similarity is calculated
for each band in the descending order of the variance in template. The template matching is accelerated by quick
discontinuance of the calculation at dissimilar points. Experimental results show that multiband stereo matching is
more accurate than RGB stereo matching. The measurement targets were sheets of color texture patches with small
color differences. The rate of detection of correct corresponding points was 98.4% by multiband stereo matching, while
the rate was 34.7% by RGB stereo matching. Moreover, use of the modified SSDA reduced the CPU time.
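The early-termination idea behind the modified SSDA can be sketched as follows: bands are visited in descending order of template variance, and the accumulation of absolute differences is abandoned once it exceeds the best score found so far. This is a generic sketch, not the authors' implementation.

    import numpy as np

    def multiband_ssda_match(template, search):
        """template: (B, th, tw) multiband template from one image.
        search:   (B, sh, sw) multiband search window from the other image.
        Returns the (row, col) offset of the best match inside `search`.
        """
        bands, th, tw = template.shape
        # Visit bands in descending order of template variance: high-variance
        # bands discriminate earlier, so poor candidates are rejected sooner.
        order = np.argsort([-template[b].var() for b in range(bands)])

        best_score, best_pos = np.inf, (0, 0)
        for r in range(search.shape[1] - th + 1):
            for c in range(search.shape[2] - tw + 1):
                score = 0.0
                for b in order:
                    score += np.abs(search[b, r:r + th, c:c + tw] - template[b]).sum()
                    if score > best_score:      # early termination (the SSDA idea)
                        break
                if score < best_score:
                    best_score, best_pos = score, (r, c)
        return best_pos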
We present a flash trajectory imaging technique which can directly obtain target trajectory and realize non-contact
measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the
range of targets and realizes silhouette detection which can directly extract targets from complex background and
decrease the complexity of moving-target image processing. Time delay integration increases the information contained in a single
frame so that the moving trajectory can be obtained directly. In this paper, we study the algorithm for
flash trajectory imaging and present initial experiments that successfully obtained the trajectory of a falling
badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories
and can provide the motion parameters of moving targets.
There is currently no general-purpose, open standard for storing data produced by three dimensional (3D) imaging
systems, such as laser scanners. As a result, producers and consumers of such data rely on proprietary or ad-hoc formats
to store and exchange data. There is a critical need in the 3D imaging industry for open standards that promote data
interoperability among 3D imaging hardware and software systems. For the past three years, a group of volunteers has
been working within the ASTM E57 Committee on 3D Imaging Systems to develop an open standard for 3D imaging
system data exchange to meet this need. The E57 File Format for 3D Imaging Data Exchange (E57 format hereafter) is
capable of storing point cloud data from laser scanners and other 3D imaging systems, as well as associated 2D imagery
and core meta-data. This paper describes the motivation, requirements, design, and implementation of the E57 format,
and highlights the technical concepts developed for the standard. We also compare the format with other proprietary or
special purpose 3D imaging formats, such as the LAS format, and we discuss the open source library implementation
designed to read, write, and validate E57 files.
System alignment strategies impact the overall performance of white light scanners, in particular affecting the
uncertainty that is determined using the sphere spacing error specified in the VDI 2634 guideline. This paper addresses
the accuracy of optical white light or so called "topometric" scanners. In almost any application of such scanners it is
necessary to put together scans from different directions: from a couple of scans to a couple of hundred scans. Accuracy
for the scanner itself can usually be well described for a single scan. However, the accuracy for assembled data sets
from many scans is harder to estimate and specify as it depends on many more parameters as well as on the alignment
strategy being used. This paper will describe different alignment strategies including the use of robots, tracking systems,
and targets, as well as best fitting methods. The impact of these methods on the resulting overall accuracy is described
and demonstrated using real test examples. In addition, different methods of achieving these accuracy numbers will be
presented, including the use of guidelines such as those provided in VDI 2634. This paper will briefly touch on the basic principles
of white light scanning to understand the potential as well as to illustrate the limitations of these techniques.
This paper is intended to provide a useful guideline for engineers or quality managers who want to establish or learn
more about new scanning technologies, with special attention given to the accuracy issues.
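For reference, the sphere-spacing error mentioned above compares the measured distance between the centres of two fitted spheres with its calibrated value; a minimal algebraic least-squares version is sketched below (a generic illustration, not the procedure prescribed by VDI 2634 itself).

    import numpy as np

    def fit_sphere(points):
        """Algebraic least-squares sphere fit; returns (centre, radius)."""
        A = np.column_stack([2.0 * points, np.ones(len(points))])
        b = (points ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre, k = sol[:3], sol[3]
        return centre, np.sqrt(k + centre @ centre)

    def sphere_spacing_error(points_a, points_b, calibrated_distance):
        """Measured centre-to-centre distance minus the calibrated ball-bar value."""
        centre_a, _ = fit_sphere(points_a)
        centre_b, _ = fit_sphere(points_b)
        return np.linalg.norm(centre_a - centre_b) - calibrated_distance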
Fringe projection sensors are gaining importance in manufacturing quality control due to their multiple advantages. In order
to adapt the measurement strategy to a specific inspection task, both a suitable sensor and the necessary measurements
have to be chosen, so that the complete workpiece shape is recorded with a tolerance-compatible measurement
uncertainty. Thus a reliable forecast of the measurement uncertainty is crucial for an effective inspection-planning
procedure. There are multiple influences whose impacts on the measurement result vary depending on the position of
each measured point, so the local measurement uncertainty - here called the 'local optical probing
uncertainty' - is individual to each measured point. Today, this local probing uncertainty cannot be predicted. This paper presents a simulation-based
approach to eliminate this shortfall. First, a definition of local optical probing uncertainty is given. Then the
model for the simulation of fringe projection measurements - including a GUM-compliant forecast for the local probing
uncertainty - is described. This simulation is then implemented into an assistance system that supports the inspection
planner when setting up the measurement strategy. Finally a method for the experimental verification of the local optical
probing uncertainty is introduced and the simulation results are verified.
The core of the paper is focused on the experimental characterization of four different 3D laser scanners based on the
Time-of-Flight principle, through the extraction of resolution, accuracy, and uncertainty parameters from specifically designed
3D test objects. The testing process leads to four results: z-uncertainty, xy-resolution, z-resolution, and z-accuracy. The
first is obtained by evaluating the random residuals from the 3D capture of a planar target, the second from the
scanner response to an abrupt z-jump, and the last two from direct evaluation of images of different
geometric features placed progressively closer to each other. The aim of this research is to suggest a low-cost characterization
process, mainly based on calibrated test objects that are easy to duplicate, that allows an objective and reliable comparison of
3D ToF scanner performance.
3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D
cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding
the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This
can be achieved by capturing an image by horizontally separated cameras. Objects at different depths will be
projected with different horizontal displacements on the left and right camera images. These images, when fed
separately to each eye, lead to retinal disparity. Since the perception of depth is the single most important 3D
imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating
depth capture characteristics subjectively is a very difficult task since the intended and/or unintended side effects
from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor
do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth
characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this
paper we propose a methodology to evaluate the 3D cameras' depth capture capabilities.
Three dimensional (3D) imaging sensors, such as laser scanners, are being used to create building information models
(BIMs) of the as-is conditions of buildings and other facilities. Quality assurance (QA) needs to be conducted to ensure
that the models accurately depict the as-is conditions. We propose a new approach for QA that analyzes patterns in the
raw 3D data and compares the 3D data with the as-is BIM geometry to identify potential errors in the model. This
"deviation analysis" approach to QA enables users to analyze the regions with significant differences between the 3D
data and the reconstructed model or between the 3D data of individual laser scans. This method can help identify the
sources of errors and does not require additional physical access to the facility. To show the approach's potential
effectiveness, we conducted case studies of several professionally conducted as-is BIM projects. We compared the
deviation analysis method to an alternative method - the physical measurement approach - in terms of errors detected
and coverage. We also conducted a survey and evaluation of commercial software with relevant capabilities and
identified technology gaps that need to be addressed to fully exploit the deviation analysis approach.
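A minimal sketch of the deviation-analysis idea, assuming the as-is BIM surfaces have already been sampled into a reference point set: each scan point's distance to the nearest reference point is computed, and points exceeding a tolerance are flagged for review. The workflows evaluated in the paper are considerably richer.

    import numpy as np
    from scipy.spatial import cKDTree

    def deviation_analysis(scan_points, model_points, tolerance):
        """scan_points: (N, 3) laser-scan coordinates; model_points: (M, 3) points
        sampled from the as-is BIM surfaces; tolerance in the same units.
        Returns per-point deviations and a mask of points exceeding the tolerance.
        """
        deviations, _ = cKDTree(model_points).query(scan_points)
        return deviations, deviations > tolerance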
Depth estimation in a focused plenoptic camera is a critical step for most applications of this technology and poses
interesting challenges, as the estimation is content based. We present an iterative, content-adaptive algorithm
that exploits the redundancy found in images captured by a focused plenoptic camera. Our algorithm determines for
each point its depth along with a measure of reliability allowing subsequent enhancements of spatial resolution
of the depth map. We remark that the spatial resolution of the recovered depth corresponds to discrete values
of depth in the captured scene to which we refer as slices. Moreover, each slice has a different depth and will
allow extraction of different spatial resolutions of depth, depending on the scene content being present in that
slice along with occluding areas. Interestingly, as the focused plenoptic camera is not theoretically limited in spatial
resolution, we show that the recovered spatial resolution is depth related, and as such, rendering of a focused
plenoptic image is content dependent.
Micro gears are used in increasing quantities in many applications. Therefore, precise measurements are of growing
importance to ensure their quality. This contribution describes the measurement of gears of a micro planetary gear set
with a tactile probe, a tactile-optical probe, an optical sensor, and computed tomography (CT).
For the tactile measurements, a high precision piezoresistive microprobe was used. A so-called fiber probe was applied
for tactile-optical measurements. This probe applies image processing to determine the position of the tactile probing
element. For all tactile and tactile-optical measurements, single point probing was used. The optical measurements were
carried out with an imaging sensor based on focus variation. Due to limited accessibility, on some gears not all regions
could be measured by the optical sensor and the tactile-optical probe. In contrast to this, with CT the whole part could be
measured with high point density. We used a micro-CT system and carried out measurements with Synchrotron-CT.
All the sensors used deliver measurement data in Cartesian coordinates. It is a challenge to transfer these data into
coordinates in which gear parameters are defined. For this, special attention must be paid to the determination of the gear
axis and to the orientation of the teeth.
The applied procedures are detailed for different micro gears. The comparison between data of different measurements
was carried out successfully. The deviations between the CT data and the tactile or tactile-optical data lie in the range of
only a few micrometers.
We present a method to evaluate stereo camera depth accuracy in human centered applications. It enables the comparison
between stereo camera depth resolution and human depth resolution. Our method uses a multilevel test target which can
be easily assembled and used in various studies. Binocular disparity enables humans to perceive relative depths
accurately, making a multilevel test target applicable for evaluating the stereo camera depth accuracy when the accuracy
requirements come from stereoscopic vision.
The method for measuring stereo camera depth accuracy was validated with a stereo camera built from two single-lens
reflex (SLR) cameras. The depth resolution of the SLR pair was better than normal stereo acuity at all measured distances, ranging
from 0.7 m to 5.8 m. The method was then used to evaluate the accuracy of a lower-quality stereo camera. Two parameters,
focal length and baseline, were varied; focal length had a larger effect on the stereo camera's depth accuracy than the baseline.
The tests showed that normal stereo acuity was achieved only when using a telephoto lens.
However, a user's depth resolution in a video see-through system differs from direct naked eye viewing. The same test
target was used to evaluate this by mixing the levels of the test target randomly and asking users to sort the levels
according to their depth. The comparison between stereo camera depth resolution and perceived depth resolution was
done by calculating maximum erroneous classification of levels.
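For a parallel stereo pair, the depth difference corresponding to a given disparity error follows directly from the triangulation relation Z = f b / d, which is how focal length and baseline enter the comparison above. A sketch with illustrative numbers (not the cameras used in the paper):

    def depth_resolution(depth_m, focal_px, baseline_m, disparity_step_px=1.0):
        """Depth change corresponding to one disparity step for a parallel rig:
        from Z = f * b / d it follows that dZ ~ Z**2 * dd / (f * b)."""
        return depth_m ** 2 * disparity_step_px / (focal_px * baseline_m)

    # Illustrative values only:
    for z in (0.7, 2.0, 5.8):
        print(z, depth_resolution(z, focal_px=4000.0, baseline_m=0.10))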
The Grotta dei Cervi is a Neolithic cave where human presence has left many unique pictographs on the walls of many
of its chambers. It was closed for conservation reasons soon after its discovery in 1970, and for these reasons a 3D
documentation campaign was started. Two sets of high-resolution, detailed three-dimensional (3D) acquisitions were captured
in 2005 and 2009 respectively, along with two-dimensional (2D) images. From this information a textured 3D model
was produced for most of the 300-m long central corridor. Carbon dating of the guano used for the pictographs and
environmental monitoring (temperature, relative humidity, and radon) completed the project. This paper presents this
project, some results obtained up to now, the best practice that has emerged from this work and a description of the
processing pipeline that deals with more than 27 billion 3D coordinates.
For decades, three-dimensional (3D) measurements of engineering components have been made using fixed, metrology-room-based
coordinate measuring machines (CMMs) fitted most commonly with single-point or, to a much lesser extent,
scanning tactile probes. Over the past decade there has been a rapid uptake in development and subsequent use of
portable optical-based 3D coordinate measuring systems. These optical based systems capture vast quantities of point
data in a very short time, often permitting freeform surfaces to be digitised. Documented standards for the verification of
fixed CMMs fitted with tactile probes are now widely available, whereas verification procedures and more specifically
verification artefacts for optical-based systems are still in their infancy.
To assist industry in the verification of optical-based coordinate systems, this paper describes a freeform verification
artefact that has been developed, calibrated and used to support a measurement intercomparison between a fixed CMM
and a number of optical based systems. These systems employ technologies involving laser triangulation scanning,
photogrammetry and fringe projection. The NPL freeform verification artefact is presented and a measurement
intercomparison is reported which identifies that the accuracy of the optical-based systems tested is not as good as tactile
probing systems.
The National Research Council of Canada (NRC) is currently evaluating and designing artifacts and methods to
completely characterize 3-D imaging systems. We have gathered a set of artifacts to form a low-cost portable case and
provide a clearly-defined set of procedures for generating characteristic values using these artifacts. In its current
version, this case is specifically designed for the characterization of short-range (standoff distance of 1 centimeter to 3
meters) triangulation-based 3-D imaging systems. The case is known as the "NRC Portable Target Case for Short-Range
Triangulation-based 3-D Imaging Systems" (NRC-PTC). The artifacts in the case have been carefully chosen for their
geometric, thermal, and optical properties. A set of characterization procedures is provided with these artifacts, based either on
procedures already in use or on knowledge acquired from various tests carried out by the NRC.
Geometric dimensioning and tolerancing (GD&T), a well-known terminology in the industrial field, was used to define
the set of tests. The following parameters of a system are characterized: dimensional properties, form properties,
orientation properties, localization properties, profile properties, repeatability, intermediate precision, and
reproducibility. A number of tests were performed in a special dimensional metrology laboratory to validate the
capability of the NRC-PTC. The NRC-PTC will soon be subjected to reproducibility testing using an intercomparison
evaluation to validate its use in different laboratories.
The behavior of nine 3D shape descriptors, computed on the surface of 3D face models, is studied.
The set of descriptors includes six curvature-based ones, SPIN images, Folded SPIN images, and Finger prints.
Instead of defining clusters of vertices based on the value of a given primitive surface feature, a face template
composed of 28 anatomical regions is used to segment the models and to extract the locations of different
landmarks and fiducial points. Vertices are grouped by: region, region boundaries, and subsampled versions of
them. The aim of this study is to analyze the discriminant capacity of each descriptor to characterize regions
and to identify key points on the facial surface. The experiment includes testing with data from neutral faces
and faces showing expressions. Also, in order to see the usefulness of the bending-invariant canonical form
(BICF) to handle variations due to facial expressions, the descriptors are computed directly from the surface
and also from its BICF. In the results, the values, distributions, and relevance indexes of each set of vertices
were analyzed.
This paper presents an integrated 3D face scanning system using the structured light technique. After illuminating the
face with a pattern of horizontal colored strips generated from a De Bruijn sequence, an image is taken and used to obtain the
3D information. A second image, taken without illumination, is used to add texture to the reconstructed model. The
precision of the 3D model depends on the determination of the strip centers. The technique proposed to determine these
centers uses a smoothing Gaussian filter with a large kernel applied to the V component in the HSV color space. A
classic connection algorithm is then used to link the isolated points in the detected strips. The color of the detected strips is
determined for the connected parts in two steps: first, the H component of the HSV color space is used to
determine the color of each set of pixels; then, a region-growing algorithm assigns colors to the
remaining pixels. Each connected and colored line segment is treated separately and matched to a line in the projected pattern.
Finally, each detected point is triangulated with its corresponding point in the projected pattern to generate the model.
Experimental results show a good 3D resolution with this technique.
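A minimal sketch of the strip-centre step described above: the V channel is smoothed with a large Gaussian kernel and, in each image column, local maxima are taken as candidate strip centres whose hue then indicates the strip colour. The connection and region-growing stages are omitted.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def strip_centres(hsv_image, sigma=7.0):
        """hsv_image: (H, W, 3) float array of H, S, V channels in [0, 1].
        Returns a list of (row, col, hue) candidate strip-centre points.
        """
        # Smooth the V channel along the columns (the strips are horizontal).
        v = gaussian_filter1d(hsv_image[:, :, 2], sigma, axis=0)
        centres = []
        for col in range(v.shape[1]):
            profile = v[:, col]
            # Local maxima of the smoothed intensity profile in this column.
            peaks = np.where((profile[1:-1] > profile[:-2]) &
                             (profile[1:-1] >= profile[2:]))[0] + 1
            for row in peaks:
                centres.append((int(row), col, float(hsv_image[row, col, 0])))
        return centres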
Facial feature points are one of the most important clues for many computer vision applications such as face
normalization, registration, and model-based human face coding. Hence, automating the extraction of these points would
have a wide range of uses. In this paper, we aim to detect a subset of the Facial Definition Parameters (FDPs) defined in
MPEG-4 automatically by utilizing both 2D and 3D face data. The main assumption in this work is that the 2D images
and the corresponding 3D scans are taken for frontal faces with neutral expressions. This limitation is realistic with
respect to our scenario, in which the enrollment is done in a controlled environment and the detected FDP points are to
be used for the warping and animation of the enrolled faces [1] where the choice of MPEG-4 FDP is justified. For the
extraction of the points, 2D data, 3D data, or both are used according to the distinctive information they carry in each particular
facial region. As a result, a total of 29 interest points are detected. The method is tested on the neutral set of
the Bosphorus database, which includes 105 subjects with registered 3D scans and color images.
We present a novel method for 3D-shape matching using Bag-of-Feature techniques (BoF). The method starts
by selecting and then describing a set of points from the 3D-object. Such descriptors have the advantage of
being invariant to different transformations that a shape can undergo. Based on vector quantization, we cluster
those descriptors to form a shape vocabulary. Then, each point selected in the object is associated with a cluster
(word) in that vocabulary. Finally, a BoF histogram counting the occurrences of every word is computed. Experimental
results clearly demonstrate that the method is robust to non-rigid and deformable shapes, for which the class of
transformations may be very wide owing to the capability of such shapes to bend and assume different forms.
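The vocabulary and histogram stages can be summarised as follows (a generic bag-of-features sketch that assumes the local descriptors have already been computed; k-means stands in for the vector-quantization step):

    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(all_descriptors, n_words=64):
        """Cluster descriptors from the training shapes into a shape vocabulary."""
        return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_descriptors)

    def bof_histogram(shape_descriptors, vocabulary):
        """Assign each descriptor to its nearest word and count the occurrences."""
        words = vocabulary.predict(shape_descriptors)
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / hist.sum()   # normalize so shapes with different point counts compare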
The goal of this work is to evaluate 3D keypoints detectors and descriptors, which could be used for quasi real time 3D
object recognition. The work presented has three main objectives: extracting descriptors from real depth images,
obtaining an accurate degree of invariance and robustness to scale and viewpoints, and maintaining the computation time
as low as possible. Using a 3D time-of-flight (ToF) depth camera, we record a sequence for several objects at 3 different
distances and from 5 viewpoints. 3D salient points are then extracted using two different curvature-based detectors. For
each point, two local surface descriptors are computed by combining the shape index histogram and the normalized
histogram of angles between the normal of the reference feature point and the normals of its neighbours. A comparison of the
two detectors and descriptors was conducted on 4 different objects. Experiments show that both detectors and
descriptors are rather invariant to variations of scale and viewpoint. We also find that our new 3D keypoint detector
is more stable than a previously proposed Shape-Index-based detector.
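The descriptor described above can be sketched as the concatenation of two normalised histograms over a keypoint's neighbourhood, one of shape-index values and one of angles between the keypoint normal and its neighbours' normals; the bin counts below are illustrative.

    import numpy as np

    def local_descriptor(shape_index_nbrs, normal_ref, normals_nbrs, bins=8):
        """shape_index_nbrs: (N,) shape-index values of the neighbourhood points.
        normal_ref:        (3,) unit normal at the keypoint.
        normals_nbrs:      (N, 3) unit normals of the neighbourhood points.
        Returns a 2*bins descriptor vector.
        """
        si_hist, _ = np.histogram(shape_index_nbrs, bins=bins, range=(-1.0, 1.0))
        cosines = np.clip(normals_nbrs @ normal_ref, -1.0, 1.0)
        ang_hist, _ = np.histogram(np.arccos(cosines), bins=bins, range=(0.0, np.pi))
        desc = np.concatenate([si_hist, ang_hist]).astype(float)
        return desc / np.linalg.norm(desc)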
In an industrial context, recovering a continuous model is necessary to make modifications or to exchange
data in a format, such as STEP, that includes a continuous representation of objects. However, for many reasons the initial
continuous object can be lost after display or exchange in a discretized format. The mesh can also be
deformed after a numerical computation. It is therefore important to have a method for creating a new continuous
model of the object from a mesh. In the case of a CAD object, the first step is to detect simple primitives such as planes,
spheres, cones, and cylinders from a 3D CAD mesh. This paper focuses on this step. The detection method
uses curvature features to recover each primitive type. Segmentation is based on the curvature feature computed
for each vertex, which allows sub-meshes to be extracted, each corresponding to a primitive. The parameters of these
primitives are then found with a fitting process according to the curvature features.
In this paper we describe a new formulation for the 3D salient local features based on the voxel grid inspired by the Scale
Invariant Feature Transform (SIFT). We use it to identify the salient keypoints (invariant points) on a 3D voxelized
model and calculate invariant 3D local feature descriptors at these keypoints. We then use the bag of words approach on
the 3D local features to represent the 3D models for shape retrieval. The advantages of the method are that it can be
applied to rigid as well as to articulated and deformable 3D models. Finally, this approach is applied to 3D shape
retrieval on the McGill articulated shape benchmark, and the retrieval results are presented and compared with other
methods.
Based on building polygons (building footprints) on a digital map, a GIS and CG integrated system to generate 3D
building models automatically is proposed. Most building polygons' edges meet at right angles (orthogonal polygon). A
complicated orthogonal polygon can be partitioned into a set of rectangles. In order to partition an orthogonal polygon,
we propose a useful polygon expression (the RL expression) and a partitioning scheme for deciding from which vertex a
dividing line (DL) is drawn. After partitioning, the integrated system will place rectangular roofs and box-shaped
building bodies on these rectangles. In this paper, we propose a new scheme for partitioning building polygons and for
creating a complicated shape of building models based on orthogonal building polygons.
3D road models are widely used in many computer applications such as racing games and driving simulations. However,
almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor.
There are very few existing methods for automatically generating 3D high-fidelity road networks, especially for those
existing in the real world. A real road network contains various elements such as road segments, road intersections, and
traffic interchanges. Among them, traffic interchanges are the most challenging to model due to their complexity and
the lack of height information (vertical position) of traffic interchanges in existing road GIS data. This paper proposes a
novel approach that can automatically produce 3D high-fidelity road network models, including traffic interchange
models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of
several steps. The raw road GIS data are first preprocessed to extract road network topology, merge redundant links, and
classify road types. Then overlapped points in the interchanges are detected and their elevations are determined based on
a set of level estimation rules. Parametric representations of the road centerlines are then generated through link
segmentation and fitting, and they have the advantages of arbitrary levels of detail with reduced memory usage. Finally a
set of civil engineering rules for road design (e.g., cross slope, superelevation) are selected and used to generate realistic
road surfaces. In addition to traffic interchange modeling, the proposed method also applies to other more general road
elements. Preliminary results show that the proposed method is highly effective and useful in many applications.
The detection of objects on a given road path by vehicles equipped with range measurement
devices is important to many civilian and military applications such as obstacle avoidance in
autonomous navigation systems. In this paper, we develop a method to detect objects of a
specific size lying on a road using an acquisition vehicle equipped with forward-looking Light
Detection and Ranging (LiDAR) sensors and an inertial navigation system. We use GPS data to
accurately place the LiDAR points in a world map, extract point cloud clusters protruding from
the road, and detect objects of interest using weighted random forest trees. We show that our
proposed method is effective in identifying objects for several road datasets collected with
various object locations and vehicle speeds.
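A minimal sketch of the classification stage, assuming each protruding point-cloud cluster has already been reduced to a feature vector (for example height, extent, point count, mean intensity); the feature file names and class weights below are illustrative, not those of the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row describes one protruding cluster, e.g. [height_m, length_m, width_m,
    # n_points, mean_intensity]; file names are placeholders for pre-computed features.
    train_features = np.load("cluster_features.npy")
    train_labels = np.load("cluster_labels.npy")        # 1 = object, 0 = clutter

    # Weight the rare "object" class more heavily than background clutter.
    clf = RandomForestClassifier(n_estimators=200,
                                 class_weight={0: 1.0, 1: 10.0},
                                 random_state=0)
    clf.fit(train_features, train_labels)

    detections = clf.predict(np.load("new_drive_features.npy"))   # 1 marks detected objects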
Synchronization is one of the main problems in 3D data hiding: we need to know where information can be embedded,
and we must be able to find this space again in order to extract the message. Various algorithms propose synchronization
techniques based on triangle or vertex paths in a 3D mesh. In this paper, we propose a new synchronization technique
based on the computation of a Euclidean minimum spanning tree (EMST) and the analysis of how far vertices can be
displaced without changing the connections in the tree. Based on this analysis, we select the
most robust vertices and synchronize these areas by computing a new EMST called the "robust EMST". We then
analyze the robustness of the technique, i.e. the stability of the selection of the most robust vertices, and demonstrate
the consistency of the selection criterion with the vertex displacement.
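Computing the EMST of the mesh vertices is a standard step; a minimal SciPy sketch over the full pairwise distance matrix (practical only for moderate vertex counts) is given below. The robustness analysis and the selection of the most robust vertices are specific to the paper and not reproduced here.

    import numpy as np
    from scipy.spatial import distance_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree

    def euclidean_mst_edges(vertices):
        """vertices: (N, 3) array of mesh vertex coordinates.
        Returns the (i, j) edge list of the Euclidean minimum spanning tree."""
        dist = distance_matrix(vertices, vertices)
        mst = minimum_spanning_tree(dist).tocoo()       # sparse tree over N vertices
        return list(zip(mst.row.tolist(), mst.col.tolist()))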
When considering probabilistic pattern recognition methods, especially methods based on Bayesian analysis, the
probabilistic distribution is of the utmost importance. However, despite the fact that the geometry associated with the
probability distribution constitutes essential background information, it is often not ascertained. This paper discusses
how the standard Euclidean geometry should be generalized to Riemannian geometry when curvature is observed in
the distribution. To this end, the probability distribution is defined for curved geometry. In order to calculate the
probability distribution, a Lagrangian and a Hamiltonian constructed from curvature invariants are associated with the
Riemannian geometry and a generalized hybrid Monte Carlo sampling is introduced. Finally, we consider the
calculation of the probability distribution and the expectation in Riemannian space with path integrals, which allows a
direct extension of the concept of probability to curved space.
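For reference, the standard (Euclidean) hybrid/Hamiltonian Monte Carlo update that the paper generalizes to curved space can be sketched as follows; the Riemannian extension built from curvature invariants is beyond this sketch.

    import numpy as np

    def hmc_step(q, log_prob, grad_log_prob, step=0.1, n_leapfrog=20, rng=None):
        """One standard HMC update for a target density proportional to exp(log_prob(q))."""
        rng = rng if rng is not None else np.random.default_rng()
        p = rng.standard_normal(q.shape)                 # auxiliary momentum
        q_new, p_new = q.copy(), p.copy()

        # Leapfrog integration of the Hamiltonian dynamics.
        p_new += 0.5 * step * grad_log_prob(q_new)
        for _ in range(n_leapfrog - 1):
            q_new += step * p_new
            p_new += step * grad_log_prob(q_new)
        q_new += step * p_new
        p_new += 0.5 * step * grad_log_prob(q_new)

        # Metropolis acceptance keeps the target distribution invariant.
        h_old = -log_prob(q) + 0.5 * p @ p
        h_new = -log_prob(q_new) + 0.5 * p_new @ p_new
        return q_new if np.log(rng.uniform()) < h_old - h_new else q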
In this paper, we propose a highly enhanced compression scheme for Integral Imaging (InIm) that uses sub-images
(SIs) and removes the Motion Vectors (MVs) from the residual image array transformed from the Sub-Image Array (SIA). In the pickup
process, the object seen through the virtual pinhole array is recorded as an Elemental Image Array (EIA), from which the SIA is
generated. The scheme provides enhanced compression efficiency by improving the similarity among SIs. In the proposed
method, a segmented area (a macroblock) in the reference SI is matched against the current SIs using the MSE criterion. MVs
occurring among the SIs would otherwise add to the amount of data to be compressed. Accordingly, the motion
estimate computed from the block matching is saved as an MV, and all objects in each current SI are shifted to the object position of
the reference SI to compensate for their MV. Removing the MVs enhances the similarity of the SIs, so that an
improvement in the compression efficiency of the SIA is obtained. In addition, a video compression
scheme such as MPEG-4 can be applied to reduce the data of the consecutive frames. The proposed algorithm
outperforms baseline JPEG and the conventional EIA compression scheme applied to InIm, as shown by
simulations comparing these schemes.
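The block-matching and motion-compensation step described above amounts to finding, for each macroblock of the reference sub-image, the displacement that minimizes the MSE within the current sub-image, then shifting the current sub-image accordingly. A minimal exhaustive-search sketch (block size and search range are illustrative):

    import numpy as np

    def motion_vector(ref_block, current_si, top, left, search=8):
        """Exhaustive-search MV of one reference macroblock within a current SI.
        ref_block:  (bh, bw) block cut from the reference sub-image at (top, left).
        current_si: (H, W) current sub-image. Returns the (dy, dx) minimizing the MSE."""
        bh, bw = ref_block.shape
        best_mse, best_mv = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + bh > current_si.shape[0] or x + bw > current_si.shape[1]:
                    continue
                mse = np.mean((current_si[y:y + bh, x:x + bw] - ref_block) ** 2)
                if mse < best_mse:
                    best_mse, best_mv = mse, (dy, dx)
        return best_mv

Shifting each current SI by the negated MV aligns its content with the reference SI, which is what raises the inter-SI similarity before the MPEG-4 style coding is applied.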
Aortic dissection is a serious public health problem; it is a medical emergency and may quickly lead to death.
Aortic dissection is caused by perforation of the aortic tissue under blood pressure. It consists of tears (holes
in the intimal tissue) inside the lumens. These tears are difficult to detect because they do not correspond to a
filled organ to segment; they are usually identified visually by radiologists who examine gray-level variations on
successive image slices, but this remains a very difficult and error-prone task.
Our purpose is to detect these intimal tears to help cardiac surgeons make a diagnosis. This would be
useful either during the preoperative phase (visualization and localization of tears, endoprosthesis sizing) or during the
perioperative phase (registration of the tears on angiographic images would make the surgeon's
gestures more accurate and thus improve patient care).
To this end, we use Aktouf et al.'s hole-filling algorithm proposed in the field of digital topology. This
algorithm fills the holes of a 3D binary object using topological notions; after a first preprocessing step, the holes
are precisely the intimal tears in our aortic dissection images.
As far as we know, this is the first time such a proposal has been made, even though this is crucial information for cardiac
surgeons. Our study is preliminary and innovative work; our results are nevertheless considered satisfactory.
This approach could also be of value to specialists of other diseases.
A set of calculation methods has been developed and tested to provide a means of creating virtual copies of
three-dimensional (3D) historical objects with minimal user input. We present a step-by-step data processing path,
along with descriptions of the algorithms required to reconstruct a realistic 3D model of a culturally significant object.
An important feature for archiving historical objects is the ability to include information about both shape
and texture, allowing visualization under arbitrary conditions of illumination. The data samples used as input for the
processing chain were collected using an integrated device providing shape, multispectral color, and
simplified BRDF measurements. To confirm the usability of the presented methods, they were tested on a
real-life object: a statue of the ancient Greek goddess Kybele. Additional visualization methods have also been
examined to render a realistic virtual representation satisfying the intrinsic surface properties of the investigated
specimen.
According to the European Commission, around 200,000 counterfeit Euro coins are removed from circulation every
year. While approaches exist to detect these coins automatically, satisfactory error rates are usually reached only for
low-quality forgeries, the so-called "local classes". High-quality minted forgeries ("common classes") pose a problem
for these methods as well as for trained humans. This paper presents a first approach to the statistical analysis of
coins based on high-resolution 3D data acquired with a chromatic white-light sensor. The goal of this analysis is
to determine whether two coins are of common origin. The test set for these first investigations consists
of 62 coins from at most five different sources. The analysis is based on the assumption that, apart from
markings caused by wear such as scratches and residue consisting of grease and dust, coins of common origin have
more similar height fields than coins from different mints. First results suggest that the selected approach is
strongly affected by wear such as dents and scratches, and further research is required to eliminate
this influence. A course for future work is outlined.
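The abstract does not specify the statistical comparison used, so the following is only a crude stand-in for the idea of comparing registered height fields: remove each field's best-fit plane to suppress tilt, then score similarity by normalized cross-correlation. The function names and the choice of correlation are assumptions, not the paper's method.

```python
# Illustrative stand-in (not the paper's method): compare two pre-registered,
# same-size coin height fields by normalized cross-correlation after removing
# each field's least-squares plane.
import numpy as np

def remove_plane(h: np.ndarray) -> np.ndarray:
    """Subtract the least-squares plane from a height field (suppresses tilt)."""
    ny, nx = h.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h.size)])
    coeffs, *_ = np.linalg.lstsq(A, h.ravel(), rcond=None)
    return h - (A @ coeffs).reshape(h.shape)

def height_field_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Normalized cross-correlation in [-1, 1]; higher suggests more similar relief."""
    a = remove_plane(h1).ravel()
    b = remove_plane(h2).ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```

Wear such as dents and scratches perturbs this kind of score, which is consistent with the sensitivity to wear reported above.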
Because of the difficulty of dealing with surface specularity, few methods have been proposed for measuring the
three-dimensional shape of specular metallic objects. In this paper we apply to this kind of material an approach
called "Scanning From Heating" (SFH), which was initially developed for the 3D reconstruction of transparent objects.
This article presents an application of the SFH working principle to materials with high thermal conductivity.
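The abstract does not detail the measurement chain, but published Scanning-From-Heating work locates the laser-heated spot in a thermal image before triangulating it; the snippet below is only a toy hot-spot detector (threshold plus intensity-weighted centroid), with the function name and threshold being assumptions rather than the authors' implementation.

```python
# Toy sketch (assumption, not the authors' code): locate the laser-heated spot
# in a 2D thermal image as the intensity-weighted centroid of pixels above a
# relative threshold. The sub-pixel (row, col) result would then feed a
# standard triangulation step to recover the 3D point.
import numpy as np

def hot_spot_centroid(thermal: np.ndarray, rel_threshold: float = 0.8) -> tuple[float, float]:
    t = thermal.astype(float)
    mask = t >= t.min() + rel_threshold * (t.max() - t.min())
    weights = t * mask
    rows, cols = np.mgrid[0:t.shape[0], 0:t.shape[1]]
    total = weights.sum()
    return float((rows * weights).sum() / total), float((cols * weights).sum() / total)
```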
Putting high-quality, easy-to-use 3D technology into the hands of regular consumers has become a recent
challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires
that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system
requires: 1) identifying critical 3D technology issues such as camera positioning, disparity control rationale, and
screen-geometry dependency, and 2) designing a methodology to control them automatically. Implementing 3D capture
functionality on phone cameras necessitates designing algorithms that fit within the processing capabilities of the
device. Various constraints, such as sensor position tolerances, sensor 3A tolerances, post-processing, and 3D video
resolution and frame rate, should be carefully considered for their influence on the 3D experience. Issues with
migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D
need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage
scenario (including interactions between the user and the capture/display device) be carefully considered. Finally,
both the processing power of the device and the practicality of the scheme need to be taken into account when
designing the calibration and processing methodology.
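As one concrete example of screen-geometry-dependent disparity control, the sketch below converts a disparity comfort budget, expressed as a fraction of screen width, into pixel limits for a given display; the fractions, function name, and example resolution are assumptions, not figures from the paper.

```python
# Illustrative sketch (assumed numbers, not the paper's rationale): turn a
# disparity comfort budget, given as a fraction of screen width, into pixel
# disparity limits so capture and rendering can be constrained per display.
def disparity_budget_px(screen_width_px: int,
                        max_uncrossed_frac: float = 0.02,
                        max_crossed_frac: float = 0.01) -> tuple[int, int]:
    """Return (max_uncrossed_px, max_crossed_px) for this screen width."""
    return (int(screen_width_px * max_uncrossed_frac),
            int(screen_width_px * max_crossed_frac))

# Example: a 1920-pixel-wide phone display in landscape orientation.
print(disparity_budget_px(1920))   # (38, 19)
```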
Active triangulation is a well-established technique for collecting range data. The use of low-power systems in
environments with high ambient light is motivated by considerations of price, portability, and eye safety. However,
when ambient light is strong, detecting the laser return can be challenging. A method combining the filtering of
specialized low-level features with dynamic programming is proposed. Under these challenging conditions, it achieves
results comparable to those of current techniques under ideal conditions.
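The paper's cost terms are not given in the abstract, so the sketch below only illustrates the general pattern it describes: after low-level filtering produces a per-pixel response image, a dynamic program picks one row per column so that the traced laser stripe has a high summed response while penalizing large row jumps between neighboring columns. The penalty weight and function name are assumptions.

```python
# Generic dynamic-programming stripe tracer over a filtered response image
# (rows x cols, non-negative). The smoothness penalty is an assumed stand-in
# for the paper's cost terms.
import numpy as np

def trace_stripe(response: np.ndarray, jump_penalty: float = 1.0) -> np.ndarray:
    """Return the selected row index for each column."""
    rows, cols = response.shape
    score = np.full((rows, cols), -np.inf)
    back = np.zeros((rows, cols), dtype=int)
    score[:, 0] = response[:, 0]
    r = np.arange(rows)
    for c in range(1, cols):
        # candidate[i, j]: score of ending at row j in column c-1, then moving to row i
        candidate = score[:, c - 1][None, :] - jump_penalty * np.abs(r[:, None] - r[None, :])
        back[:, c] = np.argmax(candidate, axis=1)
        score[:, c] = response[:, c] + candidate[r, back[:, c]]
    # backtrack from the best row in the last column
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmax(score[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```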
Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods
described here combine high-resolution photography with surround vision and full stereo view in an immersive
environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the
StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also
the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically
created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set to approximately
20 feet, or to the distance of the object of major interest.
A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces
the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking,
even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo
panoramas created with this acquisition and display technique can be applied without modification to a large array of VR
devices having different screen arrangements and different VR libraries.
Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system
that enables users to touch the virtual environment they are immersed in.
The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat screen TV with a half-silvered mirror to
project any graphic image onto the user's hands and into the space surrounding them. With his or her head position
optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact
with the 3D image, literally 'touching' the object's angles and contours as if it were a tangible physical object.
HUVR can be used for training and education in structural and mechanical engineering, archaeology, and medicine, as
well as for other tasks that require hand-eye coordination. One of the most distinctive characteristics of HUVR is that users can
place their hands inside the virtual environment without occluding the 3D image. Built using open-source software
and consumer level hardware, HUVR offers users a tactile experience in an immersive environment that is functional,
affordable and scalable.
One of the main barriers to creating and using compelling scenarios in virtual reality is the complexity and time-consuming
effort of modeling, element integration, and the software development needed to properly display and interact with the content
on the available systems. Even today, most virtual reality applications are tedious to create, and they are hard-wired to the
specific display and interaction system available to the developers when the application was created. Furthermore, it is not
possible to alter the content or its dynamics once the application has been created.
We present our research on designing a software pipeline that enables the creation of compelling scenarios with a fair degree
of visual and interaction complexity in a semi-automated way. Specifically, we are targeting drivable urban scenarios,
ranging from large cities to sparsely populated rural areas, and incorporating both static components (e.g., houses, trees) and
dynamic components (e.g., people, vehicles), as well as events such as explosions or ambient noise.
Our pipeline has four basic components. First, an environment designer, in which users sketch the overall layout of the scenario
and an automated method constructs the 3D environment from the information in the sketch. Second, a scenario editor used
for authoring the complete scenario: incorporating the dynamic elements and events, fine-tuning the automatically generated
environment, defining the execution conditions of the scenario, and setting up any data gathering that may be necessary during
its execution. Third, a run-time environment for different virtual-reality systems, which provides users with the
interactive experience as designed with the designer and the editor. And fourth, a bi-directional monitoring system that
allows information from the virtual environment to be captured and modified.
One of the interesting capabilities of our pipeline is that scenarios can be built and modified on-the-fly as they are being
presented in the virtual-reality systems. Users can quickly prototype the basic scene using the designer and the editor on a
control workstation. More elements can then be introduced into the scene from both the editor and the virtual-reality display.
In this manner, users are able to gradually increase the complexity of the scenario with immediate feedback. The main use
of this pipeline is the rapid development of scenarios for human-factors studies. However, it is applicable in a much more
general context.
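To make the structure of the four components concrete, the stubs below sketch one possible set of interfaces mirroring the designer, editor, run-time, and bi-directional monitor described above; all class and method names are hypothetical, not the project's API.

```python
# Hypothetical interface stubs mirroring the four pipeline components described
# in the text; names and signatures are illustrative only.
from abc import ABC, abstractmethod

class EnvironmentDesigner(ABC):
    @abstractmethod
    def build_environment(self, sketch: dict) -> dict:
        """Construct a 3D environment description from a user's layout sketch."""

class ScenarioEditor(ABC):
    @abstractmethod
    def author(self, environment: dict, dynamic_elements: list, events: list) -> dict:
        """Assemble the complete scenario, including execution conditions and data gathering."""

class RuntimeEnvironment(ABC):
    @abstractmethod
    def run(self, scenario: dict, display_system: str) -> None:
        """Present the scenario interactively on a given virtual-reality system."""

class ScenarioMonitor(ABC):
    @abstractmethod
    def observe(self) -> dict:
        """Capture state from the running virtual environment."""

    @abstractmethod
    def inject(self, change: dict) -> None:
        """Push a modification into the running scenario (on-the-fly editing)."""
```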
Shared virtual worlds such as Second Life privilege a single point-of-view, namely that of the user. When logged into
Second Life a user sees the virtual world from a default viewpoint, which is from slightly above and behind the user's
avatar (the user's alter ego 'in-world.') This point-of-view is as if the user were viewing his or her avatar using a camera
floating a few feet behind it. In fact, it is possible to set the view as if you were seeing the world through the eyes of
your avatar, or even to move the camera completely independently of your avatar. A change in point-of-view means
more than just a different camera position. The practice of using multiple avatars requires a transformation of
identity and personality. When a user 'enacts' the identity of a particular avatar, their 'real' personality is masked by the
assumed personality. The technology of virtual worlds permits a change of point-of-view and also facilitates a
change in identity. Does this cause any psychological distress? Or is the ability to be someone else and see a world (a
game, a virtual world) through a different set of eyes somehow liberating and even beneficial?
The reengineering of life, expanded by perceptual experiences of presence in Virtual
Reality and Augmented Reality, is the theme of our investigation into collaborative practices
that bring artists' creativity close to the inventiveness of scientists and their mutual capacity for the
generation of biocybrid systems. We consider enactive bodily interfaces for human existence
co-located in the continuum and symbiotic zone between body and flesh, cyberspace and data,
and the hybrid properties of the physical world. That continuum generates a biocybrid zone
(bio + cyber + hybrid), and life is reinvented. Our results reaffirm the creative reality of the coupled body
and its mutual influences with environmental information, extending James Gibson's theory of ecological
perception. The life of the ecosystem, in its dynamic relations between humans, animals, plants,
landscapes, urban life, and objects, raises questions and challenges for artworks and for the reengineering
of life discussed in our technoscience artworks. Finally, we describe an implementation in which
the immersion experience is enhanced by the data visualization of biological audio signals and by
the use of wearable miniaturized devices for biofeedback.
Users of immersive virtual reality environments have reported a wide variety of side effects and after-effects, including the
confusion of characteristics of the real and virtual worlds. Perhaps this side effect of confusing the virtual and real
can be turned around to explore the possibilities for immersion with minimal technological support in virtual world
group training simulations. This paper will describe observations from my time working as an artist/researcher with the
UCSD School of Medicine (SoM) and Veterans Administration San Diego Healthcare System (VASDHS) to develop
training exercises for nurses, doctors, and Hospital Incident Command staff that simulate pandemic virus outbreaks. By examining
moments of slippage between realities, both into and out of the virtual environment, and moments of confusion of the
boundaries between real and virtual, we can better understand methods for creating immersion. I will use the mixing of
realities as a transversal line of inquiry, borrowing from virtual reality studies, game studies, and anthropological studies
to better understand the mechanisms of immersion in virtual worlds. Focusing on drills conducted in Second Life, I will
examine moments of training to learn the software interface, moments within the drill and interviews after the drill.