This PDF file contains the front matter associated with SPIE-IS&T Proceedings Volume 6804, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Computer simulators are a popular method of training surgeons in the techniques of laparoscopy. However, for the trainee to feel totally immersed in the process, the graphical display should be as lifelike as possible, and two-handed force-feedback interaction is required. This paper reports on how a compelling immersive experience can be delivered at low cost using commonly available hardware components. Three specific themes are brought together. Firstly, programmable shaders executing on a standard PC graphics adapter deliver the appearance of anatomical realism, including effects such as translucent tissue surfaces, semi-transparent membranes, multi-layer image texturing and real-time shadowing. Secondly, relatively inexpensive 'off the shelf' force-feedback devices contribute to a holistic immersive experience. The final element described is the custom software that brings these together with hierarchically organized and optimized polygonal models of abdominal anatomy.
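To illustrate the layered-rendering idea, here is a minimal sketch (in Python/NumPy rather than actual shader code, and not the authors' implementation) of front-to-back alpha compositing, the standard way a fragment program can combine a semi-transparent membrane with an underlying opaque tissue surface; the colours and opacities are hypothetical.

```python
# Illustrative sketch (not the authors' shader code): front-to-back alpha
# compositing of a semi-transparent membrane over an opaque tissue surface,
# evaluated per pixel as a programmable fragment shader might.
import numpy as np

def composite_layers(layers):
    """Blend (rgb, alpha) layers front-to-back; rgb components in [0, 1]."""
    out_rgb = np.zeros(3)
    out_alpha = 0.0
    for rgb, alpha in layers:
        weight = (1.0 - out_alpha) * alpha
        out_rgb += weight * np.asarray(rgb, dtype=float)
        out_alpha += weight
        if out_alpha >= 1.0:          # early exit once fully opaque
            break
    return out_rgb, out_alpha

# Hypothetical example: membrane (semi-transparent) in front of tissue (opaque).
membrane = ([0.9, 0.85, 0.8], 0.35)
tissue   = ([0.7, 0.25, 0.2], 1.0)
print(composite_layers([membrane, tissue]))
```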
The authors have developed a virtual reality exposure system that reflects the Japanese culture and environment.
Concretely, the system focuses on the subway environment, which is the environment most patients receiving treatment
for panic disorder at hospitals in Tokyo, Japan, tend to avoid. The system is PC-based and features realistic video images
and highly interactive functionality. In particular, the system enables instant transformation of the virtual space and
allows situations to be freely customized according to the condition and symptoms expressed by each patient. Positive
results achieved in therapy assessments aimed at patients with panic disorder accompanied by agoraphobia indicate the
possibility of indoor treatment. Full utilization of the functionality available requires that the interactive functions be
easily operable. Accordingly, there appears to be a need for usability testing aimed at determining whether or not a
therapist can operate the system naturally while focusing fully on treatment. In this paper, the configuration of the virtual
reality exposure system focusing on the subway environment is outlined. Further, the results of usability tests aimed at
assessing how naturally it can be operated while focusing fully on treatment are described.
This paper presents an initial study exploring and evaluating a novel, accessible and user-centred interface developed for a VR medical training environment. In particular, the proposed system facilitates a detailed 3D information exchange, with the aim of improving the user's internal 3D understanding and visualisation of complex anatomical inter-relationships. In order to evaluate the effectiveness of the proposed VR teaching method, we developed a female 3D model under the guidance of consultant breast surgeons, with particular emphasis on the axilla region. We then conducted a comparative study, involving twelve participants, between PBL tutorials augmented with VR and contemporary teaching techniques. Overall, the paper outlines the development process of the proposed VR medical training environment, discusses the results from the comparative study, and offers suggestions for further research and a tentative plan for future work.
The goal of this research is to compare the performance of different stereoscopic displays and tracking/interaction
devices in the context of motor behavior and interaction quality within various Virtual Reality (VR) environments.
Participants were given a series of VR tasks requiring motor behaviors with different degrees of freedom. The VR tasks
were performed using a monoscopic display and two stereoscopic displays (shutter glasses and autostereoscopic display)
and two tracking devices (optical and magnetic). The two 3D tracking/interaction devices were used to capture
continuous 3D spatial hand position with time stamps. Participants completed questionnaires evaluating display comfort
and simulation fidelity among the three displays and the efficiency of the two interaction devices. The trajectory of
motion was reconstructed from the tracking data to investigate the user's motor behavior. Results provide information
on how stereoscopic displays can affect human motor behavior and interaction modes during VR tasks. These
preliminary results suggest that the use of shutter glasses provides a more immersive and user-friendly display than
autostereoscopic displays. Results also suggest that the optical tracking device, available at a fraction of the cost of the
magnetic tracker, provides similar results for users in terms of functionality and usability features.
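As an illustration of the trajectory-reconstruction step, here is a minimal sketch (not the authors' code; the choice of metrics is an assumption) that derives simple motor-behavior measures from time-stamped 3D hand positions:

```python
# Illustrative sketch (metric choices are hypothetical): reconstructing simple
# motor-behavior measures from time-stamped 3D hand positions, as captured by
# an optical or magnetic tracker.
import numpy as np

def trajectory_metrics(timestamps, positions):
    """timestamps: (N,) seconds; positions: (N, 3) metres."""
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(positions, dtype=float)
    seg = np.diff(p, axis=0)                      # per-sample displacement
    seg_len = np.linalg.norm(seg, axis=1)
    path_length = seg_len.sum()                   # total distance travelled
    duration = t[-1] - t[0]
    straight = np.linalg.norm(p[-1] - p[0])       # start-to-end distance
    return {
        "path_length_m": path_length,
        "mean_speed_mps": path_length / duration,
        "straightness": straight / path_length,   # 1.0 = perfectly direct
    }

# Hypothetical samples: a hand moving roughly along x with some wobble.
ts = np.linspace(0.0, 2.0, 50)
ps = np.c_[np.linspace(0, 0.4, 50), 0.02 * np.sin(8 * ts), np.zeros(50)]
print(trajectory_metrics(ts, ps))
```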
Virtual Reality (VR), especially in a technologically focused discourse, is defined by a class of hardware and software, among them head-mounted displays (HMDs), navigation and pointing devices, and stereoscopic imaging. This presentation examines the experiential aspect of VR. Putting "virtual" in front of "reality" modifies the ontological status of a class of experience: that of "reality." Reality has also been modified [by artists, new media theorists, technologists and philosophers] as augmented, mixed, simulated, artificial, layered, and enhanced. Modifications of reality are closely tied to modifications of perception. Media theorist Roy Ascott creates a model of three "VRs": Verifiable Reality, Virtual Reality, and Vegetal (entheogenically induced) Reality. The ways in which we shift our perceptual assumptions, create and verify illusions, and enter "the willing suspension of disbelief" that allows us entry into imaginal worlds are central to the experience of VR worlds, whether those worlds are explicitly representational (robotic manipulations by VR) or explicitly imaginal (VR artistic creations). The early rhetoric surrounding VR was interwoven with psychedelics, a perception amplified by Timothy Leary's presence on the historic SIGGRAPH panel and by the Wall Street Journal's tag of VR as "electronic LSD." This paper discusses the connections (philosophical, social-historical, and psychological-perceptual) between these two domains.
The idea of Virtual Reality once conjured up visions of new territories to explore, and expectations of awaiting worlds of
wonder. VR has matured to become a practical tool for therapy, medicine and commercial interests, yet artists, in
particular, continue to expand the possibilities for the medium. Artistic virtual environments created over the past two
decades probe the phenomenological nature of these virtual environments. When we inhabit a fully immersive virtual
environment, we have entered into a new form of Being. Not only does our body continue to exist in the real, physical
world, we are also embodied within the virtual by means of technology that translates our bodied actions into
interactions with the virtual environment. Very few states in human existence allow this bifurcation of our Being, where
we can exist in two spaces at once, with the possible exception of metaphysical states such as
shamanistic trance and out-of-body experiences. This paper discusses the nature of this simultaneous Being, how we
enter the virtual space, what forms of persona we can don there, what forms of spaces we can inhabit, and what type of
wondrous experiences we can both hope for and expect.
In this work we present how Augmented Reality (AR) can be used to create an intimate integration of process data with
the workspace of an industrial CNC (computer numerical control) machine. AR allows us to combine interactive
computer graphics with real objects in a physical environment - in this case, the workspace of an industrial lathe.
ASTOR is an autostereoscopic optical see-through spatial AR system, which provides real-time 3D visual feedback
without the need for user-worn equipment, such as head-mounted displays or sensors for tracking. The use of a
transparent holographic optical element, overlaid onto the safety glass, allows the system to simultaneously provide
bright imagery and clear visibility of the tool and workpiece. The system makes it possible to enhance visibility of
occluded tools as well as to visualize real-time data from the process in the 3D space. The graphics are geometrically
registered with the workspace and provide an intuitive representation of the process, amplifying the user's understanding
and simplifying machine operation.
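As an illustration of the geometric registration step, the following sketch (not the ASTOR implementation; the matrix values are hypothetical) shows how a calibrated 3x4 projection matrix can map a tool position in lathe coordinates to display pixels for the overlay:

```python
# Illustrative sketch (not the ASTOR code): geometric registration of overlay
# graphics with the machine workspace via a calibrated 3x4 projection matrix
# mapping lathe coordinates (mm) to display pixels.
import numpy as np

def project_to_display(P, point_mm):
    """Apply projection matrix P (3x4) to a 3D workspace point."""
    x = P @ np.append(point_mm, 1.0)   # homogeneous coordinates
    return x[:2] / x[2]                # perspective divide -> pixel (u, v)

# Hypothetical calibration: an orthographic-style mapping with scale and offset.
P = np.array([[2.0, 0.0, 0.0, 320.0],
              [0.0, 2.0, 0.0, 240.0],
              [0.0, 0.0, 0.0,   1.0]])
tool_tip = np.array([40.0, -12.5, 0.0])   # current tool position from the CNC
print(project_to_display(P, tool_tip))    # where to draw the overlay marker
```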
Voluble is a dynamic space-time diagram of the solar system. Voluble is designed to help users understand the
relationship between space and time in the motion of the planets around the sun. Voluble is set in virtual reality to relate
these movements to our experience of immediate space. Beyond just the visual, understanding dynamic systems is
naturally associated with the articulation of our bodies as we perform a number of complex calculations, albeit
unconsciously, to deal with simple tasks. Such capabilities encompass spatial perception and memory. Voluble
investigates the balance between the visually abstract and the spatially figurative in immersive development to help
illuminate phenomena that are beyond the reach of human scale and time. While most diagrams, even computer-based
interactive ones, are flat, three-dimensional real-time virtual reality representations are closer to our experience of space.
The representation can be seen as if it were "really there," engaging a larger number of cues pertaining to our everyday
spatial experience.
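As a sketch of the kind of space-time computation such a diagram relies on (not Voluble's actual code; the idealized circular orbits and sampling are assumptions), consider placing planets as a function of time:

```python
# Illustrative sketch (not Voluble's code): the kind of calculation a dynamic
# space-time diagram needs, placing a planet on an idealized circular orbit
# at time t.
import math

def planet_position(radius_au, period_years, t_years, phase=0.0):
    """Position (x, y) in AU of a planet on a circular orbit."""
    angle = 2.0 * math.pi * t_years / period_years + phase
    return radius_au * math.cos(angle), radius_au * math.sin(angle)

# Hypothetical sampling: trace Earth and Mars over one Earth year to show
# how their relative geometry changes through space and time.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    earth = planet_position(1.0, 1.0, t)
    mars = planet_position(1.52, 1.88, t)
    print(f"t={t:4.2f} yr  Earth={earth}  Mars={mars}")
```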
A stereoscopic volumetric workstation (SVW) designed to exploit hand-eye coordination for "remote repair"
tasks provides an interesting telepresence environment that opens new opportunities for stereoscopic imaging.
We describe the motivation, design decisions, implementation and preliminary user testing of a multi-user,
multi-computer, network connected, augmented reality system.
A primary requirement when elements are to be combined stereoscopically is that homologous points in each eye view of each element have identical parallax separation at any point of interaction. If this is not done, the image parts on one element will appear to be at a different distance from the corresponding or associated parts on the other element. This results in a visual discontinuity that appears very unnatural. For example, if a live actor were to appear to "shake hands" with a cartoon character, the juncture may appear perfectly natural when seen in 2-D, but their hands may appear to miss when seen in 3-D.
Previous efforts to compensate for or correct these errors have involved painstaking, time-consuming trial-and-error tests. In the area of pure animation, a "motion tracking" technique was developed to make cartoon characters appear more realistic. This involves an actor wearing a special suit with indicator marks at various points on the body. The actor walks through the scene, and the animator then tracks the points using motion capture software.
Because live action and CG elements can interact or change at several different points and levels within a scene,
additional requirements must also be addressed. "Occlusions" occur when one object passes in front of another. A
particular tracking point may appear in one eye-view but not the other. When Z-axis differentials are to be considered
in the live action as well as the CG elements, and both are to interact with each other, both eye-views must be tracked,
especially at points of occlusion.
A new approach would be to generate a three-dimensional grid within which the action is to take place. This grid can be projected onto the stage where the live-action part is to take place. When differential occlusions occur, the grid may be seen and CG elements plotted in reference to it. Because of the capability of precisely locating points in a digital image, a pixel-accurate virtual model of both the actual and the virtual scene may be matched with extreme accuracy. The metrology of the grid may also be easily changed at any time, not only the pitch of the lines but also the introduction of intentional distortions, such as when a forced perspective is desired.
This approach would also include a special parallax indicator, which may be implemented as a physical generator, such as a bar-generator light actually carried in the scene. Parallax indicators can provide instantaneous "readouts" of the parallax at any point on the animator's monitor. Customized software would ensure that, as the cursor is moved around the screen, the exact parallax at the indicated pixel appears on screen, immediately adjacent to that point. Preferences would allow the choice of keying the point to the left-eye image, the right-eye image, or a point midway in between.
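A minimal sketch of such a parallax readout (not the authors' software; the coordinates and sign convention are hypothetical) might compute, for a homologous point, the signed parallax and the reference position to which the readout is keyed:

```python
# Illustrative sketch (not the authors' tool): a parallax "readout" at the
# cursor, given registered left- and right-eye x-coordinates of a homologous
# point. Positive parallax places the point behind the screen plane,
# negative in front; zero keeps it at the screen plane.
def parallax_readout(x_left, x_right, eye_mode="mid"):
    """Return the parallax (in pixels) and the reference x used for display."""
    parallax = x_right - x_left
    reference = {"left": x_left,
                 "right": x_right,
                 "mid": 0.5 * (x_left + x_right)}[eye_mode]
    return parallax, reference

# Hypothetical homologous point: a CG hand and the live actor's hand should
# report identical parallax where they "shake hands".
print(parallax_readout(412.0, 407.5, eye_mode="mid"))   # (-4.5, 409.75)
```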
We have been developing a wearable interface system, "BOWL ProCam (BOdy-Worn Laser Projector Camera)", which provides the user with a mixed-reality interface without requiring any head-mounted devices. The BOWL ProCam is equipped with a laser projector with a long focal depth, a high-definition fish-eye camera enabling wide-range situation understanding, and attitude sensors for projection stabilization. In this paper, we first present a simulation-based evaluation of the proper position for wearing a projector-camera system in the context of real-world task support. Based on the result, the upper chest area was selected as the wearing position. Next, we briefly describe interaction techniques that effectively employ both nearby projection surfaces, such as the user's hands, and far projection surfaces, such as a tabletop or wall. The paper then presents preliminary experiments on active-stereo and hand-posture-classification techniques to realize such interaction with a proof-of-concept system that uses a conventional light-bulb projector.
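As an illustration of the projection-stabilization idea (a sketch only, not the BOWL ProCam implementation; the single-axis roll compensation is an assumption), the attitude-sensor reading can drive a counter-rotation of the projected image:

```python
# Illustrative sketch (not the BOWL ProCam firmware): projection stabilization,
# counter-rotating the projected image by the roll angle reported by a
# body-worn attitude sensor so the image stays level as the wearer moves.
import math

def stabilization_transform(roll_deg):
    """2x2 rotation matrix that cancels the sensed body roll."""
    a = math.radians(-roll_deg)       # rotate content opposite to body roll
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

def apply(transform, point):
    x, y = point
    return (transform[0][0] * x + transform[0][1] * y,
            transform[1][0] * x + transform[1][1] * y)

# Hypothetical reading: the wearer leans 10 degrees to the right.
M = stabilization_transform(10.0)
print(apply(M, (100.0, 0.0)))   # corner of the projected image, re-levelled
```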
In this paper, the concept of an Internet Virtual Studio, a modern system for the production of news, entertainment, educational and training material, is proposed. The system is based on virtual studio technology and is integrated with a multimedia database. It was developed for web television content production. In successive subsections, the general system architecture, as well as the architecture of each module in turn, is discussed. The authors describe each module with brief information about its working principles and technical limitations, closely tied to a presentation of its capabilities. Results produced by each module are shown in the form of exemplary images. Finally, an exemplary short production is presented and discussed.
Virtual studio is a popular technology for TV programs that makes it possible to synchronize computer graphics (CG) with real-shot images under camera motion. Normally, high geometrical matching accuracy between CG and real-shot images cannot be expected from a real-time system, and directors sometimes compromise so that the problem does not become visible. We therefore developed a hybrid camera calibration method and CG generation system to achieve accurate geometrical matching between CG and real-shot images in a virtual studio. Our calibration method is intended for camera systems on a platform or tripod with rotary encoders that can measure pan/tilt angles. To solve for the camera model and initial pose, we enhanced the bundle adjustment algorithm to fit the camera model, using the pan/tilt data as known parameters and optimizing all other parameters to be invariant against the pan/tilt values. This initialization yields a highly accurate camera position and orientation consistent with any pan/tilt values. We also created a CG generator that implements the lens distortion function with GPU programming. By applying the lens distortion parameters obtained in the camera calibration process, we achieved good compositing results.
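As an illustration of the lens distortion function mentioned above (a sketch, not the authors' GPU code; the two-term radial model and parameter values are assumptions), a CG generator can warp ideal image coordinates to match the real lens:

```python
# Illustrative sketch (not the authors' GPU shader): a radial lens distortion
# function the CG generator can apply so rendered graphics match the real lens.
# k1, k2 are distortion parameters recovered by the calibration step.
def distort(x, y, k1, k2):
    """Map ideal (undistorted) normalized coords to distorted ones."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Hypothetical parameters for a typical studio zoom lens.
k1, k2 = -0.18, 0.03
print(distort(0.5, 0.25, k1, k2))
```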
This paper details how simple PC software, a small network of consumer-level PCs, some do-it-yourself hardware and four low-cost video projectors can be combined to form an easily configurable and transportable projection display with applications in virtual reality training. The paper provides some observations on the practical difficulties of using such a system, its effectiveness in delivering a VE for training, and what benefit may be offered through the deployment of a large number of these low-cost environments.