While many areas of VR have made significant advances, visual rendering in VR often fails to keep up with the state of the art. There are many reasons for this, but one way to alleviate some of the issues is to use ray tracing instead of rasterization for image generation. Contrary to popular belief, ray tracing is now a realistic, competitive technology. This paper examines the pros and cons of ray tracing and demonstrates its feasibility using the example of a helicopter flight simulator image generator.
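To make the contrast with rasterization concrete, the sketch below casts one primary ray per pixel against a single sphere and shades the hit point, which is the core loop any ray-tracing image generator builds on. It is a minimal illustration only, not the simulator's renderer; every name and constant in it is an assumption.

```python
# Minimal per-pixel ray tracing sketch (illustrative only; not the paper's
# image generator). Casts one primary ray per pixel against a single sphere
# and shades it with Lambertian lighting. All names here are assumptions.
import math

WIDTH, HEIGHT = 160, 120
SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0      # sphere center and radius
LIGHT_DIR = (0.577, 0.577, 0.577)               # normalized light direction

def intersect_sphere(origin, direction):
    """Return the nearest positive ray parameter t, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, SPHERE_C))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - SPHERE_R ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render():
    pixels = []
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # Map the pixel to a ray through a simple pinhole camera.
            u = (2.0 * x / WIDTH - 1.0) * (WIDTH / HEIGHT)
            v = 1.0 - 2.0 * y / HEIGHT
            norm = math.sqrt(u * u + v * v + 1.0)
            d = (u / norm, v / norm, -1.0 / norm)
            t = intersect_sphere((0.0, 0.0, 0.0), d)
            if t is None:
                pixels.append(0)                 # background stays black
            else:
                hit = tuple(t * di for di in d)
                n = tuple((h - c) / SPHERE_R for h, c in zip(hit, SPHERE_C))
                diff = max(0.0, sum(a * b for a, b in zip(n, LIGHT_DIR)))
                pixels.append(int(255 * diff))   # diffuse shading
    return pixels
```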
One of the main barriers to creating and using compelling scenarios in virtual reality is the complexity and time-consuming
effort required for modeling, element integration, and the software development needed to properly display and interact with
the content on the available systems. Even today, most virtual-reality applications are tedious to create, and they are
hard-wired to the specific display and interaction system available to the developers when the application was created.
Furthermore, neither the content nor its dynamics can be altered once the application has been created.
We present our research on designing a software pipeline that enables the creation of compelling scenarios with a fair degree
of visual and interaction complexity in a semi-automated way. Specifically, we are targeting drivable urban scenarios,
ranging from large cities to sparsely populated rural areas that incorporate both static components (e.g., houses, trees) and
dynamic components (e.g., people, vehicles) as well as events, such as explosions or ambient noise.
Our pipeline has four basic components. First, an environment designer, in which users sketch the overall layout of the
scenario and an automated method constructs the 3D environment from the information in the sketch. Second, a scenario editor
used for authoring the complete scenario: incorporating the dynamic elements and events, fine-tuning the automatically
generated environment, defining the execution conditions of the scenario, and setting up any data gathering that may be
necessary during the execution of the scenario. Third, a run-time environment for different virtual-reality systems that
provides users with the interactive experience as designed with the designer and the editor. And fourth, a bi-directional
monitoring system that allows information in the virtual environment to be captured and modified.
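As a rough illustration of how these four components could fit together, the sketch below models them as minimal Python classes. All class and method names are assumptions made for illustration; they are not the actual API of the pipeline.

```python
# Illustrative sketch of the four pipeline stages described above; all class
# and method names are assumptions, not the authors' actual API.
from dataclasses import dataclass, field

@dataclass
class Sketch:
    roads: list = field(default_factory=list)    # user-drawn road polylines
    zones: list = field(default_factory=list)    # urban/rural density zones

@dataclass
class Scenario:
    static_elements: list = field(default_factory=list)   # houses, trees, ...
    dynamic_elements: list = field(default_factory=list)  # people, vehicles
    events: list = field(default_factory=list)            # explosions, noise

class EnvironmentDesigner:
    def build(self, sketch: Sketch) -> Scenario:
        """Automatically construct the 3D environment from a layout sketch."""
        scenario = Scenario()
        scenario.static_elements = [("road", r) for r in sketch.roads]
        return scenario

class ScenarioEditor:
    def author(self, scenario: Scenario) -> Scenario:
        """Add dynamic elements and events, fine-tune the generated scene,
        and define the execution conditions."""
        scenario.dynamic_elements.append(("vehicle", {"route": "loop-1"}))
        scenario.events.append(("ambient_noise", {"level_db": 55}))
        return scenario

class RuntimeEnvironment:
    def run(self, scenario: Scenario) -> None:
        """Present the scenario on a concrete VR display system."""
        ...

class Monitor:
    def capture(self, runtime: RuntimeEnvironment) -> dict:
        """Bi-directional channel: read state out of, and push edits into,
        the running virtual environment."""
        return {}
```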
One of the most interesting capabilities of our pipeline is that scenarios can be built and modified on the fly while they are
being presented in the virtual-reality systems. Users can quickly prototype the basic scene using the designer and the editor
on a control workstation. More elements can then be introduced into the scene from both the editor and the virtual-reality
display. In this manner, users are able to gradually increase the complexity of the scenario with immediate feedback. The main
use of this pipeline is the rapid development of scenarios for human-factors studies. However, it is applicable in a much more
general context.
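One plausible way to realize such on-the-fly editing is a command queue that the editor fills and the run-time drains once per frame, so new elements appear immediately without interrupting rendering. The sketch below illustrates that pattern under assumed names; it is not necessarily the pipeline's actual mechanism.

```python
# Minimal sketch of on-the-fly scenario editing under one plausible design:
# the editor pushes edit commands onto a thread-safe queue that the VR
# run-time drains once per frame, so changes appear immediately. Names are
# assumptions for illustration.
import queue

edit_queue: "queue.Queue[tuple]" = queue.Queue()

def editor_add_element(kind: str, params: dict) -> None:
    # Called from the control workstation (or from within the VR display).
    edit_queue.put(("add", kind, params))

def runtime_frame(scene: dict) -> None:
    # Drain pending edits before rendering; never blocks the frame loop.
    while True:
        try:
            op, kind, params = edit_queue.get_nowait()
        except queue.Empty:
            break
        if op == "add":
            scene.setdefault(kind, []).append(params)

scene: dict = {}
editor_add_element("vehicle", {"route": "main-street"})
runtime_frame(scene)           # the new vehicle is visible this frame
```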
This document describes the Virtual Reality Simulated MIG Lab (sMIG), a system for virtual-reality welder training. It is designed to reproduce the experience of metal inert gas (MIG) welding faithfully enough to be used as a teaching tool for beginning welding students. To make the experience as realistic as possible, it employs physically accurate, tracked input devices, a real-time welding simulation, real-time sound generation, and a 3D display for output. Because it is a fully digital system, it can go beyond providing a realistic welding experience by giving students interactive and immediate feedback, helping them avoid learning incorrect movements from day one.
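As a hypothetical example of the kind of immediate feedback such a system can give, the sketch below compares a tracked torch pose against nominal MIG technique each frame and emits hints when the student drifts. The thresholds and target values are assumptions for illustration, not parameters of sMIG.

```python
# Hypothetical sketch of per-frame feedback in a welding simulator: compare
# tracked torch motion against nominal MIG technique and warn as soon as the
# student drifts. Thresholds and names are assumptions, not sMIG's values.

NOMINAL_ANGLE_DEG = 15.0     # assumed target travel angle
NOMINAL_SPEED_MMS = 5.0      # assumed target travel speed, mm/s

def feedback(torch_angle_deg: float, travel_speed_mms: float) -> list:
    """Return human-readable hints for the current tracked torch pose."""
    hints = []
    if abs(torch_angle_deg - NOMINAL_ANGLE_DEG) > 5.0:
        hints.append("adjust travel angle toward ~15 degrees")
    if travel_speed_mms > NOMINAL_SPEED_MMS * 1.3:
        hints.append("slow down: weld bead will be too thin")
    elif travel_speed_mms < NOMINAL_SPEED_MMS * 0.7:
        hints.append("speed up: risk of burn-through")
    return hints

print(feedback(25.0, 7.5))   # two hints on the first frame of drift
```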
Several critical limitations exist in the currently available commercial tracking technologies for fully-enclosed
virtual reality (VR) systems. While several 6DOF solutions can be adapted to work in fully-enclosed spaces,
they still include elements of hardware that can interfere with the user's visual experience. JanusVF introduced
a tracking solution for fully-enclosed VR displays that achieves comparable performance to available commercial
solutions but without artifacts that can obscure the user's view. JanusVF employs a small, high-resolution
camera that is worn on the user's head, but faces backwards. The VR rendering software draws specific fiducial
markers with known size and absolute position inside the VR scene behind the user but in view of the camera.
These fiducials are tracked by ARToolkitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter to
update the head pose. In this paper we investigate the addition of low-cost accelerometers and gyroscopes such as
those in Nintendo Wii remotes, the Wii Motion Plus, and the Sony Sixaxis controller to improve the precision and
accuracy of JanusVF. Several enthusiast projects have implemented these units as basic trackers or for gesture
recognition, but none so far have created true 6DOF trackers using only the accelerometers and gyroscopes. Our
We repeated our original experiments after adding the low-cost inertial sensors, and the results show considerable
improvements in precision and accuracy as well as reduced noise.
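To illustrate the SCAAT idea in isolation, the toy filter below tracks a single heading angle: gyroscope rates drive the prediction step, and each individual fiducial sighting contributes its own correction, so one marker at a time is enough to keep the estimate converging. This is a deliberately simplified scalar Kalman filter with assumed noise constants, not the filter used in JanusVF.

```python
# Illustrative 1-D sketch of the SCAAT idea used by JanusVF-style tracking:
# each individual fiducial observation updates the filter on its own, while
# a gyroscope rate drives the prediction step between observations. This is
# a toy scalar Kalman filter, not the paper's actual filter; all constants
# are assumptions.

class ScaatHeadingFilter:
    def __init__(self):
        self.theta = 0.0     # estimated heading (rad)
        self.p = 1.0         # estimate variance
        self.q = 1e-4        # process noise (gyro drift), assumed
        self.r = 1e-2        # measurement noise per fiducial, assumed

    def predict(self, gyro_rate: float, dt: float) -> None:
        # Propagate with the inertial rate; uncertainty grows over time.
        self.theta += gyro_rate * dt
        self.p += self.q * dt

    def correct(self, fiducial_bearing: float) -> None:
        # One constraint at a time: a single fiducial sighting is enough
        # for a partial update; no need to see several markers at once.
        k = self.p / (self.p + self.r)
        self.theta += k * (fiducial_bearing - self.theta)
        self.p *= (1.0 - k)

f = ScaatHeadingFilter()
f.predict(gyro_rate=0.10, dt=0.016)   # 60 Hz inertial prediction
f.correct(fiducial_bearing=0.002)     # camera sees one marker behind user
```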