Driving simulation (DS) and virtual reality (VR) share the same technologies for visualization and 3D vision and may use the same techniques for head-movement tracking. They also experience similar difficulties when rendering the displacements of the observer in virtual environments, especially when these displacements are carried out using driver commands such as steering wheels, joysticks and nomad devices. High values of transport delay (the time lag between an action and the corresponding rendering cues) and/or the visual-vestibular conflict (due to the discrepancies perceived by the human visual and vestibular systems when driving or moving with a control device) induce the so-called simulation sickness.
While the visual transport delay can be efficiently reduced using high frame rates, the visual-vestibular conflict is inherent to VR when no motion platform is used. In order to study the impact of displacements on simulation sickness, we tested various driving scenarios in Renault’s 5-sided ultra-high-resolution CAVE. First results indicate that low-speed displacements with longitudinal and lateral accelerations below given perception thresholds are well accepted by a large number of users, whereas relatively high values are only accepted by experienced users and induce VR-induced symptoms and effects (VRISE) in novice users, the worst case corresponding to rotational displacements. These results will be used in optimization techniques at Arts et Métiers ParisTech for motion-sickness reduction in virtual environments for industrial, research, educational or gaming applications.
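The idea of keeping navigation accelerations below perception thresholds can be sketched as follows. This is a minimal, hypothetical illustration: the threshold values and function names are assumptions, not the thresholds measured in the study.

```python
# Hypothetical sketch: limit virtual-camera accelerations to perception
# thresholds to reduce visual-vestibular conflict during navigation.
# Threshold values below are illustrative, not measured results.

LONG_ACCEL_MAX = 0.5  # m/s^2, assumed longitudinal threshold
LAT_ACCEL_MAX = 0.3   # m/s^2, assumed lateral threshold


def clamp(value, limit):
    """Clamp a signed value to [-limit, +limit]."""
    return max(-limit, min(limit, value))


def comfortable_velocity(v_long, v_lat, a_long_cmd, a_lat_cmd, dt):
    """Integrate commanded accelerations after clamping them to thresholds."""
    a_long = clamp(a_long_cmd, LONG_ACCEL_MAX)
    a_lat = clamp(a_lat_cmd, LAT_ACCEL_MAX)
    return v_long + a_long * dt, v_lat + a_lat * dt
```

A joystick command requesting a hard acceleration would thus be softened before the camera moves, trading responsiveness for comfort.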
Immersive digital project reviews consist of using virtual reality (VR) as a tool for discussion between the various stakeholders of a project. In the automotive industry, the digital car prototype model is the common thread that binds them. It is used during immersive digital project reviews between designers, engineers, ergonomists, etc. The digital mockup is also used to assess future car architecture, habitability or perceived-quality requirements, with the aim of reducing the use of physical mockups for optimized cost, delay and quality efficiency. Among the difficulties identified by users, handling the mockup is a major one. Inspired by current uses of nomad devices (multi-touch gestures, the iPhone UI look and feel, and AR applications), we designed a navigation technique taking advantage of these popular input devices: space scrolling allows moving around the mockup. In this paper, we present the results of a study we conducted on the usability and acceptability of the proposed smartphone-based interaction metaphor compared to a traditional technique, and we provide indications of the most efficient choices for different use-cases accordingly. The study was carried out in a traditional 4-sided CAVE, and its purpose is to assess a chosen set of interaction techniques to be implemented in Renault’s new 5-sided 4K x 4K wall high-performance CAVE. The proposed new metaphor using nomad devices is well accepted by novice VR users, and future implementations should allow efficient industrial use. Nomad devices are an easy and user-friendly alternative to existing traditional control devices such as joysticks.
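A smartphone-based "moving around the mockup" interaction is essentially a touch-driven orbit of the camera. The sketch below is an assumption about how such a mapping could work (the gain, the spherical parameterisation and the function names are all hypothetical, not the paper's implementation): a one-finger drag updates yaw and pitch, and the camera sits on a sphere centred on the mockup.

```python
import math

# Hypothetical sketch of a touch-drag-to-orbit mapping for inspecting a
# digital mockup. A drag of (dx, dy) pixels rotates the camera around the
# mockup origin; gain and parameterisation are illustrative assumptions.


def orbit_camera(yaw, pitch, radius, dx, dy, gain=0.005):
    """Update orbit angles from a touch drag and return the camera position."""
    yaw = (yaw + dx * gain) % (2 * math.pi)
    # Clamp pitch so the camera never flips over the poles.
    pitch = max(-math.pi / 2, min(math.pi / 2, pitch + dy * gain))
    x = radius * math.cos(pitch) * math.sin(yaw)
    y = radius * math.sin(pitch)
    z = radius * math.cos(pitch) * math.cos(yaw)
    return yaw, pitch, (x, y, z)
```

Pinch gestures could similarly drive `radius` for zooming, which is what makes multi-touch devices a natural fit for this kind of navigation.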
Renault is currently setting up a new CAVE™, a 5-sided rear-projected virtual reality room with a combined 3D resolution of 100 Mpixels, distributed over sixteen 4K projectors and two 2K projectors, as well as an additional 3D HD collaborative powerwall. Renault’s CAVE™ aims at answering the needs of the various vehicle conception steps. Starting from vehicle design, through the subsequent engineering steps, ergonomic evaluation and perceived-quality control, Renault has built up a list of use-cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as the iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by current uses of nomad devices (multi-touch gestures, the iPhone UI look and feel, and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected on our test platform, a 4-sided homemade low-cost virtual reality room powered by ultra-short-throw and standard HD home projectors.
Motion parallax is a crucial visual cue produced by translations of the observer for the perception of depth and self-motion.
Therefore, tracking the observer's viewpoint has become essential in immersive virtual reality (VR) systems
(cylindrical screens, CAVE, head mounted displays) used e.g. in automotive industry (style reviews, architecture design,
ergonomics studies) or in scientific studies of visual perception.
The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g.
vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering
a head-coupled viewpoint in VR can lead to an illusory perception of unstable environments, unless a
non-unity scale factor is applied to the recorded head movements. Besides, cylindrical screens are usually used with static
observers, due to image distortions when rendering images for viewpoints other than the sweet spot.
We developed a technique to compensate these non-linear visual distortions in real time, in an industrial VR setup based
on a cylindrical-screen projection system.
Additionally, to evaluate the amount of discrepancy between visual and extra-retinal cues tolerated without perceptual
distortions, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was
introduced in this system. The influence of this artificial gain was measured on the gait stability of free-standing
participants. Results indicate that, below unity, gains significantly alter postural control. Conversely, the influence of
higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification
is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality.
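The core of the motion-parallax-gain technique described above can be sketched in a few lines. This is a minimal illustration under assumed conventions (the reference-point handling and names are assumptions): the virtual camera moves by the tracked head displacement scaled by a gain, where a gain of 1 reproduces natural parallax and a gain above 1 amplifies it, widening the explorable space.

```python
# Minimal sketch of a "motion parallax gain": the virtual camera's
# displacement is the tracked head displacement from a reference point,
# scaled by a gain g. g = 1 gives natural parallax; g > 1 amplifies it.


def camera_position(head_pos, head_ref, gain):
    """Scale the head displacement from head_ref by the parallax gain."""
    return tuple(r + gain * (p - r) for p, r in zip(head_pos, head_ref))
```

With a gain of 2, a 20 cm head translation would move the virtual viewpoint 40 cm, which is the amplification whose perceptual tolerance the study measures.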
Many authors report that binocular vision plays an important role in evaluating the distance to scenery objects. Furthermore, it has been observed that a narrow (<20°) angular visual observation field, called the field of view, causes underestimation of distances to objects in natural scenes. In a series of experiments we studied whether distances were underestimated for larger fields of view (60°, 90° and 120°) and whether binocular vision could correct distance estimation. We also studied distance estimation in virtual environments under the same observation conditions, as it is known that it may be poorer because of the lack or bias of visual stimuli. Observers had to estimate proximal distances in real and virtual (large-field-of-view head-mounted display and powerwall) scenes of a car interior. We found a strong underestimation of distances when observing proximal objects (≤50 cm) in a reduced field of view for both real and virtual scenes, and the more the field of view is reduced, the more observers underestimate distances. Furthermore, underestimation is stronger in virtual environments than in real ones for the same objects. We also compared distance estimations between monocular and binocular observation conditions and found no significant differences for any field of view. Our results show that binocular vision does not allow better distance estimation than monocular vision. These results suggest an unexpectedly weak effect of binocular vision on the estimation of distances of proximal objects in multi-cue environments.
The perceptual effects of changes of texture luminance, either between the eyes or over time, have been studied in several experiments and have led to a better comprehension of phenomena such as the sieve effect, binocular and monocular lustre, and rivaldepth.
In this paper, we propose an ecological model of glittering texture and analyze glitter perception in terms of variations of texture luminance and animation frequency, in dynamic illumination conditions. Our approach is based on randomly oriented mirrors that are computed according to the specular term of Phong's image rendering formula. The sparkling effect is thus correlated to the relative movements of the resulting textured object, the light array and the observer's point of view.
The perceptual effect obtained with this model depends on several parameters: the mirrors' density, the Phong specular exponent and the statistical properties of the mirrors' normal vectors. The ability to set these properties independently offers a way to explore a characterization space of glitter. A rating procedure provided a first approximation of the numerical values that lead to the most convincing impression of typical sparkling surfaces such as metallic paint, granite or sea shore.
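The micro-mirror model above can be sketched as follows. This is a toy illustration under assumptions (the Gaussian sampling of mirror normals and all names are hypothetical, not the paper's implementation): each texel is a randomly oriented mirror with probability given by the density, and its sparkle intensity follows the Phong specular term, the dot product of the reflected light direction and the view direction raised to the specular exponent.

```python
import math
import random

# Toy sketch of the glitter model: a texel is a micro-mirror with
# probability `density`; its intensity is the Phong specular term
# (R . V)^exponent. The normal-sampling scheme is an assumption.


def reflect(light, normal):
    """Reflect direction `light` about `normal` (unit 3-vectors)."""
    d = sum(l * n for l, n in zip(light, normal))
    return tuple(2 * d * n - l for l, n in zip(light, normal))


def sparkle(light, view, normal, exponent):
    """Phong specular intensity of one micro-mirror."""
    r = reflect(light, normal)
    return max(0.0, sum(a * b for a, b in zip(r, view))) ** exponent


def texel(light, view, density, exponent, rng=random):
    """Intensity of one texel: a mirror present with probability `density`."""
    if rng.random() >= density:
        return 0.0
    # Random mirror orientation scattered around the surface normal (0,0,1);
    # the spread of this Gaussian is one of the statistical parameters.
    nx, ny, nz = rng.gauss(0, 0.2), rng.gauss(0, 0.2), 1.0
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return sparkle(light, view, (nx / norm, ny / norm, nz / norm), exponent)
```

Because each mirror's reflection direction is fixed while the light, object and viewpoint move, individual texels flash on and off, which is precisely the correlation between sparkle and relative motion the model aims for.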
Different stereoscopic effects based on 100 percent binocular luminance contrast have been described previously: the 'sieve' effect, the 'binocular lustre' effect, the 'floating' effect and the 'rivaldepth' effect. By means of a dichoptic set-up, we measured the detection thresholds for these different effects as a function of binocular luminance contrast. Psychometric data were recorded using a Yes-No paradigm, a spatial 2AFC paradigm and a temporal 2AFC paradigm. Our results show that all these stereoscopic effects are perceived even at small contrasts. We observed an increase of the detection thresholds in the following order: 'sieve', 'binocular lustre', 'rivaldepth' and 'floating' effects. Two groups of effects were distinguished.