While virtual reality and digital games share many core technologies, the programming environments, toolkits, and
workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in
visualization and simulation often have a different feature set or design philosophy than game engines, while popular
game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives
developers a unified development environment and the ability to easily port projects, but involves challenges beyond just
adding stereo 3D visuals.
In this paper we outline the issues involved in adapting a game engine for use with an immersive display system
including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss
application development and workflow approaches including camera management, rendering synchronization, GUI
design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display system.
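Although the paper's implementation targets Unity3D, the per-frame core of tracked stereo camera management is engine-agnostic: derive a left and a right eye position by offsetting the tracked head position along the head's right vector by half the interpupillary distance. The following Python sketch illustrates only that step; all names and values are illustrative assumptions, not the paper's actual Unity code:

```python
def eye_positions(head_pos, head_right, ipd=0.065):
    """Offset the tracked head position along the head's right vector
    to obtain left/right eye positions for stereo rendering.

    head_pos   -- tracked head position in world space (metres)
    head_right -- unit vector pointing to the viewer's right
    ipd        -- interpupillary distance; 0.065 m is a common default
    """
    half = [0.5 * ipd * c for c in head_right]
    left = [p - h for p, h in zip(head_pos, half)]
    right = [p + h for p, h in zip(head_pos, half)]
    return left, right

# Viewer standing 1.5 m from a display wall, facing straight ahead
left, right = eye_positions([0.0, 1.7, 1.5], [1.0, 0.0, 0.0])
```

In a CAVE-style system each eye position would additionally feed an off-axis (asymmetric-frustum) projection for every wall, which is where most of the camera-management complexity described in the paper lies.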
In this paper we present appARel, a creative research project at the intersection of augmented reality, fashion, and
performance art. appARel is a mobile augmented reality application that transforms otherwise ordinary garments with
3D animations and modifications. With appARel, entire fashion collections can be uploaded in a smartphone
application, and “new looks” can be downloaded in a software update. The project will culminate in a performance art
fashion show, scheduled for March 2013. appARel includes textile designs incorporating fiducial markers, garment
designs that incorporate multiple markers with the human body, and iOS and Android apps that apply different
augments, or “looks”, to a garment. We discuss our philosophy for combining computer-generated and physical objects, and share the challenges we encountered in applying fiducial markers to the 3D curvature of the human body.
Virtual reality has long been used for training simulations in fields from medicine to welding to vehicular operation, but
simulations involving more complex cognitive skills present new design challenges. Foreign language learning, for
example, is increasingly vital in the global economy, but computer-assisted education is still in its early stages.
Immersive virtual reality is a promising avenue for language learning as a way of dynamically creating believable scenes
for conversational training and role-play simulation. Visual immersion alone, however, only provides a starting point.
We suggest that the addition of social interactions and motivated engagement through narrative gameplay can lead to
truly effective language learning in virtual environments. In this paper, we describe the development of a novel
application for teaching Mandarin using CAVE-like VR, physical props, human actors and intelligent virtual agents, all
within a semester-long multiplayer mystery game. Students travel (virtually) to China on a class field trip, which soon
becomes complicated with intrigue and mystery surrounding the lost manuscript of an early Chinese literary classic.
Virtual reality environments such as the Forbidden City and a Beijing teahouse provide the setting for learning language,
cultural traditions, and social customs, as well as the discovery of clues through conversation in Mandarin with
characters in the game.
For decades, virtual reality artwork has existed in a small but highly influential niche in the world of electronic and new
media art. Since the early 1990s, virtual reality installations have come to define an extreme boundary point of both
aesthetic experience and technological sophistication. Classic virtual reality artworks have an almost mythological
stature: powerful, exotic, and often rarely exhibited. Today, art in virtual environments continues to evolve and mature,
encompassing everything from fully immersive CAVE experiences to performance art in Second Life to the use of
augmented and mixed reality in public space. Art in Virtual Reality 2010 is a public exhibition of new artwork that
showcases the diverse ways that contemporary artists use virtual environments to explore new aesthetic ground and
investigate the continually evolving relationship between our selves and our virtual worlds.
Dots and Dashes is a virtual reality artwork that explores online romance over the telegraph, based on Ella Cheever
Thayer's novel <i>Wired Love: A Romance in Dots and Dashes (an Old Story Told in a New Way)</i><sup>1</sup>. The uncanny
similarities between this story and the world of today's virtual environments provide the springboard for an exploration
of a wealth of anxieties and dreams, including the construction of identities in an electronically mediated environment,
the shifting boundaries between the natural and machine worlds, and the spiritual dimensions of science and technology.
In this paper we examine the parallels between the telegraph networks and our current conceptions of cyberspace, as
well as unique social and cultural impacts specific to the telegraph. These include the new opportunities and roles
available to women in the telegraph industry and the connection between the telegraph and the Spiritualist movement.
We discuss the development of the artwork, its structure and aesthetics, and the technical development of the work.
This paper examines historical audio applications used to provide real-time immersive sound for CAVE<sup>TM</sup> environments and discusses their relative strengths and weaknesses. We examine and explain issues of providing spatialized sound immersion in real-time virtual environments (VEs), some problems with currently used sound servers, and a set of requirements for an 'ideal' sound server. We present the initial configuration of a new cross-platform sound server solution using open source software and the Open Sound Control (OSC) specification for the creation of real-time spatialized audio with CAVE applications, specifically Ygdrasil (Yg) environments. The application, aNother Sound Server (NSS), establishes an application programming interface (API) using OSC, a logical server layer implemented in Python, and an audio engine using SuperCollider (SC). We discuss spatialization implementation and other features. Finally, we document the Synthecology project, which premiered at WIRED NEXTFEST 2005 and was the first VE to use NSS. We also discuss various techniques that enhance presence in networked VEs, as well as possible and planned extensions of NSS.
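The NSS address space itself is defined by the paper and not reproduced here, but the wire format that any OSC client and server exchange is fixed by the OSC specification: a null-terminated address string, a type tag string beginning with ",", and big-endian arguments, all padded to 4-byte boundaries. The following Python sketch hand-encodes a minimal float-argument message; the `/source/1/position` address is a hypothetical example, not an actual NSS address:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a 4-byte boundary
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message whose arguments are all float32."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

# Hypothetical address: position sound source 1 in 3D space (metres)
packet = osc_message("/source/1/position", 1.0, 0.0, -2.5)
```

In practice the resulting packet would be sent over UDP to the server layer, which dispatches on the address and drives the SuperCollider audio engine.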