This paper examines historical audio applications used to provide real-time immersive sound for CAVE™ environments and discusses their relative strengths and weaknesses. We examine the issues involved in providing spatialized sound immersion in real-time virtual environments (VEs), problems with currently used sound servers, and a set of requirements for an 'ideal' sound server. We present the initial configuration of a new cross-platform sound server built from open source software and the Open Sound Control (OSC) specification for creating real-time spatialized audio in CAVE applications, specifically Ygdrasil (Yg) environments. The application, aNother Sound Server (NSS), establishes an application programming interface (API) using OSC, a logical server layer implemented in Python, and an audio engine built on SuperCollider (SC). We discuss the spatialization implementation and other features. Finally, we document the Synthecology project, which premiered at WIRED NEXTFEST 2005 and was the first VE to use NSS. We also discuss techniques that enhance presence in networked VEs, as well as possible and planned extensions of NSS.
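To illustrate the kind of client-to-server traffic an OSC-based API implies, the following is a minimal sketch of encoding an OSC message in pure Python per the OSC 1.0 wire format (NUL-padded address and type-tag strings, big-endian float32 arguments). The address `/nss/source/1/pos` and its (x, y, z) arguments are hypothetical examples, not the actual NSS namespace:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """NUL-terminate and pad a byte string to a multiple of 4, per the OSC spec."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32 ('f' type tags)."""
    tags = "," + "f" * len(floats)           # type-tag string starts with ','
    data = osc_pad(address.encode("ascii")) + osc_pad(tags.encode("ascii"))
    for f in floats:
        data += struct.pack(">f", f)          # OSC numbers are big-endian
    return data

# Hypothetical message: move sound source 1 to (1.0, 0.0, 2.5) in CAVE coordinates.
msg = osc_message("/nss/source/1/pos", 1.0, 0.0, 2.5)
```

In practice such a datagram would be sent over UDP to the server's listening port; in this design the Python logical layer would translate it into commands for the SuperCollider audio engine.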
The development of a reliable untethered interactive virtual environment has long been a goal of the VR community. Several nonmagnetic tracking systems based on optical, acoustic, and mechanical techniques have been developed in recent years. However, an inexpensive, effective, and unobtrusive tracking solution remains elusive. This paper presents a camera-based three-dimensional hand tracking system implemented in the PARIS augmented reality environment and used to drive a demonstration application.