The proliferation of technological devices and artistic strategies has created an urgent and justified need to capture site-specific, time-based virtual reality experiences. Interactive art experiences depend in particular on the orchestration of multiple sources, including hardware, software, the site-specific location, visitor input, and stereoscopic 3D and sensory interactions. Although a photograph or video may document a particular component of the work, such as a still image of the artwork or a sample of its sound, these represent only a fraction of the overall experience. This paper discusses documentation strategies that combine multiple approaches and capture the interactions between art projection, acting, stage design, sight movement, dialogue, and audio design.
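To make the combined-documentation idea concrete, the sketch below (our illustration, not a method from the paper) aligns independently captured streams on a shared capture clock so that projection cues and visitor input can be replayed together; all stream names, timestamps, and payloads are hypothetical.

    # A minimal sketch of one combined-documentation strategy: independently
    # captured streams are aligned on a shared capture clock so they can be
    # replayed together as one record of the experience.
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Event:
        t: float                             # seconds on the shared capture clock
        stream: str = field(compare=False)   # e.g. "projection", "visitor"
        payload: str = field(compare=False)  # frame id, cue, tracked input, ...

    def merge_streams(*streams):
        """Interleave time-sorted per-stream logs into one chronological record."""
        return list(heapq.merge(*streams))   # compares Events by timestamp only

    projection = [Event(0.0, "projection", "scene 1 start"),
                  Event(4.2, "projection", "scene 2 start")]
    visitor = [Event(1.3, "visitor", "entered trigger zone"),
               Event(3.9, "visitor", "gesture detected")]

    for e in merge_streams(projection, visitor):
        print(f"{e.t:5.1f}s  [{e.stream}] {e.payload}")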
Craniofacial anthropometry (the measurement and analysis of head and face dimensions) has been used to assess and
describe abnormal craniofacial variation (dysmorphology) and the facial phenotype in many medical syndromes.
Traditionally, anthropometric measurements have been collected by applying calipers and tape measures directly to the subject's head and face; such measurements can suffer from inaccuracies due to restless subjects, erroneous landmark identification, clinician variability, and other forms of human error. Three-dimensional imaging technologies promise a more effective alternative: separating the acquisition and measurement phases reduces these sources of variability while also enabling novel measurements and longitudinal analysis of subjects. Indiana University (IU) is part of an international
consortium of researchers studying fetal alcohol spectrum disorders (FASD). Fetal alcohol exposure results in
predictable craniofacial dysmorphologies, and anthropometry has proven to be an effective diagnostic tool for the condition. IU is leading a project to study the use of 3D surface scanning to acquire anthropometric data in order to diagnose FASD more accurately, especially in its milder forms. This paper describes our experiences in selecting, verifying,
supporting, and coordinating a set of 3D scanning systems for use in collecting facial scans and anthropometric data
from around the world.
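As an illustration of how measurement separates from acquisition, the sketch below (ours, not the project's actual pipeline) computes one anthropometric measurement from landmark coordinates already placed on a 3D facial scan; the landmark names follow standard Farkas notation, while the coordinates and caliper reading are invented.

    # A minimal sketch: once landmarks have been placed on a 3D facial scan,
    # a linear anthropometric measurement reduces to the Euclidean distance
    # between landmark coordinates. All numeric values below are invented.
    import numpy as np

    landmarks = {                                        # (x, y, z) in mm, scanner frame
        "exocanthion_r":  np.array([42.1, 61.8, 35.0]),  # outer corner, right eye
        "endocanthion_r": np.array([17.3, 60.2, 41.5]),  # inner corner, right eye
    }

    def interlandmark_distance(lmk, a, b):
        """Straight-line distance in mm between two named landmarks."""
        return float(np.linalg.norm(lmk[a] - lmk[b]))

    # Palpebral fissure length; short fissures are a key FASD diagnostic feature.
    pfl = interlandmark_distance(landmarks, "exocanthion_r", "endocanthion_r")

    # Verifying a scanner can be as simple as comparing scan-derived values
    # against direct caliper measurements of the same subject.
    caliper_pfl = 26.0                                   # hypothetical direct reading
    print(f"scan PFL = {pfl:.1f} mm, caliper = {caliper_pfl:.1f} mm, "
          f"difference = {pfl - caliper_pfl:+.1f} mm")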
Navigating effectively in virtual environments at human scales is a difficult problem. However, it is even more difficult to navigate in large-scale virtual environments such as those simulating the physical Universe; the huge spatial range of astronomical simulations and the dominance of empty space make it hard for users to acquire reliable spatial knowledge of astronomical contexts. This paper introduces a careful combination of navigation and visualization techniques to resolve the unique problems of large-scale real-time exploration in terms of travel and wayfinding. For large-scale travel, spatial scaling techniques and constrained navigation manifold methods are adapted to the large spatial scales of the virtual Universe. We facilitate large-scale wayfinding and context awareness using visual cues such as power-of-10 reference cubes, continuous exponential zooming into points of interest, and a scalable world-in-miniature (WIM) map. These methods enable more effective exploration and assist with accurate context-model building, thus leading to improved understanding of virtual worlds in the context of large-scale astronomy.
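To illustrate the continuous exponential-zoom idea (a sketch of the general technique; the paper's exact formulation may differ), the snippet below interpolates the viewing scale geometrically rather than linearly, so that equal time steps each multiply the scale by a constant factor, keeping the zoom perceptually uniform across many orders of magnitude.

    # Exponential zoom: linear interpolation in log space, so each step of t
    # multiplies the viewing scale by the same factor.
    import math

    def exponential_zoom(s_start, s_end, t):
        """Viewing scale at normalized time t in [0, 1]."""
        return math.exp((1.0 - t) * math.log(s_start) + t * math.log(s_end))

    # Zooming from 1 astronomical unit out to 1 parsec (both in metres):
    AU, PARSEC = 1.496e11, 3.086e16
    for i in range(11):
        t = i / 10.0
        print(f"t = {t:3.1f}   view scale = {exponential_zoom(AU, PARSEC, t):.3e} m")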