Scalable Metadata Environments (MDEs) are an artistic approach to designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, an MDE is defined as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Within an MDE, entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and levels of detail, and explored interactively in real time. MDEs are designed to reflect similarities and differences in the underlying data or metadata, such that patterns can be visually and aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory, or dynamic attributes. While many approaches to visual and auditory data mining exist, MDEs are distinct in that they use qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments that are computationally driven by multi-GPU, CUDA-enabled fluid dynamics systems.
ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections, a practice that connects the expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, data-driven virtual world that combines the aesthetics of fine-lined copper engraving and the grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics, including wireframes and particle systems. The installation is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. It utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million-pixel, barrier-strip auto-stereoscopic display. Here we describe the installation's physical and audio display systems, together with a hybrid strategy for multi-channel spatialized interactive audio rendering in immersive virtual reality, developed in the context of this artwork, that combines amplitude-based, delay-based, and physical-modeling-based real-time spatialization approaches for enhanced expressivity in the virtual sound environment (a simplified sketch follows below). The desire to represent a combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and the overall system design. We discuss the resulting aesthetic experience in relation to the overall system.
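To make the hybrid rendering strategy concrete, the following minimal Python sketch combines the amplitude-based and delay-based cues named above, panning a mono source across a set of loudspeakers; it omits the physical-modeling layer. The speaker coordinates, inverse-distance gain law, and the spatialize function are illustrative assumptions, not the installation's actual audio code.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # meters per second
    SAMPLE_RATE = 44100     # samples per second

    def spatialize(mono, source_xy, speakers_xy):
        # Render a mono signal to one channel per speaker by combining two
        # cues: amplitude (gain falls off with source-to-speaker distance)
        # and delay (arrival time grows with distance). Hypothetical sketch;
        # the installation's renderer also layers physical modeling on top.
        channels = []
        for spk in speakers_xy:
            d = float(np.linalg.norm(np.asarray(source_xy) - np.asarray(spk)))
            gain = 1.0 / max(d, 1.0)                              # amplitude cue
            delay = int(round(d / SPEED_OF_SOUND * SAMPLE_RATE))  # delay cue, in samples
            ch = np.zeros(len(mono) + delay)
            ch[delay:] = gain * mono   # delayed, attenuated copy of the source
            channels.append(ch)
        longest = max(len(ch) for ch in channels)
        return np.stack([np.pad(ch, (0, longest - len(ch))) for ch in channels])

    # Example: a 440 Hz tone placed off-center among four of the channels.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * 440.0 * t)
    corners = [(-2.0, 2.0), (2.0, 2.0), (-2.0, -2.0), (2.0, -2.0)]
    out = spatialize(tone, source_xy=(1.0, 0.5), speakers_xy=corners)

In this toy form, each output channel receives the same source signal scaled and shifted independently, so a moving source produces coordinated amplitude and arrival-time differences across the array; a full renderer would add per-channel filtering and room modeling on top of these cues.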