Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, consisting of 72 near-seamless, off-axis-optimized passive stereo LCD panels that create an approximately 320-degree panoramic environment for displaying information at 37 megapixels (in stereoscopic 3D) or 74 megapixels in 2D, at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting.

CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In the 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and it leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph and VTK applications.
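Off-center viewing of a fixed screen, as described above, implies asymmetric (off-axis) projection frusta rather than the symmetric frustum of a desktop application. The sketch below shows the standard head-tracked off-axis construction used in CAVE-style systems: given a tracked eye position and the extents of a fixed screen plane, compute glFrustum-style extents at the near plane. The function names and the coordinate convention (screen in the plane z = 0, eye at positive z looking toward -z) are illustrative assumptions, not CAVE2's actual code.

```python
def off_axis_projection(left, right, bottom, top, near, far):
    """Row-major 4x4 perspective matrix for an asymmetric (off-axis)
    frustum, in the same form as OpenGL's glFrustum."""
    return [
        [2 * near / (right - left), 0.0, (right + left) / (right - left), 0.0],
        [0.0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def head_tracked_frustum(eye, screen_min, screen_max, near, far):
    """Frustum for a fixed screen rectangle in the plane z = 0, viewed
    from a tracked eye at positive z looking toward -z.  Similar
    triangles scale the screen extents (measured relative to the eye)
    down to the near plane, yielding an asymmetric frustum whenever
    the eye is off-center."""
    ex, ey, ez = eye
    scale = near / ez
    left = (screen_min[0] - ex) * scale
    right = (screen_max[0] - ex) * scale
    bottom = (screen_min[1] - ey) * scale
    top = (screen_max[1] - ey) * scale
    return off_axis_projection(left, right, bottom, top, near, far)
```

With the eye centered in front of the screen this reduces to an ordinary symmetric frustum; moving the eye sideways skews the frustum so that the image stays registered to the physical panel.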
Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser,
represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it can provide a virtual reality (VR) experience to web users, supported by the growing availability of stereoscopic displays (3D TV,
desktop, and mobile). WebGL’s value in biomedical visualization has been demonstrated by applications for interactive
anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of
instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL
design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D
graphics. The methodology was informed by the development of an interactive web application depicting the anatomy
and various pathologies of the human eye. The application supports several stereoscopic display modes for a better understanding of 3D anatomical structures.
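Whatever the display mode, stereoscopic rendering ultimately means drawing the scene twice from two eye positions separated by the interocular distance. The sketch below (Python for illustration; an application like the one described would do this in JavaScript/WebGL) computes the two eye positions by offsetting the camera along its right vector. The function name and the 64 mm default IOD are assumptions, not the paper's implementation.

```python
import math

def stereo_eyes(camera_pos, look_dir, up, iod=0.064):
    """Left/right eye positions for a stereo pair: offset the camera by
    half the interocular distance (IOD; ~64 mm for an average adult)
    along its right vector.  The scene is then rendered once per eye."""
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def normalize(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    right = normalize(cross(look_dir, up))
    half = iod / 2.0
    left_eye = tuple(p - half * r for p, r in zip(camera_pos, right))
    right_eye = tuple(p + half * r for p, r in zip(camera_pos, right))
    return left_eye, right_eye
```

The different stereoscopic modes (side-by-side, anaglyph, interlaced) differ only in how the two resulting images are composited, not in this camera setup.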
CytoViz is an artistic, real-time information visualization driven by statistical information gathered during gigabit network transfers to the Scalable Adaptive Graphics Environment (SAGE) at various events. Data streams are mapped to cellular organisms, defining their structure and behavior as autonomous agents. Network bandwidth drives the growth of each entity, and latency defines its physics-based independent movements. The collection of entities is bound within a 3D representation of the local venue. This visual and animated metaphor allows the public to experience the complexity of the high-speed network streams used in the scientific community.
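The bandwidth-to-growth and latency-to-movement mapping described above can be sketched as a per-frame agent update. All constants, field names, and the function below are illustrative assumptions, not CytoViz's actual implementation.

```python
import random

def update_entity(entity, bandwidth_mbps, latency_ms, dt=0.1):
    """One simulation step for a CytoViz-style cell agent.
    Bandwidth drives growth; latency scales a random, physics-like
    velocity jitter.  Constants and field names are illustrative."""
    entity["size"] += bandwidth_mbps * 0.001 * dt          # growth from throughput
    jitter = latency_ms * 0.01                             # movement energy from RTT
    entity["vel"] = tuple(v + random.uniform(-jitter, jitter) * dt
                          for v in entity["vel"])
    entity["pos"] = tuple(p + v * dt
                          for p, v in zip(entity["pos"], entity["vel"]))
    return entity
```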
Moreover, CytoViz displays the presence of discoverable Bluetooth devices carried by nearby persons. The concept is to generate an event-specific, real-time visualization that creates informational 3D patterns based on actual local presence. The observed Bluetooth traffic is contrasted with the wide-area networking traffic by overlaying 2D animations on top of the 3D world. Each device is mapped to an animation that fades over time while displaying the name of the detected device and its unique physical address.

CytoViz was publicly presented at two major international conferences in 2005 (iGrid2005 in San Diego, CA and SC05 in Seattle, WA).
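The per-device fading overlay described above can be sketched as a small record whose opacity decays linearly with age. The class, the 10-second lifetime, and the example device name and address are illustrative assumptions, not CytoViz's actual design.

```python
class DeviceOverlay:
    """2D overlay for one detected Bluetooth device: shows its name and
    physical address, fading out linearly over `lifetime_s` seconds."""
    def __init__(self, name, address, detected_at, lifetime_s=10.0):
        self.name = name
        self.address = address
        self.detected_at = detected_at
        self.lifetime_s = lifetime_s

    def alpha(self, now):
        """Opacity in [0, 1]: fully opaque at detection, gone after lifetime_s."""
        age = now - self.detected_at
        return max(0.0, 1.0 - age / self.lifetime_s)
```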