Immersive video enables end users to experience video interactively, with viewer motion from any position and orientation within a supported viewing space. MPEG Immersive Video (MIV) is an emerging standard for the compression and delivery of immersive media content. It extracts only the needed information, in the form of patches, from a collection of cameras capturing the scene and compresses it with standard video codecs so that the scene can be reconstructed at the decoder from any pose. A MIV bitstream is composed of non-video components carrying view parameters and patch information, in addition to multiple video sub-bitstreams carrying texture and geometry information. In this paper, we describe a simplified MIV carriage method that uses an SEI message within a single-layer HEVC bitstream to take advantage of existing video streaming infrastructure, including legacy video servers. The Freeport player is built on the open-source VLC video player, a GPU DirectX implementation of a MIV renderer, and a face-tracking tool for viewer motion. A prerecorded demonstration of the Freeport player is provided.
In contrast with traditional extended depth-of-field approaches, we propose a depth-based deconvolution technique that accounts for the depth-variant nature of the point spread function of an ordinary fixed-focus camera. The developed technique brings a single blurred image to focus at different depth planes, which are then stitched together based on a depth map to produce a full-focus image. Strategies to suppress the deconvolution's ringing artifacts are implemented on three levels: block tiling to eliminate boundary artifacts, reference maps to reduce ringing initiated by sharp edges, and depth-based masking to mitigate artifacts arising at neighboring depth-transition surfaces. The performance is validated numerically for planar and multidepth objects.
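The core idea of the abstract above — deconvolving the same blurred image once per depth plane and stitching the results with a depth map — can be sketched as follows. This is a minimal illustrative pipeline, not the paper's implementation: it uses plain Wiener deconvolution per plane and omits the paper's block tiling, reference maps, and depth-based masking stages; the function names and the `psfs` dictionary (one hypothetical PSF per depth label) are assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution of a single 2-D image."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

def depth_based_deconvolution(blurred, depth_map, psfs, k=1e-2):
    """Bring one blurred image to focus at each depth plane with that
    plane's PSF, then stitch the per-plane results together using the
    depth map (simplified sketch of the approach described above)."""
    out = np.zeros_like(blurred, dtype=float)
    for d, psf in psfs.items():
        plane = wiener_deconvolve(blurred, psf, k)
        out[depth_map == d] = plane[depth_map == d]  # keep in-focus regions
    return out
```

With a single depth plane this reduces to ordinary Wiener deconvolution; the depth map only matters when the PSF genuinely varies across planes.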
Advances in medical imaging technologies are helping radiologists make more accurate diagnoses. This paper details an autostereoscopic static volumetric display, called CSpace®, capable of projecting three-dimensional (3D) medical imaging data in 3D world coordinates. Using this innovative technology, the displayed 3D data set can be viewed in the optical medium from any perspective angle without the use of any viewing aid. The design of CSpace® allows volume rendering of both the surface and the interior of any organ of the human body. As a result, adjacent tissues can be better monitored, and disease diagnoses can be more accurate. In conjunction with the CSpace hardware, we have developed a software architecture that can read Digital Imaging and Communications in Medicine (DICOM) files, whether captured by ultrasound devices, magnetic resonance imaging (MRI), or computed tomography (CT) scanners. The software acquires the imaging parameters from the file headers and then applies them to the rendered 3D object to display it in the exact form in which it was captured.
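Displaying the volume "in the exact form in which it was captured" comes down to applying the geometry attributes from the DICOM header. A minimal sketch of that mapping, using the standard DICOM Image Plane attributes (ImagePositionPatient, ImageOrientationPatient, PixelSpacing, and an inter-slice spacing), is shown below; the function name and argument layout are assumptions, not the paper's software architecture.

```python
import numpy as np

def voxel_to_patient(ijk, position, orientation, pixel_spacing, slice_spacing):
    """Map voxel indices (column i, row j, slice k) to patient-space
    millimeter coordinates using standard DICOM header attributes:
    ImagePositionPatient (position), ImageOrientationPatient
    (orientation, six direction cosines), PixelSpacing, and the
    inter-slice spacing."""
    row_dir = np.asarray(orientation[:3], dtype=float)   # direction of increasing column index
    col_dir = np.asarray(orientation[3:], dtype=float)   # direction of increasing row index
    slice_dir = np.cross(row_dir, col_dir)               # normal to the image plane
    dr, dc = pixel_spacing                               # PixelSpacing = (row spacing, column spacing) in mm
    i, j, k = ijk
    return (np.asarray(position, dtype=float)
            + row_dir * dc * i
            + col_dir * dr * j
            + slice_dir * slice_spacing * k)
```

For an axial slice with identity orientation cosines, this reduces to simple per-axis scaling by the pixel and slice spacings; oblique acquisitions are handled by the same formula through the direction cosines.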
A public-private research collaboration has demonstrated a promising three-dimensional volumetric display system capable of satisfying the performance criteria of ease of viewing, high resolution, scalability, and reliability. The system utilizes commercial off-the-shelf micro-electro-mechanical systems (MEMS) mirror arrays to direct infrared light beams into an image space. To date, monochromatic images have been demonstrated in an image-space material that exhibits two-photon upconversion. The prototype display requires no special viewing aids, produces a volumetric image viewable from 360 degrees, and, as presently designed, is capable of producing 800 million volumetric pixels of image content.