Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift
in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven
by ARM™ and other low-power processors. Such systems are now being integrated directly into the next generation of
3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been
integrated into some high-profile titles on established platforms such as Windows PC and PS3, there is a lack of
GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D
gaming on Android for applications on mobile devices, set-top boxes and TVs. A core component of the architecture is a
3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics
applications into stereoscopic 3D in real time. The architecture includes a method of analyzing 2D games and using
rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering
technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the
stereo renderer are demonstrated by characterizing its performance in comparison to more traditional rendering techniques,
including depth-image-based rendering, in terms of both frame rates and impact on battery consumption.
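The per-eye view generation performed by such a driver can be illustrated with a small sketch. The function below is a hypothetical simplification (the name `eye_matrices` and the `separation` and `convergence` parameters are illustrative, not taken from the driver): it derives a left/right modelview/projection pair from an intercepted mono camera by translating each eye sideways and applying an asymmetric-frustum shift so that both view volumes converge at the zero-parallax plane.

```python
import numpy as np

def eye_matrices(projection, modelview, separation=0.06, convergence=2.0):
    """Derive per-eye (projection, modelview) pairs from a mono camera.

    A driver in the GL stack can apply this to every intercepted draw:
    each eye's modelview is translated by half the interaxial separation,
    and the frustum is sheared so the two views share a zero-parallax
    plane at the convergence distance. All parameters are illustrative.
    """
    views = {}
    for eye, sign in (("left", -1.0), ("right", 1.0)):
        offset = sign * separation / 2.0
        t = np.eye(4)
        t[0, 3] = -offset  # slide the world opposite the eye's offset
        p = projection.copy()
        # Asymmetric-frustum shift: cancels the eye offset's screen-space
        # displacement for points at the convergence distance.
        p[0, 2] -= projection[0, 0] * offset / convergence
        views[eye] = (p, t @ modelview)
    return views
```

Points on the convergence plane then project to the same screen position in both eyes, while nearer points acquire crossed (negative) parallax.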
The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high-quality 3D
content to be commercially available to the consumer. Console and PC games provide the most readily accessible source
of high-quality 3D content.
This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key
issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios
face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG
environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot
be choreographed by hand but must be calculated automatically in real time without significant impact on performance.
Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the
scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual
camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering
(DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
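The DIBR alternative can be sketched in a few lines. The snippet below is an illustrative NumPy simplification, not the driver's implementation: it forward-maps each pixel of a single rendered view by a disparity derived from normalized depth, leaving disocclusion holes marked for a later in-painting pass.

```python
import numpy as np

def dibr_view(image, depth, eye_shift=4.0):
    """Forward-map a single rendered view into a second eye view.

    depth is assumed normalized to [0, 1] (1 = far plane), so disparity
    is proportional to nearness and distant pixels barely move. Holes
    left by disocclusion are marked with -1 for later filling.
    """
    h, w = depth.shape
    out = np.full_like(image, -1)
    shift = np.round(eye_shift * (1.0 - depth)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + shift[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```

The attraction is that only one full scene render is needed; the cost is the hole-filling and the quality penalty that this trade-off analysis quantifies.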
The mobile phone is quickly evolving from a communications device to an application platform and in the process has become the focus for the development of new technologies. The most challenging technical issues for commercializing a 3D phone are a stereoscopic display technology that is suitable for mobile applications and a means of driving the display using the limited capabilities of a mobile handset. In this paper we describe a prototype 3D mobile phone which was developed on a commercially available mobile hardware platform. The demonstration handset was retrofitted with a Polarization Activated Microlens™ array that is 2D/3D switchable and provides class-leading low crosstalk levels, together with brightness characteristics and viewing zones suitable for operation without compromising battery running time. This next-generation autostereoscopic display technology, which combines the brightness advantage of a lenticular 3D display with the 2D/3D switching capability of a parallax barrier, is deployed on a 2.2" landscape QVGA TFT LCD base panel. The stereoscopic content solution is an essential component of a commercially viable 3D handset. We describe how a range of stereoscopic software solutions have been developed on the phone's existing application processor without the need for custom hardware.
The recent emergence of auto-stereoscopic 3D viewing technologies has increased demand for the creation of 3D video content. A range of glasses-free multi-viewer screens have been developed that require as many as nine views to be generated for each frame of video. This presents difficulties in both view generation and transmission bandwidth. This paper examines the use of stereo video capture as a means to generate multiple scene views via disparity analysis. A machine learning approach is applied to learn relationships between disparity-generated depth information and source footage, and to generate depth information in a temporally smooth manner for both left and right eye image sequences. A view morphing approach to multiple view rendering is described which provides an excellent 3D effect on a range of glasses-free displays, while providing robustness to inaccurate stereo disparity calculations.
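The view-morphing idea can be sketched as follows. This is a hypothetical simplification of the renderer described above (a full view-morphing implementation would also blend in the right-eye image and fill disocclusions): intermediate views are produced by scaling the left-to-right disparity by a fraction alpha, so a nine-view display samples alpha at nine positions between the captured pair.

```python
import numpy as np

def morph_view(left, disparity, alpha):
    """Synthesize an intermediate view between a stereo pair.

    alpha = 0 reproduces the left view; alpha = 1 approximates the
    right view (right pixel x = left pixel x - disparity). Pixels are
    forward-mapped and disocclusion holes are marked with -1.
    """
    h, w = left.shape
    out = np.full_like(left, -1)
    for y in range(h):
        for x in range(w):
            nx = x - int(round(alpha * disparity[y, x]))
            if 0 <= nx < w:
                out[y, nx] = left[y, x]
    return out
```

Because the morph only ever moves pixels a fraction of the measured disparity, errors in the stereo matching are correspondingly attenuated in the intermediate views, which is one source of the robustness noted above.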
Encoding 3D information using depth maps is quickly becoming the dominant technique for rendering high quality stereoscopic images. This paper describes how depth maps can be highly compressed and transmitted alongside 2D images with minimal additional bandwidth. The authors have previously described a rapid 2D to 3D conversion system for generating depth maps. This system, which relies on Machine Learning algorithms, effectively encodes the relationships between a 2D source image and the associated depths of objects within the image. These relationships, which are expressed in terms of the colour and position of objects, may be exploited to provide an effective compression mechanism. This paper describes the practical implementation of this technology in an integrated 2D to 3D conversion system. We demonstrate the advantages of the encoding scheme relative to other industry standard compression techniques, examining issues relating to bandwidth, decoding performance and the effect of compression artifacts on stereoscopic image quality.
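The compression idea, depth predicted from colour and position, can be illustrated with a deliberately simple stand-in for the paper's machine-learned relationships: fit a linear model from (r, g, b, x, y) to depth and transmit only its six coefficients alongside the 2D image, letting the decoder regenerate an approximate depth map. The function names and model are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def _features(image_rgb):
    # Per-pixel feature rows: colour, position, and a bias term.
    h, w, _ = image_rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.column_stack([
        image_rgb.reshape(-1, 3),
        xs.reshape(-1, 1),
        ys.reshape(-1, 1),
        np.ones((h * w, 1)),
    ]).astype(float)

def fit_depth_model(image_rgb, depth):
    """Encoder: six coefficients replace the full depth map."""
    coeffs, *_ = np.linalg.lstsq(_features(image_rgb),
                                 depth.reshape(-1), rcond=None)
    return coeffs

def decode_depth(image_rgb, coeffs):
    """Decoder: regenerate depth from the 2D image plus coefficients."""
    h, w, _ = image_rgb.shape
    return (_features(image_rgb) @ coeffs).reshape(h, w)
```

Transmitting a handful of model parameters instead of a per-pixel depth channel is what makes the additional bandwidth minimal; the trade-off is the fidelity of the regenerated depth, which is what a learned (rather than linear) model improves.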
The conversion of existing 2D images to 3D is proving commercially viable and fulfills the growing need for high quality stereoscopic images. This approach is particularly effective when creating content for the new generation of autostereoscopic displays that require multiple stereo images. The dominant technique for such content conversion is to develop a depth map for each frame of 2D material. The use of a depth map as part of the 2D to 3D conversion process has a number of desirable characteristics: 1. The resolution of the depth map may be lower than that of the associated 2D image. 2. It can be highly compressed. 3. 2D compatibility is maintained. 4. Real-time generation of stereo, or multiple stereo pairs, is possible. The main disadvantage has been the laborious nature of the manual conversion techniques used to create depth maps from existing 2D images, which results in a slow and costly process. An alternative, highly productive technique has been developed based upon the use of Machine Learning Algorithms (MLAs). This paper describes the application of MLAs to the generation of depth maps and presents the results of the commercial application of this approach.
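Characteristics 1 and 4 above can be demonstrated together in a short sketch (illustrative NumPy code, not the commercial system): a reduced-resolution depth map is upsampled at the decoder, and a stereo pair is generated in one pass by shifting pixels in opposite directions in proportion to nearness.

```python
import numpy as np

def stereo_from_depth(image, depth_lowres, max_shift=3):
    """Generate a left/right pair from a 2D image and a low-res depth map.

    The depth map (normalized to [0, 1], 1 = far) is nearest-neighbour
    upsampled to image resolution; each eye's view shifts pixels in the
    opposite horizontal direction, scaled by nearness. Disocclusion
    holes are marked with -1.
    """
    h, w = image.shape
    sy = h // depth_lowres.shape[0]
    sx = w // depth_lowres.shape[1]
    depth = np.repeat(np.repeat(depth_lowres, sy, axis=0), sx, axis=1)
    shift = np.round(max_shift * (1.0 - depth)).astype(int)
    pair = {}
    for eye, sign in (("left", 1), ("right", -1)):
        out = np.full_like(image, -1)
        for y in range(h):
            for x in range(w):
                nx = x + sign * shift[y, x]
                if 0 <= nx < w:
                    out[y, nx] = image[y, x]
        pair[eye] = out
    return pair
```

Because the depth channel can be stored at a fraction of the image resolution and the per-pixel work is a simple shift, both the compression (characteristic 2) and real-time generation (characteristic 4) claims follow naturally from this structure.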