In computer graphics, the design of hardware and software has primarily been driven by the need to achieve maximum performance. Energy efficiency was usually neglected on the assumption that a stable, always-on power source was available. The advent of the mobile era, however, has called these assumptions into question, since mobile devices are limited both in computational capability and in their energy sources. To address this emerging need for energy-efficiency analysis in computer graphics, we have set up a software framework that obtains power measurements from 3D scenes using off-the-shelf hardware, sampling the energy consumption over the power rails of the CPU and GPU. Our experiments cover geometric complexity, texture resolution, and common CPU and GPU workloads. The goal of this work is to combine the knowledge obtained from these measurements into a prototype energy-aware balancer of processing resources. The balancer dynamically selects the rendering parameters and uses a simple framerate-based dynamic frequency scaling strategy. Our experimental results demonstrate that the power-saving framework achieves savings of approximately 40%.
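A framerate-based dynamic frequency scaling policy of the kind the balancer uses can be sketched as a simple feedback rule: step the clock up when the target framerate is missed and step it down when the framerate comfortably exceeds the target. The frequency levels, thresholds, and margin below are illustrative assumptions, not the paper's measured values.

```python
# Minimal sketch of a framerate-based dynamic frequency scaling policy.
# Frequency steps and the 10% margin are hypothetical, for illustration.

FREQ_LEVELS_MHZ = [300, 600, 900, 1200]  # assumed available clock steps

def next_frequency_index(current_idx, measured_fps, target_fps, margin=0.1):
    """Pick the next frequency step from the measured framerate.

    If the target framerate is missed, step the clock up; if it is
    exceeded by a comfortable margin, step down to save energy;
    otherwise hold the current frequency.
    """
    if measured_fps < target_fps:
        return min(current_idx + 1, len(FREQ_LEVELS_MHZ) - 1)
    if measured_fps > target_fps * (1 + margin):
        return max(current_idx - 1, 0)
    return current_idx
```

In a real driver loop this decision would be made once per measurement window, with the chosen index mapped to an actual CPU/GPU frequency setting.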
In this paper we describe a set of interactive tools that we have built as an extension to our image-based stereoscopic non-photorealistic rendering system. The base system automatically turns stereoscopic input images into stereoscopic pictures that resemble artwork, including concept drawings, cartoons, and paintings. The tools described here complement the traditional stereoscopic viewing experience by enabling end-users to interact with the perceived stereoscopic space. Observers of the generated artwork can easily enhance the perceived depth by manipulating the two artistic-looking projections while stereo viewing. Users can examine the stereoscopic artwork with stereoscopic cursors, and can explore the structure of multi-layered artwork by peeling away layers at different depths to reveal initially occluded layers.
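One common way to manipulate the two projections and alter perceived depth is to shift one eye's image horizontally, which uniformly changes the screen parallax of the stereo pair. The helper below is a minimal sketch of that idea on a single scanline (modelled as a list of pixel values, padded at the border), and is not the system's actual implementation.

```python
def shift_disparity(row, shift):
    """Shift one eye's scanline horizontally to change screen parallax.

    A positive shift moves the view to the right, padding with the
    edge value; applied to every scanline of one projection, this
    uniformly alters the perceived depth of the stereo pair.
    """
    if shift == 0:
        return list(row)
    if shift > 0:
        return [row[0]] * shift + row[:-shift]
    return row[-shift:] + [row[-1]] * (-shift)
```

Shifting the two views in opposite directions by half the amount each keeps the image centred while producing the same change in parallax.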
We present an algorithm for automatically generating stereoscopic paintings with varying levels of detail, together with an interactive system built around it that lets users select the level of detail of the painting. In this interactive context we have modified the stereo painting algorithm presented in our previous work to explore user-driven selection and display of artistic level of detail. A stereo painting is composed of two canvases, one for each eye. These canvases contain multiple refining layers of brush strokes that together form the final painting. In past work, the underlying coarser layers were obscured and served only as the basis on which the finer painting layers were progressively built. In contrast, our interactive stereo viewing system lets the user selectively toggle the visibility of finer strokes to reveal coarser representations of the artwork.
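The layer-toggling idea can be sketched as compositing only the visible stroke layers, coarse to fine, so that hiding the finer layers reveals the coarser painting underneath. The dict-of-pixels representation below is a simplifying assumption for illustration, not the paper's canvas data structure.

```python
def composite(layers, visible):
    """Composite stroke layers from coarse to fine.

    `layers` is a list of dicts mapping pixel coordinates to colours,
    ordered coarse -> fine; `visible` is a parallel list of booleans.
    Finer visible layers overwrite coarser ones, so toggling the fine
    layers off reveals the coarser representation underneath.
    """
    canvas = {}
    for layer, shown in zip(layers, visible):
        if shown:
            canvas.update(layer)
    return canvas
```

The same compositing is applied independently to the left-eye and right-eye canvases so that both views show the same level of detail.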
In this paper we present an algorithm that automatically generates sketches from stereo image pairs. Stereo analysis is first performed on the input stereo pair to estimate a dense depth map. An Edge Combination image is then computed by localising object contour edges, as indicated by the depth map, within the intensity reference image. We approximate these pixel-represented contours with a new parametric curve fitting algorithm, which recovers the minimum number of control points required to fit a Bézier curve to the pixel-edge dataset in the Edge Combination image. Experiments demonstrate how the Edge Combination algorithm, used for dominant edge extraction, can be combined with curve fitting to automatically produce parameterized artistic sketches or concept drawings of a real scene.
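The core fitting step can be illustrated by a least-squares fit of a single cubic Bézier segment to an ordered run of edge pixels: the endpoints are pinned to the first and last pixels, and the two inner control points are solved from the 2x2 normal equations under a chord-length parameterisation. This is a simplified sketch of that kind of fit, not the authors' exact algorithm (which also decides how many control points are needed).

```python
def fit_cubic_bezier(points):
    """Least-squares fit of one cubic Bezier to ordered 2-D edge pixels.

    Endpoints are pinned to the first and last data points; the inner
    control points P1, P2 are solved from the normal equations using a
    chord-length parameterisation of the pixel run.
    """
    # Chord-length parameterisation in [0, 1].
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    ts = [d / dists[-1] for d in dists]

    p0, p3 = points[0], points[-1]
    # Accumulate the 2x2 normal equations for the inner control points.
    a11 = a12 = a22 = 0.0
    rhs = [[0.0, 0.0], [0.0, 0.0]]  # rows: P1, P2; cols: x, y
    for (x, y), t in zip(points, ts):
        b0 = (1 - t) ** 3           # Bernstein basis at t
        b1 = 3 * t * (1 - t) ** 2
        b2 = 3 * t ** 2 * (1 - t)
        b3 = t ** 3
        a11 += b1 * b1
        a12 += b1 * b2
        a22 += b2 * b2
        # Residual after subtracting the pinned-endpoint terms.
        rx = x - b0 * p0[0] - b3 * p3[0]
        ry = y - b0 * p0[1] - b3 * p3[1]
        rhs[0][0] += b1 * rx
        rhs[0][1] += b1 * ry
        rhs[1][0] += b2 * rx
        rhs[1][1] += b2 * ry
    # Solve the symmetric 2x2 system by Cramer's rule.
    det = a11 * a22 - a12 * a12
    p1 = ((a22 * rhs[0][0] - a12 * rhs[1][0]) / det,
          (a22 * rhs[0][1] - a12 * rhs[1][1]) / det)
    p2 = ((a11 * rhs[1][0] - a12 * rhs[0][0]) / det,
          (a11 * rhs[1][1] - a12 * rhs[0][1]) / det)
    return p0, p1, p2, p3
```

A full fitting pipeline would split long contours at corners and recursively subdivide segments whose fitting error exceeds a tolerance, which is how the number of control points is kept minimal.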