Relief printing technology developed by Océ allows the superposition of several layers of colorant on different types of
media, which creates a variation in surface height defined by the input to the printer. Evaluating how accurately distinct
surface characteristics are reproduced is of great importance for applications of the relief printing system, so quality
metrics for the relief process are needed. In this paper, we focus on the third dimension of
relief printing, i.e., the height information. To this end, we define metrics and develop models that evaluate relief
prints in two aspects: overall fidelity and surface finish. To characterize the overall fidelity, three metrics are calculated:
Modulation Transfer Function (MTF), difference and root-mean-squared error (RMSE) between the input height map and
scanned height map, and print surface angle accuracy. For the surface finish property, we measure the surface roughness,
generate surface normal maps, and develop a light reflection model that simulates the differences between
ideal prints and real prints that may be perceived by human observers. Three sets of test targets are designed and printed by
the Océ relief printer prototypes for the calculation of the above metrics: (i) twisted target, (ii) sinusoidal wave target, and
(iii) ramp target. The results provide quantitative evaluations of the printing quality in the third dimension, and demonstrate
that the height of relief prints is reproduced accurately with respect to the input design. The factors that affect the printing
quality include the printing direction, the frequency and amplitude of the input signal, and the shape of the relief
prints. Beyond these factors, two additional aspects influence the viewing experience of relief prints: lighting condition and
Proc. SPIE. 9018, Measuring, Modeling, and Reproducing Material Appearance
KEYWORDS: Visual process modeling, Visualization, Reflection, Image resolution, Control systems, Image quality, Surface properties, High dynamic range imaging, Information visualization, Image quality standards
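The height-map comparisons described in the abstract reduce to simple array operations. As an illustration only (not the authors' code; numpy-based, with a hypothetical `pixel_pitch` spacing parameter), the RMSE and surface-normal-map metrics could be computed as:

```python
import numpy as np

def height_rmse(input_height, scanned_height):
    """Root-mean-squared error between input and scanned height maps (same units)."""
    diff = scanned_height - input_height
    return float(np.sqrt(np.mean(diff ** 2)))

def normal_map(height, pixel_pitch=1.0):
    """Per-pixel unit surface normals from a height map via central differences."""
    dz_dy, dz_dx = np.gradient(height, pixel_pitch)
    # The normal of the surface z = h(x, y) is (-dh/dx, -dh/dy, 1), normalized.
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(height)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```

For a perfectly reproduced print the RMSE is zero, and a flat region yields normals of (0, 0, 1) everywhere.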
Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves.
When we look at an image we are able to perceive both the properties of the image and the properties of the objects
represented by the image. Research on image quality has typically focused on improving image properties (resolution,
dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual
representations. In this paper we describe a series of experiments that investigate how well images of different quality
convey information about the properties of the objects they represent. In the experiments we focus on the effects that two
image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We
found that different experimental methods produced differing results. Specifically, when the stimulus images were
presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and
conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus
images were presented sequentially, observers were able to disregard the image plane properties and more accurately
match the gloss of the objects represented by the different quality images. These findings suggest that in understanding
image quality it is useful to distinguish between quality of the imaging medium and the quality of the visual information
represented by that medium.
Proc. SPIE. 9015, Color Imaging XIX: Displaying, Processing, Hardcopy, and Applications
KEYWORDS: Visual process modeling, Visualization, Colorimetry, High dynamic range imaging, Associative arrays, Space operations, Optimization (mathematics), Computer graphics, Time multiplexed optical shutter, Image quality standards
In this paper, we present a novel approach that treats tone mapping as gamut mapping in a high-dynamic-range (HDR) color space. High- and low-dynamic-range (LDR) images as well as device gamut boundaries can simultaneously be represented within such a color space. This enables a unified transformation of the HDR image into the gamut of an output device (in this paper called <i>HDR gamut mapping</i>). An additional aim of this paper is to investigate the suitability of a specific HDR color space to serve as a working color space for the proposed HDR gamut mapping. For the HDR gamut mapping, we use a recent approach that iteratively minimizes an image-difference metric over the set of in-gamut images. A psychophysical experiment on an HDR display shows that the standard reproduction workflow of two subsequent transformations – tone mapping and then gamut mapping – may be improved by HDR gamut mapping.
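As a rough illustration of the projection step that underlies such a workflow (a per-pixel sketch only; the paper's method instead minimizes an image-difference metric over all in-gamut images, which preserves spatial detail better than per-pixel clipping), each out-of-gamut color can be bisected toward an in-gamut anchor:

```python
import numpy as np

def project_into_gamut(img, in_gamut, anchor, steps=20):
    """Binary-search each color toward an in-gamut anchor until it lies in gamut.

    img      : (..., 3) array of colors in some working color space
    in_gamut : vectorized predicate mapping (..., 3) colors to (...) booleans
    anchor   : (3,) in-gamut color (e.g. mid-gray) to move toward
    Assumes the gamut is star-shaped around the anchor; colors already in
    gamut are returned (numerically) almost unchanged.
    """
    lo = np.zeros(img.shape[:-1])   # blend factor 0 = original color
    hi = np.ones(img.shape[:-1])    # blend factor 1 = anchor (assumed in gamut)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        cand = img + mid[..., None] * (anchor - img)
        inside = in_gamut(cand)
        hi = np.where(inside, mid, hi)   # keep shrinking the in-gamut bound
        lo = np.where(inside, lo, mid)
    return img + hi[..., None] * (anchor - img)
```

With the unit RGB cube as a toy gamut, the out-of-gamut color (2.0, 0.5, 0.5) projects to approximately (1.0, 0.5, 0.5).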
We are developing tangible imaging systems<sup>1-4</sup> that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet-shaped form-factors. Custom software allows the orientation of a device and the position of the observer to be tracked in real-time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook – our first implementation on a laptop computer; tangiView – a more refined implementation on a tablet device; tangiPaint – a tangible digital painting application; and phantoView – an application that takes the tangible imaging concept into stereoscopic 3D.
We are developing tangible display systems that enable natural interaction with virtual surfaces. Tangible display systems are based on modern mobile devices that incorporate electronic image displays, graphics hardware, tracking systems, and digital cameras. Custom software allows the orientation of a device and the position of the observer to be tracked in real-time. Using this information, realistic images of surfaces with complex textures and material properties, illuminated by environment-mapped lighting, can be rendered to the screen at interactive rates. Tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. In this way, tangible displays allow virtual surfaces to be observed and manipulated as naturally as real ones, with the added benefit that surface geometry and material properties can be modified in real-time. We demonstrate the utility of tangible display systems in four application areas: material appearance research; computer-aided appearance design; enhanced access to digital library and museum collections; and new tools for digital artists.
Human observers are able to make fine discriminations of surface gloss. What cues are they using to perform this task? In
previous studies, we identified two reflection-related cues: the contrast of the reflected image (c, contrast gloss) and the sharpness of
the reflected image (d, distinctness-of-image gloss). However, these were measured for objects rendered in standard dynamic range (SDR) images with
compressed highlights. In ongoing work, we are studying the effects of image dynamic range on perceived gloss, comparing high
dynamic range (HDR) images with accurate reflections and SDR images with compressed reflections. In this paper, we first present
the basic findings of this gloss discrimination study, and then present an analysis of eye movement recordings showing where observers
were looking during the gloss discrimination task. The results indicate that: 1) image dynamic range has a significant influence on
perceived gloss, with surfaces presented in HDR images being seen as glossier and more discriminable than their SDR counterparts;
2) observers look at both light source highlights and environmental interreflections when judging gloss; and 3) both of these results
are modulated by surface geometry and scene illumination.
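The two reflection-related cues named above can be approximated with generic image statistics. A minimal sketch, with the caveat that these proxies are assumptions and not the exact c and d measures used in the studies:

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: std of luminance over its mean (a generic contrast-gloss proxy)."""
    mean = img.mean()
    return float(img.std() / mean) if mean > 0 else 0.0

def gradient_sharpness(img):
    """Mean gradient magnitude as a crude distinctness-of-image (sharpness) proxy."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))
```

A uniform patch scores zero on both measures, while an image containing a sharp reflected edge scores high on the sharpness proxy.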
When evaluating the surface appearance of real objects, observers engage in complex behaviors involving active
manipulation and dynamic viewpoint changes that allow them to observe the changing patterns of surface reflections.
We are developing a class of tangible display systems to provide these natural modes of interaction in computer-based
studies of material perception. A first-generation tangible display was created from an off-the-shelf laptop computer
containing an accelerometer and webcam as standard components. Using these devices, custom software estimated the
orientation of the display and the user's viewing position. This information was integrated with a 3D rendering module
so that rotating the display or moving in front of the screen would produce realistic changes in the appearance of virtual
objects. In this paper, we consider the design of a second-generation system to improve the fidelity of the virtual surfaces
rendered to the screen. With a high-quality display screen and enhanced tracking and rendering capabilities, a second-generation
system will be better able to support a range of appearance perception applications.
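The orientation-estimation step described above is commonly implemented by reading device tilt off the accelerometer's gravity vector. A minimal sketch, assuming a static device and standard handset axis conventions (this is not the system's actual tracking code):

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    """Estimate display pitch and roll (radians) from a gravity-only
    accelerometer reading, for use in view-dependent rendering.
    Assumes the device is held still, so the reading is pure gravity.
    """
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

A device lying flat (gravity along +z) reports zero pitch and roll; standing it on its left edge swings the pitch toward -90 degrees. In practice the raw angles would be low-pass filtered or fused with gyroscope data to suppress hand tremor.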
In his book "Understanding Media" social theorist Marshall McLuhan declared: "The medium is the message." The
thesis of this paper is that with respect to image quality, imaging system developers have taken McLuhan's dictum too
much to heart. Efforts focus on improving the technical specifications of the media (e.g. dynamic range, color gamut,
resolution, temporal response) with little regard for the visual messages the media will be used to communicate. We
present a series of psychophysical studies that investigate the visual system's ability to "see through" the limitations of
imaging media to perceive the messages (object and scene properties) the images represent. The purpose of these studies
is to understand the relationships between the signal characteristics of an image and the fidelity of the visual information
the image conveys. The results of these studies provide a new perspective on image quality that shows that images that
may be very different in "quality" can be visually equivalent as realistic representations of objects and scenes.
Proc. SPIE. 6806, Human Vision and Electronic Imaging XIII
KEYWORDS: Human-machine interfaces, Visualization, Photography, 3D modeling, Light sources and illumination, Visual system, Human vision and color perception, Algorithm development, Computer graphics, Visual compression
How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics,
where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In
this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted
an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic
methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, and material properties.
We analyzed the pooled subject responses using multidimensional scaling. This analysis embedded
the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material /
lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images.
We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression
size, and found them to be only weakly correlated with the perceptual ordering. Understanding the differences between these measures can lead to
the design of more efficient rendering algorithms in computer graphics.
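The embedding step described above can be sketched with classical (Torgerson) multidimensional scaling; this is a generic implementation, not necessarily the MDS variant used in the study:

```python
import numpy as np

def classical_mds(d, k=2):
    """Embed n items in k dimensions from an n-by-n symmetric dissimilarity
    matrix d, via classical (Torgerson) multidimensional scaling."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n    # double-centering matrix
    b = -0.5 * j @ (d ** 2) @ j            # centered Gram matrix
    w, v = np.linalg.eigh(b)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]          # take the top-k eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For dissimilarities that are exactly Euclidean distances, the recovered coordinates reproduce the original pairwise distances up to rotation and translation.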
This paper describes three varieties of realism that need to be considered in evaluating computer graphics images and defines the criteria that need to be met if each kind of realism is to be achieved. The paper introduces a conceptual framework for thinking about realism in images, and describes a set of research tools for measuring image realism and assessing its value in graphics applications.
In this paper we introduce a new model of surface appearance that is based on quantitative studies of gloss perception. We use image synthesis techniques to conduct experiments that explore the relationships between the physical dimensions of glossy reflectance and the perceptual dimensions of glossy appearance. The product of these experiments is a psychophysically-based model of surface gloss, with dimensions that are both physically and perceptually meaningful and scales that reflect our sensitivity to gloss variations. We demonstrate that the model can be used to describe and control the appearance of glossy surfaces in synthetic images, allowing prediction of gloss matches and quantification of gloss differences. This work represents some initial steps toward developing psychophysical models of the goniometric aspects of surface appearance to complement widely-used colorimetric models.
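One way such a model supports gloss-match prediction and gloss-difference quantification is to map physical reflectance parameters into a low-dimensional perceptual space and measure distances there. A sketch, assuming Ward-model parameters and a cube-root rescaling reported in related work (the exact scaling here is an assumption, not taken from this abstract):

```python
def gloss_coordinates(rho_d, rho_s, alpha):
    """Map Ward-model parameters (diffuse reflectance rho_d, specular
    reflectance rho_s, roughness alpha) to perceptual gloss coordinates:
    contrast gloss c and distinctness-of-image gloss d.
    The cube-root rescaling is an assumed form from related work.
    """
    c = (rho_s + rho_d / 2.0) ** (1.0 / 3.0) - (rho_d / 2.0) ** (1.0 / 3.0)
    d = 1.0 - alpha
    return c, d

def gloss_difference(p1, p2):
    """Euclidean distance between two surfaces in (c, d) gloss space."""
    (c1, d1), (c2, d2) = gloss_coordinates(*p1), gloss_coordinates(*p2)
    return ((c1 - c2) ** 2 + (d1 - d2) ** 2) ** 0.5
```

Under this mapping, a surface with no specular lobe has zero contrast gloss regardless of its diffuse reflectance, and two surfaces match in gloss when their (c, d) coordinates coincide.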