In this paper we present an overview of our research into the perception and biologically inspired modeling of illumination (flow) from 3D textures, and into the influence of roughness and illumination on material perception. Here a 3D texture is defined as an image of an illuminated rough surface. In a series of theoretical and empirical papers we studied how the illumination orientation (in the image plane) can be estimated from 3D textures of globally flat samples. We found that this orientation can be estimated well, by both humans and computers, using an approach based on second-order statistics. This approach exploits the dipole-like structures in 3D textures that result from the illumination of bumps and troughs. For 3D objects the local illumination direction varies over the object, resulting in surface illuminance flow. This in turn produces image illuminance flow in the image of a rough 3D object: the observable projection in the image of the field of local illumination orientations. Here we present results on image illuminance flow analysis for images from the Utrecht Oranges database, the CURET database, and two vases. These results show that the image illuminance flow can be estimated robustly for various rough materials. In earlier studies we showed that the image illuminance flow can be used for shape and illumination inferences. Recently, in psychophysical experiments, we found that adding 3D texture to a matte spherical object improves human observers' judgments of the direction and diffuseness of its illumination. This shows that human observers indeed use the illuminance flow as a cue to the illumination.
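The second-order-statistics approach mentioned above can be illustrated with a generic structure-tensor estimator: image gradients are pooled into second-moment averages, whose dominant eigendirection gives the illumination orientation modulo 180 degrees (the convex/concave ambiguity). This is a minimal sketch of the general technique, not the authors' exact estimator; the function name is illustrative.

```python
import numpy as np

def illumination_orientation(texture):
    """Estimate the dominant illumination orientation (in the image plane)
    of a 3D texture from second-order gradient statistics.

    Returns an angle in radians, defined modulo pi: a single image cannot
    resolve the 180-degree (convex/concave) ambiguity. Generic
    structure-tensor sketch, not the published estimator.
    """
    gy, gx = np.gradient(np.asarray(texture, dtype=float))
    # Second-order statistics: structure-tensor components averaged
    # over the whole patch.
    jxx = (gx * gx).mean()
    jyy = (gy * gy).mean()
    jxy = (gx * gy).mean()
    # Dominant orientation of the gradient field, modulo pi.
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) % np.pi
```

For a texture shaded along the horizontal axis this returns an angle near 0 (or, equivalently, near pi), reflecting the bas-relief/convex-concave ambiguity noted in the text.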
The aim of this study was to investigate whether inferences of light in the empty space of a painting and on objects in that painting are congruent with each other. We conducted an experiment in which we tested the perception of light qualities (direction, intensity of directed and ambient components) for two conditions: a) for a position in empty space in a painting and b) on the convex object that was replaced by the probe in the first condition. We found that the consistency of directional settings both between conditions and within paintings is highly dependent on painting content, specifically on the number of qualitatively different light zones [1] in a scene. For uniform lighting observers are very consistent, but when there are two or more light zones present in a painting the individual differences become prominent. We discuss several possible explanations of such results, the most plausible of which is that human observers are blind to complex features of a light field [2].
We studied whether lighting influences the visual perception of material scattering qualities. To this end we made an interface or "material probe", called MatMix 1.0, which uses optical mixing of four canonical material modes. The appearance of a 3D object could be adjusted by interactively changing the weights of the four material components in the probe. This probe was used in a matching experiment in which we compared material perception under generic office lighting with that under three canonical lighting conditions. As the canonical materials we selected matte, velvety, specular, and glittery, representing diffuse, asperity, forward, and specular microfacet scattering modes. As the canonical lightings we selected ambient, focus, and brilliance lighting modes. In the matching experiment, observers were asked to change the appearance of the probe so that its material qualities matched those of the stimuli. From the matching results we found that our brilliance lighting brought out the glossiness of our stimuli, and our focus lighting brought out the velvetiness of our stimuli, most similarly to office lighting. We conclude that the influence of lighting on material perception is material-dependent.
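The optical mixing underlying such a probe amounts to a weighted linear superposition of renderings of the same object in each canonical material. The sketch below shows that idea in its simplest form; the function name and the weight normalization are illustrative assumptions, not MatMix 1.0's actual implementation.

```python
import numpy as np

def mix_materials(images, weights):
    """Mix renderings of one object in different canonical materials
    (e.g. matte, velvety, specular, glittery) by weighted linear
    superposition. Illustrative sketch, not the MatMix 1.0 code.

    images  : list of equally shaped float arrays (one per material mode)
    weights : one nonnegative weight per image
    """
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0):
        raise ValueError("weights must be nonnegative")
    w = w / w.sum()  # normalize so the mix preserves overall intensity
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    # Contract the weight vector against the image stack: sum_i w[i] * images[i].
    return np.tensordot(w, stack, axes=1)
```

Adjusting the probe then corresponds to moving the weight vector and re-rendering the superposition.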
KEYWORDS: Image quality, Visual process modeling, Image processing, Inverse optics, Video processing, 3D image processing, Interfaces, Image analysis, Human vision and color perception, Video
In the past decades perceptual (or perceived) image quality has been one of the most important criteria for evaluating digitally processed image and video content. With the growing popularity of new media such as stereoscopic displays, there is a tendency to replace image quality with viewing experience as the ultimate criterion. Adopting such a high-level psychological criterion calls for a rethinking of the premises underlying human judgment. One premise is that perception is about accurately reconstructing the physical world in front of you ("inverse optics"), that is, that human vision strives for veridicality. The present study investigated one of its consequences, namely that linear perspective will always yield the correct description of the perceived 3D geometry in 2D images. To this end, human observers adjusted the frontal view of a wireframe box on a television screen so that it looked equally deep and wide (i.e., like a cube) or twice as deep as wide. In a number of stimulus configurations the results showed huge deviations from veridicality, suggesting that the inverse-optics model fails. Instead, the results seem to be more in line with a model of "vision as optical interface".
We present a novel setup in which real objects made of different materials can be mixed optically. We chose mutually very different materials, which we assume represent canonical modes. The appearance of a 3D object of any material can then be described as a linear superposition of 3D objects of different canonical materials, as in "painterly mixes". In this paper we studied mixtures of matte, glossy, and velvety objects, representing diffuse, forward, and asperity scattering modes.
Observers rated optical mixtures on four scales: matte-glossy, hard-soft, cold-warm, and light-heavy. The ratings were done for the three combinations of glossy, matte, and velvety green birds. For each combination we tested seven weightings. Matte-glossy ratings varied most over the stimuli, with the highest (most glossy) scores for the rather glossy bird and the lowest (most matte) for the rather velvety bird. Hard-soft and cold-warm were rated highest (most soft and warm) for rather velvety birds and lowest (most hard and cold) for rather glossy birds. Light-heavy was rated only somewhat higher (heavier) for rather glossy birds. The ratings varied systematically with the weights of the contributions, corresponding to gradually changing mixtures of material modes. We discuss a range of possibilities for our novel setup.
In this study we demonstrate that touch decreases the ambiguity in a visual image. It has previously been found that visual perception of three-dimensional shape is subject to certain variations, which can be described by an affine transformation. While the visual system thus seems unable to capture the Euclidean structure of a shape, touch could potentially be a useful source for disambiguating the image. Participants performed a so-called 'attitude task' from which the structure of the perceived three-dimensional shape was calculated. One group performed the task with vision only, and a second group could touch the stimulus while viewing it. We found that the consistency within the haptics-plus-vision group was higher than in the vision-only group; thus, haptics decreases the visual ambiguity. Furthermore, we found that the touched shape was consistently perceived as having more relief than the untouched shape. The direction of the affine-shear differences within the two groups was also more consistent when touch was used. We thus show that haptics has a significant influence on the perception of pictorial relief.
The appearance of objects in scenes is determined by their shape, their material properties, and the light field; conversely, the appearance of those objects provides us with cues about the shape, the material properties, and the light field. The latter, so-called inverse, problem is underdetermined and therefore suffers from interesting ambiguities. Interactions in the perception of shape, material, and luminous environment are therefore bound to occur.
Textures of illuminated rough materials depend strongly on the illumination and viewing directions. Luminance-histogram-based measures such as the average luminance, its variance, shadow and highlight modes, and the contrast provide robust estimates with regard to the surface structure and the light field. Human observers' performance agrees well with predictions on the basis of such measures. If we also take the spatial structure of the texture into account, it is possible to estimate the illumination orientation locally. Image analysis on the basis of second-order statistics and human observers' estimates correspond well, and both are subject to the bas-relief and convex-concave ambiguities. The systematic, robust illuminance-flow patterns of local illumination-orientation estimates on rough 3D objects are an important entity for shape from shading and for light-field estimates. Human observers are able to match and discriminate simple light-field properties (e.g., average illumination direction and diffuseness) of objects and scenes, but they make systematic errors that depend on material properties, object shapes, and position in the scene. Moreover, our results show that perception of material and illumination are basically confounded. Detailed analysis of these confounds suggests that observers primarily attend to the low-pass structure of the light field. We measured and visualized this structure, which was found to vary smoothly in natural scenes both indoors and outdoors.
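The histogram-based measures named above (average luminance, variance, contrast) have straightforward definitions; a minimal sketch follows. The percentile-based contrast is an assumption on our part, chosen as a robust stand-in for the shadow and highlight modes, and need not match the measures used in the studies.

```python
import numpy as np

def texture_statistics(luminance):
    """Luminance-histogram measures of an illuminated rough texture:
    mean, variance, and a Michelson-style contrast. Generic textbook
    definitions; the cited studies may define contrast differently.
    """
    lum = np.asarray(luminance, dtype=float).ravel()
    mean = lum.mean()
    variance = lum.var()
    # Robust stand-ins for the shadow and highlight modes (assumption:
    # 5th/95th percentiles rather than actual histogram modes).
    lo, hi = np.percentile(lum, [5.0, 95.0])
    contrast = (hi - lo) / (hi + lo) if (hi + lo) > 0 else 0.0
    return {"mean": mean, "variance": variance, "contrast": contrast}
```

Such scalar measures depend only on the luminance histogram, which is why they are robust to the spatial layout of the texture; recovering the local illumination orientation additionally requires the spatial (second-order) structure, as described above.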