Research in early (low-level) vision, both for machines and humans, has
traditionally been based on the study of idealized images or image patches such as step edges, gratings, flat fields, and Mondrians. Real images, however, exhibit much richer and more complex structure, whose nature is determined by the physical and geometric properties of illumination, reflection, and imaging. Understanding these physical relationships makes a new kind of early vision analysis possible.

In this paper, we describe a progression of models of imaging physics that present a much more complex and realistic set of image relationships than is commonly assumed in early vision research. We begin with the Dichromatic Reflection Model, which describes how highlights and color are related in images of dielectrics such as plastics and painted surfaces. The model yields a relationship in color space that separates highlights from object color, from which perceptions of shape, surface roughness/texture, and illumination color are readily derived. We next show how this analysis can be extended to images of several objects by deriving local color-variation relationships from the basic model; the resulting method for color image analysis has been applied successfully in machine vision experiments in our laboratory. Yet another extension accounts for inter-reflection among multiple objects.
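The color-space relationship behind this separation can be sketched as follows. Under the Dichromatic Reflection Model, each pixel of a dielectric surface is a linear combination of a body (object-color) vector and a surface (highlight) vector, so a least-squares projection onto that two-vector basis recovers the two components. This is an illustrative sketch, not the paper's implementation; the color vectors and variable names below are assumed for the example.

```python
import numpy as np

# Dichromatic Reflection Model sketch (illustrative values, not the
# paper's implementation).  Each pixel of one dielectric object is
#     C = m_b * c_body + m_s * c_surface
# so all its pixels lie in the plane spanned by the two color vectors.

c_body = np.array([0.8, 0.3, 0.1])      # assumed matte object color (RGB)
c_surface = np.array([1.0, 1.0, 1.0])   # assumed illuminant color (white)

def separate(pixel, c_b=c_body, c_s=c_surface):
    """Recover the body and surface magnitudes (m_b, m_s) of one pixel
    by least-squares projection onto the dichromatic plane."""
    A = np.stack([c_b, c_s], axis=1)             # 3x2 basis matrix
    m, *_ = np.linalg.lstsq(A, pixel, rcond=None)
    return m                                     # (m_b, m_s)

# A pixel mixing strong body reflection with a weak highlight:
pixel = 1.0 * c_body + 0.2 * c_surface
m_b, m_s = separate(pixel)
highlight_free = m_b * c_body   # the pixel with its highlight removed
```

Because the synthetic pixel lies exactly in the dichromatic plane, the decomposition recovers the mixing coefficients exactly; real pixels require a robust fit of the plane itself from many pixels of one object.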
We have derived a simple model of color inter-reflection that accounts for the basic phenomena, and report on this model and how we are applying it. In general, the concept of illumination for vision should account for the entire "illumination environment", rather than being restricted to a single light source. This work shows that the basic physical relationships give rise to very structured image properties, which can be a more valid basis for early vision than the traditional idealized image patterns.
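The inter-reflection phenomenon described above can be illustrated with a minimal one-bounce sketch: the light arriving at a surface point is the direct source plus light already reflected from a nearby surface, so the effective illuminant is pre-colored by the neighbor's body color. This is a hedged illustration under assumed albedos and bounce fraction, not the simple model derived in the paper.

```python
import numpy as np

# One-bounce sketch of color inter-reflection (illustrative, not the
# paper's model).  The effective illuminant at a point mixes the direct
# source with light reflected off a neighboring surface; body
# reflection applies the surface albedo component-wise to that mixture.

source = np.array([1.0, 1.0, 1.0])        # assumed white direct light
albedo_a = np.array([0.9, 0.2, 0.2])      # reddish surface A
albedo_b = np.array([0.2, 0.2, 0.9])      # bluish surface B nearby

def observed(albedo, neighbor_albedo, k=0.3, light=source):
    """Color of a point on a surface receiving direct light plus a
    fraction k of one bounce off the neighboring surface."""
    indirect = k * neighbor_albedo * light    # light pre-colored by neighbor
    return albedo * (light + indirect)        # component-wise body reflection

color_a = observed(albedo_a, albedo_b)   # A picks up a bluish cast from B
```

Comparing `color_a` with `albedo_a * source` shows the blue channel raised by the bounce off B, which is the kind of local color shift an analysis restricted to a single light source cannot explain.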