Adaptation exerts a continuous influence on visual coding, altering both sensitivity and appearance whenever there is a
change in the patterns of stimulation the observer is exposed to. These adaptive changes are thought to improve visual
performance by optimizing both discrimination and recognition, but may take substantial time to fully adjust the
observer to a new stimulus context. Here we explore the advantages of instead adapting the image to the observer,
obviating the need for sensitivity changes within the observer. Adaptation in color vision adjusts both to the average
color and luminance of a scene and to the variations in color and luminance within it. We modeled these adjustments as
gain changes in the cones and in multiple post-receptoral mechanisms tuned to stimulus contrasts along different color-luminance
directions. Responses within these mechanisms were computed for different environments, based on images sampled
from a variety of natural outdoor settings. Images were then adapted for different environments by
scaling the responses so that, for each mechanism, the average response matched the average response to a reference environment.
Transforming images in this way can increase the discriminability of different colors and the salience of novel colors. It
also provides a way to simulate how the world might look to an observer in different environments or to different
observers in the same environment. Such images thus provide a novel tool for exploring color appearance and the
perceptual and functional consequences of adaptation.
Kyle C. McDermott, Igor Juricevic, George Bebis, Michael A. Webster, "Adapting images to observers," Proc. SPIE 6806, Human Vision and Electronic Imaging XIII, 68060V (14 February 2008).
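
A minimal Python sketch (not the authors' code) of the normalization the abstract describes: receptoral (von Kries) gains equate the mean L, M, S responses of a test environment to those of a reference environment, and contrast gains then equate the average response of a small set of post-receptoral channels. The XYZ-to-LMS matrix, the use of three cardinal channels (luminance, L-M, S-(L+M)) in place of the paper's multiple color-luminance mechanisms, and RMS contrast as the "average response" are all illustrative assumptions.

```python
import numpy as np

# Approximate Hunt-Pointer-Estevez XYZ -> LMS matrix (illustrative choice).
XYZ2LMS = np.array([[ 0.3897, 0.6890, -0.0787],
                    [-0.2298, 1.1834,  0.0464],
                    [ 0.0000, 0.0000,  1.0000]])

def to_lms(img_xyz):
    """Flatten an H x W x 3 tristimulus (XYZ) image to N x 3 cone responses."""
    return img_xyz.reshape(-1, 3) @ XYZ2LMS.T

def channel_responses(lms):
    """Mean cone responses plus contrasts along three cardinal directions."""
    mean = lms.mean(axis=0)
    c = (lms - mean) / mean                      # cone contrasts
    lum = c[:, 0] + c[:, 1]                      # L + M  (luminance)
    rg  = c[:, 0] - c[:, 1]                      # L - M
    yv  = c[:, 2] - 0.5 * (c[:, 0] + c[:, 1])    # S - (L + M)
    return mean, np.stack([lum, rg, yv], axis=1)

def adapt_image(test_xyz, reference_xyz):
    """Rescale a test image so each channel's average response matches the
    average response produced by a reference environment (sketch only)."""
    test, ref = to_lms(test_xyz), to_lms(reference_xyz)

    # 1) Receptoral (von Kries) gains: equate mean L, M, S responses.
    test = test * (ref.mean(axis=0) / test.mean(axis=0))

    # 2) Post-receptoral contrast gains: equate RMS contrast per channel.
    t_mean, t_con = channel_responses(test)
    _,      r_con = channel_responses(ref)
    gain = np.sqrt((r_con ** 2).mean(axis=0)) / np.sqrt((t_con ** 2).mean(axis=0))
    t_con = t_con * gain

    # Invert the channel transform back to cone contrasts, then to XYZ.
    lum, rg, yv = t_con[:, 0], t_con[:, 1], t_con[:, 2]
    cL, cM = 0.5 * (lum + rg), 0.5 * (lum - rg)
    cS = yv + 0.5 * (cL + cM)
    lms = t_mean * (1.0 + np.stack([cL, cM, cS], axis=1))
    xyz = lms @ np.linalg.inv(XYZ2LMS).T
    return xyz.reshape(test_xyz.shape)
```

Calling adapt_image(test_img, reference_img) on linear tristimulus images would render the test scene as it might appear to an observer whose gains are set by the reference environment, in the spirit of the simulations the abstract describes; the exact mechanism set and response measures in the paper may differ.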