Adaptation exerts a continuous influence on visual coding, altering both sensitivity and appearance whenever the
pattern of stimulation the observer is exposed to changes. These adaptive changes are thought to improve visual
performance by optimizing both discrimination and recognition, but may take substantial time to fully adjust the
observer to a new stimulus context. Here we explore the advantages of instead adapting the image to the observer,
obviating the need for sensitivity changes within the observer. Adaptation in color vision adjusts to both the average
color and luminance and to the variations in color and luminance within the scene. We modeled these adjustments as
gain changes in the cones and in multiple post-receptoral mechanisms tuned to stimulus contrasts along different color-luminance
directions. Responses within these mechanisms were computed for a range of different environments, based
on images sampled from a variety of natural outdoor settings. Images were then adapted for different environments by
scaling the responses so that for each mechanism the average response equaled the response to a reference environment.
Transforming images in this way can increase the discriminability of different colors and the salience of novel colors. It
also provides a way to simulate how the world might look to an observer in different environments or to different
observers in the same environment. Such images thus provide a novel tool for exploring color appearance and the
perceptual and functional consequences of adaptation.
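The adaptation described above can be sketched as a per-mechanism gain normalization: project cone contrasts onto a set of mechanism tuning directions, rescale each mechanism so its average response matches the average response to a reference environment, and invert back to cone-contrast coordinates. The mechanism directions and axis names below are illustrative assumptions (the actual model uses multiple mechanisms along many color-luminance directions); three orthonormal ones keep the sketch exactly invertible.

```python
import numpy as np

# Hypothetical mechanism tuning directions in a 3-channel
# (luminance, LvsM, SvsLM) cone-contrast space; angles are illustrative.
ANG = np.deg2rad(45.0)
MECHANISMS = np.array([
    [1.0, 0.0, 0.0],                   # achromatic (luminance) mechanism
    [0.0, np.cos(ANG), np.sin(ANG)],   # intermediate chromatic direction
    [0.0, -np.sin(ANG), np.cos(ANG)],  # orthogonal chromatic direction
])

def adapt_image(test_contrasts, ref_contrasts, eps=1e-12):
    """Scale each mechanism's responses so its mean absolute response in
    the test environment equals its mean response in the reference
    environment, then map back to cone-contrast coordinates."""
    resp = test_contrasts @ MECHANISMS.T       # pixels x mechanisms
    ref_resp = ref_contrasts @ MECHANISMS.T
    # per-mechanism gain = reference mean response / test mean response
    gains = np.abs(ref_resp).mean(axis=0) / (np.abs(resp).mean(axis=0) + eps)
    # invert the (orthonormal) mechanism matrix to recover cone contrasts
    return (resp * gains) @ np.linalg.inv(MECHANISMS.T)
```

Applied to an image from one environment with a second environment as the reference, this renders the image as it would appear to an observer whose mechanism gains were fully adapted to the reference.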
Adapting to the visual characteristics of a specific environment may facilitate detecting novel stimuli within that environment. We monitored eye movements while subjects searched for a color target on familiar or unfamiliar color backgrounds, in order to test for these performance changes and to explore whether they reflect changes in salience from adaptation versus changes in search strategies or perceptual learning. The target was an ellipse of variable color presented at a random location on a dense background of ellipses. In one condition, the colors of the background varied along either the LvsM or SvsLM cardinal axis. Observers adapted by viewing a rapid succession of backgrounds drawn from one color axis, and then searched for a target on a background drawn from the same or the other axis. Searches were monitored with a Cambridge Research Systems Video Eyetracker. Targets were located more quickly on the background axis that observers were pre-exposed to, confirming that this exposure can improve search efficiency for stimuli that differ from the background. However, eye movement patterns (e.g., fixation durations and saccade magnitudes) did not clearly differ across the two backgrounds, suggesting that the novel and familiar backgrounds were sampled in similar ways. In a second condition, we examined search on a nonselective color background, with hues drawn from a circle at fixed contrast. Prior exposure to this background did not facilitate search compared to an achromatic adapting field, suggesting that subjects were not simply learning the specific colors defining the background distributions. Instead, results for both conditions are consistent with a selective adaptation effect that enhances the salience of novel stimuli by partially discounting the background.
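The two background distributions above can be sketched as draws in the LvsM/SvsLM chromatic plane: the selective backgrounds vary in contrast along a single cardinal axis, while the nonselective background draws hues uniformly from a circle at fixed contrast. The function names, contrast values, and coordinate conventions here are illustrative assumptions, not the experiment's actual stimulus parameters.

```python
import numpy as np

def axis_colors(n, axis="LvsM", max_contrast=0.8, rng=None):
    """Selective background: colors varying along one cardinal axis of
    the LvsM (angle 0 deg) / SvsLM (angle 90 deg) chromatic plane."""
    if rng is None:
        rng = np.random.default_rng()
    c = rng.uniform(-max_contrast, max_contrast, n)   # signed contrasts
    angle = 0.0 if axis == "LvsM" else np.pi / 2
    return np.column_stack([c * np.cos(angle), c * np.sin(angle)])

def hue_circle_colors(n, contrast=0.8, rng=None):
    """Nonselective background: hue angles drawn uniformly from a circle
    at fixed chromatic contrast."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi, n)          # random hue angles
    return np.column_stack([contrast * np.cos(theta),
                            contrast * np.sin(theta)])
```

Each returned row is an (LvsM, SvsLM) coordinate pair that could be assigned to one background ellipse.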