The interpretation of medical images relies heavily on visual inspection by human observers. Many studies have explored how sensory and cognitive factors in visual processing influence how medical images are perceived and evaluated. But how do these images influence visual processing itself? The visual system is highly adaptable and constantly adjusting to changes in the visual environment. These adjustments recalibrate and optimize visual coding not only for simple properties of the world like the average light level, but also for complex features like the average blur or texture in a scene. Adaptation thus affects everything we see. The unique visual characteristics of radiological images suggest that they may hold the radiologist in unique states of adaptation. I will illustrate how this adaptation influences contrast sensitivity and the appearance of medical images. One proposed function of adaptation is to highlight novel information by “filtering out” the expected characteristics of scenes, and I will illustrate the implications of this by considering how adaptation may affect visual search for novel or suspicious features in medical images.
Visual adaptation is widely assumed to optimize visual performance, but demonstrations of functional benefits beyond the case of light adaptation remain elusive. The failure to find marked improvements in visual discriminations with contrast or pattern adaptation may occur because these become manifest only over timescales that are too long to probe by briefly adapting observers. We explored the potential consequences of color contrast adaptation by instead "adapting" images to simulate how they should appear to observers under theoretically complete adaptation to different environments, and then used a visual search task to measure the ability to detect colors within the adapted images. Color salience can be markedly improved for extreme environments to which the observer is not routinely exposed, and may also be enhanced even among naturally occurring outdoor environments. The changes in performance provide a measure of how much in theory the visual system can be optimized for a given task and environment, and can reveal the extent to which differences in the statistics of the environment or the sensitivity of the observer are important in driving the states of adaptation. Adapting the images also provides a potential practical tool for optimizing performance in novel visual contexts, by rendering image information in a format that the visual system is already calibrated for.
In vision and color research, it is often desirable to precisely control the spectral content of light stimuli. Some
demanding research applications require replicating or producing natural or novel complex spectral illumination.
However, complex spectral distributions, common in the real world, often prove difficult to simulate in the lab. Past
researchers have combined LCD technologies with broadband sources and wavelength dispersing elements, such as
gratings, to produce approximations to natural distributions. These devices have been limited in contrast, temporal
resolution, and precision by the nature of the LCD itself. We show here how a spectrally-dispersed broadband source
modulated with Digital Light Processor (DLP) technology provides for rapid and precise spectral shaping of visual
stimuli at intensity and precision levels previously unattainable with other light-modulating technologies. We also present a
sample application: data from color vision experiments designed to probe the visual system's differential
response to narrowband versus broadband color stimuli.
Adaptation exerts a continuous influence on visual coding, altering both sensitivity and appearance whenever there is a
change in the patterns of stimulation the observer is exposed to. These adaptive changes are thought to improve visual
performance by optimizing both discrimination and recognition, but may take substantial time to fully adjust the
observer to a new stimulus context. Here we explore the advantages of instead adapting the image to the observer,
obviating the need for sensitivity changes within the observer. Adaptation in color vision adjusts to both the average
color and luminance and to the variations in color and luminance within the scene. We modeled these adjustments as
gain changes in the cones and in multiple post-receptoral mechanisms tuned to stimulus contrasts along different color-luminance
directions. Responses within these mechanisms were computed for a range of different environments, based
on images sampled from a range of natural outdoor settings. Images were then adapted for different environments by
scaling the responses so that for each mechanism the average response equaled the response to a reference environment.
Transforming images in this way can increase the discriminability of different colors and the salience of novel colors. It
also provides a way to simulate how the world might look to an observer in different environments or to different
observers in the same environment. Such images thus provide a novel tool for exploring color appearance and the
perceptual and functional consequences of adaptation.
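The scaling step described above (equating each mechanism's average response to its response in a reference environment) can be sketched in a few lines. This is an illustrative simplification, not the authors' actual implementation: the channel names, the use of the mean absolute contrast as the adapting statistic, and the reference values below are all assumptions.

```python
import numpy as np

def adapt_image_channels(img_channels, ref_means, eps=1e-9):
    """Scale each mechanism's responses so its average magnitude
    matches the average response in a reference environment.

    img_channels : dict of channel name -> 2-D array of signed
        contrasts (hypothetical names: 'lum', 'LvsM', 'SvsLM')
    ref_means : dict of channel name -> mean absolute response
        measured in the reference environment (assumed values)
    """
    adapted = {}
    for name, chan in img_channels.items():
        mean_resp = np.mean(np.abs(chan))    # average response in this environment
        gain = ref_means[name] / (mean_resp + eps)
        adapted[name] = chan * gain          # gain change equates the averages
    return adapted

# toy example: a scene with strong S-cone-axis contrast and weak L-M contrast
rng = np.random.default_rng(0)
scene = {
    'lum':   rng.normal(0, 0.20, (8, 8)),
    'LvsM':  rng.normal(0, 0.05, (8, 8)),
    'SvsLM': rng.normal(0, 0.40, (8, 8)),
}
reference = {'lum': 0.15, 'LvsM': 0.15, 'SvsLM': 0.15}
adapted = adapt_image_channels(scene, reference)
for name in scene:
    print(name, round(float(np.mean(np.abs(adapted[name]))), 3))
```

After scaling, every mechanism's mean absolute response equals the reference value, which is the sense in which the image has been "adapted" to the new environment.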
We used adaptation to examine the relationship between perceptual norms (the stimuli observers describe as psychologically neutral) and response norms (the stimulus levels that leave visual sensitivity in a neutral or balanced state). Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well-defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.
Adapting to the visual characteristics of a specific environment may facilitate detecting novel stimuli within that environment. We monitored eye movements while subjects searched for a color target on familiar or unfamiliar color backgrounds, in order to test for these performance changes and to explore whether they reflect changes in salience from adaptation vs. changes in search strategies or perceptual learning. The target was an ellipse of variable color presented at a random location on a dense background of ellipses. In one condition, the colors of the background varied along either the LvsM or SvsLM cardinal axes. Observers adapted by viewing a rapid succession of backgrounds drawn from one color axis, and then searched for a target on a background from the same or different color axis. Searches were monitored with a Cambridge Research Systems Video Eyetracker. Targets were located more quickly on the background axis that observers were pre-exposed to, confirming that this exposure can improve search efficiency for stimuli that differ from the background. However, eye movement patterns (e.g. fixation durations and saccade magnitudes) did not clearly differ across the two backgrounds, suggesting that how the novel and familiar backgrounds were sampled remained similar. In a second condition, we compared search on a nonselective color background drawn from a circle of hues at fixed contrast. Prior exposure to this background did not facilitate search compared to an achromatic adapting field, suggesting that subjects were not simply learning the specific colors defining the background distributions. Instead, results for both conditions are consistent with a selective adaptation effect that enhances the salience of novel stimuli by partially discounting the background.
Color vision is inseparable from spatial vision. Chromatic and achromatic aspects of visual experience together subserve our perception of the forms of objects. This view is supported by physiological studies demonstrating that both color and luminance are carried along with form information on the same optic nerve fibers, albeit at different spatial scales. These scale differences can be summarized by contrast sensitivity functions measured with chromatic and achromatic spatial sinusoids, and may be illustrated by digitally filtered images that separate achromatic and chromatic variations. Analyses of the chromatic content of natural images also demonstrate a close link with the chromatic and spatial tuning of neural pathways. While characteristic properties of natural scenes can predict general characteristics of visual coding, color can vary widely across individual images, and thus could not be represented optimally by a fixed visual system. However, color coding is not fixed, but rather adjusts to both the average color and distribution of colors in scenes through processes of adaptation. Such adjustments may support color constancy and coding efficiency, and may also optimize detection and discrimination of colors that are novel in an image. Finally, the spatial properties of color-coding mechanisms are essential to our perception of figure and ground. Chromatic (border) contrast enhances the difference between figure and ground, while homogenization of object surfaces is facilitated by short- and long-range processes of assimilation and color spreading.
To what extent do we have shared or unique visual experiences? This paper examines how the answer to this question is constrained by known processes of visual adaptation. Adaptation constantly recalibrates visual sensitivity so that our vision is matched to the stimuli that we are currently exposed to. These processes normalize perception not only to low-level features in the image, but to high-level, biologically relevant properties of the visual world. They can therefore strongly impact many natural perceptual judgments. To the extent that observers are exposed to and thus adapted by a different environment, their vision will be normalized in different ways and their subjective visual experience will differ. These differences are illustrated by considering how adaptation can influence human face perception. To the extent that observers are exposed and adapted to common properties in the environment, their vision will be adjusted toward common states, and in this respect they will have a common visual experience. This is illustrated by reviewing the effects of adaptation on the perception of image blur. In either case, it is the similarities or differences in the stimuli, and not intrinsic similarities or differences between the observers, that determine the relative states of adaptation. Thus at least some aspects of our private internal experience are controlled by external factors that are accessible to objective measurement.
How well-focused an image appears can be strongly influenced by the surrounding context. A blurred surround can cause a central image to appear too sharp, while sharpened surrounds can induce blur. We examined some spatial properties and stimulus selectivities of this 'simultaneous blur contrast.' Observers adjusted the focus of a central test image by a 2AFC staircase procedure that varied the slope of the image amplitude spectrum. The test was surrounded by 8 identical images with biased spectra, presented concurrently with the test for 0.5 sec on a uniform gray background. Contrast effects were comparable in magnitude for image sizes ranging from 1 deg to 4 deg of visual angle, but were stronger for tests that were viewed in the periphery rather than fixated directly. Consistent biases were found for different types of grayscale images, including natural images, filtered noise, and simple edges. However, effects were weaker when surrounds and tests were drawn from different images, or differed in contrast polarity or color, and thus do not depend on blur or on average spatial-frequency content per se. These induction effects may in part reflect a manifestation of selective contrast gain control.
Blur is an intrinsic property of the retinal image that can vary substantially in natural viewing. We examined how processes of contrast adaptation might adjust the visual system to regulate the perception of blur. Observers viewed a blurred or sharpened image for 2-5 minutes, and then judged the apparent focus of a series of 0.5-sec test images interleaved with 6-sec periods of readaptation. A 2AFC staircase procedure was used to vary the amplitude spectrum of successive tests to find the image that appeared in focus. Adapting to a blurred image causes a physically focused image to appear too sharp. Opposite after-effects occur for sharpened adapting images. Pronounced biases were observed over a wide range of magnitudes of adapting blur, and were similar for different types of blur. After-effects were also similar for different classes of images but were generally weaker when the adapting and test stimuli were different images, showing that the adaptation is not adjusting simply to blur per se. These adaptive adjustments may strongly influence the perception of blur in normal vision and how it changes with refractive errors.
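Varying the slope of an image's amplitude spectrum, as in the staircase procedures above, can be illustrated with a simple Fourier-domain filter. The function name and the exact filter form (a global tilt of the amplitude spectrum by f to the power of a slope change) are assumptions for illustration, not the stimuli used in the experiments.

```python
import numpy as np

def reslope(img, delta_alpha):
    """Blur or sharpen an image by tilting its amplitude spectrum.

    The filter multiplies each Fourier amplitude by f**(-delta_alpha),
    so delta_alpha > 0 steepens the spectrum (blurring) and
    delta_alpha < 0 shallows it (sharpening). The DC term is left
    untouched so mean luminance is preserved.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0                    # avoid divide-by-zero at DC
    filt = f ** (-delta_alpha)
    filt[0, 0] = 1.0                 # keep the mean luminance unchanged
    spec = np.fft.fft2(img)
    return np.real(np.fft.ifft2(spec * filt))

# sharpening then blurring by the same amount restores the original image
rng = np.random.default_rng(1)
img = rng.random((32, 32))
restored = reslope(reslope(img, 0.5), -0.5)
print(np.allclose(img, restored, atol=1e-6))
```

Because the two filters are exact inverses away from DC, the round trip recovers the input, which is a convenient sanity check on the spectral manipulation.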
The contrast sensitivity function (CSF) is central to describing spatial vision and to models of visual coding, yet little is known about the form of the function under natural viewing conditions. We examined how contrast sensitivity is affected by adaptation states that should arise in the course of normal viewing. Webster and Miyahara showed that adaptation to the low-frequency biases in natural scenes selectively reduces sensitivity at low frequencies. Here we examine how these sensitivity changes depend on the properties of observers, by varying subjects' refractive state or by measuring adaptation to chromatic contrast rather than luminance contrast. Defocus and physical blurring have similar effects, altering the adaptation only for strongly blurred images. Switching to chromatic contrast induces larger sensitivity changes at low frequencies, consistent with the different CSFs for color and luminance. Thus natural viewing may lead to characteristic adaptation states that differ for luminance and color. To examine the basis for these sensitivity changes, we adapted to 1/f patterns filtered over different frequency bands. Adding lower frequencies to images reduces the adaptation induced by higher frequencies. Thus in natural-image adaptation, the low-frequency bias may result not from the bias in the input spectra, but because the adaptation at different spatial scales is not independent.
We examined figural after-effects in natural images by using as adapt and test stimuli images of human faces, for which small changes in configuration are highly discriminable. Observers either matched a face to a memorized face or rated faces as either 'normal' or 'distorted', before or after viewing a distorted image of the same face. Prior adaptation strongly biases face recognition: after viewing the distorted image, the original face appears distorted in a direction opposite to the adapting distortion. However, no after-effects are observed when either the adapting image or the test image is inverted, indicating that the adaptation is not to the distortion gradient in the image (which is the same for upright or inverted images), but depends instead on the specific configuration of the stimulus. We further show that the figural after-effects for face images are highly asymmetric, for adapting to the original face has little effect on the perception of a distorted face. This asymmetry suggests that adaptation may play an important normalizing role in face perception. Our results suggest that in normal viewing figural after-effects may play a prominent role in form perception, and could provide a novel method for probing the mechanisms underlying human face perception.
We examined visual search for color within the distributions of colors that characterize natural images, by using a foraging task designed to mimic the problem of finding a fruit among foliage. Color distributions were taken from spectroradiometric measurements of outdoor scenes and used to define the colors of a dense background of ellipses. Search times were measured for locating test colors presented as a superposed circular target. Reaction times varied from high values for target colors within the distribution (where they are limited by serial search based on form) to asymptotically low values for colors far removed from the distribution (where targets pop out). The variation in reaction time follows the distribution of background contrasts but is substantially broader. In further experiments we assessed the color organization underlying visual search, and how search is influenced by contrast adaptation to the colors of the background. Asymmetries between blue-yellow and red-green backgrounds suggest that search times do not depend on the separable L-M and S-(L+M) dimensions of early postreceptoral color vision. Prior adaptation to the background facilitates search compared to adaptation to a uniform field, while adaptation to an inappropriate background impedes search. Contrast adaptation may therefore enhance the salience of novel stimuli by partially discounting the ambient background.
Color perception depends profoundly on adaptation processes that adjust sensitivity in response to the prevailing pattern of stimulation. We examined how color sensitivity and appearance might be influenced by adaptation to color distributions that are characteristic of natural images. Color distributions were measured for natural scenes by successively recording each scene with a digital camera through 31 interference filters, or by sampling an array of locations within each scene with a spectroradiometer. The images were used to reconstruct the L, M, and S cone excitation at each spatial location, and the contrasts along three post-receptoral axes [L+M, L-M, or S-(L+M)]. Chromatic contrasts varied principally along a bluish-yellowish axis along which L-M and S-(L+M) signals were highly correlated, with weaker correlations between luminance and chromaticity. We use a two-stage model (von Kries scaling followed by decorrelation) to show how adaptation might influence color appearance by selectively reducing sensitivity to the principal axes of the color distributions, and compare these predictions to empirical measurements of asymmetric color matches obtained after adaptation to successive random samples drawn from natural color distributions.
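The two-stage model mentioned above (von Kries scaling followed by decorrelation) can be sketched as follows. This is an illustrative simplification under assumed conventions: cone signals are normalized by their means to form contrasts, and the decorrelation stage is implemented here as whitening along the principal axes of the color distribution. All function and variable names are hypothetical.

```python
import numpy as np

def two_stage_adapt(cone_signals):
    """Toy two-stage adaptation model.

    Stage 1 (von Kries): each cone class is scaled by its own mean,
    converting signals to contrasts around the adapting average.
    Stage 2 (decorrelation): contrasts are projected onto the
    principal axes of the color distribution and each axis is scaled
    to unit variance, removing correlations between channels.
    """
    # stage 1: von Kries gain control in the cones
    gains = 1.0 / cone_signals.mean(axis=0)
    contrasts = cone_signals * gains - 1.0
    # stage 2: whiten along the distribution's principal axes
    cov = np.cov(contrasts, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    return (contrasts @ evecs) / np.sqrt(evals)

# correlated cone samples, mimicking a bluish-yellowish color bias
rng = np.random.default_rng(2)
base = rng.normal(1.0, 0.1, (5000, 1))
cones = np.hstack([base + rng.normal(0, 0.02, (5000, 1)) for _ in range(3)])
out = two_stage_adapt(cones)
print(np.round(np.cov(out, rowvar=False), 2))   # ~identity matrix
```

After the second stage the channel covariance is the identity, which captures the idea that adaptation selectively reduces sensitivity along the principal (most stimulated) axes of the color distribution.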