Current radiological displays present mammograms only as grayscale images. Restricting the image space to grayscale leaves luminance differences and texture as the only cues for object recognition within the image. Color, however, can be a significant additional cue in the detection of shapes and objects, and improved detection allows the radiologist to interpret the images in greater detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can detect only about 140 grayscale levels on average, whereas an optimally colorized image can allow a user to distinguish 250-1000 different levels, increasing potential image feature detection by a factor of roughly 2-7. By applying a colorization map that follows the luminance map of the original grayscale image, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effects of this enhancement mechanism on the shape, frequency composition, and statistical characteristics of the Visual Evoked Potential (VEP) are analyzed and presented; the effectiveness of the image colorization is thus measured quantitatively using the VEP.
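A luminance-preserving colorization of the kind described above can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes ITU-R BT.601 luma weights and a hypothetical hue ramp, and rescales each assigned color so its weighted luminance matches the original gray value.

```python
import numpy as np

# Perceptual luminance weights (ITU-R BT.601): Y = 0.299 R + 0.587 G + 0.114 B
LUMA = np.array([0.299, 0.587, 0.114])

def colorize_preserve_luminance(gray, hue_rgb):
    """Colorize a grayscale image while preserving its luminance profile.

    `gray` is a float array with values in [0, 1]; `hue_rgb` is any map from
    a gray level to an RGB direction (a hypothetical hue ramp). Each assigned
    color is rescaled so its weighted luminance equals the original gray
    value, isolating color as the enhancement mechanism.
    """
    out = np.empty(gray.shape + (3,))
    for idx in np.ndindex(gray.shape):
        g = gray[idx]
        rgb = np.asarray(hue_rgb(g), dtype=float)
        y = LUMA @ rgb                       # luminance of the raw hue
        # Rescale so output luminance matches g; clipping can break the
        # guarantee for very bright grays, so hue ramps should stay moderate.
        out[idx] = np.clip(rgb * (g / max(y, 1e-9)), 0.0, 1.0)
    return out

# Example with a hypothetical hue ramp: dark grays lean blue, bright lean red.
gray = np.array([[0.2, 0.5, 0.8]])
colored = colorize_preserve_luminance(gray, lambda g: (0.5 + 0.4 * g, 0.5, 0.9 - 0.4 * g))
```

In this sketch the output luminance `colored @ LUMA` reproduces `gray` exactly wherever no channel clips, which is the property the abstract relies on to attribute any VEP change to color alone.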
Many objects in our visual field compete for neural representation. Both bottom-up, sensory-driven processes (luminance detection) and top-down mechanisms (attention and familiarity) can affect the outcome of this competition. In this study, visual evoked potentials (VEPs) were used to measure the changes induced by both stimulus variables and attentional processes. The stimulus set consisted of a grayscale sine-wave grating pattern with varying degrees of spatially random noise. It was generated using the ALOPEX optimization algorithm, which produced a sequence of images converging from a completely random noise pattern to the sine-wave grating template. All patterns in the stimulus set were normalized for average luminance during the ALOPEX convergence process. In addition, the stimulus content of each pattern was quantified using a number of image processing algorithms, including space-averaged global contrast, image entropy, central moments, the 2D Fourier transform, and the 2D wavelet transform. The VEPs were recorded using the same pattern set under different attention states of the subjects. The results presented demonstrate the contrasting effects of noise and attention on both the time and frequency components of the VEP recorded from different lobes of the brain.
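Two of the stimulus-content measures named above can be sketched briefly. The exact definitions used in the study are not given in the abstract; standard RMS contrast and gray-level histogram entropy are assumed here.

```python
import numpy as np

def global_contrast(img):
    """Space-averaged RMS contrast: standard deviation of pixel
    intensities divided by the mean intensity (assumed definition)."""
    return img.std() / img.mean()

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the gray-level histogram for
    intensities in [0, 1] (assumed definition)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 log 0 = 0)
    return -(p * np.log2(p)).sum()

# A spatially random noise pattern has near-maximal histogram entropy,
# while a uniform field has zero entropy and zero contrast.
rng = np.random.default_rng(0)
noise = rng.random((64, 64))
flat = np.full((64, 64), 0.5)
```

Measures like these let each frame of the ALOPEX-generated sequence be placed on a quantitative scale between pure noise and the clean grating, so VEP changes can be related to stimulus content.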