In order to automate the image evaluation task, an engineering model for predicting the visual differences of color images is developed. The present CVDP consists of a color appearance model, a set of contrast sensitivity functions, the modified cortex transform, and a multichannel interaction model for masking effects. Based on a pixel-by-pixel difference metric similar to the CIELAB color difference, the predictions of the simplified CVDP are found to correlate fairly well with the psychophysical test results over 51 pairs of natural images, with some detection failures. These failures can be eliminated by including additional image quality metrics: the clarity in the shadow and highlight areas and the graininess in the mid-tone areas. The modified model is found to identify 55 percent of the visually indistinguishable image pairs. Preliminary results using the complete CVDP for selected image pairs indicate that the masking effects introduce only small changes to the predictions of the simplified CVDP.
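As a minimal sketch of the kind of pixel-by-pixel metric referred to above, the following assumes two images already converted to CIELAB coordinates and computes the per-pixel Euclidean distance ΔE*ab, averaged over the image. The function name and the pooling by the mean are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def mean_delta_e_ab(img1_lab, img2_lab):
    """Mean pixel-by-pixel CIELAB difference (Delta E*ab) between two
    images given as H x W x 3 arrays of (L*, a*, b*) values.
    Illustrative sketch; the CVDP's actual metric is only 'similar to'
    the CIELAB color difference."""
    diff = img1_lab.astype(float) - img2_lab.astype(float)
    # Per-pixel Euclidean distance in L*a*b* space
    delta_e = np.sqrt(np.sum(diff ** 2, axis=-1))
    # Pool over all pixels (mean is an assumed choice of pooling)
    return delta_e.mean()

# Example: two 2x2 images differing by 1 unit in L* at every pixel
a = np.zeros((2, 2, 3))
b = a.copy()
b[..., 0] += 1.0
print(mean_delta_e_ab(a, b))  # 1.0
```

A threshold on such a pooled ΔE*ab value could then serve as a simple visual-difference detector, which is the role the simplified CVDP's metric plays here.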