James Clerk Maxwell demonstrated the first color photograph in a lecture to the Royal Institution of Great Britain in 1861. He used this demonstration to illustrate Thomas Young's idea that human vision uses three kinds of light sensors. That demonstration led to a great variety of color photographic systems using both additive and subtractive color. Today we have photographic, video, digital still, and scanning image-capture devices. We have electrophotographic, ink-jet, thermal, and holographic hard-copy systems, as well as cathode-ray-tube, liquid-crystal, and other light-emitting display devices.

The major effort today is to bring all these technologies under control, so that a user can move a color digital image from one technology to another without effort and without changing the appearance of the image. The strategy of choice is to use colorimetry to calibrate each device. If every print and display reproduced the same colorimetric values at every pixel, the images would appear identical regardless of the display. The problem is that prints and displays have very different color gamuts, so a more satisfactory solution is needed.

In my view, the future emphasis of color will be on models of human vision that calculate color appearance, rather than the color match. All the technologies listed above work one pixel at a time: the response at every pixel depends only on the input at that pixel, whether the imaging system is chemical, photonic, or electrical. Humans are different. The color they see at a pixel is controlled by that pixel and by all the other pixels in the field of view; human color vision is a spatial calculation involving the whole image. In the future, we will see more models that compute color appearance from spatial information and write color sensations on media, rather than attempting to write the quanta catch of visual receptors.
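The distinction between a per-pixel device response and a spatial computation can be sketched in a few lines of code. This is a hypothetical illustration, not an actual color-appearance model: the per-pixel function applies a gamma-style transfer curve, and the spatial function uses the ratio of each pixel to the image's mean as a crude stand-in for the whole-image comparisons human vision performs.

```python
def per_pixel_response(image, gamma=2.2):
    """Each output value depends only on the input at that pixel,
    as in chemical, photonic, or electrical imaging systems."""
    return [[p ** (1.0 / gamma) for p in row] for row in image]

def spatial_response(image):
    """Each output value depends on the whole image: here, the
    ratio of each pixel to the image's mean -- a crude stand-in
    for the spatial calculation of human color vision."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return [[p / mean for p in row] for row in image]

# Two scenes share the same corner pixel (0.5) but have different
# surrounds. The per-pixel response is identical in both scenes;
# the spatial response differs, as appearance does for a human observer.
dark_surround = [[0.5, 0.1], [0.1, 0.1]]
light_surround = [[0.5, 0.9], [0.9, 0.9]]

a = per_pixel_response(dark_surround)[0][0]
b = per_pixel_response(light_surround)[0][0]
c = spatial_response(dark_surround)[0][0]
d = spatial_response(light_surround)[0][0]
```

Here `a` and `b` are equal, while `c` and `d` are not: only the spatial computation is sensitive to the rest of the field of view.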