Some results concerning lighting for human color vision can be generalized to robot color vision. These results depend mainly on the spectral sensitivities of the color channels and on their interaction with the spectral power distribution of the light. In humans, the spectral sensitivities of the R and G receptors show a large overlap, while that of the B receptors overlaps little with the other two. A color vision model that proves useful for lighting work---and which also models many features of human vision---is one in which the "opponent color" signals are T = R - G and D = B - R. That is, a "red minus green" signal comes from the two receptor types with the greatest spectral overlap, while a "blue minus yellow" signal comes from the two with the least overlap. Using this model, we find that many common light sources attenuate red-green contrasts relative to daylight, while special lights can enhance red-green contrast slightly. When lighting changes cannot be avoided, the eye has some ability to compensate for them. In most models of "color constancy," only the light's color guides the eye's adjustment, so a lighting-induced loss of color contrast is not counteracted. Also, no constancy mechanism can overcome metamerism---the effect of unseen spectral differences between objects. However, we can calculate the extent to which a particular lighting change will reveal metamerism. I am not necessarily arguing for opponent processing within robots, but only presenting results based on opponent calculations.
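As a toy illustration (not taken from the text), the opponent signals T = R - G and D = B - R, and the attenuation of red-green contrast under a spectrally deficient light, can be sketched as follows. All spectral data here---sensitivities, light sources, and surface reflectances---are hypothetical placeholders chosen only to exhibit the qualitative behavior described above.

```python
# Illustrative sketch of the opponent-signal calculation above.
# All spectral data are hypothetical, not measured sensitivities
# or real light sources.

# Coarse wavelength grid: 400-700 nm in 50 nm steps (7 samples).
SENS = {
    # R and G overlap strongly; B overlaps little with either.
    "R": [0.00, 0.02, 0.10, 0.45, 0.90, 0.60, 0.15],
    "G": [0.01, 0.05, 0.30, 0.80, 0.70, 0.25, 0.05],
    "B": [0.40, 0.90, 0.50, 0.10, 0.02, 0.00, 0.00],
}

def normalize(spd):
    """Scale a spectral power distribution so all lights deliver equal total power."""
    total = sum(spd)
    return [e * len(spd) / total for e in spd]

def response(spd, reflectance, sens):
    """Receptor response: sum over wavelengths of light * reflectance * sensitivity."""
    return sum(e * r * s for e, r, s in zip(spd, reflectance, sens))

def opponent_signals(spd, reflectance):
    """Return (T, D) = (R - G, B - R) for one surface under one light."""
    R, G, B = (response(spd, reflectance, SENS[c]) for c in "RGB")
    return R - G, B - R

# A flat "daylight-like" SPD vs. a light deficient at long wavelengths.
lights = {
    "daylight": [1.0] * 7,
    "red-deficient": [1.0, 1.0, 1.0, 1.0, 0.3, 0.3, 0.3],
}

reddish  = [0.1, 0.1, 0.1, 0.3, 0.8, 0.9, 0.9]  # hypothetical red surface
greenish = [0.1, 0.2, 0.6, 0.9, 0.4, 0.2, 0.1]  # hypothetical green surface

contrasts = {}
for name, spd in lights.items():
    spd = normalize(spd)  # compare lights at equal total power
    t_red, _ = opponent_signals(spd, reddish)
    t_grn, _ = opponent_signals(spd, greenish)
    contrasts[name] = t_red - t_grn  # red-green (T) contrast between surfaces
    print(f"{name}: T contrast = {contrasts[name]:.3f}")
```

Even with the two lights normalized to equal total power, the long-wavelength-deficient light yields a smaller T difference between the red and green surfaces, mirroring the attenuation of red-green contrast described in the text.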