The accuracy of skin segmentation algorithms is highly sensitive to changes in lighting conditions. When the lighting in a scene differs from that in the training examples, the misclassification rate of skin segmentation algorithms rises sharply. Using a color constancy approach, we aim to compensate for skin color variations and thereby achieve accurate skin segmentation. Skin color constancy is realized in an unsupervised manner: the color changes observed on a face under different illuminations drive the model. By training on a few faces of different ethnicities, our model generalizes the color mapping to unseen ethnicities. The observed color changes are used to learn the color mapping from one lighting condition to another, and these mappings are represented in a low-dimensional subspace to obtain basis vector fields. Using these basis vector fields, we model the nonlinear color changes and transform skin colors captured under arbitrary lighting conditions to a reference lighting condition. We present a proof of concept of unsupervised skin color constancy on faces from the PIE database. Skin segmentation is then performed on the color-compensated faces using a Skin Distribution Map (SDM) trained on skin colors under the reference lighting condition.
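The core idea of representing illumination-induced color mappings in a low-dimensional subspace of basis vector fields can be sketched as follows. This is a minimal illustration, not the paper's implementation: the training data are synthetic, the per-pixel color-change fields are generated from two hidden modes as a stand-in for real face observations, and PCA via SVD stands in for whatever subspace method the paper uses. All array names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N registered face pixels (RGB) seen under M lighting
# conditions. Each condition induces a per-pixel color-change field relative
# to the reference lighting; here the fields are synthesized from two modes.
N, M = 500, 8
ref_colors = rng.uniform(0.2, 0.8, size=(N, 3))   # reference-lit skin colors
true_modes = rng.normal(size=(2, N * 3))          # hidden generating modes
coeffs = rng.normal(size=(M, 2))
change_fields = coeffs @ true_modes               # (M, N*3) training mappings

# Learn basis vector fields: PCA (via SVD) on the mean-centered mappings.
mean_field = change_fields.mean(axis=0)
U, S, Vt = np.linalg.svd(change_fields - mean_field, full_matrices=False)
k = 2
basis_fields = Vt[:k]                             # low-dimensional subspace

# Compensate a face under an unseen lighting condition: project its observed
# color change onto the basis, reconstruct the change, and subtract it to
# map the colors back to the reference lighting.
unseen_field = 0.7 * true_modes[0] - 1.3 * true_modes[1]
observed = ref_colors.reshape(-1) + unseen_field
alpha = basis_fields @ (observed - ref_colors.reshape(-1) - mean_field)
compensated = observed - (mean_field + alpha @ basis_fields)

err = np.abs(compensated - ref_colors.reshape(-1)).max()
```

Because the unseen change field lies in the span of the learned basis, the compensated colors recover the reference-lit colors almost exactly; real data would of course only be approximated by the subspace.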