White balance algorithms are designed to mimic the color constancy mechanism of human perception. However, as their name suggests, current white balance algorithms only guarantee to shift gray tones to their correct positions; other color values are processed as if they were gray tones and therefore acquire undesired color biases. To improve the color prediction of white balance algorithms, in this paper we propose a 3-parameter non-diagonal model for white balance, named PCA-CLSE. Unlike much previous research, which uses the von Kries diagonal model for color prediction, we propose a non-diagonal model for color correction that minimizes color biases while still preserving the balance of white. To reduce the color biases, our method uses a PCA-based training procedure to extract extra information for analysis and builds a mapping between illuminations and non-diagonal transformation matrices. Given a color-biased image, we estimate the illumination and dynamically determine the illumination-dependent transformation matrix that corrects the image. Our evaluation shows that the proposed PCA-CLSE model efficiently reduces color biases.
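The contrast between the von Kries diagonal model and a full non-diagonal correction can be sketched as follows; the illuminant estimate and matrix values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def correct(image, M):
    """Apply a 3x3 color-correction matrix M to an H x W x 3 float image."""
    return np.clip(image.reshape(-1, 3) @ M.T, 0.0, 1.0).reshape(image.shape)

# Diagonal (von Kries) model: per-channel gains that map the estimated
# illuminant to a neutral gray; each channel is scaled independently.
illuminant = np.array([0.8, 1.0, 0.6])            # hypothetical estimate
M_diag = np.diag(illuminant.mean() / illuminant)

# Non-diagonal model: off-diagonal terms let channels influence each other,
# the kind of transform an illumination-dependent matrix could provide.
M_full = M_diag + 0.05 * (np.ones((3, 3)) - np.eye(3))  # illustrative

img = np.random.default_rng(0).random((4, 4, 3))
out_diag = correct(img, M_diag)   # channels rescaled independently
out_full = correct(img, M_full)   # channels mixed by off-diagonal terms
```

Note that under the diagonal model a pixel proportional to the illuminant maps exactly to gray, while the non-diagonal matrix can additionally adjust non-gray colors.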
In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods rely on face detection in 2-D images and project the detected face regions back to 3-D space for correspondence. However, inevitable false detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space with a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method efficiently fuses multi-camera information and suppresses the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate head positions and face directions in real video sequences, even under serious occlusion.
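The projection of a searched 3-D cube onto a 2-D camera view can be sketched with a standard pinhole camera model; the intrinsics and camera pose below are hypothetical stand-ins for calibrated surveillance cameras.

```python
import numpy as np

def project(P, X):
    """Project a 3-D point X (length 3) into pixel coordinates
    using a 3x4 projection matrix P = K [R | t]."""
    x = P @ np.append(X, 1.0)      # homogeneous image coordinates
    return x[:2] / x[2]            # perspective divide

# Hypothetical calibration: focal length 800 px, principal point (320, 240),
# camera looking down the world z-axis from 5 m away.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
P = K @ Rt

# Center of a candidate head cube at the world origin projects to the
# principal point of this camera.
cube_center = np.array([0.0, 0.0, 0.0])
uv = project(P, cube_center)       # -> (320.0, 240.0)
```

In the full system, the eight cube corners would be projected the same way into every camera view, and a face classifier evaluated on each resulting image region.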
A region-based illumination-normalization method that uses only the gray-level values of an image is proposed in this paper. The general purpose of illumination normalization is to reduce lighting effects when test images are captured in different environments, supplying useful and uniform data. We apply the algorithm in our face recognition system. The algorithm first divides each test face image into two parts using homomorphic filtering: the face feature, F0, and the illumination information, I0. Next, an illumination reference model is constructed from a set of normal face images. The illumination information of the test face image is then adjusted according to this reference model. Finally, the modified illumination information, Iα, and the original face feature, F0, are combined to form the normalized face image. Face recognition results improve after the algorithm is applied.
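The homomorphic decomposition into illumination and face-feature parts can be sketched in the log domain, with a Gaussian low-pass standing in for the paper's homomorphic filter (an assumption; the paper's filter and reference-model adjustment are not reproduced here).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(gray, sigma=8.0):
    """Split a grayscale image into log-domain illumination (slowly
    varying, analogous to I0) and feature (detail, analogous to F0)."""
    log_img = np.log1p(gray.astype(np.float64))
    log_illum = gaussian_filter(log_img, sigma)   # low-pass: lighting
    log_feat = log_img - log_illum                # residual: face detail
    return log_illum, log_feat

def recombine(log_illum, log_feat):
    """Invert the decomposition; with an adjusted illumination component
    this would yield the normalized face image."""
    return np.expm1(log_illum + log_feat)

img = np.random.default_rng(1).random((32, 32))
I0, F0 = decompose(img)
restored = recombine(I0, F0)   # exact round trip before any adjustment
```

Replacing `I0` with an illumination component adjusted toward a reference model, before recombining, mirrors the normalization step described above.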