Proc. SPIE 10752, Applications of Digital Image Processing XLI
KEYWORDS: Facial recognition systems, Matrices, Image enhancement, Databases, RGB color model, Light sources and illumination, Wavelets, High dynamic range imaging, Statistical analysis, Principal component analysis
Lighting variation is a challenge for face recognition. This paper proposes a new enhancement method, called the wavelet subband booster, that restores face image quality to overcome this problem. An efficient brightness detector first classifies the color face image as bright, normal, or dark. The RGB color channels of the face image are each transformed to the discrete wavelet domain. The coefficients of each subband of the RGB channels are then adjusted by multiplying the singular value matrices of the subband coefficient matrices by boosting coefficients. An image denoising model is further applied, and the 2-D inverse discrete wavelet transform is performed to obtain a boosted color face image free of lighting effects. The experimental results demonstrate the efficiency of the proposed methodology. The proposed method not only yields boosted images as good as those taken under normal lighting but also significantly improves the accuracy and computation speed of face recognition.
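The boosting step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a one-level Haar transform and an illustrative LL boosting coefficient of 1.3, applied to a single channel; the brightness classifier and the denoising model are omitted.

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2-D Haar transform: rows, then columns.
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    # Exact inverse of haar_dwt2.
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def boost_singular_values(band, gain):
    # Scale the singular values of a subband coefficient matrix.
    U, s, Vt = np.linalg.svd(band, full_matrices=False)
    return U @ np.diag(s * gain) @ Vt

def wavelet_subband_boost(channel, gains=(1.3, 1.0, 1.0, 1.0)):
    # gains = (LL, LH, HL, HH) boosting coefficients -- illustrative values,
    # not those of the paper. Apply per RGB channel in practice.
    bands = haar_dwt2(channel.astype(float))
    boosted = [boost_singular_values(b, g) for b, g in zip(bands, gains)]
    return np.clip(haar_idwt2(*boosted), 0.0, 255.0)
```

Boosting the LL singular values raises overall luminance while the detail subbands keep edges intact, which is the intuition behind compensating a dark image in the wavelet domain.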
In previous studies on human face recognition, illumination pretreatment has been considered among the most crucial steps. We propose an illumination compensation algorithm called two-separated singular value decomposition (TSVD). TSVD consists of two parts, namely the division into high- and low-level subimages and singular value decomposition, which are implemented as self-adapted illumination compensation to resolve the problems associated with strong variation of light and to improve face recognition performance. The mean values of the three color channels R, G, and B are used as thresholds, and two subimages of two light levels are then formed by division at the maximal and minimal means, incorporated with light templates at various horizontal levels. The dynamic compensation coefficient is proportionately adjusted to reconstruct the subimages. Finally, the two subimages are integrated to achieve illumination compensation. In addition, we combined TSVD with the projection color space (PCS) to design a new color space conversion method called the two-level PCS. Experimental results demonstrated the efficiency of the proposed method. It not only makes the skin color of facial images appear softer but also substantially improves the accuracy of face recognition, even for facial images taken under lateral light or exhibiting variations in posture.
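The divide-then-compensate idea can be sketched per color channel as below. The mean-threshold split, the target mean of 128, and the form of the dynamic compensation coefficient are all assumptions made for illustration, not the paper's exact design; the light templates are omitted.

```python
import numpy as np

def tsvd_compensate(channel, target_mean=128.0):
    """Split one color channel at its mean into high- and low-level
    subimages, compensate each via its singular values, then merge.
    target_mean and the gain formula are illustrative assumptions."""
    x = channel.astype(float)
    bright = x >= x.mean()          # mean value of the channel as threshold
    out = np.zeros_like(x)
    for region in (bright, ~bright):
        sub = np.where(region, x, 0.0)          # one light-level subimage
        U, s, Vt = np.linalg.svd(sub, full_matrices=False)
        # Assumed dynamic compensation coefficient: scale the subimage's
        # mean brightness toward the target.
        gain = target_mean / max(sub[region].mean(), 1e-6)
        comp = U @ np.diag(s * gain) @ Vt
        out[region] = comp[region]              # integrate the subimages
    return np.clip(out, 0.0, 255.0)
```

Applying this to each of R, G, and B separately mirrors the abstract's use of per-channel means as thresholds.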
Low-contrast profile images are frequently encountered in medical practice, and the correct interpretation of these images is of vital importance. This study introduces a contrast enhancement technique based on singular value decomposition (SVD) to enhance low-contrast fracture x-ray images. We propose a development of the traditional singular value solution by applying a feature selection process to the extracted singular values. The proposal calls for the establishment of a feature space in which the interpretability or perception of information in images for human viewers is enhanced, while noise and blurring are reduced. In this approach, the area of interest is manually cropped, and histogram equalization (HE) and singular value selection procedures are then conducted for comparative study. This approach exploits the spectral property of SVD, and the singular value selection algorithm is developed based on the corresponding Fourier domain technique for high-frequency enhancement. The proposed method generates better-enhanced views of the target images than HE processing. Ten physicians confirm the performance of the proposed model using the visual analog scale (VAS). The average VAS score improves from 2.5 with HE to 8.3 with the proposed method. Experimental results indicate that the proposed method is helpful in fracture x-ray image processing.
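The singular value selection idea, amplifying a chosen subset of singular values in the spirit of Fourier high-frequency emphasis, might be sketched as follows. The `keep` fraction and `gain` parameters are illustrative assumptions, not the study's values.

```python
import numpy as np

def enhance_by_singular_values(img, keep=0.1, gain=2.0):
    """Amplify the trailing (detail-carrying) singular values of an
    image patch, analogous to high-frequency emphasis in the Fourier
    domain. keep/gain are illustrative, not the paper's parameters."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    cut = int(len(s) * (1.0 - keep))   # index where the selected tail starts
    s2 = s.copy()
    s2[cut:] *= gain                   # boost only the selected tail
    return np.clip(U @ np.diag(s2) @ Vt, 0.0, 255.0)
```

Small singular values capture fine structure (analogous to high spatial frequencies), so boosting only that tail sharpens detail while the dominant low-rank content, and hence overall brightness, is left untouched.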
Wafer sawing performance must be closely monitored to ensure a satisfactory integrated-circuit manufacturing yield. The inspection must allow the GO/NG decision to be fast and reliable while keeping inspector training simple and brief. The traditional neural-network approach to image inspection, while simple to implement, has disadvantages in training efficiency and model effectiveness. Based on contour detection of the sawing lane, this work proposes a novel method that combines cross-center localization of sawing lanes, detection of the sawing track, and four signatures to detect sawing abnormalities effectively and in a timely manner. Our method needs no pretraining, runs faster, and offers greater effectiveness, higher flexibility, and immediate feedback to the sawing operation. An experiment using real data collected from an international semiconductor packaging factory validates the performance of the proposed framework. The accurate acceptance and accurate rejection rates are both 100%, while the false acceptance and false rejection rates are both zero. The results demonstrate that the proposed method is sound and useful for sawing inspection in industry.
We use image-locating techniques and a traditional whiteboard with two cameras to construct an electronic whiteboard (EWB) with a size of 88 × 176 cm corresponding to a 1280 × 1024-pixel resolution. We employ two strategies to achieve this goal: (1) we develop a modified scale and bilinear interpolation (MSBI) method for pen locating and accelerated operation, obtaining high-accuracy detection; and (2) a block parameter database (BPD) is created to improve accuracy. For the BPD, we divide the whiteboard image into several blocks and record each block's parameters (the X and Y coordinates) for subsequent pen-position calculation. Experimental results demonstrate that the MSBI method correctly calculates the pen position. Additionally, the BPD strategy outperforms the traditional method, improving accuracy and decreasing the maximum detection error from 6 to 3 pixels. The simulation results prove our method is an effective and low-cost EWB technique.
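The block-wise lookup with bilinear interpolation might look like the following sketch, where `grid_x`/`grid_y` are an assumed form of the block parameter database storing the board coordinates recorded at the corners of each camera-pixel block; the MSBI scaling details are not reproduced here.

```python
import numpy as np

def locate_pen(px, py, grid_x, grid_y):
    """Map a camera-pixel pen position (px, py) to board coordinates by
    bilinear interpolation inside its calibration block. grid_x/grid_y
    hold board coordinates at block corners (assumed BPD layout);
    unit-sized blocks are assumed for simplicity."""
    i, j = int(py), int(px)            # indices of the enclosing block
    fy, fx = py - i, px - j            # fractional position in the block

    def interp(g):
        # Standard bilinear blend of the four recorded corner values.
        return ((1 - fy) * (1 - fx) * g[i, j] + (1 - fy) * fx * g[i, j + 1]
                + fy * (1 - fx) * g[i + 1, j] + fy * fx * g[i + 1, j + 1])

    return interp(grid_x), interp(grid_y)
```

Because bilinear interpolation is exact for board coordinates that vary linearly across a block, recording corner parameters per block bounds the locating error by the within-block nonlinearity, which is consistent with the reported drop in maximum error.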
In this paper, QR barcode and image processing techniques are used to construct a nested steganography scheme. Two types of secret data (lossless and lossy) are embedded into a cover image. The lossless data is text that is first encoded by the QR barcode; the extracted data exhibits no distortion compared with the original data. The lossy data is an image; a face image suits our case. Because the extracted text must be lossless, the error correction rate of the QR encoding must be carefully designed; we found a 25% error correction rate suitable for our goal. Because the embedded image can sustain minor perceptible distortion, we discard the lower nibble of each byte of the face image to reduce the secret data volume. When the image is extracted, we use a median filter to remove the noise and obtain smoother image quality. Simulations show that our scheme is robust to JPEG attacks. Compared with other steganography schemes, the proposed method has three advantages: (i) the nested scheme is an enhanced security system not previously developed; (ii) our scheme can conceal lossless and lossy secret data in a cover image simultaneously; and (iii) the QR barcode used as secret data widely extends the method's application fields.
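The lower-nibble discard and median-filter smoothing can be sketched directly. The 3 × 3 window size is an assumption, and the QR encoding and the actual embedding/extraction steps are omitted.

```python
import numpy as np

def pack_secret(face):
    # Discard the lower nibble of each byte, halving the payload bits.
    return (face.astype(np.uint8) >> 4).astype(np.uint8)

def unpack_secret(nibbles):
    # Restore to 8-bit range; the lower nibble is lost (lossy, error <= 15).
    return (nibbles.astype(np.uint8) << 4)

def median3(img):
    # 3x3 median filter (edge-replicated borders) to smooth the
    # recovered face image; window size is an assumption.
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)
```

Dropping the low nibble bounds the per-pixel reconstruction error at 15 gray levels, a distortion the abstract argues is acceptable for the lossy face-image payload; the median filter then suppresses the residual quantization noise.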
An elliptic face segmentation algorithm, called the facial component extractor (FCExtractor), was recently proposed. The algorithm is based on a novel overcomplete wavelet template, a support vector machine (SVM) classifier, and wavelet entropy filtering. It is designed to consistently detect and segment the eyes-nose-mouth T-shaped facial region with an ellipse. Thereafter, head orientation is estimated using the ratio of the cheeks. To evaluate the effectiveness of the FCExtractor, we introduce a face detection measure based on the distance between the expected and segmented circumscribed circle areas of the eye-mouth triangle. We then apply a local description of the segmented face through normalization, illumination normalization, log-polar mapping, and self-eigenface to achieve recognition. The novelty of this approach to face representation comes from the derivation of the likelihood fitness function for self-eigenface selection of a discriminative subset and the adaptive threshold value. The approach maximizes the differences among face images of different persons while minimizing the expression and pose variations of the same person. Experimental results on available databases and a live sequence show that our method is superior to conventional methods based on rectangular face segmentation in complex scenes.
We propose a novel two-step approach to eye detection in complex scenes, including both indoor and outdoor environments. This approach adopts a face-localization-to-eye-extraction strategy. First, we use energy analysis to remove most noise-like regions to enhance face localization performance, and then use the head contour detection (HCD) approach to search for the best combinations of facial sides and head contours under an anthropometric measure, thereby locating the face-of-interest (FOI) region. In the meantime, after de-edging preprocessing of the facial sides, a wavelet subband interorientation projection method is adopted to select eye-like candidates. Together with the geometric relations among the facial components, such as the eyes, nose, and mouth, an eye verification rule verifies the selected eye candidates. The eye positions are then marked and refined using the bounding box of the FOI region, with an ellipse as the best fit to the facial oval shape. The experimental results demonstrate that our proposed method improves significantly on other methods across three head-and-shoulder databases.