This paper presents a machine vision system for automated label inspection, aiming to reduce labor costs and ensure consistent product quality. First, the images captured by each single camera are distorted because the inspected object is approximately cylindrical; we therefore propose an algorithm based on inverse cylinder projection, which rectifies label images by compensating for this distortion. Second, to overcome the limited field of view of each single camera, our method combines the images from all cameras into a panorama for label inspection. Third, to account for vibration of the production line and electronic signal errors, we design a real-time image registration step that computes the offsets between the template and the inspected images. Experimental results demonstrate that our system is accurate, runs in real time, and can be applied to numerous real-time inspections of approximately cylindrical objects.
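The column-wise rectification idea behind cylinder-projection distortion compensation can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a vertical, horizontally centered cylinder of known radius (in pixels), and the function name and parameters are hypothetical.

```python
import numpy as np

def unwrap_cylinder_columns(img, radius):
    """Rectify a cylindrical label image by an inverse cylindrical mapping.

    Assumes the cylinder axis is vertical and centered horizontally in the
    image, and that `radius` is the cylinder radius in pixels. Hypothetical
    sketch of the geometry only, not the paper's exact algorithm.
    """
    h, w = img.shape[:2]
    cx = (w - 1) / 2.0
    # Half-angle of the visible arc covered by the image width.
    half_angle = np.arcsin(min(cx / radius, 1.0))
    # Output width equals the unwrapped arc length.
    out_w = int(round(2 * radius * half_angle)) + 1
    # For each output (arc-length) column s, the source column on the
    # cylinder projection is cx + radius * sin(s / radius).
    s = np.linspace(-radius * half_angle, radius * half_angle, out_w)
    src_x = np.clip(np.round(cx + radius * np.sin(s / radius)).astype(int),
                    0, w - 1)
    return img[:, src_x]
```

Because the arc is longer than its chord, the unwrapped image is wider than the input, and pixels near the label edges (which appear compressed in the camera view) are stretched back out.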
This paper proposes an interactive psoriasis lesion segmentation algorithm based on the Gaussian Mixture Model (GMM). Psoriasis is an incurable skin disease that affects a large population worldwide. PASI (Psoriasis Area and Severity Index) is the gold standard used by dermatologists to monitor the severity of psoriasis. Computer-aided methods of calculating PASI are more objective and accurate than human visual assessment, and psoriasis lesion segmentation is the basis of the whole calculation. This segmentation differs from common foreground/background segmentation problems. Our algorithm is inspired by GrabCut and consists of three main stages. First, the skin area is extracted from the background scene by transforming RGB values into the YCbCr color space. Second, a rough segmentation of normal skin and psoriasis lesion is obtained by thresholding a single Gaussian model; the thresholds are adjustable, which enables user interaction. Third, two GMMs, one for the initial normal skin and one for the psoriasis lesion, are built to refine the segmentation. Experimental results demonstrate the effectiveness of the proposed algorithm.
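The GrabCut-style refinement in the third stage can be illustrated with a short sketch: fit one GMM to the pixels initially labeled skin and one to the pixels initially labeled lesion, then reassign every pixel to the model under which it is more likely. The function name, component count, and use of scikit-learn's `GaussianMixture` are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def refine_with_gmms(pixels, init_lesion_mask, n_components=2, seed=0):
    """Refine an initial skin/lesion split with two GMMs, GrabCut-style.

    `pixels` is an (N, 3) array of color samples; `init_lesion_mask` is an
    (N,) boolean array from the rough thresholding stage. Returns the
    refined boolean lesion mask. Illustrative sketch only.
    """
    skin = GaussianMixture(n_components, random_state=seed)
    skin.fit(pixels[~init_lesion_mask])
    lesion = GaussianMixture(n_components, random_state=seed)
    lesion.fit(pixels[init_lesion_mask])
    # Per-pixel log-likelihood comparison between the two color models.
    return lesion.score_samples(pixels) > skin.score_samples(pixels)
```

In an interactive setting, adjusting the stage-two thresholds changes `init_lesion_mask`, and this refinement can be rerun to propagate the correction.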
Hand gesture recognition has attracted increasing interest in computer vision and image processing. Recent work on hand gesture recognition faces two major problems: how to detect and extract the hand region from color-confusing background objects, and the expensive computational cost of the kinematic hand model with up to 27 degrees of freedom. This paper proposes a stable, real-time static hand gesture recognition system. Our contributions are as follows. First, to deal with color-confusing background objects, we take RGB-D (RGB-Depth) information into account, so that foreground and background objects can be segmented well; in addition, we propose a coarse-to-fine model that exploits skin color to extract the hand region robustly and accurately. Second, since the principal direction of the hand region is arbitrary, we introduce the principal component analysis (PCA) algorithm to estimate and then compensate for it. Finally, to avoid the expensive computational cost of traditional optimization, we design a fingertip filter that detects extended fingers simply by computing their distances to the palm center and their curvature; the number of extended fingers is then reported as the recognition result. Experiments verify the stability and speed of our algorithm: on a data set captured by the depth camera, it recognizes the six pre-defined static hand gestures robustly with an average accuracy of about 98.0%, and the average computation time per 640×480 image is 37 ms, which makes it suitable for many real-time applications.
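The distance-based part of such a fingertip filter can be sketched as follows: a contour point counts as an extended fingertip when its distance to the palm center is a local maximum that exceeds a threshold. The function name and the threshold ratio are illustrative assumptions, and the curvature check is omitted for brevity.

```python
import numpy as np

def count_extended_fingers(contour, palm_center, dist_ratio=1.6):
    """Count extended fingers from a closed hand contour (illustrative sketch).

    `contour` is an (N, 2) array of boundary points and `palm_center` a
    2-vector. A point is a fingertip candidate if its distance to the palm
    center exceeds `dist_ratio` times the mean contour distance and is a
    local maximum of the distance profile. The ratio is an assumption,
    not the paper's exact filter.
    """
    d = np.linalg.norm(contour - palm_center, axis=1)
    thresh = dist_ratio * d.mean()
    n = len(d)
    count = 0
    for i in range(n):
        prev_d, next_d = d[(i - 1) % n], d[(i + 1) % n]
        # Strict comparison on one side avoids double-counting plateaus.
        if d[i] > thresh and d[i] > prev_d and d[i] >= next_d:
            count += 1
    return count
```

Because this scans the contour once and uses only distances, it runs in linear time in the number of contour points, which is consistent with avoiding a full kinematic-model optimization.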