A fully automatic system for human detection and tracking in front of an Interactive Whiteboard is presented. When a
person stands between the projector and the projection area, light shining on the face creates deleterious effects.
We developed a stereo vision system that mitigates this problem by accurately
detecting the human body and masking the face. We present the two main parts of this system: automatic system
calibration, and human detection and tracking. For automatic calibration, a checkerboard pattern is projected on the whiteboard at
start-up. The grid patterns in the two camera images are processed, and corresponding points between them are detected
and localized. A projective transform establishes the homography between the two images. Testing shows precise
automatic calibration, with an average RMS error of 0.4 pixels in the off-line test. Human detection and tracking is
accomplished using a similarity measure, foreground segmentation, principal component analysis, body-shape feature
extraction, disparity measurement, and location estimation. We achieved an average detection rate of 97.7% in the off-line
tests. The method was fully implemented in a real-time system and testing showed the system to be very robust.
We evaluated three algorithms for prostate boundary segmentation from 3D ultrasound images. In the parallel segmentation method, the 3D image was sliced into parallel, contiguous 2D images, whereas in the rotational method, the image was sliced in a rotational manner. Using either method, four points were selected on a central slice and used to initiate a 2D deformable model. The segmented contour was propagated to adjacent slices until the entire prostate was segmented. In the volume-based method, the 3D image was segmented directly without slicing it. Each segmentation algorithm was applied to four 3D images, and the results were compared to manual segmentation. Average volume errors of -8.58%, -1.95% and -5.01% were estimated for the parallel, rotational and volume-based methods, respectively. Approximately 20% of the slices required editing in the parallel method, whereas 13% required editing in the rotational method. Although only one surface segmented using the volume-based method needed editing, manual editing was difficult in 3D. Segmentation times, including editing, ranged from 42 to 82 seconds for the parallel method, from 27 to 52 seconds for the rotational method, and up to 55 seconds for the volume-based method. Based on these results, we recommend the rotational segmentation method.
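The rotational method reslices the 3D image into planes through a central axis at successive angles. As a minimal sketch of that reslicing (our own simplification with nearest-neighbour sampling; the evaluated system's implementation is not given in the text):

```python
import numpy as np

def rotational_slice(volume, theta):
    """Extract a 2D slice through the central axis of a 3D image at
    in-plane angle theta (radians).  volume has shape (nz, ny, nx);
    the rotation axis runs through the (y, x) centre, parallel to z.
    Nearest-neighbour sampling; returns shape (nz, 2*half+1)."""
    nz, ny, nx = volume.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    half = int(min(cy, cx))                  # in-plane half-extent
    r = np.arange(-half, half + 1)           # signed radial coordinate
    ys = np.rint(cy + r * np.sin(theta)).astype(int)
    xs = np.rint(cx + r * np.cos(theta)).astype(int)
    return volume[:, ys, xs]
```

Sweeping `theta` over [0, pi) in fixed increments yields the rotational slice stack; the parallel method corresponds instead to taking `volume[k]` for each index k.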
Our slice-based 3D prostate segmentation method comprises three steps. 1) Initialization. We chose more than three points on the boundary of the prostate along one direction and used a Cardinal spline to interpolate an initial prostate boundary, which was divided into vertices. 2) Boundary deformation. At each vertex, the internal and external forces were calculated; these forces drove the evolving contour to the true boundary of the prostate. 3) 3D prostate segmentation. We propagated the final contour in the initial slice to the adjacent slices and refined it until the prostate boundary in every slice was segmented. Finally, we calculated the volume of the prostate from a 3D mesh surface of the prostate. Experiments with 3D US images of six patient prostates demonstrated that our method efficiently avoided being trapped in local minima, with an average percentage error of 4.8%. The average percentage error in measuring the prostate volume was less than 5% with respect to manual planimetry.
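The initialization step interpolates a closed contour through the user-selected boundary points with a Cardinal spline. A minimal sketch of that interpolation (our own illustration; tension 0 gives the Catmull-Rom special case, and the exact tension the method uses is not stated):

```python
import numpy as np

def cardinal_spline_closed(points, samples_per_seg=10, tension=0.0):
    """Interpolate a closed contour through the control points with a
    Cardinal spline and return the densified Nx2 contour.
    points: Nx2 array of (x, y) boundary points, in order."""
    p = np.asarray(points, dtype=float)
    n = len(p)
    # Cardinal-spline tangents: m_i = (1 - tension)/2 * (p_{i+1} - p_{i-1}),
    # with wrap-around neighbours because the contour is closed.
    m = 0.5 * (1.0 - tension) * (np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0))
    # Cubic Hermite basis evaluated on each segment's parameter t in [0, 1).
    t = np.linspace(0.0, 1.0, samples_per_seg, endpoint=False)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    out = []
    for i in range(n):
        j = (i + 1) % n
        out.append(h00 * p[i] + h10 * m[i] + h01 * p[j] + h11 * m[j])
    return np.vstack(out)
```

The resulting dense contour passes through every control point and can then be resampled into the vertices at which the internal and external deformation forces are evaluated.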