A common limitation of laser-line three-dimensional (3D) scanners is the inability to scan surfaces that are either parallel to the laser line or self-occluded. Filling in the missing areas adds unwanted inaccuracy to the 3D model. Capturing the human head with a Cyberware PS Head Scanner is an example of obtaining a model in which the incomplete areas are difficult to fill accurately. The PS scanner uses a single vertical laser line to illuminate the head and is unable to capture data at the top of the head, where the line of sight is tangent to the surface, and under the chin, which is occluded when the subject faces straight ahead. The Cyberware PX Scanner was developed to obtain this missing 3D head data. The PX scanner uses two cameras offset at different angles to provide a more detailed head scan that captures surfaces missed by the PS scanner. The PX scanner cameras also use newer technology to obtain color maps of higher resolution than those from the PS scanner. The two scanners were compared in terms of the amount of surface captured (surface area and volume) and the quality of head measurements relative to direct measurements obtained with standard anthropometry methods. Relative to the PS scanner, the PX head scans were more complete and provided the full set of head measurements, but the actual measurement values, when available from both scanners, were about the same.
The objects captured with three-dimensional scanners are, by themselves, of limited value. The real power of 3D scanning emerges as applications derive useful information from the point clouds. Extracting measurements from 3D human body scans is an important capability for those interested in clothing and equipment design, human factors evaluation, and web commerce, among other applications. In order to be practical, measurement extraction functions must be fast, accurate, and reliable. Automation is critical for processing the large numbers of scans envisioned by most developers. In this paper we report two functions for identifying fiducial points (landmarks) on the human face. First, we used a template-matching approach in which a predefined template of 34 face landmarks is matched to a head scan using a small subset of the template landmarks. Once the template is in place, interrogating local surface geometry refines landmark locations. This approach allows us to locate a large number of landmarks quickly, and, more importantly, it allows us to place important but hard-to-locate landmarks. In our second approach, we used image-processing methods to locate a small blue dot that has been positioned on the face prior to scanning.
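The abstract does not include code, but the first step of the template approach can be illustrated with a short sketch. The Python fragment below rigidly aligns a 34-landmark template to a scan using a small subset of anchor landmarks; the Kabsch algorithm, the function names, and the coordinates are illustrative assumptions, not the authors' implementation. Each mapped template point then serves as an initial estimate to be refined from the local surface geometry.

```python
import numpy as np

def rigid_align(template_pts, scan_pts):
    """Least-squares rigid alignment (Kabsch) of corresponding 3-D points.

    template_pts, scan_pts: (N, 3) arrays of matched landmark coordinates.
    Returns rotation R and translation t such that R @ template + t ~ scan.
    """
    tc, sc = template_pts.mean(axis=0), scan_pts.mean(axis=0)
    H = (template_pts - tc).T @ (scan_pts - sc)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                      # proper rotation
    t = sc - R @ tc
    return R, t

# Align the full 34-landmark template using a few easily found anchors
# (e.g., nose tip and eye corners), then map every template landmark onto
# the scan as an initial estimate for later geometric refinement.
anchors_template = np.array([[0.0, 0.0, 0.0], [30.0, 5.0, -20.0], [-30.0, 5.0, -20.0]])
anchors_scan     = np.array([[2.0, 1.0, 0.5], [31.5, 6.2, -19.0], [-28.0, 6.5, -19.5]])
R, t = rigid_align(anchors_template, anchors_scan)
full_template = np.random.rand(34, 3) * 100                 # placeholder 34-landmark template
initial_landmarks = full_template @ R.T + t
```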
Automatic extraction of anatomic landmarks from three-dimensional (3-D) head scan data is a typical yet challenging application of 3-D image analysis. This paper explores approaches to automatically identify landmarks based on their geometric appearance in a 3-D data set. We investigated the geometric features of the most important landmarks of the head and face, especially invariant surface characteristics such as mean and Gaussian curvature, as well as other external characteristics. Based on the analysis of these features, we define a number of methods and operators to locate each extractable landmark in 3-D scan data. Starting from the nose, the face landmarks can be located in a structured sequence, which reduces the image analysis for each landmark to a local area. Ideally, the characteristic map derived from a 3-D digital image should deliver a meaningful image for analysis. However, because of noise and voids in the data set, it is not unusual for the characteristic map to require post-processing or re-computation. A number of experiments were conducted to find a suitable computational technique, and additional steps were taken to obtain a satisfactory characteristic map.
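As an illustration of the invariant surface characteristics mentioned above, the sketch below computes mean (H) and Gaussian (K) curvature maps for a range image treated as a Monge patch z = f(x, y). The finite-difference approach and the synthetic dome are assumptions for demonstration only; the paper's own characteristic maps are computed from the scan data.

```python
import numpy as np

def curvature_maps(z, spacing=1.0):
    """Mean (H) and Gaussian (K) curvature of a range image z(row, col).

    Treats the scan locally as a Monge patch z = f(x, y) and uses finite
    differences; voids (NaNs) propagate and can be filled or masked later.
    """
    fy, fx = np.gradient(z, spacing)             # first derivatives (rows, cols)
    fyy, fyx = np.gradient(fy, spacing)
    fxy, fxx = np.gradient(fx, spacing)          # second derivatives
    g = 1.0 + fx**2 + fy**2                      # 1 + |grad f|^2
    K = (fxx * fyy - fxy**2) / g**2
    H = ((1.0 + fx**2) * fyy - 2.0 * fx * fy * fxy + (1.0 + fy**2) * fxx) / (2.0 * g**1.5)
    return H, K

# Example: a synthetic dome.  Its crest is elliptic (K > 0), and with this
# convention H < 0 there -- the kind of local signature that can be used to
# seed the landmark search at the nose.
z = np.fromfunction(lambda i, j: -((i - 64)**2 + (j - 64)**2) / 200.0, (128, 128))
H, K = curvature_maps(z)
```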
Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).
Most current 3-D head scanners cannot capture the complete surface of the head due to line-of-sight limitations. As a post-processing aid, we developed an automated method for approximating the top-of-head surface. The top of the head is usually the largest void area in a 360-degree head scan, such as those obtained with a Cyberware PS head scanner. In this paper, we describe a two-step B-spline curve/surface approximation process to reconstruct the top of the head from the raw data set.
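A minimal sketch of one possible two-step curve/surface approximation is shown below, using SciPy's smoothing B-splines. The synthetic rim, the interior samples, and the smoothing factors are assumptions for illustration; they stand in for the void boundary and the sparse crown data of a real PS head scan, and this is not the authors' exact procedure.

```python
import numpy as np
from scipy.interpolate import splprep, splev, SmoothBivariateSpline

# Step 1: smoothing, periodic B-spline approximation of the noisy rim of the
# void (synthetic points stand in for the boundary of the missing crown area).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=True)
rim = np.c_[90 * np.cos(theta), 90 * np.sin(theta), 5 * np.sin(3 * theta)]
tck, _ = splprep(rim.T, s=len(rim) * 2.0, per=True)
rx, ry, rz = splev(np.linspace(0, 1, 300), tck)

# Step 2: smooth bivariate B-spline surface fitted to the smoothed rim plus a
# few sparse interior samples; evaluating it on a grid fills the top of the head.
rng = np.random.default_rng(0)
r, t = rng.uniform(0, 80, 300), rng.uniform(0, 2 * np.pi, 300)
ix, iy = r * np.cos(t), r * np.sin(t)
iz = 20 * np.sqrt(np.clip(1 - (r / 90) ** 2, 0, 1))        # synthetic crown heights
surf = SmoothBivariateSpline(np.r_[rx, ix], np.r_[ry, iy], np.r_[rz, iz], s=600.0)
gx = gy = np.linspace(-90, 90, 64)
patch = surf(gx, gy)                                        # reconstructed crown patch
```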
KEYWORDS: Image segmentation, 3D scanning, 3D image processing, Data modeling, 3D modeling, Image processing algorithms and systems, Data acquisition, Whole body imaging, Medical imaging, Chest
This paper presents a segmentation algorithm for 3D whole-body surface scan data. The algorithm is based on a 2D projection of the 3D data and has achieved good results on a limited set of surface shapes. The method has been successfully employed to extract the torso, arm, and leg segments of the human body.
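The abstract does not detail the projection step, but the general idea can be sketched as follows: rasterise the scan points into a frontal-plane silhouette and use simple connectivity analysis to find cut heights such as the crotch. The axis conventions, the resolution, the crotch heuristic, and the hypothetical loader are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy import ndimage

def silhouette(points, res=0.01):
    """Rasterise scan points (x = width, y = depth, z = height, metres)
    into a binary frontal-plane silhouette image (rows = height bins)."""
    x, z = points[:, 0], points[:, 2]
    xi = ((x - x.min()) / res).astype(int)
    zi = ((z - z.min()) / res).astype(int)
    img = np.zeros((zi.max() + 1, xi.max() + 1), dtype=bool)
    img[zi, xi] = True
    return ndimage.binary_closing(img, structure=np.ones((3, 3)))

def crotch_row(img):
    """Scan upward from the feet and return the first row where the two leg
    components merge into one; rows below it belong to the legs.
    (Assumes the feet are apart in the scanning pose.)"""
    for row in range(img.shape[0]):
        if ndimage.label(img[row])[1] == 1:
            return row
    return None

# points = load_scan("subject.ply")   # hypothetical loader returning an (N, 3) array
# img = silhouette(points)
# cut_z = points[:, 2].min() + crotch_row(img) * 0.01
# leg_points = points[points[:, 2] < cut_z]
```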
KEYWORDS: 3D scanning, Natural surfaces, 3D image processing, Tissues, Head, Clouds, Personal protective equipment, Scanners, Tissue optics, Data centers
Surface area coverage is an important feature for evaluating the functionality of personal protective equipment and clothing. This paper presents an approach for calculating the surface area coverage of protective clothing by superimposing two 3D whole-body scan images: a scan of the 'nude' human body and a scan of the clothed body. The basic approach is to align the two scans and calculate the per-vertex distance field between the two scanned surfaces. Because the clothed body has an extra surface layer relative to the nude scan, the distance field may be used to define covered and uncovered regions by setting a distance threshold based on the thickness of the clothing or equipment. This paper discusses the procedures required for estimating surface area coverage, including data slicing, sorting, mesh generation, and the computation of the distance field. Although the method is straightforward to describe, several difficulties related to human body scanning had to be overcome in its practical application, including: 1) registration of two scan data sets with different shapes; 2) the frequent occurrence of void data, especially in the clothing scan; and 3) tissue compression and deformation caused by the clothing or equipment. This paper discusses these problems and our current solutions.
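A minimal sketch of the per-vertex distance-field idea described above is shown below, using a nearest-neighbour query between two aligned scans. The function name, the per-vertex area weights, and the KD-tree query are illustrative assumptions; the paper's own pipeline also involves slicing, sorting, and mesh generation.

```python
import numpy as np
from scipy.spatial import cKDTree

def coverage(nude_vertices, nude_areas, clothed_vertices, thickness=0.02):
    """Per-vertex distance field between aligned nude and clothed body scans.

    nude_vertices   : (N, 3) vertices of the aligned nude scan (metres)
    nude_areas      : (N,) surface area attributed to each nude vertex
    clothed_vertices: (M, 3) vertices of the aligned clothed scan
    thickness       : distance threshold chosen from the clothing thickness

    Where clothing covers the body, the clothed-scan surface sits roughly one
    clothing thickness away from the skin; where it does not, the clothed scan
    coincides with bare skin and the distance is near zero.  Returns the
    per-vertex distances and the covered fraction of the total surface area.
    """
    dist, _ = cKDTree(clothed_vertices).query(nude_vertices)
    covered = dist >= thickness
    return dist, nude_areas[covered].sum() / nude_areas.sum()
```

In a sketch like this, setting the threshold slightly below the nominal clothing or equipment thickness is one way to tolerate the tissue compression mentioned above.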
KEYWORDS: Principal component analysis, Image classification, Vegetation, Dielectric polarization, Radar, L band, Visualization, Image processing, Data acquisition, RGB color model
We are currently exploring the relationship between spatial statistical parameters of various geophysical phenomena and those of the remotely sensed image by way of principal component analysis (PCA) of radar and optical images. The issues being explored include the effects of incorporating PCA into land cover classification in an attempt to improve its accuracy. Preliminary results of using PCA in comparison with unsupervised land cover classification are presented.
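As an illustration of applying PCA to co-registered image bands, the sketch below computes principal-component images that could then be passed to an unsupervised classifier. The band count, image shapes, and the eigen-decomposition route are assumptions for illustration, not the study's processing chain.

```python
import numpy as np

def image_pca(bands):
    """Principal component analysis of co-registered image bands.

    bands: (B, H, W) stack, e.g. radar backscatter and optical channels.
    Returns component images of the same shape, ordered by explained variance.
    """
    B, H, W = bands.shape
    X = bands.reshape(B, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)           # zero-mean each band
    cov = X @ X.T / (X.shape[1] - 1)             # B x B band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # descending variance
    comps = eigvecs[:, order].T @ X              # project pixels onto the PCs
    return comps.reshape(B, H, W)

# The leading components can be fed to an unsupervised classifier
# (e.g. k-means) in place of, or alongside, the raw bands.
stack = np.random.rand(4, 256, 256)              # placeholder co-registered bands
pcs = image_pca(stack)
```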
KEYWORDS: 3D metrology, 3D modeling, Data modeling, 3D scanning, Image quality, 3D image processing, Data centers, Head, Image processing, Electrical engineering
For 3D digitizers to be useful data collection tools in scientific and human factors engineering applications, the models created from scan data must match the original object very closely. Factors such as ambient light, characteristics of the object's surface, and object movement, among others, can affect the quality of the image produced by any 3D digitizing system. Recently, Cyberware developed a whole-body digitizer for collecting data on human size and shape. With a digitizing time of about 15 seconds, the effect of subject movement, or sway, on model fidelity is an important issue to address. The effect of sway is best measured by comparing the dimensions of an object of known geometry to the model of the same object captured by the digitizer. Since it is difficult to know the geometry of a human body accurately, we chose to compare an object of simple geometry to its digitized counterpart. Preliminary analysis showed that a single cardboard tube would provide the best artifact for detecting sway. A tube was attached to the subject using supports that allowed the cylinder to stand away from the body; the stand-off was necessary to minimize occluded areas. Multiple scans were taken of one subject, and the cylinder was extracted from the images. Comparison of the actual cylinder dimensions to those extracted from the whole-body images found the effect of sway to be minimal. This is consistent with earlier findings that anthropometric dimensions extracted from whole-body scans are very close to the same dimensions measured using standard manual methods. Recommendations for subject preparation and stabilization are discussed.
Developments in laser digitizing technology now make it possible to capture very accurate 3D images of the surface of the human body in less than 20 seconds. Applications for the images range from animation of movie characters to the design and visualization of clothing and individual equipment (CIE). In this paper we focus on modeling the user/equipment interface. Defining the relative geometry between user and equipment provides a better understanding of equipment performance and can make the design cycle more efficient. Computer-aided fit testing (CAFT) is the application of graphical and statistical techniques to visualize and quantify the human/equipment interface in virtual space. In short, CAFT measures the relative geometry between a user and his or her equipment. The design cycle changes with the introduction of CAFT: some evaluation may now be done in the CAD environment prior to prototyping. CAFT may be applied in two general ways: (1) to aid in the creation of new equipment designs and (2) to evaluate current designs for compliance with performance specifications. We demonstrate the application of CAFT with two examples. First, we show how a prototype helmet may be evaluated for fit, and second, we demonstrate how CAFT may be used to measure body armor coverage.