Locating and tracing of anatomical landmarks based on full-field four-dimensional measurement of human body surface
Abstract
Four-dimensional (4D) (3D+time) measurement systems make it possible today to measure objects while they move and deform. One of the fields where 4D systems prove useful is medicine, particularly orthopedics and the neural sciences, where measurement results may be used to estimate dynamic parameters of a patient's movement. Relatively new in 4D, optical full-field shape measurement systems capture more data than standard marker-based systems and open new ways for clinical diagnosis. However, before this is possible, appropriate 4D data processing and analysis methods need to be developed. We present a new data analysis path for 4D data input as well as new shape parameters describing local features of a surface. The developed shape parameters are easier and quicker to calculate than standard surface parameters, such as curvatures, but they give results that are very similar to the latter. The presented 4D data analysis path allows characteristic areas on the body, so-called anatomical landmarks, to be located and traced in time along the measurement sequence. We also present the general concepts and describe selected steps of the developed 4D data analysis path. The algorithms were implemented and tested on real and computer-generated data representing the surface of lower limbs. Finally, we give sample processing and analysis results.

1. Introduction

The measurement of human movement is very important in today's medicine. The introduction of 4D (3D+time) measurement systems for medical applications allows a patient's motion to be monitored and various dynamic parameters to be estimated. Nowadays, human movement measurement systems are widely used in many medical areas, including orthopedics, neurosciences, and sports medicine.1

Today's 4D measurement systems for medical purposes are mainly based on the use of retroreflective markers. These are usually circular or spherical objects attached to a patient's skin or clothes at points of high medical interest, including anatomical landmarks such as joints, vertebrae, etc. During the measurement, a patient performs defined movements, such as limb flexion/extension, walking, or cycling, in the system's calibrated measurement volume. The measurement volume is observed by a set of video cameras, whose sequences of images are used to calculate the markers' positions in time with the use of photogrammetric techniques. The markers' trajectories are then used to animate a virtual skeleton, and the analysis of the skeleton's movement results in the computation of the desired motion parameters.

This technique has been known and successfully used for more than a decade. It gives sufficient accuracy at very high measurement frequencies, even exceeding 200 frames per second (fps). Moreover, over the years, adequate marker arrangement techniques, as well as analysis and diagnostic software, have been developed, resulting in the successful utilization of 4D marker systems all over the world. The use of markers, which is the foundation of this method, implies some drawbacks, however:

  • 1. First, the acquired information refers only to the locations on the skin where markers are placed. No other parts of the body are measured. This limits the application of marker systems to issues for which the tracing of markers is sufficient. Measuring other parameters, such as the arm volume or the area of the calf cross section, is not possible with such a system.

  • 2. Second, as the markers are attached to skin or clothes, they move together with them and very often slip over the underlying anatomical structures to which the markers were meant to point. According to research, these so-called soft tissue artifacts may reach up to 40 mm.2

  • 3. Finally, fixing markers to a patient’s body should be done by a qualified person. However, even for a good specialist it is very unlikely that markers would be fixed in exactly the same places during a number of measurements conducted over intervals of a few weeks. Fortunately, novel 4D measurement systems that do not have these weaknesses are beginning to appear.

2. Markerless 4D Measurement Systems

The recent development of digital-optical instrumentation and computer hardware has made it possible to extend optical full-field measurements to objects whose shape and location change in time. 4D systems of this kind are mainly based on laser triangulation or fringe projection.3, 4, 5, 6 The use of the markerless technique automatically avoids the problem of markers slipping over bones. However, the main advantage of optical full-field measurement systems over marker-based systems is that for each time instant (hereafter called a time frame or frame), a 3D data set representing the whole measured surface of the object is produced, usually in the form of a point cloud. State-of-the-art marker-based systems can trace up to 100 markers attached to a patient's skin; markerless systems can give more than 1 million points per time frame. Proper analysis of this amount of data could result in much more accurate location and tracing of anatomical landmarks, as well as a more dependable clinical diagnosis, than marker-based equipment provides.

However, before such a diagnosis is possible, appropriate 4D data processing and analysis methods need to be developed. The measurement method is simply too new and there are no ready-for-use algorithms that would allow robust analysis of measurement results.

The methods and algorithms presented here were tested on computer-generated data, as well as on real experimental data acquired by a 4D measurement system developed in the AURORA project.6 It utilizes four modules, which measure a patient from four directions (Fig. 1). Each module is a measurement system based on laser triangulation. It consists of a projection unit (a laser scanner) and a detection unit (a CCD camera). During the measurement, the laser scanners project horizontal lines onto the object while the patient performs certain moves, e.g., leg flexion/extension. The detectors capture a time sequence of images showing the lines projected onto the patient's lower limbs at a rate of 15 fps. Based on the raster deformation, the shape of the legs' surface is calculated. In order to avoid detecting lines projected by another unit, adjacent modules project alternately.

Fig. 1

(a) A visualization of a commercial version of the 4D measurement system developed in the AURORA project. (b) Laser lines projected onto the patient’s legs.


3. 4D Data Processing and Analysis: The Concept

We present here processing and analysis algorithms for 4D data sets from optical full-field 4D measurement systems for medical purposes.6 The particular aim of our proposed method is to find and trace in time the location of certain anatomical landmarks that are detectable on the body's surface, which will enable the development of a kinematic model of humans. At present, we focus on the lower limbs only and on the most clearly visible structures, such as the patella, the dorsal part of the knee, the malleolus lateralis and malleolus medialis, the heel, and the toes, but future analysis will be extended to other anatomical structures of high medical interest.

In order to process 4D data for use in computer-aided clinical diagnosis, a dedicated software processing and analysis path has been developed (Fig. 2 ).7

Fig. 2

4D data processing and analysis path for monitoring anatomical landmarks.


Data from the measurement system are delivered as a time sequence of point clouds: one point cloud is generated per frame by each directional measurement module. Each directional point cloud is called a 2.5D point cloud, as it can be represented as a function of two parameters, e.g., z=z(x,y). The first operation is the preliminary processing of point clouds. Directional point clouds are resampled if needed and merged to create a combined full 3D point cloud representing the whole surface of the lower limbs in every time frame. Next, feature searching takes place. In this step, maps of full 3D surface parameters are calculated, regions of interest are selected, point groups connected with desired anatomical landmarks are discriminated, and their parameters are calculated. Then the groups that are of interest to the operator are traced in time. The trajectories of the traced groups are analyzed by an external biomechanical module to calculate the parameters of a lower-limb skeleton model. The data generated by the biomechanical module may also be used for the automatic recognition of anatomical landmarks. Finally, the skeleton parameters may be used to support the clinician in medical diagnosis.

The next section describes the numerical methods we developed.

4. 4D Data Processing and Analysis Algorithms

The proposed 4D data processing and analysis path separates operations conducted on each frame individually from operations conducted on the time sequence of data. The purpose of this separation is to minimize the propagation of possible processing errors from one frame to another. In addition, and conveniently, a 4D data set processed frame by frame may be regarded as a series of 3D data sets, which allows the use of processing methods developed for 3D data. However, it should be remembered that for 4D data sets, the entire amount of collected data is usually many times larger than for a 3D data set. If the frame rate is 15 fps and the measurement takes 60 s, then 900 time frames are captured, meaning 900 times more data is collected. This also means that the required processing would take 900 times longer. Nevertheless, the use of 4D measurements for medical applications sets a requirement for a relatively short data processing and analysis time, such as a few minutes, as clinicians wish to have results while the patient is still in the consulting room. Thus, some of the methods used for 3D data processing may not be sufficient for the processing of 4D data sets. This section presents the algorithms used for 4D data processing, starting from the raw point clouds generated by the measurement system and ending with point groups traced along the whole measurement sequence. The data sets used as examples originate from two sources: real data come from a single directional measurement module based on laser triangulation, whereas full-surface point clouds were generated using computer graphics software.

4.1. Individual Frame Analysis

As stated above, every time frame is first processed and analyzed individually. Only when the feature vector has been filled with calculated parameter values in each time frame is the series of these vectors regarded as a time sequence of data. Frame processing begins with preprocessing, which consists of noise reduction and resampling. These operations are conducted for every directional point cloud separately. Then the directional point clouds are merged into one combined point cloud, and surface shape parameters are calculated. The next step is the selection of regions of interest based on the shape parameters' maps. The last operations are the discrimination of point groups and the calculation of their parameters.

4.1.1. Noise reduction

Every measurement method gives results burdened with noise. Noise may propagate along the data processing path and degrade the final results; therefore, the removal of measurement noise is desirable. The real measurement data in our project come from a laser-based measurement system. The laser projects a set of parallel horizontal lines onto a patient's skin. As human legs have a certain shape continuity, it is safe to assume that measurement noise may be removed by locally fitting a parabola to points belonging to a single line of the laser's raster. Of course, the neighborhood used for parabola fitting has to be small enough, e.g., only a few points to the left and right, so as not to change the general shape of the curve. The results of the noise reduction algorithm are shown in Fig. 3.
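As an illustration, the following is a minimal Python sketch of this local parabola fitting, assuming the points of one raster line are ordered along the horizontal axis and that only the z-coordinate is smoothed; the window size and all names are ours, not part of the original system.

```python
import numpy as np

def smooth_raster_line(line_points, half_window=3):
    """Reduce noise along one laser raster line by locally fitting a
    parabola (2nd-degree polynomial) around each point and replacing the
    point's z-value with the fitted value.

    line_points: (N, 3) array of XYZ points ordered along the line.
    half_window: number of neighbors taken on each side of a point.
    """
    smoothed = line_points.copy()
    n = len(line_points)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        x = line_points[lo:hi, 0]
        z = line_points[lo:hi, 2]
        if hi - lo >= 3:  # a parabola needs at least three points
            coeffs = np.polyfit(x, z, 2)  # fit z = a*x**2 + b*x + c
            smoothed[i, 2] = np.polyval(coeffs, line_points[i, 0])
    return smoothed
```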

Fig. 3

Reduction of noise in measurement data with local parabola fitting. (a) A sample point cloud with two lines of points selected; (b) a close-up of the selected points before noise reduction; (c) a close-up of the same lines after noise reduction.


4.1.2. Spatial resampling

Spatial resampling is used to acquire the desired data density of a point cloud. It is especially useful in the preprocessing of data acquired by a laser-based system, where points are arranged along raster lines and the data density differs greatly between the horizontal and vertical directions. Spatial resampling allows a point cloud of uniform data density to be obtained, which enables better surface analysis and visualization.

Because point clouds resulting from a single measurement module are only 2.5D, where z=z(x,y), it is possible to adapt resampling methods from image processing, and we have exploited this possibility. In our resampling algorithm, the z-value for a given (x,y) pair is calculated as a weighted average of the z-values of the four closest points, with the weight of each neighbor inversely proportional to the square of its distance to the considered point in the xy-plane:

Eq. 1

$$z = \frac{\sum_{i=1}^{4} \frac{z_i}{d_i^2}}{\sum_{i=1}^{4} \frac{1}{d_i^2}},$$
where $z$ is the z-coordinate of the new point, $z_i$ is the z-coordinate of the ith neighbor, and $d_i$ is the distance of the ith neighbor to the considered point in the xy-plane. A sample point cloud before and after spatial resampling is shown in Fig. 4.
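A minimal sketch of this interpolation step follows (Python/NumPy). The brute-force neighbor search and all names are illustrative; a real implementation would query a spatial index over a regular resampling grid.

```python
import numpy as np

def resample_z(query_xy, cloud):
    """Interpolate z at a new (x, y) grid node following Eq. (1): the four
    closest cloud points vote with weights inversely proportional to the
    square of their xy-plane distance to the query point.

    query_xy: (2,) array with the (x, y) of the new point.
    cloud: (N, 3) array of measured points from one directional module.
    """
    d = np.linalg.norm(cloud[:, :2] - query_xy, axis=1)
    nearest = np.argsort(d)[:4]              # indices of the 4 closest points
    if np.isclose(d[nearest[0]], 0.0):       # query coincides with a sample
        return float(cloud[nearest[0], 2])
    w = 1.0 / d[nearest] ** 2                # inverse-square-distance weights
    return float(np.sum(w * cloud[nearest, 2]) / np.sum(w))
```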

Fig. 4

Sample point cloud before and after the spatial resampling process. (a) A sample point cloud with some points selected; (b) a close-up of the selected points before spatial resampling; (c) a close-up of the same fragment of surface after spatial resampling.


When all directional point clouds in a time frame have been resampled, they are merged. During this operation, each directional point cloud is translated and rotated according to multidirectional calibration matrices that describe the location of measurement modules in space. The result is a full 3D combined point cloud. From this point, all operations are conducted on full 3D merged point clouds.

4.1.3. Calculation of surface shape parameters

This section describes the algorithms used for searching for possible anatomical landmarks on the surface of a patient's lower limbs. We present the main idea and explain our new parameters that describe the shape of a surface.

The method of searching for anatomical landmarks used with a shape measurement system has to be based on parameters that in some way describe the character of the surface at every considered point. In particular, these parameters must discriminate among different surface types and be invariant to an object's orientation in space. Algorithms for calculating these parameters should be relatively fast and numerically stable. There is a mathematical apparatus well suited to this purpose: differential geometry introduces curvatures, a set of measures representing the amount by which a geometric object deviates from being flat. Curvatures (the most popular for surface analysis are the mean and Gaussian curvatures) are a very convenient tool for surface analysis. They allow discrimination among various types of surfaces: ellipsoids, paraboloids, hyperboloids, and planes. They also indicate whether the surface is convex or concave in the neighborhood of a considered point. Another good feature of curvatures is that their values do not depend on the object's orientation in the coordinate system. The problem with using curvatures is that they are calculated from a parametric form of the analyzed surface. Robust algorithms exist that estimate curvature values for an ordered data set, such as a triangle mesh.8 For unordered data, such as combined point clouds, a parametric patch, such as a quadric patch, needs to be fitted to the considered fragment of the point cloud before curvatures can be calculated. Unfortunately, for full 3D point clouds, fitting quadric patches is a time-consuming operation: every surface patch to be fitted requires solving sets of equations with many parameters (10 for a quadric patch), and for the most reliable results, these patches should be calculated for every point in the 3D cloud. Moreover, some additional issues related to discrete mathematics make calculating mean and Gaussian curvatures inconvenient for full 3D point clouds.9 This has been our motivation to develop custom surface shape parameters that would have the same functionality for surface analysis as curvatures, yet would require less computational power and could be applied to full 3D point clouds. As a result, two parameters have been developed, C1 and C2. The first parameter describes the convexity or concavity at the considered point, whereas the second determines whether the curvature at this point is directional or not.

Briefly speaking, the C1 parameter describes how much the surface in the neighborhood of the considered point deviates from a plane. The value of C1 is calculated as follows. For a given point, a spherical neighborhood of a preselected radius is taken, typically 10–15 mm. For all points in the sphere, hereafter called neighbors, a best-fit plane (BFP) is calculated. The BFP is the plane for which the sum of the squared distances of all neighbors to the plane is lowest:

Eq. 2

$$\sum_{i=1}^{N} d_i^2 = \min,$$
where $N$ is the number of neighbors and $d_i$ is the distance of the ith neighbor to the plane, calculated as

Eq. 3

$$d_i = \frac{\left| A x_i + B y_i + C z_i + D \right|}{\sqrt{A^2 + B^2 + C^2}},$$
where $x_i$, $y_i$, $z_i$ are the ith point's coordinates, and $A$, $B$, $C$, $D$ are the parameters of the plane to which the distance is calculated. The resulting best-fit plane parameters are $(A_{\mathrm{BFP}}, B_{\mathrm{BFP}}, C_{\mathrm{BFP}}, D_{\mathrm{BFP}})$.

The C1 value is calculated as a weighted average of signed distances of each neighbor to the BFP:

Eq. 4

$$C_1 = \frac{\sum_{i=1}^{N} d_i w_i}{\sum_{i=1}^{N} w_i},$$
where $N$ is the number of neighbors, $d_i$ is the ith neighbor's distance to the BFP, and $w_i$ is the weight of the ith neighbor. The distance $d_i$ is calculated with a modified version of the point-to-plane distance formula (3): omitting the absolute value operator in the standard formula's numerator allows it to give both positive and negative values. A positive distance value is assumed when the point lies in the half-space pointed to by the surface's normal vector; if the surface's normal vector is not given, the BFP's normal vector is used. The weight of each neighbor is related to its distance to the considered point: the greater the distance, the lower the weight, with the weight decreasing with distance according to a Gaussian curve. The C1 values are positive for convex areas, negative for concave areas, and zero for planes (Fig. 5).
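The following Python sketch condenses Eqs. (2) to (4). It estimates the BFP by a principal component analysis, which is equivalent to the least-squares plane fit, and uses the BFP normal to sign the distances; the radius and the Gaussian width are illustrative values of our choosing.

```python
import numpy as np

def c1_parameter(point, cloud, radius=10.0, sigma=5.0):
    """Estimate the C1 convexity/concavity parameter at one point.

    point: (3,) query point; cloud: (N, 3) full combined point cloud.
    radius: spherical neighborhood radius (mm); sigma: width of the
    Gaussian weighting curve. Positive C1: convex; negative: concave.
    """
    dist = np.linalg.norm(cloud - point, axis=1)
    mask = dist < radius
    nbrs = cloud[mask]                       # spherical neighborhood
    centroid = nbrs.mean(axis=0)
    # Best-fit plane: its normal is the direction of least variance,
    # i.e., the last right-singular vector of the centered neighbors.
    _, _, vt = np.linalg.svd(nbrs - centroid)
    normal = vt[-1]
    # The SVD normal has an arbitrary sign; the paper orients it with the
    # surface normal when available, which we leave to the caller here.
    signed = (nbrs - centroid) @ normal      # signed point-to-plane distances
    w = np.exp(-dist[mask] ** 2 / (2.0 * sigma ** 2))  # Gaussian weights
    return float(np.sum(signed * w) / np.sum(w))
```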

Fig. 5

Distribution of the C1 parameter over the lower limb’s surface. (a) Real measurement data; (b) computer-generated data. The red color represents areas of positive parameter values, the green color represents areas of parameter values close to zero, and the blue color represents areas of negative parameter values. (Color online only.)


Because the C1 parameter alone was not sufficient to describe the local surface shape for anatomical landmark detection, an additional parameter was developed. The C2 parameter describes the distribution of normal vectors in the neighborhood of the considered point in a way that distinguishes areas of unidirectional curvature (e.g., cylindrical areas) from areas of omnidirectional curvature (e.g., spherical areas). The C2 value is calculated as follows (Fig. 6). The surface normal vectors of all neighbors are translated to start at the origin of the coordinate system [Figs. 6b and 6e]. There the vectors create a structure resembling a bouquet for a sphere, a fan for a cylinder, or something in between. The next step is the calculation of the BFPnorm, a plane crossing the origin of the coordinate system and fit to the vectors' ends [Figs. 6c and 6f]. The value of the C2 parameter is calculated as the average distance of the vectors' ends to the BFPnorm, using the standard point-to-plane formula, which gives only nonnegative values:

Eq. 5

$$C_2 = \frac{\sum_{i=1}^{N} (d_{\mathrm{norm}})_i}{N},$$
where $(d_{\mathrm{norm}})_i$ is the distance of the ith normal vector's end to the BFPnorm and $N$ is the number of vectors, which equals the number of neighbors.

Fig. 6

The idea behind the C2 parameter calculation. (a) Normal vectors placed on a sphere; (b) normal vectors from a sphere translated to start at the origin of the coordinate system; (c) normal vectors from a sphere and their BFPnorm plane; (d) normal vectors placed on a cylinder; (e) normal vectors from a cylinder translated to start at the origin of the coordinate system; (f) normal vectors from a cylinder and their BFPnorm plane.


The same length is assumed for all normal vectors in order to maintain the consistency of C2 values across all points in the cloud. The C2 values are zero for planes and cylindrical surfaces and positive for other surface types. The highest values of C2 are obtained for spheres and saddle shapes (Fig. 7).
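A Python sketch of the C2 computation follows. Fitting the BFPnorm plane through the origin reduces to a singular value decomposition of the stacked unit normals, with the plane normal given by the least-significant singular vector; the function name is ours.

```python
import numpy as np

def c2_parameter(normals):
    """Estimate the C2 parameter from the unit surface normals of a
    neighborhood: translate the normals to the origin, fit a plane through
    the origin to their tips (BFPnorm), and average the tip-to-plane
    distances. Zero for planes and cylinders (fan-shaped bundle of
    normals), positive for spheres and saddles (bouquet-shaped bundle).

    normals: (N, 3) array of unit normal vectors.
    """
    _, _, vt = np.linalg.svd(normals, full_matrices=False)
    plane_normal = vt[-1]                        # normal of BFPnorm
    distances = np.abs(normals @ plane_normal)   # nonnegative tip distances
    return float(distances.mean())
```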

Fig. 7

Distribution of C2 over the lower limb’s surface. (a) Real measurement data; (b) computer-generated data. The red color represents areas of high parameter values, and the green color represents areas of low (close to zero) parameter values. (Color online only.)


The analysis of the distribution of the C1 and C2 values allows discrimination of various surface types in a way similar to that used with mean (H) and Gaussian (K) curvatures (Table 1 ).

Table 1

Mean (H) and Gaussian (K) curvatures and the C1 and C2 parameters for sample surface types.

Surface type       | H          | K  | C1         | C2
-------------------|------------|----|------------|----
Plane              | 0          | 0  | 0          | 0
Parabolic convex   | >0         | 0  | >0         | 0
Parabolic concave  | <0         | 0  | <0         | 0
Elliptic convex    | >0         | >0 | >0         | >0
Elliptic concave   | <0         | >0 | <0         | >0
Hyperbolic         | <0, =0, >0 | <0 | <0, =0, >0 | >0

A comparison of the distribution maps of the standard and custom shape parameters is shown in Fig. 8. It shows that both describe the surface similarly: distribution maps of the C1 parameter are qualitatively similar to maps of the mean curvature (H), and distribution maps of the C2 parameter are qualitatively similar to maps of the absolute value of the Gaussian curvature (K).

Fig. 8

Comparison of standard and custom shape parameters for different neighborhood radii (computer-generated data).


Although the custom shape parameters give results similar to the well-known standard parameters, there is a reason for applying the former: the computation time. The overall computation time of shape parameter distribution maps for a given point cloud depends on two factors:

  • the radius length of the neighborhood needed by the parameters used to produce maps that allow discrimination of desired landmarks,

  • the time needed to calculate the values of these parameters for the neighborhood of given radius length.

The length of the neighborhood radius is directly connected to the number of points belonging to the neighborhood. Obviously, the more neighbors there are, the longer the calculations take. Therefore, the shape parameter that allows detection of the required landmarks using a shorter neighborhood radius is preferable from this point of view.

The two left columns in Fig. 8 present distribution maps of the curvatures H and K calculated for a radius of 15 mm and of the parameters C1 and C2 calculated for a radius of 10 mm. It turns out that these two data sets can lead to the detection of similar areas of interest [Figs. 9a and 9b]. The two right columns in Fig. 8 present distribution maps of the curvatures H and K calculated for a radius of 20 mm and of the parameters C1 and C2 calculated for a radius of 15 mm. In this case as well, the two data sets may be used to produce similar results [Figs. 9c and 9d]. It is also noticeable that the resulting arrangement of regions of interest changes its character from two vertically oriented groups in the upper part of the knee to one oriented horizontally. This change of arrangement takes place at a radius of 18 mm for the standard curvatures and 13 mm for the custom parameters C1 and C2. This seems to prove that the use of C1 and C2 at shorter radii can give analysis results similar to those obtained with mean and Gaussian curvatures calculated at longer neighborhood radii.

Fig. 9

Sample regions of interest selected for (a) curvatures H and K at 15 mm; (b) parameters C1 and C2 at 10 mm; (c) curvatures H and K at 20 mm; (d) parameters C1 and C2 at 15 mm. The criterion of detection of a region of interest was experimentally defined as follows: a region of interest is assumed detectable if the ratio of the lowest value in the ROI to the medium value in the ROI is higher than 1.50.


The second factor mentioned was the time needed to calculate the values of the shape parameters for a neighborhood of a given radius length. The C1 and C2 parameters are calculated with simple algorithms requiring much less computational power than is required to calculate maps of mean and Gaussian curvatures. A comparison of the times needed to calculate the distribution maps of C1 and C2 as well as of H and K for a sample point cloud is presented in Table 2. The comparison shows that calculating the maps of C1 and C2 with neighborhood radii of 10–15 mm, which are the most useful for the analysis of lower limbs, requires less than 10% of the time needed to calculate the maps of mean and Gaussian curvatures for the same radius.

Table 2

Comparison of the time needed to calculate the distribution maps of C1 and C2 as well as of H and K for a sample point cloud. The analyzed point cloud consists of 72,015 points, and its average density is 25 points/cm². The computer used for the comparison has a 2.0-GHz Intel Core 2 Duo CPU and 2.0 GB of 667-MHz RAM.

Neighborhood radius (mm)      | 5    | 10   | 15    | 20
Calculation time T_C1+C2 (s)  | 1.1  | 3.7  | 8.2   | 16.3
Calculation time T_H+K (s)    | 14.2 | 45.7 | 103.0 | 230.9

The use of the C1 and C2 parameters for point cloud analysis shortens the calculation time in two ways: a smaller neighborhood can be used for calculating each point's parameters, and the formulas are simpler than those used for standard curvature computation. The results presented in Table 2 show that the use of the C1 and C2 parameters may shorten the processing time by more than a factor of 20. Given that the calculation of shape parameters takes more than 95% of the total computation time in the presented analysis method, using C1 and C2 saves a significant amount of time.

4.1.4. Selecting regions of interest

The distribution maps of C1 and C2 are used to find areas connected with anatomical landmarks whose shapes are distinguishable on the lower limbs' surface. Most anatomical landmarks, such as the patella, malleoli, and heel, are convex elliptical areas with high C1 and C2 values. Some landmarks, such as the dorsal part of the knee, are saddle-shaped, with C1 close to zero and C2 high. The rest of the leg surface usually has a cylindrical character, with positive C1 and C2 values close to zero. Checking the C1 and C2 values of every point in the cloud allows the operator to select areas of possible interest (Fig. 10). The area selection algorithm uses standard binarization; the C1 and C2 threshold values depend on the type of surface to be selected and vary for different anatomical landmarks. A minimal sketch of this thresholding is given after Fig. 10.

Fig. 10

Sample regions of interest for real data. (a) C1 distribution map; (b) C2 distribution map; (c) areas of interest selected for C1 > 0.25 and C2 > 0.30 (colored white).

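The binarization itself is a one-line operation; the sketch below uses the threshold values from Fig. 10, which in practice would be tuned per landmark, and the names are ours.

```python
import numpy as np

def select_points(c1_map, c2_map, c1_threshold=0.25, c2_threshold=0.30):
    """Binarize per-point shape parameter maps into a selection mask.

    c1_map, c2_map: (N,) arrays with the C1 and C2 values of each point.
    The default thresholds follow the Fig. 10 example; real thresholds
    depend on the anatomical landmark being sought.
    """
    c1 = np.asarray(c1_map)
    c2 = np.asarray(c2_map)
    return (c1 > c1_threshold) & (c2 > c2_threshold)
```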

Because the binarization process is conducted for each point individually, regardless of its neighborhood, the selected areas have rough borders, and sometimes very small areas containing only a few points not connected to any anatomical landmark are selected. In order to smooth the borders of the selected areas, remove very small areas, and break thin connections between areas, erosion and dilation were applied. Both of these techniques were adapted from 2D image processing, where erosion followed by dilation is called an opening operation.

In erosion, all selected pixels that lie closer than a given radius to the selection area border are deselected. In our method, the erosion algorithm works as follows. For each selected point, that is, a point belonging to one of the selected areas, a spherical neighborhood of a given radius is taken. For all neighbors, the ratio of the number of selected points to the number of all neighbors is calculated. If the ratio is lower than a given threshold, the point is assumed to belong to the borderline and is deselected. The two parameters that are used with erosion of the selection area are the erosion radius and the threshold ratio.

In dilation, all unselected pixels that lie closer than a given radius to the selection area border are selected. In our method, the dilation algorithm works as follows. For each unselected point, a spherical neighborhood of a given radius is taken. For all neighbors, the ratio of the number of selected points to the number of all neighbors is calculated. If the ratio is higher than a given threshold, the point is assumed to be close to the borderline and is selected. The two parameters used with dilation of the selection area are the dilation radius and the threshold ratio.
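Both morphological operations can be sketched in Python as follows, using a k-d tree for the spherical neighborhood queries; the radii and threshold ratios are free parameters, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def erode(points, selected, radius, threshold):
    """Deselect selected points whose spherical neighborhood contains too
    low a ratio of selected points, i.e., points near the selection border.

    points: (N, 3) cloud; selected: (N,) boolean mask.
    """
    tree = cKDTree(points)
    result = selected.copy()
    for i in np.where(selected)[0]:
        nbrs = tree.query_ball_point(points[i], radius)
        if selected[nbrs].mean() < threshold:   # ratio of selected neighbors
            result[i] = False
    return result

def dilate(points, selected, radius, threshold):
    """Select unselected points whose spherical neighborhood contains a
    high enough ratio of selected points, i.e., points near the border."""
    tree = cKDTree(points)
    result = selected.copy()
    for i in np.where(~selected)[0]:
        nbrs = tree.query_ball_point(points[i], radius)
        if selected[nbrs].mean() > threshold:
            result[i] = True
    return result
```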

Binarization followed by erosion and dilation results in smooth-edged selected areas resembling the desired anatomical landmarks (Fig. 11). At this point, however, the selected areas are not yet distinguishable from one another; the selected points need to be divided into labeled point groups. This is achieved with another algorithm adapted from 2D image processing: segmentation. As a result of segmentation, all selected pixels arranged in a consistent group are assigned a label that is unique in the picture domain; no two groups share the same label. When adapted to 3D point clouds, the requirement of pixel adjacency is replaced with the condition that two selected points are assigned the same label if the distance between them is lower than or equal to a given threshold value (Fig. 12).
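A sketch of this distance-based segmentation, implemented as a flood fill over a k-d tree (assumed names; the distance threshold is a free parameter):

```python
import numpy as np
from scipy.spatial import cKDTree

def segment(points, selected, max_dist):
    """Label consistent groups of selected points: two selected points get
    the same label if they lie within max_dist of each other (the 3D
    counterpart of pixel adjacency). Unselected points are labeled -1.
    """
    labels = np.full(len(points), -1, dtype=int)
    idx = np.where(selected)[0]               # indices of selected points
    tree = cKDTree(points[idx])
    next_label = 0
    for seed in range(len(idx)):
        if labels[idx[seed]] != -1:
            continue                          # already assigned to a group
        labels[idx[seed]] = next_label
        stack = [seed]                        # flood fill one group
        while stack:
            j = stack.pop()
            for k in tree.query_ball_point(points[idx[j]], max_dist):
                if labels[idx[k]] == -1:
                    labels[idx[k]] = next_label
                    stack.append(k)
        next_label += 1
    return labels
```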

Fig. 11

Sample regions of interest smoothed by erosion and dilation. (a) Raw areas; (b) eroded areas (they are smaller and some very small areas were completely deselected); (c) dilated areas (enlarged and having smooth borders).


Fig. 12

Point groups created by segmentation of selected points. (a) Selected areas of interest; (b) groups created from the areas (the contrast of C1 distribution map has been lowered so as not to interfere with the groups).


When the undistinguishable selected points have been transformed into labeled point groups, a feature vector is created for each point group. It contains parameters such as

  • center of mass,

  • normal vector,

  • number of points belonging to the group,

  • parameters of the group’s long axis,

  • group length,

  • group width,

  • group bounding box.

The calculation of each point group’s parameters is the last step in single time-frame processing.

4.2. Analysis of a Time Sequence of Frames

After all frames have been processed individually, they are ready to be analyzed as a sequence in time. First, the operator points to the groups of interest. Then the selected groups are traced in time through the whole measurement sequence. Finally, the trajectories of the groups and the changes of their parameters in time can be exported for further analysis by other biomechanical software.

4.2.1. Tracing point groups

Point groups related to anatomical landmarks are traced. At this stage of system development, the operator points to the point groups based on his or her a priori knowledge; it is planned that an automatic landmark recognition module will handle this in the future. The algorithm of the group tracing module assumes that anatomical landmarks are detectable in the whole measurement sequence, or at least in large parts of it. For each anatomical landmark, tracing starts with the frame where a point group was indicated and is conducted for consecutive frames. The group in the following frame corresponding to the group in the current frame is called its successor. Searching for a successor in the frame following the current frame involves looking for the group whose center of mass is closest to that of the original group in the current frame. This simple technique is aided by motion prediction (Fig. 13), which enables tracing of relatively fast-moving groups, such as the malleoli, toes, heels, etc. Using this technique, the successor is searched for in the segment of space where the original group's center of mass would be if it continued moving with the same velocity it had between the previous frame and the current one. Obviously, motion prediction can be performed only if the coordinates of the group in the current and previous time frames are known. It cannot be used to find the displacement between the first two frames, i.e., the frame where the tracing is started and the next frame, which need not be the first two frames of the measurement sequence. Thus, tracing can only begin from a frame where the considered point group is moving very little or not at all. A sketch of this matching step is given after Fig. 13.

Fig. 13

The idea of motion prediction. The standing leg represents data in the previous frame. The half-raised leg represents data in the current frame. The fully raised leg represents data in the next frame. The gray arrow represents the known displacement of the point group referring to the toes between the current and the previous time frames. The region where the toes will be searched for in the next frame is represented by the black circle. Its center is calculated by extrapolation of the previous-to-current-frame displacement vector. The extrapolated vector is represented by the black arrow.

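The matching step sketched below mirrors Fig. 13: the center of mass is extrapolated with constant velocity, and the nearest group within a search radius is accepted as the successor. The search radius and all names are illustrative.

```python
import numpy as np

def find_successor(prev_center, curr_center, next_frame_centers, search_radius):
    """Find the successor of a traced group in the next frame.

    The group's center of mass is extrapolated with constant velocity,
    curr + (curr - prev) (the black arrow in Fig. 13), and the group in
    the next frame closest to the predicted position is chosen, provided
    it lies within search_radius (the black circle in Fig. 13).

    Returns the index of the successor group, or None if no group lies
    close enough to the predicted position.
    """
    predicted = curr_center + (curr_center - prev_center)
    dists = np.linalg.norm(np.asarray(next_frame_centers) - predicted, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= search_radius else None
```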

Analogously to forward tracing, the presented method can also be used for backward tracing, that is, tracing point groups in frames prior to the one where tracing started. This feature lets the examiner choose the point group in any frame of the sequence; the group is then traced symmetrically forward and backward through the whole measurement sequence.

5. Sample Results

The developed processing algorithms were tested preliminarily on data captured during real measurements as well as data generated with computer graphics software.

Figures 14 and 15 show the results of applying the developed algorithms to real measurement data. The aim was to trace the groups related to the patellae. The input data were acquired with the presented 4D measurement system while the subject bent the knees from 0 to 15 degrees while standing. The captured point clouds were processed in accordance with the presented algorithms in order to find the areas related to the patellae. The groups were then automatically traced along the measurement sequence, producing the trajectories shown in Fig. 14.

Fig. 14

Sample trajectories (thick dark blue lines) of groups related to the patellae, calculated for data from a real measurement. (Color online only.)


Fig. 15

Displacement of the groups (along the x -, y -, and z -axes) traced in Fig. 14.


The plots in Fig. 15 present the displacement of the traced groups along the x-axis (axis horizontalis, marked in the figure with red diamonds), y-axis (axis verticalis, marked with green squares), and z-axis (axis sagittalis, marked with blue triangles). The analysis of the plots seems to confirm the consistency between the tracing results and reality. In the first phase of crouching, in order to prevent the person from falling backward, the quadriceps contract, pulling on the patellae. That is why the patellae move approximately 10 mm upward (green squares in Fig. 15) but very little forward (blue triangles in Fig. 15) during the first 10 frames of the analyzed sequence. In the subsequent frames, the patellae quickly move forward and descend, corresponding to the phase when the leg is actually being bent.

The developed algorithms were also tested on a full 3D model of lower limbs created with computer graphics software. The input data represented a subject raising and bending her right leg. Trajectories of groups related to a knee, malleoli, toes, and a heel calculated for computer-simulated data are depicted in Fig. 16 . In this case, only a qualitative visual estimation can be performed, as the utilized model is not anatomically correct.

Fig. 16

Sample trajectories of groups related to a knee, malleoli, toes, and a heel, calculated for computer-simulated data.


6. Estimation of Accuracy

The accuracy of markerless tracing is a problem comprising several issues. The most important are

  • the accuracy of the measurement system (the 4D scanner),

  • the density of point clouds resulting from the measurement,

  • the accuracy of the selection of the region of interest related to the desired anatomical structure,

  • the traceability of the selected region of interest in time,

  • issues related to body mechanics and figure.

The first two factors depend directly on the 4D scanner and influence all other parts of the data processing. The laser-based measurement system whose data were used in this paper produced point clouds with a single-point accuracy of 0.8 mm. The accuracy tests were conducted with the use of a white 20-cm × 20-cm plane and a white ball with a 40-mm radius. The measurement error was higher in areas whose surface shape caused the laser beam to form a wider line on the surface of the object. Such areas were the top and bottom parts of the ball when the raster lines were projected horizontally. This is also the reason why, for scanning lower limbs, the laser's raster should be formed of horizontal lines [Fig. 1b].

The density of the point cloud resulting from the measurement determines the size of the structures that can be traced. The laser scanner used produced data of high density in the horizontal direction and low density in the vertical direction: 1 point per mm and 1 line per 10 mm, respectively. This resulted in the need for spatial resampling of the point cloud and in different analysis accuracies in the horizontal and vertical directions.

The tracing accuracy tests yielded a tracing accuracy of 0.5 mm in the horizontal direction and 1.2 mm in the vertical direction. The tests used a white 40-mm-radius ball carried by a robotic arm over 500 mm at a speed of 10 mm per second in various directions in the measurement volume.

We realize that the presented test results concern only the optomechanical factors of the total tracing accuracy. Estimating the biomedical accuracy is another complex task requiring much work in both the engineering and medical areas. Initial experiments have been conducted in this field; they suggest that for nonobese patients, locating certain anatomical landmarks, specifically the tuberositas tibiae, with the use of the C1 and C2 parameters is as accurate as palpation.10

7. Weaknesses and Anticipated Problems

The primary weakness of markerless tracing stems from the fact that the anatomical structures of interest to examiners are not always detectable on the skin's surface. First, only shallow subcutaneous structures may be detectable on the skin's surface. Second, even if a structure is detectable in a given body position, it may become untraceable when the patient moves. Joints are especially prone to this because of their high mobility, which often results in a change in the local shape type of the related skin area. This weakness strongly limits the anatomical structures that may be traced and the range of traceable movements.

Another problem is the differences between individual patients, including differences in the musculoskeletal system and in the amount of fatty tissue and loose skin. These interindividual differences require an individual analysis of each patient. Obesity or folds of loose skin decrease the tracing accuracy or may even render tracing impossible.

The last, but not least, technical issue is the huge amount of data captured during every measurement. Such data have to be processed as quickly as possible in order to return the analysis results to the examiner. Although the introduction of the C1 and C2 parameters shortens the computation time, it still remains very long. For example, a 60-second-long frame sequence captured at 15 fps needs about one hour for the shape parameters to be calculated. Fortunately, further code optimization may be applied and may further shorten the computation time.

8. Conclusion

We believe that the introduction of 4D measurement systems based on optical full-field measurement may help people suffering from neural and musculoskeletal diseases as well as various injuries. They can be used for prophylactic examinations as well as for treatment/rehabilitation monitoring. The measurement is quite an inexpensive process, so it can be performed on large numbers of people. Moreover, no harmful or possibly harmful radiation is emitted during the measurements, so they can be repeated on a short-term basis.

The custom shape parameters C1 and C2 presented here allow an analysis of full 3D surfaces very similar to that provided by the mean and Gaussian curvatures. However, the calculation time for C1 and C2 is many times shorter than that for the curvatures, which is particularly important in medical practice.

Due to the novelty of the presented systems, the research covered here refers to basic problems of processing and analysis of 4D measurement data for medical applications. The mentioned issues need to be investigated and solved before more advanced questions can be considered. We believe that our research can be a basis for other teams working on this topic and hope that our common effort will contribute to patients’ overall welfare.

9. Future Work

Although much work has been done, there is still a lot to do. We are continuously working on developing new parameters of surface shape that would expand the functionality of C1 and C2 or replace them. Also, an automatic landmark detection module based on fuzzy logic is under development. If the operation of all processing modules gives good results, the programming code will need to be further optimized in order to get the results in the shortest time possible. Finally, many measurements using the four-directional system (which currently is in its last phase of development) have to be conducted and the obtained results have to be compared with other medical imaging techniques, such as CT or MRI.

Acknowledgments

The work described herein has been financially supported by the Ministry of Science and Higher Education of Poland (contract 0596/T02/2007/321) and the European Union Framework Programme 6 (AURORA project, contract COOP-CT-2003-508203). The authors would like to thank Walter Rapp, Catherine Hetzer, Jos Vander Sloten, Bart Haex, Nico Bogaert, Tom DeWilde, Kjell Heitmann, Małgorzata Kujawińska, Marcin Kowalski, and Marcin Bączyk for fruitful discussions.

References

1. M. E. Steiner, "Dynamic lateral patellar tilt in the anterior cruciate ligament-deficient knee: a magnetic resonance imaging analysis," Am. J. Sports Med. 29(5), 593–599 (2001).

2. U. Della Croce, "Soft tissue artifacts in human movement analysis," (2006).

3. K. Kraus, Photogrammetry: Fundamentals and Standard Processes, Vol. 1, Dümmler, Bonn, Germany (1993).

4. W. Frobin and E. Hierholzer, "Rasterstereography: a photogrammetric method for measurement of body surfaces," Photogramm. Eng. Remote Sens. 47(12), 1717–1724 (1981).

5. R. Sitnik, M. Kujawińska, and J. Woźnicki, "Digital fringe projection system for large-volume 360° shape measurement," Opt. Eng. 41(2), 443–449 (2002). https://doi.org/10.1117/1.1430422

6. FP6 co-operative research project AURORA: Contact-free Dynamical Volumetric Measurements of Lower Body with Functional Clinical and Diagnostic Capacity, contract COOP-CT-2003-508203.

7. M. Witkowski, R. Sitnik, M. Kujawińska, W. Rapp, M. Kowalski, B. Haex, and S. Mooshake, "4D measurement system for automatic location of anatomical structures," Proc. SPIE 6191, 131–141 (2006).

8. A. Razdan and M. Bae, "Curvature estimation scheme for triangle meshes using biquadratic Bézier patches," Comput.-Aided Des. 37(14), 1481–1491 (2005).

9. I. Douros and B. Buxton, "Three-dimensional surface curvature estimation using quadric surface patches," (2002).

10. C. Hetzer, W. Rapp, M. Witkowski, R. Sitnik, B. Haex, N. Bogaert, J. Vander Sloten, and T. Horstmann, "Evaluation of newly developed algorithms for calculating anatomical landmarks from surface contours on the lower extremities," J. Biomech. 39(Suppl 1), S575 (2006).
© 2008 Society of Photo-Optical Instrumentation Engineers (SPIE)
Robert Sitnik and Marcin Witkowski, "Locating and tracing of anatomical landmarks based on full-field four-dimensional measurement of human body surface," Journal of Biomedical Optics 13(4), 044039 (1 July 2008). https://doi.org/10.1117/1.2960017