## 1. Introduction

All range images begin with a series of range measurements, and the quality of the range image depends on the quality of each of those measurements. The quality of a range measurement depends on measurement uncertainty and measurement resolution; however, spatial uncertainty is also strongly affected by environmental factors, such as return signal intensity and the relationship to a measurement’s immediate neighbors. One or more of these factors can be expressed as a metric representing the deviation of some quality attribute associated with the measurement from a predefined standard. Spatial measurement quality represents the degree of confidence one can place in how accurately a measurement represents the position of a real surface in the environment. Laser range scanners can also provide intensity information that may be used in representing the surface, so quality attributes relating to return signal intensity are useful as well. In this paper, contemporary approaches to evaluating measurement quality attributes are reviewed, including measurement uncertainty, return signal intensity, range, sampling density, and relationship to neighboring points. This review focuses particularly on measurement quality metrics for ground-based laser range scanners that can be adapted for automated systems. Within this context, measurement quality metrics provide a way to direct and terminate automated scanning procedures.

Perceptual quality metrics can be either objectively or subjectively defined;^{1} however, only objective quality metrics are useful for automating data acquisition. For this reason, only objective quality metrics are considered here. Objective quality metrics can be further classified as referenced or unreferenced.^{1} Unreferenced quality metrics use no benchmark; thus, automated processes can only evaluate the change in a quality attribute in response to some action. Referenced quality metrics can be evaluated in the same manner as unreferenced quality metrics, but the size of the deviation of an attribute from a reference can be used to evaluate whether the measurement should be either retained or ignored. For this reason, referenced quality metrics are preferred for automated systems in which thousands, or even millions, of measurements may be obtained.

Referenced range measurement quality metrics quantify the relationship of a quality attribute to some previously established benchmark or reference. These metrics can then be used to either compare methods or systems, or they can be used in an iterative process to maximize some qualitative attribute of a range image.^{2} Quality metrics appear most often in the guise of a weighting parameter when merging measurements or data sets. Two important components of a referenced quality metric are a clearly defined quality benchmark against which to compare the current state of the range image, and a quality scale to indicate the degree to which the range measurement quality attribute deviates from the benchmark.

In this paper, metrics for quantifying the quality of measurements are reviewed. For purposes of discussion, these metrics have been classified as measurement uncertainty based, signal intensity based, range based, and neighborhood based. As will be demonstrated, considerable work remains to ensure that the quality of measurements and points used to construct virtual models is effectively and comprehensively defined.

## 2. Spatial Resolution

The spatial resolution of a laser range scanner measurement is dependent on the size of the laser spot that illuminates the surface at the point the measurement is obtained. For pulsed laser systems, the spatial resolution is also dependent on the pulse length of the system. The spatial resolution can be divided into range resolution and angular resolution. Angular resolution is the minimum angular distance between features such that they can be resolved as separate features. Range resolution is the minimum radial distance between angularly resolved features such that they can be distinguished as separate features.^{3} The angular resolution of a laser range scanner is defined by the Rayleigh criterion^{4} and represents the size of the smallest feature that can be angularly resolved.^{3, 5}

The laser projects a spot onto the surface being scanned, and the region in which the surface intersects the laser spot is referred to as the beam footprint. Features within the footprint contribute to the return signal intensity, which is used to obtain the spatial measurement that approximates the position of a portion of the surface.^{6} The area covered by the beam footprint is generally not measured by laser range scanners; thus, it is approximated by a model of the area of the laser spot that illuminates the surface. Ideally, the laser spot area should be the same as the beam footprint area; however, environmental factors, such as spatial discontinuities^{7} or dense fog,^{8} can result in the beam footprint deviating from that predicted by the laser spot model. Moreover, if the surface normal is assumed to be oriented along the line of sight in the laser spot model, then surface angulation can result in a discrepancy between actual and predicted beam footprint areas. Quality metrics provide a way to predict by how much the beam footprint of a measurement might deviate from that predicted by the laser spot model.

The spatial resolution of a measurement can be represented by the instantaneous resolution, which assumes the footprint is stationary at the time the measurement is acquired, or the effective resolution, which takes into account the motion of the footprint over the surface during the acquisition process. When the term resolution is used in this paper, unless otherwise stated, it refers to the instantaneous resolution. The terms footprint and laser spot are also used interchangeably in this paper, although they are strictly equivalent only when the surface is continuous within the laser spot.

## 3. Measurement Uncertainty-Based Metrics

Measurement uncertainty is the most common attribute used to assess measurement quality. Range measurement uncertainty is generally modeled as an independent zero-mean Gaussian process added to the quantity returned by the range sensor; that is,

## Eq. 1

$$\widehat{\mathbf{x}}=\mathbf{x}+\mathbf{e},$$

where $\mathbf{x}$ is the ground truth position or surface characteristic, $\widehat{\mathbf{x}}$ is the quantity returned by the sensor, and $\mathbf{e}\sim \mathcal{N}(0,\Sigma )$ is the additive zero-mean Gaussian noise process with measurement covariance $\Sigma $ . This may not always be a valid assumption; environmental effects and nonlinear bias in the sensors may cause the observed measurement distribution to become distinctly non-Gaussian. In practice, Gaussian models simplify mathematical analyses and approximate how a system should behave under a broad range of circumstances. Non-Gaussian models are highly situation dependent and are therefore rarely used for predicting measurement uncertainty.

The uncertainty associated with the range sensor is referred to here as the radial error and is one attribute that can be used to evaluate measurement quality. Uncertainty in rotational or translational position is referred to here generically as positional error and provides further attributes that can be employed to evaluate the quality of a measurement. Figure 1 shows one example of a triangulation laser range scanner system in which the angular position of the laser spot on a surface in the environment is controlled by two rotating mirrors. Similar dual-axis optical scanning configurations are used in time-of-flight (TOF) systems and other laser range systems by combining orthogonal galvanometers, rotating mirrors, or motors. As a result, the geometrical model and measurement uncertainties can be generalized to a variety of laser range scanning systems.
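As a minimal illustration of Eq. 1, the following Python sketch simulates the additive Gaussian model and verifies that the sample statistics recover $\mathbf{x}$ and $\Sigma $. All numeric values are illustrative assumptions, not parameters of any cited scanner.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Ground-truth position x (meters) and an assumed measurement covariance Sigma.
x_true = np.array([2.0, 0.5, 1.5])
sigma = np.diag([0.002**2, 0.001**2, 0.001**2])  # per-axis variances

# x_hat = x + e, with e ~ N(0, Sigma): the additive zero-mean Gaussian model.
e = rng.multivariate_normal(mean=np.zeros(3), cov=sigma, size=10_000)
x_hat = x_true + e

# The sample mean should recover x and the sample covariance should
# approximate Sigma if the model assumptions hold.
print(x_hat.mean(axis=0))           # ~ [2.0, 0.5, 1.5]
print(np.cov(x_hat, rowvar=False))  # ~ sigma
```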

### 3.1. Measurement Uncertainty

Measurement uncertainty is represented by a covariance matrix, generally based on a model of the root-mean-square (rms) sensor error along each axis of motion employed by the scanner and on a model of the error associated with the range sensor. Sensor variance is often based on a model of the sensor error, rather than on the spread of repeated measurements acquired *in situ*, because it is often not practical to obtain a large enough set of repeated measurements to derive a situation-specific variance profile. These models are generally obtained under ideal conditions for specific materials and surface orientations. As a result, there can be a significant discrepancy between the model sensor variance and what would be observed using a repeated measures approach in the field. For example, if the variance model of a system was based on white cardboard, then the model variance would significantly underestimate the variance resulting from black felt.^{9} This can be a significant issue where the type of material being scanned cannot be known *a priori* or where the object being scanned may consist of multiple types of material. In general, measurement uncertainty cannot be considered a sufficient quality metric on its own because it depends heavily on a variety of other attributes. In the following sections, various attributes that can result in true measurement uncertainty deviating from model-based measurement uncertainty are identified.

### 3.2. Positional Uncertainty

Assuming a fixed-viewpoint scanner, such as the one shown in Fig. 1, the positional uncertainty is a function of the mechanisms used to control the orientation of the laser and the photosensor.^{10} These mechanisms are typically precision galvanometers or rotating motors, and the positional uncertainty reflects the variation in real laser/sensor orientations when the galvanometer or motor indicates that it has achieved a given angular position. In the case of fixed pattern projection systems, positioning error is often determined by the stability of the optomechanical system. The acquisition of range and angular position measurements is generally synchronized, but synchronization errors, or jitter, can result in the true angular position differing from the angular position at the instant the range measurement is acquired.^{11}

Although the laser is often modeled as originating either from the scanner viewpoint or from a fixed point near the viewpoint, its true origin may vary depending on the scanner geometry.^{12} Well-calibrated laser range scanner systems account for this complexity; however, the transformation between sensor data and spherical or Cartesian coordinates can introduce errors.^{13, 14} As a result, rotational uncertainty may not be constant, as is often assumed. A similar situation arises for laser range scanner systems using motor-controlled rotating bases. Thermal effects, wobble and jitter, and mirror nonplanarity can also cause the final reflection point position and output orientation to deviate from a Gaussian distribution.

### 3.3. Radial Uncertainty

Range measurement uncertainty depends on how the interaction of the laser with the surface is measured. In TOF systems, the range is determined by the time between the pulse being generated and being detected. In triangulation systems, the range measurement depends on the position of the signal peak on a photodetector array. In both cases, a significant portion of the range measurement uncertainty is the ambiguity of the location of the signal peak.

Range uncertainty is typically assumed constant for TOF scanners, as shown in Fig. 2 . Specifically,

## Eq. 2

$${\sigma}_{R}=\frac{c}{2}{\sigma}_{\tau},$$

where ${\sigma}_{R}$ is the range measurement error, $c$ is the speed of light, and ${\sigma}_{\tau}$ is the time measurement error. The last term represents the uncertainty in the temporal location of the signal peak. This is found by

## Eq. 3

$${\sigma}_{\tau}=\frac{{T}_{r}}{\mathrm{SNR}},$$

where ${T}_{r}$ is the pulse rise time and SNR is the signal-to-noise ratio.^{15, 16} The range measurement error is determined by the signal bandwidth,^{15} amplitude of the return signal,^{17} thermal drift,^{17, 18} crosstalk between the transmitter and receiver,^{18} timing jitter,^{19} and nonuniformities and changes in the returning signal shape.^{15, 18, 19} For example, different surface materials can change the shape of the return signal, resulting in significantly different error distributions.^{10} Moreover, feedback within the sensor can result in a measurement being affected by the previous measurement, violating the assumption that there is no correlation among range measurements.

Laser motion while the signal is being emitted is negligible for pulsed TOF systems because the pulse duration is so short, but it can affect continuously modulated laser systems. This motion can distort the return signal and introduce ambiguity into the true measurement. Consider, as well, that the range measurement equation is given by

## Eq. 4

$$R=\frac{c\tau}{2},$$

where $\tau $ is the propagation delay.^{16, 20, 21} This assumes that the TOF between the laser and the surface is equal to the TOF between the surface and the sensor. This may not always be the case, especially if the true origin of the laser pulse varies as a function of the mirror angles.
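The following sketch ties Eqs. 2–4 together numerically. The rise time and SNR values are illustrative assumptions chosen only to show the scale of the result.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_range(tau_s: float) -> float:
    """Range from round-trip propagation delay, per Eq. 4: R = c*tau/2."""
    return 0.5 * C * tau_s

def range_sigma(rise_time_s: float, snr: float) -> float:
    """Range uncertainty from peak-timing uncertainty, per Eqs. 2 and 3:
    sigma_tau ~= T_r / SNR, so sigma_R = (c/2) * sigma_tau."""
    sigma_tau = rise_time_s / snr
    return 0.5 * C * sigma_tau

# Illustrative numbers (not from the paper): a 1 ns rise time and an SNR
# of 100 give roughly 1.5 mm of range uncertainty.
print(tof_range(66.7e-9))        # ~10 m from a 66.7 ns round trip
print(range_sigma(1e-9, 100.0))  # ~1.5e-3 m
```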

The peak uncertainty of a triangulation scanner is typically dominated by speckle noise.^{22} This can be modeled as^{23}

## Eq. 5

$${\sigma}_{a}=\frac{\lambda f}{D\phantom{\rule{0.2em}{0ex}}\mathrm{cos}\left(\beta \right)\sqrt{2\pi}}.$$

Speckle noise arises when speckle elements on the surface illuminated by the laser spot are large when compared to the wavelength of the laser light.^{22, 24} Under this assumption, each speckle element becomes a point emitter with respect to the photodetector array. Interference patterns are generated when each speckle element reflects light from the laser onto the photodetector array,^{22, 25} as shown in Fig. 3 . There, they constructively and destructively interact to form a speckle image on the photodetector array.^{22, 24, 25}

Speckle noise is generally countered by integrating a single measurement over several intensity samples as the laser spot is moved over the surface being scanned.^{26} Figure 4 illustrates the reduction in speckle after integration. This is complicated by the need to minimize aliasing by ensuring that the measurements are, where possible, separated by a distance less than the radius of the laser spot (Fig. 4).^{7} Similarly, the range uncertainty in an amplitude-modulation continuous-wave scan can be decreased by increasing the sampling rate.^{27}
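A rough sketch of this integration effect follows, under the simplifying assumption that successive speckle perturbations are independent and Gaussian; real speckle is correlated across overlapping spots, so the actual reduction is smaller.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

sigma_speckle = 0.05   # single-sample peak-position noise (arbitrary units)
n_samples = 16         # intensity samples integrated per measurement

# Single-sample estimates vs. estimates averaged over n_samples intensity
# samples; under the independence assumption the spread shrinks by sqrt(n).
single = rng.normal(0.0, sigma_speckle, size=100_000)
integrated = rng.normal(0.0, sigma_speckle, size=(100_000, n_samples)).mean(axis=1)

print(single.std())      # ~0.050
print(integrated.std())  # ~0.0125 = 0.05 / sqrt(16)
```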

### 3.4. Environmental Effects

The mechanical effects described in Sec. 3.3 can be included in a model of expected range and rotational uncertainty; however, many environmental factors, summarized in Table 1, can cause the true measurement uncertainty to deviate from the model. For example, measurement uncertainty can increase with increasing incidence angle,^{28, 29, 30, 31} a reduction in surface reflectivity,^{10, 32} and an increase in ambient lighting.^{6, 33}

## Table 1

Environmental factors affecting measurement uncertainty.

| Error Source | Effect |
|---|---|
| Range | Range uncertainty generally increases with range |
| Angle of incidence | Range uncertainty increases with increased angle of incidence |
| Surface material | Translucent nonhomogeneous materials increase range uncertainty |
| Surface complexity | Surface discontinuities introduce range errors |
| Reflectivity | Range uncertainty increases with a decrease in reflectivity |
| Ambient lighting | Range uncertainty increases with an increase in ambient lighting |

Equation 5 assumes that the size of the spot projected onto the photodetector array has not been distorted by occlusion, surface orientation, or other environmental effects. Figure 5 shows the effect of laser spot distortion arising from a surface discontinuity.^{7} In this case, the discontinuity occludes part of the laser spot so that the spot centroid no longer coincides with the signal peak. This introduces an error into the horizontal location of the signal peak, denoted here as $\Delta x$. This results in a range error $\Delta z$, which is compounded by the surface orientation with respect to the direction of the laser. The deviation of the surface normal from the laser path is denoted here as $\gamma $. Sudden changes in surface height are not uncommon and represent a reduction in measurement quality that is not captured by model-based measurement uncertainty.

Different surface materials can also affect the accuracy of range measurements. Figure 6 demonstrates the effect of a partially translucent material, such as marble, in which the laser may penetrate part way into the surface before sufficient light is reflected to estimate the distance to the surface.^{7} In this case, the range measurement does not represent the surface of the material, and the actual range measurement obtained depends on the reflective and refractive qualities of the material. According to Beraldin et al.,^{34} translucent surfaces like marble change the shape of the laser spot on the photodetector array of a triangulation scanner, resulting in the range estimate being in error. As well, the nonhomogeneity of the material increases the range measurement uncertainty.^{16} Translucent nonhomogeneous materials can also exhibit a greater measurement uncertainty as well as a bias that increases with the distance between the scanner and the surface.^{35}

Surface complexity is not limited to variations in the height and frequency of surface structures; transitions between areas of different surface reflectivity can affect the accuracy of a range measurement,^{7, 36} as illustrated in Fig. 7. Different materials with different reflectivity properties can also generate very different range measurement uncertainties.^{10} The change in reflectivity for different portions of the laser spot results in a shift in the signal peak that introduces an error into both the range measurement and return signal intensity, a topic discussed in Sec. 4. Moreover, a reduction in surface reflectivity can result in an increase in range measurement uncertainty.^{33}

Increasing the surface orientation with respect to the line of sight of the scanner can result in an elongation of the laser spot, which increases peak detection uncertainty.^{31} This problem is most pronounced when the length of the baseline is significant with respect to the distance to the surface, as is the case with triangulation laser range scanners, even when operating in the far field. Moreover, increased surface orientation with respect to the line of projection of the laser increases the spot size on the surface, resulting in more speckle elements contributing to the spot projected onto the photodetector array. Because the range uncertainty of triangulation laser range scanners is dependent on the surface orientation, model-based range uncertainty is not sufficient to represent the quality of a range measurement.

### 3.5. Measurement Uncertainty as a Quality Metric

Measurement spatial uncertainty has often been used as a way to quantify the quality of the measurement. For example, Sequeira et al.^{37, 38} and Sequeira and Goncalves^{39} used range sensor uncertainty as part of a reliability metric generated from the weighted sum of measurement attributes. They recognized that spatial uncertainty is not a sufficient metric and therefore combined it with other measurement quality metrics. The combining of quality metrics to generate a more holistic view of measurement quality is discussed in Sec. 7. Some range sensors, such as the triangulation scanner shown in Fig. 2, have range measurement variance that increases with the square of the distance between the scanner and the surface.^{16, 20, 23, 40, 41} In this case, using range sensor uncertainty as a quality metric means that measurements closer to the scanner are considered to be of higher quality.

If the measurements are being merged using a modified Kalman minimum variance estimator (MKMV) approach,^{42, 43} then the measurement variance becomes a function of the number of measurements that are merged to form a point in a virtual model. Moreover, the merged measurements could be obtained from different viewpoints; thus, range measurement uncertainty alone is insufficient as a quality metric. To counter this problem, the covariance matrix may be used as a multidimensional quality metric. For example, using the MKMV approach, two measurements ${\widehat{\mathbf{x}}}_{i}$ and ${\widehat{\mathbf{x}}}_{j}$ are merged to form a point $\mathbf{x}$ in the virtual model. The point is generated using the weighted sum

## Eq. 6

$$\mathbf{x}={\Sigma}_{j}{({\Sigma}_{i}+{\Sigma}_{j})}^{-1}{\widehat{\mathbf{x}}}_{i}+{\Sigma}_{i}{({\Sigma}_{i}+{\Sigma}_{j})}^{-1}{\widehat{\mathbf{x}}}_{j}.$$

One drawback of Sequeira et al.’s weighting method is that it was only applied to radial uncertainty. Table 1 illustrates the reasoning behind considering only radial uncertainty: it is the attribute that is generally affected by environmental factors. In their case, the metric was only applied to range images and not to the merged data; thus, this approach was sufficient for the purpose for which it was designed. Rotational uncertainty could be assumed constant and, thus, ignored. The method, however, is not generalizable to data merged using the MKMV method. Consider that the covariance of $\mathbf{x}$ is found by

## Eq. 7

$$\Sigma ={\Sigma}_{i}{({\Sigma}_{i}+{\Sigma}_{j})}^{-1}{\Sigma}_{j};$$

thus, the radial and rotational uncertainties of $\Sigma $ are less than the radial and rotational uncertainties of either ${\Sigma}_{i}$ or ${\Sigma}_{j}$ . If only the radial uncertainty is considered, then the reduction in rotational uncertainty is never taken into account. Similar issues arise when combining data from multiple types of scanners, each of which may have different radial and rotational uncertainties.

The MKMV weighting factors, although effective quality metrics for measurement merger, are less effective for representing the quality of the measurement from the perspective of spatial measurement uncertainty. Ideally, an uncertainty metric should represent the uncertainty of a measurement as a scalar value so that the relative quality of measurements can be compared along a single axis rather than within a multidimensional space. On the other hand, reducing a multidimensional parameter to a single dimension risks losing potentially important information; therefore, the unidimensional representation must be chosen carefully.
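A sketch of the covariance-weighted merge of Eqs. 6 and 7 follows. It implements the standard minimum-variance form discussed above (the modified estimator of Refs. 42, 43 may differ in detail), with illustrative diagonal covariances.

```python
import numpy as np

def merge(x_i, cov_i, x_j, cov_j):
    """Minimum-variance merge of two measurements (Eqs. 6 and 7);
    assumes both covariances are symmetric positive definite."""
    s = np.linalg.inv(cov_i + cov_j)
    x = cov_j @ s @ x_i + cov_i @ s @ x_j    # Eq. 6: weighted sum
    cov = cov_i @ s @ cov_j                  # Eq. 7: merged covariance
    return x, cov

x_i = np.array([1.00, 2.00, 3.00])
x_j = np.array([1.01, 1.99, 3.02])
cov_i = np.diag([1e-4, 4e-4, 9e-4])   # illustrative covariances
cov_j = np.diag([4e-4, 1e-4, 1e-4])

x, cov = merge(x_i, cov_i, x_j, cov_j)
# Both radial and rotational variances of the merged point are smaller
# than those of either input, which is why a radial-only metric misses
# part of the quality improvement.
print(np.diag(cov) < np.minimum(np.diag(cov_i), np.diag(cov_j)))  # all True
```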

Although the covariance matrix approach addresses the issue of ignoring potentially valuable information in the position uncertainty attribute, it does not address the issue of surface complexity and orientation increasing the effective measurement uncertainty above the level predicted by the model. Range uncertainty and even measurement covariance are useful quality metrics, but they are not sufficient by themselves. In particular, metrics evaluating surface spatial complexity, surface orientation, and changes in surface reflectivity need to be examined to augment measurement spatial uncertainty as a quality metric.

## 4. Signal Intensity-Based Metrics

It was noted in Sec. 3.4 that a decrease in surface reflectivity can result in an increase in measurement uncertainty. Surface reflectivity can be assessed by examining how the intensity of the received signal varies from what would be expected for a surface of known reflectivity; however, signal intensity measurements can vary significantly as a result of such factors as range,^{32} high incidence angles,^{28, 31, 32} low reflectivity,^{10, 18} atmospheric attenuation,^{44} sharp discontinuities,^{16, 45} and translucency of the material being scanned.^{16, 40} For example, the return signal intensity decreases with an increase in angle of incidence and decreases with an increase in distance between the scanner and the surface when the transmitted signal power remains constant.^{32} As a result, quality metrics provide a way to predict the extent to which the actual reflectivity of a surface might deviate from that predicted from the return signal intensity.

Figure 5 illustrated that surface discontinuities can result in range errors; however, a change in the shape of the signal intensity profile in a triangulation laser range scanner can also result in a reduction in return signal intensity. When the shape of the peak is sufficiently distorted, as is the case with mixed measurements, it becomes difficult to locate its centroid. Laser spots that cross edges can result in smeared or multiple return signals that produce ambiguous range measurements, which is referred to as mixed measurement error.^{32, 46} Mixed measurements are a result of receiving reflected energy from two surfaces within the laser spot and are often interpreted as a range measurement somewhere between the two surfaces.^{6, 33, 47} Hebert and Krotkov^{6} referred to the interdependence of measured range with signal intensity as range/intensity crosstalk. TOF systems calculate range by comparing the return signal to the transmitted signal and are thus more sensitive to signal intensity changes. Figure 8 shows that a discontinuity in surface reflectivity can also reduce the return signal intensity.^{7} As a result, quality metrics provide a way to predict the extent to which the spatial position of the measurement might be in error as a result of the return signal intensity deviating from that predicted using a model of the laser range scanner optics.

Some surfaces may be difficult, if not impossible, to scan because the return signal is diffusely scattered, which is referred to as volumetric scattering.^{46, 48} Surfaces that exhibit this property include glass, hair,^{46} and grass.^{48} Figure 6 illustrates that translucent materials can also reduce the strength of the return signal.^{7} Other surfaces are so absorbent that the signal is of insufficient intensity to obtain a range measurement, while still others may be so highly reflective that the photodetector is saturated.^{46} The absence of a return signal, referred to as a nonreturn measurement, can be a valuable piece of information but is almost always discarded.

Given a reference material, the change in return signal intensity can be modeled as a function of range. A shift in the return signal intensity from the model value can then be used as a metric of the quality of a measurement; a simple sketch of such a metric follows Table 2. Measurement spatial uncertainty is also affected by return signal intensity; thus, both variables are important in assessing measurement quality, and neither is sufficient by itself. Moreover, signal intensity shifts can indicate the presence of mixed pixels and surface material transitions, either of which may introduce errors into the range measurement. The challenge is in determining the cause of the intensity shift, given that only the spatial position and the deviation in signal intensity from a model value are known. Deviations from model return intensity can arise from several different environmental conditions; therefore, return intensity, even when combined with spatial position and model spatial uncertainty, is not sufficient to completely represent the quality of a measurement. Table 2 summarizes the factors that affect return signal intensity.

## Table 2

Environmental factors affecting return signal intensity.

| Error Source | Effect |
|---|---|
| Range | Return signal intensity decreases with an increase in range |
| Angle of incidence | Return signal intensity decreases with increased angle of incidence |
| Surface material | Translucent nonhomogeneous materials can reduce return signal intensity |
| Surface complexity | Surface discontinuities can reduce return signal intensity |
| Reflectivity changes | Return signal intensity decreases with a decrease in reflectivity |
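As a sketch of the model-deviation idea introduced before Table 2, the following compares a measured return intensity to a hypothetical reference model. The inverse-square falloff and all parameter values are assumptions for illustration, not a calibrated model from the cited literature.

```python
def expected_intensity(r_m: float, i_ref: float, r_ref: float) -> float:
    """Illustrative reference model (an assumption, not from the paper):
    return intensity from a reference material falling off as 1/r^2."""
    return i_ref * (r_ref / r_m) ** 2

def intensity_quality(i_measured: float, r_m: float,
                      i_ref: float = 1.0, r_ref: float = 1.0) -> float:
    """Quality in [0, 1]: 1 when the return matches the model for the
    reference material at that range, decreasing with the deviation."""
    i_model = expected_intensity(r_m, i_ref, r_ref)
    return max(0.0, 1.0 - abs(i_measured - i_model) / i_model)

print(intensity_quality(0.25, 2.0))  # matches the 1/r^2 model -> 1.0
print(intensity_quality(0.10, 2.0))  # weak return -> reduced quality (0.4)
```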

### 4.1. Intensity as a Quality Attribute

Signal intensity is rarely used as a quality metric; it is more often used as a weighting factor for combining measurements. For example, Godin et al.^{49} used the compatibility of signal intensities between correspondence pairs of measurements prior to iterative closest point (ICP) registration. Given two intensity measurements ${h}_{i}$ and ${h}_{j}$, the compatibility $C({h}_{i},{h}_{j})$ is found by comparing the two intensities.^{10} Here, ${h}_{i}$ and ${h}_{j}$ are quality attributes associated with measurements ${\widehat{\mathbf{x}}}_{i}$ and ${\widehat{\mathbf{x}}}_{j}$, respectively; however, this metric only assessed the quality of the association between two measurements, not the quality of each measurement. Fiocco et al.^{50} defined a reflectivity quality metric for each measurement. It took the form

## Eq. 11

$$\rho =\{\begin{array}{ll}1& {\rho}_{\mathrm{min}}\u2a7d{\rho}_{i}\u2a7d{\rho}_{\mathrm{max}}\\ 0& \text{otherwise},\end{array}\phantom{\}}$$

where ${\rho}_{i}$ is the measured reflectivity and ${\rho}_{\mathrm{min}}$ and ${\rho}_{\mathrm{max}}$ bound the acceptable reflectivity band. Sequeira et al.^{37, 38} simply applied a weighting factor to the detected signal intensity.

One drawback of Fiocco et al.’s method is that it employs a binary scale, which, while useful for the application for which it was designed, lacks the generalizability of a sliding scale. Sequeira et al.’s approach of using a weighting factor avoids this problem but does not address the issue of the ideal reflectivity changing with an increase in range. As with Fiocco et al.’s method, the weighted intensity approach used by Sequeira et al. was sufficient for the application for which it was designed but is not applicable to medium-range scanning without some modifications to take into account the relationship between range and return signal intensity. Fiocco et al. avoid this problem by using reflectivity, which is independent of range.
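The contrast between the two scales can be made concrete. Below, `fiocco_reflectivity` implements the binary band of Eq. 11, while `sliding_reflectivity` is a hypothetical sliding-scale alternative of the kind argued for above; the linear ramp width is an arbitrary illustrative choice, not a metric from the cited works.

```python
def fiocco_reflectivity(rho_i: float, rho_min: float, rho_max: float) -> float:
    """Binary metric of Eq. 11: 1 inside the acceptable reflectivity
    band, 0 outside."""
    return 1.0 if rho_min <= rho_i <= rho_max else 0.0

def sliding_reflectivity(rho_i: float, rho_min: float, rho_max: float) -> float:
    """Hypothetical sliding-scale alternative (not from the cited work):
    ramps linearly from the band edges to zero over one band width."""
    if rho_min <= rho_i <= rho_max:
        return 1.0
    band = rho_max - rho_min
    dist = (rho_min - rho_i) if rho_i < rho_min else (rho_i - rho_max)
    return max(0.0, 1.0 - dist / band)

print(fiocco_reflectivity(0.9, 0.3, 0.8))   # 0.0: rejected outright
print(sliding_reflectivity(0.9, 0.3, 0.8))  # 0.8: just outside the band
```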

## 5. Range-Based Metrics

It was noted in Sec. 3.3 that measurement spatial uncertainty generally increases with increased range and, in Sec. 4, that return signal intensity generally decreases with increased range. The range measurement itself can therefore be used to represent the quality of a measurement. For example, Sequeira et al.^{37, 38} and Fiocco et al.^{50} each used the range portion of the measurement as part of their reliability metrics. Figure 9 graphically demonstrates how the quality of a measurement decreases as the distance between the scanner and the surface that generated the measurement increases.

In general, the farther a surface is from the scanner, the larger the area encompassed within the laser spot. The size of the spot projected onto a surface is represented by the beam width at the point of intersection. The beam width depends on the distribution of irradiance, which is often assumed to follow a Gaussian distribution. Specifically,

## Eq. 12

$$I(r,\zeta )={I}_{c}\phantom{\rule{0.2em}{0ex}}\mathrm{exp}(-\frac{2{r}^{2}}{w{\left(\zeta \right)}^{2}}),$$

where ${I}_{c}$ is the irradiance at the center of the beam, $r$ is the radial distance from the central axis, and $w\left(\zeta \right)$ is the spot radius at a distance $\zeta $ along the beam axis.^{51, 52} Figure 10 shows the irradiance profile centered on the central axis and the spot size $w\left(\zeta \right)$ as a function of distance from the beam waist.

The surface formed by $w\left(\zeta \right)$ represents the distance $r$ from the central axis at which the beam irradiance falls to $1\u2215{e}^{2}=0.135$ of its central value. As a result, the volume bounded by $w\left(\zeta \right)$ represents the region within which 86.5% of the beam irradiance is contained.^{51, 52, 53} The laser spot defined in this way represents the portion of the surface being scanned from which most of the laser irradiance is being reflected. As a result, the laser spot represents the smallest region that can be resolved by the laser range scanner.

The boundary of $w\left(\zeta \right)$ can be approximated by the hyperbolic equation

## Eq. 13

$${\left(\frac{w\left(\zeta \right)}{w\left(0\right)}\right)}^{2}-{\left(\frac{\zeta}{{\zeta}_{0}}\right)}^{2}=1,$$

where $w\left(0\right)$ is the beam waist radius and ${\zeta}_{0}$ is the Rayleigh range.^{52} The focal length also represents the distance from the lens to the beam waist.
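A small sketch of Eq. 13 follows, rearranged to give the spot radius directly; the waist radius and Rayleigh range values are illustrative assumptions.

```python
import math

def spot_radius(zeta_m: float, w0_m: float, zeta0_m: float) -> float:
    """Spot radius from the hyperbolic boundary of Eq. 13:
    (w/w0)^2 - (zeta/zeta0)^2 = 1  =>  w = w0*sqrt(1 + (zeta/zeta0)^2)."""
    return w0_m * math.sqrt(1.0 + (zeta_m / zeta0_m) ** 2)

# Illustrative beam: 0.5 mm waist radius with a 0.5 m Rayleigh range.
w0, zeta0 = 0.5e-3, 0.5
for zeta in (0.0, 0.5, 2.0, 10.0):
    # The spot (and hence the smallest resolvable feature) grows with
    # distance beyond the waist; far-field growth is nearly linear.
    print(f"zeta = {zeta:5.1f} m -> w = {spot_radius(zeta, w0, zeta0):.4f} m")
```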

Range can act as a proxy for the resolution of a measurement under the assumption that the focal length remains fixed and the surface is farther from the scanner than the beam waist. Under these conditions, measurements closer to the scanner can be considered to be of higher quality than those farther from the scanner. Although range is generally not referred to as an indicator of the quality of a measurement, this relationship is implied when the more distant of a pair of measurements is dropped as part of the registration process.

Fiocco et al.^{50} defined a distance quality metric based on the minimum and maximum range limits. In practice, a scanner is bounded by its minimum and maximum effective range, defined by a variety of factors, including the laser power, beam spread, and photodetector sensitivity. Sequeira et al.^{37, 38} simply applied a weighting factor to the range measurement to obtain a quality metric.

Only long-range scans (those for which $\zeta \u2aa2{\zeta}_{0}$, referred to as far-field measurements^{51}) are guaranteed to have a measurement resolution that decreases with range. Medium-range scanners may be used for surfaces that are at, or even less than, the distance to the beam waist. Surfaces that are closer than the beam waist have an inverse relationship between resolution and range, as shown in Fig. 10; in this case, measurement quality increases with distance. As a result, Fiocco et al.’s and Sequeira et al.’s methods are only applicable to the situation for which they were designed: laser range scanners in which the surface is farther from the scanner than the beam waist. For medium-range scanning, the surface may be placed such that it coincides as much as possible with the beam waist. A more general-purpose resolution-based quality metric should be applicable to both long- and medium-range scanner data, as well as data from scanners with multiple focal lengths. The use of laser spot size in assessing measurement quality will be addressed in Sec. 6.

## 6. Neighborhood-Based Metrics

Attributes such as surface orientation, or spatial or reflectivity discontinuities, cannot be determined from single measurements; they can only be inferred from groups of measurements located in close spatial proximity to each other. Spatially related measurements are referred to here as a neighborhood and are used to model a small portion of the surface being scanned to predict some aspect of that surface, such as its orientation. The class of neighborhood-based metrics encompasses all quality metrics defined by the neighborhood of a measurement.

Neighborhood-based quality metrics attempt to infer some aspect of a measurement from its relationship to its immediate neighbors. For purposes of discussion, a neighborhood is defined as a point $\widehat{\mathbf{p}}$ and the set of all points $P=\{{\widehat{\mathbf{p}}}_{0},\dots {\widehat{\mathbf{p}}}_{K}\}$ considered to be the immediate neighbors of $\widehat{\mathbf{p}}$ by some commonly accepted criterion. It is assumed here that this criterion is either the Euclidean or the rotational distance, although the discussion could apply to other distance metrics.

Two neighborhood-based quality metrics are considered: those based on interpoint distance and those based on vertex orientation with respect to the line of sight. The former is a measure of the density of the measurements in a neighborhood, which, in turn, indicates how finely the surface has been sampled. The latter is used to estimate the orientation of the surface at a spatial location of the measurement and is the most commonly used quality metric after measurement uncertainty.

Surface complexity can also be evaluated using edge-detection techniques. Specifically, spatial (illustrated in Fig. 5) and intensity (illustrated in Figs. 7 and 8) discontinuities result in range measurement errors, so measurements corresponding to discontinuities are of lower quality than measurements arising from surfaces without discontinuities. Edge detection, applied to spatial data, intensity data, or both, can be used to detect the presence of discontinuities, which are one type of surface complexity. A complete review of edge-detection techniques is, however, beyond the scope of this paper. For surveys on edge-detection techniques, see Argyle,^{54} Davis,^{55} Peli and Malah,^{56} Ziou and Tabone,^{57} Trichili et al.,^{58} Xiao et al.,^{59} and Basu.^{60}

### 6.1. Distance Metrics

Distance metrics are typically used to evaluate two attributes: the distance to neighboring points and the density of points in the neighborhood. The latter is referred to as sampling density, which is the number of measurements per unit area of the surface being modeled. Densely sampled surfaces have the greatest possibility of detecting important surface features that might be missed by sparser sampling methods. On the other hand, dense scanning techniques generate a large number of points, many of which may be redundant if the surface being scanned lacks significant surface features. With respect to quality, densely sampled surfaces, to within certain limits, have the greatest probability of generating high-quality models; thus, sampling density is a measure of the potential quality of the final model.

According to Shannon sampling theory, given a band-limited signal, the sampled signal will contain all the information in the band-limited signal only if the sampling frequency is more than twice the signal bandwidth.^{61} This is also known as the Shannon-Nyquist sampling theorem^{62} or simply the Nyquist sampling theorem.^{63} This means that the distance between samples must be less than half the smallest feature size resolvable to the scanner.^{63, 64} On the other hand, measurement quality does not improve in proportion to the amount by which the sampling rate exceeds the Nyquist rate;^{65} thus, the sampling rate is often defined to be only slightly higher than the Nyquist rate. The Nyquist rate, therefore, represents a quality breakpoint.

Shannon sampling requires a band-limited signal, and diffraction in the optical system ensures this by imposing a limit on the size of features that can be resolved. The Rayleigh criterion represents the resolution limit of the scanning system even if measurement noise were negligible.^{3, 4} Even for a perfectly focused, diffraction-limited optical system, laser physics still imposes a limit, given by the Rayleigh criterion, on the size of the feature that can be resolved. If $\Delta d$ represents the minimum distance between beam footprint peaks at which they can be separately resolved, then sampling at the Nyquist rate corresponds to an intersample distance $d\u2a7d\Delta d$. If $2\Delta d$ is large with respect to $d$, then fine details are blurred;^{3} however, if $d$ is too large, then fine details are missed. It is convenient, in the absence of other information about the system, to choose a sample spacing slightly less than the smallest beamwidth such that $d<\mathrm{min}\left\{w\left(\zeta \right)\right\}$ within the volume of interest. The goal of scanning a surface is to achieve an intersample surface distance $\u2a7d\Delta x$, given that the laser scanner is, under ideal conditions, unable to resolve features smaller than $2\Delta d$. Sampling density and intersample distance, therefore, are useful in assessing model quality.
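A hedged sketch of a spacing-based quality check built on these quantities follows; the linear penalty beyond $\Delta d$ is an illustrative choice, not a metric from the cited works.

```python
def sample_spacing_quality(d_m: float, delta_d_m: float) -> float:
    """Compare the intersample distance d to the minimum resolvable
    separation delta_d. Spacing at or below delta_d satisfies the
    Nyquist-style criterion for features of size 2*delta_d; larger
    spacings are penalized linearly (an illustrative choice)."""
    if d_m <= delta_d_m:
        return 1.0
    return max(0.0, 1.0 - (d_m - delta_d_m) / delta_d_m)

print(sample_spacing_quality(0.8e-3, 1.0e-3))  # finer than needed -> 1.0
print(sample_spacing_quality(1.5e-3, 1.0e-3))  # undersampled -> 0.5
```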

Klein and Sequeira^{66} and Klein and Zachmann^{67} compared the actual sample density $\beta \left(\mathbf{x}\right)$ to the expected sampling density $F(\mathbf{x},V,m)$. The comparison, capped by the maximum attainable density ${\beta}_{\mathrm{max}}\left(\mathbf{x}\right)$, was computed as

## Eq. 19

$$B(\mathbf{x},V,m)=\mathrm{min}[{\beta}_{\mathrm{max}}\left(\mathbf{x}\right),F(\mathbf{x},V,m)]-\mathrm{min}[{\beta}_{\mathrm{max}}\left(\mathbf{x}\right),\beta \left(\mathbf{x}\right)].$$

Fiocco et al.^{50} used a less complicated method for defining the density of a set of measurements than that proposed by Klein and Sequeira^{66} and Klein and Zachmann.^{67} They defined the density quality metric as

## Eq. 20

$$s=\{\begin{array}{cc}\frac{{s}_{\mathrm{max}}-{s}_{i}}{{s}_{\mathrm{max}}}& {s}_{i}\u2a7d{s}_{\mathrm{max}}\\ 0& \text{otherwise}.\end{array}\phantom{\}}$$

Sequeira et al.^{37, 38} used the weighted average distance between neighboring points as a quality metric.
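Equation 20 translates directly into code; here it is assumed that ${s}_{i}$ denotes the local point spacing and ${s}_{\mathrm{max}}$ the maximum acceptable spacing.

```python
def fiocco_density(s_i: float, s_max: float) -> float:
    """Density quality metric of Eq. 20: 1 for zero spacing, falling
    linearly to 0 at the maximum acceptable spacing s_max."""
    return (s_max - s_i) / s_max if s_i <= s_max else 0.0

print(fiocco_density(0.2, 1.0))  # 0.8: densely sampled
print(fiocco_density(1.5, 1.0))  # 0.0: too sparse
```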

One drawback of the quality metrics employed by Refs. 37, 38, 50, 66, 67 is that they ignore measurement spatial uncertainty, which also affects the resolution of the system.^{3, 68} In particular, spatial uncertainty makes it difficult to know, precisely, the extent of the region covered by each laser spot. Another drawback of these metrics is that they do not make clear whether quality is being assessed relative to the desired resolution $\Delta x$ or the attainable resolution $2\Delta d$. The former is generally constant, while the latter depends on surface orientation, the presence of spatial or reflectivity discontinuities, and the size of the laser spot illuminating the surface. In some cases, $\Delta x$ may not even be attainable for certain combinations of range and surface orientation.

### 6.2. Orientation Metrics

A typical approach to estimating the orientation of a measurement is to obtain a mesh model of the surface and use the normals of each of the mesh elements to estimate the normal of the surface at the measurement.^{22, 42, 69, 70, 71} Orientation is often represented by the surface normal, which is generally found by taking the average of the normals of all Delaunay facets that have this measurement as a vertex.^{42, 70} The exception is Hoppe et al.,^{72} who preferred to use the normal of a plane fit to the neighborhood of the measurement. The benchmark for the grazing angle attribute is the angle that generates the most accurate range measurement; that is, when the surface normal is oriented along the line between the surface and the scanner. Assuming the maximum grazing angle is one in which the surface normal is perpendicular to the line between the surface and the scanner, the scale of the grazing angle attribute is from 0 (best quality) to $\pi \u22152$ (worst quality) radians. This is often represented as the cosine of the grazing angle,^{30, 70} which has a range of 1 (best quality) to 0 (worst quality).

Often the deviation of the return signal intensity from the ideal Lambertian model is represented by the surface normal^{25, 42, 69, 70} or grazing angle.^{73} The reasoning is that the signal intensity decreases with increasing surface orientation; thus, surface orientation can be used as a proxy for signal intensity. However, return signal intensity is affected by all the factors summarized in Table 2. Therefore, this assumption is true only in the absence of other factors, such as surface spatial complexity and changes in surface reflectivity. Surface orientation also affects the uncertainty of range measurements,^{74} particularly for triangulation laser range scanners; thus, surface orientation as a metric can affect quality metrics for both spatial uncertainty and return signal intensity.

Fiocco et al.^{50} used the deviation of the line of sight to the scanner from the surface normal as a quality metric. The authors of Ref. 70 used the cosine of the grazing angle to weight measurements prior to ICP registration. Soucy and Laurendeau^{30} showed that the squared cosine of the grazing angle corresponds to the relative illuminance received by the photodetector. They used this metric to perform a weighted merge of measurements from different viewpoints such that

## Eq. 22

$$\mathbf{x}={\sum}_{i=1}^{N}\phantom{\rule{0.2em}{0ex}}{W}_{i}{\widehat{\mathbf{x}}}_{i},$$

where

## Eq. 23

$${W}_{i}=\frac{{\mathrm{cos}}^{2}\left({\gamma}_{i}\right)}{{\sum}_{j=1}^{N}\phantom{\rule{0.2em}{0ex}}{\mathrm{cos}}^{2}\left({\gamma}_{j}\right)}$$

and

## Eq. 24

$$\mathrm{cos}\left({\gamma}_{i}\right)=\frac{{\stackrel{\u20d7}{n}}_{i}^{T}{\widehat{\mathbf{x}}}_{i}}{{\widehat{R}}_{i}}.$$

A similar approach was employed in Ref. 22 to merge measurements that co-occupied the same voxel. Soucy and Laurendeau^{30} demonstrated that the reflectivity of the surface was directly proportional to the square of Eq. 24. Because measurement quality was expected to be directly proportional to the amount of light returned to the sensor, ${\mathrm{cos}}^{2}\left(\gamma \right)$ would better represent measurement quality than $\mathrm{cos}\left(\gamma \right)$; however, this was based on the assumption that the reflectivity change was primarily caused by high surface orientation. The relationship is less clear when the surface reflectivity is more complex.
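A sketch of the weighted merge of Eqs. 22–24 follows. The normals are assumed to be unit length and oriented so that their dot product with the line of sight is positive, and the sample points are illustrative.

```python
import numpy as np

def grazing_cos(normal, x_hat):
    """cos(gamma) per Eq. 24: the angle between the surface normal and
    the line of sight to the measurement (R_hat is the range)."""
    r_hat = np.linalg.norm(x_hat)
    return float(normal @ x_hat) / r_hat

def weighted_merge(points, normals):
    """Merge viewpoints with cos^2 weights per Eqs. 22 and 23; assumes
    the scanner sits at the origin of a common frame."""
    cos2 = np.array([grazing_cos(n, p) ** 2 for n, p in zip(normals, points)])
    w = cos2 / cos2.sum()                  # Eq. 23: normalized weights
    return (w[:, None] * points).sum(axis=0)  # Eq. 22: weighted sum

pts = np.array([[0.0, 0.0, 1.00], [0.0, 0.0, 1.01]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.7071, 0.7071]])  # 0 and 45 deg
# The measurement seen head-on (gamma = 0) dominates the merge.
print(weighted_merge(pts, nrm))
```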

Scott et al.^{73} suggested that basing quality solely on the grazing angle of a measurement ignores the objective effects of high grazing angle in favor of a more subjective metric. Surface orientation, in particular, ignores factors that affect the shape and peak height of the intensity profile, such as surface reflectivity changes. Moreover, the surface normal is the average of the orientations along each Delaunay edge extending from a point. As a result, it is possible to have a wide range of vertex normals but a surface normal oriented along the line of sight. Finally, for systems in which the baseline is not insignificant with respect to range, the line of sight could be defined with respect to the photodetector, the laser, or the scanner origin, each yielding a different result. As a result, surface orientation is important but insufficient as a quality metric.

An alternative to grazing angle for representing surface orientation of a range image obtained using a raster scan pattern is the facet edge length ratio. In this case, the ratio of longest to shortest edge of a Delaunay facet is used to assess the quality of the facet and, by extension, its measurements. Sequeira used this approach to discard the facet if the ratio was too large.^{37} Consider the image on the left in Fig. 11, which represents a two-dimensional Delaunay triangulation of a range image; when seen in three dimensions, facets on a discontinuity are elongated with respect to their neighbors. The ratio between the longest and shortest edge should ideally be 1:1; that is, the triangles should be equilateral. As the surface orientation increases with respect to the line of sight from the scanner, the ratio between the longest and shortest edges increases. Specifically, given a facet ${F}_{i}$ with edges ${E}_{i}=\{{e}_{i,1},{e}_{i,2},{e}_{i,3}\}$, the facet edge ratio ${w}_{i}$ can be found by

## Eq. 25

$${w}_{i}=\frac{\mathrm{min}\phantom{\rule{0.2em}{0ex}}{E}_{i}}{\mathrm{max}\phantom{\rule{0.2em}{0ex}}{E}_{i}}\u220a(0,1].$$

The facet ratio represents a quality metric in which the neighborhood is limited to the three measurements bounding the Delaunay facet. High-quality measurements would be those in which the facet ratio is close to 1, whereas those in which ${w}_{i}$ is very small would be considered low-quality measurements. Low-quality measurements have elongated facets indicating steep surface slopes. A drawback of this method is that it is specifically designed to assess the quality of facets and can only be applied to measurements as a side benefit. Moreover, it is specifically designed to work with regularly spaced raster patterns. Nonraster patterns can feature large edge ratios even if the surface is relatively flat, as illustrated in Fig. 12. The arrangements shown in Fig. 12 contain facets with large facet ratios regardless of the range values associated with them. Although well suited to the purpose for which it was designed, the facet ratio is not easily adapted for use as a general-purpose quality metric representing surface orientation. Fiocco et al.’s method, as well as the more popular grazing angle metric described by Eq. 24, are better suited as general-purpose surface orientation quality metrics.
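Equation 25 is straightforward to implement. The triangles below are illustrative, with the second being the kind of sliver facet that signals a discontinuity or steep slope.

```python
import math

def facet_edge_ratio(p0, p1, p2):
    """Facet edge ratio of Eq. 25: min edge length over max edge length,
    in (0, 1]; elongated facets (steep or discontinuous surfaces) score low."""
    edges = [math.dist(p0, p1), math.dist(p1, p2), math.dist(p2, p0)]
    return min(edges) / max(edges)

print(facet_edge_ratio((0, 0, 0), (1, 0, 0), (0.5, 0.866, 0)))  # ~1.0 equilateral
print(facet_edge_ratio((0, 0, 0), (1, 0, 0), (1.02, 0.01, 0)))  # ~0.02 sliver
```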

## 7. Total Quality Metric

Quality metrics are generally combined to generate an overall measure of quality, referred to here as a total quality metric. Scott et al.^{2} cited two common examples of how quality metrics could be combined: weighted summation and composite binary pass/fail. The weighted summation approach takes the form

## Eq. 26

$${Q}_{i}={\sum}_{j=1}^{{N}_{C}}\phantom{\rule{0.2em}{0ex}}{w}_{j}{C}_{i,j},$$

which was used by Sequeira et al.^{37} to determine the total quality of each measurement in a range image. Here, ${C}_{i,j}$ is the $j$ ’th quality metric for the $i$ ’th measurement and ${w}_{j}$ is its weight. To ensure that ${Q}_{i}\u220a[0,1]$ , the weight values can be restricted such that ${\sum}_{j=1}^{{N}_{C}}{w}_{j}=1$ . Meanwhile, the binary product approach has the form

## Eq. 27

$${Q}_{i}={\prod}_{j=1}^{{N}_{C}}\phantom{\rule{0.2em}{0ex}}({C}_{i,j}\ge {C}_{T,j}),$$

where ${C}_{T,j}$ is a threshold quality limit for the $j$ ’th quality metric. In this case, $({C}_{i,j}\ge {C}_{T,j})=1$ when the quality metric equals or exceeds the threshold value, and $({C}_{i,j}\ge {C}_{T,j})=0$ otherwise.

The choice of how quality metrics are combined depends on the application and the relative weight placed on each of the quality metrics. The weighted summation approach allows the researcher to tailor the contribution of each of the quality metrics to the overall measurement quality without any one metric dominating the result. For example, Fiocco et al.^{50} experimentally derived the weights for each sensor used in the experiment. They also standardized the weighting factors such that each sensor technology could be represented by a single weighting factor that modified each of the metric weights. Sequeira et al.^{37, 38} also used the weighted-sum approach but did not indicate how the weights were derived. The binary product approach is effective if the goal is simply to exceed some preset quality level.
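Both combination schemes of Eqs. 26 and 27 reduce to a few lines; the metric values, weights, and thresholds below are illustrative.

```python
def weighted_total(metrics, weights):
    """Weighted-summation total quality, per Eq. 26: Q = sum(w_j * C_j).
    With the weights summing to 1 and each metric in [0, 1], Q stays in [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * c for w, c in zip(weights, metrics))

def binary_total(metrics, thresholds):
    """Composite binary pass/fail, per Eq. 27: the product of the
    indicator terms (C_j >= C_T,j)."""
    q = 1
    for c, t in zip(metrics, thresholds):
        q *= int(c >= t)
    return q

c = [0.9, 0.6, 0.75]                       # per-attribute quality metrics
print(weighted_total(c, [0.5, 0.3, 0.2]))  # 0.78
print(binary_total(c, [0.8, 0.5, 0.5]))    # 1: all thresholds met
```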

## 8. Unresolved Quality Issues

Several quality attributes are notably absent from contemporary, and even emerging, quality metrics. In particular, no quality metric has been developed to address the motion of the laser spot during the acquisition process. This is of particular interest in triangulation scanners, where multiple sample intervals may be integrated to combat speckle noise. No quality metric has been defined to quantify the effect of measurement resolution; even using range as a quality metric only addresses measurement resolution by proxy. In fact, neighborhood-based metrics do not consider measurement density or proximity finer than the measurement resolution of the system. No metric has addressed the problem of measurement repeatability, most likely because it requires multiple range images of the same surface, which substantially increases scanning time. Finally, surface complexity is only imperfectly evaluated using surface orientation.

Measurement quality metrics are rarely combined into a total quality metric. As a result, operations such as measurement merge, range image registration, and deciding whether or not to delete a measurement are often based on inadequate information. For example, although a maximum likelihood merge of two measurements is statistically valid, the covariance matrix only partially describes the quality of the measurement. In fact, a measurement with relatively large covariance may be of substantially lower quality than a measurement with relatively small covariance when other factors, such as distortion of the signal peak and surface orientation, are taken into account. A more comprehensive approach to applying measurement quality to manipulating measurements is required.

Finally, nonreturn measurements are generally treated as having no qualitative value and are thus often ignored during data collection. This means that information about regions of the environment that cannot be scanned is lost. Future research should examine what can be learned about the environment being scanned from the absence of a return signal.

## 9. Conclusions

Quality metrics have featured significantly in contemporary research; however, most quality metrics have been designed for specific applications or specific algorithms and are often used independently. Measurement uncertainty has been used extensively to represent measurement quality, but many environmental factors affect measurement uncertainty, making it insufficient as an independent quality metric.

The relationship of range and resolution to measurement quality depends on the beam width. Additional work is required to better define the relationship between measurement quality and resolution for midfield measurements, where parallax must be taken into account. Sampling density has also been featured in various forms as a quality metric, although most approaches are highly application specific. Absent from the literature is a more detailed analysis of how sampling density is related to measurement quality and how to quantify sampling density as a quality metric in a generalized fashion. Surface orientation has also been used extensively as a quality metric, although it is also insufficient as an independent quality metric. Reflectivity is affected by surface materials, orientation, and surface complexity; thus, this factor has been used to represent measurement quality. Given, however, that reflectivity is affected by multiple factors, it, too, is insufficient as an independent quality metric.

The current state of the art in quality metrics performs adequately in assessing the quality of measurements within the context of specific applications, but these metrics are often not readily generalizable. Few researchers combine quality metrics so that the strengths of one may offset the weaknesses of another. This paper was a first step in assessing the relationship among the various quality metrics currently in use. More work is needed to develop a more comprehensive approach to measurement quality assessment.

## Acknowledgments

We thank the National Research Council of Canada for providing funding for this research through the Graduate Student Scholarship Supplement, as well as for providing facilities and equipment.

## References

## Biography

**David MacKinnon** holds a BSc (1990) in mathematics from the University of Prince Edward Island (PEI), a BSc (2001) in electrical and computer engineering from the University of New Brunswick, and both an MASc (2003) and PhD (2008) in electrical engineering from Carleton University. He is currently a research associate at the National Research Council Canada’s Institute for Information Technology, working in the area of measurement standards in 3D metrology. Between 1991 and 1998, he worked as a statistician, first with the PEI Food Technology Centre, then with the UPEI Clinical Research Centre. He is currently an Engineer-in-Training with the Association of Professional Engineers and Geoscientists of New Brunswick.

**Victor Aitken** holds a BSc (1987) in electrical engineering and mathematics from the University of British Columbia, and the MEng (1991) and PhD (1995) degrees in electrical engineering from Carleton University, Ottawa. He is currently an associate professor and chair of the Department of Systems and Computer Engineering at Carleton University, Ottawa, and is a member of the Professional Engineers of Ontario. His research interests include control systems, state estimation, data and information fusion, redundancy, sliding mode systems, nonlinear systems, vision, and mapping and localization for navigation and guidance of unmanned vehicle systems with applications in underground mining, landmine detection, and exploration.

**François Blais** is principal research officer and group leader of visual information technology, at the National Research Council Canada’s Institute for Information Technology. He received his BSc and MSc in electrical engineering from Laval University, Quebec City. Since 1984, his research has resulted in the development of a number of innovative 3D sensing technologies licensed to various industries and applications, including space and the in-orbit 3D laser inspection of NASA’s Space Shuttles. He led numerous R&D initiatives in 3D and the scanning and modeling of important archeological sites and objects of art, including the masterpiece Mona Lisa by Leonardo Da Vinci. He received several awards of excellence for his work and is very active on the international scene with scientific committees, more than 150 publications and patents, invited presentations, and tutorials.