Development of a reconstruction quality metric for optical three-dimensional measurement systems in use for hot-state measurement objects

Abstract. Optical three-dimensional (3-D) geometry measurements are state of the art when it comes to contactless quality control and maintenance of the shape of technical components that exclude tactile measurements due to filigree or internal structures. Optical inspection methods are also characterized by a fast and high-resolution 3-D inspection of complex geometries. Due to their noncontact principle, they can carry out measurements in places that would otherwise not be accessible due to harsh environmental conditions or specimens such as hot forged parts. However, there are currently no methods to estimate the reconstruction quality of optical 3-D geometry measurements of hot objects. The mainly used geometric measurement standards cannot be used for the characterization of hot measurements since the calibrated geometrical values are not transferable to high temperatures. Toward the development of such a metric, we present the fundamental concepts and algorithms for an estimation of the reconstruction quality and evaluate them using a two-dimensional simulation model. The generated findings were applied to the 3-D geometry measurement of a hot object in a laboratory environment. The results are compared with general state-of-the-art reconstruction quality metrics.


Introduction
In this paper, we introduce a suitable metric to estimate the reconstruction accuracy of an optical three-dimensional (3-D) measurement system used under the influence of a refractive index gradient. Such conditions may be caused by the heat transfer from a hot measurement object into the ambient air, which leads to a deflection of the measurement light from its assumed linear path. This light deflection effect is an additional source of uncertainty when measuring the 3-D geometry of an object in the hot state. The uncertainty estimation cannot be based on the comparison of the measured data with the calibrated geometry of a measurement standard, as is regular practice, since the calibrated features correspond precisely to the calibration at defined conditions, usually ambient pressure p = 101325 Pa and temperature T = 293 K. Also, the geometry of any component in the hot state cannot be measured by a tactile coordinate measuring machine since it cannot be excluded that the elevated temperatures damage the probe tip or cause conditions for which the system is no longer calibrated.
Members of this research group have tried to estimate the added uncertainty by analyzing the acquired data by comparing it with a mathematical model 1 and have conducted a multiphysics simulation 2 to estimate the effect of the light deflection on a simulated 3-D geometry measurement. In addition, we have measured hot objects in different ambient pressure situations 3 and from different angles 4 to derive metrics to evaluate the reconstruction quality. An indirect metric was found in the estimation of the inhomogeneous refractive index field induced by the heat transfer of the hot object into the ambient air using the background-oriented schlieren method. 5,6 Nevertheless, a direct proof and estimation of the influence of the light deflection effect was not conducted.

*Address all correspondence to Lorenz Quentin, E-mail: lorenz.quentin@imr.uni-hannover.de
Other groups have also developed 3-D measurement techniques for hot-state components, but they omitted a reconstruction quality estimation or based it on the predicted geometry of a heated component. The geometry prediction was based on a combination of the measured object temperature, the thermal expansion coefficient of the component's material, and the measured geometry of the component at room temperature. Standard quality estimation methods for optical 3-D geometry measurement systems, e.g., the backprojection error from Hartley and Zisserman, 7 were not applied. The results from the literature are recapitulated in the following to give an impression of the objectives of other researchers, the achieved uncertainties, and the methods used to estimate those uncertainties.
Liu et al. 8 used a line projection device combined with a two-camera stereo vision setup to calculate the diameter of a hot cylindrical forged part. The radiation from the self-emission of the hot object was blocked by a lowpass filter. There was no hot reference object available for the experiment, so the accuracy could not be estimated. Their proposed solution was based on the calculation of the fundamental matrix from the corresponding pixel points 7 and the matching of points to the epipolar constraint. Their second validation approach was based on a heating simulation and the expected expansion of the measurand; here, their results match with an error of ∼0.7%. Zhang et al. 9 used a laser light section method for the geometry reconstruction of cylindrical and rectangular hot heavy forged parts. The main research topic was the investigation of the movement of the measurement system and the width of the laser line projection. An accuracy estimation was not conducted. Zatočilová et al. 10 developed a stereo vision system for rotationally symmetric forgings under high temperature. However, since they were not able to identify homologous points in both cameras, they opted to reconstruct the hull of the object by fitting two perpendicular planes in the images. They estimated their achievable accuracy to be in the millimeter range.
Schöch et al. 11,12 developed a holistic geometry measurement system for arbitrary component shapes based on multiple light section scanners and a linear motion platform. They estimated the accuracy of their system at room temperature to be around 0.15 mm for a diameter measurement of 300 mm. For validation under hot conditions, they measured low-expansion workpieces (lengths l = {70 mm, 130 mm} with a thermal expansion coefficient of 0.63 × 10⁻⁶ K⁻¹) at about 850°C. While the reconstructed object lengths were within the expected range, there was no explicit statement of the achieved accuracy or the influence of the light deflection effect by the inhomogeneous refractive index field. They additionally estimated the theoretical maximum influence of the light deflection effect on light-section measurements to be about 30 μm for objects at a temperature of 100°C. Their calculation was based on the light deflection caused by a homogeneous refractive index field of a slightly different refractive index outside the sensor housing. They concluded that the light deflection effect can be neglected due to its diminishing importance compared to the size of the measurand of about 300 mm in diameter.
Du et al. 13 used a two-dimensional (2-D) laser radar, comprising a single laser beam and two servo motors, to measure the geometry of cylindrical hot heavy forged parts. The measurement error was estimated to be about 2 mm for a cylindrical reference object of 450 mm × 275 mm (height × diameter) at room temperature. For the same object at hot state, they estimated the error to be 5 mm for the diameter and 8 mm for the height. The single point error was not investigated, and the reference method was not stated. Zhang et al. 14 combined a light section system with reflection models to reconstruct the edges of a hot component. This simplifies the evaluation of the acquired 3-D data to calculate the length of an object. The accuracy of the method was estimated by comparing the reconstructed geometry of the cold object with a reference measurement; therefore, neither a single point accuracy nor a hot reference method was given. Hawryluk and Dworzak et al. 15,16 used the combination of a light-section laser scanner with a measuring arm, which is a six-axis serial arm reconstructing the 3-D position of its tip by measuring its joint angles. The system was used to reconstruct the geometry of the tools used in a hot-forming process. A wear analysis of the tools was conducted based on the acquired geometry. In our opinion, the wear thus described is not sufficiently examined in terms of its plausibility due to uncertainties caused by inhomogeneous ambient conditions or inherent uncertainties. Bračun et al. 17 used a laser scanner to reconstruct the geometry of a hot specimen. They focused on the compensation of different lighting conditions due to inhomogeneous emission coefficients on the object's surface as well as on the measurement accuracy. They identified scale chips on the object's surface and the surface microtopography as the most significant influences. Zhou et al. 18 used a stereo vision system in combination with a line projector.
The main research focus was the line detection along measurement object edges in the stereoscopic image pairs. The detected lines were used to reconstruct the object's geometry edges through epipolar geometry. 7 Zhang et al. 19 introduced a laser scanner-based shape reconstruction method for large ring forgings. The main aspect of their approach was the fusion of consecutive measurements from different sensors into the holistic geometry of the component. For the validation of their data, they compared the measurement results to the component's radius, which was measured by a caliper. The estimated errors were within 1.2 mm for a radius of about 450 mm.

Basics of Optical 3-D Geometry Measurement
Optical 3-D geometry measurement techniques are state of the art when it comes to contactless quality control for the shape of technical components, especially for free-form objects. 20 The mapping of an arbitrary 3-D point x = (x, y, z)^T onto a pixel location u_b = (u_b, v_b)^T on the sensor of a camera is described by the perspective projection of the pinhole camera model, according to

$$\lambda \begin{pmatrix} u_b \\ v_b \\ 1 \end{pmatrix} = \mathbf{K}_i \, (\mathbf{R}\,\mathbf{t})_i \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}. \tag{1}$$

Here, K_i and (R t)_i describe the projection matrix and the pose of the used camera i, respectively. There is a loss of depth information in the process, i.e., a reduction of a point in 3-D space x to a pixel on a 2-D sensor u_b. The information about the z-location of the 3-D point x is stored in λ.
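As an illustration of the pinhole projection of Eq. (1), the following Python sketch (not the implementation used in this work; the camera parameters are arbitrary example values) projects a 3-D point and recovers the scale λ that carries the depth information:

```python
import numpy as np

def project(K, R, t, x):
    """Pinhole projection, Eq. (1): lambda * (u_b, v_b, 1)^T = K (R t) (x, 1)^T."""
    x_cam = R @ x + t          # transform world point into camera coordinates
    u_h = K @ x_cam            # homogeneous pixel coordinates, u_h = lambda * (u, v, 1)
    lam = u_h[2]               # the depth information is stored in lambda
    return u_h[:2] / lam, lam  # pixel location u_b = (u_b, v_b)^T and lambda

# Minimal example: idealized camera at the world origin (hypothetical values)
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
u_b, lam = project(K, R, t, np.array([0.1, 0.05, 2.0]))
```

Note how the inverse step, pixel back to 3-D point, is underdetermined without λ, which is exactly the loss of depth information described above.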
The optical measurement of a 3-D geometry therefore needs to compensate for the loss of dimensions, e.g., through the use of two or more cameras observing the same scene. The reconstruction is then based on the triangulation of homologous 2-D points in multiple cameras. A successful reconstruction requires the satisfaction of Eq. (1) for each corresponding point; therefore, the identification of the system-inherent parameters [K_i, (R t)_i for each camera i] and the detection of the homologous points are necessary.
A calibration routine using a known target is used to compute those camera parameters, e.g., the one developed by Zhang. 21 The reconstruction itself is then based on the intersection of two view rays from two cameras i, j, resulting in a reconstructed 3-D point x′(u_c,i, u_c,j). Since these lines in 3-D space are usually skew due to a leftover calibration error or numerical uncertainties, different algorithms were developed to reconstruct the 3-D point. A popular method is an optimization based on the epipolar geometry. 7 However, homologous points may not be detectable on measurement objects without a structured surface. An active triangulation system, which incorporates an additional illumination unit, is used to counter this. One approach uses different projection images to add an artificial structure to the surface, e.g., by projecting a pseudorandom pattern to create homologous points. Such a structured surface can then be reconstructed using the described triangulation technique with multiple cameras. A different approach uses a coded projection sequence to retrieve the corresponding projector pixel u_p for a given camera pixel location, i.e., u_p(u_c), e.g., the multifrequency phase-shift method developed by Peng et al. 22 The projector is modeled here as an inverse camera to then reconstruct 3-D points from the given pixel values of camera c and projector p, i.e., x′(u_p, u_c), using the described triangulation techniques.
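For skew view rays, one simple alternative to the epipolar-geometry optimization cited above is the midpoint of the shortest segment between both rays. The following sketch is illustrative only (all names are hypothetical; it is not the reconstruction used in this work):

```python
import numpy as np

def triangulate_midpoint(c_i, a_i, c_j, a_j):
    """Reconstruct x' as the midpoint of the shortest segment between the
    two (possibly skew) view rays c + s*a of cameras i and j."""
    a_i = a_i / np.linalg.norm(a_i)
    a_j = a_j / np.linalg.norm(a_j)
    b = c_j - c_i
    d = a_i @ a_j                     # cosine of the angle between the rays
    denom = 1.0 - d * d               # zero only for parallel rays
    # Ray parameters minimizing |c_i + s*a_i - (c_j + r*a_j)|
    s = (a_i @ b - d * (a_j @ b)) / denom
    r = (d * (a_i @ b) - a_j @ b) / denom
    p_i = c_i + s * a_i               # closest point on ray i
    p_j = c_j + r * a_j               # closest point on ray j
    return 0.5 * (p_i + p_j)          # midpoint between the closest points

# Two rays that intersect exactly in (1, 1, 5); for skew rays the result
# lies between both rays, and the gap reflects the calibration error
x = triangulate_midpoint(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 5.0]),
                         np.array([2.0, 0.0, 0.0]), np.array([-1.0, 1.0, 5.0]))
```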
For the researched metric to estimate the reconstruction accuracy of an optical 3-D geometry measurement technique, in our case a fringe projection system, it is necessary to identify corresponding points in a setup comprising one projector and multiple cameras. To this end, Bräuer-Burchardt et al. 23 and Reich et al. 24 investigated the additional knowledge gained from a system with more than one camera n_c > 1 and a single projector n_p = 1. They developed an algorithm to change the basis of the calculation from the camera sensor to the virtual projector sensor, u_p(u_c) → u_c(u_p). Using this method unlocks additional stereo pairs for triangulation by finding the corresponding camera pixels, i.e., not only reconstructing the 3-D points x′_i = f(u_p, u_c,i) but also using x′_i,j = f(u_c,i, u_c,j). There are m = (n − 1)! stereo pairs in such a system with n = n_p + n_c optical devices. Therefore, there are m reconstructed 3-D points x′_i,j(u_p) for each projector pixel. To benefit from the fractional projector pixel locations retrieved by the phase-shift algorithm, they developed a method to arbitrarily scale the projector resolution. The main advantage of this method is the optimization of the point cloud density by changing the projector resolution in a virtual raster (VR).

Proposed Method
First, a simplified model of the influence of refractive index fields on the reconstruction of 3-D points is described. The model is based on the reduction of an inhomogeneous refractive index field to a medium containing two refractive indices with a discrete and sharp boundary layer. These assumptions are used to explain the background of the proposed method. Based on this, the presented method is brought into relation with the known backprojection error and the deviation of corresponding object points.

Theoretical Background
Suppose a ray of light, traveling from a medium with refractive index n_1 into a medium with n_2, is subject to refraction at the boundary interface between n_1 and n_2. The angle of refraction β is calculated from the incident angle α and the ratio of the refractive indices n_1/n_2, according to

$$\beta = \arcsin\!\left( \frac{n_1}{n_2} \sin \alpha \right). \tag{2}$$

Suppose the refraction is caused by a discrete and plane-parallel medium of thickness d_w; the length l_r of the refracted ray in n_2 is

$$l_r = \frac{d_w}{\cos \beta}. \tag{3}$$

The refracted ray of light is shifted parallel by p compared with the virtual nonrefracted ray, according to

$$p = l_r \sin(\alpha - \beta). \tag{4}$$

Let us suppose the ray of light connects an arbitrary 3-D point x_k and the projection center of a camera c_i (see Fig. 1). The view ray a_k,i for the camera with center c_i then connects the camera center and the first intersection d_k,i of the ray of light with the interface layer, according to

$$\mathbf{a}_{k,i} = \mathbf{d}_{k,i} - \mathbf{c}_i. \tag{5}$$

This view ray is subject to a different amount of light deflection p_k,i for each camera position c_i and for each point x_k. For a 3-D geometry measurement system based on stereo triangulation, the reconstruction of an arbitrary point x_k in a system with more than two cameras n > 2 results in m = (n − 1)! reconstructed points x′_k,i,j, one from each combination of cameras into a stereo pair i, j ∈ m. For assumed homogeneous refractive index conditions, all reconstructed points are, theoretically, equal to the corresponding observed point in space, according to

$$\mathbf{x}'_{k,i,j} = \mathbf{x}_k \tag{6}$$

for each camera pair i, j ∈ m and n_1 = n_2.
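Eqs. (2)-(4) can be sketched numerically as follows (illustrative Python only; the example values for the slab correspond roughly to the perspex disc used later in the laboratory experiment, n_2 ≈ 1.49 and d_w = 5 mm):

```python
import numpy as np

def parallel_shift(alpha, n1, n2, d_w):
    """Parallel displacement p of a ray crossing a plane-parallel slab of
    thickness d_w and refractive index n2, following Eqs. (2)-(4)."""
    beta = np.arcsin(n1 / n2 * np.sin(alpha))  # Snell's law, Eq. (2)
    l_r = d_w / np.cos(beta)                   # ray length inside the slab, Eq. (3)
    return l_r * np.sin(alpha - beta)          # parallel shift p, Eq. (4)

# Example: 5 mm slab with n2 ~ 1.49 in air, 30 deg incidence angle
p = parallel_shift(np.deg2rad(30.0), 1.00028, 1.49, 5.0)  # p in mm
```

The shift p grows with both d_w and the refractive index difference, which is the behavior the proposed quality metric is designed to detect.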
In inhomogeneous conditions n_1 ≠ n_2, each reconstructed point x′_k,i,j is subjected to the combined light deflections p_k,i and p_k,j of both light paths, which, again, are different for each camera, i.e., p_k,i ≠ p_k,j. Therefore, the distance b_k,i,j between the point in space x_k and the reconstructed points x′_k,i,j is expected to increase with larger thicknesses d_w and larger refractive index differences Δn_1,2 = |n_1 − n_2|. While being different for each used camera pair, this distance b_k,i,j cannot be measured in a real setup because it is not feasible to accurately reconstruct the point x_k. Therefore, it is proposed to base the estimation of the reconstruction quality on the spatial extent of the set of reconstructed points x′_k,i,j. A natural choice for an easy-to-calculate representation of said extent is the statistical deviation of all reconstructed points x′_k,i,j for i, j ∈ m.

Estimation of the Reconstruction Quality for Multistereo-Pair Systems
The main requirement for all of the proposed methods is the unambiguity of the corresponding pixel locations. This is easily achievable for random-pattern or single-point triangulation systems due to the unambiguous nature of these systems. For fringe projection systems, this requirement can be met using Bräuer-Burchardt's VR method (see Sec. 2), in which all camera pixels are calculated as a function of the projector pixel u_c,i = f(u_p). Different calculation methods for a quality estimation are shown by Hartley and Zisserman. 7 The commonly used method is the backprojection, i.e., the recalculation of the pixel location u_b from the reconstructed 3-D point x′ [see Eq. (1)]. The Euclidean distance e_b to the activated pixel u used for the reconstruction is then called the backprojection error, according to

$$e_b = \| \mathbf{u} - \mathbf{u}_b(\mathbf{x}') \|_2. \tag{7}$$

When applying this method to a setup with more than two cameras n > 2, all m stereo pairs are taken into account. The backprojection error is then calculated separately for each pair i, j ∈ m, according to

$$e_{b,i,j} = \frac{1}{2} \left( \| \mathbf{u}_i - \mathbf{u}_{b,i}(\mathbf{x}'_{i,j}) \|_2 + \| \mathbf{u}_j - \mathbf{u}_{b,j}(\mathbf{x}'_{i,j}) \|_2 \right). \tag{8}$$

To generalize the results, the average norm of the reprojection error for each reconstructed point within all observing cameras is computed, according to

$$e_{b,s} = \frac{1}{m} \sum_{i,j \in m} e_{b,i,j}. \tag{9}$$

To take the assumptions from Sec. 3.1 into account, the backprojection can also be based on the mean of the correspondingly reconstructed points, i.e., on

$$\mathbf{x}'_m = \frac{1}{m} \sum_{i,j \in m} \mathbf{x}'_{i,j}. \tag{10}$$

Now, we calculate the backprojection error e_b,i,m for each camera or projector i ∈ n from that mean point x′_m, according to

$$e_{b,i,m} = \| \mathbf{u}_i - \mathbf{u}_{b,i}(\mathbf{x}'_m) \|_2. \tag{11}$$

To make this error comparable throughout systems with different numbers of cameras and projectors n, the average of this value is calculated, according to

$$e_{b,m} = \frac{1}{n} \sum_{i \in n} e_{b,i,m}. \tag{12}$$

However, this value might be a function of the used camera resolutions.
Therefore, we also propose expressing the error in metric units, i.e., calculating the reconstruction quality as the statistical deviation of the reconstructed points from their mean, according to

$$E_m = \sqrt{\frac{1}{m} \sum_{i,j \in m} \| \mathbf{x}'_{i,j} - \mathbf{x}'_m \|_2^2}. \tag{13}$$

A summary of the proposed reconstruction quality metrics as a flowgraph is shown in Fig. 2.
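A minimal sketch of the point-based quantities of Eqs. (10) and (13) (illustrative Python; the sample points are synthetic, not measured data):

```python
import numpy as np

def quality_metrics(points):
    """Given the m redundantly reconstructed 3-D points x'_{i,j} for one
    projector pixel, return the mean point x'_m, Eq. (10), and the spatial
    deviation E_m, Eq. (13), as RMS distance of the points from x'_m."""
    pts = np.asarray(points, dtype=float)
    x_m = pts.mean(axis=0)                               # Eq. (10)
    E_m = np.sqrt(np.mean(np.sum((pts - x_m) ** 2, axis=1)))  # Eq. (13)
    return x_m, E_m

# Four stereo-pair reconstructions of one point, scattered by deflection
pts = [[0.0, 0.0, 10.0], [0.2, 0.0, 10.0], [0.0, 0.2, 10.0], [0.2, 0.2, 10.2]]
x_m, E_m = quality_metrics(pts)
```

In contrast to the backprojection errors, E_m is expressed in the metric units of the point cloud and is therefore independent of the camera resolutions.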

Experimental Setup
In this section, the setup for the experiments is shown. The proposed methods are tested in a 2-D simulation model under homogeneous refractive index conditions, i.e., n_1 = const., n_2 = const., n_1 ≠ n_2. In addition, an optical 3-D fringe projection setup is used in a laboratory environment under homogeneous conditions, i.e., with a glass window, and under inhomogeneous conditions, i.e., measuring a hot object. In the simulation model, the light propagation is assumed to be in one plane as this simplifies the refraction and line-line intersection calculations.

Model Setup
To test the proposed approach, a simplified 2-D simulation is set up (see Fig. 3). For the model, a homogeneous and plane-parallel refractive index field of thickness d_w and refractive index n_2 is inserted into an otherwise also homogeneous propagation medium with a refractive index of n_1. The path of light from a point x_k = (x, y)^T in 2-D space to the center c_i = (c_x,i, c_y,i)^T of a virtual camera i is calculated using Snell's law (dashed lines) and a brute-force method. The mean of the leftover point-line distance from the camera center c_i to the refracted path of light is about 10⁻⁴. The vector from the camera center c_i to the first intersection d_k,i at the n_1 → n_2 boundary is then used as the directional vector a′_k,i (dashed lines) to calculate the virtual intersections

$$\mathbf{x}'_{k,i,j} = f(\mathbf{a}'_{k,i}, \mathbf{a}'_{k,j}). \tag{14}$$
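A compact variant of such a 2-D model can be sketched as follows (illustrative Python, not the simulation code used in this work; the geometry, the refractive indices, and the bisection-based brute-force search are example choices):

```python
import numpy as np

N1, N2 = 1.0, 1.5          # ambient and slab refractive indices (example values)
Y0, DW = 5.0, 1.0          # slab entry height and thickness d_w (example values)

def x_after_slab(c, dx, y_k):
    """Horizontal position at height y_k of a ray from camera c entering the
    slab at (dx, Y0): refracted inside, parallel to the incident ray after exit."""
    tan_a = (dx - c[0]) / (Y0 - c[1])      # incidence w.r.t. the slab normal (y axis)
    sin_a = np.sin(np.arctan(tan_a))
    sin_b = N1 / N2 * sin_a                # Snell's law at the n1 -> n2 interface
    tan_b = sin_b / np.sqrt(1.0 - sin_b**2)
    return dx + DW * tan_b + (y_k - Y0 - DW) * tan_a

def entry_point(c, x_k, lo=-100.0, hi=100.0, iters=200):
    """Brute-force (bisection) search for the slab entry point dx such that
    the refracted path from camera c passes through x_k."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if x_after_slab(c, mid, x_k[1]) < x_k[0]:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def virtual_intersection(c_i, c_j, x_k):
    """Intersect the undeflected view rays a'_{k,i}, a'_{k,j}, each running
    from its camera center through the slab entry point d_{k,i}, Eq. (14)."""
    d_i = np.array([entry_point(c_i, x_k), Y0])
    d_j = np.array([entry_point(c_j, x_k), Y0])
    a_i, a_j = d_i - c_i, d_j - c_j
    # Solve c_i + s*a_i = c_j + t*a_j (2-D line-line intersection)
    A = np.column_stack([a_i, -a_j])
    s, _ = np.linalg.solve(A, c_j - c_i)
    return c_i + s * a_i

# Two cameras below the slab observing the point x_k = (0, 8) above it;
# the virtual intersection x' deviates from x_k due to the deflection
x_prime = virtual_intersection(np.array([-2.0, 0.0]),
                               np.array([2.0, 0.0]),
                               np.array([0.0, 8.0]))
```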

3-D Measurement Setup
The used optical 3-D fringe projection setup combines one green-LED projector (Wintech PRO 4500 based on TI DLP LightCrafter 4500) and four cameras (Allied Vision Prosilica GT 2050 and Prosilica GT 2300 with MeViS-C lenses) into a multicamera fringe projection system [see Figs. 4(a) and 5(a)]. Bandpass filters on the camera lenses are used to block incoming radiation from the self-emission of the hot objects. The triangulation bases and triangulation angles are listed in Table 1.
For an areal measurement, Peng et al.'s 22 multifrequency phase-shift method is used to calculate the projector pixel for a given camera pixel, u_p = f(u_c,i), by projecting a coded sinusoidal sequence onto the specimen. The measurement object with one pattern of the projected sequence is shown in Fig. 4(b). Bräuer-Burchardt's 23 VR method is then used to calculate the phase maps u_c,i(u_p), therefore yielding n_c = 4 corresponding pixels for each projector pixel u_p.
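For reference, the single-frequency core of a phase-shift evaluation can be sketched as follows (illustrative Python; the full multifrequency method of Peng et al. additionally unwraps the phase across several fringe periods, which is omitted here):

```python
import numpy as np

def phase_from_shifts(images):
    """Wrapped phase from N equally phase-shifted sinusoidal patterns
    I_k = A + B*cos(phi + 2*pi*k/N), via the standard N-step formula
    phi = atan2(-sum I_k sin(delta_k), sum I_k cos(delta_k))."""
    imgs = np.asarray(images, dtype=float)
    N = imgs.shape[0]
    k = np.arange(N).reshape(N, 1, 1)           # broadcast over image pixels
    num = np.sum(imgs * np.sin(2 * np.pi * k / N), axis=0)
    den = np.sum(imgs * np.cos(2 * np.pi * k / N), axis=0)
    return np.arctan2(-num, den)                # wrapped phase in (-pi, pi]

# Synthetic 4-step sequence for a single pixel with true phase 1.0 rad
phi0 = 1.0
imgs = [0.5 + 0.4 * np.cos(phi0 + 2 * np.pi * k / 4) * np.ones((1, 1))
        for k in range(4)]
phi = phase_from_shifts(imgs)
```

The recovered phase encodes the fractional projector coordinate, which is what makes the subpixel projector locations for the VR method available.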
The 3-D points are reconstructed from each available view-ray pair using epipolar geometry. 7 Camera-camera pairs on the same side of the projector (c_lR : c_uR and c_lL : c_uL) are not used here since the expected uncertainty is higher compared with the other pairs due to a smaller triangulation angle and triangulation base (see Table 1). The number of measured 3-D points for each projector pixel is shown in Fig. 5(b).

Fig. 4 The used measurement setup. (a) Image of the four-camera, one-projector 3-D geometry measurement system. The short code for the identification of the cameras is also shown. (b) Image of the used measurement object at T_c ≈ 1300 K. One pattern of the sinusoidal sequence is projected onto the object. The object lies in a prism holder with an anchor point on the tail end.
A series of experiments is conducted to compare and evaluate the proposed methods. In these experiments, the geometry of a cylinder of diameter d_c = 50 mm is measured, which is placed in a prism holder with an anchor point [see Fig. 4(b)]. The reconstruction quality is estimated for this cylinder under regular conditions T_c ≈ 300 K and with the cylinder at the forging temperature of T_c ≈ 1300 K. To test the methods under the influence of a known refractive index field, a perspex disc with a thickness of d_w = 5 mm and a refractive index of n_2 ≈ 1.49 is placed between the measurement unit and the cold specimen (not shown). Each experiment consists of five consecutive measurements per measurement scenario, each taking about 2 s. The experimental setup is kept constant between each set of measurements except for the measurement object, which is placed in an oven at T_oven ≈ 1300 K for about 1 h. Each measurement contains the reconstruction of x′_i,j(u_p) for each stereo pair i, j ∈ m. The proposed reconstruction quality metrics are then calculated from the identical measurement results; the per-pixel means of those metrics are shown in the result maps (Figs. 7-9), and all measured points are evaluated for the histograms in Fig. 10.

Table 1 Triangulation base and triangulation angle for all available stereo pairs in the fringe projection setup. The code for the stereo combination is taken from Fig. 4(a).

Results
The results for the 2-D simulation model and the 3-D fringe projection system are shown in this section. The expectations for the measurement results are based on the experience of this research group with regard to the geometry measurement of hot objects. In general, the reconstruction quality is expected to decrease when measuring a hot object through the heat-induced refractive index field compared with the measurement of the same cylinder at room temperature, as long as the cylinder is placed in a similar pose for both measurements. Neither the exact same placement is reached nor is the same diameter of the cylinder measured since both are influenced by the thermal expansion of the cylinder from T_c ≈ 300 K to T_c ≈ 1300 K. The measurement with the inserted glass window should yield a lower reconstruction quality value than both other measurements. This is mainly due to Eq. (4), by which the amount of light deflection is described. Overall, a perfect reconstruction quality is not expected since there is an intrinsic leftover error from the (always) imperfect calibration routine.

Results for the 2-D Simulation Model
The proposed methods are first tested in the 2-D simulation model (see Sec. 4.1 and Fig. 3).
The deviations e_m increase with increasing thickness d_w [see Fig. 6(a)]. There are also larger interpoint variances for an increasing thickness d_w, depending on the location of the backprojected point x_k. These are visible as a thickening of the more transparent points in the background of the plot.
The deviations e_m for a scenario with an ambient refractive index of n_1 = 1.00028 (air at standard conditions) are shown in Fig. 6(b). Here, observations similar to those in Fig. 6(a) can be made. However, the decrease of the deviations e_m beyond the threshold Δn_1,2 > 0.25 is not part of the following discussion of results. The relative interpoint differences are larger compared with those in Fig. 6(a), while being smaller on an absolute scale. The minimum deviation is at the expected position of Δn_1,2 = 0.

The results from the laboratory experiment are split into the three proposed quality estimation methods {e_b,s, e_b,m, E_m} since they vary in magnitude. The maps of the single-camera backprojection error e_b,s(u_p) [from Eq. (9)] are shown in Fig. 7. The region of interest, from which the data are extracted, is shown in Fig. 5(b). The regular backprojection error is not homogeneously distributed and has a mean value of about 0.443 pixel for T_c ≈ 300 K. There seem to be few differences compared with the quality estimation of the cylinder at T_c ≈ 1300 K with a mean deviation of ∼0.445 pixel [see Fig. 7(b)]. There are larger differences in the measurement with the inserted window in the light path, with a mean of about 2.05 pixel [see Fig. 7(c)].
The maps of the backprojection error e_b,m(u_p) for each averaged corresponding object point x′_m [see Eq. (12)] are shown in Fig. 8, while the maps of the metric deviations of the corresponding 3-D points E_m [see Eq. (13)] are shown in Fig. 9. In both cases, the differences from the measurements of the cold cylinder T_c ≈ 300 K to the measurements of the hot cylinder T_c ≈ 1300 K [compare Figs. 8(a) to 8(b) and Figs. 9(a) to 9(b)] seem to be small but detectable, while the differences between the deviations from the cold measurement and the measurement with the inserted window are larger.
The histograms of these results (see Fig. 10) are sorted by the used evaluation metric. Therefore, the comparable results from e_b,s are shown in Fig. 10(a), those from e_b,m in Fig. 10(b), and those from E_m in Fig. 10(c).
For the overall objective of detecting the influence of the heat-induced refractive index gradient, the pixelwise differences between T_c ≈ 300 K and T_c ≈ 1300 K for the respective estimation methods seem insignificantly small. Even a statistical analysis of the acquired results yielded no comprehensible conclusion for e_b,s [see Fig. 10(a)], while being subject to interpretation for e_b,m and E_m [see Figs. 10(b) and 10(c)]. In the histograms, the distribution of both error metrics seems to be (approximately) Gaussian, and there is a difference of ≈11% for e_b,m and ≈17% for E_m. The difference is both significant and robust considering the evaluated number of points of about k ≈ 10⁶. The relatively small reconstruction quality difference is mainly due to the relatively large error in the reference results, which is a consequence of an imperfect calibration process. Extrapolating from the 2-D results, the measurement with d_w = 5 and Δn_1,2 = 0.5 yields e_m ≈ 1.3 and the measurement with d_w = 5 and Δn_1,2 = −0.00028 yields e_m ≈ 7 × 10⁻³, i.e., a relative difference of e_m,Δn=0.5 / e_m,Δn=−0.00028 ≈ 185. Comparing this with the relative differences in e_b,m of ≈3.72 and in E_m of ≈3.76 yields a much larger sensitivity of the model experiment compared with the measurements under laboratory conditions. This is mainly due to the additional influence of the calibration error in the workshop experiments.
The intent was to directly compare the displayed results as a difference map, e.g., as E_m(u_p, v_p)_{T_c=300 K} − E_m(u_p, v_p)_{T_c=1300 K}. This task proved to be impractical for several reasons. The light deflection to be quantified itself moves the reconstructed points on the VR, so u_p(x_{T_c=300 K}) ≠ u_p(x_{T_c=1300 K}). In addition, the distribution of the reconstruction errors is inhomogeneous, even for the experiment at standard conditions [see, e.g., Fig. 7(a)]. Therefore, a small change in the object's geometry or placement results in a change of the reconstruction error at that pixel position. Such changes occur easily, mainly due to the different measurement conditions of the experiment at the different states. Another reason is the surface of the object, which, like its diameter, changes constantly throughout the heating and cooling periods. In addition, the position of the object cannot be assumed to be fixed, even though the cylindrical measurement object is placed in a prism holder with an anchor point.

Conclusion and Future Work
In this paper, a method for the estimation of the reconstruction quality of an optical 3-D geometry measurement system under inhomogeneous refractive index conditions was presented. The method is based on a multicamera single-projector fringe projection system and does not require additional geometric standards, e.g., a sphere or a cylinder with a calibrated diameter. The reconstruction metrics are based on redundantly measured 3-D points and the analysis of the geometric mean of these points. The theoretical background for the proposed methods, as well as the results of a 2-D simulation model, was laid out. In the simulation, the influence of the inhomogeneous refractive index gradient was condensed into a homogeneous refractive index field of varying thickness and refractive index difference. The deviation of the reconstructed points e_m was introduced as a reconstruction quality metric for the 2-D case, and its feasibility was verified by the results of the simulation model. Here, the mean deviation is proportional to the optical path length difference, e_m ∝ d_w Δn_1,2, for small refractive index differences Δn_1,2 < 0.1.
For the 3-D laboratory experiment, the feasibility of the proposed quality metrics was proven by comparing an uninfluenced measurement with a measurement in which a glass plate was inserted into the optical path. However, the comparison of a measurement of a cylinder at T_c ≈ 300 K with the measurement at T_c ≈ 1300 K revealed no differences for the state-of-the-art quality metric of the single-camera backprojection error e_b,s. The evaluation of the results for the quality metrics based on redundantly reconstructed points showed a small but significant difference between the hot and cold measurements. Here, the metric deviation E_m yields a slightly higher sensitivity to the influence of the refractive index gradient than the backprojection error e_b,m of the mean points x′_m. Overall, the proposed reconstruction quality metrics yield detectable differences when measuring hot objects compared with cold objects. This enables the estimation of a relative reconstruction quality for measurements in which geometric measurement standards cannot be used. For these, it is necessary to establish a ground truth for each measurement scenario by measuring a similar object in the cold state.
In the future, the proposed and presented reconstruction quality metrics will be used to assess different compensation methods for the light deflection effect induced by an inhomogeneous refractive index field around wrought-hot objects. The combination of the presented metric and a similar geometric measurement standard can be used to calculate an absolute quality metric in reference to said standard. It is also intended to reduce or restructure the intrinsic reconstruction error to enable an analysis of hot objects without the need for a comparison with a cold object of similar shape.

engineering in 2008 and his postdoctoral lecturing qualifications in 2016 from the LUH. His current research interests are optical metrology from macro- to nanoscale and optical simulations.
Eduard Reithmeier is a professor at the LUH and head of the Institute of Measurement and Automatic Control. He received his diplomas in mechanical engineering and in mathematics in 1983 and 1985, respectively, and his doctorate degree in mechanical engineering from the Technische Universität München in 1989. His research focuses on system theory and control engineering.