Full simulation model for laser triangulation measurement in an inhomogeneous refractive index field

Abstract. The optical inspection of wrought hot workpieces between subsequent forming steps of a multistage process chain can yield diverse advantages. Deficient components can be detected in an early forming stage. Moreover, eliminating intermediate cooling saves heating energy: the present workpiece temperature can be exploited in the following chain steps. Challenges arise due to the heat input into the air surrounding the workpiece, as triangulation techniques rely on homogeneous optical conditions. The effect of an inhomogeneous refractive index field (RIF) in air on a 3-D geometry measurement by optical triangulation is modeled by the example of a virtual measurement of a hot cylinder. To our knowledge, this is the first simulation approach that fully considers the light deflection both from the illumination unit to the object and from the object to the camera. Simulated measurement results in a homogeneous and an inhomogeneous RIF are compared. The presented approach predicts measurement deviations in inhomogeneous optical media and can help to design actuated or computer-assisted compensation routines in order to reduce deflection effects when measuring hot objects.


Introduction
The optical triangulation method is a state-of-the-art technique to acquire geometry data of complex freeform geometries and is used at different scales. 1 A common industrial application is the inspection of formed metal sheets in the automotive sector by fringe projection systems, 2 whereas endoscopic systems with small measurement heads for confined spaces are being investigated for in-situ inspection tasks (e.g., the restoration of turbine blades 3 ).
Both the fringe pattern and the laser light-section method require homogeneous measurement conditions in terms of the surrounding optical medium's refractive index, as a rectilinear propagation of light is assumed in optical triangulation. 4 Although the refractive index of air depends on various parameters, such as humidity, pressure, and the CO2 content, it varies only slightly if temperature and pressure can be considered constant. 5 As most measurements are performed under normal conditions, the hypothesis of a rectilinear light propagation is usually valid or accurate enough.
In subproject C5 of the Collaborative Research Centre 1153 (CRC) Process chain to produce hybrid high performance components by Tailored Forming, the geometry of high-temperature, hybrid workpieces is meant to be inspected via optical triangulation techniques between subsequent forming steps. The condition monitoring of critical workpiece features, such as the joining zone of different materials in a hybrid component, can help to discard deficient parts in an early manufacturing stage. Another advantage of an immediate, and therefore high-temperature, inspection is the economization of energy, as the present workpiece temperature can be exploited in the following forming chain steps. Unfortunately, the hypothesis of a rectilinear light propagation is violated when optically measuring hot objects: workpiece temperatures of more than 1000°C lead to a non-negligible heat input into the surrounding air, which reduces the local air density and thereby creates a locally differing refractive index. 6 The resulting 3-D refractive index field's (RIF) shape, extension, and magnitude are time-variant and highly depend on the object's temperature, geometry, and the present air flow conditions. 7 The light propagation is affected, as its path is bent toward denser air layers. Most articles in this field neglect this deflection effect, [8][9][10][11] which is legitimate if the light path deflection is too small to be reproduced by the applied measurement system. Ghiotti et al. 12 present a high-speed measuring system based on multiple laser scanning triangulation sensors to acquire the geometry of freeform parts with temperatures up to 1200°C. The refraction of the laser light due to a heat input in air is not considered, as a maximum error of 30 μm is assumed to occur for the described measurement scenario.
In order to model the light path in (inhomogeneous) media, Fermat's principle has to be adhered to. A modern and general version of Fermat's principle is formulated with respect to variational calculus: between two points G 1 and G 2 , a light ray takes the path that is extremal with respect to variations of this path. A mathematical formulation for the optical path length (OPL) is

OPL = ∫_{G 1}^{G 2} n(s) ds, (1)

where n(s) is the refractive index of the traversed medium and a function of the location s. 13 In this paper, the effect of an inhomogeneous RIF in air on a 3-D optical triangulation measurement is numerically modeled. The exemplary measurement is performed by simulating the geometry acquisition of a hot cylinder via the light-section method. The approach fully considers the light deflection both from the illumination unit (laser with telecentric lens) to the object and from the object to the detection unit (pinhole camera). As the path of stationary optical length between laser and camera is not known, a solution to Eq. (1) can only be gained by an iterative approximation procedure, optimizing the path between object (cylinder) and camera by ray tracing. The simulations are performed with the software Comsol Multiphysics, 14 as the software provides both a simulation module for numerical heat transfer calculations and a ray tracing module.
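As a minimal numerical illustration of Eq. (1), the OPL integral can be approximated along a sampled ray path with the midpoint rule. This is a sketch, not part of the Comsol model; the function names and the sample path are illustrative assumptions.

```python
import math

def optical_path_length(points, n_of_point):
    """Approximate OPL = integral of n(s) ds along a polyline ray path.

    points: list of (x, y, z) vertices sampled along the path.
    n_of_point: callable returning the refractive index at a location.
    The integrand is evaluated at segment midpoints (midpoint rule).
    """
    opl = 0.0
    for p, q in zip(points, points[1:]):
        seg = math.dist(p, q)                       # segment length ds
        mid = tuple((a + b) / 2 for a, b in zip(p, q))
        opl += n_of_point(mid) * seg                # n(s) * ds
    return opl

# Illustrative check: a straight 1 m path in homogeneous air
# (n = 1.000293 is an assumed standard-condition value).
path = [(0.0, 0.0, z / 100.0) for z in range(101)]
opl = optical_path_length(path, lambda p: 1.000293)
```

For a homogeneous medium the result reduces to n times the geometric path length; in an inhomogeneous RIF, `n_of_point` would interpolate the simulated 3-D field.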

Former Work
In a former SPIE proceedings contribution, the authors experimentally investigated the effect of a convective density flow on a 3-D geometry measurement of a hot steel pipe by the light-section method from above. 15 To realize measurements with reduced refractive index inhomogeneity, triangulation measurements were conducted while controlling the RIF's shape via superimposition of an external laminar air flow. The laminar flow allowed the acquisition of reference geometry data of a hot object subject to thermal expansion but only slightly affected by the RIF. The experimental results revealed an interesting fact: The hot cylinder's geometry measured under full influence of the RIF led to a significantly smaller cylinder radius compared to the hot measurement with reduced convective flow. As the cylinder's temperature differed only slightly between the two measurements, the documented change in radius could not be caused by a difference in thermal expansion. Therefore, it must have been induced by light deflection in the RIF. The design of the light-section experiment permitted a more or less accurate documentation of the virtual geometry manipulation due to heat-induced inhomogeneous RIFs but did not allow a deeper analysis of the nature of the deflection.
Superimposing a laminar flow in order to "homogenize" the RIF is a rather complex method to obtain a hot, non-RIF-affected reference measurement, to which a hot, RIF-affected measurement can be compared. Furthermore, the success of this approach highly depends on the object's geometry and its influence on the external air flow behavior. If a hot object were not subject to thermal expansion, a cold object measurement could serve as reference in order to exclusively expose the RIF effect on a measurement. This can be achieved by means of software: if just the heat input into air but not the measurement object's thermal expansion is numerically modeled, the object's geometry in hot and cold state is the same. In this scenario, deviations from the geometry in hot state are exclusively caused by the RIF and can be revealed by simply comparing the object's geometry in hot and cold state.
The starting point for the present article is the authors' former simulation results of the laser light path manipulation from the virtual illumination unit to the measurement object due to refractive inhomogeneity. The simulation setup is now extended by a virtual camera and a multistep ray tracing optimization in order to model a complete triangulation process.

Simulation Design: Assumptions and Boundary Conditions
This section comprises information on the geometrical simulation setup, the virtual triangulation sensor, and a detailed overview of the boundary conditions and theoretical models, such as the camera pinhole model and the derivation of the RIF induced by heat transfer.

Geometrical Setup and Refractive Index Field
The quantification of the virtual geometry manipulation by optical inhomogeneity in air requires a reference geometry. The geometry choice is guided by numerical needs: a horizontal cylinder guarantees robust conditions for the crucial density simulations based on heat transfer, as a numerically stable convective heat and density flow builds up above the shaft. This is indispensable for the derivation of the RIF. The geometrical dimensions of the simulation setup are outlined in Fig. 1. The cylinder has a diameter of 27 mm and a length of 170 mm. It has a starting temperature of 900°C, 1100°C, or 1250°C. These parameters are similar to a Tailored Forming workpiece after forming, postulating a slight cooling effect down to 900°C to account for workpiece handling time.
The heat transfer simulation requires the specification of the involved materials. As a start, a simple steel mono-material is chosen for the cylinder geometry in order to limit the simulation complexity. The relevant material parameters, e.g., the steel cylinder's thermal conductivity and specific heat capacity, are listed in Table 1. Humid air at a pressure of 1 atm is postulated as the surrounding medium. Furthermore, the expected convective flow is restricted to a laminar character. Turbulences are not reproduced in the model to save calculation costs and in order to keep the analysis of the subsequent ray tracing results as simple as possible. Further information on the used heat transfer equations goes beyond the scope of this paper and can be found in the provided software user guide for the heat transfer module. 14

Fig. 1 Geometrical dimensions of the simulation setup in mm with two cross sections of the hot steel cylinder (here: T steel = 1250°C) and the resulting inhomogeneous RIF. The RIF was derived from a heat transfer simulation after a simulation time of t = 15 s. The virtual triangulation sensor comprises a matrix camera and a laser line generator approximated by several discrete laser locations defining a plane via ray tracing. The triangulation angle α is 60 deg. 3-D geometry data is gained via the laser light-section method by intersecting the laser plane and the camera line-of-sight. In order to reveal the effect of the sensor location on the measurement result, the sensor is rotated by an angle β (0 deg, 15 deg, 30 deg) around the cylinder axis.
The following simulation routine has been implemented to gain an inhomogeneous 3-D RIF: First, the heat transfer from the hot measurement object into the surrounding air is simulated in order to gain a scalar 3-D density field with locally varying density values. The simulation is stopped after a simulation time of t = 15 s since this is the planned maximum time to position the hot measurement object in front of the sensor in an experimental setup. Subsequently, the density values are used to derive a scalar 3-D RIF using the Ciddor equation. 5 Ciddor introduced an equation for the refractive index in air dependent on wavelength, temperature, pressure, humidity, and CO2 content. By using the ideal gas law and postulating an isobaric state, a relationship between density and refractive index can be deduced. This approach is only accurate for moderate temperatures, as the Ciddor equation is only valid up to 100°C. Assuming that a density of ρ = 0 g/cm³ results in a refractive index n = 1, the Ciddor equation can be linearly extrapolated for the extreme density values in air that develop near the hot object. An exemplary simulation result for the RIF is displayed in Fig. 1 for a temperature of T steel = 1250°C, revealing the convective density flow above the cylinder and its symmetrical shape. A summary of the hypothesized simulation boundary conditions is given in Table 1.
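The linear extrapolation described above, anchored at n = 1 for ρ = 0, can be sketched as follows. The reference pair (ρ_ref, n_ref) is an illustrative assumption standing in for a Ciddor-equation evaluation at moderate temperature, not the paper's actual values.

```python
def refractive_index_from_density(rho, n_ref=1.000271, rho_ref=1.204e-3):
    """Linear density-to-refractive-index extrapolation.

    Assumes n = 1 at rho = 0 and a reference pair (rho_ref, n_ref),
    e.g. air near 20 C with rho_ref ~ 1.204e-3 g/cm^3 and n_ref from
    the Ciddor equation (both values here are illustrative assumptions).
    Then n(rho) = 1 + (n_ref - 1) * rho / rho_ref.
    """
    return 1.0 + (n_ref - 1.0) * rho / rho_ref

# Strongly heated air near the cylinder is far less dense, so n -> 1:
n_hot = refractive_index_from_density(0.3e-3)
n_cold = refractive_index_from_density(1.204e-3)
```

In the actual routine, `rho` would be the local value of the simulated 3-D density field, yielding the scalar 3-D RIF point by point.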

Optical Triangulation in Inhomogeneous Media: Simplified Outline
A simplified outline of a 2-D triangulation measurement setup with RIF effect, illumination unit (laser), and camera sensor is given in Fig. 2. To enhance clarity, the RIF is approximated by discrete air layers with different refractive index values n 1 , n 2 , n 3 , and n 4 . The air layer directly next to the hot cylinder surface features the lowest refractive index (n 1 ). For demonstration purposes, the sensor is positioned laterally to the measurement object. A 2-D point is represented by a bold character (e.g., A m ). Index m indicates a measured point. The blue (solid) line encodes the unaffected light path assuming homogeneous optical conditions; the red (dashed) line encodes the affected path in an inhomogeneous field. The surface of the cylinder is reconstructed by intersecting the activated camera's line-of-sight with the laser line (or, in 3-D, with the laser plane), leading to a measurement difference (B m − A m ) when comparing the affected and the unaffected scenario. The difference between the actual laser point A and the location A m measured by triangulation in a homogeneous scenario (cold cylinder) is small if a high triangulation accuracy is assumed. This is indicated by depicting A and A m in the same location ( w A ≈ w A m ). A loss in geometry information due to the sensor discretization is not considered in the simplified outline in Fig. 2.
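The discrete-layer picture of Fig. 2 can be sketched with Snell's law applied at each layer boundary. The incidence angle and the index values are illustrative assumptions; the point is only that a ray bends away from the layer normal as it enters hotter, lower-index air.

```python
import math

def trace_through_layers(theta_in_deg, n_layers):
    """Apply Snell's law n_i sin(theta_i) = n_(i+1) sin(theta_(i+1))
    across a stack of plane-parallel air layers (2-D sketch of Fig. 2).

    theta_in_deg: incidence angle against the layer normal, in degrees.
    n_layers: refractive indices from the outer (cool) layer toward the
    hot surface. Returns the propagation angle inside each layer.
    """
    angles = [math.radians(theta_in_deg)]
    for n1, n2 in zip(n_layers, n_layers[1:]):
        s = n1 * math.sin(angles[-1]) / n2   # Snell's law at the boundary
        angles.append(math.asin(s))
    return [math.degrees(a) for a in angles]

# Indices decrease toward the hot surface (n4 > n3 > n2 > n1 in Fig. 2),
# so the angle against the normal grows in every hotter layer:
angles = trace_through_layers(30.0, [1.000271, 1.000200, 1.000120, 1.000050])
```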

Virtual Triangulation Sensor
The actual simulation has been realized with a virtual 3-D triangulation sensor using the light-section method. It comprises a matrix camera and a telecentric laser line generator (see Fig. 1). The measurement results are given in the world coordinate system w K, if not declared differently. The laser is approximated by several discrete and equidistant laser rays, differing only in the w y-discharge value for ray tracing. As a telecentric laser line generator is used (the fan angle is 0 deg), the start vector defining the rays' tracing direction is assumed to be constant. Laser line generators with fan angles greater than 0 deg would require different ray tracing start vectors to reproduce the beam expansion. The virtual camera's projection center and the laser are positioned at a distance of 300 mm from the world coordinate system w K. The triangulation angle α is 60 deg. In order to examine the effect of the sensor pose on the measurement result, a rotation angle β is defined to adjust the sensor location relative to the cylinder axis. The exemplary angles are β = 0 deg, 15 deg, and 30 deg.
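The telecentric assumption (fan angle 0 deg) means the discrete laser rays differ only in their start point, not in their direction. A minimal sketch, with the nine rays and the 0 to −10 mm spacing taken from the results section and an assumed, illustrative discharge direction:

```python
import numpy as np

# Telecentric laser line generator: every ray shares one direction
# vector; only the w_y start coordinate differs (equidistant spacing).
n_rays = 9
start_points = [np.array([0.0, y, 300.0])        # 300 mm stand-off (assumed axis)
                for y in np.linspace(0.0, -10.0, n_rays)]
direction = np.array([0.0, 0.0, -1.0])           # identical for every ray

# A fan-angle > 0 generator would instead need a per-ray direction.
```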
To keep the simulation routine as simple as possible, the virtual camera is modeled as an ideal pinhole camera. This precondition leads to a set of assumptions:
• The camera's pinhole (aperture) is infinitesimally small and modeled by projection center C proj . Light diffraction effects when passing the pinhole are neglected, as well as lens distortion and aberration effects.
• The camera's depth of field is unlimited; blurring effects due to defocused imaging are not modeled.
A 3-D point cam x given in the camera coordinate frame cam K is mapped onto the pixel location img u according to

λ · img u = K img,cam · cam x, with K img,cam = [ f/m x , 0, c x ; 0, f/m y , c y ; 0, 0, 1 ], (2)

with f as the camera's physical focal length in mm, m x and m y as the pixel size in mm/pixel in x- and y-direction, and c x and c y as the shift in pixel between the two coordinate systems img K and cam K. λ is a scaling factor in mm that parametrizes the length of the camera's line-of-sight through a certain pixel img u. The camera matrix K img,cam comprises the intrinsic parameters of the modeled pinhole camera. In an experimental setup, the camera parameters can be approximated by a calibration routine (e.g., according to Ref. 17).
If a 2-D point needs to be reprojected into 3-D space, the scaling factor λ needs to be known (the length of the camera's line-of-sight). To this end, Eq. (2) can be transformed to

cam x = λ · K img,cam −1 · img u. (3)

The transformation between two different coordinate systems (for instance, between the world and the camera coordinate frame) can easily be realized with the help of a transformation matrix T, according to the definition in Eq. (4):

T cam,w = [ r 11 , r 12 , r 13 , t x ; r 21 , r 22 , r 23 , t y ; r 31 , r 32 , r 33 , t z ; 0, 0, 0, 1 ], (4)

where T cam,w combines rotation and translation to transform homogeneous data points from one coordinate frame to another. The rotation matrix is built from orthonormal vectors r 1 = (r 11 , r 21 , r 31 ) T , r 2 = (r 12 , r 22 , r 32 ) T , and r 3 = (r 13 , r 23 , r 33 ) T , the translation vector according to t = (t x , t y , t z ) T . The basic triangulation routine is realized by a simple plane-line intersection, as outlined in Fig. 3(a) (e.g., according to Ref. 18). The exemplary viewing direction onto the displayed triangulation setup is indicated in Fig. 3(b) (with the cylinder cross section, white arrow). The camera sensor is displayed in front of the camera's projection center (unlike the depiction in Fig. 2). This is done for demonstration purposes and in order to display the camera according to the mathematical definition of the pinhole model, as given in Eqs. (2) and (3). Although physically not correct, this basic mathematical camera pinhole definition (with the sensor in front of the projection center) is commonly used, as it simplifies the description of the mapping of a 3-D point onto the 2-D sensor (the image is not upside down, and no negative signs are needed; e.g., according to Ref. 19, p. 370 ff.). The mathematical definition of the camera's line-of-sight in coordinate frame cam K is represented by line g; the laser plane is given in the Hessian normal form and is represented by plane E.
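The forward mapping of Eq. (2) and the back-projection of Eq. (3) can be sketched as below. All numerical values (focal length, pixel pitch, principal point) are illustrative assumptions, not the paper's calibration values.

```python
import numpy as np

# Camera matrix K following the structure of Eq. (2): focal length f in mm,
# pixel sizes m_x, m_y in mm/pixel, principal point shift (c_x, c_y) in pixel.
f, m_x, m_y, c_x, c_y = 16.0, 0.022, 0.022, 512.0, 512.0
K = np.array([[f / m_x, 0.0,     c_x],
              [0.0,     f / m_y, c_y],
              [0.0,     0.0,     1.0]])

def project(x_cam):
    """Map a 3-D point in the camera frame to pixel coordinates (u, v)."""
    uvw = K @ x_cam
    return uvw[:2] / uvw[2]          # divide out the homogeneous scale

def backproject(u, v, lam):
    """Eq. (3): point on the line-of-sight at scaling factor lam (mm)."""
    return lam * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

# Round trip: project a point at 300 mm depth, then recover it.
u, v = project(np.array([10.0, -5.0, 300.0]))
x_back = backproject(u, v, 300.0)
```

The scaling factor λ is exactly the depth information that is lost in the 2-D image; triangulation recovers it by intersecting the line-of-sight with the laser plane.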
If a laser line is projected onto the measurement object, the line is deformed subject to the object's geometry. This line deformation is captured by the camera. A specific laser line dot activates a specific camera pixel img u. If the camera's line-of-sight g through this specific pixel is constructed and intersected with the laser plane E, the 3-D information of the laser line point can be reconstructed. As laser plane E is given in the simulation in the coordinate frame of the laser laser K, it first has to be transformed into the coordinate frame of the camera cam K by an appropriate transformation matrix T laser;cam to intersect g and E.
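The plane-line intersection underlying the triangulation step can be sketched as follows, with the laser plane in Hessian normal form as in the text. The concrete plane and line parameters are illustrative assumptions.

```python
import numpy as np

def intersect_line_plane(c, a, n, d):
    """Intersect line g: x = c + t*a with plane E given in Hessian
    normal form n . x = d. Assumes the line is not parallel to E."""
    t = (d - n @ c) / (n @ a)
    return c + t * a

# Hypothetical light-section configuration in the camera frame: the
# line-of-sight starts at the projection center (origin) with direction a;
# the laser plane has unit normal n and offset d (values illustrative,
# loosely following the 60 deg triangulation angle).
c = np.zeros(3)
a = np.array([0.0, 0.1, 1.0])
a = a / np.linalg.norm(a)
n = np.array([0.0, np.sin(np.radians(60.0)), np.cos(np.radians(60.0))])
d = 150.0
p = intersect_line_plane(c, a, n, d)   # reconstructed 3-D laser line point
```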

Ray Tracing in Inhomogeneous Optical Media
In this section, theoretical background information on the used ray tracing algorithm is given. The derived iterative optimization routine is presented in a step-by-step pseudocode format in order to enhance comprehensibility.

Theoretical Background
The following Eqs. (5)-(7) are taken from the provided ray tracing software user guide. 14 A derivation of the presented equations goes beyond the scope of this paper. Nevertheless, the equations are cited to provide physical background information for inhomogeneous ray tracing. More detailed information can be found in Born et al., 20 Saleh and Teich, 21 and Krueger. 22 The ray tracing algorithm in Comsol is deduced from the principles of wave optics. Basic assumptions are that the electromagnetic ray is observed at locations far from the light source and that its amplitude changes very slowly with time and place. The electromagnetic field can therefore be approximated locally by plane waves. The mathematical description of the amplitude is neglected. In this case, the rays' phase is nearly linearly dependent on time and position according to

ψ(r, t) ≈ k · r − ω · t + ψ 0 , (5)

with phase ψ, position vector r, wave vector k, time t, angular frequency ω, and ψ 0 as an arbitrary phase shift. 22 Equation (5) allows the derivation of six coupled first-order ordinary differential equations, Eqs. (6) and (7), given in vector notation:

dr/dt = ∂ω/∂k, (6)

dk/dt = −∂ω/∂r. (7)

The equations need to be solved with respect to k and r to calculate ray trajectories in inhomogeneous media. Fermat's principle can be gained from these equations, using the so-called eikonal. 21 Fermat's principle is defined based on the path of light, but not on the path direction. This means that the light path can be simulated in either direction, as long as it passes the same two points: from object to camera or inversely from camera to object. This so-called inverse principle 13 is helpful for the iterative ray tracing optimization: the starting point for ray tracing is the camera and not the laser light point on the measurement object.
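To illustrate ray bending in an inhomogeneous medium, the equivalent textbook ray equation d/ds (n dr/ds) = grad n can be integrated with a simple explicit Euler scheme. This is a sketch in the spirit of Eqs. (6) and (7), not the Comsol solver; the toy RIF below is an assumption chosen so the expected bending direction is obvious.

```python
import numpy as np

def trace_ray(r0, dir0, n_func, grad_n_func, ds=1e-3, steps=1000):
    """Integrate the geometrical-optics ray equation
    d/ds (n * dr/ds) = grad(n) with explicit Euler steps.

    r0: start position, dir0: unit start direction, ds: arc-length step.
    Uses the "optical momentum" p = n * dr/ds as integration variable.
    """
    r = np.asarray(r0, float)
    p = n_func(r) * np.asarray(dir0, float)
    path = [r.copy()]
    for _ in range(steps):
        r = r + ds * p / n_func(r)     # advance the position
        p = p + ds * grad_n_func(r)    # bend toward increasing n
        path.append(r.copy())
    return np.array(path)

# Toy RIF: the index increases with y (denser air above in this toy),
# so a ray launched along z is bent toward positive y.
n = lambda r: 1.000271 + 1e-4 * r[1]
grad_n = lambda r: np.array([0.0, 1e-4, 0.0])
path = trace_ray([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], n, grad_n)
```

This reproduces the qualitative statement from the introduction: the light path is bent toward denser (higher-index) air layers.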

Measurement Simulation with Iterative Ray Tracing Optimization
An iterative approximation of the light path from the laser incidence location on the cylinder surface to the camera needs to be implemented in order to approximate the corresponding camera pixel location onto which the laser dot is projected. Alternatively, referring to the inverse principle, the camera can be the starting point for the iterative approximation, as light takes the same path from point G 1 to G 2 as from G 2 to G 1 . Provided the inhomogeneous RIF around the cylinder has been derived, the measurement simulation for a single data point can be summed up by the subsequent steps, referring to the parameter labeling in Fig. 4.
1. Ray tracing from the laser illumination unit to the measurement object to gain the light incidence location w B l on the object surface in the world coordinate system w K.
2. Optimization: Inverse ray tracing from the camera to the object surface to iteratively approximate the location w B l by the RIF-affected camera line-of-sight through the pixel location img u l on the camera sensor. The resulting position on the object surface after k iteration steps is defined as w B k . A helper coordinate system h K is introduced in order to calculate distances parallel to the camera sensor.

Set start values:
• The start pixel img u s = img u 1 = img (u 1 , v 1 ) T is gained by mapping w B l linearly (without RIF effect) onto the camera sensor with the help of Eqs. (2) and (4).

Fig. 4 Approximation of the laser dot location w B l = w (B l,x , B l,y , B l,z ) by the camera line-of-sight via multistep ray tracing optimization. For demonstration purposes, only the zy-plane of the world coordinate system w K is illustrated. At the beginning of the optimization process, the directional vector for ray tracing cam a 1 is constructed through the camera projection center C proj and the start pixel img u s = img u 1 = img (u 1 , v 1 ) T . cam a k is iteratively adapted in dependency of the distance h d k = | h (B k,x , B k,y ) T − h (B l,x , B l,y ) T | between the target location h B l and the actual position h B k on the cylinder surface in the helper coordinate system h K, parallel to the camera sensor. By comparing the corresponding x- and y-values, the new pixel location is defined; for example, in the x-direction, in case of h d 1 < h d min , if the deviation of step k = 1 fulfills Δx ≤ 0, the pixel u-location for the next iteration step k = 2 is adapted accordingly. img Δu k is reduced with every iteration step. The optimization procedure is stopped if the absolute value of the Euclidean norm h d k is smaller than the maximum acceptable deviation radius h r max . Another stop criterion is the maximum number of iteration steps k.
3.2 Reconstruction of the 3-D point by triangulation (intersection of the camera's line-of-sight through img u min and the laser plane).
The main challenge arises from step 2, in which the projected laser dot location in terms of the pixel location img u l is approximated. As an idealized pinhole model is hypothesized, light mapped onto the 2-D camera sensor is forced to pass the projection center C proj . In a first step, the start pixel img u s = img u 1 is calculated by linearly mapping w B l onto the camera sensor with the help of Eqs. (2) and (4). Due to the pinhole assumption, a directional vector cam a k can be constructed through the projection center C proj and img u 1 , leading to the light discharge direction for ray tracing in iteration step k = 1. After the initial ray tracing simulation (step 2.3), the actual distance h d k = | h (B k,x , B k,y ) T − h (B l,x , B l,y ) T | between the actual light incidence location w B k and the target location w B l is calculated in the helper coordinate system h K (step 2.4). There is no need to compare the z-values, as depth information is lost when imaging. If the condition h d k < h d min is fulfilled, h d min being the smallest distance value so far, both img u min and h d min are updated with the actual step values. Provided the maximum number of iterations k stop is not reached and h d min is not smaller than the maximum allowed deviation radius h r max , the new pixel location img u k+1 is determined according to step 2.5 with the iterative pixel step size Δu k . Δu k is adapted in dependency of the actual iteration step k and the width w sg and height h sg of the pixel search grid. To prevent an erroneous mapping of undercut points onto the camera sensor, the z-distance is finally checked in step 3.1 by calculating the Euclidean norm | h ΔB k,z,min | = | h B k,z,min − h B l,z |. If | h ΔB k,z,min | is bigger than an initially defined threshold value h z max , the corresponding camera point is not used for triangulation.
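The optimization loop of step 2 can be sketched as follows. The full RIF-affected ray trace is replaced by a placeholder callable, and the shrinking-step search is a simplified stand-in for the paper's search-grid scheme; all parameter values are illustrative assumptions.

```python
import numpy as np

def optimize_pixel(u_start, target_xy, incidence_xy, du_start=64.0,
                   r_max=1e-3, k_stop=50):
    """Sketch of the iterative ray tracing optimization (step 2):
    adapt the pixel location until the camera line-of-sight hits the
    object surface close to the target laser dot location.

    incidence_xy(u): placeholder for the full RIF-affected ray trace;
    returns the (x, y) surface hit, parallel to the sensor, for pixel u.
    """
    u = np.asarray(u_start, float)
    du = du_start
    u_min, d_min = u.copy(), np.inf
    for k in range(k_stop):
        diff = incidence_xy(u) - target_xy     # signed deviation (h_d_k)
        d = np.linalg.norm(diff)
        if d < d_min:                          # keep best pixel so far
            u_min, d_min = u.copy(), d
        if d_min < r_max:                      # h_r_max reached: stop
            break
        u = u - du * np.sign(diff)             # step toward the target
        du *= 0.5                              # shrink the pixel step size
    return u_min, d_min

# Stand-in for the ray tracer: a monotone linear pixel-to-surface map.
hit = lambda u: 0.01 * (np.asarray(u, float) - 500.0)
u_best, d_best = optimize_pixel([540.0, 470.0], np.array([0.0, 0.0]), hit)
```

With a monotone mapping, halving the step each iteration behaves like a per-axis binary search, which is why the loop closes in on the target pixel.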
In an experimental (not simulated) triangulation measurement, the limited lateral resolution of the camera sensor restricts the exact mapping of a 3-D world point onto the sensor. Furthermore, a light-section measurement depends on the accurate localization of the laser's center line in the camera image, e.g., by fitting Gaussian distribution curves to the laser line's intensity profiles. This approach permits subpixel accuracy. To take this discrete and virtual increase in pixel number into account, the value for img u min is rounded to a virtual pixel size of 0.25 pixel. As the lateral resolution of the camera sensor is 22 μm (compare to Table 1), the maximum deviation radius h r max is set to a value of 1 μm. A stricter threshold is not necessary, as even the assumed subpixel accuracy of 0.25 pixel only allows a mapping of areas of 5.5 μm × 5.5 μm onto the camera sensor.
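The rounding to the assumed 0.25-pixel subpixel grid amounts to snapping the quasi-continuous pixel coordinate to the nearest quarter pixel:

```python
def round_to_subpixel(u, grid=0.25):
    """Round a quasi-continuous pixel coordinate to the assumed subpixel
    grid (0.25 pixel, i.e. 5.5 um at the 22 um pixel pitch)."""
    return round(u / grid) * grid

u_discrete = round_to_subpixel(512.37)   # snaps to 512.25
```

The worst-case rounding error is half the grid, 0.125 pixel, which at 22 μm per pixel corresponds to 2.75 μm on the sensor.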

Results
The results in this section are based on the boundary conditions according to Table 1 and the geometry setup depicted in Fig. 1. First, detailed information on a triangulation measurement from above is presented (to ensure full influence of the inhomogeneous RIF on the laser light path). The gained results are analyzed in Sec. 5.2. Results for different cylinder temperatures and sensor poses are presented in Sec. 5.3.

Cylinder Geometry Measurement by Light-Section Method
The steel cylinder temperature is set to 1250°C. The triangulation sensor is not rotated around the cylinder axis (β = 0 deg) to realize a measurement from above under full influence of the RIF (see Fig. 1, right side). Nine discrete light paths from laser to camera are simulated, differing only in the w y-discharge location (equidistantly arranged from 0 to −10 mm) but with the same directional vectors (laser with telecentric lens). The parameter nomenclature is given in Fig. 5, and the corresponding simulation results are depicted in Figs. 6 and 7. The laser incidence location on the cylinder for a homogeneous RIF (cold cylinder) is given in the world coordinate frame w K as w A, the laser incidence location for an inhomogeneous RIF (hot cylinder) as w B. The corresponding locations on the camera sensor are img B and img A, given in pixel in the coordinate frame img K. Normally, the pixel location on the sensor is defined by the letter u. This nomenclature is deviated from in this section to ensure a clear distinction of parameters and in order to avoid the introduction of further indexes. The measured 3-D points for the cold and hot cylinder scenario are w A m and w B m (see Fig. 5). The results in Fig. 7 are given as distances between two points [e.g., ( w B − w A)], where not only the differences between the scalar entries [e.g., Δx, Δy, and Δz] are presented but also the 2-D or 3-D Euclidean norms of the distances between two points (e.g., h d min , d euclid,xyz , or d euclid,uv ).
In an experimental triangulation measurement, a camera sensor always operates as a low-pass filter, as information is lost due to discretization. The difference between the actual laser point location w A and the location w A m measured by triangulation in a homogeneous scenario (cold cylinder) is relatively small, as a subpixel accuracy of Δu = Δv = 0.25 pixel is assumed for the detection of the points and no further light deflection is induced by the surrounding medium air in a cold scenario. This is indicated in the measurement outline in Fig. 5 by depicting w A and w A m in the same location ( w A ≈ w A m ).
The term quasi-continuous indicates that a result is given not rounded to the camera sensor's discretization limitation of 0.25 pixel but according to the output of the ray tracing optimization routine. The iteration step k defines the optimization routine's variable pixel step size (compare to step 2.5 in Sec. 4.2). Therefore, depending on the routine's stop criteria, the sensor pixel onto which the laser dot is projected is determined more accurately than given by the sensor's subpixel accuracy (0.25 pixel). As this more accurate pixel location is still not continuous (the iteration stops at a discrete value k), the nonrounded results are called quasi-continuous. As the RIF-induced deflection effects are easier to interpret without sensor discretization, the simulated data in Fig. 7 are given for quasi-continuous sensor conditions. Discrete results (rounded to 0.25 pixel) are discussed later. The quality of the iterative ray tracing optimization according to Sec. 4.2 is checked by analysis of Fig. 6: The maximum deviation radius h r max has been set to 1 μm. Therefore, the difference h d min = | h ΔB xy,min | between the actual laser incidence location h B and the optimized location h B min in the helper coordinate frame h K has to be smaller than this threshold value. This is the case, as all values for h d min stay below 1 μm.

Fig. 6 Distance h d min = | h ΔB xy,min | between the real and the optimized laser point locations, given in the helper coordinate frame h K. Simulation result for a triangulation measurement from above (β = 0 deg) and a cylinder temperature of T steel = 1250°C (compare to the setup depicted in Fig. 1). The parameter nomenclature is given in Fig. 5.

Fig. 7 Simulation results for a triangulation measurement from above (β = 0 deg) and a cylinder temperature of T steel = 1250°C (see setup depicted in Fig. 1). The parameter nomenclature is given in Fig. 5.
The simulated curve in Fig. 7(a) depicts the displacement of the laser incidence location ( w B − w A) on the cylinder surface in the world coordinate frame w K. The curve reveals the effect of the cylinder curvature on the measurement: With decreasing laser w y-discharge values, Δz and Δy continuously decrease (their absolute values increase). This is due to the changing surface gradient Δz/Δy of the cylinder when moving away from the origin of the coordinate frame w K in negative w y-direction [see Fig. 5(b); the cylinder's "shoulder slope" is getting steeper]. The Δx value increases [see the definition of the w x-axis in Fig. 5(a)]. This geometry effect due to the cylinder curvature is absent or only weakly present for w y = 0 mm. In this case, the laser's start vector (for ray tracing) points directly at the origin of the coordinate frame w K. The surface gradient Δz/Δy( w y = 0) is 0. Therefore, the cylinder's curvature does not "boost" small RIF-induced deflection values if a light ray hits the surface in the vicinity of the origin of the coordinate frame w K.
The simulated data in Fig. 7(b) give information on the difference between the actual laser incidence location w B on the hot cylinder surface and the measured point w B m . A thought experiment helps to reveal the significance of the data: Provided the light deflection from laser to object only happens inside the laser plane, and no further deflection occurs from object to camera, the measured point w B m would not differ from the real point w B. This means that in the very unlikely event of a purely laser-plane-bound light deflection, and if no deflection from object to camera is induced at all, the correct 3-D point would be triangulated. Figure 7(b) proves that this is not the case for the simulated triangulation measurement. Furthermore, the large Euclidean distances of up to 75 μm between the measured 3-D point w B m and the real 3-D point w B demonstrate clearly that the measurement deviation due to an inhomogeneous RIF cannot be neglected. The measurement result for homogeneous conditions [compare to Fig. 7(e)] shows a maximum deviation of ∼2.7 μm for d euclid,xyz . The results are given rounded to the camera sensor's discretization limitation of 0.25 pixel. This allows a mapping of areas of 5.5 μm × 5.5 μm onto a quarter of a pixel. The maximum discretization error by rounding is therefore 2.75 μm, which matches the maximum value of d euclid,xyz in Fig. 7(e). As no further light deflection is induced by the surrounding medium air in the simulation scenario with the cold cylinder, the absolute distance between w A and w A m is very small and can be explained exclusively by sensor discretization. The data in Fig. 7(b) give no information on whether the 3-D point w B m is coincidentally a point on the cylinder surface. In order to verify this, the closest distance between w B m and the numerical cylinder surface has to be calculated. The additional merit of such an analysis is limited, as it only slightly helps to understand the deflection in the RIF.
Within the scope of this work, the measured points w A m in the cold cylinder state are used as reference data for the evaluation of the measured points w B m in the hot state. The difference between these points is depicted in Fig. 7(c).

Analysis: Superimposition of Light Deflection
To allow an interpretation of the difference ( w B m − w A m ) according to Fig. 7(c), the laser point displacement on the camera sensor ( img B − img A) in Fig. 7(d) has to be analyzed. The laser light displacement is a superimposition of the deflection in two different paths: the deflection from laser to cylinder surface and the deflection from cylinder surface to camera. If both paths' deflection effects are opposed to each other, the resulting pixel displacement on the sensor is reduced. Thereby, the decrease in region (II) in Fig. 7(d) can be explained. The basic procedure to separate the paths' deflection effects is depicted in Figs. 8(a)-8(c). A note in advance to avoid confusion: the deflection of point w B is not necessarily restricted to the laser plane [as depicted in Fig. 8(a), compare to the red, dashed line]; it is drawn in-plane for demonstration purposes only.
To gain the laser light displacement for the path "laser to object," both w A (blue, solid line) and w B (red, dashed line) are linearly projected onto the camera sensor, resulting in two corresponding pixel locations img A (blue, solid line) and img B laser→object (red, solid line) [compare to Fig. 8(a)]. By doing this, the deflection induced in the path "object to camera" is not taken into account. This deflection effect is derived according to Fig. 8(b): the linear projection of w B onto the camera sensor in location img B laser→object is compared to the nonlinear, RIF-affected projection in location img B object→cam (see the red, dashed line from w B to the camera). If both pixel displacement values are now superimposed [see Fig. 8(c)], the resulting displacement must match the results depicted in Fig. 7(d). A special scenario is depicted in Fig. 8(c): the resulting pixel displacement from laser to camera can be close to zero, leading to a small difference between w B m and w A m , even though the light deflection in the two different paths is non-negligible [see Fig. 8(c)]. The depicted approach in Fig. 8 is nevertheless also valid for points w B m and w A m that are reconstructed in different locations.
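The superimposition itself is plain vector addition of the per-path pixel displacements; a minimal sketch with hypothetical values whose magnitudes loosely mirror those in Fig. 9 (the exact numbers are invented):

```python
import numpy as np

# Hypothetical per-path displacements (Delta_u, Delta_v) in pixels for one ray.
d_laser_to_object = np.array([-4.0, 0.5])
d_object_to_cam = np.array([3.5, -0.3])

# The camera only observes the superimposed displacement; opposed signs
# partially cancel, shrinking the net pixel displacement on the sensor.
d_laser_to_cam = d_laser_to_object + d_object_to_cam
print(d_laser_to_cam)  # small residual despite large per-path deflections
```

This is the arithmetic behind the near-zero scenario of Fig. 8(c): large, opposed per-path deflections can leave almost no trace on the sensor.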
The suggested routine has been applied to the pixel displacement in Fig. 7(d) and is outlined in Fig. 9. Not only is the laser point displacement on the camera sensor given for both light paths [see Figs. 9(a)-9(c)], but also a graphical interpretation of the displacement on the sensor for the laser light path corresponding to w y = −7.5 mm [see Figs. 9(d)-9(f)]. First of all, the resulting pixel displacement by superimposition in Fig. 9(c) is the same as in Fig. 7(d); the suggested approach is therefore legitimate. Moreover, especially the progression of the pixel displacement in the u-direction indicates that the induced light deflection values are opposed to each other: Δu laser→object decreases to a pixel value of approximately −4 for w y = −10 mm [Fig. 9(a)], whereas Δu object→cam increases to a value of more than 2 pixel [Fig. 9(b)]. The resulting value Δu laser→cam varies around a value of −0.5 pixel [Fig. 9(c)].
The value for d euclid,uv ( w y = −7.5 mm) in Fig. 9(c) is therefore not contradictory: the consideration of both light paths results in a reduction of the pixel displacement [see also the graphical interpretation in Figs. 9(d)-9(f)], which in turn leads to a reduced difference for the obtained values d euclid,uv in region (II) in Fig. 7(d). This special scenario is depicted exemplarily in Fig. 8(c): the resulting 3-D point w B m (hot cylinder) is depicted in the same location as point w A m (cold cylinder). As the resulting pixel displacement increases again for light rays with w y < −7.5 mm, the distance between w B m and w A m rises as well.
To gain a deeper understanding of an exemplary light refraction scenario, the rounded (discrete) interpretation of Fig. 7(c) is analyzed for the laser ray with a discharge value of w y = 0 mm. The graphical result of this analysis is given in Fig. 10(a), based on the rounded pixel displacement [to the sensor subpixel accuracy of 0.25 pixel, Fig. 10(b)]. First of all, there is only a slight difference between the graphs in Figs. 7(d) and 10(b). The difference would be bigger if the subpixel accuracy were further limited, for instance, to a value of 0.5 pixel. The laser light path for a w y-discharge value of 0 mm is only marginally deflected in the w y-direction due to the symmetry of the RIF to the xz-plane (in w K). The cylinder curvature only has a small influence on the resulting incidence location w B on the cylinder surface. Therefore, only the xz-plane is depicted in the graphical analysis in Fig. 10(a). When the laser light enters the inhomogeneous RIF from the left side, the ray is deflected downward toward denser air layers, where greater refractive index values are present [see the vertical black lines in Fig. 10(a); the lines separate areas with different refractive index values]. This leads to the dashed light path. The closer the ray moves toward the cylinder, the more a horizontal expansion and variation of the refractive index predominates (see the horizontal black lines). Therefore, the ray is refracted away from the cylinder, toward the denser surrounding air. The ray reaches the cylinder in location w B. As the inhomogeneous RIF is basically symmetric to the zy-plane of the world coordinate frame w K (see Fig. 1), the path from cylinder to camera is flipped vertically in the graphical interpretation according to Fig. 10(a). Based on this simplification, the triangulated measurement point w B m is reconstructed above the real cylinder surface. This matches the obtained results for Δz and Δx [see Fig. 10(b)].
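The bending toward denser air can be sketched with a toy gradient-index ray tracer (this is a generic illustration with an invented refractive index field and simple Euler integration, not the paper's ray tracer): the ray direction is continuously steered toward the gradient of the refractive index, i.e., toward denser air.

```python
import numpy as np

def n_air(pos):
    """Toy refractive index field in the (y, z) plane: air gets denser
    (higher n) with increasing height z above an assumed hot region."""
    y, z = pos
    return 1.000293 - 2e-7 * max(0.0, 50.0 - z)

def grad_n(pos, h=1e-3):
    """Central-difference gradient of the refractive index field."""
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (n_air(pos + e) - n_air(pos - e)) / (2.0 * h)
    return g

def trace(pos, direction, ds=0.1, steps=1000):
    """Euler integration of the simplified ray equation d/ds(n*t) = grad(n):
    the unit direction t is bent toward higher refractive index."""
    t = direction / np.linalg.norm(direction)
    for _ in range(steps):
        t = t + ds * grad_n(pos) / n_air(pos)
        t = t / np.linalg.norm(t)
        pos = pos + ds * t
    return pos

# A horizontal ray entering the field drifts toward denser air (higher z here):
end = trace(np.array([0.0, 10.0]), np.array([1.0, 0.0]))
print(end)
```

In the simulated RIF, the sign and magnitude of this gradient change along the path, which is exactly why the ray in Fig. 10(a) bends first toward and then away from the cylinder.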
The interpretation in Fig. 10(a) explains the simulation result in Fig. 10(b) for the laser ray with discharge value of w y ¼ 0 mm. It also reveals the complexity of light refraction in an inhomogeneous RIF.

Comparison: Different Cylinder Temperatures and Sensor Poses
The results section closes with a comparison of different measurement scenarios. To this end, the steel cylinder temperature (900°C, 1100°C, 1250°C) and the triangulation sensor pose (0 deg, 15 deg, 30 deg) are varied. In Figs. 11(a)-11(c), different data curves for a measurement from above with β = 0 deg are depicted, revealing the influence of a temperature increase: the pixel displacement on the camera sensor and the measurement difference ( w B m − w A m ) indicate an increase due to temperature only for the measurement points corresponding to the laser rays with w y-discharge values from 0 to −5 mm. This is due to the expansion of the inhomogeneous RIF (see Fig. 1, right side with cylinder cross section): as the RIF variation develops its full effect directly above the hot cylinder due to the convective density flow, the spatial region in which a cylinder temperature increase takes effect is widely expanded. Smaller light ray w y-discharge values (−10 mm < w y < −5 mm) do not lead to differences, except for w y = −10 mm. This might be explained by the analysis in Sec. 5.2: the resulting light deflection is reduced due to the superimposition in the path from laser to object and from object to camera. This effect, based on the symmetry of the RIF, is not affected by a temperature increase, as the symmetry of the RIF does not change.
The discretization effect can be evaluated when comparing Figs. 10(a) and 10(b). Due to the pixel rounding to a value of 0.25 pixel (subpixel accuracy of the camera sensor), the RIF-induced deflection effect is "discretized" as well: curve (a.2) shows more abrupt steps than curve (a.1). The triangulation sensor's resolution therefore affects the "reproduction" of the deflection effect in an inhomogeneous RIF.
The effect of a sensor rotation around the cylinder axis (see Fig. 1, right side) on the laser point displacement is depicted in Fig. 11(d) for a cylinder temperature of 900°C. The displacement is reduced with increasing angle β. The simulated curves therefore indicate the obvious: if a measurement is not performed directly through the greatest expansion and variation of the inhomogeneous RIF, but rather sideways through less expanded regions, the pixel displacement is reduced and the corresponding measurement more trustworthy. The measurement difference for the rotated sensor is not depicted: the Euclidean distance is below 20 μm for all laser rays for β = 15 deg and below 7 μm for β = 30 deg.

Fig. 9 Laser point displacement on the camera sensor for a measurement scenario from above according to the results in Fig. 7(d), separated into the deflection induced in the different light paths: (a) for the light path "laser to object," (b) for the path "object to camera," and (c) for the complete path "laser to camera." (d)-(f) An exemplary graphical interpretation of the displacement on the sensor for the laser ray with w y = −7.5 mm.

Summary and Conclusion
In this paper, a virtual triangulation setup based on the light section method is presented, using a matrix camera with an entocentric lens as detection unit and a telecentric laser line generator as illumination unit. Geometry measurements of a cylinder in different temperature states are simulated and compared in order to analyze the effect of an inhomogeneous RIF on triangulated measurement data. To this end, detailed information is given on the simulation design, comprising the numerical calculation of the inhomogeneous RIF via heat transfer simulations and the modeling of the virtual sensor (camera pinhole model), as well as the reconstruction of 3-D points via triangulation (Sec. 3). In Sec. 4, theoretical background is given on the applied ray tracer, together with an extensive pseudocode description of the implemented iterative optimization routine that reproduces a point projection onto a pinhole camera while taking light refraction into account. Simulation results, using the derived virtual triangulation routine, are presented and discussed in detail in Sec. 5. The analysis of the measurement differences for homogeneous and inhomogeneous optical conditions leads to the following conclusions: the measurement object's geometry directly influences the laser point displacement on the object's surface and, therefore, the RIF-induced light deflection effects [compare to Fig. 7(a)]. Furthermore, the absolute light deflection, as seen by the measurement camera in terms of pixel displacement, is a superimposition of the deflection effects in the path from illumination unit to object and from object to detection unit (Fig. 9). These path deflections can be opposed in their effect, resulting in a much smaller camera pixel displacement than expected.
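The reconstruction principle of the light section method summarized above can be condensed into a few lines; the following sketch assumes an ideal pinhole camera and homogeneous conditions (the focal length, plane orientation, and test point are invented for illustration):

```python
import numpy as np

def pinhole_project(P, f=16.0):
    """Ideal pinhole projection of a camera-frame point P (z > 0) onto the image plane."""
    return f * np.array([P[0] / P[2], P[1] / P[2]])

def triangulate(uv, f, plane_point, plane_normal):
    """Intersect the back-projected camera ray with the laser (light section) plane."""
    ray = np.array([uv[0], uv[1], f])             # ray through pixel and pinhole
    s = (plane_normal @ plane_point) / (plane_normal @ ray)
    return s * ray                                # 3-D point on the laser plane

# Round trip under homogeneous conditions: rectilinear propagation makes the
# reconstruction exact; an inhomogeneous RIF breaks exactly this assumption.
P = np.array([5.0, -3.0, 200.0])                  # hypothetical surface point
n = np.array([0.0, 0.6, 0.8])                     # laser plane through P (unit normal)
P_rec = triangulate(pinhole_project(P), 16.0, P, n)
print(np.allclose(P_rec, P))  # True
```

In the inhomogeneous RIF, the back-projected ray no longer coincides with the actual curved light path, which is the root cause of the deviations reported in Sec. 5.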
Moreover, already the one-way light path manipulation from camera to object demonstrates the complexity of light refraction in a heat-induced convective density flow: the refractive field's shape, extension, and magnitude can result in a refraction toward the hot object as well as away from it on the same light path [see Fig. 10(a)], complicating the interpretation of the obtained result.
By changing the triangulation sensor's pose in relation to the measurement object, the camera pixel displacement can be reduced [Fig. 11(d)], leading to more accurate triangulation results.
Lateral measurements or measurements from underneath the object are therefore an alternative to reduce light refraction effects in an experimental setup, as the refractive field's extension and deflection effect are limited there. Unfortunately, this approach is not sufficient if 360 deg geometry data are required at the same measurement time in order to capture the full shrinkage process of wrought-hot, hybrid workpieces.
A possible solution to enable high-precision geometry measurements of hot objects lies in actuated or computer-assisted routines that either guarantee a rectilinear propagation of light despite the object's heat or allow a subsequent correction of RIF-disturbed measurements. If compensation algorithms for a subsequent geometry data correction are to be derived from simulation results, all parameters of the real measurement setup have to be considered in the simulation (e.g., sensor resolution and pose, triangulation angle, object geometry and temperature). In particular, the dynamics of the heat-induced refractive field have to be taken into account, as an areal triangulation measurement by a structured light system requires the acquisition of an image sequence over time.

Forthcoming Work
The presented simulation model will be developed into a virtual fringe projection system to allow virtual areal measurements. To this end, the telecentric laser line generator is replaced by a projector. As a projector can be considered an inverse pinhole camera, 23 the same model implementation is used for the detection unit (camera) and the illumination unit (projector). The complete virtual triangulation setup is defined and visualized in a MATLAB 24 script (e.g., the sensor pose and the camera's focal length, compare to Fig. 12) before the simulation boundary conditions are passed to the simulation platform Comsol. The model will not be able to virtually reproduce the projection of an image sequence to solve the projector pixel to camera pixel correspondence problem. Fortunately, this is not necessary, as the correspondence problem is solved via the presented optimization routine. Further work will include the implementation of a camera lens distortion model, the investigation of the RIF dynamics, and the parallelization of the ray tracing routines in order to speed up the whole virtual triangulation process. A faster and denser reconstruction of surface data by virtual triangulation would allow the evaluation of the standard deviation when fitting a cylinder into RIF-affected and nonaffected measurement data. This would enable a more general analysis of heat-induced light deflection and its effect on triangulation measurements.