Full simulation model for laser triangulation measurement in an inhomogeneous refractive index field

20 November 2018

Rüdiger Beermann, Lorenz Quentin, Gunnar Stein, Eduard Reithmeier, Markus Kästner
Abstract
The optical inspection of wrought-hot workpieces between subsequent forming steps of a multistage process chain can yield diverse advantages. Deficient components can be detected at an early forming stage. Moreover, eliminating the intermediate cooling saves heating energy, as the present workpiece temperature can be exploited in the following chain steps. Challenges arise due to the heat input into the air surrounding the workpiece, as triangulation techniques rely on homogeneous optical conditions. The effect of an inhomogeneous refractive index field (RIF) in air on a 3-D geometry measurement by optical triangulation is modeled by the example of a virtual measurement of a hot cylinder. To our knowledge, this is the first simulation approach that fully considers the light deflection both from the illumination unit to the object and from the object to the camera. Simulated measurement results in a homogeneous and an inhomogeneous RIF are compared. The presented approach predicts measurement deviations in inhomogeneous optical media and can help to design actuated or computer-assisted compensation routines that reduce deflection effects when measuring hot objects.

1. Introduction

The optical triangulation method is a state-of-the-art technique for acquiring geometry data of complex freeform geometries and is used at different scales.1 A common industrial application is the inspection of formed metal sheets in the automotive sector by fringe projection systems,2 whereas endoscopic systems with small measurement heads for confined spaces are being investigated for in-situ inspection tasks (e.g., the restoration of turbine blades3).

Both the fringe projection and the laser light-section method require homogeneous measurement conditions in terms of the surrounding optical medium’s refractive index, as optical triangulation assumes a rectilinear propagation of light.4 Although the refractive index of air depends on various parameters—such as humidity, pressure, and the CO2 content—it varies only slightly if temperature and pressure can be considered constant.5 As most measurements are performed under normal conditions, the assumption of rectilinear light propagation is usually valid or at least accurate enough.

In subproject C5 of the Collaborative Research Centre 1153 (CRC) Process chain to produce hybrid high performance components by Tailored Forming, the geometry of high-temperature, hybrid workpieces is meant to be inspected via optical triangulation techniques between subsequent forming steps. The condition monitoring of critical workpiece features—such as the joining zone of different materials in a hybrid component—can help to discard deficient parts at an early manufacturing stage. Another advantage of an immediate—and therefore high-temperature—inspection is the economization of energy, as the present workpiece temperature can be exploited in the following steps of the forming chain. Unfortunately, the assumption of rectilinear light propagation is violated when optically measuring hot objects: workpiece temperatures of more than 1000°C lead to a non-negligible heat input into the surrounding air, which reduces the local air density and thereby creates a locally varying refractive index.6 The shape, extension, and magnitude of the resulting 3-D refractive index field (RIF) are time-variant and depend strongly on the object’s temperature and geometry and on the present air flow conditions.7 The light propagation is affected, as the light path is bent toward denser air layers. Most articles in this field neglect this deflection effect,8–11 which is legitimate if the light path deflection is too small to be resolved by the applied measurement system. Ghiotti et al.12 present a high-speed measuring system based on multiple laser scanning triangulation sensors to acquire the geometry of freeform parts with temperatures up to 1200°C. The refraction of the laser light due to the heat input into air is not considered, as a maximum error of 30  μm is assumed for the described measurement scenario.

In order to model the light path in (inhomogeneous) media, Fermat’s principle has to be adhered to. A modern and general version of Fermat’s principle is formulated in terms of variational calculus: between two points G1 and G2, a light ray takes the path that is extremal with respect to variations of this path. A mathematical formulation for the optical path length (OPL) is

Eq. (1)

$$\mathrm{OPL} = \int_{G_1}^{G_2} n(s)\,\mathrm{d}s,$$

where n(s) is the refractive index of the traversed medium as a function of the location s along the path.13
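To make Eq. (1) concrete, the following minimal sketch numerically approximates the OPL of a fixed straight path by sampling n(s) along it. The Gaussian-dip field n_toy and all numeric values are illustrative assumptions, not taken from the simulations in this paper.

```python
import numpy as np

def n_toy(p):
    """Illustrative refractive index field: ambient air value with a
    Gaussian dip above a hot object at the origin (assumed values)."""
    return 1.000271 - 2e-4 * np.exp(-np.dot(p, p) / 0.01**2)  # p in meters

def optical_path_length(g1, g2, n_field, num=10_000):
    """Approximate OPL = int n(s) ds along the straight segment g1 -> g2
    with the midpoint rule [Eq. (1), evaluated for one fixed path]."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    ts = (np.arange(num) + 0.5) / num           # midpoints in [0, 1]
    pts = g1 + ts[:, None] * (g2 - g1)          # sample points on the path
    ds = np.linalg.norm(g2 - g1) / num          # arc length per sample
    return np.sum([n_field(p) for p in pts]) * ds

# OPL of a ray passing 20 mm above the assumed hot zone:
opl = optical_path_length([-0.3, 0.0, 0.02], [0.3, 0.0, 0.02], n_toy)
print(f"OPL = {opl:.9f} m")
```

Fermat’s principle then asks for the path that makes this functional stationary; evaluating it for candidate paths is the basis of the iterative approximation described next.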

In this paper, the effect of an inhomogeneous RIF in air on a 3-D optical triangulation measurement is numerically modeled. The exemplary measurement simulates the geometry acquisition of a hot cylinder via the light-section method. The approach fully considers the light deflection both from the illumination unit (laser with telecentric lens) to the object and from the object to the detection unit (pinhole camera). As the path of stationary optical length between laser and camera is not known a priori, a solution to Eq. (1) can only be gained by an iterative approximation procedure that optimizes the path between object (cylinder) and camera by ray tracing. The simulations are performed with the software Comsol Multiphysics,14 as it provides both a module for numerical heat transfer calculations and a ray tracing module.

2. Former Work

In a former SPIE proceedings contribution, the authors experimentally investigated the effect of a convective density flow on a 3-D geometry measurement of a hot steel pipe by the light-section method from above.15 To realize measurements with reduced refractive index inhomogeneity, triangulation measurements were conducted while controlling the RIF’s shape via the superimposition of an external laminar air flow. The laminar flow allowed the acquisition of reference geometry data of a hot object subject to thermal expansion but only slightly affected by the RIF. The experimental results revealed an interesting fact: the hot cylinder’s geometry measured under full influence of the RIF led to a significantly smaller cylinder radius compared to the hot measurement with reduced convective flow. As the cylinder’s temperature differed only slightly between the two measurements, the documented change in radius could not have been caused by a difference in thermal expansion. Therefore, it must have been induced by light deflection in the RIF. The design of the light-section experiment permitted a—more or less accurate—documentation of the virtual geometry manipulation due to heat-induced inhomogeneous RIFs but did not allow a deeper analysis of the nature of the deflection.

Superimposing a laminar flow in order to “homogenize” the RIF is a rather complex method to gain a hot, non-RIF-affected reference measurement to which a hot, RIF-affected measurement can be compared. Furthermore, the success of this approach highly depends on the object’s geometry and its influence on the external air flow. If a hot object were not subject to thermal expansion, a cold object measurement could serve as reference in order to expose exclusively the RIF effect on a measurement. This can be achieved by means of software: if just the heat input into the air but not the measurement object’s thermal expansion is numerically modeled, the object’s geometry in the hot and cold state is the same. In this scenario, deviations from the geometry in the hot state are exclusively caused by the RIF and can be revealed by simply comparing the object’s geometry in the hot and cold state.

The starting point for the present article is the authors’ former simulation results on the manipulation of the laser light path from the virtual illumination unit to the measurement object due to refractive inhomogeneity. The simulation setup is now extended by a virtual camera and a multistep ray tracing optimization in order to model a complete triangulation process.

3. Simulation Design: Assumptions and Boundary Conditions

This section comprises information on the geometrical simulation setup, the virtual triangulation sensor, and a detailed overview of the boundary conditions and theoretical models, such as the camera pinhole model and the derivation of the RIF induced by heat transfer.

3.1. Geometrical Setup and Refractive Index Field

The quantification of the virtual geometry manipulation by optical inhomogeneity in air requires a reference geometry. The choice of geometry is guided by numerical needs: a horizontal cylinder guarantees robust conditions for the crucial density simulations based on heat transfer, as a numerically stable convective heat and density flow builds up above the shaft. This is indispensable for the derivation of the RIF. The geometrical dimensions of the simulation setup are outlined in Fig. 1. The cylinder has a diameter of 27 mm and a length of 170 mm. Its starting temperature is 900°C, 1100°C, or 1250°C. These parameters are similar to those of a Tailored Forming workpiece after forming, postulating a slight cooling down to 900°C during workpiece handling.

Fig. 1

Geometrical dimensions of the simulation setup in mm with two cross sections of the hot steel cylinder (here: Tsteel=1250°C) and the resulting inhomogeneous RIF. The RIF was derived from a heat transfer simulation after a simulation time of t=15  s. The virtual triangulation sensor comprises a matrix camera and a laser line generator approximated by several discrete laser locations defining a plane via ray tracing. The triangulation angle α is 60 deg. 3-D geometry data are gained via the laser light-section method by intersecting the laser plane with the camera’s line-of-sight. In order to reveal the effect of the sensor location on the measurement result, the sensor is rotated by an angle β (0 deg, 15 deg, 30 deg) around the cylinder axis.


The heat transfer simulation requires the specification of the involved materials. As a start, a simple steel mono-material is chosen for the cylinder geometry in order to limit the simulation complexity. The relevant material parameters, e.g., the steel cylinder’s thermal conductivity and specific heat capacity, are listed in Table 1. Humid air at a pressure of 1 atm is postulated as the surrounding medium. Furthermore, the expected convective flow is restricted to a laminar character. Turbulence is not reproduced in the model to save computation costs and to keep the analysis of the subsequent ray tracing results as simple as possible. Further information on the heat transfer equations used is beyond the scope of this paper and can be found in the software user guide for the heat transfer module.14

Table 1

Summary of simulation boundary conditions for heat transfer and ray tracing simulations.

Cylinder: Steel C22, Tsteel = 900°C (1100°C, 1250°C); diameter of 27 mm; length of 170 mm; thermal conductivity k = 9.6 W/(m·K); specific heat capacity c = 410 J/(kg·K); thermal expansion intentionally not modeled (to reveal the RIF effect only)

Surrounding medium: Humid air at ambient pressure (1 atm), Tair = 20°C; simulation of the heat and density field in air induced by the heat flow from the cylinder; restricted to laminar flow (no turbulence); ideal gas law

Refractive index: Density-coupled, based on the extrapolated Ciddor equation,5 using the density result of the heat transfer simulation after t = 15 s

Triangulation sensor (light-section method):
  Camera: sensor size of 2048 × 2048 pixels, pixel size mx = my = 5.5 μm, physical focal length f = 75 mm, ideal pinhole camera model, field of view FoV = 45.056 mm and lateral resolution of 22 μm (at a distance of 300 mm), subpixel accuracy allowed
  Telecentric laser-line generator: wavelength λ0 = 532 nm, line approximated by discrete laser positions, ray tracing defines the laser plane
  Sensor arrangement: triangulation angle of α = 60 deg, rotation around the cylinder axis with angle β (0 deg, 15 deg, 30 deg)

Ray tracing: Step size Δs = 100 μm; diffraction model, material discontinuity theory, and wall accuracy order according to Ref. 14

The following simulation routine has been implemented to gain an inhomogeneous 3-D RIF: First, the heat transfer from the hot measurement object into the surrounding air is simulated in order to obtain a scalar 3-D density field with locally varying density values. The simulation is stopped after a simulation time of t=15  s, since this is the planned maximum time to position the hot measurement object in front of the sensor in an experimental setup. Subsequently, the density values are used to derive a scalar 3-D RIF using the Ciddor equation.5 Ciddor introduced an equation for the refractive index of air dependent on wavelength, temperature, pressure, humidity, and CO2 content. By using the ideal gas law and postulating an isobaric state, a relationship between density and refractive index can be deduced. This approach is only accurate for moderate temperatures, as the Ciddor equation is only valid up to 100°C. Assuming that a density of ρ = 0 g/cm³ results in a refractive index n = 1, the Ciddor equation can be linearly extrapolated for the extreme density values that develop in air near the hot object. An exemplary simulation result for the RIF is displayed in Fig. 1 for a temperature of Tsteel=1250°C, revealing the convective density flow above the cylinder and its symmetrical shape. A summary of the hypothesized simulation boundary conditions is given in Table 1.
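The density-to-refractive-index coupling can be sketched as follows: with n = 1 at ρ = 0 and the Ciddor value at ambient conditions as the two anchor points, the refractive index in the hot region follows by linear interpolation over density. The anchor values N_AMBIENT and RHO_AMBIENT below are assumed placeholders; in the actual routine, the ambient refractive index follows from the full Ciddor equation5 and the density field from the heat transfer result.

```python
import numpy as np

# Assumed anchor values (placeholders): ambient refractive index for
# lambda0 = 532 nm, T = 20 degC, 1 atm would come from Ciddor's model,
# the ambient air density from the ideal gas law.
N_AMBIENT = 1.000271      # n(rho_ambient), assumed
RHO_AMBIENT = 1.204e-3    # g/cm^3 at 20 degC, 1 atm

def refractive_index(rho):
    """Linear extrapolation of the Ciddor equation over density:
    n(0) = 1 and n(RHO_AMBIENT) = N_AMBIENT define the line, so hot,
    low-density air near the object gets a proportionally smaller n - 1."""
    return 1.0 + (N_AMBIENT - 1.0) * np.asarray(rho) / RHO_AMBIENT

# Example: air heated to ~1250 degC has roughly 1/5 of the ambient density
# (ideal gas, isobaric), hence a strongly reduced refractivity:
print(refractive_index([RHO_AMBIENT, 0.25 * RHO_AMBIENT, 0.0]))
```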

3.2. Optical Triangulation in Inhomogeneous Media: Simplified Outline

A simplified outline of a 2-D triangulation measurement setup with RIF effect, illumination unit (laser), and camera sensor is given in Fig. 2. To enhance clarity, the RIF is approximated by discrete air layers with different refractive index values n1, n2, n3, and n4. The air layer directly next to the hot cylinder surface features the lowest refractive index (n1).

Fig. 2

Principle outline of a 2-D triangulation measurement with and without inhomogeneous RIF. To enhance clarity, the RIF is approximated by discrete air layers with n1, n2, n3, and n4. A 2-D point is represented by a bold character (e.g., A). Index m indicates a measured point. |(B−Bm)| is the Euclidean distance between the two points B and Bm. Blue (solid) line: light path from illumination unit (laser) to object (A) and from object to camera with homogeneous RIF. Red (dashed) line: light path with inhomogeneous RIF resulting in laser dot location B on the object’s surface. |(B−A)|: distance indicating the light deflection on the object’s surface. |(Bm−Am)| (dotted): distance between unaffected and affected measured laser point locations. |(B−Bm)|: distance between actual and reconstructed laser point location with inhomogeneous RIF.


For demonstration purposes, the sensor is positioned laterally to the measurement object. A 2-D point is represented by a bold character (e.g., Am). The index m indicates a measured point. The blue (solid) line encodes the unaffected light path assuming homogeneous optical conditions; the red (dashed) line encodes the affected path in an inhomogeneous field. The surface of the cylinder is reconstructed by intersecting the activated camera’s line-of-sight with the laser line (or, in 3-D, with the laser plane), leading to a measurement difference (Bm−Am) when comparing the affected and unaffected scenario. The difference between the actual laser point A and the location Am measured by triangulation in a homogeneous scenario (cold cylinder) is small if a high triangulation accuracy is assumed. This is indicated by depicting A and Am in the same location (A ≈ Am). A loss of geometry information due to the sensor discretization is not considered in the simplified outline in Fig. 2.
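The layered approximation of Fig. 2 can be made quantitative with Snell’s law, n_i sin θ_i = n_{i+1} sin θ_{i+1}, applied at each layer boundary. The sketch below traces a ray entering the stack from above toward the hot surface; the four refractive index values and the entry angle are illustrative assumptions in the spirit of Fig. 2, not simulation results.

```python
import numpy as np

# Illustrative layer stack above a hot surface (Fig. 2 style): the layer
# closest to the object has the lowest refractive index (assumed values).
layers = [1.00022, 1.00025, 1.00027, 1.000271]  # n1 .. n4, bottom to top

def snell_through_layers(theta_deg, ns):
    """Refract a ray through plane-parallel layers; theta is measured
    from the layer normal. Snell: n_i sin(theta_i) = n_{i+1} sin(theta_{i+1})."""
    theta = np.radians(theta_deg)
    angles = [theta_deg]
    for n_in, n_out in zip(ns[:-1], ns[1:]):
        theta = np.arcsin(np.clip(n_in * np.sin(theta) / n_out, -1.0, 1.0))
        angles.append(np.degrees(theta))
    return angles

# Entering the stack from the top layer (n4) toward the object (n1):
for n, a in zip(reversed(layers), snell_through_layers(60.0, layers[::-1])):
    print(f"n = {n:.6f}: ray angle to vertical = {a:.6f} deg")
```

Because n decreases toward the hot surface, the ray angle to the vertical grows from layer to layer, i.e., the ray bends away from the hot object toward denser air, as depicted in Fig. 2.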

3.3. Virtual Triangulation Sensor

The actual simulation has been realized with a virtual 3-D triangulation sensor using the light-section method. It comprises a matrix camera and a telecentric laser line generator (see Fig. 1). The measurement results are given in the world coordinate system Kw unless declared differently. The laser is approximated by several discrete and equidistant laser rays, differing only in the yw-discharge value for ray tracing. As a telecentric laser line generator is used (the fan angle is 0 deg), the start vector defining the rays’ tracing direction is assumed to be constant. Laser line generators with fan angles greater than 0 deg would require different ray tracing start vectors to reproduce the beam expansion. The virtual camera’s projection center and the laser are positioned at a distance of 300 mm from the origin of the world coordinate system Kw. The triangulation angle α is 60 deg. In order to examine the effect of the sensor pose on the measurement result, a rotation angle β is defined to adjust the sensor location relative to the cylinder axis. The exemplary angles are β=0  deg, 15 deg, and 30 deg.

To keep the simulation routine as simple as possible, the virtual camera is modeled as ideal pinhole camera. This precondition leads to a set of assumptions:

  • The camera’s pinhole (aperture) is infinitesimally small and modeled by projection center Cproj. Light diffraction effects when passing the pinhole are neglected, as well as lens distortion and aberration effects.

  • The camera’s depth of field is unlimited, blurring effects due to defocused imaging are not modeled.

  • The light efficiency is sufficient despite the pinhole assumption.

  • Image distance b and focal length f are equal (e.g., according to Ref. 4).

The mathematical description of the mapping of an arbitrary 3-D point X^cam = (X, Y, Z)^T in the camera coordinate frame Kcam onto the 2-D sensor frame Kimg at pixel position u^img = (u, v)^T is given in Eq. (2) (e.g., according to Ref. 16, see also Fig. 3):

Eq. (2)

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix}^{\mathrm{img}} = \frac{1}{\lambda}\begin{bmatrix} f/m_x & 0 & c_x \\ 0 & f/m_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}^{\mathrm{cam}}, \qquad (\mathbf{u}^{\mathrm{img}}, 1)^T = \frac{1}{\lambda}\, K^{\mathrm{img,cam}}\, \mathbf{X}^{\mathrm{cam}},$$
with f as the camera’s physical focal length in mm, mx and my as the pixel size in mmpixel in x- and y-direction, and cx and cy as the shift in pixel between the two coordinate systems Kimg and Kcam. λ is a scaling factor in mm that parametrizes the length of the camera’s line-of-sight through a certain pixel uimg. The camera matrix Kimg,cam comprises the intrinsic parameters of the modeled pinhole camera. In an experimental setup, the camera parameters can be approximated by a calibration routine (e.g., according to Ref. 17).

Fig. 3

(a) Virtual triangulation setup comprising a pinhole camera and a telecentric laser line generator. (b) Viewing direction onto the measurement setup. The laser line generator projects a line onto the cylindrical measurement object. The line is deformed subject to the cylinder’s geometry. This line deformation is captured by the camera. To reconstruct the 3-D data of a measurement point, the line-of-sight through a specific, activated camera pixel is constructed. The resulting line g is intersected with laser plane E. g and E must be formulated in the same coordinate frame.


If a 2-D point needs to be reprojected into 3-D space, the scaling factor λ (the length of the camera’s line-of-sight) needs to be known. To this end, Eq. (2) can be transformed to

Eq. (3)

$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}^{\mathrm{cam}} = \lambda\begin{bmatrix} m_x/f & 0 & -c_x\,m_x/f \\ 0 & m_y/f & -c_y\,m_y/f \\ 0 & 0 & 1 \end{bmatrix}\begin{pmatrix} u \\ v \\ 1 \end{pmatrix}^{\mathrm{img}}, \qquad \mathbf{X}^{\mathrm{cam}} = \lambda\,(K^{\mathrm{img,cam}})^{-1}\,(\mathbf{u}^{\mathrm{img}}, 1)^T.$$

The transformation between two different coordinate systems (for instance, between the world and the camera coordinate frame) can easily be realized with the help of transformation matrix T, according to the definition in Eq. (4):

Eq. (4)

$$\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}^{\mathrm{cam}} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}^{\mathrm{w}}, \qquad (\mathbf{X}^{\mathrm{cam}}, 1)^T = T^{\mathrm{cam,w}}\,(\mathbf{X}^{\mathrm{w}}, 1)^T,$$
where T^cam,w combines rotation and translation to transform homogeneous data points from one coordinate frame to another. The rotation matrix is built from the orthonormal vectors r1 = (r11, r21, r31)^T, r2 = (r12, r22, r32)^T, and r3 = (r13, r23, r33)^T, and the translation vector is t = (tx, ty, tz)^T.

The basic triangulation routine is realized by a simple plane-line intersection, as outlined in Fig. 3(a) (e.g., according to Ref. 18). The exemplary viewing direction onto the displayed triangulation setup is indicated in Fig. 3(b) (with the cylinder cross-section, white arrow). The camera sensor is displayed in front of the camera’s projection center (unlike the depiction in Fig. 2). This is done for demonstration purposes and in order to display the camera according to the mathematical definition of the pinhole model, as given in Eqs. (2) and (3). Although physically not correct, this basic mathematical pinhole definition (with the sensor in front of the projection center) is commonly used, as it simplifies the description of the mapping of a 3-D point onto the 2-D sensor (the image is not upside down, and there is no need for negative signs; e.g., according to Ref. 19, p. 370 ff.).

The mathematical definition of the camera’s line-of-sight in coordinate frame Kcam is represented by line g, the laser plane is given in the Hessian normal form and is represented by plane E. If a laser line is projected onto the measurement object, the line is deformed subject to the object’s geometry. This line deformation is captured by the camera. A specific laser line dot activates a specific camera pixel uimg. If the camera’s line-of-sight g through this specific pixel is constructed and intersected with the laser plane E, the 3-D information of the laser line point can be reconstructed. As laser plane E is given in the simulation in the coordinate frame of the laser Klaser, it first has to be transformed into the coordinate frame of the camera Kcam by an appropriate transformation matrix Tlaser,cam to intersect g and E.

4. Ray Tracing in Inhomogeneous Optical Media

In this section, theoretical background information on the ray tracing algorithm used is given. The derived iterative optimization routine is presented in a step-by-step pseudocode format to enhance comprehensibility.

4.1. Theoretical Background

The following Eqs. (5)–(7) are taken from the ray tracing software user guide.14 A derivation of the presented equations is beyond the scope of this paper. Nevertheless, the equations are cited to provide physical background information for ray tracing in inhomogeneous media. More detailed information can be found in Born et al.,20 Saleh and Teich,21 and Krueger.22 The ray tracing algorithm in Comsol is deduced from the principles of wave optics. The basic assumptions are that the electromagnetic ray is observed at locations far from the light source and that its amplitude changes very slowly with time and position. The electromagnetic field can therefore be approximated locally by plane waves, and the mathematical description of the amplitude is neglected. In this case, the ray’s phase is nearly linearly dependent on time and position according to

Eq. (5)

$$\psi(\mathbf{r}, t) \approx \mathbf{k}\cdot\mathbf{r} - \omega t + \psi_0,$$
with phase ψ, position vector r, wave vector k, time t, angular frequency ω, and ψ0 as an arbitrary phase shift.22 Equation (5) allows the derivation of six coupled first-order ordinary differential equations, Eqs. (6) and (7), given in vector notation:

Eq. (6)

$$\frac{\mathrm{d}\mathbf{k}}{\mathrm{d}t} = -\frac{\partial \omega}{\partial \mathbf{r}},$$

Eq. (7)

$$\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}t} = \frac{\partial \omega}{\partial \mathbf{k}}.$$

The equations need to be solved with respect to k and r to calculate ray trajectories in inhomogeneous media. Fermat’s principle can be recovered from these equations using the so-called eikonal.21 Fermat’s principle is defined based on the path of light, but not on the direction in which the path is traversed. This means that the light path can be simulated in either direction, as long as it passes through the same two points: from object to camera or inversely from camera to object. This so-called inverse principle13 is helpful for the iterative ray tracing optimization: the starting point for ray tracing is the camera and not the laser light point on the measurement object.
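Comsol integrates the coupled Eqs. (6) and (7); the same physical behavior can be conveyed with the elementary ray equation of geometrical optics, d/ds (n dr/ds) = ∇n, stepped here with the fixed step size Δs from Table 1. The toy field n_toy and the simple Euler scheme are illustrative assumptions, not the solver used in the paper.

```python
import numpy as np

DS = 100e-6  # ray tracing step size from Table 1 (100 um), in meters

def n_toy(p):
    """Illustrative RIF: refractivity reduced inside a hot plume (assumed)."""
    return 1.000271 - 2e-4 * np.exp(-(p[0]**2 + p[2]**2) / 0.02**2)

def grad_n(p, h=1e-6):
    """Central-difference gradient of the refractive index field."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (n_toy(p + e) - n_toy(p - e)) / (2 * h)
    return g

def trace(p, direction, steps=4000):
    """Euler integration of d/ds (n dr/ds) = grad n: the ray curves toward
    regions of higher refractive index (denser air)."""
    t = np.asarray(direction, float)
    t /= np.linalg.norm(t)                 # unit tangent dr/ds
    for _ in range(steps):
        t = t + DS * grad_n(p) / n_toy(p)  # update n*t, then renormalize
        t /= np.linalg.norm(t)
        p = p + DS * t
    return p, t

p_end, t_end = trace(np.array([-0.2, 0.0, 0.03]), [1.0, 0.0, 0.0])
print(p_end, t_end)                        # slight bend away from the plume
```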

4.2. Measurement Simulation with Iterative Ray Tracing Optimization

An iterative approximation of the light path from the laser incidence location on the cylinder surface to the camera needs to be implemented in order to determine the camera pixel location onto which the laser dot is projected. Alternatively, referring to the inverse principle, the camera can be the starting point for the iterative approximation, as light takes the same path from point G1 to G2 as from point G2 to G1 [see Eq. (1)].

Provided the inhomogeneous RIF around the cylinder has been derived, the measurement simulation for a single data point can be summed up by the subsequent steps, referring to the parameter labeling in Fig. 4.

  • 1. Ray tracing from laser illumination unit to measurement object to gain light incidence location Blw on object surface in world coordinate system Kw.

  • 2. Optimization: Inverse ray tracing from camera to object surface to iteratively approximate location Blw by RIF-affected camera line-of-sight through pixel location ulimg on camera sensor. The resulting position on the object surface after k iteration steps is defined as Bkw. A helper coordinate system Kh is introduced in order to calculate distances parallel to the camera sensor.

    • 2.1 Set start values:

      • The start pixel usimg=u1img=(u1,v1)Timg is gained by mapping Blw linearly (without RIF-effect) onto the camera sensor with the help of Eqs. (2) and (4). usimg defines the initial directional vector a1cam for ray tracing (see step 2.2).

      • The width wsg and the height hsg of the search grid in pixels need to be defined; they limit the search space around usimg. The grid should be defined wide enough in a new simulation scenario, as the maximum light deflection is not known beforehand.

      • The optimization procedure is stopped, if the distance dminh between Bkh and Blh is smaller than the maximum deviation radius rmaxh allowed (compare to step 2.4).

      • Another stop criterion is the maximum number of iteration steps kstop, for instance kstop=15.

      • Appropriate (high) start values need to be defined for Bk,minw, ΔBk,xy,minh, and dminh (further definitions in step 2.3).

      • The maximum distance |ΔBk,z,minh| = |Bk,z,minh − Bl,zh| between the real and approximated light incidence location in the helper coordinate system must be smaller than a threshold value zmaxh.

    • 2.2 Construction of directional vector akcam in camera coordinate frame Kcam through projection center Cproj and (updated) pixel location ukimg. ukimg has to be transformed into the camera coordinate frame Kcam first with the help of λ=f [see Eq. (3)].

    • 2.3 Ray tracing from projection center Cproj with akcam to calculate actual incidence location Bkw on object surface.

    • 2.4 The actual optimization step is evaluated based on the scalar distance dkh (Euclidean norm) between Bkh and Blh. dkh is calculated with the help of ΔBk,xyh according to ΔBk,xyh = (Bk,x, Bk,y)T,h − (Bl,x, Bl,y)T,h. If the distance of the k’th step, dkh = |ΔBk,xyh|, is smaller than the distance of step (k−1), the minimum values dminh and umincam are saved. Otherwise, the values of the former step are loaded to provide the correct pixel value for the determination of the new pixel location in step 2.5.

      • If (dkh<dminh): Update minimum values.

        • - umincam=ukcam,

        • - dminh=dkh,

        • - Bk,minw=Bkw and ΔBk,xy,minh=ΔBk,xyh.

      • Else (dkh ≥ dminh): Overwrite actual values with the former minimum values.

        • - ukcam=umincam,

        • - dkh=dminh,

        • - Bkw=Bk,minw, and ΔBk,xyh=ΔBk,xy,minh.

    • 2.5 Check current iteration step and adjust pixel location.

      • If (k=kstop):

        • - Go to step 3.

      • If (dminh<rmaxh): Go to step 3.

      • If (k<kstop): Determine the new pixel location uk+1img via the variable pixel step size Δuk = (Δuk, Δvk)T = (wsg/2^k, hsg/2^k)T and ΔBk,xyh.

        • - If (ΔBk,xh ≤ 0): uk+1img = ukimg + Δuk.

        • - Else (ΔBk,xh > 0): uk+1img = ukimg − Δuk.

        • - If (ΔBk,yh ≤ 0): vk+1img = vkimg + Δvk.

        • - Else (ΔBk,yh > 0): vk+1img = vkimg − Δvk.

        • - Start next iteration step: Go to step 2.2.

  • 3. Final quality check and triangulation.

    • 3.1 Quality check.

      • Check dminh; send a warning if dminh > rmaxh.

      • Check |ΔBk,z,minh| = |Bk,z,minh − Bl,zh|; send a warning if the value is greater than the threshold value zmaxh. Hereby, undercut points that are erroneously mapped onto the camera are rejected and not used for triangulation.

    • 3.2 Reconstruction of the 3-D point by triangulation (intersection of the camera’s line-of-sight through uminimg with the laser plane).

Fig. 4

Approximation of the laser dot location Blw = (Bl,x, Bl,y, Bl,z)w by the camera line-of-sight via multistep ray tracing optimization. For demonstration purposes, only the zy-plane of the world coordinate system Kw is illustrated. At the beginning of the optimization process, the directional vector for ray tracing a1cam is constructed through the camera projection center Cproj and the start pixel usimg = u1img = (u1, v1)Timg. akcam is iteratively adapted in dependence of the distance dkh = |(Bk,x, Bk,y)T,h − (Bl,x, Bl,y)T,h| between the target location Blh and the actual position on the cylinder surface Bkh in the helper coordinate system Kh, parallel to the camera sensor. By comparing the corresponding x- and y-values, the new pixel location is defined. Example for the x-direction, in case of d1h < dminh: determine the distance of step k = 1 via ΔB1,xh = B1,xh − Bl,xh. ΔB1,xh ≤ 0; therefore, the pixel u-location for the next iteration step k = 2 needs to be adapted according to u2img = u1img + Δu1. Iteration step k = 2, assuming d2h < d1h = dminh: the actual distance is ΔB2,xh = B2,xh − Bl,xh. As ΔB2,xh > 0, the new pixel location is u3img = u2img − Δu2. Δuk is reduced with every iteration step. The optimization procedure is stopped if the Euclidean norm dkh is smaller than the maximum acceptable deviation radius rmaxh. Another stop criterion is the maximum number of iteration steps kstop.


The main challenge arises from step 2, in which the projected laser dot location in terms of the pixel location ulimg is approximated. As an idealized pinhole model is hypothesized, light mapped onto the 2-D camera sensor is forced to pass through the projection center Cproj. In a first step, the start pixel usimg = u1img is calculated by linearly mapping Blw onto the camera sensor with the help of Eqs. (2) and (4). Due to the pinhole assumption, a directional vector akcam can be constructed through the projection center Cproj and u1img, yielding the light discharge direction for ray tracing in iteration step k = 1. After the initial ray tracing simulation (step 2.3), the actual distance dkh = |(Bk,x, Bk,y)T,h − (Bl,x, Bl,y)T,h| between the actual light incidence location Bkw and the target location Blw is calculated in the helper coordinate system Kh (step 2.4). There is no need to compare the z-values, as depth information is lost when imaging. If the condition dkh < dminh is fulfilled, dminh being the smallest distance value so far, both uminimg and dminh are updated with the actual step values. Provided the maximum number of iterations kstop is not reached and dminh is not smaller than the maximum allowed deviation radius rmaxh, the new pixel location uk+1img is determined according to step 2.5 with the iterative pixel step size Δuk. Δuk is adapted in dependence of the actual iteration step k and the width wsg and height hsg of the pixel search grid. To prevent an erroneous mapping of undercut points onto the camera sensor, the z-distance is finally checked in step 3.1 by calculating |ΔBk,z,minh| = |Bk,z,minh − Bl,zh|. If |ΔBk,z,minh| is larger than an initially defined threshold value zmaxh, the corresponding camera point is not used for triangulation.
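A condensed sketch of the search loop in step 2: the pixel is shifted by a step that halves with each iteration, (wsg/2^k, hsg/2^k), steered by the signs of the lateral offset between the traced incidence point and the target, and the best candidate is kept. The function trace_to_surface stands in for the RIF-affected ray tracing of steps 2.2 and 2.3 and is a made-up placeholder, as is the toy pixel-to-surface mapping at the end.

```python
import numpy as np

def approximate_pixel(target_xy, trace_to_surface, u_start,
                      wsg=64.0, hsg=64.0, r_max=1e-3, k_stop=15):
    """Iterative ray tracing optimization (steps 2.1-2.5, simplified).
    trace_to_surface(u) must return the (x, y) incidence location in the
    helper frame K_h for the line-of-sight through pixel u = (u, v).
    r_max is the allowed deviation radius (1 um = 1e-3 mm in the paper)."""
    u = np.asarray(u_start, float)
    u_min, d_min = u.copy(), np.inf
    for k in range(1, k_stop + 1):
        b_xy = np.asarray(trace_to_surface(u))      # steps 2.2 + 2.3
        delta = b_xy - np.asarray(target_xy)        # Delta B_k,xy^h
        d = np.linalg.norm(delta)                   # d_k^h (step 2.4)
        if d < d_min:
            u_min, d_min, delta_min = u.copy(), d, delta
        else:                                       # fall back to best step
            u, delta = u_min.copy(), delta_min
        if d_min < r_max:                           # stop criterion
            break
        step = np.array([wsg, hsg]) / 2.0**k        # step 2.5: halve grid
        u = u - np.sign(delta) * step               # move against offset
    return u_min, d_min

# Toy stand-in for the RIF-affected tracing: an (unknown) smooth mapping
# from pixel to surface coordinates; here simply a scaled, shifted map.
trace = lambda u: 0.022 * (u - 1024.0) + 0.05       # mm per pixel + bias
print(approximate_pixel((1.0, -0.5), trace, (1024.0, 1024.0)))
```

Because the step size halves each iteration, the remaining reachable pixel range shrinks geometrically, which is why a moderate kstop such as 15 suffices to localize the pixel well below the 0.25-pixel subpixel grid.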

In an experimental (not simulated) triangulation measurement, the limited lateral resolution of the camera sensor restricts the exact mapping of a 3-D world point onto the sensor. Furthermore, a light-section measurement depends on the accurate localization of the laser’s center line in the camera image, e.g., by fitting Gaussian distribution curves to the laser line’s intensity profiles. This approach permits subpixel accuracy. To take this discrete and virtual increase in pixel number into account, the value for umincam is rounded to a virtual pixel size of 0.25 pixel. As the lateral resolution of the camera sensor is 22  μm (compare to Table 1), the maximum deviation radius rmaxh is set to a value of 1  μm. A stricter threshold is not necessary, as even the assumed subpixel accuracy of 0.25 pixel only allows a mapping of areas of 5.5  μm×5.5  μm onto the camera sensor.
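For orientation, a short worked example with the Table 1 values: one pixel covers 22 μm laterally at the measurement distance, so one 0.25-pixel quantization cell corresponds to

$$0.25\,\text{pixel} \times 22\,\frac{\mu\text{m}}{\text{pixel}} = 5.5\,\mu\text{m},$$

and rounding to the nearest cell displaces a detected point by at most half a cell, i.e., 2.75 μm per axis.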

5. Results

The results in this section are based on the boundary conditions according to Table 1 and the geometry setup depicted in Fig. 1. First, detailed information on a triangulation measurement from above is presented (to ensure the full influence of the inhomogeneous RIF on the laser light path). The gained results are analyzed in Sec. 5.2. Results for different cylinder temperatures and sensor poses are presented in Sec. 5.3.

5.1. Cylinder Geometry Measurement by Light-Section Method

The steel cylinder temperature is set to 1250°C. The triangulation sensor is not rotated around the cylinder axis (β=0  deg) to realize a measurement from above under the full influence of the RIF (see Fig. 1, right side). Nine discrete light paths from laser to camera are simulated—differing only in the yw-discharge location (equidistantly arranged from 0 to −10 mm) but with the same directional vectors (laser with telecentric lens). The parameter nomenclature is given in Fig. 5, and the corresponding simulation results are depicted in Figs. 6 and 7. The laser incidence location on the cylinder for a homogeneous RIF (cold cylinder) is given in the world coordinate frame Kw as Aw, and the laser incidence location for an inhomogeneous RIF (hot cylinder) as Bw. The corresponding locations on the camera sensor are Bimg and Aimg, given in pixels in the coordinate frame Kimg. Normally, the pixel location on the sensor is denoted by the letter u. We deviate from this nomenclature in this section to ensure a clear distinction of parameters and to avoid the introduction of further indices. The measured 3-D points for the cold and hot cylinder scenario are Amw and Bmw (see Fig. 5). The results in Fig. 7 are given as distances between two points [e.g., (Bw−Aw)], where not only the differences between the scalar entries [e.g., Δx, Δy, and Δz] are presented but also the 2-D or 3-D Euclidean norms of the distances between two points (e.g., dminh, deuclid,xyz, or deuclid,uv).

Fig. 5

Basic triangulation setup with parameter nomenclature in world coordinate frame Kw. (a) Lateral view. (b) Frontal view on cylinder cross-section. A parametrizes a point for homogeneous measurement conditions and B for inhomogeneous conditions (hot cylinder). Index m indicates a measured (triangulated) point.


Fig. 6

Distance dminh=|ΔBxy,minh| between real and optimized laser point locations, given in the helper coordinate frame Kh. Simulation result for a triangulation measurement from above (β=0  deg) and a cylinder temperature of Tsteel=1250°C (compare to setup depicted in Fig. 1). The parameter nomenclature is given in Fig. 5.


Fig. 7

Simulation results for a triangulation measurement from above (β=0  deg) and a cylinder temperature of Tsteel=1250°C (see setup depicted in Fig. 1). The parameter nomenclature is given in Fig. 5. A parametrizes a point for homogeneous measurement conditions (cold cylinder), B a point for inhomogeneous conditions (hot cylinder) in the world coordinate frame Kw. (a) Distance between (actual) laser points on cylinder surface (hot and cold cylinder). (b) Distance between actual and measured laser points for inhomogeneous measurement conditions (hot cylinder). (c) Distance between measured (triangulated) laser points on cylinder surface (hot and cold cylinder). (d) Laser point displacement on the camera sensor in Kimg (hot and cold cylinder). (e) Distance between actual and measured laser points for homogeneous measurement conditions (cold cylinder).


In an experimental triangulation measurement, a camera sensor always operates as a low-pass filter, as information is lost due to discretization. The difference between the actual laser point location Aw and the measured location by triangulation Amw in a homogeneous scenario (cold cylinder) is relatively small, as a subpixel accuracy of Δu=Δv=0.25  pixel is assumed for the detection of the points and no further light deflection is induced by the surrounding air in the cold scenario. This is indicated in the measurement outline in Fig. 5 by depicting Aw and Amw in the same location (Aw ≈ Amw).

The term quasi-continuous indicates that a result is given not rounded to the camera sensor’s discretization limit of 0.25 pixel but according to the output of the ray tracing optimization routine. The iteration step k defines the optimization routine’s variable pixel step size (compare to step 2.5 in Sec. 4.2). Therefore, depending on the routine’s stop criteria, the sensor pixel onto which the laser dot is projected is determined more accurately than given by the sensor’s subpixel accuracy (0.25 pixel). As this more accurate pixel location is still not continuous—the iteration does stop at a discrete value k—the nonrounded results are called quasi-continuous. As the RIF-induced deflection effects are easier to interpret without sensor discretization, the simulated data in Fig. 7 are given for quasi-continuous sensor conditions. Discrete results (rounded to 0.25 pixel) are discussed later.

The quality of the iterative ray tracing optimization according to Sec. 4.2 is checked by analysis of Fig. 6: the maximum deviation radius rmaxh has been set to 1  μm. Therefore, the difference dminh = |ΔBxy,minh| between the actual laser incidence location Bh and the optimized location Bminh in the helper coordinate frame Kh has to be smaller than this threshold value. This is the case, as all values for dminh stay below 1  μm.

The simulated curve in Fig. 7(a) depicts the displacement of the laser incidence location (Bw−Aw) on the cylinder surface in the world coordinate frame Kw. The curve reveals the effect of the cylinder curvature on the measurement: with decreasing laser yw-discharge values, Δz and Δy continuously get smaller (their absolute values increase). This is due to the changing surface gradient Δz/Δy of the cylinder when moving away from the origin of the coordinate frame Kw in the negative yw-direction [see Fig. 5(b); the cylinder’s “shoulder slope” gets steeper]. The Δx value increases [see the definition of the xw-axis in Fig. 5(a)]. This geometry effect due to the cylinder curvature is absent or only weak for yw=0  mm. In this case, the laser’s start vector (for ray tracing) points directly at the origin of the coordinate frame Kw, where the surface gradient Δz/Δy(yw=0) is 0. Therefore, the cylinder’s curvature does not “boost” small RIF-induced deflection values if a light ray hits the surface in the vicinity of the origin of the coordinate frame Kw.

The simulated data in Fig. 7(b) give information on the difference between the actual laser incidence location on the hot cylinder surface, Bw, and the measured point Bmw. A thought experiment helps to reveal the significance of these data: provided the light deflection from laser to object only happened inside the laser plane, and no further deflection occurred from object to camera, the measured point Bmw would not differ from the real point Bw. This means that in the very unlikely event of a purely laser-plane-bound light deflection, and with no deflection from object to camera at all, the correct 3-D point would be triangulated. Figure 7(b) proves that this is not the case for the simulated triangulation measurement. Furthermore, the large Euclidean distances of up to 75  μm between the measured 3-D point Bmw and the real 3-D point Bw demonstrate clearly that the measurement deviation due to an inhomogeneous RIF cannot be neglected. The measurement result for homogeneous conditions [compare to Fig. 7(e)] shows a maximum deviation of 2.7  μm for deuclid,xyz. The results are given rounded to the camera sensor’s discretization limit of 0.25 pixel, which maps areas of 5.5  μm×5.5  μm onto a quarter of a pixel. The maximum discretization error by rounding is therefore 2.75  μm, which matches the maximum value of deuclid,xyz in Fig. 7(e). As no further light deflection is induced by the surrounding air in the simulation scenario with the cold cylinder, the absolute distance between Aw and Amw is very small and can be explained exclusively by sensor discretization (Aw ≈ Amw).

The data in Fig. 7(b) give no information on whether the 3-D point Bmw accidentally lies on the cylinder surface. In order to verify this, the closest distance between Bmw and the numerical cylinder surface has to be calculated. The additional merit of such an analysis is limited, however, as it only slightly helps to understand the deflection in the RIF.

Within the scope of this work, the measured points Amw in the cold cylinder state are used as reference data for the evaluation of the measured points Bmw in the hot state. The difference between these points is depicted in Fig. 7(c). The measurement results are correlated with the laser point displacement on the camera sensor [compare the norms deuclid,xyz and deuclid,uv in Figs. 7(c) and 7(d)], as a specific laser point location on the sensor is used to derive the camera’s line-of-sight in order to reconstruct the point’s 3-D data via line-plane intersection (see Sec. 3.3). Oddly enough, an increasing laser point displacement on the cylinder surface [see Fig. 7(a)] does not necessarily result in increasing differences between the measured points (Bmw−Amw) [see Fig. 7(c)]. The values for the difference (Bmw−Amw) show a maximum in region (I) for the laser ray with yw=0  mm. The difference decreases until it reaches its lowest value in region (II), only to increase again in region (III) [compare to Figs. 7(c) and 7(d)]. A detailed analysis of the simulated norms deuclid,xyz and deuclid,uv is given in the next subsection to explain this apparent contradiction.

5.2. Analysis: Superimposition of Light Deflection

To allow an interpretation of the difference (Bmw−Amw) according to Fig. 7(c), the laser point displacement on the camera sensor (Bimg−Aimg) in Fig. 7(d) has to be analyzed. The laser light displacement is a superimposition of the deflection in two different paths: the deflection from laser to cylinder surface and the deflection from cylinder surface to camera. If the deflection effects of both paths are opposed to each other, the resulting pixel displacement on the sensor is reduced. This explains the decrease in region (II) in Fig. 7(d). The basic procedure to separate the deflection effects of the two paths is depicted in Figs. 8(a)–8(c). A note in advance to avoid confusion: the deflection of point Bw is not necessarily restricted to the laser plane [as depicted in Fig. 8(a), compare to the red, dashed line]. This is for demonstration purposes only.

Fig. 8

Approach to separate the laser light displacement on the camera sensor into two parts: the deflection induced in the path “laser to object” and the deflection from “object to camera.” (a) Displacement (Blaser-objectimg − Aimg) induced by the path “laser to object”: Aw (blue, solid line) and Bw (red, dashed line) are linearly projected onto the camera sensor, resulting in two corresponding pixel locations Aimg (blue, solid line) and Blaser-objectimg (red, solid line) on the sensor. (b) Displacement (Bobject-camimg − Blaser-objectimg) induced by the path “object to camera”: the linear projection of Bw onto the camera sensor in location Blaser-objectimg is compared to the nonlinear, RIF-affected projection in location Bobject-camimg (see the red, dashed line from Bw to camera). (c) Displacement (Bobject-camimg − Aimg) induced by the path “laser to camera”: both pixel displacement values are superimposed, resulting in the total displacement value.


To gain the laser light displacement for the path “laser to object,” both Aw (blue, solid line) and Bw (red, dashed line) are linearly projected onto the camera sensor, resulting in two corresponding pixel locations Aimg (blue, solid line) and Blaser-objectimg (red, solid line) [compare to Fig. 8(a)]. By doing this, the deflection induced in the path “object to camera” is not taken into account. This deflection effect is derived according to Fig. 8(b): the linear projection of Bw onto the camera sensor in location Blaser-objectimg is compared to the nonlinear, RIF-affected projection in location Bobject-camimg (see the red, dashed line from Bw to the camera). If both pixel displacement values are now superimposed [see Fig. 8(c)], the resulting displacement is (Blaser-objectimg − Aimg) + (Bobject-camimg − Blaser-objectimg) = (Bobject-camimg − Aimg). The superimposition must lead to the results depicted in Fig. 7(d). A special scenario is depicted in Fig. 8(c): the resulting pixel displacement from laser to camera can be close to zero, leading to a small difference between Bmw and Amw, even though the light deflection in the two paths is non-negligible [see Fig. 8(c), Bmw ≈ Aw ≈ Amw]. The depicted approach in Fig. 8 is nevertheless also valid for points Bmw and Amw that are reconstructed in different locations.

The suggested routine has been applied to the pixel displacement in Fig. 7(d), and the result is outlined in Fig. 9. Not only is the laser point displacement on the camera sensor given for both light paths [see Figs. 9(a)–9(c)], but also a graphical interpretation of the displacement on the sensor for the laser light path corresponding to yw=−7.5  mm [see Figs. 9(d)–9(f)]. The displacement values [e.g., Δvlaser-object or Δulaser-object] are marked in Figs. 9(a)–9(c).

Fig. 9

Laser point displacement on the camera sensor for a measurement scenario from above according to the results in Fig. 7(d), separated into the deflection induced in the different light paths: (a) for the light path “laser to object,” (b) for the path “object to camera,” and (c) for the complete path “laser to camera.” (d)–(f) An exemplary graphical interpretation of the displacement on the sensor for the laser ray with yw=−7.5  mm is given. The corresponding displacement values Δu and Δv are marked in (a)–(c).


First of all, the resulting pixel displacement by superimposition in Fig. 9(c) is the same as in Fig. 7(d); the suggested approach is therefore legitimate. Moreover, especially the progression of the pixel displacement in the u-direction indicates that the induced light deflection values are opposed to each other: Δulaser-object decreases to a pixel value of approximately −4 for yw=−10  mm [Fig. 9(a)], whereas Δuobject-cam increases to a value of more than 2 pixels [Fig. 9(b)]. The resulting value Δulaser-cam varies around a value of −0.5  pixel [Fig. 9(c)].

The value for deuclid,uv(yw=−7.5  mm) in Fig. 9(c) is therefore not contradictory: the consideration of both light paths results in a reduction of the pixel displacement [see also the graphical interpretation in Figs. 9(d)–9(f)], which in turn leads to a reduced difference for the gained values deuclid,uv in region (II) in Fig. 7(d). This special scenario is exemplarily depicted in Fig. 8(c): the resulting 3-D point Bmw (hot cylinder) is depicted in the same location as the point Amw (cold cylinder). As the resulting pixel displacement increases again for light rays with yw<−7.5  mm, the distance between Bmw and Amw rises as well.

To gain a deeper understanding of an exemplary light refraction scenario, the rounded (discrete) interpretation of Fig. 7(c) is analyzed for the laser ray with a discharge value of yw=0  mm. The graphical result of this analysis is given in Fig. 10(a), based on the rounded pixel displacement [to the sensor subpixel accuracy of 0.25 pixel, Fig. 10(b)]. First of all, there is only a slight difference between the graphs in Figs. 7(d) and 10(b). The difference would be bigger if the subpixel accuracy were further limited—for instance, to a value of 0.5 pixel. The laser light path for a yw-discharge value of 0 mm is only marginally deflected in the yw-direction due to the symmetry of the RIF with respect to the xz-plane (in Kw). The cylinder curvature has only a small influence on the resulting incidence location Bw on the cylinder surface. Therefore, only the xz-plane is depicted in the graphical analysis in Fig. 10(a).

Fig. 10

(a) Exemplary graphical interpretation of the light refraction scenario for the laser ray with yw=0  mm, (b) based on the rounded pixel displacement on the camera sensor.


When the laser light enters the inhomogeneous RIF from the left side, the ray is deflected downward toward denser air layers, where greater refractive index values are present [see the vertical black lines in Fig. 10(a); the lines separate areas with different refractive index values]. This leads to the dashed light path. The closer the ray moves toward the cylinder, the more a horizontal expansion and variation of the refractive index predominates (see the horizontal black lines). Therefore, the ray is refracted away from the cylinder, toward the denser surrounding air. The ray reaches the cylinder in location Bw. As the inhomogeneous RIF is basically symmetric with respect to the zy-plane of the world coordinate frame Kw (see Fig. 1), the path from cylinder to camera is flipped vertically in the graphical interpretation according to Fig. 10(a). Based on this simplification, the triangulated measurement point Bmw is reconstructed above the real cylinder surface. This matches the gained results for Δz and Δx [see Fig. 10(b)].

The interpretation in Fig. 10(a) explains the simulation result in Fig. 10(b) for the laser ray with discharge value of yw=0  mm. It also reveals the complexity of light refraction in an inhomogeneous RIF.

5.3. Comparison: Different Cylinder Temperatures and Sensor Poses

This results section closes with a comparison of different measurement scenarios. To this end, the steel cylinder temperature (900°C, 1100°C, 1250°C) and the triangulation sensor pose (β = 0 deg, 15 deg, 30 deg) are varied.

In Figs. 11(a)–11(c), different data curves for a measurement from above with β=0  deg are depicted, revealing the influence of a temperature increase: the pixel displacement on the camera sensor and the measurement difference (Bmw−Amw) increase with temperature only for the measurement points corresponding to the laser rays with yw-discharge values from 0 to −5 mm. This is due to the expansion of the inhomogeneous RIF (see Fig. 1, right side with cylinder cross-section): as the RIF variation develops its full effect directly above the hot cylinder due to the convective density flow, the spatial region in which a cylinder temperature increase takes effect is expanded widely. Smaller light ray yw-discharge values do not lead to differences (−10 mm < yw < −5 mm), except for yw=−10  mm. This might be explained by the analysis in Sec. 5.2: the resulting light deflection is reduced due to the superimposition of the deflections in the path from laser to object and from object to camera. This effect, based on the symmetry of the RIF, is not affected by a temperature increase, as the symmetry of the RIF does not change.

Fig. 11

Simulated measurement results for different cylinder temperatures and sensor poses. (a), (b) Measurement from above (β=0  deg) for cylinder temperatures of 900°C, 1100°C, and 1250°C. deuclid,xyz parametrizes the Euclidean 3-D norm of the distance between a triangulated measurement point under homogeneous (Amw) and inhomogeneous conditions (Bmw). (c) Corresponding laser point displacement on the camera sensor. deuclid,uv defines the Euclidean 2-D norm of the distance between Aimg and Bimg. (d) Laser point displacement on the camera sensor for different sensor poses (β=0  deg, 15 deg, 30 deg; see Fig. 1), with a cylinder temperature of 900°C.


The discretization effect can be evaluated when comparing Figs. 10(a) and 10(b): due to the pixel rounding to a value of 0.25 pixel (the subpixel accuracy of the camera sensor), the RIF-induced deflection effect is “discretized” as well. Curve (a.2) shows more abrupt steps than curve (a.1). The triangulation sensor’s resolution therefore affects the “reproduction” of the deflection effect in an inhomogeneous RIF.

The effect of a sensor rotation around the cylinder axis (see Fig. 1, right side) on the laser point displacement is depicted in Fig. 11(d) for a cylinder temperature of 900°C. The displacement is reduced with increasing angle β. The simulated curves therefore indicate the obvious: if a measurement is not performed directly through the greatest expansion and variation of the inhomogeneous RIF, but rather sideways through less expanded regions, the pixel displacement is reduced and the corresponding measurement is more trustworthy. The measurement difference (Bmw−Amw) for the rotated sensor is not depicted: the Euclidean distance stays below 20  μm for all laser rays for β=15  deg and below 7  μm for β=30  deg.

6. Summary and Conclusion

In this paper, a virtual triangulation setup based on the light-section method is presented, using a matrix camera with an entocentric lens as detection unit and a telecentric laser line generator as illumination unit. Geometry measurements of a cylinder in different temperature states are simulated and compared in order to analyze the effect of an inhomogeneous RIF on triangulated measurement data. To this end, detailed information is given on the simulation design, comprising the numerical calculation of the inhomogeneous RIF via heat transfer simulations and the modeling of the virtual sensor (camera pinhole model), as well as the reconstruction of 3-D points via triangulation (Sec. 3). In Sec. 4, theoretical background on the applied ray tracer is given, together with an extensive pseudocode description of the implemented iterative optimization routine that reproduces a point projection onto a pinhole camera while taking light refraction into account.

Simulation results using the derived virtual triangulation routine are presented and discussed in detail in Sec. 5. The analysis of the measurement differences for homogeneous and inhomogeneous optical conditions leads to the following conclusions: The measurement object’s geometry directly influences the laser point displacement on the object’s surface and, therefore, the RIF-induced light deflection effects [compare to Fig. 7(a)]. Furthermore, the absolute light deflection, as seen by the measurement camera in terms of pixel displacement, is a superimposition of the deflection effects in the path from illumination unit to object and from object to detection unit (Fig. 9). These path deflections can be opposed in their effect, resulting in a much smaller camera pixel displacement than expected. Moreover, already the one-way light path manipulation from camera to object shows the complexity of light refraction in a heat-induced convective density flow: the refractive field’s shape, extension, and magnitude can result in a refraction toward the hot object as well as away from the object on the same light path [Fig. 10(a)], complicating the interpretation of the gained result.

By changing the triangulation sensor’s pose in relation to the measurement object, the camera pixel displacement can be reduced [Fig. 11(d)], resulting in more accurate triangulation results.

Lateral measurements or measurements from underneath the object are therefore an alternative to reduce light refraction effects in an experimental setup, as the refractive field’s extension and deflection effect is limited there. Unfortunately, this approach is not sufficient if 360-deg geometry data are required at the same measurement time in order to capture a full shrinkage process of wrought-hot, hybrid workpieces.

A possible solution to allow high-precision geometry measurements of hot objects are actuated or computer-assisted routines that either guarantee a rectilinear propagation of light despite the object’s heat or allow a subsequent correction of RIF-disturbed measurements. If compensation algorithms for a subsequent geometry data correction are meant to be derived from simulation results, all parameters of the real measurement setup have to be considered in the simulation (e.g., sensor resolution and pose, triangulation angle, object geometry and temperature). In particular, the dynamics of the heat-induced refractive field have to be taken into account, as an areal triangulation measurement by a structured light system requires the acquisition of an image sequence over time.

7. Forthcoming Work

The presented simulation model will be developed into a virtual fringe projection system to allow virtual areal measurements. To this end, the telecentric laser line generator is replaced by a projector. As a projector can be considered an inverse pinhole camera,23 the same model implementation is used for the detection unit (camera) and the illumination unit (projector). The complete virtual triangulation setup is defined and visualized in a MATLAB24 script (e.g., the sensor pose and the camera’s focal length; compare to Fig. 12) before the simulation boundary conditions are passed to the simulation platform in Comsol. The model will not be able to virtually reproduce the projection of an image sequence to solve the projector-pixel-to-camera-pixel correspondence problem. Fortunately, this is not necessary, as the correspondence problem is solved via the presented optimization routine. Further work will comprise the implementation of a camera lens distortion model, the investigation of the RIF dynamics, and the parallelization of the ray tracing routines in order to speed up the whole virtual triangulation process. A faster and denser reconstruction of surface data by virtual triangulation would allow the evaluation of the standard deviation when fitting a cylinder into RIF-affected and nonaffected measurement data. This would enable a more general analysis of heat-induced light deflection and its effect on triangulation measurements.

Fig. 12

MATLAB visualization of virtual triangulation setup and pose in relation to measurement object. The digital mirror device (DMD) plane identifies the projector’s sensor plane. An exemplary projection plane is depicted.


Acknowledgments

We would like to thank the Deutsche Forschungsgemeinschaft (DFG) for funding subproject C5 "Multiscale Geometry Inspection of Joining Zones" as part of the Collaborative Research Centre (CRC) 1153 Process chain to produce hybrid high performance components by Tailored Forming.

References

1. M. Rahlves and J. Seewig, Optisches Messen technischer Oberflächen, Beuth Verlag, Berlin (2009).

2. GOM GmbH, "Sheet metal forming—3D metrology in industrial sheet metal forming processes," https://www.gom.com/industries/sheet-metal-forming/sheet-metal-forming-download-brochure.html (2017).

3. S. Matthias et al., "Fringe projection profilometry using rigid and flexible endoscopes," Tech. Mess. 84(2), 123–129 (2017). https://doi.org/10.1515/teme-2016-0054

4. J. Beyerer, F. Puente León, and C. Frese, Automatische Sichtprüfung: Grundlagen, Methoden und Praxis der Bildgewinnung und Bildauswertung, Springer Vieweg, Berlin, Heidelberg (2012).

5. P. E. Ciddor, "Refractive index of air: new equations for the visible and near infrared," Appl. Opt. 35, 1566–1573 (1996). https://doi.org/10.1364/AO.35.001566

6. T. Dale and J. Gladstone, "On the influence of temperature on the refraction of light," Phil. Trans. R. Soc. Lond. 148, 887–894 (1858). https://doi.org/10.1098/rstl.1858.0036

7. R. Beermann et al., "Background oriented schlieren measurement of the refractive index field of air induced by a hot, cylindrical measurement object," Appl. Opt. 56, 4168–4179 (2017). https://doi.org/10.1364/AO.56.004168

8. T. Kreis et al., "Noncontacting measurement of distortion by digital holographic interferometry," Materialwiss. Werkstofftech. 37(1), 76–80 (2006). https://doi.org/10.1002/(ISSN)1521-4052

9. H. Gafsi and G. Goch, "Calibration routine for in-process roundness measurements of steel rings during heat treatment," Proc. SPIE 8082, 808231 (2011). https://doi.org/10.1117/12.889515

10. W. Liu et al., "Fast dimensional measurement method and experiment of the forgings under high temperature," J. Mater. Process. Technol. 211(2), 237–244 (2011). https://doi.org/10.1016/j.jmatprotec.2010.09.015

11. A. Zatočilová, D. Paloušek, and J. Brandejs, "Image-based measurement of the dimensions and of the axis straightness of hot forgings," Measurement 94, 254–264 (2016). https://doi.org/10.1016/j.measurement.2016.07.066

12. A. Ghiotti et al., "Enhancing the accuracy of high-speed laser triangulation measurement of freeform parts at elevated temperature," CIRP Ann. 64(1), 499–502 (2015). https://doi.org/10.1016/j.cirp.2015.04.012

13. E. Hecht, Optik, 6th ed., De Gruyter Oldenbourg, Munich (2014).

14. Comsol Multiphysics 5.1, "Heat transfer and ray optics module," https://www.comsol.de/products (accessed September 2018).

15. R. Beermann et al., "Light section measurement to quantify the accuracy loss induced by laser light deflection in an inhomogeneous refractive index field," Proc. SPIE 10329, 103292T (2017). https://doi.org/10.1117/12.2269724

16. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, Cambridge (2004).

17. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). https://doi.org/10.1109/34.888718

18. B. A. Abu-Nabah, A. O. ElSoussi, and A. E. K. Al Alami, "Simple laser vision sensor calibration for surface profiling applications," Opt. Lasers Eng. 84, 51–61 (2016). https://doi.org/10.1016/j.optlaseng.2016.03.024

19. G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, 1st ed., O'Reilly Media, Sebastopol, California (2008).

20. M. Born et al., Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed., Cambridge University Press, Cambridge (1999).

21. B. E. A. Saleh and M. C. Teich, Grundlagen der Photonik, Wiley-VCH Verlag, Berlin (2008).

22. D. A. Krueger, "Spatial varying index of refraction: an open ended undergraduate topic," Am. J. Phys. 48, 183–188 (1980). https://doi.org/10.1119/1.12169

23. S. Zhang and P. S. Huang, "Novel method for structured light system calibration," Opt. Eng. 45, 083601 (2006). https://doi.org/10.1117/1.2336196

24. The MathWorks, Inc., "MATLAB 2015b," https://de.mathworks.com/products/new_products/release2015b.html (accessed September 2018).

Biography

Rüdiger Beermann is a research associate at the Institute of Measurement and Automatic Control at the Leibniz Universität Hannover. He received his diploma in mechanical engineering from the Leibniz Universität Hannover in 2013 and his state examination as a teacher for math and metal technology for vocational schools in 2015. His current research interests include the development of fringe projection systems for high temperature workpieces and thermal-optical simulations.

Lorenz Quentin is a research associate at the Institute of Measurement and Automatic Control at the Leibniz Universität Hannover. He obtained his diploma in mechanical engineering in 2016. His current research interests include the development of fringe projection systems for high temperature workpieces.

Gunnar Stein: Biography is not available.

Eduard Reithmeier is a professor at the Leibniz Universität Hannover and head of the Institute of Measurement and Automatic Control. He received his diplomas in mechanical engineering and in mathematics in 1983 and 1985, respectively, and his doctorate in mechanical engineering from the Technische Universität München in 1989. His research focuses on system theory and control engineering.

Markus Kästner is the head of the Production Metrology Research Group at the Institute of Measurement and Automatic Control at the Leibniz Universität Hannover. He received his PhD in mechanical engineering in 2008 and his postdoctoral lecturing qualification from the Leibniz Universität Hannover in 2016. His current research interests are optical metrology from macro- to nanoscale and optical simulations.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Rüdiger Beermann, Lorenz Quentin, Gunnar Stein, Eduard Reithmeier, and Markus Kästner "Full simulation model for laser triangulation measurement in an inhomogeneous refractive index field," Optical Engineering 57(11), 114107 (20 November 2018). https://doi.org/10.1117/1.OE.57.11.114107
Received: 18 May 2018; Accepted: 23 October 2018; Published: 20 November 2018