Flexible decoupled camera and projector fringe projection system using inertial sensors
Abstract
Measurement of objects with complex geometry and many self-occlusions is increasingly important in many fields, including additive manufacturing. In a conventional fringe projection system, the camera and the projector cannot move independently of each other, which limits the ability of the system to overcome object self-occlusions. We demonstrate a fringe projection setup in which the camera can move independently of the projector, thus minimizing the effects of self-occlusion. The angular motion of the camera is tracked with an on-board inertial sensor, which allows the camera's extrinsic calibration to be updated on the fly and additionally enables automated point cloud registration.

1. Introduction

Structured light systems (SLS)1,2 are used to perform dimensional measurements on objects by projecting patterns of light, which are subsequently recorded by a camera. SLS have recently proliferated in industrial metrology because they do not contact the object being measured and thus can be used with final-stage products; they operate much faster than standard contact coordinate measuring machines3 and provide a more complete measurement, as they acquire a large number of measurement points simultaneously. Fringe projection techniques4,5 are a category of SLS in which either continuous6–8 (usually sinusoidal) or binary6,9,10 fringes are projected onto an object and images are taken at an angle in order to extract three-dimensional (3-D) topography information about the object. For the fringe projection technique to operate efficiently and for the system to have adequate depth resolution, the camera and projector must be noncollinear.11–13 We do not consider techniques that use collinear fringe projection12,13 because they do not employ standard phase stepping methods; in this work, we focus on systems that use sinusoidal fringe projection and phase stepping.14

The noncollinearity criterion in fringe projection is a significant limitation when measuring complex objects with self-occlusions, such as those produced by additive manufacturing (AM) techniques.2,15–17 During the measurement of complex objects, self-occlusions of the object geometry prevent the camera from viewing all illuminated parts of the object. To acquire a full 3-D measurement, the usual solution is to place the object on a stage and rotate it multiple times in small angular increments; however, this approach is inefficient and time consuming. Another common solution is to add extra cameras around the projector, but this increases the cost and the complexity of the system. Moreover, the use of additional cameras and projectors is not an ideal solution, as the distances and relative poses between the projector and cameras remain static and cannot easily adapt to different objects. The algorithm presented in Ref. 18 has been shown to tolerate an arbitrary relative position and pose between camera and projector, but it requires recalibration after each alteration and is thus time consuming.

In this work, we propose a system design capable of independent camera and projector motion through live tracking of the camera. Moreover, we show that the proposed system is better able to overcome self-occlusions than traditional systems, in which the camera is not free to move independently of the projector.

2. Background

2.1. Fringe Projection Procedure

The principle used in phase-stepped fringe projection is the association of the absolute phase measured by the process with a physical height. To perform a coordinate measurement in fringe projection, at least three phase-stepped images of the projected pattern are acquired by the camera. Equations (1)–(3)19 describe the intensity of the sinusoidal pattern detected by the camera pixels in each image in the case of a three-step measurement

Eq. (1)

I1(x,y) = I′(x,y) + I″(x,y) cos[φ(x,y) − Δφ],

Eq. (2)

I2(x,y) = I′(x,y) + I″(x,y) cos[φ(x,y)],

Eq. (3)

I3(x,y) = I′(x,y) + I″(x,y) cos[φ(x,y) + Δφ],
where x and y are the pixel coordinates, I1–I3 are the pixel intensities recorded in the images, I′ is the background illumination intensity, I″ is the modulation intensity, φ is the wrapped phase of the sinusoidal pattern in the second image, and Δφ is the phase step, which in the case of a three-step acquisition is 2π/3. By solving the system of equations above, we can extract the relative phase in terms of the intensity in each image19 thus

Eq. (4)

φ(x,y) = arctan[√3(I1 − I3)/(2I2 − I1 − I3)].

The phase is then unwrapped20 and converted into dimensional units by scaling the phase with an object of known height.
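As a concrete illustration of Eqs. (1)–(4), the short Python/NumPy sketch below computes the wrapped phase from three phase-stepped images and then unwraps it. The synthetic test data, array names, and the use of scikit-image's unwrap_phase (a reliability-sorting unwrapper of the kind described in Ref. 20) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.restoration import unwrap_phase  # reliability-sorting 2-D unwrapper

def three_step_wrapped_phase(i1, i2, i3):
    """Wrapped phase from three images phase-stepped by 2*pi/3 [Eq. (4)]."""
    i1, i2, i3 = (np.asarray(i, dtype=float) for i in (i1, i2, i3))
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic example: in a real measurement, i1-i3 are the camera frames captured
# while the projector shifts the sinusoidal pattern by -120, 0, and +120 deg.
phi_true = np.tile(np.linspace(0.0, 4.0 * np.pi, 640), (480, 1))
step = 2.0 * np.pi / 3.0
i1, i2, i3 = (128.0 + 100.0 * np.cos(phi_true + d) for d in (-step, 0.0, step))

phi_wrapped = three_step_wrapped_phase(i1, i2, i3)
phi_unwrapped = unwrap_phase(phi_wrapped)  # still requires scaling to height units
```

The unwrapped phase map would then be converted to height by scaling against an object of known height, as described above.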

2.2. Occlusions Created by High Aspect Ratio Features

Figure 1(a) shows the geometrical constraints in measuring trenches with high aspect ratios when the camera and projector have a relatively large angle between them. In this case, no overlap between the projector and camera pixels is possible at the bottom of the trench and, therefore, the depth of the trench cannot be measured.

Fig. 1 Demonstration of a deep trench self-occlusion which could be measured if the angle between camera and projector is reduced.

Deep trench self-occlusions can be overcome if the camera and projector move independently of each other and adapt to the object being measured by moving closer together [Fig. 1(b)]. However, when the angular separation between the camera and projector is small, the system is less able to determine the phase difference between the projected and detected image.11 The optimal configuration for the measurement of such self-occlusions is, therefore, the largest camera–projector angle at which all the necessary parts of the object can still be viewed.
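To make the constraint concrete, consider an idealized two-dimensional trench of width w and depth d, with the camera and projector on opposite sides of the trench and inclined at angles θcam and θproj from the surface normal. The trench floor can only be measured where the illuminated and visible regions overlap, which requires tan θcam + tan θproj ≤ w/d. The following sketch evaluates this condition; the geometry and the numerical values are illustrative assumptions, not the dimensions of the artefacts measured later.

```python
import math

def trench_floor_measurable(width, depth, theta_cam_deg, theta_proj_deg):
    """True if part of the trench floor is both illuminated and visible, for an
    idealized 2-D trench with camera and projector on opposite sides and angles
    measured from the surface normal."""
    shadow_cam = depth * math.tan(math.radians(theta_cam_deg))    # floor region hidden from the camera
    shadow_proj = depth * math.tan(math.radians(theta_proj_deg))  # floor region not illuminated
    return shadow_cam + shadow_proj <= width

# A 10 mm wide, 20 mm deep trench (aspect ratio 2) cannot be measured with a
# 30 deg / 20 deg camera-projector split, but becomes measurable once the camera
# is moved to within a few degrees of the projector direction.
print(trench_floor_measurable(10.0, 20.0, 30.0, 20.0))  # False
print(trench_floor_measurable(10.0, 20.0, 5.0, 20.0))   # True
```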

An additional advantage of independent angular motion of the camera with respect to the projector is the ability to acquire both sides of an illuminated ridge self-occlusion (see Fig. 2), a problem usually solved by registering the views of two independent cameras.

Fig. 2 Demonstration of the ability to measure both sides of a ridge self-occlusion by moving the camera to the other side of the projector.

A flexible system that allows the camera–projector angles to vary in both azimuth and elevation can minimize the effect of such self-occlusions and acquire almost all of the illuminated area of the object, thus making the measurement of a 3-D object more efficient. Such a system can also perform automatic point cloud registration, whereby 3-D point clouds acquired from different parts of the object are registered to a common coordinate system using the inertial sensor data. The use of inertial sensors for automatic point cloud registration has recently been shown,21 but the sensor was not placed on the camera itself; it was mounted on the static projector–camera frame, and the camera and projector did not move relative to each other.

3. Calibration

3.1. Camera Calibration

The camera and projector cannot move independently of each other during a measurement because of the calibration procedure used in existing systems. Camera calibration for metrological applications is usually performed by well-established methods, such as linear calibration,22 the two-stage Tsai method,22,23 vanishing points,24 and the checkerboard plane technique (Zhang method).22,25 All these techniques determine the matrix P shown in

Eq. (5)

w[x, y, 1] = [X, Y, Z, 1] P,
where w is the scale factor; x, y are the coordinates of the image along the horizontal and vertical directions, respectively; X, Y, and Z are the spatial coordinates of the corresponding point in the real world; and P is known as the projection matrix.

The P matrix contains both the extrinsic and intrinsic parameters of the camera. The intrinsic parameters describe the camera's internal characteristics, such as the focal length, the optical centre, and the pixel size. The extrinsic parameters describe the location and pose of the camera with respect to a reference coordinate system, also referred to as the “global coordinates.”

When the camera is calibrated with the aforementioned methods, it cannot move with respect to the projector during a measurement, as this would require the whole system to be recalibrated: when the camera moves around an object, the extrinsic parameters change, and so does the P matrix in Eq. (5), which then has to be recalculated. The novelty of this work is that we first calibrate the intrinsic and extrinsic parameters with the Zhang method and then use inertial measurement unit (IMU) sensors to determine the change in the extrinsic parameters when the camera is moved, thus avoiding the need to recalibrate.
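For reference, the initial Zhang-method calibration25 can be carried out with standard tools; the sketch below uses OpenCV with an assumed checkerboard geometry and file naming, and is a generic illustration rather than the calibration code used in this work.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # interior checkerboard corners (assumed board)
SQUARE = 10.0      # checker square size in mm (assumed)

# 3-D corner coordinates of the flat checkerboard in its own coordinate frame.
board = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
board[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts, image_size = [], [], None
for fname in sorted(glob.glob("calib_*.png")):     # hypothetical calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(board)
        img_pts.append(corners)
        image_size = gray.shape[::-1]              # (width, height)

# Returns the RMS reprojection error, intrinsic matrix K, distortion coefficients,
# and one rotation/translation (extrinsic) pair per calibration view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
```

In the proposed system, the intrinsic part of this calibration is retained, and only the extrinsic parameters are subsequently updated from the IMU readings.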

3.2. Sensor Calibration

In order to use the angular data of the IMU sensor (LSM9DS0), it must first be calibrated. The deviation of the azimuth angle reported by the sensor (which uses a magnetometer) from the true rotation is initially very large. The magnetometer was calibrated in a piecewise linear fashion via a look-up table with entries at 30-deg intervals; the true azimuth rotation was measured using a rotation stage with angular markings. Any reported angular value from the sensor can then be corrected by linear interpolation between the two nearest calibration points. The deviation before calibration was, on average, 32.9 deg. After calibration, when the difference was measured at random angles within each linearly approximated region, the average error was 1.49 deg.

The tilt angle sensor in the IMU is an accelerometer, and its response was linear when measured over the 0-deg to 90-deg range. The tilt angle was calibrated against a digital protractor with an accuracy of 0.1 deg. As the response of the tilt sensor was linear, the readings were corrected by subtracting the error of the reported angle at the centre of the measured range (at a tilt of 45 deg). The average error before calibration was 3.6 deg; after calibration, when measured at different points, it was confirmed to be 0.6 deg.
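A minimal sketch of the two corrections described above is given below, assuming a 30-deg look-up table of (reported, true) azimuth pairs and a single constant offset for the tilt channel; the calibration values shown are placeholders, not the measured ones.

```python
import numpy as np

# Look-up table built by rotating the stage to known angles every 30 deg and
# recording what the magnetometer reports (placeholder values, assumed monotonic).
true_grid = np.arange(0.0, 361.0, 30.0)
reported_at_cal = np.array([0., 27., 63., 95., 118., 152., 188., 214.,
                            242., 268., 297., 331., 360.])

def corrected_azimuth(raw_deg):
    """Map a raw magnetometer reading to a corrected azimuth by linear
    interpolation between the two nearest calibration points."""
    return np.interp(raw_deg % 360.0, reported_at_cal, true_grid)

TILT_OFFSET_DEG = 3.6  # reported-angle error at the 45 deg centre of the range (placeholder)

def corrected_tilt(raw_deg):
    """The accelerometer response is linear over 0 deg to 90 deg, so a single
    constant offset correction is sufficient."""
    return raw_deg - TILT_OFFSET_DEG
```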

4. Experiments

A schematic and photographs of the experimental setup are shown in Fig. 3. The design ensures that the distance from the camera to the object does not change as the camera is rotated around the object. Because the radial distance is invariant as the camera rotates, camera tracking and point cloud registration can be performed using only the azimuth and elevation angles, without requiring knowledge of the distance to the object. In this setup, the camera can be placed at five different elevation angles on each side of the arch by physically detaching and reattaching it with threaded screws at each position.
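Because the camera-to-object distance R is fixed by the arch, the camera pose is fully described by the elevation angle θ and azimuth angle φ. The sketch below builds an OpenCV-style extrinsic rotation and translation from those two angles; the world-frame convention (origin at the centre of the rotation plate, z-axis along the plate normal) is an assumption for illustration.

```python
import numpy as np

def camera_extrinsics(R_dist, elevation_deg, azimuth_deg):
    """Rotation (world->camera) and translation for a camera on a sphere of
    radius R_dist around the plate centre, looking at the centre. Elevation is
    measured from the plate normal (vertical), azimuth about it."""
    theta = np.radians(elevation_deg)
    phi = np.radians(azimuth_deg)
    # Camera position on the sphere (world frame: z up, origin at plate centre).
    eye = R_dist * np.array([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)])
    forward = -eye / np.linalg.norm(eye)                   # optical axis, toward the object
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))   # assumes elevation != 0
    right /= np.linalg.norm(right)
    down = np.cross(forward, right)
    R_wc = np.vstack([right, down, forward])               # OpenCV-style axes: x right, y down, z forward
    t = -R_wc @ eye
    return R_wc, t

# When the IMU reports new angles, only this rotation and translation change;
# the intrinsic matrix from the initial Zhang calibration is reused unchanged.
R_wc, t = camera_extrinsics(R_dist=300.0, elevation_deg=30.0, azimuth_deg=85.0)
```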

Fig. 3 (a) A schematic of the setup showing the elevation (θ) and azimuth (φ) angles as well as the distance of the camera to the centre of the rotation plate (R) and images of the Raspberry Pi camera setup showing (b) the camera under the arch and (c) a full shot of the rotating setup on an optical wafer.

The camera used is a modified Raspberry Pi camera with 5-megapixel resolution, attached to the front of a custom-made case. The Raspberry Pi board was connected to the network via a USB Wi-Fi dongle.

The setup was evaluated using two AM objects printed in acrylonitrile butadiene styrene. The two objects were designed to have different types of self-occlusion: the first was a pyramid with a tall ridge self-occlusion [Fig. 4(a)] and the second was a flat object with deep trench self-occlusions [Fig. 4(b)].

Fig. 4 (a) Pyramid-shaped and (b) deep trench 3-D printed objects tested with the flexible fringe projection setup. The size of the pyramid is 10 cm × 10 cm × 7 cm and the size of the trench object is 10 cm × 10 cm × 2 cm.

The aim of the first experiment was to demonstrate the system’s ability to avoid the pyramid’s self-occlusions and acquire a larger part of the illuminated object. Two point cloud measurements of the AM pyramid [Fig. 4(a)] were performed by moving the camera to two different positions while keeping the projector static. The two views were separated in azimuth by 85 deg, and the camera’s elevation angle in both cases was 30 deg from normal. By acquiring two different views under the same projection illumination direction, more of the illuminated field was captured. The two point clouds were subsequently aligned to the computer aided design (CAD) model of the object via an iterative closest point (ICP) algorithm using CloudCompare software.26 The point clouds of the two views and the combined registration via ICP are shown in Fig. 5.
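The registration here was performed with CloudCompare's ICP tool through its user interface;26 purely as an illustration of the equivalent step in script form, point-to-point ICP against the CAD-derived reference could be run with the Open3D library as sketched below (file names, sampling density, and the correspondence distance are assumptions).

```python
import numpy as np
import open3d as o3d

# Reference: points sampled from the CAD model of the pyramid (assumed file name).
cad = o3d.io.read_triangle_mesh("pyramid_cad.stl")
target = cad.sample_points_uniformly(number_of_points=200_000)

def icp_to_cad(cloud_file, init=np.eye(4), max_corr_dist=2.0):
    """Register one measured point cloud to the CAD-derived reference with
    point-to-point ICP and return the estimated rigid transformation."""
    source = o3d.io.read_point_cloud(cloud_file)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation

T_left = icp_to_cad("view_left.ply")     # camera to the left of the projector
T_right = icp_to_cad("view_right.ply")   # camera to the right of the projector
```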

Fig. 5

ICP registered point clouds showing the data portion acquired from the two individual camera views as seen from the point of view of the projector. (a) The camera is situated at the left of the projector. (b) The camera at the right of the projector. (c) The combined data acquired from both views overlaid onto the object’s CAD model.

OE_56_10_104106_f005.png

To demonstrate the advantage gained when measuring an object with deep trenches, the AM object in Fig. 4(b) was also measured from two different viewpoints with the proposed setup. The point clouds acquired at two different azimuth angles (Fig. 6) clearly show the ability of the system to measure points at the bottom of the trenches after rotating the camera closer to the projector. The camera’s elevation angle in both cases was set to 30 deg from normal.

Fig. 6 (a) Point cloud of the object with deep trenches, where the trench bottoms could not be measured due to the relative projector and camera angle (53.3 deg). (b) By changing the camera angle to 16.8 deg, the bottom parts of the trenches were successfully acquired. The trench width and the separation between trenches are 1 cm, and the size of the AM object is 10 cm × 10 cm × 2 cm.

5. Discussion and Conclusion

It was found that the use of a magnetometer sensor for azimuth angle tracking can cause large errors, up to ±30 deg, when uncalibrated. The specific reasons for the high nonlinearity exhibited by the uncalibrated magnetometer were not investigated; nonetheless, with the calibration techniques used in this work, we were able to reach an accuracy of 1.49 deg for the azimuth measurement. The tilt sensor, although not used for tracking in this case because all shots were taken at the same tilt angle, is an accelerometer; its response was linear over the 0-deg to 90-deg range tested and could be calibrated with a simple offset. Specifically, we achieved an average error of 0.6 deg for the camera tilt after calibration.

The azimuth rotation calculated from ICP registration of the two point clouds of the AM pyramid was 86.75 deg. Compared with the angle reported by the IMU sensor for the same motion (85.72 deg), a difference of 1.03 deg was observed. This difference is within the expected mean sensor error of 1.49 deg achieved for the azimuth sensor after calibration. The mean difference between the ICP-registered and IMU-tracked point clouds, which resulted from the 1.03-deg azimuth sensor error, was measured to be 18 μm. The average point-cloud-to-CAD-model error resulting from the sensor errors is expected to vary with the type and size of the object, as the angular error affects different objects differently. A detailed analysis of the effect of the angular errors on registration accuracy was not the focus of this work and requires a more in-depth investigation, which will be part of future work.
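As an indication of how such a comparison can be made, the sketch below applies an azimuth rotation about the stage’s vertical axis, as an IMU reading would prescribe, and evaluates the mean nearest-neighbour distance between the IMU-placed and ICP-registered clouds; the axis convention, array names, and use of SciPy’s k-d tree are assumptions rather than the exact procedure used here.

```python
import numpy as np
from scipy.spatial import cKDTree

def rotate_about_z(points, angle_deg, centre=np.zeros(3)):
    """Rigid rotation of an (N, 3) point cloud about the vertical axis through
    the rotation plate centre, i.e., the motion an azimuth change produces."""
    a = np.radians(angle_deg)
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    return (points - centre) @ Rz.T + centre

def mean_nn_distance(cloud_a, cloud_b):
    """Mean distance from each point in cloud_a to its nearest point in cloud_b."""
    distances, _ = cKDTree(cloud_b).query(cloud_a)
    return distances.mean()

# imu_cloud: a view placed using only the IMU-reported azimuth change;
# icp_cloud: the same view registered by ICP (both hypothetical (N, 3) arrays).
# A residual rotation such as the 1.03 deg discrepancy discussed above can be
# emulated with rotate_about_z(imu_cloud, 1.03) before calling mean_nn_distance.
```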

With regard to the enhanced measurement of deep trenches, it is clearly shown that by moving the camera angle closer to that of the projector, it is possible to acquire points at the bottom of the trenches of the AM object evaluated. The numerical differences between the point cloud distributions were obtained by first registering each cloud to the CAD model individually and then finding the mean distance of the less dense point cloud [Fig. 6(a)] from the denser one [Fig. 6(b)]. The point cloud taken when the camera was at a smaller angle to the projector [Fig. 6(b)], in which the trenches were clearly visible, showed a mean error that was 59 μm lower than that of the point cloud taken at the larger angle [Fig. 6(a)]. The reason for this is not clear, but the point cloud in Fig. 6(a) was noisier than that in Fig. 6(b), especially in the areas close to the trench edges. Again, a detailed analysis of the errors warrants further investigation, but the advantage provided by the system in measuring deep trenches is clear.

In both cases, the average differences in mean error between the point clouds were presented for completeness. The focus of this work was to demonstrate the advantages, and the first realisation, of a fringe projection system in which the camera and projector can move independently during the measurement through inertial sensor tracking. The main advantage demonstrated is the ability to perform a complete 3-D measurement of objects with different types of high aspect ratio occlusion by adapting the setup during the course of the measurement. Further investigation of the detailed effects of sensor errors on point cloud accuracy, together with the identification of other error sources in the setup and methods of reducing or mitigating them, will be part of future work.

This work, therefore, demonstrates that, for a calibrated fringe projection setup in which the camera moves at a fixed radius around the object, the camera’s extrinsic parameters can be determined solely from the two pose angles (i.e., the camera’s elevation and azimuth). The ability to update the extrinsic parameters of the camera in real time using data from an inertial sensor allows the camera to be moved around the object to acquire a larger part of the illuminated area. Additionally, each image taken from a different viewpoint can be “tagged” with the camera’s pose, making the time-consuming task of point cloud registration more streamlined and automated. For setups where the distance from the camera to the object changes, however, the elevation and azimuth angles are not sufficient to track the position of the camera, as the camera-to-object distance must also be measured to properly adjust the scale of the image, and the method shown here cannot be used.

It has further been shown, using two examples, that the proposed setup can minimize the effect of object self-occlusion in complex geometries. Measuring complex geometries with self-occlusions is important for AM, which can produce many lattice and biomimetic structures containing deep-trench and tall-ridge self-occlusions. It was shown that point cloud registration to within 1 deg in azimuth of the ICP result is possible via sole use of the inertial sensors, without the need for specific targets or multiple cameras.

A postprocessing algorithm, such as ICP, is still recommended to fully register the point clouds if alignment accuracy is paramount. However, if the highest possible alignment accuracy is not required, or if only a coarse initial alignment is needed, the calibrated inertial sensors in the setup shown allow adequate registration with an angular error of 1 deg in azimuth, which, for the specific pyramid artefact measured, translated to a mean point cloud displacement error of 18 μm.

Future work, aside from creating a framework to fully optimize the angle selection between the projector and camera for different objects and characterizing the effect of the inertial sensor errors reported, will explore a two-step registration scheme whereby the point clouds are first registered through inertial sensing and subsequently refined through ICP, allowing a faster and fully automated point cloud registration process. Future work will also concentrate on a setup that allows automated motion of the camera in the elevation axis and can, therefore, also demonstrate the mitigation of self-occlusions that are vertical to the reference measurement plane.

Acknowledgments

This work was supported by the Engineering and Physical Sciences Research Council (Grant No. EP/M008983/1). We acknowledge Patrick Bointon, Alexander Jackson-Crisp, and Matthias Hirsch for their assistance in machining and creating various parts of the setup and the measured objects.

References

1. 

J. Salvi, J. Pagès and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognit., 37 827 –849 (2004). http://dx.doi.org/10.1016/j.patcog.2003.10.002 Google Scholar

2. 

P. I. Stavroulakis and R. K. Leach, “Invited review article: review of post-process optical form metrology for industrial-grade metal additive manufactured parts,” Rev. Sci. Instrum., 87 041101 (2016). http://dx.doi.org/10.1063/1.4944983 RSINAK 0034-6748 Google Scholar

3. 

R. Hocken and P. Pereira, Coordinate Measuring Machines and Systems, 2nd ed., CRC Press, Boca Raton (2011). Google Scholar

4. 

Z. H. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng., 50 1097 –1106 (2012). http://dx.doi.org/10.1016/j.optlaseng.2012.01.007 Google Scholar

5. 

S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng., 48 149 –158 (2010). http://dx.doi.org/10.1016/j.optlaseng.2009.03.008 Google Scholar

6. 

R. Porras-Aguilar and K. Falaggis, “Absolute phase recovery in structured light illumination systems: sinusoidal vs. intensity discrete patterns,” Opt. Lasers Eng., 84 111 –119 (2016). http://dx.doi.org/10.1016/j.optlaseng.2016.04.010 Google Scholar

7. 

S. Lei, “A comparison study of digital sinusoidal fringe generation technique: defocusing binary patterns VS focusing sinusoidal patterns,” (2017). http://lib.dr.iastate.edu/etd/11740 Google Scholar

8. 

H. Zhao et al., “High-speed triangular pattern phase-shifting 3D measurement based on the motion blur method,” Opt. Express, 25 9171 –9185 (2017). http://dx.doi.org/10.1364/OE.25.009171 OPEXFF 1094-4087 Google Scholar

9. 

J. Dai, B. Li and S. Zhang, “High-quality fringe pattern generation using binary pattern optimization through symmetry and periodicity,” Opt. Lasers Eng., 52 195 –200 (2014). http://dx.doi.org/10.1016/j.optlaseng.2013.06.010 Google Scholar

10. 

R. Talebi and J. Johnson, “Binary code pattern unwrapping technique in fringe projection method,” in 17th Int. Conf. on Image Processing, Computer Vision, and Pattern Recognition, (2013). Google Scholar

11. 

C. Liu et al., “Coaxial projection profilometry based on speckle and fringe projection,” Opt. Commun., 341 228 –236 (2015). http://dx.doi.org/10.1016/j.optcom.2014.12.030 OPCOB8 0030-4018 Google Scholar

12. 

M. Takeda et al., “Absolute three-dimensional shape measurements using coaxial and coimage plane optical systems and Fourier fringe analysis for focus detection,” Opt. Eng., 39 61 –68 (2000). http://dx.doi.org/10.1117/1.602336 Google Scholar

13. 

A. Sicardi-Segade et al., “On axis fringe projection: a new method for shape measurement,” Opt. Lasers Eng., 69 29 –34 (2015). http://dx.doi.org/10.1016/j.optlaseng.2015.01.003 Google Scholar

14. 

S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?,” Opt. Lasers Eng., 48 133 –140 (2010). http://dx.doi.org/10.1016/j.optlaseng.2009.09.001 Google Scholar

15. 

A. Hadi, F. Vignat and F. Villeneuve, “Design configurations and creation of lattice structures for metallic additive manufacturing,” in 14ème Colloque National AIP PRIMECA, (2015). Google Scholar

16. 

N. Gardan and A. Schneider, “Topological optimization of internal patterns and support in additive manufacturing,” J. Manuf. Syst., 37 417 –425 (2015). http://dx.doi.org/10.1016/j.jmsy.2014.07.003 JMSYEB 0278-6125 Google Scholar

17. 

C. Emmelmann et al., “Laser additive manufacturing and bionics: redefining lightweight design,” Phys. Procedia, 12 364 –368 (2011). http://dx.doi.org/10.1016/j.phpro.2011.03.046 PPHRCK 1875-3892 Google Scholar

18. 

H. Du and Z. Wang, “Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system,” Opt. Lett., 32 2438 –2440 (2007). http://dx.doi.org/10.1364/OL.32.002438 OPLEDP 0146-9592 Google Scholar

19. 

P. S. Huang and S. Zhang, “Fast three-step phase-shifting algorithm,” Appl. Opt., 45 5086 –5091 (2006). http://dx.doi.org/10.1364/AO.45.005086 APOPAI 0003-6935 Google Scholar

20. 

M. A. Herráez et al., “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path,” Appl. Opt., 41 7437 –7444 (2002). http://dx.doi.org/10.1364/AO.41.007437 APOPAI 0003-6935 Google Scholar

21. 

Q. Tang et al., “Portable 3D scanning system based on an inertial sensor,” Proc. SPIE, 10250 102502E (2017). http://dx.doi.org/10.1117/12.2267213 PSISDG 0277-786X Google Scholar

22. 

X. Feng et al., “The comparison of camera calibration methods based on structured-light measurement,” in Congress on Image and Signal Processing, 155 –160 (2008). http://dx.doi.org/10.1109/CISP.2008.163 Google Scholar

23. 

R. Y. Tsai, “An efficient and accurate camera calibration technique for 3D machine vision,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, 364 –374 (1986). Google Scholar

24. 

B. W. He and Y. F. Li, “Camera calibration from vanishing points in a vision system,” Opt. Laser Technol., 40 555 –561 (2008). http://dx.doi.org/10.1016/j.optlastec.2007.09.001 OLTCAS 0030-3992 Google Scholar

25. 

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell., 22 1330 –1334 (2000). http://dx.doi.org/10.1109/34.888718 ITPIDJ 0162-8828 Google Scholar

26. 

D. Girardeau-Montaut, “CloudCompare version 2.6.1 user manual,” http://www.danielgm.net/cc/doc/qCC/CloudCompare%20v2.6.1%20-%20User%20manual.pdf (accessed October 2017). Google Scholar

Biography

Petros Stavroulakis is currently a research fellow at the University of Nottingham. His current interests include machine vision, artificial intelligence and data fusion as applied to 3-D form measurement applications. He has previously worked on developing electronics at the National Physical Laboratory and has also led a Knowledge Transfer Partnership between Sencon UK Ltd. and City University London where he spearheaded the development of the company’s flagship optical thin film thickness gauges.

Richard Leach is currently a professor in metrology at the University of Nottingham and prior to this, spent 25 years at the National Physical Laboratory. His current interests are the dimensional measurement of precision and additive manufactured structures. Richard has over 350 publications including five textbooks. He is a visiting professor at Loughborough University and Harbin Institute of Technology.

Biographies for the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Petros Stavroulakis, Danny Sims-Waterhouse, Samanta Piano, and Richard Leach "Flexible decoupled camera and projector fringe projection system using inertial sensors," Optical Engineering 56(10), 104106 (11 October 2017). https://doi.org/10.1117/1.OE.56.10.104106
Received: 28 July 2017; Accepted: 21 September 2017; Published: 11 October 2017