High-speed three-dimensional shape measurement using phase measurement profilometry without phase unwrapping
Feng Lu, Chengdong Wu, Jikun Yang
Abstract
In fast three-dimensional (3-D) shape measurement, reducing the number of fringe patterns needed to reconstruct arbitrary objects is an important goal. Although one-shot techniques can recover the wrapped phase from a single deformed fringe pattern, they are easily affected by surface properties and suffer from poor spatial resolution. In addition, the phase-unwrapping step can degrade the quality of the absolute phase. This paper proposes a fast measurement method based on phase measurement profilometry and a stereo vision system that reconstructs the 3-D surface without phase unwrapping. A rough parallax obtained from an original image matching constraint guides the phase matching. To resist false matching, subpixel parallax optimization is used to reduce the matching errors, and the average phase is calculated to detect edge points of the wrapped phase. Experimental results verify the feasibility of the proposed method and show that it can measure complex objects without phase unwrapping.

1.

Introduction

High-speed three-dimensional (3-D) shape measurement has become an important technology in industrial manufacturing, such as quality inspection, reverse engineering, 3-D sensing, and object recognition.1 To obtain accurate 3-D shape measurement results, many techniques have been studied, such as the Moiré technique,2 phase-measuring profilometry (PMP),3–6 and Fourier transform profilometry (FTP).7–9

In general, using fewer fringe patterns is important for high-speed measurement.10 Because one-shot techniques need only one frame to obtain the wrapped phase, they have the potential to be used in high-speed measurement. They mainly include color coding,11 FTP,12 and composite patterns;13 however, these methods still have shortcomings. Color coding encodes the pattern with different colors and is therefore easily affected by the surface color.14 The Fourier transform is insensitive to color, but the fundamental frequency component must be extracted, so the geometry variations of the object are required to be lower than the frequency of the fringe pattern.15 In the composite-pattern approach, a pattern containing several frequencies is projected and the individual frequencies are filtered to obtain deformed patterns at each frequency.16 However, the computational cost is high and the measurement accuracy needs to be improved. Another problem is that the wrapped phase must be unwrapped to obtain the absolute phase; the unwrapping process can degrade the accuracy of the absolute phase and also increases the computing time of the measurement.17

Traditionally, the wrapped phase is retrieved based on temporal or spatial phase unwrapping. Temporal phase unwrapping needs multiple frames to obtain the absolute phase, which increases the computational complexity.18 Spatial phase unwrapping is a point-by-point integration process, so an error at one point spreads to the subsequent points.19

To balance accuracy and speed, this paper proposes a high-speed 3-D shape measurement method based on phase measurement profilometry and stereo vision. A three-step phase-shifting measurement needs only three deformed fringe patterns to obtain the wrapped phase. Using the original image matching constraint, a rough parallax is applied to the phase matching. To find the correct corresponding point, phase matching and subpixel parallax optimization are used to exclude false points. Phase matching is used to confirm the candidate points. When points lie at the edge of the wrapped phase, where the phase value is −π or π, there are missing or wrong corresponding points; an average phase value is therefore used to increase the robustness of the high-speed measurement. Subpixel parallax optimization finds the true corresponding point at subpixel coordinates. In the measurement process, dithered fringe patterns are used to overcome the gamma effect of the projector.

This paper is organized as follows: Sec. 2 introduces the principle of the three-step phase-shifting measurement and the flow of the proposed method. Section 3 illustrates the principle of phase matching and subpixel parallax optimization. Section 4 verifies the feasibility and accuracy of the proposed method, and the conclusion is given in Sec. 5.

2.

Principle

2.1.

Three-Step Phase-Shifting Measurement

A three-step phase-shifting measurement is widely used in 3-D shape measurement because it needs the minimum number of three deformed fringe patterns. In this paper, three fringe patterns with a phase shift of 2π/3 are used. The intensities of the fringe patterns can be expressed as

Eq. (1)

I1 = A(x,y) + B(x,y)cos[ϕ(x,y) − 2π/3],

Eq. (2)

I2=A(x,y)+B(x,y)cos[ϕ(x,y)],

Eq. (3)

I3=A(x,y)+B(x,y)cos[ϕ(x,y)+2π/3],
where A(x,y) is the average intensity and B(x,y) is the modulation intensity; ϕ(x,y) is the wrapped phase to be calculated, which can be expressed as

Eq. (4)

ϕ(x,y) = tan⁻¹[√3(I1 − I3) / (2I2 − I1 − I3)].

This equation provides the wrapped phase, ranging from −π to π with 2π discontinuities.
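For clarity, the following is a minimal NumPy sketch of the three-step phase retrieval of Eqs. (1)–(4); the synthetic image size, fringe pitch, and modulation are illustrative assumptions and do not come from the measurement system described later.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase from three fringe images with a 2*pi/3 phase shift,
    following Eq. (4); the result lies in (-pi, pi]."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# Illustration with synthetic fringe patterns generated from Eqs. (1)-(3).
H, W = 480, 640                    # image size (illustrative)
A, B, period = 128.0, 100.0, 32.0  # average intensity, modulation, fringe pitch (pixels)
x = np.tile(np.arange(W), (H, 1))
phi = 2.0 * np.pi * x / period     # "true" phase ramp of the projected fringes
I1 = A + B * np.cos(phi - 2.0 * np.pi / 3.0)
I2 = A + B * np.cos(phi)
I3 = A + B * np.cos(phi + 2.0 * np.pi / 3.0)

phase = wrapped_phase(I1, I2, I3)  # wrapped into (-pi, pi] with 2*pi jumps
```

Using arctan2 with the numerator and denominator of Eq. (4) keeps the full (−π, π] range and avoids the sign ambiguity of a plain arctangent.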

2.2.

Flow of the Proposed Method

The whole process of the proposed method is shown in Fig. 1. It mainly includes five steps.

  • Step 1: Preparation before measurement, including the generation of the dithered patterns and stereo vision calibration. The dithered fringe pattern is insensitive to the projector gamma because it uses one-bit binary values instead of eight-bit gray levels to approximate the sinusoidal fringe pattern, so it can be used for high-speed measurement without gamma calibration. Another merit of the dithered fringe pattern is that it is suitable for measurement when wide fringes are used. In this paper, the dithered fringe patterns are generated based on Ref. 20 (an illustrative dithering sketch is given after this list).

  • Step 2: Original image matching constraint. The original images are captured by the left and right cameras, respectively. Based on the stereo vision geometry, a rough matching can be performed, which provides a rough parallax as a constraint condition.

  • Step 3: Calculation of the wrapped phase. PMP is applied to the images captured by the left and right cameras, respectively, and the wrapped phase is obtained from Eq. (4).

  • Step 4: Wrapped phase matching. Based on the original image matching constraint, wrapped phase matching is performed to find the candidate points. It is difficult to match points at the boundary of the wrapped phase, where the phase is −π or π, so the average phase is used to detect these edge points.

  • Step 5: Subpixel parallax optimization. To locate the corresponding points precisely, subpixel parallax optimization is used; the optimized parallax corresponds to the target point. Once the stereo vision system is calibrated, the height of the object can be calculated.
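The dithered patterns used in this paper are generated with the joint intensity-and-phase optimization of Ref. 20, which is not reproduced here. As a rough stand-in only, the sketch below binarizes a sinusoidal fringe with ordinary Floyd–Steinberg error diffusion; the fringe pitch is an illustrative assumption, and the pattern size matches the projector resolution quoted in Sec. 4.

```python
import numpy as np

def dithered_fringe(width=800, height=600, period=32, shift=0.0):
    """One-bit approximation of a sinusoidal fringe by Floyd-Steinberg error
    diffusion (an illustrative stand-in for the optimized dithering of Ref. 20)."""
    x = np.arange(width)
    target = 0.5 + 0.5 * np.cos(2.0 * np.pi * x / period + shift)  # ideal fringe in [0, 1]
    img = np.tile(target, (height, 1)).astype(np.float64)
    out = np.zeros_like(img)
    for r in range(height):
        for c in range(width):
            old = img[r, c]
            new = 1.0 if old >= 0.5 else 0.0   # quantize to one bit
            out[r, c] = new
            err = old - new                    # diffuse the quantization error
            if c + 1 < width:
                img[r, c + 1] += err * 7 / 16
            if r + 1 < height:
                if c > 0:
                    img[r + 1, c - 1] += err * 3 / 16
                img[r + 1, c] += err * 5 / 16
                if c + 1 < width:
                    img[r + 1, c + 1] += err * 1 / 16
    return (out * 255).astype(np.uint8)        # binary pattern sent to the projector

# Three patterns with a 2*pi/3 shift, matching Eqs. (1)-(3).
patterns = [dithered_fringe(shift=s) for s in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]
```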

Fig. 1

The flowchart of the proposed method.


In this paper, steps 4 and 5 are the main contributions, so their principles are introduced in the following sections.

3.

Principle of the Proposed Method

3.1.

Phase Matching

Traditionally, the wrapped phase cannot be used directly to find corresponding points because it ranges from −π to π and changes periodically. Because dithered fringe patterns are used in this system, the gamma effect of the projector can be neglected. The original image matching constraint narrows the range of candidate points along the epipolar line; without it, false points would be considered and reconstructed, which would increase the processing time. Traditionally, the absolute phase is required to find the correct corresponding points. Algorithms for retrieving the absolute phase can be classified into spatial and temporal methods, but both have demerits: the spatial method cannot retrieve the phase of isolated objects and its phase errors spread along the unwrapping direction, whereas the temporal method needs multiple frames to unwrap the phase, which reduces the measurement speed. The process of wrapped phase matching without the absolute phase is shown in Fig. 2. Because the absolute phase is not used, there are several false corresponding points with the same phase value; these points are defined as candidate points.

Fig. 2

The process of wrapped phase matching without absolute phase.


The goal of matching is to find the corresponding points in the two cameras. In stereo vision, the rough parallax can be obtained by matching the two images captured by the left and right cameras, respectively. The rough parallax is then applied to the phase matching. Because the structured light adds texture features, the corresponding points can be confirmed accurately. For the stereo vision system, the point (xL,yL) in the left camera corresponds to the point (xR,yR) in the right camera. The original image matching constraint provides a rough corresponding parallax Par, which can be expressed as

Eq. (5)

Par = xR − xL.

The parallax is used in the wrapped phase to find the target phase. When the phase of the point (xL,yL) in the left wrapped phase is PhaseL(xL,yL), the corresponding phase in the right wrapped phase is PhaseR(xR,yR), as shown in

Eq. (6)

PhaseL(xL,yL)=PhaseR(xR,yR)=PhaseR(xL+Par,yR).

In the proposed method, the point (xR,yR) and its adjacent points are set as candidate points. Considering the computational complexity, the adjacent candidate points are shown in Fig. 3.

Fig. 3

The relationship between the points in the left camera and right camera.


To find the true point, the phases of the candidate points PhaseR(xR+s,yR) are subtracted from the phase of the left point PhaseL(xL,yL), and the absolute difference can be expressed as

Eq. (7)

ΔPhase = |PhaseL(xL,yL) − PhaseR(xR+s,yR)|,
where s is an integer and s ∈ [−2, 2]. ΔPhase is the phase difference between the points. The candidate with the least difference is selected, and (xR_min,yR_min) denotes the coordinate of the candidate point with the least phase difference.

Once the optimal point is obtained, the parallax Parmin can be calculated as

Eq. (8)

Parmin = xR_min − xL.

Phase-value matching is a pixel-level matching, so a subpixel parallax is still required.
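A minimal sketch of the pixel-level matching of Eqs. (5)–(8) follows. The paper does not specify how the rough parallax is extracted from the original images, so OpenCV's block matcher is used here purely as an assumed stand-in; rectified images (yR = yL) are assumed, and all function and variable names are ours.

```python
import numpy as np
import cv2  # assumed choice for the rough disparity; the paper does not specify this step

def rough_parallax(left_gray, right_gray):
    """Rough parallax Par of Eq. (5) from the fringe-free original images.
    OpenCV returns xL - xR, while Eq. (5) defines Par = xR - xL, hence the sign
    flip. Inputs must be 8-bit single-channel rectified images."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    return -disp

def match_candidates(phase_L, phase_R, par, xL, yL, s_range=2):
    """Pixel-level phase matching, Eqs. (6)-(8): around the column predicted by
    the rough parallax, pick the candidate with the least |phase difference|."""
    xR = int(round(xL + par[yL, xL]))           # predicted corresponding column, Eq. (6)
    best_x, best_diff = None, np.inf
    for s in range(-s_range, s_range + 1):      # candidate offsets, s in [-2, 2]
        xc = xR + s
        if 0 <= xc < phase_R.shape[1]:
            d = abs(phase_L[yL, xL] - phase_R[yL, xc])   # Eq. (7)
            if d < best_diff:
                best_diff, best_x = d, xc
    if best_x is None:                          # no candidate inside the image
        return None, None
    return best_x, best_x - xL                  # xR_min and Par_min, Eq. (8)
```

Here phase_L and phase_R are the wrapped phases of Eq. (4), and par is the rough parallax map computed from the original images.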

Although phase matching can reject most candidates, there are still some challenges. When points lie at the edge of the wrapped phase, where the phase is −π or π, it is difficult to find the matching points precisely based on ΔPhase. To find the corresponding points accurately, the average phase value is used. As shown in Fig. 4, the middle point (xR,yR) is on the edge of the wrapped phase and its phase PhaseR(xR,yR) needs to be determined. The average phase value Phaseave(xR,yR) can be described as

Eq. (9)

Phaseave(xR,yR) = (1/5) Σ_{i=−2}^{2} PhaseR(xR+i, yR).

Fig. 4

The determination of boundary points based on the average phase. (a) Average phase is greater than zero and (b) average phase is smaller than zero.


The phase value is monotonic within one period. When the average phase Phaseave(xR,yR) is greater than zero, the boundary phase is set to π; when it is smaller than zero, it is set to −π, which can be expressed as

Eq. (10)

Phase(xR,yR) = −π, if Phaseave < 0; π, if Phaseave > 0.
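A minimal sketch of the boundary handling of Eqs. (9) and (10) is given below; clipping the five-point neighbourhood at the image border is our assumption, as the paper does not state how borders are treated.

```python
import numpy as np

def boundary_phase(phase_R, xR, yR, half=2):
    """Phase assigned to a point on a wrap boundary, Eqs. (9)-(10): the
    five-neighbour row average decides whether the point belongs to the
    +pi side or the -pi side of the 2*pi jump."""
    cols = np.clip(np.arange(xR - half, xR + half + 1), 0, phase_R.shape[1] - 1)
    phase_ave = phase_R[yR, cols].mean()        # Eq. (9)
    return np.pi if phase_ave > 0 else -np.pi   # Eq. (10)
```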

3.2.

Subpixel Parallax Optimization

In the wrapped phase, the phase value ranges from −π to π and changes periodically. Using this feature, the subpixel coordinate can be obtained. If the coordinate of the original image matching point is (xR_o,yR), the subpixel coordinate (xR_sub,yR) can be derived. Because the phase value increases monotonically within each period, the subpixel coordinate (xR_sub,yR) lies between the best corresponding point (xR_min,yR_min) and the candidate point (xR_o,yR). The phase value of the subpixel point in the right wrapped phase is denoted PhaseR(xR_sub,yR). Because PhaseR(xR_sub,yR) represents the best matching point in the wrapped phase, it should be equal to the phase PhaseL(xL,yL) in the left wrapped phase. Then, it can be expressed as

Eq. (11)

PhaseR(xR_sub,yR)=PhaseL(xL,yL).

As shown in Fig. 5, when the original image matching point (xR_o,yR) is on the left of the best point (xR_min,yR_min), we have xR_o < xR_min.

Fig. 5

The original image matching point is on the left of the best point.


The coordinate of the subpixel can be expressed as

Eq. (12)

xR_sub = [PhaseR(xR_sub,yR) − PhaseR(xR_o,yR)] / [PhaseR(xR_min,yR) − PhaseR(xR_o,yR)] × (xR_min − xR_o) + xR_o.

When the point (xR_o,yR) has the same x-coordinate as the best point (xR_min,yR_min), i.e., xR_o = xR_min, then, as shown in Fig. 6, the subpixel coordinate can be expressed as

Eq. (13)

xR_sub=xR_min.

Fig. 6

The original image matching point has the same x-coordinate as the best point.


Similarly, as shown in Fig. 7, when the original image matching point (xR_o,yR) is on the right of the best point (xR_min,yR_min), we have xR_o > xR_min.

Fig. 7

The original image matching point is on the right of the best point.


Then, the coordinate of the subpixel can be expressed as

Eq. (14)

xR_sub = [PhaseR(xR_sub,yR) − PhaseR(xR_min,yR)] / [PhaseR(xR_o,yR) − PhaseR(xR_min,yR)] × (xR_o − xR_min) + xR_min.

Then, the subpixel parallax ParPhasesub can be obtained as

Eq. (15)

ParPhasesub = xR_sub − xL.

Based on the baseline and subpixel parallax map, the height of the object can be calculated.
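Equations (12) and (14) reduce to a single linear interpolation, sketched below together with the standard rectified-stereo triangulation that we assume for converting the subpixel parallax into height (the paper only states that the height follows from the baseline and the parallax map); function names are ours.

```python
def subpixel_parallax(phase_L, phase_R, xL, yL, xR_o, xR_min):
    """Sub-pixel refinement of Eqs. (11)-(15): the target phase PhaseL(xL, yL)
    is located between the rough match xR_o and the best integer match xR_min
    by linear interpolation of the wrapped phase. Eqs. (12) and (14) are the
    same linear form, and Eq. (13) is the degenerate case xR_o == xR_min."""
    target = phase_L[yL, xL]                     # Eq. (11): phase value to locate
    p_o, p_min = phase_R[yL, xR_o], phase_R[yL, xR_min]
    if xR_o == xR_min or p_min == p_o:           # Eq. (13)
        x_sub = float(xR_min)
    else:                                        # Eqs. (12) and (14)
        x_sub = xR_o + (target - p_o) / (p_min - p_o) * (xR_min - xR_o)
    return x_sub, x_sub - xL                     # sub-pixel column and parallax, Eq. (15)

def depth_from_parallax(par_sub, focal_px, baseline_mm):
    """Standard rectified-stereo triangulation (an assumption of this sketch):
    depth is the focal length in pixels times the baseline over the parallax."""
    return focal_px * baseline_mm / abs(par_sub)
```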

4.

Experiments

To verify the proposed method, a 3-D shape measurement system is developed. The system contains a projector (Samsung SP-P310MEMX) and two digital CCD cameras (Daheng MER-500-14U3M/C-L). Each camera is fitted with a 16-mm focal-length lens (Computar M1614-MP). The camera resolution is 1024×768, the projector resolution is 800×600, and the projection distance ranges from 0.49 to 2.80 m.

Figure 8 shows the images captured by the stereo vision system. Figures 8(a) and 8(b) show the original images captured by the left and right cameras, Figs. 8(c) and 8(d) show the corresponding deformed fringe patterns, and Figs. 8(e) and 8(f) show the wrapped phases computed from each camera.

Fig. 8

The images captured from left camera and right camera. (a) The original image captured from left camera, (b) the original image captured from right camera, (c) the deformed fringe pattern from left camera, (d) the deformed fringe pattern from right camera, (e) the wrapped phase from left camera, and (f) the wrapped phase from right camera.


By comparing the measured surfaces, the effect of each processing step is shown. Because the system contains two digital cameras, the stereo vision system can be calibrated and the measured object surface can be reconstructed using phase matching alone; this result is shown in Fig. 9(a). It can be seen that, because of wrong corresponding points, the surface is coarse, and there are missing points at the edges of the wrapped phase. By adding edge detection to the phase matching, the missing points and blank vertical lines are removed, as shown in Fig. 9(b). Compared with Fig. 9(a), Fig. 9(b) has more accurate matching results, less noise, and a smoother surface, but some noise remains. When the subpixel parallax optimization is added, the matching correctness rate increases and the surface becomes smoother, as shown in Fig. 9(c).

Fig. 9

The measurement results based on the proposed method. (a) The result from phase matching, (b) the result from phase matching plus edge detection, and (c) the result from the proposed method.


To better illustrate the effect of the proposed method, a white house with a complicated surface is measured. The measurement results are shown in Fig. 10. The original image without fringe patterns is shown in Fig. 10(a), Fig. 10(b) shows the result obtained when the wrapped phase is unwrapped, and Fig. 10(c) shows the result of the proposed method. The proposed method generates better results with less noise, as can be seen from the details of the door. Because the proposed method is based on phase matching and subpixel parallax optimization, it obtains more accurate corresponding points; in addition, because it does not need phase unwrapping, the measurement speed is higher.

Fig. 10

The measurement result of a white house with a complicated surface. (a) Original image without fringe patterns, (b) the measurement result and details from the absolute phase, and (c) the measurement result and details from the proposed method.


To further quantify the matching accuracy of the proposed method, the surfaces of different masks are measured. The original masks are shown in Fig. 11 and the measurement results are shown in Fig. 12. The measurement results from absolute phase matching18 are used as the reference. The percentages of correct, wrong, and missing matching points are calculated and listed in Table 1.

Fig. 11

The original masks without deformed fringe pattern. (a) Monkey mask, (b) Iron Man mask, and (c) Santa Claus mask.


Fig. 12

The measurement results of different masks. (a) The measurement results of monkey mask, (b) the measurement results of Iron Man mask, and (c) the measurement results of Santa Claus mask.


Table 1

Matching precision based on the absolute phase (unit: %).

Object         Correctness rate    Error rate    Missing rate
Monkey         98.91               0.12          0.97
Iron Man       99.32               0.11          0.57
Santa Claus    99.42               0.09          0.07

From the data in Table 1, it can be seen that the proposed method provides a high correctness rate, above 98% for all objects. Compared with the Iron Man and Santa Claus masks, the monkey mask has larger error and missing rates because it has more holes on its surface. Based on our analysis, because the wrapped phase is obtained from only three fringe patterns, it is hard to obtain high-quality phase values at the edges of these holes; the holes decrease the quality of the wrapped phase and the signal-to-noise ratio. The Iron Man mask has fewer holes, so its error and missing rates are lower. Because there are no holes on the Santa Claus mask, it has the highest correctness rate as well as the lowest error and missing rates.
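As an illustration of how the rates in Table 1 can be obtained, the sketch below compares a parallax map against the absolute-phase reference; counting a point as correct when it lies within one pixel of the reference, and marking invalid pixels with NaN, are our assumptions, since the paper does not state its criterion.

```python
import numpy as np

def matching_rates(par_test, par_ref, tol_px=1.0):
    """Correctness / error / missing percentages against a reference parallax
    map (assumed criterion: correct when within tol_px of the reference;
    invalid pixels are NaN)."""
    valid_ref = np.isfinite(par_ref)
    valid_test = np.isfinite(par_test) & valid_ref
    missing = valid_ref & ~np.isfinite(par_test)
    correct = valid_test & (np.abs(par_test - par_ref) <= tol_px)
    wrong = valid_test & ~correct
    n = valid_ref.sum()
    return (100.0 * correct.sum() / n,   # correctness rate
            100.0 * wrong.sum() / n,     # error rate
            100.0 * missing.sum() / n)   # missing rate
```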

To further verify the performance of the proposed method on step-like objects, discontinuous and continuous blocks are measured, and the experimental results are compared with those from absolute phase matching, as shown in Fig. 13. Figures 13(a) and 13(b) show the original images; Figs. 13(c) and 13(d) show the measurement results from absolute phase matching; and Figs. 13(e) and 13(f) show the results of the proposed method. The heights of the measured blocks are listed in Tables 2–5.

Fig. 13

The measurement results comparison from discontinuous blocks and continuous blocks. (a) The original image of discontinuous blocks, (b) the original image of continuous blocks, (c) the measurement results of discontinuous blocks from absolute phase matching, (d) the measurement results of continuous blocks from absolute phase matching, (e) the measurement results of discontinuous blocks from the proposed method, and (f) the measurement results of continuous blocks from the proposed method.


Table 2

The discontinuous blocks measurement from absolute phase matching (Units: mm).

Ideal height      30       50       60
Average height    28.43    52.11    61.92
RMS               0.77     0.58     0.43
Average error     0.52     0.53     0.39
Maximum error     0.58     0.71     0.47

Table 3

The continuous blocks measurement from absolute phase matching (Units: mm).

Ideal height      30       50       60
Average height    28.35    52.32    61.81
RMS               0.75     0.58     0.45
Average error     0.53     0.53     0.36
Maximum error     0.60     0.75     0.45

Table 4

The discontinuous blocks measurement from the proposed method (Units: mm).

Ideal height      30       50       60
Average height    30.21    50.23    59.28
RMS               0.34     0.31     0.25
Average error     0.40     0.36     0.24
Maximum error     0.46     0.41     0.37

Tables 2–5 list the ideal height, average height, RMS, average error, and maximum error for the different blocks. From these data, it can be seen that the heights obtained with the proposed method are closer to the ideal heights, and the height errors are smaller than those obtained from the absolute phase. This further verifies the effectiveness of the proposed method and shows that it can be employed to measure both discontinuous and continuous objects.

Table 5

The continuous blocks measurement from the proposed method (Units: mm).

Ideal height      30       50       60
Average height    30.15    50.19    60.01
RMS               0.33     0.35     0.24
Average error     0.41     0.35     0.26
Maximum error     0.48     0.43     0.40

5.

Summary

This paper proposes a high-speed 3-D shape measurement algorithm based on PMP and stereo vision. It obtains the wrapped phase from three phase-shifted fringe patterns and reconstructs the 3-D shape without phase unwrapping. The original image matching constraint and subpixel phase matching are used to find the correct corresponding points. To increase the robustness and reduce the missing rate at the boundary of the wrapped phase, where the phase value is −π or π, the average of the adjacent phase values is used. The comparison of matching precision shows that the proposed method obtains high-quality surfaces. Because any three consecutive deformed fringe patterns can be used to obtain the wrapped phase and the matching process does not need phase unwrapping, the proposed method is suitable for fast measurement.

Disclosures

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by the National Key R&D Program of China (2017YBF1300900), the National Natural Science Foundation of China (U1713216), the Fund of Shenyang (17-87-0-00), and the Fundamental Research Funds for the Central Universities (N172604004).

References

1. 

H. Lin et al., “Three-dimensional shape measurement technique for shiny surfaces by adaptive pixel-wise projection intensity adjustment,” Opt. Lasers Eng., 91 206 –215 (2017). https://doi.org/10.1016/j.optlaseng.2016.11.015 Google Scholar

2. 

D. Sarenac et al., “Three phase-grating moiré neutron interferometer for large interferometer area applications,” Phys. Rev. Lett., 120 (11), 113201 (2018). https://doi.org/10.1103/PhysRevLett.120.113201 Google Scholar

3. 

J. Peng et al., “Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns,” Opt. Express, 24 (19), 21846 –21860 (2016). https://doi.org/10.1364/OE.24.021846 OPEXFF 1094-4087 Google Scholar

4. 

P. Zhou et al., “Phase error analysis and compensation considering ambient light for phase measuring profilometry,” Opt. Lasers Eng., 55 (7), 99 –104 (2014). https://doi.org/10.1016/j.optlaseng.2013.10.027 Google Scholar

5. 

Z. Zhang et al., “Three-dimensional shape measurements of specular objects using phase-measuring deflectometry,” Sensors, 17 (12), 2835 (2017). https://doi.org/10.3390/s17122835 SNSRES 0746-9462 Google Scholar

6. 

H. Lee, Y. K. Min and J. I. Moon, “Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming,” Opt. Eng., 56 (12), 124107 (2017). https://doi.org/10.1117/1.OE.56.12.124107 Google Scholar

7. 

H. Yun, B. Li and S. Zhang, “Pixel-by-pixel absolute three-dimensional shape measurement with modified Fourier transform profilometry,” Appl. Opt., 56 (5), 1472 –1480 (2017). https://doi.org/10.1364/AO.56.001472 APOPAI 0003-6935 Google Scholar

8. 

H. Li et al., “Optimal wavelength selection strategy in temporal phase unwrapping with projection distance minimization,” Appl. Opt., 57 (10), 2352 –2360 (2018). https://doi.org/10.1364/AO.57.002352 APOPAI 0003-6935 Google Scholar

9. 

W. Sun et al., “Advanced method of global phase shift estimation from two linear carrier interferograms,” J. Eur. Opt. Soc. Rapid Pub., 14 (1), 10 (2018). https://doi.org/10.1186/s41476-018-0076-x Google Scholar

10. 

Y. Hu et al., “Dynamic microscopic 3D shape measurement based on marker-embedded Fourier transform profilometry,” Appl. Opt., 57 (4), 772 –780 (2018). https://doi.org/10.1364/AO.57.000772 APOPAI 0003-6935 Google Scholar

11. 

T. Petkovic, T. Pribanic and M. Donlic, “Single-shot dense 3D reconstruction using self-equalizing De Bruijn sequence,” IEEE Trans. Image Process., 25 (11), 5131 –5144 (2016). https://doi.org/10.1109/TIP.2016.2603231 IIPRE4 1057-7149 Google Scholar

12. 

Z. Zhang et al., “Comparison of Fourier transform, windowed Fourier transform, and wavelet transform methods for phase calculation at discontinuities in fringe projection profilometry,” Opt. Lasers Eng., 50 (8), 1152 –1160 (2012). https://doi.org/10.1016/j.optlaseng.2012.03.004 Google Scholar

13. 

A. Zhai et al., “A novel composite-structure-light 3D measurement method for improving accuracy based on two plus one phase shifting algorithm,” Optik—Int. J. Light Electron Opt., 124 (5), 461 –465 (2013). https://doi.org/10.1016/j.ijleo.2011.12.014 Google Scholar

14. 

S. Fernandez and J. Salvi, “One-shot absolute pattern for dense reconstruction using DeBruijn coding and windowed Fourier transform,” Opt. Commun., 291 (6), 70 –78 (2013). https://doi.org/10.1016/j.optcom.2012.10.042 OPCOB8 0030-4018 Google Scholar

15. 

M. A. Gdeisat, D. R. Burton and M. J. Lalor, “Eliminating the zero spectrum in Fourier transform profilometry using a two-dimensional continuous wavelet transform,” Opt. Commun., 266 (2), 482 –489 (2006). https://doi.org/10.1016/j.optcom.2006.05.070 OPCOB8 0030-4018 Google Scholar

16. 

K. Chen, J. Xi and Y. Yu, “Three-dimensional (3D) shape measurement of complex surface object using composite fringe patterns,” in 5th Int. Congress on Image and Signal Processing, 967 –971 (2013). https://doi.org/10.1109/CISP.2012.6469687 Google Scholar

17. 

J. Dai, Y. An and S. Zhang, “Absolute three-dimensional shape measurement with a known object,” Opt. Express, 25 (9), 10384 –10396 (2017). https://doi.org/10.1364/OE.25.010384 OPEXFF 1094-4087 Google Scholar

18. 

Z. Li et al., “Accurate calibration method for a structured light system,” Opt. Eng., 47 (5), 053604 (2008). https://doi.org/10.1117/1.2931517 Google Scholar

19. 

K. Chen, J. Xi and Y. Yu, “Quality-guided spatial phase unwrapping algorithm for fast three-dimensional measurement,” Opt. Commun., 294 (5), 139 –147 (2013). https://doi.org/10.1016/j.optcom.2013.01.002 OPCOB8 0030-4018 Google Scholar

20. 

Y. Xiao and Y. Li, “High-quality binary fringe generation via joint optimization on intensity and phase,” Opt. Lasers Eng., 97 19 –26 (2017). https://doi.org/10.1016/j.optlaseng.2017.05.006 Google Scholar

Biography

Feng Lu is a PhD student at the Faculty of Robot Science and Engineering, Northeastern University. He received his BS and MS degrees from Northeastern University in 2012 and 2015, respectively. His research interests are 3-D optical metrology, image processing, and machine learning.

Chengdong Wu is the dean of the Faculty of Robot Science and Engineering, Northeastern University, Shenyang, China. His interests include sensor applications, intelligent buildings, and wireless sensor networks.

Jikun Yang is a doctor of ophthalmology at the General Hospital of Shenyang Military Region, Shenyang, China. Her interests include sensor devices, optical engineering, and image processing.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Feng Lu, Chengdong Wu, and Jikun Yang "High-speed three-dimensional shape measurement using phase measurement profilometry without phase unwrapping," Optical Engineering 57(8), 085101 (14 August 2018). https://doi.org/10.1117/1.OE.57.8.085101
Received: 25 May 2018; Accepted: 31 July 2018; Published: 14 August 2018
Keywords: phase measurement, phase matching, fringe analysis, cameras, 3-D metrology, optical engineering, stereo vision systems
