Epipolar constraint of single-camera mirror binocular stereo vision systems
Xinghua Chai, Fuqiang Zhou, Xin Chen
Abstract
Virtual binocular sensors, composed of a camera and catoptric mirrors, have become popular among machine vision researchers owing to their high flexibility and compactness. Usually, the tested target is projected onto the camera via different numbers of reflections, and feature matching is performed within a single image. To establish the geometric principles of the feature-matching process of a mirror binocular stereo vision system, we propose a single-camera model with an epipolar constraint for matching the mirrored features. The constraint between the image coordinates of the real target and its mirror reflection is derived and can be used to eliminate nonmatching points in the feature-matching process of a mirror binocular system. To validate the epipolar constraint model and to evaluate its performance in practical applications, we performed realistic matching experiments and analysis using a mirror binocular stereo vision system. Our results demonstrate the feasibility of the proposed model and show that it considerably improves the efficacy of the mirrored-feature matching process.

1.

Introduction

With the rapid development of machine vision technology and the increasing demand for three-dimensional (3-D) measurements, binocular stereo vision technology has been widely applied in fields such as noncontact measurement, robot navigation, and on-line monitoring.1–3 Traditionally, a binocular stereo vision system is composed of two cameras or a single moving camera capturing images of the object from different directions. However, sensors built from two cameras suffer from large size and poor flexibility, whereas those using one moving camera lack instantaneity and synchronization. Against this backdrop, virtual binocular stereo vision devices that use a single camera and catoptric mirrors have become a popular research topic in recent years.4–6 Compared with conventional two-camera vision systems, a virtual binocular stereo vision system offers good synchronization, a compact structure, low cost, and high flexibility. However, in applications that use such virtual binocular stereo vision systems for 3-D measurements, the two pivotal tasks, calibration of the system and feature matching, differ from those used in traditional two-camera systems.

When using a two-camera stereo vision system, calibration refers to the process of determining the intrinsic parameters of the two cameras and their structural parameters.7 Feature matching is then performed, and the feature points are effectively constrained using the conventional epipolar constraint model.8 However, for a single-camera virtual binocular vision system, each captured image is separated into two parts, projected by the real target and by its mirror reflection. The usual approach is to separate the two parts of the image, creating a binocular system with two virtual cameras. The calibration task of such a system therefore consists of determining the intrinsic parameters of the single camera and the structural parameters of the two virtual cameras. Based on this interpretation of a virtual binocular system, a two-step calibration method (TSCM)9,10 consists of the following steps. First, the catoptric mirrors are removed, and the single camera alone is used to capture images of the calibration target and to calculate the camera's intrinsic parameters. Then, the catoptric mirrors are reincorporated into the system and one calibration target image is recaptured to determine the system's structural parameters.

The epipolar geometry of catoptric stereo vision systems with mirrors has received increasing attention.11–13 However, previous work mainly addressed the condition in which the target is projected onto the single camera via the same number of reflections along both paths. In many practical configurations, the target is imaged on the camera via different numbers of reflections, yielding a mirror relation between the two image parts.14–17 The two real cameras are thus replaced by virtual cameras formed by a single camera and mirrors.18 For a single-camera mirror system, the region of interest (ROI) usually covers the entire image, which significantly increases the matching error. To establish the geometric principles of the feature-matching process of a single-camera mirror binocular stereo vision system, we derived the epipolar constraint between the two image parts for a single-camera model. This model combines the traditional epipolar constraint with the particularities of a single-camera mirror binocular stereo vision system, providing the constraint between the image coordinates of the real target and its mirror reflection.

2.

Review of the Traditional Epipolar Constraint Model

Epipolar geometry is convenient for describing and analyzing multicamera vision systems. It represents the geometric relationship between two viewpoints of the same scene based on a few corresponding points in a pair of images. This relationship, which is formulated as a matrix (the fundamental matrix), can further be used for narrowing the ROI, computing the displacements between cameras, and rectifying stereo image pairs.

2.1.

Camera Model

Image formation in a camera can be described by the widely used pin-hole model.19 The coordinates of a 3-D point P = [x, y, z]^T in the world coordinate system and its image plane coordinates p = [u, v]^T are related through

Eq. (1)

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, $$

where s is a scale factor and M is a 3×4 matrix called the perspective projection matrix. Because the homogeneous coordinates of a vector m can be written as m̃ = [m^T, 1]^T, Eq. (1) can be written compactly as s p̃ = M P̃. The perspective projection matrix M can then be written as follows:

Eq. (2)

$$ M = A[R \;\; t], $$

where A is a 3×3 matrix mapping the camera coordinate system to the image coordinate system, and [R t] contains the structural coefficients of the two-camera vision system (the rotation matrix and translation vector, respectively), which transform the world coordinate system to the camera coordinate system. Note that the matrix A depends only on the system's intrinsic parameters, which capture the optical, geometric, and digital properties of the camera, whereas the 3×4 matrix [R t] contains only the extrinsic parameters, which describe the transformation between the two coordinate systems.
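As a minimal numerical illustration of Eqs. (1) and (2), the following Python sketch projects a 3-D point through M = A[R t]; all numerical values (intrinsic matrix, pose, and test point) are placeholders chosen for illustration and are not the parameters of the system calibrated later.

```python
import numpy as np

# Illustrative intrinsic matrix A (placeholder focal lengths and principal point)
A = np.array([[1900.0,    0.0, 820.0],
              [   0.0, 1900.0, 620.0],
              [   0.0,    0.0,   1.0]])

# Illustrative extrinsic parameters: rotation R and translation t
R = np.eye(3)
t = np.array([[0.0], [0.0], [500.0]])

# Perspective projection matrix M = A [R t], Eq. (2)
M = A @ np.hstack((R, t))

# Homogeneous world point [x, y, z, 1]^T
P_h = np.array([10.0, -5.0, 100.0, 1.0])

# Eq. (1): s [u, v, 1]^T = M [x, y, z, 1]^T; divide by the scale factor s
s_uv1 = M @ P_h
u, v = s_uv1[0] / s_uv1[2], s_uv1[1] / s_uv1[2]
print(u, v)   # pixel coordinates of the projected point
```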

2.2.

Two-Camera Epipolar Constraint Model

The epipolar constraint is one of the most important principles in binocular stereo vision and is also a fundamental constraint underlying all self-calibration techniques.20 Consider the two-camera stereo vision system shown in Fig. 1. Here, P is a 3-D point; p_l and p_r are its projections onto images I_l and I_r, respectively; and C_l and C_r are the optical centers of the left and right cameras, respectively. The plane π defined by the three spatial points P, C_l, and C_r is known as the epipolar plane. The intersection of the epipolar plane π with the image I_r is termed the epipolar line and is denoted by l_pr. Thus, the point p_r in the image plane I_r corresponding to p_l must be constrained to the line l_pr. This model can also be described by the geometric deduction shown in Fig. 1.

Fig. 1

Epipolar geometry of two images.

OE_56_8_084103_f001.png

Define K_l and K_r as the intrinsic matrices of the two cameras, respectively, and let [R t] denote the transformation between the coordinate systems of the two cameras. Under the pin-hole model, the following equation holds:

Eq. (3)

$$ \tilde{p}_r^T K_r^{-T} [t]_\times R K_l^{-1} \tilde{p}_l = 0. $$

Here, [t]_× is the antisymmetric (skew-symmetric) matrix defined by the translation vector t. To simplify the above equation, define the matrix F as

Eq. (4)

$$ F = K_r^{-T} [t]_\times R K_l^{-1}, $$

where F is known as the fundamental matrix of the two cameras. Thus, Eq. (3) reduces to the simpler expression
where F is known as the fundamental matrix of the two cameras. Thus, Eq. (3) can be reduced to a simpler expression

Eq. (5)

$$ \tilde{p}_r^T F \tilde{p}_l = 0. $$

Geometrically, F p̃_l defines the epipolar line of the point p_l in the right image. Equation (5) states that the point in the right image corresponding to p_l must lie on this epipolar line. Transposing Eq. (5) yields the symmetric relation linking the right image to the left image. The fundamental matrix F is of great significance for camera calibration and feature matching because it is the only geometric constraint available for two uncalibrated images. Once corresponding pairs of points in the two images are obtained, the intrinsic matrices K_l and K_r and the structural coefficients [R t] can be determined.
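As a sketch of how Eqs. (3)–(5) are used in practice, the following Python functions (names are ours) assemble F from assumed intrinsic matrices and structure parameters and measure how far a candidate point lies from the epipolar line of its partner; pairs whose distance exceeds a small pixel threshold can then be rejected.

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix [t]x built from the translation vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K_l, K_r, R, t):
    """F = K_r^-T [t]x R K_l^-1, Eq. (4)."""
    return np.linalg.inv(K_r).T @ skew(t) @ R @ np.linalg.inv(K_l)

def epipolar_distance(F, p_l, p_r):
    """Pixel distance of p_r from the epipolar line F p_l~, Eqs. (3) and (5)."""
    line = F @ np.append(p_l, 1.0)   # epipolar line (a, b, c) in the right image
    return abs(np.append(p_r, 1.0) @ line) / np.hypot(line[0], line[1])
```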

3.

Single-Camera Mirror Epipolar Geometry

The epipolar geometry of a two-mirror system was first investigated by Gluckman and Nayar.21 They showed that the number of free parameters in the fundamental matrix can be reduced from 7 to 6 for a two-mirror system with no constraint on the locations of mirrors. In what follows, we develop a precise description of the epipolar constraint model of a single-camera binocular vision system.

3.1.

Single-Camera Mirror Binocular System

There are two types of mirror binocular stereo vision systems. In systems of the first category, the tested target is imaged along one direct (real-space) path and one reflection path.16 A binocular stereo vision system of this type, in which one of the two images is real, is shown in Fig. 2. There are two images of the point P, corresponding to the two different paths: the mirror image of P is captured after one reflection, whereas the real point P is imaged directly. Thus, this binocular vision device is equivalent to one in which a real camera and a virtual camera are fixed at mirror-symmetric positions.

Fig. 2

Mirror binocular stereo vision system: (a) imaging principle of real and virtual cameras and (b) real calibration plane image captured by the system.

OE_56_8_084103_f002.png

In systems of the second category, the tested target is projected onto the single camera via two different reflection paths.17 In such a system, the target is imaged after one or two reflections, as shown in Fig. 3. Imaging of the target in the field of view (FOV) can be separated into two reflection paths: via the upper sloped mirror, the target is captured after one reflection, whereas two reflections are needed for imaging via the lower mirror. Therefore, for this virtual binocular structure, the left and right virtual cameras and images exhibit a mirror relationship. Figure 3(a) shows the two different paths from the target to the camera. Because the system is symmetric on four sides, four pairs of target images from different directions can be captured simultaneously, as shown in Fig. 3(b). Within each pair, the binocular images are mirror-symmetric, as shown in Fig. 3(c).

Fig. 3

Multimirror binocular stereo vision system: (a) imaging principle of two virtual cameras, (b) an image captured by the system, and (c) an amplified view.

OE_56_8_084103_f003.png

3.2.

Single-Camera Mirror Epipolar Constraint Model

As is well known, in the traditional two-camera stereo reconstruction process, feature matching can be performed effectively using the epipolar constraint.22 For mirror images, however, the epipolar constraint model exhibits characteristics different from those of traditional two-real-camera systems, because the same target point is captured along different paths by a single camera and forms two image points, as shown in Fig. 4.

Fig. 4

Epipolar point’s position in the mirror binocular vision system.

OE_56_8_084103_f004.png

Owing to the uniqueness of the real camera and the symmetry between the real camera and its reflection, the two virtual epipolar points e_l and e_r should coincide in the single image. In addition, the two epipolar points and the two virtual target points p_l and p_r should be coplanar. Thus, in the real image plane, the two epipolar points and the two target points should be collinear, and the two epipolar points should occupy the same position. In this approach, the analysis starts from the target point P and its symmetric point P′. According to Eq. (1), the perspective projections of the two points can be expressed as follows:

Eq. (6)

$$ \begin{cases} s\,\tilde{p}_r = M\tilde{P} = [M_1 \;\; m_1]\,\tilde{P}, \\ s\,\tilde{p}_l = M\tilde{P}' = [M_1 \;\; m_1]\,\tilde{P}'. \end{cases} $$

As stated in the previous section, the 3-D point P = [x, y, z]^T can be denoted by its homogeneous coordinates P̃ = [P^T, 1]^T. Thus, the following three relationships can be written:

Eq. (7)

$$ \begin{cases} \tilde{P} = [P^T \;\; 1]^T, \\ \tilde{P}' = [P'^T \;\; 1]^T, \\ P' = [E \;\; t]\,\tilde{P}, \end{cases} $$

where [E t] is the imaging transformation caused by the reflecting mirror and E is the 3×3 identity matrix. Thus, Eq. (6) can be transformed as follows:

Eq. (8)

$$ \begin{cases} s\,\tilde{p}_r = M_1 P + m_1, \\ s\,\tilde{p}_l = M_1 P + M_1 t + m_1. \end{cases} $$

To simplify this imaging model, the common term M_1 P + m_1 can be eliminated from the above two equations, yielding the following relation between p̃_r and p̃_l:

Eq. (9)

$$ s(\tilde{p}_l - \tilde{p}_r) = M_1 t. $$

Considering the above equation from a purely geometric perspective, the expression p̃_l − p̃_r describes the vector from p_r to p_l in the image coordinate system, and t is equivalent to the vector from P to P′ in the camera coordinate system. Thus, Eq. (9) can be interpreted as follows: the two-dimensional (2-D) vector from p_r to p_l is the projection of the 3-D vector from P to P′ from the camera coordinate system onto the image coordinate system. Because C_r is the projection center for any viewed point, including P, the principle expressed in Eq. (9) can be used to derive another relationship among C_r, C_l, and e:

Eq. (10)

$$ s(\tilde{p}_r - \tilde{e}) = M_1 t', $$

where t′ denotes the vector from C_r to C_l in the camera coordinate system. Let u be the unit vector normal to the mirror plane; then Eqs. (9) and (10) can be expressed as follows:

Eq. (11)

$$ \begin{cases} s(\tilde{p}_l - \tilde{p}_r) = a M_1 u, \\ s(\tilde{p}_r - \tilde{e}) = b M_1 u. \end{cases} $$

Dividing by the constants a and b and eliminating the common term M_1 u from the above equations, we obtain a relation linking ẽ, p̃_r, and p̃_l:

Eq. (12)

$$ \frac{1}{a}(\tilde{p}_l - \tilde{p}_r) = \frac{1}{b}(\tilde{p}_r - \tilde{e}). $$

According to the above equation, the three image points p_r, p_l, and e lie on the same straight line in the image plane. Because of the uniqueness of the real camera and the symmetry between the virtual camera and the real one, the two virtual epipolar points e_l and e_r coincide at the point e. In addition, the two epipolar points and the two target points p_l and p_r are coplanar. Thus, in the real single image plane, the two epipolar points and the two target points are collinear, and the two epipolar points occupy the same position, as shown in Fig. 5.

Fig. 5

Single-camera epipolar constraint principle of the mirror binocular vision system.

OE_56_8_084103_f005.png
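The collinearity stated by Eq. (12) can be verified numerically. The sketch below builds a hypothetical camera-and-mirror configuration (all values are arbitrary and unrelated to the calibrated setup of Sec. 4), reflects a space point across the mirror plane, projects the real point, its mirror image, and the virtual camera center, and checks that the three resulting image points are collinear.

```python
import numpy as np

# Hypothetical intrinsic matrix (placeholder values); the camera center is at the origin
A = np.array([[1900.0, 0.0, 820.0],
              [0.0, 1900.0, 620.0],
              [0.0, 0.0, 1.0]])

def project(X):
    """Homogeneous image coordinates [u, v, 1] of a point X in the camera frame."""
    x = A @ X
    return x / x[2]

# Mirror plane u.X = d with unit normal u and offset d (placeholders)
u = np.array([1.0, 0.2, 0.1]); u /= np.linalg.norm(u)
d = 300.0

def reflect(X):
    """Mirror reflection of a 3-D point across the plane u.X = d."""
    return X - 2.0 * (u @ X - d) * u

P   = np.array([40.0, -25.0, 900.0])    # arbitrary target point
P_m = reflect(P)                        # its mirror image
C_v = reflect(np.zeros(3))              # virtual camera center (reflection of the real center)

p_r, p_l, e = project(P), project(P_m), project(C_v)

# Collinearity: the determinant of the stacked homogeneous image points vanishes
print(np.linalg.det(np.vstack((p_r, p_l, e))))   # ~0 up to round-off
```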

3.3.

Single-Camera Multimirror Epipolar Constraint Model

Here, we consider a system different from the one described in the previous section: the space points in the FOV are projected onto the single camera via one or two mirrors. As before, the analysis starts from the target point P and its symmetric points P_o, P_l, and P_r produced by reflections in different mirrors, as shown in Fig. 6. Here, P_o is the virtual point reflected by mirror M2, and P_l is the virtual point reflected by mirror M1; P_r is the point symmetric to P_o with respect to mirror M3. The symmetric virtual cameras R and L of the real camera C are formed according to the same principle, and they make the system equivalent to a binocular system.

Fig. 6

Imaging principle of the multimirror binocular system.

OE_56_8_084103_f006.png

The virtual cameras L and O are the reflections of the real camera C in mirrors M1 and M3, respectively; they are formed by a single reflection. The virtual camera R is symmetric to the virtual camera O about mirror M2 and is thus formed by two reflections from the real camera C. According to Eq. (12), the space point P = [x, y, z]^T projects onto the real camera C and the virtual camera L, forming two image points p and p_l that satisfy the following relationship:

Eq. (13)

$$ \frac{1}{a_l}(\tilde{p}_l - \tilde{p}) = \frac{1}{b_l}(\tilde{p} - \tilde{e}). $$

The same relationship holds for the real camera C and the virtual camera O, and the virtual camera R is symmetric to the virtual camera O about mirror M3. Thus, the following equations can be derived:

Eq. (14)

$$ \frac{1}{a_o}(\tilde{p}_o - \tilde{p}) = \frac{1}{b_o}(\tilde{p} - \tilde{e}_o), $$

Eq. (15)

$$ \frac{1}{a_r}(\tilde{p}_r - \tilde{p}_o) = \frac{1}{b_r}(\tilde{p}_o - \tilde{e}_r). $$

The term p̃_o can be eliminated by combining Eqs. (13)–(15). Thus, the relationship between the two image points of the virtual cameras R and L can be derived as follows:

Eq. (16)

$$ \frac{b_r b_o}{(a_o + b_o)(a_r + b_r)}(\tilde{p}_r - \tilde{p}_l) = \left[\frac{b_l}{a_l + b_l} - \frac{b_r b_o}{(a_o + b_o)(a_r + b_r)}\right](\tilde{p}_l - \tilde{e}_{lr}), $$

Eq. (17)

$$ \tilde{e}_{lr} = \left[\frac{b_o}{a_o + b_o}(\tilde{e}_o + \tilde{e}_r) + \frac{b_l}{a_l + b_l}\tilde{e}\right] \Big/ \left[\frac{b_l}{a_l + b_l} - \frac{b_r b_o}{(a_o + b_o)(a_r + b_r)}\right], $$
where ẽ_lr denotes the homogeneous coordinates of the single epipolar point in the real image, and e_o, e_r, and e denote the epipolar points determined by the different virtual binocular structures. According to the above equations, the three image points p_r, p_l, and e_lr lie on the same straight line in the real image plane, just as in the one-mirror system, as shown in Fig. 7.

Fig. 7

Single-camera epipolar constraint principle for one and two reflections.

OE_56_8_084103_f007.png

4.

Experiments

To evaluate the performance of the proposed epipolar constraint model in practical applications, real experiments and analyses were performed using a mirror virtual binocular stereo vision system. Before the experiments, the virtual binocular vision system was calibrated, yielding the coordinates of the epipolar point and the principal point, the distortion coefficients, and the focal length of the camera lens. These calibration results are used directly in the experiments described below.

4.1.

System Calibration

The experimental system was set up according to the one-mirror configuration described in the previous section.16 It included a camera and a reflecting mirror, both fixed on the experimental platform. The camera was an IMPERX IGV-B1601M, with a frame rate of 15 fps, a resolution of 1624×1236 pixels, a lens focal length of 8.5 mm, and a 2/3-in. charge-coupled device. The setup is shown in Fig. 8.

Fig. 8

Experimental setup of the mirror virtual binocular system.

OE_56_8_084103_f008.png

To calibrate the system precisely, we used the TSCM9 and the method of Zhou et al. in Ref. 6. With the former, the system's intrinsic and structural parameters were calibrated separately, by removing and then refixing the mirror, whereas with the latter these parameters were calibrated without removing the mirror. The calibration target was a ceramic plane with circular features arranged in a 7×7 array; the diameters of the dots and the center-to-center distances between adjacent dots were 4 and 8 mm, respectively. The calibration plane images captured for calibrating the system's intrinsic and structural parameters are shown in Fig. 9.

Fig. 9

Calibration images used for TSCM: (a) and (b) two of eight images for calibration of intrinsic parameters and (c) single image for calibration of structural parameters.

OE_56_8_084103_f009.png

In the calibration process using the TSCM, eight calibration plane images were used for the extraction of the intrinsic parameters; two of them are shown in Figs. 9(a) and 9(b). Calibration of the structural parameters requires only one image, which contains the real calibration plane and its mirror reflection, as shown in Fig. 9(c). In the calibration process using Zhou et al.'s method, four mirror images of the calibration plane, comprising a total of eight calibration plane subimages, were captured for the extraction of the intrinsic parameters, as shown in Fig. 10. Calibration of the structural parameters again requires only one image, the same as for the TSCM. The results of the two calibration procedures are listed in Table 1.

Fig. 10

Four calibration images used for Zhou et al.’s method.

OE_56_8_084103_f010.png

Table 1

System calibration results using two methods.

Parameters | Results using TSCM | Results using Zhou et al.'s method
f_x, f_y (pixel) | 1948.149, 1348.217 | 1948.158, 1348.175
(u_0, v_0) (pixel) | (827.193, 620.490) | (827.253, 620.587)
k_1, k_2 | 0.001196, 0.02315 | 0.001201, 0.02434
R | [0.747 0.0035 0.772; 0.0031 0.995 0.011; 0.755 0.0158 0.744] | [0.747 0.0035 0.772; 0.0031 0.995 0.011; 0.755 0.0158 0.744]
T (mm) | (480.508, 5.283, 20.17)^T | (480.508, 5.283, 20.17)^T
ERP (pixel) | 0.07056 | 0.08283
(x_e, y_e) (pixel) | (1436.179, 712.248) | (1436.253, 712.587)

Here, f_x and f_y denote the focal lengths of the lens in the x and y directions; (u_0, v_0) are the image coordinates of the principal point; k_1 and k_2 are the second-order lens distortion coefficients; R and T are the rotation matrix and translation vector, respectively, of the virtual binocular structure; ERP denotes the reprojection error; and (x_e, y_e) are the coordinates of the single epipolar point. The structural parameters were calibrated from the same image and therefore share the same rotation matrix R and translation vector T. For a single-camera vision system, the reprojection error is a significant indicator of the mapping accuracy from 3-D space to the 2-D image. According to the calibration results, the TSCM achieved higher reprojection accuracy than Zhou et al.'s method; the TSCM calibration results are therefore used in the following feature-matching experiment.
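For reference, the reprojection error (ERP) in Table 1 is a pixel-space measure of how well the calibrated projection maps 3-D points back onto their detected image locations. A minimal root-mean-square formulation is sketched below; the function name is ours and the exact definition used in the calibration procedure may differ slightly.

```python
import numpy as np

def reprojection_error(M, points_3d, points_2d):
    """RMS pixel distance between detected points and reprojected 3-D points.

    M         : 3x4 perspective projection matrix, Eq. (2)
    points_3d : (N, 3) array of world coordinates
    points_2d : (N, 2) array of detected image coordinates
    """
    P_h = np.hstack((points_3d, np.ones((len(points_3d), 1))))  # homogeneous coordinates
    proj = (M @ P_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]                            # divide by the scale factor
    return np.sqrt(np.mean(np.sum((proj - points_2d) ** 2, axis=1)))
```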

4.2.

Validation Experiment

The experimental setup was precisely calibrated using the TSCM, as described in the previous section. To validate the proposed single-image mirror epipolar constraint model, a feature-matching experiment on the calibration plane, based on a single image, was performed. The validation process is shown in Fig. 11. First, an image of the calibration plane was captured by the calibrated experimental setup, as shown in Fig. 11(a). The calibration plane used here was the same as that used in the calibration experiment.

Fig. 11

Image processing in validation experiment: (a) the tested source image, (b) extraction of feature point coordinates, (c) lines based on feature points and epipolar point, and (d) reconstruction of all feature points using the matching results of the proposed model.

OE_56_8_084103_f011.png

Then, the coordinates of all 2×49 feature points in the tested image were extracted using the ellipse fitting method.23 The results of this extraction, after correction for image distortion, are shown in Fig. 11(b). Next, the correspondence between the two sets of feature point coordinates was established point by point according to the proposed epipolar constraint model, as shown in Fig. 11(c). Finally, by reconstructing all 49 feature points according to the stereo vision model and the previous calibration results, we obtained their 3-D space coordinates. For a more scrupulous validation of the proposed mirror epipolar constraint model, three additional experiments were performed following the same procedure as the first. The four reconstruction results are shown in Fig. 11(d).

An error analysis of the four experiments was performed to evaluate precisely the deviations between the real feature points and the reconstructed points. We calculated the coordinates of all points in the real camera coordinate system, analyzed the offset distances between these points and the real calibration plane, and compared the measured distances between adjacent feature points with the true value of 8 mm. The results of the four experiments are listed in Table 2; the average absolute and relative errors are approximately 0.05 mm and 0.6%, respectively.

Table 2

Measured distances between adjacent feature points.

Image | Number of feature points | Average absolute error (mm) | Average relative error (%)
1 | 49 | 0.0542 | 0.68
2 | 49 | 0.0498 | 0.62
3 | 49 | 0.0510 | 0.64
4 | 49 | 0.0526 | 0.66
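The error metric of Table 2 can be computed as sketched below, assuming the reconstructed feature centers are ordered row by row in the 7×7 grid and the nominal center-to-center pitch is 8 mm; the function name and array layout are our assumptions.

```python
import numpy as np

def adjacent_distance_errors(points_3d, rows=7, cols=7, pitch=8.0):
    """Average absolute (mm) and relative (%) errors of adjacent-feature distances."""
    grid = np.asarray(points_3d).reshape(rows, cols, 3)
    d_h = np.linalg.norm(np.diff(grid, axis=1), axis=2)   # horizontal neighbors
    d_v = np.linalg.norm(np.diff(grid, axis=0), axis=2)   # vertical neighbors
    dists = np.concatenate((d_h.ravel(), d_v.ravel()))
    abs_err = np.mean(np.abs(dists - pitch))
    return abs_err, abs_err / pitch * 100.0
```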

4.3.

Real Feature-Matching Experiment

To validate the proposed epipolar constraint rule in practical applications, a real feature-matching experiment was performed and is reported in this section. For comparison, we also report the results of feature matching performed without the epipolar constraint. The real target image captured by the experimental system is shown in Fig. 12(a). Feature extraction was performed using the oriented FAST and rotated BRIEF (ORB) method,24 and the matching results obtained without the constraint rule are shown in Fig. 12(b); evidently, the matching error is very high. The results obtained using the proposed epipolar constraint model are shown in Fig. 12(c). Clearly, the proposed epipolar constraint model yields better results, and we conclude that it increases the accuracy of the feature-matching process. It should be pointed out that the proposed epipolar constraint model constrains the target feature point to a line, which differs from a point-to-point matching method.

Fig. 12

Results of the feature-matching experiment: (a) the tested real target image, (b) results of feature matching using the ORB algorithm, and (c) results of feature matching using the proposed epipolar constraint model to eliminate invalid feature points.

OE_56_8_084103_f012.png
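A minimal sketch of such a line-constrained filter, using OpenCV's ORB detector and a brute-force Hamming matcher, is given below. Here img_left and img_right stand for the two halves of the captured image, keypoint coordinates are assumed to be expressed in the full-image frame (otherwise the crop offsets must be added first), and the epipolar point coordinates and pixel tolerance are illustrative placeholders.

```python
import cv2
import numpy as np

def distance_to_epipolar_line(p_l, p_r, e):
    """Pixel distance of p_r from the line through p_l and the epipolar point e."""
    line = np.cross(np.append(p_l, 1.0), np.append(e, 1.0))   # homogeneous line through p_l and e
    return abs(np.append(p_r, 1.0) @ line) / np.hypot(line[0], line[1])

def match_with_epipolar_filter(img_left, img_right, e, tol=2.0):
    """ORB matching followed by rejection of pairs violating the collinearity constraint."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    return [m for m in matches
            if distance_to_epipolar_line(np.array(kp1[m.queryIdx].pt),
                                         np.array(kp2[m.trainIdx].pt), e) < tol]

# Example usage with the epipolar point obtained from calibration (Table 1):
# e = np.array([1436.179, 712.248])
# good_matches = match_with_epipolar_filter(left_half, right_half, e)
```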

5.

Conclusion

In many applications of mirror virtual binocular vision systems, the same target is imaged by one camera via different numbers of reflections, leading to a mirror relation between the left and right parts of the captured image; in this situation, the two real cameras of a traditional binocular system are replaced by virtual cameras formed by a single camera and mirrors. To perform the feature-matching process effectively, a single-camera mirror feature-matching rule, i.e., a mirror epipolar constraint model used in the 3-D reconstruction process, was established here for a mirror virtual binocular vision system. To validate the proposed epipolar constraint model and to evaluate its performance in practical applications, system calibration experiments, error analysis, and realistic feature-matching experiments were performed using a virtual binocular stereo vision system, and the results were analyzed and reported. The results show that the proposed epipolar constraint method is feasible and increases the accuracy of feature matching.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61372177).

References

1. D. Murray and J. J. Little, "Using real-time stereo vision for mobile robot navigation," Auton. Robots 8(2), 161–171 (2000). http://dx.doi.org/10.1023/A:1008987612352
2. H. Kieu et al., "Accurate 3D shape measurement of multiple separate objects with stereo vision," Meas. Sci. Technol. 25, 035401 (2014).
3. P. Zhang et al., "Sub-aperture stitching interferometry using stereovision positioning technique," Opt. Express 18(14), 15216–15222 (2010). http://dx.doi.org/10.1364/OE.18.015216
4. I. Cinaroglu and Y. Bastanlar, "A direct approach for object detection with catadioptric omnidirectional cameras," Signal Image Video Process. 10(2), 413–420 (2016). http://dx.doi.org/10.1007/s11760-015-0768-2
5. X. Feng and D. Pan, "Research on the application of single camera stereo vision sensor in three-dimensional point measurement," J. Mod. Opt. 62(15), 1204–1210 (2015). http://dx.doi.org/10.1080/09500340.2015.1024775
6. F. Zhou et al., "A novel way of understanding for calibrating stereo vision sensor constructed by a single camera and mirrors," Measurement 46(3), 1147–1160 (2013). http://dx.doi.org/10.1016/j.measurement.2012.10.031
7. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718
8. J. Tian, Y. Ding, and X. Peng, "Self-calibration of a fringe projection system using epipolar constraint," Opt. Laser Technol. 40(3), 538–544 (2008). http://dx.doi.org/10.1016/j.optlastec.2007.08.009
9. K. B. Lim and Y. Xiao, "Virtual stereovision system: new understanding on single-lens stereovision using a biprism," J. Electron. Imaging 14(4), 043020 (2005). http://dx.doi.org/10.1117/1.2137654
10. B. Tu et al., "High precision two-step calibration method for the fish-eye camera," Appl. Opt. 52(7), C37–C42 (2013). http://dx.doi.org/10.1364/AO.52.000C37
11. H. H. P. Wu et al., "Epipolar geometry of catadioptric stereo systems with planar mirrors," Image Vision Comput. 27(8), 1047–1061 (2009). http://dx.doi.org/10.1016/j.imavis.2008.09.007
12. J. Cui, J. Huo, and M. Yang, "Novel method of calibration with restrictive constraints for stereo-vision system," J. Mod. Opt. 63(9), 835–846 (2016). http://dx.doi.org/10.1080/09500340.2015.1106602
13. G. L. Mariottini et al., "Catadioptric stereo with planar mirrors: multiple-view geometry and camera localization," in Visual Servoing via Advanced Numerical Methods, pp. 3–21, Springer, London (2010).
14. H. H. P. Wu and S. H. Chang, "Fundamental matrix of planar catadioptric stereo systems," IET Comput. Vis. 4(2), 85–104 (2010). http://dx.doi.org/10.1049/iet-cvi.2008.0021
15. J. Gijeong, K. Sungho, and K. Inso, "Single-camera panoramic stereo system with single-viewpoint optics," Opt. Lett. 31(1), 41–43 (2006). http://dx.doi.org/10.1364/OL.31.000041
16. Z. Y. Zhang and H. T. Tsui, "3D reconstruction from a single view of an object and its image in a plane mirror," in Proc. Int. Conf. on Pattern Recognition, Vol. 4, p. 1174 (1998). http://dx.doi.org/10.1109/ICPR.1998.711905
17. F. Zhou et al., "Omnidirectional stereo vision sensor based on single camera and catoptric system," Appl. Opt. 55(25), 6813–6820 (2016). http://dx.doi.org/10.1364/AO.55.006813
18. Y. Cui et al., "Precise calibration of binocular vision system used for vision measurement," Opt. Express 22(8), 9134–9149 (2014). http://dx.doi.org/10.1364/OE.22.009134
19. P. D. Lin and K. S. Chi, "Comparing two new camera calibration methods with traditional pinhole calibrations," Opt. Express 15(6), 3012–3022 (2007). http://dx.doi.org/10.1364/OE.15.003012
20. Z. Y. Zhang, "Determining the epipolar geometry and its uncertainty: a review," Int. J. Comput. Vis. 27(2), 161–195 (1998). http://dx.doi.org/10.1023/A:1007941100561
21. J. Gluckman and S. K. Nayar, "Planar catadioptric stereo: geometry and calibration," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Vol. 1, pp. 22–28 (1999). http://dx.doi.org/10.1109/CVPR.1999.786912
22. X. Tan et al., "Feature matching in stereo images encouraging uniform spatial distribution," Pattern Recognit. 48(8), 2530–2542 (2015). http://dx.doi.org/10.1016/j.patcog.2015.02.026
23. D. K. Prasad, M. K. H. Leung, and C. Quek, "ElliFit: an unconstrained, non-iterative, least squares based geometric ellipse fitting method," Pattern Recognit. 46(5), 1449–1465 (2012). http://dx.doi.org/10.1016/j.patcog.2012.11.007
24. E. Rublee et al., "ORB: an efficient alternative to SIFT or SURF," in Proc. IEEE Int. Conf. on Computer Vision, pp. 2564–2571 (2011). http://dx.doi.org/10.1109/ICCV.2011.6126544

Biography

Xinghua Chai received his MS degree from Beijing Information Science and Technology University in 2013. He is working toward his PhD in the School of Instrumentation Science and Optoelectronics Engineering, Beihang University. His research directions are vision measurement and vision sensors.

Fuqiang Zhou received his BS, MS, and PhD degrees from the School of Instrument, Measurement and Test Technology at Tianjin University in 1994, 1997, and 2000, respectively. He joined the School of Automation Science and Electrical Engineering at Beihang University as a postdoctoral research fellow in 2000. He is now a professor in the School of Instrumentation Science and Optoelectronics Engineering at Beihang University. His research directions include computer vision, image processing, and optical metrology.

Xin Chen is a doctoral candidate at Beihang University. She received her master's degree from Taiyuan University of Technology in 2014 and her BS degree from Tianjin Agricultural University in 2011. Her research interests include machine vision, image processing, and vision measurement.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Xinghua Chai, Fuqiang Zhou, and Xin Chen "Epipolar constraint of single-camera mirror binocular stereo vision systems," Optical Engineering 56(8), 084103 (19 August 2017). https://doi.org/10.1117/1.OE.56.8.084103
Received: 8 May 2017; Accepted: 25 July 2017; Published: 19 August 2017
KEYWORDS: Cameras, Mirrors, Stereo vision systems, Calibration, Imaging systems, Visual process modeling, Systems modeling
