## 1. Introduction

Real-time position and orientation measurement is an indispensable component of industrial large-scale metrology, particularly in applications such as robot positioning in automated manufacturing, assembly of large aerospace structures, and autonomous guided vehicle navigation.^{1–4} Recently, optical measurement technology has been widely applied in modern large-scale industry owing to its portability, noncontact operation, high precision, and large measurement range. Considering the need for accurate, flexible, and efficient solutions for real-time large-scale position and orientation measurement, several distributed large-scale metrology systems^{5,6} based on optical techniques are currently available, ranging from digital photogrammetry^{7,8} and the indoor global positioning system (iGPS)^{9,10} to the workspace measuring and positioning system (wMPS).^{11} The operating principles of these distributed optical systems are essentially the same: a series of measuring stations works cooperatively to collect optical information and determine the coordinates of a set of control points attached to the moving object; the position and orientation of the object are then calculated from these coordinates.^{12}

It is intuitively clear that the basic problem to be solved is the coordinate measurement of a spatial point in the workspace. In the distributed metrology systems mentioned previously, the coordinates of a spatial point can be obtained only if optical information from two or more measuring stations is acquired. However, this requirement cannot be satisfied in many cases because of occlusion, which arises from several factors: the complex structure of the moving object, obstacles and physical obstructions in the working volume, and the limited field of view of the optical devices. When occlusion occurs in practical applications, it is often the case that information is available from only one measuring station for a given point, and sometimes from none at all. Consequently, the measurement cannot be accomplished.

To overcome this problem, much work has been conducted and several alternative solutions have been proposed. A straightforward approach is to improve the connectivity of the system by deploying a large number of transmitting and receiving devices, but this is rarely cost-effective. As a compromise among cost, system connectivity, and measurement accuracy, other methods avoid occlusion by optimizing the deployment of the devices on the basis of a variety of optimal positioning algorithms.^{13–15} Although these methods have the potential to solve the problem, they suffer from three drawbacks: (1) their preprocessing requirements make measurement complex and time-consuming, (2) their poor flexibility means they may become unavailable when the trajectory of the moving object changes, and (3) most of the algorithms have been studied and developed on the basis of computer simulations without practical tests in applications.

This paper presents two real-time position and orientation measurement methods with occlusion handling for distributed optical large-scale metrology systems, which should be used in combination in practical applications. All the work in this paper is carried out using the wMPS as a verification platform. The two proposed methods use three control points and six control points, respectively, to establish the constraints. Position and orientation measurement can then be accomplished conveniently and accurately even if occlusion occurs. In addition, since all the processing is performed on-line, the proposed approaches need no complex preprocessing procedures and adapt well to trajectory adjustments.

This paper is organized as follows. In Sec. 2, the wMPS, as the verification platform, is described and its operating features are outlined. Section 3 describes the two proposed methods for real-time position and orientation measurement with occlusion handling, including the mathematical model and the establishment of the constraints. In Sec. 4, experiments are performed to verify the feasibility and accuracy of the proposed methods. Finally, discussion and conclusions are given.

## 2. wMPS Technology and Operating Features

The wMPS is a laser-based measurement device for large-scale metrology applications, which is developed by Tianjin University, China. As shown in Fig. 1, a typical setup of the wMPS is composed of transmitters, receivers, signal processors, a scale bar, and a terminal computer.

The transmitter consists of a rotating head and a stationary base. With two line laser modules fixed on the rotating head and several pulsed lasers mounted around the stationary base, the transmitter generates three optical signals: two fan-shaped planar laser beams rotating with the head and an omnidirectional laser strobe emitted by the pulsed lasers synchronously when the head rotates to a predefined position of every cycle. The receiver captures the three signals and then converts them into electrical signals through a photoelectrical sensor. The signal processor distinguishes between the electrical signals obtained from different transmitters and then extracts the information of the laser planes from them. Subsequently, using wireless Ethernet, the information is sent to the terminal computer to calculate the coordinates of the receiver.

It is noteworthy that the locations of the transmitters should be determined before the start of the measurement, which is a part of the setup procedure. This is achieved by a calibration algorithm known as bundle adjustment.^{16}^{,}^{17} With this algorithm, the positions and orientations of the transmitters with respect to the global coordinate frame can be calculated by using a calibrated scale bar. Once the system setup is completed, the measurement can be performed. The transmitters distributed around the working volume rotate at different speeds to allow the signals from them to be differentiated. When the laser planes emitted from at least two transmitters intersect at a receiver, the equations of the planes are exactly known from the information captured by it. Then the coordinates of the receiver can be obtained by least-squares solution of these plane equations.
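The least-squares location step described above can be sketched numerically. The following is a minimal illustration, not the system's implementation; the plane coefficients are hypothetical values chosen so the planes meet at a known point:

```python
import numpy as np

def locate_receiver(planes):
    """Least-squares intersection of laser planes.

    planes: (k, 4) array of global-frame plane coefficients [a, b, c, d]
            with a*x + b*y + c*z + d = 0 and k >= 3 (two transmitters
            contribute four planes in total).
    Returns the receiver coordinates (x, y, z).
    """
    planes = np.asarray(planes, dtype=float)
    A, d = planes[:, :3], planes[:, 3]
    # Solve A @ p = -d in the least-squares sense.
    p, *_ = np.linalg.lstsq(A, -d, rcond=None)
    return p

# Hypothetical coefficients: four planes meeting at (1.0, 2.0, 3.0).
planes = [
    [1, 0, 0, -1.0],
    [0, 1, 0, -2.0],
    [0, 0, 1, -3.0],
    [1, 1, 1, -6.0],
]
print(locate_receiver(planes))  # ≈ [1. 2. 3.]
```

With more than three planes the system is overdetermined, which is exactly the situation the least-squares solution is meant to handle.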

Compared with the iGPS, another instrument designed for large-scale metrology, the wMPS uses the same operating mode, that is, distributed optical measurement based on rotary-laser scanning. However, their measurement principles and mathematical models are essentially different: the iGPS is based on multiple angle measurements, while the wMPS discussed in this paper is based on multiplane intersection. Nevertheless, both are good choices for large-scale position and orientation measurement, offering some powerful features, especially multitasking capability.

In order to determine the position and orientation, the conventional method with the wMPS is measuring the global coordinates of three or more receivers attached to the moving object simultaneously, which is based on the location principle of the wMPS. Then, the position and orientation of the object can be calculated directly through a mathematical method such as quaternion algorithm.^{18}^{,}^{19} However, if occlusion occurs during measurement, it is often the case that the receivers cannot be located because they can capture signals from only one or even none of the transmitters. Consequently, the conventional method is not feasible. Therefore, we present two methods with occlusion handling to address this problem. The details will be described in the section below.
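The quaternion algorithm^{18} cited above has a well-known closed-form statement (Horn's unit-quaternion solution for absolute orientation); a sketch of it, with all point coordinates taken as placeholders, is:

```python
import numpy as np

def horn_pose(P_obj, P_glb):
    """Closed-form pose from matched point sets (Horn's unit-quaternion method).

    P_obj, P_glb: (n, 3) arrays of corresponding points, n >= 3, noncollinear.
    Returns (R, t) such that P_glb ≈ R @ P_obj + t.
    """
    P_obj, P_glb = np.asarray(P_obj, float), np.asarray(P_glb, float)
    co, cg = P_obj.mean(0), P_glb.mean(0)
    # Correlation matrix of the centered point sets.
    M = (P_obj - co).T @ (P_glb - cg)
    Sxx, Sxy, Sxz = M[0]
    Syx, Syy, Syz = M[1]
    Szx, Szy, Szz = M[2]
    # Symmetric 4x4 matrix whose top eigenvector is the optimal quaternion.
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz, Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,       Syz + Szy,        -Sxx - Syy + Szz],
    ])
    w, V = np.linalg.eigh(N)
    q0, q1, q2, q3 = V[:, -1]  # eigenvector of the largest eigenvalue
    # Rotation matrix from the unit quaternion (sign of q does not matter).
    R = np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2-q0*q3),         2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),         2*(q2*q3+q0*q1),         q0*q0-q1*q1-q2*q2+q3*q3],
    ])
    t = cg - R @ co
    return R, t
```

Given the located global coordinates of three or more receivers and their precalibrated object-frame coordinates, this directly yields the pose of the object.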

## 3. Position and Orientation Measurement with Occlusion Handling

### 3.1. Measurement Schematic and Mathematical Model

The measurement schematic of the proposed methods is the same as the conventional method, which is shown in Fig. 2. $N(N\ge 2)$ transmitters are distributed around the working volume to construct a measurement network, and $M(M\ge 3)$ receivers are integrated with the moving object to create a coordinate frame that moves with it. The coordinates of these receivers in the object coordinate frame are precalibrated.

According to the measurement schematic described previously, the mathematical model constructed with the $n$’th $(n=1,2,\cdots,N)$ transmitter and the $i$’th $(i=1,2,\cdots,M)$ receiver is shown in Fig. 3. For simplicity, the transmitter can be treated as two planes rotating anticlockwise around a common axis, together with an omnidirectional point light source at the origin emitting laser pulses at a fixed frequency. The receiver can also be simplified as a mass point ${P}_{i}$ at the optical center of its photoelectrical sensor. The local coordinate frame of the transmitter is defined as follows: the rotation shaft of the two laser planes is defined as axis ${z}_{\mathrm{t}n}$; the origin ${o}_{\mathrm{t}n}$ is the intersection of laser plane 1 and axis ${z}_{\mathrm{t}n}$; the axis ${x}_{\mathrm{t}n}$ lies in laser plane 1 at the initial time (the time when the pulsed lasers emit the omnidirectional laser strobe) and is perpendicular to axis ${z}_{\mathrm{t}n}$; and the axis ${y}_{\mathrm{t}n}$ is determined according to the right-hand rule.

The equations of the two laser planes in ${o}_{\mathrm{t}n}-{x}_{\mathrm{t}n}{y}_{\mathrm{t}n}{z}_{\mathrm{t}n}$ at the initial time can be represented with three characteristics: ${\mathbf{n}}_{n1}={({n}_{11}\;{n}_{12}\;{n}_{13})}^{\mathrm{T}}$, ${\mathbf{n}}_{n2}={({n}_{21}\;{n}_{22}\;{n}_{23})}^{\mathrm{T}}$, and $\Delta{d}_{n}$, which are precalibrated as intrinsic parameters of the transmitter.^{20} ${\mathbf{n}}_{n1}$ and ${\mathbf{n}}_{n2}$ are the normal vectors of the two laser planes at the initial time, and $\Delta{d}_{n}$ is the deviation between the two laser planes along the axis ${z}_{\mathrm{t}n}$. At the initial time, shown in Fig. 3(a), the receiver captures the omnidirectional laser strobe and starts the timer. As shown in Fig. 3(b), when laser plane 1 passes through the receiver point ${P}_{i}$, the time ${t}_{n1i}$ is recorded. Assuming that the angular velocity of the transmitter is $\omega$, the corresponding normal vector is given by

## (1)

$${\mathbf{n}}_{n1}^{\prime}=\left[\begin{array}{ccc}\cos\omega {t}_{n1i}& -\sin\omega {t}_{n1i}& 0\\ \sin\omega {t}_{n1i}& \cos\omega {t}_{n1i}& 0\\ 0& 0& 1\end{array}\right]{\mathbf{n}}_{n1}=\left[\begin{array}{c}{n}_{11}\cos\omega {t}_{n1i}-{n}_{12}\sin\omega {t}_{n1i}\\ {n}_{11}\sin\omega {t}_{n1i}+{n}_{12}\cos\omega {t}_{n1i}\\ {n}_{13}\end{array}\right].$$

In a similar way, for laser plane 2,

## (2)

$${\mathbf{n}}_{n2}^{\prime}=\left[\begin{array}{ccc}\cos\omega {t}_{n2i}& -\sin\omega {t}_{n2i}& 0\\ \sin\omega {t}_{n2i}& \cos\omega {t}_{n2i}& 0\\ 0& 0& 1\end{array}\right]{\mathbf{n}}_{n2}=\left[\begin{array}{c}{n}_{21}\cos\omega {t}_{n2i}-{n}_{22}\sin\omega {t}_{n2i}\\ {n}_{21}\sin\omega {t}_{n2i}+{n}_{22}\cos\omega {t}_{n2i}\\ {n}_{23}\end{array}\right].$$

Therefore, when the two laser planes pass through the receiver successively, their equations in ${o}_{\mathrm{t}n}-{x}_{\mathrm{t}n}{y}_{\mathrm{t}n}{z}_{\mathrm{t}n}$ can be expressed as

## (3)

$$\left\{\begin{array}{l}{a}_{\mathrm{t}n1}x+{b}_{\mathrm{t}n1}y+{c}_{\mathrm{t}n1}z+{d}_{\mathrm{t}n1}=0\\ {a}_{\mathrm{t}n2}x+{b}_{\mathrm{t}n2}y+{c}_{\mathrm{t}n2}z+{d}_{\mathrm{t}n2}=0\end{array}\right.$$

As mentioned previously, the positions and orientations of the transmitters with respect to the global coordinate frame are calibrated by bundle adjustment. Consequently, the equations of the laser planes in the global coordinate frame can be obtained as

## (4)

$$\left\{\begin{array}{l}{a}_{\mathrm{g}n1}x+{b}_{\mathrm{g}n1}y+{c}_{\mathrm{g}n1}z+{d}_{\mathrm{g}n1}=0\\ {a}_{\mathrm{g}n2}x+{b}_{\mathrm{g}n2}y+{c}_{\mathrm{g}n2}z+{d}_{\mathrm{g}n2}=0\end{array}\right.$$

In Eq. (4), the coefficients can be calculated through Eq. (5):

## (5)

$$\left[\begin{array}{cccc}{a}_{\mathrm{g}np}& {b}_{\mathrm{g}np}& {c}_{\mathrm{g}np}& {d}_{\mathrm{g}np}\end{array}\right]=\left[\begin{array}{cccc}{a}_{\mathrm{t}np}& {b}_{\mathrm{t}np}& {c}_{\mathrm{t}np}& {d}_{\mathrm{t}np}\end{array}\right]{\left[\begin{array}{cc}{\mathbf{R}}_{\mathrm{t}n}^{\mathrm{g}}& {\mathbf{T}}_{\mathrm{t}n}^{\mathrm{g}}\\ 0& 1\end{array}\right]}^{-1},\quad (p=1,2),$$

where ${\mathbf{R}}_{\mathrm{t}n}^{\mathrm{g}}$ and ${\mathbf{T}}_{\mathrm{t}n}^{\mathrm{g}}$ are the rotation matrix and translation vector of the $n$’th transmitter with respect to the global coordinate frame, obtained from the bundle adjustment calibration.

### 3.2. Determine the Position and Orientation with Three-Point Distance-Plane Constraint

Suppose that occlusion occurs but that three noncollinear receivers attached to the moving object can each still capture the optical signals from one transmitter. Then the position and orientation of the object can be calculated through the algorithm based on the three-point distance-plane constraint described below.

We define the corresponding coordinate pairs of the three receivers as $\{{\mathbf{P}}_{\mathrm{o}i},{\mathbf{P}}_{\mathrm{g}i}\}$, $(i=1,2,3)$. ${\mathbf{P}}_{\mathrm{o}i}={({x}_{\mathrm{o}i}\;{y}_{\mathrm{o}i}\;{z}_{\mathrm{o}i})}^{\mathrm{T}}$ denotes the coordinates in the object coordinate frame, which are precalibrated before measurement, and ${\mathbf{P}}_{\mathrm{g}i}={({x}_{\mathrm{g}i}\;{y}_{\mathrm{g}i}\;{z}_{\mathrm{g}i})}^{\mathrm{T}}$ denotes the unknown coordinates in the global coordinate frame. Assuming that the $i$’th receiver captures the signals from the $n$’th transmitter, the three-point distance-plane constraint can be established by using the plane equations from Eq. (4) and the distances between the three receivers, which can be expressed as

## (6)

$$\left\{\begin{array}{l}{P}_{\mathrm{g}i\_1}={a}_{\mathrm{g}n1}{x}_{\mathrm{g}i}+{b}_{\mathrm{g}n1}{y}_{\mathrm{g}i}+{c}_{\mathrm{g}n1}{z}_{\mathrm{g}i}+{d}_{\mathrm{g}n1}=0\\ {P}_{\mathrm{g}i\_2}={a}_{\mathrm{g}n2}{x}_{\mathrm{g}i}+{b}_{\mathrm{g}n2}{y}_{\mathrm{g}i}+{c}_{\mathrm{g}n2}{z}_{\mathrm{g}i}+{d}_{\mathrm{g}n2}=0\\ {D}_{ij}={\Vert {\mathbf{P}}_{\mathrm{g}i}-{\mathbf{P}}_{\mathrm{g}j}\Vert}_{2}-{\Vert {\mathbf{P}}_{\mathrm{o}i}-{\mathbf{P}}_{\mathrm{o}j}\Vert}_{2}=0\end{array}\right.$$

From Eq. (6), we can construct an objective function to be minimized:

## (7)

$$E=\sum _{i=1}^{3}({P}_{\mathrm{g}i\_1}^{2}+{P}_{\mathrm{g}i\_2}^{2})+\sum _{i=1}^{3}\sum _{\begin{array}{l}j=1\\ j\ne i\end{array}}^{3}{D}_{ij}^{2}=\mathrm{min}.$$The coordinates of the three receivers can be obtained by solving Eq. (7), and then the position and orientation of the object can be calculated by using a mathematical method known as quaternion algorithm.^{18}^{,}^{19} For this nonlinear objective function, Eq. (7) should be solved by an iterative optimization method such as Levenberg–Marquardt algorithm.^{21}^{,}^{22} The problem of initial value selection then arises. In order to overcome this problem, we use the coordinates of the three points measured at the moment before occlusion occurs as the initial value for the first measurement, and then during occlusion the earlier result obtained by the proposed method can be used as the initial value for the next measurement.

### 3.3. Determine the Position and Orientation with Six-Point Multiplane Constraint

The method described above provides a convenient solution based on the three-point distance-plane constraint. However, it does not work in all cases. If occlusion persists throughout the working volume, the initial value of the iterative process cannot be determined, and the three-point method becomes unavailable. Therefore, we propose another method based on the six-point multiplane constraint to address this issue. This method requires that six receivers attached to the moving object can each capture the optical signals from one transmitter.

Similar to the three-point method, we define the corresponding coordinate pairs of the six receivers as $\{{\mathbf{P}}_{\mathrm{o}i},{\mathbf{P}}_{\mathrm{g}i}\}$, $(i=1,2,\cdots,6)$. Then the position and orientation of the moving object can be given by

## (8)

$${\mathbf{P}}_{\mathrm{g}i}={\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}{\mathbf{P}}_{\mathrm{o}i}+{\mathbf{T}}_{\mathrm{o}}^{\mathrm{g}},$$

where ${\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}$ and ${\mathbf{T}}_{\mathrm{o}}^{\mathrm{g}}$ are the rotation matrix and translation vector relating the object coordinate frame to the global coordinate frame. Assuming that the $i$’th receiver captures the signals from the $n$’th transmitter, the six-point multiplane constraint can be established by substituting Eq. (8) into Eq. (4):

## (9)

$$\left\{\begin{array}{l}{P}_{\mathrm{g}i\_1}={a}_{\mathrm{g}n1}{x}_{\mathrm{o}i}{r}_{1}+{a}_{\mathrm{g}n1}{y}_{\mathrm{o}i}{r}_{2}+{a}_{\mathrm{g}n1}{z}_{\mathrm{o}i}{r}_{3}+{b}_{\mathrm{g}n1}{x}_{\mathrm{o}i}{r}_{4}+{b}_{\mathrm{g}n1}{y}_{\mathrm{o}i}{r}_{5}+{b}_{\mathrm{g}n1}{z}_{\mathrm{o}i}{r}_{6}\\ \phantom{{P}_{\mathrm{g}i\_1}=}+{c}_{\mathrm{g}n1}{x}_{\mathrm{o}i}{r}_{7}+{c}_{\mathrm{g}n1}{y}_{\mathrm{o}i}{r}_{8}+{c}_{\mathrm{g}n1}{z}_{\mathrm{o}i}{r}_{9}+{a}_{\mathrm{g}n1}{t}_{x}+{b}_{\mathrm{g}n1}{t}_{y}+{c}_{\mathrm{g}n1}{t}_{z}+{d}_{\mathrm{g}n1}=0\\ {P}_{\mathrm{g}i\_2}={a}_{\mathrm{g}n2}{x}_{\mathrm{o}i}{r}_{1}+{a}_{\mathrm{g}n2}{y}_{\mathrm{o}i}{r}_{2}+{a}_{\mathrm{g}n2}{z}_{\mathrm{o}i}{r}_{3}+{b}_{\mathrm{g}n2}{x}_{\mathrm{o}i}{r}_{4}+{b}_{\mathrm{g}n2}{y}_{\mathrm{o}i}{r}_{5}+{b}_{\mathrm{g}n2}{z}_{\mathrm{o}i}{r}_{6}\\ \phantom{{P}_{\mathrm{g}i\_2}=}+{c}_{\mathrm{g}n2}{x}_{\mathrm{o}i}{r}_{7}+{c}_{\mathrm{g}n2}{y}_{\mathrm{o}i}{r}_{8}+{c}_{\mathrm{g}n2}{z}_{\mathrm{o}i}{r}_{9}+{a}_{\mathrm{g}n2}{t}_{x}+{b}_{\mathrm{g}n2}{t}_{y}+{c}_{\mathrm{g}n2}{t}_{z}+{d}_{\mathrm{g}n2}=0\end{array}\right.$$

where ${r}_{1},\ldots,{r}_{9}$ are the elements of ${\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}$ and ${({t}_{x}\;{t}_{y}\;{t}_{z})}^{\mathrm{T}}={\mathbf{T}}_{\mathrm{o}}^{\mathrm{g}}$. The ${\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}$ and ${\mathbf{T}}_{\mathrm{o}}^{\mathrm{g}}$ to be determined thus include 12 unknown parameters, and each receiver provides two equations of the form of Eq. (9). Therefore, ${\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}$ and ${\mathbf{T}}_{\mathrm{o}}^{\mathrm{g}}$ can be determined by solving the linear equations provided by the six receivers. However, in practical applications, ${\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}$ determined in this way does not in general satisfy the orthogonality condition. Thus, we consider Eq. (9) together with the orthogonality constraints on ${\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}$:

## (10)

$$\left\{\begin{array}{l}{f}_{1}={r}_{1}^{2}+{r}_{2}^{2}+{r}_{3}^{2}-1=0\\ {f}_{2}={r}_{4}^{2}+{r}_{5}^{2}+{r}_{6}^{2}-1=0\\ {f}_{3}={r}_{7}^{2}+{r}_{8}^{2}+{r}_{9}^{2}-1=0\\ {f}_{4}={r}_{1}{r}_{4}+{r}_{2}{r}_{5}+{r}_{3}{r}_{6}=0\\ {f}_{5}={r}_{1}{r}_{7}+{r}_{2}{r}_{8}+{r}_{3}{r}_{9}=0\\ {f}_{6}={r}_{4}{r}_{7}+{r}_{5}{r}_{8}+{r}_{6}{r}_{9}=0\end{array}\right.$$

## (11)

$$E=\sum _{i=1}^{6}({P}_{\mathrm{g}i\_1}^{2}+{P}_{\mathrm{g}i\_2}^{2})+M\sum _{j=1}^{6}{f}_{j}^{2}=\mathrm{min},$$

where $M$ is a weighting factor for the constraint terms. Equation (11) can be solved by an iterative optimization method such as the Levenberg–Marquardt algorithm.^{21,22} The initial value for the iterative method can be calculated through the linear solution of Eq. (9).
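The linear solution of Eq. (9) can be sketched as below: the two plane equations per receiver are stacked into a 12-by-12 linear system in $({r}_{1},\ldots,{r}_{9},{t}_{x},{t}_{y},{t}_{z})$. As a stand-in for the subsequent constrained refinement of Eq. (11), this sketch enforces the orthogonality constraints by SVD projection onto a rotation matrix; all plane coefficients and receiver coordinates are hypothetical:

```python
import numpy as np

def six_point_linear_pose(P_obj, planes):
    """Linear solution of Eq. (9) for the 12 pose unknowns (r1..r9, tx, ty, tz).

    P_obj  : (6, 3) object-frame receiver coordinates (noncoplanar)
    planes : planes[i] is a (2, 4) array of global-frame plane coefficients
             [a, b, c, d] observed by receiver i
    Returns (R, t) with R projected onto a proper rotation matrix.
    """
    A, rhs = [], []
    for i in range(6):
        xo, yo, zo = P_obj[i]
        for a, b, c, d in planes[i]:
            # One row of Eq. (9): coefficients of r1..r9 then tx, ty, tz.
            A.append([a * xo, a * yo, a * zo,
                      b * xo, b * yo, b * zo,
                      c * xo, c * yo, c * zo,
                      a, b, c])
            rhs.append(-d)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    R, t = x[:9].reshape(3, 3), x[9:]
    # Enforce the orthogonality constraints of Eq. (10) by projecting R
    # onto the nearest rotation matrix (SVD-based projection).
    U, _, Vt = np.linalg.svd(R)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, t
```

With noise-free data the linear system is already consistent; with real measurements the projected result serves as the initial value for the iterative solution of Eq. (11).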

## 4. Experiment and Results

According to a recent study,^{11} the wMPS provides high accuracy for coordinate measurement in a large volume. As an off-the-shelf device, it has been successfully used in industrial applications for real-time target detection with the conventional method.^{23} Therefore, a comparison experiment between the conventional method and the proposed methods is conducted with the wMPS.

### 4.1. Setup of the Verification Platform

The experimental setup used to validate the proposed methods is shown in Fig. 4.

As illustrated in Fig. 4, a target object is used in the experiment, whose position and orientation are measured for comparison. Six receivers are fixed on the object to provide a coordinate frame for it, and the coordinates of the receivers in this coordinate frame were experimentally identified using a precision coordinate measurement machine.

Two wMPS transmitters are placed $\sim 4\ \mathrm{m}$ from the industrial robot to construct the measurement network. Their intrinsic parameters are

Bundle adjustment calibration described in Sec. 2 is carried out in a $3\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{m}\times 3\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{m}\times 2\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{m}$ working volume around the robot, and the positions and orientations of the transmitters with respect to the global coordinate frame are then determined.

### 4.2. Experiment Procedure and Results

As shown in Fig. 4, the target object was attached to the end-effector of the robot, so its position and orientation changed accordingly when the robot moved. During the experiment, the robot was moved to 10 different locations in the working volume, and the corresponding positions and orientations of the object were measured by the conventional method and the proposed methods for comparison. At each location, the experiment was performed in the same way by using a removable obstacle. First, without the obstacle, each receiver captured the signals from both transmitters, and the position and orientation of the object were recorded by the conventional method as a reference. After that, with the robot held still, we placed the obstacle between the transmitters and the object to simulate occlusion in practical applications. The optical signals from the transmitters to the receivers were partially interrupted by the obstacle, and the position and orientation of the object were then measured by using the three-point method with receivers 1, 3, and 5 and the six-point method with all six receivers.

The position and orientation of the object are given by the translation vector ${\mathbf{T}}_{\mathrm{o}}^{\mathrm{g}}={({t}_{x}\;{t}_{y}\;{t}_{z})}^{\mathrm{T}}$ and the rotation matrix ${\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}$, which relate the object coordinate frame to the global coordinate frame. To compare the results intuitively, ${\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}$ is represented, as is customary, by three single rotation angles: pitch $\omega$, yaw $\phi$, and roll $\kappa$. The deviations between the positions and orientations measured by the two proposed methods and the conventional method are shown in Figs. 5 and 6, respectively.
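Decomposing ${\mathbf{R}}_{\mathrm{o}}^{\mathrm{g}}$ into three single rotation angles for such comparisons can be sketched as below; a Z-Y-X factorization is assumed here, since the paper does not state its exact angle convention:

```python
import numpy as np

def rotation_to_angles(R):
    """Decompose a rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    into three single rotation angles (Z-Y-X convention, assumed).

    Returns (pitch, yaw, roll) in degrees; valid when cos(pitch) != 0.
    """
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.degrees([pitch, yaw, roll])
```

Comparing angle triples computed this way from two measured rotation matrices gives the orientation deviations plotted in Figs. 5 and 6.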

As can be seen from Fig. 5, the deviations of the rotation angles with the three-point method are kept within $0.1\ \mathrm{deg}$, and the position deviations within $0.15\ \mathrm{mm}$. Likewise, from Fig. 6, the orientation and position deviations of the six-point method are kept within 0.04 deg and 0.08 mm, respectively. The experimental results demonstrate that the proposed methods are entirely feasible when occlusion occurs and also exhibit good accuracy.

Another experiment was designed to compare the measurement results of the two methods when the obstacle is removed. Without the obstacle, the robot was moved to 10 different locations in the working volume. At each location, the corresponding positions and orientations of the object were measured by the three-point method with receivers 1, 3, and 5 and the six-point method with all six receivers, and then compared with the ground truth obtained by the conventional method. The comparison results are shown in Figs. 7 and 8.

From Figs. 7 and 8, it is clear that the measurements without the obstacle lead to the same conclusion as those with it. Furthermore, it is worth noting that the accuracy of the six-point method is superior to that of the three-point method, owing to the redundant data provided by the additional control points.

## 5. Discussion

Concerning the two methods proposed in this paper, several issues merit discussion.

1. The two methods are appropriate to different cases because of their individual characteristics. The three-point method needs fewer control points, but it becomes unavailable if occlusion persists throughout the working volume, because the iterative initial value cannot then be determined. The six-point method is not limited by the occlusion condition, but more control points must be used. Furthermore, the experimental results show that the accuracy of the six-point method is superior to that of the three-point method, owing to the redundant data provided by the additional control points. Therefore, in practical applications, the two methods should be used in combination for an efficient compromise among the number of available control points, the feasibility of the mathematical calculation, and the measurement accuracy.

2. With regard to both methods, it is worth noting that if redundant data are obtained from more control points, uncertainty can be reduced and higher measurement precision achieved.

3. The proposed methods can also be applied to distributed optical large-scale measurement systems other than the wMPS, such as the iGPS and stereo vision systems, by simply changing the communication model and the constraint equations.

## 6. Conclusions

Two real-time position and orientation measurement methods with occlusion handling have been presented for distributed optical large-scale measurement systems, to be used in combination in practical applications. These techniques are based on constraints established by three control points and six control points, respectively, and their measurement principles have been expounded in detail. The feasibility and accuracy of the proposed methods were verified by comparing their position and orientation results with those of the conventional method. The experiments reveal that the orientation deviations of the three-point method and the six-point method are kept within 0.1 and 0.04 deg, respectively, and the position deviations within 0.15 and 0.08 mm. This clearly demonstrates that the methods are feasible and exhibit good accuracy. The proposed approaches address the occlusion problem of the conventional position and orientation measurement method and expand the practical applications of distributed optical large-scale measurement systems.

## Acknowledgments

This work was funded by the National Natural Science Foundation of China (51225505) and the National High Technology Research and Development Program of China (863 Program, 2012AA041205). The authors would like to express their sincere appreciation for this support. Comments from the reviewers and the editor were also very much appreciated.

## References

1. J. Hefele and C. Brenner, “Robot pose correction using photogrammetric tracking,” Proc. SPIE 4189, 170–178 (2001). http://dx.doi.org/10.1117/12.417194

2. N. Jayaweera and P. Webb, “Metrology-assisted robotic processing of aerospace applications,” Int. J. Comput. Integr. Manuf. 23(3), 283–296 (2010). http://dx.doi.org/10.1080/09511920903529255

3. W. W. Zhang, B. H. Zhuang, and Y. Zhang, “Novel navigation sensor for autonomous guide vehicle,” Opt. Eng. 39(9), 2511–2516 (2000). http://dx.doi.org/10.1117/1.1287991

4. N. Morales et al., “Real-time adaptive obstacle detection based on an image database,” Comput. Vis. Image Underst. 115(9), 1273–1287 (2011). http://dx.doi.org/10.1016/j.cviu.2011.05.004

5. F. Franceschini et al., Distributed Large-Scale Dimensional Metrology: New Insights, Springer, London (2011).

6. W. Cuypers et al., “Optical measurement techniques for mobile and large-scale dimensional metrology,” Opt. Laser Eng. 47(3), 292–300 (2009). http://dx.doi.org/10.1016/j.optlaseng.2008.03.013

7. C. Reich, R. Ritter, and J. Thesing, “3-D shape measurement of complex objects by combining photogrammetry and fringe projection,” Opt. Eng. 39(1), 224–231 (2000). http://dx.doi.org/10.1117/1.602356

8. D. H. Zhang et al., “Exploitation of photogrammetry measurement system,” Opt. Eng. 49(3), 037005 (2010). http://dx.doi.org/10.1117/1.3364057

9. G. Mosqueira et al., “Analysis of the indoor GPS system as feedback for the robotic alignment of fuselages using laser radar measurements as comparison,” Robot. Comput. Integr. Manuf. 28(6), 700–709 (2012). http://dx.doi.org/10.1016/j.rcim.2012.03.004

10. A. R. Norman et al., “Validation of iGPS as an external measurement system for cooperative robot positioning,” Int. J. Adv. Manuf. Technol. 64(1–4), 427–446 (2013). http://dx.doi.org/10.1007/s00170-012-4004-8

11. Z. Xiong et al., “Workspace measuring and positioning system based on rotating laser planes,” Mechanika 18(1), 94–98 (2012). http://dx.doi.org/10.5755/j01.mech.18.1.1289

12. Z. Zhang et al., “Improved iterative pose estimation algorithm using three-dimensional feature points,” Opt. Eng. 46(12), 127202 (2007). http://dx.doi.org/10.1117/1.2818202

13. M. Galetto and B. Pralio, “Optimal sensor positioning for large scale metrology applications,” Precis. Eng. 34(3), 563–577 (2010). http://dx.doi.org/10.1016/j.precisioneng.2010.02.001

14. F. Franceschini et al., “A review of localization algorithms for distributed wireless sensor networks in manufacturing,” Int. J. Comput. Integr. Manuf. 22(7), 698–716 (2009). http://dx.doi.org/10.1080/09511920601182217

15. M. Laguna et al., “Diversified local search for the optimal layout of beacons in an indoor positioning system,” IIE Trans. 41(3), 247–259 (2009). http://dx.doi.org/10.1080/07408170802369383

16. B. Triggs et al., “Bundle adjustment—a modern synthesis,” in Proc. of the Int. Workshop on Vision Algorithms: Theory and Practice, pp. 298–372, Springer-Verlag, London (2000).

17. Y. Jeong et al., “Pushing the envelope of modern methods for bundle adjustment,” IEEE Trans. Pattern Anal. Mach. Intell. 34(8), 1605–1617 (2012). http://dx.doi.org/10.1109/TPAMI.2011.256

18. B. K. P. Horn, “Closed form solution of absolute orientation using unit quaternions,” J. Opt. Soc. Am. A 4(4), 629–642 (1987). http://dx.doi.org/10.1364/JOSAA.4.000629

19. M. Y. Liu et al., “Fast object localization and pose estimation in heavy clutter for robotic bin picking,” Int. J. Robot. Res. 31(8), 951–973 (2012). http://dx.doi.org/10.1177/0278364911436018

20. D. B. Lao et al., “Optimization of calibration method for scanning planar laser coordinate measurement system,” Opt. Precis. Eng. 19(4), 870–877 (2011).

21. J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” Numer. Anal., 105–116 (1978). http://dx.doi.org/10.1007/BFb0067690

22. R. Behling et al., “A Levenberg-Marquardt method with approximate projections,” Comput. Optim. Appl., 1–22 (2013). http://dx.doi.org/10.1007/s10589-013-9573-4

23. Z. Xiong et al., “Application of workspace measurement and positioning system in aircraft manufacturing assembly,” Aeronaut. Manuf. Technol. 21(1), 60–62 (2011).

## Biography

**Zhexu Liu** is a PhD candidate in precision measuring technology and instruments at Tianjin University, and he received his MS degree in precision measuring technology and instruments from Tianjin University in 2011. His research interests are in photoelectric precision measuring and large-scale metrology.

**Jigui Zhu** received his BS and MS degrees from the National University of Defense Science and Technology of China in 1991 and 1994, and his PhD degree in 1997 from Tianjin University, China. He is now a professor at the State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University. His research interests are focused on laser and photoelectric measuring technology, such as industrial online measurement and large-scale precision metrology.