## 1. Introduction

The plume tracker in a photoelectric acquisition, tracking, and pointing (ATP) system must lock a settled tracking point onto the target stably for accurate measurement or aiming.^{1}^{,}^{2} Many attempts have been made to improve the stability of tracking point extraction, especially for extended targets.^{3}^{–}^{5} For targets that extend beyond the track gate, the leading edge tracking algorithm is a robust approach that can quickly find and settle on the target nose. In this way, it keeps the target in the center of the track gate and ensures that the background occupies a certain proportion of the field of view, avoiding segmentation failure^{6} or tracking point drift.^{7} However, due to the airstream disturbance introduced by the tail flame, the segmentation thresholds are easily disturbed and unstable, especially when the target is a vehicle with a strong tail flame. In this situation, the vehicle body is prone to being segmented into multiple fragments even if morphological operations are used. After target feature recognition and association, the small fragments are excluded and the candidate with the highest degree of confidence is kept. Unfortunately, the tracking points extracted by the leading edge tracking algorithm then jitter between consecutive frames and introduce a severe tracking stability problem. Figure 1 shows this situation.

Besides the basic leading edge tracking algorithm shown in Fig. 2(a), which finds the tip of the target directly, some improved leading edge tracking algorithms have been proposed to reduce the jitter of the tracking point, such as the polynomial fitting algorithm shown in Fig. 2(b), the 19-point leading edge tracking algorithm shown in Fig. 2(c), and the correlation tracking algorithm.^{4} However, the result of the polynomial fitting algorithm may not lie on the target, and the algorithm is sensitive to segmentation errors and computationally expensive. Although the result of the 19-point leading edge tracking algorithm is stable because it calculates the centroid of the region near the tip of the target, the result is still unsatisfactory when the tip jitters due to large segmentation errors. Due to the template drift phenomenon,^{7} the correlation tracker suffers from gradual drift of the tracking region away from the template and eventually loses the target. Although some algorithms against template drift have been proposed in recent years,^{8}^{,}^{9} they do not work well on targets with a strong tail flame because of the severe airstream disturbance. To reduce jitter and improve tracking precision, it is important to analyze the characteristics of vehicle imaging carefully and find out what introduces the jitter.

## 2. Characteristic of Vehicle Infrared Image

In the boost phase of a vehicle, the engine jets a high-velocity, high-temperature airstream to obtain enough backward thrust. This airstream forms the tail flame of the vehicle and exhibits the characteristics of an extended target in the infrared image. The temperature of the vehicle body increases rapidly as the flight velocity increases, and the body image becomes more and more distinct. Thus the whole image of the vehicle in boost phase includes both the body and the plume. From the acquired image, it can be seen that the gray-level distribution of the target in this stage consists of three regions. Figure 3(a) shows the real infrared target image after edge enhancement, and Fig. 3(b) shows the three-dimensional gray-level distribution of the original image. The vehicle body is zoomed in for clarity in Fig. 3(b), and the gray-level fluctuation along its central axis is shown in Fig. 3(c).

From Fig. 3, it can be seen that the first region, A, is the background with a low and uniform gray-level distribution introduced by atmospheric thermal radiation. The second region is the tail flame, which can be further subdivided into three parts. First, there is a region B1 of uniform, saturated pixels near the central axis. Moving away from the central axis, a boundary layer transition region B2 with a distinct contour boundary divides the tail flame into these three parts; its pixel gray level descends rapidly near the boundary. Outside region B2, there is a region B3 with a gray level lower than B1 but higher than the background region A. The gray-level distribution of region B3 is uniform on the whole, but the unstable airstream outside it introduces disturbances that lead to random segmentation errors. The third region, C, is the vehicle body image, with a gray level between those of the background and the tail flame. The gray-level distribution in this region is nonuniform and fluctuates severely, as shown in Fig. 3(c).

## 3. Stable Tracking Point Extraction

The conventional leading edge tracking algorithm finds the target frontal along the target moving direction.^{2} It is inaccurate when the target velocity is low relative to the ATP system. In this paper, we define the target frontal by combining the moving direction and the angle of the target principal axis. As defined in Eq. (1), ${m}_{i,j}$ is the central moment of the target with order $i\times j$, $(\overline{x},\overline{y})$ is the target centroid, and $f(x,y)$ is the gray level of the image pixel at $(x,y)$:

## (1)

$${m}_{i,j}=\sum _{(x,y)\in R}\sum {(x-\overline{x})}^{i}{(y-\overline{y})}^{j}f(x,y).$$
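As a concrete illustration, the central moment of Eq. (1) can be sketched in Python; representing the target region as a list of `(x, y, gray)` pixel triples is an assumption of this sketch, not part of the original DSP implementation.

```python
# Central moment m_{i,j} of Eq. (1), a minimal sketch. The target region R is
# assumed given as (x, y, gray) triples; (xbar, ybar) is the gray-weighted centroid.
def central_moment(pixels, i, j):
    total = sum(g for _, _, g in pixels)
    xbar = sum(x * g for x, _, g in pixels) / total
    ybar = sum(y * g for _, y, g in pixels) / total
    return sum((x - xbar) ** i * (y - ybar) ** j * g for x, y, g in pixels)
```

In practice, ${m}_{2,0}$, ${m}_{0,2}$, and ${m}_{1,1}$ would be accumulated in a single pass over the segmented region.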

The moment of inertia of the target and its differential are defined as in Eqs. (2) and (3). ${m}_{2,0}$, ${m}_{0,2}$, and ${m}_{1,1}$ are obtained from Eq. (1):

## (2)

$$I(\theta )=\sum _{(x,y)\in R}\sum {[(y-\overline{y})\mathrm{cos}\text{\hspace{0.17em}}\theta +(x-\overline{x})\mathrm{sin}\text{\hspace{0.17em}}\theta ]}^{2}f(x,y),$$

## (3)

$${I}^{\prime}(\theta )=({m}_{2,0}-{m}_{0,2})\mathrm{sin}\text{\hspace{0.17em}}2\theta -2{m}_{1,1}\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}2\theta .$$

The angle of the target principal axis is the angle of the target's minimal moment of inertia. Letting ${I}^{\prime}(\theta )=0$, we get two solutions, ${\theta}_{1}$ and ${\theta}_{2}$:

## (4)

$${\theta}_{1}=\frac{1}{2}\mathrm{arctan}\text{\hspace{0.17em}}\frac{2{m}_{1,1}}{{m}_{2,0}-{m}_{0,2}},$$

## (5)

$${\theta}_{2}={\theta}_{1}+\frac{\pi }{2}.$$

From Eqs. (4) and (5) alone, it cannot be decided which one is the angle of the target's minimal moment of inertia, so ${I}^{\prime \prime}(\theta )$ is introduced to discriminate between them:

## (6)

$${I}^{\prime \prime}(\theta )=2({m}_{2,0}-{m}_{0,2})\mathrm{cos}\text{\hspace{0.17em}}2\theta +4{m}_{1,1}\text{\hspace{0.17em}}\mathrm{sin}\text{\hspace{0.17em}}2\theta .$$

Substituting ${\theta}_{1}$ and ${\theta}_{2}$ into Eq. (6), if ${I}^{\prime \prime}({\theta}_{i})>0$, then this ${\theta}_{i}$ is the angle of the target's minimal moment of inertia, labeled ${\theta}_{\text{axis}}$. The direction of the target nose can then be calculated from Eq. (7), where ${\theta}_{v}$ is the target moving direction.

## (7)

$${\theta}_{n}=\{\begin{array}{ll}{\theta}_{\text{axis}}& |{\theta}_{\text{axis}}-{\theta}_{v}|<|{\theta}_{\text{axis}}+\pi -{\theta}_{v}|\\ {\theta}_{\text{axis}}+\pi & |{\theta}_{\text{axis}}-{\theta}_{v}|\ge |{\theta}_{\text{axis}}+\pi -{\theta}_{v}|\end{array}.$$

Along the direction of ${\theta}_{n}$, the tip of the target can be found. Unfortunately, the nonuniform gray-level distribution of the target, especially on the vehicle body, causes severe segmentation errors between consecutive frames and hence tracking jitter. In this paper, a novel method based on chord-arc ratio filtering for contour smoothing is proposed. After contour smoothing, the big fluctuations and spiculate arcs on the target contour are removed. Figure 4(a) shows a sketch of the filtering method.
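The principal-axis and nose-direction computation of Eqs. (3)–(7) can be collected into a short sketch; the function names are illustrative, and the signs follow the paper's conventions in Eqs. (3) and (6).

```python
import math

def principal_axis_angle(m20, m02, m11):
    # Roots of I'(theta) = (m20 - m02) sin 2θ − 2 m11 cos 2θ = 0, i.e. Eqs. (4), (5)
    theta1 = 0.5 * math.atan2(2 * m11, m20 - m02)
    theta2 = theta1 + math.pi / 2

    def i2(theta):  # I''(theta), Eq. (6)
        return 2 * (m20 - m02) * math.cos(2 * theta) + 4 * m11 * math.sin(2 * theta)

    # The root with I'' > 0 is the minimum of the moment of inertia
    return theta1 if i2(theta1) > 0 else theta2

def nose_direction(theta_axis, theta_v):
    # Eq. (7): pick the axis orientation closer to the moving direction theta_v
    if abs(theta_axis - theta_v) < abs(theta_axis + math.pi - theta_v):
        return theta_axis
    return theta_axis + math.pi
```

For a horizontally elongated target ($m_{2,0}>m_{0,2}$, $m_{1,1}=0$) this returns an axis angle of zero, as expected.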

As Fig. 4(a) shows, the contour tracking algorithm is applied first. Then all contour points, arranged in order, are processed one by one along the same direction, clockwise or anticlockwise. Assume that the point currently being processed is $P\mathit{cur}$ and the number of contour points is *lenContour*. Starting from the next point after $P\mathit{cur}$ along the processing direction, all points $Pi$ in the circle with radius $R$ centered at $P\mathit{cur}$ are searched along the same direction. The value of $R$ is the upper limit of the length of the chord between the two endpoints of the arc being removed; it determines the maximum width of the vehicle body that can be eliminated. The length of the chord between $P\mathit{cur}$ and $Pi$ is calculated and labeled $lenChd[i]$, and $lenArc[i]$ is the length of the minor arc between them. The chord-arc ratio is then calculated as in Eq. (8):

## (8)

$$\mathrm{ratio}\mathit{Cur}[i]=lenChd[i]/lenArc[i],\phantom{\rule[-0.0ex]{1em}{0.0ex}}i=1\dots N.$$

$N$ is the number of points found in the circle along the search direction. After the search in the circle for the point $P\mathit{cur}$ is complete, the minimal ratio ${\mathrm{Ratio}}_{\mathrm{min}}$ for $P\mathit{cur}$ is obtained using Eq. (9):

## (9)

$${\mathrm{Ratio}}_{\mathrm{min}}=\underset{i=1\dots N}{\mathrm{min}}\text{\hspace{0.17em}}\mathrm{ratio}\mathit{Cur}[i].$$

If ${\mathrm{Ratio}}_{\mathrm{min}}<{\mathrm{Ratio}}_{\mathrm{th}}$ and the corresponding $lenChd[i]$ satisfies $lenChd[i]>lenCh{d}_{\mathrm{th}}$, then the point $P\mathit{cur}$ is marked as to be connected, and the value of ${\mathrm{Ratio}}_{\mathrm{min}}$ together with the points $P\mathit{cur}$ and $Pi$ is recorded. ${\mathrm{Ratio}}_{\mathrm{th}}$ and $lenCh{d}_{\mathrm{th}}$ are thresholds on the chord-arc ratio and the length of the chord, respectively. The parameter $lenCh{d}_{\mathrm{th}}$ is introduced to obtain smoother results after filtering and to reduce remaining irregular spiculate arcs; it determines the minimum width of the vehicle body that can be eliminated.

After the minimal chord-arc ratio calculation is completed for every contour point, a second traversal is performed. If the point $P\mathit{cur}$ currently being processed is marked as to be connected, then the contour points on the minor arc between it and the recorded point $Pi$ are removed, and a new straight line connecting them is inserted to form a new arc. After the second traversal is completed, a smoother contour is obtained. The contour smoothing result for a real target contour is shown in Fig. 4(b). The yellow pixels are contour points retained after filtering, and the white pixels are contour points that have been eliminated. It can be seen that the unstable and spiculate part of the contour introduced by the vehicle body has been removed, and the retained portion is the smoother contour of the plume.
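The two-pass filtering described above can be sketched as follows. This is a simplified reading of the method under stated assumptions: the contour is an ordered, closed point list; the forward arc is capped at half the perimeter to approximate the minor arc; and the chord-length test is applied during the search. The original DSP implementation may differ in these details.

```python
import math

def smooth_contour(contour, R, ratio_th, len_chd_th):
    # Chord-arc ratio filtering (a sketch). `contour` is an ordered, closed
    # list of (x, y) points; R, ratio_th, len_chd_th are the paper's search
    # radius and thresholds.
    n = len(contour)
    perim = sum(math.dist(contour[k], contour[(k + 1) % n]) for k in range(n))
    cut = {}
    # Pass 1: for each point, record the forward partner giving the minimal
    # chord/arc ratio among chords no longer than R (minor arcs only).
    for cur in range(n):
        arc, best_ratio, best_j = 0.0, None, None
        for step in range(1, n):
            i, j = (cur + step - 1) % n, (cur + step) % n
            arc += math.dist(contour[i], contour[j])
            if arc > perim / 2:          # keep to the minor arc
                break
            chd = math.dist(contour[cur], contour[j])
            if len_chd_th < chd <= R:
                ratio = chd / arc
                if best_ratio is None or ratio < best_ratio:
                    best_ratio, best_j = ratio, j
        if best_ratio is not None and best_ratio < ratio_th:
            cut[cur] = best_j
    # Pass 2: drop the points on each recorded minor arc; the chord between
    # Pcur and its partner implicitly replaces the removed arc.
    keep = [True] * n
    for cur, partner in cut.items():
        k = (cur + 1) % n
        while k != partner:
            keep[k] = False
            k = (k + 1) % n
    return [p for ok, p in zip(keep, contour) if ok]
```

Applied to a mostly flat contour with one tall spike, the spike yields a small chord-arc ratio and is cut away, while the smooth portions are retained.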

As Fig. 5(a) shows, although most of the big fluctuations on the contour have been removed after smoothing, some small fluctuations remain because the thresholds ${\mathrm{Ratio}}_{\mathrm{th}}$ and $lenCh{d}_{\mathrm{th}}$ are still sensitive to the random segmentation error between consecutive frames. The tracking point extracted by the basic leading edge tracking algorithm, the polynomial fitting algorithm, or the 19-point leading edge tracking algorithm is therefore still jittery. To extract a more stable tracking point, a novel method based on the minimal inscribed circle of the filtered contour is proposed.

First, a preliminary frontal point $Pa$ is obtained with the 19-point leading edge tracking algorithm. Because it calculates the tracking point from the result after contour smoothing, it is called revised 19-point leading edge tracking. Owing to the contour smoothing, the jitter of $Pa$ is lower than the result without smoothing. Then the centroid $Pb$ of the circular region with radius $Rb$ centered at the point $Pa$ is calculated. The value of $Rb$ is chosen based on the maximum width of the flame, and a suitable value improves the stability of the calculated $Pb$. If $Rb$ is too small, the stability of $Pb$ is easily influenced by the residual fluctuation on the vehicle body contour after filtering; if it is too big, $Pb$ is influenced by the unstable airstream in the caudal region of the tail flame. In this paper, $Rb$ is $3\times {\text{Width}}_{f\mathrm{max}}/4$, where ${\text{Width}}_{f\mathrm{max}}$ is the maximum width of the tail flame. Obviously, the point $Pb$ is more stable than $Pa$ because it is an ensemble average over the frontal part of the target. However, the analysis of the plume imaging characteristics shows that, because most of the plume pixels inside the boundary layer transition region B2 are saturated, the centroid of the frontal region cannot compensate for the influence of the tip ($Pa$) jitter, since that jitter shifts the region used for calculating the centroid between consecutive frames. Based on the fact that region B3 has a more uniform and higher gray-level distribution than the pixels outside the boundary between B3 and the background region A, a tracking point extraction method based on the minimal inscribed circle of the frontal part of the plume contour is presented to reduce the jitter further. After the minimal inscribed circle $Cins$ of the contour centered at $Pb$ is obtained, the tracking point $Pt$ is extracted as the cross shown in Fig. 5(a), which is the intersection of $Cins$ and $Lp$ near the tip; $Lp$ is the straight line through the point $Pb$ at the angle of the target principal axis. From Figs. 5(a) and 5(b), it can be seen that even when there is severe segmentation error on the vehicle body between consecutive frames and $Pa$ and $Pb$ jitter, only the radius of $Cins$ changes while the tracking point $Pt$ remains stable.
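A sketch of this final extraction step: the inscribed-circle radius is taken as the minimum distance from $Pb$ to the contour points (a reasonable approximation for a densely sampled contour), and $Pt$ is placed on that circle along the nose direction. The function name and arguments are illustrative, not from the original implementation.

```python
import math

def extract_tracking_point(contour, pb, theta_n):
    # Radius of the minimal inscribed circle Cins centered at Pb: the minimum
    # distance from Pb to the (densely sampled) contour points.
    r = min(math.dist(pb, p) for p in contour)
    # Pt: the intersection of Cins with the line Lp through Pb, on the tip side.
    return (pb[0] + r * math.cos(theta_n), pb[1] + r * math.sin(theta_n))
```

Because $Pt$ depends on the contour only through the inscribed radius, local contour fluctuations move $Pt$ far less than they move the tip $Pa$.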

## 4. Experimental Results

Figure 6 shows the tracking point extraction results and the calculated jitter. The revised 19-point leading edge algorithm adopts the point $Pa$ as the tracking point. The tracking jitter is the difference between the original measured value and the least squares fit of the tracking point trajectory. Figures 6(a) and 6(b) show the results over several consecutive frames in $x$ and $y$, respectively, for the revised 19-point leading edge tracking algorithm $Pa$, the center of the minimal inscribed circle $Pb$, and the ultimate tracking point $Pt$ extracted by the method proposed in this paper. Figures 6(c) and 6(d) show the jitter comparison among the algorithms over several consecutive frames in azimuth and elevation, respectively. In the test infrared image sequence, one pixel subtends 15.23 arc sec. With the proposed method, the jitter is no more than 0.077 pixel RMS (1.17 arc sec RMS) in azimuth and 0.25 pixel RMS (3.87 arc sec RMS) in elevation, and the stability is improved by factors of 15.3 and 21.4, respectively. Obviously, it achieves better performance in tracking stability.
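The jitter figure used here, the residual RMS about a least squares fit of the per-frame tracking coordinate, can be sketched as below; a straight-line fit is assumed for illustration, since the fit order is not stated in the text.

```python
def rms_jitter(values):
    # RMS of residuals after a least-squares straight-line fit v = a*i + b
    # to the per-frame tracking coordinate `values` (frame index i = 0..n-1).
    n = len(values)
    sx = sum(range(n))
    sy = sum(values)
    sxx = sum(i * i for i in range(n))
    sxy = sum(i * v for i, v in enumerate(values))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    resid = [v - (a * i + b) for i, v in enumerate(values)]
    return (sum(r * r for r in resid) / n) ** 0.5
```

A perfectly linear trajectory gives zero jitter; frame-to-frame oscillation of the tracking point shows up directly in the residual RMS.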

The proposed algorithm was implemented on a DSP TMS320C6455 (1.2 GHz) for real-time tracking point extraction as one module of the image processing and target tracking unit in the ATP system. For $320\times 256\text{\hspace{0.17em}}\text{pixel}$, 14-bit real infrared images, the target features and the time cost are shown in Table 1.

## Table 1

Target features and the time cost.

| Frame | ${\text{Width}}_{f\mathrm{max}}$ | Number of contour points | Target area | Time cost (ms) |
|---|---|---|---|---|
| 2000 | 93 | 445 | 12,211 | 1.23 |
| 4076 | 75 | 361 | 7962 | 1.07 |
| 5743 | 58 | 297 | 5396 | 0.98 |
| 6824 | 47 | 213 | 4693 | 0.92 |

## 5. Conclusion

In conclusion, a novel method based on contour smoothing and the minimal inscribed circle is presented for improving the stability of tracking point extraction. It is insensitive to the segmentation threshold and segmentation errors. The theoretical analysis and the experimental results show that the tracking point jitter is reduced and the tracking stability is improved dramatically by the proposed method.

Besides tracking the plume targets discussed in this article, the proposed method can also be used for tracking rigid infrared targets such as aircraft. The contour smoothing method based on chord-arc ratio filtering can also be applied to other image analysis tasks, such as object defect detection and object recognition.

## Acknowledgments

The authors gratefully acknowledge the support of the 863 Project from the Science and Technology Department (Nos. G107309 and G107302). The authors also thank the anonymous reviewers, whose thoughtful comments and suggestions improved the quality of the article.

## References

1. B. L. Ulich, “Overview of acquisition, tracking, and pointing system technologies,” Proc. SPIE 887, 40–63 (1988). http://dx.doi.org/10.1117/12.944208

2. J. N. Sanders-Reed, “Multi-target, multi-sensor, closed loop tracking,” Proc. SPIE 5430, 1–19 (2004). http://dx.doi.org/10.1117/12.518557

3. P. D. Hill, “Real-time video edge tracking algorithms,” Proc. SPIE 1950, 141–151 (1993). http://dx.doi.org/10.1117/12.156599

4. J. W. Bukley and R. M. Cramblitt, “Comparison of image processing algorithms for tracking illuminated targets,” Proc. SPIE 3692, 234–243 (1999). http://dx.doi.org/10.1117/12.352866

5. Z. Peng, Q. Zhang, and A. Guan, “Extended target tracking using projection curves and matching pel count,” Opt. Eng. 46(6), 064401 (2007). http://dx.doi.org/10.1117/1.2746913

6. F. Galland and P. Réfrégier, “Information-theory-based snake adapted to multi-region objects with different noise models,” Opt. Lett. 29(14), 1611–1613 (2004). http://dx.doi.org/10.1364/OL.29.001611

7. J. Ahmed et al., “Real-time edge-enhanced dynamic correlation and predictive open-loop car-following control for robust tracking,” Mach. Vis. Appl. 19(1), 1–25 (2008). http://dx.doi.org/10.1007/s00138-007-0072-4

8. J. Y. Pan and B. Hu, “Robust object tracking against template drift,” in Proc. IEEE Int. Conf. on Image Processing, Vol. 3, pp. III-353–III-356, IEEE, San Antonio, TX (2007).

9. T. Han, M. Liu, and T. Huang, “A drifting-proof framework for tracking and online appearance learning,” in Proc. IEEE Workshop on Applications of Computer Vision, pp. 1–10, IEEE, Austin, TX (2007).

## Biography

**Tao Lei** received his MS degree from the Graduate School of the Chinese Academy of Sciences in 2006 and his PhD degree in signal and information processing from the same school in 2013. Since 2006, he has worked at the Institute of Optics and Electronics (IOE), Chinese Academy of Sciences, where he is now an associate researcher. His research interests include signal processing, image processing, target recognition and tracking, and real-time data processing.

**Sihan Yang** received her MS degree in computer science from Chengdu University of Technology in 2006. She has been a PhD candidate since 2008 and is also a lecturer in the same university. Her current research interests include image processing, computer vision, and target recognition and tracking.

**Ping Jiang** received his MS degree from the University of Electronic Science and Technology of China in 2004 and his PhD degree from Sichuan University in 2011. Since 2002, he has worked at the IOE, Chinese Academy of Sciences, where he is now an associate researcher. His current research interests are in the areas of target tracking, machine vision, and information fusion.