Technique for radar and infrared search and track data fusion

Abstract. An algorithm for the fusion of data used for target search and tracking, originated by a two-dimensional radar and an infrared search and track system, is proposed and described. To permit a fusion process between two systems whose measurements are not directly comparable, a new strategy for mixing the system states is introduced, thus obtaining a set of homogeneous measurements. We describe the rationale behind the method and develop the mathematical aspects necessary to obtain the equations for the fusion of tracking data. An analysis of the estimation errors associated with the proposed model is also described. Some simulation results demonstrate the capabilities of the presented technique.


Introduction
The idea behind the current proposal for a data fusion system derives from specific problems we encountered during our trials with radar and infrared search and track (IRST) systems for surveillance and tracking. What was highlighted during the tests can be summarized as follows:
• Especially in complex scenarios, it is not trivial to associate tracks from one system with tracks from the other, mainly because they measure quantities of different types.
• In air-to-sea scenarios, track seductions caused by strong maritime clutter cause exchanges of the markers associated with radar targets, so that association with IRST tracks fails.
• In long-range tracking, radar and IRST track association fails because of the inaccuracies of the ranges estimated by the IRST and the inability to properly link the angles provided by the two systems.
• To operate the data fusion, kinematic ranging techniques [1][2][3] are used in the IRST to estimate the track range. This technique is not fully reliable and in some cases does not permit correct track-to-track association.
This paper describes a simple but effective method to remove the above limitations in the use of the two systems and to operate without the need of any kinematic ranging operation.
Currently, radar and IRST operate together on various platforms and are considered complementary from an operational point of view. In general, radar systems are effective in long-range detection, are capable of providing precise target range, and are active systems; IRST systems are very precise in angular target tracking and are completely passive.
The proposed data fusion approach intends to exploit the capabilities of the two sensors by mixing their specific characteristics, considering them as part of an integrated system.

Definition of the Context for Data Fusion
We assume at the beginning that both sensors, radar and IRST, provide detection data relevant to the target at the same sampling time T_s (synchronous systems). Later on we will remove this limitation to also deal with systems with different sampling times (asynchronous systems).
We will disregard all the issues related to the registration problem 4,5 of the two sensors, assuming that they have a common axis system that, in general, is that of the host platform. Figure 1 is a top-level block diagram of the data fusion system.

Sensing and Preprocessing System
This represents the sensor processing front-end; its main purpose is the extraction of all the possible radar and IRST targets. This is obtained, in general, by suitable statistical signal processing on a single-acquisition (batch) basis, without any knowledge of past events. The resulting data records are called "detections." Each detection can be originated by a real target of interest, by noise, or by clutter/background. A common situation is to operate with few detections originated by real targets and many detections originated by noise or clutter/background.

Data Processing and Fusion System
This part is devoted to discriminating the tracks originated by real targets from those due to noise or clutter. It is based on time-by-time correlation of the detections. The data processors (DAPs) perform estimation and prediction of tracks on the basis of kinematics. The fusion system, placed downstream of the DAPs, mixes the track data. Here we assume the DAPs and the fusion system are located in the same functional unit, even if this is not strictly necessary. The data fusion functional block will be expanded later on.
For our purpose we assume that, for each detection, the following data are available in input to the radar data processing functional block: range r(k·T_s) and bearing β(k·T_s). For the IRST, for each detection, the following data are available in input to the IRST data processing functional block: azimuth φ(k·T_s) and elevation ϑ(k·T_s). The angles β and φ derive from different sources, but they refer to the same angle, although with different measurement noise.
For compactness, the sampling time T s is not indicated in the following formulas and whenever necessary we will omit the sampling counter k, too.
The geometry associated to the above parameters is schematically presented in Fig. 2.
The noise associated with the input data is assumed white, Gaussian, and uncorrelated.
We convert the radar data from polar to Cartesian coordinates to avoid the use of extended Kalman filters:
x(k) = r(k)·cos β(k), y(k) = r(k)·sin β(k).
It is known that the above conversion produces correlation and bias in the noise of the converted coordinates, but the bias can be evaluated and therefore removed. 6 Now we can proceed in defining the output elements, the tracks, of the radar data processor (using the usual symbols for the description of dynamic systems):
• estimation of the present state: [x̂_R(k|k), ŷ_R(k|k)]
• estimation of the future state (prediction): [x̂_R(k+1|k), ŷ_R(k+1|k)]
• estimated variances of the state errors: σ²_x and σ²_y
• estimated correlation between the state errors: ρ_xy = E{Δx_R·Δy_R}, where Δx_R = x̂_R − x and Δy_R = ŷ_R − y are the estimation errors on x̂_R and ŷ_R, x and y being the true values of the slant coordinates. E{} stands for the expected value.
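As an illustration, a minimal Python sketch of this polar-to-Cartesian conversion with the classical multiplicative debiasing and a first-order covariance of the converted measurement; the function name and interface are ours, not from the paper:

```python
import math

def polar_to_cartesian(r, beta, sigma_r2, sigma_b2):
    """Convert a (range, bearing) measurement to Cartesian coordinates.

    sigma_r2 and sigma_b2 are the range and bearing error variances.
    The factor exp(sigma_b2/2) compensates the bias E[cos(noise)] = exp(-sigma_b2/2)
    introduced by the nonlinear conversion.
    """
    # Debiased conversion
    x = r * math.cos(beta) * math.exp(sigma_b2 / 2.0)
    y = r * math.sin(beta) * math.exp(sigma_b2 / 2.0)
    # First-order (linearized) covariance of the converted coordinates
    c, s = math.cos(beta), math.sin(beta)
    var_x = sigma_r2 * c * c + (r * r) * sigma_b2 * s * s
    var_y = sigma_r2 * s * s + (r * r) * sigma_b2 * c * c
    cov_xy = (sigma_r2 - (r * r) * sigma_b2) * s * c  # conversion-induced correlation
    return (x, y), ((var_x, cov_xy), (cov_xy, var_y))
```

Note how the off-diagonal term cov_xy vanishes only when the bearing is a multiple of 90 deg: this is the correlation between the converted coordinates mentioned above.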
From the estimates x̂_R and ŷ_R we can get back the filtered range and bearing data:
r̂ = √(x̂²_R + ŷ²_R), β̂ = arctan(ŷ_R / x̂_R).
Since the estimation errors on x̂_R and ŷ_R are generally different at every sampling time and are correlated, too, the computation of the variances is not straightforward and some approximation must be taken to obtain (see Appendix A):
σ²_r = σ²_x·cos²β̂ + σ²_y·sin²β̂ + 2·ρ_xy·sin β̂·cos β̂,
σ²_β = (σ²_x·sin²β̂ + σ²_y·cos²β̂ − 2·ρ_xy·sin β̂·cos β̂) / r̂².
Now, to implement the fusion of the data coming from two different sensors, we have to perform an additional step to obtain a homogeneous set of measurements. By using elevation ϑ̂ from the IRST and the data from the radar, we can write:
ζ_xR = x̂_R·cos ϑ̂, ζ_yR = ŷ_R·cos ϑ̂,
while by using azimuth φ̂ and elevation ϑ̂ from the IRST and range r̂ from the radar we have:
ζ_xI = r̂·cos ϑ̂·cos φ̂, ζ_yI = r̂·cos ϑ̂·sin φ̂, ζ_z = r̂·sin ϑ̂.
Now we have two homogeneous sets of data: the vectors ζ_R = [ζ_xR ζ_yR ζ_z]^T and ζ_I = [ζ_xI ζ_yI ζ_z]^T, with T the symbol of the transpose operation, where of course ζ_zR = ζ_zI = ζ_z. The expressions for the most important statistical figures are reported in Appendix B. In particular, it is shown that the expected values of the measurement errors have a bias. When the bias is not negligible, a suitable mitigation process has to be considered. 6
In a multiple target tracking scenario we have to associate the data from many possible targets. Simply, the r′ track coming from the radar and the i′ track coming from the IRST are considered to be linked to the same object if, X_R and X_I being their respective states and t the time,
X_R(t) = X_I(t). (5)
In our case, ζ_R = X_R and ζ_I = X_I. But ζ_R and ζ_I are affected by estimation errors that rarely allow Eq. (5) to be verified. In order to accept the hypothesis that two tracks coming from different sensors concern the same object (track-to-track association process), we therefore have to test that the following differences are inside a given bound for every k.
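A small Python sketch of the recovery of the filtered range and bearing and their first-order error variances from the Cartesian track state; the function name is ours, and the variance formulas are the first-order approximations discussed above:

```python
import math

def cartesian_to_polar_track(x, y, var_x, var_y, rho_xy):
    """Recover filtered range/bearing and first-order error variances
    from the Cartesian track state.

    rho_xy here is the cross-covariance E{dx*dy} of the estimation errors
    (cf. Appendix A for the range-variance term).
    """
    r = math.hypot(x, y)
    beta = math.atan2(y, x)
    c, s = math.cos(beta), math.sin(beta)
    # First-order propagation of the Cartesian errors onto (r, beta)
    var_r = var_x * c * c + var_y * s * s + 2.0 * rho_xy * s * c
    var_beta = (var_x * s * s + var_y * c * c - 2.0 * rho_xy * s * c) / (r * r)
    return r, beta, var_r, var_beta
```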
So, to properly associate ζ_R and ζ_I to the same target, we proceed with the following criteria.
Condition 1: verify that
|β̂(k) − φ̂(k)| ≤ λ·√(σ²_β + σ²_φ), (6)
where λ is a suitable constant. Equation (6) is justified by considering that the errors on β̂ and φ̂ are independent and Gaussian with zero mean.
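This angular gate can be sketched in a few lines of Python; the function name and the default value of λ are our own choices, not prescribed by the paper:

```python
import math

def angle_gate(beta_hat, phi_hat, var_beta, var_phi, lam=3.0):
    """Condition 1: accept a radar/IRST track pair when the bearing/azimuth
    difference is within lam standard deviations of the combined error."""
    # Wrap the difference to (-pi, pi] so the gate also works across the branch cut
    d = math.atan2(math.sin(beta_hat - phi_hat), math.cos(beta_hat - phi_hat))
    return abs(d) <= lam * math.sqrt(var_beta + var_phi)
```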

Condition 2:
For all tracks satisfying the previous criterion compute:
ζ_xR(k) = x̂_R(k|k)·cos ϑ̂(k|k)
ζ_yR(k) = ŷ_R(k|k)·cos ϑ̂(k|k)
ζ_xI(k) = r̂(k|k)·cos ϑ̂(k|k)·cos φ̂(k|k)
ζ_yI(k) = r̂(k|k)·cos ϑ̂(k|k)·sin φ̂(k|k)
ζ_z(k) = r̂(k|k)·sin ϑ̂(k|k). (7)
Then, under the Gaussian approximation on ζ_R and ζ_I, with Δζ_x(k) = ζ_xR(k) − ζ_xI(k) and Δζ_y(k) = ζ_yR(k) − ζ_yI(k), verify that the following condition is satisfied:
[(Δζ_x/σ_xRI)² − 2·ρ_Δxy·(Δζ_x/σ_xRI)·(Δζ_y/σ_yRI) + (Δζ_y/σ_yRI)²] / (1 − ρ²_Δxy) ≤ γ_0,
σ²_xRI and σ²_yRI being the variances and ρ_Δxy the correlation coefficient computed in Appendix C. With the approximation that ζ_R and ζ_I are Gaussian, γ_0 is a suitable threshold value evaluated by using techniques based on the χ² test.
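A minimal sketch of this χ² gate for a correlated two-dimensional Gaussian difference; the default threshold 9.21 (the 99% point of χ² with 2 degrees of freedom) is our illustrative assumption:

```python
def chi2_gate(dx, dy, var_x, var_y, rho, gamma0=9.21):
    """Condition 2: chi-square test on the (x, y) difference between the two
    homogeneous measurement vectors. rho is the correlation coefficient of the
    difference components; gamma0 is a chi-square gate threshold."""
    sx = var_x ** 0.5
    sy = var_y ** 0.5
    # Mahalanobis distance of a correlated 2-D Gaussian vector
    q = 1.0 - rho * rho
    d2 = ((dx / sx) ** 2 - 2.0 * rho * (dx / sx) * (dy / sy) + (dy / sy) ** 2) / q
    return d2 <= gamma0
```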
Condition 3: For all tracks satisfying the previous criteria, repeat the test of condition 2 on the predicted data and verify that the analogous conditions are satisfied, where σ²_xRI(k+1|k), σ²_yRI(k+1|k), and ρ_Δxy(k+1|k) can be computed as in the previous step but using the predicted data, and γ_1 is evaluated in the same way as γ_0.
We define "mixed detection" the system of Eq. (7) obtained by combining the data of two tracks which pass the above discrimination process. Equation (7) can be interpreted as a set of measures, whose measurement errors are σ 2 r , σ 2 β , σ 2 x R , σ 2 y R ,σ 2 x I , σ 2 y I , and σ 2 z evaluated in the Appendices.
Fusion and Tracking
ζ_xR(k) and ζ_xI(k) both represent an estimation of the same quantity, as do ζ_yR(k) and ζ_yI(k). Now, the classical way to proceed for the fusion is by the following equation: 7
ζ̂_RI(k) = ζ_I(k) + A·[ζ_R(k) − ζ_I(k)].
Because ζ_z(k) is common to both the ζ_R and ζ_I vectors, it can be ignored in the computation; the matrix A becomes a 2 × 2 matrix and the estimate ζ̂_RI(k) a two-element vector. Let Δζ_RI, Δζ_I, and Δζ_R be the errors on ζ̂_RI(k), ζ_I(k), ζ_R(k); we have:
Δζ_RI(k) = Δζ_I(k) + A·[Δζ_R(k) − Δζ_I(k)].
The covariance and cross-covariance matrices of the errors Δζ_R, Δζ_I are P_R, P_I, and P_RI, while the covariance matrix of the difference Δζ_R(k) − Δζ_I(k) is
R_Δ = P_R + P_I − P_RI − P_IR.
We have 7
A = (P_I − P_IR)·R_Δ^(−1).
Since the polar-to-Cartesian conversion introduced above causes a correlation between the errors along the coordinate axes, the matrix R_Δ, especially when the trajectory of the target is seen near 45 deg, tends to be singular, with impacts on the stability of the tracking. A different approach is then adopted. We consider (ζ_R, ζ_I) as two different detections which have been determined by the same sensor from the same source. Now we designate
p_R(k) = N[ζ_R(k); Ẑ(k|k−1), S(k)], p_I(k) = N[ζ_I(k); Ẑ(k|k−1), S(k)]
the probability density functions (pdf) of the Gaussian random variables ζ_R and ζ_I, respectively, Ẑ = Ẑ(k|k−1) being the predicted "measure" for the present time evaluated at the previous time t − T_s by the data fusion system, while S(k) is the relevant residual covariance matrix. In that case, 8
α(k) = p_R(k) / [p_R(k) + p_I(k)]
provides the probability of ζ_R(k), given that ζ_I and ζ_R represent the same object. Therefore, every mixed detection will generate a new "measure" whose components along the coordinate axes are provided by
ζ_α(k) = α(k)·ζ_R(k) + [1 − α(k)]·ζ_I(k),
where its covariance matrix R(k) depends on σ²_xR, σ²_xI, σ²_yR, σ²_yI, σ²_z, C(Δx_ζR·Δy_ζR), C(Δx_ζI·Δy_ζI), C(Δx_ζI·Δx_ζR), C(Δy_ζI·Δy_ζR), C(Δx_ζI·Δy_ζR), C(Δy_ζI·Δx_ζR), given in the Appendices. ζ_α(k) is the input to the processor devoted to the fused tracks, and R(k) is the covariance matrix of its error.
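The likelihood-weighted combination above can be sketched as follows; for simplicity we assume a diagonal residual covariance S(k), and the function names are ours:

```python
import math

def gaussian_pdf2(z, mean, s_diag):
    """2-D Gaussian pdf evaluated at z, with diagonal covariance s_diag
    (a simplifying assumption; the paper uses the full residual covariance S)."""
    d2 = sum((zi - mi) ** 2 / si for zi, mi, si in zip(z, mean, s_diag))
    norm = 2.0 * math.pi * math.sqrt(s_diag[0] * s_diag[1])
    return math.exp(-0.5 * d2) / norm

def fuse_mixed_detection(zeta_r, zeta_i, z_pred, s_diag):
    """Weight the two homogeneous measurements by their likelihoods against the
    fused-track prediction z_pred, then combine:
    zeta_alpha = alpha*zeta_r + (1 - alpha)*zeta_i."""
    p_r = gaussian_pdf2(zeta_r, z_pred, s_diag)
    p_i = gaussian_pdf2(zeta_i, z_pred, s_diag)
    alpha = p_r / (p_r + p_i)
    fused = tuple(alpha * zr + (1.0 - alpha) * zi for zr, zi in zip(zeta_r, zeta_i))
    return fused, alpha
```

When the two measurements are equally close to the prediction, α = 0.5 and radar and IRST contribute with equal weight, as discussed in the simulation results.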
All the mixed detections meeting the criteria, with γ a suitable threshold, are considered for association with the fused track. It is possible that an IRST track could be associated with more than one radar track and vice versa. In fact, every IRST track with azimuth near the bearing of a given radar track could pass, regardless of the elevation, the tests of the discrimination process for the mixed-detection generation. This means that a given fused track can, although rarely, be associated with multiple mixed detections. In that case, we proceed by using the classical joint probabilistic data association (JPDA) algorithm, 9,10 with the following adaptation: the fused tracks and the IRST tracks play the roles, respectively, of tracks and detections in the generation of the validation matrix and of the event matrices of the JPDA algorithm. Then the exponent of the Gaussian probability density function relevant to each fused track to which the mixed detection is associated (the mixed detection has been previously generated using the relevant IRST track in the event matrix) will be equal to −1∕2 times the left-hand side of Eq. (21). At this point we can proceed with the calculation of the Kalman gain, the estimation of the present and predicted states, and the covariance matrices of the relevant estimation errors. Figure 3 summarizes the described process with a block diagram.

Asynchronous Systems
Now we remove the limitation that radar and IRST are synchronous. Our main objective is to align all data in time in order to implement the track-to-track association conditions 1 to 3 described in Sec. 2. Let us designate T_R and T_I the sampling periods of radar and IRST, respectively. We select for the following description the sampling period T_R of the radar as reference and its sampling instant as the time when all the operations of the fusion system start. Let
Δt = t_R − t_I,
where t_R and t_I are the sampling times of radar and IRST. Referring to Fig. 4, we can proceed in the following way. The output data from the IRST referred to the radar time t_R = k·T_I + Δt are given by
ξ̂_I(t_R) = Φ_I(Δt)·ξ̂_I(t_I), P_I(t_R) = Φ_I(Δt)·P_I(t_I)·Φ_I^T(Δt) + Q_I, (23)
while the radar predictions at the time t_I + T_I = t_R + T_I − Δt, referred to the IRST predicted data, are
ξ̂_R(t_I + T_I) = Φ_R(T_I − Δt)·ξ̂_R(t_R), P_R(t_I + T_I) = Φ_R(T_I − Δt)·P_R(t_R)·Φ_R^T(T_I − Δt) + Q_R,
where ξ_I, Φ_I, P_I, and Q_I are the state, the transition matrix, and the covariance matrices of the state error and process noise of the IRST track, while ξ_R, Φ_R, P_R, and Q_R are the corresponding quantities of the radar track. Equations (23) are necessary to verify conditions 1 and 2. Once the predicted state ξ̂_R(t_I + T_I) and the predicted state error covariance matrix P_R(t_I + T_I) are obtained, we are ready to run condition 3.
An alternative method, called interpolation between samples, certainly faster, and above all requiring no knowledge of the state equations of either tracking system, is as follows. Since it is always Δt ≤ T_I, assuming a three-dimensional state, it is reasonable to suppose a constant acceleration of the IRST track within the time interval Δt. Furthermore, in the absence of further information between two successive sampling times, we obtain the variances at an intermediate point as a linear interpolation between the variances of the estimated and predicted state errors. So we rewrite Eq. (23) to align the azimuth and elevation angles φ(t_I), ϑ(t_I) and the state error variances:
ψ̂(t_R) = ψ̂(t_I) + ω̂_ψ(t_I)·Δt + (1/2)·α̂_ψ(t_I)·Δt²,
σ²_ψ(t_R) = σ²_I(k|k) + (Δt/T_I)·[σ²_I(k+1|k) − σ²_I(k|k)],
where ψ̂(t_I) = ψ̂(k|k), ω̂_ψ(t_I) = ω̂_ψ(k|k), and α̂_ψ(t_I) = α̂_ψ(k|k), with ψ = φ, ϑ, are the estimated position, velocity, and acceleration of the IR track angles at the time instant t_I = k·T_I. σ²_I(k|k) and σ²_I(k+1|k) designate the estimated and predicted variances at the times t_I and t_I + T_I, respectively.
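The interpolation-between-samples step can be sketched as follows; the function name is our own, and the formulas are the constant-acceleration extrapolation and linear variance interpolation described above:

```python
def align_irst_to_radar(pos, vel, acc, dt, var_est, var_pred, t_i_period):
    """Extrapolate an IRST angular track state by dt (0 <= dt <= T_I) under a
    constant-acceleration assumption, and linearly interpolate its error
    variance between the estimated and predicted values."""
    # Constant-acceleration motion over the interval dt
    pos_dt = pos + vel * dt + 0.5 * acc * dt * dt
    # Linear interpolation of the variance between estimate and prediction
    var_dt = var_est + (dt / t_i_period) * (var_pred - var_est)
    return pos_dt, var_dt
```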
For condition 3 we will proceed in a simpler way, using the same linear interpolation for both the positions x̂_R and ŷ_R and the variances σ²_x and σ²_y.

Simulation Results
We generated the target motions in order to stress the system and to check the robustness of the algorithms, even at the risk of creating poorly realistic scenarios. Clutter is generated uniformly distributed inside the observed volume. Both the IRST and the radar utilize the probabilistic data association (PDA) algorithm 9-12 for detection-gate association. Also important is the use of the JPDA algorithm in the case of detections shared by more tracks. Moreover, in order to make the tracking more accurate and to reduce the false track probability, we assume the use of the interacting multiple model (IMM) algorithm. 9-12 The sampling time of the IRST is T_I = 157 ms, while that of the radar is T_R = 1 s. To synchronize the two data flows we used the interpolation-between-samples method. The radar range measurement error is 75 m rms; the bearing error is 0.5 deg rms in one case and 1.0 deg rms in the second. The IRST angular measurement error is 0.6 mrad rms.
Different motion models are used by IMM for the different process phases.
For the radar, the formation-track models are: 1. False target model: acceleration process of 35 m/s², time constant τ_0 = 10 s, and detection probability P_DR = 0. The data fusion system is characterized by the following features: a gate threshold γ = 20.
For the fusion system, too, we used the IMM algorithm, characterized by three motion models: 1. Linear motion model: acceleration process of 0.0025 m/s² rms and time constant τ_0 = 60 s. For the motion along the altitude direction we adopted only a linear motion model characterized by an acceleration process of 0.025 m/s² rms and time constant τ_z = 60 s.
The probability of correct association of the mixed detection with the predicted tracks, i.e., of correct fusion of the data from the two sensors, was practically 100% in all performed tests, except when the distance between two targets was close to the resolution of the two sensors and the relative velocity was close to zero. In that condition the probability falls to 50%.
We consider as merit figures for the data fusion system: 1. The increased precision of the fused tracks with respect to the radar ones. 2. The absence of bias in the estimation errors along the three coordinate axes.
We checked the above merit figures by using a target placed at a distance greater than 70 km, at an altitude of 50 m, approaching the observer with a constant velocity of 250 km/h along the x- and y-directions, at an angle of 45 deg with respect to the observer. It is possible to see that the error in bearing impacts the performance of the radar as a standalone system, but it is practically negligible when data fusion is implemented. The general benefit in accuracy is always evident.
It is interesting to observe in Figs. 5(a), 5(b), 6(a), and 6(b) the difference in behavior of the radar estimation error with respect to the data fusion estimation error: due to the polar-to-Cartesian coordinate conversion of the radar data, to an error on the x-axis corresponds an error on the y-axis, proportional and of opposite sign, in the specific case of the figures where the trajectory of the target is 45 deg off-axis. It is also interesting to observe the behavior of the fused tracks: when the coefficient α of the fusion equation is around 0.5, i.e., when radar and IRST contribute with equal weight to the fused detection, the equal-probability ellipse (gate) of the fused track is orthogonal to the observer-target direction (Fig. 7). The angular radar error causes an error proportional to r̂·σ_β, the cross-range error, orthogonal to the range itself.
When the coefficient α tends to 0, the IRST data prevail in ruling the tracking. All the uncertainty due to the radar bearing disappears, and the cross-range error is determined mainly by the IRST angular error, while the gate is determined mainly by the radar range estimation error. The gate collapses almost into a segment (Fig. 8).
To verify the bias in the estimation error along each coordinate axis we used the normalized mean estimation error (NMEE) 13 based on N Monte Carlo runs:
μ(k) = (1/N)·Σ_{q=1}^{N} Δx_q(k)/σ_q(k), k = 0, 1, 2, 3, ···, K,
σ_q being the standard deviation of the error component Δx in the q'th run. Using a 95% confidence region, we consider the error without bias if
|μ(k)| ≤ 1.96/√N.
Our NMEE test consists of N = 30 Monte Carlo runs of length K = 180 samples (3 min long), using the target of the previous example. Figure 9 shows the graphs of the normalized estimated mean error for 0.5 deg rms bearing error, the bounds of the confidence region, [−0.359, 0.359], and the sample mean in each direction. We can observe that the samples μ(k) are within the confidence interval, except for some isolated points.
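The NMEE statistic and its confidence bound can be computed with a short Python sketch (function names are ours); the bound z/√N with z = 1.96 reproduces, up to rounding, the intervals quoted for N = 30 and N = 60:

```python
import math

def nmee(errors, sigmas):
    """Normalized mean estimation error at one time step over N Monte Carlo runs:
    mu = (1/N) * sum_q Delta_x_q / sigma_q.
    errors and sigmas are the per-run error and standard-deviation values."""
    n = len(errors)
    return sum(e / s for e, s in zip(errors, sigmas)) / n

def nmee_bound(n, z=1.96):
    """95% two-sided confidence bound for an unbiased estimator: |mu| <= z / sqrt(N)."""
    return z / math.sqrt(n)
```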
Finally, Fig. 10 provides the graphs of the NMEE based on N ¼ 60 Monte Carlo runs, but with an angular radar measurement error of 1 deg. Also in this case we can observe that the NMEE is within the 95% bounds of the confidence interval, now ½−0.254; 0.254, except for some isolated points, showing a still acceptable zero mean estimation error of the fusion system.

Conclusions
The algorithm to fuse tracks proposed in this paper provides a general procedure to manage nonhomogeneous data, as in the case of radar and IRST. The paper considers both synchronous and asynchronous sensors; for the latter, it introduces a specific synchronization technique for the data flows. The simulation results prove the validity of the method through the significant improvement in tracking accuracy and the absence of bias in the state error of the tracks.
The extension of the algorithm to the case of three-dimensional radar is fairly straightforward: in that case, in addition to the bearing, the radar also provides measurements of elevation, making the generation of the mixed detection more immediate. Indeed, the comparison between radar and IRST tracks is no longer extended to all objects located inside a given band of uncertainty around the bearing of the radar track, as suggested by Eq. (6), but only to a reduced set of targets identified also by their elevation. That requires less computing time than the case treated up to now.
The method can also be applied to tracking in polar coordinates by using the extended Kalman filter. In that case, the evaluation of the range and bearing variances is no longer necessary. The extended Kalman filter introduces, as drawbacks, a greater computing effort and instability in the case of significant measurement errors, process noise, or rough initializations.
It is clear that the sensors, electro-optical, radar, or others, continue to be the core of the sensor data fusion process because they provide the measurements of reality. But sensor data fusion can effectively mitigate defects and improve overall performance, also in terms of reliability and data integrity when these factors are important for the application, 14 by means of the appropriate management of the useful and redundant data provided by the sensors, as described in the paper.
As far as sensor data fusion methods and techniques are concerned, being the object of continuous improvement in formalization, 9,15,16 we think that the paper provides an effective theoretical contribution and an appropriate implementation solution.

Appendix A
Let Δr = r̂ − r be the range error and r the true value. The series expansion of r̂(k) around (x, y), terminated at the first term because the others are negligible, gives
Δr ≅ (x·Δx_R)/r + (y·Δy_R)/r ≅ Δx_R·cos β + Δy_R·sin β.
Since Δx_R and Δy_R are Gaussian and zero-mean, we can assume the same statistics for Δr. The variance will be
E{Δr²|r, x, y} = σ²_x·cos²β + σ²_y·sin²β + 2·ρ_xy·sin β·cos β = E{Δr²|β},
with ρ_xy = E{Δx_R·Δy_R}.
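The first-order approximation above can be checked numerically with a small Monte Carlo sketch (our own illustration, for the uncorrelated case ρ_xy = 0): perturb the Cartesian estimate with independent Gaussian errors and compare the empirical variance of the induced range error with the analytic formula.

```python
import math
import random

def range_error_variance(var_x, var_y, rho_xy, beta):
    """Appendix A first-order approximation of the filtered-range error variance."""
    c, s = math.cos(beta), math.sin(beta)
    return var_x * c * c + var_y * s * s + 2.0 * rho_xy * s * c

def monte_carlo_range_variance(r, beta, var_x, var_y, n=100000, seed=0):
    """Empirical check: perturb the Cartesian position with independent Gaussian
    errors and measure the variance of the resulting range error."""
    rng = random.Random(seed)
    x, y = r * math.cos(beta), r * math.sin(beta)
    errs = []
    for _ in range(n):
        dx = rng.gauss(0.0, math.sqrt(var_x))
        dy = rng.gauss(0.0, math.sqrt(var_y))
        errs.append(math.hypot(x + dx, y + dy) - r)
    m = sum(errs) / n
    return sum((e - m) ** 2 for e in errs) / n
```

At long range the higher-order terms are indeed negligible, and the two values agree closely.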
Giorgio Balzarotti received his Laurea degree in electronic engineering in 1982 from Politecnico di Milano. He is senior engineer in Selex ES, and his main activity is related to sensors for defense. He worked in many national and international programs related to infrared and imaging systems. His research and development interests are in data and signal processing and in all aspects related to IRST systems.