1. Introduction

The idea behind the current proposal for a data fusion system derives from specific problems we encountered during our trials with radar and infrared search and track (IRST) for surveillance and tracking. What the tests highlighted can be summarized concisely as follows:
This paper describes a simple but effective method to remove the above limitations in the use of the two systems and to operate without the need for any kinematic ranging operation. Currently, radar and IRST operate together on various platforms and are considered complementary from an operational point of view. In general, radar systems are effective in long-range detection, capable of providing precise target range, and are active systems; IRST systems are very precise in angular target tracking and are completely passive. The proposed data fusion approach intends to exploit the capabilities of the two sensors by mixing their specific characteristics, considering them as part of an integrated system.

2. Definition of the Context for Data Fusion

We assume initially that both sensors, radar and IRST, provide detection data relevant to the target at the same sampling time (synchronous systems). Later on we will remove this limitation to deal also with systems having different sampling times (asynchronous systems). We disregard all the issues related to the registration problem4,5 of the two sensors, assuming that they share a common axis system that, in general, is the one of the host platform. Figure 1 is a top-level block diagram of the data fusion system.

2.1. Sensing and Preprocessing System

This represents the sensor processing front-end; its main purpose is the extraction of all the possible radar and IRST targets. This is obtained, in general, by suitable statistical signal processing on a single acquisition (batch) basis, without any knowledge of past events. The resulting data records are called “detections.” Each detection can be originated by a real target of interest, by noise, or by clutter/background. A common situation is to operate with few detections originated by real targets and many detections originated by noise or clutter/background.
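As an illustration of the single-batch statistical detection step, a minimal cell-averaging constant false alarm rate (CA-CFAR) threshold can be sketched as follows. The paper does not specify the detector, so the scheme, the function name, and its parameters are purely illustrative:

```python
def ca_cfar(samples, num_train, num_guard, scale):
    """Cell-averaging CFAR on a 1-D batch of intensity samples.

    A cell is declared a detection when it exceeds `scale` times the
    mean of the surrounding training cells (guard cells excluded).
    Returns the indices of the detected cells.
    """
    detections = []
    n = len(samples)
    for i in range(n):
        train = []
        # collect training cells on both sides of the cell under test
        for j in range(i - num_guard - num_train, i - num_guard):
            if 0 <= j < n:
                train.append(samples[j])
        for j in range(i + num_guard + 1, i + num_guard + num_train + 1):
            if 0 <= j < n:
                train.append(samples[j])
        if train and samples[i] > scale * sum(train) / len(train):
            detections.append(i)
    return detections
```

On a uniform-noise batch with one strong sample, only that sample passes the adaptive threshold, consistent with the "few target detections among many noise detections" situation described above.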
2.2. Data Processing and Fusion System

This part is devoted to the discrimination of tracks from noise or clutter. It is based on time-by-time correlation of the detections. Data processors (DAPs) perform estimation and prediction of tracks on the basis of kinematics. The fusion system, placed downstream of the DAPs, mixes the track data. Here we assume that the DAPs and the fusion system are located in the same functional unit, even if this is not strictly necessary. The data fusion system functional block will be expanded later on. For our purpose we assume that, for each radar detection, the range and the bearing are available in input to the radar data processing functional block, while, for each IRST detection, the azimuth and the elevation are available in input to the IRST data processing functional block. The radar bearing and the IRST azimuth derive from different sources, but they refer to the same angle, although with different measurement noise. For compactness, the sampling time is not indicated in the following formulas, and whenever necessary we will omit the sampling counter, too. The geometry associated with the above parameters is schematically presented in Fig. 2. The noise associated with the input data is assumed white, Gaussian, and uncorrelated. We convert the radar data from polar to Cartesian coordinates to avoid the use of extended Kalman filters. It is known that this conversion produces correlation and bias in the noise of the converted coordinates, but the bias can be evaluated and therefore removed.6 Now we can proceed to define the output elements, the tracks, of the radar data processor (using the usual symbols for the description of dynamic systems):
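The polar-to-Cartesian conversion and its bias compensation can be sketched, for instance, with the classical multiplicative correction for zero-mean Gaussian bearing noise. This is a common scheme, not necessarily the exact correction of Ref. 6, and the function name is ours:

```python
import math

def unbiased_polar_to_cartesian(r_m, theta_m, sigma_theta):
    """Convert a measured (range, bearing) pair to Cartesian coordinates.

    The naive conversion x = r*cos(theta), y = r*sin(theta) is biased:
    for zero-mean Gaussian bearing noise its expectation is scaled by
    exp(-sigma_theta**2 / 2).  Multiplying by the reciprocal factor
    removes that bias on average.
    """
    corr = math.exp(sigma_theta ** 2 / 2.0)  # bias-compensation factor
    x = r_m * math.cos(theta_m) * corr
    y = r_m * math.sin(theta_m) * corr
    return x, y
```

With zero angular noise the correction factor is 1 and the classical conversion is recovered unchanged.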
The output elements, the tracks, of the IRST data processing function are the following: From the filtered estimates we can recover the range and bearing data: Since the estimation errors are generally different at every sampling time and are correlated, too, the computation of the variance is not straightforward and some approximations must be made (see Appendix A): Now, to implement the fusion of the data coming from two different sensors, we have to perform an additional step, thus obtaining a homogeneous set of measurements. By using the elevation from the IRST and the data from the radar, we can write the equations: while by using the azimuth and elevation from the IRST and the range from the radar we have: Now we have two homogeneous sets of data, arranged as two vectors (with the usual symbol for the transpose operation), which of course share the elevation component. The expressions for the most important statistical figures are reported in Appendix B. In particular, it is shown that the expected values of the measurement errors have a bias. When the bias is not negligible, a suitable mitigation process has to be considered.6 In a multiple target tracking scenario we have to associate the data from many possible targets. Simply put, the track coming from the radar and the track coming from the IRST are considered to be linked to the same object if their respective states coincide at every time. But the states are affected by estimation errors that rarely allow Eq. (5) to be verified exactly. In order to accept the hypothesis that two tracks coming from different sensors concern the same object (track-to-track association process), we therefore have to test that the following differences are inside a given bound at every sampling time. So, to properly associate the two tracks to the same target, we proceed with the following criteria: Condition 1: For all tracks compute Eq. (6), where the bound is a suitable constant.
Equation (6) is justified by considering that the errors involved are independent and Gaussian with zero mean.

Condition 2: For all tracks satisfying the previous criterion compute: Then, under the Gaussian approximation, verify that the following conditions are satisfied: with the variances and the correlation coefficient computed in Appendix C. The threshold is a suitable value evaluated by using techniques based on the chi-square test.

Condition 3: For all tracks satisfying the previous criteria compute: Then verify that the following conditions are satisfied: where the variances and the correlation coefficient can be computed as in the previous step, but using the predicted data, and the threshold is evaluated in the same way as before.

We define as “mixed detection” the system of Eq. (7), obtained by combining the data of two tracks that pass the above discrimination process. Equation (7) can be interpreted as a set of measures, whose measurement errors are evaluated in the Appendices.

3. Fusion and Tracking

The corresponding components of the two homogeneous vectors represent estimates of the same quantities. The classical way to proceed with the fusion is by the following equation:7 Because the elevation is common to both vectors, it can be ignored in the computation; the matrix then becomes a 2×2 matrix and the estimate a two-element vector. Let the estimation errors be defined accordingly; we have: The covariance and cross-covariance matrices of the errors are while the covariance matrix of the difference is We have7 Since the polar-to-Cartesian conversion introduced above causes a correlation between the errors along the coordinate axes, this matrix, especially when the trajectory of the target is seen near 45 deg, tends to be singular, with impacts on the stability of tracking. A different approach is then adopted: we consider the two data sets as two different detections determined by the same sensor from the same source.
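Condition 2 above, a two-dimensional test on correlated Gaussian differences against a chi-square threshold, can be sketched as follows. Variable names are ours, and the exact statistic of Appendix C may differ in detail:

```python
def correlated_gate(d1, d2, s1, s2, rho, gamma):
    """Two-dimensional chi-square-style gate for track-to-track association.

    d1, d2: differences between the two tracks' homogeneous coordinates;
    s1, s2: standard deviations of those differences; rho: their
    correlation coefficient (|rho| < 1 required); gamma: chi-square
    threshold (2 degrees of freedom).  Returns True when the pair of
    tracks passes the gate.
    """
    # squared Mahalanobis distance for a correlated bivariate Gaussian
    q = (d1 / s1) ** 2 - 2.0 * rho * (d1 / s1) * (d2 / s2) + (d2 / s2) ** 2
    return q / (1.0 - rho ** 2) <= gamma
```

For two degrees of freedom, a threshold of about 5.99 corresponds to a 95% gate probability.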
Now we designate the probability density functions (pdf) of the Gaussian random variables involved, together with the “measure” for the present time predicted at the previous time by the data fusion system and the relevant residual covariance matrix. In that case,8 we obtain the probability of the observed pair, given that the two detections represent the same object. Therefore, every mixed detection will generate a new “measure,” whose components along the coordinate axes are provided by So we can write where its covariance matrix is with the terms given in the Appendices. This new measure is the input to the processor devoted to the fused tracks, together with the covariance matrix of its error. All the mixed detections meeting the criteria with a suitable threshold are considered for association with the fused track. It is possible that an IRST track is associated with more than one radar track and vice versa. In fact, every IRST track with azimuth near the bearing of a given radar track could pass, regardless of the elevation, the tests of the discrimination process for the mixed detection generation. This means that a given fused track can be associated with multiple mixed detections. In that case, we proceed by using the classical joint probabilistic data association (JPDA) algorithm,9,10 with the following adaptation: the fused tracks and the IRST tracks play the role, respectively, of tracks and detections in the generation of the validation matrix and of the event matrices of the JPDA algorithm. Then the exponent of the Gaussian probability density function relevant to each fused track to which the mixed detection is associated (the mixed detection having been previously generated using the relevant IRST track in the event matrix) will be equal to the left-hand side of Eq. (21). At this point we can proceed with the calculation of the Kalman gain, the estimation of the present and predicted states, and the covariance matrices of the relevant estimation errors.
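For reference, the classical track-to-track fusion equation recalled at the beginning of Sec. 3 (Ref. 7), including the cross-covariance term, can be sketched in two dimensions with plain-Python helpers. All names are ours, and this is the textbook form rather than the paper's final likelihood-weighted variant:

```python
def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def mat_inv(A):
    # closed-form inverse of a 2x2 matrix
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def fuse_tracks(x1, x2, P1, P2, P12):
    """Classical track fusion with cross-covariance:
        x_f = x1 + (P1 - P12) U^{-1} (x2 - x1),
        U   = P1 + P2 - P12 - P12^T.
    x1, x2: two-element state estimates; P1, P2: their covariances;
    P12: cross-covariance of the two estimation errors (2x2 lists)."""
    P21 = [[P12[j][i] for j in range(2)] for i in range(2)]
    U = mat_sub(mat_sub(mat_add(P1, P2), P12), P21)
    G = mat_mul(mat_sub(P1, P12), mat_inv(U))
    d = [x2[0] - x1[0], x2[1] - x1[1]]
    corr = mat_vec(G, d)
    return [x1[0] + corr[0], x1[1] + corr[1]]
```

With equal covariances and no cross-correlation the fused estimate is simply the midpoint of the two estimates, as expected. When U is nearly singular, as in the 45-deg case discussed above, this inverse is ill-conditioned, which motivates the alternative approach adopted in the paper.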
Figure 3 summarizes the described process with a block diagram.

4. Asynchronous Systems

Now we remove the limitation that radar and IRST are synchronous. Our main objective is to align all data in time in order to implement the track-to-track association conditions 1 to 3 described in Sec. 2. Let us designate the sampling periods of radar and IRST, respectively. For the following description we select the sampling period of the radar as reference, together with the time when all the operations of the fusion system start. Let where the sampling times of radar and IRST are defined. Referring to Fig. 4, we can proceed in the following way. The output data from the IRST referred to the radar time are given by while the radar predictions referred to the IRST predicted data are where the state, the transition matrix, and the covariance matrices of the state error and of the process noise of the IRST track appear, together with the corresponding quantities of the radar track. Equations (23) are necessary to verify conditions 1 and 2. Once the predicted state and the predicted state error covariance matrix are obtained, we are ready to apply condition 3. An alternative method, called interpolation between samples, is certainly faster and, above all, requires no knowledge of the state equations of the two tracking systems. It proceeds as follows: assuming a three-dimensional state, it is reasonable to suppose a constant acceleration of the IRST track within the sampling interval. Furthermore, in the absence of further information between two successive sampling times, we obtain the variances at an intermediate point as a linear interpolation between the variances of the estimated and predicted state errors. So we rewrite Eq.
(23) to align the azimuth and elevation angles and the state error variances to the radar time, using the estimated position, velocity, and acceleration of the IRST track angles at the relevant time instant, together with the estimated and predicted variances at the two sampling times. For condition 3 we will proceed in a simpler way, using linear interpolation for both the positions and the variances.

5. Simulation Results

We generated the target motions in order to stress the system and to check the robustness of the algorithms, even at the risk of creating poorly realistic scenarios. Clutter is generated uniformly distributed inside the observed volume. Both IRST and radar utilize the probabilistic data association (PDA) algorithm9–12 for detection-gate association. The JPDA algorithm is also used in case of detections shared by more tracks. Moreover, in order to make the tracking more accurate and to reduce the false-track probability, we assume the use of the interacting multiple model (IMM) algorithm.9–12 The sampling times of the IRST and of the radar differ; to synchronize the two data flows we used the interpolation between samples method. The radar range measurement error is 75 m rms; the bearing error is 0.5 deg rms in one case and 1.0 deg rms in the second. The IRST angular measurement error is 0.6 mrad rms. Different motion models are used by the IMM for the different process phases. For the radar, the formation-track models are:
For the established tracks the motion models are:
For the IRST, the formation-track models are:
For the established tracks, the model is:
The data fusion system is characterized by the following parameters and a gate threshold. For the fusion system, too, we used the IMM algorithm, characterized by these three motion models:
For the motion along the altitude direction we adopted only a linear motion model, characterized by a given acceleration process rms and time constant. The probability of correct association of the mixed detections with the predicted tracks, i.e., of correct fusion of the data from the two sensors, was practically 100% in all performed tests, with the exception of the case where the distance between two targets was close to the resolution of the two sensors and the relative velocity was close to zero. In that condition, the probability falls to 50%. We consider as merit figures for the data fusion system:
We checked the above merit figures by using a target placed at a distance greater than 70 km and an altitude of 50 m, approaching the observer with a constant velocity along the two horizontal axes, at an angle of 45 deg with respect to the observer. Figure 5 shows the behavior of the estimation errors for the 0.5 deg rms bearing error, while Fig. 6 refers to the 1.0 deg rms bearing error. It is possible to see that the error in bearing impacts the performance of the radar as a standalone system, but it is practically negligible when data fusion is implemented. The general benefit in accuracy is always evident. It is interesting to observe in Figs. 5(a), 5(b), 6(a), and 6(b) the difference in behavior of the radar estimation error with respect to the data fusion estimation error: due to the polar-to-Cartesian coordinate conversion of the radar data, an error on one axis corresponds to a proportional error of opposite sign on the other axis. In the specific case of the figures, the trajectory of the target is 45 deg off-axis (dashed lines in the figures). The data fusion system significantly mitigates that effect (solid lines). It is interesting to observe the behavior of the fused tracks: when the coefficient of the fusion equation is around 0.5, i.e., when radar and IRST concur with equal weight to the fused detection, the equal-probability ellipse (gate) of the fused track is orthogonal to the observer-target direction (Fig. 7). The angular radar error causes an error proportional to the range, the cross-range error, orthogonal to the range direction. When the coefficient tends to 0, the IRST data become prevalent in ruling the tracking. All the uncertainty due to the radar bearing disappears; the cross-range error is determined mainly by the IRST angular error, while the gate is mainly determined by the radar error of the range estimation. The gate almost collapses into a segment (Fig. 8).
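The gate geometry described above can be quantified directly: the cross-range error is the slant range times the angular error. With the simulation figures (0.5 deg rms bearing, target beyond 70 km), the radar cross-range uncertainty far exceeds the 75 m rms range error, which is why the radar gate is elongated orthogonally to the line of sight. The function name is ours:

```python
import math

def cross_range_error(slant_range_m, bearing_sigma_rad):
    """1-sigma cross-range error (m) induced by an angular measurement
    error: the linear error orthogonal to the line of sight grows
    linearly with range (small-angle approximation)."""
    return slant_range_m * bearing_sigma_rad

# At 70 km with 0.5 deg rms bearing error, the cross-range error is
# about 611 m, roughly eight times the 75 m rms range error.
```

The same relation explains why, when the IRST (0.6 mrad rms) dominates the fusion, the cross-range uncertainty shrinks drastically and the gate collapses toward a segment along the range direction.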
To verify the bias in the estimation error along each coordinate axis we used the normalized mean estimation error (NMEE)13 based on Monte Carlo runs. Using a confidence region of 95%, we consider the error to be without bias if the NMEE remains inside the bounds, the normalization being the standard deviation of the error component in the i’th run. Our NMEE test consists of Monte Carlo runs 3 min long, using the target of the previous example. Figure 9 shows the graphs of the normalized estimated mean error for the 0.5 deg rms bearing error, the bounds of the confidence region, and the sample mean in each direction. We can observe that the samples are within the confidence interval, except for some isolated points. Finally, Fig. 10 provides the graphs of the NMEE based on Monte Carlo runs with an angular radar measurement error of 1 deg. Also in this case we can observe that the NMEE is within the 95% bounds of the confidence interval, except for some isolated points, showing a still acceptable zero-mean estimation error of the fusion system.

6. Conclusions

The algorithm to fuse tracks proposed in this paper provides a general procedure to manage nonhomogeneous data, as in the case of radar and IRST. The paper considers both synchronous and asynchronous sensors; for the latter, it introduced a specific synchronization technique for the data flows. The simulation results prove the validity of the method through the significant improvement in tracking accuracy and the absence of bias in the state error of the tracks. The extension of the algorithm to the case of a three-dimensional radar is fairly straightforward: in that case, in addition to the bearing, the radar also provides measurements of elevation, making the generation of the mixed detection more immediate. Indeed, the comparison between radar and IRST tracks is no longer extended to all objects that are located inside a given band of uncertainty around the bearing of the radar track, as suggested by Eq.
(6), but only to a reduced set of targets identified also by their elevation. This involves a computing time smaller than that required by the case treated up to now. The method can also be applied to tracking in polar coordinates by using the extended Kalman filter; in that case, the evaluation of the range and bearing variances is no longer necessary. The extended Kalman filter introduces, as drawbacks, a greater computing effort and instability in the case of significant measurement errors, process noise, or rough initialization. It is clear that the sensors, electro-optical, radar, or others, continue to be the core of the sensor data fusion process because they provide the measurements of reality. But sensor data fusion can effectively mitigate defects and improve overall performance, also in terms of reliability and data integrity, when these factors are important for the application.14 This is achieved by means of the appropriate management of the useful and redundant data provided by the sensors, as described in the paper. As for sensor data fusion methods and techniques, which are the object of continuous improvement in formalization,9,15,16 we think that the paper provides an effective theoretical contribution and an appropriate implementation solution.

Appendices

Appendix A

Let the range error and the true value be given. The series expansion around the true value, terminated at the first term because the others are negligible, gives Since the component errors are Gaussian and zero-mean, we can assume the same statistics for the range error.
The variance then follows. By using the estimated values, which are the only values available to the fusion system, in place of the true ones, and introducing the angle error, we can write Because, for a Gaussian random variable with given variance and a constant, it results we get Similarly we can proceed for the bearing, considering also that the approximation is as good as the relevant ratio is small, and so

Appendix B

Let the errors of the two sets of data and the true slant values be given. Considering that, for a Gaussian random variable with given variance and a constant, it results By simple mathematical transformations we get and while the cross-correlation is In the same way we get the corresponding expressions for the components of the second set, while for the correlation between the two sets, after some calculations, we have Let now the error and the true values be given; by simple mathematical transformations we get while its variance is given by The error will result correlated with the other errors by the following equations

References

1. D. V. Stallard, “An angle-only tracking filter in modified spherical coordinates,” in Proc. AIAA Guidance, Navigation, and Control Conf., pp. 542–550 (1987).
2. M. Mallick et al., “Angle-only filtering in 3D using modified spherical and log spherical coordinates,” in Proc. 14th Int. Conf. on Information Fusion, pp. 1–8 (2011).
3. N. Peach, “Bearings-only tracking using a set of range-parameterised extended Kalman filters,” IEE Proc. Control Theory Appl. 142(1), 73–80 (1995). http://dx.doi.org/10.1049/ip-cta:19951614
4. Y. Bar-Shalom, Multitarget Multisensor Tracking: Advanced Applications, pp. 155–185, Yaakov Bar-Shalom, Storrs, Connecticut (1990).
5. S. Blackman and R. Popoli, Modern Tracking Systems, pp. 689–699, Artech House, Norwood, Massachusetts (1999).
6. Y. Bar-Shalom and X. R. Li, Multitarget Multisensor Tracking: Principles and Techniques, pp. 36–55, Yaakov Bar-Shalom, Storrs, Connecticut (1995).
7. Y. Bar-Shalom and X. R. Li, Multitarget Multisensor Tracking: Principles and Techniques, pp. 448–450, Yaakov Bar-Shalom, Storrs, Connecticut (1995).
8. C. Quaranta and G. Balzarotti, “Probabilistic data association applications to data fusion,” Opt. Eng. 47(2), 027007 (2008). http://dx.doi.org/10.1117/1.2857445
9. Y. Bar-Shalom and X. R. Li, Multitarget Multisensor Tracking: Principles and Techniques, Yaakov Bar-Shalom, Storrs, Connecticut (1995).
10. S. Blackman and R. Popoli, Modern Tracking Systems, Artech House, Norwood, Massachusetts (1999).
11. S. Challa et al., Fundamentals of Object Tracking, Cambridge University Press, New York (2011).
12. T. Kirubarajan and Y. Bar-Shalom, “Probabilistic data association techniques for target tracking in clutter,” Proc. IEEE 92(3), 536–557 (2004). http://dx.doi.org/10.1109/JPROC.2003.823149
13. Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation, pp. 232–245, John Wiley & Sons, New York (2001).
14. G. Balzarotti et al., “An approach to sensor data fusion for flying and landing aid purpose,” in AGARD, Low-Level and Nap-of-the-Earth (NOE) Night Operations (1995).
15. H. Durrant-Whyte, “Introduction to sensor data fusion,” (2002). http://www.acfr.usyd.edu.au/teaching/graduate/Fusion/index.html
16. H. B. Mitchell, Multi-Sensor Data Fusion, Springer-Verlag, Berlin, Heidelberg (2007).
Biography

Carlo Quaranta received the Laurea degree (summa cum laude) in electrical engineering from the University of Calabria in 1978. For about 10 years he worked in the Research and Development Laboratories of Italtel (Milano, Italy), in the field of digital signal processing applied to synthesizers, adaptive filtering, and digital PLLs. Since 1986, he has been active in the Research and Development Laboratories of Selex ES (formerly FIAR S.p.A.), Milano, in the definition and design of new amplitude and phase equalizers and of digital coherent demodulators for data transmission via satellite to land mobiles. His present main activities are in IRST systems, and his research interests are in the field of target tracking and data fusion.

Giorgio Balzarotti received his Laurea degree in electronic engineering in 1982 from Politecnico di Milano. He is a senior engineer at Selex ES, and his main activity is related to sensors for defense. He has worked in many national and international programs related to infrared and imaging systems. His research and development interests are in data and signal processing and in all aspects related to IRST systems.