Technique for radar and infrared search and track data fusion

15 April 2013
An algorithm for the fusion of data used for target search and tracking originated by a bidimensional radar and infrared search and track is proposed and described. To permit a fusion process between two systems whose measurements are not completely comparable, a new strategy for mixing the system states is introduced, thus obtaining a set of homogeneous measurements. We describe the rationale behind the method and develop mathematical aspects necessary to obtain the equation for the fusion of tracking data. An analysis of the estimation errors associated with the proposed model is also described. Some simulation results demonstrate the capabilities of the presented technique.



The idea behind the current proposal for a data fusion system derives from the evidence of some specific problems we encountered during our trials with radar and infrared search and track (IRST) for surveillance and tracking. We can summarize and represent what was highlighted during the tests in the following simple and concise way:

  • Especially in complex scenarios, it is not trivial to associate tracks from one system to tracks from the other, mainly because they measure quantities of different type.

  • In air-to-sea scenarios, track seductions caused by strong maritime clutter cause exchanges of the markers associated with radar targets, making the association with IRST tracks fail.

  • In long range tracking, radar and IRST track association fails due to the inaccuracies of the distances estimated by the IRST and the inability to properly link the angles provided by the two systems.

  • In order to operate the data fusion, kinematic ranging techniques1–3 are used in IRST to estimate the track range. These techniques are not fully reliable and in some cases do not permit correct track-to-track association.

This paper describes a simple but effective method to remove the above limitations in the use of the two systems and to operate without the need of any kinematic ranging operation.

Currently, radar and IRST are operating together on various platforms and are considered complementary from an operational point of view. In general, radar systems are effective in long range detection, they are capable of providing precise target range and they are active systems; IRST systems are very precise in angular target tracking and they are completely passive systems.

The proposed data fusion approach intends to exploit the capabilities of the two sensors by mixing their specific characteristics, considering them as part of an integrated system.


Definition of the Context for Data Fusion

We assume at the beginning that both sensors, radar and IRST, provide detection data relevant to the target at the same sampling time Ts (synchronous systems). Later on we will remove this limitation to deal also with systems having different sampling times (asynchronous systems).

We will disregard all the issues related to the registration problem4,5 of the two sensors assuming that they have a common axis system that, in general, is the one of the host platform. Figure 1 is a top level block diagram of the data fusion system.

Fig. 1

Top level block diagram of the system.



Sensing and Preprocessing System

This represents the sensor processing front-end; its main purpose is the extraction of all the possible radar and IRST targets. This is obtained, in general, by suitable statistical signal processing on a single acquisition (batch) basis, without any knowledge of past events. The resulting data records are called “detections.” Each detection can be originated by a real target of interest, by noise, or by clutter/background. A common situation is to operate with few detections originated by real targets and many detections originated by noise or clutter/background.


Data Processing and Fusion System

This part is devoted to discriminating the tracks originated by real targets from those due to noise or clutter. It is based on the time-by-time correlation of the detections. Data processors (DAP) perform estimation and prediction of tracks on the basis of kinematics. The fusion system, placed at the end of the DAPs, mixes the track data. Here we assume the DAPs and the fusion system are located in the same functional unit, even if this is not strictly necessary.

The data fusion system functional block will be expanded later on.

For our purpose we assume that, for each detection, the following data are available in input to the radar data processing functional block: range r and bearing β.

For the IRST, for each detection, the following data are available in input to the IRST data processing functional block: azimuth φ and elevation ϑ.

The angles β and φ derive from different sources, but they refer to the same angle, although with different measurement noise.

For compactness, the sampling time Ts is not indicated in the following formulas and whenever necessary we will omit the sampling counter k, too.

The geometry associated to the above parameters is schematically presented in Fig. 2.

Fig. 2

Geometry of the scene.


The noise associated with the input data is assumed white, Gaussian, and uncorrelated.

We convert the radar data from polar to Cartesian coordinates to avoid the use of extended Kalman filters.

It is known that this conversion produces correlation and bias in the noise of the converted coordinates, but the bias can be evaluated and therefore removed.6
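As an illustration, here is a minimal sketch of a debiased polar-to-Cartesian conversion of the kind Ref. 6 describes. The function name and the convention that the bearing is measured from the y-axis are our assumptions; the multiplicative factor exp(σβ²/2) compensates the bias E[cos(βm)] = cos(β)·exp(−σβ²/2) induced by Gaussian bearing noise.

```python
import numpy as np

def polar_to_cartesian_debiased(r_m, b_m, sigma_b):
    """Convert measured range r_m and bearing b_m (rad, noise std sigma_b)
    to slant-plane Cartesian coordinates, compensating the bias that the
    bearing noise induces: E[cos(b_m)] = cos(b) * exp(-sigma_b**2 / 2)."""
    debias = np.exp(sigma_b**2 / 2.0)  # multiplicative bias compensation
    x = r_m * np.sin(b_m) * debias     # bearing measured from the y-axis (assumption)
    y = r_m * np.cos(b_m) * debias
    return x, y
```

Averaged over many noisy measurements, the debiased coordinates are centered on the true position, whereas the raw conversion is offset by roughly r·σβ²/2.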

Now we can proceed to define the output elements, the tracks, of the radar data processor (using the usual symbols for the description of dynamic systems):

  • estimation of the present state: [x^R(k|k),y^R(k|k)]

  • estimation of the future state (prediction): [x^R(k+1|k),y^R(k+1|k)]

  • state errors estimated variances σx2 and σy2

  • estimated correlation between the state errors: ρxy=E{ΔxR·ΔyR}, where ΔxR=x^R−x and ΔyR=y^R−y are the estimation errors on x^R and y^R, with x and y the true values of the slant coordinates. E{} stands for the expected value.

The output elements, the tracks, of the IRST data processing function are the following:

  • estimation of the present state: φ^(k|k),ϑ^(k|k)

  • prediction of the future state: φ^(k+1|k),ϑ^(k+1|k)

  • estimated variances σφ2, σϑ2.

From the estimates x^R and y^R we can get back the range and bearing filtered data:


Since the estimation errors on x^R and y^R are generally different at every sampling time and are also correlated, the computation of the variance is not straightforward and some approximations must be made to obtain (see Appendix A):
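A first-order propagation of the errors, consistent with the Appendix A derivation (and assuming the bearing convention tan β = x/y, with ρxy = E{ΔxR·ΔyR}), would give:

```latex
\sigma_r^2 \simeq \frac{\hat{x}_R^2\,\sigma_x^2 + \hat{y}_R^2\,\sigma_y^2 + 2\,\hat{x}_R\hat{y}_R\,\rho_{xy}}{\hat{r}^2},
\qquad
\sigma_\beta^2 \simeq \frac{\hat{y}_R^2\,\sigma_x^2 + \hat{x}_R^2\,\sigma_y^2 - 2\,\hat{x}_R\hat{y}_R\,\rho_{xy}}{\hat{r}^4}.
```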


Now, to implement the fusion of the data coming from two different sensors, we have to perform an additional step thus obtaining a homogeneous set of measurements.

By using elevation ϑ from the IRST and the data from the radar, we can write the equations:


While by using azimuth φ and elevation ϑ from the IRST and range r from the radar we have:


Now we have two homogeneous sets of data: the vectors ζR=[ζxR ζyR ζz]T and ζI=[ζxI ζyI ζz]T, with T the symbol of the transpose operation, where of course ζzR=ζzI=ζz. The expressions for the most important statistical figures are reported in Appendix B. In particular, it is shown that the expected values of the measurement errors have a bias. When the bias is not negligible, a suitable mitigation process has to be considered.6
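The homogenization step can be sketched as follows. The exact Eqs. (3) and (4) are not reproduced in this extract, so the geometry below — radar slant coordinates projected on the ground plane through the IRST elevation, and the radar range expanded with the IRST angles — is our assumption; by construction the third component ζz = r·sin ϑ is common to both vectors.

```python
import numpy as np

def mixed_measurements(x_r, y_r, r, phi, theta):
    """Build the two homogeneous 3-vectors zeta_R and zeta_I from the 2-D
    radar slant coordinates (x_r, y_r), the radar range r, and the IRST
    azimuth phi and elevation theta (assumed geometry)."""
    zeta_z = r * np.sin(theta)                      # common altitude component
    zeta_R = np.array([x_r * np.cos(theta),         # radar data projected on
                       y_r * np.cos(theta),         # the ground plane
                       zeta_z])
    zeta_I = np.array([r * np.cos(theta) * np.sin(phi),
                       r * np.cos(theta) * np.cos(phi),
                       zeta_z])
    return zeta_R, zeta_I
```

With noiseless, mutually consistent inputs the two vectors coincide; with real data they differ only by the measurement errors analyzed in Appendix B.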

In a multiple target tracking scenario we have to associate the data from many possible targets. The track r coming from the radar and the track i from the IRST are considered to be linked to the same object if, denoting by XR and XI their respective states,


where t is the time. In our case, ζR=XR and ζI=XI. But ζR and ζI are affected by estimation errors that rarely allow Eq. (5) to be verified. In order to accept the hypothesis that two tracks coming from different sensors concern the same object (track-to-track association process), we therefore have to test that the differences between the components of ζR and ζI
are inside a given bound for every k. So, to properly associate ζR and ζI to the same target, we proceed with the following criteria:

Condition 1:

For all tracks compute


where λ is a suitable constant. Equation (6) is justified by considering that the errors of β^ and φ^ are independent and Gaussian with zero mean.
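Condition 1 can be sketched as a simple gate on the angle difference (the function name and argument conventions are ours; λ is the gate width in standard deviations):

```python
import math

def condition_1(beta_hat, phi_hat, sigma_beta, sigma_phi, lam):
    """Test that the radar bearing estimate beta_hat and the IRST azimuth
    estimate phi_hat are compatible: their difference is zero-mean Gaussian
    with variance sigma_beta**2 + sigma_phi**2 under the same-target
    hypothesis."""
    return abs(beta_hat - phi_hat) <= lam * math.sqrt(sigma_beta**2 + sigma_phi**2)
```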

Condition 2:

For all tracks satisfying the previous criteria compute:


Then, under the Gaussian approximation on ζR and ζI, verify that the following conditions are satisfied:


where σxRI2 and σyRI2 are the variances and ρΔxy is the correlation coefficient computed in Appendix C. Under the approximation that ζR and ζI are Gaussian, γ0 is a suitable threshold value evaluated by using techniques based on the χ2 test.
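A possible sketch of the condition-2 gate, assuming the classical chi-square (Mahalanobis) form implied by the text, with the covariance built from the variances and correlation coefficient of Appendix C:

```python
import numpy as np

def condition_2(dx, dy, var_x, var_y, rho, gamma0):
    """Chi-square gate on the differences (dx, dy) between the homogenized
    radar and IRST track components; gamma0 is the chi-square threshold."""
    cross = rho * np.sqrt(var_x * var_y)
    cov = np.array([[var_x, cross],
                    [cross, var_y]])
    d = np.array([dx, dy])
    d2 = d @ np.linalg.solve(cov, d)   # Mahalanobis squared distance
    return d2 <= gamma0
```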

Condition 3:

For all tracks satisfying the previous criteria compute:


Then verify that the following conditions are satisfied:


where σxRI2(k+1|k), σyRI2(k+1|k), and ρΔxy(k+1|k) can be computed as in the previous step, but using the predicted data, and γ1 is evaluated in the same way as γ0.

We define as “mixed detection” the system of Eq. (7) obtained by combining the data of two tracks which pass the above discrimination process. Equation (7) can be interpreted as a set of measures whose measurement error variances are σr2, σβ2, σxR2, σyR2, σxI2, σyI2, and σz2, evaluated in the Appendices.


Fusion and Tracking

ζxR(k) and ζxI(k) both represent an estimation of the same quantity, as well as ζyR(k) and ζyI(k). Now, the classical way to proceed for the fusion is by the following equation:7


Because ζz(k) is common to both the ζR and ζI vectors, it can be ignored in the computation, so the matrix A becomes a 2×2 matrix and the estimate ζ^RI(k) a two-element vector.
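For reference, the classical track-to-track fusion of Ref. 7 has the standard form (a sketch consistent with the error covariances defined next; symbols follow the text):

```latex
\hat{\zeta}_{RI}(k) = \zeta_I(k) + A\left[\zeta_R(k) - \zeta_I(k)\right],
\qquad
A = \left(R_I - R_{IR}\right) R_\Delta^{-1},
\qquad
R_\Delta = R_R + R_I - R_{RI} - R_{IR}.
```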

Let ΔζRI, ΔζI, and ΔζR be the errors on ζ^RI(k), ζI(k), and ζR(k); we have:


The covariance and cross-covariance matrices of the errors ΔζR, ΔζI, and ΔζRI are
while the covariance matrix of the difference ΔζR(k)−ΔζI(k) is


We have7


Since the polar-to-Cartesian conversion introduced above causes a correlation between the errors along the coordinate axes, the matrix RΔ tends to be singular, especially when the trajectory of the target is seen near 45 deg, with impacts on the stability of the tracking. A different approach is then adopted.

We consider (ζR,ζI) as two different detections determined by the same sensor from the same source. We now designate


the probability density functions (pdf) of the Gaussian random variables ζR and ζI, respectively, and Z^=Z^(k|k−1) the “measure” for the present time predicted at the previous time t−Ts by the data fusion system, while S(k) is the relevant residual covariance matrix.

In that case8


provides the probability of ζR(k), given that ζI and ζR represent the same object. Therefore, every mixed detection will generate a new “measure,” whose components along the coordinate axes are provided by


So we can write


where its covariance matrix is




where σxR2, σxI2, σyR2, σyI2, σz2, C(ΔxζR·ΔyζR), C(ΔxζI·ΔyζI), C(ΔxζI·ΔxζR), C(ΔyζI·ΔyζR), C(ΔxζI·ΔyζR), and C(ΔyζI·ΔxζR) are given in the Appendices.
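The generation of the fused “measure” can be sketched as follows, under the assumption that the weight α is the Gaussian likelihood of each detection with respect to the predicted measure Z^ and residual covariance S, normalized so that the weights sum to one (the exact weighting of Ref. 8 may differ):

```python
import numpy as np

def gaussian_likelihood(z, z_pred, S):
    """Gaussian pdf N(z; z_pred, S) evaluated for a 2-vector z."""
    d = z - z_pred
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(S)))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(S, d))

def fuse_mixed_detection(zeta_R, zeta_I, z_pred, S):
    """Weight the radar and IRST components of a mixed detection by their
    likelihoods with respect to the predicted measure."""
    p_R = gaussian_likelihood(zeta_R, z_pred, S)
    p_I = gaussian_likelihood(zeta_I, z_pred, S)
    alpha = p_R / (p_R + p_I)          # alpha ~ 0.5 -> equal weights
    return alpha * zeta_R + (1.0 - alpha) * zeta_I, alpha
```

In this sketch, α around 0.5 corresponds to radar and IRST contributing with equal weight to the fused detection, and α near 0 to IRST-dominated tracking, consistently with the gate shapes discussed with Figs. 7 and 8.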

ζα(k) is the input to the processor devoted to the fused tracks and R(k) is the covariance matrix of its error.

All the mixed detections meeting the criteria


with γ a suitable threshold are considered for an association with the fused track. It is possible that an IRST track could be associated with more than one radar track and vice versa. In fact, every IRST track with azimuth near the bearing of a given radar track could pass, regardless of the elevation, the tests of the discrimination process for the mixed detection generation. This means that, although unlikely, a given fused track can be associated with multiple mixed detections. In that case, we proceed by using the classical joint probabilistic data association (JPDA) algorithm,9,10 with the following adaptation: the fused tracks and the IRST tracks play the roles, respectively, of the tracks and the detections in the generation of the validation matrix and of the event matrices of the JPDA algorithm. Then the exponent of the Gaussian probability density function relevant to each fused track to which the mixed detection is associated (the mixed detection having been previously generated using the relevant IRST track in the event matrix) will be equal to 1/2 of the left-hand side of Eq. (21). At this point we can proceed with the calculation of the Kalman gain, the estimation of the present and predicted states, and the covariance matrices of the relevant estimation errors. Figure 3 summarizes the described process with a block diagram.

Fig. 3

Data fusion system diagram.



Asynchronous Systems

Now we remove the limitation that radar and IRST are synchronous. Our main objective is to align all data in time in order to implement the track-to-track association process conditions 1 to 3 described in Sec. 2.

Let us designate TR and TI the sampling periods of the radar and the IRST, respectively. We select the radar sampling period TR as the reference for the following description and as the time when all the operations of the fusion system start.



where tR and tI are the sampling times of radar and IRST. Referring to Fig. 4, we can proceed in the following way.

Fig. 4

Timing diagram.


The output data from the IRST referred to the radar time tR=k·TI+Δt are given by


while the radar predictions at the time tI+TI=tR+TI−Δt referred to the IRST predicted data are


where ξI, ΦI, PI, and QI are the state, the transition matrix, and the covariance matrices of the state error and of the process noise of the IRST track, while ξR, ΦR, PR, and QR are the corresponding quantities for the radar track.
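Under a linear state-propagation model, these alignments would take the standard prediction form (a sketch; the paper's Eqs. (22) and (23) are assumed to be of this type):

```latex
\hat{\xi}_I(t_R) = \Phi_I(\Delta t)\,\hat{\xi}_I(t_I),
\qquad
P_I(t_R) = \Phi_I(\Delta t)\,P_I(t_I)\,\Phi_I^{T}(\Delta t) + Q_I(\Delta t),

\hat{\xi}_R(t_I + T_I) = \Phi_R(T_I - \Delta t)\,\hat{\xi}_R(t_R),
\qquad
P_R(t_I + T_I) = \Phi_R(T_I - \Delta t)\,P_R(t_R)\,\Phi_R^{T}(T_I - \Delta t) + Q_R(T_I - \Delta t).
```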

Equations (23) are necessary to verify conditions 1 and 2. Once the predicted state ξ^R(tI+TI) and the predicted state error covariance matrix PR(tI+TI) have been obtained, we are ready to apply condition 3.

An alternative method, called interpolation between samples, is certainly faster and, above all, requires no knowledge of the state equations of either tracking system. It works as follows: since it is always Δt≤TI, assuming a three-dimensional state, it is reasonable to suppose a constant acceleration of the IRST track within the time interval Δt. Furthermore, in the absence of further information between two successive sampling times, we obtain the variances at an intermediate point as a linear interpolation between the variances of the estimated and predicted state errors. So we rewrite Eq. (23) to align the azimuth and elevation angles φ(tI), ϑ(tI) and the state error variances σI2=[σφ2 σϑ2]T to the time tR as


where φ(tI)=φ^(k|k), ϑ(tI)=ϑ^(k|k), ωψ(tI)=ω^ψ(k|k), and αψ(tI)=α^ψ(k|k), with ψ=φ,ϑ, are the estimated position, velocity, and acceleration of the IR track angles at the time instant tI=k·TI. σI2(k|k) and σI2(k+1|k) designate the estimated and predicted variances at the times tI and tI+TI, respectively.
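The interpolation-between-samples alignment can be sketched as follows (ψ stands for either azimuth or elevation; the function name and conventions are ours):

```python
def align_irst_to_radar_time(psi, omega, accel, var_est, var_pred, dt, T_I):
    """Extrapolate an IRST angle psi (with angular rate omega and angular
    acceleration accel, constant-acceleration hypothesis) over dt <= T_I,
    and linearly interpolate its variance between the estimated (var_est)
    and predicted (var_pred) state-error variances."""
    psi_at_tr = psi + omega * dt + 0.5 * accel * dt**2
    var_at_tr = var_est + (var_pred - var_est) * (dt / T_I)
    return psi_at_tr, var_at_tr
```

For Δt = 0 this reduces to the estimated state, and for Δt = TI to the predicted one, as expected.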

For condition 3 we proceed in a simpler way, using linear interpolation for both the positions x^R and y^R and the variances σR2=[σx2 σy2]T as




Simulation Results

We generated the target motions in order to stress the system and to check the robustness of the algorithms, even at the risk of creating poorly realistic scenarios. Clutter is generated uniformly distributed inside the observed volume. Both the IRST and the radar utilize the probabilistic data association (PDA) algorithms9–12 for detection-gate association. The JPDA algorithm is also used in the case of a detection shared by more tracks. Moreover, in order to make the tracking more accurate and to reduce the false track probability, we assume the use of the interacting multiple model (IMM) algorithm.9–12

The sampling time of the IRST is TI=157 ms, while that of the radar is TR=1 s. To synchronize the two data flows we used the interpolation between samples method. The radar range measurement error is 75 m rms; the bearing error is 0.5 deg rms in one case and 1.0 deg rms in the second. The IRST angular measurement error is 0.6 mrad rms.

Different motion models are used by IMM for the different process phases.

For the radar.

For the formation tracks the models are:

  • 1. False target model: acceleration process of 35 m/s2, time constant τ0=10 s, and detection probability PDR=0.

  • 2. Medium-low acceleration model: acceleration process of 35 m/s2, time constant τ0=10 s, and detection probability PDR=0.9.

  • 3. Maneuver motion model: acceleration process of 70 m/s2 and time constant τ1=3 s.

For the established tracks the motion models are:

  • 1. Linear motion model: acceleration process of 0.25 m/s2 and time constant τ0=60 s.

  • 2. Medium-low acceleration model: acceleration process of 35 m/s2 and time constant τ0=10 s.

  • 3. Maneuver motion model: acceleration process of 70 m/s2 and time constant τ1=3 s.

For the IRST.

For the formation tracks we have:

  • 1. False target model: acceleration process of 0.5 mrad/s2, time constant τ0=5 s, and detection probability PDI=0.

  • 2. Medium-low acceleration model: acceleration process of 0.5 mrad/s2, time constant τ0=5 s, and detection probability PDI=0.9.

  • 3. Maneuver motion model: acceleration process of 25 mrad/s2 and time constant τ1=0.5 s.

For the established tracks, it is:

  • 1. Linear motion model: acceleration process of 10−3 mrad/s2 and time constant τ0=120 s.

  • 2. Medium-low acceleration model: acceleration process of 0.5 mrad/s2 and time constant τ0=5 s.

  • 3. Maneuver motion model: acceleration process of 25 mrad/s2 and time constant τ1=0.5 s.

The data fusion system is characterized by the following features:

K0=3, γ0=2.5, γ1=3.75, and a gate threshold γ=20.

For the fusion system, too, we used the IMM algorithm, characterized by these three motion models:

  • 1. Linear motion model: acceleration process of 0.0025 m/s2 and time constant τ0=60 s.

  • 2. Medium-low acceleration model: acceleration process of 30 m/s2 and time constant τ0=10 s.

  • 3. Maneuver motion model: acceleration process of 100 m/s2 and time constant τ1=4 s.

For the motion along the altitude direction we adopted only a linear motion model characterized by an acceleration process rms of 0.025 m/s2 and time constant τz=60 s.

The probability of correct association of the mixed detection with the predicted tracks, i.e., of correct fusion of the data from the two sensors, was practically 100% in all performed tests, with the exception of the case where the distance between two targets was close to the resolution of the two sensors and the relative velocity was close to zero. In that condition, the probability falls to 50%.

We consider as merit figures for the data fusion system:

  • 1. The increased precision of the fused tracks with respect to the radar ones.

  • 2. The absence of bias in the estimation errors along the three coordinate axes.

We checked the above merit figures by using a target placed at a distance greater than 70 km, at an altitude of 50 m, approaching the observer with a constant velocity of 250 km/h along the x- and y-directions, at an angle of 45 deg with respect to the observer.

Figure 5 shows the behavior of the estimation errors versus time k for a 0.5 deg rms bearing error, while Fig. 6 refers to a 1.0 deg rms bearing error.

Fig. 5

Estimation errors (m) (radar angular measurement error 0.5 deg): (a) x-axis (m); (b) y-axis (m); and (c) altitude (m). (—— fused track estimation error, – – – radar track estimation error).


Fig. 6

Estimation errors (m) (radar angular measurement error 1 deg): (a) x-axis (m); (b) y-axis (m); and (c) altitude (m). (—— fused track estimation error, – – – radar track estimation error).


It is possible to see that the error in bearing impacts the performance of the radar as a standalone system, but it is practically negligible when data fusion is implemented. The general benefit in accuracy is always evident.

It is interesting to observe in Figs. 5(a), 5(b), 6(a), and 6(b) the difference in behavior of the radar estimation error with respect to the data fusion estimation error: due to the polar-to-Cartesian coordinate conversion of the radar data, an error on the x-axis corresponds to a proportional error of opposite sign on the y-axis. In the specific case of the figures, where the trajectory of the target is 45 deg off-axis, Δx≈−Δy (dashed lines in the figures). The data fusion system significantly mitigates that effect (solid lines).

It is interesting to observe the behavior of the fused tracks: when the coefficient α of the fusion equation is around 0.5, i.e., when radar and IRST contribute with equal weight to the fused detection, the equal-probability ellipse (gate) of the fused track is orthogonal to the observer-target direction (Fig. 7). The angular radar error causes an error proportional to r^·σβ, the cross-range error, orthogonal to the range itself.

Fig. 7

The gate when α is around 0.5.


When the coefficient α tends to 0, the IRST data become prevalent in ruling the tracking. All the uncertainty due to the radar bearing disappears and the cross-range error is determined mainly by the IRST angular error. The gate is then mainly determined by the radar range estimation error and collapses almost into a segment (Fig. 8).

Fig. 8

The gate collapses almost into a segment when α is around 0.


To verify the bias in the estimation error along each coordinate axis we used the normalized mean estimation error (NMEE)13 based on N Monte Carlo runs. Using a confidence region of 95%, we consider the error to be without bias if

where σq is the standard deviation of the error component Δx in the q’th run.
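A sketch of the NMEE statistic and its confidence bound (the function names are ours): under the zero-bias hypothesis the per-sample average of the normalized errors is approximately N(0, 1/N), hence the 95% bounds ±1.96/√N.

```python
import math
import numpy as np

def nmee(errors, sigmas):
    """Normalized mean estimation error per sample.
    errors, sigmas: arrays of shape (N_runs, K_samples)."""
    return np.mean(errors / sigmas, axis=0)

def nmee_bound(n_runs, z=1.96):
    """95% confidence bound on the NMEE under the zero-bias hypothesis."""
    return z / math.sqrt(n_runs)
```

Here nmee_bound(30) ≈ 0.358 and nmee_bound(60) ≈ 0.253, matching (to rounding) the confidence bounds used in the tests described next.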

Our NMEE test consists of N=30 Monte Carlo runs of length K=180 samples (3 min long), using the target of the previous example. Figure 9 shows the graphs of the normalized estimated mean error for a 0.5 deg rms bearing error, the bounds of the confidence region, [−0.359, 0.359], and the sample mean in each direction. We can observe that the samples μ(k) are within the confidence interval, except for some isolated points.

Fig. 9

NMEE (radar angular measurement error 0.5 deg): (a) x-axis; (b) y-axis; and (c) altitude. (—— NMEE, – – – mean value, - · - · - · bounds).


Finally, Fig. 10 provides the graphs of the NMEE based on N=60 Monte Carlo runs, but with an angular radar measurement error of 1 deg. Also in this case we can observe that the NMEE is within the 95% bounds of the confidence interval, now [−0.254, 0.254], except for some isolated points, showing a still acceptable zero-mean estimation error of the fusion system.

Fig. 10

NMEE (radar angular measurement error 1 deg): (a) x-axis; (b) y-axis; and (c) altitude. (—— NMEE, – – – mean value, - · - · - · bounds).




Conclusions

The algorithm to fuse tracks proposed in this paper provides a general procedure to manage nonhomogeneous data, as in the case of radar and IRST. The paper considers both synchronous and asynchronous sensors; for the latter, it introduces a specific synchronization technique of the data flows. The simulation results prove the validity of the method through a significant improvement in the accuracy of tracking and the absence of bias in the state errors of the tracks.

The extension of the algorithm to the case of three-dimensional radar is fairly straightforward: in that case, in addition to the bearing, the radar also provides measurements of elevation, making the generation of the mixed detection more immediate. Indeed, the comparison between radar and IRST tracks is no longer extended to all objects located inside a given band of uncertainty around the bearing of the radar track, as suggested by Eq. (6), but only to a reduced set of targets identified also by their elevation. This requires less computing time than the case treated up to now.

The method can also be applied to tracking in polar coordinates by using the extended Kalman filter. In that case, the evaluation of the range and bearing variances is no longer necessary. The extended Kalman filter introduces, as drawbacks, a greater computing effort and instability in the case of significant measurement errors, process noise, or rough initializations.

It is clear that the sensors, electro-optical, radar, or others, continue to be the core of the process of sensor data fusion because they provide the measurement of reality. But sensor data fusion can effectively mitigate defects and improve overall performance also in terms of reliability and data integrity, when these factors are important for the application.14 This is obtained by means of the appropriate management of the complementary and redundant data provided by the sensors, as described in the paper.

As far as sensor data fusion methods and techniques are concerned, which are the object of continuous improvement and formalization,9,15,16 we think the paper provides an effective theoretical contribution and an appropriate implementation solution.


Appendix A


Let Δr=r^−r be the range error, with r the true value. The series expansion of r^(k) around (x,y), truncated to the first-order term because the others are negligible, gives
Since ΔxR and ΔyR are Gaussian and zero-mean we can assume the same statistics for Δr. The variance will be
with ρxy=E{ΔxR·ΔyR}.

By using the estimated values, which are the only values available to the fusion system, in place of the real ones and introducing the angle error Δβ=β^β, we can write

Because for a Gaussian random variable ξ with variance σ2 and a constant x0 we have:
we get
Similarly we can proceed for the bearing β, considering also that the approximation improves as the ratio Δr/r becomes smaller:
and so

Appendix B


Let ΔxζR=ζxR−x, ΔyζR=ζyR−y, ΔxζI=ζxI−x, and ΔyζI=ζyI−y be the errors of the two sets of data, with x and y the true slant values. Considering that for a Gaussian random variable ξ with variance σ2 and a constant x0 we have
By simple algebraic transformations, we get
while the cross-correlation is
In the same way we get for the components of ζI
while for the correlation between ΔxζI and ΔyζI, after some calculations we have
Let now Δz=ζz−z be the error, with z, r, and ϑ the true values; by simple algebraic transformations we get
while its variance is given by
The error Δz will result correlated with the errors ΔxζR, ΔxζI, ΔyζR, and ΔyζI by the following equations

Appendix C


Let Δx=ζxR−ζxI and Δy=ζyR−ζyI be the errors of the mixed detection data; we get
Considering that
by using the previous results we compute
Proceeding as before we get
and then the correlation coefficient ρΔxy provided by


1. D. V. Stallard, “An angle-only tracking filter in modified spherical coordinates,” in Proc. AIAA Guidance, Navigation, and Control Conf., Monterey, California, pp. 542–550 (1987).

2. M. Mallick et al., “Angle-only filtering in 3D using modified spherical and log spherical coordinates,” in Proc. 14th Int. Conf. on Information Fusion, pp. 1–8, IEEE, Chicago, Illinois (2011).

3. N. Peach, “Bearings-only tracking using a set of range-parameterised extended Kalman filters,” IEE Proc. Control Theory Appl. 142(1), 73–80 (1995). http://dx.doi.org/10.1049/ip-cta:19951614

4. Y. Bar-Shalom, Chapter 5 in Multitarget Multisensor Tracking: Advanced Applications, Vol. I, pp. 155–185, Yaakov Bar-Shalom, Storrs, Connecticut (1990).

5. S. Blackman and R. Popoli, Chapter 10 in Modern Tracking Systems, pp. 689–699, Artech House, Norwood, Massachusetts (1999).

6. Y. Bar-Shalom and X. R. Li, Chapter 1 in Multitarget Multisensor Tracking: Principles and Techniques, pp. 36–55, Yaakov Bar-Shalom, Storrs, Connecticut (1995).

7. Y. Bar-Shalom and X. R. Li, Chapter 8 in Multitarget Multisensor Tracking: Principles and Techniques, pp. 448–450, Yaakov Bar-Shalom, Storrs, Connecticut (1995).

8. C. Quaranta and G. Balzarotti, “Probabilistic data association applications to data fusion,” Opt. Eng. 47(2), 027007 (2008). http://dx.doi.org/10.1117/1.2857445

9. Y. Bar-Shalom and X. R. Li, Multitarget Multisensor Tracking: Principles and Techniques, Yaakov Bar-Shalom, Storrs, Connecticut (1995).

10. S. Blackman and R. Popoli, Modern Tracking Systems, Artech House, Norwood, Massachusetts (1999).

11. S. Challa et al., Fundamentals of Object Tracking, Cambridge University Press, New York (2011).

12. T. Kirubarajan and Y. Bar-Shalom, “Probabilistic data association techniques for target tracking in clutter,” Proc. IEEE 92(3), 536–557 (2004). http://dx.doi.org/10.1109/JPROC.2003.823149

13. Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Chapter 5 in Estimation with Applications to Tracking and Navigation, pp. 232–245, John Wiley & Sons, Inc., New York (2001).

14. G. Balzarotti et al., “An approach to sensor data fusion for flying and landing aid purpose,” AGARD, Low-Level and Nap-of-the-Earth (NOE) Night Operations (1995).

15. H. Durrant-Whyte, “Introduction to sensor data fusion,” University of Sydney (2002), http://www.acfr.usyd.edu.au/teaching/graduate/Fusion/index.html

16. H. B. Mitchell, Multi-Sensor Data Fusion, Springer-Verlag, Berlin, Heidelberg (2007).



Carlo Quaranta received the Laurea degree (summa cum laude) in electrical engineering from the University of Calabria in 1978. For about 10 years he worked in the Research and Development Laboratories of Italtel (Milano, Italy), in the field of digital signal processing applied to synthesizers, adaptive filtering, and digital PLLs. Since 1986, he has been active in the Research and Development Laboratories of Selex ES (formerly FIAR S.p.A.), Milano, in the definition and design of new amplitude and phase equalizers and digital coherent demodulators for data transmission via satellite to land mobiles. His present main activities are in IRST systems, and his research interests are in the fields of target tracking and data fusion.


Giorgio Balzarotti received his Laurea degree in electronic engineering in 1982 from Politecnico di Milano. He is a senior engineer at Selex ES, and his main activity is related to sensors for defense. He has worked in many national and international programs related to infrared and imaging systems. His research and development interests are in data and signal processing and in all aspects related to IRST systems.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Carlo Quaranta and Giorgio Balzarotti, “Technique for radar and infrared search and track data fusion,” Optical Engineering 52(4), 046401 (15 April 2013). https://doi.org/10.1117/1.OE.52.4.046401
