The derivation of sensor fusion algorithms is presented, with emphasis on the detection and estimation of radar-type targets. Theoretical expressions are developed in a form that provides the applications engineer with the fundamentals necessary to implement these algorithms in systems consisting of distributed sensors. The expressions lend themselves to knowledge- and rule-based methods, so that a priori and learned information about the overall scenario can be used to reduce uncertainties and thereby efficiently direct signal energy toward optimizing system performance. Various surveillance situations are considered and accounted for in the development of the algorithms, including Bayesian and Neyman-Pearson detection, sequential detection, multiple-target situations, estimation, colored noise such as jamming, constant false alarm rate (CFAR) processing, and multiple-background estimation. Optimization of mutual information transfer through the distributed sensors is also treated. Whereas most investigators focus on optimizing the sensors given the fusion rule, our development explores methods for optimizing the fusion rule given the sensor criteria. Some procedures are also presented for the mutual, or global, optimization of both the sensors and the fusion center. The effects of bandwidth and channel-capacity constraints between the sensors and the fusion center are taken into account in the development. Numerical results are presented which illustrate the improvements obtained from the use of multiple sensors with various fusion rules, relative to the performance of a single sensor [1-3].
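The improvement available from fusing multiple sensor decisions can be sketched with a standard k-out-of-n counting rule for independent, identical binary sensors. This is a generic illustration, not the paper's derivation; the per-sensor probabilities below are assumed values chosen for the example.

```python
from math import comb

def fused_prob(p: float, n: int, k: int) -> float:
    """Probability that at least k of n independent sensors declare a
    detection, given each declares with probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative single-sensor operating point (assumed, not from the paper):
pd, pfa = 0.7, 0.01
n = 3

for k, name in [(1, "OR"), (2, "2-of-3 majority"), (3, "AND")]:
    print(f"{name:16s} Pd={fused_prob(pd, n, k):.4f}  Pfa={fused_prob(pfa, n, k):.6f}")
```

With these values the 2-of-3 majority rule raises the system detection probability above the single-sensor 0.7 while keeping the false-alarm probability well below the OR rule's, which is the kind of fusion-rule trade-off the numerical results quantify.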
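Sequential detection is conventionally formalized as Wald's sequential probability ratio test (SPRT), in which a sensor accumulates log-likelihood-ratio increments and stops as soon as a threshold is crossed. The sketch below is a minimal SPRT for a Gaussian mean-shift hypothesis pair; the hypotheses, error rates, and signal model are assumptions for illustration, not the paper's specific formulation.

```python
import math

def sprt(samples, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT for H1: N(mu1, sigma^2) vs H0: N(0, sigma^2).
    alpha/beta are the target false-alarm and miss probabilities.
    Returns (decision, number of samples consumed)."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Log-likelihood-ratio increment for the Gaussian mean-shift pair
        llr += (mu1 / sigma**2) * (x - mu1 / 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

print(sprt([1.0] * 20))   # strong target returns present
print(sprt([0.0] * 20))   # noise-only returns
```

The appeal for distributed surveillance is that the expected sample number is smaller than the fixed-sample-size test at the same error rates, which conserves the signal energy the abstract seeks to direct efficiently.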
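The CFAR processing mentioned above is commonly implemented as cell-averaging CFAR, where the detection threshold in each range cell adapts to a local noise estimate so the false-alarm rate stays fixed as the background changes. The following is a generic CA-CFAR sketch with assumed window sizes and scale factor, not the multiple-background estimator developed in the paper.

```python
def ca_cfar(power, num_train=8, num_guard=2, scale=4.0):
    """Cell-averaging CFAR: flag cells whose power exceeds `scale` times
    the mean of the surrounding training cells (guard cells excluded)."""
    hits = []
    half = num_train // 2 + num_guard          # one-sided window extent
    for i in range(half, len(power) - half):
        train = (power[i - half:i - num_guard]              # leading cells
                 + power[i + num_guard + 1:i + half + 1])   # trailing cells
        noise = sum(train) / len(train)
        if power[i] > scale * noise:
            hits.append(i)
    return hits

power = [1.0] * 50
power[25] = 10.0          # injected target return over unit-power noise
print(ca_cfar(power))
```

The guard cells keep a strong target return out of its own noise estimate; the multiple-background case in the paper addresses what a fixed training window cannot, namely clutter edges where the leading and trailing cells sample different backgrounds.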