Sensor fusion can be performed either on the raw sensor output or after a segmentation step. Our previous work has concentrated on neural network models for sensor fusion after segmentation. Although this method has been shown to be fast and reliable, it still incurs the overhead of processing entire images. The wavelet transform is a multiresolution method that decomposes images into detail and average channels. These channels retain all of the image information, and sensor fusion logic operations can be performed within the wavelet coefficient space. In addition, image compression can be performed in this same space for possible remote transmission. This paper examines sensor fusion within the wavelet coefficient space. Results of experimental studies performed on the 1024-node NCUBE/10 at the University of South Carolina are also included.
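To make the idea concrete, the sketch below shows a single-level Haar wavelet decomposition of an image into one average (LL) and three detail (LH, HL, HH) channels, together with its exact inverse, demonstrating that the channels retain all of the image information. The Haar wavelet and the magnitude-based fusion rule are illustrative choices for this sketch, not necessarily the wavelet or fusion logic used in the paper.

```python
import numpy as np

def haar_1d(x):
    """One level of the (unnormalized) Haar transform along the last axis."""
    avg = (x[..., 0::2] + x[..., 1::2]) / 2.0   # average (low-pass) channel
    det = (x[..., 0::2] - x[..., 1::2]) / 2.0   # detail (high-pass) channel
    return avg, det

def haar_1d_inverse(avg, det):
    """Exact inverse of haar_1d: even = avg + det, odd = avg - det."""
    out = np.empty(avg.shape[:-1] + (2 * avg.shape[-1],))
    out[..., 0::2] = avg + det
    out[..., 1::2] = avg - det
    return out

def haar_2d(img):
    """One 2-D decomposition level: filter rows, then columns.

    Returns the four subbands LL (average), LH, HL, HH (details),
    each a quarter the size of the input image.
    """
    lo, hi = haar_1d(img)            # filter along rows (width)
    ll, lh = haar_1d(lo.T)           # filter the low band along columns
    hl, hh = haar_1d(hi.T)           # filter the high band along columns
    return ll.T, lh.T, hl.T, hh.T

def haar_2d_inverse(ll, lh, hl, hh):
    """Reconstruct the image exactly from its four subbands."""
    lo = haar_1d_inverse(ll.T, lh.T).T
    hi = haar_1d_inverse(hl.T, hh.T).T
    return haar_1d_inverse(lo, hi)

def fuse_coefficients(a, b):
    """Illustrative fusion rule in coefficient space: at each position,
    keep the coefficient with the larger magnitude. This is one common
    choice, shown here only as an example of coefficient-space logic."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

Because the transform is invertible, two sensor images can each be decomposed, their detail subbands combined with `fuse_coefficients`, and the fused result brought back to the image domain with `haar_2d_inverse`; compression can likewise operate on the same coefficients before transmission.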