When viewing a scene for an object recognition task, a single imaging sensor may not provide all the information needed for recognition. One way to obtain more information is to use multiple sensors that provide images containing complementary information about the same scene. After preprocessing the source images, we use image fusion to combine the information from the different sensors. The images to be fused may contain details, such as shadows, wrinkles, and imaging artifacts, that are not needed in the final fused image. One application of morphological filters is to remove objects of a given size range from an image; therefore, we use morphological filters in conjunction with wavelets to improve recognition performance after fusion. After morphological filtering, wavelets are used to construct multiresolution representations of the source images. Once the source images are decomposed, the details are combined to form a composite decomposed image. This approach allows details at different levels to be combined independently, so that important information is preserved in the final composite image. We are developing image fusion algorithms for concealed weapon detection (CWD) applications. Fusion is useful when the sensor types have different properties, e.g., infrared (IR) and millimeter-wave (MMW) sensors. Fusing these types of images produces composite images that contain more complete information for CWD applications, such as the detection of concealed weapons on a person. In this paper we present our most recent results in this area.
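To make the pipeline concrete, the following is a minimal sketch of morphological filtering followed by wavelet-domain fusion. It is an illustration under stated assumptions, not the algorithm of this paper: it assumes a flat square structuring element for the morphological open-close filter, a one-level Haar decomposition in place of a general multiresolution analysis, and a maximum-absolute-value rule for combining detail coefficients (with averaged approximations). All function names here are hypothetical.

```python
import numpy as np

def erode(img, k=3):
    # Grayscale erosion: minimum over a flat k x k structuring element.
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def dilate(img, k=3):
    # Grayscale dilation: maximum over a flat k x k structuring element.
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out

def open_close(img, k=3):
    # Opening then closing: suppresses bright and dark features
    # smaller than the k x k structuring element (size-range removal).
    opened = dilate(erode(img, k), k)
    return erode(dilate(opened, k), k)

def haar2d(img):
    # One-level 2-D Haar decomposition into an approximation subband
    # (LL) and three detail subbands (LH, HL, HH). Image sides must be even.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4
    LH = (a + b - c - d) / 4
    HL = (a - b + c - d) / 4
    HH = (a - b - c + d) / 4
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    # Exact inverse of haar2d (perfect reconstruction).
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def fuse(img1, img2, k=3):
    # Filter each source image, decompose, keep the larger-magnitude
    # detail coefficient at each position, and average the approximations.
    (LL1, *d1), (LL2, *d2) = (
        haar2d(open_close(np.asarray(im, dtype=float), k))
        for im in (img1, img2)
    )
    LL = (LL1 + LL2) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2)]
    return ihaar2d(LL, *details)
```

A practical system would use deeper decompositions, a better wavelet, and an activity-based selection rule, but the choose-max sketch shows why subband-wise combination preserves the strongest detail from either source independently at each scale.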