The first chapter explained how image fusion can benefit a variety of applications, including pattern recognition and visual analysis. The proliferation of sensed data has created strong demand for technologies that synthesize and analyze multisource data; image fusion addresses this demand by combining multisource images into a single image that conveys comprehensive information. Over the past two decades, image fusion has been an active research domain across many applications. This chapter provides a brief survey of image fusion techniques, categorizations, and datasets.
2.1 Image Fusion Survey
Given the varied objectives of different applications, image fusion can be defined as the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing. For instance, simply averaging two co-registered CT and MRI images is a basic form of image fusion. More advanced methods, such as wavelet-based approaches, represent the images in a transform domain, where image features are easier to access and manipulate. Before any fusion operation can be applied, the input images must be normalized and registered so that corresponding pixels from the input images are associated with the same physical points in the scene or on the target.
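To make the averaging example concrete, the following is a minimal sketch of pixel-wise average fusion using NumPy. The function name `average_fusion` and the toy 2×2 arrays standing in for registered CT and MRI slices are illustrative assumptions, not part of any standard library; transform-domain methods differ only in that they apply a combination rule to transform coefficients rather than to raw pixel values.

```python
import numpy as np

def average_fusion(img_a, img_b):
    """Pixel-wise average fusion of two registered, same-size images.

    Assumes the inputs are already normalized and spatially registered,
    so that each pixel pair refers to the same physical point.
    """
    # Work in floating point to avoid integer overflow during the sum.
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return (a + b) / 2.0

# Toy example: two tiny "images" standing in for aligned CT and MRI slices.
ct = np.array([[100, 50], [0, 200]], dtype=np.uint8)
mri = np.array([[0, 150], [100, 100]], dtype=np.uint8)
fused = average_fusion(ct, mri)
# fused is [[50., 100.], [50., 150.]]
```

Averaging is the simplest possible combination rule; it preserves common structure but tends to reduce contrast, which is one motivation for the transform-domain methods surveyed later.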