This paper describes an efficient approach to the fusion of spatially registered images and image sequences. The fusion method uses an improved wavelet representation that has low redundancy as well as near shift invariance. Specifically, the representation is derived from an extended discrete wavelet frame with variable resampling strategies, among which the chosen resampling corresponds to an optimal strategy. The proposed method lends itself well to rapid fusion of image sequences. Experiments on different types of imagery show that the proposed method, as a shift-invariant scheme, yields better results than conventional wavelet methods and is much more efficient than existing shift-invariant methods (the undecimated wavelet and dual-tree complex wavelet methods).
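To make the general pipeline concrete, the following is a minimal sketch of wavelet-based fusion of two registered images: decompose each image, merge the coefficients, and invert the transform. It uses a single-level orthonormal Haar transform (a decimated, shift-variant stand-in, not the authors' low-redundancy frame), and the fusion rules shown (average the approximation subbands, pick the larger-magnitude detail coefficient) are common illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def haar2_forward(x):
    """Single-level 2D orthonormal Haar transform.

    Splits an image with even dimensions into approximation (LL)
    and horizontal/vertical/diagonal detail subbands (LH, HL, HH).
    """
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def haar2_inverse(ll, lh, hl, hh):
    """Exact inverse of haar2_forward (perfect reconstruction)."""
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x

def fuse(img1, img2):
    """Fuse two registered, same-size images in the wavelet domain."""
    c1 = haar2_forward(img1.astype(float))
    c2 = haar2_forward(img2.astype(float))
    # Average the approximation subbands ...
    ll = (c1[0] + c2[0]) / 2.0
    # ... and keep the stronger (larger-magnitude) detail coefficient,
    # which tends to preserve edges from whichever source is sharper.
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return haar2_inverse(ll, *details)
```

Replacing `haar2_forward`/`haar2_inverse` with an undecimated or low-redundancy frame transform, as the paper proposes, changes only the decomposition step; the coefficient-merging logic is unchanged, which is why the choice of representation governs both shift invariance and cost.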
"Image fusion using a low-redundancy and nearly shift-invariant discrete wavelet frame," Optical Engineering 46(10), 107002 (1 October 2007). https://doi.org/10.1117/1.2789640