Image fusion using a low-redundancy and nearly shift-invariant discrete wavelet frame
Bo Yang, Zhongliang Jing
Abstract
This paper describes an efficient approach to the fusion of spatially registered images and image sequences. The fusion method uses an improved wavelet representation that combines low redundancy with near shift-invariance. Specifically, the representation is derived from an extended discrete wavelet frame with variable resampling strategies, and it corresponds to an optimal choice among those strategies. The proposed method lends itself well to rapid fusion of image sequences. Experiments on different types of imagery show that the proposed shift-invariant scheme produces better results than conventional wavelet methods and is much more efficient than existing shift-invariant methods (the undecimated wavelet and dual-tree complex wavelet methods).
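To make the wavelet-fusion idea concrete, the sketch below implements the fully undecimated (stationary) Haar baseline that the abstract cites as an existing shift-invariant method, not the paper's low-redundancy frame: each image is split into one approximation and three detail subbands without downsampling, approximations are averaged, details are fused by choosing the larger-magnitude coefficient (a common fusion rule), and the result is reconstructed. All function names here are illustrative, and circular boundary handling is an assumption.

```python
import numpy as np

def swt_haar_2d(img):
    """One-level undecimated (stationary) Haar transform with circular
    boundaries; returns LL, LH, HL, HH subbands, each the size of img."""
    r = np.roll(img, -1, axis=1)            # right neighbor along rows
    L, H = (img + r) / 2, (img - r) / 2     # row low-pass / high-pass, no decimation
    def col_split(band):
        d = np.roll(band, -1, axis=0)       # neighbor along columns
        return (band + d) / 2, (band - d) / 2
    LL, LH = col_split(L)
    HL, HH = col_split(H)
    return LL, LH, HL, HH                   # note: img == LL + LH + HL + HH

def fuse(img_a, img_b):
    """Fuse two spatially registered images in the undecimated Haar domain:
    mean of approximations, max-absolute selection for detail coefficients."""
    A, B = swt_haar_2d(img_a), swt_haar_2d(img_b)
    bands = [(A[0] + B[0]) / 2]                            # LL: average
    for ca, cb in zip(A[1:], B[1:]):                       # LH, HL, HH: max-abs
        bands.append(np.where(np.abs(ca) >= np.abs(cb), ca, cb))
    return sum(bands)                                      # inverse transform = band sum
```

Because no subband is downsampled, the representation is shift-invariant but four-fold redundant per level; the paper's contribution is precisely to cut that redundancy while nearly preserving the invariance.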
©(2007) Society of Photo-Optical Instrumentation Engineers (SPIE)
Bo Yang and Zhongliang Jing "Image fusion using a low-redundancy and nearly shift-invariant discrete wavelet frame," Optical Engineering 46(10), 107002 (1 October 2007). https://doi.org/10.1117/1.2789640
Published: 1 October 2007
CITATIONS
Cited by 21 scholarly publications.
KEYWORDS
Image fusion
Discrete wavelet transforms
Wavelets
Transform theory
Infrared imaging
Optical engineering
Visualization
