Image fusion using a low-redundancy and nearly shift-invariant discrete wavelet frame
Optical Engineering, 46(10), 107002 (2007). doi:10.1117/1.2789640
Abstract
This paper describes an efficient approach to the fusion of spatially registered images and image sequences. The fusion method uses an improved wavelet representation that has low redundancy as well as near shift-invariance. Specifically, this representation is derived from an extended discrete wavelet frame with variable resampling strategies, and it corresponds to an optimal strategy. The proposed method lends itself well to rapid fusion of image sequences. Experiments on different types of imagery show that the proposed method, as a shift-invariant scheme, provides better results than conventional wavelet methods and is much more efficient than existing shift-invariant methods (the undecimated wavelet and dual-tree complex wavelet methods).
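As a concrete point of reference, the sketch below illustrates generic wavelet-domain image fusion using the undecimated (stationary) wavelet transform from PyWavelets as a stand-in for the paper's low-redundancy frame; the fusion rule (averaged approximation coefficients, maximum-absolute detail coefficients), the function names, and the parameter choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of wavelet-domain image fusion (not the paper's exact
# low-redundancy frame): decompose both registered source images with the
# undecimated (stationary) wavelet transform, combine the coefficients,
# and reconstruct the fused image.
import numpy as np
import pywt


def fuse_images(img_a, img_b, wavelet="db2", levels=3):
    """Fuse two registered grayscale images of equal size.

    Assumed baseline rule: average the approximation coefficients and keep
    the detail coefficient with the larger absolute value at each position.
    """
    coeffs_a = pywt.swt2(img_a, wavelet, level=levels)
    coeffs_b = pywt.swt2(img_b, wavelet, level=levels)

    fused = []
    for (ca_a, (ch_a, cv_a, cd_a)), (ca_b, (ch_b, cv_b, cd_b)) in zip(coeffs_a, coeffs_b):
        ca = 0.5 * (ca_a + ca_b)  # average the low-pass (approximation) band
        details = []
        for d_a, d_b in ((ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b)):
            # keep the coefficient with larger magnitude (max-abs rule)
            details.append(np.where(np.abs(d_a) >= np.abs(d_b), d_a, d_b))
        fused.append((ca, tuple(details)))

    return pywt.iswt2(fused, wavelet)


if __name__ == "__main__":
    # Synthetic example; image dimensions must be divisible by 2**levels.
    rng = np.random.default_rng(0)
    a = rng.random((256, 256))
    b = rng.random((256, 256))
    print(fuse_images(a, b).shape)  # (256, 256)
```

The undecimated transform used here is one of the shift-invariant baselines the abstract compares against; the paper's contribution is a representation with far lower redundancy that retains near shift-invariance, which this sketch does not reproduce.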
Bo Yang, Zhongliang Jing, "Image fusion using a low-redundancy and nearly shift-invariant discrete wavelet frame," Optical Engineering 46(10), 107002 (1 October 2007). http://dx.doi.org/10.1117/1.2789640
Journal article, 10 pages.
KEYWORDS
Image fusion, Discrete wavelet transforms, Wavelets, Transform theory, Infrared imaging, Optical engineering, Visualization