Data fusion can be used to generate high-quality data from multiple degraded data sets by appropriately extracting and combining the “good” information in each set. Image fusion in particular may be used for denoising, deblurring, or pixel-dropout compensation. Image fusion is often performed in an image transform domain: transform coefficients from multiple images are combined in various ways to produce an improved coefficient set, and the fused transform data are then inverted to produce the fused image. In this paper we formulate a general approach to image fusion in the wavelet domain. The proposed approach exploits context information through the application of nonparametric statistical hypothesis testing. The use of statistical hypothesis testing places the fusion on a theoretically sound and principled basis and leads to improved fusion performance. Furthermore, using statistical wavelet-coefficient information in a neighborhood of the test coefficient more fully exploits the available context information. We first formulate the fusion approach, and then present numerical image fusion results using a sampling of imagery from a public-domain image database. We compare the fusion performance of the proposed approach with that of other standard wavelet-domain fusion approaches, and show a performance improvement when using the proposed approach.
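To make the transform-domain pipeline concrete, the following is a minimal, illustrative sketch of wavelet-domain fusion: decompose each input image, combine coefficients, and invert. It assumes a single-level 2-D Haar transform and a simple max-magnitude selection rule for the detail subbands (a standard baseline rule, not the hypothesis-testing method proposed in this paper); the function names `haar2d`, `ihaar2d`, and `fuse` are placeholders for illustration.

```python
import numpy as np

def haar2d(x):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    # Transform along columns (horizontal pairing).
    lo = (x[:, ::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, ::2] - x[:, 1::2]) / np.sqrt(2)
    # Transform along rows (vertical pairing).
    ll = (lo[::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    rows, cols = ll.shape
    lo = np.empty((2 * rows, cols))
    hi = np.empty((2 * rows, cols))
    lo[::2, :] = (ll + lh) / np.sqrt(2)
    lo[1::2, :] = (ll - lh) / np.sqrt(2)
    hi[::2, :] = (hl + hh) / np.sqrt(2)
    hi[1::2, :] = (hl - hh) / np.sqrt(2)
    x = np.empty((2 * rows, 2 * cols))
    x[:, ::2] = (lo + hi) / np.sqrt(2)
    x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x

def fuse(img_a, img_b):
    """Baseline wavelet-domain fusion of two images of equal (even) size:
    average the approximation (LL) band, and in each detail band keep the
    coefficient of larger magnitude."""
    A = haar2d(img_a)
    B = haar2d(img_b)
    fused = [(A[0] + B[0]) / 2]
    for da, db in zip(A[1:], B[1:]):
        fused.append(np.where(np.abs(da) >= np.abs(db), da, db))
    return ihaar2d(*fused)
```

Because the transform is orthogonal and invertible, fusing an image with itself reconstructs it exactly; with two degraded inputs, the selection rule retains the stronger detail response at each location. The paper's contribution replaces the per-coefficient selection rule above with a nonparametric hypothesis test over a coefficient neighborhood.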