4 March 2013 New approach to measure the on-orbit point spread function for spaceborne imagers
Abstract
A new approach to estimate the point spread function (PSF) of remotely sensed images is proposed here based on multiple natural point-like sources, i.e., subimages of the observed image. First the conditional subimages are extracted, and then a blind deconvolution technique is used to derive the PSF from these subimages. For a sampled imaging system with signal-to-noise ratio <20  dB , this method can provide a relatively accurate PSF result. Moreover, the estimated PSF can be applied to image restoration using Wiener filtering or other nonblind restoration methods.
Guo, Wu, Zhang, and Ren: New approach to measure the on-orbit point spread function for spaceborne imagers

1. Introduction

Spatial characteristics of remote sensing images are generally described by the point spread function (PSF) for sampled imaging systems. The PSF describes the blur produced in the image by a point in the scene. Unfortunately, a sampled-data system is not shift-invariant on a microscale. Park et al.1 pointed out that the system PSF has a spurious location-dependent phase dependence, which makes PSF measurement complicated. Nonetheless, a sampled imaging system can be considered quasilinear over a restricted operating region2 and described by a degradation model.

Although the PSFs of spaceborne imagers are measured in the laboratory before launch, they may change owing to vibration during launch or aging of materials over time. For this reason, on-orbit PSF measurement is necessary to monitor the actual performance of spaceborne imagers and is significant for subsequent image restoration.

The majority of existing PSF estimation methods are based on the theoretical method proposed by Smith,3 which calculates the modulation transfer function (MTF, the magnitude of the Fourier transform of the PSF) from its knife-edge response. These methods are widely used because knife-edge or step profiles are easily found in observed images, such as rooflines, farmland, roadways, and also tarps laid on the ground ahead of time.4 In astronomical applications, reasonably bright, isolated stars can serve as point sources to derive the PSF.5 Similarly, for earth-observing satellites, the PSF can be obtained by imaging a point source on the ground.6 In this case, parametric models or multiple point sources are needed to rebuild the PSF. More recently, methods using nonspecific views have received increasing attention. Delvit et al.7 proposed a univariate approach to assess the MTF using nonspecific scenes.

In this paper, we present a new approach to measure the on-orbit PSF using multiple opportunistic point-like spots. Unlike point-source methods that involve single or multiple artificial spotlights,6 this method makes use of the cross-relation among subimages, so we neither have to lay artificial targets ahead of time nor put effort into collecting information about the real feature in each subimage. First, more than one point-like subimage satisfying certain conditions is extracted from the observed remote sensing image. Once the subimages are detected, the PSF can be obtained using blind deconvolution. As a result, we obtain a two-dimensional (2-D) PSF directly. In the presence of noise, the diversity gain resulting from the inherent variety of natural subimages provides a significant improvement in the accuracy of PSF measurement.

The rest of the paper is organized as follows. Section 2 introduces the detailed scheme of our approach. In Sec. 3, we examine the proposed method for its ability to assess PSF and the impact of additive noise on the accuracy. In Sec. 4, we illustrate that this method can be successfully applied to spaceborne imagers for the purpose of image restoration. The last section is for the conclusion and final remarks.

2. Methodology

During the acquisition process, optical imaging systems cause image degradation due to cumulative effects of the instrumental optics (diffraction, aberrations, focusing error) and satellite movement. Typically, the degradation can be modeled by a linear system characterized by its PSF.2,8 The relation between the observed image z and the original scene u is described by

(1)

z=h*u+η,
where h is the system PSF and η is additive noise. Of the noise η, only some statistics are known, such as its mean and variance. The observed image z is the only known variable in Eq. (1), so solving for h is an ill-posed problem. Besides, the current trend is toward spaceborne cameras with higher resolution and wider field of view (FOV), so the image z is considerably large. This paper is built on the idea of reducing computational cost by using subimages of z. If P different subimages are available, they satisfy the following equation:

(2)

z_p = h * u_p + η_p,  p = 1, 2, …, P,
where z_p is the pth subimage from the observed image z, of uniform size m_z × n_z; u_p is the original object corresponding to z_p; and η_p is independent noise. As pointed out by Holst,2 the system can be considered globally shift-invariant, so h is considered unchanged for each subimage. When higher-order aberrations such as coma cannot be ignored in the system, the assumption of shift-invariance is no longer valid, and imposing it yields an averaged PSF.
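The subimage degradation model in Eq. (2) can be sketched numerically. In the snippet below, the Gaussian-shaped PSF, the subimage sizes, and the noise level are illustrative assumptions, not the paper's data:

```python
# Sketch of Eq. (2): z_p = h * u_p + eta_p for several point-like objects.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# A normalized 5x5 Gaussian-like PSF (hypothetical stand-in for h).
x = np.arange(5) - 2
h = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
h /= h.sum()

# P point-like "original objects" u_p on a zero background.
P = 3
subimages = []
for p in range(P):
    u = np.zeros((9, 9))
    u[4 + rng.integers(-1, 2), 4 + rng.integers(-1, 2)] = 1.0
    # z_p = h * u_p + eta_p (full convolution plus independent noise)
    z = convolve2d(u, h, mode='full') + 0.001 * rng.standard_normal((13, 13))
    subimages.append(z)

print(len(subimages), subimages[0].shape)  # 3 (13, 13)
```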

2.1. Blind Deconvolution Method

To solve Eq. (2), i.e., determine the PSF estimate ĥ, we adopt a classic approach of minimizing a regularized energy function. The energy function takes the following form:

(3)

E(h, u_1, u_2, …, u_P) = (1/2) ∑_{p=1}^{P} ‖h * u_p − z_p‖² + λ_u ∑_{p=1}^{P} Q(u_p) + λ_h Q(h) + μ R(u_1, u_2, …, u_P).

The first term in E(h, u_1, u_2, …, u_P) measures the fidelity to the data. The remaining three are regularization terms with positive weighting scalars λ_h, λ_u, and μ. The regularizer Q(u_p) [and Q(h)] is a smoothing term of the total-variation form

(4)

Q(u_p) = ∫ |∇u_p| dx,

where |∇u_p| is the magnitude of the subimage gradient and x = (x, y) denotes location in the domain of the image-intensity functions. Q(·) can also take other forms, such as Tikhonov regularization and the hypersurface minimal function.8
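A minimal discrete version of the total-variation term in Eq. (4) can be written with forward differences. The isotropic discretization below is an assumption about the exact scheme; the paper does not specify it:

```python
# Discrete total variation Q(u) = sum |grad u| using forward differences.
import numpy as np

def total_variation(u):
    gx = np.diff(u, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(u, axis=0)[:, :-1]   # vertical differences
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))

flat = np.ones((8, 8))                 # constant image: TV = 0
step = np.zeros((8, 8))
step[:, 4:] = 1.0                      # vertical unit step
print(total_variation(flat), total_variation(step))  # 0.0 7.0
```

As expected, a constant image has zero TV, while the step image accumulates one unit of gradient magnitude per (truncated) row.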

Based on the commutativity of convolution operator, we observe that

(5)

u_i * z_j = u_j * z_i  for 1 ≤ i, j ≤ P

in the noiseless case. [Equation (5) holds only when the convolutions in Eq. (2) are full convolutions; this discrepancy is dealt with in Sec. 2.2.] Therefore, the regularization term R(u_1, u_2, …, u_P) is defined as

(6)

R(u_1, u_2, …, u_P) = (1/2) ∑_{1≤i<j≤P} ‖z_i * u_j − z_j * u_i‖².
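The cross-relation in Eq. (5) follows from the commutativity and associativity of convolution: u_i * (h * u_j) = u_j * (h * u_i). It can be checked numerically with arbitrary example arrays (the sizes and random data below are illustrative):

```python
# Numerical check of Eq. (5) in the noiseless, full-convolution case.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
h = rng.random((3, 3))
h /= h.sum()
u1, u2 = rng.random((6, 6)), rng.random((6, 6))

# Noiseless observations z_p = h * u_p (full convolution).
z1 = convolve2d(u1, h, mode='full')
z2 = convolve2d(u2, h, mode='full')

# Cross-relation: u1 * z2 == u2 * z1 up to floating-point error.
lhs = convolve2d(u1, z2, mode='full')
rhs = convolve2d(u2, z1, mode='full')
print(np.allclose(lhs, rhs))  # True
```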

As a functional of several variables, E(h, u_1, u_2, …, u_P) is not convex everywhere and admits infinitely many solutions. The mean-value ambiguity is removed by imposing a constraint on h, which takes the following form:

(7)

∫ h dx = 1.

On the other hand, an appropriate setting of the PSF support m_h × n_h can alleviate the shift ambiguity. For fixed h or fixed u_p (p = 1, 2, …, P), E(h, u_1, u_2, …, u_P) is convex. To find a minimizer of the energy function in Eq. (3), we perform alternating minimization (AM) of E over h and the u_p, alternating between the following two steps:8

(8)

{u_1^(n), u_2^(n), …, u_P^(n)} = arg min_{u_1, …, u_P} E(h^(n−1), u_1, u_2, …, u_P)
and

(9)

h^(n) = arg min_h E(h, u_1^(n), u_2^(n), …, u_P^(n))
for n ≥ 1. The AM method used here is a variation of the steepest-descent algorithm: it descends in the subimage space to reach a minimum and then advances in the PSF space in the direction orthogonal to the previous one. The scheme repeats until the result stabilizes. More precisely, we use the preconditioned conjugate gradient method (the pcg function in Matlab) to solve the unconstrained minimization problem [Eq. (8)] and the fmincon function to solve Eq. (9). An appropriate initial setting of h may help the iterative algorithm converge to the global minimum quickly.
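The alternation in Eqs. (8) and (9) can be sketched in one dimension. The simplified skeleton below drops the regularizers, uses a single subimage (P = 1), and solves each half-step by plain linear least squares rather than the pcg/fmincon solvers the paper uses; the signal, kernel, and iteration count are illustrative assumptions:

```python
# Simplified 1-D alternating minimization: each half-step is a linear
# least-squares problem in the variable being updated.
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(k, n):
    """(len(k)+n-1) x n matrix C with C @ x == np.convolve(k, x)."""
    col = np.r_[k, np.zeros(n - 1)]
    row = np.r_[k[0], np.zeros(n - 1)]
    return toeplitz(col, row)

rng = np.random.default_rng(2)
u_true = rng.random(12)
h_true = np.array([0.25, 0.5, 0.25])
z = np.convolve(h_true, u_true)        # noiseless observation

h = np.array([0.0, 1.0, 0.0])          # initial PSF: delta function
residuals = []
for _ in range(20):
    # Eq. (8): minimize ||h * u - z||^2 over the subimage u, h fixed
    u = np.linalg.lstsq(conv_matrix(h, len(u_true)), z, rcond=None)[0]
    # Eq. (9): minimize ||h * u - z||^2 over the PSF h, u fixed
    h = np.linalg.lstsq(conv_matrix(u, len(h_true)), z, rcond=None)[0]
    residuals.append(np.linalg.norm(np.convolve(h, u) - z))

# Enforce the constraint of Eq. (7) at the end; rescaling u compensates,
# so the product h * u (and hence the residual) is unchanged.
s = h.sum()
h, u = h / s, u * s
print(round(h.sum(), 6), residuals[-1] <= residuals[0])  # 1.0 True
```

Because each half-step exactly minimizes the data term in one variable, the residual is non-increasing across iterations, which is the behavior the AM scheme relies on.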

2.2. Data Preprocessing

Before the blind deconvolution method described above can be applied, the subimages must first be extracted from the captured image. Because of cropping at the subimage boundaries, the convolution here is a partial convolution. However, Eq. (5) holds only when the subimages z_p are of full size, i.e., when the convolutions in Eq. (2) are full convolutions. The discrepancy between the full and partial convolution operators must therefore be eliminated beforehand.

For notational simplicity, we use one-dimensional signals to describe the difference between full and partial convolution. For the same convolution result d = [d_0, d_1, …, d_{M+N−2}]^T and the same convolution kernel b = [b_0, b_1, …, b_{M−1}]^T, full convolution takes the form

(10)

[d_0, d_1, …, d_{M+N−2}]^T = (a * b)_full =
\begin{bmatrix}
b_0 & & & \\
b_1 & b_0 & & \\
\vdots & \vdots & \ddots & \\
b_{M-1} & b_{M-2} & \cdots & b_0 \\
 & b_{M-1} & \cdots & b_1 \\
 & & \ddots & \vdots \\
 & & & b_{M-1}
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{N-1} \end{bmatrix},
while partial convolution takes the form

(11)

[d_0, d_1, …, d_{M+N−2}]^T = (a * b)_partial =
\begin{bmatrix}
b_{M-1} & b_{M-2} & \cdots & b_0 & & & \\
 & b_{M-1} & b_{M-2} & \cdots & b_0 & & \\
 & & \ddots & & \ddots & & \\
 & & & b_{M-1} & b_{M-2} & \cdots & b_0
\end{bmatrix}
\begin{bmatrix} a_{-M+1} \\ \vdots \\ a_{-1} \\ a_0 \\ a_1 \\ \vdots \\ a_{N-1} \\ a_N \\ \vdots \\ a_{M+N-2} \end{bmatrix}.

Obviously, they involve different lengths of the input signal a. Moreover, the partial and full convolutions are identical if the input signal a satisfies the following condition:

(12)

[a_{−M+1}, …, a_{−1}, a_0, a_1, …, a_{N−1}, a_N, …, a_{M+N−2}]^T = [0, …, 0, a_0, a_1, …, a_{N−1}, 0, …, 0]^T.
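The condition in Eq. (12) can be checked numerically in one dimension: when the out-of-range samples of a are zero, the partial convolution (computed over the extended input) reproduces the full convolution exactly. The small kernel and signal below are arbitrary examples:

```python
# 1-D illustration of Eqs. (10)-(12): zero-extending the input makes
# partial convolution coincide with full convolution.
import numpy as np

b = np.array([1.0, 2.0, 3.0])                  # kernel, M = 3
a = np.array([4.0, 5.0, 6.0, 7.0])             # signal, N = 4

full = np.convolve(a, b)                       # length M + N - 1 = 6

# Extend a beyond its borders with zeros (the condition in Eq. (12)),
# then compute the partial convolution over the same output indices.
a_ext = np.r_[np.zeros(2), a, np.zeros(2)]     # indices -M+1 .. M+N-2
partial = np.array([np.dot(b[::-1], a_ext[i:i + 3]) for i in range(6)])

print(np.array_equal(full, partial))  # True
```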

The limitation is not on [a_0, a_1, …, a_{N−1}]^T, but on the elements outside the subscript range 0 to N−1. Similar to Eq. (12), the difference between 2-D partial and full convolutions can be eliminated by requiring the pixels outside the border to be zero. Unfortunately, pixels in natural scenes and captured images are seldom zero. (For 10-bit remotely sensed data [range: 0 to 1023], the minimal value can be about 200.) Fortunately, experience shows that it is much easier to find point-like spots with a uniform background in remote sensing images, which have the form

(13)

u_p^full =
\begin{bmatrix}
c_p^D & c_p^V & c_p^D \\
c_p^H & u_p & c_p^H \\
c_p^D & c_p^V & c_p^D
\end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 0 \\
0 & u_p - c_p^M & 0 \\
0 & 0 & 0
\end{bmatrix}
+
\begin{bmatrix}
c_p & c_p & c_p \\
c_p & c_p & c_p \\
c_p & c_p & c_p
\end{bmatrix},

where c_p^V, c_p^H, c_p^D, and c_p^M are block matrices consisting of the same constant c_p, of sizes (m_h − 1) × n_z, m_z × (n_h − 1), (m_h − 1) × (n_h − 1), and m_z × n_z, respectively. c_p is the local uniform background intensity. Considering Eqs. (7) and (13), we have

(14)

(z_p − c_p^M) = h * (u_p − c_p^M)
in the noiseless case. To apply the blind deconvolution method, therefore, we only need to extract subimages with flat borders and then replace z_p with (z_p − c_p^M) in Eq. (2). In the presence of noise, the observed intensity of flat areas may be subject to small fluctuations, so when searching for subimages with a locally uniform background, it is reasonable to allow small intensity variations in the surrounding area. In summary, the blind deconvolution method proceeds with the following preprocessing steps: (1) search for conditional subimages in the observed remote sensing image; (2) record the original data z̃_p (p = 1, 2, …, P), each of size m_z × n_z; (3) for each subimage, compute the local background intensity c_p = min(z̃_p); (4) update the subimage data z_p = z̃_p − c_p (p = 1, 2, …, P).
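The four preprocessing steps can be sketched as follows. The background estimate c_p = min(z̃_p) follows the paper; the flat-border test and its tolerance value are illustrative assumptions about how "small variation" is screened:

```python
# Sketch of the preprocessing: keep subimages whose borders are nearly
# uniform, then subtract the local background c_p = min(z_p) from each.
import numpy as np

def preprocess(subimages, border_tol=2.0):
    out = []
    for z in subimages:
        # Collect the one-pixel border of the subimage.
        border = np.r_[z[0, :], z[-1, :], z[1:-1, 0], z[1:-1, -1]]
        if border.max() - border.min() > border_tol:   # step 1: not flat
            continue
        c = z.min()                                    # step 3
        out.append(z - c)                              # step 4
    return out

# A point-like spot on a flat background vs. a textured patch.
z_good = np.full((9, 9), 200.0)
z_good[4, 4] = 900.0
z_bad = np.arange(81, dtype=float).reshape(9, 9)

kept = preprocess([z_good, z_bad])
print(len(kept), kept[0][4, 4])  # 1 700.0
```

Only the flat-background spot survives, with its background level removed so the zero-border condition of Eq. (12) holds.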

3. Simulation

The simulation demonstrates the capability of our method to compute the PSF accurately from multiple degraded images. First, six different images were blurred with the same 5 × 5 pixel PSF (see Fig. 1). The blurred images are directly below their corresponding original images in Fig. 1. The boundaries of the input images were extended by padding with a constant value so that the boundary condition of Sec. 2.2 was satisfied. Initial parameters were set as follows: h^(0) = delta function; m_h × n_h = 5 × 5; λ_h = 1000; λ_u = 0.01; μ = 10.

Fig. 1

(a) Six original images, each of size 5 × 5 pixels. (b) A 5 × 5 pixel point spread function (PSF). (c) Six degraded images, each of size 9 × 9 pixels.

OE_52_3_033602_f001.png

To quantitatively measure the accuracy of the estimated PSFs, we use the mean squared error (MSE), since the real PSF is known. The estimation results when P different degraded images were involved are listed in Table 1. As we can see, the MSEs are below 1% when more than three degraded images are used. A similar simulation was then conducted with Gaussian noise added to the blurred images in Fig. 1(c); the results are listed in Table 2. For heavily noisy data (SNR = 10, 20 dB), estimation accuracy is significantly degraded by the noise, especially when using just two images from Fig. 1(c). However, practical spaceborne imagers usually provide data with higher SNRs. In addition, this simulation demonstrates the importance of the total number of subimages P for better estimation accuracy.
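The two quantities used in this section can be computed as below. The exact MSE normalization in the paper is not stated, so the relative form (error energy over true-PSF energy, in percent) and the SNR definition are assumptions:

```python
# MSE between an estimated and a true PSF, and Gaussian noise scaled to
# a target SNR in dB (both definitions are assumptions for illustration).
import numpy as np

def psf_mse_percent(h_est, h_true):
    return 100.0 * np.sum((h_est - h_true) ** 2) / np.sum(h_true ** 2)

def add_noise_snr(img, snr_db, rng=np.random.default_rng(0)):
    signal_power = np.mean(img ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return img + rng.standard_normal(img.shape) * np.sqrt(noise_power)

h_true = np.ones((5, 5)) / 25.0        # flat 5x5 PSF as a toy reference
h_est = h_true + 0.004                 # uniform estimation error
print(round(psf_mse_percent(h_est, h_true), 4))  # 1.0
```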

Table 1

MSEs of estimated PSFs in the noise-free case.

P        | 2      | 3      | 4      | 5      | 6
MSE (%)  | 2.2076 | 1.2887 | 0.6298 | 0.5853 | 0.4722

Table 2

MSEs of estimated PSFs in presence of different levels of noise.

         | P = 2                           | P = 6
SNR (dB) | 10      | 20      | 30     | 40     | 10     | 20     | 30     | 40
MSE (%)  | 14.3232 | 12.5709 | 5.2611 | 2.5556 | 9.2063 | 6.6830 | 1.9772 | 0.6929

4. Application

4.1. On-Orbit Evaluation of PSF

In this section, the proposed method is tested using data from the panchromatic band of the China YG-8 spaceborne camera. As discussed earlier, conditional subimages were extracted from the captured image, as shown in Fig. 2. Based on our experiments, the needed subimages are mostly located in lakes, seas, and broad roads.

Fig. 2

(a) A reduced portion of the original remote sensing image, with the central locations of subimages marked by squares. (b) Eight 9 × 9 pixel subimages. (The subimages are enlarged and enhanced for visibility.)

OE_52_3_033602_f002.png

To achieve a more accurate estimation of the PSF, we used all eight subimages in Fig. 2(b) to carry out the blind deconvolution method. Initial parameters were set the same as in the simulation (Sec. 3), except for the PSF size m_h × n_h = 7 × 7. The 2-D PSF result is shown in Fig. 3.

Fig. 3

Normalized 2-D PSF.

OE_52_3_033602_f003.png

In addition, to verify the correctness of the estimated results, we compared our method with the slanted-edge method.9 The knife-edge method can only obtain the line spread function (LSF) and 1-D MTF directly, so we computed the integrals of the 2-D PSF along both directions, as well as the corresponding MTF curves. The results are shown in Figs. 4 and 5, respectively.
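The reduction from a 2-D PSF to LSFs and MTFs can be sketched as follows: the LSF along one direction is the integral of the PSF over the other direction, and the MTF is the magnitude of the LSF's Fourier transform normalized at zero frequency. The Gaussian PSF here is an illustrative stand-in for the estimated one:

```python
# Derive LSFs and an MTF from a 2-D PSF for slanted-edge comparison.
import numpy as np

x = np.arange(7) - 3
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
psf /= psf.sum()                       # normalized 2-D PSF (illustrative)

lsf_x = psf.sum(axis=0)                # integrate over rows -> LSF in x
lsf_y = psf.sum(axis=1)                # integrate over cols -> LSF in y

mtf_x = np.abs(np.fft.fft(lsf_x))
mtf_x /= mtf_x[0]                      # normalize so MTF(0) = 1

print(round(lsf_x.sum(), 6), round(mtf_x[0], 6))  # 1.0 1.0
```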

Fig. 4

Comparison of line spread functions (LSFs) of our method and slanted-edge method.

OE_52_3_033602_f004.png

Fig. 5

Comparison of modulation transfer functions (MTFs) of our method and slanted-edge method.

OE_52_3_033602_f005.png

4.2. Image Restoration

To improve the quality of the observed image, the estimated PSF is applied to restore the degraded image. Because the PSF in Eq. (1) is now known, we can adopt nonblind restoration methods such as Wiener filtering. If the estimated PSF were inaccurate, applying it in Wiener filtering would introduce severe ringing artifacts and degrade image quality. Two cropped images were restored using the estimated PSF, as shown in Fig. 6. The restored images show improved visual quality, with no obvious ringing.
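A minimal frequency-domain Wiener filter with a known PSF looks like this. The noise-to-signal constant K is a tuning parameter, and the box-blur PSF and circular (FFT-based) blur model are assumptions for the demo:

```python
# Nonblind Wiener restoration: u_hat = IFFT( conj(H) / (|H|^2 + K) * Z ).
import numpy as np

def wiener_restore(z, h, K=0.01):
    """Restore z = h * u + noise given the PSF h (circular convolution)."""
    H = np.fft.fft2(h, s=z.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + K)   # Wiener transfer function
    return np.real(np.fft.ifft2(W * np.fft.fft2(z)))

rng = np.random.default_rng(3)
u = rng.random((32, 32))                    # "original scene"
h = np.ones((3, 3)) / 9.0                   # box blur as a stand-in PSF

# Blur circularly so the FFT model is exact for this demo.
z = np.real(np.fft.ifft2(np.fft.fft2(h, s=u.shape) * np.fft.fft2(u)))

restored = wiener_restore(z, h, K=1e-4)
err_before = np.linalg.norm(z - u)
err_after = np.linalg.norm(restored - u)
print(err_after < err_before)  # True
```

With a small K the filter nearly inverts the blur at well-conditioned frequencies while damping those where |H| is small, which is what suppresses the ringing an exact inverse would produce.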

Fig. 6

Examples of degraded images [(a) and (c)] and restored images [(b) and (d)].

OE_52_3_033602_f006.png

5. Conclusion

Our proposed method uses multiple opportunistic point-like sources to measure the on-orbit PSF. The main advantage of this new method is its independence from artificial ground targets. Simulation results show that it provides a relatively accurate result in the presence of moderate noise (SNR > 20 dB). Moreover, the PSF of a satellite panchromatic band was estimated, and the derived LSF is close to the result of the slanted-edge method. In addition, image quality was improved by Wiener filtering with the estimated PSF. Because isolated point-like sources against a relatively uniform background can be found in parking lots, roadways, seas, rivers, lakes, deserts, farmland, etc., the proposed method can be applied to spaceborne sensors as long as such objects are available.

References

1. S. K. Park, R. Schowengerdt, and M.-A. Kaczynski, “Modulation-transfer-function analysis for sampled image systems,” Appl. Opt. 23(15), 2572–2582 (1984). http://dx.doi.org/10.1364/AO.23.002572

2. G. C. Holst, “Imaging system fundamentals,” Opt. Eng. 50(5), 052601 (2011). http://dx.doi.org/10.1117/1.3570681

3. P. L. Smith, “New technique for estimating the MTF of an imaging system from its edge response,” Appl. Opt. 11(6), 1424–1425 (1972). http://dx.doi.org/10.1364/AO.11.001424

4. P. W. Nugent et al., “Measuring the modulation transfer function of an imaging spectrometer with rooflines of opportunity,” Opt. Eng. 49(10), 103201 (2010). http://dx.doi.org/10.1117/1.3497051

5. R. Lupton et al., “The SDSS imaging pipelines,” in Astronomical Data Analysis Software and Systems X, ASP Conf. Ser., Vol. 238, Astron. Soc. Pac., San Francisco (2001).

6. D. Leger, J. Duffaut, and F. Robinet, “MTF measurement using spotlight,” in Proc. Geosci. Remote Sens. Symp., Pasadena, California, Vol. 4, pp. 2010–2012 (1994).

7. J.-M. Delvit et al., “Modulation transfer function estimation from nonspecific images,” Opt. Eng. 43(6), 1355–1365 (2004). http://dx.doi.org/10.1117/1.1724838

8. F. Šroubek and P. Milanfar, “Robust multichannel blind deconvolution via fast alternating minimization,” IEEE Trans. Image Process. 21(4), 1687–1700 (2012). http://dx.doi.org/10.1109/TIP.2011.2175740

9. J. T. Olson, R. L. Espinola, and E. L. Jacobs, “Comparison of tilted slit and tilted edge superresolution modulation transfer function techniques,” Opt. Eng. 46(1), 016403 (2007). http://dx.doi.org/10.1117/1.2430503

Biography

OE_52_3_033602_d001.png

Lingling Guo obtained her bachelor’s degree in electronic engineering from University of Science and Technology of China in 2011. Then she attended Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, for her MSc degree. She has been involved in the field of quality assessment and image restoration of Earth observation satellites, especially in the measurement of modulation transfer function of satellite imaging systems and the image restoration and enhancement based on the estimated PSF.

OE_52_3_033602_d002.png

Zepeng Wu obtained his bachelor’s degree in electronic engineering & information science from University of Science and Technology of China in 2011. He attended Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, for his MSc degree after that. He has been working on image processing and analysis since 2009. His current research involves system design for image processing based on FPGA and real-time signal processing algorithm design.

OE_52_3_033602_d003.png

Liguo Zhang is a researcher at Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, and has been involved in the research of space optical remote sensors since 2000. He is in charge of the thermal control system design and image quality enhancement for optical remote sensing images.

OE_52_3_033602_d004.png

Jianyue Ren is a researcher at Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences. He has been engaged in research on space optical remote sensors since 1996 and has been in charge of many major state scientific research projects. His work won first prize for army scientific and technological progress in 2007 and second prize for national science and technology invention in 2008. He and his team have contributed significantly to the development of Earth observation satellites.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Lingling Guo, Zepeng Wu, Liguo Zhang, Jianyue Ren, "New approach to measure the on-orbit point spread function for spaceborne imagers," Optical Engineering 52(3), 033602 (4 March 2013). https://doi.org/10.1117/1.OE.52.3.033602