**20 dB**, this method can provide a relatively accurate PSF result. Moreover, the estimated PSF can be applied to image restoration using Wiener filtering or other nonblind restoration methods.

## 1. Introduction

Spatial characteristics of remote sensing images are generally described by the point spread function (PSF) for sampled imaging systems. The PSF describes the blur produced in the image by a point in the scene. Unfortunately, a sampled-data system is not shift-invariant on a microscale. Park et al.^{1} pointed out that the system PSF exhibits a spurious, location-dependent phase, which makes PSF measurement very complicated. Nonetheless, a sampled imaging system can be considered quasilinear over a restricted operating region^{2} and described by a degradation model.

Although the PSFs of spaceborne imagers are measured in the laboratory before launch, they may change owing to vibration during launch or aging of materials over time. For this reason, on-orbit PSF measurement is necessary to monitor the actual performance of spaceborne imagers and is important for subsequent image restoration.

The majority of existing PSF estimation methods are developed from the theoretical method proposed by Smith,^{3} who aimed at calculating the modulation transfer function (MTF, the magnitude of the Fourier transform of the PSF) from its knife-edge response. These methods are frequently used because knife-edge or step profiles can easily be found in observed images, such as rooflines, farmlands, roadways, and tarps laid on the ground ahead of time.^{4} In astronomical applications, reasonably bright, isolated stars can serve as point sources from which to derive the PSF.^{5} Similarly, for earth-observing satellites, the PSF can also be obtained by imaging a point source lying on the ground.^{6} In this case, parametric models or multiple point sources are needed to rebuild the PSF. In addition, methods using nonspecific views have recently received more attention. Delvit et al.^{7} proposed a univariate approach to assess the MTF using nonspecific scenes.

In this paper, we present a new approach to measure the on-orbit PSF using multiple opportunistic point-like spots. Unlike point-source methods that involve single or multiple artificial spotlights,^{6} this method makes use of the cross-relation among subimages, so we neither have to lay artificial targets ahead of time nor put effort into collecting information about the real feature in each subimage. First, multiple point-like subimages satisfying certain conditions are extracted from the observed remote sensing image. Once the subimages are detected, the PSF can be obtained using blind deconvolution. As a result, we obtain a two-dimensional (2-D) PSF directly. In the presence of noise, the diversity gain resulting from the inherent diversity of natural subimages provides a significant improvement in the accuracy of the PSF measurement.

The rest of the paper is organized as follows. Section 2 introduces the detailed scheme of our approach. In Sec. 3, we examine the proposed method for its ability to assess PSF and the impact of additive noise on the accuracy. In Sec. 4, we illustrate that this method can be successfully applied to spaceborne imagers for the purpose of image restoration. The last section is for the conclusion and final remarks.

## 2. Methodology

During the acquisition process, an optical imaging system causes image degradation through the cumulative effects of the instrumental optics (diffraction, aberrations, focusing error) and satellite movement. Typically, the degradation can be modeled by a linear system characterized by its PSF.^{2}^{,}^{8} The relation between the observed image $z$ and the original scene $u$ is described by

## (1)

$$z=h*u+n,$$

where $h$ is the PSF and $n$ denotes additive noise. When $P$ point-like subimages ${z}_{p}$ are extracted from the observed image, each obeys the same model:

## (2)

$${z}_{p}=h*{u}_{p}+{n}_{p},\phantom{\rule[-0.0ex]{1em}{0.0ex}}p=1,2,\dots ,P.$$

Over a restricted operating region,^{2} the system can be considered globally shift-invariant, so that $h$ is taken to be the same for every subimage. When higher-order aberrations such as comatic aberration cannot be ignored in the system, the assumption of shift-invariance is no longer valid; imposing it anyway yields an averaged PSF.
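For illustration, the degradation model can be simulated by convolving a scene with a PSF and adding noise. The Gaussian-shaped PSF, random scene, and noise level in the following sketch are illustrative assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Illustrative 5x5 Gaussian-shaped PSF, normalized to unit sum
x = np.arange(-2, 3)
g = np.exp(-(x ** 2) / 2.0)
h = np.outer(g, g)
h /= h.sum()

u = rng.uniform(0, 1, (64, 64))            # stand-in for the original scene
z = convolve2d(u, h, mode="full")          # blurred observation (full convolution)
z += 0.001 * rng.standard_normal(z.shape)  # additive noise

print(z.shape)  # (68, 68): full convolution enlarges the support
```

Note that the full convolution enlarges the support of the observation, a point that becomes important in the preprocessing of Sec. 2.2.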

### 2.1. Blind Deconvolution Method

To solve Eq. (2), i.e., determine the PSF estimate $\widehat{h}$, we adopt the classic approach of minimizing a regularized energy function. The energy function takes the following form:

## (3)

$$E(h,{u}_{1},{u}_{2},\cdots ,{u}_{P})=\frac{1}{2}\sum _{p=1}^{P}{\Vert h*{u}_{p}-{z}_{p}\Vert}^{2}+{\lambda}_{u}\sum _{p=1}^{P}Q({u}_{p})+{\lambda}_{h}Q(h)+\mu R({u}_{1},{u}_{2},\cdots ,{u}_{P}).$$

The first term in $E(h,{u}_{1},{u}_{2},\cdots ,{u}_{P})$ measures the fidelity to the data. The remaining three are regularization terms with positive weighting scalars ${\lambda}_{h}$, ${\lambda}_{u}$, and $\mu $. Regularization $Q({u}_{p})$ [and $Q(h)$] is a smoothing term of the total variation form

## (4)

$$Q({u}_{p})=\int |\nabla {u}_{p}(\mathbf{x})|\mathrm{d}\mathbf{x},$$

where $|\nabla {u}_{p}|$ is the magnitude of the subimage gradient and $\mathbf{x}=(x,y)$ denotes location in the domain of the image-intensity functions. $Q(\cdot)$ can also take other forms, such as Tikhonov regularization or the hypersurface minimal function.^{8}
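Discretely, the total variation can be approximated by summing gradient magnitudes over the image. A minimal sketch using forward differences (the function name is ours):

```python
import numpy as np

def tv_regularizer(u):
    """Discrete total variation: sum of gradient magnitudes over the image interior."""
    gx = np.diff(u, axis=1)[:-1, :]   # horizontal forward differences
    gy = np.diff(u, axis=0)[:, :-1]   # vertical forward differences
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))

# A constant image has zero total variation; a unit step edge has TV equal to
# the edge height times the number of rows the edge crosses.
flat = np.ones((8, 8))
step = np.zeros((8, 8))
step[:, 4:] = 1.0
print(tv_regularizer(flat), tv_regularizer(step))  # 0.0 7.0
```

Total variation penalizes oscillation (noise) while preserving sharp edges, which is why it is a common choice for both $Q({u}_{p})$ and $Q(h)$.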

Based on the commutativity of convolution operator, we observe that

## (5)

$${u}_{i}*{z}_{j}={u}_{j}*{z}_{i}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.17em}}1\le i,j\le P,$$

which motivates the cross-relation regularization term

## (6)

$$R({u}_{1},{u}_{2},\cdots ,{u}_{P})=\frac{1}{2}\sum _{1\le i<j\le P}{\Vert {z}_{i}*{u}_{j}-{z}_{j}*{u}_{i}\Vert}^{2}.$$

As a functional of several variables, $E(h,{u}_{1},{u}_{2},\dots ,{u}_{P})$ is not convex everywhere and admits infinitely many solutions. The mean-value ambiguity is removed by imposing a constraint on $h$. It takes the following form:
## (7)

$$\sum _{\mathbf{x}}h(\mathbf{x})=1,\phantom{\rule[-0.0ex]{1em}{0.0ex}}h(\mathbf{x})\ge 0.$$

On the other hand, an appropriate setting of the PSF support ${m}_{h}\times {n}_{h}$ can alleviate the shift ambiguity. For fixed $h$ or ${u}_{p}$ ($p=1,2,\dots ,P$), $E(h,{u}_{1},{u}_{2},\cdots ,{u}_{P})$ is convex. To find a minimizer of the energy function in Eq. (3), we perform alternating minimizations (AM) of $E$ over $h$ and ${u}_{p}$. It alternates between the following two steps:^{8}

## (8)

$$\{{u}_{1}^{(n)},{u}_{2}^{(n)},\cdots ,{u}_{P}^{(n)}\}=\underset{{u}_{1},{u}_{2},\cdots ,{u}_{P}}{\mathrm{argmin}}\text{\hspace{0.17em}}E({h}^{(n-1)},{u}_{1},{u}_{2},\cdots ,{u}_{P}),$$

## (9)

$${h}^{(n)}=\underset{h}{\mathrm{argmin}}\text{\hspace{0.17em}}E(h,{u}_{1}^{(n)},{u}_{2}^{(n)},\cdots ,{u}_{P}^{(n)}).$$

In our implementation, the MATLAB *fmincon* function is used to solve the constrained problem in Eq. (9). An appropriate initial setting of $h$ may help the iterative algorithm converge to the global minimum quickly.
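Each inner minimization is convex. For instance, with the subimages held fixed and the regularization terms omitted for brevity, the $h$-update of Eq. (9) reduces to a linear least-squares problem in the PSF coefficients. A minimal NumPy sketch, assuming for illustration only that the sharp subimages are known and the data are noise-free:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)

mh = 5
h_true = rng.uniform(0, 1, (mh, mh))
h_true /= h_true.sum()                       # unit-sum PSF
us = [rng.uniform(0, 1, (24, 24)) for _ in range(3)]
zs = [fftconvolve(u, h_true, mode="full") for u in us]

# One column per PSF coefficient: column k holds the (full) convolution of
# every subimage with the k-th unit-impulse kernel, stacked into one vector.
cols = []
for k in range(mh * mh):
    e = np.zeros((mh, mh))
    e.flat[k] = 1.0
    cols.append(np.concatenate(
        [fftconvolve(u, e, mode="full").ravel() for u in us]))
A = np.stack(cols, axis=1)
b = np.concatenate([z.ravel() for z in zs])

h_hat = np.linalg.lstsq(A, b, rcond=None)[0].reshape(mh, mh)
print(np.abs(h_hat - h_true).max())  # near machine precision (noise-free data)
```

In the full AM scheme, this step alternates with the ${u}_{p}$-update of Eq. (8), and the regularization terms and the constraint on $h$ stabilize the estimate when noise is present.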

### 2.2. Data Preprocessing

Before the blind deconvolution method described above can be applied, the subimages must first be extracted from the captured image. Because of the cropping at the boundaries of the subimages, the convolution here is a partial convolution. However, Eq. (5) holds only when the subimages ${z}_{p}$ are of full size, i.e., when the convolutions in Eq. (1) are full convolutions. The discrepancy between the full and partial convolution operators therefore needs to be eliminated ahead of time.
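The distinction can be seen in a short 1-D example: once the input signal is extended with zeros, the partial ("valid") convolution coincides with the full convolution:

```python
import numpy as np

b = np.array([1.0, 2.0, 3.0])            # kernel, M = 3
a = np.array([4.0, 5.0, 6.0, 7.0])       # input signal, N = 4

full = np.convolve(a, b, mode="full")    # length M + N - 1 = 6

# Zero-extend the input by M - 1 samples on each side, then take the
# partial ("valid") convolution: it reproduces the full convolution exactly.
a_ext = np.concatenate([np.zeros(2), a, np.zeros(2)])
partial = np.convolve(a_ext, b, mode="valid")

print(np.array_equal(full, partial))     # True
```

The formal statement of this equivalence is developed below.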

For notational simplicity, we use one-dimensional signals to describe the difference between full and partial convolutions. For the same convolution result $\mathbf{d}={[{d}_{0},{d}_{1},\dots ,{d}_{M+N-2}]}^{T}$ and the same convolution kernel $\mathbf{b}={[{b}_{0},{b}_{1},\dots ,{b}_{M-1}]}^{T}$, full convolution takes the form

## (10)

$$\left[\begin{array}{c}{d}_{0}\\ {d}_{1}\\ \vdots \\ {d}_{M+N-2}\end{array}\right]={(\mathbf{a}*\mathbf{b})}_{\text{full}}=\left[\begin{array}{cccc}{b}_{0}& & & \\ {b}_{1}& {b}_{0}& & \\ \vdots & \vdots & \ddots & \\ {b}_{M-1}& {b}_{M-2}& \cdots & {b}_{0}\\ & {b}_{M-1}& \vdots & {b}_{1}\\ & & \ddots & \vdots \\ & & & {b}_{M-1}\end{array}\right]\left[\begin{array}{c}{a}_{0}\\ {a}_{1}\\ \vdots \\ {a}_{N-1}\end{array}\right],$$

and partial convolution takes the form

## (11)

$$\left[\begin{array}{c}{d}_{0}\\ {d}_{1}\\ \vdots \\ {d}_{M+N-2}\end{array}\right]={(\mathbf{a}*\mathbf{b})}_{\text{partial}}=\left[\begin{array}{ccccccc}{b}_{M-1}& {b}_{M-2}& \cdots & {b}_{0}& & & \\ & {b}_{M-1}& {b}_{M-2}& \cdots & {b}_{0}& & \\ & & \ddots & \vdots & \vdots & \ddots & \\ & & & {b}_{M-1}& {b}_{M-2}& \cdots & {b}_{0}\end{array}\right]\left[\begin{array}{c}\begin{array}{c}{a}_{-M+1}\\ \vdots \\ {a}_{-1}\\ {a}_{0}\end{array}\\ {a}_{1}\\ \vdots \\ \begin{array}{c}{a}_{N-1}\\ {a}_{N}\\ \vdots \\ {a}_{M+N-2}\end{array}\end{array}\right].$$

Obviously, they involve different lengths of the input signal $\mathbf{a}$. Nevertheless, the partial and full convolutions are the same if the input signal $\mathbf{a}$ satisfies the following condition:

## (12)

$${[\begin{array}{cccccccccc}{a}_{-M+1}& \cdots & {a}_{-1}& {a}_{0}& {a}_{1}& \cdots & {a}_{N-1}& {a}_{N}& \cdots & {a}_{M+N-2}\end{array}]}^{T}={[\begin{array}{cccccccccc}0& \cdots & 0& {a}_{0}& {a}_{1}& \cdots & {a}_{N-1}& 0& \cdots & 0\end{array}]}^{T}.$$

The limitation is not on ${[{a}_{0},{a}_{1},\dots ,{a}_{N-1}]}^{T}$ but on the elements outside the subscript range $0\sim N-1$. Similar to Eq. (12), the difference between 2-D partial and full convolutions can be eliminated by requiring the pixels outside the border to be zeros. Unfortunately, pixels in natural scenes and captured images are seldom zeros. (For 10-bit remotely sensed data [range: $0\sim 1023$], the minimal value can be about 200.) Fortunately, experience shows that it is much easier to find point-like spots with a uniform background in remote sensing images, which have the form

## (13)

$${u}_{p}^{\text{full}}=\left[\begin{array}{ccc}{c}_{p}^{D}& {c}_{p}^{V}& {c}_{p}^{D}\\ {c}_{p}^{H}& {u}_{p}& {c}_{p}^{H}\\ {c}_{p}^{D}& {c}_{p}^{V}& {c}_{p}^{D}\end{array}\right]=\left[\begin{array}{ccc}\mathbf{0}& \mathbf{0}& \mathbf{0}\\ \mathbf{0}& {u}_{p}-{c}_{p}^{M}& \mathbf{0}\\ \mathbf{0}& \mathbf{0}& \mathbf{0}\end{array}\right]+\left[\begin{array}{ccc}{c}_{p}& \cdots & {c}_{p}\\ \vdots & \ddots & \vdots \\ {c}_{p}& \cdots & {c}_{p}\end{array}\right],$$

where the blocks ${c}_{p}^{H}$, ${c}_{p}^{V}$, and ${c}_{p}^{D}$ denote the uniform background of value ${c}_{p}$ surrounding the subimage. Subtracting the constant background therefore yields the zero borders required above.

## 3. Simulation

The simulation demonstrates the capability of our method to compute PSF accurately from multiple degraded images. First, six different images were blurred with the same $5\times 5\text{\hspace{0.17em}}\text{pixel}$ PSF (see Fig. 1). The blurred images are directly below their corresponding original images in Fig. 1. The boundaries of the input images were extended by padding with a constant value so that the boundary condition mentioned in Sec. 2.2 could be satisfied. Initial parameters were set as follows: ${h}^{(0)}=\text{delta function}$; ${m}_{h}\times {n}_{h}=5\times 5$; ${\lambda}_{h}=1000$; ${\lambda}_{u}=0.01$; $\mu =10$.

To quantitatively measure the accuracy of the estimated PSFs, we use the mean squared error (MSE), since the real PSF is known here. The estimation results when $P$ different degraded images were involved are listed in Table 1. As we can see, the MSEs are $<1\%$ when more than three degraded images are utilized. A similar simulation experiment was conducted, but with Gaussian noise added to the blurred images in Fig. 1(c). The estimation results are listed in Table 2. For heavily noisy data ($\mathrm{SNR}=10$, 20 dB), estimation accuracy is significantly degraded by the noise, especially when using just two images from Fig. 1(c). However, practical spaceborne imagers usually provide data with higher SNRs. In addition, this simulation demonstrates the importance of the total number of subimages, i.e., $P$, for better estimation accuracy.
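As an illustration of the error metric, one possible normalization measures the squared error relative to the energy of the true PSF; the helper below and its flat test PSF are our own illustrative assumptions:

```python
import numpy as np

def psf_mse_percent(h_est, h_true):
    """Squared error between PSFs relative to the true PSF's energy, in percent
    (an assumed normalization, for illustration)."""
    return 100.0 * np.sum((h_est - h_true) ** 2) / np.sum(h_true ** 2)

h_true = np.full((5, 5), 1 / 25)         # flat reference PSF, unit sum
h_est = h_true + 0.01 * np.eye(5) / 5    # slightly perturbed estimate
print(psf_mse_percent(h_est, h_true))    # ≈ 0.05
```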

## Table 1

MSEs of estimated PSFs in the noise-free case.

|  | P=2 | P=3 | P=4 | P=5 | P=6 |
|---|---|---|---|---|---|
| MSE (%) | 2.2076 | 1.2887 | 0.6298 | 0.5853 | 0.4722 |

## Table 2

MSEs of estimated PSFs in the presence of different levels of noise.

| SNR (dB) | MSE (%), P=2 | MSE (%), P=6 |
|---|---|---|
| 10 | 14.3232 | 9.2063 |
| 20 | 12.5709 | 6.6830 |
| 30 | 5.2611 | 1.9772 |
| 40 | 2.5556 | 0.6929 |

## 4. Application

### 4.1. On-Orbit Evaluation of PSF

In this section, the proposed method is tested using data in the panchromatic band of the China YG-8 spaceborne camera. As discussed earlier, subimages satisfying the conditions of Sec. 2.2 were extracted from the captured image, as shown in Fig. 2. Based on our experiments, we find that the needed subimages are mostly located in lakes, seas, and broad roads.

To achieve a more accurate estimation of the PSF, we used all eight subimages in Fig. 2(b) to carry out the blind deconvolution. Initial parameters were set the same as in the simulation (Sec. 3), except for the PSF size ${m}_{h}\times {n}_{h}=7\times 7$. The 2-D PSF result is shown in Fig. 3.

In addition, to verify the correctness of the estimated results, we compared the results of our method with those of the slanted-edge method.^{9} The knife-edge method can only obtain the line spread function (LSF) and 1-D MTF directly. We accordingly integrated the 2-D PSF along each direction to obtain LSFs, as well as the MTF curves along both directions. The results are shown in Figs. 4 and 5, respectively.
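The LSF along each direction is obtained by integrating (summing) the 2-D PSF across the other axis, and the 1-D MTF is the magnitude of its Fourier transform normalized at zero frequency. A sketch, with a random 7×7 array standing in for the estimated PSF:

```python
import numpy as np

rng = np.random.default_rng(2)
psf = rng.uniform(0, 1, (7, 7))
psf /= psf.sum()                 # stand-in for the estimated 7x7 PSF (unit sum)

# LSF along each direction: integrate (sum) the 2-D PSF across the other axis
lsf_x = psf.sum(axis=0)
lsf_y = psf.sum(axis=1)

# 1-D MTF: magnitude of the Fourier transform of the LSF, normalized at DC
mtf_x = np.abs(np.fft.fft(lsf_x, n=64))
mtf_x /= mtf_x[0]

print(lsf_x.sum(), lsf_y.sum())  # both ≈ 1 for a unit-sum PSF
```

Because the LSF is nonnegative, the MTF attains its maximum of 1 at zero frequency.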

### 4.2. Image Restoration

To improve the quality of the observed image, the estimated PSF is applied to restore the degraded image. Because the PSF is now a known variable in Eq. (1), we can adopt nonblind restoration methods such as Wiener filtering. If the estimated PSF were inaccurate, applying it in Wiener filtering would introduce severe ringing artifacts and degrade image quality. Two truncated images were restored using the estimated PSF, as shown in Fig. 6. The restored images have noticeably better visual quality, and no obvious ringing is exhibited.
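Wiener filtering with an estimated PSF can be sketched in the frequency domain as follows; the constant noise-to-signal ratio `K`, the separable test blur, and the random scene are illustrative assumptions, and the blur is applied circularly so that it matches the FFT model exactly:

```python
import numpy as np

def psf_otf(h, shape):
    """Zero-pad the PSF to `shape` and circularly center it at the origin."""
    h_pad = np.zeros(shape)
    h_pad[:h.shape[0], :h.shape[1]] = h
    h_pad = np.roll(h_pad, (-(h.shape[0] // 2), -(h.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(h_pad)

def wiener_restore(z, h, K=1e-4):
    """Frequency-domain Wiener deconvolution with PSF h and constant NSR K."""
    H = psf_otf(h, z.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + K)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(z) * G))

rng = np.random.default_rng(3)
g = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
h = np.outer(g, g)
h /= h.sum()                                # assumed separable 5x5 blur kernel
u = rng.uniform(0, 1, (64, 64))             # stand-in scene
z = np.real(np.fft.ifft2(np.fft.fft2(u) * psf_otf(h, u.shape)))  # blurred image

u_hat = wiener_restore(z, h)
print(np.abs(u_hat - u).mean() < np.abs(z - u).mean())  # True: blur is reduced
```

The constant `K` trades off deblurring strength against noise amplification; larger values suppress ringing at the cost of residual blur.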

## 5. Conclusion

Our proposed method makes use of multiple opportunistic point-like sources to measure the on-orbit PSF. The main advantage of this new method is its independence from artificial ground targets. The simulation results have shown that it can provide a relatively accurate result in the presence of moderate noise ($\mathrm{SNR}>20\text{\hspace{0.17em}}\mathrm{dB}$). Moreover, the PSF of a satellite panchromatic band was estimated, and the derived LSF is reasonably close to the result of the slanted-edge method. In addition, the image quality has been improved by Wiener filtering with the estimated PSF. Because isolated point-like sources against a relatively uniform background can be found in parking lots, roadways, seas, rivers, lakes, desert scenes, farmland, etc., the proposed method can be successfully applied to spaceborne sensors as long as such objects are available.

## References

## Biography

**Lingling Guo** obtained her bachelor’s degree in electronic engineering from University of Science and Technology of China in 2011. Then she attended Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, for her MSc degree. She has been involved in the field of quality assessment and image restoration of Earth observation satellites, especially in the measurement of modulation transfer function of satellite imaging systems and the image restoration and enhancement based on the estimated PSF.

**Zepeng Wu** obtained his bachelor’s degree in electronic engineering & information science from University of Science and Technology of China in 2011. He attended Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, for his MSc degree after that. He has been working on image processing and analysis since 2009. His current research involves system design for image processing based on FPGA and real-time signal processing algorithm design.

**Liguo Zhang** is a researcher at Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, and has been involved in the research of space optical remote sensors since 2000. He is in charge of the thermal control system design and image quality enhancement for optical remote sensing images.

**Jianyue Ren** is a researcher at Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences. He has been engaged in research on space optical remote sensors since 1996 and has been in charge of many major state scientific research projects. His work won a first prize for army scientific and technological progress in 2007 and a second prize for national science and technology invention in 2008. He and his team have contributed significantly to the development of Earth observation satellites.