Since noise can undermine the effectiveness of information extracted from hyperspectral imagery, noise reduction is a prerequisite for many classification-based applications of hyperspectral imagery. In this paper, an effective three-dimensional total variation (TV) denoising algorithm for hyperspectral imagery is introduced. First, a three-dimensional objective function for the TV denoising model is derived from the classical two-dimensional TV algorithms. Considering that the noise of hyperspectral imagery shows different characteristics in the spatial and spectral domains, the objective function is further improved by splitting it into a spatial term and a spectral term, each with a separate regularization parameter that adjusts the trade-off between the two. The improved objective function is then discretized by approximating gradients with local differences, majorized by a quadratic convex function, and finally solved by a majorization-minimization based iterative algorithm. The performance of the new algorithm is evaluated on a set of Hyperion images acquired over a desert-dominated area in 2007. Experimental results show that, with properly chosen parameter values, the new approach removes the indentations and restores the spectral absorption peaks more effectively than the minimum noise fraction (MNF) method, while achieving a similar improvement in signal-to-noise ratio.
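The procedure described above (a 3-D TV objective with separate spatial and spectral regularization, gradients approximated by local differences, solved by majorization-minimization) can be sketched as follows. This is a minimal illustration of the technique, not the authors' implementation; `lam_spatial`, `lam_spectral`, `eps`, and the single-Jacobi-step inner solve are illustrative choices.

```python
import numpy as np

def tv3d_denoise(y, lam_spatial=0.5, lam_spectral=1.0, n_iter=30, eps=0.1):
    """Approximately minimize ||x - y||^2 + lam_spatial*TV_xy(x) + lam_spectral*TV_band(x).

    Each iteration majorizes |d| by (d^2/|d_k| + |d_k|)/2 around the current
    local differences d_k, then takes one Jacobi step of the resulting
    weighted quadratic problem (an MM / lagged-diffusivity style update).
    """
    x = y.astype(float).copy()
    axes_lams = [(0, lam_spatial), (1, lam_spatial), (2, lam_spectral)]
    for _ in range(n_iter):
        num = y.astype(float).copy()      # data-fidelity term
        den = np.ones_like(x)
        for axis, lam in axes_lams:
            d = np.diff(x, axis=axis)     # local differences (discretized gradient)
            w = lam / (np.abs(d) + eps)   # MM weights; eps avoids division by zero
            lo = [slice(None)] * 3; hi = [slice(None)] * 3
            lo[axis] = slice(None, -1); hi[axis] = slice(1, None)
            lo, hi = tuple(lo), tuple(hi)
            # accumulate weighted neighbor pulls (Jacobi: reads pre-update x)
            num[lo] += w * x[hi]; den[lo] += w
            num[hi] += w * x[lo]; den[hi] += w
        x = num / den
    return x
```

Larger `lam_spectral` than `lam_spatial` smooths more aggressively along the band axis, mirroring the abstract's separate treatment of spatial and spectral noise.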
Driven by increasingly in-depth scientific research, remote sensing images often contain huge amounts of information; they are therefore large and rich in multi-dimensional detail. To obtain ground information from the images more accurately, remote sensing image processing typically involves several steps aimed at better image restoration and at refining the image information.
Processing images of this type frequently runs into difficulties such as slow computation and huge resource consumption. For this reason, parallel computing is essential in remote sensing image processing. The parallel computing method presented in this paper does not require rewriting the original algorithm. Under a distributed framework, the method efficiently allocates the original algorithm across the multiple computing cores of the processing computer. Because the method fully uses the available computing resources, the computing time is reduced roughly linearly with the number of computing threads. Moreover, the method guarantees the integrity of the remote sensing image data.
To validate the feasibility of the method, this paper applies the parallel computing method to a radiation simulation in remote sensing image processing. We conducted several experiments and collected statistical results. We integrated the parallel computing into the core of the original algorithm: a large-kernel convolution over a wide image. The experimental results showed that computing efficiency improved linearly; the number of computing cores was proportional to the rate at which computing time was reduced. At the same time, the computing results were identical to the original results.
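The core operation that was parallelized, a large "valid" convolution split across computing cores, can be illustrated with a row-block decomposition; the halo overlap makes the parallel result identical to the serial one, matching the abstract's data-integrity claim. The function names and the thread-based pool are illustrative only (a real deployment would distribute blocks across processes or machines).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_valid(img, kernel):
    # direct 'valid' 2-D convolution via sliding windows (kernel flipped)
    win = sliding_window_view(img, kernel.shape)
    return np.einsum('ijkl,kl->ij', win, kernel[::-1, ::-1])

def conv2d_parallel(img, kernel, n_workers=4):
    kh = kernel.shape[0]
    out_rows = img.shape[0] - kh + 1
    bounds = np.linspace(0, out_rows, n_workers + 1, dtype=int)
    # each block carries kh-1 extra rows (the "halo") so the concatenated
    # result is bit-identical to the serial computation
    blocks = [img[lo:hi + kh - 1]
              for lo, hi in zip(bounds[:-1], bounds[1:]) if hi > lo]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(lambda b: conv2d_valid(b, kernel), blocks))
    return np.vstack(parts)
```

Because each worker receives an independent block plus its halo, no worker's output depends on another's, which is what allows near-linear scaling with the number of workers.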
This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. The framework uses the strategy pattern to adapt multiple algorithms, and thus helps decrease the simulation time at low expense.
Imaging simulation for a TDI-CCD mounted on a satellite comprises four processes: 1) atmosphere-induced degradation, 2) optical-system-induced degradation, 3) TDI-CCD electronic-system-induced degradation and re-sampling, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution, and Lagrange interpolation, which require powerful CPUs. Even with an Intel Xeon X5550 processor, the regular serial processing method takes more than 30 hours for a simulation whose result image size is 1500 × 1462. A literature study found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation based on WCF; it uses a client/server (C/S) architecture and invokes the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to the free computing capacity, ultimately delivering HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time correspondingly. In conclusion, this framework can provide virtually unlimited computing capacity, provided that the network and the task-management server can support it. It is a new HPC solution for TDI-CCD imaging simulation and similar applications.
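The "strategy pattern to adapt multiple algorithms" idea can be sketched as interchangeable degradation stages behind a single interface, so that the pipeline configuration, not the pipeline code, decides which algorithm runs at each step. The class names and placeholder arithmetic below are illustrative; the paper's actual implementation is WCF-based and is not reproduced here.

```python
from abc import ABC, abstractmethod

class DegradationStage(ABC):
    """Strategy interface: each degradation process is interchangeable."""
    @abstractmethod
    def apply(self, image):
        ...

class AtmosphereDegradation(DegradationStage):
    def apply(self, image):
        # placeholder attenuation standing in for an atmosphere model
        return [v * 0.9 for v in image]

class OpticsDegradation(DegradationStage):
    def apply(self, image):
        # placeholder 1-D blur standing in for an optical-system PSF
        n = len(image)
        return [(image[max(i - 1, 0)] + image[i] + image[min(i + 1, n - 1)]) / 3
                for i in range(n)]

class SimulationPipeline:
    """Runs a configurable ordering of strategies over the input image."""
    def __init__(self, stages):
        self.stages = stages
    def run(self, image):
        for stage in self.stages:
            image = stage.apply(image)
        return image
```

Swapping in a different algorithm for any process then only requires registering a new `DegradationStage` subclass, which is what makes the framework configurable.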
The paper discusses how vibrations of different frequencies influence image quality and limit resolution. The main model of image quality degradation is based on the physical optical imaging process. This model establishes an accurate relationship between image space and object space, and reproduces the entire imaging process through the joint time-space relation. In the experiment, fixed-amplitude harmonic vibrations with frequencies between 1 Hz and 2000 Hz were applied to a 2-dimensional unit impulse function image and a standard knife-edge image. The degraded images were then analyzed to measure the PSF (Point Spread Function) and MTF (Modulation Transfer Function). As a result, with the increase of the ratio of <i>T<sub>e</sub></i> to <i>T<sub>0</sub></i>, the PSF and MTF become convergent. When the ratio of <i>T<sub>e</sub></i> to <i>T<sub>0</sub></i> is greater than 10, the maximum relative error of the PSFs is less than 5%. This experiment reveals that very high frequency vibrations degrade the image in almost the same way as relatively lower frequency vibrations, which means that the frequency is not as important as the vibration amplitude.
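The convergence finding can be reproduced numerically for 1-D sinusoidal motion: the PSF is the dwell-time histogram of the motion over the exposure, and once the exposure covers many vibration periods the histogram converges to the same arcsine-shaped profile regardless of frequency. This sketch assumes <i>T<sub>e</sub></i>/<i>T<sub>0</sub></i> denotes exposure time over vibration period; all function names and parameters are illustrative, not the paper's setup.

```python
import numpy as np

def vibration_psf(amp, ratio, n_bins=41, n_samples=20000):
    """Histogram PSF of 1-D sinusoidal image motion.

    amp   : vibration amplitude (pixels)
    ratio : exposure time over vibration period (assumed T_e / T_0)
    The PSF is the dwell-time distribution of x(t) = amp*sin(2*pi*t/T_0)
    over the exposure, estimated by sampling t uniformly.
    """
    t = np.linspace(0.0, ratio, n_samples)    # time in units of T_0
    x = amp * np.sin(2.0 * np.pi * t)
    hist, _ = np.histogram(x, bins=n_bins, range=(-amp, amp))
    return hist / hist.sum()                  # normalize to unit mass

def mtf(psf):
    # MTF as the magnitude of the PSF's Fourier transform, normalized so MTF(0)=1
    return np.abs(np.fft.rfft(psf)) / psf.sum()
```

Comparing `vibration_psf(1.0, 10.0)` with `vibration_psf(1.0, 100.0)` shows nearly identical profiles, consistent with the conclusion that amplitude, not frequency, dominates once the exposure spans many periods.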