To prevent the halo artifacts produced by edge-preserving smoothing methods based on local filters, a guided-image filtering method that fuses multiple kernels is proposed. The method first computes the coefficients of several local kernels at each pixel and then linearly fuses these coefficients to obtain the final ones; the filtered image is generated from the fused linear coefficients. Experimental comparisons with existing methods, including the popular bilateral filter and guided filter, show that the proposed method not only yields images with better visual quality but also suppresses halo artifacts in applications such as detail enhancement, haze removal, and noise reduction.
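The per-pixel linear coefficients that the method fuses are those of the classic single-kernel guided filter; a minimal NumPy sketch of that building block follows (the multi-kernel fusion itself is the paper's contribution and is not reproduced here; function and parameter names are illustrative):

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, computed with an integral image."""
    xp = np.pad(x, r, mode="edge")
    S = np.zeros((xp.shape[0] + 1, xp.shape[1] + 1))
    S[1:, 1:] = xp.cumsum(0).cumsum(1)
    w = 2 * r + 1
    return (S[w:, w:] - S[:-w, w:] - S[w:, :-w] + S[:-w, :-w]) / (w * w)

def guided_filter(I, p, r=4, eps=1e-3):
    """Classic guided filter: per-pixel linear coefficients a, b from the
    local statistics of guide I and input p, then an output built from the
    box-averaged coefficients."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp
    var_I = box_mean(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)   # edge-aware slope
    b = mp - a * mI              # offset
    return box_mean(a, r) * I + box_mean(b, r)
```

A multi-kernel variant of the kind the abstract describes would compute `a` and `b` for several window radii and fuse those coefficient maps linearly before the final averaging step.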
Modern space telescopes demand very large apertures to achieve high resolution, and sparse aperture systems offer a new way to make such telescopes practicable. It is therefore important to simulate sparse aperture systems before they are applied to space observation. The multiple mirror telescope (MMT) is one type of sparse aperture system, and the Golay3 configuration is a good starting point. A method to simulate a Golay3 MMT is investigated. The fundamental principle of simulating optical surfaces with an optical design program is discussed, and it is proposed that the Golay3 MMT simulation can be accomplished by programming surface files through the program's interface. The structure of the Golay3 MMT, in which three sub-mirrors replace the monolithic spherical primary mirror, is analyzed, and the formulas determining the locations of the sub-mirrors on the spherical primary mirror are derived. The surface file representing the primary mirror is created by external programming, and the properties of rays passing through it are defined. The simulation procedure is described in detail, and a simulation example is given and evaluated. The results prove that the simulation method is reasonable and effective and provides a valuable reference for simulating sparse aperture systems with other structures.
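As a sketch of the sub-mirror placement the abstract derives, the three Golay3 sub-aperture centers can be spaced 120° apart on a ring and dropped onto the spherical primary using the standard sphere-sag formula z = R − sqrt(R² − r²); the parameter names (`ring_radius`, `R`, `phase_deg`) are illustrative, not the paper's notation:

```python
import numpy as np

def golay3_centers(ring_radius, R, phase_deg=90.0):
    """Centers of the three Golay3 sub-apertures, spaced 120 deg apart on
    a ring of radius `ring_radius`, with the sag of each center on a
    spherical primary of curvature radius R (z = R - sqrt(R^2 - r^2))."""
    angles = np.deg2rad(phase_deg + np.array([0.0, 120.0, 240.0]))
    x = ring_radius * np.cos(angles)
    y = ring_radius * np.sin(angles)
    z = R - np.sqrt(R**2 - x**2 - y**2)      # sphere sag at each center
    return np.column_stack([x, y, z])
```

Coordinates of this kind could then be written into the externally generated surface file that the simulation feeds to the optical design program.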
Modern star sensors measure attitude automatically and thus help ensure the performance of spacecraft. They achieve highly accurate attitudes by applying algorithms to the star maps captured by the star camera mounted on them. Star maps therefore play an important role in designing star cameras and developing processing algorithms, and they provide essential support for fully testing star sensors before launch. However, it is not always convenient to gather abundant star maps by photographing the sky, so computer-based star map simulation has attracted considerable interest for its low cost and convenience. A method to simulate star maps by programming and extending the functions of the optical design program ZEMAX is proposed, and a star map simulation system is established. First, based on an analysis of the working procedure by which star sensors measure attitude and of the basic method of designing optical systems in ZEMAX, the principle of simulating star sensor imaging is presented in detail. The theory of adding false stars and noise and of outputting maps is discussed, and corresponding approaches are proposed. Then, through external programming, the star map simulation program is designed and implemented; its user interface and operation are introduced. Applications of the star map simulation method to evaluating the optical system, the star image extraction algorithm, and the star identification algorithm, as well as to calibrating system errors, are presented. The results prove that the proposed simulation method provides strong support for the study of star sensors and efficiently improves their performance.
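A minimal rendering step of the kind such a simulator performs can be sketched as follows: catalog stars are placed at detector coordinates, spread with a Gaussian point-spread function, and corrupted with additive noise. The magnitude-to-intensity law and all parameter names here are illustrative assumptions, not the paper's imaging model:

```python
import numpy as np

def render_star_map(size, stars, sigma=1.0, noise_std=2.0, seed=0):
    """Render a synthetic star map. `stars` is a list of (x, y, magnitude);
    brightness follows an assumed 2.512^-m law, each star is spread over a
    Gaussian PSF of width `sigma`, and Gaussian read noise is added."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for x0, y0, mag in stars:
        amp = 1000.0 * 2.512 ** (-mag)   # assumed peak-intensity scale
        img += amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma**2))
    img += rng.normal(0.0, noise_std, img.shape)
    return np.clip(img, 0, 255)          # 8-bit detector range
```

A ZEMAX-backed simulator would replace the analytic Gaussian with the ray-traced spot of the designed optical system, and the false-star and noise models described in the abstract would be layered on in the same way.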
Star sensors have been developed over recent decades to acquire orientation information more accurately than other attitude measuring instruments. A star camera photographs the night sky to obtain star maps. An important step in acquiring attitude knowledge is to compare the features of the observed stars in the maps with those of cataloged stars using star identification algorithms. Before this step, the star images must be extracted from the star maps so that their centroids can be calculated. With the development of electronic imaging devices, however, large and ultra-large detectors are now used to acquire star maps, so star image extraction occupies an ever larger portion of the attitude measurement cycle, and the extraction time must be shortened to achieve a higher response rate. In this paper, a novel star image extraction algorithm is proposed that fulfills this task efficiently. By scanning the star map, the pixels brighter than a gray threshold are found, and their coordinates and brightness are stored in a cross-linked list; the data of these pixels are linked by pointers, while all other pixels are ignored. A region growing algorithm can then be applied, taking the first element in the list as the starting seed. New seeds are created when neighboring pixels brighter than the threshold are found, and each processed seed is deleted from the list. The search continues until no neighboring pixels remain in the list; at that point one star image has been extracted and its centroid is calculated. Other star images are extracted in the same way, with examined seeds deleted so that they are never considered again. Each new star image search begins from the first element of the list, avoiding unnecessary scanning. Experiments show that for a 1024×1024 star map, image extraction takes roughly 16 milliseconds; when a CMOS APS is used to transfer image data, nearly real-time extraction can be achieved.
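The threshold-plus-region-growing procedure can be sketched with a Python set standing in for the paper's cross-linked list (the pointer bookkeeping is the paper's contribution; this sketch only mirrors the scan, grow, and delete logic):

```python
import numpy as np
from collections import deque

def extract_stars(img, threshold):
    """Threshold + region growing star extraction: collect pixels above the
    gray threshold, grow 8-connected regions starting from an arbitrary
    remaining bright pixel, delete each examined pixel so it is never
    considered again, and return intensity-weighted centroids."""
    bright = {(i, j) for i, j in zip(*np.nonzero(img > threshold))}
    centroids = []
    while bright:
        seed = next(iter(bright))        # first remaining element
        bright.discard(seed)
        region, queue = [], deque([seed])
        while queue:
            i, j = queue.popleft()
            region.append((i, j))
            for di in (-1, 0, 1):        # 8-connected neighborhood
                for dj in (-1, 0, 1):
                    n = (i + di, j + dj)
                    if n in bright:
                        bright.discard(n)  # examined once, then deleted
                        queue.append(n)
        w = np.array([img[p] for p in region], dtype=float)
        ij = np.array(region, dtype=float)
        centroids.append(tuple((ij * w[:, None]).sum(0) / w.sum()))
    return centroids
```

Because the set only ever holds above-threshold pixels and each one is removed as soon as it is examined, the work is proportional to the number of bright pixels rather than to the full detector area, which is the source of the speed-up the abstract reports.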
This paper proposes a novel algorithm that uses wavelet decomposition to distinguish scenery information from cloud noise in the low-level and high-level detail coefficients. It shows that the approximation coefficients contain only scenery information, while the high-level detail coefficients contain mainly cloud noise along with partial scenery information. Since cloud is usually brighter than the scene illumination, an appropriate brightness threshold is set when processing the high-level detail coefficients to eliminate the cloud noise. At the same time, to remove the residual cloud in the low-frequency component and improve the clarity of the scenery image, the detail coefficients are further decomposed by frequency; for example, the low-level detail coefficients are decomposed once or twice more with wavelet packets. The remaining low-frequency cloud can thus be removed effectively, and by assigning appropriate weights to the detail coefficients, the scenery information is enhanced and the image clarity improved. Considering the influence of parameter changes on the algorithm's performance, entropy is used as the criterion for choosing the optimal parameters step by step. We demonstrate that this entropy-based algorithm is feasible, and the experimental results are superior to homomorphic filtering and the Retinex algorithm in many respects.
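The entropy criterion used for stepwise parameter selection can be computed from the gray-level histogram; a minimal sketch (the bin count and gray range are assumptions, not the paper's settings):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits; a candidate
    parameter setting that maximizes this value would be preferred."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 log 0 = 0)
    return float(-(p * np.log2(p)).sum())
```

A parameter sweep would evaluate this score on the recovered image for each candidate threshold or weight and keep the setting with the highest entropy.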
The sparse aperture system, which replaces a monolithic surface with several sub-apertures, has attracted growing interest because of its lower cost and lighter weight while keeping the aperture size needed to reach the demanded angular resolution. However, image quality degrades because the effective aperture area is smaller. For ideal imaging with a diffraction-limited sparse aperture system, the optical transfer function is known, so the Wiener filter is considered the best tool for restoring the images. In the actual imaging process, however, the image is disturbed by various noises, so the restoration ability of Wiener filtering degrades markedly and its results on noisy images are no longer effective. This paper proposes an improved de-noising algorithm derived from an analysis of the traditional wavelet threshold de-noising method. For images produced by a simulated sparse aperture optical system, we first remove the noise with the improved wavelet threshold method to raise the signal-to-noise ratio and obtain an image as close to ideal as possible, and then restore the preprocessed images with an improved Wiener filtering method. Simulation experiments are carried out on Golay6 sparse aperture systems with different fill factors, designed with the aid of the optical design software ZEMAX. The simulation results demonstrate that the proposed algorithm is superior to ordinary Wiener filtering and to the improved wavelet-Wiener filtering method.
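The restoration stage rests on the standard frequency-domain Wiener filter, which is well defined here precisely because the sparse-aperture OTF is known; a minimal sketch with a constant noise-to-signal ratio (the paper's improved variant is not reproduced, and the parameter names are illustrative):

```python
import numpy as np

def wiener_restore(blurred, otf, nsr=0.01):
    """Frequency-domain Wiener restoration:
    F_hat = G * conj(H) / (|H|^2 + NSR), where G is the spectrum of the
    degraded image, H the known optical transfer function (same shape as
    the image), and `nsr` an assumed constant noise-to-signal ratio."""
    G = np.fft.fft2(blurred)
    F_hat = G * np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```

In the pipeline the abstract describes, the wavelet-threshold de-noising step would run first, so that the `nsr` term can stay small without amplifying residual noise.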
By analyzing the multi-resolution characteristics of the wavelet series, the frequency distribution of the detail and approximation coefficients is deduced. It is concluded that the frequency of the detail coefficients in low levels is higher than that in high levels, and that the frequency of the approximation coefficients is the lowest. Based on this conclusion, this paper proposes a new method for remote sensing image recovery based on Weighted Wavelet Coefficients (WWC), namely, removing cloud and mist from remote sensing images with a weighted wavelet coefficient algorithm. Suppose the image is decomposed into n levels by the wavelet transform. By choosing a reasonable demarcation level l, the scenery information is distributed mainly in levels 1 to l, where the detail coefficients have relatively higher frequency, while the cloud and mist noise is distributed mainly in levels l to n, where the detail coefficients have lower frequency; the approximation coefficients also include cloud information. The detail coefficients in low and high levels and the approximation coefficients are weighted with different factors: the scenery information is enhanced by amplifying the low-level detail coefficients with a weight greater than 1, the cloud and mist noise is weakened by attenuating the high-level detail coefficients with a weight less than 1, and the approximation coefficients are weighted appropriately when they include cloud. It is also proposed that information entropy be used as the criterion for choosing the demarcation level and the weighting factors. Experiments confirm that the new algorithm outperforms homomorphic filtering and the Retinex algorithm.
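The weighting scheme can be illustrated with a two-level Haar decomposition: boost the low-level (fine, scenery) details, attenuate the high-level (coarse, cloud) details, and damp the approximation. The Haar basis and the specific weights are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def haar2(x):
    """One level of the 2-D orthonormal Haar transform."""
    p, q = x[0::2, 0::2], x[0::2, 1::2]
    r, s = x[1::2, 0::2], x[1::2, 1::2]
    return (p + q + r + s) / 2, ((p - q + r - s) / 2,   # approximation, (H,
                                 (p + q - r - s) / 2,   #  V,
                                 (p - q - r + s) / 2)   #  D) details

def ihaar2(a, details):
    """Inverse of haar2."""
    h, v, d = details
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = (a + h + v + d) / 2
    x[0::2, 1::2] = (a - h + v - d) / 2
    x[1::2, 0::2] = (a + h - v - d) / 2
    x[1::2, 1::2] = (a - h - v + d) / 2
    return x

def wwc_recover(img, levels=2, l=1, w_low=1.3, w_high=0.6, w_approx=0.9):
    """Weighted Wavelet Coefficient recovery sketch: decompose `levels`
    times, amplify low-level details (scenery) by w_low > 1, attenuate
    high-level details (cloud) by w_high < 1, damp the approximation by
    w_approx. Weight values here are illustrative, not tuned."""
    coeffs, a = [], img
    for _ in range(levels):
        a, det = haar2(a)
        coeffs.append(det)
    a = w_approx * a
    for lev in range(levels - 1, -1, -1):
        w = w_low if lev < l else w_high   # level index 0 = finest
        a = ihaar2(a, tuple(w * c for c in coeffs[lev]))
    return a
```

In the paper's formulation the demarcation level l and the three weights would be chosen by the information-entropy criterion rather than fixed as above.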