The classification of hyperspectral images benefits greatly from the integration of spectral information and spatial context. Many approaches have been proposed to incorporate spatial information into the classification, such as Markov random fields, extended morphological profiles, and segmentation-based methods. Recently, spatial filtering was introduced to improve the classification accuracy of hyperspectral images. Compared with other spectral-spatial algorithms, spatial filtering is simple and easy to implement, which makes it suitable for practical applications. However, it has received relatively little attention. In this paper, a comprehensive comparative study of spatial filtering is conducted: 10 kinds of filters are used to smooth the hyperspectral images and the classified maps, respectively. The experimental results show that most filtering-based classification methods perform well with high efficiency.
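As a minimal illustration of the two smoothing strategies compared above (the filter types, sizes, and data here are illustrative stand-ins, not the paper's exact setup), a mean filter can smooth the hyperspectral cube before classification, while a median filter can clean the classified label map afterwards:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

# Hypothetical classified map with one isolated mislabeled pixel.
labels = np.ones((5, 5), dtype=int)
labels[2, 2] = 0

# Post-classification smoothing: a 3x3 median filter acts as a simple
# majority vote per window, removing isolated label noise.
smoothed = median_filter(labels, size=3)

# Pre-classification smoothing: a mean filter applied band-by-band
# to the hyperspectral cube itself (here a random stand-in cube).
cube = np.random.rand(5, 5, 10)            # H x W x bands, synthetic
cube_smoothed = uniform_filter(cube, size=(3, 3, 1))
```

The `size=(3, 3, 1)` argument smooths each spectral band spatially without mixing information across bands.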
In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods treat optical system design and image processing as two separate technical procedures, which prevents efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function of optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. To optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted for the image simulation, with the mean squared error (MSE) taken as the evaluation criterion. The results show that, although the optical figures of merit of the designed imaging system are not the best, it provides image signals that are better suited for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy offers clear advantages: it simplifies the structure and reduces cost while simultaneously yielding high-resolution images, which gives it a promising perspective for industrial application.
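The Wiener restoration step can be sketched as follows. This is a generic frequency-domain Wiener filter with an assumed constant noise-to-signal ratio `nsr`, applied here to a toy point source blurred by a 2x2 averaging PSF, not the paper's exact implementation:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter: W = conj(H) / (|H|^2 + NSR)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

def mse(a, b):
    """Mean squared error, the evaluation criterion used above."""
    return float(np.mean((a - b) ** 2))

# Toy demonstration: blur a point source with a 2x2 averaging PSF
# (circular convolution via the FFT), then restore it.
img = np.zeros((16, 16)); img[8, 8] = 1.0
psf = np.zeros((16, 16)); psf[:2, :2] = 0.25
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

The restored image has a lower MSE against the original than the blurred one, which is the sense in which the digital stage compensates for a deliberately relaxed optical design.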
A simulation method for analyzing the polarization states of infrared scenes is proposed in order to study the polarization features of infrared spontaneous emission in depth, since current infrared polarization devices cannot represent the polarization signature of the infrared spontaneous emission of a target or object well. A preliminary analysis of the polarization characteristics of infrared spontaneous emission in the ideal case is carried out, and a corresponding ideal model is established through Kirchhoff's law and the Fresnel equations. Based on this ideal model, three-dimensional (3D) scene modeling and simulation with the OpenSceneGraph (OSG) rendering engine is used to obtain the polarization scene of infrared emission under ideal conditions. With the corresponding software, different infrared scenes can be generated by adjusting the input parameters. By interacting with the scene, infrared polarization images can be acquired readily, and it can be clearly confirmed that the degree of linear polarization (DoLP) of an object in the 3D scene varies with many factors, such as the emission angle and the complex refractive index. Moreover, the large difference in the polarization characteristics of infrared spontaneous emission between metals and nonmetals at the same temperature can be easily discerned in the 3D scene. The 3D scene simulation and modeling in the ideal case provides a direct understanding of infrared polarization properties, which is of great significance for the further study of infrared polarization characteristics in real scenes.
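The ideal model described above (Kirchhoff's law plus the Fresnel equations for a smooth surface) can be sketched as follows; for simplicity the refractive index is taken as real here, though the same expressions accept a complex index:

```python
import numpy as np

def emission_dolp(theta_deg, n):
    """DoLP of thermal emission from a smooth surface in the ideal case:
    Kirchhoff's law gives emissivity = 1 - Fresnel reflectance for each
    polarization, and the DoLP follows from the two emissivities."""
    theta = np.radians(theta_deg)
    cos_t = np.cos(theta)
    root = np.sqrt(n ** 2 - np.sin(theta) ** 2 + 0j)
    r_s = (cos_t - root) / (cos_t + root)            # s-pol Fresnel coefficient
    r_p = (n ** 2 * cos_t - root) / (n ** 2 * cos_t + root)  # p-pol
    eps_s = 1.0 - np.abs(r_s) ** 2                   # s-polarized emissivity
    eps_p = 1.0 - np.abs(r_p) ** 2                   # p-polarized emissivity
    return np.real((eps_p - eps_s) / (eps_p + eps_s))
```

At normal incidence the DoLP vanishes, and for a dielectric (n = 1.5 here as an example) it grows with emission angle, which is the angular dependence visible in the 3D scene.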
This paper focuses on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the importance of accurately estimating the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene with a thinned motion-blur model that describes the camera's path. Building the blur kernel from the thinned motion path is more effective at modeling the motion blur introduced by the camera's ego motion than conventional blind estimation with a parameterized PSF kernel. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star-point trajectory, and hence the blur kernel, of the motion-blurred star image. We then detail how this motion-blur model is incorporated into the Richardson-Lucy (RL) deblurring algorithm, which reveals its overall effectiveness. In addition, experimental results show that, compared with conventional blur-kernel estimation, the proposed thinning-based method has lower complexity, higher efficiency and better accuracy, which contributes to better restoration of motion-blurred star images.
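A minimal sketch of the RL deblurring step, applied to a synthetic 3-pixel streak standing in for a thinned star path (the thinning-based kernel extraction itself is not shown; the kernel is simply given here):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Plain Richardson-Lucy iterations; the mirrored PSF implements
    the adjoint of the blur operator."""
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / (conv + 1e-12)             # avoid divide-by-zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

# A star blurred into a 3-pixel horizontal streak (the "motion path").
star = np.zeros((15, 15)); star[7, 7] = 1.0
kernel = np.ones((1, 3)) / 3.0
streak = fftconvolve(star, kernel, mode='same')
deblurred = richardson_lucy(streak, kernel)
```

After the iterations the streak energy is re-concentrated toward the original point, which is the behavior the thinned kernel enables on real star images.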
A new and effective image super-resolution (SR) algorithm, a hybrid of multi-frame variational Bayesian (VB) reconstruction and single-frame dictionary learning (DL) reconstruction, is developed in this article to reconstruct a high-resolution (HR) satellite image. First, using variational Bayesian analysis, the unknown high-resolution image, the acquisition process, the motion parameters and the unknown model parameters are combined in a single mathematical model with a Bayesian formulation, and the distributions of all unknowns are jointly estimated. Without any parameter tuning, an HR image is adaptively reconstructed from multiple low-resolution (LR) frames. Second, taking this HR image as input, a higher-resolution image is rebuilt by exploiting the statistical correlation between HR and LR images, which is obtained via the DL method. The VB method effectively uses the non-redundant information between LR images to recover HR satellite images, while the DL algorithm, benefiting from dictionary training on massive image sets, provides more high-frequency image detail; the hybrid of VB and DL thus combines both advantages. The experiments show that the proposed algorithm can effectively increase the resolution of remote sensing images by at least 0.5 times compared with the low-resolution input.
Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of consecutive low-resolution (LR) images can be obtained and stored in real time, which captures both the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences carry different redundant information as well as particular prior information, so a super-resolution image can be restored faithfully and effectively. The sampling method is used to derive the reconstruction principle of super-resolution, which indicates the theoretically attainable improvement in resolution. A learning-based super-resolution algorithm is used to reconstruct single images, and the variational Bayesian algorithm is simulated to reconstruct the randomly displaced low-resolution images, modeling the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Using sub-pixel registration, a super-resolution image of the scene can then be reconstructed. Reconstruction results from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining higher-resolution images with currently available hardware.
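The role of the sub-pixel displacements can be illustrated with an idealized shift-and-add toy example (noise-free frames with exactly known factor-2 offsets; the Bayesian reconstruction described above handles the realistic case of random, unknown shifts):

```python
import numpy as np

rng = np.random.default_rng(0)
hr = rng.random((8, 8))                  # synthetic "scene"

# Four LR frames: factor-2 decimation at four sub-pixel offsets,
# mimicking the displacements produced by the driving device.
frames = {(dy, dx): hr[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

# Shift-and-add: interleave the registered frames back onto the HR grid.
recon = np.empty_like(hr)
for (dy, dx), lr in frames.items():
    recon[dy::2, dx::2] = lr
```

Because each frame samples a different sub-pixel phase, the four LR frames together carry exactly the information of the factor-2 HR grid, which is the principle behind the claimed 2x resolution gain.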
Remote sensing images can be degraded by a variety of causes during acquisition, transmission, compression, storage and reconstruction. Noise is one of the most important degradation factors, and quantifying its impact on the image is useful for applications such as improving the acquisition system and thus the quality of the produced images. Objective image quality assessment (IQA) methods can be classified by whether a reference image representing the original signal exists. In remote sensing, the ideal undegraded image is not available, so a no-reference (NR) method is required to assess the image quality blindly. In this paper, a new no-reference algorithm is proposed to quantify noise based on local phase coherence (LPC), under the assumption that the input image is contaminated by additive zero-mean Gaussian noise. First, an LPC map of the degraded image is constructed and the image edges are extracted by adjusting the noise threshold. Second, the edges are removed from the LPC map. The noise level is then quantified from the remaining noise information and the small amount of residual structure in the LPC map. Experimental results show that the proposed algorithm correlates well with subjective quality evaluations and achieves high estimation accuracy, especially for Gaussian-noise-contaminated images.
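For orientation, a classical no-reference noise estimator under the same additive zero-mean Gaussian assumption is Immerkaer's fast method, shown here only as a simple stand-in for the LPC-based approach:

```python
import numpy as np
from scipy.signal import convolve2d

def estimate_noise_sigma(img):
    """Immerkaer's fast noise-variance estimate: a Laplacian-difference
    mask largely cancels image structure, so the mean absolute response
    tracks the Gaussian noise standard deviation."""
    mask = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)
    resp = convolve2d(img, mask, mode='valid')
    h, w = resp.shape
    return np.sqrt(np.pi / 2.0) * np.abs(resp).sum() / (6.0 * h * w)

# Flat scene plus Gaussian noise of known sigma = 5.
rng = np.random.default_rng(1)
noisy = 100.0 + 5.0 * rng.standard_normal((128, 128))
sigma_hat = estimate_noise_sigma(noisy)
```

On this synthetic flat image the estimate recovers the injected sigma closely; on structured scenes, edge removal (as the LPC method performs) becomes the harder part of the problem.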
An approach for haze removal using polarimetric imaging and multi-scale analysis has been developed to address the fact that haze weakens the interpretation of remote sensing imagery through poor visibility and short detection distance. On the one hand, the polarization effects of the airlight and the object radiance in the imaging procedure are considered. On the other hand, the fact that objects and haze have different frequency-distribution properties is exploited: multi-scale analysis through the wavelet transform makes it possible to process separately the low-frequency components, where haze dominates, and the high-frequency coefficients, which carry image details and edges. Measuring the polarization features with the Stokes parameters, three linearly polarized images (0°, 45°, and 90°) are taken in hazy weather, from which the best polarized image I_min and the worst one I_max can be synthesized. These two haze-contaminated polarized images are then decomposed into different spatial layers by wavelet analysis; the low-frequency images are processed with a polarization dehazing algorithm, while the high-frequency components are manipulated with a nonlinear transform. The final haze-free image is reconstructed by the inverse wavelet transform. Experimental results verify that the proposed dehazing method can strongly improve image visibility and increase the detection distance through haze for imaging warning and remote sensing systems.
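The synthesis of I_min and I_max from the three polarized acquisitions follows directly from the linear Stokes parameters; a minimal sketch:

```python
import numpy as np

def stokes_from_three(i0, i45, i90):
    """Linear Stokes parameters from 0, 45 and 90 degree images."""
    s0 = i0 + i90                # total intensity
    s1 = i0 - i90                # horizontal/vertical preference
    s2 = 2.0 * i45 - s0          # +45/-45 preference
    return s0, s1, s2

def best_worst_images(s0, s1, s2):
    """Synthesize the best (I_min) and worst (I_max) polarized images:
    the intensities seen through an ideal analyzer aligned with the
    minor and major polarization axes, respectively."""
    lp = np.sqrt(s1 ** 2 + s2 ** 2)      # linearly polarized intensity
    return (s0 - lp) / 2.0, (s0 + lp) / 2.0
```

These two synthesized images are the inputs the wavelet decomposition then splits into haze-dominated low-frequency and detail-carrying high-frequency layers.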
This study focuses on the key theory and experiments of lightning detection and exposure. First, an algorithm based on the difference between two adjacent frames is selected to remove the background and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Second, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. A look-up table is then built from the relationship between the exposure value and the average image brightness to achieve rapid automatic exposure. Finally, a hardware test platform is established based on a USB 3.0 industrial camera with a CMOS imaging sensor, and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can effectively and quickly capture clear lightning pictures, even in special nighttime scenes, which will provide beneficial support to the smartphone industry, since current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
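The frame-differencing detection step can be sketched as follows (the threshold value here is illustrative, not the one used on the test platform):

```python
import numpy as np

def detect_flash(prev_frame, cur_frame, threshold=50):
    """Inter-frame difference plus a fixed threshold: the static
    background cancels out, so only sudden brightness jumps such as a
    lightning flash survive thresholding."""
    diff = cur_frame.astype(np.int32) - prev_frame.astype(np.int32)
    mask = diff > threshold
    return mask, bool(mask.any())
```

The returned mask localizes the flash, and the boolean flag is what would trigger the fast-exposure path on the camera platform.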
The relative motion between a remote sensing satellite sensor and the observed objects is one of the most common causes of remote sensing image degradation, and it seriously hampers image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so identifying the motion-blur direction and length accurately is crucial for obtaining the PSF and restoring the image precisely. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the severe noise present in actual remote sensing images often makes the stripes indistinct: the parameters then become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion-blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive graph-based image segmentation method, GrabCut, is adopted to effectively extract the edge of the bright center of the spectrum. The motion-blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a whole-column statistics method is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after the blur parameters have been estimated. The experimental results verify the effectiveness and robustness of our algorithm.
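The Radon-based direction search can be illustrated with a crude rotate-and-project sketch (without the GrabCut segmentation step; a synthetic striped array stands in for the spectrum, and the angle grid is coarse for brevity):

```python
import numpy as np
from scipy.ndimage import rotate

def dominant_stripe_angle(spectrum, angles=np.arange(0.0, 180.0, 15.0)):
    """Toy Radon-style search: rotate the spectrum and keep the angle
    whose column projection varies most, i.e. the angle at which the
    light-and-dark stripes align with the image columns."""
    scores = []
    for a in angles:
        proj = rotate(spectrum, a, reshape=False, order=1).sum(axis=0)
        scores.append(proj.var())
    return float(angles[int(np.argmax(scores))])

# Stand-in "spectrum" with vertical stripes: the estimate should be 0.
stripes = np.zeros((32, 32)); stripes[:, ::2] = 1.0
angle = dominant_stripe_angle(stripes)
```

On a real noisy spectrum the projections are far less clean, which is exactly why the paper segments the bright center first before applying the Radon transform.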
We address an optical imaging method that, owing to the "memory effect" of speckle correlations, allows imaging through highly scattering turbid media with an Error Reduction - Hybrid Input-Output (ER-HIO) algorithm. When light propagates through opaque materials such as white paint, paper or biological tissue, it is scattered by the inhomogeneity of the refractive index. Multiple scattering of light in highly scattering media forms a speckle field, which greatly reduces the imaging depth and degrades the imaging quality. Several methods have been developed to solve this problem in recent years, including wavefront modulation methods (WMM), transmission matrix methods (TMM) and speckle correlation methods (SCM). Here, a novel approach that combines the speckle correlation method with a phase retrieval algorithm (PRA) is proposed to image through a highly scattering turbid medium. We show that, owing to the optical memory effect of speckle correlations, a single frame of the speckle field, captured with a high-performance detector, encodes sufficient information to image through highly scattering turbid media. Theoretical and experimental results show that neither a coherent light source nor wavefront shaping is required, and that imaging can be realized with a simple optical system with the help of the optical memory effect. Unlike previous approaches, our method does not require a coherent light source and can work with LED illumination; it is therefore potentially suitable for a growing range of areas and will help enable imaging in currently inaccessible scenarios.
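The quantity the ER-HIO phase retrieval inverts is the object's autocorrelation, which within the memory-effect range is approximated by the autocorrelation of the single captured speckle frame; a minimal sketch via the Wiener-Khinchin theorem:

```python
import numpy as np

def autocorrelation(speckle):
    """Autocorrelation via the Wiener-Khinchin theorem: the power
    spectrum of the mean-subtracted frame, inverse-transformed, with
    the zero-lag peak shifted to the array center."""
    f = np.fft.fft2(speckle - speckle.mean())
    ac = np.real(np.fft.ifft2(np.abs(f) ** 2))
    return np.fft.fftshift(ac)

# Synthetic stand-in frame; a real speckle image would be used instead.
rng = np.random.default_rng(2)
ac = autocorrelation(rng.random((32, 32)))
```

The ER-HIO iterations then recover the object from the Fourier magnitude implied by this autocorrelation, alternating between Fourier-magnitude and support constraints.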
A compressed coded aperture imaging warning system with a low-resolution optical sensor is proposed in this paper, specifically designed to meet the demands of rapid, high-resolution, long-range detection and warning in complex battlefield environments. After analysis of the tactical and technical specifications, the key techniques of this novel warning system are discussed and designed, including the optical imaging module, the image processing module, the warning control module and the interface unit. The optical imaging module performs image compression; the coded image is then mathematically reconstructed into a high-resolution image by the image processing module. The presented super-resolution reconstruction algorithm is efficient and robust. Experiments combining compressed coded imaging simulation with coded-image super-resolution reconstruction show that the system achieves a longer detection range and higher resolution, showing potential for the defense of important targets.
The low-resolution satellite images caused by serious degradation in remote sensing weaken their utility in practice. An effective algorithm for high-resolution remote sensing image reconstruction is proposed to recover the degraded images using a precise estimate of the imaging system's modulation transfer function (MTF) obtained from a curved knife edge. The curved edge is chosen automatically and robustly from many candidate edges, and provides higher precision than a straight edge. To suppress artifacts and noise, the total variation (TV) method is applied as well. The experiments show that this algorithm can recover a high-resolution image with a high signal-to-noise ratio (SNR).
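The knife-edge MTF pipeline (edge spread function, then line spread function, then MTF) can be sketched in its simplest 1-D form; the paper's curved-edge selection and TV regularization are not shown:

```python
import numpy as np

def mtf_from_edge(esf):
    """Knife-edge sketch: differentiate the edge spread function (ESF)
    to get the line spread function (LSF); the magnitude of its Fourier
    transform, normalized at zero frequency, is the MTF."""
    lsf = np.diff(np.asarray(esf, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# An ideal step edge has a delta-function LSF, hence a flat MTF of 1;
# a real system's MTF rolls off with frequency instead.
ideal = mtf_from_edge(np.concatenate([np.zeros(8), np.ones(8)]))
```

The measured roll-off relative to this ideal case is what the restoration step then compensates, with TV regularization keeping noise under control.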
Conventional image quality assessment algorithms, such as peak signal-to-noise ratio (PSNR), mean squared error (MSE) and structural similarity (SSIM), need the original image as a reference. They are not applicable to remote sensing images, for which the original image cannot be assumed to be available. In this paper, a no-reference image quality assessment (NRIQA) algorithm is presented to evaluate the quality of remote sensing images. Since blur and noise (including stripe noise) are the common distortions affecting remote sensing image quality, a comprehensive evaluation factor is modeled that assesses blur and noise by analyzing the image's visual properties under different stimuli, combined with SSIM based on the human visual system (HVS), and assesses stripe noise using phase congruency (PC). The experimental results show that this algorithm is an accurate and reliable method for remote sensing image quality assessment.
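The conventional full-reference metrics mentioned above are easily stated; a minimal sketch of MSE and PSNR, both of which require exactly the reference image that is unavailable in the remote sensing setting:

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a test image."""
    return float(np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB for an 8-bit dynamic range."""
    m = mse(ref, img)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Identical images give infinite PSNR and maximally different 8-bit images give 0 dB, which bounds the scale the no-reference factor must reproduce without access to `ref`.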
The point spread function (PSF) is one of the key indicators characterizing the signal transfer characteristics of an imaging system. The edge method is well suited to calculating the PSF of remote sensing imaging systems because of its easy implementation and robust noise resistance. In this paper, a double-knife-edge method is proposed to recover degraded images using a precisely estimated PSF of the imaging system. The exact motion-blur direction is first estimated by image differentiation. Two orthogonal edges, one of which lies along the main motion-blur direction, are picked from the candidate edges via the Hough transform and used to obtain edge spread functions (ESFs). From these ESFs, a more accurate PSF is derived and used to deconvolve the degraded image with a restoration algorithm based on total variation (TV) deconvolution, which suppresses artifacts and noise. The experimental results show that this algorithm reconstructs remote sensing images adaptively and efficiently, and the reconstructed image has better PSNR, MSE and MTF than the original degraded image.
The 3-5 um medium-wave infrared (MWIR) laser has gained much attention for its important applications in remote sensing, medicine, the military and many other fields. However, such lasers are technically difficult to fabricate, and the stability of their output power remains to be improved. In practical applications, the MWIR output power becomes unstable when the temperature changes or the current varies, so a system for reducing the MWIR power fluctuation should be established. In this paper, a photoelectric system for stabilizing the output power of a He-Ne laser is developed, designed on the basis of feedback control theory. The primary devices and technologies are presented and the functions of each module are described in detail. Among them, an auxiliary visible light path is designed to aid the alignment of the MWIR optical system. A converging lens serving as a spatial filter is employed to eliminate stray light, and Dewar temperature control equipment is used to reduce circuit noise in the IR detector. The power supply of the A/D conversion circuit is designed independently to avoid crosstalk between the analog and digital sections. With the above design, the system achieves good controllability, stability and high precision. Finally, the measurement precision of the system is analyzed and verified.
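The feedback control loop at the heart of such a power stabilizer can be illustrated with a generic discrete PI update; the gains and the trivial plant below are illustrative, not the system's actual parameters:

```python
def pi_step(setpoint, measured, integral, kp=0.5, ki=0.1):
    """One update of a discrete PI controller: proportional action on
    the current error plus integral action on its running sum."""
    error = setpoint - measured
    integral = integral + error
    return kp * error + ki * integral, integral

# Drive a toy plant (output power changes by the control value each
# step) toward a normalized target power of 1.0.
power, integ = 0.0, 0.0
for _ in range(100):
    control, integ = pi_step(1.0, power, integ)
    power += control
```

With these gains the loop converges in a damped oscillation; in the real system the "measured" value would come from the photodetector and the control value would adjust the drive.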
A CPLD (Complex Programmable Logic Device) is an effective device for real-time parallel processing of batches of video data. In this paper, a real-time method for implementing several image enhancement algorithms in a CPLD is described. It is based on Altera's ACEX 1K device, which is modular enough to be used in many scientific and industrial applications and powerful enough to maintain the throughput required for real-time video enhancement.
In this paper, a new combinatorial image enhancement algorithm is developed based on the statistical characteristics of infrared images. Computer simulation experiments have shown that this new algorithm can solve the problems of low contrast, noise and blurry edges in infrared images quite well. Results are illustrated with a small, representative set of images taken under different conditions. In addition, the new enhancement algorithm has been implemented in hardware, and the processed images demonstrate the effectiveness of this image enhancement system. The delay time of the whole system is at the microsecond level, which meets the need for real-time infrared image enhancement processing.
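One standard statistics-based building block for infrared contrast enhancement is global histogram equalization; since the abstract does not specify the combinatorial algorithm in detail, the following is only an illustrative sketch of this kind of step:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit image: remap gray
    levels through the normalized cumulative histogram so the output
    uses the full dynamic range (assumes more than one gray level)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                    # first occupied level
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min)),
                  0, 255).astype(np.uint8)
    return lut[img]
```

Applied to a low-contrast image whose values cluster in a narrow band, the remapping stretches them across the full 0-255 range, which addresses the low-contrast problem the paper targets for infrared imagery.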