The Risley-prism system is used in imaging LADAR to achieve precise pointing of laser beams, and the image quality of the LADAR is strongly affected by the beam steering quality of the Risley prisms. A ray-tracing method was used to predict the pointing error, and the beam steering uncertainty of Risley prisms was investigated through Monte Carlo simulation under the effects of rotation-axis jitter and prism rotation error. Case examples are given to elucidate the probability distribution of the pointing error, and the effect of the scan pattern on the beam steering uncertainty is also studied. It is found that the requirement on the bearing rotational accuracy of the second prism is much more stringent than that on the first prism. Under the effect of rotation-axis jitter, the pointing uncertainty in the field of regard depends on the altitude angle of the emerging beam but is independent of the azimuth angle. For a circular scan pattern, the beam steering uncertainty is also affected by the initial phase. The proposed method can be used to estimate the beam steering uncertainty of Risley prisms, and the conclusions should be helpful in the design and manufacture of such systems.
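The Monte Carlo approach described above can be sketched as follows under a first-order (thin-prism) approximation, in which each prism contributes a fixed angular deviation in the direction of its rotation angle. All parameter values here (prism deviation angles, nominal rotation angles, rotation-error magnitude) are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (thin-prism approximation, not the paper's values)
delta1 = np.deg2rad(10.0)   # deviation angle of prism 1
delta2 = np.deg2rad(10.0)   # deviation angle of prism 2
theta1 = np.deg2rad(30.0)   # nominal rotation angle of prism 1
theta2 = np.deg2rad(120.0)  # nominal rotation angle of prism 2
sigma  = np.deg2rad(0.01)   # 1-sigma prism rotation error

def pointing(t1, t2):
    """First-order pointing vector: sum of the two prism deviations."""
    return np.stack([delta1*np.cos(t1) + delta2*np.cos(t2),
                     delta1*np.sin(t1) + delta2*np.sin(t2)], axis=-1)

# Monte Carlo: perturb both rotation angles and collect the pointing spread
n = 100_000
t1 = theta1 + sigma*rng.standard_normal(n)
t2 = theta2 + sigma*rng.standard_normal(n)
err = pointing(t1, t2) - pointing(theta1, theta2)
err_mag = np.rad2deg(np.linalg.norm(err, axis=-1))

print(f"mean pointing error: {err_mag.mean():.2e} deg")
print(f"95th percentile    : {np.percentile(err_mag, 95):.2e} deg")
```

The histogram of `err_mag` approximates the probability distribution of the pointing error; the same loop can be repeated along a scan pattern to map uncertainty over the field of regard.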
An evaluation model of image fusion based on the entropy weight method is proposed to address the difficulties in evaluating fused multispectral and panchromatic images, such as the lack of overall importance in single-factor metric evaluation and the discrepancy among different categories of characteristic evaluation. Several single-factor metrics covering different aspects of image quality are selected to form a metric set; the entropy weight of each single-factor index is then calculated by the entropy weight method, yielding a new comprehensive evaluation index with which each fused image can be evaluated and the images with higher spectral and spatial resolution can be identified. Experimental analysis shows that the proposed method is versatile, objective, and rational, and that it performs well in evaluating fused multispectral and panchromatic images.
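The entropy weight calculation can be sketched in a few lines: each metric's weight grows as its values spread more across the candidate images (lower entropy), and the comprehensive index is the weighted sum. The metric values below are made-up placeholders, assuming all metrics are larger-is-better:

```python
import numpy as np

# Rows: candidate fused images; columns: single-factor metrics
# (illustrative values; all metrics assumed larger-is-better)
X = np.array([[7.2, 0.81, 35.4],
              [6.9, 0.85, 33.8],
              [7.5, 0.78, 36.1],
              [7.1, 0.83, 34.9]])

n, m = X.shape
P = X / X.sum(axis=0)                            # share of each image in each metric
E = -(P * np.log(P)).sum(axis=0) / np.log(n)     # information entropy per metric
w = (1 - E) / (1 - E).sum()                      # entropy weights
score = X @ w                                    # comprehensive evaluation index

print("entropy weights:", np.round(w, 3))
print("best fused image:", int(score.argmax()))
```

In practice the columns would first be normalized to a common scale, and smaller-is-better metrics inverted, before forming `P`.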
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model; the independent and cumulative effects of the different errors are then analyzed with it. An accuracy study of the model shows that the prediction deviation of the pointing error for each error source is less than 4.1×10⁻⁵° when the error amplitude is 0.1°. Detailed analyses indicate that the error sources affect the pointing accuracy to varying degrees, the major source being incident beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. Analyses of the cumulative effect of multiple errors show that the pointing error can be reduced by tuning the bearing tilts in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotation angles equals 0 or π, and relatively small when the difference equals π/2. These results can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.
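The ray-direction approach can be illustrated with an exact vector-form Snell's-law trace through two wedge prisms. The geometry (wedged entrance face on prism 1, wedged exit face on prism 2), wedge angle, and refractive index are assumptions for the sketch, not the paper's configuration:

```python
import numpy as np

def refract(d, n_hat, n1, n2):
    """Vector form of Snell's law; d and n_hat are unit vectors and
    n_hat points against the incoming ray (cosine of incidence > 0)."""
    mu = n1 / n2
    cos_i = -np.dot(d, n_hat)
    cos_t = np.sqrt(1.0 - mu**2 * (1.0 - cos_i**2))
    return mu * d + (mu * cos_i - cos_t) * n_hat

def risley_direction(theta1, theta2, alpha=np.deg2rad(10.0), n=1.517):
    """Trace a ray along the optical axis through two identical wedge prisms
    (assumed layout: wedged entrance on prism 1, wedged exit on prism 2)."""
    d = np.array([0.0, 0.0, 1.0])
    # prism 1: wedged entrance face rotated to theta1, then flat exit face
    d = refract(d, -np.array([np.sin(alpha)*np.cos(theta1),
                              np.sin(alpha)*np.sin(theta1),
                              np.cos(alpha)]), 1.0, n)
    d = refract(d, np.array([0.0, 0.0, -1.0]), n, 1.0)
    # prism 2: flat entrance face, then wedged exit face rotated to theta2
    d = refract(d, np.array([0.0, 0.0, -1.0]), 1.0, n)
    d = refract(d, np.array([np.sin(alpha)*np.cos(theta2),
                             np.sin(alpha)*np.sin(theta2),
                             -np.cos(alpha)]), n, 1.0)
    return d / np.linalg.norm(d)

# Sensitivity of the pointing direction to a 0.1 deg rotational error per prism
d0 = risley_direction(np.deg2rad(30.0), np.deg2rad(75.0))
for label, (t1, t2) in [("prism 1 error", (30.1, 75.0)),
                        ("prism 2 error", (30.0, 75.1))]:
    d = risley_direction(np.deg2rad(t1), np.deg2rad(t2))
    shift = np.degrees(np.arccos(np.clip(np.dot(d0, d), -1.0, 1.0)))
    print(f"{label}: pointing shift = {shift:.5f} deg")
```

Comparing the two printed shifts gives a numerical feel for how the same rotational error on each prism maps to different pointing deviations at a given scan position.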
The spatial resolution of hyperspectral imaging systems is constrained by the spatial-spectral resolution tradeoff and by current technological limitations. However, spatial resolution is a critical feature for applications that rely on fine spatial detail. We present a method for restoring high-resolution (HR) images from a set of low-resolution (LR) hyperspectral data cubes with subpixel shifts across different bands. A new observation model is introduced to relate the LR hyperspectral images at different bands to an HR image that covers all these bands. A regularized super-resolution (SR) algorithm is then implemented to solve the problem. Experiments with the proposed algorithm and existing SR algorithms are performed and the results evaluated, demonstrating the feasibility of the proposed SR method. Moreover, the image fusion results also demonstrate that the restored image is suitable for enhancing the spatial resolution of entire hyperspectral data cubes.
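A minimal sketch of this kind of observation model: one HR scene is seen by each band through a band-dependent subpixel shift, pixel-averaging downsampling, and sensor noise. The scene, shifts, and noise level are hypothetical placeholders, and the shift is restricted to integer steps on the HR grid for simplicity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical HR scene (64x64) assumed common to all spectral bands
hr = rng.random((64, 64))

def observe(hr, dy, dx, d=2, noise=0.01):
    """Forward model for one band: subpixel shift (integer steps on the HR
    grid), d-fold downsampling by pixel averaging, plus additive noise."""
    shifted = np.roll(hr, (dy, dx), axis=(0, 1))
    h, w = shifted.shape
    lr = shifted.reshape(h // d, d, w // d, d).mean(axis=(1, 3))
    return lr + noise * rng.standard_normal(lr.shape)

# Three bands with distinct subpixel shifts: complementary samplings of hr
bands = [observe(hr, dy, dx) for dy, dx in [(0, 0), (0, 1), (1, 0)]]
print([b.shape for b in bands])
```

Inverting this forward model over all bands at once is what the regularized SR step does: the distinct shifts make the stacked system better conditioned than any single band alone.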
A control system designed for a double-prism scanner is discussed in this paper. The control system is required to regulate the prism speeds accurately and to change the scan pattern as quickly as possible. We therefore designed a digital double closed-loop control system, consisting of an inner loop and an outer loop, to achieve this function. In this double closed-loop system, the inner loop uses a linear proportional-integral (PI) controller for current control and the outer loop uses a saturated PI controller for speed control. To verify the feasibility and rationality of this control method, a MATLAB simulation was performed; the results indicate that the step response of the prism speed is stable with no steady-state error. After building the digital control system, many experiments were performed to obtain its key characteristics. The results show that the speed regulation time is about 0.4 s for a reference speed of 1 rps, the speed regulation accuracy reaches the 10⁻⁴ level, and the speed fluctuation ratio reaches the 10⁻² level over the operating range (0-3 rps).
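The double closed-loop structure can be sketched with a simple Euler-stepped DC-motor model: the saturated outer PI turns speed error into a clipped current reference, and the inner linear PI turns current error into armature voltage. All motor and gain values are illustrative assumptions, not the scanner's identified parameters:

```python
import numpy as np

# Illustrative motor and controller parameters (not the paper's values)
R, L = 1.0, 5e-3            # armature resistance [ohm], inductance [H]
Kt, Ke = 0.1, 0.1           # torque and back-EMF constants
J, B = 2e-3, 1e-4           # inertia [kg m^2], viscous friction
dt, T = 1e-4, 1.0           # integration step [s], horizon [s]

kp_w, ki_w, i_max = 5.0, 40.0, 5.0      # outer (speed) PI with saturation
kp_i, ki_i, v_max = 20.0, 2000.0, 24.0  # inner (current) PI, voltage limit

w_ref = 2 * np.pi * 1.0     # 1 rps step reference, in rad/s
i = w = ei = ew = 0.0
speeds = []
for _ in range(int(T / dt)):
    # outer loop: saturated PI produces the current reference
    e = w_ref - w
    ew += e * dt
    i_ref = np.clip(kp_w * e + ki_w * ew, -i_max, i_max)
    # inner loop: linear PI produces the armature voltage
    ec = i_ref - i
    ei += ec * dt
    v = np.clip(kp_i * ec + ki_i * ei, -v_max, v_max)
    # motor dynamics (forward Euler)
    i += dt * (v - R * i - Ke * w) / L
    w += dt * (Kt * i - B * w) / J
    speeds.append(w)

final = speeds[-1] / (2 * np.pi)
print(f"final speed: {final:.4f} rps")
```

The current saturation in the outer loop bounds the torque demand during large steps, which is what makes the saturated-PI speed loop practical; an anti-windup scheme on `ew` would further reduce the overshoot after the limit releases.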
To achieve high-resolution multispectral images, we propose a fusion algorithm for MS and PAN images based on the nonsubsampled contourlet transform (NSCT) and an improved fusion rule. The method considers two aspects: preserving the spectral similarity between the fused image and the original MS image, and enhancing the spatial resolution of the fused image. The local spectral similarity between the MS and PAN images guides the selection of high-frequency detail coefficients from the PAN image, which are then injected into the MS image. Spectral distortion is thus limited while the spatial resolution is enhanced. The experimental results demonstrate that the proposed fusion algorithm improves the integration of MS and PAN images.
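The detail-injection idea can be sketched without the NSCT machinery: here a plain box filter stands in for the NSCT low-frequency band, the PAN high-frequency residual is the injected detail, and a single global gain stands in for the paper's locally adaptive, similarity-guided rule. The synthetic images are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def box_blur(img, k=5):
    """Box low-pass filter (stand-in for the NSCT low-frequency band)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Synthetic data: a sharp PAN image and a blurred, noisy MS band
pan = rng.random((64, 64))
ms = box_blur(pan, 7) + 0.05 * rng.standard_normal((64, 64))

# Inject the PAN high-frequency detail into the MS band, scaled to the
# MS band's dynamic range (global gain; the paper's rule is local)
detail = pan - box_blur(pan)
gain = ms.std() / pan.std()
fused = ms + gain * detail
```

In the actual method the injection happens per NSCT subband and the gain varies with local MS-PAN spectral similarity, which is what keeps the spectral distortion bounded.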
The polarization detection technique provides polarization information about objects that conventional detection techniques cannot obtain. To fully utilize the obtained polarization information, various polarization image fusion algorithms have been developed. In this research, we propose a polarization image fusion algorithm based on an improved pulse-coupled neural network (PCNN). The improved PCNN uses polarization parameter images to generate a fused polarization image that retains object details for polarization information analysis, with the matching degree M as the fusion rule. The improved-PCNN fused image is compared with images fused by the Laplacian pyramid (LP) algorithm, a wavelet algorithm, and the standard PCNN algorithm. Several performance indicators are introduced to evaluate the fused images. The comparison shows that the proposed algorithm yields images of much higher quality and preserves more of the objects' detail information.
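A minimal PCNN-based fusion can be sketched as follows. Note this uses the common simplified PCNN with a firing-count fusion rule rather than the paper's matching degree M, and the iteration parameters and test images are arbitrary placeholders:

```python
import numpy as np

def neighbor_sum(Y):
    """Sum of the 8-neighborhood pulse outputs (linking input)."""
    p = np.pad(Y, 1)
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2] + p[1:-1, 2:] +
            p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])

def pcnn_fire_counts(S, iters=40, beta=0.2, aL=0.7, aT=0.2, VL=1.0, VT=20.0):
    """Run a simplified PCNN on image S and return per-pixel firing counts."""
    S = (S - S.min()) / (np.ptp(S) + 1e-12)    # feeding input: normalized image
    Lk = np.zeros_like(S)
    Y = np.zeros_like(S)
    theta = np.ones_like(S)                    # dynamic threshold
    counts = np.zeros_like(S)
    for _ in range(iters):
        Lk = np.exp(-aL) * Lk + VL * neighbor_sum(Y)   # linking input
        U = S * (1.0 + beta * Lk)                      # internal activity
        Y = (U > theta).astype(float)                  # pulse output
        theta = np.exp(-aT) * theta + VT * Y           # threshold decay / reset
        counts += Y
    return counts

# Pixel-wise fusion of two source images by comparing firing counts
rng = np.random.default_rng(0)
A, B = rng.random((32, 32)), rng.random((32, 32))
fused = np.where(pcnn_fire_counts(A) >= pcnn_fire_counts(B), A, B)
```

The firing count acts as a saliency map: pixels that pulse earlier and more often (brighter, or coupled to active neighbors) win the selection, which is the behavior the matching-degree rule refines.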
Interferograms obtained by temporally-spatially modulated Fourier transform spectrometers are recovered to spectra by the fast Fourier transform (FFT). However, the interferogram is sometimes nonuniformly sampled, in which case the FFT cannot be applied directly. In this paper, we propose a wavelet-basis fitting method that interpolates the interferogram onto an equally spaced grid, so that the FFT can then be used to recover the spectrum. The simulated recovered spectrum indicates that the proposed wavelet-basis fitting method interpolates the nonuniformly sampled interferogram effectively, and preliminary results show that it introduces smaller errors than polynomial fitting.
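The resample-then-FFT pipeline can be sketched as below. Linear interpolation stands in here for the paper's wavelet-basis fit, and the monochromatic source, jitter amplitude, and grid size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
x_uniform = np.arange(n, dtype=float)              # target equal-spaced OPD grid
x_nonuni = np.sort(x_uniform + rng.uniform(-0.3, 0.3, n))  # jittered samples

# Monochromatic interferogram sampled at the nonuniform positions
sigma0 = 0.05                                      # wavenumber [cycles/sample]
igm = np.cos(2 * np.pi * sigma0 * x_nonuni)

# Resample onto the equal-spaced grid (linear interpolation as a stand-in
# for the wavelet-basis fitting step), then recover the spectrum by FFT
igm_uniform = np.interp(x_uniform, x_nonuni, igm)
spectrum = np.abs(np.fft.rfft(igm_uniform))
peak = int(np.argmax(spectrum))
print(f"recovered wavenumber: {peak / n:.4f} (true {sigma0})")
```

Applying `rfft` directly to `igm` as if it were uniformly sampled would smear and bias this peak; the interpolation step is what restores the equal spacing the FFT assumes.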
As the LR images from a plenoptic camera are greatly constrained by the number of micro-lenses, multi-frame super-resolution methods can be applied to enhance the spatial resolution. Multi-frame super-resolution reconstruction is a technique that obtains a high-resolution image from several low-resolution images of the same scene. Among the various super-resolution methods, regularized methods are widely used because of their advantages in solving ill-posed problems. In this paper, several regularized super-resolution methods are applied to enhance the spatial resolution of the light-field image. The reconstruction results on synthetic low-resolution images confirm that all the regularized super-resolution algorithms suppress Gaussian noise and preserve edge information, and the real-data experiments also confirm the effectiveness of the applied algorithms.
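The regularized multi-frame idea can be shown on a small 1D example: two shifted, decimated observations of one HR signal are stacked into a linear system and solved with Tikhonov regularization. The signal, shifts, and regularization weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 128, 2                  # HR length, decimation factor
shifts = [0, 1]                # subpixel shifts of the two LR frames (HR-grid units)

# Hypothetical HR ground truth: a smooth bump plus a step edge
t = np.arange(N)
x_true = np.exp(-(t - 40.0) ** 2 / 50.0) + (t > 90).astype(float)

def shift_decimate(s):
    """Operator matrix: shift by s samples, then keep every d-th sample."""
    A = np.zeros((N // d, N))
    for r in range(N // d):
        c = r * d + s
        if c < N:
            A[r, c] = 1.0
    return A

# Stack the per-frame operators and generate noisy LR observations
A = np.vstack([shift_decimate(s) for s in shifts])
y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])

# Tikhonov regularization with a second-difference roughness penalty
L = np.diff(np.eye(N), 2, axis=0)
lam = 0.05
x_hat = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ y)

rmse = np.sqrt(np.mean((x_hat - x_true) ** 2))
print(f"reconstruction RMSE: {rmse:.4f}")
```

The roughness penalty is what tames the ill-posedness: it damps amplified noise while the data term, fed by complementary shifts, pins down the edge location; swapping `L.T @ L` for an edge-preserving prior gives the other regularized variants the abstract compares.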