Hyperspectral imaging systems are widely used for their convenient way of capturing 3D information. Coded aperture snapshot spectral imaging (CASSI) systems and diffractive optical element (DOE)-based hyperspectral imaging systems are two representative types. CASSI systems can solve the underdetermined inverse problem from a single-shot acquisition and further improve the quality of reconstructed results by utilizing multiple measurements; however, systems based on DOEs cannot change the point spread function (PSF) encodings once the DOEs are fabricated. Inspired by the tunable Moiré lens, we propose a multishot DOE-based hyperspectral imaging system that uses the mutual rotation between two DOEs to vary the PSF encodings, enabling multiple measurements. Our system can capture snapshots just as fixed DOE-based systems do. Meanwhile, it requires only rotation and achieves variable PSF encodings without translation along the horizontal or vertical axes. Moreover, our DOEs have a more regular and controllable dispersion. With the spectral reconstruction algorithm, simulation results show that increasing the number of measurements improves performance. We suggest a scheme to improve spectral accuracy by changing DOE encodings across multiple measurements.
Compared to single aperture systems, optical synthetic aperture systems greatly improve the spatial resolution, yet still exhibit a certain degree of blurring and contrast reduction. To address this challenge, numerous image restoration methods have been proposed. Recently, instead of the conventional circular synthetic aperture, the rotating rectangular synthetic aperture (RRSA) system has employed a rectangular aperture to capture a sequence of images of the same scene. The RRSA system's foldable design and absence of common-phase adjustments confer cost and complexity benefits. The captured degraded image sequences contain information about multiple directions of the target scene, so multi-frame image fusion technology is needed to restore them. However, most conventional methods often introduce visual artifacts and require substantial computational time. In this paper, we propose a Dual-Domain Fusion Network (DDFNet), which restores multi-frame degraded images in the spatial and frequency domains and then fuses them to achieve superior results. DDFNet employs a nested U-Net architecture to capture local pixel-level relationships, facilitating the recovery of local features and structures from spatial domain images. In parallel, we transform the input images into the frequency domain and utilize another nested U-Net for feature extraction on the normalized spectrum and phase, thereby improving the recovery of texture and edge information. Finally, the fusion model effectively utilizes multi-level features and contextual awareness to combine the spatial and frequency domain features, achieving high-quality fusion results for the captured degraded image sequences. Extensive experiments demonstrate that our method achieves superior performance in both quantitative and qualitative assessments compared to state-of-the-art techniques.
Atmospheric turbulence is a major challenge in long-range imaging with ground-based telescopes, especially in the surveillance of space targets, whose observation distance is usually more than 100 km. In this case, space targets are extremely small in images, occupying less than 0.12% of the total image area, and suffer from severe blur and distortion. Consequently, the accuracy of object detection by both conventional and deep-learning-based methods is significantly hampered. Therefore, this paper proposes an effective framework for detecting space targets through atmospheric turbulence. The framework incorporates a shallow deblurring module, a transformer-based feature extractor, and a small region proposal network. The training data comprise simulated degraded images of space targets against celestial backgrounds, as well as a selection of images from the DOTA-v2 dataset. Testing results show that the proposed framework outperforms the general framework, achieving a mean Average Precision (mAP) improvement of over 20%.
Optical systems with zooming ability are essential for space exploration. In this paper, we propose a new varifocal lens design based on the fifth-order X-Y polynomial free-form surface. It can change its focal power through the vertical deflection of the two free-form surfaces. We describe the proposed lens's modulation phase function and aberration based on first-order optical analysis. Simulation experiments compare the proposed lens with the Alvarez lens regarding surface depth and zoom capability. We further investigate the correspondence between vertical deflection and optical power. The results show that the focal power of the varifocal lens based on the fifth-order X-Y polynomial free-form surface is proportional to the cube of the lateral shift distance, giving it a stronger zoom ability than the Alvarez lens. It can achieve a wide range of power changes with a smaller lateral displacement, contributing to the further miniaturization of zoom systems.
Image super-resolution (SR) is the problem of recovering a high-resolution (HR) image from a low-resolution (LR) image of the same scene. It is an ill-posed problem, since high-frequency details are completely lost in low-resolution images. To overcome this problem, many methods based on deep learning have been proposed. In our dual-focal camera system, we use a beam splitter to capture images of the same scene at different resolutions. The shorter focal length module produces the wide-view image with low resolution, and the longer focal length module produces the tele-view image via optical zooming. The long-focal image contains more details than the short-focal image and can therefore guide the short-focal image in recovering its high-frequency content. However, existing SR approaches ignore the nature of image degradation, which limits their use in challenging cases. In this paper, we propose a novel method based on the dual-focal camera system. To reconstruct a high-resolution wide-view image, we design an end-to-end deep learning model. First, we use a blur kernel estimation convolutional neural network (CNN) to obtain the mapping relationship between the short-focal image and the long-focal image. The kernel estimation network is a Laplacian pyramid, which can efficiently learn per-pixel kernels to recover the HR image. Then, we interpolate the kernel and apply it to the wide-view image to obtain the high-resolution wide-view image. Extensive experiments show that our method achieves favorable performance over state-of-the-art approaches in both quantitative and qualitative evaluations.
Image demosaicing and denoising are two important processes in the ISP pipeline of mobile cameras, because almost all mobile cameras in use today require colorful images generated by a demosaicing algorithm, and the small sensor area of mobile cameras leads to a low signal-to-noise ratio. Over the years, a considerable number of sequential demosaicing and denoising methods have been proposed, but they suffer from having to estimate the noise distribution and adjust hyper-parameters to balance demosaicing and denoising. Simultaneous demosaicing and denoising methods exist that solve these problems, but they lack guidelines designed for mobile cameras. We propose a Plug-and-Play (PnP) demosaicing and denoising method for mobile cameras. Our method is built on a PnP demosaicing framework derived from variable splitting theory. Any color demosaicing algorithm (e.g., bilinear, Malvar) can be plugged into our framework. We train a novel ISO-conditioned denoiser for the framework and apply it iteratively within it. The ISO-conditioned denoiser removes not only noise from the demosaicing procedure itself but also noise from the camera sensors. By introducing ISO settings to the denoiser, our method gains adaptability and robustness in various capturing environments under different camera settings. Our method has only two hyperparameters to tune, which eases the hyper-parameter adjustment required by sequential demosaicing and denoising methods. Extensive experiments on synthetic datasets show that our method performs better than sequential demosaicing and denoising methods and is practical for mobile cameras.
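As a sketch of the half-quadratic-splitting structure such a PnP framework typically takes, the following minimal numpy example alternates a closed-form data step for a Bayer mosaic observation with a plugged-in denoiser. The Gaussian denoiser and the ISO-to-strength mapping are hypothetical stand-ins for the paper's trained ISO-conditioned CNN; rho and the iteration count play the role of the two tunable hyperparameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bayer_mask(h, w):
    """Binary sampling masks of an RGGB Bayer pattern, one per RGB channel."""
    m = np.zeros((h, w, 3))
    m[0::2, 0::2, 0] = 1   # R
    m[0::2, 1::2, 1] = 1   # G
    m[1::2, 0::2, 1] = 1   # G
    m[1::2, 1::2, 2] = 1   # B
    return m

def denoiser(x, iso):
    """Stand-in for the trained ISO-conditioned denoiser (hypothetical mapping)."""
    sigma = 0.5 + iso / 800.0          # denoising strength grows with ISO
    return gaussian_filter(x, sigma=(sigma, sigma, 0))

def pnp_demosaic_denoise(raw, iso, rho=0.2, iters=20):
    """Half-quadratic splitting: alternate a closed-form data step with the
    plugged-in denoiser."""
    M = bayer_mask(*raw.shape)
    y = M * raw[..., None]                    # mosaic observation per channel
    z = gaussian_filter(y, sigma=(1, 1, 0))   # crude initialization
    for _ in range(iters):
        # data step: per-pixel minimizer of ||Mx - y||^2 + rho*||x - z||^2
        x = (y + rho * z) / (M + rho)
        # prior step: plug in the denoiser
        z = denoiser(x, iso)
    return z
```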
The 6S (Second Simulation of the Satellite Signal in the Solar Spectrum) radiative transfer model is one of the atmospheric correction algorithms based on atmospheric radiative transfer modeling. It is widely used because of its high correction accuracy, but it is criticized for the complexity of its parameters and the inefficiency of the correction process. The 6S model needs to establish a look-up table based on the geometric and aerosol conditions, which directly determines the accuracy of the atmospheric correction. This paper analyzes the limitations of the traditional look-up table method and uses machine learning algorithms, namely the support vector regression (SVR) algorithm and the back propagation (BP) algorithm, to replace the traditional look-up table method. The experimental results show that the output values and predicted values fit well. Both outperform traditional linear interpolation, and the BP algorithm performs better, which verifies the feasibility of a BP neural network prediction model as a replacement for the linear interpolation method in table lookup. Finally, this paper takes Landsat-8 data as an example, uses the proposed method to perform atmospheric correction, and compares the results with those of the FLAASH model; the visual results of the two are roughly the same.
Researchers have proposed beam-splitting imaging methods to solve the problem of capturing the same view simultaneously with multiple imaging instruments and aligning the images pixel by pixel. Beam-splitting prisms and beam-splitting plates are two kinds of commonly used beam-splitting devices. However, which of the two devices degrades images less has lacked discussion. Firstly, we theoretically analyzed the possible image degradation caused by beam-splitting devices, which mainly includes spherical aberration, ghost effects, non-uniformity of the degradation function, and color cast. We then used the ZEMAX optical simulation software to set up a beam-splitting imaging emulation experiment to simulate the image degradations mentioned above, and we constructed an experimental beam-splitting imaging system in the laboratory. Researchers can select a suitable device for their projects based on our study.
In this paper, a novel way to correct chromatic aberration and remove color fringing for a variable aperture optical system, considering the physical causes, is proposed. From the perspective of the correlation between chromatic aberration and aperture, we explain why image fusion with different aperture parameters can effectively remove color fringing. We then propose a specific color fringing detection and image fusion process. First, we detect the overexposed area of the large aperture image, perform grayscale grading on its neighborhood, extract the edges, and dilate to obtain the candidate color fringing region. Then the two images are transformed to the YCbCr color space to extract the purple fringe area by comparing the color information of the candidate color fringe area. Finally, we use the Cb and Cr channels of the small aperture image to correct the corresponding channels in the color fringing area while retaining the brightness of the original image. Through image fusion, we can remove the color fringing caused by axial chromatic aberration and sensor crosstalk, and the residual lateral chromatic aberration can be corrected by simple image warping. Compared with traditional blind restoration of color fringe regions, our method uses the hue information of the small aperture image as a reference, so no artifacts are introduced. Experimental results show that both the subjective visual effects and the objective evaluation are significantly improved.
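The detection-and-fusion pipeline described above can be sketched as follows. This is a minimal OpenCV/numpy illustration, not the authors' exact implementation: the saturation and chroma-difference thresholds and the dilation size are hypothetical, the two images are assumed registered, and note that OpenCV's YCrCb ordering puts Cr before Cb.

```python
import cv2
import numpy as np

def remove_color_fringe(large_ap_bgr, small_ap_bgr, sat_thresh=250):
    """Replace chroma in candidate fringe regions of the large-aperture image
    with chroma from the registered small-aperture image (sketch)."""
    big = cv2.cvtColor(large_ap_bgr, cv2.COLOR_BGR2YCrCb)    # ch1=Cr, ch2=Cb
    small = cv2.cvtColor(small_ap_bgr, cv2.COLOR_BGR2YCrCb)

    # 1. overexposed area of the large-aperture image
    over = (big[..., 0] >= sat_thresh).astype(np.uint8)

    # 2. dilate to cover the neighborhood where fringing appears
    cand = cv2.dilate(over, np.ones((15, 15), np.uint8))

    # 3. keep pixels whose chroma disagrees strongly between the two images
    chroma_diff = np.abs(big[..., 1:].astype(np.int16) -
                         small[..., 1:].astype(np.int16)).sum(-1)
    fringe = (cand > 0) & (chroma_diff > 20)

    # 4. take Cr/Cb from the small-aperture image, keep original luminance
    out = big.copy()
    out[fringe, 1:] = small[fringe, 1:]
    return cv2.cvtColor(out, cv2.COLOR_YCrCb2BGR)
```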
In this paper, we propose a method based on a dual-focal camera facing the same target to expand the dynamic range of images. Since the spatial resolutions of the two modules of the dual-focal camera differ, down-sampling, up-sampling, and multi-resolution fusion are required in the image fusion processing to obtain an ideal high dynamic range image. Current multi-frame high dynamic range algorithms are mainly designed for images of similar resolution. When two images have a large resolution difference, the effectiveness of ordinary registration algorithms (for example, optical flow registration) is limited, and the image may exhibit ghosting and color artifacts after registration. Our method uses a convolutional neural network composed of two subnets: an image fusion subnet and a style transfer subnet. Because there is only one exposure image in the surrounding field of view, the central field of view is processed separately from the surrounding field of view. In the central field of view, a U-Net is used to register the images layer by layer to increase the registration speed and accuracy. After the high dynamic range image of the central field of view is obtained, the style transfer network transfers the color distribution of the high dynamic range image to the surrounding field of view. Finally, we performed extensive qualitative and quantitative comparisons to show that our method produces excellent results in which ghosting and color artifacts are significantly reduced compared to existing general multi-frame high dynamic range methods, and that it is robust across various inputs.
Aiming at the problem that the focus range and focus accuracy cannot be balanced when focusing on scenes with small targets in a large field of view, such as ships in the ocean, this paper proposes a focusing window selection method based on a gradient matrix, together with an adjustable-coverage autofocus evaluation function in the frequency domain. By using gradient operators of different sizes for different search intervals, the gradient matrix of the image is obtained; this strategy takes both accuracy and detection rate into consideration. Through segmentation, an area with rich details can be chosen as the focusing window. To solve the problem that the focusing evaluation function does not work well when the defocus blur is large, this paper proposes a new evaluation function based on the amplitude summation of different frequency components with a variable threshold. For scenes with small targets in a large field of view, a set of more than 100 pictures with continuously changing defocus was established. Experiments indicate that the proposed method achieves both a wide focusing range and high sensitivity when the variable threshold is used. Compared with existing methods, it performs better in noise robustness, has a larger coverage range, and at the same time provides a sharper peak in the evaluation function, which means higher sensitivity.
Traditional global image registration algorithms are limited in principle and cannot accurately register scenes with a large depth of field or moving objects. Local registration based on dense optical flow has the advantage of not being limited by a single transformation matrix, so better registration results can be obtained. However, traditional dense optical flow algorithms suffer from large computational complexity and can hardly achieve real-time estimation, which limits their application. In recent years, many dense optical flow algorithms based on deep learning (such as PWC-Net) have emerged; they surpass traditional optical flow algorithms on public datasets and can run in real time. Based on this, this paper proposes a deep-learning-based pipeline that predicts dense optical flow and uses it for registration, together with a self-built optical flow dataset for supervised training of the network. Using the same network, training on our dataset yields better registration results than training on existing datasets.
A generative adversarial network denoising algorithm that uses a combination of three kinds of loss functions was proposed to avoid the loss of image details in the denoising process. The mean square error loss function was used to make the denoising results similar to the original images, the perceptual loss function was used to capture the image semantic information, and the adversarial learning loss function was used to make images more realistic. The algorithm used a deep residual network, a densely connected convolutional network, and a wide, shallow network as candidates for the replaceable module of the network. The results show that all three tested networks can retain more image detail and achieve a better peak signal-to-noise ratio while removing image noise. Among them, the wide, shallow network, which uses fewer layers, larger convolution kernels, and more feature maps, achieves the best result.
The modulation transfer function (MTF) is a function of spatial frequency used to evaluate the spatial quality of an imaging system. In measuring the MTF of digital images, the slanted-edge method is widely used. The traditional method projects the pixel values onto the direction perpendicular to the edge and constructs the edge spread function (ESF) at subpixel resolution by binning pixels. However, the pixels within a subpixel bin are not always distributed uniformly, which introduces errors in the measurement results. Meanwhile, the value of an image pixel may deviate from the exact value of the ESF at the pixel center, since it is the integral of a short segment of the continuous ESF. Besides, the sampling interval is compressed by the inclination, so the scale in the frequency domain is changed. In this paper, a novel measurement method that samples the ESF via distance weighting is proposed, and the errors existing in the traditional method are compensated. The edge location estimation method is also improved. Simulation experiments with different edge angles and image noise levels are conducted. The results show that the proposed method has excellent measuring accuracy and robustness.
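A minimal numpy sketch of the distance-weighted ESF sampling idea (not the paper's exact estimator): each pixel center is projected onto the edge normal, the ESF samples are formed as distance-weighted averages rather than uniform bins, and the MTF follows from the windowed derivative. The edge is assumed to pass through the patch center, and the Gaussian weight is one plausible choice of weighting.

```python
import numpy as np

def slanted_edge_mtf(img, edge_angle_deg, bins_per_pixel=4, sigma=0.3):
    """Slanted-edge MTF with distance-weighted ESF sampling (sketch)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # signed distance of each pixel center to the edge (edge through center)
    theta = np.deg2rad(edge_angle_deg)
    d = (xs - w / 2) * np.cos(theta) + (ys - h / 2) * np.sin(theta)

    # supersampled ESF: each pixel contributes to nearby samples with a
    # weight that decays with its distance to the sample position
    grid = np.arange(d.min(), d.max(), 1.0 / bins_per_pixel)
    esf = np.empty_like(grid)
    for i, g in enumerate(grid):
        wgt = np.exp(-0.5 * ((d - g) / sigma) ** 2)
        esf[i] = np.sum(wgt * img) / np.sum(wgt)

    lsf = np.gradient(esf) * np.hanning(len(esf))   # window to limit noise
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(len(lsf), d=1.0 / bins_per_pixel)  # cycles/pixel
    return freq, mtf
```

Because the distances d are measured along the edge normal, the frequency axis is already in true cycles per pixel, which sidesteps the frequency-scale compression caused by the inclination.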
This paper proposes a non-blind deblurring algorithm for noisy images via a distributed gradient prior. The proposed image prior is motivated by observing the gradient properties of noisy images. Based on the prior that image noise has a low gradient distribution, we propose an effective optimization method to deal with noisy, blurry images. In this paper, an image-gradient-related distributed factor is introduced to balance image deblurring and denoising. The distributed factor is related to the image noise and works adaptively according to the noise level of the blurry image. The Richardson-Lucy method is also adopted to achieve a better deconvolution result. Experiments show that our proposed method outperforms other deblurring algorithms in both preserving details and removing noise.
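For reference, the Richardson-Lucy deconvolution step the method builds on can be written in a few lines of numpy/scipy; the paper's adaptive distributed-factor prior is not shown here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iters=30, eps=1e-8):
    """Plain Richardson-Lucy deconvolution (the deconvolution step only)."""
    b = blurred.astype(float)
    est = np.full_like(b, b.mean())          # flat initialization
    psf_flip = psf[::-1, ::-1]               # adjoint of the convolution
    for _ in range(iters):
        reblur = fftconvolve(est, psf, mode='same')
        est *= fftconvolve(b / (reblur + eps), psf_flip, mode='same')
    return est
```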
To resolve the issue of blurred backgrounds and fuzzy targets when using infrared and visible image fusion algorithms, this paper proposes a new image fusion method based on target enhancement. First, average filtering is used to obtain a rough estimate of the transmission rate, which is then refined using the images' statistical information. Next, the final target-enhanced infrared image is obtained using the atmospheric scattering model. Then, the edges of the target-enhanced infrared image and the visible image are detected and separated using improved edge detection. A fusion rule based on binary information is used for the edge part, and a fusion rule based on ratio weighting analysis is used for the non-edge part. Experimental results show that the target-enhancement-based image fusion algorithm not only highlights the target information of the infrared image but also retains as much of the detailed information of the visible image as possible. Additionally, the fused image has better visual effects and higher objective quality evaluation indexes.
In this paper, a new algorithm for non-uniformity correction of infrared focal plane arrays based on a neural network and a bi-exponential filter is proposed. Owing to the edge-preserving property of the bi-exponential filter, the algorithm can estimate the gain and bias coefficients at strong edges more accurately, thereby suppressing the ghosting effect. To suppress the blurring effect, motion detection is carried out before the correction coefficients are updated. A motion evaluation index based on the L1 norm of the temporal variation of the image and the image roughness is designed to improve the accuracy of motion detection. Moreover, an adaptive learning rate calculation method is proposed, which makes the learning rate larger in smooth image regions and smaller in edge regions. This results in faster convergence in uniform regions of the image and makes correction coefficient estimation errors in edge regions less likely. Several infrared image sequences are used to verify the performance of the proposed algorithm. The results indicate that the proposed method can not only preserve the details of the image but also reduce the non-uniformity. Besides, it effectively inhibits the "ghosting" and "blurring" phenomena.
This paper aims to develop a novel image fusion approach for an asymmetrical camera system in which multiple images are acquired with cameras that have large differences in focal length but similar sensor sizes and an overlapping field of view. The fused image usually becomes perceptually unpleasant because the high-frequency components of the wide-view image are quite inadequate compared to the tele-view images. The proposed work consists of four steps: (i) image upscaling of the wide-view image, (ii) texture identification on the upscaled image, (iii) performance evaluation of the image upscaling, and (iv) image inpainting of the high-frequency components of the wide-view image. In the experiment, the field of view of the tele-view camera is set to be 4 times smaller than that of the wide-view camera in spatial angle. The experimental results illustrate that the proposed algorithm brings significant perceptual improvement to the wide-view image.
A novel over-exposure (OE) correction method using the dark channel prior and an image fusion technique is proposed in this work. Assuming an OE image can be modeled as a normally exposed image overlaid with a layer of asymmetrical haze, the information submerged in OE regions is enhanced by a haze removal model. With the image fusion technique, the texture recovered in the OE regions is used to restore the over-exposure. Experiments show that our method restores the submerged information well without introducing pseudo-information or over-saturation.
Fast image restoration for blurred remote sensing images is one of the central problems in optical image processing. We present a method for remote sensing image restoration using a gyroscope sensor mounted on the camera. With the motion track of the camera obtained from the gyroscope sensor, we get a better PSF (point spread function) estimate by calibrating the camera. We then use TV regularization to solve this non-blind deconvolution, which runs faster than blind deconvolution. In experiments, we established a platform to simulate the vibration of a satellite and acquired the synchronized gyroscope data during the exposure time; we then compared our restoration results with the ground truth. Our experiments show that the method performs well for blurred images caused by vibration of the imaging system.
In the process of remote sensing imaging, the obtained TDICCD images are always accompanied by distortion due to relative motion between the imaging platform and the target. Traditional image evaluation metrics such as the Structural Similarity Index Measure (SSIM) or the Peak Signal-to-Noise Ratio (PSNR) are general assessments of image quality but do not clearly evaluate the distortion level. Considering the special properties of TDICCD images, this paper proposes a robust evaluation method to quantitatively describe motion distortion. The proposed method contains three main steps: image line PSF estimation, calculation of the PSF deviation, and overall computation of the motion distortion. Numerical experiments have been conducted to simulate TDICCD motion-distorted images under different vibration conditions, and the results are then evaluated by the proposed method. The results prove that our method provides a precise and robust quantitative assessment for images with different degrees of motion distortion.
Handheld electro-optical imaging devices usually suffer from shake. In this paper, we present a fast, robust approach for real-time image stabilization. Since the performance of image stabilization mainly depends on global motion estimation, and the accuracy of motion estimation is affected when foreground motion occurs, sudden image jitters can be introduced during stabilization. To solve this problem, conventional methods detect and remove the foreground objects in motion estimation, but this approach is inefficient and fails when foreground moving objects occupy a large part of the image. Our method is based on the following improvements: modified ORB feature point (FP) processing, adaptive calculation of the affine transformation matrix, and the joint utilization of two Kalman filters. It can solve the sudden image jitter problem even when there are large foreground moving objects in the image. Qualitative and quantitative evaluations demonstrate the merits of our method. Experiments show that our method solves the large foreground motion problem and achieves 35 FPS for 640×480 images on an Intel Core i5-4590 CPU @ 3.30 GHz under Windows.
In deep space science detection and high-resolution Earth observation, a relatively high motion velocity is often generated between the optical camera and the imaging target. Images obtained during the exposure time can suffer from motion blur, which becomes one of the main obstacles to acquiring high-resolution images near the target. As an extended task of the third phase of China's lunar exploration program, flight imaging of the planned sampling area of Chang'e-5 was carried out. A dual-resolution camera with a wide field of view (FOV) camera and a narrow FOV camera was used for the imaging mission. The high flying speed produces large motion blur in the images captured by the narrow FOV camera, with blur extents of up to around 30 pixels. To deal with this problem, we analyzed the image features of the blurred images captured by the narrow FOV camera, proposed a corresponding method that estimates the image motion value from the blurred lunar image based on a small-crater detection scheme, and then adopted a regularization method to restore the image. The algorithm has been applied in the batch processing of real blurred lunar images and has achieved significant restoration results.
An imaging system's spatial quality can be expressed by the system's modulation transfer function (MTF) as a function of spatial frequency in terms of linear response theory. Methods have been proposed to assess the MTF of an imaging system using point, slit, or edge techniques. The edge method is widely used because of its low requirements on targets. However, traditional edge methods are limited by the edge angle. Besides, image noise impairs the measurement accuracy, making the measurement result unstable. In this paper, a novel measurement method based on the support vector machine (SVM) is proposed. Image patches with different edge angles and MTF levels are generated as the training set. Parameters related to the MTF and the image structure are extracted from the edge images. Trained with these image parameters and the corresponding MTF, the SVM classifier can assess the MTF of any edge image. The results show that the proposed method has excellent measuring accuracy and stability.
This paper presents a motion deblurring method which can obtain both the motion information and the recovered image based on local temporal compressive photography. In this method, video blocks are reconstructed at the corners of the image sensor during a single exposure period. The displacement vector, which is used to build the prior point spread function (PSF) for image deblurring, is then estimated from the reconstructed videos. With the prior PSF, better recovered images can be obtained with far fewer iterations. An experimental system is also presented to validate the effectiveness of the proposed method. The experimental results show that the proposed method can provide recovered images of high quality.
A fast image restoration method is proposed for vibration image deblurring based on coded exposure and vibration detection. The criterion for code sequence selection is discussed in detail, and several factors are considered in searching for the optimal coded exposure sequence. The blurred vibration image is obtained by the coded exposure technique. Meanwhile, the vibration track of the camera is detected by a fiber-optic gyroscope. The point spread function (PSF) is estimated using a statistical method with the selected code sequence and vibration track information. Finally, the blurred image is quickly restored with the estimated PSF through direct inverse filtering. Simulation experiments are conducted to test the performance of the approach with different vibration forms. A real imaging system is constructed to verify the effectiveness of the proposed algorithm. Experimental results show that the presented algorithm yields better subjective experiences and superior objective evaluation values.
This paper establishes a geometric model of multi-band mosaic imaging from the same orbit by agile satellites and introduces self-written simulation software. The geometric parameters of each band are calculated based on the attitude control ability of the satellite and the mission requirements. Considering the different ground resolutions and imaging angles of each band, two new concepts, Gradient Entropy and the Structure Similarity Parameter, are presented. These two values are used to evaluate the change in image quality caused by agility and help estimate the effect of the mission. By building the geometric model and calculating the agile information with the program, we propose a new approach for forward analysis of agile imaging, which helps users evaluate the image degradation.
Saliency extraction has become a popular topic in imaging science. One of the challenges in image saliency extraction is to detect the saliency content efficiently with a full-resolution saliency map. Traditional methods only involve computer calculation and thus result in limitations in computational speed. An optical imaging system-based visual saliency extraction method is developed to solve this problem. The optical system is built by effectively implementing an optical Fourier process with a Fourier lens to form two frequency planes for further operation. The proposed method combines optical components and computer calculations and mainly relies on frequency selection with precise pinholes on the frequency planes to efficiently produce a saliency map. Comparison shows that the method is suitable for extracting salient information and operates in real time to generate a full-resolution saliency map with good boundaries.
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts are introduced into the final HDR image if the scene is not static. In this paper, a super-pixel-grouping-based method is proposed to detect ghosting in the image sequences. We introduce the zero-mean normalized cross-correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The ZNCC is computed at the super-pixel level, and super-pixels that have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown directly on conventional display devices, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high-quality images that have fewer ghost artifacts and provide better visual quality than previous approaches.
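The super-pixel ZNCC test translates into a few lines of numpy; the super-pixel labels are assumed to come from any segmentation (e.g., SLIC), and the correlation threshold is illustrative.

```python
import numpy as np

def zncc(a, b, eps=1e-8):
    """Zero-mean normalized cross-correlation between two same-size regions."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + eps))

def ghost_weights(ref, img, labels, thresh=0.8):
    """Down-weight super-pixels that correlate poorly with the reference."""
    w = np.ones_like(img, dtype=float)
    for lab in np.unique(labels):
        m = labels == lab
        if zncc(ref[m], img[m]) < thresh:
            w[m] = 0.0       # exclude this super-pixel from the fusion weights
    return w
```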
In this paper, we present a method for single image blind deconvolution. Many common blind deconvolution methods need to first generate a salient image, whereas this paper presents a novel L0 sparse expression to directly solve this ill-posed problem. It has no need to filter the blurred image as a restoration step and can use the gradient information as a fidelity term during optimization. The key to the blind deconvolution problem is to estimate an accurate kernel. First, based on an L2 sparse expression using the gradient operator as a prior, the kernel can be estimated roughly and efficiently in the frequency domain. We adopt a multi-scale scheme which estimates the blur kernel from coarser to finer levels. After the kernel is estimated at each level, the L0 sparse representation is employed as the fidelity term during restoration. After derivation, the L0 norm can be approximately converted into a sum term and an L1 norm term, which can be addressed by the Split-Bregman method. Using the estimated blur kernel and the TV deconvolution model, the final restored image is obtained. Experimental results show that the proposed method is fast and can accurately reconstruct the kernel, especially when the blur is motion blur, defocus blur, or a superposition of the two. The restored image is of higher quality than that of several state-of-the-art algorithms.
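The coarse frequency-domain kernel estimate with an L2 prior on gradients has a standard closed form, sketched below; the L0 restoration and Split-Bregman steps are not included, and the regularization weight gamma is illustrative.

```python
import numpy as np

def estimate_kernel(blurred, latent, ksize=31, gamma=1e-2):
    """Closed-form kernel estimate in the frequency domain:
    k = argmin ||grad(latent) * k - grad(blurred)||^2 + gamma*||k||^2."""
    num = np.zeros(blurred.shape, dtype=complex)
    den = np.zeros(blurred.shape)
    for axis in (0, 1):              # accumulate both gradient directions
        gb = np.fft.fft2(np.gradient(blurred, axis=axis))
        gl = np.fft.fft2(np.gradient(latent, axis=axis))
        num += np.conj(gl) * gb
        den += np.abs(gl) ** 2
    k = np.real(np.fft.ifft2(num / (den + gamma)))
    k = np.fft.fftshift(k)           # bring the kernel to the array center
    cy, cx = k.shape[0] // 2, k.shape[1] // 2
    k = k[cy - ksize // 2:cy + ksize // 2 + 1,
          cx - ksize // 2:cx + ksize // 2 + 1]
    k = np.clip(k, 0, None)          # enforce non-negativity
    return k / k.sum()               # enforce unit energy
```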
Training over-complete dictionaries that facilitate a sparse representation of the image leads to state-of-the-art results in compressed sensing image restoration. The training sparsity must be specified during training, while the recovering sparsity must also be set during image recovery. We find that the recovering sparsity has significant effects on the image reconstruction quality. To further improve the accuracy of compressed sensing image recovery, we propose a method that optimally estimates the recovering sparsity from the training sparsity to control the reconstruction, so that better reconstruction results can be achieved. The method consists of three procedures. Firstly, the possible sparsity range is forecast by analyzing a large test data set; we find that the possible sparsity is always 3 to 5 times the training sparsity. Secondly, to precisely estimate the optimal recovering sparsity, we randomly choose only a few samples from the compressed sensing measurements and use the sparsity candidates in the possible sparsity set to reconstruct the original image patches. Thirdly, the sparsity corresponding to the best recovered result is chosen as the optimal recovering sparsity for image reconstruction. The computational cost of the estimation is relatively small, and the reconstruction results can be much better than those of the traditional method. The experimental results show that the PSNR of images recovered with our estimation method can be up to 4 dB higher than that of the traditional method without sparsity estimation.
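A compact sketch of the estimation procedure, assuming a minimal OMP recovery. Because the OMP residual alone decreases monotonically with sparsity, a held-out measurement-consistency check stands in here for the paper's "best recovered result" criterion; the measurement matrix Phi, dictionary D, and sample measurement vectors are assumed given.

```python
import numpy as np

def omp(A, y, k):
    """Minimal orthogonal matching pursuit: k-sparse code of y over A (k >= 1)."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        x, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ x
    coef = np.zeros(A.shape[1])
    coef[idx] = x
    return coef

def estimate_recovering_sparsity(Phi, D, samples, train_k, n_val=8):
    """Try candidate sparsities of 3x-5x the training sparsity on a few sampled
    measurements; keep the one most consistent with held-out measurements."""
    A = Phi @ D
    best_k, best_err = None, np.inf
    for k in range(3 * train_k, 5 * train_k + 1):
        err = 0.0
        for y in samples:
            coef = omp(A[:-n_val], y[:-n_val], k)        # fit on most rows
            err += np.linalg.norm(A[-n_val:] @ coef - y[-n_val:])  # validate
        if err < best_err:
            best_k, best_err = k, err
    return best_k
```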
The spatial resolution of imaging systems for airborne and space-borne remote sensing is often limited by image degradation resulting from mechanical vibrations of the platform during image exposure. A straightforward way to overcome this problem is to actively stabilize the optical axis or drive the focal plane synchronously with the image motion during exposure. Such a stabilization imaging system usually consists of digital image motion estimation and micromechanical compensation. The performance of this kind of visual servo system is closely related to the precision of the motion estimation and the time delay. A large time delay results in a larger phase lag between motion estimation and micromechanical compensation, leading to larger uncompensated residual motion and limited bandwidth. This paper analyzes the time delay caused by the image acquisition period and introduces a time delay compensation method based on SVM (Support Vector Machine) motion prediction. The main idea for canceling the time delay is to predict the current image motion from delayed measurements. A support vector machine based method is designed to predict the image motion. A prototype stabilization imaging system has been implemented in the lab. To analyze the influence of time delay on system performance and to verify the proposed time delay cancelation method, comparative experiments over various vibration frequencies were conducted. The experimental results show that the accuracy of motion compensation and the bandwidth of the system can be significantly improved with time delay cancelation.
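The prediction idea can be sketched with scikit-learn's SVR: learn to map a window of delayed motion measurements to the motion delay_steps samples ahead. The lag count and SVR hyperparameters are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR

def fit_motion_predictor(motion_history, delay_steps, n_lags=8):
    """Train an SVR to predict motion delay_steps ahead from lagged samples."""
    X = np.array([motion_history[i:i + n_lags]
                  for i in range(len(motion_history) - n_lags - delay_steps)])
    y = np.asarray(motion_history[n_lags + delay_steps:])
    model = SVR(kernel='rbf', C=10.0, epsilon=0.01)
    model.fit(X, y)
    return model

def predict_current_motion(model, recent_samples):
    """Cancel the acquisition delay: estimate the current image motion from
    the latest (delayed) n_lags measurements."""
    return float(model.predict(np.asarray(recent_samples)[None, :])[0])
```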
A wavefront that passes through the atmosphere suffers varying degrees of distortion due to atmospheric disturbances, defocus, aberrations, etc. Wavefront distortion results in image degradation. Conventional methods typically use adaptive optics to correct the degradation; the correction system is complex and requires three parts, wavefront detection, wavefront reconstruction, and wavefront correction, each of which requires very precise control. To simplify the system structure, we use a Hartmann-Shack wavefront sensor to obtain the wavefront information and then reconstruct the degraded image using a software restoration method. The paper introduces the background and significance of the Hartmann-Shack wavefront sensor and summarizes the wavefront reconstruction principle. We then analyze the general model of the optical transfer function (OTF) and the way to calculate the OTF of a diffraction-limited incoherent imaging system. Since in practice wavefront distortion is unavoidable, we derive the method to calculate the OTF with wavefront distortion. Based on different wavefront detection errors and the resulting image restoration quality, we conclude the maximum allowable detection error under different wavefront peak values.
The fusion of infrared and visible light images can effectively improve the ability to describe details and represent hot targets. For this purpose, a novel image fusion algorithm based on the nonsubsampled shearlet transform (NSST) is presented in this paper. Firstly, the NSST is adopted to decompose the two source images at different scales and directions, yielding the low-frequency and high-frequency sub-band coefficients of the images. Secondly, modified fusion rules are used. For the low-frequency coefficients of the fused image, we sum the low-frequency coefficients of the two source images and then subtract the average of the mean values of the two low-frequency coefficients. Meanwhile, considering that adjacent pixels are strongly correlated, an improved selection principle based on local energy matching is developed for the high-frequency coefficients of the fused image, which is also consistent with the characteristics of the human visual system. Finally, the fused image is reconstructed by performing the inverse NSST on the combined coefficients. Experimental results demonstrate that the proposed algorithm can effectively integrate important information from infrared and visible light images. Compared with some other image fusion algorithms, the proposed algorithm further enhances the contrast of the fused images and preserves more detail from the source images. Both the visual quality and the objective evaluation criteria show that the method has higher performance.
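The two fusion rules stated above translate almost directly into code. The NSST decomposition and reconstruction themselves are assumed to be provided by a shearlet toolbox; the local-energy window size is illustrative, and the "local energy matching" rule is sketched here as a larger-local-energy selection.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low(low_a, low_b):
    """Low-frequency rule from the paper: sum the two sub-bands, then
    subtract the average of their mean values."""
    return low_a + low_b - 0.5 * (low_a.mean() + low_b.mean())

def fuse_high(high_a, high_b, win=7):
    """High-frequency rule (sketch): take coefficients from the sub-band
    with larger local energy over a small neighborhood."""
    ea = uniform_filter(high_a ** 2, size=win)
    eb = uniform_filter(high_b ** 2, size=win)
    return np.where(ea >= eb, high_a, high_b)
```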
How to effectively remove the noise in infrared images while preserving details is a significant but difficult problem in infrared image processing. Various methods have been proposed to obtain good results. However, these algorithms usually cannot distinguish noise from detail efficiently, which leads to smoothing away some details in infrared images. Recently, a novel local measure called relative total variation (RTV) has been proposed for effective texture removal. The RTV measure combines a general windowed total variation measure and a novel inherent variation measure to smooth image texture effectively while preserving the main structure. In this paper, using the detail-preserving smoothing method via RTV, a multi-scale denoising algorithm for infrared images is proposed. Firstly, the infrared image is decomposed into several scales by the non-subsampled contourlet transform (NSCT). The NSCT decomposition performs no down-sampling or up-sampling, so each sub-band retains the full image size. Secondly, the algorithm applies the RTV-based detail-preserving denoising method to each decomposed layer, with different smoothing parameters used to adjust the denoising levels at different scales. Finally, different synthetic weights are applied to the layers to reconstruct the final infrared denoising result. Compared with other infrared denoising approaches, the quantitative comparisons demonstrate that the proposed method can suppress the noise of infrared images well while preserving edge details effectively. Both the visual quality and the objective measures show that this method is efficient and well suited to infrared image denoising.
Due to the low contrast, lack of details, and difficulty of distinguishing targets from the background in traditional infrared (IR) imaging systems, the detection and recognition probability of camouflaged infrared targets is relatively low. Compared with traditional IR imaging, polarimetric imaging uses polarization information, which can help detect and isolate man-made objects from the natural environment in complex scenes. A method of infrared polarimetric imaging is proposed in this paper. The experiment builds an IR polarimetric imaging system: an IR polarizer made of BaF2 is mounted in front of the IR camera. By rotating the IR polarizer, twelve polarization images are obtained, one every thirty degrees. The gray levels of the images are calculated by a program. The Stokes polarization vector representation is introduced to calculate the Stokes parameter I and the degree of linear polarization (DoLP) from the polarization images. According to the characteristics of the Stokes parameter I and the DoLP, we propose an IR polarization fusion method based on shearlets using regional saliency analysis. This method can highlight the target area and performs well in fusing IR radiation information and IR polarization characteristics. To test the effectiveness of this method, we use mid-wave infrared (MWIR) and long-wave infrared (LWIR) cameras to capture real images. Compared with the original image, both the subjective and objective evaluation results indicate that the enhanced images obtained by our method contain much more image detail and polarization information, which is useful for target detection and recognition.
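Assuming an ideal rotating polarizer, the Stokes parameters follow from a least-squares fit of the Malus-type model to the twelve orientations; a minimal numpy sketch:

```python
import numpy as np

def stokes_from_rotating_polarizer(images, angles_deg):
    """Fit I(theta) = 0.5*(S0 + S1*cos(2*theta) + S2*sin(2*theta)) per pixel,
    then DoLP = sqrt(S1^2 + S2^2) / S0."""
    th = np.deg2rad(np.asarray(angles_deg))
    A = 0.5 * np.stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)],
                       axis=1)                        # (n_angles, 3)
    pix = np.stack([im.ravel() for im in images])     # (n_angles, n_pixels)
    S, *_ = np.linalg.lstsq(A, pix, rcond=None)       # (3, n_pixels)
    shape = images[0].shape
    S0, S1, S2 = (s.reshape(shape) for s in S)
    dolp = np.sqrt(S1 ** 2 + S2 ** 2) / np.maximum(S0, 1e-6)
    return S0, dolp

# e.g. angles_deg = range(0, 360, 30) for the twelve 30-degree steps
```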
Hyperspectral imagery typically possesses high spectral resolution but low spatial resolution. One way to enhance the spatial resolution of a hyperspectral image is to fuse its spectral information with the spatial information of another high-resolution image. In this paper, we propose a novel image fusion strategy for a hyperspectral image and a high-spatial-resolution panchromatic image, based on the curvelet transform. Firstly, a synthesized image is determined from specified RGB bands of the original hyperspectral image according to the optimal index factor (OIF) model. The IHS transform is then used to extract the intensity component of the synthesized image, after which histogram matching is performed between the intensity component and the panchromatic image. Thirdly, the curvelet transform is applied to decompose the two source images (the intensity component and the panchromatic image) at different scales and directions, and different fusion strategies are applied to the coefficients at the various scales and directions. Finally, the fused image is obtained by the inverse IHS transform. The experimental results show that the proposed method has superior performance. Compared with traditional methods such as the PCA transform, wavelet- or pyramid-based methods, and multi-resolution fusion methods (shearlet or contourlet decomposition), the fused image achieves the highest entropy index and average gradient value. While providing better human visual quality, a good correlation coefficient index indicates that the fused image keeps good spectral information. Both the visual quality and the objective evaluation criteria demonstrate that this method preserves both the spatial quality and the spectral characteristics well.
Development of CCD sensors calls for rapid star image processing, and the basic task is to extract star points correctly. The typical histogram of a star image has a two-peak structure, and the majority of the pixels belong to the background. In this paper, the noise of the image is removed with an adaptive Wiener filter. The background and the star points are distinguished using a window transform on the pixel gray values; all pixels whose gray values are below the threshold are set to zero, where the threshold is determined with the optimal iterative algorithm or the Otsu algorithm. We locate the star points with an improved cross-projection algorithm. Firstly, the star image is preprocessed into a binary image with the threshold determined above. Then the horizontal and vertical projections of the binary image are calculated. In each projection direction, the domain bounds of the star points are extracted with the proposed binarization differential extremum method, which differs from the traditional marker method. Finally, the precise positions of the star points are calculated with the weighted centroid algorithm. Experiments on simulated and real star images show that the improved algorithm has a high processing speed (~300 ms) and good centroid locating accuracy.
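The projection-and-centroid stage can be sketched in numpy as follows; the proposed binarization differential extremum method is replaced here by a plain run detection on the projections, so this only illustrates the overall flow.

```python
import numpy as np

def star_centroids(img, threshold):
    """Locate star points: threshold, project, segment, weighted centroid."""
    work = np.where(img > threshold, img, 0)
    binary = work > 0

    def segments(profile):
        """1-D runs of nonzero projection values -> (start, end) bounds."""
        on = np.flatnonzero(profile > 0)
        if on.size == 0:
            return []
        splits = np.flatnonzero(np.diff(on) > 1)
        starts = np.r_[on[0], on[splits + 1]]
        ends = np.r_[on[splits], on[-1]]
        return list(zip(starts, ends + 1))

    cents = []
    for r0, r1 in segments(binary.sum(axis=1)):            # vertical proj.
        for c0, c1 in segments(binary[r0:r1].sum(axis=0)):  # horizontal proj.
            patch = work[r0:r1, c0:c1].astype(float)
            tot = patch.sum()
            if tot == 0:
                continue
            ys, xs = np.mgrid[r0:r1, c0:c1]
            # gray-weighted centroid within the detected bound
            cents.append(((ys * patch).sum() / tot,
                          (xs * patch).sum() / tot))
    return cents
```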
To stitch remote sensing images seamlessly without producing the visual artifacts caused by severe intensity discrepancy and structure misalignment, we modify the original structure-deformation-based stitching algorithm, which has two main problems. Firstly, using the Poisson equation to propagate deformation vectors changes the topological relationship between the key points and their surrounding pixels, which may introduce wrong image characteristics. Secondly, the diffusion area of the sparse matrix is too limited to rectify the global intensity discrepancy. To solve the first problem, we adopt a spring-mass model and introduce an external force to keep the topological relationship between key points and their surrounding pixels. To solve the second problem, we apply the tensor voting algorithm to obtain the global intensity correspondence curve of the two images. Both simulated and experimental results show that our algorithm is faster and achieves better results than the original algorithm.
Due to the vibration of satellite platforms, image degradation has greatly hindered the development of High Resolution Earth Observation (HREO) systems. The Modulation Transfer Function (MTF) is an effective means to measure image quality and can quantitatively analyze the degradation of image quality caused by satellite platform vibration. This paper systematically analyzes, in three parts, how the vibration of the satellite platform affects image quality. Firstly, the basic law of satellite vibration is clarified, and the relationships between image displacements and the vibration of the space-borne camera in all six degrees of freedom (DOF) are demonstrated in detail. Secondly, the mechanism of optical degradation for remote staring imaging under vibration is analyzed, and then, deriving from the Optical Transfer Function (OTF), a formula for calculating the MTF of an arbitrary known vibration is identified. Finally, a properly designed semi-physical simulation system is built. Loading a certain vibration parameter as the vibration source, the system produces a simulated degraded image. The slanted-edge method for calculating the MTF of an image is introduced for comparison. Experiments show that the MTFs calculated from the derived formula and obtained by the slanted-edge method from the simulated degraded image agree well, which verifies the reliability of the proposed theoretical analysis.
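For an arbitrary known vibration, the motion PSF is the dwell-time histogram of the image displacement over the exposure, and the MTF is the magnitude of its Fourier transform; a minimal 1-D numpy sketch (not the paper's exact derivation):

```python
import numpy as np

def vibration_mtf(displacement, n_bins=256):
    """MTF of a known vibration trace: PSF = dwell-time histogram of the
    image displacement (in pixels) during the exposure; MTF = |FT(PSF)|."""
    d = np.asarray(displacement, dtype=float)
    lo, hi = d.min(), d.max()
    psf, _ = np.histogram(d, bins=n_bins, range=(lo - 0.5, hi + 0.5))
    psf = psf / psf.sum()
    mtf = np.abs(np.fft.rfft(psf, n=4 * n_bins))   # zero-pad for smoothness
    mtf /= mtf[0]
    bin_width = (hi - lo + 1.0) / n_bins           # pixels per bin
    freq = np.fft.rfftfreq(4 * n_bins, d=bin_width)  # cycles per pixel
    return freq, mtf

# sanity check: a sinusoidal trace of amplitude A should follow |J0(2*pi*f*A)|
```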
In this paper, we present a hybrid system for time delay and integration (TDI) image restoration. The images are degraded by residual motion, excluding the nominal along-track scanning motion, which distorts and blurs the TDI images during exposure. The motion trajectory is estimated from the image sequence captured by an auxiliary high-speed camera. To make the estimated results less sensitive to the imaging conditions and noise, a new method based on cross-correlation is introduced for motion estimation. The geometric distortion of the TDI image is then removed by choosing the correct blurred block according to the center of the corresponding motion trajectory, and the final image is restored row by row with the Richardson-Lucy algorithm. Simulated and experimental results are given to prove the effectiveness of our system.
The joint transform optical correlator (JTOC) is an effective motion detection tool, and the quality of the spectrogram has great influence on the detection accuracy. In this paper, we constructed simulation software for the JTOC and used two images with known displacement as the experimental objects. We gradually increased the noise in the spectrogram and then compared the detection data under noise conditions with the real data to test how strongly the noise influences the detection accuracy. The test results show that when the noise variance is small, the influence of noise is negligible; when the noise variance exceeds 0.8, the influence of noise increases gradually; and when the noise variance exceeds 1.29, the noise directly causes failure of the joint transform optical correlator.
This paper presents a windowed phase correlation algorithm for subpixel motion estimation. Motion estimation methods are crucial for video coding, image stabilization, image deblurring, micro-mechanical motion compensation, etc. Conventional phase-based correlation algorithms usually suffer reduced precision or bias error due to aliasing and edge effects in real sampled imaging systems. A window function is applied to the images in the spatial domain before the Fourier transformation to suppress frequency leakage. Furthermore, unreliable frequencies due to aliasing errors are masked out in the frequency domain. Experiments show that, compared to conventional phase-based correlation algorithms, the proposed approach yields improved accuracy and superior precision for motion estimation in the presence of aliasing and edge effects in real imaging systems.
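A minimal numpy sketch of windowed phase correlation (the frequency masking step is omitted): a Hann window suppresses edge leakage before the cross-power spectrum is formed, and a parabolic fit around the correlation peak gives the subpixel shift. The sign convention of the returned shift depends on the definition used.

```python
import numpy as np

def windowed_phase_correlation(a, b, eps=1e-8):
    """Subpixel translation between frames a and b via the windowed
    cross-power spectrum and parabolic peak interpolation."""
    h, w = a.shape
    win = np.outer(np.hanning(h), np.hanning(w))   # suppress edge leakage
    Fa, Fb = np.fft.fft2(a * win), np.fft.fft2(b * win)
    cps = Fa * np.conj(Fb)
    cps /= np.abs(cps) + eps                       # whiten: keep phase only
    corr = np.real(np.fft.ifft2(cps))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def subpel(c, p, n):
        """Parabolic vertex through the peak and its two neighbors."""
        l, r = c[(p - 1) % n], c[(p + 1) % n]
        denom = 2 * c[p] - l - r
        return p + 0.5 * (r - l) / denom if denom != 0 else float(p)

    dy, dx = subpel(corr[:, px], py, h), subpel(corr[py, :], px, w)
    # wrap to signed shifts
    dy = dy - h if dy > h / 2 else dy
    dx = dx - w if dx > w / 2 else dx
    return dy, dx
```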
Classical image restoration is mostly based on image deconvolution under the assumptions of linear system transformation, stationary signal statistics, and stationary, signal-independent noise. Unfortunately, these assumptions are not always accurate in real problems. For example, optical aberrations, local defocus, local motion blur, temperature variation, flexible media, and non-stationary platforms all cause different, uncertain degradation in different areas of the image. Therefore, overlapping-region sectioned restoration is suggested to reconstruct such blurred images with a space-variant point spread function (SVPSF). First of all, the full image is divided into several sub-sections, in each of which the PSF is nominally space-invariant (SI). After restoration with an SI algorithm, the sub-frames are spliced to construct the composite full frame. Moreover, overlapping extension is employed to isolate the edge-ringing effects of circular convolution between the different restored sub-frames. In this paper, with the help of the SSIM (Structural Similarity) and GRM (Gradient Ringing Matrix) image quality assessment approaches, we discuss the selection of the overlapping region for sectioned restoration with different algorithms, for images with signal-to-noise ratios (SNR) from 25 dB to 40 dB. Our investigation shows that the restored image quality is best when the overlapping region is as wide as the energy-distribution area of the degradation function.
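The sectioned scheme can be sketched as follows, with a simple Wiener filter standing in for whichever SI restoration algorithm is used. Here psf_at is an assumed lookup returning the local PSF, taken to be origin-aligned (ifftshift a centered PSF first), and the tile and overlap sizes are illustrative.

```python
import numpy as np

def wiener(tile, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution of one tile with its local PSF.
    Assumes the PSF array is aligned with the origin."""
    H = np.fft.fft2(psf, s=tile.shape)
    G = np.fft.fft2(tile)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + nsr) * G))

def sectioned_restore(img, psf_at, tile=128, overlap=16):
    """Overlapping-region sectioned restoration for a space-variant PSF:
    deconvolve extended tiles with a locally space-invariant PSF, then keep
    only each tile's interior to isolate edge ringing."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img, overlap, mode='reflect')
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            # extended tile including the overlapping margins
            ext = pad[y0:y1 + 2 * overlap, x0:x1 + 2 * overlap]
            psf = psf_at(y0 + tile // 2, x0 + tile // 2)  # local PSF lookup
            rest = wiener(ext, psf)
            # discard the margins, keep the interior
            out[y0:y1, x0:x1] = rest[overlap:overlap + (y1 - y0),
                                     overlap:overlap + (x1 - x0)]
    return out
```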
We designed and fabricated a test apparatus to analyze the performance characteristics of optical and electronic image stabilization. The imaging system (a digital video camera with an image stabilization function) was fixed on a platform; the vibration frequency of the platform varies with the input voltage of the electrical motor, and the vibration amplitude of the platform is changed through position adjustment of the motor shaft. We started the vibration platform and acquired an ordinary image sequence, then turned on the stabilizer and recorded an image sequence under optical stabilization; afterwards, the optical stabilizer was turned off, motion detection and compensation were applied to the acquired image frames, and the image sequence with electronic image stabilization was obtained. We analyzed and processed the two kinds of image sequences from the test apparatus and drew some conclusions about the performance characteristics of image stabilizers. The electronic image stabilization effect is better at low frequencies, and the optical image stabilization effect is better at high frequencies. Furthermore, the improvement in the degree of image stability achieved by electronic image stabilization is basically unrelated to the vibration frequency, while the improvement achieved by optical image stabilization increases significantly with the vibration frequency.
Because the remote sensing camera flies in orbit, the successive images of the scenes captured by the camera differ at any two moments, so it is difficult for the camera to implement autofocus. In this paper, an auto-focusing method for the remote sensing camera is proposed based on two images captured in a short time that have an overlapped region where the scene is the same. Firstly, the space camera, moving in orbit, shoots one picture every time it adjusts its focus, so we obtain a sequence of images after several adjustments, from which the displacement and the overlapped regions of two adjacent images can be calculated by an image registration algorithm. We take every two adjacent images as a group. Therefore, every image has a focusing accuracy value obtained by applying a sharpness evaluation function to the overlapped region of each image. Finally, according to the transfer characteristic of the evaluation values of every two partly overlapped images, we unify the evaluation values into a single merit evaluation system and find the maximum value of the image evaluation values in that system, thereby finding the accurate focus. Simulation experiments show that this method works well for autofocusing when relative motion between the camera and the object exists. This method can be used in aerial cameras and remote sensing cameras.
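One plausible reading of the evaluation-value transfer is sketched below: since the two images of each adjacent pair see the same scene in their overlap, the ratio of their focus measures on that overlap links the values into one chain. The gradient-energy focus measure and the data layout are assumptions, not the paper's exact definitions.

```python
import numpy as np

def sharpness(region):
    """Simple gradient-energy focus measure on an image region."""
    gy, gx = np.gradient(region.astype(float))
    return float((gx ** 2 + gy ** 2).mean())

def unified_focus_values(overlaps):
    """Chain per-pair focus measures into one merit system (sketch).
    overlaps[i] = (region in image i, same scene region in image i+1)."""
    unified = [1.0]
    for r_cur, r_next in overlaps:
        # the ratio on shared scene content transfers the evaluation value
        ratio = sharpness(r_next) / max(sharpness(r_cur), 1e-12)
        unified.append(unified[-1] * ratio)
    return unified   # the argmax gives the best-focused focus position
```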
The image resolution of imaging systems for remote sensing is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposures. Since the form of the platform vibration can be arbitrary, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A deblurring method which combines motion estimation and image deconvolution for both area-array and TDI remote sensing is proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from the estimated image motion vectors. Eventually, the clear image can be recovered from the blurred image of the prime camera with the constructed PSF by the Richardson-Lucy (RL) iterative deconvolution algorithm. The image deconvolution for the area-array detector is direct, while for the TDICCD detector, an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed concept is convincing: blurred and distorted images can be properly recovered, not only for visual observation but also with a significant increase in objective evaluation.