The fully convolutional network (FCN) is a powerful visual model for extracting features from images. We improve a network model that can be trained end-to-end, pixel-to-pixel, to extract target motion trajectories from infrared images. Our training data come from a simulation dataset built by combining a public infrared dataset with simulated trajectories. To enhance the model's robustness, we add salt-and-pepper noise and white noise to the simulated images and apply image augmentation to enlarge the training set. The model achieves high training and test accuracy on our simulation dataset.
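As an illustration of the augmentation step, the sketch below adds salt-and-pepper and Gaussian white noise to a normalized image and yields flipped and rotated copies. The function names, noise levels, and transform set are our own assumptions, not the authors' implementation.

```python
import numpy as np

def salt_and_pepper(img, amount=0.05, rng=None):
    """Flip a fraction `amount` of pixels to pure black or white."""
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.choice([0.0, 1.0], size=mask.sum())
    return out

def white_noise(img, sigma=0.02, rng=None):
    """Add zero-mean Gaussian noise and clip to the valid [0, 1] range."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def augment(img, rng=None):
    """Yield flipped/rotated noisy copies to enlarge the training set."""
    rng = rng or np.random.default_rng(0)
    for base in (img, np.fliplr(img), np.flipud(img), np.rot90(img)):
        yield white_noise(salt_and_pepper(base, rng=rng), rng=rng)
```

Noise is injected after the geometric transform so each augmented sample carries an independent noise realization.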
Clouds are weak, often uninformative regions that inevitably appear in remote sensing images, and they greatly limit the development of remote sensing applications. Accurate and automatic detection of clouds in satellite scenes is therefore a key problem for the application of remote sensing imagery. Most previous methods rely on low-level cloud features and often produce erroneous results, especially for thin clouds or in complex scenes. In this paper, we propose a novel cloud detection method for remote sensing images based on a deep learning framework. The designed deep Convolutional Neural Network (CNN), which can mine deep cloud features, consists of three convolutional layers and three fully connected layers. Using this network, we predict the probability that each image patch belongs to a cloud region and then generate a cloud probability map for the image. To demonstrate the effectiveness of the method, we test it on Landsat-8 satellite images. The overall accuracy of the proposed method for cloud detection exceeds 95%. Experimental results indicate that both thin and thick clouds are well detected, with high accuracy and robustness.
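The patch-to-probability-map pipeline can be sketched as follows. Here `predict_patch` is a hypothetical brightness-based logistic score standing in for the abstract's three-convolution, three-fully-connected CNN, which would replace it in the real system; window size and stride are also assumptions.

```python
import numpy as np

def predict_patch(patch):
    """Stand-in for the trained CNN: a logistic score on mean brightness
    (clouds are bright in most Landsat-8 bands). Purely illustrative."""
    return 1.0 / (1.0 + np.exp(-(patch.mean() - 0.5) * 10.0))

def cloud_probability_map(image, patch=8, stride=4):
    """Slide a window over the scene and record per-patch cloud probability."""
    h, w = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    prob = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            win = image[i * stride:i * stride + patch,
                        j * stride:j * stride + patch]
            prob[i, j] = predict_patch(win)
    return prob
```

Thresholding the resulting map then yields a binary cloud mask.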
Super-resolution based on sparse reconstruction is an effective way to handle the closely spaced objects problem; however, when the targets lie in a noisy environment, the noise spreads over the entire field and destroys the sparsity of the original scene. To address this, this paper proposes a super-resolution method with adaptive reconstruction ability in noisy environments. The method takes full advantage of the structural characteristics of the sensor and of the reconstruction algorithm's parameters. First, an infrared imaging model of the observed signals is established together with a pixel mesh; then a sparse representation of the positions and amplitudes of the closely spaced objects is formed, and the point spread function of the optical system is used to build an over-complete dictionary. Finally, the reconstruction parameters are kept in a reasonable range by controlling the ratio of non-zero elements in the reconstructed scene, thereby suppressing noise interference and reconstructing the sparse targets accurately. Simulation results show that the proposed method reconstructs the scene adaptively in noisy environments.
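A minimal sketch of the reconstruction idea, assuming a 1-D scene, a Gaussian PSF, and orthogonal matching pursuit as the sparse solver: the dictionary columns are the PSF centred at each fine-grid position, and the stopping rule caps the ratio of non-zero elements in the rebuilt scene, as the abstract describes. All names and parameter values are illustrative, not the paper's.

```python
import numpy as np

def psf_dictionary(n_pix, n_grid, sigma=1.5):
    """Over-complete dictionary: the optical PSF centred at each fine-grid
    position, sampled on the detector pixels, columns normalized."""
    pix = np.arange(n_pix)[:, None]
    grid = np.linspace(0, n_pix - 1, n_grid)[None, :]
    D = np.exp(-0.5 * ((pix - grid) / sigma) ** 2)
    return D / np.linalg.norm(D, axis=0)

def sparse_reconstruct(D, y, max_ratio=0.05):
    """Orthogonal matching pursuit; sparsity is enforced by limiting the
    ratio of non-zero elements in the reconstructed scene."""
    n = D.shape[1]
    k_max = max(1, int(max_ratio * n))
    support, r = [], y.copy()
    for _ in range(k_max):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x
```

Because the atom count is capped, isolated noise energy cannot be absorbed into the solution, which is the denoising mechanism the abstract alludes to.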
The imagery vendors of the most advanced remote sensing satellites usually provide only the coefficients of the rational function model (RFM) in place of the sensor model and the precise imaging parameters (orbit parameters, attitude parameters, and so on). The rigorous imaging model is therefore of limited use in the geometric correction of remote sensing images. The RFM method achieves good correction performance in most cases; however, when an image contains few or unevenly distributed ground control points (GCPs), as is common for infrared imagery, the RFM method cannot reach the expected performance. Therefore, a geometric correction method for linear pushbroom infrared imagery using compressive sampling (CS) is proposed. The core idea is to use equivalent bias angles to approximate the influence of errors in the imaging process (thermal distortion, optical distortion, assembly error, satellite orbit errors, attitude errors, and so on) and to adopt the CS method to recover the equivalent bias angle signals. In conventional methods, most data are processed scene by scene, with enough GCPs required for each scene; this restriction is removed by exploiting the sparsity of the equivalent bias angle signals. Infrared images from the Hyperion instrument on EO-1 are used as experimental data, and the results demonstrate the feasibility and superior performance of the proposed method.
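The recovery step can be sketched as follows, under two assumptions of ours (not stated in the abstract): that the equivalent bias-angle profile along the image lines is sparse in a DCT basis, and that orthogonal matching pursuit serves as the CS solver. Sparse GCP measurements at scattered image lines then suffice to recover the profile for every line.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis; slowly varying bias-angle drifts are
    sparse in this basis (an assumption of this sketch)."""
    k = np.arange(n)
    B = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
    B[:, 0] *= 1.0 / np.sqrt(2.0)
    return B * np.sqrt(2.0 / n)

def recover_bias_angles(samples, sample_lines, n_lines, k=5):
    """Recover the full bias-angle profile from GCP measurements at a few
    image lines, via orthogonal matching pursuit in the DCT basis."""
    B = dct_basis(n_lines)
    A = B[sample_lines, :]            # measurement matrix: observed rows only
    support, r = [], samples.copy()
    for _ in range(k):
        if np.linalg.norm(r) < 1e-10:
            break                     # measurements already explained
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], samples, rcond=None)
        r = samples - A[:, support] @ coef
    x = np.zeros(n_lines)
    x[support] = coef
    return B @ x                      # bias angle at every image line
```

This mirrors how the sparsity of the bias-angle signal lets the method dispense with dense, per-scene GCP coverage.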