The attention mechanism in deep learning acts as an information selection mechanism: its goal is to select the information most critical to the current task. In hyperspectral classification, distinguishing some categories depends on subtle differences, yet most classification methods lack the expressive power to discriminate such fine differences between categories. In this paper, a classification method based on group attention is proposed to enhance the between-category differences of hyperspectral data. First, we slice each hyperspectral sample into several groups along the spectral channels and extract CNN features for each group. Then we use an attention module to obtain an attention weight for each spectral group. Finally, a "feature recalibration" strategy is used to recalibrate the spectral-group CNN features. Experiments show that the proposed approach improves the classification accuracy for categories with subtle differences.
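The group-wise "feature recalibration" pipeline above can be sketched in a squeeze-and-excitation style: pool each spectral group into a descriptor, score the descriptors with a small network, and rescale each group by its attention weight. This is a minimal illustrative sketch, not the paper's exact module; the function name, the two-layer scoring MLP, and the weight shapes `w1`, `w2` are assumptions introduced here.

```python
import numpy as np

def group_attention_recalibrate(x, n_groups, w1, w2):
    """Recalibrate spectral-group features with attention weights.

    x        : (C, H, W) feature cube, C divisible by n_groups.
    w1, w2   : weights of a small two-layer scoring MLP
               (hypothetical shapes: (hidden, n_groups) and (n_groups, hidden)).
    Returns the recalibrated cube and the per-group attention weights.
    """
    C, H, W = x.shape
    g = C // n_groups
    groups = x.reshape(n_groups, g, H, W)
    # "squeeze": global average pooling gives one descriptor per group
    desc = groups.mean(axis=(1, 2, 3))                  # (n_groups,)
    # "excitation": tiny MLP + sigmoid gives a weight in (0, 1) per group
    hidden = np.maximum(0.0, w1 @ desc)                 # ReLU
    scores = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # sigmoid
    # feature recalibration: scale every channel in a group by its weight
    out = groups * scores[:, None, None, None]
    return out.reshape(C, H, W), scores
```

In practice the MLP weights would be learned end-to-end with the CNN; the sketch only shows the data flow of the recalibration step.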
We use domestic and foreign meteorological satellite data to study operational regional meteorology for optical imaging terminal guidance. Attacks on cloud-covered areas fall into two scenarios: 1. the clouds are mid- or high-level clouds, whose cloud base height is generally above 2500 meters, so they have little influence on optical imaging terminal guidance; 2. low clouds cover the area only partially, in which case the clouds can be detected and segmented so that they are avoided when striking the target. We train a machine learning model to classify clouds as multi-layer or single-layer, reaching a classification accuracy of 82.1%. For single-layer clouds, there are then two methods to estimate the cloud base height: 1. use MODIS data from the Aqua meteorological satellite to identify clouds with different attributes and estimate their heights; 2. compute the height of single-layer clouds directly from the physical characteristics of the clouds, with an average calculation error of 16.5%.
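One common physical approximation for cloud base height (shown here as an illustration only; the abstract does not specify which physical characteristics its method uses) assumes a constant atmospheric lapse rate: air cools by roughly 6.5 K per kilometer, so the height at which the temperature drops to the cloud-base temperature can be computed from the surface temperature.

```python
def cloud_base_height(t_surface_k, t_cloud_base_k, lapse_rate_k_per_km=6.5):
    """Estimate cloud base height above ground (km) from a constant
    lapse-rate approximation: height = (T_surface - T_cloud_base) / lapse_rate.
    Temperatures are in kelvin; 6.5 K/km is the standard-atmosphere value."""
    if t_cloud_base_k > t_surface_k:
        raise ValueError("cloud base warmer than surface: model not applicable")
    return (t_surface_k - t_cloud_base_k) / lapse_rate_k_per_km
```

For example, a 288 K surface and a 275 K cloud base give a base height of about 2 km, i.e. a low cloud in the second attack scenario above.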
Based on the imaging mechanism of infrared images, we study an infrared spectral image inversion method for expanding data sets. First, the radiance of the long-wave infrared image is inverted to temperature, and then the radiance of the medium-wave infrared image is computed from the inverted temperature. To improve the accuracy of the inversion, the influence of path radiance and atmospheric transmittance on the infrared radiation is removed during the calculation. Finally, we evaluate the effectiveness of this method.
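The core of this inversion is Planck's law: long-wave radiance is inverted to a brightness temperature, which then yields the medium-wave radiance. A minimal sketch follows, with a simple atmospheric correction term; the specific wavelengths (10 µm long-wave, 4 µm mid-wave) are illustrative assumptions, and a real pipeline would integrate over the sensor's spectral response rather than use a single wavelength.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Planck spectral radiance B(lambda, T), W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * K * temp_k)
    return a / (math.exp(b) - 1.0)

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law: the temperature producing the given radiance."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * K * math.log(1.0 + a / radiance))

def atmospheric_correct(l_sensor, l_path, transmittance):
    """Remove path radiance and transmittance before inversion:
    L_surface = (L_sensor - L_path) / tau."""
    return (l_sensor - l_path) / transmittance

# long-wave (10 um) radiance -> temperature -> mid-wave (4 um) radiance
lw, mw = 10e-6, 4e-6
t = brightness_temperature(lw, planck_radiance(lw, 300.0))
mw_radiance = planck_radiance(mw, t)
```

The round trip through `brightness_temperature` is exact for a blackbody, which is what makes the long-wave-to-mid-wave transfer well defined once atmospheric terms are removed.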
The main factor affecting imaging under high-speed conditions is the aero-optical effect. The aero-optical effect is a kind of noise that can be regarded as the superposition of three components: system noise, the aero transmission effect, and the aero heat radiation effect. In this paper, we consider only the aero transmission effect and the aero heat radiation effect. Simulating and correcting aero-optical effects is important for terminal guidance. The simulation method described in this paper uses a deep neural network, a conditional generative adversarial network (cGAN): trained on a large data set, the cGAN learns the mapping from clear images to images degraded by aero-optical effects. Aero-optical effect correction is usually divided into two parts, correcting the aero transmission effect and correcting the aero heat radiation effect. Here, we instead train a cGAN on a large amount of data to learn the inverse mapping, from images with aero-optical effects back to clear images; this mapping is preserved in the network's weights and biases, so the aero transmission effect and the aero heat radiation effect need not be considered separately. In our experiments, the structural similarity (SSIM) between the real image and the simulated image generated by the cGAN is 97.63%; the SSIM between the clear image and the aero-optical-effect image corrected by the cGAN is 76.59%; and the SSIM between the original aero-optical-effect image and the clear image is 55.73%.
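The SSIM scores quoted above compare two images via their luminance, contrast, and local structure. A minimal single-window sketch of the standard SSIM formula is shown below; the usual implementation (e.g. in image-processing libraries) averages SSIM over sliding local windows, so this global version is a simplification for illustration.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two equally sized grayscale images:
    SSIM = (2*mu_x*mu_y + c1)(2*cov_xy + c2)
           / ((mu_x^2 + mu_y^2 + c1)(var_x + var_y + c2)),
    with the standard constants c1 = (0.01*L)^2, c2 = (0.03*L)^2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1; the 55.73% vs. 76.59% gap reported above is the improvement the cGAN correction achieves on this metric.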