Classification of airborne LiDAR point clouds is a key step in their further processing and application. To address the difficulty of achieving high classification accuracy while keeping processing time low, a transfer-learning-based method for classifying airborne LiDAR point clouds is proposed. First, three types of low-level features, i.e. normalized height, intensity, and the point normal vector, are calculated for each LiDAR point; by setting neighborhoods of different sizes, multi-scale point cloud feature images are generated with the proposed feature-image generation method. Then, a pre-trained deep residual network is employed to extract multi-scale deep features from the generated feature images. Finally, a neural network model containing only two fully connected layers is constructed so that it can be trained efficiently, and the point cloud is classified by the trained optimal model. Two International Society for Photogrammetry and Remote Sensing benchmark airborne LiDAR point cloud datasets are used in our experiments; the results demonstrate that our method requires less training time and achieves 85.9% overall classification accuracy, which can provide reliable information for further processing and application of point clouds.
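The final stage described above — a small classifier with only two fully connected layers trained on the extracted deep features — can be sketched as follows. This is a minimal NumPy sketch: the hidden-layer size, learning rate, and synthetic "deep features" are illustrative assumptions, not the paper's actual values.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_two_fc(X, y, n_classes, hidden=64, lr=0.1, epochs=200, seed=0):
    """Train a tiny classifier with two fully connected layers by gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, n_classes)); b2 = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                       # one-hot labels
    for _ in range(epochs):
        H = np.maximum(0, X @ W1 + b1)             # FC layer 1 (ReLU)
        P = softmax(H @ W2 + b2)                   # FC layer 2 (softmax)
        dZ2 = (P - Y) / n                          # softmax cross-entropy gradient
        dW2 = H.T @ dZ2; db2 = dZ2.sum(0)
        dH = dZ2 @ W2.T * (H > 0)
        dW1 = X.T @ dH; db1 = dH.sum(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return np.argmax(np.maximum(0, X @ W1 + b1) @ W2 + b2, axis=1)
```

Because only these two layers are trained while the residual network stays frozen, the per-epoch cost is tiny, which is the source of the reduced training time claimed in the abstract.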
In order to make full use of local neighborhood information in high-resolution remote sensing images, this study combined iterative slow feature analysis (ISFA) and a stacked denoising autoencoder (SDAE) to improve change detection accuracy. First, ISFA was introduced for initial change detection in an unsupervised way, which enlarged the separability of changed and unchanged areas. Then, by setting different membership-degree thresholds, changed and unchanged samples were obtained through fuzzy c-means clustering. Finally, the change model was built by the SDAE to represent local neighborhood features deeply, and the change detection result was obtained after all samples were fed into the model. Experiments were performed on three real datasets, and the results validated the effectiveness and superiority of the proposed approach.
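The sample-selection step can be illustrated with a minimal fuzzy c-means sketch on a 1-D change-magnitude signal. This is a NumPy-only illustration: the fuzzifier m = 2 and the membership thresholds are our assumptions, not values from the study.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D data x; returns memberships (n, c) and centers (c,)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)    # membership-weighted centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # standard FCM membership update: u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)
    return u, centers
```

After clustering, only samples whose membership in the "changed" cluster is very high (or very low) would be kept as training data for the SDAE, which is how differing membership degrees yield reliable changed/unchanged samples.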
An image matching method based on closed edges combined with vertex angles is proposed in this paper. Based on edge detection results from the Edison operator, invariant moments of closed edges and the angles between the two branches at edge vertices are used as matching entities to determine conjugate feature candidates. The transformation between images is approximated by a similarity transformation model, and a set of transformation parameters can be determined from each pair of conjugate features after combining the candidates pairwise. Furthermore, considering that the differences among transformation parameters calculated from true conjugate features are small, a K-d tree and K-means spatial clustering are used in succession to eliminate pairs that contain mismatched features. Conjugate features can therefore be obtained from the similarity transformation parameters. Experimental results show that this method performs stably and produces satisfactory matching results.
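Each candidate pair of conjugate features yields one set of similarity-transformation parameters; a compact way to compute them treats the 2-D points as complex numbers. A sketch under the stated similarity model q = s·R(θ)·p + t (the helper name is ours):

```python
import numpy as np

def similarity_from_pair(p1, p2, q1, q2):
    """Recover (scale, rotation, tx, ty) of a 2-D similarity transform
    q = s * R(theta) * p + t from two point correspondences (p1->q1, p2->q2)."""
    p1, p2 = complex(*p1), complex(*p2)
    q1, q2 = complex(*q1), complex(*q2)
    a = (q2 - q1) / (p2 - p1)      # complex factor encoding scale and rotation
    t = q1 - a * p1                # translation
    return abs(a), np.angle(a), t.real, t.imag
```

Running this over all pairwise combinations of candidates produces a set of parameter vectors; true matches cluster densely in parameter space while mismatches scatter, which is what the K-d tree and K-means steps exploit.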
Optical synthetic aperture imaging (OSAI) can be envisaged in the future for improving image resolution from high-altitude orbits. Several future projects are based on optical synthetic apertures for science or Earth observation. Compared with an equivalent monolithic telescope, however, the partly filled aperture of an OSAI system attenuates its modulation transfer function. Consequently, images acquired by an OSAI instrument have to be post-processed to restore images equivalent in resolution to those of a single filled aperture. The maximum-likelihood (ML) algorithm proposed by Benvenuto performs better than the traditional Wiener filter, but it does not converge stably, and the point spread function (PSF) is assumed to be known and fixed during iterative restoration. In fact, the PSF is unknown in most cases, and its estimate should be updated alternately during optimization. To address these limitations, an improved ML (IML) reconstruction algorithm is proposed in this paper, which incorporates PSF estimation by means of parameter identification into the ML framework and updates the PSF successively during iteration. Accordingly, the IML algorithm converges stably and reaches better results. Experimental results show that the proposed algorithm performs much better than ML in terms of peak signal-to-noise ratio, mean square error, and average contrast.
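The alternating idea — an ML (Richardson–Lucy-type) multiplicative update of the image, followed by a refresh of the PSF estimate — can be sketched as below. This is a toy NumPy sketch using circular FFT convolutions; it illustrates only the alternation, not the paper's parameter-identification scheme, and all names and iteration counts are our assumptions.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution via FFT (kernels stored with peak at index [0, 0])."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def ccorr(a, b):
    """Circular correlation via FFT (the adjoint of cconv)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def iml_restore(g, psf0, n_iter=20, eps=1e-8):
    """Alternate ML (Richardson-Lucy) updates of the image f and the PSF h."""
    f = np.full_like(g, g.mean())                # flat positive initial image
    h = psf0 / psf0.sum()                        # normalized initial PSF guess
    for _ in range(n_iter):
        f *= ccorr(g / (cconv(f, h) + eps), h)   # ML image update
        h *= ccorr(g / (cconv(f, h) + eps), f)   # PSF refresh (blind step)
        h = np.clip(h, 0, None)
        h /= h.sum() + eps                       # keep PSF non-negative, unit sum
    return f, h
```

Keeping the PSF update inside the loop, rather than fixing the PSF once, is the structural difference between this alternation and the plain ML scheme criticized in the abstract.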
The performance of high-resolution imaging with large optical instruments is severely limited by atmospheric turbulence, and image deconvolution is required to reach the diffraction limit. A new astronomical image deconvolution algorithm is proposed, which incorporates a dynamic support region and an improved cost function into the NAS-RIF algorithm. The enhanced NAS-RIF (ENAS-RIF) method takes the noise in the image into account and can dynamically shrink the support region (SR) during restoration. In the restoration process, the initial SR is set to the approximate contour of the true object, and the SR then contracts automatically as the iterations proceed. The approximate contour of the object of interest is detected by beamlet-transform edge detection. The ENAS-RIF algorithm is applied to the restoration of an indoor laser point source image and a long-exposure extended-object image. The experimental results demonstrate that the ENAS-RIF algorithm outperforms the classical NAS-RIF algorithm in deconvolving degraded images with low SNR and converges faster.
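The two ENAS-RIF ingredients named above — a support-region mask that contracts over iterations, and a cost that penalizes negative pixels inside the SR and deviation from the background level outside it — can be sketched as follows. NumPy only; the erosion-based shrinking step and equal weighting of the two terms are our illustrative assumptions.

```python
import numpy as np

def nasrif_cost(f_hat, sr, background=0.0):
    """NAS-RIF-style cost on a restored image f_hat: penalize negative pixels
    inside the boolean support region sr, and any deviation from the
    background level outside it."""
    inside = np.where(sr, np.minimum(f_hat, 0.0), 0.0)   # only negatives count
    outside = np.where(sr, 0.0, f_hat - background)
    return float(np.sum(inside**2) + np.sum(outside**2))

def shrink_sr(sr):
    """One binary-erosion step (4-neighbourhood) to contract the SR,
    mimicking the dynamic shrinking of the support region."""
    m = sr.copy()
    for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)):
        m &= np.roll(sr, sh, axis=ax)
    return m
```

In a full NAS-RIF implementation this cost would be minimized over the coefficients of an inverse FIR filter; the sketch only shows how the dynamic SR and the penalty terms interact.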