Simultaneous localization and mapping (SLAM) plays an important role in navigation and augmented reality (AR) systems. While feature-based visual SLAM has reached a relatively mature stage, RGB-D-based dense SLAM has become popular since the advent of consumer RGB-D cameras. Unlike feature-based visual SLAM systems, RGB-D-based dense SLAM systems such as KinectFusion compute camera poses by registering the current frame against images raycast from the global model, and produce a dense surface by fusing the RGB-D stream. In this paper, we propose a novel reconstruction system built on ORB-SLAM2. To generate the dense surface in real time, we first propose fusing the RGB-D frames with a truncated signed distance function (TSDF). Because camera tracking drift is inevitable, it is unwise to represent the entire reconstruction space with a single TSDF model or to represent the entire measured surface with the voxel-hashing approach. Instead, we use the moving volume proposed in Kintinuous to represent the reconstruction region around the current frame frustum. Unlike Kintinuous, which corrects the points with an embedded deformation graph after pose-graph optimization, we re-fuse the images with the optimized camera poses and regenerate the dense surface after the user ends the scanning. Second, we use the reconstructed dense map to filter out outliers among the features in the sparse feature map. The depth maps of the keyframes are raycast from the TSDF volume according to the camera poses, and the feature points in the local map are projected into the nearest keyframe. If the discrepancy between the depth value of a feature and that of the corresponding point in the depth map exceeds a threshold, the feature is considered an outlier and removed from the feature map. The discrepancy value is also combined with the feature pyramid layer to compute the information matrix when minimizing the reprojection error.
Consequently, features in the sparse map that lie near the reconstructed dense surface have a larger influence on camera tracking. We compare the accuracy of the resulting camera trajectories and 3D models against state-of-the-art systems on the TUM and ICL-NUIM RGB-D benchmark datasets. Experimental results show that our system achieves state-of-the-art accuracy.
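The depth-discrepancy outlier test described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold value, matrix conventions, and the policy of keeping features that fall outside the keyframe are assumptions.

```python
import numpy as np

def filter_feature_outliers(points_w, T_cw, K, depth_map, thresh=0.05):
    """Reject sparse-map features whose depth disagrees with the depth map
    raycast from the TSDF volume at the nearest keyframe.

    points_w  : (N, 3) feature positions in world coordinates
    T_cw      : (4, 4) world-to-camera pose of the keyframe
    K         : (3, 3) camera intrinsics
    depth_map : (H, W) depth image raycast from the TSDF volume
    thresh    : depth-discrepancy threshold in metres (hypothetical value)
    Returns a boolean mask of features to keep.
    """
    H, W = depth_map.shape
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    pts_c = (T_cw @ pts_h.T).T[:, :3]          # features in the camera frame
    z = pts_c[:, 2]
    uv = (K @ pts_c.T).T                        # pinhole projection
    keep = np.ones(len(points_w), dtype=bool)
    for i in range(len(points_w)):
        if z[i] <= 0:
            continue                            # behind the camera: keep by default
        u = int(round(uv[i, 0] / z[i]))
        v = int(round(uv[i, 1] / z[i]))
        if not (0 <= u < W and 0 <= v < H):
            continue                            # outside the keyframe image
        d = depth_map[v, u]
        if d > 0 and abs(z[i] - d) > thresh:
            keep[i] = False                     # disagrees with the dense surface
    return keep
```

In the full system the surviving discrepancies would additionally weight the information matrix of the reprojection-error terms; here only the binary rejection step is shown.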
Point cloud registration is a fundamental task in high-level three-dimensional applications. Noise, uneven point density, and varying point cloud resolutions are the three main challenges for point cloud registration. In this paper, we design a robust and compact local surface descriptor called the Local Surface Angles Histogram (LSAH) and propose an effective coarse-to-fine algorithm for point cloud registration. The LSAH descriptor is formed by concatenating five normalized sub-histograms into one histogram; each sub-histogram is created by accumulating a different type of angle computed over a local surface patch. Experimental results show that our LSAH is more robust to uneven point density and varying point cloud resolutions than four state-of-the-art local descriptors in terms of feature matching. Moreover, we tested our LSAH-based coarse-to-fine algorithm for point cloud registration; the results demonstrate that the algorithm is both robust and efficient.
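The concatenate-and-normalize construction of an LSAH-style descriptor can be sketched as below. The abstract does not specify which five angle types are used, so the five cosines here are placeholders chosen for illustration only; the structure (five normalized sub-histograms joined into one vector) is what the sketch demonstrates.

```python
import numpy as np

def lsah_descriptor(points, normals, kp, kp_normal, bins=15):
    """Illustrative LSAH-style descriptor: five angle sub-histograms,
    each normalized, concatenated into one vector. The five angle
    choices below are assumptions, not the paper's definitions."""
    d = points - kp
    r = np.linalg.norm(d, axis=1)
    valid = r > 0
    u = d[valid] / r[valid, None]          # unit directions to neighbours
    n = normals[valid]
    # an arbitrary in-plane axis orthogonal to the keypoint normal
    x = np.cross(kp_normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(x) < 1e-8:
        x = np.cross(kp_normal, [0.0, 1.0, 0.0])
    x /= np.linalg.norm(x)
    # five example angle cosines measured on the local patch
    cosines = [u @ kp_normal,       # direction vs. keypoint normal
               (n * u).sum(1),      # each point's normal vs. its direction
               n @ kp_normal,       # point normals vs. keypoint normal
               u @ x,               # direction vs. in-plane axis
               n @ x]               # point normals vs. in-plane axis
    hists = []
    for c in cosines:
        h, _ = np.histogram(np.clip(c, -1, 1), bins=bins, range=(-1.0, 1.0))
        s = h.sum()
        hists.append(h / s if s else h.astype(float))
    return np.concatenate(hists)    # length 5 * bins
```

Because each sub-histogram is normalized independently, the descriptor depends on the angular distribution rather than the raw point count, which is what makes this family of descriptors tolerant of uneven density.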
We present an extensible local feature descriptor that can encode both geometric and photometric information. We first construct a unique and stable local reference frame (LRF) from the spherical neighborhood of a feature point. All neighboring points are then transformed into the LRF to achieve invariance to rigid transformations. The spherical neighborhood is divided into several spherical shells. In each shell, we compute the cosines of the angles between each point and the x-axis and the z-axis; these two values are then mapped into two one-dimensional (1-D) histograms, respectively. Finally, all of the 1-D histograms are concatenated to form the signature of position angles histogram (SPAH) feature. The SPAH feature can easily be extended to a color SPAH (CSPAH) by adding another 1-D histogram generated from the photometric information of the points in each shell. SPAH and CSPAH were rigorously tested on several common datasets. The experimental results show that both descriptors are highly descriptive and robust under Gaussian noise and varying mesh decimations. Moreover, we tested our SPAH- and CSPAH-based three-dimensional object recognition algorithms on four standard datasets, where they outperformed the state-of-the-art algorithms.
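The shell-and-cosine construction used by SPAH (and, in essentially the same form, by the descriptor in the following abstract) can be sketched as below. This is a minimal sketch assuming the LRF is already given as a 3x3 matrix whose rows are the x, y, z axes; the shell count, bin count, and global normalization are illustrative choices.

```python
import numpy as np

def spah_descriptor(points, kp, lrf, radius, n_shells=4, bins=10):
    """Sketch of a SPAH-style descriptor: transform neighbours into the
    LRF, split the support sphere into radial shells, and histogram the
    cosines with the x- and z-axes in each shell."""
    local = (points - kp) @ lrf.T              # coordinates in the LRF
    r = np.linalg.norm(local, axis=1)
    mask = (r > 0) & (r <= radius)
    local, r = local[mask], r[mask]
    u = local / r[:, None]                     # unit directions
    cos_x = u[:, 0]                            # cosine with the LRF x-axis
    cos_z = u[:, 2]                            # cosine with the LRF z-axis
    shell = np.minimum((r / radius * n_shells).astype(int), n_shells - 1)
    parts = []
    for s in range(n_shells):
        sel = shell == s
        hx, _ = np.histogram(cos_x[sel], bins=bins, range=(-1.0, 1.0))
        hz, _ = np.histogram(cos_z[sel], bins=bins, range=(-1.0, 1.0))
        parts.extend([hx, hz])
    desc = np.concatenate(parts).astype(float)
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc         # length n_shells * 2 * bins
```

A CSPAH-style extension would simply append one more per-shell histogram built from a photometric value (e.g. intensity) instead of a cosine.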
This paper presents a robust, rotation-invariant local surface descriptor that encodes the position angles of neighboring points, expressed in a stable and unique local reference frame (LRF), into 1-D histograms. The procedure has two stages. In the first stage, a unique LRF is constructed by performing an eigenvalue decomposition of the covariance matrix formed from all the neighboring points on the local surface. In the second stage, the spherical support region of a keypoint is divided along the radius into several spherical shells, similar to the Signature of Histograms of OrienTations (SHOT). In each shell, we compute the cosines of the angles between the neighboring points and the x-axis and z-axis, respectively, to form two 1-D histograms. Finally, all the 1-D histograms are concatenated and normalized to generate the local surface descriptor. Experimental results show that the proposed descriptor is robust to noise and varying mesh resolutions. Moreover, our descriptor-based 3D object recognition algorithm achieved a high average recognition rate of 98.9% on the whole UWA dataset.
KEYWORDS: 3D modeling, Object recognition, Laser range finders, 3D image processing, Detection and tracking algorithms, Data modeling, Optical engineering, Statistical modeling, Image resolution, Instrument modeling
This paper presents a highly distinctive and robust local three-dimensional (3-D) feature descriptor named the longitude and latitude spin image (LLSI). The procedure has two modules: local reference frame (LRF) definition and LLSI feature description. We employ the same technique as Tombari et al. to define the LRF. The LLSI descriptor is obtained by vertically stitching the longitude and latitude (LL) image to the original spin image, where the LL image is generated similarly to the spin image by mapping a two-tuple (θ, φ) into a discrete two-dimensional histogram. The performance of the proposed LLSI descriptor was rigorously tested on a number of popular, publicly available datasets. The results show that our method is more robust to noise and varying mesh resolutions than existing techniques. Finally, we tested our LLSI-based 3-D object recognition algorithm on two popular datasets, achieving recognition rates of 100%, 98.2%, and 96.2% on the Bologna dataset, the University of Western Australia (UWA) dataset (up to 84% occlusion), and the whole UWA dataset, respectively. Moreover, our LLSI-based algorithm achieved a 100% recognition rate on the whole UWA dataset when generating the LLSI descriptor with the LRF proposed by Guo et al.
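An LLSI-style descriptor, combining a classic spin image with a longitude/latitude histogram, can be sketched as below. This is an illustrative reading of the abstract: the interpretation of (θ, φ) as the longitude and latitude of each neighbour's direction in the LRF, and the bin counts, are assumptions.

```python
import numpy as np

def llsi_descriptor(points, kp, lrf, radius, bins=8):
    """Sketch of an LLSI-style descriptor: a spin image about the LRF
    z-axis, stacked vertically with a longitude/latitude (theta, phi)
    2-D histogram of neighbour directions."""
    local = (points - kp) @ lrf.T              # coordinates in the LRF
    r = np.linalg.norm(local, axis=1)
    m = (r > 0) & (r <= radius)
    local, r = local[m], r[m]
    # spin-image coordinates: radial distance from the z-axis, signed height
    alpha = np.hypot(local[:, 0], local[:, 1])
    beta = local[:, 2]
    spin, _, _ = np.histogram2d(alpha, beta, bins=bins,
                                range=[[0, radius], [-radius, radius]])
    # longitude and latitude of each neighbour direction
    theta = np.arctan2(local[:, 1], local[:, 0])
    phi = np.arcsin(np.clip(beta / r, -1.0, 1.0))
    ll, _, _ = np.histogram2d(theta, phi, bins=bins,
                              range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    img = np.vstack([spin, ll])                # stitch LL image below the spin image
    v = img.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v               # length 2 * bins * bins
```

The spin image discards the azimuth θ entirely, so appending the LL image restores some of that azimuthal information once a repeatable LRF fixes the x-axis.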
Shape matching under affine transformation (SMAT) is an important problem in shape analysis. Most existing SMAT methods are sensitive to noise or are complicated, because they usually need to extract edge points or compute high-order functions of the shape. To address these problems, we propose a new SMAT method that combines low-order shape normalization with multi-scale area integral features. First, affinely transformed shapes are normalized into their orthogonal representations according to their moments and an equivalent resampling; this procedure transforms the shape through several linear operations (translation, scaling, and rotation), followed by a resampling operation. Second, the Multi-Scale Area Integral Features (MSAIF) of the shapes, which are invariant to orthogonal transformations (rotations and reflections), are extracted. The MSAIF is a signature obtained by concatenating area integral features over a range of scales from fine to coarse. Each area integral feature is the integral, over the shape domain, of feature values computed by convolving the shape with an isotropic kernel and taking the complement, followed by normalization by the area of the shape. Finally, different shapes are matched according to a dissimilarity measured with the optimal transport distance. The performance of the proposed method is tested on the car dataset and the multi-view curve dataset. Experimental results show that the proposed method is efficient and robust, and can be used in many shape analysis tasks.
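The first step, moment-based normalization to an orthogonal representation, can be sketched as below for a shape given as a 2-D point set. This is a minimal sketch of the standard second-moment whitening idea the abstract alludes to; the paper's exact normalization (including the equivalent resampling) is not reproduced here.

```python
import numpy as np

def affine_normalize(points):
    """Moment-based shape normalization: remove translation and whiten
    the second-order moments, so two shapes related by an affine map
    become related by at most a rotation/reflection, which the MSAIF
    stage is designed to be invariant to."""
    c = points.mean(axis=0)
    X = points - c                              # remove translation
    cov = X.T @ X / len(X)                      # second-order moment matrix
    # inverse square root of the covariance via eigendecomposition
    w, V = np.linalg.eigh(cov)
    W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return X @ W.T                              # whitened (orthogonal) form
```

After this step the normalized shape has identity covariance regardless of the original affine distortion, which is exactly why only rotation/reflection ambiguity remains for the subsequent features to absorb.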
Recent research progress on particle beam interaction with nanostructures at the Shanghai Institute of Applied Physics (SINAP) is reported. Experimental results on the channeling of charged particles along nanostructures at low energy are demonstrated, and the coherent scattering effect of ion beam transport in carbon nanotubes (CNTs) is investigated. Direct measurements of the angular distribution of beam intensity through AAO or CNTs/AAO samples, and measurements of the backscattering spectra of Au/Si passing through AAO or CNTs/AAO samples, were made. It is found that at an incident angle of about 0.3° the maximum Au/Si backscattering signal passing through the sample can be detected. In addition, a simulation study of the channeling of charged particles along the nanostructures is presented.