As an important application in text line recognition and office automation, Chinese character recognition has become an important subject in pattern recognition. However, because of the large number of Chinese characters and the complexity of their structures, Chinese character recognition remains difficult. To address this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training and classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters at different orientations, extracting feature maps of the Chinese character in eight orientations. Third, the Gabor feature maps and the original image are convolved with learned kernels, and the results of the convolution are fed into a pooling layer. Finally, the resulting feature vector is used for classification and recognition. In addition, the generalization capability of the network is improved with Dropout. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
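The Gabor feature-extraction step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the 9×9 kernel size and the σ, λ, γ parameter values are illustrative assumptions, and only the real (cosine) part of the Gabor function is used.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real-valued Gabor kernel at orientation theta (parameters are illustrative)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def convolve2d(img, k):
    """'Same'-size convolution with zero padding (naive loop; fine for small images)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k[::-1, ::-1])
    return out

def gabor_feature_maps(img, n_orientations=8):
    """Convolve one image with Gabor filters at n evenly spaced orientations."""
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    return np.stack([convolve2d(img, gabor_kernel(theta=t)) for t in thetas])

# Toy 32x32 binarized "character": a single vertical stroke.
img = np.zeros((32, 32))
img[4:28, 14:17] = 1.0
maps = gabor_feature_maps(img)
print(maps.shape)  # (8, 32, 32): one feature map per orientation
```

In the full pipeline these eight orientation maps, together with the original image, would then be passed to the learned convolution and pooling layers of the CNN.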
In this paper, two main algorithms for monocular visual odometry based on 3D-2D motion estimation are introduced. A 3D-2D motion estimation method needs to maintain a consistent and accurate set of triangulated 3D features and to create 3D-2D feature matches. Therefore, a keyframe selection strategy is proposed to construct precise 3D point sets. Based on this strategy, an algorithm is designed to obtain more suitable keyframes by restricting the number of feature points and taking the amount of translation into account. This keyframe selection strategy discards inferior frames and constructs more precise 3D point sets. We also design a method to filter 3D-2D feature matches in two different ways, which contributes to estimating the camera pose more accurately. The effectiveness and feasibility of the proposed algorithms were verified on both the KITTI outdoor dataset and in a real indoor environment. The experimental results show that our algorithms can recover the motion trajectory of the camera accurately and meet the real-time and accuracy requirements of monocular visual odometry.
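The two ideas above, filtering 3D-2D matches and gating keyframe selection on match count and translation, can be sketched as follows. This is a hedged illustration assuming a pinhole camera model; the function names, the two filtering criteria (reprojection error and positive depth), and the thresholds are hypothetical stand-ins for the paper's actual design.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3D world points (N, 3) to pixels with intrinsics K and pose (R, t)."""
    cam = pts3d @ R.T + t                 # points in the camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]         # perspective division

def filter_matches(K, R, t, pts3d, pts2d, max_reproj=2.0, min_depth=0.1):
    """Keep 3D-2D matches whose reprojection error is below a pixel threshold
    and whose depth is positive (i.e. the point lies in front of the camera).
    Two illustrative criteria standing in for the paper's two filtering ways."""
    depths = (pts3d @ R.T + t)[:, 2]
    err = np.linalg.norm(project(K, R, t, pts3d) - pts2d, axis=1)
    return (err < max_reproj) & (depths > min_depth)

def is_keyframe(n_good_matches, translation, min_matches=50, min_translation=0.1):
    """Hypothetical keyframe test: enough surviving matches for a stable 3D point
    set, and enough translation (baseline) for well-conditioned triangulation."""
    return n_good_matches >= min_matches and np.linalg.norm(translation) >= min_translation

# Toy check with an identity pose and a simple intrinsic matrix.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts3d = np.array([[0.0, 0.0, 5.0], [1.0, 1.0, 10.0], [0.0, 0.0, -2.0]])
pts2d = np.array([[320.0, 240.0], [370.0, 290.0], [0.0, 0.0]])  # last match is bad
mask = filter_matches(K, R, t, pts3d, pts2d)
print(mask.tolist())  # [True, True, False]: the behind-camera outlier is rejected
print(is_keyframe(int(mask.sum()), np.array([0.05, 0.0, 0.3])))
```

In a real pipeline the surviving matches would be handed to a PnP solver (e.g. RANSAC-based) to estimate the next camera pose, and only accepted keyframes would contribute new triangulated 3D points.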