Proc. SPIE. 8003, MIPPR 2011: Automatic Target Recognition and Image Analysis
KEYWORDS: Image processing algorithms and systems, Visual process modeling, Sensors, Image segmentation, Image processing, Distortion, Mobile robots, Corner detection, Environmental sensing, RGB color model
In this paper, a simple but effective method for robot self-localization is presented. A spatial neighborhood
constraint is incorporated into the preprocessing stage of image segmentation. A closed loop of rectification
and Hough detection is then used to find the boundary lines and corners. Based on the actual size of the surrounding
environment and the white lines and corners detected in the previous step, the robot can maintain self-localization
through two methods: one uses the two detected lines, and the other uses triangulation. Finally, a weight is set
between the two methods to realize the self-localization. Actual image sequences from the robot are tested; the robot
can be placed anywhere in the environment. The final self-localization results on very different images with
significant lighting changes and noise are presented.
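The final fusion step described above can be sketched as a weighted blend of the two independent position estimates. This is a minimal illustration under assumed names and a fixed scalar weight; the paper does not specify how the weight is chosen.

```python
# Hypothetical sketch of the weighted fusion step: one (x, y) estimate comes
# from the two detected lines, the other from triangulation, and a weight w
# blends them. Function name, signature, and w are illustrative assumptions.

def fuse_estimates(pose_lines, pose_triangulation, w=0.6):
    """Blend two (x, y) position estimates, weighting the line-based one by w."""
    x = w * pose_lines[0] + (1 - w) * pose_triangulation[0]
    y = w * pose_lines[1] + (1 - w) * pose_triangulation[1]
    return (x, y)
```

In practice the weight could be adapted to detection confidence (e.g., favoring triangulation when only one boundary line is visible), but the fixed value here keeps the sketch self-contained.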
We incorporate a graphical model to solve the problem of object recognition, which is a fundamental problem in computer vision. Adopting the multiscale feature keypoint technique, we present an object recognition algorithm that establishes the center, scale factor, and rotation angle of the object in the images. First, the local invariant features are detected in the template and scene images. Second, the belief propagation algorithm is used to compute the correspondence, taking spatial constraints into account. Third, each correspondence point casts a vote for the object's center, scale factor, and rotation angle. Finally, we keep the densest point on the vote map as the recognition result. Experimental results demonstrate the robustness of the algorithm on real images.
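The voting step described above can be sketched as follows: each matched keypoint pair predicts the object's center from the relative scale and rotation between template and scene features, and the densest bin of the vote map is kept. The match tuple layout and bin sizes are illustrative assumptions, not taken from the abstracts.

```python
# Minimal sketch of generalized-Hough-style voting on center, scale, and
# rotation. Each match is (template_pt, scene_pt, template_scale, scene_scale,
# template_angle, scene_angle), with template points given as offsets from the
# template center. All names and bin widths here are assumptions.
import math
from collections import Counter

def vote(matches, center_bin=10.0, scale_bin=0.1, angle_bin=math.radians(15)):
    """Return the most-voted (cx_bin, cy_bin, scale_bin, angle_bin) cell."""
    votes = Counter()
    for (tx, ty), (sx, sy), ts, ss, ta, sa in matches:
        s = ss / ts                 # relative scale factor
        a = sa - ta                 # relative rotation angle
        # Rotate and scale the template offset to predict the object center.
        dx, dy = -tx * s, -ty * s
        cx = sx + dx * math.cos(a) - dy * math.sin(a)
        cy = sy + dx * math.sin(a) + dy * math.cos(a)
        key = (round(cx / center_bin), round(cy / center_bin),
               round(s / scale_bin), round(a / angle_bin))
        votes[key] += 1
    return votes.most_common(1)[0]
```

Quantizing the votes into bins makes consistent matches reinforce each other while mismatched (one-to-many) correspondences scatter across the map, which is why the densest cell identifies the object.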
Object recognition can be formulated as matching image features to model features. When recognition is patch-based, the feature correspondence is one to one. However, due to noise, repetitive structures, and background clutter, features do not match one to one, but one to many. By using the multiscale feature point technique, a new object recognition algorithm is presented to identify the center, scale, and orientation of the objects in the images. This approach recognizes the objects in the presence of translation, scale variation, rotation, partial occlusion, and viewpoint change. It does not require that features match one to one, and it maintains the structural information of the object. This is accomplished by voting on the object's center, scale factor, and orientation for each matched point. Experimental results demonstrate that the method works well under translation, rotation, scale changes, and partial occlusion, but gives less accurate results when the viewpoint is altered.
Object recognition can be formulated as matching image features to model features. When recognition is based on point
features, the feature correspondence should be one-to-one. However, due to noise, repetitive structures, and background
clutter, features don't match one-to-one but one-to-many. By using the multi-scale feature point technique, we present
an object recognition algorithm that makes features match one-to-one. First, it determines the correspondence by using
the location, scale factor, orientation, and local invariant descriptor of each feature point. Then a vote is recorded for the
center, scale factor, and rotation angle of the object for each correspondence point. This approach can recognize objects
under scale changes, rotation, and partial occlusion. Experimental results demonstrate the robustness
of the overall approach on various image pairs.