A beacon recognition approach based on omni-vision characteristics is presented. The proposed method uses colored lights arranged in a given spatial pattern. A recognition logic combining color features with spatial relationships was developed. Navigation software integrating the beacon recognition with robot positioning is provided. Further development of the new method will lead to a practical omni-vision guidance module for mobile robots.
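As a hedged illustration of this idea (not the authors' implementation), the sketch below detects colored light blobs by HSV thresholding and accepts a beacon only when the blob centroids match an assumed spatial pattern; the color ranges, blob-size cutoff, and expected spacing ratio are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical HSV ranges for red and green beacon lights (assumptions of this sketch).
COLOR_RANGES = {
    "red":   ((0, 120, 120), (10, 255, 255)),
    "green": ((50, 120, 120), (70, 255, 255)),
}

def blob_centroids(hsv, lo, hi):
    """Return centroids of bright color blobs inside the given HSV range."""
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 20:                      # ignore tiny specks
            pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return pts

def recognize_beacon(bgr, expected_ratio=2.0, tol=0.3):
    """Accept the beacon only if the light spacing matches the assumed spatial pattern."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    reds = blob_centroids(hsv, *COLOR_RANGES["red"])
    greens = blob_centroids(hsv, *COLOR_RANGES["green"])
    if len(reds) < 2 or not greens:
        return None
    d_rr = np.hypot(reds[0][0] - reds[1][0], reds[0][1] - reds[1][1])
    d_rg = np.hypot(reds[0][0] - greens[0][0], reds[0][1] - greens[0][1])
    if d_rg > 0 and abs(d_rr / d_rg - expected_ratio) < tol:
        return reds, greens                    # color and spatial pattern both confirmed
    return None
```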
This paper aims to promote the application of the fish-eye lens. Accurate parameter calibration and effective distortion rectification of an imaging device are of utmost importance in machine vision. A fish-eye lens produces a hemispherical field of view of an environment, which is of definite significance because of its advantage of panoramic sight within a single compact visual scene. However, the fish-eye image carries an unavoidable, inherent and severe distortion. A precise optical center is the precondition for calibrating the other parameters and for distortion correction. Therefore, three different optical center calibration methods have been studied for diverse applications. A Support Vector Machine (SVM) and the Spherical Equidistance Projection Algorithm (SEPA) are integrated to replace traditional rectification methods. SVM is a machine learning method based on statistical learning theory, with good capabilities for fitting, regression and classification. In this research, the SVM provides a mapping table between the fish-eye image and the standard image as seen by human eyes. Two novel training models have been designed. SEPA is applied to improve the rectification quality at the edge of the fish-eye image. The validity and effectiveness of our achievements are demonstrated by processing real images.
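As a hedged sketch of how an SVM can learn such a mapping table (the training models, kernel settings, and the toy radial-distortion data below are assumptions, not the paper's design), two support-vector regressors map distorted fish-eye pixel coordinates to rectified coordinates from a set of calibration point pairs.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical calibration pairs: (distorted u, v) -> (rectified x, y).
# In practice these would come from a calibration target viewed through the lens.
rng = np.random.default_rng(0)
distorted = rng.uniform(-1.0, 1.0, size=(500, 2))
r = np.linalg.norm(distorted, axis=1, keepdims=True)
rectified = distorted * (1.0 + 0.4 * r**2)      # toy radial-distortion model

# One SVR per output coordinate builds the fish-eye -> rectified mapping table.
svr_x = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(distorted, rectified[:, 0])
svr_y = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(distorted, rectified[:, 1])

def rectify(uv):
    """Map distorted image coordinates to rectified coordinates via the learned table."""
    uv = np.atleast_2d(uv)
    return np.column_stack([svr_x.predict(uv), svr_y.predict(uv)])

print(rectify([0.3, -0.5]))
```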
Omnidirectional vision is of definite significance because of its advantage of acquiring the full 360° horizontal field of vision simultaneously. In this paper, an embedded original omnidirectional vision navigator (EOVN) based on a fish-eye lens and embedded technology has been developed. The fish-eye lens is one of the special ways to establish omnidirectional vision; however, it comes with an unavoidable, inherent and enormous distortion. A unique integrated navigation method based on target tracking has been proposed. It is composed of multi-target recognition and tracking, distortion rectification, spatial location and navigation control, and is called RTRLN. In order to adapt to different indoor and outdoor navigation environments, we embed mean-shift and dynamic threshold adjustment into the Particle Filter algorithm to improve the efficiency and robustness of tracking. RTRLN has been implemented on an independently developed embedded platform. The EOVN acts as a smart camera based on CMOS+FPGA+DSP. It can guide various vehicles in outdoor environments by tracking diverse markers suspended overhead. The experiments show that the EOVN is particularly suitable for guidance applications with high requirements on precision and repeatability. The research achievements have been verified in practical applications.
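The exact form of the dynamic threshold adjustment is not specified in this abstract; the following is a minimal sketch of one plausible scheme, in which the segmentation threshold feeding the tracker is adapted to the current frame's brightness statistics so that indoor and outdoor illumination can both be handled (the weighting constant k is an assumption).

```python
import numpy as np

def dynamic_threshold(gray, k=1.5):
    """Adapt a binarization threshold to the frame's illumination.

    Bright outdoor frames and dim indoor frames yield different thresholds,
    so the target segmentation feeding the tracker stays stable.
    The weighting factor k is an assumption of this sketch.
    """
    mean, std = float(gray.mean()), float(gray.std())
    return np.clip(mean + k * std, 0, 255)

def segment_target(gray, k=1.5):
    """Binary mask of candidate target pixels under the adaptive threshold."""
    return (gray > dynamic_threshold(gray, k)).astype(np.uint8)

# Example on a synthetic frame: a bright overhead mark on a darker background.
frame = np.full((120, 160), 60, dtype=np.uint8)
frame[40:60, 70:90] = 230
mask = segment_target(frame)
print("threshold:", dynamic_threshold(frame), "target pixels:", int(mask.sum()))
```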
Omnidirectional vision (omni-vision) has the feature that an extremely wide view can be acquired simultaneously. The omni-image carries a highly unavoidable inherent distortion while it provides a hemispherical field of view. In this paper, a method called Spherical Perspective Projection is used to correct such distorted images. Omni-vision target recognition and tracking with a fisheye lens for AGVs is of definite significance because of its advantage of acquiring all the visual information of the three-dimensional space at once. A novel Beacon Model and omni-vision tracker for mobile robots are described. At present, research on target models faces many problems, such as outdoor illumination, target occlusion and target loss. In particular, outdoor illumination and beacon occlusion are the key problems that need an effective solution. The new beacon model, which features a particular topological shape, can be recognized outdoors even when the object is partially occluded. In this paper an improved omni-vision object tracking method based on the mean shift algorithm is proposed. The mean shift algorithm, a powerful technique for tracking objects in image sequences with complex backgrounds, has proved successful for fast computation and effective tracking. The recognition and tracking functions have been demonstrated on an experimental platform.
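A minimal mean-shift tracking sketch using OpenCV's standard hue-histogram back-projection is shown below; the region-of-interest handling and histogram settings are generic placeholders, not the beacon model described above.

```python
import cv2
import numpy as np

def track_with_meanshift(frames, init_window):
    """Track the region given by init_window = (x, y, w, h) across a list of BGR frames."""
    x, y, w, h = init_window
    roi = frames[0][y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Hue histogram of the target, ignoring dark / unsaturated pixels.
    mask = cv2.inRange(hsv_roi, np.array((0, 60, 32)), np.array((180, 255, 255)))
    hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    window = init_window
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(back_proj, window, term)   # shift window to the mode
        yield window
```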
Omni-directional vision navigation for AGVs is of definite significance because of its advantage of panoramic sight within a single compact visual scene. This unique guidance technique involves target recognition, vision tracking, object positioning and path programming. An algorithm for omni-vision based global localization which utilizes two overhead features as the beacon pattern is proposed in this paper. An approach for geometric restoration of omni-vision images has to be considered since an inherent distortion exists. The mapping between image coordinates and the physical-space parameters of the targets can be obtained by means of the imaging principle of the fisheye lens. The localization of the robot can then be achieved by geometric computation.
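As a hedged worked example of this mapping (the equidistance model r = f·θ is a common fisheye approximation; the focal length, beacon height, and the choice between the two mirror-image intersections are assumptions of the sketch, not values from the paper), each overhead beacon's pixel position gives its azimuth and horizontal distance, and two beacons give the robot position by circle intersection; heading recovery from the azimuths is omitted here.

```python
import math

def beacon_range_bearing(u, v, cu, cv, f_px, beacon_height):
    """Pixel (u, v) -> (horizontal distance, azimuth) under the equidistance model r = f * theta."""
    du, dv = u - cu, v - cv
    r = math.hypot(du, dv)                 # radial distance from the image center
    theta = r / f_px                       # incidence angle (equidistance projection)
    azimuth = math.atan2(dv, du)           # bearing in the camera frame
    distance = beacon_height * math.tan(theta)
    return distance, azimuth

def localize(b1, d1, b2, d2):
    """Intersect two circles centered on beacons b1, b2 with radii d1, d2."""
    (x1, y1), (x2, y2) = b1, b2
    dx, dy = x2 - x1, y2 - y1
    base = math.hypot(dx, dy)
    a = (d1**2 - d2**2 + base**2) / (2 * base)
    h = math.sqrt(max(d1**2 - a**2, 0.0))
    xm, ym = x1 + a * dx / base, y1 + a * dy / base
    # Two mirror solutions exist; choosing between them is left to the application.
    return (xm + h * dy / base, ym - h * dx / base), \
           (xm - h * dy / base, ym + h * dx / base)

# Toy usage with an assumed principal point, focal length and 2.5 m beacon height.
d1, az1 = beacon_range_bearing(420, 250, 320, 240, 300.0, 2.5)
d2, az2 = beacon_range_bearing(200, 300, 320, 240, 300.0, 2.5)
print(localize((0.0, 0.0), d1, (3.0, 0.0), d2))
```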
Dynamic localization employs a beacon tracker to follow the landmarks in real time during arbitrary movement of the vehicle. A coordinate transformation is devised for path programming based on time-sequence image analysis. Beacon recognition and tracking are key procedures for an omni-vision guided mobile unit. Conventional image processing techniques such as shape decomposition, description and matching are not directly applicable in omni-vision. The Particle Filter (PF) has been shown to be successful for several nonlinear estimation problems. A beacon tracker based on the Particle Filter, which offers a probabilistic framework for dynamic state estimation in visual tracking, has been developed. We use two independent Particle Filters to track the two landmarks, while a composite multiple-object tracking algorithm is employed for vehicle localization. We have implemented the tracking and localization system and demonstrated the validity of the algorithm.
The extremely wide view of omni-vision is highly advantageous for vehicle navigation and target detection. However, moving-target detection through omni-vision fixed on an AGV (Automatic Guided Vehicle) involves more complex environments, where both the targets and the vehicle are moving, so the moving targets must be detected against a moving background. After analyzing the characteristics of omnidirectional vision and images, we propose to estimate the optical flow fields and apply a Gabor filter over the optical flow fields to detect moving objects. Because the polar angle θ and polar radius R of the polar coordinates change as the targets move, we improve the optical flow approach so that it can be calculated in polar coordinates centered at the omnidirectional image center. We construct a Gabor filter bank with 24 orientations spaced every 15°, and filter the optical flow fields at these 24 orientations. By contrasting the Gabor-filtered images at the same orientation and the same AGV position between the situation in which there are no moving targets in the environment and the situation in which moving targets are present, the optical flow fields of the moving targets can be recognized. Experimental results show that the proposed approach is feasible and effective.
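As a hedged sketch of the filtering stage (dense Farneback flow stands in for the paper's improved polar-coordinate flow, and the kernel size and Gabor parameters are assumptions), the flow field is converted to polar magnitude/angle form and filtered with a bank of 24 Gabor kernels spaced every 15°:

```python
import cv2
import numpy as np

def gabor_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """24 Gabor kernels, one every 15 degrees."""
    return [cv2.getGaborKernel((ksize, ksize), sigma, np.deg2rad(a),
                               lambd, gamma, 0) for a in range(0, 360, 15)]

def filtered_flow(prev_gray, curr_gray, kernels):
    """Dense optical flow -> polar form -> responses of the 24-orientation bank."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Filtering the flow magnitude at every orientation; comparing these responses
    # against a target-free reference at the same AGV position exposes moving objects.
    return [cv2.filter2D(mag.astype(np.float32), -1, k) for k in kernels]
```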
This paper presents our new research area: combining virtual reality technology with an electronic map to build a simulation platform for mobile robots. With this platform, path planning becomes easier. From the virtual geographic features rebuilt from an electronic digital map, the movement of the mobile robot in the actual area can be simulated. The mobile robot coordinates can be corrected using virtual reality characteristic points and CCD images.
Humans can extend their capabilities to remote environments through the teleoperation of mobile robots. Communication between the operator and the robot needs to be fast in order to interact with the environment quickly. However, the data transfer slows down as either the data exchange rate increases or the Internet bandwidth decreases, and the resulting communication time delay causes system instability. The purpose of this paper is to employ a virtual reality technique to minimize the effect of communication delay. A novel virtual tracker was developed that acquires the real position and orientation of the mobile robot. The virtual reality scene displayed on the remote computer needs only the robot position and orientation data. Because the huge video stream does not have to be transmitted from the robot to the client computer, the time delay can be ignored and system stability is achieved.
The main limitation of a web-based teleoperation system is the communication time delay. Undesirable communication time delay causes system instability. Various techniques have been proposed to alleviate such control problems. This paper proposes an approach that develops a telerobotics system with a wireless web server/client application framework, employing a virtual reality technique to minimize the delay effects. A novel virtual tracker was developed which acquires the real position and orientation of a mobile robot and drives the virtual reality scene displayed on the remote computer so that it changes with the robot's movements. This requires only the robot position and orientation data, instead of transmitting the huge video stream from the robot to the client computer. As a result, the time delay effects can be ignored and system stability is achieved. The experimental results demonstrate the effectiveness of this solution for teleoperation.
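A minimal sketch of the pose-only transmission idea follows; the packet layout, address, and port are assumptions of this sketch, not the paper's protocol. The robot sends only its (x, y, θ) over UDP, and the client drives the virtual scene from those three values instead of a video stream.

```python
import socket
import struct

POSE_FMT = "!ddd"                      # x, y, theta as network-order doubles (24 bytes)
ADDR = ("127.0.0.1", 9000)             # hypothetical client address and port

def send_pose(sock, x, y, theta):
    """Robot side: transmit 24 bytes of pose instead of a video frame."""
    sock.sendto(struct.pack(POSE_FMT, x, y, theta), ADDR)

def receive_pose(sock):
    """Client side: unpack the pose and hand it to the virtual reality scene."""
    data, _ = sock.recvfrom(struct.calcsize(POSE_FMT))
    return struct.unpack(POSE_FMT, data)

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(ADDR)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_pose(tx, 1.25, 0.40, 0.05)
    print("pose for the VR scene:", receive_pose(rx))
```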
As a laboratory demonstration platform, the TUT-I mobile robot provides various experiment modules to demonstrate robotics technologies involved in remote control, computer programming and teach-and-playback operations. Typically, teach-and-playback operation has proved to be an effective solution, especially in structured environments. Path generation in the teach mode and real-time path correction using path error detection in the playback mode are demonstrated. A vision-based image database is generated as the representation of the given path during the teaching procedure, and an online image positioning algorithm is performed for path following. Advanced sensory capability is employed to provide environment perception. A unique omnidirectional vision (omni-vision) system is used for localization and navigation. The omnidirectional vision involves an extremely wide-angle lens, and the dynamic omni-vision image is processed in real time to provide the widest view during movement. Beacon guidance is realized by observing the locations of points derived from overhead features such as predefined light arrays in a building. The navigation approach is based upon the omni-vision characteristics. A group of ultrasonic sensors is employed for obstacle avoidance.
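As a hedged sketch of the teach-and-playback loop (thumbnail matching by sum of squared differences is a stand-in for the actual image positioning algorithm, and the correction gain is an assumption): the teach mode stores image/pose pairs in the database, and the playback mode matches the current image to recover the reference pose and correct the path error.

```python
import numpy as np

class TeachAndPlayback:
    """Store (image, pose) pairs while teaching; look them up while playing back."""

    def __init__(self):
        self.database = []                         # list of (thumbnail, pose)

    def teach(self, image, pose):
        """Record a downsampled view together with the robot pose (x, y, heading)."""
        thumb = image[::8, ::8].astype(np.float32)
        self.database.append((thumb, np.asarray(pose, dtype=float)))

    def playback(self, image, current_pose, gain=0.5):
        """Find the closest taught view and return a corrected pose command."""
        thumb = image[::8, ::8].astype(np.float32)
        errors = [np.mean((thumb - ref) ** 2) for ref, _ in self.database]
        _, ref_pose = self.database[int(np.argmin(errors))]
        path_error = ref_pose - np.asarray(current_pose, dtype=float)
        return np.asarray(current_pose, dtype=float) + gain * path_error  # proportional correction

# Toy usage with synthetic grayscale frames.
tp = TeachAndPlayback()
rng = np.random.default_rng(2)
for i in range(5):
    tp.teach(rng.integers(0, 255, (240, 320), dtype=np.uint8), (i * 0.5, 0.0, 0.0))
query = rng.integers(0, 255, (240, 320), dtype=np.uint8)
print(tp.playback(query, (1.1, 0.2, 0.05)))
```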