The purpose of this paper is to discuss the challenge of engineering robust intelligent robots. Robust intelligent robots may be defined as ones that work not merely in a single environment but in all types of
situations and conditions. Our past work has described sensors for intelligent robots that permit adaptation
to changes in the environment. We have also described the combination of these sensors with a "creative
controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task
selection and criteria adjustment. However, the emphasis of this paper is on engineering solutions designed for robust operation in worst-case situations, such as day/night cameras and sensing through rain and snow. This ideal model may be compared to various approaches that have been implemented on
"production vehicles and equipment" using Ethernet, CAN Bus and JAUS architectures and to modern,
embedded, mobile computing architectures. Many prototype intelligent robots have been developed and
demonstrated in terms of scientific feasibility but few have reached the stage of a robust engineering
solution. Continual innovation and improvement are still required. The significance of this comparison is
that it provides some insights that may be useful in designing future robots for various manufacturing,
medical, and defense applications where robust and reliable performance is essential.
History shows that problems that cause human confusion often lead to inventions that solve them, which in turn leads to exploitation of the invention, creating a confusion-invention-exploitation cycle.
Robotics, which began in the 1960s as a new type of universal machine implemented with a computer-controlled mechanism, has progressed through an Age of Over-expectation, a Time of Nightmare, and an Age of Realism, and is now entering the Age of Exploitation.
The purpose of this paper is to propose an architecture for the modern intelligent robot in which sensors that permit adaptation to changes in the environment are combined with a "creative controller" that permits adaptive
critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment.
This ideal model may be compared to various controllers that have been implemented using Ethernet, CAN
Bus and JAUS architectures and to modern, embedded, mobile computing architectures. Several
prototypes and simulations are considered in view of peta-computing. The significance of this comparison
is that it provides some insights that may be useful in designing future robots for various manufacturing,
medical, and defense applications.
This paper describes a methodology for creative learning that applies to both humans and machines. Creative learning is a general approach used to solve optimal control problems. The creative controller for intelligent machines integrates a dynamic database and a task control center into the adaptive critic learning model. The task control center can function as a command center to decompose tasks into sub-tasks with different dynamic models and criteria functions, while the dynamic database can act as an information system. To illustrate the theory of creative control, several experimental simulations for robot arm manipulators and mobile wheeled vehicles are included. The simulation results showed that the best performance was obtained with the adaptive critic controller among all the controllers tested. By changing the paths of the robot arm manipulator in the simulation, it was demonstrated that the learning component of the creative controller adapted to a new set of criteria. The Bearcat Cub robot was another experimental example used for testing creative control learning.
The significance of this research is in generalizing adaptive control theory toward the highest level of human learning - imagination. In doing so, it is hoped to better understand adaptive learning theory and to move forward in building more human-intelligence-like components and capabilities into the intelligent robot. It is also hoped that a greater understanding of machine learning will motivate similar studies to improve human learning.
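The controller organization described above, a task control center that decomposes tasks into sub-tasks with their own models and criteria, backed by a dynamic database of models, can be sketched as a minimal skeleton. All class names, the toy "go_to_goal" decomposition, and the one-dimensional plant below are illustrative assumptions, not the controller actually implemented on the Bearcat Cub:

```python
# Skeleton of a creative-controller layout: a task control center (TCC)
# decomposes a task into sub-tasks, each paired with a model key and a
# criteria function, while a dynamic database stores the models.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SubTask:
    name: str
    model: str                            # key into the dynamic database
    criterion: Callable[[float], float]   # cost on the tracking error

class DynamicDatabase:
    """Stores kinematic/dynamic models and environment information."""
    def __init__(self) -> None:
        self.models: Dict[str, Callable[[float, float], float]] = {}

    def add_model(self, key: str, step: Callable[[float, float], float]) -> None:
        self.models[key] = step

    def get_model(self, key: str) -> Callable[[float, float], float]:
        return self.models[key]

class TaskControlCenter:
    """Decomposes a task and runs each sub-task until its criterion is met."""
    def __init__(self, db: DynamicDatabase) -> None:
        self.db = db

    def decompose(self, task: str) -> List[SubTask]:
        # Toy decomposition: approach a goal, then hold position tightly.
        if task == "go_to_goal":
            return [SubTask("approach", "unicycle", lambda e: e * e),
                    SubTask("hold", "unicycle", lambda e: abs(e))]
        raise ValueError(f"unknown task: {task}")

    def execute(self, task: str, x0: float, goal: float) -> float:
        x = x0
        for sub in self.decompose(task):
            step = self.db.get_model(sub.model)
            while sub.criterion(goal - x) > 1e-8:  # run until criterion met
                x = step(x, 0.5 * (goal - x))      # simple proportional action
        return x

db = DynamicDatabase()
db.add_model("unicycle", lambda x, u: x + u)       # 1-D toy dynamics
tcc = TaskControlCenter(db)
final = tcc.execute("go_to_goal", x0=0.0, goal=1.0)
```

The point of the sketch is the separation of concerns: the TCC owns task decomposition and criteria, the database owns models, and the two can be extended independently.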
The purpose of this paper is to introduce a concept of eclecticism for the design, development, simulation
and implementation of a real time controller for an intelligent, vision guided robot. The use of an eclectic
perceptual, creative controller that can select its own tasks and perform autonomous operations is
illustrated. This eclectic controller is a new paradigm for robot controllers and is an attempt to simplify the
application of intelligent machines in general and robots in particular. The idea is to use a task control
center and dynamic programming approach. However, the information required for an optimal solution
may only partially reside in a dynamic database so that some tasks are impossible to accomplish. So a
decision must be made about the feasibility of a solution to a task before the task is attempted. Even when
tasks are feasible, an iterative learning approach may be required, and in principle such learning could continue indefinitely. The
dynamic database stores both global environmental information and local information including the
kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position
control and simulations. However, models of the dynamics of the manipulators are needed for tracking
control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the
controller, and achieving superior performance. Simulations of various control designs are shown. Much of
the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision guided robot
was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that it applies to both robot arm manipulators and mobile bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator
capability, since both models can be easily stored in the dynamic database. The multi-task controller also permits wide applications. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, medical robotics, and robots that aid older people in daily living activities.
The purpose of this paper is to describe the design, development and simulation of a real time controller for an intelligent, vision guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can be easily stored in the dynamic database. The multi-task controller also permits wide applications. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotic aids.
The purpose of this paper is to describe the concept and architecture for an intelligent robot system that can adapt, learn and predict the future. This evolutionary approach to the design of intelligent robots is the result of several years of study on the design of intelligent machines that could adapt using computer vision or other sensory inputs, learn using artificial neural networks or genetic algorithms, exhibit semiotic closure with a creative controller, and perceive present situations by interpreting visual and voice commands. This information processing would then permit the robot to predict the future and plan its actions accordingly. In this paper we show that the capability to adapt and learn naturally leads to the ability to predict the future state of the environment, which is just another form of semiotic closure. That is, predicting a future state without knowledge of the future is similar to taking a present action without knowledge of the present state. The theory is illustrated by considering the situation of guiding a mobile robot through an unstructured environment for a rescue operation. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots.
Intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths to accomplish a variety of tasks. Such machines have many potentially useful applications in medicine, defense, industry and even the home, so the design of such machines is a challenge with great potential rewards. Even though intelligent systems may have semiotic closure that permits them to make a decision or take an action without external inputs, sensors such as vision permit sensing of the environment and precise adaptation to changes. Sensing and adaptation define a reactive system. However, in many applications some form of learning is also desirable or perhaps even required. A further level of intelligence, called understanding, may involve not only sensing, adaptation and learning but also creative, perceptual solutions involving models of not only the eyes and brain but also the mind. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots, with examples of adaptive, creative and perceptual learning. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to important beneficial applications.
Mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. That is why mobile robotics problems are complex, with many unanswered questions. To reach a high degree of autonomous operation, a new level of learning is required. On the one hand, promising learning theories such as the adaptive critic and creative control have been proposed; on the other hand, the human brain's processing ability has amazed and inspired researchers in the area of Unmanned Ground Vehicles but has been difficult to emulate in practice. A new direction in fuzzy theory attempts to develop a theory for dealing with the perceptions conveyed by natural language. This paper tries to combine these two fields and presents a framework for autonomous robot navigation. The proposed creative controller, like the adaptive critic controller, has information stored in a dynamic database (DB), plus a dynamic task control center (TCC) that functions as a command center to decompose tasks into sub-tasks with different dynamic models and multi-criteria functions. The TCC module utilizes the computational theory of perceptions to deal with the high levels of task planning. The authors are currently working to implement the model on a real mobile robot, and the preliminary results are described in this paper.
Unlike intelligent industrial robots which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots.
During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits point-to-point and controlled-path operation in a changing environment. In an unstructured environment, the terrain, and consequently the load on the robot's motors, are constantly changing; this problem can be solved with learning control. Learning the parameters of a proportional, integral and derivative (PID) controller with an artificial neural network provides an adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see whether a robot can learn its way through a cluttered array of obstacles. If a situation is encountered repetitively, then learning can also be used in the actual application.
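The PID-learning idea can be sketched with a toy first-order motor model whose load changes. The plant, the coordinate-descent tuning rule (used here in place of a neural network for brevity), and all constants below are illustrative assumptions:

```python
# Learning PID gains so the controller stays tuned as the motor load changes.
def simulate(gains, load, steps=200, dt=0.05, target=1.0):
    """Toy first-order motor v' = u - load*v under PID control;
    returns the integrated squared tracking error."""
    kp, ki, kd = gains
    v = integ = prev_e = cost = 0.0
    for _ in range(steps):
        e = target - v
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        v += dt * (u - load * v)
        prev_e = e
        cost += e * e * dt
    return cost

def learn_gains(gains, load, rounds=50, step=0.05):
    """Coordinate descent: nudge each gain and keep the move only if it
    lowers the cost on the current load, so cost never increases."""
    gains = list(gains)
    for _ in range(rounds):
        for i in range(3):
            base = simulate(gains, load)
            gains[i] += step
            if simulate(gains, load) > base:       # worse: try other direction
                gains[i] -= 2 * step
                if simulate(gains, load) > base:   # still worse: restore
                    gains[i] += step
    return gains

initial = [1.0, 0.1, 0.0]
tuned = learn_gains(initial, load=2.0)             # re-tune for the new load
```

Re-running `learn_gains` whenever the load estimate changes captures, in miniature, how a learning controller keeps a motor loop tuned over changing terrain.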
To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning a critic provides a grade to the controller of an action module such as a robot. A creative control process that goes "beyond the adaptive critic" is then used. A mathematical model of the creative control process is presented that illustrates its use for mobile robots. Examples from a variety of intelligent mobile robot applications are also presented. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to many applications.
Intelligent industrial and mobile robots may be considered proven technology in structured environments. Teach programming and supervised learning methods permit solutions to a variety of applications. However, we believe that extending the operation of these machines to more unstructured environments requires a new learning method. Both unsupervised learning and reinforcement learning are potential candidates for these new tasks. The adaptive critic method has been shown to provide useful approximations or even optimal control policies for non-linear systems. The purpose of this paper is to explore the use of new learning methods that go beyond the adaptive critic method for unstructured environments. The adaptive critic is a form of reinforcement learning: a critic element provides only high-level grading corrections to a cognition module that controls the action module. In the proposed system the critic's grades are modeled and forecasted, so that an anticipated set of sub-grades is available to the cognition module. The forecasted grades are interpolated and made available on the time scale needed by the action module. The success of the system is highly dependent on the accuracy of the forecasted grades and the adaptability of the action module. Examples from the guidance of a mobile robot are provided to illustrate the method for simple line following and for the more complex navigation and control in an unstructured environment. The theory presented here, which goes beyond the adaptive critic, may be called creative theory. Creative theory is a form of learning that models the highest level of human learning - imagination. Creative theory appears applicable not only to mobile robots but also to many other forms of human endeavor, such as educational learning and business forecasting. Reinforcement learning such as the adaptive critic may be applied to known problems to aid in the discovery of their solutions.
The significance of creative theory is that it permits the discovery of unknown problems, ones that are not yet recognized but may be critical to survival or success.
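The interpolation step mentioned above, resampling the critic's coarse forecasted grades onto the finer time scale of the action module, can be sketched as follows. The piecewise-linear scheme and the sample grade sequence are illustrative assumptions:

```python
# Piecewise-linear interpolation of forecasted critic grades so the action
# module can read a grade at any time step it needs.
def interpolate_grade(times, grades, t):
    """Grade at time t from coarse (times, grades) forecast samples;
    times must be strictly increasing. Clamps outside the forecast window."""
    if t <= times[0]:
        return grades[0]
    if t >= times[-1]:
        return grades[-1]
    for i in range(1, len(times)):
        if t <= times[i]:
            frac = (t - times[i - 1]) / (times[i] - times[i - 1])
            return grades[i - 1] + frac * (grades[i] - grades[i - 1])

# Example: critic grades forecast at 1 s intervals, read at a 0.1 s step.
forecast_t = [0.0, 1.0, 2.0, 3.0]
forecast_g = [0.2, 0.8, 0.6, 0.9]
fine = [interpolate_grade(forecast_t, forecast_g, 0.1 * k) for k in range(31)]
```

Any smoother interpolant could be substituted; the essential point is that the action module never has to wait for the critic's next coarse grade.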
A method is described for calibrating different types of omni-directional fisheye lenses for use in robotic vision. The technique allows full calibration and correction of x, y pixel coordinates while taking only two uncalibrated measurements and one calibrated measurement. These are made by finding the observed x, y coordinates of a calibration target. Any fisheye lens with a roughly spherical shape can have its distortion corrected with this technique. Two measurements are taken to find the edges and centroid of the lens; these can be made automatically by the computer and require no knowledge of the lens or the location of the calibration target. A third measurement is then taken to determine the degree of spherical distortion. This is done by comparing the expected measurement to the measurement obtained and then plotting a curve that describes the degree of distortion. Once the degree of distortion is known and a simple curve has been fitted to the distortion shape, the equation of that distortion and the basic dimensions of the lens are plugged into an equation that remains the same for all types of lenses. The technique has the advantage of needing only one calibrated measurement to discover the type of lens being used.
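The curve-fit-then-correct step can be sketched as follows. The odd-polynomial radial model, the least-squares fit, and the sample radii are illustrative assumptions, not the paper's actual lens measurements:

```python
# Fit a radial-distortion curve from paired observed/expected target radii,
# then use it to undistort pixel coordinates about the lens centroid.
import math

def fit_distortion(observed_r, true_r):
    """Least-squares fit of true_r = k*r + c*r**3 (an odd polynomial,
    a common simple radial-distortion form) via the normal equations."""
    s11 = sum(r ** 2 for r in observed_r)
    s13 = sum(r ** 4 for r in observed_r)
    s33 = sum(r ** 6 for r in observed_r)
    b1 = sum(r * t for r, t in zip(observed_r, true_r))
    b3 = sum((r ** 3) * t for r, t in zip(observed_r, true_r))
    det = s11 * s33 - s13 * s13
    k = (b1 * s33 - b3 * s13) / det
    c = (s11 * b3 - s13 * b1) / det
    return k, c

def undistort(x, y, cx, cy, k, c):
    """Map an observed pixel (x, y) to its corrected position, given the
    lens centroid (cx, cy) found from the two uncalibrated measurements."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return x, y
    scale = (k * r + c * r ** 3) / r   # corrected radius over observed radius
    return cx + dx * scale, cy + dy * scale
```

Because the correction is purely radial about the centroid, the same two functions apply regardless of which roughly spherical lens produced the image; only the fitted (k, c) pair changes.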
The purpose of this paper is to compare three methods for 3-D measurement of line position used for vision guidance in navigating an autonomous mobile robot. A model that maps 3-D ground points into image points is first developed using homogeneous coordinates. Then, using the ground plane constraint, the inverse transformation that maps image points into 3-D ground points is determined. The system identification problem is then solved using a calibration device: calibration data is used to determine the model parameters by minimizing the mean square error between model and calibration points. A novel simplification is then presented which provides surprisingly accurate results. This method is called the magic matrix approach and uses only the calibration data. A more standard variation of this approach is also considered. The significance of this work is that it shows that three methods based on 3-D measurements may be used for mobile robot navigation, and that a simple method can achieve accuracy to a fraction of an inch, which is sufficient in some applications.
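The calibration-only idea can be sketched in a few lines. Here the "magic matrix" is simplified to an affine 2x3 matrix M sending image points (u, v, 1) to ground points (x, y), fitted by least squares on calibration pairs; the affine form and the sample data are illustrative assumptions, and a full homography under the ground-plane constraint follows the same fitting pattern:

```python
# Fit a mapping from image pixels to ground-plane coordinates using only
# calibration pairs, then apply it to navigate from image measurements.
import numpy as np

def fit_magic_matrix(image_pts, ground_pts):
    """Least-squares M (2x3) such that ground ~= M @ [u, v, 1],
    minimizing mean square error over the calibration pairs."""
    A = np.hstack([np.asarray(image_pts, float),
                   np.ones((len(image_pts), 1))])
    M, *_ = np.linalg.lstsq(A, np.asarray(ground_pts, float), rcond=None)
    return M.T                     # shape (2, 3)

def image_to_ground(M, u, v):
    """Map one image point to ground-plane coordinates."""
    x, y = M @ np.array([u, v, 1.0])
    return float(x), float(y)
```

The appeal, as in the abstract, is that no camera model is needed at all: the calibration data alone determines the matrix.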
An autonomous guided vehicle is a multi-sensor mobile robot. The sensors of a multi-sensor robot system are characteristically complex and diverse. They supply observations which are often difficult to compare or aggregate directly. To make efficient use of the sensor information, the capabilities of each sensor must be modeled to extract information from the environment. Toward this goal, a probability model of the ultrasonic sensor (PMUS) is presented in this paper. The model provides a means of distributing decision making and integrating diverse opinions. The paper also illustrates that a series of performance factors affect the probability model as parameters. PMUS could be extended to other sensors as members of the multi-sensor team. Moreover, the sensor probability model explored is suitable for all multi-sensor mobile robots. It should provide a quantitative ability for analysis of sensor performance and allow the development of robust decision procedures for integrating sensor information. The theoretical sensor model presented is a first step in understanding and expanding the performance of ultrasound systems. The significance of this paper lies in the theoretical integration of sensory information from the probabilistic point of view.
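A probabilistic range model in the spirit of the PMUS can be sketched as a likelihood function whose performance factors enter as parameters. The Gaussian-plus-uniform mixture form and every parameter value below are illustrative assumptions, not the paper's actual model:

```python
# A toy probabilistic model of an ultrasonic range reading: a Gaussian
# "good hit" term around the true range mixed with a uniform term for
# random measurements. Performance factors (sigma, weights, max range)
# appear as model parameters, as the abstract describes.
import math

def p_reading(z, z_true, r_max=4.0, sigma=0.05, w_hit=0.9, w_rand=0.1):
    """Approximate likelihood p(z | z_true) for one sonar reading (meters).
    Readings outside [0, r_max] are impossible; the Gaussian term is not
    renormalized for truncation, which is adequate for a sketch."""
    if not 0.0 <= z <= r_max:
        return 0.0
    hit = math.exp(-0.5 * ((z - z_true) / sigma) ** 2) \
        / (sigma * math.sqrt(2.0 * math.pi))
    rand = 1.0 / r_max
    return w_hit * hit + w_rand * rand

def joint_likelihood(readings, z_true):
    """Fuse several independent readings by multiplying likelihoods,
    the basis for integrating opinions across a multi-sensor team."""
    p = 1.0
    for z in readings:
        p *= p_reading(z, z_true)
    return p
```

Replacing the mixture with a model fitted to a particular transducer, or swapping in a camera or laser likelihood, leaves the fusion step unchanged, which is the extensibility the abstract claims for PMUS.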
A neuro-fuzzy control method for navigation of an autonomous guided vehicle robot is described. Robot navigation is defined as the guiding of a mobile robot to a desired destination or along a desired path in an environment characterized by terrain and a set of distinct objects, such as obstacles and landmarks. Autonomous navigation ability and road-following precision are mainly influenced by the control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for mobile robots because of their robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is identifying and classifying the robot's currently perceived environment. In this paper, a new approach to perceptual environment feature identification and classification, based on the analysis of a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
Motion control is one of the most critical factors in the design of a robot. The purpose of this paper is to describe the ongoing research at the University of Cincinnati Robotics Center on applying motion control principles to mobile robot systems design. The mobile robot, called BEARCAT II, was constructed during the 1998-1999 academic year. Its design inherited many features from its predecessor, such as vision guidance, sonar detection and digital control. In addition, BEARCAT II achieved many innovative motion control features, such as rotating sonar, zero turning radius, a current control loop, and a multi-level controller. This paper focuses on the motion control design, development and programming for the vehicle steering control and rotating sonar systems. The systems were constructed and tested at the 1999 International Ground Robotics Competition, with BEARCAT II running an obstacle course for 153.5 feet and finishing fourth in the competition. The significance of this work is in the increased understanding of robot control and the potential application of autonomous guided vehicle technology for industry, defense and medicine.