We present a modular approach to intelligent control of a simulated jointed leg in performing the task of running. The modular controller learns from experience to generate the appropriate control signals. We focus on the control of a single running stride. Separate controllers are used for each of three phases of the stride. A neurofuzzy takeoff controller produces torques which are applied at the leg joints while the foot is on the ground. It controls the height, distance and angular momentum of the stride. Once the foot leaves the ground, a neural network based controller is used to control the movement of the leg during the ballistic phase. This controller moves the joints along a planned trajectory which avoids obstacles, if any, and places the leg in an appropriate configuration for landing. When the foot touches the ground, a neural network based landing controller takes over. It is designed to produce control torques which will move the leg into a crouched position suitable for takeoff of the next stride. Results achieved by the takeoff and ballistic controllers are presented here. Very accurate control is achieved for simple strides of different sizes and strides which involve scaling an obstacle.
In this paper a concept for a wheeled, multijoint robot able to operate in sewage systems is presented. The robot should work autonomously with respect to power supply, control and information processing. Cable-free navigation and the multijoint, redundant construction of the robot enable higher mobility in sewage systems. In contrast to present systems, the robot should be able to avoid and overcome small obstacles, e.g. socket displacements, holes or sediments, and to pass junctions and curves. In the following, the results of a feasibility study are described, in which the state of the art in sewer inspection robots as well as first experiments toward the development of such a system are presented.
This paper describes an observer-based design for control of vehicle traction, which is important for providing safety and obtaining desired vehicle motion in longitudinal vehicle control. Since vehicle traction force depends on the friction coefficient between road and tire, which in turn depends on the wheel slip and road conditions, we may influence traction force by varying the wheel slip. A robust adaptive sliding mode controller is designed to maintain the wheel slip at any given value. Simulations show that this longitudinal traction controller is capable of controlling the vehicle in the presence of parameter deviations and disturbances. The direct state feedback is then replaced with nonlinear observers in order to estimate the vehicle velocity from the output of the system, which is the wheel velocity. The nonlinear system model is shown to be locally observable. The extended Kalman filter and the sliding observer are the two methods used for estimation. The effects and drawbacks of these observers are shown via simulations. The sliding observer is found to be promising, while the extended Kalman filter is unsatisfactory due to unpredictable changes in the road conditions.
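The slip-regulation loop described in this abstract can be sketched in a few lines. The first-order slip model, the gains, and the boundary-layer saturation below are all illustrative assumptions, not taken from the paper; the saturation stands in for the sign function to limit chattering, a common sliding-mode practice:

```python
def sat(x):
    # boundary-layer saturation in place of sign() to reduce chattering
    return max(-1.0, min(1.0, x))

def simulate_slip_control(lam0=0.0, lam_target=0.15, a=5.0, b=20.0,
                          K=0.5, phi=0.05, dt=1e-3, steps=2000):
    """Illustrative first-order slip dynamics: lam_dot = -a*lam + b*u.
    Sliding surface s = lam - lam_target; control u = -K * sat(s/phi)."""
    lam = lam0
    for _ in range(steps):
        s = lam - lam_target
        u = -K * sat(s / phi)
        lam += (-a * lam + b * u) * dt
    return lam
```

With these toy parameters the slip settles near the target value; the small steady-state offset is the usual price of replacing the discontinuous switching law with a boundary layer.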
A GUI subsystem is essential in a development environment for small mobile robots. With it a developer can quickly get a sense of the robot's state and its current behavior or misbehavior. In the case of the BALI environment, a development environment for small mobile robots based on the Java language and the MIT 6.270 Robot kit, such a GUI environment needs to be properly decoupled from the physical robot. This is because the 6.270 robot kit allows the user to build not one but many robots, all of which have to be accessible to the BALI GUI environment. The work discussed in this paper focuses on how the GUI environment can be integrated with the IC language and environment that is commonly used with the 6.270 robot. A side effect of this work is that some early ideas of a BALI virtual robot are starting to emerge. The BALI environment essentially presents the user with the means to interact with a virtual robot. The real robot at the other end of the tether is just an implementation of the virtual robot. This issue of providing the right balance between abstraction level and flexibility is also discussed here.
In the present paper we propose a new, biologically inspired conceptual scheme for introducing parallelism, redundancy and learning into control systems for autonomous vehicles (AVs). AVs are regarded as a special class of autonomous agents (AAs). Most generally stated, an AA is defined by the set of its percepts (S), the set of its elementary actions (A), and its internal structure. Within our scheme expectation is the key concept, and an agent is said to be aware of its environment if it can anticipate the effects of the actions (A) it applies in particular situations (S). If the environment stabilizes for a while, the effect of applying a particular action will remain the same for a particular situation. We can take advantage of this fact and avoid recomputation of the optimal action in that situation by applying, instead, the action generated by the stored anticipations for that situation. In the introduction we give a brief overview of present AA architectures in the domain of autonomous vehicles and outline the basics of our architecture. In the next section the expecting agent is described in more detail. The last section is devoted to an example implementation of this scheme in the domain of obstacle-avoiding, path-finding autonomous vehicle control. The implementation details and the simulation results of three experiments are discussed.
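The core mechanism of the expecting agent, reusing stored anticipations to avoid recomputing the optimal action in a stable environment, can be sketched as a simple cache. The class and its interface are a hypothetical illustration, not the authors' implementation:

```python
class ExpectingAgent:
    """Sketch of the expectation idea: cache the action chosen for each
    situation and reuse it while the environment remains stable."""

    def __init__(self, compute_action):
        self.compute_action = compute_action  # expensive planner (assumed)
        self.anticipations = {}               # situation -> stored action
        self.planner_calls = 0                # counts actual recomputations

    def act(self, situation):
        # reuse the stored anticipation if this situation has been seen
        if situation in self.anticipations:
            return self.anticipations[situation]
        # otherwise fall back to the planner and store the result
        self.planner_calls += 1
        action = self.compute_action(situation)
        self.anticipations[situation] = action
        return action
```

In a changing environment the cache would also need invalidation when an anticipated effect fails to occur; that step is omitted here for brevity.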
The subsumption architecture is used to provide an autonomous vehicle with the means to stay within the boundaries of a course while avoiding obstacles. A three-layered network has been devised incorporating computer vision, ultrasonic ranging, and tactile sensing. The computer vision system operates at the lowest level of the network to generate a preliminary vehicle heading based upon detected course boundaries. The network's next level performs long-range obstacle detection using an array of ultrasonic sensors. The range map created by these sensors is used to augment the preliminary heading. At the highest level, tactile sensors are used for short-range obstacle detection and serve as an emergency response to obstacle collisions. The computer vision subsystem is implemented on a personal computer, while both ranging systems reside on a microcontroller. Sensor fusion within a subsumption framework is also executed on the microcontroller. The resulting outputs of the subsumption network are actuator commands to control steering and propulsion motors. The major contribution of this paper is as a case study of the application of the subsumption architecture to the design of an autonomous ground vehicle.
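The layering described above can be illustrated with a toy arbiter in which each higher level may modify or fully subsume the output of the level below it. The thresholds, steering corrections, and function signature are illustrative assumptions, not the paper's actual fusion logic:

```python
def subsume(vision_heading, sonar_ranges, bumper_hit):
    """Toy three-layer subsumption arbiter.
    Level 0: vision proposes a heading from course boundaries.
    Level 1: sonar augments the heading to avoid long-range obstacles.
    Level 2: tactile contact subsumes everything with an emergency reaction."""
    heading = vision_heading                       # level 0 output
    nearest = min(sonar_ranges)                    # level 1: range map check
    if nearest < 2.0:                              # obstacle within 2 m (assumed)
        # steer away from the side holding the nearest return
        left_side = sonar_ranges.index(nearest) < len(sonar_ranges) // 2
        heading += 30.0 if left_side else -30.0
    if bumper_hit:                                 # level 2 overrides all below
        return ("reverse", 180.0)
    return ("forward", heading)
```

The key subsumption property is visible in the control flow: the tactile layer does not blend with lower layers, it replaces their output entirely.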
This paper investigates the development of a robust method for outdoor tracking using low-cost machine vision. The conditioning method combines an electronically shuttered vision system with a xenon strobe to produce a high-contrast image of the structured target at up to a quarter mile away. The high-contrast image eases landmark recognition, which ultimately increases robustness and improves system response time. Tracking landmarks outdoors is important for farming, construction, and highway automation. Numerical analysis shows a retro-reflector can be detected at up to 530 meters with an S/N ratio of 2 for the nominal case of a 1-Joule xenon flash focused to 5 degrees with a flash duration of 100 microseconds. The target position can be determined to within 17 millimeters at 400 meters using subpixel edge detection and a vision system with a 24 mm lens and 10-micrometer-square pixels. Experimentally, the retro-reflective target achieved an S/N of 1.5 at 91 meters, which was 66.5 percent of the expected value. An IR-blocking filter increased the S/N ratio by 42 percent by taking advantage of the differences between the spectra of the xenon strobe and the sun. This result closely matches the expected S/N ratio increase associated with using an IR filter.
Proc. SPIE 2903, Sensor requirements for control systems with an example of longitudinal headway control of vehicles in automated highway systems, 0000 (23 January 1997); https://doi.org/10.1117/12.265350
There is a need for developing a methodology for obtaining sensor requirements for control systems. This study is an attempt at developing a methodology for specifying sensor needs as part of the design process of dynamic control systems in general, and automated highway systems (AHS) in particular. Such approaches have not been advanced to date. The current practice is to assume sensor parameters from available sensors and devise control strategies. That does not in general produce optimal designs. This paper provides a discussion of the sensor requirements needed for longitudinal headway control of vehicles in an AHS.
The Nomad200 is an electrical mobile robot which is equipped with different kinds of sensors. It can be programmed and controlled in two ways: directly via the on-board PC card or remotely via a UNIX workstation. In the latter case, commands and data are transmitted over an Ethernet radio link. The purpose of this work is to build a robust navigation algorithm to be included in a general robotics application. This algorithm uses the laser sensor and the infrared sensors to build a map of the environment. The map is constructed step by step during the motion of the robot, and a complete path is computed at each update of the map. The Voronoi retraction method, the A* algorithm and custom techniques are used to obtain the free way. Two different speed controllers, one heuristic and one based on fuzzy logic, have also been developed. The program was first written on the remote host station. However, in order to give the robot better autonomy and to guard against communication failures, a distributed control of the robot has been implemented. We propose in this work different solutions for the distribution of the control. Each solution has been tested in simulation and with the real robot.
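The path-computation step mentioned above can be illustrated with a minimal A* search over an occupancy grid. This sketch assumes a 4-connected grid with unit step costs and a Manhattan heuristic; the paper's actual planner works on a Voronoi retraction of the sensed map, which this toy grid does not reproduce:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (1 = obstacle).
    Returns the number of steps in the shortest free path, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start)]   # (f = g + h, g, node)
    best_g = {start: 0}
    while open_set:
        f, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                if g + 1 < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = g + 1
                    heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1, (r, c)))
    return None
```

Replanning at each map update, as the abstract describes, amounts to rerunning this search on the freshly updated grid from the robot's current cell.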
As the environment under which the mobile robot works varies, the characteristics of the sensors used for mobile robot navigation also vary. Thus it is desirable that the sensors be calibrated on-line, using measurements taken while the mobile robot performs a task, to obtain more reliable information. This paper presents an on-line sensor calibration scheme to estimate the unknown sensor bias for mobile robot navigation using the parity vector and recursive minimum variance estimation. A calibration error equation independent of the current position is obtained from the parity vector, and the current position of the mobile robot is then estimated from the calibrated sensor data. The validity of the proposed scheme is evaluated through computer simulation.
Low-level navigation for autonomous vehicles can be accomplished efficiently by a behavior-based approach that involves the simultaneous execution of independent sub-tasks seen as primitive behaviors. Each behavior maps sensory data into control commands in a reactive way, with no need of internal representations. A useful tool for realizing such a direct mapping is fuzzy logic, which allows the production of control rules by either manual programming or automatic learning. With a view to implementing an articulated control system including all the low-level behaviors of navigation, this paper focuses on the problem of obtaining an efficient and robust fuzzy controller performing a single behavior, and presents a method for minimizing the number of rules of a fuzzy controller developed for driving a TRC Labmate based vehicle along the wall on its right-hand side. Fuzzy rules, which map ultrasonic sensor readings onto steering velocity values, are learned automatically from training data collected during operator-driven runs of the vehicle. In addition, we address the problem of defining an appropriate performance function, which may be useful for evaluating the influence of the rule base reduction on the overall behavior of the vehicle during navigation, but also for estimating the quality of a control rule, in order to adapt rules on-line. Results of an experimental comparison between the original fuzzy wall-follower and its optimized version are reported.
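The sensor-to-steering mapping described above can be sketched as a tiny two-rule fuzzy controller with triangular membership functions and weighted-average defuzzification. The membership breakpoints, the 0.5 m setpoint, and the consequent values are invented for illustration; the paper's rule base is learned from driving data, not hand-written like this:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wall_follow_steer(right_dist):
    """Two-rule fuzzy sketch for right-hand wall following (distances in m):
    IF distance is NEAR THEN steer left (+0.5 rad/s);
    IF distance is FAR  THEN steer right (-0.5 rad/s)."""
    mu_near = tri(right_dist, 0.0, 0.2, 0.6)
    mu_far = tri(right_dist, 0.4, 0.8, 1.2)
    num = mu_near * 0.5 + mu_far * (-0.5)   # weighted rule consequents
    den = mu_near + mu_far
    return num / den if den else 0.0        # weighted-average defuzzification
```

Rule-base minimization, the paper's focus, would then amount to pruning rules whose removal degrades a performance function least, which is easy to evaluate on a controller of this form.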
ROBART III is an advanced demonstration platform for non-lethal security response measures, incorporating reflexive teleoperated control concepts developed on the earlier ROBART II system. The addition of threat-response capability to the detection and assessment features developed on previous systems has been motivated by increased military interest in Law Enforcement and Operations Other Than War. Like the MDARS robotic security system being developed at NCCOSC RDTE DIV, ROBART III will be capable of autonomously navigating in semi-structured environments such as office buildings and warehouses. The reflexive teleoperation mode employs the vehicle's extensive onboard sensor suite to prevent collisions with obstacles when the human operator assumes control and remotely drives the vehicle to investigate a situation of interest. The non-lethal-response weapon incorporated in the ROBART III system is a pneumatically-powered dart gun capable of firing a variety of 3/16-inch-diameter projectiles, including tranquilizer darts. A Gatling-gun style rotating barrel arrangement allows six shots with minimal mechanical complexity. All six darts can be fired individually or in rapid succession, and a visible-red laser sight is provided to facilitate manual operation under joystick control using video relayed to the operator from the robot's head-mounted camera. This paper presents a general description of the overall ROBART III system, with focus on sensor-assisted reflexive teleoperation of both navigation and weapon firing, and various issues related to non-lethal response capabilities.
In the course of developing automated vehicle-roadway systems, opportunities to deploy vehicle control systems at intermediate stages of development may emerge. Some of these systems may provide a significant efficiency or safety enhancement to existing operations with manually driven vehicles. Under certain circumstances, transit buses provide an ideal testbed for such systems. The work presented here represents a feasibility study for the application of advanced vehicle control systems (AVCS) to transit bus operations. The paper explores past and present research relevant to automatic control for buses and recommends specific operations which could be better performed by AVCS-assisted or controlled vehicles. A survey of feasible technologies for the guidance and control of the buses is also presented.
This paper presents the development of an integrated wiring system that can support a hierarchical control system for autonomous excavators. Many sensor and control signals are exchanged between the actuators and the controller in an autonomous excavator, which may deteriorate the reliability and stability of the system. To overcome this problem, the control system is divided into several local controllers located at the actuators, and the essential data are communicated among the controllers through the wiring system under the supervision of a host controller. The wiring system is based on a time-division multiplexing technique and is able to connect various components that are distributed over the vehicle. This paper describes the conceptual design, packet definition, computer simulation, hardware architecture, and software for the integrated wiring system.
The problem of determining the offset to lane markings is an important one in designing vision-based automotive safety systems that operate on structured road environments. The lane offset information is critical for lateral control of the automobile. In this paper, we investigate the use of this information for an autonomous robot's lane-keeping task. We employ a deformable template-based algorithm for determining the location of lane markings in visual images taken from a side-looking camera. The matching criterion involves a modification of the standard signal-to-noise ratio (SNR)-based matched-filtering criterion. A KL-type color transformation is used for transforming the RGB channels of the given image onto a composite color channel, in order to eliminate some of the noise. The standard perspective transformation is used for transforming the offset information from image coordinates onto ground coordinates. The resulting algorithm, named STARLITE, is robust to shadows, specular reflections, road cracks, etc. Experimental results are provided to illustrate the performance of STARLITE and compare it to the AURORA algorithm and the SNR-based matched filter.
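The image-to-ground conversion mentioned above can be illustrated with the simplest flat-ground case: a pinhole camera looking straight down, where similar triangles give ground_offset / camera_height = image_offset / focal_length. The function name and all parameter values are illustrative assumptions; the paper's side-looking geometry involves an additional tilt that this sketch omits:

```python
def pixel_to_ground_offset(px_offset, pixel_size_m, focal_len_m, cam_height_m):
    """Flat-ground pinhole projection for a downward-looking camera.
    px_offset: lane-mark offset from the image center, in pixels."""
    image_offset = px_offset * pixel_size_m          # offset on the sensor (m)
    return image_offset * cam_height_m / focal_len_m  # offset on the ground (m)
```

For example, a 100-pixel offset with 10-micrometer pixels, a 24 mm lens, and a camera 1.2 m above the road maps to a 5 cm lateral offset on the ground.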
Many commercial carriers are currently operating vehicles which are overweight, creating an unsafe and illegal situation. However, the cost to law enforcement agencies to stop vehicles for roadside weight checks is prohibitive, while the cost to the nation in lost travel time adds shipping costs which are reflected in the price of every product transported by truck. Overweight trucks also become a threat to public safety when, on public highways, solid cargo breaks loose or liquid cargo leaks. The solution is an on-board monitoring system. With such a system, trucks under their legal weight limit would be allowed to travel past state borders and checkpoints without being stopped. This would save money both in law enforcement and shipping costs to the nation as a whole. A properly designed system would also have the capability to warn both the driver and local safety and enforcement personnel when the truck is loaded beyond capacity or in any other unsafe condition. This paper will detail a system that would, even in early limited production, be cost effective for both the law enforcement agencies and the operators of trucking fleets. In full production the system would be cost effective even for smaller or owner/operator trucks. This is a safety system that could become standard equipment similar to seat belts, ABS, and airbags. The initial testing of sub-assemblies and sub-systems which could be deployed now for beta test has been completed.
Previously, control systems for remotely controlled vehicles (RCVs) and unmanned ground vehicles (UGVs) have largely been of a centralized design, in which all of a vehicle's sensing and servo control systems are individually interfaced to a central computer. These controllers often have been completely redeveloped for each new application. This approach leads to increased development, installation, and maintenance costs, and to a product that is not easily adaptable to other platforms or tasks. Under a Phase II SBIR program, RedZone Robotics is developing a distributed control system (DCS) that reduces development, installation, and maintenance costs while enhancing adaptability to other platforms or applications. The DCS consists of a distributed control network of small, intelligent local controller nodes acting on the vehicle motion and sensing system components. A central card oversees the network and handles higher level commands. The central card and local nodes are linked through the controller area network serial bus. The node hardware is of standardized design, so that application-specific tasks are largely accomplished in software. The standardized design makes the DCS potentially compatible with multiple UGV platforms and eventual dual-use applications in commercial vehicles. More sophisticated functionality, such as remote control or autonomous navigation, can be layered on top of the low-level control supplied by the DCS. Thus, the DCS can be an enabling component for the development of advanced UGV technologies. Also, intelligent nodes enable fault identification and orderly shutdown to be accomplished directly at the vehicle actuators. This SBIR is sponsored by the US Army Tank-Automotive Research, Development and Engineering Center.
This paper presents the design and implementation of a high-performance vehicle controller based on parallel digital signal processing systems for automated vehicles. From the literature it has been observed that one of the main limiting factors of most automated vehicles is the available computing power. Most systems employ camera vision for guidance purposes. In some cases other sensors are used in combination with camera vision. The amount of information that has to be processed can overwhelm many processors. Solutions so far have involved distributed processing, massively parallel processors, dedicated processors and minicomputers. In most cases, these systems use specially designed processors lacking standard interfacing, and as a result proprietary interface cards have to be built. This paper takes the alternate approach of designing a high-performance controller using parallel DSP systems, namely, TMS320C40 processors with 275 MIPS and 50 MFLOPS. This controller processes data from a CCD camera which is focused on a road segment containing a line that has suitable contrast with the road surface. The DSP-based controller, in a PC environment, carries out the task of high-level control, while the low-level servo control is assigned to dedicated motion controllers communicating with the DSP-based controller through the PC bus. Results of image processing and timing requirements for various topologies are detailed.
This paper describes the design and control of a scaled robotic vehicle that can function in two modes: telerobotic manual mode and automated mode. In the telerobotic manual mode, a steering console with a steering wheel and gas pedal is used to get steering and speed commands from the human operator. These commands are transmitted to a microcontroller on the vehicle using a wireless serial data link. The microcontroller then converts these commands into PWM duty cycles that drive the motor and the steering servo of the vehicle. In the automated mode, steering and speed commands are extracted from video images of the road ahead of the vehicle. These images are captured by a wireless camera on the vehicle that sends them to a control computer. Using a frame grabber with a DSP for image processing, information is generated about the vehicle's position on the road. This information serves as input to a controller that calculates steering commands to keep the vehicle in the road center. The design and testing of this dual-mode robotic vehicle is performed in the flexible low-cost automated scaled highway laboratory at Virginia Tech, where scale models of vehicles are designed for that purpose.