We describe an algorithm that generates a smooth trajectory (position, velocity, and acceleration at uniformly sampled instants of time) for a car-like vehicle autonomously navigating within the constraints of lanes in a road. The technique models both vehicle paths and lane segments as straight line segments and circular arcs for mathematical simplicity and elegance, which we contrast with cubic spline approaches. We develop the path in an idealized space, warp the path into real space and compute path length, generate a one-dimensional trajectory along the path length that achieves target speeds and positions, and finally warp, translate, and rotate the one-dimensional trajectory points onto the path in real space. The algorithm moves a vehicle safely and efficiently within its lane, respecting specified speed and acceleration limits, and operates alongside other autonomous driving functions within a carefully designed vehicle control hierarchy.
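The pipeline above can be sketched in miniature: sample a one-dimensional trapezoidal speed profile along arc length, then map each arc-length sample onto a path built from line segments and circular arcs. This is a minimal illustrative sketch, not the paper's implementation; all function names, the segment representation, and the termination logic are assumptions.

```python
import math

def trapezoid_profile(total_len, v_max, a_max, dt):
    """Sample (t, s, v, a) at uniform instants for a rest-to-rest move
    along arc length, honoring speed and acceleration limits."""
    samples, s, v, t = [], 0.0, 0.0, 0.0
    while s < total_len:
        brake_dist = v * v / (2.0 * a_max)   # distance needed to stop from v
        if total_len - s <= brake_dist:
            a = -a_max                        # decelerate toward the goal
        elif v < v_max:
            a = a_max                         # accelerate toward v_max
        else:
            a = 0.0                           # cruise at v_max
        samples.append((t, s, v, a))
        v = max(0.0, min(v_max, v + a * dt))
        s += v * dt
        t += dt
        if v == 0.0 and a <= 0.0:             # stopped while braking: done
            break
    return samples

def point_on_path(segments, s):
    """Map arc length s onto a path of line segments and circular arcs,
    returning (x, y, heading). Each segment dict gives its start pose."""
    for seg in segments:
        if s <= seg["length"]:
            x0, y0, h0 = seg["x"], seg["y"], seg["heading"]
            if seg["kind"] == "line":
                return (x0 + s * math.cos(h0), y0 + s * math.sin(h0), h0)
            r = seg["radius"]                 # signed: positive = left turn
            dh = s / r
            cx = x0 - r * math.sin(h0)        # center of the arc's circle
            cy = y0 + r * math.cos(h0)
            return (cx + r * math.sin(h0 + dh),
                    cy - r * math.cos(h0 + dh),
                    h0 + dh)
        s -= seg["length"]
    last = segments[-1]                       # past the end: clamp to endpoint
    return point_on_path([last], last["length"])
```

Because the speed profile lives entirely in one dimension, the same profile code serves any path shape; only `point_on_path` knows about lines and arcs, which mirrors the separation the abstract describes.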
Tactical behaviors for autonomous ground and air vehicles are an area of high interest to the Army. They are critical for the inclusion of robots in the Future Combat System (FCS). Tactical behaviors can be defined at multiple levels: at the Company, Platoon, Section, and Vehicle echelons. They are currently being defined by the Army for the FCS Unit of Action. At all of these echelons, unmanned ground vehicles, unmanned air vehicles, and unattended ground sensors must collaborate with each other and with manned systems. Research being conducted at the National Institute of Standards and Technology (NIST) and sponsored by the Army Research Lab is focused on defining the Four Dimensional Real-time Controls System (4D/RCS) reference model architecture for intelligent systems and developing a software engineering methodology for system design, integration, test and evaluation. This methodology generates detailed design requirements for perception, knowledge representation, decision making, and behavior generation processes that enable complex military tactics to be planned and executed by unmanned ground and air vehicles working in collaboration with manned systems.
The level of automation in combat vehicles being developed for the Army's objective force is greatly increased over the Army's legacy force. This automation is taking many forms in emerging vehicles, ranging from operator decision aids to fully autonomous unmanned systems. The development of these intelligent vehicles requires a thorough understanding of all of the intelligent behavior that needs to be exhibited by the system so that designers can allocate functionality to humans and/or machines. Traditional system specification techniques focused heavily on the functional description of the major systems and implicitly assumed that a well-trained crew would operate these systems in a manner to accomplish the tactical mission assigned to the vehicle. In order to allocate some or all of these intelligent behaviors to machines in future vehicles, it is necessary to be able to identify and describe these intelligent behaviors in detail. In this paper, we describe an effort to develop an intelligent systems (IS) ontology using Protégé. The goal of this effort is to develop a common, implementation-independent, extendable knowledge source for researchers and developers in the intelligent vehicle community that will:
* Provide a standard set of domain concepts along with their attributes and inter-relations
* Allow for knowledge capture and reuse
* Facilitate systems specification, design, and integration, and
* Accelerate research in the field.
This paper describes the methodology we have used to identify knowledge in this domain and an approach to capture and visualize the knowledge in the ontology.
Sensory processing for real-time, complex, and intelligent control systems is costly, so it is important to perform only the sensory processing required by the task. In this paper, we describe a straightforward metric for precisely defining sensory processing requirements. We then apply that metric to a complex, real-world control problem, autonomous on-road driving. To determine these requirements the system designer must precisely and completely define 1) the system behaviors, 2) the world model situations that the system behaviors require, 3) the world model entities needed to generate all those situations, and 4) the resolutions, accuracy tolerances, detection timing, and detection distances required of all world model entities.
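The four-step metric above bottoms out in a per-entity requirements table: for each behavior, the world-model entities it needs and the resolution, accuracy, detection range, and timing required of each. A minimal sketch of such a table and a coverage check follows; the behavior names, entity names, and every numeric value are invented placeholders, not requirements from the paper.

```python
from dataclasses import dataclass

@dataclass
class SensingRequirement:
    entity: str               # world-model entity to be detected
    resolution_m: float       # spatial resolution needed (meters)
    accuracy_m: float         # position accuracy tolerance (meters)
    detect_range_m: float     # distance at which detection is required
    update_hz: float          # detection timing (minimum refresh rate)

# Step 1-4 of the metric, collapsed into data: behaviors map to the
# entity requirements their world-model situations demand.
REQUIREMENTS = {
    "PassVehicleInFront": [
        SensingRequirement("oncoming_vehicle", 0.5, 1.0, 150.0, 10.0),
        SensingRequirement("lane_marking", 0.1, 0.2, 30.0, 10.0),
    ],
    "FollowLane": [
        SensingRequirement("lane_marking", 0.1, 0.2, 30.0, 10.0),
    ],
}

def sensor_covers(req, sensor_res_m, sensor_range_m, sensor_hz):
    """True if a sensor specification satisfies one entity requirement."""
    return (sensor_res_m <= req.resolution_m
            and sensor_range_m >= req.detect_range_m
            and sensor_hz >= req.update_hz)
```

The point of the metric is exactly this kind of checkable mapping: sensory processing is provisioned for the entities the active behaviors actually require, and nothing more.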
The U.S. Department of Defense has initiated plans for the deployment of autonomous robotic vehicles in various tactical military operations starting in about seven years. Most of these missions will require the vehicles to drive autonomously over open terrain and on roads which may contain traffic, obstacles, military personnel, and pedestrians. Unmanned Ground Vehicles (UGVs) must therefore be able to detect, recognize, and track objects and terrain features in very cluttered environments. Although several LADAR sensors available today have been implemented and demonstrated to provide somewhat reliable obstacle detection and can be used for path planning and selection, they tend to be limited in performance, are affected by obscurants, and are quite large and expensive. In addition, even though considerable effort and funding have been provided by the DOD R&D community, nearly all of the development has been for target detection (ATR) and tracking from various flying platforms. Participation in the Army and DARPA sponsored UGV programs has helped NIST to identify requirement specifications for LADAR to be used for on- and off-road autonomous driving. This paper describes the expected requirements for a next generation LADAR for driving UGVs and presents an overview of proposed LADAR design concepts and a status report on current developments in scannerless Focal Plane Array (FPA) LADAR and advanced scanning LADAR which may be able to achieve the stated requirements. Examples of real-time range images taken with existing LADAR prototypes will be presented.
The Real-time Control System (RCS) Methodology has evolved over a number of years as a technique to capture task knowledge and organize it into a framework conducive to implementation in computer control systems. The fundamental premise of this methodology is that the present state of the task activities sets the context that identifies the requirements for all of the support processing. In particular, the task context at any time determines what is to be sensed in the world, what world model states are to be evaluated, which situations are to be analyzed, what plans should be invoked, and which behavior generation knowledge is to be accessed. This methodology concentrates on the task behaviors explored through scenario examples to define a task decomposition tree that clearly represents the branching of tasks into layers of simpler and simpler subtask activities. There is a named branching condition/situation identified for every fork of this task tree. These become the input conditions of the if-then rules of the knowledge set that define how the task is to respond to input state changes. Detailed analysis of each branching condition/situation is used to identify antecedent world states, and these, in turn, are further analyzed to identify all of the entities, objects, and attributes that have to be sensed to determine if any of these world states exist. This paper explores the use of this 4D/RCS methodology in some detail for the particular task of autonomous on-road driving; this work was funded under the Defense Advanced Research Projects Agency (DARPA) Mobile Autonomous Robot Software (MARS) effort (Doug Gage, Program Manager).
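The end product of this decomposition can be sketched concretely: each named branching condition/situation from the task tree becomes the antecedent of an if-then rule that selects the next subtask, and each condition is in turn decomposed into the world states that sensing must establish. The condition and subtask names below are invented for illustration, not drawn from the paper's driving task tree.

```python
# Each fork of the task tree yields one named branching condition and
# the subtask it selects; together they form the if-then knowledge set.
RULES = [
    # (branching condition/situation, subtask to invoke)
    ("vehicle_ahead_slower_and_passing_lane_clear", "PassVehicleInFront"),
    ("vehicle_ahead_slower_and_passing_lane_blocked", "FollowVehicle"),
    ("lane_clear", "FollowLane"),
]

def select_subtask(world_states):
    """Fire the first rule whose branching condition holds in the
    current world model; world_states maps condition names to booleans
    established by sensory processing."""
    for condition, subtask in RULES:
        if world_states.get(condition, False):
            return subtask
    return "Stop"   # illustrative default when no rule fires
```

Rule order encodes priority here; in the methodology proper, each condition name is then analyzed further to enumerate the entities and attributes that must be sensed before the condition can be evaluated at all.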
This paper describes NIST’s efforts in evaluating what it will take to achieve autonomous human-level driving skills in terms of time and funding. NIST has approached this problem from several perspectives: considering the current state-of-the-art in autonomous navigation and extrapolating from there, decomposing the tasks identified by the Department of Transportation for on-road driving and comparing that with accomplishments to date, analyzing computing power requirements by comparison with the human brain, and conducting a Delphi Forecast using expert researchers in the field of autonomous driving. A detailed description of each of these approaches is provided, along with the major findings from each approach and an overall picture of what it will take to achieve human-level driving skills in autonomous vehicles.
The National Institute of Standards and Technology has developed a modular definition of components for machine control, and a specification of their interfaces, with broad application to robots, machine tools, and coordinate measuring machines. These components include individual axis control, coordinated trajectory generation, discrete input/output, language interpretation, and task planning and execution. The intent of the specification is to support interoperability of components provided by independent vendors. NIST has installed a machine tool controller based on these interfaces on a 4-axis horizontal machining center at the Pontiac Powertrain Division of General Motors. The intent of this system is to validate that the interfaces are comprehensive enough to serve a demanding application, and to demonstrate several key concepts of open architecture controllers: component interoperability, controller scalability, and function extension. In particular, the GM-NIST Enhanced Machine Controller (EMC) demonstrates interoperability of motion control hardware, scalability across computing platforms, and extensibility via user-defined graphical user interfaces. An important benefit of platform scalability is the ease with which the developers could test the controller in simulation before site installation. The EMC specifications are serving a larger goal of driving the development of true industry standards that will ultimately benefit users of machine tools, robots, and coordinate measuring machines. To this end, a consortium has been established, and cooperative participation with the Department of Energy TEAM program and the US Air Force Title III program has been undertaken.
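The component-and-interface idea above can be sketched abstractly: each controller component exposes a small interface so that implementations from independent vendors (or a simulation stand-in) are interchangeable. The class and method names below are assumptions for illustration, not the published EMC interface specification.

```python
from abc import ABC, abstractmethod

class AxisController(ABC):
    """Interface for an individual axis; any vendor implementation or
    simulated stand-in must honor it."""
    @abstractmethod
    def move_to(self, position: float) -> None: ...
    @abstractmethod
    def position(self) -> float: ...

class TrajectoryGenerator(ABC):
    """Interface for coordinated trajectory generation across axes."""
    @abstractmethod
    def plan(self, waypoints: list) -> list:
        """Return coordinated per-axis setpoints for the waypoints."""

class SimulatedAxis(AxisController):
    """Stand-in axis that satisfies the same interface as real motion
    hardware, mirroring the paper's point about testing the controller
    in simulation before site installation."""
    def __init__(self) -> None:
        self._pos = 0.0
    def move_to(self, position: float) -> None:
        self._pos = position
    def position(self) -> float:
        return self._pos
```

Because upper layers depend only on the interface, swapping `SimulatedAxis` for a hardware-backed implementation requires no change to the trajectory or task-execution components, which is the interoperability and scalability claim in concrete form.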