For the past fifteen years, the International Ultraviolet Explorer (IUE) astronomical satellite has been successfully scheduled by `coarse-graining' time into large discrete blocks. The success is due in part to the flexibility of coarse-graining, which allows guest investigators to make real-time modifications to the observing plan. Such flexibility is desirable whenever an astronomical object is observed for the first time by a particular mission, since new data sometimes contain scientific surprises, and because several important types of astronomical objects are characteristically unpredictable and variable (e.g., supernovas and x-ray transients). Software that incorporates this approach has the potential to significantly improve the efficiency and scientific return of future satellite missions. We give an overview of the IUE satellite and its scheduling requirements and describe our approach to satellite scheduling using constraint logic programming. We describe some of the constraints that are useful for satellite scheduling, show how they can be used for efficient coarse-grained scheduling, and discuss the advantages of this approach for other satellite telescopes.
It is commonly acknowledged that there is a tradeoff between schedule quality and schedule robustness: high-quality schedules tend to be of low robustness, and robust schedules tend to be of low quality. To better manage this tradeoff, we have developed an algorithm that implements what we call Just-In-Case scheduling; the algorithm explicitly considers the ways in which scheduled actions might fail and how such failures can impact the executability of a schedule. Just-In-Case scheduling is able to build schedules that are both robust and of high quality. The algorithm is motivated in this paper by a specific telescope scheduling problem, and the paper presents the results of an experiment, carried out using real telescope scheduling data, that illustrates the performance improvement one can expect from using it.
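The core idea can be illustrated with a small sketch. This is not the paper's algorithm: the slot values, failure probabilities, and rescheduling policy below are invented for illustration. The sketch ranks time slots by their probability of failure and precomputes a contingent alternative schedule for each of the riskiest ones, so execution can switch plans without replanning from scratch.

```python
def expected_value(schedule, fail_prob):
    """Expected scientific value of a schedule given per-slot failure probabilities."""
    return sum(v * (1.0 - fail_prob.get(t, 0.0)) for t, v in schedule.items())

def just_in_case(base_schedule, fail_prob, reschedule, n_branches=2):
    """Augment a schedule with contingent alternatives at its most
    failure-prone slots, so execution can switch without replanning."""
    # Rank slots by probability of failure (e.g., bad weather on night t).
    risky = sorted(fail_prob, key=fail_prob.get, reverse=True)[:n_branches]
    # Precompute an alternative schedule assuming each risky slot fails.
    return base_schedule, {t: reschedule(base_schedule, t) for t in risky}

def drop_failed(schedule, failed_slot):
    # Toy rescheduling policy: abandon the failed slot, keep the rest.
    return {t: v for t, v in schedule.items() if t != failed_slot}

base = {1: 10.0, 2: 5.0, 3: 8.0}     # night -> scientific value (invented)
p_fail = {1: 0.4, 2: 0.1, 3: 0.3}    # chance each night is unusable (invented)
plan, backups = just_in_case(base, p_fail, drop_failed)
# backups[1] is the plan to execute if night 1 fails, available instantly.
```

A real implementation would also score each contingent branch (e.g., by `expected_value`) and recurse into the branches themselves.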
As part of the recently completed Microelectronics Manufacturing Science and Technology (MMST) project, a decision support and analysis tool for planning in a semiconductor manufacturing facility has been developed. The design of the planning system uses an object-oriented approach, and the implementation is in the Smalltalk programming environment. The system continually maintains a plan for wafer release into a facility and predicts processing completion dates. It runs in a distributed environment, allowing simultaneous users in different parts of the facility. The system also provides several types of what-if analysis, both on the existing production plan and on production data. Production plan analysis assists in making operational decisions related to the facility in its current state, such as determining the least disruptive time to take a piece of equipment down for maintenance. Production data analysis, which can be performed independently of the production plan, determines information such as the equipment throughput rates needed to achieve given product cycle times. All planning is performed using artificial intelligence search techniques and is based on a time-phased capacity model of the facility. Uncertainty inherent in production data, such as cycle times, is modeled using fuzzy arithmetic. The tool was used during the final 1000-wafer demonstration for MMST and is currently being installed in other semiconductor manufacturing facilities. This paper describes the main goals of the planning system, the overall approach to planning and analysis, and the current status of the system.
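Fuzzy arithmetic over uncertain cycle times can be sketched with triangular fuzzy numbers, a common choice for this kind of model; the step names and durations below are invented, not taken from the MMST system.

```python
class TriFuzzy:
    """Triangular fuzzy number (low, mode, high)."""
    def __init__(self, low, mode, high):
        self.low, self.mode, self.high = low, mode, high

    def __add__(self, other):
        # Addition of triangular fuzzy numbers is componentwise.
        return TriFuzzy(self.low + other.low,
                        self.mode + other.mode,
                        self.high + other.high)

    def defuzzify(self):
        # Centroid of the triangle gives a single crisp estimate.
        return (self.low + self.mode + self.high) / 3.0

# Cycle time of a two-step process with uncertain step durations (hours, invented).
etch    = TriFuzzy(1.0, 1.5, 2.5)
implant = TriFuzzy(2.0, 2.0, 3.0)
total = etch + implant   # -> TriFuzzy(3.0, 3.5, 5.5)
```

The fuzzy total carries the uncertainty through the plan, while `defuzzify` yields a crisp completion-date estimate when one is needed.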
An investigation applying genetic algorithm techniques to assembly line scheduling is reported. Specifically, the allocation of resources (e.g., assembly crews and their skill levels, tooling, parts availabilities, etc.) to assembly tasks on production line data is performed by a scheduling system based on a genetic algorithm. This article presents the general problem we have attempted to solve and how we have addressed the problem. We discuss our experience with the problem encoding, genetic operators used, the genetic plan, chromosome repair, and schedule evaluation.
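A minimal sketch of the ingredients mentioned above follows: permutation encoding, a genetic operator that keeps chromosomes valid without repair, and schedule evaluation. The single-machine problem and all parameters are invented for illustration, not the article's assembly-line encoding.

```python
import random

def ga_schedule(durations, pop_size=30, gens=100, seed=0):
    """Toy GA for single-machine scheduling: chromosomes are task
    permutations; fitness is total completion time (lower is better)."""
    rng = random.Random(seed)
    n = len(durations)

    def cost(order):
        t, total = 0, 0
        for task in order:
            t += durations[task]
            total += t
        return total

    def crossover(p1, p2):
        # Order crossover (OX): keep a slice of p1, fill the rest in p2's
        # order, so the child is always a valid permutation (no repair).
        i, j = sorted(rng.sample(range(n), 2))
        kept = set(p1[i:j])
        fill = iter(g for g in p2 if g not in kept)
        return [p1[k] if i <= k < j else next(fill) for k in range(n)]

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                 # elitist ranking
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            child = crossover(*rng.sample(parents, 2))
            if rng.random() < 0.3:         # swap mutation
                a, b = rng.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = parents + children
    best = min(pop, key=cost)
    return best, cost(best)

order, total_time = ga_schedule([5, 2, 8, 1, 3])
# `order` is a valid task permutation; shortest-processing-time-first
# is optimal for this objective, so it serves as a sanity check.
```

Real assembly-line encodings also carry resource assignments (crews, tooling), which is where explicit chromosome repair becomes necessary.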
Igor is a knowledge-based system for exploratory statistical analysis of complex systems and environments. Igor has two related goals: to help automate the search for interesting patterns in data sets, and to help develop models that capture significant relationships in the data. We outline a language for Igor, based on techniques of opportunistic planning, which balances control and opportunism. We describe the application of Igor to the analysis of the behavior of Phoenix, an artificial intelligence planning system.
Knowledge-based cooperative query answering facilities have been introduced into geographic information systems (GIS) for approximate query answering. We use context-sensitive spatial type abstraction hierarchies (STAHs) to provide such cooperative capabilities. As a result, the ranges of the approximate spatial operators can be determined automatically by STAHs rather than specified by domain experts; these operators are therefore context-sensitive. Further, STAHs can be generated automatically from geographic databases and are thus scalable. A prototype system, CoGIS, has been implemented to incorporate these cooperative capabilities.
Our objective is to integrate transformational analogy, derivational analogy, and goal-regression to create solutions for an intelligent system called SEIDAM (System of Experts for Intelligent Data Management). SEIDAM answers queries about forests and the environment through the integration of remote sensing, geographic information, models, and field measurements. A query (problem) could require, for example, that a forest inventory stored in a geographic information system be updated to reflect past harvesting by overlaying current satellite imagery on forest cover maps. A case consists of a query, remote sensing data, geographic information, and the analysis methods used to answer the query. SEIDAM will consist of approximately 150 expert systems performing satellite and aircraft image analysis, integrated with multiple GIS and a relational database. Derivational analogy provides the means by which the search for a solution can be expanded knowledgeably; that is, it provides a knowledge-based justification for expanding the search. Transformational analogy eliminates the problems associated with searching by foregoing a search altogether; the intractability of exploring the search space is then no longer a consideration.
For long-range autonomous robots to be successful, they must be able to maintain accurate information about their location, available resources, and the state of critical components. We propose a methodology that combines traditional sensor-based tracking methods with discrete probabilistic representations of system state. Further, we extend the use of the Gaussian distribution to include a richer set of mathematical descriptions of system performance under specific failure conditions. The extended representations are then used to test statistically for these failure conditions by predicting the most likely values of observable parameters given the system state. This technique is combined with first-order extended Kalman filtering to yield a probabilistic framework for tracking and fault detection in domains with nonlinear dynamics.
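The residual-testing idea can be sketched with a scalar filter. The paper uses first-order extended Kalman filtering on nonlinear dynamics; this linear, one-dimensional version with invented noise parameters only illustrates the statistical test on the innovation.

```python
def fault_monitor(zs, q=0.01, r=0.25, threshold=9.0):
    """Scalar Kalman filter over a measurement stream; steps whose
    normalized innovation squared exceeds `threshold` (about 3 sigma)
    are flagged as candidate faults instead of being absorbed."""
    x, p = zs[0], 1.0                  # state estimate and its variance
    faults = []
    for k, z in enumerate(zs[1:], start=1):
        p_pred = p + q                 # predict (random-walk process model)
        s = p_pred + r                 # innovation variance
        nu = z - x                     # innovation (measurement residual)
        if nu * nu / s > threshold:
            faults.append(k)           # too unlikely under the nominal model
            continue                   # skip the update so the fault is not absorbed
        gain = p_pred / s
        x += gain * nu
        p = (1.0 - gain) * p_pred
    return faults

# A sensor glitch at step 4 stands out against the nominal noise.
readings = [0.0, 0.1, -0.1, 0.05, 5.0, 0.0, 0.1]
flagged = fault_monitor(readings)   # -> [4]
```

Extending the test to multiple failure-specific distributions, as the abstract describes, amounts to scoring the same residual against each failure model rather than a single threshold.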
Diagnosis is often thought of as an isolated task in theoretical reasoning (reasoning with the goal of updating our beliefs about the world). We present a decision-theoretic interpretation of diagnosis as a task in practical reasoning (reasoning with the goal of acting in the world) and sketch components of our approach to this task. These components include an abstract problem description, a decision-theoretic model of the basic task, a set of inference methods suitable for evaluating the decision representation in real time, and a control architecture to provide the needed continuing coordination between the agent and its environment. A principal contribution of this work is the representation and inference methods we have developed, which extend previously available probabilistic inference methods and narrow, somewhat, the gap between probabilistic and logical models of diagnosis.
This paper discusses DOC, an efficient method for diagnosing continuous systems based on qualitative analysis of an analytic constraint equation model of the system. Starting with equations that relate observations (measurements of parameter values) of a system to its individual components, DOC first generates a diagnosis model based on partial explanations of the associated measurement parameters. Partial explanations are generated by qualitative causal analysis of the given constraint equations. These relations are then exploited for effective candidate generation, and for measurement selection when multiple candidates are generated. A method for analyzing systems with complex feedback loops is also presented. DOC has been successfully applied to diagnosing faults in the Space Station Thermal Bus system.
We describe the design of an architecture for an intelligent integrated mission planner for autonomous underwater vehicles (AUVs). Mission planning is an integral and important part of the software architecture of an AUV. Among the functional modules of an AUV, such as the planner, controller, navigator, and perception modules, the planner plays the key role in generating, monitoring, and controlling all mission tasks. To perform complex missions, the planner needs a wide range of knowledge and efficient reasoning techniques. Mission planning involves navigational planning, resource planning, safety planning, and mission-specific task planning. These functions require reasoning about the environment, the vehicle, on-board resources, and mission tasks. The proposed design employs a mixture of hierarchical and heterarchical architectures. Case-based reasoning is employed to synthesize mission plans. Among the planner modules, the design of the navigational planner is elaborated in detail. The approach integrates advanced artificial intelligence techniques with the AUV control architecture to make mission planning and execution simpler and more flexible. The design takes into consideration the limited availability of AUV resources, scalability, and portability to other autonomous systems.
The European Community's strategic research initiative in information technology has been in place for seven years. A good example of the pan-European collaborative projects conducted under this initiative is PANORAMA: Perception and Navigation for Autonomous Mobile Robot Applications. This four-and-a-half-year project, completed in October 1993, aimed to prove the feasibility of an autonomous mobile robotic system replacing a human-operated vehicle working outdoors in a partially structured environment. The autonomous control of a mobile rock drilling machine was chosen as a challenging and representative test scenario. This paper presents an overview of intelligent mobile robot control architectures. Goals and objectives of the project are described, together with the makeup of the consortium and the roles of the members within it. The main technical achievements from PANORAMA are then presented, with emphasis given to the problems of realizing intelligent control. In particular, the planning and replanning of a mission, and the corresponding architectural choices and infrastructure required to support the chosen task-oriented approach, are discussed. Specific attention is paid to the functional decomposition of the system and to how the requirements for `intelligent control' impact the organization of the identified system components. Future work and outstanding problems are considered in some concluding remarks.
Many applications require the ability to navigate a mobile robotic platform through environments containing unknown obstacles. For example, many old nuclear reactors being decommissioned have no plans or models of the details of what is inside them. Sending humans into such unknown and hazardous environments is both costly and dangerous. Therefore, a robot that can go safely into these areas, knowing little or nothing about what it will encounter, is needed. This paper describes a system being developed that can be used either telerobotically, using graphical interfaces, or autonomously, with supervision capabilities through the interfaces. The system starts with little or no knowledge base and adds to it as it moves through the environment. The experiments described utilize a unique mobile robotic testbed: a wheeled robot integrated with ultrasonic range sensors, infrared proximity sensors, and CCD cameras controlled through VME-based hardware.
We analyze simple everyday actions with a view to developing strategies that an intelligent robot can use to perform the same actions. The tasks studied are in the class of simple machine-type actions involving hand tools. A tool is assumed to be composed of two principal geometric primitives that serve as the handle and the output end, respectively. A task is modeled as an operation on a target object by the tool; the desired effect determines a motion trajectory for the output end of the tool. Decisions on grasp location and orientation are then made based on the corresponding handle motions. In addition to planning grasps and manipulations, we also formulate strategies for recognizing such tools. Tool recognition (from visual input) is based on the geometric information extracted: all objects in a scene are segmented into volumetric primitives, the primitives are analyzed for their suitability to participate in the required task, and the primitives are ranked according to these criteria, with the most suitable object chosen to function as the tool.
Physical devices very rarely function as intended by their designers the first time they are implemented. The behavior of a device may deviate from the intended behavior in two ways: the device may fail to exhibit a desired behavior, or it may exhibit an undesirable behavior. We describe a model-based method for solving the latter task. The method involves diagnosis and repair of the failed device and verification of the modified device. It uses compiled structure-behavior-function (SBF) models of how the device works. In an SBF model, the behaviors and the structural elements of a device act as indices into causal mechanisms that explain how the structure of the device produces its behaviors. The causal mechanisms in turn serve as indices into qualitative relations between device variables. The KRITIK2 system uses this indexing scheme to access relevant causal mechanisms and qualitative relations, and uses this knowledge to solve the diagnosis, repair, and verification subtasks of redesign. KRITIK2 shows that this model-based method is sufficient for parametric redesign even for devices in which a single cause results in multiple effects and a single structural element plays a role in multiple causal behaviors.
Object modeling organizes knowledge about design components and analyses in a modular fashion, from which representations of candidate designs may be quickly constructed. Modeling flexibility requires complementary flexibility in analysis. Constraint propagation is a least-commitment approach to performing computations, since dependencies between calculations are inferred at run-time; it is thus well-suited to managing parametric analyses for conceptual/preliminary design as candidate configurations are evaluated and modified. A modeling package that combines these two approaches has been implemented. Classes for describing both design components and design analyses can be specified in terms of attributes and constraints on these attributes. A third type of class, called a link, is used to specify constraints on the attributes of other objects, referred to as its linkages. Candidate designs are modeled by instantiating classes representing the components to be combined, the analyses to be performed, and the links among them. As attribute values are assigned by the user, constraint propagation is triggered to run analyses and transfer results among the instances. Discussion focuses on the implementation of this approach and its application to sample problems in the design of aircraft engine turbomachinery and exhaust nozzles.
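The least-commitment flavor of constraint propagation described above can be sketched as follows. The attribute names and the thrust formula are invented examples, not the package's API: each constraint fires as soon as all of its inputs are bound, and its output may in turn trigger further constraints, so the evaluation order is inferred at run-time rather than fixed in advance.

```python
class Sheet:
    """Minimal one-way constraint propagator over named attributes."""
    def __init__(self):
        self.values = {}
        self.constraints = []   # list of (inputs, output, fn)

    def constrain(self, inputs, output, fn):
        self.constraints.append((inputs, output, fn))

    def assign(self, name, value):
        self.values[name] = value
        self._propagate()

    def _propagate(self):
        # Fire any constraint whose inputs are all bound; repeat until
        # no new value appears (newly computed outputs may enable others).
        changed = True
        while changed:
            changed = False
            for inputs, output, fn in self.constraints:
                if output not in self.values and all(i in self.values for i in inputs):
                    self.values[output] = fn(*[self.values[i] for i in inputs])
                    changed = True

s = Sheet()
s.constrain(["mdot", "v_exit"], "thrust", lambda m, v: m * v)
s.constrain(["thrust", "weight"], "t_over_w", lambda t, w: t / w)
s.assign("mdot", 100.0)      # kg/s
s.assign("weight", 50000.0)  # N
s.assign("v_exit", 600.0)    # m/s -> thrust, then t_over_w, follow automatically
```

The `t_over_w` constraint here plays the role of a link: it relates attributes computed by other constraints without belonging to either component.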
A design optimization methodology that couples optimization techniques with CFD analysis for airfoil design is presented. The technique optimizes 2D airfoil sections of a blade by minimizing the deviation of the actual Mach number distribution on the blade surface from a smooth fit of the distribution. The airfoil is not reverse engineered by specifying a precise desired Mach number distribution; only general desired characteristics of the distribution are specified for the design. Because the Mach number distribution is very complex and cannot be conveniently represented by a single polynomial, it is partitioned into segments, each of which is characterized by a polynomial of a different order. The sum of the deviations over all segments is minimized during optimization. To make intelligent changes to the airfoil geometry, the geometry must be associated with features observed in the Mach number distribution; associating the geometry parameters with independent features of the distribution is a fairly complex task. Also, for different optimization techniques to work efficiently, the airfoil geometry must be parameterized into independent parameters with enough degrees of freedom for adequate geometry manipulation. A high-pressure, low-reaction steam turbine blade section was optimized using this methodology. The Mach number distribution was partitioned into pressure and suction surfaces, and the suction surface distribution was further subdivided into leading-edge, mid-section, and trailing-edge sections. Two different airfoil representation schemes were used to define the design variables of the optimization problem. The optimization was performed using a combination of heuristic search and numerical optimization. The optimization results for the two schemes are discussed in the paper and compared to a manual design improvement study conducted independently by an experienced airfoil designer.
The turbine blade optimization system (TBOS) was developed using the described methodology of coupling knowledge engineering with multiple search techniques for blade shape optimization. TBOS removes a major bottleneck in the design cycle by performing multiple design optimizations in parallel, improving design quality at the same time. TBOS improves not only the designs but also the designers' quality of work, removing the mundane, repetitive task of design iteration and leaving designers more time for innovative design.
Knowledge acquisition has frequently been identified as the bottleneck in knowledge engineering. Many techniques for knowledge acquisition have been developed, but this paper goes further to present a complete methodology. The methodology combines general questions, specific questions, direct observation, an appropriate knowledge representation format, teachback techniques, and simulations. An expert system created to advise the operators of a reactor on the condition of the reactor shows how each phase of the methodology can be applied to the knowledge acquisition process.
When engineers diagnose system failures, they often use models to confirm system operation. This concept has produced a class of advanced expert systems that perform model-based diagnosis. A model-based diagnostic expert system for a Space Station Freedom electrical power distribution testbed is currently being developed at the NASA Lewis Research Center. The objective of this expert system is to autonomously detect and isolate electrical fault conditions. Marple, a software package developed at TRW, provides a model-based environment utilizing constraint suspension. Constraint suspension techniques were originally developed for digital systems; however, Marple provides the mechanisms for applying this approach to analog systems, such as the testbed, as well. The expert system was developed using Marple and Lucid Common Lisp running on a Sun SPARC-2 workstation. The Marple modeling environment has proved to be a useful tool for investigating the various aspects of model-based diagnostics. This paper describes work completed to date and lessons learned while employing model-based diagnostics using constraint suspension within an analog system.
We describe the development and application of a knowledge-based expert system for space shuttle main propulsion propellant loading. This tool, the propulsion advisory tool (PAT), performs system diagnostics and engineering assessments for space shuttle main propulsion system engineers. PAT identifies both actual and potential system failures and provides pertinent data for anomaly resolution. Over 150 measurements are monitored in real time at the full sample rate in order to evaluate system performance. Rules for the system are coded in plain English and are based on existing requirements documents and knowledge captured from propellant loading experts. In this way, the expert system works within a framework similar to the one humans use: a system based on human logic and reasoning. The PAT knowledge base interfaces with a user display on the same platform for advanced system schematics, data representation, and data management. The Rockwell International knowledge-base diagnostic system and the Lockheed Space Operations Company user display are planned to run on an integrated workstation.
A controlled ecological life support system (CELSS) is a critical technology for the Space Exploration Initiative. To reduce required manpower and increase CELSS reliability, an automated control system is being developed for CELSS. One part of the control system being investigated is the use of an expert system to troubleshoot mechanical failures in the biomass production chamber (BPC) within a CELSS at Kennedy Space Center, Florida. Such an expert system would provide mission crew members with instructions and advice on how to minimize the impact of mechanical failures on the crop(s) being raised in the BPC. The nutrient delivery system (NDS) is one of the most critical subsystems of the BPC and is representative of the other BPC subsystems; it is the primary focus of the expert system that was developed. The biomass production chamber operations assistant fact base was developed using several experts from the fields of mechanical engineering and biology. This combination of expertise provides the BPC Operations Assistant with a knowledge base that can assess the criticality of a mechanical failure, identify the short- and long-term effects on the resident crop(s), and provide instructions for bringing failed mechanical subsystems back on-line. The BPC Operations Assistant can also help decide, in a failure scenario, whether it is more cost-effective to replant a crop, continue growing it, or harvest it immediately. The experience gained from the research and development of the BPC Operations Assistant provides key insights into the knowledge-based monitoring and diagnostic systems that will be required for the manned exploration of space in the future.
Real-time rule-based decision systems are embedded AI systems that must make critical decisions within stringent timing constraints. When the response time of a rule-based system is not acceptable, the system must be optimized to meet both timing and integrity constraints. This paper describes a novel approach to reducing the response time of rule-based expert systems. Our optimization method is twofold: the first phase constructs the reduced cycle-free finite state transition system corresponding to the input rule-based system, and the second phase further refines the constructed transition system using simulated annealing. The method makes use of rule-base decomposition, concurrency, and state equivalency. The new, optimized system is synthesized from the derived transition system. Compared with the original system, the synthesized system requires fewer rule firings to reach the fixed point, is inherently stable, and has no redundant rules.
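The refinement phase relies on standard simulated annealing. A generic sketch follows; the toy cost function and neighborhood below are invented stand-ins for the paper's transition-system refinement, which optimizes over states and rule firings rather than permutations.

```python
import math
import random

def anneal(state, cost, neighbor, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated annealing: accept worse states with probability
    exp(-delta/T), so the search can escape local minima as T cools."""
    rng = random.Random(seed)
    best = cur = state
    t = t0
    for _ in range(steps):
        nxt = neighbor(cur, rng)
        delta = cost(nxt) - cost(cur)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur = nxt
            if cost(cur) < cost(best):
                best = cur
        t *= cooling
    return best

# Toy instance: drive an ordering toward a target by random swaps.
target = list(range(8))
def cost(order):
    return sum(1 for a, b in zip(order, target) if a != b)
def neighbor(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    nxt = order[:]
    nxt[i], nxt[j] = nxt[j], nxt[i]
    return nxt

start = target[::-1]
result = anneal(start, cost, neighbor)
```

At high temperature the search wanders freely; as `t` decays the acceptance test becomes effectively greedy, which is what lets annealing refine a structure without getting trapped by the first local optimum.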
We have developed and field-tested a real-time robust diagnostic system that uses hierarchical, multiple-aspect models of plants. The models include the functional structure, timed failure propagation graphs, physical component structure, and component failure modes. The diagnostic reasoning applies structural and temporal constraints to generate and validate fault hypotheses using the `predictor-corrector' principle. The diagnosis is generated in real time, amid an evolving alarm scenario, using a progressive-deepening control strategy. The system has been tested and demonstrated using ECLSS models obtained from the Boeing Company.
This paper introduces a fundamentally different approach to recognition -- the object-based approach -- which is inherently knowledge-based and sensor independent. The paper begins with a description of an object-based recognition system, contrasting it with the image-based approach. Next, the multilevel stage of the system, which incorporates several sensor data sources, is described. From these sources, elements of the situation hypothesis are generated as directed by the recognition goal. Depending on the degree of correspondence between the sensor-fed elements and the object-model-fed elements, a hypothetical element is created. The hypothetical element is further employed to develop evidence for the sensor-fed element through the inclusion of secondary sensor outputs. The sensor-fed element is thus modeled in more detail, and further evidence is added to the hypothetical element. Several levels of reasoning and data integration are involved in this overall process; further, a self-adjusting correction mechanism is included through feedback from the hypothetical element to the sensors, defining secondary output connections to the sensor-fed element. Some preliminary work based on this approach has been carried out, and initial results show improvements over the conventional image-based approach.
With the increasing amount of computer power available in civilian flight decks, it is becoming feasible to use some of this power for non-flight-critical systems. One area which could benefit greatly from some additional computer assistance is the pilot-machine interface. We describe the ARCHIE project, an attempt to make man-machine interfaces more robust and reliable. The initial target areas of this project are civilian glass cockpit flight decks and air traffic control stations.
The domain-specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and restricting the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.
A major objective of a Pavement Management System (PMS) is to assist highway managers and engineers in making consistent and cost-effective decisions related to the maintenance and rehabilitation of pavements. In many circumstances of PMS decision making, however, pavement engineers find themselves in a state of uncertainty. In this paper, valuation-based systems and networks are used to provide a framework for representing, solving, and drawing inferences in a PMS decision-making environment. The valuation-based system is a new representation for decision making and problem solving; its graphical depiction is called a valuation network. Valuation networks provide a compact representation emphasizing the qualitative features of decision making. A valuation network for PMS decision making is constructed, and the key algorithm for solving and making inferences in the network is illustrated.