Axial flow compressors are subjected to demands for ever-increasing levels of pressure ratio at a compression efficiency that augments the overall cycle efficiency. However, unstable flow may develop in the compressor, which can lead to a stall or surge and subsequently to gas turbine failure, resulting in significant downtime and repair cost. To protect against these potential aerodynamic instabilities, compressors are typically operated with a stall margin. This means operating the compressor at less than peak pressure rise, which reduces operating efficiency and performance. Therefore, it is desirable to have a reliable method to determine the state of a compressor by detecting the onset of a damaging event prior to its occurrence. In this paper, we propose a health monitoring scheme that gathers and combines the results of different diagnostic tools to maximize the advantages of each one while minimizing their disadvantages. This fusion scheme produces results that are better than the best result of any single tool. In part this is achieved because redundant information is available that, when combined correctly, improves the estimate of the better tool and compensates for the shortcomings of the less capable tool. We discuss the use of diagnostic information fusion for a compressor event, coupled with proactive control techniques, to support improved compressor performance while avoiding the increased damage risk due to stall margin reduction. Discretized time-to-failure windows provide event prediction in a prognostic sense.
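The fusion idea described above can be sketched, in a deliberately simplified form, as a confidence-weighted combination of per-tool fault likelihoods. This is only an illustrative stand-in for the paper's fusion scheme; the function name, the weights, and the fault labels are hypothetical.

```python
def fuse_diagnoses(tool_outputs):
    """Confidence-weighted fusion of per-tool fault likelihoods.

    tool_outputs: list of (weight, {fault_label: likelihood}) pairs,
    one entry per diagnostic tool. Returns a fused likelihood per fault.
    A more reliable tool gets a larger weight, so its estimate dominates
    while the weaker tool still contributes redundant information.
    """
    faults = tool_outputs[0][1].keys()
    total_weight = sum(w for w, _ in tool_outputs)
    return {
        f: sum(w * out[f] for w, out in tool_outputs) / total_weight
        for f in faults
    }
```

With a trusted tool (weight 0.7) reporting a high stall likelihood and a weaker tool (weight 0.3) undecided, the fused stall likelihood lands between the two, closer to the stronger tool's estimate.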
The recent advances in sensor technology, remote communication and computational capabilities, and standardized hardware/software interfaces are creating a dramatic shift in the way the health of vehicles is monitored and managed. These advances facilitate remote monitoring, diagnosis and condition-based maintenance of automotive systems. With the increased sophistication of electronic control systems in vehicles, there is a concomitant increased difficulty in the identification of the malfunction phenomena. Consequently, the current rule-based diagnostic systems are difficult to develop, validate and maintain. New intelligent model-based diagnostic methodologies that exploit the advances in sensor, telecommunications, computing and software technologies are needed.
In this paper, we will investigate hybrid model-based techniques that seamlessly employ quantitative (analytical) models and graph-based dependency models for intelligent diagnosis. Automotive engineers have found quantitative simulation (e.g. MATLAB/SIMULINK) to be a vital tool in the development of advanced control systems. The hybrid method exploits this capability to improve the diagnostic system's accuracy and consistency, utilizes existing validated knowledge on rule-based methods, enables remote diagnosis, and responds to the challenges of increased system complexity. The solution is generic and has the potential for application in a wide range of systems.
Modern industrial systems assume different configurations to accomplish multiple objectives during different phases of operation, and the component parameters may also vary from one phase to the next. Consequently, reliability evaluation of complex multi-phased systems is a vital and challenging issue. Maximization of mission reliability of a multi-phase system via optimal asset selection is another key demand; incorporation of optimization issues adds to the complexities of reliability evaluation processes. Introduction of components having self-diagnostics and self-recovery capabilities, along with increased complexity and phase-dependent configuration variations in network architectures, requires new approaches for reliability evaluation.
This paper considers the problem of evaluating the reliability of a complex multi-phased system with self-recovery/fault-protection options. The reliability analysis is based on a colored digraph (i.e., multi-functional) model that subsumes fault trees and digraphs as special cases. These models enable system designers to decide on system architecture modifications and to determine the optimum levels of redundancy. A sum of disjoint products (SDP) approach is employed to compute system reliability. We also formulate the problem of optimal asset selection in a multi-phase system as one of maximizing the probability of mission success under random load profiles on components. Different methods (e.g., ordinal optimization, robust design, and nonparametric statistical testing) are explored to solve the problem. The resulting analytical expressions and the software tool are demonstrated on a generic programmable software-controlled switchgear, a data bus controller system, and a multi-phase mission involving helicopters.
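For readers unfamiliar with the SDP technique, a minimal sketch of the idea (not the paper's tool) is shown below: system reliability is computed by rewriting the union of minimal path-set events as a sum of mutually disjoint product terms, so the term probabilities can simply be added. The component names and probabilities are hypothetical.

```python
import math

def sdp_reliability(paths, p):
    """System reliability via a sum-of-disjoint-products expansion over
    minimal path sets. Each term is a dict comp -> 1 (must be up) or
    0 (must be down); terms are constructed so their events are disjoint."""
    terms = []
    for i, path in enumerate(paths):
        cands = [{c: 1 for c in path}]          # "all comps of this path up"
        for prev in paths[:i]:                  # make disjoint from earlier paths
            extra = set(prev) - set(path)
            new = []
            for t in cands:
                if any(t.get(c) == 0 for c in extra):
                    new.append(t)               # already disjoint from prev
                    continue
                free = [c for c in extra if c not in t]
                # "some free component down", split into disjoint pieces:
                # (f1 down) or (f1 up and f2 down) or ...
                for k, f in enumerate(free):
                    piece = dict(t)
                    piece.update({g: 1 for g in free[:k]})
                    piece[f] = 0
                    new.append(piece)
                # if free is empty, t implies prev already works: drop t
            cands = new
        terms.extend(cands)
    return sum(
        math.prod(p[c] if up else 1 - p[c] for c, up in t.items())
        for t in terms
    )
```

For the two hypothetical paths {a, b} and {a, c} with p(a)=0.9, p(b)=p(c)=0.8, the expansion yields 0.9*0.8 + 0.9*0.8*0.2 = 0.864, matching the series-parallel formula 0.9*(1 - 0.2*0.2).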
Complex dynamical systems such as aircraft, manufacturing systems, chillers, motor vehicles, submarines, etc. exhibit continuous and event-driven dynamics. These systems undergo several discrete operating modes from startup to shutdown. For example, a certain shipboard system may be operating at half load or full load or may be at start-up or shutdown. Of particular interest are extreme or “shock” operating conditions, which tend to severely impact fault diagnosis or the progression of a fault leading to a failure. Fault conditions are strongly dependent on the operating mode. Therefore, it is essential that in any diagnostic/prognostic architecture, the operating mode be identified as accurately as possible so that such functions as feature extraction, diagnostics, prognostics, etc. can be correlated with the predominant operating conditions. This paper introduces a mode identification methodology that incorporates both time- and event-driven information about the process. A fuzzy Petri net is used to represent the possible successive mode transitions and to detect events from processed sensor signals signifying a mode change. The operating mode is initialized and verified by analysis of the time-driven dynamics through a fuzzy logic classifier. An evidence combiner module is used to combine the results from both the fuzzy Petri net and the fuzzy logic classifier to determine the mode. Unlike most event-driven mode identifiers, this architecture will provide automatic mode initialization through the fuzzy logic classifier and robustness through the combining of evidence of the two algorithms. The mode identification methodology is applied to an AC Plant typically found as a component of a shipboard system.
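One simple way to realize an evidence combiner of the kind described - by no means the paper's exact algorithm - is a normalized product of the two confidence vectors over the candidate modes, with an averaging fallback in case of total conflict. The mode names and confidence values below are hypothetical.

```python
def combine_evidence(petri_conf, classifier_conf):
    """Combine two fuzzy confidence vectors over operating modes.

    petri_conf: mode confidences from the event-driven fuzzy Petri net.
    classifier_conf: mode confidences from the time-driven fuzzy classifier.
    A normalized product rewards modes both sources agree on; if the
    sources fully conflict (all products zero), fall back to averaging.
    """
    modes = petri_conf.keys()
    fused = {m: petri_conf[m] * classifier_conf[m] for m in modes}
    total = sum(fused.values())
    if total == 0.0:   # complete conflict between the two sources
        fused = {m: 0.5 * (petri_conf[m] + classifier_conf[m]) for m in modes}
        total = sum(fused.values())
    return {m: v / total for m, v in fused.items()}
```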
In this paper we present a new architecture for integrating system health monitoring tasks into the development and life cycle of space systems. Building on model-supported diagnosis technology, the presented method uses information for diagnosis purposes that is already gathered during the development of a technical system. This information is extracted from simulation models used for design studies and what-if analyses during the design and development phase.
To build up these simulation models easily, we developed a library of generic models of spacecraft components. These models cover the components' nominal and off-nominal behavior as specified in the component FMECAs. By combining and parametrizing the components, a system model is built up. Since the limited resources on board a spacecraft do not allow us to use the model directly for model-based diagnosis, we use a model-supported approach: by systematically simulating possible component faults within the system's operational modes, we retrieve a set of measurement data that serve as symptoms of the failure modes. By classifying these data we obtain a knowledge base for a symptom-based on-board diagnosis system. To cope with the uncertainty in the measurement data, this diagnosis system has been realized as a fuzzy system that, on the basis of the given knowledge base, computes the most probable diagnoses from the given symptoms. The described system has been implemented within Astrium's Columbus Simulation System (CSS) and has been evaluated on several aerospace systems, ranging from an unmanned aerial robot based on an airship to the Propulsion and Reboost Subsystem of the Automated Transfer Vehicle (ATV), a supply spacecraft for the International Space Station.
Advances in information technology enable a capability to gather large amounts of raw data from all parts of our society with an efficiency that can actually encumber surveillance missions. Today one can create automated networks of sensors to gather enormous volumes of data, but we do not have the human capacity to sort through all this raw data. Similar problems exist in the area of machinery condition or health monitoring - another surveillance problem. By automating data collection, processing, fusion and interpretation, one can bring the most relevant and timely information to human analysts, planners, and responders. Key technologies that enable this transformation from data to knowledge are distributed hardware and software architectures, intelligent sensors, data fusion and reasoning algorithms, and open system architectures for information exchange. Distributed hardware and software structures partition complex systems into a collection of interconnected subsystems. Intelligent sensors enable the conversion of data to information by processing massive amounts of data at the subsystem and higher levels to extract contextually relevant information. The transformation from raw sensor data to useful information requires the application of subsystem-specific signal processing and feature extraction algorithms, data fusion, and classification algorithms to combine data and features from commensurate and non-commensurate sensors or information sources. Emerging standards in open system architectures for condition-based maintenance apply equally to surveillance systems and condition monitoring systems. Examples from fielded system health monitoring applications are presented along with their parallels to surveillance systems with application to homeland defense.
In this paper, we propose a cyber-event fusion, correlation, and situation assessment framework that, when instantiated, will allow cyber defenders to better understand the local, regional, and global cyber-situation. This framework, with associated metrics, can be used to guide assessment of our existing cyber-defense capabilities, and to help evaluate the state of cyber-event correlation research and where we must focus our future cyber-event correlation research. The framework, based on the cyber-event gathering activities and analysis functions, consists of five operational steps, each of which provides a richer set of contextual information to support greater situational understanding. The first three steps are categorically depicted as increasingly richer and broader-scoped contexts achieved through correlation activity, while in the final two steps, these richer contexts are achieved through analytical activities (situation assessment, and threat analysis & prediction). Category 1 Correlation focuses on the detection of suspicious activities and the correlation of events from a single cyber-event source. Category 2 Correlation clusters the same or similar events from multiple detectors that are located at close proximity and prioritizes them. Finally, the events from different time periods and event sources at different locations/regions are correlated at Category 3 to recognize the relationship among different events. This is the category that focuses on the detection of large-scale and coordinated attacks. The situation assessment step (Category 4) focuses on the assessment of cyber asset damage and the analysis of the impact on missions. The threat analysis and prediction step (Category 5) analyzes attacks based on attack traces and predicts the next steps. Metrics that can distinguish correlation and cyber-situation assessment tools for each category are also proposed.
In this paper, the Adaptive Mean-Field Bayesian Data Reduction Algorithm is discussed, which utilizes a method that maximizes the class separability of unlabeled training data. The algorithm is based on a Dirichlet distribution model for each class. In this new method, the Dirichlet model is extended such that dissimilar distributions are encouraged amongst the classes with respect to unlabeled data, and with respect to data containing missing values. It has previously been shown for two-class cases that the theoretical probability of error is lower bounded by 0.25 under the original Dirichlet model. Thus, the new model has been developed with the idea of encouraging error probabilities below this lower bound given that the data contain missing information, such as the class labels. Results are illustrated with simulated data as applied to sequential classification using Page's test. In general, it is shown that the new method's performance is superior to that of the original Dirichlet model, where it is apparent that any previously acquired unlabeled data are being utilized in the training set to improve the correct classification of future test data samples.
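Page's test, the sequential detector mentioned above, is a CUSUM of log-likelihood ratios that resets at zero and declares the alternative class once a threshold is crossed. The sketch below illustrates the test itself under an assumed Gaussian mean-shift model; it is not the paper's classifier, and the threshold and distribution parameters are hypothetical.

```python
def pages_test(samples, llr, threshold):
    """Page's test (CUSUM) for sequential detection.

    samples: observations arriving one at a time.
    llr: per-sample log-likelihood ratio log p1(x)/p0(x).
    The statistic accumulates the LLR but never drops below zero;
    crossing the threshold declares the alternative hypothesis.
    Returns the detection time (1-based sample index) or None.
    """
    s = 0.0
    for n, x in enumerate(samples, 1):
        s = max(0.0, s + llr(x))
        if s >= threshold:
            return n
    return None

# Assumed model: unit-variance Gaussians with means 0 (H0) and 1 (H1),
# for which llr(x) = (mu1 - mu0) * (x - (mu0 + mu1) / 2).
gaussian_llr = lambda x: 1.0 * (x - 0.5)
```

With observations steadily at 1.0 each sample contributes LLR 0.5, so a threshold of 5.0 is crossed at the tenth sample; observations at 0.0 contribute -0.5 and the reset-at-zero rule keeps the statistic pinned at zero.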
Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.
Vibration monitoring is an important practice throughout regular operation of gas turbine power systems and, even more so, during characterization tests. Vibration monitoring relies on accurate and reliable sensor readings. To obtain accurate readings, sensors are placed such that the signal is maximized. In the case of characterization tests, strain gauges are placed at the location of vibration modes on blades inside the gas turbine. Due to the prevailing harsh environment, these sensors have a limited life and decaying accuracy, both of which impair vibration assessment. At the same time, bandwidth limitations may restrict data transmission, which in turn limits the number of sensors that can be used for assessment. Knowing the sensor status (normal or faulty) and, more importantly, knowing the true vibration level of the system at all times is essential for successful gas turbine vibration monitoring. This paper investigates a dynamic sensor validation and system health reasoning scheme that addresses the issues outlined above by considering only the information required to reliably assess system health status. In particular, if abnormal system health is suspected or if the primary sensor is determined to be faulted, information from available “sibling” sensors is dynamically integrated. A confidence value expresses the complex interactions of sensor health and system health, their reliabilities, and conflicting information, and qualifies the resulting health assessment. Effectiveness of the scheme in achieving accurate and reliable vibration evaluation is then demonstrated using a combination of simulated data and a small sample of real-world application data, where the vibration of compressor blades is monitored during a real-time characterization test of a new gas turbine power system.
This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.
This paper presents a real-time approach to the detection and isolation of component failures in large-scale systems. The algorithm is given a set of observed test results from multiple sensors, and its main task is to deal with sensor errors (i.e., noise). The probabilities of these missed detections and false alarms are not known a priori, and must be estimated - ideally along with the accuracies of these estimates - online, within the inference engine. Further, recognizing a practical concern in most real systems, a sparsely instantiated observation vector must not be a problem. The key ingredients of the approach include the Multiple Hypothesis Tracking (MHT) philosophy of complexity management and a Beta prior distribution on the sensor errors. We provide results illustrating performance in terms of both computational needs and error rate, and show its application both as a filter (i.e., used to "clean" sensor reports) and as a standalone state estimator.
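The role of the Beta prior can be illustrated with the standard conjugate Beta-Bernoulli update: observed error counts sharpen an online estimate of a sensor's false-alarm or missed-detection rate, and the posterior variance quantifies the accuracy of that estimate. This is a generic sketch, not the paper's inference engine; the prior parameters and counts are hypothetical.

```python
def update_error_rate(alpha, beta, errors, trials):
    """Conjugate Beta-Bernoulli update for a sensor error probability.

    alpha, beta: Beta prior pseudo-counts (errors, non-errors).
    errors, trials: observed error count out of the total observations.
    Returns the posterior mean estimate of the error rate and the
    posterior variance, which shrinks as more data accumulate.
    """
    a = alpha + errors
    b = beta + (trials - errors)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var
```

Starting from a uniform Beta(1, 1) prior and observing 2 errors in 10 trials gives a posterior mean of 3/12 = 0.25; feeding in further batches keeps tightening the variance, so the engine knows how much to trust each sensor's estimated error rate.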
We have been developing an architecture for reasoning with multiple sensors distributed on a computer network, linking them with analysis modules and reasoning with the results to combine evidence of possible intrusion for display to the user. The architecture, called MAITA, consists of monitors distributed across machines and linked together under control of the user and supported by a monitor of monitors that manages the interaction among the monitors. This architecture enables the system to reason about evidence from multiple sensors. For example, a monitor can track FTP logs to detect password scans followed by successful uploads of data from foreign sites. At the same time it can monitor disk use and detect significant trends. Monitors can then combine the evidence in the sequence in which the events occur and present evidence to the user that someone has successfully gained write access to the FTP site and is occupying significant disk space. This paper discusses the architecture enabling the creation, linking, and support of the monitors. The monitors may be running on the same or different machines, and so appropriate communication links must be supported, as well as regular status checks to ensure that monitors are still running. We will also discuss the construction of monitors for sensing the data, abstracting and characterizing data, synchronizing data from different sources, detecting patterns, and displaying the results.
The goal of host-based intrusion detection is to detect attacks against a single information system. Many host-based intrusion detection systems - especially those that use anomaly detection - use training data to synthesize detectors automatically; that is, the detectors are classifiers created by machine learning. Regularization, which often improves the performance of machine learning algorithms, has not previously been applied to intrusion detector synthesis. This paper discusses regularization for machine learning-based intrusion detectors, showing how regularization can be accomplished for such systems and providing the results of an empirical evaluation.
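As a generic illustration of how regularization can be applied to a learned detector - not the paper's system - the sketch below adds an L2 penalty to a logistic-regression classifier trained by gradient descent; larger values of the (hypothetical) parameter lam shrink the weights, trading training-set fit for generalization.

```python
import math

def train_logreg(X, y, lam=0.1, lr=0.1, epochs=500):
    """Logistic regression with L2 regularization.

    X: list of feature vectors; y: 0/1 labels (e.g., normal vs. attack).
    The penalty lam * ||w||^2 is added to the cross-entropy loss, so
    its gradient 2 * lam * w enters each weight update and shrinks the
    weights toward zero; larger lam means stronger shrinkage.
    """
    d, n = len(X[0]), len(X)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw = [2.0 * lam * wi for wi in w]   # gradient of the L2 penalty
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted attack probability
            for j in range(d):
                gw[j] += (p - yi) * xi[j]   # cross-entropy gradient
            gb += p - yi
        w = [wi - lr * gwi / n for wi, gwi in zip(w, gw)]
        b -= lr * gb / n
    return w, b
```

Training the same toy data with a small and a large lam shows the effect directly: the decision boundary's slope is strictly smaller in magnitude under the heavier penalty.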
The purpose of Aladdin is to assist plant operators in the early detection and diagnosis of faults and anomalies in the plant that either have an impact on plant performance or that could lead to a plant shutdown or component damage if allowed to go unnoticed. The kind of early fault detection and diagnosis performed by Aladdin is aimed at allowing more time for decision making, increasing operator awareness, reducing component damage, and supporting improved plant availability and reliability. In this paper we describe in broad lines the Aladdin transient classifier, which combines techniques such as recurrent neural network ensembles, Wavelet On-Line Pre-processing (WOLP), and Autonomous Recursive Task Decomposition (ARTD), in an attempt to improve the practical applicability and scalability of this type of system to real processes and machinery. The paper then focuses on describing an application of Aladdin to a Nuclear Power Plant (NPP) through the use of the HAMBO experimental simulator of the Forsmark 3 boiling water reactor NPP in Sweden. It should be pointed out that Aladdin is not necessarily restricted to applications in NPPs. Other types of power plants, or even other types of processes, can also benefit from the diagnostic capabilities of Aladdin.