Change detection is an important task in remotely monitoring and diagnosing equipment and other processes. Specifically, early detection of differences that indicate abnormal conditions promises considerable savings by averting secondary damage and preventing system outages. Of course, accurate early detection has to be balanced against the successful rejection of false positive alarms. In noisy environments, such as aircraft engine monitoring, this proves to be a difficult undertaking for any one algorithm. In this paper, we investigate the performance improvement that can be gained by aggregating the information from a set of diverse change detection algorithms. Specifically, we examine a set of change detectors that utilize a variety of techniques, such as neural nets, random forests, and support vector machines, and that consequently exhibit different detection sensitivities. Their outputs are aggregated using several schemes, including a regression technique that operates well for time series, averaging schemes, and a meta-classifier. We provide results using illustrative examples from aircraft engine monitoring.
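As a hedged illustration of one of the simpler aggregation schemes mentioned above (weighted averaging of detector outputs followed by thresholding), the sketch below assumes each detector emits a change score in [0, 1]; the weights, the threshold, and the detector names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_change_scores(scores, weights=None, threshold=0.5):
    """Fuse per-detector change scores by weighted averaging.

    scores: change scores in [0, 1], one per detector; weights and the
    alarm threshold are illustrative, not taken from the paper.
    """
    scores = np.asarray(scores, dtype=float)
    weights = np.ones_like(scores) if weights is None else np.asarray(weights, dtype=float)
    fused = np.dot(weights, scores) / weights.sum()
    return fused, fused >= threshold

# Example: three hypothetical detectors (neural net, random forest, SVM).
fused_score, alarm = fuse_change_scores([0.2, 0.7, 0.6], weights=[1.0, 1.5, 1.2])
print(fused_score, alarm)
```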
In this paper, we present a feature selection and classification approach that was used to assess highly noisy sensor data from an NDE field study. Multiple, heterogeneous NDT sensors were employed to examine the solid structure. The goal was to differentiate between two types of phenomena occurring in a solid structure, where one phenomenon was benign and the other malignant. Manual distinction between these two types is almost impossible. To address these issues, we used sensor validation techniques to select the best available sensor, i.e., the one with the least noise effects and the best defect signature in the region of interest. Hundreds of features were formulated and extracted from data of the selected sensors. Next, we employed separability measures and correlation measures to select the most promising set of features. Because the NDE sensors poorly described the different defect types under consideration, the resulting features also exhibited poor separability. The focus of this paper is on how one can improve the classification under these constraints while minimizing the risk of overfitting (the amount of field data was small). Results are shown from a number of different classifiers and classifier ensembles that were tuned to a fixed true positive rate using the Neyman-Pearson criterion.
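A minimal sketch of a filter-style selection step of the kind described above, using a Fisher-type separability score combined with a correlation screen; the particular score, the thresholds, and the function names are illustrative assumptions rather than the exact measures used in the study.

```python
import numpy as np

def fisher_separability(x, y):
    """Two-class Fisher score for a single feature: larger means the class
    means are farther apart relative to the within-class spread."""
    x0, x1 = x[y == 0], x[y == 1]
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var() + 1e-12)

def select_features(X, y, n_keep=10, corr_limit=0.9):
    """Rank features by separability, then drop features that are highly
    correlated with an already selected one. Thresholds are illustrative."""
    order = np.argsort([-fisher_separability(X[:, j], y) for j in range(X.shape[1])])
    selected = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_limit for k in selected):
            selected.append(int(j))
        if len(selected) == n_keep:
            break
    return selected
```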
Non-destructive evaluation (NDE) techniques for condition monitoring of remote solid structures have evolved considerably in recent years. Algorithms for estimating sensor integrity and for correcting noise form a crucial aspect of NDE. This paper presents a sensor validation approach that verifies sensor integrity, identifies and corrects noise effects, and selects the best possible array of sensors for multi-sensor fusion. The proposed methodology uses a novel change detection algorithm for noise correction and a clustering algorithm to isolate useful signal information from the sensor data. It was used for sensor selection in an NDE field study, where multiple sensors were used to examine a solid structure. The methodology achieved 97% accuracy in the experiments, indicating its efficacy.
In the design of partial discharge (PD) diagnostic systems, finding a set of features corresponding to optimal classification performance (accuracy and reliability) is critical. A diagnostic system designer typically has little difficulty obtaining a sizable number of features by applying different feature extraction methods to PD measurements. However, the designer often faces challenges in finding a set of features that gives optimal classification performance for the given PD diagnosis problem. The primary reasons are: a) features cannot be evaluated individually, since feature interaction affects classification performance more significantly than the features themselves; and b) optimal features cannot be obtained by simply combining all features from different feature extraction methods, since redundant and irrelevant features exist. This paper attempts to address the challenge by introducing feature selection to PD diagnosis. Through an example, this paper demonstrates that feature selection can be an effective and efficient approach for systematically finding a small set of features that corresponds to optimal classification performance of PD diagnostic systems.
This paper explores classifier fusion problems where the task is selecting a subset of classifiers from a larger set with the goal of achieving optimal performance. To aid in the selection process, we propose the use of several correlation-based diversity measures. We define measures that capture the correlation among n classifiers, as opposed to pairs of classifiers only. We then suggest a sequence of steps for selecting classifiers. This method avoids the exhaustive evaluation of all classifier combinations, which can become very large for larger sets of classifiers. We then report on observations made after applying the method to a data set from a real-world application. The classifier set chosen achieves close to optimal performance with a drastically reduced number of evaluation steps.
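To illustrate the flavor of such a selection procedure, the sketch below uses the average pairwise correlation of the classifiers' 0/1 error vectors as a stand-in diversity measure (the paper's own n-classifier measures are not reproduced here) and a greedy forward search instead of exhaustive enumeration; all names and the search heuristic are assumptions.

```python
import numpy as np
from itertools import combinations

def error_correlation(errors):
    """Average pairwise correlation of the classifiers' 0/1 error vectors.
    A stand-in for the n-classifier correlation measures defined in the paper."""
    pairs = list(combinations(range(errors.shape[0]), 2))
    return np.mean([np.corrcoef(errors[i], errors[j])[0, 1] for i, j in pairs])

def greedy_select(errors, n_select):
    """Greedy forward selection: start from the most accurate classifier and
    repeatedly add the one that keeps the ensemble's error correlation lowest.
    errors has shape (n_classifiers, n_samples) with 1 marking a mistake."""
    chosen = [int(np.argmin(errors.mean(axis=1)))]
    while len(chosen) < n_select:
        candidates = [c for c in range(errors.shape[0]) if c not in chosen]
        chosen.append(min(candidates,
                          key=lambda c: error_correlation(errors[chosen + [c]])))
    return chosen
```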
This paper addresses the automated detection of line features in large industrial inspection images. The manual examination of these images is labor-intensive and causes undesired delay of inspection results. Hence, it is desirable to automatically detect certain features of interest. In this paper we are concerned with the detection of vertical or slanted line features that appear at unpredictable intervals across the image. The line features may appear distorted due to shortcomings of the sensor and operator conditions. Line features are modeled as a pair of smoothed step edges of opposite polarity that are in close proximity, and two operators are used to detect them. The individual operator outputs are combined in a non-linear fashion to form the line-feature response. The line features are then obtained by following the ridge of the line-feature response. In experiments on four datasets, over 98.8% of line features are correctly detected, with a low false-positive rate. Experiments also show that the approach works well in the presence of considerable noise due to poor operating conditions or sensor failure.
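A minimal sketch of the edge-pair idea described above, assuming a bright line on a dark background: a smoothed step-edge operator (derivative of Gaussian) responds positively at the left flank and negatively at the right flank, and the two responses are combined non-linearly. The filter, the line width, and the simplified ridge step are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def line_feature_response(image, line_width=5, sigma=2.0):
    """Respond to a bright line modeled as two nearby step edges of opposite
    polarity. line_width and sigma are illustrative parameters."""
    # Smoothed horizontal step-edge operator (first derivative of Gaussian).
    edge = gaussian_filter1d(image.astype(float), sigma=sigma, axis=1, order=1)
    # Rising edge at the left flank; falling edge shifted back by the line width.
    rising = np.clip(edge, 0, None)
    falling = np.clip(-np.roll(edge, -line_width, axis=1), 0, None)
    # Non-linear combination: strong only where both flanks are present.
    return np.minimum(rising, falling)

def trace_ridge(response):
    """Per image row, the column with the strongest line-feature evidence
    (a simplification of the ridge-following step)."""
    return np.argmax(response, axis=1)
```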
Classification requirements for real-world classification problems are often constrained by a given true positive or false positive rate to ensure that the classification error for the most important class stays within a desired limit. For a sufficiently high true positive rate, this may result in the set-point being located somewhere in the flat portion of the ROC curve, where the associated false positive rate is high. Any further classifier design will then attempt to reduce the false positive rate while maintaining the desired true positive rate. We call this type of performance requirement for classifier design the constrained performance requirement. It differs from the accuracy maximization requirement and thus requires different strategies for classifier design. This paper is concerned with designing classifier ensembles under such constrained performance requirements. Classifier ensembles are one of the most significant advances in pattern recognition/classification in recent years and have been actively studied by many researchers. However, not much attention has been given to designing ensembles to satisfy constrained performance requirements. This paper attempts to identify and address some of the design issues associated with the constrained performance requirement. Specifically, we present a strategy for designing neural network ensembles to satisfy constrained performance requirements, which is illustrated on a real-world classification problem. The results are compared to those from a conventional design method.
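To make the constrained set point concrete, the sketch below picks, on an empirical ROC, the decision threshold that meets a required true positive rate with the smallest achievable false positive rate; the target rate and function names are illustrative, and this is only the threshold-selection step, not the ensemble design strategy of the paper.

```python
import numpy as np

def threshold_for_tpr(scores, labels, target_tpr=0.95):
    """Pick the decision threshold that meets a required true positive rate
    with the smallest achievable false positive rate (a Neyman-Pearson style
    set point on the empirical ROC). target_tpr is illustrative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    for t in np.sort(pos)[::-1]:            # thresholds from high to low
        tpr = np.mean(pos >= t)
        fpr = np.mean(neg >= t)
        if tpr >= target_tpr:
            return t, tpr, fpr              # first hit has the minimal FPR
    return None
```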
KEYWORDS: Sensors, Information fusion, Feature extraction, Sensor fusion, Data fusion, Environmental sensing, Data transmission, Reliability, Gas sensors, Sensor performance
Vibration monitoring is an important practice throughout regular operation of gas turbine power systems and, even more so, during characterization tests. Vibration monitoring relies on accurate and reliable sensor readings. To obtain accurate readings, sensors are placed such that the signal is maximized. In the case of characterization tests, strain gauges are placed at the location of vibration modes on blades inside the gas turbine. Due to the prevailing harsh environment, these sensors have a limited life and decaying accuracy, both of which impair vibration assessment. At the same time, bandwidth limitations may restrict data transmission, which in turn limits the number of sensors that can be used for assessment. Knowing the sensor status (normal or faulty) and, more importantly, knowing the true vibration level of the system at all times is essential for successful gas turbine vibration monitoring. This paper investigates a dynamic sensor validation and system health reasoning scheme that addresses the issues outlined above by considering only the information required to reliably assess system health status. In particular, if abnormal system health is suspected or if the primary sensor is determined to be faulted, information from available “sibling” sensors is dynamically integrated. A confidence measure expresses the complex interactions of sensor health and system health, their reliabilities, conflicting information, and the resulting health assessment. The effectiveness of the scheme in achieving accurate and reliable vibration evaluation is then demonstrated using a combination of simulated data and a small sample of real-world application data, where the vibration of compressor blades is monitored during a real-time characterization test of a new gas turbine power system.
KEYWORDS: Diagnostics, Reliability, Sensors, Model-based design, Detection and tracking algorithms, Information fusion, Filtering (signal processing), Wavelets, Signal detection, Systems modeling
Axial flow compressors are subjected to demands for ever-increasing levels of pressure ratio at a compression efficiency that augments the overall cycle efficiency. However, unstable flow may develop in the compressor, which can lead to a stall or surge and subsequently to gas turbine failure, resulting in significant downtime and repair cost. To protect against these potential aerodynamic instabilities, compressors are typically operated with a stall margin. This means operating the compressor at less than peak pressure rise, which results in a reduction in operating efficiency and performance. Therefore, it is desirable to have a reliable method to determine the state of a compressor by detecting the onset of a damaging event prior to its occurrence. In this paper, we propose a health monitoring scheme that gathers and combines the results of different diagnostic tools to maximize the advantages of each one while at the same time minimizing their disadvantages. This fusion scheme produces results that are better than the best result of any one tool alone. In part this is achieved because redundant information is available that, when combined correctly, improves the estimate of the better tool and compensates for the shortcomings of the less capable tool. We discuss the use of diagnostic information fusion for a compressor event, coupled with proactive control techniques, to support improved compressor performance while avoiding the increased damage risk due to stall margin reduction. Discretized time-to-failure windows provide event prediction in a prognostic sense.
During the design of classifier fusion tools, it is important to evaluate the performance of the fuser. In many cases, the output of the classifiers needs to be simulated to provide the range of fusion input that allows an evaluation throughout the design space. One fundamental question is how the output should be distributed, in particular for multi-class continuous-output classifiers. Using the wrong distribution may lead to fusion tools that are either overly optimistic or that otherwise distort the outcome. Either case may lead to a fuser that performs sub-optimally in practice. It is therefore imperative to establish the bounds of different classifier output distributions. In addition, one must take into account the design space, which may be of considerable complexity. Exhaustively simulating the entire design space may be a lengthy undertaking. Therefore, the simulation has to be guided to populate the relevant areas of the design space. Finally, it is crucial to quantify the performance throughout the design of the fuser. This paper addresses these issues by introducing a simulator that allows the evaluation of different classifier distributions in combination with a design-of-experiments setup and a built-in performance evaluation. We show results from an application of diagnostic decision fusion on aircraft engines.
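As one concrete way to simulate multi-class continuous classifier output, the sketch below draws class weight vectors from a Dirichlet distribution whose mean follows a per-class accuracy profile; the Dirichlet choice, the default profile, and the concentration value are assumptions for illustration, not the distributions studied in the paper.

```python
import numpy as np

def simulate_classifier_output(true_class, n_classes, profile=None,
                               concentration=5.0, rng=None):
    """Draw a simulated multi-class continuous output (class weights summing
    to one) from a Dirichlet distribution centered on a per-class profile."""
    rng = np.random.default_rng() if rng is None else rng
    if profile is None:
        # Default profile: 80% mass on the true class, rest spread evenly.
        profile = np.full(n_classes, 0.2 / (n_classes - 1))
        profile[true_class] = 0.8
    return rng.dirichlet(concentration * np.asarray(profile))

# Example: one simulated output of a 4-class classifier when class 2 is true.
print(simulate_classifier_output(true_class=2, n_classes=4))
```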
KEYWORDS: Data modeling, Principal component analysis, Sensors, Process modeling, Digital filtering, Feature selection, Computer simulations, Fuzzy logic, Fuzzy systems, Data acquisition
Hybrid soft computing models, based on neural, fuzzy, and evolutionary computation technologies, have been applied to a large number of classification, prediction, and control problems. This paper focuses on one such application and presents a systematic process for building a predictive model to estimate time-to-breakage and provide a web break tendency indicator in the wet-end part of paper-making machines. Through successive refinement of information gleaned from sensor readings via data analysis, principal component analysis (PCA), an adaptive neural fuzzy inference system (ANFIS), and trending analysis, a break tendency indicator was built. The output of this indicator is the break margin. The break margin is then interpreted using a stoplight metaphor. This interpretation provides a more gradual web break sensitivity indicator, since it uses more classes compared to a binary indicator. By generating an accurate web break tendency indicator with enough lead time, we help in the overall control of the paper-making cycle by minimizing downtime and improving productivity.
Classifier performance evaluation is an important step in designing diagnostic systems. The purposes of performing classifier performance evaluation include: 1) selecting the best classifiers from several candidate classifiers, 2) verifying that the designed classifier meets the design requirements, and 3) identifying the need for improvements in the classifier components. In order to effectively evaluate classifier performance, a performance measure needs to be defined that can be used to measure the goodness of the classifiers considered. This paper first argues that in fault diagnostic system design, commonly used performance measures, such as accuracy and ROC analysis, are not always appropriate for performance evaluation. The paper then proposes using misclassification cost as a general performance measure that is suitable for binary as well as multi-class classifiers and, most importantly, for classifiers with unequal cost consequences across classes. The paper also provides strategies for estimating the cost matrix by taking advantage of fault criticality information obtained from FMECA. By evaluating the performance of different classifiers considered during the design process of an engine fault diagnostic system, this paper demonstrates that misclassification cost is an effective performance measure for evaluating multi-class classifiers with unequal cost consequences for different classes.
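The core measure is simple to state: weight each confusion-matrix entry by the cost of that particular confusion and average over all samples. The sketch below is a minimal version; the example cost values are made up for illustration (in the paper they would come from FMECA fault criticality).

```python
import numpy as np

def misclassification_cost(confusion, cost):
    """Average misclassification cost of a classifier.

    confusion[i, j]: number of samples of true class i assigned to class j.
    cost[i, j]:      cost of deciding class j when class i is true
                     (diagonal typically zero).
    """
    confusion = np.asarray(confusion, float)
    cost = np.asarray(cost, float)
    return np.sum(confusion * cost) / confusion.sum()

# Illustrative 3-class example with unequal costs per class.
conf = [[50, 3, 2], [4, 40, 6], [1, 2, 47]]
cst = [[0, 1, 5], [1, 0, 5], [10, 10, 0]]   # missing class 3 is expensive
print(misclassification_cost(conf, cst))
```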
KEYWORDS: Diagnostics, Information fusion, Matrices, Data fusion, Reliability, Sensors, Neural networks, Data modeling, Model-based design, Control systems design
In this paper we present methods to enhance the classification rate in decision fusion with partially redundant information by manipulating the input to the fusion scheme using a priori performance information. Intuitively, it makes sense to trust a more reliable tool more than a less reliable one without discounting the less reliable one completely. For a multi-class classifier, the reliability per class must be considered. In addition, complete ignorance with respect to any given class must also be factored into the fusion process to ensure that all faults are equally well represented. However, overly trusting the best classifier will not permit the fusion tool to achieve results beyond the best classifier's performance. We assume that the performance of the classifiers to be fused is known and show how to take advantage of this information. In particular, we glean pertinent performance information from the classifier confusion matrices and their cousin, the relevance matrix. We further demonstrate how to integrate a priori performance information within a hierarchical fusion architecture. We investigate several schemes for these operations and discuss the advantages and disadvantages of each. We then apply the concepts introduced to the diagnostic realm, where we aggregate the output of several different diagnostic tools. We present results motivated by diagnosing on-board faults in aircraft engines.
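One plausible way to turn a confusion matrix into per-class trust is sketched below: take the column-normalized diagonal (per-class precision) as a reliability vector and scale each tool's class outputs by it before averaging. This is only one of several weighting schemes the abstract alludes to, and the relevance matrix is not reproduced here.

```python
import numpy as np

def per_class_reliability(confusion):
    """Per-class reliability taken as the column-normalized diagonal of the
    confusion matrix, i.e. the precision for each class."""
    confusion = np.asarray(confusion, float)
    return np.diag(confusion) / np.maximum(confusion.sum(axis=0), 1e-12)

def weight_outputs(outputs, confusions):
    """Scale each classifier's class outputs by its per-class reliability,
    then average across classifiers (an illustrative a priori weighting)."""
    weighted = [per_class_reliability(c) * np.asarray(o, float)
                for o, c in zip(outputs, confusions)]
    return np.mean(weighted, axis=0)
```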
This paper presents methods to boost the classification rate in decision fusion with partially redundant information. This is accomplished by utilizing knowledge of known misclassifications of certain classes to systematically modify the class output. For example, if it is known beforehand that tool A often misclassifies class 1 as class 2, then it appears prudent to integrate that information into the reasoning process when class 1 is indicated by tool B and class 2 is observed by tool A. In particular, this preferred misclassification information is contained in the asymmetric (cross-correlation) entries of the confusion matrix. An operation we call cross-correlation is performed in which this information is explicitly used to modify the class output before the first fused estimate is calculated. We investigate several methods for cross-correlation and discuss the advantages and disadvantages of each. We then apply the concepts introduced to the diagnostic realm, where we aggregate the output of several different diagnostic tools. We show how the proposed approach fits into an information fusion architecture and finally present results motivated by diagnosing on-board faults in aircraft engines.
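One plausible form of such a cross-correlation step is sketched below: the row-normalized off-diagonal confusion entries describe how often class i is reported as class j, and a corresponding share of the evidence reported for class j is credited back to class i before fusion. The exact update rule in the paper may differ; this is an illustrative assumption.

```python
import numpy as np

def cross_correlate(output, confusion):
    """Redistribute a tool's class output using the off-diagonal entries of
    its row-normalized confusion matrix: evidence for class j is partly
    credited back to classes i that are commonly mislabeled as j."""
    confusion = np.asarray(confusion, float)
    rates = confusion / np.maximum(confusion.sum(axis=1, keepdims=True), 1e-12)
    off_diag = rates - np.diag(np.diag(rates))
    output = np.asarray(output, float)
    adjusted = output + off_diag @ output   # push evidence back along known confusions
    return adjusted / adjusted.sum()
```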
This paper introduces techniques to deal with temporal aspects of fusion systems with redundant information. One of the challenges of a fusion system is that individual pieces of information are not necessarily announced at the same time. While some decisions (or features or data) are produced at a high sampling frequency, other decisions are generated at a much lower rate, perhaps only once during the operation of the system or only during certain operating conditions. This means that some information will be outdated when the actual information fusion task is performed. An event may have occurred in the meantime, leading to a decision discord. We tackle this challenge by introducing the concept of `information or decision forgetting'. In other words, in case of an information discord, more recent information is evaluated with higher confidence than older information. Another difficulty is distinguishing between outliers and actual system changes. If tools perform their task at a high sampling frequency, we can employ `decision smoothing'. That is, we factor out the occasional outlier and generally reduce the noise of the system. To that end, we introduce an adaptive smoothing algorithm that evaluates the system state and changes the smoothing parameter if it encounters suspicious situations, i.e., situations that might indicate a changed system state. We demonstrate the introduced concepts in the diagnostic realm, where we aggregate the output of several different diagnostic tools.
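A minimal sketch of the two ideas above, under simple assumptions: forgetting is modeled as an exponential decay of a decision's confidence with its age, and smoothing as an exponential filter whose coefficient drops (reacts faster) only when successive decisions disagree persistently. The decay law, constants, and discord test are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def forget(confidence, age, half_life=10.0):
    """Decision forgetting: discount confidence exponentially with age
    (in samples or seconds). half_life is illustrative."""
    return confidence * 0.5 ** (age / half_life)

class AdaptiveSmoother:
    """Exponential decision smoothing; the coefficient switches to a faster
    setting after several consecutive discordant decisions, so a genuine
    system change is tracked while isolated outliers are averaged out."""
    def __init__(self, alpha_slow=0.9, alpha_fast=0.5, discord_limit=3):
        self.alpha_slow, self.alpha_fast = alpha_slow, alpha_fast
        self.discord_limit = discord_limit
        self.state, self.discord_count = None, 0

    def update(self, decision):
        decision = float(decision)
        if self.state is None:
            self.state = decision
            return self.state
        # Count consecutive decisions that disagree strongly with the state.
        self.discord_count = self.discord_count + 1 if abs(decision - self.state) > 0.5 else 0
        alpha = self.alpha_fast if self.discord_count >= self.discord_limit else self.alpha_slow
        self.state = alpha * self.state + (1 - alpha) * decision
        return self.state
```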
KEYWORDS: Sensors, Sensor fusion, Fuzzy logic, Temperature metrology, Combustion, Algorithm development, Data fusion, Gas sensors, Temperature sensors, Failure analysis
In this paper we present a methodology for fuzzy sensor fusion. We then apply this methodology to sensor data from a gas turbine power plant. The developed fusion algorithm tackles several problems: 1) it aggregates redundant sensor information, which allows deciding which sensors should be considered for propagation of sensor information; and 2) it filters out noise and sensor failures from measurements, which allows a system to operate despite temporary or permanent failure of one or more sensors. For the fusion, we use a combination of direct and functional redundancy. The fusion algorithm uses confidence values obtained for each sensor reading from validation curves and performs a weighted average fusion. With increasing distance from the predicted value, readings are discounted through a non-linear validation function and are assigned a confidence value accordingly. The predicted value in the described algorithm is obtained through application of a fuzzy exponentially weighted moving average time series predictor with adaptive coefficients. Experiments on real data from a gas turbine power plant show the robustness of the fusion algorithm, which leads to smooth controller input values.
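A minimal sketch of the confidence-weighted fusion loop described above, assuming a Gaussian-shaped validation function and a simple heuristic adaptation of the EWMA coefficient in place of the paper's fuzzy rules; the shape, scale, and constants are illustrative assumptions.

```python
import numpy as np

def confidence(readings, predicted, scale=1.0):
    """Non-linear validation function: confidence decays with the distance
    between a reading and the predicted value (Gaussian shape assumed)."""
    return np.exp(-0.5 * ((np.asarray(readings, float) - predicted) / scale) ** 2)

def fuse_readings(readings, predicted, scale=1.0):
    """Confidence-weighted average of redundant sensor readings."""
    readings = np.asarray(readings, float)
    w = confidence(readings, predicted, scale)
    return np.sum(w * readings) / np.maximum(np.sum(w), 1e-12)

class AdaptiveEWMA:
    """Exponentially weighted moving average predictor whose coefficient is
    lowered after large prediction errors so it follows the fused signal
    more quickly (a stand-in for the fuzzy coefficient adaptation)."""
    def __init__(self, alpha=0.8):
        self.alpha, self.value = alpha, None

    def update(self, fused, scale=1.0):
        if self.value is None:
            self.value = fused
        error = abs(fused - self.value) / scale
        alpha = max(0.3, self.alpha - 0.2 * min(error, 1.0))  # adapt coefficient
        self.value = alpha * self.value + (1 - alpha) * fused
        return self.value
```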