Wind is an important renewable energy
source. The energy and economic returns from building
wind farms justify the expensive investment in doing so.
However, without an effective monitoring system, underperforming
or faulty turbines will cause a huge loss in
revenue. Early detection of such failures helps prevent
these undesired working conditions. We develop three
tests on the power curve, rotor-speed curve, and pitch-angle
curve of each individual turbine. In each test, multiple states are
defined to distinguish different working conditions,
including complete shut-downs, under-performing states,
abnormally frequent default states, as well as normal
working states. These three tests are combined to reach a
final conclusion, which is more effective than any single test.
Through extensive data mining of historical data and
verification from farm operators, some state combinations
are discovered to be strong indicators of spindle failures,
lightning strikes, anemometer faults, and other failure modes.
In each individual test, and in the score fusion of
these tests, we apply multidimensional scaling (MDS) to
reduce the high dimensional feature space into a 3-dimensional
visualization, from which it is easier to discover
turbine working information. This approach yields a qualitative
understanding of turbine performance status for fault
detection, and also provides explanations of what has
happened for detailed diagnostics.
The state-of-the-art SCADA (Supervisory Control And
Data Acquisition) system in industry can only answer
whether there are abnormal working states, whereas
our evaluation of multiple states in multiple tests is also
promising for diagnostics. In the future, these tests can be
readily incorporated in a Bayesian network for intelligent
analysis and decision support.
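As an illustration of the visualization step described above, the following sketch applies classical (metric) MDS to embed hypothetical turbine feature vectors into three dimensions. The feature values, array sizes, and the use of Euclidean distances are assumptions for illustration, not the paper's actual data or dissimilarity measure.

```python
import numpy as np

def classical_mds(X, k=3):
    """Embed samples of X (n x d) into k dimensions via classical MDS
    on Euclidean distances (a stand-in for the paper's MDS step)."""
    n = X.shape[0]
    # squared Euclidean distance matrix
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    # double-center: B = -1/2 * J D2 J, with J = I - (1/n) 11^T
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J
    w, V = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]      # keep the top-k
    L = np.sqrt(np.maximum(w[idx], 0))
    return V[:, idx] * L               # n x k embedding

rng = np.random.default_rng(0)
feats = rng.normal(size=(40, 12))      # hypothetical turbine feature vectors
emb = classical_mds(feats, k=3)
print(emb.shape)  # (40, 3)
```

Each row of the embedding is one turbine observation, which can then be plotted in 3-D to inspect clusters of working states.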
This paper proposes a new discrete particle swarm optimization (DPSO) algorithm with a multiplicative likeliness
enhancement rule for unordered feature selection. In this paper, the pool of features for face recognition is
derived from direct fractional-step linear discriminant analysis (DFLDA). Each particle is associated with a
subset of features, and their recognition performance on the validation set influences the particle's fitness with
randomness. Features are selected by their assigned likeliness, which is enhanced by the agreement between a
particle and its attractors (its previous location, pbest and gbest). The new DPSO double-asserts or triple-asserts
the selection if the attractors share common features. The feature selection technique proposed in this paper is
a modular procedure and thus can be applied to other features if a separate validation set is available for fitness
evaluation. This DPSO algorithm is successfully applied on the FERET database. The recognition performance
is improved for both L1 and L2 norm distance metrics. The cumulative matching score (CMS) is improved for
higher ranks, which indicates that this performance improvement is beneficial for the identification task. In overall
comparison, the multiplicative updating rule achieves higher fitness and smaller standard deviation than the
additive likeliness enhancement rule.
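The multiplicative updating rule can be sketched as follows. The function name `update_likeliness`, the `boost` factor, and the renormalization step are illustrative assumptions, since the abstract does not give the exact formula; the idea is that each attractor (previous position, pbest, gbest) that selected a feature multiplies that feature's likeliness, so double- or triple-asserted features compound.

```python
import numpy as np

def update_likeliness(likeliness, prev, pbest, gbest, boost=1.5):
    """Multiplicative likeliness enhancement (a hedged sketch of the
    paper's rule): a feature's selection likeliness is multiplied by
    `boost` once per attractor that selected it, so agreement among
    attractors compounds multiplicatively."""
    agree = prev.astype(int) + pbest.astype(int) + gbest.astype(int)
    new = likeliness * boost ** agree
    return new / new.sum()             # renormalize to a distribution

rng = np.random.default_rng(1)
n_feat = 8
likeliness = np.full(n_feat, 1.0 / n_feat)  # uniform initial likeliness
prev  = rng.random(n_feat) < 0.5            # previous position's selection
pbest = rng.random(n_feat) < 0.5            # personal best's selection
gbest = rng.random(n_feat) < 0.5            # global best's selection
likeliness = update_likeliness(likeliness, prev, pbest, gbest)
# features asserted by all three attractors now carry the highest likeliness
print(likeliness.round(3))
```

Features would then be sampled according to this likeliness vector to form the particle's next subset.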
This paper surveys biometrics and forensics, focusing on the techniques and applications of face recognition in forensics. It describes the differences and connections between biometrics and forensics, and bridges the two by formulating the conditions under which biometrics can be applied in forensics. Under these conditions, face recognition, as a non-intrusive and non-contact biometric, is discussed in detail as an illustration of applying biometrics in forensics. The discussion of face recognition covers different approaches, feature extraction methods, and decision procedures. The advantages and limitations of biometrics in forensic applications are also addressed.
The learning-curve phenomenon indicates that not all available images need to be used in training. This paper
proposes a three-step intelligent sampling to construct a representative and efficient training database, where
both the number of training images and which images to be included are determined. Firstly, clustering on a
subset of the huge face database is performed as preparation. Secondly, systematic sampling within clusters is applied
to improve the efficiency. Thirdly, performance is evaluated to check whether the learning curve has reached
a point of diminishing returns, and a new metric of difficulty is defined to determine which images from the
complementary subset of the initial training set should be added to training. The proposed intelligent three-step
sampling design enhances recognition rate and generalizability while improving efficiency, thereby exerting the full
potential of any given face recognition algorithm without system overhaul.
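The first two steps (clustering as preparation, then systematic sampling within each cluster) can be sketched as follows. The cluster labels, step size, and helper name `systematic_sample` are hypothetical, and step 3 (the learning-curve check and difficulty metric) is omitted from this sketch.

```python
import numpy as np

def systematic_sample(indices, step):
    """Take every `step`-th element from a fixed start (step 2 of the
    three-step design); indices are the members of one cluster."""
    return indices[::step]

# toy stand-in for a face database: 6 clusters of 20 images each
labels = np.repeat(np.arange(6), 20)          # step 1: cluster assignments
train_idx = []
for c in np.unique(labels):
    members = np.flatnonzero(labels == c)
    train_idx.extend(systematic_sample(members, step=4))
train_idx = np.array(train_idx)
print(len(train_idx))  # 6 clusters * 5 samples = 30 training images
```

In step 3, recognition performance would be evaluated on this subset and, while the learning curve keeps rising, the hardest images from the complementary subset would be appended.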
A face recognition system gains flexibility and cost efficiency when integrated into a wireless network; meanwhile, face recognition enhances the functionality and security of the wireless network. This paper proposes a distributed wireless network prototype, consisting of a feature net and a database net, to accomplish the face identification task by optimally allocating network resources. The face recognition technique used in this paper is subspace-based modular processing with score- and decision-level fusion. The subspace features are selected by a step-wise statistical procedure, the Modified Indifference-Zone Method, which improves efficiency and accuracy. Fusion further improves performance over using either the whole face or the modules alone. The face recognition techniques are re-engineered for implementation on the distributed wireless network, and simulation results show promising improvement over centralized recognition.
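Score-level fusion of the whole-face matcher with the facial modules can be sketched as a min-max-normalized weighted sum; the paper's exact normalization and fusion rule may differ, and the scores and module names below are invented for illustration.

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Score-level fusion sketch: min-max normalize each matcher's
    similarity scores against the gallery, then combine them with a
    weighted sum (equal weights by default)."""
    norm = []
    for s in score_lists:
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        norm.append((s - s.min()) / span if span > 0 else np.zeros_like(s))
    norm = np.stack(norm)                      # (n_matchers, n_gallery)
    if weights is None:
        weights = np.full(len(score_lists), 1.0 / len(score_lists))
    return weights @ norm

whole_face = [0.9, 0.2, 0.4]   # similarity of probe to 3 gallery identities
eye_module = [0.7, 0.1, 0.6]
mouth_mod  = [0.8, 0.3, 0.2]
fused = fuse_scores([whole_face, eye_module, mouth_mod])
print(int(np.argmax(fused)))   # identity 0 wins after fusion
```

A decision-level fusion stage could then combine the identities voted by each matcher rather than their raw scores.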
KEYWORDS: Facial recognition systems, Feature selection, Databases, Principal component analysis, Monte Carlo methods, Detection and tracking algorithms, Feature extraction, Interference (communication), Statistical analysis, Signal to noise ratio
We propose a multistep statistical procedure to determine
the confidence interval of the number of features that should
be retained in appearance-based face recognition, which is based
on the eigen decomposition of covariance matrices. In practice, due
to sampling variation, the empirical eigenpairs differ from their underlying
population counterparts. The empirical distribution is difficult
to derive, and it deviates from the asymptotic approximation
when the sample size is limited, which hinders effective feature selection.
Hence, we propose a new technique, MIZM (modified indifference
zone method), to estimate the confidence interval of the
number of features. MIZM overcomes the singularity problem in face
recognition and extends the indifference zone selection from PCA to
LDA. The simulation results on the ORL, UMIST, and FERET databases
show that the overall recognition performance based on
MIZM is improved over that using all available features or heuristically
selected features. The relatively small number of features also
indicates the efficiency of the proposed feature selection method.
MIZM is motivated by feature selection for face recognition, but it
extends the indifference zone method from PCA to LDA and can be
applied in general LDA tasks.
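The sampling variation that motivates MIZM can be illustrated (this is not MIZM itself) by bootstrapping the eigenvalue spectrum and observing how the retained-feature count varies across resamples; the 95%-variance retention criterion, the synthetic data, and the resample count are all assumptions for illustration.

```python
import numpy as np

def n_components_for(evals, frac=0.95):
    """Smallest k whose top-k eigenvalues explain `frac` of the variance."""
    c = np.cumsum(np.sort(evals)[::-1])
    return int(np.searchsorted(c / c[-1], frac) + 1)

rng = np.random.default_rng(3)
# synthetic data with a decaying spectrum, standing in for face features
X = rng.normal(size=(200, 10)) @ np.diag(np.linspace(3, 0.3, 10))
ks = []
for _ in range(200):                            # bootstrap resamples
    idx = rng.integers(0, len(X), len(X))
    evals = np.linalg.eigvalsh(np.cov(X[idx].T))
    ks.append(n_components_for(evals))
lo, hi = np.percentile(ks, [2.5, 97.5])
print(lo, hi)   # empirical interval for the retained-feature count
```

The spread between `lo` and `hi` is exactly the kind of uncertainty a confidence interval on the number of features must capture when the sample size is limited.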
A face recognition system consists of two integrated parts: one is the face recognition algorithm; the other is the classifier and features derived by that algorithm from a data set. The face recognition algorithm plays a central role, but this paper does not aim to evaluate the algorithm; rather, it derives the best features for the algorithm from a specific database through sampling design of the training set, which directs how the sample should be collected and dictates the sample space. Sampling design can help exert the full potential of the face recognition algorithm without overhaul. Conventional statistical analysis usually assumes some distribution to draw inference, but design-based inference assumes neither a distribution of the data nor independence among the sample observations. The simulations illustrate that the systematic sampling scheme performs better than the simple random sampling scheme, and that systematic sampling is comparable in recognition performance to using all available training images. Meanwhile, the sampling schemes save system resources and alleviate the overfitting problem. However, post-stratification by sex is not shown to significantly improve recognition performance.
This paper utilizes the intra-difference in still images to segment a face from its background and then combines the intra-difference detection result with the eigenface/eigenfeature methods to identify the face. This diverse scheme addresses the accuracy problem in practical applications, broadening the application of face recognition to more versatile situations such as secure building entrances, customs, and mug spotting. The combination of the intra-difference detection method and the eigenface/eigenfeature methods into one system is shown to be more robust and to achieve a better identification rate than either method alone. This paper first addresses the real-time accuracy issue and the need for pre-processing (mainly normalization). It then proposes to use intra-difference to effectively segment a human face. The segmented face is further processed by both the intra-difference detection method and the eigenface/eigenfeature methods to determine its identity. Correspondingly, the proposed algorithm consists of three parts: segmentation, pre-processing, and multi-phase face identification fusing the results of the intra-difference detection method and the eigenface/eigenfeature methods.
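The eigenface identification stage can be sketched as follows on toy vectors. The PCA-via-SVD route, the `k=4` subspace size, and the random "faces" are assumptions for illustration; the intra-difference segmentation and pre-processing stages are not modeled here.

```python
import numpy as np

def eigenface_identify(train, labels, probe, k=4):
    """Eigenface identification sketch: project onto the top-k PCA
    directions of the training faces and return the nearest gallery
    label under L2 distance; images are flattened vectors here."""
    mean = train.mean(axis=0)
    A = train - mean
    # eigenfaces via SVD of the centered data (avoids the d x d covariance)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    W = Vt[:k].T                                 # d x k eigenface basis
    gal = A @ W                                  # gallery projections
    q = (probe - mean) @ W                       # probe projection
    return labels[np.argmin(np.linalg.norm(gal - q, axis=1))]

rng = np.random.default_rng(4)
faces = rng.normal(size=(12, 64))                # 12 toy "face" vectors
ids = np.arange(12)
probe = faces[7].copy()                          # a re-presented gallery image
print(eigenface_identify(faces, ids, probe))     # → 7
```

In the paper's multi-phase scheme, this eigenface/eigenfeature verdict would be fused with the intra-difference detection result rather than used alone.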