A new Bayesian network (BN) learning method using a hybrid algorithm and chaos theory is proposed. The principles of
mutation and crossover in genetic algorithms, together with a cloud-based adaptive inertia weight, are incorporated into the
proposed simple particle swarm optimization (sPSO) algorithm to achieve better diversity and improve convergence
speed. Exploiting the ergodicity and randomness of chaotic maps, the initial population of network structures is generated
by chaotic mapping with uniform search under structure constraints. When the algorithm converges to a local
minimum, a chaotic search is started to escape it and to identify a potentially better network structure. The
experimental results show that this algorithm can be used effectively for BN structure learning.
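As a rough illustration of the chaotic component, the sketch below (in Python, with a hypothetical dimension and seed) uses the standard logistic map, whose ergodicity over (0, 1) spreads candidate points across the search space. The paper's actual encoding of network structures under constraints is not reproduced here.

```python
def logistic_map(x, mu=4.0):
    """One step of the logistic map; mu = 4 gives fully chaotic,
    ergodic behavior on the interval (0, 1)."""
    return mu * x * (1.0 - x)

def chaotic_candidates(n, dim, seed=0.3):
    """Generate n candidate points in [0, 1]^dim by following a single
    chaotic trajectory. The ergodicity of the map spreads candidates
    over the space, which is the motivation for chaotic initialization
    of the structure population (and for restarting the search from a
    chaotic trajectory when stuck in a local minimum)."""
    x = seed
    candidates = []
    for _ in range(n):
        point = []
        for _ in range(dim):
            x = logistic_map(x)
            point.append(x)
        candidates.append(point)
    return candidates

# Example: 5 candidate vectors for a hypothetical 10-dimensional encoding
pop = chaotic_candidates(5, 10)
```

Each coordinate can then be thresholded or mapped to a constrained structure encoding; that mapping is problem-specific and omitted here.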
Bayesian network (BN) structure learning is an NP-hard problem. In this paper, we present an improved approach to
enhance the efficiency of BN structure learning. To avoid the premature convergence of the traditional single-population
genetic algorithm (GA), we propose an immune allied genetic algorithm (IAGA) in which multiple populations and an allied
strategy are introduced. Moreover, we apply prior knowledge by injecting an immune operator into
individuals, which effectively prevents degeneration. To illustrate the effectiveness of the proposed technique, we
present some experimental results.
While continuous variables are increasingly indispensable in Bayesian networks for modeling real-life applications
in complex systems, few software tools support them. Popular commercial Bayesian
network tools such as Hugin and Netica are either expensive or have to discretize continuous variables.
In addition, free programs in the literature, such as BNT and GeNie/SMILE, each have
their own advantages and disadvantages. In this paper, we introduce a newly developed Java tool
for model construction and inference for hybrid Bayesian networks. Leveraging the representational power of a script
language, this tool can build the hybrid model automatically from a well-defined string that follows a
specific grammar. Furthermore, it implements several inference algorithms capable of accommodating hybrid
Bayesian networks, including the Junction Tree (JT) algorithm for conditional linear Gaussian (CLG) models and
Direct Message Passing (DMP) for general hybrid Bayesian networks with CLG structure. We believe this tool
will be useful for researchers in the field.
In addition to computing the posterior distributions of hidden variables in Bayesian networks, another important inference task is to find the most probable explanation (MPE). The MPE provides the most likely configuration
to explain the evidence and helps to manage hypotheses for decision making. In recent years, researchers
have proposed a few methods to find the MPE for discrete Bayesian networks. However, finding the MPE for
hybrid networks remains challenging. In this paper, we first briefly review the current state of the art in the literature on explanation methods. We then present an algorithm that uses a modified max-product
clique tree to find the MPE in hybrid Bayesian networks. A detailed example is
presented to illustrate the algorithm.
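To convey the max-product idea underlying the clique-tree approach, here is a minimal sketch on a hypothetical three-node discrete chain A → B → C (all probability values invented): max-messages are propagated forward and the best configuration is recovered by backtracking. The paper's modified algorithm for hybrid networks is considerably more involved.

```python
import numpy as np

# Hypothetical chain A -> B -> C with binary states; C is observed.
pA = np.array([0.6, 0.4])                  # P(A)
pB_A = np.array([[0.7, 0.3], [0.2, 0.8]])  # P(B|A), rows indexed by A
pC_B = np.array([[0.9, 0.1], [0.4, 0.6]])  # P(C|B), rows indexed by B

def mpe_chain(pA, pB_A, pC_B, c_obs):
    """Max-product on the chain: propagate max-messages toward the
    evidence, then backtrack to read off the MPE configuration."""
    # Message from A to B: for each b, max over a of P(a) * P(b|a)
    m_ab = pA[:, None] * pB_A          # shape (|A|, |B|)
    best_a = m_ab.argmax(axis=0)       # argmax over a, for each b
    msg_b = m_ab.max(axis=0)
    # Absorb the evidence C = c_obs and maximize over b
    score_b = msg_b * pC_B[:, c_obs]
    b_star = int(score_b.argmax())
    a_star = int(best_a[b_star])       # backtrack to the best a
    return {"A": a_star, "B": b_star, "C": c_obs}, float(score_b[b_star])

config, prob = mpe_chain(pA, pB_A, pC_B, c_obs=0)
```

For this toy model the MPE given C = 0 is (A = 0, B = 0) with joint probability 0.6 × 0.7 × 0.9 = 0.378.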
A new BN structure learning method using a cloud-based adaptive immune genetic algorithm (CAIGA) is
proposed. Since the probabilities of crossover and mutation in CAIGA are adaptively varied by an X-conditional
cloud generator, the method improves the diversity of the structure population and avoids local optima, owing to the
stochastic nature and stable tendency of the cloud model. Moreover, the offspring structure population is simplified using
immune theory to reduce its computational complexity. The experimental results show that this method can be used effectively
for BN structure learning.
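As an illustration of how a cloud generator can drive adaptive operator probabilities, the sketch below implements the standard X-conditional normal cloud generator and maps its membership degree to a crossover probability. The Ex/En/He settings and the fitness mapping are assumptions for illustration, not the paper's exact scheme.

```python
import math
import random

def x_conditional_cloud(x, Ex, En, He, rng=random):
    """X-conditional normal cloud generator: given a value x, perturb
    the entropy En by the hyper-entropy He and return the membership
    degree of x. The perturbation gives the cloud model its
    'stable tendency with randomness'."""
    En_prime = rng.gauss(En, He)
    if En_prime == 0.0:
        En_prime = 1e-12
    return math.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2))

def adaptive_crossover_prob(fitness, mean_fit, max_fit, p_lo=0.4, p_hi=0.9):
    """Hypothetical mapping from a cloud drop to a crossover probability:
    individuals at or below the mean fitness get the full rate p_hi
    (keep diversity), while better individuals are crossed over less
    often (protect good structures)."""
    if fitness <= mean_fit:
        return p_hi
    Ex = mean_fit
    En = max(max_fit - mean_fit, 1e-12) / 3.0  # 3-sigma rule of the cloud model
    He = En / 10.0                             # hypothetical hyper-entropy setting
    mu = x_conditional_cloud(fitness, Ex, En, He)
    return p_lo + (p_hi - p_lo) * mu
```

The same construction can be reused for the mutation probability with different bounds.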
Genetic algorithms (GAs) have been applied to many difficult optimization problems such as track assignment and
hypothesis management for multisensor integration and data fusion. However, premature convergence has been a major
problem for GAs. To prevent premature convergence, we introduce an allied strategy based on biological
evolution and present a parallel genetic algorithm with the allied strategy (PGAAS). PGAAS prevents
premature convergence, increases optimization speed, and has been applied successfully in several applications. In this
paper, we first present a Markov chain model of PGAAS. Based on this model, we analyze the convergence properties
of PGAAS and then prove its global convergence. The experimental results show
that PGAAS is an efficient and effective parallel genetic algorithm. Finally, we discuss several potential applications of
the proposed methodology.
Probabilistic inference for hybrid Bayesian networks, which involve both discrete and continuous variables, has
been an important research topic in recent years. This is not only because efficient inference
algorithms have matured for simpler networks such as purely discrete models, but
also because continuous variables are unavoidable in modeling complex systems. Pearl's message
passing algorithm provides a simple framework that computes posterior distributions by propagating messages
between nodes, and it gives exact answers for polytree models with purely discrete or purely continuous variables. In
addition, applying Pearl's message passing to networks with loops usually converges and yields good approximations.
For hybrid models, however, a general message passing scheme between different
types of variables is needed. In this paper, we develop a method called Direct Message Passing (DMP) for exchanging
messages between discrete and continuous variables. Based on Pearl's algorithm, we derive formulae to compute
messages for variables in the various dependence relationships encoded in conditional probability distributions. Mixtures
of Gaussians are used to represent continuous messages, with the number of mixture components up to the
size of the joint state space of all discrete parents. For polytree Conditional Linear Gaussian (CLG) Bayesian
networks, DMP has the same computational requirements as the Junction Tree (JT) algorithm and provides the
same exact solution. However, while JT works only for CLG models, DMP can be applied
to general nonlinear, non-Gaussian hybrid models to produce approximate solutions using the unscented transformation
and loopy propagation. Furthermore, the algorithm can be scaled by restricting the number of mixture
components in the messages. Empirically, we found that the approximation errors are relatively small, especially
for nodes far away from the discrete parent nodes. Numerical simulations show encouraging results.
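One common way to restrict the number of mixture components in a message is moment-matched collapsing; the sketch below (a one-dimensional simplification, not necessarily the exact scheme used by DMP) keeps the heaviest Gaussian components and merges the rest into a single moment-matched Gaussian.

```python
import numpy as np

def collapse_gaussian_mixture(weights, means, variances):
    """Moment-match a 1-D Gaussian mixture into a single Gaussian."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    mu = np.sum(w * m)
    # Law of total variance: E[var] + var[mean]
    var = np.sum(w * (v + (m - mu) ** 2))
    return mu, var

def cap_components(weights, means, variances, k):
    """Bound a 1-D Gaussian-mixture message to k components: keep the
    k-1 heaviest components and merge the remainder into one
    moment-matched Gaussian carrying their total weight."""
    w = np.asarray(weights, dtype=float)
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    order = np.argsort(w)[::-1]           # heaviest first
    keep, rest = order[: k - 1], order[k - 1:]
    if rest.size == 0:
        return w, m, v
    mu_r, var_r = collapse_gaussian_mixture(w[rest], m[rest], v[rest])
    return (np.append(w[keep], w[rest].sum()),
            np.append(m[keep], mu_r),
            np.append(v[keep], var_r))
```

Collapsing preserves the overall mean and variance of the message, which matches the empirical observation that the approximation error stays small away from the discrete parents.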
Pearl's message passing algorithm, developed in the 1980s, is the first exact inference
algorithm for Bayesian networks (BNs). Although it was originally developed only for discrete
polytree networks, it has been used widely in networks with loops to provide approximate
solutions. In that case the messages propagated in the loops are not exact, and the
method is called loopy belief propagation. Loopy propagation usually converges, and when
it converges, it provides good approximate solutions. However, when dealing with arbitrary
continuous Bayesian networks, message representation and manipulation require special care
because continuous variables may have arbitrary distributions and their dependence relationships
may be nonlinear. In this paper, we propose a loopy message passing mechanism
for arbitrary continuous Bayesian networks called Unscented Message Passing (UMP-BN).
UMP-BN combines Pearl's algorithm with an efficient nonlinear transformation technique
called the unscented transformation to estimate the first two moments of the posterior
distribution of any hidden continuous variable. We study its convergence properties by
investigating typical situations in different networks. The numerical experiments
show promising results.
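The core of the unscented transformation can be sketched in a few lines for a scalar variable: deterministic sigma points are propagated through the nonlinearity and reweighted to estimate the first two moments of the output. The multivariate form used in message passing follows the same pattern; the kappa value below is a conventional default, not the paper's setting.

```python
import math

def unscented_moments_1d(mean, var, f, kappa=2.0):
    """Estimate the mean and variance of f(X) for X ~ N(mean, var)
    using the scalar unscented transformation: three deterministic
    sigma points propagated through the nonlinearity f."""
    n = 1  # dimension of the state
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa),
               1.0 / (2 * (n + kappa)),
               1.0 / (2 * (n + kappa))]
    ys = [f(x) for x in points]
    m = sum(w * y for w, y in zip(weights, ys))
    v = sum(w * (y - m) ** 2 for w, y in zip(weights, ys))
    return m, v
```

For a linear function the transformation is exact: f(x) = 2x + 1 applied to N(0, 1) yields mean 1 and variance 4, which the sigma-point estimate recovers exactly.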
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution within a given time constraint. Several simulation methods are currently available: logic sampling (the first stochastic method proposed for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these simulation methods; we then propose an improved importance sampling algorithm called linear Gaussian importance sampling for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with additive Gaussian noise to approximate the true conditional probability distribution of a continuous variable given both its parents and the evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with well-known methods such as Junction Tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
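For reference, likelihood weighting itself is straightforward; the sketch below estimates a posterior in a hypothetical two-node network A → E (all probabilities invented) by sampling the non-evidence variable from its prior and weighting each sample by the likelihood of the evidence.

```python
import random

def likelihood_weighting(n_samples, e_obs, seed=1):
    """Likelihood weighting on a toy network A -> E with binary
    variables: sample A from its prior, fix the evidence E = e_obs,
    and weight each sample by P(E = e_obs | A)."""
    rng = random.Random(seed)
    pA = 0.3                   # P(A = 1), hypothetical
    pE_A = {1: 0.9, 0: 0.2}    # P(E = 1 | A), hypothetical
    num = 0.0
    den = 0.0
    for _ in range(n_samples):
        a = 1 if rng.random() < pA else 0
        pe = pE_A[a] if e_obs == 1 else 1.0 - pE_A[a]
        den += pe              # sample weight = likelihood of evidence
        num += pe * a
    return num / den           # estimate of P(A = 1 | E = e_obs)

est = likelihood_weighting(20000, e_obs=1)
```

The exact answer here is P(A = 1 | E = 1) = 0.27 / 0.41 ≈ 0.659, and the weighted estimate converges to it as the sample count grows; importance sampling variants such as LGIS improve on this by sampling from a learned importance function rather than the prior.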
In recent decades, Bayesian networks (BNs) have shown their power to solve probabilistic inference problems because of their expressive representation of dependence relationships among random variables and the dramatic development of inference algorithms. They have been applied for decision making under uncertainty in many areas such as data fusion, target recognition, and medical diagnosis. In general, the problem of probabilistic inference for a dynamic BN is to compute the posterior probability distribution of a specific variable of interest given a set of observations accumulated over time. The accuracy of the resulting posterior distribution is essential, since the correct decision in any partially observable environment depends on it. However, no general evaluation methodology is available to predict the inference performance of a BN other than extensive Monte Carlo simulation. In this paper, we first present a method to model the inference performance of a static BN. This approximate method is designed to predict the inference performance analytically, without extensive simulation. We then propose a sequential simulation method based on the particle filter concept to evaluate the inference performance of a dynamic BN. The specific model we deal with is a hybrid partial dynamic BN consisting of
discrete and continuous variables with arbitrary relationships. Since no exact inference algorithm is available for such a model, we use the likelihood weighting (LW) method on an unrolled DBN to estimate its true performance bound for comparison with the predicted performance. Comparison and analysis of the experimental results show the potential capability of the sequential simulation method for evaluating the performance of dynamic Bayesian networks.
Bayesian networks are graphical representations of dependence relationships between domain variables. Owing to their powerful probabilistic inference, they have been applied in many areas such as data fusion, target recognition, and medical diagnosis. A number of inference algorithms exist with different trade-offs in computational efficiency, accuracy, and applicable network topologies. It is well known that, in general, exact inference algorithms are either computationally infeasible for dense networks or unavailable for mixed discrete-continuous networks. In practice, however, mixed Bayesian networks are commonly used in various applications. In this paper, we compare and analyze the trade-offs of several inference approaches: the exact Junction Tree algorithm for linear Gaussian networks, exact inference on discretized networks, and stochastic simulation methods. We also propose an almost instant-time algorithm (AIA) that pre-compiles approximate likelihood tables. Preliminary experimental results show promising performance.