In today’s world, intelligent systems are expected to handle diverse types of events that can be represented as dichotomous, probabilistic, fuzzy, or mixed statements. This paper proposes an analytic framework called “statemental analysis,” which deals with the operations and truth values of finite and infinite sequences of statements of various categories. The framework treats vagueness and randomness in a unified manner using the concepts of statemental algebra and truth calculus. It allows many results in probability theory to be generalized to statements, with significant implications. We demonstrate the utility of the proposed framework with an example of reliability analysis of uncertain systems.
The limitations of current logical systems in addressing vagueness and randomness restrict their application in the field of artificial intelligence (AI). To address this issue, we introduce an axiomatic mathematical system called statemental credibility logic (SCL). First, we jointly define logical operations and equivalence. Then, we introduce the concept of a truth measure. By extending the law of excluded middle and the law of non-contradiction, and using them in conjunction with other self-evident rules in the axiomatization of SCL, we can incorporate classical propositional calculus and probability theory. Furthermore, SCL can handle statements that exhibit varying degrees of vagueness and randomness. We also extend SCL to deal with predicates that involve uncertainty. We believe that SCL and its extension have the potential to improve the reasoning capabilities of intelligent systems.
One crucial capability of unmanned systems is their ability to make decisions and inferences like humans. In this paper, we develop a novel logical system that imitates the way humans engage in reasoning with statements possessing varying degrees of ambiguity and unpredictability. Our proposed logical system is constructed using an axiomatic approach with self-evident rules, which allows us to define statemental operations and logical equivalence without the need for a concept of truth valuation. Our logical system includes both statemental algebra and truth calculus, which are designed to manipulate statements and assess their credibility. We believe that our proposed logical system has the potential to enhance the intelligence of unmanned systems.
KEYWORDS: Monte Carlo methods, Statistical analysis, Computing systems, Engineering, Control systems, Biological samples, Systems modeling, Stochastic processes, Dynamical systems, Tolerancing
The average performance of uncertain dynamic discrete-event systems remains a persistent concern in the field of control engineering. In this paper, we propose to use a Monte Carlo method to analyze uncertain systems by determining whether their average performance exceeds an acceptable level. Specifically, we formulate the performance analysis as a problem of statistical hypothesis testing of mean values. Using a mean-preserving transform, we convert this problem into one of statistical hypothesis testing of probabilities, which can be solved using our adaptive Monte Carlo test. This test is based on Wald’s sequential probability ratio test (SPRT). We demonstrate the applicability of our method by investigating the average performance of a control system with parametric uncertainty.
KEYWORDS: Statistical analysis, Control systems, Adaptive control, System integration, Matrices, Data processing, Signal detection, Data acquisition, Target recognition, Signal processing
The advancement of modern control systems places increasingly high demands on the capability of systems to make decisions and devise control strategies in an adaptive and efficient way. In many applications, the decision time and performance index of control are determined by stochastic processes. In this paper, we develop a family of new limit theorems on the joint convergence of partial sums of independent random vectors and associated random indexes under general assumptions. We demonstrate that the random index and the partial sum are asymptotically independent under proper normalization, with the partial sum converging in distribution to a normally distributed random variable. Moreover, we obtain limit theorems for functions of partial sums, random indexes, and parameters, which include central limit theorems as special cases. We also extend the results to Lévy processes. An illustrative example is given on an integration system, which is a building block of control and decision systems.
An indispensable function of intelligent systems is to perform logical reasoning in the presence of uncertainty. In this paper, we establish a mathematical framework, called statemental credibility logic (SCL), for inference under uncertainty. The proposed SCL consists of statemental algebra and truth calculus. The statemental algebra deals with the operations of statements, which can represent deterministic, vague, or random events, and mixtures thereof. The truth calculus discusses the evaluation and inference of the truth values of dichotomous, fuzzy, and probabilistic statements, and their combinations. We generalize classical Bayesian networks and develop robust inference methods which have the potential to yield more capable and reliable inference engines for intelligent systems.
KEYWORDS: Monte Carlo methods, Statistical analysis, Dynamical systems, Control systems, Computing systems, Stochastic processes, Computer simulations, Tolerancing, Systems modeling, Tin
The analysis of uncertain dynamic discrete-event systems is generally intractable by deterministic numeric methods. In this paper, we propose an adaptive Monte Carlo test method to analyze such systems. In contrast to conventional methods, which estimate the probability that a system fails to satisfy prespecified requirements, our goal is to determine whether the probability that the system violates the requirements exceeds an acceptable level. To accomplish this goal, we exploit a testing method based on the sequential probability ratio test (SPRT) invented by Wald. We demonstrate that this method can result in a substantial reduction of computational complexity as compared to conventional methods. To make the test method rigorous, we develop exact methods for computing the probability of making wrong decisions and the average number of simulation runs. The proposed method can be applied to investigate the stability of a control system with parametric uncertainty.
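The SPRT-based test described above can be pictured with a minimal sketch of Wald's classical SPRT for deciding whether a violation probability p exceeds an acceptable level. This is an illustration only, not the paper's exact adaptive procedure; the thresholds `p0`, `p1` and the simulator are hypothetical placeholders.

```python
import math
import random

def sprt_test(simulate, p0, p1, alpha=0.05, beta=0.05, max_runs=100000):
    """Wald's SPRT for H0: p <= p0 vs H1: p >= p1, where p is the
    probability that a simulated run violates the requirements.
    `simulate` returns True if a run violates the requirements."""
    a = math.log((1 - beta) / alpha)   # upper boundary: accept H1
    b = math.log(beta / (1 - alpha))   # lower boundary: accept H0
    llr = 0.0                          # cumulative log-likelihood ratio
    for n in range(1, max_runs + 1):
        if simulate():
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= a:
            return "reject H0", n      # violation probability too high
        if llr <= b:
            return "accept H0", n
    return "undecided", max_runs

# Hypothetical system whose true violation probability (0.001) is well
# below the acceptable level p0 = 0.01, so the test should accept H0.
random.seed(0)
decision, runs = sprt_test(lambda: random.random() < 0.001, p0=0.01, p1=0.05)
```

The test typically terminates after a few dozen runs, far fewer than a fixed-sample Monte Carlo estimate of the same probability would require.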
Modern sensors produce increasingly high volumes of data that require efficient and reliable statistical methods for information processing. We consider frequent problems of information processing which can be cast into the framework of parameter estimation and multihypothesis testing. We propose a unified approach for statistical inference in information processing by introducing the inclusion principle, confidence processes, unimodal likelihood estimators, and time-uniform concentration inequalities. Our methods make decisions based on observed data in an adaptive and sequential way, so that decisions can be made as quickly as possible while the probability of committing mistakes is acceptably small.
KEYWORDS: Data processing, Probability theory, Sensors, Target recognition, Integral transforms, Image classification, Error analysis, Detection and tracking algorithms, Data fusion, Data analysis
Interval estimation of data parameters is a frequent task of information processing for sensor systems. Classical parameter estimation methods for information processing suffer from the drawbacks of inaccuracy or conservatism. In this article, we propose a general method for constructing confidence regions for parameters of data. Moreover, we develop computable expressions for the minimum coverage probability of random intervals, which allow for a bisection coverage tuning method for constructing confidence intervals for parameters of various types of data. The proposed theory and algorithms can be applied to relevant tasks such as pattern classification, data fusion, target recognition, and tracking.
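As a concrete illustration of bisection-based coverage tuning, the following sketch locates the endpoints of an exact (Clopper-Pearson) confidence interval for a binomial proportion by bisecting on the binomial tail probability. This is a standard construction given for orientation; the paper's method applies to more general types of data.

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05, tol=1e-9):
    """Exact (conservative) confidence interval for a binomial proportion,
    each limit located by bisection on a monotone binomial tail probability."""
    def bisect(f, lo, hi):
        # f is positive at lo and negative at hi; shrink to the root.
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower limit: p solving P(X >= k | p) = alpha/2
    lower = 0.0 if k == 0 else bisect(
        lambda p: alpha / 2 - (1 - binom_cdf(k - 1, n, p)), 0.0, 1.0)
    # upper limit: p solving P(X <= k | p) = alpha/2
    upper = 1.0 if k == n else bisect(
        lambda p: binom_cdf(k, n, p) - alpha / 2, 0.0, 1.0)
    return lower, upper

lo, hi = clopper_pearson(k=3, n=20)   # 3 successes in 20 trials, 95% level
```

For 3 successes in 20 trials this recovers the familiar exact interval of roughly (0.032, 0.379).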
KEYWORDS: Control systems, Stochastic processes, Statistical analysis, Computer simulations, Analytical research, Tin, Monte Carlo methods, Systems modeling, Probability theory, Algorithm development
A persistent concern of control engineering is the performance of systems in the presence of uncertainty. By treating uncertain parameters of systems as random variables, the performance of systems may be formulated as means of random variables. In this paper, we develop multistage schemes for making statistical inference about means of random variables. Such schemes are unprecedentedly efficient as compared to existing methods, while guaranteeing a pre-specified level of credibility. The optimality of the proposed schemes is established by making use of uniform exponential maximal inequalities. The proposed schemes are applied to robustness analysis of control systems under uncertainty. It is demonstrated that the computational complexity of the proposed schemes is substantially lower and independent of the problem size, in contrast to the non-polynomial complexity of the worst-case method of robustness analysis.
In this paper, we develop adaptive PAC (probably approximately correct) learning methods with applications to the design of control strategies for uncertain systems. The proposed PAC learning methods mimic the adaptive learning behavior of human beings, accumulating evidence step by step and making decisions based on available observations. In the proposed methods, new comparative inferential techniques are developed to quickly eliminate inferior hypotheses. We demonstrate that the proposed PAC learning methods are substantially more efficient in finding the optimal hypothesis with a pre-specified level of confidence and accuracy. The proposed PAC learning methods can be applied to the design of robust controllers, where the uncertain parameters of the relevant system are sampled to obtain training examples for the learning process.
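A minimal sketch of comparative elimination in this spirit is the classical successive-elimination scheme below, which samples all surviving hypotheses in rounds and discards those whose empirical mean falls below the current best by more than a Hoeffding confidence radius. The hypothesis names, reward models, and constants are illustrative, not the paper's.

```python
import math
import random

def successive_elimination(arms, epsilon=0.05, delta=0.05, seed=0):
    """Adaptively eliminate inferior hypotheses ("arms") until one survivor
    remains that is epsilon-optimal with probability at least 1 - delta.
    `arms` maps hypothesis names to reward-sampling functions in [0, 1]."""
    rng = random.Random(seed)
    alive = list(arms)
    sums = {a: 0.0 for a in alive}
    t = 0
    while len(alive) > 1:
        t += 1
        for a in alive:
            sums[a] += arms[a](rng)
        # Hoeffding confidence radius shared by all surviving hypotheses
        rad = math.sqrt(math.log(4 * len(arms) * t * t / delta) / (2 * t))
        if rad <= epsilon / 2:
            break                          # remaining arms are epsilon-close
        best = max(alive, key=lambda a: sums[a])
        alive = [a for a in alive if sums[a] / t >= sums[best] / t - 2 * rad]
    return max(alive, key=lambda a: sums[a])

# Three hypothetical hypotheses with means 0.2, 0.4 and 0.5; h3 is best.
arms = {"h1": lambda r: r.random() * 0.4,
        "h2": lambda r: r.random() * 0.8,
        "h3": lambda r: r.random()}
winner = successive_elimination(arms)
```

Clearly inferior hypotheses are dropped early, so the sampling budget concentrates on the few candidates that are actually hard to distinguish.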
KEYWORDS: Error analysis, Computer simulations, Monte Carlo methods, Computing systems, Information theory, Statistical analysis, Telecommunications, Mathematical modeling, Control systems, Tolerancing
In this paper, we develop a rigorous and efficient method for risk evaluation. Our risk evaluation method is an adaptive Monte Carlo estimation method implemented as a rectangular random walk, which is derived from a mixed error criterion and the concept of relative entropy from information theory. Our proposed method of risk evaluation can be orders of magnitude more efficient as compared to existing methods in the literature and in widely used software. This new method makes it possible to evaluate the risk of systems so that, in a strict statistical sense, either the absolute error can be controlled below 10⁻⁶ or the relative error can be controlled below 0.01; that is, the error of risk evaluation can be rigorously certified at an extremely low level, which is impossible using existing methods.
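One way to picture a rectangular stopping boundary derived from a mixed error criterion is the sketch below: sampling stops when either a trial budget guaranteeing the absolute-error target is exhausted or enough failures have been observed to meet the relative-error target. The budgets use generic Hoeffding and inverse-binomial-style constants and are purely illustrative, not the paper's certified formulas.

```python
import math
import random

def mixed_error_estimate(simulate, eps_abs=1e-3, eps_rel=0.1, delta=0.05, seed=1):
    """Adaptive Monte Carlo estimate of a failure probability p with a
    rectangular stopping boundary in the (trials, failures) plane: stop
    when the trial count covers the absolute-error target, or when enough
    failures have been seen to cover the relative-error target (both at
    nominal confidence 1 - delta; constants are illustrative)."""
    rng = random.Random(seed)
    n_abs = math.ceil(math.log(2 / delta) / (2 * eps_abs**2))  # Hoeffding budget
    gamma = math.ceil(3 * math.log(2 / delta) / eps_rel**2)    # failure budget
    failures, trials = 0, 0
    while trials < n_abs and failures < gamma:
        trials += 1
        failures += simulate(rng)
    return failures / trials

# Hypothetical system with true failure probability 0.02: the relative-error
# branch fires first, after far fewer runs than the absolute-error budget.
p_hat = mixed_error_estimate(lambda r: r.random() < 0.02)
```

For rare failures the relative-error branch dominates, which is exactly why a mixed criterion is far cheaper than insisting on a tiny absolute error alone.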
KEYWORDS: Control systems, Control systems design, Algorithm development, Computing systems, Monte Carlo methods, Stochastic processes, Failure analysis, Optimization (mathematics), Probability theory, Dynamical systems
Control systems are usually designed based on nominal values of relevant physical parameters. To ensure that a control system will work properly when the relevant physical parameters vary within a certain range, it is crucial to investigate how the performance measure is affected by the variation of system parameters. In this paper, we demonstrate that this issue boils down to the study of the variation of functions of uncertainties. Motivated by this vision, we propose a general theory for inferring functions of uncertainties. By virtue of this theory, we investigate the concentration phenomenon of bounded random vectors. We derive multidimensional concentration inequalities for bounded random vectors, which are substantially tighter as compared to existing ones. The new concentration inequalities are applied to investigate the performance of control systems with real parametric uncertainty. It is demonstrated that much more useful insight into control systems can be obtained. Moreover, the concentration inequalities offer performance analysis in a significantly less conservative way as compared to the classical deterministic worst-case method.
KEYWORDS: Computer simulations, Error control coding, Error analysis, Monte Carlo methods, Genetic algorithms, Evolutionary algorithms, Control systems design, Control systems, Systems modeling, Computing systems
In this paper, we develop a novel, quantitative, rigorous and efficient method for risk minimization in control and decision under uncertainty. The crucial components of our approach include a rigorous, efficient risk evaluation method and a stochastic optimization technique. The risk evaluation method is an adaptive Monte Carlo estimation method which is derived from the concept of relative entropy and truncated inverse binomial sampling. The stochastic optimization technique is built upon evolutionary computing methods such as genetic algorithms, where the fitness function is constructed from the adaptive Monte Carlo estimation method. The effectiveness of the proposed method is demonstrated by its applications to the design of PID controllers for uncertain systems, where the probability of performance violation is minimized.
Modern intelligent systems are expected to be able to learn from experience, making decisions on the basis of the available information and proceeding step by step toward a desired goal. An important specification of such an adaptive decision-making method is the amount of time needed to reach a decision. In this paper, we propose a random walk model for such decision-making methods. The model involves random processes which have independent stationary increments. The decision times are formulated as first passage times dependent on the parameters of decision rules. Asymptotic and nonasymptotic results are developed for the analysis of first passage times.
Modern intelligent systems depend heavily on their capabilities to learn from experience and take control actions in uncertain environments. In this paper, we propose a random walk approach for analyzing the performance of learning and control of intelligent systems. We show that in many situations, the learning and control problem can be formulated as a random walk in a hyperspace with a stopping boundary defined by parameters of learning and control policies. We show that the performance of the intelligent systems can be measured by a function of the stopping time and associated values of stochastic processes. Under some mild regularity conditions, we demonstrate that the performance measure follows stochastic functional laws of the iterated logarithm as the parameters of learning and control policies tend to certain values.
A critical issue affecting the success of decision making is the underlying uncertainty. In this paper, we consider decision making problems involving uncertainties characterized by stochastic processes of independent stationary increments. The cost function of decision making is expressed as a function of the decision time and associated values of stochastic processes. The decision time is a stopping time dependent on the parameters of decision rules. We investigate the asymptotic behavior of the cost function as the parameters of decision rules tend to certain values. We demonstrate that the cost function follows stochastic functional limit theorems as the parameters of the decision rules tend to certain values.
A persistent concern of control engineering is the performance of systems in the presence of uncertainty. In this paper, we consider uncertainties affecting systems as stochastic processes of independent stationary increments. We show that in many situations the performance of an uncertain system can be measured by a function of a parametric stopping time and associated values of stochastic processes. Under some mild regularity conditions, we demonstrate that the performance measure is governed by stochastic functional central limit theorems as the parameters of the stopping time tend to certain values. Such results can be applied to the analysis and design of control systems affected by uncertainties.
KEYWORDS: Control systems, Sensors, Control systems design, Feature extraction, Statistical analysis, Integral transforms, Probability theory, Sensing systems, Error analysis, Dynamical systems
High volumes of data are becoming increasingly common for modern sensors and control systems. In this paper, we propose new techniques for constructing confidence regions based on concentration inequalities. Such confidence regions can be used to represent large volumes of data with high dimensionality. Moreover, such confidence regions can be used to analyze the performance of systems under uncertainties, especially the estimation of the average overshoot, rise time and settling time, which are critical specifications of a control system.
KEYWORDS: Control systems design, Control systems, Stochastic processes, Monte Carlo methods, Matrices, Computer simulations, Failure analysis, Probability theory, Systems modeling
In the design of control systems affected by uncertain parameters, a primary goal is to ensure that a controller designed based on nominal values of parameters will perform satisfactorily in the presence of uncertainties. Adaptive randomized algorithms have been proposed in the literature for overcoming the issues of conservatism and computational complexity, which grows exponentially with respect to the dimension of uncertainty. In this paper, we demonstrate that such adaptive randomized algorithms are inherently associated with stopped random walks. We develop a unified theory of stopped random walks which has the potential to yield better decision and control strategies for uncertain systems.
For wireless data communication systems employing multiple antennas, space-time codes play crucial roles for fast transmission of data with accuracy and bandwidth efficiency. Motivated by the large size of constellations of space-time codes and the resultant computational complexity, we develop a stochastic approach for the optimization of space-time constellations. We use union bounds of block error rate as performance measures of the space-time codes. To overcome the computational complexity, we propose to transform the performance measure into the mean of a bounded random variable and establish a statistical method for the estimation of such mean and its gradients with respect to parameters. A stochastic gradient descent method is developed for optimizing space-time codes. Such stochastic techniques are applied to obtain high performance space-time codes of large constellation sizes.
Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as portscanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.
KEYWORDS: Tin, Control systems design, Algorithm development, Monte Carlo methods, Stochastic processes, Control systems, Computer simulations, Matrices, Statistical analysis, Computing systems
We consider the general problem of analysis and design of control systems in the presence of uncertainties. We treat uncertainties that affect a control system as random variables. The performance of the system is measured by the expectation of some derived random variables, which are typically bounded. We develop adaptive sequential randomized algorithms for estimating and optimizing the expectation of such bounded random variables with guaranteed accuracy and confidence level. These algorithms can be applied to overcome the conservatism and computational complexity in the analysis and design of controllers to be used in uncertain environments. We develop methods for investigating the optimality and computational complexity of such algorithms.
In this paper, we propose analytic sequential methods for detecting port-scan attackers
which routinely perform random “portscans” of IP addresses to find vulnerable servers to
compromise. In addition to rigorously controlling the probability of falsely implicating benign
remote hosts as malicious, our method performs significantly faster than other current solutions.
We have developed explicit formulae for quick determination of the parameters of the
new detection algorithm.
We propose a new approach for deriving probabilistic inequalities based on bounding likelihood
ratios. We demonstrate that this approach is more general and powerful than the classical method
frequently used for deriving concentration inequalities such as Chernoff bounds. We discover that
the proposed approach is inherently related to statistical concepts such as monotone likelihood ratio,
maximum likelihood, and the method of moments for parameter estimation. A connection between
the proposed approach and the large deviation theory is also established. We show that, without using
moment generating functions, the tightest possible concentration inequalities can be readily derived by
the proposed approach. The applications of the new probabilistic techniques to statistical machine
learning theory are demonstrated.
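A small numeric illustration of the gap between moment-generating-function bounds and likelihood-ratio-based bounds: for a binomial tail, the Chernoff-Hoeffding (KL-divergence) bound, which can be derived by bounding likelihood ratios, is markedly tighter than Hoeffding's inequality, and both dominate the exact tail. The parameter values are arbitrary.

```python
import math

def kl(a, b):
    """Bernoulli Kullback-Leibler divergence KL(a || b)."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def exact_tail(n, p, t):
    """Exact P(X/n >= t) for X ~ Binomial(n, p)."""
    k0 = math.ceil(n * t)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k0, n + 1))

n, p, t = 100, 0.1, 0.2
chernoff_kl = math.exp(-n * kl(t, p))        # likelihood-ratio / KL bound
hoeffding   = math.exp(-2 * n * (t - p)**2)  # classical MGF-based bound
exact       = exact_tail(n, p, t)
```

Here the KL bound is about an order of magnitude tighter than Hoeffding's bound (roughly 0.012 versus 0.135 for these parameters), mirroring the claim that likelihood-ratio arguments recover the sharper exponent.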
We propose a new structure of space-time codes whose decoding problem can be decomposed
into multiple one-dimensional closest-point searches. Each search can be accomplished by a simple
rounding method. The new coding technique can be applied to data transmission of sensor systems,
where the decoding task is expected to be quickly accomplished for the purpose of fast response.
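A minimal sketch of the per-coordinate decoding step, assuming each coordinate is decoded independently on an integer lattice (the code structure itself is not reproduced here, and the lattice spacing is a hypothetical parameter):

```python
def decode_by_rounding(received, step=1.0):
    """Solve each one-dimensional closest-point search by rounding the
    received coordinate to the nearest lattice point (multiples of `step`)."""
    return [round(y / step) * step for y in received]

# Noisy received coordinates snap to the nearest lattice points.
symbols = decode_by_rounding([0.9, -1.2, 2.6, 0.1])
```

Each coordinate costs one division and one rounding, so decoding time grows only linearly with the block length, which is what makes the scheme attractive for fast-response sensor links.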
In this paper, we propose new sequential methods for detecting port-scan attackers which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. Moreover, our method guarantees that the maximum amount of observational time is bounded. In contrast to the previously most effective method, the Threshold Random Walk algorithm, which is explicit and analytical in nature, our proposed algorithm involves parameters to be determined by numerical methods. We have introduced computational techniques such as iterative minimax optimization for quick determination of the parameters of the new detection algorithm. A framework of multi-valued decisions for detecting portscanners and DoS attacks is also proposed.
KEYWORDS: Probability theory, Machine learning, Statistical analysis, Statistical inference, Data modeling, Error analysis, Electrical engineering, Information technology, Picture Archiving and Communication System, Pattern recognition
In this paper, we propose a general approach for statistical inference and machine learning based
on accumulated observational data. We demonstrate that a large class of machine learning problems
can be formulated as the general problem of constructing random intervals with pre-specified coverage
probabilities for the parameters of the model for the observational data. We show that the construction
of such random intervals can be accomplished by comparing the endpoints of random intervals with
confidence sequences for the parameters obtained from the observational data. Asymptotic results are
obtained for such sequential methods.
In this paper, we develop an exact computational approach for simultaneous inference of population
proportions. The main idea of this computational approach is to use the branch and bound technique for
rigorous checking of coverage probabilities and the probabilities of making wrong decisions. Applications
of the proposed method can be found in machine learning and other areas.
KEYWORDS: Antennas, Wireless communications, Telecommunications, Signal to noise ratio, Data communications, Receivers, Optical spheres, Data transmission, Sensors, Smoothing
In some scenarios of wireless communications, due to the fast change of channel information, it is
very difficult to estimate the channel parameters in real time. This difficulty can be overcome by noncoherent
communication techniques. In this paper, we propose a new class of unitary space-time codes
for non-coherent wireless MIMO communications, aimed at improving the bit error rate performance
and data speed of communication systems. This class of unitary space-time codes can be efficiently
decoded using sphere decoder algorithms. A numerical approach is proposed for the optimization of
signal constellation. Such coding techniques can be applied to the data transmission of wireless sensor
systems.
KEYWORDS: Stochastic processes, Statistical analysis, Monte Carlo methods, Magnesium, Control systems, Fourier transforms, Zinc, Computing systems, Systems modeling, Computer simulations
In this paper, we propose a statistical approach for analyzing the performance of uncertain systems.
By treating the uncertain parameters of systems as random variables, we formulate a wide class of
performance analysis problems as a general problem of quantifying the deviation of a random variable
from its mean value. New concentration inequalities are developed to make such quantification rigorous
and analytically simple. Application examples are given for demonstrating the power of our approach.
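As a minimal illustration of quantifying the deviation of a random variable from its mean, the sketch below evaluates Hoeffding's classical inequality for the sample mean of bounded performance scores; the paper's new inequalities are tighter, and this standard bound is shown only for orientation.

```python
import math
import random

def hoeffding_bound(n, eps, a=0.0, b=1.0):
    """Hoeffding's inequality: P(|sample mean - true mean| >= eps) <= bound
    for the average of n independent samples bounded in [a, b]."""
    return 2 * math.exp(-2 * n * eps**2 / (b - a)**2)

# Empirical check on a hypothetical performance score uniform on [0, 1].
rng = random.Random(0)
n, eps = 1000, 0.05
mean = sum(rng.random() for _ in range(n)) / n
bound = hoeffding_bound(n, eps)  # probability of deviating by >= eps
```

With n = 1000 the bound certifies that a deviation of 0.05 from the true mean 0.5 occurs with probability at most about 1.3%, and the simulated sample mean indeed lands well inside that band.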
In this paper, we demonstrate that a wide class of machine learning problems can be formulated as
general problems of multi-valued decision and classification. To reduce the sample complexity associated
with the statistical learning and inference schemes, we propose the principle of probabilistic comparison,
the inclusion principle and exact computational methods for constructing multistage procedures for the
relevant multi-hypothesis testing problems.
KEYWORDS: Antennas, Optical spheres, Receivers, Genetic algorithms, Matrices, Optimization (mathematics), Transmitters, Stochastic processes, Data communications, Signal to noise ratio
In scenarios where channel state information is available to the receiver, making use of the information
in detection significantly improves system performance. Such a transmission scheme is called
coherent detection. In this work, we propose a new family of space-time codes for coherent detection
schemes in a wireless environment using multiple transmitter and receiver antennas. The decoding
problem can be efficiently solved by parallel sphere decoder algorithm. A combination of Genetic algorithms
and stochastic gradient descent algorithms is established for the code optimization. Our simulation results indicate that such a wireless communication technique is suitable for sensing systems
with reliable transmission of high volume of data.
Artificial neural networks are widely used in pattern recognition for sensing systems and other areas.
In this paper, we propose to improve the performance of neural networks from the perspectives
of output encoding rules, determination of training sample sizes, training performance index and evaluation
of generalization error. We propose a new output encoding rule which significantly reduces the
training error as compared to classical output encoding methods. Moreover, we develop a new training
performance index which is closely related to the generalization error and is a smooth function suitable for optimization by virtue of nonlinear programming. Furthermore, motivated by the crucial impact of training sample size on the generalization error and the computational complexity of training, we propose a rigorous method for determining an appropriate number of training samples. Since the development of a neural network requires many cycles of training and performance evaluation, we introduce adaptive methods for estimating the generalization error. The new techniques of neural network training and evaluation have the potential to improve the power of modern sensing systems.
In this paper, we propose new network intrusion detection techniques which promptly detect malicious
attacks and thus lower the resulting damage. Moreover, our approach rigorously controls the
probability of falsely implicating benign remote hosts as malicious. Such techniques are especially suitable
for detecting DoS attackers and port-scan attackers who routinely perform random "portscans" of IP
addresses to find vulnerable servers to compromise. Our method performs significantly faster and
more accurately than other current solutions.
In this paper, we investigate the multiple hypothesis problems of target detection and tracking
in sensor systems. In many practical situations, the observational data may be expensive to acquire
and the speed of decision can be affected by unnecessary amount of observational data. Motivated
by the importance of accuracy and efficiency of sensor systems, we propose novel adaptive statistical
inferential methods to reduce the amount of required observational data while achieving acceptable
level of accuracy. Toward this goal, we propose adaptive methods in the general framework of testing
multiple hypotheses for the detection and classification problems. The feasibility and optimality of the
methods have been established.
KEYWORDS: Monte Carlo methods, Computer simulations, Error analysis, Very large scale integration, Statistical analysis, Device simulation, Oxides, Error control coding, Capacitance, Integrated circuits
In this article, an explicit formula is derived for determining an appropriate number of simulation
runs to estimate the parametric yield or violation probability of VLSI circuits. The formula involves
no approximation and thus offers a rigorous control of the statistical error of estimation. Moreover,
the formula is substantially less conservative than existing methods and hence can be used to avoid
unnecessary computation. The application of the formula is illustrated by the timing analysis of an
n-input NAND gate with a capacitive load.
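For orientation, a generic Hoeffding-type sample-size formula of the same flavor is sketched below; the paper's formula is less conservative, so this is only an upper-bound illustration, with hypothetical accuracy and confidence targets.

```python
import math

def required_runs(eps, delta):
    """Hoeffding-type sample size: number of Monte Carlo runs n so that the
    estimated yield is within eps of the true yield with probability at
    least 1 - delta (conservative; illustrative, not the paper's formula)."""
    return math.ceil(math.log(2 / delta) / (2 * eps**2))

# e.g. estimate the parametric yield to within 1% at 95% confidence
n = required_runs(eps=0.01, delta=0.05)
```

Because the bound involves no distributional approximation, the resulting run count rigorously controls the statistical error, at the cost of some conservatism that sharper formulas can remove.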
KEYWORDS: Probability theory, Statistical analysis, Error analysis, Darmstadtium, Bismuth, Analytical research, Control systems, Monte Carlo methods, Homeland security, Copper
In this paper, we propose a unified framework of multistage parametric inference with wide applications.
Within the new framework, we have developed specific multistage parametric estimation
and hypothesis testing procedures which are rigorous and unprecedentedly efficient as compared to
existing methods. Our multistage parametric inferential techniques have immediate applications to
performance evaluation of information and dynamic control systems.
An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance for generalized
asynchronous DS-CDMA systems, in Gaussian noise with Rayleigh fading. In this paper, and the
sequel, new theoretical work has been contributed which substantially enhances existing performance
analysis formulations. Major contributions include: substantial computational complexity reduction,
including a priori BER accuracy bounding; an analytical approach that facilitates performance evaluation
for systems with arbitrary spectral spreading distributions and non-uniform transmission
delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA
system model is constructed and used to evaluate the BER performance in a variety of scenarios.
In this paper, the generalized system modeling was used to evaluate the performance of both Walsh-
Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection
of these codes was informed by the observation that WH codes contain N spectral spreading values
(0 to N - 1), one for each code sequence; while WH-ZCZ codes contain only two spectral spreading
values (N/2 - 1,N/2); where N is the sequence length in chips. Since these codes span the spectral
spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the
system model is sufficiently supported. The results in this paper, and the sequel, support the claim
that an arbitrarily accurate performance analysis for DS-CDMA systems can be carried out over the full
range of binary coding, with minimal computational complexity.
KEYWORDS: Optical spheres, Wireless communications, Silicon, Digital signal processing, Statistical analysis, Electrical engineering, Data communications, Reliability, Telecommunications, Signal detection
In this paper, we develop an adaptive sphere decoding technique for space-time coding of wireless
MIMO communications. This technique makes use of the statistics of previous decoding results to
reduce the decoding complexity of the subsequent decoding process. Specifically, we propose a method for
the determination of the initial sphere radius for the decoding process of future time-frame based on a
queue of records of minimum sphere radius obtained from the decoding process of previous time-frames.
Concrete methods have been derived for the choice of appropriate queue sizes. Numerical experiments
are performed to demonstrate the efficiency of the adaptive technique.
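The radius-selection idea can be pictured as follows, with a hypothetical class name and safety margin; the concrete queue-size rules derived in the paper are not reproduced here.

```python
from collections import deque

class AdaptiveRadius:
    """Choose the initial sphere-decoder radius for the next time-frame from
    a bounded queue of the minimum radii found in previous frames: a sketch
    in the spirit of the paper, taking the queue maximum inflated by a
    safety margin so the sphere almost surely contains the closest point
    while staying far below a worst-case radius."""
    def __init__(self, queue_size=8, margin=1.2, fallback=1e9):
        self.history = deque(maxlen=queue_size)  # old records drop off
        self.margin = margin
        self.fallback = fallback                 # unbounded-like first search

    def initial_radius(self):
        if not self.history:
            return self.fallback
        return self.margin * max(self.history)

    def record(self, min_radius_found):
        self.history.append(min_radius_found)

# Record minimum radii from four decoded frames (queue keeps the last three).
ar = AdaptiveRadius(queue_size=3)
for r in [2.0, 1.5, 3.0, 2.5]:
    ar.record(r)
r0 = ar.initial_radius()  # margin * max of the retained radii
```

Keeping only a recent window lets the radius track slow changes in channel statistics instead of being dominated by a single bad frame from the distant past.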
In this paper, a cost function for determining optimal geometries of multiple sensors' locations is proposed, and related
theorems on optimal geometries of multiple sensors' locations are obtained. In order to keep the sensors optimally
deployed with respect to a moving target, a self-adjusting method is derived, and AOA-based algorithms for self-adjustment
of optimal sensor locations and estimation of the moving target's location are developed. To verify the efficiency of the
new algorithms, simulation results are also provided.
One category of space-time codes consists of constellations of unitary matrices in parametric form. Optimization
is essential for seeking constellation parameters with the largest diversity product. In this
research, we demonstrate that the diversity product, as widely adopted in the communications community, is not a good
measure of constellation quality. Moreover, we show that good codes need not have full
diversity. We propose better criteria for measuring the quality of constellations, which are also very
amenable to optimization and particularly suitable for the gradient search method. Furthermore, we
propose a new approach for signal constellation design. Instead of ambiguously discriminating low and
high SNR, our techniques target the range of block error rate which is acceptable and not extremely
small. Although the computational complexity of code design can be formidable, we have developed
techniques which significantly improve the efficiency. We obtain space-time codes which significantly
outperform existing ones.