The concepts of backward consistency and minimality are shown to be the essential tools for studying error propagation in fast least-squares adaptive filters. A conceptual framework identifying the principal error-propagation mechanism is developed, allowing the numerical stability of new or existing algorithms to be ascertained without the technical labor previously thought necessary. Order-recursive algorithms are confirmed to have intrinsic advantages over their transversal counterparts.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print format on SPIE.org.
In this paper we analyze the finite-precision properties of the QRD-based adaptive lattice filter used as a linear one-step predictor and implemented in fixed-point arithmetic. We present equations for the steady-state mean-squared values of the accumulated numerical errors in the computations of the algorithm. We also present experimental verification of these equations. The predicted and experimental results show close agreement when the weighting factor is low and a moderately large number of bits is used for the fractional part; however, the analysis breaks down as the weighting factor approaches unity and as the number of bits for the fractional part is reduced.
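The fixed-point error analysis above rests on the standard uniform quantization-noise model. As a hedged illustration of that model only (not the paper's lattice-filter derivation), the numpy sketch below rounds data to a grid with B fractional bits and compares the measured mean-squared rounding error against the classical 2^(-2B)/12 prediction:

```python
import numpy as np

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits (assumed model)."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
x = rng.standard_normal(10000) * 0.1

b = 8
mse = np.mean((x - quantize(x, b)) ** 2)
# the uniform-noise model predicts mse ~ step^2 / 12 with step = 2**-b
predicted = (2.0 ** -b) ** 2 / 12.0
```

With a moderately large number of fractional bits the measured and predicted values agree closely, mirroring the regime in which the paper's analysis holds.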
In this paper a recursive total least squares (RTLS) adaptive filter is introduced and studied. The TLS approach is more appropriate and provides more accurate results than the LS approach when there is error on both sides of the adaptive filter equation, as in linear prediction, AR modeling, and direction finding. The RTLS filter weights are updated in time O(mr), where m is the filter order and r is the dimension of the tracked subspace. In conventional adaptive filtering problems, r equals 1, so that updates can be performed with complexity O(m). The updates are performed by tracking an orthonormal basis for the smaller of the signal or noise subspaces using a computationally efficient subspace tracking algorithm. The filter is shown to outperform both LMS and RLS in terms of tracking and steady-state tap-weight error norms. It is also more versatile in that it can adapt its weights in the absence of persistent excitation, i.e., when the input data correlation matrix is near rank deficient. Through simulation, the convergence and tracking properties of the filter are presented and compared with LMS and RLS.
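The O(mr) recursive update itself is not reproduced here, but the TLS principle it tracks can be sketched in batch form: the TLS solution is read off the right singular vector of the augmented data matrix [A b] associated with the smallest singular value. A minimal numpy sketch under that assumption, with illustrative names and noise levels:

```python
import numpy as np

def tls_solve(A, b):
    """Batch TLS: minimize ||[E f]|| subject to (A+E)w = b+f.
    Solution comes from the right singular vector of [A b] with
    the smallest singular value."""
    Z = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                 # singular vector for the smallest singular value
    return -v[:-1] / v[-1]     # scale so the last component equals -1

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true
# noise on BOTH sides of the equation, the setting where TLS is preferred over LS
An = A + 0.01 * rng.standard_normal(A.shape)
bn = b + 0.01 * rng.standard_normal(b.shape)
w_tls = tls_solve(An, bn)
```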
An algorithm is presented for updating the adaptive beamformer weights using recursive eigenvalue decomposition (EVD) of a covariance matrix and subspace constraint. This algorithm exploits the subspace structure that the covariance matrix of the interference sources and the noise is a low-rank matrix plus a diagonal matrix. This eigenspace characterization approach avoids the numerically unstable recursive procedure based on the matrix inversion lemma. Moreover, the subspace property makes it possible to develop a fast algorithm by monitoring only the principal eigenvalues and eigenvectors and the noise eigenvalue.
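A hedged sketch of the eigenspace idea described above: build the covariance as low-rank-plus-diagonal, take its EVD, and form distortionless (MVDR-style) weights from the eigenpairs instead of a matrix-inversion-lemma recursion. The sizes and steering vector below are illustrative assumptions, not the paper's scenario:

```python
import numpy as np

rng = np.random.default_rng(2)
m, r = 8, 2                      # sensors and interference rank (illustrative)
S = rng.standard_normal((m, r))
R = 10.0 * S @ S.T + np.eye(m)   # low-rank interference plus white noise
a = np.ones(m)                   # steering vector of the look direction (assumed)

# eigenspace form of the inverse: R^{-1} = V diag(1/lambda) V^T
vals, vecs = np.linalg.eigh(R)
Rinv = vecs @ np.diag(1.0 / vals) @ vecs.T

# distortionless weights: w = R^{-1} a / (a^T R^{-1} a), so w^T a = 1
w = Rinv @ a / (a @ Rinv @ a)
```

Monitoring only the r principal eigenpairs plus the common noise eigenvalue, as the abstract suggests, replaces the full EVD above with a much cheaper update.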
Eigenstructure decomposition of correlation matrices is an important pre-processing stage in many modern signal processing applications. In an unknown and possibly changing environment, adaptive algorithms that are efficient and numerically stable as well as readily implementable in hardware for eigendecomposition are highly desirable. Most modern real-time signal processing applications involve processing large amounts of input data and require high throughput rates in order to fulfill the needs of tracking and updating. In this paper, we consider the use of a novel systolic array architecture for the high throughput on-line implementation of the adaptive simultaneous iteration method (SIM) algorithm for the estimation of the p largest eigenvalues and associated eigenvectors of quasi-stationary or slowly varying correlation matrices.
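The non-adaptive core of the simultaneous iteration method is orthogonal iteration: repeatedly multiply an orthonormal block by the correlation matrix and re-orthonormalize, which converges to the p dominant eigenpairs. A minimal batch sketch (a known diagonal test matrix is assumed so the answer can be checked):

```python
import numpy as np

def simultaneous_iteration(R, p, iters=200):
    """Orthogonal (simultaneous) iteration for the p dominant eigenpairs of R."""
    rng = np.random.default_rng(3)
    Q, _ = np.linalg.qr(rng.standard_normal((R.shape[0], p)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(R @ Q)        # power step + re-orthonormalization
    evals = np.diag(Q.T @ R @ Q)           # Rayleigh quotients -> eigenvalues
    return evals, Q

R = np.diag([5.0, 3.0, 1.0, 0.5])          # spectrum known for verification
evals, Q = simultaneous_iteration(R, 2)
```

The systolic array in the paper pipelines exactly these matrix-multiply and orthonormalization steps for high throughput.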
A recurring problem in adaptive filtering is the selection of control measures for parameter modification. A number of methods reported thus far have used localized order statistics to adaptively adjust filter parameters. The most effective techniques are based on edge detection as a decision mechanism to allow the preservation of edge information while noise is filtered. In general, decision-directed adaptive filters operate on a localized area within an image by using statistics of the area as a discrimination parameter. Typically, adaptive filters are based on pixel-to-pixel variations within a localized area that are due to either edges or additive noise. In homogeneous areas within the image, where variances are due to additive noise, the filter should operate to reduce the noise. Using an edge detection technique, a decision-directed adaptive filter can vary the filtering in proportion to the amount of edge information detected. We show an approach using an entropy measure on edges to differentiate between variations in the image due to edge information and those due to noise. The method uses entropy calculated against the spatial contour variations of edges in the window.
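A toy sketch of the decision-directed idea described above, using a plain local-variance test in place of the paper's entropy measure: smooth only where the window statistics look like noise, and leave pixels near detected edges untouched. The window size and threshold are illustrative:

```python
import numpy as np

def adaptive_smooth(img, win=3, var_thresh=0.01):
    """Decision-directed sketch: average inside a window only where the local
    variance looks like noise; keep the pixel where it looks like an edge."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            if w.var() < var_thresh:       # homogeneous area -> reduce noise
                out[i, j] = w.mean()
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                           # an ideal step edge
sm = adaptive_smooth(img)
```

On this noiseless step edge every window straddling the edge fails the variance test, so the edge is preserved exactly; in flat regions the filter averages, which reduces additive noise.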
Adaptive infinite-impulse-response (IIR) filters can be used for a variety of digital communication and digital signal processing problems, such as linear prediction, adaptive notch filtering, adaptive differential pulse-code modulation, channel equalization, and adaptive array processing [1]. In such applications an adaptive IIR filter can provide better performance than the standard adaptive FIR filters because IIR filters can track both zeros and poles. Also, to achieve a specified level of performance, an IIR filter requires fewer coefficients than the corresponding FIR filters [1]. Many types of optoelectronic adaptive FIR filters based on fiber optics or acousto-optics have been developed over the last few years [2,3]. Design and fabrication of optoelectronic adaptive IIR signal processors is a goal of our research.
Blind equalization deals with the restoration of the input sequence given the output of a linear communication channel and statistical information about the input. A solution to this problem can be achieved by using nonlinear transformations or higher-order statistics of the output sequence. The purpose of this paper is to provide a review of existing adaptive blind equalization algorithms and point out their strengths and limitations.
The identification (or inversion) of the impulse response or the transfer function of a linear channel, given only the channel output, has several applications: equalization of communication or data transmission channels; vocal tract identification; seismic data deconvolution for multiple reflection elimination; image deblurring; echographic data focusing, either in the acoustic case or in the electromagnetic one (synthetic aperture radar); etc. The aim of this paper is to clarify the connections between some of the techniques proposed so far to solve this problem.
This paper examines the transient and steady-state characteristics of several Bussgang-type blind equalization algorithms. A combination of computer simulations and analysis is used to assess the relative performance of the various algorithms. The computer simulations involve channel characteristics typical of those found in an urban multipath environment, and they include the effects of frequency offset. The equalizer structure considered in this paper is a T/2 fractionally spaced linear finite-impulse-response filter. The analysis of misadjustment is based on an approximate Gaussian model of the data.
Stop-and-go adaptation rules that are used to improve the blind convergence characteristics of the conventional and sign decision-directed algorithms are proposed and examined. They are based on the so-called Sato- and Godard-type errors, which are commonly used in blind deconvolution applications. The convergence rates achieved by different algorithms with QAM-type constellations are compared. Also, optimal values for the parameters used in the Sato and Godard errors, and their effect on the convergence of the stop-and-go schemes, are investigated by means of analysis and computer simulations.
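The Sato and Godard error terms, and the stop-and-go agreement test built on them, can be sketched as follows for the real-valued (PAM/BPSK) case; the constellation constants gamma and R2 are assumed known:

```python
import numpy as np

def sato_error(y, gamma):
    # Sato error; gamma = E|a|^2 / E|a| for the constellation (assumed known)
    return gamma * np.sign(y) - y

def godard_error(y, R2):
    # Godard p=2 (constant-modulus) error; R2 = E|a|^4 / E|a|^2
    return y * (R2 - y ** 2)

def stop_and_go_flag(e_blind, e_dd):
    # adapt ("go") only when the blind error and the decision-directed
    # error agree in sign; otherwise "stop" the update
    return 1.0 if np.sign(e_blind) == np.sign(e_dd) else 0.0

# BPSK example: gamma = R2 = 1; equalizer output 0.8, nearest decision 1.0
y = 0.8
e_s = sato_error(y, 1.0)       # 1*sign(0.8) - 0.8 = 0.2
e_g = godard_error(y, 1.0)     # 0.8 * (1 - 0.64) = 0.288
flag = stop_and_go_flag(e_s, 1.0 - y)
```

Here both errors agree in sign with the decision-directed error, so the update proceeds; near a wrong decision the signs typically disagree and the update is stopped.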
An adaptive algorithm for estimating the input to a linear system is presented. This explicit self-tuning filter is based on the identification of an innovations model. From that model, ARMA descriptions of the input and the measurement noise are decomposed, using second-order moments. Identifiability results guarantee a unique decomposition. The main computational steps of the algorithm are the solutions of two linear systems of equations. The filter design is based on the polynomial approach to Wiener filtering.
We present a class of iterative methods for solving the problem of blind deconvolution of an unknown possibly non-minimum phase linear system driven by an unobserved input process. The methods converge monotonically at a very fast super-exponential rate to the desired solution in which the inverse of the unknown system is identified, and the input process is recovered up to a delay and possibly a constant phase shift. The proposed methods are universal in the sense that they do not impose any restrictions on the probability distribution of the input process, provided that it is non-Gaussian.
The problem of blind equalization was thought to have been solved by some `globally convergent' blind equalizers. However, the established local convergence of some blind equalizers demonstrates the possibility of ill-convergence in the true equalizer parameter space by these supposedly ideal algorithms. In this paper, we analyze the reason why an algorithm globally convergent in the combined channel and equalizer parameter space can have undesirable local convergence in the true equalizer parameter space. We demonstrate that the existing results on global convergence are implicitly based on an idealistic assumption that the equalizer is infinitely parameterized with infinite output delay. We show that, despite the engineering intuition that such results should carry over to the finitely parameterized case, in reality the theory breaks down and local minima are commonplace. Thus, while the theories proving global convergence are elegant and positive, the reality is that for realizable systems such convergence behavior may not materialize.
This paper introduces a new adaptive blind equalization algorithm, the power cepstrum and tricoherence equalization algorithm (POTEA), based on second- and fourth-order statistics of the received sequence. The algorithm performs simultaneous identification and equalization of a nonminimum phase channel from its output only, without using training sequences. POTEA is based on adaptive computations of the channel's power cepstrum and cepstrum of tricoherence by employing second- and fourth-order statistics, respectively. Extensive simulation results, with QAM signals, are presented to demonstrate the effectiveness of POTEA.
All existing blind equalization algorithms rely on the non-Gaussianity of the input sequence. In recent years, a technique called shaping has been used in data transmission, in order to approach the capacity of Gaussian channels. The transmitted signal then has a Gaussian-like distribution and this will affect blind equalization algorithms. In this paper, we address the problem of blind equalization where the input of the unknown system is a shaped data signal. In particular, we demonstrate the effects of shaping on some of the widely known blind equalization schemes. Our results point to the need for more powerful blind equalization algorithms for shaped signals.
A new approach to blind equalization is investigated in which the receiver performs joint data and channel estimation in an iterative manner. Hence, instead of estimating the channel inverse, the receiver computes the maximum likelihood estimate of the channel itself. The iterative algorithm involves maximum likelihood sequence estimation (Viterbi decoding) for the data estimation part and least-squares estimation for the channel estimation part. A suboptimal algorithm is also proposed that uses a reduced-state trellis instead of the Viterbi algorithm. Simulation results show that the performance obtained by these algorithms is comparable to that of a receiver operating with complete knowledge of the channel.
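The channel-estimation half of such an iteration is ordinary least squares on a data matrix built from the current symbol decisions. A noiseless numpy sketch of that step alone (the Viterbi data-estimation half is omitted; all names and sizes are illustrative):

```python
import numpy as np

def ls_channel_estimate(a_hat, r, L):
    """LS estimate of an L-tap channel h from decided symbols a_hat and
    received samples r, assuming r[n] = sum_k h[k] * a_hat[n-k]."""
    rows = [a_hat[n - L + 1:n + 1][::-1] for n in range(L - 1, len(a_hat))]
    A = np.array(rows)                       # Toeplitz data matrix
    h, *_ = np.linalg.lstsq(A, r[L - 1:len(a_hat)], rcond=None)
    return h

rng = np.random.default_rng(4)
a = rng.choice([-1.0, 1.0], size=100)        # BPSK data decisions (assumed correct)
h_true = np.array([1.0, 0.5, -0.2])
r = np.convolve(a, h_true)[:100]
h_est = ls_channel_estimate(a, r, 3)
```

In the full algorithm this step alternates with sequence estimation over the current channel estimate until both converge.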
Blind deconvolution is a typical solution to unknown LSI system inversion problems. When only the output is available, second-order statistics are not sufficient to retrieve the phase of the LSI system, so some form of higher-order analysis has to be employed. In this work, a general iterative solution based on a Bayesian approach is illustrated, and some cases for both one- and two-dimensional applications are discussed. The method employs non-second-order statistics (rather than higher-order statistics), tuned to specific a priori statistical models. The Bayesian approach yields specific solutions corresponding to known techniques, such as MED deconvolution employed in seismic processing, and more sophisticated procedures for non-independent identically distributed (for instance Markovian) inputs.
We propose a new approach to recursive estimation of the parameters of finite impulse response (FIR) and infinite impulse response (IIR) non-Gaussian signals which are assumed to be generated by driving a finite-dimensional channel (system) by an independent identically distributed (i.i.d.) non-Gaussian sequence. The problem is addressed in a blind setting, i.e., only the channel output is observed, not the input to it. The signal model is allowed to be nonminimum phase, hence, the model is applicable to the problem of blind channel equalization in data communication systems. The proposed recursive parameter estimator is shown to be globally convergent, i.e., the parameter estimator converges to the true model regardless of its initialization. The proposed parameter estimator is based upon a two model decomposition approach: a spectrally equivalent minimum phase (SEMP) system in cascade with an allpass system.
An application of a blind-deconvolution technique to defocused images is described. The power spectrum of the out-of-focus blurring function is estimated by averaging the spectra of subimages of the blurred image. The phase response is then identified from both the power spectrum and the power cepstrum of the blurring function. The identified blurring function is used with both Wiener and homomorphic filters to perform restoration.
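The spectrum-averaging step can be sketched directly: partition the blurred image into subimages and average their periodograms; the result serves as the (scaled) power-spectrum estimate from which the blur is identified. A minimal numpy sketch with illustrative block sizes and white-noise test input:

```python
import numpy as np

def avg_power_spectrum(img, block=16):
    """Average the periodograms of non-overlapping subimages, the estimate
    of the blur's power spectrum (up to the underlying image's spectrum)."""
    h, w = img.shape
    acc = np.zeros((block, block))
    count = 0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            sub = img[i:i + block, j:j + block]
            acc += np.abs(np.fft.fft2(sub)) ** 2
            count += 1
    return acc / count

rng = np.random.default_rng(5)
P = avg_power_spectrum(rng.standard_normal((64, 64)))
```

For unit-variance white noise the averaged periodogram is flat with mean close to the block's sample count, which is a quick sanity check on the estimator.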
This paper is a review that attempts to address the following: (1) why nonlinear processes are of interest to signal processors; (2) what conceptual tools are needed to handle nonlinearities; and (3) how such concepts are translated into algorithms and architectures for practical signal processing.
A Wiener estimator structure is defined for the nonlinear prediction of scalar time series. This leads to a three-stage strategy for constructing the predictor which involves: (1) estimation of the probability density function (pdf) associated with the embedding vector; (2) construction of a set of orthonormal polynomials defined on that pdf; (3) calculation of a set of coefficients to define a linear combiner which forms the prediction. The practical difficulties associated with (1) and the theoretical problems inherent in (2) lead to an approach based on a Volterra-type expansion. A set of orthonormal polynomials is constructed through eigenvalue analysis of the `correlation' matrix associated with the Volterra expansion. The orthonormal polynomials are defined by the eigenvalues and eigenvectors of this matrix -- the rank indicating the degree of the orthonormal approximation. Unlike the Wiener approach, the orthonormal polynomials are defined by the higher-order moments present in the `correlation' matrix rather than the pdf. The radial basis function (RBF) network is then reexamined in light of this interpretation. The pseudoinverse, which is often used to calculate the coefficients of the RBF network, is in fact a method for deriving a set of orthonormal functions. Since each RBF can itself be expanded as a power series, the eigenvalue analysis inherent in the formation of the pseudoinverse defines an infinite set of orthonormal functions. In theory at least, the RBF network exploits the entire pdf as demanded by the Wiener approach. Some simulation results are presented concerning the Volterra analysis, albeit using least squares rather than this revised approach.
The radial basis function network is implemented in an iterative form for the prediction of time series by modeling their generating dynamics. The technique is demonstrated on an experimental time series, for which the iterated network learns an attracting solution. Analysis of the Lyapunov exponents and their local analogs reveals the presence of local instability while giving insight into how overall stability is achieved.
Traditional signal processing techniques normally apply stochastic process theory to account for the inability to predict, control, or reproduce precise results in repeated experiments. This often requires fairly restrictive assumptions (e.g., linear and Gaussian) regarding the nature of the processes generating the signal source and its contamination. Our purpose is to provide a preliminary analysis of an alternative model to account for this random behavior. The alternative model assumes that randomness can result from chaotic dynamics in the processes that generate and contaminate the signal of interest. This provides the option to use nonlinear dynamic prediction models instead of traditional statistical modeling for signal separation. The effectiveness of a given prediction model for a particular application can then be interpreted in terms of the predictability of the data set using that model.
Radar backscatter from an ocean surface, commonly referred to as sea clutter, has a long history of being modeled as a stochastic process. In this paper, we take a fundamentally different viewpoint in describing sea clutter. In particular, we demonstrate that the random nature of sea clutter is indeed the result of a chaotic phenomenon. Using different real-life sea clutter data, we use correlation dimension analysis to show that sea clutter can be embedded as a chaotic attractor in a finite-dimensional space. This observation provides a reliable indication of the existence of chaotic behavior. The result of the correlation dimension analysis is used to construct a neural network model that reconstructs the dynamics of sea clutter. The model is in the form of a radial basis function (RBF) network. The deterministic model for sea clutter so obtained is shown to be capable of predicting the evolution of sea clutter as a function of time.
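The correlation dimension analysis mentioned above is built on the Grassberger-Procaccia correlation sum computed over a delay embedding of the scalar series. A hedged sketch of both ingredients on a deterministic toy signal (not sea clutter data):

```python
import numpy as np

def embed(x, m, tau=1):
    """Delay embedding of a scalar series into m-dimensional vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

def correlation_sum(X, r):
    """Fraction of distinct point pairs closer than r
    (the Grassberger-Procaccia statistic behind the correlation dimension)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    n = len(X)
    return (np.sum(d < r) - n) / (n * (n - 1))   # exclude self-pairs

x = np.sin(0.3 * np.arange(500))                 # toy deterministic signal
X = embed(x, 3)
C = correlation_sum(X, 0.5)
```

The correlation dimension is then estimated from the slope of log C(r) versus log r over a suitable range of r; a finite, non-integer slope is taken as evidence of a low-dimensional attractor.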
In this paper we show that the order-recursive least squares (LS) algorithms can be systematically studied as follows: (1) determine various estimator structures (connections of basic building cells) based on the known properties of the input data vector and required output, and (2) investigate possible realizations of time-updating, and their properties, as implemented in the elementary cells. Within this framework, we show that using a structure from (1) in combination with a time-update realization from (2) almost always forms a valid order-recursive LS (ORLS) algorithm. All known ORLS algorithms can be derived under a unified framework by using this approach. The properties of ORLS algorithms, such as computational complexity and sensitivity to round-off error, can also be investigated in a systematic manner. This method not only simplifies the investigation of existing algorithms, but is also powerful enough for creating new ORLS algorithms. These points will become clearer during the course of the development of this paper.
An interpretation of the tracking behavior of fast RLS adaptive filters is given. It is shown that the overall performance of an RLS adaptive filter is solely dependent on the forgetting function, which is involved in coefficient updating. Fast RLS adaptive filters restrict themselves to simple exponential or rectangular forgetting functions. Their tracking behavior is quite limited and sometimes even disappointing when compared to the much simpler LMS algorithm. These limitations can be circumvented with the advent of Schur RLS adaptive filters which allow the application of arbitrarily shaped forgetting functions in the coefficient updating process. Schur RLS adaptive filters are closely connected to the theory of discrete transmission lines. They are flexible in their possible configuration and share excellent structural and numerical properties, which make them highly attractive candidates for concurrent implementations. Systolic arrays of the Schur RLS adaptive filters are presented and their performance is demonstrated with a typical example.
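For reference, the simple exponential-forgetting RLS recursion that the paper generalizes beyond can be sketched as follows; lam is the forgetting factor and delta the usual inverse-correlation initialization (values illustrative, noiseless test data):

```python
import numpy as np

def rls(x, d, m, lam=0.98, delta=100.0):
    """Standard RLS with an exponential forgetting function:
    tracks w such that d[n] ~ w . [x[n], x[n-1], ..., x[n-m+1]]."""
    w = np.zeros(m)
    P = delta * np.eye(m)                 # inverse correlation estimate
    for n in range(m - 1, len(x)):
        u = x[n - m + 1:n + 1][::-1]      # regressor, newest sample first
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e = d[n] - w @ u                  # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(6)
x = rng.standard_normal(2000)
w_true = np.array([0.7, -0.3, 0.1])
d = np.convolve(x, w_true)[:2000]
w = rls(x, d, 3)
```

The exponential window enters only through lam; the Schur formulation described above replaces this single scalar with an arbitrarily shaped forgetting function.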
Assuming a fast-learning condition for an adaptive resonance theory (ART) type neural network, we have explored the effect of the vigilance parameter and the order function on the performance of the neural network for binary pattern recognition. A modified search order was developed for classification of binary alphabet characters and airplane classes, and its performance was compared with that of the original ART-1 network for binary pattern recognition with and without the presence of noise. Our results suggest that the effect of noise on binary pattern recognition is solely dependent on the induced changes in the critical feature patterns when the other control parameters remain the same.
This paper describes a modular, unsupervised neural network architecture that can be used for data clustering and classification. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The system consists of a fuzzy K-means learning rule embedded within a control structure similar to that found in the adaptive resonance theory (ART-1) network. AFLC adaptively clusters analog inputs into classes without a priori knowledge of the entire data set or of the number of clusters present in the data. The classification of an input takes place in a two-stage process: a simple competitive stage and a Euclidean metric comparison stage. Due to the modular design of AFLC, the Euclidean metric can be replaced with various other metrics for improved performance in a particular problem. The AFLC algorithm and operating characteristics are described, and the algorithm is compared to fuzzy K-means for both computer-generated and real data.
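The fuzzy K-means learning rule that AFLC embeds can be sketched in batch form; the membership and center updates below follow the standard fuzzy C-means equations with fuzzifier m (the deterministic initialization is purely for the sketch, not part of AFLC):

```python
import numpy as np

def fuzzy_kmeans(X, c, m=2.0, iters=50):
    """Batch fuzzy K-means (standard fuzzy C-means updates)."""
    # deterministic initialization for reproducibility of this sketch
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)]
    for _ in range(iters):
        # membership update: u[i,k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        # center update: weighted mean of the data with weights u^m
        um = u.T ** m
        centers = (um @ X) / np.sum(um, axis=1, keepdims=True)
    return centers, u

# two well-separated point clouds
X = np.vstack([np.zeros((30, 2)), np.full((30, 2), 5.0)])
centers, u = fuzzy_kmeans(X, 2)
```

AFLC wraps updates like these in an ART-style vigilance test so that new clusters can be created on-line instead of fixing c in advance.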
The recovery of an original image from its corrupted version is of importance in a number of applications. The detection of small and dim targets is one such problem, requiring the enhancement of target signals and suppression of noise and clutter in the image. Conventional methods like matched filtering require a priori knowledge of the target intensity spread function, the clutter correlation characteristics, etc. These techniques are difficult to implement if the image is nonstationary. This paper describes an adaptive clutter whitening technique which increases signal detectability in colored noise and clutter. Signal enhancement is based on the intrinsic differences in the spatial extent of the target relative to the clutter. An adaptive spatial filter is used to whiten the clutter present in the image. The output of such an adaptive spatial filter, termed the adaptive clutter whitener (ACW), is then passed on to a matched filter based detector. The receiver operating characteristics are found using Monte Carlo simulation techniques, both for the ACW augmented matched filter detector and a conventional matched filter detector. It is seen that for highly correlated clutter, the ACW augmented detector has a better ROC than one without a prewhitening filter.
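The prewhitening idea can be illustrated with a one-tap linear predictor. This is a simplified stand-in for the ACW (a single AR(1) coefficient rather than a full adaptive spatial filter), applied to a synthetic 1-D clutter sequence:

```python
import random

def lag1_corr(x):
    """Normalized lag-1 autocorrelation estimate."""
    r0 = sum(v * v for v in x)
    r1 = sum(x[n] * x[n - 1] for n in range(1, len(x)))
    return r1 / r0

def ar1_whiten(x):
    """One-tap adaptive prewhitener: estimate the lag-1 coefficient
    from the data and remove it by linear prediction,
    e[n] = x[n] - a*x[n-1]."""
    a = lag1_corr(x)
    return [x[n] - a * x[n - 1] for n in range(1, len(x))]

# Synthetic highly correlated clutter: AR(1) with coefficient 0.9.
rng = random.Random(0)
clutter = [0.0]
for _ in range(5000):
    clutter.append(0.9 * clutter[-1] + rng.gauss(0.0, 1.0))

residual = ar1_whiten(clutter)
```

The residual is nearly decorrelated, which is what lets a subsequent matched-filter detector operate in approximately white noise.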
Fast recursive least squares (FRLS) algorithms have been extensively studied since the mid-1970s for adaptive signal processing applications. Despite their large number and apparent diversity, they were almost exclusively derived using only two techniques: the partitioned matrix inversion lemma or least squares geometric theory. Surprisingly, Chandrasekhar factorizations, which were introduced in the early 1970s to derive fast Kalman filters, were little used, even though fast RLS algorithms can also be derived with this technique, under various forms, either unnormalized or over-normalized. For instance, the well-known FTF algorithm corresponds exactly to a particular case of the Chandrasekhar equations. The aim of this paper is to assess the value of the Chandrasekhar technique for FRLS estimation. The corresponding equations have a somewhat generic character which can help to show the links between FRLS algorithms and other least squares estimation problems, since they were successfully used to derive fast algorithms for estimating random variables through regularization techniques, or for computing cross-validation criteria in statistics. These Chandrasekhar factorizations can also help in teaching fast adaptive algorithms: they are easy to understand, they can be used in a large variety of algorithmic problems, and, in a least squares algorithmic context, there is no need to learn the FRLS algorithms separately.
Compactly supported wavelet bases are sets of compactly supported functions that are orthonormal bases for a wide variety of function spaces, including signals that have finite energy or finite power. This article places the theory of compactly supported wavelets in its historical context. Wavelet matrices and wavelet functions of one and two variables are defined. The continuous, discrete, and finite wavelet transforms are contrasted with the corresponding Fourier transforms. A multirate digital filter interpretation is provided and adaptive trees of wavelet filters are described.
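The multirate filter interpretation can be illustrated with the simplest compactly supported case, the Haar wavelet, where one analysis level reduces to scaled pairwise sums (lowpass) and differences (highpass):

```python
import math

def haar_step(x):
    """One level of the Haar wavelet transform: the signal is split
    into scaled pairwise averages (approximation) and differences
    (detail), each at half the input rate."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    detail = [s * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from one analysis level."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([s * (a + d), s * (a - d)])
    return x
```

Orthonormality shows up numerically as perfect reconstruction and exact energy preservation between the signal and its coefficients; iterating `haar_step` on the approximation branch (or on the detail branch as well) produces the adaptive filter trees mentioned above.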
Several recent works [1,2,3] have discussed the role of the Gabor transform as a tool for signal detection and feature extraction in the presence of noise. Gabor, or more generally Weyl-Heisenberg, expansions of signals provide a greater degree of sensitivity to local phenomena such as local frequency changes as compared to classical Fourier methods. To be an effective tool, such expansions should highlight relatively few components and not spread signal energy throughout an unreasonably large number of components. An important first step in applying the Gabor transform to detect a signal is to maximally exploit a priori signal knowledge to design an appropriate weighting function as a window on input data. The effect of such a window in practice is to reduce the dimension of the signal search space. If complete signal information is known at the outset, then the optimal signal processing strategy is the matched filter, which reduces the search space to one dimension. In this work, we fix a window and provide tools for measuring how well various subspaces of signals can be analyzed relative to the window. In a Fourier signal processing strategy, signal decay rate (along with that of its Fourier transform) and signal smoothness usually serve as the essential a priori information necessary to set sampling rates and establish error estimates. This information will not be sufficient for an effective application of the Gabor transform. The Zak transform plays a key role in several works [4,5,6,7] dealing with the Gabor transform. The function formed from the ratio of the Zak transform of a signal to the Zak transform of the window will provide essential information about the window's effectiveness in compactly representing signal information. Various subspaces of signals will be defined by functional properties of the quotient. In general, the quotient is doubly periodic but is not necessarily continuous. However, under rather general conditions, Fourier coefficients can be defined which in some cases determine the coefficients of a Weyl-Heisenberg expansion. It is natural to ask whether these coefficients can be used to provide a (discrete) time-frequency representation of the signal relative to the fixed window, independent of their role in a Weyl-Heisenberg expansion.
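A discrete form of the Zak transform is straightforward to compute. The sketch below assumes a length-NK signal and evaluates Z[a][b] = sum_k f[a + kN] exp(-2*pi*i*k*b/K); the quotient of two such transforms (signal over window) is then an elementwise division wherever the window's Zak transform is nonzero:

```python
import cmath

def zak(f, N):
    """Discrete Zak transform of a length N*K signal f:
    Z[a][b] = sum_k f[a + k*N] * exp(-2*pi*i*k*b/K),
    for time index a in 0..N-1 and frequency index b in 0..K-1."""
    K = len(f) // N
    return [[sum(f[a + k * N] * cmath.exp(-2j * cmath.pi * k * b / K)
                 for k in range(K))
             for b in range(K)]
            for a in range(N)]

# Sanity check input: a unit impulse at the origin.
Z = zak([1.0, 0.0, 0.0, 0.0, 0.0, 0.0], 2)
```

For an impulse at the origin the transform is identically 1 on the row a = 0 and zero elsewhere, reflecting the double periodicity (up to phase factors) discussed above.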
We propose a new distance metric for a radial basis function (RBF) neural network. We consider a two-dimensional space of time and frequency. In the usual context of RBF, a two-dimensional space would imply a two-dimensional feature vector. In our paradigm, however, the input feature vector may be of any length, and is typically a time series (say, 512 samples). We also propose a rule for positioning the centers in time-frequency (TF) space, which is based on the well-known expectation maximization (EM) algorithm. Our algorithm, for which we have coined the term `log-on expectation maximization' (LEM), adapts a number of centers in TF space in such a way as to fit the input distribution. We propose two variants, LEM1, which works in one dimension at a time, and LEM2, which works in both dimensions simultaneously. We allow these `circles' (somewhat circular TF contours) to move around in the two-dimensional space, but we also allow them to dilate into ellipses of arbitrary aspect ratio. We then have a generalization which embodies both the Weyl-Heisenberg (e.g., sliding window FFT) and affine (e.g., wavelet) spaces as special cases. Later we allow the `ellipses' to adaptively `tilt.' (In other words we allow the time series associated with each center to chirp, hence the name `chirplet transform.') It is possible to view the process in a different space, for which we have coined the term `bow-tie' space. In that space, the adaptivity appears as a number of bow-tie shaped centers which also move about to fit the input distribution in this new space. We use our chirplet transform for time-frequency analysis of Doppler radar signals. The chirplet essentially embodies a constant acceleration physical model. This model almost perfectly matches the physics of constant force, constant mass objects (such as cars with fixed throttle starting off at a stoplight).
Our transform resolves general targets (those undergoing nonconstant acceleration) better than the classical Fourier Doppler periodogram. Since it embodies the constant velocity (Doppler periodogram) as a special case, its extra degrees of freedom better capture the physics of moving objects than does classical Fourier processing. By making the transform adaptive, we may better represent the signal with fewer transform coefficients.
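A Gaussian chirplet atom, and the sense in which it outperforms a zero-chirp Gabor atom on a constant-acceleration signal, can be sketched directly. The signal parameters below are hypothetical; the point is only that the matched chirp rate yields a much larger inner product:

```python
import cmath
import math

def chirplet(n_samples, t0, f0, chirp_rate, sigma):
    """Gaussian chirplet atom: a Gabor atom whose instantaneous
    frequency sweeps linearly ('tilting' the ellipse in the TF plane).
    chirp_rate = 0 recovers an ordinary Gabor atom."""
    atom = []
    for n in range(n_samples):
        t = n - t0
        envelope = math.exp(-0.5 * (t / sigma) ** 2)
        phase = 2.0 * math.pi * (f0 * t + 0.5 * chirp_rate * t * t)
        atom.append(envelope * cmath.exp(1j * phase))
    return atom

def correlate(x, atom):
    """Magnitude of the inner product <x, atom>: a matched-filter score."""
    return abs(sum(a, start=0j) if False else
               sum(a * b.conjugate() for a, b in zip(x, atom)))

# Constant-acceleration test signal: instantaneous frequency 0.1 + rate*n.
N, rate = 256, 0.001
signal = [cmath.exp(2j * cmath.pi * (0.1 * n + 0.5 * rate * n * n))
          for n in range(N)]
f_center = 0.1 + rate * 128          # instantaneous frequency mid-signal
matched = chirplet(N, 128, f_center, rate, 40.0)
unchirped = chirplet(N, 128, f_center, 0.0, 40.0)
```

The matched chirplet captures nearly all the atom's envelope mass, while the zero-chirp atom (the Doppler-periodogram special case) decorrelates as the signal's frequency sweeps across its bandwidth.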
A parallel algorithm for computing the continuous wavelet transform is presented. The parallel computation uses the LINDA paradigm.
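Since the CWT rows at different scales are mutually independent, the parallelism is straightforward. The sketch below distributes scales over a thread pool rather than a LINDA tuple space, and correlates against a Ricker wavelet directly, purely for illustration:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def ricker(t):
    """Real-valued Ricker ('Mexican hat') wavelet."""
    return (1.0 - t * t) * math.exp(-0.5 * t * t)

def cwt_row(args):
    """One row of the CWT: correlate the signal with the wavelet
    dilated to a single scale. Each row is an independent task."""
    signal, scale = args
    n = len(signal)
    row = []
    for b in range(n):
        acc = 0.0
        for k in range(n):
            acc += signal[k] * ricker((k - b) / scale)
        row.append(acc / math.sqrt(scale))
    return row

def parallel_cwt(signal, scales, workers=4):
    """Farm the scales out to a worker pool; map preserves scale order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(cwt_row, [(signal, s) for s in scales]))

sig = [math.sin(0.3 * n) for n in range(64)]
rows = parallel_cwt(sig, [1.0, 2.0, 4.0])
```

Because each worker performs the same floating-point operations in the same order as a serial loop would, the parallel result is bit-identical to the serial one.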
Since the wavelet transform (WT), a time-scale representation, is linear by definition, it does not have any cross terms. However, since signal processors often use plots of the quadratic squared magnitude, i.e., the energy distribution of the WT, to represent a signal, there exist nonlinear cross terms which can cause problems when analyzing multicomponent signals. In this paper, we show that these WT cross terms do exist and discuss the nature and the geometry of these cross terms by deriving mathematical expressions for the energy distribution of the WT of a multicomponent signal. From the mathematical expressions for the WT cross terms, we can infer that the nature of these cross terms is comparable with those found in the Wigner distribution (WD), a quadratic time-frequency representation, and the short-time Fourier transform (STFT), of closely spaced signals. The cross terms of the WT and the STFT energy distributions occur at the intersection of their respective WT and STFT spaces, whereas for the WD, cross terms occur at the midpoint in time and frequency. The parameters of the cross terms are a function of the difference in frequency and time of the signals involved. The amplitude of these cross terms can be twice as large as the product of the magnitudes of the transforms of the two signals in question in all three cases. We also discuss the significance of the existence of WT cross terms when analyzing a multicomponent signal, with representative examples.
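The pointwise algebra behind the cross term is the same in all three cases. At any fixed point of the transform plane, with w1 and w2 the (complex) transform values of the two components:

```python
import cmath

def energy(w):
    """Squared magnitude of a transform value."""
    return abs(w) ** 2

# Hypothetical transform values of two components at one point
# of the time-scale (or time-frequency) plane.
w1 = 0.8 * cmath.exp(1j * 0.3)
w2 = 0.6 * cmath.exp(1j * 2.1)

auto = energy(w1) + energy(w2)                # the two auto terms
cross = 2.0 * (w1 * w2.conjugate()).real      # the oscillatory cross term
total = energy(w1 + w2)                       # energy of the sum
```

The identity `total == auto + cross` holds exactly by linearity of the transform, and `|cross|` is bounded by twice the product of the component magnitudes, which is the amplitude bound quoted above.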
Measures of complexity -- time-bandwidth (TW) product being the leading example -- used by engineers and scientists to characterize signals are often assumed to describe inherent attributes of the signal. Signals of low TW product are viewed as `simple,' and those of high TW, such as a spread spectrum signal, appease our intuition by appearing complicated. Following precedent from classical physics, this sort of complexity can also be quantified in terms of degrees of freedom or occupied dimensions. One thesis of this paper is that the conventional association of complexity with large values of a dimension measure results from the signal representations -- impulses and sine waves -- we instinctively use in time- and frequency-domain thinking. These basis functions are easy to visualize, thus putting the burden of complexity on the expansion coefficients. Conclusions reached under this paradigm do not necessarily hold up under the enlarged scope of mixed time-frequency representations, and thus we make the case that there is no inherent dimensionality to an individual signal, and that dimension properties are entirely representation dependent. Corresponding conclusions concerning sets of signals compose the second thesis of the paper, and these turn out to be somewhat different. Simple examples illustrate that under certain constraints on the basis functions, a nontrivial minimum dimension can be assigned to a signal set based on its maximally compact representation over eligible bases. The central ideas are made precise by postulating some desirable attributes for a signal dimension measure relative to a basis set, and showing that these imply a definition intimately related to both the quantum-mechanical, probabilistic interpretation of expansion coefficients and the entropy function. A logical extension from signal dimension to signal set dimension is then presented. No fully inherent dimension definition is reached.
Instead we find that basis sets must exhibit some structure enabling the dimension problem to be posed and solved, and that a well-defined inherent dimension of a signal set exists only with respect to subclasses of bases. We discuss certain highly structured basis-function families that provide settings, within the signal processing framework, in which the signal-set dimension measure may prove useful.
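One dimension measure with the postulated attributes can be sketched concretely: normalize the squared expansion coefficients into a probability distribution and exponentiate its entropy. The function below is an illustrative instance, not the paper's exact definition:

```python
import math

def effective_dimension(coeffs):
    """Entropy-based dimension of a signal *relative to a basis*:
    normalize |c_k|^2 to a probability distribution p and return
    exp(H(p)). Coefficients concentrated on one basis function give
    dimension 1; coefficients spread evenly over N functions give N."""
    e = [abs(c) ** 2 for c in coeffs]
    total = sum(e)
    p = [v / total for v in e]
    h = -sum(v * math.log(v) for v in p if v > 0.0)
    return math.exp(h)
```

The representation dependence is immediate: an impulse has dimension 1 in the impulse basis, yet its Fourier coefficients all have equal magnitude, giving dimension N in the sine-wave basis.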
The usefulness of algorithmic `neural' networks applied to signal and statistical pattern processing is considered in the context of more traditional techniques. The ability of networks to perform well in a variety of tasks is linked to the networks' flexibility in being able to incorporate the functionality of more traditional, established techniques. Discussions and examples of these relationships are presented, including density estimation, functional interpolation, clustering, and discriminant analysis. It is argued that the current embodiments of neural network structures are useful pattern analysis tools precisely because they provide an interpretational `glue' or a framework which links together a variety of methods and viewpoints, thus constituting a generic form of pattern analysis methods. This also indicates their limits.
We describe the dynamics of learning in unsupervised linear feature-discovery networks that have recurrent lateral connections. Bifurcation theory provides a description of the location of multiple equilibria and limit cycles in the weight-space dynamics.
Neural networks can be applied to the problem of extracting multipath delay information from the correlation of two signals. The ability of a feedforward neural network to learn the correct delay vector response is demonstrated using both broadband and narrowband correlation data. A trained neural network can be combined with a traditional adaptive algorithm for systems applications such as adaptive interference cancellation. In many cases, the network improves overall system performance by providing a nearly optimal starting point. The use of such an approach in an existing optical processing architecture is discussed.
This work deals with the use of artificial neural networks (ANN) for the digital processing of finite discrete time signals. The effort concentrates on the efficient replacement of fast Fourier transform (FFT) algorithms with ANN algorithms in certain engineering and scientific applications. The FFT algorithms are efficient methods of computing the discrete Fourier transform (DFT). The ubiquitous DFT is utilized in almost every digital signal processing application where harmonic analysis information is needed. Applications abound in areas such as audio acoustics, geophysics, biomedicine, telecommunications, astrophysics, etc. Identifying more efficient methods for obtaining the desired spectral information would reduce the computational effort required to implement these applications.
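For reference, the quantity the FFT (and any proposed ANN replacement) must deliver is the DFT itself, shown here in its direct O(N^2) form:

```python
import cmath

def dft(x):
    """Direct evaluation of the discrete Fourier transform:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N). The FFT computes the
    same quantity in O(N log N) operations."""
    n_samples = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_samples)
                for n in range(n_samples))
            for k in range(n_samples)]

# A single complex tone at bin 3 of an 8-point transform.
tone = [cmath.exp(2j * cmath.pi * 3 * n / 8) for n in range(8)]
spectrum = dft(tone)
```

A pure tone at bin k produces a single spectral spike of height N at that bin, which is the harmonic-analysis information referred to above.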
A method for reducing the parameters of time variant integrated autoregressive (IAR) processes by means of a multilayer self-organizing neural network is presented. For the test of signal detectors, observed time series are described by a time variant integrated AR-model. The characterizing parameter vector, called feature vector, is continuously calculated with every new observed time sample. The corresponding sequence of feature vectors results in the feature trajectory, which initializes an adaptive filter for generating new time series. We reduce feature trajectories by means of a multilayer self-organizing feature map. The presented network enables the mapping of each feature trajectory into a scalar sequence after optimizing the internal states of each map with an appropriate learning algorithm. Inverse mapping of each scalar sequence into a quantized trajectory shows good reproduction properties. The results show good tracking behavior and an acceptable data reduction which is asymptotically limited by the feature vector dimension. Experimental results are presented for time series corresponding to fire signals.
The presence of clutter complicates the location of targets in time series and images. Various types of adaptive clutter model have been proposed to deal with this problem. In this paper we treat clutter as a type of texture, and we propose a novel type of hierarchical Gibbs distribution texture model. To optimize this type of model, we define a relative entropy cost function that we decompose into a sum over a number of terms, each of which can be interpreted as the mutual information between clusters of samples of the data. Furthermore, we show how the various terms of this cost function can be used to construct an image-like representation of the relative entropy. Finally, using a Brodatz texture image, we present an example of this type of decomposition and demonstrate that a statistical anomaly in the Brodatz texture image can be easily located.
Several researchers have investigated the usefulness of radial basis function (RBF) neural networks for prediction of difficult (chaotic) time series. This paper demonstrates adaptive unsupervised/supervised learning procedures which improve on the RBF network performance in terms of generalization and final mean-square error. After a description of the RBF network architecture, the learning of the network is discussed, with comments on the advantages and disadvantages of various approaches. Unsupervised clustering of RBF centres has been used by researchers to overcome some of the disadvantages of supervised learning, and to improve the performance of the RBF network while keeping the number of basis functions relatively low. A maximum likelihood solution for unsupervised clustering called the expectation maximization (EM) algorithm has recently been used to estimate the parameters of the RBF hidden layer units. The input density is modeled as a mixture of component Gaussian distributions. The estimated parameters of the mixture density are then transplanted into the RBF network, after which supervised learning of the network takes place. An extension to input-output space clustering is shown to be superior. Although papers have been written on input space clustering for RBF networks, this paper examines why the extended metric clustering should be a superior methodology. Two examples, a chaotic time series predictor and a cross-polarization canceller, illustrate the performance of the algorithms. The EM algorithm with the extended metric clustering method achieves superior performance on signal processing problems by creating a better hidden layer representation in areas of the input space where samples are more likely to be jointly located.
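The EM step described above can be sketched for a 1-D input density. This is a minimal illustration with a deterministic initialization, not the cited authors' procedure; the cluster locations are synthetic:

```python
import math
import random

def em_gaussian_mixture(data, k=2, iters=50):
    """Minimal 1-D EM fit of a k-component Gaussian mixture.
    The fitted means and variances can then be transplanted into an
    RBF hidden layer as centres and spreads before supervised
    training of the output weights."""
    lo, hi = min(data), max(data)
    means = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]
    vars_ = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample.
        resp = []
        for x in data:
            dens = [w * math.exp(-(x - m) ** 2 / (2 * v))
                    / math.sqrt(2 * math.pi * v)
                    for w, m, v in zip(weights, means, vars_)]
            total = sum(dens)
            resp.append([d / total for d in dens])
        # M-step: re-estimate mixing weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            vars_[j] = max(sum(r[j] * (x - means[j]) ** 2
                               for r, x in zip(resp, data)) / nj, 1e-6)
    return means, vars_, weights

# Two well-separated input clusters; EM should recover their centres.
rng = random.Random(1)
data = ([rng.gauss(0.0, 0.5) for _ in range(200)]
        + [rng.gauss(5.0, 0.5) for _ in range(200)])
means, variances, weights = em_gaussian_mixture(data)
```

Placing RBF centres at the fitted means concentrates hidden units where input samples are most likely, which is the motivation for the clustering stage.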
We present here some preliminary results on a new approach for the analysis of the propagation of round-off errors in recursive algorithms. This approach is based on the concept of backward consistency. In general, this concept leads to a decomposition of the state space of the algorithm, and, in fact, to a manifold. This manifold is the set of state values that are backward consistent. Perturbations within the manifold can be interpreted as perturbations on the input data. Hence, the error propagation on the manifold corresponds exactly (without averaging or even linearization) to the propagation of the effect of a perturbation of the input data at some point in time on the state of the algorithm at future times. In this paper, we apply these ideas to the Kalman filter and its various derivatives. In particular, we consider the conventional Kalman filter, some minor variations of it, and its square-root forms. Next we consider the Chandrasekhar equations, which apply to time-invariant filtering problems. Recursive least-squares (RLS) parameter estimation is a special case of Kalman filtering and, hence, the previous results also apply to the RLS algorithms. We shall furthermore consider in detail two groups of fast RLS algorithms: the fast transversal filter (FTF) algorithms and the fast lattice/fast QR (FLA/FQR) RLS algorithms.
We have observed that if one restricts the von Neumann lattice to N points on the time axis and M points on the frequency axis there are, by definition, only MN independent Gabor coefficients. If the data is sampled such that there are exactly MN samples, then the forward and inverse Gabor transforms should be representable as linear transformations in C^(MN), the MN-dimensional vector space over the complex numbers, and the relationships that hold become matrix equations. These matrix equations are formulated, and some conclusions are drawn about the relative merits of using some methods as opposed to others, i.e., speed versus accuracy, as well as whether or not the coefficients that are obtained via some methods are true Gabor coefficients.
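The matrix formulation is easy to reproduce in a small case. The sketch below assumes a length-N box window at critical sampling, for which the MN atoms are orthogonal, so the analysis reduces to scaled inner products and the resulting coefficients are true Gabor coefficients; for other windows (e.g., Gaussian) the MN x MN matrix must be inverted explicitly:

```python
import cmath

def gabor_atoms(M, N):
    """Rows of the (MN x MN) synthesis matrix for a critically sampled
    finite Gabor system built on a length-N box window: atom (m, k) is
    the window shifted by m*N samples and modulated to frequency k/N."""
    L = M * N
    atoms = []
    for m in range(M):
        for k in range(N):
            atom = [0j] * L
            for n in range(N):
                atom[m * N + n] = cmath.exp(2j * cmath.pi * k * n / N)
            atoms.append(atom)
    return atoms

def gabor_coefficients(x, M, N):
    """Inner products with the atoms, scaled by 1/N (the atoms'
    squared norm); valid because these atoms are orthogonal."""
    return [sum(xv * av.conjugate() for xv, av in zip(x, atom)) / N
            for atom in gabor_atoms(M, N)]

def gabor_reconstruct(coeffs, M, N):
    """Inverse transform: the signal is the coefficient-weighted
    sum of the atoms, i.e., a matrix-vector product in C^(MN)."""
    x = [0j] * (M * N)
    for c, atom in zip(coeffs, gabor_atoms(M, N)):
        for i in range(len(x)):
            x[i] += c * atom[i]
    return x
```

Perfect reconstruction in this small case confirms that the forward and inverse transforms are mutually inverse linear maps on C^(MN).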