Research is being conducted on high-power microwave devices (e.g., gyrotrons) at the University of Michigan. Of utmost concern is the phenomenon of pulse shortening, in which the microwave pulse is shorter in duration than the cathode voltage pulse. For years researchers have applied the Fourier transform to the heterodyned microwave signals. The problem with this technique is that a signal with multiple simultaneous frequency components has the same spectrum as a signal whose frequency components are emitted at different times. Time-frequency analysis (TFA) using Reduced Interference Distributions provided an entirely new outlook in the community when it was recently applied to heterodyned microwave signals. The results show, with unprecedented clarity, mode hopping, mode competition, and frequency modulation due to electron beam voltage fluctuations. The various processes that lead to pulse shortening may finally be identified. The time-resolved maximum intensity of the TFA has produced results very similar to the microwave power signal, verifying the utility of TFA in analyzing the temporal evolution of power in each mode.
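As a toy illustration of the ambiguity described above (not the gyrotron data or the RID analysis itself), the NumPy sketch below builds two signals whose global spectra show the same two tones, plus a plain sliding-window spectrogram that tells them apart; the sampling rate, tone frequencies, and window sizes are arbitrary choices:

```python
import numpy as np

fs = 1000
t = np.arange(1000) / fs

# Two tones emitted at different times...
seq = np.where(t < 0.5, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 200 * t))
# ...versus both tones present for the whole record.
sim = 0.5 * (np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 200 * t))

def peak_freqs(x):
    """Frequencies of the two largest FFT magnitude peaks."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return sorted(freqs[np.argsort(mag)[-2:]])

def stft_mag(x, win=128, hop=64):
    """Magnitude spectrogram from a sliding Hann-windowed FFT."""
    w = np.hanning(win)
    frames = [np.abs(np.fft.rfft(w * x[i:i + win]))
              for i in range(0, len(x) - win + 1, hop)]
    return np.array(frames)            # shape: (time frames, freq bins)

# Both global spectra peak at 100 and 200 Hz, but in the spectrogram the
# sequential signal's 200 Hz energy appears only in the later frames.
S = stft_mag(seq)
k200 = int(200 * 128 / fs)             # FFT bin nearest 200 Hz
early = S[: len(S) // 3, k200].mean()
late = S[-(len(S) // 3):, k200].mean()
```

The time-resolved view recovers the temporal ordering that the global Fourier magnitude discards.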
In this paper, we consider cross time-frequency distributions, with particular emphasis on their ability to preserve phase-difference information (between two signals) as a function of time and frequency. In addition, some properties of cross time-frequency distributions are examined and compared with those of the traditional Cohen's class of time-frequency distributions. Finally, the phase-preserving properties of appropriately defined cross time-frequency distributions are illustrated with a pair of Gabor logons.
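A minimal sketch of the phase-preservation property, using a cross short-time Fourier transform as a stand-in for a general cross time-frequency distribution; the logon parameters (center frequency, width, phase offset) are arbitrary choices for illustration:

```python
import numpy as np

fs = 1000
t = (np.arange(1000) - 500) / fs

def logon(f0, phase):
    """Gabor logon: a Gaussian-windowed complex tone."""
    return np.exp(-t**2 / (2 * 0.05**2)) * np.exp(1j * (2 * np.pi * f0 * t + phase))

x = logon(100, 0.0)
y = logon(100, np.pi / 4)              # same logon, phase-shifted by 45 degrees

# Cross short-time Fourier transform: Sxy(t, f) = X(t, f) * conj(Y(t, f)).
win, hop = 256, 64
w = np.hanning(win)
Sxy = np.array([np.fft.fft(w * x[i:i + win]) * np.conj(np.fft.fft(w * y[i:i + win]))
                for i in range(0, len(t) - win + 1, hop)])

# Wherever the logons have energy, the angle of Sxy is the phase
# difference between x and y -- here a constant -pi/4.
k = int(100 * win / fs)                # FFT bin near 100 Hz
mid = len(Sxy) // 2                    # frame near the logon's time center
phase_diff = np.angle(Sxy[mid, k])
```

Because the two logons differ only by a constant phase factor, the cross distribution's angle recovers that phase difference at every time-frequency point with energy.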
Establishing measures for local stationarity is an open problem in the field of time-frequency analysis. One promising theoretical measure, known as the spread, provides a means for quantifying potential correlation between signal elements. In this paper we investigate generalizations of techniques developed by the authors to better estimate the spread of a signal. Existing techniques estimate the spread as the rectangular region of support of the associated expected ambiguity function, oriented parallel to the axes. By applying Radon transform techniques we can produce a parameterized model that describes the orientation of the region of support, providing tighter estimates of the signal spread. Examples are provided that illustrate the improvement offered by the new method.
This paper addresses time-frequency (TF) analysis from a statistical signal processing perspective, with the goal of developing a general statistical methodology for TF analysis. We review earlier work on statistical models for nonstationary stochastic signals, including frequency-modulated locally stationary processes, whose covariance functions yield nonnegative Wigner distributions. For such processes, time-frequency spectra may be defined without invoking 'local' or 'quasi-' stationarity. These results are extended to include general time-varying linear systems and their associated time-frequency spectra. Any time-varying linear system driven by white noise has associated with it a nonnegative time-frequency spectrum. The bilinear class of time-frequency distributions are estimators of this time-frequency spectrum, as are adaptive methods such as positive time-frequency distributions and adaptive multitaper spectrograms. An analysis of the statistical properties of these estimators, including moments and distributional properties, is reviewed.
We have recently introduced the class of generalized scale transforms and its subclass of warped Fourier transforms. Members of each class are defined by continuous time-warping functions. While the two transforms admit a mathematically elegant analysis of warp-shift invariant systems, it is still unclear how to design warping functions that deliver optimal representations for a given class of signals or systems. In many cases we can obtain an optimal choice for the warping function via a closed-form analysis of the system that generates the signal of interest. In cases in which a closed-form analysis is not possible, we have to rely on a warp function estimation method. The approach we take in this paper is founded in information theory. We consider the observed signal as a random process. A power estimate of the warped Fourier transform, parameterized by an underlying warping function, is obtained from a finite number of realizations. We treat the power estimate as a probability density in warp-frequency and minimize its differential entropy over the space of admissible warping functions. We use an iterative numerical method for the minimization process. A proper formulation of a discrete-time warped Fourier transform is employed as a foundation for the numerical analysis. Applications of the proposed algorithm can be found in detection, system identification, and data compression.
We present a general procedure for obtaining equations of motion for the Wigner distribution of functions governed by ordinary and partial differential equations. For the case of fields, we show that in general one must consider the Wigner distribution of four variables: position, momentum, time, and frequency. We also show that in general one cannot write an equation of motion for position and momentum alone; however, it can be done in some cases, the Schrodinger equation being one such case. Our method leads to an equation of motion for the Schrodinger equation with time-dependent potentials, in contrast to the result obtained by Wigner and Moyal, which was for time-independent potentials.
We present a matrix decomposition that can be used to derive features from processes described by discrete-time, time-frequency representations. These include, among others, electrocardiograms, brain wave signals, seismic signals, vibration and shock signals, speech signals for voice recognition, and acoustic transient signals. The new decomposition is based on a transformation of the basis vectors of the singular value decomposition (SVD), which we call the transformed singular value decomposition, or TSVD. The transformed basis vectors are obtained by forming linear combinations of the original SVD basis vectors such that the means of the transformed vectors are extrema of each other. The TSVD basis vectors are used to identify concentrations of energy density in the discrete-time, time-frequency representation by time and frequency descriptors. That is, descriptors such as the location in time, the spread in time, the location in frequency, and the spread in frequency for each principal concentration of energy density can be obtained from the TSVD terms in the matrix decomposition series. Several examples are presented that illustrate the application of the new matrix decomposition for deriving principal time and frequency features from the discrete-time, time-frequency representations of nonstationary processes. Two of the examples illustrate how the derived time and frequency features can be used to classify individual short-duration transient signals into respective classes, that is: (1) automatically classify sonar signals as belonging to one of ten classes, and (2) automatically classify heartbeat signals as belonging to one of two people.
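The descriptor idea can be sketched with the plain SVD (the transformation step that defines the TSVD is omitted): build a toy time-frequency matrix with two energy concentrations and read time/frequency locations off the first two rank-1 terms. The blob positions and sizes are arbitrary choices:

```python
import numpy as np

t = np.arange(64)                      # discrete time axis
f = np.arange(64)                      # discrete frequency axis

def blob(tc, fc, amp):
    """Gaussian energy concentration centered at (tc, fc)."""
    return amp * np.exp(-((f[:, None] - fc) ** 2 + (t[None, :] - tc) ** 2) / 20.0)

# Toy discrete time-frequency representation: rows = frequency, cols = time.
D = blob(15, 10, 2.0) + blob(45, 50, 1.0)

U, s, Vt = np.linalg.svd(D)

def descriptors(u, v):
    """Time/frequency location (centroids) of one rank-1 term; the
    spreads could be read off the same densities as second moments."""
    pt = v ** 2 / np.sum(v ** 2)       # energy density along time
    pf = u ** 2 / np.sum(u ** 2)       # energy density along frequency
    return float(np.sum(t * pt)), float(np.sum(f * pf))

locs = [descriptors(U[:, k], Vt[k]) for k in range(2)]
```

Because the two concentrations barely overlap, each of the first two SVD terms captures one of them, and the centroids land near (15, 10) and (45, 50); feature vectors of such descriptors are what feed a classifier.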
While spread spectrum systems are robust to many types of interference, performance can be significantly degraded if the interference is strong enough, particularly for wideband interference. In these situations, various signal processing methods can be employed to remove, or excise, the jammer prior to despreading the received signal, resulting in enhanced performance. We investigate the effects of amplitude- and frequency-modulated (AM-FM) jammers on the performance of direct sequence spread spectrum (DSSS) communication systems. We demonstrate that such jammers cause significant degradation in bit-error rate with increasing AM on systems designed to excise FM jammers only (i.e., systems with fixed notch-width excision filters). We propose an adaptive technique that utilizes the instantaneous bandwidth of the jammer, in addition to its instantaneous frequency, to filter wideband AM-FM interference from the DSSS signal. We also investigate the effects of adapting the filter notch depth as well, according to the instantaneous power of the jammer. Simulations demonstrate additional improvement in system performance for the proposed adaptive technique compared to fixed notch-width and fixed notch-depth excision filters.
We generalize the short-time Fourier transform and the cross-spectral methods developed by the author and others over the past two decades. The method presented is based on the application of a multidimensional Fourier transform. Each iteration of the Fourier transform channelizes the signal into successively narrower channels, and a single application of phase differentiation stabilizes the Fourier phase and effectively places all the signal components in their correct locations in the spectrum. Finally, it is shown that it is possible to recover spectral components from spectral observations that are remote from the components being estimated.
Previous work has shown that time-frequency distributions (TFDs) belonging to Cohen's class can be represented as a sum of weighted spectrograms. This representation offers a means of reducing the computational complexity of TFDs. The windows in the spectrogram representation may be either the eigenfunctions obtained from an eigendecomposition of the kernel or any complete set of orthonormal basis functions. The efficiency of the computation can be further increased by using a set of scaled and shifted functions such as wavelets. In this paper, the concept of scaling is considered in the discrete-time domain. The scale operator in the frequency domain is formulated, and the vectors that correspond to the solutions of this eigenvalue problem in discrete time are derived. These new eigenvectors are very similar in structure to the eigenvectors obtained from the eigendecomposition of reduced interference distribution (RID) kernels. The relationship between these two sets of window functions is illustrated, and a new, efficient way of decomposing time-frequency kernels is introduced. The results are compared to previous decomposition methods. Finally, some possible applications of these discrete scale functions in obtaining new time-frequency distributions are discussed.
In this paper, we formulate a framework for discrete-time processing of Linear Scale Invariant (LSI) systems, which are invariant to scale changes in time. Continuous-time LSI systems can be processed by the Mellin and Modified Scale Transforms, analogous to the use of the Laplace and Fourier Transforms in the continuous-time processing of LTI systems. In this work, we present the geometric sampling theorem to prevent aliasing in the scale domain. We also derive the perfect reconstruction filter in the time domain, the discrete-time convolution sum, the Discrete Time Modified Scale Transform (DTMST), and the Discrete Modified Scale Transform (DMST) for geometrically sampled signals.
In this paper we combine two recently developed multi-scale deconvolution algorithms, known as the scale-time domain method and the sum-of-cumulants domain method. We formulate the deconvolution problem in the scale-cumulant domain using the Scale Transform (ST) and show that the procedure is simpler when the unknown source signal is non-minimum phase, and robust in the presence of Gaussian noise.
Given two n × n Toeplitz matrices T_1 and T_2, and a vector b ∈ R^{n^2}, consider the linear system Ax = b − η, where η ∈ R^{n^2} is an unknown vector representing the noise and A = T_1 ⊗ T_2. Recovering approximations of x, given A and b, is encountered in image restoration problems. We propose a method for the approximation of the solution x that has good regularization properties. The algorithm is based on a modified version of Newton's iteration for matrix inversion and relies on the concept of approximate displacement rank. We provide a formal description of the regularization properties of Newton's iteration in terms of filters and determine bounds on the number of iterations that guarantee regularization. The method is extended to deal with more general systems where A = Σ_{i=1}^{h} T_1^{(i)} ⊗ T_2^{(i)}. The cost of computing regularized inverses is O(n log n) operations (ops); the cost of solving the system Ax = b is O(n^2 log n) ops. Numerical experiments that show the effectiveness of our algorithm are presented.
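The regularizing behavior rests on Newton's iteration for matrix inversion. A dense NumPy sketch of the iteration, without the displacement-rank structure that gives the method its O(n log n) cost (the starting scale is a standard convergent choice, an assumption here):

```python
import numpy as np

def newton_inverse(A, iters):
    """Newton's iteration for the inverse: X_{k+1} = X_k (2I - A X_k).

    Started from X_0 = A^T / (||A||_1 ||A||_inf), the iterates converge
    quadratically to A^{-1}; stopping early acts as a filter on the
    small singular values, which is the regularization property the
    method exploits for noisy, ill-conditioned systems.
    """
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        X = X @ (2 * np.eye(n) - A @ X)
    return X
```

Run to convergence on a well-conditioned matrix, the iteration reproduces the exact inverse; truncating the iteration count is what yields the regularized inverses discussed above.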
Let P be a symmetric positive definite Pick matrix of order n. The following facts will be proven here: (1) P is the Gram matrix of a set of rational functions, with respect to an inner product defined in terms of a 'generating function' associated with P; (2) its condition number is bounded below by a function growing exponentially in n; (3) P can be effectively preconditioned by the Pick matrix generated by the same nodes and a constant function.
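The Gram-matrix fact in (1) suggests a quick numerical check of (2). The sketch below builds the Pick matrix of the constant zero function at hand-picked real nodes in (−1, 1) (an illustrative assumption, not the paper's setting) and watches the condition number grow with n:

```python
import numpy as np

def pick_matrix(nodes):
    """Pick matrix of the constant zero function at real nodes in (-1, 1):
    P[i, j] = 1 / (1 - z_i z_j).  It is the Gram matrix of the vectors
    (1, z_i, z_i^2, ...), hence symmetric positive definite for
    distinct nodes."""
    z = np.asarray(nodes)
    return 1.0 / (1.0 - np.outer(z, z))

conds = [np.linalg.cond(pick_matrix(np.linspace(0.1, 0.9, n)))
         for n in range(2, 8)]
```

Even at these tiny orders the condition number climbs by orders of magnitude, which is why the preconditioning result in (3) matters in practice.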
The ULV decomposition (ULVD) is an important member of a class of rank-revealing two-sided orthogonal decompositions used to approximate the singular value decomposition (SVD). The ULVD can be modified much faster than the SVD. Accurate computation of the subspaces is required in signal processing applications. In this paper we introduce a recursive ULVD algorithm which is faster than all available stable SVD algorithms. Moreover, we present an alternative refinement algorithm.
We present an algorithm for solving Toeplitz least squares problems. By embedding the Toeplitz matrix into a circulant block matrix and by applying the Discrete Fourier Transform, we are able to transform the linear least squares problem into a discrete least squares approximation problem for polynomial vectors. We have implemented our algorithm in Matlab. Numerical experiments indicate that our approach is numerically stable even for ill-conditioned problems.
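The circulant embedding at the heart of the approach can be sketched for a single Toeplitz matrix-vector product (the least squares machinery and the polynomial reformulation are omitted): embed the n × n Toeplitz matrix in a 2n × 2n circulant, which the FFT diagonalizes, so the product costs O(n log n):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix by a vector in O(n log n) via the FFT.

    c: first column, r: first row (r[0] must equal c[0]).  The Toeplitz
    matrix is embedded in a circulant of size 2n; the circulant's
    eigenvalues are the FFT of its first column, so the product reduces
    to two forward FFTs and one inverse FFT.
    """
    n = len(c)
    # First column of the 2n x 2n circulant embedding
    # (the element between c and the reversed row is arbitrary).
    col = np.concatenate([c, [0], r[-1:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real
```

The first n entries of the circulant product equal the Toeplitz product; the same trick is what makes the DFT-based least squares reformulation fast.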
An adaptive algorithm based on a direction-set method for solving unconstrained minimization problems without using derivatives is developed for spectral estimation in the presence of noise. It is a fast algorithm, requiring O(N) multiplications for each system update, where N is the number of parameters used in the system design. Computer simulations have shown that the algorithm is stable and very efficient for spectral estimation. The frequency estimates obtained by this algorithm are often smoother than those generated by a conjugate-gradient-based algorithm, which has a computational complexity of O(N^2). A detailed study of the algorithm for adaptive spectral estimation, including its computational complexity, computer simulations, and its convergence compared with other known methods, is given in this paper.
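As a stand-in for the paper's O(N) adaptive algorithm, the sketch below runs SciPy's derivative-free Powell direction-set method on a toy single-tone frequency estimation problem; the signal, noise level, and starting point are arbitrary assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
fs = 100
t = np.arange(200) / fs
x = np.sin(2 * np.pi * 12.3 * t) + 0.1 * rng.standard_normal(len(t))

def cost(p):
    """Residual power after a least-squares fit of one tone at frequency p[0]."""
    f = p[0]
    basis = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    amp, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return np.sum((x - basis @ amp) ** 2)

# Powell's method: successive line minimizations along a set of
# directions, with no gradient evaluations.
res = minimize(cost, x0=[12.0], method="Powell")
```

The minimizer lands near the true 12.3 Hz tone; the paper's contribution is making this style of derivative-free search cheap enough (O(N) per update) to run adaptively.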
In many digital signal processing and multimedia applications, results that overflow are saturated to the most positive or most negative representable number. This paper presents efficient techniques for performing saturating n-bit integer multiplication on unsigned and two's complement numbers. Unlike conventional techniques for saturating multiplication, which compute a 2n-bit product and then examine the n most significant product bits to determine if overflow has occurred, the techniques presented in this paper compute only the (n + 1) least significant bits of the product. Specialized overflow detection units, which operate in parallel with the multiplier, determine if overflow has occurred and the product should be saturated. These techniques are applied to designs for saturating array multipliers that perform either unsigned or two's complement saturating integer multiplication, based on an input control signal. Compared to array multipliers that use conventional methods for saturation, these multipliers have about half as much area and delay.
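A behavioral sketch of the idea: keep only the low product bits and decide saturation with a separate overflow test, done arithmetically here rather than with the parallel detection logic the hardware uses. Only the unsigned case is shown:

```python
def saturating_mul_unsigned(a, b, n):
    """Saturating n-bit unsigned multiply.

    Mirrors the scheme above: form only the (n+1) least significant
    product bits, and use an independent overflow test to choose
    between them and the saturated value 2^n - 1.
    """
    lo = (a * b) & ((1 << (n + 1)) - 1)          # (n+1)-bit partial product
    # Overflow iff a*b >= 2^n, tested without the full 2n-bit product.
    overflow = b != 0 and a > ((1 << n) - 1) // b
    return (1 << n) - 1 if overflow else lo & ((1 << n) - 1)
```

When no overflow occurs the low bits already are the product, which is why the full 2n-bit result is never needed; the two's complement case follows the same pattern with a sign-aware detector.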
Media signal processing requires high computing power, and its algorithms exhibit a great deal of parallelism on low-precision data. The basic components of multimedia objects are usually simple integers with 8, 12, or 16 bits of precision. To support efficient processing of media signals, the Instruction Set Architecture (ISA) of traditional processors requires modification. In this paper, we present a quantitative analysis of the computational complexity required to perform media processing. The main classes of instructions needed for the required level of performance of the media processor are identified. Their efficient implementation and effect on the processor data path are discussed. The main operations required in media processing are addition (with or without saturation), multiplication (with or without rounding), sum of products, and average of two numbers.
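Behavioral sketches of two of these operations, saturating addition and rounded averaging, on ordinary Python integers (the 16-bit width and round-toward-plus-infinity convention are illustrative assumptions):

```python
def sat_add(a, b, n=16):
    """Signed saturating add on n-bit two's complement values:
    results outside the representable range clamp to the extremes."""
    lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    return max(lo, min(hi, a + b))

def avg_round(a, b):
    """Average of two integers with rounding toward +infinity,
    the convention used by typical packed-average media instructions."""
    return (a + b + 1) >> 1
```

In a media ISA each of these is a single instruction applied to many packed subwords at once, which is where the parallelism on low-precision data pays off.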
In several cases, the input argument of an elementary function evaluation is given bit-serially, most significant bit first. We suggest a solution for performing the first step of the evaluation (namely, the range reduction) on the fly: the computation is overlapped with the reception of the input bits.
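A toy software model of the on-the-fly idea: consume the bits of the argument MSB-first and keep the partial remainder inside [0, C) as each bit arrives, so the reduction finishes with the last input bit. Here C = pi/2, a common reduction constant; the paper's hardware formulation differs:

```python
import math

C = math.pi / 2

def range_reduce_online(bits, int_bits):
    """On-the-fly range reduction sketch: bits of x arrive MSB-first
    (int_bits integer bits, the rest fractional).  A partial remainder
    r is kept in [0, C) by subtracting C whenever the running value
    allows, overlapping the reduction with the input reception."""
    r = 0.0
    weight = 2.0 ** (int_bits - 1)
    for b in bits:
        r += b * weight
        while r >= C:                 # only whole multiples of C are removed,
            r -= C                    # so r stays congruent to x modulo C
        weight /= 2
    return r
```

Since only integer multiples of C are ever subtracted and the final remainder lies in [0, C), the result equals x mod C, but it is available as soon as the last bit arrives.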
In Residue Number Systems (RNS), an integer X is represented by its residues {x_0, ..., x_{n-1}} modulo a base of relatively prime numbers {m_0, ..., m_{n-1}}. Thus a large number can be represented as a set of small integers. Addition and multiplication can be easily parallelized, since there is no carry propagation: the time is reduced to that of evaluating these operations on small numbers. This representation is useful in cryptography and digital signal processing. Furthermore, in these two domains, modular multiplication (A × B mod N) is frequently used. In 1998, we presented in the IEEE Transactions on Computers a new modular multiplication algorithm in RNS. This algorithm is based on the Montgomery algorithm, using the associated Mixed Radix representation for the weighted digits. It was the first algorithm of this type. In this paper, we present two remarks. First, if we develop the different expressions arising from the algorithm, we obtain some mathematical simplifications that allow us to suppress some Mixed Radix occurrences in the basic iteration, simply with a new initialization of our variables. Thus, in this new version, the complexity of each basic iteration becomes equivalent to two products of small integers instead of three. The second remark is that, most of the time, modular multiplications are done with the same modulus N. We can precompute some values and reduce the complexity of each basic iteration to one multiplication of two small integers. Thus, the basic iteration is three times faster, and the global computation, owing to the initialization, is 8/5 times faster than the original version. After the last basic iteration, a Mixed Radix conversion may be needed. Classical parallel methods are linear. We propose a parallel algorithm for this translation from RNS to Mixed Radix. For this, we use a result from an RNS division algorithm we published in the Journal of VLSI Signal Processing Systems in 1998. We obtain, in logarithmic time, an approximation of the Mixed Radix representation; the correct representation is then established in logarithmic time as well.
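The digit-wise RNS arithmetic and the classical sequential Mixed Radix conversion (the linear-time baseline that the parallel algorithm above improves on) can be sketched as follows; the base of moduli is an arbitrary small example:

```python
from math import prod

moduli = (13, 11, 7, 5)                # pairwise coprime base
M = prod(moduli)                       # dynamic range: 5005

def to_rns(x):
    return tuple(x % m for m in moduli)

def rns_mul(a, b):
    """Digit-wise multiply: no carries cross between channels."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, moduli))

def to_mixed_radix(r):
    """Classical sequential conversion.  Returns digits d with
    x = d0 + d1*m0 + d2*m0*m1 + ... ; each step peels off one digit
    and rescales the remaining residues by the inverse of m_i."""
    digits, r = [], list(r)
    for i, mi in enumerate(moduli):
        d = r[i] % mi
        digits.append(d)
        for j in range(i + 1, len(moduli)):
            mj = moduli[j]
            r[j] = ((r[j] - d) * pow(mi, -1, mj)) % mj
    return digits

def from_mixed_radix(d):
    x, w = 0, 1
    for di, mi in zip(d, moduli):
        x += di * w
        w *= mi
    return x
```

The conversion's data dependence (digit i feeds every later residue) is what makes the classical method linear in the number of moduli, and what the logarithmic-time parallel variant works around.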
Direct table-based methods are frequently proposed for the implementation of low-precision evaluation of functions. We examine and compare real implementations on current FPGAs of two methods introduced in the literature: the single table and the bipartite table. We focus on the sine/cosine functions, input and output sizes in {8, ..., 12}, and LUT-based FPGAs, especially the Virtex device family from Xilinx.
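A behavioral model of the bipartite idea for the sine function, with an arbitrary 3/3/3 split of a 9-bit input (the FPGA mapping and exact table sizes studied above differ): the input is split into three fields x0, x1, x2, and f(x) is approximated as A[x0, x1] + B[x0, x2], two tables of 64 entries each instead of one table of 512:

```python
import math

N0, N1, N2 = 3, 3, 3                   # split of a 9-bit input
SCALE = (math.pi / 2) / 2 ** (N0 + N1 + N2)

def build_tables():
    """Bipartite tables for sin on [0, pi/2): f(x) ~ A[x0,x1] + B[x0,x2].
    A samples the function with x2 at its midpoint; B holds a first-order
    correction whose slope depends only on the leading bits x0."""
    A, B = {}, {}
    mid2 = (2 ** N2 - 1) / 2
    for x0 in range(2 ** N0):
        for x1 in range(2 ** N1):
            x = (x0 << (N1 + N2)) | (x1 << N2)
            A[x0, x1] = math.sin((x + mid2) * SCALE)
        slope = math.cos((x0 << (N1 + N2)) * SCALE)    # slope from x0 only
        for x2 in range(2 ** N2):
            B[x0, x2] = (x2 - mid2) * SCALE * slope
    return A, B

A, B = build_tables()

def bipartite_sin(x):                  # x: 9-bit integer input
    x0 = x >> (N1 + N2)
    x1 = (x >> N2) & (2 ** N1 - 1)
    x2 = x & (2 ** N2 - 1)
    return A[x0, x1] + B[x0, x2]

err = max(abs(bipartite_sin(x) - math.sin(x * SCALE)) for x in range(2 ** 9))
```

The storage drops from 2^9 to 2·2^6 entries at the cost of a small, bounded first-order approximation error, which is the area/accuracy trade the FPGA comparison quantifies.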
Schwarz demonstrated the reuse of a multiplier partial product array (PPA) to approximate higher-order functions such as the reciprocal, division, and square root. This work presents techniques to decrease the worst-case error for the reciprocal approximation computed on a fixed-height PPA. In addition, a compensation table is proposed that, when combined with the reciprocal approximation, produces a fixed-precision result. The design space for a 12-bit reciprocal is then studied, and the area-time tradeoff for three design points is presented. Increasing the reciprocal approximation computation decreases the area needed to implement the function while increasing the overall latency. Finally, the applicability of the proposed technique to the bipartite ROM reciprocal table is discussed. The proposed technique allows hardware reconfigurability: programmable inputs to the PPA allow the hardware unit to be reconfigured to compute various higher-order function approximations.
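A software stand-in for the structure of the method, a rough initial approximation plus a correction stage yielding a fixed-precision result; here the initial value comes from a 64-entry midpoint table and the correction is one Newton-Raphson step rather than the PPA plus compensation table described above:

```python
def reciprocal(x):
    """Approximate 1/x for x in [1, 2): a 64-entry initial table
    followed by one Newton-Raphson refinement y <- y * (2 - x*y).
    The refinement squares the initial error, so a coarse first
    approximation still yields roughly 13 good fraction bits."""
    idx = min(int((x - 1.0) * 64), 63)        # top 6 fraction bits of x
    y = 1.0 / (1.0 + (idx + 0.5) / 64.0)      # table: reciprocal at cell midpoint
    return y * (2.0 - x * y)                  # one correction step

err = max(abs(reciprocal(1 + k / 4096) - 1 / (1 + k / 4096)) for k in range(4096))
```

The division of labor is the same as in the hardware design: a cheap approximation unit sets the ballpark, and a second stage tightens the worst-case error to a guaranteed precision.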
This paper discusses the use of a recently introduced index calculus Double-Base Number System (IDBNS) for representing and processing numbers for non-linear digital signal processing; the target application is a digital hearing aid processor. The IDBNS representation uses two orthogonal bases (2 and 3) to represent real numbers with arbitrary precision. By restricting the number of digits to one or two, it is possible to efficiently represent a real number using the indices of the bases rather than the distribution of the digits. In this paper we discuss the use of the two-digit form of this representation (2-IDBNS) to efficiently perform the arithmetic associated with the non-linear processing required to correct the usual forms of hearing loss in a digital hearing aid. The non-linear processing takes the form of dynamic range compression as a function of frequency band. Currently developed digital hearing instrument processors require large dynamic range representations (20-24 bits) in order to accurately generate the dynamic range compression associated with typical hearing loss. We show that the natural non-linear representation afforded by the IDBNS provides both a more efficient signal representation and a more efficient technique for processing the dynamic range compression. We pay particular attention to a novel technique for converting from a linear binary input directly to the 2-IDBNS representation, using an observation of partial cyclic repetition in the indices along with near-unity approximants.
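A greedy two-term sketch of the two-digit idea: approximate an integer by at most two terms of the form 2^a 3^b and keep only the index pairs (a, b). This naive search is only an illustration of the representation; the paper's cyclic-repetition conversion technique is different:

```python
from math import floor, log2

def nearest_2_3_term(x):
    """Largest 2^a * 3^b not exceeding x; returns (a, b, value)."""
    best = (0, 0, 1)
    b, p3 = 0, 1
    while p3 <= x:
        a = floor(log2(x // p3))
        val = (1 << a) * p3
        if val <= x and val > best[2]:
            best = (a, b, val)
        b += 1
        p3 *= 3
    return best

def two_digit_dbns(x):
    """Greedy two-term {2^a 3^b} approximation of x: the number is
    stored as the index pairs alone, the 2-IDBNS idea in miniature.
    Returns (index pairs, leftover error)."""
    a1, b1, v1 = nearest_2_3_term(x)
    r = x - v1
    if r == 0:
        return [(a1, b1)], 0
    a2, b2, v2 = nearest_2_3_term(r)
    return [(a1, b1), (a2, b2)], r - v2
```

For example, 100 = 2^5·3 + 2^2 is captured exactly by the two index pairs (5, 1) and (2, 0); storing indices instead of long digit strings is what shrinks the wide dynamic range representation.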
This paper presents arithmetic implementations which use binary redundant numbers based on carry-save representations. It is well known that constant-time addition, in which the execution delay is independent of operand length, is feasible only if the result is expressed in a redundant representation. Carry-save based formats are one type of redundant representation that can lead to highly efficient implementations of arithmetic operations. In this paper, we discuss two specific carry-save formats that lead to particularly efficient realizations. We illustrate these formats and the 'equal-weight grouping' (EWG) mechanism, wherein bits having the same weight are grouped together during an arithmetic operation. This mechanism can reduce the area and delay complexity of an implementation. We present a detailed comparison of implementations based on these two carry-save formats, including measurements from VLSI cell layouts. We then illustrate the application of these VLSI cells for multi-operand addition in fast parallel multipliers. Finally, we indicate the relationship with previous results.
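The constant-time property is easy to see in a behavioral model: each carry-save step is a position-wise full adder whose delay is independent of word length, and only the final result needs a carry-propagate addition. A minimal sketch (bit-parallel on Python integers, not the specific formats compared above):

```python
def carry_save_add(a, b, c):
    """One carry-save step: three operands in, a (sum, carry) pair out.

    Every bit position is an independent full adder, so the hardware
    delay is one full-adder delay regardless of operand length -- which
    is exactly why the result must stay redundant (two vectors)."""
    s = a ^ b ^ c                                 # bitwise sum
    cy = ((a & b) | (a & c) | (b & c)) << 1       # bitwise carry, shifted
    return s, cy

def add_many(values):
    """Reduce many operands with carry-save steps; only the final
    (sum, carry) pair goes through a conventional carry-propagate add."""
    s, cy = 0, 0
    for v in values:
        s, cy = carry_save_add(s, cy, v)
    return s + cy                                 # single carry-propagate addition
```

This reduction is the core of the multi-operand addition in fast parallel multipliers mentioned above: the carry-propagate delay is paid once, not per operand.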
The design of digital systems that make use of redundant digit sets requires a specific design methodology. These types of systems are frequently used in high-performance arithmetic algorithms. Since more than one output represents the correct result and more than one bit-vector may represent the same output value, there are many possibilities for the system's realization. An important decision involves the selection of the digit codes. This decision impacts the area and delay of the final system. After the digit codes are defined, the number of possible implementations is usually large. This paper presents a design methodology for the implementation of redundant digit systems which allows the creation of an environment for the investigation of design alternatives for such systems. The use of this methodology makes it possible to systematically select digit codes and determine the best design solution. It provides the basis for the implementation of a CAD tool that can assist the designer in the generation of minimal gate networks for redundant digit systems. Besides presenting the methodology, we also illustrate its use in the design of some redundant adders, and outline the rules that should be followed in order to obtain the best gate networks.
A high speed multiplier-accumulator (MAC) architecture is presented that accepts up to 2^{k} pairs of n-bit 2's complement input operands and generates one 2n+k-bit result. The implementation uses a two-stage pipelined multiplier with Dadda reduction and a single carry-propagating adder. A significant speedup is achieved with this implementation over conventional MAC designs, while the complexity is slightly reduced. An example design with 24-bit input operands is presented, implemented in a 0.35 micrometer CMOS technology.
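A behavioural sketch (ignoring the two-stage pipeline and the Dadda reduction tree) shows the width accounting: 2^k products of n-bit two's-complement operands always fit in 2n+k bits. The parameter defaults are illustrative.

```python
def mac(pairs, n=24, k=3):
    """Behavioural model of the MAC: accumulate up to 2**k products of
    n-bit two's-complement operands; the result fits in 2n+k bits."""
    assert len(pairs) <= 2 ** k
    lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    acc = 0
    for a, b in pairs:
        assert lo <= a <= hi and lo <= b <= hi
        acc += a * b
    # worst case magnitude: 2**k * 2**(n-1) * 2**(n-1) = 2**(2n+k-2)
    assert -(1 << (2 * n + k - 1)) <= acc < (1 << (2 * n + k - 1))
    return acc
```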
This paper presents the design, implementation, and verification of fine-grained pipelined Least-Mean-Square (LMS) adaptive Finite-Impulse-Response (FIR) filters in Virtex FPGA technology. The paper focuses on the impact of introducing pipelining into the LMS filter. While pipelining provides a speed increase, it also introduces delay into the error feedback loop, which degrades performance. This effect is overcome through the use of look-ahead and delayed-LMS based algorithms. In addition, the paper shows that FPGA technology, such as the Virtex FPGA, is an ideal platform for this implementation, as the costs of pipelining are offset by the abundance of flip-flops within the FPGA architecture. A pipelined momentum LMS algorithm is identified, which is considered to offer better convergence behavior and tracking capability than the pipelined LMS algorithm. Detailed performance results, including area and timing figures based on actual FPGA layout, are given.
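The delayed-LMS idea — updating the weights with an error that is several samples old, matching the latency a pipeline inserts into the feedback loop — can be sketched as follows (a plain delayed LMS, not the look-ahead or momentum variants; the filter length, step size, and delay below are illustrative):

```python
import random

def delayed_lms(x, d, num_taps=4, mu=0.02, delay=2):
    """Delayed-LMS adaptive FIR filter: the coefficient update uses the
    error from `delay` iterations ago, modelling the latency that
    pipelining inserts into the error feedback loop."""
    w = [0.0] * num_taps
    pending = []                       # (regressor, error) pairs awaiting use
    for i in range(len(x)):
        u = [x[i - j] if i - j >= 0 else 0.0 for j in range(num_taps)]
        e = d[i] - sum(wj * uj for wj, uj in zip(w, u))
        pending.append((u, e))
        if len(pending) > delay:       # apply the update `delay` steps late
            u_old, e_old = pending.pop(0)
            w = [wj + mu * e_old * uj for wj, uj in zip(w, u_old)]
    return w

# identify a known 4-tap channel despite the delayed update
random.seed(1)
true_h = [0.5, -0.25, 0.1, 0.0]
x = [random.uniform(-1.0, 1.0) for _ in range(4000)]
d = [sum(true_h[j] * (x[i - j] if i - j >= 0 else 0.0) for j in range(4))
     for i in range(4000)]
w = delayed_lms(x, d)
```

For small step sizes the delayed update converges to the same solution as plain LMS, which is what makes the pipelined filter usable.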
Adaptive beamforming can play an important role in sensor array systems in countering directional interference. In high-sample-rate systems, such as radar and communications, the calculation of adaptive weights is a computationally demanding task that requires highly parallel solutions. For systems where low power consumption and volume are important, the only viable implementation is as an Application Specific Integrated Circuit (ASIC). However, the rapid advancement of Field Programmable Gate Array (FPGA) technology is enabling highly credible re-programmable solutions. In this paper we present the implementation of a scalable linear array processor for weight calculation using QR decomposition. We employ floating-point arithmetic with mantissa size optimized to the target application to minimize component size, and implement the operators as relationally placed macros (RPMs) on Xilinx Virtex FPGAs to achieve predictable dense layout and high-speed operation. We present results showing that 20 GFLOPS of sustained computation on a single XCV3200E-8 Virtex-E FPGA is possible. We also describe the parameterized implementation of the floating-point operators and QR-processor, and the design methodology that enables us to rapidly generate complex FPGA implementations using the industry standard hardware description language VHDL.
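The numerical core mapped onto such an array processor — triangularisation by Givens rotations, one rotation per array cell — can be sketched in plain floating point (no RPM layout or reduced-mantissa arithmetic):

```python
import math

def givens_qr(A):
    """Triangularise A by Givens rotations, zeroing one subdiagonal
    element per rotation -- the operation a linear systolic array
    performs one cell at a time."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    for j in range(n):
        for i in range(m - 1, j, -1):            # zero column j from the bottom up
            a, b = R[i - 1][j], R[i][j]
            r = math.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            for k in range(j, n):                # rotate rows i-1 and i
                t = c * R[i - 1][k] + s * R[i][k]
                R[i][k] = -s * R[i - 1][k] + c * R[i][k]
                R[i - 1][k] = t
    return R                                     # upper triangular factor

R = givens_qr([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```

Because each rotation touches only two rows, the rotations parallelise naturally across the cells of a linear array.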
This paper presents a hybrid pipelined and multiplexed architecture for finite-duration impulse response (FIR) filters used in real-time applications. The emphasis is placed on efficient hardware utilization, compared to conventional multiplexed or pipelined architectures. The multiply-accumulate (MAC) unit used here is a Booth recoded, Wallace tree based multiplier, in which the accumulator is merged with the partial product matrix. However, the approach is equally well suited to other multiplier/accumulator implementations. A novel sign extension technique is described and incorporated into the partial product reduction phase. This effectively reduces the number of rows that need sign extension, and can be used in conjunction with existing sign extension techniques. The final structure described is a form of pipelined tap-division multiplexing with the goal of maximum hardware re-use during run-time, given input data rate constraints. A method to compute the optimal hybrid pipeline depth is presented, based on the ratio of the input data rate to the critical speed of the hardware in the target technology. Performance is measured against FIR structures implemented using conventional, multiplexed, and pipelined approaches. It is shown that hardware complexity reductions of up to 18% over conventional pipelined architectures, and up to 29% over multiplexed approaches can be achieved.
It has been quite common in the analysis of single- or multi-fractal signals originating from complex nonlinear systems to make a time-delayed construction of the state space attractor in which the dynamics can be qualitatively viewed. This involves the calculation of the embedding dimension and an appropriate time delay based on the signal's nonlinear correlation behavior. This is usually followed by a sub-optimal short-term linear prediction in the signal time/subspace domain instead of the optimal nonlinear prediction in the time/frequency domain. In this paper, we propose to alleviate the sub-optimality problem, exploit the nonlinear signal dynamics embedded in the attractor, and integrate them in the design of a new family of temporal multiple-step Volterra predictors. Essentially, this is done by including relevant past information preserved in the signal up to a time t_{d} equal to the embedding dimension multiplied by the embedding time delay, sampled at instants coincident with the embedding time delays, to predict one step ahead, adaptively, using the LMS criterion. The results obtained using several nonlinear (chaotic or non-chaotic) synthetic and measured biomedical signals and performing the novel quadratic- and cubic-Volterra predictions show superior MSE performance, of as much as 30 dB, over those obtained using an optimized conventional one-step Volterra predictor of the same order, particularly in the case of electrocardiogram signals. This is not achieved at the expense of any increase in CPU time, as the algorithm is designed to cater for new parallel Volterra structures with progressively delayed inputs.
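A minimal sketch of such a predictor, assuming a quadratic Volterra kernel with a constant term, inputs tapped at the embedding delays, and a normalised LMS step in place of the paper's parallel multiple-step structures:

```python
def volterra_predict(x, dim=3, tau=2, mu=0.5):
    """One-step quadratic Volterra predictor: the regressor holds a
    constant, the samples at the embedding delays x[t], x[t-tau], ...,
    x[t-(dim-1)*tau], and their pairwise products; the weights are
    adapted with a normalised LMS step."""
    def regressor(t):
        lin = [x[t - k * tau] for k in range(dim)]
        quad = [lin[i] * lin[j] for i in range(dim) for j in range(i, dim)]
        return [1.0] + lin + quad
    w = [0.0] * (1 + dim + dim * (dim + 1) // 2)
    errs = []
    for t in range((dim - 1) * tau, len(x) - 1):
        u = regressor(t)
        e = x[t + 1] - sum(wi * ui for wi, ui in zip(w, u))
        norm = sum(ui * ui for ui in u)          # >= 1 thanks to the constant term
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
        errs.append(e)
    return w, errs

# chaotic test signal: the Ulam map x[n+1] = 1 - 2*x[n]**2
x = [0.3]
for _ in range(3000):
    x.append(1.0 - 2.0 * x[-1] ** 2)
w, errs = volterra_predict(x)
```

On this map the quadratic kernel can represent the dynamics exactly, so the one-step prediction error collapses once the weights adapt.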
High-Frequency radar detects targets at thousands of kilometers over the horizon by refracting its beam from the ionosphere. A disturbed ionosphere may distort the signal severely, especially in auroral and equatorial regions. The powerful ground clutter spreads in Doppler and masks targets. This distortion is sometimes assumed to take the form of a random complex time-varying distortion function multiplying the time-domain signal. Simple and effective techniques have been developed to mitigate this distortion provided that either the amplitude or the phase of the distortion predominates. The general case of severe amplitude and phase distortion is much more difficult. The techniques are highly model-dependent but are sometimes reasonable for HF radar signals. An emphasis is placed on making the algorithms efficient, so that they can run in real time and keep up with the flood of radar data. The distortion model is first analyzed by phase-screen concepts that model the physics of the electromagnetic propagation through the turbulent ionosphere. To date the techniques have been tested on simulations, since the I/Q data collected thus far do not exhibit the kind of distortions for which these techniques are applicable.
An architecture is presented for front-end processing in a wideband array system which samples real signals. Such a system may be encountered in cellular telephony, radar, or low SNR digital communications receivers. The subbanding of data enables system data rate reduction, and creates a narrowband condition for adaptive processing within the subbands. The front-end performs passband filtering, equalization, subband decomposition and adaptive beamforming. The subbanding operation is efficiently implemented using a prototype lowpass finite impulse response (FIR) filter, decomposed into polyphase form, combined with a Fast Fourier Transform (FFT) block and a bank of modulating postmultipliers. If the system acquires real inputs, a single FFT may be used to operate on two channels, but a channel separation network is then required for recovery of individual channel data. A sequence of steps is described based on data transformation techniques that enables a maximally efficient implementation of the processing stages and eliminates the need for channel separation. Operation count is reduced, and several layers of processing are eliminated.
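The subbanding stage — polyphase components of a lowpass prototype followed by a DFT across the branches — can be sketched as a plain uniform DFT filter bank (the real-input channel pairing and the channel-separation elimination described above are not reproduced):

```python
import cmath

def polyphase_channelize(x, h, M):
    """Uniform DFT filter bank: filter the input through the M polyphase
    components of the lowpass prototype h, then apply an M-point DFT
    across the branch outputs. Returns one vector of M subband samples
    per block of M input samples."""
    phases = [h[p::M] for p in range(M)]                 # polyphase decomposition
    out = []
    for n in range(len(x) // M):
        v = []
        for p in range(M):
            acc = 0.0
            for k, hk in enumerate(phases[p]):           # branch-p FIR at rate fs/M
                idx = n * M + p - k * M
                if 0 <= idx < len(x):
                    acc += hk * x[idx]
            v.append(acc)
        out.append([sum(v[p] * cmath.exp(-2j * cmath.pi * p * m / M)
                        for p in range(M)) for m in range(M)])
    return out

# with a length-M boxcar prototype the bank reduces to blockwise DFTs
out = polyphase_channelize([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0, 1.0], 4)
```

Replacing the inner DFT loop with an FFT gives the usual O(M log M) cost per block, which is the efficiency the paper builds on.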
The problem addressed in this paper is the removal of unwanted sounds from a music signal. The sound to be removed could be a disturbance such as a cough. We present some preliminary results on this problem using statistical properties of signals. Our approach consists of three steps. We first estimate the fundamental frequencies and partials from the noise-corrupted music sound. This gives us an autoregressive (AR) model of the music sound. Then we filter the noise-corrupted sound using the AR parameters. The filtered signal is then subtracted from the original noise-corrupted signal to obtain the disturbance. Finally, the obtained disturbance is used as a reference signal to eliminate the disturbance from the noise-corrupted music signal. The above three steps are carried out in a recursive manner using a sliding window or an infinitely growing window with an appropriate forgetting factor.
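The AR-modelling step can be sketched in batch form with Yule-Walker estimation via the Levinson-Durbin recursion (the sliding-window recursive estimator with forgetting factor is not reproduced); the prediction residual then serves as the disturbance reference:

```python
import random

def ar_fit(x, p):
    """Estimate AR(p) coefficients from the Yule-Walker equations via
    the Levinson-Durbin recursion on the sample autocorrelation."""
    N = len(x)
    r = [sum(x[t] * x[t + k] for t in range(N - k)) / N for k in range(p + 1)]
    a, e = [], r[0]
    for i in range(p):
        k = (r[i + 1] - sum(a[j] * r[i - j] for j in range(i))) / e
        a = [a[j] - k * a[i - 1 - j] for j in range(i)] + [k]
        e *= 1.0 - k * k
    return a                        # x[t] ~ sum_j a[j] * x[t-1-j]

def residual(x, a):
    """Prediction residual: the part the AR music model cannot explain,
    used as the disturbance reference."""
    p = len(a)
    return [x[t] - sum(a[j] * x[t - 1 - j] for j in range(p))
            for t in range(p, len(x))]

# synthetic "music": an AR(2) process with unit-variance innovations
random.seed(7)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.5 * x[-1] - 0.3 * x[-2] + random.gauss(0.0, 1.0))
a = ar_fit(x, 2)
r = residual(x, a)
```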
The GMRES method is a popular iterative method for the solution of linear systems of equations with a large nonsymmetric nonsingular matrix. However, little is known about the performance of the GMRES method when the matrix of the linear system is of ill-determined rank, i.e., when the matrix has many singular values of different orders of magnitude close to the origin. Linear systems with such matrices arise, for instance, in image restoration, when the image to be restored is contaminated by noise and blur. We describe how the GMRES method can be applied to the restoration of such images. The GMRES method is compared to the conjugate gradient method applied to the normal equations associated with the given linear system of equations. The numerical examples show the GMRES method to require less computational work and to give restored images of higher quality than the conjugate gradient method.
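A compact dense-arithmetic sketch of plain GMRES (unrestarted, unpreconditioned, with the small Hessenberg least-squares problem solved by normal equations instead of the usual Givens updates) illustrates the iteration; for ill-posed problems the iteration count itself acts as the regularisation parameter:

```python
import math

def gauss_solve(A, b):
    """Solve a small dense linear system by Gaussian elimination with
    partial pivoting (adequate for the tiny Hessenberg subproblem)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        y[r] = (M[r][n] - sum(M[r][k] * y[k] for k in range(r + 1, n))) / M[r][r]
    return y

def gmres(matvec, b, iters):
    """Unrestarted GMRES: Arnoldi builds an orthonormal Krylov basis and
    the residual is minimised over that subspace."""
    n = len(b)
    beta = math.sqrt(sum(v * v for v in b))
    Q = [[v / beta for v in b]]
    H = []                                  # column j of the Hessenberg matrix
    for j in range(iters):
        w = matvec(Q[j])
        col = []
        for q in Q:                         # modified Gram-Schmidt
            h = sum(wi * qi for wi, qi in zip(w, q))
            col.append(h)
            w = [wi - h * qi for wi, qi in zip(w, q)]
        hn = math.sqrt(sum(v * v for v in w))
        col.append(hn)
        H.append(col)
        if hn < 1e-12:                      # Krylov space exhausted
            break
        Q.append([v / hn for v in w])
    m = len(H)
    Hd = [[H[j][i] if i < len(H[j]) else 0.0 for j in range(m)]
          for i in range(m + 1)]
    g = [beta] + [0.0] * m
    # min ||g - Hd y|| via normal equations (fine at this size)
    A = [[sum(Hd[k][i] * Hd[k][j] for k in range(m + 1)) for j in range(m)]
         for i in range(m)]
    rhs = [sum(Hd[k][i] * g[k] for k in range(m + 1)) for i in range(m)]
    y = gauss_solve(A, rhs)
    return [sum(y[j] * Q[j][i] for j in range(m)) for i in range(n)]

A3 = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b3 = [1.0, 2.0, 3.0]
x3 = gmres(lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A3], b3, 3)
```

In a restoration setting `matvec` would apply the (possibly matrix-free) blurring operator, and `iters` would be kept small to avoid amplifying the noise.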
Image restoration is a procedure which is characterized by ill-posedness, ill-conditioning and non-uniqueness of the solution in the presence of noise. Iterative numerical methods have gained much attention for solving these inverse problems. Among the methods, minimal variance or least squares approaches are widely used and often generate good results at a reasonable cost in computing time using iterative optimization of the associated cost functional. In this paper, a new regularization method obtained by minimizing the autocorrelation function of residuals is proposed. Several numerical tests using the BFGS nonlinear optimization method are reported and comparisons to the classical Tikhonov regularization method are given. The results show that this method gives competitive restoration and is not sensitive to the regularization weighting parameter. Furthermore, a comprehensive procedure of image restoration is proposed by introducing a modified version of the Mumford-Shah model, which is often used in image segmentation. This approach shows promising improvement in restoration quality.
A variant of the MINRES method, often referred to as the MR-II method, has in the last few years become a popular iterative scheme for computing approximate solutions of large linear discrete ill-posed problems with a symmetric matrix. It is important to terminate the iterations sufficiently early in order to avoid severe amplification of measurement and round-off errors. We present a new L-curve for determining when to terminate the iterations with the MINRES and MR-II method.
We study the problem of reconstructing a super-resolution image f from multiple undersampled, shifted, degraded frames with subpixel displacement errors. The corresponding operator H is a spatially-variant operator. In this paper, we apply the preconditioned conjugate gradient method with cosine transform preconditioners to solve the discrete problems. Preliminary results show that our method converges very fast and gives sound recovery of the super-resolution images.
In many areas of science and engineering, it is of interest to find the shape of an object or region from indirect measurements. For instance, in geophysical prospecting, gravitational or magnetic field measurements made on the earth's surface are used to detect an oil reservoir deep inside the earth. In a different application, X-ray attenuation measurements are used in Computer Assisted Tomography (CAT) to reconstruct the shape and density of biological or inorganic materials for diagnostic and other purposes. It turns out that in these two rather disparate areas of application, among many others, the partial information can actually be distilled into moments of the underlying shapes we seek to reconstruct. Moments of a shape convey geometric information about it. For instance, the area, center of mass, and moments of inertia of an object give a rough idea of how large it is, where it is located, how round it is, and in which direction it is elongated. For simple shapes such as an ellipse, this information is sufficient to uniquely specify the shape. However, it is well-known that, for a general shape, the infinite set of moments of the object is required to uniquely specify it. Remarkable exceptions are simple polygons, and a more general class of shapes called quadrature domains that are described by semi-algebraic curves. These exceptions are of great practical importance in that they can be used to approximate, arbitrarily closely, any bounded domain in the plane. In this paper, we will describe our efforts directed at developing the mathematical basis, including some stable and efficient numerical techniques for the reconstruction of (or approximation by) these classes of shapes given 'measured' moments.
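For the polygon case, the forward direction — computing exact low-order moments from the vertices via Green's theorem — is elementary (the inverse problem treated in the paper, recovering the shape from measured moments, is the hard direction and is not sketched here):

```python
def polygon_moments(verts):
    """Low-order geometric moments of a simple polygon from its vertex
    list (counter-clockwise): the area m00 and the first moments
    m10, m01, which give the centroid, via Green's theorem."""
    m00 = m10 = m01 = 0.0
    n = len(verts)
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0            # signed area of edge triangle (x2)
        m00 += cross
        m10 += (x0 + x1) * cross
        m01 += (y0 + y1) * cross
    m00 *= 0.5
    m10 /= 6.0
    m01 /= 6.0
    return m00, (m10 / m00, m01 / m00)       # area, centroid

area, (cx, cy) = polygon_moments([(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)])
```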
Many properties of the atmosphere affect the quality of images propagating through it by blurring and reducing their contrast. The atmospheric path involves several limitations, such as scattering and absorption of the light and turbulence, which degrade the image. The recently developed atmospheric Wiener filter, which corrects for turbulence blur, aerosol blur, and path radiance simultaneously, is implemented here in digital restoration of Landsat Thematic Mapper (TM) imagery over seven wavelength bands of the satellite instrumentation. Turbulence MTF (Modulation Transfer Function) is calculated from meteorological data, or estimated when no meteorological data were measured. Aerosol MTF is consistent with optical depth. The product of the two yields the atmospheric MTF, which is implemented in the atmospheric Wiener filter. Restoration improves both the smallest resolvable detail size and contrast. Restorations are quite apparent even under clear weather conditions. Different restoration results are obtained from different attempts to restore the degraded image. A way to determine which restoration result is best, and how good the restored image is, is presented here by examining mathematical criteria such as MSE (Mean Square Error), ROH (Richness of Histogram), and SOH (Similarity of Histogram), to obtain an improved image and consequently better visual restoration results.
This study was carried out to improve industrial machines that analyze planks, detecting their width and significant defects, by means of a computer vision system. These machines are currently controlled by software running on PCs. The aim of our work is to design a hardware card to increase the processing speed.
Color representation and comparison based on the histogram has proved to be very efficient for image indexing in content-based image retrieval and machine vision applications. However, the issues of color constancy and accurate color similarity measures remain unsolved. This paper presents a new algorithm for intensity-insensitive color characterization for image retrieval and machine vision applications. The color characterization algorithm divides the HSI (hue, saturation and intensity) color space into a given number of bins in such a way that the color characterization represents all the colors in the hue/saturation plane as well as black, white and gray colors. The color distribution in these bins of the HSI space is represented in the form of a one-dimensional vector called Color Spectrum Vector (CSV). The color information that is stored in the CSV is insensitive to changes in the luminance. A weighted version of CSV called WCSV is introduced to take the similarity of the neighboring bins into account. A Fuzzy Color Spectrum Vector (FCSV) color representation vector that takes into account the human uncertainty in the color classification process is also introduced here. The accuracy and speed of the algorithm are demonstrated in this paper through a series of experiments on image indexing and machine vision applications.
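The binning idea can be sketched as follows (using Python's HLS conversion as a stand-in for HSI; the bin counts and the saturation threshold are illustrative choices, not the paper's):

```python
import colorsys

def color_spectrum_vector(pixels, hue_bins=12, sat_bins=2, gray_levels=3):
    """Luminance-insensitive colour histogram in the spirit of the CSV:
    chromatic pixels are binned by hue and saturation only, while
    near-achromatic pixels fall into black/gray/white bins."""
    sat_threshold = 0.15                         # illustrative achromatic cutoff
    hist = [0] * (hue_bins * sat_bins + gray_levels)
    for rr, gg, bb in pixels:                    # channels in [0, 1]
        h, l, s = colorsys.rgb_to_hls(rr, gg, bb)
        if s < sat_threshold:                    # achromatic: bin by lightness
            hist[hue_bins * sat_bins +
                 min(int(l * gray_levels), gray_levels - 1)] += 1
        else:                                    # chromatic: ignore lightness
            hb = min(int(h * hue_bins), hue_bins - 1)
            sb = min(int(s * sat_bins), sat_bins - 1)
            hist[hb * sat_bins + sb] += 1
    total = float(len(pixels))
    return [c / total for c in hist]

v1 = color_spectrum_vector([(1.0, 0.0, 0.0)])    # pure red
v2 = color_spectrum_vector([(0.5, 0.0, 0.0)])    # same hue at half the intensity
```

Because lightness is dropped for chromatic pixels, a dimmed version of a colour lands in the same bin — the intensity insensitivity the CSV is built around.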
This work is intended to provide some ideas on the use of a Gaussian-derivative model for visual perception, called the Hermite transform, to extract motion information from an image sequence. Gaussian-derivative operators have long been used in computer vision for feature extraction and are relevant in visual system modeling. A directional energy is defined in terms of the 1-D Hermite transform coefficients of local projections. Each projection is described by the Hermite transform, resulting in a directional derivative analysis of the input at a given spatiotemporal scale. We demonstrate that the 1-D Hermite transform coefficients of local projections are readily computed as a linear mapping of the 3-D Hermite transform coefficients through some projecting functions. The directional response is used to detect spatiotemporal patterns that are 1-D or 2-D. Practical considerations and experimental results are also presented.
The aim of our work is to implement a system for motion estimation in image sequence processing on DSPs: fast motion estimation based on Gabor spatio-temporal filters. Our approach consists of calculating optical flow using an energy-based method, named combined filtering, which associates the energetic responses of Gabor spatio-temporal filters organized in triads. For this purpose, we apply a technique developed by the LIS Laboratory in France, inspired by the architecture of Heeger. To reduce the computation time, we also present a parallel implementation of the algorithm on a multi-DSP architecture using the SynDEx tool, a programming environment that generates optimized distributed real-time executives. We show that a speedup factor of 1.86 has been obtained.
A high-resolution spectral analysis algorithm with application to ISAR (inverse synthetic aperture radar) imaging is proposed in this paper. ISAR imaging is induced by target motion, which causes a time-varying spectrum in the signals reflected from the target. During the imaging time, the scatterers must remain in their range cells. The optimal integration angle needs to be estimated to prevent defocusing in cross-range. In order to measure the evolution of spectra, we propose a new information divergence measure based on Renyi entropy. A detailed discussion reveals many of the desirable properties of this new Jensen-Renyi divergence measure. When it is applied to inspect the time-frequency representation of the reflected signals, the optimal integration angle can be obtained to produce a well-focused, high-resolution ISAR image.
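The divergence itself follows directly from its definition — the Renyi entropy of a weighted mixture minus the weighted sum of the individual entropies (a minimal sketch, with alpha = 0.5 chosen so the measure is non-negative):

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy of a discrete distribution (alpha > 0, alpha != 1)."""
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def jensen_renyi(dists, weights, alpha=0.5):
    """Jensen-Renyi divergence: Renyi entropy of the weighted mixture
    minus the weighted sum of the individual entropies; non-negative for
    alpha in (0, 1), vanishing when the distributions coincide."""
    mix = [sum(w * d[i] for w, d in zip(weights, dists))
           for i in range(len(dists[0]))]
    return renyi_entropy(mix, alpha) - sum(
        w * renyi_entropy(d, alpha) for w, d in zip(weights, dists))

p, q = [0.5, 0.5], [0.9, 0.1]
d_same = jensen_renyi([p, p], [0.5, 0.5])
d_diff = jensen_renyi([p, q], [0.5, 0.5])
```

Applied to normalised time-frequency slices, a rising divergence between successive slices signals spectral change of the kind used above to pick the integration angle.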
In a two-dimensional pattern matching problem, a known template image has to be located in another image, irrespective of the template's position, orientation and size in the image. One way to accomplish invariance to the changes in the template is by forming a set of feature vectors that encompass all the variations in the template. Matching is then performed by finding the best similarity between the feature vector extracted from the image and the feature vectors in the template set. In this paper we introduce a new concept of a generalized Fourier transform. The generalized Fourier transform offers a relatively robust and extremely fast solution to the described matching problem. The application of the generalized Fourier transform to scale-invariant pattern matching is shown here.
Applying the fractional Fourier transform (FRFT) and the Wigner distribution to a signal in a cascade fashion is equivalent to a rotation of the time and frequency parameters of the Wigner distribution. In this paper an integral representation formula is presented that yields affine transformations of the spatial and frequency parameters of the n-dimensional Wigner distribution when it is applied to a signal together with the Wigner distribution, in the same cascade fashion as for the FRFT. This representation formula is used to solve certain energy localization problems in phase space.
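For reference, the FRFT-Wigner cascade property referred to above is the classical rotation relation (one common sign convention; others differ):

```latex
W_{\mathcal{F}_{\alpha} f}(t,\omega) \;=\; W_{f}\bigl(t\cos\alpha - \omega\sin\alpha,\; t\sin\alpha + \omega\cos\alpha\bigr)
```

The integral representation in the paper generalises this pure rotation to arbitrary affine maps of the phase-space parameters.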