We present an improvement of BKM, a shift-and-add algorithm, based on CORDIC, that allows fast computation of the complex exponential and logarithm functions. BKM is accelerated by the use of a redundant binary number system. Unlike previous redundant CORDIC methods, we need neither to calculate the scale factor during the computation nor to double the number of iterations, so our algorithm is suitable for an efficient hardware implementation.
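To illustrate the flavor of such a shift-and-add scheme, here is a minimal, non-redundant Python sketch in the spirit of BKM's E-mode: it drives a logarithm residue L toward zero with factors (1 + d*2^-n) while accumulating the same factors into E. The greedy digit selection below is an illustrative stand-in, not BKM's redundant selection rule.

```python
import math

def exp_shift_and_add(t, iters=50):
    """Illustrative shift-and-add exponential in the spirit of BKM's
    E-mode.  Invariant: E * exp(L) == exp(t).  Each step multiplies E by
    (1 + d*2^-n), a shift and an add in hardware, and subtracts
    ln(1 + d*2^-n), which would come from a small table."""
    E, L = 1.0, t
    for n in range(1, iters + 1):
        # greedy, non-redundant digit choice (stand-in for BKM's rule)
        d = min((-1, 0, 1), key=lambda k: abs(L - math.log1p(k * 2.0**-n)))
        E *= 1.0 + d * 2.0**-n
        L -= math.log1p(d * 2.0**-n)
    return E

assert abs(exp_shift_and_add(0.5) - math.exp(0.5)) < 1e-12
```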
This paper presents a high-speed method for approximating reciprocals and square roots. This method is based on piecewise second-order Taylor expansion, yet it requires one third as many coefficients and has less delay. It can be implemented using a table lookup, a multiplication and a merged operation, which has approximately the same area and delay as a multiplication. The table lookup and merged operation are independent and can be performed in parallel. To reduce the overall hardware requirements, this method can share hardware with an existing multiplier.
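As a concrete (hypothetical) instance of the piecewise second-order idea, the Python sketch below approximates 1/x on [1, 2): the leading fraction bits of x select an expansion point a, and the table would hold 1/a, 1/a^2 and 1/a^3 in hardware; the correction terms play the role of the multiplication and the merged operation.

```python
def reciprocal(x, bits=7):
    """Piecewise second-order Taylor reciprocal on [1, 2): the top `bits`
    fraction bits of x select the expansion point a; 1/a, 1/a^2, 1/a^3
    would be table entries in hardware."""
    step = 2.0 ** -bits
    a = 1.0 + (int((x - 1.0) / step) + 0.5) * step   # segment midpoint
    d = x - a                                        # |d| <= step/2
    inv = 1.0 / a                                    # table entry
    return inv - d * inv**2 + d * d * inv**3         # error ~ d^3 / a^4

assert abs(reciprocal(1.37) - 1 / 1.37) < 1e-6
```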
The control of error during the numerical evaluation of an expression is a crucial problem. A natural solution is to use interval arithmetic during the calculations. For this reason, we need algorithms able to efficiently determine the image of an interval under the elementary functions. Adapting classical algorithms to interval arithmetic leads to a large overestimation of the result. In this paper, we propose a method based on Taylor approximation which evaluates the function at the bounds of the interval and deduces the resulting image. The computation of each bound is done with the Taylor approximation evaluated by the Smith scheme. Indeed, polynomial approximation seems to be the best adapted to our range of precision (several hundred digits). This approach is better than the classical algorithm using the Horner scheme, and also than the other polynomial approximations (e.g. minimax and Chebyshev polynomials), which are more precise but also more complicated. Furthermore, we present a rigorous evaluation of the error, so the desired accuracy of the result is guaranteed even at numerically unstable points (large arguments, pi/2, ...). For the computation of a single point, our algorithm implemented with GMP is up to 2 times faster than the corresponding computation with software such as MAPLE or MUPAD, which does not even guarantee the relative precision. We show that the control of the error is not costly, and since the Taylor approximation is only slightly less precise but far simpler than the other approximations, it can be determined rapidly and is, all in all, the most efficient.
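A minimal double-precision Python sketch of the two ingredients: a Smith-style rectangular splitting scheme for the polynomial, and an interval enclosure of exp obtained by evaluating a truncated Taylor series at each endpoint and padding with a Lagrange remainder bound. The block size, term count, and use of plain floats (rather than GMP with directed rounding) are illustrative assumptions.

```python
import math

def poly_rectangular(coeffs, x, block=8):
    """Evaluate sum(coeffs[i] * x**i) by rectangular (Smith-style)
    splitting: reuse the small powers x^0..x^(block-1) inside each block
    and run Horner on y = x^block over the block sums."""
    powers = [x**j for j in range(block)]
    y = x**block
    total = 0.0
    for k in reversed(range(0, len(coeffs), block)):
        inner = sum(c * p for c, p in zip(coeffs[k:k+block], powers))
        total = total * y + inner
    return total

def interval_exp(lo, hi, terms=40):
    """Enclose exp([lo, hi]) using monotonicity: evaluate the truncated
    Taylor series at each endpoint and pad by a crude Lagrange remainder
    bound.  (A real implementation would use directed rounding.)"""
    coeffs = [1.0 / math.factorial(i) for i in range(terms)]
    def bound(x):
        rem = abs(x)**terms / math.factorial(terms) * math.exp(max(x, 0.0))
        return poly_rectangular(coeffs, x), rem
    vlo, rlo = bound(lo)
    vhi, rhi = bound(hi)
    return vlo - rlo, vhi + rhi

print(interval_exp(0.0, 1.0))   # encloses [1, e]
```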
This paper describes a new digital reprogrammable architecture called Field Programmable On-line oPerators (FPOP). This architecture is a kind of FPGA dedicated to very low-power implementations of numerical algorithms in signal processing or digital control applications for embedded or portable systems. FPOP is based on a reprogrammable array of on-line arithmetic operators. On-line arithmetic is a digit-serial arithmetic with most significant digits first using a redundant number system. Because of the small size of the digit-serial operators and the small number of communication wires between the operators, single chip implementation of complex numerical algorithms can be achieved using on-line arithmetic. Furthermore, the digit-level pipeline and the small size of the arithmetic operators lead to high performance parallel computations. Compared to a standard FPGA, the basic cells in FPOP are arithmetic operators such as adders, subtracters, multipliers, dividers, square-rooters, sine or cosine operators. This granularity level allows very efficient power-delay implementations of most algorithms used in digital control and signal processing. The circuit also integrates some analog-to-digital and digital-to-analog converters.
We suggest a new table-based method for evaluating the exponential function in double precision arithmetic. This method can easily be extended to the base 2 exponential function.
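The paper's specific tables and error analysis are not reproduced here, but a generic table-based exponential follows this well-known pattern: reduce x against multiples of ln(2)/2^L, look up 2^(j/2^L), and finish with a short polynomial in the small remainder. The table size L = 6 and the degree-4 polynomial below are illustrative choices.

```python
import math

L = 6                                    # table indexed by 6 bits: 64 entries
STEP = math.log(2) / 2**L
TABLE = [2.0 ** (j / 2**L) for j in range(2**L)]   # 2^(j/64)

def exp_table(x):
    # write x = (64*m + j)*ln(2)/64 + r with |r| <= ln(2)/128
    n = round(x / STEP)
    m, j = divmod(n, 2**L)
    r = x - n * STEP
    p = 1 + r + r*r/2 + r**3/6 + r**4/24   # short polynomial in the tiny r
    return math.ldexp(TABLE[j], m) * p     # 2^m * 2^(j/64) * e^r

assert abs(exp_table(3.7) / math.exp(3.7) - 1) < 1e-12
```

The base-2 variant mentioned in the abstract would reduce against multiples of 1/2^L instead of ln(2)/2^L.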
This paper presents novel methods of designing analog Cellular Nonlinear (Neural) Networks (CNNs) to implement very low-noise binary addition. In these techniques, the continuous characteristic of the current that charges (discharges) the load capacitor leads to a virtually switching-free addition process that significantly reduces the switching noise. This switching mechanism also leads to a higher slew rate of the output voltage during transitions, which in turn reduces crosstalk. Simulation results demonstrate a reduction of three orders of magnitude in the noise generated by this structure compared to that generated by a digital adder running at the same speed. This very good noise performance makes these new adder structures suitable choices for low- to moderate-speed, high-precision mixed-signal applications.
The multistage detection algorithm has been widely accepted as an effective interference cancellation scheme for next-generation Wideband Code Division Multiple Access (W-CDMA) base stations. In this paper, we propose a real-time implementation of this detection algorithm in the uplink system, where we have achieved both high interference-cancellation performance and computational efficiency. When the interference cancellation converges, the difference of the detection vectors between two consecutive stages is mostly zero. We recode the estimation bits, mapping the values ±1 to 0 and ±2. Bypassing all the zero terms saves computations, and multiplication by ±2 can be easily implemented in hardware as an arithmetic shift. The system delay of a three-stage detector can be reduced by half with a satisfactory bit error rate. We also propose a VLSI implementation of this algorithm that has the potential of real-time performance. A detector handling up to eight users with 12-bit fixed-point precision was fabricated using a 1.2 micrometer CMOS technology.
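A small numpy sketch of the recoding idea, wrapped in a generic parallel interference cancellation loop (the surrounding detector structure is a simplified stand-in, not the paper's exact uplink architecture): the stage-to-stage difference vector has entries in {-2, 0, +2}, zero entries are bypassed, and the ±2 updates map to shifts in hardware.

```python
import numpy as np

def multistage_detect(R, y, stages=3):
    """Generic multistage parallel interference cancellation with the
    difference recoding trick: between stages, only the changed bits
    (difference in {-2, 0, +2}) update the interference estimate."""
    Roff = R - np.diag(np.diag(R))      # off-diagonal (interference) part
    b = np.sign(y)                      # stage-0 decisions, entries +-1
    interf = Roff @ b
    for _ in range(stages - 1):
        b_new = np.sign(y - interf)
        d = b_new - b                   # recoded bits: {-2, 0, +2}
        for k in np.flatnonzero(d):     # bypass the zero terms
            interf += Roff[:, k] * d[k] # *(+-2) is a shift in hardware
        b = b_new
    return b
```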
This paper presents the application of the Linear Sequential Array (LSA) retiming approach, developed for conventional digit-recurrence algorithms, to on-line multiplication. The result is a modular and fast pipelined structure which, owing to a small constant fan-out and a cycle time independent of precision, is suitable for FPGA implementation. First we present the basics of on-line multiplication and determine the data dependencies according to the LSA design methodology. Based on these dependencies we redesign the traditional on-line multiplier to obtain the LSA structure. Since in DSP applications one of the multiplier operands is fixed for a long sequence of operations, we briefly present a parallel-serial multiplication unit that receives one of the operands in parallel and the other in Most-Significant-Digit-First format. Performance and area results are provided for the LSA on-line multiplier design and compared with the conventional on-line design, using Xilinx FPGAs as the target technology.
Fault tolerance is increasingly important as society has come to depend on computers for more and more aspects of daily life. The current concern about the Y2K problem indicates just how much we depend on accurate computers. This paper describes work on time-shared TMR (triple modular redundancy), a technique used to provide arithmetic operations that produce correct results in spite of circuit faults.
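At the core of any TMR scheme, time-shared or not, is a majority vote over three redundant results, so a fault in any single copy is masked; a bitwise Python sketch:

```python
def tmr_vote(a, b, c):
    """Bitwise majority of three redundant results: a fault in any one
    copy is out-voted by the other two."""
    return (a & b) | (b & c) | (a & c)

# a corrupted bit in one copy is masked by the other two
assert tmr_vote(0b1011, 0b1011, 0b0011) == 0b1011
```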
This paper studies an experimental approach to the problem of color purification for images of scanned paper maps. The mathematical foundation of the approach is briefly outlined, and a computationally feasible algorithm is then proposed. The algorithm is tested on real-life map images. Results indicate that the approach not only restores and purifies the colors of the map digitally but also compresses the image files.
It is well known that high-dimensional integrals can be evaluated with Monte Carlo algorithms. Recently, it was discovered that there is a relationship between low discrepancy sets and the efficient evaluation of higher-dimensional integrals. Theory suggests that for problems of moderate dimension, algorithms based on low discrepancy sets should outperform all other existing methods by an order of magnitude in terms of the number of sample points used to evaluate the integral. We show that the field of image processing can potentially take advantage of specific properties of low discrepancy sets. To illustrate this, we applied the theory of low discrepancy sequences to some relatively simple image processing and computer vision operations such as the estimation of gray level image statistics, fast location of objects in a binary image, and the reconstruction of images from a sparse set of points. Our experiments show that, compared to standard methods, the proposed new algorithms are faster and statistically more robust. Classical low discrepancy sets based on the Halton and Sobol' sequences were investigated thoroughly and showed promising results. The use of low discrepancy sequences in image processing for image characterization, understanding and object recognition is a novel and promising area for further investigation.
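A minimal Python sketch of the kind of estimator involved: Halton points (radical inverses in prime bases) used to estimate a gray level statistic from a small, well-spread set of samples. The image and sample count below are placeholders.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse: the digits of i in `base`,
    mirrored about the radix point."""
    f, result = 1.0, 0.0
    while i > 0:
        i, d = divmod(i, base)
        f /= base
        result += d * f
    return result

def halton(n, bases=(2, 3)):
    """First n 2-D Halton points (one prime base per axis)."""
    return np.array([[radical_inverse(i, b) for b in bases]
                     for i in range(1, n + 1)])

# estimate the mean gray level from a small, well-spread set of samples
img = np.random.rand(256, 256)            # stand-in for a gray level image
pts = (halton(500) * 256).astype(int)
estimate = img[pts[:, 0], pts[:, 1]].mean()
```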
This paper presents a robust algorithm for image processing using generalized reaction-diffusion equations. An edge-enhancing functional is proposed for image enhancement, and a number of super diffusion operators are introduced for fast and effective smoothing. Statistical information is utilized for robust edge-stopping and diffusion rate estimation. A unification of computational methods is discussed, and the unified computational method is employed for the numerical integration of the generalized reaction-diffusion equations. Computer experiments indicate that the present algorithm is very efficient for edge detection and noise removal.
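As a simple stand-in for the paper's generalized operators (which add super diffusion terms and statistical rate estimation), here is one explicit step of Perona-Malik-type nonlinear diffusion with an edge-stopping function, on periodic borders for brevity:

```python
import numpy as np

def diffusion_step(u, dt=0.2, k=0.05):
    """One explicit step of nonlinear diffusion: smooth where the local
    gradient is small, stop diffusing across strong edges."""
    dn = np.roll(u, -1, 0) - u          # forward differences, 4 directions
    ds = np.roll(u,  1, 0) - u
    de = np.roll(u, -1, 1) - u
    dw = np.roll(u,  1, 1) - u
    g = lambda d: np.exp(-(d / k) ** 2) # edge-stopping function
    return u + dt * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
```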
A new approach for generating Chinese gray scale characters, called gray scale rasterization, is discussed in this paper. Gray scale characters have proved to be superior in quality to traditional black-and-white characters on the CRT display, which is the most important output device for multimedia and Internet applications. Digital filtering, a commonly used technique in image processing, is often used to generate English gray scale characters, which are stored in font libraries for high-quality display applications. Simple estimates show that such an approach cannot be used to support the creation of Chinese gray scale font libraries. A gray scale rasterizer, which generates gray scale pixel maps directly from the outline descriptions of the characters, is proposed instead. Some implementation techniques are introduced in this paper: the approximation of Bézier curves by straight lines, the investigation of the filling process and the migration of the horizontal and vertical strokes. With such techniques, a gray scale rasterizer will be implemented. It is expected that gray scale Chinese display will benefit Chinese multimedia and Internet applications.
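The curve-to-segments step can be sketched directly; here, recursive de Casteljau subdivision of a quadratic Bézier curve with a chord-distance flatness test (the tolerance is an illustrative choice):

```python
def flatten_quadratic(p0, p1, p2, tol=0.25, depth=0):
    """Approximate a quadratic Bezier curve by line segments: subdivide at
    t = 1/2 (de Casteljau) until the control point is within `tol` of the
    chord.  Returns the polyline as a list of points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    dx, dy = x2 - x0, y2 - y0
    dev = abs((x1 - x0) * dy - (y1 - y0) * dx) / max((dx*dx + dy*dy) ** 0.5, 1e-12)
    if dev <= tol or depth > 16:
        return [p0, p2]
    mid = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    q0, q1 = mid(p0, p1), mid(p1, p2)
    m = mid(q0, q1)                       # point on the curve at t = 1/2
    left = flatten_quadratic(p0, q0, m, tol, depth + 1)
    right = flatten_quadratic(m, q1, p2, tol, depth + 1)
    return left[:-1] + right

# polyline = flatten_quadratic((0, 0), (50, 100), (100, 0))
```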
The aim of our work is to implement a system of automatic face image processing on DSPs: face detection in an image, face recognition and face identification. The first step is to localize the face in an image. Our approach consists of approximating the face's oval shape with an ellipse and computing the coordinates of the center of the ellipse. For this purpose, we explore a new version of the Hough transformation: the Fuzzy Generalized Hough transformation. To reduce the computation time, we also present several parallel implementations of the algorithm on a multi-DSP architecture using the SynDEx tool, a programming environment that generates optimized distributed real-time executives. We show that a speedup factor of 1.7 has been obtained.
It is not uncommon for remote sensing systems to produce in excess of 100 Mbytes/sec. Los Alamos National Laboratory designed a reconfigurable computer to tackle the signal and image processing challenges of high-bandwidth sensors. Reconfigurable computing, based on field programmable gate arrays, offers ten to one hundred times the performance of traditional microprocessors for certain algorithms. This paper discusses the architecture of the computer and the source of the performance gains, as well as an example application. The calculation of multiple matched filters applied to multispectral imagery, showing a forty-five-fold performance advantage over a 450 MHz Pentium II, is presented as an exemplar of algorithms appropriate for this technology.
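The per-pixel computation being accelerated is, in its textbook form, the matched filter w = R^-1 s / (s^T R^-1 s) applied to every spectral vector; a numpy sketch with hypothetical array shapes (mean-centered variant, small ridge added for numerical safety):

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Classical matched filter w = R^-1 s / (s^T R^-1 s) applied to every
    pixel vector of a (rows, cols, bands) cube; `target` is the (bands,)
    signature of interest."""
    X = cube.reshape(-1, cube.shape[-1])
    mu = X.mean(axis=0)
    Xc, s = X - mu, target - mu
    R = Xc.T @ Xc / len(Xc) + 1e-9 * np.eye(cube.shape[-1])  # background cov.
    w = np.linalg.solve(R, s)
    w /= s @ w                              # unit response on the target
    return (Xc @ w).reshape(cube.shape[:2])
```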
This paper describes a complete fast imaging system applied to human movement analysis. The system, composed of a fast CCD camera associated with a real-time processing board, tracks and analyzes the movement of a human operator. It operates at 300 images per second, which is made possible by the use of FPGA circuits.
A cascadable 10 GOPS transversal filter chip has been designed and fabricated; it can operate in 32-tap symmetric, 32-tap anti-symmetric or 16-tap non-symmetric modes. It has programmable tap weights and uses 16-bit signed arithmetic with radix-16 multipliers and 4-2 compressors to reduce the transistor count. The chip was fabricated in a 0.35 micrometer CMOS process, measures 3.1 × 4.4 mm and contains 310,000 transistors. The chip is pipelined and has a maximum clock rate of 200 MHz (200 MSa/s throughput). An error table compensator system using a lookup table has been built using the transversal filter programmed as a wideband differentiator, with some additional on-chip circuits including delays and an adder; an external memory stores the error table. The error table technique is capable of providing between 7 and 15 dB of improvement in the dynamic range of typical 100 MSa/s A/D converters. An application to pulse shaping of high chip rate spread spectrum signals is also discussed.
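To see why the symmetric mode matters arithmetically, note that taps with h[k] = h[N-1-k] let the delay-line samples be paired by addition before multiplying, halving the multiplier count; a numpy sketch (software model only, not the chip's radix-16 datapath):

```python
import numpy as np

def symmetric_fir(x, h_half):
    """Symmetric transversal filter: with h[k] == h[N-1-k], an N-tap
    filter needs only N/2 multiplies per output once the delay-line
    samples are paired by addition."""
    taps = 2 * len(h_half)                  # e.g. 32 taps from 16 weights
    y = np.zeros(len(x) - taps + 1)
    for n in range(len(y)):
        window = x[n:n + taps]
        pairs = window[:taps // 2] + window[:-taps // 2 - 1:-1]  # x[k]+x[N-1-k]
        y[n] = pairs @ h_half
    return y
```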
We present a high performance implementation of the FFT algorithm on the BOPS ManArray parallel DSP processor. The ManArray we consider for this application consists of an array controller and 2 to 4 fully interconnected processing elements. To expose the parallelism inherent to an FFT algorithm we use a factorization of the DFT matrix in Kronecker products, permutation and diagonal matrices. Our implementation utilizes the multiple levels of parallelism that are available on the ManArray. We use the special multiply complex instruction, that calculates the product of two complex 32-bit fixed point numbers in 2 cycles (pipelinable). Instruction level parallelism is exploited via the indirect Very Long Instruction Word (iVLIW). With an iVLIW, in the same cycle a complex number is read from memory, another complex number is written to memory, a complex multiplication starts and another finishes, two complex additions or subtractions are done and a complex number is exchanged with another processing element. Multiple local FFTs are executed in Single Instruction Multiple Data (SIMD) mode, and to avoid a costly data transposition we execute distributed FFTs in Synchronous Multiple Instructions Multiple Data (SMIMD) mode.
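The factorization mentioned above is, in its simplest radix-2 form, the classical identity F_n = (F_2 ⊗ I_m) T (I_2 ⊗ F_m) P with m = n/2, twiddle matrix T and even-odd sorting permutation P; a short numpy check for n = 4:

```python
import numpy as np

def dft_matrix(n):
    w = np.exp(-2j * np.pi / n)
    return w ** np.outer(np.arange(n), np.arange(n))

n = 4; m = n // 2
F2, Fm = dft_matrix(2), dft_matrix(m)
T = np.eye(n, dtype=complex)
T[m:, m:] = np.diag(np.exp(-2j * np.pi * np.arange(m) / n))  # twiddle factors
P = np.eye(n)[list(range(0, n, 2)) + list(range(1, n, 2))]   # even-odd sort
F = np.kron(F2, np.eye(m)) @ T @ np.kron(np.eye(2), Fm) @ P
assert np.allclose(F, dft_matrix(n))
```

Recursing on F_m yields the full FFT; the Kronecker blocks are what map naturally onto SIMD execution across the processing elements.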
Subband-domain algorithms provide an attractive technique for wideband radar array processing. The subband-domain approach decomposes a received wideband signal into a set of narrowband signals. While the number of processing threads in the system increases, the narrowband signals within each subband can be sampled at a correspondingly slower rate; therefore, the data rate at the input is similar to that at the output of the subband processor. There are several advantages to the subbanding method. It can simplify typical radar algorithms such as adaptive beamforming and equalization by virtue of reducing the subband signal bandwidth, thereby potentially reducing the computational complexity relative to an equivalent tapped-delay-line approach. It also allows for greater parallelization of the processing task, hence enabling the use of slower and lower-power hardware. In order to evaluate the validity of the subbanding approach, it is compared with conventional processing methods. This paper focuses on adaptive beamforming and pulse compression performance for a wideband radar system. The performance of an adaptive beamformer is given for a polyphase-filter-based subband approach and is measured against narrowband processing. SINR loss curves and beampatterns for a subband system are presented. Design criteria for subband polyphase filter processing that minimize signal distortion are provided and the distortion is characterized. Finally, subband-domain pulse compression is demonstrated and compared with the conventional approach.
We present the fixed-point analysis and realization of a blind beamforming algorithm. This maximum-power beamforming algorithm consists of the computation of a correlation matrix and its dominant eigenvector, and we propose that the latter be accomplished by the power method. After analyzing the numerical stability of the power method, we derive a division-free form of the algorithm. Based on a block-Toeplitz assumption, we design an FIR-filter-based system to realize both the correlation computation and the power method. Our ring processor, which is optimized to implement digital filters, is used as the core of the architecture. A special technique for dynamically switching filter inputs is shown to double the system throughput. Finally, we discuss the issue of hardware/software hybrid realization.
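A minimal numpy sketch of the power method on a correlation matrix; the per-iteration normalization marked below is what a division-free fixed-point form would replace with a cheap power-of-two scaling:

```python
import numpy as np

def power_method(R, iters=50):
    """Dominant eigenvector of a correlation matrix by repeated
    multiplication."""
    v = np.ones(R.shape[0])
    for _ in range(iters):
        v = R @ v
        v /= np.linalg.norm(v)   # fixed point: scale by 2**-k (a shift) instead
    return v
```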
The problems of blind decorrelation and blind deconvolution have attracted considerable interest recently. These two problems traditionally have been studied as two different subjects, and a variety of algorithms have been proposed to solve them. In this paper, we consider these two problems jointly in the application of a multi-sensor network and propose a new algorithm for them. In our model, the system is a MIMO system (multiple-input multiple-output) which consists of linearly independent FIR channels. The unknown inputs are assumed to be uncorrelated and persistently excited. Furthermore, inputs can be colored sources and their distributions can be unknown. The new algorithm is capable of separating multiple input sources passing through some dispersive channels. Our algorithm is a generalization of Moulines' algorithm from single input to multiple inputs. The new algorithm is based on second order statistics which require shorter data length than the higher order statistics algorithms for the same estimation accuracy.
For single-input multiple-output (SIMO) systems, blind deconvolution based on second-order statistics has been shown to be promising, given that the sources and channels meet certain assumptions. In our previous paper we extended the work to multiple-input multiple-output (MIMO) systems by introducing a blind deconvolution algorithm to remove all channel dispersion, followed by a blind decorrelation algorithm to separate the different sources from their instantaneous mixture. In this paper we first explore more details embedded in our algorithm. Then we present simulation results to show that our algorithm is applicable to MIMO systems excited by a broad class of signals such as speech, music and digitally modulated symbols.
The problem of source localization from arrival time delay estimates requires a computationally costly iterative solution of a set of nonlinear equations. Most known methods assume that the propagation speed is known. In this paper, we provide several effective source localization and propagation velocity estimation methods which use only measurements of the relative arrival time delays between sensors. The formulae for source localization and propagation speed estimation are derived based on least squares, total least squares, bounded data uncertainty, and constrained least squares methods. The statistical performance of these methods is compared via computer simulation. In addition, in order to avoid time delay ambiguity problems and obtain smoother time delays, two time delay smoothing methods based on the forward-backward algorithm and the Viterbi algorithm are also proposed. Field experiment results based on these techniques are also presented.
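As a baseline illustrating the joint estimation problem (the paper derives non-iterative least-squares formulae; this sketch simply runs a generic iterative solver), position and propagation speed are fit together to relative-delay residuals. The sensor layout and initial guess are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def locate(sensors, tdoas, c0=340.0):
    """Jointly fit a planar source position and propagation speed c to
    relative delays tdoas[i-1] = (|x - s_i| - |x - s_0|) / c, i = 1..n-1
    (needs at least 4 sensors for the 3 unknowns)."""
    def residual(p):
        x, c = p[:2], p[2]
        rng = np.linalg.norm(sensors - x, axis=1)
        return (rng[1:] - rng[0]) - c * np.asarray(tdoas)
    p0 = np.append(sensors.mean(axis=0) + 1.0, c0)   # crude initial guess
    return least_squares(residual, p0).x             # [x, y, c]
```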
The analysis of vehicle signals with methods derived from the theory of nonlinear dynamics is a potential tool for classifying different vehicles. The nonlinear dynamical methodologies provide system information that linear analysis tools ignore. In order to observe the nonlinear dynamic phenomena more clearly and estimate system invariants more robustly, we exploit the maximum-power blind beamforming algorithm as a signal enhancement and noise reduction method when the locations of the source and sensors are unknown. The dynamical behavior of an acoustic vehicle signal is studied using the correlation dimension D2 and Lyapunov exponents. To characterize the nonlinear dynamic behavior of the acoustic vehicle signal, Takens' embedding theorem is used to form an attractor in phase space based on a single observed time series. The time series is obtained from the coherently enhanced output of a blind beamforming array. Then the Grassberger-Procaccia algorithm and the Sano-Sawada method are exploited to compute the correlation dimension and Lyapunov exponents. In this paper, we also propose some efficient computational methods for evaluating these system invariants. Experimental classification results show that the maximum-power blind beamforming processing improves the estimation of the invariants of the nonlinear dynamic system. Preliminary results show that nonlinear dynamics is useful for classification applications.
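A numpy sketch of the Grassberger-Procaccia step: Takens-embed the (enhanced) time series, form the correlation sum C(r), and read D2 off the slope of log C(r) versus log r. The embedding dimension, lag, and radii below are illustrative choices.

```python
import numpy as np

def correlation_sum(series, dim=5, lag=2):
    """Grassberger-Procaccia: embed the series (Takens), then compute the
    fraction of point pairs closer than r for a range of radii r."""
    n = len(series) - (dim - 1) * lag
    X = np.stack([series[i*lag : i*lag + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]
    radii = np.logspace(-2, 0, 10) * d.max()
    C = np.array([(d < r).mean() for r in radii])
    return radii, C

# D2 is the slope of log C(r) vs log r over the scaling region
r, C = correlation_sum(np.sin(np.linspace(0, 100, 800)))
slope = np.polyfit(np.log(r[2:7]), np.log(C[2:7] + 1e-12), 1)[0]
```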
The ULV decomposition (ULVD) is an important member of a class of rank-revealing two-sided orthogonal decompositions used to approximate the singular value decomposition (SVD). The ULVD can be updated and downdated much faster than the SVD, hence its utility in the solution of recursive total least squares (TLS) problems. However, the robust implementation of ULVD after the addition and deletion of rows (called updating and downdating respectively) is not altogether straightforward. When updating or downdating the ULVD, the accurate computation of the subspaces necessary to solve the TLS problem is of great importance. In this paper, algorithms are given to compute simple parameters that can often show when good subspaces have been computed.
It is shown how some simple multilinear transformations may be used to generate an alternative, and potentially faster, version of the BLISS algorithm for blind signal separation. The BLISS algorithm performs an independent component analysis of the type proposed by Comon but uses a different method for estimating the pairwise rotation angles. It has been applied to a wide range of communication signals and has proved extremely successful in practice. The reformulated version of BLISS requires fewer arithmetic operations provided the number of signals to be separated is fairly small (approximately 3 to 5), as found in a range of important communication scenarios.
The performance of high-resolution array processing methods is known to degrade in random inhomogeneous media, because the amplitude and phase of each wavefront tend to fluctuate and to lose their coherence between array sensors. As a result, in the presence of such multiplicative noise, the conventional coherent wavefront model becomes inapplicable. This type of degradation may be especially strong for large aperture arrays. We develop new high-resolution covariance matching (CM) techniques with improved robustness against multiplicative noise and the related coherence losses. Using a few unrestrictive physics-based assumptions on the environment, we show that reliable algorithms can be developed which take possible coherence losses into account. Computer simulation results and real sonar data processing results are presented. These results demonstrate drastic improvements achieved by our approach as compared with conventional high-resolution array processing techniques.
The conventional resolution of individual emitters or frequencies within a cluster is limited by the physical dimensions and electrical aspects (such as the bandwidth) of a sensor system. Super-resolution describes algorithmic techniques that potentially enhance the conventional degree of resolution. Although there has been considerable research into super-resolution techniques (since 1970), there has, in contrast, been very little that addresses the fundamental bound of resolution performance that should theoretically be achievable by a 'perfect' algorithm in ideal conditions. The purpose of this paper is to present a generic method for predicting the fundamental resolution limit. We show that the resolution of closely-spaced signal waveforms is intrinsically linked to the signal-to-noise ratios of those signals. The method can be applied to individual spatial, temporal or spectral discriminants or to multi-discriminant systems. Loss of SNR resulting from the need to separate signals is derived both for the matched filter case and for eigen decomposition.
We propose a clustering algorithm that dynamically inserts and relocates cluster units based only on their interaction with neighboring clusters and data points. This leads to update and allocation procedures for center locations based on local data distributions. These local data distributions can be uncovered by examining neighboring clusters or local interconnections between center locations. The consequence of adapting only the centers nearest to a newly inserted cluster unit is a significant reduction in the computational power necessary for finding the center distribution that reduces the global distortion error. The proposed algorithm inserts new cluster units based on local distortion errors and utility measures, and uses a local LBG routine to integrate the new unit. Experiments have shown that it is not necessary to let the LBG routine converge in order to achieve integration of the new unit; the number of necessary iterations is instead determined by the center distribution in the neighborhood of new units. The algorithm thus offers a considerable speedup compared to conventional clustering algorithms that take the entire data set into account when inserting or relocating cluster units.
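A rough numpy sketch of the insertion step under simplifying assumptions (squared-error distortion, insertion at the highest-distortion point, a fixed small number of LBG iterations; the paper's utility measures are not modeled): only the new unit and the previous owners of the points it captures are updated.

```python
import numpy as np

def insert_unit(centers, X, lbg_iters=3):
    """Insert a new cluster unit at the point with the largest distortion,
    then run a few local LBG (k-means) iterations restricted to the new
    unit and its neighboring centers."""
    d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    owner = d.argmin(1)
    worst = d[np.arange(len(X)), owner].argmax()   # highest-distortion point
    centers = np.vstack([centers, X[worst]])
    new = len(centers) - 1
    for _ in range(lbg_iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        owner_new = d.argmin(1)
        affected = set(owner[owner_new == new]) | {new}   # neighbors only
        for c in affected:
            pts = X[owner_new == c]
            if len(pts):
                centers[c] = pts.mean(0)
        owner = owner_new
    return centers
```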
We introduce the concept of ε-displacement rank, which allows us to devise a fast algorithm for the approximate solution of BBBT/BTB (Block Banded Block Toeplitz with Banded Toeplitz Blocks) systems by means of cyclic reduction. We also introduce the concept of an incomplete displacement block LU factorization of a Toeplitz-like matrix, where the displacement structure is imposed on the blocks of the factors L and U. The role of the matrix LU as a preconditioner is discussed. Finally, we propose another preconditioner obtained by extending a BBBT/BTB matrix to a banded Toeplitz matrix. Some open problems are addressed.
The classical Caratheodory-Fejer and Nevanlinna-Pick interpolation problems have a long and distinguished history, appearing in a variety of applications in mathematics and electrical engineering. It is well known that these problems can be solved in O(n^2) operations, where n is the overall multiplicity of interpolation points. In this paper we suggest a superfast algorithm for solving the more general confluent tangential interpolation problem. The cost of the new algorithm varies from O(n log^2 n) to O(n log^3 n), depending on the multiplicity pattern of the interpolation points. The new algorithm can be used to factorize, invert, and solve a linear system of equations with confluent-Cauchy-like matrices. This class of matrices includes Hankel-like (i.e., permuted Toeplitz-like), Vandermonde-like and Cauchy-like matrices as special cases. An important ingredient of the proposed method is a new fast algorithm to compute the product of a confluent-Cauchy-like matrix by a vector.
This paper analyzes the important steps of an O(n^2 log n) algorithm for finding the eigenvalues of a complex Hankel matrix. The three key steps are a Lanczos-type tridiagonalization algorithm, a fast FFT-based Hankel matrix-vector product procedure, and a QR eigenvalue method based on complex-orthogonal transformations. In this paper, we present an error analysis of the three steps, as well as results from numerical experiments.
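The fast Hankel matrix-vector product rests on the fact that a Hankel product is a convolution with the reversed vector, so a transform of length 2n - 1 suffices; a small numpy check:

```python
import numpy as np

def hankel_matvec(h, x):
    """y = H x for the Hankel matrix H[i, j] = h[i + j], computed as a
    convolution with the reversed vector (np.convolve here; an FFT of
    length 2n - 1 gives the O(n log n) version)."""
    n = len(x)                       # h has length 2n - 1
    c = np.convolve(h, x[::-1])
    return c[n - 1 : 2 * n - 1]

# check against the dense product
n = 5
h = np.random.randn(2 * n - 1) + 1j * np.random.randn(2 * n - 1)
H = np.array([[h[i + j] for j in range(n)] for i in range(n)])
x = np.random.randn(n)
assert np.allclose(hankel_matvec(h, x), H @ x)
```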
This paper gives displacement structure algorithms for the factorization of positive definite and indefinite Hankel and Hankel-like matrices. The positive definite algorithm uses orthogonal symplectic transformations in place of the Σ-orthogonal transformations used in Toeplitz algorithms. The indefinite algorithm uses a look-ahead step and is based on the observation that displacement structure algorithms for Hankel factorization have a natural and simple block generalization. Both algorithms can be applied to Hankel-like matrices of arbitrary displacement rank.
Continued fractions (CFs) enable straightforward representation of elementary functions and rational approximations. We improve the positional algebraic algorithm, which computes homographic functions such as y = (ax + b)/(cx + d), given redundant continued fractions x and y and integers a, b, c, d. The improved algorithm for the linear fractional transformation produces exact results given regular continued fraction input. In case the input is in redundant continued fraction form, our improved linear algorithm increases the percentage of exact results with 12-bit state registers from 78% to 98%, and the maximal error of non-exact results is improved from approximately 1 to 2^-8. Indeed, by detecting a small number of cases, we can add a final correction step to improve the guaranteed accuracy of non-exact results. We refer to the fact that a few results may not be exact as 'Semi-Exact' arithmetic. We detail the adjustments to the positional algebraic algorithm concerning register overflow, the virtual singularities that occur during the computation, and the errors due to non-regular, redundant CF inputs.
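For the regular (non-redundant) input case, the term-absorption step has a compact 2x2 matrix form; the sketch below absorbs all CF terms of x into the state (a, b, c, d) and returns the resulting convergent of y. The paper's on-line algorithm, which interleaves digit output with absorption and handles redundant digits, register overflow and singularities, is considerably more involved.

```python
from fractions import Fraction

def homographic(cf_terms, a, b, c, d):
    """Evaluate y = (a*x + b) / (c*x + d) for x given as a regular
    continued fraction [q0; q1, q2, ...].  Absorbing a term q (i.e.
    substituting x = q + 1/x') sends (a, b, c, d) to
    (a*q + b, a, c*q + d, c); the final ratio a/c is y's convergent."""
    for q in cf_terms:
        a, b = a * q + b, a
        c, d = c * q + d, c
    return Fraction(a, c)

# x = [1; 2, 2, 2, ...] converges to sqrt(2), so (x+1)/(x-1) -> 3 + 2*sqrt(2)
print(float(homographic([1] + [2] * 25, 1, 1, 1, -1)))   # ~5.8284271...
```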
Linearized rational interpolation problems at roots of unity play a crucial role in the fast and superfast Toeplitz solvers that we have developed. Our interpolation algorithm is a sequential algorithm in which a matrix polynomial that satisfies already some of the interpolation conditions is updated to satisfy two additional interpolation conditions. In the algorithm that we have used so far, the updating matrix, which is a matrix polynomial of degree one, is constructed in a two-step process that resembles Gaussian elimination. We briefly recall this approach and then consider two other approaches. The first one is a completely new approach based on an updating matrix that is unitary with respect to a discrete inner product that is based on roots of unity. The second one is an application of an algorithm for solving discrete least squares problems on the unit circle, a problem that has linearized rational interpolation at roots of unity as its limiting case. We conduct a number of numerical experiments to compare the three strategies.
Cauchy-Vandermonde matrices are related to rational interpolation problems. In this paper we consider the general case in which multiple poles can appear. Fast algorithms for solving the corresponding linear systems are presented. Some results on the total positivity of these matrices and other related matrices are included.
The idea is (1) to use a variable energy allocation per signaling symbol and (2) to pseudorandomize the order in which signaling symbols are transmitted. This strategy allows the most important part of the sensor data to be decorrelated against periodic, unequal-power interfering noise, specifically where that noise is purposely tailored or allocated in an attempt to provide maximum disruption of information transport, as in 'smart' jamming. In a very real sense, perfect compression followed by this method leads to a genre of spread spectrum known as Time Hopping (TH).
A digital audio watermarking scheme of low complexity is proposed in this research as an effective way to deter users from misusing or illegally distributing audio data. Previous work on audio watermarking has primarily focused on the inaudibility of the embedded watermark and its robustness against attacks such as compression and noise. In this research, special attention is paid to the synchronization attack caused by casual audio editing or malicious random cropping, a low-cost yet effective attack on previously developed watermarking algorithms. The proposed scheme is based on audio content analysis and watermark embedding in the Fourier transform domain. A blind watermark detection technique is developed to identify the embedded watermark under various types of attacks.
This paper describes a technique for transmit antenna nulling for low-cost, large, sparse phased array radar systems. The radar system described includes an array of elemental antennas, each with a transmit/receive (T/R) module. The T/R modules are operated at or near maximum output to achieve maximum DC-to-RF efficiency. A phase controller controls the phase shifts imparted by each module to its signal to form a mainbeam and its associated sidelobes. A perturbation phase generator adds computed phase shifts to form wide nulls in the sidelobe structure. The nulls are achieved with very little loss of gain, on the order of a fraction of a dB. The speed with which these nulls are obtained in real time allows rapid steering of the nulls in a hostile environment. The thinned aperture allows the design of a lightweight mobile system. In a radar context, these nulls may be placed on a source of ground clutter, a set of jammers or a set of undesirable radio sources.
The use of orthonormal scaling functions, wavelets and wavelet packets for modulation has recently been proposed in references 1-3. In this paper we study the performance of the Meyer, Daubechies, biorthogonal spline, Battle-Lemarie, and modified Gaussian families of wavelets for modulation over linear and nonlinear channels, and we propose the use of wavelets for modulation over FDM nonlinear satellite channels and for multicarrier modulation on satellite and multipath channels. Simulation results indicate that in certain cases wavelet-based shaping pulses can outperform the traditional techniques in extremely critical transmission conditions, such as satellite transmission impaired by interchannel interference or transmission on multipath channels.
This paper describes a digital beamforming architecture for nulling a mainlobe jammer and multiple sidelobe jammers while maintaining the angle estimation accuracy of the monopulse ratio. A sidelobe-jammer-canceling adaptive array is cascaded with a mainlobe jammer canceller, and a mainlobe maintenance technique (constrained adaptation) is imposed during the sidelobe cancellation process so that sidelobe jammer cancellation does not distort the subsequent mainlobe cancellation. The sidelobe jammers and the mainlobe jammer are thus cancelled sequentially in separate processes. This adaptive digital beamforming technique improves radar processing for determining the angular location of a target, and specifically the monopulse technique, by adaptively suppressing jamming before the sum and difference beams are formed, so that the accuracy of the monopulse ratio is maintained in the presence of jamming.
We obtain explicit expressions for the time, scale, and frequency moments of the wavelet transform in terms of the moments of the signal and wavelet. We show that generally they do not exist for common signals, even when the moments of the signal and wavelet do exist. The peculiar behavior of the wavelet transform in this regard is pinpointed. The lack of existence of simple moments makes the interpretation and usefulness of the wavelet transform for time-frequency analysis problematic, and it is argued that its behavior is quite poor when compared to other simple time-frequency methods, such as the short-time Fourier transform.
This paper outlines means of using special sets of orthonormally related windows to realize Cohen's class of time-frequency distributions (TFDs). This is accomplished by decomposing the kernel of the distribution in terms of the set of analysis windows to obtain short-time Fourier transforms (STFTs). The STFTs obtained using these analysis windows are used to form spectrograms, which are then linearly combined with proper weights to form the desired TFD. A set of orthogonal analysis windows which also have the scaling property proves to be very effective, requiring only 1 + log2(N - 1) distinct windows for an overall analysis of N + 1 points, where N = 2^n with n a positive integer. Application of this theory offers very fast computation of TFDs, since very few analysis windows are needed and fast, recursive STFT algorithms can be used. Additionally, it is shown that a minimal set of specially derived orthonormal windows can represent most TFDs, including Reduced Interference Distributions (RIDs), with only three distinct windows plus an impulse window. Finally, the Minimal Window RID (MW-RID), which achieves RID properties with only one distinct window and an impulse window, is presented.
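A compact numpy sketch of the realization: spectrograms taken with a small set of analysis windows are combined linearly. The weights, which the paper obtains by decomposing the distribution kernel over the window set, are assumed given here.

```python
import numpy as np

def spectrogram(x, window, hop=1):
    """|STFT|^2 of x with one analysis window (frame-by-frame FFT)."""
    N = len(window)
    frames = [x[i:i + N] * window for i in range(0, len(x) - N + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def combined_tfd(x, windows, weights, hop=1):
    """TFD as a weighted linear combination of spectrograms taken with a
    small set of orthonormal analysis windows."""
    return sum(w * spectrogram(x, win, hop)
               for w, win in zip(weights, windows))
```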
In earlier work, Thomson's adaptive multitaper spectrum estimation method was extended to the nonstationary case. This paper reviews the time-frequency multitaper method and the adaptive procedure, and explores some properties of the eigenvalues and eigenvectors. The variance of the adaptive estimator is used to construct an adaptive smoother, which is used to form a high resolution estimate. An F-test for detecting and removing sinusoidal components in the time-frequency spectrum is also given.
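For reference, a minimal multitaper estimate with orthonormal Slepian tapers (scipy's dpss); the plain average below is the starting point that Thomson's adaptive weighting, and the paper's time-frequency extension, refine.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum(x, NW=4, K=7):
    """Average the eigenspectra from K orthonormal DPSS tapers (plain
    average; Thomson's adaptive procedure reweights these using the
    taper eigenvalues)."""
    tapers = dpss(len(x), NW, K)                     # K x N, orthonormal rows
    eigspec = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return eigspec.mean(axis=0)
```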
We present a new class of transforms which facilitates the processing of signals that are nonlinearly stretched or compressed in time. We refer to nonlinear stretching and compression as warping. While the magnitude of the Fourier transform is invariant under time shift operations, and the magnitude of the scale transform is invariant under (linear) scaling operations, the new class of transforms is magnitude invariant under warping operations. The new class contains the Fourier transform and the scale transform as special cases. Important theorems, like the convolution theorem for Fourier transforms, are generalized into theorems that apply to arbitrary members of the transform class. Cohen's class of time-frequency distributions is generalized to joint representations in time and arbitrary warping variables. Special attention is paid to a modification of the new class of transforms that maps an arbitrary time-frequency contour into an impulse in the transform domain. A chirp transform is derived as an example.
In this paper, we use the concept of the evolutionary spectrum to solve key problems in array processing. We present the cross-power evolutionary periodogram for direction finding and blind separation of nonstationary signals. We model the nonstationary signals received by each sensor in the array as a sum of complex sinusoids with time-varying amplitudes. These amplitudes carry information about the direction of arrival, which may also be time-varying. We first estimate the time-varying amplitudes, then use the results for the estimation of the evolutionary cross-power distributions of the sensor data. Next, using cross-power estimates at time-frequency samples of interest, we estimate the directions of arrival using one of the existing high resolution direction finding methods. If the directions are time-varying, we select time-frequency points around the time of interest; by carrying out the estimation at different times, we obtain the directions as a function of time. If the sources are stationary, then we can use all time-frequency points of interest for the estimation of the fixed directions. We also use whitening and subspace methods to find the mixing matrix and separate the signals received by the array. Simulation examples illustrating the performance of the proposed algorithms are presented.
Amin et al. recently developed a time-frequency MUSIC algorithm with narrowband models for the estimation of direction of arrival (DOA) when the source signals are chirps. In this research, we consider wideband models. Joint time-frequency analysis is first used to estimate the chirp rates of the source signals, and then the DOA is estimated by the MUSIC algorithm with an iterative approach.
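The chirp-rate estimation step can be illustrated by a simple dechirping search: for each candidate rate, remove the quadratic phase and measure how strongly the energy concentrates in frequency. This generic estimator, with its grid and names, is our illustration rather than the authors' joint time-frequency procedure.

```python
# Illustrative chirp-rate estimator: the correct rate turns the chirp into a
# pure tone, maximizing the peak of the FFT magnitude. x is assumed to be a
# complex (analytic) sensor signal; the search grid is an assumption.
import numpy as np

def estimate_chirp_rate(x, fs, rate_grid):
    n = np.arange(len(x)) / fs
    best_rate, best_peak = None, -np.inf
    for alpha in rate_grid:                      # alpha in Hz per second
        dechirped = x * np.exp(-1j * np.pi * alpha * n ** 2)
        peak = np.abs(np.fft.fft(dechirped)).max()
        if peak > best_peak:
            best_rate, best_peak = alpha, peak
    return best_rate
```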
This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for nonstationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain and, as such, enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multidimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need for any preprocessing that is subject to constraints on array geometry and array aperture.
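A minimal sketch of the concentrated deterministic ML criterion for two sources is given below: the DOA estimate maximizes tr(P_A(θ)R), where P_A is the projector onto the steering subspace and R a covariance estimate. The grid search and ULA model are illustrative; working over smaller time-frequency regions would follow the same pattern with fewer signals per region.

```python
# Hedged sketch of deterministic (concentrated) ML DOA for two sources:
# grid-search the angle pair maximizing tr(P_A R). Geometry is assumed ULA.
import numpy as np
from itertools import combinations

def ml_doa_two_sources(R, n_sensors, angle_grid_deg):
    k = np.arange(n_sensors)[:, None]
    def steer(th):
        return np.exp(1j * np.pi * k * np.sin(np.deg2rad(th)))
    best, best_val = None, -np.inf
    for th1, th2 in combinations(angle_grid_deg, 2):
        A = np.hstack([steer(th1), steer(th2)])       # n_sensors x 2
        P = A @ np.linalg.pinv(A)                     # projector onto span(A)
        val = np.real(np.trace(P @ R))
        if val > best_val:
            best, best_val = (th1, th2), val
    return best
```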
In this paper we introduce a multi-scale deconvolution technique performed in the scale domain. In sensor array applications such as radar, sonar and seismic processing, the sensor outputs are modeled as the convolution of the unknown source signal with various unknown system impulse responses that are scaled versions of each other with unknown scale parameters. In many applications, these signals or the scaling parameters must be estimated from the sensor outputs alone. In our earlier work, we estimated the unknown scale parameters by using properties of the scale transform and then employed existing deconvolution algorithms. Here, we derive the multi-scale blind deconvolution algorithm in the scale transform domain. The performance of the method is illustrated using simulation examples.
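The scale-transform property exploited in the earlier work can be illustrated directly: a time scaling x(at) becomes a pure shift of ln a on a logarithmic time axis, so a relative scale estimate can be read off a cross-correlation of log-resampled signals. The sketch below, with illustrative grids, shows only this estimation step, not the scale-domain deconvolution itself.

```python
# Minimal sketch of scale-parameter estimation: on a log-time grid,
# y(t) = x(a t) appears as x shifted by ln(a); cross-correlation finds it.
import numpy as np

def relative_scale(x, y, t, n_log=2048):
    tau = np.linspace(np.log(t[1]), np.log(t[-1]), n_log)   # log-time grid
    dtau = tau[1] - tau[0]
    xl = np.interp(np.exp(tau), t, np.abs(x))
    yl = np.interp(np.exp(tau), t, np.abs(y))
    c = np.correlate(yl - yl.mean(), xl - xl.mean(), mode="full")
    lag = (np.argmax(c) - (n_log - 1)) * dtau
    return np.exp(-lag)     # estimate of the scale factor a in y(t) = x(a t)
```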
Scale as a physical quantity is a recently developed concept. The scale transform can be viewed as a special case of the more general Mellin transform, and its mathematical properties are very applicable to the analysis and interpretation of signals subject to scale changes. A number of one-dimensional applications of the scale concept have been made in speech analysis, processing of biological signals, machine vibration analysis and other areas. Recently, the scale transform was also applied in multi-dimensional signal processing and used for image filtering and denoising. Discrete implementation of the scale transform can be carried out using logarithmic sampling and the well-known fast Fourier transform. Nevertheless, in the case of uniformly sampled signals, this implementation involves resampling. An algorithm that avoids resampling of uniformly sampled signals has also been derived. In this paper, a modification of the latter algorithm for discrete implementation of the direct scale transform is presented. In addition, a similar concept is used to improve a recently introduced discrete implementation of the inverse scale transform. Estimation of the absolute discretization errors showed that the modified algorithms have the desirable property of yielding a smaller region of possible error magnitudes. Experimental results are obtained using artificial signals as well as signals evoked from the temporomandibular joint. In addition, discrete implementations of the separable two-dimensional direct and inverse scale transforms are derived. Experiments with image restoration and scaling through the two-dimensional scale domain, using the novel implementation of the separable two-dimensional scale transform pair, are presented.
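The logarithmic-sampling route mentioned above is easy to state in code: substituting t = e^τ turns the scale transform into an ordinary Fourier transform of e^{τ/2} x(e^τ). The sketch below implements that resampling-based baseline (not the resampling-free algorithm this paper modifies); grid sizes are illustrative.

```python
# Sketch of the log-sampling + FFT discrete scale transform:
# D(c) = (1/sqrt(2*pi)) * integral of x(e^tau) e^(tau/2) e^(-j c tau) dtau.
import numpy as np

def scale_transform(x, t, n_log=4096):
    tau = np.linspace(np.log(t[1]), np.log(t[-1]), n_log)   # avoid t = 0
    dtau = tau[1] - tau[0]
    xs = np.interp(np.exp(tau), t, x) * np.exp(tau / 2.0)
    D = np.fft.fftshift(np.fft.fft(xs)) * dtau / np.sqrt(2.0 * np.pi)
    c = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(n_log, d=dtau))
    return c, D          # scale variable c and transform values D(c)
```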
Investigating a number of different integral transforms uncovers distinct patterns in the type of scale-based convolution theorems afforded by each. It is shown that scaling convolutions behave much like translational convolutions in the transform domain, so that the many diverse transforms admit only a few distinct forms of convolution theorem. The hypothesis is put forth that the space of integral transforms is partitionable based on these forms.
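One of the recurring forms, stated here for the Mellin transform (of which the scale transform is a special case): a multiplicative (scaling) convolution maps to a pointwise product, mirroring the translational convolution theorem of the Fourier transform.

```latex
(x \ast y)(t) = \int_0^\infty x\!\left(\frac{t}{\tau}\right) y(\tau)\,\frac{d\tau}{\tau},
\qquad
\mathcal{M}[f](s) = \int_0^\infty f(t)\, t^{s-1}\, dt,
\qquad
\mathcal{M}[x \ast y](s) = \mathcal{M}[x](s)\,\mathcal{M}[y](s).
```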
We apply time-frequency methods to automotive vibration signals for sound quality analysis. Our analysis indicates that time-frequency methods provide information relevant to the assessment of noise and sound quality beyond that provided by the spectral density and the vibration time series.
We present a discussion of methods based on the complex cross-spectrum and the application of these methods to the analysis of speech. The cross-spectral methods developed here are an extension of methods developed in the 1980s by one of the authors for accurately estimating stationary and cyclostationary parameters of signals buried deep in noise. Since speech is nonstationary and therefore supports very little integration, the methods have been redeveloped to address issues such as nonstationarity, harmonic structure, and rapidly changing resonances. Cross-spectral methods are presented as complex-valued time-frequency surface methods that provide signal parameter estimation by taking advantage of signal structure. These methods have proven to be very powerful.
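A minimal sketch of the core object, a complex-valued cross-spectral time-frequency surface, is the product of one signal's STFT with the conjugate of the other's: its magnitude exposes common time-frequency structure and its phase the relative phase or delay. The STFT parameters below are illustrative assumptions.

```python
# Sketch of a complex cross-spectral time-frequency surface from two STFTs.
import numpy as np
from scipy.signal import stft

def cross_spectrogram(x, y, fs, nperseg=256, noverlap=192):
    f, t, X = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    _, _, Y = stft(y, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return f, t, X * np.conj(Y)     # complex-valued surface over (t, f)
```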
We present a number of methods that use image and signal processing techniques to remove noise from a signal. The basic idea is to first construct a time-frequency density of the noisy signal. The time-frequency density, which is a function of two variables, can then be treated as an 'image,' enabling the use of image processing methods to remove noise and enhance the image. Having obtained an enhanced time-frequency density, one then reconstructs the signal. Various time-frequency densities are used, and a number of image processing methods are investigated. Examples involving human speech and whale sounds are given. In addition, new methods are presented for the estimation of signal parameters from the time-frequency density.
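The pipeline described above can be sketched in a few lines: form an STFT, treat its magnitude as an image, apply a standard image filter, and invert with the original phase. The filter choice and parameters are illustrative; since a modified STFT is generally not exactly invertible, istft returns a least-squares-style reconstruction.

```python
# Sketch of the spectrogram-as-image idea: smooth |STFT| with an image
# filter, keep the noisy phase, and invert back to a time signal.
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import median_filter

def tf_denoise(x, fs, nperseg=256):
    f, t, X = stft(x, fs=fs, nperseg=nperseg)
    mag = median_filter(np.abs(X), size=(3, 5))    # treat |X| as an image
    X_hat = mag * np.exp(1j * np.angle(X))         # reuse the original phase
    _, x_hat = istft(X_hat, fs=fs, nperseg=nperseg)
    return x_hat
```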
The identification of structural systems using time-frequency analysis has recently been proposed as a way to detect possible damage under normal serviceability conditions. Accelerometric signals are recorded at different points of the structure. They consist of the superposition of vibration modes (which identify the system) and residual components. Each vibration mode is represented by its frequency, its amplitude at different points of the structure, and the damping factor that determines its amplitude modulation. We have recently proposed a Cohen class cross-time-frequency based technique to identify the vibration modes. In this paper we show how the technique may be developed into a fully automatic procedure, and we discuss how the use of adaptive kernels may improve the reliability of the identification. The automatic procedure is based on two properties that characterize the vibration modes: (1) the ratio between the amplitudes of the same modal component at different points of the structure is constant; and (2) the phase difference between the signals corresponding to the same modal component at different points of the structure is constant. These properties enable the vibration modes and residual components to be discriminated.
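The two discriminating properties can be checked directly on cross-spectra of two measurement channels, as in the hedged sketch below; the paper itself works with Cohen class cross-time-frequency distributions and adaptive kernels, so the STFT, thresholds, and names here are illustrative stand-ins.

```python
# Sketch of the two modal-component tests: at a candidate modal frequency,
# a true mode shows a nearly constant amplitude ratio and a nearly constant
# phase difference between two measurement points over time.
import numpy as np
from scipy.signal import stft

def looks_like_mode(x1, x2, fs, freq_hz, nperseg=1024, tol=0.1):
    f, t, X1 = stft(x1, fs=fs, nperseg=nperseg)
    _, _, X2 = stft(x2, fs=fs, nperseg=nperseg)
    k = np.argmin(np.abs(f - freq_hz))             # nearest frequency bin
    ratio = np.abs(X1[k]) / (np.abs(X2[k]) + 1e-12)
    dphi = np.angle(X1[k] * np.conj(X2[k]))
    # Both quantities should stay essentially constant over time for a mode.
    return ratio.std() / ratio.mean() < tol and dphi.std() < tol
```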
We present a signal-based technique for evaluating a pole-zero representation of the resonant response of a violin instrument. This technique combines time-frequency signal analysis with system identification techniques to determine the pole-zero function that would account for amplitude modulation observed on the partials of violin notes performed with vibrato. Violin vibrato signals are analyzed with the modal distribution to obtain values of instantaneous amplitude and frequency for each partial. From these, input and output functions are synthesized and used to estimate the violin body's impulse response using an infinite impulse response (IIR) system identification procedure. In each case, the input and output functions share the same instantaneous frequency of the measured partial. However, the rapid amplitude variations are present only on the output function. We report on the location and spacing of these estimated resonances and discuss their relationship to those obtained from theoretical predictions and other measurement procedures.
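As a simplified stand-in for the IIR identification step, the sketch below fits an equation-error (ARX) model to a synthesized input u and output y by linear least squares; the model orders are assumptions, and the paper's actual identification procedure may differ.

```python
# Hedged sketch of pole-zero estimation by equation-error least squares:
# fit y[n] = -sum(a_i * y[n-i]) + sum(b_j * u[n-j]) and return (b, a),
# usable with scipy.signal.lfilter / freqz. u, y are numpy arrays.
import numpy as np

def arx_fit(u, y, na=8, nb=8):
    N = len(y)
    rows = range(max(na, nb), N)
    # Regressors: negated past outputs, then current and past inputs.
    Phi = np.array([np.concatenate((-y[n - na:n][::-1],
                                    u[n - nb + 1:n + 1][::-1]))
                    for n in rows])
    theta, *_ = np.linalg.lstsq(Phi, y[max(na, nb):], rcond=None)
    a = np.concatenate(([1.0], theta[:na]))   # denominator A(z)
    b = theta[na:]                            # numerator B(z)
    return b, a
```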
In this paper, a method for signal component separation, operating in the time-frequency (TF) plane and employing a Turbo Estimation Algorithm (TEA), is described. A novel 2D distribution is proposed, named the Two Window Spectrogram (TWS), which is free from crossterms and yields good resolution in both time and frequency. A set of parameters carrying the relevant information about the signal components is then defined in the time-frequency plane. An algorithm for estimating these parameters is proposed, using a TEA scheme to improve performance. The algorithm has been tested by simulation, with very encouraging results.
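The abstract does not spell out the TWS construction, so the following is only one plausible illustration of a two-window combination: a short window supplies time resolution, a long window frequency resolution, and a pointwise geometric mean retains only energy present in both (spectrograms being crossterm-free by construction). All choices below are assumptions, not the authors' definition.

```python
# Plausible two-window spectrogram combination (illustrative only).
import numpy as np
from scipy.signal import stft

def two_window_spectrogram(x, fs, short=64, long=512):
    f, t, Xs = stft(x, fs=fs, nperseg=short, noverlap=short - 16, nfft=long)
    _, _, Xl = stft(x, fs=fs, nperseg=long, noverlap=long - 16, nfft=long)
    n = min(Xs.shape[1], Xl.shape[1])          # crude frame alignment
    return f, np.sqrt(np.abs(Xs[:, :n]) * np.abs(Xl[:, :n]))
```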
As with the case of instantaneous frequency, it is often difficult to interpret the instantaneous bandwidth of most signals: both quantities typically range beyond the spectral support of the signal, yielding the paradox that the instantaneous bandwidth (and frequency) can be greater than the global bandwidth of the signal. A new definition of instantaneous frequency that does not suffer from this difficulty has recently been given, and we build on those results here to obtain a new definition of instantaneous bandwidth. Kernel constraints for a Cohen-class time-frequency distribution to yield these new results for its conditional moments are also given.
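For reference, the classical quantities at issue, for an analytic signal written as x(t) = a(t)e^{jφ(t)}, are:

```latex
f_i(t) = \frac{\varphi'(t)}{2\pi}
\quad\text{(instantaneous frequency)},
\qquad
B_i(t) = \frac{1}{2\pi}\left|\frac{a'(t)}{a(t)}\right|
\quad\text{(instantaneous bandwidth)}.
```

Both quantities can range beyond the spectral support of x(t), which is precisely the paradox the new definitions are designed to avoid.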
The aim of this work is to compare techniques used to estimate two instantaneous frequency parameters of the surface electromyographic (EMG) signal, the instantaneous median frequency and the instantaneous mean frequency, on the basis of their estimation error. Three methods are compared: Cohen class and Cohen-Posch class time-frequency representations are used to compute both of the above-mentioned instantaneous frequency parameters, and a cross-time-frequency based technique is adopted to derive the instantaneous mean frequency. The results demonstrate that the algorithm based on Cohen-Posch class transformations leads to a smaller standard deviation of the instantaneous frequency parameters than that obtained using Cohen class representations. However, the cross-time-frequency estimation procedure for instantaneous mean frequency produced the smallest standard deviation of all three techniques. The algorithms based on Cohen class and Cohen-Posch class transformations often provided a lower bias than the cross-time-frequency based technique. This advantage was particularly evident when the instantaneous mean frequency varied nonlinearly within the epochs used to derive the cross-time-frequency representation of the surface EMG signal.
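Both parameters can be read off any nonnegative time-frequency representation P(t, f) in the same way: the instantaneous mean frequency is the first conditional moment in frequency, and the instantaneous median frequency splits the conditional power in half. The sketch below shows this generic readout; the estimators compared in the paper differ in how P itself is formed.

```python
# Generic readout of instantaneous mean and median frequency from a
# nonnegative time-frequency array P of shape (n_freqs, n_times), with
# frequency axis f of shape (n_freqs,).
import numpy as np

def instantaneous_mean_median(P, f):
    P = np.clip(P, 0.0, None)                 # guard against negative values
    mean_f = (f[:, None] * P).sum(0) / P.sum(0)
    cdf = np.cumsum(P, axis=0) / P.sum(0)
    median_f = f[np.argmax(cdf >= 0.5, axis=0)]
    return mean_f, median_f
```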
Chroma, a term originally coined by the sensory psychologist Roger Shepard in the 1960s, maps frequency into octave equivalence classes. By extending the concept of chroma to chroma strength and how it varies over time, we have demonstrated the utility of chroma in simplifying the processing and representation of signals dominated by harmonically related narrowband components. These investigations have used an ad hoc procedure for calculating the chromagram from a given time-frequency distribution. The present paper is intended to put this ad hoc procedure on firmer mathematical ground.
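One common folding, shown below for illustration, maps each analysis frequency to its pitch class via log2(f / f_ref) mod 1 and accumulates energy per class; the reference frequency and bin count are assumptions, and this is a generic construction rather than the formalized procedure this paper develops.

```python
# Illustrative chromagram: fold a time-frequency distribution P (shape
# n_freqs x n_times, frequency axis f) into octave equivalence classes.
import numpy as np

def chromagram(P, f, f_ref=27.5, n_bins=12):
    valid = f > 0
    bins = np.floor(n_bins * (np.log2(f[valid] / f_ref) % 1.0)).astype(int)
    C = np.zeros((n_bins, P.shape[1]))
    for b, row in zip(bins, P[valid]):
        C[b] += row                       # accumulate energy per pitch class
    return C
```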