Recognition of specific patterns and signatures in images has long been of interest. Powerful techniques exist for detection and classification, but they are defeated by straightforward changes and variations in the pattern, including translation and scale changes. Translation and scaling are well understood mathematically, and transformations exist such that, when applied to an image, the result is invariant to these disturbances. Hence, methods may be designed in which these effects are absent from the resultant representations. This paper describes a pattern recognition procedure which uses scale and translation invariant representations (STIRs) as one step of the process. A novel feature extraction method then identifies features of the STIRs orthogonal to noise variation. This is followed by a detection approach which exploits these features to detect desired patterns in noise. By explicitly modeling the variation due to noninteger scaling factors and sub-pixel translation, strong discrimination between similar patterns is achieved. Using the orthogonal features of the invariant representations, several tests are shown to classify well. A two-dimensional image is the basic starting point for the technique. This may be an actual image of an object or a two-dimensional signal representation such as a time-frequency distribution. The example of keyword spotting in scanned documents serves to illustrate the pattern recognition method.
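The abstract does not specify which STIRs are used; as a minimal illustration of the translation-invariance idea only, the magnitude of the discrete Fourier transform is unchanged by circular shifts of the input (scale invariance would require a further step, e.g. Mellin or log-frequency resampling, omitted here):

```python
import cmath

def dft_mag(x):
    """Magnitude of the DFT of sequence x; this spectrum is
    invariant to circular shifts (translations) of x."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N)))
            for k in range(N)]

x = [1.0, 3.0, 2.0, 0.0, 1.0, 4.0, 2.0, 1.0]
shifted = x[3:] + x[:3]              # circular translation by 3 samples

m1, m2 = dft_mag(x), dft_mag(shifted)
assert all(abs(a - b) < 1e-9 for a, b in zip(m1, m2))
```

The assertion confirms the two magnitude spectra agree, so any detector built on this representation sees the shifted and unshifted patterns identically.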
In time-frequency analysis, we extend functions of one variable to functions of two variables. The functions of two variables provide information about the signal that is not easily discernible from the functions of one variable. In this paper, we investigate a method for creating quartic functions of three variables and also a quartic function of all four variables. These quartic functions provide a meaningful representation of the signal that goes beyond the well known quadratic functions. The quartic functions are applied to the design of signal-adaptive kernels for Cohen's class and shown to provide improvements over previous methods.
The radar returns from some classes of time-varying point targets can be represented by the discrete-time signal-plus-noise model

x_t = s_t + [v_t + η_t] = Σ_{i=0}^{P−1} A_i e^{j2π(f_i/f_s)t} + v_t + η_t,  t ∈ {0, …, N−1},  f_i = k f_I + f_0,

where the received signal x_t corresponds to the radar return from the target of interest from one azimuth-range cell. The signal has an unknown number of components P, unknown complex amplitudes A_i, and frequencies f_i. The frequency parameters f_0 and f_I are unknown, although constrained such that f_0 < f_I/2, and the parameter k ∈ {−u, …, −2, −1, 0, 1, 2, …, v} is constrained such that the component frequencies f_i are bounded by (−f_s/2, f_s/2). The noise term v_t is typically colored and represents clutter, interference, and various noise sources. It is unknown, except that Σ_t v_t² < ∞; in general, v_t is not well modelled as an auto-regressive process of known order. The additional noise term η_t represents time-invariant point targets in the same azimuth-range cell. An important characteristic of the target is the unknown parameter f_I, representing the frequency interval between harmonic lines. It is desired to determine an estimate of f_I from N samples of x_t. We propose an algorithm to estimate f_I based on Thomson's harmonic line F-test, which is part of the multi-window spectrum estimation method, and demonstrate the proposed estimator applied to target echo time series collected using an experimental HF skywave radar.
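A hedged sketch of the estimation task (a simplified periodogram peak-spacing estimator, not the paper's multi-window F-test; all numerical values are illustrative and the noise terms are omitted for clarity):

```python
import cmath, math

fs, N = 100.0, 512
fI_true, f0 = 7.0, 2.0                  # harmonic spacing and offset (unknown in practice)
x = [sum(cmath.exp(2j * math.pi * ((k * fI_true + f0) / fs) * n)
         for k in (-2, -1, 0, 1, 2))
     for n in range(N)]                 # noise-free s_t

def power(f):
    """Periodogram value at frequency f (Hz)."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * f * n / fs)
                   for n in range(N))) ** 2

grid = [-50.0 + 0.25 * i for i in range(401)]   # frequency grid over (-fs/2, fs/2)
p = [power(f) for f in grid]
thresh = 0.5 * max(p)
peaks = [grid[i] for i in range(1, len(grid) - 1)
         if p[i] > thresh and p[i] >= p[i - 1] and p[i] >= p[i + 1]]

spacings = [b - a for a, b in zip(peaks, peaks[1:])]
fI_hat = sorted(spacings)[len(spacings) // 2]   # median spacing as the f_I estimate
```

With five lines at k·7 + 2 Hz, the detected peak spacing recovers f_I = 7 Hz; the F-test approach replaces the simple threshold with a statistical test on the multitaper spectrum.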
This paper addresses the problem of detection and classification of complicated signals in noise. Classical detection methods such as energy detectors and linear discriminant analysis do not perform well in many situations of practical interest. We introduce a new approach based on hidden Markov modeling in the wavelet domain. Using training data, we fit a hidden Markov model (HMM) to the wavelet transform to concisely represent its probabilistic time-frequency structure. The HMM provides a natural framework for performing likelihood ratio tests used in signal detection and classification. We compare our approach with classical methods for classification of nonlinear processes, change-point detection, and detection with unknown delay.
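The authors' wavelet-domain HMM is not specified in this abstract; as a simplified stand-in, the sketch below fits independent per-scale Gaussian variances to Haar wavelet coefficients of each class and classifies by log-likelihood (a real HMM would additionally model hidden-state transitions across coefficients):

```python
import math, random

def haar(x):
    """Detail coefficients of a full Haar transform (len(x) a power of 2)."""
    details, a = [], list(x)
    while len(a) > 1:
        details.append([(a[i] - a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)])
        a = [(a[i] + a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)]
    return details

def fit_variances(train):
    """Per-scale zero-mean Gaussian variances (stand-in for the HMM states)."""
    tr = [haar(x) for x in train]
    return [sum(c * c for t in tr for c in t[s]) / sum(len(t[s]) for t in tr)
            for s in range(len(tr[0]))]

def loglik(x, var):
    return sum(-0.5 * (c * c / var[s] + math.log(2 * math.pi * var[s]))
               for s, d in enumerate(haar(x)) for c in d)

random.seed(0)
noise = lambda: [random.gauss(0, 1) for _ in range(32)]
class_a = [noise() for _ in range(20)]                           # plain noise
class_b = [[v + 2 * (-1) ** n for n, v in enumerate(noise())]    # + fast alternation
           for _ in range(20)]
va, vb = fit_variances(class_a), fit_variances(class_b)

test_sig = [v + 2 * (-1) ** n for n, v in enumerate(noise())]    # a class-b signal
decision = 'b' if loglik(test_sig, vb) > loglik(test_sig, va) else 'a'
```

The alternating component inflates the finest-scale detail variance, so the likelihood ratio separates the classes even though both look like noise in the time domain.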
Speech signals have the property that they are broad-band while conveying information at a very low rate. The resulting signal has a time-frequency representation which is redundant and slowly varying in both time and frequency. In this paper, a new method for separating speech from noise and interference is presented. This new method uses image enhancement techniques applied to time-frequency representations of the corrupted speech signal. The image enhancement techniques are based on the assumption that the speech and/or the noise and interference may be locally represented as a mixture of two-dimensional Gaussian distributions. The signal surface is expanded using a Hermite polynomial expansion, and the signal surface is separated from the noise surface by a principal-component process. A Wiener gain surface is calculated from the enhanced image, and the enhanced signal is reconstructed from the Wiener gain surface using a time-varying filter constructed from a basis of prolate-spheroidal filters.
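The Wiener gain step can be illustrated independently of the Hermite/principal-component machinery: given estimates of signal and noise power in each time-frequency cell, the gain S/(S+N) passes signal-dominated cells and suppresses noise-dominated ones (the values below are purely illustrative, not from the paper):

```python
# Per-cell Wiener gain: a 1-D slice of what the paper computes as a
# gain surface over the whole time-frequency plane.
sig_pow = [4.0, 1.0, 0.1, 0.01]      # estimated speech power per cell
noise_pow = [0.5, 0.5, 0.5, 0.5]     # estimated noise power per cell
gain = [s / (s + n) for s, n in zip(sig_pow, noise_pow)]

noisy = [2.5, 1.2, 0.8, 0.7]         # observed spectral magnitudes
enhanced = [g * v for g, v in zip(gain, noisy)]
```

Cells where speech dominates (gain near 1) are passed almost unchanged, while noise-only cells (gain near 0) are attenuated, which is why the enhanced surface looks like a cleaned image.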
We calculate the instantaneous frequency at a fixed position of a propagating pulse in the asymptotic regime. We also discuss the multimode case and relate it to the concept of instantaneous bandwidth.
In many practical signal detection problems, the detectors have to be designed from training data. Due to limited training data, which is usually the case, it is imperative to exploit some inherent signal structure for reliable detector design. The signals of interest in a variety of applications manifest such structure in the form of nuisance parameters. However, data-driven design of detectors by exploiting nuisance parameters is virtually impossible in general due to two major difficulties: identifying the appropriate nuisance parameters, and estimating the corresponding detector statistics. We address this problem by using recent results that relate joint signal representations (JSRs), such as time-frequency and time-scale representations, to quadratic detectors for a wide variety of nuisance parameters. We propose a general data-driven framework that: (1) identifies the appropriate nuisance parameters from an arbitrarily chosen finite set, and (2) estimates the second-order statistics that characterize the corresponding JSR-based detectors. Simulation results demonstrate that for limited training data, exploiting the structure of nuisance parameters via our framework can deliver substantial gains in performance as compared to empirical detectors which ignore such structure.
A new theory of random fields based on the concept of local averaging was developed in the 1980s, in which the second-order properties of the random fields are characterized by the variance function. Certain asymptotic properties of the variance function lead to the definition of a scalar called the 'scale of fluctuation,' which has many interesting properties. A non-parametric method of estimating the instantaneous scale of fluctuation is developed using the time-varying model-based time-frequency distribution. A wide range of random processes can be modeled by appropriate state-space models with white process noise. For properly defined state transition matrices and observation vectors, the states estimated using Kalman filtering or smoothing algorithms provide the estimated time-frequency distribution (Kalman-TFD). Using the Kalman-TFD, the instantaneous scale of fluctuation is estimated. Performance of this estimator is compared to other instantaneous and block methods using the coefficient of variation of the estimators. The Kalman-TFD-based scale of fluctuation estimator has a coefficient of variation of 6%, whereas other methods yield coefficients of variation greater than 35%. The instantaneous scale of fluctuation quantifies the temporal variability of the underlying system and possible resultant limit-cycle oscillations. Tests with real vibration data from machine tools before and during chatter show that the estimated instantaneous scale of fluctuation may permit on-line prediction of chatter development many hundreds of milliseconds in advance. To explain the behavior of the estimated instantaneous scale of fluctuation during the pre-chatter period, detailed simulations were undertaken, which revealed that the random process during the pre-chatter condition goes through an increase in 'degrees of freedom,' or unit-standard-deviation contour volume.
An algorithm for computing positive time-frequency distributions (TFDs) for nonstationary signals is presented. This work extends the earlier work of the author and his colleagues on computing positive TFDs. A general approach to the problem of computing these signal-dependent distributions is derived. The method is based on an evolutionary spectrum formulation of positive TFDs. Following earlier work, a relationship is derived between positive TFDs and the ambiguity function of the observed signal. In particular, it is shown that the TFD is approximately equal to the two-dimensional Fourier transform of the ambiguity function. A method for computing the positive TFD is then presented based on minimizing the squared error in this approximation subject to the TFD being positive and satisfying the time and frequency marginals. The squared error may be weighted non-uniformly, resulting in a constrained weighted least-squares optimization problem. A solution to this optimization problem based on an alternating projections framework is presented, and an example is provided. The resulting TFD provides excellent time-frequency resolution while maintaining positivity and satisfying the marginals.
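A hedged sketch of the constraint-satisfaction step: the code below uses iterative proportional fitting (a Sinkhorn-style scheme) to produce a strictly positive array whose row and column sums match given frequency and time marginals. This is a stand-in for the paper's weighted-least-squares alternating projections, not the authors' algorithm:

```python
def fit_marginals(P, t_marg, f_marg, iters=200):
    """Rescale a positive array so column sums match the time marginal and
    row sums match the frequency marginal (both must have equal totals)."""
    P = [[max(v, 1e-12) for v in row] for row in P]   # enforce positivity
    for _ in range(iters):
        for i, row in enumerate(P):                   # rows -> frequency marginal
            s = sum(row)
            P[i] = [v * f_marg[i] / s for v in row]
        for j in range(len(P[0])):                    # columns -> time marginal
            s = sum(P[i][j] for i in range(len(P)))
            for i in range(len(P)):
                P[i][j] *= t_marg[j] / s
    return P

t_marg = [1.0, 2.0, 3.0, 2.0]   # |s(t)|^2, total energy 8 (illustrative)
f_marg = [3.0, 3.0, 2.0]        # |S(f)|^2, same total
tfd = fit_marginals([[1.0] * 4 for _ in range(3)], t_marg, f_marg)
```

After a few hundred sweeps both marginal constraints hold to numerical precision while every cell stays positive, which is exactly the feasible set the paper's projections target.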
Blind source separation is an emerging field of fundamental research with a broad range of applications. It is motivated by practical problems that involve several source signals and several sensors, each sensor receiving an instantaneous linear mixture of the source signals. The blind source separation problem then consists of recovering the original waveforms of the sources without any knowledge of the mixture structure. So far, the problem has been solved using statistical information available on the source signals. A blind source separation approach for non-stationary signals based on time-frequency representations (TFRs) was recently introduced by the authors (SPIE 1996). Herein, we generalize the TFR-based blind source separation approach to arbitrary variables, including time and frequency. 'Spatial joint arbitrary variable distributions' are introduced and used for blind source separation via joint diagonalization techniques.
We consider the definition and interpretation of instantaneous frequency and other time-varying frequencies of a signal, and related concepts of instantaneous amplitude, instantaneous bandwidth and the time-varying spectrum of a signal. A definition for the average frequency at each time is given, and we show that spectrograms and Cohen-Posch time-frequency distributions can yield this result for the first conditional moment in frequency. For some signals this result equals the instantaneous frequency, but generally instantaneous frequency is not the average frequency at each time in the signal. We discuss monocomponent versus multicomponent signals, and give an estimate of the time-varying spectrum given the instantaneous frequencies and bandwidths of the components. We also consider the role of the complex signal in defining instantaneous amplitude, frequency and bandwidth, and ways to obtain a complex signal satisfying certain physical properties, given a real signal (or its time-varying spectrum). Depending upon the physical properties desired (e.g., the instantaneous amplitude of a magnitude-bounded signal should itself be bounded), one obtains different complex representations -- and hence different instantaneous amplitudes, frequencies and bandwidths -- of the given signal.
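One standard way to obtain a complex signal from a real one, assumed here since the abstract considers several alternatives, is the analytic signal: zero the negative-frequency half of the spectrum. For a pure tone, the phase increments of the analytic signal recover the instantaneous frequency exactly:

```python
import cmath, math

N, fs, f = 64, 64.0, 5.0
x = [math.cos(2 * math.pi * f * n / fs) for n in range(N)]

# Analytic signal: zero out the negative-frequency half of the DFT
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]
for k in range(N):
    if k == 0 or k == N // 2:
        pass                     # DC and Nyquist bins unchanged
    elif k < N // 2:
        X[k] *= 2                # keep positive frequencies, doubled
    else:
        X[k] = 0                 # drop negative frequencies
z = [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

# Instantaneous frequency from the phase increment of z
inst_f = [cmath.phase(z[n + 1] * z[n].conjugate()) * fs / (2 * math.pi)
          for n in range(N - 1)]
```

For this monocomponent tone the instantaneous frequency is constant at 5 Hz; as the abstract notes, for multicomponent or bandwidth-limited signals the first conditional moment in frequency and the instantaneous frequency need not agree.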
Impulsive transient signals have been difficult to characterize and classify using traditional signal processing methods. We show that time-frequency distributions can effectively characterize the transient response of an acoustical cavity. Class-dependent kernels developed from time-frequency distributions are used to successfully classify the impulsive transients.
Small formed elements and gas bubbles in flowing blood, called microemboli, can be detected using pulse Doppler ultrasound. In this application, a pulsed constant-frequency ultrasound signal insonates a volume of blood in the middle cerebral artery, and microemboli moving through this sample volume produce a Doppler shifted transient reflection. Current detection methods include searching for these transients in a short-time Fourier transform (STFT) of the reflected signal. However, since the embolus transit time through the Doppler sample volume is inversely proportional to the embolus velocity (Doppler shift frequency), a matched-filter detector should in principle use a wavelet transform, rather than a short-time Fourier transform, for optimal results. Closer examination of the Doppler shift signals usually shows a chirping behavior apparently due to acceleration or deceleration of the emboli during their transit through the Doppler sample volume. These variations imply that a linear wavelet detector is not optimal. We argue from physical principles that quadratic time-scale detectors provide a robustness to variations that is nearly optimal for embolus detection. Using a theory for optimal quadratic time-scale detection, we develop a method for designing the optimal detection kernels from training data and derive efficient algorithms for implementation of the resulting quadratic time-scale detectors. The performance of these detectors is compared with optimized STFT and wavelet detectors. The performance for all of these methods is found to be very similar, and on average only about 2.5 dB less than an 'oracle' detector which provides a theoretical upper bound for microembolus detection.
This paper outlines means of combining and reconciling concepts associated with Cohen's class of distributions and with the wavelet transform. Both have their assets and their liabilities. Previous work has shown that one can decompose any time-frequency distribution (TFD) in Cohen's class into a weighted sum of spectrograms. A set of orthogonal analysis windows which also have the scaling property in common with wavelets is proposed. Successful application of this theory offers very fast computation of TFDs, since very few analysis windows may be needed and fast algorithms can be used. In addition, the decomposition idea offers the possibility of shaping the analysis such that good local and global properties as well as a number of desirable TFD properties are retained. Finally, one may view the result in terms of conventional Cohen's class concepts or, alternatively, in terms of wavelet concepts and potentially combine powerful insights and concepts from both points of view. Preliminary results applied to radar backscatter are provided. Performance curves for several wavelet types are also provided.
An entirely new set of criteria for the design of kernels (generating functions) for time-frequency representations (TFRs) is presented. These criteria aim only to produce kernels (and thus, TFRs) which will enable more accurate classification. We refer to these kernels, which are optimized to discriminate among several classes of signals, as signal class dependent kernels, or simply class dependent kernels. The genesis of the class dependent kernel is to be found in the area of operator theory, which we use to establish a direct link between a discrete-time, discrete-frequency TFR and its corresponding discrete signal. We see that many similarities, but also some important differences, exist between the results of the continuous-time operator approach and our discrete one. The differences between the continuous representations and discrete ones may not be the simple sampling relationship which has often been assumed. From this work, we obtain a very concise, matrix-based expression for a discrete-time/discrete-frequency TFR which is simply the product of the kernel with another matrix. This simple expression opens up the possibility to optimize the kernel in a number of ways. We focus, of course, on optimizations most suitable for classification, and ultimately wind up with the class dependent kernel. When applied to simulated sonar transient signals, we find that our approach does a good job of discriminating within very similar classes of transients and is especially sensitive to differences in time variation across classes.
In this paper, we present an iterative algorithm to estimate the instantaneous frequency (IF) and matched spectrogram of nonstationary signals. The matched spectrogram obtained by this method is concentrated along the IF for monocomponent signals. The convergence analysis and the properties of the IF estimation algorithm are presented. Finally, examples showing the performance of the proposed algorithm are given.
We discuss applications of time-frequency analysis to the investigation of astronomical type signals. In particular, we apply time-frequency techniques to a data set consisting of the kinetic energy in the three body problem. We explain how the methods of time-frequency analysis shed light on these signals and also how the concept of multicomponent signals is applied to their decomposition. We also discuss methods to do simple filtering and estimation of the signal parameters.
The primary basis for adaptive radar algorithm design is that (1) a binary hypothesis formulation with unknown parameters is an adequate test and (2) radar interference is composed of combinations of thermal noise, self-induced clutter, and extraneous noise. This is the typical generalized likelihood formulation that yields the CFAR characteristic for the assumed conditions. Implementations have shown that such formulations yield inadequate performance in complex clutter environments. As a compensation measure, a secondary CFAR process then addresses the potential violation of this assumption by large 'target-like' interference, such as large clutter discretes or a large number of targets interfering with each other. In order to detect small targets, an approach based on the likelihood statistic provides a technique for optimally suppressing the neighboring large signals. Performance is characterized as a function of a generalized distance and relative signal power ratios in the joint space-time domain.
Under the U.S. Air Force Wright Labs (WL/AAMR) adaptive radar architecture contract, Hughes performed terrain scattered interference (TSI) experiments using the Hughes A-3 data collection system. This data collection system is a fighter-like X-band radar with a front-mounted four-port antenna and four receiver channels. The flights were conducted in April 1996 against an ALQ-167 airborne jammer over the open ocean near the California coast and over the desert area near Edwards Air Force Base. Joint clutter and TSI mitigation techniques were tested using the collected four-port monopulse X-band TSI data. Verifying past simulation predictions, the results show that the four-channel monopulse beamforming system yields sufficient performance for tactical utility.
This paper addresses the benefits and drawbacks of different configurations of three- and four-microphone adaptive arrays. In particular, broadside and endfire configurations of equally spaced and logarithmically spaced arrays are compared. The assessment is done using arrays of simple electret microphones in actual acoustic environments. Fixed and adaptive processing approaches are assessed.
Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing superior signal-to-noise ratio (SNR) compared to single-microphone performance. There are two aspects of microphone array system performance: the ability of the system to locate and track sound sources, and its ability to selectively capture sound from those sources. Both aspects are strongly affected by the spatial placement of the microphone sensors, so a method is needed to optimize sensor placement based on the geometry of the environment and assumed sound source behavior, the objective being to obtain the greatest average system SNR using a specified number of sensors. A method is derived to evaluate array performance, as defined by the two metrics above, for a given array configuration, and an overall performance function based on these metrics is described. A framework for optimum placement of sensors under the practical considerations of possible sensor placement and potential location of sound sources is also characterized.
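The optimization described above can be sketched with a toy model (the geometry, the 1/r²-style gain, and the coverage objective are assumptions for illustration, not the paper's performance function): greedily place sensors from a candidate grid so as to maximize the average best-sensor gain over assumed source locations:

```python
# Toy greedy sensor placement (illustrative model, not the paper's method).
sources = [(1.0, 1.0), (3.0, 1.0), (2.0, 3.0)]            # assumed source spots
candidates = [(0.5 * i, 0.5 * j) for i in range(9) for j in range(9)]

def avg_gain(sensors):
    """Average over sources of the best sensor's inverse-square gain."""
    return sum(max(1.0 / (0.1 + (sx - px) ** 2 + (sy - py) ** 2)
                   for px, py in sensors)
               for sx, sy in sources) / len(sources)

placed = []
for _ in range(4):                                        # budget of 4 sensors
    best = max((c for c in candidates if c not in placed),
               key=lambda c: avg_gain(placed + [c]))
    placed.append(best)
```

Because the objective rewards covering every assumed source location, the greedy step naturally spreads sensors toward distinct sources rather than stacking them in one spot.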
Smart antenna techniques have been proposed to increase channel capacity and enhance system performance of a base station in cellular and PCS applications. To date, most smart antenna research has focused on the development and analysis of smart antenna systems for uplink applications, i.e., the receive part of a smart antenna system; much less has been done on smart antenna algorithms for downlink applications. In this paper, we present a new systematic technique for designing downlink weighting vectors to improve the downlink performance of a smart antenna system. The new method draws on the filter bank concept and provides a suboptimal solution to the difficult downlink weight design problem. With computational complexity comparable to the so-called pseudo-inverse approach, our method always outperforms its counterpart, sometimes by a significant margin.
Antenna arrays can be used in wireless communication systems to increase system capacity and improve communication quality. An antenna array that receives several signals at the same time and in the same frequency band can be modeled as a multiple-input/multiple-output (MIMO) system. To separate and recover multiple signals from arrays, the parameters of the system have to be identified explicitly or implicitly. In the first part of this paper, we deal with blind parameter identification based on second-order statistics. We investigate the identifiability of MIMO FIR channels and obtain a necessary and sufficient condition for second-order-based identifiability of the MIMO FIR channels. We then extend identification algorithms for single-input/multiple-output FIR (SIMO FIR) channels, such as the algebraic algorithm and the subspace algorithms, to the identification of MIMO FIR channels. MIMO systems can also be directly equalized using blind techniques, and we investigate blind algorithms to separate multiple signals received by antenna arrays. We analyze the CMA equalizer used in MIMO systems. According to our analysis, for MIMO FIR channels satisfying certain conditions, the MIMO-CMA FIR equalizer is able to recover one of the input signals and remove the intersymbol interference and co-channel interference regardless of the initial setting of the equalizer. To recover all input signals simultaneously, a novel MIMO channel blind equalization algorithm is developed in this paper, and its global convergence for MIMO channels is proved. Hence, the new blind equalization algorithm can be applied to separate and equalize the signals received by antenna arrays in communication systems.
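The CMA equalizer discussed above can be sketched for the single-channel case (the MIMO version applies the same constant-modulus cost per output stream). The channel tap, step size, and equalizer length below are hypothetical; the update is the standard CMA(2,2) stochastic gradient:

```python
import random

random.seed(1)
sym = [random.choice([1, -1, 1j, -1j]) for _ in range(4000)]  # QPSK, |s| = 1
h1 = 0.4                                                       # hypothetical ISI tap
x = [sym[n] + (h1 * sym[n - 1] if n > 0 else 0) for n in range(len(sym))]

L, mu = 5, 0.01
w = [0j] * L
w[L // 2] = 1.0 + 0j                 # center-spike initialization
ys = []
for n in range(L - 1, len(x)):
    xv = x[n - L + 1:n + 1][::-1]    # regressor, most recent sample first
    y = sum(wi * xi for wi, xi in zip(w, xv))
    ys.append(y)
    e = y * (abs(y) ** 2 - 1)        # CMA(2,2) gradient term
    w = [wi - mu * e * xi.conjugate() for wi, xi in zip(w, xv)]

dispersion = sum((abs(y) ** 2 - 1) ** 2 for y in ys[-500:]) / 500
```

The cost penalizes deviation of |y|² from the constant modulus of the constellation, so the taps adapt without any training symbols; after convergence the output dispersion is far below its initial value.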
This paper considers 'blind beamforming' operations on a wireless network of randomly distributed MEMS sensors. A maximum power collection criterion is proposed; it yields array weights obtained from the eigenvector corresponding to the largest eigenvalue of a matrix eigenvalue problem. Theoretical justification of this approach via an extension of Szego's asymptotic distribution of eigenvalues is provided. Numerical results on propagation time delay estimation and loss of coherency due to propagation disturbances are presented.
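The maximum power collection criterion reduces to a principal eigenvector computation on the sample covariance matrix. The sketch below illustrates this with a hypothetical single plane wave and randomly phased sensors (geometry and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scenario: one plane wave hitting 8 randomly placed sensors,
# modeled as a steering vector with unknown per-sensor phase delays.
m, n = 8, 2000
steering = np.exp(1j * 2 * np.pi * rng.random(m))
signal = rng.standard_normal(n) + 1j * rng.standard_normal(n)
noise = 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
x = np.outer(steering, signal) + noise                # m x n snapshot matrix

# Maximum power collection: the unit-norm weight vector maximizing E|w^H x|^2
# is the eigenvector of the sample covariance for the largest eigenvalue.
R = x @ x.conj().T / n
eigvals, eigvecs = np.linalg.eigh(R)
w = eigvecs[:, -1]                                    # largest eigenvalue is last

# The blind weights should align with the true steering vector (up to phase).
alignment = np.abs(np.vdot(w, steering)) / np.linalg.norm(steering)
print(alignment > 0.99)
```
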
Luk and Qiao introduced an algorithm for the generalized ULV decomposition (ULLV decomposition). The decomposition scheme performs the rank-revealing operation but requires a lower computational cost when updating with new data than the generalized singular value decomposition (GSVD). In this paper, we extend their algorithm to handle downdating, and propose a systolic array structure for implementing both updating and downdating. A scheme for rank revealing is also implemented on the proposed systolic array.
A planar array spatial spectrum estimator has been developed from an extension to ∂-MUSIC, for discriminating between two closely spaced sources over a wide range of signal correlations. For two closely spaced and/or correlated signals, the second eigenvector in the signal subspace is not observable, since the signal covariance matrix tends to be singular. Thus standard MUSIC methods find only one source, which is a power-weighted centroid of these signals. The ∂-MUSIC spatial spectrum estimator derives an estimate of the second source eigenvector by applying a spatial derivative operator to the covariance matrix. A new signal subspace is formed by a projection operator which combines the first eigenvector and the second eigenvector estimate. The ∂-MUSIC algorithm has been tested, modified and compared to the MUSIC algorithm using a point source simulation for both a linear and a planar array at various levels of correlation for two sources. The algorithm is also tested with simulated data from a terrain scattered interference source. The algorithm is found to be relatively insensitive to correlation and can separate targets to better than one-half the angular separation of MUSIC.
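For reference, the standard MUSIC pseudospectrum that this estimator extends can be sketched as follows. The two-source linear-array simulation is hypothetical; array size, SNR and search grid are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scene: two uncorrelated sources at -10 and +20 degrees seen by
# an 8-element half-wavelength-spaced uniform linear array, 500 snapshots.
m, n = 8, 500
doas = np.deg2rad([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas)))
s = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
x = A @ s + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))

# Standard MUSIC: the pseudospectrum is the inverse squared projection of a
# candidate steering vector onto the noise subspace.
R = x @ x.conj().T / n
eigvecs = np.linalg.eigh(R)[1]               # eigenvalues in ascending order
En = eigvecs[:, : m - 2]                     # noise subspace (2 sources known)
grid = np.deg2rad(np.arange(-90.0, 90.0, 0.5))
steer = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(grid)))
spec = 1.0 / np.linalg.norm(En.conj().T @ steer, axis=0) ** 2

# Pick the strongest peak on each side of broadside for this simple scene.
left = np.rad2deg(grid[np.argmax(np.where(grid < 0, spec, 0))])
right = np.rad2deg(grid[np.argmax(np.where(grid > 0, spec, 0))])
print(left, right)                           # near the true -10 and +20 degrees
```
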
A new architecture is presented for incremental computation of the two-dimensional inverse discrete cosine transform (2D-IDCT) in the context of progressive image decoding. This architecture offers advantages over existing DCT inversion techniques for systems used in conjunction with progressive image coding schemes or those that must operate in environments with severe and/or time-varying resource constraints (e.g. real-time and low-power systems). The use of a bit-serial distributed arithmetic approach composed of parallel input-pruned 2D-IDCT processing elements enables low-quality image reconstructions to be quickly and efficiently obtained using only a subset of the DCT coefficient bit stream. Initial approximate reconstructions can be improved in subsequent stages of incremental refinement according to the availability of processing resources or DCT coefficient data. An analysis of image degradation at successive stages is presented, illustrating uniform improvement according to quantitative and perceptual criteria.
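The progressive-refinement idea can be illustrated in software with a separable 2D DCT/IDCT, reconstructing from a growing low-frequency subset of coefficients. This is a numerical sketch of the principle, not the bit-serial distributed-arithmetic hardware itself:

```python
import numpy as np

# Separable 2D DCT-II / IDCT built from an orthonormal 1D DCT matrix.
def dct_matrix(n):
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)      # hypothetical 8x8 block
coeffs = C @ block @ C.T                              # forward 2D DCT

# Progressive reconstruction: use only the top-left (low-frequency) k x k
# coefficients first, then refine as more coefficients become available.
errors = []
for k in (2, 4, 8):
    partial = np.zeros_like(coeffs)
    partial[:k, :k] = coeffs[:k, :k]
    recon = C.T @ partial @ C                         # 2D IDCT
    errors.append(np.linalg.norm(recon - block))

print(errors)    # reconstruction error shrinks at each refinement stage
```
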
This paper presents discrete thresholded binary networks of the Hopfield type as feasible configurations for performing image restoration with regularization. The typically large-scale nature of image data processing is handled by partitioning these structures and adopting sequential or parallel update strategies on the partitions one at a time. Among the advantages of such architectures are the ability to efficiently utilize space-bandwidth-constrained resources, the elimination of the need for zero self-feedback connections in sequential procedures, and a diminished likelihood of limit cycling in parallel approaches. In the case of image data corrupted by blurring and AWGN, the least squares solution is attained in stages by switching between partitions to force energy descent. Two forms of partitioning are discussed. The partial-neuron decomposition is seen to be more efficient than the partial-data strategy. Further, parallel update procedures are more practical from an electro-optical standpoint. The paper demonstrates the viability of these architectures through suitable examples.
This paper describes the architecture and application of a flexible 100 GOPS (giga operations per second) exhaustive-search segment matching VLSI architecture to support evolving motion estimation algorithms as well as block matching algorithms of established video coding standards. The architecture is based on a 32×32 processor element (PE) array and a 10240-byte on-chip search area RAM, and allows concurrent calculation of motion vectors for 32×32, 16×16, 8×8 and 4×4 blocks and partial quadtrees (called segments) for a ±32 pel search range with 100% PE utilization. This architecture supports object-based algorithms by excluding pixels outside of video objects from the segment matching process, as well as advanced algorithms like variable block-size segment matching with luminance correction. The VLSI has been designed using VHDL synthesis and a 0.35 micrometer CMOS technology and will have a clock rate of 100 MHz (min.), allowing the processing of 23668 32×32 blocks per second with a maximum ±32 pel search area.
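A software sketch of the exhaustive-search matching criterion the PE array computes, here with a sum-of-absolute-differences (SAD) cost on a hypothetical frame pair related by a known shift (block size and search range are scaled down from the chip's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical frames: the current frame is the reference shifted by (3, -2).
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))

def full_search(cur_block, ref, top, left, rng_pel=8):
    """Exhaustive-search block matching with a SAD criterion over a
    +/- rng_pel search range around the block's own position."""
    h, w = cur_block.shape
    best = (None, np.inf)
    for dy in range(-rng_pel, rng_pel + 1):
        for dx in range(-rng_pel, rng_pel + 1):
            y, x = top + dy, left + dx
            if 0 <= y and y + h <= ref.shape[0] and 0 <= x and x + w <= ref.shape[1]:
                sad = np.abs(cur_block - ref[y:y + h, x:x + w]).sum()
                if sad < best[1]:
                    best = ((dy, dx), sad)
    return best[0]

# Match a 16x16 block from the middle of the current frame.
top, left = 24, 24
mv = full_search(cur[top:top + 16, left:left + 16], ref, top, left)
print(mv)        # the displacement that undoes the shift: (-3, 2)
```
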
In this paper we present a method to obtain a maximum likelihood estimation of the parameters of the generalized gamma and K probability density functions. Explicit closed-form expressions are derived between the model parameters and the experimental data. Due to their nonlinear nature, global optimization techniques are proposed for solving the derived expressions with respect to the clutter model parameters. Experimental results show in all attempted cases that the resulting expressions are convex functions of the parameters. In addition to the maximum likelihood solution we present two other solutions. One is based on moment matching and the other on histogram matching. The Cramer-Rao lower bound is also derived and used for performance comparisons.
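As a minimal illustration of the moment-matching alternative, the sketch below recovers ordinary gamma parameters from the first two sample moments. The data set is hypothetical, and the paper's generalized gamma and K models require more elaborate estimators than this two-parameter case:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical clutter amplitudes drawn from a gamma pdf with known shape k
# and scale theta; we recover the parameters by matching moments.
k_true, theta_true = 2.5, 1.2
x = rng.gamma(shape=k_true, scale=theta_true, size=200_000)

# For a gamma pdf, mean = k*theta and variance = k*theta^2, so the first two
# sample moments give closed-form parameter estimates.
mean, var = x.mean(), x.var()
k_hat = mean ** 2 / var
theta_hat = var / mean

print(k_hat, theta_hat)      # should be close to 2.5 and 1.2
```
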
In this paper, we propose a new family of circulant preconditioners for solving Toeplitz systems. They are based on B-splines. The R. Chan and T. Chan preconditioners can be constructed from the first- and second-order B-splines. Numerical results show that preconditioners from higher-order B-splines perform much better than the well-known ones, even in cases where the Toeplitz matrices are ill-conditioned. As with other circulant preconditioners, the construction of B-spline preconditioners requires only the entries of the given Toeplitz matrix and does not require a priori knowledge of its generating function. Thus they are most suitable for applications where the generating function of the given Toeplitz matrix is not known explicitly.
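The abstract notes that the T. Chan preconditioner arises as a special case of this family. As background, the sketch below builds that preconditioner directly from the entries of a hypothetical symmetric Toeplitz matrix (the generating sequence is an assumption) and checks that preconditioning reduces the condition number:

```python
import numpy as np

n = 64
# Hypothetical symmetric Toeplitz matrix with slowly decaying entries
# t_k = 1/(1+|k|), stored as the sequence t_{-(n-1)} .. t_{n-1}.
t = 1.0 / (1.0 + np.abs(np.arange(-n + 1, n)))
T = np.array([[t[n - 1 + i - j] for j in range(n)] for i in range(n)])

# T. Chan's optimal circulant preconditioner: its first column averages each
# pair of wrapped diagonals, c_j = ((n-j) t_j + j t_{j-n}) / n.
j = np.arange(n)
c = ((n - j) * t[n - 1 + j] + j * t[j - 1]) / n
lam = np.fft.fft(c)                        # eigenvalues of the circulant C

# Apply C^{-1} via the DFT diagonalization C = F^{-1} diag(lam) F.
F = np.fft.fft(np.eye(n))
Cinv = (F.conj().T @ np.diag(1.0 / lam) @ F).real / n
kappa_T = np.linalg.cond(T)
kappa_PT = np.linalg.cond(Cinv @ T)
print(kappa_PT < kappa_T)                  # preconditioning improves conditioning
```

In practice `Cinv @ T` is never formed; the preconditioner is applied inside CG with two FFTs per iteration.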
The total variation (TV) regularization method was first proposed for gray-scale images and later extended to vector-valued images. In this work, we apply the TV regularization method to the multichannel image deconvolution problem. The motivation for regularizing with the TV norm is that it is extremely effective for recovering the edges of images. In this paper, a fast iterative method is developed to solve the deconvolution problem. Our method involves solving linear systems, and the conjugate gradient method is applied, in which Fourier-transform-type preconditioners are used to speed up the convergence rate. Numerical experiments demonstrate the effectiveness of the TV regularization method. We also present some preliminary results on multichannel blind deconvolution with TV regularization.
The total variation denoising method, proposed by Rudin, Osher and Fatemi (1992), is a PDE-based algorithm for edge-preserving noise removal. The images resulting from its application are usually piecewise constant, possibly with a staircase effect at smooth transitions, and may contain significantly less fine detail than the original non-degraded image. In this paper we present some extensions to this technique that aim to alleviate these drawbacks by redefining the total variation functional or the noise constraints.
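A minimal sketch of TV denoising in this spirit, using a smoothed TV term and plain gradient descent on a hypothetical 1D piecewise-constant signal (the paper's extensions modify the functional and constraints, which is not shown here; all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical piecewise-constant signal with additive Gaussian noise.
clean = np.concatenate([np.zeros(50), np.ones(50), 0.3 * np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(clean.size)

# Gradient descent on  E(u) = 0.5*||u - f||^2 + lam * TV_eps(u),
# with TV_eps(u) = sum sqrt((du)^2 + eps) keeping the gradient defined at 0.
lam, eps, step = 0.5, 1e-2, 0.05
u = noisy.copy()
for _ in range(2000):
    du = np.diff(u)
    q = du / np.sqrt(du ** 2 + eps)                 # derivative of smoothed |du|
    div = np.concatenate([[q[0]], np.diff(q), [-q[-1]]])  # discrete divergence
    u -= step * ((u - noisy) - lam * div)

# The TV estimate should be closer to the clean signal than the noisy input.
print(np.abs(u - clean).mean() < np.abs(noisy - clean).mean())
```
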
The Gabor transform yields a discrete representation of a signal in the phase space. Since the Gabor transform is non-orthogonal, efficient reconstruction of a signal from its phase space samples is not straightforward and involves the computation of the so- called dual Gabor function. We present a unifying approach to the derivation of numerical algorithms for discrete Gabor analysis, based on unitary matrix factorization. The factorization point of view is notably useful for the design of efficient numerical algorithms. This presentation is the first systematic account of its kind. In particular, it is shown that different algorithms for the computation of the dual window correspond to different factorizations of the frame operator. Simple number theoretic conditions on the time-frequency lattice parameters imply additional structural properties of the frame operator.
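The canonical dual window is γ = S⁻¹g, where S is the frame operator. The dense sketch below verifies perfect reconstruction on a small lattice with a hypothetical Gaussian window, assuming the chosen lattice actually yields a frame; it ignores the efficient frame-operator factorizations the paper develops:

```python
import numpy as np

# Discrete Gabor system on C^L with time step a and frequency step b
# (redundancy L/(a*b) = 2, a hypothetical oversampled lattice).
L, a, b = 12, 2, 3
g = np.exp(-0.5 * ((np.arange(L) - L / 2) / 2.0) ** 2)   # hypothetical window
g /= np.linalg.norm(g)

def atoms(win):
    """All time-frequency shifted copies of a window on the lattice."""
    out = []
    for n in range(L // a):                       # cyclic time shifts
        for m in range(L // b):                   # modulations
            shifted = np.roll(win, n * a)
            out.append(shifted * np.exp(2j * np.pi * m * b * np.arange(L) / L))
    return np.array(out)                          # (num_atoms, L)

G = atoms(g)
S = G.T @ G.conj()                                # frame operator, S x = sum <x,g_k> g_k
dual = np.linalg.solve(S, g.astype(complex))      # canonical dual window

# Analysis with g, synthesis with the dual window reconstructs the signal.
x = np.sin(np.arange(L))
coeffs = G.conj() @ x                             # <x, g_mn>
recon = atoms(dual).T @ coeffs
print(np.allclose(recon, x))
```
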
Many techniques involve the computation of singular subspaces associated with an extreme cluster of singular values of an m × n data matrix A. Frequently A is sparse and/or structured, which usually means matrix-vector multiplications involving A and its transpose can be done with much less than O(mn) flops, and A and its transpose can be stored in static data structures with much less than O(mn) storage locations. Standard complete orthogonal decompositions may be unattractive due to the computational and dynamic storage overhead associated with the initial preprocessing of the data. We describe an efficient Matlab implementation of the low-rank ULV algorithm for extracting reliable and accurate approximations to the singular subspaces associated with the cluster of large singular values without altering the matrix. The user can choose any principal singular vector estimator to underwrite the algorithm, may call a specialized routine to compute matrix-vector products involving A and its transpose, and can choose the desired level of accuracy of a residual. The main computational savings stems from preserving A and avoiding the explicit formation of unwanted information.
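The kind of matvec-only subspace computation this implementation targets can be sketched with block power (subspace) iteration, which touches A only through products with A and its transpose. This is a simplified stand-in for a principal singular vector estimator, not the low-rank ULV algorithm itself; the test matrix is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical low-rank-plus-noise matrix, accessed only through products
# with A and A^T (as a sparse or structured operator would be).
m, n, r = 200, 120, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) \
    + 0.01 * rng.standard_normal((m, n))
matvec = lambda v: A @ v          # stand-ins for user-supplied product routines
rmatvec = lambda u: A.T @ u

# Block power iteration for the top-r left singular subspace: only products
# with A and A^T are needed, and A itself is never altered.
Q = np.linalg.qr(rng.standard_normal((m, r)))[0]
for _ in range(50):
    Q = np.linalg.qr(matvec(rmatvec(Q)))[0]       # iterate with A A^T

# Compare against the true subspace from a full SVD: the cosines of the
# principal angles between the two subspaces should all be near 1.
U = np.linalg.svd(A)[0][:, :r]
cosines = np.linalg.svd(U.T @ Q)[1]
print(np.all(cosines > 0.99))
```
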
The Jacobi method for singular value decomposition is well-suited for parallel architectures. Its application to signal subspace computations is well known. Basically, the subspace spanned by the singular vectors of the large singular values is separated from the subspace spanned by those of the small singular values. The Jacobi algorithm computes the singular values and the corresponding vectors in random order. This requires sorting the result after convergence of the algorithm to select the signal subspace. A modification of the Jacobi method based on a linear objective function merges the sorting into the SVD algorithm at little extra cost. In fact, the complexity of the diagonal processor cells in a triangular array gets slightly larger. In this paper we present these extensions, in particular the modified algorithm for computing the rotation angles, and give an example of its usefulness for subspace separation.
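A plain one-sided (Hestenes) Jacobi SVD can be sketched in a few lines, with the sorting done after convergence — the step the modified algorithm merges into the rotations. The small test matrix is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 4))

# One-sided Jacobi: rotate pairs of columns of a working copy until all
# columns are mutually orthogonal; the column norms are then the singular
# values, in no particular order.
B = A.copy()
for _ in range(10):                                  # cyclic sweeps
    for p in range(B.shape[1] - 1):
        for q in range(p + 1, B.shape[1]):
            x, y = B[:, p], B[:, q]
            alpha, beta, gamma = x @ x, y @ y, x @ y
            if abs(gamma) < 1e-15:
                continue                             # already orthogonal
            zeta = (beta - alpha) / (2.0 * gamma)
            sgn = 1.0 if zeta >= 0 else -1.0
            t = sgn / (abs(zeta) + np.hypot(1.0, zeta))  # tangent of the angle
            c = 1.0 / np.hypot(1.0, t)
            s = c * t
            B[:, [p, q]] = B[:, [p, q]] @ np.array([[c, s], [-s, c]])

sigma = np.sort(np.linalg.norm(B, axis=0))[::-1]     # sort after convergence
print(np.allclose(sigma, np.linalg.svd(A, compute_uv=False)))
```
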
A recursive algorithm is described which performs approximate numerical rank estimation and subspace-projection of the least squares weight vector. Although it appears to work well in practice, being based upon a one-sided rank-revealing QR factorization, it lacks a formal guarantee to reveal the rank and it is more approximate than URV or Chan's rank-revealing QR factorization. However, it may be implemented very simply, requires little additional computation and, being based upon QR, it is able to exploit the established and numerically sound architecture of the Gentleman-Kung QRD-based RLS algorithm. It is implemented by redefining the cells in the array and performing additional updates so that the R matrix which is stored at any time approximates that which would be obtained for the QR factorization of the projected data matrix. Thus, the projected least-squares residual is output directly from the array and the projected weight vector is readily extracted if required.
Some problems of subspace methods are demonstrated by the two principal radar applications: resolution enhancement and interference suppression. We show that for real data a critical problem of subspace methods is the definition of the signal subspace. Channel errors result in a leakage of signal power into the noise eigenvalues. For resolution enhancement, the optimum signal subspace dimension is close to the dimension without errors, because by this choice error effects can be reduced. A performance comparison of some current criteria to determine the subspace dimension is given. For interference suppression, the error/leakage subspace must be included in the subspace. Real data experiments show that projection methods are sensitive to the choice of the dimension of the jammer subspace. For this application, 'weighted projections' show much better performance. These 'weighted projections' can be effectively constructed from eigenvector-free subspace estimation methods.
A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
In this paper, we present an approach for deriving a dataflow processor intended for the execution of Jacobi algorithms, which are found in the application domain of array processing and other real-time adaptive signal processing applications. Our approach is to exploit the quasi-regularity in their dependence graph representations in search of what we call the Jacobi processor. This processor emerges from an exploration iteration which starts from a processor template and a set of Jacobi algorithms. Based on qualitative and quantitative performance analysis, both the algorithms and the processor template are restructured towards improved execution performance. To ensure the mapper is part of the emerging processor specification, the algorithm-to-processor mapping method is included in the iterative and hierarchical exploration method. The control flow in the processor exploits properties related to regularity in the structure of the algorithms, allows gentle transitions from regular to irregular levels in the algorithm hierarchy, and supports different control models for the irregular structures that appear at deeper levels in the hierarchy. Transformations aiming at reducing critical paths, increasing throughput, improving mapping efficiency and minimizing control overheads are taken into account. They include retiming, pipelining and lookahead techniques.
Algorithms for classifying and segmenting bit streams with different source content (such as speech, text and image) and different coding methods (such as ADPCM, μ-law, TIFF, GIF and JPEG) in a communication channel are investigated. In previous work, we focused on the separation of fixed- and variable-length coded bit streams, and the classification of two variable-length coded bit streams using Fourier analysis and an entropy feature. In this work, we consider the classification of multiple (more than two sources) compressed bit streams using vector quantization (VQ) and Gaussian mixture modeling (GMM). The performance of the VQ and GMM techniques depends on various parameters such as the size of the codebook, the number of mixtures and the test segment length. It is demonstrated with experiments that both VQ and GMM outperform the single entropy feature. It is also shown that GMM generally outperforms VQ.
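A toy version of the VQ branch of such a classifier can be sketched with k-means codebooks trained per class over simple histogram features. The two synthetic "coding methods" below are hypothetical stand-ins with different first-order statistics; the real system uses richer features, real bit streams and GMMs as well:

```python
import numpy as np

rng = np.random.default_rng(11)

# Two hypothetical "coding methods": symbol streams with different statistics.
def segments(p, count, length=256):
    return rng.choice(4, size=(count, length), p=p).astype(float)

train_a = segments([0.7, 0.1, 0.1, 0.1], 200)
train_b = segments([0.25, 0.25, 0.25, 0.25], 200)

def histogram_features(segs):
    return np.stack([np.bincount(s.astype(int), minlength=4) / s.size for s in segs])

def codebook(feats, k=4, iters=20):
    """Plain k-means as a minimal VQ codebook trainer."""
    cb = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((feats[:, None] - cb[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                cb[j] = feats[idx == j].mean(0)
    return cb

cb_a = codebook(histogram_features(train_a))
cb_b = codebook(histogram_features(train_b))

def classify(seg):
    """Assign a test segment to the class whose codebook fits it best."""
    f = histogram_features([seg])[0]
    da = ((cb_a - f) ** 2).sum(-1).min()      # distortion under codebook A
    db = ((cb_b - f) ** 2).sum(-1).min()
    return "A" if da < db else "B"

labels = [classify(s) for s in segments([0.7, 0.1, 0.1, 0.1], 50)]
print(labels.count("A") / len(labels))        # accuracy on class-A test segments
```
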
Spread spectrum modulation techniques such as frequency-hopped code division multiple access (FH-CDMA) are an efficient way to allow multi-user transmission over a limited bandwidth. Recently, there has been a push to increase user capacity over the same bandwidth. In particular, smart antenna arrays have been studied for their capacity gain potential. In this regard, we propose the use of blind signal separation for the recovery of signals received through an antenna array. Implementing this technique requires the antenna array vectors to be stationary, which does not hold in FH-CDMA, so we also propose a new method for frequency compensation. We present the theoretical details of the frequency compensation and compare its performance to the uncompensated case. We then present the blind signal separation algorithm applied to a complex antenna array matrix and complex signals with noise, and show in simulation that the blind signal separation works.
In this paper a novel method of estimating the displacement of moving objects from one frame to the next in an image sequence is presented. The method is based on artificial neural networks applied to different models of motion. Two models are examined: affine flow and planar surface motion. Various circuit architectures of simple neuron-like processors are considered for the estimation of motion parameters. The efficiency of the proposed networks is investigated by computer simulation for use in video processing.
Multimedia computing systems are typically heterogeneous multiprocessors which are highly optimized for both performance and implementation cost. This makes them well-suited to hardware/software co-design techniques, which simultaneously optimize the hardware and software architectures of a system to meet system requirements. This paper surveys important results in co-design with emphasis on techniques necessary for the design of multimedia computing systems, particularly the analysis and synthesis of memory systems.
In recent years, multimedia algorithms have proliferated vastly in the telecommunications, entertainment, and educational spheres. Proprietary methods of multimedia generation and presentation have given way to common interoperable standards brought about by the efforts of international standards bodies like JPEG, MPEG, DVD and others. For designers of multimedia systems, adherence to these international standards as well as a quick time to market for their products have become necessary goals. For quick design turnarounds, the need to lay out new silicon for each emerging standard must be obviated. Chip designers have started filling this need by designing media and graphics processors that are programmable and therefore afford a flexible, software-based approach to multimedia system design as compared to the traditional hardwired approach. This first generation of multimedia processors will become more sophisticated in architecture and computational power with the passage of time, thereby enabling product manufacturers to develop systems that allow consumers to easily explore the offerings of the world of multimedia.
This paper presents index mapping, a technique to efficiently map a widely used class of digital signal processing algorithms onto a space/time paradigm with immediate representation as the partitioning and scheduling map of a small, I/O-efficient hardware array. When applied to reconfigurable FPGA-based hardware architectures with downstream sea-of-gates optimization methods, the resulting systems form a dynamic signal processing environment with the best mix of performance and flexibility for wireless applications. Herein, index mapping is demonstrated with a mapping of the fast Fourier transform (FFT) onto an FPGA computing machine, the reconfigurable processor (RCP).
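The classical Cooley-Tukey index mapping that underlies FFT decompositions of this kind can be checked numerically: index the input as n = N2*n1 + n2 and the output as k = k1 + N1*k2, compute N2 short DFTs, multiply by twiddle factors, then compute N1 more short DFTs (a generic sketch of the mapping, not the RCP partitioning itself):

```python
import numpy as np

rng = np.random.default_rng(8)

# An N = N1*N2 point DFT via the index maps n = N2*n1 + n2, k = k1 + N1*k2.
N1, N2 = 4, 8
N = N1 * N2
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X = np.zeros(N, dtype=complex)
inner = np.zeros((N1, N2), dtype=complex)
for n2 in range(N2):
    inner[:, n2] = np.fft.fft(x[n2::N2])       # length-N1 DFTs over n1
for k1 in range(N1):
    inner[k1, :] *= np.exp(-2j * np.pi * k1 * np.arange(N2) / N)  # twiddles
for k1 in range(N1):
    X[k1::N1] = np.fft.fft(inner[k1, :])       # length-N2 DFTs over n2

print(np.allclose(X, np.fft.fft(x)))           # matches the direct N-point DFT
```
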
Recent developments in wireless communications have called for new methodologies in designing integrated circuits. We study the VLSI design challenges posed by new wireless applications and suggest possible solutions. High data-rate wireless applications demand complex VLSI designs. An extension of the IS-95-based code division multiple access (CDMA) system with a multi-channel architecture is being investigated and scrutinized by standards committees. We present a robust hardware architecture for the Viterbi decoder in high data-rate CDMA systems.
This paper provides a survey of speech coding techniques currently used in digital cellular communication systems. Speech production and perception are reviewed. General coding structures are presented. The history and recent refinements of a variety of specific speech coding techniques are reviewed, including short term (LPC) prediction and quantization, long term pitch prediction and quantization, and excitation modeling. Finally, the speech coding algorithms currently in use in digital cellular communications are discussed.
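The short-term (LPC) prediction reviewed here reduces, in its autocorrelation form, to solving the Yule-Walker normal equations. A sketch on a hypothetical AR(2) "speech-like" signal (the order and coefficients are assumptions; real coders use order ~10 on windowed speech frames):

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical speech-like signal: white noise through a known all-pole filter
# s[t] = 1.2 s[t-1] - 0.8 s[t-2] + e[t].
n = 20000
e = rng.standard_normal(n)
s = np.zeros(n)
for t in range(2, n):
    s[t] = 1.2 * s[t - 1] - 0.8 * s[t - 2] + e[t]

# Short-term LPC analysis: build the autocorrelation sequence and solve the
# Yule-Walker normal equations for the order-2 predictor coefficients.
r = np.array([s[: n - k] @ s[k:] for k in range(3)]) / n
R = np.array([[r[0], r[1]], [r[1], r[0]]])
a = np.linalg.solve(R, r[1:])

print(a)       # should be close to the true coefficients [1.2, -0.8]
```
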
The personal handyphone system (PHS) is a personal digital micro-cellular system that has recently been deployed in Japan and is being considered for operation in other locations. System objectives are to provide home, office, and public access capability. This paper describes a four-chip implementation of the PHS handset from Rockwell Semiconductor Wireless Communications Division (formerly Pacific Communications Sciences Inc.). It focuses on signal processing techniques which support the design of a low-cost receiver.
This paper investigates methods to reduce the amount of computation needed to detect information bits using a linear detector for a CDMA system. We show that a windowing technique coupled with pipelining can reduce the amount of computation without significantly sacrificing the performance of the linear feedback detector. We also describe efficient techniques to adapt to a dynamic system where the system parameters vary due to changes in the delays associated with individual users.
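To make the idea of a linear detector concrete, here is a hypothetical sketch of the simplest case: a decorrelating detector for a synchronous two-user CDMA channel (the spreading codes and amplitudes are assumptions for the demo, and this is not the paper's windowed, pipelined detector). The matched-filter outputs satisfy y = R d, where R is the code cross-correlation matrix; multiplying by R⁻¹ cancels the multiple-access interference before the sign decision.

```python
# Sketch only: decorrelating linear detector, 2 synchronous users,
# length-4 non-orthogonal spreading codes (assumed values).
codes = [[+1, +1, +1, -1],
         [+1, -1, +1, -1]]          # cross-correlation = 2, so users interfere

# User 0 sends +1 at amplitude 1; user 1 sends -1 at amplitude 3 (near-far).
s = [1.0, -3.0]

# Received chip sequence (noise-free here for clarity).
r = [sum(s[u] * codes[u][c] for u in range(2)) for c in range(4)]

# Matched-filter (correlator) outputs for each user.
y = [sum(r[c] * codes[u][c] for c in range(4)) for u in range(2)]
# Note: y[0] = -2 < 0, so the matched filter alone misjudges user 0.

# Cross-correlation matrix R and the decorrelated statistics z = R^{-1} y.
R = [[sum(codes[i][c] * codes[j][c] for c in range(4)) for j in range(2)]
     for i in range(2)]
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
z = [( R[1][1] * y[0] - R[0][1] * y[1]) / det,
     (-R[1][0] * y[0] + R[0][0] * y[1]) / det]

decisions = [1 if v > 0 else -1 for v in z]   # correct: [+1, -1]
```

The computational burden the paper targets comes from applying R⁻¹ (or a feedback equivalent) when there are many asynchronous users; windowing truncates the effective span of R so the work per bit stays bounded.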
We describe how to efficiently apply a spatially-variant blurring operator using linear interpolation of measured point spread functions. Numerical experiments illustrate that substantially better resolution can be obtained at very little additional cost compared to piecewise constant interpolation.
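A one-dimensional sketch shows the flavor of the approach (the details here are assumptions, not the paper's implementation): each measured PSF is applied once as a spatially-invariant convolution, and the results are blended with linear interpolation weights that vary across the domain, so the cost is a few convolutions plus a pointwise blend.

```python
# Illustrative 1-D sketch: spatially-variant blur approximated by linearly
# interpolating between two measured PSFs. PSF values are assumed for the demo.

def convolve(x, psf):
    """Zero-padded 1-D convolution, output the same length as x."""
    h = len(psf) // 2
    return [sum(psf[k] * x[i + h - k]
                for k in range(len(psf)) if 0 <= i + h - k < len(x))
            for i in range(len(x))]

def variant_blur(x, psf_left, psf_right):
    """Blend blurs measured at the left and right ends of the domain."""
    n = len(x)
    yl, yr = convolve(x, psf_left), convolve(x, psf_right)
    # Weight varies linearly from all-left at i=0 to all-right at i=n-1.
    return [(1 - i / (n - 1)) * yl[i] + (i / (n - 1)) * yr[i]
            for i in range(n)]

psf_left = [0.1, 0.8, 0.1]                 # narrow PSF measured on the left
psf_right = [0.2, 0.2, 0.2, 0.2, 0.2]      # wide PSF measured on the right
x = [0.0] * 21
x[5] = 1.0    # point source near the left: stays nearly sharp
x[15] = 1.0   # point source near the right: spreads out
y = variant_blur(x, psf_left, psf_right)   # y[5] = 0.65 > y[15] = 0.35
```

Piecewise constant interpolation would instead assign each pixel the single nearest measured PSF, introducing blocking at region boundaries; the linear blend removes that at the cost of one extra weighted sum per pixel.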
In this paper, we propose the use of complex-orthogonal transformations for finding the eigenvalues of a complex symmetric matrix. Using these special transformations can significantly reduce computational costs because the tridiagonal structure of a complex symmetric matrix is maintained.
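The building block of such methods can be sketched as follows (an assumed construction for illustration, not the paper's algorithm): a 2×2 complex-orthogonal rotation Q satisfying QᵀQ = I with the plain transpose rather than the conjugate transpose. Because congruence by such rotations preserves symmetry A = Aᵀ, a tridiagonal complex symmetric matrix keeps its structure when they are used to annihilate entries.

```python
# Sketch only: a 2x2 complex-orthogonal (not unitary) rotation that
# annihilates the second component of a complex vector (a, b).
import cmath

def complex_orthogonal_rotation(a, b):
    """Return (c, s) with c*c + s*s == 1 such that the rotation
    [[c, s], [-s, c]] maps (a, b) to (r, 0). Breaks down iff a*a + b*b == 0,
    which is the well-known hazard of complex-orthogonal transformations."""
    r = cmath.sqrt(a * a + b * b)
    return a / r, b / r

a, b = 1 + 2j, 3 - 1j
c, s = complex_orthogonal_rotation(a, b)
top, bottom = c * a + s * b, -s * a + c * b   # bottom is annihilated
```

Unlike a unitary Givens rotation, c and s can be arbitrarily large when a² + b² is close to zero, so practical implementations must monitor growth; the payoff is that symmetry, and hence the tridiagonal form, survives the similarity transformation.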
We review some recent results on identification of leading coefficients of a second order parabolic equation from single or all possible boundary measurements of its solutions. We describe numerical experiments for a linearized version of this inverse problem which potentially has important applications to nondestructive evaluation of physical bodies from measurements of their temperature fields.