Focal plane arrays with associated electronics and cooling are a substantial portion of the cost, complexity, size, weight,
and power requirements of Long-Wave IR (LWIR) imagers. Hyperspectral LWIR imagers add significant data volume
burden as they collect a high-resolution spectrum at each pixel. We report here on a LWIR Hyperspectral Sensor that
applies Compressive Sensing (CS) in order to achieve benefits in these areas.
The sensor applies single-pixel detection technology demonstrated by Rice University. The single-pixel approach uses a
Digital Micro-mirror Device (DMD) to reflect and multiplex the light from a random assortment of pixels onto the
detector. This is repeated for a number of measurements much less than the total number of scene pixels. We have
extended this architecture to hyperspectral LWIR sensing by inserting a Fabry-Perot spectrometer in the optical path.
This compressive hyperspectral imager collects all three dimensions on a single detection element, greatly reducing the
size, weight and power requirements of the system relative to traditional approaches, while also reducing data volume.
The CS architecture also supports innovative adaptive approaches to sensing, as the DMD device allows control over the
selection of spatial scene pixels to be multiplexed on the detector.
We are applying this advantage to the detection of plume gases, by adaptively locating and concentrating target energy.
A key challenge in this system is the diffraction loss produced by the DMD in the LWIR. We report the results of testing
DMD operation in the LWIR, as well as system spatial and spectral performance.
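The measurement process described above lends itself to a compact simulation: each DMD configuration acts as a pseudorandom binary mask, and the detector records one inner product per mask. The Python/NumPy sketch below illustrates that model for a toy scene; the scene size, measurement count, and Bernoulli mask statistics are illustrative assumptions, not parameters of the reported sensor.

```python
import numpy as np

rng = np.random.default_rng(0)

n_side = 32                     # assumed scene resolution, for illustration only
N = n_side * n_side             # total number of scene pixels
M = N // 4                      # number of multiplexed measurements, M << N

scene = rng.random((n_side, n_side))   # stand-in for the LWIR scene at one spectral band
x = scene.ravel()

# Each row of Phi is one DMD configuration: each mirror either directs its
# pixel toward the detector (1) or away from it (0).
Phi = rng.integers(0, 2, size=(M, N)).astype(float)

# One detector reading per pattern: the multiplexed sum of the selected pixels.
y = Phi @ x
print(y.shape)   # (M,): far fewer numbers than the N scene pixels
```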
Compressive sensing is a new data acquisition technique that aims to measure sparse and compressible signals at close to their intrinsic information rate rather than their Nyquist rate. Recent results in compressive sensing show that a sparse or compressible signal can be reconstructed from very few measurements with an incoherent, and
even randomly generated, dictionary. To date, the hardware implementation of compressive sensing analog-to-digital systems has not been straightforward. This paper explores the use of a Sigma-Delta quantizer architecture to implement such a system. After examining the challenges of using Sigma-Delta with a randomly generated
compressive sensing dictionary, we present efficient algorithms to compute the coefficients of the feedback loop. The experimental results demonstrate that Sigma-Delta relaxes the required analog filter order and quantizer precision. We further demonstrate that restrictions on the feedback coefficient values and stability constraints impose a small penalty on the performance of the
Sigma-Delta loop, while they make hardware implementations significantly simpler.
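For readers unfamiliar with the quantizer itself, the following sketch implements a textbook first-order Sigma-Delta loop that converts a bounded input sequence into a ±1 bit stream. The feedback-coefficient design algorithms reported above are not reproduced here, and the test input and moving-average reconstruction are assumptions for illustration.

```python
import numpy as np

def first_order_sigma_delta(x):
    """Textbook first-order Sigma-Delta loop: 1-bit output, integrated error state u."""
    u = 0.0
    q = np.empty_like(x)
    for n, xn in enumerate(x):
        q[n] = 1.0 if u + xn >= 0 else -1.0   # 1-bit quantizer
        u = u + xn - q[n]                     # feed the quantization error back
    return q

x = 0.5 * np.sin(2 * np.pi * 0.01 * np.arange(512))   # assumed bounded test input, |x| < 1
bits = first_order_sigma_delta(x)

# Low-pass filtering (here a simple moving average) recovers a coarse estimate of x.
recon = np.convolve(bits, np.ones(32) / 32, mode='same')
print(float(np.mean((recon - x) ** 2)))
```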
The theory of compressive sensing enables accurate and robust signal reconstruction from a number of measurements
dictated by the signal's structure rather than its Fourier bandwidth. A key element of the theory is
the role played by randomization. In particular, signals that are compressible in the time or space domain can
be recovered from just a few randomly chosen Fourier coefficients. However, in some scenarios we can only observe
the magnitude of the Fourier coefficients and not their phase. In this paper, we study the magnitude-only
compressive sensing problem and in parallel with the existing theory derive sufficient conditions for accurate
recovery. We also propose a new iterative recovery algorithm and study its performance. In the process, we
develop a new algorithm for the phase retrieval problem that exploits a signal's compressibility rather than its
support to recover it from Fourier transform magnitude measurements.
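To give a concrete feel for magnitude-only recovery of a compressible signal, the sketch below alternates a Fourier-magnitude projection with signal-domain soft thresholding, a generic sparse-Fienup-style iteration. The threshold, iteration count, and test signal are assumptions, and this is not the specific algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

N, K = 256, 8
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # sparse test signal

mags = np.abs(np.fft.fft(x_true))          # magnitude-only Fourier measurements

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = rng.standard_normal(N)                 # random initialization
for _ in range(500):
    X = np.fft.fft(x)
    X = mags * np.exp(1j * np.angle(X))    # enforce the measured magnitudes, keep the phase
    x = np.real(np.fft.ifft(X))
    x = soft(x, 0.05)                      # enforce sparsity (assumed threshold)

# Recovery is at best up to trivial ambiguities (global sign, circular shift, flip);
# the magnitude-fit residual indicates how well the constraints are satisfied.
print(float(np.linalg.norm(np.abs(np.fft.fft(x)) - mags)))
```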
The theory of compressive sensing (CS) enables the reconstruction of a sparse or compressible
image or signal from a small set of linear, non-adaptive (even random) projections. However, in
many applications, including object and target recognition, we are ultimately interested in making
a decision about an image rather than computing a reconstruction. We propose here a framework
for compressive classification that operates directly on the compressive measurements without first
reconstructing the image. We dub the resulting dimensionally reduced matched filter the smashed
filter. The first part of the theory maps traditional maximum likelihood hypothesis testing into the
compressive domain; we find that the number of measurements required for a given classification
performance level does not depend on the sparsity or compressibility of the images but only on
the noise level. The second part of the theory applies the generalized maximum likelihood method
to deal with unknown transformations such as the translation, scale, or viewing angle of a target
object. We exploit the fact that the set of transformed images forms a low-dimensional, nonlinear
manifold in the high-dimensional image space. We find that the number of measurements required
for a given classification performance level grows linearly in the dimensionality of the manifold but
only logarithmically in the number of pixels/samples and image classes. Using both simulations
and measurements from a new single-pixel compressive camera, we demonstrate the effectiveness
of the smashed filter for target classification using very few measurements.
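A minimal illustration of the smashed-filter idea: project each candidate template through the same random measurement matrix and classify a noisy compressive measurement vector by its nearest match in the measurement domain. The templates, measurement count, and noise level are toy assumptions, and no transformation parameters (translation, scale, pose) are searched here.

```python
import numpy as np

rng = np.random.default_rng(3)

N, M, n_classes = 1024, 64, 5
templates = rng.standard_normal((n_classes, N))         # stand-in target images, one per class
Phi = rng.standard_normal((M, N)) / np.sqrt(M)          # random measurement matrix

true_class = 2
y = Phi @ templates[true_class] + 0.05 * rng.standard_normal(M)   # noisy compressive measurements

# Smashed filtering: compare directly in the compressive domain, no reconstruction.
compressed_templates = templates @ Phi.T                # (n_classes, M)
scores = np.linalg.norm(compressed_templates - y, axis=1)
print(int(np.argmin(scores)))                           # nearest template -> 2
```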
Compressive Sensing is an emerging field based on the revelation that a small number of linear projections of a compressible signal contain enough information for reconstruction and processing. It has many promising implications and enables the design of new kinds of Compressive Imaging systems and cameras. In this paper, we develop a new camera architecture that employs a digital micromirror array to perform optical calculations of linear projections of an image onto pseudorandom binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while sampling the image fewer times than the number of pixels. Other attractive properties include its universality, robustness, scalability, progressivity, and computational asymmetry. The most intriguing feature of the system is that, since it relies on a single photon detector, it can be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers.
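To illustrate how an image could be recovered from such pseudorandom binary projections, here is a small iterative soft-thresholding (ISTA) sketch for a signal that is sparse in the canonical basis. The step size, penalty weight, and sparsity basis are assumptions, and this is not the reconstruction code used with the camera.

```python
import numpy as np

rng = np.random.default_rng(4)

N, M, K = 512, 128, 10
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # sparse test "image"

Phi = (rng.integers(0, 2, size=(M, N)) * 2.0 - 1.0) / np.sqrt(M)   # +/-1 pseudorandom patterns
y = Phi @ x_true                                                    # single-detector measurements

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the data-fit gradient
lam = 0.01                             # assumed l1 penalty weight
x = np.zeros(N)
for _ in range(300):
    x = soft(x + (Phi.T @ (y - Phi @ x)) / L, lam / L)

print(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```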
In this paper, we study families of images generated by varying a
parameter that controls the appearance of the object/scene in each image. Each image is viewed as a point in high-dimensional space; the family of images forms a low-dimensional submanifold that we call an image appearance manifold (IAM). We conduct a detailed study of some representative IAMs generated by translations/rotations of simple objects in the plane and by rotations of objects in 3-D space. Our central, somewhat surprising, finding is that IAMs generated by images with sharp edges are nowhere differentiable. Moreover, IAMs have an inherent multiscale structure in that approximate tangent planes fitted to ε-neighborhoods continually twist off into new dimensions as the scale parameter ε varies. We explore and explain this phenomenon. An additional, more exotic kind of local non-differentiability happens at some exceptional parameter points where occlusions cause image edges to disappear. These non-differentiabilities help to understand some key phenomena in image processing. They imply that Newton's method will not work in general for image registration, but that a multiscale Newton's method will work. Such a multiscale Newton's method is similar to existing coarse-to-fine differential estimation algorithms for image registration; the manifold perspective offers a well-founded theoretical motivation for the multiscale approach and allows quantitative study of convergence and approximation. The manifold viewpoint is also generalizable to other image understanding problems.
We develop a quaternion wavelet transform (QWT) as a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant, tight frame representation whose coefficients sport a magnitude and three phase values, two of which are directly proportional to local image shifts. The QWT can be efficiently computed using a dual-tree filter bank and is based on a 2-D Hilbert transform. We demonstrate how the QWT's magnitude and phase can be
used to accurately analyze local geometric structure in images. We also develop a multiscale flow/motion estimation algorithm that computes a disparity flow map between two images with respect to local object motion.
In the last few years, it has become apparent that traditional wavelet-based image processing algorithms and models have significant shortcomings in their treatment of edge contours. The standard modeling paradigm exploits the fact that wavelet coefficients
representing smooth regions in images tend to have small magnitude, and that the multiscale nature of the wavelet transform implies that these small coefficients will persist across scale (the canonical
example is the venerable zero-tree coder). The edge contours in the image, however, cause more and more large magnitude wavelet coefficients as we move down through scale to finer resolutions. But if the contours are smooth, they become simple as we zoom in on them, and are well approximated by straight lines at fine scales. Standard wavelet models exploit the grayscale regularity of the smooth regions of the image, but not the geometric regularity of the contours.
In this paper, we build a model that accounts for this geometric regularity by capturing the dependencies between complex wavelet coefficients along a contour. The Geometric Hidden Markov Tree (GHMT) assigns each wavelet coefficient (or spatial cluster of wavelet
coefficients) a hidden state corresponding to a linear approximation of the local contour structure. The shift and rotational-invariance properties of the complex wavelet transform allow the GHMT to model
the behavior of each coefficient given the presence of a linear edge at a specified orientation --- the behavior of the wavelet coefficient given the state. By connecting the states together in a quadtree, the GHMT ties together wavelet coefficients along a contour, and also models how the contour itself behaves across scale.
We demonstrate the effectiveness of the model by applying it to feature extraction.
In this paper, we link concepts from nonuniform sampling, smoothness function spaces, interpolation, and wavelet denoising to derive a new multiscale interpolation algorithm for piecewise smooth signals. We formulate an optimization that seeks the signal balancing agreement with the given samples against a wavelet-domain regularization term. For signals in the Besov space B^α_p(L_p), p ≥ 1, the optimization corresponds to convex programming in the wavelet domain. The algorithm simultaneously achieves signal interpolation and wavelet denoising, which makes it particularly suitable for noisy sample data, unlike classical approaches such as bandlimited and spline interpolation.
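A rough proximal-gradient rendering of the optimization above, using PyWavelets: a data-fit step on the sampled locations alternates with wavelet-domain soft thresholding as the regularizer. The wavelet, decomposition level, step size, and penalty weight are assumptions; the sketch illustrates the formulation rather than reproducing the paper's algorithm.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)

N = 256
t = np.linspace(0, 1, N)
f_true = np.where(t < 0.5, np.sin(4 * np.pi * t), 0.3)     # piecewise smooth test signal

idx = np.sort(rng.choice(N, 64, replace=False))            # nonuniform sample locations
samples = f_true[idx] + 0.02 * rng.standard_normal(64)     # noisy samples

lam, step = 0.02, 1.0
f = np.zeros(N)
for _ in range(200):
    # Gradient step on the data-fit term at the sampled locations only.
    resid = np.zeros(N)
    resid[idx] = samples - f[idx]
    f = f + step * resid
    # Proximal step: wavelet-domain soft thresholding (the Besov-type penalty).
    coeffs = pywt.wavedec(f, 'db4', level=5)
    coeffs = [coeffs[0]] + [pywt.threshold(c, lam * step, mode='soft') for c in coeffs[1:]]
    f = pywt.waverec(coeffs, 'db4')[:N]

print(float(np.linalg.norm(f[idx] - samples) / np.linalg.norm(samples)))
```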
Natural images can be viewed as combinations of smooth regions,
textures, and geometry. Wavelet-based image coders, such as the
space-frequency quantization (SFQ) algorithm, provide reasonably
efficient representations for smooth regions (using zerotrees, for
example) and textures (using scalar quantization) but do not properly
exploit the geometric regularity imposed on wavelet coefficients by
features such as edges. In this paper, we develop a representation for
wavelet coefficients in geometric regions based on the wedgelet
dictionary, a collection of geometric atoms that construct
piecewise-linear approximations to contours. Our wedgeprint
representation implicitly models the coherency among geometric
wavelet coefficients. We demonstrate that a simple compression
algorithm combining wedgeprints with zerotrees and scalar quantization
can achieve near-optimal rate-distortion performance D(R) ~ (log R)²/R² for the class of piecewise-smooth images containing smooth C² regions separated by smooth C² discontinuities. Finally, we extend this simple algorithm and propose a complete compression framework for natural images using a rate-distortion criterion to balance the three representations. Our Wedgelet-SFQ (WSFQ) coder outperforms SFQ in terms of visual quality and mean-square error.
Since their introduction a little more than 10 years ago, wavelets
have revolutionized image processing. Wavelet based
algorithms define the state-of-the-art for applications
including image coding (JPEG2000), restoration, and segmentation.
Despite their success, wavelets have significant shortcomings in their
treatment of edges. Wavelets do not parsimoniously capture even the
simplest geometrical structure in images, and wavelet based processing
algorithms often produce images with ringing around the edges.
As a first step towards accounting for this structure, we will show
how to explicitly capture the geometric regularity of contours in
cartoon images using the wedgelet representation and a multiscale geometry model. The wedgelet representation builds up an image out of simple piecewise constant functions with linear discontinuities. We will show how the geometry model, by putting a joint distribution on the orientations of the linear discontinuities, allows us to weigh several factors when choosing the wedgelet representation: the error between the representation and the original image, the parsimony of the representation, and whether the wedgelets in the representation form "natural" geometrical structures. Finally, we will analyze a simple wedgelet coder based on these principles, and show that it has optimal asymptotic performance for simple cartoon images.
Aggregate network traffic exhibits strong burstiness and non-Gaussian
distributions, which popular models such as
fractional Gaussian noise (fGn) fail to
capture. To better understand the cause of traffic burstiness, we analyze the
connection-level information of traffic traces. A careful study reveals that
traffic burstiness is directly related to the heterogeneity in connection
bandwidths and round-trip times and that a small number of high-bandwidth
connections are solely responsible for bursts. This separation of
connections has far-reaching implications on network control and leads to a new
model for network traffic which we call the alpha/beta model.
In this model, the network traffic is composed of two components:
a bursty, non-Gaussian
alpha component (stable Levy noise) and
a Gaussian, long range dependent beta component (fGn).
We present a fast scheme to separate the alpha and beta components
of traffic using wavelet denoising.
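One plausible reading of the wavelet-denoising separation: the beta component behaves like correlated Gaussian "noise", so thresholding the wavelet coefficients of the aggregate trace isolates the few large coefficients that carry the alpha bursts. The synthetic trace, wavelet, and threshold rule below are assumptions, not the authors' implementation.

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)

N = 2048
beta = np.cumsum(rng.standard_normal(N)) * 0.05      # stand-in for smooth, Gaussian-like traffic
alpha = np.zeros(N)
alpha[rng.choice(N, 15, replace=False)] = rng.uniform(3, 8, 15)   # sparse bursts
trace = beta + alpha

coeffs = pywt.wavedec(trace, 'db2', level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise-scale estimate
thr = sigma * np.sqrt(2 * np.log(N))                  # universal threshold

# Keep only the large detail coefficients: these carry the bursty alpha component.
alpha_coeffs = [np.zeros_like(coeffs[0])] + [
    pywt.threshold(c, thr, mode='hard') for c in coeffs[1:]
]
alpha_hat = pywt.waverec(alpha_coeffs, 'db2')[:N]
beta_hat = trace - alpha_hat                          # remainder approximates the beta component
print(float(np.corrcoef(alpha_hat, alpha)[0, 1]))
```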
In this paper, we present an unsupervised scheme aimed at segmentation of laser radar (LADAR) imagery for Automatic Target Detection. A coding theoretic approach implements Rissanen's concept of Minimum Description Length (MDL) for estimating piecewise homogeneous regions. MDL is used to penalize overly complex segmentations. The intensity data is modeled as a Gaussian random field whose mean and variance functions are piecewise constant across the image. This model is intended to capture variations in both mean value (intensity) and variance (texture). The segmentation algorithm is based on an adaptive rectangular recursive partitioning scheme. We implement a robust constant false alarm rate (CFAR) detector on the segmented intensity image for target detection and compare our results with the conventional cell averaging (CA) CFAR detector.
In this paper, a coding theoretic approach is presented for the unsupervised segmentation of SAR images. The approach implements Rissanen's concept of Minimum Description Length (MDL) for estimating piecewise homogeneous regions. Our image model is a Gaussian random field whose mean and variance functions are piecewise constant across the image. The model is intended to capture variations in both mean value (intensity) and variance (texture). We adopt a multiresolution/progressive encoding approach to this segmentation problem and use MDL to penalize overly complex segmentations. We develop two different approaches both of which achieve fast unsupervised segmentation. One algorithm is based on an adaptive (greedy) rectangular recursive partitioning scheme. The second algorithm is based on an optimally-pruned wedgelet-decorated dyadic partition. We present simulation results on SAR data to illustrate the performance obtained with these segmentation techniques.
Recently, a real-time imaging system based on terahertz (THz) time-domain spectroscopy has been developed. This technique offers a range of unique imaging modalities due to the broad bandwidth, sub-picosecond duration, and phase-sensitive detection of the THz pulses. This paper provides a brief introduction of the state-of-the-art in THz imaging. It also focuses on expanding the potential of this new and exciting field through two major efforts. The first concentrates on improving the experimental sensitivity of the system. We are exploring an interferometric arrangement to provide a background-free reflection imaging geometry. The second applies novel digital signal processing algorithms to extract useful information from the THz pulses. The possibility exists to combine spectroscopic characterization and/or identification with pixel-by-pixel imaging. We describe a new parameterization algorithm for both high and low refractive index materials.
Edges in images convey a great deal of information, but wavelet transforms do not represent them economically. Thus, popular wavelet-based compression and restoration techniques perform poorly in the presence of edges. We present here a new multiresolution wedgelet transform based on the lifting construction. This transform provides an economical edge representation and thus offers the potential for improved image processing. We demonstrate this potential with applications in image denoising.
Besov spaces classify signals and images through the Besov norm, which is based on a deterministic smoothness measurement. Recently, we revealed the relationship between the Besov norm and the likelihood of an independent generalized Gaussian wavelet probabilistic model. In this paper, we extend this result by providing an information-theoretic interpretation of the Besov norm as the Shannon codelength for signal compression under this probabilistic model. This perspective unites several seemingly disparate signal/image processing methods, including denoising by Besov norm regularization, complexity regularized denoising, minimum description length processing, and maximum smoothness interpolation. By extending the wavelet probabilistic model, we broaden the notion of smoothness space to more closely characterize real-world data. The locally Gaussian model leads directly to a powerful wavelet-domain Wiener filtering algorithm for denoising.
Multiscale processing, in particular using the wavelet transform, has emerged as an incredibly effective paradigm for signal processing and analysis. In this paper, we discuss a close relative of the Haar wavelet transform, the multiscale multiplicative decomposition. While the Haar transform captures the differences between signal approximations at different scales, the multiplicative decomposition captures their ratio. The multiplicative decomposition has many of the properties that have made wavelets so successful. Most notably, the multipliers are a sparse representation of smooth signals, they have a dependency structure similar to wavelet coefficients, and they can be calculated efficiently. The multiplicative decomposition is also a more natural signal representation than the wavelet transform for some problems. For example, it is extremely easy to incorporate positivity constraints into multiplier domain processing. In addition, there is a close relationship between the multiplicative decomposition and the Poisson process, a fact that has been exploited in the field of photon-limited imaging. In this paper, we will show that the multiplicative decomposition is also closely tied to the Kullback-Leibler distance between two signals. This allows us to derive an n-term KL approximation scheme using the multiplicative decomposition.
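A minimal sketch of the ratio idea for one stage of a Haar-style decomposition: sums of neighboring samples form the coarser approximation, and the fraction of each pair's sum carried by the left sample forms the multiplier. The variable names and normalization are assumptions for illustration; for smooth, positive signals the multipliers cluster near 1/2.

```python
import numpy as np

def multiplicative_decompose(a):
    """One stage: coarse approximation by pairwise sums, multipliers by ratios."""
    a = np.asarray(a, dtype=float)
    coarse = a[0::2] + a[1::2]                    # Haar-style scaling (sum) coefficients
    with np.errstate(divide='ignore', invalid='ignore'):
        mult = np.where(coarse > 0, a[0::2] / coarse, 0.5)   # fraction carried by left child
    return coarse, mult

def multiplicative_reconstruct(coarse, mult):
    a = np.empty(2 * len(coarse))
    a[0::2] = coarse * mult
    a[1::2] = coarse * (1.0 - mult)
    return a

counts = np.array([4., 5., 9., 8., 20., 22., 3., 2.])   # e.g. photon counts (positive data)
coarse, mult = multiplicative_decompose(counts)
print(coarse, mult)
print(np.allclose(multiplicative_reconstruct(coarse, mult), counts))   # exact reconstruction
```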
We develop a general framework to simultaneously exploit texture and shape characterization in multiscale image segmentation. By posing multiscale segmentation as a model selection problem, we invoke the powerful framework offered by minimum description length (MDL). This framework dictates that multiscale segmentation comprises multiscale texture characterization and multiscale shape coding. Analysis of current multiscale maximum a posteriori segmentation algorithms reveals that these algorithms implicitly use a shape coder with the aim to estimate the optimal MDL solution, but find only an approximate solution.
Traditional filtering methods operate on the entire signal or image. In some applications, however, errors are concentrated in specific regions or features. A prime example is images generated using computed tomography. Practical implementations limit the amount of high frequency content in the reconstructed image, and consequently, edges are blurred. We introduce a new post-reconstruction edge enhancement algorithm, based on the reassignment principle and wavelets, that localizes its sharpening exclusively to edge features. Our method enhances edges without disturbing the low frequency textural details.
We study the segmentation of SAR imagery using wavelet-domain Hidden Markov Tree (HMT) models. The HMT model is a tree- structured probabilistic graph that captures the statistical properties of the wavelet transforms of images. This technique has been successfully applied to the segmentation of natural texture images, documents, etc. However, SAR image segmentation poses a difficult challenge owing to the high levels of speckle noise present at fine scales. We solve this problem using a 'truncated' wavelet HMT model specially adapted to SAR images. This variation is built using only the coarse scale wavelet coefficients. When applied to SAR images, this technique provides a reliable initial segmentation. We then refine the classification using a multiscale fusion technique, which combines the classification information across scales from the initial segmentation to correct for misclassifications. We provide a fast algorithm, and demonstrate its performance on MSTAR clutter data.
We present a new approach to SAR image segmentation based on a Poisson approximation to the SAR amplitude image. It has been established that SAR amplitude images are well approximated using Rayleigh distributions. We show that, with suitable modifications, we can model piecewise homogeneous regions (such as tanks, roads, scrub, etc.) within the SAR amplitude image using a Poisson model that bears a known relation to the underlying Rayleigh distribution. We use the Poisson model to generate an efficient tree-based segmentation algorithm guided by the minimum description length (MDL) criterion. We present a simple fixed-tree approach and a more flexible adaptive recursive partitioning scheme. The segmentation is unsupervised, requiring no prior training, and very simple, efficient, and effective for identifying possible regions of interest (targets). We present simulation results on MSTAR clutter data to demonstrate the performance obtained with this parsing technique.
We introduce a new document image segmentation algorithm, HMTseg, based on wavelets and the hidden Markov tree (HMT) model. The HMT is a tree-structured probabilistic graph that captures the statistical properties of the coefficients of the wavelet transform. Since the HMT is particularly well suited to images containing singularities (edges and ridges), it provides a good classifier for distinguishing between different document textures. Utilizing the inherent tree structure of the wavelet HMT and its fast training and likelihood computation algorithms, we perform multiscale texture classification at a range of different scales. We then fuse these multiscale classifications using a Bayesian probabilistic graph to obtain reliable final segmentations. Since HMTseg works on the wavelet transform of the image, it can directly segment wavelet-compressed images, without the need for decompression into the space domain. We demonstrate HMTseg's performance with both synthetic and real imagery.
The more a priori knowledge we encode into a signal processing algorithm, the better performance we can expect. In this paper, we overview several approaches to capturing the structure of singularities (edges, ridges, etc.) in wavelet-based signal processing schemes. Leveraging results from approximation theory, we discuss nonlinear approximations on trees and point out that an optimal tree approximant exists and is easily computed. The optimal tree approximation inspires a new hierarchical interpretation of the wavelet decomposition and a tree-based wavelet denoising algorithm that suppresses spurious noise bumps.
This paper develops new algorithms for adapted multiscale analysis and signal adaptive wavelet transforms. We construct our adaptive transforms with the lifting scheme, which decomposes the wavelet transform into prediction and update stages. We adapt the prediction stage to the signal structure and design the update stage to preserve the desirable properties of the wavelet transform. The resulting scale and spatially adaptive transforms are extended to the image estimation problem; our new image transforms show improved denoising performance over existing (non-adaptive) orthogonal transforms.
We discover a new relationship between two seemingly different image modeling methodologies: the Besov space theory and the wavelet-domain statistical image models. Besov spaces characterize the set of real-world images through a deterministic characterization of the image smoothness, while statistical image models capture the probabilistic properties of images. By establishing a relationship between the Besov norm and the normalized likelihood function under an independent wavelet-domain generalized Gaussian model, we obtain a new interpretation of the Besov norm which provides a natural generalization of the theory for practical image processing. Based on this new interpretation of the Besov space, we propose a new image denoising algorithm based on projections onto the convex sets defined in the Besov space. After pointing out the limitations of the Besov space, we propose possible generalizations using more accurate image models.
We propose a hybrid approach to wavelet-based deconvolution that comprises Fourier-domain system inversion followed by wavelet-domain noise suppression. In contrast to conventional wavelet-based deconvolution approaches, the algorithm employs a regularized inverse filter, which allows it to operate even when the system is non-invertible. Using a mean-square-error (MSE) metric, we strike an optimal balance between Fourier-domain regularization (matched to the system) and wavelet-domain regularization (matched to the signal/image). Theoretical analysis reveals that the optimal balance is determined by the economics of the signal representation in the wavelet domain and the operator structure. The resulting algorithm is fast (O(N (log₂ N)²) complexity for signals/images of N samples) and is well-suited to data with spatially-localized phenomena such as edges. In addition to enjoying asymptotically optimal rates of error decay for certain systems, the algorithm also achieves excellent performance at fixed data lengths. In simulations with real data, the algorithm outperforms the conventional time-invariant Wiener filter and other wavelet-based deconvolution algorithms in terms of both MSE performance and visual quality.
We introduce a new image texture segmentation algorithm, HMTseg, based on wavelet-domain hidden Markov tree (HMT) models. The HMT model is a tree-structured probabilistic graph that captures the statistical properties of wavelet coefficients. Since the HMT is particularly well suited to images containing singularities, it provides a good classifier for textures. Utilizing the inherent tree structure of the wavelet HMT and its fast training and likelihood computation algorithms, we perform multiscale texture classification at various scales. Since HMTseg works on the wavelet transform of the image, it can directly segment wavelet-compressed images, without the need for decompression. We demonstrate the performance of HMTseg with synthetic, aerial photo, and document image segmentations.
Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training. In this paper, we propose two reduced-parameter HMT models that capture the general structure of a broad class of real-world images. In the image HMT (iHMT) model we use the fact that for a large class of images the structure of the HMT is self-similar across scale. This allows us to reduce the complexity of the iHMT to just nine easily trained parameters. In the universal HMT (uHMT) we take a Bayesian approach and fix these nine parameters. The uHMT requires no training of any kind. While simple, we show using a series of image estimation/denoising experiments that these two new models retain nearly all of the key structure modeled by the full HMT. Finally, we propose a fast shift-invariant HMT estimation algorithm that outperforms all other wavelet-based estimators in the current literature, both in mean-square error and visual metrics.
Many imaging systems rely on photon detection as the basis of image formation. One of the major sources of error in these systems is Poisson noise due to the quantum nature of the photon detection process. Unlike additive Gaussian noise, Poisson noise is signal-dependent, and consequently separating signal from noise is a very difficult task. In this paper, we develop a novel wavelet-domain filtering procedure for noise removal in photon imaging systems. The filter adapts to both the signal and the noise and balances the trade-off between noise removal and excessive smoothing of image details. Designed using the statistical method of cross-validation, the filter is simultaneously optimal in a small-sample predictive sum of squares sense and asymptotically optimal in the mean square error sense. The filtering procedure has a simple interpretation as a joint edge detection/estimation process. Moreover, we derive an efficient algorithm for performing the filtering that has the same order of complexity as the fast wavelet transform itself. The performance of the new filter is assessed with simulated data experiments and tested with actual nuclear medicine imagery.
Wavelet shrinkage is a signal estimation technique that exploits the remarkable abilities of the wavelet transform for signal compression. Wavelet shrinkage using thresholding is asymptotically optimal in a minimax mean-square error (MSE) sense over a variety of smoothness spaces. However, for any given signal, the MSE-optimal processing is achieved by the Wiener filter, which delivers substantially improved performance. In this paper, we develop a new algorithm for wavelet denoising that uses a wavelet shrinkage estimate as a means to design a wavelet-domain Wiener filter. The shrinkage estimate indirectly yields an estimate of the signal subspace that is leveraged into the design of the filter. A peculiar aspect of the algorithm is its use of two wavelet bases: one for the design of the empirical Wiener filter and one for its application. Simulation results show up to a factor of 2 improvement in MSE over wavelet shrinkage, with a corresponding improvement in visual quality of the estimate. Simulations also yield a remarkable observation: whereas shrinkage estimates typically improve performance by trading bias for variance or vice versa, the proposed scheme typically decreases both bias and variance compared to wavelet shrinkage.
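A compact sketch of the two-basis construction: a shrinkage estimate computed in one wavelet basis supplies per-coefficient signal-energy estimates, which define empirical Wiener gains applied to the noisy data in a second basis. The specific wavelets, noise level, and threshold rule below are assumptions chosen for illustration.

```python
import numpy as np
import pywt

rng = np.random.default_rng(7)

N = 1024
t = np.linspace(0, 1, N)
signal = np.piecewise(t, [t < 0.3, (t >= 0.3) & (t < 0.7), t >= 0.7], [0.0, 1.0, 0.2])
sigma = 0.1
noisy = signal + sigma * rng.standard_normal(N)

# Step 1: wavelet shrinkage estimate in the design basis (here 'db2', an assumption).
thr = sigma * np.sqrt(2 * np.log(N))
coeffs_design = pywt.wavedec(noisy, 'db2', level=6)
shrunk = [coeffs_design[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs_design[1:]]
pilot = pywt.waverec(shrunk, 'db2')[:N]

# Step 2: empirical Wiener filter in the application basis (here 'db8', an assumption).
# Gains are estimated signal energy over estimated signal energy plus noise variance.
noisy_c = pywt.wavedec(noisy, 'db8', level=6)
pilot_c = pywt.wavedec(pilot, 'db8', level=6)
wiener = [noisy_c[0]] + [
    nc * (pc ** 2 / (pc ** 2 + sigma ** 2)) for nc, pc in zip(noisy_c[1:], pilot_c[1:])
]
estimate = pywt.waverec(wiener, 'db8')[:N]
print(float(np.mean((estimate - signal) ** 2)), float(np.mean((pilot - signal) ** 2)))
```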
Most wavelet-based statistical signal and image processing techniques treat the wavelet coefficients as though they were statistically independent. This assumption is unrealistic; considering the statistical dependencies between wavelet coefficients can yield substantial performance improvements. In this paper, we develop a new framework for wavelet-based signal processing that employs hidden Markov models to characterize the dependencies between wavelet coefficients.
This paper addresses the problem of detection and classification of complicated signals in noise. Classical detection methods such as energy detectors and linear discriminant analysis do not perform well in many situations of practical interest. We introduce a new approach based on hidden Markov modeling in the wavelet domain. Using training data, we fit a hidden Markov model (HMM) to the wavelet transform to concisely represent its probabilistic time-frequency structure. The HMM provides a natural framework for performing likelihood ratio tests used in signal detection and classification. We compare our approach with classical methods for classification of nonlinear processes, change-point detection, and detection with unknown delay.
Nonlinearities are often encountered in the analysis and processing of real-world signals. This paper develops new signal decompositions for nonlinear analysis and processing. The theory of tensor norms is employed to show that wavelets provide an optimal basis for the nonlinear signal decompositions. The nonlinear signal decompositions are also applied to signal processing problems.
Time-frequency distributions (TFDs) have proven useful in a wide variety of nonstationary signal processing applications. While sophisticated optimal bilinear TFDs have been developed to extract the maximum possible time-frequency information from signals, certain applications dictate simpler linear, running-FFT processing techniques. In this paper, we propose a signal-dependent short-time Fourier transform/spectrogram that enjoys many of the advantages of optimal bilinear TFDs yet retains the simplicity and efficiency of running-FFT processing. In addition, we extend the optimal kernel design problem to linear spaces of signals.
We propose an extension of Thomson's multiple window spectrum estimation for stationary random processes to the time-varying spectrum estimation of non-stationary random processes. Unlike previous extensions of Thomson's method, in this paper we identify and utilize optimally concentrated window and wavelet functions for the time-frequency and time-scale planes respectively. Moreover, we develop a statistical test for detecting and extracting chirping line components.
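As a baseline for the time-varying extension, the sketch below computes a sliding-window Thomson-style multitaper spectrogram by averaging eigenspectra over DPSS tapers in each frame. The window length, time-bandwidth product, and taper count are assumptions, and the optimally concentrated time-frequency and time-scale windows developed in the paper are not reproduced.

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(8)

fs, T = 1000, 2.0
t = np.arange(0, T, 1 / fs)
x = np.sin(2 * np.pi * (50 + 100 * t) * t) + 0.5 * rng.standard_normal(t.size)  # chirp + noise

win, step, NW, K = 256, 64, 3.0, 5           # assumed analysis parameters
tapers = dpss(win, NW, Kmax=K)               # (K, win) Slepian tapers

frames = []
for start in range(0, x.size - win + 1, step):
    seg = x[start:start + win]
    eig_spectra = np.abs(np.fft.rfft(tapers * seg, axis=1)) ** 2   # one eigenspectrum per taper
    frames.append(eig_spectra.mean(axis=0))                        # multitaper average
S = np.array(frames).T                        # (freq, time) time-varying spectrum estimate
print(S.shape)
```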
Warping similarity transformations provide a powerful vehicle for generating new classes of joint distributions based on concepts different from time, frequency, and scale. These new signal representations focus on the critical characteristics of large classes of signals, and hence, prove useful for representing and processing signals that are not well matched by current techniques. Interestingly, all distributions that have been used to illustrate more complicated generalized distribution design techniques can be generated using the warping method.
The large variance of the Wigner-Ville distribution makes smoothing essential for producing readable estimates of the time-varying power spectrum of noise-corrupted signals. Since linear smoothing trades reduced variance for increased bias of the signal components, we explore two nonlinear estimation techniques based on soft thresholding in an orthonormal basis representation. Soft thresholding provides considerable variance reduction without greatly impairing the time-frequency resolution of the estimated spectrum.
An efficient algorithm is presented for computing the continuous wavelet transform and the wideband ambiguity function on a sample grid with uniform time spacing but arbitrary sampling in scale. The method is based on the chirp z-transform and requires the same order of computation as constant-bandwidth analysis techniques, such as the short-time Fourier transform and the narrowband ambiguity function. An alternative spline approximation method which is more efficient when the number of scale samples is large is also described.
Current bilinear time-frequency representations apply a fixed kernel to smooth the Wigner distribution. However, the choice of a fixed kernel limits the class of signals that can be analyzed effectively. This paper presents optimality criteria for the design of signal-dependent kernels that suppress cross-components while passing as much auto-component energy as possible, irrespective of the form of the signal. A fast algorithm for the optimal kernel solution makes the procedure competitive computationally with fixed-kernel methods. Examples demonstrate the superior performance of the optimal kernel for a frequency modulated signal.
SC902: Compressive Sensing: Theory and Applications
Sensors and signal processing hardware and algorithms are under increasing pressure to accommodate ever larger and higher-dimensional data sets; ever faster capture, sampling, and processing rates; ever lower power consumption; communication over ever more difficult channels; and radically new sensing modalities. This four-hour course presents the fundamental theory and selected applications of Compressive Sensing, a new approach to data acquisition in which analog signals are digitized for processing not via uniform sampling but via inner products with random test functions. Unlike Nyquist-rate sampling, which completely describes a signal by exploiting its bandlimitedness, Compressive Sensing reduces the number of measurements required to completely describe a signal by exploiting its compressibility. The implications are promising for many applications and enable the design of new kinds of analog-to-digital converters, imaging systems and cameras, and radar systems, among others.