We present a novel method for robust tracking in video frame sequences via L1-Grassmann manifolds. The proposed method adaptively represents the target as a point on the Grassmann manifold, calculated by means of L1-norm Principal-Component Analysis (L1-PCA). For this purpose, an efficient algorithm for adaptive L1-PCA is presented. Our experimental studies illustrate that the proposed tracking method, leveraging the outlier resistance of L1-PCA, is robust against target occlusions and illumination variations.
Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is naturally formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs, whereas the problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
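The series-of-LPs idea for the hyperplane case can be sketched as follows: fitting a hyperplane under coordinate-wise L1 error reduces to one least-absolute-deviations (LAD) regression per coordinate, each of which is a single linear program. The sketch below uses SciPy's `linprog`; the function names and data are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least-absolute-deviations regression as one LP:
    minimize sum(t)  subject to  -t <= y - X @ beta <= t."""
    n, d = X.shape
    c = np.concatenate([np.zeros(d), np.ones(n)])
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * d + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d], res.fun

def best_fit_hyperplane(points):
    """Solve one LAD regression per coordinate (a series of LPs) and keep
    the fit with the smallest total L1 error."""
    best = None
    for j in range(points.shape[1]):
        X = np.delete(points, j, axis=1)
        beta, err = lad_fit(X, points[:, j])
        if best is None or err < best[0]:
            best = (err, j, beta)
    return best
```

When most points lie exactly on a hyperplane, the LAD fit passes through them and charges the full residual only to outliers, which illustrates the robustness the L1 norm provides.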
Standard Principal-Component Analysis (PCA) is known to be very sensitive to outliers among the processed data [1]. On the other hand, it has recently been shown that L1-norm-based PCA (L1-PCA) exhibits sturdy resistance against outliers, while it performs similarly to standard PCA when applied to nominal or smoothly corrupted data [2], [3]. Exact calculation of the K L1-norm Principal Components (L1-PCs) of a rank-r data matrix X ∈ R^(D×N) costs O(2^(NK)) in the general case, and O(N^((r-1)K+1)) when r is fixed with respect to N [2], [3]. In this work, we examine approximating the K L1-PCs of X by the K L1-PCs of its L2-norm-based rank-d approximation (K ≤ d ≤ r), calculable exactly with reduced complexity O(N^((d-1)K+1)). Reduced-rank L1-PCA aims at combining the low computational cost of standard PCA with the outlier resistance of L1-PCA. Our novel approximation guarantees and experiments on dimensionality reduction show that, for appropriately chosen d, reduced-rank L1-PCA performs almost identically to L1-PCA.
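A minimal sketch of the reduced-rank pipeline for K = 1, assuming the known result that the exact L1-PC has the form Xb/||Xb||_2 where the binary vector b maximizes ||Xb||_2. The brute-force search below is only viable for tiny N; the complexity figures above rely on more efficient exact algorithms.

```python
import numpy as np
from itertools import product

def reduced_rank_l1_pc(X, d):
    """K = 1 L1-PC of the rank-d L2 approximation of X (brute-force sketch)."""
    # Step 1: standard PCA step -- truncated SVD gives the rank-d approximation.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xd = (U[:, :d] * s[:d]) @ Vt[:d]
    # Step 2: exact L1-PC of Xd via exhaustive search over binary sign vectors.
    best_norm, best_q = -1.0, None
    for bits in product([-1.0, 1.0], repeat=Xd.shape[1]):
        v = Xd @ np.array(bits)
        n = np.linalg.norm(v)
        if n > best_norm:
            best_norm, best_q = n, v / n
    return best_q
```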
In this paper, we present a sparse image reconstruction approach for radar imaging through multilayered media with total variation minimization (TVM). The approach is well suited for high-resolution imaging in both ground penetrating radar (GPR) and through-the-wall radar imaging (TWRI) applications. The multilayered-media Green’s function is incorporated in the imaging algorithm to efficiently model wave propagation in the multilayered environment. For GPR imaging, the multilayered subsurface Green’s function is derived in closed form with the saddle-point method, which is significantly less time consuming than numerical methods. For through-the-wall radar imaging, where the first and last layers are free space, a far-field approximation of the Green’s function in analytical form is used to model wave propagation through single or multilayered building walls. TVM minimizes the gradient of the image, resulting in excellent edge preservation and shape reconstruction. Representative examples are presented to show high-quality imaging results with limited data under various subsurface and through-the-wall imaging scenarios.
Most existing radar algorithms are developed under the assumption that the environment (clutter) is known and stationary. In practice, however, the characteristics of clutter can vary enormously over time depending on the operational scenario. If unaccounted for, these nonstationary variabilities may drastically hinder radar performance. It is therefore essential that radar systems dynamically detect changes in the environment and adapt to them by learning the new statistical characteristics of the environment. In this paper, we employ sparse recovery for clutter identification; specifically, we identify the statistical profile that the clutter follows. We use Monte Carlo simulations to generate and test clutter data drawn from various distributions.
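As a toy illustration of the idea (not the paper's algorithm), the empirical amplitude histogram can be matched against a small dictionary of candidate clutter densities; the best-correlated atom, i.e., the first matching-pursuit selection, identifies the statistical profile. The distribution choices and unit-scale parameters below are assumptions made for the sketch.

```python
import numpy as np

def pdf_dictionary(bin_centers):
    """Unit-norm atoms for candidate clutter amplitude densities (unit scale)."""
    x = bin_centers
    atoms = {
        "rayleigh": x * np.exp(-x**2 / 2),
        "exponential": np.exp(-x),
        "lognormal": np.exp(-np.log(np.maximum(x, 1e-9)) ** 2 / 2)
                     / np.maximum(x, 1e-9),
    }
    return {name: a / np.linalg.norm(a) for name, a in atoms.items()}

def identify_clutter(samples, n_bins=50, amp_max=5.0):
    """Pick the dictionary atom best correlated with the empirical histogram."""
    hist, edges = np.histogram(samples, bins=n_bins,
                               range=(0.0, amp_max), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    D = pdf_dictionary(centers)
    return max(D, key=lambda name: hist @ D[name])
```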
Compressive sensing (CS) has proven to be a viable method for reconstructing high-resolution signals from low-resolution measurements. Integrating CS principles into an optical system allows for higher-resolution imaging using lower-resolution sensor arrays. In contrast to prior works on CS-based imaging, our focus in this paper is on imaging through fiber-optic bundles, in which manufacturing constraints limit individual fiber spacing to around 2 μm. This limitation essentially renders fiber-optic bundles low-resolution sensors with relatively few resolvable points per unit area. These fiber bundles are often used in minimally invasive medical instruments for viewing tissue at macroscopic and microscopic levels. While the compact nature and flexibility of fiber bundles allow for excellent tissue access in vivo, imaging through fiber bundles does not provide the fine details of tissue features that are demanded in some medical situations. Our hypothesis is that adapting existing CS principles to fiber bundle-based optical systems will overcome the resolution limitation inherent in fiber-bundle imaging. In a previous paper we examined the practical challenges involved in implementing a highly parallel version of the single-pixel camera while focusing on synthetic objects. This paper extends the same architecture to fiber-bundle imaging under incoherent illumination and addresses some practical issues associated with imaging physical objects. Additionally, we model the optical non-idealities in the system to reduce modeling errors.
To avoid the need for a high-bandwidth detector, a fast A/D converter, and large memory storage, we study a compressive full-waveform LIDAR system that uses a temporally modulated laser instead of a pulsed laser. Full-waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed for the measurements. The SPIRAL algorithm with a canonical basis is employed to account for Poisson noise under low-illumination conditions.
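The measurement model can be sketched as y = Ax, where the rows of A are the random modulation patterns and x is the sparse full-waveform return. Below is a noiseless toy version with ±1 patterns for simplicity and a minimal orthogonal-matching-pursuit solver standing in for SPIRAL (which would be the appropriate choice under Poisson noise); the sizes and echo positions are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Minimal orthogonal matching pursuit: greedily select k atoms."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(3)
n, m = 200, 100                        # waveform length, compressed measurements
x = np.zeros(n)
x[[30, 90, 151]] = [1.0, 0.6, 0.8]     # three echoes in the return waveform
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # random binary patterns
y = A @ x                              # compressive measurements
x_hat = omp(A, y, 3)                   # recover the sparse waveform
```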
A wireless local area network (WLAN) is an important type of wireless network that connects wireless nodes within a local area. WLANs suffer from significant problems such as network load imbalance, high energy consumption, and heavy sampling loads. This paper presents a new network-traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, a good figure for WLANs. It also increases Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good foundation for establishing high-quality local area networks. The architecture enables continuous data acquisition and compression of WLAN signals and is suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before analog-to-digital conversion, to generate a compressed version of the input signal. At the receiver side, a reconstruction algorithm recovers the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), reducing the Bit Error Rate (BER) by 15% at each wireless node.
We introduce maximum-SINR, sparse-binary waveforms that modulate data information symbols over the entire continuum of the available/device-accessible spectrum. We present an optimal algorithm that designs the proposed waveforms by maximizing the signal-to-interference-plus-noise ratio (SINR) at the output of the maximum-SINR linear filter at the receiver. In addition, we propose a suboptimal, computationally efficient algorithm. Simulation studies compare the proposed sparse-binary waveforms with their conventional non-sparse binary counterparts and demonstrate their superior SINR performance. The post-filtering SINR and bit-error-rate (BER) improvements attained by the proposed waveforms are also experimentally verified in a software-defined radio testbed operating in a multipath laboratory environment in the presence of colored interference.
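The optimal design can be illustrated by exhaustive search over supports and sign patterns for a tiny waveform length (the optimal and suboptimal algorithms in the paper are of course more efficient). Here the post-filtering SINR of a unit-energy waveform s against an interference-plus-noise covariance R is taken proportional to s^T R^{-1} s, its value under the maximum-SINR filter; the covariance in the usage example is an assumption for the sketch.

```python
import numpy as np
from itertools import combinations, product

def design_sparse_binary(R_inv, L, k):
    """Exhaustively find the k-sparse, binary (+/-1 nonzeros), unit-energy
    waveform maximizing s^T R^{-1} s (post-filter SINR up to a scale)."""
    best_val, best_s = -np.inf, None
    for supp in combinations(range(L), k):
        for signs in product([-1.0, 1.0], repeat=k):
            s = np.zeros(L)
            s[list(supp)] = signs
            s /= np.sqrt(k)                 # unit transmit energy
            val = s @ R_inv @ s
            if val > best_val:
                best_val, best_s = val, s
    return best_s, best_val
```

When interference is concentrated on some chips, the sparse waveform places its energy elsewhere; this is where the SINR gain over non-sparse binary waveforms, which must occupy every chip, comes from.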
Markers such as CD13 and CD133 have been used to identify Cancer Stem Cells (CSCs) in various tissue images. CSC nuclei are highly likely to appear brown in CD13-stained liver tissue images. We observe a high correlation between the ratio of brown to blue nuclei in CD13-stained images and the ratio of dark blue to blue nuclei in H&E-stained liver images. Therefore, a pathologist observing many dark blue nuclei in an H&E-stained tissue image may also order CD13 staining to estimate the CSC ratio. In this paper, we describe a computer vision method based on a neural network that estimates the ratio of dark blue to blue nuclei in an H&E-stained liver tissue image. The neural network structure is based on a multiplication-free operator using only additions and sign operations. Experimental results are presented.
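One common definition of such an operator in the multiplication-free neural network literature (the exact operator used here may differ in detail) replaces the product x·y with a sign-and-addition combination:

```python
import numpy as np

def mf_op(x, y):
    """Multiplication-free surrogate for x * y: the sign product can be
    realized as an XOR of sign bits, so only additions remain."""
    return np.sign(x) * np.sign(y) * (np.abs(x) + np.abs(y))

def mf_dot(x, w):
    """Additive stand-in for the neuron inner product sum_i x_i * w_i."""
    return np.sum(mf_op(x, w))
```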
Electrodermal Activity (EDA), a peripheral index of sympathetic nervous system activity, is a primary measure used in psychophysiology. EDA is widely accepted as an indicator of physiological arousal, and it has been shown to reveal when psychologically novel events occur. Traditionally, EDA data is collected in controlled laboratory experiments. However, recent developments in wireless biosensing have led to an increase in out-of-lab studies. This transition to ambulatory data collection has introduced challenges. In particular, artifacts such as wearer motion, changes in temperature, and electrical interference can be misidentified as true EDA responses. The inability to distinguish artifact from signal hinders analyses of ambulatory EDA data. Though manual procedures for identifying and removing EDA artifacts exist, they are time consuming, which is problematic for the types of longitudinal data sets represented in modern ambulatory studies. This manuscript presents a novel technique to automatically identify and remove artifacts in EDA data using curve fitting and sparse recovery methods. Our method was evaluated using labeled data to determine the accuracy of artifact identification. Procedures, results, conclusions, and future directions are presented.
Sparse Representation (SR) is an effective classification method. Given a set of data vectors, SR aims at finding the sparsest representation of each data vector among the linear combinations of the bases in a given dictionary. In order to further improve classification performance, joint SR, which incorporates interpixel correlation information from neighborhoods, has been proposed for image pixel classification. However, SR and joint SR demand a significant amount of computation time and memory, especially when classifying a large number of pixels. To address this issue, we propose a superpixel sparse representation (SSR) algorithm for target detection in hyperspectral imagery. We first cluster hyperspectral pixels into nearly uniform hyperspectral superpixels using our proposed patch-based SLIC approach, based on their spectral and spatial information. The sparse representations of these superpixels are then obtained by simultaneously decomposing superpixels over a given dictionary consisting of both target and background pixels. The class of a hyperspectral pixel is determined by a competition between its projections on the target and background subdictionaries. One key advantage of the proposed superpixel representation algorithm with respect to pixelwise and joint sparse representation algorithms is that it reduces computational cost while still maintaining competitive classification performance. We demonstrate the effectiveness of the proposed SSR algorithm through experiments on target detection in indoor and outdoor scene data under daylight illumination, as well as remote sensing data. Experimental results show that SSR generally outperforms state-of-the-art algorithms both quantitatively and qualitatively.
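The class-competition step can be sketched as follows: a representative spectrum for the superpixel is decomposed over the target and background subdictionaries, and the smaller residual decides the class. For brevity, this sketch uses plain least squares per subdictionary instead of the joint sparse decomposition described above; all names and data are illustrative.

```python
import numpy as np

def classify_superpixel(pixels, D_target, D_background):
    """Competition between projections on the two subdictionaries."""
    y = pixels.mean(axis=0)   # representative spectrum of the superpixel
    def residual(D):
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        return np.linalg.norm(y - D @ coef)
    return "target" if residual(D_target) < residual(D_background) else "background"
```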
In this paper, we provide performance analysis of a sparsity-based interpolation technique for direction-of-arrival (DOA) estimation with partially augmentable non-uniform arrays. The degrees-of-freedom (DOFs) offered by a partially augmentable non-uniform array cannot be fully utilized for subspace-based DOA estimation due to the presence of holes in the corresponding difference coarray. The interpolation technique fills the ‘holes’ in the difference coarray, thereby permitting full use of the available DOFs for DOA estimation. We examine the performance of the interpolation-based DOA estimation scheme for scenes with varying source powers under different source separations. Co-prime arrays, which are a type of partially augmentable non-uniform configuration, are utilized for the performance analysis.
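The holes being filled can be made concrete: the difference coarray of a sensor-position set is the set of all pairwise lags, and the holes are the missing lags within its span. The co-prime pair (M, N) = (2, 3) below is one example configuration assumed for the sketch.

```python
import numpy as np

def difference_coarray(sensors):
    """All pairwise lags s_i - s_j of the sensor positions."""
    s = np.asarray(sensors)
    return np.unique((s[:, None] - s[None, :]).ravel())

def coarray_holes(sensors):
    """Missing lags inside the span of the difference coarray."""
    lags = difference_coarray(sensors)
    return np.setdiff1d(np.arange(lags.min(), lags.max() + 1), lags)

# Co-prime array, (M, N) = (2, 3): subarrays {0, M, 2M} and {0, N, 2N, 3N}
sensors = sorted({0, 2, 4} | {0, 3, 6, 9})
```

For this configuration the difference coarray spans lags -9 to 9 but misses ±8; interpolating those lags is what makes all available DOFs usable for subspace-based DOA estimation.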
Compressed sensing (CS) is a technology for acquiring and reconstructing sparse signals below the Nyquist rate. For images, the total variation of the signal is usually minimized to promote sparseness of the image gradient. However, like all L1-minimization algorithms, total variation penalizes large gradients, causing large errors on image edges. Many nonconvex penalties have been proposed to address this issue of L1 minimization. For example, homotopic L0 minimization algorithms have shown success in reconstructing images from magnetic resonance imaging (MRI) data. However, homotopic L0 minimization may suffer from local minima and may not be sufficiently robust when the signal is not strictly sparse or the measurements are contaminated by noise. In this paper, we propose a hybrid total variation minimization algorithm that integrates the benefits of both L1 and homotopic L0 minimization for image recovery from reduced measurements. The algorithm minimizes the conventional total variation when the gradient is small, and the L0 norm of the gradient when the gradient is large. The transition between the L1 and L0 penalties is determined by an auto-adaptive threshold. The proposed algorithm inherits the robustness to noise and approximation errors of L1 minimization, as well as the reduced measurement requirements of L0 minimization. Experimental results on MRI data demonstrate that the proposed hybrid total variation minimization algorithm yields improved image quality over existing methods in terms of reconstruction accuracy.
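The L1-to-L0 transition can be illustrated with a truncated-L1 surrogate: below the threshold the penalty is the usual TV term |g|, while above it the cost saturates, so large edges pay a magnitude-independent, L0-like price. The paper's homotopic formulation and auto-adaptive threshold are more involved; this is only a sketch with a fixed threshold.

```python
import numpy as np

def hybrid_gradient_penalty(grad, threshold):
    """L1 (TV) cost below the threshold, constant (L0-like) cost above it,
    so large edges are not penalized in proportion to their magnitude."""
    g = np.abs(grad)
    return np.where(g <= threshold, g, threshold)
```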
We study the performance of modal analysis using sparse linear arrays (SLAs), such as nested and co-prime arrays, in both first-order and second-order measurement models. We treat SLAs as constructed from a subset of sensors in a dense uniform linear array (ULA), and characterize the performance loss of SLAs with respect to the ULA due to using far fewer sensors. In particular, we show that, given the same aperture, achieving comparable performance in terms of the Cramér-Rao bound (CRB) for modal analysis requires SLAs to use more snapshots: approximately the number of snapshots used by the ULA multiplied by the compression ratio in the number of sensors. This is shown analytically for the case of one undamped mode, and empirically via extensive numerical experiments for more complex scenarios. Moreover, the misspecified CRB proposed by Richmond and Horowitz is also studied, under which SLAs suffer a greater performance loss than their ULA counterpart.
Guided waves have gained popularity in structural health monitoring (SHM) due to their ability to inspect large areas with little attenuation while providing rich interactions with defects. For thin-walled structures, the propagating waves are Lamb waves, a complex but well understood type of guided wave. Recent works have cast the defect localization problem of Lamb wave based SHM within the sparse reconstruction framework. These methods make use of a linear model relating the measurements to the scene reflectivity under the assumption of point-like defects. However, most structural defects are not perfect points but tend to assume specific forms, such as surface cracks or internal cracks. Knowledge of the type of defect is useful in the assessment phase of SHM. In this paper, we present a dual-purpose sparsity-based imaging scheme which, in addition to accurately localizing defects, simultaneously classifies them. The proposed approach takes advantage of the bias exhibited by certain types of defects toward a specific Lamb wave mode. For example, some defects interact strongly with the anti-symmetric modes, while others interact strongly with the symmetric modes. We build model-based dictionaries for the fundamental symmetric and anti-symmetric wave modes, which are then utilized in unison to localize and classify the defects present. Simulated data of surface and internal defects in a thin aluminum plate are used to validate the proposed scheme.
Network traffic (data traffic) in a Wireless Local Area Network (WLAN) is the amount of network packets moving across the network from one wireless node to another, and it determines the sampling load in the network. WLAN network traffic is the main component of network traffic measurement, control, and simulation. Traffic classification is an essential tool for improving Quality of Service (QoS) in complex wireless networks, including local area networks, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, and wide area networks. Network traffic classification is also an essential component of products for QoS control in different wireless network systems and applications. Classifying network traffic in a WLAN makes it possible to see what kinds of traffic are present in each part of the network, organize the various kinds of traffic on each path into classes, and generate a network traffic matrix in order to identify and organize network traffic, a key step in improving QoS. To achieve effective network traffic classification, a Real-time Network Traffic Classification (RNTC) algorithm for WLANs based on Compressed Sensing (CS) is presented in this paper. The fundamental goal of this algorithm is to solve difficult wireless network management problems. The proposed architecture reduces the False Detection Rate (FDR) to 25% and the Packet Delay (PD) to 15%. It also increases the accuracy of wireless transmission by 10%, providing a good foundation for establishing high-quality wireless local area networks.