We consider the problem of detecting a change in an arbitrary vector process by examining the evolution of calculated data subspaces. In our developments, both the data subspaces and the change identification criterion are novel and founded in the theory of L1-norm principal-component analysis (PCA). The outcome is highly accurate, rapid detection of change in streaming data that vastly outperforms conventional eigenvector subspace methods (L2-norm PCA). In this paper, illustrations are offered in the context of artificial data and real electroencephalography (EEG) and electromyography (EMG) data sequences.
Multi-modal tensor data sets arise with increasing frequency in modern-day scientific and engineering applications, for example in biomedical sciences and autonomous engineered systems. Over the past twenty years, tensor-domain data analysis has been attempted primarily in the context of standard (L2-norm) eigenvector decompositions across tensor domains. The algorithms are not joint-tensor-domain optimal and exhibit the familiar sensitivity to faulty/corrupted/missing measurements that characterizes all L2-norm principal-component analysis methods. In this work, we present a robustified method to evaluate the conformity of tensor data entries with respect to the whole accessible data set. Conformity evaluation is based on a continuously refined sequence of calculated L1-norm tensor subspaces. The theoretical developments are illustrated in the context of a multisensor localization application that indicates unprecedented estimation performance and resistance to intermittent disturbances. An electroencephalogram (EEG) data analysis experiment is also presented.
We describe an iterative procedure for soft characterization of outlier data in any given data set. In each iteration, data compliance to nominal data behavior is measured according to current L1-norm principal-component subspace representations of the data set. Successively refined L1-norm subspace data set representations lead to successively refined outlier data characterization. The effectiveness of the proposed theoretical scheme is experimentally studied and the results show significantly improved performance compared to L2-PCA schemes, standard L1-PCA, and state-of-the-art robust PCA methods.
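The iteration described above can be sketched as follows. The L1-PCA solver shown (a standard fixed-point sign iteration) and the projection-energy conformity measure are illustrative stand-ins, assumed for the sketch, and not necessarily the exact computations of the paper:

```python
import numpy as np

def l1pca_fixed_point(X, K, n_iter=100, seed=0):
    """Approximate K L1-norm principal components of a D x N matrix X
    via a fixed-point sign iteration (a heuristic stand-in for the
    exact L1-PCA solvers referenced in the abstract)."""
    rng = np.random.default_rng(seed)
    D, N = X.shape
    B = np.sign(rng.standard_normal((N, K)))     # binary sign matrix
    for _ in range(n_iter):
        # Procrustes step: nearest orthonormal basis to X B
        U, _, Vt = np.linalg.svd(X @ B, full_matrices=False)
        Q = U @ Vt
        B_new = np.sign(X.T @ Q)
        B_new[B_new == 0] = 1
        if np.array_equal(B_new, B):
            break
        B = B_new
    return Q                                     # D x K orthonormal basis

def iterative_outlier_scores(X, K=2, n_rounds=5):
    """Soft outlier characterization: each column's conformity is the
    fraction of its energy captured by the current L1 subspace; low-
    conformity columns are down-weighted before the subspace is
    re-estimated, giving successively refined characterizations."""
    w = np.ones(X.shape[1])
    for _ in range(n_rounds):
        Q = l1pca_fixed_point(X * w, K)          # subspace of weighted data
        proj = Q.T @ X
        conf = np.sum(proj**2, axis=0) / (np.sum(X**2, axis=0) + 1e-12)
        w = conf / conf.max()                    # soft weights in [0, 1]
    return w                                     # small weight => likely outlier
```

A column lying far off the nominal subspace receives a small weight after a few rounds, while nominal columns keep weights near one.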
Proc. SPIE. 10646, Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII
KEYWORDS: Signal to noise ratio, Detection and tracking algorithms, Modulation, Interference (communication), Phase shift keying, Detection theory, Signal detection, Binary data, Frequency shift keying, Algorithms
We present a two-stage generalized-likelihood-ratio test (GLRT) based procedure for the classification of modulation schemes with unknown signal parameters, such as frequency, amplitude, phase and symbol sequence. Extensive simulation studies presented in this paper demonstrate the efficacy of the developed scheme under limited observation for various PSK and FSK signals, including those with nested symbol constellations.
Proc. SPIE. 10211, Compressive Sensing VI: From Diverse Modalities to Big Data Analytics
KEYWORDS: Transmitters, Modulation, Data hiding, Matrices, Receivers, Phase shift keying, Linear filtering, Telecommunications, Information technology, High dynamic range imaging, Picosecond phenomena, Binary data
We introduce maximum-SINR, sparse-binary waveforms that modulate data information symbols over the entire continuum of the available/device-accessible spectrum. We present an optimal algorithm that designs the proposed waveforms by maximizing the signal-to-interference-plus-noise ratio (SINR) at the output of the maximum-SINR linear filter at the receiver. In addition, we propose a suboptimal, computationally efficient algorithm. Simulation studies compare the proposed sparse-binary waveforms with their conventional non-sparse binary counterparts and demonstrate their superior SINR performance. The post-filtering SINR and bit-error rate (BER) improvements attained by the proposed waveforms are also experimentally verified in a software-defined radio testbed operating in a multipath laboratory environment, in the presence of colored interference.
Standard Principal-Component Analysis (PCA) is known to be very sensitive to outliers among the processed data [1]. On the other hand, it has been recently shown that L1-norm-based PCA (L1-PCA) exhibits sturdy resistance against outliers, while it performs similarly to standard PCA when applied to nominal or smoothly corrupted data [2, 3]. Exact calculation of the K L1-norm Principal Components (L1-PCs) of a rank-r data matrix X ∈ R^(D×N) costs O(2^(NK)) in the general case, and O(N^((r-1)K+1)) when r is fixed with respect to N [2, 3]. In this work, we examine approximating the K L1-PCs of X by the K L1-PCs of its L2-norm-based rank-d approximation (K ≤ d ≤ r), calculable exactly with reduced complexity O(N^((d-1)K+1)). Reduced-rank L1-PCA aims at leveraging both the low computational cost of standard PCA and the outlier resistance of L1-PCA. Our novel approximation guarantees and experiments on dimensionality reduction show that, for appropriately chosen d, reduced-rank L1-PCA performs almost identically to L1-PCA.
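For K = 1 the construction is easy to illustrate: the exact L1-PC of a matrix is q = Xb/||Xb||_2 for the binary vector b maximizing ||Xb||_2, and the reduced-rank variant simply applies this search to the rank-d SVD truncation of X. A minimal sketch follows (exhaustive O(2^N) search, practical for small N only; the paper's fixed-rank algorithms are far more efficient):

```python
import numpy as np
from itertools import product

def l1pc_exact(X):
    """Exact first L1-norm principal component of a D x N matrix X
    (K = 1): q = X b / ||X b||_2 for the binary vector b maximizing
    ||X b||_2, found here by exhaustive search over {-1, +1}^N."""
    best, b_best = -1.0, None
    for bits in product([-1.0, 1.0], repeat=X.shape[1]):
        b = np.array(bits)
        n = np.linalg.norm(X @ b)
        if n > best:
            best, b_best = n, b
    v = X @ b_best
    return v / np.linalg.norm(v)

def reduced_rank_l1pc(X, d):
    """Reduced-rank L1-PCA: the L1-PC of X is approximated by the
    exact L1-PC of X's rank-d L2 (SVD) truncation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xd = U[:, :d] * s[:d] @ Vt[:d]               # rank-d approximation
    return l1pc_exact(Xd)
```

When d equals the rank of X the truncation is exact, so the reduced-rank result coincides (up to sign) with the full L1-PC, consistent with the abstract's claim that appropriately chosen d loses almost nothing.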
Terahertz (THz)-band communication is envisioned as a key wireless technology to satisfy the need for much higher wireless data rates. To date, major progress in electronic, photonic and plasmonic technologies is finally closing the so-called THz gap. However, the exceedingly large available bandwidth at THz frequencies comes at the cost of a very high propagation loss. Combined with the power limitations of THz transceivers, this results in very short communication distances. Moreover, the absorption by water vapor molecules further splits the THz band in multiple transmission windows, which shrink as the transmission distance increases. To overcome these limitations, the concept of Ultra-Massive Multi-Carrier Multiple Input Multiple Output (UMMC MIMO) communication, which relies on the use of ultra-dense frequency-tunable plasmonic nano-antenna arrays, has been recently proposed. In this paper, the end-to-end performance of a UMMC MIMO link is analytically and numerically investigated. More specifically, an optimization framework is developed to determine the information capacity of UMMC MIMO communication by taking into account both the capabilities of THz plasmonic nanoantenna arrays and the peculiarities of the THz channel. In relation to the arrays, the frequency tunability of each individual element in the transmitter’s and receiver’s plasmonic arrays is taken into account. In terms of the channel, the impact of the spreading loss and the molecular absorption loss is considered. Extensive numerical results are provided to illustrate the performance of the proposed communication scheme.
We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition then becomes the problem of associating a new unknown face image to the “closest,” in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.
We consider the problem of online foreground extraction from compressed-sensed (CS) surveillance videos. A technically novel approach is suggested and developed by which the background scene is captured by an L1-norm subspace sequence directly in the CS domain. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to outliers, disturbances, and rank selection. Subtraction of the L1-subspace tracked background then leads to effective foreground/moving-object extraction. Experimental studies included in this paper illustrate and support the theoretical developments.
KEYWORDS: Principal component analysis, Video, Image restoration, Video surveillance, Surveillance, Video compression, Video processing, Reconstruction algorithms, Algorithm development, Compressed sensing
We consider the problem of foreground and background extraction from compressed-sensed (CS) surveillance video. We propose, for the first time in the literature, a principal component analysis (PCA) approach that computes the low-rank subspace of the background scene directly in the CS domain. Rather than computing the conventional L2-norm-based principal components, which are simply the dominant left singular vectors of the CS measurement matrix, we compute the principal components under an L1-norm maximization criterion. The background scene is then obtained by projecting the CS measurement vector onto the L1 principal components followed by total-variation (TV) minimization image recovery. The proposed L1-norm procedure directly carries out low-rank background representation without reconstructing the video sequence and, at the same time, exhibits significant robustness against outliers in CS measurements compared to L2-norm PCA.
Conventional subspace-based signal direction-of-arrival estimation methods rely on the familiar L2-norm-derived principal components (singular vectors) of the observed sensor-array data matrix. In this paper, for the first time in the literature, we find the L1-norm maximum projection components of the observed data and search in their subspace for signal presence. We demonstrate that L1-subspace direction-of-arrival estimation exhibits (i) similar performance to L2 (usual singular-value/eigenvector decomposition) direction-of-arrival estimation under normal nominal-data system operation and (ii) significant resistance to sporadic/occasional directional jamming and/or faulty measurements.
We consider a compressive video acquisition system where frame blocks are sensed independently. Varying block sparsity is exploited in the form of individual per-block open-loop sampling-rate allocation with minimal system overhead. At the decoder, video frames are reconstructed via sliding-window inter-frame total-variation minimization. Experimental results demonstrate that such rate-adaptive compressive video acquisition noticeably improves the rate-distortion performance of the video stream over fixed-rate acquisition approaches.
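One plausible form of such open-loop per-block rate allocation is sketched below. The gradient-energy sparsity proxy and the linear allocation rule are assumptions for illustration, not the scheme of the paper:

```python
import numpy as np

def allocate_block_rates(frame, block=16, total_rate=0.25, min_rate=0.05):
    """Illustrative per-block CS sampling-rate allocation: blocks with
    richer detail (higher spatial-gradient energy, a crude sparsity
    proxy) receive proportionally more measurements under a fixed
    average measurement budget of total_rate per pixel."""
    H, W = frame.shape
    gy, gx = np.gradient(frame.astype(float))
    energy = np.abs(gy) + np.abs(gx)
    nby, nbx = H // block, W // block
    # per-block activity, normalized to sum to one
    act = np.array([[energy[i*block:(i+1)*block, j*block:(j+1)*block].sum()
                     for j in range(nbx)] for i in range(nby)])
    act = act / (act.sum() + 1e-12)
    # raw rates average to total_rate because act sums to one;
    # clipping caps pathological blocks at full-rate sampling
    raw = min_rate + (total_rate - min_rate) * (nby * nbx) * act
    return np.clip(raw, min_rate, 1.0)
```

Since the allocation depends only on a cheap statistic of the current frame, it can be computed at the sensor with minimal overhead, in the open-loop spirit of the abstract.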
Compressive sensing (CS) is an emerging field of research with far-reaching impact on a variety of applications. For signals admitting sparse representations, CS permits collection of significantly fewer measurements than required by the Shannon-Nyquist sampling theorem and provides efficient super-resolution signal reconstruction. In essence, CS provides means for reducing data volume and for efficient sensor deployment without a degradation in system performance. Such capabilities are highly desirable for a range of civil and military applications.
Compressed sensing is the theory and practice of sub-Nyquist sampling of sparse signals of interest. Perfect reconstruction may then be possible with significantly fewer data than the Nyquist-required number. In this work, we consider a video system where acquisition is performed via framewise pure compressed sensing. The burden of quality video sequence reconstruction then falls solely on the decoder side. We show that effective decoding can be carried out at the receiver/decoder side in the form of interframe total-variation minimization. Experimental results demonstrate these developments.
We consider a video acquisition system where motion imagery is captured only by direct compressive sampling (CS) without any other form of intelligent encoding/processing. In this context, the burden of quality video sequence reconstruction falls solely on the decoder/player side. We describe a video CS decoding method that implicitly incorporates motion estimation via sliding-window sparsity-aware recovery from locally estimated Karhunen-Loeve bases. Experiments presented herein illustrate and support these developments.
We suggest and explore a parallelism between linear block code parity-check matrices and binary zero/one measurement matrices for compressed sensing. The resulting family of deterministic compressive samplers lends itself to the development of effective and efficient recovery algorithms for sparse signals that are not ℓ1-based. Experimental results that we include herein demonstrate the utility of the presented developments.
For any given digital host image or audio file (or group of hosts) and any (block) transform domain of interest, we find an orthogonal set of signatures that achieves maximum sum-signal-to-interference-plus-noise-ratio (sum-SINR) spread-spectrum message embedding for any fixed embedding amplitude values. We also find the sum-capacity-optimal amplitude allocation scheme for any given total distortion budget under the assumption of (colored) Gaussian transform-domain host data. The practical implication of the results is sum-SINR-, sum-capacity-optimal multiuser/multisignature spread-spectrum data hiding in the same medium. Theoretically, the findings establish optimality of the recently presented Gkizeli-Pados-Medley multisignature eigen-design algorithm.
We consider the problem of signature waveform design for code-division medium access control (MAC) of wireless sensor networks (WSN). In contrast to conventional randomly chosen orthogonal codes, an adaptive signature design strategy is developed under the maximum pre-detection SINR (signal-to-interference-plus-noise ratio) criterion. The proposed algorithm utilizes slowest-descent cords of the optimization surface to move toward the optimum solution and exhibits, upon eigenvector decomposition, linear computational complexity with respect to signature length. Numerical and simulation studies demonstrate the performance of the proposed method and offer comparisons with conventional signature code sets.
Proc. SPIE. 5819, Digital Wireless Communications VII and Space Communication Technologies
KEYWORDS: Signal to noise ratio, Super resolution, Statistical analysis, Roentgenium, Digital filtering, Interference (communication), Data processing, Antennas, Chemical elements, Algorithm development
We develop a new direction-of-arrival (DOA) estimation procedure that utilizes a modified version of the orthogonal auxiliary-vector (AV) filtering algorithm. The procedure starts with the linear transformation of the array response scanning vector by the input autocorrelation matrix. Then, successive orthogonal maximum cross-correlation auxiliary vectors are calculated to form a basis of the scanner-extended signal subspace. As a performance evaluation example, our studies for uncorrelated sources demonstrate a gain in the order of 15dB over MUSIC, 7dB over ESPRIT, and 3dB over the grid-search maximum likelihood DOA estimator at probability of resolution 0.9 with a ten-element array and reasonably small observation data records. A reduced complexity version of the proposed DOA estimation algorithm is also suggested. Results for correlated sources are reported as well.
A doubly optimal binary signature set is a set of binary spreading sequences that can be used for code-division multiplexing purposes and exhibits minimum total-squared-correlation (TSC) and minimum maximum-squared-correlation (MSC) at the same time. In this article, we focus on such sets with signatures of odd length and derive closed-form expressions for the signature cross-correlation matrix, its eigenvalues, and its inverse. Then, we derive analytic expressions for (i) the bit-error rate (BER) upon decorrelating processing, (ii) the maximum achievable signal-to-interference-plus-noise ratio (SINR) upon minimum-mean-square-error (MMSE) filtering, and (iii) the total asymptotic efficiency of the system. We find that doubly optimal sets with signature length of the form 4m+1, m = 1, 2, ..., are in all respects superior to doubly optimal sets with signature length of the form 4m-1 (the latter class includes the familiar Gold sets as a small proper subset). "4m+1" sets perform practically at the single-user bound (SUB) after decorrelating or MMSE processing (not true for "4m-1" sets). The total asymptotic efficiency of "4m+1" sets is lower-bounded by 2/e for any system user load; the corresponding lower bound for "4m-1" sets is zero.
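For reference, the two optimality metrics named above can be written, for a set S = {s_1, ..., s_K} of unit-energy signatures of length L, as follows (the max(·,·) lower bound on TSC is the familiar Welch bound):

```latex
\mathrm{TSC}(\mathcal{S}) \;=\; \sum_{i=1}^{K}\sum_{j=1}^{K}
  \left| \mathbf{s}_i^{T}\mathbf{s}_j \right|^{2}
  \;\ge\; \max\!\left( K,\; \frac{K^{2}}{L} \right),
\qquad
\mathrm{MSC}(\mathcal{S}) \;=\; \max_{i \neq j}
  \left| \mathbf{s}_i^{T}\mathbf{s}_j \right|^{2}.
```

A doubly optimal set attains the smallest achievable value of both quantities simultaneously. For binary antipodal signatures of odd length, cross-correlations are odd multiples of 1/L and can never vanish, so the Welch bound itself is unattainable and the achievable minima must be derived separately for the 4m+1 and 4m-1 length classes, as done in the article.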
In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple-access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading-code length affects the transmitted energy per bit and bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR-scalable bitstream. Each of the two layers is channel-coded using rate-compatible punctured convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated, and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed.
At the receiving end, we assume the presence of an antenna-array receiver. After carrier demodulation, multiple-access-interference-suppressing despreading is performed using space-time auxiliary-vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV-filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.
Proc. SPIE. 4395, Digital Wireless Communication III
KEYWORDS: Signal to noise ratio, Detection and tracking algorithms, Sensors, Reliability, Receivers, Linear filtering, Computer simulations, Detector development, Signal detection, Knowledge management
The prohibitive (exponential in the number of users) computational complexity of the maximum-likelihood (ML) multiuser detector for direct-sequence code-division multiple-access (DS/CDMA) communications has fueled an extensive research effort for the development of low-complexity multiuser detection alternatives. Notable examples are the zero-forcing ("decorrelating") and minimum-mean-square-error (MMSE) linear filter receivers. In this paper, we show that we can efficiently and effectively approach the error-rate performance of the optimum multiuser detector as follows. We utilize a multiuser zero-forcing or MMSE filter as a pre-processor whose output magnitude provides a reliability measure for each user bit decision. An ordered, reliability-based error-search sequence of length linear in the number of users returns the most likely user bit vector among all visited options. Numerical and simulation studies for moderately loaded systems that permit the exact implementation of the optimum detector indicate that the error-rate performance of the optimum and the proposed detector are nearly indistinguishable. Similar studies for higher user loads (that prohibit comparisons with the optimum detector) demonstrate error-rate performance gains of orders of magnitude in comparison with straight decorrelating or MMSE multiuser detection.
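The two-stage idea can be sketched as follows, with hedged details: the decorrelator serves as the reliability-producing pre-processor, and the candidate list consists of the hard-decision vector plus the vectors obtained by cumulatively flipping the k least-reliable bits, k = 1, ..., K (one plausible reading of the linear-length ordered search, not necessarily the paper's exact sequence):

```python
import numpy as np

def reliability_search_detector(y, S, A):
    """Two-stage multiuser detector sketch for the synchronous model
    y = S diag(A) b + n, with S the L x K signature matrix and A the
    user amplitudes. A decorrelating pre-processor yields soft outputs
    whose magnitudes rank the K bit decisions by reliability; K + 1
    candidates (flip the k least-reliable bits, k = 0..K) are scored
    by the ML metric ||y - S diag(A) b||^2 and the best one wins."""
    z = np.linalg.solve(S.T @ S, S.T @ y)        # decorrelating filter output
    b0 = np.where(z >= 0, 1.0, -1.0)             # hard decisions
    order = np.argsort(np.abs(z))                # least reliable first
    best_b = b0
    best_m = np.sum((y - S @ (A * b0))**2)       # ML metric of b0
    b = b0.copy()
    for k in order:                              # linear-length search
        b = b.copy()
        b[k] = -b[k]                             # cumulative bit flips
        m = np.sum((y - S @ (A * b))**2)
        if m < best_m:
            best_b, best_m = b.copy(), m
    return best_b
```

Only K + 1 metric evaluations are needed, versus 2^K for exhaustive ML, which is the complexity point the abstract makes.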
Proc. SPIE. 4395, Digital Wireless Communication III
KEYWORDS: Signal to noise ratio, Data modeling, Digital filtering, Interference (communication), Receivers, Linear filtering, Signal processing, Electronic filtering, Algorithm development, Binary data
In direct-sequence code-division multiple-access (DS-CDMA) systems, the pre-detection signal-to-interference-plus-noise ratio (SINR) at the output of the single-user minimum-mean-square-error (MMSE) filter is a function of the specific user spreading code (signature). In this paper, we consider the adaptive optimization of the user signature assignment such that the output SINR of the MMSE filter is maximized under a transmitter power constraint. In the context of binary signatures, the complexity of the signature optimization procedure is exponential in the processing gain. A low-cost, suboptimum, adaptive binary signature assignment algorithm is derived based on conditional optimization principles. We use this algorithm to design an efficient system-wide multiuser adaptive signature set assignment scheme. The performance of the proposed scheme is evaluated under asynchronous multipath fading DS-CDMA channel models and is compared to the performance of systems with arbitrarily chosen signature sets.
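The conditional-optimization idea can be sketched as follows, using the standard fact that the maximum MMSE output SINR for a signature s is proportional to the quadratic form s^T R_in^{-1} s, with R_in the interference-plus-noise covariance. The one-chip-at-a-time sweep shown is an illustrative reading of the abstract, not the exact published algorithm:

```python
import numpy as np

def optimize_binary_signature(R_in, s0, n_sweeps=5):
    """Suboptimum conditional (one-chip-at-a-time) binary signature
    optimization: each chip is flipped whenever the flip increases the
    SINR-proportional quadratic form s^T R_in^{-1} s, avoiding the
    exponential search over all 2^L binary signatures."""
    Rinv = np.linalg.inv(R_in)
    s = s0.astype(float).copy()
    for _ in range(n_sweeps):
        changed = False
        for l in range(len(s)):
            s_try = s.copy()
            s_try[l] = -s_try[l]                 # conditional single-chip flip
            if s_try @ Rinv @ s_try > s @ Rinv @ s:
                s, changed = s_try, True
        if not changed:                          # local optimum reached
            break
    return s
```

Each sweep costs L quadratic-form evaluations, i.e., complexity linear in the processing gain per sweep, in contrast to the exponential cost of exhaustive assignment noted in the abstract.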
Second-order multipath channel estimation procedures for direct-sequence code-division multiple-access communications induce phase ambiguity that necessitates differential phase-shift-keying (DPSK) modulation and detection. The maximum-likelihood (ML) single-symbol multiuser DPSK/CDMA detector is derived with direct generalization to multiple-symbol (block) multiuser DPSK/CDMA detection. Exponential complexity requirements limit the use of the ML rule to theoretical lower-bound bit-error-rate benchmarking. Linear-filter DPSK demodulators are viewed as a practical alternative. Phase-ambiguous RAKE filtering followed by RAKE-output differential detection is considered. The familiar minimum-variance-distortionless-response (MVDR) PSK/CDMA filter (designed for minimum filter output energy under the constraint of distortionless response in a given RAKE vector direction) adds the valuable feature of active interference suppression; however, minimum disturbance variance at the differential logic output can be claimed formally only in the absence of multipath (no inter-symbol-interference). Short-data-record adaptive alternatives to costly and slow adaptive MVDR implementations are sought in the context of auxiliary-vector filtering. Numerical and simulation studies illustrate the developments.
We consider the problem of recovering a spread-spectrum (SS) signal in the presence of unknown highly correlated spread-spectrum interference and impulsive noise. In terms of basic system and signal model considerations, we assume availability of a narrowband adaptive antenna array that experiences additive white Gaussian noise in time and across elements, as well as impulsive disturbance that is correlated across elements. The space-time receiver design developed in this work is characterized by the following attributes: (1) adaptive interference suppression is pursued in the joint space-time domain; (2) an adaptive parametric non-linear front-end offers effective suppression of impulsive disturbances at low computational cost; (3) an adaptive auxiliary-vector linear filter post-processor offers effective, low-complexity suppression of SS interferers that avoids eigen-decomposition and/or matrix inversion operations and leads to superior BER performance under rapid, short-data-record system adaptation. Numerical and simulation comparisons with plain and outlier-resistant space-time MVDR filtering procedures are included to illustrate and support the theoretical developments.