Wide-bandwidth linear frequency-modulated continuous-wave (FMCW) chirps are frequently used in pulse compression ranging systems to obtain accurate, high-resolution results. It is well known that the bandwidth and duration of chirped waveforms determine the resolution and pulse compression gain of the system.1 Traditional sampling theorems indicate that higher resolution requirements necessitate higher-rate analog-to-digital converters, which can be prohibitively expensive. This challenge motivates the investigation of alternative detection methods that do not impose such strict requirements on the sample rate. While compressed sensing radar is an active area of research,2–4 and the concept of the chirp dictionary has been studied,5–8 little work has been done on uniform sub-sampling of time-shifted FMCW lidar signals. In the following we present additional insights into the coherence structure of the chirp dictionary, and investigate the phase transition behavior of OMP-like algorithms in noiseless and noisy environments.
FMCW detection/ranging systems often receive reflections from a finite number of objects within a scene. LIDAR systems in particular use laser beams with low divergence, so it can be assumed that only a single object is illuminated at a given instant. This object can be modeled as a point reflector located some distance from the receiver. The entire system can be modeled by the linear equation:

y = SCx = Ax, (1)
where x is the sparse, length-N object reflectivity vector with support one, C is the N × N chirp matrix, and S is the M × N sampling matrix. In this work we investigate sensing matrices that are uniformly down-sampled versions of the identity matrix. The chirp matrix and the sampling matrix combine to create A, the M × N sensing matrix. Here, N is the number of Nyquist samples and M is the number of measurements:

N = Tc FN, (2)

M = Tc Fs, (3)
where Tc is the waveform duration, FN is the Nyquist sample rate, and Fs is the system sample rate. Each element of x corresponds to a candidate range bin, and the width of each bin is given by the unambiguous range divided by N. The atoms of the chirp matrix consist of N time-shifted copies of the Nyquist-sampled chirp signal, and the sampling matrix measures the time-shifted signal at uniformly spaced time instants. This formulation is similar to that of compressive sensing radar,2 but addresses the continuous nature of FMCW signals and does not require sampling through random convolutions9 or other Xampling frameworks.3,10
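As a concrete illustration, the sensing matrix A = SC can be assembled as follows. The complex unit-bandwidth chirp and the parameter values here are illustrative assumptions, not the experimental waveform:

```python
import numpy as np

def chirp_sensing_matrix(N, M):
    """Build the M x N sensing matrix A = S C from a Nyquist-sampled chirp.

    The complex linear chirp used here is illustrative; the actual FMCW
    waveform parameters (start frequency, bandwidth) are set by the system.
    """
    n = np.arange(N)
    chirp = np.exp(1j * np.pi * n**2 / N)      # Nyquist-sampled linear chirp
    # Chirp matrix C: column i is the chirp delayed (circularly shifted) by i
    C = np.stack([np.roll(chirp, i) for i in range(N)], axis=1)
    # Sampling matrix S: M uniformly spaced rows of the N x N identity
    rows = np.floor(np.arange(M) * N / M).astype(int)
    return C[rows, :]                          # equivalent to S @ C

A = chirp_sensing_matrix(N=256, M=64)          # down-sampling factor N/M = 4
print(A.shape)                                 # (64, 256)
```

At full rate (M = N) the columns of this particular chirp matrix are mutually orthogonal circular shifts, which is what makes the Nyquist-sampled case easy to invert.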
The goal of this paper is to investigate the case where the number of uniformly sampled measurements is less than the number of traditionally required Nyquist samples (N/M > 1). The ability of sparse reconstruction algorithms to recover the target vector is explored, and bounds on recovery are identified by characterizing the effect of chirp parameters on the uniformly sub-sampled model.
CHIRP SENSING MATRIX AND COHERENCE
When the number of measurements is greater than or equal to the number of required Nyquist samples (N/M ≤ 1), traditional detection methods, such as matched filtering and minimum mean squared error estimation, can be used to determine an object's location. When N/M > 1, traditional detection methods are inadequate due to the system's underdetermined nature and the effects of aliasing. However, the notion of sparsity may be used to accurately recover the object vector. It has been shown11 that it is possible to recover a sparse vector for such systems at sample rates much lower than the Nyquist rate by solving the L1 minimization problem:

min ||x||_1 subject to y = Ax. (4)
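As a minimal sketch, for a real-valued system the L1 minimization problem can be cast as a linear program by splitting x into its positive and negative parts. The toy matrix below is a hypothetical example, not the chirp system:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y for a real matrix A.

    Split x = u - v with u, v >= 0, so ||x||_1 = sum(u + v) and the
    equality constraint becomes [A, -A] [u; v] = y.
    """
    M, N = A.shape
    c = np.ones(2 * N)                    # objective: sum of u and v entries
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:N], res.x[N:]
    return u - v

# Toy example: a 2 x 3 underdetermined system with a 1-sparse solution
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
y = A @ np.array([1.0, 0.0, 0.0])
x_hat = basis_pursuit(A, y)               # recovers approximately [1, 0, 0]
```

Here the 1-sparse vector [1, 0, 0] is the unique minimum-L1 solution among all x satisfying Ax = y, so the LP returns it.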
The purpose of this section is to identify the conditions required to accurately solve (4), and determine when recovery of the sparse object vector is possible.
The coherence of a sensing matrix, denoted μ, is a fundamental parameter of any dictionary-based sensing system and is given by:

μ = max_{i≠j} |⟨a_i, a_j⟩| / (||a_i|| ||a_j||), (5)
where a_i and a_j are the ith and jth columns of the sensing matrix A. While it does not offer tight bounds on the conditions necessary for successful recovery, the coherence is of interest as a simple tool for measuring the correlation between atoms of a matrix. In particular, the coherence provides the pessimistic bound that μ must be less than one in order to recover a sparse vector with unit support in the absence of noise.12
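The coherence of any candidate sensing matrix can be computed directly from its normalized Gram matrix; a minimal sketch:

```python
import numpy as np

def coherence(A):
    """Mutual coherence: max over i != j of |<a_i, a_j>| / (||a_i|| ||a_j||)."""
    An = A / np.linalg.norm(A, axis=0)    # normalize each column
    G = np.abs(An.conj().T @ An)          # magnitudes of the Gram matrix
    np.fill_diagonal(G, 0.0)              # exclude self-correlations
    return G.max()

print(coherence(np.eye(4)))               # orthonormal columns -> 0.0
```

A matrix containing two identical columns has μ = 1, the pessimistic limit at which support-one recovery can no longer be guaranteed.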
The following provides empirical observations on the coherence of the chirp sensing matrix, obtained by simulating various chirp waveform parameters. In these simulations, the number of Nyquist samples, N, was changed by altering the chirp duration (Eq. 2), and the total number of measurements, M, was varied by changing the system sample rate (Eq. 3). The bandwidth, and therefore the Nyquist rate, was held constant throughout all simulations. The coherence was then mapped to a color value and displayed in Fig. 1.
Fig. 1 shows that the coherence is at a minimum when the number of measurements is equal to the number of Nyquist samples, N/M = 1. In the sub-Nyquist region N/M > 1, the coherence remains below one until the number of measurements is reduced by a factor of four from the total number of Nyquist samples. This is highlighted in Fig. 2, where the coherence is plotted as a function of the down-sampling parameter N/M. From Fig. 2 it can be seen that if the down-sampling parameter is greater than four and even, the coherence of the sensing matrix can equal 1. This leads to the first empirically observed bound for successful recovery: we expect to recover, through L1 minimization, a sparse vector with support one if either N/M < 4, or N/M > 4 and N/M is not an even integer.
EMPIRICAL EVIDENCE OF SUCCESSFUL DETECTION
To investigate how well the proposed bound translates to the ability to recover the range of an illuminated object, simulations were completed for an ideal measurement system. The above-mentioned sensing system (Eq. 1) was simulated 1000 times at various measurement and Nyquist sample values. In each simulation, a sparse vector was generated with a non-zero magnitude at a random index. The measured vector was then processed using the Orthogonal Matching Pursuit (OMP) algorithm,13 and the recovered solution was compared to the original target vector. The greedy algorithm was used for speed of computation, and provides only a baseline for how sparse recovery algorithms can be expected to perform. If the correct atom was selected, the detection was counted as a success, and the recovery probability was calculated by dividing the total number of successes by the total number of trials. The definition of recovery used here is closer to sparse support recovery than to exact signal recovery.
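The Monte Carlo procedure can be sketched as follows. For a support-one target, a single OMP iteration reduces to selecting the atom most correlated with the measurement; the chirp dictionary here uses illustrative, assumed waveform parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def sensing_matrix(N, M):
    # Illustrative sub-sampled chirp dictionary (assumed waveform parameters)
    n = np.arange(N)
    chirp = np.exp(1j * np.pi * n**2 / N)
    C = np.stack([np.roll(chirp, i) for i in range(N)], axis=1)
    rows = np.floor(np.arange(M) * N / M).astype(int)
    return C[rows, :]

def omp_support_one(A, y):
    # For a support-one target, OMP reduces to one matched-filter step:
    # select the atom with the largest correlation against y
    An = A / np.linalg.norm(A, axis=0)
    return int(np.argmax(np.abs(An.conj().T @ y)))

def recovery_probability(N, M, trials=500):
    A = sensing_matrix(N, M)
    hits = 0
    for _ in range(trials):
        idx = rng.integers(N)             # random range bin
        y = A[:, idx]                     # noiseless unit point reflector
        hits += (omp_support_one(A, y) == idx)
    return hits / trials

print(recovery_probability(N=128, M=128))  # Nyquist-sampled case: 1.0
```

Sweeping N and M with this routine reproduces the kind of recovery-probability map shown in Fig. 3, although the exact failure points depend on the chirp parameters.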
Fig. 3 shows the recovery probability as a function of the number of measurements and the number of Nyquist samples. The points of failure to recover are discrete, in stark contrast to other sensing matrix types, which exhibit steep but continuous transitions at predictable points.14,15 In Fig. 4, this discrete nature is more obvious, and it is shown that the recovery probability is reduced when N/M is an even integer, as predicted by the coherence analysis in the previous section.
It is interesting to note that there is still some chance of recovering the sparse vector even when the coherence is one. This can be understood by observing that the columns of the sensing matrix are a sub-sampled realization of all possible N time shifts of the transmitted signal. To visualize the effect of this highly structured sensing matrix, the correlation between the first atom and all other atoms in the sensing matrix is shown in Fig. 5 for the cases N/M = 1 (Fig. 5a) and N/M = 4 (Fig. 5b). For the Nyquist-sampled case, N/M = 1, the correlation between all column pairs is ≪ 1 and is identical to the auto-correlation function of a chirp waveform. However, when N/M = 4 there is one additional atom that is perfectly correlated with the first. Upon further investigation of multiple sensing matrix realizations for the case N/M = 4, it was found that at most half of the atoms could have one identical copy located N/2 atoms away. This means there exists a worst-case scenario in which half of the possible locations have two potential sparse solutions. However, the data shown in Fig. 5c make it clear that this multiple-solution scenario is easily avoided: by increasing the N/M ratio to something other than an even integer, the correlation between all column pairs can be reduced below 1 for every atom pair.
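The perfectly correlated atom pair can be checked directly. With the illustrative full-band complex chirp assumed here, the atoms at shifts 0 and N/2 coincide on the retained samples under the even down-sampling below; the exact ratios at which such duplicates appear depend on the waveform parameters:

```python
import numpy as np

N, M = 64, 16                              # down-sampling factor N/M = 4
n = np.arange(N)
chirp = np.exp(1j * np.pi * n**2 / N)      # illustrative linear chirp
rows = np.arange(M) * (N // M)             # keep every fourth sample

a0 = np.roll(chirp, 0)[rows]               # atom 0, sub-sampled
ah = np.roll(chirp, N // 2)[rows]          # atom N/2, sub-sampled

corr = abs(np.vdot(a0, ah)) / (np.linalg.norm(a0) * np.linalg.norm(ah))
print(round(corr, 6))                      # 1.0: the two atoms coincide
```

At the full Nyquist rate the same two atoms are orthogonal, which is why this ambiguity only appears after sub-sampling.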
EFFECT OF DECREASED SNR
In practice it is impossible to achieve infinite SNR, so the effect of noise must be well understood to determine how applicable sub-Nyquist sampling is to real-world systems. Traditional pulse compression methods benefit from the time-bandwidth product gain of the transmitted chirp, and the matched filter is the ideal filter with respect to SNR. For the sub-Nyquist system, it is expected that performance in the presence of noise will deteriorate due to the reduced number of measurements. To investigate the effect of finite SNR, the same simulation methods as in the previous section were used, and white Gaussian noise was added to the measured signal before applying the OMP algorithm. The results of these simulations are presented in Fig. 6 and Fig. 7 for SNR = 0 dB and SNR = -10 dB, respectively.
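The noise-injection step can be sketched as a standard SNR calibration; the experiments' exact noise conditioning is set by the hardware noise source, so this is only an assumed model:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_awgn(y, snr_db):
    """Add complex white Gaussian noise scaled to a target SNR in dB."""
    signal_power = np.mean(np.abs(y) ** 2)
    noise_power = signal_power / 10 ** (snr_db / 10)
    # Split the noise power evenly between real and imaginary parts
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(y.shape) + 1j * rng.standard_normal(y.shape)
    )
    return y + noise
```

The noisy measurement, e.g. `add_awgn(A @ x, 0.0)`, is then passed to the OMP algorithm in place of the clean measurement vector.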
For the SNR = 0 dB simulation, the recovery probability diagram maintains a structure similar to that observed in the ideal noiseless case. This demonstrates that the inherent properties of the sensing matrix dictate the number of measurements required to achieve proper recovery, except at the most extreme down-sampling values. As the SNR is reduced to -10 dB, more measurements are required to recover the sparse vector. However, this simulation provides evidence that even when the SNR is low, it is still possible to recover the sparse vector when operating in the sub-Nyquist regime. For various SNR levels, Fig. 8 shows the number of measurements necessary to achieve a 99% recovery probability for a given number of Nyquist samples, and is provided as a means to better understand the minimum number of measurements required for successful detection in the presence of noise. The results in Fig. 8 show that for the SNR = 0 dB case, the OMP algorithm is able to correctly detect an object's location given ≈ 20 times fewer measurements than the number of traditional Nyquist samples (N = 1000, M = 40). In noisier environments, SNR = -10 dB, it remains possible to accurately detect objects with ≈ 4 times fewer samples (N = 1000, M = 240). It is expected that this performance could be improved by using alternative sparse recovery algorithms tailored to sparse recovery in the presence of noise.16
EXPERIMENTAL EVIDENCE OF RECOVERY
Experiments were completed to validate the simulated results under real-world conditions using analog source signals and physical sampling hardware. Fig. 9 is a block diagram of the experimental setup, where the source is an Analog Devices AD9914 direct digital synthesizer, the noise generator is an Agilent noise source, and the A/D is an SP-Devices variable-rate digitizer with a maximum sampling rate of 4 GSPS. The chirp waveform parameters, such as duration and bandwidth, were set to match some of the simulated Nyquist sample and measurement values, and the noise source power level was adjusted to achieve the appropriate SNR.
For this experimental verification, a fixed length of cable was used to simulate the round-trip propagation delay to an object. The delay was measured using traditional pulse compression methods before the experiments were conducted, and the measured value was used as the ground truth for comparison. The sensing matrix was generated using the known chirp parameters and sampling rate, and the OMP algorithm was again used to recover the sparse vector. Figs. 10 and 11 show the experimental and simulated results for Nyquist sample values of N = 250 and N = 1000, respectively. For each Nyquist sample value, the recovery probability was calculated and plotted as a function of the number of measurements obtained.
The simulated and experimental data display similar trends, aside from a slight systematic reduction in performance in the experimental tests. One potential cause is the difference between the ideal waveform used to create the sensing matrix and the actual analog chirp signal. The mismatch arises because the analog signal is generated by a digital ramp generator, which consists of discrete frequency steps, unlike the stored chirp waveform, which is derived from an ideal linear frequency slope. This problem can be addressed by using prior knowledge of the signal's structure to build a more suitable sensing matrix. Future work will determine the impact of this mismatch on recovery probability.
Another dissimilarity between the simulated and experimental results is the absence of abrupt reductions in performance at even values of N/M. Previously it was discussed that this reduction in performance is due to multiple solutions to the L1 minimization problem. In the simulations, this is an issue when the index of the non-zero element falls directly in a range bin that has two potential solutions. In the experiments, however, the ideal delta-function object vector is actually a sparse approximation of the continuous-time impulse response of the system. It is highly unlikely that this continuous-time function will land directly on a single range bin, and even more unlikely that the down-sampling parameter will be exactly equal to an even integer. While the matrix might admit multiple solutions, the physical imperfections of the system make one solution more likely than the other, eliminating the uncertainty observed in the simulations.
Lastly, it is important to point out how the number of Nyquist samples and measurements affects detection performance in the presence of noise for both the simulated and experimental results. Figures 10 and 11 show that it is possible to obtain accurate detection at low SNR levels by adjusting the number of measurements and the number of Nyquist samples to match the operating conditions. As seen in Fig. 8, for a given SNR the number of measurements necessary to achieve accurate detection is relatively constant. This implies that by increasing the number of Nyquist samples in the sensing matrix (increasing Tc) while keeping the number of measurements constant (reducing Fs), we can achieve successful recovery at high down-sampling ratios even in noisy conditions. This highlights an important inverse relationship between the system sample rate and the necessary chirp duration. The observed trade-off is analogous to increasing the duration to increase the time-bandwidth product gain of traditional pulse compression systems.
In an effort to find more efficient ways to detect an object's range, the assumption of a sparse object reflectivity vector was made for systems that actively probe the environment with chirped waveforms. It was argued that this assumption is valid for LIDAR ranging systems and can be used to leverage the theories of sub-Nyquist sampling and sparse recovery to determine the range to an object. The traditional FMCW ranging problem was reformulated as a linear system of equations and a sensing matrix was identified. It was determined empirically that the sensing matrix satisfies the theoretical coherence/L1 minimization bounds on recovery for a wide variety of parameters. Simulations indicate that it is indeed possible to recover sparse vectors with support one using far fewer measurements than dictated by traditional sampling theorems. The simulations also demonstrated that such a sub-Nyquist system can perform detection in noisy environments once the number of measurements exceeds a certain threshold. Experiments were conducted using a physical apparatus, and the experimental data confirmed the simulated results. This demonstrates that it is possible to use chirp waveforms, low-rate digitizers, and sparse recovery algorithms to determine the range to an object.
To further progress the understanding of these types of sub-Nyquist systems, the sensing matrix must be understood more intimately. We will extend existing theoretical results on chirp dictionaries7,8 to analytically characterize the empirically observed phase transition curves. Other modulation schemes will also be analyzed to investigate alternative waveforms better suited for the sensing matrix; ideally, these waveforms should have minimal correlation across all sub-Nyquist sampled time shifts. Additionally, other sparse recovery algorithms should be tested to determine how the recovery method affects performance in the presence of noise. The work presented here could also be expanded by addressing the more general case of a sparse vector with support greater than one.
REFERENCES

1. Skolnik, M. I., "Radar handbook," (1970).
2. Baraniuk, R. and Steeghs, P., "Compressive radar imaging," in [Radar Conference, 2007 IEEE], 128–133 (April 2007).
3. Bar-Ilan, O. and Eldar, Y. C., "Sub-Nyquist radar," in [Systems, Communication and Coding (SCC), Proceedings of 2013 9th International ITG Conference on], 1–6 (Jan 2013).
4. Herman, M. A. and Strohmer, T., "High-resolution radar via compressed sensing," IEEE Transactions on Signal Processing 57, 2275–2284 (June 2009).
5. O'Neill, J. C. and Flandrin, P., "Chirp hunting," in [Time-Frequency and Time-Scale Analysis, 1998. Proceedings of the IEEE-SP International Symposium on], 425–428 (Oct 1998).
6. Gribonval, R., "Fast matching pursuit with a multiscale dictionary of Gaussian chirps," IEEE Transactions on Signal Processing 49, 994–1001 (May 2001).
7. Nguyen, Y. T. H., Amin, M. G., Ghogho, M., and McLernon, D., "Time-frequency signature sparse reconstruction using chirp dictionary," (2015).
8. Applebaum, L., Howard, S. D., Searle, S., and Calderbank, R., "Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery," Applied and Computational Harmonic Analysis 26(2), 283–290 (2009).
9. Romberg, J., "Compressive sensing by random convolution," SIAM J. Img. Sci. 2, 1098–1128 (Nov. 2009).
10. Mishali, M., Eldar, Y. C., and Elron, A. J., "Xampling: Signal acquisition and processing in union of subspaces," IEEE Transactions on Signal Processing 59, 4719–4734 (Oct 2011).
11. Donoho, D. L., "For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution," Comm. Pure Appl. Math. 59, 797–829 (2004).
12. Donoho, D. L. and Huo, X., "Uncertainty principles and ideal atomic decomposition," IEEE Trans. Inf. Theory 47, 2845–2862 (Sept. 2006).
13. Cai, T. T. and Wang, L., "Orthogonal matching pursuit for sparse signal recovery with noise," IEEE Transactions on Information Theory 57, 4680–4688 (July 2011).
14. Davenport, M. A., Duarte, M. F., Eldar, Y. C., and Kutyniok, G., "Introduction to compressed sensing."
15. Tropp, J. A. and Gilbert, A. C., "Signal recovery from partial information via orthogonal matching pursuit," IEEE Trans. Inform. Theory (2005).
16. Tropp, J. A. and Wright, S. J., "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE 98, 948–958 (June 2010).