Two-stage through-the-wall radar image formation using compressive sensing
25 January 2013
We introduce a robust image-formation approach for through-the-wall radar imaging (TWRI). The proposed approach consists of two stages involving compressive sensing (CS) followed by delay-and-sum (DS) beamforming. In the first stage, CS is used to reconstruct a complete set of measurements from a small subset collected with a reduced number of transceivers and frequencies. DS beamforming is then applied to form the image using the reconstructed measurements. To promote sparsity of the CS solution, an overcomplete Gabor dictionary is employed to sparsely represent the imaged scene. The new approach requires far fewer measurement samples than the conventional DS beamforming and CS-based TWRI methods to reconstruct a high-quality image of the scene. Experimental results based on simulated and real data demonstrate the effectiveness and robustness of the proposed two-stage image formation technique, especially when the measurement set is drastically reduced.



Through-the-wall radar imaging (TWRI) is an emerging technology with considerable research interest and important applications in surveillance and reconnaissance for both civilian and military missions. To deliver high-resolution radar images in both range and crossrange, TWRI systems use wideband signals and large aperture arrays (physical or synthetic). This leads to prolonged data acquisition and high computational complexity, because a large number of samples must be processed. New approaches for TWRI are therefore needed to obtain high-quality images from fewer data samples and at a faster speed. To this end, this paper proposes a new approach for TWRI using compressive sensing (CS). CS is used here to reconstruct a full measurement set, which is then employed for image formation using delay-and-sum (DS) beamforming.

CS enables a sparse signal to be reconstructed from considerably fewer data samples than required by the Nyquist-Shannon theorem.7–9 In TWRI, the objective of applying CS is to speed up data acquisition and achieve high-resolution imaging.10,11 So far, the applications of CS in TWRI fall into two main categories. In the first category, CS is applied to reconstruct the imaged scene directly, by solving an ℓ1 optimization problem or using a greedy reconstruction algorithm.10,11,13–15 In the second category, CS is employed in conjunction with traditional beamforming methods: CS is applied to reconstruct the full data volume, and a conventional image formation method, such as DS beamforming, is then used to form the image of the scene.12 By exploiting CS, the latter approach enables conventional beamforming methods to reconstruct high-quality images from reduced data samples. Moreover, adopting a conventional image formation approach produces images suitable for the target detection and classification tasks that typically follow the image formation step.

In Ref. 12, the full measurement set is recovered from range profiles obtained by solving a separate CS problem at each sensor location. CS is applied in the temporal frequency domain only, leaving the spatial domain uncompressed. To recover the full measurement set, several CS problems must be solved independently, one for each sensing location. There are also limits to how far the measurements can be reduced along the temporal frequencies: since the target radar cross-section depends strongly on signal frequency, a significant reduction in transmitted frequencies leads to deficient information about the target.13 Thus, to guarantee accurate reconstruction, imaging a scene with extended targets may require an increased number of measurements.5,16

Conventional beamforming methods have been shown to be effective for image-based indoor target detection and localization when a large aperture array and a large signal bandwidth are used.17,18 However, a limitation of traditional beamforming methods is that they require the full data volume to form a high-quality image; otherwise, the image quality deteriorates rapidly as measurements are removed. The question is then how to exploit the advantages of traditional beamforming methods to obtain high-quality images from a reduced set of measurements.

To answer the aforementioned question and address the limitation of existing CS-based imaging methods, this paper proposes a new CS approach for TWRI, whereby a significant reduction in measurements is achieved by compressing both the transmitted frequencies and the sensor locations. First, CS is employed to restore the full measurement set. Then DS beamforming is applied to reconstruct the image of the scene. To increase sparsity of the CS solution, an overcomplete Gabor dictionary is used for sparse representation of the imaged scene; Gabor dictionaries have been shown to be effective for sparse image decomposition and representation.25–27 In the proposed approach, fast data acquisition is achieved by reducing both the number of transceivers and the number of transmitted frequencies used to collect the measurement samples. In Ref. 12, data collection was performed at all antenna locations, using a reduced set of frequencies only. In contrast, the proposed approach achieves further measurement reduction by subsampling both the frequencies and the antenna locations used for data collection. Furthermore, to satisfy the sparsity assumption, a Gabor dictionary is incorporated in the scene representation. In Ref. 14, a wavelet transform was used as a sparsifying basis for the scene. However, our preliminary experiments show that the performance is highly dependent on the particular wavelet function used. We also found that wavelets offer no significant advantage over a Gabor basis for through-the-wall radar image formation. Finally, we note that several approaches have been proposed for wall clutter mitigation in TWRI,1,28 including recent successful CS-based techniques.29,30 In this paper, we assume that wall clutter can be removed using any of those techniques, or that the background scene is available to perform background subtraction.

The remainder of the paper is organized as follows. Section 2 gives a brief review of CS theory. Section 3 presents TWRI using DS beamforming, and describes the proposed approach for TWRI image formation. Section 4 presents experimental results and analysis. Section 5 gives concluding remarks.


Compressive Sensing

CS is a sensing paradigm that offers joint sensing and compression for sparse signals.7–9,31 Consider a P-dimensional signal x represented using a dictionary Ψ of size P×Q with Q atoms. The dictionary is assumed to be overcomplete, that is, Q > P. The signal x is said to be K-sparse if it can be expressed as

x = Ψα,    (1)

where α is a column vector with K nonzero components, i.e., K = ‖α‖₀. Stable reconstruction of a sparse α requires K to be significantly smaller than P.

Using a projection matrix Φ of size L×P, where K < L ≪ P, we obtain an L-dimensional measurement vector y as follows:

y = Φx = ΦΨα.    (2)

The original signal x can be reconstructed from y by exploiting its sparsity: among all vectors α satisfying y = ΦΨα, we seek the sparsest, and then obtain x using Eq. (1). This signal reconstruction requires solving the following problem:


min_α ‖α‖₀ subject to y = ΦΨα.    (3)

Equation (3) is known to be NP-hard.32 Alternatively, the problem can be cast as an ℓ1 regularization problem:


min_α ‖α‖₁ subject to ‖y − ΦΨα‖₂ ≤ ϵ,    (4)
where ϵ is a small constant. Several optimization methods, including ℓ1-optimization,33 basis pursuit,34 and orthogonal matching pursuit,35 have been proposed that produce stable and accurate solutions.
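As a minimal illustration of this recovery principle (not the solver used later in this paper), the following Python sketch recovers a K-sparse vector from the noiseless special case of Eq. (4), i.e., basis pursuit, by casting the ℓ1 minimization as a linear program with the split α = u − v, u, v ≥ 0. All dimensions and the Gaussian sensing matrix are hypothetical choices for the demo.

```python
# Basis-pursuit sketch: min ||alpha||_1 subject to y = A alpha,
# solved as a linear program. A plays the role of Phi @ Psi.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
P, L, K = 128, 50, 4                      # ambient dim, measurements, sparsity
A = rng.standard_normal((L, P))           # hypothetical Gaussian projection
alpha = np.zeros(P)
alpha[rng.choice(P, K, replace=False)] = rng.standard_normal(K)
y = A @ alpha                             # compressive measurements

# alpha = u - v with u, v >= 0 turns the l1 objective into a linear one
res = linprog(c=np.ones(2 * P), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * P))
alpha_hat = res.x[:P] - res.x[P:]
print(np.allclose(alpha_hat, alpha, atol=1e-6))
```

With L substantially larger than K, the ℓ1 relaxation recovers the sparse vector exactly in the noiseless setting, which is the behavior Eqs. (3) and (4) rely on.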


Proposed Approach

In this section, we introduce the proposed two-stage CS-based TWRI approach. The main steps are as follows. First, compressive measurements are acquired using a fast data acquisition scheme that requires only a reduced set of antenna locations and frequency bins, and an overcomplete Gabor dictionary is incorporated into the CS model to sparsely represent the scene. Next, the full set of TWRI data samples is recovered, and the conventional DS technique is then applied to generate the scene image. We first give a brief review of the conventional DS beamforming method for image formation in Sec. 3.1, before presenting the new image formation approach in Sec. 3.2.


TWRI Using Delay-and-Sum Beamforming

Consider a stepped-frequency monostatic TWRI system that uses M transceivers and N narrowband signals to image a scene containing R targets. The signal received at the m-th antenna location and n-th frequency, f_n, is given by

z_{m,n} = ∑_{r=1}^{R} σ_r(f_n) e^{−j2πf_n τ_{m,r}},    (5)

where σ_r(f_n) is the reflection coefficient of the r-th target at the n-th frequency, and τ_{m,r} is the round-trip travel time of the signal between the m-th antenna location and the r-th target location. In the stepped-frequency approach, the frequency bins f_n are uniformly distributed over the entire frequency band, with a step size Δf:


f_n = f_1 + (n − 1)Δf,  for n = 1, 2, …, N,    (6)
where f1 is the first transmitted frequency.

The target space behind the wall is partitioned into a rectangular grid, with Nx pixels along the crossrange direction and Ny pixels along the downrange direction. Using DS beamforming, a complex image is formed by aggregating the measurements z_{m,n}; the value of the pixel at location (x, y) is computed as

I(x, y) = (1/(MN)) ∑_{m=1}^{M} ∑_{n=1}^{N} z_{m,n} e^{j2πf_n τ_{m,(x,y)}},    (7)

where τ_{m,(x,y)} is the focusing delay between the m-th transceiver and a target located at pixel position (x, y). Assuming that the wall thickness and relative permittivity are known, the focusing delay can be calculated using Snell's law, the distance from the transceiver to the front wall, and the distance from the target to the back wall; see Refs. 11, 21, and 36.
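The beamforming step can be sketched in Python as follows. This free-space toy omits the wall refraction (Snell's-law delay correction), and the geometry, frequencies, and grid sizes are illustrative assumptions rather than the paper's setup.

```python
# Free-space sketch of Eqs. (5) and (7): simulate stepped-frequency
# monostatic returns from point targets, then focus with delay-and-sum.
import numpy as np

c = 3e8
M, N = 21, 101                                   # antennas, frequency bins
f = 1e9 + 20e6 * np.arange(N)                    # f_n = f_1 + (n - 1) df
ant = np.stack([np.linspace(-0.6, 0.6, M), np.zeros(M)], axis=1)
targets = [((-0.5, 3.0), 1.0), ((0.8, 4.0), 0.7)]  # ((x, y), reflectivity)

def delay(p, q):
    """Round-trip monostatic propagation delay between points p and q."""
    return 2 * np.hypot(p[0] - q[0], p[1] - q[1]) / c

# Eq. (5): z[m, n] = sum_r sigma_r exp(-j 2 pi f_n tau_{m,r})
z = np.zeros((M, N), dtype=complex)
for (pos, sigma) in targets:
    for m in range(M):
        z[m] += sigma * np.exp(-2j * np.pi * f * delay(ant[m], pos))

# Eq. (7): coherent sum of back-propagated returns at every pixel
xs, ys = np.linspace(-1, 1, 41), np.linspace(2, 5, 61)
image = np.zeros((len(ys), len(xs)))
for iy, yv in enumerate(ys):
    for ix, xv in enumerate(xs):
        focus = sum(np.sum(z[m] * np.exp(2j * np.pi * f * delay(ant[m], (xv, yv))))
                    for m in range(M))
        image[iy, ix] = abs(focus) / (M * N)

iy, ix = np.unravel_index(np.argmax(image), image.shape)
print(xs[ix], ys[iy])   # brightest pixel, near the strongest target
```

The matched phase term in the inner sum is what turns the raw space-frequency samples into a focused image; adding the wall delay correction would only change the `delay` function.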


Proposed Two-Stage TWRI

Let z be the column vector obtained by stacking the data samples z_{m,n} in Eq. (5), for m = 1, 2, …, M and n = 1, 2, …, N. Let s_{xy} be an indicator function defined as

s_{xy} = σ_r, if the r-th target occupies the pixel at (x, y); s_{xy} = 0, otherwise.    (8)
The elements s_{xy} are lexicographically ordered into a column vector s; the magnitude of each element of s reflects the significance of the corresponding point in the scene. From Eq. (5), the full measurement vector z can be represented as

z = Ψs,    (9)

where Ψ is an overcomplete dictionary that depends on the target scene, the antenna locations, and the transmitted frequencies. More precisely, Ψ is a matrix with (M×N) rows and (Nx×Ny) columns, whose entry at row r and column c is given by

Ψ(r, c) = e^{−j2πf_n τ_{m,(x,y)}},    (10)

where r = (m − 1)N + n and c = (x − 1)Ny + y.

To reduce the data acquisition time and computational complexity, we propose acquiring only a small number of samples, represented by the vector y. The measurements in y are obtained by selecting only a subset of Ma antenna locations and Nf frequencies. In this paper, the reduced antenna locations are uniformly selected, and at each selected antenna location the same set of frequency bins is regularly subsampled. This fast data acquisition scheme leads to stable image quality and is well suited to hardware implementation. Figure 1(a) shows conventional radar imaging, in which the full set of data samples is acquired; Fig. 1(b) illustrates the space-frequency subsampling pattern used in the proposed approach.

Fig. 1

Data acquisition for TWRI: (a) conventional radar imaging scheme; (b) TWRI based on CS. The vertical axis represents the antenna location, and the horizontal axis represents the transmitted frequency. The filled rectangles represent the acquired data samples.


Mathematically, the CS data acquisition can be represented by a projection matrix Φ with (Ma×Nf) rows and (M×N) columns. Each row of Φ contains a single nonzero entry, equal to 1, at the position determined by the selected antenna location and frequency bin. The reduced measurement vector y can thus be expressed as

y = Φz = ΦΨs = As,    (11)

where A = ΦΨ.
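The subsampling operator Φ in Eq. (11) is simply a row-selection matrix. The sketch below builds it for uniformly spaced antenna and frequency picks; the retained counts Ma and Nf are illustrative values, not the ones used in the experiments.

```python
# Build the selection matrix Phi of Eq. (11): keep Ma uniformly spaced
# antennas and Nf uniformly spaced frequency bins out of the full M x N grid.
import numpy as np

M, N = 57, 201          # full antenna locations and frequency bins
Ma, Nf = 6, 40          # retained subset (illustrative values)

ant_idx = np.linspace(0, M - 1, Ma, dtype=int)    # uniform antenna picks
frq_idx = np.linspace(0, N - 1, Nf, dtype=int)    # uniform frequency picks

# z is stacked so that sample (m, n) sits at row m*N + n (0-based version
# of r = (m - 1)N + n in the text)
keep = np.array([m * N + n for m in ant_idx for n in frq_idx])
Phi = np.zeros((Ma * Nf, M * N))
Phi[np.arange(Ma * Nf), keep] = 1.0               # one 1 per row

z = np.random.default_rng(1).standard_normal(M * N)
y = Phi @ z                                       # reduced measurements
print(y.shape, np.allclose(y, z[keep]))
```

In practice one would never materialize Φ densely; indexing `z[keep]` is equivalent, which is why the acquisition is hardware-friendly.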

In practical situations, the scene behind the wall is not exactly sparse because of multipath propagation, wall reflections, and the presence of extended objects such as people or furniture; the sparsity assumption on the vector s may therefore be violated. To overcome this problem, an additional overcomplete dictionary is employed to sparsely represent s; in our approach, a Gabor dictionary is used. Let W denote the synthesis operator of the signal expansion. The vector s can then be expressed as

s = Wα.    (12)
Substituting Eq. (12) into Eq. (11) yields

y = AWα.    (13)
For noisy radar signals, the compressive measurement vector y is modeled as

y = AWα + v,    (14)

where v is the noise component.

The full data volume can be recovered by two techniques: the synthesis method and the analysis method. In the synthesis technique, the problem is cast as

min_α ‖α‖₁ subject to ‖y − AWα‖₂ ≤ ϵ.    (15)

Once the coefficient vector α̂ has been obtained by solving this optimization problem, the full set of TWRI data samples is recovered using Eqs. (9) and (12):

ẑ = ΨWα̂.    (16)
In the analysis technique, the problem is formulated as

min_s ‖W⁻¹s‖₁ subject to ‖y − As‖₂ ≤ ϵ.    (17)

By solving this optimization problem, we obtain the vector s directly, which can be used to reconstruct the full measurement vector z, see Eq. (9).

Note that it was suggested in Ref. 37 that the analysis technique is less sensitive to noise than the synthesis technique. In our approach, we use the analysis technique for solving the CS problem. After reconstructing the full measurement vector z, we apply conventional DS beamforming to generate the scene image, as described in Sec. 3.1.
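The two-stage idea can be illustrated end to end with a small synthesis-flavored toy in which W is taken as the identity, so s itself is sparse. The random steering dictionary and the greedy OMP solver below are illustrative stand-ins for the scene-dependent dictionary Ψ and the NESTA solver used in the paper.

```python
# Toy two-stage pipeline: compressively sample z = Psi s, recover the
# sparse scene vector, then restore the full data volume for beamforming.
import numpy as np

def omp(A, y, K):
    """Greedy sparse recovery: pick the K atoms most correlated with residual."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s_hat = np.zeros(A.shape[1], dtype=complex)
    s_hat[support] = coef
    return s_hat

rng = np.random.default_rng(2)
MN, Q, K = 120, 160, 3                       # full samples, pixels, targets
Psi = np.exp(-2j * np.pi * rng.random((MN, Q)))  # toy dictionary, cf. Eq. (10)
s = np.zeros(Q, dtype=complex)
s[rng.choice(Q, K, replace=False)] = 0.5 + rng.random(K)
z = Psi @ s                                  # full data volume, Eq. (9)

keep = rng.choice(MN, 60, replace=False)     # compressive acquisition (Phi)
y, A = z[keep], Psi[keep]                    # y = Phi z, A = Phi Psi
s_hat = omp(A, y, K)                         # stage 1: sparse recovery
z_hat = Psi @ s_hat                          # restore the full data volume
print(np.linalg.norm(z_hat - z) / np.linalg.norm(z))
```

Stage 2 would then hand `z_hat` to the DS beamformer of Sec. 3.1 unchanged, which is the point of the two-stage design: the beamformer never needs to know the data were compressively acquired.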


Experimental Results and Analysis

In this section, we evaluate the proposed approach using both synthetic and real TWRI data sets. First, the performance of the proposed approach is investigated in Sec. 4.1 using synthetic data. Then the experimental results on real data are presented in Sec. 4.2, along with the TWRI experimental setup for radar signal acquisition.


Synthetic TWRI Data

Data acquisition is simulated for a stepped-frequency radar system operating between 0.7 and 3.1 GHz with a 12-MHz frequency step; the number of frequency bins is therefore N = 201. The scene is illuminated with an antenna array of length 1.24 m and inter-element spacing of 0.022 m, giving M = 57 transceivers. The full data volume z comprises M×N = 57×201 = 11,457 samples. Our goal is to acquire far fewer data samples without degrading the quality of the image.

The TWRI system is placed in front of a wall at a standoff distance of Z_off = 1.5 m. The thickness and relative permittivity of the wall are d = 0.143 m and ϵr = 7.6, respectively. The downrange and crossrange of the scene extend from 0 to 6 m and from −2 to 2 m, respectively. The pixel size is set to the Rayleigh resolution of the radar, which gives an image of 97×65 pixels. In this experiment, three extended targets (each covering 4 pixels) are placed behind the wall at positions p1 = (1.21 m, 0.78 m), p2 = (3.09 m, 1.09 m), and p3 = (4.96 m, 0.16 m). The reflection coefficients are assumed independent of signal frequency: σ1 = 1, σ2 = 0.5, and σ3 = 0.7, respectively. The first-order solver NESTA is used to solve the CS optimization problem in its analysis form, owing to its robustness, flexibility, and speed; more details about NESTA can be found in Ref. 33. A dictionary of complex Gabor functions is used for sparse decomposition of the scene.27

The peak signal-to-noise ratio (PSNR) is used to evaluate the quality of the reconstructed images:

PSNR = 20 log₁₀(I_max / RMSE),    (18)

where I_max denotes the maximum pixel value, and RMSE is the root-mean-square error between the reconstructed and ground-truth images. The performance of the proposed approach in the presence of noise is evaluated by adding white Gaussian noise to the received signal.
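The measure can be transcribed directly; the helper name and the toy images below are hypothetical.

```python
# PSNR = 20 log10(Imax / RMSE) between a reconstruction and its ground truth.
import numpy as np

def psnr(recon, truth):
    rmse = np.sqrt(np.mean((recon - truth) ** 2))
    return 20 * np.log10(truth.max() / rmse)

truth = np.zeros((8, 8)); truth[2, 3] = 1.0       # toy ground-truth image
noisy = truth + 0.01 * np.ones_like(truth)        # uniform 0.01 error
print(round(psnr(noisy, truth), 2))               # RMSE = 0.01, Imax = 1 -> 40 dB
```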

Figure 2 shows the ground-truth image and the DS beamforming image reconstructed from the full measurement volume. Note that in this paper all output images are normalized by the maximum image intensity, and the true target positions are indicated with solid white rectangles. Figure 3 illustrates the images formed with reduced subsets of measurements (12% and 1%), using DS beamforming [Fig. 3(a) and 3(b)] and the proposed approach [Fig. 3(c) and 3(d)]; here, the received signals are corrupted by additive white Gaussian (AWG) noise with SNR = 20 dB. Compared with the image obtained using DS beamforming with all measurement samples [Fig. 2(b)], the images produced by DS beamforming with reduced data samples [Fig. 3(a) and 3(b)] deteriorate significantly in quality and contain many false targets. By contrast, Fig. 3(c) and 3(d) show that the images obtained with the proposed approach, using the same reduced datasets, suffer little or no degradation. These results demonstrate that the proposed approach performs significantly better than standard DS beamforming when the number of measurements is drastically reduced, and that the images it produces from the reduced data have the same visual quality as those formed by standard DS beamforming using the full data volume.

Fig. 2

The behind-the-wall scene space: (a) ground-truth image; (b) DS image formed using full volume of data samples.


Fig. 3

Scene images formed under different settings: (a) DS using 12% of the full data volume; (b) DS using 1% of the full data volume; (c) proposed approach using 12% of the full data volume; (d) proposed approach using 1% of the full data volume. The signal is corrupted by noise with SNR = 20 dB.


To evaluate the robustness of the proposed approach in the presence of noise, the measurement signals are corrupted with AWG noise at SNRs of 5 and 30 dB. Figure 4 presents the average PSNR of the reconstructed images as a function of the ratio between the reduced measurement set and the full dataset. The figure clearly shows that the images formed with the proposed approach have considerably higher PSNR than those formed with standard DS beamforming from the same measurements. This is because the proposed approach reconstructs the full data samples using CS before applying DS beamforming.

Fig. 4

The PSNR of images created by the standard DS (dashed lines) and the proposed approach (solid lines).


To compare the performance of different imaging methods, we used three antenna locations and 40 uniformly selected frequencies, which represents 1% of the total data volume. Figure 5 shows the results obtained by different imaging methods. Figure 5(a) shows the CS image reconstructed with the method proposed in Ref. 10. Although the targets can easily be located, there are many false targets in the image. Figure 5(b) illustrates the image formed with the method presented in Ref. 12; this image is considerably degraded with the presence of heavy clutter. The reason is that the imaging method in Ref. 12 is not able to restore the full data volume from a reduced set of antenna locations. Figure 5(c) and 5(d) shows the images formed with the proposed approach using wavelet and Gabor sparsifying dictionaries, respectively. Here, the wavelet family is the dual-tree complex wavelet transform. It can clearly be observed that the image formed using the Gabor dictionary contains less clutter; however, both dictionaries yield high-quality images even with a significant reduction in the number of collected measurements.

Fig. 5

Scene images formed by different imaging approaches: (a) CS image formed by the method in Ref. 10; (b) DS image formed by the method in Ref. 12; (c) DS image formed by the proposed approach with the wavelet dictionary; (d) DS image formed by the proposed approach with the Gabor dictionary. The measurements make up 1% of the full data volume. The signal is corrupted by noise with SNR = 10 dB.


In the next experiment, only the frequency samples are reduced; the data is collected at all antenna locations, using M=57 transceivers. The reduced dataset represents 20% of the full data volume. Figure 6 presents the images formed using different approaches: standard CS method,10 the temporal frequency CS method,12 the proposed method with a wavelet dictionary, and the proposed method with a Gabor dictionary. It can be observed from Fig. 6(b) that there is a substantial improvement in the performance of the temporal frequency CS method.12 This is because when using all antenna locations, this imaging method can obtain the full data volume for forming the image. However, the proposed method yields images with less clutter, using both wavelet and Gabor dictionaries.

Fig. 6

Scene images formed by different imaging approaches: (a) CS image formed by the method in Ref. 10; (b) DS image formed by the method in Ref. 12; (c) DS image formed by the proposed approach with the wavelet dictionary; (d) DS image formed by the proposed approach with the Gabor dictionary. All antenna locations are used, and the frequency bins are only 20% of the total transmitted frequencies. The signal is corrupted by noise with SNR = 10 dB.


In summary, experimental results on synthetic TWRI data demonstrate that the proposed approach produces high-quality images using far fewer measurements by applying CS data acquisition in both frequency domain and spatial domain. The proposed approach performs better than the conventional DS and CS-based TWRI methods, especially when the number of measurements is drastically reduced.


Real TWRI Data

In this experiment, the proposed approach is evaluated on real TWRI data collected at the Radar Imaging Laboratory of the Center for Advanced Communications, Villanova University, USA. The radar system was placed in front of a concrete wall of thickness 0.143 m and relative permittivity ϵr = 7.6. The imaged scene, depicted in Fig. 7, contains a 0.4-m-high and 0.3-m-wide dihedral placed on a turntable made of two 1.2×2.4 m² sheets of 0.013-m-thick plywood. A stepped-frequency signal between 0.7 and 3.1 GHz, with a 3-MHz frequency step, was used to illuminate the scene. The antenna array was placed at a height of 1.22 m above the floor and a standoff distance of 1.016 m from the wall. The array was 1.24 m long, with an inter-element spacing of 0.022 m; therefore, the number of antenna elements is M = 57, the number of frequencies is N = 801, and the full measurement vector z comprises M×N = 57×801 = 45,657 samples. The imaged scene extends from 0 to 3 m in downrange and from −1 to 1 m in crossrange, and is partitioned into 81×54 pixels.

Fig. 7

TWRI data acquisition: (a) a photo of the scene; (b) a top-view of the behind-the-wall scene.


To quantify the performance of the various imaging methods, we use the target-to-clutter ratio (TCR) as a measure of the quality of the reconstructed images. The TCR is defined as the ratio (in dB) between the maximum magnitude of the target pixels and the average magnitude of the clutter pixels:1

TCR = 20 log₁₀( max_{(x,y)∈R_t} |I(x, y)| / [(1/N_c) ∑_{(x,y)∈R_c} |I(x, y)|] ),    (19)

where R_t is the target area, R_c is the clutter area, and N_c is the number of pixels in the clutter region. The target region is a 2×6-pixel area selected manually around the true target position.
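This measure is likewise a few lines of code; the function name, toy image, and region masks below are hypothetical.

```python
# TCR: peak target magnitude over mean clutter magnitude, in dB.
import numpy as np

def tcr(image, target_mask):
    target = np.abs(image[target_mask]).max()
    clutter = np.abs(image[~target_mask]).mean()
    return 20 * np.log10(target / clutter)

image = np.full((10, 10), 0.01)                  # uniform clutter floor
image[4:6, 4:7] = 1.0                            # 2x3 target patch
mask = np.zeros_like(image, dtype=bool)
mask[4:6, 4:7] = True
print(round(tcr(image, mask), 2))                # 20 log10(1 / 0.01) = 40 dB
```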

For reference purposes, Fig. 8(a) presents the image formed by the standard DS beamforming method using the full data volume. When all available data samples are used, conventional DS beamforming yields a high-quality image (TCR = 30.33 dB). However, when the number of samples is significantly reduced, standard DS beamforming alone no longer yields a high-quality image: Fig. 8(b) shows the image formed using 2 antenna locations and 200 frequency bins (i.e., 0.9% of the collected data), which clearly contains too much clutter (TCR = 16.76 dB).

Fig. 8

Images formed by different settings: (a) conventional DS using full data volume; (b) conventional DS using 0.9% full data volume.


Using the same reduced dataset, we compare the proposed approach with other CS-based TWRI methods. Figure 9(a) shows the standard CS image formed using the approach in Ref. 10. This image is significantly degraded compared with that in Fig. 8(a) (obtained using DS beamforming with full measurements). The reason is that the imaging method in Ref. 10 forms the scene image directly by solving the conventional CS problem; when the measurements are drastically reduced and the CS solution is only moderately sparse due to clutter and noise, the reconstructed image becomes less accurate. Because of the heavy clutter in Fig. 9(a), the TCR of the formed image drops to 21.78 dB. Figure 9(b) presents the image formed by the temporal-frequency CS method of Ref. 12. The quality of this image deteriorates because the method cannot recover the full data volume when the antenna locations are reduced; the background noise and clutter appear with stronger intensity than the target (TCR = 12.13 dB). Figure 9(c) and 9(d) show, respectively, the images formed by the proposed approach without and with the sparsifying Gabor dictionary. The image in Fig. 9(c), formed without the Gabor basis, contains high clutter (TCR = 14.40 dB), resulting in false targets. By contrast, Fig. 9(d) shows that the image formed by the proposed approach is considerably enhanced by incorporating the Gabor dictionary: the true target is located accurately and the clutter is considerably suppressed (TCR = 28.82 dB).

Fig. 9

Images reconstructed by different imaging methods: (a) CS image by imaging method in Ref. 10; (b) DS image by imaging method in Ref. 12; (c) DS image by proposed approach without Gabor dictionary; (d) DS image by proposed approach with Gabor dictionary. The measurements constitute 0.9% of full data volume.


The effectiveness of the proposed approach is partly due to the excellent space-frequency localization of Gabor atoms. Gabor functions are optimal in the sense that they achieve the minimum space-bandwidth product (by analogy with the time-bandwidth product), giving the best tradeoff between localization in the space and spatial-frequency domains. Figure 10 shows the recovered signal coefficients s for the dihedral scene of Fig. 7. The coefficients recovered with the Gabor dictionary, shown in Fig. 10(a), are much sparser and concentrated around the target location, whereas those recovered without the Gabor dictionary, shown in Fig. 10(b), are more spread out.

Fig. 10

Reconstructed signal coefficients s for the dihedral scene: (a) using the Gabor signal representation; (b) without using the Gabor signal representation.


In the final experiment, we use several wavelet families [Daubechies 8, Coiflet 2, and the dual-tree complex wavelet transform (DT-CWT)] as sparsifying bases and compare their performance with that of the Gabor dictionary. All wavelet transforms use three decomposition levels. Figure 11 illustrates the images formed using the different wavelet transforms [Fig. 11(a) to 11(c)] and the image formed with the Gabor dictionary [Fig. 11(d)]. The images reconstructed with the DT-CWT and Gabor dictionaries are of superior quality to those obtained with the Daubechies and Coiflet wavelets; the images formed using the DT-CWT and the Gabor dictionary have similar TCRs of 28.71 and 28.82 dB, respectively. The superiority of the DT-CWT and Gabor dictionaries can be explained by their better directional selectivity and localization in space and spatial frequency. We should note, however, that the best dictionary for a specific TWRI system depends on many factors, such as the scene characteristics, the target structure, and the decomposition level.

Fig. 11

Images formed by the proposed approach with different sparsifying bases: (a) Daubechies 8 (TCR = 18.56 dB); (b) Coiflet 2 (TCR = 26.46 dB); (c) DT-CWT (TCR = 28.71 dB); (d) complex Gabor dictionary (TCR = 28.82 dB).




Conclusion

In this paper, we proposed a new approach for TWRI image formation based on CS and DS beamforming. The proposed approach requires significantly fewer frequency bins and antenna locations for sensing. This leads to a considerable reduction in data acquisition time and computational complexity, while producing TWRI images of almost the same quality as DS beamforming with the full data volume. The experimental results demonstrate that, compared with standard DS beamforming, the proposed approach produces images with considerably higher PSNRs and is less sensitive to noise and to the number of data samples used. Furthermore, experimental results on real TWRI data indicate that the proposed approach produces images with higher TCRs than other CS-based image formation methods, and with TCRs similar to those of DS beamforming using the full data volume. It is therefore reasonable to expect that the proposed approach will enhance TWRI target detection, localization, and classification, while reducing the number of measurements and the data acquisition time.


Acknowledgments

We thank the Center for Advanced Communications at Villanova University, USA, for providing the real TWRI data used in the experiments. This work is supported in part by a grant from the Australian Research Council. We thank the anonymous reviewers for their constructive comments and suggestions.


References

1. Y.-S. Yoon and M. G. Amin, "Spatial filtering for wall-clutter mitigation in through-the-wall radar imaging," IEEE Trans. Geosci. Remote Sens. 47(9), 3192–3208 (2009). http://dx.doi.org/10.1109/TGRS.2009.2019728

2. F. Ahmad, M. G. Amin, and S. A. Kassam, "Synthetic aperture beamformer for imaging through a dielectric wall," IEEE Trans. Aerosp. Electron. Syst. 41(1), 271–283 (2005). http://dx.doi.org/10.1109/TAES.2005.1413761

3. G. Wang and M. G. Amin, "Imaging through unknown walls using different standoff distances," IEEE Trans. Signal Process. 54(10), 4015–4025 (2006). http://dx.doi.org/10.1109/TSP.2006.879325

4. F. Ahmad and M. G. Amin, "Noncoherent approach to through-the-wall radar localization," IEEE Trans. Aerosp. Electron. Syst. 42(4), 1405–1419 (2006). http://dx.doi.org/10.1109/TAES.2006.314581

5. M. G. Amin, Ed., Through-the-Wall Radar Imaging, CRC Press, Boca Raton, Florida (2010).

6. M. G. Amin and K. Sarabandi, "Special issue of IEEE Transactions on Geoscience and Remote Sensing," IEEE Trans. Geosci. Remote Sens. 47(5), 1267–1268 (2009). http://dx.doi.org/10.1109/TGRS.2009.2017053

7. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). http://dx.doi.org/10.1109/TIT.2006.871582

8. E. J. Candes, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Appl. Math. 59(8), 1207–1223 (2006). http://dx.doi.org/10.1002/(ISSN)1097-0312

9. E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory 52(2), 489–509 (2006). http://dx.doi.org/10.1109/TIT.2005.862083

10. Y.-S. Yoon and M. G. Amin, "Compressed sensing technique for high-resolution radar imaging," Proc. SPIE 6968, 69681A (2008). http://dx.doi.org/10.1117/12.777175

11. Q. Huang et al., "UWB through-wall imaging based on compressive sensing," IEEE Trans. Geosci. Remote Sens. 48(3), 1408–1415 (2010). http://dx.doi.org/10.1109/TGRS.2009.2030321

12. Y.-S. Yoon and M. G. Amin, "Through-the-wall radar imaging using compressive sensing along temporal frequency domain," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 2806–2809, IEEE, New York (2010).

13. M. G. Amin, F. Ahmad, and W. Zhang, "Target RCS exploitations in compressive sensing for through wall imaging," in Proc. Int. Waveform Diversity and Design Conf., pp. 150–154, IEEE, New York (2010).

14. M. Leigsnering, C. Debes, and A. M. Zoubir, "Compressive sensing in through-the-wall radar imaging," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 4008–4011, IEEE, New York (2011).

15. J. Yang et al., "Multiple-measurement vector model and its application to through-the-wall radar imaging," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 2672–2675, IEEE, New York (2011).

16. M. Duman and A. C. Gurbuz, "Analysis of compressive sensing based through the wall imaging," in Proc. IEEE Radar Conf., pp. 0641–0646, IEEE, New York (2012).

17. F. Ahmad, M. G. Amin, and G. Mandapati, "Autofocusing of through-the-wall radar imagery under unknown wall characteristics," IEEE Trans. Image Process. 16(7), 1785–1795 (2007). http://dx.doi.org/10.1109/TIP.2007.899030

18. M. G. Amin and F. Ahmad, "Wideband synthetic aperture beamforming for through-the-wall imaging [lecture notes]," IEEE Signal Process. Mag. 25(4), 110–113 (2008). http://dx.doi.org/10.1109/MSP.2008.923510

19. C. Debes, M. G. Amin, and A. M. Zoubir, "Target detection in single- and multiple-view through-the-wall radar imaging," IEEE Trans. Geosci. Remote Sens. 47(5), 1349–1361 (2009). http://dx.doi.org/10.1109/TGRS.2009.2013460

20. C. H. Seng et al., "Fuzzy logic-based image fusion for multi-view through-the-wall radar," in Proc. Int. Conf. Digital Image Computing: Techniques and Applications, pp. 423–428, IEEE, New York (2010).

21. F. Ahmad, "Multi-location wideband through-the-wall beamforming," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 5193–5196, IEEE, New York (2008).

22. F. Ahmad and M. G. Amin, "Multi-location wideband synthetic aperture imaging for urban sensing applications," J. Franklin Inst. 345(6), 618–639 (2008). http://dx.doi.org/10.1016/j.jfranklin.2008.03.003

23. K. M. Yemelyanov et al., "Adaptive polarization contrast techniques for through-wall microwave imaging applications," IEEE Trans. Geosci. Remote Sens. 47(5), 1362–1374 (2009). http://dx.doi.org/10.1109/TGRS.2009.2015569

24. A. A. MostafaC. DebesA. M. Zoubir, “Segmentation by classification for through-the-wall radar imaging using polarization signatures,” IEEE Trans. Geosci. Remote Sens. 50(9), 3425–3439 (2012). http://dx.doi.org/10.1109/TGRS.2011.2181951 Google Scholar

25. S. FischerG. CristobalR. Redondo, “Sparse overcomplete Gabor wavelet representation based on local competitions,” IEEE Trans. Image Process. 15(2), 265–272 (2006).IIPRE41057-7149 http://dx.doi.org/10.1109/TIP.2005.860614 Google Scholar

26. R. Fazel-RezaiW. Kinsner, “Image decomposition and reconstruction using two-dimensional complex-valued Gabor wavelets,” in Proc. IEEE Int. Conf. Cognitive Informatics, pp. 72–78, IEEE, New York (2007). Google Scholar

27. K. N. ChaudhuryM. Unser, “Construction of Hilbert transform pairs of wavelet bases and Gabor-like transforms,” IEEE Trans. Signal Process. 57(9), 3411–3425 (2009).ITPRED1053-587X http://dx.doi.org/10.1109/TSP.2009.2020767 Google Scholar

28. F. H. C. TiviveA. BouzerdoumM. G. Amin, “An SVD-based approach for mitigating wall reflections in through-the-wall radar imaging,” in Proc. IEEE Radar Conf., pp. 519–524, IEEE, New York (2011). Google Scholar

29. E. Lagunaset al., “Wall mitigation techniques for indoor sensing within the compressive sensing framework,” in Proc. IEEE 7th Sensor Array and Multichannel Signal Process. Workshop, pp. 213–216, IEEE, New York (2012). Google Scholar

30. E. Lagunaset al., “Joint wall mitigation and compressive sensing for indoor image reconstruction,” IEEE Trans. Geosci. Remote Sens. 51(2), 891–906 (2013). http://dx.doi.org/10.1109/TGRS.2012.2203824 Google Scholar

31. E. J. CandesM. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008).ISPRE61053-5888 http://dx.doi.org/10.1109/MSP.2007.914731 Google Scholar

32. O. Scherzer, Handbook of Mathematical Methods in Imaging, Springer Science, New York (2011). Google Scholar

33. J. BobinS. BeckerE. J. Candes, “Nesta: a fast and accurate first-order method for sparse recovery,” Technical report in California Institute of Technology, Tech. Rep., April (2009). Google Scholar

34. D. L. DonohoM. EladV. N. Temlyakov, “Stable recovery of sparse overcomplete representations in the presence of noise,” IEEE Trans. Inform. Theor. 52(1), 6–18 (2006).IETTAW0018-9448 http://dx.doi.org/10.1109/TIT.2005.860430 Google Scholar

35. J. A. TroppA. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inform. Theor. 53(12), 4655–4666 (2007).IETTAW0018-9448 http://dx.doi.org/10.1109/TIT.2007.909108 Google Scholar

36. C. Debeset al., “Target discrimination and classification in through-the-wall radar imaging,” IEEE Trans. Signal Process. 59(10), 4664–4676 (2011).ITPRED1053-587X http://dx.doi.org/10.1109/TSP.2011.2157914 Google Scholar

37. M. EladP. MilanfarR. Rubinstein, “Analysis versus synthesis in signal priors,” in Proc. European Signal Process. Conf., EURASIP, Florence, Italy (4–8 September 2006). Google Scholar



Van Ha Tang received a BEng degree in 2005 and an MEng degree in 2008, both in computer engineering, from Le Quy Don Technical University, Hanoi, Vietnam. He is currently completing his PhD degree in computer engineering at the University of Wollongong, Australia.


Abdesselam Bouzerdoum received his MSc and PhD degrees in electrical engineering from the University of Washington, Seattle. In 1991, he joined the University of Adelaide, Australia, and in 1998, he was appointed associate professor at Edith Cowan University, Perth, Australia. Since 2004, he has been professor of computer engineering at the University of Wollongong, where he served as head of the School of Electrical, Computer and Telecommunications Engineering (2004 to 2006) and associate dean of research, Faculty of Informatics (2007 to 2013). He is the recipient of the Eureka Prize for Outstanding Science in Support of Defence or National Security (2011), the Chester Sall Award (2005), and a Chercheur de Haut Niveau Award from the French Ministry of Research (2001). He has published over 280 technical articles and graduated many PhD and master’s students. He is a senior member of IEEE and a member of the International Neural Network Society and the Optical Society of America.


Son Lam Phung received a BEng degree with first-class honors in 1999 and a PhD degree in 2003, both in computer engineering, from Edith Cowan University, Perth, Australia. He received the University and Faculty Medals in 2000. He is currently a senior lecturer in the School of Electrical, Computer and Telecommunications Engineering, University of Wollongong. His general research interests are in the areas of image and signal processing, neural networks, pattern recognition, and machine learning.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Van Ha Tang, Abdesselam Bouzerdoum, and Son Lam Phung, "Two-stage through-the-wall radar image formation using compressive sensing," Journal of Electronic Imaging 22(2), 021006 (25 January 2013). https://doi.org/10.1117/1.JEI.22.2.021006
