## 1.

## Introduction

Through-the-wall radar imaging (TWRI) is an emerging technology that addresses the desire to see inside buildings using electromagnetic (EM) waves for various purposes, including determining the building layout, discerning the building's intent and the nature of activities, locating and tracking occupants, and even identifying and classifying inanimate objects of interest within the building. TWRI is highly desirable for law enforcement, fire and rescue, emergency relief, and military operations.^{1–6}

Applications primarily driving TWRI development can be divided according to whether information about motion within a structure or an image of the structure and its stationary contents is sought. The ability to detect motion is highly desirable for discerning building intent and in many fire and hostage situations. Discrimination of movements from background clutter can be achieved through change detection (CD) or exploitation of Doppler.^{7–24} One-dimensional (1-D) motion detection and localization systems employ a single transmitter and receiver and can only provide range-to-motion, whereas two- and three-dimensional (2-D and 3-D) multi-antenna systems can provide more accurate localization of moving targets. The 3-D systems have higher processing requirements compared with 2-D systems. However, the third dimension provides height information, which permits distinguishing people from animals, such as household pets. This is important since radar cross-section alone can be unreliable for behind-the-wall targets.

Imaging of structural features and stationary targets inside buildings requires at least 2-D and preferably 3-D systems.^{25–43} Because of the lack of any type of motion, these systems cannot rely on Doppler processing or CD for target detection and separation. Synthetic aperture radar (SAR) based approaches have been the most commonly used algorithms for this purpose. Most conventional SAR techniques neglect propagation distortions such as those encountered by signals passing through walls.^{44} These distortions degrade performance and can lead to ambiguities in target and wall localization. Free-space assumptions no longer apply once the EM waves propagate through the first wall. Without factoring in propagation effects, such as attenuation, reflection, refraction, diffraction, and dispersion, imaging of building contents will be severely distorted. As such, image formation methods, array processing techniques, target detection, and image sharpening paradigms must work in concert and be reexamined in view of the nature and specificities of the underlying sensing problem.

In addition to exterior walls, the presence of multipath and clutter can significantly contaminate the radar data, reducing the system's ability to image building interiors and to localize and track targets behind walls. Multiple reflections within the front wall produce wall residuals along the range dimension. These wall reverberations can be stronger than target reflections, masking targets and rendering them undetectable, especially weak targets close to the wall.^{45} Multipath stemming from multiple reflections of EM waves off the targets in conjunction with the walls may result in power being focused at pixels other than those corresponding to the target. This gives rise to ghosts, which can be confused with real targets inside the building.^{46–49} Further, uncompensated refraction through walls can lead to localization or focusing errors, causing offsets and blurring of imaged targets.^{26,39} SAR techniques and tomographic algorithms, specifically tailored for TWRI, are capable of making some of the adjustments for wave propagation through solid materials.^{26–30,36–41,50–57} While such approaches are well suited for shadowing, attenuation, and refraction effects, they account neither for multipath nor for the strong reflections from the front wall.

The problems caused by the front wall reflections can be successfully tackled through wall clutter mitigation techniques. Several approaches have been devised, which can be categorized into those based on estimating the wall parameters and those incorporating either wall backscattering strength or invariance with antenna location.^{39,45,58–61} In Refs. 39 and 58, a method to extract the dielectric constant and thickness of a frequency-independent wall from the time-domain scattered field was presented. The time-domain response of the wall was then analytically modeled and removed from the data. In Ref. 45, a spatial filtering method was applied to remove the DC component corresponding to the constant-type radar return typically associated with the front wall. The third method, presented in Refs. 59–61, was based not only on the wall scattering invariance along the array but also on the fact that wall reflections are relatively stronger than target reflections. As a result, the wall subspace is usually captured in the most dominant singular values when applying singular value decomposition (SVD) to the measured data matrix. The wall contribution can then be removed by orthogonal subspace projection.

Several methods have also been devised for dealing with multipath ghosts in order to provide proper representation of the ground truth. Earlier work attempted to mitigate the adverse effects stemming from multipath propagation.^{27} Subsequently, research has been conducted to utilize the additional information carried by the multipath returns. The work in Ref. 49 considered multipath exploitation in TWRI, assuming prior knowledge of the building layout. A scheme taking advantage of the additional energy residing in the target ghosts was devised. An image was first formed, the ghost locations for each target were calculated, and then the ghosts were mapped back onto the corresponding target. In this way, the image became ghost-free with increased signal-to-clutter ratio (SCR).

More recently, the focus of TWRI research has shifted toward addressing constraints on cost and acquisition time in order to achieve the ultimate objective of providing reliable situational awareness through high-resolution imaging in a fast and efficient manner. This goal is primarily challenged by the use of wideband signals and large array apertures. Most radar imaging systems acquire samples in frequency (or time) and space and then apply compression to reduce the amount of stored information. This approach has three inherent inefficiencies. First, as the demands for high resolution and more accurate information increase, so does the number of data samples to be recorded, stored, and subsequently processed. Second, there are significant data redundancies not exploited by the traditional sampling process. Third, it is wasteful to acquire and process data samples that will be discarded later. Further, producing an image of the indoor scene using few observations can be logistically important, as some of the measurements in space and frequency or time can be difficult, unavailable, or impossible to attain.

Toward the objective of providing timely actionable intelligence in urban environments, the emerging compressive sensing (CS) techniques have been shown to yield reduced-cost and efficient sensing operations that allow super-resolution imaging of sparse behind-the-wall scenes.^{10,62–76} Compressive sensing is an effective technique for scene reconstruction from a relatively small number of data samples without compromising imaging quality.^{77–89} In general, the minimum number of data samples or the sampling rate required for scene image formation is governed by the Nyquist theorem. However, when the scene is sparse, CS provides very efficient sampling, thereby significantly decreasing the required volume of collected data.

In this paper, we focus on CS for TWRI and present a review of ${l}_{1}$ norm reconstruction techniques that address the unique challenges associated with fast and efficient imaging in urban operations. Sections 2–5 deal with imaging of stationary scenes, whereas moving target localization is discussed in Secs. 6 and 7. More specifically, Sec. 2 deals with CS-based strategies for stepped-frequency radar imaging of sparse stationary scenes with reduced data volume in the spatial and frequency domains. Prior and complete removal of clutter is assumed, which renders the scene sparse. Section 3 presents CS solutions in the presence of front wall clutter. Wall mitigation in conjunction with the application of CS is presented for the case when the same reduced frequency set is used at all of the employed antennas. Section 4 considers imaging of the building interior structure using a CS-based approach, which exploits prior information about building construction practices to form an appropriate sparse representation of the building interior layout. Section 5 presents a CS-based multipath exploitation technique that achieves good image reconstruction in rich multipath indoor environments from few spatial and frequency measurements. Section 6 deals with joint localization of stationary and moving targets using CS-based approaches, provided that the indoor scene is sparse in both stationary and moving targets. Section 7 discusses a sparsity-based CD approach to moving target indication for TWRI applications, and deals with cases when the heavy clutter caused by strong reflections from exterior and interior walls reduces the sparsity of the scene. Concluding remarks are provided in Sec. 8. It is noted that, to avoid overcomplicating the notation, some symbols are used to indicate different variables in different sections of the paper. In those cases, the variables are redefined to reflect the change.

The progress reported in this paper is substantial and noteworthy. Nevertheless, many challenging scenarios and situations remain unresolved by the current techniques and, as such, further research and development are required. With the advent of technology that brings better hardware and improved system architectures, the opportunities for handling more complex building scenarios will certainly increase.

## 2.

## CS Strategies in Frequency and Spatial Domains for TWRI

In this section, we apply CS to through-the-wall imaging of stationary scenes, assuming prior and complete removal of the front wall clutter.^{62}^{,}^{63} For example, if the reference scene is known, then background subtraction can be performed for removal of wall clutter, thereby improving the sparsity of the behind-the-wall stationary scene. We assume stepped-frequency-based SAR operation. We first present the through-the-wall signal model, followed by a description of the sparsity-based scene reconstruction, highlighting the key equations. It is noted that the problem formulation can be modified in a straightforward manner for pulsed operation and multistatic systems.

## 2.1.

### Through-the-Wall Signal Model

Consider a homogeneous wall of thickness $d$ and dielectric constant $\epsilon $ located along the $x$-axis, and the region to be imaged located beyond the wall along the positive $z$-axis. Assume that an $N$-element line array of transceivers is located parallel to the wall at a standoff distance ${z}_{\text{off}}$, as shown in Fig. 1. Let the $n$’th transceiver, located at ${\mathbf{x}}_{n}=({x}_{n},-{z}_{\text{off}})$, illuminate the scene with a stepped-frequency signal of $M$ frequencies, which are equispaced over the desired bandwidth ${\omega}_{M-1}-{\omega}_{0}$,

## (1)

$${\omega}_{m}={\omega}_{0}+m\mathrm{\Delta}\omega ,\phantom{\rule[-0.0ex]{2em}{0.0ex}}m=0,1,\dots ,M-1,$$

where ${\omega}_{0}$ is the lowest frequency and $\mathrm{\Delta}\omega $ is the frequency step size. The received signal at the $n$’th transceiver corresponding to the $m$’th frequency for a scene of $P$ point targets is given by

## (2)

$$y(m,n)=\sum _{p=0}^{P-1}{\sigma}_{p}\text{\hspace{0.17em}}\mathrm{exp}(-j{\omega}_{m}{\tau}_{p,n}),$$

where ${\sigma}_{p}$ is the complex reflectivity of the $p$’th target and ${\tau}_{p,n}$ is the two-way propagation delay between the $n$’th transceiver and the $p$’th target. For through-the-wall propagation, this delay comprises the path segments traveled in air and through the wall,^{27,28,40}

## (3)

$${\tau}_{p,n}=\frac{2{l}_{np,\mathrm{air},1}}{c}+\frac{2{l}_{np,\text{wall}}}{\upsilon}+\frac{2{l}_{np,\mathrm{air},2}}{c},$$

where ${l}_{np,\mathrm{air},1}$ and ${l}_{np,\mathrm{air},2}$ are the distances traveled in air before and after the wall, ${l}_{np,\text{wall}}$ is the distance traveled through the wall, $c$ is the speed of light in free space, and $\upsilon =c/\sqrt{\epsilon}$ is the propagation speed through the wall.

An equivalent matrix-vector representation of the received signals in Eq. (2) can be obtained as follows. Assume that the region of interest is divided into a finite number of pixels ${N}_{x}\times {N}_{z}$ in cross-range and downrange, and the point targets occupy no more than $P(\ll {N}_{x}\times {N}_{z})$ pixels. Let $r(k,l)$, $k=0,1,\dots ,{N}_{x}-1$, $l=0,1,\dots ,{N}_{z}-1$, be a weighted indicator function, which takes the value ${\sigma}_{p}$ if the $p$’th point target exists at the $(k,l)$’th pixel; otherwise, it is zero. With the values $r(k,l)$ lexicographically ordered into a column vector $\mathbf{r}$ of length ${N}_{x}{N}_{z}$, the received signal corresponding to the $n$’th antenna can be expressed in matrix-vector form as

## (4)

$${\mathbf{y}}_{n}={\mathrm{\Psi}}_{n}\mathbf{r},$$

where ${\mathrm{\Psi}}_{n}$ is a matrix of dimensions $M\times {N}_{x}{N}_{z}$, whose $m$’th row is given by

## (5)

$${[{\mathrm{\Psi}}_{n}]}_{m}=[\begin{array}{ccc}{e}^{-j{\omega}_{m}{\tau}_{00,n}}& \cdots & {e}^{-j{\omega}_{m}{\tau}_{({N}_{x}{N}_{z}-1),n}}\end{array}].$$

Stacking the vectors corresponding to all $N$ antennas into a single measurement vector,

## (6)

$$\mathbf{y}={[{\mathbf{y}}_{0}^{T}\text{\hspace{0.17em}}{\mathbf{y}}_{1}^{T}\cdots {\mathbf{y}}_{N-1}^{T}]}^{T},$$

yields the linear system

## (7)

$$\mathbf{y}=\mathrm{\Psi}\mathbf{r},$$

where

## (8)

$$\mathrm{\Psi}={[{\mathrm{\Psi}}_{0}^{T}{\mathrm{\Psi}}_{1}^{T}\cdots {\mathrm{\Psi}}_{N-1}^{T}]}^{T}.$$

## 2.2.

### Sparsity-Based Data Acquisition and Scene Reconstruction

The expression in Eq. (7) involves the full set of measurements made at the $N$ array locations using the $M$ frequencies. For a sparse scene, it is possible to recover $\mathbf{r}$ from a reduced set of measurements. Consider $\stackrel{\u0306}{\mathbf{y}}$, which is a vector of length ${Q}_{1}{Q}_{2}(\ll MN)$ consisting of elements chosen from $\mathbf{y}$ as follows:

## (9)

$$\stackrel{\u0306}{\mathbf{y}}=\mathrm{\Phi}\mathbf{y},$$

where $\mathrm{\Phi}$ is a ${Q}_{1}{Q}_{2}\times MN$ matrix of the form

## (10)

$$\mathrm{\Phi}=\mathrm{kron}(\vartheta ,{\mathbf{I}}_{{Q}_{1}})\xb7\mathrm{diag}\{{\phi}^{(0)},{\phi}^{(1)},\dots ,{\phi}^{(N-1)}\}.$$

Here, each ${\phi}^{(n)}$ is a ${Q}_{1}\times M$ matrix that selects ${Q}_{1}$ of the $M$ frequencies at the $n$’th antenna location, and $\vartheta $ is a ${Q}_{2}\times N$ matrix that selects ${Q}_{2}$ of the $N$ antenna locations.^{80} Given $\stackrel{\u0306}{\mathbf{y}}$, we can recover $\mathbf{r}$ by solving the following problem (ideally, minimization of the ${l}_{0}$ norm would provide the sparsest solution; unfortunately, the resulting minimization problem is NP-hard. The ${l}_{1}$ norm has been shown to serve as a good surrogate for the ${l}_{0}$ norm,^{90} and the resulting ${l}_{1}$ minimization problem is convex and can be solved in polynomial time):

## (11)

$$\widehat{\mathbf{r}}=\mathrm{arg}\text{\hspace{0.17em}}\mathrm{min}\Vert \mathbf{r}{\Vert}_{{l}_{1}}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\text{subject to}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\stackrel{\u0306}{\mathbf{y}}\approx \mathrm{\Phi}\mathrm{\Psi}\mathbf{r}.$$

We note that the problem in Eq. (11) can be solved using convex relaxation, greedy pursuit, or combinatorial algorithms.^{91–96} In this section, we consider orthogonal matching pursuit (OMP), which is known to provide a fast and easy-to-implement solution. Moreover, OMP is better suited when frequency measurements are used.^{95} It is noted that the number of iterations of the OMP is usually associated with the level of sparsity of the scene. In practice, this piece of information is often unavailable *a priori*, and the stopping condition is heuristic. Underestimating the sparsity would result in the image not being completely reconstructed (underfitting), while overestimating it would cause some of the noise to be treated as signal (overfitting). Use of cross-validation (CV) has also been proposed to determine the stopping condition for the greedy algorithms.^{97–99} Cross-validation is a statistical technique that separates a data set into a training set and a CV set. The training set is used to detect the optimal stopping iteration. There is, however, a tradeoff between allocating measurements for reconstruction or for CV. More details can be found in Refs. 97 and 98.
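As a concrete illustration, the greedy recovery step can be sketched in a few lines of NumPy. This is a toy, hand-rolled OMP applied to a random complex matrix standing in for $\mathrm{\Phi}\mathrm{\Psi}$; the dimensions, random seed, and target count are illustrative only and are not those of the experiments reported in this paper.

```python
import numpy as np

def omp(A, y, n_iter):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit on the support by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        # column most correlated with the current residual
        k = int(np.argmax(np.abs(A.conj().T @ residual)))
        if k not in support:
            support.append(k)
        # least-squares fit restricted to the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n_pix, n_meas, P = 200, 60, 3                 # pixels, reduced measurements, targets
A = np.exp(-2j * np.pi * rng.random((n_meas, n_pix)))  # toy sensing matrix
A /= np.linalg.norm(A, axis=0)                # unit-norm columns
r_true = np.zeros(n_pix, dtype=complex)
r_true[rng.choice(n_pix, P, replace=False)] = 1.0
y = A @ r_true                                # noise-free reduced measurements
r_hat = omp(A, y, n_iter=P)                   # sparsity level assumed known here
print(np.linalg.norm(r_hat - r_true) / np.linalg.norm(r_true))
```

Here the number of iterations is set equal to the true sparsity $P$; as noted above, in practice this level is unknown and the stopping rule must be chosen heuristically or by cross-validation.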

## 2.3.

### Illustrative Results

A through-the-wall wideband SAR system was set up in the Radar Imaging Lab at Villanova University. A 67-element line array with an inter-element spacing of 0.0187 m, located along the $x$-axis, was synthesized parallel to a 0.14-m-thick solid concrete wall of length 3.05 m and at a standoff distance equal to 1.24 m. A stepped-frequency signal covering the 1 to 3 GHz frequency band with a step size of 2.75 MHz was employed. Thus, at each scan position, the radar collects 728 frequency measurements. A vertical metal dihedral was used as the target and was placed at (0, 4.4) m on the other side of the front wall. The size of each face of the dihedral is $0.39\times 0.28\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\mathrm{m}}^{2}$. The back and the side walls of the room were covered with RF absorbing material to reduce clutter. The empty scene without the dihedral target present was also measured to enable background subtraction for wall clutter removal.

The region to be imaged is chosen to be $4.9\times 5.4\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\mathrm{m}}^{2}$ centered at (0, 3.7) m and divided into $33\times 73\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{pixels}$ in cross-range and downrange, respectively. For CS, 20% of the frequencies and 51% of the array locations were used, which collectively represent 10.2% of the total data volume. Figures 2(a) and 2(c) depict the images corresponding to the full dataset obtained with back-projection and ${l}_{1}$ norm reconstruction, respectively. Figures 2(b) and 2(d) show the images obtained with back-projection and ${l}_{1}$ norm reconstruction, respectively, applied to the reduced background-subtracted dataset. In Fig. 2 and all subsequent figures in this paper, we plot the image intensity with the maximum intensity value in each image normalized to 0 dB. The true target position is indicated with a solid red rectangle. We observe that, with the availability of the empty-scene measurements, background subtraction renders the scene sparse, and thus the CS-based approach generates an image from reduced data in which the target can be easily identified. On the other hand, back-projection applied to the reduced dataset results in performance degradation, indicated by the presence of many artifacts in the corresponding image. OMP was used to generate the CS images. For this particular example, the number of OMP iterations was set to five.
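For reference, the back-projection images discussed above are formed by delay-and-sum focusing: each pixel coherently sums the measurements after compensating the hypothesized two-way delays. The following is a minimal free-space sketch (the wall delay of Eq. (3) is ignored, and the grid, aperture, and target position are illustrative, not the laboratory setup described here).

```python
import numpy as np

c = 3e8
freqs = np.linspace(1e9, 3e9, 64)                       # toy stepped-frequency grid
ants = np.stack([np.linspace(-0.6, 0.6, 17),
                 np.full(17, -1.0)], axis=1)            # line array at z = -1 m
target = np.array([0.0, 4.4])

# simulate monostatic stepped-frequency returns in free space, Eq. (2) with P = 1
d = np.linalg.norm(ants - target, axis=1)               # antenna-to-target distance
tau = 2 * d / c                                         # two-way delay per antenna
y = np.exp(-1j * 2 * np.pi * freqs[:, None] * tau[None, :])   # (M, N) data

# back-projection: coherently sum all returns at each pixel's hypothesized delay
xs = np.linspace(-1.0, 1.0, 41)
zs = np.linspace(3.5, 5.0, 31)
img = np.zeros((len(zs), len(xs)))
for i, z in enumerate(zs):
    for j, x in enumerate(xs):
        dp = np.hypot(ants[:, 0] - x, ants[:, 1] - z)
        tp = 2 * dp / c
        img[i, j] = np.abs(np.sum(y * np.exp(1j * 2 * np.pi * freqs[:, None] * tp[None, :])))

peak = np.unravel_index(np.argmax(img), img.shape)
print(xs[peak[1]], zs[peak[0]])
```

At the true target pixel, the hypothesized and actual delays match for every antenna and frequency, so the coherent sum peaks there; at other pixels the phases only partially align, producing the sidelobe artifacts that grow when the dataset is reduced.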

## 3.

## Effects of Walls on Compressive Sensing Solutions

The application of CS for TWRI as presented in Sec. 2 assumed prior and complete removal of front wall EM returns. Without this assumption, strong wall clutter, which extends along the range dimension, reduces the sparsity of the scene and, as such, impedes the application of CS.^{71–73} Having access to the background scene is not always possible in practical applications. In this section, we apply joint CS and wall mitigation techniques using reduced data measurements. In essence, we address wall clutter mitigation in the context of CS.

There are several approaches that successfully mitigate the front wall contribution to the received signal.^{39,45,58–61} These approaches were originally introduced to work on the full data volume and did not account for reduced data measurements, especially randomly selected ones. We examine the performance of the subspace projection wall mitigation technique^{60} in conjunction with sparse image reconstruction. Only a small subset of measurements is employed for both wall clutter reduction and image formation. We consider the case where the same subset of frequencies is used for each employed antenna. Wall clutter mitigation under use of different frequencies across the employed antennas is discussed in Refs. 68 and 73. It is noted that, although not reported in this paper, the spatial-filtering-based wall mitigation scheme^{45} in conjunction with CS provides a performance similar to that of the subspace projection scheme.^{73}

## 3.1.

### Wall Clutter Mitigation

We first extend the through-the-wall signal model of Eq. (2) to include the front wall return. Without the assumption of prior wall return removal, the output of the $n$’th transceiver corresponding to the $m$’th frequency for a scene of $P$ point targets is given by

## (12)

$$y(m,n)={\sigma}_{w}\text{\hspace{0.17em}}\mathrm{exp}(-j{\omega}_{m}{\tau}_{w})+\sum _{p=0}^{P-1}{\sigma}_{p}\text{\hspace{0.17em}}\mathrm{exp}(-j{\omega}_{m}{\tau}_{p,n}),$$

where ${\sigma}_{w}$ is the complex reflectivity of the wall and ${\tau}_{w}$ is the two-way propagation delay of the wall return. From Eq. (12), we note that ${\tau}_{w}$ does not vary with the antenna location since the array is parallel to the wall. Furthermore, as the wall is homogeneous and assumed to be much larger than the beamwidth of the antenna, the first term in Eq. (12) assumes the same value across the array aperture. Unlike ${\tau}_{w}$, the time delay ${\tau}_{p,n}$, given by Eq. (3), is different for each antenna location, since the signal path from the antenna to the target differs from one antenna to another.

The signals received by the $N$ antennas at the $M$ frequencies are arranged into an $M\times N$ matrix, $\mathbf{Y}$,

## (13)

$$\mathbf{Y}=[\begin{array}{cccc}{\mathbf{y}}_{0}& {\mathbf{y}}_{1}& \cdots & {\mathbf{y}}_{N-1}\end{array}],$$

where

## (14)

$${\mathbf{y}}_{n}={[\begin{array}{cccc}y(0,n)& y(1,n)& \cdots & y(M-1,n)\end{array}]}^{T}$$

is the $M\times 1$ vector containing the stepped-frequency signal received by the $n$’th antenna, with $y(m,n)$ given by Eq. (12). The eigen-structure of the imaged scene is obtained by performing the SVD of $\mathbf{Y}$,

## (15)

$$\mathbf{Y}=\mathbf{U}\mathrm{\Lambda}{\mathbf{V}}^{H},$$

where $H$ denotes the Hermitian transpose, $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices containing the left and right singular vectors,

## (16)

$$\mathbf{U}=[\begin{array}{cccc}{\mathbf{u}}_{1}& {\mathbf{u}}_{2}& \cdots & {\mathbf{u}}_{M}\end{array}],\phantom{\rule[-0.0ex]{1em}{0.0ex}}\mathbf{V}=[\begin{array}{cccc}{\mathbf{v}}_{1}& {\mathbf{v}}_{2}& \cdots & {\mathbf{v}}_{N}\end{array}],$$

respectively, and $\mathrm{\Lambda}$ is a diagonal matrix

## (17)

$$\mathrm{\Lambda}=\left(\begin{array}{ccc}{\lambda}_{1}& \dots & 0\\ \vdots & \ddots & \vdots \\ 0& \dots & {\lambda}_{N}\\ \vdots & \ddots & \vdots \\ 0& \cdots & 0\end{array}\right),$$

with singular values ${\lambda}_{1}\ge {\lambda}_{2}\ge \cdots \ge {\lambda}_{N}\ge 0$. Since the wall reflections are much stronger than the target returns, the wall subspace is captured by the first $K$ dominant left singular vectors,

## (18)

$${\mathbf{S}}_{\text{wall}}=[\begin{array}{cccc}{\mathbf{u}}_{1}& {\mathbf{u}}_{2}& \cdots & {\mathbf{u}}_{K}\end{array}].$$

The projection matrix onto the orthogonal complement of the wall subspace is

## (19)

$${\mathbf{S}}_{\text{wall}}^{\perp}=\mathbf{I}-{\mathbf{S}}_{\text{wall}}{\mathbf{S}}_{\text{wall}}^{H},$$

and the wall contribution is removed by projecting the measured data onto this subspace,^{60}

## (20)

$$\stackrel{~}{\mathbf{Y}}={\mathbf{S}}_{\text{wall}}^{\perp}\mathbf{Y}.$$

The resulting data matrix has little or no contribution from the front wall.
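The subspace projection steps above can be sketched numerically: simulate a strong wall return that is identical across antennas plus a weaker target return whose delay varies with antenna position, take the SVD, and project out the dominant singular vector. All dimensions, delays, and amplitudes below are illustrative, and the wall-subspace dimension $K$ is simply assumed to be 1.

```python
import numpy as np

M, N = 64, 16                             # frequencies x antenna locations
freqs = np.linspace(1e9, 3e9, M)

# wall return: same delay at every antenna location, much stronger than the target
tau_w = 8e-9
wall = 10.0 * np.exp(-1j * 2 * np.pi * freqs * tau_w)[:, None] * np.ones((1, N))

# target return: delay varies with antenna position along the array
tau_t = 25e-9 + 0.2e-9 * np.arange(N)
target = np.exp(-1j * 2 * np.pi * freqs[:, None] * tau_t[None, :])

Y = wall + target                         # measured data matrix, Eq. (13)

# wall subspace from the dominant left singular vector(s)
U, s, Vh = np.linalg.svd(Y, full_matrices=False)
K = 1                                     # assumed wall-subspace dimension
S_wall = U[:, :K]
P_perp = np.eye(M) - S_wall @ S_wall.conj().T   # Eq. (19)
Y_clean = P_perp @ Y                            # Eq. (20): wall-suppressed data

print(np.linalg.norm(P_perp @ wall) / np.linalg.norm(wall))
```

Because the wall component is (near) rank-one and far stronger than the target component, the leading singular vector aligns with it, and the projection suppresses the wall while leaving most of the target energy intact.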

## 3.2.

### Joint Wall Mitigation and CS

The subspace projection method for wall clutter reduction relies on the fact that the wall reflections are strong and assume very close values at the different antenna locations. When the same set of frequencies is employed at all antennas, the spatial invariance of the wall reflections is maintained.^{72,73} This permits direct application of the subspace projection method as a preprocessing step to the ${l}_{1}$ norm based scene reconstruction of Eq. (11).

## 3.3.

### Illustrative Results

We consider the same experimental setup as in Sec. 2.3. Figure 3(a) shows the result obtained with ${l}_{1}$ norm reconstruction using 10.2% of the raw data volume without background subtraction. The number of OMP iterations was set to 100. Comparing Fig. 3(a) and the corresponding background subtracted image of Fig. 2(d), it is evident that in the absence of access to the background scene, the wall mitigation techniques must be applied, as a preprocessing step, prior to CS in order to detect the targets behind the wall.

First, we consider the case when the same set of reduced frequencies is used for a reduced set of antenna locations. We employ only 10.2% of the data volume, i.e., 20% of the available frequencies and 51% of the antenna locations. The subspace projection method is applied to a $\mathbf{Y}$ matrix of reduced dimension $146\times 34$. The corresponding ${l}_{1}$ norm reconstructed image obtained with OMP is depicted in Fig. 3(b). It is clear that, even when both spatial and frequency observations are reduced, the joint application of wall clutter mitigation and CS techniques successfully provides front wall clutter suppression and unmasking of the target.

## 4.

## Designated Dictionary for Wall Detection

In this section, we address the problem of imaging building interior structures using a reduced set of measurements. We consider interior walls as targets of interest and attempt to reveal the building interior layout based on CS techniques. We note that construction practices suggest that exterior and interior walls are parallel or perpendicular to each other. This enables sparse scene representations using a dictionary of possible wall orientations and locations.^{76} Conventional CS recovery algorithms can then be applied to a reduced number of observations to recover the positions of the various walls, which is a primary goal in TWRI.

## 4.1.

### Signal Model Under Multiple Parallel Walls

Considering a monostatic stepped-frequency SAR system with $N$ antenna positions located parallel to the front wall, as shown in Fig. 1, we extend the signal model in Eq. (12) to include reflections from multiple parallel interior walls, in addition to the returns from the front wall and the $P$ point targets. That is, the received signal at the $n$’th antenna location corresponding to the $m$’th frequency can be expressed as

## (21)

$$y(m,n)={\sigma}_{w}\text{\hspace{0.17em}}\mathrm{exp}(-j{\omega}_{m}{\tau}_{w})+\sum _{p=0}^{P-1}{\sigma}_{p}\text{\hspace{0.17em}}\mathrm{exp}(-j{\omega}_{m}{\tau}_{p,n})\phantom{\rule{0ex}{0ex}}+\sum _{i=0}^{{I}_{w}-1}{\sigma}_{{w}_{i}}\text{\hspace{0.17em}}\mathrm{exp}(-j{\omega}_{m}{\tau}_{{w}_{i}}),$$

where ${I}_{w}$ is the number of interior walls parallel to the front wall, and ${\sigma}_{{w}_{i}}$ and ${\tau}_{{w}_{i}}$ denote, respectively, the complex reflectivity and the two-way propagation delay associated with the $i$’th interior wall. Note that the above model contains contributions only from interior walls parallel to the front wall and the antenna array. This is because, due to the specular nature of the wall reflections, a SAR system located parallel to the front wall will only be able to receive direct returns from walls that are parallel to the front wall. The detection of perpendicular walls is possible by concurrently detecting and locating the canonical scattering mechanism of corner features created by the junction of the walls of a room, or by having access to another side of the building. Extension of the signal model to incorporate corner returns is reported in Ref. 76.

Instead of the point-target based sensing matrix described in Eq. (7), where each antenna accumulates the contributions of all the pixels, we use an alternate sensing matrix, proposed in Ref. 68, to relate the scene vector, $\mathbf{r}$, and the observation vector, $\mathbf{y}$. This matrix models the specular reflections produced by the walls. Due to wall specular reflections, and since the array is assumed parallel to the front wall and, thus, parallel to the interior walls, the rays collected at the $n$’th antenna will be produced only by portions of the walls that are in front of this antenna [see Fig. 4(a)]. The alternate matrix, therefore, only considers the contributions of the pixels that are located in front of each antenna. In so doing, the returns of the walls located parallel to the array axis are emphasized. As such, it is most suited to the specific building structure imaging problem, wherein the signal returns are mainly caused by EM reflections off exterior and interior walls. The alternate linear model can be expressed as

## (22)

$$\mathbf{y}=\overline{\mathrm{\Psi}}\mathbf{r},$$

where

## (23)

$$\overline{\mathrm{\Psi}}={[\begin{array}{cccc}{\overline{\mathrm{\Psi}}}_{0}^{T}& {\overline{\mathrm{\Psi}}}_{1}^{T}& \dots & {\overline{\mathrm{\Psi}}}_{N-1}^{T}\end{array}]}^{T},$$

with the $m$’th row of ${\overline{\mathrm{\Psi}}}_{n}$ given by

## (24)

$${[{\overline{\mathrm{\Psi}}}_{n}]}_{m}=[\begin{array}{ccc}{\mathfrak{I}}_{[(0,0),n]}{e}^{-j{\omega}_{m}{\tau}_{(0,0)}}& \dots & {\mathfrak{I}}_{[({N}_{x}-1,{N}_{z}-1),n]}{e}^{-j{\omega}_{m}{\tau}_{({N}_{x}-1,{N}_{z}-1)}}\end{array}].$$

The indicator function ${\mathfrak{I}}_{[(k,l),n]}$ is defined as

## (25)

$${\mathfrak{I}}_{[(k,l),n]}\phantom{\rule{0ex}{0ex}}=\{\begin{array}{cc}1,& \text{if the}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{(k,l)}^{\prime}\mathrm{th}\text{\hspace{0.17em}}\text{pixel is in front of the}\text{\hspace{0.17em}}{n}^{\prime}\mathrm{th}\text{\hspace{0.17em}}\text{antenna}\\ 0,& \text{otherwise}\end{array}.$$

$${\mathfrak{I}}_{[(k,l),n]}\phantom{\rule{0ex}{0ex}}=\{\begin{array}{cc}1,& \text{if the}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{(k,l)}^{\prime}\mathrm{th}\text{\hspace{0.17em}}\text{pixel is in front of the}\text{\hspace{0.17em}}{n\text{\hspace{0.17em}}}^{\prime}\text{\hspace{0.17em}}\mathrm{th}\text{\hspace{0.17em}}\text{antenna}\\ 0,& \text{otherwise}\end{array}.$$## 4.2.

### Sparsifying Dictionary for Wall Detection

Since the number of parallel walls is typically small compared with the downrange extent of the building, the decomposition of the image into parallel walls can be considered sparse. Note that although other indoor targets, such as furniture and humans, may be present, their projections onto the horizontal lines are expected to be negligible compared with those of the walls.

In order to obtain a linear matrix-vector relation between the scene and the horizontal projections, we define a sparsifying matrix $\mathbf{R}$ composed of possible wall locations. Specifically, each column of the dictionary $\mathbf{R}$ represents an image containing a single wall of length ${l}_{x}$ pixels, located at a specific cross-range and a specific downrange in the image. Consider the cross-range to be divided into ${N}_{c}$ nonoverlapping blocks of ${l}_{x}$ pixels each [see Fig. 5(a)], and the downrange division defined by the pixel grid. The number of blocks ${N}_{c}$ is determined by the value of ${l}_{x}$, which is the minimum expected wall length in the scene. Therefore, the dimension of $\mathbf{R}$ is ${N}_{x}{N}_{z}\times {N}_{c}{N}_{z}$, where the product ${N}_{c}{N}_{z}$ denotes the number of possible wall locations. Figure 5(b) shows a simplified scheme of the sparsifying dictionary generation. The projection associated with each wall location is given by

## (26)

$${g}^{(b)}(l)=\sum _{k\in B[b]}r(k,l),$$

where $B[b]$ indicates the $b$’th cross-range block and $b=1,2,\dots ,{N}_{c}$. Defining

## (27)

$$\mathbf{g}=[\begin{array}{cccccccccc}{g}^{(1)}(0)& \cdots & {g}^{({N}_{c})}(0)& {g}^{(1)}(1)& \cdots & {g}^{({N}_{c})}(1)& \cdots & {g}^{(1)}({N}_{z}-1)& \cdots & {g}^{({N}_{c})}({N}_{z}-1)\end{array}],$$

the scene vector can be expressed as

## (28)

$$\mathbf{r}=\mathbf{R}\mathbf{g},$$

and the sparse vector $\mathbf{g}$ can be recovered from the reduced measurements by replacing $\mathrm{\Psi}$ with $\overline{\mathrm{\Psi}}\mathbf{R}$ in the ${l}_{1}$ norm reconstruction of Eq. (11). It is noted that we are implicitly assuming that the extents of the walls in the scene are integer multiples of the block of ${l}_{x}$ pixels. In case this condition is not satisfied, the maximum error in determining the wall extent will be at most equal to the chosen block size. Note that incorporation of the corner effects will help resolve this issue, since the localization of corners will identify the wall extent.^{76}
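A toy construction of the dictionary $\mathbf{R}$ and the sparse vector $\mathbf{g}$ may clarify the indexing: each column of $\mathbf{R}$ is a vectorized image containing one wall segment of ${l}_{x}$ pixels, with columns ordered block-first within each downrange row, as in Eq. (27). The grid sizes and wall placement below are illustrative only.

```python
import numpy as np

Nx, Nz, lx = 12, 10, 4          # toy image grid and minimum wall length (pixels)
Nc = Nx // lx                   # number of non-overlapping cross-range blocks

# each column of R is a lexicographically ordered image containing a single
# horizontal wall segment: cross-range block b at downrange row l
R = np.zeros((Nx * Nz, Nc * Nz))
for l in range(Nz):
    for b in range(Nc):
        img = np.zeros((Nx, Nz))
        img[b * lx:(b + 1) * lx, l] = 1.0   # wall segment of lx pixels
        R[:, l * Nc + b] = img.ravel()      # column index matches Eq. (27) order

# a scene with one wall spanning two adjacent blocks at downrange row 6
g = np.zeros(Nc * Nz)
g[6 * Nc + 0] = 1.0
g[6 * Nc + 1] = 1.0
r = R @ g                                   # scene vector, r = R g
scene = r.reshape(Nx, Nz)
print(scene[:, 6])
```

Only two of the ${N}_{c}{N}_{z}$ coefficients in $\mathbf{g}$ are nonzero, while the resulting scene contains a full $2{l}_{x}$-pixel wall, which is precisely the sparsity gain the dictionary provides.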

## 4.3.

### Illustrative Results

A through-the-wall SAR system was set up in the Radar Imaging Lab, Villanova University. A stepped-frequency signal consisting of 335 frequencies covering the 1 to 2 GHz frequency band was used for interrogating the scene. A monostatic synthetic aperture array, consisting of 71 element locations with an inter-element spacing of 2.2 cm, was employed. The scene consisted of two parallel plywood walls, each 2.25 cm thick, 1.83 m wide, and 2.43 m high. Both walls were centered at 0 m in cross-range. The first and the second walls were located at respective distances of 3.25 and 5.1 m from the antenna baseline. Figure 6(a) depicts the geometry of the experimental scene.

The region to be imaged is chosen to be $5.65\text{\hspace{0.17em}}\mathrm{m}$ (cross-range) $\times \text{\hspace{0.17em}}4.45\text{\hspace{0.17em}}\mathrm{m}$ (downrange), centered at (0, 4.23) m, and is divided into $128\times 128\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{pixels}$. For the CS approach, we use a uniform subset of only 84 frequencies at each of the 18 uniformly spaced antenna locations, which represents 6.4% of the full data volume. The CS reconstructed image is shown in Fig. 6(b). We note that the proposed algorithm was able to reconstruct both walls. However, it can be observed in Fig. 6(b) that ghost walls appear immediately behind each true wall position. These ghosts are attributed to the dihedral-type reflections from the wall-floor junctions.

## 5.

## CS and Multipath Exploitation

In this section, we consider the problem of multipath in view of the requirements of fast data acquisition and reduced measurements. Multipath ghosts may cast a sparse scene as a populated one and, at a minimum, render the scene less sparse, degrading the performance of CS-based reconstruction. A CS method that directly incorporates multipath exploitation into sparse signal reconstruction for imaging of stationary scenes with a stepped-frequency monostatic SAR is presented. Assuming prior knowledge of the building layout, the propagation delays corresponding to the different multipath returns for each assumed target position are calculated, and the multipath returns associated with reflections from the same wall are grouped together and represented by one measurement matrix. This allows CS solutions to focus the returns on the true target positions without ghosting. Although not considered in this section, it is noted that the clutter due to front wall reverberations can be mitigated by adapting a similar multipath formulation, which maps back multiple reflections within the wall after separating wall and target returns.^{100}

## 5.1.

### Multipath Propagation Model

We refer to the signal that propagates from the antenna through the front wall to the target and back to the antenna as the direct target return. Multipath propagation corresponds to indirect paths, which involve reflections at one or more interior walls by which the signal may reach the target. Multipath can also occur due to reflections from the floor and ceiling and interactions among different targets. In considering wall reflections and assuming diffuse target scattering, there are two typical cases of multipath. In the first case, the wave traverses a path that consists of two parts: one part is the propagation path to the target and back to the receiver, and the other part is a round trip from the target to an interior wall. As the signal weakens at each secondary wall reflection, this case can usually be neglected. Furthermore, except when the target is close to an interior wall, the corresponding propagation delay is long and, most likely, would be equivalent to the direct-path delay of a target that lies outside the perimeter of the room being imaged. Thus, if necessary, this type of multipath can be gated out. The second case is a bistatic scattering scenario, in which the signal propagation on transmit and receive takes place along different paths. This is the dominant case of multipath, with one of the paths being the direct propagation, to or from the target, and the other involving a secondary reflection at an interior wall.

Other higher-order multipath returns are possible as well. Signals reaching the target can undergo multiple reflections within the front wall. We refer to such signals as wall ringing multipaths. Also the reflection at the interior wall can occur at the outer wall-air interface. This will result, however, in additional attenuation and, therefore, can be neglected. In order to derive the multipath signal model, we assume perfect knowledge of the front wall, i.e., location, thickness, and dielectric constant, as well as the location of the interior walls.

## 5.1.1.

#### Interior wall multipath

Consider the antenna-target geometry illustrated in Fig. 7(a), where the front wall has been ignored for simplicity. The $p$’th target is located at ${\mathbf{x}}_{p}=({x}_{p},{z}_{p})$, and the interior wall is parallel to the $z$-axis and located at $x={x}_{w}$. Multipath propagation consists of the forward propagation from the $n$’th antenna to the target along the path ${P}^{\prime \prime}$ and the return from the target via a reflection at the interior wall along the path ${P}^{\prime}$. Assuming specular reflection at the wall interface, we observe from Fig. 7(a) that reflecting the return path about the interior wall yields an alternative antenna-target geometry. We obtain a virtual target located at ${{\mathbf{x}}^{\prime}}_{p}=(2{x}_{w}-{x}_{p},{z}_{p})$, and the delay associated with path ${P}^{\prime}$ is the same as that of the path ${\tilde{P}}^{\prime}$ from the virtual target to the antenna. This correspondence simplifies the calculation of the one-way propagation delay ${\tau}_{p,n}^{({P}^{\prime})}$ associated with path ${P}^{\prime}$. It is noted that this principle can be used for multipath via any interior wall.

From the position of the virtual target of an assumed target location, we can calculate the propagation delay along path ${P}^{\prime}$ as follows. Under the assumption of free space propagation, the delay can be simply calculated as the Euclidean distance from the virtual target to the receiver divided by the propagation speed of the wave. In the TWRI scenario, however, the wave has to pass through the front wall on its way from the virtual target to the receiver. As the front wall parameters are assumed to be known, the delay can be readily calculated from geometric considerations using Snell’s law.^{28}
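A minimal sketch of the virtual-target construction and the free-space delay computation (ignoring the front wall, whose effect would be handled via Snell's law as noted above; the helper names and numbers are ours):

```python
import math

def virtual_target(xp, zp, xw):
    """Mirror the target (xp, zp) about an interior wall at x = xw (specular reflection)."""
    return (2.0 * xw - xp, zp)

def one_way_delay(antenna, target, c=3.0e8):
    """Free-space one-way delay between an antenna and a (possibly virtual) target."""
    return math.hypot(target[0] - antenna[0], target[1] - antenna[1]) / c

# Target at (1, 4) m, interior wall at x = 2 m, antenna at the origin:
xv = virtual_target(1.0, 4.0, 2.0)
tau_P1 = one_way_delay((0.0, 0.0), xv)     # delay along the reflected return path P'
assert xv == (3.0, 4.0)
assert abs(tau_P1 - 5.0 / 3.0e8) < 1e-18
```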

## 5.1.2.

#### Wall ringing multipath

The effect of wall ringing on the target image can be delineated through Fig. 7(b), which depicts the wall and the incident, reflected, and refracted waves. The distance between the target and the array element in the cross-range direction, $\mathrm{\Delta}x$, can be expressed as

## (29)

$$\mathrm{\Delta}x=(\mathrm{\Delta}z-d)\mathrm{tan}\,{\theta}_{\mathrm{air}}+d(1+2{i}_{w})\mathrm{tan}\,{\theta}_{\text{wall}},$$

where $\mathrm{\Delta}z$ is the corresponding downrange distance, $d$ is the wall thickness, and ${i}_{w}$ denotes the number of internal bounces within the wall. The angles ${\theta}_{\mathrm{air}}$ and ${\theta}_{\text{wall}}$ are related through Snell's law,

## (30)

$$\frac{\mathrm{sin}\,{\theta}_{\mathrm{air}}}{\mathrm{sin}\,{\theta}_{\text{wall}}}=\sqrt{\epsilon},$$

where $\epsilon$ is the relative permittivity of the wall.^{101}
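Given Eqs. (29) and (30), the in-wall angle for a prescribed target offset can be found numerically, e.g., by bisection (a sketch with illustrative parameters; the solver and its names are ours, not part of the reference formulation):

```python
import math

def crossrange_offset(theta_wall, dz, d, eps, i_w):
    """Right-hand side of Eq. (29) as a function of the in-wall angle theta_wall."""
    theta_air = math.asin(math.sqrt(eps) * math.sin(theta_wall))  # Snell's law, Eq. (30)
    return (dz - d) * math.tan(theta_air) + d * (1 + 2 * i_w) * math.tan(theta_wall)

def solve_wall_angle(dx, dz, d, eps, i_w, iters=60):
    """Bisection for the theta_wall whose path covers cross-range offset dx."""
    lo, hi = 0.0, math.asin(1.0 / math.sqrt(eps)) - 1e-9   # keep theta_air below 90 deg
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if crossrange_offset(mid, dz, d, eps, i_w) < dx:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 20-cm wall with eps = 6, target 3 m downrange and 1 m in cross-range:
th0 = solve_wall_angle(1.0, 3.0, 0.2, 6.0, i_w=0)   # direct transmission
th1 = solve_wall_angle(1.0, 3.0, 0.2, 6.0, i_w=1)   # one internal bounce
assert th1 < th0   # extra in-wall travel requires a shallower refraction angle
```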

## 5.2.

### Received Signal Model

Having described the two principal multipath mechanisms in TWRI, namely the interior wall and wall ringing types of multipath, we are now in a position to develop a multipath model for the received signal. We assume that the front wall returns have been suppressed and the measured data contains only the target returns. The case with the wall returns present in the measurements is discussed in Ref. 100.

Each path $P$ from the transmitter to a target and back to receiver can be divided into two parts, ${P}^{\prime}$ and ${P}^{\prime \prime},$ where ${P}^{\prime \prime}$ denotes the partial path from the transmitter to the scattering target and ${P}^{\prime}$ is the return path back to the receiver. For each target-transceiver combination, there exists a number of partial paths due to the interior wall and wall ringing multipath phenomena. Let ${P}_{{i}_{1}}^{\prime}$, ${i}_{1}=0,1,\dots ,{R}_{1}-1$, and ${P}_{{i}_{2}}^{\prime \prime}$, ${i}_{2}=0,1,\dots ,{R}_{2}-1$, denote the feasible partial paths. Any combination of ${P}_{{i}_{1}}^{\prime}$ and ${P}_{{i}_{2}}^{\prime \prime}$ results in a round-trip path ${P}_{i}$, $i=0,1,\dots ,R-1$. We can establish a function that maps the index $i$ of the round-trip path to a pair of indices of the partial paths, $i\mapsto ({i}_{1},{i}_{2})$. Hence we can determine the maximum number $R\le {R}_{1}{R}_{2}$ of possible paths for each target-transceiver pair. Note that, in practice, $R\ll {R}_{1}{R}_{2}$, as some round-trip paths may be equal due to symmetry, while some others could be strongly attenuated and can thereby be neglected. We follow the convention that ${P}_{0}$ refers to the direct round-trip path.
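The pairing of partial paths into round trips can be sketched as follows (a toy enumeration with invented delays; the merging of equal-delay pairs reflects the symmetry argument above for a monostatic geometry):

```python
from itertools import product

def round_trip_paths(partial_delays):
    """Pair every transmit partial path with every receive partial path and merge
    round trips whose total delay coincides (e.g., (i1, i2) vs. (i2, i1) in a
    monostatic geometry). Returns a sorted list of (delay, [(i1, i2), ...])."""
    combos = {}
    for i1, i2 in product(range(len(partial_delays)), repeat=2):
        tau = partial_delays[i1] + partial_delays[i2]   # round-trip delay along P_i
        combos.setdefault(round(tau, 12), []).append((i1, i2))
    return sorted(combos.items())

# One-way partial-path delays (ns); index 0 is the direct partial path.
paths = round_trip_paths([10.0, 12.5, 14.0])
R = len(paths)                        # distinct round trips: R <= R1 * R2 = 9
assert R == 6                         # 9 ordered pairs collapse to 6 distinct delays
assert paths[0] == (20.0, [(0, 0)])   # P_0: the direct round trip
```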

The round-trip delay of the signal along path ${P}_{i}$, consisting of the partial paths ${P}_{{i}_{1}}^{\prime}$ and ${P}_{{i}_{2}}^{\prime \prime}$, can be calculated as

## (31)

$${\tau}_{p,n}^{(i)}={\tau}_{p,n}^{({P}_{{i}_{1}}^{\prime})}+{\tau}_{p,n}^{({P}_{{i}_{2}}^{\prime \prime})},$$

where ${\tau}_{p,n}^{({P}_{{i}_{1}}^{\prime})}$ and ${\tau}_{p,n}^{({P}_{{i}_{2}}^{\prime \prime})}$ denote the one-way delays along the respective partial paths for the $p$’th target and the $n$’th antenna.

We also associate a complex amplitude ${w}_{p}^{(i)}$ with each possible path corresponding to the $p$’th target, with the direct path, which is typically the strongest in TWRI, having ${w}_{p}^{(0)}=1$. Without loss of generality, we assume the same number of propagation paths for each target. The unavailability of a path for a particular target is reflected by a corresponding path amplitude of zero. The received signal at the $n$’th antenna due to the $m$’th frequency can, therefore, be expressed as

## (33)

$$y(m,n)=\sum _{i=0}^{R-1}\sum _{p=0}^{P-1}{w}_{p}^{(i)}{\sigma}_{p}^{(i)}\,\mathrm{exp}(-j{\omega}_{m}{\tau}_{p,n}^{(i)}).$$

Absorbing the path amplitudes ${w}_{p}^{(i)}$ into the complex reflectivities ${\sigma}_{p}^{(i)}$, the received signal can be rewritten as

## (34)

$$y(m,n)=\sum _{i=0}^{R-1}\sum _{p=0}^{P-1}{\sigma}_{p}^{(i)}\,\mathrm{exp}(-j{\omega}_{m}{\tau}_{p,n}^{(i)}).$$

The matrix-vector form for the received signal under multipath propagation is given by

## (35)

$$\mathbf{y}={\mathrm{\Psi}}^{(0)}{\mathbf{r}}^{(0)}+{\mathrm{\Psi}}^{(1)}{\mathbf{r}}^{(1)}+\dots +{\mathrm{\Psi}}^{(R-1)}{\mathbf{r}}^{(R-1)},$$

where ${\mathbf{r}}^{(i)}$ is the scene reflectivity vector and ${\mathrm{\Psi}}^{(i)}$ is the dictionary matrix associated with the $i$’th path,

## (36)

$${\mathbf{r}}^{(i)}={[{r}_{0}^{(i)}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\dots \phantom{\rule[-0.0ex]{1em}{0.0ex}}{r}_{{N}_{x}{N}_{z}-1}^{(i)}]}^{T},\phantom{\rule[-0.0ex]{1em}{0.0ex}}{[{\mathrm{\Psi}}^{(i)}]}_{s,q}=\mathrm{exp}(-j{\omega}_{m}{\tau}_{q,n}^{(i)}),$$

with $m=s\,\mathrm{mod}\,M$, $n=\lfloor s/M\rfloor$, $s=0,1,\dots ,MN-1$, and $q=0,1,\dots ,{N}_{x}{N}_{z}-1$.

## 5.3.

### Sparse Scene Reconstruction with Multipath Exploitation

Within the CS framework, we aim at undoing the ghosts, i.e., inverting the multipath measurement model and achieving a reconstruction, wherein only the true targets remain.

In practice, any prior knowledge about the exact relationship between the various subimages ${\mathbf{r}}^{(i)}$ of the sparse scene is either limited or nonexistent. However, we know with certainty that the subimages ${\mathbf{r}}^{(0)},{\mathbf{r}}^{(1)},\dots ,{\mathbf{r}}^{(R-1)}$ describe the same underlying scene. That is, the support of the $R$ images is the same, or at least approximately the same. The common structure property of the sparse scene suggests the application of a group sparse reconstruction.
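Before stating the reconstruction, it is useful to see how the per-path dictionaries of Eq. (36) are assembled; the following sketch (helper name and toy sizes are ours) implements the stated index mapping $m=s\,\mathrm{mod}\,M$, $n=\lfloor s/M\rfloor$:

```python
import numpy as np

def multipath_dictionary(omega, tau):
    """Per-path dictionary of Eq. (36): [Psi]_{s,q} = exp(-j w_m tau[n, q]),
    with row s mapped to frequency m = s mod M and antenna n = floor(s / M)."""
    M = len(omega)
    s = np.arange(M * tau.shape[0])
    m, n = s % M, s // M
    return np.exp(-1j * omega[m][:, None] * tau[n])

# Toy scene: M = 3 frequencies, N = 2 antennas, Q = 4 pixels; delays in seconds.
omega = 2 * np.pi * np.array([1.0e9, 1.5e9, 2.0e9])
tau = 1e-9 * np.random.default_rng(0).uniform(20, 40, size=(2, 4))
Psi = multipath_dictionary(omega, tau)
assert Psi.shape == (6, 4)          # MN rows, Nx*Nz columns
assert np.isclose(Psi[4, 2], np.exp(-1j * omega[1] * tau[1, 2]))  # s = 4 -> m = 1, n = 1
```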

All unknown vectors in Eq. (35) can be stacked to form a tall vector of length ${N}_{x}{N}_{z}R$

## (37)

$$\overrightarrow{\mathbf{r}}={[\begin{array}{cccc}{\mathbf{r}}^{{(0)}^{T}}& {\mathbf{r}}^{{(1)}^{T}}& \cdots & {\mathbf{r}}^{{(R-1)}^{T}}\end{array}]}^{T}.$$

Concatenating the dictionary matrices accordingly and applying the measurement matrix $\mathrm{\Phi}$ yields the reduced measurement model

## (38)

$$\stackrel{\u0306}{\mathbf{y}}=\mathrm{\Phi}[{\mathrm{\Psi}}^{(0)}\phantom{\rule[-0.0ex]{1em}{0.0ex}}{\mathrm{\Psi}}^{(1)}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\cdots \phantom{\rule[-0.0ex]{1em}{0.0ex}}{\mathrm{\Psi}}^{(R-1)}]\overrightarrow{\mathbf{r}}=\mathbf{B}\overrightarrow{\mathbf{r}}.$$

We proceed to reconstruct the images $\overrightarrow{\mathbf{r}}$ from $\stackrel{\u0306}{\mathbf{y}}$ under the measurement model in Eq. (38). It has been shown that a group sparse reconstruction can be obtained by a mixed ${l}_{1}-{l}_{2}$ norm regularization.^{102}103.104.^{–}^{105} Thus we solve

## (39)

$$\hat{\overrightarrow{\mathbf{r}}}=\mathrm{arg}\,\underset{\overrightarrow{\mathbf{r}}}{\mathrm{min}}\frac{1}{2}{\Vert \stackrel{\u0306}{\mathbf{y}}-\mathbf{B}\overrightarrow{\mathbf{r}}\Vert}_{2}^{2}+\alpha {\Vert \overrightarrow{\mathbf{r}}\Vert}_{2,1},$$

where $\alpha$ is the regularization parameter and the mixed ${l}_{1}-{l}_{2}$ norm is defined as^{106}

## (40)

$${\Vert \overrightarrow{\mathbf{r}}\Vert}_{2,1}=\sum _{q=0}^{{N}_{x}{N}_{z}-1}{\Vert {[{r}_{q}^{(0)},{r}_{q}^{(1)},\dots ,{r}_{q}^{(R-1)}]}^{T}\Vert}_{2}=\sum _{q=0}^{{N}_{x}{N}_{z}-1}\sqrt{\sum _{i=0}^{R-1}{r}_{q}^{(i)}{r}_{q}^{{(i)}^{*}}}.$$

The convex optimization problem in Eq. (39) can be solved using SpaRSA,^{102} YALL1 group,^{103} or other available schemes.^{105}^{,}^{107}

Once a solution $\widehat{\overrightarrow{\mathbf{r}}}$ is obtained, the subimages can be noncoherently combined to form an overall image with an improved signal-to-clutter-and-noise ratio (SCNR), with the elements of the composite image ${\widehat{\mathbf{r}}}_{\mathrm{GS}}$ defined as

## (41)

$${[{\widehat{\mathbf{r}}}_{\mathrm{GS}}]}_{q}=\sqrt{\sum _{i=0}^{R-1}{|{\widehat{r}}_{q}^{(i)}|}^{2}},\phantom{\rule[-0.0ex]{1em}{0.0ex}}q=0,1,\dots ,{N}_{x}{N}_{z}-1.$$
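The mixed norm of Eq. (40) and the noncoherent combination can be sketched as follows (a minimal sketch; the root-sum-square combining rule is our reading of the noncoherent combination, and the function names are ours):

```python
import numpy as np

def mixed_l21(r_stack, R):
    """||r||_{2,1} of Eq. (40): the stacked vector holds R subimages of equal
    length; each pixel's R path responses form one group."""
    groups = r_stack.reshape(R, -1)               # row i = subimage r^(i)
    return float(np.sum(np.sqrt(np.sum(np.abs(groups) ** 2, axis=0))))

def combine_noncoherent(r_stack, R):
    """Composite image: root-sum-square of the subimages at each pixel."""
    groups = r_stack.reshape(R, -1)
    return np.sqrt(np.sum(np.abs(groups) ** 2, axis=0))

# Two subimages of a 3-pixel scene sharing the same support (pixel 1).
r = np.array([0, 3 + 0j, 0, 0, 4j, 0])            # [r^(0); r^(1)], R = 2
assert abs(mixed_l21(r, 2) - 5.0) < 1e-12         # sqrt(3^2 + 4^2) at pixel 1
assert np.allclose(combine_noncoherent(r, 2), [0.0, 5.0, 0.0])
```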

## 5.4.

### Illustrative Results

An experiment was conducted in a semi-controlled environment at the Radar Imaging Lab, Villanova University. A single aluminum pipe (61 cm long, 7.6 cm diameter) was placed upright on a 1.2-m-high foam pedestal at 3.67 m downrange and 0.31 m cross-range, as shown in Fig. 8. A 77-element uniform linear monostatic array with an inter-element spacing of 1.9 cm was used for imaging. The origin of the coordinate system is chosen to be at the center of the array. The 0.2-m-thick concrete front wall was located parallel to the array at 2.44 m downrange. The left sidewall was at a cross-range of $-1.83\,\mathrm{m}$, whereas the back wall was at 6.37 m downrange (see Fig. 8). Also, there was a protruding corner on the right at 3.4 m cross-range and 4.57 m downrange. A stepped-frequency signal, consisting of 801 equally spaced frequency steps covering the 1 to 3 GHz band, was employed. The left and right side walls were covered with RF absorbing material, but the protruding right corner and the back wall were left uncovered.

We consider background-subtracted data to focus only on target multipath. Figure 9(a) depicts the backprojection image using all available data. Evidently, only the multipath ghosts due to the back wall and the protruding corner in the back right are visible. Hence we only consider these two multipath propagation cases for the group sparse CS scheme. We use 25% of the array elements and 50% of the frequencies. The corresponding CS reconstruction is shown in Fig. 9(b). The multipath ghosts have been clearly suppressed.

## 6.

## CS-Based Change Detection for Moving Target Localization

In this section, we consider sparsity-driven CD for human motion indication in TWRI applications. CD can be used in lieu of Doppler processing, wherein motion detection is accomplished by subtraction of data frames acquired over successive probing of the scene. In so doing, CD mitigates the heavy clutter that is caused by strong reflections from exterior and interior walls and also removes stationary objects present in the enclosed structure, thereby rendering a densely populated scene sparse.^{7}^{,}^{9}^{,}^{10} As a result, it becomes possible to exploit CS techniques for achieving reduction in the data volume. We assume a multistatic imaging system with physical transmit and receive apertures and a wideband transmit pulse. We establish an appropriate CD model for translational motion that permits formulation of linear modeling with sensing matrices, so as to apply CS for scene reconstruction. Other types of human motions involving sudden short movements of the limbs, head, and/or torso are discussed in Ref. 70.

## 6.1.

### Signal Model

Consider wideband radar operation with $M$ transmitters and $N$ receivers. A sequential multiplexing of the transmitters with simultaneous reception at multiple receivers is assumed. As such, a signal model can be developed based on single active transmitters. We note that the timing interval for each data frame is assumed to be a fraction of a second so that the moving target appears stationary during each data collection interval.

Let ${s}_{T}(t)$ be the wideband baseband signal used for interrogating the scene. For the case of a single point target with reflectivity ${\sigma}_{p}$, located at ${\mathbf{x}}_{p}=({x}_{p},{z}_{p})$ behind a wall, the pulse emitted by the $m$’th transmitter with phase center at ${\mathbf{x}}_{\mathrm{tm}}=({x}_{\mathrm{tm}},-{z}_{\text{off}})$ is received at the $n$’th receiver with phase center at ${\mathbf{x}}_{\mathrm{rn}}=({x}_{\mathrm{rn}},-{z}_{\text{off}})$ in the form

## (42)

$${y}_{mn}(t)={a}_{mn}(t)+{b}_{mn}(t),\phantom{\rule{0ex}{0ex}}{a}_{mn}(t)={\sigma}_{p}{s}_{T}(t-{\tau}_{p,mn})\mathrm{exp}(-j{\omega}_{c}{\tau}_{p,mn}),$$

where ${a}_{mn}(t)$ is the target return, ${b}_{mn}(t)$ collects the contributions of the stationary background, including the wall, ${\omega}_{c}$ is the carrier frequency, and ${\tau}_{p,mn}$ is the round-trip propagation delay between the $m$’th transmitter, the target, and the $n$’th receiver.

In its simplest form, CD is achieved by coherent subtraction of the data corresponding to two data frames, which may be consecutive or separated by one or more data frames. This subtraction operation is performed for each range bin. CD results in the set of difference signals,

## (43)

$$\delta {y}_{mn}(t)={y}_{mn}^{(L+1)}(t)-{y}_{mn}^{(1)}(t)={a}_{mn}^{(L+1)}(t)-{a}_{mn}^{(1)}(t),$$

where the superscripts denote the data frame indices and the stationary background ${b}_{mn}(t)$ cancels out. For the point target, the difference signal takes the form

## (44)

$$\delta {y}_{mn}(t)={\sigma}_{p}{s}_{T}(t-{\tau}_{p,mn}^{(L+1)})\mathrm{exp}(-j{\omega}_{c}{\tau}_{p,mn}^{(L+1)})-{\sigma}_{p}{s}_{T}(t-{\tau}_{p,mn}^{(1)})\mathrm{exp}(-j{\omega}_{c}{\tau}_{p,mn}^{(1)}).$$

## 6.2.

### Sparsity-Driven Change Detection under Translational Motion

Consider the difference signal in Eq. (44) for the case where the target is undergoing translational motion. Two nonconsecutive data frames with a relatively long time difference are used, i.e., $L\gg 1$ (Ref. 108). In this case, the target will change its range gate position during the time elapsed between the two data acquisitions. As seen from Eq. (44), the moving target will present itself as two targets, one corresponding to the target position during the first time interval and the other corresponding to the target location during the second data frame. It is noted that the imaged target at the reference position corresponding to the first data frame cannot be suppressed in the coherent CD approach. On the other hand, the noncoherent CD approach, which deals with differences of image magnitudes corresponding to the two data frames, allows suppression of the reference image through a zero-thresholding operation.^{23} However, as the noncoherent approach requires the scene reconstruction to be performed prior to CD, it is not a feasible option for sparsity-based imaging, which relies on coherent CD to render the scene sparse. Therefore, we rewrite Eq. (44) as

## (45)

$$\delta {y}_{mn}(t)=\sum _{i=1}^{2}{\tilde{\sigma}}_{i}{s}_{T}(t-{\tau}_{i,mn})\mathrm{exp}(-j{\omega}_{c}{\tau}_{i,mn}),$$## (46)

$${\tilde{\sigma}}_{i}=\{\begin{array}{cc}{\sigma}_{p}& i=1\\ -{\sigma}_{p}& i=2\end{array}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\text{and}\phantom{\rule[-0.0ex]{1em}{0.0ex}}{\tau}_{i,mn}=\{\begin{array}{cc}{\tau}_{p,mn}^{(L+1)}& i=1\\ {\tau}_{p,mn}^{(1)}& i=2\end{array}.$$

Sampling the difference signal at times $\{{t}_{k}{\}}_{k=0}^{K-1}$ and discretizing the scene into ${N}_{x}\times {N}_{z}$ pixels, the difference signal can be expressed in matrix-vector form as^{70}^{,}^{83}

## (47)

$$\mathrm{\Delta}{\mathbf{y}}_{mn}={\mathrm{\Psi}}_{mn}\mathbf{r},$$

where $\mathbf{r}$ is the vectorized scene reflectivity and the elements of the dictionary matrix ${\mathrm{\Psi}}_{mn}$ are given by

## (48)

$${[{\mathrm{\Psi}}_{mn}]}_{k,q}=\frac{{s}_{T}({t}_{k}-{\tau}_{q,mn})\mathrm{exp}(-j{\omega}_{c}{\tau}_{q,mn})}{{\Vert {\mathbf{s}}_{q,mn}\Vert}_{2}},\phantom{\rule[-0.0ex]{1em}{0.0ex}}k=0,1,\dots ,K-1,\phantom{\rule[-0.0ex]{1em}{0.0ex}}q=0,1,\dots ,{N}_{x}{N}_{z}-1,$$

where ${\mathbf{s}}_{q,mn}$ denotes the vector of samples ${s}_{T}({t}_{k}-{\tau}_{q,mn})\mathrm{exp}(-j{\omega}_{c}{\tau}_{q,mn})$, $k=0,1,\dots ,K-1$. The CD model described in Eqs. (47) and (48) permits the scene reconstruction within the CS framework. We measure a $J(\ll K)$ dimensional vector of elements randomly chosen from ${\mathrm{\Delta}\mathbf{y}}_{mn}$. The new measurements can be expressed as

## (49)

$$\mathrm{\Delta}{\stackrel{\u0306}{\mathbf{y}}}_{mn}={\phi}_{mn}{\mathrm{\Delta}\mathbf{y}}_{mn}={\phi}_{mn}{\mathrm{\Psi}}_{mn}\mathbf{r},$$

where ${\phi}_{mn}$ is a $J\times K$ measurement matrix. Various choices of the measurement matrix have been considered in the literature; see Refs. 83, 86, and 109 and the references therein. To name a few, the elements of the measurement matrix may be drawn from a Gaussian distribution, the matrix may have random $\pm 1$ entries with probability of 0.5, or its rows may be constructed by randomly selecting rows of a $K\times K$ identity matrix. It was shown in Ref. 83 that the measurement matrix with random $\pm 1$ elements requires the least amount of compressive measurements for the same radar imaging performance and permits a relatively straightforward data acquisition implementation. Therefore, we choose to use such a measurement matrix in the image reconstructions.
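Two of the measurement-matrix options just mentioned can be sketched in a few lines (helper names and sizes are ours):

```python
import numpy as np

def rademacher_matrix(J, K, rng):
    """J x K measurement matrix with independent +/-1 entries (probability 0.5 each)."""
    return rng.choice([-1.0, 1.0], size=(J, K))

def row_selection_matrix(J, K, rng):
    """Alternative: J rows drawn without replacement from the K x K identity,
    i.e., random sample picking."""
    return np.eye(K)[rng.choice(K, size=J, replace=False)]

rng = np.random.default_rng(1)
phi = rademacher_matrix(77, 1536, rng)       # e.g., keep 77 of 1536 time samples
assert phi.shape == (77, 1536)
assert set(np.unique(phi)) == {-1.0, 1.0}
sel = row_selection_matrix(77, 1536, rng)
assert np.all(sel.sum(axis=1) == 1.0)        # each row picks exactly one sample
```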

Given $\mathrm{\Delta}{\stackrel{\u0306}{\mathbf{y}}}_{mn}$ for $m=0$, $1,\dots ,M-1$, $n=0$, $1,\dots ,N-1$, we can recover $\mathbf{r}$ by solving the following equation:

## (50)

$$\widehat{\mathbf{r}}=\mathrm{arg}\,\underset{\mathbf{r}}{\mathrm{min}}{\Vert \mathbf{r}\Vert}_{{l}_{1}}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\text{subject to}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathrm{\Phi}\mathrm{\Psi}\mathbf{r}\approx \mathrm{\Delta}\stackrel{\u0306}{\mathbf{y}},$$

where

## (51)

$$\mathrm{\Psi}={[{\mathrm{\Psi}}_{00}^{T}\phantom{\rule[-0.0ex]{1em}{0.0ex}}{\mathrm{\Psi}}_{01}^{T}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\dots \phantom{\rule[-0.0ex]{1em}{0.0ex}}{\mathrm{\Psi}}_{(M-1)(N-1)}^{T}]}^{T},\phantom{\rule{0ex}{0ex}}\mathrm{\Phi}=\mathrm{diag}({\phi}_{00},{\phi}_{01},\dots ,{\phi}_{(M-1)(N-1)}),\phantom{\rule{0ex}{0ex}}\mathrm{\Delta}\stackrel{\u0306}{\mathbf{y}}={[\mathrm{\Delta}{\stackrel{\u0306}{\mathbf{y}}}_{00}^{T}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\mathrm{\Delta}{\stackrel{\u0306}{\mathbf{y}}}_{01}^{T}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\dots \phantom{\rule[-0.0ex]{1em}{0.0ex}}\mathrm{\Delta}{\stackrel{\u0306}{\mathbf{y}}}_{(M-1)(N-1)}^{T}]}^{T}.$$

## 6.3.

### Illustrative Results

A through-the-wall wideband pulsed radar system was used for data collection in the Radar Imaging Lab at Villanova University. The system uses a 0.7 ns Gaussian pulse for scene interrogation. The pulse is up-converted to 3 GHz for transmission and down-converted to baseband through in-phase and quadrature demodulation on reception. The system operational bandwidth from 1.5 to 4.5 GHz provides a range resolution of 5 cm. The peak transmit power is 25 dBm. Transmission is through a single horn antenna, which is mounted on a tripod. An eight-element line array with an inter-element spacing of 0.06 m is used as the receiver and is placed to the right of the transmit antenna. The center-to-center separation between the transmitter and the leftmost receive antenna is 0.28 m, as shown in Fig. 10. A $3.65\times 2.6\,{\mathrm{m}}^{2}$ wall segment was constructed utilizing 1-cm-thick cement board on a 2-by-4 wood stud frame. The transmit antenna and the receive array were at a standoff distance of 1.19 m from the wall. The system refresh rate is 100 Hz.
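The quoted 5 cm range resolution follows directly from the 3 GHz bandwidth via the standard pulsed-radar relation $\Delta r = c/2B$ (arithmetic only; variable names are ours):

```python
C = 3.0e8                        # propagation speed (m/s)
B = 4.5e9 - 1.5e9                # operational bandwidth (Hz)
delta_r = C / (2.0 * B)          # range resolution, c / (2B)
assert abs(delta_r - 0.05) < 1e-15   # 5 cm, as quoted for this system
```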

In the experiment, a person walked away from the wall in an empty room (the back and the side walls were covered with RF absorbing material) along a straight line path. The path is located 0.5 m to the right of the center of the scene, as shown in Fig. 10. The data collection started with the target at position 1 and ended after the target reached position 3, with the target pausing at each position along the trajectory for a second. Consider the data frames corresponding to the target at positions 2 and 3. Each frame consists of 20 pulses, which are coherently integrated to improve the signal-to-noise ratio. The imaging region (target space) is chosen to be $3\times 3\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\mathrm{m}}^{2}$, centered at (0.5 m, 4 m), and divided into $61\times 61$ grid points in cross-range and downrange, resulting in 3721 unknowns. The space-time response of the target space consists of $8\times 1536$ space-time measurements. For sparsity-based CD, only 5% of the 1536 time samples are randomly selected at each of the eight receive antenna locations, resulting in $8\times 77$ space-time measured data. Figure 11 depicts the corresponding result. We observe that, as the human changed its range gate position during the time elapsed between the two acquisitions, it presents itself as two targets in the image, and is correctly localized at both of its positions.
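The two-target signature of coherent CD observed above can be reproduced in a few lines (a minimal sketch with illustrative numbers; the 0.7 ns Gaussian pulse matches the system description, everything else is ours):

```python
import numpy as np

def frame(t, tau, sigma, wc, width=0.7e-9):
    """Baseband return of a point target: delayed Gaussian pulse with carrier
    phase, mimicking a_mn(t) in Eq. (42)."""
    return sigma * np.exp(-((t - tau) / width) ** 2) * np.exp(-1j * wc * tau)

t = np.arange(0, 60e-9, 0.1e-9)          # fast-time axis (s)
wc = 2 * np.pi * 3e9                     # carrier (rad/s)
tau1, tau2 = 20e-9, 28e-9                # target range gates in frames 1 and L+1

# Coherent CD, Eq. (44): stationary clutter is identical in both frames and
# cancels, whereas the mover survives at both of its range gates.
clutter = frame(t, 10e-9, 5.0, wc)
y1 = clutter + frame(t, tau1, 1.0, wc)
y2 = clutter + frame(t, tau2, 1.0, wc)
delta = y2 - y1

i0 = np.argmin(np.abs(t - 10e-9))        # clutter gate: fully suppressed
i1, i2 = np.argmin(np.abs(t - tau1)), np.argmin(np.abs(t - tau2))
assert np.abs(delta[i0]) < 1e-12
assert np.abs(delta[i1]) > 0.9 and np.abs(delta[i2]) > 0.9
```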

## 7.

## CS General Formulation for Stationary and Moving Targets

As seen in the previous sections, the presence of the front wall renders the target detection problem challenging and has an adverse effect on the scene reconstruction performance when employing CS. Different strategies have been devised for suppressing the wall clutter to enable target detection behind walls. Change detection enables detection and localization of moving targets. Clutter cancellation filtering provides another option.^{87}^{,}^{110} However, along with the wall clutter, both of these methods also suppress the returns from the stationary targets of interest in the scene and, as such, allow subsequent application of CS to recover only the moving targets. Wall clutter mitigation methods can be applied to remove the wall and enable joint detection of stationary and moving targets. However, these methods assume monostatic operation with the array located parallel to the front wall and exploit the strength and invariance of the wall return across the array under such a deployment. As such, they may not perform as well in other situations.

For multistatic imaging radar systems using ultra-wideband (UWB) pulses, an alternate option is to employ time gating, in lieu of the aforementioned clutter cancellation methods. The compact temporal support of the signal renders time gating a viable option for suppressing the wall returns. This enhances the SCR and maintains the sparsity of the scene, thereby permitting the application of CS techniques for simultaneous localization of stationary and moving targets with few observations.^{74}

## 7.1.

### Signal Model

Consider the scene layout depicted in Fig. 12. Note that although the $M$-element transmit and $N$-element receive arrays are assumed to be parallel to the front wall for notational simplicity, this is not a requirement. Let ${T}_{r}$ be the pulse repetition interval. Consider a coherent processing interval of $I$ pulses per transmitter and a single point target moving slowly away from the origin with constant horizontal and vertical velocity components $({v}_{xp},{v}_{zp})$, as depicted in Fig. 12. Let the target position be ${\mathbf{x}}_{p}=({x}_{p},{z}_{p})$ at time $t=0$. Assume that the timing interval for sequencing through the transmitters is short enough so that the target appears stationary during each data collection interval of length $I{T}_{r}$. This implies that the target position corresponding to the $i$’th pulse is given by

## (52)

$${\mathbf{x}}_{p}(i)=({x}_{p}+{v}_{xp}\,iI{T}_{r},\,{z}_{p}+{v}_{zp}\,iI{T}_{r}).$$

The baseband target return measured by the $n$’th receiver corresponding to the $i$’th pulse emitted by the $m$’th transmitter is given by^{74}

## (53)

$${y}_{mni}^{p}(t)={\sigma}_{p}{s}_{T}[t-iI{T}_{r}-m{T}_{r}-{\tau}_{p,mn}(i)]\mathrm{exp}[-j{\omega}_{c}{\tau}_{p,mn}(i)],$$

where ${\tau}_{p,mn}(i)$ is the round-trip propagation delay between the $m$’th transmitter, the target position at the $i$’th pulse, and the $n$’th receiver. On the other hand, as the wall is a specular reflector, the baseband wall return received at the $n$’th receiver corresponding to the $i$’th pulse emitted by the $m$’th transmitter can be expressed as

## (54)

$${y}_{mni}^{\text{wall}}(t)={\sigma}_{w}{s}_{T}[t-iI{T}_{r}-m{T}_{r}-{\tau}_{w,mn}]\mathrm{exp}(-j{\omega}_{c}{\tau}_{w,mn})+{B}_{mni}^{\text{wall}}(t),$$

where ${\sigma}_{w}$ is the complex reflectivity of the wall, ${B}_{mni}^{\text{wall}}(t)$ represents the wall reverberations,^{111} and the wall delay ${\tau}_{w,mn}$ is given by

## (55)

$${\tau}_{w,mn}=\frac{\sqrt{{({x}_{\mathrm{tm}}-{x}_{w,mn})}^{2}+{z}_{\text{off}}^{2}}+\sqrt{{({x}_{rn}-{x}_{w,mn})}^{2}+{z}_{\text{off}}^{2}}}{c},$$

where ${x}_{w,mn}$ denotes the cross-range coordinate of the specular reflection point on the wall for the $(m,n)$’th transmit-receive pair and $c$ is the speed of light in free space. Combining Eqs. (53) and (54), the total baseband signal received by the $n$’th receiver, corresponding to the $i$’th pulse with the $m$’th transmitter active, is given by

## (56)

$${y}_{mni}(t)=\sum _{p=0}^{P-1}{y}_{mni}^{p}(t)+{y}_{mni}^{\text{wall}}(t).$$

By gating out the wall return in the time domain, we gain access to the sparse behind-the-wall scene of a few stationary and moving targets of interest. The time-gated received signal, therefore, contains only contributions from the $P$ targets behind the wall, as well as any residuals of the wall return not removed or fully mitigated by gating. In this section, we assume that the wall clutter is effectively suppressed by gating. Therefore, using Eq. (57), we obtain

## (58)

$${y}_{mni}(t)\approx \sum _{p=0}^{P-1}{y}_{mni}^{p}(t)=\sum _{p=0}^{P-1}{\sigma}_{p}{s}_{T}[t-iI{T}_{r}-m{T}_{r}-{\tau}_{p,mn}(i)]\mathrm{exp}[-j{\omega}_{c}{\tau}_{p,mn}(i)].$$

## 7.2.

### Linear Model Formulation and CS Reconstruction

With the observed scene divided into ${N}_{x}\times {N}_{z}$ pixels in cross-range and downrange, consider ${N}_{{v}_{x}}$ and ${N}_{{v}_{z}}$ discrete values of the expected horizontal and vertical velocities, respectively. Therefore an image with ${N}_{x}\times {N}_{z}$ pixels in cross-range and downrange is associated with each considered horizontal and vertical velocity pair, resulting in a four-dimensional (4-D) target space. Note that the considered velocities contain the (0, 0) velocity pair to include stationary targets.

Sampling the received signal ${y}_{mni}(t)$ at times $\{{t}_{k}{\}}_{k=0}^{K-1}$, we obtain a $K\times 1$ vector ${\mathbf{y}}_{mni}$. For the $l$’th velocity pair $({v}_{xl},{v}_{zl})$, we vectorize the corresponding cross-range versus downrange image into an ${N}_{x}{N}_{z}\times 1$ scene reflectivity vector $\mathbf{r}({v}_{xl},{v}_{zl})$. The vector $\mathbf{r}({v}_{xl},{v}_{zl})$ is a weighted indicator vector defining the scene reflectivity corresponding to the $l$’th considered velocity pair, i.e., if there is a target at the spatial grid point ($x$, $z$) with motion parameters $({v}_{xl},{v}_{zl})$, then the value of the corresponding element of $\mathbf{r}({v}_{xl},{v}_{zl})$ should be nonzero; otherwise, it is zero.
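The 4-D target space can be indexed as follows (a sketch; the grid values mirror the experiment in Sec. 7.3, and the variable names are ours):

```python
import numpy as np

# Candidate velocity pairs; the (0, 0) pair keeps stationary targets in the model.
vx = np.array([0.0])                     # purely radial motion assumed here
vz = np.array([-1.4, -0.7, 0.0])         # vertical velocities (m/s), as in Sec. 7.3
pairs = [(a, b) for a in vx for b in vz]
assert (0.0, 0.0) in pairs               # stationary-target hypothesis present

# One Nx x Nz reflectivity image per velocity pair, stacked into one unknown vector.
Nx, Nz = 41, 36
r = np.zeros(Nx * Nz * len(pairs), dtype=complex)
assert r.size == 41 * 36 * 3             # unknowns in the stacked vector
```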

Using the developed signal model in Eqs. (53) and (58), we obtain the linear system of equations

## (59)

$${\mathbf{y}}_{mni}={\mathrm{\Psi}}_{mni}({v}_{xl},{v}_{zl})\mathbf{r}({v}_{xl},{v}_{zl}),\phantom{\rule[-0.0ex]{2em}{0.0ex}}l=0,1,\dots ,{N}_{{v}_{x}}{N}_{{v}_{z}}-1,$$

where the elements of the dictionary matrix ${\mathrm{\Psi}}_{mni}({v}_{xl},{v}_{zl})$ are given by

## (60)

$${[{\mathrm{\Psi}}_{mni}({v}_{xl},{v}_{zl})]}_{k,q}={s}_{T}[{t}_{k}-iI{T}_{r}-m{T}_{r}-{\tau}_{q,mn}(i)]\mathrm{exp}[-j{\omega}_{c}{\tau}_{q,mn}(i)],\phantom{\rule[-0.0ex]{1em}{0.0ex}}q=0,1,\dots ,{N}_{x}{N}_{z}-1.$$

Stacking the received signal samples corresponding to $I$ pulses from all $MN$ transmitting and receiving element pairs, we obtain the $MNIK\times 1$ measurement vector $\mathbf{y}$ as

## (61)

$$\mathbf{y}=\mathrm{\Psi}({v}_{xl},{v}_{zl})\mathbf{r}({v}_{xl},{v}_{zl}),\phantom{\rule[-0.0ex]{2em}{0.0ex}}l=0,1,\dots ,({N}_{{v}_{x}}{N}_{{v}_{z}}-1),$$

where

## (62)

$$\mathrm{\Psi}({v}_{xl},{v}_{zl})={[{\mathrm{\Psi}}_{000}^{T}({v}_{xl},{v}_{zl}),\dots ,{\mathrm{\Psi}}_{(M-1)(N-1)(I-1)}^{T}({v}_{xl},{v}_{zl})]}^{T}.$$

Considering all ${N}_{{v}_{x}}{N}_{{v}_{z}}$ velocity pairs, we define the combined dictionary

## (63)

$$\mathrm{\Psi}=[\mathrm{\Psi}({v}_{x0},{v}_{z0}),\dots ,\mathrm{\Psi}({v}_{x({N}_{{v}_{x}}{N}_{{v}_{z}}-1)},{v}_{z({N}_{{v}_{x}}{N}_{{v}_{z}}-1)})],$$

and stack the corresponding reflectivity vectors into $\mathbf{r}={[{\mathbf{r}}^{T}({v}_{x0},{v}_{z0}),\dots ,{\mathbf{r}}^{T}({v}_{x({N}_{{v}_{x}}{N}_{{v}_{z}}-1)},{v}_{z({N}_{{v}_{x}}{N}_{{v}_{z}}-1)})]}^{T}$, so that

## (64)

$$\mathbf{y}=\mathrm{\Psi}\mathbf{r}.$$

The model described in Eq. (64) permits the scene reconstruction within the CS framework. We measure a $J<MNIK$ dimensional vector of elements randomly chosen from $\mathbf{y}$. The reduced set of measurements can be expressed as

## (65)

$$\stackrel{\u0306}{\mathbf{y}}=\mathrm{\Phi}\mathbf{y}=\mathrm{\Phi}\mathrm{\Psi}\mathbf{r},$$

where $\mathrm{\Phi}$ is a $J\times MNIK$ measurement matrix. For measurement reduction simultaneously along the spatial, slow time, and fast time dimensions, the specific structure of the matrix $\mathrm{\Phi}$ is given by

## (66)

$$\mathrm{\Phi}=\mathrm{kron}({\mathrm{\Phi}}_{1},{\mathbf{I}}_{{J}_{1}{J}_{2}{N}_{1}})\cdot \mathrm{kron}({\mathrm{\Phi}}_{2},{\mathbf{I}}_{{J}_{1}{J}_{2}M})\cdot \mathrm{kron}({\mathrm{\Phi}}_{3},{\mathbf{I}}_{{J}_{1}MN})\cdot \mathrm{diag}\{{\mathrm{\Phi}}_{4}^{(0)},{\mathrm{\Phi}}_{4}^{(1)},\dots ,{\mathrm{\Phi}}_{4}^{(MNI-1)}\},$$

where ${\mathrm{\Phi}}_{1}$ through ${\mathrm{\Phi}}_{4}^{(\cdot)}$ implement the reduction along the spatial (transmit and receive), slow time, and fast time dimensions, respectively, and $\mathrm{kron}(\cdot ,\cdot)$ denotes the Kronecker product. Given the reduced measurement vector $\stackrel{\u0306}{\mathbf{y}}$ in Eq. (65), we can recover the scene reflectivity vector by solving the following equation:

## (67)

$$\hat{\mathbf{r}}=\mathrm{arg}\,\underset{\mathbf{r}}{\mathrm{min}}{\parallel \mathbf{r}\parallel}_{{l}_{1}}\phantom{\rule[-0.0ex]{1em}{0.0ex}}\text{subject to}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathrm{\Phi}\mathrm{\Psi}\mathbf{r}\approx \stackrel{\u0306}{\mathbf{y}}.$$

## 7.3.

### Illustrative Results

A real data collection experiment was conducted in the Radar Imaging Laboratory, Villanova University. The system and signal parameters are the same as described in Sec. 6.3. The origin of the coordinate system was chosen to be at the center of the receive array. The scene behind the wall consisted of one stationary target and one moving target, as shown in Fig. 14. A metal sphere of 0.3 m diameter, placed on a 1-m-high Styrofoam pedestal, was used as the stationary target. The pedestal was located 1.25 m behind the wall, centered at (0.49 m, 2.45 m). A person walked toward the front wall at a speed of $0.7\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{m}/\mathrm{s}$ approximately along a straight line path, which is located 0.2 m to the right of the transmitter. The back and the right side wall in the region behind the front wall were covered with RF absorbing material, whereas the 8-in.-thick concrete side-wall on the left and the floor were uncovered. A coherent processing interval of 15 pulses was selected.

The image region is chosen to be $4\times 6\,{\mathrm{m}}^{2}$, centered at ($-0.31\,\mathrm{m}$, $3\,\mathrm{m}$), and divided into $41\times 36$ pixels in cross-range and downrange. As the human moves directly toward the radar, we only consider vertical velocities varying from $-1.4$ to $0\,\mathrm{m}/\mathrm{s}$, with a step size of $0.7\,\mathrm{m}/\mathrm{s}$, resulting in three velocity bins. The space-slow time-fast time response of the scene consists of $8\times 15\times 2872$ measurements. First, we reconstruct the scene without time gating the wall response. Only 33.3% of the 15 pulses and 13.9% of the fast-time samples are randomly selected for each of the eight receive elements, resulting in $8\times 5\times 400$ space-slow time-fast time measured data. This is equivalent to 4.6% of the total data volume. Figure 15 depicts the CS-based result, corresponding to the three velocity bins, obtained with the number of OMP iterations set to 50. We observe from Figs. 15(a) and 15(b) that neither the stationary sphere nor the moving person can be localized. The reason behind this failure is twofold: (1) the front wall is a strong extended target, and as such, most of the degrees of freedom of the reconstruction process are used up by the wall; and (2) the low SCR, due to the much weaker returns from the moving and stationary targets compared with the front wall reflections, prevents the targets from being reconstructed with the residual degrees of freedom of the OMP. These results confirm that the performance of the sparse reconstruction scheme is hindered by the presence of the front wall.

After removal of the front wall return from the received signals through time gating, the space-slow time-fast time data comprise $8\times 15\times 2048$ measurements. For CS, we used all eight receivers, randomly selected five pulses (33.3% of 15), and chose 400 Gaussian random measurements (19.5% of 2048) in fast time, which amounts to using 6.5% of the total data volume. The number of OMP iterations was set to 4. Figures 16(a)–16(c) show the respective images corresponding to the 0, $-0.7$, and $-1.4\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{m}/\mathrm{s}$ velocities. It is apparent that, with the wall gated out, both the stationary and moving targets are correctly localized even with the reduced set of measurements.
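The gate-then-recover chain can be sketched in one dimension as follows (an illustrative simulation with hypothetical delays and amplitudes, and an idealized one-sample pulse in place of the actual UWB waveform): early fast-time samples containing the wall response are discarded, the remaining samples are compressed by random Gaussian projections, and OMP recovers the weak returns.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256                                   # fast-time samples

def echo(delay, amp):
    """Idealized one-sample echo at a given fast-time delay."""
    s = np.zeros(n)
    s[delay] = amp
    return s

# Received fast-time trace: strong early wall return plus two weak targets
r = echo(5, 100.0) + echo(60, 1.0) + echo(120, 1.0)

# Time gating: discard the early samples that contain the wall response
gate = 30
r_g = r[gate:]

# Dictionary of gated echoes for every admissible delay
delays = np.arange(gate, n)
D = np.stack([echo(d, 1.0)[gate:] for d in delays], axis=1)

# Compressive measurement: random Gaussian projections of the gated trace
m = 100                                   # far fewer than the 226 kept samples
Phi = rng.standard_normal((m, n - gate)) / np.sqrt(m)
y = Phi @ r_g
A = Phi @ D

# OMP with two iterations for the two expected returns
residual, support = y.copy(), []
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

recovered = sorted(int(delays[s]) for s in support)
print(recovered)                          # delays of the recovered returns
```

With the dominant wall return gated out, the scene seen by the solver is genuinely sparse, so the two weak returns are recovered from a small number of random projections, in line with the experimental result.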

## 8.

## Conclusion

In this paper, we presented a review of important approaches to sparse behind-the-wall scene reconstruction using CS. These approaches address the unique challenges associated with fast and efficient imaging in urban operations. First, considering stepped-frequency SAR operation, we presented a linear matrix formulation that enables sparsity-based reconstruction of a scene of stationary targets using a significantly reduced data volume. Access to the background scene without the targets of interest was assumed, rendering the scene sparse upon coherent background subtraction. Subsequent sparse reconstruction using a much-reduced data volume was shown to successfully detect and accurately localize the targets.

Second, assuming no prior access to a background scene, we examined the performance of joint wall-backscattering mitigation and sparse scene reconstruction in TWRI applications. We focused on the subspace projection approach, which is a leading method for combating wall clutter. Using real data collected with a stepped-frequency radar, we demonstrated that the subspace projection method maintains proper performance when acting on reduced data measurements.

Third, a sparsity-based approach for imaging the interior structure of a building was presented. The technique used prior information about construction practices for interior walls both to devise an appropriate linear model and to design a sparsifying dictionary based on the expected wall alignment relative to the radar’s scan direction. The scheme was shown to provide reliable determination of building layouts while achieving a substantial reduction in data volume.

Fourth, we described a group sparse reconstruction method that exploits the rich indoor multipath environment for improved target detection under efficient data collection. A ray-tracing approach was used to derive a multipath model that accounts not only for reflections due to target interactions with interior walls, but also for the multipath propagation resulting from ringing within the front wall. Using stepped-frequency radar data, it was shown that this technique successfully reconstructs the ground truth without multipath ghosts and also increases the SCR at the true target locations.

Fifth, we detected and localized moving humans behind walls and inside enclosed structures using an approach that combines sparsity-driven radar imaging and change detection. Removal of the stationary background via CD results in a sparse scene of moving targets, whereby CS schemes can exploit the full benefits of sparsity-driven imaging. An appropriate CD linear model was developed that allows scene reconstruction within the CS framework. Using pulsed radar operation, it was demonstrated that CS provides a sizable reduction in data volume without degradation in system performance.

Finally, we presented a CS-based technique for joint localization of stationary and moving targets in TWRI applications. The front wall returns were suppressed through time gating, made possible by the short temporal support of the UWB transmit waveform. The SCR enhancement resulting from time gating permitted the application of CS techniques for scene reconstruction with few observations. We established a signal model that enables a linear formulation, with associated sensing matrices, for reconstruction of the downrange-cross-range-velocity space. Results based on real-data experiments demonstrated that joint localization of stationary and moving targets can be achieved via sparse regularization using a reduced set of measurements without degradation in system performance.

## References

M. G. Amin, Ed., *Through-the-Wall Radar Imaging*, CRC Press, Boca Raton, FL (2010).

M. G. Amin, Ed., “Special issue on Advances in Indoor Radar Imaging,” J. Franklin Inst. 345(6), 556–722 (2008). http://dx.doi.org/10.1016/j.jfranklin.2008.05.001

M. G. Amin and K. Sarabandi, Eds., “Special issue on remote sensing of building interior,” IEEE Trans. Geosci. Rem. Sens. 47(5), 1270–1420 (2009). http://dx.doi.org/10.1109/TGRS.2009.2017053

E. Baranoski, “Through-wall imaging: historical perspective and future directions,” J. Franklin Inst. 345(6), 556–569 (2008). http://dx.doi.org/10.1016/j.jfranklin.2008.01.005

S. E. Borek, “An overview of through-the-wall surveillance for homeland security,” in Proc. 34th Applied Imagery and Pattern Recognition Workshop, pp. 19–21, IEEE (2005).

H. Burchett, “Advances in through wall radar for search, rescue and security applications,” in Proc. Inst. of Eng. and Tech. Conf. Crime and Security, pp. 511–525, IET, London, UK (2006).

A. Martone, K. Ranney, and R. Innocenti, “Through-the-wall detection of slow-moving personnel,” Proc. SPIE 7308, 73080Q1 (2009). http://dx.doi.org/10.1117/12.818513

X. P. Masbernat et al., “An MIMO-MTI approach for through-the-wall radar imaging applications,” in Proc. 5th Int. Waveform Diversity and Design Conf., IEEE (2010).

M. G. Amin and F. Ahmad, “Change detection analysis of humans moving behind walls,” IEEE Trans. Aerosp. Electron. Syst. 49(3) (2013).

M. Amin, F. Ahmad, and W. Zhang, “A compressive sensing approach to moving target indication for urban sensing,” in Proc. IEEE Radar Conf., pp. 509–512, IEEE, Kansas City, MO (2011).

J. Moulton et al., “Target and change detection in synthetic aperture radar sensing of urban structures,” in Proc. IEEE Radar Conf., IEEE, Rome, Italy (2008).

A. Martone, K. Ranney, and R. Innocenti, “Automatic through-the-wall detection of moving targets using low-frequency ultra-wideband radar,” in Proc. IEEE Radar Conf., pp. 39–43, IEEE, Washington, DC (2010).

S. S. Ram and H. Ling, “Through-wall tracking of human movers using joint Doppler and array processing,” IEEE Geosci. Rem. Sens. Lett. 5(3), 537–541 (2008). http://dx.doi.org/10.1109/LGRS.2008.924002

C. P. Lai and R. M. Narayanan, “Through-wall imaging and characterization of human activity using ultrawideband (UWB) random noise radar,” Proc. SPIE 5778, 186–195 (2005). http://dx.doi.org/10.1117/12.604154

C. P. Lai and R. M. Narayanan, “Ultrawideband random noise radar design for through-wall surveillance,” IEEE Trans. Aerosp. Electron. Syst. 46(4), 1716–1730 (2010). http://dx.doi.org/10.1109/TAES.2010.5595590

S. S. Ram et al., “Doppler-based detection and tracking of humans in indoor environments,” J. Franklin Inst. 345(6), 679–699 (2008). http://dx.doi.org/10.1016/j.jfranklin.2008.04.001

E. F. Greneker, “RADAR flashlight for through-the-wall detection of humans,” Proc. SPIE 3375, 280–285 (1998). http://dx.doi.org/10.1117/12.327172

T. Thayaparan, L. Stankovic, and I. Djurovic, “Micro-Doppler human signature detection and its application to gait recognition and indoor imaging,” J. Franklin Inst. 345(6), 700–722 (2008). http://dx.doi.org/10.1016/j.jfranklin.2008.01.003

I. Orovic, S. Stankovic, and M. Amin, “A new approach for classification of human gait based on time-frequency feature representations,” Signal Process. 91(6), 1448–1456 (2011). http://dx.doi.org/10.1016/j.sigpro.2010.08.013

A. R. Hunt, “Use of a frequency-hopping radar for imaging and motion detection through walls,” IEEE Trans. Geosci. Rem. Sens. 47(5), 1402–1408 (2009). http://dx.doi.org/10.1109/TGRS.2009.2016084

F. Ahmad, M. G. Amin, and P. D. Zemany, “Dual-frequency radars for target localization in urban sensing,” IEEE Trans. Aerosp. Electron. Syst. 45(4), 1598–1609 (2009). http://dx.doi.org/10.1109/TAES.2009.5310321

N. Maaref et al., “A study of UWB FM-CW radar for the detection of human beings in motion inside a building,” IEEE Trans. Geosci. Rem. Sens. 47(5), 1297–1300 (2009). http://dx.doi.org/10.1109/TGRS.2008.2010709

F. Soldovieri, R. Solimene, and R. Pierri, “A simple strategy to detect changes in through the wall imaging,” Prog. Electromagn. Res. M 7, 1–13 (2009).

T. S. Ralston, G. L. Charvat, and J. E. Peabody, “Real-time through-wall imaging using an ultrawideband multiple-input multiple-output (MIMO) phased array radar system,” in Proc. IEEE Int. Symp. Phased Array Systems and Technology, pp. 551–558, IEEE, Boston, MA (2010).

F. Ahmad et al., “Design and implementation of near-field, wideband synthetic aperture beamformers,” IEEE Trans. Aerosp. Electron. Syst. 40(1), 206–220 (2004). http://dx.doi.org/10.1109/TAES.2004.1292154

F. Ahmad, M. G. Amin, and S. A. Kassam, “Synthetic aperture beamformer for imaging through a dielectric wall,” IEEE Trans. Aerosp. Electron. Syst. 41(1), 271–283 (2005). http://dx.doi.org/10.1109/TAES.2005.1413761

M. G. Amin and F. Ahmad, “Wideband synthetic aperture beamforming for through-the-wall imaging,” IEEE Signal Process. Mag. 25(4), 110–113 (2008). http://dx.doi.org/10.1109/MSP.2008.923510

F. Ahmad and M. Amin, “Multi-location wideband synthetic aperture imaging for urban sensing applications,” J. Franklin Inst. 345(6), 618–639 (2008). http://dx.doi.org/10.1016/j.jfranklin.2008.03.003

F. Soldovieri and R. Solimene, “Through-wall imaging via a linear inverse scattering algorithm,” IEEE Geosci. Rem. Sens. Lett. 4(4), 513–517 (2007). http://dx.doi.org/10.1109/LGRS.2007.900735

F. Soldovieri, G. Prisco, and R. Solimene, “A multi-array tomographic approach for through-wall imaging,” IEEE Trans. Geosci. Rem. Sens. 46(4), 1192–1199 (2008). http://dx.doi.org/10.1109/TGRS.2008.915754

E. M. Lavely et al., “Theoretical and experimental study of through-wall microwave tomography inverse problems,” J. Franklin Inst. 345(6), 592–617 (2008). http://dx.doi.org/10.1016/j.jfranklin.2008.01.006

M. M. Nikolic et al., “An approach to estimating building layouts using radar and jump-diffusion algorithm,” IEEE Trans. Antennas Propag. 57(3), 768–776 (2009). http://dx.doi.org/10.1109/TAP.2009.2013420

C. Le et al., “Ultrawideband (UWB) radar imaging of building interior: measurements and predictions,” IEEE Trans. Geosci. Rem. Sens. 47(5), 1409–1420 (2009). http://dx.doi.org/10.1109/TGRS.2009.2016653

E. Ertin and R. L. Moses, “Through-the-wall SAR attributed scattering center feature estimation,” IEEE Trans. Geosci. Rem. Sens. 47(5), 1338–1348 (2009). http://dx.doi.org/10.1109/TGRS.2008.2008999

M. Aftanas and M. Drutarovsky, “Imaging of the building contours with through the wall UWB radar system,” Radioeng. J. 18(3), 258–264 (2009).

F. Ahmad, Y. Zhang, and M. G. Amin, “Three-dimensional wideband beamforming for imaging through a single wall,” IEEE Geosci. Rem. Sens. Lett. 5(2), 176–179 (2008). http://dx.doi.org/10.1109/LGRS.2008.915742

L. P. Song, C. Yu, and Q. H. Liu, “Through-wall imaging (TWI) by radar: 2-D tomographic results and analyses,” IEEE Trans. Geosci. Rem. Sens. 43(12), 2793–2798 (2005). http://dx.doi.org/10.1109/TGRS.2005.857914

M. Dehmollaian, M. Thiel, and K. Sarabandi, “Through-the-wall imaging using differential SAR,” IEEE Trans. Geosci. Rem. Sens. 47(5), 1289–1296 (2009). http://dx.doi.org/10.1109/TGRS.2008.2010052

M. Dehmollaian and K. Sarabandi, “Refocusing through building walls using synthetic aperture radar,” IEEE Trans. Geosci. Rem. Sens. 46(6), 1589–1599 (2008). http://dx.doi.org/10.1109/TGRS.2008.916212

F. Ahmad and M. G. Amin, “Noncoherent approach to through-the-wall radar localization,” IEEE Trans. Aerosp. Electron. Syst. 42(4), 1405–1419 (2006). http://dx.doi.org/10.1109/TAES.2006.314581

F. Ahmad and M. G. Amin, “A noncoherent radar system approach for through-the-wall imaging,” Proc. SPIE 5778, 196–207 (2005). http://dx.doi.org/10.1117/12.609867

Y. Yang and A. Fathy, “Development and implementation of a real-time see-through-wall radar system based on FPGA,” IEEE Trans. Geosci. Rem. Sens. 47(5), 1270–1280 (2009). http://dx.doi.org/10.1109/TGRS.2008.2010251

F. Ahmad and M. G. Amin, “High-resolution imaging using Capon beamformers for urban sensing applications,” in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Process., pp. II-985–II-988, IEEE, Honolulu, HI (2007).

M. Soumekh, *Synthetic Aperture Radar Signal Processing with Matlab Algorithms*, John Wiley and Sons, New York, NY (1999).

Y.-S. Yoon and M. G. Amin, “Spatial filtering for wall-clutter mitigation in through-the-wall radar imaging,” IEEE Trans. Geosci. Rem. Sens. 47(9), 3192–3208 (2009). http://dx.doi.org/10.1109/TGRS.2009.2019728

R. Burkholder, “Electromagnetic models for exploiting multi-path propagation in through-wall radar imaging,” in Proc. Int. Conf. Electromagnetics in Advanced Applications, pp. 572–575, IEEE (2009).

T. Dogaru and C. Le, “SAR images of rooms and buildings based on FDTD computer models,” IEEE Trans. Geosci. Rem. Sens. 47(5), 1388–1401 (2009). http://dx.doi.org/10.1109/TGRS.2009.2013841

S. Kidera, T. Sakamoto, and T. Sato, “Extended imaging algorithm based on aperture synthesis with double-scattered waves for UWB radars,” IEEE Trans. Geosci. Rem. Sens. 49(12), 5128–5139 (2011). http://dx.doi.org/10.1109/TGRS.2011.2158108

P. Setlur, M. Amin, and F. Ahmad, “Multipath model and exploitation in through-the-wall and urban radar sensing,” IEEE Trans. Geosci. Rem. Sens. 49(10), 4021–4034 (2011). http://dx.doi.org/10.1109/TGRS.2011.2128331

F. Ahmad, M. G. Amin, and S. A. Kassam, “A beamforming approach to stepped-frequency synthetic aperture through-the-wall radar imaging,” in Proc. IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing, pp. 24–27, IEEE, Puerto Vallarta, Mexico (2005).

F. Ahmad and M. G. Amin, “Performance of autofocusing schemes for single target and populated scenes behind unknown walls,” Proc. SPIE 6547, 654709 (2007). http://dx.doi.org/10.1117/12.720085

F. Ahmad, M. G. Amin, and G. Mandapati, “Autofocusing of through-the-wall radar imagery under unknown wall characteristics,” IEEE Trans. Image Process. 16(7), 1785–1795 (2007). http://dx.doi.org/10.1109/TIP.2007.899030

G. Wang and M. G. Amin, “Imaging through unknown walls using different standoff distances,” IEEE Trans. Signal Process. 54(10), 4015–4025 (2006). http://dx.doi.org/10.1109/TSP.2006.879325

G. Wang, M. G. Amin, and Y. Zhang, “A new approach for target locations in the presence of wall ambiguity,” IEEE Trans. Aerosp. Electron. Syst. 42(1), 301–315 (2006). http://dx.doi.org/10.1109/TAES.2006.1603424

Y. Yoon and M. G. Amin, “High-resolution through-the-wall radar imaging using beamspace MUSIC,” IEEE Trans. Antennas Propag. 56(6), 1763–1774 (2008). http://dx.doi.org/10.1109/TAP.2008.923336

Y. Yoon, M. G. Amin, and F. Ahmad, “MVDR beamforming for through-the-wall radar imaging,” IEEE Trans. Aerosp. Electron. Syst. 47(1), 347–366 (2011). http://dx.doi.org/10.1109/TAES.2011.5705680

W. Zhang et al., “Full polarimetric beamforming algorithm for through-the-wall radar imaging,” Radio Sci. 46(5), RS0E16 (2011). http://dx.doi.org/10.1029/2010RS004631

C. Thajudeen, W. Zhang, and A. Hoorfar, “Time-domain wall parameter estimation and mitigation for through-the-wall radar image enhancement,” in Proc. Progress in Electromagnetics Research Symp., EMW Publishing, Cambridge, MA (2010).

F. Tivive, M. Amin, and A. Bouzerdoum, “Wall clutter mitigation based on eigen-analysis in through-the-wall radar imaging,” in Proc. IEEE Workshop on DSP, IEEE (2011).

F. H. C. Tivive, A. Bouzerdoum, and M. G. Amin, “An SVD-based approach for mitigating wall reflections in through-the-wall radar imaging,” in Proc. IEEE Radar Conf., pp. 519–524, IEEE, Kansas City, MO (2011).

R. Chandra et al., “An approach to remove the clutter and detect the target for ultra-wideband through wall imaging,” J. Geophys. Eng. 5(4), 412–419 (2008). http://dx.doi.org/10.1088/1742-2132/5/4/005

Y.-S. Yoon and M. G. Amin, “Compressed sensing technique for high-resolution radar imaging,” Proc. SPIE 6968, 69681A (2008). http://dx.doi.org/10.1117/12.777175

Q. Huang et al., “UWB through-wall imaging based on compressive sensing,” IEEE Trans. Geosci. Rem. Sens. 48(3), 1408–1415 (2010). http://dx.doi.org/10.1109/TGRS.2009.2030321

Y.-S. Yoon and M. G. Amin, “Through-the-wall radar imaging using compressive sensing along temporal frequency domain,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, IEEE, Dallas, TX (2010).

M. G. Amin, F. Ahmad, and W. Zhang, “Target RCS exploitations in compressive sensing for through wall imaging,” in Proc. 5th Int. Waveform Diversity and Design Conf., IEEE, Niagara Falls, Canada (2010).

M. Leigsnering, C. Debes, and A. M. Zoubir, “Compressive sensing in through-the-wall radar imaging,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Process., pp. 4008–4011, IEEE, Prague, Czech Republic (2011).

J. Yang et al., “Multiple-measurement vector model and its application to through-the-wall radar imaging,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Process., IEEE, Prague, Czech Republic (2011).

F. Ahmad and M. G. Amin, “Partially sparse reconstruction of behind-the-wall scenes,” Proc. SPIE 8365, 83650W (2012). http://dx.doi.org/10.1117/12.919527

R. Solimene, F. Ahmad, and F. Soldovieri, “A novel CS-TSVD strategy to perform data reduction in linear inverse scattering problems,” IEEE Geosci. Rem. Sens. Lett. 9(5), 881–885 (2012). http://dx.doi.org/10.1109/LGRS.2012.2185679

F. Ahmad and M. G. Amin, “Through-the-wall human motion indication using sparsity-driven change detection,” IEEE Trans. Geosci. Rem. Sens. 51(2), 881–890 (2013). http://dx.doi.org/10.1109/TGRS.2012.2203310

E. L. Targarona et al., “Compressive sensing for through wall radar imaging of stationary scenes using arbitrary data measurements,” in Proc. 11th Int. Conf. on Information Science, Sig. Proc. and Their App., IEEE, Montreal, Canada (2012).

E. L. Targarona et al., “Wall mitigation techniques for indoor sensing within the CS framework,” in Proc. Seventh IEEE Workshop on Sensor Array and Multi-Channel Signal Proc., IEEE, Hoboken, NJ (2012).

E. Lagunas et al., “Joint wall mitigation and compressive sensing for indoor image reconstruction,” IEEE Trans. Geosci. Rem. Sens. 51(2), 891–906 (2013). http://dx.doi.org/10.1109/TGRS.2012.2203824

J. Qian, F. Ahmad, and M. G. Amin, “Joint localization of stationary and moving targets behind walls using sparse scene recovery,” J. Electron. Imaging 22(2), 021002 (2013). http://dx.doi.org/10.1117/1.JEI.22.2.021002

W. Zhang et al., “Ultra-wideband impulse radar through-the-wall imaging with compressive sensing,” Int. J. Antennas Propag. 2012, 11 (2012). http://dx.doi.org/10.1155/2012/251497

E. Lagunas et al., “Determining building interior structures using compressive sensing,” J. Electron. Imaging 22(2), 021003 (2013). http://dx.doi.org/10.1117/1.JEI.22.2.021003

E. Candes, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59(8), 1207–1223 (2006). http://dx.doi.org/10.1002/(ISSN)1097-0312

D. Donoho, M. Elad, and V. Temlyakov, “Stable recovery of sparse over-complete representations in the presence of noise,” IEEE Trans. Inf. Theory 52(1), 6–18 (2006). http://dx.doi.org/10.1109/TIT.2005.860430

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). http://dx.doi.org/10.1109/TIT.2006.871582

R. Baraniuk and P. Steeghs, “Compressive radar imaging,” in Proc. IEEE Radar Conf., pp. 128–133, IEEE, Waltham, MA (2007).

E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). http://dx.doi.org/10.1109/MSP.2007.914731

M. Herman and T. Strohmer, “High-resolution radar via compressive sensing,” IEEE Trans. Signal Process. 57(6), 2275–2284 (2009). http://dx.doi.org/10.1109/TSP.2009.2014277

A. Gurbuz, J. McClellan, and W. Scott, “Compressive sensing for subsurface imaging using ground penetrating radar,” Signal Process. 89(10), 1959–1972 (2009). http://dx.doi.org/10.1016/j.sigpro.2009.03.030

A. Gurbuz, J. McClellan, and W. Scott, “A compressive sensing data acquisition and imaging method for stepped frequency GPRs,” IEEE Trans. Signal Process. 57(7), 2640–2650 (2009). http://dx.doi.org/10.1109/TSP.2009.2016270

M. C. Shastry, R. M. Narayanan, and M. Rangaswamy, “Compressive radar imaging using white stochastic waveforms,” in Proc. Int. Waveform Diversity and Design Conf., pp. 90–94, Niagara Falls, Canada (2010).

L. C. Potter et al., “Sparsity and compressed sensing in radar imaging,” Proc. IEEE 98(6), 1006–1020 (2010). http://dx.doi.org/10.1109/JPROC.2009.2037526

Y. Yu and A. P. Petropulu, “A study on power allocation for widely separated CS-based MIMO radar,” Proc. SPIE 8365, 83650S (2012). http://dx.doi.org/10.1117/12.919734

F. Ahmad, Ed., “Compressive sensing,” Proc. SPIE 8365, 836501 (2012). http://dx.doi.org/10.1117/12.981277

K. Krueger, J. H. McClellan, and W. R. Scott Jr., “3-D imaging for ground penetrating radar using compressive sensing with block-Toeplitz structures,” in Proc. IEEE 7th Sensor Array and Multichannel Signal Process. Workshop, IEEE, Hoboken, NJ (2012).

D. L. Donoho, “For most large underdetermined systems of linear equations, the minimal ${l}_{1}$-norm solution is also the sparsest solution,” Commun. Pure Appl. Math. 59(6), 797–829 (2006). http://dx.doi.org/10.1002/(ISSN)1097-0312

S. Boyd and L. Vandenberghe, *Convex Optimization*, Cambridge University Press, Cambridge, UK (2004).

S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Sci. Comput. 20(1), 33–61 (1998). http://dx.doi.org/10.1137/S1064827596304010

S. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process. 41(12), 3397–3415 (1993). http://dx.doi.org/10.1109/78.258082

J. A. Tropp, “Greed is good: algorithmic results for sparse approximation,” IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004). http://dx.doi.org/10.1109/TIT.2004.834793

J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007). http://dx.doi.org/10.1109/TIT.2007.909108

D. Needell and J. A. Tropp, “CoSaMP: iterative signal recovery from incomplete and inaccurate samples,” Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009). http://dx.doi.org/10.1016/j.acha.2008.07.002

P. Boufounos, M. Duarte, and R. Baraniuk, “Sparse signal reconstruction from noisy compressive measurements using cross validation,” in Proc. IEEE 14th Statistical Signal Process. Workshop, pp. 299–303, IEEE, Madison, WI (2007).

R. Ward, “Compressed sensing with cross validation,” IEEE Trans. Inf. Theory 55(12), 5773–5782 (2009). http://dx.doi.org/10.1109/TIT.2009.2032712

T. Do et al., “Sparsity adaptive matching pursuit algorithm for practical compressed sensing,” in Proc. 42nd Asilomar Conf. on Signals, Systems and Computers, pp. 581–587, IEEE, Pacific Grove, CA (2008).

M. Leigsnering et al., “Multipath exploitation in through-the-wall radar imaging using sparse reconstruction,” IEEE Trans. Aerosp. Electron. Syst., under review.

A. Karousos, G. Koutitas, and C. Tzaras, “Transmission and reflection coefficients in time-domain for a dielectric slab for UWB signals,” in Proc. IEEE Vehicular Technology Conf., pp. 455–458, IEEE (2008).

S. Wright, R. Nowak, and M. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process. 57(7), 2479–2493 (2009). http://dx.doi.org/10.1109/TSP.2009.2016892

W. Deng, W. Yin, and Y. Zhang, “Group sparse optimization by alternating direction method,” Department of Computational and Applied Mathematics, Rice University, Technical Report TR11-06 (2011).

M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” J. R. Stat. Soc. Ser. B 68(1), 49–67 (2006). http://dx.doi.org/10.1111/rssb.2006.68.issue-1

R. G. Baraniuk et al., “Model-based compressive sensing,” IEEE Trans. Inf. Theory 56(4), 1982–2001 (2010). http://dx.doi.org/10.1109/TIT.2010.2040894

F. Bach et al., “Convex optimization with sparsity-inducing norms,” in *Optimization for Machine Learning*, S. Sra, S. Nowozin, and S. J. Wright, Eds., MIT Press, Cambridge, MA (2011).

Y. Eldar, P. Kuppinger, and H. Bolcskei, “Block-sparse signals: uncertainty relations and efficient recovery,” IEEE Trans. Signal Process. 58(6), 3042–3054 (2010). http://dx.doi.org/10.1109/TSP.2010.2044837

F. Ahmad and M. G. Amin, “Sparsity-based change detection of short human motion for urban sensing,” in Proc. Seventh IEEE Workshop on Sensor Array and Multi-Channel Signal Processing, IEEE, Hoboken, NJ (2012).

X. X. Zhu and R. Bamler, “Tomographic SAR inversion by L1-norm regularization—the compressive sensing approach,” IEEE Trans. Geosci. Rem. Sens. 48(10), 3839–3846 (2010). http://dx.doi.org/10.1109/TGRS.2010.2048117

A. S. Khawaja and J. Ma, “Applications of compressed sensing for SAR moving-target velocity estimation and image compression,” IEEE Trans. Instrum. Meas. 60(8), 2848–2860 (2011). http://dx.doi.org/10.1109/TIM.2011.2122190

F. Ahmad and M. G. Amin, “Wall clutter mitigation for MIMO radar configurations in urban sensing,” in Proc. 11th Int. Conf. on Information Science, Signal Proc., and Their App., IEEE, Montreal, Canada (2012).

## Biography

**Moeness G. Amin** received his PhD degree in electrical engineering from the University of Colorado, Boulder, Colorado, in 1984. He has been on the faculty of the Department of Electrical and Computer Engineering at Villanova University since 1985. In 2002, he became the director of the Center for Advanced Communications, College of Engineering. He is a fellow of the Institute of Electrical and Electronics Engineers (IEEE), a fellow of SPIE, and a fellow of the Institution of Engineering and Technology. He is a recipient of the IEEE Third Millennium Medal, the 2009 Individual Technical Achievement Award from the European Association for Signal Processing, the 2010 NATO Scientific Achievement Award, the Chief of Naval Research Challenge Award, the 1997 Villanova University Outstanding Faculty Research Award, and the 1997 IEEE Philadelphia Section Award. He has over 550 journal and conference publications in the areas of wireless communications, time-frequency analysis, sensor array processing, waveform design and diversity, interference cancellation in broadband communication platforms, satellite navigation, target localization and tracking, direction finding, channel diversity and equalization, ultrasound imaging, and radar signal processing. He has coauthored 20 book chapters and is the editor of the first book on through-the-wall radar imaging.

**Fauzia Ahmad** received her MS degree in electrical engineering in 1996 and her PhD degree in electrical engineering in 1997, both from the University of Pennsylvania, Philadelphia, Pennsylvania. From 1998 to 2000, she was an assistant professor in the College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Pakistan. From 2000 to 2001, she served as an assistant professor at Fizaia College of Information Technology, Pakistan. Since 2002, she has been with the Center for Advanced Communications, Villanova University, Villanova, Pennsylvania, where she is now a research associate professor and the director of the Radar Imaging Laboratory. She is a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and a senior member of SPIE. She chairs the SPIE Compressive Sensing Conference and serves on the technical program committees of the SPIE Radar Sensor Technology Conference, the IEEE Radar Conference, and the IET International Conference on Radar Systems. She served as the lead guest editor of the SPIE/IS&T *Journal of Electronic Imaging* April 2013 special section on compressive sensing for imaging. She has over 120 journal and conference publications in the areas of radar imaging, radar signal processing, waveform design and diversity, compressive sensing, array signal processing, sensor networks, ultrasound imaging, and over-the-horizon radar. She has also coauthored three book chapters in the aforementioned areas.