12 April 2017 Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
Abstract
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Through alternating updates, the method rapidly attains a low mean-square error without requiring a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
Qin, Wu, Wang, and Dong: Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers

1.

Introduction

Space-time adaptive processing (STAP) can effectively suppress strong ground/sea clutter and improve the moving target indication performance for airborne/spaceborne radar systems.1 In full-dimension STAP algorithms, however, a large number of independent and identically distributed (I.I.D.) training snapshots are required to yield an average signal-to-clutter-noise ratio (SCNR) loss of 3  dB.2 Moreover, full-dimension STAP algorithms have a high system complexity and require many memory elements.3 In practical applications, it is generally difficult to satisfy these requirements.

To date, many algorithms have been proposed to overcome the drawbacks of full-dimension STAP algorithms. Reduced-rank STAP algorithms can reduce the clutter space while maintaining the performance of full-dimension STAP algorithms.4,5 Consequently, the required number of snapshots can be reduced. However, these methods rely on eigenvalue decomposition, which is computationally expensive. To reduce the computational expense and the number of training snapshots simultaneously, several typical reduced-dimension STAP algorithms have been proposed, such as the joint domain localized approach and auxiliary channel processing.6–8 However, the nonadaptive selection of the reduced-dimension projection matrix, which relies on intuitive experience, degrades performance to a certain extent.2

The sparsity of the filter coefficients in STAP has recently been studied, and the theoretical framework for sparsity-based STAP algorithms using the ℓ1-regularized constraint, i.e., the least absolute shrinkage and selection operator (LASSO), has been established.9–12 The classical algorithms for solving the LASSO problem adopt convex optimization, e.g., the interior point algorithm, to obtain a sparse solution. The complexity of these algorithms can be very high when the size of the problem is large, which is impractical. To solve the optimization problem effectively, the ℓ1-regularized recursive least-squares STAP (RLS-STAP) algorithm,13 the ℓ1-regularized least-mean-square STAP algorithm,14 and the homotopy-STAP algorithm15 have been proposed. Sparsity-based STAP techniques have been shown to provide high resolution and better performance than conventional STAP algorithms.16

The alternating direction method of multipliers (ADMM) is a technique that combines the decomposability of dual ascent with the rapid convergence of the method of multipliers.17,18 This technique is well suited to solving ℓ1-constrained optimization problems, particularly large-scale ones.19 The ADMM technique can converge within a few tens of iterations, which is acceptable in practical use.20 In this study, according to the optimal criterion of minimizing the mean-square error, we propose an algorithm based on the ADMM technique to solve the ℓ1-regularized STAP problem. The proposed method provides better performance with a small number of I.I.D. training snapshots and without a large number of calculations.

The remainder of this paper is organized as follows. The system model of the generalized side-lobe canceler (GSC) form of the sparsity-based STAP is introduced in Sec. 2. In Sec. 3, the theory of the ADMM algorithm is introduced, and the ℓ1-regularized ADMM-STAP algorithm is proposed. The associated optimization problem is formulated and solved analytically. The performance improvement of the proposed algorithm is shown in Sec. 4. Section 5 verifies the approach with the Mountaintop data set, and Sec. 6 provides the conclusion.

Notation: In this paper, a variable, a column vector, and a matrix are represented by a lowercase letter, a lowercase bold letter, and a capital bold letter, respectively. The operations of transposition, complex conjugation, and conjugate transposition are denoted by (·)^T, (·)^*, and (·)^H, respectively. The symbol ⊗ denotes the Kronecker product, and ‖·‖_n denotes the ℓn-norm operator. E(x) denotes the expected value of x, |x| indicates the absolute value of x, and (x)_+ ≜ max(0, x). sign(·) is the component-wise sign function.13

2.

Background and Problem Formulation

2.1.

System Model

The STAP technique is known for its ability to suppress clutter energy interference while detecting moving targets. Consider an airborne radar system equipped with a uniform linear array (ULA) consisting of N receiving elements, as shown in Fig. 1. The radar transmits K identical pulses at a constant pulse repetition frequency (PRF) f_r ≜ 1/T_r during a coherent processing interval (CPI), where T_r is the pulse repetition interval. The received signal from the range bin of interest is represented as x = x_t + x_c + n, where x_t is the target vector, x_c is the clutter vector, and n is the thermal noise vector with noise power σ_n² on each channel and pulse. The space-time clutter vector can be represented as21

(1)

$$\mathbf{x}_c=\sum_{n=1}^{N_c}\sigma_{c,n}\mathbf{v}(f_{d,n},f_{s,n}),$$
where N_c denotes the number of clutter patches in the range bin of interest and σ_{c,n} denotes the random complex reflection coefficient. f_{d,n} ≜ 2v_pT_r sin ϕ_n/λ and f_{s,n} ≜ d sin ϕ_n/λ are the Doppler frequency and spatial frequency of the n'th clutter patch, respectively, where λ is the wavelength and d is the inter-sensor spacing of the ULA. v(f_{d,n}, f_{s,n}) ∈ C^{NK×1} is the space-time steering vector, defined as the Kronecker product of the temporal and spatial steering vectors, i.e., v(f_d, f_s) = v_d(f_d) ⊗ v_s(f_s), where

(2)

$$\mathbf{v}_d(f_d)=\left[1,\;e^{j2\pi f_d},\;\ldots,\;e^{j2\pi f_d(K-1)}\right]^T,\quad \mathbf{v}_s(f_s)=\left[1,\;e^{j2\pi f_s},\;\ldots,\;e^{j2\pi f_s(N-1)}\right]^T.$$
The target vector is x_t = σ_t v(f_{d,t}, f_{s,t}), where f_{d,t} ≜ 2v_pT_r sin ϕ_t/λ + 2v_tT_r/λ and f_{s,t} ≜ d sin ϕ_t/λ. v_t is the radial velocity of the moving target, and ϕ_t represents the angle of arrival (AOA) of the target. Note that in the following, v(f_{d,t}, f_{s,t}) is rewritten as v_t for convenience.
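As a concrete illustration of Eq. (2) and the Kronecker construction above, the following sketch builds a space-time steering vector with NumPy. The geometry values (wavelength, spacing, platform velocity, PRI) are illustrative placeholders, not claims about the experiments of Sec. 4.

```python
import numpy as np

def steering_vector(fd, fs, K, N):
    """Space-time steering vector v(fd, fs) = vd(fd) ⊗ vs(fs), per Eq. (2)."""
    vd = np.exp(1j * 2 * np.pi * fd * np.arange(K))   # temporal steering vector
    vs = np.exp(1j * 2 * np.pi * fs * np.arange(N))   # spatial steering vector
    return np.kron(vd, vs)                            # length-NK space-time vector

# Illustrative clutter patch at phi = 30 deg (all geometry values assumed)
lam = 0.242                           # wavelength [m]
d, vp, Tr = lam / 2, 140.0, 1 / 2314.0
phi = np.deg2rad(30.0)
fs = d * np.sin(phi) / lam            # normalized spatial frequency
fd = 2 * vp * Tr * np.sin(phi) / lam  # normalized Doppler frequency
v = steering_vector(fd, fs, K=10, N=10)
print(v.shape)  # (100,)
```

Each entry has unit magnitude, so ‖v‖₂ = √(NK); the Kronecker ordering (temporal ⊗ spatial) matches the definition in the text.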

Fig. 1

Radar platform flies at speed vp along the azimuth direction (x-axis). Without loss of generality, the center of elements is defined as the origin of coordinates. hp is the flight height, and ϕ represents the AOA of the clutter patch in the isorange clutter ring.


To clearly illustrate how the STAP method works, the GSC form of the STAP method is shown in Fig. 2. B ∈ C^{NK×(NK−1)} is the signal blocking matrix, which satisfies B^H v_t = 0 and B^H B = I. Generally, B can be obtained by singular value decomposition (SVD):

(3)

$$[\mathbf{U},\mathbf{S},\mathbf{V}]=\mathrm{svd}(\mathbf{v}_t^H),\quad \mathbf{B}=\mathbf{V}(:,2\!:\!NK).$$
After the transformation b = B^H x ∈ C^{(NK−1)×1}, NK−1 clutter channels are available. In full-dimension STAP, all of these data are used to cancel the clutter. The output is
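The SVD construction of the blocking matrix in Eq. (3) can be sketched as follows; a random complex vector stands in for the true target steering vector v_t, and the checks confirm the two defining properties.

```python
import numpy as np

rng = np.random.default_rng(0)
NK = 16
vt = rng.standard_normal(NK) + 1j * rng.standard_normal(NK)  # toy target steering vector

# SVD of the 1 x NK row vector vt^H; the right singular vectors beyond the
# first span the orthogonal complement of vt, giving B = V(:, 2:NK) per Eq. (3)
_, _, Vh = np.linalg.svd(vt.conj()[None, :], full_matrices=True)
V = Vh.conj().T
B = V[:, 1:]                                        # NK x (NK-1) blocking matrix

print(np.allclose(B.conj().T @ vt, 0))              # B^H vt = 0
print(np.allclose(B.conj().T @ B, np.eye(NK - 1)))  # orthonormal columns
```

The first right singular vector of v_t^H is proportional to v_t, so the remaining NK−1 columns of V automatically block the target direction.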

(4)

$$y=d_0-\boldsymbol{\omega}_b^H\mathbf{b},$$
where d_0 = v_t^H x and ω_b = R_b^{−1} r_{bd}. R_b = E(b b^H) is the clutter covariance matrix, and r_{bd} = E(b d_0^*) is the cross-correlation vector between d_0 and b. The output clutter power can be computed as

(5)

$$P=\mathbf{v}_t^H\mathbf{R}_x\mathbf{v}_t-\mathbf{r}_{bd}^H\mathbf{R}_b^{-1}\mathbf{r}_{bd},$$
where R_x = E(x x^H) is the input covariance matrix. The output SCNR can be expressed as

(6)

$$\xi=\frac{NM|\alpha|^2}{\mathbf{v}_t^H\mathbf{R}_x\mathbf{v}_t-\mathbf{r}_{bd}^H\mathbf{R}_b^{-1}\mathbf{r}_{bd}}.$$
Maximizing the output SCNR is equivalent to maximizing the detection probability. However, Rb and rbd are unknown in practice, and the secondary training snapshots are required to estimate these parameters.15 The best performance can be achieved if there are sufficient I.I.D. training snapshots. However, in many practical cases, it is impossible to obtain sufficient snapshots, and the performance degrades significantly.

Fig. 2

(a) GSC form of the conventional STAP and (b) GSC form of the sparsity-based STAP.


2.2.

Sparsity-Based STAP

According to STAP theory, the rank of the clutter covariance matrix is far lower than the DOFs of the system.22,23 Consequently, reduced-rank (RR) and reduced-dimension (RD) STAP algorithms have been used to reduce the filter length; equivalently, the filter coefficient vector obtained by full-dimension STAP is sparse.14 Hence, in the GSC form of the sparsity-based STAP algorithm (see Fig. 2), the filter coefficient vector ω_b can be replaced by ω̃_b = Vω_b, where V ≜ diag(v) and v ∈ C^{NK−1} denotes a sparse vector. Then, we obtain

(7)

$$\mathbf{z}=\mathbf{V}^H\mathbf{b}\in\mathbb{C}^{(NK-1)\times 1}.$$
The output of the sparsity-based STAP is

(8)

$$y_r=d_0-\boldsymbol{\omega}_b^H\mathbf{z}=\begin{bmatrix}1 & -\boldsymbol{\omega}_b^H\mathbf{V}^H\end{bmatrix}\begin{bmatrix}d_0\\ \mathbf{b}\end{bmatrix}.$$
Hence, the output clutter power for the sparsity-based STAP can be computed as

(9)

$$P_r=\mathbf{v}_t^H\mathbf{R}_x\mathbf{v}_t-\mathbf{r}_{bd}^H\mathbf{R}_b^{-1}\mathbf{r}_{bd}+\boldsymbol{\epsilon}^H\mathbf{R}_b\boldsymbol{\epsilon},$$
where ϵ=ωbω˜b is the weight error vector caused by the sparsity constraint. Note that the target signal power is not affected by the sparsity constraint. The output SCNR can be expressed as

(10)

$$\xi_r=\frac{NM|\alpha|^2}{\mathbf{v}_t^H\mathbf{R}_x\mathbf{v}_t-\mathbf{r}_{bd}^H\mathbf{R}_b^{-1}\mathbf{r}_{bd}+\boldsymbol{\epsilon}^H\mathbf{R}_b\boldsymbol{\epsilon}}.$$
Hence, the aim is to minimize the mean-square error ϵHRbϵ. The objective function of the minimization problem can be rewritten as

(11)

$$\boldsymbol{\epsilon}^H\mathbf{R}_b\boldsymbol{\epsilon}=\mathbf{r}_{bd}^H\mathbf{R}_b^{-1}\mathbf{r}_{bd}-\mathbf{r}_{bd}^H\tilde{\boldsymbol{\omega}}_b-\tilde{\boldsymbol{\omega}}_b^H\mathbf{r}_{bd}+\tilde{\boldsymbol{\omega}}_b^H\mathbf{R}_b\tilde{\boldsymbol{\omega}}_b.$$
ω˜b is sparse, i.e., most of its elements are considerably smaller than the others. Hence, the minimization problem can be expressed as

(12)

$$\min\;\;-\mathbf{r}_{bd}^H\tilde{\boldsymbol{\omega}}_b-\tilde{\boldsymbol{\omega}}_b^H\mathbf{r}_{bd}+\tilde{\boldsymbol{\omega}}_b^H\mathbf{R}_b\tilde{\boldsymbol{\omega}}_b+\lambda\|\tilde{\boldsymbol{\omega}}_b\|_0,$$
where λ is the regularization parameter that regulates the sparseness of ω̃_b. However, the ℓ0-norm problem is nonconvex; consequently, it is intractable even for optimization problems of moderate size. Equation (12) can be relaxed to a LASSO problem

(13)

$$\min\;\;-\mathbf{r}_{bd}^H\tilde{\boldsymbol{\omega}}_b-\tilde{\boldsymbol{\omega}}_b^H\mathbf{r}_{bd}+\tilde{\boldsymbol{\omega}}_b^H\mathbf{R}_b\tilde{\boldsymbol{\omega}}_b+\lambda\|\tilde{\boldsymbol{\omega}}_b\|_1.$$
In contrast to Eq. (12), Eq. (13) is convex and can be solved by convex optimization algorithms, such as the interior point method (IPM). However, the complexity of IPM-STAP can be very high when the size of the problem is large, which is impractical.

3.

Proposed ℓ1-Regularized STAP Algorithm

3.1.

Variable Splitting

In general, the ADMM algorithm converges rapidly when a result of modest accuracy is acceptable. Fortunately, this is the case for the parameter estimation problem in the STAP application considered here: for statistical problems, solving a parameter estimation problem to very high accuracy often yields little improvement.19 The ADMM-STAP algorithm is based on variable splitting, i.e., we split the variable ω̃_b into a pair of variables, say ω̃_b and z, add the constraint that the two variables are equal, and split the objective function into a sum of two functions to be minimized. Explicitly, Eq. (13) can be rewritten in the ADMM form

(14)

$$\min_{\tilde{\boldsymbol{\omega}}_b,\mathbf{z}}\;\;-\mathbf{r}_{bd}^H\tilde{\boldsymbol{\omega}}_b-\tilde{\boldsymbol{\omega}}_b^H\mathbf{r}_{bd}+\tilde{\boldsymbol{\omega}}_b^H\mathbf{R}_b\tilde{\boldsymbol{\omega}}_b+\lambda\|\mathbf{z}\|_1\quad\text{s.t.}\;\;\tilde{\boldsymbol{\omega}}_b=\mathbf{z}.$$
The problems of Eqs. (13) and (14) are clearly equivalent. In many cases, it is easier to solve the constrained problem Eq. (14) than the original unconstrained problem. As in the method of multipliers, the augmented Lagrangian function is formed as19,20

(15)

$$L_\rho(\tilde{\boldsymbol{\omega}}_b,\mathbf{z},\mathbf{y})=-\mathbf{r}_{bd}^H\tilde{\boldsymbol{\omega}}_b-\tilde{\boldsymbol{\omega}}_b^H\mathbf{r}_{bd}+\tilde{\boldsymbol{\omega}}_b^H\mathbf{R}_b\tilde{\boldsymbol{\omega}}_b+\lambda\|\mathbf{z}\|_1+(\rho/2)\|\tilde{\boldsymbol{\omega}}_b-\mathbf{z}\|_2^2+\mathbf{y}^H(\tilde{\boldsymbol{\omega}}_b-\mathbf{z}),$$
where ρ>0 is the augmented Lagrangian parameter and y is a vector of Lagrange multipliers.

3.2.

ℓ1-Regularized ADMM-STAP

Define the residual and the scaled dual variable as r = ω̃_b − z and d = (1/ρ)y, respectively. Then, we have

(16)

$$(\rho/2)\|\tilde{\boldsymbol{\omega}}_b-\mathbf{z}\|_2^2+\mathbf{y}^H(\tilde{\boldsymbol{\omega}}_b-\mathbf{z})=(\rho/2)\|\mathbf{r}\|_2^2+\mathbf{y}^H\mathbf{r}=(\rho/2)\|\mathbf{r}+\mathbf{d}\|_2^2-(\rho/2)\|\mathbf{d}\|_2^2.$$
Subsequently, the ADMM-STAP algorithm can be rewritten in a convenient form

(17)

$$\tilde{\boldsymbol{\omega}}_b^{(k+1)}=\arg\min_{\tilde{\boldsymbol{\omega}}_b}\left(-\mathbf{r}_{bd}^H\tilde{\boldsymbol{\omega}}_b-\tilde{\boldsymbol{\omega}}_b^H\mathbf{r}_{bd}+\tilde{\boldsymbol{\omega}}_b^H\mathbf{R}_b\tilde{\boldsymbol{\omega}}_b+(\rho/2)\|\tilde{\boldsymbol{\omega}}_b-\mathbf{z}^{(k)}+\mathbf{d}^{(k)}\|_2^2\right),$$
$$\mathbf{z}^{(k+1)}=\arg\min_{\mathbf{z}}\left(\lambda\|\mathbf{z}\|_1+(\rho/2)\|\tilde{\boldsymbol{\omega}}_b^{(k+1)}-\mathbf{z}+\mathbf{d}^{(k)}\|_2^2\right),$$
$$\mathbf{d}^{(k+1)}=\mathbf{d}^{(k)}+\mathbf{r}^{(k+1)},$$
where r^{(k)} = ω̃_b^{(k)} − z^{(k)} is the residual at the k'th iteration and d^{(k)} = d^{(0)} + Σ_{j=1}^{k} r^{(j)} is the running sum of residuals. In the first line of Eq. (17), the objective is a strictly convex quadratic function, and the solution can be easily obtained as

(18)

$$\tilde{\boldsymbol{\omega}}_b^{(k+1)}=(\mathbf{R}_b+\rho\mathbf{I})^{-1}\left[\mathbf{r}_{bd}+\rho(\mathbf{z}^{(k)}-\mathbf{d}^{(k)})\right].$$
As mentioned, R_b and r_{bd} are unknown in practice; they can be estimated as R_b = (1/L) Σ_{l=1}^{L} b(l)b^H(l) and r_{bd} = (1/L) Σ_{l=1}^{L} b(l)d_0^*(l), where L denotes the number of snapshots used. Moreover, b(l) = B^H x(l) and d_0(l) = v_t^H x(l), where x(l) denotes the l'th space-time snapshot.13–15
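Under these sample estimates, the ω-update of Eq. (18) reduces to a single linear solve. A minimal NumPy sketch with synthetic snapshots (all sizes and data are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
NK1, L, rho = 15, 30, 1.0                    # NK-1 channels, L training snapshots (toy)
b = rng.standard_normal((NK1, L)) + 1j * rng.standard_normal((NK1, L))  # b(l) = B^H x(l)
d0 = rng.standard_normal(L) + 1j * rng.standard_normal(L)               # d0(l) = vt^H x(l)

# Sample estimates of Rb and rbd over the L snapshots
Rb = (b @ b.conj().T) / L
rbd = (b @ d0.conj()) / L

# Eq. (18): omega-update for given z^(k), d^(k) (here initialized to zero)
z = np.zeros(NK1, dtype=complex)
d = np.zeros(NK1, dtype=complex)
w = np.linalg.solve(Rb + rho * np.eye(NK1), rbd + rho * (z - d))
```

Using `solve` rather than forming the explicit inverse is the standard numerically preferred choice; the recursive inverse update discussed next avoids even this solve across consecutive range bins.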

The solution of Eq. (18) can be obtained directly, i.e., noniteratively. However, it is impractical because the inversion of (Rb+ρI) has a high computational complexity of O[(NK1)3]. Note that, according to Fig. 3, the clutter covariance matrix constructed by the training snapshots with regard to the current detecting snapshot can be written as

(19)

$$\mathbf{R}_b=\hat{\mathbf{R}}_b+\frac{1}{L}\sum_{m=1}^{4}(-1)^m\mathbf{b}_m\mathbf{b}_m^H,$$
where R̂_b is constructed from the training snapshots with regard to the previous detecting snapshot. Denote P^{(0)} = (R̂_b + ρI)^{−1}; then, according to the matrix inversion lemma,24 we obtain

(20)

$$\mathbf{P}^{(m)}=\mathbf{P}^{(m-1)}-\frac{\mathbf{P}^{(m-1)}\mathbf{b}_m\mathbf{b}_m^H\mathbf{P}^{(m-1)}}{L(-1)^m+\mathbf{b}_m^H\mathbf{P}^{(m-1)}\mathbf{b}_m},\quad m=1,2,3,4.$$
It is clear that P^{(4)} = (R_b + ρI)^{−1}. Hence, the computational complexity can be reduced to O[8(NK−1)²]. A full analysis of the computational complexity is presented in Table 1.
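The rank-one propagation of Eqs. (19) and (20) can be sketched and checked against a direct inverse as follows. Toy dimensions are used, and b_1, …, b_4 are random stand-ins for the snapshots entering and leaving the training window.

```python
import numpy as np

def update_inverse(P, b_list, L):
    """Eq. (20): carry P = (Rb_hat + rho*I)^(-1) through the four rank-one
    corrections of Eq. (19) via the matrix inversion lemma."""
    for m, bm in enumerate(b_list, start=1):
        Pb = P @ bm
        denom = L * (-1.0) ** m + bm.conj() @ Pb
        P = P - np.outer(Pb, Pb.conj()) / denom
    return P

rng = np.random.default_rng(2)
n, L, rho = 6, 50, 1.0
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Rhat = X @ X.conj().T / n                     # toy previous covariance estimate
P0 = np.linalg.inv(Rhat + rho * np.eye(n))
bs = [rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(4)]

P4 = update_inverse(P0, bs, L)
Rnew = Rhat + sum((-1.0) ** m * np.outer(b, b.conj()) for m, b in enumerate(bs, 1)) / L
print(np.allclose(P4, np.linalg.inv(Rnew + rho * np.eye(n))))  # True
```

Each loop pass is one Sherman-Morrison step, costing O(n²) instead of the O(n³) of a fresh inversion, which is exactly the saving claimed in the text.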

Fig. 3

Selection of I.I.D. training snapshots. The guard bands are used to guarantee that the training snapshots contain no components of the moving target.


Fig. 4

The detailed iterative procedure of ADMM-STAP.


Table 1

Computational complexity.

Algorithm | Complex multiplications | Complex additions
SMI-STAP | O[(NK−1)³] | O[(NK−1)³]
RLS-STAP | [4(NK)² − 2NK − 1]L | [3(NK)² − 3NK]L
OCD-STAP | [4(NK)² − 5NK + 2]L | [3(NK)² − 4NK + 1]L
ADMM-STAP | (M+8)(NK−1)² + 4NK − 4 | (M+8)(NK−1)² + 4M(NK−1) − 4

In the second line of Eq. (17), the z-update can be represented as

(21)

$$z_i^{(k+1)}=\arg\min_{z_i}\left[\lambda|z_i|+(\rho/2)\left(z_i-\tilde{\omega}_i^{(k+1)}-d_i^{(k)}\right)^2\right].$$
Although the absolute value function is not differentiable, a simple closed-form solution can easily be obtained. Explicitly, the solution is

(22)

$$z_i^{(k+1)}=S_{\lambda/\rho}\left(\tilde{\omega}_i^{(k+1)}+d_i^{(k)}\right),$$
where Sλ(z) is the soft-thresholding operator. The soft-thresholding operator is essentially a shrinkage operator, which moves a point toward zero.
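For complex-valued weights, soft-thresholding shrinks the magnitude while preserving the phase. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def soft_threshold(x, kappa):
    """S_kappa(x) = (1 - kappa/|x|)_+ * x: shrink each entry's magnitude
    toward zero by kappa, preserving its phase (solves Eq. (21) elementwise)."""
    mag = np.abs(x)
    scale = np.maximum(1 - kappa / np.maximum(mag, np.finfo(float).tiny), 0.0)
    return scale * x

print(soft_threshold(np.array([3.0, 0.2, -1.0]), 0.5))  # [2.5, 0.0, -0.5]
```

Entries with magnitude below κ = λ/ρ are set exactly to zero, which is what produces sparsity in ω̃_b.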

In the ADMM-STAP algorithm, ω˜b and z are updated alternately, which accounts for the term alternating direction. The reasonable stopping criteria are that the primal and dual residuals must be small,

(23)

$$\|\tilde{\boldsymbol{\omega}}_b^{(k)}-\mathbf{z}^{(k)}\|_2\le\epsilon^{\mathrm{pri}}\quad\text{and}\quad\|\rho(\mathbf{z}^{(k)}-\mathbf{z}^{(k-1)})\|_2\le\epsilon^{\mathrm{dual}},$$
where ϵpri and ϵdual are thresholds that are chosen by absolute and relative criteria

(24)

$$\epsilon^{\mathrm{pri}}=\sqrt{p}\,\epsilon^{\mathrm{abs}}+\epsilon^{\mathrm{rel}}\max\{\|\tilde{\boldsymbol{\omega}}_b^{(k)}\|_2,\|\mathbf{z}^{(k)}\|_2\},\quad\epsilon^{\mathrm{dual}}=\sqrt{n}\,\epsilon^{\mathrm{abs}}+\epsilon^{\mathrm{rel}}\|\mathbf{y}^{(k)}\|_2.$$
A reasonable value for ϵ^{rel} is 10⁻⁴ to 10⁻³, and the choice of ϵ^{abs} depends on the scale of the typical variable values. The detailed iterative procedure of ADMM-STAP is shown in Fig. 4.
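Putting Eqs. (17), (18), (22), and the stopping rule of Eqs. (23)-(24) together, the full iteration can be sketched as follows. This is a simplified NumPy version on synthetic data; parameter defaults and problem sizes are illustrative assumptions.

```python
import numpy as np

def admm_stap(Rb, rbd, lam=1.0, rho=1.0, eps_abs=1e-4, eps_rel=1e-3, max_iter=200):
    """l1-regularized ADMM-STAP iteration (Eq. 17) with the stopping
    rule of Eqs. (23)-(24). Variable names follow the text."""
    n = Rb.shape[0]
    z = np.zeros(n, dtype=complex)
    d = np.zeros(n, dtype=complex)
    A = Rb + rho * np.eye(n)
    for _ in range(max_iter):
        w = np.linalg.solve(A, rbd + rho * (z - d))     # omega-update, Eq. (18)
        z_old = z
        u = w + d
        mag = np.maximum(np.abs(u), np.finfo(float).tiny)
        z = np.maximum(1 - (lam / rho) / mag, 0.0) * u  # soft threshold, Eq. (22)
        d = d + (w - z)                                 # scaled dual update
        eps_pri = np.sqrt(n) * eps_abs + eps_rel * max(np.linalg.norm(w), np.linalg.norm(z))
        eps_dual = np.sqrt(n) * eps_abs + eps_rel * np.linalg.norm(rho * d)
        if (np.linalg.norm(w - z) <= eps_pri
                and np.linalg.norm(rho * (z - z_old)) <= eps_dual):
            break
    return z

# Toy problem: Rb and rbd estimated from synthetic snapshots, as above
rng = np.random.default_rng(3)
n, L = 12, 40
b = rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))
d0 = rng.standard_normal(L) + 1j * rng.standard_normal(L)
w_sparse = admm_stap(b @ b.conj().T / L, b @ d0.conj() / L)
```

In a full implementation, the matrix A⁻¹ would be propagated across range bins via the rank-one updates of Eq. (20) instead of calling `solve` each iteration.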

3.3.

Analysis of Convergence

A proof of the convergence result is presented in this section. We begin by presenting the following theorem.

Theorem 1

(Eckstein–Bertsekas):25 Consider the problem

(25)

$$\min_{\mathbf{u}}\;\;f_1(\mathbf{u})+f_2(\mathbf{v})\quad\text{s.t.}\;\;\mathbf{v}=\mathbf{G}\mathbf{u},$$
in the case where the functions f_1(·) and f_2(·) are closed, proper, and convex and G has full column rank. Let {η_k ≥ 0, k = 0, 1, …} and {γ_k ≥ 0, k = 0, 1, …} be two sequences such that

(26)

$$\sum_{k=0}^{\infty}\eta_k<\infty\quad\text{and}\quad\sum_{k=0}^{\infty}\gamma_k<\infty.$$

Assume that there are three sequences {u^k, k = 0, 1, …}, {v^k, k = 0, 1, …}, and {t^k, k = 0, 1, …} that satisfy

(27)

$$\left\|\mathbf{u}^{k+1}-\arg\min_{\mathbf{u}}\left\{f_1(\mathbf{u})+(\rho/2)\|\mathbf{G}\mathbf{u}-\mathbf{v}^k-\mathbf{t}^k\|_2^2\right\}\right\|\le\eta_k,$$
$$\left\|\mathbf{v}^{k+1}-\arg\min_{\mathbf{v}}\left\{f_2(\mathbf{v})+(\rho/2)\|\mathbf{G}\mathbf{u}^{k+1}-\mathbf{v}-\mathbf{t}^k\|_2^2\right\}\right\|\le\gamma_k,$$
$$\mathbf{t}^{k+1}=\mathbf{t}^k-(\mathbf{G}\mathbf{u}^{k+1}-\mathbf{v}^{k+1}).$$
Then, if Eq. (25) has an optimal solution u*, the sequence {u^k} converges to this solution, i.e., u^k → u*.

First, since Eq. (14) is a particular instance with G = I, the full-column-rank condition of Theorem 1 is satisfied. Second, it is clear that f_1(ω̃_b) = −r_{bd}^H ω̃_b − ω̃_b^H r_{bd} + ω̃_b^H R_b ω̃_b and f_2(z) = λ‖z‖_1 in Eq. (14) are closed, proper, and convex. Moreover, the sequences {ω̃_b^{(k)}}, {z^{(k)}}, and {d^{(k)}} generated by Eq. (17) satisfy the conditions of Eq. (27) in a strict sense (η_k = γ_k = 0). Hence, convergence is guaranteed.

3.4.

Analysis of Computational Complexity

A comparison of the computational complexities of four STAP algorithms, namely, the conventional sample matrix inversion (SMI) STAP,2 ℓ1-regularized RLS-STAP,14 ℓ1-regularized online coordinate descent (OCD) STAP,26 and the proposed ADMM-STAP algorithm, is presented in Table 1. The computational complexity is measured by the number of complex multiplications and additions. As shown in Table 1, the ADMM-STAP algorithm has a computational complexity of O[(M+8)(NK−1)²], where M is the number of iterations. According to the simulation in Sec. 4, the algorithm converges to an acceptable solution within a few tens of iterations, i.e., M+8 is smaller than both 4L and NK−1. Hence, the ADMM-STAP algorithm has the lowest computational complexity.

4.

Simulation Results

The simulation parameters for the ground moving target indication application are listed in Table 2: a radar system equipped with a side-looking ULA is employed, and the elements are spaced half a wavelength apart, i.e., d = λ/2. Additive noise is modeled as spatially and temporally independent complex Gaussian noise with zero mean and unit variance. f_r = 4v_p/λ; hence, β = 2v_pT_r/d = 1. All the results are obtained from the average of 100 independent Monte Carlo simulations.

Table 2

Simulation parameters for airborne radar.

Parameter | Notation | Value | Unit
Antenna array spacing | d | λ/2 | m
Pulse repetition frequency | f_r | 2314 | Hz
Carrier frequency | f_c | 1.24 | GHz
Array element number | N | 10 | —
CPI pulse number | K | 10 | —
Bandwidth | — | 10 | MHz
Platform velocity | v_p | 140 | m/s
Platform height | h_p | 8000 | m
Signal-to-noise ratio | SNR | 0 | dB

4.1.

Setting of Regularization Parameter

The regularization parameter provides a tradeoff between the SCNR steady-state performance and the convergence speed. Although it is clear that the value of λ should be proportional to the noise power and be inversely proportional to the rank of the clutter covariance matrix, it is still difficult to determine the optimal value. Adjusting the regularization parameter adaptively is an interesting research area (e.g., Refs. 13 and 14). However, this area is not the main focus of our paper. In this paper, the regularization parameter is selected from a fixed set Ω={0.1,1,10,50}.

The output SCNR versus the number of snapshots used with different values of the regularization parameter λ is shown in Fig. 5. In this simulation, we assume that the signal of the moving target impinges on the array from a DOA of 90 deg and that the radial velocity of the moving target v_t is 28 m/s (the Doppler frequency of the moving target is nearly 231 Hz). The results in Fig. 5 indicate that (i) the value of λ is crucial to the output SCNR performance, and there is a reasonable range of values, i.e., 1 ≤ λ ≤ 10, that improves the convergence speed and the output SCNR steady-state performance simultaneously; (ii) the output SCNR is degraded when λ is too large, since the filter weight vector is shrunk to zero; and (iii) the output SCNR performance is not considerably improved when λ is too small, in which case it is nearly the same as that of the conventional STAP algorithm.

Fig. 5

Output SCNR versus the number of used snapshots with different regularization parameters. (a) CNR=20  dB and (b) CNR=40  dB.


The output SCNR performance versus the Doppler frequency of the moving target at a DOA of 90 deg is shown in Fig. 6. The range of potential Doppler frequencies is from −500 to 500 Hz, and 60 snapshots are used to optimize the filter vector. The same conclusion can be drawn: this figure shows that the ADMM-STAP algorithm with 1 ≤ λ ≤ 10 provides a satisfactory output SCNR performance.

Fig. 6

Output SCNR performance versus Doppler frequency with different regularization parameters; the range of Doppler frequency is from −500 to 500 Hz. (a) CNR=20  dB and (b) CNR=40  dB.


The number of iterations with different values of λ is shown in Fig. 7. As shown, if we choose λ from an appropriate range (0.5 ≤ λ ≤ 10), then the ADMM-STAP algorithm converges rapidly within a few tens of iterations, which is acceptable in practice. Otherwise, the number of iterations increases significantly, and the iteration output cannot converge to the optimal solution, leading to a performance degradation to a certain extent.

Fig. 7

Number of iterations versus the value of λ. The radial velocity of the moving target is 28  m/s, and 60 snapshots are used in the simulation. (a) CNR=20  dB and (b) CNR=40  dB.


4.2.

Comparison with Other Algorithms

In this section, we compare the output SCNR performance of our proposed algorithm with that of the IPM-STAP, OCD-STAP, and RLS-STAP algorithms. The regularization parameter λ is set to 1 for all the algorithms, and the other parameters are the same as in the previous simulations. The output SCNR performances versus the number of used snapshots and the target Doppler frequency are compared in Figs. 8 and 9. From these figures, we can see that (i) the output SCNR performance of the IPM-STAP algorithm is superior to that of the RLS-STAP and OCD-STAP algorithms, although this is achieved at a high computational cost, and (ii) the ADMM-STAP algorithm can outperform the IPM-STAP algorithm, which supports our previous conclusion that solving the parameter estimation problem to very high accuracy generally yields no improvement.

Fig. 8

Output SCNR versus the number of used snapshots when the radial velocity of the moving target is 28  m/s. (a) CNR=20  dB and (b) CNR=40  dB.


Fig. 9

Output SCNR performance versus Doppler frequency with 60 snapshots, and the range of Doppler frequency is from 500 to 500 Hz. (a) CNR=20  dB and (b) CNR=40  dB.


5.

ℓ1-Regularized STAP with Mountaintop Data

The performance of the ℓ1-regularized STAP approaches is verified here using the Mountaintop data set (data No. t38pre01v1) acquired with the experimental radar system RSTER (radar surveillance technology experimental radar) sponsored by the Advanced Research Projects Agency. The Mountaintop program is devoted to supporting the mission requirements of next-generation airborne early warning platforms and to supporting the evaluation of STAP algorithms. The antenna for the system is a 5-m wide by 10-m high horizontally polarized array composed of 14 column elements. The CPI pulse number is 16, the antenna array spacing is 0.333 m, the PRF is 625 Hz, the carrier frequency is 435 MHz, and the bandwidth is 500 kHz. The transmit beam is steered to illuminate a mountain range (a large clutter scatterer).

The data set is divided into two subsets in our experiment. The first subset, including 100 snapshots, is used to train the STAP filters. The second subset, including 100 snapshots, is used to test the performance. Two simulated moving targets are added to the test data subset. The signal of the first target impinges the array from a DOA of 25  deg, and the Doppler frequency is 62.5 Hz. The signal of the second target impinges the array from a DOA of 20 deg, and the Doppler frequency is 187.5 Hz. Hence, the first target can essentially be regarded as a ground moving vehicle in the mountain, and the second target can be regarded as an aircraft near the mountain. The minimum variance distortionless response (MVDR) spectra of the two subsets are shown in Fig. 10.

Fig. 10

(a) MVDR spectrum of the training subset and (b) MVDR spectrum of the test subset.


The improvement factor (IF) performance, which is defined as the ratio of the output SCNR to the input SCNR, is investigated in Fig. 11. The regularization parameter λ is set to 1 for all the algorithms. As shown, the IF performance of the proposed ADMM-STAP approach substantially outperforms that of the other approaches. Hence, the effectiveness of the proposed approach is confirmed by an experimental multichannel radar system RSTER.

Fig. 11

(a) IF performance versus the number of used snapshots for the first target and (b) IF performance versus the number of used snapshots for the second target.


6.

Conclusions

In this paper, we proposed a sparsity-based approach with an ℓ1-regularized constraint to accelerate the convergence speed of STAP. The optimization problem with the additional ℓ1-regularized constraint was solved using the ADMM, and the detailed iterative procedure of ADMM-STAP was derived. Through the examples, it was demonstrated that the proposed method can effectively decrease the required number of secondary snapshots and provide better performance than the ℓ1-regularized OCD-STAP and ℓ1-regularized RLS-STAP methods.

Acknowledgments

The authors thank the National Natural Science Foundation of China under Grant No. 61101178 and the China Scholarship Council for their support.

References

1. H. Wang et al., “Robust waveform design for MIMO-STAP to improve the worst-case detection performance,” EURASIP J. Adv. Signal Process. 2013(1), 1–8 (2013). http://dx.doi.org/10.1186/1687-6180-2013-52 Google Scholar

2. W. L. Melvin, “A STAP overview,” IEEE Aerosp. Electron. Syst. Mag. 19(1), 19–35 (2004).IESMEA0885-8985 http://dx.doi.org/10.1109/MAES.2004.1263229 Google Scholar

3. X. Guo et al., “Modified reconstruction algorithm based on space-time adaptive processing for multichannel synthetic aperture radar systems in azimuth,” J. Appl. Remote Sens. 10(3), 035022 (2016). http://dx.doi.org/10.1117/1.JRS.10.035022 Google Scholar

4. R. Fa and R. C. De Lamare, “Reduced-rank STAP algorithms using joint iterative optimization of filters,” IEEE Trans. Aerosp. Electron. Syst. 47(3), 1668–1684 (2011).IEARAX0018-9251 http://dx.doi.org/10.1109/TAES.2011.5937257 Google Scholar

5. R. Fa, R. C. de Lamare and L. Wang, “Reduced-rank STAP schemes for airborne radar based on switched joint interpolation, decimation and filtering algorithm,” IEEE Trans. Signal Process. 58(8), 4182–4194 (2010).ITPRED1053-587X http://dx.doi.org/10.1109/TSP.2010.2048212 Google Scholar

6. R. Li et al., “Reduced-dimension space-time adaptive processing based on angle-Doppler correlation coefficient,” EURASIP J. Adv. Signal Process. 2016(97), 1–9 (2016). http://dx.doi.org/10.1186/s13634-016-0395-2 Google Scholar

7. W. Zhang et al., “Multiple-input multiple-output radar multistage multiple-beam beamspace reduced-dimension space-time adaptive processing,” IET Radar Sonar Navig. 7(3), 295–303 (2013). http://dx.doi.org/10.1049/iet-rsn.2012.0078 Google Scholar

8. W. Zhang et al., “A method for finding best channels in beam-space post-Doppler reduced-dimension STAP,” IEEE Trans. Aerosp. Electron. Syst. 50(1), 254–264 (2014).IEARAX0018-9251 http://dx.doi.org/10.1109/TAES.2013.120145 Google Scholar

9. K. Sun, H. Meng and F. Lapierre, “Registration-based compensation using sparse representation in conformal-array STAP,” Signal Process. 91(10), 2268–2276 (2011). http://dx.doi.org/10.1016/j.sigpro.2011.04.008 Google Scholar

10. K. Sun, H. Meng and Y. Wang, “Direct data domain STAP using sparse representation of clutter spectrum,” Signal Process. 91(9), 2222–2236 (2011). http://dx.doi.org/10.1016/j.sigpro.2011.04.006 Google Scholar

11. S. Sen, “OFDM radar space-time adaptive processing by exploiting spatio-temporal sparsity,” IEEE Trans. Signal Process. 61(1), 118–130 (2013).ITPRED1053-587X http://dx.doi.org/10.1109/TSP.2012.2222387 Google Scholar

12. Z. Yang, X. Li and H. Wang, “Space-time adaptive processing based on weighted regularized sparse recovery,” Prog. Electromagnet. Res. B 42, 245–262 (2012). http://dx.doi.org/10.2528/PIERB12051804 Google Scholar

13. Z. Yang, R. C. De Lamare and X. Li, “L1-regularized STAP algorithm with a generalized side-lobe canceler architecture for airborne radar,” IEEE Trans. Signal Process. 60(2), 674–686 (2012).ITPRED1053-587X http://dx.doi.org/10.1109/TSP.2011.2172435 Google Scholar

14. Z. Gao et al., “L1-regularised joint iterative optimisation space-time adaptive processing algorithm,” IET Radar Sonar Navig. 10(3), 435–441 (2016). http://dx.doi.org/10.1049/iet-rsn.2015.0044 Google Scholar

15. Z. Yang et al., “Sparsity-based space-time adaptive processing using complex-valued homotopy technique for airborne radar,” IET Signal Process. 8(5), 552–564 (2014). http://dx.doi.org/10.1049/iet-spr.2013.0069 Google Scholar

16. M. Shen et al., “An efficient moving target detection algorithm based on sparsity-aware spectrum estimation,” Sensors 14(9), 17055–17067 (2014).SNSRES0746-9462 http://dx.doi.org/10.3390/s140917055 Google Scholar

17. J. Qin, I. Yanovsky and W. Yin, “Efficient simultaneous image deconvolution and upsampling algorithm for low-resolution microwave sounder data,” J. Appl. Remote Sens. 9(1), 095035 (2015). http://dx.doi.org/10.1117/1.JRS.9.095035 Google Scholar

18. H. Zhai et al., “Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images,” J. Appl. Remote Sens. 10(4), 046014 (2016). http://dx.doi.org/10.1117/1.JRS.10.046014 Google Scholar

19. S. Boyd et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3(1), 1–122 (2010). http://dx.doi.org/10.1561/2200000016 Google Scholar

20. M. Afonso, J. Bioucas-Dias and M. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process. 19(9), 2345–2356 (2010).IIPRE41057-7149 http://dx.doi.org/10.1109/TIP.2010.2047910 Google Scholar

21. Z. Yang et al., “On clutter sparsity analysis in space-time adaptive processing airborne radar,” IEEE Geosci. Remote Sens. Lett. 10(5), 1214–1218 (2013). http://dx.doi.org/10.1109/LGRS.2012.2236639 Google Scholar

22. G. M. Herbert, “Clutter modeling for space-time adaptive processing in airborne radar,” IET Radar Sonar Navig. 4(2), 178–186 (2010). http://dx.doi.org/10.1049/iet-rsn.2009.0064 Google Scholar

23. H. Sun et al., “Estimation of the ocean clutter rank for HF/VHF radar space-time adaptive processing,” IET Radar Sonar Navig. 4(6), 755–763 (2010).IETTAW0018-9448 http://dx.doi.org/10.1049/iet-rsn.2009.0252 Google Scholar

24. B. Chen et al., “Quantized kernel recursive least squares algorithm,” IEEE Trans. Neural Netw. Learn. Syst. 24(9), 1484–1491 (2013). http://dx.doi.org/10.1109/TNNLS.2013.2258936 Google Scholar

25. J. Eckstein and D. Bertsekas, “On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Math. Program. 55(1), 293–318 (1992).MHPGA41436-4646 http://dx.doi.org/10.1007/BF01581204 Google Scholar

26. D. Angelosante, J. A. Bazerque and G. B. Giannakis, “Online adaptive estimation of sparse signals: where RLS meets the L1-norm,” IEEE Trans. Signal Process. 58(7), 3436–3447 (2010).ITPRED1053-587X http://dx.doi.org/10.1109/TSP.2010.2046897 Google Scholar

Biography

Lilong Qin is working toward his PhD at the National University of Defense Technology, Changsha, China, and is working with Aalto University, Espoo, Finland. He received his BS degree in information engineering and his MS degree in circuit and system from the Electronic Engineering Institute, Hefei, China, in 2010 and 2013, respectively. His current research interests include synthetic aperture radar and adaptive beamforming.

Manqing Wu received his MS degree from the National University of Defense Technology, Changsha, China, in 1990. Currently, he is a professor with the China Electronics Technology Group Corporation, Beijing, China, and is a member of the Chinese Academy of Engineering. His research field is radar signal processing.

Xuan Wang received her PhD in signal and information processing from Beijing Institute of Technology, Beijing, China, in 2016. Currently, she is working at Delft University of Technology, Delft, the Netherlands. Her current research interest is synthetic aperture radar.

Zhen Dong received his PhD from the National University of Defense Technology, Changsha, China, in 2001. Currently, he is a professor with the National University of Defense Technology, and his research field includes synthetic aperture radar interferometry and array radar.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Lilong Qin, Manqing Wu, Xuan Wang, Zhen Dong, "Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers," Journal of Applied Remote Sensing 11(2), 026004 (12 April 2017). https://doi.org/10.1117/1.JRS.11.026004
Received: 22 December 2016; Accepted: 24 March 2017; Published: 12 April 2017