## 1. Introduction

Inverse synthetic aperture radar (ISAR) has proven to be a powerful signal processing tool for imaging moving targets in military and civilian applications.^{1–3} In ISAR imaging, finer range resolution can be obtained by transmitting larger-bandwidth signals, while the cross-range resolution can be improved by wider aspect observations. Generally, wide aspect observations can be obtained by performing long-time observation with a monostatic ISAR or by acquiring multiaspect observations with multiple radar receiver configurations.^{4} In other words, multiaspect observations can be utilized to form a higher-resolution two-dimensional (2-D) ISAR image as well as to perform three-dimensional (3-D) reconstruction of a target. Reference 4 studies the parameter estimation of ISAR imaging with multiaspect observations, which further extends their application. However, in general, the ISAR image is just a 2-D range-Doppler projection of the 3-D target's reflectivity function onto an image plane,^{2}^{,}^{3}^{,}^{5}^{,}^{6} which is mainly determined by the motion of the target with respect to the radar line of sight (LOS) and cannot be predicted. Thus, the conventional 2-D ISAR image no longer meets the increasing demands of target recognition and identification to some extent.

Recently, to further improve the ability of target recognition, especially for noncooperative targets, many algorithms have been introduced for different imaging modes. Reference 7 puts forward a data-level fusion method with multiaspect observations, which can obtain the target's spatial structure information with known imaging geometry. In contrast to 2-D ISAR images, given their capability of providing the target's structure information, 3-D ISAR imaging techniques for maneuvering targets have attracted wide attention in applications such as target identification and recognition.^{8}^{,}^{9} Much literature covers 3-D ISAR imaging with various algorithms. The algorithms in Refs. 10 and 11 require a 2-D antenna array to generate the 3-D images of a target; however, multiple antennas inevitably result in great system complexity. References 12–16 present the interferometric ISAR (InISAR) imaging technique, which combines interferometric processing and ISAR processing to form 3-D images. The InISAR imaging technique has notable advantages over the aforementioned techniques in both system structure and signal processing; therefore, it has attracted the attention of many researchers.

Nevertheless, in order to employ interferometry via each antenna’s ISAR images, the InISAR imaging techniques in Refs. 12 and 15 take the linear time-frequency transform and searching procedure successively, but neglect the bilinear time-frequency transform with the higher time-frequency resolution. Different from the aforementioned InISAR technique, this paper presents a 3-D InISAR imaging algorithm for maneuvering targets based on the joint cross modified Wigner-Ville distribution (MWVD). In this paper, three antennas forming two orthogonal interferometric baselines are located in the same plane orthogonal to the LOS, and a uniform range alignment and phase adjustment must be implemented together on the three antennas’ echo signals to keep the coherence among them. In addition, the joint cross MWVD of the data acquired from each two antennas located along one baseline can be adopted for each range cell, and then the 3-D structure positions of all scatterers can be solved directly from the preserved phase information in the distribution, where each scatterer is distinctly separated. Meanwhile, the 3-D images of the maneuvering target can be obtained.

The remainder of this paper is organized as follows. The InISAR system model and signal format are described in Sec. 2. In Sec. 3, the joint cross MWVD and its application are discussed in detail. Section 4 gives the analyses of the cross-terms suppression and the computational cost. In Sec. 5, a 3-D InISAR imaging algorithm is proposed based on joint cross MWVD. Finally, the simulation results of the presented algorithm and the conclusion are given in Secs. 6 and 7.

## 2. Interferometric Inverse Synthetic Aperture Radar Model and Signal Format

The InISAR system in Fig. 1 is based on the assumption that the model is a reasonable approximation of real application scenarios. The proposed model consists of three antennas located at points $A$, $B$, and $C$, respectively, and $XYZ$ defines a Cartesian coordinate system whose origin $O$ is the location of antenna $A$. In order to achieve 3-D images of the target, the antennas have to lie on a horizontal and a vertical baseline, respectively. Antenna $A$, doubling as both transmitter and receiver, is placed at the origin; the LOS is the $Y$ axis, and the receiving antennas $B$ and $C$ are located on the $X$ axis and $Z$ axis with the coordinates $(-L,0,0)$ and $(0,0,-L)$, respectively.

In practical application, a maneuvering target will produce rotational motion that is represented by the rotation vector ${\omega}_{T}$, whose projection onto the plane perpendicular to the LOS is called the effective rotation vector ${\omega}_{e}$. The point ${O}^{\prime}$ is assumed to be the autofocus center for three receivers during the whole observation time, which will be mentioned in Sec. 3.

Assume the transmitted linear frequency modulation (LFM) signal takes the following form:

## (1)

$$s(t)=\mathrm{exp}\left[j2\pi \left({f}_{c}t+\frac{1}{2}\mu {t}^{2}\right)\right],\phantom{\rule[-0.0ex]{2em}{0.0ex}}|t|\le \frac{{T}_{s}}{2},$$

where ${f}_{c}$ is the carrier frequency, $\mu $ is the chirp rate, and ${T}_{s}$ is the pulse width. After pulse compression, the echo of a scatterer $P$ received by antenna $\mathrm{\Gamma}$ ($\mathrm{\Gamma}=A,B,C$) can be written as

## (2)

$${s}_{{\mathrm{\Gamma}}_{P}}(t,{t}_{m})={\sigma}_{P}B\text{\hspace{0.17em}}\mathrm{sin}\mathrm{c}\text{\hspace{0.17em}}\left\{B\left[t-\frac{{R}_{{A}_{P}}({t}_{m})+{R}_{{\mathrm{\Gamma}}_{P}}({t}_{m})}{c}\right]\right\}\mathrm{exp}\left[-j2\pi \frac{{R}_{{A}_{P}}({t}_{m})+{R}_{{\mathrm{\Gamma}}_{P}}({t}_{m})}{\lambda}\right],$$

where ${\sigma}_{P}$ is the scattering coefficient of scatterer $P$, $B$ is the bandwidth, ${t}_{m}$ is the slow time, and ${R}_{{\mathrm{\Gamma}}_{P}}({t}_{m})$ is the distance from scatterer $P$ to antenna $\mathrm{\Gamma}$, given by

## (3)

$${R}_{{A}_{P}}({t}_{m})={\{{[{x}_{p}+\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})]}^{2}+{[{y}_{p}+\mathrm{\Delta}R({t}_{m})+\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})]}^{2}+{[{z}_{p}+\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})]}^{2}\}}^{1/2},$$

## (4)

$${R}_{{B}_{P}}({t}_{m})={\{{[{x}_{p}+L+\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})]}^{2}+{[{y}_{p}+\mathrm{\Delta}R({t}_{m})+\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})]}^{2}+{[{z}_{p}+\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})]}^{2}\}}^{1/2}\phantom{\rule{0ex}{0ex}}={R}_{{A}_{P}}({t}_{m})+\frac{2[{x}_{p}+\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})]L+{L}^{2}}{2{R}_{{A}_{P}}({t}_{m})},$$

## (5)

$${R}_{{C}_{P}}({t}_{m})={\{{[{x}_{p}+\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})]}^{2}+{[{y}_{p}+\mathrm{\Delta}R({t}_{m})+\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})]}^{2}+{[{z}_{p}+L+\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})]}^{2}\}}^{1/2}\phantom{\rule{0ex}{0ex}}={R}_{{A}_{P}}({t}_{m})+\frac{2[{z}_{p}+\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})]L+{L}^{2}}{2{R}_{{A}_{P}}({t}_{m})}.$$

Obviously, it is ${R}_{{\mathrm{\Gamma}}_{P}}({t}_{m})$ in Eq. (2) that causes the range migration (both translational and rotational) and the Doppler frequency shift (induced by the translational and rotational motion). As in ISAR imaging, motion compensation that does not destroy the coherence among the three receivers must be achieved first in order to form 3-D InISAR images. Notice, however, that the range-alignment terms of the three receivers differ because of the existence of the second terms in Eqs. (4) and (5). Under far-field conditions, assume that the target size does not exceed 60 m, that the radar-to-target distance and the baseline length are at least 10,000 and 1 m, respectively, that the autofocus point lies on the $X$ and $Z$ axes, and that the effective range displacements $\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})$ and $\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})$ do not exceed 4 m; then, we have

## (6)

## (6)

$$\frac{2[{x}_{p}+\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})]L+{L}^{2}}{2{R}_{{A}_{P}}({t}_{m})}\le \frac{2(60+4)\times 1+100}{2\times \mathrm{10,000}}\le 0.05\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{m},$$

## (7)

$$\frac{2[{z}_{p}+\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})]L+{L}^{2}}{2{R}_{{A}_{P}}({t}_{m})}\le \frac{2(60+4)\times 1+100}{2\times \mathrm{10,000}}\le 0.05\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{m}.$$

For a radar range resolution of 0.1 to 0.3 m, the rotation-induced distance differences among the three antennas are therefore much smaller than the range resolution. Under this approximation, the translational range migration of all three receivers can be corrected with the same range alignment compensation function. Here we choose receiver $A$ as the reference channel; the translational range migration and the translation-induced Doppler frequency shift, which are the same for all scatterers on the target, can be eliminated by the standard range alignment method^{17}^{,}^{18} and the phase gradient autofocus method,^{19} respectively.
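The bounds in Eqs. (6) and (7) can be checked numerically in a few lines; the constants below are taken directly from the text's inequality.

```python
# Quick numeric check of the bound in Eqs. (6) and (7); all constants follow
# the text's inequality verbatim.
x_max = 60.0      # maximum target extent (m)
dr_max = 4.0      # maximum effective range displacement (m)
L = 1.0           # baseline length used in the linear term (m)
L_sq = 100.0      # squared-baseline constant appearing in the text's bound (m^2)
R_min = 10_000.0  # minimum radar-to-target distance (m)

bound = (2 * (x_max + dr_max) * L + L_sq) / (2 * R_min)
print(f"{bound:.4f} m")  # 0.0114 m, well below the 0.1 to 0.3 m range resolution
```

The resulting path difference is an order of magnitude below one range cell, which is what justifies the uniform range alignment across the three receivers.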

However, when the target size is larger and the required resolution becomes higher, the migration through resolution cells (MTRC), which is related to the location of each scatterer, can no longer be neglected. The Radon–Fourier transform and generalized Radon–Fourier transform (RFT/GRFT) were proposed in Refs. 20–22 to deal with the coupling between the envelope and the Doppler, and they have been shown to be very effective in much literature.^{20–23} Thus, the RFT can be used to mitigate the MTRCs and correct all scatterers into the right range cells. A detailed discussion of range alignment is beyond the scope of this paper; we focus only on the Doppler frequency shift induced by rotational motion for 3-D InISAR reconstruction.

After the motion compensation, the azimuth echo from the scatterer $P$ can be rewritten as

## (8)

$${s}_{{\mathrm{\Gamma}}_{P}}({t}_{m})={\sigma}_{P}\text{\hspace{0.17em}}\mathrm{exp}[-j{\mathrm{\Phi}}_{{\mathrm{\Gamma}}_{P}}({t}_{m})]={\sigma}_{P}\text{\hspace{0.17em}}\mathrm{exp}\left[-j2\pi \frac{{R}_{{A}_{P}}({t}_{m})+{R}_{{\mathrm{\Gamma}}_{P}}({t}_{m})}{\lambda}\right].$$

Expanding the distances with Eqs. (3)–(5) and collecting the slow-time-independent terms, we have

## (9)

$${s}_{{\mathrm{\Gamma}}_{P}}({t}_{m})={\sigma}_{P}\text{\hspace{0.17em}}\mathrm{exp}(-j{\varphi}_{{\mathrm{\Gamma}}_{P}})\mathrm{exp}\left[-j\frac{4\pi}{\lambda}\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})\right],$$

where the constant phase terms ${\varphi}_{{\mathrm{\Gamma}}_{P}}$ of the three receivers are

## (10)

$${\varphi}_{{A}_{P}}=\frac{4\pi}{\lambda}{R}_{P},\phantom{\rule[-0.0ex]{1em}{0.0ex}}{\varphi}_{{B}_{P}}=\frac{4\pi}{\lambda}{R}_{P}+\frac{2\pi L{x}_{p}}{\lambda {R}_{P}},\phantom{\rule[-0.0ex]{1em}{0.0ex}}{\varphi}_{{C}_{P}}=\frac{4\pi}{\lambda}{R}_{P}+\frac{2\pi L{z}_{p}}{\lambda {R}_{P}}.$$

From Eq. (10), although the terms ${\varphi}_{{\mathrm{\Gamma}}_{P}}$ are independent of the slow time and unnecessary for ISAR imaging, they carry significant position information of the scatterer, which must be preserved in the 3-D InISAR image processing; the details are explained in Sec. 3.2. The second exponential term in Eq. (9), identical for all three antennas, serves to separate the different scatterers in the same range cell. From Eqs. (9) and (10), we obtain

## (11)

$$\mathrm{\Delta}{\varphi}_{AB}={\varphi}_{{B}_{P}}-{\varphi}_{{A}_{P}}=\frac{2\pi L{x}_{p}}{\lambda {R}_{P}},\phantom{\rule[-0.0ex]{2em}{0.0ex}}\mathrm{\Delta}{\varphi}_{AC}={\varphi}_{{C}_{P}}-{\varphi}_{{A}_{P}}=\frac{2\pi L{z}_{p}}{\lambda {R}_{P}}.$$

Combined with the range information ${R}_{n}\approx {R}_{P}$, where ${R}_{n}$ is the distance of the radar to the $n$'th range cell, the position of the scatterer $P$ can be obtained as

## (12)

$${x}_{p}=\frac{\lambda {R}_{n}\mathrm{\Delta}{\varphi}_{AB}}{2\pi L},\phantom{\rule[-0.0ex]{1em}{0.0ex}}{y}_{P}={R}_{n},\phantom{\rule[-0.0ex]{1em}{0.0ex}}{z}_{p}=\frac{\lambda {R}_{n}\mathrm{\Delta}{\varphi}_{AC}}{2\pi L}.$$

Therefore, the interferometric phase information is crucial to successfully reconstructing the 3-D position of the target, which is also the research emphasis of this paper.
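The forward and inverse relations of Eqs. (11) and (12) can be illustrated with a small numeric sketch; the wavelength, baseline, range, and scatterer coordinates below are assumed example values, not taken from the paper.

```python
import numpy as np

# Numeric illustration of Eqs. (11) and (12); lam, L, R_n, and the scatterer
# coordinates are assumed example values.
lam = 0.03       # wavelength (m), X-band
L = 1.0          # baseline length (m)
R_n = 10_000.0   # distance from the radar to the n'th range cell (m)
x_true, z_true = 12.0, -7.5   # hypothetical scatterer coordinates (m)

# Forward model, Eq. (11): interferometric phase differences
dphi_AB = 2 * np.pi * L * x_true / (lam * R_n)
dphi_AC = 2 * np.pi * L * z_true / (lam * R_n)

# Inversion, Eq. (12): recover the cross-range coordinates
x_hat = lam * R_n * dphi_AB / (2 * np.pi * L)
z_hat = lam * R_n * dphi_AC / (2 * np.pi * L)
print(round(x_hat, 6), round(z_hat, 6))  # 12.0 -7.5
```

Note that both phase differences here are far below $2\pi$, so the inversion is unambiguous, consistent with the constraint discussed in Sec. 3.2.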

## 3. Joint Cross Modified Wigner-Ville Distribution

### 3.1. Proposed Algorithm for Signal Separation

After the uniform motion compensation, the position of the autofocus center of each ISAR image remains the same relative to the three receiving antennas during the imaging time. As in ISAR imaging of maneuvering targets, the effective rotating velocities ${\omega}_{x}$ and ${\omega}_{z}$ are time-varying and cause the Doppler frequency to change, which can be exploited to realize high-resolution ISAR imaging. They can be approximated as

## (13)

$${\omega}_{x}({t}_{m})={\alpha}_{x}+{\beta}_{x}{t}_{m},\phantom{\rule[-0.0ex]{2em}{0.0ex}}{\omega}_{z}({t}_{m})={\alpha}_{z}+{\beta}_{z}{t}_{m}.$$

Thus, the corresponding effective range displacement can be expressed as

## (14)

$$\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})={x}_{P}{\int}_{0}^{{t}_{m}}{\omega}_{x}(\mu )\mathrm{d}\mu +{z}_{P}{\int}_{0}^{{t}_{m}}{\omega}_{z}(\mu )\mathrm{d}\mu ={x}_{P}\left({\alpha}_{x}{t}_{m}+\frac{1}{2}{\beta}_{x}{t}_{m}^{2}\right)+{z}_{P}\left({\alpha}_{z}{t}_{m}+\frac{1}{2}{\beta}_{z}{t}_{m}^{2}\right).$$

Then the echo signals received by receivers $A$, $B$, and $C$ in the $n$'th range cell become, respectively,

## (15)

$${s}_{\mathrm{\Gamma}}({t}_{m})=\sum _{i}^{P}{\sigma}_{i}\text{\hspace{0.17em}}\mathrm{exp}(-j{\varphi}_{{\mathrm{\Gamma}}_{i}})\mathrm{exp}\left[j2\pi \left({f}_{i}{t}_{m}+\frac{1}{2}{\mu}_{i}{t}_{m}^{2}\right)\right],$$

where ${f}_{i}$ and ${\mu}_{i}$ denote the centroid frequency (CF) and the chirp rate (CR) of the $i$'th scatterer, respectively. A classical bilinear time-frequency tool for such multicomponent linear frequency modulation signals is the Wigner-Ville distribution (WVD),^{24} which is extensively applied for ISAR imaging. However, when the WVD is directly applied to the echo signal of a single antenna, the interferometric phases are completely lost. As a result, the joint cross MWVD is introduced to separate the scatterers in the same range cell, owing to its good phase preservation and searching-free procedure; the term "joint cross" refers to the cross-correlation of the data acquired from two different antennas among the three. The key to the joint cross MWVD in this paper is the definition of the symmetric instantaneous cross-correlation function (SICCF) between two different receivers, which is essentially different from the MWVD that performs the instantaneous autocorrelation of a single receiver. Applying the bilinear transform to the signal of a single antenna, as in Ref. 24, loses the time-invariant interferometric phase information and hence causes the image interferometry to fail. Given the symmetric roles of antennas $B$ and $C$, we take the interferometric antenna pair $AB$ as an example. The SICCF of the receiver pair $AB$ is defined by

## (16)

$${R}_{AB}({t}_{m},\tau )={s}_{A}({t}_{m}+\frac{\tau}{2}){s}_{B}^{*}({t}_{m}-\frac{\tau}{2})\phantom{\rule{0ex}{0ex}}=\sum _{i}^{P}{\sigma}_{i}^{2}\text{\hspace{0.17em}}\mathrm{exp}(j\mathrm{\Delta}{\varphi}_{AB})\mathrm{exp}[j2\pi ({f}_{i}+{\mu}_{i}{t}_{m})\tau ]+{R}_{AB,\text{cross}},$$

where ${R}_{AB,\text{cross}}$ collects the cross-terms. Taking the Fourier transform of Eq. (16) with respect to the lag variable $\tau $ yields the joint cross WVD

## (17)

$${W}_{AB}({t}_{m},{f}_{\tau})=\int {s}_{A}({t}_{m}+\frac{\tau}{2}){s}_{B}^{*}({t}_{m}-\frac{\tau}{2})\mathrm{exp}(-j2\pi {f}_{\tau}\tau )\mathrm{d}\tau \phantom{\rule{0ex}{0ex}}=\sum _{i}^{P}{\sigma}_{i}^{2}\text{\hspace{0.17em}}\mathrm{exp}(j\mathrm{\Delta}{\varphi}_{AB})\delta [{f}_{\tau}-({f}_{i}+{\mu}_{i}{t}_{m})]+{W}_{AB,\text{cross}}.$$

In Eq. (17), the slow time variable ${t}_{m}$ and the lag variable $\tau $ linearly couple with each other; thus, the joint cross WVD of the receiver pair $AB$ peaks along the straight line ${f}_{\tau}={f}_{i}+{\mu}_{i}{t}_{m}$, whose intercept and slope correspond to the CF ${f}_{i}$ and the CR ${\mu}_{i}$ of the $i$'th scatterer. Borrowing the idea of the classical scale transform (ST), we propose the joint cross MWVD, which can be denoted as

## (18)

$${G}_{AB}({f}_{\tau},{f}_{\tau {t}_{m}})={\iint}_{\tau ,\tau {t}_{m}}{R}_{AB}({t}_{m},\tau )\mathrm{exp}\{-j2\pi [{f}_{\tau}\tau +{f}_{\tau {t}_{m}}(\tau {t}_{m})]\}\mathrm{d}[\tau {t}_{m}]\mathrm{d}\tau \phantom{\rule{0ex}{0ex}}=\sum _{i}^{P}{\sigma}_{i}^{2}\text{\hspace{0.17em}}\mathrm{exp}(j\mathrm{\Delta}{\varphi}_{AB})\delta ({f}_{\tau}-{f}_{i})\delta ({f}_{\tau {t}_{m}}-{\mu}_{i})+{G}_{AB,\text{cross}}.$$

In Eq. (18), the linear coupling between the slow time variable ${t}_{m}$ and the lag variable $\tau $ is removed, and the signal energy is completely accumulated by FFT operations alone, without searching any parameters. It is also clearly seen that the CF and the CR are closely related to the scatterer's coordinates; different scatterers with different coordinates will be discriminated from each other in the centroid frequency and chirp rate domain (CFCRD). After the joint cross MWVD, each scatterer can be easily separated as a peak point in the CFCRD.
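As a concrete illustration, the sketch below evaluates Eq. (18) by brute force for a single scatterer observed by an antenna pair; it is not the paper's fast implementation (which replaces the inner sums with the scale transform and FFTs as in Eq. (19)), and all signal parameters are assumed values chosen for clarity.

```python
import numpy as np

def joint_cross_mwvd(sA, sB, fs, f_grid, mu_grid):
    """Brute-force evaluation of the joint cross MWVD of Eq. (18)."""
    N = len(sA)
    tm = (np.arange(N) - N // 2) / fs              # centered slow-time axis
    G = np.zeros((len(f_grid), len(mu_grid)), dtype=complex)
    for k in range(1, N // 2):                     # half-lag index, tau = 2k/fs
        tau = 2 * k / fs
        idx = np.arange(k, N - k)
        R = sA[idx + k] * np.conj(sB[idx - k])     # SICCF of Eq. (16)
        for a, f in enumerate(f_grid):
            for b, mu in enumerate(mu_grid):
                G[a, b] += np.sum(R * np.exp(-2j * np.pi * (f + mu * tm[idx]) * tau))
    return G

# One scatterer seen by the antenna pair AB: identical LFMs whose constant
# phases differ by the interferometric phase dphi (assumed values).
fs, N = 64.0, 128
tm = (np.arange(N) - N // 2) / fs
f0, mu0, dphi = 10.0, 5.0, 0.8
chirp = np.exp(2j * np.pi * (f0 * tm + 0.5 * mu0 * tm**2))
sA = chirp
sB = chirp * np.exp(-1j * dphi)                 # sA * conj(sB) carries exp(+j*dphi)

f_grid = np.arange(-15.0, 16.0, 1.0)
mu_grid = np.arange(-15.0, 16.0, 1.0)
G = joint_cross_mwvd(sA, sB, fs, f_grid, mu_grid)
a, b = np.unravel_index(np.argmax(np.abs(G)), G.shape)
print(f_grid[a], mu_grid[b], np.angle(G[a, b]))  # peak at (10.0, 5.0), phase ~0.8
```

The peak lands at the scatterer's (CF, CR) pair, and its complex value retains the interferometric phase, which is exactly the property the algorithm relies on for Eq. (12).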

### 3.2. Proposed Algorithm for Information Extraction

Without loss of generality, the joint cross MWVD of the scatterer $P$ from the antenna pair $AB$ can be denoted as

## (19)

$${G}_{AB}({f}_{\tau},{f}_{\tau {t}_{m}})={\mathrm{FFT}}_{\tau}({\mathrm{FFT}}_{{t}_{m}^{\prime}}\{{\mathrm{ST}}_{\tau {t}_{m}}[{R}_{AB}({t}_{m},\tau )]\})={\sigma}_{p}^{2}\text{\hspace{0.17em}}\mathrm{exp}(j\mathrm{\Delta}{\varphi}_{AB})\delta ({f}_{\tau}-{f}_{p})\delta ({f}_{\tau {t}_{m}}-{\mu}_{p}),$$

and the interferometric phases can be extracted at the peak position $({f}_{p},{\mu}_{p})$ as

## (20)

$$\mathrm{\Delta}{\varphi}_{AB}=\angle [{G}_{AB}({f}_{p},{\mu}_{p})],\phantom{\rule[-0.0ex]{2em}{0.0ex}}\mathrm{\Delta}{\varphi}_{AC}=\angle [{G}_{AC}({f}_{p},{\mu}_{p})].$$

We can then use Eq. (12) to solve the 3-D position of all scatterers in the $n$'th range cell. Hence, the 3-D positions of all scatterers on the target can be easily obtained by applying the same process in all range cells.

In addition, it is worthwhile to mention that the solution of $({x}_{p},{z}_{p})$ in Eq. (12) is correct only when the phase differences $\mathrm{\Delta}{\varphi}_{AB}$ and $\mathrm{\Delta}{\varphi}_{AC}$ do not exceed $2\pi $. Hence,

## (21)

$$\mathrm{\Delta}{\varphi}_{AB}=\frac{2\pi L{x}_{p}}{\lambda {R}_{P}}\le 2\pi ,\phantom{\rule[-0.0ex]{2em}{0.0ex}}\mathrm{\Delta}{\varphi}_{AC}=\frac{2\pi L{z}_{p}}{\lambda {R}_{P}}\le 2\pi .$$

### 3.3. Rotation Parameter Retrieval

The rotation parameter estimation is an essential task for ISAR and has drawn much attention.^{25}^{,}^{26} According to Eqs. (13)–(15), when the coordinates of the scatterer are fixed, the CF ${f}_{p}$ and the CR ${\mu}_{p}$ of the scatterer $P$ mainly depend on the angular velocity ${\alpha}_{x}$, ${\alpha}_{z}$ and angular acceleration ${\beta}_{x}$, ${\beta}_{z}$ along the $X$ and $Z$ axes, respectively. That is,

## (23)

$${f}_{p}=-2({x}_{P}{\alpha}_{x}+{z}_{P}{\alpha}_{z})/\lambda =-2{\alpha}_{e}({x}_{P}\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}\theta +{z}_{P}\text{\hspace{0.17em}}\mathrm{sin}\text{\hspace{0.17em}}\theta )/\lambda ,$$

## (24)

$${\mu}_{p}=-2({x}_{P}{\beta}_{x}+{z}_{P}{\beta}_{z})/\lambda =-2{\beta}_{e}({x}_{P}\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}\theta +{z}_{P}\text{\hspace{0.17em}}\mathrm{sin}\text{\hspace{0.17em}}\theta )/\lambda ,$$

where ${\alpha}_{e}$ and ${\beta}_{e}$ denote the effective initial rotating velocity (IRV) and the effective rotating acceleration (RA), respectively, and $\theta $ is the orientation angle of the effective rotation vector, i.e., ${\alpha}_{x}={\alpha}_{e}\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}\theta $ and ${\alpha}_{z}={\alpha}_{e}\text{\hspace{0.17em}}\mathrm{sin}\text{\hspace{0.17em}}\theta $. Accordingly, the parameters ${\alpha}_{e}$ and ${\beta}_{e}$ can be estimated with the estimated coordinates ${x}_{P}$ and ${z}_{P}$. Considering only the contribution of the $i$'th scatterer, Eqs. (23) and (24) can be rewritten as

## (25)

$${C}_{i}=a{X}_{i}+b{Z}_{i},$$

## (26)

$${D}_{i}=c{X}_{i}+d{Z}_{i},$$

where ${C}_{i}=-\lambda {f}_{i}/2$, ${D}_{i}=-\lambda {\mu}_{i}/2$, ${X}_{i}={x}_{i}$, ${Z}_{i}={z}_{i}$, $a={\alpha}_{e}\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}\theta $, $b={\alpha}_{e}\text{\hspace{0.17em}}\mathrm{sin}\text{\hspace{0.17em}}\theta $, $c={\beta}_{e}\text{\hspace{0.17em}}\mathrm{cos}\text{\hspace{0.17em}}\theta $, and $d={\beta}_{e}\text{\hspace{0.17em}}\mathrm{sin}\text{\hspace{0.17em}}\theta $. ${C}_{i}$ and ${D}_{i}$, respectively, represent the CF and the CR of the $i$'th scatterer, which have been extracted in Eq. (19), and the coordinates ${X}_{i}$ and ${Z}_{i}$ can be calculated from Eq. (12). The parameters $a$, $b$, $c$, and $d$, and hence ${\alpha}_{e}$ and ${\beta}_{e}$, can then be estimated by minimizing the least-squares cost functions

## (27)

$$\mathrm{\Psi}(a,b)=\sum _{i}^{{N}_{P}}{[{C}_{i}-(a{X}_{i}+b{Z}_{i})]}^{2},\phantom{\rule[-0.0ex]{2em}{0.0ex}}\mathrm{\Upsilon}(c,d)=\sum _{i}^{{N}_{P}}{[{D}_{i}-(c{X}_{i}+d{Z}_{i})]}^{2},$$

where ${N}_{P}$ is the number of separated scatterers. Consequently, the effective IRV ${\alpha}_{e}$ and the effective RA ${\beta}_{e}$ can be obtained from the estimates $\widehat{a}$, $\widehat{b}$, $\widehat{c}$, and $\widehat{d}$, e.g., ${\alpha}_{e}=\sqrt{{\widehat{a}}^{2}+{\widehat{b}}^{2}}$ and ${\beta}_{e}=\sqrt{{\widehat{c}}^{2}+{\widehat{d}}^{2}}$.
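The minimization of Eq. (27) is an ordinary least-squares problem. The sketch below uses assumed scatterer coordinates and rotation parameters; the recovery of the effective rotation magnitudes follows from the definitions of $a$, $b$, $c$, and $d$ as scaled sines and cosines of $\theta$.

```python
import numpy as np

# Least-squares fit of Eq. (27): C_i = a*X_i + b*Z_i and D_i = c*X_i + d*Z_i,
# then alpha_e = sqrt(a^2 + b^2) and beta_e = sqrt(c^2 + d^2).
# Scatterer coordinates and rotation parameters are assumed for illustration.
rng = np.random.default_rng(0)
X = rng.uniform(-20, 20, size=10)        # estimated x-coordinates (m)
Z = rng.uniform(-20, 20, size=10)        # estimated z-coordinates (m)

alpha_e, beta_e, theta = 0.09, 0.085, np.deg2rad(30)
a, b = alpha_e * np.cos(theta), alpha_e * np.sin(theta)
c, d = beta_e * np.cos(theta), beta_e * np.sin(theta)

C = a * X + b * Z                         # C_i = -lambda * f_i / 2
D = c * X + d * Z                         # D_i = -lambda * mu_i / 2

A = np.column_stack([X, Z])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, C, rcond=None)
(c_hat, d_hat), *_ = np.linalg.lstsq(A, D, rcond=None)

alpha_hat = np.hypot(a_hat, b_hat)        # effective IRV estimate
beta_hat = np.hypot(c_hat, d_hat)         # effective RA estimate
print(alpha_hat, beta_hat)  # ~0.09, ~0.085
```

In practice the noisy CF/CR estimates replace the exact `C` and `D` above, and the least-squares fit averages the errors over all separated scatterers.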

## 4. Performance of Joint Cross Modified Wigner-Ville Distribution

### 4.1. Analysis of the Cross-Terms

In order to obtain high-resolution imaging, the echo signals in each range cell have to be modeled as multicomponent LFM signals. Moreover, due to the nonlinear characteristic of the SICCF, cross-terms are inevitable and may affect the detection of the self-terms. We therefore need to analyze the performance of the joint cross MWVD for multicomponent LFM signals. For simplicity, assume that there are two scatterers; the analysis extends directly to more than two components:

## (29)

$${s}_{A}({t}_{m})={s}_{{A}_{P}}({t}_{m})+{s}_{{A}_{Q}}({t}_{m})\phantom{\rule{0ex}{0ex}}={\sigma}_{p}\text{\hspace{0.17em}}\mathrm{exp}(-j{\varphi}_{{A}_{p}})\mathrm{exp}\left[j2\pi \left({f}_{p}{t}_{m}+\frac{1}{2}{\mu}_{p}{t}_{m}^{2}\right)\right]+{\sigma}_{q}\text{\hspace{0.17em}}\mathrm{exp}(-j{\varphi}_{{A}_{q}})\mathrm{exp}\left[j2\pi \left({f}_{q}{t}_{m}+\frac{1}{2}{\mu}_{q}{t}_{m}^{2}\right)\right],$$

## (30)

$${s}_{B}({t}_{m})={s}_{{B}_{P}}({t}_{m})+{s}_{{B}_{Q}}({t}_{m})\phantom{\rule{0ex}{0ex}}={\sigma}_{p}\text{\hspace{0.17em}}\mathrm{exp}(-j{\varphi}_{{B}_{p}})\mathrm{exp}\left[j2\pi \left({f}_{p}{t}_{m}+\frac{1}{2}{\mu}_{p}{t}_{m}^{2}\right)\right]+{\sigma}_{q}\text{\hspace{0.17em}}\mathrm{exp}(-j{\varphi}_{{B}_{q}})\mathrm{exp}\left[j2\pi \left({f}_{q}{t}_{m}+\frac{1}{2}{\mu}_{q}{t}_{m}^{2}\right)\right].$$

Substituting Eqs. (29) and (30) into the SICCF of Eq. (16), we obtain

## (31)

$${R}_{AB}({t}_{m},\tau )={R}_{A{B}_{p,\text{self}}}({t}_{m},\tau )+{R}_{A{B}_{q,\text{self}}}({t}_{m},\tau )+{R}_{A{B}_{pq,\text{cross}}}({t}_{m},\tau )+{R}_{A{B}_{qp,\text{cross}}}({t}_{m},\tau ),$$

where the cross-term ${R}_{A{B}_{pq,\text{cross}}}({t}_{m},\tau )$ takes the form

## (32)

$${R}_{A{B}_{pq,\text{cross}}}({t}_{m},\tau )={\sigma}_{p}{\sigma}_{q}\text{\hspace{0.17em}}\mathrm{exp}[j({\varphi}_{{B}_{q}}-{\varphi}_{{A}_{p}})]\phantom{\rule{0ex}{0ex}}\mathrm{exp}\left\{j2\pi \left[({f}_{p}-{f}_{q}){t}_{m}+({f}_{p}+{f}_{q})\frac{\tau}{2}+\frac{1}{2}({\mu}_{p}-{\mu}_{q}){t}_{m}^{2}+\frac{1}{2}({\mu}_{p}+{\mu}_{q})\tau {t}_{m}+\frac{1}{2}({\mu}_{p}-{\mu}_{q})\frac{{\tau}^{2}}{4}\right]\right\},$$

and after the scale transform ${t}_{m}={t}_{m}^{\prime}/\tau $, the cross-term becomes

## (33)

$${\mathrm{ST}}_{A{B}_{pq,\text{cross}}}({t}_{m}^{\prime},\tau )={\sigma}_{p}{\sigma}_{q}\text{\hspace{0.17em}}\mathrm{exp}[j({\varphi}_{{B}_{q}}-{\varphi}_{{A}_{p}})]\phantom{\rule{0ex}{0ex}}\mathrm{exp}\left\{j2\pi \left[({f}_{p}-{f}_{q})\frac{{t}_{m}^{\prime}}{\tau}+({f}_{p}+{f}_{q})\frac{\tau}{2}+\frac{1}{2}({\mu}_{p}-{\mu}_{q}){\left(\frac{{t}_{m}^{\prime}}{\tau}\right)}^{2}+\frac{1}{2}({\mu}_{p}+{\mu}_{q}){t}_{m}^{\prime}+\frac{1}{2}({\mu}_{p}-{\mu}_{q})\frac{{\tau}^{2}}{4}\right]\right\}.$$

It can be seen from Eq. (33) that the ST corrects the linear CR migration of the self-terms only, not that of the cross-terms. Thus, the energy of the self-terms is well accumulated after the MWVD, while the energy of the cross-terms is typically dispersed over the whole distribution. Here, a simulation verifies that the proposed algorithm can handle the multicomponent situation. Consider three components, denoted AU1, AU2, and AU3. The sample frequency is 256 Hz, and the effective signal length is 512. The CFs and CRs of AU1, AU2, and AU3 are ${f}_{1}=20\text{\hspace{0.17em}}\mathrm{Hz}$, ${\mu}_{1}=20\text{\hspace{0.17em}}\mathrm{Hz}/\mathrm{s}$; ${f}_{2}=-20\text{\hspace{0.17em}}\mathrm{Hz}$, ${\mu}_{2}=20\text{\hspace{0.17em}}\mathrm{Hz}/\mathrm{s}$; and ${f}_{3}=20\text{\hspace{0.17em}}\mathrm{Hz}$, ${\mu}_{3}=-20\text{\hspace{0.17em}}\mathrm{Hz}/\mathrm{s}$.
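The structural difference between self-terms and cross-terms after the ST can also be checked symbolically; the sketch below (phases written without the $2\pi$ factor and constant terms) reproduces the phase of Eqs. (32)–(33) and confirms that only the self-term becomes linear in $(\tau ,{t}_{m}^{\prime})$.

```python
import sympy as sp

# Symbolic check of Eqs. (32)-(33): after the scale transform t_m = t'/tau,
# the self-term phase is linear in (tau, t') while the cross-term is not.
tm, tau, tprime = sp.symbols("t_m tau t_prime", real=True)
fp, fq, mup, muq = sp.symbols("f_p f_q mu_p mu_q", real=True)

def siccf_phase(f1, mu1, f2, mu2):
    # phase of s1(t_m + tau/2) * conj(s2(t_m - tau/2)), constants dropped
    p1 = f1 * (tm + tau / 2) + mu1 * (tm + tau / 2) ** 2 / 2
    p2 = f2 * (tm - tau / 2) + mu2 * (tm - tau / 2) ** 2 / 2
    return sp.expand(p1 - p2)

self_term = siccf_phase(fp, mup, fp, mup).subs(tm, tprime / tau)
cross_term = siccf_phase(fp, mup, fq, muq).subs(tm, tprime / tau)

print(sp.expand(self_term))   # f_p*tau + mu_p*t_prime : linear, so FFTs focus it
print(sp.expand(cross_term))  # keeps t_prime**2/tau**2 and tau**2 terms
```

Because the cross-term phase stays quadratic in ${t}_{m}^{\prime}/\tau$, the subsequent FFTs cannot focus it, which is why its energy spreads over the CFCRD.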

As illustrated in Fig. 2, the couplings of the self-terms are removed by the ST, whereas those of the cross-terms are not. Therefore, the energy of the self-terms is well accumulated in the CFCRD, and the proposed method is suitable for such complicated multicomponent situations. In the above simulation, the amplitudes of the three LFM signals are identical. In a real-world application with different amplitudes, however, a modified CLEAN technique has to be performed to separate strong and weak LFM signals without losing the significant interferometric phase information.

### 4.2. Analysis of the Computational Cost

Due to the existence of the CR in Eq. (15), the echo signals received by the different receivers cannot be directly integrated to form the image. Therefore, algorithms have been proposed to estimate the parameters for reconstructing the 3-D positions of a moving target, such as the Radon transform,^{15} the chirplet decomposition algorithm,^{12} and Lv's distribution.^{27} However, searching procedures are necessary for the Radon transform and the adaptive chirplet algorithm, which reduces the computational efficiency. Although Lv's distribution can effectively avoid searching procedures, this advantage comes at the cost of redundant information, which is not always easy to obtain in real ISAR applications because of the target's maneuverability.^{28}

From the above analysis, we can see that the computation is mainly determined by the signal-separation stage. The implementation comprises the defined SICCF of each receiver pair $[O({N}_{{t}_{m}}^{2})]$, the ST based on the chirp-z transform $[O(3{N}_{{t}_{m}}^{2}\text{\hspace{0.17em}}{\mathrm{log}}_{2}\text{\hspace{0.17em}}{N}_{{t}_{m}})]$, and the Fourier transform with respect to the lag variable $[O({N}_{{t}_{m}}^{2}\text{\hspace{0.17em}}{\mathrm{log}}_{2}\text{\hspace{0.17em}}{N}_{{t}_{m}})]$. In contrast, algorithms such as the Radon transform and the chirplet algorithm search for the best-matched signal in the parameter domain. Because of their high computational load of $O(M{N}_{{t}_{m}}\text{\hspace{0.17em}}{\mathrm{log}}_{2}\text{\hspace{0.17em}}{N}_{{t}_{m}})$, where $M$ denotes the number of searching points and is normally greater than the number of echoes ${N}_{{t}_{m}}$, these existing algorithms are less suitable for real-time high-resolution ISAR imaging.^{28} Moreover, when the estimated parameters lie in a larger scope, the searching steps and initial parameter settings are more difficult to control and balance.
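The comparison can be made concrete with indicative operation counts; $N$ and $M$ below are assumed values, and a joint search over several parameters multiplies the $M$ factor once per dimension, which is the multidimensional searching burden also noted for the GRFT.

```python
from math import log2

# Indicative operation counts for Sec. 4.2. N is the number of echoes N_tm and
# M the number of searching points per parameter; both values are assumed.
N = 512
M = 1024

mwvd = N**2 + 3 * N**2 * log2(N) + N**2 * log2(N)  # SICCF + ST (chirp-z) + FFT
search_1d = M * N * log2(N)                        # single-parameter search
search_2d = M**2 * N * log2(N)                     # joint two-parameter search

print(f"MWVD: {mwvd:.3g}, 1-D search: {search_1d:.3g}, 2-D search: {search_2d:.3g}")
```

Even a modest two-parameter search already exceeds the searching-free MWVD cost by orders of magnitude, and the gap widens rapidly with each additional searched parameter.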

As is well known, the RFT/GRFT (Refs. 20–23) have been developed for motion estimation of maneuvering targets with arbitrary parameterized motion, and much research has verified their effectiveness. What is more, the GRFT has been successfully extended to the space-time RFT (Ref. 29) for wideband digital array radar and to the 3-D space RFT (3-D SRFT)^{30} to realize 3-D reconstruction of a moving target. However, fast implementations of the GRFT require further study because of the prohibitive computational burden induced by the multidimensional searching. Fortunately, the particle swarm optimizer^{28}^{,}^{31} has been widely employed to solve the multiparameter searching problems mentioned above.

Nonetheless, compared with the method proposed in this paper, the 3-D SRFT in Ref. 30, where the acceleration is omitted, only addresses slowly maneuvering targets, by searching for the minimum 3-D image entropy over six-dimensional motion parameters. Moreover, the searching procedures become more complicated as the number of motion parameters increases, as for targets with complex motion whose rotation may introduce quadratic phase terms.

## 5. Three-Dimensional Interferometric Inverse Synthetic Aperture Radar Imaging Algorithm Based on Joint Cross Modified Wigner-Ville Distribution

For 3-D InISAR imaging for maneuvering targets, the echo signals in a range cell at all receivers can be characterized as the same multicomponent LFM signals after uniform range alignment and phase adjustment. By using the MWVD algorithm without a searching procedure, the scatterer separation and phase extraction are simultaneously accomplished, and then 3-D images are achieved. Consequently, the 3-D InISAR imaging algorithm based on MWVD is illustrated in detail in the following, and the corresponding flowchart is shown in Fig. 3.

Step 1: Complete the range compression of the echo signals received by the three antennas.

Step 2: Choose antenna $A$ as the reference channel to accomplish the uniform motion compensation with the existing methods in Refs. 17–19; the scatterers on the target then share a uniform position and autofocus center for the three antennas.

Step 3: Utilize the function $\mathrm{exp}(j\pi {L}^{2}/\lambda {R}_{P})$ to compensate the phase difference due to $L$.

Step 4: For each range cell, separate each scatterer in CFCRD after joint cross MWVD between two antenna pairs $AB$ and $AC$.

Step 5: Apply the dechirping method to estimate the amplitude and subtract the estimated LFM from the original signal without loss of significant interferometric phase information. Meanwhile, extract the interferometric phase of the scatterer and regain the corresponding coordinate by using Eq. (12).

Step 6: Repeat steps 4 and 5 until the residual energy of the signal is smaller than threshold T.

Step 7: Repeat the aforementioned steps (4 to 6) until all the range cells have been finished.

Step 8: Combine the range information along the $Y$ axis and output 3-D images.
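Steps 4 to 6 amount to a CLEAN-style estimate-and-subtract iteration. The single-antenna sketch below assumes the $(f,\mu )$ peaks have already been located by the joint cross MWVD and only shows the dechirp, amplitude estimation, subtraction, and residual-energy stopping rule; all signal values are assumed.

```python
import numpy as np

# CLEAN-style loop of steps 4-6, sketched for one range cell and one antenna.
# The (f, mu) pairs stand in for joint cross MWVD peak detections.
fs, N = 256.0, 512
tm = (np.arange(N) - N // 2) / fs

def lfm(sigma, phi, f, mu):
    # one LFM component of Eq. (15): sigma * exp(-j*phi) * chirp(f, mu)
    return sigma * np.exp(-1j * phi) * np.exp(1j * 2 * np.pi * (f * tm + 0.5 * mu * tm**2))

components = [(1.0, 0.3, 20.0, 20.0), (0.4, -0.7, -20.0, 20.0)]
s = sum(lfm(*c) for c in components)

detected = [(20.0, 20.0), (-20.0, 20.0)]    # assumed MWVD peak positions
threshold = 1e-3 * np.sum(np.abs(s) ** 2)   # residual-energy threshold T

residual = s.copy()
for f, mu in detected:
    if np.sum(np.abs(residual) ** 2) < threshold:
        break                               # step 6: residual energy below T
    # step 5: dechirp, estimate the complex amplitude, subtract the LFM
    ref = np.exp(1j * 2 * np.pi * (f * tm + 0.5 * mu * tm**2))
    amp = np.mean(residual * np.conj(ref))  # complex amplitude sigma*exp(-j*phi)
    residual = residual - amp * ref
print(np.sum(np.abs(residual) ** 2))        # ~0 once both components are removed
```

The estimated complex amplitude retains the constant phase of each component, which is the property step 5 relies on to extract the interferometric phase without loss.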

## 6. Simulation Results

In realistic applications, the scatterers on the target may be composed of distributed sources, and some scatterers may be sheltered by the body of the target; the reader is referred to Ref. 32 for a detailed treatment. However, when the target size is much larger than the radar wavelength, the assumption that the scatterers on a real target can be regarded as separate point-like scatterers is normally valid,^{8}^{,}^{9}^{,}^{12–16} and the obtained image can be depicted by the locations of the strong scatterers. Therefore, the simulations in this paper always assume that the scatterers on the target are point-like.

### 6.1. Example A

In this section, a simple turntable target shown in Fig. 4(a) is modeled as seven scatterers. The parameters used are set as follows: target distance $R=10\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{km}$, baseline length $L=1\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{m}$, effective velocity ${\alpha}_{x}=0.08\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{rad}/\mathrm{s}$, ${\alpha}_{z}=0.04\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{rad}/\mathrm{s}$, and effective acceleration ${\beta}_{x}=0.06\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{rad}/{\mathrm{s}}^{2}$, ${\beta}_{z}=0.06\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{rad}/{\mathrm{s}}^{2}$. The pulse repetition frequency is 256 Hz, and the number of effective pulses is 512.

Figure 4(a) shows the ideal target model with seven ideal scatterer points. As clearly seen in Fig. 4(b), in the joint cross WVD of antenna pair $AB$ in a certain range cell (the 225th range cell), the three scatterers appear as different lines, but the signal energy is not well accumulated, and cross-terms do exist and should be considered in extreme cases. Based on this consideration, we introduce the joint cross MWVD to accomplish energy accumulation without loss of the interferometric phase information. In Fig. 4(c), each scatterer is distinctly separated in the CFCRD, and the coordinates of each scatterer can be obtained easily from the interferometric phase information at its peak. Consequently, the 3-D image of the target is achieved based on the proposed joint cross MWVD algorithm. Figure 4(d) shows the reconstructed 3-D InISAR image of the ideal model, where the real scatterer points are also presented for comparison.

To quantitatively evaluate the performance of the proposed algorithm, the relative mean square error (MSE) of the 3-D reconstructed coordinates is computed; the MSE is defined by^{33}

## (34)

$$\mathrm{MSE}=\frac{{\Vert {\mathfrak{R}}_{\mathrm{est}}-\mathfrak{R}\Vert}_{2}}{{\Vert \mathfrak{R}\Vert}_{2}},$$

where ${\mathfrak{R}}_{\mathrm{est}}$ and $\mathfrak{R}$ denote the reconstructed and true scatterer coordinates, respectively. The results are listed in Table 1.

## Table 1

Reconstruction performance of example A.

| Reconstructed coordinates | X-coordinate | Z-coordinate |
|---|---|---|
| MSE (%) | 7.31 | 9.53 |
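The relative error of Eq. (34) is straightforward to compute; as a sketch, the coordinates below are hypothetical values for illustration, not the ones from this simulation.

```python
import numpy as np

def relative_mse(r_est, r_true):
    """Relative reconstruction error of Eq. (34): ||R_est - R||_2 / ||R||_2."""
    r_est, r_true = np.asarray(r_est, float), np.asarray(r_true, float)
    return np.linalg.norm(r_est - r_true) / np.linalg.norm(r_true)

# hypothetical X-coordinates of three reconstructed scatterers (metres)
x_true = np.array([-2.0, 0.0, 3.0])
x_est = np.array([-1.9, 0.1, 3.2])
print(f"{100 * relative_mse(x_est, x_true):.2f}%")   # → 6.79%
```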

## 6.2.

### Example B

In this simulation, we perform the proposed 3-D InISAR algorithm on a synthetic airplane model, a rigid object composed of 137 ideal scatterers. The corresponding simulation parameters are listed in Table 2, and the target moves along a straight line with respect to the radar LOS. Here, we assume that uniform motion compensation has been completed and that all scatterers share the same autofocus center for the three antennas.

## Table 2

Simulation parameters.

| Radar | | Target | | |
|---|---|---|---|---|
| Carrier frequency | 10 GHz | Target distance | 100 km | |
| Bandwidth | 500 MHz | Baseline length | 10 m | |
| Pulse repetition frequency | 512 Hz | Effective rotation | X | Y |
| Number of effective echoes | 1024 | Velocity (rad/s) | 0.12 | 0.40 |
| Sample frequency | 1 GHz | Acceleration (rad/s²) | 0.08 | 0.36 |

The models are shown in Fig. 5, and the 3-D InISAR imaging results viewed from three visual angles are given in Fig. 6, where the reconstructed results correspond to projections on the $XY$, $XZ$, and $YZ$ planes, respectively. As is clearly seen from these figures, although not all of the scatterers are reconstructed correctly, the proposed algorithm can perform high-quality 3-D InISAR imaging of the maneuvering target. The position errors of the scatterers shown in Fig. 6 result from the aforementioned approximations, noise, and cross-terms in the correlation algorithm. Owing to the interference of noise and cross-terms, spurious scatterers are reconstructed, which leads to an inconsistency between the number of ideal scatterers and the number of reconstructed scatterers. In addition, in real ISAR imaging applications, the number of scatterers on the target is usually unknown, so the MSE of the 3-D reconstructed coordinates cannot be obtained. Fortunately, the reconstruction accuracy is closely related to the parameter estimation precision.

Therefore, similar to the quantitative evaluation of the 3-D reconstruction performance in Sec. 6.1, the MSEs of the IRV and RA are calculated using Eq. (34) to characterize the parameter estimation precision of the proposed algorithm quantitatively. Here, the input signal-to-noise ratio is 20 dB, and the experiment is repeated over 50 trials. From Table 3, it can be found that the MSEs of the parameters estimated with the proposed algorithm are relatively small and within the acceptable range for real application scenarios,^{33} which indirectly demonstrates the effectiveness of the reconstruction via the joint cross MWVD algorithm. On the other hand, to improve the 3-D image quality in the future, more attention should be paid to achieving higher antinoise performance and more effective cross-term suppression.
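A trial-averaged figure of this kind can be reproduced in spirit as follows; the scatter of the estimates and the random seed are hypothetical stand-ins for the 20-dB, 50-trial experiment, so the printed percentage is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

def trial_mse(estimates, true_value):
    """Eq. (34) applied over repeated trials of a scalar parameter estimate."""
    estimates = np.asarray(estimates, float)
    truth = np.full_like(estimates, true_value)
    return np.linalg.norm(estimates - truth) / np.linalg.norm(truth)

# 50 noisy estimates of the image rotational velocity (hypothetical spread)
true_irv = 0.4176
estimates = true_irv + 0.02 * rng.standard_normal(50)
print(f"IRV MSE = {100 * trial_mse(estimates, true_irv):.1f}%")
```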

## Table 3

Reconstruction performance of example B.

| Estimated parameters | True value | Average estimated value | MSE (%) |
|---|---|---|---|
| IRV | 0.4176 | 0.3570 | 14.52 |
| RA | 0.3688 | 0.3489 | 5.41 |

## 7.

## Conclusion

This paper has presented a 3-D InISAR imaging algorithm for maneuvering targets based on the joint cross MWVD. The characteristics of this 3-D InISAR imaging algorithm include the following: (1) it is a nonsearching method for both cross-range resolution and interferometric phase extraction; (2) it can deal with multicomponent signals owing to its good performance in suppressing cross-terms via coherent integration; and (3) it can accurately retrieve the rotation parameters, which is essential for target recognition and target identification in ISAR imaging applications.

## Appendices

## Appendix:

### Derivation of Signal Phase

This appendix mainly presents the simplification of the phase in Eq. (9). Rearranging Eqs. (3)–(5), we have

## (35)

$${R}_{{A}_{P}}({t}_{m})\approx {R}_{P}+\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})+\frac{2{R}_{P}\mathrm{\Delta}R({t}_{m})+{[\mathrm{\Delta}R({t}_{m})]}^{2}}{2{R}_{P}}+\frac{2{x}_{p}\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})+2{z}_{p}\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})+{[\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})]}^{2}+{[\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})]}^{2}+{[\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})]}^{2}+2\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})\mathrm{\Delta}R({t}_{m})}{2{R}_{P}},$$

## (36)

$${\mathrm{\Phi}}_{{A}_{P}}({t}_{m})\approx \frac{4\pi}{\lambda}[{R}_{P}+\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})]+\frac{2\pi}{\lambda}\left\{2\mathrm{\Delta}R({t}_{m})+\frac{{[\mathrm{\Delta}R({t}_{m})]}^{2}}{{R}_{{O}^{\prime}}}\right\}+\frac{2\pi}{\lambda}\left\{\frac{2{x}_{p}\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})+2{z}_{p}\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})+{[\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})]}^{2}+{[\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})]}^{2}+{[\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})]}^{2}+2\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m})\mathrm{\Delta}R({t}_{m})}{{R}_{P}}\right\}.$$

Evidently, the second term in Eq. (36), which is independent of the scatterer, is the translational component removed by autofocus; it should be estimated and compensated identically for all scatterers on the target during motion compensation. Moreover, since the rotation angle is small, the third term in Eq. (36) can be neglected, as demonstrated in Ref. 12.

After the autofocus and the aforementioned approximation, the phase in Eq. (9) can be rewritten as

## (37)

$${\mathrm{\Phi}}_{{A}_{P}}({t}_{m})=\frac{4\pi}{\lambda}{R}_{P}+\frac{4\pi}{\lambda}\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m}),$$

## (38)

$${\mathrm{\Phi}}_{{B}_{P}}({t}_{m})={\mathrm{\Phi}}_{{A}_{P}}({t}_{m})+\frac{2\pi}{\lambda}\frac{L{x}_{p}}{{R}_{P}}+\frac{2\pi}{\lambda}\frac{L\mathrm{\Delta}{R}_{{x}_{P}}({t}_{m})}{{R}_{P}}+\frac{\pi}{\lambda}\frac{{L}^{2}}{{R}_{P}},$$

## (39)

$${\mathrm{\Phi}}_{{C}_{P}}({t}_{m})={\mathrm{\Phi}}_{{A}_{P}}({t}_{m})+\frac{2\pi}{\lambda}\frac{L{z}_{p}}{{R}_{P}}+\frac{2\pi}{\lambda}\frac{L\mathrm{\Delta}{R}_{{z}_{P}}({t}_{m})}{{R}_{P}}+\frac{\pi}{\lambda}\frac{{L}^{2}}{{R}_{P}}.$$

Similarly, when the baselines are much shorter than the radar-target distance, the third terms in Eqs. (38) and (39) can also be neglected. As the baseline length $L$ is known, the fourth terms can be compensated easily. The phases then become

## (40)

$${\mathrm{\Phi}}_{{A}_{P}}({t}_{m})=\frac{4\pi}{\lambda}{R}_{P}+\frac{4\pi}{\lambda}\mathrm{\Delta}{R}_{{y}_{P}}({t}_{m}),$$

## Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant Nos. 61271024 and 61201292. Qian Lv conceived the work that led to this submission, designed the experiments, and wrote the manuscript. Jibin Zheng was responsible for interpreting the results and drafting the manuscript. Jiancheng Zhang played an important role in revising the manuscript and providing English language support. Tao Su was mainly responsible for data analysis and approval of the final version.

## References

1. F. Berizzi et al., "High-resolution ISAR imaging of maneuvering targets by means of the range instantaneous Doppler technique: modeling and performance analysis," IEEE Trans. Image Process. 10(12), 1880–1890 (2001), http://dx.doi.org/10.1109/83.974573.

2. Y. Wang and B. Zhao, "Inverse synthetic aperture radar imaging of nonuniformly rotating target based on the parameters estimation of multicomponent quadratic frequency-modulated signals," IEEE Sensors J. 15(7), 4053–4061 (2015), http://dx.doi.org/10.1109/JSEN.2015.2409884.

3. Y. Li et al., "Inverse synthetic aperture radar imaging of targets with nonsevere maneuverability based on the centroid frequency chirp rate distribution," J. Appl. Remote Sens. 9, 095065 (2015), http://dx.doi.org/10.1117/1.JRS.9.095065.

4. C. M. Ye et al., "Key parameter estimation for radar rotating object imaging with multi-aspect observations," Sci. China Inf. Sci. 53(8), 1641–1652 (2010), http://dx.doi.org/10.1007/s11432-010-4028-3.

5. J. Zheng et al., "ISAR imaging of targets with complex motions based on the keystone time-chirp rate distribution," IEEE Geosci. Remote Sens. Lett. 11(7), 1275–1279 (2014), http://dx.doi.org/10.1109/LGRS.2013.2291992.

6. Y. Wang, "Inverse synthetic aperture radar imaging of manoeuvring target based on range-instantaneous-Doppler and range-instantaneous-chirp-rate algorithms," IET Radar, Sonar Navig. 6(9), 921–928 (2012), http://dx.doi.org/10.1049/iet-rsn.2012.0091.

7. Z. Li, S. Papson and R. M. Narayanan, "Data-level fusion of multilook inverse synthetic aperture radar images," IEEE Trans. Geosci. Remote Sens. 46(5), 1394–1406 (2008), http://dx.doi.org/10.1109/TGRS.2008.916088.

8. Y. Liu et al., "Achieving high-quality three-dimensional InISAR imageries of maneuvering target via super-resolution ISAR imaging by exploiting sparseness," IEEE Geosci. Remote Sens. Lett. 11(4), 828–832 (2014), http://dx.doi.org/10.1109/LGRS.2013.2279402.

9. M. Martorella et al., "3D interferometric ISAR imaging of noncooperative targets," IEEE Trans. Aerosp. Electron. Syst. 50(4), 3102–3114 (2014), http://dx.doi.org/10.1109/TAES.2014.130210.

10. C. Z. Ma et al., "Three-dimensional ISAR imaging based on antenna array," IEEE Trans. Geosci. Remote Sens. 46(2), 504–515 (2008), http://dx.doi.org/10.1109/TGRS.2007.909946.

11. C. Z. Ma et al., "Three-dimensional ISAR imaging using a two-dimensional sparse antenna array," IEEE Geosci. Remote Sens. Lett. 5(3), 378–382 (2008), http://dx.doi.org/10.1109/LGRS.2008.916071.

12. G. Wang, X. Xia and V. C. Chen, "Three-dimensional ISAR imaging of maneuvering targets using three receivers," IEEE Trans. Image Process. 10(3), 436–447 (2001), http://dx.doi.org/10.1109/83.908519.

13. X. Xu and R. M. Narayanan, "Three-dimensional interferometric ISAR imaging for target scattering diagnosis and modeling," IEEE Trans. Image Process. 10(7), 1094–1102 (2001), http://dx.doi.org/10.1109/83.931103.

14. Q. Zhang, T. S. Yeo and G. Du, "Estimation of three-dimensional motion parameters in interferometric ISAR imaging," IEEE Trans. Geosci. Remote Sens. 42(2), 292–300 (2004), http://dx.doi.org/10.1109/TGRS.2003.815669.

15. D. Zhang et al., "A new interferometric ISAR image processing method for 3-D image reconstruction," in Proc. of IEEE Conf. on Synthetic Aperture Radar, pp. 555–558, IEEE (2007).

16. Y. Liu et al., "High-quality 3-D InISAR imaging of maneuvering target based on a combined processing algorithm," IEEE Geosci. Remote Sens. Lett. 10(5), 1036–1040 (2013), http://dx.doi.org/10.1109/LGRS.2012.2227935.

17. J. Wang and D. Kasilingam, "Global range alignment for ISAR," IEEE Trans. Aerosp. Electron. Syst. 39(1), 351–357 (2003), http://dx.doi.org/10.1109/TAES.2003.1188917.

18. M. Xing, R. Wu and Z. Bao, "High resolution ISAR imaging of high speed moving targets," IEE Proc. Radar, Sonar Navig. 152(2), 58–67 (2005), http://dx.doi.org/10.1049/ip-rsn:20045084.

19. X. Li, G. Liu and J. Ni, "Autofocusing of ISAR images based on entropy minimization," IEEE Trans. Aerosp. Electron. Syst. 35(4), 1240–1252 (1999), http://dx.doi.org/10.1109/7.805442.

20. J. Xu et al., "Radon-Fourier transform for radar target detection, I: generalized Doppler filter bank," IEEE Trans. Aerosp. Electron. Syst. 47(2), 1186–1202 (2011), http://dx.doi.org/10.1109/TAES.2011.5751251.

21. J. Xu et al., "Radon-Fourier transform for radar target detection (II): blind speed sidelobe suppression," IEEE Trans. Aerosp. Electron. Syst. 47(4), 1186–1202 (2011), http://dx.doi.org/10.1109/TAES.2011.6034645.

22. J. Yu et al., "Radon-Fourier transform for radar target detection (III): optimality and fast implementations," IEEE Trans. Aerosp. Electron. Syst. 48(2), 991–1004 (2012), http://dx.doi.org/10.1109/TAES.2012.6178044.

23. J. Xu et al., "Radar maneuvering target motion estimation based on generalized Radon-Fourier transform," IEEE Trans. Signal Process. 60(12), 6190–6201 (2012), http://dx.doi.org/10.1109/TSP.2012.2217137.

24. M. Xing et al., "New ISAR imaging algorithm based on modified Wigner-Ville distribution," IET Radar, Sonar Navig. 3(1), 70–80 (2009), http://dx.doi.org/10.1049/iet-rsn:20080003.

25. C. M. Yeh et al., "Rotational motion estimation for ISAR via triangle pose difference on two range-Doppler images," IET Radar, Sonar Navig. 4(4), 528–536 (2010), http://dx.doi.org/10.1049/iet-rsn.2009.0042.

26. S. B. Peng et al., "Inverse synthetic aperture radar rotation velocity estimation based on phase slope difference of two prominent scatterers," IET Radar, Sonar Navig. 5(9), 1002–1009 (2011), http://dx.doi.org/10.1049/iet-rsn.2010.0255.

27. X. Lv et al., "Lv's distribution: principle, implementation, properties, and performance," IEEE Trans. Signal Process. 59(8), 3576–3591 (2011), http://dx.doi.org/10.1109/TIP.2009.2032892.

28. J. Zheng et al., "ISAR imaging of nonuniformly rotating target based on a fast parameter estimation algorithm of cubic phase signal," IEEE Trans. Geosci. Remote Sens. 53(9), 4727–4740 (2015), http://dx.doi.org/10.1109/TGRS.2015.2408350.

29. J. Xu et al., "Space-time Radon-Fourier transform and applications in radar target detection," IET Radar, Sonar Navig. 6(9), 846–857 (2012), http://dx.doi.org/10.1049/iet-rsn.2011.0132.

30. J. Xu et al., "Radar target imaging using three-dimensional space Radon-Fourier transform," in Proc. of Int. Radar Conf., pp. 1–6 (2014).

31. L. C. Qian et al., "Fast implementation of generalised Radon-Fourier transform for manoeuvring radar target detection," Electron. Lett. 48(22), 1427–1428 (2012), http://dx.doi.org/10.1049/el.2012.2255.

32. X. Bai et al., "High-resolution three-dimensional imaging of spinning space debris," IEEE Trans. Geosci. Remote Sens. 47(7), 2352–2362 (2009), http://dx.doi.org/10.1109/TGRS.2008.2010854.

33. Y. Wu et al., "Fast marginalized sparse Bayesian learning for 3-D interferometric ISAR image formation via super-resolution ISAR imaging," IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens. 8(10), 4942–4951 (2015), http://dx.doi.org/10.1109/JSTARS.2015.2455508.

## Biography

**Qian Lv** received her BS degree in measuring and control techniques and instruments from Xi'an Shiyou University, Shaanxi, China, in 2013. Currently, she is working toward her PhD with the National Laboratory of Radar Signal Processing, Xidian University, Xi'an, China. Her research interests include SAR and inverse SAR signal processing, time-frequency analysis, and InISAR imaging.

**Tao Su** received his BS degree in information theory, MS degree in mobile communication, and PhD in signal and information processing from Xidian University, Xi'an, China, in 1990, 1993, and 1999, respectively. He has been with the National Laboratory of Radar Signal Processing, School of Electronic Engineering, Xidian University, since 1993, where he is currently a professor. His research interests include high-speed real-time signal processing for radar, sonar, and telecommunications; digital signal processing; parallel processing system design; and FPGA IP design.

**Jibin Zheng** received his degree in electronic information science and technology from Shandong Normal University, Shandong, China, in 2009 and his PhD in signal and information processing from Xidian University, Xi’an, China, in 2015. From September 2012 to September 2014, he worked as a visiting PhD student at the Department of Electrical Engineering, Duke University, Durham, North Carolina. His research interests include SAR and inverse SAR signal processing, and cognitive radar.

**Jiancheng Zhang** received his BS degree in measurement and control technology and instrumentation from Xidian University, Shaanxi, China, in 2011. Currently, he is pursuing his PhD in the National Key Laboratory of Radar Signal Processing, Xidian University, Xi’an, China. His research interests include target detection, parameter estimation, time-frequency analysis, and radar imaging.