- Research
- Open Access

# Adaptive matching pursuit with constrained total least squares

Tianyao Huang^{1}, Yimin Liu^{1} (email author), Huadong Meng^{1} and Xiqin Wang^{1}

*EURASIP Journal on Advances in Signal Processing* **2012**:76

https://doi.org/10.1186/1687-6180-2012-76

© Huang et al.; licensee Springer. 2012

**Received:** 12 September 2011 **Accepted:** 4 April 2012 **Published:** 4 April 2012

The Erratum to this article has been published in EURASIP Journal on Advances in Signal Processing 2015 2015:81

## Abstract

Compressive sensing (CS) can effectively recover a signal when it is sparse in some discrete atoms. However, in some applications, signals are sparse in a continuous parameter space, e.g., frequency space, rather than discrete atoms. Usually, we divide the continuous parameter into finite discrete grid points and build a dictionary from these grid points. However, the actual targets may not exactly lie on the grid points no matter how densely the parameter is grided, which introduces mismatch between the predefined dictionary and the actual one. In this article, a novel method, namely adaptive matching pursuit with constrained total least squares (AMP-CTLS), is proposed to find actual atoms even if they are not included in the initial dictionary. In AMP-CTLS, the grid and the dictionary are adaptively updated to better agree with measurements. The convergence of the algorithm is discussed, and numerical experiments demonstrate the advantages of AMP-CTLS.

## Keywords

- Grid Point
- Compressive Sensing
- Matching Pursuit
- Orthogonal Matching Pursuit
- Total Least Squares

## 1 Introduction

A class of techniques called compressed sampling or compressive sensing (CS) has been widely adopted recently, because CS has shown good performance in areas such as signal processing, communications and statistics; see, e.g., [1]. Generally, CS finds the sparsest vector **x** from measurements **y = Φx**, where **Φ** is often referred to as the *dictionary* and has more columns than rows, and each column of the dictionary is called an *atom* or a *basis*.

Matching pursuit (MP) denotes a family of popular greedy approaches to compressive sensing. The basic idea is to sequentially find the support set of **x** and then project onto the selected atoms. The atoms selected into the support set are mainly determined by the correlations between the atoms and the regularized measurements [2]. MP methods include standard MP [3] and several variants, such as orthogonal matching pursuit (OMP) [4], regularized OMP (ROMP) [5], stage-wise OMP (StOMP) [6], compressive sampling matching pursuit (CoSaMP) [7] and subspace pursuit (SP) [2].

These MP methods [2–7] do not consider the off-grid problem in grid-based CS approaches. In some applications of CS, such as harmonic retrieval and radar signal processing (e.g., range profiling [8, 9], direction of arrival estimation [10–12]), we usually divide a continuous parameter space into discrete grid points to generate the dictionary. For example, in harmonic retrieval, the frequency space is divided and the dictionary is a discrete Fourier transform (DFT) matrix. The off-grid problem emerges when the actual frequencies lie off the predefined grid; the mismatch between the predefined and actual atoms can lead to performance degradation in sparse recovery (e.g., [13–15]).

The grid misalignment problem in CS has recently received growing interest. The sensitivity of CS to the mismatch between the predefined and actual atoms is studied in [13]; however, that article focuses mainly on mismatch analysis rather than on developing an algorithm. Cabrera et al. [16] and Zhu et al. [14] provided, respectively, an iterative re-weighted (IRW)-based and a Lasso-based method to recover an unknown vector under atom misalignment, whereas we focus on MP methods in this article. Compared with IRW or Lasso, MP methods greedily find the support set and greatly reduce the dimension of the CS problem; thus, they have an advantage in computational cost. Gabriel [17] proposed best basis compressive sensing in a tree-structured dictionary, but some dictionaries (e.g., the DFT matrix) do not possess a tree structure.

To alleviate the off-grid problem in matching pursuit, we developed adaptive matching pursuit with constrained total least squares (AMP-CTLS). In AMP-CTLS, we model the grid as an unknown parameter and adaptively search for the best one. We choose harmonic retrieval to demonstrate the performance of AMP-CTLS. The algorithm can also be applied to jointly estimate range and velocity in randomized step frequency (RSF) radar. Note that in the RSF scenario range-velocity estimation is hard to solve directly with subspace-based methods, e.g., Capon's method, MUSIC and ESPRIT [18]: since only one snapshot of data is available in RSF radar, these methods must apply a smoothing method to obtain the covariance matrix, which requires a uniform and linear sampling condition [18], and this condition is not satisfied by the random frequency model.

This article is structured as follows. Section 2 introduces grid-based CS and outlines the procedures of AMP-CTLS. In Sections 3 and 4, we discuss the implementation of AMP-CTLS in harmonic retrieval and RSF radar, respectively. In Section 5, numerical examples are presented to illustrate the merits of AMP-CTLS. Section 6 is dedicated to a brief conclusion. *Notations:* (·)^{H} denotes the conjugate transpose; (·)^{T} the transpose; (·)* the conjugate; (·)^{†} the pseudo-inverse; **I**_{L}/**0**_{L} the *L × L* identity/zero matrix; || · ||_{2} the ℓ_{2} norm; {·} a set; |·| the absolute value of a complex number or the cardinality of a set; (·)_{Λ} the elements/columns of a vector/matrix indexed by the set Λ; supp(·) the support set of a vector, that is, the indices of its nonzero elements; Re(·) the real part of a complex number; ⊗ the right Kronecker product [19]; and *E*[·] the expectation of a random variable.

## 2 Grid-based CS and the AMP-CTLS algorithm

The signal model of grid-based CS is introduced in Section 2.1. We combine the greedy idea of MP methods with the constrained total least squares (CTLS) technique [20] to produce AMP-CTLS, which alleviates the off-grid problem. In AMP-CTLS, the grid is cast as an unknown parameter and is jointly estimated with **x**. In Section 2.2, the framework of AMP-CTLS is given. Section 2.3 is dedicated to the iterative joint estimator (IJE) algorithm, which is implemented in AMP-CTLS. The IJE algorithm uses the CTLS technique, which is presented in Section 2.4. Section 2.5 summarizes the entire procedure of AMP-CTLS, and Section 2.6 analyzes the convergence of IJE.

### 2.1 Grid-based CS

Grid-based CS assumes the signal model

$$\mathbf{y} = \mathbf{\Phi}(\mathbf{g})\,\mathbf{x} + \mathbf{w}, \qquad (1)$$

where **y** ∈ ℂ^{M × 1} and **w** ∈ ℂ^{M × 1} are the measurement vector and white Gaussian noise (WGN) vector, respectively, and **x** ∈ ℂ^{N × 1} is to be learned. **g** = [*g*_{1}, *g*_{2}, . . . , *g*_{N}]^{T} ∈ ℂ^{N × 1} collects the discrete grid points. **Φ**(**g**) ∈ ℂ^{M × N} is built from **g**, **Φ**(**g**) = [*ϕ*(*g*_{1}), *ϕ*(*g*_{2}), . . . , *ϕ*(*g*_{N})], and the mapping **g** → **Φ** is known. For example, to recover a frequency-sparse signal, we grid the frequency space into discrete frequency points $\mathbf{g} = \left[0, \frac{1}{N}, \frac{2}{N}, \dots, \frac{N-1}{N}\right]^{\mathsf{T}}$. **Φ** is then a DFT matrix whose *m* th-row, *n* th-column element is $\exp\left(j2\pi \frac{n}{N}m\right)$. However, the signal is sparse in the DFT atoms only if all of the sinusoids are exactly at the pre-defined grid points [13]. In some cases, no matter how densely we grid the frequency space, the sinusoids can be off-grid, which saps the performance of CS methods [13].
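The off-grid effect described above is easy to reproduce numerically. The sketch below (illustrative only; the sizes M = N = 32 and the test frequencies are our own choices, not taken from the article) builds the DFT dictionary and shows that an on-grid sinusoid has an exactly 1-sparse representation while a half-bin off-grid sinusoid leaks energy into every atom:

```python
import numpy as np

M = 32                       # number of measurements
N = 32                       # number of frequency grid points
m = np.arange(M)
n = np.arange(N)

# DFT dictionary: column n is the atom for grid frequency n/N
Phi = np.exp(1j * 2 * np.pi * np.outer(m, n / N))

# On-grid sinusoid (frequency 5/N): representation is exactly 1-sparse
y_on = np.exp(1j * 2 * np.pi * (5 / N) * m)
x_on = Phi.conj().T @ y_on / M
assert np.sum(np.abs(x_on) > 1e-6) == 1

# Off-grid sinusoid (frequency 5.5/N): energy leaks across all atoms
y_off = np.exp(1j * 2 * np.pi * (5.5 / N) * m)
x_off = Phi.conj().T @ y_off / M
print(np.sum(np.abs(x_off) > 1e-6))   # every one of the 32 coefficients is nonzero
```

Densifying the grid does not remove this leakage; it only moves the off-grid frequencies closer to some atom while raising the correlation between neighboring atoms.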

### 2.2 Main idea of AMP-CTLS

In AMP-CTLS, we seek the grid **g** as well as the sparsest **x** by solving the optimization problem

$$\min_{\mathbf{x},\,\mathbf{g}} ||\mathbf{x}||_0 \quad \text{s.t.} \quad ||\mathbf{y} - \mathbf{\Phi}(\mathbf{g})\,\mathbf{x}||_2^2 \le \eta, \qquad (2)$$

where *η* is the noise power. Equation (2) is similar to that used in traditional MP methods [2–7], except that we recover **x** *and* simultaneously estimate the grid. In most cases, solving (2) is a complex non-linear optimization problem; in this article, an iterative method is introduced.

Denote the support set estimated after the *k* th iteration as Λ^{(k)}, and denote the corresponding grid points as ${\widehat{\mathbf{g}}}_{\Lambda}^{(k)}$. In traditional MP methods [2–7], **x**_{Λ} is estimated by solving a least squares problem. In AMP-CTLS, considering the off-grid problem, we jointly search for **x**_{Λ} and the best grid points in the continuous region neighboring ${\widehat{\mathbf{g}}}_{\Lambda}^{(k)}$ via

$$\min_{\mathbf{x}_{\Lambda},\,\mathbf{g}_{\Lambda}} ||\mathbf{y} - \mathbf{\Phi}(\mathbf{g}_{\Lambda})\,\mathbf{x}_{\Lambda}||_2^2, \qquad (3)$$

in which we minimize the norm of the *residual error*, defined as **r** = **y** − **Φ**(**g**_{Λ})**x**_{Λ}.

We develop the iterative joint estimator (IJE) algorithm to solve (3), which is detailed in the ensuing section.

### 2.3 IJE algorithm

The IJE algorithm searches for the best grid points **g**_{Λ} in the neighborhood of the initial estimate ${\widehat{\mathbf{g}}}_{\Lambda}(0)$. The mismatch of the grid is denoted as $\Delta\mathbf{g}_{\Lambda} = \mathbf{g}_{\Lambda} - {\widehat{\mathbf{g}}}_{\Lambda}(0) = \left[\Delta g_1, \ldots, \Delta g_{|\Lambda|}\right]^{\mathsf{T}}$. IJE includes three steps: estimate the mismatch ${\widehat{\Delta\mathbf{g}}}_{\Lambda}$; update the grid with ${\widehat{\Delta\mathbf{g}}}_{\Lambda}$; and estimate **x**_{Λ} by projection onto the new grid points. These three steps are executed iteratively to pursue more accurate results. To distinguish these loops from the iterations that build the support set in (3), we denote *l* as the counter of loops in IJE; thus, IJE is expressed as follows:

$$\left[{\widehat{\Delta\mathbf{g}}}_{\Lambda}(l),\ \mathbf{x}_{\mathrm{CTLS}}\right] = \arg\min_{\Delta\mathbf{g}_{\Lambda},\,\mathbf{x}_{\Lambda}} C_{\mathrm{CTLS}}\left(\Delta\mathbf{g}_{\Lambda}, \mathbf{x}_{\Lambda}\right), \qquad (4)$$

$$\widehat{\mathbf{g}}_{\Lambda}(l) = \widehat{\mathbf{g}}_{\Lambda}(l-1) + {\widehat{\Delta\mathbf{g}}}_{\Lambda}(l), \qquad (5)$$

$$\widehat{\mathbf{x}}_{\Lambda}(l) = \arg\min_{\mathbf{x}_{\Lambda}} \left\|\mathbf{y} - \mathbf{\Phi}\left(\widehat{\mathbf{g}}_{\Lambda}(l)\right)\mathbf{x}_{\Lambda}\right\|_2^2. \qquad (6)$$

In (4), **g**_{Λ} and **x**_{Λ} are jointly estimated, and ${\widehat{\Delta\mathbf{g}}}_{\Lambda}(l)$ and **x**_{CTLS} are the results. *C*_{CTLS} denotes the penalty function of CTLS, which is detailed in Section 2.4. Since (6) is a linear least squares problem, the closed-form solution is

$$\widehat{\mathbf{x}}_{\Lambda}(l) = \mathbf{\Phi}\left(\widehat{\mathbf{g}}_{\Lambda}(l)\right)^{\dagger}\mathbf{y}. \qquad (7)$$

The loops are terminated when the norm of the residual error is no longer appreciably reduced.
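For intuition, the three IJE steps can be sketched for a single off-grid sinusoid (|Λ| = 1). This toy implementation is our own illustration, not the article's code: the mismatch estimate in step 1 uses a plain first-order least squares fit in Δg (a Gauss-Newton-style step) rather than the full CTLS penalty of Section 2.4, and the harmonic atom and frequencies are chosen for the example:

```python
import numpy as np

M = 32
m = np.arange(M)
f_true = 9.5 / M                          # actual frequency, off the initial grid
y = np.exp(1j * 2 * np.pi * f_true * m)   # noise-free measurement

def atom(g):      # dictionary column for grid point g
    return np.exp(1j * 2 * np.pi * g * m)

def datom(g):     # derivative of the atom with respect to g
    return 1j * 2 * np.pi * m * atom(g)

g = 9.0 / M                               # initial grid point
x = np.vdot(atom(g), y) / M               # initial projection (atom has norm^2 = M)
for l in range(14):
    # Step 1: first-order least squares estimate of the grid mismatch
    r = y - atom(g) * x
    h = datom(g) * x
    dg = (np.vdot(h, r) / np.vdot(h, h)).real
    # Step 2: update the grid point with the estimated mismatch
    g = g + dg
    # Step 3: re-project the measurement onto the updated atom
    x = np.vdot(atom(g), y) / M

print(abs(g - f_true))    # the grid error shrinks toward 0
```

Each pass reduces both the residual norm and the grid error, mirroring the alternating structure of (4)–(6).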

### 2.4 CTLS technique

Traditional MP methods [2–7] apply least squares to calculate the amplitudes **x**_{Λ} after finding the support set. When there are off-grid signals, mismatches occur in the dictionary; thus, we replace the least squares model with the total least squares (TLS) criterion, which is appropriate for fitting problems with perturbations in both the measurement vector and the dictionary [21]. Since the dictionary mismatches are constrained by the errors of the grid points, we introduce the constrained total least squares (CTLS) technique [20] in AMP-CTLS to jointly estimate the grid point errors and **x**_{Λ}, i.e., to solve (4). It has been proved that CTLS is equivalent to a constrained maximum likelihood estimator [20].

We now derive the CTLS penalty used in the *l* th IJE iteration. Assume that the mismatch Δ**g**_{Λ} is sufficiently small; then we can approximate the perfect dictionary **Φ**(**g**_{Λ}) as a linear combination in the mismatch Δ**g** with a Taylor expansion:

$$\mathbf{\Phi}(\mathbf{g}_{\Lambda}) = \mathbf{\Phi}\left(\widehat{\mathbf{g}}_{\Lambda}(l)\right) + \sum_{i=1}^{|\Lambda|} \Delta g_i\, \mathbf{R}_i\left(\widehat{\mathbf{g}}_{\Lambda}(l)\right) + o\left(\Delta g_i^2\right), \qquad (8)$$

where **R**_{i} ∈ ℂ^{M × |Λ|} is the partial derivative of the dictionary with respect to the *i* th grid point,

$$\mathbf{R}_i\left(\widehat{\mathbf{g}}_{\Lambda}(l)\right) = \frac{\partial\, \mathbf{\Phi}(\mathbf{g}_{\Lambda})}{\partial g_i}\bigg|_{\mathbf{g}_{\Lambda} = \widehat{\mathbf{g}}_{\Lambda}(l)}, \qquad (9)$$

and *o*(·) denotes higher-order terms. For simplicity, in this section we drop the iteration counter from the notation: ${\mathbf{R}}_{i}\left({\widehat{\mathbf{g}}}_{\Lambda}(l)\right)$ and $\mathbf{\Phi}\left({\widehat{\mathbf{g}}}_{\Lambda}(l)\right)$ are simplified as **R**_{i} and **Φ**_{Λ}, respectively. Neglecting $o\left(\Delta g_i^2\right)$, the signal model in (1) is replaced by

$$\mathbf{y} = \left(\mathbf{\Phi}_{\Lambda} + \sum_{i=1}^{|\Lambda|} \Delta g_i\, \mathbf{R}_i\right)\mathbf{x}_{\Lambda} + \mathbf{w}. \qquad (10)$$

CTLS treats the grid mismatch Δ**g**_{Λ} as an unknown random perturbation vector. The grid misalignment and the noise vector are combined into an (*M* + |Λ|)-dimensional vector $\mathbf{v} = \left[\left(\Delta\mathbf{g}_{\Lambda}\right)^{\mathsf{T}}, \mathbf{w}^{\mathsf{T}}\right]^{\mathsf{T}}$, and CTLS aims at minimizing $||\mathbf{v}||_2^2$. It has been proved that CTLS is equivalent to a constrained maximum likelihood estimator if **v** is a WGN vector [20]. Thus, we first whiten **v**. Assume that Δ**g**_{Λ} is independent of **w**. The covariance matrix of Δ**g**_{Λ} is ${\mathbf{C}}_{\mathbf{g}} = E\left[\Delta\mathbf{g}_{\Lambda}\left(\Delta\mathbf{g}_{\Lambda}\right)^{\mathsf{H}}\right] \in \mathbb{C}^{|\Lambda| \times |\Lambda|}$. The whitening matrix **D** ∈ ℂ^{|Λ| × |Λ|} obeys ${\mathbf{C}}_{\mathbf{g}}^{-1} = \mathbf{D}^{\mathsf{H}}\mathbf{D}$, and the variance of the white noise **w** is $\sigma_{\mathbf{w}}^2$. We denote an unknown normalized vector **u** ∈ ℂ^{(M + |Λ|) × 1} as

$$\mathbf{u} = \left[\left(\mathbf{D}\,\Delta\mathbf{g}_{\Lambda}\right)^{\mathsf{T}},\ \left(\tfrac{1}{\sigma_{\mathbf{w}}}\mathbf{w}\right)^{\mathsf{T}}\right]^{\mathsf{T}}; \qquad (11)$$

thus, **u** is a WGN vector.
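In practice, a matrix **D** satisfying $\mathbf{C}_{\mathbf{g}}^{-1} = \mathbf{D}^{\mathsf{H}}\mathbf{D}$ can be taken from a Cholesky factorization of the inverse covariance. A small sketch (the covariance values and |Λ| = 2 are hypothetical, chosen only to demonstrate the whitening):

```python
import numpy as np

# Hypothetical covariance of the grid mismatch for |Λ| = 2 atoms
C_g = np.array([[0.005 ** 2, 0.0],
                [0.0,        0.005 ** 2]])

# Cholesky: inv(C_g) = L L^H, so D = L^H satisfies D^H D = inv(C_g)
L = np.linalg.cholesky(np.linalg.inv(C_g))
D = L.conj().T
assert np.allclose(D.conj().T @ D, np.linalg.inv(C_g))

# If Δg_Λ has covariance C_g, then D Δg_Λ is white (identity covariance)
rng = np.random.default_rng(0)
dg = rng.multivariate_normal(np.zeros(2), C_g, size=100000).T
print(np.cov(D @ dg))    # approximately the 2 x 2 identity matrix
```

The same construction yields the per-parameter factors **D**_{p} and **D**_{q} used in the RSF radar case of Section 4.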

Define **W**_{x} = [**H**, *σ*_{w}**I**_{M}] ∈ ℂ^{M × (|Λ| + M)}, where **H** ∈ ℂ^{M × |Λ|} is defined, analogously to the construction in Section 4, as

$$\mathbf{H} = \mathbf{G}\left(\mathbf{D}^{-1} \otimes \mathbf{I}_{|\Lambda|}\right)\left(\mathbf{I}_{|\Lambda|} \otimes \mathbf{x}_{\Lambda}\right), \qquad \mathbf{G} = \left[\mathbf{R}_1, \ldots, \mathbf{R}_{|\Lambda|}\right].$$

Since **W**_{x} is of full row rank, the optimization problems (12) and (14) are equivalent to (17)–(19), which has been proved in [20].

The gradient with respect to Δ**g**_{Λ} and **x**_{Λ} required in Newton's method for (17) can be given in closed form. ${\widehat{\Delta\mathbf{g}}}_{\Lambda}$ is extracted from **û** via ${\widehat{\Delta\mathbf{g}}}_{\Lambda} = \left[\mathbf{D}^{-1}\ \mathbf{0}_{N}\right]\widehat{\mathbf{u}}$; thus (4) is solved. A sketch of CTLS is given in Algorithm 1. To the authors' best knowledge, convergence guarantees for this Newton method remain an open question.

### 2.5 Sketch of AMP-CTLS

Similar to traditional MP methods [2–7], AMP-CTLS first greedily finds the support set and then adaptively optimizes the grid points indexed in the support set. In this article, we imitate the greedy approach of OMP, in which only one atom is added to the support set in each iteration. If the number of atoms is known, the iterations terminate when the cardinality of the support set reaches the pre-specified number; if it is not, we can apply other successfully used stopping criteria, e.g., the norm of the residual falling below a threshold [22]. A sketch of AMP-CTLS is presented in Algorithm 2.
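Since AMP-CTLS imitates the greedy atom selection of OMP, a minimal OMP serves as a useful reference skeleton. The sketch below is generic OMP over a fixed dictionary (it omits the grid-update step that distinguishes AMP-CTLS); the DFT example at the bottom is our own illustration:

```python
import numpy as np

def omp(y, Phi, K):
    """Minimal orthogonal matching pursuit: add one atom per iteration,
    then re-project the measurement onto all selected atoms."""
    M, N = Phi.shape
    support, r = [], y.copy()
    x = np.zeros(N, dtype=complex)
    for _ in range(K):
        # pick the atom most correlated with the current residual
        n = int(np.argmax(np.abs(Phi.conj().T @ r)))
        support.append(n)
        # least squares projection onto the selected atoms
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    x[support] = coef
    return x, support

# Example: two on-grid sinusoids are recovered exactly
M = N = 32
m = np.arange(M)
Phi = np.exp(1j * 2 * np.pi * np.outer(m, np.arange(N) / N))
y = 3 * Phi[:, 5] + 1 * Phi[:, 11]
x, support = omp(y, Phi, K=2)
print(sorted(support))    # [5, 11]
```

AMP-CTLS would insert the IJE refinement of Section 2.3 after each projection step, so the selected atoms track off-grid parameters instead of staying on the initial grid.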

### 2.6 Convergence of the IJE algorithm

The convergence analysis assumes that the mapping **g**_{Λ} → **Φ**(**g**_{Λ}) is linear, which means that (21) holds with a constant matrix **G**.

**Proposition**. If the measurement **y** is perturbed by WGN and (21) is obeyed, IJE monotonically reduces the value of the penalty function in (3); the estimates of **x**_{Λ} and **g**_{Λ} satisfy (22).

**Proof**. Define a penalty function as in (23). The estimates ${\widehat{\Delta\mathbf{g}}}_{\Lambda}(l)$ and **x**_{CTLS} are obtained by solving (24); the chain of inequalities (25)–(26) then follows, where the last inequality is taken from (6). The inequalities in (25) and (26) become equalities if and only if ${\widehat{\Delta\mathbf{g}}}_{\Lambda}(l) = 0$. □

For simplicity, we assume that the transform **Φ**(**g**_{Λ}) is linear. In some practical applications, such as harmonic retrieval, linearity is not strictly guaranteed. However, when the atom mismatch Δ**g** is sufficiently small, the higher-order errors of the Taylor expansion (8) are negligible, and (21) is approximately satisfied. Numerical examples in Section 5 demonstrate the convergence of the proposed algorithm in the case of harmonic retrieval.

## 3 Application in the harmonic retrieval

In this section, we apply AMP-CTLS to harmonic retrieval. In Section 3.1, the signal model of harmonic retrieval is presented and the adverse effects of the off-grid problem on MP approaches [2–7] are discussed. In Section 3.2, we detail the implementation of AMP-CTLS for harmonic retrieval.

### 3.1 Signal model of harmonic retrieval

The measurements of *K* sinusoids in noise are modeled as

$$y_m = \sum_{k=1}^{K} \alpha_k \exp\left(j2\pi f_k m\right) + w_m, \qquad (27)$$

where *y*_{m} is the *m* th measurement and *w*_{m} is the *m* th noise sample, *m* = 0, 1, . . . , *M* − 1. The amplitude *α*_{k} and frequency *f*_{k} of the *k* th sinusoid are unknown parameters. When the sinusoids are sparse, i.e., *K* << *M*, the harmonic retrieval problem can be solved by grid-based CS approaches. Divide the digital frequency *f* ∈ [0, 1) into *N* grid points **g** = [*g*_{1}, *g*_{2}, . . . , *g*_{N}]^{T}. When all frequencies are exactly at grid points, (27) can be rewritten in the matrix form of (30), **y** = **Φx** + **w**, where *g*_{n} is the frequency of the *n* th grid point and the *m* th-row, *n* th-column element of **Φ** is *ϕ*(*m*, *n*) = exp(*j*2*πg*_{n}*m*). Apply CS methods to seek the sparsest solution of (30); estimates of the frequencies and amplitudes are then obtained from the indices and magnitudes of the nonzero coefficients in **x**, respectively. The sparsest solution can be obtained with computationally efficient MP methods, which greedily minimize the ℓ_{0} norm; it can also be obtained by minimizing the *ℓ*_{1} norm [23], a quasi-norm [24, 25] or the ℓ_{p ≤ 1} p-norm-like diversity [26].

We focus on MP methods in this article for their high computational efficiency. However, conventional MP methods [2–7] suffer from performance degradation if the frequency space is not perfectly gridded. When the frequency space is sparsely divided, sinusoids may lie off the grid points, and the accuracy of frequency estimates is limited by the gap between neighboring grid points. MP methods iteratively search for the sinusoids; if an off-grid sinusoid emerges, its energy cannot be totally canceled, and the remainder acts as interference in subsequent iterations. The leakage of this energy may mask weak sinusoids. On the other hand, if the frequency space is densely divided, the correlations between atoms are enhanced [27], which also reduces the performance of MP methods. This especially affects MP methods that select multiple atoms into the support set in a single iteration, e.g., CoSaMP, SP, ROMP, and StOMP: highly correlated atoms can be chosen in the same iteration, which impairs the numerical stability of the projection onto the adopted atoms.
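The masking effect of off-grid leakage can be checked numerically. In this sketch (amplitudes and frequencies chosen by us to mirror the scale of the experiments in Section 5), the spectral leakage of a strong off-grid sinusoid near a weak sinusoid's frequency bin is of the same order as the weak sinusoid's own peak:

```python
import numpy as np

M = 32
m = np.arange(M)

strong = 20 * np.exp(1j * 2 * np.pi * (3.15 / M) * m)   # off-grid, amplitude 20
weak = 1 * np.exp(1j * 2 * np.pi * (7.25 / M) * m)      # off-grid, amplitude 1

# Leakage of the strong sinusoid alone, evaluated at the weak sinusoid's bin
leak = np.abs(np.fft.fft(strong))[7] / M
# Peak of the weak sinusoid alone at the same bin
weak_peak = np.abs(np.fft.fft(weak))[7] / M

print(leak, weak_peak)   # comparable magnitudes: leakage rivals the weak peak
```

An on-grid greedy method that cancels only the best on-grid estimate of the strong sinusoid leaves this leakage in the residual, where it competes with the weak sinusoid during atom selection.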

### 3.2 Harmonic retrieval with AMP-CTLS

The AMP-CTLS algorithm can be applied to harmonic retrieval: it adaptively finds the atoms and recovers the sinusoids. In MP approaches with constant predefined atoms, frequency estimates are discrete values determined by the grid points; in AMP-CTLS, frequency estimates are continuous, since the estimates of the grid misalignments are continuous. In this section, we adjust two steps of the AMP-CTLS procedure presented in Section 2 to better fit the harmonic retrieval problem.

First, we derive the **R**_{i} matrix in (9) for the harmonic dictionary. According to (9), the *m* th-row, *i* th-column element of **R**_{i} is

$$\left[\mathbf{R}_i\right]_{m,i} = j2\pi m \exp\left(j2\pi \widehat{g}_i m\right); \qquad (31)$$

elements in all other columns are zeros.
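The derivative in (31) is easy to validate against a central finite difference; the following check is our own illustration (the grid point g0 and step eps are arbitrary choices):

```python
import numpy as np

M = 32
m = np.arange(M)

def atom(g):
    return np.exp(1j * 2 * np.pi * g * m)

def datom(g):
    # (31): derivative of exp(j2π g m) with respect to the grid point g
    return 1j * 2 * np.pi * m * np.exp(1j * 2 * np.pi * g * m)

# Central finite-difference check of the analytic derivative
g0, eps = 9.0 / M, 1e-7
fd = (atom(g0 + eps) - atom(g0 - eps)) / (2 * eps)
print(np.max(np.abs(fd - datom(g0))))   # small residual: the two derivatives agree
```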

Second, in Section 2.4, Δ**g**_{Λ} is assumed to be a complex vector; therefore, the estimate ${\widehat{\Delta\mathbf{g}}}_{\Lambda}$ is complex. However, frequency grid points are constrained to be real, so the regularization Δ**g**_{Λ} = (Δ**g**_{Λ})* should be added to (12) in the case of harmonic retrieval. Unfortunately, the resulting solver becomes complicated, as derived in Appendix 2. For simplicity, (5) is replaced with (32), which approximately updates the grid points by discarding the imaginary part of the mismatch estimate:

$$\widehat{\mathbf{g}}_{\Lambda}(l) = \widehat{\mathbf{g}}_{\Lambda}(l-1) + \mathrm{Re}\left({\widehat{\Delta\mathbf{g}}}_{\Lambda}(l)\right). \qquad (32)$$

## 4 Application in RSF radar

AMP-CTLS can also be applied in randomized step frequency (RSF) radar. RSF radar can improve range-velocity resolution and avoid range-velocity coupling problems [28, 29]. However, it suffers from the sidelobe pedestal problem, in which small targets are masked by noise-like components due to dominant targets [29]. Our problem of interest is to recover the small targets. When the observed scene is sparse, i.e., only a few targets exist, we can use sparse recovery to exploit the sparseness [9]. AMP-CTLS relieves the sidelobe pedestal problem in RSF radar and recovers small targets well.

Correlation-matrix-based spectral analysis methods, e.g., MUSIC and ESPRIT [18], are difficult to apply directly to range-Doppler estimation in the RSF scheme. Since only one snapshot of radar data is available and the radar echoes from different scatterers are coherent, a smoothing technique must be invoked to obtain a full-rank correlation matrix [30]. The smoothing method requires that the array is uniform and linear [18]. However, in RSF radar the echoes are determined by a random permutation of integers, see (34); thus, the uniform and linear condition is not satisfied, which restricts the application of correlation-matrix-based methods.

The carrier frequency of the *m* th pulse is *f*_{0} + *C*_{m}*δf*, *m* = 0, 1, . . . , *M* − 1, where *f*_{0} is the carrier frequency and *δf* is the frequency step size. *C*_{m} is a random permutation of the integers from 0 to *M* − 1. The *m* th echo of the radar can be expressed as (see [8, 28, 29])

$$y_m = \sum_{k=1}^{K} \alpha_k\, s_m\left(p_k, q_k\right) + w_m, \qquad (33)$$

$$s_m(p, q) = \exp\left(j2\pi\left(C_m\, p + m\, q\right)\right), \qquad (34)$$

where *w*_{m} is the noise in the *m* th echo, *K* denotes the number of targets and *k* indexes the *k* th target. *α*_{k}, *p*_{k}, and *q*_{k} are to be learned: *α*_{k} represents the scattering intensity, and *p*_{k} ∈ [0, 1) and *q*_{k} ∈ [0, 1) are determined by the range and radial velocity of the *k* th target, respectively. Note that in (34) the echo is simultaneously related to the pulse index *m* and the random integer *C*_{m}.

Divide the *p* space into *C* grid points *p*_{c} = *c/C*, *c* = 0, 1, . . . , *C* − 1, and divide the *q* space into *D* grid points *q*_{d} = *d/D*, *d* = 0, 1, . . . , *D* − 1. Rewrite (33) in the matrix form of (35), **y** = **Φ**(**p**, **q**)**x** + **w**, where the *m* th-row, (*c* + *dC*)th-column element of **Φ**(**p**, **q**) ∈ ℂ^{M × CD} is *s*_{m}(*p*_{c}, *q*_{d}).
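Building **Φ**(**p**, **q**) with the (c + dC) column ordering can be sketched as follows. The atom form s_m(p, q) = exp(j2π(C_m p + m q)) used here is our reading of the model in (34), in which the range parameter p pairs with the random code C_m and the velocity parameter q with the pulse index m; the grid sizes are example values:

```python
import numpy as np

M, C, D = 32, 32, 32             # pulses, range-grid size, velocity-grid size
rng = np.random.default_rng(0)
Cm = rng.permutation(M)          # random permutation of 0..M-1 (frequency code)
m = np.arange(M)

def s(p, q):
    # assumed RSF atom: range phase driven by C_m, velocity phase by m
    return np.exp(1j * 2 * np.pi * (Cm * p + m * q))

p_grid = np.arange(C) / C
q_grid = np.arange(D) / D

# Φ(p, q) ∈ C^{M × CD}: the (c + dC)-th column is s(p_c, q_d)
Phi = np.empty((M, C * D), dtype=complex)
for d in range(D):
    for c in range(C):
        Phi[:, c + d * C] = s(p_grid[c], q_grid[d])

print(Phi.shape)    # (32, 1024)
```

The dictionary has *CD* columns for only *M* rows, which is why greedy support pruning matters for keeping the joint range-velocity problem tractable.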

Denote the *p* mismatch Δ**p**_{Λ} ∈ ℝ^{|Λ| × 1} and the *q* mismatch Δ**q**_{Λ} ∈ ℝ^{|Λ| × 1}; then $\Delta\mathbf{g}_{\Lambda} = \left[\left(\Delta\mathbf{p}_{\Lambda}\right)^{\mathsf{T}}, \left(\Delta\mathbf{q}_{\Lambda}\right)^{\mathsf{T}}\right]^{\mathsf{T}} \in \mathbb{R}^{2|\Lambda| \times 1}$. The derivative matrices **R** ∈ ℂ^{M × CD} are obtained by differentiating the atoms with respect to *p* and *q*. Assume that Δ**p**_{Λ} and Δ**q**_{Λ} are independent of each other and of the noise; the covariance matrix of Δ**g**_{Λ} is then block diagonal, where ${\mathbf{C}}_{\mathbf{p}} = E\left[\Delta\mathbf{p}_{\Lambda}\left(\Delta\mathbf{p}_{\Lambda}\right)^{\mathsf{H}}\right] \in \mathbb{C}^{|\Lambda| \times |\Lambda|}$, ${\mathbf{C}}_{\mathbf{q}} = E\left[\Delta\mathbf{q}_{\Lambda}\left(\Delta\mathbf{q}_{\Lambda}\right)^{\mathsf{H}}\right] \in \mathbb{C}^{|\Lambda| \times |\Lambda|}$, ${\mathbf{C}}_{\mathbf{p}}^{-1} = {\mathbf{D}}_{\mathbf{p}}^{\mathsf{H}}{\mathbf{D}}_{\mathbf{p}}$ and ${\mathbf{C}}_{\mathbf{q}}^{-1} = {\mathbf{D}}_{\mathbf{q}}^{\mathsf{H}}{\mathbf{D}}_{\mathbf{q}}$. In the case of RSF radar, $\mathbf{u} = \left[\left({\mathbf{D}}_{\mathbf{p}}\Delta\mathbf{p}_{\Lambda}\right)^{\mathsf{T}}, \left({\mathbf{D}}_{\mathbf{q}}\Delta\mathbf{q}_{\Lambda}\right)^{\mathsf{T}}, \left(\frac{1}{\sigma_{\mathbf{w}}}\mathbf{w}\right)^{\mathsf{T}}\right]^{\mathsf{T}} \in \mathbb{C}^{(2|\Lambda| + M) \times 1}$ and **W**_{x} = [**H**_{p}, **H**_{q}, *σ*_{w}**I**_{M}] ∈ ℂ^{M × (2|Λ| + M)}, where ${\mathbf{G}}_{\mathbf{p}} = \left[{\mathbf{R}}_{p_1}, \ldots, {\mathbf{R}}_{p_{|\Lambda|}}\right] \in \mathbb{C}^{M \times |\Lambda|^2}$, ${\mathbf{H}}_{\mathbf{p}} = {\mathbf{G}}_{\mathbf{p}}\left({\mathbf{D}}_{\mathbf{p}}^{-1} \otimes {\mathbf{I}}_{|\Lambda|}\right)\left({\mathbf{I}}_{|\Lambda|} \otimes {\mathbf{x}}_{\Lambda}\right) \in \mathbb{C}^{M \times |\Lambda|}$, ${\mathbf{G}}_{\mathbf{q}} = \left[{\mathbf{R}}_{q_1}, \ldots, {\mathbf{R}}_{q_{|\Lambda|}}\right] \in \mathbb{C}^{M \times |\Lambda|^2}$ and ${\mathbf{H}}_{\mathbf{q}} = {\mathbf{G}}_{\mathbf{q}}\left({\mathbf{D}}_{\mathbf{q}}^{-1} \otimes {\mathbf{I}}_{|\Lambda|}\right)\left({\mathbf{I}}_{|\Lambda|} \otimes {\mathbf{x}}_{\Lambda}\right) \in \mathbb{C}^{M \times |\Lambda|}$. Since **p** and **q** are both real, formula (32) is used to update the grid points, in which the imaginary parts of the Δ**p** and Δ**q** estimates are abandoned.

## 5 Simulations

Numerical results are provided to illustrate the performance of the new algorithm. In all examples, the noise is additive white Gaussian noise.

### 5.1 Accuracy of AMP-CTLS

*α*= 1 and the signal to noise ratio SNR =

*α*

^{2}

*/σ*

^{2}= 5 dB, where

*σ*

^{2}is the variance of noise. The number of measurements

*M*is 32. The frequency of sinusoid is varied between two adjoining frequency grid points. The mean square errors (MSEs) of frequency estimates are calculated. The MSEs are compared with the corresponding Cramer-Rao lower bound (CRB) [31]. The frequency is uniformly divided into

*m*grid points in xMP

_{ m }(OMP

_{ M }, OMP

_{10M}, CoSaMP

_{ M }, etc.) and into

*M*points in AMP-CTLS. AMP-CTLS is configured as follows: IJE loops no more than 14 times; the normalization factors in (11) are

**D**=

**I**

*/*(

*σ*

_{Δf}),

*σ*

_{Δf}= 0.005, and

*σ*

_{ w }= 1. As shown in Figure 1, MSEs of AMP-CTLS are close to the CRB and lower than those of OMP, except when the sinusoid is in the vicinity of the grid point.

### 5.2 Convergence of AMP-CTLS

We first discuss the convergence speed of the proposed IJE algorithm in the noise-free case. Suppose that the sinusoid is located at *f* = 9.5/*M*, *M* = 32; other conditions are the same as in Section 5.1. In the *l* th iteration of IJE, we obtain a grid point $\widehat{g}(l)$ with (5) and the residual error $\mathbf{r}(l) = \mathbf{y} - \mathbf{\Phi}\left(\widehat{g}\right)\widehat{x}(l)$ after (6). We calculate the norm of the residual ||**r**(*l*)||_{2} and the grid error $|\widehat{g}(l) - f|$, and normalize the results by ||**r**(0)||_{2} and $|\widehat{g}(0) - f|$, respectively. As shown in Figure 2, both the residual error and the grid error converge quickly (in about five steps) to 0 in the noiseless case.

The convergence analysis in Section 2.6 assumes that the transform **Φ** is linear. This is only approximately satisfied in harmonic retrieval when the higher-order terms of the Taylor expansion (8) are negligible, which requires the grid points indexed in the support set to be close to the actual frequencies. We assign SNR = 5 dB and the initial frequency grid point *g*(0) = 9*/M*; the true frequency of the sinusoid varies from 9*/M* to 11*/M*. When the distance (in units of 1*/M*) between the true frequency and the initial grid point is less than 0.7, the initial grid is adjusted to be close to the actual value, and the MSEs of the frequency estimates converge to the CRB. When the distance is greater than 1, the AMP-CTLS curve stays close to the initial distance, which means that AMP-CTLS fails to improve the initial grid, because the errors of the Taylor expansion cannot be ignored and affect the convergence of the algorithm.

### 5.3 Input of sparsity

In Sections 5.1 and 5.2, we assume that the sparsity *K*, i.e., the number of modes, is known and use it to terminate AMP-CTLS; however, a priori sparsity is not obligatory. When *K* is unknown, we can use the norm of the residual error **r** = **y** − **Φ**(**g**_{Λ})**x**_{Λ} as the termination criterion.

Furthermore, AMP-CTLS does not rely strongly on the given sparsity *K'*, and its performance is only slightly affected when *K' > K*. Suppose there are three sinusoids, denoted Si_{1}, Si_{2}, and Si_{3}, with *α*_{1} = 20, *α*_{2} = 15, *α*_{3} = 1, *f*_{1} = 3.15*/M*, *f*_{2} = 4.2*/M*, *f*_{3} = 7.25*/M*, *M* = 32, and SNR_{3} = *α*_{3}^{2}/*σ*^{2} = 10 dB. In AMP-CTLS, the frequency is uniformly divided into 2*M* grid points, and the other configurations are the same as in Section 5.1.

We calculate the norm of the residual error ||**r**^{(K')}||_{2} versus *K'* and present the results in Figure 4. When all of the sinusoids have been chosen into the support set and *K' ≥ K*, the energy of the sinusoids is canceled thoroughly and only noise remains in the residual; the norm of the residual error becomes small and decreases only slowly with *K'*. The results illustrate that we can use a threshold on the value, or on the decrease rate, of the residual norm to end the AMP-CTLS loops.
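Such a residual-based stopping rule might be sketched as follows; the threshold and the 5% minimum-decrease value are hypothetical choices for illustration, not values from the article:

```python
def should_stop(res_norms, threshold, min_decrease=0.05):
    """Hypothetical stopping rule: end AMP-CTLS loops once the residual norm
    falls below a threshold, or once its relative decrease becomes small."""
    if res_norms[-1] < threshold:
        return True
    if len(res_norms) >= 2:
        decrease = (res_norms[-2] - res_norms[-1]) / res_norms[-2]
        return decrease < min_decrease
    return False

# After the true sinusoids are captured, the residual barely decreases
print(should_stop([10.0, 2.0, 1.95], threshold=0.5))   # True (2.5% decrease)
```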

We then examine the estimation accuracy when *K' > K*. Denote the amplitude estimates by ${\widehat{\alpha}}_{1}, {\widehat{\alpha}}_{2}, \dots, {\widehat{\alpha}}_{K^{\prime}}$ in descending order of magnitude, and their frequency-estimate counterparts by ${\widehat{f}}_{1}, {\widehat{f}}_{2}, \dots, {\widehat{f}}_{K^{\prime}}$. The MSEs of the *f*_{3} estimates versus *K'* are presented in Figure 5, which indicates that the accuracy of the frequency estimates of Si_{3} is only slightly affected (< 2 dB) by *K'*. The amplitude estimates are likewise accurate (errors < 0.2) and are not sensitive to *K'*.

We also examine the MSEs of the *f*_{3} estimates versus SNR at different *K'*. The noise variance *σ*^{2} is altered such that SNR_{3} varies. The MSEs converge to the CRB at high SNR (SNR_{3} > 2 dB) when *K'* = *K* = 3, and the results for *K'* = 6 are close to those for *K'* = 3.

### 5.4 Recovering small sinusoids

We compare the performance of AMP-CTLS in recovering weak sinusoids with CS methods, e.g., OMP and CoSaMP, and with conventional spectral analysis methods, e.g., ESPRIT and root MUSIC [18]. Suppose there are three sinusoids, denoted Si_{1}, Si_{2}, and Si_{3}, with *α*_{1} = 20, *α*_{2} = 15, *α*_{3} = 1, *f*_{1} = 3.15*/M*, *f*_{2} = 5.2*/M*, *f*_{3} = 3.95*/M*, *M* = 32, and SNR_{3} = *α*_{3}^{2}/*σ*^{2} = 5 dB. The number of sinusoids *K* = 3 is assumed known. CoSaMP iterates 50 times. In both ESPRIT and root MUSIC, the model orders are set to *K* and the covariance matrix orders to *M/*2 according to [32]; these methods output frequency estimates, and the corresponding magnitudes are obtained by projection onto those frequencies. AMP-CTLS is configured as in Section 5.3.

Si_{3} is recovered by the AMP-CTLS algorithm but is masked in the other tested algorithms. CoSaMP_{100M} is also tested, but its results are not displayed because the amplitude estimates are far too large (> 1,000), which is caused by projection onto the ill-conditioned matrix of highly correlated atoms. In OMP_{2M}, the sinusoids are not exactly at the grid points, so the energies of Si_{1} and Si_{2} cannot be totally canceled in the first two iterations, and the leaked energy masks the smallest signal Si_{3}. In OMP_{100M}, all sinusoids lie on grid points and Si_{1} and Si_{2} are recovered better than in OMP_{2M}, but energy leakage from the dominant sinusoids still exists. In AMP-CTLS, the grid points are adaptively adjusted to match the sinusoids, so the algorithm is less sensitive to grid mismatch and achieves better performance than OMP and CoSaMP even if the frequency space is sparsely divided. ESPRIT and root MUSIC do not recover Si_{3} correctly as AMP-CTLS does: since only one snapshot of data is available, the smoothing method [30] is used in these two methods to estimate the covariance matrix, which results in aperture loss [18].

### 5.5 Range-velocity joint estimate in RSF radar

Suppose there are two dominant targets T_{1} and T_{2} and a small target T_{3}. The number of measurements *M* is 32. The scattering intensities are *α*_{1} = *α*_{2} = 10 and *α*_{3} = 1, and the ranges and velocities are set such that the *p*, *q* parameters are *p*_{1} = 10.1*/M*, *p*_{2} = 10.7*/M*, *p*_{3} = 20*/M*, *q*_{1} = 19.4*/M*, *q*_{2} = 10.2*/M* and *q*_{3} = 15.2*/M*. AMP-CTLS is configured as follows: the *p*, *q* spaces are both uniformly divided into *M* grid points; the normalization factors are **D**_{p} = **D**_{q} = **I***/σ*_{Δ}, *σ*_{Δ} = 0.025, *σ*_{w} = 1; and the IJE algorithm iterates fewer than 14 times. In OMP_{m}, both the *p* and *q* spaces are uniformly divided into *m* grid points; note that all of the targets lie on the grid points in OMP_{10M}. We focus on recovering the weakest target T_{3}. The noise variance *σ*^{2} is changed so that the signal-to-noise ratio SNR_{3} = *α*_{3}^{2}/*σ*^{2} varies, and the MSEs of the *p*_{3}, *q*_{3} parameters are calculated. As shown in Figure 9, the MSEs with AMP-CTLS are lower than those with OMP and converge to the CRB when SNR_{3} is no less than 2 dB; the difference between the MSEs of AMP-CTLS at high SNR and the CRB is less than 0.5 dB.

### 5.6 DoA estimation

Consider a uniform linear array with *M* = 8 elements and an interval between neighboring elements of *d* = 1*/*2 wavelength. There are two sources (*K* = 2) from angles *θ*_{1} = -29° and *θ*_{2} = 13°. The amplitudes are *α*_{1} = *α*_{2} = 1 and ${\mathsf{SNR}}_{1}={\mathsf{SNR}}_{2}={\alpha}_{1}^{2}/{\sigma}^{2}$, where *σ*^{2} is the noise variance. The angle space from -90° to 90° is uniformly divided into *N* = 90 grid points; thus, both sources are 1° off the nearest grid points. The WSS-TLS algorithm is set according to [14]. Since WSS-TLS returns multiple nonzero DoA estimates, we choose the two peaks with the largest magnitudes as the estimates of *θ*_{1} and *θ*_{2}. AMP-CTLS models DoA estimation as a harmonic retrieval problem and outputs frequency estimates $\widehat{f}\in [0,1)$. Denote $\tilde{f}=\widehat{f}-0.5\left(\mathsf{\text{sgn}}\left(\widehat{f}-0.5\right)+1\right)$, where sgn(·) represents the signum function; thus, the DoA estimate with AMP-CTLS is obtained as $\widehat{\theta}=\mathsf{\text{arcsin}}(\tilde{f}/d)$. In AMP-CTLS, IJE loops no more than 50 times; the normalization factors in (11) are **D** = **I***/σ*_{Δf}, with *σ*_{Δf} = 0.005 and *σ*_{ w } = 1. MSEs of the *θ*_{1} and *θ*_{2} estimates versus SNR are shown in Figure 10a, b, respectively. The results indicate that the MSEs of AMP-CTLS are closer to the CRB than those of WSS-TLS.

## 6 Conclusion

To alleviate the off-grid problem in grid-based MP methods, we incorporate the CTLS technique into the OMP framework and propose a new algorithm, namely AMP-CTLS. Unlike traditional MP methods, AMP-CTLS adaptively adjusts the grid and the dictionary. The convergence of AMP-CTLS is analyzed, and the initial conditions of the algorithm are discussed. Numerical examples indicate that the advantages of AMP-CTLS over OMP and CoSaMP are twofold: (1) it remains efficient even when the continuous parameter space is sparsely divided, whereas OMP and CoSaMP suffer performance degradation when the space is not divided reasonably; and (2) it achieves higher accuracy, with MSEs that converge to the CRB.

## Appendix 1

## Appendix 2

Denote **W**_{1} = **H**, **W**_{2} = *σ*_{ w }**I**_{ N }, $\mathbf{u}={\left[{\mathbf{u}}_{1}^{\mathsf{\text{T}}},{\mathbf{u}}_{2}^{\mathsf{\text{T}}}\right]}^{\mathsf{\text{T}}}$, and **z** = **y** - **Φ**_{Λ}**x**_{Λ}. Notice that the matrix **H** depends on **x**_{Λ}. Replace the optimum problem in (12), (14) with:

Suppose **x**_{Λ} is known, and seek the solution of **u**. If both **W**_{1} and **W**_{2} are of full-row rank, we have the closed-form solution (49) to (53), in which **u**_{1} and **u**_{2} depend on **x**_{Λ}. Then, calculate **x**_{Λ} as in (54).

However, it is difficult to solve (54), because the Jacobian and Hessian matrices of ${\mathbf{u}}_{1}^{\mathsf{\text{T}}}{\mathbf{u}}_{1}+{\mathbf{u}}_{2}^{\mathsf{\text{H}}}{\mathbf{u}}_{2}$ with respect to **x**_{Λ} are complex. In this article, we simply consider the frequencies to be complex and ignore the imaginary parts.

**Proof**: We prove that when **x**_{Λ} is known, the solution of **u** is given by (49) to (53). The Lagrangian is *γ*(**v**):

Because **W**_{1} and **W**_{2} are of full-row rank, $\frac{{\partial}^{2}\gamma}{\partial {\mathbf{v}}^{\mathsf{\text{T}}}\partial {\mathbf{v}}^{*}}$ is a negative definite matrix. We solve **v** from the optimality condition $\frac{\partial \gamma}{\partial {\mathbf{v}}^{*}}=0$ and obtain (51) to (53). The proof is complete.

## Algorithm 1. The CTLS technique

- 1) Input the dictionary **Φ**_{Λ} and all the coefficient matrices **R**_{ i }.
- 2) Compose **W**_{ x } and solve (17) with the initial value given in (20).
- 3) Calculate **û** via (18) and (19).
- 4) Extract ${\widehat{\Delta \mathbf{g}}}_{\Lambda}$ from **û**.

## Algorithm 2. The AMP-CTLS algorithm

- 1) Divide the continuous parameter *f* into grid points ${\widehat{\mathbf{g}}}^{\left(0\right)}$; input the sparsity level *K* or the residual threshold *δ*. Set the support set Λ^{(0)} = ∅ and the residual error **r**^{(0)} = **y**.
- 2) Calculate the correlations ${p}_{i}=\langle {\mathbf{r}}^{\left(k\right)},\mathbf{\Phi}\left({\widehat{g}}_{i}^{\left(k\right)}\right)\rangle$.
- 3) Find the index *n* = arg max_{ i } |*p*_{ i }|.
- 4) Merge the support set Λ^{(k+1)} = Λ^{(k)} ∪ {*n*}.
- 5) **Solve (3) with the IJE algorithm**, obtaining ${\widehat{\mathbf{g}}}_{\Lambda}^{(k+1)}$ and ${\widehat{\mathbf{x}}}_{\Lambda}^{(k+1)}$.
- 6) Update the residual error ${\mathbf{r}}^{(k+1)}=\mathbf{y}-\mathbf{\Phi}\left({\widehat{\mathbf{g}}}_{\Lambda}^{(k+1)}\right){\widehat{\mathbf{x}}}_{\Lambda}^{(k+1)}$.
- 7) Increase *k*. Return to Step 2 until a stop criterion, e.g., *k* = *K*, ||**r**^{(k)}||_{2} *< δ*, or ||**r**^{(k)}||_{2} *< δ*||**r**^{(k-1)}||_{2}, is satisfied.
- 8) Output ${\widehat{\mathbf{x}}}_{\Lambda}^{\left(k\right)}$ and ${\widehat{\mathbf{g}}}_{\Lambda}^{\left(k\right)}$, and set the elements of **x** not indexed in Λ to **0**.
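The outer greedy loop of Algorithm 2 can be sketched as follows. This is a simplified illustration only: Step 5 is replaced by a plain least-squares fit on the currently selected (fixed) grid atoms, since the IJE grid-refinement step depends on equations (3) and (11)-(20) not reproduced here, and `make_atom` is a hypothetical atom-generator callback:

```python
import numpy as np

def amp_ctls_skeleton(y, grid, make_atom, K):
    """Outer greedy loop of Algorithm 2 (simplified: no IJE grid refinement).

    y         : measurement vector
    grid      : initial grid points g_hat^(0)
    make_atom : callable g -> unit-norm atom Phi(g)  (hypothetical helper)
    K         : sparsity level
    """
    support, r = [], y.copy()
    for _ in range(K):
        # Steps 2-3: correlate the residual with all grid atoms, pick the best
        p = np.array([np.vdot(make_atom(g), r) for g in grid])
        n = int(np.argmax(np.abs(p)))
        support.append(n)                      # Step 4: merge the support set
        # Step 5 (simplified): least-squares on the selected atoms; AMP-CTLS
        # would instead jointly refine the selected grid points via IJE/CTLS.
        A = np.column_stack([make_atom(grid[i]) for i in support])
        x, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ x                          # Step 6: update the residual
    return support, x
```

With the IJE call restored at Step 5, the selected entries of `grid` would be updated in place each iteration, which is exactly how AMP-CTLS escapes the initial grid.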


## Declarations

### Acknowledgements

This study was supported in part by the National Natural Science Foundation of China (No. 40901157) and in part by the National Basic Research Program of China (973 Program, No. 2010CB731901). Thanks to the anonymous reviewers for many valuable comments and to Hao Zhu for helpful discussions and her Matlab^{®} programs of WSS-TLS.


## References

1. Baraniuk RG: Compressive sensing [lecture notes]. *IEEE Signal Process Mag* 2007, 24(4):118-121.
2. Dai W, Milenkovic O: Subspace pursuit for compressive sensing signal reconstruction. *IEEE Trans Inf Theory* 2009, 55(5):2230-2249.
3. Mallat SG, Zhang Z: Matching pursuits with time-frequency dictionaries. *IEEE Trans Signal Process* 1993, 41(12):3397-3415. doi:10.1109/78.258082
4. Davenport MA, Wakin MB: Analysis of orthogonal matching pursuit using the restricted isometry property. *IEEE Trans Inf Theory* 2010, 56(9):4395-4401.
5. Needell D, Vershynin R: Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. *Found Comput Math* 2009, 9(3):317-334. doi:10.1007/s10208-008-9031-3
6. Donoho D, Drori I, Tsaig Y, Starck J: *Sparse Solution of Underdetermined Linear Equations by Stagewise Orthogonal Matching Pursuit.* Department of Statistics, Stanford University, California; 2006.
7. Needell D, Tropp JA: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. *Appl Comput Harmonic Anal* 2009, 26(3):301-321. doi:10.1016/j.acha.2008.07.002
8. Huang T, Liu Y, Meng H, Wang X: Randomized step frequency radar with adaptive compressed sensing. *Proc IEEE Radar Conf (RADAR), Kansas City, Missouri, USA* 2011, 411-414.
9. Shah S, Yu Y, Petropulu A: Step-frequency radar with compressive sampling (SFR-CS). *Proc IEEE Int Acoustics, Speech and Signal Processing (ICASSP) Conf, Dallas, Texas, USA* 2010, 1686-1689.
10. Hyder MM, Mahata K: Direction-of-arrival estimation using a mixed ℓ_{2,0} norm approximation. *IEEE Trans Signal Process* 2010, 58(9):4646-4655.
11. Zheng C, Li G, Zhang H, Wang X: An approach of regularization parameter estimation for sparse signal recovery. *Proc IEEE 10th Int Signal Processing (ICSP) Conf, Beijing, China* 2010, 385-388.
12. Zheng C, Li G, Zhang H, Wang X: An approach of DOA estimation using noise subspace weighted *ℓ*_{1} minimization. *Proc IEEE Int Acoustics, Speech and Signal Processing (ICASSP) Conf, Prague, Czech Republic* 2011, 2856-2859.
13. Chi Y, Scharf LL, Pezeshki A, Calderbank AR: Sensitivity to basis mismatch in compressed sensing. *IEEE Trans Signal Process* 2011, 59(5):2182-2195.
14. Zhu H, Leus G, Giannakis GB: Sparsity-cognizant total least-squares for perturbed compressive sampling. *IEEE Trans Signal Process* 2011, 59(5):2002-2016.
15. Chae DH, Sadeghi P, Kennedy RA: Effects of basis-mismatch in compressive sampling of continuous sinusoidal signals. *Proc 2nd Int Future Computer and Communication (ICFCC) Conf, Wuhan, China* 2010, 2:V2.739-V2.743.
16. Cabrera SD, Malladi S, Mulpuri R, Brito AE: Adaptive refinement in maximally sparse harmonic signal retrieval. *IEEE 11th Digital Signal Processing Workshop and 3rd IEEE Signal Processing Education Workshop, Taos Ski Valley, New Mexico, USA* 2004, 231-235.
17. Peyre G: Best basis compressed sensing. *IEEE Trans Signal Process* 2010, 58(5):2613-2622.
18. Stoica P, Moses R: *Spectral Analysis of Signals.* Pearson/Prentice Hall, Upper Saddle River; 2005.
19. Bellman R: *Introduction to Matrix Analysis.* Society for Industrial and Applied Mathematics, Philadelphia; 1997.
20. Abatzoglou TJ, Mendel JM, Harada GA: The constrained total least squares technique and its applications to harmonic superresolution. *IEEE Trans Signal Process* 1991, 39(5):1070-1087. doi:10.1109/78.80955
21. Golub G, Van Loan C: An analysis of the total least squares problem. *SIAM J Numer Anal* 1980, 17(6):883-893. doi:10.1137/0717073
22. Boufounos P, Duarte MF, Baraniuk RG: Sparse signal reconstruction from noisy compressive measurements using cross validation. *Proc IEEE/SP 14th Workshop on Statistical Signal Processing (SSP'07), Madison, Wisconsin, USA* 2007, 299-303.
23. Brito AE, Cabrera SD, Villalobos C: Optimal sparse representation algorithms for harmonic retrieval. *Record of the Thirty-Fifth Asilomar Conf on Signals, Systems and Computers, Pacific Grove, California, USA* 2001, 2:1407-1411.
24. Rao BD, Kreutz-Delgado K: An affine scaling methodology for best basis selection. *IEEE Trans Signal Process* 1999, 47:187-200. doi:10.1109/78.738251
25. Gorodnitsky IF, Rao BD: Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. *IEEE Trans Signal Process* 1997, 45(3):600-616. doi:10.1109/78.558475
26. Cabrera S, Rosiles J, Brito A: Affine scaling transformation algorithms for harmonic retrieval in a compressive sampling framework. *Proc Wavelets XII, SPIE, San Diego, California, USA* 2007, 6701:67012D.1-67012D.12.
27. Tropp JA: Greed is good: algorithmic results for sparse approximation. *IEEE Trans Inf Theory* 2004, 50(10):2231-2242. doi:10.1109/TIT.2004.834793
28. Axelsson SRJ: Analysis of random step frequency radar and comparison with experiments. *IEEE Trans Geosci Remote Sens* 2007, 45(4):890-904.
29. Liu Y, Meng H, Li G, Wang X: Range-velocity estimation of multiple targets in randomised stepped-frequency radar. *Electron Lett* 2008, 44(17):1032-1034. doi:10.1049/el:20081608
30. Odendaal JW, Barnard E, Pistorius CWI: Two-dimensional superresolution radar imaging using the MUSIC algorithm. *IEEE Trans Antennas Propag* 1994, 42(10):1386-1391. doi:10.1109/8.320744
31. Yau SF, Bresler Y: A compact Cramer-Rao bound expression for parametric estimation of superimposed signals. *IEEE Trans Signal Process* 1992, 40(5):1226-1230. doi:10.1109/78.134484
32. Mahata K, Soderstrom T: ESPRIT-like estimation of real-valued sinusoidal frequencies. *IEEE Trans Signal Process* 2004, 52(5):1161-1170. doi:10.1109/TSP.2004.826169

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.