Sparse multidimensional modal analysis using a multigrid dictionary refinement
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 60 (2012)
Abstract
We address the problem of multidimensional modal estimation using sparse estimation techniques coupled with an efficient multigrid approach. Modal dictionaries are obtained by discretizing modal functions (damped complex exponentials). To achieve good resolution, it is necessary to choose a fine discretization grid, resulting in intractable computational problems due to the huge size of the dictionaries. The idea behind the multigrid approach is to refine the dictionary over several levels of resolution. The algorithm starts from a coarse grid and adaptively improves the resolution depending on the active set provided by sparse approximation methods. The proposed method is quite general in the sense that it allows one to process mono- and multidimensional signals in the same way. We show through simulations that, compared to high-resolution modal estimation methods, the proposed sparse modal method can greatly enhance the estimation accuracy for noisy signals and shows good robustness with respect to the choice of the number of components.
1 Introduction
The topic of sparse signal representation has received considerable attention in the last decades since it finds application in a variety of problems, including mono- and multidimensional deconvolution [1], statistical regression [2], and radar imaging [3]. Sparse approximation consists of finding a decomposition of a signal y as a linear combination of a limited number of elements from a dictionary Φ ∈ ℂ^{M × N}, i.e., finding a coefficient vector x that satisfies y ≈ Φx, where Φ is overcomplete (M < N). The sparsity condition on x ensures that the underdetermined problem does not have an infinite number of solutions. The dictionary Φ can be chosen according to its ability to represent the signal with a limited number of coefficients, or it can be imposed by the inverse problem at hand. In the latter case, we consider dictionaries whose atoms are functions of some parameters. The different atoms of the dictionary are then formed by evaluating this function over a grid, which has to be very fine to achieve a given degree of resolution. This is the case for the modal estimation problem, in which the atoms are formed by discretizing the frequency and damping factor axes. In this situation, the challenge is to get a good approximation without a prohibitive computational cost due to the huge size of the dictionary.
This study addresses the modal retrieval problem. This is an important topic in various applications including nuclear magnetic resonance (NMR) spectroscopy [4], wireless communications, radar, and sonar [5]. A modal signal is modeled as a sum of damped complex sinusoids. Several methods have been developed to address the modal estimation problem, such as maximum likelihood [6, 7] and subspace-based methods [5, 8–12]. A special case of modal estimation is the harmonic retrieval problem (null damping factor), which has been formulated as a sparse approximation in a number of contributions. For the 1D harmonic retrieval problem, we can cite FOCUSS [13], the method of Moal and Fuchs [14], basis pursuit [15], and adaptive weighted norm extrapolation [16]. Some other contributions may be found in [17, 18]. Nevertheless, only a few methods have been applied to the damped case. For instance, [19] presents a sparse estimation example on 1D NMR (modal) data using Lasso [20], LARS [21] and OMP [22]. Goodwin et al. [23] proposed a damped sinusoidal signal decomposition for 1D signals using Matching Pursuit [24]. Similarly, regarding multigrid approaches associated with sparse approximation methods, only a few studies consider 1D harmonic signals [25, 26]. In the case of 2D signals, an approach combining adaptive multigrid decomposition and TLS-Prony estimation was proposed in [27]. However, to the authors' knowledge, there is no study that deals with the problem of estimating the parameters of multidimensional (RD) damped sinusoidal signals by sparse approximation methods. This article provides a multidimensional generalization of the study presented in [28, 29].
The goal of this article is to present an efficient approach that reduces the computational cost of sparse algorithms for RD modal estimation problems. The main contributions of the article are as follows. (i) We propose a procedure which iteratively improves the set of atoms in the dictionary. The goal of this procedure is to improve resolution while avoiding computationally expensive operations due to the processing of large matrices; we refer to this procedure as the multigrid approach. (ii) We show how the 1D modal retrieval problem can be addressed using a sparse estimation approach by building a dictionary whose atoms are calculated by sampling the modal function over a 2D grid (frequency and damping factor) in order to obtain all possible mode combinations. (iii) We show how to extend the sparse 1D modal estimation problem to RD modal problems.
The article is organized as follows. In Section 2, we provide background material and definitions for sparse signal representation. We present some known sparse methods and we recall the single best replacement (SBR) [30] algorithm and its advantages compared to other algorithms such as OMP, OLS, and CoSaMP, to name a few. In Section 3, we present the multigrid dictionary refinement approach and discuss its usefulness to accelerate computation and to improve resolution. In Section 4, we see how the 1D modal retrieval problem may be addressed using sparse approximations and how the multigrid approach can be applied. In Section 5, we extend the sparse multigrid approach to the RD modal estimation problem. In Section 6, experimental results are presented, first to compare SBR to a greedy algorithm (OMP) and to a solver of the basis pursuit problem. Then, the effectiveness of the multigrid approach is illustrated on simulated 1D and 2D modal signals. Conclusions are drawn in Section 7.
Notations: Upper and lower bold face letters will be used for matrices and column vectors, respectively. A^{T} denotes the transpose of A. "⊙" will denote the Khatri-Rao product (column-wise Kronecker product) and "⊗" the Kronecker product.
2 Sparse approximations
2.1 Key ideas of sparse approximations
Consider an observation vector y ∈ ℂ^{M} which has to be approximated by a sum of vectors from a matrix Φ such that y ≈ Φx, where Φ = [ϕ_{1}, ..., ϕ_{N}] ∈ ℂ^{M × N} and x ∈ ℂ^{N} contains coefficients that select and weight the columns ϕ_{n}. We refer to Φ as a dictionary and to x as a representation of the signal y with respect to the dictionary. To find an accurate approximation for any arbitrary signal y, the dictionary has to be overcomplete, i.e., it has to contain a large number of atoms. Therefore, we have to solve an underdetermined system when M < N. Clearly, there is an infinite number of solutions that can be used to represent y, which is why additional conditions have to be imposed. Let us introduce the pseudo-norm ℓ_{0}, ∥ ⋅ ∥_{0}: ℂ^{N} → ℕ, which counts the number of nonzero components of its argument. We say that a vector x is s-sparse when ∥x∥_{0} = s. For an observed signal corrupted with noise, the problem of estimating the sparsest vector x such that Φx best approximates y can be stated as an ℓ_{2}-ℓ_{0} minimization problem admitting two formulations:

the constrained ℓ_{2}-ℓ_{0} problem, whose goal is to seek the minimal error possible at a given sparsity level s ≥ 1:
$$\underset{{\Vert \mathbf{x}\Vert}_{0}\le s}{\operatorname{argmin}}\left\{\mathcal{E}(\mathbf{x})={\Vert \mathbf{y}-\mathbf{\Phi}\mathbf{x}\Vert}^{2}\right\} \qquad (1)$$
the penalized ℓ_{2}-ℓ_{0} problem:
$$\underset{\mathbf{x}\in {\mathbb{C}}^{N}}{\operatorname{argmin}}\left\{\mathcal{J}(\mathbf{x},\lambda )=\mathcal{E}(\mathbf{x})+\lambda {\Vert \mathbf{x}\Vert}_{0}\right\}. \qquad (2)$$
The goal is to balance the two objectives (fitting error and sparsity); here, the sparsity level of the solution is controlled by the parameter λ.
The ℓ_{2}-ℓ_{0} problem is known to yield an NP-complete combinatorial problem, which is usually handled using suboptimal search algorithms. Restricting our attention to greedy algorithms, the main advantage of the penalized ℓ_{2}-ℓ_{0} form is that it allows both insertion and removal of elements in x, while the constrained form only allows insertion when the optimization is carried out through a descent approach [30, 31].
A well-known greedy method for sparse approximation is orthogonal matching pursuit (OMP) [22]. It iteratively minimizes the error ℰ(x) until a stopping criterion is met. At each iteration, the current estimate of the coefficient vector x is refined by selecting one more atom so as to yield a substantial improvement of the signal approximation.
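As an illustration, OMP can be sketched in a few lines of numpy (this is our own minimal sketch, not the SparseLab implementation used later; the function name and stopping tolerance are ours, and efficient implementations update the least-squares solution recursively rather than refitting from scratch):

```python
import numpy as np

def omp(y, Phi, s, tol=1e-10):
    """Orthogonal Matching Pursuit sketch: greedily select atoms of Phi,
    re-fitting all selected coefficients by least squares at each step."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    x = np.zeros(N, dtype=complex)
    for _ in range(s):
        # select the atom most correlated with the current residual
        correlations = np.abs(Phi.conj().T @ residual)
        correlations[support] = 0.0          # do not reselect active atoms
        support.append(int(np.argmax(correlations)))
        # least-squares refit of the coefficients on the active set
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:   # stopping criterion
            break
    x[support] = coef
    return x, support
```

Note that OMP only inserts atoms: once an atom enters the support it is never removed, which is precisely the limitation that motivates the forward-backward SBR algorithm below.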
There are other paradigms for solving sparse approximation problems using ℓ_{2}-ℓ_{p} minimization for p ≤ 1. One of these is basis pursuit (BP) [32], which is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ_{1} norm of coefficients among all such decompositions:
$$\underset{\mathbf{x}}{\operatorname{min}}\ {\Vert \mathbf{x}\Vert}_{1}\quad \text{subject to}\quad \mathbf{\Phi}\mathbf{x}=\mathbf{y}.$$
This principle leads to approximations that can be sparse, and this minimization problem can be solved via linear programming [32]. Instead of the ℓ_{2}-ℓ_{1} penalized problem, the FOCUSS algorithm [13] uses an ℓ_{2}-ℓ_{p} penalized criterion. For p < 1, the cost function is nonconvex and convergence to a global minimum is not guaranteed. It is indicated in [33] that the best results are obtained for p close to 1, whereas the convergence is also slowest for p = 1.
In this article, we will use the SBR algorithm together with the multigrid approach. This algorithm shows very interesting performance, particularly when the dictionary elements are strongly correlated [30], which is precisely the case with modal atoms. The algorithm is briefly recalled in the following paragraph.
2.2 SBR algorithm for the penalized ℓ_{2}-ℓ_{0} problem
The heuristic SBR algorithm (see Table 1) was proposed in [30] to minimize the mixed ℓ_{2}-ℓ_{0} cost function $\mathcal{J}\left(\mathbf{x},\lambda \right)$ defined in (2) for a fixed parameter value λ. It is an iterative forward-backward search algorithm inspired by the SMLR method [34]. We denote by Ω • n the insertion or removal of an index n into/from the active set Ω.
At each iteration, the N possible single replacements Ω • n (n = 1, ..., N) are tested (i.e., N least squares problems are solved to compute the minimum squared error ℰ_{Ω•n} related to each support Ω • n); then the replacement yielding the minimal value of the cost function $\mathcal{J}\left(\mathbf{x},\lambda \right)$, i.e., ${J}_{\Omega \u2022n}(\lambda ):={\mathcal{E}}_{\Omega \u2022n}+\lambda \,\text{Card}(\Omega \u2022n)$, is selected. In Table 1, the replacement rule is formulated as "n_{k} ∈ ..." in case several replacements yield the same value of $\mathcal{J}\left(\mathbf{x},\lambda \right)$. However, this special case is not likely to occur when dealing with real data. A detailed analysis and performance evaluation can be found in [30], where it is shown that SBR performs very well in the case of highly correlated dictionary atoms (which is the case here). We note that, unlike many algorithms which require fixing either a maximum number of iterations or a threshold on the squared error variation (OMP and OLS, for instance), the SBR algorithm does not need any stopping condition since it stops when the cost function $\mathcal{J}\left(\mathbf{x},\lambda \right)$ no longer decreases. However, it requires tuning the parameter λ, which is done empirically.
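For concreteness, the single-replacement rule described above can be sketched as follows (a minimal Python illustration written by us, not the authors' implementation; the least-squares costs are recomputed naively here, whereas [30] uses efficient recursive updates):

```python
import numpy as np

def sbr(y, Phi, lam, max_iter=100):
    """Single Best Replacement sketch: at each iteration, test the insertion
    or removal of every index n and keep the single replacement that most
    decreases J(x, lam) = ||y - Phi x||^2 + lam * Card(Omega).
    Stops as soon as no replacement decreases the cost."""
    N = Phi.shape[1]
    omega = set()                              # active set Omega

    def cost(support):
        support = sorted(support)
        if not support:
            return np.linalg.norm(y) ** 2
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        err = np.linalg.norm(y - Phi[:, support] @ coef) ** 2
        return err + lam * len(support)

    best = cost(omega)
    for _ in range(max_iter):
        # test all N single replacements Omega . n (symmetric difference)
        c, n = min((cost(omega ^ {n}), n) for n in range(N))
        if c >= best:                          # no replacement improves J
            break
        omega ^= {n}
        best = c
    x = np.zeros(N, dtype=complex)
    if omega:
        support = sorted(omega)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x[support] = coef
    return x, omega
```

Unlike OMP, the symmetric difference `omega ^ {n}` allows an atom selected at an earlier iteration to be removed later if that decreases the penalized cost.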
3 Multigrid dictionary refinement
As mentioned before, we restrict our attention to modal dictionaries whose atoms are calculated by evaluating a function over a multidimensional grid, the grid dimension being equal to the number of unknown modal parameters. To achieve high-resolution modal estimation, a possible way is to define a high-resolution dictionary, often resulting in a prohibitive computational burden. Rather than defining a finely resolved dictionary, we propose to adaptively refine a coarse one through a multigrid scheme. This results in the algorithm sketched in Table 2, where the key step is the adaptation of the dictionary as a function of the previous dictionary and the estimated vector x. The algorithm amounts to inserting (resp. removing) atoms into (resp. from) Φ and rerunning the sparse approximation algorithm. We propose two procedures to refine the dictionary. The first one consists in inserting new atoms into the matrix Φ in the neighborhood of active ones. In other words, we first restore the signal x_{(l)} related to the dictionary Φ_{(l)} by applying a sparse approximation method (SAM) at level l. Then we refine the dictionary by inserting atoms between pairs of atoms of Φ_{(l)}, in the neighborhood of each activated atom, and we apply the SAM again at level l + 1 to restore x_{(l+1)} with respect to the refined dictionary Φ_{(l+1)}. We thus refine the dictionary iteratively until the maximum level l = L - 1 is reached. This procedure is illustrated in Figure 1a, where the dictionary atoms depend on two parameters, f and α. The disadvantage of this procedure is that the size of the dictionary keeps increasing as new atoms are added at each resolution level; hence, the computational cost increases as well. To cope with this limitation, we propose a second procedure consisting in adding new atoms as in the first procedure and deleting remote non-active ones (Figure 1b). The latter multigrid approach may suffer from one main shortcoming: removing non-active atoms excludes the possibility of later activating components in the neighborhood of already suppressed atoms. A possible way to overcome this problem consists in maintaining all the atoms of the initial dictionary in all the Φ_{(l)}'s.
The multigrid dictionary refinement is proposed in the context of modal analysis. However, it is worth noticing that this idea can be straightforwardly extended to any dictionary obtained by sampling a continuous function over a grid.
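As an illustration, one refinement step on a 2D (f, α) grid can be sketched as follows (a sketch under our own assumptions: the 8-neighbor insertion rule at half spacing and the function names are one plausible instantiation, not necessarily the authors' exact rule):

```python
import itertools

def refine_grid(active_params, df, dalpha):
    """One multigrid refinement step (in the spirit of the second procedure):
    around each active atom (f, alpha), insert the 8 neighboring grid nodes
    at half the current spacing; remote non-active atoms are simply dropped
    because only nodes near the active set are regenerated."""
    new_nodes = set()
    for f, a in active_params:
        new_nodes.add((f, a))                      # keep the active atom itself
        for i, j in itertools.product((-1, 0, 1), repeat=2):
            if (i, j) != (0, 0):
                new_nodes.add((f + i * df / 2, a + j * dalpha / 2))
    # return the refined node list and the halved grid spacings
    return sorted(new_nodes), df / 2, dalpha / 2
```

Each returned (f, α) node is then turned into a dictionary atom, and the sparse approximation method is rerun on the refined dictionary at level l + 1.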
4 Monodimensional modal estimation using sparse approximation and multigrid
4.1 1D data model
A 1D complex modal signal containing F modes can be written as:
$$y(m)=\sum_{i=1}^{F}{c}_{i}\,{a}_{i}^{m}+e(m), \qquad (5)$$
for m = 0, ..., M - 1, where ${a}_{i}={e}^{\left({\alpha}_{i}+j2\pi {f}_{i}\right)}$, with ${\left\{{\alpha}_{i}\right\}}_{i=1}^{F}$ the damping factors and ${\left\{{f}_{i}\right\}}_{i=1}^{F}$ the frequencies. ${\left\{{c}_{i}\right\}}_{i=1}^{F}$ are complex amplitudes and e(m) is an additive noise. The problem is to estimate the set of parameters ${\left\{{a}_{i},{c}_{i}\right\}}_{i=1}^{F}$ from the observed sequence y(m). Equation (5) can be written in matrix form as:
$$\mathbf{y}=\mathbf{A}\mathbf{c}+\mathbf{e}, \qquad (6)$$
where A is an M × F Vandermonde matrix:
$$\mathbf{A}=\begin{bmatrix}1 & \cdots & 1\\ {a}_{1} & \cdots & {a}_{F}\\ \vdots & & \vdots \\ {a}_{1}^{M-1} & \cdots & {a}_{F}^{M-1}\end{bmatrix},$$
and c = [c_{1}, ..., c_{F}]^{T}.
4.2 1D sparse modal estimation
The problem of modal estimation is an inverse problem since y is given and A, c, and F are unknown. It can be formulated as a sparse signal estimation problem by defining a dictionary Φ gathering all the possible modes obtained by sampling α (P samples) and f (K samples) on a 2D grid:
$$\mathbf{\Phi}=\left[{\boldsymbol{\varphi}}_{1,1},{\boldsymbol{\varphi}}_{1,2},\dots ,{\boldsymbol{\varphi}}_{P,K}\right], \qquad (7)$$
with ${\varphi}_{p,k}\left(m\right)={e}^{\left({\alpha}_{p}+j2\pi {f}_{k}\right)m}$ and N = PK. Provided that α and f are finely sampled, we can assume that A is a submatrix of Φ, so that c corresponds to the nonzero elements of x. The modal estimation problem can then be formulated as a penalized ℓ_{2}-ℓ_{0} sparse signal estimation problem (2), and the multigrid approach presented above can be used to that end.
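The construction of the dictionary Φ can be sketched as follows (Python/numpy, our own sketch; the grid sizes mirror the experiment of Section 6.1, and whether α enters with a plus or minus sign is a convention choice, so here we sample α over a negative range so that atoms decay):

```python
import numpy as np

def modal_dictionary_1d(M, freqs, alphas):
    """1D modal dictionary: one atom per (damping, frequency) grid point,
    phi_{p,k}(m) = exp((alpha_p + 2j*pi*f_k) * m), m = 0, ..., M-1."""
    m = np.arange(M)[:, None]                         # column of time indices
    # all (alpha, f) combinations on the 2D grid
    params = [(a, f) for a in alphas for f in freqs]
    poles = np.array([a + 2j * np.pi * f for a, f in params])[None, :]
    Phi = np.exp(m * poles)                           # shape (M, P*K)
    return Phi, params
```

With M = 30 samples and a 20 × 20 grid of damping factors and frequencies, this yields a 30 × 400 dictionary, as in the first experiment.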
5 Multidimensional modal estimation using sparse approximation and multigrid
5.1 RD data model
A multidimensional complex modal signal containing F modes can be written as:
$$y({m}_{1},{m}_{2},\dots ,{m}_{R})=\sum_{i=1}^{F}{c}_{i}\prod_{r=1}^{R}{a}_{i,r}^{{m}_{r}}+e({m}_{1},{m}_{2},\dots ,{m}_{R}), \qquad (8)$$
where m_{r} = 0, ..., M_{r} - 1 for r = 1, ..., R. M_{r} denotes the sample support of the r-th dimension, ${a}_{i,r}={e}^{\left({\alpha}_{i,r}+j2\pi {f}_{i,r}\right)}$ is the i-th mode in the r-th dimension, with ${\left\{{\alpha}_{i,r}\right\}}_{i=1,r=1}^{F,R}$ the damping factors and ${\left\{{f}_{i,r}\right\}}_{i=1,r=1}^{F,R}$ the frequencies, ${\left\{{c}_{i}\right\}}_{i=1}^{F}$ the complex amplitudes, and e(m_{1}, m_{2}, ..., m_{R}) stands for an additive observation noise. The problem is to estimate the set of parameters ${\left\{{a}_{i,r}\right\}}_{i=1,r=1}^{F,R}$ and ${\left\{{c}_{i}\right\}}_{i=1}^{F}$ from the samples y(m_{1}, ..., m_{R}).
In order to facilitate the presentation, we rewrite the data model using the Khatri-Rao product. Given (8), we define the vector y ∈ ℂ^{M_{1}M_{2}⋯M_{R}} obtained by stacking the samples y(m_{1}, ..., m_{R}) in lexicographic order of the indices (m_{1}, ..., m_{R}).
Then, we define R Vandermonde matrices ${\mathbf{A}}_{r}\in {\mathbb{C}}^{{M}_{r}\times F}$ with generators ${\left\{{a}_{i,r}\right\}}_{i=1}^{F}$, i.e., the (m, i) entry of ${\mathbf{A}}_{r}$ is ${a}_{i,r}^{m}$ for m = 0, ..., M_{r} - 1, with r = 1, ..., R. It can be checked that
$$\mathbf{y}=\left({\mathbf{A}}_{1}\odot {\mathbf{A}}_{2}\odot \cdots \odot {\mathbf{A}}_{R}\right)\mathbf{c}+\mathbf{e},$$
where c = [c_{1}, c_{2}, ..., c_{F}]^{T} gathers the complex amplitudes and e is the noise vector.
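The Khatri-Rao factorization of the vectorized model can be verified numerically on a small 2D example (a sketch in Python/numpy; the mode values and the lexicographic stacking order are illustrative assumptions):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (M1 x F) and B (M2 x F)."""
    return np.vstack([np.kron(A[:, i], B[:, i]) for i in range(A.shape[1])]).T

# small 2D example: F = 2 modes, M1 = 4 and M2 = 3 samples
a1 = np.exp(np.array([-0.1 + 2j*np.pi*0.2, -0.3 + 2j*np.pi*0.4]))  # dim-1 modes
a2 = np.exp(np.array([-0.2 + 2j*np.pi*0.1, -0.1 + 2j*np.pi*0.3]))  # dim-2 modes
A1 = a1[None, :] ** np.arange(4)[:, None]   # Vandermonde matrix, M1 x F
A2 = a2[None, :] ** np.arange(3)[:, None]   # Vandermonde matrix, M2 x F
c = np.array([1.0, 2.0 + 1j])

# vectorized model y = (A1 ⊙ A2) c ...
y = khatri_rao(A1, A2) @ c

# ... matches the sample-wise model y(m1, m2) = sum_i c_i a_{i,1}^m1 a_{i,2}^m2
Y = np.array([[np.sum(c * a1**m1 * a2**m2) for m2 in range(3)]
              for m1 in range(4)])
assert np.allclose(y, Y.ravel())
```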
5.2 RD sparse modal estimation
Similar to the 1D case, the RD modal retrieval problem can be formulated as a sparse signal estimation problem by defining a dictionary that gathers all possible combinations of 1D modes obtained by sampling the damping factors and frequencies of each dimension on 2D grids. Let P_{r} be the number of damping factors α_{1,r}, α_{2,r}, ..., α_{P_{r},r} and K_{r} the number of frequencies f_{1,r}, f_{2,r}, ..., f_{K_{r},r} resulting from the sampling of the r-th dimension; the corresponding dictionary is then given by
$${\mathbf{\Phi}}^{\left(r\right)}=\left[{\boldsymbol{\varphi}}_{1,1}^{\left(r\right)},{\boldsymbol{\varphi}}_{1,2}^{\left(r\right)},\dots ,{\boldsymbol{\varphi}}_{{P}_{r},{K}_{r}}^{\left(r\right)}\right],$$
where ${\varphi}_{p,k}^{\left(r\right)}={\left[{\varphi}_{p,k}^{\left(r\right)}\left(0\right),\dots ,{\varphi}_{p,k}^{\left(r\right)}\left({M}_{r}-1\right)\right]}^{T}$ and ${\varphi}_{p,k}^{\left(r\right)}\left({m}_{r}\right)={e}^{\left({\alpha}_{p,r}+j2\pi {f}_{k,r}\right){m}_{r}}$ for p = 1, ..., P_{r} and k = 1, ..., K_{r}. Finally, the dictionary involved in the RD sparse modal approximation is defined by:
$$\mathbf{\Phi}={\mathbf{\Phi}}^{\left(1\right)}\otimes {\mathbf{\Phi}}^{\left(2\right)}\otimes \cdots \otimes {\mathbf{\Phi}}^{\left(R\right)}, \qquad (10)$$
where the number of atoms is $N={\prod}_{r=1}^{R}{N}_{r}$, with N_{r} = P_{r}K_{r}. Note that the dictionary Φ can be seen as a 2R-dimensional sampling of the R-dimensional modal function. The RD modal retrieval problem can then be formulated as a penalized ℓ_{2}-ℓ_{0} sparse signal estimation problem (2).
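The RD dictionary construction can be sketched as follows (Python/numpy, our own sketch with illustrative function names; the 1D builder is repeated for self-containment):

```python
import numpy as np
from functools import reduce

def modal_dictionary_1d(M, freqs, alphas):
    """1D atoms phi_{p,k}(m) = exp((alpha_p + 2j*pi*f_k) * m), m = 0..M-1."""
    m = np.arange(M)[:, None]
    poles = np.array([a + 2j * np.pi * f
                      for a in alphas for f in freqs])[None, :]
    return np.exp(m * poles)

def modal_dictionary_rd(Ms, freqs_per_dim, alphas_per_dim):
    """RD dictionary as the Kronecker product of the R 1D dictionaries,
    so that the N = prod_r P_r*K_r atoms cover all mode combinations."""
    dicts = [modal_dictionary_1d(M, f, a)
             for M, f, a in zip(Ms, freqs_per_dim, alphas_per_dim)]
    return reduce(np.kron, dicts)
```

Note how fast the atom count grows: with only 2 frequencies and 2 dampings per dimension, a 2D signal of size 4 × 3 already yields a 12 × 16 dictionary, which is why the multigrid refinement of the next subsection matters most in the RD case.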
5.3 Multigrid approach for RD modal estimation
According to (10), the dictionary is obtained as the Kronecker product of R 1D modal dictionaries. Thus, we can still use the multigrid approach presented in Section 3 to adapt each 1D dictionary forming the RD dictionary. This results in the algorithm sketched in Table 3.
6 Experimental results
In this section, we present some experimental results for the multigrid sparse modal estimation. First, we present two examples on simulated 1D modal signals. Next, we present and discuss results on a simulated 2D signal and compare them with those obtained by the subspace method "2D ESPRIT" [5]. We chose the 2D ESPRIT method because a comparative performance study [35] showed that, among different subspace-based high-resolution modal estimation techniques, it was the one giving the best results.
6.1 1D modal estimation results
First, we compare the results achieved by SBR, OMP, and the primal-dual logarithmic barrier (log-barrier) algorithm for solving the BP problem [15]. Here we used the SparseLab^{a} implementations of OMP and BP (SolveOMP and SolveBP). Then, we present the results achieved using the multigrid SBR approach.
The first dataset is a noise-free 1D modal signal y composed of M = 30 samples, made up of three superimposed 1D damped complex sinusoids having the same amplitude. The 1D modes are:
The dictionary is constructed using 20 equally spaced frequency points in the interval [0, 1], each coupled with 20 damping factor points in [0, 0.5]; each atom represents a 1D complex sinusoid of M samples. Thus, the dictionary Φ is of size 30 × 400. Note that the simulated 1D modes belong to the dictionary; thus, in the noise-free case, an exact representation of the signal is possible.
We estimate the parameters of y using SBR, OMP, and log-barrier; the results are shown in Figure 2. The representation given at the bottom of Figure 2 plots the active modes in the frequency-magnitude plane: the vertical lines are located at the frequencies of the active set Ω and their heights represent the corresponding estimated magnitudes |x_{Ω}|. The horizontal segments represent the damping factors. Clearly, the results obtained by SBR and OMP are sparser than those achieved by the BP solver, because BP detects many more than three modes. This is due to the fact that BP is an ℓ_{2}-ℓ_{1} solver and thus tends to detect many atoms having low amplitudes, while OMP and SBR do not impose any ℓ_{1} penalty on the amplitudes, allowing for the detection of a small number of atoms possibly having large amplitudes. SBR exactly yields the three modes (exact recovery), whereas OMP gives the true frequencies but leads to a wrong α_{2}. The Fourier transform of the signal y and its estimates obtained by the SBR, OMP, and log-barrier algorithms are given in Figure 2 (top). We observe that, unlike OMP and log-barrier, SBR correctly estimates the modal parameters of y. Although the log-barrier algorithm correctly estimates the frequencies of harmonic signals, it does not correctly estimate the parameters of 1D modal signals, and its solution is less sparse than the solutions provided by SBR and OMP.
In the second example, the SBR algorithm coupled with the multigrid approach is applied to estimate the 1D modes of a simulated 1D modal signal expressed as in (5), with 30 samples embedded in additive white Gaussian noise such that the SNR is 23 dB. We start the restoration using the same dictionary as in the first example and then refine it with the multigrid approach. The simulated modes are:
These modes are chosen in such a way that they cannot be separated by the Fourier transform (Figure 3), and they are not initially in the dictionary Φ. Figure 3a shows the spectrum of each sinusoid activated at the first level. Using the second multigrid procedure presented before, we see in Figure 3b that the two 1D modes have been well separated at level 7, which proves the effectiveness of the approach. To quantify the efficiency of the multigrid approach, it is interesting to compare the size of the dictionary at the 7th level of resolution to that of the uniform dictionary allowing the same resolution. In our example, Φ_{(7)} is of dimension 30 × 520, while a uniform dictionary achieving the same resolution would be of size 30 × 6553600. This dramatic increase in the number of atoms is due to the bidimensional nature of the dictionary grid. Obviously, this complexity becomes huge for bi- and multidimensional modal signals.
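These figures can be checked directly, assuming (as in our refinement sketch of Section 3) that each level halves the grid spacing in both f and α, so that 7 levels multiply the per-axis grid density by 2^7:

```python
# Sizes quoted above: initial 20 x 20 grid = 400 atoms, refined over 7 levels;
# each level halves the spacing in both the frequency and damping axes.
P0 = K0 = 20
levels = 7
uniform_atoms = (P0 * 2**levels) * (K0 * 2**levels)
print(uniform_atoms)   # 6553600, matching the 30 x 6553600 uniform dictionary
```

The adaptive dictionary Φ_{(7)} keeps only 520 atoms out of these 6,553,600, a reduction of more than four orders of magnitude.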
6.2 2D modal estimation results
First, SBR is used in combination with the multigrid approach to estimate the parameters of a simulated 2D signal (y_{sim}) of dimensions 20 × 20 which contains three modes with parameters:
Note that the first two modes are not separated by the 2D Fourier transform. Amplitudes are (c_{1}, c_{2}, c_{3}) = (1, 1, 3) and the additive white noise variance is such that the SNR of the first mode is 7 dB (SNR_{1} = 7 dB). In the following simulations, we use this simulated 2D signal (y_{sim}) with the same modes and amplitudes; only the SNR value is changed. The spectrum of the simulated signal is represented by contour lines in Figure 4a, where it can be verified that the first two peaks are not separable by the Fourier transform. The SBR method coupled with the proposed multigrid approach detects the three components well at the third resolution level. Their respective spectra are shown in Figure 4b. To give an idea of the gain in computational cost, the size of the dictionary at the third level is 400 × 3136, whereas the size of the uniform dictionary achieving the same resolution is 400 × 409600; the gain in terms of size is a multiplicative factor of about 130.
In Figure 5, we compare the modes estimated by 2D ESPRIT [5] and by our proposed technique. We use the 2D simulated signal (y_{sim}) with the SNR set to 20 dB. Both our technique and 2D ESPRIT are able to separate the three modes, although 2D ESPRIT makes a slight error on the first and second modes. In Figure 6, we decrease the SNR to 7 dB, and only the proposed algorithm is still able to estimate the three modes with an accuracy similar to what was obtained at an SNR of 20 dB. In that case, the 2D ESPRIT performance decreases and the modal parameters are biased.
In Figure 7, we test the sensitivity of our technique to the correct determination of the number of modes in the signal. In the previous examples, the parameter λ of the penalized cost function of the SBR algorithm was fixed to 0.01 and we did not impose any constraint on the number of modes to be estimated. However, in the example presented in Figure 7, we use the 2D simulated signal with an SNR of 20 dB and we force 2D ESPRIT and the proposed algorithm to estimate 5 modes (while the actual number of modes is 3). We observe that the proposed algorithm is not very sensitive to the correct determination of the number of existing modes, in the sense that the true modes are activated and the other active atoms lie in the neighborhood of the true modes. On the contrary, 2D ESPRIT yields spurious modes located very far from the true ones.
In Figure 8, we analyze the sensitivity of the multigrid algorithm to the noise power. We use the same signal y_{sim} with different noise levels SNR_{1}. For each noise level, we perform 20 Monte Carlo trials and compute the percentage of successful estimations obtained after three multigrid levels. We can see that the proposed algorithm reconstructs the signal exactly with a rate above 80% for an SNR_{1} greater than 6 dB; the success rate reaches 100% for an SNR_{1} above 15 dB.
7 Conclusion
We presented a multigrid technique that adaptively refines ordered dictionaries for sparse approximation. The algorithm may be associated with any sparse method, but clearly the accuracy of the final results will depend on the accuracy of the sparse approximation. Sparse approximation associated with the multigrid scheme was then used to tackle mono- and multidimensional modal (damped sinusoid) estimation problems. We applied the SBR algorithm, which was shown through simulation results to perform better than OMP and basis pursuit for modal approximation. Finally, we compared the performance of our proposed algorithm with existing RD modal estimation algorithms. It allows one to separate modes that the Fourier transform cannot resolve without a huge increase in computational cost, improves robustness to noise, and does not require initialization. As perspectives, we will study possible improvements of the sparse multigrid approach in the case of multidimensional modal signals. In particular, we can envisage using multiple 1D modal estimations to obtain a low-dimension initial dictionary for RD modal estimation. We are also planning to study the convergence properties of the multigrid approach and to apply the method to the modal estimation of real NMR signals.
Endnote
References
 1.
Dupé FX, Fadili JM, Starck JL: A proximal iteration for deconvolving Poisson noisy images using sparse representations. IEEE Trans Image Process 2009, 18(2):310-321.
 2.
Miller AJ: Subset Selection in Regression. Chapman and Hall, London, UK; 2002.
 3.
Cetin M, Karl W: Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Trans Image Process 2001, 10(4):623-631.
 4.
Hoch JC, Stern AS: NMR Data Processing. Wiley-Liss, NY; 1996.
 5.
Rouquette S, Najim M: Estimation of frequencies and damping factors by two-dimensional ESPRIT type methods. IEEE Trans Signal Process 2001, 49(1):237-245.
 6.
Bresler Y, Macovski A: Exact maximum likelihood parameter estimation of superimposed exponential signals in noise. IEEE Trans Acoust Speech Signal Process 1986, 35(5):1081-1089.
 7.
Clark MP, Scharf LL: Two-dimensional modal analysis based on maximum likelihood. IEEE Trans Signal Process 1994, 42(6):1443-1451.
 8.
Kumaresan R, Tufts DW: Estimating the parameters of exponentially damped sinusoids and pole-zero modeling in noise. IEEE Trans Acoust Speech Signal Process 1982, 30:833-840.
 9.
Stoica P, Nehorai A: MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Trans Acoust Speech Signal Process 1989, 37(5):720-741.
 10.
Roy R, Kailath T: ESPRIT: estimation of signal parameters via rotational invariance techniques. IEEE Trans Acoust Speech Signal Process 1989, 37(7):984-995.
 11.
Sacchini JJ, Steedly WM, Moses RL: Two-dimensional Prony modeling and parameter estimation. IEEE Trans Signal Process 1993, 41(11):3127-3137.
 12.
Liu J, Liu X, Ma X: Multidimensional frequency estimation with finite snapshots in the presence of identical frequencies. IEEE Trans Signal Process 2007, 55:5179-5194.
 13.
Gorodnitsky IF, Rao BD: Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Trans Signal Process 1997, 45(3):600-616.
 14.
Moal N, Fuchs JJ: Sinusoids in white noise: a quadratic programming approach. In Proc IEEE ICASSP. Volume 4. Seattle, WA, USA; 1998:2221-2224.
 15.
Chen SS, Donoho DL: Application of basis pursuit in spectrum estimation. In Proc IEEE ICASSP. Volume 3. Seattle, WA, USA; 1998:1865-1868.
 16.
Cabrera S, Boonsri T, Brito AE: Principal component separation in sparse signal recovery for harmonic retrieval. Proc IEEE SAM Workshop 2002, 249-253.
 17.
Bourguignon S, Carfantan H, Idier J: A sparsity-based method for the estimation of spectral lines from irregularly sampled data. IEEE J Sel Topics Signal Process 2007, 1(4):575-585.
 18.
Fuchs JJ: Convergence of a sparse representations algorithm applicable to real or complex data. IEEE J Sel Topics Signal Process 2007, 1(4):598-605.
 19.
Donoho DL, Tsaig Y: Fast solution of ℓ_{1}-norm minimization problems when the solution may be sparse. IEEE Trans Inf Theory 2008, 54(11):4789-4812.
 20.
Tibshirani R: Regression shrinkage and selection via the lasso. J Royal Stat Soc Series B 1996, 58:267-288.
 21.
Efron B, Hastie T, Johnstone I, Tibshirani R: Least angle regression. Ann Statist 2004, 32(2):407-499.
 22.
Pati YC, Rezaiifar R, Krishnaprasad PS: Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In Conference Record of the Twenty-Seventh Asilomar Conference on Signals, Systems and Computers. Volume 1. Pacific Grove, CA, USA; 1993:40-44.
 23.
Goodwin M, Vetterli M: Matching pursuit and atomic signal models based on recursive filter banks. IEEE Trans Signal Process 1999, 47(7):1890-1902.
 24.
Mallat SG, Zhang Z: Matching pursuits with time-frequency dictionaries. IEEE Trans Signal Process 1993, 41(12):3397-3415.
 25.
Cabrera S, Malladi S, Mulpuri R, Brito A: Adaptive refinement in maximally sparse harmonic signal retrieval. IEEE Digital Signal Processing Workshop 2004, 231-235.
 26.
Malioutov D, Cetin M, Willsky AS: A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans Signal Process 2005, 53(8):3010-3022.
 27.
Djermoune EH, Kasalica G, Brie D: Estimation of the parameters of two-dimensional NMR spectroscopy signals using an adapted subband decomposition. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-2008). Las Vegas, USA; 2008:3641-3644.
 28.
Sahnoun S, Djermoune EH, Soussen C, Brie D: Analyse modale bidimensionnelle par approximation parcimonieuse et multirésolution. In GRETSI. Bordeaux, France; 2011.
 29.
Sahnoun S, Djermoune EH, Soussen C, Brie D: Sparse multiresolution modal estimation. In Proceedings of the IEEE Statistical Signal Processing Workshop (SSP-2011). France; 2011:309-312.
 30.
Soussen C, Idier J, Brie D, Duan J: From Bernoulli-Gaussian deconvolution to sparse signal restoration. IEEE Trans Signal Process 2011, 56(10):4572-4584.
 31.
Herzet C, Drémeau A: Bayesian pursuit algorithms. In Proceedings of the European Signal Processing Conference (EUSIPCO-2010). Aalborg, Denmark; 2010:1474-1478.
 32.
Chen SS, Donoho DL, Saunders MA: Atomic decomposition by basis pursuit. SIAM J Sci Comput 1998, 20(1):33-61.
 33.
Rao BD, Kreutz-Delgado K: An affine scaling methodology for best basis selection. IEEE Trans Signal Process 1999, 47(1):187-200.
 34.
Kormylo J, Mendel J: Maximum likelihood detection and estimation of Bernoulli-Gaussian processes. IEEE Trans Inf Theory 1982, 28(3):482-488.
 35.
Sahnoun S, Djermoune EH, Brie D: A comparative study of subspace-based methods for 2D nuclear magnetic resonance spectroscopy signals. Tech rep, CRAN; 2010.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All authors contributed to all aspects of the article. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Sahnoun, S., Djermoune, EH., Soussen, C. et al. Sparse multidimensional modal analysis using a multigrid dictionary refinement. EURASIP J. Adv. Signal Process. 2012, 60 (2012). https://doi.org/10.1186/1687-6180-2012-60
Keywords
 modal estimation
 multidimensional damped sinusoids
 adaptive sparse approximation
 multi grid