Research | Open Access
Sparse multidimensional modal analysis using a multigrid dictionary refinement
Souleymen Sahnoun^{1}, El-Hadi Djermoune^{1}, Charles Soussen^{1} and David Brie^{1}
https://doi.org/10.1186/1687-6180-2012-60
© Sahnoun et al; licensee Springer. 2012
 Received: 15 September 2011
 Accepted: 8 March 2012
 Published: 8 March 2012
Abstract
We address the problem of multidimensional modal estimation using sparse estimation techniques coupled with an efficient multigrid approach. Modal dictionaries are obtained by discretizing the modal functions (damped complex exponentials). Achieving good resolution requires a fine discretization grid, which leads to intractable computations due to the huge size of the resulting dictionaries. The idea behind the multigrid approach is to refine the dictionary over several levels of resolution: the algorithm starts from a coarse grid and adaptively improves the resolution depending on the active set provided by the sparse approximation method. The proposed method is quite general in the sense that it processes mono- and multidimensional signals in the same way. We show through simulations that, compared to high-resolution modal estimation methods, the proposed sparse modal method can greatly enhance the estimation accuracy for noisy signals, and that it is robust with respect to the choice of the number of components.
Keywords
 modal estimation
 multidimensional damped sinusoids
 adaptive sparse approximation
 multigrid
1 Introduction
The topic of sparse signal representation has received considerable attention in the last decades since it finds application in a variety of problems, including mono- and multidimensional deconvolution [1], statistical regression [2], and radar imaging [3]. Sparse approximation consists of finding a decomposition of a signal y as a linear combination of a limited number of elements from a dictionary Φ ∈ ℂ^{M×N}, i.e., finding a coefficient vector x that satisfies y ≈ Φx, where Φ is overcomplete (M < N). The sparsity condition on x ensures that the underdetermined problem does not have an infinite number of solutions. The dictionary Φ can be chosen according to its ability to represent the signal with a limited number of coefficients, or it can be imposed by the inverse problem at hand. In the latter case, we consider dictionaries whose atoms are functions of some parameters. The different atoms of the dictionary are then formed by evaluating this function over a grid, which has to be very fine to achieve a given degree of resolution. This is the case for the modal estimation problem, in which the atoms are formed by discretizing the frequency and damping factor axes. In this situation, the challenge is to get a good approximation without a prohibitive computational cost due to the huge size of the dictionary.
This study addresses the modal retrieval problem. This is an important topic in various applications including nuclear magnetic resonance (NMR) spectroscopy [4], wireless communications, radar, and sonar [5]. A modal signal is modeled as a sum of damped complex sinusoids. Several methods have been developed to address the modal estimation problem, such as maximum likelihood [6, 7] and subspace-based methods [5, 8–12]. A special case of modal estimation is the harmonic retrieval problem (null damping factor), which has been formulated as a sparse approximation in a number of contributions. In the case of the 1D harmonic retrieval problem, we can cite FOCUSS [13], the method of Moal and Fuchs [14], basis pursuit [15], and adaptive weighted norm extrapolation [16]. Some other contributions may be found in [17, 18]. Nevertheless, only a few methods have been applied to the damped case. For instance, [19] presents a sparse estimation example on 1D NMR (modal) data using Lasso [20], LARS [21] and OMP [22]. Goodwin et al. [23] proposed a damped sinusoidal signal decomposition for 1D signals using Matching Pursuit [24]. Similarly, regarding multigrid approaches associated with sparse approximation methods, only a few studies consider 1D harmonic signals [25, 26]. In the case of 2D signals, an approach combining adaptive multigrid decomposition and TLS-Prony estimation was proposed in [27]. However, to the authors' knowledge, there is no study that deals with the problem of estimating the parameters of multidimensional (RD) damped sinusoidal signals by sparse approximation methods. This article provides a multidimensional generalization of the study presented in [28, 29].
The goal of this article is to present an efficient approach that reduces the computational cost of sparse algorithms for RD modal estimation problems. The main contributions of the article are as follows. (i) We propose a procedure which iteratively improves the set of atoms in the dictionary. The goal of this procedure is to improve resolution while avoiding the computationally expensive operations entailed by the processing of large matrices; we refer to this procedure as the multigrid approach. (ii) We show how the 1D modal retrieval problem can be addressed using a sparse estimation approach by building a dictionary whose atoms are calculated by sampling the modal function over a 2D grid (frequency and damping factor) in order to obtain all possible mode combinations. (iii) We show how to extend the sparse 1D modal estimation problem to RD modal problems.
The article is organized as follows. In Section 2, we provide background material and definitions for sparse signal representation. We present some known sparse methods and we recall the single best replacement (SBR) algorithm [30] and its advantages compared to other algorithms such as OMP, OLS, and CoSaMP, to name a few. In Section 3, we present the multigrid dictionary refinement approach and discuss its usefulness for accelerating computation and improving resolution. In Section 4, we show how the 1D modal retrieval problem may be addressed using sparse approximations and how the multigrid approach can be applied. In Section 5, we extend the sparse multigrid approach to the RD modal estimation problem. In Section 6, experimental results are presented, first to compare SBR with a greedy algorithm (OMP) and a solver for the basis pursuit problem; then, the effectiveness of the multigrid approach is illustrated on simulated 1D and 2D modal signals. Conclusions are drawn in Section 7.
Notations: Upper and lower case bold face letters will be used for matrices and column vectors, respectively. A^{T} denotes the transpose of A. "⊙" will denote the Khatri-Rao product (column-wise Kronecker product) and "⊗" will denote the Kronecker product.
2 Sparse approximations
2.1 Key ideas of sparse approximations

Sparse approximation of a signal y over a dictionary Φ is classically stated in one of two ways:

- the constrained ℓ_{2}-ℓ_{0} problem, whose goal is to seek the minimal error achievable at a given sparsity level s ≥ 1:
$$\mathop{\mathrm{arg\,min}}_{\|\mathbf{x}\|_0 \le s} \left\{ \mathcal{E}(\mathbf{x}) = \|\mathbf{y} - \mathbf{\Phi}\mathbf{x}\|^2 \right\} \qquad (1)$$

- the penalized ℓ_{2}-ℓ_{0} problem:
$$\mathop{\mathrm{arg\,min}}_{\mathbf{x} \in \mathbb{C}^N} \left\{ \mathcal{J}(\mathbf{x}, \lambda) = \mathcal{E}(\mathbf{x}) + \lambda \|\mathbf{x}\|_0 \right\}, \qquad (2)$$

whose goal is to balance the two objectives (fitting error and sparsity); here, the sparsity level of the solution is controlled by the parameter λ.
The ℓ_{2}-ℓ_{0} problem is known to yield an NP-complete combinatorial problem, which is usually handled using suboptimal search algorithms. Restricting our attention to greedy algorithms, the main advantage of the penalized ℓ_{2}-ℓ_{0} form is that it allows both insertion and removal of elements in x, whereas the constrained form only allows insertions when the optimization is carried out through a descent approach [30, 31].
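To fix ideas, the two cost functions in (1)-(2) can be evaluated directly. The sketch below uses a random (non-modal) dictionary and illustrative function names; it simply checks that, for an exact 2-sparse representation, the fitting error vanishes and only the sparsity penalty remains.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 20, 50                       # overcomplete dictionary: M < N
Phi = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

x = np.zeros(N, dtype=complex)
x[[3, 17]] = [1.0 + 0.5j, -0.7j]    # sparse coefficient vector, ||x||_0 = 2
y = Phi @ x                         # noise-free observation y = Phi x

def fit_error(x):
    """E(x) = ||y - Phi x||^2, the squared fitting error."""
    return np.linalg.norm(y - Phi @ x) ** 2

def penalized_cost(x, lam):
    """J(x, lambda) = E(x) + lambda * ||x||_0, the penalized l2-l0 cost."""
    return fit_error(x) + lam * np.count_nonzero(x)

print(fit_error(x))             # 0: exact representation
print(penalized_cost(x, 0.1))   # 0.2 = lambda * ||x||_0
```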
A well-known greedy method for sparse approximation is orthogonal matching pursuit (OMP) [22]. It iteratively minimizes the error ℰ(x) until a stopping criterion is met. At each iteration, the current estimate of the coefficient vector x is refined by selecting one more atom so as to yield a substantial improvement of the signal approximation.
A classical alternative relaxes the ℓ_{0} pseudo-norm into the ℓ_{1} norm, leading to the basis pursuit formulation [32]. This principle still leads to approximations that can be sparse, and the resulting minimization problem can be solved via linear programming [32]. Instead of the ℓ_{2}-ℓ_{1} penalized problem, the FOCUSS algorithm [13] uses an ℓ_{2}-ℓ_{p} penalized criterion. For p < 1, the cost function is nonconvex and convergence to the global minimum is not guaranteed. It is indicated in [33] that the best results are obtained for p close to 1, whereas the convergence is also slowest for p = 1.
In this article, we will use the SBR algorithm together with the multigrid approach. This algorithm performs particularly well when the dictionary elements are strongly correlated [30], which is precisely the case for modal atoms. The algorithm is briefly recalled in the following paragraph.
2.2 SBR algorithm for the penalized ℓ_{2}-ℓ_{0} problem
SBR algorithm [30]

Input: a signal y ∈ ℂ^{M}, a matrix Φ ∈ ℂ^{M×N} and a scalar λ.
Output: a sparse coefficient vector x ∈ ℂ^{N}.

1. Initialize. Set the index set Ω_{1} = ∅, the coefficient vector x_{1} = [0,...,0]^{T}, and the counter k = 1.
2. Identify. Find the single replacement n_{k} that most decreases the objective function:
$$n_k \in \mathop{\mathrm{arg\,min}}_{n} \left\{ \mathcal{J}_{\Omega_k \bullet n}(\lambda) := \mathcal{E}_{\Omega_k \bullet n} + \lambda\, \mathrm{Card}(\Omega_k \bullet n) \right\}$$
3. Iterate. If $\mathcal{J}_{\Omega_k \bullet n_k}(\lambda) < \mathcal{J}_{\Omega_k}(\lambda)$, update the active set: Ω_{k+1} = Ω_{k} • n_{k}. Increment k. Repeat steps 2-3 until the active set is no longer updated.
4. Output. Return x = x_{k}, the vector of active amplitudes.
At each iteration, the N possible single replacements Ω • n (n = 1,...,N) are tested (i.e., N least-squares problems are solved to compute the minimum squared error ℰ_{Ω•n} related to each support Ω • n); the replacement yielding the minimal value of the cost function $\mathcal{J}_{\Omega \bullet n}(\lambda) := \mathcal{E}_{\Omega \bullet n} + \lambda\,\mathrm{Card}(\Omega \bullet n)$ is then selected. In Table 1, the replacement rule is formulated as "n_{k} ∈ ..." in case several replacements yield the same value of $\mathcal{J}(\mathbf{x}, \lambda)$; however, this special case is not likely to occur when dealing with real data. A detailed analysis and performance evaluation can be found in [30], where it is shown that SBR performs very well in the case of highly correlated dictionary atoms (which is the case here). Note that, unlike many algorithms which require fixing either a maximum number of iterations or a threshold on the squared error variation (OMP and OLS, for instance), the SBR algorithm does not need any stopping condition: it stops as soon as the cost function $\mathcal{J}(\mathbf{x}, \lambda)$ no longer decreases. However, it requires tuning the parameter λ, which is done empirically.
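The replacement loop can be sketched compactly as follows. This is a minimal, unoptimized sketch under our own conventions: each candidate support is re-solved from scratch by least squares, whereas an efficient implementation would update these solves recursively; all function names are illustrative, not the authors' code.

```python
import numpy as np

def sbr(y, Phi, lam, max_iter=100):
    """Sketch of Single Best Replacement for min ||y - Phi x||^2 + lam*||x||_0.
    Each iteration tests all N single replacements Omega . n (insertion if n
    is inactive, removal if active) with a least-squares solve on the
    candidate support, and accepts the best one only if J decreases."""
    N = Phi.shape[1]

    def cost(supp):
        if not supp:
            return np.linalg.norm(y) ** 2
        idx = sorted(supp)
        c, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        return np.linalg.norm(y - Phi[:, idx] @ c) ** 2 + lam * len(idx)

    omega, J_cur = set(), cost(set())
    for _ in range(max_iter):
        # the N single replacements: symmetric difference inserts or removes n
        candidates = [omega ^ {n} for n in range(N)]
        costs = [cost(s) for s in candidates]
        k = int(np.argmin(costs))
        if costs[k] >= J_cur:          # no replacement decreases J: stop
            break
        omega, J_cur = candidates[k], costs[k]

    x = np.zeros(N, dtype=complex)
    if omega:
        idx = sorted(omega)
        x[idx], *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    return x
```

On an incoherent dictionary with an exact sparse representation, the removal moves let the algorithm correct an early wrong insertion, which is the behavior that distinguishes SBR from pure forward greedy schemes.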
3 Multigrid dictionary refinement
Sparse multigrid algorithm

Input: a signal y ∈ ℂ^{M}, a matrix Φ_{0} ∈ ℂ^{M×N}, a scalar λ and an integer L.
Output: a sparse coefficient vector x_{L-1} ∈ ℂ^{N}.

For l = 0 up to l = L - 1:
  x_{l} = SAM(Φ_{l}, y, λ)
  Φ_{l+1} = ADAPT(Φ_{l}, x_{l})
End For.
The multigrid dictionary refinement is proposed here in the context of modal analysis. However, it is worth noting that this idea can be straightforwardly extended to any dictionary obtained by sampling a continuous function over a grid.
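The SAM and ADAPT steps are left abstract in the pseudocode above. The sketch below shows one plausible instantiation, under our own assumptions, for a one-parameter atom family: ADAPT keeps the active parameter values and halves the grid step around each of them, and a toy 1-sparse SAM stands in for SBR. All names and the refinement rule are illustrative, not the authors' implementation.

```python
import numpy as np

def atom(theta, M):
    """One-parameter continuous atom family sampled on M points (an
    undamped complex sinusoid here, standing in for any modal function)."""
    return np.exp(2j * np.pi * theta * np.arange(M))

def adapt(grid, active, step):
    """ADAPT sketch: keep each active parameter value and surround it
    with two finer samples at half the previous grid step."""
    new = []
    for t in grid[active]:
        new.extend([t - step / 2, t, t + step / 2])
    return np.unique(np.clip(new, 0.0, 1.0)), step / 2

def multigrid(y, M, sam, L=4, n0=16):
    """Run SAM on a coarse grid, then refine the grid around the active
    set over L resolution levels."""
    grid, step = np.linspace(0, 1, n0, endpoint=False), 1.0 / n0
    for level in range(L):
        Phi = np.stack([atom(t, M) for t in grid], axis=1)  # current dictionary
        x = sam(y, Phi)
        active = np.flatnonzero(np.abs(x) > 1e-8)
        if level < L - 1:
            grid, step = adapt(grid, active, step)
    return x, grid

def sam1(y, Phi):
    """Toy 1-sparse SAM: select the atom most correlated with y
    (all atoms have equal norm here)."""
    n = int(np.argmax(np.abs(Phi.conj().T @ y)))
    x = np.zeros(Phi.shape[1], dtype=complex)
    x[n] = Phi[:, n].conj() @ y / (np.linalg.norm(Phi[:, n]) ** 2)
    return x

y = atom(0.3, 32)                       # true parameter 0.3, off the coarse grid
x, grid = multigrid(y, 32, sam1)
est = grid[int(np.argmax(np.abs(x)))]   # refined estimate, within the final grid step
```

After four levels the working grid step has shrunk from 1/16 to 1/128, while each level only ever solves a sparse problem over a handful of atoms instead of a single dictionary sampled at the finest resolution.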
4 Monodimensional modal estimation using sparse approximation and multigrid
4.1 1D data model
The 1D modal signal is modeled as a sum of F damped complex sinusoids, $y(m) = \sum_{i=1}^{F} c_i a_i^m + e(m)$, m = 0,...,M - 1, where $a_i = e^{\alpha_i + j2\pi f_i}$ is the i-th mode, with α_{i} the damping factor, f_{i} the frequency, c_{i} the complex amplitude, and e(m) an additive noise. In matrix form, y = Ac + e, where A = [a_{1},...,a_{F}] gathers the mode vectors $\mathbf{a}_i = [1, a_i, \ldots, a_i^{M-1}]^T$ and c = [c_{1},...,c_{F}]^{T}.
4.2 1D sparse modal estimation
The problem of modal estimation is an inverse problem since y is given while A, c, and F are unknown. It can be formulated as a sparse signal estimation problem by defining a dictionary Φ gathering all the possible modes obtained by sampling α (P samples) and f (K samples) on a 2D grid. Φ is expressed in (7) with $\varphi_{p,k}(m) = e^{(\alpha_p + j2\pi f_k)m}$ and N = PK. Provided that α and f are finely sampled, we can assume that A is a submatrix of Φ, so that c corresponds to the nonzero elements of x. The modal estimation problem can then be formulated as a penalized ℓ_{2}-ℓ_{0} sparse signal estimation problem (2), and the multigrid approach presented above can be used to solve it.
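The dictionary construction described above can be sketched as follows. The decay sign convention (atoms decay as $e^{-\alpha m}$ for α ≥ 0) and the (damping, frequency) column ordering are our assumptions for illustration.

```python
import numpy as np

def modal_dictionary(M, alphas, freqs):
    """1-D modal dictionary: one atom per (damping, frequency) grid pair,
    phi_{p,k}(m) = exp((-alpha_p + 2j*pi*f_k) * m), m = 0..M-1.
    The minus sign (decay for alpha_p >= 0) is our convention."""
    m = np.arange(M)
    atoms = [np.exp((-a + 2j * np.pi * f) * m)
             for a in alphas for f in freqs]       # N = P * K atoms
    return np.stack(atoms, axis=1)                 # shape (M, P*K)

# e.g. a 30 x 400 dictionary with P = K = 20, as in the 1D experiments
Phi = modal_dictionary(30,
                       np.linspace(0, 0.5, 20),                 # damping grid
                       np.linspace(0, 1, 20, endpoint=False))   # frequency grid
print(Phi.shape)   # (30, 400)
```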
5 Multidimensional modal estimation using sparse approximation and multigrid
5.1 RD data model
The RD modal signal is modeled as
$$y(m_1,\ldots,m_R) = \sum_{i=1}^{F} c_i \prod_{r=1}^{R} a_{i,r}^{m_r} + e(m_1,\ldots,m_R),$$
where m_{r} = 0,...,M_{r} - 1 for r = 1,...,R. Here, M_{r} denotes the sample support of the r-th dimension, $a_{i,r} = e^{\alpha_{i,r} + j2\pi f_{i,r}}$ is the i-th mode in the r-th dimension, with $\{\alpha_{i,r}\}_{i=1,r=1}^{F,R}$ the damping factors and $\{f_{i,r}\}_{i=1,r=1}^{F,R}$ the frequencies, $\{c_i\}_{i=1}^{F}$ the complex amplitudes, and e(m_{1}, m_{2},...,m_{R}) stands for an additive observation noise. The problem is to estimate the parameters $\{\alpha_{i,r}\}$, $\{f_{i,r}\}$ and $\{c_i\}$ from the samples y(m_{1},...,m_{R}).
In vector form, y = (A^{(1)} ⊙ A^{(2)} ⊙ ⋯ ⊙ A^{(R)}) c + e, where A^{(r)} ∈ ℂ^{M_r × F} gathers the mode vectors of the r-th dimension, c = [c_{1}, c_{2},...,c_{F}]^{T} gathers the complex amplitudes and e is the noise vector.
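For concreteness, a 2D modal signal following this model can be synthesized as below; the particular mode values and amplitudes are arbitrary, and the damping sign convention (decay for α > 0) is ours.

```python
import numpy as np

def modal_signal_2d(M1, M2, modes, amps):
    """Synthesize y(m1, m2) = sum_i c_i * a_{i,1}^{m1} * a_{i,2}^{m2},
    with a_{i,r} = exp(-alpha_{i,r} + 2j*pi*f_{i,r}); modes is a list of
    ((alpha1, f1), (alpha2, f2)) tuples (sign convention is ours)."""
    m1 = np.arange(M1)[:, None]
    m2 = np.arange(M2)[None, :]
    y = np.zeros((M1, M2), dtype=complex)
    for c, ((a1, f1), (a2, f2)) in zip(amps, modes):
        y += c * np.exp((-a1 + 2j * np.pi * f1) * m1) \
               * np.exp((-a2 + 2j * np.pi * f2) * m2)
    return y

# two 2-D modes with distinct dampings and frequencies in each dimension
y = modal_signal_2d(16, 16,
                    [((0.05, 0.20), (0.02, 0.35)),
                     ((0.10, 0.40), (0.00, 0.10))],
                    [1.0 + 0.5j, -0.8j])
print(y.shape)   # (16, 16)
```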
5.2 RD sparse modal estimation
The RD dictionary is obtained as the Kronecker product of R 1D modal dictionaries, $\mathbf{\Phi} = \mathbf{\Phi}^{(1)} \otimes \mathbf{\Phi}^{(2)} \otimes \cdots \otimes \mathbf{\Phi}^{(R)}$, where the number of atoms is $N = \prod_{r=1}^{R} N_r$, with N_{r} = P_{r}K_{r}. Note that the dictionary Φ can be seen as a 2R-dimensional sampling of the R-dimensional modal function. The RD modal retrieval problem can then be formulated as a penalized ℓ_{2}-ℓ_{0} sparse signal estimation problem (2).
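The Kronecker structure can be checked numerically: vectorizing a single 2D mode (row-major) matches the corresponding column of Φ^{(1)} ⊗ Φ^{(2)}. The grid values and the helper below are illustrative only.

```python
import numpy as np

def modal_dict(M, alphas, freqs):
    """Small 1-D modal dictionary (sign convention on damping is ours)."""
    m = np.arange(M)
    return np.stack([np.exp((-a + 2j * np.pi * f) * m)
                     for a in alphas for f in freqs], axis=1)

M1, M2 = 8, 6
Phi1 = modal_dict(M1, [0.0, 0.1], [0.0, 0.25, 0.5])   # N1 = 6 atoms
Phi2 = modal_dict(M2, [0.0, 0.2], [0.125, 0.375])     # N2 = 4 atoms
Phi = np.kron(Phi1, Phi2)                             # (M1*M2) x (N1*N2)

# a single 2-D mode y(m1, m2) = a1^m1 * a2^m2, vectorized row-major,
# equals the Kronecker product of the corresponding 1-D atoms:
col = 2 * Phi2.shape[1] + 1                 # atom 2 of Phi1 paired with atom 1 of Phi2
y2d = np.outer(Phi1[:, 2], Phi2[:, 1])      # y2d[m1, m2]
assert np.allclose(y2d.ravel(), Phi[:, col])
print(Phi.shape)   # (48, 24)
```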
5.3 Multigrid approach for RD modal estimation
RD sparse multigrid algorithm

Input: a signal $\mathbf{y} \in \mathbb{C}^{M_1 M_2 \cdots M_R}$, R matrices $\mathbf{\Phi}_0^{(r)} \in \mathbb{C}^{M_r \times N_r}$, a scalar λ and an integer L.
Output: a sparse coefficient vector x_{L-1} ∈ ℂ^{N}.

For l = 0 up to l = L - 1:
  Φ_{l} = Φ_{l}^{(1)} ⊗ Φ_{l}^{(2)} ⊗ ⋯ ⊗ Φ_{l}^{(R)}
  x_{l} = SAM(Φ_{l}, y, λ)
  For r = 1 up to r = R:
    Φ_{l+1}^{(r)} = ADAPT(Φ_{l}^{(r)}, x_{l})
  End For
End For.
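The structure of the RD loop above can be sketched directly, keeping SAM and ADAPT abstract exactly as in the pseudocode (the callables and their signatures below are our own illustrative choices):

```python
import numpy as np

def rd_multigrid(y, phis, sam, adapt, L):
    """Skeleton of the R-D sparse multigrid loop. At each level, the full
    dictionary is the Kronecker product of the R per-dimension
    dictionaries; SAM returns a sparse coefficient vector; ADAPT refines
    each per-dimension grid around the active atoms. sam and adapt are
    left abstract, as in the paper's pseudocode."""
    for level in range(L):
        Phi = phis[0]
        for Phir in phis[1:]:
            Phi = np.kron(Phi, Phir)          # Phi_l = Phi^(1) x ... x Phi^(R)
        x = sam(y, Phi)                       # x_l = SAM(Phi_l, y, lambda)
        phis = [adapt(Phir, x, r)             # per-dimension refinement
                for r, Phir in enumerate(phis)]
    return x
```

Note that only the R small per-dimension dictionaries are refined; the large Kronecker dictionary is rebuilt from them at each level, which is what keeps the active-set bookkeeping cheap.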
6 Experimental results
In this section, we present experimental results for the multigrid sparse modal estimation. First, we present two examples on simulated 1D modal signals. Next, we present and discuss results on a simulated 2D signal and compare them with those obtained by the subspace method 2D ESPRIT [5]. We chose the 2D ESPRIT method because a comparative performance study [35] showed that, among different subspace-based high-resolution modal estimation techniques, it gave the best results.
6.1 1D modal estimation results
First, we compare the results achieved by SBR, OMP, and the primal-dual logarithmic barrier (log-barrier) algorithm for solving the BP problem [15]. Here we used the SparseLab^{a} implementations of OMP and BP (SolveOMP and SolveBP). Then, we present the results achieved using the multigrid SBR approach.
The dictionary is constructed using 20 equally spaced frequency points in the interval [0, 1], where each frequency point is coupled with 20 damping-factor points in [0, 0.5]; each atom represents a 1D complex sinusoid of M = 30 samples. Thus, the dictionary Φ is of size 30 × 400. Note that the simulated 1D modes belong to the dictionary, so that, in the noise-free case, an exact representation of the signal is possible.
6.2 2D modal estimation results
7 Conclusion
We presented a multigrid technique that adaptively refines ordered dictionaries for sparse approximation. The algorithm may be associated with any sparse approximation method, but clearly the accuracy of the final result depends on the accuracy of the sparse approximation. Sparse approximation combined with multigrid refinement was then used to tackle the mono- and multidimensional modal (damped sinusoid) estimation problem. We applied the SBR algorithm, which simulation results show to perform better than OMP and basis pursuit for modal approximation. Finally, we examined the performance of the proposed algorithm against existing RD modal estimation algorithms: it separates modes that the Fourier transform cannot resolve without a huge increase in computational cost, improves robustness to noise, and does not require initialization. As perspectives, we will study possible improvements of the sparse multigrid approach in the case of multidimensional modal signals. In particular, we envisage using multiple 1D modal estimations to obtain a low-dimension initial dictionary for RD modal estimation. We are also planning to study the convergence properties of the multigrid approach, and we will apply the method to the modal estimation of real NMR signals.
References
 Dupé FX, Fadili JM, Starck JL: A proximal iteration for deconvolving Poisson noisy images using sparse representations. IEEE Trans Image Process 2009, 18(2):310-321.
 Miller AJ: Subset Selection in Regression. Chapman and Hall, London, UK; 2002.
 Cetin M, Karl W: Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Trans Image Process 2001, 10(4):623-631.
 Hoch JC, Stern AS: NMR Data Processing. Wiley-Liss, NY; 1996.
 Rouquette S, Najim M: Estimation of frequencies and damping factors by two-dimensional ESPRIT type methods. IEEE Trans Signal Process 2001, 49(1):237-245.
 Bresler Y, Macovski A: Exact maximum likelihood parameter estimation of superimposed exponential signals in noise. IEEE Trans Acoustics Speech Signal Process 1986, 34(5):1081-1089.
 Clark MP, Scharf LL: Two-dimensional modal analysis based on maximum likelihood. IEEE Trans Signal Process 1994, 42(6):1443-1451.
 Kumaresan R, Tufts DW: Estimating the parameters of exponentially damped sinusoids and pole-zero modeling in noise. IEEE Trans Acoustics Speech Signal Process 1982, 30:833-840.
 Stoica P, Nehorai A: MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Trans Acoustics Speech Signal Process 1989, 37(5):720-741.
 Roy R, Kailath T: ESPRIT: estimation of signal parameters via rotational invariance techniques. IEEE Trans Acoustics Speech Signal Process 1989, 37(7):984-995.
 Sacchini JJ, Steedly WM, Moses RL: Two-dimensional Prony modeling and parameter estimation. IEEE Trans Signal Process 1993, 41(11):3127-3137.
 Liu J, Liu X, Ma X: Multidimensional frequency estimation with finite snapshots in the presence of identical frequencies. IEEE Trans Signal Process 2007, 55:5179-5194.
 Gorodnitsky IF, Rao BD: Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Trans Signal Process 1997, 45(3):600-616.
 Moal N, Fuchs JJ: Sinusoids in white noise: a quadratic programming approach. In Proc IEEE ICASSP. Volume 4. Seattle, WA, USA; 1998:2221-2224.
 Chen SS, Donoho DL: Application of basis pursuit in spectrum estimation. In Proc IEEE ICASSP. Volume 3. Seattle, WA, USA; 1998:1865-1868.
 Cabrera S, Boonsri T, Brito AE: Principal component separation in sparse signal recovery for harmonic retrieval. In Proc IEEE SAM Workshop; 2002:249-253.
 Bourguignon S, Carfantan H, Idier J: A sparsity-based method for the estimation of spectral lines from irregularly sampled data. IEEE J Sel Topics Signal Process 2007, 1(4):575-585.
 Fuchs JJ: Convergence of a sparse representations algorithm applicable to real or complex data. IEEE J Sel Topics Signal Process 2007, 1(4):598-605.
 Donoho DL, Tsaig Y: Fast solution of ℓ_{1}-norm minimization problems when the solution may be sparse. IEEE Trans Inf Theory 2008, 54(11):4789-4812.
 Tibshirani R: Regression shrinkage and selection via the lasso. J Royal Stat Soc Series B (Methodol) 1996, 58:267-288.
 Efron B, Hastie T, Johnstone I, Tibshirani R: Least angle regression. Ann Statist 2004, 32(2):407-499.
 Pati YC, Rezaiifar R, Krishnaprasad PS: Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In Conference Record of the Twenty-Seventh Asilomar Conference on Signals, Systems and Computers. Volume 1. Pacific Grove, CA, USA; 1993:40-44.
 Goodwin M, Vetterli M: Matching pursuit and atomic signal models based on recursive filter banks. IEEE Trans Signal Process 1999, 47(7):1890-1902.
 Mallat SG, Zhang Z: Matching pursuits with time-frequency dictionaries. IEEE Trans Signal Process 1993, 41(12):3397-3415.
 Cabrera S, Malladi S, Mulpuri R, Brito A: Adaptive refinement in maximally sparse harmonic signal retrieval. In Proc IEEE Digital Signal Processing Workshop; 2004:231-235.
 Malioutov D, Cetin M, Willsky AS: A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans Signal Process 2005, 53(8):3010-3022.
 Djermoune EH, Kasalica G, Brie D: Estimation of the parameters of two-dimensional NMR spectroscopy signals using an adapted subband decomposition. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-2008). Las Vegas, USA; 2008:3641-3644.
 Sahnoun S, Djermoune EH, Soussen C, Brie D: Analyse modale bidimensionnelle par approximation parcimonieuse et multirésolution. In Proc GRETSI. Bordeaux, France; 2011.
 Sahnoun S, Djermoune EH, Soussen C, Brie D: Sparse multiresolution modal estimation. In Proceedings of the IEEE Statistical Signal Processing Workshop (SSP-2011). France; 2011:309-312.
 Soussen C, Idier J, Brie D, Duan J: From Bernoulli-Gaussian deconvolution to sparse signal restoration. IEEE Trans Signal Process 2011, 59(10):4572-4584.
 Herzet C, Drémeau A: Bayesian pursuit algorithms. In Proceedings of the European Signal Processing Conference (EUSIPCO-2010). Aalborg, Denmark; 2010:1474-1478.
 Chen SS, Donoho DL, Saunders MA: Atomic decomposition by basis pursuit. SIAM J Sci Comput 1998, 20(1):33-61.
 Rao B, Kreutz-Delgado K: An affine scaling methodology for best basis selection. IEEE Trans Signal Process 1999, 47(1):187-200.
 Kormylo J, Mendel J: Maximum likelihood detection and estimation of Bernoulli-Gaussian processes. IEEE Trans Inf Theory 1982, 28(3):482-488.
 Sahnoun S, Djermoune EH, Brie D: A comparative study of subspace-based methods for 2D nuclear magnetic resonance spectroscopy signals. Tech rep, CRAN; 2010.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.