Sparse multidimensional modal analysis using a multigrid dictionary refinement
© Sahnoun et al; licensee Springer. 2012
Received: 15 September 2011
Accepted: 8 March 2012
Published: 8 March 2012
We address the problem of multidimensional modal estimation using sparse estimation techniques coupled with an efficient multigrid approach. Modal dictionaries are obtained by discretizing the modal functions (damped complex exponentials). To achieve good resolution, a fine discretization grid must be chosen, which leads to intractable computational problems due to the huge size of the dictionaries. The idea behind the multigrid approach is to refine the dictionary over several levels of resolution: the algorithm starts from a coarse grid and adaptively improves the resolution according to the active set provided by sparse approximation methods. The proposed method is quite general in the sense that it processes mono- and multidimensional signals in the same way. We show through simulations that, compared to high-resolution modal estimation methods, the proposed sparse modal method can greatly enhance the estimation accuracy for noisy signals and shows good robustness with respect to the choice of the number of components.
The topic of sparse signal representation has received considerable attention in the last decades since it finds application in a variety of problems, including mono- and multidimensional deconvolution , statistical regression , and radar imaging . Sparse approximation consists of finding a decomposition of a signal y as a linear combination of a limited number of elements from a dictionary Φ ∈ ℂ^(M×N), i.e., finding a coefficient vector x that satisfies y ≈ Φx, where Φ is overcomplete (M < N). The sparsity condition on x ensures that the underdetermined problem does not admit an infinite number of solutions. The dictionary Φ can be chosen for its ability to represent the signal with a limited number of coefficients, or it can be imposed by the inverse problem at hand. In the latter case, we consider dictionaries whose atoms are functions of some parameters. The different atoms of the dictionary are then formed by evaluating this function over a grid, which has to be very fine to achieve a given degree of resolution. This is the case for the modal estimation problem, in which the atoms are formed by discretizing the frequency and damping factor axes. The challenge is then to get a good approximation without the prohibitive computational cost induced by the huge size of the dictionary.
This study addresses the modal retrieval problem. This is an important topic in various applications including nuclear magnetic resonance (NMR) spectroscopy , wireless communications, radar, and sonar . A modal signal is modeled as a sum of damped complex sinusoids. Several methods have been developed to address the modal estimation problem, such as maximum likelihood [6, 7] and subspace-based methods [5, 8–12]. A special case of modal estimation is the harmonic retrieval problem (null damping factor), which has been formulated as a sparse approximation in a number of contributions. In the case of the 1-D harmonic retrieval problem, we can cite FOCUSS , the method of Moal and Fuchs , basis pursuit , and adaptive weighted norm extrapolation . Some other contributions may be found in [17, 18]. Nevertheless, only a few methods have been applied to the damped case. For instance,  presents a sparse estimation example of 1-D NMR (modal) data using Lasso , LARS  and OMP . Goodwin et al.  proposed a damped sinusoidal signal decomposition for 1-D signals using Matching Pursuit . Similarly, regarding multigrid approaches associated with sparse approximation methods, only a few studies consider 1-D harmonic signals [25, 26]. In the case of 2-D signals, an approach combining adaptive multigrid decomposition and TLS-Prony estimation was proposed in . However, to the authors' knowledge, no study deals with the problem of estimating the parameters of multidimensional (R-D) damped sinusoidal signals by sparse approximation methods. This article provides a multidimensional generalization of the study presented in [28, 29].
The goal of this article is to present an efficient approach that reduces the computational cost of sparse algorithms for R-D modal estimation problems. The main contributions of the article are as follows. (i) We propose a procedure which iteratively improves the set of atoms in the dictionary. The goal of this procedure is to improve resolution while avoiding the computationally expensive operations induced by the processing of large matrices; we refer to this procedure as the multigrid approach. (ii) We show how the 1-D modal retrieval problem can be addressed using a sparse estimation approach by building a dictionary whose atoms are calculated by sampling the modal function over a 2-D grid (frequency and damping factor) in order to obtain all possible mode combinations. (iii) We show how to extend the sparse 1-D modal estimation problem to R-D modal problems.
The article is organized as follows. In Section 2, we provide background material and definitions for sparse signal representation; we present some known sparse methods and recall the single best replacement (SBR)  algorithm and its advantages compared to other algorithms such as OMP, OLS, and CoSaMP, to name a few. In Section 3, we present the multigrid dictionary refinement approach and discuss its usefulness for accelerating computation and improving resolution. In Section 4, we show how the 1-D modal retrieval problem may be addressed using sparse approximations and how the multigrid approach can be applied. In Section 5, we extend the sparse multigrid approach to the R-D modal estimation problem. In Section 6, experimental results are presented, first to compare SBR to a greedy algorithm (OMP) and to a solver of the basis pursuit problem; then the effectiveness of the multigrid approach is illustrated on simulated 1-D and 2-D modal signals. Conclusions are drawn in Section 7.
Notations: Upper and lower bold face letters will be used for matrices and column vectors, respectively. A^T denotes the transpose of A. "⊙" will denote the Khatri-Rao product (column-wise Kronecker product) and "⊗" will denote the Kronecker product.
2 Sparse approximations
2.1 Key ideas of sparse approximations
Sparse approximation is classically stated in one of two forms:
- the constrained ℓ2-ℓ0 problem, whose goal is to seek the minimal approximation error at a given sparsity level s ≥ 1:
min_x ‖y − Φx‖²  subject to  ‖x‖₀ ≤ s    (1)
- the penalized ℓ2-ℓ0 problem:
min_x J(x; λ) = ‖y − Φx‖² + λ‖x‖₀    (2)
The goal is to balance the two objectives (fitting error and sparsity). Here, the sparsity level of the solution is controlled by the parameter λ.
The ℓ2-ℓ0 problem is known to yield an NP-hard combinatorial problem, which is usually handled by suboptimal search algorithms. Restricting our attention to greedy algorithms, the main advantage of the penalized ℓ2-ℓ0 form is that it allows both insertion and removal of elements in x, whereas the constrained form only allows insertion when the optimization is carried out through a descent approach [30, 31].
A well-known greedy method for sparse approximation is orthogonal matching pursuit (OMP) . It iteratively minimizes the error ℰ(x) until a stopping criterion is met. At each iteration, the current estimate of the coefficient vector x is refined by selecting one more atom so as to yield a substantial improvement of the signal approximation.
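As an illustration, the select-and-refit loop of OMP can be sketched as follows (a minimal NumPy sketch, not the SparseLab implementation used in the experiments; the fixed number of atoms as stopping rule is an illustrative choice):

```python
import numpy as np

def omp(y, Phi, n_atoms):
    """Orthogonal matching pursuit sketch: greedily pick the atom most
    correlated with the residual, then refit all active coefficients
    jointly by least squares (the "orthogonal" step)."""
    M, N = Phi.shape
    residual = y.astype(complex)
    support = []
    x = np.zeros(N, dtype=complex)
    for _ in range(n_atoms):
        # Atom most correlated with the current residual
        n = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if n not in support:
            support.append(n)
        # Joint least-squares refit over the active set
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x, support
```

Note that, unlike SBR below, an atom inserted at one iteration is never removed.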
This principle leads to approximations that can be sparse, and this minimization problem can be solved via linear programming . Instead of the ℓ2-ℓ1 penalized problem, the FOCUSS algorithm  uses an ℓ2-ℓp penalized criterion. For p < 1, the cost function is nonconvex and convergence to the global minimum is not guaranteed. It is indicated in  that the best results are obtained for p close to 1, although convergence is also slowest for p = 1.
In this article, we will use the SBR algorithm together with the multigrid approach. This algorithm performs particularly well when the dictionary elements are strongly correlated , which is precisely the case with modal atoms. The algorithm is briefly recalled in the following paragraph.
2.2 SBR algorithm for the penalized ℓ2-ℓ0 problem
SBR algorithm 
- Input. A signal y ∈ ℂ^M, a matrix Φ ∈ ℂ^(M×N) and a scalar λ.
- Output. A sparse coefficient vector x ∈ ℂ^N.
1. Initialize. Set the index set Ω_1 = ∅, the coefficient vector x_1 = [0,...,0]^T and the counter k = 1.
2. Identify. Find the single replacement n_k that most decreases the objective function:
n_k ∈ arg min_{n=1,...,N} J(Ω_k • n; λ)
where Ω • n denotes the insertion of index n into Ω if n ∉ Ω, and its removal otherwise.
3. Iterate. If J(Ω_k • n_k; λ) < J(Ω_k; λ), update the active set Ω_{k+1} = Ω_k • n_k and the coefficient vector x_{k+1} accordingly.
Increment k. Repeat (2)-(3) until the active set is no longer updated.
4. Output. Return x = x_k, whose nonzero entries are the active amplitudes.
At each iteration, the N possible single replacements Ω • n (n = 1,...,N) are tested (i.e., N least-squares problems are solved to compute the minimum squared error ℰ_{Ω•n} related to each support Ω • n), and the replacement yielding the minimal value of the cost function is selected. In Table 1, the replacement rule is formulated as "n_k ∈ ..." to cover the case where several replacements yield the same value of the cost function; this special case is unlikely to occur when dealing with real data. A detailed analysis and performance evaluation can be found in , where SBR is shown to perform very well in the case of highly correlated dictionary atoms (which is the case here). Note that, unlike many algorithms which require fixing either a maximum number of iterations or a threshold on the squared error variation (OMP and OLS, for instance), the SBR algorithm does not need any stopping condition: it stops when the cost function no longer decreases. It does, however, require tuning the parameter λ, which is done empirically.
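The replacement test described above can be sketched in a few lines (a NumPy sketch under our reading of the algorithm; the brute-force least-squares solve at each test is far slower than the recursive updates of the actual SBR implementation):

```python
import numpy as np

def sbr(y, Phi, lam):
    """Single Best Replacement sketch: at each iteration, test every
    single insertion or removal and keep the one that most decreases
    J(Omega) = ||y - Phi_Omega c||^2 + lam * |Omega|.
    Stops as soon as no replacement decreases J."""
    M, N = Phi.shape

    def cost(omega):
        if not omega:
            return np.linalg.norm(y) ** 2
        idx = sorted(omega)
        c, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        return np.linalg.norm(y - Phi[:, idx] @ c) ** 2 + lam * len(omega)

    omega = set()
    J = cost(omega)
    while True:
        best_J, best_n = J, None
        for n in range(N):
            Jn = cost(omega ^ {n})   # insert n if absent, remove if present
            if Jn < best_J:
                best_J, best_n = Jn, n
        if best_n is None:           # no single replacement improves J: stop
            break
        omega ^= {best_n}
        J = best_J

    x = np.zeros(N, dtype=complex)
    if omega:
        idx = sorted(omega)
        c, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        x[idx] = c
    return x
```

Larger λ values favor sparser solutions, consistent with criterion (2).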
3 Multigrid dictionary refinement
Sparse multigrid algorithm
- Input. A signal y ∈ ℂ^M, a matrix Φ ∈ ℂ^(M×N), a scalar λ and an integer L.
- Output. A sparse coefficient vector x_{L-1} ∈ ℂ^N.
For l = 0 up to l = L - 1
The multigrid dictionary refinement is proposed in the context of modal analysis. However, it is worth noticing that this idea can be straightforwardly extended to any dictionary obtained by sampling a continuous function over a grid.
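For a 1-D frequency-only dictionary, the refinement loop can be sketched as follows (a hedged sketch: the coarse grid size, the factor-of-two refinement, and the ±1-step neighborhood kept around each active atom are illustrative choices, and `solver` stands for any sparse method such as SBR or OMP):

```python
import numpy as np

def make_dictionary(freqs, M):
    """Undamped 1-D dictionary for simplicity: one complex-exponential
    atom per grid frequency."""
    m = np.arange(M)[:, None]
    return np.exp(2j * np.pi * m * freqs[None, :])

def multigrid_refine(y, M, solver, levels=4, n_coarse=16):
    """Multigrid dictionary refinement sketch: solve on a coarse grid,
    then at each level keep only the grid points around the active
    atoms and halve the grid step."""
    step = 1.0 / n_coarse
    grid = np.arange(0.0, 1.0, step)
    active = np.array([])
    for _ in range(levels):
        Phi = make_dictionary(grid, M)
        x = solver(y, Phi)                   # sparse solve at this level
        active = grid[np.abs(x) > 1e-8]
        if active.size == 0:
            return active
        step /= 2.0                          # refine around active atoms
        grid = np.unique(np.concatenate(
            [np.array([g - step, g, g + step]) for g in active]))
    return active
```

Each level solves a problem whose dictionary size stays small (a few atoms per active component), instead of one huge fine-grid problem.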
4 Monodimensional modal estimation using sparse approximation and multigrid
4.1 1-D data model
and c = [c_1,...,c_F]^T.
4.2 1-D sparse modal estimation
The problem of modal estimation is an inverse problem since y is given while A, c, and F are unknown. It can be formulated as a sparse signal estimation problem by defining a dictionary Φ gathering all the possible modes obtained by sampling α (P samples) and f (K samples) on a 2-D grid. Φ is expressed in (7) with N = PK. Provided that α and f are finely sampled, we can assume that A is a submatrix of Φ, so that c corresponds to the nonzero elements of x. The modal estimation problem can then be formulated as a penalized ℓ2-ℓ0 sparse signal estimation problem (2). The multigrid approach presented before can be used to that end.
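The dictionary construction can be sketched as follows (the e^{-α m} sign convention for the damping is an assumption consistent with damped sinusoids, and the grid endpoints are illustrative):

```python
import numpy as np

def modal_dictionary(M, freqs, dampings):
    """1-D modal dictionary: one atom per (frequency, damping) pair
    on the 2-D grid, atom[m] = exp((-alpha + 2j*pi*f) * m)."""
    m = np.arange(M)
    atoms = []
    for f in freqs:
        for a in dampings:
            atoms.append(np.exp((-a + 2j * np.pi * f) * m))
    return np.column_stack(atoms)   # Phi of size M x (K * P)
```

With the grid sizes of Section 6 (K = 20 frequencies, P = 20 damping factors, M = 30 samples) this yields a 30 × 400 dictionary.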
5 Multidimensional modal estimation using sparse approximation and multigrid
5.1 R-D data model
where m_r = 0,...,M_r − 1 for r = 1,...,R, and M_r denotes the sample support of the r-th dimension. Each mode in the r-th dimension is characterized by a damping factor and a frequency, the modes are weighted by complex amplitudes, and e(m_1, m_2,...,m_R) stands for an additive observation noise. The problem is to estimate the damping factors, frequencies, and amplitudes from the samples y(m_1,...,m_R).
where c = [c1,c2,..., c F ] T gathers the complex amplitudes and e is the noise vector.
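For R = 2, the model can be simulated as follows (a hedged sketch: the argument names and the e^{-α} damping sign are our conventions, not notation from the article):

```python
import numpy as np

def modal_signal_2d(M1, M2, amps, alphas, freqs):
    """Simulate a noiseless 2-D modal signal
    y(m1, m2) = sum_i c_i * exp((-a1_i + 2j*pi*f1_i)*m1)
                      * exp((-a2_i + 2j*pi*f2_i)*m2)."""
    m1 = np.arange(M1)[:, None]
    m2 = np.arange(M2)[None, :]
    y = np.zeros((M1, M2), dtype=complex)
    for c, (a1, a2), (f1, f2) in zip(amps, alphas, freqs):
        y += (c * np.exp((-a1 + 2j * np.pi * f1) * m1)
                * np.exp((-a2 + 2j * np.pi * f2) * m2))
    return y
```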
5.2 R-D sparse modal estimation
where the number of atoms is N = N_1 N_2 ⋯ N_R, with N_r = P_r K_r. Note that the dictionary Φ can be seen as a 2R-dimensional sampling of the R-dimensional modal function. The R-D modal retrieval problem can then be formulated as a penalized ℓ2-ℓ0 sparse signal estimation problem (2).
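Under a row-major flattening of the R-D signal, the full dictionary with N = ∏ N_r atoms can be assembled as the Kronecker product of per-dimension dictionaries (a sketch; in practice one avoids forming this huge matrix explicitly, which is precisely what motivates the multigrid approach):

```python
import numpy as np

def rd_dictionary(phis):
    """Assemble the R-D modal dictionary as the Kronecker product of
    the per-dimension dictionaries Phi_1, ..., Phi_R: each atom is the
    (flattened) outer product of one atom from each dimension."""
    Phi = phis[0]
    for p in phis[1:]:
        Phi = np.kron(Phi, p)
    return Phi   # size (prod M_r) x (prod N_r)
```

The row-major (C-order) flattening of the signal must match the Kronecker ordering: with `y.flatten()`, atom (n_1, n_2) sits in column n_1 N_2 + n_2.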
5.3 Multigrid approach for R-D modal estimation
R-D sparse multigrid algorithm
- Input. A signal, R matrices , a scalar λ and an integer L.
- Output. A sparse coefficient vector x_{L-1} ∈ ℂ^N.
For l = 0 up to l = L - 1
For r = 1 up to r = R
6 Experimental results
In this section, we present some experimental results for the multigrid sparse modal estimation. First, we present two examples on 1-D simulated modal signals. Next, we present and discuss results on a 2-D simulated signal and compare them with those obtained by the subspace method "2-D ESPRIT" . We chose the 2-D ESPRIT method because a comparative performance study  showed that, among different subspace-based high-resolution modal estimation techniques, it gave the best results.
6.1 1-D modal estimation results
First, we compare the results achieved by SBR, OMP, and the primal-dual logarithmic barrier (log-barrier) algorithm for solving the BP problem . Here we used the SparseLab implementations of OMP and BP (SolveOMP and SolveBP). Then, we present the results achieved using the multigrid SBR approach.
The dictionary is constructed using 20 equally spaced frequency points in the interval [0, 1], each coupled with 20 damping factor points in [0, 0.5]; each atom represents a 1-D complex sinusoid of M samples. Thus, the dictionary Φ is of size 30 × 400. Note that the simulated 1-D modes belong to the dictionary, so that an exact representation of the signal is possible in the noise-free case.
6.2 2-D modal estimation results
We presented a multigrid technique that adaptively refines ordered dictionaries for sparse approximation. The algorithm may be associated with any sparse method, but clearly the accuracy of the final results will depend on the accuracy of the sparse approximation. Sparse approximation combined with the multigrid approach was then used to tackle the mono- and multidimensional modal (damped sinusoids) estimation problem. We applied the SBR algorithm, which simulation results show to perform better than OMP and basis pursuit for modal approximation. Finally, we examined the performance of the proposed algorithm against existing R-D modal estimation algorithms. It separates modes that the Fourier transform cannot resolve without a huge increase in computational cost, improves robustness to noise, and does not require initialization. As perspectives, we will study possible improvements of the sparse multigrid approach in the case of multidimensional modal signals. In particular, we envisage using multiple 1-D modal estimations to obtain a low-dimension initial dictionary for R-D modal estimation. We also plan to study the convergence properties of the multigrid approach and to apply the method to the modal estimation of real NMR signals.
- Dupé FX, Fadili JM, Starck JL: A proximal iteration for deconvolving Poisson noisy images using sparse representations. IEEE Trans Image Process 2009, 18(2):310-321.
- Miller AJ: Subset Selection in Regression. Chapman and Hall, London, UK; 2002.
- Cetin M, Karl W: Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Trans Image Process 2001, 10(4):623-631.
- Hoch JC, Stern A: NMR Data Processing. Wiley-Liss, NY; 1996.
- Rouquette S, Najim M: Estimation of frequencies and damping factors by two-dimensional ESPRIT type methods. IEEE Trans Signal Process 2001, 49(1):237-245.
- Bresler Y, Macovski A: Exact maximum likelihood parameter estimation of superimposed exponential signals in noise. IEEE Trans Acoust Speech Signal Process 1986, 34(5):1081-1089.
- Clark MP, Scharf LL: Two-dimensional modal analysis based on maximum likelihood. IEEE Trans Signal Process 1994, 42(6):1443-1451.
- Kumaresan R, Tufts DW: Estimating the parameters of exponentially damped sinusoids and pole-zero modeling in noise. IEEE Trans Acoust Speech Signal Process 1982, 30:833-840.
- Stoica P, Nehorai A: MUSIC, maximum likelihood, and Cramér-Rao bound. IEEE Trans Acoust Speech Signal Process 1989, 37(5):720-741.
- Roy R, Kailath T: ESPRIT: estimation of signal parameters via rotational invariance techniques. IEEE Trans Acoust Speech Signal Process 1989, 37(7):984-995.
- Sacchini JJ, Steedly WM, Moses RL: Two-dimensional Prony modeling and parameter estimation. IEEE Trans Signal Process 1993, 41(11):3127-3137.
- Liu J, Liu X, Ma X: Multidimensional frequency estimation with finite snapshots in the presence of identical frequencies. IEEE Trans Signal Process 2007, 55:5179-5194.
- Gorodnitsky IF, Rao BD: Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Trans Signal Process 1997, 45(3):600-616.
- Moal N, Fuchs J: Sinusoids in white noise: a quadratic programming approach. In Proc IEEE ICASSP. Volume 4. Seattle, WA, USA; 1998:2221-2224.
- Chen SS, Donoho DL: Application of basis pursuit in spectrum estimation. In Proc IEEE ICASSP. Volume 3. Seattle, WA, USA; 1998:1865-1868.
- Cabrera S, Boonsri T, Brito AE: Principal component separation in sparse signal recovery for harmonic retrieval. In Proc IEEE SAM Workshop 2002:249-253.
- Bourguignon S, Carfantan H, Idier J: A sparsity-based method for the estimation of spectral lines from irregularly sampled data. IEEE J Sel Topics Signal Process 2007, 1(4):575-585.
- Fuchs J: Convergence of a sparse representations algorithm applicable to real or complex data. IEEE J Sel Topics Signal Process 2007, 1(4):598-605.
- Donoho DL, Tsaig Y: Fast solution of ℓ1-norm minimization problems when the solution may be sparse. IEEE Trans Inf Theory 2008, 54(11):4789-4812.
- Tibshirani R: Regression shrinkage and selection via the lasso. J Royal Stat Soc Series B (Methodological) 1996, 58:267-288.
- Efron B, Hastie T, Johnstone I, Tibshirani R: Least angle regression. Ann Statist 2004, 32(2):407-499.
- Pati YC, Rezaiifar R, Krishnaprasad PS: Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In Conference Record of the Twenty-Seventh Asilomar Conference on Signals, Systems and Computers. Volume 1. Pacific Grove, CA, USA; 1993:40-44.
- Goodwin M, Vetterli M: Matching pursuit and atomic signal models based on recursive filter banks. IEEE Trans Signal Process 1999, 47(7):1890-1902.
- Mallat SG, Zhang Z: Matching pursuits with time-frequency dictionaries. IEEE Trans Signal Process 1993, 41(12):3397-3415.
- Cabrera S, Malladi S, Mulpuri R, Brito A: Adaptive refinement in maximally sparse harmonic signal retrieval. In Proc IEEE Digital Signal Processing Workshop 2004:231-235.
- Malioutov D, Cetin M, Willsky AS: A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans Signal Process 2005, 53(8):3010-3022.
- Djermoune EH, Kasalica G, Brie D: Estimation of the parameters of two-dimensional NMR spectroscopy signals using an adapted subband decomposition. In Proc IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-2008). Las Vegas, USA; 2008:3641-3644.
- Sahnoun S, Djermoune E, Soussen C, Brie D: Analyse modale bidimensionnelle par approximation parcimonieuse et multirésolution [Two-dimensional modal analysis using sparse approximation and multiresolution]. In Proc GRETSI. Bordeaux, France; 2011.
- Sahnoun S, Djermoune E, Soussen C, Brie D: Sparse multiresolution modal estimation. In Proc IEEE Statistical Signal Processing Workshop (SSP-2011). France; 2011:309-312.
- Soussen C, Idier J, Brie D, Duan J: From Bernoulli-Gaussian deconvolution to sparse signal restoration. IEEE Trans Signal Process 2011, 59(10):4572-4584.
- Herzet C, Drémeau A: Bayesian pursuit algorithms. In Proc European Signal Processing Conference (EUSIPCO-2010). Aalborg, Denmark; 2010:1474-1478.
- Chen SS, Donoho DL, Saunders MA: Atomic decomposition by basis pursuit. SIAM J Sci Comput 1998, 20(1):33-61.
- Rao BD, Kreutz-Delgado K: An affine scaling methodology for best basis selection. IEEE Trans Signal Process 1999, 47(1):187-200.
- Kormylo J, Mendel J: Maximum likelihood detection and estimation of Bernoulli-Gaussian processes. IEEE Trans Inf Theory 1982, 28(3):482-488.
- Sahnoun S, Djermoune EH, Brie D: A comparative study of subspace-based methods for 2-D nuclear magnetic resonance spectroscopy signals. Tech rep, CRAN; 2010.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.