A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly known as the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as the product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios, the performance of our method with that of the Group Lasso and Trace Norm regularizers when they are applied directly to the target matrix.


Introduction
The solution of the electroencephalographic (EEG) inverse problem to obtain functional brain images is of high value for neurological research and medical diagnosis. It involves the estimation of the Brain Electrical Sources (BES) distribution from noisy EEG measurements, whose relation is modeled according to the linear model

Y = AS + E,    (1)

where Y ∈ R^{M×T} and A ∈ R^{M×N} are known and represent, respectively, the EEG measurements matrix and the forward operator (a.k.a. lead field matrix), S ∈ R^{N×T} denotes the BES matrix, and E ∈ R^{M×T} is a noise matrix. M denotes the number of EEG electrodes, N is the number of brain electrical sources, and T is the number of time instants.
This estimation problem is very challenging: N ≫ M, and the existence of silent BES (BES that produce nonmeasurable fields on the scalp surface) implies that the EEG inverse problem has infinitely many solutions: a silent BES can always be added to a solution of the inverse problem without affecting the EEG measurements. For all these reasons, the EEG inverse problem is an underdetermined ill-posed problem [1][2][3][4].
A classical approach to solve an ill-posed problem is to use regularization theory, which involves the replacement of the original ill-posed problem with a 'nearby' well-posed problem whose solution approximates the required solution [5]. Solutions developed within this theory are stated in terms of a regularization function, which helps us to select, among the infinitely many solutions, the one that best fulfills a prescribed constraint (e.g., smoothness, sparsity, and low rank). To define the constraint, we can use mathematical restrictions (minimum norm estimates) or anatomical, physiological, and functional prior information. Some examples of useful neurophysiological information are [1,6]: the irrotational character of the brain current sources, the (smooth) dynamics of the neural signals, the clusters formed by neighboring or functionally related BES, and the smoothness and focality of the electromagnetic fields generated and propagated within the volume conductor media (brain cortex, skull, and scalp).
Several regularization functions have been proposed in the EEG community: Hämäläinen and Ilmoniemi in [7] proposed a squared Frobenius norm penalty (‖S‖²_F), which they named Minimum Norm Estimate (MNE). This regularization function usually induces solutions that spread over a considerable part of the brain. Uutela et al. in [8] proposed an ℓ1-norm penalty (‖S‖₁). They named their approach Minimum Current Estimate (MCE). This penalty function promotes solutions that tend to be scattered around the true sources. Mixed ℓ1/ℓ2-norm penalties have also been proposed in the framework of time basis, time-frequency dictionary, and spatial basis decompositions. These mixed norm approaches induce structured sparse solutions and depend on decomposing the BES signals as linear combinations of multiple basis functions, e.g., Ou et al. in [9] proposed the use of temporal basis functions obtained with singular value decomposition (SVD), Gramfort et al. in [10,11] proposed the use of time-frequency Gabor dictionaries, and Haufe et al. in [12] proposed the use of spatial Gaussian basis functions. For a more detailed overview of inverse methods for EEG, see [2,3,13] and references therein. For a more detailed overview of regularization functions applied to structured sparsity problems, see [14][15][16] and references therein.
All of these regularizers try to induce neurophysiologically meaningful solutions, which take into account the smoothness and structured sparsity of the BES matrix: during a particular cognitive task, only the BES related to the brain area involved in such a task will be active, and their corresponding time evolution will vary smoothly; that is, the BES matrix will have few nonzero rows, and in addition, its columns will vary smoothly. In this paper, we propose a regularizer that takes into account not only the smoothness and structured sparsity of the BES matrix but also its low rank, capturing this way the linear relation between the active sources and their corresponding neighbors. In order to do so, we propose a new method based on matrix factorization and regularization, with the aim of recovering the latent structure of the BES matrix. In the factorization, the first matrix, which acts as a coding matrix, is penalized using the ℓ21-norm, and the second one, which acts as a dense, full rank latent source matrix, is penalized using the squared Frobenius norm.
In our approach, the resulting optimization problem is nonsmooth and nonconvex. A standard approach to deal with the nonsmoothness introduced by the nonsmooth regularizers mentioned above is to reformulate the regularization problem as a second-order cone programming (SOCP) problem [12] and use interior point-based solvers. However, interior point-based methods cannot handle large scale problems, which is the case of large EEG inverse problems involving thousands of brain sources. Another approach is to try to solve the nonsmooth problem directly, using general nonsmooth optimization methods, for instance, the subgradient method [17]. This method can be used if a subgradient of the objective function can be computed efficiently [14]. However, its convergence rate is, in practice, slow: O(1/√k), where k is the iteration counter. In this paper, in order to tackle the nonsmoothness of the optimization problem, we depart from these optimization methods and use instead efficient first-order nonsmooth optimization methods [5,18,19]: forward-backward splitting methods. These methods are also called proximal splitting methods because the nonsmooth function is involved via its proximity operator. Forward-backward splitting methods were first introduced in the EEG inverse problem by Gramfort et al. [10,11,13], where they were used to solve nonsmooth optimization problems resulting from the use of mixed ℓ1/ℓ2-norm penalty functions. These methods have drawn increasing attention in the EEG, machine learning, and signal processing communities, especially because of their convergence rates and their ability to deal with large problems [19][20][21].
On the other hand, in order to handle the nonconvexity of the optimization problem, we use an iterative alternating minimization approach: minimizing over the coding matrix while keeping the latent source matrix fixed, and vice versa. Both of these optimization problems are convex: the first one can be solved using proximal splitting methods, while the second one can be solved directly in terms of a matrix inversion.
The rest of the paper is organized as follows. In Section 2, we give an overview of the EEG inverse problem. In Section 3, we present the mathematical background related to the proximal splitting methods. The resulting nonsmooth and nonconvex optimization problem is formally described in Section 4. In Section 5, we propose an alternating minimization algorithm, and its convergence analysis is presented in Section 6. Section 7 is devoted to the numerical evaluation of the algorithm and its comparison with the Group Lasso and Trace Norm regularizers, which only partially consider the characteristics of the matrix S: its structured sparsity by using the ℓ21-norm and its low rank by using the ℓ∗-norm (trace norm), respectively. The advantages of considering both characteristics in a single method, like the proposed one, become clear in comparison with the independent use of the Group Lasso and Trace Norm regularizers. Finally, conclusions are presented in Section 8.

EEG inverse problem background
The EEG signals represent the electrical activity of one or several assemblies of neurons [22]. The area of a neuron assembly is small compared to the distance to the observation point (the EEG sensors). Therefore, the electromagnetic fields produced by an active neuron assembly at the sensor level are very similar to the field produced by a current dipole [23]. This simplified model is known as the equivalent current dipole (ECD). These ECDs are also known by other names such as BES and current sources. Due to the uniform spatial organization of their dendrites (perpendicular to the brain cortex), the pyramidal neurons are the only neurons that can generate a net current dipole over a piece of cortical surface whose field is detectable on the scalp [3]. According to [24], it is necessary to add the fields of ∼ 10⁴ pyramidal neurons in order to produce a voltage that is detectable on the scalp. These voltages can be recorded by using different types of electrodes [22], such as disposable (gel-less and pre-gelled types), reusable disc electrodes (gold, silver, stainless steel, or tin), headbands and electrode caps, saline-based electrodes, and needle electrodes.
Under the quasi-static approximation of Maxwell's equations, we can express the general model for the observed EEG signals y(t) at time t as linear functions of the BES s(t) [9]:

y(t) = As(t) + e(t),

where y(t) ∈ R^{M×1} is the EEG measurements vector, s(t) ∈ R^{N×1} is the BES vector, e(t) ∈ R^{M×1} is the noise vector, and A ∈ R^{M×N} is the lead field matrix. In a typical experimental setup, the number of electrodes (M) is ∼ 10², and the number of BES (N) is ∼ 10³, so N ≫ M.

Mathematical background

Proximity operator
The proximity operator [19,25] corresponding to a convex function f is a mapping from R^n to itself and is defined as follows:

prox_f(x) = argmin_{u ∈ R^n} { f(u) + ½‖u − x‖² },

where ‖·‖ denotes the Euclidean norm. Note that the proximity operator is well defined, because the above minimum exists and is unique (the objective function is strongly convex).
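As a concrete illustration (not part of the original derivation), the proximity operator of the scaled ℓ1-norm, γ‖·‖₁, has the well-known closed-form soft-thresholding expression, which can be sketched in NumPy as follows:

```python
import numpy as np

def prox_l1(x, gamma):
    # Soft thresholding: prox of gamma * ||.||_1, applied elementwise.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)
```

Entries with magnitude below γ are set exactly to zero, which is the mechanism by which ℓ1-type penalties induce sparsity.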

Subdifferential-proximity operator relationship
If f is a convex function on R^n and x, y ∈ R^n, then [26]

y = prox_f(x)  ⟺  x − y ∈ ∂f(y),    (4)

where ∂f(y) denotes the subdifferential of f at y.

Principles of proximal splitting methods
Proximal splitting methods are specifically tailored to solve an optimization problem of the form

minimize_S f(S) + r(S),    (5)

where f(S) is a smooth convex function, and r(S) is also a convex function, but nonsmooth. From convex analysis [17], we know that S is a minimizer of (5) if and only if 0 ∈ ∂(f + r)(S). This implies the following [18]:

0 ∈ γ∇f(S) + γ∂r(S)  ⟺  S − γ∇f(S) ∈ S + γ∂r(S),  γ > 0.

Using (4) in the former expression, we get

S = prox_{γr}(S − γ∇f(S)).    (6)

Equation 6 suggests that we can solve (5) using a fixed point iteration:

S_{k+1} = prox_{γr}(S_k − γ∇f(S_k)).    (7)

In optimization, (7) is known as the forward-backward splitting process [19]. It consists of two steps: first, it performs a forward gradient descent step S*_k = S_k − γ∇f(S_k), and then it performs a backward step S_{k+1} = prox_{γr}(S*_k). From (7), we can see the importance of the proximity operator (associated to γr(S)) with respect to the forward-backward splitting methods, since their main step is to calculate it. If we had a closed-form expression for such a proximity operator, or if we could approximate it efficiently (with the approximation errors decreasing at appropriate rates [27]), then we could efficiently solve (7). Furthermore, when f has a Lipschitz continuous gradient, there are fast algorithms to solve (7).
For instance, the Iterative Soft Thresholding Algorithm (ISTA) has a convergence rate of O(1/k), and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) has a convergence rate of O(1/k 2 ) [5].
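The accelerated forward-backward iteration can be sketched as follows. This is a generic illustration with a constant stepsize 1/L; the argument names (`grad_f`, `prox_r`) are placeholders for the problem-specific gradient and proximity operator, not part of any particular library:

```python
import numpy as np

def fista(grad_f, prox_r, L, x0, n_iter=100):
    """FISTA sketch: minimize f(x) + r(x), f smooth with L-Lipschitz gradient.

    grad_f(x)    : gradient of the smooth part f
    prox_r(v, g) : proximity operator of g * r at v
    """
    x = x0.copy()
    z = x0.copy()  # extrapolated point
    t = 1.0
    for _ in range(n_iter):
        # Forward (gradient) step followed by backward (proximal) step
        x_new = prox_r(z - grad_f(z) / L, 1.0 / L)
        # Momentum update responsible for the O(1/k^2) rate
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

For instance, applied to the scalar lasso problem min ½(x − 3)² + |x|, whose minimizer is x = 2, the iteration converges to 2.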

Problem formulation
The regularized EEG inverse problem can be stated as follows:

minimize_S ½‖AS − Y‖²_F + λΩ(S),    (8)

where ½‖AS − Y‖²_F is the square loss function (‖·‖_F denotes the Frobenius norm), and λΩ(S) is a nonsmooth penalty term that is used to encode the prior knowledge about the structure of the target matrix S.
In order to induce structured sparse-low-rank solutions, we propose to reformulate (1) using a matrix factorization approach, which involves expressing the matrix S as the product of two matrices, S = BC, obtaining the following nonlinear estimation model:

Y = ABC + E,    (9)

where B and C are penalized using the ℓ21-norm and the squared Frobenius norm, respectively. The resulting optimization problem can be stated as follows:

minimize_{B,C} ½‖ABC − Y‖²_F + λ Σ_{i=1}^{N} ‖B(i,:)‖₂ + ρ‖C‖²_F,    (10)

where λ > 0, ρ > 0, B ∈ R^{N×K}, C ∈ R^{K×T}, and B(i,:), C(i,:) denote the ith row of B and C, respectively. K ≪ min{N, T}, λ, and ρ are parameters of the model that must be adjusted.
In this formulation, which we denote as the matrix factorization approach, the ℓ21-norm and the squared Frobenius norm induce structured sparsity and smoothness in the rows of B and C, respectively, and therefore also in the rows of S. Finally, the parameter K encloses the low rank of S: rank(S) = rank(BC) ≤ K. Hence, the proposed regularization framework takes into account all the prior knowledge about the structure of the target matrix S.

Matrix factorization approach
In this section, we address the issue of implementing the learning method (10) numerically. We propose the following reparameterization of (10), which exploits the fact that the product BC is invariant to the rescaling

(B, C) → (αB, C/α),  α > 0.    (11)

Using (11) in the objective function of (10), we get an optimization problem with only one regularization parameter, λ̃ = √(λρ):

minimize_{B,C} ½‖ABC − Y‖²_F + λ̃ Σ_{i=1}^{N} ‖B(i,:)‖₂ + ½‖C‖²_F.    (12)

The optimization problem (12) is a simultaneous minimization over the matrices B and C. For a fixed C, the minimum over B can be obtained using FISTA. On the other hand, for a fixed B, the minimum over C can be computed directly in terms of a matrix inversion. These observations suggest an alternating minimization algorithm [15,28]:
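As an illustrative sketch (not the authors' reference implementation), the alternating scheme can be written in a few lines of NumPy. For clarity, the B-step below uses plain proximal gradient (ISTA) rather than FISTA, the C-step uses a closed-form ridge update with an assumed unit weight on the identity term, and the helper names are hypothetical:

```python
import numpy as np

def soft_rows(B, gamma):
    # Row-wise group soft thresholding (prox of gamma * ||B||_{2,1}).
    n = np.linalg.norm(B, axis=1, keepdims=True)
    return np.maximum(1.0 - gamma / np.maximum(n, 1e-12), 0.0) * B

def alternating_mf(A, Y, K, lam, n_outer=20, n_inner=50):
    """Alternate between an l21-penalized B-step and a ridge C-step."""
    # C0: top-K right singular vectors of Y (SVD-based initialization)
    C = np.linalg.svd(Y, full_matrices=False)[2][:K, :]
    B = np.zeros((A.shape[1], K))
    for _ in range(n_outer):
        # B-step: proximal gradient on 0.5*||ABC - Y||_F^2 + lam*||B||_{2,1}
        L = np.linalg.norm(A, 2) ** 2 * np.linalg.norm(C, 2) ** 2 + 1e-12
        for _ in range(n_inner):
            G = A.T @ (A @ B @ C - Y) @ C.T  # gradient of the smooth part
            B = soft_rows(B - G / L, lam / L)
        # C-step: closed form, (B^T A^T A B + I_K)^{-1} B^T A^T Y
        AB = A @ B
        C = np.linalg.solve(AB.T @ AB + np.eye(K), AB.T @ Y)
    return B, C
```

On a small synthetic problem with a row-sparse, rank-K ground truth, this sketch drives the objective well below its value at B = 0.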

Algorithm 1 Alternating minimization algorithm to solve the matrix factorization-regularization-based approach
until stopping condition is met

In order to obtain the initialization matrix C₀, we use an approach based on the singular value decomposition of Y. Without loss of generality, let us work with (9) in the noiseless case:

Y = ABC.    (15)

From (15), we can see that {Y₁, Y₂, . . ., Y_M} ⊂ Row Space(C), where Yᵢ denotes the ith row of Y. Now, let us obtain a rank-K approximation of Y by using a truncated SVD (truncated at the singular value σ_K):

Y ≈ U_K Σ_K V_Kᵀ.    (16)

From the SVD theory [29], we know that {Y₁, Y₂, . . ., Y_M} is approximately contained in Row Space(V_Kᵀ); therefore, we can choose C₀ = V_Kᵀ. Then, given C₀, we can start iterating using (13) and (14).
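The SVD-based initialization can be sketched as follows (`init_C0` is a hypothetical helper name):

```python
import numpy as np

def init_C0(Y, K):
    # Rank-K truncated SVD: Y ~ U_K diag(s_K) V_K^T.
    # The top-K right singular vectors span the best rank-K
    # approximation of Row Space(Y); take C0 = V_K^T.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return Vt[:K, :]
```

When Y has rank at most K, the rows of Y lie exactly in the row space of C₀, so projecting Y onto that row space recovers Y.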

Minimization over B (fixed C)
The minimization over B can be stated as follows:

minimize_B F_B(B) + λ̃‖B‖_{2,1},    (17)

where F_B(B) = ½‖ABC_{t−1} − Y‖²_F. This is a composite convex optimization problem involving the sum of a smooth function (F_B(B)) and a nonsmooth function (λ̃‖B‖_{2,1}). As we have seen in Section 3, this kind of problem can be efficiently handled using proximal splitting methods (e.g., FISTA). In order to apply FISTA to solve (17), we first need to compute the following:

1. The gradient of the smooth function F_B(B):

∇F_B(B) = Aᵀ(ABC_{t−1} − Y)C_{t−1}ᵀ,    (18)

where Aᵀ denotes the transpose of the matrix A.

2. An upper bound of the Lipschitz constant (L) of ∇F_B(B) (it can also be estimated using a backtracking search routine [5]). From (18), taking into account that the spectral norm |·|₂ is submultiplicative and using the fact that |P|₂² ≤ ‖P‖²_F, ∀P ∈ R^{M×N} [29], we obtain:

L ≤ |A|₂² |C_{t−1}|₂² ≤ ‖A‖²_F ‖C_{t−1}‖²_F.

3. The proximity operator of the ℓ21-norm, which acts row-wise:

[prox_{γλ̃‖·‖_{2,1}}(B)](i,:) = (1 − γλ̃ / ‖B(i,:)‖₂)₊ B(i,:),

where (·)₊ = max(·, 0), and by convention 0/0 = 0.
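A minimal NumPy sketch of the row-wise ℓ21 proximity operator (the 0/0 = 0 convention is handled by a small floor on the row norms):

```python
import numpy as np

def prox_l21(B, gamma):
    # Row-wise group soft thresholding: prox of gamma * ||B||_{2,1}.
    # Rows with Euclidean norm below gamma are set exactly to zero;
    # the remaining rows are shrunk toward the origin.
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - gamma / np.maximum(norms, 1e-12), 0.0)
    return scale * B
```

This is the operation that zeroes entire rows of B, producing the structured (row) sparsity of the coding matrix.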

Minimization over C (fixed B)
The minimization over C can be stated as follows:

minimize_C F_C(C),    (21)

where F_C(C) = ½‖AB_tC − Y‖²_F + ½‖C‖²_F. In what follows, we show how the minimum over C can be computed directly in terms of a matrix inversion. Setting ∇F_C(C) = 0 yields

C_{t+1} = (B_tᵀAᵀAB_t + I_K)⁻¹ B_tᵀAᵀY.    (22)

The matrix B_tᵀAᵀAB_t + I_K ∈ R^{K×K}, and K is supposed to be small; therefore, calculating its corresponding inverse matrix is quite cheap.
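The closed-form C-update can be sketched as follows, solving the K × K linear system instead of forming the inverse explicitly (the ½‖C‖²_F weighting of the ridge term is assumed):

```python
import numpy as np

def update_C(A, B, Y):
    # Minimize 0.5*||A B C - Y||_F^2 + 0.5*||C||_F^2 over C:
    # the stationarity condition gives (B^T A^T A B + I_K) C = B^T A^T Y.
    AB = A @ B
    K = B.shape[1]
    return np.linalg.solve(AB.T @ AB + np.eye(K), AB.T @ Y)
```

Since the system is only K × K with K ≪ min{N, T}, this step is cheap regardless of the number of sources N.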

Convergence analysis
We are going to analyze the convergence behavior of Algorithm 1 by using the global convergence theory of iterative algorithms developed by Zangwill [30]. Note that in this theory, the term 'global convergence' does not imply convergence to a global optimum for all initial points. The property of global convergence expresses, in a sense, the certainty that the algorithm converges to the solution set. Formally, an iterative algorithm ξ, on the set X, is said to be globally convergent provided that, for any starting point x₀ ∈ X, the sequence {xₙ} generated by ξ has a limit point [31].
In order to use the global convergence theory of iterative algorithms, we need a formal definition of an iterative algorithm, as well as the definition of a set-valued mapping (a.k.a. point-to-set mapping) [30]:

Definition 6.1. Set-valued mapping. Given two sets, X and Y, a set-valued mapping defined on X, with range in the power set of Y, P(Y), is a map Φ which assigns to each x ∈ X a subset Φ(x) ∈ P(Y), Φ : X → P(Y).

Definition 6.2. Iterative algorithm. Let X be a set and x₀ ∈ X a given point. Then, an iterative algorithm ξ, with initial point x₀, is a set-valued mapping ξ : X → P(X) which generates a sequence {xₙ}_{n=1}^∞ via the rule x_{n+1} ∈ ξ(xₙ), n = 0, 1, . . .

Now that we know the main building blocks of the global convergence theory of iterative algorithms, we are in a position to state the convergence theorem related to Algorithm 1:

Theorem 6.1. Let Φ denote the iterative Algorithm 1, and suppose that a sequence {B_t, C_t}_{t=1}^∞ is generated and satisfies {B_{t+1}, C_{t+1}} ∈ Φ(B_t, C_t). Also, let Γ_B and Γ_C denote the solution sets of (13) and (14), respectively. Then, the limit of any convergent subsequence of {B_t, C_t}_{t=1}^∞ is in Γ_B and Γ_C.

This convergence theorem is a direct application of Zangwill's global convergence theorem [30]. Before proving this assertion, let us show some definitions and theorems used in the proof.

Definition 6.3. Compact set. A set X is said to be compact if any sequence in X contains a convergent subsequence whose limit is in X. More explicitly, given a sequence {xₙ}_{n∈N} in X, there exists N₁ ⊂ N such that the subsequence {xₙ}_{n∈N₁} converges to a limit in X.

Theorem 6.2. Let A : X → P(Y) and B : Y → P(Z) be two set-valued mappings, and let x₀ ∈ X. Suppose that:

1. A is closed at x₀;
2. B is closed on A(x₀);
3. if xₙ → x₀ and yₙ ∈ A(xₙ), then there exists y₀ ∈ Y such that, for some subsequence {y_{n_j}}, y_{n_j} → y₀ as j → ∞.

Then, the composite map C = B ∘ A is closed at x₀.

Lemma 6.1. [32] Given a real-valued continuous function h defined on X × Y, define the set-valued mapping M : X → P(Y) by

M(x) = arg min_{y ∈ Y} h(x, y).

Let the set-valued mapping M determine an algorithm that, given a point x₀, generates a sequence {xₙ}_{n=0}^∞ through the iteration x_{n+1} ∈ M(xₙ). Also, let a solution set Γ and a continuous descent function α(·) be given. Suppose that the sequence {xₙ} lies in a compact set, that α(x_{n+1}) ≤ α(xₙ) with strict inequality whenever xₙ ∉ Γ, and that M is closed at points outside Γ. Then, the limit of any convergent subsequence of {xₙ}_{n=0}^∞ is in Γ. That is, accumulation points x* of the sequence {xₙ} lie in Γ. Furthermore, α(xₙ) converges to α*, and α(x*) = α* for all accumulation points x*.
Proof of Theorem 6.1. The iterative algorithm Φ can be decomposed into two well-defined iterative algorithms, Φ_B and Φ_C, corresponding to the updates (13) and (14), respectively. As we can see from (23) and (24), at iteration t, the result of Φ_B becomes the input of Φ_C, and at iteration t + 1, the result of Φ_C becomes the input of Φ_B; therefore, we can express Φ as the composition of Φ_C and Φ_B, that is, Φ = Φ_C ∘ Φ_B. To prove this theorem by using Zangwill's global convergence theorem, we need to prove that all its corresponding assumptions are fulfilled. In order to prove assumption 1, let us analyze the sequences {B_t}_{t=1}^∞ and {C_t}_{t=1}^∞. The sequence {B_t}_{t=1}^∞ is generated by using FISTA, which is a convergent algorithm (B_t → B_∞) that guarantees that B_t belongs to the solution set of (13) [5,18]. Hence, using Definition 6.3, we can see that the sequence {B_t}_{t=1}^∞ generated by (23) lies in a compact set. On the other hand, the sequence {C_t}_{t=1}^∞ is generated by (22), which guarantees that C_t belongs to the solution set of (14). This sequence always converges to a point inside that solution set, which implies that {C_t}_{t=1}^∞ also lies in a compact set. This concludes the proof of assumption 1.
To prove assumption 2, let us use the objective function value Z(C_t) as the descent function α(·); thus, in order to verify the fulfillment of assumption 2, we need to prove that (a) the sequence {C_t}_{t=1}^∞ lies in the solution set of (14), and (b) the objective value does not increase along the iterations. From (25), we can see that the sequence {C_t}_{t=1}^∞ will always lie in the solution set of (14) (because C_t is generated by (22)); therefore, we only need to prove (b).
Let C_{t+1} be the solution of (25) at iteration t + 1; this implies that the objective value cannot increase after the C-update (26). On the other hand, if B_{t+1} is the solution of (23) at iteration t + 1, the objective value cannot increase after the B-update either (27), and from (26) and (27), we can prove assumption 2(b). In order to prove assumption 3, we need to prove that Φ is closed at C if C does not belong to the solution set. To do so, we are going to use Theorem 6.2; therefore, we need to prove that Φ_B and Φ_C are both closed maps: from (23) and (24), we can see that their corresponding objective functions are both continuous ∀B ∈ R^{N×K} and ∀C ∈ R^{K×T}, respectively; hence, by using the Weierstrass Theorem and Lemma 6.1, we can conclude that Φ_B and Φ_C are both closed maps for any C_{t−1} and B_t, respectively, and by using Theorem 6.2, we can conclude that Φ is closed at any C_{t−1}.
Finally, from all the previous proofs and Zangwill's global convergence theorem, it follows that the limit of any convergent subsequence of {B_t, C_t}_{t=1}^∞ is in the solution sets of (13) and (14).

Numerical experiments
In this section, we evaluate the performance of the matrix factorization approach and compare it with the Group Lasso regularizer:

minimize_S ½‖AS − Y‖²_F + λ Σ_{i=1}^{N} ‖S(i,:)‖₂,    (28)

and the Trace Norm regularizer:

minimize_S ½‖AS − Y‖²_F + λ Σ_{i=1}^{q} σᵢ(S),    (29)

where q = min{N, T} and σᵢ(S) denotes the ith singular value of S. Both problems (28) and (29) were solved using the FISTA implementation of the SPArse Modeling Software (SPAMS) [33,34].
In order to have a reproducible comparison of the different regularization approaches, we generated two synthetic scenarios:

• M = 128 EEG electrodes, T = 161 time instants, and N = 413 current sources within the brain, but only 12 of them are active: 4 main active sources, each with its corresponding 2 nearest neighbor sources also active. The other 401 sources are not active (zero electrical activity). Therefore, in this scenario, the synthetic matrix S is a structured sparse matrix with only 12 nonzero rows (the rows associated with the active sources).
• M = 128 EEG electrodes, T = 161 time instants, and N = 2,052 current sources within the brain, but only 40 of them are active: 4 main active sources, each with its corresponding 9 nearest neighbor sources also active. The other 2,012 sources are not active (zero electrical activity). Therefore, in this scenario, the synthetic matrix S is a structured sparse matrix with only 40 nonzero rows (the rows associated with the active sources).
In both scenarios, the simulated electrical activity (simulated waveforms) associated with the four Main Active Sources (MAS) was obtained from a face perception-evoked potential study [35,36]. To obtain the simulated electrical activity associated with each one of the active neighbor sources, we simply set it as a scaled version of the electrical activity of its corresponding nearest MAS (with a scale factor equal to 0.5). Hence, there is a linear relation between the four MAS and their corresponding nearest neighbor sources; therefore, in both scenarios, the rank of the synthetic matrix S is equal to 4.
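A sketch of how such a rank-deficient, row-sparse synthetic BES matrix can be constructed (the function and argument names are hypothetical; each neighbor row is a 0.5-scaled copy of its MAS waveform, so the rank equals the number of MAS):

```python
import numpy as np

def build_synthetic_S(N, T, mas_rows, neighbor_map, waveforms, scale=0.5):
    """Place one waveform per main active source (MAS) and a scaled
    copy in each of its neighbor rows; all other rows stay zero."""
    S = np.zeros((N, T))
    for i, r in enumerate(mas_rows):
        S[r] = waveforms[i]
        for nb in neighbor_map[i]:
            S[nb] = scale * waveforms[i]  # linearly dependent on the MAS
    return S
```

With 4 MAS and 2 neighbors each, S has 12 nonzero rows but rank 4, matching the structure described above.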
As the forward model (A), we used a three-shell concentric spherical head model. In this model, the inner sphere represents the brain, the intermediate layer represents the skull, and the outer layer represents the scalp [37]. To obtain the values of each one of the components of the matrix A, we need to solve the EEG forward problem [38]: given the electrical activity of the current sources within the brain and a model for the geometry of the conducting media (brain, skull, and scalp, with their corresponding electric properties), compute the resulting EEG signals. This problem was solved by using the SPM software [39]. Taking into account the comments mentioned in Section 2, the N simulated current sources were positioned on a mesh located on the brain cortex, with an orientation fixed perpendicular to it.
Finally, the simulated EEG signals were generated according to (1), where E is a Gaussian noise G(0, σ²I) whose variance was set to satisfy a prescribed signal-to-noise ratio, SNR = 20 log₁₀(‖AS‖_F / ‖E‖_F) dB. Summarizing, our synthetic problems can be stated as follows: given matrices Y ∈ R^{128×161} and A ∈ R^{128×N}, recover the synthetic BES matrix S ∈ R^{N×161}. According to this, in both scenarios, we want to estimate a BES matrix which is structured sparse and low rank, with its rank equal to the number of MAS simulated. The activity of the four MAS, the synthetic EEG measurements, as well as the sparsity pattern of the synthetic BES matrix are shown in Figures 1 and 2 (Ground Truth).
We have used cross-validation to select the regularization parameter λ associated with the Group Lasso and Trace Norm regularizers, as well as the parameters λ and K in the case of the Matrix Factorization approach (K ∈ {1, 2, 3, . . ., 10}, λ ∈ {10⁻³, 10⁻², 10⁻¹, . . ., 10³}): the rows of Y are randomly partitioned into three groups of approximately equal size. Each union of two groups forms a train set (TrS), while the remaining group forms a test set (TS). This procedure is carried out three times, each time selecting a different test group. Inverse reconstructions are carried out based on the training sets, obtaining different regression matrices Ŝᵢ. We then evaluate the root mean square error (RMSE) using the test sets and the regression matrices Ŝᵢ:

RMSE = (1/3) Σ_{i=1}^{3} ‖A_{TSᵢ} Ŝᵢ − Y_{TSᵢ}‖_F,

where Y_{TSᵢ} ∈ R^{M_{TSᵢ}×T} and A_{TSᵢ} ∈ R^{M_{TSᵢ}×N} (TSᵢ denotes the index set of the rows that belong to the ith test set).
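The cross-validation procedure over electrode rows can be sketched as follows; `solver` stands for any of the inverse methods under comparison, and the fold construction is a simplified illustration:

```python
import numpy as np

def cv_rmse(A, Y, solver, n_folds=3, seed=0):
    """3-fold CV over electrode rows: fit on two folds, evaluate the
    prediction error on the held-out rows, and average over folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(Y.shape[0])
    folds = np.array_split(idx, n_folds)
    errs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        S_hat = solver(A[train], Y[train])      # inverse reconstruction
        r = A[test] @ S_hat - Y[test]           # held-out prediction error
        errs.append(np.sqrt(np.mean(r ** 2)))
    return float(np.mean(errs))
```

For example, with a noiseless consistent system and an exact least-squares solver, the held-out error is essentially zero.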
Once the estimated matrix Ŝ has been found, we apply a threshold to remove spurious sources with almost zero activity. We have set this threshold equal to 1% of the mean energy of all the sources.
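A sketch of this thresholding step (1% of the mean row energy by default):

```python
import numpy as np

def threshold_sources(S_hat, frac=0.01):
    # Zero out rows whose energy falls below frac * (mean row energy).
    energy = np.sum(S_hat ** 2, axis=1)
    keep = energy >= frac * energy.mean()
    out = S_hat.copy()
    out[~keep] = 0.0
    return out
```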

Performance evaluation
In order to evaluate the performance of the regularizers, we compare the waveform and localization of the four MAS present in the synthetic BES matrix against the four MAS estimated by each one of the regularizers. We also compare the sparsity pattern of the estimated BES matrix Ŝ against the sparsity pattern of the synthetic BES matrix S, as well as the synthetic and predicted EEG measurements.
As we can see from Figures 1 and 2, the Group Lasso and Trace Norm regularizers do not reveal the correct number of linearly independent sources, while the Matrix Factorization does: it identifies four linearly independent sources in both scenarios. To select such four linearly independent MAS, we find a basis for the Column Space(S̃ᵀ) (using a QR factorization), where S̃ is a matrix whose rows are a sorted version of the rows of Ŝ (sorted in descending order of their corresponding energy value). To get the four linearly independent MAS estimated by the Group Lasso and Trace Norm regularizers, we followed the same procedure described before and retained the first four components of the basis of the Column Space(S̃ᵀ).
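The selection of linearly independent MAS can be approximated by the following greedy, energy-sorted variant of the QR-based procedure (an illustrative sketch, not the exact implementation used in the paper):

```python
import numpy as np

def select_independent_rows(S_hat, k):
    """Pick up to k rows that are mutually linearly independent,
    visiting rows in descending order of energy."""
    order = np.argsort(-np.sum(S_hat ** 2, axis=1))
    chosen = []
    for r in order:
        cand = S_hat[chosen + [r]]
        # Keep the row only if it adds a new independent direction
        if np.linalg.matrix_rank(cand) > len(chosen):
            chosen.append(r)
        if len(chosen) == k:
            break
    return [int(r) for r in chosen]
```

A row that is a scaled copy of an already-selected, higher-energy row (such as a neighbor of a MAS) is skipped, so the selected rows correspond to the independent main sources.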
According to Figures 1 and 2, the Matrix Factorization approach is able to estimate a BES matrix with the correct rank and whose sparsity pattern follows closely the sparsity pattern of the true BES matrix; that is, both matrices have a similar structure, which implies that the proposed approach is able to induce the desired solution: a row-structured sparse matrix whose nonzero rows encode the linear relation between the active sources and their corresponding nearest neighbor sources. Using the estimated BES matrix, the Matrix Factorization approach is also able to predict a smooth version of the noisy EEG, and the waveforms of the estimated MAS follow closely the waveforms of the true MAS.
As we can see from Figures 1 and 2, Group Lasso is able to estimate a BES matrix with a row-sparsity pattern similar to that of the true BES matrix, but it does not take into account the linear relation between the nonzero rows, which can be seen from the rank of the estimated BES matrix. The waveforms of the estimated MAS are very similar to the true MAS, but they are not as smooth as the ones estimated by the Matrix Factorization approach. As we can also see from Figures 1 and 2, the Trace Norm regularizer takes into account the linear relation of the active sources by inducing solutions which are low rank, but, on the other hand, it does not take into account the structured sparsity pattern of the BES matrix. All of this implies that the Trace Norm tends to induce low rank dense solutions, which are not biologically plausible.
According to Figures 3 and 4, the positions of the MAS obtained from the BES matrices estimated by the Matrix Factorization approach and the Group Lasso and Trace Norm regularizers follow closely the positions of the true MAS. Nevertheless, it is worth highlighting that before selecting the MAS, we first need an accurate estimation of their number, and the Group Lasso and Trace Norm regularizers were not able to get a precise estimate of it; only the Matrix Factorization approach was.
From these results, we can see that the proposed Matrix Factorization approach outperforms both the Group Lasso and Trace Norm regularizers. The main reason is that it combines their two main features: it brings together the structured sparsity (from Group Lasso) and the low rank (from Trace Norm) into one unified framework, which implies that it is able to induce structured sparse-low-rank solutions which are biologically plausible: few active sources, with linear relations between them.

Conclusions
We have presented a novel approach to solve the EEG inverse problem, which is based on matrix factorization and regularization. Our method combines the ideas behind the Group Lasso (structured sparsity) and Trace Norm (low rank) regularizers into one unified framework. We have also developed, and analyzed the convergence of, an alternating minimization algorithm to solve the resulting nonsmooth-nonconvex regularization problem. Finally, using simulation studies, we have compared our method with the Group Lasso and Trace Norm regularizers when they are applied directly to the target matrix, and we have shown the gain in performance obtained by our method, demonstrating the effectiveness and efficiency of the proposed algorithm.

Definition 6.4. Composite map. Let A : X → P(Y) and B : Y → P(Z) be two set-valued mappings. The composite map C = B ∘ A, which takes points x ∈ X to sets C(x) ⊂ Z, is defined by C(x) := ∪_{y ∈ A(x)} B(y).

Definition 6.5. Closed map. A set-valued mapping Φ : X → P(Y) is said to be closed at x₀ ∈ X if xₙ → x₀, yₙ ∈ Φ(xₙ), and yₙ → y₀ together imply y₀ ∈ Φ(x₀).

Figure 1 Simulation results: waveforms of the MAS, estimated EEG, and sparsity pattern of the estimated BES matrix. Experiment setup: 413 sources, 128 EEG electrodes, 161 time instants, 4 main active sources, each with its corresponding 2 nearest neighbor sources also active.

Figure 3 Localization of the MAS, N = 413 sources. From left to right: Ground Truth, Matrix Factorization, Group Lasso, and Trace Norm.

Figure 4 Localization of the MAS, N = 2,052 sources. From left to right: Ground Truth, Matrix Factorization, Group Lasso, and Trace Norm.

where Y = [y(t₁), y(t₂), . . ., y(t_T)] ∈ R^{M×T}, with S and E formed analogously by stacking the vectors s(t) and e(t) as columns (the entry A_{ij} tells us how the jth BES influences the measurement obtained by the ith electrode). Following this notation, the EEG inverse problem can be stated as follows: Given a set of EEG signals (Y) and a forward model (A), estimate the current sources within the brain (S) that produce these signals.