
A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

Abstract

We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly known as the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ2,1-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios, the performance of our method with respect to the Group Lasso and Trace Norm regularizers when they are applied directly to the target matrix.

1 Introduction

The solution of the electroencephalographic (EEG) inverse problem to obtain functional brain images is of high value for neurological research and medical diagnosis. It involves the estimation of the Brain Electrical Sources (BES) distribution from noisy EEG measurements, whose relation is modeled according to the linear model

Y = AS + E ,
(1)

where Y ∈ ℝ^{M×T} and A ∈ ℝ^{M×N} are known and represent, respectively, the EEG measurements matrix and the forward operator (a.k.a. lead field matrix), S ∈ ℝ^{N×T} denotes the BES matrix, and E ∈ ℝ^{M×T} is a noise matrix. M denotes the number of EEG electrodes, N is the number of brain electrical sources, and T is the number of time instants.

This estimation problem is very challenging: N ≫ M, and the existence of silent BES (BES that produce nonmeasurable fields on the scalp surface) implies that the EEG inverse problem has infinitely many solutions: a silent BES can always be added to a solution of the inverse problem without affecting the EEG measurements. For all these reasons, the EEG inverse problem is an underdetermined, ill-posed problem [1–4].

A classical approach to solve an ill-posed problem is to use regularization theory, which involves the replacement of the original ill-posed problem with a ‘nearby’ well-posed problem whose solution approximates the required solution [5]. Solutions developed by this theory are stated in terms of a regularization function, which helps us to select, among the infinitely many solutions, the one that best fulfills a prescribed constraint (e.g., smoothness, sparsity, or low rank). To define the constraint, we can use mathematical restrictions (minimum norm estimates) or anatomical, physiological, and functional prior information. Some examples of useful neurophysiological information are [1, 6]: the irrotational character of the brain current sources, the (smooth) dynamics of the neural signals, the clusters formed by neighboring or functionally related BES, and the smoothness and focality of the electromagnetic fields generated and propagated within the volume conductor media (brain cortex, skull, and scalp).

Several regularization functions have been proposed in the EEG community: Hämäläinen and Ilmoniemi in [7] proposed a squared Frobenius norm penalty (‖S‖_F²), which they named the Minimum Norm Estimate (MNE). This regularization function usually induces solutions that spread over a considerable part of the brain. Uutela et al. in [8] proposed an ℓ1-norm penalty (‖S‖_1). They named their approach the Minimum Current Estimate (MCE). This penalty function promotes solutions that tend to be scattered around the true sources. Mixed ℓ2,1-norm penalties have also been proposed in the framework of time basis, time-frequency dictionary, and spatial basis decompositions. These mixed-norm approaches induce structured sparse solutions and depend on decomposing the BES signals as linear combinations of multiple basis functions, e.g., Ou et al. in [9] proposed the use of temporal basis functions obtained with the singular value decomposition (SVD), Gramfort et al. in [10, 11] proposed the use of time-frequency Gabor dictionaries, and Haufe et al. in [12] proposed the use of spatial Gaussian basis functions. For a more detailed overview of inverse methods for EEG, see [2, 3, 13] and references therein. For a more detailed overview of regularization functions applied to structured sparsity problems, see [14–16] and references therein.

All of these regularizers try to induce neurophysiologically meaningful solutions, which take into account the smoothness and structured sparsity of the BES matrix: during a particular cognitive task, only the BES related to the brain area involved in that task will be active, and their corresponding time evolution will vary smoothly; that is, the BES matrix will have few nonzero rows and, in addition, its columns will vary smoothly. In this paper, we propose a regularizer that takes into account not only the smoothness and structured sparsity of the BES matrix but also its low rank, capturing in this way the linear relation between the active sources and their corresponding neighbors. In order to do so, we propose a new method based on matrix factorization and regularization, with the aim of recovering the latent structure of the BES matrix. In the factorization, the first matrix, which acts as a coding matrix, is penalized using the ℓ2,1-norm, and the second one, which acts as a dense, full-rank latent source matrix, is penalized using the squared Frobenius norm.

In our approach, the resulting optimization problem is nonsmooth and nonconvex. A standard approach to deal with the nonsmoothness introduced by the regularizers mentioned above is to reformulate the regularization problem as a second-order cone programming (SOCP) problem [12] and use interior point-based solvers. However, interior point-based methods cannot handle large-scale problems, which is the case of large EEG inverse problems involving thousands of brain sources. Another approach is to try to solve the nonsmooth problem directly, using general nonsmooth optimization methods, for instance, the subgradient method [17]. This method can be used if a subgradient of the objective function can be computed efficiently [14]. However, its convergence rate is, in practice, slow (O(1/√k), where k is the iteration counter). In this paper, in order to tackle the nonsmoothness of the optimization problem, we depart from these optimization methods and use instead efficient first-order nonsmooth optimization methods [5, 18, 19]: forward-backward splitting methods. These methods are also called proximal splitting methods because the nonsmooth function is involved via its proximity operator. Forward-backward splitting methods were first introduced in the EEG inverse problem by Gramfort et al. [10, 11, 13], where they were used to solve the nonsmooth optimization problems resulting from the use of mixed ℓ2,1-norm penalty functions. These methods have drawn increasing attention in the EEG, machine learning, and signal processing communities, especially because of their convergence rates and their ability to deal with large problems [19–21].

On the other hand, in order to handle the nonconvexity of the optimization problem, we use an iterative alternating minimization approach: minimizing over the coding matrix while keeping the latent source matrix fixed, and vice versa. Both of these optimization problems are convex: the first one can be solved using proximal splitting methods, while the second one can be solved directly in terms of a matrix inversion.

The rest of the paper is organized as follows. In Section 2, we give an overview of the EEG inverse problem. In Section 3, we present the mathematical background related to proximal splitting methods. The resulting nonsmooth and nonconvex optimization problem is formally described in Section 4. In Section 5, we propose an alternating minimization algorithm, and its convergence analysis is presented in Section 6. Section 7 is devoted to the numerical evaluation of the algorithm and its comparison with the Group Lasso and Trace Norm regularizers, which partially consider the characteristics of the matrix S: its structured sparsity by using the ℓ2,1-norm and its low rank by using the trace norm, respectively. The advantages of considering both characteristics in a single method, like the proposed one, become clear in comparison with the independent use of the Group Lasso and Trace Norm regularizers. Finally, conclusions are presented in Section 8.

2 EEG inverse problem background

The EEG signals represent the electrical activity of one or several assemblies of neurons [22]. The area of a neuron assembly is small compared to the distance to the observation point (the EEG sensors). Therefore, the electromagnetic fields produced by an active neuron assembly at the sensor level are very similar to the field produced by a current dipole [23]. This simplified model is known as the equivalent current dipole (ECD). These ECDs are also known by other names such as BES and current sources. Due to the uniform spatial organization of their dendrites (perpendicular to the brain cortex), the pyramidal neurons are the only neurons that can generate a net current dipole over a piece of cortical surface whose field is detectable on the scalp [3]. According to [24], it is necessary to add the fields of about 10^4 pyramidal neurons in order to produce a voltage that is detectable on the scalp. These voltages can be recorded by using different types of electrodes [22], such as disposable (gel-less and pre-gelled types), reusable disc electrodes (gold, silver, stainless steel, or tin), headbands and electrode caps, saline-based electrodes, and needle electrodes.

Under the quasi-static approximation of Maxwell’s equations, we can express the general model for the observed EEG signals y(t) at time t as linear functions of the BES s(t) [9]:

y(t)=As(t)+e(t),
(2)

where y(t) ∈ ℝ^{M×1} is the EEG measurements vector, s(t) ∈ ℝ^{N×1} is the BES vector, e(t) ∈ ℝ^{M×1} is the noise vector, and A ∈ ℝ^{M×N} is the lead field matrix. In a typical experimental setup, the number of electrodes (M) is on the order of 10², and the number of BES (N) is on the order of 10³–10⁴. We can express the former model for all time instants {t_1, t_2, …, t_T} (corresponding to some observation time window) by using the matrix formulation (1), where Y = [y(t_1), y(t_2), …, y(t_T)] ∈ ℝ^{M×T}, S = [s(t_1), s(t_2), …, s(t_T)] ∈ ℝ^{N×T}, and E = [e(t_1), e(t_2), …, e(t_T)] ∈ ℝ^{M×T}. The i-th row of the matrix Y represents the electrical activity recorded by the i-th EEG electrode during the observation time window. In the BES matrix S, each row represents the time evolution of one brain electrical source, and each column represents the activity of all the sources at a particular time instant. Finally, the forward operator A summarizes the geometric and electric properties of the conducting media (brain, skull, and scalp) and establishes the link between the current sources and the EEG sensors (A_{ij} tells us how the j-th BES influences the measurement obtained by the i-th electrode). Following this notation, the EEG inverse problem can be stated as follows: Given a set of EEG signals (Y) and a forward model (A), estimate the current sources within the brain (S) that produce these signals.
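To fix ideas, the following minimal numpy sketch builds a toy instance of the linear model (1); the dimensions and all variable names are illustrative assumptions, not the experimental setup used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N, T = 64, 500, 100           # electrodes, sources, time instants (illustrative)
A = rng.standard_normal((M, N))  # stand-in for a real lead field matrix
S = np.zeros((N, T))             # BES matrix: only a few rows (sources) are active
S[:5, :] = np.sin(np.linspace(0.0, 4.0 * np.pi, T))  # toy smooth source activity
E = 0.1 * rng.standard_normal((M, T))                # sensor noise

Y = A @ S + E                    # EEG measurements, Eq. (1)
print(Y.shape)                   # (M, T)
```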

3 Mathematical background

3.1 Proximity operator

The proximity operator [19, 25] corresponding to a convex function f is a mapping from ℝⁿ to itself and is defined as follows:

prox_f(z) = argmin_{x ∈ ℝⁿ} f(x) + (1/2)‖x − z‖²,
(3)

where ‖·‖ denotes the Euclidean norm. Note that the proximity operator is well defined, because the above minimum exists and is unique (the objective function is strongly convex).
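As a concrete scalar example (a standard result, not specific to this paper), for f(x) = λ|x| the proximity operator reduces to elementwise soft thresholding, prox_{λ|·|}(z) = sign(z)·max(|z| − λ, 0); a minimal numpy sketch:

```python
import numpy as np

def prox_l1(z, lam):
    """Proximity operator of lam * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

print(prox_l1(np.array([-2.0, 0.3, 1.5]), lam=1.0))  # [-1.   0.   0.5]
```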

3.2 Subdifferential-proximity operator relationship

If f is a convex function on ℝⁿ and y ∈ ℝⁿ, then [26]

x ∈ ∂f(y) ⟺ y = prox_f(x + y),
(4)

where ∂f(y) denotes the subdifferential of f at y.

3.3 Principles of proximal splitting methods

Proximal splitting methods are specifically tailored to solve an optimization problem of the form

minimize_S f(S) + r(S),
(5)

where f(S) is a smooth convex function, and r(S) is also a convex function, but nonsmooth. From convex analysis [17], we know that S∗ is a minimizer of (5) if and only if 0 ∈ ∂(f + r)(S∗). This implies the following [18]:

0 ∈ ∂(f + r)(S∗) ⟺ 0 ∈ {∇f(S∗) + ∂r(S∗)} ⟺ −∇f(S∗) ∈ ∂r(S∗) ⟺ −γ∇f(S∗) ∈ γ∂r(S∗) ⟺ (S∗ − γ∇f(S∗)) − S∗ ∈ ∂(γr)(S∗)

Using (4) in the former expression, we get

S∗ = prox_{γr}(S∗ − γ∇f(S∗))
(6)

Equation 6 suggests that we can solve (5) using a fixed point iteration:

S_{k+1} = prox_{γr}(S_k − γ∇f(S_k))
(7)

In optimization, (7) is known as the forward-backward splitting process [19]. It consists of two steps: first, it performs a forward gradient descent step S_k′ = S_k − γ∇f(S_k), and then it performs a backward step S_{k+1} = prox_{γr}(S_k′).

From (7), we can see the importance of the proximity operator (associated with γr(S)) for the forward-backward splitting methods, since their main step is to compute it. If a closed-form expression for this proximity operator is available, or if it can be approximated efficiently (with the approximation errors decreasing at appropriate rates [27]), then we can solve (7) efficiently. Furthermore, when f has a Lipschitz continuous gradient, there are fast algorithms to solve (7). For instance, the Iterative Soft Thresholding Algorithm (ISTA) has a convergence rate of O(1/k), and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) has a convergence rate of O(1/k²) [5].
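The following minimal sketch illustrates the forward-backward iteration (7) together with the FISTA momentum step; grad_f, prox_r, and the Lipschitz constant L are assumed to be supplied by the caller, and the function names are ours, not taken from any particular library.

```python
import numpy as np

def fista(grad_f, prox_r, S0, L, n_iter=200):
    """Minimal FISTA sketch for min_S f(S) + r(S), assuming:
    - grad_f(S) returns the gradient of the smooth term f,
    - prox_r(S, gamma) returns prox_{gamma * r}(S),
    - L is a Lipschitz constant of grad_f (step size gamma = 1/L)."""
    S = S0.copy()
    Z = S0.copy()                 # extrapolated point
    tk = 1.0
    for _ in range(n_iter):
        S_new = prox_r(Z - grad_f(Z) / L, 1.0 / L)        # forward-backward step (7)
        tk_new = (1.0 + np.sqrt(1.0 + 4.0 * tk ** 2)) / 2.0
        Z = S_new + ((tk - 1.0) / tk_new) * (S_new - S)   # momentum (extrapolation)
        S, tk = S_new, tk_new
    return S
```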

4 Problem formulation

The regularized EEG inverse problem can be stated as follows:

Ŝ = argmin_S (1/2)‖AS − Y‖_F² + λΩ(S), λ > 0,
(8)

where (1/2)‖AS − Y‖_F² is the squared loss function (‖·‖_F denotes the Frobenius norm), and λΩ(S) is a nonsmooth penalty term that is used to encode the prior knowledge about the structure of the target matrix S.

In order to induce structured sparse-low-rank solutions, we propose to reformulate (1) using a matrix factorization approach, which involves expressing the matrix S as the product of two matrices, S=B C, obtaining the following nonlinear estimation model:

Y = ABC + E ,
(9)

where B and C are penalized using the ℓ2,1-norm and the squared Frobenius norm, respectively. The resulting optimization problem can be stated as follows:

B̂, Ĉ = argmin_{B,C} (1/2)‖A(BC) − Y‖_F² + λ Σ_{i=1}^{N} ‖B(i,:)‖_2 + (ρ/2) Σ_{i=1}^{K} ‖C(i,:)‖_2² = argmin_{B,C} (1/2)‖A(BC) − Y‖_F² + λ‖B‖_{2,1} + (ρ/2)‖C‖_F²,
(10)

where λ > 0, ρ > 0, B ∈ ℝ^{N×K}, C ∈ ℝ^{K×T}, and B(i,:), C(i,:) denote the i-th rows of B and C, respectively. K ≤ min{N, T}, λ, and ρ are parameters of the model that must be adjusted.

In this formulation, which we denote as the matrix factorization approach, the ℓ2,1-norm and the squared Frobenius norm induce structured sparsity and smoothness in the rows of B and C, respectively, and therefore also in the rows of S. Finally, the parameter K bounds the rank of S:

rank(B) ≤ min{N, K} ⟹ rank(B) ≤ K
rank(C) ≤ min{K, T} ⟹ rank(C) ≤ K
rank(BC) ≤ min{rank(B), rank(C)} ≤ K ⟹ rank(S) ≤ K

Hence, the proposed regularization framework takes into account all the prior knowledge about the structure of the target matrix S.

5 Optimization algorithm

5.1 Matrix factorization approach

In this section, we address the issue of implementing the learning method (10) numerically. We propose the following reparameterization of (10):

B = √ρ B̃, C = (1/√ρ) C̃ ⟹ BC = (√ρ B̃)((1/√ρ) C̃) ⟹ BC = B̃C̃
(11)

Using (11) in the objective function of (10), we get

(1/2)‖A(B̃C̃) − Y‖_F² + λ√ρ ‖B̃‖_{2,1} + (ρ/2)(1/√ρ)² ‖C̃‖_F²
= (1/2)‖A(B̃C̃) − Y‖_F² + λ√ρ ‖B̃‖_{2,1} + (1/2)‖C̃‖_F²
= (1/2)‖A(B̃C̃) − Y‖_F² + λ̃ ‖B̃‖_{2,1} + (1/2)‖C̃‖_F²,

where λ̃ = λ√ρ, and therefore, we get an optimization problem with only one regularization parameter:

B̂, Ĉ = argmin_{B,C} (1/2)‖A(BC) − Y‖_F² + λ‖B‖_{2,1} + (1/2)‖C‖_F², λ > 0
(12)

The optimization problem (12) is a simultaneous minimization over the matrices B and C. For a fixed C, the minimum over B can be obtained using FISTA. On the other hand, for a fixed B, the minimum over C can be obtained directly in terms of a matrix inversion. These observations suggest an alternating minimization algorithm (Algorithm 1) [15, 28], which, starting from an initial matrix C_0, iterates the updates

B_t = argmin_B (1/2)‖A(BC_{t−1}) − Y‖_F² + λ‖B‖_{2,1} + (1/2)‖C_{t−1}‖_F²
(13)
C_t = argmin_C (1/2)‖A(B_tC) − Y‖_F² + λ‖B_t‖_{2,1} + (1/2)‖C‖_F²
(14)

until a stopping criterion is met.

In order to obtain the initialization matrix C 0 , we use an approach based on the singular value decomposition of Y. Without loss of generality, let us work with (9) in the noiseless case:

Y = ABC
(15)

From (15), we can see that {Y_1, Y_2, …, Y_M} ⊂ RowSpace(C), where Y_i denotes the i-th row of Y.

Now, let us obtain a rank-K approximation of Y by using a truncated SVD (truncated at the singular value σ K ):

Y ≈ U_{M×K} Σ_{K×K} V_{K×T}
(16)

From the SVD theory [29], we know that {Y_1, Y_2, …, Y_M} ⊂ RowSpace(V); therefore, we can choose C_0 = V. Then, given C_0, we can start iterating using the updates (13) and (14).
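A minimal sketch of this SVD-based initialization, assuming numpy and a data matrix Y:

```python
import numpy as np

def init_C(Y, K):
    """Initialize C_0 with the top-K right singular vectors of Y (Eq. (16))."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return Vt[:K, :]   # C_0 = V, a K x T matrix spanning the row space of Y
```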

5.1.1 Minimization over B (fixed C)

The minimization over B can be stated as follows:

B_t = argmin_B F_B(B) + λ‖B‖_{2,1}, λ > 0,
(17)

where F_B(B) = (1/2)‖A(BC_{t−1}) − Y‖_F² + (1/2)‖C_{t−1}‖_F². This is a composite convex optimization problem involving the sum of a smooth function (F_B(B)) and a nonsmooth function (λ‖B‖_{2,1}). As we have seen in Section 3, this kind of problem can be efficiently handled using proximal splitting methods (e.g., FISTA). In order to apply FISTA to solve (17), we first need to compute the following three quantities (a code sketch combining them is given after the list):

  1. The gradient of the smooth function F_B(B):

     ∇F_B(B) = ∂F_B(B)/∂B = A^⊤(A(BC_{t−1}) − Y)C_{t−1}^⊤,

     where A^⊤ denotes the transpose of the matrix A.

  2. An upper bound on the Lipschitz constant (L) of ∇F_B(B) (it can also be estimated using a backtracking search routine [5]):

     ‖∇F_B(B_1) − ∇F_B(B_2)‖_F² = ‖A^⊤A B_1 C_{t−1}C_{t−1}^⊤ − A^⊤A B_2 C_{t−1}C_{t−1}^⊤‖_F² = ‖A^⊤A(B_1 − B_2)C_{t−1}C_{t−1}^⊤‖_F² = Σ_{j=1}^{K} ‖A^⊤A(B_1 − B_2)(C_{t−1}C_{t−1}^⊤)_j‖_2²,

     where (C_{t−1}C_{t−1}^⊤)_j denotes the j-th column of the matrix C_{t−1}C_{t−1}^⊤. Taking into account that ‖Qx‖_2 ≤ |Q|_2 ‖x‖_2, ∀x ∈ ℝ^N, Q ∈ ℝ^{M×N} [29], where |·|_2 denotes the spectral norm, we get:

     ‖∇F_B(B_1) − ∇F_B(B_2)‖_F² = Σ_{j=1}^{K} ‖A^⊤A(B_1 − B_2)(C_{t−1}C_{t−1}^⊤)_j‖_2² ≤ Σ_{j=1}^{K} |A^⊤A(B_1 − B_2)|_2² ‖(C_{t−1}C_{t−1}^⊤)_j‖_2² = |A^⊤A(B_1 − B_2)|_2² Σ_{j=1}^{K} ‖(C_{t−1}C_{t−1}^⊤)_j‖_2² = |A^⊤A(B_1 − B_2)|_2² ‖C_{t−1}C_{t−1}^⊤‖_F²
     (18)

     From (18), taking into account that the spectral norm is submultiplicative (|PQ|_2 ≤ |P|_2 |Q|_2, ∀P ∈ ℝ^{M×N}, Q ∈ ℝ^{N×T}), it follows that:

     ‖∇F_B(B_1) − ∇F_B(B_2)‖_F² ≤ |A^⊤A|_2² |B_1 − B_2|_2² ‖C_{t−1}C_{t−1}^⊤‖_F²

     and, using the fact that |P|_2² ≤ ‖P‖_F², ∀P ∈ ℝ^{M×N}, we obtain:

     ‖∇F_B(B_1) − ∇F_B(B_2)‖_F² ≤ |A^⊤A|_2² ‖C_{t−1}C_{t−1}^⊤‖_F² ‖B_1 − B_2‖_F² = L² ‖B_1 − B_2‖_F²
     (19)

     where L = |A^⊤A|_2 ‖C_{t−1}C_{t−1}^⊤‖_F.

  3. The proximity operator associated with the nonsmooth function λ‖·‖_{2,1}:

     prox_{λ‖·‖_{2,1}}(B) = argmin_X λ‖X‖_{2,1} + (1/2)‖X − B‖_F², whose i-th row is given by

     [prox_{λ‖·‖_{2,1}}(B)]_{i,:} = (B_{i,:}/‖B_{i,:}‖_2)(‖B_{i,:}‖_2 − λ)_+, i = 1, …, N,
     (20)

     where (·)_+ = max(·, 0), and by convention 0/0 = 0.
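A minimal numpy sketch of these three ingredients (gradient, Lipschitz bound, and proximity operator); the function names are ours, and the argument C stands for C_{t−1}:

```python
import numpy as np

def grad_FB(B, A, C, Y):
    """Gradient of the smooth part F_B(B) = 0.5 * ||A B C - Y||_F^2 (plus a constant term)."""
    return A.T @ (A @ B @ C - Y) @ C.T

def lipschitz_bound(A, C):
    """Upper bound L = |A^T A|_2 * ||C C^T||_F on the Lipschitz constant of grad_FB (Eq. (19))."""
    return np.linalg.norm(A.T @ A, 2) * np.linalg.norm(C @ C.T, 'fro')

def prox_l21(B, lam):
    """Row-wise group soft thresholding: prox of lam * ||.||_{2,1} (Eq. (20))."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(norms - lam, 0.0) / np.where(norms > 0, norms, 1.0)  # 0/0 := 0
    return B * scale
```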

5.1.2 Minimization over C (fixed B)

The minimization over C can be stated as follows:

C_t = argmin_C F_C(C),
(21)

where F_C(C) = (1/2)‖A(B_tC) − Y‖_F² + λ‖B_t‖_{2,1} + (1/2)‖C‖_F² is a smooth function of C. In what follows, we show how the minimum over C can be obtained directly in terms of a matrix inversion:

∇F_C(C) = ∂F_C(C)/∂C = B_t^⊤A^⊤(A(B_tC) − Y) + C
∇F_C(C_t) = 0 ⟹ B_t^⊤A^⊤(A(B_tC_t) − Y) + C_t = 0 ⟹ C_t = (B_t^⊤A^⊤AB_t + I_K)^{−1} B_t^⊤A^⊤Y
(22)

The matrix B_t^⊤A^⊤AB_t + I_K ∈ ℝ^{K×K}, and K is assumed to be small; therefore, computing its inverse is quite cheap.
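Putting the pieces together, the following hypothetical sketch combines the closed-form C-update (22) with the FISTA B-step into the alternating scheme of Algorithm 1; it reuses the helper functions sketched above (init_C, fista, grad_FB, lipschitz_bound, prox_l21) and is meant as an illustration under those assumptions, not a reference implementation.

```python
import numpy as np

def update_C(A, B, Y):
    """Closed-form C-update of Eq. (22): C = (B^T A^T A B + I_K)^{-1} B^T A^T Y."""
    K = B.shape[1]
    G = B.T @ A.T @ A @ B + np.eye(K)            # K x K, cheap to factorize
    return np.linalg.solve(G, B.T @ (A.T @ Y))   # linear solve instead of explicit inverse

def matrix_factorization_inverse(A, Y, K, lam, n_outer=50, n_fista=100):
    """Hypothetical sketch of Algorithm 1: alternate FISTA B-steps and closed-form C-steps."""
    N = A.shape[1]
    C = init_C(Y, K)                             # SVD-based initialization (Eq. (16))
    B = np.zeros((N, K))
    for _ in range(n_outer):
        L = lipschitz_bound(A, C)
        B = fista(lambda Bx: grad_FB(Bx, A, C, Y),
                  lambda Bx, gamma: prox_l21(Bx, gamma * lam),
                  B, L, n_iter=n_fista)          # update (13)
        C = update_C(A, B, Y)                    # update (14)
    return B @ C                                 # estimated BES matrix S = B C
```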

6 Convergence analysis

We are going to analyze the convergence behavior of Algorithm 1 by using the global convergence theory of iterative algorithms developed by Zangwill [30]. Note that in this theory, the term ‘global convergence’ does not imply convergence to a global optimum for all initial points. The property of global convergence expresses, in a sense, the certainty that the algorithm converges to the solution set. Formally, an iterative algorithm ξ, on the set X, is said to be globally convergent provided that, for any starting point x_0 ∈ X, the sequence {x_n} generated by ξ has a limit point [31].

In order to use the global convergence theory of iterative algorithms, we need a formal definition of iterative algorithm, as well as the definition of a set-valued mapping (a.k.a point-to-set mapping) [30]:

Definition 6.1

Set-valued mapping. Given two sets, X and Y, a set-valued mapping defined on X, with range in the power set of Y, P(Y), is a map, Φ, which assigns to each x ∈ X a subset Φ(x) ∈ P(Y),

Φ: X → P(Y)

Definition 6.2

Iterative algorithm. Let X be a set and x_0 ∈ X a given point. Then, an iterative algorithm ξ, with initial point x_0, is a set-valued mapping

ξ: X → P(X)

which generates a sequence {x_n}_{n=1}^∞ via the rule x_{n+1} ∈ ξ(x_n), n = 0, 1, …

Now that we know the main building blocks of the global convergence theory of iterative algorithms, we are in a position to state the convergence theorem related to Algorithm 1:

Theorem 6.1

Let Φ denote the iterative Algorithm 1, and suppose that, given Y ∈ ℝ^{M×T}, A ∈ ℝ^{M×N}, B_0 ∈ ℝ^{N×K}, C_0 ∈ ℝ^{K×T}, K, and λ, the sequence {B_t, C_t}_{t=1}^∞ is generated and satisfies {B_{t+1}, C_{t+1}} ∈ Φ(B_t, C_t). Also, let Ω_B and Ω_C denote the solution sets of (13) and (14), respectively:

Ω_B = { B ∈ ℝ^{N×K} : 0 ∈ ∂_B [ (1/2)‖A(BC_{t−1}) − Y‖_F² + λ‖B‖_{2,1} + (1/2)‖C_{t−1}‖_F² ] }
Ω_C = { C ∈ ℝ^{K×T} : ∇_C [ (1/2)‖A(B_tC) − Y‖_F² + λ‖B_t‖_{2,1} + (1/2)‖C‖_F² ] = 0 }

Then, the limit of any convergent subsequence of {B_t, C_t}_{t=1}^∞ is in Ω_B and Ω_C.

This convergence theorem is a direct application of Zangwill’s global convergence theorem [30]. Before proving this assertion, let us present some definitions and theorems used in the proof.

Definition 6.3

Compact set. A set X is said to be compact if any sequence (or subsequence) in X contains a convergent subsequence whose limit is in X. More explicitly, given a subsequence {x_n}_{n ∈ N̂} in X, there exists N̂_1 ⊂ N̂ such that

x_n → x∗, n ∈ N̂_1,

with x∗ ∈ X (we write convergence of subsequences as x_n → x∗, which is equivalent to lim_{n→∞} x_n = x∗).

Definition 6.4

Composite map. Let Π_A: X → P(Y) and Π_B: Y → P(Z) be two set-valued mappings. The composite map Π_C = Π_B ∘ Π_A, which takes points x ∈ X to sets Π_C(x) ⊂ Z, is defined by

Π_C(x) := ∪_{y ∈ Π_A(x)} Π_B(y)

Definition 6.5

Closed map. A set-valued mapping Π: X → P(Y) is closed at x_0 ∈ X provided

  1. x_n → x_0 as n → ∞, x_n ∈ X,

  2. y_n → y_0 as n → ∞, y_n, y_0 ∈ Y,

  3. y_n ∈ Π(x_n)

implies y_0 ∈ Π(x_0). The map Π is called closed on S ⊂ X provided it is closed at each x ∈ S.

Theorem 6.2

Composition of closed maps. Let Π_A: X → P(Y) and Π_B: Y → P(Z) be two set-valued mappings. Suppose

  1. Π_A is closed at x_0,

  2. Π_B is closed on Π_A(x_0),

  3. if x_n → x_0 and y_n ∈ Π_A(x_n), then there exists y_0 ∈ Y such that, for some subsequence {y_{n_j}}, y_{n_j} → y_0 as j → ∞.

Then, the composite map Π_C = Π_B ∘ Π_A is closed at x_0.

Lemma 6.1

[32] Given a real-valued function h defined on X × Y, define the set-valued mapping Ψ: X → P(Y) by

Ψ(x) = argmin_{y ∈ Y} h(x, y);

then, Ψ is closed at x if Ψ(x) is nonempty.

Theorem 6.3

Weierstrass theorem. If f is a real continuous function on a compact set S ⊂ ℝⁿ, then the problem

argmin_x f(x), x ∈ S,

has an optimal solution x∗ ∈ S.

Theorem 6.4

[30] Zangwill’s global convergence theorem. Let the set-valued mapping M_x: X → P(X) determine an algorithm that, given a point x_0, generates a sequence {x_n}_{n=0}^∞ through the iteration x_{n+1} ∈ M_x(x_n). Also, let a solution set Γ ⊂ X be given. Suppose

  1. All points x_n are in a compact set S ⊂ X.

  2. There is a continuous function α: X → ℝ such that

     (a) if x ∉ Γ, then α(x′) < α(x) ∀ x′ ∈ M_x(x);

     (b) if x ∈ Γ, then α(x′) ≤ α(x) ∀ x′ ∈ M_x(x).

  3. The map M_x is closed at x if x ∉ Γ.

Then, the limit of any convergent subsequence of {x_n}_{n=0}^∞ is in Γ. That is, all accumulation points x̄ of the sequence {x_n} lie in Γ. Furthermore, α(x_n) converges to some α∗, and α(x̄) = α∗ for all accumulation points x̄.

Proof

Theorem 6.1. The iterative algorithm Φ can be decomposed into two well-defined iterative algorithms Φ B and Φ C :

Φ_B(C_{t−1}) = B_t = argmin_B (1/2)‖A(BC_{t−1}) − Y‖_F² + λ‖B‖_{2,1} + (1/2)‖C_{t−1}‖_F²
(23)
Φ_C(B_t) = C_t = argmin_C (1/2)‖A(B_tC) − Y‖_F² + λ‖B_t‖_{2,1} + (1/2)‖C‖_F²
(24)

As we can see from (23) and (24), at iteration t, the result of Φ B becomes the input of Φ C , and at iteration t+1, the result of Φ C becomes the input of Φ B ; therefore, we can express Φ as the composition of Φ C and Φ B , that is, Φ(C t-1 )= Φ C (Φ B (C t-1 )):

Φ_C(Φ_B(C_{t−1})) = C_t = argmin_C (1/2)‖A(B_tC) − Y‖_F² + λ‖B_t‖_{2,1} + (1/2)‖C‖_F²
(25)
subject to B_t = argmin_B (1/2)‖A(BC_{t−1}) − Y‖_F² + λ‖B‖_{2,1} + (1/2)‖C_{t−1}‖_F²

Let Γ be the solution set of Φ,

Γ = { C ∈ ℝ^{K×T} : ∂Z(C, t)/∂C = 0 },

where Z(C, t) = (1/2)‖A(B_tC) − Y‖_F² + λ‖B_t‖_{2,1} + (1/2)‖C‖_F².

To prove this theorem by using Zangwill’s global convergence theorem, we need to prove that all its corresponding assumptions are fulfilled. In order to prove assumption 1, let us analyze the sequences {B_t}_{t=1}^∞ and {C_t}_{t=1}^∞. The sequence {B_t}_{t=1}^∞ is generated by using FISTA, which is a convergent algorithm (B_t → B∗) that guarantees that B∗ ∈ Ω_B [5, 18]. Hence, using Definition 6.3, we can see that the sequence {B_t}_{t=1}^∞ generated by (23) lies in a compact set. On the other hand, the sequence {C_t}_{t=1}^∞ is generated by (22), which guarantees that C_t ∈ Ω_C. This sequence always converges to a point inside Ω_C, which implies that {C_t}_{t=1}^∞ also lies in a compact set. This concludes the proof of assumption 1.

To prove assumption 2, let us use Z(C,t) as the function α(·); thus, in order to verify the fulfillment of assumption 2, we need to prove that

  (a) if C_t ∉ Γ, then Z(C_{t+1}, t+1) < Z(C_t, t) ∀ C_{t+1} ∈ Φ(C_t);

  (b) if C_t ∈ Γ, then Z(C_{t+1}, t+1) ≤ Z(C_t, t) ∀ C_{t+1} ∈ Φ(C_t).

From (25), we can see that the sequence {C_t}_{t=1}^∞ will always lie in Γ (because C_t is generated by (22)); therefore, we only need to prove (b).

Let C t+1 be the solution of (25) at iteration t+1; this implies

(1/2)‖A(B_{t+1}C_{t+1}) − Y‖_F² + λ‖B_{t+1}‖_{2,1} + (1/2)‖C_{t+1}‖_F² ≤ (1/2)‖A(B_{t+1}C) − Y‖_F² + λ‖B_{t+1}‖_{2,1} + (1/2)‖C‖_F², ∀ C ∈ ℝ^{K×T}; in particular, for C = C_t,
(1/2)‖A(B_{t+1}C_{t+1}) − Y‖_F² + λ‖B_{t+1}‖_{2,1} + (1/2)‖C_{t+1}‖_F² ≤ (1/2)‖A(B_{t+1}C_t) − Y‖_F² + λ‖B_{t+1}‖_{2,1} + (1/2)‖C_t‖_F²
(26)

On the other hand, if B t+1 is the solution of (23) at iteration t+1, this implies

(1/2)‖A(B_{t+1}C_t) − Y‖_F² + λ‖B_{t+1}‖_{2,1} + (1/2)‖C_t‖_F² ≤ (1/2)‖A(BC_t) − Y‖_F² + λ‖B‖_{2,1} + (1/2)‖C_t‖_F², ∀ B ∈ ℝ^{N×K}; in particular, for B = B_t,
(1/2)‖A(B_{t+1}C_t) − Y‖_F² + λ‖B_{t+1}‖_{2,1} + (1/2)‖C_t‖_F² ≤ (1/2)‖A(B_tC_t) − Y‖_F² + λ‖B_t‖_{2,1} + (1/2)‖C_t‖_F²
(27)

and from (26) and (27), we can prove assumption 2(b):

(1/2)‖A(B_{t+1}C_{t+1}) − Y‖_F² + λ‖B_{t+1}‖_{2,1} + (1/2)‖C_{t+1}‖_F² ≤ (1/2)‖A(B_tC_t) − Y‖_F² + λ‖B_t‖_{2,1} + (1/2)‖C_t‖_F² ⟹ Z(C_{t+1}, t+1) ≤ Z(C_t, t)

In order to prove assumption 3, we need to prove that Φ is closed at C if C ∉ Γ. To do so, we are going to use Theorem 6.2; therefore, we need to prove that Φ_B and Φ_C are both closed maps: from (23) and (24), we can see that their corresponding objective functions are continuous ∀ B ∈ ℝ^{N×K} and ∀ C ∈ ℝ^{K×T}, respectively; hence, by using the Weierstrass theorem and Lemma 6.1, we can conclude that Φ_B and Φ_C are both closed maps for any C_{t−1} and B_t, respectively, and by using Theorem 6.2, we can conclude that Φ is closed at any C_{t−1}.

Finally, from all the previous proofs and Zangwill’s global convergence theorem, it follows that the limit of any convergent subsequence of {B_t, C_t}_{t=1}^∞ is in Ω_B and Ω_C.

7 Numerical experiments

In this section, we evaluate the performance of the matrix factorization approach and compare it with the Group Lasso regularizer:

Ŝ = argmin_S (1/2)‖AS − Y‖_F² + λ Σ_{i=1}^{N} ‖S(i,:)‖_2, λ > 0
(28)

and the Trace Norm regularizer:

Ŝ = argmin_S (1/2)‖AS − Y‖_F² + λ Σ_{i=1}^{q} σ_i(S), λ > 0,
(29)

where q=min{N,T} and σ i (S) denotes the i th singular value of S. Both problems (28) and (29) were solved using the FISTA implementation of the SPArse Modeling Software (SPAMS) [33, 34].
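For reference, the proximity operator that a FISTA-type solver needs for the trace norm penalty in (29) is singular value soft thresholding (a standard result; the sketch below is our own illustration and is independent of the SPAMS implementation):

```python
import numpy as np

def prox_trace_norm(S, lam):
    """Prox of lam * sum_i sigma_i(S): soft-threshold the singular values of S."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt
```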

In order to have a reproducible comparison of the different regularization approaches, we generated two synthetic scenarios:

  • M=128 EEG electrodes, T=161 time instants, and N=413 current sources within the brain, of which only 12 are active: 4 main active sources, each with its 2 nearest neighbor sources also active. The other 401 sources are not active (zero electrical activity). Therefore, in this scenario, the synthetic matrix S is a structured sparse matrix with only 12 nonzero rows (the rows associated with the active sources).

  • M=128 EEG electrodes, T=161 time instants, and N=2,052 current sources within the brain, of which only 40 are active: 4 main active sources, each with its 9 nearest neighbor sources also active. The other 2,012 sources are not active (zero electrical activity). Therefore, in this scenario, the synthetic matrix S is a structured sparse matrix with only 40 nonzero rows (the rows associated with the active sources).

In both scenarios, the simulated electrical activity (simulated waveforms) associated with the four Main Active Sources (MAS) was obtained from a face perception evoked-potential study [35, 36]. To obtain the simulated electrical activity associated with each one of the active neighbor sources, we simply set it to a scaled version of the electrical activity of its nearest MAS (with a scaling factor equal to 0.5). Hence, there is a linear relation between the four MAS and their corresponding nearest neighbor sources; therefore, in both scenarios, the rank of the synthetic matrix S is equal to 4.

As forward model (A), we used a three-shell concentric spherical head model. In this model, the inner sphere represents the brain, the intermediate layer represents the skull, and the outer layer represents the scalp [37]. To obtain the values of each one of the components of the matrix A, we need to solve the EEG forward problem [38]: Given the electrical activity of the current sources within the brain and a model for the geometry of the conducting media (brain, skull and scalp, with its corresponding electric properties), compute the resulting EEG signals. This problem was solved by using the SPM software [39]. Taking into account the comments mentioned in Section 2, the N simulated current sources were positioned on a mesh located on the brain cortex, with an orientation fixed perpendicular to it.

Finally, the simulated EEG signals were generated according to (1), where E is Gaussian noise G(0, σ²I) whose variance was set to satisfy SNR = 20 log₁₀(‖AS‖_F/‖E‖_F) = 10 dB. Summarizing, our synthetic problems can be stated as follows: Given matrices Y ∈ ℝ^{128×161} and A ∈ ℝ^{128×N}, recover the synthetic BES matrix S ∈ ℝ^{N×161}. According to this, in both scenarios, we want to estimate a BES matrix which is structured sparse and low rank, with its rank equal to the number of simulated MAS. The activity of the four MAS, the synthetic EEG measurements, as well as the sparsity pattern of the synthetic BES matrix are shown in Figures 1 and 2 (Ground Truth).
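A small sketch of how noise can be scaled to meet the prescribed SNR of 10 dB (our own illustration of the definition above, not the exact code used for the experiments):

```python
import numpy as np

def add_noise_for_snr(AS, snr_db, rng=None):
    """Scale white Gaussian noise so that 20*log10(||AS||_F / ||E||_F) = snr_db."""
    rng = rng or np.random.default_rng(0)
    E = rng.standard_normal(AS.shape)
    target = np.linalg.norm(AS, 'fro') / (10.0 ** (snr_db / 20.0))  # required ||E||_F
    E *= target / np.linalg.norm(E, 'fro')
    return AS + E, E
```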

Figure 1

Simulation results: waveforms of the MAS, estimated EEG, and sparsity pattern of the estimated BES matrix. Experimental setup: 413 sources, 128 EEG electrodes, 161 time instants, 4 main active sources, each with its 2 nearest neighbor sources also active.

Figure 2

Simulation results: waveforms of the MAS, estimated EEG, and sparsity pattern of the estimated BES matrix. Experimental setup: 2,052 sources, 128 EEG electrodes, 161 time instants, 4 main active sources, each with its 9 nearest neighbor sources also active.

We have used cross-validation to select the regularization parameter λ associated with the Group Lasso and Trace Norm regularizers, as well as the parameters λ and K in the case of the Matrix Factorization approach (K ∈ {1, 2, 3, …, 10}, λ ∈ {10⁻³, 10⁻², 10⁻¹, …, 10³}): the rows of Y are randomly partitioned into three groups of approximately equal size. Each union of two groups forms a training set (TrS), while the remaining group forms a test set (TS). This procedure is carried out three times, each time selecting a different test group. Inverse reconstructions are carried out based on the training sets, obtaining different regression matrices Ŝ_i. We then evaluate the root mean square error (RMSE) using the test sets and the regression matrices Ŝ_i:

RMSE = (1/3) Σ_{i=1}^{3} (1/√(M_{TS_i} × T)) ‖A_{TS_i} Ŝ_i − Y_{TS_i}‖_F,

where Y_{TS_i} ∈ ℝ^{M_{TS_i}×T} and A_{TS_i} ∈ ℝ^{M_{TS_i}×N} (TS_i denotes the index set of the rows that belong to the i-th test set). Once the estimated matrix Ŝ has been found, we apply a threshold to remove spurious sources with almost zero activity. We have set this threshold equal to 1% of the mean energy of all the sources.
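A minimal sketch of the three-fold RMSE computation defined above, assuming the per-fold estimates Ŝ_i and the test-row index sets are already available:

```python
import numpy as np

def cv_rmse(A, Y, S_hats, test_sets):
    """Average RMSE over the three test folds.
    test_sets[i] holds the row indices of the i-th test set; S_hats[i] is the
    matrix estimated from the complementary training rows."""
    T = Y.shape[1]
    errs = []
    for idx, S_hat in zip(test_sets, S_hats):
        R = A[idx, :] @ S_hat - Y[idx, :]
        errs.append(np.linalg.norm(R, 'fro') / np.sqrt(len(idx) * T))
    return float(np.mean(errs))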

7.1 Performance evaluation

In order to evaluate the performance of the regularizers, we compare the waveform and localization of the four MAS present in the synthetic BES matrix against the four MAS estimated by each one of the regularizers. We also compare the sparsity pattern of the estimated BES matrix S ̂ against the sparsity pattern of the synthetic BES matrix S, as well as the synthetic and predicted EEG measurements.

As we can see from Figures 1 and 2, the Group Lasso and Trace Norm regularizers do not reveal the correct number of linearly independent sources, while the Matrix Factorization does: it finds four linearly independent sources in both scenarios. To select these four linearly independent MAS, we find a basis for the Column Space(S̃) (using a QR factorization), where S̃ is a matrix whose rows are a sorted version of the rows of S (sorted in descending order of their corresponding energy value). To get the four linearly independent MAS estimated by the Group Lasso and Trace Norm regularizers, we followed the same procedure described before and retained the first four components of the basis of the Column Space(S̃).

According to Figures 1 and 2, the Matrix Factorization approach is able to estimate a BES matrix with the correct rank and whose sparsity pattern closely follows the sparsity pattern of the true BES matrix; that is, both matrices have a similar structure, which implies that the proposed approach is able to induce the desired solution: a row-structured sparse matrix whose nonzero rows encode the linear relation between the active sources and their corresponding nearest neighbor sources. Using the estimated BES matrix, the Matrix Factorization approach is also able to predict a smooth version of the noisy EEG, and the waveforms of the estimated MAS closely follow the waveforms of the true MAS.

As we can see from Figures 1 and 2, the Group Lasso is able to estimate a BES matrix with a row-sparsity pattern similar to that of the true BES matrix, but it does not take into account the linear relation between the nonzero rows, which can be seen from the rank of the estimated BES matrix. The waveforms of the estimated MAS are very similar to the true MAS, but they are not as smooth as the ones estimated by the Matrix Factorization approach.

As we can see from Figures 1 and 2, the Trace Norm regularizer takes into account the linear relation of the active sources by inducing solutions which are low rank, but, on the other hand, it does not take into account the structured sparsity pattern of the BES matrix. This implies that the Trace Norm tends to induce low-rank dense solutions, which are not biologically plausible.

According to Figures 3 and 4, the positions of the MAS obtained from the BES matrices estimated by the Matrix Factorization approach and by the Group Lasso and Trace Norm regularizers closely follow the positions of the true MAS. Nevertheless, it is worth highlighting that before selecting the MAS, we first need an accurate estimate of their number, and only the Matrix Factorization approach was able to obtain a precise estimate of it.
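One way to implement the energy sorting and the QR-based selection of linearly independent MAS described above (a hypothetical sketch; the paper does not specify its exact implementation):

```python
import numpy as np
from scipy.linalg import qr

def select_independent_sources(S, n_sources=4):
    """Sort the rows of S by energy and pick the first n_sources linearly
    independent rows, using a column-pivoted QR factorization of the
    energy-sorted matrix transposed."""
    order = np.argsort(-np.sum(S ** 2, axis=1))   # row indices, descending energy
    S_sorted = S[order, :]
    _, _, piv = qr(S_sorted.T, pivoting=True)     # pivots expose independent columns of S_sorted.T
    return order[piv[:n_sources]]                 # indices of the selected MAS in S
```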

Figure 3

Localization of the MAS, N=413 sources. From left to right: Ground Truth, Matrix Factorization, Group Lasso, and Trace Norm.

Figure 4

Localization of the MAS, N=2,052 sources. From left to right: Ground Truth, Matrix Factorization, Group Lasso, and Trace Norm.

From these results, we can see that the proposed Matrix Factorization approach outperforms both the Group Lasso and Trace Norm regularizers. The main reason is that it combines their two main features, structured sparsity (from the Group Lasso) and low rank (from the Trace Norm), in one unified framework, which implies that it is able to induce structured sparse-low-rank solutions that are biologically plausible: few active sources, with linear relations between them.

8 Conclusions

We have presented a novel approach to solve the EEG inverse problem, which is based on matrix factorization and regularization. Our method combines the ideas behind the Group Lasso (structured sparsity) and Trace Norm (low rank) regularizers in one unified framework. We have also developed an alternating minimization algorithm to solve the resulting nonsmooth-nonconvex regularization problem and analyzed its convergence. Finally, using simulation studies, we have compared our method with the Group Lasso and Trace Norm regularizers when they are applied directly to the target matrix, and we have shown the gain in performance obtained by our method, demonstrating the effectiveness and efficiency of the proposed algorithm.

References

  1. Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV: Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys. 1993, 65(2):413.
  2. Pascual-Marqui RD: Review of methods for solving the EEG inverse problem. Int. J. Bioelectromagnetism 1999, 1(1):75-86.
  3. Baillet S, Mosher JC, Leahy RM: Electromagnetic brain mapping. IEEE Signal Process. Mag. 2001, 18(6):14-30.
  4. Grech R, Cassar T, Muscat J, Camilleri KP, Fabri SG, Zervakis M, Xanthopoulos P, Sakkalis V, Vanrumste B: Review on solving the inverse problem in EEG source analysis. J. Neuroeng. Rehabil. 2008, 5(1):25.
  5. Beck A, Teboulle M: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2(1):183-202.
  6. Menendez RGdP, Murray MM, Michel CM, Martuzzi R, Andino SLG: Electrical neuroimaging based on biophysical constraints. NeuroImage 2004, 21(2):527-539.
  7. Hämäläinen MS, Ilmoniemi R: Interpreting magnetic fields of the brain: minimum norm estimates. Med. Biol. Eng. Comput. 1994, 32(1):35-42.
  8. Uutela K, Hämäläinen M, Somersalo E: Visualization of magnetoencephalographic data using minimum current estimates. NeuroImage 1999, 10(2):173-180.
  9. Ou W, Hämäläinen MS, Golland P: A distributed spatio-temporal EEG/MEG inverse solver. NeuroImage 2009, 44(3):932-946.
  10. Gramfort A, Strohmeier D, Haueisen J, Hämäläinen M, Kowalski M: Functional brain imaging with M/EEG using structured sparsity in time-frequency dictionaries. In Information Processing in Medical Imaging, Lecture Notes in Computer Science. Edited by Székely G, Hahn HK. Springer, Berlin; 2011:600-611.
  11. Gramfort A, Strohmeier D, Haueisen J, Hämäläinen M, Kowalski M: Time-frequency mixed-norm estimates: sparse M/EEG imaging with non-stationary source activations. NeuroImage 2013, 70:410-422.
  12. Haufe S, Tomioka R, Dickhaus T, Sannelli C, Blankertz B, Nolte G, Müller KR: Large-scale EEG/MEG source localization with spatial flexibility. NeuroImage 2011, 54(2):851-859.
  13. Gramfort A, Kowalski M, Hämäläinen M: Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods. Phys. Med. Biol. 2012, 57(7):1937.
  14. Bach F, Jenatton R, Mairal J, Obozinski G: Optimization with sparsity-inducing penalties. Found. Trends Mach. Learn. 2011, 4(1):1-106.
  15. Micchelli CA, Morales JM, Pontil M: Regularizers for structured sparsity. Adv. Comput. Math. 2013, 38(3):455-489.
  16. Sra S, Nowozin S, Wright SJ: Optimization for Machine Learning. MIT Press, Cambridge; 2012.
  17. Bertsekas DP: Nonlinear Programming. Athena Scientific, Belmont; 1999.
  18. Combettes PL, Wajs VR: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4(4):1168-1200.
  19. Combettes PL, Pesquet J-C: Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer Optimization and Its Applications. Edited by Bauschke HH, Burachik RS, Combettes PL, Elser V, Luke DR, Wolkowicz H. Springer, New York; 2011:185-212.
  20. Nesterov Y: Gradient methods for minimizing composite objective function. CORE Discussion Papers 2007076, Center for Operations Research and Econometrics (CORE), Université Catholique de Louvain; 2007.
  21. Wright SJ, Nowak RD, Figueiredo MA: Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 2009, 57(7):2479-2493.
  22. Sanei S, Chambers JA: EEG Signal Processing. Wiley, West Sussex; 2008.
  23. Malmivuo J, Plonsey R: Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields. Oxford University Press, Oxford; 1995.
  24. Murakami S, Okada Y: Contributions of principal neocortical neurons to magnetoencephalography and electroencephalography signals. J. Physiol. 2006, 575(3):925-936.
  25. Moreau JJ: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France 1965, 93(2):273-299.
  26. Micchelli CA, Shen L, Xu Y: Proximity algorithms for image models: denoising. Inverse Probl. 2011, 27:045009.
  27. Schmidt M, Roux NL, Bach F: Convergence rates of inexact proximal-gradient methods for convex optimization. Adv. Neural Inform. Process. Syst. 2011, 24:1458-1466.
  28. Argyriou A, Evgeniou T, Pontil M: Convex multi-task feature learning. Mach. Learn. 2008, 73(3):243-272.
  29. Horn RA, Johnson CR: Matrix Analysis. Cambridge University Press, Cambridge; 1990.
  30. Zangwill WI: Nonlinear Programming: A Unified Approach. Prentice-Hall, Englewood Cliffs; 1969.
  31. Sriperumbudur B, Lanckriet G: On the convergence of the concave-convex procedure. Adv. Neural Inform. Process. Syst. 2009, 22:1759-1767.
  32. Gunawardana A, Byrne W: Convergence theorems for generalized alternating minimization procedures. J. Mach. Learn. Res. 2005, 6:2049-2073.
  33. Jenatton R, Mairal J, Obozinski G, Bach F: Proximal methods for sparse hierarchical dictionary learning. In Proceedings of the International Conference on Machine Learning (ICML). Haifa; 21–24 June 2010.
  34. Mairal J, Jenatton R, Bach FR, Obozinski GR: Network flow algorithms for structured sparsity. Adv. Neural Inform. Process. Syst. 2010, 23:1558-1566.
  35. Friston K, Harrison L, Daunizeau J, Kiebel S, Phillips C, Trujillo-Barreto N, Henson R, Flandin G, Mattout J: Multiple sparse priors for the M/EEG inverse problem. NeuroImage 2008, 39(3):1104-1120.
  36. Henson R, Goshen-Gottstein Y, Ganel T, Otten L, Quayle A, Rugg M: Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cereb. Cortex 2003, 13(7):793.
  37. Hallez H, Vanrumste B, Grech R, Muscat J, De Clercq W, Vergult A, D’Asseler Y, Camilleri KP, Fabri SG, Van Huffel S, Lemahieu I: Review on solving the forward problem in EEG source analysis. J. Neuroeng. Rehabil. 2007, 4(1):46.
  38. Mosher JC, Leahy RM, Lewis PS: EEG and MEG: forward solutions for inverse methods. IEEE Trans. Biomed. Eng. 1999, 46(3):245-259.
  39. Litvak V, Mattout J, Kiebel S, Phillips C, Henson R, Kilner J, Barnes G, Oostenveld R, Daunizeau J, Flandin G, Penny W, Friston K: EEG and MEG data analysis in SPM8. Comput. Intell. Neurosci. 2011. doi:10.1155/2011/852961


Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper. They are also grateful to Dr. Carsten Stahlhut, from DTU Informatics, for valuable discussions about EEG brain imaging, and to Dr. Alexandre Gramfort, from Telecom ParisTech, for his advice on EEG signal processing and nonsmooth convex programming theory applied to the EEG inverse problem. This work has been partly supported by the Ministerio de Economía of Spain (projects ‘DEIPRO’ (id. TEC2009-14504-C02-01), ‘COMONSENS’ (id. CSD2008-00010), ‘ALCIT’ (id. TEC2012-38800-C03-01), and ‘COMPREHENSION’ (id. TEC2012-38883-C02-01)). Authors LKH and MP were funded by Banco Santander and Universidad Carlos III de Madrid’s Excellence Chair programme.

Author information


Corresponding author

Correspondence to Jair Montoya-Martínez.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Montoya-Martínez, J., Artés-Rodríguez, A., Pontil, M. et al. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem. EURASIP J. Adv. Signal Process. 2014, 97 (2014). https://doi.org/10.1186/1687-6180-2014-97

