 Research
 Open access
Time-varying graph learning from smooth and stationary graph signals with hidden nodes
EURASIP Journal on Advances in Signal Processing volume 2024, Article number: 33 (2024)
Abstract
Learning the graph structure from signals observed over a graph is a crucial task in many graph signal processing (GSP) applications. Existing approaches focus on inferring a static graph and typically assume that observations from all nodes are available. However, these approaches ignore the situation where only a subset of nodes is available from spatiotemporal measurements while the remaining nodes are never observed due to application-specific constraints, which causes the accuracy of time-varying graph estimation to decline dramatically. To handle this problem, we propose a framework that accounts for the presence of hidden nodes when identifying time-varying graphs. Specifically, we assume that the graph signals are smooth and stationary on the graphs and that only a small number of edges are allowed to change between two consecutive graphs. Under these assumptions, we formulate a challenging time-varying graph inference problem that models the influence of hidden nodes while estimating graph-shift operator matrices that have the form of a graph Laplacian. Moreover, we emphasize the similar edge pattern (column-sparsity) between different graphs. Finally, our method is evaluated on both synthetic and real-world data. The experimental results demonstrate the advantage of our method when compared to existing benchmarking methods.
1 Introduction
Recently, there has been a growing prevalence of modern data analyses involving structured data with non-Euclidean support. Numerous examples of such data can be found in the real world, including weather measurements in wireless sensor networks, stock price measurements in financial networks, and human behaviors in social networks [1,2,3,4]. Typically, graphs are used as efficient mathematical tools to describe the latent structure of such data, where nodes act as entities of the graph and edges model the pairwise relationships between the function values at the nodes. Such graph-based data representation has led to the emerging field of graph signal processing (GSP) [5,6,7].
In some situations, the underlying network topology is known, but in many others the graph is not readily available. It is, however, feasible to infer the graph structure from a collection of nodal observations in order to capture the relationships between the different entities. This process is known as graph learning [8,9,10,11,12,13]. So far, a significant amount of literature on learning graph topology has been proposed, which is summarized in [11,12,13]. In particular, the GSP viewpoint provides a new technique for inferring the graph topology from a set of observations. GSP-based approaches can generally be categorized into three main groups. The first category makes assumptions about the graph signals by enforcing properties such as smoothness or sparsity [14,15,16]. Instead of smoothness/sparsity-based priors, the second category assumes that the graph signals are generated from a Laplacian-constrained Gaussian–Markov random field (GMRF) [17, 18]. The third category exploits diffusion models [9, 19] to learn the graph topology: the observed signals are modeled as the outcome of a diffusion process on the graph, where each node continuously influences its neighbors.
It should be noted that all the aforementioned graph topology inference works focus on the scenario where observations from all the nodes are available. However, in numerous pertinent scenarios, the observed graph signals may correspond only to a subset of the original graph nodes, while the remaining nodes are hidden. Neglecting these hidden nodes can drastically hinder the performance of graph topology inference methods. Consequently, some recent works have begun to address this issue, including Gaussian graphical model selection [20,21,22], linear Bayesian models [23], and nonlinear regression [24]. More recently, the problem of constructing a graph in the presence of hidden nodes has been investigated within the context of GSP [25,26,27]. Notably, two related works have proposed leveraging, respectively, a smoothness prior [5] or a stationarity prior [28, 29] to infer graph topologies from incomplete data [25, 26]. Another work has focused on estimating multilayer graphs in the presence of hidden nodes, assuming that the observed graph signals follow a GMRF model [27]. These three existing GSP-based graph learning methods with hidden nodes are limited to learning static or multilayer graphs.
Some real-world scenarios, however, call for time-varying generative models to capture the relationships between data variables. For example, this is the case when estimating time-varying brain functional connectivity from electroencephalography recordings (EEG) or resting-state functional magnetic resonance imaging (fMRI) [30]. Additionally, identifying temporal transitions in biological networks, such as protein, RNA, and DNA networks [31], and inferring relationships between stocks from financial market data [32] also exhibit a time-varying nature. To address the growing demand for understanding these time-varying graph structures, several approaches have been proposed that leverage prior assumptions about the evolutionary patterns of time-varying graphs. In a recent study [33], the authors proposed an efficient method for inferring time-varying topology, using Tikhonov regularization to ensure smooth temporal changes in edge weights and thereby capture the evolving nature of the graphs. To impose additional constraints on sparse graph changes, the authors in [34] introduced an \(l_{1}\) regularization term on the graph variation. Another time-varying graph learning work is described in [35], where the authors extend the graphical Lasso [36] to account for temporal variability. While the works in [33,34,35] did not explicitly consider scenarios involving hidden nodes, they served as inspiration for our research. We recognize the significance of collecting observations from related graphs and leveraging information across time-varying graphs to address the challenge of hidden nodes. However, it remains uncertain how existing algorithms can be adapted to measure graph similarity involving unobserved nodes.
Consequently, modeling the influence of hidden nodes in the context of time-varying graph learning becomes crucial. For a summary of the proposed method and related graph learning methods, please refer to Table 1.
Building on the preceding discussion, the primary objective of this paper is to address the inference problem of time-varying graphs in the presence of hidden nodes. Our two primary contributions are formulating this problem as a convex optimization problem and devising an algorithm to solve it effectively. Our method is predicated on the assumption that the observed signals are simultaneously smooth and stationary on the given graphs. While this assumption has proven successful in the field of static graph inference, a robust formulation for time-varying graph learning with hidden nodes has not yet been established. To fill this gap, the classic interpretations of the smoothness prior and the stationarity prior must be modified to account for the impact of hidden nodes. We first adopt a block matrix factorization methodology to revise the smoothness and stationarity assumptions. Then, we exploit the inherent low-rank and sparse patterns within the blocks associated with hidden nodes. These patterns enable the smooth evolution of graph edges, thereby capturing the temporal dynamics across the sequence of graphs. Furthermore, to fully leverage the characteristics of time-varying graphs, it is crucial to capture the similarity among graphs, accounting for both observed and hidden nodes. This is achieved through a similar column-sparsity pattern, which emerges from the similarity analysis of the graph at each time slot. We test the proposed approach on synthetic and real-world data; experimental results show its effectiveness.
The remainder of this paper is structured as follows. Section 2 provides a comprehensive review of fundamental concepts related to signals defined over graphs, as well as an overview of the associated graph learning methods. Section 3 introduces the time-varying graph learning problem with hidden nodes at hand. Section 4 proposes optimization frameworks to solve this problem. Section 5 is dedicated to the evaluation of the performance of our proposed method. Finally, we conclude the paper in Sect. 6.
Notations: The following notations will be used in this paper. All vectors are denoted by boldface lowercase letters, and matrices by boldface uppercase letters. We use calligraphic capital letters to denote sets. \(\mathbb {R}^{N\times N}\) denotes the set of matrices of size \(N\times N\) with nonnegative entries. For a vector \(\textbf{x}\), \(\mathbb {E}[\textbf{x}]\) represents its expected value. For a matrix \(\textbf{X}\), \(\Vert \textbf{X}\Vert _F\) represents the Frobenius norm, \(\Vert \textbf{X}\Vert _0\) the \(l_0\) norm, \(\Vert \textbf{X}\Vert _{\textrm{F},\textrm{off}}\) the Frobenius norm of \(\textbf{X}\) excluding the diagonal elements, and \(\Vert \textbf{X}\Vert _*\) the nuclear norm. \(\Vert \textbf{X}\Vert _{2,1}\) represents the \(l_{2,1}\) norm, which can be understood as a two-step process: one first obtains the \(l_2\) norm of each column of the matrix \(\textbf{X}\), and then computes the \(l_1\) norm of the resulting row vector. Moreover, \(\textrm{diag}(\cdot )\) is a diagonal matrix with its argument along the main diagonal, \(\mathrm{tr}(\cdot )\) is the trace of a matrix, \(\textbf{1}\) stands for the all-one vector, and \(\textbf{I}\) stands for the identity matrix. Finally, the minimization operator, the transpose, and the pseudoinverse are denoted by \(\min\), superscript \(\top\), and superscript \(^{\dagger }\), respectively.
2 Preliminaries
In this section, we first outline some basic GSP definitions. Then, we provide a concise overview of two pivotal models of graph signals, namely smooth graph signals and stationary graph signals. Building on these insights, antecedent works on the graph learning problem based on these two graph signal models are introduced. Finally, a general framework for learning time-varying graphs is briefly reviewed.
2.1 Basic definitions for GSP
An undirected and weighted graph \({\mathcal {G}}=({\mathcal {V}},{\mathcal {E}},\textbf{W})\) with N nodes is considered here, where \({\mathcal {V}}=\{1,\ldots ,N\}\) represents the set of nodes and \({\mathcal {E}}\subseteq {\mathcal {V}}\times {\mathcal {V}}\) is the set of edges. The weighted adjacency matrix \(\textbf{W}\in \mathbb {R}^{N\times N}\) is a symmetric matrix whose elements characterize the strength of the connections. We also assume that there are no self-loops or directed edges in the graph, which implies \(\textrm{diag}(\textbf{W})=\textbf{0}\). The (i, j)-th entry \(W_{ij}\) of the adjacency matrix is assigned a nonnegative value if \((i,j)\in {\mathcal {E}}\), where i and j represent two nodes. We utilize a vector \(\textbf{x}=[x_{1},\ldots , x_{N}]^\top \in \mathbb {R}^{N}\) to represent a graph signal, where \(x_{i}\) denotes the value measured at node i.
In graph theory, the graph Laplacian \(\textbf{L}\) is defined as \(\textbf{L}:=\textbf{D}-\textbf{W}\). The degree matrix \(\textbf{D}\) is a diagonal matrix that contains the degrees of the nodes along its diagonal, with entries \(D_{ii}=\sum _{j=1}^N W_{ij}\) and \(D_{ij}=0\) for \(i\ne j\). The matrix \(\textbf{L}\) can be decomposed as \(\textbf{L}=\mathbf {U\Lambda U}^\top\) due to its symmetry, where \(\textbf{U}=[\textbf{u}_{1},\ldots ,\textbf{u}_{N}]\in \mathbb {R}^{N\times N}\) is a matrix consisting of the eigenvectors of \(\textbf{L}\), and \(\mathbf {\Lambda }\in \mathbb {R}^{N\times N}\) is a diagonal matrix containing the corresponding eigenvalues arranged in increasing order. The graph shift operator (GSO) is an \({N\times N}\) square matrix \(\textbf{S}\) that captures the underlying topology of the graph. The entries of \(\textbf{S}\), denoted \(S_{ij}\), can only be nonzero if \(i=j\) or there exists an edge \((i,j)\in {\mathcal {E}}\) in the graph. The adjacency matrix [37] and the Laplacians [15] are popular choices for the GSO. Without loss of generality, \(\textbf{S}\) possesses a complete set of orthonormal eigenvectors and associated eigenvalues.
2.2 Graph signal models
2.2.1 Smooth graph signals
In the node domain, smoothness of graph signals refers to the tendency of the signal values at the two end nodes of strongly weighted edges to be similar. Typically, in the field of GSP, the total variation (TV) of a graph signal \(\textbf{x}\) is commonly interpreted as a smoothness measure, quantified through the quadratic form [5]
\(\textrm{TV}(\textbf{x})=\textbf{x}^\top \textbf{L}\textbf{x}. \qquad (1)\)
Intuitively, graph signals \(\textbf{x}\) are said to be smooth when the Laplacian quadratic form TV\((\textbf{x})\) is small. In particular, the smaller the values of TV\((\textbf{x})\), the smoother the graph signals.
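As a concrete illustration, the following minimal sketch (assuming numpy; the toy 3-node chain graph is our own example, not from the paper) evaluates the Laplacian quadratic form for a constant and an alternating signal:

```python
import numpy as np

# Toy 3-node chain graph: edges (0,1) and (1,2) with unit weights.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W   # combinatorial Laplacian L = D - W

def total_variation(x, L):
    """Laplacian quadratic form TV(x) = x^T L x."""
    return float(x @ L @ x)

x_smooth = np.array([1.0, 1.0, 1.0])   # constant signal
x_rough = np.array([1.0, -1.0, 1.0])   # alternating signal

print(total_variation(x_smooth, L))  # 0.0  (perfectly smooth)
print(total_variation(x_rough, L))   # 8.0  (large TV: not smooth)
```

The constant signal attains the minimum TV of zero, while the sign-alternating signal pays a large penalty, matching the intuition that smaller TV means smoother signals.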
When it comes to the graph learning problem, the smoothness property is widely used as prior information. Considering that the matrix \(\textbf{X}=[\textbf{x}_{1},\ldots ,\textbf{x}_{K}]\) contains K observations, a general graph learning framework is proposed in the works of [14, 15]:
\(\min _{\textbf{L}\in {\mathcal {L}}}\ \mathrm{tr}\big (\textbf{X}^\top \textbf{L}\textbf{X}\big )+f(\textbf{L}). \qquad (2)\)
The penalty function \(f(\textbf{L})=\alpha \Vert \textbf{L}\Vert _F^2-\beta \textbf{1}^\top \log (\textrm{diag}(\textbf{L}))\) is employed to prevent a trivial solution and to control the sparsity of the learned graph. The parameters \(\alpha\) and \(\beta\) are constants. The term \(\log (\textrm{diag}(\textbf{L}))\) is a two-step process: first, the diagonal elements of the matrix \(\textbf{L}\) are obtained using the diag operation, and then the log operation is applied element-wise to the resulting vector. The learned Laplacian matrix has to lie in the set of valid combinatorial Laplacians, defined as \({\mathcal {L}}:=\{\textbf{L}: L_{ij}\le {0},i\ne j; \textbf{L}=\textbf{L}^\top ; \textbf{L}\textbf{1}=\textbf{0}; \textbf{L}\succeq {0}\}\). This constraint emphasizes that \(\textbf{L}\) is a symmetric positive semidefinite matrix. The smoothness of all observed signals over the selected graph is quantified by the first term of equation (2).
2.2.2 Stationary graph signals
Given an undirected graph \({\mathcal {G}}\), the GSO \(\textbf{S}\) is obviously a symmetric matrix. A linear shift-invariant graph filter \(\textbf{H}\in \mathbb {R}^{N\times N}\) can be written as \(\textbf{H}=\sum _{l=0}^{L-1} h_{l}\textbf{S}^{l}\), where \(\textbf{h}=[h_{0},\ldots ,h_{L-1}]^\top\) is a vector of graph filter coefficients and \(L-1\) denotes the filter degree. Since \(\textbf{H}\) is a polynomial of \(\textbf{S}\), it readily follows that \(\textbf{H}\) is also symmetric. For a given input signal \(\textbf{s}\), the output of the graph filter is simply defined as \(\mathbf {x = Hs}\). Assuming that \(\textbf{s}\) is a white signal following a normalized i.i.d. Gaussian distribution with zero mean, the output of the filter \(\textbf{H}\) is stationary on the graph, because the following properties are satisfied:
\(\mathbf {m_x}=\mathbb {E}[\textbf{x}]=\textbf{H}\,\mathbb {E}[\textbf{s}]=\textbf{0},\qquad \textbf{C}=\mathbb {E}\big [\textbf{x}\textbf{x}^\top \big ]=\textbf{H}\,\mathbb {E}\big [\textbf{s}\textbf{s}^\top \big ]\textbf{H}^\top =\textbf{H}\textbf{H}^\top , \qquad (3)\)
where \(\mathbf {m_x}\) denotes the expected value and \(\textbf{C}\) the covariance matrix of the graph signal \(\textbf{x}\). Moreover, since \({\mathcal {G}}\) is undirected, both \(\textbf{S}\) and \(\textbf{C}\) are symmetric. From (3), it becomes apparent that the two diagonalizable matrices \(\textbf{S}\) and \(\textbf{C}\) share common eigenvectors \(\textbf{U}\) in the spectral domain [38]. As a result, the matrices \(\textbf{S}\) and \(\textbf{C}\) commute.
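This construction can be checked numerically. The sketch below (numpy; the small graph and filter coefficients are illustrative choices of our own) builds a polynomial graph filter and verifies that the resulting covariance commutes with the GSO:

```python
import numpy as np

# GSO: Laplacian of a small 4-node graph.
W = np.array([[0., 1., 1., 0.],
              [1., 0., 0., 1.],
              [1., 0., 0., 1.],
              [0., 1., 1., 0.]])
S = np.diag(W.sum(axis=1)) - W

# Graph filter H = h0*I + h1*S + h2*S^2, a polynomial in S.
h = [1.0, -0.5, 0.1]
H = sum(c * np.linalg.matrix_power(S, l) for l, c in enumerate(h))

# For white input s with E[s] = 0 and E[s s^T] = I, the filter output
# x = H s has covariance C = H H^T = H^2 (H is symmetric).
C = H @ H.T

# C is itself a polynomial in S, so C and S commute and share eigenvectors.
print(np.allclose(C @ S, S @ C))  # True
```

Because C is a polynomial in S, the commutativity holds exactly for the true covariance; with a finite-sample covariance it would only hold approximately.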
Thus, the task of learning the underlying graph from stationary graph signals is equivalent to inferring its shift operator \(\textbf{S}\). More precisely, under the general assumption of graph sparsity, the graph learning problem from stationary graph signals can be formulated as [12]
\(\min _{\textbf{S}\in {\mathcal {S}}}\ \Vert \textbf{S}\Vert _0 \quad \mathrm{s.t.}\quad \textbf{C}\textbf{S}=\textbf{S}\textbf{C}, \qquad (4)\)
where the set \({\mathcal {S}}\) enforces the estimated matrix \(\textbf{S}\) to satisfy certain specified properties, and \(\Vert \cdot \Vert _0\) promotes sparse solutions for \(\textbf{S}\). The equality constraint enforces the commutativity of the GSO and the covariance matrix.
2.2.3 Timevarying graph learning
Time-varying graph learning aims to learn a series of graphs \(\textbf{L}^{(1)}, \ldots , \textbf{L}^{(T)}\) from the graph signals \(\textbf{X}^{(1)}, \ldots , \textbf{X}^{(T)}\) collected during T time periods, where \(\textbf{X}^{(t)}=[\textbf{x}_{1}^{(t)},\ldots ,\textbf{x}_{K}^{(t)}]\in \mathbb {R}^{N\times {K}}\) contains K observations in time window t. In this case, slowly changing time-varying graphs can be selected by solving [33]
\(\min _{\{\textbf{L}^{(t)}\in {\mathcal {L}}\}_{t=1}^{T}}\ \sum _{t=1}^{T}\mathrm{tr}\big ((\textbf{X}^{(t)})^\top \textbf{L}^{(t)}\textbf{X}^{(t)}\big )+\eta \sum _{t=2}^{T} f\big (\textbf{L}^{(t)}-\textbf{L}^{(t-1)}\big ), \qquad (5)\)
where the term \(f(\textbf{L}^{(t)}-\textbf{L}^{(t-1)})\) denotes a regularization term that captures the temporal change in graph edges, and the parameter \(\eta\) controls the temporal sparseness.
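As a small illustration of the temporal term, the following sketch (numpy; the toy Laplacians are our own example) evaluates one common Tikhonov-style choice of \(f(\cdot )\), the squared Frobenius norm of the edge change:

```python
import numpy as np

def temporal_penalty(L_curr, L_prev):
    """Squared Frobenius-norm penalty on edge changes between graphs."""
    return float(np.linalg.norm(L_curr - L_prev, 'fro') ** 2)

# Two 2-node Laplacians: same topology, but the edge weight doubles.
L_prev = np.array([[1.0, -1.0], [-1.0, 1.0]])
L_curr = np.array([[2.0, -2.0], [-2.0, 2.0]])

print(temporal_penalty(L_curr, L_prev))  # 4.0
```

A small penalty value encourages consecutive Laplacians to stay close, which is exactly what the regularizer in (5) rewards.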
3 Timevarying graph learning with hidden nodes
In this section, we consider situations where the graph signals are observed only at a subset of nodes during the data collection process. Specifically, Sect. 3.1 analyzes the latent nodal variables and their influence on the time-varying graph, accomplished through a block matrix factorization methodology applied to the original matrices. Subsequently, we state the time-varying graph topology inference problem with hidden nodes in Sect. 3.2.
3.1 Timevarying graph model with hidden nodes
Formally, we consider a sequence of graph signals partitioned into non-overlapping time windows \(\{\textbf{X}^{(1)},\ldots ,\textbf{X}^{(T)}\}\). In this paper, we consider an observation model with hidden nodes where the observed graph signals correspond to a subset of \(\textbf{X}^{(t)}\), while the graph-signal values residing on the remaining nodes are never observed. We partition the set of nodes \({\mathcal {V}}\) into disjoint subsets \({\mathcal {O}}\) and \({\mathcal {H}}\), where \({\mathcal {O}}\) is the set of observable nodes and \({\mathcal {H}}={\mathcal {V}}{\setminus } {\mathcal {O}}\) is the set of hidden nodes. In particular, we set \({\mathcal {O}}=\{1,\ldots ,O\}\) with cardinality \(\left| {\mathcal {O}}\right| =O\) and \({\mathcal {H}}=\{O+1,\ldots ,N\}\) with cardinality \(\left| {\mathcal {H}}\right| =H=N-O\). We represent the graph-signal values of the observed nodes by the \(O\times K\) submatrix \(\textbf{X}_{O}^{(t)}\) associated with the first O rows of \(\textbf{X}^{(t)}\). As described in Sect. 2, the sample covariance matrices and GSO corresponding to the observed graph signals are given by \(\hat{\textbf{C}}_{O}^{(t)}\) and \(\textbf{S}_{O}^{(t)}\), respectively. To this end, for each time-slot graph, the matrices \(\textbf{S}^{(t)}\) and \(\hat{\textbf{C}}^{(t)}\) can be described in block structure as
\(\textbf{S}^{(t)}=\begin{bmatrix} \textbf{S}_{O}^{(t)} & \textbf{S}_{OH}^{(t)}\\ (\textbf{S}_{OH}^{(t)})^\top & \textbf{S}_{H}^{(t)} \end{bmatrix},\qquad \hat{\textbf{C}}^{(t)}=\begin{bmatrix} \hat{\textbf{C}}_{O}^{(t)} & \hat{\textbf{C}}_{OH}^{(t)}\\ (\hat{\textbf{C}}_{OH}^{(t)})^\top & \hat{\textbf{C}}_{H}^{(t)} \end{bmatrix}. \qquad (6)\)
Here, the submatrices \(\textbf{S}_{O}^{(t)}, \textbf{S}_{OH}^{(t)}, \textbf{S}_{H}^{(t)}\) specify the dependencies among the observed nodes, between the observed and hidden nodes, and among the hidden nodes, respectively. In particular, the sample covariance of the observed graph signals is given by \(\hat{\textbf{C}}_{O}^{(t)}=\frac{1}{K}\textbf{X}_{O}^{(t)}(\textbf{X}_{O}^{(t)})^\top\). Since the graphs are undirected, both matrices \(\textbf{S}^{(t)}\) and \(\hat{\textbf{C}}^{(t)}\) are symmetric, i.e., \(\textbf{S}^{(t)}=(\textbf{S}^{(t)})^\top\) and \(\hat{\textbf{C}}^{(t)}=(\hat{\textbf{C}}^{(t)})^\top\). Accordingly, the off-diagonal blocks satisfy \(\textbf{S}_{HO}^{(t)}=(\textbf{S}_{OH}^{(t)})^\top\) and \(\hat{\textbf{C}}_{HO}^{(t)}=(\hat{\textbf{C}}_{OH}^{(t)})^\top\).
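The block partition can be written down directly. The sketch below (numpy; the sizes and the random symmetric matrix are illustrative stand-ins, not data from the paper) extracts the observed, cross, and hidden blocks and checks the symmetry relation between the off-diagonal blocks:

```python
import numpy as np

N, O = 6, 4                    # total and observed nodes; H = N - O hidden
rng = np.random.default_rng(1)

# Random symmetric stand-in for the full GSO S^(t).
M = rng.random((N, N))
S = (M + M.T) / 2

S_O = S[:O, :O]      # dependencies among observed nodes
S_OH = S[:O, O:]     # dependencies between observed and hidden nodes
S_H = S[O:, O:]      # dependencies among hidden nodes

# Symmetry of S implies the lower-left block equals S_OH^T.
print(np.allclose(S[O:, :O], S_OH.T))  # True
```

In practice only the signals on the first O nodes are available, so only \(\hat{\textbf{C}}_{O}^{(t)}\) (the top-left covariance block) can actually be computed from data.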
As we can see, the block structure of the matrices \(\textbf{S}^{(t)}\) and \(\hat{\textbf{C}}^{(t)}\) in (6) motivates the search for optimal time-varying graphs when considering the existence of hidden nodes. Next, the problem of time-varying graph learning with hidden nodes is introduced.
3.2 Problem statement
Given the known nodal subset \({\mathcal {O}}\subset {\mathcal {V}}\) and the matrices \(\{\textbf{X}_{O}^{(t)}\}_{t=1}^{T}\) collecting the graph-signal values of the observed nodes arising from unknown time-varying graphs \(\{{\mathcal {G}}^{(t)}\}_{t=1}^{T}\), our objective is to learn the time-varying graphs while accounting for the presence of hidden nodes, which is tantamount to learning the GSO sequence \(\{\textbf{S}_{O}^{(t)}\}_{t=1}^{T}\) from \(\{\textbf{X}_{O}^{(t)}\}_{t=1}^{T}\), provided that the following assumptions hold:

1. The number of hidden nodes is far smaller than the number of observed nodes, i.e., \(H\ll {O}\);
2. The full observations \(\{\textbf{X}^{(t)}\}_{t=1}^T\) satisfy the prior assumption that they are simultaneously smooth and stationary in \(\textbf{S}^{(t)}\);
3. The number of graph edges permitted to change between consecutive graphs is limited according to a particular function \(\varvec{\psi }(\textbf{S}^{(t)}-\textbf{S}^{(t-1)})\), i.e., a prior that graph edges change smoothly in time.
The task of learning the time-varying graphs encoded in the matrices \(\{\textbf{S}_{O}^{(t)}\}_{t=1}^{T}\) is challenging due to the absence of observations from nodes in the set \({\mathcal {H}}\). The above three assumptions render the problem more tractable. Firstly, assumption (1) guarantees the availability of information for the majority of nodes. Secondly, assumption (2) establishes a well-defined relationship between the graph signals and the unknown time-varying graphs. Lastly, assumption (3) enforces that the graph edges change smoothly over time, providing temporal relations that may exist in time-varying graphs.
4 Proposed optimization framework
In this section, the influence of hidden nodes on the smoothness prior and on the stationarity prior is analyzed in turn. Following this, an optimization framework is designed to address the time-varying graph learning problem with hidden nodes, considering the scenario where the observed graph signals are both smooth and stationary.
4.1 Influence of hidden nodes on smoothness prior
The smoothness of signals on time-varying graphs can be computed as \(\frac{1}{K}\mathrm{tr}\big ((\textbf{X}^{(t)})^\top \textbf{L}^{(t)}\textbf{X}^{(t)}\big )\). In this part, we focus on \(\hat{\textbf{C}}^{(t)}=\frac{1}{K}\textbf{X}^{(t)}(\textbf{X}^{(t)})^\top\), so that the TV of the graph signals is equal to \(\mathrm{tr}(\hat{\textbf{C}}^{(t)}\textbf{L}^{(t)})\). However, the existence of hidden nodes restricts our access solely to the observed sample covariance matrices \(\hat{\textbf{C}}_{O}^{(t)}\). Regarding the block structure of the matrices \(\hat{\textbf{C}}^{(t)}\) and \(\textbf{S}^{(t)}\) defined in (6), the smoothness of signals within the context of time-varying graphs can be rewritten as
\(\mathrm{tr}\big (\hat{\textbf{C}}^{(t)}\textbf{L}^{(t)}\big )=\mathrm{tr}\big (\hat{\textbf{C}}_{O}^{(t)}\textbf{L}_{O}^{(t)}\big )+2\,\mathrm{tr}\big (\hat{\textbf{C}}_{OH}^{(t)}(\textbf{L}_{OH}^{(t)})^\top \big )+\mathrm{tr}\big (\hat{\textbf{C}}_{H}^{(t)}\textbf{L}_{H}^{(t)}\big )=\mathrm{tr}\big (\hat{\textbf{C}}_{O}^{(t)}\textbf{L}_{O}^{(t)}\big )+2\,\mathrm{tr}\big (\textbf{P}^{(t)}\big )+r^{(t)}, \qquad (7)\)
where the matrices \(\textbf{P}^{(t)}:=\hat{\textbf{C}}_{OH}^{(t)}(\textbf{L}_{OH}^{(t)})^\top \in \mathbb {R}^{O\times {O}}\), \(\textbf{R}^{(t)}:=\hat{\textbf{C}}_H^{(t)}\textbf{L}_H^{(t)}\in \mathbb {R}^{H\times {H}}\), and the scalar \(r^{(t)}:=\mathrm{tr}(\textbf{R}^{(t)})\) are nonnegative variables. The first equation in (7) represents the blockwise smoothness of the graph signals. However, most of the submatrices related to the hidden nodes are unknown. By lifting the matrices \(\textbf{P}^{(t)}\) and \(\textbf{R}^{(t)}\), we circumvent this challenge and solve the time-varying topology inference as a convex problem.
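The blockwise expansion of the smoothness term can be verified numerically. In the sketch below (numpy; the random symmetric matrices are stand-ins for \(\hat{\textbf{C}}^{(t)}\) and \(\textbf{L}^{(t)}\) and need not be valid covariances or Laplacians, since the identity is purely algebraic):

```python
import numpy as np

rng = np.random.default_rng(3)
N, O = 5, 3

# Random symmetric stand-ins for C-hat and L.
M = rng.standard_normal((N, N)); C = M @ M.T
B = rng.standard_normal((N, N)); Lap = B + B.T

C_O, C_OH, C_H = C[:O, :O], C[:O, O:], C[O:, O:]
L_O, L_OH, L_H = Lap[:O, :O], Lap[:O, O:], Lap[O:, O:]

P = C_OH @ L_OH.T            # P^(t) = C_OH (L_OH)^T
r = np.trace(C_H @ L_H)      # r^(t) = tr(R^(t)),  R^(t) = C_H L_H

# Blockwise expansion of the smoothness term tr(C L):
lhs = np.trace(C @ Lap)
rhs = np.trace(C_O @ L_O) + 2.0 * np.trace(P) + r
print(np.isclose(lhs, rhs))  # True
```

The factor of two appears because, by symmetry, the two cross blocks contribute the same trace.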
Notice that the matrices \(\textbf{L}_{O}^{(t)}\) belong to the set \(\bar{{\mathcal {L}}}:=\{\textbf{L}: L_{ij}\le {0}, i\ne j; \textbf{L}=\textbf{L}^\top ; \textbf{L}\textbf{1}\ge \textbf{0}; \textbf{L}\succeq {0}\}\), which differs from the set of valid combinatorial Laplacians \({\mathcal {L}}\). The only difference between the two sets is that the condition \(\textbf{L}\textbf{1}=\textbf{0}\) is replaced with \(\textbf{L}\textbf{1}\ge \textbf{0}\), while the others remain unchanged. The existence of links between the elements in \({\mathcal {O}}\) and the elements in \({\mathcal {H}}\) gives rise to nonzero (negative) entries in \(\textbf{L}_{OH}^{(t)}\); as a result, the sum of the absolute values of the off-diagonal elements of \(\textbf{L}_{O}^{(t)}\) can be smaller than the associated diagonal elements (which account for the links in both \({\mathcal {O}}\) and \({\mathcal {H}}\)). Therefore, \(\textbf{L}_{O}^{(t)}\) is not a combinatorial Laplacian.
Indeed, while tackling time-varying graph topology inference from smooth observations with hidden nodes, we encounter the challenge that the \(\textbf{L}_{O}^{(t)}\) are not Laplacians themselves. To circumvent this issue, we turn to estimating \(\tilde{\textbf{L}}_{O}^{(t)}:=\textrm{diag}(\textbf{A}_{O}^{(t)}\textbf{1})-\textbf{A}_{O}^{(t)}\) rather than \(\textbf{L}_{O}^{(t)}\), where \(\textbf{A}_{O}^{(t)}\) represents the adjacency matrix of the observed nodes in the t-th time slot. With this consideration in mind, the matrices \(\tilde{\textbf{L}}_{O}^{(t)}\) are proper combinatorial Laplacians satisfying the conditions of the valid set of graph Laplacians \({\mathcal {L}}\). We formulate the relation between \(\textbf{L}_{O}^{(t)}\) and \(\tilde{\textbf{L}}_{O}^{(t)}\) as \(\tilde{\textbf{L}}_{O}^{(t)}=\textbf{L}_{O}^{(t)}-\textbf{D}_{OH}^{(t)}\). We use the degree matrices \(\textbf{D}_{OH}^{(t)}:= \textrm{diag}(\textbf{A}_{OH}^{(t)}\textbf{1})\in \mathbb {R}^{O\times {O}}\) to represent the edges existing between the observed and hidden nodes. By leveraging the matrices \(\tilde{\textbf{L}}_{O}^{(t)}\), we replace the smoothness penalty in (7) with
\(\mathrm{tr}\big (\hat{\textbf{C}}^{(t)}\textbf{L}^{(t)}\big )=\mathrm{tr}\big (\hat{\textbf{C}}_{O}^{(t)}\tilde{\textbf{L}}_{O}^{(t)}\big )+2\,\mathrm{tr}\big (\tilde{\textbf{P}}^{(t)}\big )+r^{(t)}, \qquad (8)\)
where \(\tilde{\textbf{P}}^{(t)}:=\hat{\textbf{C}}_{O}^{(t)}\textbf{D}_{OH}^{(t)}/2+\textbf{P}^{(t)}\). Under assumption (1), it is obvious that the matrices \(\textbf{P}^{(t)}\) are low-rank, with rank(\(\textbf{P}^{(t)}\))\(\le {H}\ll {O}\). Furthermore, the matrices \(\textbf{D}_{OH}^{(t)}\) are low-rank if the graphs are sparse. Thus, it can be inferred that \(\tilde{\textbf{P}}^{(t)}\) also exhibits a low-rank structure.
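The low-rank claim follows from the factored form of \(\textbf{P}^{(t)}\): a product of an \(O\times H\) matrix and an \(H\times O\) matrix has rank at most H. A quick numerical check (numpy; the random matrices are generic stand-ins for the cross blocks, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
O, H = 20, 3                 # many observed nodes, few hidden (assumption 1)

C_OH = rng.standard_normal((O, H))   # stand-in for the cross-covariance block
L_OH = rng.standard_normal((O, H))   # stand-in for the cross-Laplacian block

P = C_OH @ L_OH.T                    # O x O matrix, but rank at most H
print(np.linalg.matrix_rank(P))      # at most H (generically exactly H)
```

With \(H\ll O\), the \(O\times O\) matrix \(\textbf{P}^{(t)}\) is therefore strongly rank-deficient, which is the structure the later rank and nuclear-norm penalties exploit.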
4.2 Influence of hidden nodes on stationarity prior
Having evaluated the impact of the smoothness assumption on the time-varying graph learning problem with hidden nodes, we proceed to assume that the graph signals are stationary on the whole graphs. This graph signal model leads to the conclusion that the eigenvectors of \(\textbf{C}^{(t)}\) and \(\textbf{S}^{(t)}\) are identical, so that the equation \(\textbf{C}^{(t)}\textbf{S}^{(t)}=\textbf{S}^{(t)}\textbf{C}^{(t)}\) holds. To this end, we leverage the block structure of the matrices \(\textbf{C}^{(t)}\) and \(\textbf{S}^{(t)}\), with a specific focus on the upper-left block on both sides of the equation \(\textbf{C}^{(t)}\textbf{S}^{(t)}=\textbf{S}^{(t)}\textbf{C}^{(t)}\), to model the impact of hidden nodes on the stationarity prior:
\(\textbf{C}_{O}^{(t)}\textbf{S}_{O}^{(t)}+\textbf{C}_{OH}^{(t)}(\textbf{S}_{OH}^{(t)})^\top =\textbf{S}_{O}^{(t)}\textbf{C}_{O}^{(t)}+\textbf{S}_{OH}^{(t)}(\textbf{C}_{OH}^{(t)})^\top . \qquad (9)\)
Equation (9) reveals that we cannot simply focus on \(\textbf{C}_{O}^{(t)}\textbf{S}_{O}^{(t)}=\textbf{S}_{O}^{(t)}\textbf{C}_{O}^{(t)}\) when hidden nodes are present; we also need to account for the associated terms \(\textbf{C}_{OH}^{(t)}(\textbf{S}_{OH}^{(t)})^\top\) and \(\textbf{S}_{OH}^{(t)}(\textbf{C}_{OH}^{(t)})^\top\). Furthermore, we set \(\bar{\textbf{P}}^{(t)}=\textbf{C}_{OH}^{(t)}(\textbf{S}_{OH}^{(t)})^\top\), similar to the definition of \(\textbf{P}^{(t)}\); the key distinction lies in our use of the matrices \(\textbf{S}_{OH}^{(t)}\) instead of the Laplacians \(\textbf{L}_{OH}^{(t)}\). Under this setting, equation (9) can be formulated as
\(\textbf{C}_{O}^{(t)}\textbf{S}_{O}^{(t)}+\bar{\textbf{P}}^{(t)}=\textbf{S}_{O}^{(t)}\textbf{C}_{O}^{(t)}+(\bar{\textbf{P}}^{(t)})^\top . \qquad (10)\)
Similar to the analysis of \(\tilde{\textbf{P}}^{(t)}\) in Sect. 4.1, the matrices \(\bar{\textbf{P}}^{(t)}\) are also low-rank. We will exploit the low-rank structure of the matrices \(\bar{\textbf{P}}^{(t)}\) and \(\tilde{\textbf{P}}^{(t)}\) in our formulation.
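The upper-left block identity behind the stationarity analysis is purely algebraic and can be checked for any pair of symmetric matrices, whether or not they commute. A numpy sketch with random symmetric stand-ins (illustrative, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
N, O = 5, 3

# Random symmetric stand-ins for C^(t) and S^(t).
M = rng.standard_normal((N, N)); C = M + M.T
B = rng.standard_normal((N, N)); S = B + B.T

C_O, C_OH = C[:O, :O], C[:O, O:]
S_O, S_OH = S[:O, :O], S[:O, O:]

P_bar = C_OH @ S_OH.T        # bar-P^(t) = C_OH (S_OH)^T

# Upper-left blocks of C S and S C expand to:
print(np.allclose((C @ S)[:O, :O], C_O @ S_O + P_bar))    # True
print(np.allclose((S @ C)[:O, :O], S_O @ C_O + P_bar.T))  # True
```

When C and S actually commute, equating these two block expansions recovers the relation \(\textbf{C}_{O}\textbf{S}_{O}+\bar{\textbf{P}}=\textbf{S}_{O}\textbf{C}_{O}+\bar{\textbf{P}}^\top\).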
4.3 Smoothness prior versus stationarity prior
Suppose we are given two datasets, \(\textbf{X}_1\) and \(\textbf{X}_2\), each containing an equal number of graph signals. Specifically, we know that the signals in \(\textbf{X}_1\) are smooth on the graph, while the signals in \(\textbf{X}_2\) are stationary on the graph. Based on this information, we are able to identify the underlying graph, and it is of interest to see which prior leads to a better topology inference result. Generally, graph smoothness is the more lenient assumption, only requiring the TV of the observed graph-signal values to be small. The stationarity-based method tends to outperform the smoothness-based one, as its stronger prior significantly restricts the GSO. Meanwhile, there may arise instances where the observations are both stationary and smooth on the graph; more precisely, the covariance matrix of the observations is diagonalized by the eigenvectors of the graph Laplacian and the graph signals are low-frequency. In such settings, the two graph recovery methods can be combined to enhance recovery performance, which is explored in the subsequent subsection.
4.4 Topology inference based on smoothness prior and stationarity prior
Taking assumption (3) into account, the task of learning time-varying graphs in the presence of hidden nodes involves acquiring a sequence of graphs \(\{\textbf{S}_{O}^{(t)}\}_{t=1}^T\) from the observed graph signals \(\textbf{X}_{O}^{(1)},\ldots ,\textbf{X}_{O}^{(T)}\) collected during T time periods. The task specifically concentrates on the scenario where the GSO is represented by the Laplacian matrix; namely, our ultimate target is to infer \(\{\textbf{L}_{O}^{(t)}\}_{t=1}^T\). We assume that the observed signals are both smooth and stationary on the unknown time-varying graphs. As a result, the smoothness penalty described in (8) and the commutativity constraint accounting for the stationarity equation in (10) can be jointly considered to approach the time-varying graph learning problem. More specifically, this problem can be formulated via the ensuing objective function
where \(\gamma _{*}\) and \(\eta\) are tuning parameters. The term \(\Vert \tilde{\textbf{L}}_{O}^{(t)}\Vert _{F,off}^2\) offers a handle on the level of sparsity. The penalty function \(\varvec{\psi }(\cdot )\) limits the number of edge changes between consecutive graphs to a small value, and we set \(\varvec{\psi }(\tilde{\textbf{L}}_{O}^{(t)}-\tilde{\textbf{L}}_{O}^{(t-1)})=\Vert \tilde{\textbf{L}}_{O}^{(t)}-\tilde{\textbf{L}}_{O}^{(t-1)}\Vert _{F}^2\). The first constraint ensures that the TV of the graph signals is nonnegative. The equality constraint enforces the commutativity of the Laplacians and the covariance matrices when considering the presence of hidden nodes. The two rank constraints capture the low-rankness of \(\tilde{\textbf{P}}^{(t)}\) and \(\bar{\textbf{P}}^{(t)}\).
In most instances, it is not feasible to obtain the true covariance \(\textbf{C}_{O}^{(t)}\). Therefore, we resort to the sample covariance matrices \(\hat{\textbf{C}}_{O}^{(t)}\) and relax the stationarity constraint to guarantee that the left and right terms of the original equation (10) are roughly, though not exactly, equal. It is worth noting that under this more lenient condition, \(\bar{\textbf{P}}^{(t)}\) and \(\textbf{P}^{(t)}\) are equivalent. In such circumstances, we focus on rank(\(\tilde{\textbf{P}}^{(t)}\)) only and exploit the nuclear norm to capture the low-rank structure of the matrices \(\tilde{\textbf{P}}^{(t)}\). To this end, we reformulate the optimization objective function (11) as
where the nuclear norm penalty \(\Vert \tilde{\textbf{P}}^{(t)}\Vert _{*}\) is employed to encourage low-rank solutions by favoring matrices with sparse singular values. The non-negative constant \(\epsilon\) is an essential parameter that characterizes the accuracy of the sample covariance. Its value is inherently related to the amount of noise and the total number of samples K, and it serves as an indicator of the accuracy and faithfulness of the estimated covariance.
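As a concrete illustration of the penalty, the nuclear norm is simply the sum of singular values, the standard convex surrogate for rank (a generic sketch, not the paper's solver):

```python
import numpy as np

def nuclear_norm(M):
    """Sum of singular values of M; matrices of low rank have few
    nonzero singular values, so penalizing this sum favors them."""
    return float(np.sum(np.linalg.svd(M, compute_uv=False)))
```

A rank-one matrix such as \([3, 4]^\top [1, 0]\) has a single nonzero singular value (here 5), so its nuclear norm equals its spectral norm, whereas full-rank matrices accumulate contributions from every singular value.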
Based on the previous analysis, the matrices \(\tilde{\textbf{P}}^{(t)}\) are not only inseparable from the product of the matrices \(\hat{\textbf{C}}_{O}^{(t)}\) and \(\textbf{D}_{OH}^{(t)}\) but also related to the matrices \(\textbf{P}^{(t)}\). Recalling that the diagonals of \(\textbf{D}_{OH}^{(t)}\) are sparse, it is obvious that \(\hat{\textbf{C}}_{O}^{(t)}\textbf{D}_{OH}^{(t)}\) are matrices with several zero columns. More precisely, assumption (1) reveals the presence of a column-sparse structure in the matrices \(\tilde{\textbf{P}}^{(t)}\). However, the rank constraint fails to preserve this desired column-sparsity. Following the classical approach in the literature, an efficient way to circumvent this issue is to replace the nuclear norm with the group Lasso penalty. This penalty not only reduces the number of nonzero columns but also promotes low-rank solutions.
To further improve the performance of graph topology inference, we leverage the aforementioned column-sparsity regularization. Taking this into account, we propose a convex optimization problem for time-varying graph learning with hidden nodes
Assumption (3) guarantees the similarity of temporally adjacent graphs. To exploit this property, we construct a tall matrix consisting of the matrices \(\tilde{\textbf{P}}^{(t)}\) and \(\tilde{\textbf{P}}^{(t-1)}\). By applying the \(l _{2,1}\) norm to this tall matrix, we capture and preserve the desired column-sparsity characteristic. This approach effectively leverages the temporal similarity between adjacent graphs, ensuring that columns with nonzero entries are likely to be consistently positioned across the varying matrices \(\tilde{\textbf{P}}^{(t)}\). Considering this additional structure is particularly important, as it helps improve the estimation of \(\tilde{\textbf{P}}^{(t)}\) and results in better recovery of \(\tilde{\textbf{L}}_{O}^{(t)}\). The effectiveness of the formulation (13) in promoting the desired column-sparsity pattern is demonstrated through the experimental results in Sect. 5.3.
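The stacking-plus-\(l_{2,1}\) construction can be sketched in a few lines of NumPy (function names are ours): each column of the tall matrix is penalized once, jointly across the two time slots, so zeroing a column zeroes it in both slots at the same node position.

```python
import numpy as np

def l21_norm(M):
    """l_{2,1} norm: sum of the Euclidean norms of the columns of M."""
    return float(np.sum(np.linalg.norm(M, axis=0)))

def temporal_group_penalty(P_t, P_prev):
    """Group-Lasso penalty on the tall matrix [P^(t); P^(t-1)].

    A column is penalized through its joint norm over both time slots,
    which pushes zero columns to occupy the same positions across
    consecutive graphs (the desired column-sparsity pattern)."""
    return l21_norm(np.vstack([P_t, P_prev]))
```

Note that, column-wise, concentrating energy in few columns is cheaper than spreading it: for the same total Frobenius norm, a matrix with a single nonzero column has a smaller \(l_{2,1}\) norm than one with many small columns.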
We solve the optimization problem (13) by adopting an alternating minimization scheme. To find a numerically efficient solution, we decouple (13) into three simpler optimization problems. Specifically, with \(m = 0,\ldots , M-1\) being the iteration index, we initialize two variables \(\tilde{\textbf{P}}^{(t),(m)}\) and \(r^{(t),(m)}\). At the first step, for the given \(\tilde{\textbf{P}}^{(t),(m)}\) and \(r^{(t),(m)}\), we solve the following optimization problem with respect to \(\tilde{\textbf{L}}_{O}^{(t)}\)
At the second step, we fix \(r^{(t),(m)}\) and leverage the estimate \(\tilde{\textbf{L}}_{O}^{(t),(m+1)}\) from the previous step to optimize the objective function with respect to \(\tilde{\textbf{P}}^{(t),(m+1)}\), which leads to the following optimization problem
At the last step, using the \(\tilde{\textbf{L}}_{O}^{(t),(m+1)}\) and \(\tilde{\textbf{P}}^{(t),(m+1)}\) obtained in the first two steps, we solve the following convex optimization problem with respect to \(r^{(t),(m+1)}\)
We alternate between the three steps outlined in (14), (15) and (16) to obtain the final solution for the optimization problem described in (13). We generally observe convergence within a few iterations. The algorithm is summarized in Algorithm 1.
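The outer loop of the scheme can be sketched as a generic three-block driver; the three `step_*` callables stand in for the CVX subproblem solvers of (14)-(16), and the stopping rule on successive Laplacian iterates is our choice, not the paper's:

```python
import numpy as np

def alternating_minimization(step_L, step_P, step_r, L0, P0, r0,
                             max_iter=50, tol=1e-6):
    """Alternate the three block updates until the Laplacian iterate
    stabilizes. Each step_* solves one subproblem with the other
    blocks held fixed (placeholders for the convex solvers)."""
    L, P, r = L0, P0, r0
    for m in range(max_iter):
        L_new = step_L(P, r)          # cf. (14): update the Laplacian block
        P_new = step_P(L_new, r)      # cf. (15): update the P block
        r_new = step_r(L_new, P_new)  # cf. (16): update r
        # relative change of the Laplacian iterate as a convergence proxy
        if np.linalg.norm(L_new - L, "fro") <= tol * max(np.linalg.norm(L, "fro"), 1.0):
            return L_new, P_new, r_new, m + 1
        L, P, r = L_new, P_new, r_new
    return L, P, r, max_iter
```

With contractive toy updates the loop terminates well before `max_iter`, mirroring the convergence within a few iterations reported above.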
5 Numerical experiments
In this section, we present numerical results validating the effectiveness of the proposed time-varying graph learning method on both synthetic and real-world data. The proposed method (hereinafter called \(\text {TGSmStGL}\)) is compared with benchmarking methods, including static graph learning from smooth and stationary graph signals with hidden nodes \(\text {(GSmStGL)}\) [26], a time-varying graph learning method based on temporal smoothness \(\text {(TVGLTS)}\) [33] and a time-varying graph learning method based on sparse variation \(\text {(TVGLSV)}\) [34]. We commence with an introduction of the general experimental settings. Next, we assess the efficacy of our method on synthetic data and conduct a comparative analysis against established classical methods. Finally, we introduce the simulation performed over one real-world dataset. In our experiments, we solve the optimization problems using CVX [39], a package for solving convex programs.
5.1 Experimental settings
5.1.1 Evaluation metrics
We employ five evaluation metrics to assess the performance of our proposed method. The first is the Mean Error, which measures the estimation accuracy of the recovered graphs and is defined as \({\Vert \hat{\textbf{L}}_{O}^{(t)}-(\textbf{L}_{O}^{(t)})^*\Vert }_{F}^2/{\Vert (\textbf{L}_{O}^{(t)})^*\Vert _F^2}\), where \(\hat{\textbf{L}}_{O}^{(t)}\) are the estimated Laplacian matrices and \((\textbf{L}_{O}^{(t)})^*\) are the ground truth. Additionally, we utilize three metrics, namely Precision, Recall and F-score, to evaluate how effectively the true edge structure of the graph is captured. More precisely, the F-score, which is closely related to Precision and Recall, measures the accuracy of the estimated graph topology. It ranges between 0 and 1, with higher values indicating better performance in capturing the graph topology. The mutual dependence between the obtained edge set and the ground-truth graph is measured by the last evaluation metric, Normalized Mutual Information (NMI).
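The first four metrics can be computed directly from the Laplacians. A minimal NumPy sketch (the edge-detection tolerance `tol` is our choice; edges are read off the strict upper triangle):

```python
import numpy as np

def mean_error(L_hat, L_true):
    """Relative squared Frobenius error of the estimated Laplacian."""
    return (np.linalg.norm(L_hat - L_true, "fro") ** 2
            / np.linalg.norm(L_true, "fro") ** 2)

def edge_f_score(L_hat, L_true, tol=1e-4):
    """Precision, Recall and F-score of the recovered edge support.

    An off-diagonal entry with magnitude above `tol` counts as an edge;
    F = 2 * P * R / (P + R)."""
    iu = np.triu_indices_from(L_true, k=1)
    est = np.abs(L_hat[iu]) > tol
    true = np.abs(L_true[iu]) > tol
    tp = np.sum(est & true)
    precision = tp / max(est.sum(), 1)
    recall = tp / max(true.sum(), 1)
    f = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f
```

A perfect estimate yields Mean Error 0 and F-score 1; scaling the estimate changes the Mean Error but not the recovered edge support.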
5.1.2 Baseline methods
We compare our proposed method against several related graph learning strategies. The first is GSmStGL, a static graph learning method. This method considers the existence of hidden nodes and assumes that the graph signals are simultaneously smooth and stationary. The second is TGSmStGLnh, a time-varying graph inference method that addresses the same problem as TGSmStGL but ignores the presence of hidden nodes. The other two baseline time-varying graph learning schemes are TVGLTS and TVGLSV. These two schemes are applicable when all nodes are available and the graph signals satisfy the smoothness assumption.
5.2 Synthetic data
We create synthetic graph signals generated from a time-varying Erdős–Rényi graph (abbreviated as TV-ER graph). Constructing the data involves two steps: first, the creation of the TV-ER graph; second, the generation of time-varying graph signals using probability distributions based on the graph Laplacians of this TV-ER graph.
5.2.1 Time-varying graph construction
In general, the generation of a TV-ER graph involves two steps. At the first step, an initial static ER graph \({\mathcal {G}}^{(1)}\) is constructed, consisting of \(N = 20\) nodes with edge connection probability \(s = 0.3\). At the second step, we change the edge connections of the original ER graph over time to construct the TV-ER graph: the t-th graph \({\mathcal {G}}^{(t)}\) is obtained by resampling 10% of the edges of the previous graph \({\mathcal {G}}^{(t-1)}\). In this way, we construct a set of graphs in which only a few edges switch at a time while most edges remain unchanged, i.e., the set of graphs follows assumption (3). The edge weights belong to the set \(\{0,1\}\).
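One possible realization of this construction is sketched below; here we resample a fraction of node pairs (redrawing each with probability s), which is one of several ways to implement the edge resampling described above, and the function name and `frac` parameter are ours:

```python
import numpy as np

def tv_er_graphs(N=20, s=0.3, T=10, frac=0.1, rng=None):
    """Sequence of {0,1}-weighted ER adjacency matrices in which a
    fraction `frac` of node pairs is resampled at each time step."""
    rng = np.random.default_rng(rng)
    A = (rng.random((N, N)) < s).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                   # symmetric, no self-loops
    graphs = [A.copy()]
    pairs = np.column_stack(np.triu_indices(N, k=1))
    for _ in range(T - 1):
        pick = rng.choice(len(pairs), size=int(frac * len(pairs)), replace=False)
        for i, j in pairs[pick]:                  # redraw the chosen pairs
            w = float(rng.random() < s)
            A[i, j] = A[j, i] = w
        graphs.append(A.copy())
    return graphs
```

Because only `frac` of the pairs are touched per step, consecutive adjacency matrices differ in at most that many upper-triangular entries, matching assumption (3).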
5.2.2 Generating synthetic graph signals
We generate time-varying graph signals using distributions derived from the graph Laplacians of the constructed TV-ER graph. Let \(\textbf{L}^{(t)}\) denote the graph Laplacian at time slot t, with eigendecomposition \(\textbf{L}^{(t)}=\textbf{U}^{(t)}\mathbf {\Lambda }^{(t)}(\textbf{U}^{(t)})^\top\). We create the smooth graph signals as \(\textbf{X}^{(t)}=\textbf{U}^{(t)}\textbf{Z}^{(t)}\) with \(K=50\), where \(\textbf{Z}^{(t)}\sim {\mathcal {N}}(\textbf{0},(\mathbf {\Lambda }^{(t)})^{\dagger })\). It is worth mentioning that the covariance of \(\textbf{X}^{(t)}\) is \(\textbf{C}^{(t)}=((\textbf{L}^{(t)})^{\dagger })^2\). Hence, the graph signals generated from this model are both smooth and stationary on the time-varying graphs.
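The eigendecomposition-based construction above can be sketched as follows (the tolerance used to pseudo-invert the zero eigenvalue is our choice):

```python
import numpy as np

def smooth_stationary_signals(L, K=50, rng=None):
    """Draw K signals X = U Z, where the GFT coefficients Z have
    independent entries with per-mode variance given by the
    pseudo-inverted eigenvalues of L (zero eigenvalue -> zero variance).
    Low-frequency modes thus dominate, giving smooth signals whose
    covariance is a function of L (stationarity)."""
    rng = np.random.default_rng(rng)
    lam, U = np.linalg.eigh(L)        # L = U diag(lam) U^T
    lam_pinv = np.zeros_like(lam)
    nz = lam > 1e-9
    lam_pinv[nz] = 1.0 / lam[nz]
    Z = rng.standard_normal((L.shape[0], K)) * np.sqrt(lam_pinv)[:, None]
    return U @ Z
```

Since the constant eigenvector receives zero variance, every generated signal is orthogonal to the all-ones vector, and the smoothness quadratic form \(\textrm{tr}(\textbf{X}^\top \textbf{L}\textbf{X})\) stays non-negative by construction.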
5.3 Results on synthetic data
We conduct several experiments to investigate the behavior of our proposed method and the other baseline methods on synthetic data. Different settings are considered, including the number of hidden nodes, the noise level, and the column-sparse structure of the matrices \(\tilde{\textbf{P}}^{(t)}\).
5.3.1 Number of hidden nodes
We assess the performance of each method while varying the number of hidden nodes, with \(H \in \{1, 2, 3, 4, 5\}\). The hidden nodes are selected uniformly at random from all nodes in the graph.
Figure 1 shows the performance comparisons for different numbers of hidden nodes on the TV-ER graph. The Mean Error of the recovered graphs and the variation of the F-score are reported in Fig. 1a, b, respectively. For \(\text {TGSmStGL}\), the Mean Error increases and the F-score decreases with growing H. This observation highlights the significant influence of hidden nodes on time-varying graph recovery. The comparison depicted in Fig. 1 further supports the conclusion that the proposed method outperforms \(\text {TGSmStGLnh}\), since \(\text {TGSmStGLnh}\) ignores the presence of hidden nodes. At the same time, \(\text {TVGLTS}\) and \(\text {TVGLSV}\) present the worst performance, since these two methods not only ignore the presence of hidden nodes but also account only for the smoothness assumption on the graph signals. \(\text {TGSmStGL}\) outperforms \(\text {GSmStGL}\) because the latter does not consider the temporal relationship between graphs.
5.3.2 Noisy observations
The effect of different noise levels is evaluated in the second experiment. We use the TV-ER model with edge probability \(s=0.3\) to generate random time-varying graphs and set \(H=2\). We assume that the ground-truth graph signals \(\textbf{X}_O^{(t)}\) are corrupted by multivariate Gaussian noise with mean zero and covariance \(\sigma ^2\textbf{I}\), resulting in the noisy observed graph signals \(\tilde{\textbf{X}}_O^{(t)}\). In Fig. 2, the F-score of the learned graphs is plotted on the y-axis against the noise power on the x-axis. Notably, \(\text {TGSmStGL}\) demonstrates superior performance compared to \(\text {GSmStGL}\), consistent with the previous experimental results. Besides, compared to \(\text {TGSmStGL}\), the performance of \(\text {TGSmStGLnh}\), \(\text {TVGLTS}\) and \(\text {TVGLSV}\) deteriorates significantly as the noise power increases, further emphasizing the necessity of accounting for hidden nodes. Furthermore, the F-score of \(\text {TGSmStGL}\) decays only slightly as the noise power increases, demonstrating that the proposed method is robust to noise.
5.3.3 Structural properties of \(\tilde{\textbf{P}}^{(t)}\)
Although the primary objective of this study is the recovery of \(\{\hat{\textbf{L}}_{O}^{(t)}\}_{t=1}^{T}\), the structural properties of \(\{\tilde{\textbf{P}}^{(t)}\}_{t=1}^T\) also make a significant contribution to our proposed method. Consequently, the purpose of this experiment is to illustrate the recovered \(\{\hat{\textbf{L}}_{O}^{(t)}\}_{t=1}^{T}\) and \(\{\hat{\textbf{P}}^{(t)}\}_{t=1}^T\), giving a clearer picture of how different methods affect graph structure recovery. The outcomes are depicted in Figs. 3 and 4.
In Fig. 3, the first column shows the ground-truth graph topology and the corresponding covariance matrix. The second column shows the ground-truth values of \(\tilde{\textbf{L}}_{O}^{(1)}\) and \(\tilde{\textbf{P}}^{(1)}\). The last two columns present the estimates obtained by the group Lasso scheme \(\text {TGSmStGL}\) [cf. (13)] and the low-rank scheme \(\text {TGSmSt}\) [cf. (12)], respectively. For the depicted example, the low-rank scheme \(\text {TGSmSt}\) is not capable of recovering the column-sparse structure of the original matrix \(\tilde{\textbf{P}}^{(1)}\). On the contrary, the estimated matrix \(\hat{\textbf{P}}^{(1)}\) in Fig. 3g exhibits column sparsity similar to the ground truth \(\tilde{\textbf{P}}^{(1)}\). Significantly, from the perspective of the estimated \(\hat{\textbf{L}}_{O}^{(1)}\), the more precise estimation of \(\hat{\textbf{P}}^{(1)}\) leads to a superior inference of the graph topology. Thus, the group Lasso scheme \(\text {TGSmStGL}\) yields better estimates than the low-rank scheme \(\text {TGSmSt}\).
In Fig. 4, we show the learned matrices \(\hat{\textbf{P}}^{(t)}\) of the time-varying graph, where t ranges from 1 to 4. The columns with nonzero entries maintain consistent positions across adjacent time slots for the learned matrices \(\hat{\textbf{P}}^{(t)}\). In other words, the scheme \(\text {TGSmStGL}\) captures the similar column-sparsity pattern of \(\hat{\textbf{P}}^{(t)}\) resulting from the temporal similarity of the time-varying graph.
5.4 Experiments on real-world data
In this section, we evaluate our algorithm on two real-world datasets and compare its recovery performance with existing alternatives in the literature.
5.4.1 Application to PM 2.5 data
We start by considering the daily mean PM 2.5 concentration data from California provided by the US Environmental Protection Agency [40]. The dataset contains daily measurements collected from 93 sensors in California over the first 304 days of 2015. According to the longitude and latitude coordinates of these 93 sensors, we build an initial graph. To infer best-represented time-varying graphs from incomplete graph signals, we assume that only the first 15 sensors are observed. In this case, the goal is to infer the connections between those 15 sensors. Moreover, we divide the 304 days into 10 time slots of equal length, so that each sensor contributes data from about 30 days, i.e., \(\textbf{X}^{(t)}\in \mathbb {R}^{15\times 30}\).
The comparative outcomes between the proposed approach and other relevant alternatives are shown in Table 2. The proposed \(\text {TGSmStGL}\) obtains a higher F-score (0.5011) and a higher NMI than the other methods.
5.4.2 Application to COVID-19 data
Finally, we use the global COVID-19 dataset provided by Johns Hopkins University [41]. The dataset contains the cumulative number of COVID-19 cases for each day and each locality between January 22 and April 6, as well as the geographical localization of 259 places including some regions of the world; overall, the dataset has 7 time slots with N = 259 and M = 10. To take into account the presence of hidden variables, we assume that \({\mathcal {O}} = \{1,\ldots , 30\}\), so that only the first 30 stations are observed, and our goal is to infer the connections between those stations.
The results are listed in Table 2. The proposed \(\text {TGSmStGL}\) outperforms \(\text {GSmStGL}\), \(\text {TVGLTS}\) and \(\text {TVGLSV}\). In particular, \(\text {TGSmStGL}\) and \(\text {GSmStGL}\) have comparable F-scores. This is not surprising, since the number of time slots T is relatively small. On the other hand, \(\text {TVGLTS}\) and \(\text {TVGLSV}\) perform worse than \(\text {TGSmStGL}\). The results once again clearly reflect that explicitly accounting for hidden variables when inferring the graph structure leads to better performance.
6 Conclusion
In this paper, we introduced an optimization framework aimed at addressing the problem of time-varying graph learning with hidden nodes. The framework relied on the assumption that the observed signals are both smooth and stationary on the learned graphs, and it identified graph topologies by leveraging the similarity among time-varying graphs. Specifically, the key was to leverage the block structure of a matrix to handle the presence of hidden nodes, and an optimization framework based on graph topology and graph signal constraints was proposed. Moreover, in order to capture the characteristics of the learned graphs precisely, we augmented the objective function with a column-sparsity constraint and exploited the connection between the similarity of graphs at different time slots and column-sparsity. The experimental results on both simulated and real-world data verified the effectiveness and superiority of our method. Our future work includes the inference of time-varying graphs under different evolutionary modes in the presence of hidden nodes.
Data availability
All data generated or analyzed during this study are included in this published article.
References
E.D. Kolaczyk, Statistical Analysis of Network Data: Methods and Models (Springer, New York, 2009)
O. Sporns, Discovering the Human Connectome (MIT Press, Boston, 2012)
S. Myers, J. Leskovec, On the convexity of latent social network inference. Adv. Neural Inf. Process. Syst. 23 (2010)
A. Namaki, A. Shirazi, R. Raei, G. Jafari, Network analysis of a financial market based on genuine correlation and threshold method. Phys. A 390(21), 3835–3841 (2011)
D.I. Shuman, S.K. Narang, P. Frossard, A. Ortega, P. Vandergheynst, The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains. IEEE Sign. Process. Mag. 30(3), 83–98 (2013). https://doi.org/10.1109/MSP.2012.2235192
A. Sandryhaila, J.M.F. Moura, Big data analysis with signal processing on graphs: representation and processing of massive data sets with irregular structure. IEEE Sign. Process. Mag. 31(5), 80–90 (2014). https://doi.org/10.1109/MSP.2014.2329213
A.G. Marques, N. Kiyavash, J.M.F. Moura, D. Van De Ville, R. Willett, Graph signal processing: foundations and emerging directions [from the guest editors]. IEEE Sign. Process. Mag. 37(6), 11–13 (2020). https://doi.org/10.1109/MSP.2020.3020715
E. Pavez, A. Ortega, Generalized Laplacian precision matrix estimation for graph signal processing. in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6350–6354 (2016). https://doi.org/10.1109/ICASSP.2016.7472899
S. Segarra, A.G. Marques, G. Mateos, A. Ribeiro, Network topology inference from spectral templates. IEEE Trans. Sign. Inf. Process. Netw. 3(3), 467–483 (2017). https://doi.org/10.1109/TSIPN.2017.2731051
S. Segarra, A.G. Marques, M. Goyal, S. Rey, Network topology inference from input-output diffusion pairs. in 2018 IEEE Statistical Signal Processing Workshop (SSP), pp. 508–512 (2018). https://doi.org/10.1109/SSP.2018.8450838
X. Dong, D. Thanou, M. Rabbat, P. Frossard, Learning graphs from data: a signal representation perspective. IEEE Sign. Process. Mag. 36(3), 44–63 (2019). https://doi.org/10.1109/MSP.2018.2887284
G. Mateos, S. Segarra, A.G. Marques, A. Ribeiro, Connecting the dots: identifying network structure via graph signal processing. IEEE Sign. Process. Mag. 36(3), 16–43 (2019). https://doi.org/10.1109/MSP.2018.2890143
F. Xia, K. Sun, S. Yu, A. Aziz, L. Wan, S. Pan, H. Liu, Graph learning: a survey. IEEE Trans. Artif. Intell. 2(2), 109–127 (2021). https://doi.org/10.1109/TAI.2021.3076021
V. Kalofolias, How to learn a graph from smooth signals. in Proceedings of International Conference on Artificial Intelligence and Statistics, vol. 51, pp. 920–929 (2016)
X. Dong, D. Thanou, P. Frossard, P. Vandergheynst, Learning Laplacian matrix in smooth graph signal representations. IEEE Trans. Sign. Process. 64(23), 6160–6173 (2016). https://doi.org/10.1109/TSP.2016.2602809
S.P. Chepuri, S. Liu, G. Leus, A.O. Hero, Learning sparse graphs under smoothness prior. in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6508–6512 (2017). https://doi.org/10.1109/ICASSP.2017.7953410
H.E. Egilmez, E. Pavez, A. Ortega, Graph learning from data under Laplacian and structural constraints. IEEE J. Sel. Top. Sign. Process. 11(6), 825–841 (2017). https://doi.org/10.1109/JSTSP.2017.2726975
S. Kumar, J. Ying, J.V. Miranda Cardoso, D. Palomar, Structured graph learning via Laplacian spectral constraints. Adv. Neural Inf. Process. Syst. 32, 11647–11658 (2019)
D. Thanou, X. Dong, D. Kressner, P. Frossard, Learning heat diffusion graphs. IEEE Trans. Sign. Inf. Process. Netw. 3(3), 484–499 (2017). https://doi.org/10.1109/TSIPN.2017.2731164
V. Chandrasekaran, P.A. Parrilo, A.S. Willsky, Latent variable graphical model selection via convex optimization. in 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1610–1613 (2010). https://doi.org/10.1109/ALLERTON.2010.5707106
A. Chang, T. Yao, G.I. Allen, Graphical models and dynamic latent factors for modeling functional brain connectivity. in 2019 IEEE Data Science Workshop (DSW), pp. 57–63 (2019). https://doi.org/10.1109/DSW.2019.8755783
X. Yang, M. Sheng, Y. Yuan, T.Q.S. Quek, Network topology inference from heterogeneous incomplete graph signals. IEEE Trans. Sign. Process. 69, 314–327 (2021). https://doi.org/10.1109/TSP.2020.3039880
A. Anandkumar, D. Hsu, A. Javanmard, S. Kakade, Learning linear Bayesian networks with latent variables. in International Conference on Machine Learning, pp. 249–257 (2013)
J. Mei, J.M.F. Moura, SILVar: single index latent variable models. IEEE Trans. Sign. Process. 66(11), 2790–2803 (2018). https://doi.org/10.1109/TSP.2018.2818075
A. Buciulea, S. Rey, C. Cabrera, A.G. Marques, Network reconstruction from graph-stationary signals with hidden variables. in 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 56–60 (2019). https://doi.org/10.1109/IEEECONF44664.2019.9048913
A. Buciulea, S. Rey, A.G. Marques, Learning graphs from smooth and graph-stationary signals with hidden variables. IEEE Trans. Sign. Inf. Process. Netw. 8, 273–287 (2022). https://doi.org/10.1109/TSIPN.2022.3161079
S. Rey, A. Buciulea, M. Navarro, S. Segarra, A.G. Marques, Joint inference of multiple graphs with hidden variables from stationary graph signals. in 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5817–5821 (2022). https://doi.org/10.1109/ICASSP43922.2022.9747524
A.G. Marques, S. Segarra, G. Leus, A. Ribeiro, Stationary graph processes and spectral estimation. IEEE Trans. Sign. Process. 65(22), 5911–5926 (2017). https://doi.org/10.1109/TSP.2017.2739099
N. Perraudin, P. Vandergheynst, Stationary signal processing on graphs. IEEE Trans. Sign. Process. 65(13), 3462–3477 (2017). https://doi.org/10.1109/TSP.2017.2690388
M.G. Preti, T.A. Bolton, D. Van De Ville, The dynamic functional connectome: state-of-the-art and perspectives. Neuroimage 160, 41–54 (2017)
Y. Kim, S. Han, S. Choi, D. Hwang, Inference of dynamic networks using time-course data. Brief. Bioinform. 15(2), 212–228 (2014)
R.N. Mantegna, Hierarchical structure in financial markets. Eur. Phys. J. B Condens. Matter Complex Syst. 11(1), 193–197 (1999)
V. Kalofolias, A. Loukas, D. Thanou, P. Frossard, Learning time varying graphs. in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2826–2830 (2017). https://doi.org/10.1109/ICASSP.2017.7952672
K. Yamada, Y. Tanaka, A. Ortega, Time-varying graph learning with constraints on graph temporal variation (2020). Preprint at https://arxiv.org/abs/2001.03346
D. Hallac, Y. Park, S. Boyd, J. Leskovec, Network inference via the time-varying graphical lasso. in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 205–213 (2017)
J. Friedman, T. Hastie, R. Tibshirani, Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9(3), 432–441 (2008)
A. Sandryhaila, J. Moura, Discrete signal processing on graphs. IEEE Trans. Sign. Process. 61(7), 1644–1656 (2013)
B. Girault, Stationary graph signals using an isometric graph translation. in 2015 23rd European Signal Processing Conference (EUSIPCO), pp. 1516–1520 (2015)
M. Grant, S. Boyd, CVX: Matlab software for disciplined convex programming, version 2.1 beta. http://cvxr.com/cvx (2013)
Air data: Air quality data collected at outdoor monitors across the US. https://www.epa.gov/outdoor-air-quality-data
E. Dong, H. Du, L. Gardner, An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect. Dis. 20(5), 533–534 (2020)
Acknowledgements
Not applicable.
Funding
This work was supported by the Innovation Program for Quantum Science and Technology under Grant No. 2021ZD0300703, the National Natural Science Foundation of China under Grant No. 61971146, and the University-Industry Collaborative Education Program under Grant No. 230805384035416.
Author information
Authors and Affiliations
Contributions
Not applicable.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ye, R., Jiang, X.-Q., Feng, H. et al. Time-varying graph learning from smooth and stationary graph signals with hidden nodes. EURASIP J. Adv. Signal Process. 2024, 33 (2024). https://doi.org/10.1186/s13634-024-01128-0
DOI: https://doi.org/10.1186/s13634-024-01128-0