Stationary time-vertex signal processing
EURASIP Journal on Advances in Signal Processing volume 2019, Article number: 36 (2019)
Abstract
This paper considers regression tasks involving high-dimensional multivariate processes whose structure is dependent on some known graph topology. We put forth a new definition of time-vertex wide-sense stationarity, or joint stationarity for short, that goes beyond product graphs. Joint stationarity helps by reducing the estimation variance and recovery complexity. In particular, for any jointly stationary process (a) one reliably learns the covariance structure from as little as a single realization of the process and (b) solves MMSE recovery problems, such as interpolation and denoising, in computational time nearly linear in the number of edges and timesteps. Experiments with three datasets suggest that joint stationarity can yield accuracy improvements in the recovery of high-dimensional processes evolving over a graph, even when the latter is only approximately known, or the process is not strictly stationary.
1 Introduction
One of the main challenges when modeling multivariate processes is to decouple the estimation variance from the problem size. Consider an N-variate process unfolding over T timesteps. If only mild assumptions are made, then the number of realizations needed to reliably estimate the first two moments is, up to a logarithmic factor, proportional to O(NT), i.e., the data size [1]. Assuming that the process is time wide-sense stationary (TWSS) makes the length T of the process inconsequential. This is ideal for the univariate setting as it enables us to make relevant predictions even based on a single realization. If one additionally assumes that the signal autocorrelation is compactly supported, such that most data dependencies take place within a short time horizon, then the estimation variance can be reduced further by (roughly) splitting the observations into parts and considering each as an independent realization. This approach suffices when N is relatively small. For high-dimensional processes, however, one needs to incorporate additional assumptions to obtain meaningful predictions [2–5].
In this spirit, this paper focuses on high-dimensional processes that are supported on the vertex set and are statistically dependent on the edge set of some known graph topology. Whether examining epidemic spreading [6], how traffic evolves in the roads of a city [7], or neuronal activation patterns present in the brain [8], many of the high-dimensional processes one encounters are inherently constrained by some underlying network. This realization has been the driving force behind recent efforts to reinvent classical models by taking into account the graph structure, with advances in many problems, such as denoising [9] and semi-supervised learning [10, 11], among others.
Yet, standard models for processes (evolving) on graphs often fail to produce useful results when applied to real datasets. One of the main reasons for this shortcoming is that they model only a limited set of spatiotemporal behaviors. The widely used graph Tikhonov and total variation priors, for instance, assume that the signal varies slowly or in a piecewise constant manner over edges, without specifying any precise relations [12–14]. Similarly, assuming that the graph Laplacian encodes the conditional correlations of variables, as is done with Gaussian Markov random fields [15], becomes a rigid model when the graph is known [16]. To capture the behavior of complex networked systems, such as transportation and biological networks, it is crucial to train expressive models able to reproduce a wide range of graph and temporal behaviors.
1.1 Contributions
This paper considers the statistical modeling of processes evolving on graphs. In particular, we investigate the relationship between two different hypotheses: TWSS and vertex wide-sense stationarity (VWSS) [17–19], which are individually helpful in reducing the variance of covariance estimation for time series and graph signals, respectively. We propose a combined multivariate hypothesis that we refer to as time-vertex wide-sense stationarity, or joint stationarity for short. The necessary first step of our analysis consists of reformulating the standard properties of stationarity (such as the relation of the covariance matrix and power spectral density to an appropriate Fourier transform) from the lens of time-vertex analysis [20, 21]. This analysis is purposeful, yet not trivial, as joint stationarity is more complicated than assuming stationarity on the product of two graphs^{Footnote 1}.
We use the hypothesis of joint stationarity to control variance and computational complexity in estimation and recovery tasks. As in [17], one may reliably estimate the model parameters from few observations (e.g., see Fig. 4) and solve MMSE recovery problems in time linear in the number of edges and timesteps (e.g., see Fig. 3). Complementing previous work, we also provide an analysis of the power spectral density (PSD) estimation, which brings insight into the inherent tradeoff between bias and variance. In addition, we experimentally demonstrate that assuming joint stationarity aids in recovery even when only an approximation of the graph is known, or the process is only approximately jointly stationary. These experiments corroborate that the joint stationarity hypothesis is a useful assumption, particularly in situations where the problem features a large number of variables but only a limited number of observations.
To test the utility of joint stationarity, we apply our methods on three diverse datasets: (a) a meteorological dataset containing the hourly temperature of 32 weather stations over 1 month in Molene, France [18], (b) a traffic dataset depicting high-resolution daily vehicle flow of 4 weekdays in the highways of Sacramento, and (c) simulated SIRS-type epidemics over Europe. Our experiments confirm that for high-dimensional processes evolving over graphs, assuming joint stationarity yields an improvement in recovery performance as compared to time- or vertex-based stationarity methods, even when the graph is only approximately known and the data violate the strict conditions of our definition.
1.2 Related work
There exists an extensive literature on multivariate stationary processes, developing the original work of Wiener et al. [22, 23]. The reader may find Bloomfield’s book [24], which focuses on spectral relations, of particular interest. We focus on two main approaches that relate to our work: graphical models and signal processing on graphs.
1.2.1 Graphical models
In the context of graphical models, multivariate stationarity has been used jointly with a graph in the work of [25, 26]. Though relevant, we note that there is a key difference between these models and our approach: we assume that the graph is given, whereas in graphical models, the graph structure (or more precisely the precision matrix) is learned from the data. Knowing the graph allows us to search for more involved relations between the variables. As such, we are not restricted to the case where the conditional dependencies are given by the graph (and are therefore sparse), but allow nonadjacent variables to be conditionally dependent, modeling a broader set of behaviors. We also note that our approach is ultimately more scalable. We refer to [16] for a discussion of the connections between graphical models and graph signal processing.
1.2.2 Graph signal processing
The idea of studying the stationarity of a random vector w.r.t. a graph was first introduced in [18, 27] and then in [17, 19]. While these contributions have different starting points, they both roughly propose the same definition. Another more recent contribution relating to stationarity on graphs in the context of PSD estimation is [28]. Despite the relevance of these works, it is important to stress that the current paper is the first to consider a stationarity hypothesis over graph signals varying in time. Moreover, the new results are non-trivial as they cannot be obtained by applying previous definitions on a product graph. In addition, some of the analysis presented here (particularly that of Section 4) is novel and can also be employed for the previously studied case of stationary graph signals. To make the connection with previous works transparent, in the following, every technical result (e.g., Lemma, Theorem, Proposition) that emerges as a generalization of [18, 19, 27] contains a reference in its heading pointing to the former claim.
The forecasting of time-evolving signals on graphs was also considered in [29–32]. Nonetheless, there are several differences with these works, with the most important being that we define joint stationarity and that we are not restricted to the causal case (where a process is reconstructed only from its past). Finally, it should be noted that some preliminary results of this work appeared in a conference paper [33]. This work extends the conference paper in many directions. We refine the definition of joint stationarity and explore how it relates to other well-known stationarity hypotheses. We propose a new PSD estimator and provide a theoretical analysis of the bias and variance of the old and new PSD estimators. Additionally, we study the complexity of the proposed solution and evaluate its merit w.r.t. two new datasets.
2 Preliminaries
2.1 General notation
We use boldface symbols for matrices and vectors (e.g., A and a respectively) and calligraphic symbols for sets (e.g., \(\mathcal {V}\) and \(\mathcal {E}\)). Symbol j denotes the imaginary unit, I_{N} is the N×N identity matrix, and 1_{N} is the all-ones vector of size N. We use brackets to index matrix elements and subscripts for matrix blocks: if A is of size n_{1}×n_{2}, then A[n_{1},n_{2}] is the element at the n_{1}th row and n_{2}th column and \(\mathbf {A}_{{n}_{1},{n}_{2}}\) is a (block) matrix. Vector a=vec (A) (without subscript) is the vectorized representation of A and a_{n} is its nth column. Moreover, \(\mathbf {A}^{\intercal }\) is the transpose of A and A^{∗} is its transposed complex conjugate (meaning that \(\left ({\mathbf {A}}^{{*}}\right)^{\intercal }\) is the complex conjugate). If A is N×N Hermitian, its eigendecomposition is generically written as A=UΛU^{∗}, where U=[ u_{1},…,u_{N}] is a matrix having eigenvectors as columns and Λ=diag(λ_{1},…,λ_{N}) is the diagonal matrix of eigenvalues. Symbols h(·),f(·), and g(·) are reserved for scalar/matrix functions. A matrix function with a single argument takes as an input a symmetric matrix A and outputs h(A)=Udiag(h(λ_{1}),…,h(λ_{N}))U^{∗}. The operator ⊗ denotes the Kronecker product. The Kronecker sum ⊕ can be defined in terms of the Kronecker product as A⊕B=A⊗I_{M}+I_{N}⊗B, where matrix A has size N×N and matrix B has size M×M.
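The Kronecker sum identity above, and the eigenvalue fact used later for the product-graph Laplacian, can be checked numerically. A minimal sketch (the example matrices and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 4
A = rng.standard_normal((N, N)); A = A + A.T   # symmetric, N x N
B = rng.standard_normal((M, M)); B = B + B.T   # symmetric, M x M

# Kronecker sum, following the definition in the text:
# A (+) B = A x I_M + I_N x B, an NM x NM matrix.
ksum = np.kron(A, np.eye(M)) + np.kron(np.eye(N), B)

# Its eigenvalues are exactly the pairwise sums of the eigenvalues of A
# and B, the fact invoked later for the product-graph Laplacian L_J.
ev = np.sort(np.linalg.eigvalsh(ksum))
pairwise = np.sort(np.add.outer(np.linalg.eigvalsh(A),
                                np.linalg.eigvalsh(B)).ravel())
print(np.allclose(ev, pairwise))  # True
```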
2.2 Harmonic time-vertex analysis
We consider signals supported on the vertices \(\mathcal {V} = \{ v_{1}, v_{2}, \ldots, v_{N} \}\) of a weighted undirected graph \(\mathcal {G} = (\mathcal {V}, \mathcal {E}, \mathbf {W}_{G})\), with \(\mathcal {E}\) the set of edges of cardinality \(E = |\mathcal {E}|\) and W_{G} the weighted adjacency matrix. Suppose that signal x_{t} is sampled at T successive regular intervals of unit length. A real time-vertex signal \(\mathbf {X} = \left [ \mathbf {x}_{1}, \mathbf {x}_{2}, \ldots, \mathbf {x}_{T} \right ] {\in } \mathbb {R}^{N\times T}\) is then the matrix having graph signal x_{t} as its tth column.
The frequency representation of a time-vertex signal X is given by the joint Fourier transform [14, 21] (or JFT for short)
$$\hat{\mathbf{X}} = \text{JFT}\{\mathbf{X}\} \triangleq \mathbf{U}_{G}^{*}\, \mathbf{X} \left(\mathbf{U}_{T}^{*}\right)^{\intercal},$$
with U_{G} and U_{T} being, respectively, the unitary graph Fourier transform (GFT) and discrete Fourier transform (DFT) matrices, whereas \(\left ({\mathbf {U}}^{*}_{T}\right)^{\intercal }\) is the complex conjugate of U_{T}. In vector form, we have that \(\hat {\mathbf {x}} = \text {JFT}\{{\mathbf {x}}\} \triangleq \mathbf {U}_{J}^{*}\, \mathbf {x}\), where U_{J}=U_{T}⊗U_{G}. As is often the case, we choose U_{G} to be the eigenvector matrix of the combinatorial^{Footnote 2} graph Laplacian matrix L_{G}=diag(W_{G}1_{N})−W_{G}, where 1_{N} is the all-ones vector of size N and diag(W_{G}1_{N}) is the diagonal degree matrix. Matrix U_{T} is the eigenvector matrix of the Laplacian L_{T} of a cyclic graph \(\mathcal {T}\).
With this in place, \(\hat {\mathbf {X}}[\!n,\tau ]\) can be seen as the Fourier coefficient associated with the joint frequency [ λ_{n},ω_{τ}], where λ_{n} denotes the nth graph eigenvalue and ω_{τ} the τth angular frequency.
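Computationally, the JFT amounts to a GFT along the vertex dimension followed by a unitary DFT along time. A minimal numpy sketch under our own naming conventions (the 4-vertex path graph is an illustrative choice):

```python
import numpy as np

def jft(X, UG):
    """Joint Fourier transform of an N x T time-vertex signal:
    GFT (eigenvectors UG of the graph Laplacian) across vertices,
    unitary DFT across time."""
    T = X.shape[1]
    return UG.conj().T @ np.fft.fft(X, axis=1) / np.sqrt(T)

def ijft(Xhat, UG):
    """Inverse joint Fourier transform."""
    T = Xhat.shape[1]
    return UG @ np.fft.ifft(Xhat * np.sqrt(T), axis=1)

# Toy example: a 4-vertex path graph observed over T = 8 timesteps.
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(1)) - W                  # combinatorial Laplacian
lam, UG = np.linalg.eigh(L)                # graph Fourier basis

X = np.random.default_rng(0).standard_normal((4, 8))
Xhat = jft(X, UG)
print(np.allclose(ijft(Xhat, UG).real, X))  # True: the JFT is invertible
```

Since both factors are unitary, the transform also preserves energy (Parseval).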
The JFT maintains a close connection with the product graph \(\mathcal {J}\) [14, 21]. The latter is the graph whose adjacency matrix is W_{J}=W_{T}⊕W_{G} (this amounts to a Cartesian product between \(\mathcal {G}\) and the ring graph \(\mathcal {T}\)). The connection is revealed if one realizes that the Laplacian L_{J}=L_{T}⊕L_{G} of \(\mathcal {J}\) carries the eigendecomposition L_{J}=U_{J}(Λ_{T}⊕Λ_{G})U_{J}^{∗}. It follows that computing the JFT (in vector form) is the same as computing the GFT of x w.r.t. graph \(\mathcal {J}\). The main issue with any^{Footnote 3} product graph interpretation is that it imposes a strict dependence between the eigenvalues of L_{G} and L_{T} (since the eigenvalues of L_{J} are given by Λ_{T}⊕Λ_{G}). As we will see in the next paragraph, to attain full generality, one needs to abandon the product graph. For an in-depth discussion of the JFT and its properties, we refer the reader to [34].
2.3 Joint time-vertex filtering
Filtering a time-vertex signal x with a joint filter h(L_{G},L_{T}) corresponds to element-wise multiplication in the joint frequency domain [ λ,ω] by a function \(h: [\!0, \lambda _{\max }] \times [\!-1,\ 1] \mapsto \mathbb {R}\) [21, 34–36]. When a joint filter h(L_{G},L_{T}) is applied to x, the output is
where \(\mathbf {\Lambda }_{G} \in \mathbb {R}^{N\times N} \) and \(\mathbf {\Omega } \in \mathbb {R}^{T\times T}\) are diagonal matrices with Λ_{G}[ n,n]=λ_{n} and Ω[ τ,τ]=ω_{τ}, whereas h(Λ_{G},Ω) is a diagonal NT×NT matrix defined as
and diag(vec(A)) creates a matrix with diagonal elements the vectorized form of A. The bivariate notation h(·,·) is meant to illustrate that joint filters operate independently on the two domains, something impossible^{Footnote 4} in the product graph framework [14, 21]. For convenience, we will often overload notation and write h(θ_{n,τ}) to refer to the bivariate function h(λ_{n},ω_{τ}). Furthermore, we say that a joint filter is separable, if its joint frequency response h can be written as the product of a frequency response h_{1} defined solely in the vertex domain and one h_{2} in the time domain, i.e., h(θ)=h_{1}(λ)·h_{2}(ω).
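The spectral recipe above can be sketched in a few lines of numpy: transform to the joint frequency domain, multiply by the N×T response matrix h(Λ_G, Ω), and transform back. The function names, the path-graph setup, and the particular (separable) response are our illustrative choices; taking the real part at the end is valid because the response is symmetric in ω:

```python
import numpy as np

def joint_filter(X, UG, lam, h):
    """Filter an N x T time-vertex signal with joint frequency
    response h(lambda, omega): multiply in the joint spectral domain."""
    T = X.shape[1]
    omega = 2 * np.pi * np.fft.fftfreq(T)      # angular frequencies
    Xhat = UG.conj().T @ np.fft.fft(X, axis=1)
    H = h(lam[:, None], omega[None, :])        # N x T joint response
    # Real output requires h(lam, omega) = h(lam, -omega).
    return (UG @ np.fft.ifft(H * Xhat, axis=1)).real

# Toy setup: 4-vertex path graph, T = 16 timesteps.
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
lam, UG = np.linalg.eigh(np.diag(W.sum(1)) - W)
X = np.random.default_rng(1).standard_normal((4, 16))

# A separable example response that smooths over both domains.
h = lambda l, o: np.exp(-l) * np.exp(-o ** 2)
Y = joint_filter(X, UG, lam, h)
```

With the all-ones response the filter is the identity, and since |h| ≤ 1 here, the smoothed signal has no more energy than the input.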
3 Joint time-vertex stationarity
Let \(\mathbf {X} \in \mathbb {R}^{N\times T}\) be a real discrete periodic multivariate stochastic process with a finite number of timesteps T, indexed by the vertices v_{i} of graph \(\mathcal {G}\) and time t. We refer to such processes as time-vertex processes, or joint processes for short.
Our objective is to provide a definition of stationarity that captures statistical invariance of the first two moments of a joint process \(\mathbf {x} = \text {vec}({\mathbf {X}}) \sim \mathcal {D}(\bar {\mathbf {x}}, \mathbf {\Sigma })\), i.e., the mean \(\bar {\mathbf {x}} = \mathbf {E}\left [{\mathbf {x}}\right ]\) and the covariance \(\mathbf {\Sigma } = \mathbf {E}\left [{\mathbf {xx}^{\intercal }}\right ] - \bar {\mathbf {x}}{\bar {\mathbf {x}}}^{\intercal }\). Crucially, the definition should do so in a manner that is faithful to the graph and temporal structure.
3.1 Definition
Typically, wide-sense stationarity is thought of as an invariance of the first two moments of a process w.r.t. translation. For the first moment, things are straightforward: stationarity implies a constant mean E[x]=c1, independently of the domain of interest. The second moment, however, is more complicated as it depends on the exact form translation takes in the particular domain. Unfortunately, for graphs, translation is a non-trivial operation and three alternative translation operators exist: the generalized translation [37], the graph shift [13], and the isometric graph translation [27]. Due to this challenge, there are currently three alternative (though akin) definitions of stationarity appropriate for graphs [17–19].
The ambiguity associated with translation on graphs urges us to seek an alternative starting point for our definition. Fortunately, there exists an interpretation which holds promise: up to its constant mean, a wide-sense stationary process corresponds to a white process filtered linearly on the underlying space. This “filtering interpretation” of stationarity is well known classically^{Footnote 5} as well as in the graph setting [19] and is equivalent to asserting that the second moment can be expressed as Σ=h(L_{T}), where h(L_{T}) is a linear filter. Thankfully, not only is filtering elegantly and uniquely defined for graphs [37], but also stating that a process is graph wide-sense stationary if E[x]=c1_{N} and Σ=h(L_{G}) is a graph filter is generally consistent^{Footnote 6} with current definitions [17–19].
This motivates us to also express the definition of stationarity for joint processes in terms of joint filtering:
Definition 1
(JWSS) A joint process x=vec (X) is called jointly wide-sense stationary (JWSS), if and only if

(a)
The first moment of the process is constant E[x]=c1_{NT}.

(b)
The covariance matrix of the process is a joint filter Σ=h(L_{G},L_{T}), where h(·,·) is a nonnegative real function referred to as joint power spectral density (JPSD).
Let us examine Definition 1 in detail.
First moment condition. As in the classical case, the first moment of a JWSS process has to be constant over the time and the vertex sets, i.e., \(\bar {\mathbf {X}}[\!{i,t}] = c \) for every i=1,2,…,N and t=1,2,…,T. For alternative choices of the graph Laplacian with a nullspace not spanned by the constant vector, the first moment condition should be modified to requiring that the expected value of a JWSS process is in the null space of the matrix L_{T}⊕L_{G} (see Remark 2 [19] for a similar observation on stochastic graph signals).
Second moment condition. According to the definition, the covariance matrix of a JWSS process takes the form of a joint filter h(L_{G},L_{T}), and is therefore diagonalizable by the JFT matrix U_{J}. It may also be interesting to notice that the matrix h(L_{G},L_{T}) can be expressed as follows
where each block \(\mathbf {H}_{t_{1},t_{2}}\) of Σ is an N×N matrix defined as:
and \(h_{\omega _{\tau }} (\mathbf {L}_{G})\) is the graph filter with frequency response \(h_{\omega _{\tau }} = h(\lambda,\omega _{\tau })\). Being a covariance matrix, h(L_{G},L_{T}) must necessarily be positive semidefinite; thus, h(·,·) is real (the eigenvalues of every Hermitian matrix are real) and nonnegative. Equivalently, every zero-mean JWSS process x=vec (X) can be generated by joint filtering a white process ε with zero mean and identity covariance: x=h(L_{G},L_{T})^{1/2}ε. The following proposition exploits these facts to provide an interpretation of JWSS processes in the joint frequency domain.
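This generative view gives a direct recipe for simulating a JWSS process: draw white noise and filter it by the square root of the JPSD. A sketch under our own naming, with an arbitrary illustrative JPSD; the JPSD must satisfy h(λ,ω)=h(λ,−ω) for the realization to be real:

```python
import numpy as np

def sample_jwss(UG, lam, T, h, rng):
    """One realization of a zero-mean JWSS process with JPSD h:
    x = h(LG, LT)^{1/2} eps, with eps white (zero mean, identity cov.)."""
    N = len(lam)
    omega = 2 * np.pi * np.fft.fftfreq(T)
    eps = rng.standard_normal((N, T))
    # White noise in the joint frequency domain (unitary JFT).
    eps_hat = UG.conj().T @ np.fft.fft(eps, axis=1) / np.sqrt(T)
    # Shape its spectrum by the square root of the JPSD.
    Xhat = np.sqrt(h(lam[:, None], omega[None, :])) * eps_hat
    # Real output requires h symmetric in omega.
    return (UG @ np.fft.ifft(Xhat * np.sqrt(T), axis=1)).real
```

Averaging the squared JFT coefficients of many such draws recovers h, in line with the spectral characterization given next in Proposition 1.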
Proposition 1
(Generalizes Theorem 1 [17] and Proposition 1 [18, 19]) A joint process X over a connected graph \(\mathcal {G}\) is jointly wide-sense stationary (JWSS) if and only if:

(a)
The joint spectral modes are in expectation zero \(\mathbf {E}\left [\!{\hat {\mathbf {X}}}[\!n,{\tau }]\right ]=0 \quad \text {if}~ \lambda _{n} \neq 0 \text { and } \ \omega _{\tau } \neq 0. \)

(b)
The product graph spectral modes are uncorrelated \(\mathbf {E}\left [\!{ \hat {\mathbf {X}}[\!n_{1},\tau _{1}] \hat {\mathbf {X}}[\!n_{2},\tau _{2}]}\right ] = 0, \) whenever n_{1}≠n_{2} or τ_{1}≠τ_{2}.

(c)
There exists a nonnegative function h(·,·), referred to as joint power spectral density (JPSD), such that
$$\mathbf{E}\left[{\left|\hat{\mathbf{X}}[\!n,\tau]\right|^{2}}\right] - \left|\mathbf{E}\left[{\hat{\mathbf{X}}[\!n,\tau]}\right]\right|^{2} = h(\lambda_{n}, \omega_{\tau}),$$for every n=1,2,…,N and τ=1,2,…,T.
(For clarity, this and other proofs of the paper have been moved to the “Appendix”.)
We briefly present a few additional properties of JWSS processes that will be useful in the rest of the paper.
Property 1
(Generalizes Example 1 [17–19]) White centered i.i.d. noise \(\mathbf {w} \in \mathbb {R}^{NT} \sim \mathcal {D}(\mathbf {0}_{NT},\mathbf {I}_{NT})\) is JWSS with constant JPSD for any graph.
The proof follows easily by noting that the covariance of w is diagonalized by the joint Fourier basis of any graph \(\mathbf {\Sigma }_{\mathbf {w}} = \mathbf {I} = \mathbf {U}_{J} \mathbf {I} \mathbf {U}_{J}^{*}\). This last equation tells us that the JPSD is constant, which implies that similar to the classical case, the energy of white noise is evenly spread across all joint frequencies.
A second interesting property of JWSS processes is that stationarity is preserved through a filtering operation.
Property 2
(Generalizes Theorem 2 [17], Property 1 [19]) When a joint filter f(L_{G},L_{T}) is applied to a JWSS process X with JPSD h, the result Y remains JWSS with mean cf(0,0)1_{NT}, where c is the mean of X, and JPSD f^{2}(λ,ω) h(λ,ω).
Finally, we notice that for real processes X, which are the focus of this paper, the function h forming the joint filter should be symmetric w.r.t. ω, meaning that h(λ,ω)=h(λ,−ω). This property can be easily derived from the definition of the Fourier transform.
3.2 Relations to classical definitions
We next provide an in-depth examination of the relations between joint wide-sense stationarity, time and vertex stationarity, as well as their multivariate equivalents. For clarity, we order the rows/columns of the covariance matrix Σ such that each \(\mathbf {\Sigma }_{t_{1}, t_{2}}\) block of size N×N measures the covariance between \(\mathbf {x}_{t_{1}}\) and \(\mathbf {x}_{t_{2}}\) (see (4)).
3.2.1 Standard definitions
As we discuss below, known definitions of stationarity in time/vertex domains are particular cases of joint stationarity.
TWSS ∩VWSS ⊂JWSS. The known versions of stationarity (TWSS, VWSS) are oblivious to any structure along one of the two dimensions of X. In this manner, assuming that X is TWSS amounts to interpreting each of the N time series as a separate realization of the same process with TPSD h_{T}(ω). Similarly, if X is VWSS, then each graph signal x_{t} is taken as a separate realization of a single stochastic graph signal with VPSD h_{G}(λ) [17, 19]. It follows that, in contrast to the JWSS hypothesis, assuming that X is both TWSS and VWSS is equivalent to limiting our scope to a separable JPSD defined as the product of two univariate functions h(λ,ω)=h_{G}(λ)h_{T}(ω) (see also Fig. 1).
3.2.2 Definitions based on the product graph
As explained in Section 2, the JFT can be interpreted as a graph Fourier transform taken over a product graph whose Laplacian is L_{J}=L_{G}⊕L_{T}. This construction can give rise to two additional definitions for joint stationarity:
3.2.2.1 VWSS on a product graph.
The first is obtained by applying the VWSS definition of [17, 19] on the graph associated with L_{J}. The resulting model is not sufficiently general to generate the full spectrum of JWSS processes. The reason is that, whereas the JPSD h(λ,ω) can be any two-dimensional nonnegative function, the JPSD of any VWSS process on L_{J} is necessarily one-dimensional (the eigenvalues of L_{J} are the sums of all combinations of the eigenvalues of L_{G} and L_{T}); see Fig. 1 for a pictorial demonstration and “Appendix: Univariate vs multivariate JPSD” for examples from real data. The same reasoning also holds for alternative products between graphs, such as the strong and Kronecker products [14].
3.2.2.2 Covariance diagonalized by the product graph Fourier transform.
The second definition, which we refer to as JWSS-alternate, entails asserting that the covariance matrix Σ can be diagonalized by the JFT, i.e., the eigenbasis of L_{J}. This differs from the JWSS definition only in the case of graph Laplacian eigenvalue multiplicities: whenever the graph Laplacian features repeated eigenvalues, for Definition 1, the degrees of freedom of the JPSD h decrease, as necessarily h(λ_{1},ω)=h(λ_{2},ω) when λ_{1}=λ_{2}. This restriction is motivated by the following observation: for an eigenspace with multiplicity greater than one, there exists an infinite number of possible eigenvectors corresponding to the different rotations in the space, and the JPSD is in general ill-defined. The condition h(λ_{1},ω)=h(λ_{2},ω) when λ_{1}=λ_{2} deals with this ambiguity, as it ensures that the JPSD is the same independently of the choice of eigenvectors. On the contrary, with JWSS-alternate, one should construct an arbitrary basis of each eigenspace with multiplicity and set^{Footnote 7} h(λ_{1},ω)≠h(λ_{2},ω). This approach, which was followed in [38], features more degrees of freedom at the expense of the loss of the filtering interpretation and higher computational complexity: one may no longer use filters to estimate the JPSD (without reverting to Definition 1), whereas using the JFT to diagonalize the covariance scales like O(N^{3}+N^{2}T+NT log(T)). In our setting, by contrast, the PSD estimation complexity can be reduced to nearly linear in the number of edges E and timesteps T (see “Appendix: Implementation details of the JPSD estimator”).
Nevertheless, we should mention that the differences mentioned above are mostly academic. Eigenvalue multiplicities occur mainly when graph automorphisms exist. In the absence of such symmetries (e.g., in the graphs used in our experiments), the two definitions yield the same outcome.
3.2.3 Multivariate definitions
On the other hand, joint stationarity can itself be derived as the combination of two multivariate versions of time/vertex stationarity, which we refer to respectively as MTWSS (see [25]) and MVWSS. Before defining them formally in Definitions 2 and 3, let us state our result:
Theorem 1
A joint process X is JWSS if and only if it is MTWSS and MVWSS.
To put this in context, we examine the two multivariate definitions independently.
(a) JWSS ⊂MTWSS. The covariance matrix of a JWSS process has a block circulant structure, as \(\mathbf {\Sigma }_{t_{1},t_{2}} = \mathbf {\Sigma }_{\delta,1} = \mathbf {\Gamma }_{\delta }\), where δ=t_{1}−t_{2}+1. Hence, Σ can be written as
implying that correlations only depend on δ and not on any time localization. This property is shared by multivariate time widesense stationary processes:
Definition 2
(MTWSS [25]) A joint process \(\mathbf {X}= \left [ \mathbf {x}_{1}, \mathbf {x}_{2}, \ldots, \mathbf {x}_{T} \right ] \in \mathbb {R}^{N\times T}\) is multivariate time wide-sense stationary (MTWSS), if and only if the following two properties hold:

(a)
The expected value is constant as E[x_{t}]=c1 for all t.

(b)
For all t_{1},t_{2}, the second moment satisfies \( \mathbf {\Sigma }_{t_{1},t_{2}} = \mathbf {\Sigma }_{\delta,1} = \mathbf {\Gamma }_{\delta }, \) where δ=t_{1}−t_{2}+1.
Similarly to the univariate case, the time power spectral density (TPSD) is defined to encode the statistics of the process in the spectral domain
We then obtain the TPSD of a JWSS process by constructing a graph filter from h while fixing ω. Setting \(h_{\omega _{\tau }}(\lambda) = h(\lambda,\omega _{\tau })\), the TPSD of a JWSS process is \(\hat {\mathbf {\Gamma }}_{\tau } = h_{\omega _{\tau }}(\mathbf {L}_{G}).\)
(b) JWSS ⊂ MVWSS. For a JWSS process, each block of Σ has to be a linear graph filter, i.e., \(\mathbf {\Sigma }_{t_{1},t_{2}}= \gamma _{t_{1},t_{2}}(\mathbf {L}_{G})\), meaning that
This is perhaps better understood when compared to the multivariate version of vertex stationarity defined below:
Definition 3
(MVWSS) A joint process \(\mathbf {X} = \left [\mathbf {x}_{1}, \mathbf {x}_{2}, \right. \left.\ldots, \mathbf {x}_{T}\right ] \in \mathbb {R}^{N\times T}\) is called multivariate vertex wide-sense stationary (MVWSS), if and only if the following two properties hold:

(a)
The expected value of each signal x_{t} is constant E[ x_{t}]=c_{t}1 for all t.

(b)
For all t_{1} and t_{2}, there exists a kernel \(\gamma _{t_{1},t_{2}}\) such that \( \mathbf {\Sigma }_{t_{1},t_{2}} = \gamma _{t_{1},t_{2}}(\mathbf {L}_{G}) \).
It can be seen that every JWSS process must also be MVWSS, or equivalently JWSS ⊂ MVWSS.
4 Joint power spectral density estimation
The joint stationarity assumption can be useful in overcoming the challenges associated with dimensionality. The main reason is that for JWSS processes, the estimation variance is decoupled from the problem size. Concretely, suppose that we want to estimate the covariance matrix Σ of a joint process x=vec(X) from K samples x_{(1)},x_{(2)},…,x_{(K)}. As we show in the following, if the process is JWSS such that Σ=h(L_{G},L_{T}), the JPSD estimation variance is O(1). This is a sharp decrease from the classical and MTWSS settings, for which K≈NT and K≈N realizations are necessary^{Footnote 8}, respectively.
This section presents two JPSD estimators. The first provides unbiased estimates at a complexity that is O(N^{3}T log(T)). The second estimator further decreases the estimation variance at the cost of a bounded bias and can be approximated with close-to-linear complexity.
4.1 Sample JPSD estimator
We define the sample JPSD estimator for every graph frequency λ_{n} and angular frequency ω_{τ} as the estimate
In case the process does not have zero mean, it should be centered by subtracting the constant signal \(c \, \mathbf {1}_{N} \mathbf {1}_{T}^{\intercal }\), where \( c = \sum _{k,i,t} \mathbf {X}_{(k)}{[\!i,t]} / (KNT).\) In that case, the unbiased estimator should involve division by K−1, instead of K as in (7).
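Concretely, the sample estimator of (7) averages the squared (unitary) JFT coefficients over the K realizations, consistent with Proposition 1(c). A sketch assuming the data are already centered (the function name is ours):

```python
import numpy as np

def sample_jpsd(Xs, UG):
    """Sample JPSD estimator: average the squared JFT coefficients
    over the K given realizations (a list of N x T arrays)."""
    T = Xs[0].shape[1]
    acc = np.zeros(Xs[0].shape)
    for X in Xs:
        Xhat = UG.conj().T @ np.fft.fft(X, axis=1) / np.sqrt(T)
        acc += np.abs(Xhat) ** 2
    return acc / len(Xs)          # N x T estimate of h(lambda_n, omega_tau)
```

For white noise the estimate is flat across all joint frequencies, matching Property 1.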
4.1.1 Analysis
For simplicity, in the following, we suppose that the process is correctly centered. As Theorem 2 claims, the sample JPSD estimator is unbiased, and its variance decreases linearly with the number of samples K.
Theorem 2
For every distribution with bounded second- and fourth-order moments, the sample JPSD estimator \(\dot {h}(\theta)\)

(a)
is unbiased, i.e., \(\mathbf {E}\left [{\dot {h}(\theta)}\right ] = h(\theta)\), and

(b)
has variance \(\mathbf {Var}\left [{\dot {h}(\theta)}\right ] = h^{2}(\theta)\, \frac { \gamma  1}{K}\),
where the constant γ depends only on the distribution of x.
Proof
For any θ=[ λ,ω], the sample estimate is
with \(\hat {\varepsilon }_{(k)}\) being independent realizations of \(\hat {\varepsilon }\), a zero mean complex random variable with unit variance. To see this, write x=h(L_{G},L_{T})^{1/2}ε, where the random vector ε has zero mean and identity covariance. Then, the complex random variable \(\hat {\varepsilon }\) is the JFT coefficient of ε corresponding to frequencies λ and ω. Unbiasedness follows by noting that \(\mathbf {E}\left [{\hat {\varepsilon }_{(k)} \hat {\varepsilon }_{(k)}^{*}}\right ] = 1\), for every k. The variance is computed similarly by exploiting the fact that different terms in the sum are independent, as they correspond to distinct realizations, and setting \(\gamma = \mathbf {E}\left [{|\hat {\varepsilon }|^{4}}\right ]\). □
For the standard case of a Gaussian joint process, we provide an exact characterization of the distribution.
Corollary 1
For every Gaussian JWSS process, the sample JPSD estimate follows a Gamma distribution with shape K/2 and scale 2h(θ)/K. The estimation error variance is equal to \(\mathbf {Var}\left [{ \dot {h}(\theta)}\right ] = 2\, h^{2}(\theta)/K\).
Proof
We continue in the context of the proof of Theorem 2. For a Gaussian distribution, \(\hat {\varepsilon }\) is centered and scaled Gaussian and thus \(\hat {\varepsilon }^{2}\) is a chi-squared random variable with 1 degree of freedom. Our estimate is, therefore, a scaled sum of i.i.d. chi-squared variables and corresponds to a Gamma distribution. The corollary then follows directly. □
Observe that the variance depends linearly on the fourth-order moment of \(\hat {\varepsilon }\) (see proof of Theorem 2) and is inversely proportional to the number of samples, but it is independent of N and T. This implies that \(\|{\mathbf {\Sigma } - \dot {\mathbf {\Sigma }}}\|_{2}\) can be made arbitrarily small using K=O(1) samples. In the following, we discuss how to achieve an even smaller variance by exploiting the properties of h(θ).
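As a quick numerical illustration of Theorem 2 and Corollary 1, the following Monte Carlo sketch simulates the sample estimator at a single joint frequency for a real Gaussian process (the value of h, K, and the number of trials are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
h = 1.7           # true JPSD value at one fixed joint frequency (illustrative)
K = 8             # number of realizations per estimate
trials = 200_000  # Monte Carlo repetitions

# Sample estimator: average of K squared spectral coefficients, each scaled
# so that E[coef^2] = h (real Gaussian case of Theorem 2 / Corollary 1).
eps = rng.standard_normal((trials, K))
h_dot = h * np.mean(eps ** 2, axis=1)

mean_est = h_dot.mean()  # should be close to h         (unbiasedness)
var_est = h_dot.var()    # should be close to 2 h^2 / K (gamma = 3, Gaussian)
```

With γ=3 for a real Gaussian, the predicted variance (γ−1)h²/K = 2h²/K matches the empirical one.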
4.2 Convolutional JPSD estimator
When the number of available realizations K is small (even 1), one may make use of additional assumptions on the JPSD to obtain reasonable estimates. To this end, we next present a parametric JPSD estimator that allows us to trade off bias for variance.
Before delving into JWSS processes, it is helpful to consider the purely temporal case. For a TWSS process, it is customary to assume that the autocorrelation function has support L that is a few times smaller than T. Then, cutting the signal into \(\frac {T}{L}\) smaller parts and computing the average estimate reduces the variance (by a factor of \(\frac {T}{L}\)), without sacrificing frequency resolution. This basic idea stems from two established methods used to estimate the PSD of a temporal signal, namely Bartlett’s and Welch’s method [40, 41]. Averaging across different windows is equivalent to smoothing the TPSD by convolving it with a window in the frequency domain: this results in attenuation of the correlation for long delays, enforcing localization in the time domain.
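The segment-averaging idea can be sketched as follows (a minimal Bartlett-style version with non-overlapping rectangular segments; Welch's method additionally overlaps and tapers them; the process and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 4096, 256          # total length and assumed autocorrelation support

# Toy TWSS process: white noise through a short moving-average filter,
# so the true TPSD is a smooth (low-pass) function of frequency.
x = np.convolve(rng.standard_normal(T + 8), np.ones(9) / 3.0, mode="valid")

# Bartlett-style averaging: split into T/L non-overlapping segments and
# average the squared DFT magnitudes; the variance drops by roughly T/L
# compared to a single length-T periodogram.
segments = x[: (T // L) * L].reshape(-1, L)
psd = np.mean(np.abs(np.fft.rfft(segments, axis=1)) ** 2, axis=0) / L
```

Averaging 16 segments here trades frequency resolution (L instead of T bins) for a roughly 16-fold variance reduction.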
4.2.1 Estimator
Armed with this interpretation, we proceed by smoothing the JPSD with a user-specified bivariate window g, such as a Gaussian or a disc window. The convolutional JPSD estimator computes the JPSD at joint frequency θ=(λ,ω) as
$$\ddot{h}(\theta) = \frac{1}{c_{g}(\theta)} \sum_{n,\tau} g(\theta - \theta_{n,\tau})^{2} \, \dot{h}(\theta_{n,\tau}),$$
where \(c_{g}(\theta) \triangleq \sum _{n,\tau } g(\theta - \theta _{n,\tau })^{2}\) is a normalization factor. For implementation specifics, including a discussion on the choice of the bivariate kernel g, we refer the reader to “Appendix: Implementation details of the JPSD estimator”.
The convolutional JPSD estimator is related to known PSD estimators for TWSS and VWSS processes. Denoting the Dirac function by ϕ, we have that (a) for g(θ)=ϕ(λ)·g_{T}(ω), we recover the classical TPSD estimator, applied independently for each λ, and (b) for g(θ)=g_{G}(λ)·ϕ(ω), we recover the VPSD estimator from [17], applied independently for each ω. Similar to the latter, the estimator can be closely approximated at a complexity that is linear w.r.t. the number of graph edges/nodes and, up to a logarithmic factor, linear in the number of timesteps (see “Appendix: Implementation details of the JPSD estimator”).
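As an illustration, here is a brute-force sketch of the estimator on an explicit grid of joint frequencies (names and window shape are illustrative; since the square of a Gaussian window is again Gaussian, g² is folded into a single Gaussian weight; the fast implementation in the appendix avoids this exhaustive loop):

```python
import numpy as np

def convolutional_jpsd(h_dot, lam, om, sigma=0.3):
    """Smooth a sample JPSD over the grid of joint frequencies with a
    separable Gaussian weight (playing the role of g^2), normalized by
    c_g(theta). h_dot[n, t] is the sample estimate at (lam[n], om[t])."""
    N, T = h_dot.shape
    out = np.empty_like(h_dot)
    for n in range(N):
        for t in range(T):
            w = (np.exp(-((lam - lam[n]) ** 2) / sigma ** 2)[:, None]
                 * np.exp(-((om - om[t]) ** 2) / sigma ** 2)[None, :])
            out[n, t] = np.sum(w * h_dot) / np.sum(w)  # c_g normalization
    return out
```

Consistent with the analysis below, a constant JPSD passes through unchanged (zero bias), while fluctuations faster than the window width are attenuated.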
4.2.2 Analysis
To provide a meaningful bias analysis, we introduce a Lipschitz continuity assumption on the JPSD, matching the intuition that localized phenomena tend to have a smooth representation in the frequency domain.
Theorem 3
At θ, the convolutional JPSD estimator \(\ddot {h}(\theta)\)

(a)
has bias
$$\begin{array}{*{20}l} \left|\mathbf{E}\left[{\ddot{h}(\theta) - h(\theta)}\right] \right| \leq \frac{\epsilon }{ c_{g}(\theta) } \sum_{n,\tau} g(\theta - \theta_{n,\tau})^{2} \, \|\theta - \theta_{n,\tau}\|_{2}, \end{array} $$where ε is the Lipschitz constant of h(θ), and

(b)
when the entries of \(\hat {\mathbf {X}}\) are independent random variables, its variance is
$$\begin{array}{*{20}l} \mathbf{Var}\left[\!{\ddot{h}(\theta)}\right] = \sum_{n,\tau} \frac{g(\theta - \theta_{n,\tau})^{4}}{c_{g}(\theta)^{2}} \, \mathbf{Var}\left[\!{\dot{h}(\theta_{n,\tau})}\right], \end{array} $$where \(\mathbf {Var}\left [\!{\dot {h}(\theta _{n,\tau })}\right ]\) is the variance of the sample JPSD estimator at θ_{n,τ}.
The derivations of the bias and variance are given in Lemmas 1 and 2, respectively.
We note two corner cases of interest. In the most convenient case, the JPSD is constant, and our estimator is unbiased (the Lipschitz constant ε is zero). On the other hand, if the JPSD fluctuates rapidly, the bias of the estimate will be significant unless g is close to a Dirac. Here, the sample estimator should be preferred.
We further consider as a theoretical example the case of a Gaussian JWSS process and a (spectral) disc window with bandwidth B, i.e., g_{B}(θ)=1 if \(\|\theta\|_{2} \leq \frac {B}{2}\) and 0 otherwise. Though perhaps not the most practical choice from a computational perspective, we consider here a disc window because it leads to simple and intuitive estimates.
Corollary 2
For every ε-Lipschitz Gaussian JWSS process and disc window g_{B}(θ), the convolutional estimate has bias \(\left|\mathbf{E}\left[\ddot{h}(\theta) - h(\theta)\right]\right| \leq \epsilon B / 2\) and variance \(\mathbf{Var}\left[\ddot{h}(\theta)\right] = 2\, h^{2}_{\mathcal{S}} / \left(K \, |\mathcal{S}_{\theta}|^{2}\right)\),
with set \(\mathcal {S}_{\theta } = \{ \theta _{n,\tau } \, : \, \|\theta _{n,\tau } - \theta \|_{2} \leq B/2 \}\) and \({h^{2}_{\mathcal {S}}} = \sum _{\theta _{n,\tau } \in \mathcal {S}_{\theta}} h(\theta _{n,\tau })^{2}\).
Proof
The results follow from Theorem 3 and Corollary 1 by noting that, when a disc window is used, (a) \(c_{g}(\theta) = |\mathcal {S}_{\theta }|\) and (b) g(θ−θ_{n,τ})^{2}=1 for all n,τ in the window (there are \(|\mathcal {S}_{\theta }|\) in total) and zero otherwise. The independence condition required by the variance clause of the theorem is satisfied since \(\hat {\mathbf {X}}\) is Gaussian (as a rotation \(\hat {\mathbf {x}} = \mathbf {U}_{J}^{*} \mathbf {x}\) of a Gaussian vector) with diagonal covariance. □
The above result suggests that, by selecting the window bandwidth, we can trade off bias for variance. The trade-off is particularly beneficial as long as (a) the JPSD is smooth relative to the disc size (εB≪1) and (b) the graph eigenvalues are clustered (\(|\mathcal{S}_{\theta}| \gg 1\) when h(θ)≫0).
5 Recovery of JWSS Processes
This section considers the MMSE problem of recovering a JWSS process x=vec(X) from linear measurements y corrupted by a zero-mean JWSS process w:
$$\begin{array}{*{20}l} \underset{f}{\text{minimize}} \ \ \mathbf{E}\left[ \left\| f(\mathbf{y}) - \mathbf{x} \right\|_{2}^{2} \right] \quad \text{with} \quad \mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w}, \end{array} \qquad \text{(P0)} $$
where the function f is linear in y, i.e., there exists a matrix W and a vector b such that f(y)=Wy+b. We remark that (a) for A binary diagonal and w=0, (P0) is an interpolation problem, (b) for A=I and w white noise, (P0) is a denoising problem, and (c) for A diagonal with A_{ii}=1 if i≤Nt and zero otherwise and w=0, it corresponds to forecasting. We mainly consider the first two problems since, for forecasting, it is more computationally efficient to utilize autoregressive models [29].
The minimum mean-squared linear estimate is known to be
$$\begin{array}{*{20}l} \dot{\mathbf{x}} = \mathbf{\Sigma}_{\mathbf{x}\mathbf{y}} \mathbf{\Sigma}_{\mathbf{y}}^{-1} \left( \mathbf{y} - \bar{\mathbf{y}} \right) + \bar{\mathbf{x}}, \end{array} \qquad (11) $$
with the definitions Σ_{y}=AΣA^{∗}+Σ_{w} and Σ_{xy}=ΣA^{∗}. Obtaining \(\dot {\mathbf {x}}\) therefore entails solving a linear system in the matrix Σ_{y}, which, naively approached, has O(N^{2}T^{2}) complexity. In addition, the condition number of Σ_{y} can be large, rendering direct inversion unstable. For instance, this may happen when one attempts to invert a smoothing operation A that severely attenuates part of the signal’s spectrum.
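The estimate can be sketched on a small synthetic interpolation problem (the prior covariance, mask, and noise level below are illustrative stand-ins for h(L_G, L_T), A, and Σ_w; means are taken as zero for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40

# Toy prior covariance Sigma = U diag(p) U^T (a stand-in for h(L_G, L_T)).
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
p = np.linspace(2.0, 0.1, n)
Sigma = U @ np.diag(p) @ U.T

A = np.diag((rng.random(n) < 0.6).astype(float))  # sampling mask: interpolation
Sigma_w = 0.01 * np.eye(n)                        # white observation noise

x = U @ (np.sqrt(p) * rng.standard_normal(n))     # one realization of the prior
y = A @ x + 0.1 * rng.standard_normal(n)

# Linear MMSE estimate: x_hat = Sigma_xy Sigma_y^{-1} y  (zero means)
Sigma_y = A @ Sigma @ A.T + Sigma_w
x_hat = Sigma @ A.T @ np.linalg.solve(Sigma_y, y)
```

Observed entries are essentially reproduced, while the prior correlations fill in the masked ones; this dense solve is exactly the O(N²T²)-type cost that the iterative scheme below avoids.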
We next discuss how to deal with these issues:
5.1 Decreasing the complexity
Thankfully, even if Σ_{y} is not always sparse, we can approximate its multiplication by a vector without actually computing it, as (a) A is, for many applications (denoising, prediction, forecasting), sparse, and (b) per our assumption, Σ and Σ_{w} are joint filters and can therefore be implemented at a complexity that is (up to logarithmic factors) linear in the number of edges E and timesteps T [20, 34, 36]. Therefore, if we employ an iterative method such as the (preconditioned) conjugate gradient to compute the solution, the complexity of each iteration will be linear in the problem size.
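A minimal sketch of this matrix-free approach using SciPy's conjugate gradient with a `LinearOperator` (here, for brevity, the joint filter is applied in an explicit spectral stand-in basis; the actual method would use fast polynomial filtering so that each matvec costs roughly O(E + T log T)):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n = 500
lam = np.linspace(0.0, 2.0, n)            # stand-in joint frequencies
h = np.exp(-lam)                          # stand-in JPSD (filter response)
mask = (rng.random(n) < 0.5).astype(float)  # A: binary sampling mask
sigma2 = 0.1                              # white-noise variance

# Apply Sigma_y = A Sigma A* + sigma^2 I without ever forming the matrix.
def apply_Sigma_y(v):
    return mask * (h * (mask * v)) + sigma2 * v

op = LinearOperator((n, n), matvec=apply_Sigma_y)
b = rng.standard_normal(n)
sol, info = cg(op, b)                     # info == 0 on convergence
```

Each CG iteration only calls `apply_Sigma_y`, so the per-iteration cost is that of the fast filter rather than a dense O(N²T²) multiply.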
5.2 Singular or badly conditioned Σ _{y}
We choose the solution with the minimal residual by substituting the inverse \(\mathbf {\Sigma }_{{y}}^{-1}\) in (11) with the pseudoinverse \(\mathbf {\Sigma }_{{y}}^{+}\). However, instead of solving the normal equations \( \dot {\mathbf {x}} = \mathbf {\Sigma }_{\mathbf {x}\mathbf {y}} \left (\mathbf {\Sigma }_{{y}}^{2}\right)^{-1} \mathbf {\Sigma }_{{y}} \left (\mathbf {y} - \bar {\mathbf {y}}\right)+ \bar {\mathbf {x}}, \) which has the effect of significantly increasing the condition number of our matrix, we suggest employing the minimal residual conjugate gradient method for symmetric matrices [42]. For badly conditioned covariance matrices, an alternative solution is to rewrite the problem as a regularized least squares problem
and solve it using a generalization of the fast iterative shrinkage-thresholding algorithm (FISTA) scheme [43–45]. This scheme was shown to converge to the correct solution when w is white noise. More details about the optimization procedures can be found in [17]. Similarly, in the noiseless case, one removes the term \(\|\mathbf {A}\mathbf {z} - \mathbf {y}\|_{2}^{2}\) in (12) and instead introduces the constraint Az=y. The resulting optimization problem can be solved using a Douglas-Rachford scheme [46].
6 Experiments
6.1 Joint power spectral density estimation
The first step in our evaluation is to analyze the efficiency of JPSD estimation. Our objective is twofold. First, we aim to study the role of the different method parameters in the estimation accuracy and computational complexity, essentially providing practical guidelines for their usage. In addition, we wish to illustrate the usefulness of the joint stationarity assumption, even when the graph is only approximately known.
6.1.1 Variancebiascomplexity tradeoffs
To validate the analysis of Section 4 for the computational and accuracy trade-offs inherent to our JPSD estimation method, we performed numerical experiments with random geometric graphs of N=256 vertices (we build a 10-nearest-neighbor graph, weighted by a radial basis function kernel tuned so that the average weighted degree is slightly above 7) and JWSS processes (T=128 timesteps). Though our approach works with any JPSD, including high-frequency ones, in this experiment, we consider a stochastic process generated by the discrete damped wave equation with a non-separable JPSD h(λ,ω)= exp(−ω/2) cos(ω acos(1−λ)).
6.1.2 Variancebias
First, we examine the relation between the real JPSD h and the convolutional estimate \(\ddot {h}\) obtained using the “fast” method described in “Appendix: Implementation details of the JPSD estimator”. We use the following metrics:
where H=h(Λ_{G},Ω), \(\ddot {\mathbf {H}} = \ddot {h}(\mathbf {\Lambda }_{G}, \mathbf {\Omega }),\) and \(\tilde {\mathbf {E}} [{\cdot }]\) is the empirical expectation computed over 20 independent experiments. We remind the reader that there are two parameters influencing the performance of the convolutional JPSD estimator (see “Appendix: Implementation details of the JPSD estimator”): the window size L, corresponding to our assumption on the support length of the autocorrelation in time, and the number of graph filters F used to capture the power density in the graph spectral dimension. As discussed in Theorem 3, the bias will be small as long as the JPSD is a smooth function (i.e., it has a small Lipschitz constant ε), in which case one may opt for small L and F. Figure 2a–d reports four key metrics for an exhaustive search over L,F combinations. We observe that large values of F and L generally reduce the estimation error (Fig. 2a) because they result in reduced bias (Fig. 2b). Nevertheless, setting the parameters to their maximum values is not suggested, as the variance is increased (Fig. 2c).
6.1.3 Complexity
In Fig. 2d, we see that utilizing a large number of filters (i.e., large F) increases the average execution time. Figure 3 delves further into the issue of scalability. In particular, we vary the number of vertices from 1000 to 9000 and focus on a process with JPSD \(h(\theta) = e^{\lambda /\lambda _{\text {max}}} \, e^{5\, \omega ^{2}}\). We then examine the min/median/max execution time of the convolutional JPSD estimator for increasing problem sizes when run on a desktop computer, repeated 10 times. We compare two implementations. The first, which naively performs the convolution in the spectral domain, uses the eigenvalue decomposition and therefore scales quadratically with the number of vertices. Due to its optimized code and simplicity, this should be the method of choice when N is small. For larger problems, we suggest using the fast implementation. As shown in the figure, this scales linearly with N (here E=O(N)) when the number of filters F and timesteps T are held constant. In this experiment, we set L to 64.
6.1.4 How to choose L and F?
Absent computational constraints, one should choose the parameter combination that minimizes the Akaike information criterion (AIC) score \(\text {AIC} = 2FL - 2\ln \left (\ddot {\ell }\right)\), where \(\ddot {\ell }\) is the distribution-dependent estimated likelihood \(\ddot {\ell } = \mathbf {P}\left (\mathbf {x} \,|\, \ddot {\mathbf {\Sigma }}\right)\) and \(\ddot {\mathbf {\Sigma }}\) is the estimated covariance based on the convolutional JPSD estimator with parameters L and F [47]. This procedure is often infeasible, as it is based on computing each model’s log-likelihood and thus entails estimating one JPSD for each parameterization under consideration (as well as knowing the distribution type). We have found experimentally that setting F=min(N,50) provides a good trade-off between computational complexity and error. On the other hand, we suggest setting L to an upper bound of the autocorrelation support.
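A sketch of the AIC computation under a zero-mean Gaussian model (the function name and interface are ours; FL counts the JPSD parameters as in the text):

```python
import numpy as np

def aic_score(x, Sigma_est, F, L):
    """AIC = 2*F*L - 2*ln(likelihood), with the likelihood evaluated under a
    zero-mean Gaussian model with estimated covariance Sigma_est; F*L is the
    number of free JPSD parameters of the (L, F) parameterization."""
    n = x.size
    _, logdet = np.linalg.slogdet(Sigma_est)
    loglik = -0.5 * (n * np.log(2 * np.pi) + logdet
                     + x @ np.linalg.solve(Sigma_est, x))
    return 2 * F * L - 2 * loglik
```

At equal fit, the score grows linearly with the parameter count 2FL, so richer parameterizations must earn their extra parameters through a higher likelihood.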
6.1.5 Learning from few realizations and a noisy graph
Figure 4 illustrates the benefit of a joint stationarity prior as compared to (a) an empirical covariance estimator which makes no assumptions about the data and (b) the MTWSS process estimator with optimal bandwidth [22]. As expected, accurate estimation is challenging when the number of realizations is much smaller than the number of problem variables (NT), returning errors above one for the empirical estimator. Introducing stationarity priors regularizes the estimation resulting in more stable estimates.
What is perhaps surprising is that, even when the graph (and U_{G}) is known only approximately, estimating the second-order moment of the distribution using the joint stationarity assumption remains beneficial. To illustrate this phenomenon, we also plot the estimation error when using a noisy graph (we corrupted the weighted adjacency matrix by Gaussian noise, resulting in an SNR of 10 dB). Undoubtedly, introducing noise to the graph edges negatively affects estimation by introducing bias. Still, even with noise, the proposed method significantly outperforms purely time-based methods when fewer than NT realizations are available.
6.2 Recovery performance on three datasets
We apply our methods on three diverse datasets featuring multivariate processes evolving over graphs: (a) a weather dataset depicting the temperature of 32 weather stations over 1 month, (b) a traffic dataset depicting high-resolution daily vehicle flow over 4 weekdays, and (c) SIRS-type epidemics in Europe. Our experiments aim to show that joint stationarity is a useful model, even for datasets which may violate the strict conditions of our definition, and that it can yield a significant improvement in recovery performance as compared to time- or vertex-based stationarity methods.
6.2.1 Experimental setup
We split the K realizations of each dataset into a training set of size p_{t}K and a test set of size (1−p_{t})K, respectively. The training set is used to estimate the JPSD. Then, in the first two experiments, we attempt to recover the values of p_{d}NT variables randomly discarded from the test set. This corresponds to A being a binary diagonal matrix and w=0 in Problem P0, for which the solution is not given by a Wiener filter. In the third experiment, we instead consider a denoising problem with A=I and w being a random Gaussian vector. In each case, we report the RMSE for the recovered signal normalized by the ℓ_{2}-norm of the original signal. We compare our joint method with the sample and convolutional JPSD estimators to univariate time/vertex stationarity [17]. These methods solve the statistical recovery problem under the assumption that signals are stationary in the time/vertex domains, but considering different vertices/timesteps as independent. These methods are known to outperform non-model-based methods, such as Tikhonov regularization (ridge regression) and total-variation regularization (lasso) over the time or graph dimensions [12, 13]. We also compare to the more involved MTWSS model [25], where the values at different vertices are correlated and the covariance is block circulant of size NT×NT (see Definition 2). The latter is only shown for the weather dataset, as the large number of variables present in the other datasets (e.g., ≈10^{8} parameters for the traffic dataset) prohibited computation. We remark that the graph Laplacians we considered did not possess eigenvalue multiplicities, meaning that the results obtained using the JWSS-alternate definition are identical to those with JWSS using a sample JPSD estimator; thus, we do not include JWSS-alternate in our comparison.
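The setup can be sketched as follows (the helper names are ours; the split sizes and the normalized RMSE follow the description above):

```python
import numpy as np

def normalized_rmse(x_true, x_rec):
    # RMSE of the recovered signal, normalized by the l2-norm of the original.
    return np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)

def split_and_mask(X, p_t=0.2, p_d=0.3, rng=None):
    # X: (K, N*T) realizations. Keep a p_t fraction for JPSD training and
    # randomly discard a p_d fraction of the test variables (this plays the
    # role of the binary diagonal A with w = 0 in Problem P0).
    rng = rng or np.random.default_rng()
    K = X.shape[0]
    k_train = max(1, int(p_t * K))
    train, test = X[:k_train], X[k_train:]
    mask = rng.random(test.shape) >= p_d     # True = observed entry
    return train, test, mask
```

A recovery method is then scored by `normalized_rmse` on the entries where `mask` is False.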
6.2.2 Molene dataset
The French national meteorological service has published in open access a dataset^{Footnote 9} with hourly weather observations collected during the month of January 2014 in the region of Brest (France) [18]. The graph was built from the coordinates of the weather stations by connecting all the neighbors within a given radius with a weight function W_{G}[ i_{1},i_{2}]=exp(−k d(i_{1},i_{2})^{2}), where d(i_{1},i_{2}) is the Euclidean distance between the stations i_{1} and i_{2}. Parameter k was adjusted to obtain an average degree around 5 (k, however, is not a sensitive parameter). We split the data into K=15 consecutive periods of T=48 h each. As sole preprocessing, we removed the mean (over time and stations) of the temperature^{Footnote 10}.
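The graph construction can be sketched as follows (function name, radius, and k are illustrative; the text tunes k to reach an average degree around 5):

```python
import numpy as np

def station_graph(coords, radius, k):
    # W[i, j] = exp(-k d(i, j)^2) for station pairs within `radius` of each
    # other, 0 otherwise, mirroring the Molene construction described above.
    d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    W = np.exp(-k * d2) * (d2 <= radius ** 2)
    np.fill_diagonal(W, 0.0)          # no self-loops
    return W
```

The result is a symmetric weighted adjacency matrix from which the (combinatorial) Laplacian L_G = D − W follows directly.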
We first test the influence of training set size p_{t}, while discarding p_{d}=30% of the test variables. As seen in Fig. 5a, due to its large sample complexity, the MTWSS approach provides good recovery estimates when the number of realizations is large, approaching that of joint stationarity, but suffers for small training sets (though not shown in the figure, the relative mean error was 9.8 when only p_{t}=10% of the data was used for training). Due to their stricter modeling assumptions, univariate stationarity methods returned relevant estimates when trained from few realizations but exhibited larger bias. The convolutional JPSD estimator can be seen to improve upon the sample estimator when the amount of data used for JPSD estimation is small (less than 20%). For bigger training sets, the two estimators yield similar accuracy. Figure 5b reports the achieved errors for recovery problems with progressively larger percentage 5%≤p_{d}≤95% of discarded entries for a training percentage of p_{t}=20%. We can observe that the error trends are consistent across all cases.
6.2.3 Traffic dataset
The California department of transportation publishes high-resolution traffic flow measurements (number of vehicles per unit interval) from stations deployed on the highways of Sacramento^{Footnote 11}. We focused on 727 stations over four weekdays in the period 01–06 April 2016. Starting from the road connectivity network obtained from OpenStreetMap.org, we constructed one time series for each highway segment by setting the flow over it to be a weighted average of all nearby stations, while respecting traffic direction. This resulted in a graph of N=710 vertices and a total of T=24×12 measurements per day for K=4 days. We used the convolutional JPSD estimator with parameters L=T/2 and F=75, which were experimentally found to give good performance on the training set.
Figure 6a and b depict the mean recovery errors when the training sets were 1 (p_{t}= 25%) and 3 days (p_{t}= 75%) respectively. The strong temporal correlations present in highway traffic were useful in recovering missing values. Considering both the temporal and spatial dimensions of the problem resulted in accurate estimates, with less than 0.04 error when p_{d}=50% of the data were removed and the PSD was estimated from 1 day. As expected, the convolutional estimator is efficient in the case when the training set is small (1 out of 4 days used for training): assuming that the JPSD is smooth helps to reduce estimation variance and computational complexity but can lead to a slight decrease in accuracy when a large amount of training data is available.
6.2.4 SIRS epidemic
Our third experiment simulates the spread of an infectious disease over N=200 major cities of Europe, as predicted by the susceptibleinfectedrecoveredsusceptible (SIRS) model, one of the standard models used to study epidemics. We intend to examine the predictive power of the considered methods when dealing with different realizations of a nonlinear and probabilistic process over a graph (the data are fictitious). We parameterized SIRS as follows: length of infection period, 2 days; length of immunity period, 10 days; probability of contagion across neighboring cities per day, 0.005; and total period, T=180 days. We generated a total of K=10 infections, all having the same starting point.
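A minimal SIRS sketch over a graph with the parameters above (this is a simplified stand-in, not the exact simulator used for the dataset: edge weights are ignored and contagion acts independently per infected neighboring city):

```python
import numpy as np

def simulate_sirs(W, T=180, t_inf=2, t_imm=10, p=0.005, seed_city=0, rng=None):
    """Discrete-time SIRS over a graph: each city is susceptible, then
    infected for t_inf days, then immune for t_imm days; `p` is the daily
    contagion probability per infected neighboring city."""
    rng = rng or np.random.default_rng()
    N = W.shape[0]
    timer = np.zeros(N, dtype=int)        # remaining infected+immune days
    timer[seed_city] = t_inf + t_imm      # seed the epidemic
    series = np.zeros((N, T))
    adj = (W > 0).astype(int)
    for t in range(T):
        infected = timer > t_imm          # still in the infectious window
        series[:, t] = infected
        n_sick = adj @ infected.astype(int)
        # Each susceptible city escapes each infected neighbor with prob 1-p.
        catch = (timer == 0) & (rng.random(N) < 1 - (1 - p) ** n_sick)
        timer = np.maximum(timer - 1, 0)
        timer[catch] = t_inf + t_imm
    return series
```

Each run returns an N×T binary infection signal, i.e., one realization of the process whose recovery is studied below.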
In contrast to the previous experiments, here, we attempt to recover the data after they have been corrupted with additive Gaussian noise. Figure 7a and b depict the mean recovery error as a function of the input signaltonoise ratio (SNR), respectively, when p_{t}= 50% and p_{t}= 90% of the data were used for training. As in previous experiments, the joint stationarity attains better recovery. The difference becomes clearer for low SNR, in which case the error is decreased (roughly) by a factor of two w.r.t. the best alternative.
6.2.5 Code
We remark that our simulations were done using the GSPBOX [48], the UNLocBoX [49], and the LTFAT [50]. The code reproducing our experiments is available at https://lts2.epfl.ch/stationarytimevertexsignalprocessing/.
7 Conclusion
This paper proposed a new definition of widesense stationarity appropriate for multivariate processes supported on the vertices of a graph.
Our model presents two key benefits. First, the estimation and recovery of JWSS processes is efficient, both in terms of estimation variance and computational complexity. In particular, the JPSD of a JWSS process can be estimated from few observations at a complexity that is roughly linear in the number of graph edges and timesteps. After the PSD has been estimated, the linear MMSE recovery problems of interpolation and denoising can be solved in the same asymptotic complexity. Second, joint stationarity is a versatile model, able to capture nontrivial statistical relations in the temporal and vertex domains. Our experiments suggested that we can model real spatiotemporal processes as jointly stationary without significant loss. Specifically, the JWSS prior was found to be more expressive than (univariate) TWSS and VWSS priors and improved upon the multivariate time stationarity prior when the dimensionality was large but the model estimation was based on few observations of the process.
8 Appendix
8.1 Implementation details of the JPSD estimator
A straightforward implementation requires O(N^{3}) operations for computing the eigenbasis of our graph, O(N^{2}×KT) for performing KT independent GFTs, O(T log(T)×KN) for KN independent FFTs, and O(N^{2}T^{2}) for the convolution.
This section describes how to approximate a convolutional estimate using a number of operations that is linear to ET. Before describing the exact algorithm, we note two helpful properties of the estimator. First, we can compute \(\ddot {h}(\theta)\) by obtaining estimates for each X_{(k)} independently and then averaging over k:
As we will see in the following, the terms inside the outer sum can be approximated efficiently, avoiding the need for an expensive JFT. In addition, when the convolution window is separable, i.e., g(θ)=g_{G}(λ)·g_{T}(ω), as is assumed in this contribution, the joint convolution can be performed successively (and in any order) in the time and vertex domains
where \(c_{g}(\theta) = c_{g_{T}}(\omega) \cdot c_{g_{G}}(\lambda)\phantom {\dot {i}\!}\). Exploiting this property, we treat the implementation of the two convolutions separately and the presented algorithms can be combined in any order.
Fast time convolution. This is the textbook case of TPSD estimation, solved by Welch’s method [41]. The method entails splitting each time series into equally sized overlapping segments and averaging, over segments, the squared amplitude of the Fourier coefficients. The procedure is equivalent to averaging (over time) the squared coefficients of a short-time Fourier transform (STFT) with half-overlapping windows w_{T}, defined such that \(\text {DFT}(w_{T})(\omega)=g_{T}(\omega)\) [51, 52]. Let L be the support of the autocorrelation or, equivalently, the number of frequency bands. We suggest using the iterated sine window
as it turns the STFT into a tight operator. In order to get an estimate of \(\ddot {h}\) at unknown frequencies, we interpolate between the L known points using splines [53].
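The spline interpolation step can be sketched with SciPy (the coarse PSD values below are illustrative stand-ins for the L band estimates):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# PSD known at L coarse frequency bands; interpolate to a fine grid with a
# cubic spline to evaluate the estimate at unseen frequencies.
L = 16
omega_known = np.linspace(0, np.pi, L)
psd_known = np.exp(-omega_known)                 # stand-in band estimates

spline = CubicSpline(omega_known, psd_known)
omega_fine = np.linspace(0, np.pi, 512)
psd_fine = np.maximum(spline(omega_fine), 0.0)   # clip: a PSD is non-negative
```

For smooth spectra, the interpolation error decays quickly with L, so a modest number of bands suffices.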
Fast graph convolution. Inspired by the technique of [17], we perform the graph convolution using an approximated graph filtering operation [54] that scales linearly with the number of graph edges E. In particular,
We suggest using the Gaussian window
with σ^{2}=2(F+1)λ_{max}/F^{2}. As we did before, we only compute the above for F=O(1) different values of λ and approximate the rest using splines. As the eigenvalues are not known, we need a stable way to estimate \(c_{g_{G}}(\lambda)\). We obtain an unbiased estimate by filtering Q=O(1) random Gaussian signals on the graph \(\epsilon \in \mathbb {R}^{N} \sim \mathcal {N} (0, \mathbf {I}_{N})\), such that
with variance equal to \(2 \sum _{n=1}^{N} g^{4}(\lambda  \lambda _{n}) / Q\). We omit the analysis, as it is similar to that in Theorem 2. According to our numerical evaluation, the approximation error introduced by the latter estimator and spectral filtering is almost negligible for smooth JPSD.
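The stochastic estimate of \(c_{g_{G}}(\lambda)\) can be sketched as follows (for brevity, the filtering is written in the spectral domain, which presumes knowledge of the eigenvalues; the actual method uses a polynomial approximation precisely to avoid them, and all sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N, Q = 64, 2000
lam = np.sort(rng.random(N)) * 2.0           # stand-in graph eigenvalues
lam0, sigma2 = 1.0, 0.1
g2 = np.exp(-((lam - lam0) ** 2) / sigma2)   # g^2 evaluated at the eigenvalues

# c_g(lam0) = sum_n g^2(lam0 - lam_n). Without access to the eigenvalues,
# filter Q white signals by g(L) and average the squared norms, since
# E[ ||g(L) eps||^2 ] equals exactly that sum.
eps_hat = rng.standard_normal((N, Q))
c_est = np.mean(np.sum(g2[:, None] * eps_hat ** 2, axis=0))
```

This is a Hutchinson-style trace estimate: its variance decays as 1/Q, so a handful of random signals already gives a usable normalization constant.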
The total complexity is (a) O(TKF×E+QF×E)=O((TK+Q)EF) for the fast graph convolutions, where the TK and Q convolutions are performed in order to estimate the quantities in (13) and (15) for F different values of λ, and (b) O(NK×T log(L)) for the fast time convolution, corresponding to NK STFTs. Thus, in total, the complexity of the fast convolutional JPSD estimator is O(TKFE+QEF+NKT log(L)). Furthermore, when Q,F,K,L are constants, the complexity simplifies to O(TE). We remark that, though asymptotically superior, the fast implementation can be significantly slower when the number of variables is small. Our experiments demonstrate that it should be preferred for N larger than a few thousand (see Fig. 3).
8.2 Univariate vs multivariate JPSD
As discussed in Section 3.2, one could potentially pose a VWSS hypothesis on a product graph to define joint stationarity, but the direct effect of such a choice is that the spectral domain becomes one-dimensional instead of two-dimensional. To see why this is problematic, in Fig. 8, we plot the two different representations of the JPSD for the three datasets featured in our experiments. It can be seen that the 2D representation (corresponding to the JWSS hypothesis) is more structured than its 1D counterpart. More importantly, a JWSS hypothesis leads to a smoother JPSD: this is what our convolutional JPSD estimator exploits to decrease the estimation variance.
8.3 Deferred proofs
Proof of Proposition 1
In order to simplify the notation in the next proof, we define the unravel function \(u:\mathbb {Z}^{2}\rightarrow \mathbb {Z}\) that maps the double index (n,τ) of the matrix indexing of X to its vector index u(n,τ)=(τ−1)N+n, i.e., X[ n,τ]=vec(X)[ u(n,τ)].
By construction of the JFT basis, \(\hat {\mathbf {X}}[0,0]\) captures the DC offset of a signal, and condition (a) is equivalent to stating that E[ x]=c1_{NT}. Moreover, if the graph is connected and (a) holds, at least one of \(\mathbf {E}\left [{\hat {\mathbf {X}}[\!n_{1},\tau _{1}]}\right ]\) and \(\mathbf {E}\left [{\hat {\mathbf {X}}}[\!n_{2},\tau _{2}]\right ]\) must be zero when n_{1}≠n_{2} or τ_{1}≠τ_{2} and
Therefore, condition (b) is equivalent to stating that \(\mathbf {\Sigma } = \mathbf {U}_{J} \mathbf {D} \mathbf {U}_{J}^{*}\) for some diagonal matrix D. In addition, (c) asserts that D[ u(n,τ),u(n,τ)]=h(λ_{n},ω_{τ}) for every n,τ. Thus, taken together, (b) and (c) state that \(\mathbf {\Sigma } = \mathbf {U}_{J} \mathbf {D} \mathbf {U}_{J}^{*} = \mathbf {U}_{J} h(\mathbf {\Lambda }_{G}, \mathbf {\Lambda }_{T}) \mathbf {U}_{J}^{*} = h(\mathbf {L}_{G}, \mathbf {L}_{T})\), which is the second moment condition of a JWSS process. □
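The unravel map used in the proof can be sanity-checked numerically (zero-based indices; vec(X) stacks the columns of X, i.e., Fortran order):

```python
import numpy as np

N, T = 3, 4
X = np.arange(N * T).reshape(N, T)   # X[n, tau], zero-based indices here

def u(n, tau):
    # Zero-based analogue of u(n, tau) = (tau - 1) N + n from the proof.
    return tau * N + n

vecX = X.flatten(order="F")          # vec(X): column-major vectorization
```

Every entry satisfies X[n, tau] == vec(X)[u(n, tau)], confirming the indexing convention.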
Proof of Theorem 1
For the first moment, it is straightforward to see that E[X[ n,t]]=c for all n,t if and only if both E[X[ n,t]]=c_{t} and E[X[ n,t]]=c_{n} hold for all n,t. For the second moment, the covariance matrix of a JWSS process is by definition the linear operator associated with a joint filter Σ=h(L_{G},L_{T}). Using (5), \(\mathbf {\Sigma }_{t_{1},t_{2}}\) can be written as
where δ=t_{1}−t_{2}+1 and
Hence, the process satisfies the (b) statement of Definition 2 (TWSS) and 3 (VWSS). Conversely, if a process is TWSS and VWSS, we have \(\mathbf {\Sigma }_{t_{1},t_{2}}= \gamma _{t_{1},t_{2}}(\mathbf {L}_{G}) = \gamma _{\delta }(\mathbf {L}_{G})\) with the same δ as before. As a result, using (5), its covariance matrix can be written as a joint filter h(L_{G},L_{T}), where
and hence also satisfies the property of the second moment of JWSS processes. □
Proof of Property 2
The output of a filter f(L_{J}) can be written in vector form as y=f(L_{J})x. If the input signal x is JWSS, the first moment of the filter output is E[f(L_{J})x]=f(L_{J})E[ x]=f(0,0)E[ x], which remains constant, as E[ x] is constant by hypothesis. The computation of the second moment gives
which satisfies the second moment condition of JWSS processes. Above, f^{2}(Θ) is a diagonal NT×NT matrix, whose diagonal is obtained by applying the bivariate function f^{2}(·,·) on [λ_{n},ω_{τ}] for all n,τ (f can be interpreted as the frequency response of a joint filter). Matrix h_{X}(Θ) is similarly defined. □
Lemma 1
If function h(θ) is εLipschitz, then the bias is bounded by
Proof
Since h(θ) is ε-Lipschitz, we have \(|h(\theta)-h(\theta _{n,\tau })| \leq \epsilon \, \|\theta - \theta _{n,\tau }\|_{2}\). Hence, we write
where the second equality stems from \(\sum _{n,\tau } g^{2}(\theta  \theta _{n,\tau }) = c_{g}(\theta)\). □
Lemma 2
If X is a JWSS process such that the entries of \(\hat {\mathbf {X}}\) are independent random variables, the convolutional JPSD estimate at θ has variance
where \(\mathbf {Var}\left [{\dot {h}(\theta _{n,\tau })}\right ]\)is the variance of the sample JPSD estimator at θ_{n,τ}.
Proof
Set
and \(\hat {\mathbf {E}}_{(k)} = \text {mat}\left ({\hat {\epsilon }_{(k)}}\right) = \text {mat}({h(\boldsymbol {\Lambda }_{G}, \boldsymbol {\Omega })^{+1/2} \hat {\mathbf {x}}_{(k)}})\), where + denotes the pseudoinverse, \(\hat {\epsilon }_{(k)}\) is white, and mat(·) is the matricization operator. The centered random variable
is a weighted sum of centered, identically distributed random variables z_{n,τ}. Moreover, when the elements of \(\hat {\mathbf {E}}_{(k)}\) are independent, so are the variables z_{n,τ}. It follows that
which matches our claim. □
Notes
Though we use the combinatorial Laplacian in our presentation, our results can be adapted to alternative positive semidefinite matrix definitions of a graph Laplacian, such as the normalized Laplacian.
The same limitation holds for product graph constructions that do not rely on the Cartesian product [14].
Defining joint filters in terms of a product graph would imply that there is a fixed relation between angular and graph frequencies (determined by the product graph construction). As a result, in the product graph, framework filters are univariate functions.
As the correlation between two instants t_{1} and t_{2} depends only on the difference between these two instants E[x[t_{1}]x[t_{2}]]−E[x[t_{1}]]E[x[t_{2}]]=γ[t_{1}−t_{2}], the covariance matrix has to be circulant, a property that is shared by linear filters.
The only exception is that, for graphs with repeated eigenvalues, the conditions E[x]=c1 and Σ=h(L_{G}) are sufficient but not necessary for the graph stationarity definition based on the isometric graph translation [18].
More generally, in analogy to [18], the JPSD could be block diagonal, with each block being of size equal to the eigenvalue multiplicity.
Access to the raw data is possible directly from https://donneespubliques.meteofrance.fr/donnees_libres/Hackathon/RADOMEH.tar.gz
Though computing separate means (one for each data chunk) yields slightly better performance, to be consistent with the proposed model, we computed a single mean over all training data.
The data correspond to the 3rd district of California and can be downloaded from http://pems.dot.ca.gov/
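The circulant-covariance observation in the notes above admits a quick numerical check: a covariance built from γ[t_{1}−t_{2}] (indices taken modulo T) is circulant, and the DFT diagonalizes it, with eigenvalues given by the DFT of γ. The toy autocorrelation below is an illustrative assumption.

```python
import numpy as np

T = 8
t = np.arange(T)
gamma = 0.5 ** np.minimum(t, T - t)            # toy symmetric autocorrelation
Sigma = gamma[(t[:, None] - t[None, :]) % T]   # circulant TWSS covariance
F = np.fft.fft(np.eye(T)) / np.sqrt(T)         # unitary DFT matrix
D = F @ Sigma @ F.conj().T                     # diagonal in the Fourier basis
off_diag = D - np.diag(np.diag(D))             # numerically zero off the diagonal
```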
Abbreviations
AIC: Akaike information criterion
DFT: Discrete Fourier transform
GFT: Graph Fourier transform
JFT: Joint Fourier transform
JPSD: Joint power spectral density
JWSS: Jointly wide-sense stationary
MTWSS: Multivariate time wide-sense stationary
MVWSS: Multivariate vertex wide-sense stationary
PSD: Power spectral density
TPSD: Time power spectral density
TWSS: Time wide-sense stationarity
VPSD: Vertex power spectral density
VWSS: Vertex wide-sense stationarity
WSS: Wide-sense stationarity
References
M. Rudelson, Random vectors in the isotropic position. J. Funct. Anal. 164(1), 60–72 (1999).
H. Lütkepohl, New introduction to multiple time series analysis. Springer Sci. Bus. Media (2005).
O. Ledoit, M. Wolf, A well-conditioned estimator for large-dimensional covariance matrices. J. Multivar. Anal. 88(2), 365–411 (2004).
C. Lam, Q. Yao, et al., Factor modeling for high-dimensional time series: inference for the number of factors. Ann. Stat. 40(2), 694–726 (2012).
G. Connor, The three types of factor models: a comparison of their explanatory power. Financ. Anal. J. 51(3), 42–46 (1995).
M. J. Keeling, K. T. Eames, Networks and epidemic models. J. R. Soc. Interface 2(4), 295–307 (2005).
P. Mohan, V. N. Padmanabhan, R. Ramjee, in Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems. Nericell: rich monitoring of road and traffic conditions using mobile smartphones (ACM, 2008), pp. 323–336.
W. Huang, L. Goldsberry, N. F. Wymbs, S. T. Grafton, D. S. Bassett, A. Ribeiro, Graph frequency analysis of brain signals. IEEE J. Sel. Top. Signal Process. 10(7), 1189–1203 (2016).
F. Zhang, E. R. Hancock, Graph spectral image smoothing using the heat kernel. Pattern Recognit. 41(11), 3328–3342 (2008).
A. J. Smola, R. Kondor, Kernels and regularization on graphs. Learning Theory and Kernel Machines, 144–158 (2003).
M. Belkin, P. Niyogi, Semi-supervised learning on Riemannian manifolds. Mach. Learn. 56(1–3), 209–239 (2004).
D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, P. Vandergheynst, The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 30(3), 83–98 (2013).
A. Sandryhaila, J. M. Moura, Discrete signal processing on graphs. IEEE Trans. Signal Process. 61, 1644–1656 (2013).
A. Sandryhaila, J. M. Moura, Big data analysis with signal processing on graphs: representation and processing of massive data sets with irregular structure. IEEE Signal Process. Mag. 31(5), 80–90 (2014).
A. Gadde, A. Ortega, in International Conference on Acoustics, Speech and Signal Processing (ICASSP). A probabilistic interpretation of sampling theory of graph signals (IEEE, 2015), pp. 3257–3261.
C. Zhang, D. Florêncio, P. A. Chou, Graph signal processing: a probabilistic framework. Microsoft Research, Redmond, WA, USA, Tech. Rep. MSR-TR-2015-31 (2015).
N. Perraudin, P. Vandergheynst, Stationary signal processing on graphs. IEEE Trans. Signal Process. 65(13), 3462–3477 (2017). https://doi.org/10.1109/TSP.2017.2690388.
B. Girault, in Signal Processing Conference (EUSIPCO), 2015 23rd European. Stationary graph signals using an isometric graph translation (IEEE, 2015), pp. 1516–1520.
A. G. Marques, S. Segarra, G. Leus, A. Ribeiro, Stationary graph processes and spectral estimation. IEEE Trans. Signal Process. 65(22), 5911–5926 (2016).
E. Isufi, A. Loukas, A. Simonetto, G. Leus, Autoregressive moving average graph filtering. IEEE Trans. Signal Process. 65(2), 274–288 (2017).
A. Loukas, D. Foucard, in Global Conference on Signal and Information Processing (GlobalSIP). Frequency analysis of time-varying graph signals (IEEE, 2016), pp. 346–350.
N. Wiener, P. Masani, The prediction theory of multivariate stochastic processes. Acta Math. 98(1), 111–150 (1957).
N. Wiener, P. Masani, The prediction theory of multivariate stochastic processes, II. Acta Math. 99(1), 93–137 (1958).
P. Bloomfield, Fourier analysis of time series: an introduction (Wiley, 2004).
F. R. Bach, M. I. Jordan, Learning graphical models for stationary time series. IEEE Trans. Signal Process. 52(8), 2189–2199 (2004).
R. Dahlhaus, M. Eichler, Causality and graphical models in time series analysis. Oxf. Stat. Sci. Ser., 115–137 (2003).
B. Girault, Signal processing on graphs: contributions to an emerging field. PhD thesis, École normale supérieure de Lyon (2015).
S. P. Chepuri, G. Leus, in Sensor Array and Multichannel Signal Processing Workshop (SAM). Subsampling for graph power spectrum estimation (IEEE, 2016), pp. 1–5.
A. Loukas, E. Isufi, N. Perraudin, in Asilomar Conference on Signals, Systems, and Computers. Predicting the evolution of stationary graph signals (IEEE, 2017), pp. 60–64.
J. Mei, J. M. Moura, Signal processing on graphs: causal modeling of unstructured data. IEEE Trans. Signal Process. 65(8), 2077–2092 (2017).
V. N. Ioannidis, D. Romero, G. B. Giannakis, Inference of spatio-temporal functions over graphs via multikernel kriged Kalman filtering. IEEE Trans. Signal Process. 66(12), 3228–3239 (2017).
P. D. Lorenzo, E. Isufi, P. Banelli, S. Barbarossa, G. Leus, in 2017 25th European Signal Processing Conference (EUSIPCO). Distributed recursive least squares strategies for adaptive reconstruction of graph signals (2017), pp. 2289–2293. https://doi.org/10.23919/EUSIPCO.2017.8081618.
N. Perraudin, A. Loukas, F. Grassi, P. Vandergheynst, Towards stationary time-vertex signal processing. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP) (2017).
F. Grassi, A. Loukas, N. Perraudin, B. Ricaud, A time-vertex signal processing framework. IEEE Trans. Signal Process. 66(3), 817–829 (2017).
A. Loukas, A. Simonetto, G. Leus, Distributed autoregressive moving average graph filters. IEEE Signal Process. Lett. 22(11), 1931–1935 (2015).
E. Isufi, A. Loukas, A. Simonetto, G. Leus, in Signal Processing Conference (EUSIPCO), 2016 24th European. Separable autoregressive moving average graph-temporal filters (IEEE, 2016), pp. 200–204.
D. I. Shuman, B. Ricaud, P. Vandergheynst, Vertex-frequency analysis on graphs. Appl. Comput. Harmon. Anal. 40(2), 260–291 (2016).
S. Segarra, S. P. Chepuri, A. G. Marques, G. Leus, Statistical graph signal processing: stationarity and spectral estimation, 325–347 (2018).
R. Vershynin, How close is the sample covariance matrix to the actual covariance matrix? J. Theor. Probab. 25(3), 655–686 (2012).
M. S. Bartlett, Periodogram analysis and continuous spectra. Biometrika 37(1/2), 1–16 (1950).
P. Welch, The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Trans. Audio Electroacoust. 15(2), 70–73 (1967).
O. Axelsson, Conjugate gradient type methods for unsymmetric and inconsistent systems of linear equations. Linear Algebra Appl. 29, 1–16 (1980).
P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005).
P. L. Combettes, J. C. Pesquet, Proximal splitting methods in signal processing. Fixed-Point Algorithms for Inverse Problems in Science and Engineering, 185–212 (2011).
N. Komodakis, J. C. Pesquet, Playing with duality: an overview of recent primal-dual approaches for solving large-scale optimization problems. IEEE Signal Process. Mag. 32(6), 31–54 (2015).
P. L. Combettes, J. C. Pesquet, A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 1(4), 564–574 (2007).
H. Akaike, A new look at the statistical model identification. IEEE Trans. Autom. Control 19(6), 716–723 (1974).
N. Perraudin, J. Paratte, D. Shuman, V. Kalofolias, P. Vandergheynst, D. K. Hammond, GSPBOX: a toolbox for signal processing on graphs. arXiv e-prints (2014). http://arxiv.org/abs/1408.5781.
N. Perraudin, D. Shuman, G. Puy, P. Vandergheynst, UNLocBoX: a Matlab convex optimization toolbox using proximal splitting methods. arXiv e-prints (2014). http://arxiv.org/abs/1402.0779.
Z. Prusa, P. L. Sondergaard, N. Holighaus, C. Wiesmeyr, P. Balazs, The large time-frequency analysis toolbox 2.0. Sound, Music, and Motion: 10th International Symposium 8905, 419–442 (2014).
K. Gröchenig, Foundations of time-frequency analysis. Springer Sci. Bus. Media (2013).
H. G. Feichtinger, T. Strohmer, Gabor analysis and algorithms: theory and applications. Springer Sci. Bus. Media (2012).
C. De Boor, A practical guide to splines, vol. 27 (Springer-Verlag, New York, 1978).
A. Susnjara, N. Perraudin, D. Kressner, P. Vandergheynst, Accelerated filtering on graphs using Lanczos method. arXiv preprint (2015). arXiv:1509.04537.
Acknowledgements
We thank Francesco Grassi for his help with the code.
Funding
This work has been supported by the Swiss National Science Foundation research project Towards Signal Processing on Graphs (grant number: 2000_21/154350/1) and research project Deep Learning for Graph-Structured Data (grant number: PZ00P2 179981).
Author information
Authors and Affiliations
Contributions
The two authors contributed equally both for the experiments and for the writing of the paper. Both authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Loukas, A., Perraudin, N. Stationary time-vertex signal processing. EURASIP J. Adv. Signal Process. 2019, 36 (2019). https://doi.org/10.1186/s13634-019-0631-7
DOI: https://doi.org/10.1186/s13634-019-0631-7