
Stationary time-vertex signal processing

Abstract

This paper considers regression tasks involving high-dimensional multivariate processes whose structure is dependent on some known graph topology. We put forth a new definition of time-vertex wide-sense stationarity, or joint stationarity for short, that goes beyond product graphs. Joint stationarity helps by reducing the estimation variance and recovery complexity. In particular, for any jointly stationary process, (a) one can reliably learn the covariance structure from as little as a single realization of the process and (b) solve MMSE recovery problems, such as interpolation and denoising, in computational time nearly linear in the number of edges and timesteps. Experiments with three datasets suggest that joint stationarity can yield accuracy improvements in the recovery of high-dimensional processes evolving over a graph, even when the latter is only approximately known, or the process is not strictly stationary.

1 Introduction

One of the main challenges when modeling multivariate processes is to decouple the estimation variance from the problem size. Consider an N-variate process unfolding over T timesteps. If only mild assumptions are made, then the number of realizations needed to reliably estimate the first two moments is, up to a logarithmic factor, proportional to NT, i.e., the data size [1]. Assuming that the process is time wide-sense stationary (TWSS) makes the length T of the process inconsequential. This is ideal for the univariate setting as it enables us to make relevant predictions even based on a single realization. If one additionally assumes that the signal autocorrelation is compactly supported, such that most data dependencies take place within a short time horizon, then the estimation variance can be reduced further by (roughly) splitting the observations into parts and considering each as an independent realization. This approach suffices when N is relatively small. For high-dimensional processes, however, one needs to incorporate additional assumptions to obtain meaningful predictions [25].

In this spirit, this paper focuses on high-dimensional processes that are supported on the vertex set and are statistically dependent on the edge set of some known graph topology. Whether examining epidemic spreading [6], how traffic evolves in the roads of a city [7], or neuronal activation patterns present in the brain [8], many of the high-dimensional processes one encounters are inherently constrained by some underlying network. This realization has been the driving force behind recent efforts to re-invent classical models by taking into account the graph structure, with advances in many problems, such as denoising [9] and semi-supervised learning [10, 11], among others.

Yet, standard models for processes (evolving) on graphs often fail to produce useful results when applied to real datasets. One of the main reasons for this shortcoming is that they model only a limited set of spatiotemporal behaviors. The widely used graph Tikhonov and total variation priors, for instance, assume that the signal varies slowly or in a piece-wise constant manner over edges, without specifying any precise relations [12–14]. Similarly, assuming that the graph Laplacian encodes the conditional correlations of variables, as is done with Gaussian Markov random fields [15], becomes a rigid model when the graph is known [16]. To capture the behavior of complex networked systems, such as transportation and biological networks, it is crucial to train expressive models capable of reproducing a wide range of graph and temporal behaviors.

1.1 Contributions

This paper considers the statistical modeling of processes evolving on graphs. In particular, we investigate the relationship between two different hypotheses: TWSS and VWSS [17–19], which are individually helpful in reducing the variance of covariance estimation for time series and graph signals, respectively. We propose a combined multivariate hypothesis that we refer to as time-vertex wide-sense stationarity, or joint stationarity for short. The necessary first step of our analysis consists of reformulating the standard properties of stationarity (such as the relation of the covariance matrix and power spectral density to an appropriate Fourier transform) from the lens of time-vertex analysis [20, 21]. This analysis is purposeful, yet not trivial, as joint stationarity is more complicated than assuming stationarity on the product of two graphsFootnote 1.

We use the hypothesis of joint stationarity to control variance and computational complexity in estimation and recovery tasks. Similar to [17], also here, one may reliably estimate the model parameters from few observations (e.g., see Fig. 4) and solve MMSE recovery problems in time linear in the number of edges and timesteps (e.g., see Fig. 3). Complementing previous work, we also provide an analysis of the power spectral density (PSD) estimation, which brings insight into the inherent trade-off between bias and variance. In addition, we experimentally demonstrate that assuming joint stationarity aids in recovery even when only an approximation of the graph is known, or the process is only approximately jointly stationary. These experiments corroborate that the joint stationarity hypothesis is a useful assumption, particularly in situations when the problem features a large number of variables but only a limited number of observations.

Fig. 1

The joint stationarity hypothesis is more general than assuming either (standard) VWSS and TWSS or VWSS on a (Cartesian) product graph. The figure presents three examples of PSDs plotted as 2-dimensional functions of λ1,λ2 that, for simplicity, correspond to the eigenvalues of two graphs. The second graph (time) is a ring. In the separable case (left), the PSD has to satisfy h(λ1,λ2)=h1(λ1)h2(λ2), making it unable to capture any dependencies between λ1 and λ2. Using VWSS (middle) limits the PSD to h(λ1,λ2)=h(λ1+λ2), leading to constant values along the diagonal line λ1+λ2=c. Joint stationarity (right) can encode any PSD h(λ1,λ2), as exemplified here

Fig. 2

Influence of the parameters (window size L and number of graph filters F) on the a estimation error, b bias, c normalized standard deviation, and d execution time. For improved visibility, the scale of c has been changed

Fig. 3

Scalability of the convolutional JPSD estimator in seconds (vertical axis) w.r.t. the number of vertices (horizontal axis). The fast implementation should be favored when the graph is composed of more than a few thousand vertices. The approximation error of the fast implementation was negligible in our experiments

Fig. 4

Estimation error \({\tilde {\mathbf {E}}\left [{{\left \|\ddot {\mathbf {H}} - \mathbf {H}\right \|}_{F}} \right ]}/{{\left \|\mathbf {H}\right \|}_{F}}\) as a function of the number of realizations and number of vertices. Even an approximate knowledge of the graph enables us to make good estimates of the covariance (and PSD) from few realizations. The joint stationarity prior becomes especially meaningful when the number of variables (N,T) increases. The benefit also holds for a noisy graph (SNR = 10dB). a N=10, T=10. b N=100, T=10

To test the utility of joint stationarity, we apply our methods on three diverse datasets: (a) a meteorological dataset containing the hourly temperature of 32 weather stations over 1 month in Molene, France [18], (b) a traffic dataset depicting high-resolution daily vehicle flow of 4 weekdays in the highways of Sacramento, and (c) simulated SIRS-type epidemics over Europe. Our experiments confirm that for high-dimensional processes evolving over graphs, assuming joint stationarity yields an improvement in recovery performance as compared to time- or vertex-based stationarity methods, even when the graph is only approximately known and the data violate the strict conditions of our definition.

1.2 Related work

There exists an extensive literature on multivariate stationary processes, building on the original work of Wiener et al. [22, 23]. The reader may find Bloomfield’s book [24], which focuses on spectral relations, of interest. We focus on two main approaches that relate to our work: graphical models and signal processing on graphs.

1.2.1 Graphical models

In the context of graphical models, multivariate stationarity has been used jointly with a graph in the work of [25, 26]. Though relevant, we note that there is a key difference between these models and our approach: we assume that the graph is given, whereas in graphical models, the graph structure (or more precisely the precision matrix) is learned from the data. Knowing the graph allows us to search for more involved relations between the variables. As such, we are not restricted to the case where the conditional dependencies are given by the graph (and therefore that they are sparse), but allow non-adjacent variables to be conditionally dependent, modeling a broader set of behaviors. We also note that our approach is more scalable. We refer to [16] for a discussion of the connections between graphical models and graph signal processing.

1.2.2 Graph signal processing

The idea of studying the stationarity of a random vector w.r.t. a graph was first introduced in [18, 27] and then in [17, 19]. While these contributions have different starting points, they both roughly propose the same definition. Another more recent contribution relating to stationarity on graphs in the context of PSD estimation is [28]. Despite the relevance of these works, it is important to stress that the current paper is the first to consider a stationarity hypothesis for graph signals varying in time. Moreover, the new results are non-trivial as they cannot be obtained by applying previous definitions on a product graph. In addition, some of the analysis presented here (particularly that of Section 4) is novel and can also be employed for the previously studied case of stationary graph signals. To make the connection with previous works transparent, in the following, every technical result (e.g., Lemma, Theorem, Proposition) that emerges as a generalization of [18, 19, 27] contains a reference in its heading pointing to the earlier claim.

The forecasting of time-evolving signals on graphs was also considered in [29–32]. Nonetheless, there are several differences with these works, with the most important being that we define joint stationarity and that we are not restricted to the causal case (where a process is reconstructed only from its past). Finally, it should be noted that some preliminary results of this work appeared in a conference paper [33]. This work extends the conference paper in many directions. We refine the definition of joint stationarity and explore how it relates to other well-known stationarity hypotheses. We propose a new PSD estimator and provide a theoretical analysis of the bias and variance of the old and new PSD estimators. Additionally, we study the complexity of the proposed solution and evaluate its merit w.r.t. two new datasets.

2 Preliminaries

2.1 General notation

We use boldface symbols for matrices and vectors (e.g., A and a respectively) and calligraphic symbols for sets (e.g., \(\mathcal {V}\) and \(\mathcal {E}\)). Symbol j denotes the imaginary unit, IN is the N×N identity matrix, and 1N is the all-ones vector of size N. We use brackets to index matrix elements and subscripts for matrix blocks: if A is of size n1×n2, then A[n1,n2] is the element at the n1th row and n2th column and \(\phantom {\dot {i}\!}\mathbf {A}_{{n}_{1},{n}_{2}}\) is a (block) matrix. Vector a=vec (A) (without subscript) is the vectorized representation of A and an is its nth column. Moreover, \(\mathbf {A}^{\intercal }\) is its transpose and A∗ is its transposed complex conjugate, so that \(\left ({\mathbf {A}}^{{*}}\right)^{\intercal }\) is the (element-wise) complex conjugate of A. If A is N×N Hermitian, its eigendecomposition is generically written as A=UΛU∗, where U=[ u1,…,uN] is a matrix having eigenvectors as columns and Λ=diag(λ1,…,λN) is the diagonal matrix of eigenvalues. Symbols h(·),f(·), and g(·) are reserved for scalar/matrix functions. A matrix function with a single argument takes as an input a symmetric matrix A and outputs h(A)=U diag(h(λ1),…,h(λN)) U∗. The operator ⊗ denotes the Kronecker product. The Kronecker sum ⊕ can be defined in terms of the Kronecker product as A⊕B=A⊗IM+IN⊗B, where matrix B has size M×M.
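To make the notation concrete, here is a small sketch (our own, assuming NumPy; not part of the paper) of how a matrix function h(A) and a Kronecker sum can be evaluated:

```python
import numpy as np

def matrix_function(A, h):
    """Evaluate h(A) = U diag(h(lambda_1), ..., h(lambda_N)) U* for a symmetric matrix A."""
    lam, U = np.linalg.eigh(A)            # A = U diag(lam) U*
    return (U * h(lam)) @ U.conj().T      # scale the eigenvector columns by h(lam)

def kronecker_sum(A, B):
    """A (+) B = A kron I_M + I_N kron B, with A of size N x N and B of size M x M."""
    N, M = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(M)) + np.kron(np.eye(N), B)
```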

2.2 Harmonic time-vertex analysis

We consider signals supported on the vertices \(\mathcal {V} = \{ v_{1}, v_{2}, \ldots, v_{N} \}\) of a weighted undirected graph \(\mathcal {G} = (\mathcal {V}, \mathcal {E}, \mathbf {W}_{G})\), with \(\mathcal {E}\) the set of edges of cardinality \(E = |\mathcal {E}|\) and WG the weighted adjacency matrix. Suppose that signal xt is sampled at T successive regular intervals of unit length. A real time-vertex signal \(\mathbf {X} = \left [ \mathbf {x}_{1}, \mathbf {x}_{2}, \ldots, \mathbf {x}_{T} \right ] {\in } \mathbb {R}^{N\times T}\) is then the matrix having graph signal xt as its tth column.

The frequency representation of a time-vertex signal X is given by the joint Fourier transform [14, 21] (or JFT for short)

$$\begin{array}{*{20}l} \hat{\mathbf{X}} = \text{JFT}\{{\mathbf{X}}\} \triangleq \text{GFT}\{{\text{DFT}\{{\mathbf{X}}}\}\} = \mathbf{U}_{G}^{*} \mathbf{X} \left({{\mathbf{U}}^{*}_{T}}\right)^{\intercal}, \end{array} $$
(1)

with UG and UT being, respectively, the unitary graph Fourier transform (GFT) and discrete Fourier transform (DFT) matrices, whereas \(\left ({\mathbf {U}}^{*}_{T}\right)^{\intercal }\) is the complex conjugate of UT. In vector form, we have that \(\hat {\mathbf {x}} = \text {JFT}\{{\mathbf {x}}\} \triangleq \mathbf {U}_{J}^{*}\, \mathbf {x}\), where UJ=UT⊗UG. As is often the case, we choose UG to be the eigenvector matrix of the combinatorialFootnote 2 graph Laplacian matrix LG=diag(WG1N)−WG, where 1N is the all-ones vector of size N, and diag(WG1N) is the diagonal degree matrix. Matrix UT is the eigenvector matrix of the Laplacian LT of a cyclic graph \(\mathcal {T}\):

$$\begin{array}{*{20}l} \mathbf{U}_{T}^{*}[\!\tau,t]&= \frac{e^{-j \omega_{\tau} t}}{\sqrt{T}},\\ \quad \text{with} \quad \omega_{\tau} &= \frac{2 \pi (\tau-1)}{T} \quad \text{for} \quad t,\tau = 1, 2, \ldots, T. \end{array} $$
(2)

With this in place, \(\hat {\mathbf {X}}[\!n,\tau ]\) can be seen as the Fourier coefficient associated with the joint frequency [ λn,ωτ], where λn denotes the nth graph eigenvalue and ωτ the τth angular frequency.

The JFT maintains a close connection with the product graph \(\mathcal {J}\) [14, 21]. The latter is the graph whose adjacency matrix is WJ=WT⊕WG (this amounts to a Cartesian product between \(\mathcal {G}\) and the ring graph \(\mathcal {T}\)). The connection is revealed if one realizes that the Laplacian LJ=LT⊕LG of \(\mathcal {J}\) carries the eigendecomposition LJ=UJ(ΛT⊕ΛG)UJ∗. It follows that computing the JFT (in vector form) is the same as computing the GFT of x w.r.t. graph \(\mathcal {J}\). The main issue with anyFootnote 3 product graph interpretation is that it imposes a strict dependence between the eigenvalues of LG and LT (since the eigenvalues of LJ are given by ΛT⊕ΛG). As we will see in the next paragraph, to attain full generality, one needs to abandon the product graph. For an in-depth discussion of JFT and its properties, we refer the reader to [34].
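For illustration, a minimal sketch of the JFT of (1), with the DFT matrix built explicitly from (2) so that indexing and normalization match the definitions above. This is our own illustrative code, not the authors' implementation:

```python
import numpy as np

def jft(X, L_G):
    """Joint Fourier transform of a time-vertex signal X (N x T), per eq. (1)."""
    T = X.shape[1]
    _, U_G = np.linalg.eigh(L_G)                      # graph Fourier basis (eigenvectors as columns)
    tau, t = np.meshgrid(np.arange(1, T + 1), np.arange(1, T + 1), indexing="ij")
    U_T_star = np.exp(-1j * 2 * np.pi * (tau - 1) * t / T) / np.sqrt(T)   # U_T^*[tau, t], per eq. (2)
    return U_G.conj().T @ X @ U_T_star.T              # JFT{X} = U_G^* X (U_T^*)^T
```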

2.3 Joint time-vertex filtering

Filtering a time-vertex signal x with a joint filter h(LG,LT) corresponds to element-wise multiplication in the joint frequency domain [ λ,ω] by a function \(h: [\!0, \lambda _{\max }] \times [\!-1,\ 1] \mapsto \mathbb {R}\) [21, 34–36]. When a joint filter h(LG,LT) is applied to x, the output is

$$\begin{array}{*{20}l} h(\mathbf{L}_{G},\mathbf{L}_{T}) \, \mathbf{x} &= \mathbf{U}_{J}\, h(\mathbf{\Lambda}_{G},\mathbf{\Omega}) \, \mathbf{U}_{J}^{*} \mathbf{x}, \end{array} $$
(3)

where \(\mathbf {\Lambda }_{G} \in \mathbb {R}^{N\times N} \) and \(\mathbf {\Omega } \in \mathbb {R}^{T\times T}\) are diagonal matrices with ΛG[ n,n]=λn and Ω[ τ,τ]=ωτ, whereas h(ΛG,Ω) is a diagonal NT×NT matrix defined as

$$\begin{array}{*{20}l} {}h(\mathbf{\Lambda}_{G},\mathbf{\Omega}) = \text{diag} \left(\text{vec} \left(\left[\begin{array}{ccc} h(\lambda_{1}, \omega_{1}) & \cdots & h(\lambda_{1}, \omega_{T}) \\ \vdots & \ddots & \vdots \\ h(\lambda_{N}, \omega_{1}) & \cdots & h(\lambda_{N}, \omega_{T}) \end{array}\right] \right) \right) \notag \end{array} $$

and diag(vec(A)) creates a matrix with diagonal elements the vectorized form of A. The bi-variate notation h(·,·) is meant to illustrate that joint filters operate independently on the two domains, something impossibleFootnote 4 in the product graph framework [14, 21]. For convenience, we will often overload notation and write h(θn,τ) to refer to the bivariate function h(λn,ωτ). Furthermore, we say that a joint filter is separable, if its joint frequency response h can be written as the product of a frequency response h1 defined solely in the vertex domain and one h2 in the time domain, i.e., h(θ)=h1(λ) h2(ω).
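A direct (non-scalable) sketch of (3): compute the joint spectral coefficients, multiply element-wise by h(λn,ωτ), and transform back. This is our own illustration under simplifying assumptions; fast implementations avoid the full eigendecomposition, as discussed later in the paper.

```python
import numpy as np

def joint_filter(X, L_G, h):
    """Apply the joint filter h(L_G, L_T) to a time-vertex signal X (N x T), per eq. (3).

    h is a bivariate function h(lam, omega) evaluated element-wise on arrays."""
    lam, U_G = np.linalg.eigh(L_G)                     # graph frequencies and Fourier basis
    T = X.shape[1]
    omega = 2 * np.pi * np.arange(T) / T               # angular frequencies omega_tau
    X_hat = U_G.conj().T @ np.fft.fft(X, axis=1, norm="ortho")    # joint spectral coefficients
    Y_hat = h(lam[:, None], omega[None, :]) * X_hat    # multiply by h(lambda_n, omega_tau)
    return np.real(U_G @ np.fft.ifft(Y_hat, axis=1, norm="ortho"))
```

For instance, `joint_filter(X, L_G, lambda lam, om: np.exp(-lam))` applies a separable filter that low-passes the signal along the graph dimension only.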

3 Joint time-vertex stationarity

Let \(\mathbf {X} \in \mathbb {R}^{N\times T}\) be a real discrete periodic multivariate stochastic process with a finite number of timesteps T that is indexed by the vertex vi of graph \(\mathcal {G}\) and time t. We refer to such processes as time-vertex processes, or joint processes for short.

Our objective is to provide a definition of stationarity that captures statistical invariance of the first two moments of a joint process \(\mathbf {x} = \text {vec}({\mathbf {X}}) \sim \mathcal {D}(\bar {\mathbf {x}}, \mathbf {\Sigma })\), i.e., the mean \(\bar {\mathbf {x}} = \mathbf {E}\left [{\mathbf {x}}\right ]\) and the covariance \(\mathbf {\Sigma } = \mathbf {E}\left [{\mathbf {xx}^{\intercal }}\right ] - \bar {\mathbf {x}}{\bar {\mathbf {x}}}^{\intercal }\). Crucially, the definition should do so in a manner that is faithful to the graph and temporal structure.

3.1 Definition

Typically, wide-sense stationarity is thought of as an invariance of the first two moments of a process w.r.t. translation. For the first moment, things are straightforward: stationarity implies a constant mean E[x]=c1, independently of the domain of interest. The second moment, however, is more complicated as it depends on the exact form translation takes in the particular domain. Unfortunately, for graphs, translation is a non-trivial operation and three alternative translation operators exist: the generalized translation [37], the graph shift [13], and the isometric graph translation [27]. Due to this challenge, there are currently three alternative (though akin) definitions of stationarity appropriate for graphs [17–19].

The ambiguity associated with translation on graphs urges us to seek an alternative starting point for our definition. Fortunately, there exists an interpretation which holds promise: up to its constant mean, a wide-sense stationary process corresponds to a white process filtered linearly on the underlying space. This “filtering interpretation” of stationarity is well known classicallyFootnote 5 as well as in the graph setting [19] and is equivalent to asserting that the second moment can be expressed as Σ=h(LT), where h(LT) is a linear filter. Thankfully, not only is filtering elegantly and uniquely defined for graphs [37], but stating that a process is graph wide-sense stationary if E[x]=c1N and Σ=h(LG) is a graph filter is also generally consistentFootnote 6 with current definitions [17–19].

This motivates us to also express the definition of stationarity for joint processes in terms of joint filtering:

Definition 1

(JWSS) A joint process x=vec (X) is called jointly wide-sense stationary (JWSS), if and only if

  1. (a)

    The first moment of the process is constant E[x]=c1NT.

  2. (b)

    The covariance matrix of the process is a joint filter Σ=h(LG,LT), where h(·,·) is a non-negative real function referred to as joint power spectral density (JPSD).

Let us examine Definition 1 in detail.

First moment condition. As in the classical case, the first moment of a JWSS process has to be constant over the time and the vertex sets, i.e., \(\bar {\mathbf {X}}[\!{i,t}] = c \) for every i=1,2,…,N and t=1,2,…,T. For alternative choices of the graph Laplacian with a null-space not spanned by the constant vector, the first moment condition should be modified to require that the expected value of a JWSS process lies in the null space of the matrix LT⊕LG (see Remark 2 [19] for a similar observation on stochastic graph signals).

Second moment condition. According to the definition, the covariance matrix of a JWSS process takes the form of a joint filter h(LG,LT), and is therefore diagonalizable by the JFT matrix UJ. It may also be interesting to notice that the matrix h(LG,LT) can be expressed as follows

$$ \mathbf{\Sigma} = h(\mathbf{L}_{G},\mathbf{L}_{T}) = \left(\begin{array}{cccc} \mathbf{H}_{1,1}& \mathbf{H}_{1,2}& \cdots & \mathbf{H}_{1,T}\\ \mathbf{H}_{2,1} & \mathbf{H}_{2,2} & & \mathbf{H}_{2,T} \\ \vdots & & \ddots & \vdots \\ \mathbf{H}_{T,1} & \mathbf{H}_{T,2} & \cdots & \mathbf{H}_{T,T} \end{array}\right), $$
(4)

where each block \(\mathbf {H}_{t_{1},t_{2}}\) of Σ is an N×N matrix defined as:

$$ \mathbf{H}_{t_{1},t_{2}} = \frac{1}{T} \sum_{\tau = 1}^{T} h_{\omega_{\tau}} (\mathbf{L}_{G}) \, e^{j\omega_{\tau}(t_{1}-t_{2}+1)} $$
(5)

and \(h_{\omega _{\tau }} (\mathbf {L}_{G})\) is the graph filter with frequency response \(h_{\omega _{\tau }} = h(\lambda,\omega _{\tau })\). Being a covariance matrix, h(LG,LT) must necessarily be positive-semidefinite; thus, h(·,·) is real (the eigenvalues of every Hermitian matrix are real) and non-negative. Equivalently, every zero mean JWSS process x=vec (X) can be generated by joint filtering a white process ε with zero mean and identity covariance, i.e., x=h(LG,LT)1/2ε. The following proposition exploits these facts to provide an interpretation of JWSS processes in the joint frequency domain.

Proposition 1

(Generalizes Theorem 1 [17] and Proposition 1 [18, 19]) A joint process X over a connected graph \(\mathcal {G}\) is jointly wide-sense stationary (JWSS) if and only if:

  1. (a)

    The joint spectral modes are in expectation zero \(\mathbf {E}\left [\!{\hat {\mathbf {X}}}[\!n,{\tau }]\right ]=0 \quad \text {if}~ \lambda _{n} \neq 0 \text { and } \ \omega _{\tau } \neq 0. \)

  2. (b)

    The product graph spectral modes are uncorrelated \(\mathbf {E}\left [\!{ \hat {\mathbf {X}}[\!n_{1},\tau _{1}] \hat {\mathbf {X}}[\!n_{2},\tau _{2}]}\right ] = 0, \) whenever n1≠n2 or τ1≠τ2.

  3. (c)

    There exists a non-negative function h(·,·), referred to as joint power spectral density (JPSD), such that

    $$\mathbf{E}\left[{\left|\hat{\mathbf{X}}\left[n,\tau\right]\right|^{2}}\right] - \left|\mathbf{E}\left[{\hat{\mathbf{X}}[\!n,\tau]}\right]\right|^{2} = h(\lambda_{n}, \omega_{\tau}),$$

    for every n=1,2,…,N and τ=1,2,…,T.

(For clarity, this and other proofs of the paper have been moved to the “Appendix”.)

We briefly present a few additional properties of JWSS processes that will be useful in the rest of the paper.

Property 1

(Generalizes Example 1 [1719]) White centered i.i.d. noise \(\mathbf {w} \in \mathbb {R}^{NT} \sim \mathcal {D}(\mathbf {0}_{NT},\mathbf {I}_{NT})\) is JWSS with constant JPSD for any graph.

The proof follows easily by noting that the covariance of w is diagonalized by the joint Fourier basis of any graph \(\mathbf {\Sigma }_{\mathbf {w}} = \mathbf {I} = \mathbf {U}_{J} \mathbf {I} \mathbf {U}_{J}^{*}\). This last equation tells us that the JPSD is constant, which implies that similar to the classical case, the energy of white noise is evenly spread across all joint frequencies.

A second interesting property of JWSS processes is that stationarity is preserved through a filtering operation.

Property 2

(Generalizes Theorem 2 [17], Property 1 [19]) When a joint filter f(LG,LT) is applied to a JWSS process X with JPSD h, the result Y remains JWSS with mean cf(0,0)1NT, where c is the mean of X, and JPSD f2(λ,ω) h(λ,ω).

Finally, we notice that for real processes X, which are the focus of this paper, the function h forming the joint filter should be symmetric w.r.t. ω, meaning that h(λ,ω)=h(λ,−ω). This property can be easily derived from the definition of the Fourier transform.
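To make the filtering interpretation of Definition 1 concrete, the following sketch (ours, purely illustrative) draws a zero-mean JWSS realization by joint-filtering white noise with the square root of a chosen JPSD; it assumes the joint_filter helper sketched in Section 2.3 is in scope.

```python
import numpy as np

def sample_jwss(L_G, T, h, seed=0):
    """Draw one realization X (N x T) of a zero-mean JWSS process with JPSD h."""
    N = L_G.shape[0]
    eps = np.random.default_rng(seed).standard_normal((N, T))   # white noise, identity covariance
    # x = h(L_G, L_T)^{1/2} eps: joint-filter the noise with the square root of the JPSD
    return joint_filter(eps, L_G, lambda lam, om: np.sqrt(h(lam, om)))
```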

3.2 Relations to classical definitions

We next provide an in-depth examination of the relations between joint wide-sense stationarity, time and vertex stationarity, as well as their multivariate equivalents. For clarity, we order the rows/columns of the covariance matrix Σ such that each \(\mathbf {\Sigma }_{t_{1}, t_{2}}\) block of size N×N measures the covariance between \(\mathbf {x}_{t_{1}}\) and \(\mathbf {x}_{t_{2}}\) (see (4)).

3.2.1 Standard definitions

As we discuss below, known definitions of stationarity in time/vertex domains are particular cases of joint stationarity.

TWSS ∩ VWSS ⊂ JWSS. The known versions of stationarity (TWSS, VWSS) are oblivious to any structure along one of the two dimensions of X. In this manner, assuming that X is TWSS amounts to interpreting each of the N time series as a separate realization of the same process with TPSD hT(ω). Similarly, if X is VWSS, then each graph signal xt is taken as a separate realization of a single stochastic graph signal with VPSD hG(λ) [17, 19]. It is a simple consequence that, different from the JWSS hypothesis, assuming that X is both TWSS and VWSS is equivalent to limiting our scope to a separable JPSD, defined as the product of two univariate functions h(λ,ω)=hG(λ)hT(ω) (see also Fig. 1).

3.2.2 Definitions based on the product graph

As explained in Section 2, the JFT can be interpreted as a graph Fourier transform taken over a product graph whose Laplacian is LJ=LGLT. This construction can give rise to two additional definitions for joint stationarity:

3.2.2.1 VWSS on a product graph.

The first is obtained by applying the VWSS definition of [17, 19] on the graph associated with LJ. The resulting model is not sufficiently general in order to generate the full spectrum of JWSS processes. The reason is that, whereas the JPSD h(λ,ω) can be any two-dimensional non-negative function, the JPSD of any VWSS process on LJ is necessarily one-dimensional (the eigenvalues of LJ are the sums of all combinations of the eigenvalues of LG and LT)—see Fig. 1 for a pictorial demonstration and “Appendix: Univariate vs multivariate JPSD” for examples from real data. The same reasoning also holds for alternative products between graphs, such as the strong and Kronecker products [14].

3.2.2.2 Covariance diagonalized by the product graph Fourier transform.

The second definition, which we refer to as JWSS-alternate, entails asserting that the covariance matrix Σ can be diagonalized by the JFT, i.e., the eigenbasis of LJ. This differs from the JWSS definition only in the case of graph Laplacian eigenvalue multiplicities: whenever the graph Laplacian features repeated eigenvalues, for Definition 1, the degrees of freedom of the JPSD h decrease, as necessarily h(λ1,ω)=h(λ2,ω) when λ1=λ2. This restriction is motivated by the following observation: for an eigenspace with multiplicity greater than one, there exists an infinite number of possible eigenvectors corresponding to the different rotations in the space, and the JPSD is in general ill-defined. The condition h(λ1,ω)=h(λ2,ω) when λ1=λ2 deals with this ambiguity, as it ensures that the JPSD is the same independently of the choice of eigenvectors. On the contrary, with JWSS-alternate, one should construct an arbitrary basis of each eigenspace with multiplicity and setFootnote 7 h(λ1,ω)≠h(λ2,ω). This approach, which was followed in [38], features more degrees of freedom at the expense of the loss of the filtering interpretation and higher computational complexity: one may not anymore use filters to estimate the JPSD (without reverting to Definition 1), whereas using the JFT to diagonalize the covariance scales like O(N3+N2T+NT log(T)). On the contrary, in our setting, the PSD estimation complexity can be reduced to be close to linear in the number of edges E and timesteps T (see “Appendix: Implementation details of the JPSD estimator”).

Nevertheless, we should mention that the differences mentioned above are mostly academic. Eigenvalue multiplicities occur mainly when graph automorphisms exist. In the absence of such symmetries (e.g., in the graphs used in our experiments), the two definitions yield the same outcome.

3.2.3 Multivariate definitions

On the other hand, joint stationarity can itself be derived as the combination of two multivariate versions of time/vertex stationarity, which we refer to respectively as MTWSS (see [25]) and MVWSS. Before formally defining them in Definitions 2 and 3, let us state our main result:

Theorem 1

A joint process X is JWSS if and only if it is MTWSS and MVWSS.

To put this in context, we examine the two multivariate definitions independently.

(a) JWSS ⊆ MTWSS. The covariance matrix of a JWSS process has a block circulant structure, as \(\mathbf {\Sigma }_{t_{1},t_{2}} = \mathbf {\Sigma }_{\delta,1} = \mathbf {\Gamma }_{\delta }\), where δ=t1−t2+1. Hence, Σ can be written as

$$\mathbf{\Sigma}_{\mathbf{x}} = \left(\begin{array}{cccc} \mathbf{\Gamma}_{1}& \mathbf{\Gamma}_{2} & \cdots & \mathbf{\Gamma}_{T}\\ \mathbf{\Gamma}_{T} & \mathbf{\Gamma}_{1} & & \mathbf{\Gamma}_{T-1} \\ \vdots & & \ddots & \vdots \\ \mathbf{\Gamma}_{2} & \mathbf{\Gamma}_{3} & \cdots & \mathbf{\Gamma}_{1} \end{array}\right), $$

implying that correlations only depend on δ and not on any time localization. This property is shared by multivariate time wide-sense stationary processes:

Definition 2

(MTWSS [25]) A joint process \(\mathbf {X}= \left [ \mathbf {x}_{1}, \mathbf {x}_{2}, \ldots, \mathbf {x}_{T} \right ] \in \mathbb {R}^{N\times T}\) is multivariate time wide-sense stationary (MTWSS), if and only if the following two properties hold:

  1. (a)

    The expected value is constant as E[xt]=c1 for all t.

  2. (b)

    For all t1,t2, the second moment satisfies \( \mathbf {\Sigma }_{t_{1},t_{2}} = \mathbf {\Sigma }_{\delta,1} = \mathbf {\Gamma }_{\delta }, \) where δ=t1−t2+1.

Similarly to the univariate case, the time power spectral density (TPSD) is defined to encode the statistics of the process in the spectral domain

$$ \hat{\mathbf{\Gamma}}_{\tau} = \sum_{\delta=1}^{T} \mathbf{\Gamma}_{\delta} e^{-j\omega_{\tau} \delta}. $$
(6)

We then obtain the TPSD of a JWSS process by constructing a graph filter from h while fixing ω. Setting \(h_{\omega _{\tau }}(\lambda) = h(\lambda,\omega _{\tau })\), the TPSD of a JWSS process is \(\hat {\mathbf {\Gamma }}_{\tau } = h_{\omega _{\tau }}(\mathbf {L}_{G}).\)

(b) JWSS ⊆ MVWSS. For a JWSS process, each block of Σ has to be a linear graph filter, i.e., \(\mathbf {\Sigma }_{t_{1},t_{2}}= \gamma _{t_{1},t_{2}}(\mathbf {L}_{G})\), meaning that

$$\mathbf{\Sigma} = \left(\begin{array}{cccc} \gamma_{1,1}(\mathbf{L}_{G})& \gamma_{1,2}(\mathbf{L}_{G}) & \cdots & \gamma_{1,T}(\mathbf{L}_{G})\\ \gamma_{2,1}(\mathbf{L}_{G}) & \gamma_{2,2}(\mathbf{L}_{G}) & & \\ \vdots & & \ddots & \vdots \\ \gamma_{T,1}(\mathbf{L}_{G}) & & \cdots & \gamma_{T,T}(\mathbf{L}_{G}) \end{array}\right). $$

This is perhaps better understood when compared to the multivariate version of vertex stationarity defined below:

Definition 3

(MVWSS) A joint process \(\mathbf {X} = \left [\mathbf {x}_{1}, \mathbf {x}_{2}, \right. \left.\ldots, \mathbf {x}_{T}\right ] \in \mathbb {R}^{N\times T}\) is called multivariate vertex wide-sense stationary (MVWSS), if and only if the following two properties hold independently:

  1. (a)

    The expected value of each signal xt is constant, E[ xt]=ct1, for all t.

  2. (b)

    For all t1 and t2, there exists a kernel \(\gamma _{t_{1},t_{2}}\) such that \( \mathbf {\Sigma }_{t_{1},t_{2}} = \gamma _{t_{1},t_{2}}(\mathbf {L}_{G}) \).

It can be seen that every JWSS process must also be MVWSS, or equivalently JWSS ⊆ MVWSS.

4 Joint power spectral density estimation

The joint stationarity assumption can be useful in overcoming the challenges associated with dimensionality. The main reason is that for JWSS processes, the estimation variance is decoupled from the problem size. Concretely, suppose that we want to estimate the covariance matrix Σ of a joint process x=vec(X) from K samples x(1),x(2),…,x(K). As we show in the following, if the process is JWSS such that Σ=h(LG,LT), the JPSD estimation variance is O(1). This is a sharp decrease from the classical and MTWSS settings, for which K=O(NT) and K=O(N) realizations are necessaryFootnote 8, respectively.

This section presents two JPSD estimators. The first provides unbiased estimates at a complexity that is O(N3T log(T)). The second estimator decreases further the estimation variance at the cost of a bounded bias and is approximated with (close to) linear complexity.

4.1 Sample JPSD estimator

We define the sample JPSD estimator for every graph frequency λn and angular frequency ωτ as the estimate

$$\begin{array}{*{20}l} \dot{h}(\lambda_{n}, \omega_{\tau}) \triangleq \sum_{k = 1}^{K} \frac{\left| \text{JFT}\{{\mathbf{X}_{(k)}}\}[\!n,\tau]\right|^{2}}{K}. \end{array} $$
(7)

In case the process does not have zero mean, it should be centered by subtracting the constant signal \(c \, \mathbf {1}_{N} \mathbf {1}_{T}^{*}\), where \( c = \sum _{k,i,t} \mathbf {X}_{(k)}{[\!i,t]} / (KNT).\) In that case, the unbiased estimator should involve division by K−1, instead of K as we have in (7).
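A direct transcription of (7) could look as follows; this is our own sketch, assuming the realizations are already centered and that the jft helper from Section 2.2 is in scope.

```python
import numpy as np

def sample_jpsd(realizations, L_G):
    """Sample JPSD estimator of eq. (7); realizations is a list of K centered N x T signals."""
    K = len(realizations)
    return sum(np.abs(jft(X, L_G)) ** 2 for X in realizations) / K   # N x T array of h-dot(lambda_n, omega_tau)
```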

4.1.1 Analysis

For simplicity, in the following, we suppose that the process is correctly centered. As Theorem 2 claims, the sample JPSD estimator is unbiased, and its variance decreases linearly with the number of samples K.

Theorem 2

For every distribution with bounded second and fourth order moments, the sample JPSD estimator \(\dot {h}(\theta)\)

  1. (a)

    is unbiased, i.e., \(\mathbf {E}\left [{\dot {h}(\theta)}\right ] = h(\theta)\), and

  2. (b)

    has variance \(\mathbf {Var}\left [{\dot {h}(\theta)}\right ] = h^{2}(\theta)\, \frac { \gamma - 1}{K}\),

where the constant γ depends only on the distribution of x.

Proof

For any θ=[ λ,ω], the sample estimate is

$$\begin{array}{*{20}l} \dot{h}(\theta) = {h(\theta)} \sum_{k = 1}^{K} \frac{\hat{\varepsilon}_{(k)} \hat{\varepsilon}_{(k)}^{*} }{K}, \end{array} $$
(8)

with \(\hat {\varepsilon }_{(k)}\) being independent realizations of \(\hat {\varepsilon }\), a zero mean complex random variable with unit variance. To see this, write x=h(LG,LT)1/2ε, where the random vector ε has zero mean and identity covariance. Then, the complex random variable \(\hat {\varepsilon }\) is the JFT coefficient of ε corresponding to frequencies λ and ω. The bias follows by noting that \(\mathbf {E}\left [{\hat {\epsilon }_{(k)} \hat {\epsilon }_{(k)}^{*}}\right ] = 1\), for every k. The variance is computed similarly by exploiting the fact that different terms in the sum are independent as they correspond to distinct realizations and setting \(\gamma = \mathbf {E}\left [{|\hat {\varepsilon }|^{4}}\right ]\). □

For the standard case of a Gaussian joint process, we provide an exact characterization of the distribution.

Corollary 1

For every Gaussian JWSS process, the sample JPSD estimate follows a Gamma distribution with shape K/2 and scale 2h(θ)/K. The estimation error variance is equal to \(\mathbf {Var}\left [{ \dot {h}(\theta)}\right ] = 2\, h^{2}(\theta)/K\).

Proof

We continue in the context of the proof of Theorem 2. For a Gaussian distribution, \(\hat {\varepsilon }\) is centered and scaled Gaussian and thus \(\hat {\varepsilon }^{2}\) is a chi-squared random variable with 1 degree of freedom. Our estimate is, therefore, a scaled sum of i.i.d. chi-squared variables and corresponds to a Gamma distribution. The corollary then follows directly. □

Observe that the variance depends linearly on the fourth-order moment of \(|\hat {\varepsilon }|\) (see proof of Theorem 2) and is inversely proportional to the number of samples, but it is independent of N and T. This implies that \(\|\mathbf{\Sigma} - \dot{\mathbf{\Sigma}}\|_{2}\) can be made arbitrarily small using K=O(1) samples. In the following, we discuss how to achieve an even smaller variance by exploiting the properties of h(θ).

4.2 Convolutional JPSD estimator

When the number of available realizations K is small (even 1), one may make use of additional assumptions on the JPSD to obtain reasonable estimates. To this end, we next present a parametric JPSD estimator that allows us to trade off bias for variance.

Before delving into JWSS processes, it is helpful to consider the purely temporal case. For a TWSS process, it is customary to assume that the autocorrelation function has support L that is a few times smaller than T. Then, cutting the signal into \(\frac {T}{L}\) smaller parts and computing the average estimate reduces the variance (by a factor of \(\frac {T}{L}\)), without sacrificing frequency resolution. This basic idea stems from two established methods used to estimate the PSD of a temporal signal, namely Bartlett’s and Welch’s method [40, 41]. Averaging across different windows is equivalent to smoothing the TPSD by convolving it with a window in the frequency domain: this results in attenuation of the correlation for long delays, enforcing localization in the time domain.
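For reference, a minimal sketch of Bartlett's method for a single univariate time series (our own illustration; Welch's method would additionally overlap and taper the segments):

```python
import numpy as np

def bartlett_tpsd(x, L):
    """Bartlett PSD estimate: average the periodograms of T/L non-overlapping segments of length L."""
    segments = x[: (len(x) // L) * L].reshape(-1, L)             # drop the remainder, one segment per row
    periodograms = np.abs(np.fft.fft(segments, axis=1)) ** 2 / L
    return periodograms.mean(axis=0)                             # variance reduced by a factor of T/L
```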

4.2.1 Estimator

Armed with this interpretation, we proceed by smoothing the JPSD with a user-specified bi-variate window g, such as a Gaussian or a disc window. The convolutional JPSD estimator computes the JPSD at joint frequency θ=(λ,ω) as:

$$ \ddot{h}(\theta) \triangleq \frac{1}{ c_{g}(\theta)} \sum_{\substack{n=1\\ \tau = 1}}^{N, T} g(\theta - \theta_{n,\tau})^{2} \, \sum_{k = 1}^{K} \frac{\left| \text{JFT}\{{\mathbf{X}_{(k)}}\}[\!n,\tau]\right|^{2}}{K}, $$
(9)

where \(c_{g}(\theta) \triangleq \sum _{n,\tau } g(\theta - \theta _{n,\tau })^{2}\) is a normalization factor. For implementation specifics, including a discussion on the choice of the bivariate kernel g, we refer the reader to “Appendix: Implementation details of the JPSD estimator”.
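The following is a naive sketch of (9) with a separable Gaussian window, smoothing the sample JPSD over all joint frequencies. This is our own illustrative code; the window shape and bandwidths are arbitrary choices, and the fast filterbank implementation discussed in the appendix avoids the quadratic cost of the pairwise weights used here.

```python
import numpy as np

def convolutional_jpsd(h_dot, lam, omega, sigma_lam=0.5, sigma_om=0.5):
    """Smooth the sample JPSD h_dot (N x T) with a separable Gaussian window, per eq. (9)."""
    w_lam = np.exp(-((lam[:, None] - lam[None, :]) ** 2) / sigma_lam ** 2)    # squared-window weights over graph frequencies
    w_om = np.exp(-((omega[:, None] - omega[None, :]) ** 2) / sigma_om ** 2)  # squared-window weights over angular frequencies
    numerator = w_lam @ h_dot @ w_om.T                   # sum_{n,tau} g(theta - theta_{n,tau})^2 h_dot(theta_{n,tau})
    c = np.outer(w_lam.sum(axis=1), w_om.sum(axis=1))    # normalization c_g(theta)
    return numerator / c
```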

The convolutional JPSD estimator is related to known PSD estimators for TWSS and VWSS processes. The Dirac function is denoted by ϕ. We have that (a) for g(θ)=ϕ(λ)gT(ω), we recover the classical TPSD estimator, applied independently for each λ. (b) For g(θ)=gG(λ)ϕ(ω), we recover the VPSD estimator from [17] applied independently for each ω. Similar to the latter, the estimator can be closely approximated at a complexity that is linear w.r.t. the number of graph edges/nodes and, up to a logarithmic factor, linear in the number of timesteps (see “Appendix: Implementation details of the JPSD estimator”).

4.2.2 Analysis

To provide a meaningful bias analysis, we introduce a Lipschitz continuity assumption on the JPSD, matching the intuition that localized phenomena tend to have a smooth representation in the frequency domain.

Theorem 3

At θ, the convolutional JPSD estimator \(\ddot {h}(\theta)\)

  1. (a)

    has bias

    $$\begin{array}{*{20}l} {}\left|\mathbf{E}\left[{\ddot{h}(\theta) - h(\theta)}\right] \right| \leq \frac{\epsilon }{ c_{g}(\theta) } \sum_{n=1,\tau=1}^{N,T}g(\theta - \theta_{n,\tau})^{2} \left\|\theta - \theta_{n,\tau}\right\|_{2}, \end{array} $$

    where ε is the Lipschitz constant of h(θ), and

  2. (b)

    when the entries of \(\hat {\mathbf {X}}\) are independent random variables, its variance is

    $$\begin{array}{*{20}l} \mathbf{Var}\left[\!{\ddot{h}(\theta)}\right] = \sum_{n,\tau} \frac{g(\theta- \theta_{n,\tau})^{4}}{c_{g}(\theta)^{2}} \, \mathbf{Var}\left[\!{\dot{h}(\theta_{n,\tau})}\right], \end{array} $$

    where \(\mathbf {Var}\left [\!{\dot {h}(\theta _{n,\tau })}\right ]\) is the variance of the sample JPSD estimator at θn,τ.

The derivations of the bias and variance are given in Lemmas 1 and 2, respectively.

We note two corner cases of interest. In the most convenient case, the JPSD is constant, and our estimator is unbiased (the Lipschitz constant ε is zero). On the other hand, if the JPSD fluctuates rapidly, the bias of the estimate will be significant unless g is close to a Dirac. Here, the sample estimator should be preferred.

We further consider as a theoretical example the case of a Gaussian JWSS process and a (spectral) disc window with bandwidth B, i.e., gB(θ)=1 if \(\|\theta\|_{2} \leq \frac {B}{2}\) and 0 otherwise. Though perhaps not the most practical choice from a computational perspective, we consider here a disc window because it leads to simple and intuitive estimates.

Corollary 2

For every ε-Lipschitz Gaussian JWSS process and disc window gB(θ), the convolutional estimate has

$$\begin{array}{*{20}l} \left|\mathbf{E}\left[\!{\ddot{h}(\theta) \,-\, h(\theta)}\right] \right| \leq \frac{\epsilon B}{2} \quad \text{and} \quad\mathbf{Var}\left[\!{\ddot{h}(\theta)}\right] &= \frac{2 \,{h^{2}_{\mathcal{S}}}}{K |\mathcal{S_{\theta}}|}, \end{array} $$
(10)

with set \(\mathcal {S}_{\theta } = \{ \theta _{n,\tau } \, | \, \|\theta _{n,\tau } - \theta \|_{2} \leq B/2 \}\) and \({h^{2}_{\mathcal {S}}} = \sum _{\theta _{n,\tau } \in \mathcal {S}} h(\theta _{n,\tau })^{2}\).

Proof

The results follow from Theorem 3 and Corollary 1 by noting that when a disc window is used, (a) \(c_{g}(\theta) = |\mathcal {S_{\theta }}|\) and (b) g(θθn,τ)2=1 for all n,τ in the window (there are \(|\mathcal {S_{\theta }}|\) in total) and zero otherwise. The independence condition required by the variance clause of the theorem is satisfied since \(\hat {\mathbf {X}}\) is Gaussian (as a rotation \(\hat {\mathbf {x}} = \mathbf {U}_{J}^{*} \mathbf {x}\) of a Gaussian vector) with diagonal covariance. □

The above result suggests that by selecting our window (bandwidth), we can trade off bias for variance. The trade-off is particularly beneficial as long as (a) the JPSD is smooth relative to the disc size (εB≪1) and (b) the graph eigenvalues are clustered (|Sθ|≫1 when h(θ)≠0).

5 Recovery of JWSS processes

This section considers the MMSE problem of recovering a JWSS process x=vec(X) from linear measurements y corrupted by a zero-mean JWSS process w:

$$\begin{array}{*{20}l} \begin{aligned} & \min_{f: \mathbb{R}^{N^{\prime}} \rightarrow \mathbb{R}^{NT}} & & \mathbf{E}\left[\| f(\mathbf{y}) - \mathbf{x}\|_{2}^{2}\right] \\ & \text{subject to} & & \mathbf{y} = \mathbf{A} \mathbf{x} + \mathbf{w}, \end{aligned} \end{array} $$
(P0)

where the function f is linear on y, i.e., there exists a matrix W and a vector b such that f(y)=Wy+b. We remark that (a) for A binary diagonal and w=0, (P0) is an interpolation problem, (b) for A=I and w white noise, (P0) is a denoising problem, and (c) for A diagonal with Aii=1 if i≤Nt and zero otherwise and w=0, it corresponds to forecasting. We mainly consider the first two problems since, for forecasting, it is more computationally efficient to utilize autoregressive models [29].

The minimum mean-squared linear estimate is known to be

$$\begin{array}{@{}rcl@{}} \dot{\mathbf{x}} &= \mathbf{\Sigma}_{\mathbf{x}\mathbf{y}} \mathbf{\Sigma}_{\mathbf{y}}^{-1} (\mathbf{y} - \bar{\mathbf{y}}) + \bar{\mathbf{x}}, \end{array} $$
(11)

with the definitions Σy=AΣA⊺+Σw and Σxy=ΣA⊺. Obtaining \(\dot {\mathbf {x}}\) therefore entails solving a linear system in matrix Σy that—naively approached—has O(N2T2) complexity. In addition, the condition number of Σy can be large, rendering direct inversion unstable. For instance, this may happen when one attempts to reverse any smoothing operation A that severely attenuates part of the signal’s spectrum.

We next discuss how to deal with these issues:

5.1 Decreasing the complexity

Thankfully, even if Σy is not always sparse, we can approximate its multiplication by a vector without actually computing it as (a) A is, for many applications (denoising, prediction, forecasting), sparse, and (b) per our assumption, Σ and Σw are joint filters, and therefore, they can be applied at a complexity that is (up to logarithmic factors) linear in the number of edges E and timesteps T [20, 34, 36]. Therefore, if we employ an iterative method such as the (preconditioned) conjugate gradient to compute the solution, the complexity of each iteration will be linear in the problem size.
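A matrix-free sketch of this recovery step under simplifying assumptions (A a binary diagonal mask, w white with variance sigma2, and a zero-mean process); Σ is applied through the joint_filter sketch of Section 2.3 instead of being formed explicitly, and SciPy's conjugate gradient operates on a LinearOperator. This is our own illustration, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def wiener_recover(y, mask, L_G, h, sigma2=1e-2):
    """Recover a zero-mean JWSS process from masked, noisy observations.

    y, mask: N x T arrays (mask is 1 where observed, y is zero elsewhere);
    the covariance Sigma = h(L_G, L_T) is applied via the joint_filter sketch."""
    N, T = y.shape
    apply_sigma = lambda v: joint_filter(v.reshape(N, T), L_G, h).ravel()
    m = mask.ravel().astype(float)
    # matrix-vector product with Sigma_y = A Sigma A^T + sigma2 I, where A = diag(m)
    Sigma_y = LinearOperator((N * T, N * T),
                             matvec=lambda v: m * apply_sigma(m * v) + sigma2 * v,
                             dtype=float)
    z, _ = cg(Sigma_y, m * y.ravel())
    return apply_sigma(m * z).reshape(N, T)      # x-hat = Sigma A^T (Sigma_y)^{-1} y
```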

5.2 Singular or badly conditioned Σy

We choose the solution with the minimal residual by substituting the inverse \(\mathbf {\Sigma }_{{y}}^{-1}\) in (11) with the pseudo-inverse \(\mathbf {\Sigma }_{{y}}^{+}\). However, instead of solving the normal equations \( \dot {\mathbf {x}} = \mathbf {\Sigma }_{\mathbf {x}\mathbf {y}} \left (\mathbf {\Sigma }_{{y}}^{2}\right)^{-1} \mathbf {\Sigma }_{{y}} \left (\mathbf {y} - \bar {\mathbf {y}}\right)+ \bar {\mathbf {x}}, \) which has the effect of significantly increasing the condition number of our matrix, we suggest employing the minimal residual conjugate gradient method for symmetric matrices [42]. For badly conditioned covariance matrices, an alternative solution is to rewrite the problem as a regularized least squares problem

$$\begin{array}{*{20}l} \underset{\underline{z} \in \mathbb{R}^{NT}}{\min} \|\mathbf{A}\underline{z}- \mathbf{y}\|_{2}^{2} + \|{h_{\mathbf{w}}(\mathbf{L}_{G},\mathbf{L}_{T})}^{1/2} \, h_{\mathbf{x}}(\mathbf{L}_{G},\mathbf{L}_{T})^{-1/2} (\underline{z} -\bar{\mathbf{x}}) \|_{2}^{2} \end{array} $$
(12)

and solve it using the generalization of the fast iterative shrinkage-thresholding algorithm (FISTA) scheme [43–45]. This problem was shown to converge to the correct solution when w is white noise. More details about the optimization procedures can be found in [17]. Similarly, in the noiseless case, one removes the term \(\|\mathbf {A}\underline{z}- \mathbf {y}\|_{2}^{2}\) in (12) and introduces instead the constraint Az=y. The resulting optimization problem can be solved using a Douglas-Rachford scheme [46].

6 Experiments

6.1 Joint power spectral density estimation

The first step in our evaluation is to analyze the efficiency of JPSD estimation. Our objective is twofold. First, we aim to study the role of the different method parameters in the estimation accuracy and computational complexity, essentially providing practical guidelines for their usage. In addition, we wish to illustrate the usefulness of the joint stationarity assumption, even when the graph is only approximately known.

6.1.1 Variance-bias-complexity tradeoffs

To validate the analysis of Section 4 for the computational and accuracy trade-offs inherent to our JPSD estimation method, we performed numerical experiments with random geometric graphs of N=256 vertices (we build a 10-nearest neighbor graph, weighted by a radial basis function kernel tuned so that the average weighted degree is slightly above 7) and JWSS processes (T=128 timesteps). Though our approach works with any JPSD, including high frequency ones, in this experiment, we consider a stochastic process generated by the discrete damped wave equation with a non-separable JPSD h(λ,ω)= exp(−|ω|/2) cos(ω acos(1−λ)).

6.1.2 Variance-bias

First, we examine the relation between the real JPSD h and the convolutional estimate \(\ddot {h}\) obtained using the “fast” method described in “Appendix: Implementation details of the JPSD estimator”. We use the following metrics:

$$\text{error} = \frac{\tilde{\mathbf{E}}\left[\left\|\ddot{\mathbf{H}} - \mathbf{H}\right\|_{F}\right]}{\left\|\mathbf{H}\right\|_{F}}, \qquad \text{bias} = \frac{\left\|\tilde{\mathbf{E}}\left[\ddot{\mathbf{H}} - \mathbf{H}\right]\right\|_{F}}{\left\|\mathbf{H}\right\|_{F}}, \qquad \text{standard deviation} = \frac{\tilde{\mathbf{E}}\left[\left\|\ddot{\mathbf{H}} - \tilde{\mathbf{E}}\left[\ddot{\mathbf{H}}\right]\right\|_{F}\right]}{\left\|\mathbf{H}\right\|_{F}}, $$

where H=h(ΛG,Ω), \(\ddot{\mathbf{H}} = \ddot{h}(\mathbf{\Lambda}_{G}, \mathbf{\Omega})\), and \(\tilde {\mathbf {E}} [{\cdot }]\) is the empirical expectation computed over 20 independent experiments. We remind the reader that there are two parameters influencing the performance of the convolutional JPSD estimator (see “Appendix: Implementation details of the JPSD estimator”): the window size L corresponding to our assumption for the support length of the autocorrelation in time, and the number of graph filters F used to capture power density in the graph spectral dimension. As discussed in Theorem 3, the bias will be small as long as the JPSD is a smooth function (it has a small Lipschitz constant ε), in which case one may opt for small L and F. Figure 2a–d report four key metrics for an exhaustive search of L,F combinations. We observe that large values of F and L generally reduce the estimation error (Fig. 2a) because they result in reduced bias (Fig. 2b). Nevertheless, setting the parameters to their maximum values is not suggested as the variance is increased (Fig. 2c).

6.1.3 Complexity

In Fig. 2d, we see that utilizing a large number of filters (i.e., large F) increases the average execution time. Figure 3 delves further into the issue of scalability. In particular, we vary the number of vertices from 1000 to 9000 and focus on a process with JPSD \(h(\theta) = e^{-\lambda /\lambda _{\text {max}}} \, e^{-5\, \omega ^{2}}\). We then examine the min/median/max execution time of the convolutional JPSD estimator for increasing problem sizes when run on a desktop computer, with each measurement repeated 10 times. We compare two implementations. The first, which naively performs the convolution in the spectral domain, uses the eigenvalue decomposition and therefore scales quadratically with the number of vertices. Due to its optimized code and simplicity, this should be the method of choice when N is small. For larger problems, we suggest using the fast implementation. As shown in the figure, this scales linearly with N (here E=O(N)) when the number of filters F and timesteps T are held constant. In this experiment, we set L to 64.

6.1.4 How to choose L and F?

In the absence of computational constraints, one should choose the parameter combination that minimizes the Akaike information criterion (AIC) score \(\text {AIC} = 2FL - 2\ln \left (\ddot {\ell }\right)\), where \(\ddot {\ell }\) is the distribution dependent estimated likelihood \(\ddot {\ell } = \mathbf {P}\left (\mathbf {x} | \ddot {\mathbf {\Sigma }}\right)\) and \(\ddot {\mathbf {\Sigma }}\) is the estimated covariance based on the convolutional JPSD estimator with parameters L and F [47]. This procedure is often infeasible as it is based on computing each model’s log-likelihood and thus entails estimating one JPSD for each parameterization in consideration (as well as knowing the distribution type). We have found experimentally that setting F=min(N,50) provides a good trade-off between computational complexity and error. On the other hand, we suggest setting L to an upper bound of the autocorrelation support.
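For a Gaussian JWSS process, the log-likelihood decouples over joint frequencies because the covariance is diagonalized by the JFT, so an AIC score for a candidate (L, F) pair can be computed as in the following sketch. This is our own illustration, not the paper's implementation; it assumes the jft helper sketched in Section 2.2 is in scope, that the process is zero mean, and that h_est is the N×T JPSD returned by the convolutional estimator for the candidate pair.

```python
import numpy as np

def gaussian_aic(realizations, h_est, L_G, L, F, eps=1e-12):
    """AIC = 2 F L - 2 ln(likelihood) for a zero-mean Gaussian JWSS model with JPSD h_est (N x T)."""
    loglik = 0.0
    for X in realizations:
        p = np.abs(jft(X, L_G)) ** 2                   # squared joint Fourier coefficients
        loglik += -0.5 * np.sum(p / (h_est + eps) + np.log(2 * np.pi * (h_est + eps)))
    return 2 * F * L - 2 * loglik
```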

6.1.5 Learning from few realizations and a noisy graph

Figure 4 illustrates the benefit of a joint stationarity prior as compared to (a) an empirical covariance estimator which makes no assumptions about the data and (b) the MTWSS process estimator with optimal bandwidth [22]. As expected, accurate estimation is challenging when the number of realizations is much smaller than the number of problem variables (NT), returning errors above one for the empirical estimator. Introducing stationarity priors regularizes the estimation resulting in more stable estimates.

What is perhaps surprising is that even when the graph (and UG) is known only approximately, estimating the second order moment of the distribution using the joint stationarity assumption is beneficial. To illustrate this phenomenon, we also plot the estimation error when using a noisy graph (we corrupted the weighted adjacency matrix by Gaussian noise, resulting in an SNR of 10 dB). Undoubtedly, introducing noise to the graph edges negatively affects estimation by introducing bias. Still, even with noise, the proposed method significantly outperforms purely time-based methods when fewer than NT realizations are available.

6.2 Recovery performance on three datasets

We apply our methods on three diverse datasets featuring multivariate processes evolving over graphs: (a) a weather dataset depicting the temperature of 32 weather stations over 1 month, (b) a traffic dataset depicting high-resolution daily vehicle flow of 4 weekdays, and (c) SIRS-type epidemics in Europe. Our experiments aim to show that joint stationarity is a useful model, even in datasets which may violate the strict conditions of our definition, and that it can yield a significant improvement in recovery performance, as compared to time- or vertex-based stationarity methods.

6.2.1 Experimental setup

We split the K realizations of each dataset into a training set of size ptK and a test set of size (1−pt)K, respectively. The training set is used to estimate the JPSD. Then, in the first two experiments, we attempt to recover the values of pdNT variables randomly discarded from the test set. This corresponds to A being a binary diagonal matrix and w=0 in Problem P0, for which the solution is not given by a Wiener filter. In the third experiment, we instead consider a denoising problem with A=I and w being a random Gaussian vector. In each case, we report the RMSE for the recovered signal normalized by the 2-norm of the original signal. We compare our joint method with the sample and convolutional JPSD estimators to univariate time/vertex stationarity [17]. These methods solve the statistical recovery problem under the assumption that signals are stationary in the time/vertex domains, but considering different vertices/timesteps as independent. These methods are known to outperform non-model based methods, such as Tikhonov regularization (ridge regression) and total-variation regularization (lasso) over the time or graph dimensions [12, 13]. We also compare to the more involved MTWSS model [25] where the values at different vertices are correlated and the covariance is block circulant of size NT×NT (see Definition 2). The latter is only shown for the weather dataset as the large number of variables present in the other datasets (e.g., ≈10⁸ parameters for the traffic dataset) prohibited computation. We remark that the graph Laplacians we considered did not possess eigenvalue multiplicities, meaning that the results obtained using the JWSS-alternate definition are identical to those obtained with JWSS using a sample JPSD estimator—thus, we do not include JWSS-alternate in our comparison.

6.2.2 Molene dataset

The French national meteorological service has published in open access a datasetFootnote 9 with hourly weather observations collected during the month of January 2014 in the region of Brest (France) [18]. The graph was built from the coordinates of the weather stations by connecting all neighbors within a given radius with a weight function WG[i1,i2]=exp(−k d(i1,i2)²), where d(i1,i2) is the Euclidean distance between stations i1 and i2. Parameter k was adjusted to obtain an average degree around 5 (k, however, is not a sensitive parameter). We split the data in K=15 consecutive periods of T=48 h each. As sole pre-processing, we removed the mean (over time and stations) of the temperatureFootnote 10.
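As an illustration, the graph construction described above might be sketched as follows (our own hypothetical code, assuming NumPy/SciPy; coords holds the station coordinates, and the radius and kernel parameter k would be tuned to reach the reported average degree of about 5):

```python
import numpy as np
from scipy.spatial.distance import cdist

def geographic_graph(coords, radius, k):
    """Weighted adjacency W_G[i1, i2] = exp(-k d(i1, i2)^2) for stations within `radius` of each other."""
    d = cdist(coords, coords)                      # pairwise Euclidean distances
    W = np.exp(-k * d ** 2) * (d <= radius)        # Gaussian kernel, thresholded at the radius
    np.fill_diagonal(W, 0.0)                       # no self-loops
    return W
```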

We first test the influence of training set size pt, while discarding pd=30% of the test variables. As seen in Fig. 5a, due to its large sample complexity, the MTWSS approach provides good recovery estimates when the number of realizations is large, approaching that of joint stationarity, but suffers for small training sets (though not shown in the figure, the relative mean error was 9.8 when only pt=10% of the data was used for training). Due to their stricter modeling assumptions, univariate stationarity methods returned relevant estimates when trained from few realizations but exhibited larger bias. The convolutional JPSD estimator can be seen to improve upon the sample estimator when the amount of data used for JPSD estimation is small (less than 20%). For bigger training sets, the two estimators yield similar accuracy. Figure 5b reports the achieved errors for recovery problems with progressively larger percentages 5%≤pd≤95% of discarded entries for a training percentage of pt=20%. We can observe that the error trends are consistent across all cases.

Fig. 5

Experiments with weather data. The joint approach becomes especially meaningful when the available data are few. a Influence of the training set size (pd=30%). b Influence of the percentage of missing values (pt=20%)

6.2.3 Traffic dataset

The California department of transportation publishes high-resolution traffic flow measurements (number of vehicles per unit interval) from stations deployed in the highways of Sacramento (Footnote 11). We focused on 727 stations over four weekdays in the period 01–06 April 2016. Starting from the road connectivity network obtained from OpenStreetMap.org, we constructed one time series per highway segment by setting the flow over it to a weighted average of all nearby stations, while respecting traffic direction. This resulted in a graph of N=710 vertices and a total of T=24×12 measurements per day for K=4 days. We used the convolutional JPSD estimator with parameters L=T/2 and F=75, which were experimentally found to give good performance on the training set.

Figure 6a and b depict the mean recovery errors when the training set consisted of 1 day (pt=25%) and 3 days (pt=75%), respectively. The strong temporal correlations present in highway traffic were useful in recovering missing values. Considering both the temporal and spatial dimensions of the problem resulted in accurate estimates, with less than 0.04 error when pd=50% of the data were removed and the PSD was estimated from a single day. As expected, the convolutional estimator is most useful when the training set is small (1 out of 4 days used for training): assuming that the JPSD is smooth helps to reduce the estimation variance and the computational complexity, but can lead to a slight decrease in accuracy when a large amount of training data is available.

Fig. 6

Experiments on Sacramento highway flow. By exploiting both graph and temporal dimensions, the joint approach closely captures the subtle variations in traffic throughout each weekday. a 1 out of 4 days used for training (pt=25%). b 3 out of 4 days used for training (pt=75%)

6.2.4 SIRS epidemic

Our third experiment simulates the spread of an infectious disease over N=200 major cities of Europe, as predicted by the susceptible-infected-recovered-susceptible (SIRS) model, one of the standard models used to study epidemics. We intend to examine the predictive power of the considered methods when dealing with different realizations of a non-linear and probabilistic process over a graph (the data are fictitious). We parameterized SIRS as follows: length of the infection period, 2 days; length of the immunity period, 10 days; probability of contagion across neighboring cities per day, 0.005; and total period, T=180 days. We generated a total of K=10 epidemic realizations, all having the same starting point.
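A minimal sketch of one way such a simulation can be implemented is given below (Python/NumPy); the contagion rule and the stand-in contact graph are illustrative assumptions, not the exact generator used in the experiment.

```python
import numpy as np

def simulate_sirs(W, T=180, t_inf=2, t_imm=10, p=0.005, seed_city=0, rng=None):
    """Discrete-time SIRS over a graph with adjacency W; returns an N x T infection matrix."""
    rng = np.random.default_rng() if rng is None else rng
    N = W.shape[0]
    inf_left = np.zeros(N, dtype=int)       # remaining infected days per city
    imm_left = np.zeros(N, dtype=int)       # remaining immune days per city
    inf_left[seed_city] = t_inf             # every realization starts from the same city
    X = np.zeros((N, T))
    for t in range(T):
        infected = inf_left > 0
        X[:, t] = infected
        # each infected neighbor independently transmits with probability p per day
        n_inf_nb = (W > 0).astype(float) @ infected.astype(float)
        p_new = 1.0 - (1.0 - p) ** n_inf_nb
        new_inf = (inf_left == 0) & (imm_left == 0) & (rng.random(N) < p_new)
        ending = inf_left == 1              # cities whose infection ends today
        inf_left = np.maximum(inf_left - 1, 0)
        imm_left = np.maximum(imm_left - 1, 0)
        imm_left[ending] = t_imm            # recovered cities become temporarily immune
        inf_left[new_inf] = t_inf
    return X

# Hypothetical contact graph standing in for the N = 200 cities; K = 10 realizations.
rng = np.random.default_rng(2)
N = 200
upper = np.triu(rng.random((N, N)) < 0.03, 1).astype(float)
W = upper + upper.T
realizations = [simulate_sirs(W, rng=rng) for _ in range(10)]
```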

In contrast to the previous experiments, here we attempt to recover the data after they have been corrupted with additive Gaussian noise. Figure 7a and b depict the mean recovery error as a function of the input signal-to-noise ratio (SNR), when pt=50% and pt=90% of the data were used for training, respectively. As in the previous experiments, the joint stationarity prior attains better recovery. The difference becomes clearer for low SNR, in which case the error is decreased (roughly) by a factor of two w.r.t. the best alternative.
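For reference, a small sketch of how noisy observations can be generated at a prescribed input SNR (the scaling convention is an assumption on our part):

```python
import numpy as np

def add_noise_at_snr(x, snr_db, rng=None):
    """Return x corrupted by white Gaussian noise scaled to the target input SNR (dB)."""
    rng = np.random.default_rng() if rng is None else rng
    noise_power = np.mean(x ** 2) / 10.0 ** (snr_db / 10.0)
    return x + np.sqrt(noise_power) * rng.standard_normal(x.shape)

# Example: corrupt a (vectorized) realization at several input SNRs, as swept in Fig. 7.
x_clean = np.random.default_rng(3).standard_normal(200 * 180)
noisy = [add_noise_at_snr(x_clean, snr) for snr in (-5, 0, 5, 10)]
```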

Fig. 7

Experiments with the SIRS epidemic model. a Influence of the noise level (pt=50%). b Influence of the noise level (pt=90%)

6.2.5 Code

We remark that our simulations were done using the GSPBOX [48], the UNLocBoX [49], and the LTFAT [50]. The code reproducing our experiments is available at https://lts2.epfl.ch/stationary-time-vertex-signal-processing/.

7 Conclusion

This paper proposed a new definition of wide-sense stationarity appropriate for multivariate processes supported on the vertices of a graph.

Our model presents two key benefits. First, the estimation and recovery of JWSS processes is efficient, both in terms of estimation variance and computational complexity. In particular, the JPSD of a JWSS process can be estimated from few observations at a complexity that is roughly linear in the number of graph edges and timesteps. Once the PSD has been estimated, the linear MMSE recovery problems of interpolation and denoising can be solved with the same asymptotic complexity. Second, joint stationarity is a versatile model, able to capture non-trivial statistical relations in the temporal and vertex domains. Our experiments suggested that real spatiotemporal processes can be modeled as jointly stationary without significant loss. Specifically, the JWSS prior was found to be more expressive than the (univariate) TWSS and VWSS priors, and improved upon the multivariate time stationarity prior when the dimensionality was large but model estimation had to rely on few observations of the process.

8 Appendix

8.1 Implementation details of the JPSD estimator

A straightforward implementation requires O(N^3) operations for computing the eigenbasis of the graph, O(N^2 KT) for performing KT independent GFTs, O(KNT log T) for KN independent FFTs, and O(N^2 T^2) for the convolution.

This section describes how to approximate the convolutional estimate using a number of operations that is linear in ET. Before describing the exact algorithm, we note two helpful properties of the estimator. First, we can compute \(\ddot {h}(\theta)\) by obtaining estimates for each X(k) independently and then averaging over k:

$$\begin{array}{*{20}l} \ddot{h}(\theta) &= \frac{1}{K\, c_{g}(\theta)} \sum_{k} \sum_{n, \tau} g(\theta - \theta_{n,\tau})^{2} \, |\text{JFT}\{{\mathbf{X}_{(k)}}\}[\!n,\tau]|^{2} \end{array} $$

As we will see in the following, the terms inside the outer sum can be approximated efficiently, avoiding the need for an expensive JFT. In addition, when the convolution window is separable, i.e., g(θ)=gG(λ)gT(ω), as is assumed in this contribution, the joint convolution can be performed successively (and in either order) in the time and vertex domains

$$\begin{array}{*{20}l} \ddot{h}(\theta) & = \sum_{\tau} \frac{g_{T}(\omega - \omega_{\tau})^{2}}{c_{g_{T}}(\omega)} \left(\sum_{n} \frac{g_{G}(\lambda - \lambda_{n})^{2}}{c_{g_{G}}(\lambda)} \, \dot{h}(\theta_{n,\tau}) \right), \end{array} $$

where \(c_{g}(\theta) = c_{g_{T}}(\omega) \cdot c_{g_{G}}(\lambda)\phantom {\dot {i}\!}\). Exploiting this property, we treat the implementation of the two convolutions separately and the presented algorithms can be combined in any order.
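To make the separability concrete, the following sketch (Python/NumPy) smooths a sample JPSD, given on the (λn, ωτ) grid, first over graph frequencies and then over angular frequencies; the grid, the windows, and the sample values are illustrative placeholders.

```python
import numpy as np

def smooth_axis(H, freqs, g, axis):
    """One of the two 1-D convolutions of the separable estimator:
    for each target frequency f, computes sum_i g(f - freqs[i])^2 H[i] / c_g(f),
    where c_g(f) = sum_i g(f - freqs[i])^2."""
    Wmat = g(freqs[:, None] - freqs[None, :]) ** 2   # Wmat[f, i] = g(f - freqs[i])^2
    Wmat /= Wmat.sum(axis=1, keepdims=True)          # normalization by c_g(f)
    return Wmat @ H if axis == 0 else H @ Wmat.T

# Illustrative grid and windows (wrap-around of the angular frequencies is ignored here).
rng = np.random.default_rng(4)
N, T = 40, 64
lam = np.sort(rng.uniform(0, 2, N))                  # stand-in graph eigenvalues
omega = 2 * np.pi * np.arange(T) / T                 # angular frequencies
H_sample = rng.random((N, T))                        # stand-in sample JPSD on the grid
g_G = lambda d: np.exp(-d ** 2 / 0.05)
g_T = lambda d: np.exp(-d ** 2 / 0.2)

# Separability: graph-frequency smoothing first, then time-frequency smoothing
# (the two steps act on different axes and can be applied in either order).
H_smooth = smooth_axis(smooth_axis(H_sample, lam, g_G, axis=0), omega, g_T, axis=1)
```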

Fast time convolution. This is the textbook case of TPSD estimation, solved by Welch’s method [41]. The method entails splitting each time series into equally sized overlapping segments and averaging, over segments, the squared amplitudes of the Fourier coefficients. The procedure is equivalent to averaging (over time) the squared coefficients of a short-time Fourier transform (STFT) with half-overlapping windows wT, defined such that DFT{wT}(ω)=gT(ω) [51, 52]. Let L be the support of the autocorrelation or, equivalently, the number of frequency bands. We suggest using the iterated sine window

$$\begin{array}{*{20}l} w_{T}(t) \triangleq \begin{cases} \sin\left(0.5\pi\cos\left(\pi t / L\right)^{2}\right) &\text{if}~t\in [\!-L/2,L/2] \\ 0 & \text{otherwise}, \end{cases} \end{array} $$

as it turns the STFT into a tight operator. In order to get an estimate of \(\ddot {h}\) at unknown frequencies, we interpolate between the L known points using splines [53].
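A compact sketch of this step is given below (Python/NumPy); the segment normalization and the test signal are illustrative choices.

```python
import numpy as np

def iterated_sine_window(L):
    """Iterated sine window w_T(t) = sin(0.5*pi*cos(pi*t/L)^2), t in [-L/2, L/2)."""
    t = np.arange(-L // 2, L // 2)
    return np.sin(0.5 * np.pi * np.cos(np.pi * t / L) ** 2)

def welch_tpsd(x, L):
    """Average |DFT|^2 over half-overlapping windowed segments (Welch-style estimate)."""
    w = iterated_sine_window(L)
    starts = range(0, len(x) - L + 1, L // 2)            # half-overlapping segments
    segments = np.stack([x[s:s + L] * w for s in starts])
    return np.mean(np.abs(np.fft.fft(segments, axis=1)) ** 2, axis=0) / np.sum(w ** 2)

# Example: a smooth (approximately TWSS) signal, estimated on L frequency bands;
# values at other frequencies are obtained by spline interpolation.
rng = np.random.default_rng(5)
x = np.convolve(rng.standard_normal(2048), np.ones(8) / 8, mode="same")
tpsd = welch_tpsd(x, L=64)
```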

Fast graph convolution. Inspired by the technique of [17], we perform the graph convolution using an approximate graph filtering operation [54] that scales linearly with the number of graph edges E. In particular,

$$\begin{array}{*{20}l} \sum_{n = 1}^{N} \frac{g_{G}(\lambda - \lambda_{n})^{2}}{c_{g_{G}}(\lambda)} \, \dot{h}(\theta_{n,\tau}) = \frac{ \mathbf{E}\left[\|g_{G}(\lambda\mathbf{I}_{N} - \mathbf{L}_{G}) \, \mathbf{x}_{\tau} \|_{2}^{2}\right]}{c_{g_{G}}(\lambda)}. \end{array} $$
(13)

We suggest using the Gaussian window

$$\begin{array}{*{20}l} g_{G}(\lambda - \lambda_{n}) \triangleq e^{-{(\lambda - \lambda_{n})^{2}}/{\sigma^{2}}}, \end{array} $$
(14)

with σ^2 = 2(F+1)λmax/F^2. As before, we only compute the above for F=O(1) different values of λ and approximate the rest using splines. As the eigenvalues are not known, we need a stable way to estimate \(c_{g_{G}}(\lambda)\). We obtain an unbiased estimate by filtering Q=O(1) random Gaussian signals on the graph \(\epsilon \in \mathbb {R}^{N} \sim \mathcal {N} (0, \mathbf {I}_{N})\), such that

$$\begin{array}{*{20}l} {c_{g_{G}}(\lambda)} = \mathbf{E}\left[{ \frac{1}{Q}\sum_{q = 1}^{Q} \|g_{G}(\lambda\mathbf{I}_{N} - \mathbf{L}_{G}) \, \epsilon_{(q)}\|_{2}^{2}}\right], \end{array} $$
(15)

with variance equal to \(2 \sum _{n=1}^{N} g^{4}(\lambda - \lambda _{n}) / Q\). We omit the analysis, as it is similar to that of Theorem 2. According to our numerical evaluation, the approximation error introduced by the latter estimator and the spectral filtering is almost negligible for a smooth JPSD.
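The following sketch (Python/NumPy) illustrates this stochastic estimate of \(c_{g_{G}}(\lambda)\); for clarity the filtering is done exactly through an eigendecomposition, whereas in practice an approximate polynomial (e.g., Lanczos/Chebyshev) graph filter is used so that no eigendecomposition is required.

```python
import numpy as np

def g_gauss(lam, lam_n, sigma2):
    """Gaussian spectral window g_G(lam - lam_n) = exp(-(lam - lam_n)^2 / sigma2)."""
    return np.exp(-((lam - lam_n) ** 2) / sigma2)

def estimate_cg(L_G, lam_grid, sigma2, Q=20, rng=None):
    """Monte Carlo estimate of c_{g_G}(lam) = sum_n g_G(lam - lam_n)^2 via Q random signals."""
    rng = np.random.default_rng() if rng is None else rng
    lam_n, U = np.linalg.eigh(L_G)                 # used here only to filter exactly
    eps = rng.standard_normal((L_G.shape[0], Q))   # white Gaussian probe signals
    eps_hat = U.T @ eps
    cg = np.empty(len(lam_grid))
    for i, lam in enumerate(lam_grid):
        filtered = U @ (g_gauss(lam, lam_n, sigma2)[:, None] * eps_hat)
        cg[i] = np.mean(np.sum(filtered ** 2, axis=0))   # average of ||g_G(lam I - L_G) eps_q||^2
    return cg

# Hypothetical usage on a small path graph, with F = O(1) evaluation points and splines elsewhere.
N, F = 30, 10
A = np.diag(np.ones(N - 1), 1); A = A + A.T
L_G = np.diag(A.sum(axis=1)) - A
lam_max = np.linalg.eigvalsh(L_G)[-1]
lam_grid = np.linspace(0.0, lam_max, F)
sigma2 = 2 * (F + 1) * lam_max / F ** 2               # window width used in (14)
cg = estimate_cg(L_G, lam_grid, sigma2)
```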

The overall complexity of the fast estimator thus breaks down as follows: (a) O(TKF×E+QF×E)=O((TK+Q)EF) for the fast graph convolutions, where the TK and Q convolutions are performed in order to estimate the quantities in (13) and (15) for F different values of λ; and (b) O(NK×T log(L)) for the fast time convolution, corresponding to NK STFTs. Thus, in total, the complexity of the fast convolutional JPSD estimator is O(TKFE+QEF+NKT log(L)). Furthermore, when Q, F, K, and L are constants, the complexity simplifies to O(TE). We remark that, though asymptotically superior, the fast implementation can be significantly slower when the number of variables is small. Our experiments demonstrate that it should be preferred for N larger than a few thousand (see Fig. 3).

8.2 Univariate vs multivariate JPSD

As discussed in Section 3.2, one could potentially pose a VWSS hypothesis on a product graph to define joint stationarity, but the direct effect of such a choice is that the spectral domain becomes 1-dimensional instead of 2-dimensional. To see why this is problematic, in Fig. 8 we plot the two representations of the JPSD for the three datasets featured in our experiments. It can be seen that the 2D representation (corresponding to the JWSS hypothesis) is more structured than its 1D counterpart. More importantly, a JWSS hypothesis leads to a smoother JPSD: this is exactly what our convolutional JPSD estimator exploits to decrease the estimation variance.

Fig. 8

The JPSD of the three datasets used in our experiments interpreted under a JWSS hypothesis (left) and a VWSS on a product graph hypothesis (right). In both left and right sub-figures, the JPSDs are plotted in logarithmic scale. Interpreting the JPSD as a 2-dimensional function leads to a smoother and more structured representation

8.3 Deferred proofs

Proof of Proposition 1

In order to simplify the notation in the following proof, we define the unravel function \(u:\mathbb {Z}^{2}\rightarrow \mathbb {Z}\) that maps the double index (n,τ) of the matrix X to the corresponding vector index u(n,τ)=(τ−1)N+n, i.e., X[ n,τ]=vec(X)[ u(n,τ)].

By construction of the JFT basis, \(\hat {\mathbf {X}}[0,0]\) captures the DC-offset of a signal, and condition (a) is equivalent to stating that E[ x]=c1NT. Moreover, if the graph is connected and (a) holds, at least one of \(\mathbf {E}\left [{\hat {\mathbf {X}}[\!n_{1},\tau _{1}]}\right ]\) and \(\mathbf {E}\left [{\hat {\mathbf {X}}}[\!n_{2},\tau _{2}]\right ]\) must be zero whenever n1≠n2 or τ1≠τ2, and

$$\begin{array}{*{20}l} {}\mathbf{E}\left[{\hat{\mathbf{X}}[\!n_{1},\tau_{1}] \hat{\mathbf{X}}[\!n_{2},\tau_{2}]}\right] &= \mathbf{E}\left[\!{\hat{\mathbf{X}}[\!n_{1},\tau_{1}] \hat{\mathbf{X}}[\!n_{2},\tau_{2}]}\right]\\ &\quad- \mathbf{E}\left[{\hat{\mathbf{X}}[\!n_{1},\tau_{1}]}\right]\mathbf{E}\left[{\hat{\mathbf{X}}[\!n_{2},\tau_{2}]}\right] \\ &= (\mathbf{U}_{J}^{*} \mathbf{\Sigma} \mathbf{U}_{J})\left[u(n_{1},\tau_{1}), u(n_{2},\tau_{2})\right]. \end{array} $$

Therefore, condition (b) is equivalent to stating that \(\mathbf {\Sigma } = \mathbf {U}_{J} \mathbf {D} \mathbf {U}_{J}^{*}\) for some diagonal matrix D. In addition, (c) asserts that D[ u(n,τ),u(n,τ)]=h(λn,ωτ) for every n,τ. Thus, taken together, (b) and (c) state that \(\mathbf {\Sigma } = \mathbf {U}_{J} \mathbf {D} \mathbf {U}_{J}^{*} = \mathbf {U}_{J} h(\mathbf {\Lambda }_{G}, \mathbf {\Lambda }_{T}) \mathbf {U}_{J}^{*} = h(\mathbf {L}_{G}, \mathbf {L}_{T})\), which is the second moment condition of a JWSS process. □

Proof of Theorem 1

For the first moment, it is straightforward to see that E[X[ n,t]]=c if and only if both E[X[ n,t]]=ct and E[X[ n,t]]=cn for all n,t. For the second moment, the covariance matrix of a JWSS process is by definition the linear operator associated with a joint filter Σ=h(LG,LT). Using (5), \(\mathbf {\Sigma }_{t_{1},t_{2}}\) can be written as

$$ \mathbf{\Sigma}_{t_{1},t_{2}} = \mathbf{U}_{G} \gamma_{\delta}(\mathbf{\Lambda}) \mathbf{U}_{G}^{*} = \gamma_{\delta}(\mathbf{L}_{G}), $$
(16)

where δ=t1−t2+1 and

$$ \gamma_{\delta}(\lambda) = \frac{1}{T} \sum_{\tau=1}^{T} h(\lambda,\omega_{\tau}) e^{j \omega_{\tau} \delta}. $$
(17)

Hence, the process satisfies the (b) statement of Definition 2 (TWSS) and 3 (VWSS). Conversely, if a process is TWSS and VWSS, we have \(\mathbf {\Sigma }_{t_{1},t_{2}}= \gamma _{t_{1},t_{2}}(\mathbf {L}_{G}) = \gamma _{\delta }(\mathbf {L}_{G})\) with the same δ as before. As a result, using (5), its covariance matrix can be written as a joint filter h(LG,LT), where

$$ h(\lambda_{n},\omega_{\tau}) = \sum_{\delta=1}^{T} \gamma_{\delta}(\lambda_{n}) e^{j \omega_{\tau} \delta}, $$
(18)

and hence also satisfies the property of the second moment of JWSS processes. □

Proof of Property 2

The output of a filter f(LJ) can be written in vector form as y=f(LJ)x. If the input signal x is JWSS, we can confirm that the first moment of the filter output is E[ f(LJ)x]=f(LJ)E[ x]=f(0,0)E[ x], which remains constant as E[ x] is constant by hypothesis. The computation of the second moment gives

$$\begin{array}{*{20}l} \mathbf{\Sigma}_{\mathbf{y}} &= \mathbf{E}\left[{ f(\mathbf{L}_{J})\mathbf{x} \left(f(\mathbf{L}_{J}) \mathbf{x} \right)^{*}}\right] - \mathbf{E}\left[{f(\mathbf{L}_{J}) \mathbf{x}}\right] \mathbf{E}\left[{ (f(\mathbf{L}_{J}) \mathbf{x})^{*} }\right]\\ &= f(\mathbf{L}_{J}) \mathbf{E}\left[{ \mathbf{x} \mathbf{x}^{*} }\right] f(\mathbf{L}_{J})^{*} - f(\mathbf{L}_{J}) \mathbf{E}[\!{\mathbf{x}}]\mathbf{E}[\!{\mathbf{x}}]^{*} f(\mathbf{L}_{J})^{*}\\ &= f(\mathbf{L}_{J}) \mathbf{\Sigma}_{\mathbf{x}} f(\mathbf{L}_{J})^{*} = \mathbf{U}_{J} \, \left(f^{2}(\Theta)\, h_{\mathbf{X}}(\Theta) \right) \, \mathbf{U}_{J}^{*}, \end{array} $$

which satisfies the second moment condition of JWSS processes. Above, f^2(Θ) is a diagonal NT×NT matrix whose diagonal is obtained by applying the bivariate function f^2(·,·) to (λn,ωτ) for all n,τ (f can be interpreted as the frequency response of a joint filter). The matrix hX(Θ) is defined analogously. □

Lemma 1

If function h(θ) is ε-Lipschitz, then the bias is bounded by

$$\begin{array}{ll} {}\left|\mathbf{E}\left[{\ddot{h}(\theta) - h(\theta)}\right] \right| \leq \frac{\epsilon }{c_{g}(\theta)} \sum\limits_{n=1,\tau=1}^{N,T} g(\theta - \theta_{n,\tau})^{2} \, \|\theta - \theta_{n,\tau}\|_{2}. \end{array} $$

Proof

Since h(θ) is ε-Lipschitz, we have |h(θ)−h(θn,τ)|≤ε‖θ−θn,τ‖2. Hence, we write

$$\begin{array}{*{20}l} {}\left|\mathbf{E}\left[{\ddot{h}(\theta) - h(\theta)}\right] \right| & = \left| \left(\sum_{n,\tau=1}^{N,T} \frac{g(\theta- \theta_{n,\tau})^{2}}{c_{g}(\theta)} h(\theta_{n,\tau}) \right) - h(\theta)\right| \\ &= \left| \sum_{n,\tau=1}^{N,T} \frac{g(\theta- \theta_{n,\tau})^{2}}{c_{g}(\theta)} \left(h(\theta_{n,\tau}) - h(\theta) \right)\right| \\&\leq \frac{\epsilon}{c_{g}(\theta)} \sum_{n,\tau=1}^{N,T} g(\theta- \theta_{n,\tau})^{2} \, \|\theta - \theta_{n,\tau}\|_{2}, \end{array} $$

where the second equality stems from \(\sum _{n,\tau } g^{2}(\theta - \theta _{n,\tau }) = c_{g}(\theta)\). □

Lemma 2

If X is a JWSS process such that the entries of \(\hat {\mathbf {X}}\) are independent random variables, the convolutional JPSD estimate at θ has variance

$$\begin{array}{*{20}l} \mathbf{Var}\left[{\ddot{h}(\theta)}\right] = \sum_{n,\tau} \frac{g(\theta- \theta_{n,\tau})^{4}}{c_{g}(\theta)^{2}} \, \mathbf{Var}\left[{\dot{h}(\theta_{n,\tau})}\right], \end{array} $$
(19)

where \(\mathbf {Var}\left [{\dot {h}(\theta _{n,\tau })}\right ]\)is the variance of the sample JPSD estimator at θn,τ.

Proof

Set

$$\alpha_{n,\tau}= g(\theta- \theta_{n,\tau})^{2} h(\theta_{n,\tau})/c_{g}(\theta) $$

and \(\hat {\mathbf {E}}_{(k)} = \text {mat}\left ({\hat {\epsilon }_{(k)}}\right) = \text {mat}({h(\boldsymbol {\Lambda }_{G}, \boldsymbol {\Omega })^{+1/2} \hat {\mathbf {x}}_{(k)}})\), where + denotes the pseudo-inverse, \(\hat {\epsilon }_{(k)}\) is white, and mat(·) is the matricization operator. The centered random variable

$$ \begin{aligned} \ddot{h}(\theta) - \mathbf{E}\left[{\ddot{h}(\theta)}\right] &= \sum_{n,\tau} \frac{g(\theta- \theta_{n,\tau})^{2}}{c_{g}(\theta)} (\dot{h}(\theta_{n,\tau}) - h(\theta_{n,\tau})) \\ &= \sum_{n,\tau} \alpha_{n,\tau} \left(\sum_{k} \frac{\hat{\mathbf{E}}_{(k)}[\!n,\tau] \hat{\mathbf{E}}_{(k)}[\!n,\tau]^{*} }{K} - 1\right)\\ &= \sum_{n,\tau} \alpha_{n,\tau} \, z_{n,\tau} \end{aligned} $$

is a weighted sum of centered, identically distributed random variables zn,τ. Moreover, when the elements of \(\hat {\mathbf {E}}_{(k)}\) are independent, so are the variables zn,τ. It follows that

$$\begin{array}{*{20}l} \mathbf{Var}\left[{\ddot{h}(\theta)}\right] &= \sum_{n,\tau} \alpha_{n,\tau}^{2} \, \mathbf{Var}\left[{z_{n,\tau}}\right]\\ &= \sum_{n,\tau} \frac{g(\theta- \theta_{n,\tau})^{4}}{c_{g}(\theta)^{2}} \, \mathbf{Var}\left[{\dot{h}(\theta_{n,\tau})}\right], \end{array} $$

which matches our claim. □

Notes

  1. As it will be discussed in Section 3.2, joint stationarity is strictly more general than vertex stationarity on the product of the two graphs (first proposed in [14] in the deterministic setting), as the latter can only model processes with specific PSD.

  2. Though we use the combinatorial Laplacian in our presentation, our results can be adapted to alternative positive semi-definite matrix definitions of a graph Laplacian, such as the normalized Laplacian.

  3. The same limitation holds for product graph constructions that do not rely on the Cartesian product [14].

  4. Defining joint filters in terms of a product graph would imply that there is a fixed relation between angular and graph frequencies (determined by the product graph construction). As a result, in the product graph framework, filters are univariate functions.

  5. As the correlation between two instants t1 and t2 depends only on the difference between them, E[x[t1]x[t2]]−E[x[t1]]E[x[t2]]=γ[t1−t2], the covariance matrix has to be circulant, a property that is shared by linear filters.

  6. The only exception: for graphs with repeated eigenvalues, the conditions E[x]=c1 and Σ=h(LG) are sufficient but not necessary for the graph stationarity definition based on isometric graph translation [18].

  7. More generally, in an analogy to [18], the JPSD could be block diagonal with each block being of size equal to the multiplicity.

  8. The number of realizations needed for obtaining a good sample covariance matrix of an n-dimensional process is O(n log n) [1, 39].

  9. Access to the raw data is possible directly from https://donneespubliques.meteofrance.fr/donnees_libres/Hackathon/RADOMEH.tar.gz

  10. Though computing separate means (one for each data chunk) yields slightly better performance, to be consistent with the proposed model, we computed a single mean over all training data.

  11. The data correspond to the 3rd district of California and can be downloaded from http://pems.dot.ca.gov/

Abbreviations

AIC: Akaike information criterion

DFT: Discrete Fourier transform

GFT: Graph Fourier transform

JFT: Joint Fourier transform

JPSD: Joint power spectral density

JWSS: Jointly wide-sense stationary

MTWSS: Multivariate time wide-sense stationary

MVWSS: Multivariate vertex wide-sense stationary

PSD: Power spectral density

TPSD: Time power spectral density

TWSS: Time wide-sense stationarity

VPSD: Vertex power spectral density

VWSS: Vertex wide-sense stationarity

WSS: Wide-sense stationarity

References

  1. M. Rudelson, Random vectors in the isotropic position. J. Funct. Anal.164(1), 60–72 (1999).


  2. H. Lütkepohl, New introduction to multiple time series analysis. Springer Sci. Bus. Media (2005).

  3. O. Ledoit, M. Wolf, A well-conditioned estimator for large-dimensional covariance matrices. J. Multivar. Anal.88(2), 365–411 (2004).


  4. C. Lam, Q. Yao, et al., Factor modeling for high-dimensional time series: inference for the number of factors. Ann. Stat.40(2), 694–726 (2012).


  5. G. Connor, The three types of factor models: a comparison of their explanatory power. Financ. Anal. J.51(3), 42–46 (1995).


  6. M. J. Keeling, K. T. Eames, Networks and epidemic models. J. R. Soc. Interface. 2(4), 295–307 (2005).


  7. P. Mohan, V. N. Padmanabhan, R. Ramjee, in Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems. Nericell: rich monitoring of road and traffic conditions using mobile smartphones (ACM, 2008), pp. 323–336.

  8. W. Huang, L. Goldsberry, N. F. Wymbs, S. T. Grafton, D. S. Bassett, A. Ribeiro, Graph frequency analysis of brain signals. IEEE J. Sel. Top. Signal Proc.10(7), 1189–1203 (2016).


  9. F. Zhang, E. R. Hancock, Graph spectral image smoothing using the heat kernel. Pattern Recog.41(11), 3328–3342 (2008).


  10. A. J. Smola, R. Kondor, Kernels and regularization on graphs. Learning theory and kernel machines, 144–158 (2003).

  11. M. Belkin, P. Niyogi, Semi-supervised learning on riemannian manifolds. Mach. Learn.56(1–3), 209–239 (2004).


  12. D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, P. Vandergheynst, The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. Signal Process. Mag. IEEE. 30(3), 83–98 (2013).


  13. A. Sandryhaila, J. M. Moura, Discrete signal processing on graphs. IEEE Trans. Signal Process.61:, 1644–1656 (2013).


  14. A. Sandryhaila, J. M. Moura, Big data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure. IEEE Signal Process. Mag.31(5), 80–90 (2014).


  15. A. Gadde, A. Ortega, in International Conference on Acoustics, Speech and Signal Processing (ICASSP). A probabilistic interpretation of sampling theory of graph signals (IEEE, 2015), pp. 3257–3261.

  16. C. Zhang, D. Florêncio, P. A. Chou, Graph signal processing–a probabilistic framework. Microsoft Res. (2015). Redmond, WA, USA, Tech. Rep. MSR-TR-2015-31.

  17. N. Perraudin, P. Vandergheynst, Stationary signal processing on graphs. IEEE Trans. Signal Process.65(13), 3462–3477 (2017). https://doi.org/10.1109/TSP.2017.2690388.


  18. B. Girault, in Signal Processing Conference (EUSIPCO), 2015 23rd European. Stationary graph signals using an isometric graph translation (IEEE, 2015), pp. 1516–1520.

  19. A. G. Marques, S. Segarra, G. Leus, A. Ribeiro, Stationary graph processes and spectral estimation. IEEE Trans. Signal Process.65(22), 5911–5926 (2016).


  20. E. Isufi, A. Loukas, A. Simonetto, G. Leus, Autoregressive moving average graph filtering. IEEE Trans. Signal Process.65(2), 274–288 (2017).


  21. A. Loukas, D. Foucard, in Global Conference on Signal and Information Processing (GlobalSIP). Frequency analysis of time-varying graph signals (IEEE, 2016), pp. 346–350.

  22. N. Wiener, P. Masani, The prediction theory of multivariate stochastic processes. Acta Math.98(1), 111–150 (1957).


  23. N. Wiener, P. Masani, The prediction theory of multivariate stochastic processes, ii. Acta Math.99(1), 93–137 (1958).


  24. P. Bloomfield, Fourier analysis of time series: an introduction (Wiley, 2004).

  25. F. R. Bach, M. I. Jordan, Learning graphical models for stationary time series. IEEE Trans. Signal Process.52(8), 2189–2199 (2004).


  26. R. Dahlhaus, M. Eichler, Causality and graphical models in time series analysis. Oxf. Stat. Sci. Ser., 115–137 (2003).

  27. B. Girault, Signal processing on graphs-contributions to an emerging field. PhD thesis, Ecole normale supérieure de lyon (2015).

  28. S. P. Chepuri, G. Leus, in Sensor Array and Multichannel Signal Processing Workshop (SAM). Subsampling for graph power spectrum estimation (IEEE, 2016), pp. 1–5.

  29. A. Loukas, E. Isufi, N. Perraudin, in Asilomar Conference on Signals, Systems, and Computers. Predicting the evolution of stationary graph signals (IEEE, 2017), pp. 60–64.

  30. J. Mei, J. M. Moura, Signal processing on graphs: causal modeling of unstructured data. IEEE Trans. Signal Process.65(8), 2077–2092 (2017).


  31. V. N. Ioannidis, D. Romero, G. B. Giannakis, Inference of spatio-temporal functions over graphs via multikernel kriged kalman filtering. Trans. Signal Process.66(12), 3228–3239 (2017).


  32. P. D. Lorenzo, E. Isufi, P. Banelli, S. Barbarossa, G. Leus, in 2017 25th European Signal Processing Conference (EUSIPCO). Distributed recursive least squares strategies for adaptive reconstruction of graph signals, (2017), pp. 2289–2293. https://doi.org/10.23919/EUSIPCO.2017.8081618.

  33. N. Perraudin, A. Loukas, F. Grassi, P. Vandergheynst, Towards stationary time-vertex signal processing. Int. Conf. Acoust. IEEE Speech Signal Process. (ICASSP) (2017).

  34. F. Grassi, A. Loukas, N. Perraudin, B. Ricaud, A time-vertex signal processing framework. IEEE Trans. Signal Process.66(3), 817–829 (2017).


  35. A. Loukas, A. Simonetto, G. Leus, Distributed autoregressive moving average graph filters. IEEE Signal Process. Lett.22(11), 1931–1935 (2015).


  36. E. Isufi, A. Loukas, A. Simonetto, G. Leus, in Signal Processing Conference (EUSIPCO), 2016 24th European. Separable autoregressive moving average graph-temporal filters (IEEE, 2016), pp. 200–204.

  37. D. I. Shuman, B. Ricaud, P. Vandergheynst, Vertex-frequency analysis on graphs. Appl. Comput. Harmon. Anal.40(2), 260–291 (2016).


  38. S. Segarra, S. P. Chepuri, A. G. Marques, G. Leus, Statistical graph signal processing: Stationarity and spectral estimation, 325–347 (2018).


  39. R. Vershynin, How close is the sample covariance matrix to the actual covariance matrix?J. Theor. Probab.25(3), 655–686 (2012).


  40. M. S. Bartlett, Periodogram analysis and continuous spectra. Biometrika. 37(1/2), 1–16 (1950).


  41. P. Welch, The use of fast fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. Trans. Audio Electroacoustics IEEE. 15(2), 70–73 (1967).


  42. O. Axelsson, Conjugate gradient type methods for unsymmetric and inconsistent systems of linear equations. Linear Algebra Appl.29:, 1–16 (1980).


  43. P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul.4(4), 1168–1200 (2005).


  44. P. L. Combettes, J. -C. Pesquet, Proximal splitting methods in signal processing. Fixed-point algorithms for inverse problems in science and engineering, 185–212 (2011).

  45. N. Komodakis, J. -C. Pesquet, Playing with duality: an overview of recent primal–dual approaches for solving large-scale optimization problems. IEEE Signal Process. Mag.32(6), 31–54 (2015).


  46. P. L. Combettes, J. -C. Pesquet, A douglas–rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process.1(4), 564–574 (2007).


  47. H. Akaike, A new look at the statistical model identification. IEEE Trans. Autom. Control.19(6), 716–723 (1974).


  48. N. Perraudin, J. Paratte, D. Shuman, V. Kalofolias, P. Vandergheynst, D. K. Hammond, GSPBOX: A toolbox for signal processing on graphs. ArXiv e-prints (2014). http://arxiv.org/abs/1408.5781.

  49. N. Perraudin, D. Shuman, G. Puy, P. Vandergheynst, UNLocBoX A matlab convex optimization toolbox using proximal splitting methods. ArXiv e-prints (2014). http://arxiv.org/abs/1402.0779.

  50. Z. Prusa, P. L. Sondergaard, N. Holighaus, C. Wiesmeyr, P. Balazs, The large time-frequency analysis toolbox 2.0. Sound, Music, and Motion: 10th International Symposium. 8905:, 419–442 (2014).


  51. K. Gröchenig, Foundations of time-frequency analysis. Springer Sci. Bus. Media (2013).

  52. H. G. Feichtinger, T. Strohmer, Gabor analysis and algorithms: theory and applications. Springer Sci. Bus. Media (2012).

  53. C. De Boor, A practical guide to splines, vol. 27 (Springer-Verlag, New York, 1978).


  54. A. Susnjara, N. Perraudin, D. Kressner, P. Vandergheynst, Accelerated filtering on graphs using Lanczos method. arXiv preprint (2015). arXiv:1509.04537.


Acknowledgements

We thank Francesco Grassi for his help with the code.

Funding

This work has been supported by the Swiss National Science Foundation research project Towards Signal Processing on Graphs (grant number: 2000_21/154350/1) and research project Deep Learning for Graph-Structured Data (grant number: PZ00P2 179981).

Author information


Contributions

The two authors contributed equally both for the experiments and for the writing of the paper. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Andreas Loukas.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Loukas, A., Perraudin, N. Stationary time-vertex signal processing. EURASIP J. Adv. Signal Process. 2019, 36 (2019). https://doi.org/10.1186/s13634-019-0631-7


Keywords