
Bandlimited graph signal reconstruction by diffusion operator

An Erratum to this article was published on 25 January 2017

Abstract

Signal processing on graphs extends concepts and methodologies from classical signal processing theory to data indexed by general graphs. For a bandlimited graph signal, the unknown data associated with unsampled vertices can be reconstructed from the sampled data by exploiting the spatial relationship of the graph signal. In this paper, we propose a generalized analytical framework for the unsampled graph signal and introduce the concept of a diffusion operator, which consists of a local-mean and a global-bias diffusion operator. Then, a diffusion operator-based iterative algorithm is proposed to reconstruct a bandlimited graph signal from sampled data. In each iteration, the reconstructed residuals associated with the sampled vertices are diffused to all the unsampled vertices to accelerate the convergence. We then prove that the proposed reconstruction strategy converges to the original graph signal. The simulation results demonstrate the effectiveness of the proposed reconstruction strategy under various downsampling patterns, fluctuations of the graph cut-off frequency, classic graph structures, and noisy scenarios.

1 Introduction

Recent years have witnessed an enormous growth of interest in efficient paradigms and techniques for representation, analysis, and processing of large-scale datasets emerging in various fields and applications, such as sensor and transportation networks, social networks and economic networks, and energy networks [1, 2]. The irregular structure is the most important characteristic of those large-scale datasets, which limits the applicability of many approaches used for small-scale datasets. This big data problem motivates the emerging field of signal processing on graphs.

Signal processing on graphs extends the classical signal processing techniques and paradigms to the irregular domain [3–5]. Graphs are useful tools for representing large-scale datasets with geometric structures. The relational structure of a large-scale dataset is represented by a graph, in which data elements correspond to vertices, the relationship between data elements is represented by an edge, and the strength of the relationship is reflected in the edge weight. The graph signal can be regarded as a vector signal that carries the spatial relationship of the vertices. Due to the complicated relationships and large data volume, it is necessary to transform the original graph signal into a small-scale representation. Downsampling can be treated as any decrease in dimension via an operator, and conversely, interpolation can be treated as any increase in dimension via an operator. The main purpose of the downsampling method is that the original graph signal may be reconstructed from its entries on only a subset of the vertices by exploiting smoothness. Pesenson in [6] established a Paley-Wiener function based sampling theory on combinatorial graphs. He proposed the concept of a uniqueness set for downsampling and gave a sufficient condition that the downsampling set needs to satisfy for unique reconstruction. For the reconstruction of a sampled graph signal, the main methodology of current algorithms is to extend the Papoulis-Gerchberg algorithm [7, 8] from the classical regular domain to the irregular graph domain. S. K. Narang in [9] proposed an iterative least square reconstruction (ILSR) algorithm for reconstructing a bandlimited graph signal from partially observed samples. ILSR adopts the method of projection onto convex sets (POCS) to iteratively project the sampled data onto the downsampling subspace and the low-pass filtering subspace. In [10, 11], X. Wang proposed the concept of a local-set and two local-set based iterative reconstruction algorithms (IWR and IPR) for recovering the sampled bandlimited graph signal. The local-sets are formed by partitioning the graph into several disjoint subgraphs. Iterative propagating reconstruction (IPR) is also an iterative reconstruction algorithm based on the philosophy of POCS. Compared with ILSR, IPR propagates the reconstructed residual associated with each sampled vertex to the unsampled vertices in the respective local-set. Given the benefit obtained from propagating the reconstructed residual, IPR converges faster than ILSR. However, the graph signals associated with the unsampled vertices are influenced by all the sampled vertices in the graph, which should not be limited to the local-set. Besides, since the differences among the reconstruction residuals associated with the unsampled vertices cannot be ignored, propagating the residual evenly may not assign appropriate values.

Related works on the downsampling and reconstruction of graph signals include the methods proposed in [12, 13]. In [12], the authors proposed a sampling theory and reconstruction method for bandlimited graph signals. The sampling theory proposed in [12] focuses on the graph adjacency matrix and a non-iterative reconstruction method. In [13], the authors proposed a sampling aggregation method for graph signals, where the observations are aggregated to one vertex. Different from these works, in this paper we focus on iterative reconstruction methods for bandlimited graph signals.

The main contribution of this paper is a generalized analytical framework for the graph signals associated with the unsampled vertices, which further improves the convergence rate of bandlimited graph signal reconstruction. We decompose the graph signals associated with the unsampled vertices into three components, i.e., the extrapolated component, the local-mean diffusion component, and the global-bias diffusion component. Based on this scheme, we propose an iterative diffusion operator-based reconstruction algorithm. The correspondence between the proposed algorithm and the current reconstruction algorithms (ILSR and IPR) is also analyzed, which will be helpful for future work on the reconstruction of bandlimited graph signals. Besides, a theoretical analysis of the proposed iterative reconstruction algorithm is presented. We then compare the performance of the proposed algorithm and the current algorithms under various downsampling patterns, fluctuations of the graph cut-off frequency, classic graph structures, and noisy scenarios. Finally, we adopt the temperature data of the USA and the electricity consumption data of Shandong province of China as examples of real-world data to test the performance of the reconstruction algorithms. The simulation results show that the proposed algorithm achieves better performance.

The rest of this paper is organized as follows. In Section 2, the previous works of downsampling and reconstruction for bandlimited graph signal are briefly reviewed. In Section 3, we propose the concept of diffusion operator and its corresponding iterative reconstruction algorithm. In Section 4, we analyze and prove the convergence of the proposed algorithm. In Section 5, we demonstrate the proposed algorithm by using the synthetic and real-world data on various graphs. In Section 6, conclusions are drawn.

2 The previous work for downsampling and reconstruction

A simple, connected, and undirected graph G=(V,E) is a collection of vertices V and edges E, with V={1,2,…,N} representing the set of vertices of the graph and \(E=\{w_{i,j},\, i,j \in V\}\) representing the set of edges connecting vertices i and j with weight \(w_{ij}\), where \(w_{ii}=0\). The adjacency matrix of the graph is defined as \(A(i,j)=w_{ij}\). The degree \(d_{i}\) of a vertex i is defined as the sum of the weights of the edges connected to vertex i. The degree matrix of the graph is a diagonal matrix defined as \(D=\text{diag}\{d_{1},d_{2},\ldots,d_{N}\}\). The Laplacian matrix of the graph is defined as \(L=D-A\). The normalized Laplacian matrix \(\mathcal {L}\) is a symmetric positive semi-definite matrix and can be decomposed as

$$ {\mathcal{L}} = {D^{- 1/2}}L{D^{- 1/2}} = U\Lambda {U^{T}} = \sum\limits_{i = 1}^{N} {{\lambda_{i}}} {{{u}}_{i}}{{u}}_{i}^{T}, $$
(1)

where \(U^{T}\) denotes the transpose of the Laplacian eigenvector matrix U, \(\Lambda=\text{diag}\{\lambda_{1},\lambda_{2},\ldots,\lambda_{N}\}\) is a diagonal matrix of real Laplacian eigenvalues ordered as \(\lambda_{1} \le \lambda_{2} \le \ldots \le \lambda_{N}\), and the corresponding orthogonal set of Laplacian eigenvectors is denoted as \(U=\{u_{1},u_{2},\ldots,u_{N}\}\), with \(u_{i}\) the ith column vector of the Laplacian eigenvector matrix.

For example, the Minnesota path graph is shown in Fig. 1, which contains 2642 vertices and 6606 edges. A graph signal f is represented as a vector mapping \(f: V \to R^{N}\), such that f(i) is the value of the signal on vertex i. \({\hat f_{i}}=\langle{f},{u}_{i}\rangle\) is the graph Fourier transform (GFT) of f. Similar to the Fourier transform in classical signal processing, the graph Fourier transform expands a graph signal in a basis of Laplacian eigenvectors [14]. The eigenvectors and eigenvalues of the Laplacian matrix provide a spectral interpretation of the graph signal. For a more concise comparison, the eigenvalues of the Laplacian matrix can be regarded as graph frequencies and form the spectrum of the graph, and the Laplacian eigenvector that corresponds to a frequency \(\lambda_{m}\) is called the graph frequency component corresponding to the mth frequency.
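As a quick illustration, the following minimal sketch (plain numpy, on a toy adjacency matrix that is an assumption for demonstration and is not taken from the paper) builds the normalized Laplacian of Eq. (1), computes its eigendecomposition, and applies the GFT pair described above.

```python
# Minimal sketch: normalized Laplacian, eigendecomposition, and GFT.
# The 4-vertex adjacency matrix below is an illustrative assumption.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # weighted adjacency, w_ii = 0

d = A.sum(axis=1)                            # vertex degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.diag(d) - A                           # combinatorial Laplacian L = D - A
L_norm = D_inv_sqrt @ L @ D_inv_sqrt         # normalized Laplacian of Eq. (1)

lam, U = np.linalg.eigh(L_norm)              # eigenvalues ascending, orthonormal U

f = np.array([1.0, 0.9, 1.1, 1.0])           # a graph signal, one value per vertex
f_hat = U.T @ f                              # GFT: hat f_i = <f, u_i>
f_back = U @ f_hat                           # inverse GFT recovers f
```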

Fig. 1
figure 1

The sampled and unsampled vertices on the Minnesota path graph

2.1 Downsampling of a bandlimited graph signal

Downsampling on graphs can efficiently extract valuable information by exploiting the spatial relationship of the graph structure. If the GFT of a graph signal has support only in the frequency range [0,w), the graph signal is regarded as bandlimited in the range [0,w), where w is called the graph cut-off frequency [6, 15]. The space of bandlimited graph signals is often called the Paley-Wiener space (PWS) and is denoted as

$$ {PW}_{w}\left(G \right) = \left\{ {{f}: {\hat f}\left(\lambda \right) = 0 \text{ if } \lambda \ge w} \right\}, $$
(2)

where λ denotes the Laplacian eigenvalue and \(\hat {f}(\lambda)\) denotes the graph frequency component corresponding to λ. We denote by S a downsampling set of the vertices of the graph, and \(S^{c}=V\setminus S\) denotes its complementary set. The purpose of the downsampling operation is to select valuable vertices to form the downsampling set S. The concept of a uniqueness set is defined in [6], which provides a sufficient condition for exact reconstruction from the sampled graph signal. A subset of vertices \(S \subset V\) is a uniqueness set in PWS if, for any two signals g,h, the fact that they coincide on S implies they coincide on V: \(g(S)=h(S) \Rightarrow g=h\).

Currently, there are two solutions for finding an appropriate downsampling set. In [16], the author formulates a greedy heuristic algorithm to obtain an estimate of the optimal downsampling set. In [11], the author proposes a local-set based algorithm for forming the downsampling set. The graph is divided into disjoint subgraphs and each subgraph selects one vertex as the downsampling vertex, which is called the one-hop sampling method. The one-hop sampling method is a rather economical and efficient way to form the downsampling set when there is no restriction on the number of vertices in the downsampling set or on the locations of the downsampling vertices; a rough sketch is given below. In Fig. 1, we adopt the one-hop sampling method to form the downsampling set, where the sampled vertices are denoted by red stars and the unsampled vertices are denoted by blue circles.
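The sketch below gives one plausible greedy reading of the one-hop idea (repeatedly take the highest-degree uncovered vertex as a sampled vertex and group it with its still-uncovered neighbors into a disjoint local-set); the exact forming rule of [11] may differ, so the details here are assumptions.

```python
# Rough sketch of a one-hop (local-set) downsampling rule; the greedy
# highest-degree choice is an assumption, not the exact algorithm of [11].
import numpy as np

def one_hop_sampling(A):
    N = A.shape[0]
    degree = A.sum(axis=1)
    uncovered = set(range(N))
    sampled, local_sets = [], {}
    while uncovered:
        # pick the highest-degree vertex among the uncovered ones
        u = max(uncovered, key=lambda v: degree[v])
        neighbours = {v for v in np.nonzero(A[u])[0] if v in uncovered}
        local_sets[u] = {u} | neighbours     # disjoint local-set around u
        sampled.append(u)
        uncovered -= local_sets[u]
    return sampled, local_sets
```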

2.2 Reconstruction of a bandlimited graph signal

The main methodology of the reconstruction algorithm is to establish the relationship between the sampled vertices and the unsampled vertices according to the spatial structure of the graph. In classical signal processing, the authors in [7, 8] propose an iterative extrapolation strategy (the Papoulis-Gerchberg algorithm) for reconstructing the original signal. The basic idea of the Papoulis-Gerchberg algorithm is to alternately impose the initially known values in the time domain and the finite support constraint in the frequency domain until convergence is reached. The iterative process of the Papoulis-Gerchberg algorithm can be written as follows

$$ \begin{array}{c} {f_{0}^{c}} = {P_{T}^{c}}{f^{c}}\\ f_{k + 1}^{c} = {P_{T}^{c}}{f^{c}} + \left({I - {P_{T}^{c}}} \right){\mathcal{F}}_{c}^{- 1}{P_{w}^{c}}{{\mathcal{F}}_{c}}{f_{k}^{c}} \end{array} $$
(3)

where f c denotes the classical continuous signal, \({{P_{T}^{c}}}\) denotes the time-domain downsampling operator, I denotes the identity operator, \({{P_{w}^{c}}}\) denotes the frequency-domain cut-off operator, and \({{\mathcal {F}}_{c}}\) and \({\mathcal {F}}_{c}^{- 1}\) represent the classical Fourier transform and its inverse, respectively. In each iteration, the Papoulis-Gerchberg algorithm replaces the downsampled part of the estimated reconstruction signal \({{f_{k}^{c}}}\) by the actual known segment and then combines it with the extrapolated segment to form the next iterate. In other words, at the kth iteration, the solution \({f_{k}^{c}}\) is obtained from \(f_{k-1}^{c}\) and satisfies the two constraints of time-domain downsampling and frequency-domain bandlimitedness.
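For concreteness, here is a minimal numpy sketch of the iteration in Eq. (3) for a discrete 1-D signal; the FFT-based low-pass operator and all parameters are illustrative assumptions rather than the original continuous-time setting.

```python
# Hedged sketch of the Papoulis-Gerchberg iteration (Eq. (3)) on a discrete signal.
import numpy as np

def papoulis_gerchberg(f_samples, sample_idx, N, bandwidth, n_iter=50):
    known = np.zeros(N)
    known[sample_idx] = f_samples            # P_T^c f^c: known values, zeros elsewhere
    mask = np.zeros(N, dtype=bool)
    mask[sample_idx] = True

    def lowpass(x):                          # F^{-1} P_w F: keep the lowest frequencies
        X = np.fft.fft(x)
        X[bandwidth:N - bandwidth + 1] = 0
        return np.real(np.fft.ifft(X))

    f_k = known.copy()
    for _ in range(n_iter):
        f_k = np.where(mask, known, lowpass(f_k))   # re-impose known samples each pass
    return f_k
```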

According to the principle of the Papoulis-Gerchberg algorithm, an iterative least square reconstruction (ILSR) algorithm is proposed in [9, 17] for signal processing on graphs. At each iteration, ILSR resets the signal samples on the downsampling set S to the actual given samples and then projects the graph signal onto the low-pass filtering subspace. Denoting by P T the vertex-domain downsampling operator and by f d the sampled graph signal, the downsampling process can be represented as follows

$$ {f_{d}} = {P_{T}}f \Rightarrow {f_{d}}\left(S \right) = f\left(S \right) \text{ and } {f_{d}}\left({{S^{c}}} \right) = 0. $$
(4)

Besides, the vertex domain downsampling operator P T is a diagonal matrix

$$ {P_{T}} = \text{diag}\left\{ {{1_{S}}} \right\} $$
(5)

where 1 S is the set indicator vector, whose ith entry is equal to one if \(i \in S\), and zero otherwise. The iterative process of ILSR can be written as follows

$$ \begin{array}{c} {f_{0}} = {{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}{f_{d}}\\ f_{k + 1}^{L} = {{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}\left({{f_{k}^{L}} + \left({{f_{d}} - {P_{T}}{f_{k}^{L}}} \right)} \right) \end{array} $$
(6)

where \({f_{k}^{L}}\) denotes the kth iterative reconstructed graph signal of ILSR, P w denotes the graph frequency cut-off operator, w denotes the graph cut-off frequency, and \({\mathcal {F}}\) and \({{\mathcal {F}}^{- 1}}\) denote the graph Fourier transform and its inverse, respectively. At the first iteration, the initial reconstructed graph signal f 0 is obtained by projecting the sampled graph signal f d onto the low-pass filtering subspace. In this paper, we define the reconstructed residual \({f_{k}^{s}}\) as the difference between the graph signal f and the reconstructed graph signal f k , i.e., \({f_{k}^{s}} = {f} - {f_{k}}\). Moreover, the reconstructed graph signal f k is denoted by \({f_{k}^{L}}\) for ILSR and \({f_{k}^{P}}\) for IPR. In [10, 11], the author proposes the local-set based IPR algorithm. In each iteration, IPR adopts a local propagation operator to locally and evenly propagate the reconstructed residual. The iterative process of IPR is shown as follows

$$ \begin{array}{c} {f_{0}} = {{\mathcal{Q}}_{w}}\left({\sum\limits_{v \in S} f (v){\mathbf{\delta}_{N(v)}}} \right)\\ f_{k + 1}^{P} = {f_{k}^{P}} + {{\mathcal{Q}}_{w}}\left({\sum\limits_{v \in S} {(f(} v) - {f_{k}^{P}}(v)){\mathbf{\delta }_{N(v)}}} \right) \end{array} $$
(7)

where \({f_{k}^{P}}\) denotes the kth iterative reconstructed graph signal of IPR, \({{\mathcal {Q}}_{w}}\left (\cdot \right)\) is a graph frequency domain cut-off operator, and δ N(v)=(δ N(v)(1),δ N(v)(2),…,δ N(v)(N))T with δ N(v)(m)=1 only when \(m \in N(v)\). According to Eq. (7), IPR first propagates the reconstructed residual locally and evenly to the local-set that each sampled vertex belongs to and then projects the new signal onto the low-pass filtering subspace. Since IPR propagates the reconstructed residual in the local-set at each iteration, IPR converges faster than ILSR. However, IPR only focuses on propagating the reconstructed residual within the local-sets. In the next section, we propose a diffusion operator based iterative reconstruction strategy, which extends the reconstructed residual to a more generalized diffusion.
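To make the POCS iteration concrete, the sketch below implements the ILSR update of Eq. (6) with a low-pass operator built from the Laplacian eigendecomposition of Section 2; U, lam, the sample mask, and the cut-off w are inputs supplied by the reader, and the IPR variant would additionally copy each sampled residual to its local-set before the projection.

```python
# Hedged sketch of the ILSR iteration of Eq. (6).
import numpy as np

def lowpass_graph(x, U, lam, w):
    """F^{-1} P_w F: zero the GFT coefficients whose eigenvalue is >= w."""
    x_hat = U.T @ x
    x_hat[lam >= w] = 0.0
    return U @ x_hat

def ilsr(f_d, sample_mask, U, lam, w, n_iter=100):
    """f_d holds the true values on sampled vertices and zeros elsewhere;
    sample_mask is a 0/1 vector indicating the downsampling set S."""
    f_k = lowpass_graph(f_d, U, lam, w)              # initialization f_0
    for _ in range(n_iter):
        residual = f_d - sample_mask * f_k           # reconstructed residual on S
        f_k = lowpass_graph(f_k + residual, U, lam, w)
    return f_k
```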

Different from the POCS method, the sampling theory proposed in [12] recovers the sampled graph signal by employing an interpolation operator \(\Phi=U_{w}(P_{T}U_{w})^{-1}\), where w denotes the bandwidth of the bandlimited graph signal, U denotes the eigenvector matrix of the graph adjacency matrix A, \(U_{w}\) denotes the first w columns of U, and P T denotes the sampling operator. In this paper, we follow the methodology of the POCS method.

3 The diffusion operator-based reconstruction strategy

In this section, we establish a generalized analysis framework for the graph signals associated with the unsampled vertices. The concepts of the local-mean and global-bias diffusion operators are first defined. Then, we propose an iterative diffusion operator-based reconstruction algorithm. Discussions on the current reconstruction algorithms are also included in this section.

3.1 The generalized analytical framework of unsampled graph signal

The essence of the reconstruction algorithm is to establish the relationship between the sampled vertices and the unsampled vertices according to the spatial correlation. However, current research does not pay much attention to the component analysis of the graph signals associated with the unsampled vertices. In this paper, we analyze the graph signals associated with the unsampled vertices from the perspective of mean and bias and then establish a generalized analytical framework of the unsampled graph signal. A bandlimited graph signal is smooth, in the sense that the values associated with neighboring vertices vary slowly [3]. In Section 2, the reconstructed residual is defined as the difference between the actual graph signal and the reconstructed graph signal. Since the actual graph signal and the reconstructed graph signal are both bandlimited, the reconstructed residual is also bandlimited. Thus, the reconstructed residuals associated with neighboring vertices also vary slowly. This important property may allow the diffusion of the reconstructed residuals associated with the sampled vertices to the unsampled vertices, since at each iteration of the POCS method we only know the reconstructed residuals associated with the sampled vertices. Thus, we decompose the actual graph signals associated with the unsampled vertices as

$$\begin{array}{@{}rcl@{}} {f_{\text{acu}}} = {f_{\text{rec}}} + {f_{\text{res}}} \end{array} $$
(8)

where f acu denotes the actual graph signal associated with the unsampled vertices, f rec denotes the extrapolated graph signal obtained by projecting onto the low-pass filtering subspace, and f res denotes the diffused reconstructed residual obtained from the sampled vertices. Since ILSR directly projects the sampled signal onto the low-pass filtering subspace, it can be seen that f res is set to zero and f acu is obtained only from f rec. In IPR, the reconstructed residuals associated with the sampled vertices are copied and assigned to the vertices in the corresponding local-sets, and then the projection procedure is conducted. It can be seen that the vertices within a local-set have the same value of reconstructed residual, which may not fit the property of the actual reconstructed residuals associated with the unsampled vertices. Besides, in practice the graph signal associated with every vertex is either directly or indirectly influenced by all the vertices in a graph, and so should not be limited to the local-set. Thus, for the purpose of accurate analysis, we decompose the diffused reconstructed residual f res into two components, and Eq. (8) can be rewritten as

$$\begin{array}{@{}rcl@{}} {f_{\text{acu}}} = {f_{\text{rec}}} + f_{\text{res}}^{\text{mean}} + f_{\text{res}}^{\text{bias}}. \end{array} $$
(9)

where \(f_{\text {res}}^{\text {mean}}\) denotes the local-mean component of the diffused reconstructed residual, and \(f_{\text {res}}^{\text {bias}}\) denotes the global-bias component of the diffused reconstructed residual. The motivation of this decomposition is that we expect to establish a three-layer analytical framework for the unsampled graph signals. The first layer of the unsampled graph signal is obtained by projecting the sampled graph signal onto the low-pass filtering subspace, which is f rec. The second layer of the unsampled graph signal is obtained from the reconstructed residual of the adjacent sampled vertices, which is \(f_{\text {res}}^{\text {mean}}\). For the local-mean component, the reconstructed residual associated with each sampled vertex is regarded as the mean of the local region around that vertex and is diffused from the sampled vertex to its adjacent unsampled vertices. The third layer of the unsampled graph signal is obtained from the reconstructed residuals of all the sampled vertices, which is \(f_{\text {res}}^{\text {bias}}\). The main purpose of the global-bias component is to capture the fact that the graph signals associated with the unsampled vertices are influenced by all the sampled vertices in the graph. The global-bias component is expected to further exploit the reconstructed residuals associated with the sampled vertices, which is beneficial for accelerating the convergence of reconstruction. Besides, the global-bias component is also used to create differences among the graph signals associated with the unsampled vertices, since the local-mean component assigns the same value of reconstructed residual from a sampled vertex to its adjacent vertices. The collaboration of the local-mean component and the global-bias component is expected to approximate the actual reconstructed residuals associated with the unsampled vertices by further exploiting the smoothness of the bandlimited graph signal.

Besides, Eq. (9) can be regarded as the composition model of the unsampled graph signal of ILSR when \(f_{\text {res}}^{\text {mean}}\) and \(f_{\text {res}}^{\text {bias}}\) are set to zero. Similarly, Eq. (9) can be regarded as the composition model of the unsampled graph signal of IPR when \(f_{\text {res}}^{\text {bias}}\) is set to zero. The f rec of IPR and ILSR are both obtained by directly projecting the sampled graph signal onto the low-pass filtering subspace. Based on this decomposition, in the following parts of this section, we will propose two diffusion operators (the local-mean diffusion operator and the global-bias diffusion operator) to achieve the local-mean diffusion and global-bias diffusion of the reconstructed residuals associated with the sampled vertices.

3.2 The local-mean diffusion operator

In this part, the concept of the local-mean diffusion operator is introduced, followed by a discussion of its relation to local propagation.

According to the decomposition model of (9), the second layer of the unsampled graph signal is the local-mean diffusion component. In this part, we propose a local-mean diffusion operator, which is used to assign the reconstructed residual from the sampled vertices to their adjacent unsampled vertices, to achieve the local-mean diffusion operation. Denote by ρ(v,S) the hop distance of an unsampled vertex v from the downsampling set S, i.e., the fewest number of edges in a shortest path from v to a vertex in the downsampling set S. Besides, we assume that the distance of a sampled vertex from the downsampling set is zero.

Definition 1

For a given reconstructed residual \({f^{s}} = {\left [ {{f_{1}^{s}},{f_{2}^{s}},\ldots,{f_{N}^{s}}} \right ]^{T}}\), we define the local-mean diffusion operator P m as

$$\begin{array}{@{}rcl@{}} f_{\text{res}}^{\text{mean}} = {P_{m}}{f^{s}} = \sum\limits_{i = 1}^{r} {{\delta_{i}}{{B_{i}}}{f^{s}}} \end{array} $$

where \(f_{\text {res}}^{\text {mean}}\) is an N-by-1 vector and denotes the local-mean component of the reconstructed residual, \(B_{i} = CA^{i}\), C is a diagonal matrix whose diagonal elements are the reciprocals of the degrees of the corresponding vertices, A denotes the adjacency matrix of the graph, i denotes the fewest number of edges in a shortest path from an unsampled vertex to a vertex in the downsampling set S, r denotes the maximal number of hops between a sampled vertex and an unsampled vertex on the graph, and δ i denotes the δ-function of ρ with entries

$$\begin{array}{@{}rcl@{}} {{\delta_{i}}\left(u,u \right)} = \left\{ \begin{array}{ll} 1,& i = \rho \left({u,S} \right);\\ 0,&{\text{otherwise}}. \end{array} \right. \end{array} $$

In other words, δ i is a diagonal matrix whose diagonal element for a vertex u is one if ρ(u,S)=i, and zero otherwise.

Indeed, an underlying idea behind the local-mean diffusion operator is to incorporate the structure of the adjacency matrix into the local-mean diffusion of the reconstructed residual. In Definition 1, we employ the powers of the adjacency matrix A to sequentially diffuse the reconstructed residuals associated with the sampled vertices to the adjacent unsampled vertices. Besides, C is used to obtain the weighted value of the diffused reconstructed residual.
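The sketch below is one way to assemble P m from Definition 1; the hop distances ρ(v,S) are computed here by a breadth-first search from the downsampling set, which is our reading of the "fewest number of edges" and should be taken as an assumption.

```python
# Hedged sketch of the local-mean diffusion operator P_m of Definition 1.
import numpy as np
from collections import deque

def local_mean_operator(A, sampled):
    N = A.shape[0]
    C = np.diag(1.0 / A.sum(axis=1))          # reciprocal of vertex degrees
    # multi-source BFS: hop distance rho(v, S) from every vertex to the set S
    rho = np.full(N, -1)
    queue = deque(sampled)
    for s in sampled:
        rho[s] = 0
    while queue:
        u = queue.popleft()
        for v in np.nonzero(A[u])[0]:
            if rho[v] < 0:
                rho[v] = rho[u] + 1
                queue.append(v)
    r = rho.max()                              # maximal hop count in the graph
    P_m = np.zeros((N, N))
    A_power = np.eye(N)
    for i in range(1, r + 1):
        A_power = A_power @ A                  # A^i
        delta_i = np.diag((rho == i).astype(float))
        P_m += delta_i @ C @ A_power           # delta_i * B_i with B_i = C A^i
    return P_m
```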

In [11], the authors propose an operation G which first propagates the reconstructed residual in the respective local-sets and then projects the combined signal onto the low-pass filtering subspace. The first step of G provides a solution for propagating the reconstructed residuals associated with the sampled vertices with the help of local-sets. In this paper, we denote the first (propagating) step of G by G p . It can be seen that G p can be regarded as a special case of the local-mean diffusion operation based on the local-set. Thus, G p and P m achieve the same performance under the precondition of local-sets, but P m can diffuse the reconstructed residuals in more general scenarios.

3.3 The global-bias diffusion operator

The main motivation of the global-bias component is that the graph signals associated with the unsampled vertices are influenced by all the sampled vertices in the graph, which should not be limited to the local region. Besides, the global-bias component is used to provide the differences among the reconstructed residuals within a local region. In this part, we propose a global-bias diffusion operator, which is used to establish the relationship between one sampled vertex and all the unsampled vertices, to achieve the global-bias diffusion operation.

We first analyze the diffusion character of a single element of the reconstructed residual and then extend it to all the elements of the downsampling set. Assume that all the values of the reconstructed residual are zero except at the first vertex, that is, only the first vertex is selected as a sampled vertex. Then, the reconstructed residual can be represented as f s=[f s(1),0,…,0]T, where f s(1) denotes the reconstructed residual residing on the first vertex and \({\hat f^{s}} = {\left [ {{u_{11}}{f^{s}}\left (1 \right),{u_{12}}{f^{s}}\left (1 \right),\ldots,{u_{1N}}{f^{s}}\left (1 \right)} \right ]^{T}}\) denotes its graph frequency components. Since the graph signal is bandlimited, we assume that there are m graph frequency components within the bandwidth w, that is, \({\hat f^{s}} = {\left [ {{u_{11}}{f^{s}}\left (1 \right),\ldots,{u_{1m}}{f^{s}}\left (1 \right),0,..,0} \right ]^{T}}\). Then, transforming \({\hat f^{s}}\) from the graph frequency domain back to the vertex domain and denoting the result by \({\tilde f^{s}}\), we can write

$${} \begin{aligned} {{\tilde f}^{s}}\left(i \right) &= {u_{i1}}{{\hat f}^{s}}\left(1 \right) + \ldots + {u_{im}}{{\hat f}^{s}}\left(m \right) + 0 + \ldots + 0\\ &= {u_{i1}}{u_{11}}{f^{s}}\left(1 \right) + \ldots + {u_{im}}{u_{1m}}{f^{s}}\left(1 \right) + 0 + \ldots + 0\\ &= \left({{u_{i1}}{u_{11}} + \ldots + {u_{im}}{u_{1m}}} \right){f^{s}}\left(1 \right)\\ &= \sum\limits_{j = 1}^{m} {{u_{ij}}{u_{1j}}{f^{s}}\left(1 \right)} \end{aligned} $$
(10)

It can be seen that Eq. (10) establishes the relationship of the reconstructed residual between the first vertex and all the unsampled vertices. Then, we can extend the same assumption to every element of the downsampling set S. Thus, from the perspective of a single unsampled vertex, its estimated reconstructed residual is the sum of the diffused reconstructed residuals of all the sampled vertices. Denote by \({\bar f^{s}}\left (i \right)\) the estimated reconstructed residual of the ith unsampled vertex, which can be written as follows

$$ \begin{aligned} {{\bar f}^{s}}\left(i \right) &= \sum\limits_{h \in S} {{{\tilde f}^{s}}\left(h \right)} \\ &= \sum\limits_{h \in S} {\sum\limits_{j = 1}^{m} {{u_{ij}}{u_{hj}}} {f^{s}}\left(h \right)} \end{aligned} $$
(11)

The global-bias diffusion process can be understood from two aspects. From the perspective of a sampled vertex, the reconstructed residual of a single sampled vertex is diffused to all the unsampled vertices. From the perspective of an unsampled vertex, the estimated reconstructed residual of a single unsampled vertex is the sum of the diffused reconstructed residuals of all the sampled vertices. Then, based on the discussion above, we define the global-bias diffusion operator to diffuse the reconstructed residual to all the unsampled vertices.

Definition 2

For a given reconstructed residual \({f^{s}} = {\left [ {{f_{1}^{s}},{f_{2}^{s}},\ldots,{f_{N}^{s}}} \right ]^{T}}\), we define the global-bias diffusion operator P b as

$$\begin{array}{@{}rcl@{}} f_{\text{res}}^{\text{bias}} = {P_{b}}{f^{s}} = U'U^{\prime\prime}{f^{s}} \end{array} $$

where \(f_{\text {res}}^{\text {bias}}\) is an N-by-1 vector and denotes the global-bias diffusion component of the estimated reconstructed residual, \(U'\) denotes the modified Laplacian eigenvector matrix in which the rows corresponding to the sampled vertices and the columns corresponding to frequencies outside the bandwidth w are set to zero, and \(U''\) denotes the modified Laplacian eigenvector matrix in which the rows corresponding to the unsampled vertices and the columns corresponding to frequencies outside the bandwidth w are set to zero.

In each iteration, the global-bias diffusion operator P b diffuses the iterative reconstructed residual from the sampled vertices to all the unsampled vertices.
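The sketch below builds P b entrywise according to Eq. (11); note that the second factor is transposed in the product, which is how we read Definition 2 and should be treated as an assumption about the intended matrix form.

```python
# Hedged sketch of the global-bias diffusion operator P_b of Definition 2,
# assembled so that entry (i, h) equals sum_j u_ij * u_hj as in Eq. (11).
import numpy as np

def global_bias_operator(U, lam, w, sampled):
    N = U.shape[0]
    in_band = lam < w                          # columns within the bandwidth w
    sampled_mask = np.zeros(N, dtype=bool)
    sampled_mask[sampled] = True

    U_prime = U.copy()
    U_prime[sampled_mask, :] = 0.0             # zero the rows of sampled vertices
    U_prime[:, ~in_band] = 0.0                 # zero the out-of-band columns

    U_dprime = U.copy()
    U_dprime[~sampled_mask, :] = 0.0           # zero the rows of unsampled vertices
    U_dprime[:, ~in_band] = 0.0

    return U_prime @ U_dprime.T                # (i, h) entry: sum_j u_ij u_hj
```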

3.4 The diffusion operator-based reconstruction algorithm

In this part, we propose a diffusion operator based iterative reconstruction (IGDR) algorithm.

Definition 3

For a given reconstructed residual \({f^{s}} = {\left [ {{f_{1}^{s}},{f_{2}^{s}},\ldots,{f_{N}^{s}}} \right ]^{T}}\), we define the diffusion operator P d as

$$\begin{array}{@{}rcl@{}} {f_{\text{res}}} &=& {P_{d}}{f^{s}} \\ &=& \left({{P_{m}} + {P_{b}}} \right){f^{s}} \\ &=& \left({\sum\limits_{i = 1}^{r} {{\delta_{i}}{B_{i}}} + U'U^{\prime\prime}} \right){f^{s}} \end{array} $$

where P m denotes the local-mean diffusion operator, and P b denotes the global-bias diffusion operator.

In each iteration, the diffusion operator diffuses the reconstructed residuals from the sampled vertices to the unsampled vertices, where the local-mean diffusion operator diffuses the reconstructed residual associated with the sampled vertices to their adjacent unsampled vertices and the global-bias diffusion operator diffuses the reconstructed residual associated with the sampled vertices to all the unsampled vertices. According to the discussion above, we propose a diffusion operator based reconstruction algorithm (IGDR), and its process can be written as follows

$$ {f_{0}} = {{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}{f_{d}} $$
(12)
$$ \begin{aligned} f_{k + 1}^{G} &= {{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}\left({f_{k}^{G}} + \left({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right)\right.\\ &\left.\quad+ \left({I - {P_{T}}} \right){P_{d}}\left({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right) \right) \end{aligned} $$
(13)

where \({f_{k}^{G}}\) denotes the kth iterative reconstructed signal of IGDR, \({\left ({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right)}\) denotes the reconstructed residual on the downsampling set, and \({\left ({I - {P_{T}}} \right){P_{d}}\left ({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right)}\) denotes the reconstructed residual diffused from the downsampling set to the non-downsampling set by the diffusion operator.
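Putting the pieces together, the sketch below runs the IGDR iteration of Eqs. (12) and (13); it reuses the helper functions sketched earlier (lowpass_graph, local_mean_operator, global_bias_operator) and is an illustrative composition under those assumptions, not the authors' reference implementation.

```python
# Hedged sketch of the IGDR iteration of Eqs. (12)-(13); lowpass_graph,
# local_mean_operator, and global_bias_operator are the earlier sketches.
import numpy as np

def igdr(f_d, sampled, A, U, lam, w, n_iter=100):
    N = A.shape[0]
    P_T = np.zeros(N)
    P_T[sampled] = 1.0                          # vertex-domain downsampling mask
    P_d = local_mean_operator(A, sampled) + global_bias_operator(U, lam, w, sampled)

    f_k = lowpass_graph(f_d, U, lam, w)         # Eq. (12): initial projection
    for _ in range(n_iter):
        res = f_d - P_T * f_k                   # reconstructed residual on S
        diffused = (1.0 - P_T) * (P_d @ res)    # diffuse the residual to S^c
        f_k = lowpass_graph(f_k + res + diffused, U, lam, w)   # Eq. (13)
    return f_k
```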

3.5 Discussion

ILSR, IPR, and IGDR all follow the methodology of POCS: the sampled data are iteratively projected onto the downsampling subspace and the low-pass filtering subspace. The difference among ILSR, IPR, and IGDR lies in how they handle the reconstructed residual. For ILSR, the sampled signal is directly projected onto the low-pass filtering subspace. For IPR, the reconstructed residuals associated with the sampled vertices are first copied and propagated to the unsampled vertices in the corresponding local-sets, and then the signal is projected onto the low-pass filtering subspace. For IGDR, the diffusion of the reconstructed residuals consists of two components, the local-mean diffusion operation and the global-bias diffusion operation. In the local-mean diffusion operation, the reconstructed residuals associated with the sampled vertices are diffused to their adjacent vertices. The global-bias diffusion operation establishes the diffusion relationship between one sampled vertex and all the unsampled vertices. The local-mean diffusion operation is used to form the basic component of the estimated reconstructed residuals associated with the unsampled vertices, because a sampled vertex has a stronger relationship with its adjacent vertices than with others. The global-bias diffusion operation is used to form the differences among the estimated reconstructed residuals associated with the unsampled vertices in the local region. The collaboration of the local-mean diffusion operation and the global-bias diffusion operation is used to accelerate the convergence of reconstruction. The iterations of the three algorithms are illustrated in Fig. 2.

Fig. 2
figure 2

Illustration of the iterations of the three algorithms

Besides, it can be seen that ILSR can be regarded as a special case of IGDR, when the local-mean diffusion component and global-bias diffusion component are set to zeros. Similarly, IPR can be regarded as a special case of IGDR, when the local-mean diffusion component is designed by the local-set and the global-bias diffusion component is set to zero.

4 The analysis of iterative error and convergence

In this section, we present the theoretical analysis of convergence for the proposed iterative reconstruction algorithm.

Proposition 1

A bandlimited graph signal f can be reconstructed from its downsampling set S by IGDR. For a given graph cut-off frequency w, the reconstruction error bound of IGDR is

$$\begin{array}{@{}rcl@{}} {\eta^{k + 1}} < {\left\| {{{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}\left({I - {P_{T}}} \right)\left({I - {P_{d}}{P_{T}}} \right)} \right\|^{k}} \cdot {\eta^{0}} \end{array} $$
(14)

provided that

$$\begin{array}{@{}rcl@{}} \left({\sqrt {\left({\left({2{J_{\max }} + 1} \right){\alpha_{1}} + 2{R_{\max }}w} \right)} + \sqrt {{\alpha_{2}}}} \right) < 1 \end{array} $$
(15)

where η k denotes the kth iterative error, \({\mathcal {F}}\) and \({{\mathcal {F}}^{- 1}}\) denote the graph Fourier transform and the inverse graph Fourier transform, respectively, w denotes the graph cut-off frequency, P T denotes the downsampling operator which keeps the sampled vertices and pads the unsampled vertices with zeros, P w denotes the graph frequency cut-off operator, I denotes the identity operator, and P d is the diffusion operator. The notations R max, J max, α 1, and α 2 are parameters of the graph signal and the downsampling set; their detailed explanation is given in the proof.

Proof

The proof is postponed to the Appendix. □

5 The simulation results

In this section, we adopt the Minnesota path graph [18] as the graph structure to demonstrate the performance of the proposed algorithm and the current algorithms, which are evaluated in terms of convergence rate, sensitivity to the graph cut-off frequency, the influence of different downsampling sets, and robustness to additive noise. We use synthetic data as the bandlimited graph signal on the Minnesota path graph, and the generation process is as follows:

  1. Generate a random Gaussian signal on the graph.

  2. Transform the graph signal into the graph spectral domain by the graph Fourier transform and remove the frequency components higher than the given graph cut-off frequency.

  3. Transform the graph signal from the graph spectral domain back to the vertex domain.

The Minnesota path graph is shown in Fig. 1. The proposed algorithm and the current algorithms are compared on three classic graph structures to demonstrate robustness. We also use real-world data, namely the temperature (2014.1.1) of 94 cities of the USA and the electricity consumption data (2015) of Shandong province of China, to test the performance of the proposed reconstruction algorithm. Moreover, since IPR needs the help of the local-set, we use the one-hop sampling method to form the downsampling set for a fair comparison. Besides, we define the concept of relative error. Let x denote a vector and \(\tilde {x}\) denote the estimated value of x; then the relative error is defined by \(\varepsilon = \left \| {\tilde x - x} \right \| \left / \left \| x \right \|\right.\).
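The data-generation steps listed above and the relative-error metric can be summarized by the short sketch below, assuming U and lam are the Laplacian eigendecomposition of the chosen graph and w is the cut-off frequency used in each experiment.

```python
# Sketch of the synthetic bandlimited signal (steps 1-3) and the relative error.
import numpy as np

def synthetic_bandlimited_signal(U, lam, w, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    f = rng.standard_normal(U.shape[0])          # step 1: random Gaussian signal
    f_hat = U.T @ f                              # step 2: GFT and remove components
    f_hat[lam >= w] = 0.0                        #          above the cut-off frequency
    return U @ f_hat                             # step 3: back to the vertex domain

def relative_error(f_est, f):
    return np.linalg.norm(f_est - f) / np.linalg.norm(f)
```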

5.1 The convergence performance

In this part, we compare the iterative reconstruction performance of ILSR, IPR, and IGDR. We adopt the maximum degree division based one-hop sampling method to form the downsampling set, and the number of vertices in the downsampling set is 873. The graph cut-off frequency is set to 0.45 (the range of normalized graph frequency is from 0 to 2). Moreover, the convergence of the reconstruction algorithms on a random downsampling set is also considered. For the random downsampling set, 873 vertices are selected completely at random among all the vertices. Since IPR needs the help of the local-set, for the random downsampling set we only consider the performance of ILSR and IGDR. The convergence curves of ILSR, IPR, and IGDR are illustrated in Fig. 3. It is obvious that the convergence rate of the proposed IGDR is improved compared with the current iterative reconstruction algorithms. Besides, it can be seen that the convergence is faster when using the maximum degree division based one-hop sampling set than the random downsampling set.

Fig. 3
figure 3

The convergence curves of ILSR, IPR, and IGDR

5.2 Graph cut-off frequency

Since the main methodology of current reconstruction is iteratively projecting onto the downsampling subspace and the low-pass filtering subspace, the graph cut-off frequency is a crucial quantity. In this simulation, the effect of the variation of the graph cut-off frequency is investigated. The downsampling set is formed by the one-hop sampling method, and the number of vertices in the downsampling set is 873. The graph cut-off frequency varies from 0.4 to 0.5 with a step size of 0.005. Figure 4 shows the final relative error of 10 iterative reconstructions for ILSR, IPR, and IGDR. It can be seen that IGDR has much higher recovery accuracy than the current iterative reconstruction algorithms across the range of graph cut-off frequencies.

Fig. 4
figure 4

The reconstructed performance of ILSR, IPR, and IGDR for different graph cut-off frequency

5.3 Robustness with additive noise

This simulation focuses on the robustness of IGDR and the current reconstruction algorithms against additive noise. An independent and identically distributed Gaussian noise sequence is added to the observations of the sampled graph signal. We adopt the one-hop sampling method to form the downsampling set, and the number of vertices in the downsampling set is 873. The signal-to-noise ratio (SNR) is set to 20 and 40 dB. The graph cut-off frequency is set to 0.45. Moreover, the convergence of the reconstruction algorithms on a random downsampling set is also considered. For the random downsampling set, 873 vertices are selected completely at random among all the vertices. Since IPR needs the help of the local-set, for the random downsampling set we only consider the performance of ILSR and IGDR. The performances are illustrated in Fig. 5. It can be seen that all the algorithms have almost the same reconstruction performance against the additive noise, but IGDR has the fastest convergence. Besides, it can be seen that the convergence is faster when using the maximum degree division based one-hop sampling set than the random downsampling set.

Fig. 5
figure 5

Convergence curves of ILSR, IPR, and IGDR with additive noise

5.4 Robustness with different downsampling sets

The choice of downsampling set may affect the convergence and robustness. We use three different downsampling sets to reconstruct the same bandlimited graph signal. The graph cut-off frequency is set to 0.3. The first downsampling set is the maximum degree division based one-hop sampling set, which is formed by the algorithm in [11] and contains 873 sampled vertices. The second downsampling set is formed by the minimum degree based greedy algorithm in [19], with 923 sampled vertices. The greedy algorithm iteratively removes connected vertices with the smallest degrees from the original graph into the new subset, until the cardinality of the new subset reaches the given maximal cardinality or there is no connected vertex left; a sketch is given below. This greedy algorithm can be regarded as a solution for uniform sampling. For the third downsampling set, 923 vertices are selected completely at random among all the vertices. Since IPR needs the help of the local-set and the second and third downsampling sets do not contain local-sets, in this experiment we only consider the performance of ILSR and IGDR. The convergence curves of the three downsampling sets using ILSR and IGDR are shown in Fig. 6. It can be seen that the convergence is faster when using the maximum and minimum degree division based downsampling sets than the randomly selected downsampling set. The convergence of IGDR is faster than that of ILSR for all three downsampling sets. We can find that different downsampling sets may influence the convergence rate. Besides, all three reconstruction algorithms follow the methodology of projection onto convex sets. The unsampled data are extrapolated by alternately projecting the sampled data onto the downsampling subspace and the low-pass filtering subspace. The cornerstone of this method is the close relationship between the sampled data and the unsampled data. Based on the theory of signal processing on graphs, the edge denotes the relationship between vertices, which also denotes the relationship between graph signal values. That is, a vertex has a stronger relationship with its adjacent vertices than with others. Thus, from the perspective of experience, if the sampled vertices and the unsampled vertices are uniformly distributed on the graph, the reconstruction algorithm may perform more efficiently. The results of this simulation may support this analysis.
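For completeness, here is one plausible reading of the minimum-degree greedy rule of [19] described above; the stopping rule follows the description, and the remaining details (tie-breaking, how removed edges are handled) are assumptions.

```python
# Rough sketch of a minimum-degree greedy subset-forming rule (our reading of [19]).
import numpy as np

def min_degree_greedy(A, max_size):
    A = A.copy()
    remaining_degree = A.sum(axis=1)
    subset = []
    while len(subset) < max_size and remaining_degree.max() > 0:
        candidates = np.where(remaining_degree > 0)[0]       # still-connected vertices
        u = candidates[np.argmin(remaining_degree[candidates])]
        subset.append(int(u))
        A[u, :] = 0                                          # remove u and its edges
        A[:, u] = 0
        remaining_degree = A.sum(axis=1)
    return subset
```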

Fig. 6
figure 6

Convergence curves of ILSR and IGDR with different downsampling sets

5.5 The performance on three classic graph structures

We demonstrate the performance of IGDR on three classical graph structures: the Erdos-Renyi graph, the small-world graph, and the scale-free graph. The Erdos-Renyi graph is a random graph with a certain connection probability for each edge. We generate a 40-vertex Erdos-Renyi graph, in which the probability of edge connection is 0.1. The small-world structure is a typical graph in which most vertices are not neighbors of one another, but most vertices can be reached from every other vertex by a small number of hops or steps. We adopt the Watts-Strogatz model [20] to generate a small-world graph with 100 vertices. The scale-free graph is a graph whose degree distribution follows a power law. We generate a 100-vertex scale-free network according to the Barabasi-Albert model [21]. The graph cut-off frequency is set to 0.5 for all three graph structures. The downsampling set is formed by the one-hop sampling method for all three graphs. To eliminate random effects of the three graph structures, each simulation result is averaged over 100 random network topologies, and one realization of each graph structure is shown in Fig. 7 a–c. Figure 8 shows the relative error for ILSR, IPR, and IGDR on the three graph structures. It is obvious that IGDR has steady performance for different graph structures and also has a lower relative error than the other reconstruction algorithms.

Fig. 7
figure 7

The structures of three classic graphs and weather station of the USA

Fig. 8
figure 8

The performance of reconstruction for ILSR, IPR, and IGDR on classic graph structure

5.6 Sensitivity with imprecise knowledge of graph cut-off frequency

For a bandlimited graph signal, the graph cut-off frequency is a crucial quantity during the reconstruction and is assumed to be known a priori. In actual applications, the graph cut-off frequency may be an imprecise value rather than the ground truth. In this simulation, the effect of imprecise knowledge of the graph cut-off frequency is investigated. The downsampling set is formed by the one-hop sampling method, and the number of vertices in the downsampling set is 873. The actual graph cut-off frequency is set to 0.35. The imprecise values of the graph cut-off frequency are set to 0.3 and 0.4, which are smaller and larger than the actual value, respectively. Figure 9 shows the reconstruction performances of ILSR, IPR, and IGDR. It can be seen that the relative error for the smaller value is larger than for the larger value and the actual value. The simulation results show that the current reconstruction algorithms and the proposed algorithm behave the same under imprecise knowledge of the graph cut-off frequency, and that the reconstruction is more sensitive to an underestimated graph cut-off frequency than to an overestimated one.

Fig. 9
figure 9

The reconstructed performance of ILSR, IPR, and IGDR for the imprecise knowledge of graph cut-off frequency

5.7 Real-world data

In this simulation, real-world data is used to test the performance of the proposed reconstruction algorithm. As an example of real-world data, we adopt the daily temperature data (2014.1.1) measured by 94 weather stations across the USA. The data is collected by the National Climatic Data Center [22]. We represent these stations with an undirected two-nearest-neighbor graph, in which every weather station corresponds to a vertex and is connected by edges to its two closest weather stations. The graph is shown in Fig. 7 d. We use the one-hop sampling method to form the downsampling set, and the number of vertices in the downsampling set is 31. Since the temperature data varies slowly across the graph, most of the graph signal's energy is concentrated in the low frequencies. The graph cut-off frequency can be estimated in two ways: by projecting historical or snapshot data onto the graph frequency domain, and from the convergence condition in Proposition 1. Thus, the graph cut-off frequency is set to 0.65. In Fig. 10, we can find that IGDR has much faster convergence than the current algorithms.

Fig. 10
figure 10

The performance of reconstruction for ILSR, IPR, and IGDR with the temperature data

As another example of real-world data, the electricity consumption (2015) of Shandong province of China is selected. The data is provided by the Shandong statistical yearbook (2015) [23]. The graph structure consists of 17 vertices, which are the 17 cities of Shandong province. The edges of the graph structure denote the electric power transmission lines between the cities. The graph structure is shown in Fig. 11 a. We use the one-hop sampling method to form the downsampling set, and the number of vertices in the downsampling set is 7. The graph cut-off frequency is set to 0.55. In Fig. 11 b, it can be seen that IGDR has much faster convergence than the current reconstruction algorithms.

Fig. 11
figure 11

a The power grid graph of Shandong province of China. b The performance of reconstruction for ILSR, IPR, and IGDR with the electricity consumption data

6 Conclusions

In this paper, the problem of bandlimited graph signal reconstruction was studied. We established a generalized analytical framework of the graph signals associated with the unsampled vertices. We defined the concept of a diffusion operator, which consists of a local-mean diffusion operator and a global-bias diffusion operator. Employing the diffusion operator, we proposed an iterative algorithm to reconstruct the unknown data associated with the unsampled vertices from the observed samples. In each iteration, the reconstructed residuals associated with the sampled vertices are diffused to all the unsampled vertices. We also presented the analysis of the iterative reconstruction error and the convergence of the proposed algorithm. The simulation results show that the techniques presented in this paper outperform the current reconstruction algorithms. Moreover, the main purpose of this paper is to present a generalized model to accelerate the convergence rate of the reconstruction algorithm. The design of the local-mean diffusion operator and the global-bias diffusion operator according to the characteristics of the network topology and graph signals can be investigated in future work.

7 Appendix

Proof

The key notation used in the proof is shown in Table 1.

Table 1 Key notation used in the proof

The graph frequency cut-off operator is defined as the diagonal matrix P w =diag{1 w }, where 1 w is the set indicator vector, whose ith entry is equal to one if \(\lambda_{i} \in [0,w)\), and zero otherwise (the range of normalized graph frequency is from 0 to 2). Thus, it can be seen that P w removes the high-frequency components ([w,2]). Since only the energy of the graph signal in [0,w) is preserved, \({{\mathcal {F}}^{- 1}}{P_{w}}{\mathcal {F}}\) is a contraction mapping. The downsampling operator is defined as the diagonal matrix P T =diag{1 S }, where 1 S is the set indicator vector, whose ith entry is equal to one if \(i \in S\), and zero otherwise. Since \(I - P_{T}\) preserves only the non-downsampled vertices, it is a contraction mapping. Besides, let f denote a bandlimited graph signal with graph cut-off frequency w. Noticing that \({{\mathcal {F}}^{- 1}}{P_{w}}{\mathcal {F}}f = f\) and f d =P T f, we have,

$${} \begin{aligned} {\eta^{k + 1}} &= {\left\| {{f_{k + 1}} - f} \right\|}\\ &= \left\| {{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}\left({f_{k}^{G}} + \left({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right) + \left({I - {P_{T}}} \right)\right.\right.\\ &\qquad\left.\left.{P_{d}}\left({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right) \right) - f \right\|\\ &= \left\| {{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}\left({f_{k}^{G}} + \left({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right) + \left({I - {P_{T}}} \right)\right.\right.\\ &\qquad\left.\left.{P_{d}}\left({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right) - f \right) \right\|\\ &< \left\| \left({f_{k}^{G}} + \left({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right) + \left({I - {P_{T}}} \right)\right.\right.\\ &\qquad\left.\left.{P_{d}}\left({{f_{d}} - {P_{T}}{f_{k}^{G}}} \right) - f \right) \right\| \\ &= \left\| \left(\left({{f_{k}^{G}} - f} \right) - \left({{P_{T}}{f_{k}^{G}} - {P_{T}}f} \right) - \left({I - {P_{T}}} \right)\right.\right.\\ &\qquad\left.\left.{P_{d}}\left({{P_{T}}{f_{k}^{G}} - {P_{T}}f} \right) \right) \right\| \\ &= \left\| \left(\left({{f_{k}^{G}} - f} \right) - {P_{T}}\left({{f_{k}^{G}} - f} \right) - \left({I - {P_{T}}} \right)\right.\right.\\ &\qquad\left.\left.{P_{d}}{P_{T}}\left({{f_{k}^{G}} - f} \right) \right) \right\|\\ &= \left\| {\left({\left({I - {P_{T}}} \right)\left({I - {P_{d}}{P_{T}}} \right)\left({{f_{k}^{G}} - f} \right)} \right)} \right\|\\ &< \left\| {\left({I - {P_{d}}{P_{T}}} \right)\left({{f_{k}^{G}} - f} \right)} \right\| \end{aligned} $$
(16)

The diffusion operator P d consists of the local-mean diffusion operator P m and the global-bias diffusion operator P b . We have

$$\begin{array}{@{}rcl@{}} \left\| {I - {P_{d}}{P_{T}}} \right\| & = & \left\| {I - \left({{P_{m}} + {P_{b}}} \right){P_{T}}} \right\| \\ & = & \left\| {\left({I - {P_{m}}{P_{T}}} \right) - {P_{b}}{P_{T}}} \right\| \\ & \le & \left\| {I - {P_{m}}{P_{T}}} \right\| + \left\| {{P_{b}}{P_{T}}} \right\| \end{array} $$
(17)

We first analyze the character of \(I - P_{m}P_{T}\). According to the analysis in Section 3.2, the local-mean diffusion operator P m assigns the signal from the sampled vertices to their adjacent unsampled vertices. Let f denote a bandlimited graph signal, S denote the downsampling set, and K(u) denote the set of unsampled vertices to which the local-mean diffusion operator diffuses the residual of the sampled vertex u. Then, we have

$$\begin{array}{@{}rcl@{}} {\left\| {\left({I - {P_{m}}{P_{T}}} \right)f} \right\|^{2}} &=& {\left\| {f - {P_{m}}{P_{T}}f} \right\|^{2}} \\ &=& \underbrace {\sum\limits_{m \in S} {{{\left| {f\left(m \right)} \right|}^{2}}} }_{a}\\ &+& \underbrace {\sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {{{\left| {f\left(v \right) - \frac{1}{{d\left(v \right)}}f\left(u \right)} \right|}^{2}}}} }_{b} \end{array} $$
(18)

where d(v) denotes the degree of vertex v. It can be seen that part a of Eq. (18) is the downsampled part of \(\|f\|^{2}\); then

$$\begin{array}{@{}rcl@{}} \sum\limits_{m \in S} {{{\left| {f\left(m \right)} \right|}^{2}}} &=& {\left\| {{P_{T}}f} \right\|^{2}} \\ &=& {\alpha_{1}}{\left\| f \right\|^{2}} \end{array} $$
(19)

where α 1 denotes the ratio of the quadratic sum of the graph signal values on the sampled vertices to the quadratic sum of the graph signal values on all the vertices, i.e.,

$$\begin{array}{@{}rcl@{}} {\alpha_{1}} = \frac{{\sum\limits_{u \in S} {{{\left| {f\left(u \right)} \right|}^{2}}} }}{{\sum\limits_{v \in V} {{{\left| {f\left(v \right)} \right|}^{2}}} }} \end{array} $$

It can be seen that the range of α 1 is from 0 to 1. The value of α 1 approximates \(\frac {M}{N}\), the ratio of the number of sampled vertices to the total number of vertices, when the graph signal f is very smooth. For part b of Eq. (18), we have

$$\begin{array}{@{}rcl@{}} \sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {{{\left| {f\left(v \right) - \frac{1}{{d\left(v \right)}}f\left(u \right)} \right|}^{2}}}} \end{array} $$
$${} \begin{aligned} &= \sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {{{\left({\frac{1}{{d\left(v \right)}}} \right)}^{2}}{{\left| {d\left(v \right)f\left(v \right) - f\left(u \right)} \right|}^{2}}}} \\ &= \sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {{{\left({\frac{1}{{d\left(v \right)}}} \right)}^{2}}{{\left| {d\left(v \right)f\left(v \right) - d\left(v \right)f\left(u \right) + d\left(v \right)f\left(u \right) - f\left(u \right)} \right|}^{2}}}} \\ &= \sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {{{\left({\frac{1}{{d\left(v \right)}}} \right)}^{2}}{{\left| {d\left(v \right)\left({f\left(v \right) - f\left(u \right)} \right) + \left({d\left(v \right) - 1} \right)f\left(u \right)} \right|}^{2}}}} \\ &= \sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {{{\left| {\left({f\left(v \right) - f\left(u \right)} \right) + \frac{{d\left(v \right) - 1}}{{d\left(v \right)}}f\left(u \right)} \right|}^{2}}}} \\ &\le {\sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {\left({\left| {f\left(v \right) - f\left(u \right)} \right| + \left| {\frac{{d\left(v \right) - 1}}{{d\left(v \right)}}f\left(u \right)} \right|} \right)} }^{2}} \\ &\le 2\sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {\left({{{\left| {f\left(v \right) - f\left(u \right)} \right|}^{2}} + {{\left| {\frac{{d\left(v \right) - 1}}{{d\left(v \right)}}f\left(u \right)} \right|}^{2}}} \right)}} \\ &= \underbrace {2\sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {{{\left| {f\left(v \right) - f\left(u \right)} \right|}^{2}}}} }_{c} + \underbrace {2\sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {{{\left| {\frac{{d\left(v \right) - 1}}{{d\left(v \right)}}f\left(u \right)} \right|}^{2}}}} }_{d} \end{aligned} $$
(20)

We first discuss the character of part c of Eq. (20). There is always a shortest path within K(u) from any local-diffused unsampled vertex \(v \in K(u)\) to the sampled vertex u, which is denoted as \((v, v_{1}, v_{2}, \ldots, v_{r-1}, u)\). Then, we have

$${} \begin{aligned}
\left| f(v) - f(u) \right|^{2} &= \left| f(v) - f(v_{1}) + \ldots + f(v_{r-1}) - f(u) \right|^{2} \\
&\le \left( \left| f(v) - f(v_{1}) \right| + \ldots + \left| f(v_{r-1}) - f(u) \right| \right)^{2} \\
&\le H_{K(u)} \left( \left| f(v) - f(v_{1}) \right|^{2} + \ldots + \left| f(v_{r-1}) - f(u) \right|^{2} \right)
\end{aligned} $$
(21)

where \(H_{K(u)}\) denotes the maximal distance from the sampled vertex u to any local-diffused unsampled vertex within K(u). For each local-diffused unsampled vertex \(v \in K(u)\), the path to the sampled vertex u is no longer than \(H_{K(u)}\), so the last inequality in (21) follows from the Cauchy–Schwarz inequality applied to at most \(H_{K(u)}\) terms. Let \(X_{K(u)}\) denote the cardinality of K(u). Summing (21) over all \(v \in K(u)\), each edge difference within \(K(u) \cup \{u\}\) is counted no more than \(X_{K(u)}\) times. Then, we have

$$ \sum\limits_{v \in K(u)} \left| f(v) - f(u) \right|^{2} \le X_{K(u)} H_{K(u)} \sum\limits_{\left(i, j\right) \in E,\; i, j \in K(u) \cup \{u\}} \left| f(i) - f(j) \right|^{2} $$

Let \(R_{\max}\) denote the maximal value of the product \(X_{K(u)} H_{K(u)}\) over the graph, i.e.,

$$\begin{array}{@{}rcl@{}} {R_{\max }} = {\underset{{u \in S}}{\max}}\, {X_{K\left(u \right)}}{H_{K\left(u \right)}} \end{array} $$

Thus, we have

$$\begin{array}{@{}rcl@{}} \sum\limits_{u \in S} {\sum\limits_{v \in K\left(u \right)} {{{\left| {f\left(v \right) - f\left(u \right)} \right|}^{2}}}} &\le& {R_{\max }}\sum\limits_{\left({i,j} \right) \in E} {{{\left| {f\left(i \right) - f\left(j \right)} \right|}^{2}}} \\ &=& {R_{\max }}{f^{T}}Lf \\ &=& {R_{\max }}{f^{T}}U\Lambda {U^{T}}f \\ &=& {R_{\max }}{{\hat f}^{T}}\Lambda \hat f \\ &=& {R_{\max }}\sum\limits_{{\lambda_{p}} \le w} {{\lambda_{p}}{{\left| {\hat f\left(p \right)} \right|}^{2}}} \\ &\le& {R_{\max }}w{\left\| f \right\|^{2}} \end{array} $$
(22)

where V denotes the set of vertices of the graph and E denotes the set of edges connecting the vertices. Since f is bandlimited, the components of \({\hat f}\) associated with frequencies higher than the graph cut-off frequency w are zero, and \(\|\hat f\| = \|f\|\) because U is orthonormal.
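The chain in Eq. (22) can also be checked numerically. The following is a minimal sketch under our own assumptions (an unweighted toy graph, combinatorial Laplacian L = D − A, and a test signal built from the K lowest-frequency eigenvectors); it verifies that \(\sum_{(i,j) \in E} |f(i) - f(j)|^{2} = f^{T} L f \le w \|f\|^{2}\):

```python
# Minimal sketch (ours): numerical check of the identity and bound used in Eq. (22)
# on an unweighted toy graph with a bandlimited test signal.
import numpy as np

rng = np.random.default_rng(1)
N, K = 60, 6
A = (rng.random((N, N)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

lam, U = np.linalg.eigh(L)
w = lam[K - 1]                          # graph cut-off frequency (assumed)
f_hat = np.zeros(N)
f_hat[:K] = rng.standard_normal(K)
f = U @ f_hat                           # bandlimited signal

# Sum of squared differences over edges equals the Laplacian quadratic form.
edge_sum = 0.5 * np.sum(A * (f[:, None] - f[None, :]) ** 2)
print(np.isclose(edge_sum, f @ L @ f))          # identity used in Eq. (22)
print(f @ L @ f <= w * np.sum(f ** 2) + 1e-9)   # bound in Eq. (22)
```

Then, we analyze the character of part d of Eq. (20). Define \(J(u)\) as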

$$\begin{array}{@{}rcl@{}} J\left(u \right) = \sum\limits_{v \in K\left(u \right)} {{{\left({\frac{{d\left(v \right) - 1}}{{d\left(v \right)}}} \right)}^{2}}} \end{array} $$

and let \(J_{\max}\) denote the maximal value of J(u), i.e.,

$$\begin{array}{@{}rcl@{}} {J_{\max }} = {\underset{{u \in S}}{\max}}\, J\left(u \right) \end{array} $$

Thus, considering Eq. (19), we have

$$\begin{array}{@{}rcl@{}} \sum\limits_{u \in S} \sum\limits_{v \in K(u)} \left| \frac{d(v) - 1}{d(v)} f(u) \right|^{2} &=& \sum\limits_{u \in S} J(u) \left| f(u) \right|^{2} \\ &\le& J_{\max} \sum\limits_{u \in S} \left| f(u) \right|^{2} \\ &=& J_{\max} {\left\| {{P_{T}}f} \right\|^{2}} \\ &=& J_{\max} {\alpha_{1}}{\left\| f \right\|^{2}} \end{array} $$
(23)

Combining (18), (19), (22), and (23), we have

$$\begin{array}{@{}rcl@{}}{} {\left\| {\left({I - {P_{m}}{P_{T}}} \right)f} \right\|^{2}} &\le& {\alpha_{1}}{\left\| f \right\|^{2}} + 2{R_{\max }}w{\left\| f \right\|^{2}} + 2{J_{\max }}{\alpha_{1}}{\left\| f \right\|^{2}} \\ &=& \left({\left({2{J_{\max }} + 1} \right){\alpha_{1}} + 2{R_{\max }}w} \right){\left\| f \right\|^{2}} \end{array} $$
(24)

Then, we analyze the character of \(P_{b} P_{T}\). According to the explanation in Section 3.3, the global-bias diffusion operation first projects the reconstructed residual associated with the downsampling set onto the low-pass filtering subspace and then retains only the entries on the non-downsampling set of the new signal. Thus, we have \({P_{b}}{P_{T}} = \left ({I - {P_{T}}} \right){{\mathcal {F}}^{- 1}}{P_{w}}{\mathcal {F}}{P_{T}}\). Since \(\left(I - P_{T}\right)\), \({{\mathcal {F}}^{- 1}}{P_{w}}{\mathcal {F}}\), and \(P_{T}\) are all contraction mappings, the composition \(\left ({I - {P_{T}}} \right){{\mathcal {F}}^{- 1}}{P_{w}}{\mathcal {F}}{P_{T}}\) is also a contraction mapping, i.e.,

$$\begin{array}{@{}rcl@{}} \left\| {\left({I - {P_{T}}} \right){{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}{P_{T}}f} \right\| &<& \left\| {{{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}{P_{T}}f} \right\| \\ &<& \left\| {{P_{T}}f} \right\| \\ &<& \left\| f \right\| \end{array} $$
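For intuition, the non-expansiveness of this composition can be confirmed numerically. The sketch below is ours, with \(P_{T}\) taken as a diagonal 0/1 mask on the sampled vertices, \(P_{w}\) as a diagonal 0/1 mask on the retained frequencies, and the graph Fourier transform \({\mathcal{F}} = U^{T}\); these constructions are assumptions consistent with the operators above. It checks that the spectral norm of \((I - P_{T}){\mathcal{F}}^{-1} P_{w} {\mathcal{F}} P_{T}\) does not exceed 1:

```python
# Minimal sketch (ours): the spectral norm of (I - P_T) F^{-1} P_w F P_T is at most 1.
# P_T, P_w, and F = U^T are constructed as assumptions consistent with the text.
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 60, 20, 6
A = (rng.random((N, N)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)

S = rng.choice(N, size=M, replace=False)
P_T = np.zeros((N, N))
P_T[S, S] = 1.0                                      # keep sampled vertices
P_w = np.diag((np.arange(N) < K).astype(float))      # keep low frequencies
op = (np.eye(N) - P_T) @ U @ P_w @ U.T @ P_T         # (I - P_T) F^{-1} P_w F P_T

print(np.linalg.norm(op, 2))                         # <= 1: a non-expansive composition
```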

Let \(\alpha_{2}\) denote the proportion of the squared norm of \(P_{b} P_{T} f\) to the squared norm of the graph signal f. Therefore,

$$\begin{array}{@{}rcl@{}} {\left\| {{P_{b}}{P_{T}}f} \right\|^{2}} = {\alpha_{2}}{\left\| f \right\|^{2}} \end{array} $$
(25)

It can be seen that the range of \(\alpha_{2}\) is from 0 to 1. Since \(P_{T}\) and \(P_{w}\) are known operators, the value of \(\alpha_{2}\) can be computed from the graph signal and the downsampling set. Besides, the value of \(\alpha_{2}\) is approximately \(\frac{N - M}{N}\cdot\frac{Y}{N}\cdot\frac{M}{N}\) when the graph signal f is very smooth. Then, combining (17), (24), and (25), we have

$${} \begin{aligned}
\left\| \left( I - P_{d} P_{T} \right) f \right\| &\le \sqrt{\left( 2 J_{\max} + 1 \right) \alpha_{1} + 2 R_{\max} w}\, \left\| f \right\| + \sqrt{\alpha_{2}}\, \left\| f \right\| \\
&= \left( \sqrt{\left( 2 J_{\max} + 1 \right) \alpha_{1} + 2 R_{\max} w} + \sqrt{\alpha_{2}} \right) \left\| f \right\|
\end{aligned} $$
(26)

Therefore, the operator \(\left( I - P_{d} P_{T} \right)\) is a contraction mapping if

$$\begin{array}{@{}rcl@{}} \left({\sqrt {\left({\left({2{J_{\max }} + 1} \right){\alpha_{1}} + 2{R_{\max }}w} \right)} + \sqrt {{\alpha_{2}}}} \right) < 1 \end{array} $$
(27)
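The quantities entering (27) can all be evaluated for a concrete graph and downsampling set. The sketch below is illustrative only and rests on our own simplifying assumptions (an unweighted toy graph, K(u) taken as the unsampled neighbours of u so that \(H_{K(u)} = 1\), and \(\alpha_{1}\), \(\alpha_{2}\) computed for one bandlimited test signal); it simply evaluates the left-hand side of (27), which may or may not fall below 1 for a given instance:

```python
# Minimal sketch (ours, illustrative only): evaluating the left-hand side of (27).
# Assumptions: unweighted toy graph; K(u) taken as the unsampled neighbours of u
# (so H_{K(u)} = 1); alpha_1 and alpha_2 computed for one bandlimited test signal.
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 60, 40, 4
A = (rng.random((N, N)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
deg = A.sum(axis=1)
L = np.diag(deg) - A
lam, U = np.linalg.eigh(L)
w = lam[K - 1]

S = rng.choice(N, size=M, replace=False)
unsampled = np.setdiff1d(np.arange(N), S)

R_max, J_max = 0.0, 0.0
for u in S:
    Ku = [v for v in unsampled if A[u, v] > 0]   # assumed local-diffused set K(u)
    if not Ku:
        continue
    X_Ku, H_Ku = len(Ku), 1                      # H_{K(u)} = 1 for this choice of K(u)
    R_max = max(R_max, X_Ku * H_Ku)
    J_max = max(J_max, sum(((deg[v] - 1) / deg[v]) ** 2 for v in Ku))

f_hat = np.zeros(N)
f_hat[:K] = rng.standard_normal(K)
f = U @ f_hat                                    # bandlimited test signal
P_T = np.zeros((N, N))
P_T[S, S] = 1.0
P_w = np.diag((np.arange(N) < K).astype(float))
alpha_1 = np.sum(f[S] ** 2) / np.sum(f ** 2)                     # Eq. (19)
PbPTf = (np.eye(N) - P_T) @ U @ P_w @ U.T @ (P_T @ f)
alpha_2 = np.sum(PbPTf ** 2) / np.sum(f ** 2)                    # Eq. (25)

lhs = np.sqrt((2 * J_max + 1) * alpha_1 + 2 * R_max * w) + np.sqrt(alpha_2)
print(lhs)   # condition (27) asks for lhs < 1; it need not hold for this toy instance
```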

Thus, for a given graph cut-off frequency, if the condition in (27) is satisfied, we have

$$\begin{array}{@{}rcl@{}} {\eta^{k + 1}} &=& \left\| {f_{k + 1}^{G} - f} \right\| \\ & < & \left\| {\left({{f_{k}^{G}} - f} \right)} \right\| \\ & = & {\eta^{k}} \end{array} $$

It can be seen that IGDR reduces the reconstruction error at each iteration step. Then, the reconstruction error bound of IGDR satisfies

$$\begin{array}{@{}rcl@{}} {\eta^{k + 1}} < {\left\| {{{\mathcal{F}}^{- 1}}{P_{w}}{\mathcal{F}}\left({I - {P_{T}}} \right)\left({I - {P_{d}}{P_{T}}} \right)} \right\|^{k}} \cdot {\eta^{0}} \end{array} $$

and Proposition 1 is proved. □


Acknowledgements

The authors would like to thank the editor and the reviewers for constructive comments and suggestions that led to improvements in the manuscript.

Author information


Correspondence to Lishan Yang.

Additional information

An erratum to this article is available at http://dx.doi.org/10.1186/s13634-017-0447-2.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Yang, L., You, K. & Guo, W. Bandlimited graph signal reconstruction by diffusion operator. EURASIP J. Adv. Signal Process. 2016, 120 (2016). https://doi.org/10.1186/s13634-016-0421-4
