
An efficient algorithm with fast convergence rate for sparse graph signal reconstruction

Abstract

In this paper, we consider graph signals that are sparse in the graph Fourier domain and propose an iterative threshold compressed sensing reconstruction (ITCSR) algorithm to reconstruct them. The proposed ITCSR algorithm derives from the well-known compressed sensing framework by incorporating a threshold for sparsity-promoting reconstruction of the underlying graph signals. It enhances the performance of sparse graph signal reconstruction by introducing a threshold function to determine a suitable threshold. Furthermore, we demonstrate that suitable parameters for the threshold can be determined automatically by leveraging the sparrow search algorithm. Moreover, we analytically prove the convergence property of the proposed ITCSR algorithm. In the experiments, numerical tests with synthetic as well as 3D point cloud data demonstrate the merits of the proposed ITCSR algorithm relative to the baseline algorithms.

1 Introduction

In recent years, there has been a noteworthy surge in research interest regarding the analysis of irregularly structured signals across various fields and applications. In many practical scenarios, the occurrence of missing signals within these irregularly structured signals poses a critical concern. The loss of signals can be attributed to various factors, including sensor malfunctions, signal transfer losses or system maintenance activities. In this context, signal reconstruction techniques are developed to substitute the missing signals with dependable estimations. Furthermore, graph signal processing (GSP) [1,2,3] has been introduced as an intuitive framework for dealing with graph signals lying on an irregular structure, and it applies to a wide class of classical use cases such as traffic [4], sensor networks [5], 3D point clouds [6, 7], image processing [8], air pollution monitoring platforms [9] and recommendation systems [10]. Among GSP tools, reconstruction emerges as a straightforward approach to the intricate challenge of estimating missing signals.

In classical situations, graph signal reconstruction is often based on assumptions such as smoothness, bandlimitedness, stationarity and sparsity. [11] formulated the graph model for conflict resolution (GMCR) and employed the proximal gradient algorithm to solve it. The GMCR assumed signal smoothness on the graph, and its penalty term is based on the quadratic form of graph total variation. However, this assumption can lead to a deteriorated reconstruction if the signals of interest have dissimilar values between vertices that are connected in the ground-truth topology. [12] introduced a graph signal reconstruction technique for estimating the power spectral density (PSD) of graph signals, which was further utilized to develop Wiener-type estimation procedures for signals that are both noisy and partially observed. Nevertheless, the method proposed in [12] cannot directly estimate the PSD; instead, it assumed that the graph signals are stationary and stochastic. Inspired by the classical Papoulis–Gerchberg iterative scheme, [13] proposed the iterative least square reconstruction (ILSR) to reconstruct bandlimited graph signals from sampled signals. In addition, [14] proposed two efficient graph signal reconstruction methods, namely the iterative propagating reconstruction (IPR) and the iterative weighting reconstruction (IWR). To mitigate the computational expense of eigen-decomposing the graph Laplacian matrix when dealing with bandlimited graph signals, [15] leveraged Gershgorin disks to optimize a sampling criterion relying on a lower bound of the smallest eigenvalue of the graph Laplacian matrix. The methods presented in [12,13,14,15] assumed that the graph signals are bandlimited in the graph Fourier domain, and such bandlimited graph signals are also defined as sparse graph signals.
In contrast, a key observation in this paper is that bandlimited graph signals, viewed as graph-Fourier-domain sparse signals, carry a fixed bandwidth limitation over the graph Fourier domain: their nonzero coefficients exist only in the low-frequency space, which cannot fully capture all the characteristics of real-world signals.

Inspired by the theory of compressed sensing (CS), many researchers have proposed a variety of signal reconstruction algorithms based on sparse representation, such as basis pursuit (BP) [16], orthogonal matching pursuit (OMP) [17] and iterative hard thresholding (IHT) [18]. These algorithms provide important theoretical foundations and methodologies for sparse graph signal reconstruction. In the references related to sparse graph signal reconstruction [19,20,21], notable contributions include research on modeling the sparsity of graph signals and the development of compressed sensing reconstruction algorithms for graph signals. The sparse graph signal model allows nonzero coefficients throughout the entire graph Fourier domain, including the high-frequency space. [19] proposed the iterative method with adaptive thresholding for graph interpolation (IMATGI) for sparsity-promoting reconstruction of graph signals. Furthermore, [20] presented recovery and sampling algorithms that select a subset of vertices of predetermined cardinality guaranteeing reconstruction of the graph signals with the lowest possible reconstruction error. [21] proposed three sparse graph signal reconstruction algorithms based on BP, OMP and IHT, respectively. To achieve efficient compression, sampling and reconstruction of signals, the compressive sampling matching pursuit (CoSaMP) algorithm was developed [22]. These references contributed important insights into the sparsity of graph signals and proposed effective reconstruction algorithms. Unfortunately, however, as the vertex count of the graph increases, the distortion in the graph signals reconstructed by these algorithms also escalates.

In this paper, we consider graph signals that are sparse in the graph Fourier domain. For sparse graph signal reconstruction, we propose an iterative threshold compressed sensing reconstruction (ITCSR) algorithm, which effectively addresses the challenges associated with signal reconstruction. The proposed ITCSR algorithm enhances the performance of sparse graph signal reconstruction by introducing a threshold function to determine a suitable threshold. Furthermore, we demonstrate that suitable parameters for the threshold can be determined automatically by leveraging the sparrow search algorithm (SSA) [26]. A convergence analysis of the proposed ITCSR algorithm is provided. Experiments on both synthetic datasets and 3D point cloud datasets demonstrate that the reconstruction performance of the proposed ITCSR algorithm outperforms that of the baseline algorithms.

The main contributions in this paper are summarized as follows.

  1. It proposes a new algorithm, referred to as ITCSR, for solving the sparse graph signal reconstruction problem.

  2. It gives a theoretical analysis of the convergence property of the proposed algorithm by utilizing the restricted isometry property (RIP).

  3. It demonstrates that the proposed ITCSR algorithm produces a significant improvement in sparse graph signal reconstruction performance on publicly available 3D point cloud datasets.

1.1 Paper overview

The remainder of this paper is organized as follows. Basic definitions of GSP and \(K\)-sparse graph signals are stated in Sect. 2. Section 3 formulates the target model and describes the proposed ITCSR algorithm. In Sect. 4, the convergence analysis for the proposed ITCSR algorithm is presented. In Sect. 5, the proposed algorithm is compared with the baseline algorithms in experiments that validate its performance. Finally, we conclude the paper in Sect. 6.

1.2 Notation

In this paper, we adopt the following notations. Boldface lowercase letters denote vectors, whereas boldface uppercase letters denote matrices. In the tables, we represent the optimal values in bold font. \(\textrm{diag}(\cdot )\) is a diagonal matrix with its argument along the main diagonal. \((\cdot )^{\top }\), \((\cdot )^{-1}\) and \((\cdot )^{\dag }\) denote the transpose, inverse and pseudoinverse operations, respectively. The notation \(|\cdot |\) indicates the cardinality of a set. For a vector \({\textbf{f}}\), \(\Vert {\textbf{f}}\Vert _0\) and \(\Vert {\textbf{f}}\Vert _2\) represent its 0-norm and Euclidean norm, respectively.

2 Preliminary

In this section, we review definitions of graph Laplacian matrix and sparse graph signals that are used in the development of the proposed ITCSR algorithm and theoretical framework.

2.1 Graph Laplacian matrix

An undirected graph without self-loops is represented as \({\mathcal {G}}:= \{{\mathcal {V}}, {\mathcal {E}}, {\textbf{A}} \}\), where \({\mathcal {V}} = \{v_1, v_2,..., v_N\}\) is the vertex set and \({\mathcal {E}} \subseteq {\mathcal {V}}\times {\mathcal {V}}\) is the set of undirected edges connecting the vertices. \({\textbf{A}}\in {\mathbb {R}}^{N\times N}\) indicates the adjacency matrix of the graph. The elements of \({\textbf{A}}\) are defined as

$$\begin{aligned} {\textbf{A}}_{ij} = {\left\{ \begin{array}{ll} 1, &{} \text {if} \hspace{5.0pt}(v_i,v_j) \in {\mathcal {E}} \\ 0, &{} \text {otherwise} \\ \end{array}\right. }, \ i,j = 1,\cdots ,N. \end{aligned}$$
(1)

In general, adjacency matrix \({\textbf{A}}\) is a matrix used to represent the connection relationship between vertices in a graph. As an extension of the adjacency matrix, the weight matrix \({\textbf{W}}\) contains the weight information of the connection. The element \(w_{ij}\) of \({\textbf{W}}\) represents the weight value of the connection between vertex i and vertex j. It is important to note that edge weights are usually assumed to be nonnegative real numbers. For each vertex \(v_i\), we define degree \(d_i = \sum _{j=1}^N{w_{ij}}\) as the sum of edge weights connected to it. In this way, the degree matrix is defined as \({\textbf{D}}= \text {diag}(d_1, d_2, \cdots , d_N)\). Along with \({\textbf{W}}\) and \({\textbf{D}}\), the combinatorial Laplacian matrix \({\textbf{L}} \in {\mathbb {R}}^{N\times N}\) is defined as

$$\begin{aligned} \mathbf {L = D - W}. \end{aligned}$$
(2)

Since \({\textbf{L}}\) is a real symmetric and positive semi-definite matrix, it admits the eigen-decomposition \({\textbf{L}} = {\textbf{U}} \varvec{\Lambda } {\textbf{U}}^{\top }\), where \(\varvec{\Lambda } = \text {diag}(\lambda _1, \lambda _2, \cdots , \lambda _N)\) is a diagonal matrix of nonnegative eigenvalues and the orthogonal matrix \({\textbf{U}} = [\varvec{u}_1, \varvec{u}_2, \cdots , \varvec{u}_N]\) contains the corresponding eigenvectors.
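As a small illustration of the definitions above (the 4-vertex path graph is an arbitrary example, not from the paper), the Laplacian in (2) and its eigen-decomposition can be computed as:

```python
import numpy as np

# Build the combinatorial Laplacian L = D - W for a 4-vertex path graph
# with unit edge weights, then eigendecompose it.
N = 4
W = np.zeros((N, N))
for i in range(N - 1):              # path edges v_i -- v_{i+1}
    W[i, i + 1] = W[i + 1, i] = 1.0

D = np.diag(W.sum(axis=1))          # degree matrix
L = D - W                           # combinatorial Laplacian, eq. (2)

# L is real symmetric PSD, so eigh returns nonnegative eigenvalues
# (sorted ascending) and orthonormal eigenvectors U with L = U diag(lam) U^T.
lam, U = np.linalg.eigh(L)
assert np.allclose(L, U @ np.diag(lam) @ U.T)
assert abs(lam[0]) < 1e-8           # a connected graph has smallest eigenvalue 0
```

Note that the rows of `L` sum to zero by construction, which is why the constant vector is always an eigenvector with eigenvalue 0.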

2.2 \(K\)-sparse graph signals

Graph signals are a scalar-valued mapping \(f: {\mathcal {V}} \mapsto {\mathbb {R}}\) defined on the vertices of the graph. They can be represented as a vector \({\textbf{f}} = \left[ f_1,f_2,\cdots ,f_N\right] ^\top \in \mathbb {R}^N\), where the i-th component \(f_i\) represents the value of the signal at vertex \(v_i\). Since the Laplacian matrix \({\textbf{L}}\) is real symmetric and positive semi-definite, all its eigenvalues are nonnegative, \(0 = \lambda _1 \le \lambda _2 \le \cdots \le \lambda _N\), and the corresponding eigenvectors are orthogonal. Then, the graph Fourier transform (GFT) is defined as the projection onto the orthogonal set of eigenvectors of \({\textbf{L}}\), i.e.,

$$\begin{aligned} \hat{{\textbf{f}}} = \left[ \hat{f}_1, \hat{f}_2, \cdots , \hat{f}_N \right] ^\top = {\textbf{U}}^{\top } {\textbf{f}}, \quad \hat{f}_i = {\textbf{u}}_i^{\top }{\textbf{f}}, \ i = 1, \cdots , N. \end{aligned}$$
(3)

Similar to classical Fourier analysis, the eigenvalues \(\{\lambda _i\}_{i=1}^{N}\) in this context represent the frequencies of the graph, while \(\hat{f}_i\) represents the i-th frequency component. The inverse graph Fourier transform (IGFT) can be expressed as

$$\begin{aligned} {\textbf{f}} = {\textbf{U}}\hat{{\textbf{f}}}. \end{aligned}$$
(4)
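A minimal sketch of the GFT/IGFT pair (3)-(4) on an illustrative path graph (not a construction from the paper) confirms that the transform is lossless because \({\textbf{U}}\) is orthogonal:

```python
import numpy as np

# GFT / IGFT round trip on a 5-vertex path graph (illustrative example).
N = 5
W = np.diag(np.ones(N - 1), 1)
W = W + W.T                                  # path adjacency = unit weights
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)                   # graph frequencies and GFT basis

f = np.random.default_rng(0).standard_normal(N)   # arbitrary graph signal
f_hat = U.T @ f          # GFT, eq. (3)
f_back = U @ f_hat       # IGFT, eq. (4)
assert np.allclose(f, f_back)   # orthogonality of U makes the pair lossless
```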

For a given set \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\) with \(K \le N\), the graph signals \(\hat{{\textbf{f}}}\) exhibit \(K\)-sparsity in the GFT domain if

$$\begin{aligned} \hat{f}_{k} \ne 0 \hspace{5.0pt}\text {for} \hspace{5.0pt}k \in {\mathcal {K}} \quad \text {and} \quad \hat{f}_{k} = 0 \hspace{5.0pt}\text {for} \hspace{5.0pt}k \notin {\mathcal {K}}. \end{aligned}$$
(5)

In other words, \(K\)-sparse graph signals \(\hat{{\textbf{f}}}\) can be expressed as a linear combination of \(K \le N\) eigenvectors of the graph Laplacian \({\textbf{L}}\), which can be represented as

$$\begin{aligned} f_n = \sum _{l=1}^{K}{\hat{f}_{k_l}}{\textbf{U}}_{k_l,n}, \hspace{5.0pt}n = 1, 2, \cdots , N. \end{aligned}$$
(6)
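The synthesis in (5)-(6) can be sketched as follows; the random sensor-style graph, the seed and the support set are illustrative choices, not from the paper:

```python
import numpy as np

# Synthesize a K-sparse graph signal: pick a random support K in the GFT
# domain, draw nonzero coefficients there, and map to the vertex domain.
rng = np.random.default_rng(1)
N, K = 20, 3
W = np.triu((rng.random((N, N)) < 0.2) * 1.0, 1)
W = W + W.T                                    # symmetric, no self-loops
L = np.diag(W.sum(axis=1)) - W
_, U = np.linalg.eigh(L)                       # GFT basis

support = rng.choice(N, size=K, replace=False) # K = {k_1, ..., k_K}
f_hat = np.zeros(N)
f_hat[support] = rng.standard_normal(K)        # nonzero GFT coefficients, eq. (5)
f = U @ f_hat                                  # vertex-domain signal, eq. (6)

# The GFT of f recovers exactly the K-sparse spectrum.
assert np.allclose(U.T @ f, f_hat)
```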

A major difference is that many previous works assume that sparse graph signals in the graph Fourier domain are confined to the low-frequency space, meaning that the positions of the frequency components \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\) are known [5, 10]. Due to the assumption of smoothness in the graph Fourier domain and the absence of any frequency components beyond its bandwidth, the bandlimited graph signal model fails to capture the details and rapid changes in the high-frequency space of the signal. Consequently, important information may be lost during the graph signal reconstruction process. Therefore, we propose a sparse graph signal model that considers all frequency components within the entire graph Fourier domain, aiming to enhance the performance of graph signal reconstruction and better preserve crucial information within real-world signals. This implies that the positions of the frequency components \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\) are unknown.

2.3 The framework of SSA

SSA is a swarm intelligence optimization algorithm modeling the foraging and predator-avoidance behavior of sparrows. In this paper, the positions of the sparrows are represented by the following matrix

$$\begin{aligned} X={ \left[ \begin{array}{cc} \alpha _{1,1} &{} \beta _{1,2} \\ \alpha _{2,1} &{} \beta _{2,2} \\ \vdots &{} \vdots \\ \alpha _{n,1} &{} \beta _{n,2} \end{array} \right] }, \end{aligned}$$

where n is the number of sparrows. Then, the fitness value of all sparrows can be expressed by the following vector

$$\begin{aligned} F={ \left[ \begin{array}{c} g([\alpha _{1,1}, \beta _{1,2}])\\ g([\alpha _{2,1}, \beta _{2,2}])\\ \vdots \\ g([\alpha _{n,1}, \beta _{n,2}]) \end{array} \right] }, \end{aligned}$$
(7)

where the value of each row in F represents the fitness value of the individual. During each iteration, the location update of producers is defined as

$$\begin{aligned} X_P^{l+1}= {\left\{ \begin{array}{ll} X_{i,j}^{l} \exp \left( -\frac{i}{\eta l_{\max }}\right) &{} \text {if} \hspace{5.0pt}r<S_T \\ X_{i,j}^l + QI &{} \text {otherwise} \\ \end{array}\right. }, \ i = 1,\cdots ,n, \text {and} \hspace{5.0pt}j = 1,2, \end{aligned}$$
(8)

where \(X_{i,j}\) represents the element in the i-th row and j-th column of the matrix X. Q is a random number following a normal distribution and I represents a matrix with all elements being 1. \(\eta \in (0, 1]\) is a random number and \(l_{\max }\) represents the number of maximum iteration. \(r \in [0, 1]\) and \(S_T \in [0.5, 1]\) represent the alarm value and the safety threshold, respectively. Similarly, the location update of scroungers is defined as

$$\begin{aligned} X_{S}^{l+1}= {\left\{ \begin{array}{ll} Q \exp \left( \frac{X_{w}^l - X_{i,j}^l}{i^2}\right) &{}\text { if } \hspace{5.0pt}i> j/ 2 \\ X_{P}^{l+1}+\left| X_{i,j}-X_P^{l+1}\right| \cdot C^{\dagger }I &{}\text {otherwise} \end{array}\right. }, \end{aligned}$$
(9)

where \(X_{w}\) represents the current globally worst position of the individual and C represents a matrix whose elements are randomly assigned a value of 1 or \(-1\). When a sparrow detects danger and the alarm value exceeds the safety threshold, the location is updated by

$$\begin{aligned} X_{A}^{l+1}= {\left\{ \begin{array}{ll} X_{b}^l+\zeta \left| X_{i,j}^l-X_{b}^l\right| &{} \text{ if } \hspace{5.0pt}F_i>F_b\\ X_{i,j}^l + R\left( \frac{\left| X_{i,j}^l - X_{w}^l\right| }{(F_i - F_w)+ \varepsilon }\right) &{}\text{ if } \hspace{5.0pt}F_i = F_b \end{array}\right. }, \end{aligned}$$
(10)

where \(X_{b}\) represents the current globally best position of the individual. As the step-size control parameter, \(\zeta\) is a random number drawn from a normal distribution with mean 0 and variance 1. \(F_{i}\) represents the current fitness value of the individual, and \(F_{b}\) and \(F_{w}\) are the current global best and worst fitness values, respectively. In this paper, \(F_{b}\) is attained at

$$\begin{aligned} \hat{i} = \mathop {\arg \min }\limits _{i} g(\alpha _{i,1}, \beta _{i,2}), \quad \alpha = \alpha _{\hat{i},1}, \ \beta = \beta _{\hat{i},2}. \end{aligned}$$
(11)

The pseudo-code for the model proposed in this study is shown in Algorithm 1.

Algorithm 1: The framework of SSA
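The SSA updates (8)-(10) can be sketched compactly for the two-parameter case \((\alpha , \beta )\) used in this paper. The population size, producer/alarm counts, toy fitness function and safety threshold below are illustrative choices, and the update rules are simplified; this is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa(g, n=20, l_max=50, lo=0.0, hi=1.0, S_T=0.8, n_prod=4, n_alarm=4):
    """Minimize g over [lo, hi]^2; returns the best position found."""
    X = rng.uniform(lo, hi, size=(n, 2))       # sparrow positions (alpha, beta)
    gbest, gbest_val = X[0].copy(), g(X[0])    # global best across iterations
    for _ in range(l_max):
        F = np.array([g(x) for x in X])
        order = np.argsort(F)                  # best (smallest fitness) first
        X, F = X[order], F[order]
        if F[0] < gbest_val:
            gbest, gbest_val = X[0].copy(), F[0]
        Xb, Xw, Fb, Fw = X[0].copy(), X[-1].copy(), F[0], F[-1]
        for i in range(n_prod):                # producers, cf. eq. (8)
            if rng.random() < S_T:
                X[i] = X[i] * np.exp(-(i + 1) / (rng.uniform(1e-3, 1.0) * l_max))
            else:
                X[i] = X[i] + rng.standard_normal()
        for i in range(n_prod, n):             # scroungers, cf. eq. (9)
            if i > n / 2:
                X[i] = rng.standard_normal() * np.exp((Xw - X[i]) / (i + 1) ** 2)
            else:
                C = rng.choice([-1.0, 1.0], size=2)
                X[i] = X[0] + np.abs(X[i] - X[0]) * C
        for i in rng.choice(n, size=n_alarm, replace=False):  # alarmed, cf. (10)
            if g(X[i]) > Fb:
                X[i] = Xb + rng.standard_normal() * np.abs(X[i] - Xb)
            else:
                X[i] = X[i] + rng.uniform(-1, 1) * np.abs(X[i] - Xw) \
                       / (g(X[i]) - Fw + 1e-12)
        X = np.clip(X, lo, hi)                 # keep parameters in range
    return gbest

# toy fitness with its minimum at alpha = 0.7, beta = 0.3
best = ssa(lambda x: (x[0] - 0.7) ** 2 + (x[1] - 0.3) ** 2)
```

In the paper the fitness \(g\) would instead be the residual-based fitness value defined in Sect. 3.2; the quadratic above is only a stand-in to make the sketch self-contained.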

3 The proposed ITCSR algorithm for \(K\)-sparse graph signals

In this section, we consider the reconstruction of \(K\)-sparse graph signals. First, we formulate the \(K\)-sparse graph signals reconstruction as solving an underdetermined equation system. We then present the proposed ITCSR algorithm to solve this problem and show the pseudo-code of the proposed ITCSR algorithm.

3.1 Problem formulation

\(K\)-sparse graph signals \(\hat{{\textbf{f}}} \in {\mathbb {R}}^{N}\) in the graph Fourier domain can be reconstructed from a reduced number of available samples, denoted as \(M < N\). These samples correspond to a random subset \({\mathcal {S}} = \{m_1, m_2, \cdots , m_M\}\) of the vertex set \({\mathcal {V}} = \{1, 2, \cdots , N\}\). The measurement vector containing the available samples is defined as

$$\begin{aligned} {\textbf{y}} = \left[ f_{m_1}, f_{m_2}, \cdots , f_{m_M}\right] ^{\top }, \end{aligned}$$
(12)

where \(f_{m_i}\) represents the sampled graph signal at vertex \(m_i\). According to (4), it then yields the following form

$$\begin{aligned} {\textbf{y}} = {\textbf{U}}_{MN}\hat{{\textbf{f}}} = \mathbf {\Phi } \hat{{\textbf{f}}}, \end{aligned}$$
(13)

where the matrix \({\textbf{U}}_{MN}\) is obtained by selecting the rows of \({\textbf{U}}\) that correspond to the set \({\mathcal {S}}\). For simplicity, \({\textbf{U}}_{MN}\) is denoted by \(\mathbf {\Phi }\). Equation (13) is an underdetermined system of \(M\) equations in \(N\) unknowns (\(M < N\)), which can be written as

$$\begin{aligned} \begin{aligned} \left[ \begin{array}{cccc} f_{m_1}\\ f_{m_2}\\ ... \\ f_{m_M} \end{array} \right] = \left[ \begin{array}{ccc} {\textbf{U}}_{1,m_1}&{} ... &{} {\textbf{U}}_{N,m_1}\\ {\textbf{U}}_{1,m_2}&{} ... &{} {\textbf{U}}_{N,m_2}\\ {} &{} ...&{} \\ {\textbf{U}}_{1,m_M}&{} ... &{} {\textbf{U}}_{N,m_M}\\ \end{array} \right] \left[ \begin{array}{cccc} \hat{f}_{1}\\ \hat{f}_{2}\\ ... \\ \hat{f}_{N}\\ \end{array} \right] = \left[ \begin{array}{ccc} {\textbf{U}}_{k_1,m_1}&{} ... &{} {\textbf{U}}_{k_K,m_1}\\ {\textbf{U}}_{k_1,m_2}&{} ... &{} {\textbf{U}}_{k_K,m_2}\\ {} &{} ...&{} \\ {\textbf{U}}_{k_1,m_M}&{} ... &{} {\textbf{U}}_{k_K,m_M}\\ \end{array} \right] \left[ \begin{array}{cccc} \hat{f}_{k_1}\\ \hat{f}_{k_2}\\ ... \\ \hat{f}_{k_K}\\ \end{array} \right] . \end{aligned} \end{aligned}$$
(14)

It is obvious that (14) is an underdetermined system of equations, so \(\hat{{\textbf{f}}}\) cannot be uniquely reconstructed from \({\textbf{y}}\) by linear algebraic means alone, as (14) may admit many solutions. However, we aim to find a sparse solution, and sparsity can serve as a powerful constraint for specific measurement matrices \(\mathbf {\Phi }\): (14) can be solved directly when the support \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\) is known. The formulation of \(K\)-sparse graph signal reconstruction in this paper considers the scenario where the positions of the nonzero GFT coefficients are unknown, which can be formulated as

$$\begin{aligned} \min \Vert \hat{{\textbf{f}}}\Vert _{0} \hspace{5.0pt}\text {subject to} \hspace{5.0pt}{\textbf{y}} =\mathbf {\Phi } \hat{{\textbf{f}}}, \end{aligned}$$
(15)

where the operator \(\Vert \cdot \Vert _0\) is used as a sparsity measure.
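The measurement model (12)-(13) behind problem (15) can be set up as follows; the random sensor-style graph, seed and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 30, 12, 3

# GFT basis of a random sensor-style graph (illustrative construction)
W = np.triu((rng.random((N, N)) < 0.2) * 1.0, 1)
W = W + W.T                                   # symmetric, no self-loops
_, U = np.linalg.eigh(np.diag(W.sum(axis=1)) - W)

# K-sparse spectrum with an unknown support set
f_hat = np.zeros(N)
f_hat[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
f = U @ f_hat                                 # vertex-domain signal

S = rng.choice(N, M, replace=False)           # random sampled vertex subset
Phi = U[S, :]                                 # measurement matrix: rows of U
y = Phi @ f_hat                               # eq. (13); equals the samples f[S]
assert np.allclose(y, f[S])
```

Since `Phi` has more columns than rows, `y = Phi @ f_hat` is exactly the underdetermined system (14) that the sparsity constraint in (15) makes solvable.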

3.2 The proposed ITCSR algorithm

This subsection introduces the proposed ITCSR algorithm to solve (15). The objective of ITCSR is to obtain an approximate solution to \({\textbf{y}} = \mathbf {\Phi }\hat{{\textbf{f}}}\). Now, we will outline its fundamental components. We denote \(\varvec{\eta }\) as a vector of residual correlations and \({\textbf{r}}\) as a residual vector. Next, the maximum function of the residual correlations can be expressed as follows

$$\begin{aligned} {\mathcal {M}} (\varvec{\eta }) = \max (\varvec{\eta }). \end{aligned}$$
(16)

On this basis, a subvector \({\textbf{b}} \in \mathbb R^{N-1}\) is defined as the vector obtained by removing \({\mathcal {M}} (\varvec{\eta })\) from \(\varvec{\eta }\). The threshold function \(\mathrm V_{th}\) is then denoted as

$$\begin{aligned} \mathrm V_{th} = \alpha \left[ {\mathcal {M}}(\varvec{\eta })+ \beta {\mathcal {M}} (\varvec{b})\right] , \end{aligned}$$
(17)

where \(\alpha\) and \(\beta\) are parameters that can be optimized by SSA. In this paper, we set the fitness value \(F = \min \Vert \Phi _{{\mathcal {C}}}^{\top }{\textbf{y}} - {\textbf{y}}\Vert _2^2, \; s.t. \; {\mathcal {C}} = \left\{ q: \left| \Phi ^{\top }{\textbf{y}}\right| \ge \alpha \left[ {\mathcal {M}}(\Phi ^{\top }{\textbf{y}} )+ \beta {\mathcal {M}} (\varvec{\zeta })\right] \right\}\), where \(\varvec{\zeta }\) represents the vector obtained by removing \({\mathcal {M}} (\Phi ^{\top }{\textbf{y}})\) from the vector \(\Phi ^{\top }{\textbf{y}}\). According to the SSA algorithm, the parameters \(\alpha\) and \(\beta\) fall within the range [0, 1]. In conjunction with our threshold function, setting \(\alpha = 0\) would make the threshold degenerate, so the value of \(\alpha\) is restricted to (0, 1]. Having described these basic ingredients, we now state the proposed ITCSR algorithm in detail. ITCSR is an iterative greedy algorithm. At each step, it selects several elements from the residual correlation vector \(\varvec{\eta }\), adds their indices to the identified support set \({\mathcal {K}}\), and then performs a least squares estimate of the signal. In the following, superscripts denote the iteration number \(t\) of the ITCSR algorithm. ITCSR starts with initial graph signals \(\hat{{\textbf{f}}}^0 = 0\), initial residual vector \({\textbf{r}}^0 = {\textbf{y}}\) and initial estimated positions \({\mathcal {K}}^0 = \emptyset\). The ITCSR algorithm, proposed as a solution to the considered problem, is composed of three fundamental steps:

  • Position Estimation: Estimate positions \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\) of nonzero coefficients.

  • Coefficient Reconstruction: Reconstruct nonzero coefficients \(\hat{f}_k\) at estimated positions k.

  • Signal Reconstruction: Reconstruct original graph signals \({\textbf{f}}\).

At the step of Position Estimation, the t-th iteration performs the inner product of the current residual and the measurement matrix \(\varvec{\Phi }\) to obtain the residual correlation vector

$$\begin{aligned} \begin{aligned} \varvec{\eta }^t&= \left| \left\langle {\varvec{\Phi }, {\textbf{r}}^{t-1}}\right\rangle \right| \\ \end{aligned} = \varvec{\Phi }^{\top } {\textbf{r}}^{t-1}, \end{aligned}$$
(18)

which contains a small number of significant nonzero entries. The ITCSR algorithm performs thresholding on \(\varvec{\eta }^t\) to find these significant entries. Thresholding yields an index set

$$\begin{aligned} {\textbf{L}}^t = \left\{ i: \left| (\varvec{\eta }_i)^t\right| \ge \alpha \left[ {\mathcal {M}}(\varvec{\eta }^t )+ \beta {\mathcal {M}} (\textbf{b}^t)\right] \right\} , \end{aligned}$$
(19)

where \((\varvec{\eta }_i)^t\) represents the i-th element in the vector \(\varvec{\eta }\) at t-th iteration. Then, we merge the subset of newly selected coordinates with the previous support estimate, thereby updating the estimated available positions

$$\begin{aligned} {\mathcal {K}}^t = {\textbf{L}}^t \cup {\mathcal {K}}^{t-1}. \end{aligned}$$
(20)

At the step of Coefficient Reconstruction, according to the estimated positions, we can calculate the nonzero coefficients by projecting the vector \({\textbf{y}}\) onto the columns of \(\varvec{\Phi }\). Let \(\varvec{\Phi }_{{\mathcal {K}}}\) denote the \(M \times |{\mathcal {K}}|\) matrix whose columns are chosen by the index set \(\mathcal K\). The updated \((\hat{{\textbf{f}}}')^t\) is defined as

$$\begin{aligned} (\hat{{\textbf{f}}}')^t = (\varvec{\Phi }_{{\mathcal {K}}^t}^{\top } \varvec{\Phi }_{{\mathcal {K}}^t})^{-1}\varvec{\Phi }_{{\mathcal {K}}^t}^{\top }{\textbf{y}}. \end{aligned}$$
(21)

In combination with (20), the estimation of \(\hat{{\textbf{f}}}\) is defined as

$$\begin{aligned} (\hat{{\textbf{f}}}'_n)^t = {\left\{ \begin{array}{ll} (\hat{{\textbf{f}}}'_k)^t &{} \text {if} \; n = k, k\in {\mathcal {K}}^t \\ 0 &{} \text {otherwise}\\ \end{array}\right. }, n = 1, 2,\cdots , N. \end{aligned}$$
(22)

By combining (20) and (22), the updated residual vector can be expressed as follows

$$\begin{aligned} {\textbf{r}}^t = {\textbf{y}} - \varvec{\Phi }_{{\mathcal {K}}^t} (\hat{{\textbf{f}}}')^t. \end{aligned}$$
(23)

At the step of Signal Reconstruction, in accordance with the estimated positions and the corresponding nonzero coefficients, the reconstructed graph signals \(({{\textbf{f}}}')^t\) are calculated via the IGFT (4)

$$\begin{aligned} ({{\textbf{f}}}')^t = {\textbf{U}} (\hat{{\textbf{f}}}')^t. \end{aligned}$$
(24)
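The three steps above can be sketched end to end as follows. This is a minimal Python sketch under simplifying assumptions, not the authors' implementation: \(\alpha\) and \(\beta\) are fixed (the paper tunes them with SSA), and the graph, seed and sizes are illustrative. For \(\alpha (1+\beta ) \le 1\) the largest residual correlation always passes the threshold (19), so at least one new index is selected per iteration:

```python
import numpy as np

def itcsr(Phi, y, U, alpha=0.5, beta=0.5, t_max=20, tol=1e-9):
    """Sketch of the ITCSR iteration, following eqs. (18)-(24)."""
    N = Phi.shape[1]
    supp = set()                                 # estimated support, K^0 = {}
    r = y.astype(float).copy()                   # r^0 = y
    f_hat = np.zeros(N)
    for _ in range(t_max):
        eta = np.abs(Phi.T @ r)                           # residual correlations (18)
        b = np.delete(eta, eta.argmax())                  # eta without its maximum
        V_th = alpha * (eta.max() + beta * b.max())       # threshold function (17)
        supp |= set(np.flatnonzero(eta >= V_th))          # select and merge (19)-(20)
        idx = sorted(supp)
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)  # LS coefficients (21)
        f_hat = np.zeros(N)
        f_hat[idx] = coef                                 # embed on support (22)
        r = y - Phi[:, idx] @ coef                        # residual update (23)
        if np.linalg.norm(r) < tol:
            break
    return U @ f_hat, f_hat                      # vertex-domain signal (24)

# toy problem: K-sparse spectrum observed on M random vertices
rng = np.random.default_rng(0)
N, M, K = 30, 12, 3
W = np.triu((rng.random((N, N)) < 0.2) * 1.0, 1)
W = W + W.T
_, U = np.linalg.eigh(np.diag(W.sum(axis=1)) - W)
f_hat_true = np.zeros(N)
f_hat_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
S = rng.choice(N, M, replace=False)
Phi = U[S, :]
y = Phi @ f_hat_true
f_rec, f_hat_rec = itcsr(Phi, y, U)
```

Because the least squares residual (23) is orthogonal to every selected column, previously chosen indices have zero correlation in the next iteration, so the support can only grow until the residual vanishes.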
Algorithm 2: The ITCSR algorithm

Algorithm 2 provides a summary of the proposed ITCSR algorithm. It is worth noticing that the thresholding strategy used in the ITCSR algorithm is nonlinear and allows several elements to be selected in (19). This threshold policy enables the proposed ITCSR algorithm to converge in fewer iterations.

4 Convergence analysis

In this section, we first introduce the restricted isometry constant (RIC) [23,24,25] for ease of theorem statement. Then, we analytically prove the convergence of the proposed ITCSR algorithm through Theorem 1. Our convergence analysis follows the approach of [34].

The constant \(\delta _K\) bounds the deviation from unity of the (squared) singular values of any submatrix of \(\varvec{\Phi }\) with \(K\) or fewer columns, and the bound is symmetric. The RIC is defined as the minimum value \(\delta _K\) satisfying the following condition

$$\begin{aligned} \left( 1 - \delta _K \right) \le \frac{\left\| \varvec{\Phi }\textbf{a}\right\| _2^2}{\left\| {\textbf{a}}\right\| ^2_2} \le \left( 1 + \delta _K\right) . \end{aligned}$$
(25)

The condition holds true for all \({\textbf{a}}\) vectors that contain at most \(K\) nonzero elements. If the matrix has a small constant \(\delta _K\), then every submatrix is very nearly orthogonal.
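Computing the exact RIC is intractable in general, but (25) can be probed numerically. The sketch below estimates a lower bound on \(\delta _K\) for a random Gaussian matrix by checking the singular values of many random \(K\)-column submatrices; the sizes and trial count are illustrative:

```python
import numpy as np

# Empirical lower bound on delta_K via eq. (25): for each random K-column
# submatrix, |sigma^2 - 1| bounds the deviation of ||Phi a||^2 / ||a||^2
# from 1 over that support.
rng = np.random.default_rng(3)
M, N, K = 100, 200, 3
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # columns ~ unit norm in expectation

delta = 0.0
for _ in range(100):
    S = rng.choice(N, K, replace=False)
    s = np.linalg.svd(Phi[:, S], compute_uv=False)
    delta = max(delta, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))
# delta lower-bounds the true delta_K; for these sizes it stays below 1,
# consistent with near-orthogonality of small random submatrices.
```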

Theorem 1

For any observation \({\textbf{y}} = {\varvec{\Phi }}\hat{{\textbf{f}}}\), the ITCSR algorithm will end up selecting the \(K\) nonzero elements if the threshold optimization parameters \(\alpha\), \(\beta\) satisfy the following condition,

$$\begin{aligned} \delta _{K+1}<\frac{\alpha \left( 1+\beta \right) }{\sqrt{ K }+\alpha \left( 1 + \beta \right) }. \end{aligned}$$
(26)

The estimated signal \(\tilde{{\textbf{y}}}^{t} = \mathbf {\Phi }\hat{{\textbf{f}}}^t\) satisfies

$$\begin{aligned} ||{\textbf{y}} - \tilde{{\textbf{y}}}^{t}||_2 \le c||{\textbf{y}} - (\tilde{{\textbf{y}}}^{*})^{t}||_2, \end{aligned}$$
(27)

where

$$\begin{aligned} c = \sqrt{1 + \left( \frac{1}{\alpha \left( 1+\beta \right) \sqrt{\frac{1-\delta _K}{K}}-\frac{\delta _{K+1}}{\sqrt{1-\delta _K}}}\right) ^2}, \end{aligned}$$
(28)

and where \(\alpha\) and \(\beta\) are as defined in Sect. 3. The superscript t represents the current iteration number. \((\tilde{{\textbf{y}}}^{*})^{t} = \varvec{\Phi }_{\mathbf {\Gamma }^{*}} \hat{\textbf{f}}_{\varvec{\Gamma }^{*}}\) and \(\varvec{\Gamma }^{*}\) is the index set of the largest \(K\) elements in \(\hat{{\textbf{f}}}\). \(\blacksquare\)

The mathematical proof for Theorem 1 is available in the Appendix. Theorem 1 states that the proposed ITCSR algorithm guarantees a solution within a finite number of iterations. In general, achieving an exact reconstruction relies on having a RIP with a small RIC. Based on this relationship, a smaller value of \(\delta _K\) corresponds to a smaller constant c in equation (28). In equation (27), it is evident that a smaller value of c corresponds to a smaller reconstruction error. Similarly, equation (28) demonstrates how c changes with different values of the threshold optimization parameters \(\alpha\) and \(\beta\). In this paper, the value range of \(\alpha\) is (0, 1] and \(\beta\) is [0, 1] [26]. When \(\alpha\) and \(\beta\) approach their upper bounds, c tends to decrease. Additionally, when \(\delta _K\) falls within the range of (0, 1), a smaller value of c leads to better reconstruction performance.
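The behavior of the constant \(c\) in (28) can be checked numerically. The \(\delta\) values below are illustrative (chosen to satisfy condition (26)); the computation only shows the monotonicity discussed above, namely that larger \(\alpha\), \(\beta\) yield a smaller \(c\) and hence a tighter error bound in (27):

```python
import math

def c_const(alpha, beta, K, delta_K, delta_K1):
    """Evaluate the constant c of eq. (28)."""
    denom = alpha * (1 + beta) * math.sqrt((1 - delta_K) / K) \
            - delta_K1 / math.sqrt(1 - delta_K)
    return math.sqrt(1 + (1 / denom) ** 2)

K = 10
# delta_{K+1} = 0.12 satisfies (26) for both parameter pairs below
c_loose = c_const(0.5, 0.5, K, 0.1, 0.12)   # moderate alpha, beta
c_tight = c_const(1.0, 1.0, K, 0.1, 0.12)   # alpha, beta at their upper bounds
assert c_tight < c_loose                    # c decreases as alpha, beta grow
```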

5 Experiments

Table 1 Comparison of computational complexity

This section conducts numerical experiments to gain insights into the proposed ITCSR algorithm and assess its reconstruction performance. Unless specified otherwise, we utilize GSPBOX in MATLAB for graph signal processing, graph construction and visualizations. Firstly, we test the proposed ITCSR algorithm with synthetic data under different sampling methods and compare the results with baseline algorithms. The compared algorithms are CoSaMP, ILSR and IMATGI. The computational complexity of the ILSR, CoSaMP, IMATGI and ITCSR algorithms is shown in Table 1, where K represents the level of sparsity and t represents the number of iterations. The computational complexity of the ITCSR algorithm is \(O(tK\log N)\). Secondly, we apply the proposed algorithm to 3D point cloud model datasets [39], namely Block, Cactus, Fandisk, Skull, Gargo and Dino. The experiments are conducted using MATLAB R2018a on a desktop computer equipped with an Intel Core i7-10700 CPU and 32GB RAM. Three metrics, namely maximum absolute error (MAX), mean square error (MSE) and signal-to-noise ratio (SNR), are employed to evaluate the reconstruction performance of all algorithms. MAX is defined as

$$\begin{aligned} \text {MAX} = \max _{i}{\left| f_i' - f_i \right| }, \ i = 1, 2, \cdots , N , \end{aligned}$$
(29)

MSE is defined as

$$\begin{aligned} \text {MSE} = \frac{ \sum _{i = 1}^{ N }\left( f_i' - f_i \right) ^2 }{N_f}, \end{aligned}$$
(30)

and SNR is defined as

$$\begin{aligned} \text {SNR}=\frac{\left\| {{\textbf{f}}}\right\| _2^2}{\left\| {\textbf{f}} - {\textbf{f}}'\right\| _2^2}, \end{aligned}$$
(31)

respectively, where \(N_f\) represents the graph signal length. In the following experiments, the parameters \(\alpha\) and \(\beta\) are selected by SSA. Smaller MAX and MSE values indicate better reconstruction performance. To guarantee exact reconstruction from compressed measurements, one should choose a measurement matrix that satisfies the RIP. Notably, the coherence of a matrix \(\varvec{\Phi }\) is a computable property that offers concrete reconstruction guarantees. The largest normalized inner product between any two columns \(\phi _i\) and \(\phi _j\) of \(\varvec{\Phi }\) defines its coherence, denoted as

$$\begin{aligned} \mu (\varvec{\Phi }) = \max _{1 \le i < j \le N}{\frac{\left| \left\langle \phi _i, \phi _j \right\rangle \right| }{{\left\| \phi _i\right\| _2}{\left\| \phi _j\right\| _2}}}. \end{aligned}$$
(32)

According to the Welch bound [28], the lower bound of \(\mu (\varvec{\Phi })\) is \(\sqrt{\frac{N-M}{M\left( N-1\right) }}\), so the coherence of a matrix always lies in the range \(\left[ \sqrt{\frac{N-M}{M(N-1)}}, 1\right]\). Similarly, in accordance with the Geršgorin circle theorem [29], if \(\varvec{\Phi }\) has coherence \(\mu (\varvec{\Phi })\), then \(\varvec{\Phi }\) satisfies the RIP of order K with \(\delta _K = (K - 1) \mu (\varvec{\Phi })\) for all \(K < \frac{1}{\mu (\varvec{\Phi })}\). Based on this relation, the coherence in this paper should lie in the range \(\left[ \sqrt{\frac{N-M}{M(N-1)}}, \frac{1}{2K-1}\right]\) to guarantee reconstruction uniqueness.
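The three evaluation metrics of (29)–(31) and the coherence of (32) can be sketched as follows. This is an illustrative NumPy sketch (the paper's experiments use GSPBOX in MATLAB), and the random Gaussian \(\varvec{\Phi }\) here is only a placeholder for the actual measurement matrices; the function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_err(f_hat, f):
    # Eq. (29): largest absolute deviation between reconstruction and truth
    return np.max(np.abs(f_hat - f))

def mse(f_hat, f):
    # Eq. (30): squared l2 error normalized by the signal length N_f
    return np.sum((f_hat - f) ** 2) / f.size

def snr(f_hat, f):
    # Eq. (31): ratio of signal energy to reconstruction-error energy
    return np.sum(f ** 2) / np.sum((f - f_hat) ** 2)

def coherence(Phi):
    # Eq. (32): largest normalized inner product between distinct columns
    G = Phi / np.linalg.norm(Phi, axis=0)  # unit-norm columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

M, N = 30, 60
Phi = rng.standard_normal((M, N))
mu = coherence(Phi)
welch = np.sqrt((N - M) / (M * (N - 1)))
assert welch <= mu <= 1.0  # coherence always lies in [Welch bound, 1]
```

Averaging `coherence` over many random draws, as done for Tables 3 and 5, only requires repeating the last few lines in a loop.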

5.1 Experiments on synthetic data

Table 2 MSE (e-04) results by different sampling methods on the synthetic dataset with the level of sparsity \(K = 10\)
Table 3 Coherence of matrix \(\varvec{\Phi }\) under different sampling methods on the synthetic dataset with the level of sparsity \(K = 10\)

In this subsection, the performance of the proposed ITCSR algorithm is demonstrated by two numerical experiments on two synthetic datasets: the sensor network dataset and the swiss roll dataset.

5.1.1 Synthetic data: sensor network

In this experiment, we use the sensor network dataset to illustrate the reconstruction performance of the proposed ITCSR algorithm. To begin, we establish the default experimental setup. The sensor network dataset is generated with \(N=600\) by

$$\begin{aligned} {\left\{ \begin{array}{ll} x &{}= \frac{ [ x \sim U(0,1) ] }{\lceil {\sqrt{N}} \rceil } + \frac{i}{\lceil {\sqrt{N}} \rceil } \\ y &{}= \frac{ [ y \sim U(0,1) ] }{\lceil {\sqrt{N}} \rceil } + \frac{j}{\lceil {\sqrt{N}} \rceil } \end{array}\right. } , \; \{i,j = 0, \cdots , N-1 \}, \end{aligned}$$
(33)

where \({\lceil {\sqrt{N}} \rceil }\) denotes the smallest integer that is not less than \(\sqrt{N}\) and \(x \sim U(0,1)\) represents a random variable x that follows the uniform U(0, 1) distribution. Then, we construct an undirected graph using the 6-nearest-neighbor algorithm. The weight matrix \({\textbf{W}}\) is obtained by assigning a value of 1 if there is an edge between two sensors and 0 otherwise. With \(\mathbf {L = U\Lambda U}^{\top }\) denoting the eigendecomposition of the graph Laplacian, the \(K\)-sparse graph signals \({\textbf{f}}\) are generated as \({\textbf{f}} = {\textbf{U}}\hat{{\textbf{f}}}\), where the entries of \(\hat{{\textbf{f}}}\) are independent and identically distributed (i.i.d.) Gaussian random variables; the \(K = 4\) entries with the largest absolute values in the random signal are kept while the rest are set to zero. For the selection of the sample set in this experiment, we mainly choose the following methods: maxsigmin (M1) [30], maxvolume (M2) [31], maxigmin (M3) [32] and random sampling (M4). For simplicity, the M4 strategy in this paper consists of two steps. First, initialize an array of size \(N\), where \(N\) is the number of elements in the permutation, fill it with the values 1 to \(N\) and randomly shuffle the numbers. Then, select the first \(M\) numbers as the sampling vertices.
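The generation pipeline above can be sketched in NumPy as follows. Since the paper's experiments use MATLAB/GSPBOX, this is only an illustrative reading of Eq. (33) and the K-sparse signal construction; the jittered-grid placement, the symmetrized k-nearest-neighbor rule and the helper names (`sensor_positions`, `k_sparse_signal`) are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sensor_positions(N):
    # Jittered-grid placement in the spirit of Eq. (33): each sensor sits
    # in a cell of a ceil(sqrt(N)) x ceil(sqrt(N)) grid with uniform jitter.
    g = int(np.ceil(np.sqrt(N)))
    pts = []
    for i in range(g):
        for j in range(g):
            pts.append([(rng.uniform() + i) / g, (rng.uniform() + j) / g])
    return np.array(pts[:N])

def knn_adjacency(pts, k):
    # Unweighted symmetric k-nearest-neighbor graph: W(i,j) = 1 on edges
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    W = np.zeros((len(pts), len(pts)))
    for i in range(len(pts)):
        for j in np.argsort(d[i])[:k]:
            W[i, j] = W[j, i] = 1.0
    return W

def k_sparse_signal(W, K):
    # f = U f_hat with f_hat i.i.d. Gaussian, keeping only the K entries
    # of largest magnitude in the graph Fourier domain
    L = np.diag(W.sum(1)) - W              # combinatorial Laplacian
    _, U = np.linalg.eigh(L)               # L = U Lambda U^T
    f_hat = rng.standard_normal(len(W))
    small = np.argsort(np.abs(f_hat))[:-K]  # indices of the N-K smallest
    f_hat[small] = 0.0
    return U @ f_hat, f_hat

N, K, M = 100, 4, 50
pts = sensor_positions(N)
W = knn_adjacency(pts, 6)
f, f_hat = k_sparse_signal(W, K)
sample_set = rng.permutation(N)[:M]        # M4: random sampling
```

A smaller \(N\) is used here than in the experiment purely to keep the sketch fast; the construction is identical in form.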

Fig. 1

Percentage of sensor network graph signals reconstructed correctly (N=600)

Fig. 2

The reconstruction performance of different algorithms when the value of K changes

Fig. 3

MSE versus the iteration number on the sensor network dataset

Fig. 4

The estimation percentage reconstructed with the number of measurements (M)

In this paper, we conducted signal reconstructions with various parameter values to investigate the influence of the parameter settings on the performance of the reconstruction algorithm. Table 2 shows the MSE results under different sampling methods. Different parameters may lead to different performances for different sampling methods. As shown in Table 2, the parameters \((\alpha, \beta)\) are set to (1, 0) and (0.5, 0.5) randomly and to (0.4043, 0.6886) by the SSA. As can be seen, the reconstruction performance improves when the optimal parameters \(\alpha\) and \(\beta\) are obtained by the SSA. In particular, the MSE of the signal reconstructed by the proposed ITCSR algorithm reaches \(3.2383e-04\) at a sampling rate of 0.5 after adjusting the threshold parameters by the SSA, where \(e-04\) indicates that the decimal point is moved four places to the left. Then, to verify the convergence of the proposed ITCSR algorithm, we calculate the coherence of the matrix \(\varvec{\Phi }\) under different sampling rates with different sampling methods in Table 3. We first used the above four sampling methods to generate 100 groups of data at each sampling rate, then substituted each group into (32) to calculate the coherence and finally took the average value. It can be seen that the coherence of the matrix \(\varvec{\Phi }\) guarantees uniqueness.

In Fig. 1, to reveal the impact of the value of K on the ITCSR algorithm, we reconstructed the sensor network graph signals using different numbers of measurements M and different values of K. As shown in Fig. 1, for the same K, the percentage reconstructed correctly improves as M increases. To compare the reconstruction performance of the proposed algorithm with the baseline algorithms, we vary the sparsity factor K/N from \(10\%\) to \(60\%\). For each sparsity factor and each algorithm, we reconstruct using 20 iterations and report the average achieved SNR in Fig. 2. As observed in Fig. 2, all curves experience a sudden knee-like fall in reconstruction SNR as the sparsity factor increases. As expected, the proposed ITCSR algorithm can successfully reconstruct less sparse signals. Figure 3 shows the MSE versus the number of iterations at a \(60\%\) sampling rate. It is observed that the proposed ITCSR algorithm exhibits both the fastest convergence and the minimum reconstruction error. Overall, the proposed ITCSR algorithm achieves good reconstruction performance while maintaining fast convergence. To evaluate the ITCSR algorithm against conventional sensor signal reconstruction algorithms, we conducted a further experiment in which we additionally introduced a relative threshold-based sparsity estimation method (RTSE) [37] and a robust sparsity estimation method (RSE) [38] for comparative analysis. Figure 4 plots the percentage reconstructed correctly against the number of measurements; in this simulation, we set \(K = 12\). Figure 4 illustrates that the percentage reconstructed increases with M and that the percentage reconstructed by ITCSR is larger than that of RTSE and RSE. In other words, the proposed ITCSR algorithm attains a higher success rate than the conventional algorithms in our simulation.

Fig. 5

Comparative subjective performance of swiss roll dataset reconstruction. a Original swiss roll, b Sampled swiss roll, c Reconstructed swiss roll by ILSR, d Reconstructed swiss roll by CoSaMP, e Reconstructed swiss roll by IMATGI, f Reconstructed swiss roll by ITCSR

Fig. 6

MSE versus the iteration number on the swiss roll dataset

5.1.2 Synthetic data: swiss roll

In this experiment, we use the swiss roll dataset in 3D space to illustrate the reconstruction performance of the proposed ITCSR algorithm. The swiss roll dataset is generated with \(N = 2000\) by

$$\begin{aligned} {\left\{ \begin{array}{ll} x &{}= \frac{3\pi }{2} \cdot (1 + t) \cdot \cos \theta , \\ y &{}= 21\cdot t, \\ z &{}= (1+t) \cdot \sin \theta . \end{array}\right. } \end{aligned}$$
(34)

Then, we construct an undirected graph using the 5-nearest-neighbor algorithm. It is worth mentioning that the M4 sampling method is used.
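A minimal NumPy sketch of the swiss roll generation in Eq. (34) follows. The excerpt does not specify how \(t\) and \(\theta\) are drawn, so uniform draws over assumed ranges (\(t \in [0,1]\), \(\theta \in [0, 3\pi]\)) are used here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def swiss_roll(N):
    # Eq. (34): x = (3*pi/2)(1+t)cos(theta), y = 21 t, z = (1+t)sin(theta).
    # The sampling of t and theta is an assumption of this sketch.
    t = rng.uniform(0.0, 1.0, N)
    theta = rng.uniform(0.0, 3.0 * np.pi, N)
    x = 1.5 * np.pi * (1.0 + t) * np.cos(theta)
    y = 21.0 * t
    z = (1.0 + t) * np.sin(theta)
    return np.column_stack([x, y, z])

P = swiss_roll(2000)   # N = 2000 points in 3D space, as in the experiment
```

The resulting point cloud can then be fed to the same k-nearest-neighbor graph construction used for the sensor network dataset.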

In Fig. 5, we show the 3D visualization of the reconstructed swiss roll data. Figure 5a, b illustrates the original and the sampled swiss roll. Figure 5c, d, e and f illustrates the reconstruction results at a \(50\%\) sampling rate using the ILSR, CoSaMP, IMATGI and proposed ITCSR algorithms, respectively. Overall, we observe that ITCSR outperforms the other reconstruction algorithms; even compared with the ILSR and IMATGI algorithms, ITCSR still exhibits competitive reconstruction performance. Specifically, comparing Fig. 5a and c, the reconstruction by the ILSR algorithm shows an obvious deviation. Comparing Fig. 5a with d and e, many obvious outliers exist in the swiss roll data reconstructed by the CoSaMP and IMATGI algorithms. The convergence comparison of the considered methods at a \(60\%\) sampling rate is shown in Fig. 6. As expected, the proposed ITCSR algorithm exhibits the minimum reconstruction error and the fastest convergence.

Table 4 MAX results by different sampling ratio on the 3D point datasets
Table 5 Coherence of matrix \(\varvec{\Phi }\) under different sampling methods on the 3D point dataset
Fig. 7

Comparative subjective performance of Fandisk 3D point cloud model reconstruction. a Original Fandisk 3D point cloud, b Sampled Fandisk 3D point cloud, c Reconstructed Fandisk 3D point cloud by ILSR, d Reconstructed Fandisk 3D point cloud by CoSaMP, e Reconstructed Fandisk 3D point cloud by IMATGI, f Reconstructed Fandisk 3D point cloud by ITCSR

Fig. 8

Comparative subjective performance of Skull 3D point cloud model reconstruction. a Original Skull 3D point cloud, b Sampled Skull 3D point cloud, c Reconstructed Skull 3D point cloud by ILSR, d Reconstructed Skull 3D point cloud by CoSaMP, e Reconstructed Skull 3D point cloud by IMATGI, f Reconstructed Skull 3D point cloud by ITCSR

5.2 Experiments on 3D point cloud data

In this subsection, we focus on applying the proposed ITCSR algorithm to six 3D point cloud datasets: Block, Cactus, Fandisk, Skull, Gargo and Dino. To simplify the process, the graph is constructed using the 5-nearest-neighbor algorithm. The weight matrix is formed by assigning the edge weight \(\textbf{W}(i,j) = -\frac{1}{d_{i,j}^2}\), where \(d_{i,j}\) represents the distance between vertices i and j. The normalized matrix \({\textbf{W}}_{\textrm{norm}}\) is obtained by dividing \({\textbf{W}}\) by its maximum value, denoted as \(\max ({\textbf{W}})\). The graph signals are then defined as “Location X,” “Location Y,” and “Location Z.”
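The graph construction above can be sketched as follows. This NumPy sketch is illustrative only: nonnegative inverse-square weights \(1/d_{i,j}^2\) are used under the assumption that only the weight magnitude matters after normalization (the text writes a minus sign), and the symmetrized 5-nearest-neighbor rule is our reading of the construction.

```python
import numpy as np

def point_cloud_graph(P, k=5):
    # k-nearest-neighbor graph with inverse-square-distance edge weights,
    # normalized by the maximum weight so that W_norm lies in [0, 1].
    n = len(P)
    d = np.linalg.norm(P[:, None] - P[None, :], axis=2)
    np.fill_diagonal(d, np.inf)            # no self-loops
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[:k]:
            w = 1.0 / d[i, j] ** 2
            W[i, j] = W[j, i] = max(W[i, j], w)
    return W / W.max()                     # W_norm

# The graph signals are simply the coordinate channels of the cloud.
P = np.random.default_rng(3).standard_normal((50, 3))  # placeholder cloud
W_norm = point_cloud_graph(P)
loc_x, loc_y, loc_z = P[:, 0], P[:, 1], P[:, 2]
```

For the actual experiments, `P` would be replaced by the vertex coordinates of the Block, Cactus, Fandisk, Skull, Gargo or Dino models [39].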

Our objective is to estimate the missing Location values within an incomplete matrix of relative Location data. We select \(50\%\), \(60\%\) and \(70\%\) of the measurements by the M4 method and reconstruct all graph signals in the spectral domain. We calculate the MAX values on the six 3D point cloud datasets, and the results are shown in Table 4. In general, we observe that for all reconstruction algorithms, the reconstruction performance improves as the sampling ratio increases. For the different sampling ratios under test, the MAX performance of ITCSR is very competitive, and the proposed ITCSR algorithm is far superior to the ILSR and IMATGI algorithms. Specifically, on the Gargo 3D point cloud dataset, the MAX values of the ILSR algorithm, the IMATGI algorithm and the proposed algorithm for Location Z at a 0.5 sampling ratio are 39.3352, 36.4275 and 14.5528, respectively. Then, to verify the convergence of the proposed ITCSR algorithm, we calculate the coherence of the matrix \(\varvec{\Phi }\) under different sampling rates with different sampling methods in Table 5.

To illustrate the performance of the proposed ITCSR algorithm, the Fandisk dataset with 6470 vertices and the Skull 3D point cloud dataset with 20002 vertices are selected at a \(50\%\) sampling ratio, as shown in Figs. 7b and 8b. Compared with Figs. 7a and 8a, it is observed that a significant portion of the original signals is missing. Overall, we observe that ITCSR improves the reconstruction performance in comparison with the other algorithms. This is because ITCSR relies on the more general assumption of sparsity rather than bandlimitedness of the underlying graph signals; moreover, ITCSR combines the theory of compressed sensing with the related concepts of graph signal processing and optimizes the selection of the threshold. Specifically, as indicated in Figs. 7c and 8c, the 3D point cloud signal reconstructed by ILSR is distorted. Figures 7d and 8d show the 3D point cloud signal reconstructed by the CoSaMP algorithm; as can be seen, the accuracy of the CoSaMP reconstruction becomes insufficient as the amount of data increases. Figures 7e and 8e show that the signal reconstructed by IMATGI retains its original basic shape but contains many outliers. Moreover, Figs. 7f and 8f show that, as the number of vertices increases, the proposed ITCSR algorithm is clearly superior to the ILSR and IMATGI algorithms.

6 Conclusion

In this paper, we proposed the ITCSR algorithm for sparsity-promoting reconstruction of signals defined on graphs. The ITCSR algorithm selects a suitable threshold to ensure the signal reconstruction performance, and we provided a formal convergence analysis. Compared with OMP, CoSaMP, ILSR and IMATGI, the proposed ITCSR algorithm decreased the MSE on synthetic datasets under different sampling methods. Experimental results on six public 3D point cloud datasets demonstrated that our algorithm performs better: the MAX values on Block, Cactus, Fandisk, Skull, Gargo and Dino are lower than those of the reference algorithms at 0.5, 0.6 and 0.7 sampling ratios, respectively.

Availability of data and material

Not applicable.

References

  1. A. Sandryhaila and J.M.F. Moura, Discrete signal processing on graphs: graph Fourier transform, in: 2013 International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 6167-6170 (2013) https://doi.org/10.1109/ICASSP.2013.6638850

  2. D.I. Shuman, S.K. Narang, P. Frossard, A. Ortega, P. Vandergheynst, The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 30(3), 83–98 (2013). https://doi.org/10.1109/MSP.2012.2235192


  3. A. Ortega, P. Frossard, J. Kovačević, J.M.F. Moura, P. Vandergheynst, Graph signal processing: overview, challenges, and applications. Proc. IEEE Inst. Electr. Electron Eng. 106(5), 808–828 (2018)


  4. P. Xu, J. Zhang, T. Gao, S. Chen, X. Wang, H. Jiang, W. Gao, Real-time fast charging station recommendation for electric vehicles in coupled power-transportation networks: a graph reinforcement learning method. Int. J Elec. Power. 141, 108030 (2022). https://doi.org/10.1016/j.ijepes.2022.108030


  5. J. Feng, F. Chen, H. Chen, Data reconstruction coverage based on graph signal processing for wireless sensor networks. IEEE Wirel. Commun. Lett. 11(1), 48–52 (2022). https://doi.org/10.1109/LWC.2021.3120276


  6. C. Dinesh, G. Cheung and I.V. Baji, 3D point cloud super-resolution via graph total variation on surface normals, in: 2019 IEEE International Conference on Image Processing (ICIP), pp. 4390-4394 (2019) https://doi.org/10.1109/ICIP.2019.8803560

  7. X. Shang, R. Ye, H. Feng and Xue-Qin. Jiang, Robust feature graph for point cloud denoising, in: 2022 7th International Conference on Communication, Image and Signal Processing (CCISP), pp. 330-336 (2022) https://doi.org/10.1109/CCISP55629.2022.9974370

  8. Q. Huang, R. Li, Z. Jiang, W. Feng, S.Wei, S. Lin, H.Feng and B.Hu, Fast color-guided depth denoising for RGB-D images by graph filtering, in: 2019 53rd Asilomar Conference on Signals, Systems, and Computers(ACSSC), pp. 1811-1815 (2019)https://doi.org/10.1109/IEEECONF44664.2019.9048703

  9. P.F. Cid, J.M.B. Ordinas, J.G. Vidal, Graph signal reconstruction techniques for IoT air pollution monitoring platforms. IEEE Internet Things J. 9(24), 25350–25362 (2022). https://doi.org/10.1109/JIOT.2022.3196154


  10. A. Hashemi, R. Shafipour, H. Vikalo, G. Mateos, Towards accelerated greedy sampling and reconstruction of bandlimited graph signals. Signal Process. 195(10), 0165–1684 (2022). https://doi.org/10.1016/j.sigpro.2022.108505


  11. S. Chen, A. Sandryhaila, J.M.F. Moura, J. Kovačević, Signal recovery on graphs: variation minimization. IEEE Trans. Signal Process. 63(17), 4609–4624 (2015). https://doi.org/10.1109/TSP.2015.2441042


  12. N. Perraudin, P. Vandergheynst, Stationary signal processing on graphs. IEEE Trans. Signal Process. 65(13), 3462–3477 (2017). https://doi.org/10.1109/TSP.2017.2690388


  13. S.K. Narang, A. Gadde, E. Sanou, A. Ortega, Localized iterative methods for interpolation in graph structured data. IEEE Glob. Conf. Signal Inform. Process. (GlobalSIP) 2013, 491–494 (2013). https://doi.org/10.1109/GlobalSIP.2013.6736922


  14. X. Wang, P. Liu, Y. Gu, Iterative reconstruction of graph signal in low-frequency subspace. IEEE Glob. Conf. Signal Inform. Process. (GlobalSIP) 2014, 448–452 (2014). https://doi.org/10.1109/GlobalSIP.2014.7032157


  15. Y. Bai, F. Wang, G. Cheung, Y. Nakatsukasa, W. Gao, Fast graph sampling set selection using Gershgorin disc alignment. IEEE Trans. Signal Process. 68, 2419–2434 (2020). https://doi.org/10.1109/TSP.2020.2981202


  16. D.L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)


  17. J.A. Tropp, A.C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007). https://doi.org/10.1109/TIT.2007.909108


  18. T. Blumensath, M. Davies, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009). https://doi.org/10.1016/j.acha.2009.04.002


  19. M.B. Mashhadi, M. Fallah, F. Marvasti, Interpolation of sparse graph signals by sequential adaptive thresholds, in: 2017 International Conference on Sampling Theory and Applications (SampTA), pp. 266–270 (2017). https://doi.org/10.1109/SAMPTA.2017.8024339


  20. M. Brajović, I. Stanković, M. Daković and L. Stanković, Reconstruction of sparse graph signals from reduced sets of samples, in 2023 27th International Conference on Information Technology (ICIT), pp. 1-5 (2023) https://doi.org/10.1109/IT57431.2023.10078603

  21. C.C. Tseng and L. Su-Ling, A missing data recovery method of sparse graph signal in GFT domain, in: 2018 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), pp. 1-2, https://doi.org/10.1109/ICCE-China.2018.8448787 (2018)

  22. D. Needell, J.A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009). https://doi.org/10.1016/j.acha.2008.07.002


  23. E. Candes, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005). https://doi.org/10.48550/arXiv.math/0502327

  24. W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory (2009). https://doi.org/10.1109/TIT.2009.2016006


  25. J. Wang, S. Kwon, and B. Shim, Generalized orthogonal matching pursuit, IEEE Trans. Signal Process., 60(12), pp. 6202–6216 (2012)https://doi.org/10.1109/TSP.2012.2218810

  26. J. Xue, B. Shen, A novel swarm intelligence optimization approach: sparrow search algorithm. Syst. Sci. Control. Eng. 8(1), 22–34 (2020). https://doi.org/10.1080/21642583.2019.1708830


  27. E. Candes, T. Tao, Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006). https://doi.org/10.1109/TIT.2006.885507


  28. S. Datta, S. Howard, D. Cochran, Geometry of the Welch bounds. Linear Algebra Appl. 437(10), 2455–2470 (2012). https://doi.org/10.1016/j.laa.2012.05.036


  29. N. Beresford, A result complementary to Geršgorin's circle theorem. Linear Algebra Appl. 431(1), 20–27 (2009). https://doi.org/10.1016/j.laa.2009.01.030


  30. B. Girault, A. Ortega and S. S. Narayayan, Graph vertex sampling with arbitrary graph signal hilbert spaces, in: 2020 International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5670-5674 (2020) https://doi.org/10.1109/ICASSP40776.2020.9054723

  31. D. E. Olivier Tzamarias, P. Akyazi and P. Frossard, A novel method for sampling bandlimited graph signals, in 2018 26th European Signal Processing Conference (EUSIPCO), pp. 126-130 (2018) https://doi.org/10.23919/EUSIPCO.2018.8553064

  32. H. Shomorony and A. S. Avestimehr, Sampling large data on graphs, in: IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 933-936 (2014) https://doi.org/10.1109/GlobalSIP.2014.7032257

  33. D. Donoho, M. Elad, Maximal sparsity representation via l-1 minimization. Comput. Sci. 100(12), 2197–2202 (2002). https://api.semanticscholar.org/CorpusID:17259152


  34. J. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004). https://doi.org/10.1109/TIT.2004.834793


  35. B. Girault, A. Ortega, and S.S. Narayayan, Graph vertex sampling with arbitrary graph signal hilbert spaces, in: 2014 International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 5670-5674, https://doi.org/10.48550/arXiv.2002.11238 (2014)

  36. H. Shomorony and A. S. Avestimehr, Sampling large data on graphs, in: 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 933-936 (2014) https://doi.org/10.48550/arXiv.1411.3017

  37. X. Xu, J. Chen, N. Wan, D. Chen and J. Wan, Sparsity estimation method in compressed data gathering of wireless sensor networks, in: 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), pp. 833-836 (2019) https://doi.org/10.1109/ITAIC.2019.8785897

  38. Qin, S., Yin, J, A Robust sparsity estimation method in compressed sensing, in: China Conference on Wireless Sensor Networks (CWSN), pp. 481-488 (2014) https://doi.org/10.1007/978-3-662-46981-1-46

  39. 3D Point Datasets, http://graphics.stanford.edu/data/3Dscanrep/ (2014)


Acknowledgements

Not applicable

Funding

This study was partly supported by the Innovation Program for Quantum Science and Technology (Grant No.2021ZD0300703) and the National Natural Science Foundation project (Grant No. 61971146).

Author information

Authors and Affiliations

Authors

Contributions

Yuting Cao was involved in methodology, data curation, software, writing—original draft. Xue-Qin Jiang helped in conceptualization, methodology, supervision, writing—review & editing. Jian Wang contributed to writing—review & editing. Shubo Zhou helped in supervision. Xinxin Hou contributed to writing— review & editing.

Corresponding author

Correspondence to Xue-Qin Jiang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors have no competing interests to declare.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A The proof of Theorem 1

We now prove Theorem 1. The method we adopt is an extension and enhancement of prior work [34]. Before presenting the proof, we introduce two lemmas.

Lemma 1

For a matrix \(\varvec{\Phi }\) satisfying the RIP with constant \(\delta _K\), the squared singular values of any submatrix formed by \(m \le K\) distinct columns, indexed \(\lambda _1,\cdots ,\lambda _m\), exceed \(1-\delta _K\).

Lemma 2

If the matrix \(\varvec{\Phi }\) satisfies the RIP with constant \(\delta _K\), then \(\delta _K \le K\delta _{K+1}\) holds for every natural number K.

We omit the straightforward proofs and proceed to the proof of the theorem.

Proof

Let \(\varvec{\Gamma }^{*}\) and \(\hat{{\textbf{f}}}_{\varvec{\Gamma }^{*}}^{*} = \varvec{\Phi }_{\varvec{\Gamma }^{*}}^{\dagger }{\textbf{y}}\) be as in the assumptions of Theorem 1. Suppose that, at iteration t, the algorithm has selected the index set \(\varvec{\Gamma }_{t} \subset \varvec{\Gamma }^{*}\). Let \(H = \varvec{\Gamma }^{*}\) and \(J = \left\{ i : i\notin H \right\}\). Then, we can write

$$\begin{aligned} \frac{\left\| \varvec{\Phi }_{ J }^{\top } {\textbf{r}}^t\right\| _{\infty }}{\left\| \varvec{\Phi }_{ H }^{\top } {\textbf{r}}^t\right\| _{\infty }} \le \frac{\left\| \varvec{\Phi }_{ J }^{\top } \left( {\textbf{y}} - (\tilde{{\textbf{y}}}^{*})^{t}\right) \right\| _{\infty }}{\left\| \varvec{\Phi }_{ H }^{\top } \left( (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right) \right\| _\infty } + \frac{\left\| \varvec{\Phi }_{ J }^{\top } \left( (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right) \right\| _{\infty }}{\left\| \varvec{\Phi }_{ H }^{\top } \left( (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right) \right\| _\infty }. \end{aligned}$$
(A1)

We may calculate that

$$\begin{aligned} \begin{aligned} \frac{\left\| \varvec{\Phi }_{ J }^{\top } \left( (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right) \right\| _{\infty }}{\left\| \varvec{\Phi }_{ H }^{\top } \left( (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right) \right\| _\infty }&\le \max _{i\in J}{\left\| \varvec{\Phi }_{ H }^{\dagger }\varvec{\phi }_{ji}\right\| _1}\\ {}&\le \sqrt{ K }\max _{i\in J } {\left\| \left( \varvec{\Phi }_{ H }^{\top }\varvec{\Phi }_{ H }\right) ^{-1}\right\| _2} \cdot {\left\| \varvec{\Phi }_{ H }^{\top }\varvec{\phi }_{ji}\right\| _2}\\ {}&\le \sqrt{ K } \cdot \frac{\delta _{K+1}}{1-\delta _K}, \end{aligned} \end{aligned}$$
(A2)

where the last inequality follows from standard properties of the RIP constant (see Lemma 1 and [33]). Continuing the calculation,

$$\begin{aligned} \begin{aligned} \frac{\left\| \varvec{\Phi }_{ J }^{\top } \left( {\textbf{y}} - (\tilde{{\textbf{y}}}^{*})^{t} \right) \right\| _{\infty }}{\left\| \varvec{\Phi }_{ H }^{\top } \left( (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right) \right\| _\infty }&= \frac{\max _{i\in J}\left\| \varvec{\phi }_{ji}^{\top } \left( {\textbf{y}} - (\tilde{{\textbf{y}}}^{*})^{t} \right) \right\| _2}{\left\| \varvec{\Phi }_{ H }^{\top } \left( (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right) \right\| _\infty }\\ {}&\le \frac{\max _{i\in J}\left\| \varvec{\phi }_{ji}^{\top } \left( {\textbf{y}} - (\tilde{{\textbf{y}}}^{*})^{t} \right) \right\| _2}{K^{-1/2}\left\| \varvec{\Phi }_{ H }^{\top } \left( (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right) \right\| _2}\\ {}&\le \frac{\sqrt{ K }\max _{i\in J}\left\| \varvec{\phi }_{ji}\right\| _2 \cdot \left\| {\textbf{y}} - (\tilde{{\textbf{y}}}^{*})^{t} \right\| _2}{\sigma _{\min }\left( \varvec{\Phi }_{ H }\right) \left\| (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right\| _2}\\ {}&\le \frac{\sqrt{ K }\left\| {\textbf{y}} - (\tilde{{\textbf{y}}}^{*})^{t} \right\| _2}{\sqrt{1-\delta _K}\left\| (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right\| _2}, \end{aligned} \end{aligned}$$
(A3)

where the last step follows from Lemma 2 [34]. Combining (A2) and (A3), the proposed ITCSR algorithm selects elements from the set \(H\) provided that

$$\begin{aligned} \frac{\sqrt{ K }\left\| {\textbf{y}} - (\tilde{\textbf{y}}^{*})^{t}\right\| _2 }{\sqrt{1 - \delta _K}\left\| (\tilde{{\textbf{y}}}^{*})^{t} - \tilde{{\textbf{y}}}^{t} \right\| _2} + \sqrt{ K } \cdot \frac{\delta _{ K +1}}{1-\delta _K} \le \alpha \left( 1+\beta \right) , \end{aligned}$$
(A4)

which, since both terms on the left-hand side are positive, is only possible if \(\delta _{ K +1}<\frac{\alpha \left( 1+\beta \right) }{\sqrt{ K }+\alpha \left( 1 + \beta \right) }\). This then yields

$$\begin{aligned} \left\| {\textbf{y}} - \tilde{{\textbf{y}}}^{t}\right\| _2 \le \sqrt{1 + \left( \frac{1}{\alpha \left( 1+\beta \right) \sqrt{\frac{1-\delta _ K }{ K }}-\frac{\delta _{ K +1}}{\sqrt{1-\delta _ K }}}\right) ^2} \left\| {\textbf{y}} - (\tilde{{\textbf{y}}}^{*})^{t}\right\| _2. \end{aligned}$$
(A5)

\(\square\)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Cao, Y., Jiang, XQ., Wang, J. et al. An efficient algorithm with fast convergence rate for sparse graph signal reconstruction. EURASIP J. Adv. Signal Process. 2024, 38 (2024). https://doi.org/10.1186/s13634-024-01133-3
