 Research
 Open access
An efficient algorithm with fast convergence rate for sparse graph signal reconstruction
EURASIP Journal on Advances in Signal Processing volume 2024, Article number: 38 (2024)
Abstract
In this paper, we consider graph signals that are sparse in the graph Fourier domain and propose an iterative threshold compressed sensing reconstruction (ITCSR) algorithm to reconstruct them. The proposed ITCSR algorithm derives from well-known compressed sensing by considering a threshold for sparsity-promoting reconstruction of the underlying graph signals. The proposed ITCSR algorithm enhances the performance of sparse graph signal reconstruction by introducing a threshold function to determine a suitable threshold. Furthermore, we demonstrate that suitable parameters for the threshold can be determined automatically by leveraging the sparrow search algorithm. Moreover, we analytically prove the convergence property of the proposed ITCSR algorithm. In the experiments, numerical tests with synthetic as well as 3D point cloud data demonstrate the merits of the proposed ITCSR algorithm relative to the baseline algorithms.
1 Introduction
In recent years, there has been a noteworthy surge in research interest in the analysis of irregularly structured signals across various fields and applications. In many practical scenarios, the occurrence of missing signals within these irregularly structured signals poses a critical concern. The loss of signals can be attributed to various factors, including sensor malfunctions, signal transfer losses or system maintenance activities. In this context, signal reconstruction techniques are developed to substitute the missing signals with dependable estimations. Furthermore, graph signal processing (GSP) [1,2,3] has been introduced as an intuitive framework to deal with graph signals lying on an irregular structure, which applies to a wide class of classical use cases such as traffic [4], sensor networks [5], 3D point clouds [6, 7], image processing [8], air pollution monitoring platforms [9] and recommendation systems [10]. Among GSP tools, reconstruction emerges as a straightforward approach to the intricate challenge of estimating missing signals.
In classical situations, graph signal reconstruction is often based on assumptions such as smoothness, bandlimitedness, stationarity and sparsity. [11] formulated the graph model for conflict resolution (GMCR) and employed the proximal gradient algorithm to solve it. The GMCR assumes signal smoothness on the graph, and its penalty term is based on the quadratic form of graph total variation. However, this assumption can lead to a deteriorated reconstruction if the signals of interest have dissimilar values between vertices that are connected in the ground truth topology. [12] introduced a graph signal reconstruction technique for estimating the power spectral density (PSD) of graph signals, which was further utilized to develop Wiener-type estimation procedures for signals that are both noisy and partially observed. Nevertheless, the method proposed in [12] cannot directly estimate the PSD; instead, it assumes that the graph signals are stationary and stochastic. Inspired by the classical Papoulis–Gerchberg iterative scheme, [13] proposed the iterative least square reconstruction (ILSR) to reconstruct bandlimited graph signals from sampled signals. In addition, [14] proposed two efficient graph signal reconstruction methods, namely the iterative propagating reconstruction (IPR) and the iterative weighting reconstruction (IWR). To mitigate the computational expense of the eigendecomposition of the graph Laplacian matrix when dealing with bandlimited graph signals, [15] leveraged Gershgorin disks to optimize a sampling criterion relying on the lower bound of the smallest eigenvalue of the graph Laplacian matrix. The methods presented in [12,13,14,15] assume that the graph signals are bandlimited in the graph Fourier domain; such bandlimited graph signals are also referred to as sparse graph signals.
However, bandlimited graph signals, viewed as graph Fourier domain sparse signals, have a certain bandwidth limitation across the entire graph Fourier domain. In other words, the nonzero coefficients of bandlimited graph signals only exist in the low-frequency space, which cannot fully capture all the characteristics of real-world signals.
Inspired by the theory of compressed sensing (CS), many researchers proposed a variety of signal reconstruction algorithms based on sparse representation, such as basis pursuit (BP) [16], orthogonal matching pursuit (OMP) [17] and iterative hard thresholding (IHT) [18]. These algorithms provided important theoretical foundations and methodologies for sparse graph signal reconstruction. In the references related to sparse graph signal reconstruction [19,20,21], notable contributions include research on modeling the sparsity of graph signals and the development of compressed sensing reconstruction algorithms based on graph signals. The sparse graph signal model allows nonzero coefficients throughout the entire graph Fourier domain, including the high-frequency space. [19] proposed the iterative method with adaptive thresholding for graph interpolation (IMATGI) for sparsity-promoting reconstruction of graph signals. Furthermore, [20] presented recovery and sampling algorithms that select a predetermined-cardinality subset of vertices guaranteeing reconstruction of the graph signals with the lowest possible reconstruction error. [21] proposed three sparse graph signal reconstruction algorithms based on BP, OMP and IHT, respectively. In order to achieve efficient compression, sampling and reconstruction of signals, the compressive sampling matching pursuit (CoSaMP) algorithm was developed [22]. These references contributed important insights into understanding the sparsity of graph signals and proposed effective reconstruction algorithms. Unfortunately, however, as the vertex count of the graph increases, the distortion in the graph signals reconstructed by these algorithms also escalates.
In this paper, we consider graph signals that are sparse in the graph Fourier domain. For sparse graph signal reconstruction, we propose an iterative threshold compressed sensing reconstruction (ITCSR) algorithm, which effectively addresses the challenges associated with signal reconstruction. The proposed ITCSR algorithm enhances the performance of sparse graph signal reconstruction by introducing a threshold function to determine a suitable threshold. Furthermore, we demonstrate that suitable parameters for the threshold can be determined automatically by leveraging the sparrow search algorithm (SSA) [26]. A convergence analysis of the proposed ITCSR algorithm is provided. Experiments on both synthetic and 3D point cloud datasets demonstrate that the reconstruction performance of the proposed ITCSR algorithm outperforms that of the baseline algorithms.
The main contributions in this paper are summarized as follows.

1.
It proposes a new algorithm, referred to as ITCSR, for solving the sparse graph signal reconstruction problem.

2.
It gives a theoretical analysis of the convergence property of the proposed algorithm by utilizing the restricted isometry property (RIP).

3.
It demonstrates that the proposed ITCSR algorithm produces a significant improvement in sparse graph signal reconstruction performance on publicly available 3D point cloud datasets.
1.1 Paper overview
The remainder of this paper is organized as follows. Some basic definitions of GSP and \(K\)-sparse graph signals are stated in Sect. 2. Section 3 formulates the target model and describes the proposed ITCSR algorithm. In Sect. 4, the convergence analysis for the proposed ITCSR algorithm is presented. In Sect. 5, the proposed algorithm is compared with the baseline algorithms through experiments that validate its performance. Finally, we conclude the paper in Sect. 6.
1.2 Notation
In this paper, we adopt the following notations. Boldface lowercase letters denote vectors, whereas boldface uppercase letters denote matrices. In the tables, we represent the optimal values in bold font. \(\textrm{diag}(\cdot )\) is a diagonal matrix with its argument along the main diagonal. \((\cdot )^{\top }\), \((\cdot )^{-1}\) and \((\cdot )^{\dag }\) denote the transpose, inverse and pseudo-inverse operations, respectively. The notation \(|\cdot |\) indicates the cardinality of a set. For a vector \({\textbf{f}}\), \(\Vert {\textbf{f}}\Vert _0\) and \(\Vert {\textbf{f}}\Vert _2\) represent its \(\ell _0\)-norm and Euclidean norm, respectively.
2 Preliminary
In this section, we review definitions of graph Laplacian matrix and sparse graph signals that are used in the development of the proposed ITCSR algorithm and theoretical framework.
2.1 Graph Laplacian matrix
An undirected graph without self-loops is represented as \({\mathcal {G}}:= \{{\mathcal {V}}, {\mathcal {E}}, {\textbf{A}} \}\), where \({\mathcal {V}} = \{v_1, v_2,..., v_N\}\) is the vertex set and \({\mathcal {E}} \subseteq {\mathcal {V}}\times {\mathcal {V}}\) is the set of undirected edges connecting the vertices. \({\textbf{A}}\in {\mathbb {R}}^{N\times N}\) indicates the adjacency matrix of the graph. The elements of \({\textbf{A}}\) are defined as \(a_{ij} = 1\) if \((v_i, v_j) \in {\mathcal {E}}\) and \(a_{ij} = 0\) otherwise.
In general, the adjacency matrix \({\textbf{A}}\) represents the connection relationships between vertices in a graph. As an extension of the adjacency matrix, the weight matrix \({\textbf{W}}\) contains the weight information of the connections. The element \(w_{ij}\) of \({\textbf{W}}\) represents the weight of the connection between vertex i and vertex j. It is important to note that edge weights are usually assumed to be nonnegative real numbers. For each vertex \(v_i\), we define the degree \(d_i = \sum _{j=1}^N{w_{ij}}\) as the sum of the edge weights connected to it. In this way, the degree matrix is defined as \({\textbf{D}}= \text {diag}(d_1, d_2, \cdots , d_N)\). Along with \({\textbf{W}}\) and \({\textbf{D}}\), the combinatorial Laplacian matrix \({\textbf{L}} \in {\mathbb {R}}^{N\times N}\) is defined as \({\textbf{L}} = {\textbf{D}} - {\textbf{W}}\).
Since \({\textbf{L}}\) is a real symmetric and positive semidefinite matrix, it admits the eigendecomposition \({\textbf{L}} = {\textbf{U}} \varvec{\Lambda } {\textbf{U}}^{\top }\), where \(\varvec{\Lambda } = \text {diag}(\lambda _1, \lambda _2, \cdots , \lambda _N)\) is a diagonal matrix of nonnegative eigenvalues and the orthogonal matrix \({\textbf{U}} = [\varvec{u}_1, \varvec{u}_2, \cdots , \varvec{u}_N]\) contains the corresponding eigenvectors.
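As a concrete illustration, the construction of \({\textbf{L}}\) and its eigendecomposition can be sketched in a few lines of NumPy (the small 4-vertex graph here is a hypothetical example, not one of the paper's datasets):

```python
import numpy as np

# Hypothetical 4-vertex undirected graph given by a symmetric weight matrix W.
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

# Degree matrix D = diag(d_1, ..., d_N), d_i the sum of incident edge weights.
D = np.diag(W.sum(axis=1))

# Combinatorial graph Laplacian L = D - W.
L = D - W

# L is real symmetric, so eigh returns real eigenvalues in ascending order and
# an orthonormal eigenvector matrix U with L = U diag(lam) U^T.
lam, U = np.linalg.eigh(L)
```

Since the example graph is connected, the smallest eigenvalue returned is 0, consistent with the positive semidefiniteness of \({\textbf{L}}\).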
2.2 \(K\)-sparse graph signals
A graph signal is a scalar-valued mapping \(f: {\mathcal {V}} \mapsto {\mathbb {R}}\) defined on the vertices of the graph. It can be represented as a vector \({\textbf{f}} = \left[ f_1,f_2,\cdots ,f_N\right] ^\top \in \mathbb {R}^N\), where the ith component \(f_i\) represents the value of the signal at vertex \(v_i\). Since the Laplacian matrix \({\textbf{L}}\) is real symmetric, all its eigenvalues are nonnegative, \(0 = \lambda _1 \le \lambda _2 \le \cdots \le \lambda _N\), and the corresponding eigenvectors are orthogonal. Then, the graph Fourier transform (GFT) is defined as the projection onto the orthogonal set of the eigenvectors of \({\textbf{L}}\), i.e., \(\hat{{\textbf{f}}} = {\textbf{U}}^{\top }{\textbf{f}}\).
Similar to classical Fourier analysis, the eigenvalues \(\{\lambda _i\}_{i=1}^{N}\) in this context represent the frequencies of the graph, while \(\hat{f}_i\) represents the corresponding frequency component. The inverse graph Fourier transform (IGFT) can be expressed as \({\textbf{f}} = {\textbf{U}}\hat{{\textbf{f}}}\).
For a given set \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\), where \(K \le N\), the graph signals \(\hat{{\textbf{f}}}\) exhibit \(K\)-sparsity in the GFT domain, which can be mathematically defined as \(\hat{f}_k \ne 0\) for \(k \in {\mathcal {K}}\) and \(\hat{f}_k = 0\) for \(k \notin {\mathcal {K}}\).
In other words, \(K\)-sparse graph signals \(\hat{{\textbf{f}}}\) can be expressed as a linear combination of \(K \le N\) eigenvectors of the graph Laplacian \({\textbf{L}}\), which can be represented as \({\textbf{f}} = \sum _{k \in {\mathcal {K}}} \hat{f}_k \varvec{u}_k\).
A major difference is that many previous works assume that sparse graph signals in the graph Fourier domain are confined to the low-frequency space, meaning that the positions corresponding to the frequency components \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\) are known [5, 10]. Due to the assumption of smoothness in the graph Fourier domain and the absence of any frequency components beyond its bandwidth, the bandlimited graph signal model fails to capture the details and rapid changes in the high-frequency space of the signal. Consequently, important information may be lost during the graph signal reconstruction process. Therefore, we propose a sparse graph signal model that considers all frequency components within the entire graph Fourier domain, aiming to enhance the performance of graph signal reconstruction and better preserve crucial information within real-world signals. This implies that the positions corresponding to the frequency components \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\) are unknown.
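The \(K\)-sparse model above, with support allowed anywhere in the spectrum, can be illustrated with a minimal sketch (a ring graph stands in for the actual graphs used later; this choice is an assumption made only for self-containedness):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 4

# Ring graph as a simple stand-in; its Laplacian gives an orthonormal GFT basis.
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

# K-sparse GFT coefficients: the support may lie anywhere in the spectrum,
# including the high-frequency end -- the model adopted in this paper.
f_hat = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
f_hat[support] = rng.standard_normal(K)

f = U @ f_hat            # IGFT: vertex-domain signal
f_hat_rec = U.T @ f      # GFT:  recovers the sparse coefficients exactly
```

Because \({\textbf{U}}\) is orthonormal, applying the GFT to the synthesized signal returns exactly the \(K\)-sparse coefficient vector.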
2.3 The framework of SSA
SSA is a swarm intelligence optimization algorithm based on the behavior of sparrows foraging and avoiding predators. In this paper, the position of sparrows can be represented in the following matrix
where n is the number of sparrows. Then, the fitness value of all sparrows can be expressed by the following vector
where the value of each row in F represents the fitness value of the individual. During each iteration, the location update of producers is defined as
where \(X_{i,j}\) represents the element in the ith row and jth column of the matrix X. Q is a random number following a normal distribution and I represents a matrix with all elements equal to 1. \(\eta \in (0, 1]\) is a random number and \(l_{\max }\) represents the maximum number of iterations. \(r \in [0, 1]\) and \(S_T \in [0.5, 1]\) represent the alarm value and the safety threshold, respectively. Similarly, the location update of scroungers is defined as
where \(X_{w}\) represents the current globally worst position and C represents a matrix in which each element is randomly assigned a value of 1 or \(-1\). When a sparrow detects danger and the alarm value exceeds the safety threshold, the location is updated by
where \(X_{b}\) represents the current globally best position. As the control parameter for the step size, \(\zeta\) is a random number drawn from a normal distribution with mean 0 and variance 1. \(F_{i}\) represents the current fitness value of the individual, and \(F_{b}\) and \(F_{w}\) are the current global best and worst fitness values, respectively. In this paper, \(F_{b}\) is defined as
The pseudocode for the SSA procedure used in this study is shown in Algorithm 1.
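The update rules above can be sketched as follows. This is a simplified, hypothetical reading of SSA (the scrounger rule in particular is abbreviated), with a sphere function standing in for the fitness \(F\):

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Hypothetical fitness: the sphere function (to be minimized).
    return float(np.sum(x ** 2))

n, dim, l_max = 20, 2, 50      # number of sparrows, dimension, max iterations
S_T, PD = 0.8, 0.2             # safety threshold, fraction of producers
X = rng.uniform(-1.0, 1.0, (n, dim))

for t in range(1, l_max + 1):
    F = np.array([fitness(x) for x in X])
    X = X[np.argsort(F)]                   # sort by fitness, best first
    X_b, X_w = X[0].copy(), X[-1].copy()   # global best / worst positions
    n_prod = max(1, int(n * PD))
    r = rng.random()                       # alarm value
    for i in range(n_prod):                # producer update
        if r < S_T:                        # safe: cautious contraction
            eta = rng.random() + 1e-12
            X[i] = X[i] * np.exp(-(i + 1) / (eta * l_max))
        else:                              # alarmed: random normal move
            X[i] = X[i] + rng.standard_normal() * np.ones(dim)
    for i in range(n_prod, n):             # scrounger update (simplified)
        X[i] = X_b + np.abs(X[i] - X_b) * rng.choice([-1.0, 1.0], size=dim)
    for i in rng.choice(n, size=n // 10, replace=False):  # danger-aware update
        zeta = rng.standard_normal()       # step-size control, N(0, 1)
        X[i] = X_b + zeta * np.abs(X[i] - X_b)

best = min(fitness(x) for x in X)
```

In the ITCSR context, the fitness function would instead be the residual criterion of Sect. 3, with each sparrow's position encoding a candidate \((\alpha , \beta )\) pair.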
3 The proposed ITCSR algorithm for \(K\)-sparse graph signals
In this section, we consider the reconstruction of \(K\)-sparse graph signals. First, we formulate \(K\)-sparse graph signal reconstruction as the solution of an underdetermined equation system. We then present the proposed ITCSR algorithm to solve this problem and show its pseudocode.
3.1 Problem formulation
\(K\)-sparse graph signals \(\hat{{\textbf{f}}} \in {\mathbb {R}}^{N}\) in the graph Fourier domain, with sparsity \(K\), can be reconstructed using a reduced number of available samples, denoted as \(M < N\). These samples correspond to a random subset \({\mathcal {S}} = \{m_1, m_2, \cdots , m_M\}\) selected from the vertex set \({\mathcal {V}} = \{1, 2, \cdots , N\}\). The measurement vector containing the available samples is defined as
where \(f_{m_M}\) represents the sampled graph signals. According to (4), it then yields the following form
where the matrix \({\textbf{U}}_{MN}\) is obtained by selecting the rows from the matrix \({\textbf{U}}\) that correspond to the set \({\mathcal {S}}\). For simplicity, \({\textbf{U}}_{MN}\) is represented by \(\mathbf {\Phi }\). (13) denotes the underdetermined system of \(M\) equations and \(N\) unknowns (\(M < N\)), which can be formulated as
It is obvious that (14) is an underdetermined system of equations and \(\hat{{\textbf{f}}}\) cannot be uniquely reconstructed from \({\textbf{y}}\) by linear algebraic means, as (14) may have many solutions. However, we aim to find a sparse solution, and sparsity can serve as a powerful constraint for specific measurement matrices \(\mathbf {\Phi }\). This means that (14) can be solved when the positions \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\) are known. The formulation of \(K\)-sparse graph signal reconstruction in this paper considers the scenario where the positions of the nonzero GFT coefficients are unknown, which can be formulated as
where the operator \(\Vert \cdot \Vert _0\) is used as a sparsity measure.
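A minimal sketch of this measurement model, with a random orthonormal matrix standing in for the GFT basis \({\textbf{U}}\) (an assumption made for self-containedness):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 64, 24, 4

# Any orthogonal matrix serves to illustrate the measurement model; here we
# use the Q factor of a random Gaussian matrix as a stand-in GFT basis.
U, _ = np.linalg.qr(rng.standard_normal((N, N)))

# K-sparse GFT coefficients with unknown support positions.
f_hat = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
f_hat[support] = rng.standard_normal(K)

# Random vertex subset S of size M; Phi = U_{MN} keeps the rows indexed by S.
S = rng.choice(N, size=M, replace=False)
Phi = U[S, :]

# Available samples: y collects the signal values at the sampled vertices,
# i.e. y = Phi f_hat, an underdetermined system with M < N.
y = Phi @ f_hat
```

Each entry of \({\textbf{y}}\) equals the vertex-domain signal \({\textbf{U}}\hat{{\textbf{f}}}\) evaluated at a sampled vertex, which is exactly the row-selection reading of \({\textbf{y}} = \mathbf {\Phi }\hat{{\textbf{f}}}\).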
3.2 The proposed ITCSR algorithm
This subsection introduces the proposed ITCSR algorithm to solve (15). The objective of ITCSR is to obtain an approximate solution to \({\textbf{y}} = \mathbf {\Phi }\hat{{\textbf{f}}}\). We now outline its fundamental components. We denote by \(\varvec{\eta }\) the vector of residual correlations and by \({\textbf{r}}\) the residual vector. The maximum function of the residual correlations can be expressed as \({\mathcal {M}}(\varvec{\eta }) = \max _{1 \le i \le N} |\eta _i|\).
On this basis, a subvector \({\textbf{b}} \in \mathbb {R}^{N-1}\) is defined as the vector obtained by removing \({\mathcal {M}}(\varvec{\eta })\) from the vector \(\varvec{\eta }\). After that, the threshold function \(\mathrm V_{th}\) can be denoted as \(\mathrm V_{th} = \alpha \left[ {\mathcal {M}}(\varvec{\eta }) + \beta {\mathcal {M}}({\textbf{b}})\right]\),
where \(\alpha\) and \(\beta\) are parameters that can be optimized by SSA. In this paper, we set the fitness value \(F = \min \Vert \Phi _{{\mathcal {C}}}^{\top }{\textbf{y}} - {\textbf{y}}\Vert _2^2, \; s.t. \; {\mathcal {C}} = \left\{ q: \left| \Phi ^{\top }{\textbf{y}}\right| \ge \alpha \left[ {\mathcal {M}}(\Phi ^{\top }{\textbf{y}} )+ \beta {\mathcal {M}} (\varvec{\zeta })\right] \right\}\), where \(\varvec{\zeta }\) represents the vector obtained by removing \({\mathcal {M}} (\Phi ^{\top }{\textbf{y}})\) from the vector \(\Phi ^{\top }{\textbf{y}}\). According to the SSA algorithm, the values of the parameters \(\alpha\) and \(\beta\) should fall within the range [0, 1]. In conjunction with our threshold function, when \(\alpha\) is set to 0 the threshold becomes degenerate; therefore, in this paper, the value of \(\alpha\) is restricted to (0, 1]. Having described these basic ingredients, we now state the proposed ITCSR algorithm in detail. ITCSR is an iterative greedy algorithm. At each step, ITCSR selects several elements from the residual correlation vector \(\varvec{\eta }\), adds their indices to the identified support set \({\mathcal {K}}\), and performs least squares to estimate the signal. In the following, superscripts denote the iteration number, with t denoting the tth iteration of the ITCSR algorithm. ITCSR starts with initial graph signals \(\hat{{\textbf{f}}}^0 = 0\) and initial residual vector \({\textbf{r}}^0 = {\textbf{y}}\). The ITCSR algorithm also maintains an estimate \({\mathcal {K}}\) of the positions of the nonzero coefficients, with initial estimate \({\mathcal {K}}^0 = \emptyset\). The ITCSR algorithm, proposed as a solution to the considered problem, is composed of three fundamental steps:

Position Estimation: Estimate positions \({\mathcal {K}} = \{k_1, k_2, \cdots , k_K\}\) of nonzero coefficients.

Coefficient Reconstruction: Reconstruct nonzero coefficients \(\hat{f}_k\) at estimated positions k.

Signal Reconstruction: Reconstruct original graph signals \({\textbf{f}}\).
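One plausible reading of the threshold rule \(\mathrm V_{th}\) — the largest correlation magnitude plus \(\beta\) times the second largest, scaled by \(\alpha\) — can be sketched as follows (the exact functional form is an assumption of this illustration):

```python
import numpy as np

def itcsr_threshold(eta, alpha, beta):
    # Assumed form: V_th = alpha * (M(eta) + beta * M(b)), where M(.) takes
    # the largest magnitude and b is eta with that entry removed, so M(b) is
    # the second-largest magnitude of the residual correlations.
    mags = np.sort(np.abs(eta))[::-1]
    return alpha * (mags[0] + beta * mags[1])

# Entries whose correlation magnitude meets the threshold join the support.
eta = np.array([0.05, -0.9, 0.7, 0.1, -0.65])
v_th = itcsr_threshold(eta, alpha=0.7, beta=0.1)
selected = np.flatnonzero(np.abs(eta) >= v_th)
```

With these illustrative values, the threshold evaluates to \(0.7 \times (0.9 + 0.1 \times 0.7) = 0.679\), so the two strongest correlations (indices 1 and 2) are selected in a single pass.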
At the Position Estimation step, the tth iteration computes the inner product of the current residual and the measurement matrix \(\varvec{\Phi }\) to obtain the residual correlation vector \(\varvec{\eta }^t = \varvec{\Phi }^{\top }{\textbf{r}}^{t-1}\),
which contains a small number of significant nonzero entries. The ITCSR algorithm performs thresholding to find these significant nonzeros. Thresholding yields an index set
where \((\varvec{\eta }_i)^t\) represents the ith element in the vector \(\varvec{\eta }\) at tth iteration. Then, we merge the subset of newly selected coordinates with the previous support estimate, thereby updating the estimated available positions
At the Coefficient Reconstruction step, according to the estimated positions, we can calculate the nonzero coefficients by projecting the vector \({\textbf{y}}\) onto the columns of \(\varvec{\Phi }\). Let \(\varvec{\Phi }_{{\mathcal {K}}}\) denote the \(M \times |{\mathcal {K}}|\) matrix with columns chosen according to the index set \(\mathcal K\). The updated \((\hat{{\textbf{f}}}')^t\) is defined as
In combination with (20), the estimation of \(\hat{{\textbf{f}}}\) is defined as
By combining (20) and (22), the updated residual vector can be expressed as follows
At the Signal Reconstruction step, in accordance with the estimated positions and the corresponding nonzero coefficients, the reconstructed graph signals \(({\textbf{f}}')^t\) are calculated by (4)
Algorithm 2 provides a summary of the proposed ITCSR algorithm. It is worth noting that the thresholding strategy used in the ITCSR algorithm is a nonlinear rule that allows several elements to be selected at once in (19). This threshold policy allows the proposed ITCSR algorithm to converge in fewer iterations.
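Putting the three steps together, one plausible end-to-end sketch of an ITCSR-style iteration (greedy support growth via the threshold, least squares on the support, residual update) is given below. The threshold form, the forced inclusion of the strongest index, and the random orthonormal basis are all assumptions of this illustration, not the paper's exact Algorithm 2:

```python
import numpy as np

def itcsr(y, Phi, alpha=0.8, beta=0.5, max_iter=20, tol=1e-8):
    # Sketch of an ITCSR-style loop under the stated assumptions.
    N = Phi.shape[1]
    f_hat = np.zeros(N)
    r = y.astype(float).copy()
    support = np.zeros(N, dtype=bool)
    for _ in range(max_iter):
        eta = Phi.T @ r                        # residual correlations
        mags = np.sort(np.abs(eta))[::-1]
        v_th = alpha * (mags[0] + beta * mags[1])
        new = np.abs(eta) >= v_th              # thresholding, several at once
        new[np.argmax(np.abs(eta))] = True     # always keep the strongest
        support |= new                         # merge with previous support
        idx = np.flatnonzero(support)
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)  # LS estimate
        f_hat = np.zeros(N)
        f_hat[idx] = coef
        r = y - Phi @ f_hat                    # residual update
        if np.linalg.norm(r) < tol:
            break
    return f_hat

# Usage on a synthetic K-sparse instance.
rng = np.random.default_rng(3)
N, M, K = 64, 32, 4
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
Phi = U[rng.choice(N, size=M, replace=False), :]
truth = np.zeros(N)
truth[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
rec = itcsr(Phi @ truth, Phi)
```

After the least squares step, the residual is orthogonal to all selected columns, so each iteration necessarily picks at least one new index and the residual decreases monotonically, matching the convergence behavior claimed for the threshold policy.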
4 Convergence analysis
In this section, we first introduce the restricted isometry constant (RIC) [23,24,25] for ease of theorem statement. Then, we analytically prove the convergence of the proposed ITCSR algorithm through Theorem 1. Our convergence analysis follows the approach of [34].
The constant \(\delta _K\) provides a symmetric bound on how far the singular values of any submatrix of \(\varvec{\Phi }\) with \(K\) or fewer columns may deviate from 1. The RIC is defined as the minimum value \(\delta _K\) that satisfies \((1-\delta _K)\Vert {\textbf{a}}\Vert _2^2 \le \Vert \varvec{\Phi }{\textbf{a}}\Vert _2^2 \le (1+\delta _K)\Vert {\textbf{a}}\Vert _2^2\).
This condition holds for all vectors \({\textbf{a}}\) that contain at most \(K\) nonzero elements. If the matrix has a small constant \(\delta _K\), then every such submatrix is very nearly orthogonal.
Theorem 1
For any observation \({\textbf{y}} = {\varvec{\Phi }}\hat{{\textbf{f}}}\), the ITCSR algorithm terminates after selecting \(K\) nonzero elements if the threshold optimization parameters \(\alpha\), \(\beta\) satisfy the following condition,
The estimate signal \(\tilde{{\textbf{y}}}^{t} = \mathbf {\Phi }\hat{{\textbf{f}}}^t\) satisfies
where
where \(\alpha\) and \(\beta\) are as defined in Sect. 3. The superscript t represents the current iteration number. \((\tilde{{\textbf{y}}}^{*})^{t} = \varvec{\Phi }_{\varvec{\Gamma }^{*}} \hat{\textbf{f}}_{\varvec{\Gamma }^{*}}\) and \(\varvec{\Gamma }^{*}\) is the index set of the largest \(K\) elements in \(\hat{{\textbf{f}}}\). \(\blacksquare\)
The mathematical proof for Theorem 1 is available in the Appendix. Theorem 1 states that the proposed ITCSR algorithm guarantees a solution within a finite number of iterations. In general, achieving exact reconstruction relies on the RIP holding with a small RIC. Based on this relationship, a smaller value of \(\delta _K\) corresponds to a smaller constant c in equation (28). From equation (27), it is evident that a smaller value of c corresponds to a smaller reconstruction error. Similarly, equation (28) demonstrates how c changes with different values of the threshold optimization parameters \(\alpha\) and \(\beta\). In this paper, the value range of \(\alpha\) is (0, 1] and that of \(\beta\) is [0, 1] [26]. When \(\alpha\) and \(\beta\) approach their upper bounds, c tends to decrease. Additionally, when \(\delta _K\) falls within the range (0, 1), a smaller value of c leads to better reconstruction performance.
5 Experiments
This section conducts numerical experiments to gain insights into the proposed ITCSR algorithm and assess its reconstruction performance. Unless specified otherwise, we utilize GSPBOX in MATLAB for graph signal processing, graph construction and visualizations. First, we test the proposed ITCSR algorithm with synthetic data under different sampling methods and compare the results with the baseline algorithms. The compared algorithms are CoSaMP, ILSR and IMATGI. The computational complexity of the ILSR, CoSaMP, IMATGI and ITCSR algorithms is shown in Table 1, where K represents the level of sparsity and t represents the number of iterations. The computational complexity of the ITCSR algorithm is \(O(tK\log N)\). Second, we apply the proposed algorithm to the 3D point cloud datasets [39] Block, Cactus, Fandisk, Skull, Gargo and Dino. The experiments are conducted using MATLAB R2018a on a desktop computer equipped with an Intel Core i7-10700 CPU and 32 GB RAM. Three metrics, namely the maximum absolute error (MAX), mean square error (MSE) and signal-to-noise ratio (SNR), are employed to evaluate the reconstruction performance of all algorithms. MAX is defined as
MSE is defined as
and SNR is defined as
respectively, where \(N_f\) represents the graph signal length. In the following experiments, the parameters \(\alpha\) and \(\beta\) are selected by SSA. Smaller MAX and MSE values indicate better reconstruction performance. To guarantee exact reconstruction from compressed measurements, one should choose a specific matrix that satisfies the RIP. It is notable that the coherence of a matrix \(\varvec{\Phi }\) is a computable property that offers concrete guarantees for reconstruction. The largest inner product between any two distinct columns \(\phi _i\) and \(\phi _j\) defines the coherence of the matrix \(\varvec{\Phi }\), denoted as \(\mu (\varvec{\Phi }) = \max _{i \ne j} \frac{\left| \langle \phi _i, \phi _j \rangle \right| }{\Vert \phi _i\Vert _2 \Vert \phi _j\Vert _2}\).
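Assuming the standard forms of the three evaluation metrics (MAX, MSE, SNR in dB), they can be computed as:

```python
import numpy as np

def max_abs_error(f, f_rec):
    # MAX: largest absolute deviation between original and reconstruction.
    return np.max(np.abs(f - f_rec))

def mse(f, f_rec):
    # MSE: average squared deviation over the N_f signal entries.
    return np.mean((f - f_rec) ** 2)

def snr_db(f, f_rec):
    # SNR (dB): signal energy relative to reconstruction-error energy.
    return 10.0 * np.log10(np.sum(f ** 2) / np.sum((f - f_rec) ** 2))

# Tiny illustrative signals (hypothetical values, not from the experiments).
f = np.array([1.0, 2.0, 3.0, 4.0])
f_rec = np.array([1.0, 2.0, 3.0, 3.0])
```

For these illustrative vectors, MAX is 1.0, MSE is 0.25, and SNR is \(10\log _{10}(30) \approx 14.77\) dB.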
According to the Welch bound [28], the lower bound of \(\mu (\varvec{\Phi })\) is \(\sqrt{\frac{N-M}{M\left( N-1\right) }}\). It is possible to show that the coherence of a matrix always lies in the range \(\left[ \sqrt{\frac{N-M}{M(N-1)}}, 1\right]\). Similarly, in accordance with the Geršgorin circle theorem [29], if \(\varvec{\Phi }\) has coherence \(\mu (\varvec{\Phi })\), then \(\varvec{\Phi }\) satisfies the RIP of order K with \(\delta _K = (K - 1) \mu (\varvec{\Phi })\) for all \(K < \frac{1}{\mu (\varvec{\Phi })}\). Based on this relation, the coherence in this paper should lie in the range \(\left[ \sqrt{\frac{N-M}{M(N-1)}}, \frac{1}{2K-1}\right]\) to guarantee reconstruction uniqueness.
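A quick numerical check of these bounds can be sketched as follows; the column-normalized coherence and the random orthonormal stand-in basis are both assumptions of this illustration:

```python
import numpy as np

def coherence(Phi):
    # mu(Phi): largest normalized inner product between two distinct columns.
    G = Phi / np.linalg.norm(Phi, axis=0)      # unit-norm columns
    C = np.abs(G.T @ G)
    np.fill_diagonal(C, 0.0)                   # ignore self inner products
    return C.max()

rng = np.random.default_rng(4)
N, M = 64, 24
U, _ = np.linalg.qr(rng.standard_normal((N, N)))   # stand-in GFT basis
Phi = U[rng.choice(N, size=M, replace=False), :]   # M x N measurement matrix

mu = coherence(Phi)
welch = np.sqrt((N - M) / (M * (N - 1)))           # Welch lower bound
```

Any such matrix must have coherence at least the Welch bound and at most 1, which the assertions below confirm for this instance.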
5.1 Experiments on synthetic data
In this subsection, the performance of the proposed ITCSR algorithm is demonstrated through two numerical experiments, using two synthetic datasets: the sensor network dataset and the Swiss roll dataset.
5.1.1 Synthetic data: sensor network
In this experiment, we use the sensor network dataset to illustrate the reconstruction performance of the proposed ITCSR algorithm. To begin, we establish the default experimental setup. The sensor network dataset is generated with \(N=600\) by
where \({\lceil {\sqrt{N}} \rceil }\) denotes the smallest integer that is not less than \(\sqrt{N}\) and \(x \sim U(0,1)\) represents a random variable x that follows the uniform U(0, 1) distribution. Then, we construct an undirected graph by the 6-nearest-neighbor algorithm. The weight matrix \({\textbf{W}}\) is obtained by assigning a value of 1 if there is an edge between two sensors and 0 otherwise. With \(\mathbf {L = U\Lambda U}^{\top }\) denoting the eigendecomposition of the graph Laplacian, the \(K\)-sparse graph signals \({\textbf{f}}\) are generated as \({\textbf{f}} = {\textbf{U}}\hat{{\textbf{f}}}\), where the entries of \(\hat{{\textbf{f}}}\) are independent and identically distributed (i.i.d.) Gaussian, and the \(K = 4\) entries with the most significant absolute values in the random signal are kept while the rest are set to zero. As for the selection of the sample set in this experiment, we mainly choose the following methods: max-sigmin (M1) [30], max-volume (M2) [31], max-igmin (M3) [32] and random sampling (M4). For simplicity, the M4 strategy in this paper consists of two steps. First, initialize an array of size \(N\), where \(N\) is the number of elements in the permutation, fill the array with the values 1 to \(N\), and randomly shuffle the numbers. Then, the first \(M\) numbers are selected as the sampling vertices.
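The graph construction and the M4 random-sampling strategy can be sketched as follows; the uniform 2D coordinates are a stand-in for the paper's exact sensor-network generator:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 600, 300

# Stand-in sensor coordinates: N points uniform on the unit square.
coords = rng.uniform(0.0, 1.0, (N, 2))

# 6-nearest-neighbor unweighted graph: w_ij = 1 if j is among the 6 nearest
# neighbors of i (symmetrized), 0 otherwise.
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)           # exclude self-distances
W = np.zeros((N, N))
for i in range(N):
    for j in np.argsort(d2[i])[:6]:
        W[i, j] = W[j, i] = 1.0

# Random sampling (M4): shuffle the vertex indices and keep the first M.
perm = rng.permutation(N)
sample_set = perm[:M]
```

Symmetrizing the k-nearest-neighbor relation yields an undirected graph in which every vertex has degree at least 6, matching the construction described above.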
We conducted signal reconstructions using various parameter values to investigate the influence of parameter settings on the performance of the reconstruction algorithm. Table 2 shows the MSE results under different sampling methods. Different parameters may lead to different performances for different sampling methods. As shown in Table 2, the values of the parameters \(\alpha\) and \(\beta\) are set randomly to (1, 0) and (0.5, 0.5), and to (0.4043, 0.6886) by SSA. As can be seen, the reconstruction performance improves when the optimal parameters \(\alpha\) and \(\beta\) obtained by SSA are used. In particular, the MSE of the signal reconstructed by the proposed ITCSR algorithm reaches \(3.2383\times 10^{-4}\) when the sampling rate is 0.5 after adjusting the threshold parameters by SSA. Then, to verify the convergence of the proposed ITCSR algorithm, we calculate the coherence of the matrix \(\varvec{\Phi }\) under different sampling rates and sampling methods in Table 3. First, we used the above four sampling methods to generate 100 groups of data at each sampling rate, then substituted each group into (32) to calculate the coherence, and finally took the average value. It can be seen that the coherence of the matrix \(\varvec{\Phi }\) guarantees uniqueness.
In Fig. 1, to reveal the impact of the value K on the ITCSR algorithm, we reconstruct the sensor network graph signals using different numbers of measurements M and values of K. As described in Fig. 1, for the same value of K, the percentage of successful reconstructions improves as the number of measurements M increases. To compare the reconstruction performance of the proposed algorithm with the baseline algorithms, we vary the sparsity factor K/N from \(10\%\) to \(60\%\). For each sparsity factor and algorithm, we reconstruct using 20 iterations and report the average achieved SNR in Fig. 2. As observed in Fig. 2, all curves experience a sudden knee-like fall in reconstruction SNR as the sparsity factor increases. As expected, the proposed ITCSR algorithm can successfully reconstruct less sparse signals. Figure 3 shows the MSE versus the number of iterations at a \(60\%\) sampling rate. It is observed that the proposed ITCSR algorithm exhibits both fast convergence and the minimum reconstruction error. Overall, the proposed ITCSR algorithm achieves good reconstruction performance while maintaining fast convergence. To evaluate the reconstruction performance of the ITCSR algorithm against conventional sensor signal reconstruction algorithms, we conducted a specific experiment in which we additionally introduced a relative threshold-based sparsity estimation method (RTSE) [37] and a robust sparsity estimation method (RSE) [38] for comparative analysis. Figure 4 shows the percentage of successful reconstructions versus the number of measurements. In the simulation, we set \(K = 12\). Figure 4 illustrates that the percentage of successful reconstructions increases with M and that the percentage achieved by ITCSR is larger than those of RTSE and RSE. In other words, the proposed ITCSR algorithm has a higher success rate than the conventional algorithms in our simulation.
5.1.2 Synthetic data: Swiss roll
In this experiment, we use the Swiss roll dataset in 3D space to illustrate the reconstruction performance of the proposed ITCSR algorithm. The Swiss roll dataset is generated with \(N = 2000\) points.
Then, we construct an undirected graph with the 5-nearest-neighbor algorithm. It is worth mentioning that the M4 sampling method is selected.
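The data generation and graph construction can be sketched as below. The paper's exact generating equation is not reproduced in this section, so the spiral parametrization is the common Swiss roll form and should be read as an assumption; N is reduced from 2000 to 500 to keep the sketch light:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500  # the paper uses N = 2000

# Hypothetical Swiss roll parametrization: a spiral in the x-z
# plane with uniform height in y.
t = 1.5 * np.pi * (1 + 2 * rng.random(N))      # spiral parameter
height = 21.0 * rng.random(N)                  # roll width
X = np.column_stack([t * np.cos(t), height, t * np.sin(t)])

# 5-nearest-neighbor undirected graph from pairwise distances
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(D, np.inf)                    # exclude self-loops
knn = np.argsort(D, axis=1)[:, :5]             # 5 closest per vertex

A = np.zeros((N, N), dtype=bool)
rows = np.repeat(np.arange(N), 5)
A[rows, knn.ravel()] = True
A |= A.T                                       # symmetrize -> undirected graph
```

Symmetrizing the k-NN adjacency is what makes the graph undirected, as stated above; a vertex may therefore end up with more than 5 neighbors.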
In Fig. 5, we show the 3D visualization of the reconstructed Swiss roll data. Figure 5a, b illustrates the original and the sampled Swiss roll. Figure 5c–f illustrates the reconstruction results at a \(50\%\) sampling rate using the ILSR, CoSaMP, IMATGI and proposed ITCSR algorithms, respectively. Overall, we observe that ITCSR outperforms the other reconstruction algorithms. Even compared to the ILSR and IMATGI algorithms, ITCSR still exhibits competitive reconstruction performance. To be specific, comparing Fig. 5a and c, the reconstruction by the ILSR algorithm shows an obvious deviation. Comparing Fig. 5a with d and e, many obvious outliers exist in the Swiss roll data reconstructed by the CoSaMP and IMATGI algorithms. The convergence comparison of the considered methods at a \(60\%\) sampling rate is shown in Fig. 6. As expected, the proposed ITCSR algorithm exhibits the minimum reconstruction error and the fastest convergence.
5.2 Experiments on 3D point cloud data
In this subsection, we apply our proposed ITCSR algorithm to six 3D point cloud datasets: Block, Cactus, Fandisk, Skull, Gargo and Dino. To simplify the process, the graph is constructed using the 5-nearest-neighbor algorithm. The weight matrix is formed by assigning the edge weight \(\textbf{W}(i,j) = \frac{1}{d_{i,j}^2}\), where \(d_{i,j}\) represents the distance between vertices i and j. To obtain the normalized matrix \({\textbf{W}}_{\textrm{norm}}\), we divide \({\textbf{W}}\) by its maximum value, denoted as \(\max ({\textbf{W}})\). The graph signals are then defined as “Location X,” “Location Y,” and “Location Z.”
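A minimal sketch of this weighted-graph construction, assuming a dense NumPy representation and distinct vertex positions (so that \(d_{i,j} > 0\)); the function name is illustrative:

```python
import numpy as np

def knn_weighted_graph(X, k=5):
    """k-NN graph with edge weight 1/d^2, normalized by its
    maximum entry as described above."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                # no self-loops
    idx = np.argsort(D, axis=1)[:, :k]         # k nearest per vertex

    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    d = D[rows, idx.ravel()]
    W[rows, idx.ravel()] = 1.0 / d**2          # inverse-squared-distance weight
    W = np.maximum(W, W.T)                     # symmetrize (undirected graph)
    return W / W.max()                         # W_norm = W / max(W)
```

The inverse-squared-distance weight gives nearby vertices stronger edges, so the resulting graph signal varies smoothly over the point cloud surface.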
Our objective is to estimate the missing Location values within an incomplete matrix of relative Location data. We select \(50\%\), \(60\%\) and \(70\%\) of the measurements by the M4 method and reconstruct all graph signals in the spectral domain. We calculate the MAXs on the six 3D point cloud datasets, and the results are shown in Table 4. In general, we observe that, for all reconstruction algorithms, the reconstruction performance improves as the sampling ratio increases. For the different sampling ratios under test, the MAX performance of ITCSR is very competitive, and the proposed ITCSR algorithm is far superior to the ILSR and IMATGI algorithms. Specifically, on the Gargo 3D point cloud dataset, the MAXs of the ILSR algorithm, the IMATGI algorithm and the proposed algorithm for Location Z at a 0.5 sampling ratio are 39.3352, 36.4275 and 14.5528, respectively. Then, to verify the convergence of the proposed ITCSR algorithm, we calculate the coherence of the matrix \(\varvec{\Phi }\) under different sampling rates with different sampling methods in Table 5.
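We read the MAX metric as the maximum absolute per-vertex reconstruction error; this definition is an assumption, since the metric is not restated in this section:

```python
import numpy as np

def max_error(f_true, f_hat):
    """MAX metric: assumed here to be the largest absolute
    per-vertex reconstruction error."""
    f_true, f_hat = np.asarray(f_true), np.asarray(f_hat)
    return float(np.max(np.abs(f_true - f_hat)))
```

Unlike the MSE, this worst-case metric is sensitive to single outlier vertices, which is why it separates ITCSR from ILSR and IMATGI so sharply in Table 4.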
To illustrate the performance of the proposed ITCSR algorithm, the Fandisk dataset with 6470 vertices and the Skull 3D point cloud dataset with 20002 vertices are selected with a sampling ratio of \(50 \%\), as shown in Figs. 7b and 8b. Compared with Figs. 7a and 8a, it is observed that the original signals are significantly missing. Overall, we observe that ITCSR improves the reconstruction performance in comparison with the other algorithms. This is because ITCSR relies on the more general assumption of sparsity rather than band-limitedness of the underlying graph signals; moreover, ITCSR combines the theory of compressed sensing with concepts from graph signal processing and optimizes the selection of the threshold. To be specific, as indicated in Figs. 7c and 8c, the 3D point cloud signal reconstructed by ILSR is distorted. Figures 7d and 8d show the 3D point cloud signal reconstructed by the CoSaMP algorithm; as we can see, the accuracy of the signals reconstructed by CoSaMP becomes insufficient as the amount of data increases. Figures 7e and 8e show that the signal reconstructed by IMATGI retains its basic shape, but contains many outliers. Moreover, it can be seen from Figs. 7f and 8f that, as the number of vertices increases, the proposed ITCSR algorithm is clearly superior to the ILSR and IMATGI algorithms.
6 Conclusion
In this paper, we proposed the ITCSR algorithm for sparsity-promoting reconstruction of signals defined on graphs. The ITCSR algorithm selects a suitable threshold to ensure good reconstruction performance, and we provided a formal convergence analysis. Compared with OMP, CoSaMP, ILSR and IMATGI, the proposed ITCSR algorithm decreased the MSE on synthetic datasets under different sampling methods. Experimental results on six public 3D point cloud datasets demonstrated that our algorithm performs better, with the MAXs on Block, Cactus, Fandisk, Skull, Gargo and Dino lower than those of the reference algorithms at sampling ratios of 0.5, 0.6 and 0.7, respectively.
Availability of data and material
Not applicable.
References
A. Sandryhaila, J.M.F. Moura, Discrete signal processing on graphs: graph Fourier transform, in: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6167–6170 (2013). https://doi.org/10.1109/ICASSP.2013.6638850
D.I. Shuman, S.K. Narang, P. Frossard, A. Ortega, P. Vandergheynst, The emerging field of signal processing on graphs: extending highdimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 30(3), 83–98 (2013). https://doi.org/10.1109/MSP.2012.2235192
A. Ortega, P. Frossard, J. Kovačević, J.M.F. Moura, P. Vandergheynst, Graph signal processing: overview, challenges, and applications. Proc. IEEE 106(5), 808–828 (2018)
P. Xu, J. Zhang, T. Gao, S. Chen, X. Wang, H. Jiang, W. Gao, Real-time fast charging station recommendation for electric vehicles in coupled power-transportation networks: a graph reinforcement learning method. Int. J. Electr. Power Energy Syst. 141, 108030 (2022). https://doi.org/10.1016/j.ijepes.2022.108030
J. Feng, F. Chen, H. Chen, Data reconstruction coverage based on graph signal processing for wireless sensor networks. IEEE Wirel. Commun. Lett. 11(1), 48–52 (2022). https://doi.org/10.1109/LWC.2021.3120276
C. Dinesh, G. Cheung, I.V. Bajić, 3D point cloud super-resolution via graph total variation on surface normals, in: 2019 IEEE International Conference on Image Processing (ICIP), pp. 4390–4394 (2019). https://doi.org/10.1109/ICIP.2019.8803560
X. Shang, R. Ye, H. Feng, X.-Q. Jiang, Robust feature graph for point cloud denoising, in: 2022 7th International Conference on Communication, Image and Signal Processing (CCISP), pp. 330–336 (2022). https://doi.org/10.1109/CCISP55629.2022.9974370
Q. Huang, R. Li, Z. Jiang, W. Feng, S. Wei, S. Lin, H. Feng, B. Hu, Fast color-guided depth denoising for RGB-D images by graph filtering, in: 2019 53rd Asilomar Conference on Signals, Systems, and Computers (ACSSC), pp. 1811–1815 (2019). https://doi.org/10.1109/IEEECONF44664.2019.9048703
P.F. Cid, J.M.B. Ordinas, J.G. Vidal, Graph signal reconstruction techniques for IoT air pollution monitoring platforms. IEEE Internet Things J. 9(24), 25350–25362 (2022). https://doi.org/10.1109/JIOT.2022.3196154
A. Hashemi, R. Shafipour, H. Vikalo, G. Mateos, Towards accelerated greedy sampling and reconstruction of bandlimited graph signals. Signal Process. 195, 108505 (2022). https://doi.org/10.1016/j.sigpro.2022.108505
S. Chen, A. Sandryhaila, J.M.F. Moura, J. Kovačević, Signal recovery on graphs: variation minimization. IEEE Trans. Signal Process. 63(17), 4609–4624 (2015). https://doi.org/10.1109/TSP.2015.2441042
N. Perraudin, P. Vandergheynst, Stationary signal processing on graphs. IEEE Trans. Signal Process. 65(13), 3462–3477 (2017). https://doi.org/10.1109/TSP.2017.2690388
S.K. Narang, A. Gadde, E. Sanou, A. Ortega, Localized iterative methods for interpolation in graph structured data, in: 2013 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 491–494 (2013). https://doi.org/10.1109/GlobalSIP.2013.6736922
X. Wang, P. Liu, Y. Gu, Iterative reconstruction of graph signal in low-frequency subspace, in: 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 448–452 (2014). https://doi.org/10.1109/GlobalSIP.2014.7032157
Y. Bai, F. Wang, G. Cheung, Y. Nakatsukasa, W. Gao, Fast graph sampling set selection using Gershgorin disc alignment. IEEE Trans. Signal Process. 68, 2419–2434 (2020). https://doi.org/10.1109/TSP.2020.2981202
D.L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)
J.A. Tropp, A.C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007). https://doi.org/10.1109/TIT.2007.909108
T. Blumensath, M. Davies, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009). https://doi.org/10.1016/j.acha.2009.04.002
M.B. Mashhadi, M. Fallah, F. Marvasti, Interpolation of sparse graph signals by sequential adaptive thresholds, in: 2017 International Conference on Sampling Theory and Applications (SampTA), pp. 266–270 (2017). https://doi.org/10.1109/SAMPTA.2017.8024339
M. Brajović, I. Stanković, M. Daković, L. Stanković, Reconstruction of sparse graph signals from reduced sets of samples, in: 2023 27th International Conference on Information Technology (ICIT), pp. 1–5 (2023). https://doi.org/10.1109/IT57431.2023.10078603
C.-C. Tseng, S.-L. Lee, A missing data recovery method of sparse graph signal in GFT domain, in: 2018 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), pp. 1–2 (2018). https://doi.org/10.1109/ICCE-China.2018.8448787
D. Needell, J.A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009). https://doi.org/10.1016/j.acha.2008.07.002
E. Candes, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005). https://doi.org/10.48550/arXiv.math/0502327
W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory (2009). https://doi.org/10.1109/TIT.2009.2016006
J. Wang, S. Kwon, B. Shim, Generalized orthogonal matching pursuit. IEEE Trans. Signal Process. 60(12), 6202–6216 (2012). https://doi.org/10.1109/TSP.2012.2218810
J. Xue, B. Shen, A novel swarm intelligence optimization approach: sparrow search algorithm. Syst. Sci. Control. Eng. 8(1), 22–34 (2020). https://doi.org/10.1080/21642583.2019.1708830
E. Candes, T. Tao, Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006). https://doi.org/10.1109/TIT.2006.885507
S. Datta, S. Howard, D. Cochran, Geometry of the Welch bounds. Linear Algebra Appl. 437(10), 2455–2470 (2012). https://doi.org/10.1016/j.laa.2012.05.036
N. Beresford, A result complementary to Geršgorin's circle theorem. Linear Algebra Appl. 431(1), 20–27 (2009). https://doi.org/10.1016/j.laa.2009.01.030
B. Girault, A. Ortega, S.S. Narayanan, Graph vertex sampling with arbitrary graph signal Hilbert spaces, in: 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5670–5674 (2020). https://doi.org/10.1109/ICASSP40776.2020.9054723
D.E. Olivier Tzamarias, P. Akyazi, P. Frossard, A novel method for sampling bandlimited graph signals, in: 2018 26th European Signal Processing Conference (EUSIPCO), pp. 126–130 (2018). https://doi.org/10.23919/EUSIPCO.2018.8553064
H. Shomorony, A.S. Avestimehr, Sampling large data on graphs, in: 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 933–936 (2014). https://doi.org/10.1109/GlobalSIP.2014.7032257
D. Donoho, M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell _1\) minimization. Proc. Natl. Acad. Sci. USA 100(5), 2197–2202 (2003). https://api.semanticscholar.org/CorpusID:17259152
J. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004). https://doi.org/10.1109/TIT.2004.834793
B. Girault, A. Ortega, S.S. Narayanan, Graph vertex sampling with arbitrary graph signal Hilbert spaces, in: 2020 International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5670–5674 (2020). https://doi.org/10.48550/arXiv.2002.11238
H. Shomorony, A.S. Avestimehr, Sampling large data on graphs, in: 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 933–936 (2014). https://doi.org/10.48550/arXiv.1411.3017
X. Xu, J. Chen, N. Wan, D. Chen, J. Wan, Sparsity estimation method in compressed data gathering of wireless sensor networks, in: 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), pp. 833–836 (2019). https://doi.org/10.1109/ITAIC.2019.8785897
S. Qin, J. Yin, A robust sparsity estimation method in compressed sensing, in: China Conference on Wireless Sensor Networks (CWSN), pp. 481–488 (2014). https://doi.org/10.1007/978-3-662-46981-1_46
3D Point Datasets, http://graphics.stanford.edu/data/3Dscanrep/ (2014)
Acknowledgements
Not applicable
Funding
This study was partly supported by the Innovation Program for Quantum Science and Technology (Grant No.2021ZD0300703) and the National Natural Science Foundation project (Grant No. 61971146).
Author information
Authors and Affiliations
Contributions
Yuting Cao was involved in methodology, data curation, software, writing—original draft. XueQin Jiang helped in conceptualization, methodology, supervision, writing—review & editing. Jian Wang contributed to writing—review & editing. Shubo Zhou helped in supervision. Xinxin Hou contributed to writing— review & editing.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors have no competing interests to declare.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A The proof of Theorem 1
Now, we prove Theorem 1. The method we adopt is an extension and enhancement of prior work [34]. Before presenting the proof, we introduce two lemmas.
Lemma 1
For a matrix \(\varvec{\Phi }\) whose columns are m distinct vectors, indexed \(\lambda _1,\ldots ,\lambda _m\), the squared singular values of \(\varvec{\Phi }\) exceed \(1-\delta _K\).
Lemma 2
For every natural number K, the bound \(\delta _K \le K\delta _{K+1}\) holds when the matrix \(\varvec{\Phi }\) satisfies the RIP with constant \(\delta _K\).
We omit the easy proof and move on to the demonstration of the theorem.
Proof
Let \(\varvec{\Gamma }^{*}\) and \(\hat{{\textbf{f}}}_{\varvec{\Gamma }^{*}}^{*} = \varvec{\Phi }_{\varvec{\Gamma }^{*}}^{\dagger }{\textbf{y}}\) be as in the assumptions of Theorem 1. At iteration t, suppose that the algorithm has selected the index set \(\varvec{\Gamma }_{t} \subset \varvec{\Gamma }^{*}\). Let the index sets be \(H = \varvec{\Gamma }^{*}\) and \(J = \left\{ i : i\notin H \right\}\). Then we can write
We may calculate that
where the last inequality comes from standard properties of the RIP constant [see Lemma 1 [33]]. Continuing the calculation
where the last step refers to Lemma 2 [34]. In accordance with (A2) and (A3), the proposed ITCSR algorithm therefore selects elements from the set \(H\) satisfying
which (as both terms on the left must be positive) is possible only if \(\delta _{ K +1}<\frac{\alpha \left( 1+\beta \right) }{\sqrt{ K }+\alpha \left( 1 + \beta \right) }\). It then yields the following:
\(\square\)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Cao, Y., Jiang, X.-Q., Wang, J. et al. An efficient algorithm with fast convergence rate for sparse graph signal reconstruction. EURASIP J. Adv. Signal Process. 2024, 38 (2024). https://doi.org/10.1186/s13634-024-01133-3