Robust reconstruction algorithm for compressed sensing in Gaussian noise environment using orthogonal matching pursuit with partially known support and random subsampling

Abstract

The compressed signal in compressed sensing (CS) may be corrupted by noise during transmission. Because the effect of Gaussian noise can be reduced by averaging, a robust reconstruction method using an ensemble of compressed signals derived from one compressed signal is proposed. The compressed signal is subsampled L times to create the ensemble of L compressed signals. Orthogonal matching pursuit with partially known support (OMP-PKS) is applied to each signal in the ensemble to reconstruct L noisy outputs, which are then averaged for denoising. The proposed method is designed for the CS reconstruction of image signals. Its performance was compared with basis pursuit denoising, Lorentzian-based iterative hard thresholding, OMP-PKS, and distributed compressed sensing using simultaneous orthogonal matching pursuit. Experimental results on 42 standard test images showed that the proposed method yielded higher peak signal-to-noise ratio at low measurement rates and better visual quality in all cases.

1. Introduction

Compressed sensing (CS) is a sampling paradigm that compresses signals at a rate significantly below the Nyquist rate [1–3]. It is based on the fact that a sparse or compressible signal can be represented by far fewer bases than the Nyquist theorem requires, when it is mapped to a space whose bases are incoherent with those of the sparse space. The incoherent bases are called the measurement vectors. CS has a wide range of applications, including radar imaging [4], DNA microarrays [5], image reconstruction and compression [6–14], etc.

There are three steps in CS: (1) the construction of a sparse signal, (2) the compression of the sparse signal, and (3) the reconstruction of the compressed signal. The focus of this article is the CS reconstruction of image data. The reconstruction problem aims to find the sparsest signal that produces the compressed signal (known as the compressed measurement signal). It can be written as the following optimization problem:

$$\arg\min_{s} \| s \|_0 \quad \text{s.t.} \quad y = \Phi s,$$
(1)

where s and y are the sparse and the compressed measurement signals, respectively; Φ is the random measurement matrix having sampled measurement vectors (known as random measurement vectors) as its column vectors; and $\|s\|_0$ is the ℓ0 norm of s. One way to construct Φ is as follows (a small sketch in code is given after the list):

  1. Define the square matrix, Ω, as the matrix having the measurement vectors as its column vectors.

  2. Randomly remove rows from Ω until the row dimension of Ω equals that of Φ.

  3. Set Φ to Ω after row removal.

  4. Normalize every column of Φ.
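For concreteness, a minimal numpy sketch of steps 1–4 is given below. The Hadamard choice of Ω mirrors the experimental setup in Section 4.1; the function name, the seed handling, and the power-of-two restriction on N (required by the Hadamard construction) are illustrative assumptions rather than part of the original method.

```python
import numpy as np
from scipy.linalg import hadamard

def build_measurement_matrix(N, M, seed=0):
    """Steps 1-4: take a square matrix of measurement vectors,
    randomly remove rows until M remain, then normalize the columns."""
    rng = np.random.default_rng(seed)
    Omega = hadamard(N).astype(float)            # step 1: square Omega; N must be a power of 2
    keep = np.sort(rng.choice(N, size=M, replace=False))
    Phi = Omega[keep, :]                         # steps 2-3: keep M randomly chosen rows
    Phi = Phi / np.linalg.norm(Phi, axis=0)      # step 4: normalize every column
    return Phi
```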

The ℓ0-norm optimization is a non-convex, quadratically constrained problem; it is NP-hard and cannot be solved in practice. There are two major approaches to solving the reconstruction problem: (1) the basis pursuit (BP) approach and (2) the greedy approach. In the BP approach, the ℓ0 norm is relaxed to the ℓ1 norm [15–17], and the y = Φs condition becomes the minimization of the ℓ2 norm of y - Φs. When Φ satisfies the restricted isometry property (RIP) condition [18], the BP approach is an effective reconstruction approach and does not require the exactness of the sparse signal. However, it requires high computation. In the greedy approach [19, 20], a heuristic rule is used in place of ℓ1 optimization. One popular heuristic rule is that the non-zero components of s correspond to the coefficients of the random measurement vectors having high correlation to y. Examples of greedy algorithms are orthogonal matching pursuit (OMP) [19], regularized OMP (ROMP) [20], etc. The greedy approach has the benefit of fast reconstruction.

The reconstruction of noisy compressed measurement signals requires the relaxation of the y = Φs constraint. Most algorithms provide an acceptable bound for the error between y and Φs [17–26]. The error bound is created based on the noise characteristic, such as bounded noise, Gaussian noise, finite-variance noise, etc. The authors in [17] show that it is possible to use BP and OMP to reconstruct noisy signals if the conditions on the sufficient sparsity and the structure of the overcomplete system are met. The sufficient conditions on the error bound in basis pursuit denoising (BPDN) for successful reconstruction in the presence of Gaussian noise are discussed in [21]. In [22], the Dantzig selector is used as the reconstruction technique; the ℓ∞ norm is used in place of the ℓ2 norm. The authors of [23] propose using a weighted myriad estimator in the compression step and a Lorentzian norm constraint in place of ℓ2 norm minimization in the reconstruction step. It is shown that the algorithm in [23] is applicable for reconstruction in environments corrupted by either Gaussian or impulsive noise.

OMP is robust to small Gaussian noise in y due to its ℓ2 optimization during parameter estimation. ROMP [20, 26] and compressive sampling matching pursuit (CoSaMP) [24, 26] have the same stability guarantee as ℓ1-minimization methods and the speed of greedy algorithms. In [25], the authors used the mutual coherence of the matrix to analyze the performance of BPDN, OMP, and iterative hard thresholding (ITH) when y was corrupted by Gaussian noise. The equivalent of the cost function in BPDN was solved through ITH in [27]. ITH gives faster computation than BPDN but requires a very sparse signal. In [28], the reconstruction by the Lorentzian norm [23] is achieved by ITH and the algorithm is called Lorentzian-based ITH (LITH). LITH is robust not only to Gaussian noise but also to impulsive noise. Since LITH is based on ITH, it requires the signal to be very sparse.

Recently, much CS research has focused on the structure of sparse signals and the creation of model-based reconstruction algorithms [29–35]. These algorithms utilize the structure of the transformed sparse signal (e.g., the wavelet-tree structure) as prior information. Model-based methods are attractive because of three benefits: (1) the reduction of the number of measurements, (2) the increase in robustness, and (3) faster reconstruction.

Distributed compressed sensing (DCS) [33, 35, 36] is developed for reconstructing signals from two or more statistically dependent data sources. Multiple sensors measure signals that are sparse in some bases, and the signals are correlated across sensors. DCS exploits both intra- and inter-signal correlation structures and rests on joint sparsity (the concept of sparsity across the signal ensemble). The creators of DCS claim that the result from separate sensors is the same when joint sparsity is used in the reconstruction. Simultaneous OMP (SOMP) is applied to reconstruct the distributed compressed signals. DCS-SOMP provides fast computation and robustness. However, in the case of noisy y, the noise may lead to incorrect basis selection. In DCS-SOMP reconstruction, an incorrectly selected basis appears in every reconstruction, leading to error that cannot be reduced by averaging.

In this article, a reconstruction method for Gaussian noise corrupted y is proposed. It utilizes the fact that an image signal can be reconstructed from parts of y instead of the entire y. It creates the members of an ensemble of sampled y by randomly subsampling y, and reconstruction is applied to each member of the ensemble. We hypothesize that all randomly subsampled y are corrupted with noise of the same mean and variance; therefore, we can remove the effect of Gaussian noise by averaging the reconstruction results of the signals in the ensemble. The reconstruction is achieved by OMP with partially known support (OMP-PKS) [34]. Our proposed method differs from DCS in that it requires only one y as the input. It is simple and requires no complex parameter adjustment.

2. Background

2.1 Compressed sensing

CS is based on the assumption of the sparsity of the signal and the incoherency between the bases of the sparse domain and the bases of the measurement vectors [1–3]. CS has three major steps: the construction of a k-sparse representation, the compression, and the reconstruction. The first step is the construction of the k-sparse representation, where k is the number of non-zero entries of the sparse signal. Most natural signals can be made sparse by applying orthogonal transforms such as the wavelet transform, the fast Fourier transform, or the discrete cosine transform. This step is represented as

$$s = \Psi^{T} x,$$
(2)

where x is an N-dimensional non-sparse signal; s is a weighted N-dimensional vector (sparse signal with k nonzero elements), and Ψ is an N × N orthogonal basis matrix.

The second step is compression. In this step, the random measurement matrix is applied to the sparse signal according to the following equation.

$$y = \Phi s = \Phi \Psi^{T} x,$$
(3)

where Φ is an M × N random measurement matrix (M < N). If Ψ is an identity matrix, s is equivalent to x. Without loss of generality, Ψ is defined as an identity matrix in this article. M is the number of measurements (the row dimension of y) sufficient for a high probability of successful reconstruction and is defined by

$$M \geq C \mu^{2}(\Phi, \Psi)\, k \log N,$$
(4)

for some positive constant C, where μ(Φ, Ψ) is the coherence between Φ and Ψ, defined by

$$\mu(\Phi, \Psi) = \sqrt{N} \max_{i,j} \left| \left\langle \phi_i, \psi_j \right\rangle \right|.$$
(5)

If the elements in Φ and Ψ are correlated, the coherence is large; otherwise, it is small. From linear algebra, it is known that $\mu(\Phi, \Psi) \in [1, \sqrt{N}]$ [2]. In the measurement process, errors (due to hardware noise, transmission error, etc.) may occur. The error is added to the compressed measurement vector as follows:

$$y = \Phi s + e,$$
(6)

where e is an M-dimensional noise vector.
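The compression model of (2), (3), and (6) can be illustrated with the following sketch, which reuses build_measurement_matrix from the earlier listing. Ψ is taken as the identity, as in this article; the sizes N, M, k, and the noise variance are assumptions chosen to mirror Section 4 (M/N ≈ 0.4, sparsity rate k/N ≈ 0.1).

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, k, sigma2 = 256, 102, 26, 0.05

s = np.zeros(N)                                  # k-sparse signal s (Psi = identity, so s = x)
s[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

Phi = build_measurement_matrix(N, M)             # from the sketch above
e = rng.normal(0.0, np.sqrt(sigma2), size=M)     # Gaussian noise vector with variance sigma^2
y = Phi @ s + e                                  # noisy compressed measurement, Eq. (6)
```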

2.2 Reconstruction method

Successful reconstruction depends on the degree to which Φ complies with the RIP, which is defined as follows:

$$(1 - \delta_k) \| s \|_2^2 \leq \| \Phi s \|_2^2 \leq (1 + \delta_k) \| s \|_2^2,$$
(7)

where δ_k is the k-restricted isometry constant of the matrix Φ. The RIP is used to ensure that all subsets of k columns taken from Φ are nearly orthogonal. It should be noted that Φ has more columns than rows; thus, Φ cannot be exactly orthogonal [2].

The reconstruction is the optimization problem (1). In (2), when Ψ is an identity matrix, s is x, and (1) can be rewritten as (8), which is the reconstruction problem used in this article:

$$\arg\min_{x} \| x \|_0 \quad \text{s.t.} \quad y = \Phi x.$$
(8)

The reconstruction algorithms used in the experiment are BPDN, OMP-PKS, LITH, and DCS-SOMP. They are described in the following sections.

2.2.1 BPDN

BP [15, 16] is one of the popular ℓ1-minimization methods. The ℓ0 norm in (8) is relaxed to the ℓ1 norm. It reconstructs the signal by solving the following problem:

$$\arg\min_{x} \| x \|_1 \quad \text{s.t.} \quad y = \Phi x.$$
(9)

BPDN [21] is the relaxed version of BP and is used to reconstruct the noisy y. It reconstructs the signal by solving the following optimization problem.

$$\arg\min_{x} \| x \|_1 \quad \text{s.t.} \quad \| y - \Phi x \|_2 \leq \varepsilon,$$
(10)

where ε is the error bound.

BPDN is often solved by linear programming. It guarantees a good reconstruction if Φ satisfies the RIP condition. However, it has the same high computational cost as BP.

2.2.2. OMP-PKS

OMP-PKS [34] is adapted from the classical OMP [19]. It makes use of the sparse signal structure in which some components are more important than others and should be set as non-zero components. It shares the characteristic of OMP that its RIP requirement is not as severe as BP's [26]. It has a fast runtime but may fail to reconstruct the signal (it lacks stability). It has a benefit over the classical OMP in that it can successfully reconstruct y even when y is very small (a very low measurement rate, M/N). It differs from tree-based OMP (TOMP) [30] in that the subsequent basis selection of OMP-PKS does not consider the previously selected bases, while TOMP sequentially compares and selects the next good wavelet subtree and the group of related atoms in the wavelet tree.

In this article, the sparse signal is in the wavelet domain, where the signal in the LL subband must be included for successful reconstruction. All components in the LL subband are selected as non-zero components without testing for correlation. The algorithm for OMP-PKS when the data are represented in the wavelet domain is as follows.

Input:

  • An M × N measurement matrix, Φ = [φ_1, φ_2, φ_3, ..., φ_N]

  • The M-dimensional compressed measurement signal, y

  • The set containing the indexes of the bases in the LL subband, Γ = {γ_1, γ_2, ..., γ_{|Γ|}}.

  • The number of non-zero entries in the sparse signal, k.

Output:

  • The set containing the k indexes of the non-zero elements in x, Λ_k = {λ_i}; i = 1, 2, ..., k.

Procedure:

Phase 1: Basis preselection (initial step)

  (a) Select every basis in the LL subband:

    $$t = |\Gamma|, \quad \Lambda_t = \Gamma, \quad \Phi_t = \left[ \phi_{\gamma_1}\ \phi_{\gamma_2}\ \cdots\ \phi_{\gamma_{|\Gamma|}} \right].$$
  (b) Solve the least squares problem to obtain the new reconstructed signal, z_t:

    $$z_t = \arg\min_{z} \| y - \Phi_t z \|_2$$
  (c) Calculate the new approximation, a_t, and find the residual (error), r_t; a_t is the projection of y onto the space spanned by Φ_t:

    $$a_t = \Phi_t z_t, \qquad r_t = y - a_t.$$

Phase 2: Reconstruction by OMP

  (a) Increment t by one, and terminate if t > k.

  (b) Find the index, λ_t, of the measurement basis, φ_j, that has the highest correlation to the residual of the previous iteration, r_{t-1}:

    $$\lambda_t = \arg\max_{j \in [1, N],\, j \notin \Lambda_{t-1}} \left| \left\langle r_{t-1}, \phi_j \right\rangle \right|.$$

    If the maximum occurs for multiple bases, select one deterministically.

  (c) Augment the index set and the matrix of the selected bases:

    $$\Lambda_t = \Lambda_{t-1} \cup \{ \lambda_t \} \quad \text{and} \quad \Phi_t = \left[ \Phi_{t-1}\ \phi_{\lambda_t} \right].$$
  (d) Solve the least squares problem to obtain the reconstructed signal, z_t:

    $$z_t = \arg\min_{z} \| y - \Phi_t z \|_2$$
  (e) Calculate the new approximation, a_t, that best describes y. Then calculate the residual, r_t, of the current approximation:

    $$a_t = \Phi_t z_t, \qquad r_t = y - a_t.$$
  (f) Go to step (a).

The reconstructed sparse signal, $\hat{x}$, has the indexes of its non-zero components listed in Λ_k. The value of the λ_j-th component of $\hat{x}$ equals the j-th component of z_t. The termination criterion can be changed from t > k to r_{t-1} being less than a predefined threshold.
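A compact numpy sketch of the two phases is given below. Variable names follow the pseudocode, the least squares solves use numpy's lstsq, and the t > k termination criterion is used; the function name is ours.

```python
import numpy as np

def omp_pks(Phi, y, Gamma, k):
    """OMP with partially known support: preselect every basis in
    Gamma (Phase 1), then extend the support greedily by OMP (Phase 2)."""
    M, N = Phi.shape
    Lam = list(Gamma)                           # Phase 1: select all LL-subband bases
    z, *_ = np.linalg.lstsq(Phi[:, Lam], y, rcond=None)
    r = y - Phi[:, Lam] @ z                     # residual of the preselection
    while len(Lam) < k:                         # Phase 2: classical OMP iterations
        corr = np.abs(Phi.T @ r)                # correlation of every basis with r_{t-1}
        corr[Lam] = -np.inf                     # exclude already selected bases
        Lam.append(int(np.argmax(corr)))        # ties resolved deterministically
        z, *_ = np.linalg.lstsq(Phi[:, Lam], y, rcond=None)
        r = y - Phi[:, Lam] @ z
    x_hat = np.zeros(N)
    x_hat[Lam] = z                              # place coefficients on the support
    return x_hat
```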

2.2.3. LITH

LITH [28] was proposed to reconstruct signals in the presence of Gaussian and impulsive noise. It differs from ITH in the use of the Lorentzian norm instead of the ℓ2 norm. It reconstructs the signal by solving the following problem:

$$\arg\min_{x} \| y - \Phi x \|_{LL_{2,\alpha}} \quad \text{s.t.} \quad \| x \|_0 \leq k,$$
(11)

where $\| u \|_{LL_{2,\alpha}}$ is the Lorentzian norm (the $LL_q$ norm with tail parameter q = 2) of u, defined as follows:

$$\| u \|_{LL_{2,\alpha}} = \log\left( 1 + \frac{1}{2} \left( \frac{u}{\alpha} \right)^{2} \right),$$
(12)

where α is a scale parameter. The algorithm for LITH is as follows.

Input:

  • An M × N measurement matrix, Φ

  • The M-dimensional compressed measurement signal, y

  • The number of non-zero entries in the sparse signal, k.

Output:

  • The reconstructed signal, x.

Procedure:

  (a) Set x(0) to the zero vector and t to 0.

  (b) At each iteration, x(t + 1) is computed by

    $$x(t+1) = H_k\big( x(t) + \mu\, g(t) \big),$$

    where H_k(a) is the nonlinear operator that keeps the k largest components of a and sets the remaining components to zero, and μ is the step size. In this article, g is defined as follows:

    $$g(t) = \Phi^{T} W_t \big( y - \Phi x(t) \big).$$

    W_t is an M × M diagonal matrix. The diagonal elements of W_t are defined as

    $$W_t(i, i) = \frac{\alpha^2}{\alpha^2 + \big( y_i - \phi_i^{T} x(t) \big)^2}, \quad i = 1, \ldots, M.$$

    The step size is set as

    $$\mu(t) = \frac{\big\| g_{k(t)}(t) \big\|_2^2}{\big\| W_t^{1/2} \Phi_{k(t)}\, g_{k(t)}(t) \big\|_2^2}.$$

    If $\| y - \Phi x(t+1) \|_{LL_{2,\alpha}} > \| y - \Phi x(t) \|_{LL_{2,\alpha}}$, μ(t) is set to 0.5μ(t).

  (c) Terminate when the difference between Φx and y is less than or equal to the predefined error.

LITH is a fast and robust algorithm, but it faces the same problem as ITH: it requires either that x be very sparse or that y be very large (a high measurement rate). It is faster than OMP but less stable.
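The following numpy sketch implements the iteration above. Two details left open by the pseudocode are filled in with explicit assumptions: the Lorentzian norm of (12) is assumed to be applied component-wise and summed, and the support k(t) used in the step size is taken as the support of H_k(x(t) + g(t)).

```python
import numpy as np

def lorentzian(u, alpha):
    # LL_{2,alpha} norm of Eq. (12), assumed summed over the components of u
    return np.sum(np.log1p(0.5 * (u / alpha) ** 2))

def hard_threshold(a, k):
    # H_k: keep the k largest-magnitude components, zero the rest
    out = np.zeros_like(a)
    idx = np.argsort(np.abs(a))[-k:]
    out[idx] = a[idx]
    return out

def lith(Phi, y, k, alpha=0.25, tol=1e-4, max_iter=200):
    M, N = Phi.shape
    x = np.zeros(N)                              # step (a): x(0) = 0
    for _ in range(max_iter):
        res = y - Phi @ x
        w = alpha**2 / (alpha**2 + res**2)       # diagonal of W_t
        g = Phi.T @ (w * res)                    # g(t) = Phi^T W_t (y - Phi x(t))
        S = np.flatnonzero(hard_threshold(x + g, k))   # assumed support k(t)
        v = Phi[:, S] @ g[S]
        mu = (g[S] @ g[S]) / (w @ v**2 + 1e-12)  # step size mu(t)
        x_new = hard_threshold(x + mu * g, k)    # step (b)
        while lorentzian(y - Phi @ x_new, alpha) > lorentzian(res, alpha) and mu > 1e-12:
            mu *= 0.5                            # halve mu when the Lorentzian cost increases
            x_new = hard_threshold(x + mu * g, k)
        x = x_new
        if np.linalg.norm(y - Phi @ x) <= tol:   # step (c): terminate on small error
            break
    return x
```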

2.2.4. DCS-SOMP

DCS uses the concept of joint sparsity, i.e., the sparsity of every signal in the ensemble. It is used in environments where there are a number of y whose original signals (x) are related. It has three models: sparse common component with innovations, common sparse support, and non-sparse common component with sparse innovations [31, 33]. In this article, the common sparse support model is used. SOMP [31, 36] is proposed as the reconstruction algorithm. SOMP is adapted from OMP.

DCS-SOMP searches for the solution that contains the maximum energy in the signal ensemble. Given the ensemble of y, {y_i}, i = 1, 2, ..., L, the basis selection criterion in DCS-SOMP is changed from $\lambda_t = \arg\max_{j \in [1,N],\, j \notin \Lambda_{t-1}} | \langle r_{t-1}, \phi_j \rangle |$ to $\lambda_t = \arg\max_{j \in [1,N],\, j \notin \Lambda_{t-1}} \sum_{i=1}^{L} | \langle r_{i,t-1}, \phi_{i,j} \rangle |$, where $r_{i,t-1}$ is the residual of y_i with respect to the projection of y_i onto the space spanned by Φ_{t-1}. The rest of the procedure remains the same as OMP. The indexes of the non-zero components in the reconstructed x_i (i = 1, 2, ..., L) are the same, but the values of the non-zero components may differ. It should be noted that when L equals one, DCS-SOMP is OMP.
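A short sketch of just this modified selection rule, assuming the L residuals r_i and the per-signal matrices Φ_i are available (the function name is ours):

```python
import numpy as np

def somp_select(Phis, residuals, Lam):
    """Pick the next basis index by summing correlation magnitudes
    over the L residuals instead of using a single residual."""
    corr = np.zeros(Phis[0].shape[1])
    for Phi_i, r_i in zip(Phis, residuals):
        corr += np.abs(Phi_i.T @ r_i)           # sum_i |<r_{i,t-1}, phi_{i,j}>|
    corr[list(Lam)] = -np.inf                   # exclude already selected bases
    return int(np.argmax(corr))
```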

3. Proposed method

This section addresses the problem of image reconstruction from Gaussian noise corrupted y. Block processing is applied to reduce the computational cost. Block processing and the vectorization of the wavelet coefficients are described in Section 3.1. The proposed reconstruction process from the ensemble of y is explained in Section 3.2.

3.1 Block processing and the vectorization of the wavelet coefficients

In this article, the image is sparsified by the octave-tree discrete wavelet transform. Figure 1 shows an example of block processing and vectorization of the wavelet coefficients. Figure 1a shows the structure of a wavelet transformed image. The LL3 subband is shown in red. Other subbands (LH, HL, and HH) in the third, the second, and the first level are shown in green, orange, and blue, respectively. The LL3 subband is the most important subband, because it contains most of the energy in the image. Figure 1b shows the re-ordering of the wavelet coefficients. The coefficients are ordered such that the LL3 subband is located at the beginning of each row. The LL3 subband is followed by the other subbands in the third, the second, and the first level.

Figure 1. The example of block processing and vectorization: (a) the structure of the wavelet transformed image, (b) wavelet subband vectorization and reorganization, and (c) wavelet block.

The wavelet-domain image in Figure 1b is divided into blocks along its rows, as shown in Figure 1c. In Figure 1c, the image has eight rows and is divided into eight blocks. The signal can be made sparser by wavelet shrinkage thresholding [37]. All coefficients in the LL3 subband are preserved. By using wavelet shrinkage thresholding, we can set most coefficients in the other subbands to zero with little visible degradation. Each row in Figure 1c is considered as a sparse signal in our study.

It should be noted that, experimentally, the vectorization according to the structure of Figure 1c was found to be better than lexicographic ordering. Figure 2 shows reconstruction examples using the two vectorizations. The sparsity rate and the measurement rate were set to 0.15 and 0.45, respectively. All images were reconstructed using OMP-PKS. The top row of each image shows the reconstruction when the vectorization in each block followed the structure of Figure 1c; the bottom row shows the reconstruction when the vectorization was done by lexicographic ordering. There are no failed reconstructions (dark spots) in the top rows, whereas there are some in the bottom rows.

Figure 2. The reconstruction examples when the vectorization of the wavelet block differs. Types I and II indicate the vectorization according to the structure in Figure 1c and the vectorization by lexicographic ordering, respectively. (a) Girl, (b) Jelly Beans, (c) Airplane (F-16), and (d) Mandrill.

3.2. Reconstruction

The reconstruction method is divided into three stages: the construction of the ensemble of y, the reconstruction by OMP-PKS, and data merging.

3.2.1. Construction of the ensemble of y

Given that there are L different pM-dimensional signals in the ensemble of y, p is the ratio of the sampled signal's size to the original size; p and L are predefined. The i-th signal in the ensemble is denoted by y_i. The algorithm for constructing y_i is as follows.

Input:

  • An M × N measurement matrix, Φ

  • The M-dimensional compressed measurement signal, y

  • The dimension of y_i, β = pM.

Output:

  • The i-th signal in the ensemble, y_i.

  • The truncated measurement matrix for y_i, Φ_i.

Procedure:

  (a) Create the set of β random integers, R = {r_1, r_2, ..., r_β}, having the following property: for all j, l ∈ [1, β], r_j ∈ [1, M], and r_j = r_l only if j = l.

  (b) Construct y_i by setting the j-th component of y_i to the r_j-th component of y for all j ∈ [1, β].

  (c) Construct Φ_i as follows: for all j ∈ [1, β], set the j-th row of Φ_i to the r_j-th row of Φ.

Figure 3 shows the result of applying the above procedure L times to create the ensemble of L sampled signals. The total dimension of the ensemble is pM × 1 × L. The ensemble is accompanied by L truncated measurement matrices; the size of each truncated matrix is pM × N. Since all y_i are parts of the same y, their information is the same and they contain Gaussian noise of the same mean and variance. As long as the reconstruction does not use all signals in the ensemble at once, it is safe to assume that reconstruction results from different y_i contain different noise.
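A numpy sketch of this construction (the function name and seed handling are illustrative assumptions):

```python
import numpy as np

def build_ensemble(Phi, y, p, L, seed=0):
    """Section 3.2.1: draw L random row subsets of y, each of size
    beta = pM with no repeated indices, plus the matching rows of Phi."""
    rng = np.random.default_rng(seed)
    M = y.shape[0]
    beta = int(p * M)                            # dimension of each y_i
    ensemble = []
    for _ in range(L):
        R = rng.choice(M, size=beta, replace=False)   # beta distinct indices, steps (a)-(b)
        ensemble.append((y[R], Phi[R, :]))       # (y_i, truncated Phi_i), step (c)
    return ensemble
```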

Figure 3. The ensemble of compressed measurement vectors and measurement matrices.

3.2.2. Reconstruction by OMP-PKS

The reconstruction of the proposed algorithm has the following requirements:

  • the reconstruction of the signal at low measurement rate (M/N),

  • fast reconstruction,

  • independent reconstruction result for each signal in the ensemble.

The first requirement comes from the fact that the reconstruction is performed on a sampled signal that is smaller than y, so the RIP is not always guaranteed. The second requirement is necessary because the reconstruction must be performed L times (L is the number of signals in the ensemble). The third requirement is the result of taking the information from only one signal: combining every sampled signal recovers the original noisy y. In the proposed algorithm, denoising by averaging is possible only when each y_i has a reconstruction result distinct from the others. Since each y_i carries a different set of the components of y, its total noise is different. Consequently, the reconstruction of each y_i gives a result with different noise corrupting each pixel, and the noise in each pixel can be reduced by averaging.

Even though the reconstruction is performed on an ensemble of y as in DCS, DCS-SOMP is not applicable, since it does not meet the third requirement. Any greedy algorithm applied to each y_i individually meets the second and the third requirements. The measurement rate can be kept low (the first requirement) by including the model in the reconstruction. OMP-PKS [34] is chosen in this algorithm because its measurement rate requirement is low; the experiment in [34] shows that the requirement of OMP-PKS was lower than that of CoSaMP-PKS.

OMP-PKS is applied to every y_i in the ensemble and forms L different sparse signals (wavelet coefficients). At the end of this stage, there are L noisy images.

3.2.3. Data merging

The L noisy images at the end of the reconstruction process have noise that is similar to Gaussian noise (Figure 4). At the same position, the noise in different reconstructed images has distinctly different magnitudes; consequently, it can be reduced by taking the average at each pixel. Because the averaging is performed across images rather than across neighboring pixels, the loss in spatial resolution is low. Denoising in the spatial domain can instead be done by conventional denoising algorithms such as the Gaussian smoothing model [38], the Yaroslavsky neighborhood filters and an elegant variant [39, 40], translation-invariant wavelet thresholding [41], and the discrete universal denoiser [42].
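Putting the pieces together, the whole pipeline of Section 3.2 reduces to a few lines. This sketch reuses build_ensemble and omp_pks from the earlier listings; the defaults p = 0.6 and L = 31 are taken from Section 4.2.

```python
import numpy as np

def reconstruct_and_merge(Phi, y, Gamma, k, p=0.6, L=31):
    """Reconstruct each y_i in the ensemble independently with OMP-PKS,
    then average the L noisy outputs pixel by pixel to reduce the noise."""
    outputs = [omp_pks(Phi_i, y_i, Gamma, k)
               for (y_i, Phi_i) in build_ensemble(Phi, y, p, L)]
    return np.mean(outputs, axis=0)              # averaging-based denoising
```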

Figure 4. The reconstruction examples of y_i.

4. Experimental results

4.1. Experiment setup

The proposed method, OMP-PKS + random subsampling (OMP-PKS+RS), was compared with BPDN, LITH, OMP-PKS, and DCS-SOMP. The performance comparison was evaluated using 42 standard test images of size 256 × 256 (available at http://decsai.ugr.es/cvg/dbimagenes/index.php), as depicted in Figure 5. Each image was transformed to the wavelet domain using db8. The measurement matrix was a Hadamard matrix. Each wavelet image was divided into blocks of 1 × 256; the number of blocks was 256. The average sparsity rate (k/N) of the blocks in an image was 0.1. Peak signal-to-noise ratio (PSNR) and visual inspection were used for performance evaluation. All PSNRs shown in the graphs are average PSNRs.

Figure 5. The test images.

Since the compression step in CS consists mostly of linear operations, Gaussian noise corrupting the signal at earlier stages can be approximated as Gaussian noise corrupting the compressed measurement vector. The stage at which the noise corrupted the image was not specified; therefore, we simply corrupted the compressed measurement vector with different levels of Gaussian noise indicated by its variance (σ²).

The experiment consists of two parts: (1) the evaluation for the required parameters (L and p) of OMP-PKS+RS and DCS-SOMP in Section 4.2 and (2) the performance evaluation in Section 4.3.

4.2. Evaluation for L and p

Both OMP-PKS+RS and DCS-SOMP require the ensemble of y. We randomly subsampled y with the algorithm described in Section 3.2.1 to create the ensemble. First, we investigated the size of the ensemble (L) and the size of the signals in the ensemble for the optimum performance of OMP-PKS+RS and DCS-SOMP. The size of the signals in the ensemble was investigated in terms of the ratio to the size of y (p).

Figure 6 shows the PSNR of the reconstructed images at different L and p. The measurement rate (M/N) was set to 0.4. The solid and dashed lines show the PSNR of the reconstruction by DCS-SOMP and OMP-PKS+RS, respectively. Figure 6a-d shows the PSNR when the noise variance was 0.05, 0.1, 0.15, and 0.2, respectively. The figures clearly show that the best performance of OMP-PKS+RS was better than that of DCS-SOMP in all cases.

Figure 6. The average PSNR of reconstructed results by DCS-SOMP and OMP-PKS+RS at M/N = 0.4 from y corrupted by Gaussian noise at (a) σ² = 0.05, (b) σ² = 0.1, (c) σ² = 0.15, and (d) σ² = 0.2.

The lines in Figure 6 are shown in different colors to represent the varied p. The effect of p was more pronounced in OMP-PKS+RS than in DCS-SOMP. The maximum PSNR in OMP-PKS+RS was achieved at p = 0.6 in all cases, while the maximum PSNR in DCS-SOMP was achieved at different values of p: when σ² was 0.05, 0.1, 0.15, and 0.2, the optimum p for DCS-SOMP was 0.9, 0.6, 0.7, and 0.6, respectively. No trend could be established for the optimum p in DCS-SOMP.

The x-axis in Figure 6 represents L. When L was changed, the performance of DCS-SOMP was almost unchanged. On the other hand, the performance of OMP-PKS+RS improved as L grew. When the noise was higher, OMP-PKS+RS required a larger L to achieve its optimum performance. To achieve the best performance, OMP-PKS+RS required a larger L than DCS-SOMP in all cases. In most cases, DCS-SOMP and OMP-PKS+RS had already converged to their optimum performance at L = 6 and 31, respectively.

The optimum p and L at various M/N and noise levels are summarized in Tables 1 and 2, respectively. In DCS-SOMP, the optimum p varied from 0.6 to 0.9; out of the 20 cases shown in the table, the optimum p was 0.7 in 10 cases. The result in Figure 6 indicated that p had little effect on the PSNR, so p for DCS-SOMP was set to 0.7 in Section 4.3. In OMP-PKS+RS, the optimum p varied from 0.6 to 0.8, and in most cases (16 out of 20) the optimum p was 0.6. Even though p had more effect on the PSNR in OMP-PKS+RS than in DCS-SOMP, the PSNR difference between the best case and p = 0.6 was less than 0.5 dB. Hence, p for OMP-PKS+RS was set to 0.6 in Section 4.3.

Table 1 The number of p which provided the highest PSNR
Table 2 The number of L at which the converged PSNR was guaranteed

From Table 2, the optimum L for DCS-SOMP was always equal to 6; thus, L for DCS-SOMP was set to 6 in Section 4.3. In OMP-PKS+RS, the optimum L varied from 21 to 36; out of the 20 cases shown in the table, the optimum L was 31 in 10 cases. The optimum L for OMP-PKS+RS was therefore set to 31 in Section 4.3.

4.3. Performance evaluation

The performance of OMP-PKS+RS was compared with that of BPDN, LITH, OMP-PKS, and DCS-SOMP in this section. BPDN, LITH, and OMP-PKS used a single y to reconstruct the result, while OMP-PKS+RS and DCS-SOMP used the ensemble of y. The error bound of BPDN was set to σ². The value of α in LITH was set to the optimum value of 0.25 [28].

4.3.1. Evaluation by PSNR

Figure 7a-d shows the PSNR when σ² was set to 0.05, 0.1, 0.15, and 0.2, respectively. Different reconstruction methods are shown in different colors. When M/N was higher, better reconstruction was achieved in all cases. However, the effect of the measurement rate on the performance of OMP-PKS+RS was lower than on the other techniques.

Figure 7. The average PSNR of reconstructed results when y is corrupted by Gaussian noise with (a) σ² = 0.05, (b) σ² = 0.1, (c) σ² = 0.15, and (d) σ² = 0.2.

Figure 7 also indicates that the proposed OMP-PKS+RS was the most effective reconstruction at small M/N (< 0.4). When M/N was 0.4 or higher, the PSNR of OMP-PKS+RS and DCS-SOMP was approximately the same. At σ² = 0.05 and M/N = 0.6, all techniques achieved approximately the same PSNR. However, when the noise increased, the reconstruction from the signal ensemble (OMP-PKS+RS and DCS-SOMP) was better than the reconstruction from one signal (BPDN, LITH, and OMP-PKS) in all cases except at M/N = 0.2.

It should be noted that even though LITH was designed for the reconstruction of noisy signals, its performance was the worst in almost all cases. This was due to its requirement of very sparse data (or very high M/N). Its performance had still not converged at M/N = 0.6; however, M/N cannot be increased indefinitely. The major benefit of CS is the capability to reconstruct the signal from a small y, so a large M/N eliminates the benefit of CS. For example, at a sparsity rate of 0.1, M/N = 0.5 leads to y with a size of 50% of the original image size; such a large compressed image could be achieved by conventional image compression techniques. Thus, it is rare that M/N can be increased to 0.5 or larger.

Since OMP-PKS+RS and OMP-PKS used the same reconstruction method, the PSNR difference between them indicates the PSNR improvement gained by using the ensemble of y. The average PSNR improvement was more than 1 dB at all σ². With the exception of σ² = 0.05, the PSNR of OMP-PKS+RS at M/N = 0.2 was higher than that of OMP-PKS at M/N = 0.6. This indicates that by using the ensemble of signals, OMP-PKS+RS required a lower M/N to achieve the same performance level as OMP-PKS.

4.3.2. Evaluation by visual inspection

The images Car, Pallons, and Elaine were used in this section. Car was selected because it contains sharp edges, Pallons because it has a number of smooth surfaces, and Elaine because it contains a number of textures. Figure 8 shows examples of reconstruction results when M/N = 0.4 and σ² = 0.05. The original images are shown in the first column. The reconstruction results based on BPDN, LITH, OMP-PKS, DCS-SOMP, and OMP-PKS+RS are shown in the second to sixth columns, respectively. BPDN and LITH failed to reconstruct some blocks, shown as dark dots (such as on the car's windshield in Figure 8(a2-3) and the rightmost balloon in Figure 8(b2-3)). In contrast, OMP-PKS, DCS-SOMP, and OMP-PKS+RS successfully reconstructed every part. The smoothest reconstruction was acquired from the proposed OMP-PKS+RS. In all images, the change in the intensity contrast was due to the normalization of the inverse wavelet transform.

Figure 8. Comparisons of the reconstructed images with M/N = 0.4 and σ² = 0.05. From left to right: the original image and the reconstructed images based on BPDN, LITH, OMP-PKS, DCS-SOMP (p = 0.7, L = 6), and OMP-PKS+RS (p = 0.6, L = 31).

The PSNR performance of the proposed OMP-PKS+RS and DCS-SOMP was very close; hence, further visual investigation was performed. Figures 9, 10, and 11 show examples of reconstruction based on OMP-PKS+RS and DCS-SOMP when σ² = 0.05, 0.1, 0.15, and 0.2 and M/N ≥ 0.4. The top and bottom rows are the reconstructions based on DCS-SOMP and OMP-PKS+RS, respectively. Although DCS-SOMP gave higher PSNR, its result was noisy. The noise was reduced in the reconstruction based on OMP-PKS+RS: the edges were sharper and the uniform intensity regions were smoother. For example, at σ² = 0.2 and M/N = 0.6, the PSNR of the reconstructed Car based on DCS-SOMP was 5.36 dB higher than the one based on OMP-PKS+RS, but as Figure 9 indicates, the car's body in the top row is less smooth and the edges are more blurred. Similar examples can be found in Figures 10 and 11. Furthermore, DCS-SOMP failed to reconstruct some blocks (shown as dark dots), while OMP-PKS+RS successfully reconstructed every image.

Figure 9. Comparisons of the reconstructed Car by DCS-SOMP (top row) and OMP-PKS+RS (bottom row) with M/N = 0.4, 0.5, and 0.6 at σ² = 0.05, 0.1, 0.15, and 0.2.

Figure 10. Comparisons of the reconstructed Pallons by DCS-SOMP (top row) and OMP-PKS+RS (bottom row) with M/N = 0.4, 0.5, and 0.6 at σ² = 0.05, 0.1, 0.15, and 0.2.

Figure 11. Comparisons of the reconstructed Elaine by DCS-SOMP (top row) and OMP-PKS+RS (bottom row) with M/N = 0.4, 0.5, and 0.6 at σ² = 0.05, 0.1, 0.15, and 0.2.

4.3.3. Evaluation between OMP-PKS+RS and DCS-SOMP at optimum L and p

The performance of OMP-PKS+RS and DCS-SOMP at the optimum L and p was compared in this section. M/N was set to 0.6 to ensure the best performance for DCS-SOMP. Table 3 shows the PSNR of the reconstruction results when p and L were set to the values in Tables 1 and 2, respectively. OMP-PKS+RS had at least 2.5 and 1 dB higher PSNR at M/N = 0.2 and 0.3, respectively. DCS-SOMP started to have higher PSNR when M/N was larger than 0.4. The trend was the same as the result in Section 4.3.1.

Table 3 The average PSNR when p and L were set according to Tables 1 and 2, respectively

Figure 12 shows reconstruction examples when p and L were set according to Tables 1 and 2, respectively. The top and bottom rows of each image in Figure 12 show the reconstructions based on DCS-SOMP and OMP-PKS+RS, respectively. Even though the PSNRs of some images in the top row were higher, the images in the bottom row had sharper edges and smoother uniform regions. Noise was less distinct in the reconstruction based on OMP-PKS+RS. The result followed the same trend as in Section 4.3.2.

Figure 12. Comparisons of reconstructed images by DCS-SOMP (top row) and OMP-PKS+RS (bottom row) when p and L were set according to Tables 1 and 2, respectively; M/N was set to 0.6.

Comparing Figure 12 with Figures 9, 10, and 11, we found that the PSNR of some reconstructed images in Figure 12 was lower than in Figures 9, 10, and 11. At σ² = 0.2, the PSNR of the reconstructed Car based on DCS-SOMP dropped from 24.61 dB (Figure 9) to 17.07 dB (Figure 12), and the reconstructed image was also degraded visually. On the other hand, the reconstructed Car based on OMP-PKS+RS at σ² = 0.1 had 2.31 dB lower PSNR but approximately the same visual quality. The PSNR and visual quality drops were also found in other images but to a lesser degree (e.g., the reconstruction of Pallons based on DCS-SOMP at σ² = 0.2).

The PSNR drop was caused by the variation of the best p among the test images. The visual quality of the reconstruction based on OMP-PKS+RS stayed approximately the same, but the one based on DCS-SOMP dropped drastically in some cases. Consequently, it was possible to use one p for every image in OMP-PKS+RS, but p had to be determined image by image in DCS-SOMP.

From the comparison between OMP-PKS+RS and DCS-SOMP, it can be concluded that although OMP-PKS+RS produced results with lower PSNR than DCS-SOMP in some cases, the results had better visual quality. Furthermore, the parameter adjustment in OMP-PKS+RS was easier.

The reason behind the noise reduction is that the reconstruction based on OMP-PKS+RS produced a different result for each signal in the ensemble; therefore, the noise in each pixel could be reduced by averaging the intensities among the signals in the ensemble. On the other hand, DCS-SOMP tried to find one result for every signal in the ensemble. Because the ensemble came from only one signal, the noise was the same and went directly into the result.

5. Conclusions

This article proposed a robust CS reconstruction algorithm for images in the presence of Gaussian noise. The proposed algorithm, OMP-PKS+RS, first applies random subsampling to create an ensemble of L sampled signals. OMP-PKS is then used to reconstruct each signal. Gaussian denoising is performed by averaging the image reconstructions of every signal in the ensemble. The experiments show that by using the ensemble of signals, the proposed algorithm improved the PSNR of the original OMP-PKS by at least 0.34 dB. Moreover, the proposed algorithm was efficient in removing the noise when the compression rate was high (small measurement rate). For future work, we plan to add an impulsive noise model into OMP-PKS+RS to develop a reconstruction algorithm that is robust to both impulsive and Gaussian noise.

Appendix 1: Computational costs of OMP, OMP-PKS, OMP-PKS+RS, and DCS-SOMP

The computational costs of OMP, OMP-PKS, OMP-PKS+RS, and DCS-SOMP are investigated here. The variables are the same as in Sections 2 and 3. The number of multiplications and the number of ℓ2 optimizations are used to measure the computational cost. The computational cost of the t-th iteration of the classical OMP is summarized in Table 4. The first |Γ| iterations of OMP are replaced by the basis preselection in OMP-PKS; the computational cost of the basis preselection is summarized in Table 5. The total computational costs of OMP and OMP-PKS for a k-sparse signal are as follows:

Table 4 The computational cost of the t th iteration in OMP
Table 5 The computation cost of the basis preselection in OMP-PKS
The number of multiplications in OMP $= \sum_{t=1}^{k} (MN + M)$
(13)
The number of ℓ2 optimizations in OMP $= \sum_{t=1}^{k} (\ell_2 \text{ optimization for } t \text{ variables})$
(14)
The number of multiplications in OMP-PKS $= \sum_{t=|\Gamma|+1}^{k} (MN + M) + |\Gamma|$
(15)
The number of ℓ2 optimizations in OMP-PKS $= \sum_{t=|\Gamma|}^{k} (\ell_2 \text{ optimization for } t \text{ variables})$
(16)

From (13) to (16), it can be concluded that OMP-PKS reduces the computational cost of OMP in two aspects.

  1. The number of multiplications in the first |Γ| loops is reduced from (MN + M)|Γ| to |Γ|.

  2. The ℓ2 optimizations in the first (|Γ| - 1) iterations are removed.

In OMP-PKS+RS, the size of y_i is reduced from M to pM, and the reconstruction is performed L times. Therefore, the total computational cost of OMP-PKS+RS is L times that of OMP-PKS with M replaced by pM. In DCS-SOMP, the computational cost of the t-th iteration is summarized in Table 6.

Table 6 The computation cost of the t th iteration in DCS-SOMP

The total computational costs of OMP-PKS+RS and DCS-SOMP for a k-sparse signal are as follows:

The number of multiplications in OMP-PKS+RS $= L \left[ p(MN + M)(k - |\Gamma|) + |\Gamma| \right]$
(17)
The number of ℓ2 optimizations in OMP-PKS+RS $= L \sum_{t=|\Gamma|}^{k} (\ell_2 \text{ optimization for } t \text{ variables})$
(18)
The number of multiplications in DCS-SOMP $= L\, p(MN + M) k$
(19)
The number of ℓ2 optimizations in DCS-SOMP $= L \sum_{t=1}^{k} (\ell_2 \text{ optimization for } t \text{ variables})$
(20)

From (15) to (18), it can be concluded that the computational cost of OMP-PKS+RS is approximately pL times that of OMP-PKS. From (13), (14), (19), and (20), the computational cost of DCS-SOMP is pL times that of OMP. Since both OMP-PKS+RS and DCS-SOMP reconstruct an ensemble of signals, their computational costs are higher than those of OMP and OMP-PKS.

From (17) to (20), at the same L and p, the cost of OMP-PKS+RS is lower than that of DCS-SOMP because of the use of OMP-PKS. However, the optimum L and p of OMP-PKS+RS and DCS-SOMP were found to differ: the product pL was much higher for OMP-PKS+RS, so OMP-PKS+RS had the highest computational cost. The effect of the higher computational cost of OMP-PKS+RS can be reduced by parallel processing, because the reconstruction of each signal in OMP-PKS+RS can be done separately.

Table 7 summarizes the computational costs of the four methods when they are applied to reconstruct a k-sparse signal as in Section 4.

Table 7 The total computational cost of the reconstruction of a k-sparse signal by OMP, OMP-PKS, OMP-PKS+RS, and DCS-SOMP

References

  1. Donoho DL: Compressed sensing. IEEE Trans Inf Theory 2006, 52(4):1289-1306.

  2. Candes EJ, Wakin MB: An introduction to compressive sampling. IEEE Signal Process Mag 2008, 25: 21-30.

  3. Candes EJ, Romberg J: Sparsity and incoherence in compressive sampling. Inverse Problem 2007, 23(3):969-985. 10.1088/0266-5611/23/3/008

  4. Patel VM, Easley GR, Healy DM, Chellappa R: Compressed synthetic aperture radar. IEEE J Sel Topics Signal Process 2010, 4(2):244-254.

  5. Parvaresh F, Vikalo H, Misra S, Hassibi B: Recovering sparse signal using measurement matrices in compressed DNA microarrays. IEEE J Sel Topics Signal Process 2008, 2(3):275-285.

  6. Shi G, Gao D, Liu D, Wang L: High resolution image reconstruction: a new imager via movable random exposure. Proc ICIP, Cairo, Egypt 2009, 1177-1180.

  7. Yang J, Zhang Y, Yin W: A fast alternative direction method for TVL1-L2 signal reconstruction from partial fourier data. IEEE J Sel Topics Signal Proces 2010, 4(2):288-297.

  8. Marcia RF, Willett RM: Compressive coded aperture superresolution image reconstruction. In Proc ICASSP. Nevada, U.S.A.; 2008:833-836.

  9. Gan L: Block compressed sensing of natural images. In Proc DSP. Wales, U.K.; 2007:403-406.

  10. Yang Y, Au OC, Fang L, Wen X, Tang W: Perceptual compressive sensing for image signals. In Proc ICME. New York, U.S.A.; 2009:89-92.

  11. Do TT, Tran TD, Gan L: Fast compressive sampling with structurally random matrices. In Proc ICASSP. Nevada, U.S.A.; 2008:3369-3372.

  12. Gan L, Do TT, Tran TD: Fast compressive imaging using scrambled block Hadamard ensemble. In Proc EUSIPCO. Lausanne, Switzerland; 2008.

  13. Mun S, Fowler JE: Block compressed sensing of images using directional transforms. In Proc ICIP. Cairo, Egypt; 2009:3021-3024.

  14. Sermwuthisarn P, Auethavekiat S, Patanavijit V: A fast image recovery using compressive sensing technique with block based orthogonal matching pursuit. In Proc ISPACS. Kanazawa, Japan; 2009:212-215.

  15. Chen S, Donoho DL, Saunders M: Atomic decomposition by basis pursuit. SIAM Rev 2001, 43(1):129-159. 10.1137/S003614450037906X

  16. Donoho DL, Elad M, Temlyakov VN: Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans Inf Theory 2006, 52(1):6-18.

  17. Tropp JA: Just relax: convex programming methods for identifying sparse signals in noise. IEEE Trans Inf Theory 2006, 52(3):1030-1051.

  18. Candes EJ, Romberg J, Tao T: Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 2006, 59(8):1207-1223. 10.1002/cpa.20124

  19. Tropp JA, Gilbert AC: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 2007, 53(12):4655-4666.

  20. Needell D, Vershynin R: Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Found Comput Math 2009, 9(3):317-334. 10.1007/s10208-008-9031-3

  21. Omidiran D, Wainwright MJ: High-dimensional subset recovery in noise: sparsified measurements without loss of statistical efficiency, in Technical report 753. Department of Statistics, UC Berkeley, U.S.A.; 2008.

  22. Candès EJ, Tao T: The Dantzig selector: statistical estimation when p is much larger than n. Ann Stat 2007, 35(6):2313-2351. 10.1214/009053606000001523

  23. Carrillo RE, Barner KE, Aysal TC: Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. IEEE J Sel Topics Signal Proc 2010, 4(2):392-408.

  24. Needell D, Tropp JA: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl Comput Harmonic Anal 2008, 26(3):301-321.

  25. Ben-Haim Z, Eldar YC, Elad M: Coherence-based near-oracle performance guarantees for sparse estimation under Gaussian noise. Proc ICASSP, Prague, Czech Republic 2010, 3590-3593.

  26. Needell D: Topics in compressed sensing. In Ph.D. Dissertation, Math. Univ. of California, Davis; 2009.

  27. Blumensath T, Davies ME: Normalized iterative hard thresholding: guaranteed stability and performance. IEEE J Sel Topics Signal Process 2010, 4(2):298-309.

  28. Carrillo RE, Barner KE: Lorentzian based iterative hard thresholding for compressed sensing, in Proc . IEEE ICASSP, Prague, Czech Republic 2011, 3664-3667.

  29. La C, Do MN: Signal reconstruction using sparse tree representation. In Proc of SPIE Conf on Wavelet Applications in Signal and Image Proc. San Diego, U.S.A; 2005:5914.

  30. La C, Do MN: Tree-based orthogonal matching pursuit algorithm for signal reconstruction. In Proc IEEE ICIP. Georgia, U.S.A; 2006:1277-1280.

  31. Duarte MF: Compressed sensing for signal ensembles. In Ph.D. Dissertation. Department of Electrical Engineering, Rice University, Houston, TX; 2009.

  32. He L, Carin L: Exploiting structure in wavelet-based Bayesian compressive sensing. IEEE Trans Signal Process 2009, 57:3488-3497.

  33. Baron D, Wakin MB, Duarte MF, Sarvotham S, Baraniuk RG: Distributed compressed sensing. In Technical Report, TREE-0612. Rice University, Department of Electrical and Computer Engineering, Houston, TX; 2006.

  34. Carrillo RE, Polania LF, Barner KE: Iterative algorithms for compressed sensing with partially known support. In Proc IEEE ICASSP. Texas, U.S.A.; 2010:3654-3657.

  35. Xu M, Lu J: K-cluster-values compressive sensing for imaging. EURASIP J Adv Signal Process 2011. doi:10.1186/1687-6180-2011-75

  36. Aravind NV, Abhinandan K, Vineeth VA, Suman DS: Comparison of OMP and SOMP in the reconstruction of compressively sensed hyperspectral images. In Proc IEEE ICCSP. Hamirpur, India; 2011:188-192.

  37. Donoho D: De-noising by soft thresholding. IEEE Trans Inf Theory 1995, 41(3):613-627.

  38. Catté F, Dibos F, Koepfler G: A morphological scheme for mean curvature motion and applications to anisotropic diffusion and motion of level sets. SIAM J Numer Anal 1995, 32(6):1845-1909.

  39. Yaroslavsky LP: Digital Picture Processing---An Introduction. Springer Verlag, Berlin, Heidelberg; 1985.

  40. Yaroslavsky L, Eden M: Fundamentals of Digital Optics. Birkhauser, Boston; 1996.

  41. Coifman RR, Donoho D: Translation-Invariant De-Noising, Wavelets and Statistics. Springer Verlag, New York; 1995:125-150.

  42. Ordentlich E, Seroussi G, Verdú S, Weinberger M, Weissman T: A discrete universal denoiser and its application to binary images. In Proc IEEE ICIP. Volume 1. Catalonia, Spain; 2003:117-120.


Acknowledgements

The authors would like to thank the reviewers for their comments and suggestions. This research has financially been supported by the National Telecommunications Commission Fund (Grant No. PHD/006/2551 to P. Sermwuthisarn and S. Auethavekiat), the Telecommunications Research Industrial and Development Institute (TRIDI).

Author information

Corresponding author

Correspondence to Supatana Auethavekiat.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Sermwuthisarn, P., Auethavekiat, S., Gansawat, D. et al. Robust reconstruction algorithm for compressed sensing in Gaussian noise environment using orthogonal matching pursuit with partially known support and random subsampling. EURASIP J. Adv. Signal Process. 2012, 34 (2012). https://doi.org/10.1186/1687-6180-2012-34