Impulsive noise rejection method for compressed measurement signal in compressed sensing

EURASIP Journal on Advances in Signal Processing 2012, 2012:68

https://doi.org/10.1186/1687-6180-2012-68

Received: 15 September 2011

Accepted: 20 March 2012

Published: 20 March 2012

Abstract

The Lorentzian norm of robust statistics is often applied in the reconstruction of a sparse signal from a compressed measurement signal in an impulsive noise environment. The optimization of the robust statistical function is iterative and usually requires complex parameter adjustments. In this article, an impulsive noise rejection method for the compressed measurement signal, designed for image reconstruction, is proposed. It can be used as preprocessing for any compressed sensing reconstruction, provided that the sparsified version of the signal is obtained via the octave-tree discrete wavelet transform with db8 as the mother wavelet. The presence of impulsive noise is detected from the energy distribution of the reconstructed sparse signal. After the noise removal, the noise-corrupted coefficients are estimated. The proposed method requires neither complex optimization nor complex parameter adjustments. Its performance was evaluated on 60 images. The experimental results indicate that the proposed method effectively rejects impulsive noise. Furthermore, at the same impulsive noise corruption level, reconstruction with the proposed method as preprocessing required a much lower measurement rate than the model-based Lorentzian iterative hard thresholding.

Keywords

  • compressed sensing (CS)
  • model-based method
  • impulsive noise
  • orthogonal matching pursuit with partially known support (OMP-PKS)

1. Introduction

Compressed sensing (CS) is a sampling paradigm that acquires compressible signals at a rate significantly below the Nyquist rate. It reveals that a compressible or sparse signal can be recovered from a small number of measurements [1–3]. The connection between the sampling and reconstruction methods of CS and those of other sparse signal processing is presented in [4], where commonly used reconstruction algorithms are also described. Consider a measurement process in CS that is modeled as
y = Φ x ,
(1)
where y and Φ are an M × 1 compressed measurement signal and an M × N random measurement matrix, respectively; x is an N × 1 compressible signal. In CS, it is considered that M < N. A signal is compressible if it is sparse in some domain; thus, x can be written as follows
x = Ψ s ,
(2)

where s and Ψ are a k-sparse signal and an N × N orthogonal basis matrix, respectively; k is the number of non-zero elements, i.e., the sparsity level. Without loss of generality, Ψ is defined as an identity matrix in this article, and x is equivalent to s.
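The measurement model of Equations (1)-(2) can be sketched numerically. The dimensions below (N = 256, M = 128, k = 25) and the Gaussian Φ are illustrative assumptions, not prescribed by the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: N-length signal, M < N measurements, sparsity k.
N, M, k = 256, 128, 25

# Psi is the identity here, so the k-sparse s and the signal x coincide.
s = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
s[support] = rng.normal(size=k)

# Random M x N measurement matrix Phi and compressed measurement y = Phi s.
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
y = Phi @ s
```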

In practice, y can be corrupted by noise during transmission in a noisy channel. The measurement process in the noisy channel is modeled as
y = Φ s + e ,
(3)

where e is the additive noise.

CS reconstruction methods aim to find the sparsest s that creates y. The reconstruction of s in the noisy channel can be written as the following optimization problem.
arg min_s ‖s‖₀  s.t.  ‖y − Φs‖₂ ≤ ε,
(4)

where ε and ‖u‖_p are the error bound and the L_p norm of u, respectively. The error bound is set based on the noise characteristics, such as bounded noise, Gaussian noise, finite-variance noise, etc. [5–14]. The L0 norm in Equation (4) is relaxed to the L1 norm in the reconstruction by Basis Pursuit Denoising (BPDN), whereas it is replaced by heuristic rules in the reconstruction by greedy algorithms.

The optimization problem in BPDN [6] is
arg min_s ‖s‖₁  s.t.  ‖y − Φs‖₂ ≤ ε,
(5)
which is equivalent to
arg min_s (1/2)‖y − Φs‖₂² + τ‖s‖₁,
(6)

where τ is a regularization parameter.
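For concreteness, problem (6) can be solved by iterative soft thresholding (ISTA), a standard solver for this objective that is not the reconstruction method used in this article. The dimensions, seed, step rule (1/L with L the spectral-norm squared of Φ), and τ below are illustrative assumptions:

```python
import numpy as np

def ista(y, Phi, tau, n_iter=500):
    """Iterative soft thresholding for min_s 0.5*||y - Phi s||_2^2 + tau*||s||_1."""
    L = np.linalg.norm(Phi, 2) ** 2               # Lipschitz constant of the gradient
    s = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = s + Phi.T @ (y - Phi @ s) / L         # gradient step on the quadratic term
        s = np.sign(g) * np.maximum(np.abs(g) - tau / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(1)
Phi = rng.normal(size=(64, 128)) / 8.0
s_true = np.zeros(128)
s_true[[3, 40, 90]] = [2.0, -1.5, 1.0]
y = Phi @ s_true
s_hat = ista(y, Phi, tau=0.01)
```

Each iteration is guaranteed not to increase the objective, which makes the solver easy to sanity-check.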

When the noise is impulsive, e can be very large. It is well known that L2-norm optimization is not robust to outliers in y; thus, the optimization leads to an incorrect estimate of s. In [15], the reconstruction from a signal corrupted by impulsive noise is performed by solving one of the following two optimization problems.
arg min_{s, e_δ} (1/(2α))‖y − Φs − e_δ‖₂² + ‖e_δ‖₁ + τ‖s‖₁,
(7)
arg min_{s, e_δ} (1/(2α))‖y − Φs − e_δ‖₂² + ‖e_δ‖₁ + τ‖s‖_TV,
(8)
where e_δ and α are a sparse vector with large non-zero coefficients (the impulsive noise) and a pre-defined parameter, respectively; ‖u‖_TV is the total variation norm of u. This method first estimates s and then estimates e_δ; the estimation is performed iteratively. However, a unique solution is guaranteed only when the cost function is convex. The effect of impulsive noise can be suppressed by applying robust statistics [16–22]. The Generalized Cauchy Distribution (GCD)-based maximum likelihood has been proposed as an optimization approach that is robust to impulsive noise [16–22]. The Lorentzian norm, a special case of GCD, is utilized in a number of robust CS reconstructions [18–20, 22]. The Lorentzian norm is used in place of the L2 norm in Equation (5) in the Lorentzian-based Basis Pursuit (LBP) [18]. Like Basis Pursuit (BP), LBP is slow to solve; furthermore, it requires complex parameter adjustments for effective optimization of the Lorentzian norm. The reconstruction in [19, 20] applies an iterative algorithm and the weighted myriad operator to solve the following problem.
arg min_s ‖H − Rᵀs‖_LL + τ‖s‖₀,
(9)
where ‖u‖_LL, H, and R are the Lorentzian norm of u, a Cauchy random projection signal, and a Cauchy random projection matrix, respectively. The reconstruction in [21] applies the weighted median operator and iterative thresholding to solve the following L0-regularized Least Absolute Deviation regression problem.
arg min_s Σ_{i=1}^{M} |y_i − φᵢᵀs| + τ‖s‖₀,
(10)

where y_i and φᵢᵀ are the i-th element of y and the i-th row of Φ, respectively. The Lorentzian-based Iterative Hard Thresholding (LIHT) approach is proposed as a fast reconstruction method in [22]. Iterative Hard Thresholding (IHT) is used in place of BP to increase the speed of LBP. However, it faces the same problem as IHT in that it requires a high measurement rate to achieve successful reconstruction [13]. Consequently, LIHT is suitable only for very sparse signals.

The noise tolerance can be increased by including prior knowledge. One popular form of prior knowledge is a model of the sparse signal [23–29], such as the wavelet-tree structure. Model-based reconstruction methods have three benefits: (1) a reduced number of measurements, (2) increased robustness, and (3) faster reconstruction.

Even though robust statistics provide tolerance against impulsive noise, the resulting optimization problem is often difficult. In this article, an impulsive noise rejection method for image data is proposed. It is used as preprocessing to estimate the noise-free y. It iteratively applies a heuristic rule based on the energy distribution of the image data in the wavelet domain to detect the existence of impulsive noise. The octave-tree discrete wavelet transform (DWT) is used to transform signals to the sparse domain in this article. In an image, most energy should be contained in the third-level subband; the existence of impulsive noise leads to a high ratio of the energy outside the third-level subband to the total energy. The rejection and the estimation of the noise-corrupted elements are made possible by the following fact: in most images, the k-sparse signal s can successfully be reconstructed even when some elements of y are removed, because image data are redundant. The proposed rejection method requires only two parameters: the energy-ratio threshold and the rejection-ratio threshold. These two parameters are easily adjusted, and their optimal values are evaluated in the experimental section.

The proposed method and the impulsive noise cancellation method in [30] are similar in that both have two stages, noise detection and signal estimation, and both detect impulsive noise iteratively. However, they differ in a number of aspects; only a few are mentioned here. The proposed method detects the impulsive noise via the energy distribution of the projected sparse signal. Its estimation stage is separated from its detection stage, and the estimation is performed only once, after the noise detection has been completed. In contrast, the method in [30] detects the noise via the difference between the original noisy signal and the estimated signal; consequently, its estimation stage is integrated into the same loop as its detection stage, and the estimation is performed iteratively.

The remainder of this article is organized as follows. Section 2 addresses a brief review of CS, the reconstruction by Orthogonal Matching Pursuit (OMP) and OMP with Partially Known Support (OMP-PKS). Section 3 describes the proposed impulsive noise rejection method. The block processing and the vectorization are also given. In Section 4, the proposed method is evaluated. The conclusion is given in Section 5.

2. Background

2.1. Compressed sensing

CS is based on the assumption of the sparsity of signals and the incoherence between the basis of the sparse domain and the basis of the measurement vectors [1–3]. CS has three major steps: the construction of the k-sparse representation, the measurement, and the reconstruction. The first step is the construction of the k-sparse representation, where k is the sparsity level of the sparse signal. Most natural signals can be made sparse by applying orthogonal transforms such as the wavelet transform, the Fast Fourier Transform, or the Discrete Cosine Transform. This step is represented by Equation (2).

The random measurement matrix is applied to measure the signal by the following equation.
y = Φ x = Φ Ψ s
(11)
Since Ψ is an identity matrix in this article, s is equivalent to x. The sufficient condition for the high probability of successful reconstruction is as follows.
M ≥ C μ²(Φ, Ψ) k log N,
(12)
for some positive constant C. μ(Φ, Ψ) is the coherence between Φ and Ψ, and defined by
μ(Φ, Ψ) = √N · max_{i,j} |⟨φ_i, ψ_j⟩|,
(13)

where φ_i and ψ_j are the i-th and the j-th columns of Φ and Ψ, respectively. If the elements in Φ and Ψ are correlated, the coherence is large; otherwise, it is small. From linear algebra, it is known that μ(Φ, Ψ) ∈ [1, √N] [2]. In the measurement process, errors (due to hardware noise, transmission errors, etc.) may occur. The error is added to the compressed measurement signal as described in Equation (3).
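A quick numerical check of Equation (13) can be sketched as follows. The Sylvester-constructed 8 × 8 Hadamard matrix used as Φ is an illustrative assumption (Ψ is the identity, matching the article's convention); a normalized Hadamard basis attains the maximally incoherent value μ = 1:

```python
import numpy as np

def coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(N) * max_{i,j} |<phi_i, psi_j>|, as in Equation (13)."""
    N = Psi.shape[0]
    return np.sqrt(N) * np.max(np.abs(Phi.T @ Psi))

# Sylvester construction of an 8 x 8 Hadamard matrix, scaled to be orthonormal.
H = np.array([[1.0]])
for _ in range(3):
    H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
H /= np.sqrt(8)

Psi = np.eye(8)          # identity sparsity basis, as assumed in the article
mu = coherence(H, Psi)   # maximal incoherence: mu = 1
```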

The final step is the reconstruction. There are two major reconstruction approaches: L1-minimization [5–8] and greedy algorithms [10–14, 31]. Convex optimization is applied in the L1-minimization approach. Successful reconstruction depends on the degree to which Φ complies with the Restricted Isometry Property (RIP), defined as follows.
(1 − δ_k)‖s‖₂² ≤ ‖Φs‖₂² ≤ (1 + δ_k)‖s‖₂²,
(14)

where δ k is the k-restricted isometry constant of Φ. RIP is used to ensure that all the subsets of k columns taken from Φ are nearly orthogonal. It should be noted that Φ has more columns than rows; thus, Φ cannot exactly be orthogonal [2].

The reconstruction by L1-minimization, as in BP, is stable but slow. Greedy algorithms increase the reconstruction speed by applying heuristic rules. In OMP [31], the heuristic rule is based on the assumption that y has large correlation with the bases corresponding to the non-zero elements (or the elements with large magnitude) of s. OMP selects the bases of the non-zero elements according to this correlation and estimates the values of the non-zero elements by the least-squares method. The selection is iterated until a stopping condition is reached. The reconstruction by greedy algorithms has a fast runtime, but lacks stability and a uniform guarantee. RIP is not seriously considered in the greedy algorithms [12].

2.2 Orthogonal matching pursuit

OMP is a well-known reconstruction algorithm [31]. It was developed from matching pursuit [32], using a different method to estimate the magnitudes of the non-zero elements in s. Instead of projecting the residual signal onto the selected basis, it estimates the magnitudes of the non-zero elements by minimizing the least-squares error between y and the projection of the reconstructed s. OMP has the advantage of a simple and fast implementation. The algorithm is as follows.

Input:

  • The M × N measurement matrix, Φ = [ φ 1 φ 2 . . . φ N ]

  • The M-dimension compressed measurement signal, y

  • The sparsity level of the sparse signal, k

Output:

  • The reconstructed signal, ŝ

  • The set containing k indexes of non-zero elements in ŝ, Λ k = {λ1, λ2, ..., λ k }

Procedure:
  (a) Initialize the residual (r₀), the index set (Λ₀), and the iteration counter (t) as follows.
      r₀ = y, Λ₀ = ∅, t = 1

  (b) Find the index λ_t of the measurement basis that has the highest correlation to the residual of the previous iteration, r_{t−1}. If the maximum occurs for multiple bases, select one deterministically.
      λ_t = arg max_{j=1,...,N} |⟨r_{t−1}, φ_j⟩|

  (c) Augment the index set and the matrix of chosen bases: Λ_t = Λ_{t−1} ∪ {λ_t} and Φ_t = [Φ_{t−1} φ_{λ_t}], where Φ₀ is an empty matrix.

  (d) Solve the following least-squares problem to obtain the new reconstructed signal, z_t.
      z_t = arg min_z ‖y − Φ_t z‖₂

  (e) Calculate the new approximation, a_t, that best describes y, then the residual of the t-th iteration, r_t.
      a_t = Φ_t z_t,  r_t = y − a_t

  (f) Increment t by one.

  (g) If t > k, terminate; otherwise, go to step (b).

The reconstructed signal, ŝ, has non-zero elements at the indexes listed in Λ_k. The value of the λ_j-th element of ŝ equals the j-th element of z_k (j = 1, 2, ..., k). The termination criterion can be changed from t > k to ‖r_{t−1}‖₂ being less than a predefined threshold.
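Steps (a)-(g) above can be sketched in a few lines; this is a minimal NumPy version with illustrative dimensions and a fixed seed, not the article's MATLAB implementation:

```python
import numpy as np

def omp(y, Phi, k):
    """OMP following steps (a)-(g): greedy index selection plus least squares."""
    M, N = Phi.shape
    r = y.copy()                                       # (a) residual r_0 = y
    idx = []                                           # (a) index set
    z = np.zeros(0)
    for _ in range(k):
        lam = int(np.argmax(np.abs(Phi.T @ r)))        # (b) most correlated basis
        idx.append(lam)                                # (c) augment the index set
        Phi_t = Phi[:, idx]
        z, *_ = np.linalg.lstsq(Phi_t, y, rcond=None)  # (d) least squares
        r = y - Phi_t @ z                              # (e) new residual
    s_hat = np.zeros(N)
    s_hat[idx] = z
    return s_hat, idx

rng = np.random.default_rng(2)
M, N, k = 64, 128, 5
Phi = rng.normal(size=(M, N))
Phi /= np.linalg.norm(Phi, axis=0)                     # unit-norm columns
s = np.zeros(N)
sup = rng.choice(N, size=k, replace=False)
s[sup] = np.array([1.5, -2.0, 1.0, 2.5, -1.2])
y = Phi @ s
s_hat, chosen = omp(y, Phi, k)
```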

2.3. OMP with partially known support

OMP-PKS [28] is adapted from the classical OMP [31]. The partially known support provides a priori information on which subbands in the sparse signal structure are more important than the others and should be selected as non-zero elements. Like OMP, its RIP requirement is not as severe as BP's [6]. It has a fast implementation but may fail to reconstruct the signal (it lacks stability), and it requires a very low measurement rate. It differs from Tree-based OMP (TOMP) [24] in that the subsequent basis selection of OMP-PKS does not consider the previously selected bases, while TOMP sequentially compares and selects the next good wavelet sub-tree and the group of related atoms in the wavelet tree.

The wavelet transform of an image is realized using filter banks as shown in Figure 1. The image is decomposed into four subbands: HH, HL, LH, and LL. These four subbands contain diagonal details, vertical details, horizontal details, and approximation coefficients, respectively. In this article, octave-tree DWT is used to obtain the sparse representation of images. The second and the third-level subbands are constructed by applying the filter bank analysis to the LL subband in the first and the second levels, respectively. The example of octave-tree DWT is shown in Figure 2. The original and the wavelet transformed images are shown in Figure 2a, b, respectively. Since the LL subband in the third level (LL3 subband) contains most information in the image, the signal in the LL3 subband must be included for successful reconstruction. All elements in the LL3 subband are selected as non-zero elements without testing for the correlation. The algorithm for OMP-PKS when the data are represented in wavelet domain is as follows.
Figure 1

Wavelet decomposition by filter bank analysis. HP and LP are high pass filter and low pass filter, respectively.

Figure 2

The example of octave-tree DWT: (a) the original image and (b) the wavelet transformed image. Subbands inside the blue, orange and green windows are the first, the second, and the third level subbands, respectively.

Input:

  • The M × N measurement matrix, Φ = [ φ 1 φ 2 . . . φ N ]

  • The M-dimension compressed measurement signal, y

  • The set containing the indexes of the bases in LL3 subbands, Γ = {γ1, γ2, ..., γ|Γ|}

  • The sparsity level of the sparse signal, k

Output:

  • The reconstructed signal, ŝ

  • The set containing k indexes of the non-zero elements in ŝ, Λ k = {λ1, λ2, ..., λ k }

Procedure:

Phase 1: Selection without correlation test
  (a) Select every basis in the LL3 subband.
      t = |Γ|,  Λ_t = Γ,  Φ_t = [φ_{γ₁} φ_{γ₂} ... φ_{γ_{|Γ|}}]

  (b) Solve the least-squares problem to obtain the new reconstructed signal, z_t.
      z_t = arg min_z ‖y − Φ_t z‖₂

  (c) Calculate the new approximation, a_t, and find the residual (error, r_t). a_t is the projection of y onto the space spanned by Φ_t.
      a_t = Φ_t z_t,  r_t = y − a_t

Phase 2: Reconstruction by OMP

  (a) Increment t by one, and terminate if t > k.

  (b) Apply steps (b)-(g) of OMP described in Section 2.2 to find the remaining k − |Γ| non-zero elements of ŝ.

The reconstructed sparse signal, ŝ, has the indexes of its non-zero elements listed in Λ_k. The value of the λ_j-th element of ŝ equals the j-th element of z_k.
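The two phases can be sketched as follows, assuming NumPy and a stand-in index list `known` for the LL3 subband (the real Γ comes from the wavelet layout of Section 3.1); dimensions and seed are illustrative:

```python
import numpy as np

def omp_pks(y, Phi, known, k):
    """OMP with Partially Known Support: the indices in `known` (e.g. the LL3
    subband) are selected up front without a correlation test (Phase 1); the
    remaining k - |known| indices are found by ordinary OMP steps (Phase 2)."""
    N = Phi.shape[1]
    idx = list(known)                                   # Phase 1: forced selection
    z, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    r = y - Phi[:, idx] @ z
    while len(idx) < k:                                 # Phase 2: ordinary OMP
        corr = np.abs(Phi.T @ r)
        corr[idx] = 0.0                                 # never reselect chosen bases
        idx.append(int(np.argmax(corr)))
        z, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        r = y - Phi[:, idx] @ z
    s_hat = np.zeros(N)
    s_hat[idx] = z
    return s_hat

rng = np.random.default_rng(4)
M, N, k = 32, 64, 6
Phi = rng.normal(size=(M, N))
Phi /= np.linalg.norm(Phi, axis=0)
known = [0, 1, 2, 3]                 # stand-in for the LL3 indices
s = np.zeros(N)
s[known] = [2.0, -1.0, 1.5, 0.8]
s[[20, 45]] = [1.2, -2.2]
y = Phi @ s
s_hat = omp_pks(y, Phi, known, k)
```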

3. Proposed method

The proposed impulsive noise rejection method is described in this section. Block processing and the vectorization of the wavelet coefficients are addressed before a description of the noise rejection method. The block processing is applied to reduce the computation cost. The proposed noise rejection method is applied before the reconstruction and divided into two stages. In the first stage, the algorithm to detect impulsive noise is applied. Then, OMP-PKS is applied to estimate the information that is lost due to the impulsive noise. The algorithm to detect the impulsive noise and the estimation of the missing information are described in Sections 3.2 and 3.3, respectively.

3.1. Block processing and the vectorization of the wavelet coefficients

In this article, the DWT is used to obtain the sparsified version of an image. Figure 3 shows an example of block processing and the vectorization of the wavelet coefficients. Figure 3a shows the structure of a wavelet transformed image. The LL3 subband is presented in red. Other subbands (LH, HL, and HH) in the third, the second, and the first levels are presented in green, orange, and blue, respectively. The LL3 subband is the most important subband, because it contains most of the energy in the image. Figure 3b shows the re-ordering of the wavelet coefficients. The coefficients are ordered such that the LL3 subband is located at the beginning of each row. The LL3 subband is followed by the other subbands in the third, the second, and the first levels.
Figure 3

The illustration of block processing and vectorization in Section 3.1: (a) wavelet transformed image; (b) wavelet subbands vectorization and reorganization; and (c) wavelet blocks.

The wavelet-domain image in Figure 3b is divided into blocks along its rows as shown in Figure 3c. In Figure 3c, the image has eight rows; consequently, it is divided into eight blocks. Each row in Figure 3c is considered as a sparse signal in this article.

The signal can be made sparser by wavelet shrinkage thresholding [33]. In wavelet shrinkage thresholding, all the coefficients in the LL3 subband are preserved, while coefficients outside the LL3 subband with magnitude less than the wavelet shrinkage threshold are set to zero. Note that not all coefficients outside the LL3 subband are set to zero: since only the small coefficients in the high-frequency subbands are zeroed, most distinct edges in the image are preserved. This sparsifying transformation causes little visible degradation if the wavelet shrinkage threshold is selected properly.
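The shrinkage rule can be written in a few lines; `l3` and `thresh` below are illustrative values, and the vector is assumed to be ordered as in Figure 3b with the LL3 coefficients first:

```python
import numpy as np

def shrink(coeffs, l3, thresh):
    """Wavelet shrinkage thresholding for one vectorised block: the first l3
    coefficients (the LL3 subband) are always kept; outside LL3, coefficients
    with magnitude below `thresh` are set to zero."""
    out = coeffs.copy()
    tail = out[l3:]                        # view into `out`, so edits stick
    tail[np.abs(tail) < thresh] = 0.0
    return out

block = np.array([5.0, -4.2, 0.01, 3.3,    # LL3: preserved, even if small
                  0.8, -0.05, 2.5, 0.1])   # outside LL3: small ones zeroed
sparse_block = shrink(block, l3=4, thresh=0.5)
```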

In the experiments, it was found that the vectorization according to the structure of Figure 3c is better than vectorization by the lexicographic order of Figure 3a. Figure 4 shows some reconstruction examples for these two vectorization methods. The sparsity rate (k/N) and the measurement rate (M/N) were set to 0.1 and 0.3, respectively, and all images were reconstructed by OMP-PKS. The top row of each image shows the reconstruction when the vectorization in each block followed the structure in Figure 3c; the bottom row shows the reconstruction under the lexicographic order of Figure 3a. There was no failed reconstruction (dark spot) in the top row, whereas there were some in the bottom row.
Figure 4

The reconstruction examples for different vectorization of the wavelet blocks. Types I and II indicate the vectorization according to the structure in Figure 3c and the vectorization by the lexicographic order of Figure 3a, respectively. (a) Lena, (b) Artificial image, (c) Dog, and (d) Flower.

3.2. The detection of the impulsive noise

Figures 5 and 6 show examples of the reconstruction from y corrupted by impulsive noise. Panels (a)-(c) of each figure show the original y (blue) corrupted by the impulsive noise (red), the original s, and the ŝ reconstructed from panel (a), respectively. The figures clearly indicate that the energy distributions were different: the energy of the signals in Figures 5c and 6c was spread out, while most energy of the signals in Figures 5b and 6b was contained in the third-level subbands.
Figure 5

The first reconstruction example when y was corrupted by impulsive noise. (a) The 128-D y corrupted by six impulsive noise samples, (b) the original 256-D s (k = 25), and (c) the signal reconstructed from (a) by OMP-PKS. In (b, c), the area to the left of the red dashed line belongs to the third-level subband; the area to the right belongs to the first- and the second-level subbands.

Figure 6

The second reconstruction example when y was corrupted by impulsive noise. (a) The 128-D y corrupted by six impulsive noise samples, (b) the original 256-D s (k = 25), and (c) the signal reconstructed from (a) by OMP-PKS. In (b, c), the area to the left of the red dashed line belongs to the third-level subband; the area to the right belongs to the first- and the second-level subbands.

Even though y has no definite structure, Figures 5 and 6 indicate that the energy distribution of s can be exploited to detect the existence of impulsive noise. Large impulsive noise leads to a bad approximation ŝ whose energy leaks out of the third-level subband. The ratio of the energy outside the third-level subband to the total energy is used to determine whether impulsive noise exists in y: a high ratio indicates that the energy is spread out and, thus, that impulsive noise is present. The impulsive noise has very large magnitude in comparison to y; consequently, if impulsive noise exists, it has the largest magnitude, and its removal is simply the removal of the elements with the largest magnitude. The size of the impulsive noise may vary, so the removal is performed iteratively until either of the following two stopping criteria is satisfied.
  (1) The reconstructed ŝ has most of its energy inside the third-level subband.

  (2) The reconstruction is unlikely to be successful because too many elements in y have been removed.

According to the stopping criteria, two thresholds need to be defined. The threshold in the first criterion indicates the amount of energy allowed to leak out of the third-level subband, measured as a ratio to the total energy; it is defined as the energy-ratio threshold, η. The threshold in the second criterion ensures that sufficient information is left for the reconstruction; it is called the rejection-ratio threshold, T, and is defined as the ratio of the number of removed elements to the size of y (M). Thus, the maximum number of elements that can be removed is TM. The optimum values of η and T are investigated in Section 4.2.

At each iteration, the noise-corrupted elements are removed, and the available measurement signal becomes smaller. Hence, the reconstruction algorithm must remain effective at a low measurement rate. OMP-PKS is adopted, and the algorithm for the detection and removal of impulsive noise is as follows.

Input:

  • The M × N measurement matrix, Φ = [ φ 1 φ 2 . . . φ N ]

  • The M-dimension compressed measurement signal, y

  • The sparsity level of the sparse signal, k

  • The number of wavelet coefficients in the third-level subband, l3

  • The energy-ratio threshold, η

  • The rejection-ratio threshold, T

Output:

  • The number of impulsive noise corrupted elements, n δ

  • The set containing the n δ indexes of the impulsive noise-corrupted elements, ς δ = ϖ 1 , ϖ 2 , . . . , ϖ n δ

Procedure:
  (a) Initialize t = 0, n_δ = 0, ς_δ = ∅, y_t = y, Φ_t = Φ.

  (b) Apply OMP-PKS to reconstruct ŝ from y_t and Φ_t.

  (c) Calculate the energy ratio (ER), where ŝᵢ is the i-th element of ŝ.
      ER = (Σ_{i=l₃+1}^{N} ŝᵢ²) / (Σ_{j=1}^{N} ŝⱼ²)

  (d) Terminate if ER < η.

  (e) Assign the elements in y_t having the maximum magnitude as the impulsive noise. Let α_m (m = 1, 2, ..., n_δt, where n_δt is the number of elements having the maximum magnitude in y_t) be the indexes of the newly assigned impulsive noise elements. Note that α_m are indexes of y. If more than one element has the maximum magnitude (n_δt > 1), all of them are removed in step (i).

  (f) Increment n_δ by n_δt and add α_m to ς_δ.

  (g) Terminate if n_δ ≥ TM.

  (h) Set t = t + 1.

  (i) Assign y_t the value of y after the noise elements (the elements with indexes in ς_δ) are removed, and assign Φ_t the value of Φ after the corresponding rows are removed.

  (j) Go to step (b).

If the algorithm is terminated in step (g), then the removal of impulsive noise is unsuccessful. Too many elements have been removed and it is unlikely that there is sufficient information to reconstruct ŝ and estimate the missing information in the next stage.

It should be noted that the proposed algorithm is applicable to images because image data have some degree of redundancy; the rejection-ratio threshold, T, can therefore be set quite large. For signals with a low degree of redundancy, T has to be very small, since the reconstruction is unlikely to succeed unless nearly all the information in y is used.
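The detection loop of steps (a)-(j) can be sketched as follows. The reconstruction method is passed in as a callable (OMP-PKS in the article). For the self-contained demo an overdetermined toy system is assumed, so that plain least squares can stand in for OMP-PKS; the dimensions, spike positions, and thresholds are illustrative:

```python
import numpy as np

def detect_impulsive(y, Phi, l3, eta, T, reconstruct):
    """Steps (a)-(j) above. Returns (removed indexes, success flag)."""
    M = y.size
    removed = []                                     # the set sigma_delta
    keep = np.ones(M, dtype=bool)
    while True:
        s_hat = reconstruct(y[keep], Phi[keep])      # (b) reconstruct
        energy = s_hat ** 2
        er = energy[l3:].sum() / energy.sum()        # (c) energy ratio
        if er < eta:                                 # (d) clean enough
            return removed, True
        avail = np.flatnonzero(keep)                 # (e) largest-|y| element
        worst = int(avail[np.argmax(np.abs(y[avail]))])
        removed.append(worst)                        # (f) record it
        keep[worst] = False
        if len(removed) >= T * M:                    # (g) too many rejected
            return removed, False

# Toy demo: energy concentrated in the first l3 coefficients of s.
rng = np.random.default_rng(3)
N, M, l3 = 16, 80, 4
Phi = rng.normal(size=(M, N))
s = np.zeros(N)
s[:l3] = [3.0, -2.0, 1.5, 1.0]
y_clean = Phi @ s
y = y_clean.copy()
spikes = [5, 20, 60]
y[spikes] += 10 * np.abs(y_clean).max() * np.array([1.0, -1.3, 1.7])

ls = lambda yy, PP: np.linalg.lstsq(PP, yy, rcond=None)[0]
removed, ok = detect_impulsive(y, Phi, l3, eta=0.05, T=0.5, reconstruct=ls)
```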

3.3. Estimation of the missing information

The outputs from the detection stage and y are used as the inputs of this stage. The noise-corrupted elements, specified in ς_δ, are removed. After the noise removal, the compressed measurement signal is smaller than the original y; consequently, reconstruction methods requiring a high measurement rate may fail to reconstruct ŝ. To preserve the measurement rate, the values of the removed elements are estimated such that they comply with the remaining noiseless elements. The estimation algorithm is as follows.

Input:

  • The M-dimension compressed measurement signal, y

  • The number of impulsive noise-corrupted elements, n δ

  • The set containing the n δ indexes of the impulsive noise-corrupted elements, ς δ = ϖ 1 , ϖ 2 , . . . , ϖ n δ

Output:

  • The estimated noise-free y, ŷ

Procedure:
  (a) Define y_s as y with its ϖ_i-th (i = 1, 2, ..., n_δ) elements removed. Define Φ_s as Φ with its ϖ_i-th (i = 1, 2, ..., n_δ) rows removed.

  (b) Apply OMP-PKS to reconstruct ŝ_s from y_s and Φ_s.

  (c) Define ỹ = Φŝ_s and estimate the i-th element of ŷ (i = 1, 2, ..., M) as follows.
      ŷᵢ = yᵢ if i ∉ ς_δ;  ŷᵢ = ỹᵢ if i ∈ ς_δ

After this process, the impulsive noise-corrupted elements in y are replaced by values that comply with the noise-free elements. Conventional CS reconstruction methods can then be applied to reconstruct ŝ from the impulsive noise-free ŷ.
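Steps (a)-(c) of the estimation stage can be sketched as follows; again an overdetermined toy system is assumed so that least squares stands in for OMP-PKS, and the spike positions are taken as already known from the detection stage:

```python
import numpy as np

def estimate_missing(y, Phi, noise_idx, reconstruct):
    """Steps (a)-(c): reconstruct from the clean rows only, then fill the
    rejected elements of y with their re-projection through Phi."""
    keep = np.ones(y.size, dtype=bool)
    keep[noise_idx] = False
    s_hat = reconstruct(y[keep], Phi[keep])   # (a)-(b); OMP-PKS in the article
    y_tilde = Phi @ s_hat                     # (c) re-projection y~ = Phi s^
    y_hat = y.copy()
    y_hat[noise_idx] = y_tilde[noise_idx]     # clean elements kept verbatim
    return y_hat

# Toy demo with hypothetical dimensions and spike positions.
rng = np.random.default_rng(5)
Phi = rng.normal(size=(40, 8))
s = rng.normal(size=8)
y_clean = Phi @ s
y = y_clean.copy()
noise_idx = [2, 17, 33]
y[noise_idx] += 50.0                          # impulsive corruption

ls = lambda yy, PP: np.linalg.lstsq(PP, yy, rcond=None)[0]
y_hat = estimate_missing(y, Phi, noise_idx, ls)
```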

4. Experiment and discussion

4.1. Experimental environment

The experiment was conducted on a PC with a 2.83-GHz Intel Core 2 Quad CPU and 4-GB RAM. All methods were implemented in 64-bit MATLAB R2011a. The proposed method was tested on 60 images, all resized to 256 × 256. Figure 7 shows the test images, which consist of 10 standard test images, 12 artificial images, and 38 natural images. The artificial and natural images are available at http://sourceforge.net/projects/testimages/files/.
Figure 7

The images used in the experiment. Images in the first row are the standard test images. Images in the second row and the first two images in the third row are the artificial images. The remaining images are the natural images. (The artificial and natural images are available at http://sourceforge.net/projects/testimages/files/).

Octave-tree DWT was used to transform the test images to the sparse domain, with Daubechies 8 (db8) as the mother wavelet. Wavelet shrinkage thresholding [33] was applied to make the signal sparser. The probability of impulsive noise is denoted as p, with p ∈ {0, 0.05, 0.10, 0.15, 0.20}. The magnitude of the impulsive noise was set relative to the maximum magnitude in y (y_max). The measurement matrix was a Hadamard matrix. Each wavelet image was divided into 256 blocks of 1 × 256. The sparsity rates (k/N) of the blocks in an image were intentionally varied to demonstrate that one set of thresholds is applicable to various sparsity rates. The average sparsity rate in each test image was set to 0.1. The measurement rate (M/N) of an image was averaged over all blocks in the image. The average measurement rates used in the experiment were 0.2, 0.3, 0.4, 0.5, and 0.6.

The experiment consists of two parts: (1) the evaluation of the two thresholds (η and T) and the minimum size of the detectable impulsive noise given in Section 4.2 and (2) the performance evaluation of the proposed method given in Section 4.3.

4.2. Evaluation of the two thresholds and the minimum size of the detectable impulsive noise

In this section, 500 blocks were randomly selected from the blocks of the 60 test images. The sparsity rate was fixed at 0.1. Table 1 lists the percentage of cases in which the proposed method failed to correctly reject the impulsive noise-corrupted elements, as the threshold η and the magnitude of the impulsive noise were varied. Each value in the table is averaged over the five values of p and the five measurement rates. The values indicate that the proposed method could not keep the rate of inaccurate rejection below 1% when the magnitude of the impulsive noise was less than 2.5 y_max.
Table 1

The percent of inaccurate noise rejection of the proposed method

          The magnitudes of impulsive noise
η       1.25 y_max   2.5 y_max   3 y_max   5 y_max   10 y_max
0.01      9.09         8.41        8.40      8.39      8.34
0.02      4.02         1.71        1.76      1.70      1.72
0.03      5.28         0.60        0.60      0.54      0.54
0.04      8.46         0.35        0.33      0.28      0.28
0.05     12.00         0.24        0.13      0.17      0.15
0.1      33.04         1.22        0.30      0.16      0.10
0.15     50.07         5.21        1.68      0.16      0.14
0.2      61.62        13.98        5.35      0.30      0.18
0.25     68.34        24.28       12.64      0.94      0.26
0.3      73.98        36.53       22.03      2.00      0.60

Bold in the original table marks the minimum percentage of inaccurate noise rejection.

Table 1 also indicates the relationship of η to the percentage of inaccurate rejection. Inaccurate rejection is the result of (1) the rejection of noise-free elements and (2) the failure to reject noise-corrupted elements. When η was too small, the energy-ratio criterion was too strict and the proposed method did not accept even the correct energy distribution of ŝ; consequently, it began removing elements uncorrupted by noise. Conversely, when η was too large, the energy-ratio criterion became too lax and the proposed method accepted even the incorrect energy distribution of ŝ; consequently, it failed to remove the noise-corrupted elements. The range of η giving less than 1% inaccurate rejection widened as the magnitude of the impulsive noise grew, because the effect of the impulsive noise on the energy distribution became more distinct and easier to detect. When the magnitude of the impulsive noise was at least 2.5 ymax, the values of η giving less than 1% inaccurate rejection were 0.03, 0.04, and 0.05. Among these three values, η = 0.05 gave the most accurate rejection.
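Concretely, the percentage in Table 1 combines the two failure modes described above. The accounting below is our reading of the metric (both kinds of mistakes counted against the M elements of y), sketched as a minimal helper:

```python
import numpy as np

def inaccurate_rejection_percent(true_corrupted, detected, M):
    """Percent of the M elements of y handled incorrectly: noise-free
    elements that were rejected plus noise-corrupted elements kept."""
    true_corrupted, detected = set(true_corrupted), set(detected)
    false_rejections = detected - true_corrupted   # clean elements removed
    missed = true_corrupted - detected             # corrupted elements kept
    return 100.0 * (len(false_rejections) + len(missed)) / M

# Example: elements {3, 40, 77} are truly corrupted; the detector flags
# {3, 40, 90}, so 77 is missed and 90 is falsely rejected: 2/100 -> 2 %.
rate = inaccurate_rejection_percent([3, 40, 77], [3, 40, 90], 100)
```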

The evaluation of the optimum rejection-ratio threshold, T, was performed by finding the maximum number of elements in y that could be removed without causing high error between ŝ and s. Figure 8 shows the MSE of the signals reconstructed by OMP-PKS when TM elements in y were removed; different measurement rates are shown in different colors. The figure indicates that as the measurement rate increased, more elements could be removed without a drastic change in MSE. At the measurement rate of 0.2, the MSE increased approximately exponentially once T reached 0.45. At higher measurement rates, the effect of T was not distinct, even when more than half of y was removed.
Figure 8

The MSE of the reconstructed signal when T was varied. The sparsity rate was set to 0.1.

Because the benefit of CS is the capability of compressing the signal to a very small size, the measurement rate should be kept low. It is therefore recommended that T be selected such that it is applicable even at a low measurement rate. In the following section, T was set to 0.4 to ensure a high probability of successful reconstruction. The value of η was set to 0.05, the optimal value from Table 1.
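The trade-off in Figure 8 can be reproduced in miniature with plain OMP standing in for OMP-PKS (which additionally forces the known support) and a Gaussian matrix standing in for the Hadamard ensemble. The sizes below (k = 8 and the two values of M) are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 256, 8

def omp(Phi, y, k):
    """Plain orthogonal matching pursuit: greedily select k columns."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    s_hat = np.zeros(Phi.shape[1])
    s_hat[support] = coef
    return s_hat

def mse_after_removal(M, T):
    """MSE of OMP recovery after a fraction T of M measurements is removed."""
    s = np.zeros(N)
    s[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    y = Phi @ s
    keep = rng.choice(M, size=M - int(T * M), replace=False)
    return np.mean((omp(Phi[keep], y[keep], k) - s) ** 2)

def avg_mse(M, T, trials=5):
    return np.mean([mse_after_removal(M, T) for _ in range(trials)])

# With M = 32, removing T = 0.45 of the rows leaves only 18 equations for
# the 8 unknown positions and OMP usually fails; with M = 128 the margin
# is ample and the same T is harmless, mirroring Figure 8.
```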

4.3. Performance evaluation

In this section, the following four reconstruction methods were investigated:

1. OMP-PKS
2. OMP-PKS with the proposed rejection method as the preprocessing (OMP-PKS+R)
3. Model-based LIHT (MLIHT), i.e., LIHT forced to consider the elements in the LL3 subband as non-zero elements
4. MLIHT with the proposed rejection method as the preprocessing (MLIHT+R)

The Lorentzian parameter and the number of iterations for MLIHT were 0.25 and 100, respectively. The values of η and T were 0.05 and 0.4, respectively. There were 256 y's in an image, and ymax was chosen as the maximum magnitude among them. The magnitude of the impulsive noise varied according to a Gaussian pdf with mean 7 ymax and standard deviation ymax.

The performance was evaluated based on the PSNR of the reconstructed images, the computation time, and the visual quality of the reconstructed images.
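PSNR here is the standard definition for 8-bit images, computed from the MSE against the peak value of 255. A minimal sketch (not the authors' code):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 16 gray levels gives MSE = 256,
# i.e. PSNR = 10*log10(255^2 / 256) ~ 24.05 dB.
a = np.full((64, 64), 100, dtype=np.uint8)
b = np.full((64, 64), 116, dtype=np.uint8)
```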

Figure 9 shows the experimental results for the standard test images. Figure 9a-e shows the PSNR (left column) and the computation time (right column) at different noise probabilities p. At p = 0 (noiseless), adding the proposed method to OMP-PKS and MLIHT did not reduce the PSNR of the reconstructed images, indicating that the proposed method preserved y when there was no impulsive noise. When y was corrupted by impulsive noise (p > 0), the reconstruction based on OMP-PKS (the blue line) gave very low PSNR, because OMP-PKS is designed under the assumption of bounded noise. The reconstruction based on OMP-PKS could not be improved by increasing the measurement rate. However, when the noisy y was preprocessed by the proposed method, the reconstruction based on OMP-PKS (the dashed blue line) was very effective. At measurement rates of 0.4 and higher, the reconstruction from the noisy y by OMP-PKS+R had PSNR comparable to the reconstruction from the noiseless y by OMP-PKS.
Figure 9

The performance comparisons for standard test images at various measurement rates when the noise probability ( p ) is (a) 0, (b) 0.05, (c) 0.1, (d) 0.15, and (e) 0.2. The graphs in the first column (second column) show the relationship between PSNR (the computation time) and the measurement rate.

At p = 0.05, the effect of adding the proposed method as the preprocessing to MLIHT was minimal; however, at higher p, the addition of the proposed method (the dashed red line) resulted in higher PSNR than reconstruction by MLIHT alone (the red line). When p was 0.15 or higher, MLIHT was no longer an effective reconstruction method, but MLIHT+R still was. This indicates that the proposed method increased MLIHT's robustness against impulsive noise.

It should be noted that even though MLIHT was based on LIHT, which was designed to be robust against impulsive noise, MLIHT+R provided lower PSNR than OMP-PKS+R because MLIHT required a higher measurement rate. Figure 9 indicates that MLIHT+R was as effective as OMP-PKS when the measurement rate was 0.6, and it should become better at higher measurement rates. However, improvement by increasing the measurement rate is not recommended, because it leads to a large y and eliminates the benefit of CS.

Figure 9 also indicates the relationship between the measurement rate and p (noise probability). When p was higher, the measurement rate had to be set higher, because the number of noise-corrupted elements was larger; consequently, a larger y was required to compensate for the removal of more elements. The figure shows that for OMP-PKS+R, a measurement rate of 0.4 gave good reconstruction for all p in this experiment. The right column of Figure 9 shows the computation time of OMP-PKS, OMP-PKS+R, MLIHT, and MLIHT+R. Since at least one extra reconstruction is required in the proposed method, the computation time is at least doubled. The computation time for reconstructing the 256 blocks in an image can be reduced as follows.
(a) Apply the proposed rejection method to the first block. Define β as the smallest magnitude of the noise-corrupted elements in the first block.

(b) Move to the next block. Define the compressed measurement of the new block as y_curr.

(c) Assign the elements in y_curr having magnitude not less than β as impulsive noise. Initialize the variables in step (a) of Section 3.2 so that they reflect the removal of the elements with magnitude not less than β.

(d) Apply the proposed rejection method to y_curr. If β is larger than the smallest magnitude of the noise-corrupted elements in y_curr, set β to that value.

(e) If the current block is the last block in the image, terminate. Otherwise, go to step (b).

The above algorithm assumes that the magnitude of the impulsive noise in every block is approximately the same (or shares the same distribution). The graphs indicate that the computation time of reconstruction with the proposed rejection method was no more than four times that of reconstruction without it.
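Steps (a)-(e) can be sketched as a driver around the rejection method. Here `reject_block` is a hypothetical callable standing in for the full method of Section 3.2: it takes the pre-flagged indices of step (c) and returns the indices of y judged corrupted.

```python
import numpy as np

def reject_image(blocks, reject_block):
    """Propagate beta, the smallest corrupted magnitude seen so far, so
    that later blocks can pre-flag obvious spikes before the (costly)
    rejection method runs.  Assumes the impulsive noise in every block
    shares approximately the same magnitude distribution."""
    beta = np.inf
    corrupted_per_block = []
    for y in blocks:
        presumed = np.flatnonzero(np.abs(y) >= beta)  # step (c)
        corrupted = reject_block(y, presumed)         # steps (a)/(d)
        corrupted_per_block.append(corrupted)
        if len(corrupted) > 0:                        # step (d): tighten beta
            beta = min(beta, float(np.min(np.abs(y[corrupted]))))
    return corrupted_per_block
```

A toy `reject_block` that flags any magnitude above some cutoff reproduces the intended behavior: once a spike of magnitude 50 fixes β, a later larger spike is pre-flagged before the method runs.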

Figures 10 and 11 show the results for the artificial and the natural images, respectively. The trends of the PSNR and the computation time were similar to those in Figure 9. From the three figures, it can be concluded that the proposed method should be included in the reconstruction from impulsive-noise-corrupted y. The addition of the proposed method increased the computation time by no more than a factor of four. Finally, OMP-PKS+R outperformed MLIHT+R.
Figure 10

The performance comparisons for artificial images at various measurement rates when the noise probability ( p ) is (a) 0, (b) 0.05, (c) 0.1, (d) 0.15, and (e) 0.2. The graphs in the first column (second column) show the relationship between PSNR (the computation time) and the measurement rate.

Figure 11

The performance comparisons for natural images at various measurement rates when the noise probability ( p ) is (a) 0, (b) 0.05, (c) 0.1, (d) 0.15, and (e) 0.2. The graphs in the first column (second column) show the relationship between PSNR (the computation time) and the measurement rate.

Figures 12, 13, 14, 15, 16, and 17 show examples of the reconstruction results when the measurement rate was 0.5. The original image is shown in the first column. The reconstruction results based on MLIHT, MLIHT+R, OMP-PKS, and OMP-PKS+R are shown in the second, third, fourth, and fifth columns, respectively. When the impulsive noise was added to y, the reconstruction based on OMP-PKS failed in every case. The reconstruction based on MLIHT failed in some cases at p = 0.1 and in every case at p ≥ 0.15. The addition of the proposed algorithm to OMP-PKS and MLIHT, namely OMP-PKS+R and MLIHT+R, led to successful reconstruction in every case. Furthermore, OMP-PKS+R provided reconstruction results that were more similar to the original images than those of MLIHT+R. These results comply with the conclusion drawn from the PSNR graphs in Figures 9, 10, and 11.
Figure 12

The part of the reconstructed Peppers at the measurement rate of 0.5 with the noise probability ( p ) of (a) 0, (b) 0.05, (c) 0.1, (d) 0.15, and (e) 0.2. The images from left to right are the original image and reconstructed images based on MLIHT, MLIHT+R, OMP-PKS, and OMP-PKS+R, respectively.

Figure 13

The part of the reconstructed Mandrill at the measurement rate of 0.5 with the noise probability ( p ) of (a) 0, (b) 0.05, (c) 0.1, (d) 0.15, and (e) 0.2. The images from left to right are the original image and reconstructed images based on MLIHT, MLIHT+R, OMP-PKS, and OMP-PKS+R, respectively.

Figure 14

The part of the reconstructed artificial image (Ripple) at the measurement rate of 0.5 with the noise probability ( p ) of (a) 0, (b) 0.05, (c) 0.1, (d) 0.15, and (e) 0.2. The images from left to right are the original image and reconstructed images based on MLIHT, MLIHT+R, OMP-PKS, and OMP-PKS+R, respectively.

Figure 15

The part of the reconstructed artificial image (Arc) at the measurement rate of 0.5 with the noise probability ( p ) of (a) 0, (b) 0.05, (c) 0.1, (d) 0.15, and (e) 0.2. The images from left to right are the original image and reconstructed images based on MLIHT, MLIHT+R, OMP-PKS, and OMP-PKS+R, respectively.

Figure 16

The part of the reconstructed natural image (Building) at the measurement rate of 0.5 with the noise probability ( p ) of (a) 0, (b) 0.05, (c) 0.1, (d) 0.15, and (e) 0.2. The images from left to right are the original image and reconstructed images based on MLIHT, MLIHT+R, OMP-PKS, and OMP-PKS+R, respectively.

Figure 17

The part of the reconstructed natural image (Wing) at the measurement rate of 0.5 with the noise probability ( p ) of (a) 0, (b) 0.05, (c) 0.1, (d) 0.15, and (e) 0.2. The images from left to right are the original image and reconstructed images based on MLIHT, MLIHT+R, OMP-PKS, and OMP-PKS+R, respectively.

It is possible that more than one kind of noise exists in the system. The proposed method was therefore applied to the reconstruction from y corrupted by both Gaussian and impulsive noise. Examples of the reconstruction results are shown in Figure 18. The Gaussian noise was applied such that the SNR of y was 20 dB. The noise probability (p) and the measurement rate were set to 0.1 and 0.5, respectively. The first column shows the reconstruction results when the impulsive noise was correctly removed. The second column shows the reconstruction results when OMP-PKS+R was applied to the reconstruction from y corrupted by both Gaussian and impulsive noise. In order to cope with the higher error from the Gaussian noise, more energy was allowed outside the third-level subband and more data were required for the reconstruction; the values of η and T were set to 0.1 and 0.3, respectively. The images in the first and second columns were quite similar, and the artifacts in the reconstruction based on OMP-PKS+R were mostly the result of the Gaussian noise. Figure 18 demonstrates the prospect of using the proposed method to remove impulsive noise in an environment corrupted by more than one type of noise. However, further testing for y corrupted by more than one type of noise is necessary and is part of our future research.
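The mixed-noise condition can be synthesized as below. `add_mixed_noise` is a hypothetical helper, not the authors' code: the Gaussian level is fixed by the target SNR in dB, and for simplicity the spikes here use a fixed magnitude rather than the Gaussian-distributed magnitudes of the main experiment.

```python
import numpy as np

def add_mixed_noise(y, snr_db, p, spike, rng):
    """Add Gaussian noise at a target SNR (dB) plus impulsive spikes
    that hit each element independently with probability p."""
    noise_power = np.mean(y ** 2) / 10.0 ** (snr_db / 10.0)
    gaussian = rng.normal(0.0, np.sqrt(noise_power), size=y.shape)
    hits = rng.random(y.shape) < p
    return y + gaussian + hits * rng.choice([-1.0, 1.0], size=y.shape) * spike

rng = np.random.default_rng(2)
y = rng.standard_normal(256)
y_noisy = add_mixed_noise(y, snr_db=20.0, p=0.1,
                          spike=7 * np.max(np.abs(y)), rng=rng)
```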
Figure 18

The parts of reconstructed: (a) Peppers, (b) Mandrill, (c) Ripple, (d) Arc, (e) Building, and (f) Wing. The first column shows the reconstruction results based on OMP-PKS when y was corrupted by Gaussian noise only. The second column shows the reconstruction results based on OMP-PKS+R when y was corrupted by Gaussian and impulsive noises. The measurement rate was 0.5. The signal with Gaussian noise has 20 dB SNR and p = 0.1.

5. Conclusion

An impulsive noise rejection method for the CS reconstruction of image data was proposed. The sparsified version of an image is obtained by applying octave-tree DWT with db8 as the mother wavelet. The structure of the energy distribution in the wavelet domain and the capability of reconstructing the signal from an incomplete y are exploited to detect the presence of impulsive noise. After the noise-corrupted elements are removed, their values are estimated. The experimental results on 60 test images indicated that the proposed rejection method improved the robustness of conventional CS reconstruction methods against impulsive noise. The robustness of the reconstruction method against combined Gaussian and impulsive noise was also investigated.

Declarations

Acknowledgements

The authors would like to thank the reviewers for their valuable comments and suggestions. This research was financially supported by the National Telecommunications Commission Fund (Grant No. PHD/006/2551 to P. Sermwuthisarn and S. Auethavekiat) and the Telecommunications Research Industrial and Development Institute (TRIDI).

Authors’ Affiliations

(1)
Department of Electrical Engineering, Chulalongkorn University, Bangkok, Thailand
(2)
National Electronics and Computer Technology Center, Pathumthani, Thailand
(3)
Department of Electrical Engineering, Assumption University, Bangkok, Thailand

References

  1. Donoho DL: Compressed sensing. IEEE Trans Inf Theory 2006, 52(4):1289-1306.
  2. Candes EJ, Romberg J: Sparsity and incoherence in compressive sampling. Inverse Problems 2007, 23(3):969-985. doi:10.1088/0266-5611/23/3/008
  3. Candes EJ, Wakin MB: An introduction to compressive sampling. IEEE Signal Process Mag 2008, 25(2):21-30.
  4. Marvasti F, Amini A, Haddadi F, Soltanolkotabi M, Khalaj BH, Aldroubi A, Sanei S, Chambers J: A unified approach to sparse signal processing. EURASIP J Adv Signal Process 2012. doi:10.1186/PREACCEPT-1686979482577015
  5. Tropp JA: Just relax: convex programming methods for identifying sparse signals in noise. IEEE Trans Inf Theory 2006, 52(3):1030-1051.
  6. Candes EJ, Romberg J, Tao T: Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 2006, 59(8):1207-1223. doi:10.1002/cpa.20124
  7. Needell D, Vershynin R: Uniform uncertainty principle and signal reconstruction via regularized orthogonal matching pursuit. Found Comput Math 2008, 9(3):317-334.
  8. Omidiran D, Wainwright MJ: High-dimensional subset recovery in noise: sparsified measurements without loss of statistical efficiency. Department of Statistics, UC Berkeley, USA; 2008.
  9. Candès EJ, Tao T: The Dantzig selector: statistical estimation when p is much larger than n. Ann Statist 2007, 35(6):2313-2351. doi:10.1214/009053606000001523
  10. Needell D, Tropp JA: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl Comput Harmonic Anal 2008, 26(3):301-321.
  11. Ben-Haim Z, Eldar YC, Elad M: Coherence-based near-oracle performance guarantees for sparse estimation under Gaussian noise. In Proc IEEE ICASSP. Texas, USA; 2010:3590.
  12. Needell D: Topics in compressed sensing. Ph.D. dissertation, Math., Univ. of California, Davis; 2009.
  13. Blumensath T, Davies ME: Iterative hard thresholding for compressed sensing. Appl Comput Harmonic Anal 2009, 27(3):265-274. doi:10.1016/j.acha.2009.04.002
  14. Blumensath T, Davies ME: Normalized iterative hard thresholding: guaranteed stability and performance. IEEE J Sel Topics Signal Process 2010, 4(2):298-309.
  15. Popilka B, Setzer S, Steidl G: Signal recovery from incomplete measurements in the presence of outliers. Inverse Problems Imag 2007, 1(4):661-672.
  16. Carrillo RE, Aysal TC, Barner KE: A theoretical framework for problems requiring robust behavior. In Proc IEEE CAMSAP. Aruba, Dutch Antilles; 2009:25.
  17. Carrillo RE, Aysal TC, Barner KE: A generalized Cauchy distribution framework for problems requiring robust behavior. EURASIP J Adv Signal Process 2010, 2010:19. Article ID 312989
  18. Carrillo RE, Barner KE, Aysal TC: Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. IEEE J Sel Topics Signal Process 2010, 4(2):392-408.
  19. Arce GR, Otero D, Ramirez AB, Paredes J: Reconstruction of sparse signals from l1 dimensionality-reduced Cauchy random projections. In Proc IEEE ICASSP. Texas, USA; 2010:4014.
  20. Ramirez AB, Arce GR, Sadler BM: Fast algorithm for reconstruction of sparse signals from Cauchy random projections. In Proc EUSIPCO. Aalborg, Denmark; 2010:432.
  21. Paredes JL, Arce GR: Compressive sensing signal reconstruction by weighted median regression estimates. IEEE Trans Signal Process 2011, 59(6):2585-2601.
  22. Carrillo RE, Barner KE: Lorentzian based iterative hard thresholding for compressed sensing. In Proc IEEE ICASSP. Prague, Czech Republic; 2011:3664.
  23. La C, Do MN: Signal reconstruction using sparse tree representation. In Proc SPIE Conf on Wavelet Applications in Signal and Image Processing. Volume 5914. San Diego, USA; 2005:273.
  24. La C, Do MN: Tree-based orthogonal matching pursuit algorithm for signal reconstruction. In Proc IEEE ICIP. Georgia, USA; 2006:1277.
  25. Duarte MF: Compressed sensing for signal ensembles. Ph.D. dissertation, Dept. Elect. Eng., Rice Univ., Houston, Texas; 2009.
  26. He L, Carin L: Exploiting structure in wavelet-based Bayesian compressive sensing. IEEE Trans Signal Process 2009, 57:3488-3497.
  27. Baron D, Wakin MB, Duarte MF, Sarvotham S, Baraniuk RG: Distributed compressed sensing. Rice Univ., Dept. Elect. and Comp. Eng., Houston, TX; 2006.
  28. Carrillo RE, Polania LF, Barner KE: Iterative algorithm for compressed sensing with partially known support. In Proc IEEE ICASSP. Texas, USA; 2010:3654.
  29. Xu M, Lu J: K-cluster-values compressive sensing for imaging. EURASIP J Adv Signal Process 2011, 2011:75. doi:10.1186/1687-6180-2011-75
  30. Zahedpour S, Feizi S, Amini A, Ferdosizadeh M, Marvasti F: Impulsive noise cancellation based on soft decision and recursion. IEEE Trans Instrum Meas 2009, 58(8):2780-2790.
  31. Tropp JA, Gilbert AC: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 2007, 53(12):4655-4666.
  32. Mallat SG, Zhang Z: Matching pursuits with time-frequency dictionaries. IEEE Trans Signal Process 1993, 41(12):3397-3415. doi:10.1109/78.258082
  33. Donoho DL: De-noising by soft-thresholding. IEEE Trans Inf Theory 1995, 41(3):613-627.

Copyright

© Sermwuthisarn et al; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
