
# RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

*EURASIP Journal on Advances in Signal Processing*
**volume 2014**, Article number: 125 (2014)

## Abstract

Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications, such as radar imaging. Unlike NSS, in this paper, we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighted factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighted factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo-based computer simulations are given to show that the ASS achieves much better mean square error (MSE) performance than the NSS.

## 1 Introduction

Compressive sensing (CS) [1, 2] has attracted considerable attention in compressive radar/sonar sensing [3, 4] due to its many applications in civilian, military, and biomedical fields. The main task of CS problems can be divided into three aspects as follows: (1) Sparse signal learning: the basic model suggests that natural signals can be compactly expressed, or efficiently approximated, as a linear combination of prespecified atom signals, where the linear coefficients are sparse as shown in Figure 1 (i.e., most of them are zero). (2) Random measurement matrix design: it is important to construct a sensing matrix which allows recovery of as many entries of the unknown signal as possible from as few measurements as possible. Hence, the sensing matrix should satisfy the conditions of incoherence and the restricted isometry property (RIP) [5]. Fortunately, some special matrices (e.g., Gaussian matrices and Fourier matrices) have been shown to satisfy the RIP with high probability. (3) Sparse reconstruction algorithms: based on the previous two steps, many sparse reconstruction algorithms have been proposed to find a suboptimal sparse solution.

It is well known that CS provides a robust framework that can reduce the number of measurements required to estimate a sparse signal. Many nonlinear sparse sensing (NSS) algorithms and their variants have been proposed to deal with CS problems. They mainly fall into two basic categories: convex relaxation (e.g., basis pursuit de-noising (BPDN) [6]) and greedy pursuit (e.g., orthogonal matching pursuit (OMP) [7]). These NSS-based CS methods suffer from either high complexity or low performance, especially in the low signal-to-noise ratio (SNR) regime. Indeed, it is very hard for them to achieve a trade-off between complexity and performance.

In this paper, we propose an adaptive sparse sensing (ASS) method using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm [8] to solve CS problems. Different from NSS methods, each observation and its corresponding sensing signal vector are processed by the RZA-NLMF algorithm to reconstruct the sparse signal during the process of adaptive filtering. According to the concrete requirements, the complexity of the proposed ASS method can be adaptively reduced without sacrificing much recovery performance. The effectiveness of our proposed method is confirmed via computer simulations in comparison with NSS.

The remainder of the paper is organized as follows. The basic CS problem is introduced and the typical NSS method is presented in Section 2. In Section 3, ASS using the RZA-NLMF algorithm is proposed for solving CS problems and its derivation process is highlighted. Computer simulations are given in Section 4 in order to evaluate the performance of the proposed ASS method and compare it with NSS. Finally, our contributions are summarized in Section 5.

## 2 Nonlinear sparse sensing

Assume that a finite-length discrete signal vector $\mathbf{s} = [s_1, s_2, \cdots, s_N]^T$ can be sparsely represented in a signal domain $\mathbf{D}$, that is,

$$\mathbf{s} = \mathbf{D}\mathbf{h}, \qquad (1)$$

where $\mathbf{h} = [h_1, h_2, \cdots, h_N]^T$ is the unknown $K$-sparse coefficient vector ($K \ll N$) and $\mathbf{D}$ is an $N \times N$ orthogonal basis matrix with $\{\mathbf{d}_i,\ i = 1, 2, \cdots, N\}$ as its columns. Take a random measurement matrix $\mathbf{W}$; the received signal vector $\mathbf{y} = [y_1, \cdots, y_m, \cdots, y_M]^T$ can then be written as

$$\mathbf{y} = \mathbf{W}\mathbf{s} + \mathbf{z} = \mathbf{W}\mathbf{D}\mathbf{h} + \mathbf{z} = \mathbf{X}\mathbf{h} + \mathbf{z}, \qquad (2)$$

where $\mathbf{X} = \mathbf{W}\mathbf{D}$ denotes the $M \times N$ random sensing matrix

$$\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_M]^T, \qquad (3)$$

and $\mathbf{z} = [z_1, \cdots, z_m, \cdots, z_M]^T$ is an additive white Gaussian noise (AWGN) vector with distribution $\mathcal{CN}\left(0, \sigma_n^2\mathbf{I}_M\right)$, where $\mathbf{I}_M$ denotes the $M \times M$ identity matrix. From the perspective of CS, the sensing matrix $\mathbf{X}$ satisfies the restricted isometry property (RIP) with overwhelming probability [5], so that the sparse signal $\mathbf{h}$ can be reconstructed correctly by NSS methods, e.g., BPDN [6] and OMP [7]. Take BPDN as an example to illustrate the NSS realization approach. The sensing matrix $\mathbf{X}$ satisfies the RIP of order $K$ with positive parameter $\delta_K \in (0, 1)$, i.e., $\mathbf{X} \in \mathrm{RIP}(K, \delta_K)$, if

$$\left(1 - \delta_K\right)\|\mathbf{h}\|_2^2 \le \|\mathbf{X}\mathbf{h}\|_2^2 \le \left(1 + \delta_K\right)\|\mathbf{h}\|_2^2 \qquad (4)$$

holds for all $\mathbf{h}$ having no more than $K$ nonzero coefficients. The unknown sparse vector $\mathbf{h}$ can then be reconstructed by BPDN as

$$\tilde{\mathbf{h}}_{\mathrm{BPDN}} = \arg\min_{\mathbf{h}}\left\{\frac{1}{2}\|\mathbf{y} - \mathbf{X}\mathbf{h}\|_2^2 + \lambda\|\mathbf{h}\|_1\right\}, \qquad (5)$$

where $\lambda$ denotes a regularization parameter which balances the mean square error (MSE) term and the sparsity of $\mathbf{h}$. If the mutual interference of the sensing matrix $\mathbf{X}$ can be completely removed, then the theoretical Cramer-Rao lower bound (CRLB) of the NSS can be derived as [9, 10]

$$\mathrm{CRLB}_{\mathrm{NSS}} = \frac{K\sigma_n^2}{M\sigma^2}. \qquad (6)$$
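To make the NSS pipeline concrete, the following Python sketch generates a sparse vector, measures it through a random Gaussian sensing matrix as in the measurement model (2), and recovers it with a minimal OMP [7], the greedy counterpart of BPDN. All sizes, the noise level, and the random seed are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 64, 32, 4                      # signal length, measurements, sparsity

# K-sparse vector h with Gaussian nonzero coefficients at random positions
h = np.zeros(N)
h[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

# Random Gaussian sensing matrix (satisfies the RIP with high probability [5])
X = rng.standard_normal((M, N)) / np.sqrt(M)
y = X @ h + 0.01 * rng.standard_normal(M)    # y = X h + z

def omp(X, y, K):
    """Minimal orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then re-fit on the support by least squares."""
    support, r = [], y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(X.T @ r))))
        h_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        r = y - X[:, support] @ h_s
    h_hat = np.zeros(X.shape[1])
    h_hat[support] = h_s
    return h_hat

h_hat = omp(X, y, K)
print("reconstruction error:", np.linalg.norm(h_hat - h))
```

A convex solver minimizing the BPDN objective (5) could replace `omp` here without changing the measurement setup.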

## 3 Adaptive sparse sensing

We reconsider the above system model (2) in the adaptive sensing case. At the observation side, the $m$-th observed signal $y_m$ can be written as

$$y_m = \mathbf{x}_m^T\mathbf{h} + z_m, \qquad (7)$$

for $m = 1, 2, \cdots, M$. The objective of ASS is to adaptively estimate the unknown sparse vector $\mathbf{h}$ from the sensing signal vectors $\mathbf{x}_m$ and the observed signals $y_m$. Different from NSS approaches, we propose an alternative ASS method using the RZA-NLMF algorithm, as shown in Figure 2. Assume that $\tilde{y}_m(n) = \mathbf{x}_m^T\tilde{\mathbf{h}}(n)$ is the estimated observed signal, which depends on the signal estimator $\tilde{\mathbf{h}}(n)$; the $n$-th observed signal error is then $e_m(n) = y_m - \tilde{y}_m(n)$. Notice that $e_m(n)$ corresponds to the $n$-th iterative error when using the $m$-th sensing signal vector $\mathbf{x}_m$, where $m = \mathrm{mod}(n, M)$ and $\mathrm{mod}(\cdot)$ denotes the modulo function; for example, $\mathrm{mod}(5, 3) = 2$ and $\mathrm{mod}(5, 2) = 1$. First of all, the cost function of the RZA-NLMF algorithm is constructed as

$$G(n) = \frac{1}{4}e_m^4(n) + \lambda_{\mathrm{ass}}\sum_{i=1}^{N}\log\big(1 + \epsilon\,|\tilde{h}_i(n)|\big), \qquad (8)$$

where $\lambda_{\mathrm{ass}} > 0$ is a regularization parameter which trades off the sensing error and the sparsity of the coefficient vector, and $\epsilon > 0$ denotes a reweighted factor which strengthens the exploitation of signal sparsity at each iteration. An example of the relationship between the reweighted factor and the strength of the sparse constraint is given in Figure 3. According to the cost function (8), the corresponding update equation can be derived as

$$\tilde{\mathbf{h}}(n+1) = \tilde{\mathbf{h}}(n) + \mu_{\mathrm{ass}}(n)\,e_m(n)\,\mathbf{x}_m - \rho\,\frac{\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big)}{1 + \epsilon\,\big|\tilde{\mathbf{h}}(n)\big|}, \qquad (9)$$

where $\rho = \mu_{\mathrm{iss}}\lambda/\epsilon$ is a parameter which depends on the initial step size $\mu_{\mathrm{iss}}$, the regularization parameter $\lambda$, and the reweighted factor $\epsilon$. In the second term of (9), if the coefficient magnitudes of $\tilde{\mathbf{h}}(n)$ are smaller than $1/\epsilon$, then these small coefficients are replaced by zeros with high probability [11]. Here, it is worth noting that $\mu_{\mathrm{ass}}(n)$ is a variable step size:

$$\mu_{\mathrm{ass}}(n) = \frac{\mu_{\mathrm{iss}}\,e_m^2(n)}{\|\mathbf{x}_m\|_2^2 + e_m^2(n)}, \qquad (10)$$

which depends on three factors: the initial step size $\mu_{\mathrm{iss}}$, the input signal $\mathbf{x}_m$, and the update iterative error $e_m(n)$. Since $\mu_{\mathrm{iss}}$ is a given initial step size and $\mathbf{x}_m$ is a randomly scaled input signal, $\mu_{\mathrm{ass}}(n)$ in Equation 10 is a variable step size (VSS) that adapts to the squared sensing error $e_m^2(n)$: a smaller error incurs a smaller step size to ensure the stability of the gradient descent, while a larger error yields a larger step size to accelerate the convergence of the algorithm [12]. According to the update equation in (9), our proposed ASS method is summarized in Algorithm 1.
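As a complement to Algorithm 1, the following Python sketch implements one plausible form of the ASS loop. The exact algebraic form of the variable step size and the reweighted zero attractor used here is an assumption reconstructed from the description above, and all parameter values (problem sizes, `mu_iss`, `lam`, `eps`, and the support positions) are illustrative, not the paper's Table 1 settings.

```python
import numpy as np

def rza_nlmf_ass(X, y, n_iters, mu_iss, lam, eps):
    """Sketch of the ASS recovery loop with an RZA-NLMF-style update:
    variable step size plus reweighted zero attractor (assumed form)."""
    M, N = X.shape
    h = np.zeros(N)                           # initial estimate h_tilde(0) = 0
    rho = mu_iss * lam / eps                  # rho = mu_iss * lambda / eps
    for n in range(n_iters):
        m = n % M                             # cycle observations: m = mod(n, M)
        x = X[m]
        e = y[m] - x @ h                      # n-th sensing error e_m(n)
        mu = mu_iss * e**2 / (x @ x + e**2)   # VSS: larger error, larger step
        h += mu * e * x - rho * np.sign(h) / (1.0 + eps * np.abs(h))
    return h

rng = np.random.default_rng(1)
N, M = 64, 32                                 # illustrative sizes
h = np.zeros(N)
h[4], h[37] = 1.0, -0.8                       # hypothetical K = 2 sparse vector
X = rng.standard_normal((M, N))
y = X @ h + 0.01 * rng.standard_normal(M)
h_hat = rza_nlmf_ass(X, y, n_iters=20000, mu_iss=0.05, lam=2.0, eps=2000.0)
print("residual MSE:", np.mean((y - X @ h_hat) ** 2))
```

With these toy settings the loop drives the measurement residual down; faithful recovery of $\mathbf{h}$ additionally depends on tuning $\mu_{\mathrm{iss}}$, $\lambda$, and $\epsilon$ jointly, which is exactly the reweighted factor selection issue studied in Section 4.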

As a benchmark for the performance comparisons, the CRLB of the proposed ASS method is derived in the following. The signal error is defined as $\mathbf{v}(n) := \tilde{\mathbf{h}}(n) - \mathbf{h}$, so that the error can be written as $e_m(n) = z_m - \mathbf{v}^T(n)\mathbf{x}_m$. To simplify the derivation of the CRLB, four assumptions are made in the subsequent analysis: (1) the input signal $\mathbf{x}_m$ and the noise $z_m$ are mutually independent; (2) each row $\mathbf{x}_m$ of the signal matrix $\mathbf{X}$ is independent, with zero mean and covariance $\sigma^2\mathbf{I}_N$; (3) the noise $z_m$ is independent, with zero mean and variance $\sigma_n^2$; (4) $\tilde{\mathbf{h}}(n)$ is independent of $\mathbf{X}$. Assume that the $n$-th adaptive receive error $e_m(n)$ is sufficiently small so that $e_m^2(n) \ll \|\mathbf{x}_m\|_2^2$; hence, $\mu_{\mathrm{ass}}(n) \approx \mu_{\mathrm{iss}}\,e_m^2(n)/\|\mathbf{x}_m\|_2^2$. According to (9), the $n$-th update signal error $\mathbf{v}(n+1)$ can be written as

$$\mathbf{v}(n+1) = \mathbf{v}(n) + \frac{\mu_{\mathrm{iss}}\,e_m^3(n)}{\|\mathbf{x}_m\|_2^2}\,\mathbf{x}_m - \rho\,\frac{\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big)}{1 + \epsilon\,\big|\tilde{\mathbf{h}}(n)\big|}, \qquad (12)$$

where $e_m^3(n)$ can be expanded as

$$e_m^3(n) = z_m^3 - 3z_m^2\,\mathbf{v}^T(n)\mathbf{x}_m + 3z_m\big(\mathbf{v}^T(n)\mathbf{x}_m\big)^2 - \big(\mathbf{v}^T(n)\mathbf{x}_m\big)^3. \qquad (13)$$

Substituting (13) into (12), $\mathbf{v}(n+1)$ can be further represented as in (14), and hence the steady-state mean square error (MSE) $b(n) := E[\mathbf{v}^T(n)\mathbf{v}(n)]$ can be derived as in (15). Based on the abovementioned independence assumptions and the ideal Gaussian noise assumption [13], we can use the Gaussian moment identities $E[z_m] = E[z_m^3] = 0$, $E[z_m^2] = \sigma_n^2$, and $E[z_m^4] = 3\sigma_n^4$. Due to the independence between $\mathbf{x}_m$ and $\mathbf{v}(n)$, $\mathbf{v}^T(n)\mathbf{x}_m$ follows a zero-mean Gaussian distribution, that is, $E[\mathbf{v}^T(n)\mathbf{x}_m] = 0$ [13]; hence, we can also use the approximations $E[(\mathbf{v}^T(n)\mathbf{x}_m)^2] \approx \sigma^2 b(n)$ and $E[(\mathbf{v}^T(n)\mathbf{x}_m)^3] \approx 0$. By neglecting the random fluctuations in $\mathbf{v}^T(n)\mathbf{v}(n)$ and using the approximation $\mathbf{v}^T(n)\mathbf{v}(n) \approx E[\mathbf{v}^T(n)\mathbf{v}(n)] = b(n)$, substituting the approximations (16) to (22) into (15) simplifies it to (23), where $\phi(n)$ is incurred by the last term of (12) and is expressed by (24). Since the adaptive update square error $b(n)$ is very small (i.e., $b(n) \ll 1$), terms of higher than second order are neglected, i.e., $b^2(n) \approx 0$ and $b^3(n) \approx 0$. The MSE can then be derived from (23) as in (25). Assume that the ideal reconstruction vector $\tilde{\mathbf{h}}(n)$ can be obtained; then $\lim_{n\to\infty}\|\tilde{\mathbf{h}}(n)\|_1 = \|\mathbf{h}\|_1$ and $\lim_{n\to\infty}\mathrm{sgn}\big(\tilde{\mathbf{h}}^T(n)\big)\,\mathrm{sgn}\big(\tilde{\mathbf{h}}(n)\big) = K$, where $K$ denotes the number of nonzero coefficients in $\mathbf{h}$. Hence, $\phi(\infty)$ in (25) can be derived as in (26), and finally, the CRLB of the proposed ASS is obtained as in (27).
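The derivation above leans on the moments of white Gaussian noise (odd moments vanish, $E[z^2] = \sigma_n^2$, $E[z^4] = 3\sigma_n^4$). A quick Monte Carlo check in Python, with an illustrative $\sigma_n$, confirms these identities numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_n = 0.5
z = sigma_n * rng.standard_normal(1_000_000)   # white Gaussian noise samples

# Moment identities used in the approximations (16) to (22):
# odd moments vanish, E[z^2] = sigma_n^2, E[z^4] = 3 * sigma_n^4.
print(np.mean(z ** 2))   # close to sigma_n^2 = 0.25
print(np.mean(z ** 3))   # close to 0
print(np.mean(z ** 4))   # close to 3 * sigma_n^4 = 0.1875
```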

## 4 Computer simulations

In this section, the proposed ASS approach using the RZA-NLMF algorithm is evaluated. To obtain average performance, 1,000 independent Monte Carlo runs are adopted. For easy evaluation of the effectiveness of the proposed approach, the signal representation domain $\mathbf{D}$ is assumed to be an identity matrix $\mathbf{I}_{N \times N}$ and the unknown signal $\mathbf{s}$ is set as sparse directly. The sensing matrix is then equivalent to the random measurement matrix, i.e., $\mathbf{X} = \mathbf{W}$. To ensure that $\mathbf{X}$ satisfies the RIP, $\mathbf{W}$ is set as a random Gaussian matrix [5], and the sparse coefficient vector $\mathbf{h}$ equals $\mathbf{s}$. The details of the simulation parameters are listed in Table 1. Notice that each nonzero coefficient of $\mathbf{h}$ follows a random Gaussian distribution $\mathcal{CN}\left(0, \sigma^2\right)$, and the positions of the nonzero coefficients are randomly allocated within the length of $\mathbf{h}$, which is subject to $E\{\|\mathbf{h}\|_2^2\} = 1$, where $E\{\cdot\}$ denotes the expectation operator. The output signal-to-noise ratio (SNR) is defined as $20\log\left(E_s/\sigma_n^2\right)$, where $E_s = 1$ is the unit transmission power. All of the step sizes and regularization parameters are listed in Table 1. The estimation performance is evaluated by the average mean square error (MSE), defined as

$$\mathrm{MSE}\big\{\tilde{\mathbf{h}}(n)\big\} = E\left\{\big\|\mathbf{h} - \tilde{\mathbf{h}}(n)\big\|_2^2\right\}, \qquad (28)$$

where $\mathbf{h}$ and $\tilde{\mathbf{h}}(n)$ are the actual sparse vector and its $n$-th iterative adaptive estimate, respectively. According to our previous work [8], the regularization parameter for RZA-NLMF is set as $\lambda = 5 \times 10^{-8}$ so that it can exploit signal sparsity robustly. Since the RZA-NLMF-based ASS method depends highly on the reweighted factor $\epsilon$, we first select a reasonable factor $\epsilon$ by means of the Monte Carlo method. We then compare the proposed method with two typical NSS methods, i.e., BPDN [6] and OMP [7].
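The simulation setup described above can be sketched in Python as follows. The sizes, the seed, and the SNR value are illustrative; note that this sketch normalizes each realization of $\mathbf{h}$ to unit energy (the paper imposes the constraint in expectation) and uses the standard $10\log_{10}$ SNR convention as an assumption.

```python
import numpy as np

def make_sparse_signal(N, K, rng):
    """K-sparse vector with Gaussian nonzero coefficients at random positions,
    normalized per realization so that ||h||_2^2 = 1 (cf. E{||h||_2^2} = 1)."""
    h = np.zeros(N)
    h[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    return h / np.linalg.norm(h)

def awgn(y_clean, snr_db, rng):
    """Add white Gaussian noise at the given SNR, assuming unit power E_s = 1."""
    sigma_n = np.sqrt(10.0 ** (-snr_db / 10.0))
    return y_clean + sigma_n * rng.standard_normal(y_clean.shape)

rng = np.random.default_rng(0)
N, M, K, snr_db = 64, 32, 2, 10                # illustrative sizes, not Table 1
h = make_sparse_signal(N, K, rng)
W = rng.standard_normal((M, N)) / np.sqrt(M)   # X = W: random Gaussian matrix
y = awgn(W @ h, snr_db, rng)

# MSE of the all-zero initial estimate h_tilde(0) = 0, per the MSE definition
mse0 = np.sum((h - np.zeros(N)) ** 2)
print(mse0)   # ~1.0, since ||h||_2^2 = 1
```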

### 4.1 Reweighted factor selection

Since the RZA-NLMF algorithm depends highly on the reweighted factor, selecting a robust reweighted factor for different noise environments and different signal sparsities is an important step for the RZA-NLMF algorithm. It is well known that the *ℓ*_{0}-norm normalized least mean fourth (L0-NLMF) algorithm could achieve the optimal solution for CS, but it poses an NP-hard problem in practical applications such as noisy environments [2]. One can find that RZA-NLMF reduces to L0-NLMF as the reweighted factor approaches infinity. Due to the noise interference, we should select a suitable reweighted factor which not only exploits signal sparsity but also mitigates noise interference effectively. Hence, the reweighted factor of RZA-NLMF is selected empirically. By means of the Monte Carlo method, the performance curves of the proposed ASS method with different reweighted factors *ϵ* ∈ {2, 20, 200, 2,000, 20,000}, with respect to different numbers of nonzero coefficients *K* ∈ {2, 6, 10} and different SNR regimes (5 and 10 dB), are depicted in Figures 4, 5, 6, and 7. Under the simulation setup considered, RZA-NLMF using *ϵ* = 2,000 achieves robust performance in all cases, as shown in Figures 4, 5, 6, and 7. From the four figures, one can also find that a sparser signal requires a larger reweighted factor, though no more than 20,000 in this system. This is consistent with the fact that a stronger sparse penalty not only exploits more sparsity information but also mitigates more noise interference.
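The role of the reweighted factor can be illustrated numerically. In the zero attractor of RZA-type sparse adaptive filters [11], sgn(*h*)/(1 + *ϵ*|*h*|), a larger *ϵ* concentrates the attraction on coefficients below roughly 1/*ϵ*, making the constraint more selective and closer to an *ℓ*_{0} penalty. A small Python sketch with illustrative values only:

```python
import numpy as np

def attractor(h, eps):
    """Reweighted zero attractor sgn(h)/(1 + eps*|h|) as used in RZA-type
    sparse adaptive filters [11] (scaling factor rho omitted)."""
    return np.sign(h) / (1.0 + eps * np.abs(h))

coeffs = np.array([1e-4, 1e-2, 1.0])     # small, medium, large coefficients
for eps in (2.0, 200.0, 20000.0):
    # Larger eps: attraction concentrates on coefficients below ~1/eps,
    # so large coefficients are left almost untouched (ell_0-like behavior).
    print(eps, attractor(coeffs, eps))
```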

### 4.2 Performance comparisons with NSS

Two experiments verify the ASS method in performance comparisons with conventional NSS methods (e.g., BPDN [6] and OMP [7]). In the first experiment, the ASS method is evaluated in the case of SNR = 10 dB, as shown in Figure 8. On the one hand, according to this figure, we can find that the proposed ASS method using the RZA-NLMF algorithm achieves much lower MSE than the NSS methods, and even than their CRLB. The large performance gap between ASS and NSS arises because ASS using RZA-NLMF not only exploits the signal sparsity but also mitigates the noise interference by using high-order error statistics for adaptive error updating. On the other hand, we can also find that ASS depends on the signal sparseness; that is to say, for a sparser signal, ASS can exploit more signal structure information as prior information, and vice versa. In the second experiment, the number of nonzero coefficients is fixed as *K* = 2, as shown in Figure 9. It is easy to find that our proposed ASS performs much better than the conventional NSS as the SNR increases.

## 5 Conclusions

In this paper, we proposed an ASS method using the RZA-NLMF algorithm for dealing with CS problems. First, we selected the reweighted factor and the regularization parameter for the proposed algorithm by means of the Monte Carlo method. Then, based on the update equation of RZA-NLMF, the CRLB of ASS was derived under random independence assumptions. Finally, several representative simulations were given to show that the proposed method achieves much better MSE performance than NSS with respect to different signal sparsities, especially in the low SNR regime.

Since the reweighted factor for RZA-NLMF was selected empirically in the noisy environment, in future work we will develop a learned reweighted factor for RZA-NLMF, starting from the noiseless case. It is expected that RZA-NLMF using a learned reweighted factor can achieve much better recovery performance without sacrificing much computational complexity.

## References

1. Candes EJ, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. *IEEE Trans. Inf. Theory* 2006, 52(2):489-509.
2. Donoho DL: Compressed sensing. *IEEE Trans. Inf. Theory* 2006, 52(4):1289-1306.
3. Baraniuk R: Compressive radar imaging. *IEEE Radar Conference*, Boston, 17–20 Apr 2007, 128-133.
4. Herman M, Strohmer T: Compressed sensing radar. *IEEE Radar Conference*, Rome, 26–30 May 2008, 1-6.
5. Candes EJ: The restricted isometry property and its implications for compressed sensing. *Comptes Rendus Math.* 2008, 346:589-592.
6. Chen SS, Donoho DL, Saunders MA: Atomic decomposition by basis pursuit. *SIAM J. Sci. Comput.* 1998, 20(1):33-61. doi:10.1137/S1064827596304010
7. Tropp JA, Gilbert AC: Signal recovery from random measurements via orthogonal matching pursuit. *IEEE Trans. Inf. Theory* 2007, 53(12):4655-4666.
8. Gui G, Mehbodniya A, Adachi F: Adaptive sparse channel estimation using re-weighted zero-attracting normalized least mean fourth. *2nd IEEE/CIC International Conference on Communications in China (ICCC)*, Xi'an, 12 Aug 2013, 368-373.
9. Dai L, Wang Z, Yang Z: Compressive sensing based time domain synchronous OFDM transmission for vehicular communications. *IEEE J. Sel. Areas Commun.* 2013, 31(9):460-469.
10. Dai L, Wang Z, Yang Z: Spectrally efficient time-frequency training OFDM for mobile large-scale MIMO systems. *IEEE J. Sel. Areas Commun.* 2013, 31(2):251-263.
11. Chen Y, Gu Y, Hero AO III: Sparse LMS for system identification. *IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, Taipei, 19–24 Apr 2009, 3125-3128.
12. Gui G, Dai L, Kumagai S, Adachi F: Variable earns profit: improved adaptive channel estimation using sparse VSS-NLMS algorithms. *IEEE International Conference on Communications (ICC)*, Sydney, 10–14 June 2014, 1-5.
13. Eweda E, Bershad NJ: Stochastic analysis of a stable normalized least mean fourth algorithm for adaptive noise canceling with a white Gaussian reference. *IEEE Trans. Signal Process.* 2012, 60(12):6235-6244.

## Acknowledgements

The authors would like to thank the editor and the anonymous reviewers for their helpful comments and suggestions to improve the quality of this paper.

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.


## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Gui, G., Xu, L. & Adachi, F. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing.
*EURASIP J. Adv. Signal Process.* **2014**, 125 (2014). https://doi.org/10.1186/1687-6180-2014-125


### Keywords

- Nonlinear sparse sensing (NSS)
- Adaptive sparse sensing (ASS)
- Normalized least mean fourth (NLMF)
- Reweighted zero-attracting NLMF (RZA-NLMF)
- Sparse constraint
- Compressive sensing