RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

Nonlinear sparse sensing (NSS) techniques have been adopted to realize compressive sensing in many applications, such as radar imaging. Unlike NSS, in this paper we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighted factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighted factor selection method is proposed to achieve robust estimation performance. Finally, to verify the algorithm, Monte Carlo-based computer simulations show that ASS achieves much better mean square error (MSE) performance than NSS.


Introduction
Compressive sensing (CS) [1], [2] has attracted considerable attention in compressive radar/sonar sensing [3], [4] owing to its many civilian, military, and biomedical applications. The main task of CS can be divided into three aspects: 1) sparse signal learning: the basic model suggests that natural signals can be compactly expressed, or efficiently approximated, as a linear combination of prespecified atom signals, where the linear coefficients are sparse (i.e., most of them are zero); 2) random measurement matrix design: it is important to construct a sensing matrix that allows recovery of as many entries of the unknown signal as possible from as few measurements as possible; the sensing matrix should satisfy the conditions of incoherence and the restricted isometry property (RIP) [5], and fortunately some special matrices (e.g., Gaussian and Fourier matrices) have been shown to satisfy the RIP with high probability; 3) sparse reconstruction algorithms: based on the previous two steps, many sparse reconstruction algorithms have been proposed to find a suboptimal sparse solution.
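The three ingredients above can be sketched in a few lines of Python. This is a minimal illustration; the dimensions N, M, K and the noise level are assumptions chosen for the example, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 64, 20, 4          # signal length, measurements, nonzero coefficients

# 1) sparse signal learning: K nonzero Gaussian coefficients on a random support
h = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
h[support] = rng.standard_normal(K)

# 2) random measurement matrix design: i.i.d. Gaussian entries scaled by
#    1/sqrt(M), a construction known to satisfy the RIP with high probability
X = rng.standard_normal((M, N)) / np.sqrt(M)

# 3) noisy compressed observations, the input to a sparse reconstruction algorithm
sigma_z = 0.01
y = X @ h + sigma_z * rng.standard_normal(M)
```

Note that only M = 20 measurements are taken for an N = 64 dimensional signal; recovery is possible only because h has just K = 4 nonzero entries.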
It is well known that CS provides a robust framework that can reduce the number of measurements required to estimate a sparse signal. Many NSS algorithms and their variants have been proposed to deal with CS problems. They mainly fall into two basic categories: convex relaxation (e.g., basis pursuit de-noising, BPDN [6]) and greedy pursuit (e.g., orthogonal matching pursuit, OMP [7]). These NSS-based CS methods suffer from either high complexity or low performance, especially in the low signal-to-noise ratio (SNR) regime.
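As a reference point for the greedy-pursuit category, OMP can be sketched as below. The function name and the stopping rule (exactly K iterations, K assumed known) are illustrative choices, not the precise variant of [7]:

```python
import numpy as np

def omp(X, y, K):
    """Greedy pursuit sketch: at each step pick the column of X most
    correlated with the current residual, then re-fit all selected
    columns by least squares and update the residual."""
    N = X.shape[1]
    residual = y.astype(float).copy()
    support = []
    for _ in range(K):
        idx = int(np.argmax(np.abs(X.T @ residual)))  # best-matching atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef           # orthogonal residual
    h_hat = np.zeros(N)
    h_hat[support] = coef
    return h_hat
```

In the noiseless, well-conditioned case this recovers the exact support; its weakness, as noted above, shows up at low SNR, where wrong atoms enter the support early and are never removed.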
In this paper, we propose an adaptive sparse sensing (ASS) method using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm [8] to solve CS problems.
Different from NSS methods, each observation and its corresponding sensing signal vector are processed by the RZA-NLMF algorithm to reconstruct the sparse signal during the adaptive filtering process. The effectiveness of the proposed method is confirmed via computer simulations in comparison with NSS.
The remainder of the paper is organized as follows. The basic CS problem is introduced and a typical NSS method is presented in Section 2. In Section 3, ASS using the RZA-NLMF algorithm is proposed for solving CS problems and its derivation is highlighted. Computer simulations are given in Section 4 to evaluate and compare the performance of the proposed ASS method. Finally, our contributions are summarized in Section 5.

From the perspective of CS, the sensing matrix X satisfies the restricted isometry property (RIP) with overwhelming probability [9], so the sparse signal h can be reconstructed correctly by NSS methods, e.g., BPDN [6] and OMP [7]. Take BPDN as an example to illustrate the NSS realization approach. The sensing matrix X satisfies the RIP of order K with positive parameter δ ∈ (0, 1) if

(1 − δ)||h||_2^2 ≤ ||Xh||_2^2 ≤ (1 + δ)||h||_2^2

holds for all h having no more than K nonzero coefficients. Then the unknown sparse vector h can be reconstructed by BPDN as

ĥ = arg min_h { (1/2)||y − Xh||_2^2 + λ||h||_1 },

where λ denotes a regularization parameter which balances the mean-square error (MSE) term and the sparsity of h. If the mutual interference of the sensing matrix X can be completely removed, the theoretical Cramer-Rao lower bound (CRLB) of NSS can be derived as [10]

CRLB{ĥ} = Kσ_z^2 / M,

where K is the number of nonzero coefficients, M is the number of measurements, and σ_z^2 is the noise variance.
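The BPDN problem can be solved, for instance, by iterative soft thresholding (ISTA). This is one standard convex-relaxation solver chosen here for illustration, not necessarily the solver used in [6]:

```python
import numpy as np

def ista_bpdn(X, y, lam, n_iter=500):
    """Iterative soft thresholding for min_h 0.5*||y - X h||_2^2 + lam*||h||_1.
    Each iteration takes a gradient step on the quadratic term with step
    size 1/L (L = ||X||_2^2, the Lipschitz constant of the gradient),
    then applies the soft-threshold (shrinkage) operator."""
    L = np.linalg.norm(X, 2) ** 2
    h = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = h - X.T @ (X @ h - y) / L                           # gradient step
        h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage
    return h
```

The shrinkage step is what enforces sparsity: coefficients whose gradient update stays below lam/L in magnitude are set exactly to zero.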

Adaptive sparse sensing
We reconsider the above system model (2) for the adaptive sensing case. At the observation side, the m-th observed signal y_m can be written as

y_m = h^T x_m + z_m,    (7)

for m = 1, 2, ..., M. The objective of ASS is to adaptively estimate the unknown sparse vector h using the sensing signal vector x_m and the observed signal y_m. Different from NSS approaches, we propose an alternative ASS method using the RZA-NLMF algorithm, as shown in Fig. 2. At adaptive iteration n, the sensing vector with index mod(n, M) is used, where mod(·) denotes the modulo function; for example, mod(5, 3) = 2 and mod(5, 2) = 1. First of all, the cost function of the RZA-NLMF algorithm is constructed as

G(n) = (1/4)e^4(n) + λ_ass Σ_i log(1 + ε|h_i(n)|),    (8)

where e(n) denotes the adaptive sensing error, λ_ass > 0 is a regularization parameter which trades off the sensing error and the sparsity of the coefficient vector, and ε > 0 denotes a reweighted factor which enhances the exploitation of signal sparsity at each iteration. A figure example showing the relationship between reweighted factors and the strength of the sparse constraint is given in Fig. 3. According to the cost function (8), the corresponding update equation can be derived as

h(n + 1) = h(n) + μ_iss e^3(n) x(n) / (||x(n)||_2^2 e^2(n) + λ_ass) − ρ sgn(h(n)) / (1 + ε|h(n)|),    (9)

where ρ is a parameter which depends on the initial step size μ_iss, the regularization parameter λ_ass, and the threshold ε. In the second term of (9), if coefficient magnitudes are smaller than 1/ε, these small coefficients will be replaced by zeros with high probability [11].
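One RZA-NLMF update can be sketched as below. The exact placement of the constants varies between references; in particular the zero-attractor strength `rho = mu_iss * lam_ass * eps` is an assumed form following the common statement of the algorithm, so consult [8] for the precise derivation:

```python
import numpy as np

def rza_nlmf_step(h, x, y, mu_iss=0.5, lam_ass=5e-8, eps=20.0):
    """One RZA-NLMF update (sketch). The gradient term is the cubed
    sensing error normalized by the input power; the second term is the
    reweighted zero attractor, which shrinks coefficients with magnitude
    below roughly 1/eps toward zero."""
    e = y - h @ x                                       # sensing error e(n)
    p = x @ x                                           # input power ||x||^2
    mu_n = mu_iss * e ** 2 / (p * e ** 2 + lam_ass)     # variable step size
    rho = mu_iss * lam_ass * eps                        # attractor strength (assumed form)
    return h + mu_n * e * x - rho * np.sign(h) / (1.0 + eps * np.abs(h))
```

Writing the gradient term as `mu_n * e * x` makes the variable-step-size structure explicit: the cubed error of the NLMF cost is folded into `mu_n`.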
Here, it is worth noting that the gradient term in (9) can be rewritten as μ_ass(n) e(n) x(n) with the variable step size (VSS)

μ_ass(n) = μ_iss e^2(n) / (||x(n)||_2^2 e^2(n) + λ_ass),    (10)

which adapts with the squared sensing error: a smaller error incurs a smaller step size to ensure the stability of the gradient descent, while a larger error yields a larger step size to accelerate the convergence of the algorithm [12]. According to the update equation (9), the proposed ASS method is summarized in Algorithm 1.
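Putting the update into the adaptive loop, with the M observations reused cyclically through the modulo function as described above, gives a sketch of Algorithm 1 (the function name and parameter defaults are illustrative assumptions):

```python
import numpy as np

def ass_rza_nlmf(X, y, n_iter=3000, mu_iss=0.5, lam_ass=5e-8, eps=20.0):
    """Adaptive sparse sensing loop (sketch of Algorithm 1): at iteration n
    the sensing vector with index mod(n, M) drives one RZA-NLMF update."""
    M, N = X.shape
    h = np.zeros(N)
    for n in range(n_iter):
        m = n % M                     # cyclic observation reuse via mod()
        x, d = X[m], y[m]
        e = d - h @ x                 # sensing error for this observation
        p = x @ x
        mu_n = mu_iss * e ** 2 / (p * e ** 2 + lam_ass)   # VSS of Eq. (10)
        rho = mu_iss * lam_ass * eps                      # assumed attractor form
        h = h + mu_n * e * x - rho * np.sign(h) / (1.0 + eps * np.abs(h))
    return h
```

Because M < N, the loop never sees a full-rank system; the zero attractor is what biases the iterate toward the sparse solution rather than an arbitrary point of the solution set.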
As a benchmark for the performance comparisons, the CRLB of the proposed ASS method is derived in the following. Define the estimation error vector as v(n) = h − h(n); then the sensing error can be written as e(n) = z(n) + v^T(n)x(n). To simplify the derivation of the CRLB, the following assumptions are adopted in the subsequent analysis: 1) the input signal x_m and the noise z_m are mutually independent; 2) each row x_m of the sensing matrix X is independent, with zero-mean random Gaussian entries; 3) the adaptive error e(n) is sufficiently small at steady state. According to (9), the updated signal error v(n + 1) can be written as

v(n + 1) = v(n) − μ_ass(n) e(n) x(n) + ρ sgn(h(n)) / (1 + ε|h(n)|),    (12)

where the cubed error implicit in μ_ass(n)e(n) can be expanded as

e^3(n) = z^3(n) + 3z^2(n) v^T(n)x(n) + 3z(n) (v^T(n)x(n))^2 + (v^T(n)x(n))^3.    (13)

Substituting (13) into (12), v(n + 1) can be further represented in terms of z(n) and v(n). Hence, the steady-state mean square error (MSE) can be derived by using the steady-state property E{v^T(n)x(n)} → 0 [13], from which we can also get the following approximations.
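The binomial expansion of the cubed error used in the analysis, e^3 = z^3 + 3z^2 u + 3z u^2 + u^3 with u = v^T x, can be spot-checked numerically:

```python
import numpy as np

# Numeric spot-check of the cubed-error expansion: e = z + u, u = v^T x.
rng = np.random.default_rng(2)
z = rng.standard_normal()
v, x = rng.standard_normal(8), rng.standard_normal(8)
u = v @ x
lhs = (z + u) ** 3
rhs = z ** 3 + 3 * z ** 2 * u + 3 * z * u ** 2 + u ** 3
assert abs(lhs - rhs) < 1e-9   # the two sides agree to floating-point precision
```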

Computer Simulations
In this section, the proposed ASS approach using the RZA-NLMF algorithm is evaluated. To obtain average performance, 1000 independent Monte Carlo runs are adopted. For ease of evaluating the effectiveness of the proposed approach, the signal representation domain D is assumed to be the N × N identity matrix I, and the unknown signal s is set as sparse directly; the sensing matrix is then equivalent to the random measurement matrix, i.e., X = W. To ensure that X satisfies the RIP, W is set as a random Gaussian matrix [9]. Then the sparse coefficient vector h equals s. The detailed simulation parameters are listed in Tab. 1. Notice that each nonzero coefficient of h follows a zero-mean random Gaussian distribution, and their positions are randomly allocated within the length-N vector h, subject to E{||h||_2^2} = 1, where E{·} denotes the expectation operator. The output signal-to-noise ratio (SNR) is defined as E_s/σ_z^2, where E_s = 1 is the unit transmission power. All step sizes and regularization parameters are listed in Tab. 1. The estimation performance is evaluated by the average mean square error (MSE), defined as

MSE(n) = E{||h − h(n)||_2^2},

where h and h(n) are the actual sparse vector and its n-th iterative adaptive estimator, respectively. According to our previous work [8], the regularization parameter for RZA-NLMF is set as λ_ass = 5 × 10^-8 so that it can exploit signal sparsity robustly. Since the RZA-NLMF-based ASS method depends highly on the reweighted factor ε, we first select a reasonable factor ε by means of Monte Carlo simulation. Later, we compare the proposed method with two typical NSS methods, i.e., BPDN [6] and OMP [7].

Algorithm 1. Input: random sensing matrix X, observation signal vector y.
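A sketch of the Monte Carlo evaluation loop is given below. The run count and dimensions are scaled down from the paper's settings for speed, and minimum-norm least squares stands in as a placeholder estimator; substitute the ASS/RZA-NLMF loop or BPDN/OMP here to reproduce the actual comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 64, 32, 2
runs = 100                       # the paper averages over 1000 runs
snr_db = 10.0
sigma_z = np.sqrt(10 ** (-snr_db / 10.0))    # unit transmit power assumed

errs = []
for _ in range(runs):
    # sparse vector: K Gaussian nonzeros, normalized to unit expected power
    h = np.zeros(N)
    h[rng.choice(N, K, replace=False)] = rng.standard_normal(K) / np.sqrt(K)
    # random Gaussian sensing matrix (RIP with high probability)
    X = rng.standard_normal((M, N)) / np.sqrt(M)
    y = X @ h + sigma_z * rng.standard_normal(M)
    # placeholder estimator: minimum-norm least squares
    h_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    errs.append(np.mean((h - h_hat) ** 2))

mse_db = 10.0 * np.log10(np.mean(errs))      # average MSE in dB
```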
(1) Initialize.

Two experiments verify the performance of ASS in comparison with conventional NSS methods (e.g., BPDN [6] and OMP [7]). In the first experiment, the ASS method is evaluated for SNR = 10 dB, as shown in Fig. 8. On the one hand, according to this figure, the proposed ASS method using the RZA-NLMF algorithm achieves much lower MSE than the NSS methods, and even lower than their CRLB. The large performance gap between ASS and NSS arises because ASS using RZA-NLMF not only exploits the signal sparsity but also mitigates the noise interference by using high-order error statistics in the adaptive error update. On the other hand, we can also find that ASS depends on the signal sparseness: for a sparser signal, ASS can exploit more signal structure as prior information, and vice versa. In the second experiment, the number of nonzero coefficients is fixed as K = 2, as shown in Fig. 9. It is easy to find that our proposed ASS is much better than conventional NSS as the SNR increases.

Conclusion
In this paper, we proposed an ASS method using the RZA-NLMF algorithm for dealing with CS problems. First, we selected the reweighted factor and the regularization parameter for the proposed algorithm by means of the Monte Carlo method. Later, based on the update equation of RZA-NLMF, the CRLB of ASS was derived under random independence assumptions. Finally, several representative simulations showed that the proposed method achieves much better MSE performance than NSS for different signal sparsity levels, especially in the low SNR regime.

Acknowledgments
This work was supported by a grant-in-aid from Japan Society for the Promotion of Science (JSPS) (grant number 24•02366).

Figure 1. A typical example of a sparse structured signal.

where h is the sparse coefficient vector with K nonzero entries (K ≪ N), and D is an N × N orthogonal basis matrix with {d_i, i = 1, 2, ..., N} as its columns. Taking a random measurement signal matrix W, the received signal vector is y = WDh + z = Xh + z, where X = WD denotes the M × N sensing matrix and the additive noise z has covariance σ_z^2 I_M, with I_M the M × M identity matrix.
The step size in correspondence with the n-th iterative error, when using the m-th sensing signal vector, depends on three factors: the initial step size μ_iss, the input signal x_m, and the iterative update error e_m(n). Since μ_iss is a given initial step size and x_m is a randomly scaled input signal, μ_ass in Eq. (10) varies only with the sensing error.