Performance analysis of probabilistic soft SSDF attack in cooperative spectrum sensing

In cognitive radio networks, the spectrum sensing data falsification (SSDF) attack is a crucial factor deteriorating the detection performance of cooperative spectrum sensing. In this paper, we propose and analyze a novel probabilistic soft SSDF attack model, which generalizes the existing models and includes them as special cases. Under this generalized SSDF attack model, we first obtain closed-form expressions of the global sensing performance at the fusion center. Then, we theoretically evaluate the performance of the proposed attack model in terms of destructiveness and stealthiness. Numerical simulations match the analytical results well. Last but not least, an interesting trade-off between destructiveness and stealthiness is discovered, a fundamental issue in SSDF attacks that has been ignored by most previous studies.


Introduction
Cognitive radio (CR) has been regarded as a promising technology to improve spectrum utilization [1]. To enable CR, cooperative spectrum sensing among multiple spectrum sensors is one of the key technologies [2]. However, due to the openness of the low-layer protocol stack of CR, the reliability of cooperative spectrum sensing is challenged by many security threats [3].
The most well-known security threat is the spectrum sensing data falsification (SSDF) attack [4], where abnormal or malicious spectrum sensors falsify their true sensing results. The main goal of an SSDF attack is twofold. One aim is to decrease the global detection probability so as to disturb the normal operation of the primary user (PU). The other is to increase the global false alarm probability so as to waste the access opportunities of the honest secondary users (SUs).
Previous studies on SSDF attack modeling can generally be grouped into two classes: hard SSDF attack and soft SSDF attack. Briefly, in a hard SSDF attack, malicious SUs falsify their local binary decisions [5][6][7][8], while in a soft SSDF attack, malicious SUs falsify their received energy values. Compared to hard SSDF attack, soft SSDF attack is generally more powerful and elusive owing to its relatively larger value space [9][10][11][12], since malicious SUs falsify real energy observations, rather than binary decisions, to mislead the fusion center (FC). Three soft SSDF attack models have been widely adopted to test various secure sensing algorithms: Always Yes [13], Always No [14,15], and Always Adverse [16]. In the Always Yes soft SSDF attack, an attacker raises its local observations by injecting a positive offset in every sensing slot. In the Always No attack, an attacker decreases its local observations by injecting a negative offset in every sensing slot. In the Always Adverse attack, an attacker first performs a local binary hypothesis test between H 0 and H 1 by comparing its energy observation with a predefined threshold, with H 0 denoting the case that the primary signal is absent and H 1 otherwise; then, the malicious SU raises its observations when its local binary decision is H 0 and decreases its observations when its decision is H 1. One main limitation of the existing soft SSDF attack models is that they are oversimplified and not general enough to serve as the baseline for the design of counter-attack or secure sensing algorithms.
Motivated by the observations above, in this paper, we start with the objective of developing a more general soft SSDF attack model, which should go beyond the existing models and include them as special cases. We also consider that each attacker should be smart enough to keep dual goals in mind: (i) causing harmful disturbance to cooperative spectrum sensing and (ii) protecting itself against being easily detected. Specifically, the main contributions of this paper are as follows:
• Propose a generic probabilistic soft SSDF attack model and derive the corresponding closed-form expressions of the global sensing performance at the fusion center.
• Analyze the destructiveness of the proposed attack model under three general scenarios and obtain the corresponding optimal attack strategies.
• Define a stealthiness metric and analyze the stealthiness of the proposed attack model under a classical secure sensing algorithm.
• Discover an interesting trade-off between destructiveness and stealthiness, a fundamental issue in SSDF attacks that has been ignored by most previous studies.
The remainder of this paper is organized as follows. Section 2 presents the spectrum sensing preliminaries. In Section 3, we formulate the proposed probabilistic soft SSDF attack model and present the analysis of its impacts on the sensing performance. Analytical results on destructiveness and stealthiness of the proposed attack model are provided in Sections 4 and 5, respectively. Numerical results are given in Section 6, and conclusions are provided in Section 7.

Spectrum sensing preliminaries
As shown in Figure 1, we consider a cooperative spectrum sensing system consisting of N SUs and a FC. Each SU conducts energy detection and transmits its observation to the FC, where the global decision is made based on the combination of observations. However, some malicious SUs report falsified observations to the FC to deteriorate the spectrum sensing performance.
For a given frequency band, spectrum sensing is generally formulated as a binary hypothesis test:

x_i(t) = n_i(t), under H_0;  x_i(t) = h_i s(t) + n_i(t), under H_1,  i = 1, ..., N,   (1)

where H_0 denotes the case that the primary signal is absent and H_1 denotes the case that the PU is present, N is the number of SUs, h_i is the channel gain between the PU and the i-th SU, s(t) is the primary signal, and n_i(t) denotes the additive white Gaussian noise (AWGN). With an energy detector, the collected energy observation at the i-th SU can be given as

x_Ei = Σ_{t=1}^{2U} |x_i(t)|²,   (2)

where U = TW is the time-bandwidth product. According to the central limit theorem, when U is large enough (e.g., U >> 10), x_Ei can be well approximated as a Gaussian random variable under both hypotheses H_0 and H_1 as follows [14,16,17]:

x_Ei ~ N(u_0i, σ²_0i) under H_0;  x_Ei ~ N(u_1i, σ²_1i) under H_1,

where u_0i = 2U, σ²_0i = 4U, u_1i = 2U(γ_i + 1), σ²_1i = 4U(2γ_i + 1), and γ_i is the received SNR of the i-th SU.
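As a quick numerical illustration of the approximation above, the following Python sketch computes the Gaussian parameters, a local threshold set for a target local false-alarm probability of 0.1, and the resulting local detection probability. The parameter values anticipate the simulation settings of Section 6; the bisection-based inverse Q-function is an implementation convenience, not part of the paper.

```python
import math

def q_func(x):
    """Gaussian Q-function: Q(x) = P(Z > x) for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_inv(p, lo=-10.0, hi=10.0):
    """Inverse Q-function by bisection (Q is strictly decreasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if q_func(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Simulation settings used later in the paper (Section 6)
U = 100                      # time-bandwidth product
gamma = 10 ** (-7 / 10)      # received SNR of -7 dB

# Gaussian-approximation parameters of the energy observation x_Ei
u0, var0 = 2 * U, 4 * U
u1, var1 = 2 * U * (gamma + 1), 4 * U * (2 * gamma + 1)

# Local threshold eta_i for a target local false-alarm probability of 0.1:
# Q((eta - u0) / sigma0) = 0.1  =>  eta = u0 + sigma0 * Q^-1(0.1)
eta = u0 + math.sqrt(var0) * q_inv(0.1)

# Resulting local detection probability Q((eta - u1) / sigma1)
P_d_local = q_func((eta - u1) / math.sqrt(var1))
```

At -7 dB and U = 100, the local detector is imperfect (its detection probability is noticeably below 1), which is exactly the "imperfection of the local sensing results" the attack model of Section 3 must account for.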
The local binary decision d_i at the i-th SU is obtained by comparing its energy observation with a local threshold η_i:

d_i = 1 (decide H_1), if x_Ei ≥ η_i;  d_i = 0 (decide H_0), otherwise.   (3)

At the FC, a weighted combination is generally used to obtain the global decision D as follows:

D = 1, if Σ_{i=1}^{N} w_i x_Ei ≥ η_f;  D = 0, otherwise,   (4)

where η_f is the global threshold at the FC, w_i ∈ [0, 1] is the weight assigned to the i-th SU by the FC, and Σ_{i=1}^{N} w_i = 1.

Probabilistic soft SSDF attack
In this section, we propose a generic soft SSDF attack model and analyze its impacts on the sensing performance. Before going into a deep analysis, we declare that a generic and effective SSDF attack model should at least have the following features:
• Attackers should be able to exploit their local sensing results to implement effective attacks, and the imperfection of the local sensing results should also be considered.
• Attackers should be able to jointly consider harmfully disturbing the FC into wrong decisions and reliably protecting themselves from being easily detected, by properly adjusting the attack parameters.

The proposed attack model
Based on the considerations mentioned above, a probabilistic soft SSDF attack model is proposed as follows.
First, an attacker (say, the i-th SU) makes its local binary decision via (3). Then, it uses a probability p i to decide whether to perform an attack. If it decides to attack, it randomly generates a Gaussian value as its reported sensing result; otherwise, it reports its true observation. Mathematically, this attack model can be written as

Reported result r_i:

r_i = x_Ei, with probability 1 − p_i (honest report);
r_i ~ N(u_2i, σ²_2i), with probability p_i, if the local decision is H_0;   (5)
r_i ~ N(u_3i, σ²_3i), with probability p_i, if the local decision is H_1,

where u_2i and σ²_2i denote the mean and variance when the local decision of the i-th SU is H_0 (i.e., PU is absent), and u_3i and σ²_3i denote the mean and variance when the local decision is H_1. Obviously, for an honest SU, the attack probability equals zero, while for a malicious SU, the attack probability p_i ∈ (0, 1). Naturally, the two mean values satisfy u_2i ≥ u_3i, as the attacker generally falsifies sensing results by reversing them. To facilitate the following analysis, we define (u_2i, u_3i) as the attack strength and consider the case σ²_2i = σ²_3i = σ²_1i. The attack model in (5) is general enough to include the existing models as special cases via properly adjusting the attack parameters (u_2i, u_3i, p_i). For example, the Always No [14,15], Always Yes [13], and Always Adverse [16] attacks can be realized when (u_2i, u_3i, p_i) is set as (u_0i, u_0i, 1), (u_1i, u_1i, 1), and (u_1i, u_0i, 1), respectively. In particular, the attack probability p_i makes the proposed attack more elusive and flexible.
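The reporting rule in (5) can be sketched as follows; `report` and `local_decision` are hypothetical helper names, and the numeric values in the usage example are illustrative (roughly matching the Gaussian parameters of Section 2 at -7 dB).

```python
import random

def local_decision(x, eta):
    """Local binary decision via (3): 1 (H1) if the energy exceeds the threshold."""
    return 1 if x >= eta else 0

def report(x_true, eta, p_attack, u2, u3, sigma_attack, rng):
    """Probabilistic soft SSDF report per (5).

    With probability p_attack the attacker discards its true observation and
    draws a Gaussian value whose mean depends on its own local decision
    (u2 if it decided H0, u3 if it decided H1); otherwise it reports truthfully.
    """
    if rng.random() >= p_attack:
        return x_true                          # honest report
    if local_decision(x_true, eta) == 0:
        return rng.gauss(u2, sigma_attack)     # decided H0 -> push energy up
    return rng.gauss(u3, sigma_attack)         # decided H1 -> push energy down

# Example: an 'Always Adverse'-style attacker (u2, u3, p) = (u1, u0, 1),
# observing a low energy (decides H0) and therefore reporting around u2
rng = random.Random(0)
samples = [report(180.0, 225.6, 1.0, 239.9, 200.0, 23.7, rng) for _ in range(5000)]
```

Setting p_attack to 0 recovers an honest SU, and (u2, u3, p_attack) = (200.0, 200.0, 1.0) or (239.9, 239.9, 1.0) recover Always No and Always Yes, respectively.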

Local sensing performance under probabilistic soft SSDF attack
For the attack model proposed in Section 3.1, the probability density function (PDF) of the result reported by the i-th malicious SU, under hypotheses H_0 and H_1, can be respectively calculated as

f_0i(x) = (1 − p_i) f((x − u_0i)/σ_0i) + p_i [1 − Q((η_i − u_0i)/σ_0i)] f((x − u_2i)/σ_2i) + p_i Q((η_i − u_0i)/σ_0i) f((x − u_3i)/σ_3i),   (6)

f_1i(x) = (1 − p_i) f((x − u_1i)/σ_1i) + p_i [1 − Q((η_i − u_1i)/σ_1i)] f((x − u_2i)/σ_2i) + p_i Q((η_i − u_1i)/σ_1i) f((x − u_3i)/σ_3i),   (7)

where f((x − u_i)/σ_i) represents the PDF of a Gaussian variable x with mean u_i and standard deviation σ_i, Q(x) is the Gaussian Q-function, and η_i is the local threshold.

Global sensing performance under probabilistic soft SSDF attack
In a cooperative spectrum sensing system with a FC and N SUs, among which the first k SUs are malicious attackers, the FC fuses the results from both malicious and honest SUs via Equation (4). Conditioning on the attack and local-decision outcomes of the k malicious SUs, the fused statistic Σ_{i=1}^N w_i r_i is, for each outcome, a weighted sum of independent Gaussian variables and hence Gaussian; its PDFs under hypotheses H_0 and H_1, denoted (8) and (9), are therefore finite Gaussian mixtures built from the per-SU PDFs in (6) and (7) together with those of the honest SUs. Let P_f and P_d denote the probabilities of false alarm and detection at the FC, respectively, which can be obtained as

P_f = Pr( Σ_{i=1}^N w_i r_i ≥ η_f | H_0 ),   (11)

P_d = Pr( Σ_{i=1}^N w_i r_i ≥ η_f | H_1 ),   (12)

each of which reduces to a weighted sum of Gaussian Q-functions.
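A Monte Carlo cross-check of the global performance under the proposed attack can be sketched as below. The equal weights w_i = 1/N follow Section 4; the mean-based global threshold and the 'Always Adverse'-style attack strength (u_1, u_0) are illustrative assumptions for this sketch.

```python
import random, math

def simulate(N=10, k=3, p_attack=0.5, trials=20000, seed=1):
    """Monte Carlo estimate of the global (P_f, P_d) under the probabilistic
    soft SSDF attack, with equal weights w_i = 1/N. The first k SUs are
    malicious, using attack strength (u2, u3) = (u1, u0) and sigma = sigma_1.
    Parameter values follow the simulation settings of Section 6; the global
    threshold midway between u0 and u1 is an illustrative assumption."""
    rng = random.Random(seed)
    U, gamma = 100, 10 ** (-7 / 10)
    u0, s0 = 2 * U, math.sqrt(4 * U)
    u1, s1 = 2 * U * (gamma + 1), math.sqrt(4 * U * (2 * gamma + 1))
    eta_local = u0 + s0 * 1.2816           # local P_fa = 0.1 (Q^-1(0.1) ~ 1.2816)
    eta_f = (u0 + u1) / 2                  # illustrative global threshold

    def run(h1):
        hits = 0
        for _ in range(trials):
            total = 0.0
            for i in range(N):
                # true energy observation of SU i
                x = rng.gauss(u1, s1) if h1 else rng.gauss(u0, s0)
                # malicious SUs falsify with probability p_attack, per (5)
                if i < k and rng.random() < p_attack:
                    x = rng.gauss(u1 if x <= eta_local else u0, s1)
                total += x / N
            hits += total > eta_f
        return hits / trials

    return run(False), run(True)   # (P_f, P_d)
```

Sweeping p_attack from 0 to 1 raises the estimated P_f and lowers the estimated P_d, which is exactly the behavior the closed-form expressions (11) and (12) capture.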

Destructiveness analysis
In this section, we evaluate the impacts of the three model parameters p_i, u_2i, and u_3i on the destructiveness of the proposed attack model. Specifically, we analyze three typical scenarios in sequence:
(i) Probabilistic attack: the optimal attack probability is derived to cause the largest harm to the FC's detection performance.
(ii) Contention attack: by setting an appropriate attack strength, a malicious SU implements the optimal attack that maximizes the global false alarm probability so as to waste the access opportunities of honest SUs.
(iii) Interference attack: with a specific attack strength, a malicious SU conducts the optimal attack that minimizes the global detection probability so as to disturb the normal operation of the primary user.
Furthermore, without loss of generality, in the following analysis, the weight w i is set as 1/N.

Probabilistic attack
Consider a scenario in which a malicious SU aims to deteriorate the spectrum sensing performance of the system, i.e., to raise P_f and reduce P_d. Here, we analyze the impact of the attack probability p_i on this attack objective, which amounts to characterizing the monotonicity of P_f and P_d with respect to p_i.

Theorem 1. For the given attack strength (u_2i, u_3i) = (u_1i, u_0i), the probability of detection P_d decreases with the attack probability p_i, and the probability of false alarm P_f increases with p_i.
Proof sketch: for any ε > 0, replacing p_i by p_i + ε moves additional probability mass from the true observation to the falsified Gaussian components in (6); with u_2i = u_1i > u_0i, this yields P_f(p_i + ε) − P_f(p_i) > 0. Consequently, the probability of false alarm P_f increases with the attack probability p_i. Similarly, we can obtain P_d(p_i + ε) − P_d(p_i) < 0, and thus the probability of detection P_d decreases with p_i. Therefore, the detection error probability P_e = (1 − P_d) P(H_1) + P_f P(H_0) increases with p_i.
Note that Theorem 1 does not take into account any defense or secure sensing algorithm at the FC. Obviously, a malicious SU with a high attack probability is prone to being found out due to its low stealthiness, which will be further studied in the next section.
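Theorem 1 can be verified numerically from the local reporting PDFs of Section 3.2. The sketch below evaluates the tail probability of a single attacker's report at an illustrative threshold (an assumption, chosen midway between u_0 and u_1) and shows the monotonicity in p, since both probabilities here are linear in the attack probability.

```python
import math

def Q(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

U, gamma = 100, 10 ** (-7 / 10)
u0, s0 = 2 * U, math.sqrt(4 * U)
u1, s1 = 2 * U * (gamma + 1), math.sqrt(4 * U * (2 * gamma + 1))
eta = u0 + s0 * 1.2816          # local threshold for local P_fa = 0.1
u2, u3, s_att = u1, u0, s1      # attack strength (u1, u0), sigma_2 = sigma_3 = sigma_1

p_fa_loc = Q((eta - u0) / s0)   # prob. the attacker itself decides H1 under H0
p_d_loc = Q((eta - u1) / s1)    # ... and under H1 (imperfect local sensing)

def Pf(p, thr):
    """Tail prob. of one attacker's report under H0, from the mixture PDF."""
    return ((1 - p) * Q((thr - u0) / s0)
            + p * ((1 - p_fa_loc) * Q((thr - u2) / s_att)
                   + p_fa_loc * Q((thr - u3) / s_att)))

def Pd(p, thr):
    """Tail prob. of one attacker's report under H1, from the mixture PDF."""
    return ((1 - p) * Q((thr - u1) / s1)
            + p * ((1 - p_d_loc) * Q((thr - u2) / s_att)
                   + p_d_loc * Q((thr - u3) / s_att)))

thr = (u0 + u1) / 2             # illustrative decision threshold (assumption)
vals_f = [Pf(p, thr) for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
vals_d = [Pd(p, thr) for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
# vals_f increases and vals_d decreases in p, as Theorem 1 states
```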

Contention attack
Consider a scenario in which a malicious SU intends to contend with honest SUs for secondary access opportunities, i.e., to induce the FC to maximize the probability of false alarm, which can be expressed as follows:

max_{(u_2i, u_3i)} P_f, subject to P_d = β ∈ (0, 1).   (15)

To simplify the proofs, we assume that there is a single malicious SU (i.e., the i-th SU) in the system. Meanwhile, to partially ensure the stealthiness of the attack behavior, the attack strength is limited (see Appendix).

Theorem 2.
Given the probability of detection as P d = β and the attack probability as p i = p a , the probability of false alarm P f increases with u 2i .
Proof. For a given probability of detection P_d = β at the FC, (12) defines an implicit relation between u_2i and u_3i. Taking the derivative of both sides of P_d = β with respect to u_2i and substituting the resulting term into the derivative of P_f in (11), the limitation of the attack strength in the Appendix gives dP_f/du_2i > 0.

Theorem 2 points out that the global probability of false alarm reaches its maximum when u_2i reaches its available maximum. This theorem provides an approach to derive the optimal attack strength for the optimization in (15): the fixed P_d provides an implicit function between u_3i and u_2i, whose range is limited in the Appendix; then, based on Theorem 2, the optimal value can be selected from the set of (u_2i, u_3i) satisfying this function.
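Theorem 2 can be illustrated numerically for a single reporting attacker: fixing P_d = β defines u_3i implicitly as a function of u_2i, and sweeping u_2i upward raises P_f. The decision threshold and the bisection solver below are illustrative assumptions, not the paper's procedure.

```python
import math

def Q(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

U, gamma = 100, 10 ** (-7 / 10)
u0, s0 = 2 * U, math.sqrt(4 * U)
u1, s1 = 2 * U * (gamma + 1), math.sqrt(4 * U * (2 * gamma + 1))
eta = u0 + s0 * 1.2816                   # local threshold for local P_fa = 0.1
pfa, pd = Q((eta - u0) / s0), Q((eta - u1) / s1)   # attacker's local error probs
p_a, thr = 0.7, (u0 + u1) / 2            # attack prob. and threshold (assumptions)

def Pd(u2_, u3_):
    """Detection prob. of the attacker's report (mixture over its decisions)."""
    return (1 - p_a) * Q((thr - u1) / s1) + p_a * ((1 - pd) * Q((thr - u2_) / s1)
                                                   + pd * Q((thr - u3_) / s1))

def Pf(u2_, u3_):
    """False-alarm prob. of the attacker's report."""
    return (1 - p_a) * Q((thr - u0) / s0) + p_a * ((1 - pfa) * Q((thr - u2_) / s1)
                                                   + pfa * Q((thr - u3_) / s1))

def u3_for(u2_, beta, lo=0.5 * u0, hi=1.5 * u1):
    """Solve Pd(u2, u3) = beta for u3 by bisection (Pd is increasing in u3)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Pd(u2_, mid) < beta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta = Pd(1.05 * u1, u0)                 # a feasible target detection probability
pfs = [Pf(c * u1, u3_for(c * u1, beta)) for c in (0.95, 1.0, 1.05)]
# pfs is increasing: along P_d = beta, P_f grows with u2, as Theorem 2 states
```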

Interference attack
Consider another scenario in which a malicious SU aims to bring harmful interference to disturb the normal operation of the PU, i.e., to minimize the probability of detection, which can be formulated as

min_{(u_2i, u_3i)} P_d, subject to P_f = α ∈ (0, 1).
(22)

Theorem 3. Given the probability of false alarm P_f = α and the attack probability p_i = p_a, the probability of detection P_d increases with u_3i.
Proof. For a given probability of false alarm P_f = α at the FC, (11) defines an implicit relation between u_2i and u_3i. Taking the derivative of both sides of P_f = α with respect to u_3i and substituting the resulting term into the derivative of P_d in (12), the limitation of the attack strength in the Appendix gives dP_d/du_3i > 0.

Theorem 3 points out that the global probability of detection reaches its minimum when u_3i reaches its available minimum. This theorem provides an approach to derive the optimal attack strength for the optimization in (22): the fixed P_f provides an implicit function between u_3i and u_2i, whose range is limited in the Appendix; then, based on Theorem 3, the optimal value can be selected from the set of (u_2i, u_3i) satisfying this function.

Stealthiness analysis
In the previous section, the destructiveness analysis was done without taking into consideration any defense or secure sensing algorithm at the FC. In this section, we consider that a classical secure sensing algorithm developed in [18] is adopted at the FC to find out potential attackers. The stealthiness of the proposed attack model must therefore be studied.
Current secure algorithms at the FC mainly leverage historical sensing results to identify malicious SUs [3]. To ensure the generality of the stealthiness analysis, a classical algorithm developed in [18] is chosen in this paper. Briefly, we first review this algorithm as follows.
Initially, all SUs are treated as reliable, with a reputation value of r_i(0). Then, the reputation value of the i-th SU at the k-th time slot is updated as [18]

r_i(k) = r_i(k − 1) + 1, if d_i(k) = D(k);  r_i(k) = r_i(k − 1) − 1, otherwise,

where D(k) represents the global decision at the FC and d_i(k) is the i-th SU's local decision at the k-th time slot. When the reputation value is lower than a discard threshold λ, the SU is identified as malicious; otherwise, it is treated as honest. In [18], r_i(0) = λ + ε, where λ is set as 1 and ε is set as 4. At the k-th time slot, the local decision d_i(k) of the i-th SU is obtained via (3), and the global decision D(k) is obtained by fusing, as in (4), the reports of the SUs in S(k), where S(k) represents the set of SUs with reputation values larger than the threshold λ.

Based on this algorithm, we define a stealthiness metric ψ as

ψ(k) = [ (1/N_m) Σ_{i=1}^{N_m} r_i^m(k) ] / [ (1/(N − N_m)) Σ_{i=1}^{N − N_m} r_i^h(k) ],   (30)

where N_m is the number of malicious SUs, r_i^m(k) denotes the reputation value of the i-th malicious SU, and r_i^h(k) denotes the reputation value of the i-th honest SU. We choose honest SUs as the baseline: the FC can hardly distinguish a malicious SU from honest ones when the stealthiness metric is close to 1, that is to say, the malicious SU has good stealthiness. A deeper analysis is given in the next section.
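A toy sketch of the reputation mechanism and the stealthiness metric (30) is given below. Caveats: the update rule of [18] is abstracted here as a ±1 match/mismatch update, the soft attack's effect is abstracted as a local-decision flip with probability p_attack, and the decision-error probabilities are illustrative values rather than quantities derived from the energy model.

```python
import random

def reputation_trace(p_attack=0.5, slots=200, N=10, Nm=3, seed=7):
    """Toy reputation-accumulation sketch in the spirit of [18].

    Reputation goes up by 1 when a SU's local decision matches the global
    decision and down by 1 otherwise; the global decision is a majority vote
    over SUs whose reputation exceeds lambda. Returns the stealthiness metric
    psi (avg. malicious reputation / avg. honest reputation) after `slots`.
    """
    rng = random.Random(seed)
    lam, eps = 1, 4
    rep = [lam + eps] * N            # r_i(0) = lambda + epsilon
    p_err_honest = 0.1               # illustrative honest decision-error prob.
    for _ in range(slots):
        truth = rng.random() < 0.5   # P(H1) = 0.5
        dec = []
        for i in range(N):
            d = truth if rng.random() >= p_err_honest else (not truth)
            if i < Nm and rng.random() < p_attack:
                d = not d            # attacker flips its decision this slot
            dec.append(d)
        active = [i for i in range(N) if rep[i] > lam]
        glob = sum(dec[i] for i in active) > len(active) / 2
        for i in range(N):
            rep[i] += 1 if dec[i] == glob else -1
    psi = (sum(rep[:Nm]) / Nm) / (sum(rep[Nm:]) / (N - Nm))
    return psi
```

Running this for small versus large p_attack reproduces the qualitative behavior analyzed next: the metric stays near its honest baseline for low attack probabilities and collapses as the attack probability grows.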

Performance evaluation and discussions
In this section, numerical simulations are used to verify the analytical results on destructiveness and stealthiness of the proposed probabilistic soft SSDF attack model.
In the following simulations, the cooperative spectrum sensing system consists of a FC and N SUs, among which N_m SUs are malicious. The average received SNR is set as -7 dB, and the local threshold is obtained by setting the local false alarm probability as 0.1. The time-bandwidth product U is 100. The probability of the primary signal being present is set as P(H_1) = 0.5. Without loss of generality, in the following simulations, for all malicious SUs, we set A = 1.1u_1i, B = 0.9u_0i, u_2i = u_2, u_3i = u_3, ∀i = 1, ..., N_m. Figure 2 shows the global sensing performance at the FC under the proposed SSDF attack model in terms of the global detection error probability P_e, defined as

P_e = (1 − P_d) P(H_1) + P_f P(H_0).

Destructiveness evaluation
In this simulation, N = 10 and N_m = 3. Curves for four groups of attack strengths (u_2, u_3) are plotted, together with the curve without malicious SUs as the baseline. Figure 2 shows that (i) the global detection error probability increases with the attack probability; (ii) when the attack strength increases, i.e., u_2 increases and/or u_3 decreases, the global detection error probability increases; and (iii) the simulations match the theoretical results very well. Figure 3 shows the global false alarm probability versus the detection probability when the malicious SU implements the contention attack discussed in Section 4.2.
In this simulation, the attack probability is set as 0.7, and the optimal attack strength (u_2, u_3) is obtained according to Theorem 2. Recall that Theorem 2 considers the case of a single malicious SU in the system, so we set N = 5 and N_m = 1. Two other curves are presented for comparison: 0.95u_2 and 0.9u_2 are set below the optimal value u_2, and u_3* and u_3** are correspondingly calculated for the given P_d. It can be observed in Figure 3 that the global false alarm probability P_f reaches its maximum when the malicious SU implements the optimal contention attack. Figure 4 shows the global detection probability versus the false alarm probability when the malicious SU implements the interference attack discussed in Section 4.3. In this simulation, the attack probability is set as 0.7, and the optimal attack strength (u_2, u_3) is obtained according to Theorem 3. Recall that Theorem 3 likewise considers a single malicious SU, so we set N = 5 and N_m = 1. Two other curves are presented for comparison: 1.01u_3 and 1.02u_3 are set above the optimal value u_3, and u_2* and u_2** are correspondingly calculated for the given P_f. As shown in Figure 4, the global detection probability P_d reaches its minimum when the malicious SU implements the optimal interference attack.

Stealthiness evaluation
In this subsection, we study the impact of the attack probability on the stealthiness of the proposed attack model. The attack strength is set as u_2 = u_1 and u_3 = u_0, with N = 10 and N_m = 3. Figure 5 shows the evolution of the stealthiness metric, defined in (30), under different attack probabilities. Two main observations are as follows:
• After a few sensing slots, the stealthiness metric converges to a constant value ψ for each given attack probability p_a.
• The malicious users' stealthiness deteriorates as the attack probability p_a grows.
Mathematically, denoting k as the index of the sensing slot, the first observation above can be written as

lim_{k→∞} ψ(k) = ψ(p_a).

As shown in Figure 5, the secure algorithm is not fully effective against the probabilistic attack. Essentially, the secure algorithm in [18] uses a reputation accumulation mechanism based on historical decision information. However, the reputation accumulation process is undermined by malicious SUs that may behave honestly at one moment and turn malicious at the next. Furthermore, there exists a collision probability p_c(k) with which the i-th malicious SU's local decision d_i^m(k) is inconsistent with the FC's global decision D(k):

p_c(k) = Pr( d_i^m(k) ≠ D(k) ).

Similarly, during the k-th sensing slot, the FC's global decision collides with an honest SU's local decision with some probability as well. Consistent with the stealthiness metric, the collision probabilities also converge to constant values, and the limiting value ψ is determined by them: the closer the malicious SUs' collision probability is to that of the honest SUs, the closer ψ is to 1. The second observation above is the opposite of the behavior in Figure 2: briefly, the destructiveness in Figure 2 increases with the attack probability, while the stealthiness in Figure 5 decreases with it. Consequently, there should be a trade-off between stealthiness and destructiveness with respect to the attack probability.
Motivated by this discovery, we further plot Figure 6, which shows the global detection error probability P e of the robust defense or secure sensing algorithm developed in [18], under the proposed probabilistic soft attack model. In the simulation, N = 10, N m = 3.
Although the axes of Figures 2 and 6 show the same quantities, the results are quite different. The main reason lies in the fact that, in producing Figure 6, the secure sensing algorithm developed in [18] is adopted at the FC to discard the detected malicious SUs before the global fusion. It is observed in Figure 6 that, with a defense or secure sensing algorithm taken into consideration, there generally exists an optimal attack probability p_a ∈ (0, 1], not necessarily equal to 1. Taking the curve with attack strength (u_1, u_0) as an example and comparing it with the curve without malicious SUs, we can divide the attack probability into three intervals:
• An interval of attack probabilities lower than the first point, drawn as a circle in Figure 6, named the role reversal interval, where the malicious SUs' participation does not harm the FC's global fusion but actually benefits it.
• An interval of attack probabilities higher than the second point, drawn as a square, named the exposure interval, where the attackers are found out and removed, and their powerful attack action is nearly transparent to the FC due to their low stealthiness.
• An interval of medium attack probabilities between the two points, named the favorable attack interval, where an effective but stealthy attack can be implemented.

Conclusions
In this paper, a generic and novel probabilistic soft SSDF attack model has been proposed, in which a malicious SU conducts attacks with a certain probability varying from 0 to 1. Under this generalized SSDF attack model, we first obtained closed-form expressions of the global sensing performance at the fusion center. Then, we theoretically evaluated the performance of the proposed attack model in terms of destructiveness and stealthiness. Numerical simulations match the analytical results well. An interesting trade-off between destructiveness and stealthiness has also been discovered, a fundamental issue in SSDF attacks that has been ignored by most previous studies.