  • Research
  • Open Access

Performance analysis of probabilistic soft SSDF attack in cooperative spectrum sensing

EURASIP Journal on Advances in Signal Processing20142014:81

https://doi.org/10.1186/1687-6180-2014-81

  • Received: 2 March 2014
  • Accepted: 20 May 2014
  • Published:

Abstract

In cognitive radio networks, the spectrum sensing data falsification (SSDF) attack is a crucial factor deteriorating the detection performance of cooperative spectrum sensing. In this paper, we propose and analyze a novel probabilistic soft SSDF attack model, which generalizes the existing models and includes them as special cases. Under this generalized SSDF attack model, we first obtain closed-form expressions of the global sensing performance at the fusion center. Then, we theoretically evaluate the performance of the proposed attack model in terms of destructiveness and stealthiness, sequentially. Numerical simulations match the analytical results well. Last but not least, an interesting trade-off between destructiveness and stealthiness is discovered, a fundamental issue in SSDF attacks that has been ignored by most of the previous studies.

Keywords

  • Cognitive radio
  • Spectrum sensing data falsification attack
  • Destructiveness
  • Stealthiness

1 Introduction

Cognitive radio (CR) has been regarded as a promising technology to improve spectrum utilization [1]. To enable CR, cooperative spectrum sensing among multiple spectrum sensors is one of the key technologies [2]. However, due to the openness of the low-layer protocol stack of CR, the reliability of cooperative spectrum sensing is challenged by many security threats [3].

The most well-known security threat is the spectrum sensing data falsification (SSDF) attack [4], where abnormal or malicious spectrum sensors falsify their true sensing results. The SSDF attack pursues two main goals. One is to decrease the global detection probability so as to disturb the normal operation of the primary user (PU). The other is to increase the global false alarm probability so as to waste the access opportunities of the honest secondary users (SUs).

Previous studies on SSDF attack modeling can generally be grouped into two classes: hard SSDF attacks and soft SSDF attacks. Briefly, in a hard SSDF attack, malicious SUs falsify their local binary decisions [5-8], while in a soft SSDF attack, malicious SUs falsify their received energy values. Compared to hard SSDF attacks, soft SSDF attacks are generally more powerful and elusive because of their relatively larger value space [9-12], since malicious SUs falsify real energy observations, rather than binary decisions, to mislead the fusion center (FC).

Three soft SSDF attack models have been widely adopted to test various secure sensing algorithms: Always Yes [13], Always No [14, 15], and Always Adverse [16]. In the Always Yes attack, an attacker raises its local observations by injecting a positive offset in every sensing slot. In the Always No attack, an attacker decreases its local observations by injecting a negative offset in every sensing slot. In the Always Adverse attack, an attacker first performs a local binary hypothesis test between H0 and H1 by comparing its energy observation with a predefined threshold, with H0 denoting the case that the primary signal is absent and H1 otherwise; the malicious SU then raises its observations when its local binary decision is H0 and decreases them when its decision is H1. One main limitation of the existing soft SSDF attack models is that they are oversimplified and not general enough to serve as baselines for the design of counter-attack or secure sensing algorithms.

Motivated by the observations above, in this paper we set out to develop a more general soft SSDF attack model, which should go beyond the existing models and include them as special cases. We also consider that each attacker should be smart enough to keep dual goals in mind: (i) causing harmful disturbance to cooperative spectrum sensing and (ii) protecting itself against being easily detected. Specifically, the main contributions of this paper are as follows:

  • Propose a generic probabilistic soft SSDF attack model and derive the corresponding closed-form expressions of global sensing performance at the fusion center.

  • Analyze the destructiveness of the proposed attack model under three general scenarios and obtain the corresponding optimal attack strategies.

  • Define a stealthiness metric and analyze the stealthiness of the proposed attack model under a classical secure sensing algorithm.

  • Discover an interesting trade-off between destructiveness and stealthiness, a fundamental issue in SSDF attacks that has been ignored by most of the previous studies.

The remainder of this paper is organized as follows. Section 2 presents the spectrum sensing preliminaries. In Section 3, we formulate the proposed probabilistic soft SSDF attack model and present the analysis of its impacts on the sensing performance. Analytical results on destructiveness and stealthiness of the proposed attack model are provided in Sections 4 and 5, respectively. Numerical results are given in Section 6, and conclusions are provided in Section 7.

2 Spectrum sensing preliminaries

As shown in Figure 1, we consider a cooperative spectrum sensing system consisting of N SUs and an FC. Each SU conducts energy detection and transmits its observation to the FC. At the FC, the global decision is made based on the combination of the observations. In particular, some malicious SUs report falsified observations to the FC to deteriorate the spectrum sensing performance.
Figure 1

Cooperative spectrum sensing system. PU, primary user; SU, secondary user; FC, fusion center.

For a given frequency band, spectrum sensing is generally formulated as a binary hypothesis test:

$$H_0: r_i(t) = n_i(t), \qquad H_1: r_i(t) = h_i(t)s_i(t) + n_i(t), \qquad i = 1, 2, \ldots, N,$$
(1)

where H0 denotes the case that the primary signal is absent and H1 the case that the PU is present, N is the number of SUs, r_i(t) is the t-th sample of the i-th SU's received signal, s_i(t) is the PU's transmitted signal, h_i(t) is the channel gain, and n_i(t) denotes the additive white Gaussian noise (AWGN).

With an energy detector, the collected energy observation at the i-th SU is given as $x_{Ei} = \sum_{t=1}^{2U} |r_i(t)|^2$, where U = TW is the time-bandwidth product. According to the central limit theorem, when U is large enough (e.g., U ≫ 10), x_{Ei} can be well approximated as a Gaussian random variable under both hypotheses H0 and H1 as follows [14, 16, 17]:
$$H_0: x_{Ei} \sim \mathcal{N}(u_{0i}, \sigma_{0i}^2), \qquad H_1: x_{Ei} \sim \mathcal{N}(u_{1i}, \sigma_{1i}^2),$$
(2)

where u_{0i} = 2U, σ_{0i}² = 4U, u_{1i} = 2U(γ_i + 1), σ_{1i}² = 4U(2γ_i + 1), and γ_i is the received SNR of the i-th SU.
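As a numerical sanity check on these statistics, the sketch below evaluates a single detector's local false alarm and detection probabilities under the Gaussian approximation (2). The values of U, the SNR, and the target local false alarm probability are borrowed from the simulation settings in Section 6; the `qfunc` helper and the Q-inverse constant are standard, not part of the paper.

```python
from math import erfc, sqrt

def qfunc(x):
    """Gaussian Q-function: tail probability of the standard normal."""
    return 0.5 * erfc(x / sqrt(2.0))

U = 100                          # time-bandwidth product (Section 6 setting)
gamma = 10 ** (-7 / 10)          # received SNR of -7 dB in linear scale

u0, var0 = 2 * U, 4 * U                                  # H0 moments, cf. (2)
u1, var1 = 2 * U * (gamma + 1), 4 * U * (2 * gamma + 1)  # H1 moments

# Local threshold for a target local false-alarm probability of 0.1:
# eta = u0 + sigma0 * Q^{-1}(0.1), with Q^{-1}(0.1) ~ 1.2816.
eta = u0 + sqrt(var0) * 1.2816

p_fa = qfunc((eta - u0) / sqrt(var0))   # local false alarm (~0.1)
p_d = qfunc((eta - u1) / sqrt(var1))    # local detection probability
```

At -7 dB with U = 100 the local detector is imperfect (p_d well below 1), which is exactly why cooperative fusion at the FC is needed.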

The local binary decision d_i at the i-th SU is obtained by comparing its energy observation with a local threshold η_i:

$$x_{Ei} \underset{d_i = H_0}{\overset{d_i = H_1}{\gtrless}} \eta_i.$$
(3)
At the FC, a weighted combination is generally used to obtain the global decision D as follows:

$$x_E = \sum_{i=1}^{N} w_i x_{Ei} \underset{D = H_0}{\overset{D = H_1}{\gtrless}} \eta_f,$$
(4)

where η_f is the global threshold at the FC, w_i ∈ [0, 1] is the weight assigned to the i-th SU by the FC, and $\sum_{i=1}^{N} w_i = 1$.
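A minimal sketch of the fusion rule (4), assuming equal weights and an assumed global threshold; the report values are synthetic draws (honest SUs under H0 with u0 = 200, sigma0 = 20 for U = 100), not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical energy reports from N SUs under H0; illustrative numbers only.
N = 10
reports = rng.normal(200.0, 20.0, size=N)

w = np.full(N, 1.0 / N)      # equal weights w_i = 1/N, summing to one
eta_f = 210.0                # assumed global threshold at the FC

x_E = float(w @ reports)                # weighted combination, cf. (4)
D = "H1" if x_E >= eta_f else "H0"      # global decision at the FC
```

Any SU whose report enters this weighted sum can bias x_E, which is the attack surface exploited in Section 3.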

3 Probabilistic soft SSDF attack

In this section, we propose a generic soft SSDF attack model and analyze its impacts on the sensing performance. Before going into the detailed analysis, we state that a generic and effective SSDF attack model should at least have the following features:

  • Attackers should be able to exploit their local sensing results to implement effective attacks, and the imperfection of the local sensing results should also be considered.

  • Attackers should be able to jointly consider harmfully disturbing the FC toward wrong decisions and reliably protecting themselves from being easily detected, by properly adjusting the attack parameters.

3.1 The proposed attack model

Based on the considerations mentioned above, a probabilistic soft SSDF attack model is proposed as follows. First, an attacker (say, the i-th SU) makes its local binary decision via (3). Then, it uses a probability p_i to decide whether to perform an attack. If it decides to attack, it randomly produces a Gaussian value as the sensing result to report; otherwise, it holds its true observation. Mathematically, this attack model can be written as
$$\begin{aligned} x_{Ei},\ d_i = H_0:&\quad x \sim \mathcal{N}(u_{2i}, \sigma_{2i}^2)\ \text{with probability } p_i, \qquad x = x_{Ei}\ \text{with probability } 1 - p_i, \\ x_{Ei},\ d_i = H_1:&\quad x \sim \mathcal{N}(u_{3i}, \sigma_{3i}^2)\ \text{with probability } p_i, \qquad x = x_{Ei}\ \text{with probability } 1 - p_i, \end{aligned}$$
(5)

where u_{2i} and σ_{2i}² denote, respectively, the mean and variance of the falsified report when the local decision of the i-th SU is H0 (i.e., the PU is judged absent), and u_{3i} and σ_{3i}² denote the mean and variance when the local decision is H1. Obviously, for an honest SU, the attack probability equals zero, while for a malicious SU, p_i ∈ (0, 1). Naturally, the two mean values satisfy u_{2i} ≥ u_{3i}, as the attacker generally falsifies sensing results by reversing them. To facilitate the following analysis, we define (u_{2i}, u_{3i}) as the attack strength and consider the case σ_{2i}² = σ_{3i}² = σ_{1i}².

The attack model in (5) is general enough to include the existing models as special cases by properly adjusting the attack parameters (u_{2i}, u_{3i}, p_i). For example, the Always No [13], Always Yes [14, 15], and Always Adverse [16] attacks are realized when (u_{2i}, u_{3i}, p_i) is set to (u_{0i}, u_{0i}, 1), (u_{1i}, u_{1i}, 1), and (u_{1i}, u_{0i}, 1), respectively. In particular, the attack probability p_i makes the proposed attack more elusive and flexible.
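The attack model (5) amounts to a per-slot reporting rule. The function below is an illustrative sketch: the parameter names mirror (3) and (5), and the concrete numbers in the usage line (u0 = 200, u1 = 240, sigma1 near 23.7) are assumptions taken from the U = 100, -7 dB setting.

```python
import random

def ssdf_report(x_E, eta, p, u2, u3, sigma):
    """One reported value under the probabilistic soft SSDF model (5).

    With probability 1 - p the SU reports its true energy x_E; with
    probability p it replaces it by a Gaussian draw whose mean is u2 if
    its local decision (3) is H0 and u3 if it is H1.
    """
    if random.random() >= p:            # honest slot: report the truth
        return x_E
    mean = u2 if x_E < eta else u3      # local decision via threshold eta
    return random.gauss(mean, sigma)

# Always Adverse is the special case (u2, u3, p) = (u1, u0, 1).
r = ssdf_report(x_E=190.0, eta=225.6, p=1.0, u2=240.0, u3=200.0, sigma=23.7)
```

Setting p = 0 recovers an honest SU, and p = 1 with (u2, u3) = (u0, u0) or (u1, u1) recovers Always No and Always Yes.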

3.2 Local sensing performance under probabilistic soft SSDF attack

For the proposed attack model in Section 3.1, the probability density function (PDF) of the result reported by the i-th malicious SU under hypotheses H0 and H1 can be calculated, respectively, as

$$g_{H_0}^i(x) = (1 - p_i) f\!\left(\frac{x - u_{0i}}{\sigma_{0i}}\right) + p_i \left[1 - Q\!\left(\frac{\eta_i - u_{0i}}{\sigma_{0i}}\right)\right] f\!\left(\frac{x - u_{2i}}{\sigma_{2i}}\right) + p_i\, Q\!\left(\frac{\eta_i - u_{0i}}{\sigma_{0i}}\right) f\!\left(\frac{x - u_{3i}}{\sigma_{3i}}\right),$$
(6)

$$g_{H_1}^i(x) = (1 - p_i) f\!\left(\frac{x - u_{1i}}{\sigma_{1i}}\right) + p_i \left[1 - Q\!\left(\frac{\eta_i - u_{1i}}{\sigma_{1i}}\right)\right] f\!\left(\frac{x - u_{2i}}{\sigma_{2i}}\right) + p_i\, Q\!\left(\frac{\eta_i - u_{1i}}{\sigma_{1i}}\right) f\!\left(\frac{x - u_{3i}}{\sigma_{3i}}\right),$$
(7)

where f((x − u_i)/σ_i) denotes the PDF of a Gaussian variable x with mean u_i and standard deviation σ_i, Q(·) is the Gaussian Q-function, and η_i is the local threshold.

3.3 Global sensing performance under probabilistic soft SSDF attack

In a cooperative spectrum sensing system with an FC and N SUs, among which the first k SUs are malicious attackers, the FC fuses the results from both malicious and honest SUs via Equation (4). The PDF of the fusion result, under hypotheses H0 and H1, can be respectively calculated as

$$g_{H_0}(x) = \sum_{m_1 \in \{0,2,3\}} \cdots \sum_{m_k \in \{0,2,3\}} a_{m_1}^1 a_{m_2}^2 \cdots a_{m_k}^k \, f\!\left(\frac{x - \sum_{l=1}^{k} w_l u_{m_l l} - u_{h0}}{\sqrt{\sum_{l=1}^{k} w_l^2 \sigma_{m_l l}^2 + \sigma_{h0}^2}}\right),$$
(8)

$$g_{H_1}(x) = \sum_{m_1 \in \{1,2,3\}} \cdots \sum_{m_k \in \{1,2,3\}} b_{m_1}^1 b_{m_2}^2 \cdots b_{m_k}^k \, f\!\left(\frac{x - \sum_{l=1}^{k} w_l u_{m_l l} - u_{h1}}{\sqrt{\sum_{l=1}^{k} w_l^2 \sigma_{m_l l}^2 + \sigma_{h1}^2}}\right).$$
(9)

In (8) and (9), we have

$$a_0^i = 1 - p_i, \quad a_2^i = p_i\left[1 - Q\!\left(\frac{\eta_i - u_{0i}}{\sigma_{0i}}\right)\right], \quad a_3^i = p_i\, Q\!\left(\frac{\eta_i - u_{0i}}{\sigma_{0i}}\right),$$
$$b_1^i = 1 - p_i, \quad b_2^i = p_i\left[1 - Q\!\left(\frac{\eta_i - u_{1i}}{\sigma_{1i}}\right)\right], \quad b_3^i = p_i\, Q\!\left(\frac{\eta_i - u_{1i}}{\sigma_{1i}}\right),$$
$$u_{h0} = \sum_{i=k+1}^{N} w_i u_{0i}, \quad \sigma_{h0}^2 = \sum_{i=k+1}^{N} w_i^2 \sigma_{0i}^2, \quad u_{h1} = \sum_{i=k+1}^{N} w_i u_{1i}, \quad \sigma_{h1}^2 = \sum_{i=k+1}^{N} w_i^2 \sigma_{1i}^2.$$
(10)

Let P_f and P_d denote the probabilities of false alarm and detection at the FC, respectively, which can be obtained as

$$P_f = \Pr\{x_E \geq \eta_f \mid H_0\} = \sum_{m_1 \in \{0,2,3\}} \cdots \sum_{m_k \in \{0,2,3\}} a_{m_1}^1 a_{m_2}^2 \cdots a_{m_k}^k \, Q\!\left(\frac{\eta_f - \sum_{l=1}^{k} w_l u_{m_l l} - u_{h0}}{\sqrt{\sum_{l=1}^{k} w_l^2 \sigma_{m_l l}^2 + \sigma_{h0}^2}}\right),$$
(11)

$$P_d = \Pr\{x_E \geq \eta_f \mid H_1\} = \sum_{m_1 \in \{1,2,3\}} \cdots \sum_{m_k \in \{1,2,3\}} b_{m_1}^1 b_{m_2}^2 \cdots b_{m_k}^k \, Q\!\left(\frac{\eta_f - \sum_{l=1}^{k} w_l u_{m_l l} - u_{h1}}{\sqrt{\sum_{l=1}^{k} w_l^2 \sigma_{m_l l}^2 + \sigma_{h1}^2}}\right).$$
(12)
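Since the SUs report independently given the hypothesis, the closed-form P_f in (11) is a finite Gaussian mixture that can be evaluated by enumerating the branch indices m_1, ..., m_k over {0, 2, 3}. The sketch below follows that structure; the data layout (dicts of branch probabilities and moments) and all concrete numbers are illustrative assumptions.

```python
from itertools import product
from math import erfc, sqrt

def qfunc(x):
    """Gaussian Q-function: tail probability of the standard normal."""
    return 0.5 * erfc(x / sqrt(2.0))

def global_pf(eta_f, attackers, honest, w):
    """Global false-alarm probability, cf. (11), under hypothesis H0.

    attackers: list of dicts with "probs" = [a_0, a_2, a_3] and
    "moments" = [(u_0, var_0), (u_2, var_2), (u_3, var_3)], cf. (10);
    honest: list of (u_0i, var_0i); w: one fusion weight per SU.
    """
    k = len(attackers)
    # Honest SUs contribute a single Gaussian term, cf. u_h0, sigma_h0 in (10).
    u_h0 = sum(w[k + j] * u for j, (u, _) in enumerate(honest))
    var_h0 = sum(w[k + j] ** 2 * v for j, (_, v) in enumerate(honest))
    pf = 0.0
    # Enumerate branch indices; list position 0/1/2 stands for branch 0/2/3.
    for branches in product(range(3), repeat=k):
        coef, mean, var = 1.0, u_h0, var_h0
        for i, m in enumerate(branches):
            coef *= attackers[i]["probs"][m]
            u, v = attackers[i]["moments"][m]
            mean += w[i] * u
            var += w[i] ** 2 * v
        pf += coef * qfunc((eta_f - mean) / sqrt(var))
    return pf

# Sanity check: one attacker with p_i = 0 reduces to the honest-only case.
attackers = [{"probs": [1.0, 0.0, 0.0],
              "moments": [(200.0, 400.0), (240.0, 559.6), (200.0, 559.6)]}]
honest = [(200.0, 400.0), (200.0, 400.0)]
pf = global_pf(200.0, attackers, honest, [1 / 3, 1 / 3, 1 / 3])
```

The same enumeration with the b-coefficients of (10) and the H1 moments gives P_d in (12).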

4 Destructiveness analysis

In this section, we evaluate the impacts of the three model parameters p_i, u_{2i}, u_{3i} on the proposed attack model's destructiveness. Specifically, we analyze three representative scenarios in sequence:

  1. Probabilistic attack: the optimal attack probability is derived to cause the largest harm to the FC's detection performance.

  2. Contention attack: by setting an appropriate attack strength, a malicious SU implements the optimal attack that maximizes the global false alarm probability so as to waste the access opportunities of honest SUs.

  3. Interference attack: with a specific attack strength, a malicious SU conducts the optimal attack that minimizes the global detection probability so as to disturb the normal operation of the primary user.

Furthermore, without loss of generality, in the following analysis the weight w_i is set to 1/N.

4.1 Probabilistic attack

Consider a scenario in which a malicious SU aims to deteriorate the spectrum sensing performance of the system by raising P_f and reducing P_d. Here, we analyze the impact of the attack probability p_i on achieving this attack objective. The problem can be expressed as

$$\max_{p_i}\ \{(1 - P_d)P(H_1) + P_f P(H_0)\}, \quad \text{subject to } u_{2i} = u_{1i},\ u_{3i} = u_{0i}.$$
(13)

Theorem 1

For the given attack strength (u_{2i}, u_{3i}) = (u_{1i}, u_{0i}), the probability of detection P_d decreases with the attack probability p_i and the probability of false alarm P_f increases with p_i.

Proof

Given (u_{2i}, u_{3i}) = (u_{1i}, u_{0i}) and Δ > 0, we have

$$P_f(p_i + \Delta) - P_f(p_i) = \sum_{m_1 \in \{0,2,3\}} a_{m_1}^1 \cdots \sum_{m_i \in \{0,2,3\}} \left[a_{m_i}^i(p_i + \Delta) - a_{m_i}^i(p_i)\right] \cdots \sum_{m_k \in \{0,2,3\}} a_{m_k}^k \, Q\!\left(\frac{\eta_f - \sum_{l=1}^{k} w_l u_{m_l l} - u_{h0}}{\sqrt{\sum_{l=1}^{k} w_l^2 \sigma_{m_l l}^2 + \sigma_{h0}^2}}\right)$$
$$> \sum_{m_1 \in \{0,2,3\}} a_{m_1}^1 \cdots \sum_{m_{i-1} \in \{0,2,3\}} a_{m_{i-1}}^{i-1} \sum_{m_{i+1} \in \{0,2,3\}} a_{m_{i+1}}^{i+1} \cdots \sum_{m_k \in \{0,2,3\}} a_{m_k}^k \,\Delta\!\left[1 - Q\!\left(\frac{\eta_i - u_{0i}}{\sigma_{0i}}\right)\right] \left[Q\!\left(\frac{\eta_f - \sum_{n \neq i} w_n u_{m_n n} - w_i u_{2i} - u_{h0}}{\sqrt{\sum_{n \neq i} w_n^2 \sigma_{m_n n}^2 + w_i^2 \sigma_{1i}^2 + \sigma_{h0}^2}}\right) - Q\!\left(\frac{\eta_f - \sum_{n \neq i} w_n u_{m_n n} - w_i u_{0i} - u_{h0}}{\sqrt{\sum_{n \neq i} w_n^2 \sigma_{m_n n}^2 + w_i^2 \sigma_{0i}^2 + \sigma_{h0}^2}}\right)\right].$$
(14)
Furthermore, since σ_{1i}² > σ_{0i}², u_{1i} > u_{0i}, and Δ > 0 for i = 1, ..., k, we have
$$Q\!\left(\frac{\eta_f - \sum_{n \neq i} w_n u_{m_n n} - w_i u_{0i} - u_{h0}}{\sqrt{\sum_{n \neq i} w_n^2 \sigma_{m_n n}^2 + w_i^2 \sigma_{0i}^2 + \sigma_{h0}^2}}\right) < Q\!\left(\frac{\eta_f - \sum_{n \neq i} w_n u_{m_n n} - w_i u_{2i} - u_{h0}}{\sqrt{\sum_{n \neq i} w_n^2 \sigma_{m_n n}^2 + w_i^2 \sigma_{1i}^2 + \sigma_{h0}^2}}\right) \;\Longrightarrow\; P_f(p_i + \Delta) - P_f(p_i) > 0.$$

Consequently, the probability of false alarm P_f increases with the attack probability p_i. Similarly, we can obtain P_d(p_i + Δ) − P_d(p_i) < 0, so the probability of detection P_d decreases with p_i. Therefore, the detection error probability P_e = (1 − P_d)P(H_1) + P_f P(H_0) increases with p_i.

Note that Theorem 1 does not take into account any defense or secure sensing algorithm at the FC. Obviously, a malicious SU with a high attack probability is prone to being found out because of its low stealthiness, which will be further studied in Section 5.

4.2 Contention attack

Consider a scenario in which a malicious SU intends to contend with honest SUs for secondary access opportunities, i.e., to maximize the global false alarm probability at the FC, which can be expressed as follows:

$$\max_{u_{2i}, u_{3i}} P_f, \quad \text{subject to } P_d = \beta \in (0, 1).$$
(15)

To simplify the proofs, we assume that there exists a single malicious SU (say, the i-th SU) in the system. Meanwhile, to partially ensure the stealthiness of the attack behavior, the attack strength is limited (see Appendix).

Theorem 2

Given the probability of detection as P_d = β and the attack probability as p_i = p_a, the probability of false alarm P_f increases with u_{2i}.

Proof.

For a given probability of detection at the FC, we have
$$P_d(u_{2i}, u_{3i}) = \beta.$$
(16)
Taking the derivative of both sides, we have
$$\frac{du_{3i}}{du_{2i}} = -\frac{b_2^i\, f\!\left(\dfrac{N\eta_f - u_{2i} - Nu_{h1}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h1}^2}}\right)}{b_3^i\, f\!\left(\dfrac{N\eta_f - u_{3i} - Nu_{h1}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h1}^2}}\right)}.$$
(17)
From (11), we have
$$\frac{dP_f}{du_{2i}} = a_2^i\, f\!\left(\frac{N\eta_f - u_{2i} - Nu_{h0}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h0}^2}}\right) + a_3^i\, f\!\left(\frac{N\eta_f - u_{3i} - Nu_{h0}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h0}^2}}\right) \frac{du_{3i}}{du_{2i}}.$$
(18)
Based on the limitation of the attack strength in the Appendix, we have
$$\frac{a_2^i}{a_3^i} > 1 > \frac{b_2^i}{b_3^i}, \qquad \left|\frac{N\eta_f - u_{2i} - Nu_{h0}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h0}^2}}\right| < \left|\frac{N\eta_f - u_{3i} - Nu_{h0}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h0}^2}}\right|, \qquad \left|\frac{N\eta_f - u_{3i} - Nu_{h1}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h1}^2}}\right| < \left|\frac{N\eta_f - u_{2i} - Nu_{h1}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h1}^2}}\right|,$$
(19)
so that
$$\frac{b_3^i\, f\!\left(\dfrac{N\eta_f - u_{3i} - Nu_{h1}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h1}^2}}\right)}{b_2^i\, f\!\left(\dfrac{N\eta_f - u_{2i} - Nu_{h1}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h1}^2}}\right)} > 1 > \frac{a_3^i\, f\!\left(\dfrac{N\eta_f - u_{3i} - Nu_{h0}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h0}^2}}\right)}{a_2^i\, f\!\left(\dfrac{N\eta_f - u_{2i} - Nu_{h0}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h0}^2}}\right)}.$$
(20)
Substituting (17) and (20) into (18) yields
$$\frac{dP_f}{du_{2i}} > 0.$$
(21)

Theorem 2 points out that the global false alarm probability reaches its maximum when u_{2i} reaches its available maximum. This theorem provides an approach to derive the optimal attack strength for the optimization in (15): the fixed P_d defines an implicit function relating u_{3i} and u_{2i}, whose range is limited as in the Appendix; based on Theorem 2, the optimal value can then be selected from the set of pairs (u_{2i}, u_{3i}) satisfying this function.

4.3 Interference attack

Consider another scenario in which a malicious SU aims to cause harmful interference that disturbs the normal operation of the PU, i.e., to minimize the probability of detection, which can be formulated as

$$\min_{u_{2i}, u_{3i}} P_d, \quad \text{subject to } P_f = \alpha \in (0, 1).$$
(22)

Theorem 3

Given the probability of false alarm as P_f = α and the attack probability as p_i = p_a, the probability of detection P_d increases with u_{3i}; hence, P_d is minimized when u_{3i} takes its smallest admissible value.

Proof.

For a given probability of false alarm at the FC, we have
$$P_f(u_{2i}, u_{3i}) = \alpha.$$
(23)
Taking the derivative of both sides, we have
$$\frac{du_{2i}}{du_{3i}} = -\frac{a_3^i\, f\!\left(\dfrac{N\eta_f - u_{3i} - Nu_{h0}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h0}^2}}\right)}{a_2^i\, f\!\left(\dfrac{N\eta_f - u_{2i} - Nu_{h0}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h0}^2}}\right)}.$$
From (12), it can be calculated that
$$\frac{dP_d}{du_{3i}} = b_3^i\, f\!\left(\frac{N\eta_f - u_{3i} - Nu_{h1}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h1}^2}}\right) + b_2^i\, f\!\left(\frac{N\eta_f - u_{2i} - Nu_{h1}}{\sqrt{\sigma_{1i}^2 + N^2\sigma_{h1}^2}}\right) \frac{du_{2i}}{du_{3i}}.$$
(24)
Based on the results in (19) and (20), we finally have
$$\frac{dP_d}{du_{3i}} > 0.$$
(25)

Theorem 3 points out that the global detection probability reaches its minimum when u_{3i} reaches its available minimum. This theorem provides an approach to derive the optimal attack strength for the optimization in (22): the fixed P_f defines an implicit function relating u_{3i} and u_{2i}, whose range is limited as in the Appendix; based on Theorem 3, the optimal value can then be selected from the set of pairs (u_{2i}, u_{3i}) satisfying this function.

5 Stealthiness analysis

In the previous section, the destructiveness analysis was done without taking into consideration any defense or secure sensing algorithm at the FC. In this section, we consider that the classical secure sensing algorithm developed in [18] is adopted at the FC to find out potential attackers. Therefore, the stealthiness of the proposed attack model should be further studied.

Current secure algorithms at the FC mainly leverage historical sensing results to identify malicious SUs [3]. To ensure the generality of the stealthiness analysis, the classical algorithm developed in [18] is chosen in this paper. Briefly, we first review this algorithm as follows.

Initially, all SUs are treated as reliable, with an initial reputation value r_i(0). Then, the reputation value of the i-th SU at the k-th time slot is updated as [18]

$$r_i(k) = r_i(k-1) + (-1)^{d_i(k) + D(k)},$$
(26)

where decisions are mapped to 0 for H0 and 1 for H1, D(k) represents the global decision at the FC, and d_i(k) is the i-th SU's local decision at the k-th time slot; thus the reputation increases by one when the local and global decisions agree and decreases by one otherwise. When the reputation value falls below a discard threshold λ, the SU is identified as malicious; otherwise, it is treated as honest. In [18], r_i(0) = λ + g, where λ is set to 1 and the initial margin g to 4. At the k-th time slot, the local decision of the i-th SU is obtained as follows:
$$\Gamma_i(k) \underset{d_i(k) = H_0}{\overset{d_i(k) = H_1}{\gtrless}} \eta_i,$$
(27)
where
$$\Gamma_i(k) = \ln \frac{\Pr(x_{Ei}(k) \mid H_1)}{\Pr(x_{Ei}(k) \mid H_0)}.$$
The global decision D(k) is calculated as
$$\Gamma(k) = \sum_{j \in S(k)} w_j(k)\, \Gamma_j(k) \underset{D(k) = H_0}{\overset{D(k) = H_1}{\gtrless}} \eta_f,$$
(28)
where S(k) represents the set of SUs with reputation values larger than the threshold λ, and
$$w_j(k) = \frac{r_j(k-1)}{\sum_{i \in S(k)} r_i(k-1)}.$$
(29)
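The reputation mechanism of (26) to (29) can be sketched in a few lines. The simulation below is a simplified illustration, not the algorithm of [18] in full: local decisions are generated from assumed accuracy levels rather than from the likelihood-ratio test (27), the global rule is a weighted vote, and all numeric settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

N, slots = 10, 50
lam = 1.0                      # discard threshold lambda
rep = np.full(N, lam + 4.0)    # initial reputation r_i(0) = lambda + 4

for k in range(slots):
    truth = 1                  # PU present in every slot (assumption)
    # Honest SUs decide correctly 90% of the time; SU 0 is malicious
    # and flips a fair coin (a crude stand-in for probabilistic SSDF).
    d = np.where(rng.random(N) < 0.9, truth, 1 - truth)
    d[0] = rng.integers(0, 2)

    active = rep >= lam        # only trusted SUs enter the fusion, cf. S(k)
    w = rep * active
    w = w / w.sum()            # reputation-proportional weights, cf. (29)
    D = int(w @ d >= 0.5)      # weighted-vote global decision

    # Reputation update (26): +1 on agreement with D, -1 on disagreement.
    rep += np.where(d == D, 1.0, -1.0)
```

Because the malicious SU agrees with the global decision only about half the time, its reputation drifts far below the honest SUs' reputations, which is exactly what the stealthiness metric of Section 5 quantifies.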
From the analysis of the above algorithm, we define a stealthiness metric ψ as follows:

$$\psi(k) = \begin{cases} \dfrac{\frac{1}{N_m}\sum_{i=1}^{N_m} r_i^m(k)}{\frac{1}{N-N_m}\sum_{j=1}^{N-N_m} r_j^h(k)}, & \text{if } \sum_{i=1}^{N_m} r_i^m(k) > 0, \\ 0, & \text{otherwise,} \end{cases}$$
(30)

where N_m is the number of malicious SUs, r_i^m(k) denotes the reputation value of the i-th malicious SU, and r_j^h(k) denotes the reputation value of the j-th honest SU. We choose the honest SUs as the baseline: when the stealthiness metric is close to 1, the FC can hardly distinguish a malicious SU from the honest ones; that is to say, the malicious SU has good stealthiness. A deeper analysis is carried out in the next section.
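The metric (30) is straightforward to compute from the two groups' reputation values; a small helper with illustrative inputs:

```python
def stealthiness(rep_malicious, rep_honest):
    """Stealthiness metric psi of (30): average malicious reputation
    divided by average honest reputation, or 0 if the malicious
    reputations sum to a non-positive value."""
    if sum(rep_malicious) <= 0:
        return 0.0
    avg_m = sum(rep_malicious) / len(rep_malicious)
    avg_h = sum(rep_honest) / len(rep_honest)
    return avg_m / avg_h

# psi close to 1 means the FC can hardly tell the two groups apart.
psi = stealthiness([5.0, 5.0], [5.0, 5.0, 5.0])
```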

6 Performance evaluation and discussions

In this section, numerical simulations are used to verify the analytical results on destructiveness and stealthiness of the proposed probabilistic soft SSDF attack model.

In the following simulations, the cooperative spectrum sensing system consists of an FC and N SUs, among which N_m SUs are malicious. The average received SNR is set to −7 dB and the local threshold is obtained by setting the local false alarm probability to 0.1. The time-bandwidth product U is 100. The probability of the primary signal being present is set to P(H_1) = 0.5. Without loss of generality, for all malicious SUs we set A = 1.1u_{1i}, B = 0.9u_{0i}, u_{2i} = u_2, u_{3i} = u_3, i = 1, ..., N_m.

6.1 Destructiveness evaluation

Figure 2 shows the global sensing performance at the FC under the proposed SSDF attack model in terms of the global detection error probability P_e, defined as

$$P_e = P(H_1)(1 - P_d) + P(H_0)P_f.$$
(31)
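For reference, the error probability (31) as a one-line helper; the default prior P(H_1) = 0.5 matches the simulation settings above.

```python
def detection_error(p_d, p_f, p_h1=0.5):
    """Global detection error probability P_e of (31); p_h1 = P(H1)."""
    return (1.0 - p_d) * p_h1 + p_f * (1.0 - p_h1)
```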
Figure 2

Global detection performance at the FC under different attack probabilities. N = 10, N_m = 3.

In the simulation, N = 10 and N_m = 3. Curves for four groups of attack strengths (u_2, u_3) are plotted, and the curve without malicious SUs is plotted as the baseline. Figure 2 shows that (i) the global detection error probability increases with the attack probability; (ii) when the attack strength increases, i.e., u_2 increases and/or u_3 decreases, the global detection error probability increases; and (iii) the simulations match the theoretical results very well.

Figure 3 shows the global false alarm probability versus the detection probability when the malicious SU implements the contention attack discussed in Section 4.2. In the simulation, the attack probability is set to 0.7 and the optimal attack strength (u_2, u_3) is obtained according to Theorem 2. Recall that Theorem 2 considers a single malicious SU in the system, so we set N = 5 and N_m = 1. Two other curves are presented for comparison, in which 0.95u_2 and 0.9u_2 are set lower than the optimal value u_2 and the corresponding u_3 values are calculated for the given P_d. It can be observed in Figure 3 that the global false alarm probability P_f reaches its maximum when the malicious SU implements the optimal contention attack.
Figure 3

Global detection performance at the FC under contention attack. N = 5, N_m = 1.

Figure 4 shows the global detection probability versus the false alarm probability when the malicious SU implements the interference attack discussed in Section 4.3. In the simulation, the attack probability is set to 0.7 and the optimal attack strength (u_2, u_3) is obtained according to Theorem 3. Recall that, as in Section 4.2, a single malicious SU is considered, so we set N = 5 and N_m = 1. Two other curves are presented for comparison, in which 1.01u_3 and 1.02u_3 are set larger than the optimal value u_3 and the corresponding u_2 values are calculated for the given P_f. As shown in Figure 4, the global detection probability P_d reaches its minimum when the malicious SU implements the optimal interference attack.
Figure 4

Global detection performance at the FC under interference attack. N = 5, N_m = 1.

6.2 Stealthiness evaluation

In this subsection, we study the impact of the attack probability on the stealthiness of the proposed attack model. The attack strength is set as u_2 = u_1, u_3 = u_0, and N = 10, N_m = 3.

Figure 5 shows the evolution of the stealthiness metric, defined in (30), under different attack probabilities. The two main observations are as follows:

  • After a few sensing slots, the stealthiness metric converges to a constant value ψ* for each given attack probability p_a.
    Figure 5

    Stealthiness under different attack probabilities. N = 10, N_m = 3.

  • Malicious users’ stealthiness deteriorates with the attack probability p a .

Mathematically, denoting k as the index of the sensing slot, the first observation above can be written as: there exists a constant M ∈ ℝ such that

$$\psi(k) = \psi^* = h(p_a), \quad \forall k > M.$$
(32)
As shown in Figure 5, the secure algorithm is not sufficiently robust against the probabilistic attack. Essentially, the secure algorithm in [18] uses a reputation accumulation mechanism based on historical decision information. However, the reputation accumulation process is undermined by malicious SUs that may behave honestly at one moment and turn malicious at the next. Furthermore, there exists a collision probability p_c^m(k) with which the i-th malicious SU's local decision d_i^m(k) is inconsistent with the FC's global decision D(k):
$$p_c^m(k) = P(d_i^m(k) \neq D(k)) = P(H_0)P(d_i^m(k) \neq D(k) \mid H_0) + P(H_1)P(d_i^m(k) \neq D(k) \mid H_1).$$
(33)
Similarly, during the k-th sensing slot there is a collision between the FC's global decision and an honest user's local decision with probability p_c^h(k). Consistent with the stealthiness metric, the collision probabilities also converge to constant values:
$$\exists M,\ \forall k > M: \quad p_c^m(k) = p_c^m, \qquad p_c^h(k) = p_c^h.$$
(34)
Furthermore, we can relate the collision probabilities to the stealthiness metric as follows:
$$\psi^* = \frac{p_c^h}{p_c^m}.$$
(35)

The second observation above is the opposite of the results in Figure 2. Briefly, destructiveness in Figure 2 increases with the attack probability, while stealthiness in Figure 5 decreases with it. Consequently, there is a trade-off between stealthiness and destructiveness with respect to the attack probability.

Motivated by this discovery, we further plot Figure 6, which shows the global detection error probability P_e under the proposed probabilistic soft attack model when the robust secure sensing algorithm developed in [18] is deployed at the FC. In the simulation, N = 10 and N_m = 3.
Figure 6

Relationship between attack performance and attack probability, taking both destructiveness and stealthiness into consideration. N = 10, N_m = 3.

Although the axes of Figures 2 and 6 show the same quantities, the results are quite different. The main reason lies in the fact that, in producing Figure 6, the secure sensing algorithm of [18] is adopted at the FC to discard the detected malicious SUs before the global fusion. It is observed in Figure 6 that, with defense or secure sensing algorithms taken into consideration, there generally exists an optimal attack probability p_a ∈ (0, 1], not necessarily equal to 1.

Taking the curve with the attack strength (u_1, u_0) as an example and comparing it with the curve without malicious SUs, we can divide the attack probability into three intervals:

  • An interval with an attack probability lower than the first point, drawn as a circle in Figure 6, named the role reversal interval, where the malicious SUs' participation does not harm the FC's global fusion but is actually beneficial to it.

  • An interval with an attack probability higher than the second point, drawn as a square, named the exposure interval, where the attackers are found out and removed, so their powerful attack action is nearly transparent to the FC because of their low stealthiness.

  • An interval with a medium attack probability between the above two points, named the favorable attack interval, where an effective but stealthy attack can be implemented.

7 Conclusions

In this paper, a generic and novel probabilistic soft SSDF attack model has been proposed, in which a malicious SU conducts attacks with a certain probability varying from 0 to 1. Under this generalized SSDF attack model, we first obtained closed-form expressions of the global sensing performance at the fusion center. Then, we theoretically evaluated the performance of the proposed attack model in terms of destructiveness and stealthiness, sequentially, and numerical simulations match the analytical results well. An interesting trade-off between destructiveness and stealthiness has also been discovered, a fundamental issue in SSDF attacks that has been ignored by most of the previous studies.

Appendix

Limitation of attack strength

An extremely large or small value viciously reported by malicious users will cause huge damage to a fusion center without any defense scheme, but such attack behaviors are very easy to identify. Therefore, before studying the optimal attack parameter settings, we list the restricting conditions as follows:

$$\text{(a)}\quad A \geq u_{2i} \geq u_{3i} \geq B, \qquad \text{(b)}\quad N\eta_f - Nu_{h1} < \frac{u_{2i} + u_{3i}}{2} < N\eta_f - Nu_{h0},$$
(36)

where A and B are constants. Condition (a) restricts the attack strengths to intervals determined by the constants A and B and encodes the relation u_{2i} ≥ u_{3i} introduced earlier, reflecting the attack intention. Condition (b) avoids large deviations from honest reports through the inter-constraint relationship between u_{2i} and u_{3i}; its two sides are the residuals of the FC's scaled threshold minus the honest means under H1 and H0, respectively.
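The two conditions in (36) are easy to check programmatically. The helper below mirrors the Appendix notation under the equal-weight assumption w_i = 1/N of Section 4; the concrete numbers in the usage line (derived from u_0 = 200 and u_1 = 240 for U = 100 at −7 dB, with N = 5, A = 1.1u_1, B = 0.9u_0) are illustrative assumptions.

```python
def strength_feasible(u2, u3, A, B, N, eta_f, u_h0, u_h1):
    """Check the attack-strength limits (36) for a single attacker with
    equal weights w_i = 1/N: (a) A >= u2 >= u3 >= B, and (b) the midpoint
    of (u2, u3) lies between the residual thresholds N*eta_f - N*u_h1
    and N*eta_f - N*u_h0."""
    cond_a = A >= u2 >= u3 >= B
    mid = (u2 + u3) / 2.0
    cond_b = N * eta_f - N * u_h1 < mid < N * eta_f - N * u_h0
    return cond_a and cond_b

# Illustrative check: honest means u_h0 = 160, u_h1 = 192 (four honest SUs).
ok = strength_feasible(240.0, 200.0, 264.0, 180.0, 5, 210.0, 160.0, 192.0)
```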

Declarations

Acknowledgements

This work is supported in part by the National Natural Science Foundation of China under Grant No. 61172062 and No. 61301160 and in part by Jiangsu Province Natural Science Foundation under Grant No. BK2011116.

Authors’ Affiliations

(1)
College of Communications Engineering, PLA University of Science and Technology, Yudao Road, Nanjing, 210007, China

References

  1. Mitola J: Cognitive radio: an integrated agent architecture for software defined radio. Ph.D. dissertation, KTH, 2000.
  2. Wu Q, Ding G, Wang J, Yao YD: Spatial-temporal opportunity detection in spectrum-heterogeneous cognitive radio networks: two-dimensional sensing. IEEE Trans. Wireless Commun. 2013, 12(2):516-526.
  3. Rifa Pous H, Blasco MJ, Garrigues C: Review of robust cooperative spectrum sensing techniques for cognitive radio networks. Wireless Personal Commun. 2012, 67(2):175-198. doi:10.1007/s11277-011-0372-x
  4. Chen R, Park JM, Hou YT: Toward secure distributed spectrum sensing in cognitive radio networks. IEEE Commun. Mag. 2008, 46(4):50-55.
  5. Penna F, Sun Y, Dolecek L, Cabric D: Detecting and counteracting statistical attacks in cooperative spectrum sensing. IEEE Trans. Signal Process. 2012, 60(4):1806-1822.
  6. Yao JN, Wu Q, Wang J: Attacker detection based on dissimilarity of local reports in collaborative spectrum sensing. IEICE Trans. Commun. 2012, 9: 3024-3027.
  7. Li H, Han Z: Catch me if you can: an abnormality detection approach for collaborative spectrum sensing in cognitive radio networks. IEEE Trans. Wireless Commun. 2010, 9(11):3554-3565.
  8. Vempaty A, Tong L, Varshney P: Distributed inference with Byzantine data: state-of-the-art review on data falsification attacks. IEEE Signal Process. Mag. 2013, 30(5):65-75.
  9. Farmani F, Berangi R, Jannat-Abad MA: Detection of SSDF attack using SVDD algorithm in cognitive radio networks. In Third International Conference on Computational Intelligence, Communication Systems and Networks (CICSyN), 26-28 July 2011, Bali; 201-204.
  10. Min AW, Shin KG, Hu X: Secure cooperative sensing in IEEE 802.22 WRANs using shadow fading correlation. IEEE Trans. Mobile Comput. 2011, 10(10):1434-1447.
  11. Ding G, Wu Q, Yao YD, Wang JL, Chen YY: Kernel-based learning for statistical signal processing in cognitive radio networks: theoretical foundations, example applications, and future directions. IEEE Signal Process. Mag. 2013, 30: 126-136.
  12. Cui S, Han Z, Kar S, Kim TT, Poor H, Tajer A: Coordinated data-injection attack and detection in smart grid. IEEE Signal Process. Mag. 2012, 29(5):106-115.
  13. Chen R, Park JM, Bian K: Robust distributed spectrum sensing in cognitive radio networks. In IEEE INFOCOM, 13-18 April 2008, Phoenix, AZ; 1876-1884.
  14. Han Y, Chen Q, Wang JX: An enhanced DS theory cooperative spectrum sensing algorithm against SSDF attack. In IEEE Vehicular Technology Conference (VTC Spring), 6-9 May 2012, Yokohama; 1-5.
  15. Kaligineedi P, Khabbazian M, Bhargava VK: Secure cooperative sensing techniques for cognitive radio systems. In IEEE International Conference on Communications, 19-23 May 2008, Beijing; 3406-3410.
  16. Nguyen-Thanh N, Koo I: A robust secure cooperative spectrum sensing scheme based on evidence theory and robust statistics in cognitive radio. IEICE Trans. Commun. 2009, 12: 3644-3652.
  17. Urkowitz H: Energy detection of unknown deterministic signals. Proc. IEEE 1967, 55(4):523-531.
  18. Zeng K, Paweczak P, Cabric D: Reputation-based cooperative spectrum sensing with trusted nodes assistance. IEEE Commun. Lett. 2010, 14(3):226-228.

Copyright

© Zhang et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
