
Securing Collaborative Spectrum Sensing against Untrustworthy Secondary Users in Cognitive Radio Networks

Abstract

Cognitive radio is a revolutionary paradigm to mitigate the spectrum scarcity problem in wireless networks. In cognitive radio networks, collaborative spectrum sensing is considered an effective method to improve the performance of primary user detection. Current collaborative spectrum sensing schemes usually assume that secondary users report their sensing information honestly. However, compromised nodes can send false sensing information to mislead the system. In this paper, we study the detection of untrustworthy secondary users in cognitive radio networks. We first analyze the case in which there is only one compromised node in the collaborative spectrum sensing scheme, and then investigate the scenario with multiple compromised nodes. Defense schemes are proposed to detect malicious nodes according to their reporting histories. We calculate the suspicious level of each node based on its reports, and the reports from nodes with high suspicious levels are excluded from decision-making. Compared with existing defense methods, the proposed scheme can effectively differentiate malicious nodes from honest nodes. As a result, it can significantly improve the performance of collaborative sensing. For example, when there are 10 secondary users and the primary user detection rate is 0.99, one malicious user can increase the false alarm rate to 72%; the proposed scheme reduces it to 5%. Two malicious users can increase the false alarm rate to 85%, and the proposed scheme reduces it to 8%.

1. Introduction

Nowadays the available wireless spectrum is becoming increasingly scarce due to the growing spectrum demand of new wireless applications. It is obvious that the current static frequency allocation policy cannot meet the needs of emerging applications. Cognitive radio networks [1–3], which have been widely studied recently, are considered a promising technology to mitigate the spectrum shortage problem. In cognitive radio networks, secondary users are allowed to opportunistically access spectrum bands that have already been allocated to primary users, provided that they do not cause harmful interference to the operation of the primary users. In order to access available spectrum, secondary users have to detect vacant spectrum resources by themselves without changing the operation of the primary users. Existing detection schemes include matched filtering, energy detection, cyclostationary detection, and wavelet detection [2–6]. Among these schemes, energy detection is commonly adopted because it does not require a priori information about the primary users.

It is known that wireless channels are subject to fading and shadowing. When secondary users experience multipath fading or happen to be shadowed, they may fail to detect the presence of the primary signal and, as a result, cause interference to the primary user by accessing the occupied spectrum. To cope with this problem, collaborative spectrum sensing [7–12] has been proposed. It combines the sensing results of multiple secondary users to improve the probability of primary user detection. Many works address cooperative spectrum sensing schemes and their challenges. The performance of hard-decision and soft-decision combining schemes is investigated in [7, 8]. In these schemes, all secondary users send sensing reports to a common decision center. Cooperative sensing can also be done in a distributed way, where secondary users collect reports from their neighbors and make decisions individually [13–15]. Optimized cooperative sensing is studied in [16, 17]. When the channel that forwards the sensing observations experiences fading, the sensing performance degrades significantly; this issue is investigated in [18, 19]. Furthermore, energy efficiency in collaborative spectrum sensing is addressed in [20].

Some works address the security issues of cognitive radio networks. The primary user emulation attack is analyzed in [21, 22]. In this attack, malicious users transmit fake signals that have similar features to the primary signal; in this way, attackers can mislead legitimate secondary users into believing that the primary user is present. The defense scheme in [21] identifies malicious users by estimating location information and observing the received signal strength (RSS). In [22], signal classification algorithms are used to distinguish primary signals from secondary signals. The primary user emulation attack is an outsider attack that targets both collaborative and noncollaborative spectrum sensing. Another type of attack is the insider attack that targets collaborative spectrum sensing. Current collaborative sensing schemes often assume that secondary users report their sensing information honestly. However, it is quite possible that wireless devices are compromised by malicious parties, and compromised nodes can send false sensing information to mislead the system. A natural defense scheme [23] is to change the decision rule: when there are m malicious nodes, the decision result is "on" only if at least m+1 nodes report "on". However, this defense scheme has three disadvantages. First, it does not specify how to estimate the number of malicious users, which is difficult to measure in practice. Second, it does not work in the soft-decision case, in which secondary users report sensed energy levels instead of binary hard decisions. Third, it has a very high false alarm rate when there are multiple attackers, as shown by the simulation results in Section 4. The problem of dishonest users in distributed spectrum sensing is discussed in [24]. The defense scheme in that work requires secondary users to collect sensing reports from their neighbors when a confirmative decision cannot be made, and it also applies only to the hard-decision reporting case. Finally, current security issues in cognitive radio networks, including attacks and corresponding defense schemes, are surveyed in [25].

In this paper, we develop defense solutions against one or multiple malicious secondary users in soft-decision reporting collaborative spectrum sensing. We first analyze the single malicious user case. The suspicious level of each node is estimated from its reporting history. When the suspicious level of a node exceeds a certain threshold, the node is considered malicious and its report is excluded from decision-making. We then extend this defense method to handle multiple attackers by using an "onion-peeling approach". The idea is to detect malicious users in a batch-by-batch way. The nodes are classified into two sets, an honest set and a malicious set. Initially, all users are assumed to be honest. When a node is detected to be malicious according to its accumulated suspicious level, it is moved into the malicious set, and the way the suspicious level is calculated is updated whenever the malicious node set is updated. This procedure continues until no new malicious node can be found.

Extensive simulations are conducted. We simulate the collaborative sensing scheme without defense, the straightforward defense scheme in [23], and the proposed scheme with different parameter settings. We observe that even a single malicious node can significantly degrade the performance of spectrum sensing when no defense scheme is employed, and multiple malicious nodes can make the performance even worse. Compared with existing defense methods, the proposed scheme can effectively differentiate honest nodes from malicious nodes and significantly improve the performance of collaborative spectrum sensing. For example, when there are 10 secondary users and the primary user detection rate is 0.99, one malicious user can increase the false alarm rate to 72%; while a simple defense scheme can reduce it to 13%, the proposed scheme reduces it to 5%. Two malicious users can increase the false alarm rate to 85%; the simple defense scheme can reduce it to 23%, and the proposed scheme reduces it to 8%. We also study the scenario in which malicious nodes dynamically change their attack behavior. Results show that the scheme can effectively capture such dynamic changes. For example, if a node behaves well for a long time and suddenly turns bad, the proposed scheme rapidly increases the suspicious level of this node; if it behaves badly only a few times, the proposed scheme allows its suspicious level to recover slowly.

The rest of the paper is organized as follows. Section 2 describes the system model. Attack models and the proposed scheme are presented in Section 3. Simulation results are presented in Section 4. Conclusions are drawn in Section 5.

2. System Model

Studies show that collaborative spectrum sensing can significantly improve the performance of primary user detection [7, 8]. While most collaborative spectrum sensing schemes assume that secondary users are trustworthy, it is possible that attackers compromise cognitive radio nodes and make them send false sensing information. In this section, we describe the scenario of collaborative spectrum sensing and present two attack models.

2.1. Collaborative Spectrum Sensing

In cognitive radio networks, secondary users are allowed to opportunistically access available spectrum resources. Spectrum sensing should be performed constantly to check vacant frequency bands. For the detection based on energy level, spectrum sensing performs the hypothesis test

H_0: x_i(t) = n_i(t),
H_1: x_i(t) = h_i s(t) + n_i(t),    (1)

where x_i(t) is the signal received at the ith secondary user, s(t) is the signal transmitted by the primary user, n_i(t) is the additive white Gaussian noise (AWGN), and h_i is the channel gain from the primary transmitter to the ith secondary user.

We denote by x_i the sensed energy collected by the ith cognitive user during the sensing interval, by γ_i the received signal-to-noise ratio (SNR), and by u the time-bandwidth product. According to [7], x_i follows a central chi-square distribution under H_0 and a noncentral chi-square distribution under H_1:

x_i ~ χ²_{2u} under H_0,    x_i ~ χ²_{2u}(2γ_i) under H_1,    (2)

where χ²_{2u} denotes a central chi-square distribution with 2u degrees of freedom and χ²_{2u}(2γ_i) denotes a noncentral chi-square distribution with 2u degrees of freedom and noncentrality parameter 2γ_i.

From (2), we can see that under H_0 the distribution of x_i depends only on u, whereas under H_1 it depends on both u and γ_i. Recall that γ_i is the received SNR of secondary user i, which can be estimated from a path loss model and location information.

By comparing x_i with a threshold λ, secondary user i makes a decision about whether the primary user is present. The resulting detection probability and false alarm probability are given by

P_{d,i} = P(x_i > λ | H_1) = Q_u(√(2γ_i), √λ),    (3)
P_{f,i} = P(x_i > λ | H_0) = Γ(u, λ/2) / Γ(u),    (4)

respectively, where Q_u(·,·) is the generalized Marcum Q-function, Γ(·,·) is the upper incomplete gamma function, and Γ(·) is the gamma function.
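As a concrete illustration (a minimal sketch, not from the paper), the probabilities in (3) and (4) can be evaluated with standard chi-square survival functions, since (3) is the tail of a noncentral chi-square distribution and (4) the tail of a central one:

from scipy.stats import chi2, ncx2

def detection_prob(lam, u, gamma_i):
    # P_{d,i} = P(x_i > lam | H1): noncentral chi-square with 2u dof, noncentrality 2*gamma_i
    return ncx2.sf(lam, df=2 * u, nc=2 * gamma_i)

def false_alarm_prob(lam, u):
    # P_{f,i} = P(x_i > lam | H0): central chi-square with 2u dof
    return chi2.sf(lam, df=2 * u)

# Example values (illustrative only): time-bandwidth product u = 5, SNR gamma_i = 10, threshold lam = 20
print(detection_prob(20.0, 5, 10.0), false_alarm_prob(20.0, 5))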

Notice that (3) and (4) are the detection rate and false alarm rate of a single secondary user. In practice, wireless channels are subject to multipath fading and shadowing, and the performance of spectrum sensing degrades significantly when secondary users experience fading or happen to be shadowed [7, 8]. Collaborative sensing is proposed to alleviate this problem: it combines the sensing information of several secondary users to make a more accurate detection. For example, consider collaborative spectrum sensing with N secondary users. When the OR rule is used, that is, the primary user is declared present if any secondary user reports "on", the detection probability and false alarm probability of collaborative sensing are [7, 8]

Q_d = 1 − ∏_{i=1}^{N} (1 − P_{d,i}),    (5)
Q_f = 1 − ∏_{i=1}^{N} (1 − P_{f,i}),    (6)

respectively. A scenario of collaborative spectrum sensing is illustrated in Figure 1. With the OR rule, the decision center misses the primary user only when all secondary users miss it.
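The OR-rule fusion in (5) and (6) is a one-line computation over the per-user probabilities (a minimal sketch, not the authors' code):

import numpy as np

def or_rule_fusion(p_d, p_f):
    # p_d, p_f: per-user detection and false alarm probabilities P_{d,i}, P_{f,i}
    q_d = 1.0 - np.prod(1.0 - np.asarray(p_d))
    q_f = 1.0 - np.prod(1.0 - np.asarray(p_f))
    return q_d, q_f

# Example with 10 users having identical single-node performance
q_d, q_f = or_rule_fusion([0.7] * 10, [0.01] * 10)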

Figure 1: Collaborative spectrum sensing.

2.2. Attack Model

The compromised secondary users can report false sensing information to the decision center. According to the way they send false sensing reports, attackers can be classified into two categories: selfish users and malicious users. Selfish users report "yes" or a high energy level when their sensed energy level is low; in this way, they intentionally cause false alarms so that they can use the available spectrum while preventing others from using it. Malicious users report "no" or a low energy level when their sensed energy is high; they reduce the detection rate, which causes more interference to the primary user, because when the primary user is not detected, the secondary users may transmit in the occupied spectrum and interfere with the primary transmission. In this paper, we investigate two attack models, the False Alarm (FA) attack and the False Alarm & Miss Detection (FAMD) attack, as presented in [26, 27].

In energy-based spectrum sensing, secondary users send reports to the decision center in each round. Let r_i(t) denote the report of node i about the existence of the primary user at time slot t, and let x_i(t) denote its actual sensed energy. The attacks are modeled by three parameters: the attack threshold (η), the attack strength (Δ), and the attack probability (P_a). The two attack models are the following.

(i) False Alarm (FA) Attack: in time slot t, if the sensed energy x_i(t) is higher than η, the attacker does not attack in this round and simply reports r_i(t) = x_i(t); otherwise, it attacks with probability P_a by reporting r_i(t) = x_i(t) + Δ. This type of attack intends to cause false alarms.

(ii) False Alarm & Miss Detection (FAMD) Attack: for time slot , attacker will attack with probability . If it does not choose to attack this round, it will just report ; otherwise it will compare with . If is higher than , the attacker reports ; Otherwise, it reports . This type of attack causes both false alarm and miss detection.

3. Secure Collaborative Sensing

In this paper, we adopt the centralized collaborative sensing scheme in which cognitive radio nodes report to a common decision center. Among these cognitive radio nodes, one or more secondary users might be compromised by attackers. We first study the case in which only one secondary node is malicious; by calculating suspicious levels, we propose a scheme that detects the malicious user according to its report history. We then extend the scheme to handle multiple attackers. As we will discuss later, malicious users can change their attack parameters to avoid being detected, so the optimal attack strategy is also analyzed.

3.1. Single Malicious User Detection

In this section, we assume that there is at most one malicious user. Define

π_i(t) ≜ P(T_i = M | R(1:t))    (7)

as the suspicious level of node i at time slot t, where T_i is the type of node i, which can be H (honest) or M (malicious), and R(1:t) = {R(1), ..., R(t)} denotes the reports collected from time slot 1 to time slot t, with R(t) being the collection of all nodes' reports at time slot t. By applying the Bayesian criterion, we have

(8)

Suppose that the prior probability of being malicious, P(T_i = M), is the same for all nodes. Then, we have

(9)

It is easy to verify

(10)

where

β_i(t) ≜ P(R(t) | R(1:t−1), T_i = M),    (11)

which represents the probability of the reports at time slot t conditioned on node i being malicious. Note that the first equation in (10) is obtained by repeatedly applying the following equation:

P(R(1:t) | T_i) = P(R(t) | R(1:t−1), T_i) · P(R(1:t−1) | T_i).    (12)

Let P_1(·) and P_0(·) denote the observation probabilities of a report under the busy and idle states, respectively, that is,

P_1(r_j(t)) ≜ P(r_j(t) | H_1),    P_0(r_j(t)) ≜ P(r_j(t) | H_0).    (13)

Note that the calculation in (13) is based on the fact that the sensed energy follows a central chi-square distribution under H_0 and a noncentral chi-square distribution under H_1 [7]. The distribution is stated in (2), in which the channel gain h_i should be estimated based on (i) the distance between the primary transmitter and the secondary users and (ii) the path loss model. We assume that the primary transmitter (e.g., a TV tower) is stationary and that the positions of the secondary users can be estimated by existing positioning algorithms [28–32]. Of course, the estimated distance may not be accurate; in Section 4.5, the impact of distance estimation error on the proposed scheme is investigated.

Therefore, the honest user report probability is given by

P(r_j(t) | T_j = H) = P_1(r_j(t)) P(H_1) + P_0(r_j(t)) P(H_0).    (14)

The malicious user report probability, P(r_i(t) | T_i = M), depends on the attack model. When the FA attack is adopted, there are two cases in which the malicious user reports r_i(t) in round t. In the first case, r_i(t) is the actual sensed result, which means that the sensed energy is greater than η. In the second case, r_i(t) is the actual sensed result plus Δ, so the actual sensed energy is r_i(t) − Δ and is less than η. In conclusion, the malicious user report probability under the FA attack is

(15)

Similarly, when FAMD attack is adopted,

(16)

In (14)–(16), P(H_1) and P(H_0) are the prior probabilities that the primary user is present or absent, which can be obtained through a two-state Markov chain channel model [33]. The observation probabilities P_1(·) and P_0(·), and other similar terms, can be calculated by (13). The remaining terms are detection probabilities or false alarm probabilities, which can be evaluated under a specific path loss model [7, 8]. Therefore, we can calculate the value of β_i(t) in (11) as long as the prior probabilities, the observation probabilities, and the attack parameters η, Δ, and P_a are known or can be estimated. In this derivation, we assume that the common receiver has knowledge of the attacker's policy. This assumption allows us to obtain the performance upper bound of the proposed scheme and reveal insights into the attack and defense strategies. In practice, knowledge about the attacker's policy can be obtained by analyzing previous attack behaviors; for example, if attackers were detected previously, one can analyze the reports from those attackers and identify their attack behavior and parameters. Unknown attack strategies will be investigated in future work.

The computation of β_i(t) is then given by

(17)
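To make these likelihoods concrete, the following minimal sketch (not the authors' code) evaluates the honest report probability in (14) and an FA-attack report probability along the lines described above, assuming reports are raw energy values and using the chi-square models in (2). The treatment of the attack probability P_a (a low-energy report may also come from a round in which the attack did not fire) is one reading of the attack model in Section 2.2, and all function names are ours.

from scipy.stats import chi2, ncx2

def p_report_honest(r, u, gamma_i, p_h1):
    # P(r | honest) = P_1(r) P(H_1) + P_0(r) P(H_0), cf. (13)-(14)
    p1 = ncx2.pdf(r, df=2 * u, nc=2 * gamma_i)   # report density under H_1 (busy)
    p0 = chi2.pdf(r, df=2 * u)                   # report density under H_0 (idle)
    return p1 * p_h1 + p0 * (1.0 - p_h1)

def p_report_fa_attacker(r, u, gamma_i, p_h1, eta, delta, p_a):
    # Case 1: the report is the true sensed energy (energy above eta, or attack not fired)
    case1 = p_report_honest(r, u, gamma_i, p_h1) * (1.0 if r > eta else 1.0 - p_a)
    # Case 2: the report is the true energy plus delta, with the true energy below eta
    case2 = p_a * p_report_honest(r - delta, u, gamma_i, p_h1) if (r - delta) < eta else 0.0
    return case1 + case2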

We convert the suspicious level into a trust value as

τ_i(t) = 1 − π_i(t).    (18)

The trust value measures the honesty of a secondary user. However, this value alone is not sufficient to determine whether a node is malicious, because trust values become unstable when there is no malicious user at all. The reason is that the above derivation is based on the assumption that there is one and only one malicious user. To solve this problem, we define the trust consistency value of user i, denoted by c_i(t), as

(19)
(20)

where w is the size of the window in which the variation of recent trust values is compared with the overall trust value variation.

Procedure 1 shows the process of applying the trust value and the consistency value in the primary user detection algorithm. The basic idea is to eliminate the reports from users who have consistently low trust values. The trust and consistency thresholds can be chosen dynamically. This procedure can be used together with many existing primary user detection algorithms, such as hard-decision combining and soft-decision combining. The study in [23] has shown that hard decision performs almost the same as soft decision in terms of achieving performance gain when the cooperative users (10–20) face independent fading. For simplicity, in this paper we use the hard-decision combining algorithm in [7, 8] to demonstrate the performance of the proposed scheme and the other defense schemes.

Procedure 1: Primary user detection.

(1) Receive reports from all secondary users.

(2) Calculate trust values and consistency values for all users.

(3) for each user i do

(4)   if the trust value of user i is below the trust-value threshold and its trust value has been consistently low (consistency test satisfied) then

(5)     the report from user i is removed

(6)   end if

(7) end for

(8) Perform the primary user detection algorithm based on the remaining reports.
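For concreteness, a minimal Python sketch of this procedure is given below. It is not the authors' implementation: the per-report likelihood functions p_honest and p_malicious are assumed to be supplied by the caller (for example, along the lines of the sketch after (17)), the trust value is taken to be one minus the suspicious level, the consistency test comparing recent trust variation with overall variation in a window of size w is one plausible reading of (19)-(20), and the threshold values are placeholders rather than the paper's settings.

import numpy as np

def update_suspicion(suspicion, reports, p_honest, p_malicious):
    # One Bayesian round under the single-attacker model: the weight of
    # "node i is the malicious one" is multiplied by
    # p_malicious(i, r_i) * prod_{j != i} p_honest(j, r_j), then renormalized.
    honest = np.array([p_honest(i, r) for i, r in enumerate(reports)])
    mal = np.array([p_malicious(i, r) for i, r in enumerate(reports)])
    weights = suspicion * mal * np.prod(honest) / np.maximum(honest, 1e-300)
    return weights / weights.sum()

def procedure1_decision(reports, trust_hist, lam, trust_th=0.5, cons_th=1.0, w=10):
    # Drop reports from nodes whose trust is low and consistently so, then apply the OR rule.
    kept = []
    for i, r in enumerate(reports):
        hist = np.asarray(trust_hist[i])
        trust = hist[-1]
        # assumed consistency measure: recent trust variation relative to overall variation
        consistency = np.std(hist[-w:]) / (np.std(hist) + 1e-12)
        if trust < trust_th and consistency < cons_th:
            continue                      # consistently low trust: exclude this report
        kept.append(r)
    return any(r > lam for r in kept)     # hard-decision OR rule on the remaining reports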

3.2. Multiple Malicious Users Detection

The detection of a single attacker amounts to finding the node that has the largest probability of being malicious. We can extend this method to the multiple-attacker case. The idea is to enumerate all possible malicious node sets and to identify the set with the largest suspicious level. We call this method "ideal malicious node detection". However, as we will discuss later, this method faces the curse of dimensionality when the number of secondary users is large. As a result, we propose a heuristic scheme, named the "onion-peeling approach", which is applicable in practice.

3.2.1. Ideal Malicious Node Detection

For any set S ⊆ {1, 2, ..., N} (note that S could be the empty set, i.e., there is no attacker), we define

Λ_S(t) ≜ P(T_i = M for all i ∈ S, and T_j = H for all j ∉ S | R(1:t))    (21)

as the belief that all nodes in S are malicious while all other nodes are honest.

Given any particular set of malicious nodes S, by applying the Bayesian criterion, we have

(22)

Suppose that the prior probability of being malicious is the same for all nodes. Then, we have

(23)

where |S| is the cardinality of S.

Next, we can calculate

(24)

where

(25)

For each possible malicious node set S, using (22)–(25), we can calculate the probability that S contains all the malicious users and no honest users. We then find the set S with the largest value and compare it with a certain threshold; if it exceeds this threshold, the nodes in S are considered malicious.

However, for a cognitive radio network with N secondary users, there are 2^N different choices of the set S. Thus, the complexity grows exponentially with N, and this ideal detection of attackers faces the curse of dimensionality. When N is large, we have to use an approximation.
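The exponential blow-up is easy to see in code (an illustration, not part of the paper's scheme):

from itertools import combinations

def all_candidate_sets(n_nodes):
    # Yield every subset of {0, ..., n_nodes-1}, including the empty set.
    nodes = range(n_nodes)
    for k in range(n_nodes + 1):
        yield from combinations(nodes, k)

print(sum(1 for _ in all_candidate_sets(10)))   # 1024 subsets for N = 10
print(2 ** 30)                                  # over a billion for N = 30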

3.2.2. Onion-Peeling Approach

To make the detection of multiple malicious nodes feasible in practice, we propose a heuristic "onion-peeling approach" that detects the malicious user set in a batch-by-batch way. Initially, all nodes are assumed to be honest. We calculate the suspicious levels of all users according to their reports. When the suspicious level of a node exceeds a certain threshold, it is considered malicious and moved into the malicious user set. Reports from nodes in the malicious user set are excluded from primary user detection, and the way the suspicious level is calculated is updated once the malicious node set is updated. We continue to calculate the suspicious levels of the remaining nodes until no new malicious node can be found.

In the beginning, we initialize the set of detected malicious nodes, S_M, as an empty set. In the first stage, we compute the a posteriori probability that node i is an attacker, which is given by

π_i(t) = P(R(1:t) | T_i = M) P(T_i = M) / [ P(R(1:t) | T_i = M) P(T_i = M) + P(R(1:t) | T_i = H) P(T_i = H) ],    (26)

where we assume that all other nodes are honest when computing P(R(1:t) | T_i = M) and P(R(1:t) | T_i = H). Since (26) only calculates the suspicious level of each individual node, rather than that of a set of malicious nodes, the computational complexity is reduced from O(2^N) to O(N).

Recall that R(t) denotes the collection of r_i(t), that is, the reports from all secondary nodes at time slot t. It is easy to verify

(27)

where

(28)

Here, the term defined in (28) is the probability of the reports at time slot t conditioned on node i being malicious. Note that the first equation in (27) is obtained by repeatedly applying (12).

Similarly, we can calculate P(R(1:t) | T_i = H) by

(29)

where

(30)

As mentioned before, P(H_1) and P(H_0) are the prior probabilities that the primary user is present or absent, and P_1(·) and P_0(·) are the observation probabilities of a report under the busy and idle states. An honest user's report probability can be calculated by (14).

Then, for each reporting round, we update each node's suspicious level based on the above equations. We set a threshold π_th and consider node i as malicious when i is the first node such that

π_i(t) > π_th.    (31)

Then, node i is added into S_M.

Through (26)–(31), we have shown how to detect the first malicious node. In the kth stage, we compute the a posteriori probability of each remaining node being an attacker in the same manner as (26). The only difference is that, when computing P(R(1:t) | T_i = M) and P(R(1:t) | T_i = H), we assume that all nodes in S_M are malicious. Equations (28) and (30) now become (32) and (33), respectively; (28) and (30) can be seen as the special cases of (32) and (33) in which S_M is empty.

(32)
(33)

Node i is added to S_M when i is the first node (not in S_M) such that

π_i(t) > π_th.    (34)

Repeat the procedure until no new malicious node can be found.

Based on the above discussion, the primary user detection process is shown in Procedure 2. The basic idea is to exclude the reports from users whose suspicious levels are higher than the threshold. In this procedure, π_th can be chosen dynamically, and the procedure can be used together with many existing primary user detection algorithms. As discussed in Section 3.1, hard decision performs almost the same as soft decision in terms of achieving performance gain when the cooperative users (10–20) face independent fading, so for simplicity we still use the hard-decision combining algorithm in [7, 8] to demonstrate the performance of the proposed scheme.

Procedure 2: Primary user detection.

(1) Initialize the set of malicious nodes as empty.

(2) Collect reports from the secondary users.

(3) Calculate suspicious levels for all users not in the malicious node set.

(4) for each such user i do

(5)   if the suspicious level of user i exceeds the threshold then

(6)     move node i to the malicious node set; the report from user i is removed

(7)     exit loop

(8)   end if

(9) end for

(10) Perform the primary user detection algorithm based on the nodes that are currently assumed to be honest.

(11) Go to step 2 and repeat the procedure.
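A minimal sketch of the onion-peeling loop (not the authors' code); it assumes a caller-supplied function suspicious_levels(malicious_set) that returns the per-node a posteriori probabilities of (26), treating the nodes already in malicious_set as malicious:

def onion_peeling_decision(reports, suspicious_levels, pi_th, lam):
    # reports: this round's reports; pi_th: suspicious-level threshold; lam: decision threshold
    malicious = set()
    while True:
        levels = suspicious_levels(malicious)
        flagged = next((i for i, p in enumerate(levels)
                        if i not in malicious and p > pi_th), None)
        if flagged is None:
            break                      # no new malicious node: stop peeling
        malicious.add(flagged)         # peel this node off and recompute the rest
    # OR-rule decision on the reports of nodes currently assumed honest
    honest_reports = [r for i, r in enumerate(reports) if i not in malicious]
    return any(r > lam for r in honest_reports), malicious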

3.3. Optimal Attack

As presented in Section 2.2, the attack model in this paper has three parameters: the attack threshold (η), the attack strength (Δ), and the attack probability (P_a). These parameters determine the power and the covertness of the attack. Here, the power of the attack can be described by the probability that the attack is successful (i.e., causes a false alarm and/or a missed detection), and the covertness of the attack can be roughly described by the likelihood that the attack is not detected.

Briefly speaking, when η or P_a increases, the attack happens more frequently; when Δ increases, the attack goal is easier to achieve. Thus, the power of the attack increases with η, Δ, and P_a. On the other hand, when the attack power increases, the covertness decreases. Therefore, there is a tradeoff between attack power and covertness.

The attacker surely prefers maximum attack power and maximum covertness. Of course, these two goals cannot be achieved simultaneously. Then, what is the "best" way to choose the attack parameters from the attacker's point of view? In this section, we define a metric called damage that captures the tradeoff between attack power and covertness, and we find the attack parameters that maximize the damage. To simplify the problem, we consider only the one-attacker case in this study.

We first make the following arguments.

  1. (i)

    The attacker can damage the system if it achieves the attack goal and is not detected by the defense scheme. Thus, the total damage can be described by the number of successful attacks before the attacker is detected.

  2. (ii)

Through experiments, we found that the defense scheme cannot detect some conservative attackers, who use very small η, Δ, and P_a values. It can be proved that all values of (η, Δ, P_a) that do not trigger the detector form a continuous 3D region, referred to as the undetectable region.

  3. (iii)

    Thus, maximizing the total damage is equivalent to finding attack parameters in the undetectable region that maximize the probability of successful attack.

Based on the above arguments, we define the damage as the probability that the attacker achieves the attack goal (i.e., causes a false alarm) in one round of collaborative sensing. Without loss of generality, we only consider the FA attack in this section. In the FA attack, when the sensed energy is below the attack threshold η, the attacker reports the sensed energy plus Δ with probability P_a. When the reported value is greater than the decision threshold λ and the primary user is not present, the attacker causes a false alarm and the attack is successful. Thus, the damage is calculated as

(35)

where P(H_0) is the prior probability that the channel is idle and P(H_1) is the prior probability that the channel is busy.

From the definitions of P_{d,i} and P_{f,i} in (3) and (4), we have

(36)
(37)

Similarly,

(38)
(39)

Substituting (36)–(39) into (35), we obtain

(40)

Under the attack models presented in this paper, the attacker should choose the attack parameters that maximize the damage D while remaining in the undetectable region.

Finding the optimal attack has two purposes. First, with the strongest attack (within our framework), we can evaluate the worst-case performance of the proposed scheme. Second, it reveals insights into attack strategies. Since it is extremely difficult to obtain a closed-form solution for the undetectable region, we find the undetectable region through simulations and search for the optimal attack parameters using numerical methods. Details are presented in Section 4.4.
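As an illustration only (an assumption about the form of the damage, not a transcription of (35)–(40)), the per-round damage of an FA attacker can be evaluated numerically as the probability that the channel is idle, the attack fires, and the inflated report crosses the decision threshold λ while the true energy stayed below the attack threshold η:

from scipy.stats import chi2

def fa_damage(eta, delta, p_a, lam, u, p_h0):
    # P(lam - delta < x <= eta | H0) with x ~ chi-square(2u); zero if the interval is empty
    lo = max(lam - delta, 0.0)
    prob_window = max(chi2.cdf(eta, 2 * u) - chi2.cdf(lo, 2 * u), 0.0)
    return p_h0 * p_a * prob_window

# Example (illustrative parameter values only): sweep the attack strength delta
for delta in (1.0, 5.0, 10.0):
    print(delta, fa_damage(eta=15.0, delta=delta, p_a=0.2, lam=25.0, u=5, p_h0=0.8))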

4. Simulation Results

We simulate a cognitive radio network with N = 10 secondary users. The cognitive radio nodes are randomly located around the primary user; the minimum distance from them to the primary transmitter is 1000 m and the maximum distance is 2000 m. The time-bandwidth product [7, 8] is fixed. The primary transmission power and the noise level are 200 mW and −110 dBm, respectively. The path loss factor is 3 and Rayleigh fading is assumed. Channel gains are updated based on each node's location for every sensing report. The attack threshold and attack strength are fixed, and the attack probability is 100% or 50%. We conduct simulations for different choices of thresholds. Briefly speaking, if the trust value threshold is set too high or the suspicious level threshold is set too low, it is possible that honest nodes will be regarded as malicious; if the trust consistency threshold is set too low, it takes more rounds to detect malicious users. In the simulations, for single malicious node detection, we fix the trust value threshold, the consistency value threshold, and the window size used for calculating the consistency value; for multiple malicious user detection, the suspicious level threshold is fixed.
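For reference, a minimal sketch (not the authors' simulator) of the stated setup; the simple d^(-α) path-loss form without a reference-distance constant is an assumption:

import numpy as np

rng = np.random.default_rng(0)
N = 10
P_TX_W = 0.2                        # 200 mW transmit power
NOISE_W = 10 ** ((-110 - 30) / 10)  # -110 dBm noise level, in watts
ALPHA = 3                           # path loss exponent

def per_node_snr(distances):
    # Received SNR gamma_i for each node, with independent Rayleigh fading per report
    fading_power = rng.exponential(1.0, size=distances.shape)  # |h|^2 under Rayleigh fading
    p_rx = P_TX_W * distances ** (-ALPHA) * fading_power
    return p_rx / NOISE_W

distances = rng.uniform(1000.0, 2000.0, size=N)   # nodes 1000-2000 m from the transmitter
gamma = per_node_snr(distances)                   # refreshed for every sensing report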

4.1. Single Attacker

Three schemes of primary user detection are compared.

  1. (i)

OR Rule: the presence of the primary user is detected if one or more secondary users' reported values are greater than a certain threshold. This is the most common hard fusion scheme.

  2. (ii)

Ki Rule: the presence of the primary user is detected if K or more secondary users' reported values are greater than a certain threshold. This is the straightforward defense scheme proposed in [23].

  3. (iii)

Proposed Scheme: use the OR rule after removing the reports from detected malicious nodes.

The performance of these schemes is shown by Receiver Operating Characteristic (ROC) curves, which plot the true positive rate versus the false positive rate as the discrimination threshold is varied. Figures 2–5 show ROC curves for primary user detection in six cases when only one secondary user is malicious. Case 1 is the OR rule with 10 honest users. Case 2 is the OR rule with 9 honest users (i.e., the malicious user perfectly removed). In Cases 3–6, there are 9 honest users and one malicious user. Case 3 uses the OR rule. Case 4 uses the K2 rule. Cases 5 and 6 are the proposed scheme evaluated at two different detection rounds t, where t is the index of detection rounds.

Figure 2: ROC curves for different collaborative sensing schemes (attack probability 100%, False Alarm Attack).

Figure 3: ROC curves for different collaborative sensing schemes (attack probability 50%, False Alarm Attack).

Figure 4: ROC curves for different collaborative sensing schemes (False Alarm & Miss Detection Attack).

Figure 5: ROC curves for different collaborative sensing schemes (False Alarm & Miss Detection Attack).

When the attack strategy is the FA attack, Figures 2 and 3 show the ROC curves for attack probabilities of 100% and 50%, respectively. The following observations are made.

  1. (i)

By comparing the ROC curves for Case 1 and Case 3, we see that the performance of primary user detection degrades significantly even when there is only one malicious user. This demonstrates the vulnerability of collaborative sensing, which leads to inefficient usage of the available spectrum resources.

  2. (ii)

The proposed scheme demonstrates a significant performance gain over the scheme without defense (i.e., the OR rule) and the straightforward defense scheme (i.e., the K2 rule). For example, Table 1 shows the false alarm rate for two given detection rates when the attack probability is 1. When the attack probability is 0.5, the performance advantage is smaller but still large.

Table 1: False Alarm Rate (when detection rate = 0.99).
  1. (iii)

In addition, as the detection round t increases, the performance of the proposed scheme approaches that of Case 2, which represents perfect detection of the malicious node.

4.2. Multiple Attackers

Figures 6–9 are the ROC curves for six cases when there are multiple attackers. Similarly, Case 1 is 10 honest users, no malicious node, and the OR rule. Case 2 is 8 (or 7) honest users, no attacker, and the OR rule. Cases 3–6 have 8 (or 7) honest users and 2 (or 3) malicious users. The OR rule is used in Case 3 and the Ki rule in Case 4. Cases 5 and 6 show the performance of the proposed scheme evaluated at two different detection rounds.

Figure 6: ROC curves (False Alarm Attack, two attackers).

Figure 7: ROC curves (False Alarm Attack, three attackers).

Figure 8: ROC curves (False Alarm & Miss Detection Attack, two attackers).

Figure 9: ROC curves (False Alarm & Miss Detection Attack, three attackers).

When the attack strategy is the FA attack, Figures 6 and 7 show the ROC curves when the number of attackers is 2 and 3, respectively. We still compare the three schemes described in Section 4.1, and similar observations are made.

  1. (i)

By comparing the ROC curves for Case 1 and Case 3, we see that the performance of primary user detection degrades significantly when there are multiple malicious users, and the degradation is much more severe than in the single malicious user case.

  2. (ii)

The proposed scheme demonstrates a significant performance gain over the scheme without defense (i.e., the OR rule) and the straightforward defense scheme (i.e., the Ki rule). Table 2 shows the false alarm rate when the detection rate is 0.99.

  3. (iii)

When there are three attackers, the false alarm rates of all schemes become larger, but the performance advantage of the proposed scheme over the other schemes remains large.

  4. (iv)

In addition, as the detection round t increases, the performance of the proposed scheme approaches that of Case 2, which is the performance upper bound.

Table 2: False Alarm Rate (when detection rate = 0.99).

Figures 4 and 5 show the ROC performance when the single malicious user adopts the FAMD attack. We observe that the FAMD attack is stronger than the FA attack; in other words, the OR rule and the K2 rule perform worse when facing the FAMD attack. However, the performance of the proposed scheme is almost the same under both attacks. That is, the proposed scheme is highly effective under both attacks and much better than the traditional OR rule and the simple K2 rule defense. Example false alarm rates are listed in Table 1.

Figures 8 and 9 show the ROC performance when the schemes face the FAMD attack with multiple malicious users. We again observe that the FAMD attack is stronger than the FA attack: compared to the FA attack cases, the performance of the OR rule and the Ki rule is worse under the FAMD attack. However, the performance of the proposed scheme is almost the same under both attacks. That is, the proposed scheme is highly effective under both attacks and much better than the traditional OR rule and the simple Ki rule defense. Example false alarm rates are listed in Table 2.

4.3. Dynamic Behaviors

We also analyze the dynamic change in the behavior of malicious nodes under the FAMD attack. Figures 10 and 11 are for a single malicious user. In Figure 10, the malicious user changes the attack probability from 0 to 1 at the start of the attack interval shown in the figure and from 1 to 0 at its end. The dynamic change of the trust value can be divided into three intervals. In Interval 1, the malicious user does not attack; the trust values of the malicious user and the honest users are not stable, since there is no attacker. Note that the algorithm will not declare any malicious nodes because the trust consistency levels are high. In Interval 2, the malicious user starts to attack, and its trust value drops quickly when it turns from good to bad. In Interval 3, the trust value of the malicious user stays consistently low. In Figure 11, one user behaves badly in only 5 rounds, starting at t = 50, and we can make similar observations. In Interval 1, the malicious user does not attack and has a high trust value. Please note that these dynamic figures are just snapshots of the trust values; the trust value in Interval 1 of Figure 11 does not fluctuate as much as that in Figure 10, which is also normal, since unstable trust values may be due to channel variation or unintentional errors. In Interval 2, the malicious user starts to attack and its trust value drops quickly. In Interval 3, the trust value of the malicious user recovers very slowly.

Figure 10: Dynamic trust value in the proposed scheme (a user attacks during time [5, 90]).

Figure 11: Dynamic trust value in the proposed scheme (a user attacks during time [50, 55]).

We make similar observations for the dynamic behavior change of multiple attackers. The suspicious levels of honest users and malicious users are shown in Figures 12 and 13; note that we only show the suspicious level curve of one honest node. The malicious users adopt the FA attack and dynamically choose the rounds in which they start and stop attacking. In Figure 12, the malicious users start to attack at t = 20 and stop at t = 100. In Figure 13, the malicious users behave badly in only 10 rounds, starting at t = 5. Similar observations can be made: the suspicious level of the malicious nodes increases steadily when the nodes turn from good to bad, and the scheme allows slow recovery of the suspicious level after occasional bad behavior.

Figure 12: Dynamic suspicious level in the proposed scheme (two malicious nodes perform the FA attack during time [20, 100]).

Figure 13: Dynamic suspicious level in the proposed scheme (two malicious nodes perform the FA attack during time [5, 15]).

4.4. Optimal Attack

As discussed in Section 3.3, given the defense scheme, the attacker can find the optimal attack parameters that maximize the damage. In this set of experiments, we find the optimal attack parameters and evaluate the worst-case performance of the proposed scheme.

We assume that there are 10 cognitive radio nodes performing collaborative sensing. We set the decision threshold λ so that, under the OR rule with all users honest, the overall detection rate is 99%.

Obviously, the practical values of η and Δ cannot exceed a certain range. Within that range, for each pair (η, Δ), we run simulations to identify the maximum attack probability P_a that the attacker can use while avoiding detection. In particular, binary search is used to find the maximum P_a. We first try an initial P_a, usually the value already found for a neighboring (η, Δ) pair, since the maximum P_a of a neighboring pair is normally only slightly different. Then we run the simulation for 2000 rounds. If the attacker is not detected within 2000 rounds, we search the middle value of the range (P_a, 1); otherwise, we search the middle value of the range (0, P_a). The search continues until the maximum P_a is found, and the boundary of the undetectable region is thereby determined. We would like to point out that more computationally efficient ways to search for the undetectable region exist and can be explored in future work.
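The binary search described above can be sketched as follows (not the authors' code; is_detected is an assumed simulation hook that returns True if the defense flags the attacker within the given number of rounds):

def max_undetectable_pa(eta, delta, is_detected, p_init=0.5, rounds=2000, tol=0.01):
    lo, hi = 0.0, 1.0
    p_a = p_init                      # warm start, e.g. from a neighboring (eta, delta) pair
    while hi - lo > tol:
        if is_detected(eta, delta, p_a, rounds):
            hi = p_a                  # too aggressive: shrink the upper bound
        else:
            lo = p_a                  # still covert: try a larger attack probability
        p_a = (lo + hi) / 2.0
    return lo                         # largest attack probability that stayed undetected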

Figure 14 shows the undetectable region; the simulation parameters are the same as those in Section 4. The x-axis and y-axis are the attack threshold η and the attack strength Δ, respectively, and the z-axis is the attack probability P_a. The following observations are made. When η and Δ are small, P_a can be as large as 100%. This is easy to understand: if η is small, the probability that the sensed energy is below η is small, and if Δ is small, the reported values are only slightly higher than the true sensed values. Thus, when both η and Δ are small, the behavior of the malicious node is not very different from that of honest nodes; each attack is very weak, and the attacker can attack more often (i.e., use a larger P_a) without triggering the detector. As η or Δ increases, the maximum allowed attack probability decreases. When both η and Δ are large, P_a must be very small (0–5%).

Figure 14: Region in which detection is impossible (undetectable region).

According to (40), the maximum damage occurs on the boundary of the undetectable region. Using (40), we can find the point (i.e., the attack parameters) that maximizes the damage within the undetectable region. In this experiment, the resulting optimal attack parameters give a maximum damage of 0.02.

We also plot the damage in Figure 15. The x-axis and y-axis are η and Δ, respectively, and the z-axis is the damage D. The damage value is calculated for the boundary points of the undetectable region. We do not show the P_a value because each (η, Δ) pair corresponds to one P_a value on the boundary. From this figure, we can see that when η and Δ are low, the damage is 0. The attacker can cause larger damage by choosing relatively large η and Δ values and a small P_a value.

Figure 15: Damage in the undetectable region.

With the optimal attack parameters and the chosen decision threshold, the overall false alarm rate increases from 1% to 3%. Recall that the decision threshold was determined to ensure a 99% detection rate. This is the worst-case performance of the proposed scheme, namely the worst case in which the attacker remains undetectable. When malicious users can be detected, as discussed in Section 4.1, the performance approaches the upper bound (the performance with only honest nodes) as the detection round increases.

For the K2 rule with N = 10 secondary users, to maintain an overall detection rate of 99%, the decision threshold must be decreased to 22. Because the K2 rule does not try to detect malicious users, the attacker runs no risk of being detected even when launching the strongest attack. Under our attack model, the attacker can set the attack probability to 1 and set the attack threshold and attack strength as large as possible. Under the K2 rule, the decision result is "on" when two or more secondary users report "on". The attacker can launch the strongest attack, which is similar to always reporting "on" in the hard-decision reporting case. However, the attacker can mislead the decision center only when at least one honest node also raises a false alarm, so the overall false alarm rate is not 1. In the simulation, we set the attack probability to 1 and set both the attack threshold and the attack strength to 1000. The overall false alarm rate is 17.5% for the K2 rule under these settings, which is much larger than the worst case of the proposed scheme. For the OR rule, the overall false alarm rate is 1. These results are summarized in Table 3. In this table, the ideal case means that all secondary users are honest, and the other three columns give the worst-case performance of the different schemes when one of the cognitive radio nodes is malicious.

Table 3: False Alarm Rate (when detection rate = 0.99).

Finally, we would like to point out that the optimal attack is optimal only under a given attack model and a given defense scheme. The method of finding the optimal attack can be extended to study other attack models. We believe the proposed scheme will still work well under many other attack models, since the attackers' basic philosophies are similar.

4.5. Impact of Position Estimation Error upon Performance

Recall that the proposed scheme needs to know the channel gains, which are estimated based on the positions of the secondary nodes. There are many existing schemes that estimate the location of wireless devices in sensor networks [28–32]. These schemes can be classified into two categories: range-based and range-free. The range-based methods first estimate the distances between pairs of wireless nodes and then calculate the positions of individual nodes. Examples of range-based schemes are Angle of Arrival (AoA) [28], Received Signal Strength Indicator (RSSI) [29], Time of Arrival (ToA) [30], and Time Difference of Arrival (TDoA) [31]. The range-free methods usually use connectivity information to identify the beacon nodes within radio range and then estimate the absolute positions of non-beacon nodes [32].

The performance of these schemes is measured by the location estimation error, which is usually normalized to units of the node radio transmission range (R). Most current algorithms achieve an estimation error of less than one unit of the radio transmission range [28–32].

In this section, we study the impact of position estimation error on the proposed scheme. The simulation settings are mostly the same as in the previous experiments. We choose the decision threshold to ensure that the overall detection rate is 99% when there are no malicious nodes. The radio transmission range is set to 50 m, which is a typical value for wireless sensor nodes. Both the FA attack and the FAMD attack with a single attacker are simulated.

The proposed scheme needs a certain number of rounds to detect the malicious users. When the positions of the secondary users are not known accurately, it can be expected that the number of rounds needed to detect the malicious user will increase. In Figure 16, the horizontal axis is the normalized position estimation error and the vertical axis is the average number of rounds needed to detect the malicious node. In particular, when the normalized position estimation error is e and the actual distance between the primary transmitter and a secondary user is d, we simulate the case in which the estimated distance between the secondary user and the primary transmitter is Gaussian distributed with mean d and a spread determined by e (a sketch of this error-injection model is given at the end of this section). From Figure 16, the following observations are made.

Figure 16: Impact of position estimation error.

  1. (i)

The average number of rounds needed to detect the malicious node is very stable when the position estimation error is within 4 units of the radio range. Recall that most positioning algorithms have estimation errors of around 1 unit of the radio range. Thus, the performance of the proposed scheme is stable under realistic position estimation errors.

  2. (ii)

When the estimation error goes beyond 4 units of the radio range, it takes many more rounds to detect the malicious node.

  3. (iii)

    The position estimation error has similar impact on the FA attack and the FAMD attack.

In conclusion, the performance of the proposed scheme is not sensitive to the position estimation error as long as it is within a reasonable range, and this range can be achieved by existing positioning algorithms.
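For completeness, a minimal sketch of the error-injection model assumed in this section (the Gaussian spread proportional to the normalized error e times the radio range R is our assumption, not a value stated in the paper):

import numpy as np

rng = np.random.default_rng(1)
R = 50.0            # radio transmission range in meters, as stated above
ALPHA = 3           # path loss exponent

def estimated_snr(true_dist, e, p_tx=0.2, noise=1e-14):
    # SNR computed from a noisy distance estimate (Gaussian around the true distance)
    est_dist = rng.normal(loc=true_dist, scale=e * R)   # assumed spread e*R
    est_dist = max(est_dist, 1.0)                       # keep the distance positive
    return p_tx * est_dist ** (-ALPHA) / noise

print(estimated_snr(1500.0, e=1.0), estimated_snr(1500.0, e=4.0))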

5. Conclusions

Untrustworthy secondary users can significantly degrade the performance of collaborative spectrum sensing. We propose two attack models, the FA attack and the FAMD attack; the first intends to cause false alarms, and the second causes both false alarms and missed detections. To deal with these attacks, we first propose a defense scheme to detect a single malicious user. The basic idea is to calculate the trust values of all secondary nodes based on their reports; only reports from nodes that have consistently high trust values are used in primary user detection. We then extend the single-attacker method to the multiple-attacker case. This defense scheme uses an onion-peeling approach and does not need prior knowledge of the number of attackers. Finally, we define a damage metric and investigate the attack parameters that maximize the damage.

Comprehensive simulations are conducted to study the ROC curves and the suspicious level dynamics for different attack models, numbers of attackers, and collaborative sensing schemes. The proposed schemes demonstrate significant performance advantages. For example, when there are 10 secondary users and the primary user detection rate equals 0.99, one malicious user can increase the false alarm rate to 72%; whereas the K2 rule defense scheme can reduce it to 13%, the proposed scheme reduces it to 5%. Two malicious users can increase the false alarm rate to 85%; whereas the K3 rule defense scheme can reduce it to 23%, the proposed scheme reduces it to 8%. Furthermore, when a good user suddenly turns bad, the proposed scheme quickly increases the suspicious level of this user; if the user behaves badly only a few times, its suspicious level can recover after a long period of good behavior. For the single-attacker case, we find the optimal attack parameters against the proposed scheme. When facing the optimal attack, the proposed scheme yields a 3% false alarm rate at a 99% detection rate. In contrast, when the K2 rule faces the strongest attack against it, the false alarm rate is 17.5% at a 99% detection rate. With the proposed scheme, the impact of malicious users is greatly reduced even if the attacker adopts the optimal attack parameters and remains undetected.

References

  1. Mitola J III, Maguire GQ Jr.: Cognitive radio: making software radios more personal. IEEE Personal Communications 1999, 6(4):13-18. 10.1109/98.788210


  2. Haykin S: Cognitive radio: brain-empowered wireless communications. IEEE Journal on Selected Areas in Communications 2005, 23(2):201-220.


  3. Hossain E, Niyato D, Han Z: Dynamic Spectrum Access in Cognitive Radio Networks. Cambridge University Press, Cambridge, UK; 2008.


  4. Cabric D, Mishra SM, Brodersen RW: Implementation issues in spectrum sensing for cognitive radios. Proceedings of the 38th Asilomar Conference on Signals, Systems and Computers (ACSSC '04), November 2004, Pacific Grove, Calif, USA 772-776.


  5. Urkowitz H: Energy detection of unknown deterministic signals. Proceedings of the IEEE 1967, 55(4):523-531.


  6. Cabric D, Tkachenko A, Brodersen RW: Experimental study of spectrum sensing based on energy detection and network cooperation. Proceedings of the 1st ACM International Workshop on Technology and Policy for Accessing Spectrum (TAPAS '06), August 2006, Pacific Grove, Calif, USA


  7. Ghasemi A, Sousa ES: Collaborative spectrum sensing for opportunistic access in fading environments. Proceedings of the 1st IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN '05), November 2005 131-136.


  8. Ghasemi A, Sousa ES: Opportunistic spectrum access in fading channels through collaborative sensing. Journal of Communications 2007, 2(2):71-82.


  9. Ghasemi A, Sousa ES: Spectrum sensing in cognitive radio networks: the cooperation-processing tradeoff. Wireless Communications and Mobile Computing 2007, 7(9):1049-1060. 10.1002/wcm.480


  10. Letaief KB, Zhang W: Cooperative spectrum sensing. In Cognitive Wireless Communication Networks. Springer, New York, NY, USA; 2007.


  11. Han Z, Liu KJR: Resource Allocation for Wireless Networks: Basics, Techniques, and Applications. Cambridge University Press, Cambridge, UK; 2008.


  12. Visotsky E, Kuffher S, Peterson R: On collaborative detection of TV transmissions in support of dynamic spectrum sharing. Proceedings of the 1st IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN '05), November 2005, Baltimore, Md, USA 338-345.


  13. Ganesan G, Li Y: Agility improvement through cooperative diversity in cognitive radio. Proceedings of the IEEE Global Communications Conference (GLOBECOM '05), November 2005, St. Louis, Mo, USA 2505-2509.


  14. Ganesan G, Li Y: Cooperative spectrum sensing in cognitive radio, part I: two user networks. IEEE Transactions on Wireless Communications 2007, 6(6):2204-2212.


  15. Ganesan G, Li Y: Cooperative spectrum sensing in cognitive radio, part II: multiuser networks. IEEE Transactions on Wireless Communications 2007, 6(6):2214-2222.


  16. Quan Z, Cui S, Sayed AH: Optimal linear cooperation for spectrum sensing in cognitive radio networks. IEEE Journal on Selected Topics in Signal Processing 2008, 2(1):28-40.


  17. Unnikrishnan J, Veeravalli VV: Cooperative sensing for primary detection in cognitive radio. IEEE Journal on Selected Topics in Signal Processing 2008, 2(1):18-27.


  18. Sun C, Zhang W, Letaief KB: Cooperative spectrum sensing for cognitive radios under bandwidth constraints. Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '07), March 2007, Hong Kong 1-5.


  19. Sun C, Zhang W, Letaief KB: Cluster-based cooperative spectrum sensing in cognitive radio systems. Proceedings of IEEE International Conference on Communications (ICC '07), June 2007, Glasgow, UK 2511-2515.


  20. Lee C-H, Wolf W: Energy efficient techniques for cooperative spectrum sensing in cognitive radios. Proceedings of the 5th IEEE Consumer Communications and Networking Conference (CCNC '08), January 2008, Las Vegas, Nev, USA 968-972.


  21. Chen R, Park J-M, Reed JH: Defense against primary user emulation attacks in cognitive radio networks. IEEE Journal on Selected Areas in Communications 2008, 26(1):25-37.


  22. Newman T, Clancy T: Security threats to cognitive radio signal classifiers. Proceedings of the Virginia Tech Wireless Personal Communications Symposium, June 2009, Blacksburg, Va, USA


  23. Mishra SM, Sahai A, Brodersen RW: Cooperative sensing among cognitive radios. Proceedings of the IEEE International Conference on Communications (ICC '06), June 2006, Istanbul, Turkey 4: 1658-1663.


  24. Chen R, Park J-M, Bian K: Robust distributed spectrum sensing in cognitive radio networks. Proceedings of IEEE International Conference on Computer Communications (INFOCOM '08), April 2008, Phoenix, Ariz, USA 31-35.


  25. Clancy T, Goergen N: Security in cognitive radio networks: threats and mitigation. Proceedings of the 3rd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom '08), May 2008, Singapore


  26. Wang W, Li H, Sun Y, Han Z: Attack-proof collaborative spectrum sensing in cognitive radio networks. Proceedings of the 43rd Annual Conference on Information Sciences and Systems (CISS '09), March 2009 130-134.


  27. Wang W, Li H, Sun Y, Han Z: CatchIt: detect malicious nodes in collaborative spectrum sensing. Proceedings of the IEEE Global Communications Conference (GLOBECOM '09), November 2009, Honolulu, Hawaii, USA


  28. Peng R, Sichitiu ML: Angle of arrival localization for wireless sensor networks. Proceedings of the 3rd Annual IEEE Communications Society on Sensor and Ad Hoc Communications and Networks (Secon '06), September 2006 1: 374-382.


  29. Bahl P, Padmanabhan VN: RADAR: an in-building RF-based user location and tracking system. Proceedings of IEEE International Conference on Computer Communications (INFOCOM '00), March 2000, Tel Aviv, Israel 775-784.


  30. Wellenhoff BH, Lichtenegger H, Collins J: Global Positions System: Theory and Practice. 4th edition. Springer, Berlin, Germany; 1997.


  31. Savvides A, Han C-C, Strivastava MB: Dynamic fine-grained localization in ad-hoc networks of sensors. Proceedings of the 7th Annual International Conference on Mobile Computing and Networking (MOBICOM '01), July 2001, Rome, Italy 166-179.


  32. He T, Huang C, Blum BM, Stankovic JA, Abdelzaher T: Rangefree localization schemes for large scale sensor networks. Proceedings of the 9th Annual International Conference on Mobile Computing and Networking (MOBICOM '03), 2003, San Diego, Calif, USA


  33. Zhao Q, Tong L, Swami A, Chen Y: Decentralized cognitive MAC for opportunistic spectrum access in ad hoc networks: a POMDP framework. IEEE Journal on Selected Areas in Communications 2007, 25(3):589-599.



Acknowledgment

This work is supported by CNS-0905556, NSF Award no. 0910461, no. 0831315, no. 0831451 and no. 0901425.

Author information

Corresponding author: Wenkai Wang.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Wang, W., Li, H., Sun, Y. et al. Securing Collaborative Spectrum Sensing against Untrustworthy Secondary Users in Cognitive Radio Networks. EURASIP J. Adv. Signal Process. 2010, 695750 (2009). https://doi.org/10.1155/2010/695750

