The conventional constant or binary power allocation of the SU clearly does not fully exploit the capability of co-existing transmission. Motivated by this, we propose a multiple-level power allocation strategy for the SU to improve its average achievable rate.

### Strategy of multiple-level power allocation

Define {ℜ_{1},…,ℜ_{M}} as *M* disjoint spaces of the receiving energy *x*, and {*P*_{1},…,*P*_{M}} as the corresponding allocated powers of the SU. The proposed power allocation strategy can then be written as

\begin{array}{l}P\left(x\right)=\sum _{i=1}^{M}{P}_{i}{I}_{x\in {\Re}_{i}},\end{array}

(4)

where *I*_{A} is the indicator function, with *I*_{A} = 1 if *A* is true and *I*_{A} = 0 otherwise. Note that the conventional power allocation rules are special cases with *M* = 1 or 2.
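As a concrete illustration, the strategy in (4) with contiguous regions reduces to a simple threshold lookup. The sketch below is a minimal example; the threshold and power values are purely hypothetical.

```python
import bisect

def allocate_power(x, thresholds, powers):
    """Multiple-level power allocation P(x) of Eq. (4).

    thresholds: interval edges [eta_1, ..., eta_{M-1}] splitting [0, inf)
    into M contiguous regions R_1, ..., R_M; powers: [P_1, ..., P_M].
    """
    i = bisect.bisect_right(thresholds, x)  # index of the region containing x
    return powers[i]

# M = 3 levels: higher power when the received energy x is small
# (PU likely idle), lower power when x is large (PU likely busy).
thresholds = [2.0, 5.0]    # hypothetical eta_1, eta_2
powers = [1.0, 0.5, 0.1]   # hypothetical P_1 >= P_2 >= P_3
print(allocate_power(1.0, thresholds, powers))  # -> 1.0
```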

Using (4), the instantaneous rates of the SU with received energy *x*, in the absence and presence of the PU, are given by

\begin{array}{l}R\left(x\right){|}_{{H}_{0}}=\sum _{i=1}^{M}{\text{log}}_{2}\left(1+\frac{{P}_{i}h}{{N}_{0}}\right){I}_{x\in {\Re}_{i}},\phantom{\rule{2em}{0ex}}\end{array}

(5)

\begin{array}{l}R\left(x\right){|}_{{H}_{1}}=\sum _{i=1}^{M}{\text{log}}_{2}\left(1+\frac{{P}_{i}h}{{N}_{0}+{g}_{2}{P}_{p}}\right){I}_{x\in {\Re}_{i}},\phantom{\rule{2em}{0ex}}\end{array}

(6)

respectively. Using the total probability formula, the average throughput of the SU under the proposed multiple-level power allocation strategy can be formulated as

\begin{array}{ll}R=\frac{T-\tau}{T}\sum _{i=1}^{M}&\left[{q}_{0}{\text{log}}_{2}\left(1+\frac{{P}_{i}h}{{N}_{0}}\right){p}_{i,0}\right.\\ &\left.+\,{q}_{1}{\text{log}}_{2}\left(1+\frac{{P}_{i}h}{{N}_{0}+{g}_{2}{P}_{p}}\right){p}_{i,1}\right],\end{array}

(7)

where *q*_{0} and *q*_{1} = 1 - *q*_{0} are the idle and busy probabilities of the PU, respectively; *p*_{i,0} and *p*_{i,1} are functions of *τ* and can be computed from

\begin{array}{l}{p}_{i,j}=\text{Pr}(x\in {\Re}_{i}|{H}_{j})=\underset{0}{\overset{\infty}{\int}}{I}_{x\in {\Re}_{i}}f(x\left|{H}_{j}\right)\mathit{\text{dx}},\phantom{\rule{1em}{0ex}}j=0,1.\end{array}

(8)

In order to keep the long-term power budget of the SU, the average transmit power, denoted by \bar{P}, is constrained as

\frac{T-\tau}{T}\sum _{i=1}^{M}{P}_{i}\left[{q}_{0}{p}_{i,0}+{q}_{1}{p}_{i,1}\right]\le \bar{P}.

(9)

Moreover, to protect the QoS of the PU, an interference temperature constraint should be applied as well. Under (4), interference is caused only when the PU is present. Denoting by \bar{I} the maximum average allowable interference at the PU, the average interference power constraint can be formulated as

\frac{T-\tau}{T}\sum _{i=1}^{M}\gamma {q}_{1}{P}_{i}{p}_{i,1}\le \bar{I}.

(10)

Our target is to find the optimal space division {ℜ_{i}},^{a} the power allocation {*P*_{i}}, as well as the sensing time *τ* that maximize the average achievable rate of the SU under the power constraints. The optimization is formulated as

\begin{array}{l}\underset{\tau ,{P}_{i},{\Re}_{i}}{\text{max}}\phantom{\rule{1em}{0ex}}R\\ \phantom{\rule{1em}{0ex}}\text{s.t.}\phantom{\rule{1em}{0ex}}\left(9\right),\phantom{\rule{1em}{0ex}}\left(10\right),\phantom{\rule{1em}{0ex}}0\le \tau \le T,\phantom{\rule{1em}{0ex}}{P}_{i}\ge 0,\phantom{\rule{1em}{0ex}}\forall i.\end{array}

(11)

The term \frac{T-\tau}{T} reflects that transmission occupies only the fraction of the slot remaining after sensing, so the power constraints apply to the transmission portion. Note that (11) is nonlinear and non-convex over *τ*. Hence, following [4, 10], we simply use a one-dimensional search over the interval [0,*T*] to find the optimal *τ*, whose complexity is generally acceptable [11, 12].
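This one-dimensional search can be sketched as below; the grid resolution and the inner evaluator `rate_given_tau` (which stands in for solving (11) at a fixed *τ*) are placeholders.

```python
def best_sensing_time(rate_given_tau, T, n_grid=100):
    """Grid search for the sensing time tau over [0, T]:
    evaluate the inner optimization at each candidate, keep the best."""
    best_tau, best_rate = 0.0, float("-inf")
    for k in range(n_grid + 1):
        tau = T * k / n_grid
        r = rate_given_tau(tau)  # average rate achieved with this tau
        if r > best_rate:
            best_tau, best_rate = tau, r
    return best_tau, best_rate
```

For instance, with a toy concave rate profile peaking at *τ* = 0.3*T*, the search recovers that peak up to the grid resolution.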

### The algorithm

Lloyd’s algorithm is employed here to solve problem (11); its local convergence has been proved for some cases in one-dimensional space, but in general there is no guarantee that it converges to the global optimum [13]. Starting from a feasible solution as the initial value, e.g., subspaces {ℜ_{i}} satisfying {p}_{i,0}=\frac{1}{M}, we repeat the following two steps until convergence: step 1 - determine the power allocations {*P*_{i}} given the subspaces {ℜ_{i}}; step 2 - determine the subspaces {ℜ_{i}} given the power allocations {*P*_{i}}.
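The alternation can be sketched as the following skeleton; `update_powers` and `update_subspaces` are placeholder callbacks standing in for the closed-form solutions derived below, and stopping when the rate no longer improves is one common convergence test.

```python
def lloyd_iteration(update_powers, update_subspaces, subspaces,
                    tol=1e-6, max_iter=100):
    """Alternate the two steps until the achieved rate stops improving.

    update_powers(subspaces)  -> (powers, rate)   # step 1
    update_subspaces(powers)  -> subspaces        # step 2
    As with Lloyd's algorithm, only a local optimum is guaranteed.
    """
    prev_rate = float("-inf")
    powers = None
    for _ in range(max_iter):
        powers, rate = update_powers(subspaces)
        subspaces = update_subspaces(powers)
        if rate - prev_rate < tol:  # no further improvement
            break
        prev_rate = rate
    return subspaces, powers
```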

#### Subspaces design

First, we demonstrate that the design of the optimal subspace division {ℜ_{i}} and power allocation {*P*_{i}} is equivalent to a modified *distortion measure* design [14]. Incorporating the power constraints through the Lagrange multipliers *λ* and *μ*, we define the following *distortion measure* for optimizing the rate

\begin{array}{ll}R(x,{P}_{i})=&\ {q}_{0}{\text{log}}_{2}\left(1+\frac{{P}_{i}h}{{N}_{0}}\right)f(x|{H}_{0})+{q}_{1}{\text{log}}_{2}\left(1+\frac{{P}_{i}h}{{N}_{0}+{g}_{2}{P}_{p}}\right)f(x|{H}_{1})\\ &-\lambda {P}_{i}\left[{q}_{0}f(x|{H}_{0})+{q}_{1}f(x|{H}_{1})\right]-\mu {q}_{1}\gamma {P}_{i}f(x|{H}_{1}).\end{array}

(12)

The optimization problem in (11) is equivalent to selecting {ℜ_{i}} and {*P*_{i}} to maximize the *average distortion* given by

R=\frac{T-\tau}{T}\sum _{i=1}^{M}\underset{x\in {\Re}_{i}}{\int}R(x,{P}_{i})\mathrm{dx.}

(13)

The optimal subspaces {ℜ_{i}} are then determined by the *farthest neighbor rule* [14] as

{\Re}_{i}=\{x:\phantom{\rule{1em}{0ex}}R(x,{P}_{i})\ge R(x,{P}_{k}),\phantom{\rule{1em}{0ex}}\forall k\ne i\}.

(14)

The following lemma is instrumental in deriving the optimal subspaces {ℜ_{i}}.

##### Lemma 1.

For *x*_{1} < *x*_{2} < *x*_{3}, if *x*_{1} ∈ ℜ_{i}, *x*_{2} ∈ ℜ_{k} and *i* ≠ *k*, then *x*_{3} ∉ ℜ_{i} must hold.

##### Proof.

Define a function of *x* as

\begin{array}{ll}{S}_{i,k}\left(x\right)& =R(x,{P}_{i})-R(x,{P}_{k})\\ =\frac{{x}^{\tau {f}_{s}-1}{e}^{-\frac{x}{{N}_{0}}}}{\Gamma (\tau {f}_{s})}\left[\frac{{a}_{i,k}}{{({N}_{0}+{g}_{2}{P}_{p})}^{\tau {f}_{s}}}{e}^{\frac{x{g}_{2}{P}_{p}}{{N}_{0}({N}_{0}+{g}_{2}{P}_{p})}}+\frac{{b}_{i,k}}{{N}_{0}^{\tau {f}_{s}}}\right],\end{array}

where

\begin{array}{ll}{a}_{i,k}=&\ {q}_{1}\left[{\text{log}}_{2}\left(1+\frac{{P}_{i}h}{{N}_{0}+{g}_{2}{P}_{p}}\right)-{\text{log}}_{2}\left(1+\frac{{P}_{k}h}{{N}_{0}+{g}_{2}{P}_{p}}\right)\right]\\ &-\lambda {q}_{1}({P}_{i}-{P}_{k})-\mu {q}_{1}\gamma ({P}_{i}-{P}_{k}),\\ {b}_{i,k}=&\ {q}_{0}\left[{\text{log}}_{2}\left(1+\frac{{P}_{i}h}{{N}_{0}}\right)-{\text{log}}_{2}\left(1+\frac{{P}_{k}h}{{N}_{0}}\right)\right]-\lambda {q}_{0}({P}_{i}-{P}_{k}).\end{array}

From *x*_{1} ∈ ℜ_{i}, *x*_{2} ∈ ℜ_{k} and (14), we know that *S*_{i,k}(*x*_{1}) > 0 and *S*_{i,k}(*x*_{2}) < 0. The sign of *S*_{i,k}(*x*) is decided by \frac{{a}_{i,k}}{{({N}_{0}+{g}_{2}{P}_{p})}^{\tau {f}_{s}}}{e}^{\frac{x{g}_{2}{P}_{p}}{{N}_{0}({N}_{0}+{g}_{2}{P}_{p})}}+\frac{{b}_{i,k}}{{N}_{0}^{\tau {f}_{s}}}, which is strictly monotonic in *x*. Thus, for any *x*_{3} > *x*_{2}, we have *S*_{i,k}(*x*_{3}) < 0 and hence *x*_{3} ∉ ℜ_{i}.

##### Proposition 1.

ℜ_{i}, *i* = 1,…,*M*, are contiguous intervals and satisfy \bigcup _{i=1,\dots ,M}{\Re}_{i}=[0,\infty ).

##### Proof.

The proof follows readily by contradiction: if some ℜ_{i} consisted of two or more disjoint intervals, Lemma 1 would be violated. This proposition is instrumental in obtaining the explicit formulation of ℜ_{i}.

Define *M* + 1 thresholds *η*_{0}, *η*_{1},…,*η*_{M} with *η*_{0} = 0 and *η*_{M} = +*∞*. Thus, ℜ_{i} corresponds to one of [*η*_{j-1},*η*_{j}), *j* = 1,…,*M*. Based on Lemma 1, we can calculate the *η*_{j} sequentially and assign {ℜ_{i}} as in Algorithm 1. The solution *x*_{k} of *S*_{i,k}(*x*_{k}) = 0 is given by

\begin{array}{l}{x}_{k}=\frac{{N}_{0}({N}_{0}+{g}_{2}{P}_{p})}{{g}_{2}{P}_{p}}\cdot \text{ln}\left(\frac{-{b}_{i,k}{({N}_{0}+{g}_{2}{P}_{p})}^{\tau {f}_{s}}}{{a}_{i,k}{N}_{0}^{\tau {f}_{s}}}\right).\end{array}

(15)

##### Algorithm 1 Subspaces design for *x* given {*P*_{i}}
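The boundary between two adjacent power levels can be computed directly from (15). The sketch below uses the coefficient expressions from the proof of Lemma 1 (with the logarithms in *b*_{i,k} taken at noise level *N*_{0}, matching (12)); `tau_fs` denotes *τf*_{s}, and all numeric parameters in the usage are illustrative.

```python
import math

def boundary(Pi, Pk, q0, q1, lam, mu, gamma, h, N0, g2Pp, tau_fs):
    """Crossing point x_k solving S_{i,k}(x_k) = 0, Eq. (15).

    Returns None when S_{i,k} has no zero crossing
    (log argument <= 0, i.e., a and b do not have opposite signs).
    """
    def rate_gap(noise):
        # difference of the two log-rate terms at the given noise level
        return math.log2(1 + Pi * h / noise) - math.log2(1 + Pk * h / noise)

    a = q1 * rate_gap(N0 + g2Pp) - (lam + mu * gamma) * q1 * (Pi - Pk)
    b = q0 * rate_gap(N0) - lam * q0 * (Pi - Pk)
    arg = -b * (N0 + g2Pp) ** tau_fs / (a * N0 ** tau_fs)
    if arg <= 0:
        return None
    return N0 * (N0 + g2Pp) / g2Pp * math.log(arg)
```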

#### Power allocation

After obtaining the thresholds *η*_{i}, the probabilities *p*_{i,j} in (11) can be explicitly expressed as

{p}_{i,j}=\underset{{\eta}_{i-1}}{\overset{{\eta}_{i}}{\int}}f\left(x\right|{H}_{j})\mathit{\text{dx}},\phantom{\rule{1em}{0ex}}i\in [\phantom{\rule{0.3em}{0ex}}1,\dots ,M],\phantom{\rule{1em}{0ex}}j=0,1.

(16)
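As a numerical sketch of (16), assuming the received-energy densities take the Gamma form implied by (23) (shape *τf*_{s}, scale *N*_{0} under H_0 and *N*_{0} + *g*_{1}*P*_{p} under H_1), the probabilities can be evaluated by simple quadrature; in practice a library CDF such as `scipy.stats.gamma.cdf` would replace the hand-rolled integration.

```python
import math

def gamma_pdf(x, shape, scale):
    """Gamma density with the assumed shape/scale parameterization."""
    if x <= 0:
        return 0.0
    return x ** (shape - 1) * math.exp(-x / scale) / (math.gamma(shape) * scale ** shape)

def p_ij(eta_lo, eta_hi, shape, scale, n=20000):
    """Eq. (16): probability that x lands in [eta_lo, eta_hi) under H_j,
    by trapezoidal quadrature over the interval."""
    if math.isinf(eta_hi):
        # truncate the upper tail far beyond the mean (shape * scale)
        eta_hi = scale * (shape + 20 * math.sqrt(shape))
    step = (eta_hi - eta_lo) / n
    total = 0.5 * (gamma_pdf(eta_lo, shape, scale) + gamma_pdf(eta_hi, shape, scale))
    total += sum(gamma_pdf(eta_lo + k * step, shape, scale) for k in range(1, n))
    return total * step
```

By construction, summing p_{i,j} over a full partition of [0, ∞) returns 1, which is a convenient sanity check.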

Let us first write the Lagrangian *L*(*P*_{i},*λ*,*μ*) for problem (11) under the constraints (9) and (10) as

\begin{array}{ll}L({P}_{i},\lambda ,\mu )=&\ R+\lambda \left(\bar{P}-\frac{T-\tau}{T}\sum _{i=1}^{M}{P}_{i}\left[{q}_{0}{p}_{i,0}+{q}_{1}{p}_{i,1}\right]\right)\\ &+\mu \left(\bar{I}-\frac{T-\tau}{T}\sum _{i=1}^{M}{q}_{1}\gamma {P}_{i}{p}_{i,1}\right),\end{array}

(17)

where *λ*, *μ* ≥ 0 are dual variables corresponding to (9) and (10). The Lagrange dual optimization can be formulated as

\begin{array}{l}\underset{\lambda \ge 0,\phantom{\rule{1em}{0ex}}\mu \ge 0}{\text{min}}\phantom{\rule{2em}{0ex}}g(\lambda ,\mu )\triangleq \underset{{P}_{i}\ge 0}{\text{sup}}L({P}_{i},\lambda ,\mu ).\end{array}

(18)

In (11), \frac{{\partial}^{2}R}{\partial {P}_{i}^{2}}=-\frac{T-\tau}{T}\left\{\frac{{\text{log}}_{2}\left(e\right){q}_{0}{p}_{i,0}}{{({P}_{i}+{N}_{0}/h)}^{2}}+\frac{{\text{log}}_{2}\left(e\right){q}_{1}{p}_{i,1}}{{({P}_{i}+({N}_{0}+{g}_{2}{P}_{p})/h)}^{2}}\right\}<0, and \frac{{\partial}^{2}R}{\partial {P}_{i}\partial {P}_{j}}=0 for i\ne j. Since the constraints are linear functions, problem (11) is concave over *P*_{i}. Thus the optimal value of problem (18) is equal to that of (11), and we can solve (18) instead of (11). From (18), we have to obtain the supremum of *L*(*P*_{i},*λ*,*μ*). Taking the derivative of *L*(*P*_{i},*λ*,*μ*) with respect to *P*_{i} leads to

\begin{array}{ll}\frac{\partial L({P}_{i},\lambda ,\mu )}{\partial {P}_{i}}=\frac{T-\tau}{T}&\left\{\frac{{\text{log}}_{2}\left(e\right){q}_{0}{p}_{i,0}}{{P}_{i}+{N}_{0}/h}+\frac{{\text{log}}_{2}\left(e\right){q}_{1}{p}_{i,1}}{{P}_{i}+({N}_{0}+{g}_{2}{P}_{p})/h}\right.\\ &\left.-\lambda \left[{q}_{0}{p}_{i,0}+{q}_{1}{p}_{i,1}\right]-\mu {q}_{1}\gamma {p}_{i,1}\right\}.\end{array}

(19)

By setting the above equation to 0 and applying the constraint *P*_{i} ≥ 0, the optimal power allocation *P*_{i} for given Lagrange multipliers *λ* and *μ* is computed as

\begin{array}{l}{P}_{i}={\left[\frac{{A}_{i}+\sqrt{{\Delta}_{i}}}{2}\right]}^{+},\end{array}

(20)

where [*x*]^{+} denotes max (0,*x*), and

\begin{array}{ll}{A}_{i}=&\ \frac{{\text{log}}_{2}\left(e\right)\left[{q}_{0}{p}_{i,0}+{q}_{1}{p}_{i,1}\right]}{\lambda \left[{q}_{0}{p}_{i,0}+{q}_{1}{p}_{i,1}\right]+\mu {q}_{1}\gamma {p}_{i,1}}-\frac{2{N}_{0}+{g}_{2}{P}_{p}}{h},\end{array}

(21)

\begin{array}{ll}{\Delta}_{i}=&\ {A}_{i}^{2}+\frac{4}{h}\left\{\frac{{\text{log}}_{2}\left(e\right)\left[{q}_{0}{p}_{i,0}({N}_{0}+{g}_{2}{P}_{p})+{q}_{1}{p}_{i,1}{N}_{0}\right]}{\lambda \left[{q}_{0}{p}_{i,0}+{q}_{1}{p}_{i,1}\right]+\mu {q}_{1}\gamma {p}_{i,1}}\right.\\ &\left.-\frac{{N}_{0}({N}_{0}+{g}_{2}{P}_{p})}{h}\right\}.\end{array}

(22)
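The closed form (20)-(22) can be evaluated directly. The sketch below takes the probabilities *p*_{i,0}, *p*_{i,1} and the dual variables as inputs; the fallback to zero when the discriminant is negative is our convention for the no-real-root case, and any numeric values used with it are illustrative only.

```python
import math

def optimal_power(p0, p1, q0, q1, lam, mu, gamma, h, N0, g2Pp):
    """Power level of Eqs. (20)-(22) for one subspace, given duals lam, mu."""
    log2e = math.log2(math.e)
    # common denominator of A_i and Delta_i
    denom = lam * (q0 * p0 + q1 * p1) + mu * q1 * gamma * p1
    A = log2e * (q0 * p0 + q1 * p1) / denom - (2 * N0 + g2Pp) / h   # Eq. (21)
    delta = A * A + (4 / h) * (                                      # Eq. (22)
        log2e * (q0 * p0 * (N0 + g2Pp) + q1 * p1 * N0) / denom
        - N0 * (N0 + g2Pp) / h
    )
    if delta < 0:
        return 0.0
    return max(0.0, (A + math.sqrt(delta)) / 2)                      # Eq. (20)
```

When the returned power is strictly positive, it zeroes the derivative (19), which gives a direct correctness check.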

##### Proposition 2.

The power allocation levels *P*_{i} are non-increasing over *i*.

##### Proof.

First, from (3), we have

\frac{f\left(x\right|{H}_{1})}{f\left(x\right|{H}_{0})}={e}^{\frac{x{g}_{1}{P}_{p}}{{N}_{0}({N}_{0}+{g}_{1}{P}_{p})}}{\left(\frac{{N}_{0}}{{N}_{0}+{g}_{1}{P}_{p}}\right)}^{\tau {f}_{s}},

(23)

and it is obviously an increasing function of *x*. Through some simple manipulations, the monotonicity of *A*_{i} is shown to be equivalent to the monotonicity of the following term:

{C}_{i}=\frac{1+\frac{{q}_{1}}{{q}_{0}}\frac{p(i,1)}{p(i,0)}}{1+(1+\mu \gamma /\lambda )\frac{{q}_{1}}{{q}_{0}}\frac{p(i,1)}{p(i,0)}}.

(24)

From (23), we can get that

\frac{p(i,1)}{p(i,0)}>\frac{p(i+1,1)}{p(i+1,0)},\phantom{\rule{1em}{0ex}}\forall i.

(25)

Jointly from (24) and (25), we know that *A*_{i} is a decreasing function of *i*. The monotonicity of \frac{4}{h}\left\{\frac{{\text{log}}_{2}\left(e\right)\left[{q}_{0}{p}_{i,0}({N}_{0}+{g}_{2}{P}_{p})+{q}_{1}{p}_{i,1}{N}_{0}\right]}{\lambda \left[{q}_{0}{p}_{i,0}+{q}_{1}{p}_{i,1}\right]+\mu {q}_{1}\gamma {p}_{i,1}}-\frac{{N}_{0}({N}_{0}+{g}_{2}{P}_{p})}{h}\right\} is equivalent to the monotonicity of the following term:

{D}_{i}=\frac{1+\frac{{N}_{0}}{{N}_{0}+{g}_{2}{P}_{p}}\frac{{q}_{1}}{{q}_{0}}\frac{p(i,1)}{p(i,0)}}{1+(1+\mu \gamma /\lambda )\frac{{q}_{1}}{{q}_{0}}\frac{p(i,1)}{p(i,0)}}.

(26)

Similarly, we get that Δ_{i} is a decreasing function of *i*. Thus, from (20), we can conclude that *P*_{i} is non-increasing with respect to *i*.

##### Remark 1.

Proposition 2 shows that a smaller *x* implies a lower probability of the PU being busy, so the SU can use higher transmit power to better exploit the primary band. Conversely, at larger *x*, lower transmit power should be used to prevent harmful interference to the PU. Thus, the proposed multiple-level power allocation strategy can equivalently be defined over the probability of the PU being busy.

Subgradient-based methods, e.g., the ellipsoid method and Newton's method [15], are used here to find the optimal Lagrange multipliers *λ* and *μ*. The subgradient of *g*(*λ*,*μ*) is [*C*,*D*]^{T}, where

\begin{array}{l}C=\bar{P}-\frac{T-\tau}{T}\sum _{i=1}^{M}{\bar{P}}_{i}\left[{q}_{0}{p}_{i,0}+{q}_{1}{p}_{i,1}\right],\\ D=\bar{I}-\frac{T-\tau}{T}\sum _{i=1}^{M}{q}_{1}\gamma {\bar{P}}_{i}{p}_{i,1},\end{array}

(27)

where {\bar{P}}_{i} is the optimal power allocation for fixed *λ* and *μ* [16]. Finally, we summarize the procedure that computes the sensing time and the multiple-level power allocation in Algorithm 2.
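As a minimal alternative sketch to the ellipsoid or Newton updates of [15], one projected-subgradient step on the dual variables looks as follows; the step size is a placeholder, and since *g*(*λ*,*μ*) is minimized, the update moves against the subgradient [C, D] of (27) and projects back onto the nonnegative orthant.

```python
def subgradient_step(lam, mu, C, D, step):
    """One projected-subgradient update of (lam, mu) using [C, D] of Eq. (27)."""
    return max(0.0, lam - step * C), max(0.0, mu - step * D)
```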

##### Algorithm 2 Sensing time and multiple-level power allocations

##### Remark 2.

All computations are performed offline and the resulting power control rule is stored in a look-up table for real-time implementation. Thus, the computational complexity is not significant.