Weighted sum-rate maximization for multiuser SIMO multiple access channels in cognitive radio networks
EURASIP Journal on Advances in Signal Processing volume 2013, Article number: 80 (2013)
Abstract
In this article, an efficient distributed and parallel algorithm is proposed to maximize the sum-rate and optimize the input distribution policy for the multiuser single-input multiple-output multiple access channel (MU-SIMO MAC) system with concurrent access within a cognitive radio (CR) network. "Single input" means that every user has a single antenna, and "multiple output" means that the base station(s) has multiple antennas. The main features are: (i) the power distribution for the users is updated using variable scale factors, which effectively and efficiently maximize the objective function at each iteration; (ii) distributed and parallel computation is employed to expedite convergence of the proposed distributed algorithm; and (iii) a novel waterfilling procedure with mixed constraints is investigated and used as a fundamental block of the proposed algorithm. Because it fully exploits the structure of the proposed model, the proposed algorithm converges quickly. Numerical results verify that the proposed algorithm is effective and fast convergent. Using the proposed approach, over the simulated range, the required number of iterations for convergence is two, and this number is not sensitive to an increase in the number of users. This feature is quite desirable for large-scale systems with dense active users. In addition, it is also worth noting that the proposed algorithm is a monotonic feasible operator with respect to the iteration, so a stopping criterion for the computation can easily be set up.
1 Introduction
The radio spectrum is a precious resource that demands efficient utilization, as the currently licensed spectrum is severely underutilized [1]. Cognitive radio (CR) [2], [3], [4], which adapts a radio's operating characteristics to real-time conditions, is the key technology that allows flexible, efficient, and reliable spectrum utilization in wireless communications. This technology exploits the underutilized licensed spectrum of the primary user(s) (PU) and introduces secondary user(s) (SU) to operate on spectrum that is either opportunistically available or concurrently shared by the PUs and the SUs. Under this situation, and according to the definition of a cognitive (radio) network [5], opportunistically utilizing the spectrum means that the SUs may fill the spectrum gaps or holes left by the PUs, whereas concurrently utilizing the spectrum means that the SUs transmit over the same spectrum as the PUs, in such a way that the interference from the transmitting SUs does not violate the quality requirement of the PUs. This article focuses on the latter case. Furthermore, multiple-input multiple-output (MIMO) technology uses multiple antennas at the transmitter and/or the receiver to significantly increase data throughput and link range without additional bandwidth or transmit power, so it plays an important role in wireless communications today. In infrastructure-supported networks, such as the widely used cellular network, base stations are typically shared by a large number of users. Within the scope of this article, it is therefore assumed that the base station under consideration is shared by multiple PUs and multiple SUs. In this article, a MIMO-enhanced CR network is considered, in order to fully ensure the quality of service (QoS) of the PUs as well as to maximize the weighted sum-rate of the SUs. We consider multiple SUs accessing the base station, referred to as a multiple-access channel (MAC).
The weighted sum-rate maximization problem is to compute the "best" achievable rate vector in the capacity region [6], [7], [8] by specifying the operating point on the boundary of the capacity region. This optimality is in the Pareto sense of multi-objective optimization.
For the non-CR cases, the sum-rate maximization problem has been intensively explored for both the Gaussian broadcast channel (BC) [9], [10] and the Gaussian MAC [11]. Typical approaches include iterative waterfilling algorithms [9], [11] and dual decomposition [10]. The conventional waterfilling algorithm [12], which is an efficient resource allocation algorithm, needs to be used inside each of the iterations as an inner-loop operation. In addition, the well-known duality between the Gaussian BC and the sum-power constrained Gaussian dual MAC [13], [14], [15] facilitates the transformation of BC sum-rate problems into their dual MAC problems. As for the weighted sum-rate problem, it is easily seen that when the weight coefficients are all unity, the weighted sum-rate problem reduces to a sum-rate optimization problem; solving the weighted sum-rate problem is therefore more general. However, due to the more complicated problem structure, the conventional waterfilling algorithm [12] is not able to compute its solution. For computing the maximum weighted sum-rate for a class of Gaussian single-input multiple-output (SIMO) BC systems, or equivalent dual MAC systems, [16] presented algorithms using a cyclic coordinate ascent method to provide the max-stability policy.
For the CR cases, besides the individual power constraints on the SUs, the total interference power from the SUs needs to be included in the constraints of the target problem. Since single-antenna mobile users are quite common and compose a major served group, due to the size and cost limitations of mobile terminals, this article is confined to a single-input multiple-output multiple access channel (SIMO-MAC) in the CR network. Earlier studies [17], [18] investigated the sum-rate problem and the weighted sum-rate problem in CR-SIMO-MAC settings, respectively. In addition, for the ergodic sum capacity of single-input single-output (SISO) systems, [19] studied the maximum (non-weighted) sum-rate problem with a simple form of the objective function.
In this article, by exploiting the structure of the weighted sum-rate optimization problem, we propose an efficient iterative algorithm to compute the optimal input policy and maximize the weighted sum-rate, by solving a generalized waterfilling problem in each of the iterations. The waterfilling machinery is undergoing continuous development [12], [20], [21], [22], [23]. Here, we propose a generalized weighted waterfilling algorithm (GWWFA) to form a fundamental step (the inner-loop algorithm) for the target problem. In the inner loop, the weighted sum-rate problem is decomposed into a series of generalized waterfilling problems. With this decomposition, a decoupled system, each equation of which contains only a scalar variable, is formed and solved. Each of the equations is solved by the GWWFA within a finite number of loops. To speed up the computation of the solution to each equation, we also specify the interval in which the solution lies.
For the outer loop of the algorithm, a variable scale factor is applied to update the covariance vector of the users. The optimal scale factor is obtained by maximizing the target objective value (i.e., the weighted sum-rate) over the scalar variable β, to expedite convergence of the proposed algorithm. To this end, we determine an optimal scale factor by searching over a range consisting of a few discrete values. As a result, parallel operation can be used to expedite the search and to avoid the need for another nested loop. This parallel operation can be distributed to and carried out by multiple processors (for example, four processors).
Compared with the earlier study [18], the main differences of our work are: (i) in [18], the dual-decomposition approach [10] is used, whereas we apply the iterative waterfilling algorithm [9] and extend it to solve the target problem. The advantage of the iterative waterfilling algorithm is that it is a monotonic feasible operator with respect to the iteration; that is, the proposed algorithm generates a sequence of feasible points in its iterations, and the objective function values corresponding to this point sequence are monotonically increasing. Hence, the stopping criterion for the computation can easily be set up. By contrast, the regular primal-dual method used in [18] is not a feasible-point method; (ii) for the constraints of the target problem, we make the individual power constraints stricter and more reasonable, since the values of the signal powers are assumed to be greater than or equal to zero; (iii) the convergence rate is improved significantly. In the numerical example illustrated by Figure 1 of [18], convergence of the weighted sum-rate is obtained after 110 iterations for a system with 3 SUs and 2 PUs. With our proposed algorithm, in contrast, we achieve weighted sum-rate convergence within two iterations over the simulated range (number of SUs up to 110). In addition, even if the PUs and SUs are served by different base stations, it is easy to see that the proposed machinery can be used with some minor modifications.
In the remainder of this article, the system model for a CR-SIMO-MAC system and its weighted sum-rate are described in Section 2. Section 3 discusses the proposed algorithm for solving the maximum weighted sum-rate problem through an inner-loop algorithm. The optimality proof of the inner-loop algorithm (GWWFA) is presented in Section 3.1. Then the outer-loop algorithm (AWCR) and its implementation are presented in Section 3.2. Section 4 provides the convergence proof of the AWCR. Section 5 presents numerical results and some complexity analysis to show the effectiveness of the proposed algorithm.
Key notations used in this article are as follows: |A| and Tr(A) denote the determinant and the trace of a square matrix A, respectively; E[X] is the expectation of the random variable X; the capital symbol I denotes the identity matrix of the corresponding size. A square matrix B ≽ 0 means that B is a positive semidefinite matrix. Further, for two arbitrary positive semidefinite matrices B and C, the expression B ≽ C means that the difference B − C is a positive semidefinite matrix. In addition, for any complex matrix, the superscripts † and T denote the conjugate transpose and the transpose of the matrix, respectively.
2 SIMO-MAC in a CR network and its weighted sum-rate
For a SIMO-MAC in a CR network, as shown in Figure 1, assume that there is one base station (BS) with N _{ r } antennas, together with K SUs and N PUs, each of which is equipped with a single antenna. The received signal \mathbf{y}\in {\mathbb{C}}^{{N}_{r}\times 1} at the BS is described as
where the j th entry x ^{j} of \mathbf{x}\in {\mathbb{C}}^{K\times 1} is the scalar complex input signal from the j th SU, and x is assumed to be a Gaussian random vector with zero mean and independent entries. The j th entry {\widehat{\mathbf{x}}}^{j} of \widehat{\mathbf{x}} is the scalar complex input signal from the j th PU, and \widehat{\mathbf{x}} is likewise assumed to be a Gaussian random vector with zero mean and independent entries. The noise term \mathbf{z}\in {\mathbb{C}}^{{N}_{r}\times 1} is an additive Gaussian noise random vector, i.e., \mathbf{z}\sim \mathcal{N}(0,{\sigma}^{2}\mathbf{I}). The channel inputs \widehat{\mathbf{x}} and \mathbf{x}, and the noise z, are assumed to be mutually independent. Furthermore, the j th SU's transmitted power can be expressed as
Note that S _{ j }, ∀j, is nonnegative.
The mathematical model of the weighted sum-rate optimization problem for the SIMO-MAC in the CR network can be stated as follows (refer to (2.16) in [6] and the references therein):
Given a group of weights {\left\{{w}_{k}\right\}}_{k=1}^{K}, assumed to be in decreasing order (users can be renumbered arbitrarily to satisfy this condition), with the achievable rate of secondary user k,
the weighted sum-rate is given by
where, for the MAC case, the peak power constraint on the k th SU is denoted by a group of positive numbers P _{ k }, k = 1, …, K, and the power threshold ensuring the QoS of the PUs is denoted by the positive number P _{ t }. Further, when no confusion is possible, f _{ w mac} is simply written as f. For convenience, we define η _{ k } = w _{ k } − w _{ k+1} for k = 1, …, K − 1, and η _{ K } = w _{ K }, as a group of nonnegative real numbers, and assume at least one of them to be nonzero. Further, the term {g}_{k}={\mathbf{h}}_{k}{\mathbf{h}}_{k}^{\dagger},\forall k, is the channel power gain of the k th SU to the BS. Also, we denote the covariance matrix of the random vector \sum _{j=1}^{N}{\widehat{\mathbf{h}}}_{j}^{\dagger}{\widehat{\mathbf{x}}}^{j}+\mathbf{z} by C _{0}. It is easy to see that the matrix C _{0} is positive definite.
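For concreteness, the objective f can be sketched numerically. The sketch below assumes the standard polymatroid form of the MAC weighted sum-rate, f = Σ_k η_k (log|C_0 + Σ_{j≤k} S_j h_j† h_j| − log|C_0|), which is consistent with the definitions of η_k, g_k, and C_0 above; the function name and array layout are ours, not the paper's.

```python
import numpy as np

def weighted_sumrate(S, h, eta, C0):
    """Weighted sum-rate of a SIMO MAC, assuming the standard polymatroid form
    f = sum_k eta_k * (log|C0 + sum_{j<=k} S_j h_j^H h_j| - log|C0|).
    S   : (K,) nonnegative transmit powers
    h   : (K, Nr) channel row vectors h_k
    eta : (K,) nonnegative weight differences eta_k = w_k - w_{k+1}
    C0  : (Nr, Nr) positive-definite interference-plus-noise covariance
    """
    _, logdet0 = np.linalg.slogdet(C0)
    A = C0.astype(complex)            # running matrix C0 + sum of rank-one terms
    f = 0.0
    for k in range(len(S)):
        A = A + S[k] * np.outer(h[k].conj(), h[k])   # add h_k^H h_k S_k
        _, logdet = np.linalg.slogdet(A)
        f += eta[k] * (logdet - logdet0)
    return f
```

For example, with C_0 = I, one user with h_1 = (1, 0), S_1 = 3 and η_1 = 1, this evaluates to log 4, as expected from log|I + 3 h_1† h_1|.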
The constraint \sum _{k=1}^{K}{g}_{k}{S}_{k}\le {P}_{t} is called the sum-power constraint with gains. This constraint is obtained from the following analysis. Let \mathbf{H}=\left[{\mathbf{h}}_{1}^{\dagger},\dots ,{\mathbf{h}}_{K}^{\dagger}\right]\in {\mathbb{C}}^{{N}_{r}\times K} and \widehat{\mathbf{H}}=\left[{\widehat{\mathbf{h}}}_{1}^{\dagger},\dots ,{\widehat{\mathbf{h}}}_{N}^{\dagger}\right]\in {\mathbb{C}}^{{N}_{r}\times N}. Thus, the received signal at the BS is \mathbf{y}=\widehat{\mathbf{H}}\widehat{\mathbf{x}}+(\mathbf{H}\mathbf{x}+\mathbf{z}), where H x + z can be regarded as additive interference and noise with respect to the transmitted signal \widehat{\mathbf{H}}\widehat{\mathbf{x}} from the PUs. To guarantee the QoS of the PUs, the power of the interference and noise should be less than a threshold value P _{TH}. This condition can be expressed as
It can be written as
where the power constraint value P _{ t } is the interference-and-noise threshold minus the Gaussian noise power.
As an alternative, to guarantee the QoS for each of the PUs, individually, the power of the interference and noise should be less than a threshold value, P _{TH}(i), ∀ i. Similarly, it is obtained that
That is to say, the condition above is equivalent to
Define P _{ t } = min_{ i }{P _{ t }(i)}; then the target model still covers the case where the QoS of each PU is considered individually. Note that at the base station with multiple antennas, the received signals can be regarded as a stochastic vector, or point, in a Hilbert space, and the received signal powers are abstracted as the squared norm of that vector. The transmitted powers of the PUs have been taken into account by forming C _{0} and P _{ t } as mentioned above, which appear in (3).
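The budget construction above (threshold minus noise power, with the minimum taken over per-PU thresholds) and the resulting sum-power constraint with gains can be illustrated in a few lines; all names here are ours and purely illustrative.

```python
import numpy as np

def interference_budget(P_TH, sigma2):
    """P_t: the interference-and-noise threshold minus the Gaussian noise
    power.  P_TH may be a scalar or a vector of per-PU thresholds P_TH(i);
    taking the minimum reduces the per-PU case to a single budget P_t."""
    return float(np.min(np.asarray(P_TH, dtype=float) - sigma2))

def qos_constraint_holds(S, g, P_t, tol=1e-12):
    """Sum-power constraint with gains:  sum_k g_k * S_k <= P_t."""
    return float(np.dot(g, S)) <= P_t + tol
```

For instance, per-PU thresholds (5, 4) with noise power 1 yield P_t = 3, and a power vector is admissible exactly when its gain-weighted sum stays within that budget.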
It is seen that the sequence {\left\{{\eta}_{k}\right\}}_{k=1}^{K} stems from the vector of weights used in multiuser information theory [6]. Each parameter η _{ k } in the sequence is called a weight coefficient when no confusion arises.
A stricter weighted sum-rate model, reflecting the essence of the issue for the CR-SIMO-MAC, can also be obtained. In a similar way to the above, we may choose power thresholds P _{ t, i } to limit the impact of the SUs on each antenna of the BS. The sum-power constraint with gains then evolves into \sum _{k=1}^{K}{g}_{k,i}{S}_{k}\le {P}_{t,i},i=1,2,\dots ,{N}_{r}. It is seen that such a weighted sum-rate problem with more power constraints can be solved by solving a problem similar to (3). Therefore, this article aims at computing the solution to problem (3). Note that if \exists {\mathbf{h}}_{{i}_{0}}=\mathbf{0}, 1 ≤ i _{0} ≤ K, in (3), we remove user i _{0} and the number of users is reduced to K − 1. Thus, we can assume that h _{ i } ≠ 0, ∀ i.
For the important special case of the sum-rate problem, which is included in (3), assume that M = rank(H). Applying the QR decomposition H = Q R, let \mathbf{Q}=\left[{\mathbf{q}}_{1},\dots ,{\mathbf{q}}_{M}\right]\in {\mathbb{C}}^{{N}_{r}\times M} have orthonormal column vectors, and let \mathbf{R}\in {\mathbb{C}}^{M\times K} be an upper triangular matrix with r _{ i, j } denoting its (i, j)th entry. Q ^{†} is regarded as an equalizer applied to the signal received by the BS. Thus, the i th SU has the rate:
where \widehat{{S}_{n}}=E\left[{\widehat{\mathbf{x}}}^{n}{\left({\widehat{\mathbf{x}}}^{n}\right)}^{\dagger}\right] and {\widehat{\mathbf{R}}}_{n}={\widehat{\mathbf{h}}}_{n}^{\dagger}{\widehat{\mathbf{h}}}_{n}, n = 1, …, N. It is easy to see that the rate just mentioned comes from the expression:
i.e., C _{0} = I in this case.
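The QR-based equalization of this special case is easy to reproduce numerically. The sketch below draws random channels purely for illustration and checks that applying Q† to the noiseless received mixture H x yields the upper-triangular system R x:

```python
import numpy as np

# Sketch of the QR-based equalizer for the sum-rate special case: H = QR,
# and Q^† applied at the BS turns the received mixture H x into the
# upper-triangular system R x.  Channels and powers are illustrative.
rng = np.random.default_rng(0)
Nr, K = 4, 3
H = rng.standard_normal((Nr, K)) + 1j * rng.standard_normal((Nr, K))
Q, R = np.linalg.qr(H)           # reduced QR: Q is Nr x M, R is M x K upper triangular

x = rng.standard_normal(K)       # transmitted scalars, one per SU
y_eq = Q.conj().T @ (H @ x)      # equalized (noiseless) received signal

assert np.allclose(y_eq, R @ x)          # Q^† H = R
assert np.allclose(np.tril(R, -1), 0)    # R is upper triangular
```

Since a random complex Gaussian H has full column rank almost surely, M = K here; with rank-deficient channels the reduced factorization would be taken over the first M columns instead.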
3 Algorithm AWCR
The proposed algorithm for solving the weighted sum-rate problem in the cognitive radio network, denoted AWCR, is an iterative algorithm consisting of two layers of loops. Inside the inner loop, a generalized weighted waterfilling algorithm is proposed and used. Due to the special problem structure and the complexity of the weighted sum-rate problem, the proposed waterfilling is more general than regular weighted waterfilling; it is discussed in Section 3.1. For the outer loop of AWCR, a variable scale factor with parallel computation is applied to expedite convergence. This is presented in Section 3.2.
3.1 Generalized weighted waterfilling algorithm (GWWFA)
Being a fundamental block of the optimal resource allocation problem for CR-SIMO-MAC systems, the generalized waterfilling problem is abstracted as follows.
For a system with multiple receive antennas, it is given that P _{ t } > 0 is the total power, or volume of water; K is the total number of users; and the allocated power and the (nonnegative) propagation path gains for the i th user are given as S _{ i }, for i = 1, …, K, and {\left\{{a}_{ij}\right\}}_{j=i}^{K}, respectively. The generalized weighted waterfilling problem under consideration then reads
where the set {\left\{{\eta}_{i}\right\}}_{i=1}^{K} plays the role of the weight coefficients. Note that if \sum _{i=1}^{K}{g}_{i}{P}_{i}\le {P}_{t} holds, the problem degenerates into a trivial case. Hence, \sum _{i=1}^{K}{g}_{i}{P}_{i}>{P}_{t} is assumed.
Due to the specific CR SIMO MAC setup considered, as well as the inclusion of arbitrary weights {η _{ j }}, the problem structure (9) is novel. It is easy to see that if a _{ ij } = 0 for i ≠ j, and P _{ i } > 0, ∀i, then problem (9) reduces to the conventional weighted waterfilling problem. Further, if equal weights are chosen, it reduces to the conventional waterfilling problem, which can be solved by the conventional waterfilling algorithm [12].
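As a baseline, the reduced case just described (a_ij = 0 for i ≠ j) can be solved by conventional weighted waterfilling with per-user caps. The sketch below is our own bisection-on-the-water-level implementation of that reduced case, using the KKT form s_i(λ) = clip(η_i/(λ g_i) − 1/a_i, 0, P_i); it is illustrative and is not the paper's GWWFA.

```python
import numpy as np

def weighted_waterfilling(eta, a, g, P, P_t, tol=1e-10):
    """Reduced case of problem (9): a_ij = 0 for i != j, so the objective is
    sum_i eta_i * log(1 + a_i * s_i) subject to 0 <= s_i <= P_i and
    sum_i g_i * s_i <= P_t.  The KKT conditions give
    s_i(lam) = clip(eta_i / (lam * g_i) - 1 / a_i, 0, P_i);
    bisect on the water level lam until the gain-weighted sum meets P_t."""
    eta, a, g, P = (np.asarray(v, dtype=float) for v in (eta, a, g, P))

    def s_of(lam):
        return np.clip(eta / (lam * g) - 1.0 / a, 0.0, P)

    if np.dot(g, P) <= P_t:                       # budget slack: transmit at the caps
        return P.copy()
    lo, hi = 1e-12, float(np.max(eta * a / g)) + 1.0   # s_of(hi) = 0 elementwise
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.dot(g, s_of(lam)) > P_t:
            lo = lam                              # too much power used: raise the level
        else:
            hi = lam
    return s_of(0.5 * (lo + hi))
```

For two symmetric users (η = a = g = 1, large caps) and P_t = 2, the level settles at λ = 1/2 and each user gets s_i = 1, as the closed form predicts.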
To find the solution to the more complicated generalized problem above, the generalized weighted waterfilling algorithm (GWWFA) is presented as follows. Let
Utilize a permutation operation π on {λ _{ i }} such that
where P=\sum _{k=1}^{K}{P}_{k}. Define function J _{ i }(s _{ i }) as
It is easy to see that the function J _{ i }(s _{ i }) is strictly monotonically decreasing and continuous over the interval
The steps of the GWWFA can be described as below.
Algorithm GWWFA:

(1) Given ε > 0, initialize λ _{min} and λ _{max}.

(2) Set λ = (λ _{min} + λ _{max}) / 2.

(3) If λ falls in the interval [λ _{ π(i + 1)}, λ _{ π(i)}], where 1 ≤ i ≤ K, initialize the point \left[{s}_{\pi (1)}^{(0)},\dots ,{s}_{\pi (i)}^{(0)}\right] and compute

\begin{array}{l}\left[{s}_{\pi (1)}^{(n+1)},\dots ,{s}_{\pi (i)}^{(n+1)}\right]\\ =\left[{s}_{\pi (1)}^{(n)}-\frac{{J}_{\pi (1)}\left({s}_{\pi (1)}^{(n)}\right)-\lambda }{{J}_{\pi (1)}^{\prime }\left({s}_{\pi (1)}^{(n)}\right)},\;\dots \;,{s}_{\pi (i)}^{(n)}-\frac{{J}_{\pi (i)}\left({s}_{\pi (i)}^{(n)}\right)-\lambda }{{J}_{\pi (i)}^{\prime }\left({s}_{\pi (i)}^{(n)}\right)}\right].\end{array} (14)

Then increase the iteration index from n to n + 1. Repeat the procedure in (14) until the point \left[{s}_{\pi (1)}^{(n)},\dots ,{s}_{\pi (i)}^{(n)}\right] converges. Denote \underset{n}{lim}\left[{s}_{\pi (1)}^{(n)},\dots ,{s}_{\pi (i)}^{(n)}\right] by \left[{s}_{\pi (1)}^{\ast},\dots ,{s}_{\pi (i)}^{\ast}\right]. Let \left[{s}_{\pi (i+1)}^{\ast},\dots ,{s}_{\pi (K)}^{\ast}\right]=0\in {\mathbb{R}}^{1\times (K-i)}.

(4) If \sum _{k=1}^{K}{g}_{k}{s}_{k}^{\ast}-{P}_{t}>0, then λ _{min} is assigned λ; if \sum _{k=1}^{K}{g}_{k}{s}_{k}^{\ast}-{P}_{t}<0, then λ _{max} is assigned λ; if \sum _{k=1}^{K}{g}_{k}{s}_{k}^{\ast}-{P}_{t}=0, stop.

(5) If λ _{max} − λ _{min} ≤ ε, stop. Otherwise, go to step (2).
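The loop structure of the GWWFA (outer bisection on the water level λ, inner Newton iterations on J_k(s_k) = λ as in (14)) can be sketched as follows. Since the concrete J_i of (12) is not reproduced here, the sketch substitutes the decoupled-case choice J_k(s) = η_k a_k / (g_k (1 + a_k s)) and enforces the peak constraints by simple clipping; both are simplifying assumptions of ours, and any strictly decreasing, differentiable J_k plugs into the same skeleton.

```python
import numpy as np

def gwwfa(eta, a, g, P, P_t, eps=1e-10, newton_iters=50):
    """Sketch of the GWWFA skeleton: bisection on lambda, with Newton's
    method (as in (14)) solving J_k(s_k) = lambda for each active user.
    J_k here is the decoupled-case stand-in, not the paper's (12)."""
    eta, a, g, P = (np.asarray(v, dtype=float) for v in (eta, a, g, P))
    K = len(P)

    def J(k, s):   # strictly decreasing, continuous in s
        return eta[k] * a[k] / (g[k] * (1.0 + a[k] * s))

    def dJ(k, s):  # its derivative, used by the Newton update
        return -eta[k] * a[k] ** 2 / (g[k] * (1.0 + a[k] * s) ** 2)

    lam0 = np.array([J(k, 0.0) for k in range(K)])   # lambda_k at s_k = 0
    if np.dot(g, P) <= P_t:                          # budget slack: trivial case
        return P.copy()
    lo, hi = 0.0, float(lam0.max())                  # bisection bracket for lambda
    s = np.zeros(K)
    while hi - lo > eps:
        lam = 0.5 * (lo + hi)
        s = np.zeros(K)
        for k in range(K):                           # could run in parallel over k
            if lam >= lam0[k]:
                continue                             # inactive user: s_k = 0
            x = 0.0
            for _ in range(newton_iters):            # Newton: x -= (J - lam) / J'
                x -= (J(k, x) - lam) / dJ(k, x)
            s[k] = min(x, P[k])                      # clip to the peak constraint
        if np.dot(g, s) - P_t > 0:
            lo = lam                                 # too much power: raise the level
        else:
            hi = lam
    return s
```

On the symmetric two-user example (η = a = g = 1, large caps, P_t = 2), the loop converges to λ = 1/2 with s = (1, 1), matching closed-form weighted waterfilling; with unequal weights η = (2, 1) it settles at λ = 3/4 with s = (5/3, 1/3).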
Remarks 3.1
Note, in (1) of the GWWFA, that the initial λ _{min} may be chosen as λ _{ π(K+1)}, and λ _{max} may be chosen as λ _{ π(1)}.
In (3), for the initialization of {s}_{\pi (k)}^{(0)}, we may first choose an interval, such as [0, P _{ π(k)}], and use the secant method or the bisection method [24] over the interval to compute, in parallel, an approximate solution to the equation J _{ π(k)}(s _{ π(k)}) − λ = 0, ∀k. Hence, within only a few loops (at most ⌈log_{2} P _{ π(k)}⌉ + 1 loops), e _{0}, the absolute error between the exact solution and the approximate solution obtained by the method above, becomes less than 0.5. The initialization of {s}_{\pi (k)}^{(0)}, for k = 1, …, i, is then assigned this approximate solution. Let {\left({e}_{n}\right)}_{k}={s}_{\pi (k)}^{\ast}-{s}_{\pi (k)}^{(n)}, where J\left({s}_{\pi (k)}^{\ast}\right)-\lambda =0. It is seen that
where 0 < ρ _{ n } < 1. It can be observed that 0\le {\left({e}_{m}\right)}_{k}<{\left({e}_{0}\right)}_{k}^{{2}^{m}}, and then {\left\{{s}_{\pi (k)}^{(m)}\right\}}_{m=1}^{\infty} converges uniformly, with 0 ≤ (e _{6})_{ k } < 10^{−19} (machine zero), ∀k. That is to say, the absolute error between the approximate solution and the exact solution reaches machine zero within 6 loops. Thus, the optimal solution ({s}_{\pi (1)}^{\ast},\dots ,{s}_{\pi (i)}^{\ast}) can be obtained in parallel within finitely many loops.
Denote a function
Then define G(λ) as
Since J _{ π(k)}(s _{ π(k)}) is strictly monotonically decreasing and continuous over its interval, so are {J}_{\pi (k)}^{-1}\left(\lambda \right) and G(λ) over the corresponding interval(s). Since G(λ _{ π(K+1)}) > P _{ t } and G(λ _{ π(1)}) < P _{ t }, step (4) makes λ converge such that G(λ) = P _{ t }. Optimality of the GWWFA is stated by the following proposition.
Proposition 3.1
For (9), its optimal solution can be obtained by the GWWFA.
Proof of Proposition 3.1. From the third item of step (4) in the GWWFA and step (5), G(λ) = P _{ t }. Then
Since there exists i _{0} (1 ≤ i _{0} ≤ K) such that \lambda \in [{\lambda}_{\pi ({i}_{0}+1)},{\lambda}_{\pi \left({i}_{0}\right)}], we have {\underline{\mu}}_{\pi \left(j\right)}=0 and {\overline{\mu}}_{\pi \left(j\right)}={\lambda}_{\pi \left(j\right)}-\lambda \ge 0 for j = 1, …, i _{0}, and {\underline{\mu}}_{\pi \left(j\right)}=\lambda -{\lambda}_{\pi \left(j\right)}\ge 0 and {\overline{\mu}}_{\pi \left(j\right)}=0 for j = i _{0} + 1, …, K. Therefore, there exists the solution
and the Lagrange multipliers λ, \left\{{\underline{\mu}}_{\pi \left(k\right)}\right\} and \left\{{\overline{\mu}}_{\pi \left(k\right)}\right\} mentioned above such that the KKT condition of the problem (9) holds, where the λ corresponds to the constraint \sum _{k=1}^{K}{g}_{k}{s}_{k}\le {P}_{t}, and \left\{{\underline{\mu}}_{\pi \left(k\right)}\right\} and \left\{{\overline{\mu}}_{\pi \left(k\right)}\right\} correspond to the constraints {s _{ π(k)} ≥ 0} and {s _{ π(k)} ≤ P _{ π(k)}}, respectively.
Since the problem in Proposition 3.1 is a differentiable convex optimization problem with linear constraints, the KKT condition mentioned above is not only sufficient but also necessary for optimality. Note that the constraint qualification (CQ) of the optimization problem (9) is easily seen to hold. Hence, Proposition 3.1 is proved.
Remarks 3.2
To decouple the variables in the objective function of problem (3), a sum expression is obtained by adding the objective function, just mentioned, K times. The sum expression is then operated on, with one variable selected as the optimized variable while the others are held fixed. Thus, from expression (3), the problem (20) is implied as follows:
Since
where {\overline{S}}_{k},\forall k, is fixed and
the optimization problem
is equivalent to the problem below:
If the CR SIMO weighted case is generalized to the CR MIMO weighted case, it remains an open question whether a fast waterfilling solution, like the algorithm above, exists.
3.2 Algorithm AWCR and its implementation
The proposed Algorithm AWCR, which addresses the combined problem of the MIMO MAC and the CR network, is listed below.
Algorithm AWCR:
Input: vectors {\mathbf{h}}_{i}, {S}_{i}^{(0)}=0, i = 1, …, K; n = 1.

(1) Generate the effective channels

{\mathbf{G}}_{ij}^{(n)}={\mathbf{h}}_{i}{\left(\mathbf{I}+\sum _{l\in \{1,\dots ,i\}\setminus \left\{j\right\}}{\mathbf{h}}_{l}^{\dagger}{\mathbf{h}}_{l}{S}_{l}^{(n-1)}\right)}^{-\frac{1}{2}}, for i = 1, …, K,

where the parenthesized superscript (n) denotes the iteration number.

(2) Treating these effective channels as parallel, non-interfering channels, the new covariances {\left\{{\stackrel{~}{S}}_{i}^{(n)}\right\}}_{i=1}^{K} are generated by the GWWFA under the sum power constraint P _{ t }. That is, {\left\{{\stackrel{~}{S}}_{i}^{(n)}\right\}}_{i=1}^{K} is the optimal solution to (24):

\underset{{\left\{{S}_{i}\right\}}_{i=1}^{K}:\;0\le {S}_{i}\le {P}_{i},\;\sum _{i=1}^{K}{g}_{i}{S}_{i}\le {P}_{t}}{\text{max}}\;\sum _{i=1}^{K}{\eta}_{i}\sum _{j=1}^{i}log\left(1+{\mathbf{G}}_{ij}^{(n)}{\left({\mathbf{G}}_{ij}^{(n)}\right)}^{\dagger}{S}_{j}\right). (24)

Note that (24) is similar to (20); only {S}_{i}^{(n-1)} and {\mathbf{G}}_{ij}^{(n)} in the former take the place of {\overline{S}}_{i} and G _{ il } in the latter, respectively, for any i, j, l.

(3) Update step: Let γ ^{(n)} and p ^{(n−1)} denote the newly obtained covariance set and the immediately preceding covariance set, respectively:

{\gamma}^{(n)}\stackrel{\triangle}{=}\left({\stackrel{~}{S}}_{1}^{(n)},{\stackrel{~}{S}}_{2}^{(n)},\dots ,{\stackrel{~}{S}}_{K}^{(n)}\right) and {p}^{(n-1)}\stackrel{\triangle}{=}\left({S}_{1}^{(n-1)},{S}_{2}^{(n-1)},\dots ,{S}_{K}^{(n-1)}\right).

Let

{\beta}^{\ast}=max\left\{{\beta}_{1}\;\middle|\;{\beta}_{1}\in arg\underset{\beta \in \left[1/K,1\right]}{\text{max}}\;f\left(\beta {\gamma}^{(n)}+\left(1-\beta \right){p}^{(n-1)}\right)\right\}, (25)

as the innovation, where the function f has been defined in (3). Then the covariance update step is

{p}^{(n)}=\left({S}_{1}^{(n)},{S}_{2}^{(n)},\dots ,{S}_{K}^{(n)}\right)={\beta}^{\ast}{\gamma}^{(n)}+\left(1-{\beta}^{\ast}\right){p}^{(n-1)}. (26)

The updated covariance is a convex combination of the newly obtained covariance and the immediately preceding covariance.

(4) Increase the iteration index from n to n + 1. Go to (1) until convergence.
Note that the new algorithm employs variable weighting factors, which are obtained to maximize the objective function and to update the covariance.
The optimality of {\left\{{\stackrel{~}{S}}_{i}^{(n)}\right\}}_{i=1}^{K}, i.e., that it is the solution to (20), has been established in this section by Proposition 3.1.
Remarks 3.3
Since the objective function f(β γ ^{(n)} + (1 − β)p ^{(n − 1)}) in step (3) of Algorithm AWCR is concave in the scalar variable β, the maximum of the corresponding optimization problem can be computed with a finite number of search steps and even fewer evaluations of the objective function. Without loss of generality, the objective function in step (3) is evaluated at the four points \beta \in \left\{\frac{1}{K},\;\frac{1}{K}+\frac{1}{3}\left(1-\frac{1}{K}\right),\;\frac{1}{K}+\frac{2}{3}\left(1-\frac{1}{K}\right),\;1\right\} by parallel computation to determine β ^{∗}. That is, this parallel operation can be distributed to and carried out by multiple processors (for example, four processors) at the base station, in order to expedite convergence of the proposed algorithm. Finally, the obtained satisfactory solution is distributed, or returned, to the corresponding secondary users.
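The four-point parallel search of step (3) can be sketched as follows; this is our own illustrative implementation, where the callable f stands for the weighted sum-rate of (3) evaluated at a power vector.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def best_beta(f, gamma_n, p_prev, K):
    """Step (3) of AWCR, sketched: f(beta*gamma + (1-beta)*p_prev) is concave
    in the scalar beta, so it is evaluated at four evenly spaced candidates
    in [1/K, 1] (one per worker, mimicking the four-processor setup) and the
    largest maximizer is kept, matching the outer max in (25)."""
    lo = 1.0 / K
    betas = [lo + t * (1.0 - lo) / 3.0 for t in (0, 1, 2, 3)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        vals = list(pool.map(lambda b: f(b * gamma_n + (1 - b) * p_prev), betas))
    best = max(vals)
    beta_star = max(b for b, v in zip(betas, vals) if v == best)  # break ties upward
    return beta_star, beta_star * gamma_n + (1 - beta_star) * p_prev
```

For a toy concave objective f(s) = −‖s − 1‖², with K = 2, γ = (2, 2) and p = (0, 0), the candidate β = 1/2 hits the unconstrained maximizer exactly, so the update lands on (1, 1).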
4 Convergence of Algorithm AWCR
There are two methods by either of which convergence of the proposed algorithm can be proved. The first is to use convergence of Algorithm AWCR with {\beta}^{\ast}=\frac{1}{4} (refer to [25]) and the innovation, as a spacer step, via Zangwill's convergence theorem B ([26], p. 128). However, we would then still need to prove that Algorithm AWCR with {\beta}^{\ast}=\frac{1}{4}, as a basic mapping, satisfies the closedness condition of Zangwill's convergence theorem B. This point requires much explanation and an abstract proof. As an alternative, the second method, which is more intuitive than the first, is used. The fixed-point approach proposed in this article could also be generalized to solve other problems.
In this section, utilizing results from Section 3.1, convergence of Algorithm AWCR is rigorously proved under a weaker assumption. Note that, because the power constraint is coupled between the optimization stages of (20) with the weight coefficients in the objective function, while it is decoupled between the optimization stages of the MIMO-MAC case without the weight coefficients, the usage of the waterfilling principle in the former differs from that in the latter.
4.1 Convergence proof of the proposed algorithm
In this article, as a more general model, we eliminate the assumption in [9] that the optimal solution is unique when proving convergence of the proposed algorithm. To the best knowledge of the authors, this is one of the proposed novelties for convergence of this class of algorithms with the spacer step ([26], p. 125). Since our convergence proof is based on more general functions, including an objective function and a few constraint functions, it also enriches optimization theory and methods. First, two concepts are introduced: the first is that of the image of a mapping (or algorithm) that projects a point to a set; the second is that of a fixed point under the mapping (algorithm). Then two lemmas are proposed, followed by the convergence proof of the proposed algorithm.
Definition 4.1
(Image under mapping or Algorithm A) (see e.g., [26], p. 84). Assume that X and Y are two sets. Let A be a mapping or an algorithm from X to Y, which projects from a point in X to a set of points in Y. If the point in X is denoted by x and the set of the points in Y is denoted by A(x), then A(x) is called the image of x under A.
Definition 4.2
(Fixed point under a mapping or Algorithm A). Let A be a mapping or an algorithm from X to Y. Assume x ∈ X. If x ∈ A(x), then x is said to be a fixed point under A.
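As a toy illustration of these two definitions (a hypothetical mapping, not Algorithm AWCR): the mapping below sends every point of [0, 2] to the set of maximizers of a concave function over that interval, and a fixed point is a point contained in its own image.

```python
# Toy illustration of Definitions 4.1 and 4.2 (hypothetical mapping, not the
# paper's AWCR): A maps a point x to the SET of maximizers of a concave
# function over the feasible interval [0, 2].

def A(x):
    # f(s) = -(s - 1)**2 has the unique maximizer s = 1 on [0, 2], so the
    # image of every x is the singleton {1.0}.
    return {1.0}

assert 0.5 not in A(0.5)   # 0.5 is not a fixed point under A
assert 1.0 in A(1.0)       # 1.0 is a fixed point under A
```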
Note that (20) can be changed into a general form:
since the condition that the optimal solution is unique has been removed. Correspondingly, step (2) of Algorithm AWCR is carried out as follows: given a feasible point {\left\{{S}_{i}^{\left(n\right)}\right\}}_{i=1}^{K}, its image under step (2) of Algorithm AWCR is a set of points, and a point in this set is chosen arbitrarily as the next point {\left\{{S}_{i}^{\left(n+1\right)}\right\}}_{i=1}^{K} generated by Algorithm AWCR. Thus, Algorithm AWCR generates a point sequence under this change. In the following, we still call this algorithm Algorithm AWCR despite the changes. The feasible set is denoted by V _{ d }.
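The modified steps (2) and (3) can be sketched schematically; `argmax_step` below is a hypothetical scalar stand-in for the per-user maximization (it may return any point of the image set), and the convex-combination update plays the role of step (3) with a factor β:

```python
# Schematic of the two-step update described above (hypothetical stand-in,
# not the paper's actual subproblem).

def awcr_like(argmax_step, beta, S0, n_iter=60):
    S = tuple(S0)
    for _ in range(n_iter):
        S_tilde = argmax_step(S)  # step (2): pick any point from the image set
        # step (3): S^(n+1) = beta * S~ + (1 - beta) * S^(n)
        S = tuple(beta * t + (1 - beta) * s for t, s in zip(S_tilde, S))
    return S

# Toy surrogate whose per-user maximizer is always (1.0, 2.0); the iterates
# then converge to that point, which is a fixed point of the update.
S = awcr_like(lambda S: (1.0, 2.0), beta=0.25, S0=(0.0, 0.0))
```

With β = 0.25 the distance to the fixed point shrinks by a factor 0.75 per iteration, so after 60 iterations the iterate is numerically at (1.0, 2.0).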
For any convergent subsequence generated by Algorithm AWCR, with limit denoted by ({\overline{S}}_{1},\dots ,{\overline{S}}_{K}), the following lemma shows that the limit is a fixed point under Algorithm AWCR, when Algorithm AWCR is regarded as a mapping.
Lemma 1
A point is the limit of a convergent subsequence of the point sequence generated by Algorithm AWCR if and only if this point is a fixed point under Algorithm AWCR.
Proof
See Appendix 1. □
Lemma 2
({\overline{S}}_{1},\dots ,{\overline{S}}_{K})\in {V}_{d} is a fixed point under Algorithm AWCR if and only if ({\overline{S}}_{1},\dots ,{\overline{S}}_{K})\in {V}_{d} is one of the optimal solutions to the problem in (3).
Proof
See Appendix 2. □
Based on the lemmas above, we conclude that Algorithm AWCR is convergent. Step (3) of Algorithm AWCR is then regarded as a computation of a point: Algorithm AWCR generates a point sequence, and every point of the sequence consists of K nonnegative numbers, e.g., \left({S}_{1}^{\left(n\right)},\dots ,{S}_{K}^{\left(n\right)}\right). The details are given below.
Theorem 4.1
Algorithm AWCR is convergent. At the same time, the sequence of objective values, obtained by evaluating the objective function at the point sequence, monotonically increases to the optimal objective value.
Proof
Due to compactness of the set of feasible solutions for the problem in (3), the point sequence generated by Algorithm AWCR contains a convergent subsequence. By Lemma 1, every convergent subsequence converges to a fixed point under Algorithm AWCR; by Lemma 2, it therefore converges to one of the optimal solutions to the problem in (3).
Conversely, by the necessary and sufficient conditions of Lemmas 1 and 2, for any optimal solution to the problem in (3) there is a point sequence generated by Algorithm AWCR that converges to that optimal solution.
With Algorithm AWCR generating the point sequence, the definition of Algorithm AWCR and (30) in Appendix 1 imply that the sequence of objective values, obtained by evaluating the objective function at the point sequence, monotonically increases to the optimal objective value. This follows from (30) and from the fact that every convergent subsequence of the point sequence converges to one of the optimal solutions to the optimization problem in (3).
Therefore, Algorithm AWCR is convergent.
To reduce the computational cost, the optimizations in (20) and (25) in Section 3 may use the Fibonacci search. To improve the performance of the algorithm and further reduce the computational cost, the objective function in step (3) of the AWCR can be evaluated at the four points mentioned in Remark 3.3, by parallel computation, to find the estimate of β ^{∗} in (25). □
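As an illustration of the Fibonacci search mentioned above, here is a minimal pure-Python sketch; the toy concave objective stands in for the line search over the factor in (25) and is not the paper's actual objective:

```python
def fibonacci_search(g, a, b, n=25):
    """Locate the maximizer of a unimodal g on [a, b] using n evaluations,
    shrinking the interval by Fibonacci ratios."""
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    f = lambda x: -g(x)                      # minimize -g to maximize g
    x1 = a + (F[n - 2] / F[n]) * (b - a)
    x2 = a + (F[n - 1] / F[n]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 1):
        if f1 > f2:                          # minimum of -g lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (F[n - k - 1] / F[n - k]) * (b - a)
            f2 = f(x2)
        else:                                # minimum of -g lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (F[n - k - 2] / F[n - k]) * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

# Toy concave objective with maximizer 0.25 on [0, 1]:
beta_est = fibonacci_search(lambda b: -(b - 0.25) ** 2, 0.0, 1.0)
```

With n = 25 evaluations the bracketing interval shrinks to roughly (b − a)/F₂₅ ≈ 10⁻⁵, far below the 10⁻³ tolerance used later in the numerical section.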
5 Numerical results and complexity analysis
In this section, numerical examples are provided to illustrate the effectiveness of the proposed algorithm. For comparison purposes, a regular feasible direction method utilizing the gradient [27] is chosen, denoted Algorithm AFD. Note that, as a benchmark and a feasible direction method, Algorithm AFD also generates a sequence of feasible points (i.e., it is a feasible point algorithm). It is easy to set up a stopping criterion for a feasible point algorithm, especially for a monotonic one like the proposed algorithm. Since the feasible set is a convex polygon, the recently developed AFD algorithm is used as a reference. We did not select [18] for comparison since the primal-dual algorithm used in [18] is not a feasible point method; in addition, both the assumptions on the constraints and the system model differ.
Figures 2 and 3 show the evolution of the weighted sumrate values versus the number of iterations for AWCR and AFD for several choices of the number of users (K). In the calculation, the number of antennas at the base station (m) is set to 4. Channel gain vectors are generated as random m × 1 vectors with each entry drawn independently from the standard Gaussian distribution. {P_k} is a set of randomly chosen positive numbers. The sum power constraint is P _{ t } = 10 dB. A group of different weights is also generated randomly. In these figures, the cross markers and the diamond markers represent the results of the proposed Algorithm AWCR and of AFD, respectively. These results show that the proposed Algorithm AWCR exhibits a much faster convergence rate, especially as the number of users increases.
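The simulation setup described above can be reproduced schematically as follows (illustrative values; `random.gauss` draws the i.i.d. standard Gaussian entries):

```python
import random

# Mirror of the simulation setup above: m = 4 receive antennas, K users,
# channel vectors with i.i.d. standard Gaussian entries (illustrative seed).
random.seed(0)
m, K = 4, 10
channels = [[random.gauss(0.0, 1.0) for _ in range(m)] for _ in range(K)]
P_t_dB = 10.0                    # sum power constraint in dB
P_t = 10.0 ** (P_t_dB / 10.0)    # same constraint on a linear scale
```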
Let f ^{∗} be the maximum sumrate, f ^{(n)} the sumrate at the n-th iteration, and f ^{(n)} − f ^{∗} the error in the sumrate. Figures 4 and 5 show the corresponding error in the sumrate versus the number of iterations. Note that the maximum sumrate f ^{∗} can be determined using the fixed-point theory of Lemma 2. As shown in these figures, the algorithms converge linearly. The proposed algorithm exhibits a much steeper slope in the sumrate error curve, which translates to a faster convergence rate.
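The linear-convergence slope read off such error plots can be estimated from the error sequence directly; the sequence below is contrived for illustration (error ratio 0.1 per iteration), and a steeper, i.e., more negative, slope of log₁₀(error) means faster convergence:

```python
import math

# Contrived error sequence e_n = f* - f^(n) decaying geometrically:
errors = [0.5 * (0.1 ** n) for n in range(1, 7)]

logs = [math.log10(e) for e in errors]
slopes = [logs[i + 1] - logs[i] for i in range(len(logs) - 1)]
avg_slope = sum(slopes) / len(slopes)   # slope of log10(error) per iteration
```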
We can further observe that the convergence rate of the proposed algorithm is not sensitive to the increase in the number of users. For clarity, we define
{N}_{\mathrm{AWCR}}=\min \left\{j:{f}^{\ast}-{f}^{\left(j\right)}\le \epsilon \right\},
where the points {(j, f ^{(j)})} are generated by the AWCR and ϵ = 10^{−3} without loss of generality; N _{AFD} is defined similarly with the points generated by the AFD. Each of these numbers can be regarded as the required number of iterations for the corresponding algorithm. We simulate different choices of K and list the corresponding N _{AWCR} and N _{AFD} in Table 1. We observe that, in the simulated range, the required number of iterations for convergence using the proposed algorithm is about 2, whereas for the AFD it is much larger.
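Under this reading of the iteration counters, they can be computed from a recorded sumrate sequence; `required_iterations` is a hypothetical helper name:

```python
def required_iterations(f_star, f_seq, eps=1e-3):
    """Smallest index j with f_star - f_seq[j] <= eps (our reading of the
    N_AWCR / N_AFD counters; f_seq[j] is the sumrate at iteration j)."""
    for j, fj in enumerate(f_seq):
        if f_star - fj <= eps:
            return j
    return None  # not converged within the recorded iterations

# Contrived monotone sumrate sequence with maximum 10.0:
N = required_iterations(10.0, [6.0, 9.5, 9.9995, 10.0])
```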
Since the AFD and the proposed algorithm use the same matrix inverse operations, which constitute the most significant part of the computation, to compute the gradient of the objective values, both algorithms have similar computational complexity O(m ^{3}) per iteration (refer to [28]). For an m × m square matrix, the inverse needs m(m ^{2} − 1) + m(m − 1)^{2}, i.e., O(m ^{3}), arithmetic operations, and the determinant needs \frac{2}{3}{m}^{3}+m, i.e., O(m ^{3}), operations (the Cholesky decomposition is used for efficiency and for our purposes). Since these operations are performed a finite number of times, the per-iteration computational complexity of both AFD and AWCR is O(m ^{3}).
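As a sketch of the Cholesky-based evaluation mentioned above (a minimal pure-Python implementation; the triple loop is the O(m³) cost, and the determinant-type terms of the objective follow from the factorization at no extra asymptotic cost):

```python
import math

def cholesky(A):
    """Lower-triangular L with A = L L^T for a symmetric positive-definite A;
    the factorization costs O(m^3) arithmetic operations."""
    m = len(A)
    L = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def logdet(A):
    # log det A = 2 * sum_i log L_ii, obtained directly from the factor.
    L = cholesky(A)
    return 2.0 * sum(math.log(L[i][i]) for i in range(len(A)))
```

For example, `logdet([[4.0, 2.0], [2.0, 3.0]])` equals log 8, since that matrix has determinant 8.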
Also, for convenient verification of the algorithms, deterministic instances are chosen: {\eta}_{k}=\frac{k}{\sum _{k=1}^{K}k},\forall k, P _{ t } = 10 dB and P _{ i } = 9 dB, ∀i, with the channel gains fixed to the randomly generated values
and
for K = 4 and K = 5, respectively. Let the normalized covariance C _{0} be the identity matrix. The calculated weighted sumrate is plotted as a function of the iterations in Figure 6. It shows that N _{AWCR} = 1, remaining at the same smallest value in both cases, while N _{AFD} = 10 and 12 for K = 4 and K = 5, respectively.
6 Conclusion
The proposed algorithm AWCR, a member of the class of iterative waterfilling algorithms, solves the weighted sumrate maximization problem for the MIMO-MAC in a CR network. By exploiting a variable weighting factor for the covariance update, together with the machinery of distributed and parallel computation, the proposed AWCR algorithm greatly speeds up the convergence of the weighted sumrate maximization computation. The required number of iterations for convergence is insensitive to the increase in the number of users. Furthermore, a novel GWWFA is proposed as a fundamental block of the proposed algorithm.
Convergence of the proposed algorithm is rigorously proved via the designed fixed-point theory. Lemma 2 presents an equivalent optimality condition: a point is one of the optimal solutions to the maximum weighted sumrate problem for the SIMO-MAC in the CR network if and only if the point is a fixed point of the AWCR. In the derivation, for more general problems, the assumption used in [9] that the optimal solution is unique can be eliminated. Numerical examples demonstrate the effectiveness of the proposed algorithm. In the simulated range, the required number of iterations for convergence is fixed at two, a significant reduction compared with conventional algorithms.
Appendix 1
Proof of Lemma 1
Note that in the following proof, we use n to denote the iteration index for convenience.
The necessity is proved first. For the limit ({\overline{S}}_{1},\dots ,{\overline{S}}_{K}) of any convergent subsequence, there is a convergent subsequence {\left\{({S}_{1}^{\left({n}_{k}\right)},\dots ,{S}_{K}^{\left({n}_{k}\right)})\right\}}_{k=0}^{\infty}(\subset {\left\{({S}_{1}^{\left(n\right)},\dots ,{S}_{K}^{\left(n\right)})\right\}}_{n=0}^{\infty}) such that
is the point sequence generated by Algorithm AWCR.
Assume ({\stackrel{~}{S}}_{1}^{({n}_{k}+1)},\dots ,{\stackrel{~}{S}}_{K}^{({n}_{k}+1)})\in arg\underset{({S}_{1},\dots ,{S}_{K})\in {V}_{d}}{max} \sum _{i=1}^{K}f({S}_{1}^{\left({n}_{k}\right)},\dots ,{S}_{i-1}^{\left({n}_{k}\right)},{S}_{i},{S}_{i+1}^{\left({n}_{k}\right)},\dots ,{S}_{K}^{\left({n}_{k}\right)}) from the definition of Algorithm AWCR. The definition of Algorithm AWCR implies that
for any n and (S _{1}, …, S _{ K }) ∈ V _{ d }. Replacing n with n _{ k }, we obtain:
We have the following relationships:
Among the relationships mentioned above, the first inequality and the first equality hold due to step (3) of Algorithm AWCR; the second inequality results from the function f being concave; the third inequality and the second equality are true because of step (2) of Algorithm AWCR, i.e., the definition of ({\stackrel{~}{S}}_{1}^{(n+1)},\dots ,{\stackrel{~}{S}}_{K}^{(n+1)}).
Thus, f({S}_{1}^{\left(n\right)},\dots ,{S}_{K}^{\left(n\right)}) is monotonically increasing with respect to n, and
From (30), we obtain: \sum _{i=1}^{K}f({S}_{1}^{\left({n}_{k}\right)},\dots ,{S}_{i-1}^{\left({n}_{k}\right)},{\stackrel{~}{S}}_{i}^{({n}_{k}+1)},{S}_{i+1}^{\left({n}_{k}\right)},\dots ,{S}_{K}^{\left({n}_{k}\right)})\le K f({S}_{1}^{({n}_{k}+1)},\dots ,{S}_{K}^{({n}_{k}+1)}). From (29), we obtain:
Hence, it is true that K f({S}_{1}^{({n}_{k+1})},\dots ,{S}_{K}^{({n}_{k+1})})\ge \sum _{i=1}^{K}f({S}_{1}^{\left({n}_{k}\right)},\dots ,{S}_{i-1}^{\left({n}_{k}\right)},{S}_{i},{S}_{i+1}^{\left({n}_{k}\right)},\dots ,{S}_{K}^{\left({n}_{k}\right)}). Letting k approach infinity, we obtain that
where ∀(S _{1}, …, S _{ K }) ∈ V _{ d }. Thus, ({\overline{S}}_{1},\dots ,{\overline{S}}_{K})\in arg\underset{({S}_{1},\dots ,{S}_{K})\in {V}_{d}}{max}\sum _{i=1}^{K}f({\overline{S}}_{1},\dots ,{\overline{S}}_{i1},{S}_{i},{\overline{S}}_{i+1},\dots ,{\overline{S}}_{K}).
Note that the set arg\underset{({S}_{1},\dots ,{S}_{K})\in {V}_{d}}{max}\sum _{i=1}^{K}f({\overline{S}}_{1},\dots ,{\overline{S}}_{i-1},{S}_{i},{\overline{S}}_{i+1},\dots ,{\overline{S}}_{K}) need not be a single-point set. However, we may choose ({\overline{S}}_{1},\dots ,{\overline{S}}_{K}) as one of the optimal solutions to the problem \underset{({S}_{1},\dots ,{S}_{K})\in {V}_{d}}{max}\sum _{i=1}^{K}f({\overline{S}}_{1},\dots ,{\overline{S}}_{i-1},{S}_{i},{\overline{S}}_{i+1},\dots ,{\overline{S}}_{K}). This corresponds to step (2) of Algorithm AWCR. Further, ({\overline{S}}_{1},\dots ,{\overline{S}}_{K})={\beta}^{\ast}({\overline{S}}_{1},\dots ,{\overline{S}}_{K})+(1-{\beta}^{\ast})({\overline{S}}_{1},\dots ,{\overline{S}}_{K}), based on this choice of the optimal solution. This corresponds to step (3) of Algorithm AWCR.
Therefore, resulting from the two correspondences mentioned above and the definition of Algorithm AWCR, it is true that ({\overline{S}}_{1},\dots ,{\overline{S}}_{K}) is a fixed point under Algorithm AWCR, which is viewed as a mapping.
The sufficiency will be proved as follows:
If ({\overline{S}}_{1},\dots ,{\overline{S}}_{K}) is a fixed point under Algorithm AWCR, then taking ({S}_{1}^{\left(0\right)},\dots ,{S}_{K}^{\left(0\right)})=({\overline{S}}_{1},\dots ,{\overline{S}}_{K}) gives ({S}_{1}^{\left(1\right)},\dots ,{S}_{K}^{\left(1\right)})=({\overline{S}}_{1},\dots ,{\overline{S}}_{K}), since ({\overline{S}}_{1},\dots ,{\overline{S}}_{K}) is a fixed point under Algorithm AWCR. If it is assumed that ({S}_{1}^{\left(n\right)},\dots ,{S}_{K}^{\left(n\right)})=({\overline{S}}_{1},\dots ,{\overline{S}}_{K}), then ({S}_{1}^{(n+1)},\dots ,{S}_{K}^{(n+1)})=({\overline{S}}_{1},\dots ,{\overline{S}}_{K}) for the same reason. By the principle of mathematical induction, ({S}_{1}^{\left(n\right)},\dots ,{S}_{K}^{\left(n\right)})=({\overline{S}}_{1},\dots ,{\overline{S}}_{K})\in {V}_{d},\forall n. Furthermore, \underset{n\to \infty}{lim}({S}_{1}^{\left(n\right)},\dots ,{S}_{K}^{\left(n\right)})=({\overline{S}}_{1},\dots ,{\overline{S}}_{K})\in {V}_{d}. Therefore, the sufficiency holds.
Note that in the proving process above, we do not have the following assumption:
Appendix 2
Proof of Lemma 2
The necessity is proved first.
According to the definition of Algorithm AWCR, it is easily seen, for the fixed point ({\overline{S}}_{1},\dots ,{\overline{S}}_{K})\in {V}_{d}, that
Since (31) is a convex optimization problem with a concave objective function, and noting the optimality condition of convex optimization problems (refer to [29], Proposition 3.1), which is necessary and sufficient for (31), formula (31) implies that
where, ∀(S _{1}, S _{2}, …, S _{ K }) ∈ V _{ d }, the row vector {f}_{{S}_{i}} denotes the transpose of the gradient of f with respect to the variable S _{ i }.
It is seen that formula (32) is exactly the optimality condition of the optimization problem (3). Therefore, the fixed point ({\overline{S}}_{1},\dots ,{\overline{S}}_{K})\in {V}_{d} is one of the optimal solutions to the problem in (3).
The sufficiency will be proved as follows:
Among the relationships above, the first equality holds because (S _{1}, …, S _{ K }) ∈ V _{ d }; the first inequality holds because the function f is concave and the feasible set V _{ d } is convex; and the second inequality holds since ({\overline{S}}_{1},\dots ,{\overline{S}}_{K})\in {V}_{d} is an optimal solution to the problem in (3).
Hence, \sum _{i=1}^{K}f({\overline{S}}_{1},\dots ,{\overline{S}}_{i-1},{S}_{i},{\overline{S}}_{i+1},\dots ,{\overline{S}}_{K})\le \sum _{i=1}^{K}f({\overline{S}}_{1},\dots ,{\overline{S}}_{K}),\forall ({S}_{1},\dots ,{S}_{K})\in {V}_{d}.
According to definition of the optimal solution to (20) mentioned above,
According to steps (2) and (3) of Algorithm AWCR, ({\overline{S}}_{1},\dots ,{\overline{S}}_{K})\in {V}_{d} is a fixed point under Algorithm AWCR.
References
Jiang H, Lai L, Fan R, Poor HV: Optimal selection of channel sensing order in cognitive radio. IEEE Trans. Wirel. Commun. 2009, 8: 297–307.
Prasad RV, Pawelczak P, Hoffmeyer JA, Berger HS: Cognitive functionality in next generation wireless networks: standardization efforts. IEEE Commun. Mag. 2008, 46: 72–78.
Mitola J, Maguire GQ: Cognitive radios: making software radios more personal. IEEE Personal Commun. 1999, 6: 13–18. 10.1109/98.788210
Haykin S: Cognitive radio: brain-empowered wireless communications. IEEE J. Sel. Areas Commun. 2005, 23: 201–220.
Devroye N, Vu M, Tarokh V: Cognitive radio networks: information theory limits, models and design. IEEE Signal Process. Mag. 2008, 25: 12–23.
Biglieri E, Calderbank R, Constantinides A, Goldsmith A, Paulraj A, Poor HV: MIMO Wireless Communications. Cambridge: Cambridge University Press; 2007.
Tse D, Hanly S: Multiaccess fading channels. Part I: polymatroid structure, optimal resource allocation and throughput capacities. IEEE Trans. Inf. Theory 1998, 44: 2796–2815. 10.1109/18.737513
Vishwanath S, Jafar S, Goldsmith A: Optimum power and rate allocation strategies for multiple access fading channels. In Proc. IEEE Vehicular Technology Conf., Rhodes, 2001.
Jindal N, Rhee W, Vishwanath S, Jafar SA, Goldsmith A: Sum power iterative water-filling for multi-antenna Gaussian broadcast channels. IEEE Trans. Inf. Theory 2005, 51: 1570–1580. 10.1109/TIT.2005.844082
Yu W: Sum-capacity computation for the Gaussian vector broadcast channel via dual decomposition. IEEE Trans. Inf. Theory 2006, 52: 754–759.
Yu W, Rhee W, Boyd S, Cioffi JM: Iterative water-filling for Gaussian vector multiple-access channels. IEEE Trans. Inf. Theory 2004, 50: 145–152. 10.1109/TIT.2003.821988
Telatar E: Capacity of multi-antenna Gaussian channels. Europ. Trans. Telecommun. 1999, 10: 585–596. 10.1002/ett.4460100604
Jindal N, Vishwanath S, Goldsmith A: On the duality of Gaussian multiple-access and broadcast channels. IEEE Trans. Inf. Theory 2004, 50: 768–783. 10.1109/TIT.2004.826646
Viswanath P, Tse D: Sum capacity of the multiple antenna Gaussian broadcast channel and uplink-downlink duality. IEEE Trans. Inf. Theory 2003, 49: 1912–1921. 10.1109/TIT.2003.814483
Weingarten H, Steinberg Y, Shamai S: The capacity region of the Gaussian multiple-input multiple-output broadcast channel. IEEE Trans. Inf. Theory 2006, 52: 3936–3964.
Kobayashi M, Caire G: An iterative water-filling algorithm for maximum weighted sum-rate of Gaussian MIMO-BC. IEEE J. Sel. Areas Commun. 2006, 24: 1640–1646.
Zhang L, Liang YC, Xin Y: Joint beamforming and power allocation for multiple access channels in cognitive radio networks. IEEE J. Sel. Areas Commun. 2008, 26: 38–51.
Zhang L, Xin Y, Liang YC, Poor HV: Cognitive multiple access channels: optimal power allocation for weighted sum rate maximization. IEEE Trans. Commun. 2009, 57: 2754–2762.
Zhang R, Cui S, Liang YC: On ergodic sum capacity of fading cognitive multiple-access and broadcast channels. IEEE Trans. Inf. Theory 2009, 55: 5161–5178.
Palomar D: Practical algorithms for a family of waterfilling solutions. IEEE Trans. Signal Process. 2005, 53: 686–695.
Hsu C, Su H, Lin P: Joint subcarrier pairing and power allocation for OFDM transmission with decode-and-forward relaying. IEEE Trans. Signal Process. 2011, 59: 399–414.
Qi Q, Minturn A, Yang Y: An efficient water-filling algorithm for power allocation in OFDM-based cognitive radio systems. In Proc. International Conference on Systems and Informatics (ICSAI), Yantai, 2012, pp. 2069–2073.
Rong Y, Tang X, Hua Y: A unified framework for optimizing linear nonregenerative multicarrier MIMO relay communication systems. IEEE Trans. Signal Process. 2009, 57: 4837–4852.
Quarteroni A, Sacco R, Saleri F: Numerical Mathematics. Berlin Heidelberg: Springer; 2010.
He P, Zhao L: Correction of convergence proof for iterative water-filling in Gaussian MIMO broadcast channels. IEEE Trans. Inf. Theory 2011, 57: 2539–2543.
Zangwill W: Nonlinear Programming: A Unified Approach. Englewood Cliffs: Prentice-Hall; 1969.
Sun W, Yuan Y: Optimization Theory and Methods: Nonlinear Programming. New York: Springer; 2006.
Papadimitriou CH, Steiglitz K: Combinatorial Optimization: Algorithms and Complexity, Unabridged edition. Mineola: Dover Publications; 1998.
Bertsekas DP, Tsitsiklis JN: Parallel and Distributed Computation: Numerical Methods. Nashua: Athena Scientific; 1997.
Acknowledgements
The authors sincerely acknowledge the support from the Natural Sciences and Engineering Research Council (NSERC) of Canada under grant number RGPIN/293237-2009, the National Natural Science Foundation of China (NSFC) under grant number 61021001, and the Tsinghua National Laboratory for Information Science and Technology (TNList). The authors are grateful to the anonymous reviewers and guest editors for their valuable comments and suggestions to improve the quality of the article.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
He, P., Zhao, L. & Lu, J. Weighted sumrate maximization for multiuser SIMO multiple access channels in cognitive radio networks. EURASIP J. Adv. Signal Process. 2013, 80 (2013). https://doi.org/10.1186/1687-6180-2013-80
Keywords
 Channel capacity
 Multiuser MIMO (MU-MIMO)
 Multiple access channels (MAC)
 Cognitive radio (CR)
 Multiple-antenna
 Broadcast systems
 Maximum sumrate
 Optimal power distribution
 Optimization methods
 Waterfilling
 Algorithm with mixed constraints