Weighted sum-rate maximization for multi-user SIMO multiple access channels in cognitive radio networks


Abstract

In this article, an efficient distributed and parallel algorithm is proposed to maximize the sum-rate and optimize the input distribution policy for the multi-user single-input multiple-output multiple access channel (MU-SIMO MAC) system with concurrent access within a cognitive radio (CR) network. Single input means that every user has a single antenna; multiple output means that the base station has multiple antennas. The main features are: (i) the power distribution for the users is updated using variable scale factors that effectively and efficiently maximize the objective function at each iteration; (ii) distributed and parallel computation is employed to expedite convergence of the proposed distributed algorithm; and (iii) a novel water-filling with mixed constraints is investigated and used as a fundamental block of the proposed algorithm. By sufficiently exploiting the structure of the proposed model, the proposed algorithm achieves fast convergence. Numerical results verify that the proposed algorithm is effective and converges quickly: for the simulated range, the required number of iterations for convergence is two, and this number is not sensitive to an increase in the number of users. This feature is quite desirable for large-scale systems with dense active users. In addition, it is also worth noting that the proposed algorithm is a monotonic feasible-point operator over the iterations, so a stopping criterion for the computation can easily be set up.

1 Introduction

The radio spectrum is a precious resource that demands efficient utilization, as the currently licensed spectrum is severely underutilized [1]. Cognitive radio (CR) [2], [3], [4], which adapts a radio's operating characteristics to real-time conditions, is the key technology that allows flexible, efficient, and reliable spectrum utilization in wireless communications. This technology exploits the underutilized licensed spectrum of the primary user(s) (PU) and introduces secondary user(s) (SU) to operate on spectrum that is either opportunistically available or concurrently shared by the PUs and SUs. Under this situation, and according to the definition of a cognitive (radio) network [5], opportunistically utilizing the spectrum means that the SUs may fill the spectrum gaps or holes left by the PUs, whereas concurrently utilizing the spectrum means that the SUs transmit over the same spectrum as the PUs in such a way that the interference from the transmitting SUs does not violate the quality requirements of the PUs. This article focuses on the latter case. Furthermore, multiple-input multiple-output (MIMO) technology uses multiple antennas at the transmitter or the receiver to significantly increase data throughput and link range without additional bandwidth or transmit power, so it plays an important role in wireless communications today. In infrastructure-supported networks, such as the widely used cellular network, base stations are typically shared by a large number of users. Within the scope of this article, it is therefore assumed that the base station under consideration is shared by multiple PUs and multiple SUs. In this article, a MIMO-enhanced CR network is considered to fully ensure the quality of service (QoS) of the PUs as well as to maximize the weighted sum-rate of the SUs. We consider multiple SUs accessing the base station, which is referred to as a multiple-access channel (MAC).

The weighted sum-rate maximization problem is to compute the "best" achievable rate vector in the capacity region [6], [7], [8] by specifying the operating point on the boundary of the capacity region. This optimality is in the Pareto sense of multi-objective optimization.

For the non-CR cases, the sum-rate maximization problem has been intensively explored for both the Gaussian broadcast channel (BC) [9], [10] and the Gaussian MAC [11]. Typical approaches include iterative water-filling algorithms [9], [11] and dual decomposition [10]. The conventional water-filling algorithm [12], an efficient resource allocation algorithm, needs to be used inside each iteration as an inner-loop operation. In addition, the well-known duality between the Gaussian BC and the sum-power constrained Gaussian dual MAC [13], [14], [15] facilitates transforming BC sum-rate problems into their dual MAC problems. As for the weighted sum-rate problem, it is easily seen that when the weight coefficients are all unity, the weighted sum-rate problem reduces to a sum-rate optimization problem; solving the weighted sum-rate problem is thus more general. However, due to the more complicated problem structure, the conventional water-filling algorithm [12] cannot compute its solution. For computing the maximum weighted sum-rate for a class of Gaussian single-input multiple-output (SIMO) BC systems, or the equivalent dual MAC systems, the study in [16] presented algorithms using a cyclic coordinate ascent method to provide the max-stability policy.

For the CR cases, besides the individual power constraints on the SUs, the total interference power from the SUs needs to be included in the constraints of the target problem. Since single-antenna mobile users are quite common and compose a major served group, due to the size and cost limitations of mobile terminals, this article is confined to a single-input multiple-output multiple access channel (SIMO-MAC) in the CR network. Earlier studies [17] and [18] investigated the sum-rate problem and the weighted sum-rate problem in CR-SIMO-MAC cases, respectively. In addition, for the ergodic sum capacity of a single-input single-output (SISO) system, the study in [19] considered the maximum (non-weighted) sum-rate problem with a simple form of the objective function.

In this article, by exploiting the structure of the weighted sum-rate optimization problem, we propose an efficient iterative algorithm to compute the optimal input policy and to maximize the weighted sum-rate by solving a generalized water-filling problem in each iteration. The water-filling machinery is experiencing continuous development [12], [20], [21], [22], [23]. In this article, we propose a generalized weighted water-filling algorithm (GWWFA) as a fundamental step (inner loop algorithm) for the target problem. In the inner loop, the weighted sum-rate problem is decomposed into a series of generalized water-filling problems. With this decomposition, a decoupled system, each equation of which contains only a scalar variable, is formed and solved. Each equation is solved by the GWWFA within a finite number of loops. To speed up the computation of the solution to each equation, we also specify the intervals to which the solutions belong.

For the outer loop of the algorithm, a variable scale factor is applied to update the covariance vector of the users. The optimal scale factor is obtained by maximizing the target objective value (i.e., the weighted sum-rate) over the scalar variable $\beta$, to expedite convergence of the proposed algorithm. To this end, we determine the optimal scale factor by searching over a range consisting of a few discrete values. As a result, parallel operation can be used to expedite the search and to avoid the need for another nested loop. This parallel operation can be distributed to and carried out by multiple processors (for example, four processors).

Compared with the earlier study [18], the main differences of our work are: (i) in [18], the dual-decomposition approach [10] is used, whereas we apply the iterative water-filling algorithm [9] and extend it to solve the target problem. The advantage of the iterative water-filling algorithm is that it is a monotonic feasible-point operator over the iterations. That is to say, the proposed algorithm generates a sequence of feasible points, and the objective function values corresponding to this point sequence are monotonically increasing. Hence, a stopping criterion for the computation can easily be set up. In contrast, the regular primal-dual method used in [18] is not a feasible point method; (ii) for the constraints of the target problem, we make the individual power constraints stricter and more reasonable, since the signal powers are assumed to be greater than or equal to zero; (iii) the convergence rate is improved significantly. In the numerical example illustrated by Figure 1 of [18], convergence of the weighted sum-rate is obtained after 110 iterations for a system with 3 SUs and 2 PUs. With our proposed algorithm, convergence of the weighted sum-rate is achieved within two iterations over the simulated range (number of SUs up to 110). In addition, even if the PUs and SUs are served by different base stations, it is easy to see that the proposed machinery can be used with minor modifications.

Figure 1. CR-MAC system model.

In the remainder of this article, the system model for a CR-SIMO-MAC system and its weighted sum-rate are described in Section 2. Section 3 discusses the proposed algorithm to solve the maximal weighted sum-rate problem through an inner loop algorithm. The optimality proof of the inner loop algorithm (GWWFA) is presented in Section 3.1. Then the outer loop algorithm (AWCR) and its implementation are presented in Section 3.2. Section 4 provides the convergence proof of the AWCR. Section 5 presents numerical results and some complexity analysis to show the effectiveness of the proposed algorithm.

Key notations used in this article are as follows: $|A|$ and $\mathrm{Tr}(A)$ give the determinant and the trace of a square matrix $A$, respectively; $E[X]$ is the expectation of the random variable $X$; the capital symbol $I$ denotes the identity matrix of the corresponding size. A square matrix $B \succeq 0$ means that $B$ is a positive semi-definite matrix. Further, for two arbitrary positive semi-definite matrices $B$ and $C$, the expression $B \succeq C$ means that the difference $B - C$ is a positive semi-definite matrix. In addition, for any complex matrix, the superscripts $\dagger$ and $T$ denote the conjugate transpose and the transpose of the matrix, respectively.

2 SIMO-MAC in CR network and its weighted sum-rate

For a SIMO-MAC in a CR network, as shown in Figure 1, assume that there are one base station (BS) with $N_r$ antennas, $K$ SUs, and $N$ PUs, each of which is equipped with a single antenna. The received signal $\mathbf{y} \in \mathbb{C}^{N_r \times 1}$ at the BS is described as

$$\mathbf{y} = \sum_{j=1}^{K} h_j^\dagger x_j + \sum_{j=1}^{N} \hat{h}_j^\dagger \hat{x}_j + \mathbf{z}, \qquad h_j \in \mathbb{C}^{1 \times N_r},\ j = 1, 2, \dots, K, \quad \text{and} \quad \hat{h}_j \in \mathbb{C}^{1 \times N_r},\ j = 1, 2, \dots, N,$$
(1)

where the $j$th entry $x_j$ of $\mathbf{x} \in \mathbb{C}^{K \times 1}$ is a scalar complex input signal from the $j$th SU, and $\mathbf{x}$ is assumed to be a Gaussian random vector having zero mean and independent entries. The $j$th entry $\hat{x}_j$ of $\hat{\mathbf{x}}$ is a scalar complex input signal from the $j$th PU, and $\hat{\mathbf{x}}$ is likewise assumed to be a zero-mean Gaussian random vector with independent entries. The noise term $\mathbf{z} \in \mathbb{C}^{N_r \times 1}$ is an additive Gaussian noise random vector, i.e., $\mathbf{z} \sim \mathcal{CN}(0, \sigma^2 I)$. The channel inputs $\hat{\mathbf{x}}$, $\mathbf{x}$, and $\mathbf{z}$ are also assumed to be mutually independent. Furthermore, the $j$th SU's transmitted power can be expressed as

$$S_j = E\left[|x_j|^2\right], \quad j = 1, 2, \dots, K.$$
(2)

Note that $S_j$, $\forall j$, is non-negative.

The mathematical model of the weighted sum-rate optimization problem for the SIMO-MAC in the CR network can be stated as follows (refer to (2.16) in [6] and the references therein):

Given a group of weights $\{w_k\}_{k=1}^{K}$, assumed to be in decreasing order (the users can be arbitrarily renumbered to satisfy this condition), with the achievable rate of secondary user $k$,

$$\log \frac{\left|C_0 + \sum_{j=1}^{k} h_j^\dagger h_j S_j\right|}{\left|C_0 + \sum_{j=1}^{k-1} h_j^\dagger h_j S_j\right|}, \quad \forall k,$$

the weighted sum-rate problem is formulated as

$$f_w^{\mathrm{mac}}\left(h_1, \dots, h_K; P_1, \dots, P_K; P_t\right) = \max_{\{S_k\}_{k=1}^{K}} \; w_K \log \left|C_0 + \sum_{j=1}^{K} h_j^\dagger h_j S_j\right| + \sum_{k=1}^{K-1} (w_k - w_{k+1}) \log \left|C_0 + \sum_{j=1}^{k} h_j^\dagger h_j S_j\right|$$
$$\text{subject to: } 0 \le S_k \le P_k,\ \forall k; \qquad \sum_{k=1}^{K} g_k S_k \le P_t,$$
(3)

where, for the MAC cases, the peak power constraint on the $k$th SU is denoted by a group of positive numbers $P_k$, $k = 1, \dots, K$, and the power threshold that ensures the QoS of the PUs is denoted by the positive number $P_t$. Further, when no confusion is possible, $f_w^{\mathrm{mac}}$ is simply written as $f$. For convenience, we define $\eta_k = w_k - w_{k+1}$ for $k = 1, \dots, K-1$, and $\eta_K = w_K$, as a group of non-negative real numbers, at least one of which is assumed to be non-zero. Further, the term $g_k = h_k h_k^\dagger$, $\forall k$, is the channel power gain of the $k$th SU to the BS. Also, we denote the covariance matrix of the random vector $\sum_{j=1}^{N} \hat{h}_j^\dagger \hat{x}_j + \mathbf{z}$ by $C_0$. It is easy to see that the matrix $C_0$ is positive definite.
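Since $\eta_K = w_K$, the objective in (3) can equivalently be written as $\sum_{k=1}^{K} \eta_k \log|C_0 + \sum_{j=1}^{k} h_j^\dagger h_j S_j|$, which is convenient to evaluate numerically. The following is a minimal sketch with illustrative numbers; `weighted_sum_rate` is a hypothetical helper (not code from this article), assuming row-vector channels $h_k \in \mathbb{C}^{1 \times N_r}$ as defined above:

```python
import numpy as np

def weighted_sum_rate(H, S, eta, C0):
    """Objective of (3): sum_k eta_k * log|C0 + sum_{j<=k} h_j^dagger h_j S_j|.

    H   : K x Nr array, row k is the (row-vector) channel h_k of SU k
    S   : length-K array of transmit powers S_k
    eta : length-K array, eta_k = w_k - w_{k+1} for k < K, eta_K = w_K
    C0  : Nr x Nr covariance of PU interference plus noise (positive definite)
    """
    K, Nr = H.shape
    f, A = 0.0, C0.astype(complex)
    for k in range(K):
        h = H[k:k+1, :]                      # 1 x Nr row vector h_k
        A = A + h.conj().T @ h * S[k]        # C0 + sum_{j<=k} h_j^dagger h_j S_j
        f += eta[k] * np.log(np.linalg.det(A).real)
    return f

# illustrative example (hypothetical channel values)
rng = np.random.default_rng(0)
K, Nr, sigma2 = 3, 4, 1.0
H = (rng.standard_normal((K, Nr)) + 1j * rng.standard_normal((K, Nr))) / np.sqrt(2)
S = np.array([1.0, 0.5, 2.0])
w = np.array([3.0, 2.0, 1.0])                # decreasing weights w_k
eta = np.append(w[:-1] - w[1:], w[-1])       # (w_k - w_{k+1}, ..., w_K)
C0 = sigma2 * np.eye(Nr)
print(weighted_sum_rate(H, S, eta, C0))
```

By the telescoping identity, this objective equals $\sum_k w_k \cdot \mathrm{Rate}_k$ plus the constant $w_1 \log|C_0|$, where $\mathrm{Rate}_k$ is the per-user rate above.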

The constraint $\sum_{k=1}^{K} g_k S_k \le P_t$ is called the sum-power constraint with gains. It is obtained from the following analysis. Let $H = \left[h_1^\dagger, \dots, h_K^\dagger\right] \in \mathbb{C}^{N_r \times K}$ and $\hat{H} = \left[\hat{h}_1^\dagger, \dots, \hat{h}_N^\dagger\right] \in \mathbb{C}^{N_r \times N}$. Thus, the received signal at the BS is $\mathbf{y} = \hat{H}\hat{\mathbf{x}} + (H\mathbf{x} + \mathbf{z})$, where $H\mathbf{x} + \mathbf{z}$ can be regarded as additive interference and noise to the signal $\hat{H}\hat{\mathbf{x}}$ transmitted by the PUs. To guarantee the QoS for the PUs, the power of the interference and noise should be less than a threshold value $P_{\mathrm{TH}}$. This condition can be expressed as

$$\mathrm{Tr}\left(H E\left[\mathbf{x}\mathbf{x}^\dagger\right] H^\dagger + E\left[\mathbf{z}\mathbf{z}^\dagger\right]\right) \le P_{\mathrm{TH}}.$$
(4)

It can be written as

$$\sum_{k=1}^{K} g_k S_k \le P_{\mathrm{TH}} - N_r \sigma^2 = P_t,$$
(5)

where the power constraint value $P_t$ is the interference-and-noise threshold minus the Gaussian noise power.

As an alternative, to guarantee the QoS for each of the PUs individually, the power of the interference and noise should be less than a threshold value $P_{\mathrm{TH}}(i)$, $\forall i$. Similarly, it is obtained that

$$\sum_{k=1}^{K} g_k S_k \le P_t(i), \quad \forall i.$$
(6)

That is to say, the condition above is equivalent to

$$\sum_{k=1}^{K} g_k S_k \le \min_i \{P_t(i)\}.$$
(7)

Denoting $\min_i\{P_t(i)\}$ by $P_t$, the target model still covers the case in which the QoS for each of the PUs is considered individually. Note that at the base station with multiple antennas, the received signals can be regarded as a stochastic vector, or a point in a Hilbert space, and the received signal powers are abstracted into the squared norm of that vector. The transmitted powers of the PUs have been taken into account through $C_0$ and $P_t$ mentioned above, which appear in (3).
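The equivalence between (4) and (5) can be checked numerically: with independent inputs, $\mathrm{Tr}(H E[\mathbf{x}\mathbf{x}^\dagger] H^\dagger) = \sum_k g_k S_k$ and $\mathrm{Tr}(E[\mathbf{z}\mathbf{z}^\dagger]) = N_r \sigma^2$. A small sketch with illustrative (hypothetical) numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
K, Nr, sigma2 = 3, 4, 0.5
# row k of Hrows is the SU channel h_k (1 x Nr); H collects the columns h_k^dagger
Hrows = (rng.standard_normal((K, Nr)) + 1j * rng.standard_normal((K, Nr))) / np.sqrt(2)
H = Hrows.conj().T                       # Nr x K
S = np.array([1.0, 2.0, 0.3])            # SU transmit powers S_k

# left-hand side of (4) with independent inputs: Tr(H E[x x^dagger] H^dagger + E[z z^dagger])
lhs = float(np.trace(H @ np.diag(S) @ H.conj().T).real) + Nr * sigma2

# the same quantity through the channel power gains g_k = h_k h_k^dagger
g = np.sum(np.abs(Hrows) ** 2, axis=1)
rhs = float(g @ S) + Nr * sigma2
print(lhs, rhs)                          # the two agree, so (4) is exactly (5)
```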

It is seen that the sequence $\{\eta_k\}_{k=1}^{K}$ stems from the vector of weights used in multi-user information theory [6]. The parameter $\eta_k$, $\forall k$, in the sequence is called the weighted coefficient when no confusion results.

A stricter weighted sum-rate model, which reflects the essence of the issue for the CR-SIMO-MAC, can also be obtained. Along similar lines to the above, we may choose the power thresholds $P_{t,i}$ to limit the impact of the SUs on each of the antennas of the BS. The sum-power constraint with gains then evolves into $\sum_{k=1}^{K} g_{k,i} S_k \le P_{t,i}$, $i = 1, 2, \dots, N_r$. It is seen that such a weighted sum-rate problem with more power constraints can be solved by solving a problem similar to (3). Therefore, this article aims at computing the solution to problem (3). Note that if $h_{i_0} = 0$ for some $1 \le i_0 \le K$ in (3), we remove user $i_0$, and the number of users is reduced to $K - 1$. Thus, we can assume that $h_i \neq 0$, $\forall i$.

For the important special case of the sum-rate problem, which is included in (3), assume that $M = \mathrm{rank}(H)$. Applying the QR decomposition $H = QR$, let $Q = \left[q_1, \dots, q_M\right] \in \mathbb{C}^{N_r \times M}$ have orthonormal column vectors, and let $R \in \mathbb{C}^{M \times K}$ be an upper triangular matrix with $r_{i,j}$ denoting the $(i,j)$th entry of $R$. $Q^\dagger$ is regarded as an equalizer applied to the received signal by the BS. Thus, the $i$th SU has the rate

$$\mathrm{Rate}_i = \log\left(1 + \frac{|r_{i,i}|^2 S_i}{\sigma^2 + \sum_{n=1}^{N} \hat{S}_n q_i^\dagger \hat{R}_n q_i + \sum_{j=i+1}^{K} |r_{i,j}|^2 S_j}\right),$$
(8)

where $\hat{S}_n = E\left[\hat{x}_n \hat{x}_n^\dagger\right]$ and $\hat{R}_n = \hat{h}_n^\dagger \hat{h}_n$, $n = 1, \dots, N$. It is easy to see that the rate just mentioned comes from the expression

$$\log \frac{\left|I + \sum_{j=1}^{k} h_j^\dagger h_j S_j\right|}{\left|I + \sum_{j=1}^{k-1} h_j^\dagger h_j S_j\right|}, \quad \forall k,$$

i.e., $C_0 = I$ in this case.
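The per-user rates in (8) can be sketched for the special case with no PU interference (the $C_0 = I$ case with $N = 0$). The helper name `qr_sic_rates` and the channel values are hypothetical; only the structure of (8) is taken from the text:

```python
import numpy as np

def qr_sic_rates(Hrows, S, sigma2=1.0):
    """Per-user rates (8) in the sum-rate special case (C0 = I, no PUs).

    Hrows : K x Nr array whose i-th row is the channel h_i; the BS applies
            Q^dagger from H = QR and decodes user i treating the residual
            terms r_{i,j}, j > i, as noise.
    """
    H = Hrows.conj().T                      # Nr x K, column i is h_i^dagger
    Q, R = np.linalg.qr(H)                  # reduced QR; R is upper triangular
    K = Hrows.shape[0]
    rates = []
    for i in range(K):
        interf = sum(abs(R[i, j]) ** 2 * S[j] for j in range(i + 1, K))
        rates.append(np.log(1.0 + abs(R[i, i]) ** 2 * S[i] / (sigma2 + interf)))
    return np.array(rates)

# illustrative two-user example (hypothetical channel values)
Hrows = np.array([[1.0 + 0.5j, 0.2, -0.3], [0.1, 1.0 - 0.2j, 0.4]])
rates = qr_sic_rates(Hrows, np.array([1.0, 2.0]))
print(rates)
```

For a single user the expression collapses to $\log(1 + \|h\|^2 S / \sigma^2)$, which gives a quick sanity check.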

3 Algorithm AWCR

The proposed algorithm for solving the weighted sum-rate problem in the cognitive radio network, denoted by AWCR, is an iterative algorithm consisting of two layers of loops. Inside the inner loop, a generalized weighted water-filling algorithm is proposed and used. Owing to the special problem structure and the complexity of the weighted sum-rate problem, the proposed water-filling is more general than regular weighted water-filling; it is discussed in Section 3.1. For the outer loop of AWCR, a variable scale factor with parallel computation is applied to expedite convergence. This is presented in Section 3.2.

3.1 Generalized weighted water-filling algorithm (GWWFA)

Being a fundamental block of the optimum resource allocation problem for the CR-SIMO-MAC systems, the generalized water-filling problem is abstracted as follows.

For a multiple-receiving-antenna system, we are given $P_t > 0$ as the total power, or volume of the water; $K$ is the total number of users; the allocated power and the propagation path (non-negative) gains for the $i$th user are denoted by $S_i$, for $i = 1, \dots, K$, and $\{a_{ji}\}_{j=i}^{K}$, respectively. The generalized weighted water-filling problem under consideration then reads

$$\max_{\{S_i\}_{i=1}^{K}:\ 0 \le S_i \le P_i,\ \forall i;\ \sum_{i=1}^{K} g_i S_i \le P_t} \; \sum_{i=1}^{K} \eta_i \sum_{j=1}^{i} \log\left(1 + a_{ij} S_j\right),$$
(9)

where the set $\{\eta_i\}_{i=1}^{K}$ plays the role of the weighted coefficients. Note that if $\sum_{i=1}^{K} g_i P_i \le P_t$ holds, problem (9) degenerates into a trivial case (every user transmits at its peak power). Hence, $\sum_{i=1}^{K} g_i P_i > P_t$ is assumed.

Owing to the specific CR SIMO MAC setup considered, as well as the inclusion of arbitrary weights $\{\eta_j\}$, the problem structure of (9) is novel. It is easy to see that if $a_{ij} = 0$ for $i \neq j$ and $P_i$ is sufficiently large, $\forall i$, then problem (9) reduces to the conventional weighted water-filling problem. Further, if equal weights are chosen, it reduces to the conventional water-filling problem, which can be solved by the conventional water-filling algorithm [12].
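In that reduced conventional case ($a_{ij} = 0$ for $i \neq j$, equal weights, inactive peak constraints), the KKT conditions give the familiar closed form $S_i = \max(0,\ 1/(\lambda g_i) - 1/a_{ii})$, with the level $\lambda$ found by bisection. A minimal sketch with illustrative numbers (not the authors' code):

```python
import numpy as np

def conventional_waterfilling(a, g, Pt, tol=1e-12):
    """Special case of (9): a_ij = 0 for i != j, eta_i = 1, P_i unbounded.

    Maximizes sum_i log(1 + a_i * S_i) subject to sum_i g_i S_i <= Pt, S_i >= 0.
    The KKT conditions yield S_i = max(0, 1/(lam*g_i) - 1/a_i); bisection on
    lam enforces the sum-power constraint with gains.
    """
    a, g = np.asarray(a, float), np.asarray(g, float)
    lo, hi = tol, float((a / g).max())    # lam lies in (0, max_i a_i/g_i]
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        S = np.maximum(0.0, 1.0 / (lam * g) - 1.0 / a)
        if g @ S > Pt:
            lo = lam                      # too much power used: raise the level
        else:
            hi = lam
    return S

S = conventional_waterfilling([2.0, 1.0, 0.5], [1.0, 1.0, 1.0], Pt=3.0)
print(S, S.sum())
```

At the solution, the "water level" $S_i + 1/a_i$ is the same for every active user, which is an easy property to verify.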

To find the solution to the more complicated generalized problem above, the generalized weighted water-filling algorithm (GWWFA) is presented as follows. Let

$$\lambda_i = \frac{1}{g_i} \sum_{j=i}^{K} \eta_j a_{ji}, \quad i = 1, \dots, K.$$
(10)

Apply a permutation $\pi$ to $\{\lambda_i\}$ such that

$$\lambda_{\pi(1)} \ge \lambda_{\pi(2)} \ge \cdots \ge \lambda_{\pi(K)} > \min_{1 \le i \le K} \left\{ v \;\middle|\; v = \frac{1}{g_i} \sum_{j=i}^{K} \frac{\eta_j a_{ji}}{1 + a_{ji} P} > 0 \right\} = \lambda_{\pi(K+1)},$$
(11)

where $P = \sum_{k=1}^{K} P_k$. Define the function $J_i(s_i)$ as

$$J_i(s_i) = \frac{1}{g_i} \sum_{j=i}^{K} \frac{\eta_j a_{ji}}{1 + a_{ji} s_i}, \quad i = 1, \dots, K.$$
(12)

It is easy to see that the function J i (s i ) is strictly monotonically decreasing and continuous over the interval

$$\left(-\min_j \left\{\frac{1}{a_{ji}} \;\middle|\; a_{ji} > 0\right\},\ \infty\right).$$
(13)

The steps of the GWWFA can be described as below.

Algorithm GWWFA:

(1) Given $\varepsilon > 0$, initialize $\lambda_{\min}$ and $\lambda_{\max}$.

(2) Set $\lambda = (\lambda_{\min} + \lambda_{\max}) / 2$.

(3) If $\lambda$ falls in the interval $[\lambda_{\pi(i+1)}, \lambda_{\pi(i)}]$, where $1 \le i \le K$, initialize the point $\left(s_{\pi(1)}^{(0)}, \dots, s_{\pi(i)}^{(0)}\right)$ and compute

$$\left(s_{\pi(1)}^{(n+1)}, \dots, s_{\pi(i)}^{(n+1)}\right) = \left(s_{\pi(1)}^{(n)} - \frac{J_{\pi(1)}\left(s_{\pi(1)}^{(n)}\right) - \lambda}{J_{\pi(1)}'\left(s_{\pi(1)}^{(n)}\right)},\ \dots,\ s_{\pi(i)}^{(n)} - \frac{J_{\pi(i)}\left(s_{\pi(i)}^{(n)}\right) - \lambda}{J_{\pi(i)}'\left(s_{\pi(i)}^{(n)}\right)}\right).$$
(14)

Then increase the iteration from $n$ to $n + 1$. Repeat the procedure in (14) until the point $\left(s_{\pi(1)}^{(n)}, \dots, s_{\pi(i)}^{(n)}\right)$ converges. Denote $\lim_{n \to \infty} \left(s_{\pi(1)}^{(n)}, \dots, s_{\pi(i)}^{(n)}\right)$ by $\left(s_{\pi(1)}, \dots, s_{\pi(i)}\right)$. Let $\left(s_{\pi(i+1)}, \dots, s_{\pi(K)}\right) = 0 \in \mathbb{R}^{1 \times (K-i)}$.

(4) If $\sum_{k=1}^{K} g_k s_k - P_t > 0$, then $\lambda_{\min}$ is assigned $\lambda$; if $\sum_{k=1}^{K} g_k s_k - P_t < 0$, then $\lambda_{\max}$ is assigned $\lambda$; if $\sum_{k=1}^{K} g_k s_k - P_t = 0$, stop.

(5) If $|\lambda_{\min} - \lambda_{\max}| \le \varepsilon$, stop. Otherwise, go to step (2).
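The steps above can be sketched compactly. The following is an illustrative implementation (hypothetical helper names, simplified initialization of the Newton iterates) that combines the bisection of steps (2), (4), and (5) with the Newton iteration (14), clipping the per-user solutions to $[0, P_i]$ in the spirit of (16):

```python
import numpy as np

def gwwfa(a, eta, g, P, Pt, eps=1e-10):
    """Sketch of the GWWFA for problem (9). Here a[j][i] holds the gain a_ji
    multiplying S_i inside the eta_j term (used for j >= i), so that
    J_i(s) = (1/g_i) * sum_{j>=i} eta_j * a_ji / (1 + a_ji * s), as in (12)."""
    K = len(g)

    def J(i, s):
        return sum(eta[j] * a[j][i] / (1.0 + a[j][i] * s) for j in range(i, K)) / g[i]

    def Jp(i, s):
        return -sum(eta[j] * a[j][i] ** 2 / (1.0 + a[j][i] * s) ** 2
                    for j in range(i, K)) / g[i]

    def solve_user(i, lam):
        # Newton iteration (14) for J_i(s) = lam, iterates clipped to [0, P_i];
        # when the root falls outside, the clipped endpoint is returned.
        s = 0.5 * P[i]
        for _ in range(60):
            s = min(max(s - (J(i, s) - lam) / Jp(i, s), 0.0), P[i])
        return s

    # bisection on the water level lambda, steps (2), (4), and (5)
    lam_min, lam_max = 0.0, max(J(i, 0.0) for i in range(K))
    while lam_max - lam_min > eps:
        lam = 0.5 * (lam_min + lam_max)
        S = np.array([solve_user(i, lam) for i in range(K)])
        excess = float(g @ S) - Pt
        if excess > 0:
            lam_min = lam          # too much power used: raise lambda
        elif excess < 0:
            lam_max = lam
        else:
            break
    return S

# illustrative two-user instance (hypothetical numbers)
a = np.array([[1.0, 0.0], [0.5, 2.0]])    # a[j][i], used for j >= i
S = gwwfa(a, eta=np.array([1.0, 1.0]), g=np.array([1.0, 1.0]),
          P=np.array([5.0, 5.0]), Pt=2.0)
print(S)
```

At an interior solution, $J_1(S_1) = J_2(S_2) = \lambda$ and the sum-power constraint with gains is tight, which can be checked directly.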

Remarks 3.1

Note, in step (1) of the GWWFA, that the initial $\lambda_{\min}$ may be chosen as $\lambda_{\pi(K+1)}$, and $\lambda_{\max}$ may be chosen as $\lambda_{\pi(1)}$.

In step (3), for the initialization of $s_{\pi(k)}^{(0)}$, we may first choose an interval, such as $[0, P_{\pi(k)}]$, and use the secant method or the bisection method [24] over the interval to compute, in parallel, an approximate solution to the equation $J_{\pi(k)}(s_{\pi(k)}) - \lambda = 0$, $\forall k$. Hence, within only a few loops ($\le \log_2 P_{\pi(k)} + 1$ loops), $|e_0|$, the absolute error between the accurate solution and the approximate solution obtained by the method above, is less than $0.5$. The initialization of $s_{\pi(k)}^{(0)}$, for $k = 1, \dots, i$, is assigned this approximate solution. Let $(e_n)_k = s_{\pi(k)} - s_{\pi(k)}^{(n)}$, where $J_{\pi(k)}(s_{\pi(k)}) - \lambda = 0$. It is seen that

$$(e_{n+1})_k = (e_n)_k^2 \cdot \frac{\displaystyle\sum_{j=\pi(k)}^{K} \frac{1}{g_{\pi(k)}} \frac{\eta_j a_{j\pi(k)}^2}{\left(1 + a_{j\pi(k)}\left(s_{\pi(k)} - (e_n)_k\right)\right)^2} \cdot \frac{1}{1 + a_{j\pi(k)} s_{\pi(k)}}}{\displaystyle\sum_{j=\pi(k)}^{K} \frac{1}{g_{\pi(k)}} \frac{\eta_j a_{j\pi(k)}^2}{\left(1 + a_{j\pi(k)}\left(s_{\pi(k)} - (e_n)_k\right)\right)^2}} = (e_n)_k^2\, \rho_n,$$
(15)

where $0 < \rho_n < 1$. It can be observed that $0 \le (e_m)_k < (e_0)_k^{2^m}$, and then $\{s_{\pi(k)}^{(m)}\}_{m=1}^{\infty}$ converges uniformly, with $0 \le (e_6)_k < 10^{-19}$ (machine zero), $\forall k$. That is to say, the absolute error between the approximate solution and the accurate solution reaches machine zero within 6 loops. Thus, the optimal solution $\left(s_{\pi(1)}, \dots, s_{\pi(i)}\right)$ can be obtained in parallel, within finitely many loops.
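The quadratic shrinkage of the error in (15) can be observed on a single scalar equation; the numbers below are illustrative ($g = 1$, two terms in $J$):

```python
# Newton iteration (14) on J(s) = lam for one user; residuals collapse
# roughly quadratically once the iterate is close, as predicted by (15).
eta = [1.0, 0.5]
a = [2.0, 1.0]
lam = 0.4

J = lambda s: sum(e * x / (1.0 + x * s) for e, x in zip(eta, a))
Jp = lambda s: -sum(e * x * x / (1.0 + x * s) ** 2 for e, x in zip(eta, a))

s = 3.0            # coarse starting point with |e_0| < 0.5, as in the remark
residuals = []
for _ in range(6):
    s = s - (J(s) - lam) / Jp(s)
    residuals.append(abs(J(s) - lam))
print(residuals)
```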

Define the clipping function

$$(x)_0^a = \begin{cases} 0, & x < 0 \\ x, & 0 \le x \le a \\ a, & x > a. \end{cases}$$
(16)

Then define $G(\lambda)$ as

$$G(\lambda) = \sum_{k=1}^{K} g_{\pi(k)} \left(J_{\pi(k)}^{-1}(\lambda)\right)_0^{P_{\pi(k)}}.$$
(17)

Since $J_{\pi(k)}(s_{\pi(k)})$ is strictly monotonically decreasing and continuous over its interval, so are $J_{\pi(k)}^{-1}(\lambda)$ and $G(\lambda)$ over the corresponding interval(s). Because $G(\lambda_{\pi(K+1)}) > P_t$ and $G(\lambda_{\pi(1)}) < P_t$, step (4) makes $\lambda$ converge to the value at which $G(\lambda) = P_t$. Optimality of the GWWFA is stated by the following proposition.

Proposition 3.1

For (9), its optimal solution can be obtained by the GWWFA.

Proof of Proposition 3.1. From the third item of step (4) in the GWWFA and step (5), $G(\lambda) = P_t$. Then

$$\sum_{k=1}^{K} g_{\pi(k)} \left(J_{\pi(k)}^{-1}(\lambda)\right)_0^{P_{\pi(k)}} = P_t.$$
(18)

Since there exists $i_0$ ($1 \le i_0 \le K$) such that $\lambda \in [\lambda_{\pi(i_0+1)}, \lambda_{\pi(i_0)}]$, the multipliers $\underline{\mu}_{\pi(j)} = 0$ and $\overline{\mu}_{\pi(j)} = \lambda_{\pi(j)} - \lambda \ge 0$ hold for $j = 1, \dots, i_0$, and $\underline{\mu}_{\pi(j)} = \lambda - \lambda_{\pi(j)} \ge 0$ and $\overline{\mu}_{\pi(j)} = 0$ hold for $j = i_0 + 1, \dots, K$. Therefore, there exist the solution

$$\left\{ s_{\pi(k)} = \left(J_{\pi(k)}^{-1}(\lambda)\right)_0^{P_{\pi(k)}} \right\}_{k=1}^{K},$$
(19)

and the Lagrange multipliers $\lambda$, $\{\underline{\mu}_{\pi(k)}\}$, and $\{\overline{\mu}_{\pi(k)}\}$ mentioned above such that the KKT conditions of problem (9) hold, where $\lambda$ corresponds to the constraint $\sum_{k=1}^{K} g_k s_k \le P_t$, and $\{\underline{\mu}_{\pi(k)}\}$ and $\{\overline{\mu}_{\pi(k)}\}$ correspond to the constraints $\{s_{\pi(k)} \ge 0\}$ and $\{s_{\pi(k)} \le P_{\pi(k)}\}$, respectively.

Since the problem in Proposition 3.1 is a differentiable convex optimization problem with linear constraints, the KKT conditions mentioned above are not only sufficient but also necessary for optimality. Note that the constraint qualification (CQ) of the optimization problem (9) is easily seen to hold. Proposition 3.1 is hence proved.

Remarks 3.2

To decouple the variables in the objective function of problem (3), a sum expression is obtained by adding the objective function $K$ times. The sum expression is then processed by selecting one variable as the optimized variable while the others are held fixed. Thus, from (3), problem (20),

$$\max_{\{S_l\}_{l=1}^{K}:\ 0 \le S_l \le P_l,\ \sum_{i=1}^{K} g_i S_i \le P_t} \; \sum_{i=1}^{K} \eta_i \sum_{l=1}^{i} \log\left(1 + G_{il} G_{il}^\dagger S_l\right),$$
(20)

is implied as follows:

Since

$$\sum_{j=1}^{K} \sum_{i=1}^{K} \eta_i \log \left|C_0 + \sum_{l \in \{1,\dots,i\} \cap \{j\}} h_l^\dagger h_l S_l + \sum_{k \in \{1,\dots,i\} \setminus \{j\}} h_k^\dagger h_k \bar{S}_k\right| = \sum_{i=1}^{K} \eta_i \sum_{j=1}^{K} \log\left(1 + \sum_{l \in \{1,\dots,i\} \cap \{j\}} G_{il} G_{il}^\dagger S_l\right) + \sum_{i=1}^{K} \eta_i \sum_{j=1}^{K} \log\left|C_0 + \sum_{k \in \{1,\dots,i\} \setminus \{j\}} h_k^\dagger h_k \bar{S}_k\right|,$$
(21)

where $\bar{S}_k$, $\forall k$, is fixed and

$$G_{il} = h_i \left(C_0 + \sum_{k \in \{1,\dots,i\} \setminus \{l\}} h_k^\dagger h_k \bar{S}_k\right)^{-\frac{1}{2}}, \quad \forall i, l,$$
(22)

the optimization problem

$$\max_{\{S_k\}_{k=1}^{K}:\ 0 \le S_k \le P_k,\ \sum_{k=1}^{K} g_k S_k \le P_t} \; \sum_{j=1}^{K} \sum_{i=1}^{K} \eta_i \log \left|C_0 + \sum_{l \in \{1,\dots,i\} \cap \{j\}} h_l^\dagger h_l S_l + \sum_{k \in \{1,\dots,i\} \setminus \{j\}} h_k^\dagger h_k \bar{S}_k\right|$$
(23)

is equivalent to the problem below:

$$\max_{\{S_l\}_{l=1}^{K}:\ 0 \le S_l \le P_l,\ \sum_{i=1}^{K} g_i S_i \le P_t} \; \sum_{i=1}^{K} \eta_i \sum_{l=1}^{i} \log\left(1 + G_{il} G_{il}^\dagger S_l\right).$$

If the CR SIMO weighted case is generalized to the CR MIMO weighted case, it remains an open question whether a fast water-filling solution like the algorithm above exists.

3.2 Algorithm AWCR and its implementation

The proposed Algorithm AWCR, which is based on the combined problem of both the MIMO MAC and the CR network, is listed below.

Algorithm AWCR:

Input: vectors $h_i$, $S_i^{(0)} = 0$, $i = 1, \dots, K$; $n = 1$.

(1) Generate the effective channels

$$G_{ij}^{(n)} = h_i \left(C_0 + \sum_{l \in \{1,\dots,i\} \setminus \{j\}} h_l^\dagger h_l S_l^{(n-1)}\right)^{-\frac{1}{2}}, \quad \text{for } i = 1, \dots, K,$$

where the parenthesized superscript $(n)$ represents the iteration number.

(2) Treating these effective channels as parallel, non-interfering channels, the new covariances $\left\{\tilde{S}_i^{(n)}\right\}_{i=1}^{K}$ are generated by the GWWFA under the sum-power constraint $P_t$. That is to say, $\left\{\tilde{S}_i^{(n)}\right\}_{i=1}^{K}$ is the optimal solution to (24):

$$\max_{\{S_i\}_{i=1}^{K}:\ 0 \le S_i \le P_i,\ \sum_{i=1}^{K} g_i S_i \le P_t} \; \sum_{i=1}^{K} \eta_i \sum_{j=1}^{i} \log\left(1 + G_{ij}^{(n)} G_{ij}^{(n)\dagger} S_j\right).$$
(24)

Note that (24) is similar to (20); only $S_i^{(n-1)}$ and $G_{ij}^{(n)}$ in the former take the place of $\bar{S}_i$ and $G_{il}$ in the latter, respectively, for any $i$, $j$, $l$.

(3) Update step: let $\gamma^{(n)}$ and $p^{(n-1)}$ denote the newly obtained covariance set and the immediately preceding covariance set, respectively:

$$\gamma^{(n)} = \left(\tilde{S}_1^{(n)}, \tilde{S}_2^{(n)}, \dots, \tilde{S}_K^{(n)}\right) \quad \text{and} \quad p^{(n-1)} = \left(S_1^{(n-1)}, S_2^{(n-1)}, \dots, S_K^{(n-1)}\right).$$

Let

$$\beta^\ast = \max\left\{\beta_1 \;\middle|\; \beta_1 \in \arg\max_{\beta \in [1/K,\ 1]} f\left(\beta \gamma^{(n)} + (1 - \beta) p^{(n-1)}\right)\right\},$$
(25)

as the innovation, where the function $f$ has been defined in (3). Then, the covariance update step is

$$p^{(n)} = \left(S_1^{(n)}, S_2^{(n)}, \dots, S_K^{(n)}\right) = \beta^\ast \gamma^{(n)} + \left(1 - \beta^\ast\right) p^{(n-1)}.$$
(26)

The updated covariance is a convex combination of the newly obtained covariance and the immediately preceding covariance.

(4) Increase the iteration from $n$ to $n + 1$. Go to (1) until convergence.

Note that the new algorithm employs variable weighting factors, which are obtained by maximizing the objective function, to update the covariance.

In this section, the optimality of $\left\{\tilde{S}_i^{(n)}\right\}_{i=1}^{K}$ has been proved by Proposition 3.1, i.e., $\left\{\tilde{S}_i^{(n)}\right\}_{i=1}^{K}$ is the solution to (20).

Remarks 3.3

Since the objective function $f\left(\beta \gamma^{(n)} + (1 - \beta) p^{(n-1)}\right)$ in step (3) of Algorithm AWCR is (upper) convex, i.e., concave, in the scalar variable $\beta$, the maximum of the corresponding optimization problem can be computed with finitely many search steps, or with even fewer evaluations of the objective function. Without loss of generality, the objective function in step (3) is evaluated at the four points $\beta = \frac{1}{K}$, $\frac{1}{K} + \frac{1}{3}\left(1 - \frac{1}{K}\right)$, $\frac{1}{K} + \frac{2}{3}\left(1 - \frac{1}{K}\right)$, and $1$ by parallel computation to determine $\beta^\ast$. That is to say, this parallel operation can be distributed to and carried out by multiple processors (for example, four processors) at the base station, in order to expedite convergence of the proposed algorithm. Finally, the obtained satisfying solution is distributed or returned to the corresponding secondary users.
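The four-point evaluation can be sketched as follows; `best_beta` is a hypothetical helper, and the thread pool merely stands in for the four processors mentioned above:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def best_beta(f, gamma, p_prev, K):
    """Four-point search of Remark 3.3 for step (3) of AWCR: evaluate the
    concave objective at beta = 1/K, 1/K + (1-1/K)/3, 1/K + 2(1-1/K)/3, 1
    in parallel, and return the maximizer (ties broken toward the largest
    beta, matching the outer max in (25))."""
    lo = 1.0 / K
    betas = [lo, lo + (1.0 - lo) / 3.0, lo + 2.0 * (1.0 - lo) / 3.0, 1.0]
    with ThreadPoolExecutor(max_workers=4) as pool:   # the "four processors"
        vals = list(pool.map(lambda b: f(b * gamma + (1.0 - b) * p_prev), betas))
    top = max(vals)
    return max(b for b, v in zip(betas, vals) if v == top)
```

Any concave stand-in for the weighted sum-rate can be used to exercise the search; the chosen point is always one of the four candidates.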

4 Convergence of Algorithm AWCR

There are two methods by either of which convergence of the proposed algorithm can be proved. The first is to use convergence of Algorithm AWCR with $\beta = \frac{1}{4}$ (refer to [25]) and the innovation, as a spacer step, via Zangwill's convergence theorem B ([26], p. 128). However, we would then still need to prove that Algorithm AWCR with $\beta = \frac{1}{4}$, as a basic mapping, satisfies the closedness condition of Zangwill's convergence theorem B. This point requires much explanation and an abstract proof. As an alternative, the second method, which is more intuitive than the first, is used here. The fixed-point approach proposed in this article could also be generalized to solve other problems.

In this section, utilizing results from Section 3.1, convergence of Algorithm AWCR will be strictly proved under a weaker assumption. Note that, because the power constraint is coupled across the optimization stages of (20) with the weighted coefficients in the objective function, while it is decoupled across the optimization stages of the MIMO-MAC case without the weighted coefficients, the use of the water-filling principle in the former differs from that in the latter.

4.1 Convergence proof of the proposed algorithm

In this article, as a more general model, we remove the assumption used in [9] that the optimal solution is unique when proving convergence of the proposed algorithm. To the best of the authors' knowledge, this is one of the proposed novelties for convergence of this class of algorithms with the spacer step ([26], p. 125). Since our convergence proof is based on more general functions, including an objective function and a few constraint functions, it also enriches optimization theory and methods. First, two concepts are introduced: the first is the image of a mapping (or algorithm) that projects a point to a set; the second is a fixed point under the mapping (algorithm). Then, two lemmas are proposed, followed by the convergence proof of the proposed algorithm.

Definition 4.1

(Image under a mapping or Algorithm $A$) (see, e.g., [26], p. 84). Assume that $X$ and $Y$ are two sets. Let $A$ be a mapping or an algorithm from $X$ to $Y$ that projects a point in $X$ to a set of points in $Y$. If the point in $X$ is denoted by $x$ and the set of points in $Y$ is denoted by $A(x)$, then $A(x)$ is called the image of $x$ under $A$.

Definition 4.2

(Fixed point under a mapping or Algorithm $A$). Let $A$ be a mapping or an algorithm from $X$ to $Y$. Assume $x \in X$. If $x \in A(x)$, then $x$ is said to be a fixed point under $A$.

Note that (20) can be changed into a general form:

$$\left\{\tilde{S}_i^{(n)}\right\}_{i=1}^{K} \in \arg\max_{\{S_i\}_{i=1}^{K}:\ 0 \le S_i \le P_i,\ \sum_{i=1}^{K} g_i S_i \le P_t} \; \sum_{i=1}^{K} \eta_i \sum_{j=1}^{i} \log\left(1 + G_{ij}^{(n)} G_{ij}^{(n)\dagger} S_j\right),$$
(27)

because the uniqueness condition on the optimal solution has been removed. Further, corresponding to this change, step (2) of Algorithm AWCR is carried out as follows: given a feasible point $\left\{S_i^{(n)}\right\}_{i=1}^{K}$, its image under step (2) of Algorithm AWCR is a set of points. A point in this set is chosen arbitrarily as the next point $\left\{S_i^{(n+1)}\right\}_{i=1}^{K}$ generated by Algorithm AWCR. Thus, Algorithm AWCR can generate a point sequence under this change. In the following, we still call this algorithm Algorithm AWCR despite the change. The feasible set is denoted by $V_d$.

For any convergent subsequence generated by Algorithm AWCR, whose limit is denoted by $(\bar{S}_1, \dots, \bar{S}_K)$, we may use the following lemma to prove that the limit is a fixed point under Algorithm AWCR, when Algorithm AWCR is regarded as a mapping.

Lemma 1

A point is the limit of a convergent subsequence of the point sequence generated by Algorithm AWCR if and only if this point is a fixed point under Algorithm AWCR.

Proof

See Appendix 1. □

Lemma 2

$(\bar{S}_1, \dots, \bar{S}_K) \in V_d$ is a fixed point under Algorithm AWCR if and only if $(\bar{S}_1, \dots, \bar{S}_K) \in V_d$ is one of the optimal solutions to the problem in (3).

Proof

See Appendix 2. □

Based on the lemmas above, we conclude that Algorithm AWCR is convergent. At the same time, step (3) of Algorithm AWCR is regarded as a computation for a point. With these lines of proof, Algorithm AWCR generates a point sequence, and every point of the sequence consists of $K$ non-negative numbers, e.g., $\left(S_1^{(n)}, \dots, S_K^{(n)}\right)$. The details are described below.

Theorem 4.1

Algorithm AWCR is convergent. At the same time, the sequence of objective values, obtained by evaluating the objective function at the point sequence, monotonically increases to the optimal objective value.

Proof

Due to compactness of the set of feasible solutions to the problem in (3), the point sequence generated by Algorithm AWCR includes a convergent subsequence. For every convergent subsequence, according to Lemma 1, the subsequence must converge to a fixed point under Algorithm AWCR. Then, according to Lemma 2, it converges to one of the optimal solutions to the problem in (3).

Conversely, by the necessary and sufficient conditions of Lemmas 1 and 2, for any optimal solution to the problem in (3) there is a point sequence generated by Algorithm AWCR that converges to that optimal solution.

For the point sequence generated by Algorithm AWCR, the definition of Algorithm AWCR and (30) in Appendix 1 imply that the sequence of objective values, obtained by evaluating the objective function along the point sequence, increases monotonically to the optimal objective value: by (30) the sequence is nondecreasing, and every convergent subsequence of the point sequence converges to one of the optimal solutions to the optimization problem in (3).

Therefore, Algorithm AWCR is convergent.

To reduce the computational cost, the one-dimensional searches in (20) and (25) of Section 3 may use the Fibonacci search. To further improve performance and reduce cost, the objective function in step (3) of the AWCR can be evaluated at the four points mentioned in Remark 3.3, by parallel computation, to find the estimate of β in (25). □
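As one concrete possibility, the one-dimensional maximization over β in (25) can be sketched with a golden-section search, a close relative of the Fibonacci search mentioned above; the quadratic objective here is a hypothetical stand-in for the concave weighted sum-rate along the update direction.

```python
import math

def golden_section_max(g, lo=0.0, hi=1.0, tol=1e-6):
    """Maximize a unimodal (e.g., concave) function g over [lo, hi]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/golden ratio, ~0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    gc, gd = g(c), g(d)
    while b - a > tol:
        if gc >= gd:                 # the maximizer lies in [a, d]
            b, d, gd = d, c, gc
            c = b - invphi * (b - a)
            gc = g(c)
        else:                        # the maximizer lies in [c, b]
            a, c, gc = c, d, gd
            d = a + invphi * (b - a)
            gd = g(d)
    return 0.5 * (a + b)

# concave toy objective along the update direction, peaked at beta = 0.7
beta = golden_section_max(lambda b: -(b - 0.7) ** 2)
print(round(beta, 3))  # -> 0.7
```

The four candidate points of Remark 3.3 could likewise be evaluated concurrently (e.g., with `concurrent.futures`) and the best retained.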

5 Numerical results and complexity analysis

In this section, numerical examples are provided to illustrate the effectiveness of the proposed algorithm. For comparison, a standard feasible direction method using the gradient [27] is chosen, denoted Algorithm AFD. Note that, as a benchmark and a feasible direction method, Algorithm AFD also generates a sequence of feasible points (i.e., it is a feasible point algorithm). A stopping criterion is easy to set up for a feasible point algorithm, especially for a monotonic one like the proposed algorithm. Because the feasible set is a convex polytope, the recently developed AFD algorithm is used as a reference. We did not compare with [18] because the primal-dual algorithm used there is not a feasible point method; moreover, both its constraint assumptions and its system model differ from ours.

Figures 2 and 3 show the evolution of the weighted sum-rate versus the number of iterations for AWCR and AFD, for several choices of the number of users (K). In the calculation, the number of antennas at the base station is m = 4. Channel gain vectors are generated as random m × 1 vectors with each entry drawn independently from the standard Gaussian distribution. {P k } is a set of randomly chosen positive numbers, and the sum power constraint is P t  = 10 dB. A group of weights is also generated randomly. In these figures, the cross markers and the diamond markers represent the results of the proposed Algorithm AWCR and of Algorithm AFD, respectively. The results show that the proposed Algorithm AWCR exhibits a much faster convergence rate, especially as the number of users increases.
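The simulation setup described above can be sketched as follows. The variable names (`H`, `P`, `w`, `sum_rate`) are illustrative, and for brevity the sketch evaluates the unweighted sum-rate $\log_2\det(I + \sum_k S_k h_k h_k^H)$ at one feasible power allocation rather than running the full AWCR.

```python
import numpy as np

rng = np.random.default_rng(0)
m, K = 4, 10                      # base-station antennas, number of users

# channel gain vectors: i.i.d. standard circularly symmetric Gaussian entries
H = (rng.standard_normal((m, K)) + 1j * rng.standard_normal((m, K))) / np.sqrt(2)

P_t = 10.0 ** (10.0 / 10.0)       # sum power constraint, 10 dB in linear scale
P = rng.uniform(0.5, 2.0, K)      # randomly chosen individual power limits {P_k}
w = rng.uniform(size=K)
w /= w.sum()                      # random weights, normalized

def sum_rate(S):
    """Sum-rate log2 det(I + sum_k S_k h_k h_k^H) for a power allocation S."""
    M = np.eye(m, dtype=complex)
    for k in range(K):
        hk = H[:, k:k + 1]
        M += S[k] * (hk @ hk.conj().T)
    return float(np.log2(np.linalg.det(M).real))

S0 = np.minimum(P, P_t / K)       # one feasible starting allocation
print(sum_rate(S0) > 0.0)         # True: a positive allocation gives positive rate
```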

Figure 2

Weighted sum-rates (unit: bits) of AWCR and AFD, for K = 10 and 15.

Figure 3

Weighted sum-rates (unit: bits) of AWCR and AFD, for K = 30 and 50.

Let $f^*$ denote the maximum sum-rate, $f^{(n)}$ the sum-rate at the n-th iteration, and $|f^{(n)} - f^*|$ the error in the sum-rate. Figures 4 and 5 show this error versus the number of iterations. Note that $f^*$ can be determined using the fixed-point characterization in Lemma 2. As shown in these figures, both algorithms converge linearly; the proposed algorithm exhibits a much steeper slope of the error curve, which translates into a faster convergence rate.
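Linear convergence means that the log-error decreases roughly linearly with the iteration index, so the slope can be estimated with a least-squares fit. The error sequence below is synthetic, chosen only to illustrate the fit.

```python
import numpy as np

# synthetic linearly convergent error sequence: |f^(n) - f*| = C * r**n
C, r = 2.0, 0.1
n = np.arange(1, 11)
err = C * r ** n

# for linear convergence, log10(err) is an affine function of n with slope
# log10(r); here log10(0.1) = -1, i.e., one decimal digit gained per iteration
slope = np.polyfit(n, np.log10(err), 1)[0]
print(round(slope, 6))  # -> -1.0
```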

Figure 4

Error functions of AWCR and AFD, for K = 10 and 15.

Figure 5

Error functions of AWCR and AFD, for K = 30 and 50.

We can further observe that the convergence rate of the proposed algorithm is not sensitive to an increase in the number of users. To make this precise, we define

$$N_{\mathrm{AWCR}} \triangleq \min\bigl\{\, n \;:\; |f^{(j)} - f^*| < \epsilon f^*,\ \text{for all } j \ge n \,\bigr\},$$

where the points $\{(j, f^{(j)})\}$ are generated by the AWCR and $\epsilon = 10^{-3}$ without loss of generality; $N_{\mathrm{AFD}}$ is defined analogously for the AFD. Each of these numbers can be regarded as the required number of iterations of the corresponding algorithm. We simulate different choices of K and list the corresponding $N_{\mathrm{AWCR}}$ and $N_{\mathrm{AFD}}$ in Table 1. We observe that, in the simulated range, the proposed algorithm requires about 2 iterations to converge, whereas the AFD requires far more.
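The definition of $N_{\mathrm{AWCR}}$ can be implemented directly; the objective values below are synthetic and serve only to illustrate the computation (here $f^* = 10$ and $\epsilon = 10^{-3}$, so the threshold is $\epsilon f^* = 0.01$).

```python
def required_iterations(f_vals, f_star, eps=1e-3):
    """Smallest n (1-indexed) with |f^(j) - f*| < eps*f* for all j >= n."""
    for n in range(len(f_vals)):
        if all(abs(fj - f_star) < eps * f_star for fj in f_vals[n:]):
            return n + 1
    return None  # tolerance not reached within the recorded iterations

# monotone objective values approaching f* = 10
f_vals = [6.0, 9.5, 9.995, 9.9995, 10.0]
print(required_iterations(f_vals, 10.0))  # -> 3
```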

Table 1 Comparison of the convergence rate

Since the AFD and the proposed algorithm use the same matrix inversion operations, which constitute the dominant part of the computation of the gradient of the objective, both algorithms have similar computational complexity $O(m^3)$ per iteration (refer to [28]). Indeed, for an $m \times m$ matrix, computing the inverse takes $m(m^2 - 1) + m(m - 1)^2$, i.e., $O(m^3)$, arithmetic operations, and computing the determinant takes $\frac{2}{3}m^3 + m$, i.e., $O(m^3)$, operations (the Cholesky decomposition is used for efficiency, given the positive definite matrices involved). Since these operations are invoked a bounded number of times per iteration, the per-iteration computational complexity of both AFD and AWCR is $O(m^3)$.
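The Cholesky route to the determinant and the $O(m^3)$ inverse mentioned above can be illustrated with a small numerical check (numpy, m = 4; the matrix `M` is an arbitrary positive-definite example):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
A = rng.standard_normal((m, m))
M = A @ A.T + m * np.eye(m)       # a symmetric positive-definite test matrix

L = np.linalg.cholesky(M)         # O(m^3) Cholesky factorization, M = L L^T
log_det = 2.0 * np.sum(np.log(np.diag(L)))   # det(M) = prod(diag(L))^2
M_inv = np.linalg.solve(M, np.eye(m))        # inverse, also O(m^3)

print(np.isclose(log_det, np.log(np.linalg.det(M))))  # True
print(np.allclose(M @ M_inv, np.eye(m)))              # True
```

Working with the log-determinant from the Cholesky diagonal also avoids overflow that a direct determinant can incur for larger m.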

Also, to allow the algorithms to be conveniently checked, deterministic instances are chosen as $\eta_k = k / \sum_{j=1}^{K} j$ for all $k$, $P_t = 10$ dB and $P_i = 9$ dB for all $i$, and the channel gains are randomly generated as

$$[\mathbf{h}_1, \mathbf{h}_2, \mathbf{h}_3, \mathbf{h}_4] = \begin{bmatrix} 0.3864+0.3319i & 0.6040+0.3786i & 0.3432+0.0937i & 0.0561-0.0556i \\ 0.5987-0.6389i & 0.8495+0.3909i & 0.4211+1.1264i & 1.0855-0.4820i \\ 0.1742+0.0254i & 0.0848-0.1440i & 0.1058+0.7201i & 0.4288-0.7245i \\ 0.4688-0.4437i & 0.0462-1.4526i & 0.3074-1.1175i & 0.9527-0.8728i \end{bmatrix}$$

and

$$[\mathbf{h}_1, \mathbf{h}_2, \mathbf{h}_3, \mathbf{h}_4, \mathbf{h}_5] = \begin{bmatrix} 0.2042-0.8059i & 0.3288+0.4492i & 0.9598+0.0608i & 0.9767-0.2270i & 1.3841-0.8709i \\ 0.3036-0.1493i & 0.2623-0.4253i & 0.7231-1.4174i & 0.2231+0.8744i & 0.3568+0.7465i \\ 0.0395+0.8416i & 0.5150+0.3897i & 0.7339-0.3487i & 1.0983-0.4464i & 1.3184-0.0801i \\ 0.2601-0.7893i & 1.4935-0.7777i & 0.2756+0.3267i & 0.5006-1.6442i & 0.2403+0.2682i \end{bmatrix},$$

for K = 4 and K = 5, respectively. The normalized noise covariance $C_0$ is taken to be the identity matrix. The calculated weighted sum-rate is plotted as a function of the iterations in Figure 6. It shows that $N_{\mathrm{AWCR}} = 1$, the same minimum value for both cases, while $N_{\mathrm{AFD}} = 10$ and $12$ for K = 4 and K = 5, respectively.
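The deterministic weights $\eta_k = k / \sum_{j=1}^{K} j$ from the instance above can be reproduced directly (shown for K = 5):

```python
K = 5
# eta_k = k / sum_{j=1}^{K} j, so the weights grow linearly and sum to one
eta = [k / sum(range(1, K + 1)) for k in range(1, K + 1)]

print([round(e, 3) for e in eta])   # -> [0.067, 0.133, 0.2, 0.267, 0.333]
print(abs(sum(eta) - 1.0) < 1e-12)  # -> True
```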

Figure 6

Weighted sum-rates (unit: bits) of AWCR and AFD, for K = 4 and 5.

6 Conclusion

The proposed algorithm AWCR, a member of the class of iterative water-filling algorithms, solves the weighted sum-rate maximization problem for the SIMO-MAC in a CR network. By exploiting a variable weighting factor for the covariance update, together with the machinery of distributed and parallel computation, the proposed AWCR greatly speeds up the weighted sum-rate maximization. The required number of iterations for convergence is insensitive to an increase in the number of users. Furthermore, a novel GWWFA is proposed as a fundamental building block of the algorithm.

Convergence of the proposed algorithm is rigorously proved using the fixed-point theory developed here. Lemma 2 provides an equivalent optimality condition: a point is one of the optimal solutions to the maximum weighted sum-rate problem for the SIMO-MAC in the CR network if and only if it is a fixed point of the AWCR. In the derivation, and for more general problems, the assumption used in [9] that the optimal solution is unique can be removed from the convergence proof. Numerical examples demonstrate the effectiveness of the proposed algorithm: in the simulated range, the required number of iterations for convergence is fixed at two, a significant reduction compared with conventional algorithms.

Appendix 1

Proof of Lemma 1

Note that in the following proof, the notation n denotes the iteration index, for convenience.

The necessity is proved first. For the limit $(\bar S_1,\dots,\bar S_K)$ of any convergent subsequence, there is a convergent subsequence $\{(S_1^{(n_k)},\dots,S_K^{(n_k)})\}_{k=0}^{\infty}$ of $\{(S_1^{(n)},\dots,S_K^{(n)})\}_{n=0}^{\infty}$ such that

$$(\bar S_1,\dots,\bar S_K) = \lim_{k\to\infty} \bigl(S_1^{(n_k)},\dots,S_K^{(n_k)}\bigr),$$

where $\{(S_1^{(n)},\dots,S_K^{(n)})\}_{n=0}^{\infty}$ is the point sequence generated by Algorithm AWCR.

Assume $(\tilde S_1^{(n+1)},\dots,\tilde S_K^{(n+1)}) \in \arg\max_{(S_1,\dots,S_K)\in V_d} \sum_{i=1}^{K} f(S_1^{(n)},\dots,S_{i-1}^{(n)}, S_i, S_{i+1}^{(n)},\dots,S_K^{(n)})$, following the definition of Algorithm AWCR. The definition of Algorithm AWCR implies that

$$\sum_{i=1}^{K} f\bigl(S_1^{(n)},\dots,S_{i-1}^{(n)}, \tilde S_i^{(n+1)}, S_{i+1}^{(n)},\dots,S_K^{(n)}\bigr) \;\ge\; \sum_{i=1}^{K} f\bigl(S_1^{(n)},\dots,S_{i-1}^{(n)}, S_i, S_{i+1}^{(n)},\dots,S_K^{(n)}\bigr), \tag{28}$$

for any $n$ and $(S_1,\dots,S_K)\in V_d$. Replacing $n$ with $n_k$, we obtain:

$$\sum_{i=1}^{K} f\bigl(S_1^{(n_k)},\dots,S_{i-1}^{(n_k)}, \tilde S_i^{(n_k+1)}, S_{i+1}^{(n_k)},\dots,S_K^{(n_k)}\bigr) \;\ge\; \sum_{i=1}^{K} f\bigl(S_1^{(n_k)},\dots,S_{i-1}^{(n_k)}, S_i, S_{i+1}^{(n_k)},\dots,S_K^{(n_k)}\bigr). \tag{29}$$

We have the following relationships:

$$\begin{aligned} f\bigl(S_1^{(n+1)},\dots,S_K^{(n+1)}\bigr) &\ge f\Bigl(\tfrac{K-1}{K}\bigl(S_1^{(n)},\dots,S_K^{(n)}\bigr) + \tfrac{1}{K}\bigl(\tilde S_1^{(n+1)},\dots,\tilde S_K^{(n+1)}\bigr)\Bigr) \\ &= f\Bigl(\sum_{i=1}^{K} \tfrac{1}{K}\bigl(S_1^{(n)},\dots,S_{i-1}^{(n)}, \tilde S_i^{(n+1)}, S_{i+1}^{(n)},\dots,S_K^{(n)}\bigr)\Bigr) \\ &\ge \tfrac{1}{K}\sum_{i=1}^{K} f\bigl(S_1^{(n)},\dots,S_{i-1}^{(n)}, \tilde S_i^{(n+1)}, S_{i+1}^{(n)},\dots,S_K^{(n)}\bigr) \\ &\ge \tfrac{1}{K}\sum_{i=1}^{K} f\bigl(S_1^{(n)},\dots,S_{i-1}^{(n)}, S_i^{(n)}, S_{i+1}^{(n)},\dots,S_K^{(n)}\bigr) \\ &= f\bigl(S_1^{(n)},\dots,S_K^{(n)}\bigr). \end{aligned}$$

Among the relationships above, the first inequality and the first equality hold by step (3) of Algorithm AWCR; the second inequality follows from the concavity of the function $f$; the third inequality and the second equality hold by step (2) of Algorithm AWCR, i.e., by the definition of $(\tilde S_1^{(n+1)},\dots,\tilde S_K^{(n+1)})$.

Thus, $f(S_1^{(n)},\dots,S_K^{(n)})$ is monotonically increasing with respect to $n$, and

$$f\bigl(S_1^{(n)},\dots,S_K^{(n)}\bigr) \;\le\; \tfrac{1}{K}\sum_{i=1}^{K} f\bigl(S_1^{(n)},\dots,S_{i-1}^{(n)}, \tilde S_i^{(n+1)}, S_{i+1}^{(n)},\dots,S_K^{(n)}\bigr) \;\le\; f\bigl(S_1^{(n+1)},\dots,S_K^{(n+1)}\bigr). \tag{30}$$

From (30), we obtain $\sum_{i=1}^{K} f\bigl(S_1^{(n_k)},\dots,S_{i-1}^{(n_k)}, \tilde S_i^{(n_k+1)}, S_{i+1}^{(n_k)},\dots,S_K^{(n_k)}\bigr) \le K f\bigl(S_1^{(n_k+1)},\dots,S_K^{(n_k+1)}\bigr)$. From (29), we have:

$$\sum_{i=1}^{K} f\bigl(S_1^{(n_k)},\dots,S_{i-1}^{(n_k)}, \tilde S_i^{(n_k+1)}, S_{i+1}^{(n_k)},\dots,S_K^{(n_k)}\bigr) \;\ge\; \sum_{i=1}^{K} f\bigl(S_1^{(n_k)},\dots,S_{i-1}^{(n_k)}, S_i, S_{i+1}^{(n_k)},\dots,S_K^{(n_k)}\bigr).$$

Hence, $K f\bigl(S_1^{(n_k+1)},\dots,S_K^{(n_k+1)}\bigr) \ge \sum_{i=1}^{K} f\bigl(S_1^{(n_k)},\dots,S_{i-1}^{(n_k)}, S_i, S_{i+1}^{(n_k)},\dots,S_K^{(n_k)}\bigr)$. Letting $k$ approach infinity, we obtain

$$\sum_{i=1}^{K} f\bigl(\bar S_1,\dots,\bar S_K\bigr) = K f\bigl(\bar S_1,\dots,\bar S_K\bigr) \;\ge\; \sum_{i=1}^{K} f\bigl(\bar S_1,\dots,\bar S_{i-1}, S_i, \bar S_{i+1},\dots,\bar S_K\bigr),$$

where $(S_1,\dots,S_K)\in V_d$ is arbitrary. Thus, $(\bar S_1,\dots,\bar S_K) \in \arg\max_{(S_1,\dots,S_K)\in V_d} \sum_{i=1}^{K} f(\bar S_1,\dots,\bar S_{i-1}, S_i, \bar S_{i+1},\dots,\bar S_K)$.

Note that the set $\arg\max_{(S_1,\dots,S_K)\in V_d} \sum_{i=1}^{K} f(\bar S_1,\dots,\bar S_{i-1}, S_i, \bar S_{i+1},\dots,\bar S_K)$ need not be a singleton. However, we may choose $(\bar S_1,\dots,\bar S_K)$ itself as one of the optimal solutions to the problem $\max_{(S_1,\dots,S_K)\in V_d} \sum_{i=1}^{K} f(\bar S_1,\dots,\bar S_{i-1}, S_i, \bar S_{i+1},\dots,\bar S_K)$; this corresponds to step (2) of Algorithm AWCR. Further, under this choice, $(\bar S_1,\dots,\bar S_K) = \beta^*(\bar S_1,\dots,\bar S_K) + (1-\beta^*)(\bar S_1,\dots,\bar S_K)$, which corresponds to step (3) of Algorithm AWCR.

Therefore, by the two correspondences above and the definition of Algorithm AWCR, $(\bar S_1,\dots,\bar S_K)$ is a fixed point under Algorithm AWCR, viewed as a mapping.

The sufficiency will be proved as follows:

If $(\bar S_1,\dots,\bar S_K)$ is a fixed point under Algorithm AWCR, then setting $(S_1^{(0)},\dots,S_K^{(0)}) = (\bar S_1,\dots,\bar S_K)$ gives $(S_1^{(1)},\dots,S_K^{(1)}) = (\bar S_1,\dots,\bar S_K)$, since $(\bar S_1,\dots,\bar S_K)$ is a fixed point under Algorithm AWCR. If it is assumed that $(S_1^{(n)},\dots,S_K^{(n)}) = (\bar S_1,\dots,\bar S_K)$, then $(S_1^{(n+1)},\dots,S_K^{(n+1)}) = (\bar S_1,\dots,\bar S_K)$, for the same reason. By mathematical induction, $(S_1^{(n)},\dots,S_K^{(n)}) = (\bar S_1,\dots,\bar S_K) \in V_d$ for all $n$, and hence $\lim_{n\to\infty}(S_1^{(n)},\dots,S_K^{(n)}) = (\bar S_1,\dots,\bar S_K) \in V_d$. Therefore, the sufficiency holds.

Note that the proof above does not require the assumption

$$(\bar S_1,\dots,\bar S_K) = \lim_{k\to\infty} \bigl(S_1^{(n_k+1)},\dots,S_K^{(n_k+1)}\bigr).$$

Appendix 2

Proof of Lemma 2

The necessity is proved first.

According to the definition of Algorithm AWCR, for the fixed point $(\bar S_1,\dots,\bar S_K)\in V_d$ it is easily seen that

$$(\bar S_1,\dots,\bar S_K) \in \arg\max_{(S_1,\dots,S_K)\in V_d} \sum_{i=1}^{K} f\bigl(\bar S_1,\dots,\bar S_{i-1}, S_i, \bar S_{i+1},\dots,\bar S_K\bigr). \tag{31}$$

Since (31) is a convex optimization problem with a concave objective function, the optimality condition of convex optimization (refer to [29], Proposition 3.1), which is necessary and sufficient for (31), implies that

$$\Bigl[\tfrac{\partial f}{\partial S_1}\bigl(\bar S_1,\dots,\bar S_K\bigr),\dots,\tfrac{\partial f}{\partial S_K}\bigl(\bar S_1,\dots,\bar S_K\bigr)\Bigr] \cdot \bigl[(S_1-\bar S_1),\dots,(S_K-\bar S_K)\bigr]^{T} \;\le\; 0, \tag{32}$$

where $(S_1, S_2, \dots, S_K) \in V_d$ is arbitrary, and the row vector $\frac{\partial f}{\partial S_i}$ denotes the transpose of the gradient of $f$ with respect to the variable $S_i$.

It is seen that formula (32) is exactly the optimality condition of the optimization problem (3). Therefore, the fixed point $(\bar S_1,\dots,\bar S_K)\in V_d$ is one of the optimal solutions to the problem in (3).

The sufficiency will be proved as follows:

$$\begin{aligned} \sum_{i=1}^{K} f\bigl(\bar S_1,\dots,\bar S_{i-1}, S_i, \bar S_{i+1},\dots,\bar S_K\bigr) &= K \sum_{i=1}^{K} \tfrac{1}{K}\, f\bigl(\bar S_1,\dots,\bar S_{i-1}, S_i, \bar S_{i+1},\dots,\bar S_K\bigr) \\ &\le K f\Bigl(\tfrac{1}{K}\bigl(S_1,\dots,S_K\bigr) + \tfrac{K-1}{K}\bigl(\bar S_1,\dots,\bar S_K\bigr)\Bigr) \\ &\le K f\bigl(\bar S_1,\dots,\bar S_K\bigr) = \sum_{i=1}^{K} f\bigl(\bar S_1,\dots,\bar S_K\bigr). \end{aligned}$$

Among the relationships above, the first equality holds for any $(S_1,\dots,S_K)\in V_d$; the first inequality holds because the function $f$ is concave and the feasible set $V_d$ is convex; the second inequality holds because $(\bar S_1,\dots,\bar S_K)\in V_d$ is an optimal solution to the problem in (3).

Hence, $\sum_{i=1}^{K} f\bigl(\bar S_1,\dots,\bar S_{i-1}, S_i, \bar S_{i+1},\dots,\bar S_K\bigr) \le \sum_{i=1}^{K} f\bigl(\bar S_1,\dots,\bar S_K\bigr)$ for all $(S_1,\dots,S_K)\in V_d$.

According to the definition of the optimal solution to (20) mentioned above,

$$(\bar S_1,\dots,\bar S_K) \in \arg\max_{(S_1,\dots,S_K)\in V_d} \sum_{i=1}^{K} f\bigl(\bar S_1,\dots,\bar S_{i-1}, S_i, \bar S_{i+1},\dots,\bar S_K\bigr).$$

According to steps (2) and (3) of Algorithm AWCR, $(\bar S_1,\dots,\bar S_K)\in V_d$ is a fixed point under Algorithm AWCR. □

References

1. Jiang H, Lai L, Fan R, Poor HV: Optimal selection of channel sensing order in cognitive radio. IEEE Trans. Wirel. Commun. 2009, 8: 297-307.
2. Prasad RV, Pawelczak P, Hoffmeyer JA, Berger HS: Cognitive functionality in next generation wireless networks: standardization efforts. IEEE Commun. Mag. 2008, 46: 72-78.
3. Mitola J, Maguire GQ: Cognitive radios: making software radios more personal. IEEE Pers. Commun. 1999, 6: 13-18. 10.1109/98.788210
4. Haykin S: Cognitive radio: brain-empowered wireless communications. IEEE J. Sel. Areas Commun. 2005, 23: 201-220.
5. Devroye N, Vu M, Tarokh V: Cognitive radio networks: information theory limits, models and design. IEEE Signal Process. Mag. 2008, 25: 12-23.
6. Biglieri E, Calderbank R, Constantinides A, Goldsmith A, Paulraj A, Poor HV: MIMO Wireless Communications. Cambridge: Cambridge University Press; 2007.
7. Tse D, Hanly S: Multiaccess fading channels. Part I: polymatroid structure, optimal resource allocation and throughput capacities. IEEE Trans. Inf. Theory 1998, 44: 2796-2815. 10.1109/18.737513
8. Vishwanath S, Jafar S, Goldsmith A: Optimum power and rate allocation strategies for multiple access fading channels. In Proc. IEEE Vehicular Technology Conf., Rhodes; 2001.
9. Jindal N, Rhee W, Vishwanath S, Jafar SA, Goldsmith A: Sum power iterative water-filling for multi-antenna Gaussian broadcast channels. IEEE Trans. Inf. Theory 2005, 51: 1570-1580. 10.1109/TIT.2005.844082
10. Yu W: Sum-capacity computation for the Gaussian vector broadcast channel via dual decomposition. IEEE Trans. Inf. Theory 2006, 52: 754-759.
11. Yu W, Rhee W, Boyd S, Cioffi JM: Iterative water-filling for Gaussian vector multi-access channels. IEEE Trans. Inf. Theory 2004, 50: 145-152. 10.1109/TIT.2003.821988
12. Telatar E: Capacity of multi-antenna Gaussian channels. Eur. Trans. Telecommun. 1999, 10: 585-596. 10.1002/ett.4460100604
13. Jindal N, Vishwanath S, Goldsmith A: On the duality of Gaussian multiple-access and broadcast channels. IEEE Trans. Inf. Theory 2004, 50: 768-783. 10.1109/TIT.2004.826646
14. Viswanath P, Tse D: Sum capacity of the multiple antenna Gaussian broadcast channel and uplink-downlink duality. IEEE Trans. Inf. Theory 2003, 49: 1912-1921. 10.1109/TIT.2003.814483
15. Weingarten H, Steinberg Y, Shamai S: The capacity region of the Gaussian multiple-input multiple-output broadcast channel. IEEE Trans. Inf. Theory 2006, 52: 3936-3964.
16. Kobayashi M, Caire G: An iterative water-filling algorithm for maximum weighted sum-rate of Gaussian MIMO-BC. IEEE J. Sel. Areas Commun. 2006, 24: 1640-1646.
17. Zhang L, Liang Y-C, Xin Y: Joint beamforming and power allocation for multiple access channels in cognitive radio networks. IEEE J. Sel. Areas Commun. 2008, 26: 38-51.
18. Zhang L, Xin Y, Liang YC, Poor HV: Cognitive multiple access channels: optimal power allocation for weighted sum rate maximization. IEEE Trans. Commun. 2009, 57: 2754-2762.
19. Zhang R, Cui S, Liang YC: On ergodic sum capacity of fading cognitive multiple-access and broadcast channels. IEEE Trans. Inf. Theory 2009, 55: 5161-5178.
20. Palomar D: Practical algorithms for a family of waterfilling solutions. IEEE Trans. Signal Process. 2005, 53: 686-695.
21. Hsu C, Su H, Lin P: Joint subcarrier pairing and power allocation for OFDM transmission with decode-and-forward relaying. IEEE Trans. Signal Process. 2011, 59: 399-414.
22. Qi Q, Minturn A, Yang Y: An efficient water-filling algorithm for power allocation in OFDM-based cognitive radio systems. In Proc. International Conference on Systems and Informatics (ICSAI), Yantai; 2012: 2069-2073.
23. Rong Y, Tang X, Hua Y: A unified framework for optimizing linear non-regenerative multicarrier MIMO relay communication systems. IEEE Trans. Signal Process. 2009, 57: 4837-4852.
24. Quarteroni A, Sacco R, Saleri F: Numerical Mathematics. Berlin Heidelberg: Springer; 2010.
25. He P, Zhao L: Correction of convergence proof for iterative water-filling in Gaussian MIMO broadcast channels. IEEE Trans. Inf. Theory 2011, 57: 2539-2543.
26. Zangwill W: Nonlinear Programming: A Unified Approach. Englewood Cliffs: Prentice-Hall; 1969.
27. Sun W, Yuan Y: Optimization Theory and Methods: Nonlinear Programming. New York: Springer; 2006.
28. Papadimitriou CH, Steiglitz K: Combinatorial Optimization: Algorithms and Complexity, Unabridged edition. Mineola: Dover Publications; 1998.
29. Bertsekas DP, Tsitsiklis JN: Parallel and Distributed Computation: Numerical Methods. Nashua: Athena Scientific; 1997.


Acknowledgements

The authors sincerely acknowledge the support from the Natural Sciences and Engineering Research Council (NSERC) of Canada under grant number RGPIN/293237-2009, the National Natural Science Foundation of China (NSFC) under grant number 61021001, and the Tsinghua National Laboratory for Information Science and Technology (TNList). The authors are grateful to the anonymous reviewers and guest editors for their valuable comments and suggestions, which improved the quality of the article.

Author information

Correspondence to Peter He.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

He, P., Zhao, L. & Lu, J. Weighted sum-rate maximization for multi-user SIMO multiple access channels in cognitive radio networks. EURASIP J. Adv. Signal Process. 2013, 80 (2013) doi:10.1186/1687-6180-2013-80


Keywords

  • Channel capacity
  • Multi-user MIMO (MU-MIMO)
  • Multi-access Channels (MAC)
  • Cognitive Radio (CR)
  • Multiple-antenna
  • Broadcast systems
  • Maximum sum-rate
  • Optimal power distribution
  • Optimization methods
  • Water-filling
  • Algorithm with mixed constraints