In this section, we first review the optimal structure of the power allocation via majorization theory. Then, we present a fast algorithm obtained by restricting attention to a new search variable, and we show that our new method requires less than half the computational cost of the existing method in the worst case and is much faster in general.
4.1 Review of the optimal structure of the power allocation via majorization theory
Letting and , problem (6) can be equivalently rewritten as
(7)
where , , , . Note that in deriving (7), the logarithm is dropped because of its monotonicity. This optimization problem is difficult to solve directly due to its non-convexity. However, if the value of T is fixed, the problem can be solved via majorization theory on a specific hyperplane described by a linear equation . The optimal solution can then be found by performing a one-dimensional search on T. Defining , the corresponding problem becomes
(8)
where and . Since is a constant for fixed T, according to Schur-concavity of f(y) (see Lemma 1), it is easy to see that
where and . Therefore, solving the optimization problem (8) for fixed T is equivalent to finding the optimal vector y∗ such that y∗≻y for any feasible y. The corresponding y∗ can be explicitly determined as follows.
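As an aside, the majorization relation y∗≻y used here can be checked numerically. The sketch below is only illustrative and assumes the standard definition of majorization (equal totals, with the sorted partial sums of one vector dominating those of the other); the function name `majorizes` is introduced here for illustration.

```python
def majorizes(x, y, tol=1e-9):
    """Check the standard majorization relation x ≻ y:
    equal totals, and sorted partial sums of x dominate those of y."""
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    if len(xs) != len(ys) or abs(sum(xs) - sum(ys)) > tol:
        return False
    cx = cy = 0.0
    for a, b in zip(xs, ys):
        cx += a
        cy += b
        if cx < cy - tol:   # partial sums of x must dominate
            return False
    return True
```

For example, [3, 0, 0] majorizes [1, 1, 1], but not conversely; by Schur-concavity, a vector that majorizes all feasible vectors yields the extremal value of f.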
Theorem 1.
For any fixed T, the optimal y∗ subject to the constraints of problem (8) must be structured as
(10)
where , the value of k is determined by the following inequality
(11)
and yk is given by
(12)
Proof.
In order to simplify the presentation, we rewrite the constraints in (8) as
where and . It is easy to see that qi≤qi+1 and , due to their definitions. To obtain the optimal vector y∗, we employ an induction argument (similar to that in [6]).
1. If , must be satisfied.
2. Increasing Q by a small amount Δ (0<Δ≤w−qM), i.e., , the vector y∗ must be structured as
The proof of this result is trivial, and is omitted for brevity.
3. Increasing Q further (), the vector y∗ becomes
4. If , we have
5. In general, if , the maximal y∗ (in the majorization order) is
Now, it is clear that the optimal y∗ is related to the value of Q, and is structured as
(13)
where the value of k is determined by the following inequality
(14)
and yk is given by
(15)
Recall that and . There always exists a value of n (0≤n≤k−1) such that
(16)
Consequently, the optimal solution may be rewritten as
(17)
On the other hand, it should be noted that , which implies that n must be 0. Thus, we obtain the desired result in Theorem 1. Additionally, for the case of or, equivalently, li>T+1, any feasible solution must be [w,w,⋯,w]T.
From the above discussion, the following theorem can be obtained.
Theorem 2.
If there is any solution to problem (7), it is structured as
(18)
where , k is determined by (11), and xk equals
(19)
4.2 Our improved method
Although the solution structure (18) has been addressed in [5], the relationship among k, xk, and T is not well revealed there due to the limitation of the proof procedure. According to the results in [5], a triplet (k,xk,T) can be determined by any two of its elements, for example k and xk. Our new derivation shows that (k,xk,T) can in fact be determined by the single element T, since k and xk can be obtained from (11) and (19) given T. Therefore, the candidate solutions of (18) can be evaluated over different values of T instead of (k,xk). As will be shown later, this plays an important role in reducing the computational complexity of our improved method.
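The resulting outer search can be sketched generically: for each candidate T, the structured solution is built and scored, and the best pair is kept. In the sketch below, `build_candidate` and `objective_value` are hypothetical callbacks standing in for the closed-form expressions (the structure of (18) via (11) and (19), and the objective of problem (7)), which are not reproduced here.

```python
def search_over_T(T_candidates, build_candidate, objective_value):
    """One-dimensional search over T: build the structured candidate
    for each T, score it, and keep the best-scoring triple."""
    best_val, best_x, best_T = float("inf"), None, None
    for T in T_candidates:
        x = build_candidate(T)        # structured solution for this T
        val = objective_value(x, T)   # objective of problem (7)
        if val < best_val:
            best_val, best_x, best_T = val, x, T
    return best_val, best_x, best_T
```

Only the scan structure is meant to match the text; the callbacks must be supplied from the paper's closed-form expressions.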
A numerical search over different values of k is used in [3] to find the optimal xk, and the search is reduced to a finite number of one-dimensional spaces. The search is further reduced in [5] from a set of one-dimensional spaces to a finite set of points by employing the following lemma.
Lemma 2.
In the optimal solution to problem (7), xk must take one of the marginal values given in
(20)
where (see Theorems 3 and 4 in [5]).
Applying Lemma 2, finding all possible power allocations requires checking each k, 1≤k≤M, while trying the two marginal values given in (20). Therefore, almost 2M candidate points need to be searched in the algorithm proposed in [5]. However, we note that if the constraints in Lemma 2 are rearranged, a more efficient method can be derived. The key observations behind our improved method are:
1. xk is a monotonically increasing function of T, and vice versa (see (19)).
2. If the optimal xk takes one of the marginal values given in (20), then xk must be one of the values among .
3. Given T, the values of k and xk are uniquely determined, and vice versa (the relationship among k, xk, and T is given in Theorem 2). Therefore, searching candidate values of T instead of k and xk does not result in any loss of optimality.
Keeping these observations in mind, we now proceed to derive our efficient method based on the following theorem.
Theorem 3.
In the optimal solution to problem (7), T must take one of the candidate values given by
(21)
where and Tk<Tk+1.
Proof.
Relying on Observations 2 and 3, it is clear that proving the above theorem is equivalent to specifying the values of T corresponding to the candidate values of k and xk, where and 1≤k≤M. To gain some insight, we now discuss the possible values of xk case by case.
1. For the case of xk=lk, based on (19), the corresponding T is given by
(22)
where 1≤k≤M. Thus, the candidate values of T in this case are given by
(23)
2. For the case of xk=(Xmax+1)[1−(M−k)φ]−L−1, using the expression for xk in (19), we obtain
(24)
or, equivalently, T=Xmax. Hence, in this case, the candidate set of T is
3. For the case of , we similarly get
Consequently, the candidate set is given as
4. For the case of , we have
(28)
where 1≤k≤M. Therefore, the candidate set of T is
(29)
Putting all the possible T together and using the fact that any feasible T satisfies T<Xmax and , we are able to derive the whole candidate set of T as
(30)
For any Tk and Tk+1, the solutions are structured as
(31)
(32)
Further, we note that xk can be written as
(33)
In this form, it is easy to see that the structures of (32) and (33) are equivalent and that φ(1+Tk)≤lk+1 (which follows from the first constraint of (7)). So, according to Observation 1 and
(34)
we achieve the desired result that Tk<Tk+1. It should be noted that since Tk<Tk+1, there always exists a unique j such that
which means the structure of the solution for the case of is determined by the value of j.
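Because the candidates are strictly increasing, locating this unique j for a given value amounts to a binary search over a sorted list. A minimal sketch (the name `locate_j` is introduced here for illustration):

```python
import bisect

def locate_j(T_candidates, value):
    """Given strictly increasing candidates T_0 < T_1 < ..., return the
    unique index j with T_j <= value < T_{j+1} (-1 if value < T_0)."""
    return bisect.bisect_right(T_candidates, value) - 1
```

The strict ordering Tk<Tk+1 is exactly what guarantees that this index is unique.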
Obviously, the candidate set of T contains fewer than M+2 points that need to be searched to find the optimal solution. Based on these conclusions, our improved algorithm is outlined as follows. Compared with the original 2M search points, less than half the computation is required by our method in the worst case.
1. Initialization: Xmax=PRmax/I, , li=pmax gi/I.
2. For all 0≤k≤M, do the following:
(a) Compute Tk as
(b) If , compute ϕk as
(c) If , compute ϕk as
and go to step 3; otherwise, continue (go to step 2a).
3. Find the smallest ϕk, denoted ϕ∗, and retrieve the corresponding x.
4. Output pi and C:
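The four steps above can be sketched as a generic loop. Here `compute_T`, `is_feasible`, `compute_phi`, and `recover_x` are hypothetical callbacks standing in for the closed-form expressions of steps 2a-2c (which are not reproduced in this sketch):

```python
def improved_search(M, compute_T, is_feasible, compute_phi, recover_x):
    """Skeleton of the improved algorithm: scan the at most M+2 candidate
    values T_k and keep the feasible candidate with the smallest phi_k."""
    best_phi, best_x = float("inf"), None
    for k in range(M + 1):
        T_k = compute_T(k)              # step 2a
        if not is_feasible(k, T_k):     # feasibility guard of steps 2b/2c
            continue
        phi_k = compute_phi(k, T_k)
        if phi_k < best_phi:            # step 3: track the smallest phi
            best_phi, best_x = phi_k, recover_x(k, T_k)
    return best_phi, best_x             # step 4: output
```

Since the loop visits at most M+2 candidates, this sketch makes the claimed contrast with the roughly 2M points of [5] concrete.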
Discussion:
- If the value of is small, very few points need to be searched by our method. In particular, if , only a single point needs to be searched. However, if , there is no feasible solution.
- If the value of is large, e.g., , it is clear that the third constraint in (6) does not affect the final solution and can be removed from the optimization problem.