In this section, we first review the optimal structure of the power allocation via majorization theory. We then present a fast algorithm that restricts the search to a single new variable, and we show that our new method requires less than half of the computational cost of the existing method in the worst case, and is typically much faster in general.
4.1 Review of the optimal structure of the power allocation via majorization theory
Let x_i\triangleq p_i g_i/I and T\triangleq \sum_{j=1}^{M}x_j; then problem (6) can be equivalently rewritten as
\begin{array}{ll}\underset{\mathbf{x},T}{\min} & \prod_{i=1}^{M}\frac{1+T-x_i}{1+T},\\ \text{subject to:} & \forall i,\ \phi(1+T)\le x_i\le l_i,\\ & \sum_{i=1}^{M}x_i=T,\quad 0\le T\le X_{\text{max}}\end{array}
(7)
where \mathbf{x}\triangleq [x_1,x_2,\cdots,x_M]^T, l_i\triangleq g_i p_{\text{max}}/I, \phi\triangleq \gamma/(\gamma+1), and X_{\text{max}}\triangleq P_R^{\text{max}}/I. Note that in deriving (7), the logarithm is dropped because of its monotonicity. This optimization problem is difficult to solve directly due to its nonconvexity. However, if the value of T is fixed, the problem can be solved via majorization theory on the hyperplane described by the linear equation \sum_{j=1}^{M}x_j=T. The optimal solution can then be found by performing a one-dimensional search over T. Defining y_i\triangleq \frac{1+T-x_i}{1+T}, the corresponding problem becomes
\begin{array}{ll}\underset{\mathbf{y}}{\min} & \prod_{i=1}^{M}y_i=\underset{\mathbf{y}}{\min}\,f(\mathbf{y}),\\ \text{subject to:} & \forall i,\ \max\left\{\frac{1+T-l_i}{1+T},0\right\}\le y_i\le 1-\phi,\\ & \sum_{i=1}^{M}y_i=M-\frac{T}{1+T},\quad 0\le T\le X_{\text{max}}\end{array}
(8)
where \mathbf{y}\triangleq [y_1,y_2,\cdots,y_M]^T and f:\mathbf{y}\mapsto \prod_{i=1}^{M}y_i. Since \sum_{i=1}^{M}y_i is constant for fixed T, according to the Schur-concavity of f(\mathbf{y}) (see Lemma 1), it is easy to see that
{\mathbf{y}}^{\ast}\succ \mathbf{y}\phantom{\rule{1em}{0ex}}\Rightarrow \phantom{\rule{1em}{0ex}}f\left({\mathbf{y}}^{\ast}\right)\le f\left(\mathbf{y}\right)
(9)
where \mathbf{y}^{\ast}\triangleq [y_1^{\ast},y_2^{\ast},\cdots,y_M^{\ast}]^T and \sum_{i=1}^{M}y_i^{\ast}=M-\frac{T}{1+T}. Therefore, solving the optimization problem (8) for fixed T is equivalent to finding the optimal vector \mathbf{y}^{\ast} such that \mathbf{y}^{\ast}\succ \mathbf{y} for any feasible \mathbf{y}. The corresponding \mathbf{y}^{\ast} can be explicitly determined as follows.
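As a quick numerical check of (9) (an illustrative example of ours, not from the paper), take M=2 and two vectors with the same sum: \mathbf{y}^{\ast}=[0.9,\,0.1]^T majorizes \mathbf{y}=[0.6,\,0.4]^T, and indeed

f(\mathbf{y}^{\ast})=0.9\times 0.1=0.09\;\le\; 0.24=0.6\times 0.4=f(\mathbf{y}),

so the more unbalanced vector (in the majorization sense) yields the smaller product, which is why the minimizer of (8) is the maximal vector in the majorization order.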
Theorem 1.
For any fixed T, the optimal \mathbf{y}^{\ast} subject to the constraints of problem (8) must be structured as
\mathbf{y}^{\ast}=\left[\underbrace{\tilde{l}_1,\cdots,\tilde{l}_{k-1}}_{k-1},\,y_k,\,\underbrace{1-\phi,\cdots,1-\phi}_{M-k}\right]^T
(10)
where \tilde{l}_i\triangleq \frac{1+T-l_i}{1+T} and w\triangleq 1-\phi; the value of k is determined by the following inequality
\sum_{i=k+1}^{M}(w-\tilde{l}_i)<M-\frac{T}{1+T}-\sum_{i=1}^{M}\tilde{l}_i\le \sum_{i=k}^{M}(w-\tilde{l}_i)
(11)
and y_k is given by
y_k=\tilde{l}_k+\left(M-\frac{T}{1+T}-\sum_{i=1}^{M}\tilde{l}_i\right)-\sum_{i=k+1}^{M}(w-\tilde{l}_i).
(12)
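To make the structure of Theorem 1 concrete, here is a small numerical sketch (our own illustrative code; the function and variable names are not from the paper). It assumes l is sorted in decreasing order and that l_i\le 1+T for all i, so that every \tilde{l}_i is nonnegative; it locates k by scanning inequality (11) from the tail and fills in y_k via (12).

```python
import numpy as np

def optimal_y(T, l, phi):
    """Build y* of Theorem 1 for a fixed T.

    Assumes l is sorted in decreasing order (so l~_i increases with i)
    and l_i <= 1 + T for all i (the n = 0 case established in the proof).
    """
    M = len(l)
    w = 1.0 - phi                           # upper bound on each y_i
    lt = (1.0 + T - l) / (1.0 + T)          # l~_i, the lower bounds
    budget = M - T / (1.0 + T) - lt.sum()   # mass to distribute above the bounds
    for k in range(M, 0, -1):               # paper index k = M, ..., 1
        tail = (w - lt[k:]).sum()           # sum_{i=k+1}^{M} (w - l~_i)
        if tail < budget <= tail + (w - lt[k - 1]):   # inequality (11)
            y = lt.copy()
            y[k:] = w                       # last M - k entries sit at w
            y[k - 1] = lt[k - 1] + budget - tail      # eq. (12)
            return y
    return None                             # no k satisfies (11): T infeasible
```

For example, l=(2,1.5,1), \phi=0.2, and T=2 give y\approx(0.733,0.8,0.8), whose entries sum to M-\frac{T}{1+T} as required.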
Proof.
In order to simplify the presentation, we rewrite the constraints in (8) as
\forall i,\; q_i\le y_i\le w,\qquad Q\triangleq \sum_{i=1}^{M}y_i=M-\frac{T}{1+T}
where q_i\triangleq \max\{\tilde{l}_i,0\} and w\triangleq 1-\phi. By these definitions, q_i\le q_{i+1} and Q\ge \sum_{i=1}^{M}q_i. To obtain the optimal vector \mathbf{y}^{\ast}, we employ an induction argument (similar to that in [6]).

1.
If Q=\sum_{i=1}^{M}q_i, then y_i^{\ast}=q_i must hold for all i.

2.
Increasing Q by a small amount \Delta (0<\Delta\le w-q_M), i.e., Q=\sum_{i=1}^{M}q_i+\Delta, the vector \mathbf{y}^{\ast} must be structured as
\begin{array}{l}y_i^{\ast}=q_i,\quad i=1,2,\cdots,M-1,\\ y_M^{\ast}=q_M+\Delta.\end{array}
The proof of this result is trivial, and is omitted for brevity.

3.
Further increase Q (w-q_M<\Delta\le \sum_{i=M-1}^{M}(w-q_i)); then the vector \mathbf{y}^{\ast} will be
\begin{array}{l}y_i^{\ast}=q_i,\quad i=1,2,\cdots,M-2,\\ y_{M-1}^{\ast}=q_{M-1}+\Delta-(w-q_M),\\ y_M^{\ast}=w.\end{array}

4.
If \sum_{i=M-1}^{M}(w-q_i)<\Delta\le \sum_{i=M-2}^{M}(w-q_i), we have
\begin{array}{l}y_i^{\ast}=q_i,\quad i=1,2,\cdots,M-3,\\ y_{M-2}^{\ast}=q_{M-2}+\Delta-\sum_{i=M-1}^{M}(w-q_i),\\ y_{M-1}^{\ast}=y_M^{\ast}=w.\end{array}

5.
Generally, if \sum_{i=k+1}^{M}(w-q_i)<\Delta\le \sum_{i=k}^{M}(w-q_i), the maximal vector \mathbf{y}^{\ast} in the majorization order is
\begin{array}{l}y_i^{\ast}=q_i,\quad i=1,2,\cdots,k-1,\\ y_k^{\ast}=q_k+\Delta-\sum_{i=k+1}^{M}(w-q_i),\\ y_{k+1}^{\ast}=\cdots=y_M^{\ast}=w.\end{array}
Now, it is clear that the optimal y^{∗} is related to the value of Q, and is structured as
\mathbf{y}^{\ast}=\left[\underbrace{q_1,\cdots,q_{k-1}}_{k-1},\,y_k,\,\underbrace{w,\cdots,w}_{M-k}\right]^T,
(13)
where the value of k is determined by the following inequality
\sum_{i=k+1}^{M}(w-q_i)<M-\frac{T}{1+T}-\sum_{i=1}^{M}q_i\le \sum_{i=k}^{M}(w-q_i)
(14)
and y_k is given by
y_k=q_k+\left(M-\frac{T}{1+T}-\sum_{i=1}^{M}q_i\right)-\sum_{i=k+1}^{M}(w-q_i).
(15)
Recall that q_i=\max\{\tilde{l}_i,0\} and \tilde{l}_i\le \tilde{l}_{i+1}. There always exists a value of n (0\le n\le k-1) such that
q_i=\left\{\begin{array}{cc}0,& i\le n\\ \tilde{l}_i,& i>n\end{array}\right.
(16)
Consequently, the optimal solution may be rewritten as
\mathbf{y}^{\ast}=\left[\underbrace{0,\cdots,0}_{n},\,\underbrace{\tilde{l}_{n+1},\cdots,\tilde{l}_{k-1}}_{k-n-1},\,y_k,\,\underbrace{w,\cdots,w}_{M-k}\right]^T.
(17)
On the other hand, it should be noted that y_i=\frac{1+T-x_i}{1+T}>\frac{1}{1+T}>0, which implies that n must be 0. Thus, we obtain the desired result of Theorem 1. Additionally, we note that in the case of \tilde{l}_i<0 or, equivalently, l_i>T+1, if there exists any feasible solution, it must be [w,w,\cdots,w]^T.
From the above discussion, the following theorem can be obtained.
Theorem 2.
If there is any solution to problem (7), it is structured as
\mathbf{x}=\left[\underbrace{l_1,\cdots,l_{k-1}}_{k-1},\,x_k,\,\underbrace{\phi(1+T),\cdots,\phi(1+T)}_{M-k}\right]^T
(18)
where T=\sum_{i=1}^{M}[\mathbf{x}]_i, k is determined by (11), and x_k equals
\begin{array}{lll}x_k&=&(1+T)(1-y_k)\\ &=&(1+T)(1-(M-k)\phi)-\sum_{i=1}^{k-1}l_i-1.\end{array}
(19)
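In x-domain terms, Theorem 2 reduces the fixed-T subproblem to a mechanical fill-in. The sketch below (our own illustrative code; names are hypothetical) caps the first k-1 entries at l_i, pins the last M-k entries at the lower bound \phi(1+T), and solves (19) for the transition entry x_k; it returns None when no k keeps x_k inside [\phi(1+T),\,l_k], i.e., when T is infeasible.

```python
import numpy as np

def optimal_x(T, l, phi):
    """Build x of Theorem 2 for a fixed T (l sorted in decreasing order)."""
    M = len(l)
    lo = phi * (1.0 + T)                      # common lower bound phi(1+T)
    for k in range(1, M + 1):                 # paper index k = 1, ..., M
        # eq. (19): x_k absorbs whatever is left of the total budget T
        xk = T - l[:k - 1].sum() - (M - k) * lo
        if lo <= xk <= l[k - 1]:              # x_k must stay inside its box
            return np.concatenate([l[:k - 1], [xk], np.full(M - k, lo)])
    return None                               # this T is infeasible
```

With l=(2,1.5,1), \phi=0.2, and T=2, for instance, this yields x=(0.8,0.6,0.6), which sums to T as required.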
4.2 Our improved method
Although the solution structure (18) has been addressed in [5], the relationship among k, x_k, and T is not fully revealed there, owing to the limitation of the proof procedure. According to the results in [5], a triplet (k,x_k,T) can be determined by any two of its elements, for example, k and x_k. Our new derivation shows that (k,x_k,T) can in fact be determined by the single element T, since k and x_k follow from (11) and (19) once T is given. Therefore, the candidate solutions of (18) can be evaluated over different values of T instead of over (k,x_k). As will be shown later, this plays an important role in reducing the computational complexity of our improved method.
A numerical search over different values of k is used in [3] to find the optimal x_k, and the search is reduced to a finite number of one-dimensional spaces. The search is further reduced in [5] from a set of one-dimensional spaces to a finite set of points by employing the following lemma.
Lemma 2.
In the optimal solution to problem (7), x_k must take one of the marginal values given in
\left\{\begin{array}{l}x_k\le \min\left\{l_k,\;(X_{\text{max}}+1)\left[1-(M-k)\phi\right]-L-1,\;l_M\frac{1-(M-k)\phi}{\phi}-L-1\right\}\\ x_k\ge \frac{\phi(L+1)}{1-(M-k+1)\phi}\end{array}\right.
(20)
where L\triangleq \sum_{i=1}^{k-1}l_i (see Theorems 3 and 4 in [5]).
Applying Lemma 2, finding all possible power allocations requires checking every k, 1≤k≤M, while trying the two marginal values given in (20); hence almost 2M candidate points need to be searched in the algorithm proposed in [5]. However, we note that if the constraints in Lemma 2 are rearranged, a more efficient method can be derived. The key observations behind our improved method are:

1.
x_k is a monotonically increasing function of T and vice versa (see (19)).

2.
If the optimal x_k takes one of the marginal values given in (20), then x_k must be one of the values in \left\{l_k,\;(X_{\text{max}}+1)\left[1-(M-k)\phi\right]-L-1,\;l_M\frac{1-(M-k)\phi}{\phi}-L-1,\;\frac{\phi(L+1)}{1-(M-k+1)\phi}\right\}.

3.
Given T, the values of k and x_k are uniquely determined, and vice versa (the relationship among k, x_k, and T is given in Theorem 2). Therefore, searching over candidate values of T instead of k and x_k does not result in any loss of optimality.
Keeping these observations in mind, we now proceed to derive our efficient method based on the following theorem.
Theorem 3.
In the optimal solution to problem (7), T must take one of the candidate values given by
\left\{T_k \;\middle|\; T_k\le \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\},\; k=0,1,\cdots,M\right\}\bigcup \left\{\min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}\right\}
(21)
where T_k\triangleq \frac{\sum_{i=1}^{k}l_i+(M-k)\phi}{1-(M-k)\phi} and T_k<T_{k+1}.
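Theorem 3 shrinks the search to a small finite set that is easy to enumerate. The helper below is our own illustrative sketch (names are ours, not from the paper); it assumes l is sorted in decreasing order and 1-(M-k)\phi>0 for every k.

```python
import numpy as np

def candidate_T(l, phi, X_max):
    """Enumerate the candidate set (21): feasible T_k plus the boundary point."""
    M = len(l)
    T_bar = min(X_max, l[-1] / phi - 1.0)     # min{X_max, l_M/phi - 1}
    cands = []
    for k in range(M + 1):                    # T_k is increasing in k
        Tk = (l[:k].sum() + (M - k) * phi) / (1.0 - (M - k) * phi)
        if Tk <= T_bar:                       # keep only feasible T_k
            cands.append(Tk)
    cands.append(T_bar)                       # the boundary point itself
    return cands
```

Because T_k<T_{k+1}, the loop could even stop at the first T_k exceeding the boundary value; at most M+2 candidates ever survive.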
Proof.
Relying on Observations 2 and 3, it is clear that proving the above theorem is equivalent to specifying the values of T corresponding to the candidate values of k and x_k, where x_k\in \left\{l_k,\;(X_{\text{max}}+1)\left[1-(M-k)\phi\right]-L-1,\;l_M\frac{1-(M-k)\phi}{\phi}-L-1,\;\frac{\phi(L+1)}{1-(M-k+1)\phi}\right\} and 1\le k\le M. To gain some insight, we now discuss each of the possible values of x_k in turn.

1.
For the case of x_k=l_k, based on (19), the corresponding T is given by
\begin{array}{lll}T&=&\frac{L+x_k+(M-k)\phi}{1-(M-k)\phi}\\ &=&\frac{\sum_{i=1}^{k}l_i+(M-k)\phi}{1-(M-k)\phi}\triangleq T_k\end{array}
(22)
where 1≤k≤M. Thus, the candidate values of T in this case are given by
\mathbb{T}_1=\left\{T_k \;\middle|\; k=1,\cdots,M\right\}
(23)

2.
For the case of x_k=(X_{\text{max}}+1)\left[1-(M-k)\phi\right]-L-1, using the expression for x_k in (19), we obtain
(X_{\text{max}}+1)\left[1-(M-k)\phi\right]-L-1=(1+T)(1-(M-k)\phi)-L-1
(24)
or, equivalently, T=X_{max}. Hence, in this case, the candidate set of T is
{\mathbb{T}}_{2}=\left\{{X}_{\text{max}}\right\}.
(25)

3.
For the case of x_k=l_M\frac{1-(M-k)\phi}{\phi}-L-1, similarly, we get
T=\frac{L+x_k+(M-k)\phi}{1-(M-k)\phi}=\frac{l_M}{\phi}-1.
(26)
Consequently, the candidate set is given as
\mathbb{T}_3=\left\{\frac{l_M}{\phi}-1\right\}.
(27)

4.
For the case of x_k=\frac{\phi(L+1)}{1-(M-k+1)\phi}, we have
\begin{array}{lll}T&=&\frac{L+x_k+(M-k)\phi}{1-(M-k)\phi}\\ &=&\frac{\sum_{i=1}^{k-1}l_i+(M-(k-1))\phi}{1-(M-k+1)\phi}=T_{k-1}\end{array}
(28)
where 1≤k≤M. Therefore, the candidate set of T is
\mathbb{T}_4=\left\{T_k \;\middle|\; k=0,\cdots,M-1\right\}
(29)
Putting all the possible values of T together and using the fact that any feasible T satisfies T\le X_{\text{max}} and T\le \frac{l_M}{\phi}-1, we are able to derive the whole candidate set of T as
\begin{array}{ll}\mathbb{T}&=\left\{\mathbb{T}_1\bigcup \mathbb{T}_2\bigcup \mathbb{T}_3\bigcup \mathbb{T}_4\right\}_{T\le \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}}\\ &=\left\{T_k \;\middle|\; T_k\le \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\},\; k=0,1,\cdots,M\right\}\bigcup \left\{\min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}\right\}\end{array}
(30)
For any T_k and T_{k+1}, the solutions are structured as
\mathbf{x}_k=\left[\underbrace{l_1,\cdots,l_{k-1}}_{k-1},\,l_k,\,\underbrace{\phi(1+T_k),\cdots,\phi(1+T_k)}_{M-k}\right]^T,
(31)
\mathbf{x}_{k+1}=\left[\underbrace{l_1,\cdots,l_k}_{k},\,l_{k+1},\,\underbrace{\phi(1+T_{k+1}),\cdots,\phi(1+T_{k+1})}_{M-k-1}\right]^T.
(32)
Further, we note that \mathbf{x}_k can be written as
\mathbf{x}_k=\left[\underbrace{l_1,\cdots,l_k}_{k},\,\phi(1+T_k),\,\underbrace{\phi(1+T_k),\cdots,\phi(1+T_k)}_{M-k-1}\right]^T
(33)
In this form, it is easy to see that the structures of (32) and (33) are equivalent and that \phi(1+T_k)\le l_{k+1} (which follows from the first constraint of (7)). Thus, according to Observation 1 and
\phi (1+{T}_{k})={\left[{\mathbf{x}}_{k}\right]}_{k+1}<{\left[{\mathbf{x}}_{k+1}\right]}_{k+1}={l}_{k+1},
(34)
we achieve the desired result that T_k<T_{k+1}. Since T_k<T_{k+1}, there always exists a unique j such that
T_j\le \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}\le T_{j+1}
which means that the structure of the solution for the case T=\min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\} is determined by the value of j.
Obviously, the candidate set of T contains at most M+2 points that need to be searched to find the optimal solution. Based on these conclusions, our improved algorithm is outlined as follows. Compared with the original 2M search points, less than half of the computational effort is required by our method in the worst case^{a}.

1.
Initialization: X_{\text{max}}=P_R^{\text{max}}/I, \phi=\frac{\gamma}{1+\gamma}, l_i=p_{\text{max}}g_i/I.

2.
For all 0≤k≤M, do the following:
(a) Compute T_k as
T_k=\frac{\sum_{i=1}^{k}l_i+(M-k)\phi}{1-(M-k)\phi}.
(b) If T_k\le \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}, compute \varphi_k as
\varphi_k=\frac{(1-\phi)^{(M-k)}}{(1+T_k)^{k}}\prod_{i=1}^{k}(1+T_k-l_i).
(c) If T_k>\min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}, compute \varphi_k as
\begin{array}{l}\varphi_k=\frac{\left(\left(1+\min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}\right)(M-k)\phi+\sum_{i=1}^{k-1}l_i+1\right)(1-\phi)^{(M-k)}}{\left(1+\min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}\right)^{k}}\\ \cdot \prod_{i=1}^{k-1}\left(1+\min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}-l_i\right)\end{array}
and go to step 3; otherwise, continue (go to step 2a).

3.
Find the smallest \varphi_k, denoted \varphi^{\ast}, and retrieve the corresponding \mathbf{x}.

4.
Output the p_i and C:
\begin{array}{l}p_i=I x_i/g_i,\\ C=-\log_2 \varphi^{\ast}.\end{array}
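Putting the pieces together, the whole procedure admits a compact implementation. The sketch below is our own end-to-end illustration (the input names g, I, p_max, P_R_max, gamma are ours, not from the paper): it enumerates the candidate T values of Theorem 3, rebuilds x from the structure of Theorem 2 for each, evaluates the product objective of (7), and recovers the rate as C=-\log_2\varphi^{\ast}.

```python
import numpy as np

def allocate(g, I, p_max, P_R_max, gamma):
    """One-dimensional search over the candidate T values of Theorem 3."""
    g = np.sort(np.asarray(g, dtype=float))[::-1]  # sort so that l_i decreases
    l = p_max * g / I
    phi = gamma / (1.0 + gamma)
    X_max = P_R_max / I
    M = len(l)
    T_bar = min(X_max, l[-1] / phi - 1.0)

    def x_of(T):
        # Theorem 2: [l_1, ..., l_{k-1}, x_k, phi(1+T), ..., phi(1+T)]
        lo = phi * (1.0 + T)
        for k in range(1, M + 1):
            xk = T - l[:k - 1].sum() - (M - k) * lo   # from eq. (19)
            if lo - 1e-12 <= xk <= l[k - 1] + 1e-12:  # x_k inside its box
                return np.concatenate([l[:k - 1], [xk], np.full(M - k, lo)])
        return None                                   # this T is infeasible

    cands = [T_bar]                                   # boundary point of (21)
    for k in range(M + 1):
        Tk = (l[:k].sum() + (M - k) * phi) / (1.0 - (M - k) * phi)
        if Tk <= T_bar:
            cands.append(Tk)

    best = None
    for T in cands:
        x = x_of(T)
        if x is None:
            continue
        obj = np.prod((1.0 + T - x) / (1.0 + T))      # objective of (7)
        if best is None or obj < best[0]:
            best = (obj, x)
    if best is None:
        return None                                   # no feasible solution
    obj, x = best
    return I * x / g, -np.log2(obj)                   # powers p_i and rate C
```

Note that the returned powers correspond to the channel gains after sorting; a full implementation would undo the permutation before reporting p_i.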
Discussions:

If the value of \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\} is small, very few points need to be searched by our method. In particular, if T_0\le \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}\le T_1, only the single point T=\min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\} needs to be searched. However, if \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}<T_0, there is no feasible solution.

If the value of \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\} is large, e.g., \min\left\{X_{\text{max}},\frac{l_M}{\phi}-1\right\}\ge T_M, it is clear that the third constraint in (6) does not affect the final solution and can be removed from the optimization problem.