3.1 Task scheduling subproblem
When the power control vector \({\mathbf {p}}\) and the offloading ratio vector \({\varvec{\eta }}\) are fixed, the optimization problem P1 can be transformed into K task scheduling subproblems, where the subproblem for VS k is given as
$$\begin{aligned} P2:&\min \limits _{m_{k}[n]}\sum _{n=1}^{N}E_{k}[n] \end{aligned}$$
(11a)
$$\begin{aligned} \text{s.t.}\;\,&\sum _{n\in {\mathcal{N}}}m_{k}[n]\ge M_{k},\end{aligned}$$
(11b)
$$\begin{aligned}&\tau _{k}^{{\mathrm{off}}}[n]\le \frac{T}{K},\quad \forall n\in {\mathcal{N}}, \end{aligned}$$
(11c)
$$\begin{aligned}&\tau _{k}^{{\mathrm{loc}}}[n]\le T,\quad \forall n\in {\mathcal{N}}, \end{aligned}$$
(11d)
$$\begin{aligned}&m_{k}[n]\ge 0,\quad \forall n\in {\mathcal{N}}. \end{aligned}$$
(11e)
For convenience, the optimization objective in P2 is simplified as
$$\begin{aligned} \begin{aligned} E_{k}[n]&=\left[ \left( 1-\eta _{k}[n]\right) C_{k}[n]\varpi _{k}f_{k}^{2}[n]+\frac{p_{k}[n]\eta _{k}[n]}{BR_{k}[n]}\right] m_{k}[n]+\frac{T}{K}P_{c}\\&=\varDelta _{k}[n]m_{k}[n]+\frac{T}{K}P_{c}, \end{aligned} \end{aligned}$$
(12)
where \(\varDelta _{k}[n]=\left( 1-\eta _{k}[n]\right) C_{k}[n]\varpi _{k}f_{k}^{2}[n]+\frac{p_{k}[n]\eta _{k}[n]}{BR_{k}[n]}\).
Constraints (11c) and (11d) can be rewritten as
$$\begin{aligned} {\left\{ \begin{array}{ll} m_{k}[n]\le \frac{Tf_{k}[n]}{\left( 1-\eta _{k}[n]\right) C_{k}[n]},\\ m_{k}[n]\le \frac{TBR_{k}[n]}{K\eta _{k}[n]}. \end{array}\right. } \end{aligned}$$
(13)
Therefore, the following inequality is obtained.
$$\begin{aligned} 0\le m_{k}[n]\le \min \left\{ \frac{Tf_{k}[n]}{\left( 1-\eta _{k}[n]\right) C_{k}[n]}\text{,}\frac{TBR_{k}[n]}{K\eta _{k}[n]}\right\} . \end{aligned}$$
(14)
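The per-slot upper bound in (14) is straightforward to evaluate. The following Python sketch (the function name and all numeric values are illustrative assumptions, not parameters from the paper) computes \(\Lambda _{k}[n]\):

```python
# Sketch of the feasible range (14) for the task scheduling variable
# m_k[n]; all parameter values below are illustrative assumptions.

def m_upper_bound(T, K, B, f, C, eta, R):
    """Return Lambda_k[n] = min(T*f/((1-eta)*C), T*B*R/(K*eta)),
    the upper bound on m_k[n] implied by constraints (11c)-(11d)."""
    local_cap = T * f / ((1.0 - eta) * C)   # local-computing latency bound (11d)
    offload_cap = T * B * R / (K * eta)     # offloading latency bound (11c)
    return min(local_cap, offload_cap)

# Illustrative values (not from the paper): T = 1 s, K = 4 VSs,
# B = 1e6 Hz, f = 1e9 cycles/s, C = 1e3 cycles/bit, eta = 0.5, R = 2 bit/s/Hz.
lam = m_upper_bound(T=1.0, K=4, B=1e6, f=1e9, C=1e3, eta=0.5, R=2.0)
```

With these values the offloading bound \(TBR/(K\eta )=10^{6}\) is the binding one, since the local bound is \(2\times 10^{6}\).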
Since P2 is a linear programming problem, we solve it using the interior-point method. By applying the log-barrier method, P2 is transformed into an unconstrained optimization problem, where \(t>0\) denotes the barrier parameter and the optimization objective is given by
$$\begin{aligned} \begin{aligned} \Phi (m_{k}[n])&=t\sum _{n=1}^{N}\left( \varDelta _{k}[n]m_{k}[n]+\frac{T}{K}P_{c}\right) \\&\quad -\ln \left( -M_{k}+\sum _{n\in {\mathcal{N}}}m_{k}[n]\right) \\&\quad -\sum _{n=1}^{N}\ln \left( -m_{k}[n]+\Lambda _{k}[n]\right) , \end{aligned} \end{aligned}$$
(15)
where \(\Lambda _{k}[n]=\min \left\{ \frac{Tf_{k}[n]}{\left( 1-\eta _{k}[n]\right) C_{k}[n]}\text{,}\frac{TBR_{k}[n]}{K\eta _{k}[n]}\right\}\).
Taking the first-order derivative of \(\Phi (m_{k}[n])\) with respect to \(m_{k}[n]\) yields
$$\begin{aligned} \frac{\partial \Phi }{\partial m_{k}[n]}=t\varDelta _{k}[n]-\frac{1}{\sum _{n\in {\mathcal{N}}}m_{k}[n]-M_{k}}+\frac{1}{\Lambda _{k}[n]-m_{k}[n]}. \end{aligned}$$
(16)
According to the above derivation, we have
$$\begin{aligned} \begin{aligned} \nabla \Phi (m_{k}[n]) =\left( t\varDelta _{k}[n]-\frac{1}{\sum _{n\in {\mathcal{N}}}m_{k}[n]-M_{k}}+\frac{1}{\Lambda _{k}[n]-m_{k}[n]}\right) _{n=1}^{N}. \end{aligned} \end{aligned}$$
(17)
Moreover, the second-order derivatives of \(\Phi (m_{k}[n])\) with respect to \(m_{k}[n]\) and \(m_{k}[l]\) are given by
$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial ^{2}\Phi }{\partial m_{k}[n]^{2}}=\frac{1}{\left( \sum _{n\in {\mathcal{N}}}m_{k}[n]-M_{k}\right) ^{2}}+\frac{1}{\left( \Lambda _{k}[n]-m_{k}[n]\right) ^{2}},\\ \frac{\partial ^{2}\Phi }{\partial m_{k}[n]\partial m_{k}[l]}=\frac{1}{\left( \sum _{n\in {\mathcal{N}}}m_{k}[n]-M_{k}\right) ^{2}},\quad l\ne n. \end{array}\right. } \end{aligned}$$
(18)
According to (18), the Hessian matrix of \(\Phi (m_{k}[n])\) is represented as follows, where \({\mathbf {1}}\) denotes the \(N\times 1\) all-ones vector:
$$\begin{aligned} \begin{aligned} {\mathbf {H}}(m_{k}[n])&=\frac{1}{\left( \sum _{n\in {\mathcal{N}}}m_{k}[n]-M_{k}\right) ^{2}}{\mathbf {1}}{\mathbf {1}}^{\mathrm{T}}\\&\quad +{\text{diag}}\left( \frac{1}{\left( \Lambda _{k}[1]-m_{k}[1]\right) ^{2}},\cdots ,\frac{1}{\left( \Lambda _{k}[N]-m_{k}[N]\right) ^{2}}\right) . \end{aligned} \end{aligned}$$
(19)
By applying the interior-point method to solve P2, the update rule for the task scheduling vector \({\mathbf {m}}_{k}\) can be expressed as
$$\begin{aligned} {\mathbf {m}}_{k}^{t}={\mathbf {m}}_{k}^{t-1}-{\mathbf {H}}^{-1}\left( {\mathbf {m}}_{k}^{t-1}\right) \nabla \Phi \left( {\mathbf {m}}_{k}^{t-1}\right) . \end{aligned}$$
(20)
The detailed solving process is described in Algorithm 1.
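As an illustration of the Newton update (20), the following Python sketch runs a few damped barrier iterations for a single VS. The toy instance (N = 3 slots, the slopes \(\varDelta _{k}[n]\), the caps \(\Lambda _{k}[n]\), the demand \(M_{k}\) and the barrier weight t) is an assumed example, not data from the paper; the rank-one-plus-diagonal structure implied by (18) is exploited via the Sherman-Morrison formula, and the step is backtracked only to keep the iterate strictly feasible.

```python
# A minimal sketch of the inner Newton iteration of Algorithm 1
# (eqs. (15)-(20)); problem sizes and all numeric values below are
# illustrative assumptions, not parameters from the paper.

def feasible(m, Lam, M):
    """Strict feasibility for the barrier: 0 < m[n] < Lam[n], sum m > M."""
    return sum(m) > M and all(0.0 < v < L for v, L in zip(m, Lam))

def newton_step(m, delta, Lam, M, t):
    """One damped Newton update (20). By (18), H = (1/s^2) 11^T + D with
    D = diag(1/(Lam[n]-m[n])^2), so H^{-1} grad follows from Sherman-Morrison."""
    N = len(m)
    s = sum(m) - M                                            # slack of (11b)
    grad = [t * delta[n] - 1.0 / s + 1.0 / (Lam[n] - m[n])    # gradient (16)
            for n in range(N)]
    d_inv = [(Lam[n] - m[n]) ** 2 for n in range(N)]          # diagonal of D^{-1}
    u2 = 1.0 / (s * s)                                        # rank-one weight
    Dg = [d_inv[n] * grad[n] for n in range(N)]               # D^{-1} grad
    coef = u2 * sum(Dg) / (1.0 + u2 * sum(d_inv))
    step = [Dg[n] - d_inv[n] * coef for n in range(N)]        # H^{-1} grad
    alpha = 1.0                                               # backtrack to stay interior
    new = [m[n] - alpha * step[n] for n in range(N)]
    while not feasible(new, Lam, M):
        alpha *= 0.5
        new = [m[n] - alpha * step[n] for n in range(N)]
    return new

# Toy instance: energy slopes Delta_k[n], caps Lambda_k[n], demand M_k.
delta, Lam, M = [1.0, 2.0, 3.0], [4.0, 4.0, 4.0], 3.0
m = [2.0, 2.0, 2.0]                                           # strictly feasible start
for _ in range(20):
    m = newton_step(m, delta, Lam, M, t=10.0)
```

After the iterations, the iterate remains strictly feasible and the linear energy cost \(\sum _{n}\varDelta _{k}[n]m_{k}[n]\) drops well below its starting value, with the workload shifting toward the cheap slots.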
3.2 Offloading ratio subproblem
When the power control vector \({\mathbf {p}}\) and the task scheduling vector \({\mathbf {m}}\) are fixed, P1 can be split into N subproblems, where the subproblem for time slot n is equivalently formulated as
$$\begin{aligned} P3:&\min \limits _{\eta _{k}[n]}\sum _{k=1}^{K}E_{k}[n] \end{aligned}$$
(21a)
$$\begin{aligned} \text{s.t.}\;\,&\tau _{k}^{{\mathrm{off}}}[n]\le \frac{T}{K},\quad \forall k\in {\mathcal{K}}, \end{aligned}$$
(21b)
$$\begin{aligned}&\tau _{k}^{{\mathrm{loc}}}[n]\le T,\quad \forall k\in {\mathcal{K}}, \end{aligned}$$
(21c)
$$\begin{aligned}&\eta _{k}[n]\in [0,1],\quad \forall k\in {\mathcal{K}}. \end{aligned}$$
(21d)
For convenience, the optimization objective in P3 is rewritten as
$$E_{k}[n]=\left( \frac{p_{k}[n]m_{k}[n]}{BR_{k}[n]}-m_{k}[n]C_{k}[n]\varpi _{k}f_{k}^{2}[n]\right) \eta _{k}[n] +m_{k}[n]C_{k}[n]\varpi _{k}f_{k}^{2}[n]+\frac{T}{K}P_{c}.$$
(22)
Constraints (21b) and (21c) can be rewritten as
$$\begin{aligned} {\left\{ \begin{array}{ll} 1-\frac{Tf_{k}[n]}{m_{k}[n]C_{k}[n]}\le \eta _{k}[n],\\ \eta _{k}[n]\le \frac{TBR_{k}[n]}{Km_{k}[n]}. \end{array}\right. } \end{aligned}$$
(23)
Since the objective (22) is linear in \(\eta _{k}[n]\), it is straightforward to verify that the optimal offloading ratio lies at a boundary of (23) and is given as
$$\begin{aligned} \eta _{k}^{{\mathrm{opt}}}[n]= {\left\{ \begin{array}{ll} \max \left( 1-\frac{Tf_{k}[n]}{m_{k}[n]C_{k}[n]},0\right) , &{}\quad {\text{if}}\;p_{k}[n]\ge BR_{k}[n]C_{k}[n]\varpi _{k}f_{k}^{2}[n],\\ \min \left( \frac{TBR_{k}[n]}{Km_{k}[n]},1\right) ,&{}\quad {\text{otherwise}}, \end{array}\right. } \end{aligned}$$
(24)
for all \(k\in {\mathcal{K}}\).
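The closed-form rule (24) can be transcribed directly. In the sketch below, the function name and all numeric values are illustrative assumptions; the sign of the linear coefficient in (22) decides which boundary of (23) is optimal.

```python
# Direct transcription of the closed-form offloading ratio (24);
# the numeric values in the usage example are illustrative only.

def eta_opt(p, B, R, C, varpi, f, T, K, m):
    """Optimal eta_k[n] from (24): pick the boundary of (23) that
    minimizes the linear objective (22)."""
    if p >= B * R * C * varpi * f ** 2:      # offloading energy dominates
        return max(1.0 - T * f / (m * C), 0.0)
    return min(T * B * R / (K * m), 1.0)     # local-computing energy dominates

# Illustrative values: B = 1e6 Hz, R = 2 bit/s/Hz, C = 1e3 cycles/bit,
# varpi = 1e-27, f = 1e9 cycles/s, T = 1 s, K = 4, m = 1e6 bits.
# Here the threshold B*R*C*varpi*f^2 is about 2 W.
eta_low_power = eta_opt(0.1, 1e6, 2.0, 1e3, 1e-27, 1e9, 1.0, 4, 1e6)
eta_high_power = eta_opt(3.0, 1e6, 2.0, 1e3, 1e-27, 1e9, 1.0, 4, 1e6)
```

With the assumed values, a low transmit power (0.1 W) makes offloading cheap, so \(\eta \) is pushed to its upper bound 0.5, while a high power (3 W) makes local computing cheaper, yielding \(\eta =0\).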
3.3 Power control subproblem
When \({\mathbf {m}}\) and \({\varvec{\eta }}\) are given, P1 is split into NK subproblems, where the subproblem for VS k in time slot n is equivalently formulated as
$$\begin{aligned} P4:&\min \limits _{p_{k}[n]}E_{k}[n] \end{aligned}$$
(25a)
$$\begin{aligned} \text{s.t.}~&R_{k}[n]\ge R_{k}^{{\min }}, \end{aligned}$$
(25b)
$$\begin{aligned}&\tau _{k}^{{\mathrm{off}}}[n]\le \frac{T}{K}, \end{aligned}$$
(25c)
$$\begin{aligned}&0\le p_{k}[n]\le p_{k}^{{\max }}. \end{aligned}$$
(25d)
Constraint (25c) can be rewritten as
$$\begin{aligned} \frac{K\eta _{k}[n]m_{k}[n]}{TB}\le R_{k}[n]. \end{aligned}$$
(26)
Since the objective function in P4 is non-convex, P4 is transformed into P4.1 by introducing the auxiliary variables \(\chi _{k}[n], k\in {\mathcal{K}}, n\in {\mathcal{N}}\):
$$\begin{aligned} P4.1:&\min \limits _{p_{k}[n],\chi _{k}[n]}\frac{\eta _{k}[n]m_{k}[n]}{B}\chi _{k}[n] \end{aligned}$$
(27a)
$$\begin{aligned} \text{s.t.}\;\,&R_{k}[n]\ge \max \left\{ R_{k}^{{\min }},\frac{K\eta _{k}[n]m_{k}[n]}{TB}\right\} , \end{aligned}$$
(27b)
$$\begin{aligned}&R_{k}[n]\ge \frac{p_{k}[n]}{\chi _{k}[n]}\ge 0, \end{aligned}$$
(27c)
$$\begin{aligned}&0\le p_{k}[n]\le p_{k}^{{\max }}. \end{aligned}$$
(27d)
In P4.1, constraints (27b) and (27c) are still non-convex, due to the difference of logarithm terms in \(R_{k}[n]\) and the coupling between \(p_{k}[n]\) and \(\chi _{k}[n]\).
For the convenience of calculation, constraint (27c) is rewritten as
$$\begin{aligned} & \log _{2} \left( {1 + \frac{{p_{k} [n]h_{k} [n]}}{{P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} }}} \right) - \log _{2} \left( {1 + \frac{{p_{k} [n]g_{k} [n]}}{{P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} }}} \right) \ge \frac{{p_{k} [n]}}{{\chi _{k} [n]}} \\ & \quad \Rightarrow \chi _{k} [n]\left( {\log _{2} \left( {1 + \frac{{p_{k} [n]h_{k} [n]}}{{P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} }}} \right) - \log _{2} \left( {1 + \frac{{p_{k} [n]g_{k} [n]}}{{P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} }}} \right)} \right) \ge p_{k} [n] \\ & \quad \Rightarrow \chi _{k} [n]\left( {\log _{2} \left( {p_{k} [n]h_{k} [n] + P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \log _{2} \left( {P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right)} \right. \\ & \left. { \quad \quad - \log _{2} \left( {p_{k} [n]g_{k} [n] + P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right) + \log _{2} \left( {P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right)} \right) \ge p_{k} [n] \\ & \quad \Rightarrow \chi _{k} [n]\left( {\log _{2} \left( {p_{k} [n]h_{k} [n] + P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \log _{2} \left( {p_{k} [n]g_{k} [n] + P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right)} \right) \\ & \quad \ge p_{k} [n] + \chi _{k} [n]\left( {\log _{2} \left( {P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \log _{2} \left( {P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right)} \right). \\ \end{aligned}$$
(28)
Constraint (28) is relaxed into the following two constraints by introducing the auxiliary variables \(\varphi _{k}[n], k\in {\mathcal{K}}, n\in {\mathcal{N}}\):
$$\begin{aligned} f_{1} (p_{k} [n]) - f_{2} (p_{k} [n])& \triangleq \log _{2} \left( {p_{k} [n]h_{k} [n] + P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) \\ & \quad - \log _{2} \left( {p_{k} [n]g_{k} [n] + P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right) \ge \varphi _{k} [n], \\ \end{aligned}$$
(29)
and
$$\begin{aligned} \begin{aligned} \chi _{k}[n]\varphi _{k}[n]&\ge p_{k}[n]+\chi _{k}[n]\left( \log _{2}\left( P_{\mathrm{ICI}}h_{k}[n]+\sigma _{k}^{2}\right) \right. \\&\quad \left. -\log _{2}\left( P_{\mathrm{ICI}}g_{k}[n]+\sigma _{e}^{2}\right) \right) . \end{aligned} \end{aligned}$$
(30)
Since \(f_{1}\) and \(f_{2}\) are concave functions, \(f_{1}(p_{k}[n])-f_{2}(p_{k}[n])\) is a difference of two concave functions. We adopt the successive convex approximation (SCA) technique to re-express (29) at the \((t+1)\)-th iteration, which is given by
$$\begin{aligned} & \log _{2} \left( {p_{k} [n]h_{k} [n] + P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \log _{2} \left( {p_{k} [n]g_{k} [n] + P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right) \ge \varphi _{k} [n] \\ & \quad \Rightarrow \log _{2} \left( {p_{k} [n]h_{k} [n] + P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) \\ & \quad \quad - \left( {\log _{2} \left( {p_{k}^{t} [n]g_{k} [n] + P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right) + \frac{{g_{k} [n]}}{{p_{k}^{t} [n]\ln 2}}\left( {p_{k} [n] - p_{k}^{t} [n]} \right)} \right) \ge \varphi _{k} [n] \\ & \quad \Rightarrow \log _{2} \left( {p_{k} [n]h_{k} [n] + P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \frac{{g_{k} [n]}}{{p_{k}^{t} [n]\ln 2}}p_{k} [n] - \varphi _{k} [n] \\ & \quad \ge \log _{2} \left( {p_{k}^{t} [n]g_{k} [n] + P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right) - \frac{{g_{k} [n]}}{{p_{k}^{t} [n]\ln 2}}p_{k}^{t} [n]. \\ \end{aligned}$$
(31)
The first term in inequality (30) is bilinear in the optimization variables \(\chi _{k}[n]\) and \(\varphi _{k}[n]\). A concave lower bound based on the first-order Taylor expansion around a feasible point \((\chi _{k}^{t}[n],\varphi _{k}^{t}[n])\) at the \((t+1)\)-th iteration is given as
$$\begin{aligned} \begin{aligned} \chi _{k}[n]\varphi _{k}[n]&=\frac{\left( \chi _{k}[n]+\varphi _{k}[n]\right) ^{2}-\left( \chi _{k}[n]-\varphi _{k}[n]\right) ^{2}}{4}\\&\ge \frac{\left( \chi _{k}^{t}[n]+\varphi _{k}^{t}[n]\right) \left( \chi _{k}[n]+\varphi _{k}[n]\right) }{2}\\&\quad -\frac{\left( \chi _{k}^{t}[n]+\varphi _{k}^{t}[n]\right) ^{2}}{4}-\frac{\left( \chi _{k}[n]-\varphi _{k}[n]\right) ^{2}}{4}. \end{aligned} \end{aligned}$$
(32)
Thus, constraint (30) is tightened into the convex constraint (33) at the \((t+1)\)-th iteration.
$$\begin{aligned} & \frac{{\left( {\chi _{k}^{t} [n] + \varphi _{k}^{t} [n]} \right)\varphi _{k} [n]}}{2} - \frac{{\left( {\chi _{k} [n] - \varphi _{k} [n]} \right)^{2} }}{4} \ge p_{k} [n] + \frac{{\left( {\chi _{k}^{t} [n] + \varphi _{k}^{t} [n]} \right)^{2} }}{4} \\ & \quad + \chi _{k} [n]\left( {\log _{2} \left( {P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \log _{2} \left( {P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right) - \frac{{\left( {\chi _{k}^{t} [n] + \varphi _{k}^{t} [n]} \right)}}{2}} \right). \\ \end{aligned}$$
(33)
To convexify (27b), the first two terms in (27b) are linearized via a first-order Taylor expansion around the feasible point \(p_{k}^{t}[n]\), so that (27b) is tightened into constraint (34) at the \((t+1)\)-th iteration.
$$\begin{aligned} & \log _{2} \left( {1 + \frac{{p_{k} [n]h_{k} [n]}}{{P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} }}} \right) - \log _{2} \left( {1 + \frac{{p_{k} [n]g_{k} [n]}}{{P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} }}} \right) \\ & \quad \ge \max \left\{ {R_{k}^{{\min }} ,\frac{{K\eta _{k} [n]m_{k} [n]}}{{TB}}} \right\} \\ & \quad \Rightarrow \log _{2} \left( {p_{k} [n]h_{k} [n] + P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \log _{2} \left( {p_{k} [n]g_{k} [n] + P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right) \\ & \quad \ge \max \left\{ {R_{k}^{{\min }} ,\frac{{K\eta _{k} [n]m_{k} [n]}}{{TB}}} \right\} + \log _{2} \left( {P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \log _{2} \left( {P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right) \\ & \quad \Rightarrow \log _{2} \left( {p_{k} [n]h_{k} [n] + P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \log _{2} \left( {p_{k}^{t} [n]g_{k} [n] + P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right) \\ & \quad \quad - \frac{{g_{k} [n]}}{{p_{k}^{t} [n]\ln 2}}\left( {p_{k} [n] - p_{k}^{t} [n]} \right) \\ & \quad \ge \max \left\{ {R_{k}^{{\min }} ,\frac{{K\eta _{k} [n]m_{k} [n]}}{{TB}}} \right\} + \log _{2} \left( {P_{{{\text{ICI}}}} h_{k} [n] + \sigma _{k}^{2} } \right) - \log _{2} \left( {P_{{{\text{ICI}}}} g_{k} [n] + \sigma _{e}^{2} } \right). \\ \end{aligned}$$
(34)
According to the above analysis, problem P4.1 is reformulated as P4.2, i.e.,
$$\begin{aligned} P4.2:&\min \limits _{p_{k}[n],\chi _{k}[n], \varphi _{k}[n]}\frac{\eta _{k}[n]m_{k}[n]}{B}\chi _{k}[n] \end{aligned}$$
(35a)
$$\begin{aligned} \text{s.t.}\;\,&(31), (33), (34), \end{aligned}$$
(35b)
$$\begin{aligned}&0\le p_{k}[n]\le p_{k}^{{\max }}, \end{aligned}$$
(35c)
which is a convex optimization problem and can be solved efficiently by CVX.
3.4 Algorithm description
In Algorithm 1, the task scheduling \({\mathbf {m}}\) is optimized with the given \({\varvec{\eta }}\) and \({\mathbf {p}}\), and in Algorithm 2, the offloading ratio and power control \(({\varvec{\eta }},{\mathbf {p}})\) are jointly optimized with the given \({\mathbf {m}}\). The joint iterative optimization procedure is presented in Algorithm 3.
In order to verify the convergence of Algorithm 3, we define \(\phi _{1}({\mathbf {m}},{\varvec{\eta }},{\mathbf {p}})\) as the objective value of P1, and define \(\phi _{2}({\mathbf {m}},{\varvec{\eta }},{\mathbf {p}})\), \(\phi _{3}({\mathbf {m}},{\varvec{\eta }},{\mathbf {p}})\), \(\phi _{4}({\mathbf {m}},{\varvec{\eta }},{\mathbf {p}})\) and \(\phi _{4.2}({\mathbf {m}},{\varvec{\eta }},{\mathbf {p}})\) as the sum of the objective values for all \(k\in {\mathcal{K}}\) in P2, all \(n\in {\mathcal{N}}\) in P3, and all \(k\in {\mathcal{K}}\), \(n\in {\mathcal{N}}\) in P4 and P4.2, respectively.
In step 5 of Algorithm 2, we obtain the solution \({\mathbf {p}}^{t}\) to P4.2 with the feasible points \({\mathbf {p}}^{t-1}\), \({\varvec{\chi }}^{t-1}\) and \({\varvec{\varphi }}^{t-1}\). As the iteration number increases in steps 6 and 7, the feasible points are updated and the feasible region is enlarged by the SCA method. When \({\mathbf {p}}^{t}={\mathbf {p}}^{t-1}\), we have \(\phi _{4.2}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t-1})\ge \phi _{4.2}({\mathbf {p}}^{t},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t-1})\), where \({\mathbf {p}}^{t}\) is the globally optimal solution to P4.2 with fixed \({\mathbf {m}}^{t-1}\) and \({\varvec{\eta }}^{t-1}\). Since P4.2 is equivalent to P4, the inequality \(\phi _{4}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t-1})\ge \phi _{4}({\mathbf {p}}^{t},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t-1})\) holds. In Algorithm 2, the solutions obtained in the inner and outer loops satisfy \(\phi _{3}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t-1})\ge \phi _{3}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t})=\phi _{4}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t})\ge \phi _{4}({\mathbf {p}}^{t},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t})=\phi _{3}({\mathbf {p}}^{t},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t})\). That is, \(\phi _{3}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t-1})\ge \phi _{3}({\mathbf {p}}^{t},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t})\).
In Algorithm 3, the solutions obtained in step 2 and step 3 satisfy \(\phi _{1}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t-1})=\phi _{2}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t-1})\ge \phi _{2}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t},{\varvec{\eta }}^{t-1})=\phi _{3}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t},{\varvec{\eta }}^{t-1})\ge \phi _{3}({\mathbf {p}}^{t},{\mathbf {m}}^{t},{\varvec{\eta }}^{t})=\phi _{2}({\mathbf {p}}^{t},{\mathbf {m}}^{t},{\varvec{\eta }}^{t})=\phi _{1}({\mathbf {p}}^{t},{\mathbf {m}}^{t},{\varvec{\eta }}^{t})\). That is, \(\phi _{1}({\mathbf {p}}^{t-1},{\mathbf {m}}^{t-1},{\varvec{\eta }}^{t-1})\ge \phi _{1}({\mathbf {p}}^{t},{\mathbf {m}}^{t},{\varvec{\eta }}^{t})\). Hence, \(\phi _{1}\) is monotonically non-increasing with respect to the iteration number and is lower bounded by a finite value. Therefore, Algorithm 3 is guaranteed to converge.
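The convergence argument rests only on the fact that each block update cannot increase the objective. The following Python toy (with an assumed two-variable quadratic objective standing in for the energy function, and exact per-block minimizers standing in for the three subproblem solvers) illustrates why exact alternating block minimization yields a monotonically non-increasing, lower-bounded sequence:

```python
# Toy alternating minimization mirroring Algorithm 3's convergence
# argument; phi below is an assumed stand-in objective, not the
# paper's energy function.

def phi(x, y):
    return (x - 1.0) ** 2 + (y - 2.0) ** 2 + 0.5 * (x - y) ** 2

def argmin_x(y):
    # d(phi)/dx = 2(x-1) + (x-y) = 0  ->  x = (2 + y) / 3
    return (2.0 + y) / 3.0

def argmin_y(x):
    # d(phi)/dy = 2(y-2) - (x-y) = 0  ->  y = (4 + x) / 3
    return (4.0 + x) / 3.0

x, y = 0.0, 0.0
vals = [phi(x, y)]
for _ in range(30):
    x = argmin_x(y)        # block 1 (cf. the task-scheduling step)
    y = argmin_y(x)        # block 2 (cf. the offloading/power step)
    vals.append(phi(x, y))

monotone = all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))
```

Each block update solves its subproblem exactly with the other block fixed, so the objective can only decrease or stay constant; being bounded below, the sequence converges, which is exactly the structure of the argument for \(\phi _{1}\).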