In this section, we derive an efficient algorithm by modifying the standard ADMM framework to solve the problem formulated in (17). Specifically, since solving (17) directly is very difficult due to the existence of the discontinuous function \(\max \{\cdot \}\), we first introduce an auxiliary variable \(\eta\) to rephrase (17) as,
$$\begin{aligned}&\min _{\varvec{x},\eta }~\eta +\tau \sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2}+\beta \varvec{x}^H\varvec{\Omega }\varvec{x} \nonumber \\&\quad \text {s.t.}~\big |\Vert \varvec{A}_p^H\varvec{x}\Vert ^2_2-D_p\big |\le \frac{\eta }{w_p}, p=1,\ldots ,P, \nonumber \\&\quad \text {PVPR}(\varvec{x}_l)\le \sigma _l, l \in \mathbb {S}, \end{aligned}$$
(18)
where \(\varvec{A}_p=\breve{\varvec{A}}_{\theta _p}\) for notational brevity.
Furthermore, (18) is equivalent to,
$$\begin{aligned}&\min _{\varvec{x},\eta }~\eta +\tau \sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2}+\beta \varvec{x}^H\varvec{\Omega }\varvec{x} \nonumber \\&\quad \text {s.t.}~D_p-\frac{\eta }{w_p}\le \Vert \varvec{A}_p^H \varvec{x}\Vert ^2_2\le D_p+\frac{\eta }{w_p}, p=1,\ldots ,P, \nonumber \\&\quad \text {PVPR}(\varvec{x}_l)\le \sigma _l,l \in \mathbb {S}, \end{aligned}$$
(19)
By defining \(\hat{\varvec{A}}_p=\sqrt{w_p}\varvec{A}_p\), we continue to simplify (19) as,
$$\begin{aligned}&\min _{\varvec{x},\eta }~\eta +\tau \sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2} +\beta \varvec{x}^H\varvec{\Omega }\varvec{x} \nonumber \\&\quad \text {s.t.}~D_pw_p-\eta \le \Vert \hat{\varvec{A}}_p^H\varvec{x}\Vert ^2_2\le D_pw_p+\eta , p=1,\ldots ,P, \nonumber \\&\quad \text {PVPR}(\varvec{x}_l)\le \sigma _l, l \in \mathbb {S}. \end{aligned}$$
(20)
Note that (20) is still difficult to handle because \(\eta\) is coupled into both the lower and upper bounds of the quadratic constraint. To solve (20), we again introduce auxiliary variables \(\varvec{v}_p\) and impose the equality constraints \(\hat{\varvec{A}}_p^H\varvec{x}=\varvec{v}_p\), \(p=1,\ldots ,P\), on (20), yielding the following equivalent problem,
$$\begin{aligned}&\min _{\varvec{x},\varvec{v}_p,\eta }~\eta +\tau \sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2}+\beta \varvec{x}^H\varvec{\Omega }\varvec{x} \nonumber \\&\quad \text {s.t.}~\hat{\varvec{A}}_p^H\varvec{x}=\varvec{v}_p,p=1,\ldots ,P, \end{aligned}$$
(21a)
$$\begin{aligned}&\quad d_p-\eta \le \Vert \varvec{v}_p\Vert ^2_2 \le d_p+\eta ,p=1,\ldots ,P, \end{aligned}$$
(21b)
$$\begin{aligned}&\quad \text {PVPR}(\varvec{x}_l)\le \sigma _l, l \in \mathbb {S}, \end{aligned}$$
(21c)
where \(d_p=D_pw_p\). From (21), we can see that the constraints \(d_p-\eta \le \Vert \varvec{v}_p\Vert ^2 \le d_p+\eta\) and \(\text {PVPR}(\varvec{x}_l)\le \sigma _l\) play their roles in separate subproblems with respect to (w.r.t.) the variables \(\varvec{v}_p\) and \(\varvec{x}_l\), respectively. However, \(\varvec{v}_p\) and \(\varvec{x}\) are coupled together in the additional constraint (21a). This variable-splitting structure enables us to utilize the ADMM strategy for solving (21). Specifically, we first construct the augmented Lagrangian function of (21) in terms of (21a) as,
$$\begin{aligned} \mathcal {L}_\rho (\varvec{x},\eta ,\varvec{v}_p,\tilde{\varvec{u}}_p)&=\eta +\tau \sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2}+\beta \varvec{x}^H\varvec{\Omega }\varvec{x} \nonumber \\&\quad \quad +\sum \limits _{p=1}^P\Re \big \{\tilde{\varvec{u}}_p^H(\hat{\varvec{A}}_p^H\varvec{x}-\varvec{v}_p)\big \}+\frac{\rho }{2}\sum \limits _{p=1}^P\Vert \hat{\varvec{A}}_p^H\varvec{x}-\varvec{v}_p\Vert ^2, \end{aligned}$$
(22)
where \(\rho >0\) is the penalty parameter and \(\{\tilde{\varvec{u}}_p\}_{p=1}^P\) are dual variables. By introducing the scaled dual variable \(\varvec{u}_p=\frac{\tilde{\varvec{u}}_p}{\rho }\), we can recast (22) as,
$$\begin{aligned} \mathcal {L}_\rho (\varvec{x},\eta ,\varvec{v}_p,\varvec{u}_p)&=\eta +\tau \sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2}+\beta \varvec{x}^H\varvec{\Omega }\varvec{x} \nonumber \\&\quad \quad +\frac{\rho }{2}\sum \limits _{p=1}^P\big (\Vert \hat{\varvec{A}}_p^H\varvec{x}-\varvec{v}_p +\varvec{u}_p\Vert ^2-\Vert \varvec{u}_p\Vert ^2\big ). \end{aligned}$$
(23)
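The scaled form follows by completing the square in the dual term: for each \(p\), writing \(\varvec{r}_p=\hat{\varvec{A}}_p^H\varvec{x}-\varvec{v}_p\) and \(\varvec{u}_p=\tilde{\varvec{u}}_p/\rho\), one has
$$\begin{aligned} \Re \big \{\tilde{\varvec{u}}_p^H\varvec{r}_p\big \}+\frac{\rho }{2}\Vert \varvec{r}_p\Vert ^2 =\frac{\rho }{2}\big (\Vert \varvec{r}_p\Vert ^2+2\Re \big \{\varvec{u}_p^H\varvec{r}_p\big \}+\Vert \varvec{u}_p\Vert ^2\big )-\frac{\rho }{2}\Vert \varvec{u}_p\Vert ^2 =\frac{\rho }{2}\big (\Vert \varvec{r}_p+\varvec{u}_p\Vert ^2-\Vert \varvec{u}_p\Vert ^2\big ). \end{aligned}$$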
The ADMM framework then alternately minimizes the augmented Lagrangian function \(\mathcal {L}_\rho\) w.r.t. \(\varvec{x}\) and \(\{\varvec{v}_p,\eta \}\), along with a dual ascent update on \(\varvec{u}_p\), producing the following iteration steps:
$$\begin{aligned}{} & {} \text {Step 1}: \varvec{x}^{t+1}=\mathop {{\mathrm{argmin}}}\limits _{\varvec{x}}\mathcal {L}_\rho (\varvec{x},\eta ^{t},\varvec{v}_p^{t},\varvec{u}_p^{t}),\nonumber \\{} & {} \quad \text {s.t.}~\text {PVPR}(\varvec{x}_l)\le \sigma _l,l \in \mathbb {S}; \end{aligned}$$
(24)
$$\begin{aligned}{} & {} \quad \text {Step 2}: \{\varvec{v}_p^{t+1},\eta ^{t+1}\}=\mathop {{\mathrm{argmin}}}\limits _{\varvec{v}_p,\eta }\mathcal {L}_\rho (\varvec{x}^{t+1},\eta ,\varvec{v}_p,\varvec{u}_p^{t})\nonumber \\{} & {} \quad \text {s.t.}~d_p-\eta \le \Vert \varvec{v}_p\Vert ^2_2 \le d_p+\eta ,p=1,\ldots ,P; \end{aligned}$$
(25)
$$\begin{aligned}{} & {} \quad \text {Step 3}: \varvec{u}_p^{t+1}=\varvec{u}_p^{t}+\hat{\varvec{A}}_p^H\varvec{x}^{t+1}-\varvec{v}_p^{t+1}, \end{aligned}$$
(26)
where \(t\) denotes the iteration index.
In what follows, we discuss the solutions to the subproblems (24) and (25), respectively.
1) The solution to (24): By omitting the irrelevant terms, the \(\varvec{x}\)-subproblem in (24) is given by,
$$\begin{aligned}&\min _{\varvec{x} }~\sum \limits _{p=1}^P\Vert \hat{\varvec{A}}_p^H\varvec{x}-\varvec{z}_p\Vert ^2+\frac{2\tau }{\rho }\sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2}+\frac{2\beta }{\rho }\varvec{x}^H\varvec{\Omega }\varvec{x} \nonumber \\&\quad \text {s.t.}~\text {PVPR}(\varvec{x}_l)\le \sigma _l,l \in \mathbb {S}, \end{aligned}$$
(27)
where \(\varvec{z}_p=\varvec{v}_p^{t}-\varvec{u}_p^{t}\). To solve (27) more conveniently, we further simplify it as,
$$\begin{aligned}&\min _{\varvec{x} }~\varvec{x}^H\varvec{B}\varvec{x}-2\Re \{\varvec{x}^H\varvec{b}\}+\frac{2\tau }{\rho }\sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2}\nonumber \\&\quad \text {s.t.}~\text {PVPR}(\varvec{x}_l)\le \sigma _l,l \in \mathbb {S}, \end{aligned}$$
(28)
where \(\varvec{B}=\sum \limits _{p=1}^P\hat{\varvec{A}}_p\hat{\varvec{A}}_p^H+\frac{2\beta }{\rho }\varvec{\Omega }\) and \(\varvec{b}=\sum \limits _{p=1}^P\hat{\varvec{A}}_p\varvec{z}_p\).
However, we find that the quadratic term \(\varvec{x}^H \varvec{B}\varvec{x}\) in (28) is inseparable in \(\varvec{x}_l,l=1,\ldots ,L\), which complicates the update of \(\varvec{x}_l^{t+1}\) subject to the PVPR constraint. To this end, we propose to update \(\varvec{x}_l^{t+1}\) in an approximate manner. More specifically, we first introduce the following Lemma 1 to help us seek a simpler surrogate problem for (28).
Lemma 1
[45]: Let \(\varvec{L}\) be an \(N\times N\) Hermitian matrix and \(\varvec{M}\) be another \(N\times N\) Hermitian matrix such that \(\varvec{M}\succeq \varvec{L}\). Then, for any point \(\varvec{x}^{t}\in \mathbb {C}^{N\times 1}\), the quadratic function \(\varvec{x}^H\varvec{L}\varvec{x}\) is majorized by:
$$\begin{aligned} \varvec{x}^H\varvec{L}\varvec{x}&\le \varvec{x}^H\varvec{M}\varvec{x}+2\Re \{\varvec{x}^H(\varvec{L}-\varvec{M})\varvec{x}^{t}\} \nonumber \\&~~+\varvec{x}^{(t)H}(\varvec{M}-\varvec{L})\varvec{x}^{t}. \end{aligned}$$
(29)
According to Lemma 1, we define \(\varvec{L}=\varvec{B}\) and \(\varvec{M}=\delta \varvec{I}\) with \(\delta =\lambda _{\mathrm{max}}(\varvec{B})\), and majorize the quadratic term \(\varvec{x}^H \varvec{B}\varvec{x}\) by a local upper-bound function \(g(\varvec{x},\varvec{x}^t)\) at \(\varvec{x}^t\), i.e.,
$$\begin{aligned} \varvec{x}^H \varvec{B}\varvec{x}&\le g(\varvec{x},\varvec{x}^t)\nonumber \\&=\delta \Vert \varvec{x}\Vert _2^2+2\Re \{\varvec{x}^H\varvec{R}\varvec{x}^t\}-\varvec{x}^{(t)H}\varvec{R}\varvec{x}^t, \end{aligned}$$
(30)
where \(\varvec{R}=\varvec{B}-\delta \varvec{I}_{NL}\).
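As a quick numerical sanity check of this majorization (an illustrative sketch of ours, not part of the paper; here \(\varvec{B}\) is a random Hermitian stand-in for the actual matrix), one can verify that \(g(\varvec{x},\varvec{x}^t)\) upper-bounds \(\varvec{x}^H\varvec{B}\varvec{x}\) everywhere and is tight at \(\varvec{x}=\varvec{x}^t\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Random Hermitian positive semidefinite stand-in for B
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = C.conj().T @ C

delta = np.linalg.eigvalsh(B)[-1]   # delta = lambda_max(B), so delta*I >= B
R = B - delta * np.eye(n)           # R = B - delta*I

def g(x, xt):
    """Majorizer of x^H B x at the point xt (Lemma 1 with M = delta*I)."""
    return (delta * np.linalg.norm(x) ** 2
            + 2 * np.real(x.conj() @ R @ xt)
            - np.real(xt.conj() @ R @ xt))

xt = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

assert np.real(x.conj() @ B @ x) <= g(x, xt) + 1e-9         # global upper bound
assert abs(np.real(xt.conj() @ B @ xt) - g(xt, xt)) < 1e-9  # tight at xt
```

The gap is \(g(\varvec{x},\varvec{x}^t)-\varvec{x}^H\varvec{B}\varvec{x}=(\varvec{x}-\varvec{x}^t)^H(\delta \varvec{I}-\varvec{B})(\varvec{x}-\varvec{x}^t)\ge 0\), which is nonnegative precisely because \(\delta \varvec{I}\succeq \varvec{B}\).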
It is worth stressing that, differing from the original ADMM strategy that directly solves the problem (28), our scheme updates \(\varvec{x}^{t+1}\) by replacing \(\varvec{x}^H\varvec{B}\varvec{x}\) with \(g(\varvec{x},\varvec{x}^t)\) in (28) and ignoring the constant terms, which results in the following surrogate problem,
$$\begin{aligned}&~~~~~\min _{\varvec{x} }~\delta \Vert \varvec{x}\Vert _2^2-2\Re \{\varvec{x}^H(\varvec{b}-\varvec{R}\varvec{x}^t)\} +\frac{2\tau }{\rho }\sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2} \nonumber \\&\Rightarrow ~\min _{\varvec{x}}~\delta \sum \limits _{l=1}^L\Vert \varvec{x}_l-\varvec{y}_l\Vert ^2+\frac{2\tau }{\rho }\sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2} \nonumber \\&~~~\text {s.t.}~\text {PVPR}(\varvec{x}_l)\le \sigma _l,l \in \mathbb {S}, \end{aligned}$$
(31)
where \(\varvec{y}_l\) is the lth subvector of \(\varvec{y}=[\varvec{y}_1^T,\ldots ,\varvec{y}_L^T]^T\) defined by,
$$\begin{aligned} \varvec{y}=\frac{1}{\delta }(\varvec{b}-\varvec{R}\varvec{x}^t)=\varvec{x}^t -\frac{1}{\delta }(\varvec{B}\varvec{x}^t-\varvec{b}). \end{aligned}$$
(32)
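The second line of the surrogate problem follows by completing the square: with \(\varvec{y}\) defined above (so that \(\delta \varvec{y}=\varvec{b}-\varvec{R}\varvec{x}^t\)),
$$\begin{aligned} \delta \Vert \varvec{x}\Vert _2^2-2\Re \{\varvec{x}^H(\varvec{b}-\varvec{R}\varvec{x}^t)\} =\delta \Vert \varvec{x}-\varvec{y}\Vert _2^2-\delta \Vert \varvec{y}\Vert _2^2, \end{aligned}$$
and the constant \(-\delta \Vert \varvec{y}\Vert _2^2\) can be dropped without affecting the minimizer.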
It is obvious that such a transformation in (31) makes the objective function separable in \(\varvec{x}_l\) for \(l=1,\ldots ,L\), so that each \(\varvec{x}_l\) under the PVPR constraint becomes much easier to solve. Specifically, the \(l\)th subproblem of (31) is given by,
$$\begin{aligned}&\min _{\varvec{x}_l}~\frac{2\tau }{\rho }\Vert \varvec{x}_l\Vert _{2} +\delta \Vert \varvec{x}_l-\varvec{y}_l\Vert ^2\nonumber \\&\text {s.t.}~\text {PVPR}(\varvec{x}_l)\le \sigma _l, l \in \mathbb {S}. \end{aligned}$$
(33)
Note that the PVPR constraint in (33) depends on the support set \(\mathbb {S}\), which is explicitly related to the sparsity-inducing term \(\tau \sum \limits _{l=1}^L\Vert \varvec{x}_l\Vert _{2}\). Hence, we have to first determine the support set \(\mathbb {S}\). To this end, ignoring the PVPR constraint for the moment, we find that the unconstrained optimization (33) admits a closed-form solution given by the block soft-thresholding formula:
$$\begin{aligned} \varvec{x}_l=\left\{ \begin{array}{cc} \varvec{q}_l, &{} \text {if}~\Vert \varvec{y}_l\Vert _2 \ge \hat{\tau }, \\ \varvec{0}_{N}, &{} \text {if}~\Vert \varvec{y}_l\Vert _2 < \hat{\tau }, \\ \end{array} \right. \end{aligned}$$
(34)
where \(\varvec{q}_l=(1-\frac{\hat{\tau }}{\Vert \varvec{y}_l\Vert _2})\varvec{y}_l\) with \(\hat{\tau }=\frac{\tau }{\rho \delta }\). From (34), we can naturally obtain the support set \(\mathbb {S}\) as,
$$\begin{aligned} \mathbb {S} =\{l \mid \Vert \varvec{y}_l\Vert _2 \ge \hat{\tau }\}, \end{aligned}$$
(35)
and under its complementary set \(\bar{\mathbb {S}}=\{l \mid \Vert \varvec{y}_l\Vert _2 < \hat{\tau }\}\), we accordingly set \(\varvec{x}_l,l\in \bar{\mathbb {S}}\), to the zero vector, i.e., \(\varvec{0}_{N}\). For \(\varvec{x}_l, l \in \mathbb {S}\), we turn to solve the following projection problem,
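In code, the block soft-thresholding update and the resulting support set can be sketched as follows (an illustrative sketch; the function name and toy data are ours):

```python
import numpy as np

def block_soft_threshold(y_blocks, tau_hat):
    """Shrink blocks with ||y_l||_2 >= tau_hat toward zero (giving q_l),
    zero out the rest, and return the support set S."""
    x_blocks, support = [], []
    for l, y_l in enumerate(y_blocks):
        norm = np.linalg.norm(y_l)
        if norm >= tau_hat:
            x_blocks.append((1.0 - tau_hat / norm) * y_l)  # q_l
            support.append(l)
        else:
            x_blocks.append(np.zeros_like(y_l))  # l belongs to the complement of S
    return x_blocks, support

# Toy example: block norms are 5, ~0.14, and 2, with threshold tau_hat = 1
y = [np.array([3.0, 4.0]), np.array([0.1, 0.1]), np.array([0.0, 2.0])]
x, S = block_soft_threshold(y, tau_hat=1.0)
# S = [0, 2]; x[1] is the zero vector
```

For \(l\in \mathbb {S}\), the surviving blocks \(\varvec{q}_l\) are then passed to the PVPR projection step.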
$$\begin{aligned}&\min _{\varvec{x}_{l}}~\Vert \varvec{x}_l-\varvec{q}_l\Vert ^2 \nonumber \\&\text {s.t.}~\text {PVPR}(\varvec{x}_l)\le \sigma _l. \end{aligned}$$
(36)
It can be seen that (36) is intractable due to the nonconvex PVPR constraint. To facilitate solving (36), we introduce a scalar \(\mu\) to equivalently rewrite (36) as,
$$\begin{aligned}&\min _{{\textbf{x}}_l,\mu }~\Vert {\textbf{x}}_l-{\textbf{q}}_l\Vert _2^2\nonumber \\&\text {s.t.}~\mu \le |x_l(n)|^2 \le \mu \sigma _l, n=1,\ldots ,N. \end{aligned}$$
(37)
By defining \(\zeta =\sqrt{\mu }\) and \(\nu _l=\sqrt{\sigma _l}\), we find that (37) is equivalent to,
$$\begin{aligned}&\min _{{\textbf{x}}_l,\zeta }~\Vert {\textbf{x}}_l-{\textbf{q}}_l\Vert _2^2\nonumber \\&\text {s.t.}~\zeta \le |x_l(n)| \le \zeta \nu _l, n=1,\ldots ,N. \end{aligned}$$
(38)
Fortunately, (38) admits a closed-form solution \(\zeta _o\), which can be determined by the technique in [46]. After obtaining \(\zeta _o\), we then have the optimal \(\mu ^\star =\zeta _o^2\), and each element of \(\varvec{x}_l^{t+1},l\in \mathbb {S}\), can be updated by the following projection:
$$\begin{aligned} x_l^{t+1}(n)=\left\{ \begin{array}{cc} \sqrt{\mu ^\star \sigma _l} e^{j\angle q_l(n)}, &{} |q_l(n)|^2 \ge \mu ^\star \sigma _l,\\ \sqrt{\mu ^\star } e^{j\angle q_l(n)}, &{} |q_l(n)|^2 \le \mu ^\star ,\\ q_l(n), &{} \text {otherwise},\\ \end{array} \right. \end{aligned}$$
(39)
for \(n=1,\ldots ,N\). Once the solution \(\varvec{x}_l^{t+1}\) is obtained for both cases \(l \in \mathbb {S}\) and \(l \in \bar{\mathbb {S}}\), the concatenated vector \(\varvec{x}^{t+1}\) is constructed as \(\varvec{x}^{t+1}=[\varvec{x}_1^{(t+1)T},\ldots ,\varvec{x}_L^{(t+1)T}]^T\).
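Given \(\mu ^\star\), the element-wise projection above amounts to clamping each entry's magnitude into \([\sqrt{\mu ^\star },\sqrt{\mu ^\star \sigma _l}]\) while preserving its phase. A minimal sketch (the helper name and toy data are ours):

```python
import numpy as np

def pvpr_project(q_l, mu_star, sigma_l):
    """Clamp each element's power into [mu_star, mu_star*sigma_l]
    while keeping its phase unchanged."""
    lo, hi = np.sqrt(mu_star), np.sqrt(mu_star * sigma_l)
    phase = np.exp(1j * np.angle(q_l))
    return np.clip(np.abs(q_l), lo, hi) * phase

# Magnitudes 0.5, 2, sqrt(2) are clipped into [1, sqrt(2)]
q = np.array([0.5 + 0.0j, 2.0j, 1.0 + 1.0j])
x = pvpr_project(q, mu_star=1.0, sigma_l=2.0)
# |x| = [1, sqrt(2), sqrt(2)], phases unchanged
```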
Note that when the common CM constraint (8) is considered in the \(\varvec{x}\)-subproblem with \(\tau =0\), the optimal \(\varvec{x}^{t+1}\) is readily updated in an element-wise manner as \(\varvec{x}^{t+1}=[\mu _0e^{j\angle y(1)},\ldots ,\mu _0e^{j\angle y(NL)}]^T\).
2) The solution to (25): By omitting the irrelevant terms, the \(\varvec{v}_p\)-subproblem in (25) is given by,
$$\begin{aligned}&\min _{\varvec{v}_p,\eta >0 }~\eta +\frac{\rho }{2}\sum \limits _{p=1}^P\Vert \varvec{v}_p-\tilde{\varvec{v}}_p\Vert ^2 \nonumber \\&\text {s.t.}~ d_p-\eta \le \Vert \varvec{v}_p\Vert ^2 \le d_p+\eta ,p=1,\ldots ,P, \end{aligned}$$
(40)
where \(\tilde{\varvec{v}}_p=\hat{\varvec{A}}_p^H\varvec{x}^{t+1}+\varvec{u}_p^{t}\). Clearly, once given \(\eta\), \(\varvec{v}_p^{t+1}\) can be updated by solving the following problem,
$$\begin{aligned}&\min _{\varvec{v}_p }~\sum \limits _{p=1}^P\Vert \varvec{v}_p-\tilde{\varvec{v}}_p\Vert ^2 \nonumber \\&\text {s.t.}~d_p-\eta \le \Vert \varvec{v}_p\Vert ^2 \le d_p+\eta ,p=1,\ldots ,P, \end{aligned}$$
(41)
which admits the closed-form solution,
$$\begin{aligned} \varvec{v}_p^{t+1}=\left\{ \begin{array}{cc} \sqrt{d_p+\eta }\frac{\tilde{\varvec{v}}_p}{\Vert \tilde{\varvec{v}}_p\Vert _2}, &{}\Vert \tilde{\varvec{v}}_p \Vert _2\ge \sqrt{d_p+\eta },\\ \sqrt{d_p-\eta }\frac{\tilde{\varvec{v}}_p}{\Vert \tilde{\varvec{v}}_p\Vert _2}, &{}\Vert \tilde{\varvec{v}}_p \Vert _2\le \sqrt{d_p-\eta },\\ \tilde{\varvec{v}}_p, &{} \text {otherwise}. \\ \end{array} \right. \end{aligned}$$
(42)
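This closed-form update is simply the Euclidean projection of \(\tilde{\varvec{v}}_p\) onto the annulus \(\sqrt{d_p-\eta }\le \Vert \varvec{v}_p\Vert _2\le \sqrt{d_p+\eta }\), realized by rescaling the norm while keeping the direction. A minimal sketch (the helper name and toy data are ours):

```python
import numpy as np

def annulus_project(v_tilde, d_p, eta):
    """Project v_tilde onto sqrt(d_p - eta) <= ||v||_2 <= sqrt(d_p + eta)
    by rescaling its norm and keeping its direction."""
    norm = np.linalg.norm(v_tilde)
    hi = np.sqrt(d_p + eta)
    lo = np.sqrt(max(d_p - eta, 0.0))
    if norm >= hi:
        return (hi / norm) * v_tilde  # shrink onto the outer sphere
    if norm <= lo:                    # (norm > 0 assumed for a unique direction)
        return (lo / norm) * v_tilde  # push out onto the inner sphere
    return v_tilde                    # already feasible

v = np.array([3.0, 4.0])                  # ||v|| = 5
p = annulus_project(v, d_p=4.0, eta=0.0)  # degenerates to the sphere of radius 2
# p = [1.2, 1.6]
```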
Then, plugging (42) into the cost function in (40) yields an optimization problem only in terms of \(\eta\):
$$\begin{aligned}&\min _{\eta }~\eta +\frac{\rho }{2}\sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le \Vert \tilde{\varvec{v}}_p \Vert ^2-d_p}\left( \sqrt{d_p+\eta }-\Vert \tilde{\varvec{v}}_p\Vert _2\right) ^2 \\&\quad +\frac{\rho }{2}\sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le d_p-\Vert \tilde{\varvec{v}}_p \Vert ^2} \left( \sqrt{d_p-\eta }-\Vert \tilde{\varvec{v}}_p\Vert _2\right) ^2, \\&\quad \text {s.t.}~\eta \in [0,\breve{\eta }], \end{aligned}$$
(43)
where \(\breve{\eta }=\max _p\{d_p\}_{p=1}^P\) and \(\mathcal {G}(\cdot )\big |_{\mathcal {C}}\) denotes the condition function that equals 1 if the argument satisfies \(\mathcal {C}\) and equals 0 otherwise. It can be seen that the objective function in (43) is a piecewise nonlinear one that relies on the value of the condition function \(\mathcal {G}(\cdot )\big |_{\mathcal {C}}\). To this end, we select \(J\) reasonable and non-overlapping turning points that are less than \(\breve{\eta }\) from the set \(\big \{\big |\Vert \tilde{\varvec{v}}_p \Vert ^2-d_p\big |\big \}_{p=1}^P\) and sort them as \(\{ \breve{r}_j\}_{j=1}^J\) in ascending order, which, together with the endpoints \(\breve{r}_0=0\) and \(\breve{r}_{J+1}=\breve{\eta }\), naturally divides the feasible domain of \(\eta\) into \(J+1\) subintervals. Evidently, the optimal solution \(\eta ^\star\) to (43) corresponds to the smallest objective value attained over the \(J+1\) subintervals. Therefore, we first study the local minimum in each subinterval. Specifically, on the \(j\)th subinterval, i.e., \(\eta \in [\breve{r}_{j-1},\breve{r}_j]\), the subproblem of (43) is given by,
$$\begin{aligned}&\min _{\eta }~\bar{a}_j\eta -\rho \sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le \Vert \tilde{\varvec{v}}_p \Vert ^2-d_p}\Vert \tilde{\varvec{v}}_p\Vert _2\sqrt{d_p+\eta } \\&\quad -\rho \sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le d_p-\Vert \tilde{\varvec{v}}_p \Vert ^2}\Vert \tilde{\varvec{v}}_p\Vert _2\sqrt{d_p-\eta }+\bar{c}_j \\&\quad \text {s.t.}~\eta \in [\breve{r}_{j-1},\breve{r}_j], \end{aligned}$$
(44)
where
$$\begin{aligned}&\bar{a}_j=1+\frac{\rho }{2}\sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le \Vert \tilde{\varvec{v}}_p \Vert ^2-d_p}-\frac{\rho }{2}\sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le d_p-\Vert \tilde{\varvec{v}}_p \Vert ^2}, \nonumber \\&\bar{c}_j=\frac{\rho }{2}\sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le \Vert \tilde{\varvec{v}}_p \Vert ^2-d_p}\big (d_p+\Vert \tilde{\varvec{v}}_p\Vert _2^2\big )\nonumber \\&~~~~~\quad +\frac{\rho }{2}\sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le d_p-\Vert \tilde{\varvec{v}}_p \Vert ^2} \big (d_p+\Vert \tilde{\varvec{v}}_p\Vert _2^2\big ). \end{aligned}$$
(45)
Denote the cost function in (44) as \(f_j(\eta )\); its first-order and second-order derivatives are given, respectively, by
$$\begin{aligned} f_j'(\eta )&=\bar{a}_j-\sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le \Vert \tilde{\varvec{v}}_p \Vert ^2-d_p} \frac{\rho \Vert \tilde{\varvec{v}}_p \Vert _2}{2\sqrt{d_p+\eta }}\nonumber \\&~~~~~~~\quad +\sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le d_p-\Vert \tilde{\varvec{v}}_p \Vert ^2} \frac{\rho \Vert \tilde{\varvec{v}}_p \Vert _2}{2\sqrt{d_p-\eta }}, \\ f_j''(\eta )&=\sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le \Vert \tilde{\varvec{v}}_p \Vert ^2-d_p}\frac{\rho \Vert \tilde{\varvec{v}}_p \Vert _2}{4(d_p+\eta )^{\frac{3}{2}}} \nonumber \\&~~~~~~~\quad + \sum \limits _{p=1}^P \mathcal {G}(\tilde{\varvec{v}}_p)\big |_{\eta \le d_p-\Vert \tilde{\varvec{v}}_p \Vert ^2} \frac{\rho \Vert \tilde{\varvec{v}}_p \Vert _2}{4(d_p-\eta )^{\frac{3}{2}}}. \end{aligned}$$
(46)
Clearly, \(f_j''(\eta )>0\), which implies that \(f_j'(\eta )\) is increasing in \(\eta\) and the subfunction \(f_j(\eta )\) is convex. Thus, the local optimal solution to the \(j\)th subproblem over \(\eta \in [\breve{r}_{j-1},\breve{r}_j]\) can be determined from one of the following three cases:

(1) When \(f_j'(\breve{r}_{j-1})>0\), the subfunction \(f_j(\eta )\) is increasing on the interval \([\breve{r}_{j-1},\breve{r}_j]\) and the local minimizer is \(\hat{\eta }_j=\breve{r}_{j-1}\).

(2) When \(f_j'(\breve{r}_{j})<0\), the subfunction \(f_j(\eta )\) is decreasing on the interval \([\breve{r}_{j-1},\breve{r}_j]\) and the local minimizer is \(\hat{\eta }_j=\breve{r}_{j}\).

(3) When \(f_j'(\breve{r}_{j-1})<0\) and \(f_j'(\breve{r}_{j})>0\), the equation \(f_j'(\eta )=0\) has a unique root in the interval \([\breve{r}_{j-1},\breve{r}_j]\), and the local minimizer \(\hat{\eta }_j\) can be obtained by applying a bisection search to \(f_j'(\eta )=0\) on this interval.
After obtaining \(\hat{\eta }_j\) and its objective value \(f_j(\hat{\eta }_j)\) for \(j=1,\ldots ,J+1\), we then choose the smallest one from \(\{f_j(\hat{\eta }_j)\}_{j=1}^{J+1}\) and set the corresponding \(\hat{\eta }_j\) as the optimal \(\eta ^{\star }\), which gives the update \(\eta ^{t+1}\). Once \(\eta ^{t+1}\) is obtained, \(\varvec{v}_p^{t+1}\) can be updated by inserting \(\eta ^{t+1}\) into (42).
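Each of the three cases above reduces to minimizing a convex function over an interval using only its (increasing) derivative; a generic sketch of that routine (function and variable names are ours):

```python
def minimize_convex_on_interval(fprime, a, b, tol=1e-10):
    """Minimize a convex differentiable function on [a, b], given its
    increasing derivative fprime: check the endpoint cases first, then
    bisect on fprime = 0 for the interior case."""
    if fprime(a) > 0:      # case (1): increasing on the whole interval
        return a
    if fprime(b) < 0:      # case (2): decreasing on the whole interval
        return b
    while b - a > tol:     # case (3): fprime(a) <= 0 <= fprime(b)
        mid = 0.5 * (a + b)
        if fprime(mid) < 0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

# Example: f(eta) = (eta - 1)^2 on [0, 3]; the minimizer is eta = 1
eta_hat = minimize_convex_on_interval(lambda e: 2.0 * (e - 1.0), 0.0, 3.0)
```

Applying this routine with \(f_j'\) on each subinterval yields the candidate minimizers \(\hat{\eta }_j\).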
Finally, we summarize the above steps in Algorithm 1 and repeat them until the maximum iteration number \(T_m\) is reached or the maximum residual, i.e., \(\max _p\Vert \hat{\varvec{A}}_p^H\varvec{x}^{t}-\varvec{v}_p^{t}\Vert _2\), is less than the stopping tolerance \(\varepsilon\).
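Algorithm 1 follows the standard scaled-form ADMM skeleton: an \(\varvec{x}\)-update, a projection-type \(\{\varvec{v}_p,\eta \}\)-update, a dual ascent step, and a residual-based stopping rule. The same skeleton is illustrated below on a deliberately tiny toy problem (projecting a point onto the unit sphere via the splitting \(\varvec{x}=\varvec{v}\)); this is entirely our own illustration, not the paper's problem:

```python
import numpy as np

# Toy scaled-form ADMM: min 0.5*||x - c||^2  s.t.  ||x||_2 = 1,
# split as x = v with v constrained to the unit sphere.
c = np.array([3.0, 4.0])
rho, eps, T_m = 1.0, 1e-10, 500

x = c.copy()
v = c / np.linalg.norm(c)
u = np.zeros_like(c)

for t in range(T_m):
    # Step 1: x-update, argmin_x 0.5||x - c||^2 + (rho/2)||x - v + u||^2
    x = (c + rho * (v - u)) / (1.0 + rho)
    # Step 2: v-update, project x + u onto the constraint set ||v||_2 = 1
    w = x + u
    v = w / np.linalg.norm(w)
    # Step 3: dual ascent on the scaled multiplier
    u = u + x - v
    # Residual-based stopping rule, as in Algorithm 1
    if np.linalg.norm(x - v) < eps:
        break

# x converges to the sphere projection of c, i.e., c/||c|| = [0.6, 0.8]
```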