4.1 Minimum conditional MSE criterion
Taking the CSI mismatch of \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) into account, the proposed objective is to minimize the conditional MSE between the transmitted and received signals subject to the relay power constraint, which is given as follows:
$$\begin{array}{*{20}l} &\min_{\mathbf{Q},\mathbf{W}}\ E\left[\parallel\mathbf{\widehat{s}}-\mathbf{s}{\parallel_{2}^{2}}|\widehat{\mathbf{H}_{1}},\widehat{\mathbf{H}_{2}}\right], \end{array} $$
(9a)
$$\begin{array}{*{20}l} &\text{subject to}\ tr\left(\mathbf{Q}\left({\sigma_{s}^{2}}E\left[\mathbf{H}_{1}\mathbf{H}_{1}^{H}|\mathbf{\widehat{H}}_{1}\right] +{\sigma_{1}^{2}}\mathbf{I}_{L}\right)\mathbf{Q}^{H}\right)\leq P_{r}, \end{array} $$
(9b)
where \(P_{r}\) is the upper bound of the relay transmission power.
Assuming that \(\mathbf{s}\) and \(\mathbf{n}\) are independent, based on (1), setting the gradient of (9a) with respect to \(\mathbf{W}^{H}\) to zero yields the optimal \(\mathbf{W}\):
$$\begin{array}{*{20}l} \mathbf{W}_{\text{opt}}={\sigma_{s}^{2}}E^{H}\left[\mathbf{H}|\mathbf{\widehat{H}}_{1},\mathbf{\widehat{H}}_{2}\right] \left(\mathbf{R}_{n}\right)^{-1}, \end{array} $$
(10)
where \(\mathbf{W}_{\text{opt}}\) denotes the optimal solution of \(\mathbf{W}\), and \(\mathbf {R}_{n}={\sigma _{s}^{2}}E\left [\mathbf {HH}^{H}|\mathbf {\widehat {H}}_{1},\mathbf {\widehat {H}}_{2}\right ] +{\sigma _{1}^{2}}E\left [\mathbf {H}_{2}\mathbf {QQ}^{H}\mathbf {H}_{2}^{H}|\mathbf {\widehat {H}}_{2}\right ] +{\sigma _{2}^{2}}\mathbf {I}_{N_{s}}\).
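As a concrete illustration of (10), the following sketch computes \(\mathbf{W}_{\text{opt}}\) numerically. It assumes the conditional moment matrices \(E\left[\mathbf{HH}^{H}|\mathbf{\widehat{H}}_{1},\mathbf{\widehat{H}}_{2}\right]\) and \(E\left[\mathbf{H}_{2}\mathbf{QQ}^{H}\mathbf{H}_{2}^{H}|\mathbf{\widehat{H}}_{2}\right]\) are already available from the CSI error model; the function and variable names are illustrative and are not taken from the paper.

```python
# Minimal sketch of (10): W_opt = sigma_s^2 * E[H|.]^H * R_n^{-1}.
# H_mean is the conditional mean E[H | H1hat, H2hat]; E_HHh and E_H2QQH2h
# are the conditional second-order moments appearing in R_n.
import numpy as np

def mmse_receiver(H_mean, E_HHh, E_H2QQH2h, sigma_s2, sigma_12, sigma_22):
    n_rx = H_mean.shape[0]
    Rn = sigma_s2 * E_HHh + sigma_12 * E_H2QQH2h + sigma_22 * np.eye(n_rx)
    # Solve R_n^H X = H_mean instead of forming the explicit inverse.
    return sigma_s2 * np.linalg.solve(Rn.conj().T, H_mean).conj().T
```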
Define \(\mathbf {\overline {H}}=E\left [\mathbf {H}|\mathbf {\widehat {H}}_{1},\mathbf {\widehat {H}}_{2}\right ]\), \(\mathbf {\overline {H}}_{1}=E\left [\mathbf {H}_{1}|\mathbf {\widehat {H}}_{1}\right ]\), and \(\mathbf {\overline {H}}_{2}=E\left [\mathbf {H}_{2}|\mathbf {\widehat {H}}_{2}\right ]\). Substituting (1) and (10) into (9a) yields (11):
$$\begin{array}{*{20}l} &\max_{\mathbf{Q}}\ tr \left(\mathbf{\overline{H}}^{H}\mathbf{R}_{n}^{-1} \mathbf{\overline{H}}\right), \end{array} $$
(11a)
$$\begin{array}{*{20}l} &\text{subject to}\ tr\left(\mathbf{Q}\left({\sigma_{s}^{2}}E\left[\mathbf{H}_{1}\mathbf{H}_{1}^{H}|\mathbf{\widehat{H}}_{1}\right] +{\sigma_{1}^{2}}\mathbf{I}_{L}\right)\mathbf{Q}^{H}\right)\leq P_{r}. \end{array} $$
(11b)
It is observed from (11a) that \(\mathbf{Q}\) appears inside the matrix inverse; therefore, direct optimization of (11) is difficult. To facilitate the solution of (11), the structure of the optimal \(\mathbf{Q}\) is analyzed, and it is found that the optimal \(\mathbf{Q}\) has the form of
$$\begin{array}{@{}rcl@{}} \mathbf{Q}_{\text{opt}}=\mathbf{V}_{2}\boldsymbol{\Phi}_{1}\mathbf{U}_{1}^{H}, \end{array} $$
(12)
where \(\mathbf{Q}_{\text{opt}}\) is the optimal \(\mathbf{Q}\), \(\boldsymbol{\Phi}_{1}\) is an \(M\times M\) diagonal matrix, and \(\mathbf{V}_{2}\) and \(\mathbf{U}_{1}\) are unitary matrices constituted by the right- and left-singular vectors of \(\overline {\mathbf {H}}_{2}\) and \(\overline {\mathbf {H}}_{1}\), respectively. The proof of (12) is provided in Appendix A. Using (12), (11) is equivalently expressed as
$$\begin{array}{*{20}l} \max_{\boldsymbol{\Phi}_{1}}J\left(\boldsymbol{\Phi}_{1}\right), \end{array} $$
(13a)
$$\begin{array}{*{20}l} \text{subject to }\sum_{i=1}^{M}\gamma_{i}|\phi_{i}|^{2}\leq P_{r}, \end{array} $$
(13b)
where
$$\begin{array}{*{20}l} J\left(\boldsymbol{\Phi}_{1}\right)=\sum_{i=1}^{M}\frac{\alpha_{i}|\phi_{i}|^{2}}{\beta_{i}|\phi_{i}|^{2}+{\sigma_{s}^{2}}b+{\sigma_{1}^{2}}c+{\sigma_{2}^{2}}}, \end{array} $$
(14)
\(\phi_{i}\) is the \(i\)th diagonal element of \(\boldsymbol{\Phi}_{1}\), \(\alpha _{i}=\lambda _{1,i}^{2}\lambda _{2,i}^{2}\), \(\beta _{i}={\sigma _{s}^{2}}\left (\lambda _{1,i}^{2}+\sigma _{h_{1}|\widehat {h}_{1}}^{2}\right)\lambda _{2,i}^{2}+{\sigma _{1}^{2}}\lambda _{2,i}^{2}\), \(\gamma _{i}={\sigma _{s}^{2}}\mid \lambda _{1,i}\mid ^{2}+{\sigma _{s}^{2}}\sigma _{h_{1}|\widehat {h}_{1}}^{2}+{\sigma _{1}^{2}}\),
$$\begin{array}{*{20}l} &b=\sigma_{h_{2}|\widehat{h}_{2}}^{2}\sum_{i=1}^{N_{s}}\left(\lambda_{1,i}^{2}+\sigma_{h_{1}|\widehat{h}_{1}}^{2}\right)\mid\phi_{i}\mid^{2}, \end{array} $$
(15)
$$\begin{array}{*{20}l} &c=\sigma_{h_{2}|\widehat{h}_{2}}^{2}\sum_{i=1}^{N_{s}}\mid\phi_{i}\mid^{2}, \end{array} $$
(16)
and \(\lambda_{1,i}\) and \(\lambda_{2,i}\) denote the \(i\)th singular values of \(\overline {\mathbf {H}}_{1}\) and \(\overline {\mathbf {H}}_{2}\), respectively. Details of the transformation from (11) to (13) are presented in Appendix B. The optimization problem (13) is easier to solve than (11). Once the optimal \(\boldsymbol{\Phi}_{1}\) is derived, \(\mathbf{Q}_{\text{opt}}\) and \(\mathbf{W}_{\text{opt}}\) are computed using (12) and (10), respectively.
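The scalarized problem (13)-(16) can be evaluated directly once the singular values of \(\overline{\mathbf{H}}_{1}\) and \(\overline{\mathbf{H}}_{2}\) are available. The sketch below builds \(\mathbf{Q}\) according to (12) and evaluates \(J(\boldsymbol{\Phi}_{1})\) together with the power constraint (13b); it is a minimal illustration under our own naming (err1 and err2 stand for \(\sigma_{h_{1}|\widehat{h}_{1}}^{2}\) and \(\sigma_{h_{2}|\widehat{h}_{2}}^{2}\)), not the authors' implementation.

```python
# Sketch of (12)-(16): construct Q from the singular vectors of the conditional
# channel means and evaluate the scalarized objective and power constraint.
import numpy as np

def relay_matrix(H1bar, H2bar, phi):
    """Q = V2 * diag(phi) * U1^H as in (12); also returns the singular values."""
    U1, lam1, _ = np.linalg.svd(H1bar)       # left-singular vectors of H1bar
    _, lam2, V2h = np.linalg.svd(H2bar)      # V2 = V2h^H: right-singular vectors of H2bar
    M = len(phi)
    Q = V2h.conj().T[:, :M] @ np.diag(phi) @ U1[:, :M].conj().T
    return Q, lam1[:M], lam2[:M]

def scalar_objective(p, lam1, lam2, sigma_s2, sigma_12, sigma_22, err1, err2):
    """J(Phi_1) of (14) for p_i = |phi_i|^2, using b and c from (15)-(16)."""
    alpha = lam1**2 * lam2**2
    beta = sigma_s2 * (lam1**2 + err1) * lam2**2 + sigma_12 * lam2**2
    b = err2 * np.sum((lam1**2 + err1) * p)                        # (15)
    c = err2 * np.sum(p)                                           # (16)
    return np.sum(alpha * p / (beta * p + sigma_s2 * b + sigma_12 * c + sigma_22))

def relay_power(p, lam1, sigma_s2, sigma_12, err1):
    """Left-hand side of (13b), i.e., sum_i gamma_i |phi_i|^2."""
    gamma = sigma_s2 * lam1**2 + sigma_s2 * err1 + sigma_12
    return np.sum(gamma * p)
```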
When \(\sigma _{h_{1}|\widehat {h}_{1}}^{2}=0\) and \(\sigma _{h_{2}|\widehat {h}_{2}}^{2}=0\), we have \(b=0\), \(c=0\), \(\beta _{i}={\sigma _{s}^{2}}\lambda _{1,i}^{2}\lambda _{2,i}^{2}+{\sigma _{1}^{2}}\lambda _{2,i}^{2}\), and \(\gamma _{i}={\sigma _{s}^{2}}\mid \lambda _{1,i}\mid ^{2}+{\sigma _{1}^{2}}\), so (13) becomes
$$ \max_{\phi_{i}}\sum_{i=1}^{N_{s}}\frac{\lambda_{1,i}^{2}\lambda_{2,i}^{2}\mid\phi_{i}\mid^{2}} {\left({\sigma_{s}^{2}}\lambda_{1,i}^{2}\lambda_{2,i}^{2}+ {\sigma_{1}^{2}}\lambda_{2,i}^{2}\right)\mid\phi_{i}\mid^{2}+{\sigma_{2}^{2}}}, $$
(17a)
$$ \text{subject to} \sum_{i=1}^{M}\left({\sigma_{s}^{2}}\mid\lambda_{1,i}\mid^{2}+{\sigma_{1}^{2}}\right)\mid\phi_{i}\mid^{2}\leq P_{r}, $$
(17b)
which are equivalent to (24) and (25) of [19]. Therefore, the minimum conditional MSE criterion reduces to the MSE criterion when the CSI mismatch vanishes.
4.2 Global solution by genetic algorithm
It is observed from (13) that multiplying each \(\phi_{i}\) by a constant larger than one increases both the relay transmission power and the value of the objective function (13a). Therefore, the optimal \(\phi_{i}\) is obtained when (13b) holds with equality. Based on this observation, the chromosome values of the genetic algorithm (GA) are optimized within the interval \([0,1]\) and then multiplied by a constant \(\alpha\), which is obtained when (13b) holds with equality, i.e.,
$$ {{}{\begin{aligned} \alpha=\frac{P_{r}}{\sum_{i=1}^{M}{\sigma_{s}^{2}}\mid\phi_{i}\mid^{2}\mid{\lambda}_{1,i}\mid^{2}+{\sigma_{s}^{2}}\sigma^{2}_{h_{1}|\widehat{h}_{1}} \mid\phi_{i}\mid^{2}+{\sigma_{1}^{2}}\mid\phi_{i}\mid^{2}}. \end{aligned}}} $$
(18)
Substituting \(\alpha\phi_{i}\) into (13a) yields the value of the fitness function of the GA.
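A possible realization of this fitness evaluation is sketched below. The chromosome is taken here to encode \(p_{i}=|\phi_{i}|^{2}\in[0,1]\), so that the constant of (18) scales the powers directly and (13b) holds with equality; scaling \(\phi_{i}\) by \(\sqrt{\alpha}\) would be equivalent. The sketch reuses the `scalar_objective` helper from the previous listing and can be plugged into any standard GA toolbox; all names are illustrative.

```python
# Sketch of the Section 4.2 fitness: normalize the candidate powers with (18),
# then evaluate (13a). p is the chromosome, one entry per relay stream.
import numpy as np

def ga_fitness(p, lam1, lam2, sigma_s2, sigma_12, sigma_22, err1, err2, Pr):
    gamma = sigma_s2 * lam1**2 + sigma_s2 * err1 + sigma_12
    alpha_scale = Pr / np.sum(gamma * p)       # (18): makes (13b) an equality
    p_scaled = alpha_scale * p                 # now sum(gamma * p_scaled) == Pr
    return scalar_objective(p_scaled, lam1, lam2,
                            sigma_s2, sigma_12, sigma_22, err1, err2)
```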
4.3 Relaxed solution by water filling strategy
It is observed from (13a) that the terms \(b\) and \(c\) contain \(\phi_{i}\), \(i=1,\ldots,M\), which prevents deriving an analytical solution to (13). To avoid the high computational load of the global search algorithm, a relaxed version of (13) is proposed here.
From (13a), it is noted that increasing the values of \(b\) and \(c\) reduces the value of \(J(\boldsymbol{\Phi}_{1})\), which means
$$\begin{array}{@{}rcl@{}} J(\boldsymbol{\Phi}_{1})\geq J(\boldsymbol{\Phi}_{1})_{\text{max}}, \end{array} $$
(19)
where \(J(\boldsymbol{\Phi}_{1})_{\text{max}}\) is computed from (13a) using \(b_{\text{max}}\) and \(c_{\text{max}}\). Here, \(b_{\text{max}}\) and \(c_{\text{max}}\) denote the maximum values of \(b\) and \(c\), respectively. From the relay power constraint (13b), \(b_{\text{max}}\) and \(c_{\text{max}}\) are straightforward to derive and are given below:
$$\begin{array}{@{}rcl@{}} b_{\text{max}}=\sum_{i=1}^{M}\frac{P_{r}\sigma^{2}_{h_{2}|\widehat{h}_{2}}\left(\lambda_{1,i}^{2}+\sigma^{2}_{h_{1}|\widehat{h}_{1}}\right)} {{\sigma_{s}^{2}}\mid{\lambda}_{1,i}\mid^{2}+{\sigma_{s}^{2}}\sigma^{2}_{h_{1}|\widehat{h}_{1}}+{\sigma_{1}^{2}}}, \end{array} $$
(20)
$$\begin{array}{@{}rcl@{}} c_{\text{max}}=\sum_{i=1}^{M}\frac{P_{r}\sigma^{2}_{h_{2}|\widehat{h}_{2}}} {{\sigma_{s}^{2}}\mid{\lambda}_{1,i}\mid^{2}+{\sigma_{s}^{2}}\sigma^{2}_{h_{1}|\widehat{h}_{1}}+{\sigma_{1}^{2}}}. \end{array} $$
(21)
Substituting (20) and (21) into (13), and using the Lagrange multiplier technique, the solution of the relaxed version of (13) is given by
$$ {{}{\begin{aligned} \mid\phi_{i}\mid^{2}=\frac{1}{{\sigma_{s}^{2}}\mid{\lambda}_{2,i}\mid^{2}\left(\mid{\lambda}_{1,i}\mid^{2}+\sigma^{2}_{h_{1}|\widehat{h}_{1}}\right) +{\sigma_{1}^{2}}\mid{\lambda}_{2,i}\mid^{2}} \\ \cdot\left(\sqrt{\frac{{\sigma_{s}^{2}}\mid{\lambda}_{1,i}\mid^{2}\mid{\lambda}_{2,i}\mid^{2}\sigma_{2,\text{max}}^{2} }{\mu\left({\sigma_{s}^{2}}\mid{\lambda}_{1,i}\mid^{2}+{\sigma_{s}^{2}}\sigma^{2}_{h_{1}|\widehat{h}_{1}}+{\sigma_{1}^{2}}\right)}}-\sigma_{2,\text{max}}^{2}\right)^{+}, \forall i, \end{aligned}}} $$
(22)
where
$$\begin{array}{@{}rcl@{}} \sigma_{2,\text{max}}^{2}={\sigma_{s}^{2}}{b}_{\text{max}}+{\sigma_{1}^{2}}{c}_{\text{max}}+{\sigma_{2}^{2}}, \end{array} $$
(23)
and \((x)^{+}=\max(x,0)\); \(\mu\) is the Lagrange multiplier, which should be chosen such that (13b) is satisfied.
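The closed form (22) can be evaluated once \(\mu\) is fixed, and since the resulting relay power decreases monotonically in \(\mu\), a simple bisection suffices to enforce (13b). The sketch below follows (20)-(23) under our own naming and tolerance choices; it illustrates the procedure and is not the authors' code.

```python
# Sketch of the relaxed water-filling solution: compute b_max, c_max from
# (20)-(21), sigma^2_{2,max} from (23), then the levels |phi_i|^2 of (22),
# with mu found by bisection so that the relay power meets (13b).
import numpy as np

def water_filling(lam1, lam2, sigma_s2, sigma_12, sigma_22, err1, err2, Pr,
                  tol=1e-9):
    gamma = sigma_s2 * lam1**2 + sigma_s2 * err1 + sigma_12
    b_max = np.sum(Pr * err2 * (lam1**2 + err1) / gamma)           # (20)
    c_max = np.sum(Pr * err2 / gamma)                              # (21)
    sig2_max = sigma_s2 * b_max + sigma_12 * c_max + sigma_22      # (23)
    beta = sigma_s2 * lam2**2 * (lam1**2 + err1) + sigma_12 * lam2**2

    def powers(mu):
        # (22): water-filling levels for a given Lagrange multiplier mu
        root = np.sqrt(sigma_s2 * lam1**2 * lam2**2 * sig2_max / (mu * gamma))
        return np.maximum(root - sig2_max, 0.0) / beta

    # Total power sum(gamma * |phi_i|^2) decreases as mu grows, so bisect on mu.
    lo, hi = 1e-12, 1.0
    while np.sum(gamma * powers(hi)) > Pr:
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if np.sum(gamma * powers(mid)) > Pr:
            lo = mid
        else:
            hi = mid
    return powers(hi)
```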