In this section, the first algorithm is formulated. It starts with arbitrary transmit and receive filters and iteratively updates them until a solution is reached. The goal is a robust transceiver design, obtained by solving the optimization problem \( \underset{u_d^k}{ \max }E\left[ SINR\right] \) in the original network and \( \underset{\overleftarrow{u_d^j}}{ \max }E\left[\overleftarrow{SINR}\right] \) in the reciprocal network. The iterative algorithm alternates between these two networks, and within each network only the receivers update their filters.
In the following, an approximate expression for the mean value is derived first, and then the optimization problem is solved.
3.1 Estimating the mean of \( {SINR}_d^k \)
In (6), \( E\left[{SINR}_d^k\right] \) is expressed in terms of the ratio \( {SINR}_d^k=\frac{num}{den} \) and the joint probability density function f(num, den):
$$ E\left[{SINR}_d^k\right]={\int}_{-\infty}^{\infty }{\int}_{-\infty}^{\infty}\frac{num}{den}\,f\left( num, den\right)\,\mathrm{d} num\,\mathrm{d} den. $$
(6)
Unfortunately, a closed-form solution cannot be derived for the integral in (6); hence, the mean must be approximated. If f(num, den) is concentrated near its mean, then \( E\left[\frac{num}{den}\right]\cong \frac{E\left[ num\right]}{E\left[ den\right]} \), and the mean value can be approximated by
$$ E\left[{SINR}_d^k|{H}^{kj}\right]\cong \frac{\mu_1}{\mu_2}=\frac{{u_d^k}^{\dagger}\left[{T}_d^k+P{\sigma}^2I\right]{u}_d^k}{{u_d^k}^{\dagger}\left[{S}^k-{T}_d^k+\left(P{\sigma}^2\sum_{j=1}^K{D}^j-P{\sigma}^2+{N}_0\right)I\right]{u}_d^k}. $$
(7)
where \( {S}^k=P\sum_{j=1}^K\sum_{m=1}^{D^j}{H}^{kj}{v}_m^j{v_m^j}^{\dagger }{H^{kj}}^{\dagger } \) and \( {T}_d^k=P{H}^{kk}{v}_d^k{v_d^k}^{\dagger }{H^{kk}}^{\dagger } \) denote, respectively, the estimated covariance matrix of all data streams observed by receiver k and the estimated covariance matrix of the d-th desired data stream. Since the estimation of the mean is common to both algorithms, its derivation is provided in Appendix 1.
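To make (7) concrete, the following sketch evaluates \( {\mu}_1 \) and \( {\mu}_2 \), whose ratio approximates the conditional mean SINR, for a given receive filter. The array layout and the names H, V, P, sigma2, N0 are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def stream_moments(u, k, d, H, V, P, sigma2, N0):
    """Return mu_1 and mu_2 of (7); mu_1/mu_2 approximates E[SINR_d^k | H^{kj}].

    H[k][j]: estimated channel from transmitter j to receiver k (Nr x Nt).
    V[j]:    precoder of transmitter j; column m is v_m^j (Nt x D^j).
    u:       receive filter u_d^k, a complex vector of length Nr.
    """
    Nr = H[k][k].shape[0]
    I = np.eye(Nr)
    # S^k: estimated covariance of all data streams observed by receiver k
    S = sum(P * (H[k][j] @ V[j]) @ (H[k][j] @ V[j]).conj().T for j in range(len(V)))
    # T_d^k: estimated covariance of the d-th desired data stream
    hv = H[k][k] @ V[k][:, d]
    T = P * np.outer(hv, hv.conj())
    D_total = sum(Vj.shape[1] for Vj in V)  # sum_j D^j
    mu1 = (u.conj() @ (T + P * sigma2 * I) @ u).real  # Eq. (31) in Appendix 1
    mu2 = (u.conj() @ (S - T + (P * sigma2 * D_total - P * sigma2 + N0) * I) @ u).real  # Eq. (32)
    return mu1, mu2
```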
3.2 Iterative solution
To obtain the columns of \( {U}^k \), the derivative of (7) with respect to \( {u}_d^k \) is set equal to zero. Thus, \( {u}_d^k \) must satisfy the following vector equation (i.e., the derivative of the numerator multiplied by \( {\mu}_2 \) must equal the derivative of the denominator multiplied by \( {\mu}_1 \)):
$$ \begin{array}{c}{\mu}_2\left[{T}_d^k+P{\sigma}^2I\right]{u}_d^k={\mu}_1\left[{S}^k-{T}_d^k+\left(P{\sigma}^2{\sum}_{j=1}^K{D}^j-P{\sigma}^2+{N}_0\right)I\right]{u}_d^k,\\ {}{\mu}_1=E\left[ num|{H}^{kj}\right]={u_d^k}^{\dagger}\left[{T}_d^k+P{\sigma}^2I\right]{u}_d^k\kern1em \left(\mathrm{Eq}.\ (31)\ \mathrm{in}\ \mathrm{Appendix}\ 1\right),\\ {}{\mu}_2=E\left[ den|{H}^{kj}\right]={u_d^k}^{\dagger}\left[{S}^k-{T}_d^k+\left(P{\sigma}^2{\sum}_{j=1}^K{D}^j-P{\sigma}^2+{N}_0\right)I\right]{u}_d^k\kern1em \left(\mathrm{Eq}.\ (32)\ \mathrm{in}\ \mathrm{Appendix}\ 1\right).\end{array} $$
(8)
The above vector equation is rearranged as follows by moving the terms involving \( {S}^k{u}_d^k \) and \( {u}_d^k \) to the left-hand side and \( {T}_d^k{u}_d^k \) to the right-hand side:
$$ {\mu}_1\left({S}^k{u}_d^k+{\varOmega}_d^k\ {u}_d^k\right)=\left({\mu}_1+{\mu}_2\right){T}_d^k{u}_d^k, $$
(9)
$$ {\varOmega}_d^k=P{\sigma}^2{\sum}_{j=1}^K{D}^j-P{\sigma}^2\frac{\mu_1+{\mu}_2}{\mu_1}+{N}_0. $$
(10)
where the scalar coefficient \( {\Omega}_d^k \) is defined in (10).
According to its definition, \( {T}_d^k{u}_d^k \) is the product of the scalar \( {v_d^k}^{\dagger }{H^{kk}}^{\dagger }{u}_d^k \) and the vector \( {H}^{kk}{v}_d^k \); hence, \( {T}_d^k{u}_d^k \) lies in the direction of \( {H}^{kk}{v}_d^k \). Since only the directions matter, the scalar factors \( {\mu}_1 \) and \( \left({\mu}_1+{\mu}_2\right){v_d^k}^{\dagger }{H^{kk}}^{\dagger }{u}_d^k \) can be removed from (9). The unit vector that maximizes (7) is then given by
$$ {u}_d^k=\frac{{\left({S}^k+{\varOmega}_d^k\ I\right)}^{-1}{H}^{kk}{v}_d^k}{\left\Vert {\left({S}^k+{\varOmega}_d^k\ I\right)}^{-1}{H}^{kk}{v}_d^k\right\Vert }. $$
(11)
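The corresponding filter update could be implemented as below, a minimal sketch reusing the conventions and the stream_moments helper above; \( {\mu}_1 \) and \( {\mu}_2 \) are evaluated at the current filter because \( {\Omega}_d^k \) and \( {u}_d^k \) are interdependent, as discussed further below.

```python
def update_receive_filter(k, d, H, V, P, sigma2, N0, mu1, mu2):
    """Return the unit-norm u_d^k of (11), with Omega_d^k computed from (10)."""
    Nr = H[k][k].shape[0]
    S = sum(P * (H[k][j] @ V[j]) @ (H[k][j] @ V[j]).conj().T for j in range(len(V)))
    D_total = sum(Vj.shape[1] for Vj in V)
    # Scalar coefficient Omega_d^k of (10)
    Omega = P * sigma2 * D_total - P * sigma2 * (mu1 + mu2) / mu1 + N0
    # (S^k + Omega I)^{-1} H^{kk} v_d^k, then normalize -- Eq. (11)
    w = np.linalg.solve(S + Omega * np.eye(Nr), H[k][k] @ V[k][:, d])
    return w / np.linalg.norm(w)
```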
Now, consider the reciprocal network. The transmit precoding matrices \( \overleftarrow{V^k} \) are the receive interference suppression matrices \( {U}^k \) from the original network, whose columns are given by (11). The optimal d-th unit column of \( \overleftarrow{U^j} \) is given by
$$ \overleftarrow{u_d^j}=\frac{{\left(\overleftarrow{S^j}+\overleftarrow{\varOmega_d^j}\ I\right)}^{-1}\ \overleftarrow{H^{jj}}\ \overleftarrow{v_d^j}}{\left\Vert {\left(\overleftarrow{S^j}+\overleftarrow{\varOmega_d^j}\ I\right)}^{-1}\ \overleftarrow{H^{jj}}\ \overleftarrow{v_d^j}\right\Vert }. $$
(12)
Now, the receive interference suppression matrices in the reciprocal network replace \( {V}^j\forall j\in \mathcal{K} \), and then \( {U}^k\forall k\in \mathcal{K} \) are updated based on them. It is seen from (10) and (11) that \( {\Omega}_d^k \) and \( {u}_d^k \) are interdependent. Therefore, before the steps are repeated, \( {\Omega}_d^k \) must be recomputed at the end of each iteration. The iterative procedure is summarized in Fig. 3, and one pass is sketched below.
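Concretely, one pass of Fig. 3 could look as follows. This is an illustrative driver built on the two snippets above; the argument H_rev (reciprocal-channel estimates) and the in-place updates are assumptions, not the paper's reference code.

```python
def em_pass(H, H_rev, V, U, U_rev, P, sigma2, N0):
    """One iteration of the alternating algorithm (sketch of Fig. 3).

    U and U_rev from the previous pass supply mu_1 and mu_2, since
    Omega_d^k and u_d^k are interdependent.
    """
    K = len(V)
    # Step 1: receivers of the original network update the columns of U^k via (11)
    for k in range(K):
        for d in range(V[k].shape[1]):
            mu1, mu2 = stream_moments(U[k][:, d], k, d, H, V, P, sigma2, N0)
            U[k][:, d] = update_receive_filter(k, d, H, V, P, sigma2, N0, mu1, mu2)
    # Steps 2-3: reverse the links (V_rev = U) and update U_rev via (12)
    for j in range(K):
        for d in range(U[j].shape[1]):
            mu1, mu2 = stream_moments(U_rev[j][:, d], j, d, H_rev, U, P, sigma2, N0)
            U_rev[j][:, d] = update_receive_filter(j, d, H_rev, U, P, sigma2, N0, mu1, mu2)
    # Step 4: the reciprocal receive filters become the new precoders
    return U_rev, U
```

The first return value serves as the precoder set V for the next pass, which closes the loop.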
It can be proved that the EM filters for the special case \( {\sigma}^2=0 \) are the transmit/receive matrices of the Max-SINR algorithm (the proof is provided in Appendix 2).
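As a quick consistency check with Appendix 2, setting \( {\sigma}^2=0 \) in (10) leaves only the noise term, so (11) collapses to the familiar Max-SINR receive filter:
$$ {\varOmega}_d^k\Big|_{\sigma^2=0}={N}_0,\kern2em {u}_d^k=\frac{{\left({S}^k+{N}_0I\right)}^{-1}{H}^{kk}{v}_d^k}{\left\Vert {\left({S}^k+{N}_0I\right)}^{-1}{H}^{kk}{v}_d^k\right\Vert }. $$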
In order to implement the algorithm in a distributed manner, receiver k needs to know \( {H}^{kk} \) and \( {S}^k \), which are locally available. The covariance matrix \( {S}^k \) can be estimated from the autocorrelation of the received signal \( {Y}^k \). Substituting \( {X}^j=\sum_{d=1}^{D^j}{v}_d^j{s}_d^j \) into (2) yields
$$ E\left[{Y}^k{Y^k}^{\dagger}\right]={S}^k+\left(P{\sigma}^2{\sum}_{j=1}^K{D}^j+{N}_0\right)I, $$
(13)
where the expectation is computed over the error and the noise. Receiver j in the reciprocal channel can learn \( \overleftarrow{U^j} \) in a similar manner. For TDD systems, the transmitters can estimate the channels from the sounding signals received in the reverse link ([21], Section II-A). Using MMSE channel prediction, the CSI estimate \( {H}^{kj} \) is obtained, and the CSI error \( {E}^{kj} \) is Gaussian distributed and independent of the CSI estimate \( {H}^{kj} \) ([21], Section II-A).
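For instance, a sample-average version of (13) could be implemented as follows (a sketch; the snapshot layout and the names are assumptions rather than the paper's code).

```python
def estimate_Sk(Y, P, sigma2, N0, D_total):
    """Blindly estimate S^k from received snapshots by inverting (13).

    Y: (n_samples, Nr) complex array whose rows are received vectors Y^k.
    """
    n, Nr = Y.shape
    R = (Y.T @ Y.conj()) / n  # sample autocorrelation, approximates E[Y^k Y^k^dagger]
    return R - (P * sigma2 * D_total + N0) * np.eye(Nr)  # remove the diagonal term of (13)
```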
3.3 Proof of convergence
Now, the convergence of the proposed EM algorithm is demonstrated by considering an equivalent problem. The EM optimization can be written as follows:
$$ \mathit{\max}\frac{{u_d^k}^{\dagger }Q{u}_d^k}{{u_d^k}^{\dagger }F{u}_d^k}, $$
(14)
where \( Q={Q}^{\dagger }={T}_d^k+P{\sigma}^2I\ge 0 \) and \( F={F}^{\dagger }={S}^k-{T}_d^k+\left(P{\sigma}^2\sum_{j=1}^K{D}^j-P{\sigma}^2+{N}_0\right)I>0 \) are Hermitian matrices and \( {u}_d^k \) is the optimization variable. It is shown in [26] that (14) is equivalent to
$$ \begin{array}{c}\max\ {u_d^k}^{\dagger }Q{u}_d^k,\\ {}\mathrm{s}.\mathrm{t}.\ {u_d^k}^{\dagger }F{u}_d^k=1.\end{array} $$
(15)
For the equivalent problem, the Lagrangian function is given by \( l\left({u}_d^k,\lambda \right)={u_d^k}^{\dagger }Q{u}_d^k+\lambda \left(1-{u_d^k}^{\dagger }F{u}_d^k\right) \). The solution \( {u_d^k}^{\ast } \) is the eigenvector corresponding to the maximal eigenvalue of \( {F}^{-1}Q \), and the Lagrange multiplier is \( {\lambda}^{\ast }={{u_d^k}^{\ast}}^{\dagger }Q{u_d^k}^{\ast } \).
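Numerically, (15) is a generalized Hermitian eigenproblem, and the solution could be obtained as in the sketch below (assuming Q and F are built as defined above, with F positive definite).

```python
from scipy.linalg import eigh

def solve_equivalent_problem(Q, F):
    """Solve (15): maximize u^dag Q u subject to u^dag F u = 1.

    eigh solves Q x = lambda F x with eigenvectors normalized so that
    x^dag F x = 1; the top eigenpair gives u* (dominant eigenvector of
    F^{-1} Q) and the Lagrange multiplier lambda* = u*^dag Q u*.
    """
    eigvals, eigvecs = eigh(Q, F)  # eigenvalues in ascending order
    return eigvecs[:, -1], eigvals[-1]
```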
The metric is defined in (16). It is proved here that each step of the algorithm increases this metric. Since the metric cannot increase without bound, the equivalent problem converges, and consequently the algorithm of Fig. 3 converges as well. Note that the metric is the same for both the original and reciprocal networks.
$$ {\mathit{\max}}_{\begin{array}{c}{V}^j\ and\ {U}^k\\ {}\forall j\ and\ k\in \mathcal{K}\end{array}} metric={\sum}_{k=1}^K{\sum}_{d=1}^{D^k}l\left({u}_d^k,\lambda \right). $$
(16)
Accordingly:
$$ {\mathit{\max}}_{\begin{array}{c}{U}^k\\ {}\forall k\in \mathcal{K}\end{array}} metric={\sum}_{k=1}^K{\sum}_{d=1}^{D^k}{\mathit{\max}}_{u_d^k\ }l\left({u}_d^k,\lambda \right). $$
(17)
In other words, given \( {V}^j\forall j\in \mathcal{K} \), Step 1 increases the value of (16) over all possible choices of \( {U}^k\forall k\in \mathcal{K} \). The filter \( \overleftarrow{U^j} \) computed in Step 3, based on \( \overleftarrow{V^k}={U}^k \), likewise maximizes the metric in the reciprocal channel, as given by (18).
$$ \begin{array}{c}{\max}_{\begin{array}{c}\overleftarrow{U^j}\\ {}\forall j\in \mathcal{K}\end{array}}\overleftarrow{metric},\\ {}\overleftarrow{metric}={\sum}_{j=1}^K{\sum}_{d=1}^{D^j}\overleftarrow{l}\left(\overleftarrow{u_d^j},\lambda \right)={\sum}_{j=1}^K{\sum}_{d=1}^{D^j}{\overleftarrow{u_d^j}}^{\dagger}\overleftarrow{Q}\overleftarrow{u_d^j}+\lambda \left(1-{\overleftarrow{u_d^j}}^{\dagger}\overleftarrow{F}\overleftarrow{u_d^j}\right).\end{array} $$
(18)
Since \( \overleftarrow{V^k}={U}^k \) and \( \overleftarrow{U^j}={V}^j \), the metric remains unchanged between the original and reciprocal networks, according to the following equation:
$$ \begin{array}{c}\overleftarrow{metric}={\sum}_{j=1}^K{\sum}_{d=1}^{D^j}{u_d^j}^{\dagger}\left[{T}_d^j+P{\sigma}^2I\right]{u}_d^j+\\ {}{\sum}_{j=1}^K{\sum}_{d=1}^{D^j}{\lambda}_d^j\left(1+{u_d^j}^{\dagger}\left[{T}_d^j-\left(P{\sigma}^2{\sum}_{k=1}^K{D}^k-P{\sigma}^2+{N}_0\right)I\right]{u}_d^j\right)-\\ {}P{\sum}_{j=1}^K{\sum}_{d=1}^{D^j}{\sum}_{k=1}^K{\sum}_{m=1}^{D^k}{\lambda}_d^j{u_m^k}^{\dagger }{H}^{kj}{v}_d^j{v_d^j}^{\dagger }{H^{kj}}^{\dagger }{u}_m^k= metric.\end{array} $$
(19)
Therefore, Step 3 also increases the value of (16). Since the value of (16) increases monotonically with every iteration and is bounded above, convergence of the algorithm is guaranteed.
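In simulations, this monotone behavior can be verified by evaluating (16) after every pass; below is a minimal sketch using the helpers above, with \( \lambda \) taken per stream as \( {\lambda}^{\ast }={\mu}_1 \) (cf. the Lagrangian solution of Section 3.3).

```python
def metric_value(U, V, H, P, sigma2, N0):
    """Evaluate the metric (16) for the current filters (illustrative check)."""
    total = 0.0
    for k in range(len(V)):
        for d in range(V[k].shape[1]):
            mu1, mu2 = stream_moments(U[k][:, d], k, d, H, V, P, sigma2, N0)
            total += mu1 + mu1 * (1.0 - mu2)  # l(u, lambda): u^dag Q u = mu1, u^dag F u = mu2
    return total
```

Across iterations, this value should be non-decreasing if an implementation matches the analysis above.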