Open Access

Joint source and relay optimization for interference MIMO relay networks

EURASIP Journal on Advances in Signal Processing 2017, 2017:24

https://doi.org/10.1186/s13634-017-0453-4

Received: 28 June 2016

Accepted: 8 February 2017

Published: 7 March 2017

Abstract

This paper considers multiple-input multiple-output (MIMO) relay communication in multi-cellular (interference) systems in which MIMO source-destination pairs communicate simultaneously. It is assumed that due to severe attenuation and/or shadowing effects, communication links can be established only with the aid of a relay node. The aim is to minimize the maximal mean-square-error (MSE) among all the receiving nodes under constrained source and relay transmit powers. Both one- and two-way amplify-and-forward (AF) relaying mechanisms are considered. Since an exactly optimal solution to this practically appealing problem is intractable, we first propose optimizing the source, relay, and receiver matrices in an alternating fashion. Then we contrive a simplified semidefinite programming (SDP) solution based on the error covariance matrix decomposition technique, avoiding the high complexity of the iterative process. Numerical results reveal the effectiveness of the proposed schemes.

Keywords

Interference; MIMO; Two-way relay; Optimization

1 Introduction

Due to the scarcity of frequency spectrum in practical wireless networks, multiple communicating pairs are motivated to share a common time-frequency channel to ensure efficient use of the available spectrum. Co-channel interference (CCI) is, however, one of the main deteriorating factors in such networks and adversely affects system performance. The impact is more pronounced in 5G heterogeneous networks, where hyper-dense frequency reuse among small-cell and macro-cell base stations generates an enormous volume of interference. Therefore, it is important to develop schemes to mitigate CCI, which has been a major research direction in wireless communications over the past decades.

In the literature, various schemes have been proposed to keep CCI at an acceptable level. A conventional approach in MIMO systems is to exploit spatial diversity to suppress CCI [1]. Such spatial diversity techniques have been used to solve many power control problems in interference systems for different network setups. In [2], a power control scheme was designed with receive diversity only, whereas joint transmit-receive beamforming was considered in [2, 3] for interference systems. Notably, incorporating spatial diversity at the transmitter side in [3] results in a lower total transmit power compared with that in [2].

On the other hand, there is a synergy between multiple-antenna and relaying technologies. The latter is particularly useful for reestablishing communications when the direct channel between source and destination is broken. Hence, relaying has been considered in interference networks in order to afford longer source-destination distances [4–7]. Both [4, 5] considered network beamforming for minimizing the total relay transmit power, whereas in [7], an iterative transceiver optimization scheme was proposed to minimize the total source and relay transmit power.

While the works in [2–5, 7] all considered minimizing the total transmit power of interference networks, another important performance metric, which concerns more the quality of communications, is the mean-square-error (MSE) of signal estimation. In [8–10], the sum minimum MSE (MMSE) criterion was used to design iterative algorithms for MIMO interference relay systems taking the direct links between the source and destination nodes into consideration, and in [11], a similar problem was considered ignoring the direct links between the communicating parties. Indeed, the direct source-destination links can play a vital role in wireless communication systems when the link strength is significant. However, multihop communication is motivated by the fact that such direct links may undergo deep shadowing in many practical scenarios. Hence, many existing works on multihop communications have ignored the direct links. Nonetheless, the sum-MMSE criterion runs the risk that some of the receivers may suffer from unacceptably high MSEs. Also, the works in [8–10] considered one-way relaying only.

Due to the increasing demands of multimedia applications and of emerging wireless paradigms such as Big data, ultra-high spectral efficiency is essential in future wireless networks, including 5G, to provide an ADSL-like user experience by 2020. The abovementioned one-way relay systems suffer a substantial performance loss in terms of spectral efficiency due to the pre-log factor of 1/2 imposed by the fact that two channel uses are required for each end-to-end transmission.

Two-way relay systems have hence been proposed to overcome the loss of spectral efficiency in such one-way relay methods [12–14]. Utilizing the concept of analog network coding [14], communication in a two-way relay channel can be accomplished in two phases: the multiple-access (MAC) phase and the broadcast (BC) phase. During the MAC phase, all the users simultaneously send their messages to an intermediate relay node, whereas in the BC phase, the relay retransmits the received information to the users. As each user knows its own transmitted signal, it can cancel the self-interference and decode the intended message. The capacity region of multi-pair two-way relay networks in the deterministic channel was characterized in [15]. Later, in [16], the achievable total degrees of freedom in a two-way interference MIMO relay channel were studied. Most recently, in [17], the transceivers in a full-duplex MIMO interference system were optimized based on the weighted sum-rate maximization criterion.

In this paper, we consider a K-user MIMO interference system where each of the pairs can communicate only with the aid of a relay node, thus ignoring the direct source-destination links. The direct links are understood to be in deep shadowing and hence negligible. Both one- and two-way amplify-and-forward (AF) relaying mechanisms are considered. All nodes are assumed to be equipped with multiple antennas so as to afford simultaneous transmission of multiple data streams. Our aim is to develop joint transceiver optimization algorithms for minimizing the worst-user MSE (min-max MSE)1 subject to the source and relay power constraints. It can be verified that the problem is strictly non-convex, and thus it is difficult to find an analytical solution. To tackle this, we first devise an algorithm to optimize the source, relay, and receiver matrices alternatingly by decomposing the original non-convex problem into convex subproblems. To avoid the complexity of the iterative process, we then extend the error covariance matrix decomposition technique applied to point-to-point MIMO relay systems in [18] to interference MIMO relay systems in this paper. More specifically, under the practically reasonable high first-hop signal-to-noise ratio (SNR) assumption, we demonstrate that the problem can be decomposed into two standard semidefinite programming (SDP) problems to optimize the source and relay matrices separately. Note that the high-SNR assumption has also been made in [19] to simplify the joint codebook design problem in single-user MIMO relay systems and in [20, 21] for multicasting MIMO relay design. Hence, our work generalizes these designs to the multi-pair communication scenario, taking co-channel interference into account.

The remainder of this paper is organized as follows. In Section 2, the interference MIMO relay system model is introduced. The joint optimal transmitter, relay, and receiver beamforming optimization schemes are developed in Sections 3 and 4, respectively, for one-way and two-way relaying. Section 5 provides simulation results to analyze the performance of the proposed algorithms in various system configurations before concluding remarks are made in Section 6.

2 System model

Let us consider a communication scenario, as illustrated in Fig. 1, where each of the K source nodes communicates with the corresponding destination node sharing the same frequency channel via a common relay node. The direct link between each transmitter-receiver pair is assumed to be broken due to strong attenuation and/or shadowing effects. The kth source, the relay, and the kth destination nodes are assumed to be equipped with N s,k , N r, and N d,k antennas, respectively.
Fig. 1

The model of the dual-hop interference MIMO relay system

3 One-way relaying

In this section, we consider that communication takes place in one direction only. The relay node is assumed to work in half-duplex mode which implies that the actual communication between the source and destination nodes is accomplished in two time slots. In the first time slot, the source nodes transmit the linearly precoded signal vectors B k s k ,k=1,,K, to the relay node. The received signal vector at the relay node is therefore given by
$$ \mathbf{y}_{\mathrm{r}} = \sum_{k=1}^{K}\mathbf{H}_{k}\mathbf{B}_{k}\mathbf{s}_{k} + \mathbf{n}_{\mathrm{r}}, $$
(1)

where H k denotes the N r×N s,k Gaussian channel matrix between the kth source node and the intermediate relay node, s k is the N b,k ×1 (1≤N b,k ≤N s,k ) transmit symbol vector with covariance \(\mathbf {I}_{N_{\mathrm {b},k}}\), B k is the N s,k ×N b,k source precoding matrix, and n r is the N r×1 additive white Gaussian noise (AWGN) vector introduced at the relay node. Let us denote \(N_{\mathrm {b}} = \sum _{k=1}^{K}N_{\mathrm {b},k}\) as the total number of data streams transmitted by all the source nodes. In order to successfully transmit N b independent data streams simultaneously through the relay, the relay node must be equipped with N r≥N b antennas.
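The superposition in (1) is easy to sanity-check numerically. Below is a minimal numpy sketch with toy dimensions; all variable names and sizes are our own illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
K, Nr = 3, 6                 # users and relay antennas (toy sizes)
Ns = Nb = 2                  # source antennas / data streams per user
sigma_r = 0.1                # relay noise standard deviation

def crandn(*shape):
    # circularly symmetric complex Gaussian entries, unit variance
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = [crandn(Nr, Ns) for _ in range(K)]                 # first-hop channels H_k
B = [np.eye(Ns, Nb) / np.sqrt(Nb) for _ in range(K)]   # precoders with tr(B_k B_k^H) = 1
s = [crandn(Nb) for _ in range(K)]                     # unit-covariance symbol vectors
n_r = sigma_r * crandn(Nr)

# Eq. (1): all users' precoded signals superimpose at the relay
y_r = sum(Hk @ Bk @ sk for Hk, Bk, sk in zip(H, B, s)) + n_r
print(y_r.shape)             # (6,)
```

The identity-scaled precoders are only a placeholder; the rest of the section is about choosing B k and F optimally.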

After receiving y r, the relay node simply multiplies the signal vector by an N r×N r precoding matrix F and transmits the amplified version of y r in the second time slot. Thus the relay’s N r×1 transmit signal vector x r is given by
$$ \mathbf{x}_{\mathrm{r}} = \mathbf{F}\mathbf{y}_{\mathrm{r}}. $$
(2)
Accordingly, the signal received at the kth destination node can be expressed as
$$\begin{array}{*{20}l}{} \mathbf{y}_{\mathrm{d},k}&=\mathbf{G}_{k} \mathbf{x}_{\mathrm{r}} + \mathbf{n}_{\mathrm{d},k}\\ &= \underbrace{\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k}\mathbf{s}_{k}}_{\text{desired signal}} + \underbrace{\mathbf{G}_{k}\mathbf{F} \sum_{j=1\atop j\neq k}^{K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{s}_{j}}_{\text{interference signal}} + \underbrace{\mathbf{G}_{k}\mathbf{F}\mathbf{n}_{\mathrm{r}} + \mathbf{n}_{\mathrm{d},k}}_{\text{noise}}, \end{array} $$
(3)
$$\begin{array}{*{20}l} &=\bar{\mathbf{H}}_{k}\mathbf{s}_{k} + \bar{\mathbf{n}}_{\mathrm{d},k}, ~\text{for }k=1,\dots,K, \end{array} $$
(4)

where G k denotes the N d,k ×N r complex channel matrix between the relay node and the kth destination node, n d,k is the N d,k ×1 AWGN vector introduced at the kth destination node, \(\bar {\mathbf {H}}_{k}\triangleq \mathbf {G}_{k}\mathbf {F}\mathbf {H}_{k}\mathbf {B}_{k}\) is the equivalent source-destination channel matrix, and \(\bar {\mathbf {n}}_{\mathrm {d},k}\triangleq \mathbf {G}_{k}\mathbf {F} (\sum _{j=1\atop j\neq k}^{K}\mathbf {H}_{j}\mathbf {B}_{j}\mathbf {s}_{j} + \mathbf {n}_{\mathrm {r}}) + \mathbf {n}_{\mathrm {d},k}\) is the equivalent noise vector. All noises are assumed to be independent and identically distributed (i.i.d.) complex Gaussian random variables with mean zero and variance \(\sigma _{\mathrm {n}}^{2}\), where n ∈ {r, d} indicates whether the noise is introduced at the relay or at the destination.

Remark

Note that the interference term in (3) does not appear in the received signal of the single-user MIMO relay system considered in [19] or in the multicasting MIMO relay systems considered in [20, 21]. Hence, the subsequent analyses remain considerably simpler in [19–21], whereas we need to deal with this troublesome interference term in this paper.

Considering the input-output relationship at the relay node given in (2), the average transmit power consumed by the MIMO relay node is defined as
$$ \text{tr}\big(\mathrm{E}\{\mathbf{x}_{\mathrm{r}} \mathbf{x}_{\mathrm{r}}^{H}\}\big)= \text{tr}\big(\mathbf{F} {\boldsymbol \Psi}\mathbf{F}^{H}\big), $$
(5)

where tr(·) denotes trace of a matrix, E{·} indicates statistical expectation, and \({\boldsymbol \Psi }\triangleq \mathrm {E}\{\mathbf {y}_{\mathrm {r}}\mathbf {y}_{\mathrm {r}}^{H}\} =\sum _{k=1}^{K}\mathbf {H}_{k}\mathbf {B}_{k}\mathbf {B}_{k}^{H}\mathbf {H}_{k}^{H} +\sigma _{\mathrm {r}}^{2}\mathbf {I}_{N_{\mathrm {r}}}\) represents the covariance matrix of the signal vector received at the relay node.
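As a concrete check of (5), one can form Ψ and scale a naive relay matrix so that the power budget is met with equality. A sketch under toy dimensions (names and the identity-matrix relay choice are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
K, Nr, Ns, Nb = 3, 6, 2, 2
sigma_r, P_r = 0.1, 10.0

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = [crandn(Nr, Ns) for _ in range(K)]
B = [np.eye(Ns, Nb) / np.sqrt(Nb) for _ in range(K)]

# Covariance of the relay's received signal, as defined below eq. (5)
Psi = sum(Hk @ Bk @ Bk.conj().T @ Hk.conj().T for Hk, Bk in zip(H, B)) \
      + sigma_r**2 * np.eye(Nr)

# Naive amplify-and-forward choice: scale the identity so that
# tr(F Psi F^H) meets the relay power budget P_r with equality
F = np.sqrt(P_r / np.trace(Psi).real) * np.eye(Nr)
relay_power = np.trace(F @ Psi @ F.conj().T).real
print(round(relay_power, 6))   # 10.0
```

Such a scaled-identity relay is a common baseline; the optimized F of the later sections reshapes the signal rather than merely scaling it.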

For signal detection, linear receivers are used at the destination nodes for simplicity reasons. Denoting W k as the N d,k ×N b,k receiver matrix used by the kth destination node, the corresponding estimated signal vector \({\hat {\mathbf {s}}}_{k}\) can be written as
$$ {\hat{\mathbf{s}}}_{k} = \mathbf{W}_{k}^{H}\mathbf{y}_{\mathrm{d},k}, ~~\text{for }k = 1, \dots, K, $$
(6)
where (·) H indicates the conjugate transpose (Hermitian) of a matrix (vector). Thus the MSE of signal estimation at the kth receiver can be expressed as
$$\begin{array}{@{}rcl@{}}{} E_{k} &=&\text{tr}\left(\mathbf{E}_{k} \triangleq \mathrm{E}\left[\left(\hat{\mathbf{s}}_{k}-\mathbf{s}_{k}\right) \left(\hat{\mathbf{s}}_{k} - \mathbf{s}_{k}\right)^{H}\right]\right),\\ &=& \text{tr} \left(\begin{array}{r} \mathbf{I}_{N_{\mathrm{b}},k} - \mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k} - \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k}\\ +\sum_{j=1}^{K}\mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k}\\ +\sigma_{\mathrm{r}}^{2}\mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k} + \sigma_{\mathrm{d}}^{2}\mathbf{W}_{k}^{H}\mathbf{W}_{k} \end{array}\right),\\ &=&\text{tr} \left(\!\left(\mathbf{W}_{k}^{H} {\bar{\mathbf{H}}}_{k}- \mathbf{I}_{N_{\mathrm{b},k}}\right)\!\left(\mathbf{W}_{k}^{H} {\bar{\mathbf{H}}}_{k}- \mathbf{I}_{N_{\mathrm{b},k}}\right)^{H} + \mathbf{W}_{k}^{H} {\bar{\mathbf{C}}}_{k}\mathbf{W}_{k}\!\right),\\ && \qquad \qquad \qquad \qquad \qquad \qquad \,\,\,\,\, \text{for } \,\,k = 1, \dots, K, \end{array} $$
(7)
where E k denotes the error covariance matrix at the kth receiver, and
$$ \bar{\mathbf{C}}_{k}\triangleq \sum_{j=1\atop j\ne k}^{K}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H} + \sigma_{\mathrm{r}}^{2}\mathbf{G}_{k}\mathbf{F}\mathbf{F}^{H}\mathbf{G}_{k}^{H} + \sigma_{\mathrm{d}}^{2}\mathbf{I}_{N_{\mathrm{d},k}} $$
(8)

is the combined interference and noise covariance matrix.

In the following subsections, we develop optimization approaches that minimize the worst-user MSE among all the receivers subject to source and relay power constraints.

3.1 Problem formulation

In this section, we formulate the joint source and relay precoding optimization problem for MIMO interference systems. Our aim is to minimize the maximal MSE among all the source-destination pairs yet satisfying the transmit power constraints at the source as well as the relay nodes. To fulfill this aim, the following joint optimization problem is formulated:
$$\begin{array}{*{20}l} \min_{\left\{\mathbf{B}_{k}\right\}, \mathbf{F}, \left\{\mathbf{W}_{k}\right\}} & \max_{k}\;\; E_{k} \end{array} $$
(9a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}\left(\mathbf{F} {\boldsymbol \Psi} \mathbf{F}^{H}\right)\leq P_{\mathrm{r}} \end{array} $$
(9b)
$$\begin{array}{*{20}l} & \text{tr}\left(\mathbf{B}_{k}\mathbf{B}_{k}^{H}\right)\leq P_{\mathrm{s},k},~~\text{for } k = 1, \dots, K \end{array} $$
(9c)

where (9b) and (9c) constrain, respectively, the transmit power at the relay node and at the kth transmitter to P r>0 and P s,k >0. Our next endeavor is to develop optimal solutions for this problem. Note that the problem is strictly non-convex with matrix variables appearing in quadratic form, and hence a closed-form solution is intractable. Therefore, we first develop an iterative algorithm for the problem and then propose a sub-optimal solution with lower computational complexity.

3.2 Iterative joint transceiver optimization

In this subsection, we investigate the non-convex source, relay, and destination filter design problem in an alternating fashion. We optimize one group of variables while fixing the others. Given the source and relay matrices {B k }, F, the optimal receiver matrices {W k } are obtained through solving the unconstrained optimization problem of \(\min _{\mathbf {W}_{k}} E_{k}\), since E k does not depend on W j , for jk, and W k does not appear in constraints (9b) and (9c). Using the matrix derivative formulas, the gradient \(\nabla _{\mathbf {W}_{k}^{H}}\left (\text {tr}\left (\mathbf {E}_{k}\right)\right)\) can be written as
$$\begin{array}{@{}rcl@{}} \nabla_{\mathbf{W}_{k}^{H}}&\left(\text{tr}\left(\mathbf{E}_{k}\right)\right)= - \mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k} + \sum_{j=1}^{K}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k}\\ &~~~~~~+ \sigma_{\mathrm{r}}^{2}\mathbf{G}_{k}\mathbf{F}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k} + \sigma_{\mathrm{d}}^{2}\mathbf{W}_{k}, ~~\text{for }k = 1, \dots, K.~~~ \end{array} $$
(10)
Equating \(\nabla _{\mathbf {W}_{k}^{H}}\left (\text {tr}\left (\mathbf {E}_{k}\right)\right) = \mathbf {0}\) yields the linear MMSE receive filter given by
$$\begin{array}{@{}rcl@{}} {}\mathbf{W}_{k} &\,=\, \left(\sum\limits_{j=1}^{K}\!\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H} \,+\, \sigma_{\mathrm{r}}^{2}\mathbf{G}_{k}\mathbf{F}\mathbf{F}^{H}\mathbf{G}_{k}^{H} \,+\, \sigma_{\mathrm{d}}^{2}\mathbf{I}_{N_{\mathrm{d},k}}\!\!\right)^{-1}~\\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\times\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k} \end{array} $$
(11)

where (·)−1 indicates the inversion operation of a matrix.
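Since (7) is a convex quadratic in W k, the Wiener filter (11) is its global minimizer; this is easy to verify numerically. A minimal sketch with toy dimensions and F fixed to the identity (all names are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
K, Nr, Ns, Nb, Nd = 3, 6, 2, 2, 4
sigma_r = sigma_d = 0.1

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = [crandn(Nr, Ns) for _ in range(K)]
G = [crandn(Nd, Nr) for _ in range(K)]
B = [np.eye(Ns, Nb) / np.sqrt(Nb) for _ in range(K)]
F = np.eye(Nr)                      # placeholder relay matrix

def mse(k, W):
    # MSE of eq. (7), written out term by term
    GF = G[k] @ F
    E = np.eye(Nb, dtype=complex)
    E -= W.conj().T @ GF @ H[k] @ B[k]
    E -= (GF @ H[k] @ B[k]).conj().T @ W
    for j in range(K):
        A = GF @ H[j] @ B[j]
        E += W.conj().T @ A @ A.conj().T @ W
    E += sigma_r**2 * W.conj().T @ GF @ GF.conj().T @ W
    E += sigma_d**2 * W.conj().T @ W
    return np.trace(E).real

def mmse_receiver(k):
    # Eq. (11): invert the signal-plus-interference-plus-noise covariance
    GF = G[k] @ F
    C = sigma_r**2 * GF @ GF.conj().T + sigma_d**2 * np.eye(Nd)
    for j in range(K):
        A = GF @ H[j] @ B[j]
        C += A @ A.conj().T
    return np.linalg.solve(C, GF @ H[k] @ B[k])

W0 = mmse_receiver(0)
# Any perturbation of the MMSE filter can only increase the MSE
assert all(mse(0, W0) <= mse(0, W0 + 0.1 * crandn(Nd, Nb)) for _ in range(5))
```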

Then for given source and receiver matrices {B k } and {W k }, the relay precoding matrix F optimization problem can be formulated as
$$\begin{array}{*{20}l} \min_{\mathbf{F}}& \max_{k}\;\; E_{k} \end{array} $$
(12a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}\left(\mathbf{F} {\boldsymbol \Psi} \mathbf{F}^{H}\right)\leq P_{\mathrm{r}}. \end{array} $$
(12b)
Note that (12) is non-convex with a matrix variable since F appears in quadratic form in the objective function as well as in the constraint. However, we can reformulate this problem as an SDP using the Schur complement [22] as follows. By introducing a matrix Ξ k , we conclude from the second expression in (7) that the kth link MSE is upper-bounded if
$$ \begin{aligned} &- \mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k} - \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k}\\ &+ \mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}{\boldsymbol \Psi}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k} \preceq {\boldsymbol{\Xi}}_{k}. \end{aligned} $$
(13)
In the above inequality, A ⪯ B indicates that the matrix B − A is positive semidefinite (PSD). Now, by introducing a matrix Φ such that F Ψ F H ⪯ Φ and a scalar variable τ r, the relay optimization problem (12) can be transformed to
$$\begin{array}{*{20}l} \min_{\tau_{\mathrm{r}}, \mathbf{F}, \{{\boldsymbol \Xi}_{k}\}, {\boldsymbol \Phi}} & \tau_{\mathrm{r}} \end{array} $$
(14a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}\left({\boldsymbol \Xi}_{k}\right) + \sigma_{\mathrm{d}}^{2}\text{tr}\left(\mathbf{W}_{k}^{H}\mathbf{W}_{k}\right) + N_{\mathrm{b},k}\le \tau_{\mathrm{r}}, ~~\text{for }k = 1, \dots, K, \end{array} $$
(14b)
$$\begin{array}{*{20}l} & \left[\begin{array}{cc} {\boldsymbol \Xi}_{k} + \mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k} + \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k} & \mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\\ \mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k} & {\boldsymbol \Psi}^{-1} \end{array}\right]\succeq \mathbf{0}, \text{for }k = 1, \dots, K, \end{array} $$
(14c)
$$\begin{array}{*{20}l} & \left[\begin{array}{cc} {\boldsymbol \Phi} & \mathbf{F}\\ \mathbf{F}^{H} & {\boldsymbol \Psi}^{-1} \end{array}\right]\succeq \mathbf{0} \end{array} $$
(14d)
$$\begin{array}{*{20}l} & \text{tr}\left({\boldsymbol \Phi}\right) \le P_{\mathrm{r}} \end{array} $$
(14e)

where we have used the Schur complement to obtain (14c) and (14d). Note that the problem (14) is an SDP problem which is convex and can, as a result, be efficiently solved using interior-point based solvers [23] at a maximal complexity order of \({\mathcal O}\big ((K + 2N_{\mathrm {r}}^{2} + \sum _{k=1}^{K}N_{\mathrm {b},k}^{2} + 2)^{3.5}\big)\) [24]. However, the actual complexity is usually much less in many practical cases. Interested readers are referred to [24] for a detailed analysis of the computational complexity based on interior-point methods.
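The Schur-complement equivalence behind (14c) and (14d) states that, for C ≻ 0, the block matrix [[A, B]; [B^H, C]] is PSD if and only if A − B C^{−1} B^H is PSD. A small numerical sketch of this fact (random data, hypothetical names):

```python
import numpy as np

rng = np.random.default_rng(6)

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def is_psd(M, tol=1e-9):
    # Hermitian-symmetrize against round-off before testing eigenvalues
    return np.min(np.linalg.eigvalsh((M + M.conj().T) / 2)) >= -tol

B = crandn(3, 4)
C = crandn(4, 4); C = C @ C.conj().T + np.eye(4)   # Hermitian positive definite
S = B @ np.linalg.solve(C, B.conj().T)             # B C^{-1} B^H

for shift in (0.5, 2.0):
    A = S + (shift - 1.0) * np.eye(3)              # Schur complement is (shift-1) I
    block = np.block([[A, B], [B.conj().T, C]])
    # Block PSD  <=>  Schur complement A - B C^{-1} B^H PSD
    assert is_psd(block) == is_psd(A - S)
```

This is exactly the device that turns the quadratic appearances of F in (13) and in F Ψ F^H ⪯ Φ into linear matrix inequalities.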

Finally, we optimize the source matrices {B k } using the relay matrix F and the receiver matrices {W k } known from the previous steps. Let us define \(\tilde {\mathbf {H}}_{k,j} \triangleq \mathbf {W}_{k}^{H}\mathbf {G}_{k}\mathbf {F}\mathbf {H}_{j}\). Applying the matrix identity vec(A B C)=(C T A)vec(B), we can rewrite E k in (7) as
$$\begin{array}{@{}rcl@{}} E_{k} &=& \sum_{j=1}^{K} \mathbf{b}_{j}^{H} \left(\mathbf{I}_{N_{\mathrm{b},j}}\otimes \left(\tilde{\mathbf{H}}_{k,j}^{H}\tilde{\mathbf{H}}_{k,j}\right)\right) \mathbf{b}_{j} - \left(\text{vec}\left(\tilde{\mathbf{H}}_{k,k}\right)\right)^{H}\mathbf{b}_{k}\\ && - \mathbf{b}_{k}^{H}\text{vec}\left(\tilde{\mathbf{H}}_{k,k}\right) + \theta_{k}, \end{array} $$
(15)
where the vector \(\mathbf {b}_{k} \triangleq \text {vec}(\mathbf {B}_{k})\) is created by stacking all the columns of the matrix B k on top of each other, \(\theta _{k} \triangleq \text {tr}(\sigma _{\mathrm {r}}^{2}\mathbf {W}_{k}^{H}\mathbf {G}_{k}\mathbf {F}\mathbf {F}^{H}\mathbf {G}_{k}^{H}\mathbf {W}_{k} + \sigma _{\mathrm {d}}^{2}\mathbf {W}_{k}^{H}\mathbf {W}_{k})+ N_{\mathrm {b},k}\), and ⊗ indicates the matrix Kronecker product. Let us now denote
$$\begin{array}{@{}rcl@{}} \left\{\begin{array}{ll} \tilde{\mathbf{G}}_{k} &\triangleq \text{bd}\left(\mathbf{I}_{N_{\mathrm{b},1}} \otimes \left(\tilde{\mathbf{H}}_{k,1}^{H}\tilde{\mathbf{H}}_{k,1}\right), \dots,\mathbf{I}_{N_{\mathrm{b},K}} \otimes \left(\tilde{\mathbf{H}}_{k,K}^{H}\tilde{\mathbf{H}}_{k,K}\right)\right)\!,\\ \mathbf{c}_{k} &\triangleq \left[\left(\text{vec}\left(\tilde{\mathbf{C}}_{k,1}\right)\right)^{T}, \dots, \left(\text{vec}\left(\tilde{\mathbf{C}}_{k,K}\right)\right)^{T}\right]^{T},\\ \mathbf{b} &\triangleq \left[\mathbf{b}_{1}^{T}, \dots, \mathbf{b}_{K}^{T}\right]^{T}, \end{array}\right. \end{array} $$
(16)
where bd(·) constructs a block-diagonal matrix taking the parameter matrices as the diagonal blocks, \(\tilde {\mathbf {C}}_{k,k} = \tilde {\mathbf {H}}_{k,k}\) and \(\tilde {\mathbf {C}}_{k,j} = \mathbf {0}_{N_{\mathrm {b},k}\times N_{\mathrm {s},j}}\), if jk. The MSE in (15) can be rewritten as
$$ E_{k} =\mathbf{b}^{H} \tilde{\mathbf{G}}_{k}\mathbf{b}- \mathbf{c}_{k}^{H} \mathbf{b}- \mathbf{b}^{H} \mathbf{c}_{k}+\theta_{k}. $$
(17)
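The vectorization identity vec(ABC) = (C^T ⊗ A)vec(B), and the resulting rewrite of trace terms as quadratic forms in b, can both be confirmed numerically. A short sketch (hypothetical names; vec is column-stacking, i.e., Fortran-order flattening):

```python
import numpy as np

rng = np.random.default_rng(3)

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def vec(X):
    # column-stacking vectorization, as used for b_k = vec(B_k)
    return X.flatten(order="F")

# Identity vec(ABC) = (C^T kron A) vec(B) used to obtain (15)
A, Bm, C = crandn(3, 4), crandn(4, 5), crandn(5, 2)
assert np.allclose(vec(A @ Bm @ C), np.kron(C.T, A) @ vec(Bm))

# Quadratic-form rewrite of a relay-power style term, cf. (18):
# tr(M_k B_k B_k^H M_k^H) = b_k^H (I kron M_k^H M_k) b_k
Mk, Bk = crandn(6, 4), crandn(4, 2)
lhs = np.trace(Mk @ Bk @ Bk.conj().T @ Mk.conj().T)
rhs = vec(Bk).conj() @ np.kron(np.eye(2), Mk.conj().T @ Mk) @ vec(Bk)
assert np.isclose(lhs, rhs)
```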
By introducing \(\mathbf {M}_{k} \triangleq \mathbf {F}\mathbf {H}_{k}\), the relay power constraint (9b) can be rewritten as
$$ \mathbf{b}^{H} \mathbf{M} \mathbf{b}\leq \bar{P}_{\mathrm{r}}, $$
(18)
where \(\mathbf {M} \triangleq \text {bd}\big (\mathbf {I}_{N_{\mathrm {b},1}} \otimes (\mathbf {M}_{1}^{H}\mathbf {M}_{1}), \dots,\mathbf {I}_{N_{\mathrm {b},K}} \otimes (\mathbf {M}_{K}^{H}\mathbf {M}_{K})\big)\), and \(\bar {P}_{\mathrm {r}}= P_{\mathrm {r}}-\sigma _{\mathrm {r}}^{2}\text {tr}(\mathbf {F}\mathbf {F}^{H})\). Using (17) and (18), problem (9) can be written as
$$\begin{array}{*{20}l} \min_{\mathbf{b}}& \max_{k} \; \mathbf{b}^{H} \tilde{\mathbf{G}}_{k}\mathbf{b}- \mathbf{c}_{k}^{H} \mathbf{b}- \mathbf{b}^{H} \mathbf{c}_{k}+\theta_{k} \end{array} $$
(19a)
$$\begin{array}{*{20}l} \text{s.t.} & \mathbf{b}^{H} \mathbf{M} \mathbf{b}\leq \bar{P}_{\mathrm{r}} \end{array} $$
(19b)
$$\begin{array}{*{20}l} & \mathbf{b}^{H} \mathcal{I}_{k} \mathbf{b} \leq P_{\mathrm{s},k}, ~~\text{for }k=1,\dots,K, \end{array} $$
(19c)
where \(\mathcal {I}_{k} \triangleq \text {bd}(\mathcal {I}_{k1}, \dots, \mathcal {I}_{kk}, \dots, \mathcal {I}_{kK})\) with \(\mathcal {I}_{kk}=\mathbf {I}_{N_{\mathrm {s},k}N_{\mathrm {b},k}}\) and \(\mathcal {I}_{kj}=\mathbf {0}\), if jk. Problem (19) is a standard quadratically-constrained quadratic program (QCQP) which can be solved using off-the-shelf convex optimization toolboxes [23]. In the following, we also provide an SDP formulation of problem (19):
$$\begin{array}{*{20}l} \min_{\tau_{\mathrm{s}},\mathbf{b}} & ~~\tau_{\mathrm{s}} \end{array} $$
(20a)
$$\begin{array}{*{20}l} \text{s.t.} & \left(\begin{array}{cc} \tau_{\mathrm{s}}-\theta_{k}+ \mathbf{c}_{k}^{H} \mathbf{b}+ \mathbf{b}^{H} \mathbf{c}_{k} & \mathbf{b}^{H}\\ \mathbf{b} & \tilde{\mathbf{G}}_{k}^{-1} \\ \end{array} \right) \succcurlyeq \mathbf{0},\\&\quad\text{for }k=1,\dots,K, \end{array} $$
(20b)
$$\begin{array}{*{20}l} & \left(\begin{array}{cc} \bar{P}_{\mathrm{r}} & \mathbf{b}^{H} \\ \mathbf{b} & \mathbf{M}^{-1} \\ \end{array} \right) \succcurlyeq \mathbf{0}, \end{array} $$
(20c)
$$\begin{array}{*{20}l} & \left(\begin{array}{cc} P_{\mathrm{s},k} & \mathbf{b}^{H}\mathcal{I}_{k}^{\frac{1}{2}} \\ \mathcal{I}_{k}^{\frac{1}{2}}\mathbf{b} & \mathbf{I}_{p}\\ \end{array} \right) \succcurlyeq \mathbf{0}, ~~\text{for }k=1,\dots,K, \end{array} $$
(20d)
where τ s is a slack variable and \(p \triangleq \sum _{k=1}^{K} N_{\mathrm {s},k} N_{\mathrm {b},k}\). The problem (20) can be solved at a maximal complexity order of \({\mathcal O}\big ((\sum _{k=1}^{K}N_{\mathrm {b},k}^{2} + 1)^{3.5}\big)\) [24]. The proposed iterative optimization technique for solving the original problem (9) is summarized in Table 1.
Table 1

Iterative solution of problem (9)

1. Randomly initialize F and {B k } such that the constraints (9b) and (9c) are satisfied.

2. Repeat

 (a) Obtain {W k } as defined in (11) using known {B k } and F.

 (b) Solve the subproblem (14) to update F using fixed {W k } and {B k }.

 (c) Update {B k } through solving the subproblem (20) using F and {W k } known from the previous steps.

3. Until convergence.

Since in each step of the iterative algorithm we solve a convex subproblem to update one set of variables, each conditional update either decreases or maintains the objective function (9a). From this observation, monotonic convergence of the iterative algorithm follows. However, the overall computational complexity grows with the number of iterations required for convergence and is thus often rather high. Note that the sum-MSE-based iterative algorithms proposed in [8–10] have similar complexity orders. Hence, in the following subsection, we contrive an algorithm for the joint optimization problem whose computational overhead is substantially reduced.

3.3 Simplified joint optimization algorithm

In the previous subsection, we optimized the source, relay, and receiver matrices in an alternating fashion. Here, we propose a simplified approach to solve problem (9) using the error covariance matrix decomposition technique. The following theorem paves the foundation of the simplified algorithm.

Theorem 1

For given {B k } and {W k }, the optimum relaying matrix F for minimizing the worst-user MSE has the form:
$$ \mathbf{F} = \sum_{k = 1}^{K}\mathbf{T}_{k}\mathbf{D}_{k}^{H} = \mathbf{T}\mathbf{D}^{H}, $$
(21)
where \(\mathbf {T}\triangleq \left [\mathbf {T}_{1}, \dots, \mathbf {T}_{K}\right ]\) and \(\mathbf {D}\triangleq \left [\mathbf {D}_{1}, \dots, \mathbf {D}_{K}\right ]\) with T k and D k , respectively, defined as
$${} \mathbf{T}_{k} \triangleq \lambda_{\mathrm{e},k}\left(\sum_{i=1}^{K}\lambda_{\mathrm{e},i}\mathbf{G}_{i}^{H}\mathbf{W}_{i}\mathbf{W}_{i}^{H}\mathbf{G}_{i} + \lambda_{\mathrm{r}}\mathbf{I}_{N_{\mathrm{r}}}\right)^{-1}\mathbf{G}_{k}^{H}\mathbf{W}_{k} $$
(22)
and
$$ \mathbf{D}_{k} \triangleq \left(\sum_{j=1}^{K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H} + \sigma_{\mathrm{r}}^{2}\mathbf{I}_{N_{\mathrm{r}}}\right)^{-1}\mathbf{H}_{k}\mathbf{B}_{k}, $$
(23)

λ r and λ e,k ,k, are the corresponding Lagrange multipliers as defined in Appendix 1.

Proof

See Appendix 1. □

Note that \(\mathbf {D}_{k} = \left (\mathbf {H}_{k}\mathbf {B}_{k}\mathbf {B}_{k}^{H}\mathbf {H}_{k}^{H} + \sum _{j=1\atop j\neq k}^{K}\mathbf {H}_{j}\mathbf {B}_{j}\mathbf {B}_{j}^{H}\mathbf {H}_{j}^{H} + \sigma _{\mathrm {r}}^{2}\mathbf {I}_{N_{\mathrm {r}}}\right)^{-1}\mathbf {H}_{k}\mathbf {B}_{k}\) can be regarded as the MMSE receive filter of the first-hop MIMO channel for the kth transmitter’s signal received at the relay node given by (1).
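This Wiener-filter interpretation of D k can be verified numerically: (23) is the global minimizer of the first-hop MSE when the other users' signals are treated as noise. A minimal sketch with toy dimensions (all names are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
K, Nr, Ns, Nb = 3, 6, 2, 2
sigma_r = 0.1

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = [crandn(Nr, Ns) for _ in range(K)]
B = [np.eye(Ns, Nb) / np.sqrt(Nb) for _ in range(K)]
Psi = sum(Hk @ Bk @ Bk.conj().T @ Hk.conj().T for Hk, Bk in zip(H, B)) \
      + sigma_r**2 * np.eye(Nr)

def first_hop_mse(k, D):
    # MSE of estimating s_k from y_r with D^H; other users act as noise,
    # so the total received covariance Psi appears in the quadratic term
    E = np.eye(Nb, dtype=complex)
    E -= D.conj().T @ H[k] @ B[k]
    E -= (H[k] @ B[k]).conj().T @ D
    E += D.conj().T @ Psi @ D
    return np.trace(E).real

# Eq. (23): D_k = Psi^{-1} H_k B_k is the first-hop Wiener filter
D0 = np.linalg.solve(Psi, H[0] @ B[0])
assert all(first_hop_mse(0, D0) <= first_hop_mse(0, D0 + 0.1 * crandn(Nr, Nb))
           for _ in range(5))
```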

The implication of the structure of the relay amplifying matrix in the proposed simplified design can be observed while applying the following theorem.

Theorem 2

The MSE term appearing in (9a) can be equivalently decomposed into
$$\begin{array}{@{}rcl@{}}{} E_{k} &=& \text{tr}\left(\mathbf{I}_{N_{\mathrm{b}},k} + \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}_{\bar k}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1}\\ && + \text{tr} \left(\left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1} + \tilde{\mathbf{T}}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\tilde{\mathbf{T}}\right)^{-1}, \end{array} $$
(24)

where \({\boldsymbol \Psi }_{\bar k} \triangleq {\boldsymbol \Psi } - \mathbf {H}_{k}\mathbf {B}_{k}\mathbf {B}_{k}^{H}\mathbf {H}_{k}^{H} = \sum _{j=1\atop j\neq k}^{K}\mathbf {H}_{j}\mathbf {B}_{j}\mathbf {B}_{j}^{H}\mathbf {H}_{j}^{H} + \sigma _{\mathrm {r}}^{2}\mathbf {I}_{N_{\mathrm {r}}}\) and \(\tilde {\mathbf {T}}\) is defined in Appendix 2.

Proof

See Appendix 2. □

Even with this structure, an analytical optimal solution to the joint optimization problem is still difficult to obtain due to the cross-link interference from the relay node to the destination nodes. Therefore, we resort to developing an efficient suboptimal solution. The following proposition provides the foundation of the proposed simplified suboptimal solution.

Proposition 1

In the practically reasonable high-SNR regime, the term \(\mathbf {B}_{k}^{H}\mathbf {H}_{k}^{H} {\boldsymbol \Psi }^{-1}\mathbf {H}_{k}\mathbf {B}_{k}\) in (24) can be approximated as \(\mathbf {B}_{k}^{H}\mathbf {H}_{k}^{H}{\boldsymbol \Psi }^{-1}\mathbf {H}_{k}\mathbf {B}_{k} \approx \mathbf {I}_{N_{\mathrm {b},k}}\).

Proof

See Appendix 3. □

The result in Proposition 1 is guided by the observation that the eigenvalues of \(\mathbf {B}_{k}^{H}\mathbf {H}_{k}^{H}{\boldsymbol \Psi }^{-1}\mathbf {H}_{k}\mathbf {B}_{k}\) approach unity with increasing first-hop SNR. It will be demonstrated in Section 5 through numerical simulations that such an approximation results in negligible performance loss while reducing the computational complexity significantly. Applying Proposition 1, the transmit power of the relay node defined in (5) can be expressed as \(\text {tr}\left (\mathbf {F} {\boldsymbol \Psi } \mathbf {F}^{H}\right) = \text {tr}(\tilde {\mathbf {T}}\mathbf {B}_{k}^{H}\mathbf {H}_{k}^{H}{\boldsymbol \Psi }^{-1}\mathbf {H}_{k}\mathbf {B}_{k}\tilde {\mathbf {T}}^{H}) = \text {tr}(\tilde {\mathbf {T}}\tilde {\mathbf {T}}^{H})\). Therefore, problem (9) can be approximated as
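The eigenvalue behavior underlying Proposition 1 can be illustrated numerically: as the relay noise shrinks (first-hop SNR grows), B_k^H H_k^H Ψ^{−1} H_k B_k approaches the identity. A sketch with toy dimensions (names and sizes are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
K, Nr, Ns, Nb = 3, 8, 2, 2    # Nr at least the total number of streams K*Nb

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = [crandn(Nr, Ns) for _ in range(K)]
B = [np.eye(Ns, Nb) / np.sqrt(Nb) for _ in range(K)]

def deviation(sigma_r):
    # || B_k^H H_k^H Psi^{-1} H_k B_k - I ||_F for user k = 0
    Psi = sum(Hk @ Bk @ Bk.conj().T @ Hk.conj().T for Hk, Bk in zip(H, B)) \
          + sigma_r**2 * np.eye(Nr)
    M = (H[0] @ B[0]).conj().T @ np.linalg.solve(Psi, H[0] @ B[0])
    return np.linalg.norm(M - np.eye(Nb))

# Raising the first-hop SNR (shrinking sigma_r) drives the matrix to identity
assert deviation(1e-3) < deviation(1e-1) < deviation(1.0)
assert deviation(1e-3) < 0.05
```

This matches the claim that the approximation error, and hence the performance loss of the simplified design, vanishes at high first-hop SNR.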
$$\begin{array}{*{20}l} \min_{\{\mathbf{B}_{k}\}, \{\mathbf{W}_{k}\}, \tilde{\mathbf{T}}} & ~~\max_{k} \; \text{tr}\left(\mathbf{I}_{N_{\mathrm{b}},k} + \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}_{\bar k}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1}\\ &~~~~~\quad \qquad + \text{tr} \left(\mathbf{I}_{N_{\mathrm{b},k}} + \tilde{\mathbf{T}}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\tilde{\mathbf{T}}\right)^{-1} \end{array} $$
(25a)
$$\begin{array}{*{20}l} \text{s.t.} &\quad \text{tr}\left(\mathbf{B}_{k}\mathbf{B}_{k}^{H}\right)\leq P_{\mathrm{s},k},~\text{for }k = 1, \dots, K, \end{array} $$
(25b)
$$\begin{array}{*{20}l} & \quad\text{tr }\left(\tilde{\mathbf{T}}\tilde{\mathbf{T}}^{H}\right)\leq P_{\mathrm{r}}. \end{array} $$
(25c)
Note that the optimal receiver matrices \(\{\mathbf{W}_{k}\}\) can be obtained as in (11). Interestingly, the source and relay optimization variables \(\{\mathbf{B}_{k}\}\) and \(\tilde {\mathbf {T}}\) are separable in both the objective function and the constraints of problem (25). Therefore, applying the results of Theorem 2 and Proposition 1, we can decompose problem (25) into the following source precoding matrices optimization problem:
$$\begin{array}{*{20}l} \min_{\{\mathbf{B}_{k}\}} & \max_{k} ~\text{tr}\left(\mathbf{I}_{N_{\mathrm{b}},k} + \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}_{\bar k}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1} \end{array} $$
(26a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}(\mathbf{B}_{k}\mathbf{B}_{k}^{H})\leq P_{\mathrm{s},k},~~\text{for }k = 1, \dots, K, \end{array} $$
(26b)
and the relay amplifying matrix optimization problem:
$$\begin{array}{*{20}l} \min_{\tilde{\mathbf{T}}} & \max_{k} \quad \text{tr}\left(\left[\mathbf{I}_{N_{\mathrm{b},k}} + \tilde{\mathbf{T}}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\tilde{\mathbf{T}}\right]^{-1}\right) \end{array} $$
(27a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}\left(\tilde{\mathbf{T}}\tilde{\mathbf{T}}^{H}\right)\leq P_{\mathrm{r}}. \end{array} $$
(27b)
Note that the objective function in (26a) can be interpreted as the MSE of the kth transmitter’s signal vector \(\mathbf{s}_{k}\). In particular, the equivalent received signal for the kth transmitter’s signal at the relay node in the first hop is given by \(\mathbf {y}_{\mathrm {r}}^{(k)} = \mathbf {H}_{k}\mathbf {B}_{k}\mathbf {s}_{k} + \sum _{j \ne k}^{K}\mathbf {H}_{j}\mathbf {B}_{j}\mathbf {s}_{j} + \mathbf {n}_{\mathrm {r}}\), treating the other users’ signals as noise. The corresponding MMSE receiver is then given by \(\mathbf{D}_{k}\) in (23). Thus, the MSE expression in (26a) represents the equivalent first-hop MSE of the kth transmitter’s signal \(\mathbf{s}_{k}\). Given the corresponding MMSE receiver \(\mathbf{D}_{k}\), (26a) can be rewritten as
$$\begin{array}{@{}rcl@{}} \begin{aligned} {} E_{\mathrm{s},k} &\triangleq\text{tr}\left(\mathbf{D}_{k}^{H}\left({\boldsymbol\Psi} + \sigma_{\mathrm{r}}^{2}\mathbf{I}_{N_{\mathrm{r}}}\right)\mathbf{D}_{k} - \mathbf{D}_{k}^{H}\mathbf{H}_{k}\mathbf{B}_{k} - \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\mathbf{D}_{k} + \mathbf{I}_{N_{\mathrm{b}},k}\right)\\ &=\text{tr}\left(\left(\mathbf{D}_{k}^{H}\mathbf{H}\boldsymbol\Upsilon_{k}\mathbf{B} - \boldsymbol\Omega_{k}\right)\left(\mathbf{D}_{k}^{H}\mathbf{H}\boldsymbol\Upsilon_{k}\mathbf{B} - \boldsymbol\Omega_{k}\right)^{H} + \sigma_{\mathrm{r}}^{2}\mathbf{D}_{k}^{H}\mathbf{D}_{k}\right)\\ &=\left\|\text{vec}\left(\mathbf{D}_{k}^{H}\mathbf{H}\boldsymbol\Upsilon_{k}\mathbf{B} - \boldsymbol\Omega_{k}\right)\right\|_{2}^{2} + \sigma_{\mathrm{r}}^{2}\text{tr}\left(\mathbf{D}_{k}^{H}\mathbf{D}_{k}\right)\\ &=\left\|\left[\begin{array}{c} \omega_{k}\\ \left(\mathbf{I}_{N_{\mathrm{r}}}\otimes\mathbf{D}_{k}^{H}\mathbf{H}\boldsymbol\Upsilon_{k}\right)\text{vec}\left(\mathbf{B}\right) - \text{vec}\left(\boldsymbol\Omega_{k}\right) \end{array}\right]\right\|_{2}^{2}, \end{aligned} \end{array} $$
(28)
where \(\omega _{k} \triangleq \sigma _{\mathrm {r}}\sqrt {\text {tr}(\mathbf {D}_{k}^{H}\mathbf {D}_{k})}\) and \(\boldsymbol \Upsilon _{k} \triangleq \left [\boldsymbol \Upsilon _{k1}, \dots, \boldsymbol \Upsilon _{kk}, \dots,\boldsymbol \Upsilon _{kK}\right ]\) with \(\boldsymbol \Upsilon _{kk}=\mathbf {I}_{N_{\mathrm {r}}}\) and \(\boldsymbol \Upsilon _{kj}=\mathbf{0}\) for \(j \neq k\). Introducing an auxiliary variable \(t_{\mathrm{s}}\), problem (26) can be rewritten as the following second-order cone program (SOCP):
$$\begin{array}{*{20}l} \min_{\{\mathbf{B}_{k}\}, t_{\mathrm{s}}} & \quad t_{\mathrm{s}} \end{array} $$
(29a)
$$\begin{array}{*{20}l} \text{s.t.} & \quad\left\|\left[\begin{array}{c} \omega_{k}\\ \left(\mathbf{I}_{N_{\mathrm{r}}}\otimes\mathbf{D}_{k}^{H}\mathbf{H}\boldsymbol\Upsilon_{k}\right)\text{vec}\left(\mathbf{B}\right) - \text{vec}\left(\boldsymbol\Omega_{k}\right)\end{array}\right]\right\|_{2}\le t_{\mathrm{s}},\\&~~~~~~~~~~~~~~~\text{for }k = 1, \dots, K, \end{array} $$
(29b)
$$\begin{array}{*{20}l} & \quad\left\|\text{vec}\left(\mathbf{B}_{k}\right)\right\|_{2} \le \sqrt{P_{\mathrm{s},k}},~~\text{for }k = 1, \dots, K, \end{array} $$
(29c)

which can be efficiently solved by standard optimization packages at a complexity order of \({\mathcal O}\big ((\sum _{k=1}^{K}N_{\mathrm {b},k}^{2} + 1)^{3}\big)\) [24]. Thus, we can update \(\{\mathbf{D}_{k}\}\) and \(\{\mathbf{B}_{k}\}\) in an alternating fashion.
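The reformulation from (28) to the SOCP (29) rests on the column-stacking identity \(\text{vec}(\mathbf{A}\mathbf{X}) = (\mathbf{I} \otimes \mathbf{A})\,\text{vec}(\mathbf{X})\). A minimal NumPy check of this identity, with arbitrary illustrative dimensions, is:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 4, 3, 2
A = rng.standard_normal((m, n))
X = rng.standard_normal((n, p))

vec = lambda M: M.flatten(order="F")   # column-stacking vec(.), hence order='F'
lhs = vec(A @ X)
rhs = np.kron(np.eye(p), A) @ vec(X)   # vec(AX) = (I_p ⊗ A) vec(X)
print(np.allclose(lhs, rhs))           # True
```

The same identity, applied with \(\mathbf{A} = \mathbf{D}_{k}^{H}\mathbf{H}\boldsymbol\Upsilon_{k}\), turns the Frobenius-norm MSE term into the Euclidean norm of an affine function of \(\text{vec}(\mathbf{B})\), which is what makes the cone constraints (29b) possible.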

Regarding the relay amplifying matrix optimization, by introducing \(\mathbf{Q} \triangleq \tilde {\mathbf {T}}\tilde {\mathbf {T}}^{H}\), the relay matrix optimization problem (27) can be equivalently transformed to
$$\begin{array}{*{20}l} &\min_{\mathbf{Q}\succeq 0} \quad \max_{k} \quad\text{tr}\left(\!\left[\mathbf{I}_{N_{\mathrm{d}},k} + \mathbf{G}_{k}\mathbf{Q}\mathbf{G}_{k}^{H}\right]^{-1}\right) + \!N_{\mathrm{b},k} - N_{\mathrm{d},k} \end{array} $$
(30a)
$$\begin{array}{*{20}l} &\text{s.t.} \quad \text{tr}(\mathbf{Q})\leq P_{\mathrm{r}}. \end{array} $$
(30b)
Let us now introduce a matrix variable \(\mathbf {Y}_{k}\succeq \left (\mathbf {I}_{N_{\mathrm {d}},k} + \mathbf {G}_{k}\mathbf {Q}\mathbf {G}_{k}^{H}\right)^{-1}\), and a scalar variable t r. Using these variables, the relay optimization problem (30) can be equivalently rewritten as the following SDP:
$$\begin{array}{*{20}l} \min_{t_{\mathrm{r}}, \mathbf{Q}, \{\mathbf{Y}_{k}\}} & t_{\mathrm{r}} \end{array} $$
(31a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}(\mathbf{Y}_{k})\leq t_{\mathrm{r}},~~\text{for }k = 1, \dots, K, \end{array} $$
(31b)
$$\begin{array}{*{20}l} & \text{tr}(\mathbf{Q})\leq P_{\mathrm{r}}, \end{array} $$
(31c)
$$\begin{array}{*{20}l} & \left(\begin{array}{cc} \mathbf{Y}_{k} & \mathbf{I}_{N_{\mathrm{d}},k},\\ \mathbf{I}_{N_{\mathrm{d}},k} & \mathbf{I}_{N_{\mathrm{d}},k} + \mathbf{G}_{k}\mathbf{Q}\mathbf{G}_{k}^{H}\\ \end{array} \right) \succeq 0,\\& \qquad\qquad\qquad\!\text{for } k = 1,\dots, K, \end{array} $$
(31d)
$$\begin{array}{*{20}l} & t_{\mathrm{r}}\geq 0, \end{array} $$
(31e)
$$\begin{array}{*{20}l} & \mathbf{Q}\succeq 0. \end{array} $$
(31f)
Problem (31) is convex, and its globally optimal solution can be easily obtained [23]. The complexity order of solving problem (31) is at most \({\mathcal O}\big ((\sum _{k=1}^{K}N_{\mathrm {b},k}^{2} + \sum _{k=1}^{K}N_{\mathrm {d},k}^{2} + K + 2)^{3.5}\big)\) [24]. Note that in the simplified algorithm, only the source matrices are updated in an alternating fashion. The overall joint optimization procedure is summarized in Table 2.
Table 2

Proposed simplified algorithm for solving problem (9)

1. Initialize \(\mathbf{B}_{k}\), for all k, satisfying the constraints (29c).
2. Repeat
   (a) Update \(\mathbf{D}_{k}\), for all k, as in (23).
   (b) Update \(\mathbf{B}_{k}\), for all k, by solving subproblem (29).
3. Until convergence.
4. Solve subproblem (31) to obtain \(\mathbf{Q}\).
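The LMI (31d) encodes the constraint \(\mathbf{Y}_{k}\succeq \left(\mathbf{I} + \mathbf{G}_{k}\mathbf{Q}\mathbf{G}_{k}^{H}\right)^{-1}\) through a Schur complement. The sketch below (NumPy, small illustrative real matrices; the names are hypothetical and not tied to the paper's variables) checks numerically that the block matrix is positive semidefinite exactly when \(\mathbf{Y}\) dominates the inverse:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
G = rng.standard_normal((n, n))
S = np.eye(n) + G @ G.T        # stands in for I + G_k Q G_k^H (positive definite)

def lmi_holds(Y):
    """Check [[Y, I], [I, S]] >= 0, i.e., Y >= S^{-1} by the Schur complement."""
    M = np.block([[Y, np.eye(n)], [np.eye(n), S]])
    return bool(np.linalg.eigvalsh((M + M.T) / 2).min() >= -1e-10)

print(lmi_holds(np.linalg.inv(S) + 0.1 * np.eye(n)))  # True: Y >= S^{-1}
print(lmi_holds(np.linalg.inv(S) - 0.1 * np.eye(n)))  # False: bound violated
```

Since \(\mathbf{S}\succ 0\), the block matrix is PSD if and only if the Schur complement \(\mathbf{Y} - \mathbf{S}^{-1}\) is PSD, which is exactly the epigraph constraint that problem (31) relaxes into linear matrix inequality form.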

4 Two-way relaying

Two-way relaying is considered a promising technique for future-generation wireless systems since it can significantly improve spectral efficiency. Hence, in this section, we consider two-way relaying in an interference MIMO relay system where each pair of users exchanges signals through the assisting relay node. The information exchange in the two-way relay channel is accomplished in two time slots: the multiple-access (MAC) phase and the broadcast (BC) phase. During the MAC phase, all the users simultaneously send their messages to the relay node. Thus, the signal vector received at the relay node during the MAC phase can be expressed as
$$ \mathbf{y}_{\mathrm{r}} = \sum_{k=1}^{2K}\mathbf{H}_{k}\mathbf{B}_{k}\mathbf{s}_{k} + \mathbf{n}_{\mathrm{r}}, $$
(32)

where \(\mathbf {H}_{K+k} \triangleq \mathbf {G}_{k}^{T}\) for \(k=1,\dots,K\) and \(\mathbf{n}_{\mathrm{r}}\) is the \(N_{\mathrm{r}}\times 1\) AWGN vector received at the relay node.

Upon receiving \(\mathbf{y}_{\mathrm{r}}\), the relay node linearly precodes the signal vector by an \(N_{\mathrm{r}}\times N_{\mathrm{r}}\) amplifying matrix \(\mathbf{F}\) and transmits the \(N_{\mathrm{r}}\times 1\) precoded signal vector \(\mathbf{x}_{\mathrm{r}}\) in the BC phase:
$$ \mathbf{x}_{\mathrm{r}} = \mathbf{F}\mathbf{y}_{\mathrm{r}}. $$
(33)
The received signal at the kth user in the BC phase is given by
$$\begin{array}{@{}rcl@{}} \mathbf{y}_{k}&=&\mathbf{H}_{k}^{T} \mathbf{x}_{\mathrm{r}} + \mathbf{n}_{\mathrm{d},k}\\ &=&\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{H}_{\bar k}\mathbf{B}_{\bar k}\mathbf{s}_{\bar k} + \mathbf{H}_{k}^{T}\mathbf{F} \!\!\left(\sum_{j=1\atop j\neq {\bar k}}^{2K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{s}_{j} + \mathbf{n}_{\mathrm{r}}\right)+\mathbf{n}_{\mathrm{d},k},\\&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\text{for }k=1, \dots, 2K, \end{array} $$
(34)

where we have defined \(\bar{k}\) as the index of user k’s partner (e.g., \({\bar {1}} = K+1, \overline {K+1} = 1\)), and \(\mathbf{n}_{\mathrm{d},k}\) is the \(N_{\mathrm{d},k}\times 1\) AWGN vector at the kth destination node. As in the one-way relaying system, all noises are assumed to be i.i.d. complex Gaussian random variables with zero mean and variance \(\sigma _{\mathrm {n}}^{2}\).

Since the transmitting node k knows its own signal vector \(\mathbf{s}_{k}\) and the full CSI of the corresponding self-interference link \(\mathbf {H}_{k}^{T}\mathbf {F}\mathbf {H}_{k}\mathbf {B}_{k}\), each transmitter can completely cancel the self-interference component in (34). Thus, the effective received signal vector at the kth receiving node is given by
$$ {\begin{aligned} \mathbf{y}_{k}=\mathbf{H}_{k}^{T} \mathbf{F}\mathbf{H}_{\bar {k}}\mathbf{B}_{\bar{k}}\mathbf{s}_{\bar{k}} + \mathbf{H}_{k}^{T}\mathbf{F} \left(\sum_{j\neq k,{\bar{k}}}^{2K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{s}_{j} + \mathbf{n}_{\mathrm{r}}\right) + \mathbf{n}_{\mathrm{d},k},\\ \end{aligned}} $$
(35)
$$ {}\begin{aligned} =\bar{\mathbf{H}}_{k}\mathbf{s}_{\bar{k}} + \bar{\mathbf{n}}_{\mathrm{d},k}, ~~\text{for }k=1, \dots, 2K. \end{aligned} $$
(36)
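The self-interference cancellation step leading from (34) to (35) can be sketched as follows (NumPy, small hypothetical dimensions, other-pair interference and noise omitted for clarity): the receiver subtracts the known term \(\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k}\mathbf{s}_{k}\) and is left with only its partner's signal.

```python
import numpy as np

rng = np.random.default_rng(3)
Nr, Nb = 4, 2
cx = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

Hk, Hbar = cx(Nr, Nb), cx(Nr, Nb)   # user k's and its partner's uplink channels
Bk, Bbar = np.eye(Nb), np.eye(Nb)   # identity precoders for illustration
F = cx(Nr, Nr)                      # relay amplifying matrix
sk, sbar = cx(Nb), cx(Nb)           # transmitted symbol vectors

# BC-phase receive signal at user k (noise omitted for clarity)
yk = Hk.T @ F @ (Hk @ Bk @ sk + Hbar @ Bbar @ sbar)

# User k knows s_k and the cascade H_k^T F H_k B_k, so it cancels itself:
yk_eff = yk - Hk.T @ F @ Hk @ Bk @ sk
print(np.allclose(yk_eff, Hk.T @ F @ Hbar @ Bbar @ sbar))  # True
```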
Using (33), the transmission power required at the relay node can be defined as
$$\begin{array}{@{}rcl@{}} \text{tr}\left(\mathrm{E}\left\{\mathbf{x}_{\mathrm{r}} \mathbf{x}_{\mathrm{r}}^{H}\right\}\right)= \text{tr}\left(\mathbf{F} {\boldsymbol \Psi}\mathbf{F}^{H}\right), \end{array} $$
(37)
where \({\boldsymbol \Psi } \triangleq \mathrm {E}\{\mathbf {y}_{\mathrm {r}}\mathbf {y}_{\mathrm {r}}^{H}\} = \sum _{k=1}^{2K}\mathbf {H}_{k}\mathbf {B}_{k}\mathbf {B}_{k}^{H}\mathbf {H}_{k}^{H} + \sigma _{\mathrm {r}}^{2}\mathbf {I}_{N_{\mathrm {r}}}\) is the covariance matrix of the signal received at the relay node from all the transmitters. Furthermore, the MSE of the signal estimated using an \(N_{\mathrm{d}}\times N_{\mathrm{b}}\) linear weight matrix \(\mathbf{W}_{k}\) at the kth receiving node can be expressed as
$$\begin{array}{@{}rcl@{}}{} &&E_{k}\! =\! \text{tr}\! \left(\!\begin{array}{r} \mathbf{I}_{N_{\mathrm{s}},k} - \mathbf{W}_{k}^{H}\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{H}_{\bar{k}}\mathbf{B}_{\bar k} - \mathbf{B}_{\bar{k}}^{H}\mathbf{H}_{\bar{k}}^{H}\mathbf{F}^{H}\mathbf{H}_{k}^{*}\mathbf{W}_{k}\\ + \sum_{j=1\atop j\ne k}^{2K}\mathbf{W}_{k}^{H}\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}\mathbf{F}^{H}\mathbf{H}_{k}^{*}\mathbf{W}_{k}\\ +\sigma_{\mathrm{r}}^{2}\mathbf{W}_{k}^{H}\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{F}^{H}\mathbf{H}_{k}^{*}\mathbf{W}_{k} + \sigma_{\mathrm{d}}^{2}\mathbf{W}_{k}^{H}\mathbf{W}_{k} \end{array}\!\right),\\ {}&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\qquad\text{for }k = 1, \dots, 2K. \end{array} $$
(38)
Similar to the case of one-way relaying, the problem of optimizing the transmit, relay, and receive matrices for the two-way scenario can be formulated as
$$\begin{array}{*{20}l} \min_{\{\mathbf{B}_{k}\}, \mathbf{F}, \{\mathbf{W}_{k}\}} & \max_{k}\;\; E_{k} \end{array} $$
(39a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}\left(\mathbf{F} {\boldsymbol \Psi} \mathbf{F}^{H}\right)\leq P_{\mathrm{r}} \end{array} $$
(39b)
$$\begin{array}{*{20}l} & \text{tr}(\mathbf{B}_{k}\mathbf{B}_{k}^{H})\leq P_{\mathrm{s},k},~~\text{for }k = 1, \dots, 2K, \end{array} $$
(39c)

where (39b) and (39c) indicate the corresponding transmit power constraints.

4.1 Iterative joint transceiver optimization

Similar to the one-way relaying scenario, it can be shown that the transmitter, relay, and receiver matrices can be optimized in an alternating fashion through solving convex sub-problems. In each iteration of the algorithm, the receiver weight matrices are updated as follows:
$$\begin{array}{@{}rcl@{}} \mathbf{W}_{k} &= \left(\sum\limits_{j=1\atop j \ne k}^{2K}\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}\mathbf{F}^{H}\mathbf{H}_{k}^{*} + \sigma_{\mathrm{r}}^{2}\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{F}^{H}\mathbf{H}_{k}^{*} + \sigma_{\mathrm{d}}^{2}\mathbf{I}_{N_{\mathrm{d}}}\right)^{-1}\\ &\quad~~~~~~~~~~~~~~~~~~~~~~~\times\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{H}_{\bar k}\mathbf{B}_{\bar k}, ~~\text{for } k = 1, \dots, 2K.\; \end{array} $$
(40)
The relay beamforming matrix F is optimized through solving the following SDP problem:
$$\begin{array}{*{20}l} \min_{\tau_{\mathrm{r}}, \mathbf{F}, \{{\boldsymbol \Xi}_{k}\}, {\boldsymbol \Phi}} & \tau_{\mathrm{r}} \end{array} $$
(41a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}\left({\boldsymbol \Xi}_{k}\right) + \text{tr}\left(\mathbf{F}_{k}^{H}\mathbf{W}_{k}\right) \le \tau_{\mathrm{r}}, \end{array} $$
(41b)
$$\begin{array}{*{20}l} & \left[\begin{array}{cc} {\boldsymbol \Xi}_{k} + \mathbf{W}_{k}^{H}\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{H}_{\bar k}\mathbf{B}_{\bar{k}} + \mathbf{B}_{\bar{k}}^{H}\mathbf{H}_{\bar k}^{H}\mathbf{F}^{H}\mathbf{H}_{k}^{*}\mathbf{W}_{k} & \mathbf{W}_{k}^{H}\mathbf{H}_{k}^{T}\mathbf{F}\\ \mathbf{F}^{H}\mathbf{H}_{k}^{*}\mathbf{W}_{k} & {\boldsymbol \Psi}_{\bar k}^{-1} \end{array}\right]\succeq \mathbf{0}, \\ & {\kern110pt} \text{for }k=1,\dots, 2K, \end{array} $$
(41c)
$$\begin{array}{*{20}l} & \left[\begin{array}{cc} {\boldsymbol \Phi} & \mathbf{F}\\ \mathbf{F}^{H} & {\boldsymbol \Psi}^{-1} \end{array}\right]\succeq \mathbf{0}, \end{array} $$
(41d)
$$\begin{array}{*{20}l} & \text{tr}\left({\boldsymbol \Phi}\right) \le P_{\mathrm{r}}, \end{array} $$
(41e)
where we have defined
$$ \left\{\begin{aligned} \mathbf{F} {\boldsymbol \Psi} \mathbf{F}^{H}&\preceq {\boldsymbol \Phi},\\ -\mathbf{W}_{k}^{H}\mathbf{H}_{k}^{T}\mathbf{F}\mathbf{H}_{\bar k}\mathbf{B}_{\bar k} - \mathbf{B}_{\bar k}^{H}\mathbf{H}_{\bar k}^{H}\mathbf{F}^{H}\mathbf{H}_{k}^{*}\mathbf{W}_{k} + \mathbf{W}_{k}^{H}\mathbf{H}_{k}^{T}\mathbf{F}{\boldsymbol \Psi}_{\bar k}\mathbf{F}^{H}\mathbf{H}_{k}^{*}\mathbf{W}_{k} &\preceq {\boldsymbol \Xi}_{k}. \end{aligned}\right. $$
(42)
Finally, the optimal source precoding matrices are obtained by solving
$$\begin{array}{*{20}l} \min_{\tau_{\mathrm{s}},\mathbf{b}} & \quad\tau_{\mathrm{s}} \end{array} $$
(43a)
$$\begin{array}{*{20}l} \text{s.t.} & \quad\left(\begin{array}{cc} \tau_{\mathrm{s}}-\theta_{k}+ \mathbf{c}_{k}^{H} \mathbf{b}+ \mathbf{b}^{H} \mathbf{c}_{k} & \mathbf{b}^{H} \\ \mathbf{b} & \tilde{\mathbf{G}}_{k}^{-1} \\ \end{array} \right)\succcurlyeq \mathbf{0},\\&\quad\text{for }k=1,\dots, 2K, \end{array} $$
(43b)
$$\begin{array}{*{20}l} & \quad\left(\begin{array}{cc} \bar{P}_{\mathrm{r}} & \mathbf{b}^{H} \\ \mathbf{b} & \mathbf{M}^{-1} \\ \end{array} \right) \succcurlyeq \mathbf{0}, \end{array} $$
(43c)
$$\begin{array}{*{20}l} & \quad\left(\begin{array}{cc} P_{\mathrm{s},k} & \mathbf{b}^{H}\mathcal{I}_{k}^{\frac{1}{2}} \\ \mathcal{I}_{k}^{\frac{1}{2}}\mathbf{b} & \mathbf{I}_{p}\\ \end{array} \right)\succcurlyeq \mathbf{0},~~\text{for }k=1,\dots, 2K, \end{array} $$
(43d)
where
$$\begin{array}{*{20}l} {} \theta_{k} &\triangleq \text{tr}\left(\sigma_{\mathrm{r}}^{2}\mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k} + \sigma_{\mathrm{d}}^{2}\mathbf{W}_{k}^{H}\mathbf{W}_{k}\right)\\ {}&\quad+N_{\mathrm{b},k}, ~~\text{for }k=1,\dots, 2K, \end{array} $$
(44a)
$$\begin{array}{*{20}l} {}\tilde{\mathbf{G}}_{k} &\triangleq \text{bd}\left(\mathbf{I}_{N_{\mathrm{b},1}} \otimes \left(\tilde{\mathbf{H}}_{k,1}^{H}\tilde{\mathbf{H}}_{k,1}\right), \cdots,\mathbf{I}_{N_{\mathrm{b},2K}}\right.\\ {}&\quad\left.\otimes \left(\tilde{\mathbf{H}}_{k,2K}^{H}\tilde{\mathbf{H}}_{k,2K}\right)\right),~~\text{for }k=1,\dots, 2K, \end{array} $$
(44b)
$$\begin{array}{*{20}l} {}\mathbf{c}_{k} &\triangleq \left[\left(\text{vec}\left(\tilde{\mathbf{C}}_{k,1}\right)\right)^{T}, \dots, \left(\text{vec}\left(\tilde{\mathbf{C}}_{k,2K}\right)\right)^{T}\right]^{T}, \end{array} $$
(44c)
$$\begin{array}{*{20}l} {}\tilde{\mathbf{C}}_{k,k} &= \tilde{\mathbf{H}}_{k,k}, \end{array} $$
(44d)
$$\begin{array}{*{20}l} {}\tilde{\mathbf{C}}_{k,j}& = \mathbf{0}_{N_{\mathrm{b},k}\times N_{\mathrm{s},j}}, ~~\text{for }j \ne k, \end{array} $$
(44e)
$$\begin{array}{*{20}l} {}\mathbf{b} &\triangleq \left[\mathbf{b}_{1}^{T}, \dots, \mathbf{b}_{2K}^{T}\right]^{T}, \end{array} $$
(44f)
$$\begin{array}{*{20}l} {}\mathbf{M} &\triangleq \text{bd}\left(\mathbf{I}_{N_{\mathrm{b},1}}\otimes\left(\mathbf{M}_{1}^{H}\mathbf{M}_{1}\right), \dots,\mathbf{I}_{N_{\mathrm{b},2K}} \otimes \left(\mathbf{M}_{2K}^{H}\mathbf{M}_{2K}\right)\right), \end{array} $$
(44g)
$$\begin{array}{*{20}l} {}&p\triangleq \sum_{k=1}^{2K} N_{\mathrm{s},k} N_{\mathrm{b},k}. \end{array} $$
(44h)

4.2 Simplified non-iterative approach

Assuming a moderately high SNR in the MAC phase, it can be shown, similar to the one-way relaying case, that the generic structure of the relay matrix \(\mathbf{F}\) is \(\mathbf{F}=\mathbf{T}\mathbf{D}^{H}\). Using this particular structure of \(\mathbf{F}\), the MSE at the kth receiver can be equivalently decomposed into two parts as follows:
$$\begin{array}{@{}rcl@{}} E_{k} &=&\text{tr}\left(\mathbf{I}_{N_{\mathrm{b}},k} + \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}_{\bar k}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1}\\ && + \text{tr} \left(\!\left(\!\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}_{\bar k}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1} \!+ \tilde{\mathbf{T}}^{H}\mathbf{H}_{\bar k}^{*}\mathbf{H}_{\bar k}^{T}\tilde{\mathbf{T}}\right)^{-1}\!.~~~~~ \end{array} $$
(45)
Accordingly, the joint precoding design problem (25) can be decomposed into two sub-problems, namely, the source precoding matrices optimization problem:
$$\begin{array}{*{20}l} \min_{\{\mathbf{B}_{k}\}} & \max_{k} ~\text{tr}\left(\mathbf{I}_{N_{\mathrm{b}},k} + \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}_{\bar k}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1} \end{array} $$
(46a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}\left(\mathbf{B}_{k}\mathbf{B}_{k}^{H}\right)\leq P_{\mathrm{s},k},~~\text{for }k = 1, \dots, K, \end{array} $$
(46b)
and the relay beamforming matrix optimization problem:
$$\begin{array}{*{20}l} \min_{\tilde{\mathbf{T}}} & \max_{k} \quad \text{tr}\left(\left[\mathbf{I}_{N_{\mathrm{b},k}} + \tilde{\mathbf{T}}^{H}\mathbf{H}_{\bar k}^{*}\mathbf{H}_{\bar k}^{T}\tilde{\mathbf{T}}\right]^{-1}\right) \end{array} $$
(47a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}\left(\tilde{\mathbf{T}}\tilde{\mathbf{T}}^{H}\right)\leq P_{\mathrm{r}}, \end{array} $$
(47b)

which can be solved following a similar approach as in the one-way relaying scenario.

5 Numerical simulations

In this section, we analyze the performance of the proposed one- and two-way interference MIMO relay optimization algorithms through numerical examples. For simplicity, we assume that the source and destination nodes are equipped with \(N_{\mathrm{s}}\) and \(N_{\mathrm{d}}\) antennas each, respectively, and that \(P_{\mathrm{s},k}=P_{\mathrm{s}}\) for all k. We simulated a flat Rayleigh fading environment such that the channel matrices have zero-mean entries with variances \(1/N_{\mathrm{s}}\) for \(\mathbf{H}_{k}\) and \(1/N_{\mathrm{r}}\) for \(\mathbf{G}_{k}\), for all k. All the simulation results were obtained by averaging over 500 independent channel realizations.
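The simulated channel statistics can be reproduced with a short helper. This is an illustrative sketch of the stated model (zero-mean complex Gaussian entries with variance \(1/N_{\mathrm{s}}\) for \(\mathbf{H}_{k}\) and \(1/N_{\mathrm{r}}\) for \(\mathbf{G}_{k}\)); the function name and dimensions are hypothetical:

```python
import numpy as np

def rayleigh_channel(rows, cols, var, rng):
    """i.i.d. complex Gaussian entries with zero mean and the given variance."""
    return np.sqrt(var / 2) * (rng.standard_normal((rows, cols))
                               + 1j * rng.standard_normal((rows, cols)))

rng = np.random.default_rng(7)
Ns, Nr, Nd, K = 3, 9, 3, 3                                     # Example 1 setup
H = [rayleigh_channel(Nr, Ns, 1 / Ns, rng) for _ in range(K)]  # first hop
G = [rayleigh_channel(Nd, Nr, 1 / Nr, rng) for _ in range(K)]  # second hop
print(H[0].shape, G[0].shape)  # (9, 3) (3, 9)
```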

The performance of the proposed min-max MSE algorithms has been compared with that of the naive AF (NAF) algorithm in terms of both MSE and bit error rate (BER). The NAF algorithm is a simple baseline scheme in which the transmitters and the relay node forward signals with equal power assigned to each data stream. In particular, the source and relay matrices, in their simplest forms, in the NAF scheme are defined as
$$ \left\{\begin{aligned} \mathbf{B}_{k}& = \sqrt{P_{\mathrm{s}}/N_{\mathrm{s}}}\,\mathbf{I}_{N_{\mathrm{s}}},~~\text{for }k = 1, \dots, K,\\ \mathbf{F} &= \sqrt{P_{\mathrm{r}}/\text{tr}({\boldsymbol \Psi})}\, \mathbf{I}_{N_{\mathrm{r}}}. \end{aligned}\right. $$
(48)
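The NAF baseline in (48) is straightforward to construct. The following sketch (NumPy, illustrative dimensions and unit relay noise variance, all names hypothetical) builds \(\mathbf{B}_{k}\) and \(\mathbf{F}\) and verifies that both power budgets are met with equality:

```python
import numpy as np

rng = np.random.default_rng(4)
K, Ns, Nr = 3, 3, 9
Ps, Pr = 10.0, 100.0
cx = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

H = [cx(Nr, Ns) for _ in range(K)]
B = [np.sqrt(Ps / Ns) * np.eye(Ns) for _ in range(K)]     # equal-power sources

Psi = sum(Hk @ Bk @ Bk.conj().T @ Hk.conj().T for Hk, Bk in zip(H, B))
Psi = Psi + np.eye(Nr)                                    # sigma_r^2 = 1
F = np.sqrt(Pr / np.trace(Psi).real) * np.eye(Nr)         # NAF relay scaling

print(np.isclose(np.trace(B[0] @ B[0].conj().T).real, Ps))   # source power met
print(np.isclose(np.trace(F @ Psi @ F.conj().T).real, Pr))   # relay power met
```

Because the relay scaling divides by \(\text{tr}({\boldsymbol\Psi})\), the relay power constraint holds with equality regardless of the channel realization, which is why the NAF scheme needs no optimization at all.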
In the first example, we compare the performance of the proposed min-max MSE-based one-way algorithms with that of the sum-MSE minimization algorithm in [8] as well as the NAF approach in terms of the MSE normalized by the number of data streams (NMSE), with \(K=3\), \(N_{\mathrm{s}}=3\), \(N_{\mathrm{r}}=9\), and \(N_{\mathrm{d}}=3\). Figure 2 shows the NMSE performance of the algorithms versus transmit power \(P_{\mathrm{s}}\) with fixed \(P_{\mathrm{r}}=20\) dB. Note that for the proposed simplified non-iterative algorithm, we plot the NMSE of the user with the worst channel (Worst) as well as the average per-stream MSE of all the users (Avg.), whereas for the remaining algorithms the worst-user NMSE is plotted. The results clearly indicate that the proposed joint optimization algorithms consistently outperform the existing schemes. The proposed iterative algorithm achieves the best MSE performance among all the approaches over the entire \(P_{\mathrm{s}}\) range. It is no surprise that the NAF algorithm yields a much higher MSE than the other schemes, since it performs no optimization. Most importantly, the iterative sum-MSE minimization algorithm in [8] always penalizes the user with the worst channel condition.
Fig. 2

Example 1: normalized MSE versus \(P_{\mathrm{s}}\). \(K=3\), \(N_{\mathrm{s}}=3\), \(N_{\mathrm{r}}=9\), \(N_{\mathrm{d}}=3\), \(P_{\mathrm{r}}=20\) dB

Since the NAF algorithm does not allocate the transmit power optimally but instead divides it equally among the data streams, the inter-stream and inter-user interference increase significantly at higher transmit power. Hence, the MSE of the NAF algorithm does not improve notably at higher transmit power.

Further analysis of the results in Fig. 2 reveals that the proposed simplified algorithm yields worst-user MSE performance comparable to that of the iterative algorithm, even in the low-\(P_{\mathrm{s}}\) region. This observation illustrates that the approximation made in the simplified algorithm incurs negligible performance loss compared to the iterative design. On the other hand, the computational complexity of the proposed simplified optimization is less than that of even one iteration of the iterative design, making it much more attractive for practical interference MIMO relay systems. The number of iterations required by the iterative algorithm to converge to a tolerance of \(10^{-3}\) in terms of MSE for a random channel realization is listed in Table 3.
Table 3

Iterations required till convergence in the proposed algorithm

\(P_{\mathrm{s}}\) (dB)   0   5   10   15   20   25
Iterations                3   3   3    4    5    5

In the next example, we focus on the proposed simplified optimization scheme and compare its performance with that of the proposed iterative approach and the NAF algorithm in terms of BER. Quadrature phase-shift keying (QPSK) constellations were used to modulate the transmitted signals, and maximum-likelihood detection is applied at the receivers. We set \(K=3\), \(N_{\mathrm{s}}=2\), \(N_{\mathrm{r}}=6\), \(N_{\mathrm{d}}=3\), and transmit \(1000N_{\mathrm{s}}\) randomly generated bits from each transmitter in each channel realization. The BER performance of the algorithms is shown in Fig. 3 versus \(P_{\mathrm{s}}\) with \(P_{\mathrm{r}}=20\) dB. As we can see, the proposed simplified algorithm yields a much lower BER than the conventional NAF scheme. Compared with the iterative approach, the simplified algorithm imposes a much lower computational burden at the cost of marginal performance loss.
Fig. 3

Example 2: BER versus \(P_{\mathrm{s}}\). \(K=3\), \(N_{\mathrm{s}}=2\), \(N_{\mathrm{r}}=6\), \(N_{\mathrm{d}}=3\), \(P_{\mathrm{r}}=20\) dB

In the last two examples, we analyze the performance of the two-way MIMO relaying scheme. The NMSE performance of the two-way relaying algorithms is shown for different numbers of communication links K in Fig. 4. This time we set \(N_{\mathrm{s}}=2\), \(N_{\mathrm{r}}=KN_{\mathrm{s}}\), and \(N_{\mathrm{d}}=6\) to plot the NMSE of the proposed algorithms versus \(P_{\mathrm{s}}\) with \(P_{\mathrm{r}}=20\) dB. It can be clearly seen from Fig. 4 that the worst-user MSE increases with the number of links, due to the additional cross-link interference generated by the increased number of active users.
Fig. 4

Example 3: MSE versus \(P_{\mathrm{s}}\) in two-way relaying. Varying number of links, \(N_{\mathrm{s}}=2\), \(N_{\mathrm{r}}=KN_{\mathrm{s}}\), \(N_{\mathrm{d}}=6\), \(P_{\mathrm{r}}=20\) dB

In Fig. 5, the BER performance of the proposed two-way relaying algorithms is compared with that of the sum-MSE-based algorithms originally proposed for one-way relaying in [8–10]. QPSK constellations were used to modulate the transmitted signals. We set \(N_{\mathrm{s}}=2\), \(K=3\), \(N_{\mathrm{r}}=KN_{\mathrm{s}}\), \(N_{\mathrm{d}}=6\), \(P_{\mathrm{r}}=20\) dB, and transmit \(1000N_{\mathrm{s}}\) randomly generated bits from each transmitter in each channel realization. Most importantly, the iterative sum-MSE minimization algorithms in [8–10] always penalize the user with the worst channel condition in the two-way relaying system.
Fig. 5

Example 4: BER versus \(P_{\mathrm{s}}\) in two-way relaying for different algorithms, \(N_{\mathrm{s}}=2\), \(K=3\), \(N_{\mathrm{r}}=KN_{\mathrm{s}}\), \(N_{\mathrm{d}}=6\), \(P_{\mathrm{r}}=20\) dB

6 Conclusions

We considered a two-hop interference MIMO relay system and developed schemes that minimize the worst-user MSE of signal estimation for both one- and two-way relaying. First, we proposed an iterative solution for both relaying schemes that alternately solves several convex subproblems. Then, to reduce the computational overhead of the iterative approach, we developed a simplified non-iterative algorithm using the error covariance matrix decomposition technique under the high-SNR assumption. Simulation results illustrate that the proposed simplified approach performs nearly as well as the iterative approach while offering a significant reduction in computational complexity.

7 Endnote

1 The min-max MSE criterion is considered by many to be more desirable than the min-sum MSE criterion in [8–10] because fairness is imposed and weaker users are not sacrificed for the minimization of the sum.

8 Appendix 1: Proof of Theorem 1

For given {B k } and {W k }, problem (9) reduces to
$$\begin{array}{*{20}l} \min_{\mathbf{F}}& \tau \end{array} $$
(49a)
$$\begin{array}{*{20}l} \text{s.t.} & E_{k} \le \tau,~~\text{for }k = 1, \dots, K, \end{array} $$
(49b)
$$\begin{array}{*{20}l} & \text{tr}\left(\mathbf{F} {\boldsymbol \Psi} \mathbf{F}^{H}\right)\leq P_{\mathrm{r}}. \end{array} $$
(49c)
The Lagrangian function of problem (49) can be written as
$$\begin{array}{@{}rcl@{}} &&{\mathcal L}\left(\mathbf{F}, \{\lambda_{\mathrm{e},k}\}, \lambda_{\mathrm{r}}\right)\\ &&\quad=\tau + \sum_{k=1}^{K} \lambda_{\mathrm{e},k}\text{tr} \left(\begin{array}{r}\mathbf{I}_{N_{\mathrm{s}},k} - 2\text{Re}\left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k}\right)\\ + \sum_{j=1}^{K}\mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H} \mathbf{H}_{j}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k}\\ +\sigma_{\mathrm{r}}^{2}\mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{W}_{k} + \sigma_{\mathrm{d}}^{2}\mathbf{W}_{k}^{H}\mathbf{W}_{k} - \tau \end{array} \right)\\ &&\qquad~~~+ \lambda_{\mathrm{r}}\left(\text{tr}\left(\mathbf{F} \left(\sum_{k=1}^{K}\mathbf{H}_{k}\mathbf{B}_{k}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H} + \sigma_{\mathrm{r}}^{2}\mathbf{I}_{N_{\mathrm{r}}}\right) \mathbf{F}^{H}\right) - P_{\mathrm{r}} \right). \end{array} $$
(50)
The derivative of the Lagrangian function over F H is given by
$$\begin{array}{@{}rcl@{}} \frac{\partial{\mathcal L}}{\partial\mathbf{F}^{H}} &=& \sum_{k=1}^{K} \lambda_{\mathrm{e},k} \left(- \mathbf{G}_{k}^{H}\mathbf{W}_{k}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H} + \sum\limits_{j=1}^{K}\mathbf{G}_{k}^{H}\mathbf{W}_{k}\mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H} \right.\\ && \left.+ {\vphantom{\sum_{k=1}^{K}}} \sigma_{\mathrm{r}}^{2}\mathbf{G}_{k}^{H}\mathbf{W}_{k}\mathbf{W}_{k}^{H}\mathbf{G}_{k}\mathbf{F}\right) \,+\, \lambda_{\mathrm{r}}\mathbf{F}\left(\!\sum\limits_{k=1}^{K}\mathbf{H}_{k}\mathbf{B}_{k}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H} \!+ \sigma_{\mathrm{r}}^{2}\mathbf{I}_{N_{\mathrm{r}}}\!\right). \end{array} $$
(51)
Rearranging the terms in (51), \(\frac {\partial {\mathcal L}}{\partial \mathbf {F}^{H}}\) can be expressed as
$$\begin{array}{*{20}l} \frac{\partial{\mathcal L}}{\partial\mathbf{F}^{H}}&=\sum_{k=1}^{K} - \lambda_{\mathrm{e},k}\mathbf{G}_{k}^{H}\mathbf{W}_{k}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\\ &\quad+ \left(\sum_{i=1}^{K}\lambda_{\mathrm{e},i}\mathbf{G}_{i}^{H}\mathbf{W}_{i}\mathbf{W}_{i}^{H}\mathbf{G}_{i} + \lambda_{\mathrm{r}}\mathbf{I}_{N_{\mathrm{r}}}\right)\mathbf{F} \\ &\quad\times\left(\sum_{j=1}^{K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H} + \sigma_{\mathrm{r}}^{2}\mathbf{I}_{N_{\mathrm{r}}}\right). \end{array} $$
(52)
Setting \(\frac {\partial {\mathcal L}}{\partial \mathbf {F}^{H}} = \mathbf{0}\), we obtain the optimal relay filter
$$ \mathbf{F} =\sum_{k = 1}^{K}\mathbf{T}_{k}\mathbf{D}_{k}^{H} $$
(53)
with
$$\begin{array}{@{}rcl@{}} \left\{\begin{array}{ll} \mathbf{T}_{k}& \triangleq \lambda_{\mathrm{e},k}\left(\sum\limits_{i=1}^{K}\lambda_{\mathrm{e},i}\mathbf{G}_{i}^{H}\mathbf{W}_{i}\mathbf{W}_{i}^{H}\mathbf{G}_{i} + \lambda_{\mathrm{r}}\mathbf{I}_{N_{\mathrm{r}}}\right)^{-1}\mathbf{G}_{k}^{H}\mathbf{W}_{k},\\ \mathbf{D}_{k} &\triangleq \left(\sum\limits_{j=1}^{K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}+ \sigma_{\mathrm{r}}^{2}\mathbf{I}_{N_{\mathrm{r}}}\right)^{-1}\mathbf{H}_{k}\mathbf{B}_{k}. \end{array}\right. \end{array} $$
(54)

Denoting \(\mathbf {T} \triangleq \left [\mathbf {T}_{1} \cdots \mathbf {T}_{K} \right ]\) and \(\mathbf {D} \triangleq \left [\mathbf {D}_{1} \cdots \mathbf {D}_{K} \right ]\), \(\mathbf{F}\) can be expressed as \(\mathbf{F}=\mathbf{T}\mathbf{D}^{H}\). □
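The stacking step at the end of the proof is a standard block-matrix identity: the sum \(\sum_{k}\mathbf{T}_{k}\mathbf{D}_{k}^{H}\) equals the product of the horizontally stacked matrices. A quick numerical check (NumPy, arbitrary illustrative dimensions, real matrices with transpose standing in for the Hermitian):

```python
import numpy as np

rng = np.random.default_rng(6)
Nr, Nb, K = 6, 2, 3
Tk = [rng.standard_normal((Nr, Nb)) for _ in range(K)]
Dk = [rng.standard_normal((Nr, Nb)) for _ in range(K)]

T = np.hstack(Tk)                    # T = [T_1 ... T_K]
D = np.hstack(Dk)                    # D = [D_1 ... D_K]
F_block = T @ D.T                    # F = T D^H (real case: ^H -> ^T)
F_sum = sum(t @ d.T for t, d in zip(Tk, Dk))
print(np.allclose(F_block, F_sum))   # True
```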

9 Appendix 2: Proof of Theorem 2

The MSE in (9a) can be rewritten as
$$\begin{array}{*{20}l} E_{k}&= \text{tr}\left[\mathbf{I}_{N_{\mathrm{s}},k} + \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H} \bar{\mathbf{C}}_{k}^{-1}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k}\right]^{-1} \end{array} $$
(55)
$$\begin{array}{*{20}l} &= \text{tr}\left(\mathbf{I}_{N_{\mathrm{s}},k} - \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\left(\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\right.\right.\\ &\quad\left.\left.+ \bar{\mathbf{C}}_{k}\right)^{-1}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k}\right) \end{array} $$
(56)
$$\begin{array}{*{20}l} &=\text{tr}\left(\mathbf{I}_{N_{\mathrm{s}},k} - \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\left(\mathbf{G}_{k}\mathbf{F}{\boldsymbol\Psi}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\right.\right.\\ &\left.\left.\quad+\sigma_{\mathrm{d}}^{2}\mathbf{I}_{N_{\mathrm{d}},k}\right)^{-1}\mathbf{G}_{k}\mathbf{F}\mathbf{H}_{k}\mathbf{B}_{k}\right) \end{array} $$
(57)
$$\begin{array}{*{20}l} &=\text{tr}\left(\mathbf{I}_{N_{\mathrm{s}},k} - \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\left[{\boldsymbol\Psi}^{-1} - \left({\boldsymbol\Psi}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\mathbf{F}{\boldsymbol\Psi}\right.\right.\right.\\ &\left.\left.\left.\quad+ {\boldsymbol\Psi}\right)^{-1}\right]\mathbf{H}_{k}\mathbf{B}_{k}\right) \end{array} $$
(58)
$$\begin{array}{*{20}l} &=\text{tr}\left(\mathbf{I}_{N_{\mathrm{s}},k} + \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}_{\bar{k}}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1}\\ &\quad+\text{tr}\left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\left({\boldsymbol\Psi}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\mathbf{F}{\boldsymbol\Psi} + {\boldsymbol\Psi}\right)^{-1}\times\mathbf{H}_{k}\mathbf{B}_{k}\right), \end{array} $$
(59)
where the matrix inversion lemma \((\mathbf{A}+\mathbf{B}\mathbf{C}\mathbf{D})^{-1}=\mathbf{A}^{-1}-\mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D}\mathbf{A}^{-1}\mathbf{B}+\mathbf{C}^{-1}\right)^{-1}\mathbf{D}\mathbf{A}^{-1}\) is used to obtain (56) and the first term in (59), whereas the matrix identity \(\mathbf{B}^{H}\left(\mathbf{B}\mathbf{C}\mathbf{B}^{H}+\mathbf{I}\right)^{-1}\mathbf{B}=\mathbf{C}^{-1}-\left(\mathbf{C}\mathbf{B}^{H}\mathbf{B}\mathbf{C}+\mathbf{C}\right)^{-1}\) is used to obtain (58). Note that the first term in (59) does not depend on F. Hence, for given source matrices, the problem of optimizing F can be simplified as
$$\begin{array}{*{20}l} \min_{\mathbf{F}} & \quad\text{tr}\left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\left({\boldsymbol\Psi}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\mathbf{F}{\boldsymbol\Psi} + {\boldsymbol\Psi}\right)^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right) \end{array} $$
(60a)
$$\begin{array}{*{20}l} \text{s.t.} & \quad\text{tr}\left(\mathbf{F} {\boldsymbol \Psi} \mathbf{F}^{H}\right)\leq P_{\mathrm{r}}. \end{array} $$
(60b)
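Both algebraic facts invoked in the derivation above, the matrix inversion lemma and the identity used to reach (58), can be sanity-checked numerically. The sketch below is illustrative only (random complex matrices with arbitrary dimensions and randomly generated Hermitian positive definite A and C); it is not part of the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
inv = np.linalg.inv

# Random complex test matrices; the dimensions are arbitrary illustrative choices.
m, n = 4, 3
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
A = A @ A.conj().T + m * np.eye(m)          # Hermitian positive definite
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = C @ C.conj().T + n * np.eye(n)          # Hermitian positive definite
D = B.conj().T

# Matrix inversion lemma:
# (A + B C D)^-1 = A^-1 - A^-1 B (D A^-1 B + C^-1)^-1 D A^-1
lhs = inv(A + B @ C @ D)
rhs = inv(A) - inv(A) @ B @ inv(D @ inv(A) @ B + inv(C)) @ D @ inv(A)
assert np.allclose(lhs, rhs)

# Identity used to reach (58):
# B^H (B C B^H + I)^-1 B = C^-1 - (C B^H B C + C)^-1
lhs2 = B.conj().T @ inv(B @ C @ B.conj().T + np.eye(m)) @ B
rhs2 = inv(C) - inv(C @ B.conj().T @ B @ C + C)
assert np.allclose(lhs2, rhs2)
```

Both checks hold to machine precision for any Hermitian positive definite A and C of compatible dimensions.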
By introducing \(\tilde {\mathbf {F}} = \mathbf {F} {\boldsymbol \Psi }^{\frac {1}{2}}\), problem (60) can be rewritten as
$$\begin{array}{*{20}l} \min_{\tilde{\mathbf{F}}}&\quad\text{tr}\left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol \Psi}^{-\frac{1}{2}}\left(\tilde{\mathbf{F}}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\tilde{\mathbf{F}} + \mathbf{I}_{N_{\mathrm{r}}}\right)^{-1}{\boldsymbol \Psi}^{-\frac{1}{2}}\mathbf{H}_{k}\mathbf{B}_{k}\right) \end{array} $$
(61a)
$$\begin{array}{*{20}l} \text{s.t.} &\quad \text{tr}\left(\tilde{\mathbf{F}}\tilde{\mathbf{F}}^{H}\right)\leq P_{\mathrm{r}}. \end{array} $$
(61b)

Let us write the eigenvalue decomposition (EVD) \(\mathbf {G}_{k}^{H}\mathbf {G}_{k} = \mathbf {V}_{\mathrm {g}}{\boldsymbol {\Lambda }}_{\mathrm {g}}\mathbf {V}_{\mathrm {g}}^{H}\) and the singular value decomposition (SVD) \({\boldsymbol {\Psi }}^{-\frac {1}{2}}\mathbf {H}_{k}\mathbf {B}_{k} = \mathbf {U}_{\psi }{\boldsymbol {\Lambda }}_{\psi }\mathbf {V}_{\psi }^{H}\). The following lemma defines the optimal \(\tilde {\mathbf {F}}\).

Lemma 1

([25] Lemma 2) For matrices \(\mathbf{A}, \bar{\mathbf{T}}, \mathbf{H}\) of dimensions \(m\times n\), \(l\times m\), and \(k\times l\), respectively, with \(k,l,m\ge n\), \(r \triangleq \text{rank}(\mathbf{H}) \ge n\), and \(\text{rank}(\bar{\mathbf{T}})= n\), the solution to the optimization problem
$$\begin{array}{*{20}l} \min_{\bar{\mathbf{T}}} &\text{tr}\left(\mathbf{A}^{H}\left(\bar{\mathbf{T}}^{H}\mathbf{H}^{H}\mathbf{H}\bar{\mathbf{T}} + \mathbf{I}_{m}\right)^{-1}\mathbf{A}\right) \end{array} $$
(62a)
$$\begin{array}{*{20}l} \text{s.t.} & \text{tr}\left(\bar{\mathbf{T}}\bar{\mathbf{T}}^{H}\right)\leq p, \end{array} $$
(62b)

is given by \(\bar{\mathbf{T}} = \tilde{\mathbf{V}}_{\mathrm{h}}{\boldsymbol{\Lambda}}_{\mathrm{T}}\mathbf{U}_{\mathrm{a}}^{H}\) in terms of the SVD of \(\bar{\mathbf{T}}\). Here \(\mathbf{H} = \mathbf{U}_{\mathrm{h}}{\boldsymbol{\Sigma}}_{\mathrm{h}}\mathbf{V}_{\mathrm{h}}^{H}\) and \(\mathbf{A} = \mathbf{U}_{\mathrm{a}}{\boldsymbol{\Sigma}}_{\mathrm{a}}\mathbf{V}_{\mathrm{a}}^{H}\) are the SVDs of H and A, respectively, with the diagonal elements of \({\boldsymbol{\Sigma}}_{\mathrm{h}}\) and \({\boldsymbol{\Sigma}}_{\mathrm{a}}\) sorted in decreasing order, and \(\tilde{\mathbf{V}}_{\mathrm{h}}\) contains the leftmost n columns of \(\mathbf{V}_{\mathrm{h}}\).

According to Lemma 1, the optimal \(\tilde{\mathbf{F}}\) in (61) has the SVD \(\tilde{\mathbf{F}} = \tilde{\mathbf{V}}_{\mathrm{g}}{\boldsymbol{\Lambda}}_{\mathrm{f}}\mathbf{U}_{\psi}^{H}\), where \(\tilde{\mathbf{V}}_{\mathrm{g}}\) contains the leftmost columns of \(\mathbf{V}_{\mathrm{g}}\) corresponding to the non-zero eigenvalues. After some simple manipulations, \(\tilde{\mathbf{F}}\) can be rewritten as \(\tilde{\mathbf{F}} = \tilde{\mathbf{V}}_{\mathrm{g}}{\boldsymbol{\Lambda}}_{\mathrm{f}}{\boldsymbol{\Lambda}}_{\psi}^{-1}\mathbf{V}_{\psi}^{H}\mathbf{V}_{\psi} {\boldsymbol{\Lambda}}_{\psi}\mathbf{U}_{\psi}^{H} = \tilde{\mathbf{T}}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol \Psi}^{-\frac{1}{2}}\), where \(\tilde{\mathbf{T}} \triangleq \tilde{\mathbf{V}}_{\mathrm{g}}{\boldsymbol{\Lambda}}_{\mathrm{f}}{\boldsymbol{\Lambda}}_{\psi}^{-1}\mathbf{V}_{\psi}^{H}\). Hence, \(\mathbf{F} = \tilde{\mathbf{T}}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol \Psi}^{-1}\). Interestingly, this can be written as \(\mathbf{F} = \tilde{\mathbf{T}}\tilde{\mathbf{D}}^{H}\) with \(\tilde{\mathbf{D}} \triangleq {\boldsymbol \Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\), which is structurally identical to the form established in Theorem 1.

Applying this structure of the relay matrix, the second term in (59) can be written as
$$\begin{array}{@{}rcl@{}} &&\text{tr} \left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\left({\boldsymbol\Psi}\mathbf{F}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\mathbf{F}{\boldsymbol\Psi} + {\boldsymbol\Psi}\right)^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)\\ &&\quad= \text{tr} \left(\!\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\!\left(\!{\boldsymbol\Psi}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\tilde{\mathbf{T}}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\tilde{\mathbf{T}}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}{\boldsymbol\Psi} +\! {\boldsymbol\Psi\!}\right)^{-1}\!\mathbf{H}_{k}\mathbf{B}_{k}\right)\\ &&\quad=\text{tr} \left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\left({\boldsymbol\Psi}^{-1} - {\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}{\vphantom{\sum_{a}^{a}}}\right.\right.\right.\\ &&\qquad\;\;\left.\left.\left.+\left(\tilde{\mathbf{T}}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\tilde{\mathbf{T}}\right)^{-1}\right)^{-1}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\right)\mathbf{H}_{k}\mathbf{B}_{k}\right)\\ &&\quad=\text{tr} \left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k} - \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}{\vphantom{\sum_{a}^{a}}}\right.\right.\\ && \quad\qquad\left.\left. + \left(\tilde{\mathbf{T}}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\tilde{\mathbf{T}}\right)^{-1}\right)^{-1}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)\\ &&\quad=\text{tr}\left(\left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1} + \tilde{\mathbf{T}}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\tilde{\mathbf{T}}\right)^{-1}. \end{array} $$
(63)
Thus the MSE in (9a) can be expressed as the sum of two MSEs given by
$$\begin{array}{@{}rcl@{}} E_{k} &=&\text{tr}\left(\mathbf{I}_{N_{\mathrm{s}},k} + \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}_{\bar k}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1}\\ && + \text{tr} \left(\left(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\right)^{-1} + \tilde{\mathbf{T}}^{H}\mathbf{G}_{k}^{H}\mathbf{G}_{k}\tilde{\mathbf{T}}\right)^{-1}. \end{array} $$
(64)
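The equality between (55) and the two-term decomposition (64) can be checked numerically. The sketch below is an illustrative verification, not part of the paper: it takes a single pair (K = 1, so that \({\boldsymbol\Psi}_{\bar k}\) reduces to \(\sigma_{\mathrm{r}}^{2}\mathbf{I}\)), unit destination-noise variance (as in the step from (57) to (58)), arbitrary illustrative dimensions, and an arbitrary \(\tilde{\mathbf{T}}\) combined with the Theorem 1 structure of F:

```python
import numpy as np

rng = np.random.default_rng(1)
inv = np.linalg.inv

def crandn(*shape):
    """Random complex Gaussian matrix (illustrative channel/precoder draw)."""
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Single pair (K = 1) with illustrative dimensions; sigma_d^2 = 1.
Ns, Nr, Nd = 2, 4, 3
H, G, B = crandn(Nr, Ns), crandn(Nd, Nr), crandn(Ns, Ns)
sigma_r2 = 0.5

HB = H @ B
Psi = HB @ HB.conj().T + sigma_r2 * np.eye(Nr)   # relay receive covariance
Psi_bar = sigma_r2 * np.eye(Nr)                  # K = 1: no interference term

# Relay matrix with the structure F = T~ B^H H^H Psi^{-1}, arbitrary T~
T = crandn(Nr, Ns)
F = T @ HB.conj().T @ inv(Psi)

# MSE computed directly from (55); C_bar is interference-plus-noise covariance
C_bar = G @ F @ Psi_bar @ F.conj().T @ G.conj().T + np.eye(Nd)
X = G @ F @ HB
E_direct = np.trace(inv(np.eye(Ns) + X.conj().T @ inv(C_bar) @ X)).real

# MSE from the two-term decomposition (64)
t1 = np.trace(inv(np.eye(Ns) + HB.conj().T @ inv(Psi_bar) @ HB)).real
M = HB.conj().T @ inv(Psi) @ HB
t2 = np.trace(inv(inv(M) + T.conj().T @ G.conj().T @ G @ T)).real

assert np.isclose(E_direct, t1 + t2)
```

The assertion holds for any choice of \(\tilde{\mathbf{T}}\), since the decomposition relies only on the structure of F, not on its optimality.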

10 Appendix 3: Proof of Proposition 1

Assuming that the first-hop SNR is reasonably high, it follows that \(\sum _{j=1}^{K}\mathbf {H}_{j}\mathbf {B}_{j}\mathbf {B}_{j}^{H}\mathbf {H}_{j}^{H} \gg \sigma _{\mathrm {r}}^{2}\mathbf {I}_{N_{\mathrm {r}}}\), where \(\mathbf{A} \gg \mathbf{B}\) means that the eigenvalues of \(\mathbf{A}-\mathbf{B}\) are much greater than zero. Hence,
$$\begin{array}{@{}rcl@{}} \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol{\Psi}}^{-1}\mathbf{H}_{k}\mathbf{B}_{k} &=& \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\left(\sum\limits_{j=1}^{K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H} + \sigma_{\mathrm{r}}^{2}\mathbf{I}_{N_{\mathrm{r}}}\right)^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\\ &&\!\!\!\!\!\!\!\approx \mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\left(\sum\limits_{j=1}^{K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}\right)^{-1}\mathbf{H}_{k}\mathbf{B}_{k}. \end{array} $$
(65)
Let \(\mathbf{U}_{k}{\boldsymbol \Lambda}_{k}\mathbf{U}_{k}^{H}\) be the EVD of \(\mathbf{H}_{k}\mathbf{B}_{k}\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\). Without loss of generality, we express \(\mathbf{U}_{k} = \left[\mathbf{U}_{k}^{(\bar 0)} \,\,\mathbf{U}_{k}^{(0)}\right]\) and \({\boldsymbol \Lambda}_{k} = \left[\begin{array}{cc} {\boldsymbol \Lambda}_{k}^{(\bar 0)}\,& \mathbf{0}\\ \mathbf{0} & \mathbf{0} \end{array} \right]\), where \(\mathbf{U}_{k}^{(\bar{0})}\) and \(\mathbf{U}_{k}^{(0)}\) contain the eigenvectors (columns of \(\mathbf{U}_{k}\)) corresponding to the non-zero and zero eigenvalues, respectively, and \({\boldsymbol \Lambda}_{k}^{(\bar{0})}\) is an \(N_{\mathrm{b},k}\times N_{\mathrm{b},k}\) diagonal matrix containing the non-zero eigenvalues on its main diagonal. Thus \(\mathbf{H}_{k}\mathbf{B}_{k} = \mathbf{U}_{k}\bar{\boldsymbol{\Lambda}}_{k}^{(\bar 0)}\) where \(\bar{\boldsymbol \Lambda}_{k}^{(\bar 0)} = \left[\begin{array}{c} {\boldsymbol \Lambda}_{k}^{(\bar 0)\frac{1}{2}}\\ \mathbf{0} \end{array} \right]\). Similarly, we obtain the following EVD
$$\begin{array}{*{20}l} &\sum_{j=1\atop j\neq k}^{K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}= \mathbf{U}_{\bar k}{\boldsymbol \Lambda}_{\bar k}\mathbf{U}_{\bar k}^{H}\\ &\quad=\left[\mathbf{U}_{\bar k}^{(\bar 0)} \,\,\mathbf{U}_{\bar k}^{(0)}\right] \left[\begin{array}{cc} {\boldsymbol \Lambda}_{\bar k}^{(\bar 0)}\,\, & \mathbf{0}\\ \mathbf{0} & \mathbf{0} \end{array} \right] \left[\mathbf{U}_{\bar k}^{(\bar 0)} \,\,\mathbf{U}_{\bar k}^{(0)}\right]^{H} \end{array} $$
(66)
$$\begin{array}{*{20}l} &\quad=\left[\mathbf{U}_{\bar k}^{(0)} \,\,\mathbf{U}_{\bar k}^{(\bar 0)}\right] \left[\begin{array}{cc} \mathbf{0}\,\, & \mathbf{0}\\ \mathbf{0} & {\boldsymbol \Lambda}_{\bar k}^{(\bar 0)} \end{array} \right] \left[\mathbf{U}_{\bar k}^{(0)} \,\,\mathbf{U}_{\bar k}^{(\bar 0)}\right]^{H}. \end{array} $$
(67)
Substituting H k B k in (65) with \(\mathbf {H}_{k}\mathbf {B}_{k} = \mathbf {U}_{k}\bar {\boldsymbol \Lambda }_{k}^{(\bar {0})}\), we obtain
$$\begin{array}{*{20}l} &\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\left(\sum_{j=1}^{K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H} \mathbf{H}_{j}^{H}\right)^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\\ &\quad= \bar{\boldsymbol \Lambda}_{k}^{(\bar{0})H}\left({\boldsymbol \Lambda}_{k} + \mathbf{U}_{k}^{H}\mathbf{U}_{\bar{k}}{\boldsymbol \Lambda}_{\bar{k}}\mathbf{U}_{\bar {k}}^{H}\mathbf{U}_{k}\right)^{-1}\bar{\boldsymbol \Lambda}_{k}^{(\bar{0})}. \end{array} $$
(68)
Now we rewrite \(\mathbf {U}_{k}^{H}\mathbf {U}_{\bar {k}}\) as
$$ \mathbf{U}_{k}^{H}\mathbf{U}_{\bar{k}} = \left[\mathbf{U}_{k}^{(\bar{0})} \,\,\mathbf{U}_{k}^{(0)}\right]^{H}\left[\mathbf{U}_{\bar{k}}^{(0)} \,\,\mathbf{U}_{\bar{k}}^{(\bar{0})}\right] = \left[\begin{array}{cc} \bar{\mathbf{U}}_{k}^{(0)}\,\, & \mathbf{0}\\ \mathbf{0} & \bar{\mathbf{U}}_{k}^{(\bar 0)} \end{array} \right], $$
(69)
where \(\bar{\mathbf{U}}_{k}^{(0)}\) and \(\bar{\mathbf{U}}_{k}^{(\bar{0})}\) are \(N_{\mathrm{b},k}\times N_{\mathrm{b},k}\) and \((N_{\mathrm{r}}-N_{\mathrm{b},k})\times(N_{\mathrm{r}}-N_{\mathrm{b},k})\) unitary matrices, respectively. As a consequence, we obtain
$$\begin{array}{@{}rcl@{}} \mathbf{U}_{k}^{H}\mathbf{U}_{\bar{k}}{\boldsymbol \Lambda}_{\bar{k}}\mathbf{U}_{\bar{k}}^{H}\mathbf{U}_{k} &= &\mathbf{U}_{k}^{H}\left[\mathbf{U}_{\bar{k}}^{(0)} \,\,\mathbf{U}_{\bar{k}}^{(\bar{0})}\right] \left[\begin{array}{cc} \mathbf{0}\,\, & \mathbf{0}\\ \mathbf{0} & {\boldsymbol \Lambda}_{\bar{k}}^{(\bar{0})} \end{array} \right] \left[\mathbf{U}_{\bar{k}}^{(0)} \,\,\mathbf{U}_{\bar{k}}^{(\bar{0})}\right]^{H}\mathbf{U}_{k}\\ &=& \left[\begin{array}{cc} \mathbf{0}\,\, & \mathbf{0}\\ \mathbf{0} & \bar{\mathbf{U}}_{k}^{(\bar{0})}{\boldsymbol \Lambda}_{\bar{k}}^{(\bar{0})}\bar{\mathbf{U}}_{k}^{(\bar{0})H} \end{array} \right]. \end{array} $$
(70)
Using the identity \(\mathbf{U}^{-1}=\mathbf{U}^{H}\) for a unitary matrix \(\mathbf{U}\), we obtain
$$ \left({\boldsymbol{\Lambda}}_{k} + \mathbf{U}_{k}^{H}\mathbf{U}_{\bar{k}}{\boldsymbol{\Lambda}}_{\bar{k}}\mathbf{U}_{\bar{k}}^{H}\mathbf{U}_{k}\right)^{-1} \!= \left[\begin{array}{cc} {\boldsymbol{\Lambda}}_{k}^{(\bar{0})^{-1}}\,\, & \mathbf{0}\\ \mathbf{0} & \bar{\mathbf{U}}_{k}^{(\bar{0})}{\boldsymbol \Lambda}_{\bar{k}}^{(\bar{0})^{-1}}\bar{\mathbf{U}}_{k}^{(\bar{0})H} \end{array} \right]\!. $$
(71)
Substituting (71) into (68), we obtain
$$\begin{array}{@{}rcl@{}} &&\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}\left(\sum_{j=1}^{K}\mathbf{H}_{j}\mathbf{B}_{j}\mathbf{B}_{j}^{H}\mathbf{H}_{j}^{H}\right)^{-1} \mathbf{H}_{k}\mathbf{B}_{k}\\ &&\quad= \left[\begin{array}{cc} {\boldsymbol \Lambda}_{k}^{(\bar{0})\frac{1}{2}H} &\mathbf{0} \end{array} \right] \left[ \begin{array}{cc} {\boldsymbol \Lambda}_{k}^{(\bar{0})^{-1}}\,\, & \mathbf{0}\\ \mathbf{0} & \bar{\mathbf{U}}_{k}^{(\bar{0})}{\boldsymbol \Lambda}_{\bar{k}}^{(\bar{0})^{-1}}\bar{\mathbf{U}}_{k}^{(\bar{0})H} \end{array} \right] \left[ \begin{array}{c} {\boldsymbol \Lambda}_{k}^{(\bar{0})\frac{1}{2}}\\ \mathbf{0} \end{array} \right]\\ &&\quad={\boldsymbol \Lambda}_{k}^{(\bar{0})\frac{1}{2}H}{\boldsymbol \Lambda}_{k}^{(\bar{0})^{-1}}{\boldsymbol \Lambda}_{k}^{(\bar{0})\frac{1}{2}}=\mathbf{I}_{N_{\mathrm{b},k}}. \end{array} $$
(72)

Thus for high first-hop SNR, \(\mathbf {B}_{k}^{H}\mathbf {H}_{k}^{H}{\boldsymbol \Psi }^{-1}\mathbf {H}_{k}\mathbf {B}_{k}\) can be approximated as \(\mathbf {I}_{N_{\mathrm {b},k}}\). □
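Proposition 1 can also be illustrated numerically: with random channels and precoders, \(\mathbf{B}_{k}^{H}\mathbf{H}_{k}^{H}{\boldsymbol\Psi}^{-1}\mathbf{H}_{k}\mathbf{B}_{k}\) approaches the identity as \(\sigma_{\mathrm{r}}^{2}\to 0\). The sketch below uses illustrative dimensions chosen so that \(K N_{\mathrm{s}} \le N_{\mathrm{r}}\) (an assumption of this example, ensuring \({\boldsymbol\Psi}\) is well conditioned for generic channels):

```python
import numpy as np

rng = np.random.default_rng(2)

def crandn(*shape):
    """Random complex Gaussian matrix (illustrative channel/precoder draw)."""
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Two source-destination pairs (K = 2); dimensions are illustrative only.
K, Ns, Nr = 2, 2, 6
Hs = [crandn(Nr, Ns) for _ in range(K)]   # source-to-relay channels H_j
Bs = [crandn(Ns, Ns) for _ in range(K)]   # source precoders B_j

errs = []
for sigma_r2 in (1.0, 1e-6):              # moderate vs. high first-hop SNR
    Psi = sum(H @ B @ B.conj().T @ H.conj().T for H, B in zip(Hs, Bs)) \
        + sigma_r2 * np.eye(Nr)
    M = Bs[0].conj().T @ Hs[0].conj().T @ np.linalg.inv(Psi) @ Hs[0] @ Bs[0]
    errs.append(np.linalg.norm(M - np.eye(Ns)))

# Deviation from the identity for the two noise levels
print([f"{e:.2e}" for e in errs])
```

As \(\sigma_{\mathrm{r}}^{2}\) shrinks, the printed deviation falls toward zero, in agreement with (72).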

Declarations

Funding

This work is supported by EPSRC under grant EP/K015893/1.

Authors’ contributions

MRAK formulated the optimization problems, designed the proposed solutions, performed the numerical simulations, and prepared the initial draft as well as the revision. K-KW secured funding for the research, refined the solution approach, verified the mathematical derivations, checked and analyzed the simulation results, and improved the write-up. Both authors read and approved the final manuscript.

Authors’ information

Muhammad R. A. Khandaker received the B.Sc. degree (Hons.) in computer science and engineering from Jahangirnagar University, Dhaka, Bangladesh, in 2006, the M.Sc. degree in telecommunications engineering from East West University, Dhaka, in 2007, and the Ph.D. degree in electrical and computer engineering from Curtin University, Australia, in 2013. He was a Junior Hardware Design Engineer with Visual Magic Corporation Ltd., in 2005. He joined the Department of Computer Science and Engineering, IBAIS University, Dhaka, in 2006, as a Lecturer. In 2007, he joined the Department of Information and Communication Technology, Mawlana Bhasani Science and Technology University, as a Lecturer. He joined the Institute of Information Technology, Jahangirnagar University, Dhaka, in 2008, as a Lecturer. Since 2013, he has been a Postdoctoral Research Associate with the Department of Electronic and Electrical Engineering, University College London, UK. He received the Curtin International Postgraduate Research Scholarship for his Ph.D. studies in 2009. He also received the Best Paper Award at the 16th Asia-Pacific Conference on Communications, Auckland, New Zealand, 2010.

Kai-Kit Wong received the B.Eng., M.Phil., and Ph.D. degrees from the Hong Kong University of Science and Technology, Hong Kong, in 1996, 1998, and 2001, respectively, all in electrical and electronic engineering. He is currently a Professor of Wireless Communications with the Department of Electronic and Electrical Engineering, University College London, U.K. Prior to this, he took up faculty and visiting positions at the University of Hong Kong, Lucent Technologies, Bell-Labs, Holmdel, NJ, U.S., the Smart Antennas Research Group of Stanford University, and the Department of Engineering, the University of Hull, U.K. He is a Fellow of the IET. He serves on the Editorial Board of the IEEE WIRELESS COMMUNICATIONS LETTERS, the IEEE COMMUNICATIONS LETTERS, and the IEEE ComSoc/KICS Journal of Communications and Networks. He also served as an Editor of the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS from 2005 to 2011 and the IEEE SIGNAL PROCESSING LETTERS from 2009 to 2012.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Electronic and Electrical Engineering, University College London

References

  1. D Tse, P Viswanath, Fundamentals of Wireless Communication (Cambridge University Press, Cambridge, 2005).
  2. F Rashid-Farrokhi, L Tassiulas, KJR Liu, Joint optimal power control and beamforming in wireless networks using antenna arrays. IEEE Trans. Commun. 46, 1313–1324 (1998).
  3. J-H Chang, L Tassiulas, F Rashid-Farrokhi, Joint transmitter receiver diversity for efficient space division multiaccess. IEEE Trans. Wireless Commun. 1, 16–27 (2002).
  4. S Fazeli-Dehkordy, S Shahbazpanahi, S Gazor, Multiple peer-to-peer communications using a network of relays. IEEE Trans. Signal Process. 57, 3053–3062 (2009).
  5. BK Chalise, L Vandendorpe, Optimization of MIMO relays for multipoint-to-multipoint communications: nonrobust and robust designs. IEEE Trans. Signal Process. 58, 6355–6368 (2010).
  6. MRA Khandaker, Y Rong, Joint transceiver optimization for multiuser MIMO relay communication systems. IEEE Trans. Signal Process. 60, 5977–5986 (2012).
  7. MRA Khandaker, Y Rong, Interference MIMO relay channel: joint power control and transceiver-relay beamforming. IEEE Trans. Signal Process. 60, 6509–6518 (2012).
  8. KX Nguyen, Y Rong, Joint source and relay matrices optimization for interference MIMO relay systems, in Proc. Int. Symposium on Information Theory and Its Applications (IEEE, Melbourne, 2014).
  9. KX Nguyen, Y Rong, S Nordholm, MMSE-based joint source and relay optimization for interference MIMO relay systems. EURASIP J. Wireless Commun. Netw. 2015:73 (2015).
  10. KX Nguyen, Y Rong, S Nordholm, MMSE-based transceiver design algorithms for interference MIMO relay systems. IEEE Trans. Wireless Commun. 14, 6414–6424 (2015).
  11. KX Nguyen, Y Rong, S Nordholm, Simplified MMSE precoding design in interference two-way MIMO relay systems. IEEE Signal Process. Lett. 23, 262–266 (2016).
  12. B Rankov, A Wittneben, Achievable rate regions for the two-way relay channel, in Proc. IEEE ISIT (IEEE, Seattle, 2006), pp. 1668–1672.
  13. K-J Lee, H Sung, E Park, I Lee, Joint optimization for one and two-way MIMO AF multiple-relay systems. IEEE Trans. Wireless Commun. 9, 3671–3681 (2010).
  14. T Cui, F Gao, T Ho, A Nallanathan, Distributed space-time coding for two-way wireless relay networks, in Proc. IEEE ICC (IEEE, Beijing, 2008), pp. 3888–3892.
  15. S Avestimehr, A Khajehnejad, A Sezgin, B Hassibi, Capacity region of the deterministic multi-pair bi-directional relay network, in Proc. IEEE ISIT (IEEE, 2009).
  16. K Lee, N Lee, I Lee, Achievable degrees of freedom on MIMO two-way relay interference channels. IEEE Trans. Wireless Commun. 12, 1472–1480 (2013).
  17. AC Cirik, R Wang, Y Hua, M Latva-aho, Weighted sum-rate maximization for full-duplex MIMO interference channels. IEEE Trans. Commun. 63, 801–815 (2015).
  18. C Song, K-J Lee, I Lee, MMSE based transceiver designs in closed-loop non-regenerative MIMO relaying systems. IEEE Trans. Wireless Commun. 9, 2310–2319 (2010).
  19. Y Huang, L Yang, M Bengtsson, B Ottersten, A limited feedback joint precoding for amplify-and-forward relaying. IEEE Trans. Signal Process. 58, 1347–1357 (2010).
  20. MRA Khandaker, Y Rong, Precoding design for MIMO relay multicasting. IEEE Trans. Wireless Commun. 12, 3544–3555 (2013).
  21. MRA Khandaker, Y Rong, Transceiver optimization for multi-hop MIMO relay multicasting from multiple sources. IEEE Trans. Wireless Commun. 13, 5162–5172 (2014).
  22. RA Horn, CR Johnson, Matrix Analysis (Cambridge University Press, Cambridge, 1985).
  23. M Grant, S Boyd, CVX: Matlab software for disciplined convex programming (web page and software) (2010). http://cvxr.com/cvx.
  24. Y Nesterov, A Nemirovski, Interior Point Polynomial Algorithms in Convex Programming (SIAM, Philadelphia, 1994).
  25. Y Rong, Simplified algorithms for optimizing multiuser multi-hop MIMO relay systems. IEEE Trans. Commun. 59, 2896–2904 (2011).

Copyright

© The Author(s) 2017