# Optimum beamforming for MIMO multicasting

Baisheng Du^{1}, Yi Jiang^{2}, Xiaodong Xu^{1}, and Xuchu Dai^{1}

*EURASIP Journal on Advances in Signal Processing* **2013**:121

https://doi.org/10.1186/1687-6180-2013-121

© Du et al.; licensee Springer. 2013

**Received: **15 January 2013

**Accepted: **17 June 2013

**Published: **25 June 2013

## Abstract

This paper investigates the transmit (Tx) beamforming design to maximize the throughput of a multiple-input multiple-output multicast channel, where common information is sent from the base station to *K* users simultaneously. This so-called max-min fair beamforming problem is known to be NP-hard. When the base station is equipped with two Tx antennas, we prove that the original complex-valued beamforming problem can be transformed into a real-valued problem and the globally optimal solution can be found by exhausting at most ${C}_{K}^{1}+{C}_{K}^{2}+{C}_{K}^{3}$ hypothesis tests. Moreover, a prune and search algorithm (PASA) is proposed for finding the optimal beamformer with computational complexity $\mathcal{O}({K}^{3})$ in the worst case. When the base station has more than two Tx antennas, we develop an efficient algorithm named iterative two-dimensional optimization, which converts the original beamforming problem into a series of two-antenna subproblems by iteration; hence, the beamformer is improved by PASA iteratively. Simulation results are presented to demonstrate the superior performance of the proposed algorithms.


## 1 Introduction

In the next generation of wireless networks, spectrally efficient multicasting techniques are required to support applications such as web TV, online gaming, and software updates, where common messages are sent to a group of users simultaneously. Under the assumption that the channel state information of all users is available at the multi-antenna base station (BS), transmit (Tx) beamforming can be used to improve the performance of multicasting. Consequently, the problem of multicast beamforming has received significant attention recently [1–5].

In [1], Lopez formulated the multicast beamforming problem as maximizing the average signal-to-noise ratio (SNR) of all users, for which the optimal beamformer can be obtained by an eigendecomposition. However, this approach does not guarantee satisfactory performance for all users. In general, the performance of multicasting is determined by the user(s) with the lowest SNR. From this point of view, a practical criterion is to maximize the minimal SNR over all users. This optimization problem is referred to as max-min fair beamforming and is known to be NP-hard. The seminal work on max-min fair beamforming is [2], where semidefinite relaxation (SDR) is used to yield approximate solutions. To achieve higher throughput or reduce the implementation complexity, various iterative schemes have been proposed subsequently. In [3], a closed-form expression of the optimal beamformer is derived for the two-user case. For more than two users, a group of beamformers is calculated from different pairs of users by the closed-form expression; the best one among them is then used as the initialization of the proposed iterative algorithm, whose main idea is to improve the lowest SNR at each iteration. Furthermore, it is shown that this method is computationally much simpler while its performance is comparable to that of the SDR-based scheme. In [4], another iterative approach based on channel orthogonalization and local refinement is developed, and it provides attractive performance compared to the methods in [2] and [3]. Similar to [3], the authors in [5] also develop a closed-form solution for the beamformer design under the assumption of two users; a successive greedy algorithm based on the two-user case is then proposed to tackle the general case.
Recently, in [6], the authors considered the robust design of unicast downlink beamforming and concluded that the optimal beamforming vectors can be obtained by semidefinite relaxation when the base station is equipped with two antennas. However, that result cannot be applied to the multicast scenario investigated here. The main contributions of this paper are summarized as follows.

- (1) For the case that the BS has two Tx antennas, we show that the feasible set of the SNR vector of all users is a two-dimensional ellipsoid in *K*-dimensional Euclidean space, where *K* is the number of users. With this geometrical property, we prove that the original complex-valued optimization problem can be simplified to a real-valued problem and the optimal beamforming vector can be found by exhausting ${C}_{K}^{1}+{C}_{K}^{2}+{C}_{K}^{3}$ hypothesis tests of the bottleneck users, which are defined as the users with the lowest SNR.
- (2) To reduce the complexity of this exhaustive search, we propose a prune and search algorithm (PASA) which is guaranteed to find the globally optimal beamformer. By analyzing the probabilities of the three cases of bottleneck users, we prove that the worst-case computational complexity of PASA is $\mathcal{O}({K}^{3})$. It is shown that PASA is computationally more efficient than most existing schemes.
- (3) For the general case that the BS is equipped with more than two Tx antennas, we propose an iterative two-dimensional optimization (I2DO) algorithm which transforms the beamformer design into a sequence of two-antenna subproblems, so that PASA can be used to improve the beamformer at each iteration. When the number of users is large, the throughput achieved by the proposed beamformer improves considerably over state-of-the-art multicasting techniques.

The remainder of this paper is organized as follows. In Section 2, we introduce the multiple-input multiple-output (MIMO) multicast channel and formulate the transmit beamforming problem. In Section 3, we analyze the special case of two Tx antennas and deduce a new formulation for the beamformer design. Based on the new formulation, the PASA is proposed to obtain a globally optimal beamforming vector. For the case of more than two Tx antennas, we propose the I2DO algorithm in Section 4. Section 5 presents simulation results to verify the effectiveness of the proposed approaches. Finally, conclusions are drawn in Section 6.

Notations: We use uppercase and lowercase bold letters to denote matrices and vectors, respectively. The superscripts (·)^{T}, (·)^{∗}, and (·)^{†} stand for transpose, conjugate transpose, and pseudo-inverse, respectively. Re(·) and Im(·) denote the real part and imaginary part, respectively. ∥·∥ and ∥·∥_{F} denote the vector Euclidean norm and the matrix Frobenius norm. **I**_{n} represents an *n* × *n* identity matrix, **1**_{n} is the all-one column vector of length *n*, and **0**_{n} is the all-zero column vector of length *n*.

## 2 System model and problem statement

Consider a MIMO multicast system with *K* users, where the base station is equipped with *M* transmit antennas and the *k*-th user has *N*_{k} receive antennas. In the multicast scenario, the base station broadcasts common messages to all *K* users. The received signal of the *k*-th user, ${\mathbf{y}}_{k}\in {\u2102}^{{N}_{k}}$, is

$${\mathbf{y}}_{k}={\stackrel{~}{\mathbf{H}}}_{k}\mathbf{s}+{\mathbf{n}}_{k},\quad k=1,\dots ,K, \qquad (1)$$

where ${\stackrel{~}{\mathbf{H}}}_{k}\in {\u2102}^{{N}_{k}\times M}$ is the channel between the base station and the *k*-th user, $\mathbf{s}\in {\u2102}^{M}$ is the transmit signal, and ${\mathbf{n}}_{k}\in {\u2102}^{{N}_{k}}$ is the additive complex Gaussian noise vector at the *k*-th user. We assume that ${\stackrel{~}{\mathbf{H}}}_{k},k=1,\dots ,K$, are known to the base station by exploiting channel reciprocity or through a feedback channel. Moreover, we consider the block fading channel model, i.e., the multicast channel remains constant during a transmission block and changes from one block to another. Without loss of generality (w.l.o.g.), we also assume that the additive noise follows the distribution ${\mathbf{n}}_{k}\sim \mathcal{C}\mathcal{N}(\mathbf{0},{\mathbf{I}}_{{N}_{k}})$^{a}.

The transmit signal is formed by beamforming, $\mathbf{s}=\sqrt{P}\mathbf{w}s$, where $\mathbf{w}\in {\u2102}^{M}$ is the unit-norm beamforming vector, *s* is the information symbol with zero mean and unit variance, and *P* denotes the transmit power. Hence, the received signal at the *k*-th user can be rewritten as

$${\mathbf{y}}_{k}=\sqrt{P}{\stackrel{~}{\mathbf{H}}}_{k}\mathbf{w}s+{\mathbf{n}}_{k}={\mathbf{H}}_{k}\mathbf{w}s+{\mathbf{n}}_{k},\qquad (2)$$

where, for the sake of clarity, ${\mathbf{H}}_{k}=\sqrt{P}{\stackrel{~}{\mathbf{H}}}_{k}$ is referred to as the equivalent channel throughout this paper.

With this model, the received SNR of the *k*-th user is ${\gamma}_{k}(\mathbf{w})={\mathbf{w}}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}\mathbf{w}$ (3)^{b}. The max-min fair beamforming problem is then formulated as

$$\max_{\parallel \mathbf{w}{\parallel}^{2}\le 1}\ \min_{k}\ {\mathbf{w}}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}\mathbf{w}.\qquad (4)$$

Since the optimal beamformer **w**_{opt} in Eq. (4) must satisfy ∥**w**_{opt}∥^{2} = 1, Eq. (4) is equivalent to

$$\max_{\parallel \mathbf{w}{\parallel}^{2}=1}\ \min_{k}\ {\mathbf{w}}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}\mathbf{w}.\qquad (5)$$

It is proven in[2] that the max-min fair beamforming problem (5) is non-convex and NP-hard in general. To solve this problem, we first analyze the special case of two transmit antennas before addressing the general cases.
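To make the max-min objective in Eq. (5) concrete, here is a small numerical sketch; the channel dimensions and the random-sampling baseline are illustrative choices, not the paper's method:

```python
import numpy as np

def min_snr(w, channels):
    """Objective of the max-min fair problem (5): the lowest received SNR
    over all K users for a unit-norm beamformer w."""
    return min(np.real(w.conj() @ (H.conj().T @ H) @ w) for H in channels)

rng = np.random.default_rng(0)
M, K = 2, 8  # Tx antennas and users (illustrative sizes)
channels = [rng.standard_normal((1, M)) + 1j * rng.standard_normal((1, M))
            for _ in range(K)]

# Crude baseline: the best of many random unit-norm beamformers. The
# algorithms of Sections 3 and 4 replace this naive sampling.
best = -np.inf
for _ in range(2000):
    w = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    w /= np.linalg.norm(w)
    best = max(best, min_snr(w, channels))
print(best)  # lowest user SNR achieved by the best sampled beamformer
```

For *M* = 2, PASA finds the exact optimum of this objective instead of sampling.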

## 3 Case of two Tx antennas

In this section, the special case that the base station has two Tx antennas is investigated, i.e., *M* = 2. The results obtained here will be used for the general case in Section 4. It is worth mentioning that early forms of some lemmas in this section were established in the conference version of this paper [7]. For the sake of completeness, we recall the refined lemmas here.

### 3.1 Feasible set of the SNR vector

Defining $\mathit{\gamma}(\mathbf{w})={[{\gamma}_{1}(\mathbf{w}),\dots ,{\gamma}_{K}(\mathbf{w})]}^{T}$ as the SNR vector of the *K* users, we have the following theorem.

#### Theorem 1 ([7])

*For the case of M* = 2, *the feasible set of the SNR vector can be equivalently expressed as*

$$\Gamma =\left\{{\mathbf{z}}_{c}+\mathbf{G}\mathbf{x}:\mathbf{x}\in {\mathbb{R}}^{3},\parallel \mathbf{x}\parallel =1\right\},\qquad (8)$$

*where the center* ${\mathbf{z}}_{c}\in {\mathbb{R}}^{K}$ *and the matrix* $\mathbf{G}\in {\mathbb{R}}^{K\times 3}$ *are built row by row from the entries a*_{k}, *b*_{k}, *and c*_{k} *of the matrix* ${\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}$; *the explicit expressions, Eqs. (9)–(11), are given in* [7].
#### Proof

The proof is omitted. Please see[7] for details. □

For a given real matrix **A**, a hyper-ellipsoid is defined by {**Ax** : ∥**x**∥^{2} = 1} [8]. From Eq. (8), we can see that the feasible set of the SNR vector is a two-dimensional ellipsoid embedded in *K*-dimensional space with center **z**_{c}.
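The geometry can be checked numerically. The sketch below uses one parameterization consistent with Theorem 1 (our own reconstruction; the paper's exact definitions of *a*_{k}, *b*_{k}, *c*_{k} in Eqs. (9)–(11) may use a different convention): with **w** = [cos *θ*, sin *θ* e^{jϕ}]^{T}, each user's SNR is affine in the unit vector **x** = [cos 2*θ*, sin 2*θ* cos *ϕ*, sin 2*θ* sin *ϕ*]^{T}.

```python
import numpy as np

def snr_affine_form(A):
    """For a 2x2 Hermitian matrix A = H_k^* H_k, return (h_k, g_k) such that
    w^* A w = h_k + g_k^T x with x = [cos 2t, sin 2t cos p, sin 2t sin p]^T
    and w = [cos t, sin t e^{jp}]^T -- one possible realization of Theorem 1."""
    h = 0.5 * np.real(A[0, 0] + A[1, 1])          # half the Frobenius energy
    g = np.array([0.5 * np.real(A[0, 0] - A[1, 1]),
                  np.real(A[0, 1]), -np.imag(A[0, 1])])
    return h, g

rng = np.random.default_rng(1)
H = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
A = H.conj().T @ H
theta, phi = 0.7, 2.1  # arbitrary angles
w = np.array([np.cos(theta), np.sin(theta) * np.exp(1j * phi)])
x = np.array([np.cos(2 * theta),
              np.sin(2 * theta) * np.cos(phi),
              np.sin(2 * theta) * np.sin(phi)])  # unit vector in R^3
h, g = snr_affine_form(A)
direct = np.real(w.conj() @ A @ w)
assert abs(direct - (h + g @ x)) < 1e-10  # the two SNR forms agree
```

Since ∥**x**∥ = 1 always, the SNR vector indeed traces an ellipsoid centered at the vector of *h*_{k} values.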

Denote ${\mathbf{g}}_{k}\in {\mathbb{R}}^{3}$ as the *k*-th column of matrix **G**^{T} and ${h}_{k}$ as the *k*-th entry of ${\mathbf{z}}_{c}$. With **Theorem** 1, problem (5) can be reformulated as

$$\max_{\mathbf{x},\lambda}\ \lambda \quad \text{s.t.}\quad {h}_{k}+{\mathbf{g}}_{k}^{T}\mathbf{x}\ge \lambda ,\ k=1,\dots ,K,\quad \parallel \mathbf{x}\parallel =1.\qquad (12)$$

By **Theorem** 1, the vector **x** in Eq. (12) has a one-to-one relationship with the beamforming vector **w** in Eq. (5) through the parameterization in Eqs. (13a) and (13b), where *θ* ∈ [0, *π*/2) and *ϕ* ∈ [0, 2*π*). Let (**x**_{opt}, *λ*_{opt}) denote the optimal solution to Eq. (12). If (**x**_{opt}, *λ*_{opt}) is given, then we can calculate *θ*_{opt} and *ϕ*_{opt} from Eq. (13a), and the optimal beamforming vector **w**_{opt} follows from Eq. (13b).

Due to the constraint ∥**x**∥ = 1, Eq. (12) is a non-convex problem[9]. Nevertheless, it is easier than problem (5) since Eq. (12) is an optimization problem in the real space. In the following, we develop a so-called prune and search algorithm (PASA) to find a globally optimal solution to Eq. (12).

### 3.2 Prune and search algorithm

*h*(

**x**) =

**x**

^{ T }

**x**− 1. Now, problem (12) can be written in the standard form

where *μ*
_{0},*ν*,*μ*
_{
k
},∀*k* are scalars and they cannot be all equal to zero.

For the optimal **x**, by condition (15a), we have $\mathbf{x}=\frac{1}{2\nu}{\sum}_{k=1}^{K}{\mu}_{k}{\mathbf{g}}_{k}$. Let $\mathcal{K}\subseteq \{1,\cdots \phantom{\rule{0.3em}{0ex}},K\}$ denote the set of active constraints, i.e., ${g}_{k}(\mathbf{x},\lambda )=0,k\in \mathcal{K}$. With Eq. (15b), we know that *μ*_{k} = 0 for $k\notin \mathcal{K}$. Hence, we can see that

$$\mathbf{x}=\sum _{k\in \mathcal{K}}{\omega}_{k}{\mathbf{g}}_{k},\qquad (16)$$

where ${\omega}_{k}=\frac{{\mu}_{k}}{2\nu},k\in \mathcal{K}$, are of the same sign.

For the sake of clarity, we refer to the user(s) with the lowest SNR as bottleneck user(s). With this definition and the FJ conditions, we can see that the optimal solution to Eq. (12) must have the form $\mathbf{x}=\sum _{k\in \mathcal{K}}{\omega}_{k}{\mathbf{g}}_{k}$, where {*ω*_{k}} are of the same sign and $\mathcal{K}$ also denotes the set of bottleneck users.

In the lemma below, we show that the globally optimal solution to Eq. (12) can be found by a series of hypothesis tests.

#### Lemma 1 ([7])

*The globally optimal solution to Eq. (12) can be found through exhausting* ${C}_{K}^{1}+{C}_{K}^{2}+{C}_{K}^{3}$ *hypothesis tests of the bottleneck users.*

#### Proof

Denote {*i*_{1},…,*i*_{L}} as the indexes of the bottleneck users. Requiring their SNRs to be equal, ${h}_{{i}_{l}}+{\mathbf{g}}_{{i}_{l}}^{T}\mathbf{x}=\lambda ,l=1,\dots ,L$, yields the linear system (18a), ${\mathbf{G}}_{a}{\mathbf{x}}_{a}={\mathbf{h}}_{a}$, in the unknown ${\mathbf{x}}_{a}={[{\mathbf{x}}^{T},\lambda ]}^{T}$, together with the unit-norm condition (18b), ∥**x**∥ = 1. Note that ${\mathbf{G}}_{a}\in {\mathbb{R}}^{L\times 4}$, ${\mathbf{x}}_{a}\in {\mathbb{R}}^{4\times 1}$, and ${\mathbf{h}}_{a}\in {\mathbb{R}}^{L\times 1}$.

Since **H**_{k}, *k* = 1,…,*K*, are independent random fading channels, the elements of **g**_{l} and *h*_{l} are random variables, and **G**_{a} is full-rank with probability 1^{c}. For *L* ≥ 5, Eq. (18a) has no solution in general; for *L* = 4, there is a single solution ${\mathbf{x}}_{a}={\mathbf{G}}_{a}^{-1}{\mathbf{h}}_{a}$ to Eq. (18a), but Eq. (18b) is not fulfilled in general. If *L* ≤ 3, by letting the orthogonal basis of the null space of **G**_{a} be ${\mathbf{N}}_{{G}_{a}}=\phantom{\rule{0.3em}{0ex}}[\phantom{\rule{0.3em}{0ex}}{\mathit{\eta}}_{1},{\mathit{\eta}}_{2},\cdots \phantom{\rule{0.3em}{0ex}}],{\mathbf{N}}_{{G}_{a}}\in {\u2102}^{4\times (4-L)}$, we can write the general form of solutions to Eq. (18a) as

$${\mathbf{x}}_{a}={\mathbf{G}}_{a}^{\u2020}{\mathbf{h}}_{a}+{\mathbf{N}}_{{G}_{a}}\mathit{\rho},\qquad (19)$$

where $\mathit{\rho}\in {\u2102}^{(4-L)\times 1}$ is an arbitrary vector. With the degrees of freedom introduced by **ρ**, it is possible to find a solution which satisfies Eq. (18b)^{d}. Since the number of bottleneck users is at most three, the globally optimal solution to Eq. (12) can be found by exhausting ${C}_{K}^{1}+{C}_{K}^{2}+{C}_{K}^{3}$ hypothesis tests of the bottleneck users. □
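The *L* ≤ 3 case of the proof can be sketched numerically as follows; **G**_{a} and **h**_{a} are random stand-ins for the system in Eqs. (17)–(18), and an SVD-based null space replaces the basis *η*_{1}, *η*_{2}, …:

```python
import numpy as np

def equality_solutions(G_a, h_a):
    """General solution of G_a x_a = h_a: a particular solution via the
    pseudo-inverse plus the orthonormal null-space basis of G_a (cf. Eq. (19))."""
    x0 = np.linalg.pinv(G_a) @ h_a
    _, s, Vt = np.linalg.svd(G_a)
    rank = int(np.sum(s > 1e-12))
    N = Vt[rank:].T  # columns span the null space of G_a
    return x0, N

rng = np.random.default_rng(2)
L = 2  # hypothesized number of bottleneck users (at most 3)
G_a = rng.standard_normal((L, 4))  # stand-in for the L x 4 system
h_a = rng.standard_normal(L)
x0, N = equality_solutions(G_a, h_a)
rho = rng.standard_normal(N.shape[1])  # free parameters, cf. Eq. (19)
x_a = x0 + N @ rho
assert np.allclose(G_a @ x_a, h_a)     # Eq. (18a) holds for any rho
assert N.shape[1] == 4 - L             # full-rank G_a leaves 4 - L degrees of freedom
```

The remaining freedom in **ρ** is what makes it possible to also satisfy the unit-norm condition (18b).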

#### Remark 1

Even if the number of bottleneck users happens to be more than three, i.e., *L* > 3, albeit with zero probability, the optimal solution can still be calculated by hypothesis tests of three bottleneck users, since the solution to Eq. (17) is uniquely determined by any three of the *L* bottleneck users.

Although the optimal solution can be obtained by exhausting all possible combinations of bottleneck users, the computational complexity can be rather high especially when the number of users is large. In the following, we develop several lemmas to dramatically reduce the number of hypothesis tests before identifying the optimal solution.

#### Lemma 2 (Sufficient condition for optimality[7])

*Given a set of L* ≤ min(3, *K*) *constraints with indexes i*_{1},…,*i*_{L}, *we denote* (**x**_{0}, *λ*_{0}) *as an optimal solution to the relaxed problem that keeps only these L constraints of Eq. (12). Then, we have*

$${\lambda}_{\text{opt}}\le {\lambda}_{0},$$

*where λ*_{opt} *is an optimum solution to Eq. (12). Furthermore, if* **x**_{0} *satisfies* ${h}_{k}+{\mathbf{g}}_{k}^{T}{\mathbf{x}}_{0}\ge {\lambda}_{0}$ *for k* = 1,…,*K*, *then λ*_{0} = *λ*_{opt}, *and* (**x**_{0}, *λ*_{0}) *is also an optimum solution to Eq. (12).*

#### Proof

The proof is omitted. Please refer to[7] for details. □

On the other hand, if we have *N* candidate solutions denoted as {**x**_{n}, ∥**x**_{n}∥^{2} = 1, *n* = 1,…,*N*}, then we also have a lower bound of *λ*_{opt}:

$${\lambda}_{\text{opt}}\ge \max_{n}\ \min_{k}\ \left({h}_{k}+{\mathbf{g}}_{k}^{T}{\mathbf{x}}_{n}\right).\qquad (22)$$

The lower bound LB and upper bound UB are updated and become tighter and tighter as the search proceeds. With these bounds, most hypothesis tests can be pruned, and hence, the complexity of the exhaustive search is reduced drastically; this is the origin of the name prune and search algorithm.

In PASA, the hypothesis tests of one bottleneck user are checked first, then those of two bottleneck users, and so on.
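A minimal numerical illustration of these bounds, with random *h*_{k} and **g**_{k} standing in for the quantities of Eq. (12): every single-constraint relaxation yields the upper bound *h*_{k} + ∥**g**_{k}∥ via Lemma 2, while any unit-norm candidate yields a lower bound, so LB ≤ *λ*_{opt} ≤ UB must always hold.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 16
h = rng.standard_normal(K)
G = rng.standard_normal((K, 3))  # rows are g_k^T (illustrative stand-ins)

# Upper bound: relaxing problem (12) to a single constraint k gives the
# optimum h_k + ||g_k|| (Cauchy-Schwarz); the tightest such bound is the min.
UB = np.min(h + np.linalg.norm(G, axis=1))

# Lower bound: any feasible unit-norm candidate x_n achieves min_k h_k + g_k^T x_n.
LB = -np.inf
for _ in range(500):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)
    LB = max(LB, np.min(h + G @ x))

assert LB <= UB + 1e-12  # lambda_opt is bracketed: LB <= lambda_opt <= UB
```

PASA interleaves these two updates so that hypotheses whose achievable *λ* falls outside [LB, UB] are discarded without further computation.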

#### Lemma 3 (Case of one bottleneck user[7])

*If there is only one bottleneck user in Eq. (12), then the bottleneck user must be the one with index*

$$i=\arg \min_{k}\ \left({h}_{k}+\parallel {\mathbf{g}}_{k}\parallel \right),\qquad (23)$$

*and the optimum solution to Eq. (12) is* (**g**_{i}/∥**g**_{i}∥, *h*_{i} + ∥**g**_{i}∥).

#### Proof

Please see Appendix 1. □

If we have found a globally optimum solution to Eq. (12) in the hypothesis tests of one bottleneck user, obviously there is no need to check the rest of hypothesis tests, and the PASA routine terminates. Otherwise, we turn to check the case of two bottleneck users.
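The one-bottleneck-user test of Lemma 3 can be sketched as follows (random data; the acceptance check mirrors the sufficient condition of Lemma 2):

```python
import numpy as np

def one_bottleneck_test(h, G, tol=1e-12):
    """Lemma 3: only i = argmin_k (h_k + ||g_k||) can be the single bottleneck
    user. Candidate: x = g_i/||g_i||, lambda = h_i + ||g_i||. Accept iff every
    user's SNR is at least lambda (Lemma 2's sufficient condition)."""
    norms = np.linalg.norm(G, axis=1)
    i = int(np.argmin(h + norms))
    x = G[i] / norms[i]
    lam = h[i] + norms[i]
    ok = bool(np.all(h + G @ x >= lam - tol))
    return x, lam, ok, i

rng = np.random.default_rng(4)
h = rng.standard_normal(8)
G = rng.standard_normal((8, 3))
x, lam, ok, i = one_bottleneck_test(h, G)
assert abs(np.linalg.norm(x) - 1.0) < 1e-12
assert abs((h[i] + G[i] @ x) - lam) < 1e-12  # user i attains exactly lambda
# `ok` tells whether the single-bottleneck hypothesis is accepted;
# if False, PASA proceeds to the two-bottleneck hypotheses.
```
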

Suppose the bottleneck users are *i*, *j*; then the optimum solution to Eq. (12) must be of the form **x** = *α* **g**_{i} + *β* **g**_{j} with *α*, *β* being of the same sign according to the FJ conditions. Note that *α* = *ω*_{i}, *β* = *ω*_{j} [cf. Eq. (16)]. Next, we show the process of calculating the candidate solutions. Denote **G**_{ij} = [**g**_{i} **g**_{j}] and **h**_{ij} = [*h*_{i} *h*_{j}]^{T}. From the assumption of bottleneck users, i.e., ${h}_{i}+{\mathbf{g}}_{i}^{T}\mathbf{x}={h}_{j}+{\mathbf{g}}_{j}^{T}\mathbf{x}=\lambda$, the coefficients *α*, *β* can be calculated as functions of *λ* by Eq. (25), and the solution **x** follows in Eq. (26). Since ∥**x**∥ = 1, we obtain a quadratic equation with regard to *λ* in Eqs. (27)–(28). If *b*^{2} − *a* *c* ≥ 0, from Eqs. (26), (27), and (28), the SNR of the bottleneck users is given by the two roots in Eq. (29).

Note that there are two solutions to *λ*; hence we need to verify both of them. With *α*, *β* from Eq. (25), the candidate solution is **x** = *α* **g**_{i} + *β* **g**_{j}.

#### Lemma 4 (Case of two bottleneck users[7])

*For the candidate solution derived above, if α* > 0, *β* > 0, *and λ* ∈ [LB,UB], *then the upper bound can be tightened as* UB = *λ with λ given in Eq. (*
*29*
). *In particular, if this solution satisfies*
${h}_{l}+{\mathbf{g}}_{l}^{T}\mathbf{x}\ge \lambda ,\phantom{\rule{2.83795pt}{0ex}}l\in \{1,\cdots \phantom{\rule{0.3em}{0ex}},K\}$, then the pair (**x**,*λ*) is an optimal solution to Eq. (12).

#### Proof

Please see Appendix 2. □
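The two-bottleneck computation above can be re-derived numerically under the stated structure (our sketch, not the paper's closed form (25)–(29); the scalar discriminant computed below plays the role of *b*^{2} − *a* *c*, though the paper's intermediate quantities may be arranged differently): with **x** = *α***g**_{i} + *β***g**_{j} and equal SNR *λ*, the coefficients satisfy **M**[*α*, *β*]^{T} = *λ***1** − **h**_{ij} with **M** = **G**_{ij}^{T}**G**_{ij}, and ∥**x**∥^{2} = 1 yields a quadratic in *λ*.

```python
import numpy as np

def two_bottleneck_candidates(h, G, i, j):
    """Hypothesis: users i, j are the bottleneck users. Solve
    h_i + g_i^T x = h_j + g_j^T x = lambda with x = alpha*g_i + beta*g_j
    and ||x|| = 1, numerically."""
    Gij = np.column_stack([G[i], G[j]])
    hij = np.array([h[i], h[j]])
    Minv = np.linalg.inv(Gij.T @ Gij)
    ones = np.ones(2)
    # (lambda*1 - h)^T M^{-1} (lambda*1 - h) = 1  ->  a*l^2 - 2b*l + c = 0
    a = ones @ Minv @ ones
    b = ones @ Minv @ hij
    c = hij @ Minv @ hij - 1.0
    disc = b * b - a * c
    if disc < 0:
        return []  # users i and j can never have equal SNR on the unit sphere
    out = []
    for lam in ((b + np.sqrt(disc)) / a, (b - np.sqrt(disc)) / a):
        alpha, beta = Minv @ (lam * ones - hij)
        out.append((Gij @ np.array([alpha, beta]), lam, alpha, beta))
    return out

rng = np.random.default_rng(5)
h = rng.standard_normal(6)
G = rng.standard_normal((6, 3))
for x, lam, alpha, beta in two_bottleneck_candidates(h, G, 0, 1):
    assert abs(np.linalg.norm(x) - 1.0) < 1e-9  # unit-norm candidate
    assert abs(h[0] + G[0] @ x - lam) < 1e-9    # user 0 attains lambda
    assert abs(h[1] + G[1] @ x - lam) < 1e-9    # user 1 attains lambda
# A candidate is kept only if alpha, beta share the same sign (FJ conditions)
# and lambda falls inside [LB, UB], per Lemma 4.
```
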

#### Remark 2

When *b*^{2} − *a* *c* < 0, the equation ${h}_{i}+{\mathbf{g}}_{i}^{T}\mathbf{x}={h}_{j}+{\mathbf{g}}_{j}^{T}\mathbf{x},\parallel \mathbf{x}\parallel =1$, cannot hold. If ${h}_{i}+{\mathbf{g}}_{i}^{T}\mathbf{x}>{h}_{j}+{\mathbf{g}}_{j}^{T}\mathbf{x}$ holds for all unit-length vectors **x**, then the *i*-th user cannot be a bottleneck user since the SNR of the *j*-th user is always lower. Consequently, the *i*-th user can be eliminated from problem (12) without loss of optimality.

Suppose the bottleneck users are *i*, *j*, *k*; then the optimum solution to Eq. (12) must be of the form **x** = *α* **g**_{i} + *β* **g**_{j} + *γ* **g**_{k} with *α*, *β*, *γ* being of the same sign according to the FJ conditions. Note that *α* = *ω*_{i}, *β* = *ω*_{j}, *γ* = *ω*_{k} [cf. Eq. (16)]. Similarly, from the assumption that the three users *i*, *j*, *k* have equal SNR *λ*, we obtain the equations in Eqs. (30)–(31). Combined with ∥**x**∥ = 1, a quadratic equation in *λ* again follows. When *b*^{2} − *a* *c* < 0, the SNRs of users *i*, *j*, *k* cannot be equal, so the three users are surely not the real bottleneck users. If *b*^{2} − *a* *c* ≥ 0, the solutions *λ* to Eq. (31) are given in Eq. (34), and the corresponding *α*, *β*, *γ* follow accordingly.

Finally, the candidate solution is **x** = *α* **g**_{i} + *β* **g**_{j} + *γ* **g**_{k}.

#### Lemma 5 (Case of three bottleneck users[7])

*For the candidate solution derived above, if α* > 0,*β* > 0,*γ* > 0 *and λ* ∈ [LB,UB], *then a tighter upper bound is obtained as* UB = *λ with λ given in Eq. (*
*34*
*). In particular, if this solution satisfies*
${h}_{l}+{\mathbf{g}}_{l}^{T}\mathbf{x}\ge \lambda ,\phantom{\rule{2.83795pt}{0ex}}l\in \{1,\cdots \phantom{\rule{0.3em}{0ex}},K\}$, *then* (**x**,*λ*) *is an optimal solution to Eq. (*
*12*
*).*

#### Proof

Please see Appendix 3. □

Combining the calculation of candidate solutions with the above lemmas and remarks, the full procedure of PASA follows straightforwardly; it was originally proposed in [7]. To make this paper self-contained and to facilitate the complexity evaluation of PASA, we include an improved version of PASA in Appendix 5^{e}. As shown in the pseudo-code, the algorithm constantly updates the bounds UB and LB to prune the hypothetical combinations of bottleneck users, hence the name PASA.

#### Remark 3

The users with weak channels are more likely to be bottleneck users in general. So, before the hypothesis tests, the users can be sorted by the strength of their channels, i.e., the Frobenius norms of their channel matrices (see Eq. (9)), and the users with weaker channels should be tested first.

#### Remark 4

For a system with *K* = 32 single-antenna users, we display the process of updating LB and UB in Figure 1. We can see that the upper and lower bounds get tighter along with the hypothesis tests. When the gap between LB and UB is less than a given threshold, i.e., UB − LB ≤ *δ*, we can conclude that the gap between the best solution found so far and the optimal solution to Eq. (12) is no greater than *δ*. If *δ* is small, we can terminate the PASA search early. This scheme, referred to as the truncated PASA, offers a desirable tradeoff between performance and complexity.

### 3.3 Complexity evaluation of PASA

Let *P*
_{1},*P*
_{2},*P*
_{3} denote the probability for case of one bottleneck user, case of two bottleneck users, and case of three bottleneck users, respectively. W.l.o.g, we assume independent and identically distributed Rayleigh fading between the BS and users. When the BS is equipped with two antennas while all users are single-antenna users, we have derived the bounds for *P*
_{1},*P*
_{2},*P*
_{3}.

#### Theorem 2

*For this multicast system, the probability for case of one bottleneck user is*
${P}_{1}=\frac{1}{K}$; *the lower bound of P*
_{2} *is*
${P}_{2}^{L}=\frac{2(K-1)}{{K}^{2}}$
*and the upper bound of P*
_{2} *is*
${P}_{2}^{U}=\frac{16K(K-1)}{{(K+2)}^{3}}$; *consequently, P*
_{3} *can be bounded as*
$(1-{P}_{1}-{P}_{2}^{U})\le {P}_{3}\le (1-{P}_{1}-{P}_{2}^{L})$.

#### Proof

Please see Appendix 4 . □

From **Theorem** 2, we can see that *P*
_{3} tends to 1 as *K* increases. Therefore, the computational complexity of PASA depends only on the third case for asymptotically large *K*. Based on this observation, we can obtain a first-order estimate of the computational complexity of PASA.

Let *P*_{s} denote the probability of combinations which survive after the division line marked in part III of PASA (see Appendix 5). For a large number of users, we count the probability *P*_{s} in Table 1, which shows that *P*_{s} ≈ 0 when *K* is large. For the hypothesis tests which are terminated before the division line (with probability close to 1), the complexity involves only a constant number of vector multiplications. There are at most ${C}_{K}^{3}$ hypothesis tests in the third part of PASA; therefore, the worst-case complexity of PASA is $\mathcal{O}({K}^{3})$.

**Table 1 Probability of survival combinations**

| *K* | 8 | 16 | 24 | 32 | 40 | 48 |
|---|---|---|---|---|---|---|
| *P*_{s} | 0.0341 | 0.0226 | 0.0193 | 0.0148 | 0.0147 | 0.0121 |
| *K* | 56 | 64 | 72 | 80 | 88 | 96 |
| *P*_{s} | 0.0114 | 0.0103 | 0.0101 | 0.0091 | 0.0086 | 0.0082 |
| *K* | 104 | 112 | 120 | 128 | 136 | 144 |
| *P*_{s} | 0.0074 | 0.0071 | 0.0070 | 0.0067 | 0.0065 | 0.0054 |

## 4 Iterative two-dimensional optimization for *M* > 2

When the base station is equipped with more than two transmit antennas, the max-min fair beamforming problem (5) becomes much more complicated due to the NP-hardness. In this section, an I2DO algorithm is developed for the general case, i.e., *M* > 2.

### 4.1 The I2DO algorithm

The key idea of I2DO is to reduce the general problem to the solved case of *M* = 2. Restricting the beamforming vector **w** to the column space of a matrix $\mathbf{P}\in {\u2102}^{M\times 2},{\mathbf{P}}^{\ast}\mathbf{P}={\mathbf{I}}_{2}$, we can write **w** as

$$\mathbf{w}=\mathbf{P}\mathbf{u},\qquad (36)$$

where $\mathbf{u}\in {\u2102}^{2},\parallel \mathbf{u}\parallel =1$. Substituting Eq. (36) into Eq. (5) yields the subproblem

$$\max_{\parallel \mathbf{u}\parallel =1}\ \min_{k}\ {\mathbf{u}}^{\ast}{\mathbf{P}}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}\mathbf{P}\mathbf{u}.\qquad (37)$$

We can see that Eq. (37) is exactly the max-min fair beamforming problem with two Tx antennas. As shown in Section 3, the optimal solution to Eq. (37) can be obtained efficiently by PASA.

Let **w**_{n}, ∥**w**_{n}∥ = 1, denote the beamforming vector at the *n*-th iteration. The beamforming vector at the (*n* + 1)-th step is updated as

$${\mathbf{w}}_{n+1}={u}_{1}{\mathbf{w}}_{n}+{u}_{2}{\mathbf{v}}_{n},\qquad (38)$$

where ${\mathbf{v}}_{n}\in {\u2102}^{M}$ denotes the updating direction, which is orthogonal to **w**_{n} and of unit length. Note that *u*_{1}, *u*_{2} can be regarded as complex-valued step sizes. After defining **P** = [**w**_{n}, **v**_{n}] and **u** = [*u*_{1}, *u*_{2}]^{T}, we can obtain the optimal step size by solving Eq. (37). In other words, we find the optimal beamforming vector on the plane spanned by **w**_{n} and **v**_{n}. With this scheme, **w**_{n+1} always outperforms **w**_{n} unless *u*_{2} = 0. Hence, the objective function of Eq. (5) is monotonically increasing across iterations and converges to a stationary point. If min_{k} *γ*_{k}(**w**_{n+1}) − min_{k} *γ*_{k}(**w**_{n}) is less than a preset threshold *ε*, the I2DO algorithm is terminated. Here, ${\gamma}_{k}(\mathbf{w})={\mathbf{w}}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}\mathbf{w}$ denotes the SNR of the *k*-th user.

To accelerate convergence, the direction **v**_{n} should be carefully chosen. First, consider a rotation from **w**_{n} toward **v**_{n} by an angle *θ*, so that the SNR of the *k*-th user can be expressed as a function *γ*_{k}(*θ*, **v**_{n}) of **v**_{n} and the angle of rotation *θ*. Differentiating *γ*_{k}(*θ*, **v**_{n}) with respect to *θ* at *θ* = 0, we have

$$\frac{\partial {\gamma}_{k}(\theta ,{\mathbf{v}}_{n})}{\partial \theta}\Big|_{\theta =0}=2\phantom{\rule{0.3em}{0ex}}\text{Re}\left\{{\mathbf{w}}_{n}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}{\mathbf{v}}_{n}\right\}.\qquad (40)$$

If $\text{Re}\left\{{\mathbf{w}}_{n}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}{\mathbf{v}}_{n}\right\}>0$, then a tiny rotation from **w**_{n} toward **v**_{n} will increase the SNR of the *k*-th user.

Denote ${\mathcal{B}}_{n}$ as the set of bottleneck users at the *n*-th iteration. To improve the SNR of these users, we rotate **w**_{n} toward a direction **v**_{n} which satisfies Eq. (41),

$$\text{Re}\{{\mathbf{w}}_{n}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}{\stackrel{~}{\mathbf{v}}}_{n}\}=1,\phantom{\rule{0.5em}{0ex}}\forall k\in {\mathcal{B}}_{n},\qquad (41)$$

where the vector ${\stackrel{~}{\mathbf{v}}}_{n}$ is a length-relaxed version of **v**_{n}. Note that we choose $\text{Re}\{{\mathbf{w}}_{n}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}{\stackrel{~}{\mathbf{v}}}_{n}\}=1,\forall k\in {\mathcal{B}}_{n}$, w.l.o.g. Clearly, Eq. (41) is a set of linear equations. Therefore, if $|{\mathcal{B}}_{n}|\le 2M-2$, ${\stackrel{~}{\mathbf{v}}}_{n}$ can be obtained by pseudo-inverse, while the unit-length vector **v**_{n} is ${\mathbf{v}}_{n}={\stackrel{~}{\mathbf{v}}}_{n}/\parallel {\stackrel{~}{\mathbf{v}}}_{n}\parallel $. With this scheme, the convergence rate of I2DO is satisfactory.
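One I2DO iteration can be sketched as follows; for illustration, the two-dimensional subproblem (37) is solved by a coarse grid search over (*θ*, *ϕ*) instead of PASA, and the direction **v**_{n} is obtained from the bottleneck users' linear conditions in Eq. (41) via pseudo-inverse:

```python
import numpy as np

def i2do_step(w, channels, bottleneck_tol=1e-6):
    """One I2DO iteration: pick an updating direction v_n from the bottleneck
    users' linear conditions (Eq. (41)), then search the plane span{w_n, v_n}.
    Grid search stands in for PASA (illustration only)."""
    snrs = np.array([np.real(w.conj() @ H.conj().T @ H @ w) for H in channels])
    bn = np.where(snrs <= snrs.min() + bottleneck_tol)[0]  # bottleneck set B_n
    # Linear conditions: Re{w^* H_k^* H_k v} = 1 for k in B_n, plus Re{w^* v} = 0,
    # solved in stacked real coordinates [Re(v); Im(v)].
    M = len(w)
    rows, rhs = [], []
    for k in bn:
        r = w.conj() @ channels[k].conj().T @ channels[k]
        rows.append(np.concatenate([np.real(r), -np.imag(r)]))
        rhs.append(1.0)
    rows.append(np.concatenate([np.real(w.conj()), -np.imag(w.conj())]))
    rhs.append(0.0)
    sol = np.linalg.pinv(np.array(rows)) @ np.array(rhs)
    v = sol[:M] + 1j * sol[M:]
    v -= (w.conj() @ v) * w          # enforce full orthogonality to w
    v /= np.linalg.norm(v)
    # 2-D subproblem on span{w, v}: w(t, p) = cos(t) w + sin(t) e^{jp} v.
    best_w, best_val = w, snrs.min()
    for t in np.linspace(0, np.pi / 2, 60, endpoint=False):
        for p in np.linspace(0, 2 * np.pi, 60, endpoint=False):
            cand = np.cos(t) * w + np.sin(t) * np.exp(1j * p) * v
            val = min(np.real(cand.conj() @ H.conj().T @ H @ cand)
                      for H in channels)
            if val > best_val:
                best_w, best_val = cand, val
    return best_w, best_val

rng = np.random.default_rng(6)
M, K = 4, 6
channels = [rng.standard_normal((1, M)) + 1j * rng.standard_normal((1, M))
            for _ in range(K)]
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)
w /= np.linalg.norm(w)
w1, val1 = i2do_step(w, channels)
# The minimum SNR never decreases (t = 0 recovers w itself).
assert val1 >= min(np.real(w.conj() @ H.conj().T @ H @ w) for H in channels) - 1e-12
```

Because the candidate at *t* = 0 is **w**_{n} itself, each step is monotonically non-decreasing in the minimum SNR, matching the convergence argument above.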

#### Remark 5

In general, there is no need to choose the step size (*u*
_{1},*u*
_{2}) optimally[3, 8]. Suboptimal algorithms such as the truncated PASA can be used here to obtain an approximate step size in Eq. (37).

### 4.2 Initialization of I2DO

In the iterations of I2DO, the minimum SNR is non-decreasing, and it will converge to a stable point. However, I2DO is not guaranteed to find the globally optimal beamformer, and it may get trapped into a local optimum. As a result, proper initialization of I2DO is of great significance[9].

To generate a good starting point, we run the dLLI procedure *M* times in parallel. Similar to Lopez's initialization [1], the *M* eigenvectors of $\sum _{k}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}$ can be used as the initialization points of dLLI. For each eigenvector of $\sum _{k}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}$, we invoke dLLI and obtain an improved vector ${\widehat{\mathbf{w}}}_{m}$. Finally, the best one, i.e., the one with the largest minimum SNR, is chosen as the initialization point of I2DO.
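The eigenvector-based initialization can be sketched as follows (dLLI itself is not reproduced; we simply take the *M* eigenvectors of $\sum _{k}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}$ as candidate starting points and keep the best under the max-min objective):

```python
import numpy as np

rng = np.random.default_rng(7)
M, K = 4, 8
channels = [rng.standard_normal((2, M)) + 1j * rng.standard_normal((2, M))
            for _ in range(K)]

# Sum of channel Grams; its eigenvectors generalize Lopez's average-SNR beamformer.
R = sum(H.conj().T @ H for H in channels)
eigvals, eigvecs = np.linalg.eigh(R)  # columns are the M candidate start points

def min_snr(w):
    return min(np.real(w.conj() @ H.conj().T @ H @ w) for H in channels)

# In the paper each eigenvector is first refined by dLLI; here we just pick
# the best raw eigenvector as the I2DO initialization.
w0 = max((eigvecs[:, m] for m in range(M)), key=min_snr)
assert abs(np.linalg.norm(w0) - 1.0) < 1e-9  # eigh returns unit-norm columns
```
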

### 4.3 Implementation complexity of I2DO

The I2DO algorithm is comprised of the I2DO iterations and computation of initialization vector which requires calling dLLI *M* times. Let *J*
_{1},*J*
_{2} denote the numbers of iterations in I2DO and dLLI. The worst-case computational complexity of the I2DO iterations is$\mathcal{O}({J}_{1}{K}^{3})$, while the complexity involved in the generation of initialization vector is$\mathcal{O}({J}_{2}K{M}^{2})$. So the entire complexity of I2DO is$\mathcal{O}({J}_{1}{K}^{3}+{J}_{2}K{M}^{2})$. It is worthy to note that the actual complexity of I2DO can be further reduced if the truncated PASA is used instead of PASA. The worst-case complexity of the SDR-based scheme is$\mathcal{O}({(K+{M}^{2})}^{3.5})$, excluding the additional process of randomization[2]. The overall complexity of the GS-DL method proposed in[4] is$\mathcal{O}(K({M}^{3}+\mathit{\text{IKM}}))$, where *I* denotes the number of iterations in local refinement. Therefore, the SDR-based scheme is less efficient than I2DO and GS-DL.

## 5 Simulation results

In this section, simulation results are presented to demonstrate the effectiveness of the proposed approaches: PASA and I2DO. The following results are based on 2,000 Monte-Carlo trials, where independent and identically distributed Rayleigh fading channels between the base station and users are assumed. Besides, the transmit power is set as *P* = 1.

### 5.1 Performance of PASA

We first consider the case of two Tx antennas and single-antenna users, i.e., *N*_{k} = 1, ∀*k*, since the GS-DL technique in [4] and the RCC2-SOR method proposed in [3] cannot handle the case of multi-antenna users. Figure 3 displays the achievable rates of different approaches with respect to the number of users. In the SDR-based scheme, CVX [13] is used to solve the semidefinite programming problem. In the subsequent process of randomization, three randomization techniques, RandA, RandB, and RandC, are used to generate 300 candidate vectors, 100 for each (see also [2]). We can see that both the SDR-based scheme and GS-DL perform quite close to PASA, which yields the optimal beamformer, while the RCC2-SOR method has about 0.05 bps/Hz performance loss compared to PASA.

Next, we fix *K* = 16 and provide an in-depth comparison between these methods. The SNR loss of the SDR-based scheme and GS-DL relative to PASA is displayed in the form of cumulative distribution functions (CDFs). It is shown that the SNR losses of the SDR-based scheme and GS-DL are more than 1.5 dB in the worst case, even though their average achievable rates are close to the optimal value.

In terms of runtime, for *K* ≤ 12 the average runtime of PASA is less than those of the other three methods, so PASA is computationally more efficient. For *K* > 12, RCC2-SOR is computationally faster than PASA; however, it is a suboptimal algorithm and may suffer from severe performance loss.

### 5.2 Performance of I2DO

Next, the case of *M* = 8 Tx antennas is considered. For the SDR-based scheme, 3,000 candidate vectors are generated by randomization, and the best one is chosen as the approximate solution. In the I2DO algorithm, the numbers of iterations in I2DO and dLLI are *J*_{1} = 20 and *J*_{2} = 100, respectively. When all users are equipped with a single antenna (*N*_{k} = 1), both I2DO and GS-DL have superior performance over the SDR-based scheme, and the gap between them is more than 0.25 bps/Hz for *K* ≥ 16. Moreover, I2DO also outperforms GS-DL, especially when the number of users is large. In the multiple-antenna user scenario (*N*_{k} = 2), GS-DL is not applicable anymore; hence, we only compare the performance of I2DO and the SDR-based scheme. Similarly, we can see that I2DO has about 0.3 bps/Hz improvement over the latter.

We further consider a system with *K* = 36 users, all of them single-antenna users, and compare the average achievable rates of different methods in Figure 7. Note that 30*M* *K* candidate vectors are generated in the randomization process of SDR, since the dimensionality of the beamforming problem grows as the number of transmit antennas increases [2]. Here, the multicast capacity is achieved by a multi-rank precoder [14]; obviously, the multicast capacity is an upper bound on all achievable rates. From Figure 7, we can see that I2DO is superior to the SDR-based scheme and GS-DL. Moreover, the beamformer designed by I2DO achieves a large majority of the multicast capacity, while the complexity of encoding and decoding is significantly reduced.

Finally, we consider a larger system with *M* = 24, *K* = 36, and *N*_{k} = 1. The cumulative distribution functions of the multicast rate achieved by the above three methods are displayed in Figure 8. Again, I2DO has the best performance, and it achieves almost twice the multicast rate of the SDR-based scheme. As shown in [15], the approximation accuracy of semidefinite relaxation is a decreasing function of the number of users; in other words, the performance of the SDR-based scheme degrades as the number of users increases.

## 6 Conclusions

For the well-known max-min fair beamforming problem, two efficient algorithms, PASA and I2DO, are proposed to handle the case of two Tx antennas and the general case of more than two Tx antennas, respectively. In the two-antenna case, PASA is guaranteed to obtain a globally optimal beamformer with worst-case complexity $\mathcal{O}({K}^{3})$. In the general case, I2DO decomposes the original beamforming problem into a series of two-antenna subproblems and iteratively improves the solution by PASA. The superior performance of the proposed algorithms is demonstrated by comparison with state-of-the-art multicasting schemes.

## Endnotes

^{a}If **n**_{k} is colored noise and its covariance matrix is known, it can be pre-whitened at the receiver side.

^{b}The formula${\mathbf{w}}^{\ast}{\mathbf{H}}_{k}^{\ast}{\mathbf{H}}_{k}\mathbf{w}$ in Eq. (3) can be interpreted as the received SNR of the *k*-th user.

^{c}It also means that every three of${\{{\mathbf{g}}_{k}\}}_{k=1}^{K}$ are linearly independent.

^{d}The detailed process of computing these solutions is demonstrated in the following three cases of bottleneck users.

^{e}There are some slight modifications in part II of PASA and the division line marked in part III of PASA is to be used for the complexity analysis.

## Appendices

### Appendix 1 Proof of **Lemma** 3

In the case of one bottleneck user, w.l.o.g., we assume the *i*-th user is the bottleneck user. Hence, we have *γ*_{ i }(**x**) < *γ*_{ l }(**x**), *l* ∈ {1,⋯,*K*}∖{*i*}, where${\gamma}_{i}(\mathbf{x})={h}_{i}+{\mathbf{g}}_{i}^{T}\mathbf{x}$ is the SNR of the *i*-th user. To maximize the SNR of the bottleneck user, the optimal solution to Eq. (12) must be **x** = **g**_{ i }/∥**g**_{ i }∥ by the Cauchy-Schwarz inequality. By Lemma 2, this yields an upper bound on *λ*_{opt}: *λ*_{opt} ≤ *h*_{ i } + ∥**g**_{ i }∥.

There are *K* possible cases of the bottleneck user, so we obtain *K* upper bounds on *λ*_{opt}: *h*_{ k } + ∥**g**_{ k }∥, *k* = 1,…,*K*. Only the index corresponding to the lowest upper bound can be the bottleneck user. Therefore, instead of exhausting *K* hypothesis tests, we need only verify whether user *i* = arg min_{ k } (*h*_{ k } + ∥**g**_{ k }∥) is the bottleneck user.

▪
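The single hypothesis test of Lemma 3 is simple enough to sketch directly. The following is an illustrative implementation for the real-valued problem of Eq. (12); the function name and numerical tolerance are our own.

```python
import numpy as np

def one_bottleneck_test(h, G):
    """Lemma 3 check: only the user with the smallest upper bound
    h_k + ||g_k|| can be a single bottleneck user.

    h : (K,) offsets h_k;  G : (K, d) rows g_k^T (real-valued, as in Eq. (12)).
    Returns (x, lam) if the single-bottleneck hypothesis holds, else None.
    """
    norms = np.linalg.norm(G, axis=1)
    i = int(np.argmin(h + norms))   # the only possible single bottleneck user
    x = G[i] / norms[i]             # Cauchy-Schwarz maximizer for user i
    snr = h + G @ x                 # gamma_k(x) = h_k + g_k^T x for all users
    lam = h[i] + norms[i]           # upper bound on lambda_opt (Lemma 2)
    # Hypothesis holds iff user i indeed attains the minimum SNR.
    if np.all(snr >= snr[i] - 1e-12):
        return x, lam
    return None
```

If the test fails, PASA proceeds to the two- and three-bottleneck hypotheses of Lemmas 4 and 5.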

### Appendix 2 Proof of **Lemma** 4

W.l.o.g., we assume the *i*-th and *j*-th users are the bottleneck users. Then, we have *γ*_{ i }(**x**) = *γ*_{ j }(**x**) < *γ*_{ l }(**x**), *l* ∈ {1,⋯,*K*} ∖ {*i*,*j*}, where${\gamma}_{i}(\mathbf{x})={h}_{i}+{\mathbf{g}}_{i}^{T}\mathbf{x}$ and${\gamma}_{j}(\mathbf{x})={h}_{j}+{\mathbf{g}}_{j}^{T}\mathbf{x}$ denote the SNRs of the *i*-th and *j*-th users, respectively. According to the FJ necessary conditions, an optimal solution to Eq. (12) must have the form **x** = *α* **g**_{ i } + *β* **g**_{ j }, where *α*, *β* have the same sign. In particular, if *α* > 0, *β* > 0, it can be proven that **x** = *α* **g**_{ i } + *β* **g**_{ j } is the optimal solution to problem (43).

From a geometrical perspective, the optimal solution to Eq. (43) must be a vector in the cone generated by **g**_{ i } and **g**_{ j }.

Let **v** be an arbitrary unit-length vector orthogonal to **x**, and define the rotation from **x** toward **v** as in Eq. (44). Taking the first derivatives of *γ*_{ i }(*θ*), *γ*_{ j }(*θ*) with respect to *θ* yields the quantities *ξ*_{ i }, *ξ*_{ j }, where **v** is decomposed with respect to **G**_{ i j } = [**g**_{ i }, **g**_{ j }] as its component in the column space of **G**_{ i j } plus *ρ* **n**; here *ρ* is a scalar and **n** is a unit-length vector in the null space of${\mathbf{G}}_{\mathit{\text{ij}}}^{T}$. Recalling that **v** and **x** are orthogonal to each other, we have **x**^{ T }**v** = *α* *ξ*_{ i } + *β* *ξ*_{ j } = 0. Since *α* > 0 and *β* > 0, this splits into two cases: (1) *ξ*_{ i } and *ξ*_{ j } have opposite signs; (2) *ξ*_{ i } = *ξ*_{ j } = 0.

For case 1, it is impossible to improve the solution **x**, as rotating in either direction always decreases the SNR of one of the bottleneck users. For case 2, we need to consider the second derivatives *σ*_{ i }, *σ*_{ j } of *γ*_{ i }(*θ*), *γ*_{ j }(*θ*) with respect to *θ*. We conclude that at least one of *σ*_{ i }, *σ*_{ j } is negative; w.l.o.g., we assume *σ*_{ i } < 0. Then, *γ*_{ i }(*θ*) is a concave function of *θ*, and rotating **x** toward **v** decreases the SNR of the *i*-th user. Therefore, it is impossible to improve the solution **x** by rotating it toward **v**.

From the analysis of both cases, we see that the solution **x** = *α* **g**_{ i } + *β* **g**_{ j }, *α* > 0, *β* > 0 is the optimal solution to Eq. (43). According to Lemma 2, the corresponding SNR$\lambda ={h}_{i}+{\mathbf{g}}_{i}^{T}\mathbf{x}={h}_{j}+{\mathbf{g}}_{j}^{T}\mathbf{x}$ is an upper bound on *λ*_{opt}. If *λ* < UB, then *λ* is a tighter upper bound and we can update UB = *λ*. Moreover, if the solution **x** satisfies${h}_{k}+{\mathbf{g}}_{k}^{T}\mathbf{x}\ge \lambda ,k=1,\dots ,K$, then by Lemma 2 it is also an optimal solution to Eq. (12).

▪
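For illustration, the equal-SNR candidate of the two-bottleneck case can be computed in closed form: imposing *γ*_{ i }(**x**) = *γ*_{ j }(**x**) and ∥**x**∥ = 1 on **x** = *α* **g**_{ i } + *β* **g**_{ j } reduces to one linear equation plus a quadratic normalization. The sketch below uses our own parameterization (it is not the paper's exact routine) and assumes **g**_{ i } ≠ **g**_{ j }; the candidate is kept only when *α*, *β* > 0, as Lemma 4 requires.

```python
import numpy as np

def two_bottleneck_candidate(h_i, h_j, g_i, g_j):
    """Find x = alpha*g_i + beta*g_j with ||x|| = 1 and
    h_i + g_i^T x = h_j + g_j^T x.  Returns (x, lam) if a root with
    alpha, beta > 0 exists, else None."""
    G = np.column_stack([g_i, g_j])        # d x 2
    A = G.T @ G                            # Gram matrix
    b = A @ np.array([1.0, -1.0])          # equal-SNR condition: b^T c = h_j - h_i
    rhs = h_j - h_i
    c0 = b * rhs / (b @ b)                 # particular solution (assumes b != 0)
    d = np.array([-b[1], b[0]])            # direction orthogonal to b
    # Quadratic in t from (c0 + t*d)^T A (c0 + t*d) = 1  (i.e., ||x|| = 1):
    qa = d @ A @ d
    qb = 2 * (c0 @ A @ d)
    qc = c0 @ A @ c0 - 1
    disc = qb ** 2 - 4 * qa * qc
    if disc < 0:
        return None
    for t in ((-qb + np.sqrt(disc)) / (2 * qa), (-qb - np.sqrt(disc)) / (2 * qa)):
        alpha, beta = c0 + t * d
        if alpha > 0 and beta > 0:
            x = G @ np.array([alpha, beta])
            lam = h_i + g_i @ x            # = h_j + g_j @ x by construction
            return x, lam
    return None
```

The returned *λ* is the upper bound on *λ*_{opt} used to tighten UB in PASA.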

### Appendix 3 Proof of **Lemma** 5

W.l.o.g., we assume the *i*-th, *j*-th, and *k*-th users are the bottleneck users. That is, *γ*_{ i }(**x**) = *γ*_{ j }(**x**) = *γ*_{ k }(**x**) < *γ*_{ l }(**x**), *l* ∈ {1,…,*K*}∖{*i*,*j*,*k*}. According to the FJ necessary conditions, an optimal solution to Eq. (12) must have the form **x** = *α* **g**_{ i } + *β* **g**_{ j } + *γ* **g**_{ k }, where *α*, *β*, *γ* have the same sign. In particular, if *α* > 0, *β* > 0, *γ* > 0, then **x** = *α* **g**_{ i } + *β* **g**_{ j } + *γ* **g**_{ k } is the optimal solution to problem (47).

From a geometrical perspective, it is easy to see that the optimal solution to Eq. (47) must be a vector in the cone generated by **g**_{ i }, **g**_{ j }, and **g**_{ k }.

Let **v** be an arbitrary unit-length vector orthogonal to **x**. We consider the rotation from **x** toward **v** and obtain *γ*_{ i }(*θ*), *γ*_{ j }(*θ*), *γ*_{ k }(*θ*) as defined in Eq. (44). Taking their first derivatives with respect to *θ* yields *ξ*_{ i }, *ξ*_{ j }, *ξ*_{ k }, where **v** and **x** are decomposed with respect to **G**_{ ijk } = [**g**_{ i }, **g**_{ j }, **g**_{ k }]. Recalling that **v** and **x** are orthogonal to each other, we have **x**^{ T }**v** = *α* *ξ*_{ i } + *β* *ξ*_{ j } + *γ* *ξ*_{ k } = 0. Note that *ξ*_{ i }, *ξ*_{ j }, *ξ*_{ k } cannot all be zero, since **g**_{ i }, **g**_{ j }, **g**_{ k } are linearly independent. Consequently, at least one of *ξ*_{ i }, *ξ*_{ j }, *ξ*_{ k } is negative, and it is impossible to improve one of the bottleneck SNRs without reducing another. We conclude that the solution **x** = *α* **g**_{ i } + *β* **g**_{ j } + *γ* **g**_{ k }, *α* > 0, *β* > 0, *γ* > 0 is the optimal solution to Eq. (47).

According to Lemma 2, the corresponding SNR$\lambda ={h}_{i}+{\mathbf{g}}_{i}^{T}\mathbf{x}={h}_{j}+{\mathbf{g}}_{j}^{T}\mathbf{x}={h}_{k}+{\mathbf{g}}_{k}^{T}\mathbf{x}$ is an upper bound on *λ*_{opt}, the optimal value of the original problem (12). If *λ* < UB, then we obtain a tighter upper bound UB = *λ*. Also, if this solution **x** satisfies${h}_{k}+{\mathbf{g}}_{k}^{T}\mathbf{x}\ge \lambda ,\forall k$, then it is an optimal solution to Eq. (12) by Lemma 2.

▪
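The three-bottleneck candidate admits a similar closed-form sketch: writing **x** = **G** **c** with **G** = [**g**_{ i }, **g**_{ j }, **g**_{ k }] and imposing a common SNR *λ* gives **c**(*λ*) = **A**^{−1}(*λ* **1** − **h**) with **A** = **G**^{ T }**G**, and ∥**x**∥ = 1 becomes a quadratic in *λ*. The implementation below is our own illustration; it assumes the three **g** vectors are linearly independent (guaranteed by endnote c), so **A** is invertible.

```python
import numpy as np

def three_bottleneck_candidate(h, G):
    """Find x = G @ c with ||x|| = 1 and h_m + g_m^T x = lam for the
    three assumed bottleneck users.

    h : (3,) offsets of the three users;  G : (d, 3) columns g_i, g_j, g_k.
    Returns (x, lam) if some root gives all-positive coefficients, else None.
    """
    A = G.T @ G                      # Gram matrix, invertible by assumption
    Ainv = np.linalg.inv(A)
    one = np.ones(3)
    # c(lam) = Ainv @ (lam*1 - h); the constraint c^T A c = 1 (= ||x||^2)
    # expands to a quadratic a2*lam^2 + a1*lam + a0 = 0:
    a2 = one @ Ainv @ one
    a1 = -2 * (one @ Ainv @ h)
    a0 = h @ Ainv @ h - 1
    disc = a1 ** 2 - 4 * a2 * a0
    if disc < 0:
        return None
    for lam in ((-a1 + np.sqrt(disc)) / (2 * a2), (-a1 - np.sqrt(disc)) / (2 * a2)):
        c = Ainv @ (lam * one - h)
        if np.all(c > 0):            # cone condition: alpha, beta, gamma > 0
            return G @ c, lam
    return None
```

As in the two-user case, the returned *λ* serves as a candidate upper bound on *λ*_{opt}.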

### Appendix 4 Proof of **Theorem** 2

Suppose the *i*-th user is the bottleneck user. Then, the optimal beamforming vector is **w**_{opt} = **h**_{ i }/∥**h**_{ i }∥ [3, 5]. Hence, the SNR of the *i*-th user is *γ*_{ i } = ∥**h**_{ i }∥^{2}, which is a chi-squared distributed random variable with four degrees of freedom [16], while the SNRs of the other users are${\gamma}_{l}=|{\mathbf{h}}_{l}^{\ast}{\mathbf{w}}_{\text{opt}}{|}^{2},l\in \{1,\dots ,K\}\setminus \{i\}$. Since the beamforming vector **w**_{opt} is of unit length and is independent of **h**_{ l },${\mathbf{h}}_{l}^{\ast}{\mathbf{w}}_{\text{opt}}$ is a complex Gaussian variable with zero mean and unit variance. Consequently, *γ*_{ l } is a chi-squared distributed random variable with two degrees of freedom. The probability density functions (pdf) of *γ*_{ i }, *γ*_{ l } are given in [16], and the probability of the one-bottleneck case *P*_{1} can be calculated from them.

Next, suppose the *i*-th and *j*-th users are the bottleneck users. Let *γ*_{ i j } denote the SNR of the bottleneck users and *γ*_{ l }, *l* ∈ {1,…,*K*}∖{*i*,*j*} denote the SNRs of the other users. Similarly, the probability of the two-bottleneck case *P*_{2} can be expressed in terms of these variables. To calculate *P*_{2}, we first derive the closed form of *γ*_{ i j } and analyze its pdf.

W.l.o.g., assume ∥**h**_{ i }∥ < ∥**h**_{ j }∥; then, from the fact that both of them are bottleneck users, we obtain Eq. (50) (see also [5]). Define **H**_{ i j } = [**h**_{ i }, **h**_{ j }]. Decomposing **H**_{ ij } into orthonormal vectors **q**_{1}, **q**_{2} and upper-triangular coefficients, **h**_{ i }, **h**_{ j } can be expressed in terms of *r*_{11} > 0, *r*_{12}, and *r*_{22} > 0. With Eq. (50), we also obtain the relationship among *r*_{11}, *r*_{12}, *r*_{22}. The optimal beamformer lies in the span of **h**_{ i } and **h**_{ j }, so **w**_{opt} can be expressed as a linear combination of **q**_{1}, **q**_{2}, with the phase *ϕ* determined by the bottleneck conditions.

It is difficult to obtain the exact pdf of *γ*_{ i j } due to its complex expression. Instead of computing the precise expression of *P*_{2}, we provide bounds on it. From the expression of *γ*_{ i j }, we can see that${\gamma}_{\mathit{\text{ij}}}\le {r}_{11}^{2}$. Moreover, with Eq. (52), *P*_{2} can be bounded from both sides, where${P}_{2}^{L},{P}_{2}^{U}$ denote the lower bound and upper bound of *P*_{2}, respectively.

Note that **w**_{opt} is determined only by the channels of the bottleneck users and hence is independent of **h**_{ l }, *l* ≠ *i*, *j*. Therefore, the SNRs of the other users${\gamma}_{l}=|{\mathbf{h}}_{l}^{\ast}{\mathbf{w}}_{\text{opt}}{|}^{2}$ still follow a chi-squared distribution with two degrees of freedom.

To calculate the bounds on *P*_{2}, we turn to the pdf of${r}_{11}^{2}=\parallel {\mathbf{h}}_{i}{\parallel}^{2}$. Let *υ* denote the squared normalized inner product of **h**_{ i } and **h**_{ j }; then *υ* follows a uniform distribution [17, 18]. With the pdf of ∥**h**_{ i }∥^{2}, the bounds on *P*_{2} can be calculated.

Finally, note that *P*_{1} + *P*_{2} + *P*_{3} = 1. From the above results, it is straightforward to obtain the bounds corresponding to the probability of the three-bottleneck case, where${P}_{3}^{L}=1-{P}_{1}-{P}_{2}^{U},{P}_{3}^{U}=1-{P}_{1}-{P}_{2}^{L}$ are the lower bound and upper bound of *P*_{3}, respectively.

▪
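The distributional claims underlying *P*_{1} are easy to check by Monte Carlo. The sketch below assumes unit-variance circularly symmetric complex Gaussian channel entries, under which E[*γ*_{ i }] = 2 and E[*γ*_{ l }] = 1 (the "chi-squared with four/two degrees of freedom" statements, up to the usual complex normalization convention); the helper `cn` is our own.

```python
import numpy as np

rng = np.random.default_rng(1)

def cn(size):
    """i.i.d. circularly symmetric complex Gaussian, zero mean, unit variance."""
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

n = 200_000
h_i = cn((n, 2))   # bottleneck user's channel, M = 2 Tx antennas
h_l = cn((n, 2))   # another user's channel, independent of h_i
w = h_i / np.linalg.norm(h_i, axis=1, keepdims=True)   # w_opt = h_i / ||h_i||

# gamma_i = ||h_i||^2: sum of 4 real Gaussian squares ("4 dof").
gamma_i = np.sum(np.abs(h_i) ** 2, axis=1)
# gamma_l = |h_l^* w_opt|^2: |CN(0,1)|^2, since w is unit-length and
# independent of h_l ("2 dof").
gamma_l = np.abs(np.sum(h_l.conj() * w, axis=1)) ** 2

# With this normalization: E[gamma_i] = 2, Var = 2; E[gamma_l] = 1, Var = 1.
print(gamma_i.mean(), gamma_l.mean())
```

The empirical moments match the stated distributions, confirming the independence argument for **w**_{opt} and **h**_{ l }.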

### Appendix 5 The pseudo Matlab^{TM} code of PASA


## Declarations

### Acknowledgments

This work was supported by the National Natural Science Foundation of China under grant number 61071094, the National Science and Technology Special Projects of China under grant number 2012ZX03001007-002, and grant number 2012AA01A502 from the National High Technology Research and Development Program of China (863 Program). Parts of this work were presented at the IEEE International Conference on Communications, Ottawa, Ontario, Canada, June 2012.

## Authors’ Affiliations

## References

1. Lopez MJ: Multiplexing, scheduling, and multicasting strategies for antenna arrays in wireless networks. PhD thesis, Dept. of Elect. Eng. and Comp. Sci., MIT; 2002.
2. Sidiropoulos ND, Davidson TN, Luo ZQ: Transmit beamforming for physical-layer multicasting. *IEEE Trans. Signal Process.* 2006, 54(6):2239-2251.
3. Hunger R, Schmidt DA, Joham M, Schwing A, Utschick W: Design of single-group multicasting-beamformers. In *Proc. IEEE ICC*. Glasgow; June 2007:2499-2505.
4. Abdelkader A, Gershman AB, Sidiropoulos ND: Multiple-antenna multicasting using channel orthogonalization and local refinement. *IEEE Trans. Signal Process.* 2010, 58(7):3922-3927.
5. Kim IH, Love DJ, Park SY: Optimal and successive approaches to signal design for multiple antenna physical layer multicasting. *IEEE Trans. Commun.* 2011, 59(8):2316-2327.
6. Song E, Shi Q, Sanjabi M, Sun R: Robust SINR-constrained MISO downlink beamforming: when is semidefinite programming relaxation tight? *EURASIP J. Wirel. Commun. Netw.* 2012, 2012(1):1-11.
7. Du B, Jiang Y, Xu X, Dai X: Transmit beamforming for MIMO multicast channels. In *Proc. IEEE ICC*. Ottawa; June 2012:3800-3805.
8. Golub GH, Van Loan CF: *Matrix Computations*. Baltimore: Johns Hopkins University Press; 1996.
9. Boyd S, Vandenberghe L: *Convex Optimization*. Cambridge: Cambridge University Press; 2004.
10. Bazaraa MS, Sherali HD, Shetty CM: *Nonlinear Programming: Theory and Algorithms*. Hoboken: John Wiley & Sons; 2006.
11. Lozano A: Long-term transmit beamforming for wireless multicasting. In *Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing*. Honolulu; April 2007:III-417-III-420.
12. Matskani E, Sidiropoulos ND, Luo ZQ, Tassiulas L: Efficient batch and adaptive approximation algorithms for joint multicast beamforming and admission control. *IEEE Trans. Signal Process.* 2009, 57(12):4882-4894.
13. Grant M, Boyd S: CVX: Matlab software for disciplined convex programming, version 2.0 beta. 2013. http://cvxr.com/cvx. Accessed June 2013.
14. Jindal N, Luo ZQ: Capacity limits of multiple antenna multicast. In *Proc. IEEE Int. Symp. Inf. Theory*. Seattle; July 2006:1841-1845.
15. Luo ZQ, Ma WK, So AMC, Ye Y, Zhang S: Semidefinite relaxation of quadratic optimization problems. *IEEE Signal Process. Mag.* 2010, 27(3):20-34.
16. Anderson TW: *An Introduction to Multivariate Statistical Analysis*. Hoboken: John Wiley & Sons; 2003.
17. Au-Yeung CK, Love DJ: On the performance of random vector quantization limited feedback beamforming in a MISO system. *IEEE Trans. Wirel. Commun.* 2007, 6(2):458-462.
18. Love DJ, Heath RW: Grassmannian beamforming for multiple-input multiple-output wireless systems. *IEEE Trans. Inf. Theory* 2003, 49(10):2735-2747.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.