Bounds for Eigenvalues of Arrowhead Matrices and Their Applications to Hub Matrices and Wireless Communications

This paper considers lower and upper bounds for the eigenvalues of arrowhead matrices. We propose a parameterized decomposition of an arrowhead matrix into a sum of a diagonal matrix and a special kind of arrowhead matrix whose eigenvalues can be computed explicitly. The eigenvalues of the arrowhead matrix are then estimated in terms of the eigenvalues of the diagonal matrix and the special arrowhead matrix by using Weyl's theorem. Improved bounds on the eigenvalues are obtained by choosing the decomposition of the arrowhead matrix that provides the best bounds. Some applications of these results to hub matrices and wireless communications are discussed.


Introduction
In this paper we develop lower and upper bounds for the eigenvalues of arrowhead matrices. A matrix Q ∈ R^{m×m} is called an arrowhead matrix if it has the form

Q = ( D    c )
    ( c^t  b ),   (1)

where D ∈ R^{(m−1)×(m−1)} is a diagonal matrix, c is a vector in R^{m−1}, and b is a real number. Here the superscript "t" signifies the transpose. The arrowhead matrix Q is obtained by bordering the diagonal matrix D by the vector c and the real number b. Hence, the matrix Q in (1) is sometimes also called a symmetric bordered diagonal matrix. In physics, arrowhead matrices have been used to describe radiationless transitions in isolated molecules [1] and oscillators vibrationally coupled with a Fermi liquid [2]. Numerically efficient algorithms for computing eigenvalues and eigenvectors of arrowhead matrices were discussed in [3]. The properties of eigenvectors of arrowhead matrices were studied in [4], and, as an application of those results, an alternative proof of Cauchy's interlacing theorem was given there. The construction of arrowhead matrices possessing prescribed eigenvalues and satisfying other additional requirements was investigated recently in [5-8].
Our motivation for studying lower and upper bounds for the eigenvalues of arrowhead matrices comes from Kung and Suter's recent work on hub matrix theory [9] and its applications to multiple-input multiple-output (MIMO) wireless communication systems. A matrix, say A, is a hub matrix with m columns if its first m − 1 columns (called nonhub columns) are orthogonal to each other with respect to the Euclidean inner product and its last column (called the hub column) has a Euclidean norm greater than that of any other column. It was shown that the Gram matrix of A, that is, Q = A^t A, is an arrowhead matrix and that its eigenvalues can be bounded by the norms of the columns of A. As pointed out in [9-11], the eigenstructure of Q determines the properties of the wireless communication system. This motivates us to reexamine these bounds on the eigenvalues of Q and make them sharper. In [9], hub matrix theory is also applied to the MIMO beamforming problem by selecting the k of m transmitting antennas with the largest signal-to-noise ratios, including the special case k = 1, which corresponds to a transmitting hub. The relative performance of the resulting system can be expressed as the ratio of the largest eigenvalue of the truncated Q matrix to the largest eigenvalue of the Q matrix. Again, it was previously shown that these ratios can be bounded by ratios of norms of columns of the associated hub matrix. Sharper bounds will be presented in Section 4.
The well-known result on the eigenvalues of arrowhead matrices is the Cauchy interlacing theorem for Hermitian matrices [12]. We assume that the diagonal elements d_j, j = 1, 2, . . . , m − 1, of the diagonal matrix D in (1) satisfy d_1 ≤ d_2 ≤ · · · ≤ d_{m−1}. Let λ_1, λ_2, . . . , λ_m be the eigenvalues of Q arranged in increasing order. The Cauchy interlacing theorem says that

λ_1 ≤ d_1 ≤ λ_2 ≤ d_2 ≤ · · · ≤ d_{m−2} ≤ λ_{m−1} ≤ d_{m−1} ≤ λ_m.   (2)

When the vector c and the real number b in (1) are taken into consideration, a lower bound on λ_1 and an upper bound on λ_m were developed by using the well-known Gershgorin theorem (see, e.g., [3, 12]), that is,

λ_1 ≥ min{ d_1 − |c_1|, . . . , d_{m−1} − |c_{m−1}|, b − Σ_{j=1}^{m−1} |c_j| },   (3)

λ_m ≤ max{ d_1 + |c_1|, . . . , d_{m−1} + |c_{m−1}|, b + Σ_{j=1}^{m−1} |c_j| }.   (4)

Accurate bounds on the eigenvalues of arrowhead matrices are of great interest in the applications mentioned above. The main results of this paper are presented in Theorems 11 and 12, which give upper and lower bounds for the eigenvalues of arrowhead matrices. It is also shown in Corollary 13 that the resulting bounds are tighter than those in (2), (3), and (4).
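To illustrate how these classical bounds are computed in practice, the following Python sketch assembles an arrowhead matrix from d, c, and b and evaluates the Gershgorin-type bounds (3) and (4); the data and function names are illustrative only.

import numpy as np

def arrowhead(d, c, b):
    """Assemble the arrowhead matrix Q in (1) from diag(d), border c, and corner b."""
    m = len(d) + 1
    Q = np.zeros((m, m))
    Q[:m-1, :m-1] = np.diag(d)
    Q[:m-1, -1] = c
    Q[-1, :m-1] = c
    Q[-1, -1] = b
    return Q

def gershgorin_bounds(d, c, b):
    """Lower bound on the smallest and upper bound on the largest eigenvalue, as in (3)-(4)."""
    d, c = np.asarray(d, float), np.asarray(c, float)
    radius = np.sum(np.abs(c))
    lower = min(np.min(d - np.abs(c)), b - radius)
    upper = max(np.max(d + np.abs(c)), b + radius)
    return lower, upper

# illustrative data (not taken from the paper)
d, c, b = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0], 6.0
lam = np.linalg.eigvalsh(arrowhead(d, c, b))
lo, hi = gershgorin_bounds(d, c, b)
print(lam, lo, hi)   # all eigenvalues lie in the interval [lo, hi]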
The rest of the paper is outlined as follows. In Section 2, we introduce notation and present several useful results on the eigenvalues of arrowhead matrices. We give our main results in Section 3. In Section 4, we revisit the lower and upper bounds on the ratio of eigenvalues of arrowhead matrices associated with hub matrices and wireless communication systems [9], and subsequently we make those bounds sharper by using the results in Section 3. In Section 5, we compute the bounds on the eigenvalues of arrowhead matrices using the developed theorems via three examples. Conclusions are given in Section 6.

Notation and Basic Results
The identity matrix is denoted by I. The notation diag(a_1, a_2, . . . , a_n) represents a diagonal matrix whose diagonal elements are a_1, a_2, . . . , a_n. The determinant of a matrix A is denoted by det(A). The eigenvalues of a symmetric matrix A ∈ R^{n×n} are always ordered such that

λ_1(A) ≤ λ_2(A) ≤ · · · ≤ λ_n(A).

For a vector a ∈ R^n, its Euclidean norm is defined to be ‖a‖ := ( Σ_{i=1}^{n} |a_i|^2 )^{1/2}. The first result is about the determinant of an arrowhead matrix and is stated as follows.

Lemma 1. Let Q ∈ R^{m×m} be an arrowhead matrix of the form (1) with D = diag(d_1, d_2, . . . , d_{m−1}) and c = (c_1, c_2, . . . , c_{m−1})^t. Then

det(λI − Q) = (λ − b) Π_{j=1}^{m−1} (λ − d_j) − Σ_{j=1}^{m−1} c_j^2 Π_{i=1, i≠j}^{m−1} (λ − d_i).   (6)
The proof of this result can be found in [5,13] and therefore is omitted here.
When the diagonal matrix D in (1) is a zero matrix, the following result follows from Lemma 1.

Corollary 2.
Let Q ∈ R^{m×m} be an arrowhead matrix having the following form:

Q = ( 0    c )
    ( c^t  b ),   (7)

where c is a vector in R^{m−1} and b is a real number. Then the eigenvalues of Q are

λ = 0 (with multiplicity m − 2),   λ = (b − √(b^2 + 4‖c‖^2)) / 2,   λ = (b + √(b^2 + 4‖c‖^2)) / 2.

Proof. By using Lemma 1, we have

det(λI − Q) = λ^{m−2} ( λ^2 − bλ − ‖c‖^2 ).

Clearly, λ = 0 is a zero of det(λI − Q) with multiplicity m − 2, and the remaining two eigenvalues are the roots of λ^2 − bλ − ‖c‖^2 = 0, which are the two stated values.
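As a quick numerical illustration of Corollary 2 (with made-up data, not taken from the paper), the eigenvalues of a special arrowhead matrix can be checked against the formula above:

import numpy as np

c = np.array([1.0, -2.0, 0.5])          # border vector, arbitrary example
b = 3.0                                  # corner entry
m = len(c) + 1
S = np.zeros((m, m))
S[:m-1, -1] = c
S[-1, :m-1] = c
S[-1, -1] = b

r = np.sqrt(b**2 + 4 * np.dot(c, c))     # the discriminant sqrt(b^2 + 4||c||^2)
predicted = sorted([(b - r) / 2] + [0.0] * (m - 2) + [(b + r) / 2])
print(np.allclose(np.linalg.eigvalsh(S), predicted))   # True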
In what follows, a matrix Q having the form in (7) is called a special arrowhead matrix. The following corollary (see also [3]) is a direct consequence of Lemma 1.

Corollary 3. Let Q be an m × m arrowhead matrix given by (1) whose last k diagonal elements are identical, d_{m−k} = d_{m−k+1} = · · · = d_{m−1}, and distinct from the first m − k − 1 diagonal elements. Then d_{m−k} is an eigenvalue of Q with multiplicity k − 1, and the remaining eigenvalues of Q are the eigenvalues of the (m − k + 1) × (m − k + 1) arrowhead matrix Q̃ obtained from Q by replacing the k repeated diagonal elements by the single element d_{m−k} and the corresponding entries c_{m−k}, . . . , c_{m−1} of c by the single entry (c_{m−k}^2 + · · · + c_{m−1}^2)^{1/2}.

Proof. By (6) in Lemma 1, we have

det(λI − Q) = (λ − d_{m−k})^{k−1} det(λI − Q̃).

Clearly, if λ is an eigenvalue of Q, then λ is either an eigenvalue of Q̃ or equal to d_{m−k}. Conversely, d_{m−k} is an eigenvalue of Q with multiplicity k − 1, and the eigenvalues of Q̃ are eigenvalues of Q. This completes the proof.
By using Corollaries 3 and 4, when studying the eigenvalues of Q in (1) we may assume that the diagonal elements d_1, d_2, . . . , d_{m−1} of Q are distinct. Since eigenvalues of square matrices are invariant under similarity transformations, we can, without loss of generality, arrange the diagonal elements to be ordered so that d_1 < d_2 < · · · < d_{m−1}. Furthermore, we may assume that all entries of the vector c in (1) are nonzero, for the following reason. Suppose that c_j, the jth entry of c, is zero. It can easily be seen from Lemma 1 that λ − d_j is then a factor of det(λI − Q); that is, d_j is one of the eigenvalues of Q. The remaining eigenvalues of Q are the same as those of the matrix obtained by simply deleting the jth row and column of Q. In summary, for any arrowhead matrix, we can find the eigenvalues corresponding to repeated values in D or associated with zero elements of c by inspection.
In this paper, we call a matrix Q in (1) irreducible if the diagonal elements d_1, d_2, . . . , d_{m−1} of Q are distinct and all elements of c are nonzero. By Corollary 4 and the above discussion, any arrowhead matrix can be reduced to an irreducible one.
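The reduction to an irreducible arrowhead matrix can be made concrete. The following Python sketch (the function name and test data are hypothetical) collects the eigenvalues that can be read off by inspection, namely the d_j associated with zero entries of c and the repeated diagonal values, and returns the data of the reduced arrowhead matrix carrying the remaining eigenvalues:

import numpy as np

def reduce_to_irreducible(d, c, b, tol=1e-12):
    """Deflate an arrowhead matrix (diag d, border c, corner b) as discussed above.

    Returns (eigs_by_inspection, d_red, c_red, b): the eigenvalues that can be read
    off directly, plus the data of an irreducible arrowhead matrix with the rest."""
    d, c = np.asarray(d, float), np.asarray(c, float)
    eigs = []

    # 1. Zero border entries: d_j is an eigenvalue; delete the jth row and column.
    zero = np.abs(c) <= tol
    eigs.extend(d[zero].tolist())
    d, c = d[~zero], c[~zero]

    # 2. Repeated diagonal values: each group of k equal d_j's contributes that value
    #    k - 1 times; the group is merged into one entry whose border value is the
    #    Euclidean norm of the corresponding c entries.
    d_red, c_red = [], []
    for value in np.unique(d):
        group = np.isclose(d, value)
        k = int(group.sum())
        eigs.extend([value] * (k - 1))
        d_red.append(value)
        c_red.append(np.sqrt(np.sum(c[group] ** 2)))
    return eigs, np.array(d_red), np.array(c_red), b

# hypothetical data: c_3 = 0 and d_1 = d_2, so 5.0 and 2.0 are eigenvalues by inspection
d, c, b = [2.0, 2.0, 5.0, 7.0], [1.0, 1.0, 0.0, 3.0], 4.0
known, d2, c2, b2 = reduce_to_irreducible(d, c, b)
# the remaining eigenvalues are those of the irreducible arrowhead matrix (d2, c2, b2)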
Remark 5. In [4, 9], Hermitian arrowhead matrices are considered; that is, c in the matrix Q of the form (1) is allowed to be a vector in C^{m−1}. We can directly construct many (real symmetric) arrowhead matrices, denoted by Q̃, from Q. The diagonal elements of these symmetric arrowhead matrices are exactly the same as those of Q, and the vector c̃ in Q̃ can be chosen as

c̃ = (±|c_1|, ±|c_2|, . . . , ±|c_{m−1}|)^t.

In such a way, there are 2^{m−1} such symmetric arrowhead matrices. Because det(λI − Q̃) = det(λI − Q) by Lemma 1, every such symmetric arrowhead matrix Q̃ has the same eigenvalues as Q. This is the reason why we consider only the eigenvalues of real arrowhead matrices in this paper.
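A small numerical check of Remark 5 (illustrative data only): a Hermitian arrowhead matrix and the real symmetric arrowhead matrix obtained by replacing each entry of c with its modulus have the same spectrum.

import numpy as np

d = np.array([1.0, 2.5, 4.0])
c = np.array([1 + 2j, -3j, 0.5 - 0.5j])      # complex border vector
b = 6.0
m = len(d) + 1

def arrowhead(diag, border, corner, dtype):
    # last column holds the border, last row its conjugate, so the matrix is Hermitian
    Q = np.zeros((m, m), dtype=dtype)
    Q[:m-1, :m-1] = np.diag(diag)
    Q[:m-1, -1] = border
    Q[-1, :m-1] = np.conj(border)
    Q[-1, -1] = corner
    return Q

Q_hermitian = arrowhead(d, c, b, complex)
Q_real = arrowhead(d, np.abs(c), b, float)
print(np.allclose(np.linalg.eigvalsh(Q_hermitian), np.linalg.eigvalsh(Q_real)))  # True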
The following well-known result by Weyl on the eigenvalues of a sum of two symmetric matrices is used in the proof of our main theorem.

Theorem 6 (Weyl). Let F and G be two m × m symmetric matrices, and assume that the eigenvalues of F, G, and F + G are arranged in increasing order. Then

λ_j(F + G) ≤ λ_i(F) + λ_{j−i+m}(G)   for i ≥ j,   (14)

λ_j(F + G) ≥ λ_i(F) + λ_{j−i+1}(G)   for i ≤ j.   (15)

To apply Theorem 6 to estimate the eigenvalues of an irreducible arrowhead matrix Q, we need to decompose Q into a sum of two symmetric matrices whose eigenvalues are relatively easy to compute. Motivated by the structure of the arrowhead matrix and the eigenstructure of a special arrowhead matrix (see Corollary 2), we write Q as a sum of a diagonal matrix and a special arrowhead matrix.
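Before proceeding, the inequalities (14) and (15), stated here in the increasing-order convention, can be sanity-checked numerically; the sketch below verifies them for a pair of random symmetric matrices, with the indices shifted to 0-based form.

import numpy as np

rng = np.random.default_rng(0)
m = 6
F = rng.standard_normal((m, m)); F = (F + F.T) / 2
G = rng.standard_normal((m, m)); G = (G + G.T) / 2

lf, lg, lfg = (np.linalg.eigvalsh(X) for X in (F, G, F + G))   # ascending order

ok = True
for j in range(m):                       # j is the 0-based index of lambda_{j+1}
    for i in range(j, m):                # i >= j: inequality (14)
        ok &= lfg[j] <= lf[i] + lg[j - i + m - 1] + 1e-10
    for i in range(j + 1):               # i <= j: inequality (15)
        ok &= lfg[j] >= lf[i] + lg[j - i] - 1e-10
print(ok)                                # True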
To be more precise, let Q ∈ R^{m×m} be an irreducible arrowhead matrix as follows:

Q = ( D    c )
    ( c^t  b ),   D = diag(d_1, d_2, . . . , d_{m−1}),   (16)

with d_1 < d_2 < · · · < d_{m−1} and all entries of c nonzero. For a given number ρ ∈ [0, 1], we split Q as

Q = E + S,   (17)

where

E = diag(d_1, d_2, . . . , d_{m−1}, ρb),   S = ( 0     c        )
                                               ( c^t   (1 − ρ)b ).   (18)

Since E is diagonal and S is a special arrowhead matrix whose eigenvalues are given explicitly by Corollary 2, we can use Theorem 6 to give estimates of the eigenvalues of Q via those of E and S. To number the eigenvalues of E, we introduce the following definition.

Definition 7. For a given ρ ∈ [0, 1], let d̃_1 ≤ d̃_2 ≤ · · · ≤ d̃_m denote the numbers d_1, d_2, . . . , d_{m−1}, ρb arranged in increasing order, so that λ_j(E) = d̃_j for j = 1, 2, . . . , m.

Theorem 8. Let Q be an irreducible arrowhead matrix of the form (16), and let ρ ∈ [0, 1]. Then the eigenvalues of Q are bounded above by

λ_1(Q) ≤ min{ d̃_1 + t, d̃_2, d̃_m + s },
λ_j(Q) ≤ min{ d̃_j + t, d̃_{j+1} },   j = 2, . . . , m − 1,   (19)
λ_m(Q) ≤ d̃_m + t,

and bounded below by

λ_1(Q) ≥ d̃_1 + s,
λ_j(Q) ≥ max{ d̃_j + s, d̃_{j−1} },   j = 2, . . . , m − 1,   (20)
λ_m(Q) ≥ max{ d̃_m + s, d̃_{m−1}, d̃_1 + t },

where

s = ( (1 − ρ)b − √((1 − ρ)^2 b^2 + 4‖c‖^2) ) / 2,   t = ( (1 − ρ)b + √((1 − ρ)^2 b^2 + 4‖c‖^2) ) / 2.   (21)

Proof. For a given number ρ ∈ [0, 1], we split the matrix Q into a sum of a diagonal matrix E and a special arrowhead matrix S according to (17), where E and S are defined by (18). Clearly, we know that λ_j(E) = d̃_j for j = 1, 2, . . . , m. By Corollary 2, the eigenvalues of S are 0, with multiplicity m − 2, together with s and t, where s and t are given by (21).
Upper Bounds. By (14) in Theorem 6, we have

λ_j(Q) = λ_j(E + S) ≤ λ_i(E) + λ_{j−i+m}(S) = d̃_i + λ_{j−i+m}(S)

for all i ≥ j. Clearly, for a given j,

λ_j(Q) ≤ min_{i ≥ j} { d̃_i + λ_{j−i+m}(S) }.

More precisely, since {d̃_i}_{i=1}^{m} is monotonically increasing, s ≤ 0, and t ≥ 0, we have

λ_j(Q) ≤ min{ d̃_j + t, d̃_{j+1} }

for j = 2, . . . , m − 1, and

λ_1(Q) ≤ min{ d̃_1 + t, d̃_2, d̃_m + s },   λ_m(Q) ≤ d̃_m + t.

In conclusion, (19) holds.
Lower Bounds. By (15) in Theorem 6, we have, for a given j,

λ_j(Q) ≥ max_{i ≤ j} { d̃_i + λ_{j−i+1}(S) }.

Hence,

λ_j(Q) ≥ max{ d̃_j + s, d̃_{j−1} }

for j = 2, . . . , m − 1, and

λ_1(Q) ≥ d̃_1 + s,   λ_m(Q) ≥ max{ d̃_m + s, d̃_{m−1}, d̃_1 + t }.

As we can see from Theorem 8, the lower and upper bounds on the eigenvalues of Q are functions of ρ ∈ [0, 1] for the given irreducible matrix Q. In other words, the bounds on the eigenvalues vary with the number ρ. In particular, when we choose ρ to be an endpoint, that is, ρ = 0 or ρ = 1, we can give an alternative proof of the interlacing eigenvalues theorem for arrowhead matrices (see, e.g., [12, page 186]). This theorem is stated as follows.
Combining these two parts yields our result.
The proof of the above result shows that we can obtain improved lower and upper bounds for each eigenvalue of an irreducible arrowhead matrix by finding an optimal parameter ρ in [0, 1]; a numerical illustration is sketched below. Our main results will be given in the next section.
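To make this strategy concrete, here is a minimal Python sketch, assuming the natural split in which E carries the diagonal elements d_1, . . . , d_{m−1} together with ρb and S is the special arrowhead matrix with border c and corner (1 − ρ)b. For each index j it evaluates the Weyl bounds over a grid of ρ values and keeps the tightest ones, a numerical stand-in for the closed-form optimization carried out in the next section.

import numpy as np

def weyl_bounds(d, c, b, rho):
    """Bounds of Theorem 8 for one value of rho: the eigenvalues of E are d_1,...,d_{m-1}, rho*b
    sorted increasingly; the eigenvalues of S are 0 (m-2 times) together with s and t."""
    d, c = np.asarray(d, float), np.asarray(c, float)
    m = len(d) + 1
    lam_E = np.sort(np.append(d, rho * b))
    disc = np.sqrt(((1 - rho) * b) ** 2 + 4 * np.dot(c, c))
    s, t = ((1 - rho) * b - disc) / 2, ((1 - rho) * b + disc) / 2
    lam_S = np.sort(np.concatenate(([s], np.zeros(m - 2), [t])))
    # Weyl's inequalities (14) and (15) in 0-based indexing
    upper = [min(lam_E[i] + lam_S[j - i + m - 1] for i in range(j, m)) for j in range(m)]
    lower = [max(lam_E[i] + lam_S[j - i] for i in range(j + 1)) for j in range(m)]
    return np.array(lower), np.array(upper)

def best_bounds(d, c, b, grid=np.linspace(0.0, 1.0, 101)):
    """Tightest Weyl bounds over a grid of rho values (a different rho may be best for each j)."""
    lowers, uppers = zip(*(weyl_bounds(d, c, b, r) for r in grid))
    return np.max(lowers, axis=0), np.min(uppers, axis=0)

# illustrative data (not taken from the paper)
d, c, b = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0], 6.0
Q = np.diag(d + [b]); Q[:3, 3] = c; Q[3, :3] = c
lo, hi = best_bounds(d, c, b)
print(lo, np.linalg.eigvalsh(Q), hi)     # lo <= eigenvalues <= hi componentwise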

Main Results
Associated with the arrowhead matrix Q in Theorem 8, we define four functions f_i, i = 1, 2, 3, 4, on the interval [0, 1] as follows: Obviously, where s and t are given by (21).
The following observation about the monotonicity of the functions f_i, i = 1, 2, 3, 4, is simple but quite useful, as we will see in the proofs of our main results. Since Therefore, Since Upper Bound of λ_m(Q). From (19) we have For Hence, This completes the proof.

Theorem 12. Let Q be an irreducible arrowhead matrix defined by (16) and satisfying all assumptions in Theorem 8. Then the eigenvalues of Q are bounded below by
Proof. In Theorem 8, the lower bounds on the eigenvalues of Q in (20) are determined by d̃_j, j = 1, 2, . . . , m, and by s and t in (21). As in the proof of Theorem 11, the lower bounds on the eigenvalues of Q are functions of ρ in the interval [0, 1]. Therefore, we are able to find optimal bounds on the eigenvalues of Q by choosing ρ properly. The discussion of (45) is given separately for j = 1, for 2 ≤ j ≤ m − 1, and for j = m.
Lower Bound of λ_1(Q). From (20), we have In this case, we consider ρ lying in the following two subintervals: It leads to Lower Bound of λ_2(Q). From (20), we have These lead to

Numerical Examples
In this section, we numerically compare the lower and upper bounds on the eigenvalues of arrowhead matrices estimated by three approaches. The first approach, denoted by C, is due to Cauchy and is based on (2)-(4). The second approach, denoted by SS, is based on the eigenvalue bounds provided by Theorems 11 and 12. The third approach, denoted by WS, is based on Wolkowicz and Styan's lower and upper bounds for the largest and smallest eigenvalues of a symmetric matrix [15]. These WS bounds are given by

a − sp ≤ λ_1(Q) ≤ a − s/p,   a + s/p ≤ λ_m(Q) ≤ a + sp,

where Q ∈ R^{m×m} is symmetric, p = √(m − 1), a = trace(Q)/m, and s^2 = trace(Q^t Q)/m − a^2.
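For completeness, here is a short Python sketch of the WS bounds as defined above (the test matrix is illustrative and is not one of the examples below):

import numpy as np

def ws_bounds(Q):
    """Wolkowicz-Styan bounds for the smallest and largest eigenvalues of a symmetric Q."""
    m = Q.shape[0]
    a = np.trace(Q) / m
    s = np.sqrt(np.trace(Q.T @ Q) / m - a ** 2)
    p = np.sqrt(m - 1)
    return (a - s * p, a - s / p), (a + s / p, a + s * p)   # (bounds on lambda_1), (bounds on lambda_m)

Q = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0],
              [2.0, 1.0, 5.0]])          # illustrative symmetric arrowhead matrix
(l1_lo, l1_hi), (lm_lo, lm_hi) = ws_bounds(Q)
lam = np.linalg.eigvalsh(Q)
print(l1_lo <= lam[0] <= l1_hi, lm_lo <= lam[-1] <= lm_hi)   # True True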
Example 20. Consider the directed graph in Figure 1, which might be used to represent a MIMO communication scheme.
The adjacency matrix A of the directed graph is a hub matrix, with the right-most column, corresponding to node 4, as the hub column. The associated Gram matrix Q = A^t A is an arrowhead matrix whose eigenvalues are 0, 1, 1.4384, and 5.5616. Corollary 3 implies that 1 is an eigenvalue of Q. By Corollary 4, the reduced matrix Q̃ has eigenvalues λ_1(Q̃) = 0, λ_2(Q̃) = 1.4384, and λ_3(Q̃) = 5.5616. The C bounds, SS bounds, and WS bounds for the eigenvalues of the matrix Q̃ are listed in Table 1. For λ_1(Q̃), the lower SS bound is the best, followed by the lower C bound and then the lower WS bound; the upper SS bound is the best, followed by the upper WS bound and then the upper C bound. For λ_2(Q̃), the SS bounds are the same as the C bounds. For λ_3(Q̃), the lower SS bound is the best, while the lower C bound and the lower WS bound are the same; the upper SS bound is the best, followed by the upper WS bound and then the upper C bound. In conclusion, the SS bounds are the best.

Example 21. We consider an arrowhead matrix Q whose eigenvalues, together with the corresponding C bounds, SS bounds, and WS bounds, are listed in Table 2. For λ_1(Q), the lower SS bound is the best, followed by the lower WS bound and then the lower C bound; the upper SS bound and the upper C bound are the same and are better than the upper WS bound. For λ_2(Q), the lower SS bound is better than the lower C bound, and the upper SS bound is the same as the upper C bound. For λ_3(Q), the lower SS bound is the best, followed by the lower WS bound and then the lower C bound; the upper SS bound is the best, followed by the upper WS bound and then the upper C bound.
Example 22. We consider an arrowhead matrix Q whose eigenvalues, together with the corresponding C bounds, SS bounds, and WS bounds, are listed in Table 3. For λ_1(Q), the lower C bound is the best, followed by the lower SS bound and then the lower WS bound; the upper SS bound is the best, followed by the upper WS bound and then the upper C bound. For λ_2(Q), the SS bounds are the same as the C bounds. For λ_3(Q), the upper SS bound is the best, followed by the upper WS bound and then the upper C bound; the lower SS bound is the best, followed by the lower WS bound and then the lower C bound.

Conclusions
Motivated by the need to estimate more accurately the eigengaps of the system matrices associated with hub matrices, this paper provides an efficient way to estimate lower and upper bounds on the eigenvalues of arrowhead matrices. Improved lower and upper bounds for the eigengaps of the system matrices are developed. We applied these results to a wireless communication application and subsequently presented several numerical examples. In the future, we plan to extend our results to hub-dominant matrices, which will allow hub matrices with correlated nonhub columns.