
A DFT-based approximate eigenvalue and singular value decomposition of polynomial matrices

Abstract

In this article, we address the problem of singular value decomposition (SVD) of polynomial matrices and eigenvalue decomposition (EVD) of para-Hermitian matrices. The discrete Fourier transform (DFT) enables us to propose a new algorithm based on uniform sampling of polynomial matrices in the frequency domain. This formulation of polynomial matrix decomposition allows for controlling the spectral properties of the decomposition. We set up a nonlinear quadratic minimization for phase alignment of the decomposition at each frequency sample, which leads to a compact-order approximation of the decomposed matrices. This compact-order approximation makes the result suitable for filterbank and multiple-input multiple-output (MIMO) precoding applications, or any application dealing with the realization of polynomial matrices as transfer functions of MIMO systems. Numerical examples demonstrate the versatility of the proposed algorithm, provided by relaxation of the paraunitary constraint, and its configurability to select different properties.

1 Introduction

Polynomial matrices have long been used for modeling and realization of multiple-input multiple-output (MIMO) systems in the context of control theory [1]. Nowadays, polynomial matrices have a wide spectrum of applications in MIMO communications [2–6], source separation [7], and broadband array processing [8]. They also have a dominant role in the development of multirate filterbanks [9].

More recently, there has been much interest in polynomial matrix decompositions such as the QR decomposition [10–12], the eigenvalue decomposition (EVD) [13, 14], and the singular value decomposition (SVD) [5, 11]. Lambert [15] utilized the discrete Fourier transform (DFT) domain to change the problem of polynomial EVD into pointwise EVD. Since the EVD is obtained at each frequency separately, eigenvectors are known at each frequency only up to a scaling factor. Therefore, this method requires many frequency samples to avoid abrupt changes between adjacent eigenvectors.

Although many methods of designing principal component filterbanks have been developed that are equivalent to the EVD of pseudo-circulant polynomial matrices [16, 17], the next pioneering work on polynomial matrix EVD was presented by McWhirter et al. [13]. They use an extension of the Jacobi algorithm known as SBR2 for the EVD of para-Hermitian polynomial matrices, which guarantees exact paraunitarity of the eigenvector matrix. Since the final goal of the SBR2 algorithm is strong decorrelation, the decomposition does not necessarily satisfy the spectral majorization property. The SBR2 algorithm has also been modified for QR decomposition and SVD [10, 11].

Jacobi-type algorithms are not the only proposed methods for polynomial matrix decomposition. Another iterative method for spectrally majorized EVD is presented in [14], which is based on the maximization of zeroth-order diagonal energy. The spectral majorization property of this algorithm is verified via simulation. Following the work of [6], a DFT-based approximation of polynomial SVD is also proposed in [18], which uses model order truncation by phase optimization.

In this article, we present polynomial EVD and SVD based on a DFT formulation. It transforms the problem of polynomial matrix decomposition into the problem of constant matrix decomposition, pointwise in frequency. At first glance, it seems that applying the inverse DFT to the decomposed matrices leads directly to the polynomial EVD and SVD of the corresponding polynomial matrix. However, we show later in this article that, in order to obtain a compact-order decomposition, a phase alignment of the decomposed constant matrices in the DFT domain is required; this results in polynomial matrices of considerably lower order. For this reason, a quadratic nonlinear minimization problem is set up to minimize the decomposition error for a given finite order constraint. Consequently, the required number of frequency samples and the computational complexity of the decomposition are reduced dramatically. The algorithm provides compact-order matrices as an approximation of the polynomial matrix decomposition for an arbitrary polynomial order. This is suitable in MIMO communications and filterbank applications, where we deal with the realization of MIMO linear time-invariant systems. Moreover, the formulation of polynomial EVD and SVD in the DFT domain enables us to select the property of the decomposition. We show that if eigenvalues (singular values) intersect at some frequencies, the smooth decomposition and the spectrally majorized decomposition are distinct. The proposed algorithm is able to achieve either of these properties.

The remainder of this article is organized as follows. The relation between polynomial matrix decomposition and DFT matrix decomposition is formulated in Section 2. In Section 3, two important spectral properties of the decomposition, namely spectral majorization and smooth decomposition, are provided using an appropriate arrangement of singular values (eigenvalues) and corresponding singular vectors (eigenvectors). The equality of the polynomial matrix decomposition and the DFT-domain decomposition is guaranteed via the finite duration constraint, which is investigated in Section 4. The finite duration constraint requires the phase angles of the singular vectors (eigenvectors) to minimize a nonlinear quadratic function; a solution for this problem is proposed in Section 5. Section 6 presents the results of computer simulations that demonstrate the performance of the proposed decomposition algorithm.

1.1 Notation

Some notational conventions are as follows: constant scalars, vectors, and matrices are denoted by regular lower case characters, lower case characters with an over-arrow, and upper case characters, respectively. Coefficients of a polynomial (scalar, vector, or matrix) carry the index variable n in square brackets. Any polynomial (scalar, vector, or matrix) is distinguished by a bold character with the indeterminate variable z in parentheses, and its DFT by a bold character with the index k in square brackets.

2 Problem formulation

Let A(z) denote a p × q polynomial matrix, i.e., a matrix in which each element is a polynomial. Equivalently, we can represent this type of matrix by its coefficient matrix A[n],

$$\mathbf{A}(z) = \sum_{n=N_{\min}}^{N_{\max}} A[n]\, z^{-n}$$
(1)

where A[n] is non-zero only in the interval [N_min, N_max]. Define the effective degree of A(z) as N_max − N_min (equivalently, the length of A[n] is N_max − N_min + 1).

The polynomial matrix multiplication of a p × q matrix A(z) and a q × t matrix B(z) is defined as

$$\mathbf{C}(z) = \mathbf{A}(z)\,\mathbf{B}(z) \iff \mathbf{c}_{ij}(z) = \sum_{k=1}^{q} \mathbf{a}_{ik}(z)\,\mathbf{b}_{kj}(z).$$

We can obtain the coefficient matrix of the product by the matrix convolution of A[n] and B[n], defined as

$$C[n] = A[n] * B[n] \iff c_{ij}[n] = \sum_{k=1}^{q} a_{ik}[n] * b_{kj}[n],$$

where ∗ denotes the linear convolution operator.
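For concreteness, a minimal NumPy sketch of this coefficient-domain product is given below. The helper name polymat_mult and the (p, q, N) array layout are our own illustrative conventions, not notation from the text.

```python
import numpy as np

def polymat_mult(A, B):
    """Multiply polynomial matrices stored as coefficient arrays.

    A: (p, q, Na) array, where A[:, :, n] is the coefficient matrix A[n];
    B: (q, t, Nb) array. Returns C of shape (p, t, Na + Nb - 1).
    """
    p, q, Na = A.shape
    q2, t, Nb = B.shape
    assert q == q2, "inner matrix dimensions must agree"
    C = np.zeros((p, t, Na + Nb - 1), dtype=complex)
    for i in range(p):
        for j in range(t):
            for k in range(q):
                # scalar polynomial product = linear convolution
                C[i, j] += np.convolve(A[i, k], B[k, j])
    return C
```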

Denote the para-conjugate of a polynomial matrix as

$$\tilde{\mathbf{A}}(z) = \mathbf{A}_{*}^{T}(z^{-1}) = \sum_{n=N_{\min}}^{N_{\max}} A^{H}[n]\, z^{n},$$

in which the subscript ∗ denotes complex conjugation of the coefficients of the polynomial matrix A(z).

A polynomial matrix is said to be para-Hermitian if Ã(z) = A(z), or equivalently A[n] = A^H[−n]. We call a polynomial matrix U(z) paraunitary if Ũ(z)U(z) = I, where I is the q × q identity matrix.

The thin EVD of a p × p para-Hermitian polynomial matrix A(z) is of the form

$$\mathbf{A}(z) = \mathbf{U}(z)\,\boldsymbol{\Lambda}(z)\,\tilde{\mathbf{U}}(z),$$
(2)

and the thin SVD of an arbitrary p × q polynomial matrix is of the form

$$\mathbf{A}(z) = \mathbf{U}(z)\,\boldsymbol{\Sigma}(z)\,\tilde{\mathbf{V}}(z),$$
(3)

where U(z) and V(z) are p × r and q × r paraunitary matrices, respectively. Λ(z) and Σ(z) represent r × r diagonal matrices where r is the rank of A(z).

We can equivalently write the EVD of a para-Hermitian matrix and the SVD of a polynomial matrix in coefficient matrix form:

$$A[n] = U[n] * \Lambda[n] * U^{H}[-n]$$
(4)
$$A[n] = U[n] * \Sigma[n] * V^{H}[-n]$$
(5)

in which U[n], V[n], Λ[n], and Σ[n] are the coefficient matrices corresponding to U(z), V(z), Λ(z), and Σ(z), and ∗ again denotes linear convolution.

In general, the EVD and SVD of a finite-order polynomial matrix are not of finite order. As an example, consider the EVD of the para-Hermitian polynomial matrix

$$\mathbf{A}(z) = \begin{bmatrix} 2 & z^{-1} + 1 \\ z + 1 & 2 \end{bmatrix}.$$
(6)

The eigenvalues and eigenvectors of the polynomial matrix in (6) are neither of finite order nor rational:

$$\boldsymbol{\Lambda}(z) = \begin{bmatrix} 2 + (z^{-1} + 2 + z)^{1/2} & 0 \\ 0 & 2 - (z^{-1} + 2 + z)^{1/2} \end{bmatrix}, \qquad
\mathbf{U}(z) = \frac{1}{\sqrt{2}} \begin{bmatrix} (z^{-1} + 1)(z^{-1} + 2 + z)^{-1/2} & -(z^{-1} + 1)(z^{-1} + 2 + z)^{-1/2} \\ 1 & 1 \end{bmatrix}.$$

The same results can be found for polynomial QR decomposition in [12].

We mainly explain the proposed algorithm for polynomial SVD; wherever necessary, we state the result for both decompositions.

The decomposition in (3) can also be approximated by samples of the discrete-time Fourier transform, yielding a decomposition of the form

$$\mathbf{A}[k] = \mathbf{U}'[k]\,\boldsymbol{\Sigma}'[k]\,\mathbf{V}'^{H}[k], \quad k = 0, 1, \ldots, K - 1.$$
(7)

Such a decomposition can be obtained by taking the K-point DFT of the coefficient matrix A[n],

$$\mathbf{A}[k] = \mathbf{A}(z)\big|_{z = w_K^{-k}} = \sum_{n=N_{\min}}^{N_{\max}} A[n]\, w_K^{kn}, \quad k = 0, 1, \ldots, K - 1,$$
(8)

where w K  = exp(−j 2 π / K).
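As an illustration, the sampling in (8) is simply a zero-padded FFT along the coefficient axis. A minimal sketch, assuming a causal coefficient array (N_min = 0) stored as in the previous snippet:

```python
import numpy as np

def polymat_dft(A, K):
    """K-point DFT samples of A(z) as in Eq. (8). A: (p, q, N) causal
    coefficient array; returns a (K, p, q) array of constant matrices."""
    assert K >= A.shape[2], "K must be at least the coefficient length"
    # np.fft.fft uses exp(-j 2 pi k n / K), i.e., w_K^{kn} for n >= 0
    return np.fft.fft(A, n=K, axis=2).transpose(2, 0, 1)
```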

The DFT formulation plays an important role in the decomposition of polynomial matrices because it replaces the problem of polynomial SVD, which involves many protracted steps, with K conventional SVDs that are pointwise in frequency. It also enables us to control the spectral properties of the decomposition. However, it has two inherent drawbacks:

  1. Regardless of the trajectory of the polynomial singular values in the frequency domain, the conventional SVD orders the singular values irrespective of the ordering at neighboring frequency samples.

  2. In the frequency domain, the samples of the polynomial singular vectors obtained by the SVD at each frequency sample are known only up to a scalar complex exponential, which leads to discontinuous variation between neighboring frequency samples.

The first issue relates directly to the spectral properties of the decomposition. In Section 3, we explain why arranging singular values in decreasing order yields approximate spectral majorization, while smooth decomposition requires a rearrangement of the singular values and their corresponding singular vectors.

For the second issue, consider the conventional SVD of an arbitrary constant matrix A. If the pair u and v are the left and right singular vectors corresponding to a non-zero singular value, then for an arbitrary scalar phase angle θ, the pair e^{jθ}u and e^{jθ}v are also left and right singular vectors corresponding to the same singular value. Although this non-uniqueness is trivial in the conventional SVD, it plays a crucial role in polynomial SVD. When we perform the SVD at each frequency of the DFT matrix as in (7), this phase ambiguity exists at each frequency independently of the other frequency samples.

Denote by u_i[k] and v_i[k] the ith columns of the frequency samples of the desired matrices U(z) and V(z), and by u_i′[k], v_i′[k], and σ_i′[k] the ith columns of U′[k] and V′[k] and the ith diagonal element of Σ′[k] obtained by the pointwise SVD. Since the SVD at each frequency sample is blind to phase, they are related by

$$\begin{aligned} \mathbf{u}_i[k] &= e^{j\theta_i[k]}\, \mathbf{u}_i'[k] \\ \mathbf{v}_i[k] &= e^{j\theta_i[k]}\, \mathbf{v}_i'[k] \\ \sigma_i[k] &= \sigma_i'[k] \end{aligned} \qquad i = 1, 2, \ldots, r,$$
(9)

for arbitrary unknown phase angles θ_i[k]. Moreover, in many applications, especially those related to MIMO precoding, we can relax the constraints of the problem by allowing the singular values to be complex (see applications of polynomial SVD in [4, 18]):

$$\begin{aligned} \mathbf{u}_i[k] &= e^{j\theta_i^u[k]}\, \mathbf{u}_i'[k] \\ \mathbf{v}_i[k] &= e^{j\theta_i^v[k]}\, \mathbf{v}_i'[k] \\ \sigma_i[k] &= e^{j(\theta_i^v[k] - \theta_i^u[k])}\, \sigma_i'[k] \end{aligned} \qquad i = 1, 2, \ldots, r.$$
(10)

In this situation, the singular values lose part of their conventional meaning. For instance, the largest singular value is conventionally the 2-norm of the corresponding matrix, which no longer holds for complex singular values. The process of compensating the singular vectors for these phases is what we call phase alignment; it is developed in Section 4.

Based on the above, Algorithm 1 gives descriptive pseudo-code for the DFT-based SVD. The modification of the algorithm for the EVD of para-Hermitian matrices is straightforward. If at each frequency sample all singular values are in decreasing order, the REARRANGE function (described in Algorithm 2) is required only for smooth decomposition; for spectral majorization, no further arrangement is needed. For phase alignment, we first need to compute the phase angles, which is indicated in the algorithm by the DOGLEG function and described in Algorithm 3.
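The frequency-domain core of Algorithm 1 is then just a loop of conventional SVDs. A hypothetical sketch follows; the rearrangement and phase-alignment steps, sketched later, are deliberately omitted here:

```python
import numpy as np

def pointwise_svd(Ak):
    """Per-frequency SVD of DFT samples as in Eq. (7). Ak: (K, p, q);
    returns U' (K, p, r), sigma' (K, r), V' (K, q, r)."""
    U, S, V = [], [], []
    for k in range(Ak.shape[0]):
        u, s, vh = np.linalg.svd(Ak[k], full_matrices=False)
        # numpy returns singular values in decreasing order, which is
        # exactly the ordering used for approximate spectral majorization
        U.append(u); S.append(s); V.append(vh.conj().T)
    return np.array(U), np.array(S), np.array(V)
```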

3 Spectrally majorized decomposition versus smooth decomposition

Two of the most appealing decomposition properties are smooth decomposition [19] and spectral majorization [13]. These two objectives cannot always be achieved at the same time; hence, we must choose which one to pursue as the main objective.

In many filterbank applications that deal with principal component filterbanks, spectral majorization and strong decorrelation are both required [16]. Since smooth decomposition leads to a more compact decomposition, exploiting it is reasonable in cases where the only objective is strong decorrelation. The DFT-based approach to polynomial matrix decomposition is capable of decomposing a matrix with either of these properties with a small modification.

Algorithm 1. Approximate SVD.

The polynomial EVD of a para-Hermitian matrix is said to have the spectral majorization property if [13, 16]

$$\lambda_1(e^{j\omega}) \geq \lambda_2(e^{j\omega}) \geq \cdots \geq \lambda_r(e^{j\omega}), \quad \forall \omega.$$

Note that the eigenvalues of para-Hermitian matrices are real at all frequencies.

We can extend the definition to the polynomial SVD; replacing eigenvalues with singular values in the definition, we have

$$\sigma_1(e^{j\omega}) \geq \sigma_2(e^{j\omega}) \geq \cdots \geq \sigma_r(e^{j\omega}), \quad \forall \omega.$$

If we allow the singular values to be complex, the singular values in the definition are replaced by their absolute values.

A polynomial matrix has no discontinuity in the frequency domain; hence, we modify the definition of smooth decomposition presented in [19] to fit our problem and avoid unnecessary discussion.

The polynomial EVD (SVD) of a matrix is said to possess smooth decomposition if the eigenvectors (singular vectors) have no discontinuity in the frequency domain, that is,

$$\left| \frac{d}{d\omega}\, u_{il}(e^{j\omega}) \right| < \infty, \quad \forall \omega \ \ \text{and} \ \ i = 1, 2, \ldots, r, \quad l = 1, 2, \ldots, p,$$
(11)

where u_il is the lth element of u_i.

If the eigenvalues (singular values) of a polynomial matrix intersect at some frequencies, spectral majorization and smooth decomposition are not simultaneously realizable. As an example, suppose A(z) is a polynomial matrix with eigenvectors u_1(z) and u_2(z) corresponding to distinct eigenvalues λ_1(z) and λ_2(z), respectively. Let us assume that u_1(e^{jω}) and u_2(e^{jω}) have no discontinuity in the frequency domain, and that λ_1(e^{jω}) and λ_2(e^{jω}) intersect at some frequencies. Denote

$$\lambda_1'(e^{j\omega}) = \begin{cases} \lambda_1(e^{j\omega}), & \lambda_1(e^{j\omega}) \geq \lambda_2(e^{j\omega}) \\ \lambda_2(e^{j\omega}), & \lambda_1(e^{j\omega}) < \lambda_2(e^{j\omega}) \end{cases} \qquad
\lambda_2'(e^{j\omega}) = \begin{cases} \lambda_2(e^{j\omega}), & \lambda_1(e^{j\omega}) \geq \lambda_2(e^{j\omega}) \\ \lambda_1(e^{j\omega}), & \lambda_1(e^{j\omega}) < \lambda_2(e^{j\omega}) \end{cases}$$
(12)

Algorithm 2. Rearrangement for smooth decomposition.

and

$$\mathbf{u}_1'(e^{j\omega}) = \begin{cases} \mathbf{u}_1(e^{j\omega}), & \lambda_1(e^{j\omega}) \geq \lambda_2(e^{j\omega}) \\ \mathbf{u}_2(e^{j\omega}), & \lambda_1(e^{j\omega}) < \lambda_2(e^{j\omega}) \end{cases} \qquad
\mathbf{u}_2'(e^{j\omega}) = \begin{cases} \mathbf{u}_2(e^{j\omega}), & \lambda_1(e^{j\omega}) \geq \lambda_2(e^{j\omega}) \\ \mathbf{u}_1(e^{j\omega}), & \lambda_1(e^{j\omega}) < \lambda_2(e^{j\omega}) \end{cases}$$
(13)

Obviously, u_1′(e^{jω}) and u_2′(e^{jω}) are eigenvectors corresponding to the distinct eigenvalues λ_1′(e^{jω}) and λ_2′(e^{jω}), respectively. Note that λ_1′(e^{jω}) ≥ λ_2′(e^{jω}) at all frequencies, which means λ_1′(e^{jω}) and λ_2′(e^{jω}) are spectrally majorized. However, u_1′(e^{jω}) and u_2′(e^{jω}) are discontinuous at the intersection frequencies of λ_1(e^{jω}) and λ_2(e^{jω}), which implies that they are no longer smooth. In this situation, although λ_1′(e^{jω}), λ_2′(e^{jω}), u_1′(e^{jω}), and u_2′(e^{jω}) are not even analytic, we can approximate them with finite-order polynomials.

If a decomposition has spectral majorization, its eigenvalues (singular values) are in decreasing order at all frequencies; therefore, they are in decreasing order on any arbitrary set of frequency samples, including the DFT frequencies. Obviously, the converse is only approximately true. Hence, for the polynomial EVD to possess spectral majorization approximately, it suffices to arrange the sampled eigenvalues (singular values) of (7) in decreasing order. Since we only enforce spectral majorization at the DFT frequency samples, the resulting EVD (SVD) may possess the property only approximately. Similar results can be seen in [14, 20].

To obtain smooth singular vectors, we propose an algorithm based on the inner product of consecutive frequency samples of the singular vectors. We can accumulate the smoothness requirement in (11) over all elements of each vector as

$$\left\| \frac{d}{d\omega}\, \mathbf{u}_i(e^{j\omega}) \right\| < \infty, \quad \forall \omega \ \ \text{and} \ \ i = 1, 2, \ldots, r.$$
(14)

Let B be an upper bound on the norm of the derivative, and let ℜ{·} denote the real part of a complex value.

For an arbitrary Δ ω we have

$$\left\| \mathbf{u}_i(e^{j(\omega + \Delta\omega)}) - \mathbf{u}_i(e^{j\omega}) \right\|^2 = 2 - 2\,\Re\!\left\{ \mathbf{u}_i^{H}(e^{j(\omega + \Delta\omega)})\, \mathbf{u}_i(e^{j\omega}) \right\} < (\Delta\omega\, B)^2, \quad \forall \omega,$$
(15)

that is, for a smooth singular vector, ℜ{u_i^H(e^{j(ω+Δω)}) u_i(e^{jω})} can be made as close to unity as desired by making Δω sufficiently small. In our problem, u_i(e^{jω}) is sampled uniformly with Δω = 2π/K. Since the decomposition is performed at each frequency sample independently, u_i′[k] and u_i′[k+1] are not necessarily two consecutive frequency samples of the same smooth eigenvector. Therefore, we should rearrange the eigenvalues and eigenvectors to yield a smooth decomposition. This can be done for each eigenvector sample u_i[k] by seeking the eigenvector of the successor sample, u_j[k+1], with the largest value of ℜ{u_i^H[k] u_j[k+1]}.

Define the inner product c_ij^u[k] as

$$c_{ij}^{u}[k] = \mathbf{u}_i^{H}[k-1]\, \mathbf{u}_j[k].$$

Since u_i′[k] is a scalar phase multiple of u_i[k], computing ℜ{c_ij^u[k]} is not possible before phase alignment. Due to (15), for sufficiently small Δω, two consecutive samples of a smooth singular vector can be made as close as desired, and we can approximate

$$\Re\!\left\{ c_{ij}^{u}[k] \right\} \approx \left| c_{ij}^{u}[k] \right| = \left| c_{ij}^{u'}[k] \right|,$$

which allows us to use the inner products of the u′[k] instead of the u[k]. From (12) and (13), it can be seen that before an intersection of the eigenvalues, consecutive eigenvectors sorted by the conventional EVD in decreasing order belong to the same smooth eigenvector, so |c_11^u[k]| and |c_22^u[k]| are near unity. However, if k − 1 and k are two frequency samples just before and after an intersection, respectively, then due to the decreasing order of the eigenvalues, the smooth eigenvectors are swapped after the intersection. Therefore, |c_11^u[k]| and |c_22^u[k]| take values near zero, while |c_12^u[k]| and |c_21^u[k]| are near unity.

Algorithm 2 describes a simple rearrangement procedure to track eigenvectors (singular vectors) for smooth decomposition.
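A hypothetical sketch of this rearrangement follows (a greedy per-frequency matching on |c_ij^u[k]|; for well-separated inner products the greedy choice coincides with the true permutation):

```python
import numpy as np

def rearrange_smooth(U, S, V):
    """U: (K, p, r), V: (K, q, r), S: (K, r) per-frequency factors.
    Permutes columns at each k to maximize overlap with frequency k-1."""
    K = U.shape[0]
    for k in range(1, K):
        C = np.abs(U[k - 1].conj().transpose() @ U[k])  # |c_ij^u[k]|
        perm = np.argmax(C, axis=1)   # greedy match for each column i
        U[k] = U[k][:, perm]
        V[k] = V[k][:, perm]
        S[k] = S[k][perm]
    return U, S, V
```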

4 Finite duration constraint

Phase alignment is critical for obtaining a compact-order decomposition. Another aspect of this fact is revealed by the coefficient-domain perspective of (7). In this domain, the multiplication is replaced by circular convolution:

$$A[((n))_K] = U'[((n))_K] \circledast \Sigma'[((n))_K] \circledast V'^{H}[((-n))_K] = U[((n))_K] \circledast \Sigma[((n))_K] \circledast V^{H}[((-n))_K],$$
(16)

in which ⊛ is the circular convolution operator and ((n))_K denotes n modulo K.

Polynomial SVD corresponds to linear convolution in the coefficient domain, whereas the decomposition obtained from the DFT corresponds to circular convolution. Recalling discrete-time signal processing, it is well known that circular convolution can be used in place of linear convolution if the convolved signals are adequately zero-padded. That is, for two signals x_1[n] and x_2[n] of lengths N_1 and N_2, respectively, zero padding is applied such that the zero-padded signals have length N_1 + N_2 − 1 [21]. Hence, if the last M − 1 coefficients of U[n], Σ[n], and V[n] are zero, the following results hold:

$$\begin{aligned}
A[((n))_K] = U[((n))_K] \circledast \Sigma[((n))_K] \circledast V^{H}[((-n))_K] \ &\Longleftrightarrow\ A[n] = U[n] * \Sigma[n] * V^{H}[-n], \\
U[((n))_K] \circledast U^{H}[((-n))_K] = \delta[((n))_K]\, I \ &\Longleftrightarrow\ U[n] * U^{H}[-n] = \delta[n]\, I, \\
V[((n))_K] \circledast V^{H}[((-n))_K] = \delta[((n))_K]\, I \ &\Longleftrightarrow\ V[n] * V^{H}[-n] = \delta[n]\, I.
\end{aligned}$$
(17)

Therefore, the problem is to obtain the phase set {θ_i[k]} and correct the singular vectors using (9). The phase set {θ_i[k]} should be such that the resulting coefficients satisfy (17).

Without loss of generality, let U[n] and V[n] be causal, i.e., U[n] = V[n] = 0 for n < 0. Then U[n] and V[n] (which are supposed to be of length M) should be zero-padded with at least M − 1 zeros, so that

$$U[n] = \frac{1}{K} \sum_{k=0}^{K-1} \mathbf{U}[k]\, w_K^{-kn} = 0$$
(18)

for n = M, M + 1, …, K − 1, in which K ≥ 2M − 1. If these conditions are satisfied, circular convolution can be used instead of linear convolution.

Since the available matrices of singular vectors at each frequency are the U′[k], inserting (9) into (18) for each singular vector separately leads to

$$\sum_{k=0}^{K-1} \mathbf{u}_i'[k]\, e^{j\theta_i^u[k]}\, w_K^{-kn} = 0$$
(19)

for n = M, M + 1, …, K − 1.

Without loss of generality, let θ_i[0] = 0. In a more compact form, we can express these K − M vector equations in matrix form as

$$F_M(\mathbf{u}_i')\, \vec{x}(\theta_i^u) = -\vec{f}_M(\mathbf{u}_i'), \quad i = 1, 2, \ldots, r,$$
(20)

in which x⃗(θ_i^u) = [exp(jθ_i^u[1]), exp(jθ_i^u[2]), …, exp(jθ_i^u[K − 1])]^T, f⃗_M(u_i′) = [u_i′^T[0], u_i′^T[0], …, u_i′^T[0]]^T is a p(K − M) × 1 vector, and

$$F_M(\mathbf{u}_i') = \begin{bmatrix}
\mathbf{u}_i'[1]\, w_K^{-M} & \mathbf{u}_i'[2]\, w_K^{-2M} & \cdots & \mathbf{u}_i'[K-1]\, w_K^{-(K-1)M} \\
\mathbf{u}_i'[1]\, w_K^{-(M+1)} & \mathbf{u}_i'[2]\, w_K^{-2(M+1)} & \cdots & \mathbf{u}_i'[K-1]\, w_K^{-(K-1)(M+1)} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{u}_i'[1]\, w_K^{-(K-1)} & \mathbf{u}_i'[2]\, w_K^{-2(K-1)} & \cdots & \mathbf{u}_i'[K-1]\, w_K^{-(K-1)^2}
\end{bmatrix}.$$
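Under the conventions assumed in our earlier sketches, F_M(u_i′) and f⃗_M(u_i′) can be assembled as follows (a hypothetical helper; ui holds the K frequency samples of one singular vector as rows):

```python
import numpy as np

def build_Ff(ui, M):
    """Build F_M(u_i') and f_M(u_i') of Eq. (20).
    ui: (K, p) complex array with ui[k] = u_i'[k]. The finite duration
    constraint then reads F @ exp(1j * theta) = -f."""
    K, p = ui.shape
    n = np.arange(M, K)               # constrained coefficient indices
    k = np.arange(1, K)               # theta[0] = 0 is excluded
    w = np.exp(-1j * 2 * np.pi / K)   # w_K
    W = w ** (-np.outer(n, k))        # w_K^{-kn}
    F = (ui[1:].T[None, :, :] * W[:, None, :]).reshape((K - M) * p, K - 1)
    f = np.tile(ui[0], K - M)         # stacked copies of u_i'[0]
    return F, f
```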

For the polynomial EVD, Equation (20) is enough; for the polynomial SVD, however, we have two options. To approximate an SVD with approximately positive singular values, we must augment F_M(u_i′) and f⃗_M(u_i′) with the similarly defined matrix and vector for v_i′[k],

$$F_M(\mathbf{u}_i', \mathbf{v}_i') = \begin{bmatrix} F_M(\mathbf{u}_i') \\ F_M(\mathbf{v}_i') \end{bmatrix} \qquad \text{and} \qquad \vec{f}_M(\mathbf{u}_i', \mathbf{v}_i') = \begin{bmatrix} \vec{f}_M(\mathbf{u}_i') \\ \vec{f}_M(\mathbf{v}_i') \end{bmatrix},$$

then solve

$$F_M(\mathbf{u}_i', \mathbf{v}_i')\, \vec{x}(\theta_i) = -\vec{f}_M(\mathbf{u}_i', \mathbf{v}_i'), \quad i = 1, 2, \ldots, r.$$
(21)

An additional degree of freedom is obtained by allowing the singular values to be complex. However, a straightforward solution that yields singular values and singular vectors of order M is complicated. Instead, we impose the finite duration constraint only on the two sets of singular vectors:

$$\begin{aligned} F_M(\mathbf{u}_i')\, \vec{x}(\theta_i^u) &= -\vec{f}_M(\mathbf{u}_i') \\ F_M(\mathbf{v}_i')\, \vec{x}(\theta_i^v) &= -\vec{f}_M(\mathbf{v}_i') \end{aligned} \qquad i = 1, 2, \ldots, r.$$
(22)

If K ≥ 2M − 1, then the last M − 1 coefficients of the resulting polynomial vectors are zero. Therefore, according to (17), U(z) and V(z) are paraunitary. On the other hand, if K ≥ 2M + N_max − N_min − 1, the circular convolution relation of the coefficients,

$$\Sigma[((n))_K] = U^{H}[((-n))_K] \circledast A[((n))_K] \circledast V[((n))_K],$$

results in the linear convolution Σ[n] = U^H[−n] ∗ A[n] ∗ V[n]. This guarantees that Σ(z) is a diagonal polynomial matrix of order 2M + N_max − N_min − 2. Obviously, if U(z) and V(z) are paraunitary and Σ(z) = Ũ(z)A(z)V(z) is a diagonal matrix, then A(z) = U(z)Σ(z)Ṽ(z) is the polynomial SVD of A(z).

Once the phase sets {θ_i^u[k], θ_i^v[k]} are obtained from (20), (21), or (22), the phase alignment of u_i′[k] and v_i′[k] can be done using (10), and the inverse DFT of U[k] and V[k] yields the coefficient matrices U[n] and V[n]. To obtain the singular values, we have two options. We can either set K ≥ 2M − 1 and phase-align the σ_i′[k] using (10); after the inverse DFT of Σ[k], we then truncate Σ[n] to duration M. Another option, which yields more accurate results, is to calculate Ũ(z)A(z)V(z) and replace its off-diagonal elements with zero.
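A sketch of this final step under the same conventions (Uk is assumed to hold the phase-aligned samples; the truncation to length M matches the first option described above):

```python
import numpy as np

def coeffs_from_samples(Uk, M):
    """Uk: (K, p, r) aligned frequency samples U[k]; returns the causal
    coefficient matrices U[n], n = 0, ..., M-1, via the inverse DFT."""
    Un = np.fft.ifft(Uk, axis=0)  # inverse DFT along the frequency axis
    return Un[:M]                 # the remaining coefficients are ~ zero
```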

Next, we provide a minimization approach to determine the unknown set {θ i [k]}.

5 Gradient descent solution

In general, there may exist no phase vector θ that satisfies (20). Even when a phase vector satisfying the finite duration constraint exists, finding it is not straightforward. For these reasons, we view (20) as a minimization problem [6]. We use the energy of the highest order coefficients (the coefficients that we equate to zero in (18)) as the objective of the minimization problem:

$$J(\vec{\theta}_i) = \left\| F_M(\mathbf{u}_i')\, \vec{x}(\theta_i^u) + \vec{f}_M(\mathbf{u}_i') \right\|^2, \quad i = 1, 2, \ldots, r.$$
(23)

An alternating minimization technique for this phase optimization problem was proposed in [6]; we also describe it in this section.

Throughout this section, we focus on solving θ_i* = arg min J(θ_i) as a least-squares problem for a single singular vector u_i′[k], so we drop the subscript i from θ_i and use F and f instead of F_M(u_i′) and f⃗_M(u_i′) to simplify the notation. The objective J(θ) is intentionally presented as a function of θ to emphasize that our problem is classified as unconstrained optimization.

We exploit the trust region strategy for problem (23). By utilizing information about the gradient vector and Hessian matrix at each step, the trust region strategy constructs a model function m_k that behaves similarly to the objective close to the current point θ^(k). The model m_k is usually defined as the second-order Taylor series expansion (or an approximation of it) of J(θ + φ) around θ, that is,

$$m_k(\vec{\varphi}) = J(\vec{\theta}) + \vec{\varphi}^{\,T}\, \nabla J(\vec{\theta}) + \frac{1}{2}\, \vec{\varphi}^{\,T}\, \nabla^2 J(\vec{\theta})\, \vec{\varphi},$$

where ∇J(θ) and ∇²J(θ) are the gradient vector and the Hessian matrix of J(θ), respectively. The model m_k is designed to be a good approximation of J(θ) near the current point and is not trustworthy in regions far from it. Consequently, restricting the minimization of m_k to a region around θ^(k) is crucial, that is,

$$\vec{\varphi}^{\,*} = \arg\min_{\vec{\varphi}}\ m_k(\vec{\varphi}) \quad \text{subject to} \quad \|\vec{\varphi}\| < R,$$
(24)

where R is the trust region radius.

The decision about shrinking the trust region is made by comparing the actual reduction in the objective function with the predicted reduction. Given a step φ, the ratio

$$\rho = \frac{J(\vec{\theta}) - J(\vec{\theta} + \vec{\varphi})}{m_k(\vec{0}) - m_k(\vec{\varphi})}$$
(25)

is used as a criterion to decide whether the trust region is small enough.

Among the methods that approximate the solution of the constrained minimization (24), the dogleg procedure is the only one that leads to an analytical approximation. It also guarantees at least as much reduction in m_k as is achieved by the Cauchy point (the minimizer of m_k along the steepest descent direction −∇J(θ), subject to the trust region) [22]. However, this procedure requires the Hessian matrix (or an approximation of it) to be positive definite.

5.1 Hessian matrix modification

The gradient vector and Hessian matrix corresponding to J( θ ) are as follows

$$\begin{aligned}
\vec{g}(\vec{\theta}) &= 2\,\Im\!\left\{ X(\vec{\theta})\, F^{H} \left( F\, \vec{x}(\vec{\theta}) + \vec{f} \right) \right\}, \\
H(\vec{\theta}) &= 2\,\Re\!\left\{ X(\vec{\theta})\, F^{H} F\, X(\vec{\theta})^{H} \right\} - 2\,\Re\!\left\{ \operatorname{diag}\!\left\{ X(\vec{\theta})\, F^{H} \left( F\, \vec{x}(\vec{\theta}) + \vec{f} \right) \right\} \right\},
\end{aligned}$$
(26)

where X(θ) is a diagonal matrix whose kth diagonal element is exp(−jθ[k]), k = 1, …, K − 1.

In general, the Hessian matrix in (26) is not guaranteed to be positive definite. Therefore, we should modify it to yield a positive definite approximation.

We provide a simple modification, which brings some desirable features, by omitting the second term of the Hessian matrix and applying diagonal loading to guarantee positive definiteness:

$$H(\vec{\theta}) \approx 2\,\Re\!\left\{ X(\vec{\theta})\, F^{H} F\, X(\vec{\theta})^{H} \right\} + \alpha I.$$
(27)

The term 2ℜ{X(θ)F^H F X(θ)^H} is positive semi-definite, and in many situations it is much more significant than the second term of the Hessian matrix in (26). Hence, with the diagonal loading αI (where I is of conformable size and α is very small), the modified Hessian matrix (27) is guaranteed to be positive definite and provides the desired properties, in contrast to using the exact Hessian matrix.
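In the conventions of our sketches (x(θ) = exp(jθ), X(θ) = diag(exp(−jθ))), the gradient (26) and the modified Hessian (27) can be evaluated as below; grad_hess is a hypothetical helper name:

```python
import numpy as np

def grad_hess(F, f, theta, alpha=1e-8):
    """Gradient of Eq. (26) and modified Hessian of Eq. (27)."""
    x = np.exp(1j * theta)                    # x(theta)
    r = F @ x + f                             # residual F x + f
    Xd = np.exp(-1j * theta)                  # diagonal of X(theta)
    g = 2 * np.imag(Xd * (F.conj().T @ r))    # gradient, Eq. (26)
    # keep only the positive semi-definite term and add diagonal loading
    H = 2 * np.real(np.outer(Xd, Xd.conj()) * (F.conj().T @ F))
    H += alpha * np.eye(theta.size)
    return g, H
```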

5.2 Dogleg method

The dogleg method starts with the unconstrained minimizer of (24),

$$\vec{\varphi}^{\,h} = -H^{-1}\, \vec{g}.$$
(28)

When the trust region radius is so large that ‖φ^h‖ ≤ R, this is the exact solution of (24), and we select it as the dogleg answer. On the other hand, for small R, the solution of (24) is approximately −R g/‖g‖. For intermediate values of R, the optimal solution lies on a curved trajectory between these two points [22].

The dogleg method approximates this trajectory by a path consisting of two line segments. The first line segment runs from the origin to the minimizer of the model along the steepest descent direction,

$$\vec{\varphi}^{\,g} = -\frac{\vec{g}^{\,T} \vec{g}}{\vec{g}^{\,T} H\, \vec{g}}\, \vec{g}.$$
(29)

The second line segment runs from φ^g to φ^h. These two segments form an approximate trajectory whose intersection with the sphere ‖φ‖ = R is the approximate solution of (24) when ‖φ^h‖ > R.
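A compact sketch of the resulting dogleg step (the standard textbook construction [22], written with the hypothetical names g, H, R used above):

```python
import numpy as np

def dogleg_step(g, H, R):
    """Approximate solution of (24) along the two-segment dogleg path."""
    ph = -np.linalg.solve(H, g)            # full (Newton) step, Eq. (28)
    if np.linalg.norm(ph) <= R:
        return ph                          # unconstrained minimizer fits
    pg = -(g @ g) / (g @ H @ g) * g        # steepest descent point, Eq. (29)
    if np.linalg.norm(pg) >= R:
        return -R * g / np.linalg.norm(g)  # clip to the boundary
    # intersect pg + t (ph - pg), t in [0, 1], with the sphere ||p|| = R
    d = ph - pg
    a, b, c = d @ d, 2 * (pg @ d), (pg @ pg) - R ** 2
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return pg + t * d
```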

5.3 Alternating minimization

Another solution of (23) is obtained by converting the multivariate minimization problem into a sequence of single-variable minimization problems via alternating minimization [6]. In each iteration, a series of single-variable minimizations is performed while the other parameters are held unchanged. Each iteration consists of K − 1 steps, and at each step one parameter θ[k] is updated. Suppose we are at step k of the ith iteration. At this step, the first k − 1 parameters have been updated in the current iteration, and the remaining K − k − 1 parameters retain their values from the previous iteration. These parameters are held fixed while θ[k] is minimized at the current step:

$$\theta^{i}[k] = \arg\min_{\theta[k]}\ J\!\left( \theta^{i}[1], \ldots, \theta^{i}[k-1],\ \theta[k],\ \theta^{i-1}[k+1], \ldots, \theta^{i-1}[K-1] \right).$$
(30)

The cost function is guaranteed to be non-increasing at each step; however, this method also converges to a local minimum that highly depends on the initial guess. To solve (30), it suffices to set the kth element of the gradient vector in (26) equal to zero. Suppose the calculations are performed for the phase alignment of u′[k], k = 0, 1, …, K − 1; then

$$\frac{\partial J}{\partial \theta[k]} = 2\,\Im\!\left\{ e^{-j\theta[k]}\, t_i[k] \right\} = 2\left| t_i[k] \right| \sin\!\left( \angle t_i[k] - \theta[k] \right) = 0,$$
(31)

where ∠t_i[k] is the phase angle of t_i[k] and

$$t_i[k] = \sum_{l=0}^{k-1} e^{j\theta^{i}[l]}\, \mathbf{u}'^{H}[k]\, \mathbf{u}'[l]\, \frac{w_K^{(k-l)M} - 1}{1 - w_K^{(k-l)}} \;+\; \sum_{l=k+1}^{K-1} e^{j\theta^{i-1}[l]}\, \mathbf{u}'^{H}[k]\, \mathbf{u}'[l]\, \frac{w_K^{(k-l)M} - 1}{1 - w_K^{(k-l)}}.$$

Fortunately, Equation (31) has a closed form solution

$$\theta^{i}[k] = \begin{cases} \angle t_i[k] \\ \angle t_i[k] + \pi. \end{cases}$$
(32)

However, only the second case of (32) has a positive second partial derivative. Therefore, the global minimum of (30) is

$$\theta^{i}[k] = \angle t_i[k] + \pi.$$
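One sweep of this alternating update can be written compactly; the sketch below assumes the same sample layout as our earlier helpers and updates θ[1], …, θ[K − 1] in place using the closed-form minimizer:

```python
import numpy as np

def alt_min_sweep(ui, M, theta):
    """One iteration of the alternating minimization for one singular
    vector. ui: (K, p) samples u_i'[k]; theta: (K,) with theta[0] = 0."""
    K = ui.shape[0]
    w = np.exp(-1j * 2 * np.pi / K)
    G = ui.conj() @ ui.T                   # G[k, l] = u'^H[k] u'[l]
    for k in range(1, K):
        d = k - np.arange(K)               # k - l
        kern = np.zeros(K, dtype=complex)
        m = d != 0                         # the l = k term is excluded
        kern[m] = (w ** (d[m] * M) - 1) / (1 - w ** d[m])
        t = np.sum(np.exp(1j * theta) * G[k] * kern)
        theta[k] = np.angle(t) + np.pi     # global minimizer, Eq. (32)
    return theta
```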

5.4 Initial guess

All unconstrained minimization algorithms require a starting point, which we denote by θ^0. To avoid getting stuck in local minima, we should select a good initial guess. This can be accomplished by minimizing a different but similar cost function, denoted J′(θ),

$$J'(\vec{\theta}) = \left\| \vec{x}(\vec{\theta}) + F^{\dagger}\, \vec{f} \right\|^2,$$

in which † represents the pseudo-inverse.

Minimizing J′(θ) yields a simple initial guess:

$$\vec{\theta}^{\,0} = \angle\!\left( -F^{\dagger}\, \vec{f} \right).$$
(33)

Based on what has been presented in this section, a pseudo-code description of the trust region dogleg algorithm is given in Algorithm 3. In this algorithm, we start with the initial guess (33) and a trust region radius upper bound R̄, and then continue the trust region minimization procedure as described in this section.
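Putting the pieces together, a hypothetical rendering of Algorithm 3 (reusing the grad_hess and dogleg_step sketches above and the standard acceptance thresholds of [22]):

```python
import numpy as np

def dogleg_phase_align(F, f, R_max=1.0, iters=50):
    """Trust-region dogleg minimization of J = ||F exp(j theta) + f||^2."""
    theta = np.angle(-np.linalg.pinv(F) @ f)     # initial guess, Eq. (33)
    J = lambda th: np.linalg.norm(F @ np.exp(1j * th) + f) ** 2
    R = R_max
    for _ in range(iters):
        g, H = grad_hess(F, f, theta)
        step = dogleg_step(g, H, R)
        pred = -(g @ step + 0.5 * step @ (H @ step))  # m(0) - m(step)
        rho = (J(theta) - J(theta + step)) / pred     # ratio of Eq. (25)
        if rho > 0.25:
            theta = theta + step                      # accept the step
            if rho > 0.75:
                R = min(2 * R, R_max)                 # grow the region
        else:
            R = R / 4                                 # reject and shrink
    return theta
```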

6 Simulation results

In this section, we present some examples to demonstrate the performance of the proposed algorithm. For the first example, our algorithm is applied to a polynomial matrix example from [11]

$$\mathbf{A}(z) = \begin{bmatrix} 2 & 0 & 2z + 1 \\ z + 1 & 1 & 0 \\ 0 & z^{-1} & 1 \end{bmatrix}.$$
(34)

The frequency behavior of the singular values can be seen in Figure 1. There is no intersection of singular values, so setting up the algorithm for either spectral majorization or frequency smoothness leads to the same decomposition.

Figure 1. Singular values versus frequency.

To obtain approximately positive singular values, we use (21). Define the average energy of the highest order coefficients for the pair of polynomial singular vectors u_i and v_i as E_i^{u,v} = J(θ_i)/(K − M) (we expect the energy of the highest order coefficients to be zero, or at least minimized). A plot of E_i versus iteration for each pair of singular vectors is depicted in Figure 2. The decomposition length is M = 9 (order 8), and we use K = 2M + (N_max − N_min) = 20 DFT points.

Figure 2. Average highest order coefficient energy E_i versus iteration number for a decomposition with approximately positive singular values. Dotted line: Cauchy point. Dashed line: alternating minimization. Solid line: proposed algorithm.

As can be seen, using the dogleg method with the approximate Hessian matrix leads to fast convergence, in contrast with alternating minimization and the Cauchy point (which is always selected along the gradient direction). Of course, we should consider that, due to the matrix inversion, the computational complexity of the dogleg method is O(K³), while that of alternating minimization and the Cauchy point is O(K²).

The final values of the average highest order coefficient energy for the three pairs of singular vectors are 5.54 × 10⁻⁵, 3.5 × 10⁻³, and 0.43, respectively. The first singular vector satisfies the finite duration constraint almost exactly, and the second fairly satisfies it. However, the highest order coefficients of the last singular vector possess a considerable amount of energy, which causes decomposition error.

Denote the relative error of the decomposition as

$$E_A = \frac{\left\| \mathbf{A}(z) - \mathbf{U}(z)\,\boldsymbol{\Sigma}(z)\,\tilde{\mathbf{V}}(z) \right\|_F}{\left\| \mathbf{A}(z) \right\|_F},$$

in which ‖·‖_F is the extension of the Frobenius norm to polynomial matrices, defined by

$$\left\| \mathbf{A}(z) \right\|_F = \sqrt{\sum_n \left\| A[n] \right\|_F^2}.$$

Since in our optimization procedure we only seek a finite duration approximation, U(z) and V(z) are only approximately paraunitary. Therefore, we also define the relative paraunitarity error as

$$E_U = \frac{\left\| \tilde{\mathbf{U}}(z)\,\mathbf{U}(z) - \mathbf{I} \right\|_F}{\sqrt{r}}.$$

An upper bound for E U can be obtained as

$$E_U \leq \frac{2(K-M)}{K} \sum_{i=1}^{r} E_i^{(u)}\!\left( 1 - E_i^{(u)} \right) + \frac{K-M}{K} \sum_{i=1}^{r} \left( E_i^{(u)} \right)^2,$$

which means that as the average energy on the K − M highest order coefficients goes to zero, E_U diminishes.

The relative error of this decomposition is E_A = 1.18 × 10⁻², while the errors of U(z) and V(z) are E_U = 3.3 × 10⁻² and E_V = 3.08 × 10⁻², respectively. The paraunitarity error is relatively high compared with the decomposition error; this is due to the difference between the first two singular values and the last singular value.

A plot of the relative errors E_A, E_U, and E_V for various values of M is shown in Figure 3. The number of frequency samples is fixed at K = 2M + (N_max − N_min) = 2M + 2.

Figure 3. Relative error versus M for a decomposition with approximately positive singular values. K = 2M + 2.

The number of frequency samples K is an optional choice; however, as discussed in Section 4, it should satisfy K ≥ 2M + N_max − N_min − 1. To demonstrate the effect of the number of frequency samples on the decomposition error, a plot of relative error versus K is depicted in Figure 4. Increasing the number of frequency samples does not reduce the relative error; moreover, it increases the computational burden. Therefore, a value near 2M + N_max − N_min − 1 is a reasonable choice for the number of frequency samples.

Figure 4. Relative error versus K for a decomposition with approximately positive singular values. M = 31.

Now, let us relax the problem by allowing the singular values to be complex and using (22). A plot of E_i^u and E_i^v versus iteration for each pair of singular vectors is depicted in Figure 5. The decomposition length is again M = 9 (order 8), and we use K = 2M + (N_max − N_min) = 20 DFT points.

Figure 5. Average highest order coefficient energy E_i versus iteration number for a decomposition with complex singular values. Dotted line: Cauchy point. Dashed line: alternating minimization. Solid line: proposed algorithm.

Again, the dogleg method converges very rapidly, while alternating minimization and the Cauchy point converge slowly. The final values of the average energy for the three left singular vectors are 1.23 × 10⁻¹⁰, 9.7 × 10⁻⁴, and 10⁻³, respectively, while these values for the right singular vectors are 1.12 × 10⁻¹⁰, 1.4 × 10⁻³, and 8.7 × 10⁻⁴, respectively.

Note that the average energy of the highest order coefficients for the third pair of singular vectors decreases meaningfully. Figure 1 shows that the third singular value goes to zero and then returns to positive values. If we constrain the singular values to be positive, a phase jump of π radians is imposed on one of the third singular vectors near the frequency at which the singular value reaches zero. However, by allowing the singular values to be complex, a zero crossing occurs, which requires no discontinuity of the singular vectors.

The relative error of this decomposition is E_A = 4.9 × 10⁻³, while the errors of U(z) and V(z) are E_U = 2.5 × 10⁻³ and E_V = 3.5 × 10⁻³, respectively. In contrast with constraining the singular values to be positive, allowing complex singular values decreases the decomposition and paraunitarity errors significantly.

Plots of the relative errors E_A, E_U, and E_V for various values of M and K are shown in Figures 6 and 7, respectively. Allowing the singular values to be complex causes a significant reduction in all relative errors. As mentioned above, Figure 7 shows that increasing K beyond 2M + N_max − N_min − 1 brings no improvement in the relative errors while adding computational burden.

Figure 6. Relative error versus M for a decomposition with complex singular values. K = 2M + 2.

Figure 7. Relative error versus K for a decomposition with complex singular values. M = 9.

McWhirter and coauthors [11] have reported the relative error of their decomposition. With paraunitary matrices U(z) and V(z) of order 33, the relative error of their algorithm is 0.0469, whereas our algorithm requires paraunitary matrices of only order 3 for a relative error of 0.035 with positive singular values and of 2.45 × 10⁻⁶ with complex singular values. In addition, in the new approach, exploiting paraunitary matrices of order 33, the relative error is 0.0032 with positive singular values and 4.7 × 10⁻⁶ with complex singular values.

This large difference is not caused by the number of iterations: we compare results after all algorithms have essentially converged, and continuing the iterations yields only trivial improvement. The main reason lies in the different constraints of the solution presented in [11] in contrast to our proposed method. While they impose the paraunitary constraint on Ũ(z)A(z)V(z) to yield a diagonalized Σ(z), we impose the finite duration constraint and obtain approximations of U(z) and V(z) that fit the decomposed matrices at each frequency sample. Therefore, we can consider this method a finite duration polynomial regression of the matrices obtained by uniformly sampling U(z) and V(z) on the unit circle in the z-plane.

As a second example, consider the EVD of the following para-Hermitian matrix:

$$\mathbf{A}(z) = \begin{bmatrix} 0.5z^{-2} + 3 + 0.5z^{2} & 0.5z^{-2} - 0.5z^{2} \\ -0.5z^{-2} + 0.5z^{2} & -0.5z^{-2} + 1 - 0.5z^{2} \end{bmatrix}.$$

The exact smooth EVD of this matrix is of finite order

$$\mathbf{U}(z) = \frac{1}{2} \begin{bmatrix} 1 + z^{-1} & 1 - z^{-1} \\ 1 - z^{-1} & 1 + z^{-1} \end{bmatrix},$$
(35)
$$\boldsymbol{\Lambda}(z) = \begin{bmatrix} z^{-1} + 2 + z & 0 \\ 0 & -z^{-1} + 2 - z \end{bmatrix}.$$

The frequency behavior of the eigenvalues can be seen in Figure 8. Since the eigenvalues intersect at two frequencies, the smooth decomposition and the spectrally majorized decomposition result in two distinct solutions.

Figure 8. Eigenvalues of smooth decomposition versus frequency.

To perform smooth decomposition, we need to track and rearrange the eigenvectors to avoid any discontinuity, using Algorithm 2. The resulting |c_ij^u[k]| are shown in Figure 9 for k = 0, 1, …, K − 1. Using these |c_ij^u[k]|, Algorithm 2 swaps the first and second eigenvalues and eigenvectors for k = 12, …, 32, which results in continuity of the eigenvalues and eigenvectors.

Figure 9. Rearrangement of eigenvalues and eigenvectors. K = 42. Dashed line: |c_12^u[k]| and |c_21^u[k]|. Solid line: |c_11^u[k]| and |c_22^u[k]|.

Now that all eigenvalues and eigenvectors are rearranged in the DFT domain, it is time for the phase alignment of the eigenvectors. A plot of E_i versus iteration for M = 3 and smooth decomposition is depicted in Figure 10. As expected, the dogleg algorithm converges rapidly, while alternating minimization and the Cauchy point have a long way to go to converge.

Figure 10. E_i versus iteration number corresponding to smooth decomposition. Dotted line: Cauchy point. Dashed line: alternating minimization. Solid line: proposed algorithm.

Since the energy of the highest order coefficients of the eigenvectors is trifling, using the proposed method for smooth decomposition results in very high accuracy, as seen in the figures. The relative error of the smooth decomposition versus M is shown in Figure 11.

Figure 11. Relative error of smooth decomposition versus M.

While the frequency-smooth EVD of (35) leads to a relative error below 10⁻⁵ for M ≥ 3 within a few iterations, the spectrally majorized EVD requires a much higher polynomial order to reach a reasonable relative error.

Unlike smooth decomposition, which requires rearrangement of the eigenvalues and eigenvectors, spectral majorization requires only sorting the eigenvalues at each frequency sample in decreasing order. Most conventional EVD algorithms sort eigenvalues in decreasing order, so we only need to align the eigenvector phases using Algorithm 3. A plot of E_i versus iteration for M = 20 and spectrally majorized decomposition is depicted in Figure 12.

Figure 12. E_i versus iteration number corresponding to spectrally majorized decomposition. Dotted line: Cauchy point. Dashed line: alternating minimization. Solid line: proposed algorithm.

Due to the abrupt change in the eigenvectors at the intersection frequencies of the eigenvalues, increasing the decomposition order leads to a slow decay of the relative error. Figure 13 shows the relative error as a function of M.

Figure 13. Relative error of spectrally majorized decomposition versus M.

To see the difference between the smooth and spectrally majorized decomposition results, the eigenvalues of the spectrally majorized decomposition are shown in Figure 14, which is comparable with Figure 8 (the eigenvalues of the smooth decomposition). A low polynomial order suffices for the smooth decomposition, while a much higher order is required for the spectrally majorized decomposition; even with M = 20, the decomposition has a relatively high error.

Figure 14. Eigenvalues of spectrally majorized decomposition versus frequency. M = 20.

7 Conclusion

An algorithm for polynomial EVD and SVD based on a DFT formulation has been presented. One of the advantages of the DFT formulation is that it enables us to control the properties of the decomposition. Among these properties, we introduced how to set up the decomposition to achieve spectral majorization or frequency smoothness. We have shown that if singular values (eigenvalues) intersect at some frequency, simultaneous achievement of spectral majorization and smooth decomposition is not possible. In this situation, setting up the decomposition to possess spectral majorization requires a considerably higher order polynomial decomposition and more computational complexity. The energy of the highest order polynomial coefficients of the singular vectors (eigenvectors) is used as a squared-error objective to obtain a compact decomposition based on phase alignment of the frequency samples. The algorithm has the flexibility to compute a decomposition with approximately positive singular values, or a more relaxed decomposition with complex singular values. A solution for this nonlinear quadratic problem is proposed via Newton's method. Since we apply an approximate Hessian matrix to assist the Newton optimization, fast convergence is achieved. The capability of the algorithm to control the order of the polynomial elements of the decomposed matrices and to select the properties of the decomposition makes the proposed method a good choice for filterbank and MIMO precoding applications. Finally, the performance of the proposed algorithm under different conditions was demonstrated via simulations. Simulation results reveal superior decomposition accuracy in contrast with coefficient-domain algorithms, due to the relaxation of paraunitarity.

References

  1. Kailath T: Linear Systems. Englewood Cliffs, NJ: Prentice Hall; 1980.

  2. Tugnait J, Huang B: Multistep linear predictors-based blind identification and equalization of multiple-input multiple-output channels. IEEE Trans. Signal Process. 2000, 48(1):569-571.

  3. Fischer R: Sorted spectral factorization of matrix polynomials in MIMO communications. IEEE Trans. Commun. 2005, 53(6):945-951.

  4. Zamiri-Jafarian H, Rajabzadeh M: A polynomial matrix SVD approach for time domain broadband beamforming in MIMO-OFDM systems. IEEE Vehicular Technology Conference, VTC Spring 2008; 2008:802-806.

  5. Brandt R: Polynomial matrix decompositions: evaluation of algorithms with an application to wideband MIMO communications. 2010.

  6. Palomar D, Lagunas M, Pascual A, Neira A: Practical implementation of jointly designed transmit-receive space-time IIR filters. International Symposium on Signal Processing and Its Applications, ISSPA; 2001:521-524.

  7. Lambert R, Bell A: Blind separation of multiple speakers in a multipath environment. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing; 1997:423-426.

  8. Redif S, McWhirter J, Baxter P, Cooper T: Robust broadband adaptive beamforming via polynomial eigenvalues. OCEANS 2006; 2006:1-6.

  9. Vaidyanathan P: Multirate Systems and Filter Banks. Englewood Cliffs, NJ: Prentice Hall; 1993.

  10. Foster J, McWhirter J, Chambers J: A novel algorithm for calculating the QR decomposition for a polynomial matrix. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing; 2009:3177-3180.

  11. Foster J, McWhirter J, Davies M, Chambers J: An algorithm for calculating the QR and singular value decompositions of polynomial matrices. IEEE Trans. Signal Process. 2010, 58(3):1263-1274.

  12. Cescato D, Bolcskei H: QR decomposition of Laurent polynomial matrices sampled on the unit circle. IEEE Trans. Inf. Theory 2010, 56(9):4754-4761.

  13. McWhirter J, Baxter P, Cooper T, Redif S: An EVD algorithm for para-Hermitian polynomial matrices. IEEE Trans. Signal Process. 2007, 55(5):2158-2169.

  14. Tkacenko A: Approximate eigenvalue decomposition of para-Hermitian systems through successive FIR paraunitary transformations. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing; Dallas, Texas, USA; 2010:4074-4077.

  15. Lambert R: Multichannel blind deconvolution: FIR matrix algebra and separation of multipath mixtures. 1996.

  16. Vaidyanathan P: Theory of optimal orthonormal subband coders. IEEE Trans. Signal Process. 1998, 46(4):1528-1543.

  17. Tkacenko A, Vaidyanathan P: On the spectral factor ambiguity of FIR energy compaction filter banks. IEEE Trans. Signal Process. 2006, 54(1):146-160.

  18. Brandt R, Bengtsson M: Wideband MIMO channel diagonalization in the time domain. International Symposium on Personal, Indoor, and Mobile Radio Communications; 2011:1958-1962.

  19. Dieci L, Eirola T: On smooth decomposition of matrices. SIAM J. Matrix Anal. Appl. 1999, 20(3):800-819.

  20. Redif S, McWhirter J, Weiss S: Design of FIR paraunitary filter banks for subband coding using a polynomial eigenvalue decomposition. IEEE Trans. Signal Process. 2011, 59(11):5253-5264.

  21. Oppenheim A, Schafer R, Buck J: Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice Hall; 1999.

  22. Nocedal J, Wright S: Numerical Optimization. New York: Springer; 1999.


Author information

Correspondence to Hamidreza Amindavar.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Tohidian, M., Amindavar, H. & Reza, A.M. A DFT-based approximate eigenvalue and singular value decomposition of polynomial matrices. EURASIP J. Adv. Signal Process. 2013, 93 (2013). https://doi.org/10.1186/1687-6180-2013-93