- Research
- Open access

# Tensor subspace Tracking via Kronecker structured projections (TeTraKron) for time-varying multidimensional harmonic retrieval

*EURASIP Journal on Advances in Signal Processing*
**volume 2014**, Article number: 123 (2014)

## Abstract

We present a framework for Tensor-based subspace Tracking via Kronecker-structured projections (TeTraKron). TeTraKron allows arbitrary matrix-based subspace tracking schemes to be extended to track the tensor-based subspace estimate. The latter can be computed via a structured projection applied to the matrix-based subspace estimate, which enforces the multidimensional structure in a computationally efficient fashion. This projection is tracked by considering all matrix rearrangements of the signal tensor jointly, which can be efficiently realized via parallel processing. In addition, we incorporate forward-backward-averaging and find a similar link between the real-valued matrix-based and tensor-based subspace estimates. This enables the tracking of the real-valued tensor-based subspace estimate via a similar Kronecker-structured projection applied to the real-valued matrix-based subspace estimate. In time-varying multidimensional harmonic retrieval problems, the TeTraKron-based subspace tracking schemes outperform the original matrix-based subspace tracking algorithms as well as the batch solutions provided by the SVD and the HOSVD. Moreover, incorporating forward-backward-averaging leads to an improved accuracy of the subspace tracking, while only real-valued processing is involved. Furthermore, we evaluate the performance of ESPRIT-type parameter estimation schemes in which the subspace estimates obtained by the proposed TeTraKron-based subspace tracking algorithms are used for the tracking of spatial frequencies in time-varying scenarios.

## 1 Introduction

The design of adaptive algorithms to track the subspace of a nonstationary random signal has a long-standing history in signal processing. The main challenges are achieving a fast adaptation and a good steady-state behavior while keeping the computational complexity low. The first subspace tracking schemes like [1] still had a complexity of \mathcal{O}\left\{{M}^{2}\cdot d\right\}, where *M* is the number of channels (sensors) and *d* is the rank of the signal subspace. Later, as a singular value decomposition (SVD) technique and a data projection method (DPM) were proposed in [2] and [3], respectively, the complexity was lowered to \mathcal{O}\left\{M\cdot {d}^{2}\right\}. Approaches that belong to a low-complexity class with \mathcal{O}\left\{M\cdot d\right\} operations have been developed, such as the scheme in [4] that estimates the eigenvector associated with the largest eigenvalue. Classified into the category of power-based methods, like DPM, the fast Rayleigh quotient-based adaptive noise subspace (FRANS) algorithm [5] combines DPM with a fast orthonormalization procedure. The fast approximated power iteration (FAPI) algorithm [6], another power-based approach with a complexity of \mathcal{O}\left\{M\cdot d\right\}, is based on the power iteration method and a novel projection approximation. On the other hand, belonging to the category of projection approximation-based methods, the well-known projection approximation subspace tracking (PAST) algorithm [7] is one of the first low-complexity subspace tracking schemes with \mathcal{O}\left\{M\cdot d\right\} operations. PAST interprets the signal subspace estimate as the solution of a minimization problem, which can be simplified via an appropriate projection approximation, and then applies a recursive least squares (RLS) procedure to track the signal subspace. A deflation-based version of PAST (PASTd) with an even lower complexity was proposed in [7].
In the literature, variants of some of the aforementioned algorithms have been developed to improve the numerical stability or to track the minor subspace. For a more detailed survey, the reader is referred to [8].

For stationary multidimensional signals, it has been shown that the subspace estimation accuracy can be significantly improved if tensors are used to store and manipulate the signals. A signal subspace estimate based on the Higher-Order Singular Value Decomposition (HOSVD) [9] was introduced in [10]. Therefore, extending this subspace estimation scheme to the tracking of the subspace of a time-varying multidimensional signal is of significant interest.

To this end, we introduce the Tensor-based subspace Tracking via Kronecker structured projections (TeTraKron) framework^{a}. TeTraKron makes it possible to exploit the rich heritage of matrix-based subspace tracking and flexibly extends arbitrary matrix-based subspace tracking schemes to the tracking of the HOSVD-based subspace estimate defined in [10] by running them on all the unfoldings of the data tensor in parallel. Note that tracking the subspaces of all unfoldings of a tensor has been proposed before, e.g., in [11, 12]. For computer vision applications, the incremental tensor subspace learning algorithm developed in [11] tracks the subspace of the unfoldings by using an incremental SVD approach; the unfoldings of the data tensor are projected onto these subspaces. An incremental tensor analysis framework and its variants were introduced in [12] to efficiently compute a compact summary of high-dimensional data and to reveal hidden correlations. However, these approaches do not consider the recombination of these subspaces into the HOSVD-based subspace estimate from [10]. This computationally efficient recombination is the main focus of the TeTraKron framework. Moreover, [11, 12] require tracking the core tensor of the HOSVD, which TeTraKron does not need at all.

The incorporation of forward-backward-averaging is also investigated. Employed by many parameter estimation algorithms as a preprocessing step, forward-backward-averaging virtually doubles the number of observations and leads to an enhanced estimation accuracy. In addition, complex-valued measurements can be conveniently and efficiently transformed to real-valued data [10]. Since only real-valued computations are involved in the subsequent steps, the complexity is reduced. In this work, we show that after incorporating forward-backward-averaging, the real-valued HOSVD-based subspace estimate can likewise be obtained by applying a structured projection to the real-valued matrix-based subspace estimate. Consequently, the TeTraKron framework enables the extension of matrix-based subspace tracking schemes to realize real-valued tensor-based subspace tracking that includes forward-backward-averaging and provides benefits in terms of both performance and complexity. In time-varying multidimensional harmonic retrieval problems, we use the subspace estimates tracked via the TeTraKron-based subspace tracking schemes in ESPRIT-type parameter estimation algorithms [10] and evaluate the resulting performance.

This paper is organized as follows: Section 2 introduces the data model for the matrix-based and the tensor-based subspace estimation. The TeTraKron framework is described in detail in Section 3. The incorporation of forward-backward-averaging and real-valued subspace tracking are also discussed. Section 4 provides examples of how TeTraKron can be employed to develop tensor-based subspace tracking schemes based on PAST, PASTd, and FAPI. Section 5 presents numerical results before the conclusions are drawn in Section 6.

To facilitate the distinction between scalars, vectors, matrices, and tensors, the following notation is used throughout the manuscript: scalars are represented by italic letters, vectors by lowercase bold-faced letters, matrices by uppercase bold-faced letters, and tensors by bold-faced calligraphic letters. The superscripts ^{T}, ^{H}, ^{−1}, and ^{∗} refer to matrix transposition, Hermitian transposition, matrix inversion, and complex conjugation, respectively. The Kronecker product is represented via ⊗ and the Khatri-Rao (column-wise Kronecker) product via ◇. An *M*×*M* identity matrix is symbolized by *I*_{M}. A matrix denoted by *I*_{M×d} (*M*>*d*) has the form {\mathit{I}}_{M\times d}={\left[\begin{array}{ll}{\mathit{I}}_{d}& {0}_{d\times (M-d)}\end{array}\right]}^{\mathrm{T}}. The two-norm of a vector is denoted by ∥·∥. The operator Tri{·} calculates the upper/lower triangular part of its argument and copies its Hermitian transpose to the other lower/upper triangular part [7].

An *R*-way tensor with size *I*_{r} along mode *r*=1,2,…,*R* is represented as \mathcal{A}\in {\mathbb{C}}^{{I}_{1}\times {I}_{2}\times \dots \times {I}_{R}}. The *r*-mode vectors of \mathcal{A} are obtained by varying the *r*th index from 1 to *I*_{r} and keeping all other indices fixed. Aligning all *r*-mode vectors as the columns of a matrix yields the *r*-mode unfolding of \mathcal{A}, which is denoted by {\left[\mathcal{A}\right]}_{(r)}\in {\mathbb{C}}^{{I}_{r}\times {I}_{r+1}\cdot \dots \cdot {I}_{R}\cdot {I}_{1}\cdot \dots \cdot {I}_{r-1}}. The order of the columns is arbitrary as long as it is chosen consistently. We use the reverse cyclical ordering, as proposed in [9]. The *r*-mode product between a tensor \mathcal{A} and a matrix \mathit{U} is written as \mathcal{A}{\times}_{r}\mathit{U}. It is computed by multiplying all *r*-mode vectors of \mathcal{A} with \mathit{U}, in other words, {\left[\mathcal{A}{\times}_{r}\mathit{U}\right]}_{(r)}=\mathit{U}\cdot {\left[\mathcal{A}\right]}_{(r)}. The *r*-rank of a tensor \mathcal{A} is the rank of the *r*-mode unfolding matrix {\left[\mathcal{A}\right]}_{(r)}. The tensor {\mathcal{I}}_{R,d} is an *R*-dimensional identity tensor of size *d*×*d*×…×*d*, which is equal to one if all *R* indices are equal and zero otherwise. In addition, \left[\mathcal{A}{\bigsqcup}_{r}\mathcal{B}\right] symbolizes the concatenation of two tensors \mathcal{A} and \mathcal{B} along the *r*th mode [10].
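To make the unfolding and *r*-mode product conventions concrete, the following NumPy sketch implements both. The function names are ours, and a fixed cyclical column ordering stands in for the reverse cyclical ordering of [9]; the identity {\left[\mathcal{A}{\times}_{r}\mathit{U}\right]}_{(r)}=\mathit{U}\cdot {\left[\mathcal{A}\right]}_{(r)} holds for any consistent choice of column order.

```python
import numpy as np

def unfold(A, r):
    """r-mode unfolding: mode r indexes the rows, the remaining modes
    are arranged cyclically along the columns (a consistent stand-in
    for the reverse cyclical ordering of [9])."""
    R = A.ndim
    perm = [r] + [(r + k) % R for k in range(1, R)]
    return np.transpose(A, perm).reshape(A.shape[r], -1)

def mode_product(A, U, r):
    """r-mode product A x_r U, defined by [A x_r U]_(r) = U . [A]_(r)."""
    R = A.ndim
    perm = [r] + [(r + k) % R for k in range(1, R)]
    B = U @ unfold(A, r)                       # multiply all r-mode vectors by U
    shape = [U.shape[0]] + [A.shape[p] for p in perm[1:]]
    return np.transpose(B.reshape(shape), np.argsort(perm))
```

Any subspace tracker that operates on matrices can thus be fed the unfoldings directly, which is the mechanism TeTraKron relies on.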

## 2 Data model

In this section, we introduce the data model for both the matrix-based and the tensor-based subspace estimation. To this end, we start with the non-adaptive case where the subspaces are estimated once, based on *N* observations in a stationary window. We consider a linear mixture of *d* sources superimposed by additive noise, which can be expressed as

\mathit{X}=\mathit{A}\cdot \mathit{S}+\mathit{W}.\qquad (1)

Here, \mathit{X}\in {\mathbb{C}}^{M\times N} is the matrix of observations from *M* channels at *N* subsequent time instants, \mathit{A}\in {\mathbb{C}}^{M\times d} is the unknown mixing matrix or the array steering matrix, \mathit{S}\in {\mathbb{C}}^{d\times N} contains the unknown source symbols, and \mathit{W} represents the additive noise samples. Then, the truncated SVD of \mathit{X} can be expressed as

\mathit{X}\approx {\widehat{\mathit{U}}}_{\mathrm{s}}\cdot {\widehat{\Sigma}}_{\mathrm{s}}\cdot {\widehat{\mathit{V}}}_{\mathrm{s}}^{\mathrm{H}},\qquad (2)

where the columns of {\widehat{\mathit{U}}}_{\mathrm{s}}\in {\mathbb{C}}^{M\times d} represent an orthonormal basis for the estimated signal subspace, i.e., \text{span}\left\{{\widehat{\mathit{U}}}_{\mathrm{s}}\right\}\approx \text{span}\{\mathit{A}\}.

We can arrange the elements of the matrix \mathit{X}\in {\mathbb{C}}^{M\times N} into a tensor \mathcal{X}\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\dots \times {M}_{R}\times N}, where *M*=*M*_{1}·*M*_{2}⋯*M*_{R}. While such a rearrangement is always possible, it only provides a benefit if the actual underlying signal has a corresponding multidimensional structure, e.g., if it resembles a signal sampled on a multidimensional lattice. These dimensions can, for instance, relate to space (1-D or 2-D arrays at the transmitter or receiver), frequency, time, or polarization, depending on the application. The corresponding tensor-valued data model takes the following form [10]

\mathcal{X}=\mathcal{A}{\times}_{R+1}{\mathit{S}}^{\mathrm{T}}+\mathcal{W},\qquad (3)

where \mathcal{A}\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\dots \times {M}_{R}\times d} and \mathcal{W}\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\dots \times {M}_{R}\times N} represent the mixing tensor and the noise tensor, respectively. Since (3) is a rearranged version of (1), the corresponding quantities are linked via the relations \mathit{X}={\left[\mathcal{X}\right]}_{(R+1)}^{\mathrm{T}}, \mathit{A}={\left[\mathcal{A}\right]}_{(R+1)}^{\mathrm{T}}, and \mathit{W}={\left[\mathcal{W}\right]}_{(R+1)}^{\mathrm{T}}, respectively. As shown in [10], based on (3), we can define a tensor-based subspace estimate by computing a truncated Higher-Order SVD (HOSVD) [9],

\mathcal{X}\approx {\widehat{\mathcal{S}}}^{\left[\mathrm{s}\right]}{\times}_{1}{\widehat{\mathit{U}}}_{1}^{\left[\mathrm{s}\right]}{\times}_{2}{\widehat{\mathit{U}}}_{2}^{\left[\mathrm{s}\right]}\dots {\times}_{R+1}{\widehat{\mathit{U}}}_{R+1}^{\left[\mathrm{s}\right]},\qquad (4)

where {\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]}\in {\mathbb{C}}^{{M}_{r}\times {p}_{r}} has unitary columns and denotes the matrix of the estimated *r*-mode singular vectors. Moreover, *p*_{r} is the *r*-rank of the mixing tensor \mathcal{A}, and {\widehat{\mathcal{S}}}^{\left[\mathrm{s}\right]}\in {\mathbb{C}}^{{p}_{1}\times {p}_{2}\dots \times {p}_{R+1}} represents the truncated core tensor that can be computed from \mathcal{X} via

{\widehat{\mathcal{S}}}^{\left[\mathrm{s}\right]}=\mathcal{X}{\times}_{1}{\widehat{\mathit{U}}}_{1}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}{\times}_{2}{\widehat{\mathit{U}}}_{2}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}\dots {\times}_{R+1}{\widehat{\mathit{U}}}_{R+1}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}.\qquad (5)

Based on the HOSVD, an improved signal subspace estimate is given by {\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}\in {\mathbb{C}}^{M\times d}, where {\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]} is [10]

{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}={\widehat{\mathcal{S}}}^{\left[\mathrm{s}\right]}{\times}_{1}{\widehat{\mathit{U}}}_{1}^{\left[\mathrm{s}\right]}{\times}_{2}{\widehat{\mathit{U}}}_{2}^{\left[\mathrm{s}\right]}\dots {\times}_{R}{\widehat{\mathit{U}}}_{R}^{\left[\mathrm{s}\right]}{\times}_{R+1}{\widehat{\Sigma}}_{\mathrm{s}}^{-1},\qquad (6)

where {\widehat{\Sigma}}_{\mathrm{s}} has been defined in (2). Compared to the tensor-based subspace estimation in [10], the multiplication with {\widehat{\Sigma}}_{\mathrm{s}}^{-1} represents only a normalization [13].

As discussed in [10], (6) provides a better subspace estimate than {\widehat{\mathit{U}}}_{\mathrm{s}} if and only if \mathcal{A} is *r*-rank deficient in at least one mode *r*=1,2,…,*R*, i.e., *p*_{r}<*M*_{r}. An example where this assumption is fulfilled is given by *R*-D harmonic retrieval [10], where we consider a superposition of *d* harmonics sampled on an *R*-D lattice. This gives rise to a mixing matrix \mathit{A} and a mixing tensor \mathcal{A} of the following form

\mathit{A}={\mathit{A}}_{1}\diamond {\mathit{A}}_{2}\diamond \dots \diamond {\mathit{A}}_{R},\qquad \mathcal{A}={\mathcal{I}}_{R+1,d}{\times}_{1}{\mathit{A}}_{1}{\times}_{2}{\mathit{A}}_{2}\dots {\times}_{R}{\mathit{A}}_{R},\qquad (7)

where {\mathit{A}}_{r}\in {\mathbb{C}}^{{M}_{r}\times d} represents the mixing matrix in the *r*th mode. In this case, we have *p*_{r}≤*d* and, therefore, the tensor-based subspace estimate is superior to the matrix-based subspace estimate if *d*<*M*_{r} for at least one *r*=1,2,…,*R* [10]. However, there are applications with *r*-rank deficiencies where the observed signal obeys (3) but not (7), for instance, the tensor-based blind channel estimation scheme in [14].
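The Khatri-Rao structure of (7) and the resulting *r*-rank deficiency can be illustrated with a small NumPy sketch (hypothetical dimensions and randomly drawn spatial frequencies; for *R*=2, the mixing matrix is the column-wise Kronecker product of two Vandermonde factors):

```python
import numpy as np

# hypothetical 2-D harmonic retrieval setup: d = 2 sources on an M1 x M2 lattice
M1, M2, d = 6, 7, 2
rng = np.random.default_rng(0)
mu = rng.uniform(-np.pi, np.pi, (2, d))            # spatial frequencies per mode
A1 = np.exp(1j * np.outer(np.arange(M1), mu[0]))   # M1 x d Vandermonde factor
A2 = np.exp(1j * np.outer(np.arange(M2), mu[1]))   # M2 x d Vandermonde factor

# Khatri-Rao (column-wise Kronecker) product A = A1 (Khatri-Rao) A2
A = np.stack([np.kron(A1[:, k], A2[:, k]) for k in range(d)], axis=1)

# the r-mode unfoldings of the mixing tensor have rank p_r <= d < M_r,
# which is exactly the r-rank deficiency that the tensor estimate exploits
At = A.reshape(M1, M2, d)
assert np.linalg.matrix_rank(At.reshape(M1, -1)) == d                    # 1-mode
assert np.linalg.matrix_rank(np.moveaxis(At, 1, 0).reshape(M2, -1)) == d  # 2-mode
```

With distinct frequencies each mode contributes only *d* linearly independent directions, so both unfoldings are rank-deficient whenever *d*<*M*_{r}.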

## 3 Tensor subspace Tracking via Kronecker-structured projections (TeTraKron)

In a time-varying scenario, the observation matrix \mathit{X} is augmented by a new column \mathit{x}(n)\in {\mathbb{C}}^{M\times 1} with every new snapshot *n*

\mathit{x}(n)=\mathit{A}(n)\cdot \mathit{s}(n)+\mathit{w}(n),\qquad (8)

where \mathit{A}(n) is the mixing matrix or the array steering matrix for the *n*th snapshot. By employing a subspace tracking scheme, an estimate of the signal subspace {\widehat{\mathit{U}}}_{\mathrm{s}}(n) is obtained for each new snapshot *n*. In the tensor case, \mathit{x}(n)\in {\mathbb{C}}^{M\times 1} is rearranged into \mathcal{X}(n)\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\dots \times {M}_{R}}. A tensor-based subspace tracking algorithm aims at estimating the HOSVD-based subspace estimate {\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}(n)\right]}_{(R+1)}^{\mathrm{T}}\in {\mathbb{C}}^{M\times d} for each new snapshot.

### 3.1 Tensor subspace estimation via structured projections

At first sight, (6) suggests that in order to track the signal subspace, we need to track the *r*-mode singular vectors as well as the core tensor. However, it can be shown that tracking the core tensor is indeed unnecessary, since the tensor-based subspace estimate can be computed from the matrix-based subspace estimate via a structured projection which does not involve the core tensor. This was first pointed out in [15] for the 2-D case. However, it can be generalized to an arbitrary number of dimensions. This claim is summarized in the following theorem:

#### Theorem 1

The HOSVD-based subspace estimate can be computed by projecting the unstructured matrix-based subspace estimate obtained via the SVD onto a Kronecker structure in the following manner:

{\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}=\left({\widehat{T}}_{1}\otimes {\widehat{T}}_{2}\otimes \dots \otimes {\widehat{T}}_{R}\right)\cdot {\widehat{\mathit{U}}}_{\mathrm{s}},\qquad (9)

where {\widehat{T}}_{r}={\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]}\cdot {\widehat{\mathit{U}}}_{r}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}} is a projection matrix onto the space spanned by the *r*-mode vectors.

*Proof*. [13]

In the proof, the core tensor in (6) is first eliminated by substituting (5) into (6), and {\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}} is then computed as

{\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}=\left({\widehat{T}}_{1}\otimes {\widehat{T}}_{2}\otimes \dots \otimes {\widehat{T}}_{R}\right)\cdot \mathit{X}\cdot {\widehat{V}}_{\mathrm{s}}\cdot {\widehat{\Sigma}}_{\mathrm{s}}^{-1}.

Relying on the observations that \mathit{X}={\left[\mathcal{X}\right]}_{(R+1)}^{\mathrm{T}}, {\widehat{V}}_{\mathrm{s}}={\widehat{\mathit{U}}}_{R+1}^{{\left[\mathrm{s}\right]}^{\ast}}, and

\mathit{X}\cdot {\widehat{V}}_{\mathrm{s}}={\widehat{\mathit{U}}}_{\mathrm{s}}\cdot {\widehat{\Sigma}}_{\mathrm{s}},

which follows from (2), the identity

{\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}=\left({\widehat{T}}_{1}\otimes {\widehat{T}}_{2}\otimes \dots \otimes {\widehat{T}}_{R}\right)\cdot {\widehat{\mathit{U}}}_{\mathrm{s}}\qquad (10)

is proved. Equation (10) provides the central idea behind the TeTraKron framework that we introduce in this paper. It shows that the tensor-based subspace estimate can be understood as a projection of the unstructured matrix-based subspace estimate onto the Kronecker structure inherent in the data. It also shows that for all modes where *p*_{r}=*M*_{r}, we have {\widehat{T}}_{r}={\mathit{I}}_{{M}_{r}}, i.e., no projection is performed. Another consequence we can draw from (10) is that there is no need to compute (or track) the core tensor. We can find the tensor-based subspace estimate based only on the *r*-mode subspaces contained in {\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]}. These are the subspaces obtained from the *r*-mode unfoldings of \mathcal{X}, which are again matrices. Therefore, any matrix-based subspace tracking scheme can be applied to track these subspaces as well.
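The computational content of the theorem can be checked numerically. The sketch below (our hypothetical dimensions; a C-order `reshape` stands in for the paper's unfolding convention) builds noisy *R*=2 harmonic data, estimates the matrix-based subspace from the SVD of \mathit{X}, the *r*-mode subspaces from the unfoldings, and applies the Kronecker-structured projection of (10) without ever forming a core tensor:

```python
import numpy as np

rng = np.random.default_rng(1)
M1, M2, N, d = 5, 6, 40, 2

# hypothetical 2-D harmonic mixing: A = A1 (Khatri-Rao) A2
mu = rng.uniform(-np.pi, np.pi, (2, d))
A1 = np.exp(1j * np.outer(np.arange(M1), mu[0]))
A2 = np.exp(1j * np.outer(np.arange(M2), mu[1]))
A = np.stack([np.kron(A1[:, k], A2[:, k]) for k in range(d)], axis=1)
S = rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))
W = rng.standard_normal((M1 * M2, N)) + 1j * rng.standard_normal((M1 * M2, N))
X = A @ S + 0.05 * W

# matrix-based estimate: d dominant left singular vectors of X
Us = np.linalg.svd(X, full_matrices=False)[0][:, :d]

# r-mode subspace estimates from the unfoldings of the data tensor
Xt = X.reshape(M1, M2, N)                           # x(n) rearranged into a tensor
U1 = np.linalg.svd(Xt.reshape(M1, -1), full_matrices=False)[0][:, :d]
U2 = np.linalg.svd(np.moveaxis(Xt, 1, 0).reshape(M2, -1), full_matrices=False)[0][:, :d]

# Kronecker-structured projection (10): no core tensor required
T1 = U1 @ U1.conj().T
T2 = U2 @ U2.conj().T
U_tensor = np.kron(T1, T2) @ Us
```

Since each {\widehat{T}}_{r} is an orthogonal projector, applying the Kronecker projector a second time leaves the result unchanged, and at moderate SNR the projected estimate closely spans the true signal subspace.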

Consequently, the main idea can be summarized as follows: in addition to tracking the subspace of the matrix \mathit{X} (which is the same as tracking the row space of the (*R*+1)-mode unfolding), we apply the same tracking algorithm to all *r*-mode unfoldings of the tensor which satisfy *p*_{r}<*M*_{r} for *r*=1,2,…,*R* in parallel. Note that even though this seems to increase the complexity by a factor equal to the number of modes we track, all these trackers can run in parallel, which facilitates an efficient implementation. After each step, the tensor-based subspace estimate can be recombined via (10).

However, this recombination requires \mathcal{O}\left\{{M}^{2}\cdot d\right\} multiplications, i.e., it is quadratic in *M*, which is undesirable. To lower the complexity, we rewrite (10) as

{\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}={\mathit{U}}_{\text{Kron}}^{\left[\mathrm{s}\right]}\cdot {\widehat{\overline{\mathit{U}}}}_{\mathrm{s}},\qquad (11)

where {\mathit{U}}_{\text{Kron}}^{\left[\mathrm{s}\right]}={\widehat{\mathit{U}}}_{1}^{\left[\mathrm{s}\right]}\otimes \dots \otimes {\widehat{\mathit{U}}}_{R}^{\left[\mathrm{s}\right]}\in {\mathbb{C}}^{M\times {d}^{R}} and {\widehat{\overline{\mathit{U}}}}_{\mathrm{s}}={\mathit{U}}_{\text{Kron}}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}\cdot {\widehat{\mathit{U}}}_{\mathrm{s}}\in {\mathbb{C}}^{{d}^{R}\times d}, assuming *p*_{r}=*d*≤*M*_{r} for *r*=1,2,…,*R*. Note that the matrix product in (11) requires only \mathcal{O}\left\{M\cdot {d}^{R}\right\} multiplications, i.e., it is linear in *M*. Moreover, (11) can be used for tensor-based subspace tracking as well: we track {\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]} for *r*=1,2,…,*R* by applying matrix-based subspace tracking schemes to all unfoldings, then project our *M*-dimensional observations into a lower-dimensional space by premultiplying them with {\mathit{U}}_{\text{Kron}}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}\in {\mathbb{C}}^{{d}^{R}\times M}, and finally run a matrix-based subspace tracker on the lower-dimensional data to track the *d*-dimensional subspace {\widehat{\overline{\mathit{U}}}}_{\mathrm{s}}\in {\mathbb{C}}^{{d}^{R}\times d}.
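The algebraic equivalence of the two recombination forms follows from {\widehat{T}}_{1}\otimes \dots \otimes {\widehat{T}}_{R}={\mathit{U}}_{\text{Kron}}^{\left[\mathrm{s}\right]}\cdot {\mathit{U}}_{\text{Kron}}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}} and can be verified directly (random orthonormal bases stand in for the tracked *r*-mode subspaces; dimensions are our hypothetical example):

```python
import numpy as np

rng = np.random.default_rng(2)
M1, M2, d = 5, 6, 2
cplx = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
U1 = np.linalg.qr(cplx(M1, d))[0]          # stand-in for the 1-mode basis
U2 = np.linalg.qr(cplx(M2, d))[0]          # stand-in for the 2-mode basis
Us = np.linalg.qr(cplx(M1 * M2, d))[0]     # stand-in for the matrix-based estimate

# direct form (10): O(M^2 d) via the full M x M Kronecker projector
lhs = np.kron(U1 @ U1.conj().T, U2 @ U2.conj().T) @ Us

# rewritten form (11): O(M d^R) via U_Kron in C^{M x d^R}
U_kron = np.kron(U1, U2)
rhs = U_kron @ (U_kron.conj().T @ Us)
```

Both forms produce the same *M*×*d* matrix; only the order of operations, and hence the complexity, differs.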

TeTraKron makes it possible to readily extend arbitrary matrix-based subspace tracking schemes to tensors, which yields an improved estimation accuracy, as we demonstrate in Section 5. Therefore, we obtain novel tensor-based subspace trackers by building on known algorithms, which is a particularly attractive feature of the TeTraKron framework. In addition to running these trackers on all unfoldings in parallel and recombining the signal subspace estimate via (10) or (11), the only modification we have to apply to the matrix-based subspace tracking schemes is the following: as introduced at the beginning of this section, it is typically assumed that the observation matrix \mathit{X} is augmented by a new column \mathit{x}(n) with each new snapshot *n*. For the *r*-mode unfoldings of \mathcal{X}(n)\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\dots \times {M}_{R}}, which is a rearranged version of \mathit{x}(n), every new snapshot generates not just one but several new columns. For instance, for the 1-mode unfolding, we obtain {\prod}_{r=2}^{R}{M}_{r} new columns, each of size *M*_{1}. This new batch of columns can be processed sequentially or, by modifying the tracking schemes, in one batch. We demonstrate such a modification using the examples of the PAST algorithm [7] and the FAPI algorithm [6] in Section 4.

### 3.2 Forward-backward-averaging and real-valued subspace tracking

Forward-backward-averaging and real-valued subspace estimation are introduced for the matrix case in [16] and for the tensor case in [10], respectively. By employing forward-backward-averaging, the number of snapshots is virtually doubled. It also results in the decorrelation of two coherent sources. Moreover, the complex spatial covariance matrices can be mapped to real-valued matrices such that the subsequent computations are real-valued, which contributes to a reduced complexity.

In this section, we revisit the concept of forward-backward-averaging and present real-valued subspace tracking. First, we calculate a forward-backward-averaged version of the measurement tensor \mathcal{X}\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\times \dots \times {M}_{R}\times N} [10]

\mathcal{Z}=\left[\mathcal{X}{\bigsqcup}_{R+1}\left({\mathcal{X}}^{\ast}{\times}_{1}{\mathit{\Pi}}_{{M}_{1}}{\times}_{2}\dots {\times}_{R}{\mathit{\Pi}}_{{M}_{R}}{\times}_{R+1}{\mathit{\Pi}}_{N}\right)\right]\in {\mathbb{C}}^{{M}_{1}\times \dots \times {M}_{R}\times 2N},\qquad (12)

where {\mathit{\Pi}}_{p} is a *p*×*p* exchange matrix which has ones on its anti-diagonal and zeros elsewhere. We further map this centro-Hermitian tensor to a real-valued tensor \phi (\mathcal{Z})\in {\mathbb{R}}^{{M}_{1}\times {M}_{2}\times \dots \times {M}_{R}\times 2N} by using the transformation [10]^{b}

\phi (\mathcal{Z})=\mathcal{Z}{\times}_{1}{\mathit{Q}}_{{M}_{1}}^{\mathrm{H}}{\times}_{2}\dots {\times}_{R}{\mathit{Q}}_{{M}_{R}}^{\mathrm{H}}{\times}_{R+1}{\mathit{Q}}_{2N}^{\mathrm{T}},\qquad (13)

where {\mathit{Q}}_{p} is a *p*×*p* left-\mathit{\Pi}-real and unitary matrix, i.e., it satisfies {\mathit{\Pi}}_{p}\cdot {\mathit{Q}}_{p}^{\ast}={\mathit{Q}}_{p} [17].

For the matrix-based case, the matrix of observations \mathit{X}\in {\mathbb{C}}^{M\times N} is mapped to the following centro-Hermitian matrix

\mathit{Z}=\left[\begin{array}{ll}\mathit{X}& {\mathit{\Pi}}_{M}\cdot {\mathit{X}}^{\ast}\cdot {\mathit{\Pi}}_{N}\end{array}\right]\in {\mathbb{C}}^{M\times 2N}.\qquad (14)

Then, \mathit{Z} is transformed to a real-valued matrix using the transformation [16]

\phi (\mathit{Z})={\mathit{Q}}_{M}^{\mathrm{H}}\cdot \mathit{Z}\cdot {\mathit{Q}}_{2N}\in {\mathbb{R}}^{M\times 2N},\qquad (15)

where {\mathit{Q}}_{M} is defined as [10]

{\mathit{Q}}_{M}={\mathit{Q}}_{{M}_{1}}\otimes {\mathit{Q}}_{{M}_{2}}\otimes \dots \otimes {\mathit{Q}}_{{M}_{R}}.\qquad (16)
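A minimal NumPy sketch of the forward-backward-averaging map (14) and the real-valued transformation (15), using the standard even/odd construction of the left-\mathit{\Pi}-real unitary matrices from the unitary ESPRIT literature (our function names; a sketch, not the paper's implementation):

```python
import numpy as np

def exchange(p):
    return np.eye(p)[::-1]                     # exchange matrix Pi_p

def Q(p):
    """Left-Pi-real unitary Q_p (standard unitary ESPRIT construction)."""
    n = p // 2
    I, Pi, z = np.eye(n), np.eye(n)[::-1], np.zeros((n, 1))
    if p % 2 == 0:
        return np.vstack([np.hstack([I, 1j * I]),
                          np.hstack([Pi, -1j * Pi])]) / np.sqrt(2)
    return np.vstack([np.hstack([I, z, 1j * I]),
                      np.hstack([z.T, [[np.sqrt(2)]], z.T]),
                      np.hstack([Pi, z, -1j * Pi])]) / np.sqrt(2)

# left-Pi-real property Pi_p . Q_p^* = Q_p and unitarity, even and odd sizes
for p in (4, 5):
    Qp = Q(p)
    assert np.allclose(exchange(p) @ Qp.conj(), Qp)
    assert np.allclose(Qp.conj().T @ Qp, np.eye(p))

# forward-backward averaging (14) and the real-valued map (15)
rng = np.random.default_rng(3)
M, N = 4, 6
X = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
Z = np.hstack([X, exchange(M) @ X.conj() @ exchange(N)])   # centro-Hermitian
phiZ = Q(M).conj().T @ Z @ Q(2 * N)
```

Because \mathit{Z} is centro-Hermitian by construction, the transformed matrix \phi (\mathit{Z}) has a vanishing imaginary part, so all subsequent tracking steps can run in real arithmetic.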

By computing a truncated HOSVD of \phi (\mathcal{Z}), a real-valued subspace estimate is obtained as {\left[{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}\in {\mathbb{R}}^{M\times d}, where {\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]} has the following form [10]

{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}={\widehat{\mathcal{S}}}_{Z}^{\left[\mathrm{s}\right]}{\times}_{1}{\widehat{\mathit{E}}}_{1}^{\left[\mathrm{s}\right]}{\times}_{2}{\widehat{\mathit{E}}}_{2}^{\left[\mathrm{s}\right]}\dots {\times}_{R}{\widehat{\mathit{E}}}_{R}^{\left[\mathrm{s}\right]}{\times}_{R+1}{\widehat{\Sigma}}_{\mathrm{s}}^{{\prime}^{-1}}.\qquad (17)

The matrices {\widehat{\mathit{E}}}_{r}^{\left[\mathrm{s}\right]}, *r*=1,2,…,*R*+1, denote the estimates of the real-valued bases for the *r*-mode subspaces, and {\widehat{\Sigma}}_{\mathrm{s}}^{\prime} represents the diagonal matrix of the (*R*+1)-mode singular values. The multiplication with {\widehat{\Sigma}}_{\mathrm{s}}^{{\prime}^{-1}} is only a normalization procedure. By substituting the expression of the core tensor {\widehat{\mathcal{S}}}_{Z}^{\left[\mathrm{s}\right]},

{\widehat{\mathcal{S}}}_{Z}^{\left[\mathrm{s}\right]}=\phi (\mathcal{Z}){\times}_{1}{\widehat{\mathit{E}}}_{1}^{{\left[\mathrm{s}\right]}^{\mathrm{T}}}{\times}_{2}{\widehat{\mathit{E}}}_{2}^{{\left[\mathrm{s}\right]}^{\mathrm{T}}}\dots {\times}_{R+1}{\widehat{\mathit{E}}}_{R+1}^{{\left[\mathrm{s}\right]}^{\mathrm{T}}},\qquad (18)

into (17), {\left[{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}\in {\mathbb{R}}^{M\times d} is further expressed as

{\left[{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}=\left({\widehat{T}}_{1}^{\prime}\otimes \dots \otimes {\widehat{T}}_{R}^{\prime}\right)\cdot {\left[\phi (\mathcal{Z})\right]}_{(R+1)}^{\mathrm{T}}\cdot {\widehat{\mathit{E}}}_{R+1}^{\left[\mathrm{s}\right]}\cdot {\widehat{\Sigma}}_{\mathrm{s}}^{{\prime}^{-1}},\qquad (19)

where {\widehat{T}}_{r}^{\prime}={\widehat{\mathit{E}}}_{r}^{\left[\mathrm{s}\right]}\cdot {\widehat{\mathit{E}}}_{r}^{{\left[\mathrm{s}\right]}^{\mathrm{T}}}. Substituting (12) into (13) gives

Notice that

Expanding the (*R*+1)-mode unfolding of (20) yields

As there exists a link between the matrix-based data model and its tensor-based counterpart such that \mathit{X}={\left[\mathcal{X}\right]}_{(R+1)}^{\mathrm{T}}, we can express {\left[\phi (\mathcal{Z})\right]}_{(R+1)}^{\mathrm{T}} as

Recall that \phi (\mathit{Z}) is defined via (14) and (15), leading to the observation that {\left[\phi (\mathcal{Z})\right]}_{(R+1)}^{\mathrm{T}}=\phi (\mathit{Z}). As a real-valued equivalent to Theorem 1, we have

{\left[{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}=\left({\widehat{T}}_{1}^{\prime}\otimes {\widehat{T}}_{2}^{\prime}\otimes \dots \otimes {\widehat{T}}_{R}^{\prime}\right)\cdot {\widehat{\mathit{E}}}_{\mathrm{s}},\qquad (25)

where the columns of {\widehat{\mathit{E}}}_{\mathrm{s}}\in {\mathbb{R}}^{M\times d} represent a real-valued orthonormal basis for the estimated signal subspace in the matrix-based case. The proof proceeds along the lines of the proof of Theorem 1 in [13].

Hence, (25) indicates that the real-valued tensor-based subspace estimate can be computed by applying a Kronecker-structured projection to the real-valued matrix-based subspace estimate similar to the complex-valued case. The calculation of the core tensor is also not required. Note that the obtained real-valued tensor-based subspace estimate can be employed in the unitary tensor ESPRIT algorithm [10] for parameter estimation in multidimensional harmonic retrieval problems. We show the corresponding numerical results in Section 5.

To summarize, when forward-backward-averaging is incorporated, the TeTraKron framework can be employed to extend a matrix-based subspace tracking scheme to tensors such that real-valued tensor-based subspace tracking is realized. For each snapshot, a forward-backward-averaged version of the measurement tensor is mapped to a real-valued tensor similarly to (12) and (13). Then, any matrix-based subspace tracker can be run on all unfoldings of this tensor in parallel, where only real-valued computations are involved. Finally, the real-valued subspace estimate is obtained via a recombination procedure such as (25)^{c}. In Section 4, where the matrix-based algorithms PAST [7] and FAPI [6] are used as examples, we explain explicitly how real-valued tensor-based subspace tracking based on the TeTraKron framework is performed.

## 4 Examples

In this section, we provide examples of how the TeTraKron framework can be used to devise tensor-based subspace tracking schemes. Since TeTraKron allows us to extend an arbitrary matrix-based subspace tracking scheme to the tensor case, we choose two examples, namely, the simple but widely used PAST algorithm [7] as well as the FAPI scheme [6] that is a fast implementation of the power iteration method for subspace tracking.

### 4.1 Tensor-based PAST/PASTd

The PAST algorithm for tracking the signal subspace is summarized in Algorithm 1, where \mathit{x}(n) is the new measurement vector at time *n*, and \mathit{P}(n) corresponds to the inverse of the correlation matrix of the projected vector \mathit{y}(n)={\widehat{\mathit{U}}}_{\mathrm{s}}^{\mathrm{H}}(n)\cdot \mathit{x}(n), which is approximated as \mathit{y}(n)={\widehat{\mathit{U}}}_{\mathrm{s}}^{\mathrm{H}}(n-1)\cdot \mathit{x}(n). Moreover, \mathit{g}(n) is the gain vector and *β* the forgetting factor of the underlying RLS procedure. Finally, \mathit{e}(n) is the approximation error and *I*_{M×r} symbolizes the first *r* columns of an *M*×*M* identity matrix.

Recall that the data tensor observed at each new snapshot, \mathcal{X}(n)\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\dots \times {M}_{R}}, is a rearranged version of \mathit{x}(n)\in {\mathbb{C}}^{M\times 1}. In the TeTraKron extension of PAST, we apply the same algorithm to all unfoldings of \mathcal{X}(n), i.e., {\left[\mathcal{X}(n)\right]}_{(r)}\in {\mathbb{C}}^{{M}_{r}\times \frac{M}{{M}_{r}}}, *r*=1,2,…,*R*. Consider, for instance, the *R*=2-dimensional case, i.e., our data tensor \mathcal{X} is of size *M*_{1}×*M*_{2}×*N*. With each new observation vector \mathit{x}(n)\in {\mathbb{C}}^{{M}_{1}\cdot {M}_{2}\times 1}, we obtain a new matrix of observations for the one-space and the two-space of \mathcal{X}, which is given by \stackrel{~}{X}(n)\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}} for {\left[\mathcal{X}\right]}_{(1)} and {\stackrel{~}{X}}^{\mathrm{T}}(n) for {\left[\mathcal{X}\right]}_{(2)}. Note that \stackrel{~}{X}(n) is a rearranged version of \mathit{x}(n) which satisfies \text{vec}\left\{\stackrel{~}{X}(n)\right\}=\mathit{x}(n).

Since PAST is based on RLS, it can be modified to process the entire new batch of observations at the same time. The modified update equations for the *r*-mode unfolding {\left[\mathcal{X}(n)\right]}_{(r)}\in {\mathbb{C}}^{{M}_{r}\times \frac{M}{{M}_{r}}} become

Notice that to process the batch of \frac{M}{{M}_{r}} columns, the inverse of an \frac{M}{{M}_{r}}\times \frac{M}{{M}_{r}} matrix is involved. Especially for *R*>2, it usually holds that d<\frac{M}{{M}_{r}}. In such cases, it becomes computationally more efficient to update the *d*×*d* correlation matrix of *Y*_{r}(*n*) given by

and directly calculate its inverse {\mathit{P}}_{r}(n)={\mathit{C}}_{{y}_{r}{y}_{r}}^{-1}(n). Multiplying \left(\beta \cdot {\mathit{I}}_{M/{M}_{r}}+{\mathit{Y}}_{r}^{\mathrm{H}}(n)\cdot {\mathit{H}}_{r}(n)\right) from the right to both sides of (28) yields

Then, *G*_{r}(*n*) is expressed as

Substituting (27) into (34) gives

Knowing that

while the operation Tri{·} in (29) is only employed to preserve the Hermitian symmetry of *P*_{r}(*n*) in the presence of rounding errors [7], *G*_{r}(*n*) is alternatively updated as

Equations (27), (28), and (29) are replaced by (32) and (37). The deflation-based version of PAST, PASTd [7], is also based on RLS and can hence be modified in the same manner. We summarize the tensor-based PASTd algorithm for updating the *r*-mode subspace estimates in Algorithm 2, where {\widehat{\mathit{u}}}_{r,i}^{\left[\mathrm{s}\right]}(n) denotes the *i*th column of {\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]}(n)\in {\mathbb{C}}^{{M}_{r}\times d}, and *d*_{r,i}(*n*) represents the *i*th entry of {\mathit{d}}_{r}(n)\in {\mathbb{R}}^{d} that contains the eigenvalue estimates. The initial values of the eigenvalue estimates are chosen to be 1 [7], i.e., *d*_{r}(0) is an all-ones vector.
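For illustration, a minimal NumPy sketch of one exponential-window PAST iteration in the spirit of [7] (our variable names; the batch variants above replace the vectors \mathit{y}(n), \mathit{g}(n) by the matrices *Y*_{r}(*n*), *G*_{r}(*n*)). In the TeTraKron extension, the same step runs on the new columns of every unfolding in parallel:

```python
import numpy as np

def past_update(W, P, x, beta=0.97):
    """One exponential-window PAST iteration (sketch following [7])."""
    y = W.conj().T @ x                         # projection approximation
    h = P @ y
    g = h / (beta + np.vdot(y, h).real)        # gain vector
    P = (P - np.outer(g, h.conj())) / beta
    P = np.triu(P) + np.triu(P, 1).conj().T    # Tri{.}: restore Hermitian symmetry
    e = x - W @ y                              # approximation error
    W = W + np.outer(e, g.conj())
    return W, P

# demo: track the subspace of a static mixture (hypothetical stand-in scenario)
rng = np.random.default_rng(4)
M, d = 8, 2
A = rng.standard_normal((M, d))
W = np.linalg.qr(rng.standard_normal((M, d)))[0].astype(complex)
P = np.eye(d, dtype=complex)
for n in range(400):
    x = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(M)
    W, P = past_update(W, P, x)
```

After a few hundred updates, the columns of `W` closely span the signal subspace, i.e., the column space of `A`.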

When forward-backward-averaging is incorporated, each new observation vector \mathit{x}(n) is augmented by a new virtual column. In the matrix-based case, the real-valued subspace estimate {\widehat{\mathit{E}}}_{\mathrm{s}}(n) is obtained by applying the PAST algorithm to

\phi \left(\mathit{Z}(n)\right)={\mathit{Q}}_{M}^{\mathrm{H}}\cdot \mathit{Z}(n)\cdot {\mathit{Q}}_{2},

where \mathit{Z}(n) is a forward-backward-averaged version of \mathit{x}(n) given by

\mathit{Z}(n)=\left[\begin{array}{ll}\mathit{x}(n)& {\mathit{\Pi}}_{M}\cdot {\mathit{x}}^{\ast}(n)\end{array}\right]\in {\mathbb{C}}^{M\times 2}.

In the tensor-based case, a real-valued tensor \phi \left(\mathcal{Z}(n)\right)\in {\mathbb{R}}^{{M}_{1}\times {M}_{2}\times \dots \times {M}_{R}\times 2} is computed by using *N*=1 in (13)

where \mathcal{Z}(n) is a forward-backward-averaged version of \mathcal{X}(n)

In the new real-valued tensor-based subspace tracking scheme, which extends PAST via the TeTraKron framework, we run the PAST algorithm on all unfoldings of the real-valued tensor \phi \left(\mathcal{Z}(n)\right) in parallel. For the *r*-mode unfolding {\left[\phi \left(\mathcal{Z}(n)\right)\right]}_{(r)}\in {\mathbb{R}}^{{M}_{r}\times 2\cdot \frac{M}{{M}_{r}}} with *p*_{r}<*M*_{r}, the update equations, which involve only real-valued computations, are as follows:

Afterwards, the real-valued signal subspace estimate can be recombined via (25).

### 4.2 Tensor-based FAPI

As an additional example of how to apply the TeTraKron framework to extend a matrix-based subspace tracker to the tensor case, we consider the exponential window FAPI algorithm [6]. As before, to update the *r*-mode subspace estimate, the matrix-based subspace tracker has to be modified to process a batch of \frac{M}{{M}_{r}} observations. This can be carried out in one batch, as in the aforementioned example of the extended PAST algorithm. Alternatively, each column of the *r*-mode unfolding of \mathcal{X}(n) can be treated as a new observation vector, and the columns are processed sequentially^{d}. The *r*-mode subspace estimates can then be updated via exactly the same procedures as in the matrix-based subspace tracking scheme. However, the forgetting factor of the matrix-based subspace tracking algorithm is applied only to the first column of the *r*-mode unfolding of \mathcal{X}(n); from the second column onwards, the forgetting factor is set to one, since these columns belong to the same snapshot. In the following, we explain in detail how this approach is applied to obtain a tensor-based version of the exponential window FAPI algorithm [6] via the TeTraKron framework. Denote the *ℓ*th column of the *r*-mode unfolding {\left[\mathcal{X}(n)\right]}_{(r)}\in {\mathbb{C}}^{{M}_{r}\times \frac{M}{{M}_{r}}} as {\mathit{x}}_{\ell}^{(r)}\in {\mathbb{C}}^{{M}_{r}}, where \ell =1,2,\dots ,\frac{M}{{M}_{r}}. The details on updating {\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]} are given in Algorithm 3.
For each snapshot, after obtaining {\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]} (*r*=1,2,…,*R*) via the procedures in Algorithm 3 and the matrix-based subspace estimate {\widehat{\mathit{U}}}_{\mathrm{s}} by employing exponential window FAPI [6], we recombine them via (10) to obtain the tensor-based subspace estimate {\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}.

If forward-backward-averaging is incorporated, procedures similar to those in Algorithm 3 are applied to the *r*-mode unfolding of the real-valued tensor, {\left[\phi \left(\mathcal{Z}(n)\right)\right]}_{(r)}\in {\mathbb{R}}^{{M}_{r}\times 2\cdot \frac{M}{{M}_{r}}}, to update the real-valued estimate of the *r*-mode subspace {\widehat{\mathit{E}}}_{r}^{\left[\mathrm{s}\right]}, i.e., {\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]}(n-1), \frac{M}{{M}_{r}}, and {\mathit{x}}_{\ell}^{(r)} in Algorithm 3 are replaced by {\widehat{\mathit{E}}}_{r}^{\left[\mathrm{s}\right]}(n-1), 2\cdot \frac{M}{{M}_{r}}, and the *ℓ* th column of {\left[\phi \left(\mathcal{Z}(n)\right)\right]}_{(r)}, where \ell =1,2,\dots ,2\cdot \frac{M}{{M}_{r}}, respectively. Then, the real-valued signal subspace estimate can be recombined via (25).
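For illustration, forward-backward-averaged, real-valued data of this kind can be formed as in unitary ESPRIT [16]. The sketch below uses the standard {\mathit{Q}}_{M}^{\mathrm{H}}\left(\cdot \right){\mathit{Q}}_{2N} construction of [16]; the paper's variant uses {\mathit{Q}}_{2N}^{\mathrm{T}} instead (cf. endnote b):

```python
import numpy as np

def exchange(n):
    # n x n exchange (flip) matrix Pi_n
    return np.eye(n)[::-1]

def Q(n):
    # Sparse left Pi-real matrix Q_n used in unitary ESPRIT [16]
    k = n // 2
    I, Pi = np.eye(k), np.eye(k)[::-1]
    if n % 2 == 0:
        return np.block([[I, 1j * I], [Pi, -1j * Pi]]) / np.sqrt(2)
    z = np.zeros((k, 1))
    return np.block([[I, z, 1j * I],
                     [z.T, np.array([[np.sqrt(2)]]), z.T],
                     [Pi, z, -1j * Pi]]) / np.sqrt(2)

def fba_real(X):
    # Forward-backward averaging plus the real-valued mapping [16]:
    # phi(X) = Q_M^H [X, Pi_M conj(X) Pi_N] Q_2N is real up to rounding,
    # because the FBA-extended matrix is centro-Hermitian.
    M, N = X.shape
    Z = np.hstack([X, exchange(M) @ X.conj() @ exchange(N)])
    return Q(M).conj().T @ Z @ Q(2 * N)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
phi = fba_real(X)  # shape (5, 6); imaginary part vanishes up to rounding
```

Doubling the number of columns while moving to real arithmetic is exactly what turns the *r*-mode unfoldings of size {M}_{r}\times \frac{M}{{M}_{r}} into real-valued unfoldings of size {M}_{r}\times 2\cdot \frac{M}{{M}_{r}}.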

### 4.3 Summary of the proposed tensor-based subspace tracking schemes

We now summarize the tensor-based subspace tracking algorithms developed via the TeTraKron framework in Algorithms 4 and 5. For the schemes^{e} that are employed in the simulations (cf. Section 5), the corresponding equations are specified.
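Equation (10) is not reproduced in this excerpt. Assuming it takes the usual TeTraKron form of a Kronecker-structured projection applied to the matrix-based subspace estimate, the recombination step shared by these algorithms could be sketched as:

```python
import numpy as np
from functools import reduce

def recombine(mode_bases, U_s):
    # Kronecker-structured projection: project the matrix-based subspace
    # estimate U_s onto the Kronecker product of the r-mode projectors
    # built from the tracked r-mode bases (assumed form of (10)).
    projs = [U @ U.conj().T for U in mode_bases]
    return reduce(np.kron, projs) @ U_s

# toy check: if U_s already lies in the Kronecker-structured span,
# the projection leaves it unchanged
u1 = np.array([[1.0], [0.0], [0.0]])   # 1-D 1-mode basis
u2 = np.array([[0.6], [0.8]])          # 1-D 2-mode basis
U_s = np.kron(u1, u2)
out = recombine([u1, u2], U_s)
```

The Kronecker product of the small *r*-mode projectors never needs to be formed explicitly in practice; this dense version is for exposition only.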

## 5 Simulation results

In this section, we first demonstrate the performance of the tensor extension of PAST and PASTd achieved via the proposed TeTraKron framework. To this end, we choose a simulation scenario that represents an extension of the one shown in [7] to *R*=2 dimensions. We consider a uniform rectangular array (URA) with *d*=3 impinging wavefronts. The first two sources are moved by changing their spatial frequencies (direction cosines) as a function of the time index *n*=1,2,…,*N* according to

for t[n]=\frac{n-1}{N-1}, whereas the third source remains stationary at {\mu}_{3}^{(1)}={\mu}_{3}^{(2)}=0.1. Therefore, for *n* close to *N*/2, the first and the second sources cross. Note that {\mu}_{i}^{(r)} represents the spatial frequency of the *i* th source in the *r* th dimension for *i*=1,2,…,*d* and *r*=1,2,…,*R*. The total number of snapshots *N* is set to 1,000 in all examples. The source samples as well as the noise samples are drawn from a zero-mean circularly symmetric complex Gaussian distribution with variance one (SNR = 0 dB). We choose the forgetting factor *β*=0.97 for all examples shown in this section. Similar to [7], we compare the algorithms based on the largest principal angle (LPA) between the true and the estimated signal subspace since the LPA provides a measure for the agreement of the subspaces which is invariant to the particular choice of the basis.
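The LPA can be computed from the principal angles between the two subspaces; a small sketch (our implementation, via the singular values of the product of orthonormalized bases):

```python
import numpy as np

def largest_principal_angle(U, V):
    # Largest principal angle (radians) between the column spaces of U
    # and V: orthonormalize both bases, then take the arccos of the
    # smallest singular value of their inner product. The result is
    # invariant to the particular choice of basis within each subspace.
    Qu, _ = np.linalg.qr(U)
    Qv, _ = np.linalg.qr(V)
    s = np.linalg.svd(Qu.conj().T @ Qv, compute_uv=False)
    return float(np.arccos(np.clip(s.min(), -1.0, 1.0)))

A = np.eye(4)[:, :2]
# a different basis of the same subspace gives an LPA of zero ...
same = largest_principal_angle(A, A @ np.array([[1.0, 2.0], [0.0, 1.0]]))
# ... and an orthogonal subspace gives pi/2
orth = largest_principal_angle(A, np.eye(4)[:, 2:])
```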

Figure 1 shows the LPA for a 9×9 URA. The curve labeled ‘PAST’ refers to the original matrix-based PAST algorithm from [7]. As summarized in Section 4.3, TeTraKron-PAST and TeTraKron-PAST II refer to the tensor extensions of PAST via the proposed TeTraKron framework based on (10) and the reduced-complexity version (11), respectively. For reference, we display two curves labeled ‘SVD’ and ‘HOSVD’ where the entire matrix/tensor of observations^{f} up to the current snapshot *n* is used to calculate a subspace estimate via the SVD and the HOSVD, respectively.

In Figure 2, we replace PAST by PASTd. Moreover, we change the array size to a 7×7 URA to demonstrate that the tensor gain is present for different array sizes. Both simulation results show that the tensor-based subspace tracking algorithms outperform the matrix-based algorithms, as expected.

Using the same scenario as for Figure 1, we evaluate the performance of the tensor extensions of the FAPI algorithm [6]. In Figure 3, where the LPA is plotted, the curves labeled ‘TeTraKron-FAPI’ and ‘TeTraKron-FAPI with FBA’ correspond to two tensor-based FAPI schemes that are extensions of the FAPI algorithm via the TeTraKron framework based on (10) and (25), respectively. The curve labeled ‘FAPI with FBA’ corresponds to an extended version of FAPI where forward-backward-averaging is included (cf. (38)). It is observed that the performance of tensor-based FAPI is superior to that of the matrix-based FAPI algorithm. A gain is further obtained by incorporating forward-backward-averaging.

In the fourth example, a three-dimensional harmonic retrieval problem of size 7×7×7 is simulated. As an extension of the two-dimensional scenarios shown previously, the spatial frequencies (direction cosines) of the first two sources vary as a function of the time index *n*=1,2,…,*N* according to

for t[n]=\frac{n-1}{N-1}, whereas the third source remains stationary at {\mu}_{3}^{(1)}={\mu}_{3}^{(2)}={\mu}_{3}^{(3)}=0.1. Similarly, as *n* approaches *N*/2, the first and the second sources cross. The LPA between the true and the estimated signal subspace is illustrated in Figure 4. The curve labeled ‘TeTraKron-PAST with FBA’ refers to the real-valued subspace tracking scheme obtained as a tensor extension of PAST via the proposed TeTraKron framework based on (25), where forward-backward-averaging is incorporated. On the other hand, the curve labeled ‘PAST with FBA’ corresponds to an extended version of the matrix-based PAST algorithm with forward-backward-averaging included (cf. (38)). It can be observed that incorporating forward-backward-averaging contributes to a performance improvement for both the matrix-based and the tensor-based algorithms. As in the first three examples, the tensor-based subspace tracking schemes outperform the matrix-based schemes because they better exploit the multidimensional structure inherent in the data.

Moreover, we assess the performance of various ESPRIT-type parameter estimation algorithms that use the subspace estimates obtained by the proposed subspace tracking schemes and least squares (LS) to solve the invariance equations [10]. The evaluation criterion is the root mean square estimation error (RMSE). We define the RMSE in the spatial frequency domain as

where {\widehat{\mu}}_{i}^{(r)} represents an estimate of {\mu}_{i}^{(r)}.
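Since the displayed RMSE definition is missing from this version of the text, the following sketch assumes a common form that averages the squared frequency errors over all *R*·*d* frequencies and all trials; the paper's exact normalization may differ:

```python
import numpy as np

def rmse_spatial_frequencies(mu_hat, mu):
    # RMSE over all d sources and R modes, averaged over the trials.
    # mu_hat: estimates of shape (trials, R, d); mu: true values (R, d).
    # The normalization (mean over all R*d frequencies and all trials)
    # is an assumption, not taken from the paper.
    err = mu_hat - mu[None, :, :]
    return float(np.sqrt(np.mean(np.abs(err) ** 2)))

mu_true = np.array([[0.5, -0.2, 0.1],
                    [0.3, 0.4, 0.1]])               # R = 2 modes, d = 3 sources
mu_est = mu_true[None] + 0.01 * np.ones((4, 2, 3))  # 4 trials, constant error
r = rmse_spatial_frequencies(mu_est, mu_true)       # close to 0.01
```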

A two-dimensional scenario is considered where the sources are assumed to be correlated. The source samples **s** are generated such that {\mathit{R}}_{\text{ss}}=\mathbb{E}\left\{\mathit{s}\cdot {\mathit{s}}^{\mathrm{H}}\right\} has the form

where {\rho}_{i,j}=\rho \cdot {e}^{\jmath {\phi}_{i,j}}, and *φ*_{i,j}=−*φ*_{j,i} are drawn from a uniform distribution in [0,2*π*] for *i*=1,2,3, *j*=1,2,3, and *i*≠*j*. Here, *ρ* is chosen as 0.7. The other parameters are the same as for Figure 2.

We plot the LPA in Figure 5. Similar observations as in the previous examples can be made. In addition, the performance improvement due to the incorporation of forward-backward-averaging is more significant in the presence of source correlation than in the uncorrelated case. The RMSE of the tracked spatial frequencies is shown in Figure 6. Four parameter estimation techniques, standard ESPRIT (SE) [18], unitary ESPRIT (UE) [16], standard tensor ESPRIT (STE) [10], and unitary tensor ESPRIT (UTE) [10], are employed, where the subspace estimates are tracked by PAST, PAST with forward-backward-averaging, the tensor extension of PAST via TeTraKron, and the tensor extension of PAST via TeTraKron with forward-backward-averaging, respectively. It can be observed that the tensor-based parameter estimation algorithms combined with tensor-based subspace tracking schemes outperform the combinations of matrix-based parameter estimation and matrix-based subspace tracking algorithms. Moreover, a gain is achieved by incorporating forward-backward-averaging. This corroborates the benefit of forward-backward-averaging: it decorrelates coherent sources and thus enhances the accuracy of the parameter estimation.

In Figures 7 and 8, the corresponding spatial frequency estimates of the first mode and the second mode, averaged over 1,000 trials, are illustrated, respectively. To visualize the variation of the spatial frequency estimates around the averaged values, we also plot bars that represent the ±1 standard deviation estimated from the 1,000 trials. The two combinations of tensor-based subspace tracking and parameter estimation techniques (corresponding to the two plots labeled ‘STE + TeTraKron-PAST’ and ‘UTE + TeTraKron-PAST with FBA’, respectively, at the bottom of Figures 7 and 8) achieve an accurate estimation of the spatial frequencies. Even when the first two sources cross, their performance does not suffer much, and a fast adaptation is observed. Unitary tensor ESPRIT combined with the TeTraKron extension of PAST including forward-backward-averaging performs slightly better than standard tensor ESPRIT combined with the TeTraKron extension of PAST. By contrast, the matrix-based algorithms standard ESPRIT and unitary ESPRIT combined with PAST adapt much more slowly at the beginning of the tracking as well as at the crossing point of the first two sources and fail to accurately estimate the spatial frequencies. The deviation of their spatial frequency tracks from the average is also much more severe than for the two combinations of tensor-based subspace tracking and parameter estimation techniques.

## 6 Conclusions

In this paper, we have proposed the Tensor-based subspace Tracking via Kronecker structured projections (TeTraKron) framework. TeTraKron makes it possible to extend arbitrary existing matrix-based subspace tracking schemes to the tracking of the HOSVD-based subspace estimate, which improves the subspace estimation accuracy compared to previous matrix-based subspace tracking schemes. The extension is based on an algebraic link between matrix-based and tensor-based subspace estimates via a Kronecker structured projection: matrix-based subspace tracking schemes are applied to all tensor unfoldings, and there is no need to track the core tensor. We have proposed a low-complexity approach for the recombination of the separate subspaces into one final estimate, which is achieved in linear complexity. In addition, we have investigated the incorporation of forward-backward-averaging. To this end, a connection between the real-valued matrix-based and the HOSVD-based subspace estimates via a similar Kronecker structured projection has been revealed. Consequently, the TeTraKron framework can also be employed to devise real-valued tensor-based subspace tracking schemes. As examples, we have used the TeTraKron framework to extend the PAST, PASTd, and FAPI algorithms to tensors and demonstrated the enhanced subspace estimation accuracy via numerical simulations. In time-varying multidimensional harmonic retrieval problems, we have investigated the performance of standard tensor ESPRIT and unitary tensor ESPRIT where the tensor-based subspace estimates tracked by the TeTraKron-based subspace tracking algorithms are used for direction-of-arrival estimation.

## Endnotes

^{a} Parts of this paper have been published at the IEEE 5th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2013), Saint Martin, French Antilles, December 2013.

^{b} Here, {\mathit{Q}}_{2N}^{\mathrm{T}} is used instead of {\mathit{Q}}_{2N}^{\mathrm{H}} as originally proposed in [10], where this transformation was introduced. This makes it easier to establish the link between the matrix-based data model and its tensor-based counterpart, i.e., (24).

^{c} Alternatively, after tracking the *r*-mode subspaces, a projected lower-dimensional subspace similar to {\widehat{\bar{\mathit{U}}}}_{\mathrm{s}} can be tracked, and a recombination procedure similar to (11) can be used, leading to a reduced complexity.

^{d} Both approaches, namely batch processing and sequential processing, can be employed in the tensor-based PAST algorithm. We have performed simulations to compare the two methods: when they are used to modify the PAST algorithm to update the *r*-mode subspace estimates, they lead to the same performance. It should be noted that some state-of-the-art matrix-based subspace tracking schemes, such as the exponential window FAPI algorithm [6], were developed under the assumption that each new observation takes the form of a column vector. This important feature is preserved by sequential processing but not by batch processing. Consequently, when extending such schemes to the tensor case via TeTraKron, it is more convenient to use sequential processing.

^{e} TeTraKron-PAST, TeTraKron-PASTd, and TeTraKron-FAPI refer to the tensor extensions of PAST [7], PASTd [7], and FAPI [6] via the proposed TeTraKron framework based on (10), respectively. In case of the reduced-complexity version (11), the corresponding tensor extensions of PAST and PASTd are called TeTraKron-PAST II and TeTraKron-PASTd II, respectively. In addition, we use ‘TeTraKron-PAST with FBA’ and ‘TeTraKron-FAPI with FBA’ to refer to the real-valued subspace tracking schemes as tensor extensions of PAST and FAPI, respectively. Here the TeTraKron framework based on (25) is applied, and forward-backward-averaging is incorporated.

^{f} To render the comparison to the adaptive RLS-based schemes fair, the exponential weighting with a forgetting factor *β* is also applied here.

## References

1. Owsley N: Adaptive data orthogonalization. In *Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP ’78)*. Tulsa, OK; 1978:109-112.
2. Karasalo I: Estimating the covariance matrix by signal subspace averaging. *IEEE Trans. Acoust. Speech Signal Process.* 1986, 34(1):8-12. 10.1109/TASSP.1986.1164779
3. Yang J-F, Kaveh M: Adaptive eigensubspace algorithms for direction or frequency estimation and tracking. *IEEE Trans. Acoust. Speech Signal Process.* 1988, 36(2):241-251. 10.1109/29.1516
4. Oja E: A simplified neuron model as a principal components analyzer. *J. Math. Biol.* 1982, 15(3):267-273. 10.1007/BF00275687
5. Attallah S, Abed-Meraim K: Low-cost adaptive algorithm for noise subspace estimation. *Electron. Lett.* 2002, 38(12):609-611. 10.1049/el:20020388
6. Badeau R, David B, Richard G: Fast approximated power iteration subspace tracking. *IEEE Trans. Signal Process.* 2005, 53(8):2931-2941.
7. Yang B: Projection approximation subspace tracking. *IEEE Trans. Signal Process.* 1995, 43(1):95-107. 10.1109/78.365290
8. Delmas JP: Subspace tracking for signal processing. In *Adaptive Signal Processing: Next Generation Solutions*. Edited by: Adali T, Haykin S. John Wiley & Sons, Inc., Hoboken; 2010:211-270.
9. de Lathauwer L, de Moor B, Vandewalle J: A multilinear singular value decomposition. *SIAM J. Matrix Anal. Appl.* 2000, 21(4):1253-1278. 10.1137/S0895479896305696
10. Haardt M, Roemer F, Del Galdo G: Higher-order SVD based subspace estimation to improve the parameter estimation accuracy in multi-dimensional harmonic retrieval problems. *IEEE Trans. Signal Process.* 2008, 56:3198-3213.
11. Hu W, Li X, Zhang X, Shi X, Maybank S, Zhang Z: Incremental tensor subspace learning and its applications to foreground segmentation and tracking. *Int. J. Comput. Vis.* 2011, 91(3):303-327. 10.1007/s11263-010-0399-6
12. Sun J, Tao D, Papadimitriou S, Yu PS, Faloutsos C: Incremental tensor analysis: theory and applications. *ACM Trans. Knowl. Discov. Data (TKDD)* 2008, 2(3).
13. Roemer F, Haardt M, Del Galdo G: Analytical performance assessment of multi-dimensional matrix- and tensor-based ESPRIT-type algorithms. *IEEE Trans. Signal Process.* 2014, 62(10):2611-2625.
14. Song B, Roemer F, Haardt M: Blind estimation of SIMO channels using a tensor-based subspace method. In *Proc. 44th Asilomar Conf. on Signals, Systems, and Computers*. Pacific Grove, CA; 2010.
15. Roemer F, Becker H, Haardt M, Weis M: Analytical performance evaluation for HOSVD-based parameter estimation schemes. In *Proc. of the IEEE Int. Workshop on Comp. Adv. in Multi-Sensor Adaptive Proc. (CAMSAP 2009)*. Aruba, Dutch Antilles; 2009.
16. Haardt M, Nossek JA: Unitary ESPRIT: how to obtain increased estimation accuracy with a reduced computational burden. *IEEE Trans. Signal Process.* 1995, 43(5):1232-1242. 10.1109/78.382406
17. Lee A: Centrohermitian and skew-centrohermitian matrices. *Linear Algebra Appl.* 1980, 29:205-210.
18. Roy R, Kailath T: ESPRIT - estimation of signal parameters via rotational invariance techniques. *IEEE Trans. Acoust. Speech Signal Process.* 1989, 37:984-995. 10.1109/29.32276

## Additional information

### Competing interests

The authors declare that they have no competing interests.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Cheng, Y., Roemer, F., Khatib, O. *et al.* Tensor subspace Tracking via Kronecker structured projections (TeTraKron) for time-varying multidimensional harmonic retrieval.
*EURASIP J. Adv. Signal Process.* **2014**, 123 (2014). https://doi.org/10.1186/1687-6180-2014-123
