In a time-varying scenario, the observation matrix *X* is augmented by a new column \mathit{x}(n)\in {\mathbb{C}}^{M\times 1} with every new snapshot *n*

\begin{array}{l}\mathit{x}(n)=\mathit{A}(n)\cdot \mathit{s}(n)+\mathit{w}(n),\end{array}

(9)

where *A*(*n*) is the mixing matrix or the array steering matrix for the *n*th snapshot. By employing a subspace tracking scheme, an estimate of the signal subspace {\widehat{\mathit{U}}}_{\mathrm{s}}(n) is obtained for each new snapshot *n*. In the tensor case, \mathit{x}(n)\in {\mathbb{C}}^{M\times 1} is rearranged into \mathcal{X}(n)\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\times \dots \times {M}_{R}}. A tensor-based subspace tracking algorithm aims at estimating the HOSVD-based subspace estimate {\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}(n)\right]}_{(R+1)}^{\mathrm{T}}\in {\mathbb{C}}^{M\times d} for each new snapshot.

### 3.1 Tensor subspace estimation via structured projections

At first sight, (6) suggests that in order to track the signal subspace, we need to track the *r*-mode singular vectors as well as the core tensor. However, it can be shown that tracking the core tensor is indeed unnecessary, since the tensor-based subspace estimate can be computed from the matrix-based subspace estimate via a structured projection which does not involve the core tensor. This was first pointed out in [15] for the 2-D case. However, it can be generalized to an arbitrary number of dimensions. This claim is summarized in the following theorem:

#### Theorem 1

The HOSVD-based subspace estimate can be computed by projecting the unstructured matrix-based subspace estimate obtained via the SVD onto a Kronecker structure in the following manner

\begin{array}{l}{\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}=\left({\widehat{T}}_{1}\otimes {\widehat{T}}_{2}\otimes \dots \otimes {\widehat{T}}_{R}\right)\cdot {\widehat{\mathit{U}}}_{\mathrm{s}},\end{array}

(10)

where {\widehat{T}}_{r}={\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]}\cdot {\widehat{\mathit{U}}}_{r}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}} is a projection matrix onto the space spanned by the *r*-mode vectors.

*Proof*. See [13].

In the proof, the core tensor in (6) is first eliminated by substituting (5) into (6), and {\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}} is then computed as

\begin{array}{l}{\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}=\left({\widehat{T}}_{1}\otimes {\widehat{T}}_{2}\otimes \dots \otimes {\widehat{T}}_{R}\right)\cdot {\left[\mathcal{X}\right]}_{(R+1)}^{\mathrm{T}}\cdot {\widehat{\mathit{U}}}_{R+1}^{{\left[\mathrm{s}\right]}^{\ast}}\cdot {\widehat{\Sigma}}_{\mathrm{s}}^{-1}.\end{array}

Relying on the observations that \mathit{X}={\left[\mathcal{X}\right]}_{(R+1)}^{\mathrm{T}}, {\widehat{\mathit{V}}}_{\mathrm{s}}={\widehat{\mathit{U}}}_{R+1}^{{\left[\mathrm{s}\right]}^{\ast}}, and

\begin{array}{l}\mathit{X}={\widehat{\mathit{U}}}_{\mathrm{s}}\cdot {\widehat{\mathit{\Sigma}}}_{\mathrm{s}}\cdot {\widehat{\mathit{V}}}_{\mathrm{s}}^{\mathrm{H}},\end{array}

which follows from (2), the identity (10) is proved. Equation (10) provides the central idea behind the TeTraKron framework that we introduce in this paper. It shows that the tensor-based subspace estimate can be understood as a projection of the unstructured matrix-based subspace estimate onto the Kronecker structure inherent in the data. It also shows that for all modes where *p*_{r}=*M*_{r}, we have {\widehat{T}}_{r}={\mathit{I}}_{{M}_{r}}, i.e., no projection is performed. Another consequence we can draw from (10) is that there is no need to compute (or track) the core tensor. We can find the tensor-based subspace estimate based only on the *r*-mode subspaces contained in {\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]}. These are the subspaces obtained from the *r*-mode unfoldings of \mathcal{X}, which are again matrices. Therefore, any matrix-based subspace tracking scheme can be applied to track these subspaces as well.

Consequently, the main idea can be summarized as follows: in addition to tracking the subspace of the matrix *X* (which is the same as tracking the row space of the (*R*+1)-mode unfolding), we apply the same tracking algorithm to **all** *r*-mode unfoldings of the tensor which satisfy *p*_{r}<*M*_{r} for *r*=1,2,…,*R* in parallel. Note that even though this seems to increase the complexity by a factor equal to the number of modes we track, all these trackers can run in parallel, which facilitates an efficient implementation. After each step, the tensor-based subspace estimate can be recombined via (10).

However, this recombination requires \mathcal{O}\left\{{M}^{2}\cdot d\right\} multiplications, i.e., it is quadratic in *M*, which is undesirable. To lower the complexity, we rewrite (10) as

\begin{array}{l}{\left[{\widehat{\mathcal{U}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}={\mathit{U}}_{\text{Kron}}^{\left[\mathrm{s}\right]}\cdot {\widehat{\bar{\mathit{U}}}}_{\mathrm{s}},\end{array}

(11)

where {\mathit{U}}_{\text{Kron}}^{\left[\mathrm{s}\right]}={\widehat{\mathit{U}}}_{1}^{\left[\mathrm{s}\right]}\otimes \dots \otimes {\widehat{\mathit{U}}}_{R}^{\left[\mathrm{s}\right]}\in {\mathbb{C}}^{M\times {d}^{R}} and {\widehat{\bar{\mathit{U}}}}_{\mathrm{s}}={\mathit{U}}_{\text{Kron}}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}\cdot {\widehat{\mathit{U}}}_{\mathrm{s}}\in {\mathbb{C}}^{{d}^{R}\times d}, assuming *p*_{r}=*d*≤*M*_{r} for *r*=1,2,…,*R*. Note that the matrix product in (11) requires only \mathcal{O}\left\{M\cdot {d}^{R}\right\} multiplications, i.e., it is linear in *M*. Moreover, (11) can be used for tensor-based subspace tracking as well: we track {\widehat{\mathit{U}}}_{r}^{\left[\mathrm{s}\right]} for *r*=1,2,…,*R* by applying matrix-based subspace tracking schemes to all unfoldings, then project our *M*-dimensional observations into a lower-dimensional space by premultiplying them with {\mathit{U}}_{\text{Kron}}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}\in {\mathbb{C}}^{{d}^{R}\times M}, and finally run a matrix-based subspace tracker on the lower-dimensional data to track the *d*-dimensional subspace {\widehat{\bar{\mathit{U}}}}_{\mathrm{s}}\in {\mathbb{C}}^{{d}^{R}\times d}.

TeTraKron makes it possible to readily extend arbitrary matrix-based subspace tracking schemes to tensors, which yields an improved estimation accuracy, as we demonstrate in Section 5. Therefore, we obtain novel tensor-based subspace trackers by building on known algorithms, which is a particularly attractive feature of the TeTraKron framework. In addition to running these trackers on all unfoldings in parallel and recombining the signal subspace estimate via (10) or (11), the only modification we have to apply to the matrix-based subspace tracking schemes is the following: as introduced at the beginning of this section, it is typically assumed that the observation matrix *X* is augmented by a new column *x*(*n*) with each new snapshot *n*. For the *r*-mode unfoldings of \mathcal{X}(n)\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\times \dots \times {M}_{R}}, which is a rearranged version of *x*(*n*), every new snapshot generates not just one but several new columns. For instance, for the 1-mode unfolding, we obtain {\prod}_{r=2}^{R}{M}_{r} new columns, each of size *M*_{1}. This new batch of columns can be processed sequentially or, by modifying the tracking schemes, in one batch. We demonstrate such a modification using the examples of the PAST algorithm [7] and the FAPI algorithm [6] in Section 4.
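For *R*=2, the per-snapshot bookkeeping can be sketched as follows (the dimensions are hypothetical): the length-*M* snapshot is rearranged into an *M*_{1}×*M*_{2} slice, and each *r*-mode tracker receives the columns of the corresponding unfolding of that slice.

```python
import numpy as np

M1, M2 = 4, 5
M = M1 * M2
x = np.arange(M) + 1j * np.arange(M)   # one new snapshot x(n), length M = M1*M2
Xn = x.reshape(M1, M2)                 # rearranged snapshot slice (R = 2)

# New columns fed to the per-mode trackers for this single snapshot:
cols_mode1 = Xn      # M2 columns of size M1 for the 1-mode tracker
cols_mode2 = Xn.T    # M1 columns of size M2 for the 2-mode tracker
col_full = x         # one column of size M for the (R+1)-mode tracker

assert cols_mode1.shape == (M1, M2) and cols_mode2.shape == (M2, M1)
```

Each batch can be pushed into the tracker column by column, or in one shot if the tracker supports block updates.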

### 3.2 Forward-backward-averaging and real-valued subspace tracking

Forward-backward-averaging and real-valued subspace estimation are introduced for the matrix case in [16] and the tensor case in [10], respectively. By employing forward-backward-averaging, the number of snapshots is virtually doubled, and two coherent sources are decorrelated. Moreover, the complex spatial covariance matrices can be mapped to real-valued matrices such that the subsequent computations are real-valued, which contributes to a reduced complexity.

In this section, we revisit the concept of forward-backward-averaging and present real-valued subspace tracking. First, we calculate a forward-backward-averaged version of the measurement tensor \mathcal{X}\in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\times \dots \times {M}_{R}\times N} [10]

\begin{array}{ll}\mathcal{Z}=& \left[\mathcal{X}{\bigsqcup}_{R+1}\left[{\mathcal{X}}^{\ast}{\times}_{1}{\mathit{\Pi}}_{{M}_{1}}{\times}_{2}{\mathit{\Pi}}_{{M}_{2}}\cdots {\times}_{R+1}{\mathit{\Pi}}_{N}\right]\right]\\ & \in {\mathbb{C}}^{{M}_{1}\times {M}_{2}\times \dots \times {M}_{R}\times 2N},\end{array}

(12)

where *Π*_{p} is a *p*×*p* exchange matrix which has ones on its anti-diagonal and zeros elsewhere. We further map this centro-Hermitian tensor to a real-valued tensor \phi (\mathcal{Z})\in {\mathbb{R}}^{{M}_{1}\times {M}_{2}\times \dots \times {M}_{R}\times 2N} by using the transformation [10]^{b}

\begin{array}{l}\phi (\mathcal{Z})=\mathcal{Z}{\times}_{1}{\mathit{Q}}_{{M}_{1}}^{\mathrm{H}}{\times}_{2}{\mathit{Q}}_{{M}_{2}}^{\mathrm{H}}\cdots {\times}_{R+1}{\mathit{Q}}_{2N}^{\mathrm{T}},\end{array}

(13)

where *Q*_{p} is a *p*×*p* left-*Π* real and unitary matrix, i.e., it satisfies {\mathit{\Pi}}_{p}\cdot {\mathit{Q}}_{p}^{\ast}={\mathit{Q}}_{p} [17].

For the matrix-based case, the matrix of observations \mathit{X}\in {\mathbb{C}}^{M\times N} is mapped to the following centro-Hermitian matrix

\begin{array}{l}\mathit{Z}=\left[\begin{array}{ll}\mathit{X}& {\mathit{\Pi}}_{M}\cdot {\mathit{X}}^{\ast}\cdot {\mathit{\Pi}}_{N}\end{array}\right]\in {\mathbb{C}}^{M\times 2N}.\end{array}

(14)

Then, *Z* is transformed to a real-valued matrix using the transformation [16]

\begin{array}{l}\phi (\mathit{Z})={\mathit{Q}}_{M}^{\mathrm{H}}\cdot \mathit{Z}\cdot {\mathit{Q}}_{2N},\end{array}

(15)

where *Q*_{M} is defined as [10]

\begin{array}{l}{\mathit{Q}}_{M}={\mathit{Q}}_{{M}_{1}}\otimes {\mathit{Q}}_{{M}_{2}}\otimes \cdots \otimes {\mathit{Q}}_{{M}_{R}}.\end{array}

(16)
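A minimal NumPy sketch of (14) to (16) follows. The construction of the left-*Π* real matrices *Q*_{p} below is one standard choice and, together with the dimensions, is an assumption of this sketch; since *Z* is centro-Hermitian, the mapped matrix *φ*(*Z*) comes out real-valued.

```python
import numpy as np

def Q(p):
    """One standard left-Pi real unitary matrix: Pi_p @ Q_p.conj() == Q_p."""
    k = p // 2
    I, Pi_k = np.eye(k), np.eye(k)[::-1]
    if p % 2 == 0:
        return np.vstack([np.hstack([I, 1j * I]),
                          np.hstack([Pi_k, -1j * Pi_k])]) / np.sqrt(2)
    z = np.zeros((k, 1))
    return np.vstack([np.hstack([I, z, 1j * I]),
                      np.hstack([z.T, [[np.sqrt(2)]], z.T]),
                      np.hstack([Pi_k, z, -1j * Pi_k])]) / np.sqrt(2)

rng = np.random.default_rng(2)
M1, M2, N = 3, 4, 10
M = M1 * M2
Pi = lambda p: np.eye(p)[::-1]    # p x p exchange matrix

X = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# Eq. (14): forward-backward-averaged, centro-Hermitian matrix Z.
Z = np.hstack([X, Pi(M) @ X.conj() @ Pi(N)])

# Eq. (16): Q_M inherits the Kronecker structure of the array (R = 2 here).
Q_M = np.kron(Q(M1), Q(M2))

# Eq. (15): the mapped matrix phi(Z) is real-valued.
phi_Z = Q_M.conj().T @ Z @ Q(2 * N)
assert np.max(np.abs(phi_Z.imag)) < 1e-10
```

Note that the Kronecker product of exchange matrices is again an exchange matrix, so *Q*_{M} is left-*Π*_{M} real as required.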

By computing a truncated HOSVD of \phi (\mathcal{Z}), a real-valued subspace estimate is obtained as {\left[{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}\in {\mathbb{R}}^{M\times d}, where {\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]} has the following form [10]

\begin{array}{l}{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}={\widehat{\mathcal{S}}}_{Z}^{\left[\mathrm{s}\right]}{\times}_{1}{\widehat{\mathit{E}}}_{1}^{\left[\mathrm{s}\right]}{\times}_{2}{\widehat{\mathit{E}}}_{2}^{\left[\mathrm{s}\right]}\cdots {\times}_{R}{\widehat{\mathit{E}}}_{R}^{\left[\mathrm{s}\right]}{\times}_{R+1}{\widehat{\Sigma}}_{\mathrm{s}}^{{\prime}^{-1}}.\end{array}

(17)

The matrices {\widehat{\mathit{E}}}_{r}^{\left[\mathrm{s}\right]}, *r*=1,2,…,*R*+1, denote the estimates of the real-valued bases for the *r*-mode subspaces, and {\widehat{\Sigma}}_{\mathrm{s}}^{\prime} represents the diagonal matrix of the (*R*+1)-mode singular values. The multiplication by {\widehat{\Sigma}}_{\mathrm{s}}^{{\prime}^{-1}} serves only as a normalization. By substituting the expression of the core tensor {\widehat{\mathcal{S}}}_{Z}^{\left[\mathrm{s}\right]}

\begin{array}{l}{\widehat{\mathcal{S}}}_{Z}^{\left[\mathrm{s}\right]}=\phi (\mathcal{Z}){\times}_{1}{\widehat{\mathit{E}}}_{1}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}{\times}_{2}{\widehat{\mathit{E}}}_{2}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}\cdots {\times}_{R+1}{\widehat{\mathit{E}}}_{R+1}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}\end{array}

(18)

into (17), {\left[{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)} is further expressed as

\begin{array}{ll}{\left[{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}=& {\widehat{\Sigma}}_{\mathrm{s}}^{{\prime}^{-1}}\cdot {\widehat{\mathit{E}}}_{R+1}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}\cdot {\left[\phi (\mathcal{Z})\right]}_{(R+1)}\\ & \times {\left({\widehat{T}}_{1}^{\prime}\otimes {\widehat{T}}_{2}^{\prime}\otimes \cdots \otimes {\widehat{T}}_{R}^{\prime}\right)}^{\mathrm{T}},\end{array}

(19)

where {\widehat{T}}_{r}^{\prime}={\widehat{\mathit{E}}}_{r}^{\left[\mathrm{s}\right]}\cdot {\widehat{\mathit{E}}}_{r}^{{\left[\mathrm{s}\right]}^{\mathrm{H}}}. Substituting (12) into (13) gives

\begin{array}{l}\phi (\mathcal{Z})=\left[\left[\mathcal{X}{\times}_{1}{\mathit{Q}}_{{M}_{1}}^{\mathrm{H}}\cdots {\times}_{R}{\mathit{Q}}_{{M}_{R}}^{\mathrm{H}}\right]{\bigsqcup}_{R+1}\right.\\ \phantom{\rule{1em}{0ex}}\left.\left[{\mathcal{X}}^{\ast}{\times}_{1}\left({\mathit{Q}}_{{M}_{1}}^{\mathrm{H}}\cdot {\mathit{\Pi}}_{{M}_{1}}\right)\cdots {\times}_{R}\left({\mathit{Q}}_{{M}_{R}}^{\mathrm{H}}\cdot {\mathit{\Pi}}_{{M}_{R}}\right){\times}_{R+1}{\mathit{\Pi}}_{N}\right]\right]{\times}_{R+1}{\mathit{Q}}_{2N}^{\mathrm{T}}.\end{array}

(20)

Notice that

\begin{array}{ll}& \left({\mathit{Q}}_{{M}_{1}}^{\mathrm{H}}\cdot {\mathit{\Pi}}_{{M}_{1}}\right)\otimes \left({\mathit{Q}}_{{M}_{2}}^{\mathrm{H}}\cdot {\mathit{\Pi}}_{{M}_{2}}\right)\cdots \otimes \left({\mathit{Q}}_{{M}_{R}}^{\mathrm{H}}\cdot {\mathit{\Pi}}_{{M}_{R}}\right)\\ =& {\mathit{Q}}_{M}^{\mathrm{H}}\cdot \left({\mathit{\Pi}}_{{M}_{1}}\otimes {\mathit{\Pi}}_{{M}_{2}}\cdots \otimes {\mathit{\Pi}}_{{M}_{R}}\right)\\ =& {\mathit{Q}}_{M}^{\mathrm{H}}\cdot {\mathit{\Pi}}_{M}.\end{array}

(21)

Expanding the (*R*+1)-mode unfolding of (20) yields

\begin{array}{l}{\left[\phi (\mathcal{Z})\right]}_{(R+1)}={\mathit{Q}}_{2N}^{\mathrm{T}}\cdot \left[\begin{array}{l}{\left[\mathcal{X}\right]}_{(R+1)}\cdot {\left({\mathit{Q}}_{M}^{\mathrm{H}}\right)}^{\mathrm{T}}\\ {\mathit{\Pi}}_{N}\cdot {\left[\mathcal{X}\right]}_{(R+1)}^{\ast}\cdot {\left({\mathit{Q}}_{M}^{\mathrm{H}}\cdot {\mathit{\Pi}}_{M}\right)}^{\mathrm{T}}\end{array}\right].\end{array}

(22)

As there exists a link between the matrix-based data model and its tensor-based counterpart such that \mathit{X}={\left[\mathcal{X}\right]}_{(R+1)}^{\mathrm{T}}, we can express {\left[\phi (\mathcal{Z})\right]}_{(R+1)}^{\mathrm{T}} as

\begin{array}{ll}{\left[\phi (\mathcal{Z})\right]}_{(R+1)}^{\mathrm{T}}& ={\mathit{Q}}_{M}^{\mathrm{H}}\cdot \left[\begin{array}{ll}\mathit{X}& {\mathit{\Pi}}_{M}\cdot {\mathit{X}}^{\ast}\cdot {\mathit{\Pi}}_{N}\end{array}\right]\cdot {\mathit{Q}}_{2N}.\end{array}

(23)

Recall that *φ*(*Z*) is defined via (14) and (15), leading to the observation that

\begin{array}{l}\phi (\mathit{Z})={\left[\phi (\mathcal{Z})\right]}_{(R+1)}^{\mathrm{T}}.\end{array}

(24)
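Identity (24) can be checked numerically. The sketch below is an illustration under assumptions: even (hypothetical) dimensions so that a single construction of *Q*_{p} suffices, and *R*=2. It builds *φ*(\mathcal{Z}) via the *r*-mode products in (12) and (13) and compares the transposed (*R*+1)-mode unfolding with *φ*(*Z*) from (14) and (15).

```python
import numpy as np

def nmode(t, A, r):
    """r-mode product: multiply the matrix A onto mode r of tensor t."""
    return np.moveaxis(np.tensordot(A, t, axes=(1, r)), 0, r)

def Q(p):
    """Left-Pi real unitary matrix (even p suffices for this sketch)."""
    k = p // 2
    I, Pi_k = np.eye(k), np.eye(k)[::-1]
    return np.vstack([np.hstack([I, 1j * I]),
                      np.hstack([Pi_k, -1j * Pi_k])]) / np.sqrt(2)

rng = np.random.default_rng(3)
M1, M2, N = 2, 4, 6          # hypothetical even dimensions, R = 2
M = M1 * M2
Pi = lambda p: np.eye(p)[::-1]

X = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
T = X.reshape(M1, M2, N)

# Eq. (12): forward-backward-averaged tensor (concatenation along mode R+1).
Tb = nmode(nmode(nmode(T.conj(), Pi(M1), 0), Pi(M2), 1), Pi(N), 2)
Zt = np.concatenate([T, Tb], axis=2)

# Eq. (13): real-valued tensor phi(Z-tensor).
phi_Zt = nmode(nmode(nmode(Zt, Q(M1).conj().T, 0), Q(M2).conj().T, 1),
               Q(2 * N).T, 2)
assert np.allclose(phi_Zt.imag, 0)

# Eqs. (14), (15): the matrix-based counterpart phi(Z).
Z = np.hstack([X, Pi(M) @ X.conj() @ Pi(N)])
phi_Z = np.kron(Q(M1), Q(M2)).conj().T @ Z @ Q(2 * N)

# Eq. (24): the transposed (R+1)-mode unfolding recovers phi(Z).
unf = np.moveaxis(phi_Zt, 2, 0).reshape(2 * N, M)
assert np.allclose(unf.T, phi_Z)
```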

As a real-valued equivalent to Theorem 1, we have

\begin{array}{l}{\left[{\widehat{\mathcal{E}}}^{\left[\mathrm{s}\right]}\right]}_{(R+1)}^{\mathrm{T}}=\left({\widehat{T}}_{1}^{\prime}\otimes {\widehat{T}}_{2}^{\prime}\otimes \dots \otimes {\widehat{T}}_{R}^{\prime}\right)\cdot {\widehat{\mathit{E}}}_{\mathrm{s}},\end{array}

(25)

where the columns of {\widehat{\mathit{E}}}_{\mathrm{s}}\in {\mathbb{R}}^{M\times d} represent a real-valued orthonormal basis for the estimated signal subspace in the matrix-based case. The proof proceeds along the lines of the proof of Theorem 1 in [13].

Hence, (25) indicates that, as in the complex-valued case, the real-valued tensor-based subspace estimate can be computed by applying a Kronecker-structured projection to the real-valued matrix-based subspace estimate. Again, the core tensor need not be computed. Note that the obtained real-valued tensor-based subspace estimate can be employed in the unitary tensor ESPRIT algorithm [10] for parameter estimation in multidimensional harmonic retrieval problems. We show the corresponding numerical results in Section 5.

To summarize, when forward-backward-averaging is incorporated, the TeTraKron framework can be employed to extend a matrix-based subspace tracking scheme to tensors such that real-valued tensor-based subspace tracking is realized. For each snapshot, a forward-backward-averaged version of the measurement tensor is mapped to a real-valued tensor analogously to (12) and (13). Then, any matrix-based subspace tracker can be run on all unfoldings of this tensor in parallel, where only real-valued computations are involved. Finally, the real-valued subspace estimate is obtained via a recombination procedure such as (25)^{c}. In Section 4, where the matrix-based algorithms PAST [7] and FAPI [6] are used as examples, we explain explicitly how real-valued tensor-based subspace tracking based on the TeTraKron framework is performed.