 Research
 Open Access
Low-rank filter and detector for multidimensional data based on an alternative unfolding HOSVD: application to polarimetric STAP
EURASIP Journal on Advances in Signal Processing volume 2014, Article number: 119 (2014)
Abstract
This paper proposes an extension of the higher-order singular value decomposition (HOSVD), namely the alternative unfolding HOSVD (AU-HOSVD), in order to exploit the correlated information in multidimensional data. We prove that the AU-HOSVD retains the key properties of the HOSVD: orthogonality and the low-rank (LR) decomposition. We next derive LR filters and LR detectors based on the AU-HOSVD for multidimensional data containing one LR structured contribution. Finally, we apply these new LR filters and LR detectors to polarimetric space-time adaptive processing (STAP). In STAP, it is well known that the response of the background is correlated in time and space and has a LR structure in space-time. Therefore, our approach based on the AU-HOSVD is appropriate when a dimension (polarimetry in this paper) is added. Simulations based on signal-to-interference-plus-noise ratio (SINR) losses, probability of detection (Pd), and probability of false alarm (Pfa) show the benefit of our approach: the LR filters and LR detectors which can be obtained only from the AU-HOSVD outperform the vectorial approach and those obtained from a single HOSVD.
1 Introduction
In signal processing, more and more applications deal with multidimensional data, whereas most signal processing algorithms are derived from one- or two-dimensional models. Consequently, multidimensional data have to be folded into vectors or matrices to be processed. These operations are not lossless since they involve a loss of structure. Several issues may arise from this loss: performance degradation and lack of robustness (see for instance [1]). Multilinear algebra [2, 3] provides a good framework to exploit these data while preserving their structure. In this context, data are represented as multidimensional arrays called tensors. However, generalizing matrix-based algorithms to the multilinear algebra framework is not a trivial task. In particular, some multilinear tools do not retain all the properties of their vector and matrix counterparts. Let us consider the case of the singular value decomposition (SVD). The SVD decomposes a matrix into a sum of rank-1 matrices and has uniqueness and orthonormality properties. There is no single multilinear extension of the SVD with exactly the same properties. Depending on which properties are preserved, several extensions of the SVD have been introduced.
On one hand, CANDECOMP/PARAFAC (CP) [4] decomposes a tensor as a sum of rank-1 tensors, preserving the definition of rank. Thanks to its identifiability and uniqueness properties, this decomposition is relevant for multiple-parameter estimation. CP was first introduced in the signal processing community for direction of arrival (DOA) estimation [5, 6]. New decompositions were then derived from CP. For example, in [7, 8], a decomposition based on a constrained CP model is applied to multiple-input multiple-output (MIMO) wireless communication systems. These decompositions share a common limitation for some applications: they are not orthogonal.
On the other hand, the higher-order singular value decomposition (HOSVD) [3, 9] decomposes a tensor as the product of a core tensor and one unitary matrix for each dimension of the tensor. This decomposition relies on matrix unfoldings. In general, the HOSVD does not have the identifiability and uniqueness properties. The model provided by the HOSVD is defined up to a rotation, which implies that the HOSVD cannot be used for multiple-parameter estimation, unlike CP. The orthogonality properties of the HOSVD allow the extension of low-rank methods such as [10, 11]. These methods are based on the ranks of the different matrix unfoldings, called p-ranks [2, 12, 13]. The HOSVD has been successfully applied in many fields such as image processing [14], sonar and seismo-acoustics [15], ESPRIT [16], ICA [17], and video compression [18].
The HOSVD is based on the classical tensor unfolding and, in particular, on the matrix of left singular vectors. This unfolding transforms a tensor into a matrix in order to highlight one dimension. In other words, the HOSVD only considers simple information, i.e., the information contained in each dimension taken separately. The correlated information, i.e., the information contained in a combination of dimensions, is neglected. In [19], a new decomposition, PARATREE^{a}, based on the sequential unfolding SVD (SUSVD), was proposed. This decomposition considers some correlated information by using the matrix of right singular vectors. This approach can be improved to consider any type of correlated information. Consequently, we propose to develop a new set of orthogonal decompositions which will be called the alternative unfolding HOSVD (AU-HOSVD). In this paper, we define this new operator and study its main properties, especially the extension of the low-rank approximation. We also show the link between the AU-HOSVD and the HOSVD.
Based on this new decomposition, we derive new low-rank (LR) filters and LR detectors for multidimensional data containing a target embedded in interference. We assume that the interference is the sum of two noises: a white Gaussian noise and a low-rank structured one. In order to illustrate the interest of these new LR filters and LR detectors, we consider multidimensional space-time adaptive processing (STAP). STAP is a technique used in airborne phased array radar to detect moving targets embedded in an interference background such as jamming (jammers are not considered in this paper) or strong ground clutter [20], plus a white Gaussian noise (resulting from the sensor noise). While conventional radars are capable of detecting targets both in the time domain related to target range and in the frequency domain related to target velocity, STAP uses an additional domain (space) related to the target angular localization. From the Brennan rule [21], the STAP clutter is known to have a low-rank structure^{b}. This means that the clutter response in STAP is correlated in time and space. Therefore, if we add a dimension, the LR filter and LR detector based on the HOSVD will no longer be appropriate. In this paper, we show the interest of our new LR filters and LR detectors based on the AU-HOSVD in a particular case of multidimensional STAP: polarimetric STAP [22]. In this polarimetric configuration, each element transmits and receives in both H and V polarizations, resulting in three polarimetric channels (HH, VV, HV/VH). The dimension of the data model is then three. Simulations based on signal-to-interference-plus-noise ratio (SINR) losses [20], probability of detection (Pd), and probability of false alarm (Pfa) show the interest of our approach: the LR filters and LR detectors obtained using the AU-HOSVD outperform the vectorial approach and those obtained from the HOSVD in the general polarimetric model (where the channels HH and VV are not completely correlated).
We believe that these results could be extended to more general multidimensional STAP systems like MIMO-STAP [23–26].
The paper is organized as follows. Section 2 gives a brief overview of the basic multilinear algebra tools. In particular, the HOSVD and its main properties are presented. In Section 3, the AU-HOSVD and its properties are derived. Section 4 is devoted to the derivation of the LR filters and LR detectors based on the AU-HOSVD. Finally, in Section 5, these new tools are applied to the case of polarimetric STAP.
The following convention is adopted: scalars are denoted as italic letters, vectors as lowercase boldface letters, matrices as boldface capitals, and tensors as boldface calligraphic letters. We use the superscripts ^{H} for Hermitian transposition and ^{∗} for complex conjugation. The expectation is denoted by E[.], and the Frobenius norm is denoted ∥.∥.
2 Some basic multilinear algebra tools
This section contains the main multilinear algebra tools used in this paper. Let $\mathcal{H}, \mathcal{B} \in C^{I_1 \times \dots \times I_P}$ be two Pth-order tensors and $h_{i_1 \dots i_P}$, $b_{i_1 \dots i_P}$ their elements.
2.1 Basic operators of multilinear algebra
Unfoldings In this paper, three existing unfoldings are used; for a general definition of tensor unfolding, we refer the reader to [2].

Vector: vec transforms a tensor into a vector, $\text{vec}(\mathcal{H}) \in C^{I_1 I_2 \dots I_P}$. We denote by $\text{vec}^{-1}$ the inverse operator.

Matrix: this operator transforms the tensor into a matrix $[\mathcal{H}]_p \in C^{I_p \times I_1 \dots I_{p-1} I_{p+1} \dots I_P}$, $p = 1, \dots, P$. For example, $[\mathcal{H}]_1 \in C^{I_1 \times I_2 \dots I_P}$. This transformation highlights simple information (i.e., information contained in one dimension of the tensor).

Square matrix: this operator transforms a 2Pth-order tensor $\mathcal{R} \in C^{I_1 \times I_2 \dots \times I_P \times I_1 \times I_2 \dots \times I_P}$ into a square matrix, $\text{SqMat}(\mathcal{R}) \in C^{I_1 \dots I_P \times I_1 \dots I_P}$. $\text{SqMat}^{-1}$ is the inverse operator.
The inverse operators always exist. However, the way the tensor was unfolded must be known.
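The three unfoldings above can be sketched in a few lines of NumPy (an illustrative sketch; the paper does not fix an element ordering, so row-major ordering is assumed here as the convention that must be remembered for the inverse operators):

```python
import numpy as np

def vec(H):
    # vec: flatten a tensor into a vector of length I_1 I_2 ... I_P
    # (row-major ordering is an arbitrary but fixed convention).
    return H.reshape(-1)

def unfold(H, p):
    # Mode-p matrix unfolding [H]_p: dimension p becomes the rows,
    # all remaining dimensions are merged into the columns.
    return np.moveaxis(H, p, 0).reshape(H.shape[p], -1)

def sqmat(R):
    # SqMat: a 2P-th-order tensor of shape (I_1,...,I_P, I_1,...,I_P)
    # becomes an (I_1...I_P) x (I_1...I_P) square matrix.
    P = R.ndim // 2
    n = int(np.prod(R.shape[:P]))
    return R.reshape(n, n)

H = np.arange(24).reshape(2, 3, 4)          # a 2 x 3 x 4 tensor
print(vec(H).shape)                          # (24,)
print(unfold(H, 1).shape)                    # (3, 8)
print(sqmat(np.zeros((2, 3, 2, 3))).shape)   # (6, 6)
```

The inverse operators are simply `reshape`/`moveaxis` applied in the opposite order, which is why the unfolding convention must be known.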
Products

The scalar product $\langle \mathcal{H}, \mathcal{B} \rangle$ of two tensors is defined as
$$\langle \mathcal{H}, \mathcal{B} \rangle = \sum_{i_1} \sum_{i_2} \dots \sum_{i_P} b^{\ast}_{i_1 i_2 \dots i_P}\, h_{i_1 i_2 \dots i_P} = \text{vec}(\mathcal{B})^{H}\, \text{vec}(\mathcal{H}). \quad (1)$$
It is the usual Hermitian scalar product in linear spaces [12, 27].

Let $\mathbf{E} \in C^{J_n \times I_n}$ be a matrix; the n-mode product between $\mathbf{E}$ and a tensor $\mathcal{H}$ is defined as
$$\mathcal{G} = \mathcal{H} \times_n \mathbf{E} \in C^{I_1 \times \dots \times J_n \times \dots \times I_P} \iff (\mathcal{G})_{i_1 \dots j_n \dots i_P} = \sum_{i_n} h_{i_1 \dots i_n \dots i_P}\, e_{j_n i_n} \iff [\mathcal{G}]_n = \mathbf{E}\, [\mathcal{H}]_n. \quad (2)$$
The outer product between $\mathcal{H}$ and $\mathcal{B}$, $\mathcal{E} = \mathcal{H} \circ \mathcal{B} \in C^{I_1 \times \dots \times I_P \times I_1 \times \dots \times I_P}$, is defined as
$$e_{i_1 \dots i_P j_1 \dots j_P} = h_{i_1 \dots i_P}\, b_{j_1 \dots j_P}. \quad (3)$$
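The n-mode product can be implemented directly from its unfolding characterization $[\mathcal{G}]_n = \mathbf{E}[\mathcal{H}]_n$; the NumPy sketch below (illustrative, not from the paper) unfolds along mode n, multiplies, and folds the result back:

```python
import numpy as np

def nmode_product(H, E, n):
    # n-mode product G = H x_n E, computed through the equivalence
    # [G]_n = E [H]_n of Equation 2: unfold, multiply, fold back.
    Hn = np.moveaxis(H, n, 0).reshape(H.shape[n], -1)
    Gn = E @ Hn
    shape = (E.shape[0],) + H.shape[:n] + H.shape[n + 1:]
    return np.moveaxis(Gn.reshape(shape), 0, n)

H = np.random.randn(2, 3, 4)
E = np.random.randn(5, 3)        # a J_n x I_n matrix with n = 1 (0-based)
G = nmode_product(H, E, 1)
print(G.shape)                   # (2, 5, 4)
# Sanity check of the defining property [G]_n = E [H]_n:
print(np.allclose(np.moveaxis(G, 1, 0).reshape(5, -1),
                  E @ np.moveaxis(H, 1, 0).reshape(3, -1)))  # True
```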
Tensor ranks There are two concepts of rank for tensors:

The tensor rank: it is defined as the minimum number of rank-1 tensors (for example, a Pth-order rank-1 tensor can be written as the outer product of P vectors) necessary to obtain the considered tensor.

The p-ranks: they are defined as the ranks of the unfoldings, $r_p = \text{rank}([\mathcal{H}]_p)$.
2.2 Higher-order singular value decomposition
This subsection recalls the main results on the HOSVD used in this paper.
Theorem 2.1
The higher-order singular value decomposition (HOSVD) is a particular case of the Tucker decomposition [9] with orthogonality properties. The HOSVD decomposes a tensor $\mathcal{H}$ as follows [3]:
$$\mathcal{H} = \mathcal{K} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \dots \times_P \mathbf{U}^{(P)},$$
where $\forall n$, $\mathbf{U}^{(n)} \in C^{I_n \times I_n}$ is a unitary matrix and $\mathcal{K} \in C^{I_1 \times \dots \times I_P}$ is the core tensor, which satisfies the all-orthogonality conditions [3]. The matrix $\mathbf{U}^{(n)}$ is given by the singular value decomposition of the n-dimension unfolding, $[\mathcal{H}]_n = \mathbf{U}^{(n)} \mathbf{\Sigma}^{(n)} \mathbf{V}^{(n)H}$. Since it uses the classical unfolding, the HOSVD only considers the simple information.
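Theorem 2.1 translates into a short algorithm: one SVD per mode gives the factors $\mathbf{U}^{(n)}$, and the core tensor follows by multiplying $\mathcal{H}$ with the $\mathbf{U}^{(n)H}$ along every mode. A minimal sketch (illustrative; real-valued data for brevity, the complex case only exercises the conjugations already present):

```python
import numpy as np

def nmode(H, E, n):
    # n-mode product via [G]_n = E [H]_n.
    Hn = np.moveaxis(H, n, 0).reshape(H.shape[n], -1)
    out = (E @ Hn).reshape((E.shape[0],) + H.shape[:n] + H.shape[n + 1:])
    return np.moveaxis(out, 0, n)

def hosvd(H):
    # U^(n): left singular vectors of the mode-n unfolding;
    # core tensor: K = H x_1 U^(1)H x_2 ... x_P U^(P)H.
    Us = [np.linalg.svd(np.moveaxis(H, n, 0).reshape(H.shape[n], -1))[0]
          for n in range(H.ndim)]
    K = H
    for n, U in enumerate(Us):
        K = nmode(K, U.conj().T, n)
    return K, Us

H = np.random.randn(3, 4, 5)
K, Us = hosvd(H)
# Exact reconstruction: H = K x_1 U^(1) x_2 U^(2) x_3 U^(3).
H_rec = K
for n, U in enumerate(Us):
    H_rec = nmode(H_rec, U, n)
print(np.allclose(H, H_rec))  # True
```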
Remark Let $\mathcal{H} \in C^{I_1 \times I_2 \dots \times I_P \times I_1 \times I_2 \dots \times I_P}$ be a 2Pth-order Hermitian tensor, i.e., $h_{i_1, \dots, i_P, j_1, \dots, j_P} = h^{\ast}_{j_1, \dots, j_P, i_1, \dots, i_P}$. The HOSVD of $\mathcal{H}$ is written as [16]
$$\mathcal{H} = \mathcal{K} \times_1 \mathbf{U}^{(1)} \dots \times_P \mathbf{U}^{(P)} \times_{P+1} \mathbf{U}^{(1)\ast} \dots \times_{2P} \mathbf{U}^{(P)\ast}.$$
The following result permits the computation of a tensor approximation with lower p-ranks.
Proposition 2.1 (Low-rank approximation)
Let us introduce $\mathcal{H} = \mathcal{H}_c + \mathcal{H}_0$, where $\mathcal{H}_c$ is a $(r_1, \dots, r_P)$ low-rank tensor with $r_k < I_k$ for $k = 1, \dots, P$. An approximation of $\mathcal{H}_0$ is given by [15, 27]:
$$\widehat{\mathcal{H}}_0 = \mathcal{H} \times_1 \mathbf{U}_0^{(1)} \mathbf{U}_0^{(1)H} \times_2 \dots \times_P \mathbf{U}_0^{(P)} \mathbf{U}_0^{(P)H},$$
with $\mathbf{U}_0^{(n)} = [\mathbf{u}^{(n)}_{r_n+1} \dots \mathbf{u}^{(n)}_{I_n}]$.
It is well known that this solution is not optimal in the least squares sense. However, it is a correct approximation in most cases [16, 27], and it is easy to implement. That is why iterative algorithms will not be used in this paper.
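Proposition 2.1 amounts to projecting $\mathcal{H}$, mode by mode, on the subspace spanned by the singular vectors beyond the chosen ranks. A sketch under the proposition's assumptions (illustrative; a rank-(1,1,1) tensor stands in for the low-rank part, which its own noise projectors must annihilate):

```python
import numpy as np

def nmode(H, E, n):
    # n-mode product via [G]_n = E [H]_n.
    Hn = np.moveaxis(H, n, 0).reshape(H.shape[n], -1)
    out = (E @ Hn).reshape((E.shape[0],) + H.shape[:n] + H.shape[n + 1:])
    return np.moveaxis(out, 0, n)

def lowrank_residual(H, ranks):
    # Approximate H_0: in every mode n, project on the span of
    # u^(n)_{r_n+1}, ..., u^(n)_{I_n} (the U_0^(n) of Proposition 2.1).
    H0 = H
    for n, r in enumerate(ranks):
        U = np.linalg.svd(np.moveaxis(H, n, 0).reshape(H.shape[n], -1))[0]
        U0 = U[:, r:]
        H0 = nmode(H0, U0 @ U0.conj().T, n)
    return H0

# A rank-(1,1,1) tensor is annihilated by its own noise projectors:
a, b, c = np.random.randn(3), np.random.randn(4), np.random.randn(5)
Hc = np.einsum('i,j,k->ijk', a, b, c)
print(np.allclose(lowrank_residual(Hc, (1, 1, 1)), 0))  # True
```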
2.3 Covariance tensor and estimation
Definition Let $\mathcal{Z} \in C^{I_1 \times \dots \times I_P}$ be a random Pth-order tensor; the covariance tensor $\mathcal{R} \in C^{I_1 \times \dots \times I_P \times I_1 \times \dots \times I_P}$ is defined as [28]
$$\mathcal{R} = E\left[\mathcal{Z} \circ \mathcal{Z}^{\ast}\right].$$
Sample covariance matrix Let $\mathbf{z} \in C^{I_1 \dots I_P}$ be a zero-mean Gaussian random vector and $\mathbf{R} \in C^{I_1 \dots I_P \times I_1 \dots I_P}$ its covariance matrix. Let $\mathbf{z}_k$, $k = 1, \dots, K$, be K observations of $\mathbf{z}$. The sample covariance matrix (SCM), $\widehat{\mathbf{R}}$, is written as follows:
$$\widehat{\mathbf{R}} = \frac{1}{K} \sum_{k=1}^{K} \mathbf{z}_k \mathbf{z}_k^{H}.$$
Sample covariance tensor Let $\mathcal{Z}_k \in C^{I_1 \times \dots \times I_P}$, $k = 1, \dots, K$, be K observations of $\mathcal{Z}$. By analogy with the SCM, the sample covariance tensor (SCT), $\widehat{\mathcal{R}} \in C^{I_1 \times \dots \times I_P \times I_1 \times \dots \times I_P}$, is defined as [16]
$$\widehat{\mathcal{R}} = \frac{1}{K} \sum_{k=1}^{K} \mathcal{Z}_k \circ \mathcal{Z}_k^{\ast}.$$
Remark If we denote $\mathbf{z} = \text{vec}(\mathcal{Z})$, then
$$\text{SqMat}(\widehat{\mathcal{R}}) = \widehat{\mathbf{R}}.$$
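The SCM, the SCT, and the remark linking them can be checked numerically; a brief sketch on hypothetical complex snapshots (illustrative only):

```python
import numpy as np

def scm(zs):
    # Sample covariance matrix from K vector snapshots z_k.
    return sum(np.outer(z, z.conj()) for z in zs) / len(zs)

def sct(Zs):
    # Sample covariance tensor: average of the outer products Z_k o Z_k^*.
    shape = Zs[0].shape
    R = np.zeros(shape + shape, dtype=complex)
    for Z in Zs:
        R += np.multiply.outer(Z, Z.conj())
    return R / len(Zs)

rng = np.random.default_rng(0)
Zs = [rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
      for _ in range(10)]
R_tensor = sct(Zs)
R_matrix = scm([Z.reshape(-1) for Z in Zs])
# Remark: SqMat of the SCT equals the SCM of the vectorized snapshots.
print(np.allclose(R_tensor.reshape(6, 6), R_matrix))  # True
```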
3 Alternative unfolding HOSVD
Thanks to Proposition 2.1, it is possible to design LR filters based on the HOSVD. This approach does not work when all p-ranks are full (i.e., $r_p = I_p$, $p = 1 \dots P$), since no projection can be performed. However, the data may still have a LR structure. This is the case for correlated data, where one or more ranks relative to a group of dimensions are deficient. Tensor decompositions allowing the exploitation of this kind of structure have not been proposed so far. To fill this gap, we introduce a new tool which is able to extract this kind of information. This section contains the main contribution of this paper: the derivation of the AU-HOSVD and its principal properties.
3.1 Generalization of standard operators
Notation of indices In order to consider correlated information, we introduce a new notation for the indices of a tensor. We consider $\mathcal{H} \in C^{I_1 \times \dots \times I_P}$, a Pth-order tensor. We denote by $\mathbb{A} = \{1, \dots, P\}$ the set of the dimensions and by $\mathbb{A}_1, \dots, \mathbb{A}_L$, L subsets of $\mathbb{A}$ which define a partition of $\mathbb{A}$. In other words, $\mathbb{A}_1, \dots, \mathbb{A}_L$ satisfy the following conditions:

$${\mathbb{A}}_{1}\cup \dots \cup {\mathbb{A}}_{L}=\mathbb{A}$$

They are pairwise disjoint, i.e.,
$$\forall i\ne j,{\mathbb{A}}_{i}\cap {\mathbb{A}}_{j}=\mathrm{\varnothing .}$$
Moreover ${C}^{{I}_{1}\dots {I}_{P}}$ is denoted ${C}^{{I}_{\mathbb{A}}}$. For example, when ${\mathbb{A}}_{1}=\{1,2\}$ and ${\mathbb{A}}_{2}=\{3,4\}$, ${C}^{{I}_{{\mathbb{A}}_{1}}\times {I}_{{\mathbb{A}}_{2}}}$ means ${C}^{{I}_{1}{I}_{2}\times {I}_{3}{I}_{4}}$.
A generalization of unfolding in matrices In order to build our new decomposition, we need a generalized unfolding, adapted from [2]. This operator allows one to unfold a tensor into a matrix whose row dimension can be any combination $\mathbb{A}_l$ of the tensor dimensions. It is denoted by $[\,.\,]_{\mathbb{A}_l}$, and it transforms $\mathcal{H}$ into a matrix $[\mathcal{H}]_{\mathbb{A}_l} \in C^{I_{\mathbb{A}_l} \times I_{\mathbb{A} \setminus \mathbb{A}_l}}$.
A new unfolding in tensors We denote by Reshape the operator which transforms a Pth-order tensor $\mathcal{H}$ into an Lth-order tensor, $\text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \in C^{I_{\mathbb{A}_1} \times \dots \times I_{\mathbb{A}_L}}$, and by $\text{Reshape}^{-1}$ the inverse operator.
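Both operators reduce, in NumPy terms, to an axis permutation followed by a reshape; an illustrative sketch (0-based axis indices, whereas the paper numbers dimensions from 1):

```python
import numpy as np

def reshape_partition(H, partition):
    # Reshape(H, A_1, ..., A_L): make each subset of axes contiguous,
    # then merge every subset into one dimension of size I_{A_l}.
    order = [p for block in partition for p in block]
    sizes = [int(np.prod([H.shape[p] for p in block])) for block in partition]
    return np.transpose(H, order).reshape(sizes)

def unfold_partition(H, partition, l):
    # Generalized unfolding [H]_{A_l}: the mode-l unfolding of the
    # reshaped tensor (the identity used in Proof 3.1).
    T = reshape_partition(H, partition)
    return np.moveaxis(T, l, 0).reshape(T.shape[l], -1)

H = np.random.randn(2, 3, 4)
partition = [[0, 2], [1]]   # A_1 = {1,3}, A_2 = {2} in the paper's notation
print(reshape_partition(H, partition).shape)    # (8, 3)
print(unfold_partition(H, partition, 0).shape)  # (8, 3)
```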
A new tensor product The n-mode product allows the multiplication of a tensor with a matrix along one dimension. We propose to extend the n-mode product to multiply a tensor with a matrix along several dimensions, combined in $\mathbb{A}_l$. Let $\mathbf{D} \in C^{I_{\mathbb{A}_l} \times I_{\mathbb{A}_l}}$ be a square matrix. This new product, called the multimode product, is defined as
$$\mathcal{G} = \mathcal{H} \times_{\mathbb{A}_l} \mathbf{D} \iff [\mathcal{G}]_{\mathbb{A}_l} = \mathbf{D}\, [\mathcal{H}]_{\mathbb{A}_l}.$$
The following proposition shows the link between multimode product and nmode product.
Proposition 3.1 (Link between the n-mode product and the multimode product).
Let $\mathcal{H} \in C^{I_1 \times \dots \times I_P}$ be a Pth-order tensor, $\mathbb{A}_1, \dots, \mathbb{A}_L$ be a partition of $\mathbb{A}$, and $\mathbf{D} \in C^{I_{\mathbb{A}_l} \times I_{\mathbb{A}_l}}$ be a square matrix. Then, the following equality is verified:
$$\mathcal{H} \times_{\mathbb{A}_l} \mathbf{D} = \text{Reshape}^{-1}\left(\text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_l \mathbf{D}\right).$$
Proof 3.1
The proof of Proposition 3.1 relies on the following straightforward result: the Reshape operator does not modify the $\mathbb{A}_l$-unfoldings. This leads to $[\mathcal{B}]_{\mathbb{A}_l} = [\text{Reshape}(\mathcal{B}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l$ and $[\mathcal{H}]_{\mathbb{A}_l} = [\text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l$. Denoting $\mathcal{B} = \mathcal{H} \times_{\mathbb{A}_l} \mathbf{D}$, i.e., $[\mathcal{B}]_{\mathbb{A}_l} = \mathbf{D}\,[\mathcal{H}]_{\mathbb{A}_l}$, and applying these two results, we obtain
$$[\text{Reshape}(\mathcal{B}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l = \mathbf{D}\, [\text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l.$$
From Equation 2, this is equivalent to
$$\text{Reshape}(\mathcal{B}, \mathbb{A}_1, \dots, \mathbb{A}_L) = \text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_l \mathbf{D}.$$
Finally, one has
$$\mathcal{H} \times_{\mathbb{A}_l} \mathbf{D} = \mathcal{B} = \text{Reshape}^{-1}\left(\text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_l \mathbf{D}\right),$$
which concludes the proof.
Remark Thanks to the previous proposition and the commutative property of the n-mode product, the multimode product is also commutative.
3.2 AU-HOSVD
With the new tools presented in the previous subsection, we are now able to introduce the AUHOSVD. This is the purpose of the following theorem.
Theorem 3.1 (Alternative unfolding HOSVD).
Let $\mathcal{H} \in C^{I_1 \times \dots \times I_P}$ and $\mathbb{A}_1, \dots, \mathbb{A}_L$ be a partition of $\mathbb{A}$. Then, $\mathcal{H}$ may be decomposed as follows:
$$\mathcal{H} = \mathcal{K}_{\mathbb{A}_1/\dots/\mathbb{A}_L} \times_{\mathbb{A}_1} \mathbf{U}^{(\mathbb{A}_1)} \times_{\mathbb{A}_2} \dots \times_{\mathbb{A}_L} \mathbf{U}^{(\mathbb{A}_L)},$$
where

∀l ∈ [1, L], $\mathbf{U}^{(\mathbb{A}_l)} \in C^{I_{\mathbb{A}_l} \times I_{\mathbb{A}_l}}$ is unitary. The matrix $\mathbf{U}^{(\mathbb{A}_l)}$ is given by the singular value decomposition of the $\mathbb{A}_l$-dimension unfolding, $[\mathcal{H}]_{\mathbb{A}_l} = \mathbf{U}^{(\mathbb{A}_l)} \mathbf{\Sigma}^{(\mathbb{A}_l)} \mathbf{V}^{(\mathbb{A}_l)H}$.

$${\mathit{K}}_{{\mathbb{A}}_{1}/\dots /{\mathbb{A}}_{L}}\in {C}^{{I}_{1}\times \dots \times {I}_{P}}$$
is the core tensor. It has the same properties as the HOSVD core tensor.
Notice that there are several ways to decompose a tensor with the AU-HOSVD. Each choice of the $\mathbb{A}_1, \dots, \mathbb{A}_L$ gives a different decomposition. For a Pth-order tensor, the number of different AU-HOSVDs is given by the Bell number, $B_P$, which satisfies the recurrence
$$B_{P+1} = \sum_{k=0}^{P} \binom{P}{k} B_k, \quad B_0 = 1.$$
The AU-HOSVD associated with the partition $\mathbb{A}_1, \dots, \mathbb{A}_L$ is denoted $\text{AU-HOSVD}_{\mathbb{A}_1, \dots, \mathbb{A}_L}$.
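The count of admissible decompositions can be tabulated with the standard Bell-number recurrence (a small illustrative helper):

```python
from math import comb

def bell(P):
    # B_P counts the partitions of {1, ..., P}, i.e., the number of
    # distinct AU-HOSVDs of a P-th-order tensor.
    B = [1]  # B_0 = 1
    for n in range(P):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B[P]

# B_3 = 5 is the number of LR filters/detectors obtained in the
# polarimetric STAP application of Section 5.
print([bell(P) for P in range(1, 6)])  # [1, 2, 5, 15, 52]
```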
Proof 3.2.
First, let us consider $\mathbb{A}_1, \dots, \mathbb{A}_L$, a partition of $\mathbb{A}$. $\text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L)$ is an Lth-order tensor and may be decomposed using the HOSVD:
$$\text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) = \mathcal{K} \times_1 \mathbf{U}^{(1)} \times_2 \dots \times_L \mathbf{U}^{(L)},$$
where the matrix $\mathbf{U}^{(l)}$ is given by the singular value decomposition of the l-dimension unfolding, $[\text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l = [\mathcal{H}]_{\mathbb{A}_l} = \mathbf{U}^{(l)} \mathbf{\Sigma}^{(l)} \mathbf{V}^{(l)H}$.
Since the matrices $\mathbf{U}^{(l)}$ are unitary, the core tensor is given by
$$\mathcal{K} = \text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_1 \mathbf{U}^{(1)H} \times_2 \dots \times_L \mathbf{U}^{(L)H}.$$
Then, using Proposition 3.1 for each mode, the following equality is true:
$$\text{Reshape}^{-1}\left(\mathcal{K} \times_1 \mathbf{U}^{(1)} \times_2 \dots \times_L \mathbf{U}^{(L)}\right) = \text{Reshape}^{-1}(\mathcal{K}) \times_{\mathbb{A}_1} \mathbf{U}^{(1)} \times_{\mathbb{A}_2} \dots \times_{\mathbb{A}_L} \mathbf{U}^{(L)},$$
which leads to
$$\mathcal{H} = \text{Reshape}^{-1}(\mathcal{K}) \times_{\mathbb{A}_1} \mathbf{U}^{(1)} \times_{\mathbb{A}_2} \dots \times_{\mathbb{A}_L} \mathbf{U}^{(L)}.$$
Finally, identifying $\mathcal{K}_{\mathbb{A}_1/\dots/\mathbb{A}_L} = \text{Reshape}^{-1}(\mathcal{K})$ and $\mathbf{U}^{(\mathbb{A}_l)} = \mathbf{U}^{(l)}$ concludes the proof.
Example For a third-order tensor $\mathcal{H} \in C^{I_1 \times I_2 \times I_3}$ with $\mathbb{A}_1 = \{1, 3\}$ and $\mathbb{A}_2 = \{2\}$, the AU-HOSVD is written as follows:
$$\mathcal{H} = \mathcal{K}_{\mathbb{A}_1/\mathbb{A}_2} \times_{\mathbb{A}_1} \mathbf{U}^{(\mathbb{A}_1)} \times_{\mathbb{A}_2} \mathbf{U}^{(\mathbb{A}_2)},$$
with ${\mathit{K}}_{{\mathbb{A}}_{1}/{\mathbb{A}}_{2}}\in {C}^{{I}_{1}\times {I}_{2}\times {I}_{3}}$, ${\mathbf{U}}^{\left({\mathbb{A}}_{1}\right)}\in {C}^{{I}_{1}{I}_{3}\times {I}_{1}{I}_{3}}$ and ${\mathbf{U}}^{\left({\mathbb{A}}_{2}\right)}\in {C}^{{I}_{2}\times {I}_{2}}$.
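For this third-order example, the AU-HOSVD reduces to an HOSVD of the reshaped second-order tensor; the sketch below computes it and checks the exact reconstruction (illustrative, real-valued data, 0-based axes):

```python
import numpy as np

def nmode(H, E, n):
    # n-mode product via [G]_n = E [H]_n.
    Hn = np.moveaxis(H, n, 0).reshape(H.shape[n], -1)
    out = (E @ Hn).reshape((E.shape[0],) + H.shape[:n] + H.shape[n + 1:])
    return np.moveaxis(out, 0, n)

def auhosvd(H, partition):
    # AU-HOSVD_{A_1,...,A_L}: HOSVD of Reshape(H, A_1, ..., A_L),
    # i.e., one SVD per subset A_l (Theorem 3.1).
    order = [p for block in partition for p in block]
    sizes = [int(np.prod([H.shape[p] for p in block])) for block in partition]
    T = np.transpose(H, order).reshape(sizes)
    Us = [np.linalg.svd(np.moveaxis(T, l, 0).reshape(T.shape[l], -1))[0]
          for l in range(T.ndim)]
    K = T
    for l, U in enumerate(Us):
        K = nmode(K, U.conj().T, l)
    return K, Us, order

H = np.random.randn(2, 3, 4)
K, (U1, U2), order = auhosvd(H, [[0, 2], [1]])  # A_1 = {1,3}, A_2 = {2}
print(U1.shape, U2.shape)   # (8, 8) (3, 3)

# Reconstruction: Reshape^{-1}(K x_1 U^(A_1) x_2 U^(A_2)) recovers H.
T_rec = nmode(nmode(K, U1, 0), U2, 1)
H_rec = np.transpose(T_rec.reshape([H.shape[p] for p in order]),
                     np.argsort(order))
print(np.allclose(H, H_rec))  # True
```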
Remark Let $\mathcal{H} \in C^{I_1 \times I_2 \dots \times I_P \times I_1 \times I_2 \dots \times I_P}$ be a 2Pth-order Hermitian tensor. We consider 2L subsets of $\{1, \dots, P, 1, \dots, P\}$ such that

$\mathbb{A}_1, \dots, \mathbb{A}_L$ and $\mathbb{A}_{L+1}, \dots, \mathbb{A}_{2L}$ are two partitions of $\{1, \dots, P\}$;

∀l ∈ [1, L],
$$\mathbb{A}_l = \mathbb{A}_{l+L}.$$
Under these conditions, the AU-HOSVD of $\mathcal{H}$ is written:
$$\mathcal{H} = \mathcal{K} \times_{\mathbb{A}_1} \mathbf{U}^{(\mathbb{A}_1)} \dots \times_{\mathbb{A}_L} \mathbf{U}^{(\mathbb{A}_L)} \times_{\mathbb{A}_{L+1}} \mathbf{U}^{(\mathbb{A}_1)\ast} \dots \times_{\mathbb{A}_{2L}} \mathbf{U}^{(\mathbb{A}_L)\ast}.$$
As discussed previously, the main motivation for introducing the AU-HOSVD is to extract the correlated information when performing the low-rank decomposition. This is the purpose of the following proposition.
Proposition 3.2 (Low-rank approximation).
Let $\mathcal{H}$, $\mathcal{H}_c$, $\mathcal{H}_0$ be three Pth-order tensors such that
$$\mathcal{H} = \mathcal{H}_c + \mathcal{H}_0,$$
where $\mathcal{H}_c$ is a $(r_{\mathbb{A}_1}, \dots, r_{\mathbb{A}_L})$ low-rank tensor^{c} $(r_{\mathbb{A}_l} = \text{rank}([\mathcal{H}_c]_{\mathbb{A}_l}))$. Then, $\mathcal{H}_0$ is approximated by
$$\widehat{\mathcal{H}}_0 = \mathcal{H} \times_{\mathbb{A}_1} \mathbf{U}_0^{(\mathbb{A}_1)} \mathbf{U}_0^{(\mathbb{A}_1)H} \times_{\mathbb{A}_2} \dots \times_{\mathbb{A}_L} \mathbf{U}_0^{(\mathbb{A}_L)} \mathbf{U}_0^{(\mathbb{A}_L)H},$$
where $\mathbf{U}_0^{(\mathbb{A}_1)}$, …, $\mathbf{U}_0^{(\mathbb{A}_L)}$ minimize the following criterion:
$$\min_{\mathbf{U}_0^{(\mathbb{A}_1)}, \dots, \mathbf{U}_0^{(\mathbb{A}_L)}} \left\| \mathcal{H}_0 - \mathcal{H} \times_{\mathbb{A}_1} \mathbf{U}_0^{(\mathbb{A}_1)} \mathbf{U}_0^{(\mathbb{A}_1)H} \times_{\mathbb{A}_2} \dots \times_{\mathbb{A}_L} \mathbf{U}_0^{(\mathbb{A}_L)} \mathbf{U}_0^{(\mathbb{A}_L)H} \right\|.$$
In this paper, the matrices $\mathbf{U}_0^{(\mathbb{A}_l)}$'s are given by truncation of the matrices $\mathbf{U}^{(\mathbb{A}_l)}$'s obtained by the $\text{AU-HOSVD}_{\mathbb{A}_1, \dots, \mathbb{A}_L}$ of $\mathcal{H}$: $\mathbf{U}_0^{(\mathbb{A}_l)} = [\mathbf{u}^{(\mathbb{A}_l)}_{r_{\mathbb{A}_l}+1} \dots \mathbf{u}^{(\mathbb{A}_l)}_{I_{\mathbb{A}_l}}]$. This solution is not optimal in the least squares sense but is easy to implement. However, thanks to the strong link with the HOSVD, it should be a correct approximation. That is why iterative algorithms for the AU-HOSVD will not be investigated in this paper.
Proof 3.3.
By applying Reshape to $\mathcal{H} = \mathcal{H}_c + \mathcal{H}_0$, one obtains
$$\text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) = \text{Reshape}(\mathcal{H}_c, \mathbb{A}_1, \dots, \mathbb{A}_L) + \text{Reshape}(\mathcal{H}_0, \mathbb{A}_1, \dots, \mathbb{A}_L).$$
Then, $\text{Reshape}(\mathcal{H}_c, \mathbb{A}_1, \dots, \mathbb{A}_L)$ is a $(r_{\mathbb{A}_1}, \dots, r_{\mathbb{A}_L})$ low-rank tensor (where $r_{\mathbb{A}_l} = \text{rank}([\text{Reshape}(\mathcal{H}_c, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l)$). Proposition 2.1 can now be applied, and this leads to
$$\text{Reshape}(\widehat{\mathcal{H}}_0, \mathbb{A}_1, \dots, \mathbb{A}_L) = \text{Reshape}(\mathcal{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_1 \mathbf{U}_0^{(\mathbb{A}_1)} \mathbf{U}_0^{(\mathbb{A}_1)H} \times_2 \dots \times_L \mathbf{U}_0^{(\mathbb{A}_L)} \mathbf{U}_0^{(\mathbb{A}_L)H}.$$
Finally, applying $\text{Reshape}^{-1}$ and Proposition 3.1 to the previous equation leads to the end of the proof:
$$\widehat{\mathcal{H}}_0 = \mathcal{H} \times_{\mathbb{A}_1} \mathbf{U}_0^{(\mathbb{A}_1)} \mathbf{U}_0^{(\mathbb{A}_1)H} \times_{\mathbb{A}_2} \dots \times_{\mathbb{A}_L} \mathbf{U}_0^{(\mathbb{A}_L)} \mathbf{U}_0^{(\mathbb{A}_L)H}.$$
Discussion on the choice of partition and complexity As mentioned previously, the total number of AU-HOSVDs for a Pth-order tensor is equal to $B_P$. Since this number can become large, it is important to have a procedure to find good partitions for the AU-HOSVD computation. We propose a two-step procedure. Since the AU-HOSVD has been developed for LR reduction, the most important criterion is to choose the partitions which exhibit deficient ranks. For some applications, it is possible to use a priori knowledge to select some partitions, as will be shown in Section 5 for polarimetric STAP. A second step is needed if several partitions induce an AU-HOSVD with a deficient rank. In this case, we propose to maximize a criterion (see Section 5.3 for examples) over the remaining partitions.
Concerning the complexity, the number of operations necessary to compute the HOSVD of a Pth-order tensor is equal to $4(\prod_p I_p)(\sum_p I_p)$ [3]. Similarly, the complexity of the AU-HOSVD is equal to $4(\prod_p I_p)(\sum_l I_{\mathbb{A}_l})$.
4 Low-rank filter and detector for multidimensional data based on the alternative unfolding HOSVD
We propose in this section to apply this new decomposition to derive a tensorial LR filter and a tensorial LR detector for multidimensional data. We consider the case of P-dimensional data composed of a target described by its steering tensor $\mathcal{S}$ and two additive noises: $\mathcal{C}$ and $\mathcal{N}$. We assume that we have K secondary data $\mathcal{X}_k$ containing only the additive noises. This configuration can be summarized as follows:
$$\mathcal{X} = \mathcal{S} + \mathcal{C} + \mathcal{N}, \quad (24)$$
$$\mathcal{X}_k = \mathcal{C}_k + \mathcal{N}_k, \quad k = 1, \dots, K, \quad (25)$$
where $\mathcal{X}, \mathcal{X}_k, \mathcal{C}, \mathcal{C}_k, \mathcal{N}, \mathcal{N}_k \in C^{I_1 \times \dots \times I_P}$. We assume that $\mathcal{N}, \mathcal{N}_k \sim \mathcal{CN}(\mathbf{0}, \sigma^2 \text{SqMat}^{-1}(\mathbf{I}_{I_1 \dots I_P}))$ and $\mathcal{C}, \mathcal{C}_k \sim \mathcal{CN}(\mathbf{0}, \mathcal{R}_c)$, with $\text{SqMat}^{-1}(\mathbf{I}_{I_1 \dots I_P}), \mathcal{R}_c \in C^{I_1 \times \dots \times I_P \times I_1 \times \dots \times I_P}$. These notations mean $\text{vec}(\mathcal{N}), \text{vec}(\mathcal{N}_k) \sim \mathcal{CN}(\mathbf{0}, \sigma^2 \mathbf{I}_{I_1 \dots I_P})$ and $\text{vec}(\mathcal{C}), \text{vec}(\mathcal{C}_k) \sim \mathcal{CN}(\mathbf{0}, \text{SqMat}(\mathcal{R}_c))$. We denote by $\mathcal{R} = \mathcal{R}_c + \sigma^2 \text{SqMat}^{-1}(\mathbf{I}_{I_1 \dots I_P})$ the covariance tensor of the total interference. We assume in the following that the additive noise $\mathcal{C}$ (and hence also $\mathcal{C}_k$) has a low-rank structure.
4.1 LR filters
Proposition 4.1 (Optimal tensor filter).
The optimal tensor filter, $\mathcal{W}_{\text{opt}}$, is defined as the filter which maximizes the output SINR:
$$\text{SINR}_{\text{out}} = \frac{\left| \langle \mathcal{W}, \mathcal{S} \rangle \right|^2}{E\left[ \left| \langle \mathcal{W}, \mathcal{C} + \mathcal{N} \rangle \right|^2 \right]}.$$
It is given by the following expression:
$$\mathcal{W}_{\text{opt}} = \text{vec}^{-1}\left( \text{SqMat}(\mathcal{R})^{-1}\, \text{vec}(\mathcal{S}) \right).$$
Proof 4.1.
See Appendix 1.
In practical cases, $\mathcal{R}$ is unknown. Hence, we propose an adaptive version:
$$\widehat{\mathcal{W}} = \text{vec}^{-1}\left( \text{SqMat}(\widehat{\mathcal{R}})^{-1}\, \text{vec}(\mathcal{S}) \right),$$
where $\widehat{\mathcal{R}}$ is the estimate of $\mathcal{R}$ given by the SCT from Equation 9. This filter is equivalent to the vectorial filter. In order to reach correct performance [29], $K = 2 I_1 \dots I_P$ secondary data are necessary. As with the vectorial approach, it is interesting to use the low-rank structure of $\mathcal{C}$ to reduce this number K.
Proposition 4.2 (Low-rank tensor filter).
Let $\mathbb{A}_1, \dots, \mathbb{A}_L$ be a partition of $\{1, \dots, P\}$. The low-rank tensor filter associated with $\text{AU-HOSVD}_{\mathbb{A}_1, \dots, \mathbb{A}_L}$, which removes the low-rank noise $\mathcal{C}$, is given by
$$\mathcal{W}_{\text{lr}(\mathbb{A}_1/\dots/\mathbb{A}_L)} = \mathcal{S} \times_{\mathbb{A}_1} \mathbf{U}_0^{(\mathbb{A}_1)} \mathbf{U}_0^{(\mathbb{A}_1)H} \times_{\mathbb{A}_2} \dots \times_{\mathbb{A}_L} \mathbf{U}_0^{(\mathbb{A}_L)} \mathbf{U}_0^{(\mathbb{A}_L)H}.$$
The matrices $\mathbf{U}_0^{(\mathbb{A}_l)}$'s are given by truncation of the matrices $\mathbf{U}^{(\mathbb{A}_l)}$'s obtained by the $\text{AU-HOSVD}_{\mathbb{A}_1, \dots, \mathbb{A}_L}$ of $\mathcal{R}$: $\mathbf{U}_0^{(\mathbb{A}_l)} = [\mathbf{u}^{(\mathbb{A}_l)}_{r_{\mathbb{A}_l}+1} \dots \mathbf{u}^{(\mathbb{A}_l)}_{I_{\mathbb{A}_l}}]$.
For a Pdimensional configuration, B_{ P } filters will be obtained.
Proof 4.2.
See Appendix 2.
In its adaptive version, denoted $\widehat{\mathcal{W}}_{\text{lr}(\mathbb{A}_1/\dots/\mathbb{A}_L)}$, the matrices $\mathbf{U}_0^{(\mathbb{A}_1)}, \dots, \mathbf{U}_0^{(\mathbb{A}_L)}$ are replaced by their estimates $\widehat{\mathbf{U}}_0^{(\mathbb{A}_1)}, \dots, \widehat{\mathbf{U}}_0^{(\mathbb{A}_L)}$.
The number of secondary data necessary to reach classical performance is not known. In the vectorial case, the performance of the LR filter depends on the deficient rank [10, 11]. It will be similar for the LR tensor filters. This implies that the choice of the partition $\mathbb{A}_1, \dots, \mathbb{A}_L$ is critical.
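The adaptive LR tensor filter can be sketched as follows. This is an illustrative implementation under an assumption: the clutter-orthogonal bases are estimated here from the stacked $\mathbb{A}_l$-unfoldings of the secondary data (one convenient subspace estimate), whereas the paper obtains them from the AU-HOSVD of the SCT; the projection of the steering tensor along each subset of the partition is the same:

```python
import numpy as np

def nmode(H, E, n):
    # n-mode product via [G]_n = E [H]_n.
    Hn = np.moveaxis(H, n, 0).reshape(H.shape[n], -1)
    out = (E @ Hn).reshape((E.shape[0],) + H.shape[:n] + H.shape[n + 1:])
    return np.moveaxis(out, 0, n)

def lr_tensor_filter(S, Xks, partition, ranks):
    # Sketch of W_lr(A_1/.../A_L): project the steering tensor S on the
    # estimated clutter-orthogonal subspace along each subset A_l.
    order = [p for block in partition for p in block]
    sizes = [int(np.prod([S.shape[p] for p in block])) for block in partition]
    T = np.transpose(S, order).reshape(sizes)
    for l, r in enumerate(ranks):
        # Stack the A_l-unfoldings of the K secondary data; the left
        # singular vectors estimate U^(A_l), and the trailing columns
        # give the clutter-orthogonal basis U_0^(A_l).
        D = np.concatenate(
            [np.moveaxis(np.transpose(X, order).reshape(sizes), l, 0)
               .reshape(sizes[l], -1) for X in Xks], axis=1)
        U0 = np.linalg.svd(D)[0][:, r:]
        T = nmode(T, U0 @ U0.conj().T, l)
    return np.transpose(T.reshape([S.shape[p] for p in order]),
                        np.argsort(order))

rng = np.random.default_rng(2)
S = rng.standard_normal((2, 3, 4))
Xks = [rng.standard_normal((2, 3, 4)) for _ in range(20)]
W = lr_tensor_filter(S, Xks, [[0, 2], [1]], ranks=(0, 0))
# With all ranks set to 0, the projectors are identities and W = S.
print(np.allclose(W, S))  # True
```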
4.2 LR detectors
From a detection point of view, the problem can also be stated as the following binary hypothesis test:
$$\begin{array}{ll} H_0: & \mathcal{X} = \mathcal{C} + \mathcal{N}, \quad \mathcal{X}_k = \mathcal{C}_k + \mathcal{N}_k, \; k = 1, \dots, K \\ H_1: & \mathcal{X} = \mathcal{S} + \mathcal{C} + \mathcal{N}, \quad \mathcal{X}_k = \mathcal{C}_k + \mathcal{N}_k, \; k = 1, \dots, K. \end{array}$$
Proposition 4.3 (Low-rank tensor detector).
Let $\mathbb{A}_1, \dots, \mathbb{A}_L$ be a partition of $\{1, \dots, P\}$. The low-rank tensor detector associated with $\text{AU-HOSVD}_{\mathbb{A}_1, \dots, \mathbb{A}_L}$, which removes the low-rank noise $\mathcal{C}$ and performs the generalized likelihood ratio test (GLRT), is given by
$$\Lambda_{\mathbb{A}_1 \dots \mathbb{A}_L} = \frac{\left| \langle \mathcal{P}(\mathcal{S}), \mathcal{X} \rangle \right|^2}{\langle \mathcal{P}(\mathcal{S}), \mathcal{S} \rangle\, \langle \mathcal{P}(\mathcal{X}), \mathcal{X} \rangle},$$
where
$$\mathcal{P}(\,\cdot\,) = (\,\cdot\,) \times_{\mathbb{A}_1} \mathbf{U}_0^{(\mathbb{A}_1)} \mathbf{U}_0^{(\mathbb{A}_1)H} \times_{\mathbb{A}_2} \dots \times_{\mathbb{A}_L} \mathbf{U}_0^{(\mathbb{A}_L)} \mathbf{U}_0^{(\mathbb{A}_L)H}.$$
The matrices $\mathbf{U}_0^{(\mathbb{A}_l)}$'s are given by truncation of the matrices $\mathbf{U}^{(\mathbb{A}_l)}$'s obtained by the $\text{AU-HOSVD}_{\mathbb{A}_1, \dots, \mathbb{A}_L}$ of $\mathcal{R}$: $\mathbf{U}_0^{(\mathbb{A}_l)} = [\mathbf{u}^{(\mathbb{A}_l)}_{r_{\mathbb{A}_l}+1} \dots \mathbf{u}^{(\mathbb{A}_l)}_{I_{\mathbb{A}_l}}]$.
Proof 4.3.
See Appendix 3.
In its adaptive version, denoted as ${\widehat{\Lambda}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}$, the matrices ${\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)},\dots ,{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)}$ are replaced by their estimates ${\widehat{\mathbf{U}}}_{0}^{\left({\mathbb{A}}_{1}\right)},\dots ,{\widehat{\mathbf{U}}}_{0}^{\left({\mathbb{A}}_{L}\right)}$.
4.3 Particular case
When the partition $\mathbb{A}_1 = \{1, \dots, P\}$ is chosen, the filter and the detector obtained by the AU-HOSVD are equal to the vectorial ones. In other words, it is equivalent to applying the operator vec to Equations 24 and 25 and using the vectorial method. We denote $m = I_1 \dots I_P$, $\mathbf{x} = \text{vec}(\mathcal{X})$, and $\mathbf{s} = \text{vec}(\mathcal{S})$. We obtain the basis $\mathbf{U}_0$ of the orthogonal clutter subspace by taking the last $(m - r)$ columns of $\mathbf{U}$, which is computed by the SVD of $\text{SqMat}(\mathcal{R}) = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^H$. From this basis, the low-rank filter is then equal to [10, 11]:
$$\mathbf{w}_{\text{lr}} = \mathbf{U}_0 \mathbf{U}_0^H \mathbf{s}.$$
In its adaptive version, denoted ${\widehat{\mathbf{w}}}_{\mathit{\text{lr}}}$, the matrix U_{0} is replaced by its estimate $\widehat{{\mathbf{U}}_{0}}$.
Similarly, the detector is equal to the low-rank normalized matched filter proposed in [30, 31]:
$$\Lambda_{\text{LRNMF}} = \frac{\left| \mathbf{s}^H \mathbf{U}_0 \mathbf{U}_0^H \mathbf{x} \right|^2}{\left( \mathbf{s}^H \mathbf{U}_0 \mathbf{U}_0^H \mathbf{s} \right) \left( \mathbf{x}^H \mathbf{U}_0 \mathbf{U}_0^H \mathbf{x} \right)}.$$
In its adaptive version, denoted ${\widehat{\Lambda}}_{\text{LRNMF}}$, the matrix U_{0} is replaced by its estimate ${\widehat{\mathbf{U}}}_{0}$.
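The vectorial particular case is easy to sketch; the statistic below follows the classical LR-NMF form (numerator and normalization both use the projector $\mathbf{U}_0\mathbf{U}_0^H$), on hypothetical data with a random full-rank stand-in covariance:

```python
import numpy as np

def lr_filter(s, U0):
    # Low-rank vector filter: w_lr = U_0 U_0^H s.
    return U0 @ (U0.conj().T @ s)

def lr_nmf(x, s, U0):
    # Low-rank normalized matched filter: project the data and the
    # steering vector on the clutter-orthogonal subspace, then normalize.
    Pi = U0 @ U0.conj().T
    num = np.abs(s.conj() @ Pi @ x) ** 2
    den = (s.conj() @ Pi @ s).real * (x.conj() @ Pi @ x).real
    return num / den

rng = np.random.default_rng(1)
m, r = 8, 3
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
U = np.linalg.svd(A @ A.conj().T)[0]   # stand-in covariance SVD
U0 = U[:, r:]                          # last (m - r) columns
s = rng.standard_normal(m) + 1j * rng.standard_normal(m)
x = rng.standard_normal(m) + 1j * rng.standard_normal(m)
# By the Cauchy-Schwarz inequality, the statistic lies in [0, 1].
print(0.0 <= lr_nmf(x, s, U0) <= 1.0)  # True
```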
5 Application to polarimetric STAP
5.1 Model
We propose to apply the LR filters and LR detectors derived in the previous section to polarimetric STAP. STAP is applied to airborne radar in order to detect moving targets [20]. Typically, the radar receiver consists of an array of N antenna elements processing M pulses in a coherent processing interval. In the polarimetric configuration, each element transmits and receives in both H and V polarizations, resulting in three polarimetric channels (HH, VV, HV/VH). The number of dimensions of polarimetric STAP data is then equal to three: N antenna elements, M pulses, and three polarimetric channels.
We are in the data configuration proposed in Equations 24 and 25 which is recalled in the following equations:
where $\mathit{X},{\mathit{X}}_{k}\in {C}^{M\times N\times 3}$. The steering tensor $\mathit{S}$ and the responses of the background, $\mathit{C}$ and ${\mathit{C}}_{k}$, called clutter in STAP, are obtained from the model proposed in [22]. $\mathit{N}$ and ${\mathit{N}}_{k}$, which arise from the electrical components of the radar, are distributed as white Gaussian noise.
The steering tensor, $\mathit{S}$, is formed as follows:
where s_{HH}(θ,v) is the 2D steering vector [20], characterized by the angle of arrival (AOA) θ and the speed v of the target. α_{VV} and α_{VH} are two complex coefficients, which are assumed to be known. This is the case when the detection process concerns a particular scattering mechanism (surface, double-bounce, volume, …). The covariance tensor, denoted as $\mathit{R}\in {C}^{M\times N\times 3\times M\times N\times 3}$, of the two noises ($\mathit{C}+\mathit{N}$ and ${\mathit{C}}_{k}+{\mathit{N}}_{k}$) is given by
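For concreteness, one plausible reading of the steering tensor construction is sketched below: the 2D space-time steering vector s_{HH}(θ,v) is reshaped to M×N and stacked with its α_{VV}- and α_{VH}-scaled copies along the polarimetric dimension. The carrier frequency, element spacing, and PRF are illustrative parameters of our own, not the paper's.

```python
import numpy as np

def steering_tensor(M, N, theta, v, a_vv, a_vh, fc=1e9, d=0.15, fr=1e3):
    """Hedged sketch of the polarimetric steering tensor (M x N x 3):
    build the 2D space-time steering slice s_HH(theta, v) [20] and stack
    the three polarimetric channels, the VV and VH channels being scaled
    copies with the known coefficients alpha_VV and alpha_VH."""
    lam = 3e8 / fc
    fs = d * np.sin(theta) / lam                 # normalized spatial frequency
    fd = 2.0 * v / (lam * fr)                    # normalized Doppler frequency
    a = np.exp(2j * np.pi * fs * np.arange(N))   # spatial steering (N sensors)
    b = np.exp(2j * np.pi * fd * np.arange(M))   # temporal steering (M pulses)
    s_hh = np.outer(b, a)                        # M x N space-time slice
    return np.stack([s_hh, a_vv * s_hh, a_vh * s_hh], axis=-1)
```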
where σ^{2} is the power of the white noise. R_{pc} is built as follows:
where ${R}_{c}\in {C}^{\mathit{\text{MN}}\times \mathit{\text{MN}}}$ is the covariance matrix of the HH channel clutter, built as in the 2D case, which is known to have a LR structure [20]. γ_{VV} and γ_{VH} are two coefficients related to the nature of the ground, and ρ is the correlation coefficient between the channels HH and VV. Owing to the structure of R_{pc}, the low-rank structure of the clutter is preserved.
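The rank-preservation argument can be made concrete with a small numpy sketch. The Kronecker construction below, with a 3×3 polarimetric matrix built from γ_{VV}, γ_{VH}, and ρ and the HV/VH channel assumed uncorrelated with the co-polar channels, is our own assumption (up to a permutation of dimensions), used only to illustrate why the clutter stays low-rank.

```python
import numpy as np

def polarimetric_clutter_cov(Rc, g_vv, g_vh, rho):
    """Hedged sketch of R_pc: a 3x3 polarimetric matrix (channel powers
    g_vv, g_vh; HH/VV correlation rho) Kronecker-multiplied with the HH
    space-time clutter covariance Rc. Since
    rank(kron(A, B)) = rank(A) * rank(B), the low-rank structure of Rc
    is preserved in the full polarimetric covariance."""
    Ppol = np.array(
        [[1.0,                          rho * np.sqrt(g_vv), 0.0],
         [np.conj(rho) * np.sqrt(g_vv), g_vv,                0.0],
         [0.0,                          0.0,                 g_vh]],
        dtype=complex)
    return np.kron(Ppol, Rc)
```

With ρ=1 the 2×2 co-polar block of the polarimetric matrix is rank 1, so the total clutter rank drops, which is exactly the rank-deficiency discussed for r_{3} in the next subsection.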
In the following subsection, we discuss the choice of partitions in this particular context.
5.2 Choice of partition
For polarimetric STAP, we have P=3 and $\mathbb{A}=\{1,2,3\}$: B_{3}=5 LR filters and LR detectors are obtained. The different choices of partition are presented in Table 1. All filters and detectors are computed with the AUHOSVD. Nevertheless, the first two partitions are particular cases. When ${\mathbb{A}}_{1}=\{1,2,3\}$, the algorithms reduce to the vectorial ones, as mentioned in Section 4.3. When ${\mathbb{A}}_{1}=\left\{1\right\}$, ${\mathbb{A}}_{2}=\left\{2\right\}$, ${\mathbb{A}}_{3}=\left\{3\right\}$, we obtain the same LR filter and LR detector as those given by the HOSVD. The ranks relative to the LR filters and LR detectors are described in the following:
- The rank r_{1} is the spatial rank, and the rank r_{2} is the temporal rank. They depend on the radar parameters and, in most cases, they are not deficient.
- r_{3} may be deficient depending on the nature of the data and especially on the correlation coefficient ρ between the polarimetric channels.
- r_{12} is the same as in the 2D low-rank vector case and can be calculated by Brennan's rule [21].
- r_{123} is deficient and is linked to r_{3} and r_{12}.
- r_{13} and r_{23} may be deficient and depend on r_{1}, r_{2}, and r_{3}.
5.3 Performance criteria
In order to evaluate the performance of our LR filters, we evaluate the SINR loss defined as follows [20]:
where SINR_{out} is the SINR at the output of the LR tensor STAP filter and SINR_{max} is the SINR at the output of the optimal filter ${\mathit{W}}_{\text{opt}}$. SINR_{out} is maximum when $\mathit{W}={\mathit{W}}_{\text{opt}}=\mathit{\text{vec}}^{-1}\left(\mathit{\text{SqMat}}{\left(\mathit{R}\right)}^{-1}\mathit{\text{vec}}\left(\mathit{S}\right)\right)$. After some developments, the SINR loss is equal to [1]
Since an analytical formulation of the SINR loss for the tensorial approach is not available at present, it is evaluated using Monte Carlo simulations.
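The Monte Carlo evaluation of the SINR loss can be sketched as follows in the vectorized setting. The harness is ours: `filter_fn` is a placeholder for any of the adaptive filters, the secondary data are drawn as circular Gaussian vectors with covariance R, and the loss is averaged over trials.

```python
import numpy as np

def sinr_loss_mc(R, s, filter_fn, K, n_trials=200, seed=0):
    """Monte Carlo estimate of the SINR loss: draw K Gaussian secondary
    data vectors with covariance R, build the adaptive filter
    w = filter_fn(sample_covariance, s), and average
    SINR_out / SINR_max over the trials."""
    rng = np.random.default_rng(seed)
    m = R.shape[0]
    L = np.linalg.cholesky(R)
    sinr_max = (s.conj() @ np.linalg.solve(R, s)).real
    losses = []
    for _ in range(n_trials):
        G = rng.standard_normal((m, K)) + 1j * rng.standard_normal((m, K))
        Z = L @ G / np.sqrt(2)                 # secondary data, cov = R
        Rhat = Z @ Z.conj().T / K              # sample covariance matrix
        w = filter_fn(Rhat, s)
        sinr_out = np.abs(w.conj() @ s) ** 2 / (w.conj() @ R @ w).real
        losses.append(sinr_out / sinr_max)
    return float(np.mean(losses))
```

By construction the per-trial loss never exceeds 1, and it equals 1 when the filter coincides with the optimal one.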
In order to evaluate the performance of our LR detectors, we use the probability of false alarm (Pfa) and probability of detection (Pd):
where η is the detector threshold. Since there is no analytical formulation for Pfa and Pd (for the adaptive version) even in the vectorial case, Monte Carlo simulations are used to evaluate them.
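The Monte Carlo calibration of the threshold η can likewise be sketched generically. Both arguments are placeholders of our own: `detector` stands for any of the LR detectors and `null_sampler` for the corresponding H_{0}-only data generator.

```python
import numpy as np

def pfa_threshold(detector, null_sampler, pfa=1e-2, n_trials=10000, seed=0):
    """Monte Carlo calibration of the detector threshold eta: draw
    H0-only data, evaluate the detection statistic on each draw, and
    take the empirical (1 - Pfa) quantile as the threshold."""
    rng = np.random.default_rng(seed)
    stats = np.array([detector(null_sampler(rng)) for _ in range(n_trials)])
    return float(np.quantile(stats, 1.0 - pfa))
```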
5.4 Simulations
Parameters The simulations are performed with the following parameters. The target is characterized by an angle of arrival (AOA) of θ=0° and a speed of v=10 m s^{−1}, a case where the 2D STAP is known to be inefficient because the target is close to the clutter ridge. The radar receiver contains N=8 sensors processing M=8 pulses. The platform speed V is equal to 100 m s^{−1}. For the clutter, we consider two cases: ρ=1, i.e., the channels HH and VV are entirely correlated, and ρ=0.5. The SNR is equal to 45 dB and the clutter-to-noise ratio (CNR) to 40 dB. r_{1}, r_{2}, r_{3}, and r_{12} can be calculated from the radar configuration. r_{13} depends on the value of ρ. r_{123} and r_{23} are estimated from the singular values of the different unfoldings of $\mathit{R}$. The results are presented in Table 2. All Monte Carlo simulations are performed with N_{rea}=1,000 samples, except for the probability of false alarm, where N_{rea}=100.
Results on SINR losses Figures 1 and 2 show the SINR losses for each filter as a function of K. SINR losses are obtained from Monte Carlo simulations using Equation 43. On both figures, the SINR loss of the 2D STAP is plotted for comparison. The well-known result is obtained: the SINR loss reaches −3 dB when K=2r_{12}=30, and it tends to 0 dB as K increases. Similarly, the SINR loss of ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,2,3)}$ reaches −3 dB when K=2r_{123} (60 for ρ=1 and 90 for ρ=0.5). When ρ=1, all LR filters achieve reasonable performance since all ranks, except r_{1} and r_{2}, are deficient. ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1/2/3)}$, ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1/2,3)}$, and ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,3/2)}$, which can only be obtained by the AUHOSVD, outperform ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,2,3)}$ and the 2D STAP for a small number of secondary data. This situation is more realistic, since the assumption of homogeneity of the data no longer holds when K is too large. ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,2/3)}$ has poor performance in this scenario.
When ρ=0.5, ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,2/3)}$ outperforms ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,2,3)}$ and the 2D STAP regardless of the number of secondary data. This corresponds to a more realistic scenario, since the channels HH and VV are not entirely correlated. ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1/2/3)}$, ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1/2,3)}$, and ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,3/2)}$ do not have acceptable performance. This is explained by the fact that all ranks pertaining to these filters are full and no projection can be done as mentioned at the end of Section 3. These filters (for ρ=0.5) will not be studied in the rest of the simulations.
Figures 3 and 4 show the SINR loss as a function of the CNR for K=2r_{12}=30 secondary data. They show that our filters are more robust than the vectorial one for polarimetric STAP configuration.
Figures 5 and 6 show the SINR loss as a function of the target velocity for K=180. For both cases, the 2D STAP achieves the expected performance. For ρ=1, the difference in polarimetric properties between the target and the clutter is exploited by our filters, since r_{3} is deficient. When the target is in the clutter ridge, the SINR loss is higher (especially for ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1/2/3)}$, ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1/2,3)}$, and ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,3/2)}$) than for the 2D LR STAP filter. By contrast, when ρ=0.5, the 2D LR STAP filter outperforms ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,2/3)}$ and ${\widehat{\mathit{W}}}_{\mathit{\text{lr}}(1,2,3)}$ (in the context of large K), since r_{3} is full.
Results on Pfa and Pd The Pfa as a function of the threshold is presented in Figures 7 and 8. The probability of detection as a function of SNR is presented in Figures 9 and 10 for K=30. The thresholds are chosen in order to have a Pfa of 10^{−2} according to Figures 7 and 8. When ρ=1, ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1/2/3)}$, ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1/2,3)}$, and ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1,3/2)}$, which can only be obtained by the AUHOSVD, outperform ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1,2,3)}$ and the 2D STAP LR-NMF. For instance, Pd is equal to 90% when the SNR is equal to 15 dB for ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1/2/3)}$, ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1/2,3)}$, and ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1,3/2)}$; 20 dB for the 2D STAP LR-NMF; and 33 dB for ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1,2,3)}$. When ρ=0.5, ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1,2/3)}$ outperforms ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1,2,3)}$ and the 2D STAP LR-NMF. For instance, Pd is equal to 90% when the SNR is equal to 16 dB for ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1,2/3)}$, 20 dB for the 2D STAP LR-NMF, and 54 dB for ${\widehat{\Lambda}}_{\mathit{\text{lr}}(1,2,3)}$.
The results on Pd confirm the results on SINR loss concerning the most efficient partition for the two scenarios. In particular, it shows that the best results are provided by the filters and detectors which can only be obtained with the AUHOSVD.
6 Conclusion
In this paper, we introduced a new multilinear decomposition: the AUHOSVD. This decomposition generalizes the HOSVD and highlights the correlated dimensions in a multidimensional data set. We proved that the AUHOSVD retains the same properties as the HOSVD: orthogonality and the LR decomposition. We also derived LR filters and LR detectors based on the AUHOSVD for multidimensional data containing one LR structure contribution. Finally, we applied our new LR filters and LR detectors to polarimetric space-time adaptive processing (STAP), where the dimension of the problem is three and the contribution of the background is correlated in time and space. Simulations based on signal-to-interference-plus-noise ratio (SINR) losses, probability of detection (Pd), and probability of false alarm (Pfa) showed the interest of our approach: LR filters and LR detectors which can be obtained only from the AUHOSVD outperformed the vectorial approach and those obtained from the HOSVD in the general polarimetric physical model (where the channels HH and VV are not completely correlated). The main future work concerns the application of the LR filters and LR detectors developed from the AUHOSVD to the general MIMO-STAP system [23–26].
Endnotes
^{a} This new decomposition is similar to the block term decomposition introduced in [13, 32] and [33], which proposes to unify the HOSVD and the CP decomposition.
^{b} Using this assumption, a lowrank vector STAP filter can be derived based on the projector onto the subspace orthogonal to the clutter (see [10, 11, 34] for more details).
^{c} This definition of rank is a direct extension of the definition of the p-ranks.
Appendices
Appendix 1
Proof of Proposition 4.1
By analogy with the vector case [35], we derive the optimal filter ${\mathit{W}}_{\text{opt}}$, which maximizes the output SINR_{out}:
Then,
By the Cauchy-Schwarz inequality, (47) is maximum when $\mathit{\text{SqMat}}{\left(\mathit{R}\right)}^{\frac{1}{2}H}\mathit{\text{vec}}\left({\mathit{W}}_{\text{opt}}\right)=\mathit{\text{SqMat}}{\left(\mathit{R}\right)}^{-\frac{1}{2}}\mathit{\text{vec}}\left(\mathit{S}\right)$, i.e., $\mathit{\text{vec}}\left({\mathit{W}}_{\text{opt}}\right)=\mathit{\text{SqMat}}{\left(\mathit{R}\right)}^{-1}\mathit{\text{vec}}\left(\mathit{S}\right)$. We replace ${\mathit{W}}_{\text{opt}}$ in (46):
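The Cauchy-Schwarz argument can be checked numerically in the vectorized form: the filter vec(W_opt) = SqMat(R)^{-1} vec(S) attains the maximum SINR s^{H}R^{-1}s, and no other filter does better. The script below is a sanity check of ours on random Hermitian positive definite data, not part of the proof.

```python
import numpy as np

def sinr(w, s, R):
    """Output SINR of the vectorized filter w for steering vector s and
    noise covariance R, as in the maximization of Appendix 1."""
    return np.abs(np.vdot(w, s)) ** 2 / np.vdot(w, R @ w).real

# Random Hermitian positive definite covariance and steering vector.
rng = np.random.default_rng(0)
m = 8
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
R = A @ A.conj().T + np.eye(m)
s = rng.standard_normal(m) + 1j * rng.standard_normal(m)

w_opt = np.linalg.solve(R, s)        # vec(W_opt) = SqMat(R)^{-1} vec(S)
sinr_max = np.vdot(s, w_opt).real    # s^H R^{-1} s, the attained maximum
```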
Appendix 2
Proof of Proposition 4.2
We propose to derive the low-rank tensor filter as follows:

First, the covariance tensor $\mathit{R}$ is decomposed with the AUHOSVD:
$$\begin{array}{l}\mathit{R}={\mathit{K}}_{{\mathbb{A}}_{1}/\dots /{\mathbb{A}}_{2L}}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}^{\left({\mathbb{A}}_{1}\right)}\dots {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}^{\left({\mathbb{A}}_{L}\right)}\\ {\times}_{{\mathbb{A}}_{L+1}}{\mathbf{U}}^{\left({\mathbb{A}}_{1}\right)\ast}\dots {\times}_{{\mathbb{A}}_{2L}}{\mathbf{U}}^{\left({\mathbb{A}}_{L}\right)\ast}.\end{array}$$(49) 
The ranks ${r}_{{\mathbb{A}}_{1}},\dots ,{r}_{{\mathbb{A}}_{L}}$ $\left({r}_{{\mathbb{A}}_{l}}=\mathit{\text{rank}}\left({\left[\mathit{R}\right]}_{{\mathbb{A}}_{l}}\right)\right)$ are estimated.

Each ${\mathbf{U}}^{\left({\mathbb{A}}_{l}\right)}$ is truncated to obtain ${\mathbf{U}}_{0}^{\left({\mathbb{A}}_{l}\right)}=\left[{\mathbf{u}}_{{r}_{{\mathbb{A}}_{l}}+1}^{\left({\mathbb{A}}_{l}\right)},\dots ,{\mathbf{u}}_{{I}_{{\mathbb{A}}_{l}}}^{\left({\mathbb{A}}_{l}\right)}\right]$.

Proposition 3.2, with $\mathit{\mathscr{H}}=\mathit{X}$, ${\mathit{\mathscr{H}}}_{c}=\mathit{C}$ and ${\mathit{\mathscr{H}}}_{0}=\alpha \mathit{S}+\mathit{N}$ is applied in order to remove the LR contribution:
$$\mathit{X}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)H}\dots {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)H}\approx \alpha \mathit{S}+\mathit{N}.$$(50) 
The problem is then to filter the signal $\alpha \mathit{S}$, which is corrupted by a white noise $\mathit{N}$. The filter given by (27) is applied with $\mathit{R}={\mathit{I}}_{{I}_{1}\dots {I}_{P}}$:
$$y=\phantom{\rule{0.3em}{0ex}}\left<\mathit{S},\mathit{X}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)H}\dots {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)H}\phantom{\rule{0.3em}{0ex}}>\right.$$(51) 
Finally, the output of the filter is rewritten as:
$$\begin{array}{l}{\mathit{W}}_{\mathit{\text{lr}}({\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{P})}=\mathit{S}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)H}\dots \\ {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)H}\end{array}$$(52)$$\begin{array}{l}y=<{\mathit{W}}_{\mathit{\text{lr}}({\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{P})},\mathit{X}>.\end{array}$$(53)
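The projection-then-correlation steps above can be sketched in numpy. The helper below applies one projector U_{0}U_{0}^{H} per mode group via an unfold/multiply/fold cycle and then takes the Euclidean inner product with the steering tensor; the function name and the 0-based mode-group convention are ours.

```python
import numpy as np

def lr_tensor_filter_output(X, S, subspaces, mode_groups):
    """Sketch of Equations 50 to 53: project the data tensor X with one
    U0 U0^H per mode group A_l (A_l-mode product implemented as
    unfold along the grouped modes, multiply, fold back), then
    correlate with the steering tensor S via <S, X_projected>."""
    Xp = X
    for U0, modes in zip(subspaces, mode_groups):
        P = U0 @ U0.conj().T                      # projector for this group
        Xp_m = np.moveaxis(Xp, modes, tuple(range(len(modes))))
        shape = Xp_m.shape
        flat = Xp_m.reshape(P.shape[1], -1)       # unfold along A_l
        Xp_m = (P @ flat).reshape(shape)          # apply projector, fold back
        Xp = np.moveaxis(Xp_m, tuple(range(len(modes))), modes)
    return np.vdot(S, Xp)                         # <S, X x_A1 U0 U0^H ...>
```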
Appendix 3
Proof of Proposition 4.3
To prove Proposition 4.3, let us recall the hypothesis test:

Using Proposition 3.2, the data are preprocessed in order to remove the LR contribution. For any tensor $\mathit{T}$, we denote ${\mathit{T}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}=\mathit{T}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)H}\dots {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)H}$. The hypothesis test becomes
$$\left\{\begin{array}{l}{H}_{0}:{\mathit{X}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}={\mathit{N}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}},{\mathit{X}}_{k,{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}={\mathit{N}}_{k,{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\\ {H}_{1}:{\mathit{X}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}=\alpha {\mathit{S}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}+{\mathit{N}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}},\\ {\mathit{X}}_{k,{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}={\mathit{N}}_{k,{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\end{array}.\right.$$(55) 
Then, the operator vec is applied, which leads to
$$\phantom{\rule{13.0pt}{0ex}}\left\{\begin{array}{l}{H}_{0}:\mathit{\text{vec}}\left({\mathit{X}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)=\mathit{\text{vec}}\left({\mathit{N}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right),\\ \phantom{\rule{2em}{0ex}}\mathit{\text{vec}}\left({\mathit{X}}_{k,{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)=\mathit{\text{vec}}\left({\mathit{N}}_{k,{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)\\ {H}_{1}:\mathit{\text{vec}}\left({\mathit{X}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)=\mathrm{\alpha vec}\left({\mathit{S}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)+\mathit{\text{vec}}\left({\mathit{N}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right),\\ \phantom{\rule{2em}{0ex}}\mathit{\text{vec}}\left({\mathit{X}}_{k,{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)=\mathit{\text{vec}}\left({\mathit{N}}_{k,{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)\end{array}\right.$$(56)where $\mathit{\text{vec}}\left({\mathit{N}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right),\phantom{\rule{1em}{0ex}}\mathit{\text{vec}}\left({\mathit{N}}_{k,{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)\sim \mathcal{C}\mathcal{N}\left(\mathbf{0},{\sigma}^{2}{\mathbf{I}}_{{I}_{1}\dots {I}_{p}}\right)$.

The problem is then to detect a signal $\mathit{\text{vec}}\left({\mathit{S}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)$ corrupted by a white noise $\mathit{\text{vec}}\left({\mathit{N}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)$. Since α and σ are unknown, the adaptive normalized matched filter introduced in [30] can be applied:
$$\phantom{\rule{18.0pt}{0ex}}\begin{array}{l}{\Lambda}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\\ =\frac{{\left|<\mathit{\text{vec}}\left({\mathit{S}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right),\mathit{\text{vec}}\left({\mathit{X}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)>\right|}^{2}}{<\mathit{\text{vec}}\left({\mathit{S}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right),\mathit{\text{vec}}\left({\mathit{S}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)><\mathit{\text{vec}}\left({\mathit{X}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right),\mathit{\text{vec}}\left({\mathit{X}}_{{\mathbb{A}}_{1}\dots {\mathbb{A}}_{L}}\right)>}.\end{array}$$(57)
Finally, the proposition is proven by applying the operator vec^{−1}.
References
1. Boizard M, Ginolhac G, Pascal F, Forster P: A new tool for multidimensional low-rank STAP filter: cross HOSVDs. In Proceedings of EUSIPCO. Bucharest, Romania; September 2012.
2. Kolda T, Bader B: Tensor decompositions and applications. SIAM Rev. 2009, 51: 455-500. 10.1137/07070111X
3. Lathauwer LD, Moor BD, Vandewalle J: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 2000, 21(4): 1253-1278.
4. Harshman RA: Foundation of the PARAFAC procedure: model and conditions for an explanatory multimode factor analysis. UCLA Working Pap. Phon. 1970, 16: 1-84.
5. Sidiropoulos ND, Bro R, Giannakis GB: Parallel factor analysis in sensor array processing. IEEE Trans. Signal Process. 2000, 48(8): 2377-2388. 10.1109/78.852018
6. Sidiropoulos ND, Bro R, Giannakis GB: Blind PARAFAC receivers for DS-CDMA systems. IEEE Trans. Signal Process. 2000, 48(3): 810-823. 10.1109/78.824675
7. de Almeida ALF, Favier G, Mota JCM: Constrained Tucker-3 model for blind beamforming. Elsevier Signal Process. 2009, 89: 1240-1244. 10.1016/j.sigpro.2008.11.016
8. Favier G, da Costa MN, de Almeida ALF, Romano JMT: Tensor space-time (TST) coding for MIMO wireless communication systems. Elsevier Signal Process. 2012, 92: 1079-1092. 10.1016/j.sigpro.2011.10.021
9. Tucker LR: Some mathematical notes on three-mode factor analysis. Psychometrika 1966, 31: 279-311. 10.1007/BF02289464
10. Kirsteins I, Tufts D: Adaptive detection using a low rank approximation to a data matrix. IEEE Trans. Aero. Elec. Syst. 1994, 30: 55-67. 10.1109/7.250406
11. Haimovich A: Asymptotic distribution of the conditional signal-to-noise ratio in an eigenanalysis-based adaptive array. IEEE Trans. Aero. Elec. Syst. 1997, 33: 988-997.
12. Comon P: Tensors: a brief introduction. IEEE Signal Process. Mag. 2014, 31(3): 44-53.
13. Lathauwer LD: Decompositions of a higher-order tensor in block terms. Part I: lemmas for partitioned matrices. SIAM J. Matrix Anal. Appl. 2008, 30(3): 1022-1032. 10.1137/060661685
14. Muti D, Bourennane S: Multidimensional filtering based on a tensor approach. Elsevier Signal Process. 2005, 85: 2338-2353. 10.1016/j.sigpro.2004.11.029
15. Bihan NL, Ginolhac G: Three-mode data set analysis using higher order subspace method: application to sonar and seismo-acoustic signal processing. Elsevier Signal Process. 2004, 84(5): 919-942. 10.1016/j.sigpro.2004.02.003
16. Haardt M, Roemer F, Galdo GD: Higher-order SVD-based subspace estimation to improve the parameter estimation accuracy in multidimensional harmonic retrieval problems. IEEE Trans. Signal Process. 2008, 56(7): 3198-3213.
17. Lathauwer LD, Moor BD, Vandewalle J: Independent component analysis and (simultaneous) third-order tensor diagonalization. IEEE Trans. Signal Process. 2001, 49: 2262-2271. 10.1109/78.950782
18. Vasilescu MAO, Terzopoulos D: Multilinear subspace analysis of image ensembles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2003.
19. Salmi J, Richter A, Koivunen V: Sequential unfolding SVD for tensors with applications in array signal processing. IEEE Trans. Signal Process. 2009, 57(12): 4719-4733.
20. Ward J: Space-time adaptive processing for airborne radar. Technical report, Lincoln Lab., MIT, Lexington, Mass., USA, December 1994.
21. Brennan LE, Staudaher FM: Subclutter visibility demonstration. Technical report RL-TR-92-21, Adaptive Sensors Incorporated, March 1992.
22. Showman G, Melvin W, Belenkii M: Performance evaluation of two polarimetric STAP architectures. In Proceedings of the IEEE International Radar Conference, May 2003, 59-65.
23. Mecca VF, Ramakrishnan D, Krolik JL: MIMO radar space-time adaptive processing for multipath clutter mitigation. In Fourth IEEE Workshop on Sensor Array and Multichannel Processing, July 2006, 249-253.
24. Chen CY, Vaidyanathan PP: A subspace method for MIMO radar space-time adaptive processing. In Proceedings of IEEE ICASSP 2007. Honolulu, Hawaii, USA; 2007: 925-928.
25. Chen CY, Vaidyanathan PP: MIMO radar space-time adaptive processing using prolate spheroidal wave functions. IEEE Trans. Signal Process. 2008, 56(2): 623-635.
26. Fa R, de Lamare RC, Clarke P: Reduced-rank STAP for MIMO radar based on joint iterative optimization of knowledge-aided adaptive filters. In Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, 2009, 496-500.
27. Lathauwer LD, Moor BD, Vandewalle J: On the best rank-1 and rank-(r_{1}, r_{2},..., r_{n}) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 2000, 21(4): 1324-1342. 10.1137/S0895479898346995
28. Miron S, Bihan NL, Mars J: Vector-sensor MUSIC for polarized seismic sources localization. EURASIP J. Adv. Signal Process. 2005, 2005: 74-84. 10.1155/ASP.2005.74
29. Reed IS, Mallett JD, Brennan LE: Rapid convergence rate in adaptive arrays. IEEE Trans. Aero. Elec. Syst. 1974, AES-10(6): 853-863.
30. Scharf LL, Friedlander B: Matched subspace detectors. IEEE Trans. Signal Process. 1994, 42(8): 2146-2157. 10.1109/78.301849
31. Rangaswamy M, Lin FC, Gerlach KR: Robust adaptive signal processing methods for heterogeneous radar clutter scenarios. Elsevier Signal Process. 2004, 84.
32. Lathauwer LD: Decompositions of a higher-order tensor in block terms. Part II: definitions and uniqueness. SIAM J. Matrix Anal. Appl. 2008, 30(3): 1033-1066. 10.1137/070690729
33. Lathauwer LD, Nion D: Decompositions of a higher-order tensor in block terms. Part III: alternating least squares algorithms. SIAM J. Matrix Anal. Appl. 2008, 30(3): 1067-1083. 10.1137/070690730
34. Ginolhac G, Forster P, Pascal F, Ovarlez JP: Performance of two low-rank STAP filters in a heterogeneous noise. IEEE Trans. Signal Process. 2013, 61: 57-61.
35. Brennan LE, Reed LS: Theory of adaptive radar. IEEE Trans. Aero. Elec. Syst. 1973, 9(2): 237-252.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Boizard, M., Ginolhac, G., Pascal, F. et al. Lowrank filter and detector for multidimensional data based on an alternative unfolding HOSVD: application to polarimetric STAP. EURASIP J. Adv. Signal Process. 2014, 119 (2014). https://doi.org/10.1186/168761802014119
Keywords
 Multilinear algebra
 HOSVD
 Low-rank approximation
 STAP
 Low-rank filter
 Low-rank detector