Due to Proposition 2.1, it is possible to design LR filters based on the HOSVD. This approach fails when all the ranks are full (i.e., r_{p} = I_{p}, p = 1, \dots, P), since no projection can be performed. However, the data may still have a LR structure. This is the case for correlated data, where one or more ranks relative to a group of dimensions are deficient. Tensor decompositions able to exploit this kind of structure have not yet been put forward. To fill this gap, we introduce a new tool able to extract this kind of information. This section contains the main contribution of this paper: the derivation of the AUHOSVD and its principal properties.
3.1 Generalization of standard operators
Notation of indices In order to handle correlated information, we introduce a new notation for the indices of a tensor. We consider \mathscr{H} \in \mathbb{C}^{I_1 \times \dots \times I_P}, a P-th-order tensor. We denote by \mathbb{A} = \{1, \dots, P\} the set of dimensions and by \mathbb{A}_1, \dots, \mathbb{A}_L a collection of L subsets which define a partition of \mathbb{A}. In other words, \mathbb{A}_1, \dots, \mathbb{A}_L satisfy the following conditions: \mathbb{A}_1 \cup \dots \cup \mathbb{A}_L = \mathbb{A} and \mathbb{A}_i \cap \mathbb{A}_j = \emptyset for i \neq j.
Moreover, \mathbb{C}^{I_1 \cdots I_P} is denoted \mathbb{C}^{I_\mathbb{A}}. For example, when \mathbb{A}_1 = \{1,2\} and \mathbb{A}_2 = \{3,4\}, \mathbb{C}^{I_{\mathbb{A}_1} \times I_{\mathbb{A}_2}} means \mathbb{C}^{I_1 I_2 \times I_3 I_4}.
A generalization of unfolding in matrices In order to build our new decomposition, we need a generalized unfolding, adapted from [2]. This operator unfolds a tensor into a matrix whose row dimension may be any combination \mathbb{A}_l of the tensor dimensions. It is denoted [\,.\,]_{\mathbb{A}_l}, and it transforms \mathscr{H} into a matrix [\mathscr{H}]_{\mathbb{A}_l} \in \mathbb{C}^{I_{\mathbb{A}_l} \times I_{\mathbb{A} \setminus \mathbb{A}_l}}.
A new unfolding in tensors We denote by Reshape the operator which transforms the P-th-order tensor \mathscr{H} into an L-th-order tensor \text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \in \mathbb{C}^{I_{\mathbb{A}_1} \times \dots \times I_{\mathbb{A}_L}}, and by \text{Reshape}^{-1} the inverse operator.
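These operators are straightforward to prototype with standard array manipulations. The sketch below (in numpy) is a possible implementation of the \mathbb{A}_l-unfolding, Reshape, and Reshape^{-1}; the ordering of the grouped indices inside each merged mode is an assumption of this sketch and may differ from the paper's exact convention.

```python
import numpy as np

def unfold(H, Al):
    """[H]_{A_l}: rows indexed by the grouped modes A_l, columns by the
    remaining modes (one possible ordering convention, assumed here)."""
    rest = [p for p in range(H.ndim) if p not in Al]
    rows = int(np.prod([H.shape[p] for p in Al]))
    return H.transpose(list(Al) + rest).reshape(rows, -1)

def reshape_op(H, partition):
    """Reshape(H, A_1, ..., A_L): merge each group A_l into a single mode."""
    perm = [p for Al in partition for p in Al]
    dims = [int(np.prod([H.shape[p] for p in Al])) for Al in partition]
    return H.transpose(perm).reshape(dims)

def reshape_inv(T, partition, shape):
    """Reshape^{-1}: split the merged modes and restore the original order."""
    perm = [p for Al in partition for p in Al]
    return T.reshape([shape[p] for p in perm]).transpose(np.argsort(perm))

H = np.random.randn(2, 3, 4, 5)              # 4th-order tensor
T = reshape_op(H, [[0, 1], [2, 3]])          # A1={1,2}, A2={3,4} (0-based)
print(T.shape, np.allclose(reshape_inv(T, [[0, 1], [2, 3]], H.shape), H))
# (6, 20) True
```

With these conventions, the identity [H]_{A_l} = [Reshape(H, A_1, ..., A_L)]_l used later in the proofs holds by construction.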
A new tensor product The n-mode product multiplies a tensor by a matrix along one dimension. We propose to extend the n-mode product to multiply a tensor by a matrix along several dimensions, combined in \mathbb{A}_l. Let \mathbf{D} \in \mathbb{C}^{I_{\mathbb{A}_l} \times I_{\mathbb{A}_l}} be a square matrix. This new product, called the multimode product, is defined as
\mathcal{B} = \mathscr{H} \times_{\mathbb{A}_l} \mathbf{D} \iff [\mathcal{B}]_{\mathbb{A}_l} = \mathbf{D}\,[\mathscr{H}]_{\mathbb{A}_l}.
(11)
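Equation 11 translates directly into code: unfold along the grouped modes, left-multiply, and fold back. The sketch below assumes the same (hypothetical) index-ordering convention as before, and checks that for a singleton group the multimode product reduces to the classical n-mode product.

```python
import numpy as np

def multimode(H, D, Al):
    """Multimode product H x_{A_l} D of Equation 11: unfold along the
    grouped modes A_l, left-multiply by D, fold back (sketch; the
    ordering convention inside the unfolding is an assumption)."""
    rest = [p for p in range(H.ndim) if p not in Al]
    perm = list(Al) + rest
    shp = [H.shape[p] for p in perm]
    M = H.transpose(perm).reshape(int(np.prod([H.shape[p] for p in Al])), -1)
    return (D @ M).reshape(shp).transpose(np.argsort(perm))

rng = np.random.default_rng(0)
H = rng.standard_normal((2, 3, 4))
D = rng.standard_normal((3, 3))

# with a singleton group A_l = {2} (0-based mode 1), the multimode product
# coincides with the classical 2-mode product
B = multimode(H, D, [1])
print(np.allclose(B, np.einsum('jm,imk->ijk', D, H)))  # True
```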
The following proposition shows the link between the multimode product and the n-mode product.
Proposition 3.1 (Link between n-mode product and multimode product).
Let \mathscr{H} \in \mathbb{C}^{I_1 \times \dots \times I_P} be a P-th-order tensor, \mathbb{A}_1, \dots, \mathbb{A}_L a partition of \mathbb{A}, and \mathbf{D} \in \mathbb{C}^{I_{\mathbb{A}_l} \times I_{\mathbb{A}_l}} a square matrix. Then, the following equality holds:
\text{Reshape}(\mathscr{H} \times_{\mathbb{A}_l} \mathbf{D}, \mathbb{A}_1, \dots, \mathbb{A}_L) = \text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_l \mathbf{D}
(12)
Proof 3.1. The proof of Proposition 3.1 relies on the following straightforward result:
\forall l \in [1, L],\ [\mathscr{H}]_{\mathbb{A}_l} = [\text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l.
In particular, [\mathcal{B}]_{\mathbb{A}_l} = [\text{Reshape}(\mathcal{B}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l and [\mathscr{H}]_{\mathbb{A}_l} = [\text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l. Inserting these two identities into (11), we obtain
[\text{Reshape}(\mathcal{B}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l = \mathbf{D}\,[\text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l.
(13)
From Equation 2, Equation 13 is equivalent to
\text{Reshape}(\mathcal{B}, \mathbb{A}_1, \dots, \mathbb{A}_L) = \text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_l \mathbf{D}.
Finally, one has
\text{Reshape}(\mathscr{H} \times_{\mathbb{A}_l} \mathbf{D}, \mathbb{A}_1, \dots, \mathbb{A}_L) = \text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_l \mathbf{D}.
Remark Thanks to the previous proposition and the commutativity of the n-mode product, the multimode product is also commutative: products along disjoint groups of dimensions may be applied in any order.
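This commutativity is easy to verify numerically. The snippet below (a sketch, reusing the hypothetical unfolding convention introduced earlier) applies two multimode products in both orders and compares the results.

```python
import numpy as np

def multimode(H, D, Al):
    """Multimode product of Equation 11 (assumed index-ordering convention)."""
    rest = [p for p in range(H.ndim) if p not in Al]
    perm = list(Al) + rest
    shp = [H.shape[p] for p in perm]
    M = H.transpose(perm).reshape(int(np.prod([H.shape[p] for p in Al])), -1)
    return (D @ M).reshape(shp).transpose(np.argsort(perm))

rng = np.random.default_rng(1)
H = rng.standard_normal((2, 3, 4))
D1 = rng.standard_normal((8, 8))   # acts on A1 = {1,3} (0-based {0,2}), I1*I3 = 8
D2 = rng.standard_normal((3, 3))   # acts on A2 = {2}   (0-based {1})

lhs = multimode(multimode(H, D1, [0, 2]), D2, [1])
rhs = multimode(multimode(H, D2, [1]), D1, [0, 2])
print(np.allclose(lhs, rhs))  # True
```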
3.2 AUHOSVD
With the new tools presented in the previous subsection, we are now able to introduce the AUHOSVD. This is the purpose of the following theorem.
Theorem 3.1(Alternative unfolding HOSVD).
Let \mathscr{H} \in \mathbb{C}^{I_1 \times \dots \times I_P} and let \mathbb{A}_1, \dots, \mathbb{A}_L be a partition of \mathbb{A}. Then, \mathscr{H} may be decomposed as follows:
\mathit{\mathscr{H}}={\mathit{K}}_{{\mathbb{A}}_{1}/\dots /{\mathbb{A}}_{L}}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}^{\left({\mathbb{A}}_{1}\right)}\dots {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}^{\left({\mathbb{A}}_{L}\right)},
(14)
where

\forall l \in [1, L], \mathbf{U}^{(\mathbb{A}_l)} \in \mathbb{C}^{I_{\mathbb{A}_l} \times I_{\mathbb{A}_l}} is unitary. The matrix \mathbf{U}^{(\mathbb{A}_l)} is given by the singular value decomposition of the \mathbb{A}_l-dimension unfolding, [\mathscr{H}]_{\mathbb{A}_l} = \mathbf{U}^{(\mathbb{A}_l)} \mathbf{\Sigma}^{(\mathbb{A}_l)} \mathbf{V}^{(\mathbb{A}_l)H}.

\mathcal{K}_{\mathbb{A}_1/\dots/\mathbb{A}_L} \in \mathbb{C}^{I_1 \times \dots \times I_P} is the core tensor. It has the same properties as the HOSVD core tensor.
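Theorem 3.1 suggests a direct computation: one SVD per group \mathbb{A}_l, then multimode products with the conjugate-transposed factors to obtain the core. The sketch below implements this recipe under the same assumed index-ordering convention as before, and verifies the reconstruction for the third-order example with \mathbb{A}_1 = \{1,3\}, \mathbb{A}_2 = \{2\}.

```python
import numpy as np

def unfold(H, Al):
    rest = [p for p in range(H.ndim) if p not in Al]
    rows = int(np.prod([H.shape[p] for p in Al]))
    return H.transpose(list(Al) + rest).reshape(rows, -1)

def multimode(H, D, Al):
    rest = [p for p in range(H.ndim) if p not in Al]
    perm = list(Al) + rest
    shp = [H.shape[p] for p in perm]
    return (D @ unfold(H, Al)).reshape(shp).transpose(np.argsort(perm))

def auhosvd(H, partition):
    """AUHOSVD sketch (Theorem 3.1): U^{(A_l)} from the SVD of [H]_{A_l},
    core K = H x_{A_1} U^{(A_1)H} ... x_{A_L} U^{(A_L)H}."""
    Us = [np.linalg.svd(unfold(H, Al))[0] for Al in partition]
    K = H
    for U, Al in zip(Us, partition):
        K = multimode(K, U.conj().T, Al)
    return K, Us

rng = np.random.default_rng(0)
H = rng.standard_normal((2, 3, 4))
partition = [[0, 2], [1]]            # 0-based version of A1 = {1,3}, A2 = {2}
K, Us = auhosvd(H, partition)

# reconstruction H = K x_{A1} U^{(A1)} x_{A2} U^{(A2)}, Equation 14
R = K
for U, Al in zip(Us, partition):
    R = multimode(R, U, Al)
print(np.allclose(R, H))  # True
```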
Notice that there are several ways to decompose a tensor with the AUHOSVD. Each choice of \mathbb{A}_1, \dots, \mathbb{A}_L gives a different decomposition. For a P-th-order tensor, the number of different AUHOSVDs is given by the Bell number, B_P:
\begin{array}{c} B_0 = B_1 = 1 \\ B_{P+1} = \sum_{k=0}^{P} \binom{P}{k} B_k. \end{array}
The AUHOSVD associated to the partition {\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L} is denoted {\text{AUHOSVD}}_{{\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L}}.
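As a sanity check on this count, the Bell numbers are easily generated from the recurrence above; for instance, a fourth-order tensor admits B_4 = 15 distinct partitions, hence 15 different AUHOSVDs.

```python
from math import comb

def bell(n):
    """Bell numbers B_0..B_n via B_{P+1} = sum_{k=0}^{P} C(P,k) B_k."""
    B = [1]                            # B_0 = 1
    for P in range(n):
        B.append(sum(comb(P, k) * B[k] for k in range(P + 1)))
    return B

print(bell(6))  # [1, 1, 2, 5, 15, 52, 203]
```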
Proof 3.2.
First, let us consider \mathbb{A}_1, \dots, \mathbb{A}_L, a partition of \mathbb{A}. \text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) is an L-th-order tensor and may be decomposed using the HOSVD:
\begin{array}{l}\mathit{\text{Reshape}}(\mathit{\mathscr{H}},{\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L})=\mathit{K}{\times}_{1}{\mathbf{U}}^{\left(1\right)}\dots {\times}_{L}{\mathbf{U}}^{\left(L\right)},\end{array}
(15)
where the matrix \mathbf{U}^{(l)} is given by the singular value decomposition of the l-dimension unfolding, [\text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l = [\mathscr{H}]_{\mathbb{A}_l} = \mathbf{U}^{(l)} \mathbf{\Sigma}^{(l)} \mathbf{V}^{(l)H}.
Since the matrices U^{(l)}’s are unitary, Equation 15 is equivalent to
\begin{array}{l}\mathit{\text{Reshape}}(\mathit{\mathscr{H}},{\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L}){\times}_{1}{\mathbf{U}}^{\left(1\right)H}\dots {\times}_{L}{\mathbf{U}}^{\left(L\right)H}=\mathit{K}.\end{array}
(16)
Then, using proposition 3.1, the following equality is true:
\text{Reshape}(\mathscr{H}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_1 \mathbf{U}^{(1)H} \dots \times_L \mathbf{U}^{(L)H} = \text{Reshape}\left(\mathscr{H} \times_{\mathbb{A}_1} \mathbf{U}^{(1)H} \dots \times_{\mathbb{A}_L} \mathbf{U}^{(L)H}, \mathbb{A}_1, \dots, \mathbb{A}_L\right),
(17)
which leads to
\begin{array}{l}\phantom{\rule{8.0pt}{0ex}}\mathit{\text{Reshape}}\left(\mathit{\mathscr{H}}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}^{\left(1\right)H}\dots {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}^{\left(L\right)H},{\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L}\right)=\mathit{K}.\end{array}
(18)
Finally, the operator \text{Reshape}^{-1} is applied:
\mathscr{H} = \text{Reshape}^{-1}(\mathcal{K}, \mathbb{A}_1, \dots, \mathbb{A}_L) \times_{\mathbb{A}_1} \mathbf{U}^{(1)} \dots \times_{\mathbb{A}_L} \mathbf{U}^{(L)},
(19)
which concludes the proof.
Example For a third-order tensor \mathscr{H} \in \mathbb{C}^{I_1 \times I_2 \times I_3} with \mathbb{A}_1 = \{1,3\} and \mathbb{A}_2 = \{2\}, the AUHOSVD is written as follows:
\begin{array}{l}\mathit{\mathscr{H}}={\mathit{K}}_{{\mathbb{A}}_{1}/{\mathbb{A}}_{2}}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}^{\left({\mathbb{A}}_{1}\right)}{\times}_{{\mathbb{A}}_{2}}{\mathbf{U}}^{\left({\mathbb{A}}_{2}\right)},\end{array}
(20)
with {\mathit{K}}_{{\mathbb{A}}_{1}/{\mathbb{A}}_{2}}\in {C}^{{I}_{1}\times {I}_{2}\times {I}_{3}}, {\mathbf{U}}^{\left({\mathbb{A}}_{1}\right)}\in {C}^{{I}_{1}{I}_{3}\times {I}_{1}{I}_{3}} and {\mathbf{U}}^{\left({\mathbb{A}}_{2}\right)}\in {C}^{{I}_{2}\times {I}_{2}}.
Remark Let \mathscr{H} \in \mathbb{C}^{I_1 \times \dots \times I_P \times I_1 \times \dots \times I_P} be a 2P-th-order Hermitian tensor. We consider 2L subsets of \{I_1, \dots, I_P, I_1, \dots, I_P\} such that

\mathbb{A}_1, \dots, \mathbb{A}_L and \mathbb{A}_{L+1}, \dots, \mathbb{A}_{2L} are two partitions of \{I_1, \dots, I_P\};

\forall l \in [1, L], \mathbb{A}_l = \mathbb{A}_{l+L}.

Under these conditions, the AUHOSVD of \mathscr{H} is written:
\begin{array}{l}\mathit{\mathscr{H}}=\phantom{\rule{2.56804pt}{0ex}}{\mathit{K}}_{{\mathbb{A}}_{1}/\dots /{\mathbb{A}}_{2L}}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}^{\left({\mathbb{A}}_{1}\right)}\dots {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}^{\left({\mathbb{A}}_{L}\right)}\phantom{\rule{2em}{0ex}}\\ {\times}_{{\mathbb{A}}_{L+1}}{\mathbf{U}}^{\left({\mathbb{A}}_{1}\right)\ast}\dots {\times}_{{\mathbb{A}}_{2L}}{\mathbf{U}}^{\left({\mathbb{A}}_{L}\right)\ast}.\end{array}
As discussed previously, the main motivation for introducing the AUHOSVD is to extract the correlated information when performing the low-rank decomposition. This is the purpose of the following proposition.
Proposition 3.2 (Low-rank approximation).
Let \mathscr{H}, \mathscr{H}_c, \mathscr{H}_0 be three P-th-order tensors such that
\mathit{\mathscr{H}}={\mathit{\mathscr{H}}}_{c}+{\mathit{\mathscr{H}}}_{0},
(21)
where \mathscr{H}_c is a (r_{\mathbb{A}_1}, \dots, r_{\mathbb{A}_L}) low-rank tensor^c (r_{\mathbb{A}_l} = \text{rank}([\mathscr{H}_c]_{\mathbb{A}_l})). Then, \mathscr{H}_0 is approximated by
{\mathit{\mathscr{H}}}_{0}\approx \mathit{\mathscr{H}}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)H}\dots {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)H}
(22)
where {\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)}, …, {\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)} minimize the following criterion:
\left(\mathbf{U}_0^{(\mathbb{A}_1)}, \dots, \mathbf{U}_0^{(\mathbb{A}_L)}\right) = \text{argmin} \left\| \mathscr{H}_0 - \mathscr{H} \times_{\mathbb{A}_1} \mathbf{U}_0^{(\mathbb{A}_1)} \mathbf{U}_0^{(\mathbb{A}_1)H} \dots \times_{\mathbb{A}_L} \mathbf{U}_0^{(\mathbb{A}_L)} \mathbf{U}_0^{(\mathbb{A}_L)H} \right\|^2.
(23)
In this paper, the matrices \mathbf{U}_0^{(\mathbb{A}_l)}'s are given by truncation of the matrices \mathbf{U}^{(\mathbb{A}_l)}'s obtained from the \text{AUHOSVD}_{\mathbb{A}_1, \dots, \mathbb{A}_L} of \mathscr{H}: \mathbf{U}_0^{(\mathbb{A}_l)} = [\mathbf{u}_{r_{\mathbb{A}_l}+1}^{(\mathbb{A}_l)} \dots \mathbf{u}_{I_{\mathbb{A}_l}}^{(\mathbb{A}_l)}]. This solution is not optimal in the least-squares sense but is easy to implement. However, thanks to the strong link with the HOSVD, it should be a good approximation. That is why iterative algorithms for the AUHOSVD will not be investigated in this paper.
Proof 3.3.
By applying Reshape to Equation 21, one obtains
\begin{array}{l}\mathit{\text{Reshape}}(\mathit{\mathscr{H}},{\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L})\\ =\mathit{\text{Reshape}}({\mathit{\mathscr{H}}}_{c},{\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L})\\ \phantom{\rule{1em}{0ex}}+\mathit{\text{Reshape}}({\mathit{\mathscr{H}}}_{0},{\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L}).\end{array}
Then, \text{Reshape}(\mathscr{H}_c, \mathbb{A}_1, \dots, \mathbb{A}_L) is a (r_{\mathbb{A}_1}, \dots, r_{\mathbb{A}_L}) low-rank tensor (where r_{\mathbb{A}_l} = \text{rank}([\text{Reshape}(\mathscr{H}_c, \mathbb{A}_1, \dots, \mathbb{A}_L)]_l)). Proposition 2.1 can now be applied, and this leads to
\begin{array}{l}\mathit{\text{Reshape}}({\mathit{\mathscr{H}}}_{0},{\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L})\\ \approx \mathit{\text{Reshape}}(\mathit{\mathscr{H}},{\mathbb{A}}_{1},\dots ,{\mathbb{A}}_{L})\\ \phantom{\rule{1em}{0ex}}{\times}_{1}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)H}\dots {\times}_{L}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)H}.\end{array}
Finally, applying \text{Reshape}^{-1} to the previous equation concludes the proof:
{\mathit{\mathscr{H}}}_{0}\approx \mathit{\mathscr{H}}{\times}_{{\mathbb{A}}_{1}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{1}\right)H}\dots {\times}_{{\mathbb{A}}_{L}}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)}{\mathbf{U}}_{0}^{\left({\mathbb{A}}_{L}\right)H}.
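The truncation-based construction of Proposition 3.2 can be checked numerically on a toy case. The sketch below (same assumed conventions as before) builds a rank-(1,1) tensor \mathscr{H}_c and verifies that the projectors \mathbf{U}_0 \mathbf{U}_0^H formed from the trailing singular vectors annihilate it; on real data \mathscr{H} = \mathscr{H}_c + \mathscr{H}_0, the bases would be taken from the AUHOSVD of \mathscr{H} itself and the projection would only approximate \mathscr{H}_0.

```python
import numpy as np

def unfold(H, Al):
    rest = [p for p in range(H.ndim) if p not in Al]
    rows = int(np.prod([H.shape[p] for p in Al]))
    return H.transpose(list(Al) + rest).reshape(rows, -1)

def multimode(H, D, Al):
    rest = [p for p in range(H.ndim) if p not in Al]
    perm = list(Al) + rest
    shp = [H.shape[p] for p in perm]
    return (D @ unfold(H, Al)).reshape(shp).transpose(np.argsort(perm))

rng = np.random.default_rng(0)
I1, I2, I3 = 2, 3, 4
A1, A2 = [0, 2], [1]                   # 0-based A1 = {1,3}, A2 = {2}

# a (1, 1) low-rank tensor H_c: its A1-unfolding is the rank-1 matrix u v^T
u = rng.standard_normal(I1 * I3)
v = rng.standard_normal(I2)
Hc = np.outer(u, v).reshape(I1, I3, I2).transpose(0, 2, 1)

# project onto the complement of the dominant subspaces: the truncated
# bases U_0 (singular vectors r_{A_l}+1, ..., I_{A_l}) annihilate H_c
proj = Hc
for Al, r in [(A1, 1), (A2, 1)]:
    U = np.linalg.svd(unfold(Hc, Al))[0]
    U0 = U[:, r:]
    proj = multimode(proj, U0 @ U0.conj().T, Al)
print(np.linalg.norm(proj))            # ~0: the low-rank part is removed
```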
Discussion on choice of partition and complexity As mentioned previously, the total number of AUHOSVDs for a P-th-order tensor is equal to B_P. Since this number can become large, it is important to have a procedure to find good partitions for the AUHOSVD computation. We propose a two-step procedure. Since the AUHOSVD has been developed for LR reduction, the most important criterion is to choose the partitions which emphasize deficient ranks. For some applications, it is possible to use a priori knowledge to select some partitions, as will be shown in Section 5 for polarimetric STAP. Next, another step is needed if several partitions induce an AUHOSVD with a deficient rank. In this case, we propose to maximize a criterion (see Section 5.3 for examples) over the remaining partitions.
Concerning the complexity, the number of operations necessary to compute the HOSVD of a P-th-order tensor is equal to 4\left(\prod_p I_p\right)\left(\sum_p I_p\right) [3]. Similarly, the complexity of the AUHOSVD is equal to 4\left(\prod_p I_p\right)\left(\sum_l I_{\mathbb{A}_l}\right).
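The two operation counts are easy to compare on a hypothetical example size (the dimensions below are illustrative, not taken from the paper). Since each I_{\mathbb{A}_l} is a product of the grouped dimensions, grouping enlarges the unfoldings, so the AUHOSVD is typically at least as costly as the HOSVD on the same tensor.

```python
import numpy as np

Is = [8, 4, 8, 4]                      # illustrative dimensions I_1, ..., I_4
prod_I = int(np.prod(Is))

hosvd_ops = 4 * prod_I * sum(Is)       # 4 (prod_p I_p)(sum_p I_p)
# AUHOSVD with A1 = {1,2}, A2 = {3,4}: I_{A1} = I1*I2, I_{A2} = I3*I4
auhosvd_ops = 4 * prod_I * (Is[0] * Is[1] + Is[2] * Is[3])
print(hosvd_ops, auhosvd_ops)          # 98304 262144
```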