Overview of Constrained PARAFAC Models

In this paper, we present an overview of constrained PARAFAC models, where the constraints model linear dependencies among columns of the factor matrices of the tensor decomposition, or alternatively, the pattern of interactions between different modes of the tensor, which are captured by the equivalent core tensor. Some tensor prerequisites, with a particular emphasis on mode combination using Kronecker products of canonical vectors, which makes matricization operations easier, are first introduced. This Kronecker-product-based approach is also formulated in terms of the index notation, which provides an original and concise formalism for both matricizing tensors and writing tensor models. Then, after a brief reminder of PARAFAC and Tucker models, two families of constrained tensor models, the so-called PARALIND/CONFAC and PARATUCK models, are described in a unified framework, for $N$th-order tensors. New tensor models, called nested Tucker models and block PARALIND/CONFAC models, are also introduced. A link between PARATUCK models and constrained PARAFAC models is then established. Finally, new uniqueness properties of PARATUCK models are deduced from sufficient conditions for essential uniqueness of their associated constrained PARAFAC models.


I. INTRODUCTION
Tensor calculus was introduced in differential geometry at the end of the 19th century, and tensor analysis was then developed in the context of Einstein's theory of general relativity, with the introduction of index notation, the so-called Einstein summation convention, at the beginning of the 20th century, which makes it possible to simplify and shorten physics equations involving tensors. Index notation is also useful for simplifying multivariate statistical calculations, particularly those involving cumulant tensors [1]. Generally speaking, tensors are used in physics and differential geometry for characterizing the properties of a physical system, representing fundamental laws of physics, and defining geometrical objects whose components are functions. When these functions are defined over a continuum of points of a mathematical space, the tensor forms what is called a tensor field, a generalization of vector fields used to solve problems involving curved surfaces or spaces, as is the case of curved space-time in general relativity. From a mathematical point of view, two other approaches are possible for defining tensors, in terms of tensor products of vector spaces, or of multilinear maps. Symmetric tensors can also be linked with homogeneous polynomials [2].
After the first tensor developments by mathematicians and physicists, the need to analyse collections of data matrices that can be seen as three-way data arrays gave rise to three-way models for data analysis, with the pioneering works of Tucker (1966) in psychometrics [3] and Harshman (1970) in phonetics [4], who proposed what are now referred to as the Tucker and PARAFAC (parallel factor) decompositions/models, respectively. PARAFAC decompositions were independently proposed by Carroll and Chang in 1970 [5] under the name CANDECOMP (canonical decomposition), then called CP (for CANDECOMP/PARAFAC) in [6]. For a history of the development of multi-way models in the context of data analysis, see [7]. Since the nineties, multi-way analysis has enjoyed growing success in chemistry, and especially in chemometrics. See Bro's thesis (1998) [8] and the book by Smilde et al. (2004) [9] for a description of various chemical applications of three-way models, with a pedagogical presentation of these models and of various algorithms for estimating their parameters. During the same period, tensor tools were developed for signal processing applications, more particularly for solving the so-called blind source separation (BSS) problem using cumulant tensors. See [10], [11], [12], and De Lathauwer's thesis [13], where the concept of HOSVD (higher-order singular value decomposition) is introduced, a tensor tool generalizing the standard matrix SVD to arrays of order higher than two. A recent overview of BSS approaches and applications can be found in the handbook co-edited by Comon and Jutten [14].
Nowadays, (higher-order) tensors, also called multi-way arrays in the data analysis community, play an important role in many fields of application for representing and analysing multidimensional data, as in psychometrics, chemometrics, food industry, environmental sciences, signal/image processing, computer vision, neuroscience, information sciences, data mining, and pattern recognition, among many others. There, they are simply considered as multidimensional arrays of numbers, constituting a generalization of vectors and matrices, which are first- and second-order tensors respectively, to orders higher than two. Tensor models, also called tensor decompositions, are very useful for analysing multidimensional data in the form of signals, images, speech, music sequences, or texts, and also for designing new systems, as is the case of wireless communication systems since the publication of the seminal paper by Sidiropoulos et al. in 2000 [15]. Besides the references already cited, overviews of tensor tools, models, algorithms, and applications can be found in [16], [17], [18], [19].
Tensor models incorporating constraints (sparsity; non-negativity; smoothness; symmetry; column orthonormality of factor matrices; Hankel, Toeplitz, and Vandermonde structured matrix factors; allocation constraints, ...) have been the subject of intensive work in recent years. Such constraints can be inherent to the problem under study, or the result of a system design. An overview of the constraints on components of tensor models most often encountered in multi-way data analysis can be found in [7].
Incorporating constraints in tensor models may facilitate the physical interpretability of matrix factors.
Moreover, imposing constraints may make it possible to relax uniqueness conditions and to develop specialized parameter estimation algorithms with improved performance, both in terms of accuracy and computational cost, as is the case of CP models with a columnwise orthonormal factor matrix [20]. One can classify the constraints into three main categories: i) sparsity/non-negativity, ii) structural, iii) linear dependencies/mode interactions. It is worth noting that the three categories of constraints involve specific parameter estimation algorithms, the first two generally inducing an improvement of the uniqueness property of the tensor decomposition, while the third category implies a reduction of uniqueness, termed partial uniqueness. We briefly review the main results concerning the first two types of constraints, Section III of this paper being dedicated to the third category.
Sparse and non-negative tensor models have recently been the subject of many works in various fields of applications like computer vision ([21], [22]), image compression [23], hyperspectral imaging [24], music genre classification [25] and audio source separation [26], multi-channel EEG (electroencephalography) and network traffic analysis [27], fluorescence analysis [28], and data denoising and image classification [29], among many others. Two non-negative tensor models have been more particularly studied in the literature: the so-called non-negative tensor factorization (NTF), i.e. PARAFAC models with non-negativity constraints on the matrix factors, and the non-negative Tucker decomposition (NTD), i.e. Tucker models with non-negativity constraints on the core tensor and/or the matrix factors. The crucial importance of NTF/NTD for multi-way data analysis applications results from the very large volume of real-world data to be analyzed under constraints of sparseness and non-negativity of the factors to be estimated, when only non-negative parameters are physically interpretable. Many NTF/NTD algorithms are now available. Most of them can be viewed as higher-order extensions of non-negative matrix factorization (NMF) methods, in the sense that they are based on an alternating minimization of cost functions incorporating sparsity measures (also named distances or divergences), with application of NMF methods to matricized or vectorized forms of the tensor to be decomposed. See for instance [30], [23], [16], [28] for NTF, and [31], [29] for NTD. An overview of NMF and NTF/NTD algorithms can be found in [16].
The second category of constraints concerns the case where the core tensor and/or some matrix factors of the tensor model have a special structure.For instance, we recently proposed a nonlinear CDMA scheme for multiuser SIMO communication systems that is based on a constrained block-Tucker2 model whose core tensor, composed of the information symbols to be transmitted and their powers up to a certain degree, is characterized by matrix slices having a Vandermonde or a Hankel structure [32], [33].
We also developed Volterra-PARAFAC models for nonlinear system modeling and identification. These models are obtained by expanding high-order Volterra kernels, viewed as symmetric tensors, by means of symmetric or doubly symmetric PARAFAC decompositions [34], [35]. Block-structured nonlinear systems like Wiener, Hammerstein, and parallel-cascade Wiener systems can be identified from their associated Volterra kernels, which admit symmetric PARAFAC decompositions with Toeplitz factors [36], [37]. Symmetric PARAFAC models with Hankel factors, and symmetric block PARAFAC models with block Hankel factors, are encountered in the blind identification of MIMO linear channels using fourth-order cumulant tensors, in the cases of memoryless and convolutive channels, respectively [38], [39]. In the presence of structural constraints, specific estimation algorithms can be derived, as is the case for symmetric CP decompositions [40], CP decompositions with Toeplitz factors (in [41] an iterative solution was proposed, whereas in [42] a non-iterative algorithm was developed), Vandermonde factors [43], circulant factors [44], or more generally with banded and/or structured matrix factors [45], [46], and for Hankel and Vandermonde structured core tensors [33].
The rest of this paper is organized as follows. In Section II, we present some tensor prerequisites, with a particular emphasis on mode combination using Kronecker products of canonical vectors, which makes matricization operations easier, especially for deriving matrix representations of tensor models. This Kronecker-product-based approach is also formulated in terms of the index notation, which provides an original and concise formalism for both matricizing tensors and writing tensor models. We also present the two most common tensor models, the so-called Tucker and PARAFAC models, in a general framework, i.e. for $N$th-order tensors. Then, in Section III, two families of constrained tensor models, the so-called PARALIND/CONFAC and PARATUCK models, are described in a unified way, with a generalization to $N$th-order tensors. New tensor models, called nested Tucker models and block PARALIND/CONFAC models, are also introduced. A link between PARATUCK models and constrained PARAFAC models is also established. In Section IV, uniqueness properties of PARATUCK models are deduced using this link. The paper is concluded in Section V.

Notations and definitions:
$\mathbb{R}$ and $\mathbb{C}$ denote the fields of real and complex numbers, respectively. Scalars, column vectors, matrices, and higher-order tensors are denoted by lowercase, boldface lowercase, boldface uppercase, and calligraphic letters, e.g. $a$, $\mathbf{a}$, $\mathbf{A}$, and $\mathcal{A}$, respectively. The vector $\mathbf{A}_{i.}$ (resp. $\mathbf{A}_{.j}$) represents the $i$th row (resp. $j$th column) of $\mathbf{A}$.
$\mathbf{I}_N$, $\mathbf{1}^T_N$, and $e^{(N)}_n$ stand for the identity matrix of order $N$, the all-ones row vector of dimensions $1\times N$, and the $n$th canonical vector of the Euclidean space $\mathbb{R}^N$, respectively.
$\mathbf{A}^T$, $\mathbf{A}^H$, $\mathbf{A}^\dagger$, $\mathrm{tr}(\mathbf{A})$, and $r_{\mathbf{A}}$ denote the transpose, the conjugate (Hermitian) transpose, the Moore-Penrose pseudo-inverse, the trace, and the rank of $\mathbf{A}$, respectively. $D_i(\mathbf{A}) = \mathrm{diag}(\mathbf{A}_{i.})$ represents the diagonal matrix having the elements of the $i$th row of $\mathbf{A}$ on its diagonal. The operator $\mathrm{bdiag}(\cdot)$ forms a block-diagonal matrix from its matrix arguments, while the operator $\mathrm{vec}(\cdot)$ transforms a matrix into a column vector by stacking the columns of its matrix argument one on top of the other. In the case of a tensor $\mathcal{X}$, the vec operation is defined in (6).
The outer product of N non-zero vectors defines a rank-one tensor of order N .
By convention, the order of dimensions is directly related to the order of variation of the associated indices. For instance, in (1) and (2), the product $I_{n_1} I_{n_2}\cdots I_{n_N}$ of dimensions means that $n_1$ is the slowest-varying index, while $n_N$ is the fastest-varying one in the computation of the Kronecker products.
For $S = \{1, \ldots, N\}$, the corresponding identities on Kronecker products of canonical vectors hold. Some useful matrix formulae are recalled in the Appendix.
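For readers who prefer a computational check, the index-variation convention above can be illustrated with a short numpy sketch (an illustration added here, not part of the original development; 0-based indices, row-major ordering):

```python
import numpy as np

# Convention of the paper: in a combined dimension I1*I2*...*IN,
# i1 varies slowest and iN varies fastest, which matches row-major (C) ordering.
I = (2, 3, 4)                  # example dimensions I1, I2, I3
idx = (1, 2, 3)                # 0-based multi-index (i1, i2, i3)

# Linear index i = i1*I2*I3 + i2*I3 + i3 (0-based analogue of Eq. (3))
i_manual = idx[0] * I[1] * I[2] + idx[1] * I[2] + idx[2]
i_numpy = np.ravel_multi_index(idx, I, order="C")
assert i_manual == i_numpy == 23
```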

II. TENSOR PREREQUISITES
In this paper, a tensor is simply viewed as a multidimensional array of measurements. Depending on whether these measurements are real- or complex-valued, we have a real- or complex-valued tensor, respectively.
The order $N$ of a tensor refers to the number of indices that characterize its elements $x_{i_1,\cdots,i_N}$, each index $i_n$, for $n = 1,\ldots,N$, being associated with a dimension, also called a way or a mode, $I_n$ denoting the mode-$n$ dimension.
An $N$th-order complex-valued tensor $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$, also called an $N$-way array, has dimensions $I_1\times\cdots\times I_N$. The coefficients $x_{i_1,\cdots,i_N}$ represent the coordinates of $\mathcal{X}$ in the canonical basis of $\mathbb{C}^{I_1\times\cdots\times I_N}$. The identity tensor of order $N$ and dimensions $I\times\cdots\times I$, denoted by $\mathcal{I}_{N,I}$ or simply $\mathcal{I}$, is a diagonal hypercubic tensor whose elements $\delta_{i_1,\cdots,i_N}$ are defined by means of the generalized Kronecker delta, i.e. $\delta_{i_1,\cdots,i_N} = 1$ if $i_1 = \cdots = i_N$, and $0$ otherwise. It can be written as $\mathcal{I}_{N,I} = \sum_{i=1}^{I} e^{(I)}_i\circ\cdots\circ e^{(I)}_i$.
Different reduced-order tensors can be obtained by slicing the tensor $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$ along one mode or $p$ modes, which gives sub-tensors of order $N-1$ or $N-p$, respectively. For instance, by slicing $\mathcal{X}$ along its mode-$n$, we get the $i_n$th mode-$n$ slice of $\mathcal{X}$, denoted by $\mathcal{X}_{\ldots i_n\ldots}$, a tensor of order $N-1$. By slicing the third-order tensor $\mathcal{X}\in\mathbb{C}^{I\times J\times K}$ along each mode, we get three types of matrix slices, respectively called horizontal, lateral, and frontal slices: $\mathbf{X}_{i..}$, $\mathbf{X}_{.j.}$, and $\mathbf{X}_{..k}$, with $i = 1,\ldots,I$; $j = 1,\ldots,J$; $k = 1,\ldots,K$.
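As an illustration (added here, not from the original text), the three types of matrix slices of a third-order tensor correspond to simple indexing operations in numpy (0-based indices):

```python
import numpy as np

X = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # third-order tensor, I=2, J=3, K=4

X_horizontal = X[0, :, :]   # horizontal slice X_{i..}, a J x K matrix (i = 0)
X_lateral = X[:, 1, :]      # lateral slice X_{.j.} (j = 1)
X_frontal = X[:, :, 2]      # frontal slice X_{..k} (k = 2)
```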
Such a tensor Hadamard product can be calculated by means of the matrix Hadamard product of extended tensor unfoldings, as defined in Eqs. (21) and (22) (see also Eqs. (101)-(103) in Appendix A.5). Example: given $\mathbf{A}\in\mathbb{C}^{R\times I_1}$, $\mathbf{B}\in\mathbb{C}^{R\times I_2}$, and the tensor $\mathcal{C}$ such that $c_{r,i_1,i_2} = a_{r,i_1}b_{r,i_2}$, a mode-1 flat matrix unfolding of $\mathcal{C}$ is given by $\mathbf{C}_{R\times I_1I_2} = (\mathbf{A}\otimes\mathbf{1}^T_{I_2})\odot(\mathbf{1}^T_{I_1}\otimes\mathbf{B})$, where $\odot$ denotes the matrix Hadamard product.
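This example can be checked numerically; the following sketch (our illustration, with the Hadamard product realized as the elementwise product `*`) verifies the unfolding formula on random matrices:

```python
import numpy as np

R, I1, I2 = 2, 3, 4
A = np.random.randn(R, I1)
B = np.random.randn(R, I2)

# Tensor C with c[r, i1, i2] = a[r, i1] * b[r, i2]
C = np.einsum("ri,rj->rij", A, B)

# Mode-1 flat unfolding C_{R x I1 I2} as a Hadamard product of
# "extended" matrices (Kronecker products with all-ones row vectors)
C1 = np.kron(A, np.ones((1, I2))) * np.kron(np.ones((1, I1)), B)
assert np.allclose(C.reshape(R, I1 * I2), C1)
```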
B. Mode Combination

Different contraction operations can be defined depending on the way the modes are combined. Let us partition the set $\{1,\ldots,N\}$ into $N_1$ ordered subsets $S_{n_1}$, constituted of $p(n_1)$ elements, with $\sum_{n_1=1}^{N_1}p(n_1) = N$. Each subset $S_{n_1}$ is associated with a combined mode of dimension $J_{n_1} = \prod_{n\in S_{n_1}} I_n$. These mode combinations make it possible to rewrite the $N$th-order tensor $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$ in the form of a tensor of order $N_1$, with elements $x_{j_1,\cdots,j_{N_1}}$. Two particular mode combinations, corresponding to the vectorization and matricization operations, are now detailed.

C. Vectorization
The vectorization of $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$ is associated with a combination of the $N$ modes into a unique mode of dimension $J = \prod_{n=1}^{N}I_n$, which amounts to replacing the outer product in (4) by the Kronecker product, the element $x_{i_1,\cdots,i_N}$ of $\mathcal{X}$ being the $i$th entry of $\mathrm{vec}(\mathcal{X})$, with $i$ defined as in (3).
The vectorization can also be carried out after a permutation $\pi(i_n)$ of the indices, $n = 1,\cdots,N$.
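In numpy, this vectorization coincides with a row-major reshape (a minimal sketch, assuming the index-ordering convention of Section II):

```python
import numpy as np

I1, I2, I3 = 2, 3, 4
X = np.random.randn(I1, I2, I3)

# vec(X) with i1 varying slowest and i3 fastest (row-major / C ordering)
x = X.reshape(-1, order="C")

# The element X[i1, i2, i3] sits at linear position i1*I2*I3 + i2*I3 + i3
assert x[1 * I2 * I3 + 2 * I3 + 3] == X[1, 2, 3]
```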

D. Matricization or Unfolding
There are different ways of matricizing the tensor $\mathcal{X}$, according to the partitioning of the set $\{1,\ldots,N\}$ into two ordered subsets $S_1$ and $S_2$, constituted of $p$ and $N-p$ indices, respectively. A general formula for the matricization $\mathbf{X}_{S_1;S_2}$ is given in (7), for $p\in[1,N-1]$. From (7), we can deduce the expression of the element $x_{i_1,\cdots,i_N}$ in terms of the entries of $\mathbf{X}_{S_1;S_2}$.

E. Particular case: mode-$n$ matrix unfoldings $\mathbf{X}_n$

A flat mode-$n$ matrix unfolding of the tensor $\mathcal{X}$ corresponds to an unfolding of the form $\mathbf{X}_{S_1;S_2}$ with $S_1 = \{n\}$ and $S_2 = \{1,\ldots,N\}\setminus\{n\}$. We can also define a tall mode-$n$ matrix unfolding of $\mathcal{X}$, by choosing $S_1 = \{1,\ldots,N\}\setminus\{n\}$ and $S_2 = \{n\}$. The column vectors of a flat mode-$n$ matrix unfolding $\mathbf{X}_n$ are the mode-$n$ vectors of $\mathcal{X}$, and the rank of $\mathbf{X}_n$, i.e. the dimension of the linear space spanned by the mode-$n$ vectors, is called the mode-$n$ rank of $\mathcal{X}$, denoted by $\mathrm{rank}_n(\mathcal{X})$.
In the case of a third-order tensor $\mathcal{X}\in\mathbb{C}^{I\times J\times K}$, there are six different flat unfoldings, denoted $\mathbf{X}_{I\times JK}$, $\mathbf{X}_{I\times KJ}$, $\mathbf{X}_{J\times KI}$, $\mathbf{X}_{J\times IK}$, $\mathbf{X}_{K\times IJ}$, $\mathbf{X}_{K\times JI}$, which can be derived using the properties (84), (85), and (87) of the Kronecker product. Similarly, there are six tall matrix unfoldings, denoted $\mathbf{X}_{JK\times I}$, $\mathbf{X}_{KJ\times I}$, $\mathbf{X}_{KI\times J}$, $\mathbf{X}_{IK\times J}$, $\mathbf{X}_{IJ\times K}$, $\mathbf{X}_{JI\times K}$, obtained by applying (8) to (10).
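Numerically, these unfoldings are transpositions followed by reshapes (a sketch added for illustration, assuming the index-ordering convention above):

```python
import numpy as np

I, J, K = 2, 3, 4
X = np.random.randn(I, J, K)

# Flat mode-1 unfolding X_{I x JK}: rows indexed by i, columns by (j, k)
X_I_JK = X.reshape(I, J * K)

# Flat unfolding X_{J x KI}: j first, then the combined index (k, i), k slowest
X_J_KI = np.transpose(X, (1, 2, 0)).reshape(J, K * I)

# Tall unfolding X_{JK x I}: combined (j, k) as row index, i as column index
X_JK_I = np.transpose(X, (1, 2, 0)).reshape(J * K, I)
assert np.allclose(X_JK_I, X_I_JK.T)   # tall unfolding is the transposed flat one
```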

F. Mode-n product of a tensor with a matrix or a vector
The mode-$n$ product of $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$ with a matrix $\mathbf{A}\in\mathbb{C}^{J_n\times I_n}$ along the $n$th mode, denoted by $\mathcal{X}\times_n\mathbf{A}$, gives the tensor $\mathcal{Y}$ of order $N$ and dimensions $I_1\times\cdots\times I_{n-1}\times J_n\times I_{n+1}\times\cdots\times I_N$, which can be expressed in terms of mode-$n$ matrix unfoldings of $\mathcal{X}$ and $\mathcal{Y}$ as $\mathbf{Y}_n = \mathbf{A}\,\mathbf{X}_n$. This operation can be interpreted as the linear map from the mode-$n$ space of $\mathcal{X}$ to the mode-$n$ space of $\mathcal{Y}$, associated with the matrix $\mathbf{A}$.
The mode-$n$ product of $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$ with the row vector $\mathbf{u}^T\in\mathbb{C}^{1\times I_n}$ along the $n$th mode, denoted by $\mathcal{X}\times_n\mathbf{u}^T$, gives a tensor $\mathcal{Y}$ of order $N-1$ and dimensions $I_1\times\cdots\times I_{n-1}\times I_{n+1}\times\cdots\times I_N$, which can be written in vectorized form by means of the vec operator. When multiplying an $N$th-order tensor by row vectors along $p$ different modes, we get a tensor of order $N-p$. For instance, for a third-order tensor $\mathcal{X}\in\mathbb{C}^{I\times J\times K}$, multiplying by row vectors along two of the three modes gives a first-order tensor, i.e. a vector. Considering an ordered subset $S = \{m_1,\ldots,m_P\}$ of the set $\{1,\ldots,N\}$, a series of mode-$m_p$ products of $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$ with $\mathbf{A}^{(m_p)}\in\mathbb{C}^{J_{m_p}\times I_{m_p}}$, $p\in\{1,\ldots,P\}$, $P\leq N$, will be concisely noted as $\mathcal{X}\times_{p=1}^{P}\mathbf{A}^{(m_p)}$. The mode-$n$ product satisfies the following properties (illustrated in the sketch after this list):
• For $\mathbf{A}^{(m_p)}\in\mathbb{C}^{J_{m_p}\times I_{m_p}}$, $p\in\{1,\ldots,P\}$, with $P\leq N$, the order of the mode-$m_p$ products is irrelevant when the indices $m_p$ are all distinct.
• For two products along the same mode $n$: $\mathcal{X}\times_n\mathbf{A}\times_n\mathbf{B} = \mathcal{X}\times_n(\mathbf{B}\mathbf{A})$.
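A minimal numpy implementation of the mode-$n$ product, via the unfolding identity $\mathbf{Y}_n = \mathbf{A}\mathbf{X}_n$, together with checks of the two properties above (our sketch, not from the original text):

```python
import numpy as np

def mode_n_product(X, A, n):
    """Mode-n product X x_n A, computed through the unfolding Y_n = A @ X_n."""
    Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)   # flat mode-n unfolding
    Yn = A @ Xn
    new_shape = (A.shape[0],) + tuple(np.delete(X.shape, n))
    return np.moveaxis(Yn.reshape(new_shape), 0, n)

X = np.random.randn(2, 3, 4)
A = np.random.randn(5, 3)   # acts on mode 1 (dimension 3)
B = np.random.randn(6, 4)   # acts on mode 2 (dimension 4)

# Products along distinct modes commute
Y1 = mode_n_product(mode_n_product(X, A, 1), B, 2)
Y2 = mode_n_product(mode_n_product(X, B, 2), A, 1)
assert np.allclose(Y1, Y2)

# Two products along the same mode compose: X x_n A x_n C = X x_n (C A)
C = np.random.randn(7, 5)
assert np.allclose(mode_n_product(mode_n_product(X, A, 1), C, 1),
                   mode_n_product(X, C @ A, 1))
```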

G. Kronecker products based approach using index notation
In this subsection, we propose to reformulate our Kronecker-product-based approach for tensor matricization in terms of the index notation introduced in [48]. Using this index notation, a column vector $\mathbf{u}\in\mathbb{C}^{I\times 1}$, a row vector $\mathbf{v}^T\in\mathbb{C}^{1\times J}$, and a matrix $\mathbf{X}\in\mathbb{C}^{I\times J}$ are respectively written as $\mathbf{u} = u_i\,e^{(I)}_i$, $\mathbf{v}^T = v_j\,(e^{(J)}_j)^T$, and $\mathbf{X} = x_{ij}\,e^{(I)}_i(e^{(J)}_j)^T$. As with the Einstein summation convention, the index notation allows one to drop summation signs: if an index $i\in[1,I]$ is repeated in an expression (or more generally in a term of an equation), this expression (or this term) must be summed over that index from 1 to $I$.
Using the index notation, the horizontal, lateral, and frontal slices of a third-order tensor $\mathcal{X}\in\mathbb{C}^{I\times J\times K}$ can be written as $\mathbf{X}_{i..} = x_{ijk}\,e^{(J)}_j(e^{(K)}_k)^T$, $\mathbf{X}_{.j.} = x_{ijk}\,e^{(K)}_k(e^{(I)}_i)^T$, and $\mathbf{X}_{..k} = x_{ijk}\,e^{(I)}_i(e^{(J)}_j)^T$. The Kronecker products $\mathbf{u}\otimes\mathbf{v}$ and $\mathbf{A}\otimes\mathbf{B}$, with $\mathbf{A}\in\mathbb{C}^{I\times J}$ and $\mathbf{B}\in\mathbb{C}^{K\times L}$, can be concisely written as $\mathbf{u}\otimes\mathbf{v} = u_i v_k\,(e^{(I)}_i\otimes e^{(K)}_k)$ and $\mathbf{A}\otimes\mathbf{B} = a_{ij}b_{kl}\,(e^{(I)}_i\otimes e^{(K)}_k)(e^{(J)}_j\otimes e^{(L)}_l)^T$. Using this formalism, the Khatri-Rao product can be written as $\mathbf{A}\diamond\mathbf{B} = a_{ij}b_{kj}\,(e^{(I)}_i\otimes e^{(K)}_k)(e^{(J)}_j)^T$. Considering the set $S = \{n_1,\ldots,n_N\}$ obtained by permuting the elements of $\{1,\ldots,N\}$, the Kronecker and Khatri-Rao products defined in (1) and (2), with $a_{i_n,r_n}$ as entry of $\mathbf{A}^{(n)}$, can be written in the same way. Applying these results, the unfoldings (7), (10) and (11), and the formula (8), can be rewritten in index notation, where $\mathcal{I}_1$ and $\mathcal{I}_2$ represent the sets of indices $i_n$ associated with the sets $S_1$ and $S_2$ of indices $n$, respectively.
We can also use the index notation for deriving matrix unfoldings of tensor extensions of a matrix; the corresponding mode-1 flat unfoldings are given in Appendix A.4. These two formulae will be used later for establishing the link between PARATUCK-(2,4) models and constrained PARAFAC-4 models. It is worth noting two differences between the index notation used in this paper and the Einstein summation convention: (i) each index can be repeated more than twice in any expression; (ii) the index notation can be used with ordered sets of indices.
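Incidentally, numpy's `einsum` behaves exactly like this index notation: repeated indices are summed, and free indices are kept. The following sketch (our illustration) expresses the Kronecker and Khatri-Rao products through index expressions:

```python
import numpy as np

I, J, K, L = 2, 3, 4, 5
A = np.random.randn(I, J)
B = np.random.randn(K, L)

# Kronecker product in index notation: (A kron B)_{(i,k),(j,l)} = a_ij * b_kl
AkB = np.einsum("ij,kl->ikjl", A, B).reshape(I * K, J * L)
assert np.allclose(AkB, np.kron(A, B))

# Khatri-Rao (columnwise Kronecker) product: (A kr C)_{(i,k),j} = a_ij * c_kj
C = np.random.randn(K, J)
AkrC = np.einsum("ij,kj->ikj", A, C).reshape(I * K, J)
```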

H. Basic Tensor Models
We now present the two most common tensor models, i.e. the Tucker [3] and PARAFAC [4] models.
In [7], these models are introduced in a constructive way, in the context of three-way data analysis.
The Tucker models are presented as extensions of the matrix singular value decomposition (SVD) to three-way arrays, which gave rise to the generalization known as the HOSVD ([13], [49]), whereas the PARAFAC model is introduced by emphasizing Cattell's principle of parallel proportional profiles [50], which underlies this model, thus explaining the acronym PARAFAC. In the following, we adopt a more general presentation for multi-way arrays, i.e. tensors of any order $N$.
1) Tucker Models: For an $N$th-order tensor $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$, a Tucker model is defined in element-wise form as
$$x_{i_1,\cdots,i_N} = \sum_{r_1=1}^{R_1}\cdots\sum_{r_N=1}^{R_N} g_{r_1,\cdots,r_N}\prod_{n=1}^{N} a^{(n)}_{i_n,r_n},$$
where $a^{(n)}_{i_n,r_n}$ is an element of the matrix factor $\mathbf{A}^{(n)}\in\mathbb{C}^{I_n\times R_n}$. Using the index notation, and defining the set of indices $R = \{r_{n_1},\cdots,r_{n_N}\}$, the Tucker model can also be written simply as $x_{i_1,\cdots,i_N} = g_{r_1,\cdots,r_N}\,a^{(1)}_{i_1,r_1}\cdots a^{(N)}_{i_N,r_N}$. Taking the definition (4) into account, this model can be written as a weighted sum of $\prod_{n=1}^{N}R_n$ outer products, i.e. rank-one tensors:
$$\mathcal{X} = \sum_{r_1=1}^{R_1}\cdots\sum_{r_N=1}^{R_N} g_{r_1,\cdots,r_N}\,\mathbf{A}^{(1)}_{.r_1}\circ\cdots\circ\mathbf{A}^{(N)}_{.r_N}. \qquad (23)$$
Using the definition (12) allows writing (23) in terms of mode-$n$ products as
$$\mathcal{X} = \mathcal{G}\times_1\mathbf{A}^{(1)}\times_2\cdots\times_N\mathbf{A}^{(N)}. \qquad (26)$$
This expression evidences that the Tucker model can be viewed as the transformation of the core tensor $\mathcal{G}\in\mathbb{C}^{R_1\times\cdots\times R_N}$ resulting from its multiplication by the factor matrix $\mathbf{A}^{(n)}$ along its mode-$n$, which corresponds to a linear map applied to the mode-$n$ space of $\mathcal{G}$, for $n = 1,\cdots,N$, i.e. a multilinear map applied to $\mathcal{G}$.
From a transformation point of view, G and X can be interpreted as the input tensor and the transformed tensor, or output tensor, respectively.
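A third-order Tucker model can be synthesized directly from this element-wise definition (a minimal numpy sketch, with arbitrary dimensions chosen for illustration):

```python
import numpy as np

# Tucker model: X = G x_1 A1 x_2 A2 x_3 A3 (third-order case for brevity)
R1, R2, R3 = 2, 3, 2
I1, I2, I3 = 4, 5, 6
G = np.random.randn(R1, R2, R3)
A1, A2, A3 = (np.random.randn(I, R) for I, R in [(I1, R1), (I2, R2), (I3, R3)])

X = np.einsum("pqr,ip,jq,kr->ijk", G, A1, A2, A3)

# Element-wise check of the Tucker formula
assert np.isclose(X[0, 0, 0],
                  sum(G[p, q, r] * A1[0, p] * A2[0, q] * A3[0, r]
                      for p in range(R1) for q in range(R2) for r in range(R3)))
```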

Matrix representations of the Tucker model:
A matrix representation of a Tucker model is directly linked with a matricization of the tensor, like (7), corresponding to the combination of two sets of modes $S_1$ and $S_2$. These combinations must be applied both to the tensor $\mathcal{X}$ and to its core tensor $\mathcal{G}$.
Proof: See the Appendix.
For the flat mode-$n$ unfolding, defined in (9), the formula (27) gives
$$\mathbf{X}_n = \mathbf{A}^{(n)}\,\mathbf{G}_n\Big(\bigotimes_{m\in S_2}\mathbf{A}^{(m)}\Big)^T, \qquad (28)$$
with $S_2 = \{1,\ldots,N\}\setminus\{n\}$. Applying the vec formula (92) to the right-hand side of (28), we obtain the vectorized form of $\mathcal{X}$ associated with its mode-$n$ unfolding.
2) Tucker-$(N_1,N)$ models: A Tucker-$(N_1,N)$ model corresponds to the case where $N-N_1$ factor matrices are equal to identity matrices. For instance, assuming $\mathbf{A}^{(n)} = \mathbf{I}_{I_n}$ for $n = N_1+1,\ldots,N$, (23) and (26) become a Tucker model with only $N_1$ non-trivial factor matrices. One such model that is currently used in applications is the Tucker-(2,3) model, usually denoted Tucker2, for third-order tensors $\mathcal{X}\in\mathbb{C}^{I\times J\times K}$. Assuming $\mathbf{A}^{(1)} = \mathbf{A}\in\mathbb{C}^{I\times P}$, $\mathbf{A}^{(2)} = \mathbf{B}\in\mathbb{C}^{J\times Q}$, and $\mathbf{A}^{(3)} = \mathbf{I}_K$, such a model is defined as $\mathcal{X} = \mathcal{G}\times_1\mathbf{A}\times_2\mathbf{B}$, with the core tensor $\mathcal{G}\in\mathbb{C}^{P\times Q\times K}$.
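As a numerical check of (28) in the third-order case (our illustration; it assumes the Kronecker ordering over $S_2 = \{2,3\}$ stated above):

```python
import numpy as np

# Check the mode-1 unfolding of a Tucker model: X_1 = A1 @ G_1 @ (A2 kron A3)^T
G = np.random.randn(2, 3, 2)
A1, A2, A3 = np.random.randn(4, 2), np.random.randn(5, 3), np.random.randn(6, 2)

X = np.einsum("pqr,ip,jq,kr->ijk", G, A1, A2, A3)
X1 = X.reshape(4, 5 * 6)          # flat mode-1 unfolding of X
G1 = G.reshape(2, 3 * 2)          # flat mode-1 unfolding of the core tensor
assert np.allclose(X1, A1 @ G1 @ np.kron(A2, A3).T)
```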
3) PARAFAC Models: A PARAFAC model for an $N$th-order tensor corresponds to the particular case of a Tucker model with an identity core tensor of order $N$ and dimensions $R\times\cdots\times R$. Equations (23)-(26) then become, respectively,
$$\mathcal{X} = \sum_{r=1}^{R}\mathbf{A}^{(1)}_{.r}\circ\mathbf{A}^{(2)}_{.r}\circ\cdots\circ\mathbf{A}^{(N)}_{.r}, \qquad (33)$$
$$x_{i_1,\cdots,i_N} = a^{(1)}_{i_1,r}\,a^{(2)}_{i_2,r}\cdots a^{(N)}_{i_N,r} \text{ (with the index notation)}, \qquad (34)$$
$$\mathcal{X} = \mathcal{I}_{N,R}\times_1\mathbf{A}^{(1)}\times_2\cdots\times_N\mathbf{A}^{(N)}, \qquad (35)$$
with the factor matrices $\mathbf{A}^{(n)}\in\mathbb{C}^{I_n\times R}$.

Remarks
• The expression (33) as a sum of polyads is called a polyadic form of X by Hitchcock (1927) [51].
• The PARAFAC model (33)-(35) amounts to decomposing the tensor $\mathcal{X}$ into a sum of $R$ components, each component being a rank-one tensor. When $R$ is minimal in (33), it is called the rank of $\mathcal{X}$ [52].
This rank is related to the mode-$n$ ranks by the inequalities $\mathrm{rank}_n(\mathcal{X})\leq\mathrm{rank}(\mathcal{X})$, for $n = 1,\ldots,N$. Furthermore, contrary to matrices, for which the rank is always at most equal to the smallest of the dimensions, for higher-order tensors the rank can exceed any mode-$n$ dimension $I_n$.
There exist different definitions of rank for tensors, like typical and generic ranks, or the symmetric rank of a symmetric tensor. See [53] and [54] for more details.
• In telecommunication applications, the structure parameters (rank, mode dimensions, and core tensor dimensions) of a PARAFAC or Tucker model are design parameters, chosen as a function of the performance desired for the communication system. However, in most applications, as for instance in multi-way data analysis, the structure parameters are generally unknown and must be determined a priori. Several techniques have been proposed for determining these parameters. See [55], [56], [57], [58], and references therein.
• The PARAFAC model is also sometimes defined by the following equation:
$$\mathcal{X} = \mathcal{G}\times_1\mathbf{A}^{(1)}\times_2\cdots\times_N\mathbf{A}^{(N)}. \qquad (36)$$
In this case, the identity tensor $\mathcal{I}_{N,R}$ in (35) is replaced by the diagonal tensor $\mathcal{G}\in\mathbb{C}^{R\times\cdots\times R}$, whose diagonal elements are equal to scaling factors $g_r$, and all the column vectors $\mathbf{A}^{(n)}_{.r}$ are normalized.
• It is important to notice that the PARAFAC model (33) is multilinear (more precisely, $N$-linear) in its parameters, in the sense that it is linear with respect to each matrix factor. This multilinearity property is exploited for parameter estimation using the standard alternating least squares (ALS) algorithm ([4], [5]), which consists in alternately estimating each matrix factor by minimizing a least squares error criterion, conditioned on the knowledge of the other matrix factors, which are fixed at their previously estimated values.
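As a minimal sketch of the ALS principle for a third-order PARAFAC model (our illustration; a practical implementation would add normalization, a convergence test, and a careful initialization strategy):

```python
import numpy as np

def khatri_rao(A, B):
    # Columnwise Kronecker product: column r is A[:, r] kron B[:, r]
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, A.shape[1])

def cp_als(X, R, n_iter=100):
    """Plain ALS for a third-order PARAFAC model X ~ sum_r a_r o b_r o c_r."""
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X1 = X.reshape(I, J * K)                            # X_{I x JK}
    X2 = np.transpose(X, (1, 0, 2)).reshape(J, I * K)   # X_{J x IK}
    X3 = np.transpose(X, (2, 0, 1)).reshape(K, I * J)   # X_{K x IJ}
    for _ in range(n_iter):
        # Each step is a linear LS problem given the other two factors
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)     # X_{I x JK} = A (B kr C)^T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```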

Matrix representations of the PARAFAC model:
The matrix representation (7) of the PARAFAC model (33)-(35) is given by
$$\mathbf{X}_{S_1;S_2} = \Big(\mathop{\diamond}_{n\in S_1}\mathbf{A}^{(n)}\Big)\Big(\mathop{\diamond}_{n\in S_2}\mathbf{A}^{(n)}\Big)^T. \qquad (37)$$
Proof: See the Appendix.

Remarks
• From (37), we can deduce that a mode combination results in a Khatri-Rao product of the corresponding factor matrices. Consequently, the tensor contraction (5) associated with the PARAFAC-$N$ model (35) gives a PARAFAC-$N_1$ model whose factor matrices are equal to Khatri-Rao products of the original factor matrices over the combined modes.
• For the PARAFAC model, the flat mode-$n$ unfolding, defined in (9), is given by
$$\mathbf{X}_n = \mathbf{A}^{(n)}\Big(\mathop{\diamond}_{m\neq n}\mathbf{A}^{(m)}\Big)^T, \qquad (38)$$
and the associated vectorized form is obtained by applying the vec formula (93) to the right-hand side of the above equation.
• In the case of the normalized PARAFAC model (36), Eqs. (37) and (39) are modified accordingly, the scaling factors $g_r$ appearing explicitly.
• For the PARAFAC model of a third-order tensor $\mathcal{X}\in\mathbb{C}^{I\times J\times K}$ with factor matrices $(\mathbf{A},\mathbf{B},\mathbf{C})$, the formula (37) gives, for $S_1 = \{i,j\}$ and $S_2 = \{k\}$, $\mathbf{X}_{IJ\times K} = (\mathbf{A}\diamond\mathbf{B})\,\mathbf{C}^T$, from which we deduce the following expression for the mode-1 matrix slices: $\mathbf{X}_{i..} = \mathbf{B}\,D_i(\mathbf{A})\,\mathbf{C}^T$. Similarly, we have $\mathbf{X}_{.j.} = \mathbf{C}\,D_j(\mathbf{B})\,\mathbf{A}^T$ and $\mathbf{X}_{..k} = \mathbf{A}\,D_k(\mathbf{C})\,\mathbf{B}^T$.
• For the PARAFAC model of a fourth-order tensor $\mathcal{X}\in\mathbb{C}^{I\times J\times K\times L}$ with factor matrices $(\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D})$, we obtain
$$\mathbf{X}_{ij..} = \mathbf{C}\,D_i(\mathbf{A})\,D_j(\mathbf{B})\,\mathbf{D}^T. \qquad (40)$$
Other matrix slices can be deduced from (40) by simple permutations of the matrix factors.
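The slice and unfolding formulas above can be verified numerically (a short sketch added for illustration):

```python
import numpy as np

I, J, K, R = 3, 4, 5, 2
A, B, C = (np.random.randn(n, R) for n in (I, J, K))
X = np.einsum("ir,jr,kr->ijk", A, B, C)

# Frontal slice of a third-order PARAFAC model: X_{..k} = A D_k(C) B^T
k = 2
assert np.allclose(X[:, :, k], A @ np.diag(C[k, :]) @ B.T)

# Unfolding X_{IJ x K} = (A kr B) C^T, with kr the columnwise Khatri-Rao product
AB = np.einsum("ir,jr->ijr", A, B).reshape(I * J, R)
assert np.allclose(X.reshape(I * J, K), AB @ C.T)
```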
In the next section, we introduce two constrained PARAFAC models, the so-called PARALIND and CONFAC models, and then PARATUCK models.

III. Constrained PARAFAC Models
The introduction of constraints in tensor models can result from the system under study itself, or from a system design. In the first case, the constraints are often interpreted as interactions or linear dependencies between the PARAFAC factors. Examples of such dependencies are encountered in the psychometrics and chemometrics applications that gave rise, respectively, to the PARATUCK-2 model [59] and to the PARALIND (PARAllel profiles with LINear Dependencies) model ([60], [61]), introduced in [47] under the name CANDELINC (CANonical DEcomposition with LINear Constraints) for the multiway case. A first application of the PARATUCK-2 model in signal processing was made in [62], for blind joint identification and equalization of Wiener-Hammerstein communication channels. The PARALIND model was recently applied for identifiability and propagation parameter estimation purposes in a context of array signal processing [63], [64].
In the second case, the constraints are used as design parameters. For instance, in a telecommunications context, we recently proposed two constrained tensor models: the CONFAC (CONstrained FACtor) model [65], and the PARATUCK-$(N_1,N)$ model [66], [67]. The PARATUCK-2 model was also applied for designing space-time spreading-multiplexing MIMO systems [68]. For these telecommunication applications of constrained tensor models, the constraints are used for resource allocation. We now describe these various constrained PARAFAC models.

A. PARALIND models
Let us define the core tensor of the Tucker model (26) as follows:
$$\mathcal{G} = \mathcal{I}_{N,R}\times_1\mathbf{\Phi}^{(1)}\times_2\cdots\times_N\mathbf{\Phi}^{(N)}, \qquad (41)$$
where $\mathbf{\Phi}^{(n)}\in\mathbb{R}^{R_n\times R}$, $n = 1,\cdots,N$, are constraint matrices. In this case, $\mathcal{G}$ will be called the "interaction tensor", or "constraint tensor".
The PARALIND model is obtained by substituting (41) into (26) and applying the property (13), which gives
$$\mathcal{X} = \mathcal{I}_{N,R}\times_1\mathbf{A}^{(1)}\mathbf{\Phi}^{(1)}\times_2\cdots\times_N\mathbf{A}^{(N)}\mathbf{\Phi}^{(N)}. \qquad (42)$$
Equation (42) leads to two different interpretations of the PARALIND model: as a constrained Tucker model whose core tensor admits a PARAFAC decomposition with factor matrices $\mathbf{\Phi}^{(n)}$, called "interaction matrices", and as a constrained PARAFAC model with constrained factor matrices $\mathbf{A}^{(n)}\mathbf{\Phi}^{(n)}$. The interaction matrix $\mathbf{\Phi}^{(n)}$ allows taking into account linear dependencies between the columns of $\mathbf{A}^{(n)}$, implying a rank deficiency for this factor matrix. When the columns of $\mathbf{\Phi}^{(n)}$ are formed with 0's and 1's, the dependencies simply consist in a repetition or an addition of certain columns of $\mathbf{A}^{(n)}$. In this particular case, the diagonal element $\xi^{(n)}_{r,r}$ of $\mathbf{\Xi}^{(n)} = \mathbf{\Phi}^{(n)T}\mathbf{\Phi}^{(n)}$ gives the number of columns of $\mathbf{A}^{(n)}$ that are added to form the $r$th column of the constrained factor $\mathbf{A}^{(n)}\mathbf{\Phi}^{(n)}$. The choice $\mathbf{\Phi}^{(n)} = \mathbf{I}_R$ means that there is no such dependency among the columns of $\mathbf{A}^{(n)}$.
Equation (42) can be written element-wise as
$$x_{i_1,\cdots,i_N} = \sum_{r=1}^{R}\prod_{n=1}^{N}\Big(\sum_{r_n=1}^{R_n}a^{(n)}_{i_n,r_n}\,\phi^{(n)}_{r_n,r}\Big). \qquad (43)$$
This constrained PARAFAC model constitutes an $N$-way form of the three-way PARALIND model, used for chemometrics applications in [60] and [61].
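The construction of a PARALIND model amounts to building a PARAFAC model whose factors are the products $\mathbf{A}^{(n)}\mathbf{\Phi}^{(n)}$ (a minimal numpy sketch; the 0/1 interaction matrices below are hypothetical examples chosen for illustration):

```python
import numpy as np

# Third-order PARALIND: constrained PARAFAC with factors A^(n) @ Phi^(n)
I1, I2, I3 = 4, 5, 6
R1, R2, R3, R = 2, 3, 2, 4
A1, A2, A3 = (np.random.randn(I, Rn) for I, Rn in [(I1, R1), (I2, R2), (I3, R3)])

# Hypothetical 0/1 interaction matrices: columns of A1 and A3 are repeated
Phi1 = np.array([[1., 0., 1., 0.],
                 [0., 1., 0., 1.]])      # R1 x R
Phi2 = np.eye(R2, R)                     # R2 x R
Phi3 = np.array([[1., 1., 0., 0.],
                 [0., 0., 1., 1.]])      # R3 x R

# Constrained PARAFAC model (42), with rank-deficient factors A^(n) @ Phi^(n)
X = np.einsum("ir,jr,kr->ijk", A1 @ Phi1, A2 @ Phi2, A3 @ Phi3)
```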

B. CONFAC models
When the constraint matrices $\mathbf{\Phi}^{(n)}\in\mathbb{R}^{R_n\times R}$ are full row-rank, and their columns are chosen as canonical vectors of the Euclidean space $\mathbb{R}^{R_n}$, for $n = 1,\cdots,N$, the constrained PARAFAC model (42) constitutes a generalization to $N$th order of the third-order CONFAC model, introduced in [65] for designing MIMO communication systems with resource allocation. This CONFAC model was used in [69] for solving the problem of blind identification of underdetermined mixtures based on the cumulant generating function of the observations. In a telecommunications context where $\mathcal{X}$ represents the tensor of received signals, such a constraint matrix $\mathbf{\Phi}^{(n)}$ can be interpreted as an "allocation matrix", allowing the allocation of resources, like data streams, codes, and transmit antennas, to the $R$ components of the signal to be transmitted. In this case, the core tensor $\mathcal{G}$ will be called the "allocation tensor". By assumption, each column of the allocation matrix $\mathbf{\Phi}^{(n)}$ is a canonical vector of $\mathbb{R}^{R_n}$, which means that, for each component $r$, there is only one value of $r_n$ such that $\phi^{(n)}_{r_n,r} = 1$, and this value of $r_n$ corresponds to the $n$th resource allocated to the $r$th component. Each element $x_{i_1,\cdots,i_N}$ of the received signal tensor $\mathcal{X}$ is equal to the sum of $R$ components, each component $r$ resulting from the combination of $N$ resources, each resource being associated with a column of the matrix factor $\mathbf{A}^{(n)}$, $n = 1,\cdots,N$. This combination, determined by the allocation matrices, is defined by the set of $N$ indices $\{r_1,\cdots,r_N\}$ such that $\phi^{(n)}_{r_n,r} = 1$ for all $n$. As for any $r\in[1,R]$ there is one and only one $N$-uplet $(r_1,\cdots,r_N)$ with this property, we can deduce that each component $r$ of $x_{i_1,\cdots,i_N}$ in (43) is the result of one and only one combination of the $N$ resources, under the form of the product $\prod_{n=1}^{N}a^{(n)}_{i_n,r_n}$. The full row-rank assumption means that each resource $r_n$ is allocated at least once, only one resource $r_n$ being allocated to each component $r$.
Moreover, we have to notice that the assumption $R\geq\max_n(R_n)$ implies that each resource can be allocated several times, i.e. to several components. Defining the interaction matrices $\mathbf{\Gamma}^{(n_1,n_2)} = \mathbf{\Phi}^{(n_1)}\mathbf{\Phi}^{(n_2)T}$, the diagonal element $\gamma^{(n,n)}_{r_n,r_n}$ represents the number of times the $r_n$th column of $\mathbf{A}^{(n)}$ is repeated, i.e. the number of times the $r_n$th resource is allocated to the $R$ components, whereas $\gamma^{(n_1,n_2)}_{r_{n_1},r_{n_2}}$ determines the number of interactions between the $r_{n_1}$th column of $\mathbf{A}^{(n_1)}$ and the $r_{n_2}$th column of $\mathbf{A}^{(n_2)}$, i.e. the number of times the $r_{n_1}$th and $r_{n_2}$th resources are combined in the $R$ components. If we choose $R_n = R$ and $\mathbf{\Phi}^{(n)} = \mathbf{I}_R$, $\forall n = 1,\cdots,N$, the PARALIND/CONFAC model (42) becomes identical to the PARAFAC one (35).
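The structure of a CONFAC allocation matrix and of its interaction matrix can be checked with a small sketch (our illustration; the allocation pattern below is a hypothetical example):

```python
import numpy as np

R1, R = 3, 5

# CONFAC allocation matrix: each of the R columns is a canonical vector of R^{R1}
alloc = [0, 1, 2, 0, 1]                  # resource r1 chosen for each component r
Phi1 = np.eye(R1)[:, alloc]              # R1 x R, columns are canonical vectors

assert np.linalg.matrix_rank(Phi1) == R1          # full row rank: all resources used
Gamma = Phi1 @ Phi1.T                             # interaction matrix Gamma^{(1,1)}
assert np.allclose(np.diag(Gamma), [2, 2, 1])     # allocation counts per resource
```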
The matrix representation (7) of the PARALIND/CONFAC model can be deduced from (37) as
$$\mathbf{X}_{S_1;S_2} = \Big(\mathop{\diamond}_{n\in S_1}\mathbf{A}^{(n)}\mathbf{\Phi}^{(n)}\Big)\Big(\mathop{\diamond}_{n\in S_2}\mathbf{A}^{(n)}\mathbf{\Phi}^{(n)}\Big)^T.$$
Using the identity (86) gives, equivalently,
$$\mathbf{X}_{S_1;S_2} = \Big(\bigotimes_{n\in S_1}\mathbf{A}^{(n)}\Big)\,\mathbf{G}_{S_1;S_2}\Big(\bigotimes_{n\in S_2}\mathbf{A}^{(n)}\Big)^T, \qquad (44)$$
where the matrix representation $\mathbf{G}_{S_1;S_2}$ of the constraint/allocation tensor $\mathcal{G}$, defined by means of its PARAFAC model (41), can also be deduced from (37) as
$$\mathbf{G}_{S_1;S_2} = \Big(\mathop{\diamond}_{n\in S_1}\mathbf{\Phi}^{(n)}\Big)\Big(\mathop{\diamond}_{n\in S_2}\mathbf{\Phi}^{(n)}\Big)^T.$$

C. Nested Tucker models
The PARALIND/CONFAC models can be viewed as particular cases of a new family of tensor models that we shall call nested Tucker models, defined by means of the following recursive equation:
$$\mathcal{X}^{(p)} = \mathcal{X}^{(p-1)}\times_1\mathbf{A}^{(p,1)}\times_2\cdots\times_N\mathbf{A}^{(p,N)}, \quad p = 1,\ldots,P,$$
with $\mathcal{X}^{(0)} = \mathcal{G}$, the factor matrices $\mathbf{A}^{(p,n)}$, and $\mathcal{X}^{(P)}\in\mathbb{C}^{I_1\times\cdots\times I_N}$. This equation can be interpreted as $P$ successive linear transformations applied to each mode-$n$ space of the core tensor $\mathcal{G}$. So, $P$ nested Tucker models can be interpreted as a Tucker model for which the factor matrices are products of $P$ matrices. When the core tensor is the identity tensor, we obtain nested PARAFAC models. The PARALIND/CONFAC models correspond to two nested PARAFAC models ($P = 2$), with $\mathbf{A}^{(1,n)} = \mathbf{\Phi}^{(n)}$ and $\mathbf{A}^{(2,n)} = \mathbf{A}^{(n)}$. By considering nested PARAFAC models with $P = 3$ and $\mathbf{A}^{(1,n)} = \mathbf{\Phi}^{(n)}$, we deduce doubly PARALIND/CONFAC models. Such a model can be viewed as a doubly constrained PARAFAC model, with factor matrices of the form $\mathbf{\Psi}^{(n)}\mathbf{A}^{(n)}\mathbf{\Phi}^{(n)}$, the constraint matrix $\mathbf{\Psi}^{(n)}$, assumed to be full column-rank, allowing one to take into account linear dependencies between the rows of $\mathbf{A}^{(n)}$.
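A small numerical sketch of the nesting property (our illustration; names and dimensions are arbitrary): two successive Tucker transformations collapse into a single Tucker model whose factors are the products of the per-stage matrices.

```python
import numpy as np

def mode_n_product(X, A, n):
    Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
    Y = (A @ Xn).reshape((A.shape[0],) + tuple(np.delete(X.shape, n)))
    return np.moveaxis(Y, 0, n)

def tucker(G, mats):
    # Apply one mode-n product per mode
    for n, A in enumerate(mats):
        G = mode_n_product(G, A, n)
    return G

G = np.random.randn(2, 3, 2)
stage1 = [np.random.randn(4, 2), np.random.randn(5, 3), np.random.randn(4, 2)]
stage2 = [np.random.randn(6, 4), np.random.randn(7, 5), np.random.randn(6, 4)]

X_nested = tucker(tucker(G, stage1), stage2)          # two nested Tucker models
X_flat = tucker(G, [A2 @ A1 for A1, A2 in zip(stage1, stage2)])
assert np.allclose(X_nested, X_flat)                  # factors are matrix products
```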

D. Block PARALIND/CONFAC models
In some applications, the data tensor $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$ is written as a sum of $P$ sub-tensors $\mathcal{X}^{(p)}$, each sub-tensor admitting a tensor model with a possibly different structure. So, we can define a block PARALIND/CONFAC model as
$$\mathcal{X} = \sum_{p=1}^{P}\mathcal{X}^{(p)}, \qquad (45)$$
where each $\mathcal{X}^{(p)}$ follows a PARALIND/CONFAC model (46) whose mode-$n$ factor matrix, mode-$n$ constraint/allocation matrix, and core tensor are $\mathbf{A}^{(p,n)}$, $\mathbf{\Phi}^{(p,n)}$, and $\mathcal{G}^{(p)}$, respectively. The matrix representation (44) then becomes a sum of $P$ such representations (47). Defining the block partitioned matrices $\mathbf{A}^{(n)} = [\mathbf{A}^{(1,n)}\ \cdots\ \mathbf{A}^{(P,n)}]$ as in (48), Eq. (47) can be rewritten in a more compact form, where $\otimes_b$ denotes the block-wise Kronecker product, defined as $\mathbf{A}\otimes_b\mathbf{B} = [\mathbf{A}^{(1)}\otimes\mathbf{B}^{(1)}\ \cdots\ \mathbf{A}^{(P)}\otimes\mathbf{B}^{(P)}]$, $\mathbf{A}$ and $\mathbf{B}$ being partitioned into $P$ blocks as in (48), and where $\diamond_b$ denotes the block-wise Khatri-Rao product, defined in the same way as the block-wise Kronecker product, with $\diamond$ in place of $\otimes$. In the case of a block PARAFAC model, the PARALIND/CONFAC structure in (46) is replaced by a PARAFAC model for each sub-tensor, and the matrix representation (37) then becomes block partitioned accordingly. Block constrained PARAFAC models were used in [70], [71], [72] for modeling different types of multiuser wireless communication systems.
Block constrained Tucker models were used for space-time multiplexing MIMO-OFDM systems [73] and for blind beamforming [74]. In these applications, the symbol matrix factor is in Toeplitz or block-Toeplitz form.
The block tensor model defined by Eqs. (45)-(46) can be viewed as a generalization of the block term decomposition introduced in [76] for third-order tensors $\mathcal{X}\in\mathbb{C}^{I\times J\times K}$, which are decomposed into a sum of $P$ Tucker models of rank-$(L,M,N)$. This corresponds to the particular case where all the factor matrices are full column rank, with $\mathbf{A}^{(p,1)}\in\mathbb{C}^{I\times L}$, $\mathbf{A}^{(p,2)}\in\mathbb{C}^{J\times M}$, and $\mathbf{A}^{(p,3)}\in\mathbb{C}^{K\times N}$, for $p = 1,\cdots,P$, and $\mathcal{G}\in\mathbb{C}^{L\times M\times N}$, each sub-tensor $\mathcal{X}^{(p)}$ being decomposed by means of its HOSVD.
This decomposition is to be compared with Figure 5 in [77], representing a block term decomposition of a third-order tensor into rank-$(L_p,M_p,N_p)$ terms, where each term has a PARALIND/CONFAC structure.
The block term decomposition (BTD) in rank-$(1,L_p,L_p)$ terms of a third-order tensor $\mathcal{X}\in\mathbb{C}^{I\times J\times K}$, which is compared to a third-order PARATREE model in [75], can also be viewed as a particular CONFAC-(1,3) model. Indeed, such a decomposition can be written as [79]
$$\mathcal{X} = \sum_{p=1}^{P}\mathbf{a}_p\circ(\mathbf{B}_p\mathbf{C}_p^T), \qquad (50)$$
where the matrices $\mathbf{B}_p\in\mathbb{C}^{J\times L_p}$ and $\mathbf{C}_p\in\mathbb{C}^{K\times L_p}$ are rank-$L_p$. Defining $R = \sum_{p=1}^{P}L_p$, it is easy to verify that the BTD (50) can be rewritten as a CONFAC-(1,3) model, with the constraint matrix $\mathbf{\Phi}^{(1)} = \mathrm{bdiag}(\mathbf{1}^T_{L_1},\cdots,\mathbf{1}^T_{L_P})$.

F. PARATUCK models
The PARATUCK-$(N_1,N)$ model of an $N$th-order tensor $\mathcal{X}\in\mathbb{C}^{I_1\times\cdots\times I_N}$ is defined in scalar form as follows [66], [67]:
$$x_{i_1,\cdots,i_N} = \sum_{r_1=1}^{R_1}\cdots\sum_{r_{N_1}=1}^{R_{N_1}} c_{r_1,\cdots,r_{N_1},i_{N_1+2},\cdots,i_N}\prod_{n=1}^{N_1}a^{(n)}_{i_n,r_n}\,\phi^{(n)}_{r_n,i_{N_1+1}}, \qquad (52)$$
where $a^{(n)}_{i_n,r_n}$ and $\phi^{(n)}_{r_n,i_{N_1+1}}$ are entries of the factor matrix $\mathbf{A}^{(n)}\in\mathbb{C}^{I_n\times R_n}$ and of the interaction/allocation matrix $\mathbf{\Phi}^{(n)}$, respectively. With the core tensor defined below, the PARATUCK-$(N_1,N)$ model can be rewritten as a Tucker-$(N_1,N)$ model (29)-(30).

Defining the allocation/interaction tensor $\mathcal{F}$, of order $N_1+1$, with entries
$$f_{r_1,\cdots,r_{N_1},i_{N_1+1}} = \prod_{n=1}^{N_1}\phi^{(n)}_{r_n,i_{N_1+1}}, \qquad (53)$$
the core tensor $\mathcal{G}$ can then be written as the Hadamard product of the tensors $\mathcal{C}$ and $\mathcal{F}$ along their first $N_1$ modes (54).

Remarks
• The PARATUCK-$(N_1,N)$ model can be interpreted as the transformation of the input tensor $\mathcal{C}$ via its multiplication by the factor matrices $\mathbf{A}^{(n)}$, $n = 1,\cdots,N_1$, along its first $N_1$ modes, combined with a mode-$n$ resource allocation ($n = 1,\cdots,N_1$) relative to the mode-$(N_1+1)$ of the transformed tensor $\mathcal{X}$, by means of the allocation matrices $\mathbf{\Phi}^{(n)}$.
• In telecommunications applications, the output modes will be called diversity modes, because they correspond to time, space and frequency diversities, whereas the input modes are associated with resources like transmit antennas, codes, and data streams. For these applications, the matrices $\mathbf{\Phi}^{(n)}$ are formed with 0's and 1's, and they can be interpreted as allocation matrices used for allocating some resources $r_n$ to the output mode-$(N_1+1)$. Another way to take resource allocations into account consists in replacing the $N_1$ allocation matrices $\mathbf{\Phi}^{(n)}$ by the $(N_1+1)$th-order allocation tensor $\mathcal{F}$ defined in (53).

This combination of a Tucker-2 model for $\mathcal{X}$ with a PARAFAC model for $\mathcal{F}$ gave rise to the name PARATUCK-2. The constraint matrices $(\mathbf{\Phi}^{(1)},\mathbf{\Phi}^{(2)})$ define interactions between columns of the factor matrices $(\mathbf{A}^{(1)},\mathbf{A}^{(2)})$, along the mode-3 of $\mathcal{X}$, while the matrix $\mathbf{C}$ contains the weights of these interactions.

G. Rewriting of PARATUCK models as Constrained PARAFAC Models
This rewriting of PARATUCK models as constrained PARAFAC models can be used to deduce both matrix unfoldings by means of the general formula (37), and sufficient conditions for essential uniqueness of such PARATUCK models, as will be shown in Section IV.
• Now, we show the equivalence of the expressions (72) and (54) of the core tensor. Applying the formula (38) to the PARAFAC model (72) gives a flat unfolding of the core tensor; using the identity (67) in Eq. (73) then puts it into matrix Hadamard product form. The formula (54), written for $N = 3$ and $N_1 = 2$ in terms of the matrix Hadamard product, with $\mathbf{c}_{1\times R_1R_2} = \mathrm{vec}^T(\mathbf{C}^T)$, gives the same expression, showing the equivalence of the two core tensor expressions (72) and (54).
3) Link between PARATUCK-$(N-2,N)$ and constrained PARAFAC-$N$ models: Let us consider the PARATUCK-$(N_1,N)$ model (52) in the case $N_1 = N-2$, and let us define the change of variables combining the indices $(r_1,\cdots,r_{N_1})$ into a single index $r\in[1,R]$, with $R = \prod_{n=1}^{N_1}R_n$, corresponding to a combination of the $N_1$ modes associated with the constraints/allocations. Eq. (74) can then be written as the constrained PARAFAC-$N$ model (75), whose matrix factors are built from the $\mathbf{A}^{(n)}$ and $\mathcal{C}$, the constraint matrices being given in (101). The constrained PARAFAC model (75) can also be written as a Tucker-$(N_1,N)$ model (30), with the core tensor defined in (54).

H. Comparison of constrained tensor models
To conclude this presentation, we compare the so-called CONFAC-$(N_1,N)$ and PARATUCK-$(N_1,N)$ constrained tensor models, introduced in this paper from a resource allocation point of view. Due to the PARAFAC structure (41) of the core tensor of CONFAC models, each element $x_{i_1,\cdots,i_N}$ of the output tensor $\mathcal{X}$ is the sum of $R$ components, as shown in (43). Moreover, due to the special structure of the allocation matrices $\mathbf{\Phi}^{(n)}$, whose columns are unit vectors, each component $r$ is the result of a combination of $N$ resources, under the form of the product $\prod_{n=1}^{N}a^{(n)}_{i_n,r_n}$, the $N$ resources being fixed by the allocation matrices $\mathbf{\Phi}^{(n)}\in\mathbb{R}^{R_n\times R}$. With the CONFAC-$(N_1,N)$ model (49), each component $r$ is a combination of $N_1$ resources $(r_1,\cdots,r_{N_1})$, determined by the allocation matrices $\mathbf{\Phi}^{(n)}$, $n = 1,\cdots,N_1$. There are two main differences between the PARATUCK-$(N_1,N)$ models (52) and the CONFAC models (42). The first one is that the allocation matrices of PARATUCK models, formed with 0's and 1's, do not necessarily have unit vectors as columns, which means that it is possible to allocate several resources $r_n$ to the $(N_1+1)$th mode of the output tensor $\mathcal{X}$. The second one results from the interpretation of PARATUCK-$(N_1,N)$ models as Tucker-$(N_1,N)$ models, implying that each element of the core tensor involves an entry of the allocation tensor $\mathcal{F}$ defined in (53), each term being a combination of resources under the form of products $\prod_{n=1}^{N_1}a^{(n)}_{i_n,r_n}$. Moreover, in telecommunication applications, the input tensor $\mathcal{C}$ can be used as a code tensor.
Another way to compare PARALIND/CONFAC and PARATUCK models is in terms of the dependencies/interactions between their factor matrices. In the case of PARALIND/CONFAC models, as pointed out by Eq. (42), the constraint matrices act independently on each factor matrix, making explicit the linear dependencies between the columns of these matrices. For PARATUCK models, their writing as Tucker-$(N_1,N)$ models, with the core tensor defined in (54), allows one to interpret the tensor $\mathcal{F}$ as an interaction tensor defining interactions between the $N_1$ factor matrices, the tensor $\mathcal{C}$ providing the strength of these interactions.
The main constrained PARAFAC models are summarized in Tables I and II.

IV. Uniqueness Issue
Several results exist for the essential uniqueness of PARAFAC models, i.e. uniqueness of the factor matrices up to column permutation and scaling. These results concern both deterministic and generic uniqueness, i.e. uniqueness for a particular PARAFAC model, or uniqueness with probability one in the case where the entries of the factor matrices are drawn from continuous distributions. An overview of the main uniqueness conditions for PARAFAC models of third-order tensors can be found in [81] for the deterministic case, and in [82] for the generic case. Hereafter, we briefly summarize some basic results on the uniqueness of PARAFAC models. The case with linearly dependent loadings is also discussed. Then, we present new results concerning the uniqueness of PARATUCK models. These results are directly deduced from sufficient conditions for essential uniqueness of their associated constrained PARAFAC models, as established in the previous section. Since these conditions involve the notion of the k-rank of a matrix, we first recall its definition: the k-rank (or Kruskal rank) of a matrix $\mathbf{A}$, denoted $k_{\mathbf{A}}$, is the largest integer $k$ such that every set of $k$ columns of $\mathbf{A}$ is linearly independent.

A. Uniqueness of PARAFAC-N models [80]
The PARAFAC-$N$ model (33)-(35) is essentially unique, i.e. its factor matrices $\mathbf{A}^{(n)}$ are unique up to column permutation and scaling, if
$$\sum_{n=1}^{N}k_{\mathbf{A}^{(n)}} \geq 2R + N - 1. \qquad (76)$$
Essential uniqueness means that two sets of factor matrices are linked by the relations $\hat{\mathbf{A}}^{(n)} = \mathbf{A}^{(n)}\mathbf{\Pi}\mathbf{\Delta}^{(n)}$, where $\mathbf{\Pi}$ is a permutation matrix and the $\mathbf{\Delta}^{(n)}$ are diagonal scaling matrices such that $\prod_{n=1}^{N}\mathbf{\Delta}^{(n)} = \mathbf{I}_R$. In the generic case, the factor matrices are full rank, which implies $k_{\mathbf{A}^{(n)}} = \min(I_n,R)$, and Kruskal's condition (76) becomes
$$\sum_{n=1}^{N}\min(I_n,R) \geq 2R + N - 1. \qquad (77)$$

Case of third-order PARAFAC models
Consider a third-order tensor $\mathcal{X}\in\mathbb{C}^{I\times J\times K}$ of rank $R$, satisfying a PARAFAC model with matrix factors $(\mathbf{A},\mathbf{B},\mathbf{C})$. Kruskal's condition (76) becomes
$$k_{\mathbf{A}} + k_{\mathbf{B}} + k_{\mathbf{C}} \geq 2R + 2. \qquad (78)$$
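For small matrices, the k-rank can be computed by brute force, and Kruskal's condition (78) checked directly; the following sketch (our illustration, practical only for small $R$) does exactly that:

```python
import numpy as np
from itertools import combinations

def k_rank(A, tol=1e-10):
    """Largest k such that EVERY set of k columns of A is linearly independent."""
    R = A.shape[1]
    for k in range(R, 0, -1):
        if all(np.linalg.matrix_rank(A[:, list(cols)], tol=tol) == k
               for cols in combinations(range(R), k)):
            return k
    return 0

# Kruskal's sufficient condition for a third-order PARAFAC model of rank R
A, B, C = np.random.randn(4, 3), np.random.randn(5, 3), np.random.randn(6, 3)
R = 3
print(k_rank(A) + k_rank(B) + k_rank(C) >= 2 * R + 2)   # generically True
```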

Remarks
• The condition (76) is sufficient but not necessary for essential uniqueness. It does not hold when $R = 1$. It is also necessary for $R = 2$ and $R = 3$, but not for $R > 3$. See [83].
• The first sufficient condition for essential uniqueness of third-order PARAFAC models was established by Harshman in [84], then generalized by Kruskal in [52] using the concept of k-rank.
A more accessible proof of Kruskal's condition is provided in [85]. Kruskal's condition was extended to complex-valued tensors in [15], and to $N$-way arrays, with $N > 3$, in [80].
• Necessary and sufficient uniqueness conditions more relaxed than Kruskal's were established for third- and fourth-order tensors, under the assumption that at least one matrix factor is full column rank [86], [87]. These conditions are complicated to apply. Other more relaxed conditions have recently been derived, independently by Stegeman [88] and Guo et al. [89], for third-order PARAFAC models with a full column-rank matrix factor.
• From the condition (78), we can conclude that, if two matrix factors ($\mathbf{A}$ and $\mathbf{B}$) are full column rank ($k_{\mathbf{A}} = k_{\mathbf{B}} = R$), then the PARAFAC model is essentially unique if the third matrix factor ($\mathbf{C}$) has no proportional columns ($k_{\mathbf{C}} > 1$).
• If one matrix factor ($\mathbf{C}$ for instance) is full column rank, then (78) gives
$$k_{\mathbf{A}} + k_{\mathbf{B}} \geq R + 2. \qquad (79)$$
In [88] and [89], it is shown that the PARAFAC model $(\mathbf{A},\mathbf{B},\mathbf{C})$, with $\mathbf{C}$ of full column rank, is essentially unique if the other two matrix factors $\mathbf{A}$ and $\mathbf{B}$ satisfy the following conditions:
$$r_{\mathbf{A}} + k_{\mathbf{B}} \geq R + 2 \quad\text{and}\quad k_{\mathbf{A}} + r_{\mathbf{B}} \geq R + 2. \qquad (80)$$
Conditions (80) are more relaxed than (79). Indeed, if for instance $k_{\mathbf{A}} = 2$ and $r_{\mathbf{A}} = k_{\mathbf{A}} + \delta$ with $\delta > 0$, application of (79) implies $k_{\mathbf{B}} = R$, i.e. $\mathbf{B}$ must be full column rank, whereas (80) gives $k_{\mathbf{B}} \geq R - \delta$, which does not require that $\mathbf{B}$ be full column rank.
• When one matrix factor ($\mathbf{C}$ for instance) is known and Kruskal's condition (78) is satisfied, as is often the case in telecommunication applications, essential uniqueness is ensured without permutation ambiguity, and with only scaling ambiguities $(\mathbf{\Lambda}_A,\mathbf{\Lambda}_B)$ such that $\mathbf{\Lambda}_A\mathbf{\Lambda}_B = \mathbf{I}_R$.

B. Uniqueness of PARAFAC models with linearly dependent loadings
If one matrix factor contains at least two proportional columns, i.e. its k-rank is equal to one, then Kruskal's condition (78) cannot be satisfied. In this case, partial uniqueness can be ensured, i.e. some columns of some matrix factors are essentially unique, while the others are unique up to multiplication by a non-singular matrix [90]. To illustrate this result, let us consider the PARAFAC model of a fourth-order tensor $\mathcal{X}\in\mathbb{C}^{I\times J\times K\times L}$ with factor matrices $(\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D})$, two of which ($\mathbf{A}$ and $\mathbf{B}$) have two identical columns at the same positions, so that the uniqueness condition (76) cannot be satisfied. In this case, we have partial uniqueness.
Indeed, the matrix slices (40) can be developed so as to separate the contributions of the distinct blocks $(\mathbf{A}_1,\mathbf{B}_1,\mathbf{C}_1,\mathbf{D}_1)$ and of the blocks $(\mathbf{C}_2,\mathbf{D}_2)$ associated with the repeated columns. From this expression, it is easy to conclude that the last two columns of $\mathbf{C}$ and $\mathbf{D}$ are unique up to a rotational indeterminacy. Indeed, if one replaces the matrices $(\mathbf{C}_2,\mathbf{D}_2)$ by $(\mathbf{C}_2\mathbf{T},\mathbf{D}_2\mathbf{T}^{-T})$, where $\mathbf{T}\in\mathbb{C}^{2\times 2}$ is a non-singular matrix, the matrix slices $\mathbf{X}_{ij..}$ remain unchanged. So, the PARAFAC model is said to be partially unique, in the sense that only the blocks $(\mathbf{A}_1,\mathbf{B}_1,\mathbf{C}_1,\mathbf{D}_1)$ are essentially unique, the blocks $(\mathbf{C}_2,\mathbf{D}_2)$ being unique up to a non-singular matrix. Essential uniqueness means that any alternative blocks $(\hat{\mathbf{A}}_1,\hat{\mathbf{B}}_1,\hat{\mathbf{C}}_1,\hat{\mathbf{D}}_1)$ are such that $\hat{\mathbf{A}}_1 = \mathbf{A}_1\mathbf{\Pi}\mathbf{\Delta}_a$, $\hat{\mathbf{B}}_1 = \mathbf{B}_1\mathbf{\Pi}\mathbf{\Delta}_b$, $\hat{\mathbf{C}}_1 = \mathbf{C}_1\mathbf{\Pi}\mathbf{\Delta}_c$, $\hat{\mathbf{D}}_1 = \mathbf{D}_1\mathbf{\Pi}\mathbf{\Delta}_d$, where $\mathbf{\Pi}$ is a permutation matrix, and $\mathbf{\Delta}_a$, $\mathbf{\Delta}_b$, $\mathbf{\Delta}_c$, $\mathbf{\Delta}_d$ are diagonal matrices such that $\mathbf{\Delta}_a\mathbf{\Delta}_b\mathbf{\Delta}_c\mathbf{\Delta}_d = \mathbf{I}_{R-2}$. In [91], sufficient conditions are provided for essential uniqueness of fourth-order CP models with one full column rank factor matrix, and at most three collinear factor matrices, i.e. having one (or more) column(s) proportional to another column. Uniqueness is ensured if any pair of proportional columns is not common to two collinear factors, which is not the case in the example above, due to the fact that the last two columns of $\mathbf{A}$ and $\mathbf{B}$ are assumed to be equal.
The PARALIND and CONFAC models represent a class of constrained PARAFAC models where the columns of one or more matrix factors are linearly dependent or collinear.In the case of CONFAC models, such a collinearity takes the form of repeated columns that are explicitly modeled by means of constraint matrices.The work [92] derived both essential uniqueness conditions and partial uniqueness conditions for PARALIND/CONFAC models of third-order tensors.Therein, the relation with uniqueness of constrained Tucker3 models and the block decomposition in rank-(L,L,1) terms is also discussed.The Note that uniqueness of the matrix factors of the contracted PARAFAC model (83) implies the uniqueness of the matrix factors A (1) and A (2) of the original PARATUCK- (2,4) model.This comes from the fact that A (1) and A (2) can be recovered (up to a scaling factor) from their Kronecker product [95].Application of the conditions (80) to the contracted PARAFAC model ( 83) allows deriving the following theorem.
Therein, the matrix factors $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ represent the symbol and channel matrices to be estimated, while the constraint matrices $\mathbf{\Phi}^{(1)}$ and $\mathbf{\Phi}^{(2)}$ play the role of allocation matrices of the transmission system, and the tensor $\mathcal{C}$ is the coding tensor. In this context, $\mathbf{\Phi}^{(1)}$, $\mathbf{\Phi}^{(2)}$ and $\mathcal{C}$ can be properly designed to satisfy the sufficient conditions of item 1) of the Theorem.
A5. Tensor extension of a matrix: Following the same demonstration as for (21) and (22), it is easy to deduce a more general formula for the extension of $\mathbf{B}\in\mathbb{C}^{I\times R_n}$ into a tensor $\mathcal{A}$, whose unfoldings can be written by means of the matrices $\mathbf{\Psi}_1 = \mathbf{1}_{MN}\otimes\mathbf{I}_I$ and $\mathbf{\Psi}_2 = \mathbf{I}_J\otimes\mathbf{1}^T_{KL}$.




Several tensor models, among which some are new, have been presented in a general and unified framework. The use of the index notation for mode combination based on Kronecker products provides an original and concise formalism for both matricizing tensors and writing tensor models.

A3. Proof of (37): Substituting the expression (34) of $x_{i_1,\cdots,i_N}$ into (19) and using the identities (16) and (14) gives
$$\mathbf{X}_{S_1;S_2} = \Big(\mathop{\diamond}_{n\in S_1}\mathbf{A}^{(n)}\Big)\Big(\mathop{\diamond}_{n\in S_2}\mathbf{A}^{(n)}\Big)^T.$$