
Overview of constrained PARAFAC models

Abstract

In this paper, we present an overview of constrained parallel factor (PARAFAC) models where the constraints model linear dependencies among columns of the factor matrices of the tensor decomposition or, alternatively, the pattern of interactions between different modes of the tensor, which are captured by the equivalent core tensor. Some tensor prerequisites, with a particular emphasis on mode combination using Kronecker products of canonical vectors that facilitates matricization operations, are first introduced. This Kronecker product-based approach is also formulated in terms of an index notation, which provides an original and concise formalism for both matricizing tensors and writing tensor models. Then, after a brief reminder of PARAFAC and Tucker models, two families of constrained tensor models, the so-called PARALIND/CONFAC and PARATUCK models, are described in a unified framework, for $N$th-order tensors. New tensor models, called nested Tucker models and block PARALIND/CONFAC models, are also introduced. A link between PARATUCK models and constrained PARAFAC models is then established. Finally, new uniqueness properties of PARATUCK models are deduced from sufficient conditions for essential uniqueness of their associated constrained PARAFAC models.

1 Review

1.1 Introduction

Tensor calculus was introduced in differential geometry at the end of the nineteenth century, and tensor analysis was then developed in the context of Einstein's theory of general relativity, with the introduction of index notation, the so-called Einstein summation convention, at the beginning of the twentieth century, which simplifies and shortens physics equations involving tensors. Index notation is also useful for simplifying multivariate statistical calculations, particularly those involving cumulant tensors[1]. Generally speaking, tensors are used in physics and differential geometry for characterizing the properties of a physical system, representing fundamental laws of physics, and defining geometrical objects whose components are functions. When these functions are defined over a continuum of points of a mathematical space, the tensor forms what is called a tensor field, a generalization of a vector field, used to solve problems involving curved surfaces or spaces, as is the case of curved space-time in general relativity. From a mathematical point of view, two other approaches are possible for defining tensors, in terms of tensor products of vector spaces or of multilinear maps. Symmetric tensors can also be linked with homogeneous polynomials[2].

After the first tensor developments by mathematicians and physicists, the need to analyze collections of data matrices that can be seen as three-way data arrays gave rise to three-way models for data analysis, with the pioneering works of Tucker in psychometrics[3] and Harshman in phonetics[4], who proposed what are now referred to as the Tucker and parallel factor (PARAFAC) decompositions, respectively. The PARAFAC decomposition was independently proposed by Carroll and Chang[5] under the name canonical decomposition (CANDECOMP) and then called CANDECOMP/PARAFAC (CP) in[6]. For a history of the development of multi-way models in the context of data analysis, see[7]. Since the 1990s, multi-way analysis has enjoyed growing success in chemistry and especially in chemometrics (see Bro's thesis[8] and the book by Smilde et al.[9] for a description of various chemical applications of three-way models, with a pedagogical presentation of these models and of various algorithms for estimating their parameters). During the same period, tensor tools were developed for signal processing applications, more particularly for solving the so-called blind source separation (BSS) problem using cumulant tensors (see[10]-[12] and De Lathauwer's thesis[13], where the concept of high-order singular value decomposition (HOSVD) is introduced, a tensor tool generalizing the standard matrix SVD to arrays of order higher than two). A recent overview of BSS approaches and applications can be found in the handbook co-edited by Comon and Jutten[14].

Nowadays, (high-order) tensors, also called multi-way arrays in the data analysis community, play an important role in many fields of application for representing and analyzing multidimensional data, as in psychometrics, chemometrics, food industry, environmental sciences, signal/image processing, computer vision, neuroscience, information sciences, data mining, and pattern recognition, among many others. In these fields, they are simply considered as multidimensional arrays of numbers, constituting a generalization of vectors and matrices, which are first- and second-order tensors, respectively, to orders higher than two. Tensor decompositions, also called tensor models, are very useful for analyzing multidimensional data in the form of signals, images, speech, music sequences, or texts, and also for designing new systems, as is the case for wireless communication systems since the publication of the seminal paper by Sidiropoulos et al.[15]. Besides the references already cited, overviews of tensor tools, models, algorithms, and applications can be found in[16]-[19].

Tensor models incorporating constraints (sparsity; non-negativity; smoothness; symmetry; column orthonormality of factor matrices; Hankel, Toeplitz, and Vandermonde structured matrix factors; allocation constraints...) have been the object of intensive work in recent years. Such constraints can be inherent to the problem under study or the result of a system design. An overview of the constraints on components of tensor models most often encountered in multi-way data analysis can be found in[7]. Incorporating constraints into tensor models may facilitate the physical interpretability of matrix factors. Moreover, imposing constraints may allow uniqueness conditions to be relaxed and specialized parameter estimation algorithms to be developed, with improved performance both in terms of accuracy and computational cost, as is the case for CP models with a column-wise orthonormal factor matrix[20]. One can classify the constraints into three main categories: i) sparsity/non-negativity, ii) structural, and iii) linear dependencies/mode interactions. It is worth noting that the three categories of constraints involve specific parameter estimation algorithms, the first two generally inducing an improvement of the uniqueness property of the tensor decomposition, while the third category implies a reduction of uniqueness, named partial uniqueness. We briefly review the main results concerning the first two types of constraints, Section 1.3 of this paper being dedicated to the third category.

Sparse and non‐negative tensor models have recently been the subject of many works in various fields of applications like computer vision[21, 22], image compression[23], hyperspectral imaging[24], music genre classification[25] and audio source separation[26], multi‐channel EEG (electroencephalography) and network traffic analysis[27], fluorescence analysis[28], data denoising and image classification[29], among many others. Two non‐negative tensor models have been more particularly studied in the literature, the so‐called non‐negative tensor factorization (NTF), i.e., PARAFAC models with non‐negativity constraints on the matrix factors, and non‐negative Tucker decomposition (NTD), i.e., Tucker models with non‐negativity constraints on the core tensor and/or the matrix factors. The crucial importance of NTF/NTD for multi‐way data analysis applications results from the very large volume of real‐world data to be analyzed under constraints of sparseness and non‐negativity of factors to be estimated, when only non‐negative parameters are physically interpretable. Many NTF/NTD algorithms are now available. Most of them can be viewed as high‐order extensions of non‐negative matrix factorization (NMF) methods, in the sense that they are based on an alternating minimization of cost functions incorporating sparsity measures (also named distances or divergences) with application of NMF methods to matricized or vectorized forms of the tensor to be decomposed (see for instance[16, 23, 28, 30] for NTF and[29, 31] for NTD). An overview of NMF and NTF/NTD algorithms can be found in[16].

The second category of constraints concerns the case where the core tensor and/or some matrix factors of the tensor model have a special structure. For instance, we recently proposed a nonlinear CDMA scheme for multiuser SIMO communication systems that is based on a constrained block-Tucker2 model whose core tensor, composed of the information symbols to be transmitted and their powers up to a certain degree, is characterized by matrix slices having a Vandermonde or a Hankel structure[32, 33]. We also developed Volterra-PARAFAC models for nonlinear system modeling and identification. These models are obtained by expanding high-order Volterra kernels, viewed as symmetric tensors, by means of symmetric or doubly symmetric PARAFAC decompositions[34, 35]. Block structured nonlinear systems like Wiener, Hammerstein, and parallel-cascade Wiener systems can be identified from their associated Volterra kernels, which admit symmetric PARAFAC decompositions with Toeplitz factors[36, 37]. Symmetric PARAFAC models with Hankel factors and symmetric block PARAFAC models with block Hankel factors are encountered for blind identification of multiple-input multiple-output (MIMO) linear channels using fourth-order cumulant tensors, in the cases of memoryless and convolutive channels, respectively[38, 39]. In the presence of structural constraints, specific estimation algorithms can be derived, as is the case for symmetric CP decompositions[40], CP decompositions with Toeplitz factors (in[41], an iterative solution was proposed, whereas in[42], a non-iterative algorithm was developed), Vandermonde factors[43], circulant factors[44], banded and/or structured matrix factors[45, 46], and also for Hankel and Vandermonde structured core tensors[33].

The rest of this paper is organized as follows: In Section 1.2, we present some tensor prerequisites, with a particular emphasis on mode combination using Kronecker products of canonical vectors, which facilitates matricization operations, especially to derive matrix representations of tensor models. This Kronecker product-based approach is also formulated in terms of an index notation, which provides an original and concise formalism for both matricizing tensors and writing tensor models. Then, we present the two most common tensor models, the so-called Tucker and PARAFAC models, in a general framework, i.e., for $N$th-order tensors. In Section 1.3, two families of constrained tensor models, the so-called PARALIND/CONFAC and PARATUCK models, are described in a unified way, with a generalization to $N$th-order tensors. New tensor models, called nested Tucker models and block PARALIND/CONFAC models, are also introduced. A link between PARATUCK models and constrained PARAFAC models is also established. In Section 1.4, uniqueness properties of PARATUCK models are deduced using this link. The paper is concluded in Section 2.

Notations and definitions. $\mathbb{R}$ and $\mathbb{C}$ denote the fields of real and complex numbers, respectively. Scalars, column vectors, matrices, and high-order tensors are denoted by lowercase, boldface lowercase, boldface uppercase, and calligraphic uppercase letters, e.g., $a$, $\mathbf{a}$, $\mathbf{A}$, and $\mathcal{A}$, respectively. The vector $\mathbf{A}_{i.}$ (resp. $\mathbf{A}_{.j}$) represents the $i$th row (resp. $j$th column) of $\mathbf{A}$.

$\mathbf{I}_N$, $\mathbf{1}_N^T$, and $e_n^{(N)}$ stand for the identity matrix of order $N$, the all-ones row vector of dimensions $1 \times N$, and the $n$th canonical vector of the Euclidean space $\mathbb{R}^N$, respectively.

$\mathbf{A}^T$, $\mathbf{A}^H$, $\mathbf{A}^{\dagger}$, $\mathrm{tr}(\mathbf{A})$, and $r_{\mathbf{A}}$ denote the transpose, the conjugate (Hermitian) transpose, the Moore-Penrose pseudo-inverse, the trace, and the rank of $\mathbf{A}$, respectively. $D_i(\mathbf{A}) = \mathrm{diag}(\mathbf{A}_{i.})$ represents the diagonal matrix having the elements of the $i$th row of $\mathbf{A}$ on its diagonal. The operator $\mathrm{bdiag}(\cdot)$ forms a block diagonal matrix from its matrix arguments, while the operator $\mathrm{vec}(\cdot)$ transforms a matrix into a column vector by stacking the columns of its matrix argument one on top of the other. In the case of a tensor, the $\mathrm{vec}(\cdot)$ operation is defined in (6).

The outer product (also called tensor product) and the matrix Kronecker, Khatri-Rao (column-wise Kronecker), and Hadamard (element-wise) products are denoted by $\circ$, $\otimes$, $\diamond$, and $\circledast$, respectively.

Let us consider the set $S = \{n_1, \ldots, n_N\}$ obtained by permuting the elements of the set $\{1, \ldots, N\}$. For $\mathbf{A}^{(n)} \in \mathbb{C}^{I_n \times R_n}$ and $\mathbf{u}^{(n)} \in \mathbb{C}^{I_n \times 1}$, $n = 1, \ldots, N$, we define

$$\underset{n \in S}{\otimes} \mathbf{A}^{(n)} = \mathbf{A}^{(n_1)} \otimes \mathbf{A}^{(n_2)} \otimes \cdots \otimes \mathbf{A}^{(n_N)} \in \mathbb{C}^{I_{n_1} \cdots I_{n_N} \times R_{n_1} \cdots R_{n_N}};$$
(1)
$$\begin{aligned} \underset{n \in S}{\diamond} \mathbf{A}^{(n)} &= \mathbf{A}^{(n_1)} \diamond \mathbf{A}^{(n_2)} \diamond \cdots \diamond \mathbf{A}^{(n_N)} \in \mathbb{C}^{I_{n_1} \cdots I_{n_N} \times R}, \ \text{when } R_n = R,\ n = 1, \ldots, N; \\ \underset{n \in S}{\circledast} \mathbf{A}^{(n)} &= \mathbf{A}^{(n_1)} \circledast \mathbf{A}^{(n_2)} \circledast \cdots \circledast \mathbf{A}^{(n_N)} \in \mathbb{C}^{I \times R}, \ \text{when } I_n = I \text{ and } R_n = R,\ n = 1, \ldots, N; \\ \underset{n \in S}{\circ} \mathbf{u}^{(n)} &= \mathbf{u}^{(n_1)} \circ \mathbf{u}^{(n_2)} \circ \cdots \circ \mathbf{u}^{(n_N)} \in \mathbb{C}^{I_{n_1} \times \cdots \times I_{n_N}}. \end{aligned}$$
(2)

The outer product of $N$ non-zero vectors defines a rank-one tensor of order $N$.

By convention, the order of dimensions is directly related to the order of variation of the associated indices. For instance, in (1) and (2), the product $I_{n_1} I_{n_2} \cdots I_{n_N}$ of dimensions means that $n_1$ is the index varying the most slowly while $n_N$ is the index varying the fastest in the computation of the Kronecker products.

For $S = \{1, \ldots, N\}$, we have the following identities:

$$\left( \underset{n \in S}{\circ} \mathbf{u}^{(n)} \right)_{i_1, \ldots, i_N} = \prod_{n=1}^{N} u_{i_n}^{(n)}, \qquad \left( \underset{n \in S}{\otimes} \mathbf{u}^{(n)} \right)_i = \prod_{n=1}^{N} u_{i_n}^{(n)} \ \text{with} \ i = i_N + \sum_{n=1}^{N-1} (i_n - 1) \prod_{j=n+1}^{N} I_j.$$
(3)

In particular, for $\mathbf{u} \in \mathbb{C}^{I \times 1}$, $\mathbf{v} \in \mathbb{C}^{J \times 1}$, and $\mathbf{w} \in \mathbb{C}^{K \times 1}$,

$$\mathcal{X} = \mathbf{u} \circ \mathbf{v} \circ \mathbf{w} \in \mathbb{C}^{I \times J \times K} \ \Leftrightarrow \ x_{ijk} = u_i v_j w_k, \qquad \mathbf{x} = \mathbf{u} \otimes \mathbf{v} \otimes \mathbf{w} \in \mathbb{C}^{IJK \times 1} \ \Leftrightarrow \ x_{k + (j-1)K + (i-1)JK} = u_i v_j w_k.$$
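
As a sanity check of these two formulas, the following NumPy sketch (with arbitrary sizes and random data, assumed only for illustration) builds the rank-one tensor and its vectorized form and verifies the index correspondence.

```python
import numpy as np

I, J, K = 2, 3, 4
rng = np.random.default_rng(0)
u, v, w = rng.standard_normal(I), rng.standard_normal(J), rng.standard_normal(K)

# Outer product: X = u o v o w, a rank-one third-order tensor
X = np.einsum('i,j,k->ijk', u, v, w)

# Kronecker product: x = u kron v kron w, with 1-based entry index
# k + (j-1)K + (i-1)JK, i.e., i*J*K + j*K + k in 0-based indexing
x = np.kron(np.kron(u, v), w)

i, j, k = 1, 2, 3                       # 0-based indices
assert np.isclose(X[i, j, k], u[i] * v[j] * w[k])
assert np.isclose(x[i * J * K + j * K + k], u[i] * v[j] * w[k])
```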

Some useful matrix formulae are recalled in Appendix 1.

1.2 Tensor prerequisites

In this paper, a tensor is simply viewed as a multidimensional array of measurements. Depending on whether these measurements are real- or complex-valued, we have a real- or complex-valued tensor, respectively. The order $N$ of a tensor refers to the number of indices that characterize its elements $x_{i_1, \ldots, i_N}$, each index $i_n$ ($i_n = 1, \ldots, I_n$, for $n = 1, \ldots, N$) being associated with a dimension, also called a way or a mode, $I_n$ denoting the mode-$n$ dimension.

An $N$th-order complex-valued tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$, also called an $N$-way array, of dimensions $I_1 \times \cdots \times I_N$, can be written as

$$\mathcal{X} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} x_{i_1, \ldots, i_N} \underset{n=1}{\overset{N}{\circ}} e_{i_n}^{(I_n)}.$$
(4)

The coefficients $x_{i_1, \ldots, i_N}$ represent the coordinates of $\mathcal{X}$ in the canonical basis $\left\{ \underset{n=1}{\overset{N}{\circ}} e_{i_n}^{(I_n)},\ i_n = 1, \ldots, I_n;\ n = 1, \ldots, N \right\}$ of the space $\mathbb{C}^{I_1 \times \cdots \times I_N}$.

The identity tensor of order $N$ and dimensions $I \times \cdots \times I$, denoted by $\mathcal{I}_{N,I}$ or simply $\mathcal{I}$, is a diagonal hypercubic tensor whose elements $\delta_{i_1, \ldots, i_N}$ are defined by means of the generalized Kronecker delta, i.e., $\delta_{i_1, \ldots, i_N} = 1$ if $i_1 = \cdots = i_N$ and $0$ otherwise, with $I_n = I$, $n = 1, \ldots, N$. It can be written as

$$\mathcal{I}_{N,I} = \sum_{i=1}^{I} \underbrace{e_i^{(I)} \circ \cdots \circ e_i^{(I)}}_{N \ \text{terms}}.$$

Different reduced-order tensors can be obtained by slicing the tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$ along one mode or $p$ modes, i.e., by fixing one index $i_n$ or a set of $p$ indices $\{i_{n_1}, \ldots, i_{n_p}\}$, which gives a tensor of order $N-1$ or $N-p$, respectively. For instance, by slicing $\mathcal{X}$ along its mode $n$, we get the $i_n$th mode-$n$ slice of $\mathcal{X}$, denoted by $\mathcal{X}_{i_n}$, which can be written as

$$\mathcal{X}_{i_n} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_{n-1}=1}^{I_{n-1}} \sum_{i_{n+1}=1}^{I_{n+1}} \cdots \sum_{i_N=1}^{I_N} x_{i_1, \ldots, i_n, \ldots, i_N}\ e_{i_{n+1}}^{(I_{n+1})} \circ \cdots \circ e_{i_N}^{(I_N)} \circ e_{i_1}^{(I_1)} \circ \cdots \circ e_{i_{n-1}}^{(I_{n-1})} \in \mathbb{C}^{I_{n+1} \times \cdots \times I_N \times I_1 \times \cdots \times I_{n-1}}.$$

For instance, by slicing the third-order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$ along each mode, we get three types of matrix slices, respectively called horizontal, lateral, and frontal slices:

$$\mathbf{X}_{i..} \in \mathbb{C}^{J \times K}, \quad \mathbf{X}_{.j.} \in \mathbb{C}^{K \times I}, \quad \mathbf{X}_{..k} \in \mathbb{C}^{I \times J}, \quad \text{with } i = 1, \ldots, I;\ j = 1, \ldots, J;\ k = 1, \ldots, K.$$

1.2.1 Tensor Hadamard product

Consider $\mathcal{A} \in \mathbb{C}^{R_1 \times \cdots \times R_N \times I_1 \times \cdots \times I_{P_1}}$ and $\mathcal{B} \in \mathbb{C}^{R_1 \times \cdots \times R_N \times I_{P_1+1} \times \cdots \times I_P}$, where $\{i_1, \ldots, i_{P_1}\}$ and $\{i_{P_1+1}, \ldots, i_P\}$ are two disjoint ordered subsets of the set of indices $\{i_1, \ldots, i_P\}$, and $R = \{r_1, \ldots, r_N\}$.

We define the Hadamard product of $\mathcal{A}$ with $\mathcal{B}$ along their common modes as the tensor $\mathcal{C} \in \mathbb{C}^{R_1 \times \cdots \times R_N \times I_1 \times \cdots \times I_P}$ such that

$$\mathcal{C} = \mathcal{A} \underset{R}{\circledast} \mathcal{B} \ \Leftrightarrow \ c_{r_1, \ldots, r_N, i_1, \ldots, i_P} = a_{r_1, \ldots, r_N, i_1, \ldots, i_{P_1}}\, b_{r_1, \ldots, r_N, i_{P_1+1}, \ldots, i_P}.$$

For instance, given two third-order tensors $\mathcal{A} \in \mathbb{C}^{R_1 \times R_2 \times I_1}$ and $\mathcal{B} \in \mathbb{C}^{R_1 \times R_2 \times I_2}$, the Hadamard product $\mathcal{A} \underset{\{r_1, r_2\}}{\circledast} \mathcal{B}$ gives a fourth-order tensor $\mathcal{C} \in \mathbb{C}^{R_1 \times R_2 \times I_1 \times I_2}$ such that

$$c_{r_1, r_2, i_1, i_2} = a_{r_1, r_2, i_1}\, b_{r_1, r_2, i_2}.$$

Such a tensor Hadamard product can be computed by means of the matrix Hadamard product of matrix unfoldings of extended tensors, as defined in (21) and (22) (see also (94) to (96) in Appendix 2). For the example above, we have

$$\mathbf{C}_{R_1 R_2 \times I_1 I_2} = \left[ \mathbf{A}_{R_1 R_2 \times I_1} \left( \mathbf{I}_{I_1} \otimes \mathbf{1}_{I_2}^T \right) \right] \circledast \left[ \mathbf{B}_{R_1 R_2 \times I_2} \left( \mathbf{1}_{I_1}^T \otimes \mathbf{I}_{I_2} \right) \right].$$

Example

For $\mathbf{A}_{R \times I_1} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}$, $\mathbf{B}_{R \times I_2} = \begin{bmatrix} b_1 & b_2 \\ b_3 & b_4 \end{bmatrix}$, and the tensor $\mathcal{C}$ such that $c_{r, i_1, i_2} = a_{r, i_1} b_{r, i_2}$, a mode-1 flat matrix unfolding of $\mathcal{C}$ is given by

$$\mathbf{C}_{R \times I_1 I_2} = \left[ \mathbf{A}_{R \times I_1} \left( \mathbf{I}_2 \otimes \mathbf{1}_2^T \right) \right] \circledast \left[ \mathbf{B}_{R \times I_2} \left( \mathbf{1}_2^T \otimes \mathbf{I}_2 \right) \right] = \begin{bmatrix} a_1 & a_1 & a_2 & a_2 \\ a_3 & a_3 & a_4 & a_4 \end{bmatrix} \circledast \begin{bmatrix} b_1 & b_2 & b_1 & b_2 \\ b_3 & b_4 & b_3 & b_4 \end{bmatrix} = \begin{bmatrix} a_1 b_1 & a_1 b_2 & a_2 b_1 & a_2 b_2 \\ a_3 b_3 & a_3 b_4 & a_4 b_3 & a_4 b_4 \end{bmatrix}.$$
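
The following NumPy sketch (dimensions chosen arbitrarily for illustration) checks this matrix computation of the tensor Hadamard product against the element-wise definition.

```python
import numpy as np

R, I1, I2 = 2, 2, 2
rng = np.random.default_rng(1)
A = rng.standard_normal((R, I1))
B = rng.standard_normal((R, I2))

# Matrix form: C_{R x I1 I2} = [A (I_{I1} kron 1_{I2}^T)] * [B (1_{I1}^T kron I_{I2})]
C_mat = (A @ np.kron(np.eye(I1), np.ones((1, I2)))) \
      * (B @ np.kron(np.ones((1, I1)), np.eye(I2)))

# Element-wise definition: c_{r,i1,i2} = a_{r,i1} b_{r,i2}
C = np.einsum('ri,rj->rij', A, B)

# The mode-1 flat unfolding (i2 varying fastest) matches the matrix form
assert np.allclose(C.reshape(R, I1 * I2), C_mat)
```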

1.2.2 Mode combination

Different contraction operations can be defined depending on the way in which the modes are combined. Let us partition the set $\{1, \ldots, N\}$ into $N_1$ ordered subsets $S_{n_1}$, composed of $p(n_1)$ elements, with $\sum_{n_1=1}^{N_1} p(n_1) = N$. Each subset $S_{n_1}$ is associated with a combined mode of dimension $J_{n_1} = \prod_{n \in S_{n_1}} I_n$. These mode combinations allow the $N$th-order tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$ to be rewritten as an $N_1$th-order tensor $\mathcal{Y} \in \mathbb{C}^{J_1 \times \cdots \times J_{N_1}}$ as follows:

$$\mathcal{Y} = \sum_{j_1=1}^{J_1} \cdots \sum_{j_{N_1}=1}^{J_{N_1}} x_{j_1, \ldots, j_{N_1}} \underset{n_1=1}{\overset{N_1}{\circ}} e_{j_{n_1}}^{(J_{n_1})} \quad \text{with} \quad e_{j_{n_1}}^{(J_{n_1})} = \underset{n \in S_{n_1}}{\otimes} e_{i_n}^{(I_n)}.$$
(5)

Two particular mode combinations corresponding to the vectorization and matricization operations are now detailed.

1.2.3 Vectorization

The vectorization of $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$ is associated with the combination of the $N$ modes into a single mode of dimension $J = \prod_{n=1}^{N} I_n$, which amounts to replacing the outer product in (4) by the Kronecker product:

$$\mathrm{vec}(\mathcal{X}) = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} x_{i_1, \ldots, i_N} \underset{n=1}{\overset{N}{\otimes}} e_{i_n}^{(I_n)} \in \mathbb{C}^{I_1 \cdots I_N \times 1},$$
(6)

the element $x_{i_1, \ldots, i_N}$ of $\mathcal{X}$ being the $i$th entry of $\mathrm{vec}(\mathcal{X})$, with $i$ defined as in (3).

The vectorization can also be carried out after a permutation $\pi(i_n)$, $n = 1, \ldots, N$, of the indices.

1.2.4 Matricization or unfolding

There are different ways of matricizing the tensor $\mathcal{X}$, according to the partitioning of the set $\{1, \ldots, N\}$ into two ordered subsets $S_1$ and $S_2$, composed of $p$ and $N - p$ indices, respectively. A general matricization formula, for $p \in [1, N-1]$, is

$$\mathbf{X}_{S_1; S_2} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} x_{i_1, \ldots, i_N} \left( \underset{n \in S_1}{\otimes} e_{i_n}^{(I_n)} \right) \left( \underset{n \in S_2}{\otimes} e_{i_n}^{(I_n)} \right)^T \in \mathbb{C}^{J_1 \times J_2}$$
(7)

with $J_{n_1} = \prod_{n \in S_{n_1}} I_n$, for $n_1 = 1$ and $2$. From (7), we can deduce the following expression of the element $x_{i_1, \ldots, i_N}$ in terms of the matrix unfolding $\mathbf{X}_{S_1; S_2}$:

$$x_{i_1, \ldots, i_N} = \left( \underset{n \in S_1}{\otimes} e_{i_n}^{(I_n)} \right)^T \mathbf{X}_{S_1; S_2} \left( \underset{n \in S_2}{\otimes} e_{i_n}^{(I_n)} \right).$$
(8)

1.2.5 Particular case: mode-$n$ matrix unfoldings $\mathbf{X}_n$

A flat mode-$n$ matrix unfolding of the tensor $\mathcal{X}$ corresponds to an unfolding of the form $\mathbf{X}_{S_1; S_2}$ with $S_1 = \{n\}$ and $S_2 = \{n+1, \ldots, N, 1, \ldots, n-1\}$, which gives

$$\mathbf{X}_{I_n \times I_{n+1} \cdots I_N I_1 \cdots I_{n-1}} = \mathbf{X}_n = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} x_{i_1, \ldots, i_N}\ e_{i_n}^{(I_n)} \left( \underset{m \in S_2}{\otimes} e_{i_m}^{(I_m)} \right)^T \in \mathbb{C}^{I_n \times I_{n+1} \cdots I_N I_1 \cdots I_{n-1}}.$$
(9)

We can also define a tall mode-$n$ matrix unfolding of $\mathcal{X}$ by choosing $S_1 = \{n+1, \ldots, N, 1, \ldots, n-1\}$ and $S_2 = \{n\}$. Then, we have $\mathbf{X}_{I_{n+1} \cdots I_N I_1 \cdots I_{n-1} \times I_n} = \mathbf{X}_n^T \in \mathbb{C}^{I_{n+1} \cdots I_N I_1 \cdots I_{n-1} \times I_n}$.

The column vectors of a flat mode-$n$ matrix unfolding $\mathbf{X}_n$ are the mode-$n$ vectors of $\mathcal{X}$, and the rank of $\mathbf{X}_n$, i.e., the dimension of the linear space spanned by the mode-$n$ vectors, is called the mode-$n$ rank of $\mathcal{X}$, denoted by $\mathrm{rank}_n(\mathcal{X})$.

In the case of a third-order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$, there are six different flat unfoldings, denoted $\mathbf{X}_{I \times JK}$, $\mathbf{X}_{I \times KJ}$, $\mathbf{X}_{J \times KI}$, $\mathbf{X}_{J \times IK}$, $\mathbf{X}_{K \times IJ}$, and $\mathbf{X}_{K \times JI}$. For instance, we have

$$\mathbf{X}_{I \times JK} = \mathbf{X}_{\{1\};\{2,3\}} = \sum_{i=1}^{I} \sum_{j=1}^{J} \sum_{k=1}^{K} x_{i,j,k}\ e_i^{(I)} \left( e_j^{(J)} \otimes e_k^{(K)} \right)^T.$$
(10)

Using the properties (84), (85), and (87) of the Kronecker product gives

$$\mathbf{X}_{I \times JK} = \sum_{j=1}^{J} \left( e_j^{(J)} \right)^T \otimes \left( \sum_{i=1}^{I} \sum_{k=1}^{K} x_{i,j,k}\ e_i^{(I)} \left( e_k^{(K)} \right)^T \right) = \sum_{j=1}^{J} \left( e_j^{(J)} \right)^T \otimes \mathbf{X}_{.j.}^T = \begin{bmatrix} \mathbf{X}_{.1.}^T & \cdots & \mathbf{X}_{.J.}^T \end{bmatrix} \in \mathbb{C}^{I \times JK}.$$

Similarly, there are six tall matrix unfoldings, denoted $\mathbf{X}_{JK \times I}$, $\mathbf{X}_{KJ \times I}$, $\mathbf{X}_{KI \times J}$, $\mathbf{X}_{IK \times J}$, $\mathbf{X}_{IJ \times K}$, and $\mathbf{X}_{JI \times K}$, like for instance

$$\mathbf{X}_{JK \times I} = \sum_{i=1}^{I} \sum_{j=1}^{J} \sum_{k=1}^{K} x_{i,j,k} \left( e_j^{(J)} \otimes e_k^{(K)} \right) \left( e_i^{(I)} \right)^T = \mathbf{X}_{I \times JK}^T \in \mathbb{C}^{JK \times I}.$$
(11)

Applying (8) to (10) gives

$$x_{i,j,k} = \left( e_i^{(I)} \right)^T \mathbf{X}_{I \times JK} \left( e_j^{(J)} \otimes e_k^{(K)} \right) = \left[ \mathbf{X}_{I \times JK} \right]_{i, (j-1)K + k}.$$
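
In NumPy, with row-major (C-order) storage the last index varies fastest, so these flat unfoldings reduce to reshape operations, as the following sketch (arbitrary sizes) illustrates.

```python
import numpy as np

I, J, K = 2, 3, 4
X = np.arange(I * J * K, dtype=float).reshape(I, J, K)   # x_{ijk}

# Flat mode-1 unfolding X_{I x JK}: column index (j-1)K + k in 1-based
# notation, i.e., j*K + k in 0-based indexing (k varies fastest)
X_I_JK = X.reshape(I, J * K)
i, j, k = 1, 2, 3
assert X_I_JK[i, j * K + k] == X[i, j, k]

# The tall unfolding X_{JK x I} is the transpose of the flat one
X_JK_I = X_I_JK.T

# X_{I x JK} = [X_{.1.}^T ... X_{.J.}^T], the lateral slices being X_{.j.} = X[:, j, :].T
assert np.allclose(X_I_JK, np.hstack([X[:, j, :] for j in range(J)]))
```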

1.2.6 Mode-$n$ product of a tensor with a matrix or a vector

The mode-$n$ product of $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$ with $\mathbf{A} \in \mathbb{C}^{J_n \times I_n}$ along the $n$th mode, denoted by $\mathcal{Y} = \mathcal{X} \times_n \mathbf{A}$, gives the tensor $\mathcal{Y}$ of order $N$ and dimensions $I_1 \times \cdots \times I_{n-1} \times J_n \times I_{n+1} \times \cdots \times I_N$, such that[47]

$$y_{i_1, \ldots, i_{n-1}, j_n, i_{n+1}, \ldots, i_N} = \sum_{i_n=1}^{I_n} a_{j_n, i_n}\, x_{i_1, \ldots, i_{n-1}, i_n, i_{n+1}, \ldots, i_N},$$
(12)

which can be expressed in terms of the mode-$n$ matrix unfoldings of $\mathcal{Y}$ and $\mathcal{X}$ as

$$\mathbf{Y}_n = \mathbf{A} \mathbf{X}_n.$$

This operation can be interpreted as the linear map from the mode-$n$ space of $\mathcal{X}$ to the mode-$n$ space of $\mathcal{Y}$ associated with the matrix $\mathbf{A}$.
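
A minimal NumPy sketch of the mode-$n$ product via an unfolding follows (the helper name mode_n_product is ours, not from the paper; sizes are arbitrary).

```python
import numpy as np

def mode_n_product(X, A, n):
    """Mode-n product Y = X x_n A, computed as Y_n = A X_n on an unfolding.
    The column ordering of the unfolding is immaterial as long as the
    result is folded back consistently."""
    Xn = np.moveaxis(X, n, 0)                    # bring mode n to the front
    Yn = A @ Xn.reshape(Xn.shape[0], -1)         # Y_n = A X_n
    Y = Yn.reshape((A.shape[0],) + Xn.shape[1:])
    return np.moveaxis(Y, 0, n)                  # put mode n back in place

# Check against the element-wise definition (12)
rng = np.random.default_rng(2)
X = rng.standard_normal((2, 3, 4))
A = rng.standard_normal((5, 3))                  # J_n x I_n for mode n = 2
Y = mode_n_product(X, A, 1)                      # 0-based axis 1
assert np.allclose(Y, np.einsum('jr,irk->ijk', A, X))
```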

The mode-$n$ product of $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$ with the row vector $\mathbf{u}^T \in \mathbb{C}^{1 \times I_n}$ along the $n$th mode, denoted by $\mathcal{X} \times_n \mathbf{u}^T$, gives a tensor of order $N-1$ and dimensions $I_1 \times \cdots \times I_{n-1} \times I_{n+1} \times \cdots \times I_N$, such that

$$y_{i_1, \ldots, i_{n-1}, i_{n+1}, \ldots, i_N} = \sum_{i_n=1}^{I_n} u_{i_n}\, x_{i_1, \ldots, i_{n-1}, i_n, i_{n+1}, \ldots, i_N},$$

which can be written in vectorized form as $\mathrm{vec}^T(\mathcal{Y}) = \mathbf{u}^T \mathbf{X}_n \in \mathbb{C}^{1 \times I_{n+1} \cdots I_N I_1 \cdots I_{n-1}}$.

When multiplying an $N$th-order tensor by row vectors along $p$ different modes, we get a tensor of order $N - p$. For instance, for a third-order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$, we have

$$\mathbf{X}_{ij.} = \mathcal{X} \times_1 \left( e_i^{(I)} \right)^T \times_2 \left( e_j^{(J)} \right)^T, \qquad x_{ijk} = \mathcal{X} \times_1 \left( e_i^{(I)} \right)^T \times_2 \left( e_j^{(J)} \right)^T \times_3 \left( e_k^{(K)} \right)^T.$$

Considering an ordered subset $S = \{m_1, \ldots, m_P\}$ of the set $\{1, \ldots, N\}$, a series of mode-$m_p$ products of $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$ with $\mathbf{A}^{(m_p)} \in \mathbb{C}^{J_{m_p} \times I_{m_p}}$, $p \in \{1, \ldots, P\}$, $P \le N$, will be concisely noted as

$$\mathcal{X} \times_{m_1} \mathbf{A}^{(m_1)} \cdots \times_{m_P} \mathbf{A}^{(m_P)} = \mathcal{X} \underset{m = m_1}{\overset{m_P}{\times}} \mathbf{A}^{(m)}.$$
Properties
  • For any permutation $\pi(\cdot)$ of $P$ distinct indices $m_p \in \{1, \ldots, N\}$ such that $q_p = \pi(m_p)$, $p \in \{1, \ldots, P\}$, with $P \le N$, we have

    $$\mathcal{X} \underset{q = q_1}{\overset{q_P}{\times}} \mathbf{A}^{(q)} = \mathcal{X} \underset{m = m_1}{\overset{m_P}{\times}} \mathbf{A}^{(m)},$$

    which means that the order of the mode-$m_p$ products is irrelevant when the indices $m_p$ are all distinct.

  • For two products of $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$ along the same mode $n$, with $\mathbf{A} \in \mathbb{C}^{J_n \times I_n}$ and $\mathbf{B} \in \mathbb{C}^{K_n \times J_n}$, we have[13]

    $$\mathcal{Y} = \mathcal{X} \times_n \mathbf{A} \times_n \mathbf{B} = \mathcal{X} \times_n (\mathbf{B}\mathbf{A}) \in \mathbb{C}^{I_1 \times \cdots \times I_{n-1} \times K_n \times I_{n+1} \times \cdots \times I_N}.$$
    (13)

1.2.7 Kronecker product‐based approach using index notation

In this subsection, we reformulate our Kronecker product-based approach for tensor matricization in terms of an index notation introduced in[48]. With this notation, a superscript index denotes a canonical column vector and a subscript index denotes a canonical row vector, so that, for $\mathbf{u} \in \mathbb{C}^{I \times 1}$, $\mathbf{v}^T \in \mathbb{C}^{1 \times J}$, and $\mathbf{X} \in \mathbb{C}^{I \times J}$, we can write

$$\mathbf{u} = \sum_{i=1}^{I} u_i e_i^{(I)} = u_i e^i, \qquad \mathbf{v}^T = \sum_{j=1}^{J} v_j \left( e_j^{(J)} \right)^T = v_j e_j, \qquad \mathbf{X} = \sum_{i=1}^{I} \sum_{j=1}^{J} x_{ij}\ e_i^{(I)} \left( e_j^{(J)} \right)^T = x_{ij} e_j^i, \qquad \mathbf{X}^T = x_{ij} e_i^j, \qquad \mathrm{vec}(\mathbf{X}) = x_{ij} e^{ji}.$$

As with the Einstein summation convention, the index notation allows summation signs to be dropped: if an index $i \in [1, I]$ is repeated in an expression (or, more generally, in a term of an equation), this expression (or this term) must be summed over that index from $1$ to $I$. However, it is worth noting two differences between the index notation used in this paper and the Einstein summation convention: (i) each index can be repeated more than twice in an expression, and (ii) the index notation can be used with ordered sets of indices. Note that the index notation can be interpreted in terms of two separate combinations of indices, one associated with the column (superscript) indices and the other with the row (subscript) indices, with the following rules:

  • the ordering of the column indices is independent of that of the row indices;

  • the relative ordering of the column indices, and that of the row indices, cannot be changed.

Considering the set $S = \{n_1, \ldots, n_N\}$ obtained by permuting the elements of $\{1, \ldots, N\}$ and defining the ordered set of indices $\mathbb{I} = \{i_{n_1}, \ldots, i_{n_N}\}$ associated with $S$, we denote by $e^{\mathbb{I}}$ and $e_{\mathbb{I}}$ the Kronecker products $\underset{n \in S}{\otimes} e_{i_n}$ and $\underset{n \in S}{\otimes} e_{i_n}^T$, respectively. So, we have

$$\underset{n \in S}{\otimes} \mathbf{u}^{(n)} = \left( \prod_{n \in S} u_{i_n}^{(n)} \right) e^{\mathbb{I}}.$$
(14)

Partitioning two ordered sets of indices $\mathbb{I}$ and $\mathbb{J}$ into two subsets $(\mathbb{I}_1, \mathbb{I}_2)$ and $(\mathbb{J}_1, \mathbb{J}_2)$, respectively, the rules stated previously imply the following identities:

$$e_{\mathbb{J}}^{\mathbb{I}} = e^{\mathbb{I}} \otimes e_{\mathbb{J}} = e_{\mathbb{J}} \otimes e^{\mathbb{I}} = e_{\mathbb{J}_1 \mathbb{J}_2}^{\mathbb{I}_1 \mathbb{I}_2} = e^{\mathbb{I}_1} \otimes e^{\mathbb{I}_2} \otimes e_{\mathbb{J}_1} \otimes e_{\mathbb{J}_2} = e^{\mathbb{I}_1} \otimes e_{\mathbb{J}_1} \otimes e^{\mathbb{I}_2} \otimes e_{\mathbb{J}_2} = e^{\mathbb{I}_1} \otimes e_{\mathbb{J}_1} \otimes e_{\mathbb{J}_2} \otimes e^{\mathbb{I}_2} = e_{\mathbb{J}_1} \otimes e_{\mathbb{J}_2} \otimes e^{\mathbb{I}_1} \otimes e^{\mathbb{I}_2} = e_{\mathbb{J}_1} \otimes e^{\mathbb{I}_1} \otimes e_{\mathbb{J}_2} \otimes e^{\mathbb{I}_2} = e_{\mathbb{J}_1} \otimes e^{\mathbb{I}_1} \otimes e^{\mathbb{I}_2} \otimes e_{\mathbb{J}_2}.$$

These identities directly result from the property that the Kronecker product of a column vector with a row vector is independent of the order of the vectors ($\mathbf{u} \otimes \mathbf{v}^T = \mathbf{v}^T \otimes \mathbf{u}$), which implies that, in a sequence of Kronecker products of column and row vectors, a column vector can be permuted with a row vector without altering the final result, provided the respective orderings of the column vectors and of the row vectors in the sequence are not changed ($\mathbf{u}_1 \otimes \mathbf{u}_2 \otimes \mathbf{v}^T = \mathbf{u}_1 \otimes \mathbf{v}^T \otimes \mathbf{u}_2 = \mathbf{v}^T \otimes \mathbf{u}_1 \otimes \mathbf{u}_2 \neq \mathbf{v}^T \otimes \mathbf{u}_2 \otimes \mathbf{u}_1$ if $\mathbf{u}_1 \neq \mathbf{u}_2$).

Using the index notation, the horizontal, lateral, and frontal slices of a third-order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$ can be written as

$$\mathbf{X}_{i..} = x_{ijk}\, e_k^j; \qquad \mathbf{X}_{.j.} = x_{ijk}\, e_i^k; \qquad \mathbf{X}_{..k} = x_{ijk}\, e_j^i.$$

The Kronecker products of vectors ($\mathbf{u} \in \mathbb{C}^{I \times 1}$, $\mathbf{v} \in \mathbb{C}^{J \times 1}$) and matrices ($\mathbf{A} \in \mathbb{C}^{I \times J}$, $\mathbf{B} \in \mathbb{C}^{K \times L}$) can be concisely written as

$$\mathbf{u} \otimes \mathbf{v} = (u_i e^i) \otimes (v_j e^j) = u_i v_j e^{ij}, \qquad \mathbf{u}^T \otimes \mathbf{v}^T = u_i v_j e_{ij}, \qquad \mathbf{u} \otimes \mathbf{v}^T = u_i v_j e_j^i, \qquad \mathbf{A} \otimes \mathbf{B} = \left( a_{ij} e_j^i \right) \otimes \left( b_{kl} e_l^k \right) = a_{ij} b_{kl}\, e_{jl}^{ik}, \qquad \mathbf{A}^T \otimes \mathbf{B}^T = a_{ij} b_{kl}\, e_{ik}^{jl}.$$

For $\mathbf{U} = \begin{bmatrix} \mathbf{u}^{(1)} & \cdots & \mathbf{u}^{(N)} \end{bmatrix} \in \mathbb{C}^{I \times N}$ and $\mathbf{V} = \begin{bmatrix} \mathbf{v}^{(1)} & \cdots & \mathbf{v}^{(N)} \end{bmatrix} \in \mathbb{C}^{J \times N}$, we have

$$\mathbf{U} \mathbf{V}^T = \sum_{n=1}^{N} \mathbf{u}^{(n)} \left( \mathbf{v}^{(n)} \right)^T = u_i^{(n)} v_j^{(n)} e_j^i,$$
(15)

where the summation over $n$ is to be carried out after the matricization $\mathbf{u}^{(n)} \left( \mathbf{v}^{(n)} \right)^T$.

Using the index notation, the Khatri‐Rao product can be written as follows:

$$\mathbf{A} \diamond \mathbf{B} = a_{ik} b_{jk}\, e_k^{ij}, \qquad (\mathbf{A} \diamond \mathbf{B})^T = a_{ik} b_{jk}\, e_{ij}^k.$$
(16)

The Kronecker and Khatri-Rao products defined in (1) and (2), with $a_{i_n, r_n}^{(n)}$ as entry of $\mathbf{A}^{(n)}$, can then be written as

$$\underset{n \in S}{\otimes} \mathbf{A}^{(n)} = \left( \prod_{n \in S} a_{i_n, r_n}^{(n)} \right) e_{r_{n_1}, \ldots, r_{n_N}}^{i_{n_1}, \ldots, i_{n_N}} = \left( \prod_{n \in S} a_{i_n, r_n}^{(n)} \right) e_{\mathbb{R}}^{\mathbb{I}}$$
(17)
$$\underset{n \in S}{\diamond} \mathbf{A}^{(n)} = \left( \prod_{n \in S} a_{i_n, r}^{(n)} \right) e_r^{i_{n_1}, \ldots, i_{n_N}} = \left( \prod_{n \in S} a_{i_n, r}^{(n)} \right) e_r^{\mathbb{I}}$$
(18)

where $\mathbb{R} = \{r_{n_1}, \ldots, r_{n_N}\}$.

Applying these results, the unfoldings (7), (10), and (11) and the formula (8) can be rewritten respectively as

$$\mathbf{X}_{S_1; S_2} = x_{i_1, \ldots, i_N}\, e_{\mathbb{I}_2}^{\mathbb{I}_1}$$
(19)
$$\mathbf{X}_{I \times JK} = x_{i,j,k}\, e_{jk}^i, \qquad \mathbf{X}_{JK \times I} = x_{i,j,k}\, e_i^{jk}, \qquad x_{i_1, \ldots, i_N} = e_{\mathbb{I}_1} \mathbf{X}_{S_1; S_2}\, e^{\mathbb{I}_2}$$
(20)

where $\mathbb{I}_1$ and $\mathbb{I}_2$ represent the sets of indices $i_n$ associated with the sets $S_1$ and $S_2$ of indices $n$, respectively.

We can also use the index notation to derive matrix unfoldings of tensor extensions of a matrix $\mathbf{B} \in \mathbb{C}^{I \times J}$. For instance, if we define the tensor $\mathcal{A} \in \mathbb{C}^{I \times J \times K}$ such that $a_{i,j,k} = b_{i,j}$ for $k = 1, \ldots, K$, mode-1 flat unfoldings of $\mathcal{A}$ are given by

$$\mathbf{A}_{I \times JK} = a_{i,j,k}\, e_{jk}^i = \left( b_{i,j}\, e_j^i \right) \otimes \left( \sum_{k=1}^{K} e_k \right) = \mathbf{B} \otimes \mathbf{1}_K^T = \mathbf{B} \left( \mathbf{I}_J \otimes \mathbf{1}_K^T \right)$$
(21)
$$\mathbf{A}_{I \times KJ} = a_{i,j,k}\, e_{kj}^i = \left( \sum_{k=1}^{K} e_k \right) \otimes \left( b_{i,j}\, e_j^i \right) = \mathbf{1}_K^T \otimes \mathbf{B} = \mathbf{B} \left( \mathbf{1}_K^T \otimes \mathbf{I}_J \right)$$
(22)

These two formulae will be used later for establishing the link between PARATUCK‐(2,4) models and constrained PARAFAC‐4 models.
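
The following NumPy sketch (arbitrary sizes) checks (21) and (22) numerically.

```python
import numpy as np

I, J, K = 2, 3, 4
rng = np.random.default_rng(3)
B = rng.standard_normal((I, J))

# Tensor extension: a_{i,j,k} = b_{i,j} for k = 1, ..., K
A = np.repeat(B[:, :, None], K, axis=2)

# (21): A_{I x JK} = B kron 1_K^T  (k varies fastest)
assert np.allclose(A.reshape(I, J * K), np.kron(B, np.ones((1, K))))

# (22): A_{I x KJ} = 1_K^T kron B  (j varies fastest)
assert np.allclose(A.transpose(0, 2, 1).reshape(I, K * J),
                   np.kron(np.ones((1, K)), B))
```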

1.2.8 Basic tensor models

We now present the two most common tensor models, i.e., the Tucker[3] and PARAFAC[4] models. In[7], these models are introduced in a constructive way, in the context of three-way data analysis. The Tucker models are presented as extensions of the matrix singular value decomposition (SVD) to three-way arrays, which gave rise to their generalization as the HOSVD[13, 49], whereas the PARAFAC model is introduced by emphasizing Cattell's principle of parallel proportional profiles[50] that underlies this model, thus explaining the acronym PARAFAC. In the following, we adopt a more general presentation for multi-way arrays, i.e., tensors of arbitrary order $N$.

Tucker models

For an $N$th-order tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$, a Tucker model is defined in element-wise form as

$$x_{i_1, \ldots, i_N} = \sum_{r_1=1}^{R_1} \cdots \sum_{r_N=1}^{R_N} g_{r_1, \ldots, r_N} \prod_{n=1}^{N} a_{i_n, r_n}^{(n)}$$
(23)

with $i_n = 1, \ldots, I_n$ for $n = 1, \ldots, N$, where $g_{r_1, \ldots, r_N}$ is an element of the core tensor $\mathcal{G} \in \mathbb{C}^{R_1 \times \cdots \times R_N}$ and $a_{i_n, r_n}^{(n)}$ is an element of the factor matrix $\mathbf{A}^{(n)} \in \mathbb{C}^{I_n \times R_n}$.

Using the index notation and defining the set of indices $\mathbb{R} = \{r_1, \ldots, r_N\}$, the Tucker model can also be written simply as

$$x_{i_1, \ldots, i_N} = g_{r_1, \ldots, r_N} \prod_{\mathbb{R}} a_{i_n, r_n}^{(n)},$$
(24)

the summations over the repeated indices $r_1, \ldots, r_N$ being implicit.

Taking the definition (4) into account and noting that $\sum_{i_n=1}^{I_n} a_{i_n, r_n}^{(n)} e_{i_n}^{(I_n)} = \mathbf{A}_{. r_n}^{(n)}$, this model can be written as a weighted sum of $\prod_{n=1}^{N} R_n$ outer products, i.e., rank-one tensors:

$$\mathcal{X} = \sum_{r_1=1}^{R_1} \cdots \sum_{r_N=1}^{R_N} g_{r_1, \ldots, r_N} \underset{n=1}{\overset{N}{\circ}} \mathbf{A}_{. r_n}^{(n)} = g_{r_1, \ldots, r_N} \underset{\mathbb{R}}{\circ} \mathbf{A}_{. r_n}^{(n)} \ \ (\text{with the index notation})$$
(25)

Using the definition (12), (23) can be written in terms of mode-$n$ products as

$$\mathcal{X} = \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \times_3 \cdots \times_N \mathbf{A}^{(N)} = \mathcal{G} \underset{n=1}{\overset{N}{\times}} \mathbf{A}^{(n)}.$$
(26)

This expression shows that the Tucker model can be viewed as the transformation of the core tensor $\mathcal{G}$ resulting from its multiplication by the factor matrix $\mathbf{A}^{(n)}$ along its mode $n$, which corresponds to a linear map applied to the mode-$n$ space of $\mathcal{G}$, for $n = 1, \ldots, N$, i.e., a multilinear map applied to $\mathcal{G}$. From a transformation point of view, $\mathcal{G}$ and $\mathcal{X}$ can be interpreted as the input tensor and the transformed, or output, tensor, respectively.
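
A minimal sketch of this multilinear transformation in NumPy follows (the helper names are ours; sizes are arbitrary).

```python
import numpy as np

def mode_n_product(X, A, n):
    """Mode-n product X x_n A (see Section 1.2.6)."""
    Xn = np.moveaxis(X, n, 0)
    Y = (A @ Xn.reshape(Xn.shape[0], -1)).reshape((A.shape[0],) + Xn.shape[1:])
    return np.moveaxis(Y, 0, n)

def tucker(G, factors):
    """X = G x_1 A^(1) x_2 ... x_N A^(N), as in (26)."""
    X = G
    for n, A in enumerate(factors):
        X = mode_n_product(X, A, n)
    return X

rng = np.random.default_rng(4)
G = rng.standard_normal((2, 3, 4))                        # core tensor
factors = [rng.standard_normal((In, Rn))                  # A^(n): I_n x R_n
           for In, Rn in zip((5, 6, 7), G.shape)]
X = tucker(G, factors)

# Element-wise definition (23)
assert np.allclose(X, np.einsum('pqr,ip,jq,kr->ijk', G, *factors))
```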

Matrix representations of the Tucker model. A matrix representation of a Tucker model is directly linked to a matricization of the tensor $\mathcal{X}$ like (7), corresponding to the combination of two sets of modes $S_1$ and $S_2$. These combinations must be applied to both the tensor $\mathcal{X}$ and its core tensor $\mathcal{G}$.

The matrix representation (7) of the Tucker model (23) is given by

$$\mathbf{X}_{S_1; S_2} = \left( \underset{n \in S_1}{\otimes} \mathbf{A}^{(n)} \right) \mathbf{G}_{S_1; S_2} \left( \underset{n \in S_2}{\otimes} \mathbf{A}^{(n)} \right)^T$$
(27)

with $\mathbf{G}_{S_1; S_2} \in \mathbb{C}^{J_1 \times J_2}$ and $J_{n_1} = \prod_{n \in S_{n_1}} R_n$, for $n_1 = 1$ and $2$.

Proof

See Appendix 3.

For the flat mode‐n unfolding, defined in (9), the formula (27) gives

$$\mathbf{X}_n = \mathbf{A}^{(n)} \mathbf{G}_n \left( \mathbf{A}^{(n+1)} \otimes \cdots \otimes \mathbf{A}^{(N)} \otimes \mathbf{A}^{(1)} \otimes \cdots \otimes \mathbf{A}^{(n-1)} \right)^T.$$
(28)

Applying the vec formula (92) to the right-hand side of (28), we obtain the vectorized form of $\mathcal{X}$ associated with its mode-$n$ unfolding $\mathbf{X}_n$:

$$\mathrm{vec}(\mathcal{X}) = \mathrm{vec}(\mathbf{X}_n) = \left( \mathbf{A}^{(n+1)} \otimes \cdots \otimes \mathbf{A}^{(N)} \otimes \mathbf{A}^{(1)} \otimes \cdots \otimes \mathbf{A}^{(n)} \right) \mathrm{vec}(\mathbf{G}_n).$$
Tucker-$(N_1, N)$ models

A Tucker-$(N_1, N)$ model for an $N$th-order tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$, with $N \ge N_1$, corresponds to the case where $N - N_1$ factor matrices are equal to identity matrices. For instance, assuming that $\mathbf{A}^{(n)} = \mathbf{I}_{I_n}$, which implies $R_n = I_n$, for $n = N_1 + 1, \ldots, N$, (23) and (26) become

$$x_{i_1, \ldots, i_N} = \sum_{r_1=1}^{R_1} \cdots \sum_{r_{N_1}=1}^{R_{N_1}} g_{r_1, \ldots, r_{N_1}, i_{N_1+1}, \ldots, i_N} \prod_{n=1}^{N_1} a_{i_n, r_n}^{(n)}, \qquad \mathcal{X} = \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \cdots \times_{N_1} \mathbf{A}^{(N_1)} \times_{N_1+1} \mathbf{I}_{I_{N_1+1}} \cdots \times_N \mathbf{I}_{I_N}$$
(29)
$$= \mathcal{G} \underset{n=1}{\overset{N_1}{\times}} \mathbf{A}^{(n)}.$$
(30)

One such model that is frequently used in applications is the Tucker-(2,3) model, usually denoted Tucker2, for third-order tensors $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$. Assuming $\mathbf{A}^{(1)} = \mathbf{A} \in \mathbb{C}^{I \times P}$, $\mathbf{A}^{(2)} = \mathbf{B} \in \mathbb{C}^{J \times Q}$, and $\mathbf{A}^{(3)} = \mathbf{I}_K$, such a model is defined by the following equations:

$$x_{ijk} = \sum_{p=1}^{P} \sum_{q=1}^{Q} g_{pqk}\, a_{ip} b_{jq}$$
(31)
$$\mathcal{X} = \mathcal{G} \times_1 \mathbf{A} \times_2 \mathbf{B}$$
(32)

with the core tensor $\mathcal{G} \in \mathbb{C}^{P \times Q \times K}$.

PARAFAC models

A PARAFAC model for an $N$th-order tensor $\mathcal{X}$ corresponds to the particular case of a Tucker model with an identity core tensor of order $N$ and dimensions $R \times \cdots \times R$:

$$\mathcal{G} = \mathcal{I}_{N,R} = \mathcal{I} \ \Leftrightarrow \ g_{r_1, \ldots, r_N} = \delta_{r_1, \ldots, r_N}.$$

Equations (23) to (26) then become, respectively,

$$x_{i_1, \ldots, i_N} = \sum_{r=1}^{R} \prod_{n=1}^{N} a_{i_n, r}^{(n)}$$
(33)
$$= \prod_{n=1}^{N} a_{i_n, r}^{(n)} \ \ (\text{with the index notation})$$
(34)
$$\mathcal{X} = \sum_{r=1}^{R} \underset{n=1}{\overset{N}{\circ}} \mathbf{A}_{.r}^{(n)}, \qquad \mathcal{X} = \mathcal{I}_{N,R} \underset{n=1}{\overset{N}{\times}} \mathbf{A}^{(n)}$$
(35)

with the factor matrices $\mathbf{A}^{(n)} \in \mathbb{C}^{I_n \times R}$, $n = 1, \ldots, N$.

Remarks
  • The expression (33) as a sum of polyads is called a polyadic form of $\mathcal{X}$ by Hitchcock[51].

  • The PARAFAC model (33)-(35) amounts to decomposing the tensor $\mathcal{X}$ into a sum of $R$ components, each component being a rank-one tensor. When $R$ is minimal in (33), it is called the rank of $\mathcal{X}$[52]. This rank is related to the mode-$n$ ranks by the inequalities $\mathrm{rank}_n(\mathcal{X}) \le R$, $n = 1, \ldots, N$. Furthermore, contrary to matrices, whose rank is always at most equal to the smallest dimension, the rank of a higher-order tensor can exceed any mode-$n$ dimension $I_n$.

  • There exist different definitions of rank for tensors, like typical and generic ranks, or the symmetric rank of a symmetric tensor (see[53, 54] for more details).

  • In telecommunication applications, the structure parameters (rank, mode dimensions, and core tensor dimensions) of a PARAFAC or Tucker model are design parameters that are chosen as a function of the performance desired for the communication system. However, in most applications, as for instance in multi-way data analysis, the structure parameters are generally unknown and must be determined a priori. Several techniques have been proposed for determining these parameters (see[55]-[58] and references therein).

  • The PARAFAC model is also sometimes defined by the following equation:

    $$x_{i_1, \ldots, i_N} = \sum_{r=1}^{R} g_r \prod_{n=1}^{N} a_{i_n, r}^{(n)}, \quad \text{with } g_r > 0.$$
    (36)
  • In this case, the identity tensor $\mathcal{I}_{N,R}$ in (35) is replaced by the diagonal tensor $\mathcal{G} \in \mathbb{C}^{R \times \cdots \times R}$ whose diagonal elements are equal to the scaling factors $g_r$, i.e.,

    $$g_{r_1, \ldots, r_N} = \begin{cases} g_r & \text{if } r_1 = \cdots = r_N = r \\ 0 & \text{otherwise,} \end{cases}$$

    and all the column vectors $\mathbf{A}_{.r}^{(n)}$ are normalized, i.e., have unit norm, for $1 \le n \le N$.

  • It is important to notice that the PARAFAC model (33) is multilinear (more precisely, $N$-linear) in its parameters, in the sense that it is linear with respect to each factor matrix. This multilinearity property is exploited for parameter estimation using the standard alternating least squares (ALS) algorithm[4, 5], which consists in alternately estimating each factor matrix by minimizing a least squares error criterion conditionally on the knowledge of the other factor matrices, fixed at their previously estimated values.

Matrix representations of the PARAFAC model. The matrix representation (7) of the PARAFAC model (33)‐(35) is given by

$$\mathbf{X}_{S_1; S_2} = \left( \underset{n \in S_1}{\diamond} \mathbf{A}^{(n)} \right) \left( \underset{n \in S_2}{\diamond} \mathbf{A}^{(n)} \right)^T.$$
(37)
Proof.

See Appendix 4.

Remarks
  • From (37), we can deduce that a mode combination results in a Khatri-Rao product of the corresponding factor matrices. Consequently, the tensor contraction (5) associated with the PARAFAC-$N$ model (35) gives a PARAFAC-$N_1$ model whose factor matrices are equal to $\underset{n \in S_{n_1}}{\diamond} \mathbf{A}^{(n)} \in \mathbb{C}^{J_{n_1} \times R}$, $n_1 = 1, \ldots, N_1$, with $J_{n_1} = \prod_{n \in S_{n_1}} I_n$.

  • For the PARAFAC model, the flat mode-$n$ unfolding, defined in (9), is given by (see the NumPy sketch after these remarks)

    $$\mathbf{X}_n = \mathbf{A}^{(n)} \left( \mathbf{A}^{(n+1)} \diamond \cdots \diamond \mathbf{A}^{(N)} \diamond \mathbf{A}^{(1)} \diamond \cdots \diamond \mathbf{A}^{(n-1)} \right)^T,$$
    (38)
  • and the associated vectorized form is obtained by applying the vec formula (93) to the right-hand side of the above equation, with $\mathbf{I}_R = \mathrm{diag}(\mathbf{1}_R)$:

    $$\mathrm{vec}(\mathcal{X}) = \mathrm{vec}(\mathbf{X}_n) = \left( \mathbf{A}^{(n+1)} \diamond \cdots \diamond \mathbf{A}^{(N)} \diamond \mathbf{A}^{(1)} \diamond \cdots \diamond \mathbf{A}^{(n)} \right) \mathbf{1}_R$$
    (39)
  • In the case of the normalized PARAFAC model (36), (37) and (39) become, respectively,

    $$\mathbf{X}_{S_1; S_2} = \left( \underset{n \in S_1}{\diamond} \mathbf{A}^{(n)} \right) \mathrm{diag}(\mathbf{g}) \left( \underset{n \in S_2}{\diamond} \mathbf{A}^{(n)} \right)^T, \qquad \mathrm{vec}(\mathcal{X}) = \mathrm{vec}(\mathbf{X}_n) = \left( \mathbf{A}^{(n+1)} \diamond \cdots \diamond \mathbf{A}^{(N)} \diamond \mathbf{A}^{(1)} \diamond \cdots \diamond \mathbf{A}^{(n)} \right) \mathbf{g}$$
  • where $\mathbf{g} = \begin{bmatrix} g_1 & \cdots & g_R \end{bmatrix}^T \in \mathbb{C}^{R \times 1}$.

  • For the PARAFAC model of a third-order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$ with factor matrices $(\mathbf{A}, \mathbf{B}, \mathbf{C})$, the formula (37) gives, for $S_1 = \{i, j\}$ and $S_2 = \{k\}$,

    $$\mathbf{X}_{IJ \times K} = \begin{bmatrix} \mathbf{X}_{1..} \\ \vdots \\ \mathbf{X}_{I..} \end{bmatrix} = (\mathbf{A} \diamond \mathbf{B})\, \mathbf{C}^T \in \mathbb{C}^{IJ \times K}.$$
  • Noting that $\mathbf{A} \diamond \mathbf{B} = \begin{bmatrix} \mathbf{B} D_1(\mathbf{A}) \\ \vdots \\ \mathbf{B} D_I(\mathbf{A}) \end{bmatrix}$, we deduce the following expression for the mode-1 matrix slices:

    $$\mathbf{X}_{i..} = \mathbf{B} D_i(\mathbf{A})\, \mathbf{C}^T.$$
  • Similarly, we have

    $$\mathbf{X}_{JK \times I} = (\mathbf{B} \diamond \mathbf{C})\, \mathbf{A}^T, \qquad \mathbf{X}_{KI \times J} = (\mathbf{C} \diamond \mathbf{A})\, \mathbf{B}^T, \qquad \mathbf{X}_{.j.} = \mathbf{C} D_j(\mathbf{B})\, \mathbf{A}^T, \qquad \mathbf{X}_{..k} = \mathbf{A} D_k(\mathbf{C})\, \mathbf{B}^T.$$
  • For the PARAFAC model of a fourth-order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K \times L}$ with factor matrices $(\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D})$, we obtain

    $$\mathbf{X}_{IJK \times L} = (\mathbf{A} \diamond \mathbf{B} \diamond \mathbf{C})\, \mathbf{D}^T = \begin{bmatrix} (\mathbf{B} \diamond \mathbf{C}) D_1(\mathbf{A}) \\ \vdots \\ (\mathbf{B} \diamond \mathbf{C}) D_I(\mathbf{A}) \end{bmatrix} \mathbf{D}^T = \begin{bmatrix} \mathbf{C} D_1(\mathbf{B}) D_1(\mathbf{A}) \\ \vdots \\ \mathbf{C} D_J(\mathbf{B}) D_I(\mathbf{A}) \end{bmatrix} \mathbf{D}^T \in \mathbb{C}^{IJK \times L}, \qquad \mathbf{X}_{ij..} = \mathbf{C} D_j(\mathbf{B}) D_i(\mathbf{A})\, \mathbf{D}^T \in \mathbb{C}^{K \times L}$$
    (40)

    Other matrix slices can be deduced from (40) by simple permutations of the matrix factors.
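
As announced after (38), the following NumPy sketch checks the Khatri-Rao unfoldings and a matrix-slice formula for a third-order PARAFAC model (sizes are arbitrary; khatri_rao is our helper, not from the paper).

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product A diamond B."""
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

rng = np.random.default_rng(5)
I, J, K, R = 3, 4, 5, 2
A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))

# PARAFAC-3 tensor: x_{ijk} = sum_r a_{ir} b_{jr} c_{kr}
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Flat mode-1 unfolding (38): X_1 = A (B diamond C)^T
assert np.allclose(X.reshape(I, J * K), A @ khatri_rao(B, C).T)

# Tall unfolding X_{IJ x K} = (A diamond B) C^T and frontal slice X_..k = A D_k(C) B^T
assert np.allclose(X.reshape(I * J, K), khatri_rao(A, B) @ C.T)
k = 2
assert np.allclose(X[:, :, k], A @ np.diag(C[k]) @ B.T)
```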

In the next section, we introduce two constrained PARAFAC models, the so‐called PARALIND and CONFAC models, and then PARATUCK models.

1.3 Constrained PARAFAC models

The introduction of constraints in tensor models can result from the system under study itself or from a system design. In the first case, the constraints are often interpreted as interactions or linear dependencies between the PARAFAC factors. Examples of such dependencies are encountered in the psychometric and chemometric applications that gave rise, respectively, to the PARATUCK-2 model[59] and to the parallel profiles with linear dependencies (PARALIND) model[60, 61], introduced in[47] under the name canonical decomposition with linear constraints (CANDELINC) for the multiway case. A first application of the PARATUCK-2 model in signal processing was made in[62] for blind joint identification and equalization of Wiener-Hammerstein communication channels. The PARALIND model was applied for identifiability and propagation parameter estimation purposes in an array signal processing context[63, 64].

In the second case, the constraints are used as design parameters. For instance, in a telecommunications context, we proposed two constrained tensor models: the CONFAC (constrained factor) model[65] and the PARATUCK-$(N_1, N)$ model[66, 67]. The PARATUCK-2 model was also applied for designing space-time spreading-multiplexing MIMO systems[68]. In these telecommunication applications of constrained tensor models, the constraints are used for resource allocation. We now describe these various constrained PARAFAC models.

1.3.1 PARALIND models

Let us define the core tensor of the Tucker model (26) as follows:

$$\mathcal{G} = \mathcal{I}_{N,R} \underset{n=1}{\overset{N}{\times}} \mathbf{\Phi}^{(n)}$$
(41)

where $\mathbf{\Phi}^{(n)} \in \mathbb{R}^{R_n \times R}$, $n = 1, \ldots, N$, with $R \ge \max_n (R_n)$, are constraint matrices. In this case, $\mathcal{G}$ will be called the 'interaction tensor' or 'constraint tensor'.

The PARALIND model is obtained by substituting (41) into (26) and applying the property (13), which gives

$$\mathcal{X} = \mathcal{G} \underset{n=1}{\overset{N}{\times}} \mathbf{A}^{(n)} = \mathcal{I}_{N,R} \underset{n=1}{\overset{N}{\times}} \left( \mathbf{A}^{(n)} \mathbf{\Phi}^{(n)} \right).$$
(42)

This equation leads to two different interpretations of the PARALIND model: as a constrained Tucker model whose core tensor admits a PARAFAC decomposition with factor matrices $\mathbf{\Phi}^{(n)}$, called 'interaction matrices,' and as a constrained PARAFAC model with constrained factor matrices $\bar{\mathbf{A}}^{(n)} = \mathbf{A}^{(n)} \mathbf{\Phi}^{(n)}$.

The interaction matrix $\mathbf{\Phi}^{(n)}$ allows linear dependencies between the columns of $\mathbf{A}^{(n)}$ to be taken into account, implying a rank deficiency for this factor matrix. When the columns of $\mathbf{\Phi}^{(n)}$ are formed of 0's and 1's, the dependencies simply consist in a repetition or an addition of certain columns of $\mathbf{A}^{(n)}$. In this particular case, the diagonal element $\xi_{r,r}^{(n)} \ge 1$ of the matrix $\mathbf{\Xi}^{(n)} = \left( \mathbf{\Phi}^{(n)} \right)^T \mathbf{\Phi}^{(n)} \in \mathbb{R}^{R \times R}$ represents the number of columns of $\mathbf{A}^{(n)}$ that are added to form the $r$th column of the constrained factor $\mathbf{A}^{(n)} \mathbf{\Phi}^{(n)}$. The choice $\mathbf{\Phi}^{(n)} = \mathbf{I}_R$ means that there is no such dependency among the columns of $\mathbf{A}^{(n)}$.

Note that (42) can be written element-wise as

$$x_{i_1, \ldots, i_N} = \sum_{r_1=1}^{R_1} \cdots \sum_{r_N=1}^{R_N} g_{r_1, \ldots, r_N} \prod_{n=1}^{N} a_{i_n, r_n}^{(n)} = \sum_{r=1}^{R} \prod_{n=1}^{N} \bar{a}_{i_n, r}^{(n)}, \quad \text{with} \ \bar{a}_{i_n, r}^{(n)} = \sum_{r_n=1}^{R_n} a_{i_n, r_n}^{(n)} \phi_{r_n, r}^{(n)} \ \text{and} \ g_{r_1, \ldots, r_N} = \sum_{r=1}^{R} \prod_{n=1}^{N} \phi_{r_n, r}^{(n)}.$$
(43)

This constrained PARAFAC model constitutes an $N$-way form of the three-way PARALIND model used for chemometric applications in[60, 61].
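
A small NumPy illustration follows (hypothetical sizes and a hand-picked $\mathbf{\Phi}^{(1)}$) of how a 0/1 interaction matrix repeats and adds columns of $\mathbf{A}^{(1)}$.

```python
import numpy as np

# Hypothetical interaction matrix Phi^(1) (R_1 = 2, R = 3): column 1 selects
# column 1 of A^(1), column 2 selects column 2, column 3 adds both columns
Phi1 = np.array([[1., 0., 1.],
                 [0., 1., 1.]])

rng = np.random.default_rng(6)
A1 = rng.standard_normal((4, 2))            # A^(1) in C^{I_1 x R_1}
A1_bar = A1 @ Phi1                          # constrained factor A^(1) Phi^(1)
assert np.allclose(A1_bar[:, 2], A1[:, 0] + A1[:, 1])

# Xi^(1) = Phi^(1)T Phi^(1): its diagonal counts how many columns of A^(1)
# are added to form each column of the constrained factor
Xi1 = Phi1.T @ Phi1
print(np.diag(Xi1))                          # [1. 1. 2.]
```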

1.3.2 CONFAC models

When the constraint matrices $\mathbf{\Phi}^{(n)} \in \mathbb{R}^{R_n \times R}$ are full row rank and their columns are chosen as canonical vectors of the Euclidean space $\mathbb{R}^{R_n}$, for $n = 1, \ldots, N$, the constrained PARAFAC model (42) constitutes a generalization to $N$th order of the third-order CONFAC model, introduced in[65] for designing MIMO communication systems with resource allocation. This CONFAC model was used in[69] for solving the problem of blind identification of underdetermined mixtures based on the cumulant generating function of the observations. In a telecommunications context where $\mathcal{X}$ represents the tensor of received signals, such a constraint matrix $\mathbf{\Phi}^{(n)}$ can be interpreted as an 'allocation matrix' allowing resources, like data streams, codes, and transmit antennas, to be allocated to the $R$ components of the signal to be transmitted. In this case, the core tensor $\mathcal{G}$ will be called the 'allocation tensor.' By assumption, each column of the allocation matrix $\mathbf{\Phi}^{(n)}$ is a canonical vector of $\mathbb{R}^{R_n}$, which means that there is only one value of $r_n$ such that $\phi_{r_n, r}^{(n)} = 1$, and this value of $r_n$ corresponds to the $n$th resource allocated to the $r$th component.

Each element $x_{i_1, \ldots, i_N}$ of the received signal tensor is equal to the sum of $R$ components, each component $r$ resulting from the combination of $N$ resources, each resource being associated with a column of the factor matrix $\mathbf{A}^{(n)}$, $n = 1, \ldots, N$. This combination, determined by the allocation matrices, is defined by a set of $N$ indices $\{r_1, \ldots, r_N\}$ such that $\prod_{n=1}^{N} \phi_{r_n, r}^{(n)} = 1$. As for any $r \in [1, R]$ there is one and only one $N$-tuple $(r_1, \ldots, r_N)$ such that $\prod_{n=1}^{N} \phi_{r_n, r}^{(n)} = 1$, we can deduce that each component $r$ of $x_{i_1, \ldots, i_N}$ in (43) is the result of one and only one combination of the $N$ resources, under the form of the product $\prod_{n=1}^{N} a_{i_n, r_n}^{(n)}$. For the CONFAC model, we have

$$\sum_{r_n=1}^{R_n} D_{r_n} \left( \mathbf{\Phi}^{(n)} \right) = \mathbf{I}_R, \quad n = 1, \ldots, N,$$

meaning that each resource $r_n$ is allocated at least once, and the diagonal elements of $\mathbf{\Xi}^{(n)} = \left( \mathbf{\Phi}^{(n)} \right)^T \mathbf{\Phi}^{(n)}$ are such that $\xi_{r,r}^{(n)} = 1$, $n = 1, \ldots, N$, because only one resource $r_n$ is allocated to each component $r$. Moreover, we note that the assumption $R \ge \max_n (R_n)$ implies that each resource can be allocated several times, i.e., to several components. Defining the interaction matrices

$$\mathbf{\Gamma}^{(n)} = \mathbf{\Phi}^{(n)} \left( \mathbf{\Phi}^{(n)} \right)^T \in \mathbb{R}^{R_n \times R_n}, \qquad \mathbf{\Gamma}^{(n_1, n_2)} = \mathbf{\Phi}^{(n_1)} \left( \mathbf{\Phi}^{(n_2)} \right)^T \in \mathbb{R}^{R_{n_1} \times R_{n_2}},$$

the diagonal element $\gamma_{r_n, r_n}^{(n)} \in [1, R - R_n + 1]$ represents the number of times the $r_n$th column of $\mathbf{A}^{(n)}$ is repeated, i.e., the number of times the $r_n$th resource is allocated to the $R$ components, whereas $\gamma_{r_{n_1}, r_{n_2}}^{(n_1, n_2)}$ determines the number of interactions between the $r_{n_1}$th column of $\mathbf{A}^{(n_1)}$ and the $r_{n_2}$th column of $\mathbf{A}^{(n_2)}$, i.e., the number of times the $r_{n_1}$th and $r_{n_2}$th resources are combined in the $R$ components. If we choose $R_n = R$ and $\mathbf{\Phi}^{(n)} = \mathbf{I}_R$, $n = 1, \ldots, N$, the PARALIND/CONFAC model (42) becomes identical to the PARAFAC model (35).
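
The following sketch (with a hypothetical allocation) illustrates these counting interpretations of $\mathbf{\Xi}^{(n)}$ and $\mathbf{\Gamma}^{(n)}$.

```python
import numpy as np

# Hypothetical CONFAC allocation with R_1 = 2 resources and R = 3 components:
# every column of Phi^(1) is a canonical vector of R^{R_1}
Phi1 = np.array([[1., 0., 1.],
                 [0., 1., 0.]])

Xi1 = Phi1.T @ Phi1                 # diagonal = 1: one resource per component
assert np.allclose(np.diag(Xi1), np.ones(3))

Gamma1 = Phi1 @ Phi1.T              # diagonal counts allocations per resource
print(np.diag(Gamma1))              # [2. 1.]: resource 1 twice, resource 2 once

# Each resource is allocated at least once: sum_{r_n} D_{r_n}(Phi^(1)) = I_R
assert np.allclose(sum(np.diag(row) for row in Phi1), np.eye(3))
```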

The matrix representation (7) of the PARALIND/CONFAC model can be deduced from (37) by replacing $\mathbf{A}^{(n)}$ with $\mathbf{A}^{(n)} \mathbf{\Phi}^{(n)}$:

$$\mathbf{X}_{S_1; S_2} = \left( \underset{n \in S_1}{\diamond} \mathbf{A}^{(n)} \mathbf{\Phi}^{(n)} \right) \left( \underset{n \in S_2}{\diamond} \mathbf{A}^{(n)} \mathbf{\Phi}^{(n)} \right)^T.$$

Using the identity (86) gives

$$\mathbf{X}_{S_1; S_2} = \left( \underset{n \in S_1}{\otimes} \mathbf{A}^{(n)} \right) \left( \underset{n \in S_1}{\diamond} \mathbf{\Phi}^{(n)} \right) \left( \underset{n \in S_2}{\diamond} \mathbf{\Phi}^{(n)} \right)^T \left( \underset{n \in S_2}{\otimes} \mathbf{A}^{(n)} \right)^T,$$
(44)

or, equivalently,

$$\mathbf{X}_{S_1; S_2} = \left( \underset{n \in S_1}{\otimes} \mathbf{A}^{(n)} \right) \mathbf{G}_{S_1; S_2} \left( \underset{n \in S_2}{\otimes} \mathbf{A}^{(n)} \right)^T,$$

where the matrix representation $\mathbf{G}_{S_1; S_2}$ of the constraint/allocation tensor $\mathcal{G}$, defined by means of its PARAFAC model (41), can also be deduced from (37) as

$$\mathbf{G}_{S_1; S_2} = \left( \underset{n \in S_1}{\diamond} \mathbf{\Phi}^{(n)} \right) \left( \underset{n \in S_2}{\diamond} \mathbf{\Phi}^{(n)} \right)^T.$$

1.3.3 Nested Tucker models

The PARALIND/CONFAC models can be viewed as particular cases of a new family of tensor models that we shall call nested Tucker models, defined by means of the following recursive equation:

$$\mathcal{X}^{(p)} = \mathcal{X}^{(p-1)} \underset{n=1}{\overset{N}{\times}} \mathbf{A}^{(p,n)} \ \text{for} \ p = 1, \ldots, P \qquad \Rightarrow \qquad \mathcal{X}^{(P)} = \mathcal{G} \underset{n=1}{\overset{N}{\times}} \left( \prod_{q=P}^{1} \mathbf{A}^{(q,n)} \right)$$

with the factor matrices $\mathbf{A}^{(p,n)} \in \mathbb{C}^{R(p,n) \times R(p-1,n)}$ for $p = 1, \ldots, P$, such that $R(0,n) = R_n$ and $R(P,n) = I_n$, for $n = 1, \ldots, N$, the core tensor $\mathcal{X}^{(0)} = \mathcal{G} \in \mathbb{C}^{R_1 \times \cdots \times R_N}$, and $\mathcal{X}^{(P)} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$. This equation can be interpreted as $P$ successive linear transformations applied to each mode-$n$ space of the core tensor $\mathcal{G}$. Thus, $P$ nested Tucker models can be interpreted as a Tucker model whose factor matrices are products of $P$ matrices. When $\mathcal{G} = \mathcal{I}_{N,R}$, which implies $R(0,n) = R_n = R$ for $n = 1, \ldots, N$, we obtain nested PARAFAC models. The PARALIND/CONFAC models correspond to two nested PARAFAC models ($P = 2$), with $\mathbf{A}^{(1,n)} = \mathbf{\Phi}^{(n)}$, $\mathbf{A}^{(2,n)} = \mathbf{A}^{(n)}$, $R(0,n) = R$, $R(1,n) = R_n$, and $R(2,n) = I_n$, for $n = 1, \ldots, N$.

By considering nested PARAFAC models with $P = 3$, $\mathbf{A}^{(1,n)} = \mathbf{\Phi}^{(n)} \in \mathbb{C}^{K_n \times R}$, $\mathbf{A}^{(2,n)} = \mathbf{A}^{(n)} \in \mathbb{C}^{J_n \times K_n}$, and $\mathbf{A}^{(3,n)} = \mathbf{\Psi}^{(n)} \in \mathbb{C}^{I_n \times J_n}$, for $n = 1, \ldots, N$, we deduce doubly PARALIND/CONFAC models described by the following equation:

$$\mathcal{X} = \mathcal{I}_{N,R} \underset{n=1}{\overset{N}{\times}} \left( \mathbf{\Psi}^{(n)} \mathbf{A}^{(n)} \mathbf{\Phi}^{(n)} \right).$$

Such a model can be viewed as a doubly constrained PARAFAC model, with factor matrices $\mathbf{\Psi}^{(n)} \mathbf{A}^{(n)} \mathbf{\Phi}^{(n)}$, the constraint matrix $\mathbf{\Psi}^{(n)}$, assumed to be full column rank, allowing linear dependencies between the rows of $\mathbf{A}^{(n)}$ to be taken into account. A third-order nested Tucker model is visualized in Figure 1.

Figure 1. Visualization of a third-order nested Tucker model.

1.3.4 Block PARALIND/CONFAC models

In some applications, the data tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$ is written as a sum of $P$ sub-tensors $\mathcal{X}^{(p)}$, each sub-tensor admitting a tensor model with a possibly different structure. So, we can define a block PARALIND/CONFAC model as

$$\mathcal{X} = \sum_{p=1}^{P} \mathcal{X}^{(p)},$$
(45)
$$\mathcal{X}^{(p)} = \mathcal{G}^{(p)} \underset{n=1}{\overset{N}{\times}} \mathbf{A}^{(p,n)}, \qquad \mathcal{G}^{(p)} = \mathcal{I}_{N, R^{(p)}} \underset{n=1}{\overset{N}{\times}} \mathbf{\Phi}^{(p,n)},$$
(46)

where $\mathbf{A}^{(p,n)} \in \mathbb{C}^{I_n \times R(p,n)}$, $\mathbf{\Phi}^{(p,n)} \in \mathbb{C}^{R(p,n) \times R^{(p)}}$, and $\mathcal{G}^{(p)} \in \mathbb{C}^{R(p,1) \times \cdots \times R(p,N)}$ are the mode-$n$ factor matrix, the mode-$n$ constraint/allocation matrix, and the core tensor of the PARALIND/CONFAC model of the $p$th sub-tensor, respectively. The matrix representation (44) then becomes

$$\mathbf{X}_{S_1; S_2} = \sum_{p=1}^{P} \left( \underset{n \in S_1}{\otimes} \mathbf{A}^{(p,n)} \right) \left( \underset{n \in S_1}{\diamond} \mathbf{\Phi}^{(p,n)} \right) \left( \underset{n \in S_2}{\diamond} \mathbf{\Phi}^{(p,n)} \right)^T \left( \underset{n \in S_2}{\otimes} \mathbf{A}^{(p,n)} \right)^T.$$
(47)

Defining the block partitioned matrices

$$\mathbf{A}^{(n)} = \begin{bmatrix} \mathbf{A}^{(1,n)} & \cdots & \mathbf{A}^{(P,n)} \end{bmatrix} \in \mathbb{C}^{I_n \times R^{(n)}}$$
(48)

where $R^{(n)} = \sum_{p=1}^{P} R(p,n)$, (47) can be rewritten in the following more compact form:

$$\mathbf{X}_{S_1; S_2} = \left( \underset{n \in S_1}{\otimes_b} \mathbf{A}^{(n)} \right) \mathbf{G}_{S_1; S_2} \left( \underset{n \in S_2}{\otimes_b} \mathbf{A}^{(n)} \right)^T$$

where $\otimes_b$ denotes the block-wise Kronecker product defined as

$$\mathbf{A}^{(n)} \otimes_b \mathbf{A}^{(q)} = \begin{bmatrix} \mathbf{A}^{(1,n)} \otimes \mathbf{A}^{(1,q)} & \cdots & \mathbf{A}^{(P,n)} \otimes \mathbf{A}^{(P,q)} \end{bmatrix},$$

$\mathbf{A}^{(q)}$ being partitioned into $P$ blocks as in (48), and

$$\mathbf{G}_{S_1; S_2} = \mathrm{bdiag} \left( \mathbf{G}_{S_1; S_2}^{(1)} \ \cdots \ \mathbf{G}_{S_1; S_2}^{(P)} \right) \in \mathbb{C}^{J_1 \times J_2}, \qquad \mathbf{G}_{S_1; S_2}^{(p)} = \left( \underset{n \in S_1}{\diamond} \mathbf{\Phi}^{(p,n)} \right) \left( \underset{n \in S_2}{\diamond} \mathbf{\Phi}^{(p,n)} \right)^T \in \mathbb{C}^{J_1^{(p)} \times J_2^{(p)}},$$

with $J_{n_1} = \sum_{p=1}^{P} J_{n_1}^{(p)}$ and $J_{n_1}^{(p)} = \prod_{n \in S_{n_1}} R(p,n)$, for $n_1 = 1$ and $2$. The block-wise Khatri-Rao product $\diamond_b$, used below, is defined in the same way as the block-wise Kronecker product.

In the case of a block PARAFAC model, (46) is replaced by

$$\mathcal{X}^{(p)} = \mathcal{I}_{N, R^{(p)}} \underset{n=1}{\overset{N}{\times}} \mathbf{A}^{(p,n)}, \quad \text{with} \ \mathbf{A}^{(p,n)} \in \mathbb{C}^{I_n \times R^{(p)}},$$

and the matrix representation (37) then becomes

$$\mathbf{X}_{S_1; S_2} = \left( \underset{n \in S_1}{\diamond_b} \mathbf{A}^{(n)} \right) \left( \underset{n \in S_2}{\diamond_b} \mathbf{A}^{(n)} \right)^T$$

with $\mathbf{A}^{(n)} = \begin{bmatrix} \mathbf{A}^{(1,n)} & \cdots & \mathbf{A}^{(P,n)} \end{bmatrix} \in \mathbb{C}^{I_n \times R}$ and $R = \sum_{p=1}^{P} R^{(p)}$. Block constrained PARAFAC models were used in[70]-[72] for modeling different types of multiuser wireless communication systems. Block constrained Tucker models were used for space-time multiplexing MIMO-OFDM systems[73] and for blind beamforming[74]. In these applications, the symbol matrix factor is in Toeplitz or block-Toeplitz form.

The block tensor model defined by (45) and (46) can be viewed as a generalization of the block term decomposition introduced in[75] for third-order tensors $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$ that are decomposed into a sum of $P$ Tucker models of rank-$(L, M, N)$, which corresponds to the particular case where all the factor matrices are full column rank, with $\mathbf{A}^{(p,1)} \in \mathbb{C}^{I \times L}$, $\mathbf{A}^{(p,2)} \in \mathbb{C}^{J \times M}$, $\mathbf{A}^{(p,3)} \in \mathbb{C}^{K \times N}$, for $p = 1, \ldots, P$, $\mathcal{G} \in \mathbb{C}^{L \times M \times N}$, and each sub-tensor $\mathcal{X}^{(p)}$ is decomposed by means of its HOSVD.

A third-order block PARALIND/CONFAC model is visualized in Figure 2. This figure is to be compared with Figure five in[76], representing a block term decomposition of a third-order tensor into rank-$(L_p, M_p, N_p)$ terms, where each term has a PARALIND/CONFAC structure.

Figure 2. Visualization of a third-order block PARALIND/CONFAC model.

1.3.5 PARALIND/CONFAC-$(N_1, N)$ models

Now we introduce a variant of PARALIND/CONFAC models that we shall call PARALIND/CONFAC-$(N_1, N)$ models. This variant corresponds to PARALIND/CONFAC models (42) with only $N_1$ constrained factor matrices, which implies $R_n = R$ and $\mathbf{A}^{(n)} \in \mathbb{C}^{I_n \times R}$ for $n = N_1 + 1, \ldots, N$:

$$\mathcal{X} = \mathcal{I}_{N,R} \underset{n=1}{\overset{N_1}{\times}} \left( \mathbf{A}^{(n)} \mathbf{\Phi}^{(n)} \right) \underset{n=N_1+1}{\overset{N}{\times}} \mathbf{A}^{(n)}.$$
(49)

In[77], a block PARALIND/CONFAC-(2,3) model that can be deduced from (49) was used for modeling uplink multiple-antenna code division multiple access (CDMA) multiuser systems.

The block term decomposition (BTD) in rank-$(1, L_p, L_p)$ terms of a third-order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$, which is compared with a third-order PARATREE model in[78], can also be viewed as a particular CONFAC-(1,3) model. Indeed, such a decomposition can be written as[79]

$$\mathcal{X} = \sum_{p=1}^{P} \mathbf{a}_p \circ \left( \mathbf{B}_p \mathbf{C}_p^T \right)$$
(50)

where the matrices $\mathbf{B}_p \in \mathbb{C}^{J \times L_p}$ and $\mathbf{C}_p \in \mathbb{C}^{K \times L_p}$ are of rank $L_p$, and $\mathbf{a}_p \in \mathbb{C}^{I \times 1}$. Defining $\mathbf{B} = \begin{bmatrix} \mathbf{B}_1 & \cdots & \mathbf{B}_P \end{bmatrix} \in \mathbb{C}^{J \times R}$, $\mathbf{C} = \begin{bmatrix} \mathbf{C}_1 & \cdots & \mathbf{C}_P \end{bmatrix} \in \mathbb{C}^{K \times R}$, and $\mathbf{A} = \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_P \end{bmatrix} \in \mathbb{C}^{I \times P}$, with $R = \sum_{p=1}^{P} L_p$, it is easy to verify that the BTD (50) can be rewritten as the following CONFAC-(1,3) model:

$$\mathcal{X} = \mathcal{I}_{3,R} \times_1 \mathbf{A} \mathbf{\Phi} \times_2 \mathbf{B} \times_3 \mathbf{C}$$
(51)

with the constraint matrix $\mathbf{\Phi} = \mathrm{bdiag} \left( \mathbf{1}_{L_1}^T \ \cdots \ \mathbf{1}_{L_P}^T \right) \in \mathbb{C}^{P \times R}$.
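
The following NumPy sketch (hypothetical block sizes) verifies that the BTD (50) coincides with the CONFAC-(1,3) model (51).

```python
import numpy as np

L = [2, 3]                                   # hypothetical ranks L_p; R = 5
P, R = len(L), sum(L)
I, J, K = 4, 5, 6
rng = np.random.default_rng(7)

A  = rng.standard_normal((I, P))             # a_p = A[:, p]
Bp = [rng.standard_normal((J, Lp)) for Lp in L]
Cp = [rng.standard_normal((K, Lp)) for Lp in L]
B, C = np.hstack(Bp), np.hstack(Cp)

# Constraint matrix Phi = bdiag(1_{L_1}^T, ..., 1_{L_P}^T) in C^{P x R}
Phi = np.zeros((P, R))
col = 0
for p, Lp in enumerate(L):
    Phi[p, col:col + Lp] = 1.0
    col += Lp

# BTD (50): X = sum_p a_p o (B_p C_p^T)
X_btd = sum(np.einsum('i,jk->ijk', A[:, p], Bp[p] @ Cp[p].T) for p in range(P))

# CONFAC-(1,3) model (51): PARAFAC with factors (A Phi, B, C)
X_confac = np.einsum('ir,jr,kr->ijk', A @ Phi, B, C)
assert np.allclose(X_btd, X_confac)
```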

1.3.6 PARATUCK models

A PARATUCK-$(N_1, N)$ model for an $N$th-order tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times \cdots \times I_N}$, with $N > N_1$, is defined in scalar form as follows[66, 67]:

$$x_{i_1, \ldots, i_{N_1+1}, \ldots, i_N} = \sum_{r_1=1}^{R_1} \cdots \sum_{r_{N_1}=1}^{R_{N_1}} c_{r_1, \ldots, r_{N_1}, i_{N_1+2}, \ldots, i_N} \prod_{n=1}^{N_1} a_{i_n, r_n}^{(n)} \phi_{r_n, i_{N_1+1}}^{(n)}$$
(52)

where $a_{i_n, r_n}^{(n)}$ and $\phi_{r_n, i_{N_1+1}}^{(n)}$ are entries of the factor matrix $\mathbf{A}^{(n)} \in \mathbb{C}^{I_n \times R_n}$ and of the interaction/allocation matrix $\mathbf{\Phi}^{(n)} \in \mathbb{C}^{R_n \times I_{N_1+1}}$, $n = 1, \ldots, N_1$, respectively, and $\mathcal{C} \in \mathbb{C}^{R_1 \times \cdots \times R_{N_1} \times I_{N_1+2} \times \cdots \times I_N}$ is the $(N-1)$th-order input tensor. Defining the core tensor $\mathcal{G} \in \mathbb{C}^{R_1 \times \cdots \times R_{N_1} \times I_{N_1+1} \times \cdots \times I_N}$ element-wise as

$$g_{r_1, \ldots, r_{N_1}, i_{N_1+1}, \ldots, i_N} = c_{r_1, \ldots, r_{N_1}, i_{N_1+2}, \ldots, i_N} \prod_{n=1}^{N_1} \phi_{r_n, i_{N_1+1}}^{(n)},$$

the PARATUCK-$(N_1, N)$ model can be rewritten as a Tucker-$(N_1, N)$ model (29)-(30).

Defining the allocation/interaction tensor $\mathcal{F} \in \mathbb{C}^{R_1 \times \cdots \times R_{N_1} \times I_{N_1+1}}$ of order $N_1 + 1$, such that

$$f_{r_1, \ldots, r_{N_1}, i_{N_1+1}} = \prod_{n=1}^{N_1} \phi_{r_n, i_{N_1+1}}^{(n)},$$
(53)

the core tensor $\mathcal{G}$ can then be written as the Hadamard product of the tensors $\mathcal{C}$ and $\mathcal{F}$ along their first $N_1$ modes:

$$\mathcal{G} = \mathcal{C} \underset{\{r_1, \ldots, r_{N_1}\}}{\circledast} \mathcal{F}.$$
(54)
Remarks
  • The PARATUCK-$(N_1, N)$ model can be interpreted as the transformation of the input tensor $\mathcal{C}$ via its multiplication by the factor matrices $\mathbf{A}^{(n)}$, $n = 1, \ldots, N_1$, along its first $N_1$ modes, combined with a mode-$n$ resource allocation ($n = 1, \ldots, N_1$) relative to the mode $N_1 + 1$ of the transformed tensor, by means of the allocation matrices $\mathbf{\Phi}^{(n)}$.

  • In telecommunications applications, the output modes will be called diversity modes because they correspond to time, space, and frequency diversities, whereas the input modes are associated with resources like transmit antennas, codes, and data streams. For these applications, the matrices $\mathbf{\Phi}^{(n)}$ are formed of 0's and 1's, and they can be interpreted as allocation matrices used for allocating some resources $r_n$ to the output mode $N_1 + 1$. Another way of taking resource allocations into account consists in replacing the $N_1$ allocation matrices $\mathbf{\Phi}^{(n)}$ by the $(N_1+1)$th-order allocation tensor $\mathcal{F} \in \mathbb{C}^{R_1 \times \cdots \times R_{N_1} \times I_{N_1+1}}$ defined in (53).

  • Special cases:

    • For $N_1 = 2$ and $N = 3$, we obtain the standard PARATUCK-2 model introduced in[59]. Equation (52) then becomes

      $$x_{i_1, i_2, i_3} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} c_{r_1, r_2}\, a_{i_1, r_1}^{(1)} a_{i_2, r_2}^{(2)} \phi_{r_1, i_3}^{(1)} \phi_{r_2, i_3}^{(2)}$$
      (55)

      The allocation tensor $\mathcal{F}$ defined in (53) can be rewritten as

      $$f_{r_1, r_2, i_3} = \phi_{r_1, i_3}^{(1)} \phi_{r_2, i_3}^{(2)} = \sum_{j=1}^{I_3} \phi_{r_1, j}^{(1)} \phi_{r_2, j}^{(2)} \delta_{i_3, j},$$
      (56)

      which corresponds to a PARAFAC model with matrix factors $\left( \mathbf{\Phi}^{(1)}, \mathbf{\Phi}^{(2)}, \mathbf{I}_{I_3} \right)$. The PARATUCK-2 model (55) can then be viewed as a Tucker-2 model $\mathcal{X} = \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)}$ with the core tensor $\mathcal{G} \in \mathbb{C}^{R_1 \times R_2 \times I_3}$ given by the Hadamard product of $\mathbf{C} \in \mathbb{C}^{R_1 \times R_2}$ and $\mathcal{F} \in \mathbb{C}^{R_1 \times R_2 \times I_3}$ along their common modes $\{r_1, r_2\}$:

      $$\mathcal{G} = \mathbf{C} \underset{\{r_1, r_2\}}{\circledast} \mathcal{F}.$$

      This combination of a Tucker-2 model for $\mathcal{X}$ with a PARAFAC model for $\mathcal{F}$ gave rise to the name PARATUCK-2. The constraint matrices $\left( \mathbf{\Phi}^{(1)}, \mathbf{\Phi}^{(2)} \right)$ define interactions between columns of the factor matrices $\left( \mathbf{A}^{(1)}, \mathbf{A}^{(2)} \right)$ along the mode 3 of $\mathcal{X}$, while the matrix $\mathbf{C}$ contains the weights of these interactions.

    • For $N_1 = 2$ and $N = 4$, we obtain the PARATUCK-(2,4) model introduced in[66]:

      $$x_{i_1, i_2, i_3, i_4} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} c_{r_1, r_2, i_4}\, a_{i_1, r_1}^{(1)} a_{i_2, r_2}^{(2)} \phi_{r_1, i_3}^{(1)} \phi_{r_2, i_3}^{(2)}$$
      (57)

      As for the PARATUCK-2 model, the PARATUCK-(2,4) model can be viewed as the combination of a Tucker-(2,4) model $\mathcal{X} = \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \in \mathbb{C}^{I_1 \times I_2 \times I_3 \times I_4}$ with a core tensor $\mathcal{G} \in \mathbb{C}^{R_1 \times R_2 \times I_3 \times I_4}$ given by the Hadamard product of the tensors $\mathcal{C} \in \mathbb{C}^{R_1 \times R_2 \times I_4}$ and $\mathcal{F} \in \mathbb{C}^{R_1 \times R_2 \times I_3}$ along their common modes $\{r_1, r_2\}$:

      $$\mathcal{G} = \mathcal{C} \underset{\{r_1, r_2\}}{\circledast} \mathcal{F},$$

      with the same allocation tensor $\mathcal{F}$ defined in (56).

1.3.7 Rewriting of PARATUCK models as constrained PARAFAC models

This rewriting of PARATUCK models as constrained PARAFAC models can be used both to deduce matrix unfoldings by means of the general formula (37) and to derive sufficient conditions for essential uniqueness of such PARATUCK models, as will be shown in Section 1.4.

Link between PARATUCK-(2,4) and constrained PARAFAC-4 models

We now establish the link between the PARATUCK-(2,4) model (57) and the fourth-order constrained PARAFAC model

$$x_{i_1, i_2, i_3, i_4} = \sum_{r=1}^{R} a_{i_1, r}\, b_{i_2, r}\, f_{i_3, r}\, d_{i_4, r}, \quad \text{with} \ R = R_1 R_2,$$
(58)

whose matrix factors $\left( \mathbf{A} \in \mathbb{C}^{I_1 \times R}, \mathbf{B} \in \mathbb{C}^{I_2 \times R}, \mathbf{F} \in \mathbb{C}^{I_3 \times R}, \mathbf{D} \in \mathbb{C}^{I_4 \times R} \right)$ and constraint matrices $\left( \mathbf{\Psi}^{(1)}, \mathbf{\Psi}^{(2)} \right)$, acting on the original factors $\left( \mathbf{A}^{(1)}, \mathbf{A}^{(2)} \right)$, are given by

$$\mathbf{A} = \mathbf{A}^{(1)} \mathbf{\Psi}^{(1)}, \qquad \mathbf{B} = \mathbf{A}^{(2)} \mathbf{\Psi}^{(2)}, \qquad \mathbf{F} = \left( \mathbf{\Phi}^{(1)} \diamond \mathbf{\Phi}^{(2)} \right)^T, \qquad \mathbf{D} = \mathbf{C}_{I_4 \times R_1 R_2}$$
(59)
$$\mathbf{\Psi}^{(1)} = \mathbf{I}_{R_1} \otimes \mathbf{1}_{R_2}^T \in \mathbb{C}^{R_1 \times R_1 R_2}, \qquad \mathbf{\Psi}^{(2)} = \mathbf{1}_{R_1}^T \otimes \mathbf{I}_{R_2} \in \mathbb{C}^{R_2 \times R_1 R_2}$$
(60)

where $\mathbf{C}_{I_4 \times R_1 R_2} \in \mathbb{C}^{I_4 \times R_1 R_2}$ is a mode-3 unfolded matrix of the tensor $\mathcal{C} \in \mathbb{C}^{R_1 \times R_2 \times I_4}$.

Proof.

See Appendix 5.
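
As a numerical illustration of this equivalence (arbitrary sizes and random allocation matrices, assumed only for the check), the following sketch builds the PARATUCK-(2,4) tensor (57) directly and from the constrained PARAFAC factors (59)-(60).

```python
import numpy as np

R1, R2 = 2, 3
I1, I2, I3, I4 = 4, 5, 6, 7
rng = np.random.default_rng(8)
A1 = rng.standard_normal((I1, R1))
A2 = rng.standard_normal((I2, R2))
Phi1 = rng.integers(0, 2, (R1, I3)).astype(float)    # allocation matrices
Phi2 = rng.integers(0, 2, (R2, I3)).astype(float)
Ct = rng.standard_normal((R1, R2, I4))               # input tensor C

# PARATUCK-(2,4) model (57), written directly with einsum
X = np.einsum('pqm,ip,jq,pk,qk->ijkm', Ct, A1, A2, Phi1, Phi2)

# Constrained PARAFAC-4 factors (59)-(60), with R = R1 R2
Psi1 = np.kron(np.eye(R1), np.ones((1, R2)))         # Psi^(1) = I_{R1} kron 1_{R2}^T
Psi2 = np.kron(np.ones((1, R1)), np.eye(R2))         # Psi^(2) = 1_{R1}^T kron I_{R2}
F = np.einsum('pk,qk->pqk', Phi1, Phi2).reshape(R1 * R2, I3).T  # (Phi1 diamond Phi2)^T
D = Ct.reshape(R1 * R2, I4).T                        # C_{I4 x R1 R2}

X_cp = np.einsum('ir,jr,kr,mr->ijkm', A1 @ Psi1, A2 @ Psi2, F, D)
assert np.allclose(X, X_cp)

# Identity (66): Psi^(1) diamond Psi^(2) = I_{R1 R2}
KR = np.einsum('pr,qr->pqr', Psi1, Psi2).reshape(R1 * R2, R1 * R2)
assert np.allclose(KR, np.eye(R1 * R2))
```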

Remarks
  • Application of the formula (38) to the constrained PARAFAC model (58), with the matrix factors $(\mathbf{A}, \mathbf{B}, \mathbf{F}, \mathbf{D}) = \left( \mathbf{A}^{(1)} \mathbf{\Psi}^{(1)}, \mathbf{A}^{(2)} \mathbf{\Psi}^{(2)}, \left( \mathbf{\Phi}^{(1)} \diamond \mathbf{\Phi}^{(2)} \right)^T, \mathbf{C}_{I_4 \times R_1 R_2} \right)$, gives the following flat mode-1 and mode-2 matrix unfoldings for the PARATUCK-(2,4) model (57):

    $$\mathbf{X}_{I_1 \times I_2 I_3 I_4} = \mathbf{A}^{(1)} \mathbf{\Psi}^{(1)} \left( \mathbf{A}^{(2)} \mathbf{\Psi}^{(2)} \diamond \mathbf{F} \diamond \mathbf{D} \right)^T \in \mathbb{C}^{I_1 \times I_2 I_3 I_4}, \qquad \mathbf{X}_{I_2 \times I_3 I_4 I_1} = \mathbf{A}^{(2)} \mathbf{\Psi}^{(2)} \left( \mathbf{F} \diamond \mathbf{D} \diamond \mathbf{A}^{(1)} \mathbf{\Psi}^{(1)} \right)^T \in \mathbb{C}^{I_2 \times I_3 I_4 I_1}.$$
  • The constrained PARAFAC-4 model (58)-(60) can be written in mode-$n$ product notation as

    $$\mathcal{X} = \mathcal{I}_{4,R} \times_1 \mathbf{A}^{(1)} \mathbf{\Psi}^{(1)} \times_2 \mathbf{A}^{(2)} \mathbf{\Psi}^{(2)} \times_3 \mathbf{F} \times_4 \mathbf{D}.$$
    (61)
  • Defining the core tensor $\mathcal{G} \in \mathbb{C}^{R_1 \times R_2 \times I_3 \times I_4}$ as

    $$\mathcal{G} = \mathcal{I}_{4,R} \times_1 \mathbf{\Psi}^{(1)} \times_2 \mathbf{\Psi}^{(2)} \times_3 \mathbf{F} \times_4 \mathbf{D},$$
    (62)
  • the constrained PARAFAC-4 model can also be viewed as the following Tucker-(2,4) model:

    $$\mathcal{X} = \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)}.$$
    (63)
  • It can also be viewed as a CONFAC-(2,4) model with matrix factors $\left( \mathbf{A}^{(1)}, \mathbf{A}^{(2)}, \mathbf{F}, \mathbf{D} \right)$ and constraint matrices $\mathbf{\Psi}^{(1)}$ and $\mathbf{\Psi}^{(2)}$ defined in (60).

  • Choosing $S_1 = \{i_1, i_2\}$ and $S_2 = \{i_3, i_4\}$, the matrix unfolding (37) of the PARAFAC model (61) is given by

    $$\mathbf{X}_{I_1 I_2 \times I_3 I_4} = \left( \mathbf{A}^{(1)} \mathbf{\Psi}^{(1)} \diamond \mathbf{A}^{(2)} \mathbf{\Psi}^{(2)} \right) (\mathbf{F} \diamond \mathbf{D})^T = \left( \mathbf{A}^{(1)} \otimes \mathbf{A}^{(2)} \right) (\mathbf{F} \diamond \mathbf{D})^T \in \mathbb{C}^{I_1 I_2 \times I_3 I_4}$$
    (64)

    Proof. Using the identity (90) gives

    $$\mathbf{A}^{(1)} \mathbf{\Psi}^{(1)} \diamond \mathbf{A}^{(2)} \mathbf{\Psi}^{(2)} = \left( \mathbf{A}^{(1)} \otimes \mathbf{A}^{(2)} \right) \left( \mathbf{\Psi}^{(1)} \diamond \mathbf{\Psi}^{(2)} \right).$$
    (65)

    Replacing $\mathbf{\Psi}^{(1)}$ and $\mathbf{\Psi}^{(2)}$ by their expressions (102) and (103) leads to

    $$\mathbf{\Psi}^{(1)} \diamond \mathbf{\Psi}^{(2)} = \left( \mathbf{I}_{R_1} \otimes \mathbf{1}_{R_2}^T \right) \diamond \left( \mathbf{1}_{R_1}^T \otimes \mathbf{I}_{R_2} \right) = \mathrm{bdiag} \Big( \underbrace{\mathbf{I}_{R_2} \ \cdots \ \mathbf{I}_{R_2}}_{R_1 \ \text{blocks}} \Big) = \mathbf{I}_{R_1 R_2},$$
    (66)

    which implies

    $$\mathbf{A}^{(1)} \mathbf{\Psi}^{(1)} \diamond \mathbf{A}^{(2)} \mathbf{\Psi}^{(2)} = \mathbf{A}^{(1)} \otimes \mathbf{A}^{(2)},$$
    (67)

    and consequently (64) can be deduced.

    This equation can also be obtained from the equivalent Tucker-(2,4) model (62)-(63) as

    $$\mathbf{X}_{I_1 I_2 \times I_3 I_4} = \left( \mathbf{A}^{(1)} \otimes \mathbf{A}^{(2)} \right) \mathbf{G}_{R_1 R_2 \times I_3 I_4}$$
    (68)

    with

    $$\mathbf{G}_{R_1 R_2 \times I_3 I_4} = \left( \mathbf{\Psi}^{(1)} \diamond \mathbf{\Psi}^{(2)} \right) (\mathbf{F} \diamond \mathbf{D})^T.$$

    Using the identity (66), we obtain

    $$\mathbf{G}_{R_1 R_2 \times I_3 I_4} = (\mathbf{F} \diamond \mathbf{D})^T,$$
    (69)

    and replacing $\mathbf{G}_{R_1 R_2 \times I_3 I_4}$ by its expression (69) in (68) gives (64).

  • When the allocation matrices $\mathbf{\Phi}^{(1)}, \mathbf{\Phi}^{(2)}$ and the input tensor $\mathcal{C}$ are known, the matrix factors $\mathbf{A}^{(1)}, \mathbf{A}^{(2)}$ can be estimated through the LS estimation of their Kronecker product using the matrix unfolding (64).

  • The product $\phi_{r_1, i_3}^{(1)} \phi_{r_2, i_3}^{(2)}$ in (57) can be replaced by $f_{i_3, r_1, r_2}$, which amounts to replacing the allocation matrices $\mathbf{\Phi}^{(1)}$ and $\mathbf{\Phi}^{(2)}$ by the third-order allocation tensor $\mathcal{F} \in \mathbb{C}^{I_3 \times R_1 \times R_2}$, the matrix $\mathbf{F} = \left( \mathbf{\Phi}^{(1)} \diamond \mathbf{\Phi}^{(2)} \right)^T \in \mathbb{C}^{I_3 \times R_1 R_2}$ being equal to $\mathbf{F}_{I_3 \times R_1 R_2} \in \mathbb{C}^{I_3 \times R_1 R_2}$, i.e., a mode-1 flat matrix unfolding of the allocation tensor $\mathcal{F}$.

1.3.7.0 Link between PARATUCK‐2 and constrained PARAFAC‐3 models

By proceeding in the same way as for the PARATUCK‐(2,4) model, it is easy to show that the PARATUCK‐2 model (55) is equivalent to a third‐order constrained PARAFAC model whose matrix factors $\mathbf{A} \in \mathbb{C}^{I_1 \times R}$, $\mathbf{B} \in \mathbb{C}^{I_2 \times R}$, and $\mathbf{F} \in \mathbb{C}^{I_3 \times R}$, with $R = R_1 R_2$, are given by

$$\mathbf{A} = \mathbf{A}^{(1)}\boldsymbol{\Psi}^{(1)}, \quad \mathbf{B} = \mathbf{A}^{(2)}\boldsymbol{\Psi}^{(2)}, \quad \mathbf{F} = \left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T \mathrm{diag}\left( \mathrm{vec}\left( \mathbf{C}^T \right) \right)$$
(70)

with the same constraint matrices Ψ(1) and Ψ(2) defined in (60).

By analogy with the PARATUCK‐(2,4) model, (61), (63), and (64) become for the PARATUCK‐2 model

$$\mathcal{X} = \mathcal{I}_{3,R} \times_1 \mathbf{A}^{(1)}\boldsymbol{\Psi}^{(1)} \times_2 \mathbf{A}^{(2)}\boldsymbol{\Psi}^{(2)} \times_3 \mathbf{F} = \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)}$$
(71)

with the core tensor $\mathcal{G} \in \mathbb{C}^{R_1 \times R_2 \times I_3}$ defined as

$$\mathcal{G} = \mathcal{I}_{3,R} \times_1 \boldsymbol{\Psi}^{(1)} \times_2 \boldsymbol{\Psi}^{(2)} \times_3 \mathbf{F},$$
(72)

and

$$\mathbf{X}_{I_1 I_2 \times I_3} = \left( \mathbf{A}^{(1)} \otimes \mathbf{A}^{(2)} \right) \mathbf{F}^T \in \mathbb{C}^{I_1 I_2 \times I_3}.$$
1.3.7.0 Remarks
  • Note that (71) and (72) allow interpreting the PARATUCK‐2 model as a Tucker‐(2,3) model, defined in (31)‐(32). If we choose $c_{r_1,r_2} = 1$ for $r_k = 1,\ldots,R_k$, $k = 1, 2$, and define the allocation tensor $\mathcal{F} \in \mathbb{C}^{R_1 \times R_2 \times I_3}$ such that $f_{r_1,r_2,i_3} = \phi^{(1)}_{r_1,i_3}\, \phi^{(2)}_{r_2,i_3}$, the PARATUCK‐2 model (55) becomes the following Tucker‐(2,3) model:

    $$x_{i_1,i_2,i_3} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} f_{r_1,r_2,i_3}\, a^{(1)}_{i_1,r_1}\, a^{(2)}_{i_2,r_2},$$

    and the associated constrained PARAFAC‐3 model can be deduced from (70):

    $$\mathbf{A} = \mathbf{A}^{(1)}\boldsymbol{\Psi}^{(1)}, \quad \mathbf{B} = \mathbf{A}^{(2)}\boldsymbol{\Psi}^{(2)}, \quad \mathbf{F} = \mathbf{F}_{I_3 \times R_1 R_2} = \left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T,$$

    with the same constraint matrices Ψ(1) and Ψ(2) as those defined in (60). A block Tucker‐(2,3) model transformed into a block constrained PARAFAC‐3 model was used in[72] for modeling three multiuser wireless communication systems in a unified way.

  • Now, we show the equivalence of the expressions (72) and (54) of the core tensor; a numerical confirmation is sketched after these remarks. Applying the formula (38) to the PARAFAC model (72) gives

    $$\mathbf{G}_{I_3 \times R_1 R_2} = \left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T \mathrm{diag}\left( \mathrm{vec}\left( \mathbf{C}^T \right) \right) \left( \boldsymbol{\Psi}^{(1)} \diamond \boldsymbol{\Psi}^{(2)} \right)^T.$$
    (73)

    Using the identity (66) in (73) gives $\mathbf{G}_{I_3 \times R_1 R_2} = \left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T \mathrm{diag}\left( \mathrm{vec}\left( \mathbf{C}^T \right) \right)$.

  • For the formula (54), with $N = 3$ and $N_1 = 2$, we have

    $$\mathcal{G} = \mathcal{F} \circ_{\{r_1,r_2\}} \mathcal{C},$$

    or, equivalently, in terms of the matrix Hadamard product,

    $$\mathbf{G}_{I_3 \times R_1 R_2} = \mathbf{F}_{I_3 \times R_1 R_2} \circ \left( \mathbf{1}_{I_3} \mathbf{c}^T \right)$$

    with $\mathbf{F}_{I_3 \times R_1 R_2} = \left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T$ and $\mathbf{c} = \mathrm{vec}\left( \mathbf{C}^T \right) \in \mathbb{C}^{R_1 R_2 \times 1}$, which gives

    $$\mathbf{G}_{I_3 \times R_1 R_2} = \mathbf{F}_{I_3 \times R_1 R_2} \circ \underbrace{\begin{bmatrix} \mathbf{c}^T \\ \vdots \\ \mathbf{c}^T \end{bmatrix}}_{I_3 \text{ rows}},$$

    and consequently $\mathbf{G}_{I_3 \times R_1 R_2} = \left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T \mathrm{diag}(\mathbf{c})$, showing the equivalence of the two core tensor expressions (72) and (54).
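
A short numerical confirmation of this remark (our own sketch; names and dimensions are illustrative) checks that the Hadamard form $\mathbf{F} \circ (\mathbf{1}_{I_3}\mathbf{c}^T)$ equals $(\boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)})^T \mathrm{diag}(\mathbf{c})$.

```python
import numpy as np

def khatri_rao(U, V):                              # column-wise Kronecker product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

rng = np.random.default_rng(2)
R1, R2, I3 = 2, 3, 4
Phi1 = rng.integers(0, 2, (R1, I3)).astype(float)
Phi2 = rng.integers(0, 2, (R2, I3)).astype(float)
C = rng.standard_normal((R1, R2))
c = C.reshape(-1)                                  # row-major flattening = vec(C^T)

F = khatri_rao(Phi1, Phi2).T                       # F_{I3 x R1R2}
G_hadamard = F * np.outer(np.ones(I3), c)          # F o (1_{I3} c^T)
G_diag = F @ np.diag(c)                            # (Phi1 ⋄ Phi2)^T diag(c)
print(np.allclose(G_hadamard, G_diag))             # True
```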

1.3.7.0 Link between PARATUCK‐(N−2,N) and constrained PARAFAC‐N models

Let us consider the PARATUCK‐$(N_1,N)$ model (52) in the case $N_1 = N - 2$:

$$x_{i_1,\ldots,i_{N_1+1},\ldots,i_N} = \sum_{r_1=1}^{R_1} \cdots \sum_{r_{N_1}=1}^{R_{N_1}} c_{r_1,\ldots,r_{N_1},i_N} \prod_{n=1}^{N_1} a^{(n)}_{i_n,r_n}\, \phi^{(n)}_{r_n,i_{N_1+1}}$$
(74)

and let us define the change of variables $r = r_{N_1} + \sum_{n=1}^{N_1-1} (r_n - 1) \prod_{i=n+1}^{N_1} R_i$, corresponding to a combination of the $N_1$ modes associated with the constraints/allocations. Then, (74) can be written as the following constrained PARAFAC‐N model:

$$x_{i_1,\ldots,i_N} = \sum_{r=1}^{R} \prod_{n=1}^{N} \bar{a}^{(n)}_{i_n,r}, \quad R = \prod_{i=1}^{N_1} R_i$$
(75)

with the following matrix factors

$$\bar{\mathbf{A}}^{(n)} = \mathbf{A}^{(n)}\boldsymbol{\Psi}^{(n)}, \ n = 1,\ldots,N_1; \quad \mathbf{F} = \left( \underset{n=1}{\overset{N_1}{\diamond}} \boldsymbol{\Phi}^{(n)} \right)^T; \quad \mathbf{D} = \mathbf{C}_{I_N \times R_1 \cdots R_{N_1}},$$

where $\mathbf{C}_{I_N \times R_1 \cdots R_{N_1}} \in \mathbb{C}^{I_N \times R_1 \cdots R_{N_1}}$ is a mode‐$(N_1+1)$ unfolded matrix of the tensor $\mathcal{C} \in \mathbb{C}^{R_1 \times \cdots \times R_{N_1} \times I_N}$, and the constraint matrices are given in (94) as

$$\boldsymbol{\Psi}^{(n)} = \mathbf{1}_{R_1}^T \otimes \cdots \otimes \mathbf{1}_{R_{n-1}}^T \otimes \mathbf{I}_{R_n} \otimes \mathbf{1}_{R_{n+1}}^T \otimes \cdots \otimes \mathbf{1}_{R_{N_1}}^T \in \mathbb{C}^{R_n \times R}, \quad n = 1,\ldots,N_1.$$

The constrained PARAFAC model (75) can also be written as a Tucker‐$(N_1,N)$ model (30) with the core tensor defined in (54) or, equivalently,

$$\mathcal{G} = \mathcal{I}_{N,R} \ \underset{n=1}{\overset{N-2}{\times_n}} \ \boldsymbol{\Psi}^{(n)} \times_{N-1} \mathbf{F} \times_N \mathbf{D}.$$
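
The constraint matrices $\boldsymbol{\Psi}^{(n)}$ above can be generated mechanically. The following sketch (ours, with 0‐based mode indexing and an arbitrary list of ranks) builds them as Kronecker products of an identity matrix and all‐ones row vectors.

```python
import numpy as np
from functools import reduce

def psi(n, ranks):
    """Psi^(n) = 1_{R_1}^T (x) ... (x) I_{R_n} (x) ... (x) 1_{R_N1}^T, of size R_n x prod(ranks)."""
    factors = [np.eye(Rk) if k == n else np.ones((1, Rk))
               for k, Rk in enumerate(ranks)]
    return reduce(np.kron, factors)

ranks = [2, 3, 2]                       # R_1, ..., R_{N1} (illustrative values)
for n in range(len(ranks)):
    print(psi(n, ranks).shape)          # (R_n, R_1*R_2*R_3) for each mode n

# Each column r of Psi^(n) is the canonical vector selecting r_n under the
# change of variables r = r_{N1} + sum_n (r_n - 1) * prod_{i>n} R_i.
```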

1.3.8 Comparison of constrained tensor models

To conclude this presentation, we compare the so‐called CONFAC‐$(N_1,N)$ and PARATUCK‐$(N_1,N)$ constrained tensor models, introduced in this paper with a resource allocation point of view. Due to the PARAFAC structure (41) of the core tensor of CONFAC models, each element $x_{i_1,\ldots,i_N}$ of the output tensor is the sum of $R$ components, as shown in (43). Moreover, due to the special structure of the allocation matrices $\boldsymbol{\Phi}^{(n)}$, whose columns are unit vectors, each component $r$ results from a combination of $N$ resources, in the form of the product $\prod_{n=1}^{N} a^{(n)}_{i_n,r_n}$, the $N$ resources being fixed by the allocation matrices $\boldsymbol{\Phi}^{(n)} \in \mathbb{C}^{R_n \times R}$.

With the CONFAC‐$(N_1,N)$ model (49), each component $r$ is a combination of $N_1$ resources $(r_1,\ldots,r_{N_1})$ determined by the allocation matrices $\boldsymbol{\Phi}^{(n)} \in \mathbb{C}^{R_n \times R}$ for $n = 1,\ldots,N_1$.

There are two main differences between the PARATUCK‐$(N_1,N)$ models (52) and the CONFAC models (42). The first is that the allocation matrices of PARATUCK models, formed with 0’s and 1’s, do not necessarily have unit vectors as column vectors, which means that it is possible to allocate $\gamma_n = \sum_{r_n=1}^{R_n} \phi^{(n)}_{r_n,i_{N_1+1}}$ resources $r_n$ to the $(N_1+1)$th mode of the output tensor. The second results from the interpretation of PARATUCK‐$(N_1,N)$ models as Tucker‐$(N_1,N)$ models, implying that each element $x_{i_1,\ldots,i_N}$ of the output tensor is equal to the sum of $\sum_{r_1=1}^{R_1} \cdots \sum_{r_{N_1}=1}^{R_{N_1}} f_{r_1,\ldots,r_{N_1},i_{N_1+1}}$ terms, where $f_{r_1,\ldots,r_{N_1},i_{N_1+1}}$ is an entry of the allocation tensor defined in (53), each term being a combination of resources in the form of the product $\prod_{n=1}^{N_1} a^{(n)}_{i_n,r_n}$. Moreover, in telecommunication applications, the input tensor $\mathcal{C}$ can be used as a code tensor.

Another way to compare PARALIND/CONFAC and PARATUCK models is in terms of the dependencies/interactions between their factor matrices. In the case of PARALIND/CONFAC models, as pointed out by (42), the constraint matrices act independently on each factor matrix, making explicit the linear dependencies between the columns of these matrices. For PARATUCK models, their rewriting as Tucker‐$(N_1,N)$ models with the core tensor defined in (54) allows one to interpret the allocation tensor $\mathcal{F}$ as an interaction tensor defining the interactions between the $N_1$ factor matrices, the input tensor $\mathcal{C}$ providing the strength of these interactions.

The main constrained PARAFAC models are summarized in Tables 1 and 2.

Table 1 Main tensor models
Table 2 Equivalent constrained PARAFAC models

1.4 Uniqueness issue

Several results exist for essential uniqueness of PARAFAC models, i.e., uniqueness of factor matrices up to column permutation and scaling. These results concern both deterministic and generic uniqueness, i.e., uniqueness for a particular PARAFAC model or uniqueness with probability one in the case where the entries of the factor matrices are drawn from continuous distributions. An overview of main uniqueness conditions of PARAFAC models of third‐order tensors can be found in[80] for the deterministic case and in[81] for the generic case. Hereafter, we briefly summarize some basic results on uniqueness of PARAFAC models. The case with linearly dependent loadings is also discussed. Then, we present new results concerning the uniqueness of PARATUCK models. These results are directly deduced from sufficient conditions for essential uniqueness of their associated constrained PARAFAC models, as established in the previous section. As these conditions involve the notion of k‐rank of a matrix, we first recall the definition of k‐rank.

Definition of k‐rank

The k‐rank (also called Kruskal’s rank) of a matrix $\mathbf{A} \in \mathbb{C}^{I \times R}$, denoted by $k_A$, is the largest integer such that any set of $k_A$ columns of $\mathbf{A}$ is linearly independent.

It is obvious that $k_A \le r_A$, where $r_A$ denotes the rank of $\mathbf{A}$.
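
Since the conditions below are stated in terms of k‐ranks, the following brute‐force routine (our own sketch, exponential in the number of columns and therefore only suitable for small examples) computes the k‐rank of a matrix and checks the Kruskal‐type sum condition stated next as (76).

```python
import numpy as np
from itertools import combinations

def krank(A, tol=1e-10):
    """Largest k such that every set of k columns of A is linearly independent."""
    R = A.shape[1]
    for k in range(R, 0, -1):           # if all k-subsets are independent, so are smaller ones
        if all(np.linalg.matrix_rank(A[:, list(cols)], tol=tol) == k
               for cols in combinations(range(R), k)):
            return k
    return 0

def kruskal_condition(factors):
    """Sufficient condition (76): sum_n k_{A^(n)} >= 2R + N - 1."""
    R = factors[0].shape[1]
    return sum(krank(A) for A in factors) >= 2 * R + len(factors) - 1

rng = np.random.default_rng(3)
A, B, C = (rng.standard_normal((d, 3)) for d in (4, 5, 6))
print(krank(A), kruskal_condition([A, B, C]))   # generically: 3 True
```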

1.4.1 Uniqueness of PARAFAC‐N models[82]

The PARAFAC‐N model (33)‐(35) is essentially unique, i.e., its factor matrices $\mathbf{A}^{(n)} \in \mathbb{C}^{I_n \times R}$, $n = 1,\ldots,N$, are unique up to column permutation and scaling, if

$$\sum_{n=1}^{N} k_{A^{(n)}} \ge 2R + N - 1.$$
(76)
Essential uniqueness means that two sets of factor matrices are linked by the following relations: $\hat{\mathbf{A}}^{(n)} = \mathbf{A}^{(n)} \boldsymbol{\Pi} \boldsymbol{\Lambda}^{(n)}$, for $n = 1,\ldots,N$, where $\boldsymbol{\Pi}$ is a permutation matrix and the $\boldsymbol{\Lambda}^{(n)}$ are non‐singular diagonal matrices such that $\prod_{n=1}^{N} \boldsymbol{\Lambda}^{(n)} = \mathbf{I}_R$.

In the generic case, the factor matrices are full rank, with $k_{A^{(n)}} = r_{A^{(n)}} = \min(I_n, R)$, and Kruskal’s condition (76) becomes

$$\sum_{n=1}^{N} \min(I_n, R) \ge 2R + N - 1.$$
(77)

Case of third‐order PARAFAC models. Consider a third‐order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$ of rank $R$, satisfying a PARAFAC model with matrix factors $(\mathbf{A}, \mathbf{B}, \mathbf{C})$. Kruskal’s condition (76) becomes

$$k_A + k_B + k_C \ge 2R + 2.$$
(78)
1.4.1.0 Remarks
  • The condition (76) is sufficient but not necessary for essential uniqueness. It cannot hold when $R = 1$, although a rank‐one PARAFAC model is essentially unique. It is also necessary for $R = 2$ and $R = 3$ but not for $R > 3$ (see[83]).

  • The first sufficient condition for essential uniqueness of third‐order PARAFAC models was established by Harshman[84] and then generalized by Kruskal[52] using the concept of k‐rank. A more accessible proof of Kruskal’s condition is provided in[85]. Kruskal’s condition was extended to complex‐valued tensors in[15] and to N‐way arrays, with N>3, in[82].

  • Necessary and sufficient uniqueness conditions more relaxed than the Kruskal’s one were established for third‐ and fourth‐order tensors, under the assumption that at least one matrix factor is full column rank[86, 87]. These conditions are complicated to apply. Other more relaxed conditions have been derived independently by Stegeman[88] and Guo et al.[89], for third‐order PARAFAC models with a full column rank matrix factor.

  • From the condition (78), we can conclude that if two matrix factors ($\mathbf{A}$ and $\mathbf{B}$) are full column rank ($k_A = k_B = R$), then the PARAFAC model is essentially unique if the third matrix factor ($\mathbf{C}$) has no proportional columns ($k_C > 1$).

  • If one matrix factor ($\mathbf{C}$ for instance) is full column rank, then (78) gives

    $$k_A + k_B \ge R + 2.$$
    (79)
  • In[88] and[89], it is shown that the PARAFAC model $(\mathbf{A}, \mathbf{B}, \mathbf{C})$, with $\mathbf{C}$ of full column rank, is essentially unique if the other two matrix factors $\mathbf{A}$ and $\mathbf{B}$ satisfy the following conditions:

    $$1)\ k_A, k_B \ge 2; \qquad 2)\ r_A + k_B \ge R + 2 \ \text{ or } \ r_B + k_A \ge R + 2.$$
    (80)
  • Conditions (80) are more relaxed than (79). Indeed, if for instance $k_A = 2$ and $r_A = k_A + \delta$ with $\delta > 0$, application of (79) implies $k_B = R$, i.e., $\mathbf{B}$ must be full column rank, whereas (80) gives $k_B \ge R - \delta$, which does not require that $\mathbf{B}$ be full column rank.

  • When one matrix factor ($\mathbf{C}$ for instance) is known and Kruskal’s condition (78) is satisfied, as is often the case in telecommunication applications, essential uniqueness is ensured without permutation ambiguity and with only scaling ambiguities $(\boldsymbol{\Lambda}_A, \boldsymbol{\Lambda}_B)$ such that $\boldsymbol{\Lambda}_A \boldsymbol{\Lambda}_B = \mathbf{I}_R$.

1.4.2 Uniqueness of PARAFAC models with linearly dependent loadings

If one matrix factor contains at least two proportional columns, i.e., its k‐rank is equal to one, then Kruskal’s condition (78) cannot be satisfied. In this case, partial uniqueness can be ensured, i.e., some columns of some matrix factors are essentially unique while the others are unique up to multiplication by a non‐singular matrix[90]. To illustrate this result, let us consider the PARAFAC model of a fourth‐order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K \times L}$ with factor matrices $(\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D})$, two of which have two identical columns at the same position:

$$\mathbf{A} = \begin{bmatrix} \mathbf{A}_1 & \mathbf{a} & \mathbf{a} \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} \mathbf{B}_1 & \mathbf{b} & \mathbf{b} \end{bmatrix}, \quad \mathbf{C} = \begin{bmatrix} \mathbf{C}_1 & \mathbf{C}_2 \end{bmatrix}, \quad \mathbf{D} = \begin{bmatrix} \mathbf{D}_1 & \mathbf{D}_2 \end{bmatrix}$$

with $\mathbf{A}_1 \in \mathbb{C}^{I \times (R-2)}$, $\mathbf{B}_1 \in \mathbb{C}^{J \times (R-2)}$, $\mathbf{C}_2 \in \mathbb{C}^{K \times 2}$, and $\mathbf{D}_2 \in \mathbb{C}^{L \times 2}$. We have $k_A = k_B = 1$, and consequently the uniqueness condition (76) for $N = 4$ becomes $k_C + k_D \ge 2R + 1$, which cannot be satisfied. In this case, we have partial uniqueness. Indeed, the matrix slices (40) can be developed as follows:

$$\mathbf{X}_{ij..} = \mathbf{C}\, D_j(\mathbf{B})\, D_i(\mathbf{A})\, \mathbf{D}^T = \begin{bmatrix} \mathbf{C}_1 & \mathbf{C}_2 \end{bmatrix} \begin{bmatrix} D_j(\mathbf{B}_1) D_i(\mathbf{A}_1) & \mathbf{0}_{(R-2) \times 2} \\ \mathbf{0}_{2 \times (R-2)} & a_i b_j \mathbf{I}_2 \end{bmatrix} \begin{bmatrix} \mathbf{D}_1^T \\ \mathbf{D}_2^T \end{bmatrix} = \mathbf{C}_1 D_j(\mathbf{B}_1) D_i(\mathbf{A}_1) \mathbf{D}_1^T + a_i b_j \mathbf{C}_2 \mathbf{D}_2^T.$$

From this expression, it is easy to conclude that the last two columns of $\mathbf{C}$ and $\mathbf{D}$ are unique up to a rotational indeterminacy. Indeed, if one replaces the matrices $(\mathbf{C}_2, \mathbf{D}_2)$ by $(\mathbf{C}_2 \mathbf{T}, \mathbf{D}_2 \mathbf{T}^{-T})$, where $\mathbf{T} \in \mathbb{C}^{2 \times 2}$ is a non‐singular matrix, the matrix slices $\mathbf{X}_{ij..}$ remain unchanged, as illustrated numerically below. So, the PARAFAC model is said to be partially unique in the sense that only the blocks $(\mathbf{A}_1, \mathbf{B}_1, \mathbf{C}_1, \mathbf{D}_1)$ are essentially unique, the blocks $\mathbf{C}_2$ and $\mathbf{D}_2$ being unique up to a non‐singular matrix. Essential uniqueness means that any alternative blocks $(\hat{\mathbf{A}}_1, \hat{\mathbf{B}}_1, \hat{\mathbf{C}}_1, \hat{\mathbf{D}}_1)$ are such that $\hat{\mathbf{A}}_1 = \mathbf{A}_1 \boldsymbol{\Pi} \boldsymbol{\Delta}_a$, $\hat{\mathbf{B}}_1 = \mathbf{B}_1 \boldsymbol{\Pi} \boldsymbol{\Delta}_b$, $\hat{\mathbf{C}}_1 = \mathbf{C}_1 \boldsymbol{\Pi} \boldsymbol{\Delta}_c$, $\hat{\mathbf{D}}_1 = \mathbf{D}_1 \boldsymbol{\Pi} \boldsymbol{\Delta}_d$, where $\boldsymbol{\Pi}$ is a permutation matrix and $\boldsymbol{\Delta}_a$, $\boldsymbol{\Delta}_b$, $\boldsymbol{\Delta}_c$, and $\boldsymbol{\Delta}_d$ are diagonal matrices such that $\boldsymbol{\Delta}_a \boldsymbol{\Delta}_b \boldsymbol{\Delta}_c \boldsymbol{\Delta}_d = \mathbf{I}_{R-2}$. In[91], sufficient conditions are provided for essential uniqueness of fourth‐order PARAFAC models with one full column rank factor matrix and at most three collinear factor matrices, i.e., having one (or more) column(s) proportional to another column. Note that this type of model can be interpreted as a fourth‐order CONFAC model with constraints on at most three matrix factors. Uniqueness is ensured if any pair of proportional columns is not common to two collinear factors, which is not the case in the example above, due to the fact that the two equal columns of $\mathbf{A}$ and $\mathbf{B}$ are in the same position.
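
The rotational indeterminacy is easy to verify numerically. The following sketch (ours; all dimensions are arbitrary) checks that replacing $(\mathbf{C}_2, \mathbf{D}_2)$ by $(\mathbf{C}_2 \mathbf{T}, \mathbf{D}_2 \mathbf{T}^{-T})$ leaves the product $\mathbf{C}_2 \mathbf{D}_2^T$, and hence every slice $\mathbf{X}_{ij..}$, unchanged.

```python
import numpy as np

rng = np.random.default_rng(4)
K, L = 5, 6
C2 = rng.standard_normal((K, 2))
D2 = rng.standard_normal((L, 2))
T = rng.standard_normal((2, 2))                 # any non-singular 2x2 matrix
C2t = C2 @ T                                    # C2 T
D2t = D2 @ np.linalg.inv(T).T                   # D2 T^{-T}
print(np.allclose(C2 @ D2.T, C2t @ D2t.T))      # True: C2 T T^{-1} D2^T = C2 D2^T
```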

The PARALIND and CONFAC models represent a class of constrained PARAFAC models where the columns of one or more matrix factors are linearly dependent or collinear. In the case of CONFAC models, such a collinearity takes the form of repeated columns, the repetitions being explicitly modeled by means of constraint matrices. The work[92] derived both essential uniqueness conditions and partial uniqueness conditions for PARALIND/CONFAC models of third‐order tensors. Therein, the relation with uniqueness of constrained Tucker3 models and the block decomposition in rank‐(L,L,1) terms is also discussed. The essential uniqueness condition for a given matrix factor in PARALIND models makes use of Kruskal’s permutation lemma[52, 86].

Consider a third‐order tensor $\mathcal{X} \in \mathbb{C}^{I \times J \times K}$ satisfying a PARALIND model with matrix factors $(\mathbf{A}, \mathbf{B}, \mathbf{C})$ and constraint matrices $\boldsymbol{\Phi}^{(i)}$, $i = 1, 2, 3$. Suppose $(\mathbf{B} \otimes \mathbf{C})\, \mathbf{G}_{R_2 R_3 \times R_1}$ and $\mathbf{A}$ have full column rank, and let $\omega(\cdot)$ denote the number of nonzero elements of its vector argument. Define $N_i = \mathrm{rank}\left( \boldsymbol{\Phi}^{(2)} \mathrm{diag}\left( \boldsymbol{\Phi}^{(1)}_{i,.} \right) \boldsymbol{\Phi}^{(3)T} \right)$, $i = 1,\ldots,R_1$. If, for any vector $\mathbf{d}$,

$$\mathrm{rank}\left( \mathbf{B}\boldsymbol{\Phi}^{(2)}\, \mathrm{diag}\left( \mathbf{d}^T \boldsymbol{\Phi}^{(1)} \right) \left( \mathbf{C}\boldsymbol{\Phi}^{(3)} \right)^T \right) \le \max(N_1,\ldots,N_{R_1}) \ \text{ implies } \ \omega(\mathbf{d}) \le 1,$$
(81)

then A is essentially unique[92]. The uniqueness condition for B and C is analogous to condition (81) by interchanging the roles of Φ(1), Φ(2), and Φ(3).

When the PARALIND model reduces to a PARAFAC model, condition (81) is identical to Condition B of[86] for the essential uniqueness of the PARAFAC model in the case of a full column rank matrix factor. More recently, improved versions of the main uniqueness conditions of PARALIND/CONFAC models have been derived in[93]. The results presented therein involve simpler proofs than those of[92], and the associated uniqueness conditions are easier to check than the ones presented earlier in[92].

In[94], a ‘uni‐mode’ uniqueness condition is derived for a PARAFAC model with linearly dependent (proportional/identical) columns in one matrix factor. This condition is particularly useful for a subclass of PARALIND/CONFAC models with $\boldsymbol{\Phi}^{(2)} = \boldsymbol{\Phi}^{(3)} = \mathbf{I}_R$, i.e., when collinearity is confined to the first matrix factor. Let $\bar{\mathbf{A}} = \mathbf{A}\boldsymbol{\Phi}^{(1)}$, where $\bar{\mathbf{A}} \in \mathbb{C}^{I_1 \times R}$ contains collinear columns, the collinearity pattern being captured by $\boldsymbol{\Phi}^{(1)}$. Assuming that $\bar{\mathbf{A}}$ does not contain an all‐zero column, if

$$r_{\bar{A}} + k_B + k_C \ge 2R + 2,$$
(82)

then A is essentially unique. Generalizations of this condition can be obtained by imposing additional constraints on the ranks and k‐ranks of the matrix factors (see[94] for details).

1.4.3 Uniqueness of Tucker models

Contrary to PARAFAC models, Tucker models are generally not essentially unique. Indeed, the parameters of Tucker models can only be estimated up to non‐singular transformations, characterized by non‐singular matrices $\mathbf{T}^{(n)}$ that act on the mode‐n matrix factors $\mathbf{A}^{(n)}$ and can be cancelled by replacing the core tensor with $\mathcal{G} \times_{n=1}^{N} \mathbf{T}^{(n)-1}$. This result is easy to verify by applying the property (13) of the mode‐n product:

$$\left( \mathcal{G} \ \underset{n=1}{\overset{N}{\times_n}} \ \mathbf{T}^{(n)-1} \right) \underset{n=1}{\overset{N}{\times_n}} \ \mathbf{A}^{(n)}\mathbf{T}^{(n)} = \mathcal{G} \ \underset{n=1}{\overset{N}{\times_n}} \ \mathbf{A}^{(n)}\mathbf{T}^{(n)}\mathbf{T}^{(n)-1} = \mathcal{G} \ \underset{n=1}{\overset{N}{\times_n}} \ \mathbf{A}^{(n)}.$$

Uniqueness can be obtained by imposing some constraints on the core tensor or the matrix factors (see[9] for a review of main results concerning uniqueness of Tucker models, with discussion of three different approaches for simplifying core tensors so that uniqueness is ensured). Uniqueness can also result from a core with information redundancy and structure constraints as in[33] where the core is characterized by matrix slices in Hankel and Vandermonde forms.
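
As a quick numerical illustration of this transformation indeterminacy (our own sketch; the mode_n_product helper and all dimensions are arbitrary), the following code builds a third‐order Tucker model twice, once with $(\mathcal{G}, \mathbf{A}^{(n)})$ and once with $(\mathcal{G} \times_n \mathbf{T}^{(n)-1}, \mathbf{A}^{(n)}\mathbf{T}^{(n)})$, and checks that both give the same tensor.

```python
import numpy as np

def mode_n_product(G, M, n):
    """Mode-n product G x_n M: contract mode n of G with the columns of M."""
    Gm = np.tensordot(M, G, axes=(1, n))        # contracted mode comes out first
    return np.moveaxis(Gm, 0, n)                # put it back in position n

rng = np.random.default_rng(5)
G = rng.standard_normal((2, 3, 4))
A = [rng.standard_normal((d, r)) for d, r in ((5, 2), (6, 3), (7, 4))]
T = [rng.standard_normal((r, r)) for r in (2, 3, 4)]   # non-singular (generically)

X, G2 = G, G
for n in range(3):
    X = mode_n_product(X, A[n], n)                          # G x_n A^(n)
    G2 = mode_n_product(G2, np.linalg.inv(T[n]), n)         # G x_n T^(n)-1
X2 = G2
for n in range(3):
    X2 = mode_n_product(X2, A[n] @ T[n], n)                 # ... x_n A^(n) T^(n)
print(np.allclose(X, X2))                                   # True
```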

1.4.4 Uniqueness of the PARATUCK‐(2,4) model

Let us consider the PARATUCK‐(2,4) model defined by (57), with matrix factors $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$, constraint matrices $\boldsymbol{\Phi}^{(1)}$ and $\boldsymbol{\Phi}^{(2)}$, and core tensor $\mathcal{C}$. As previously shown, this model is equivalent to the constrained PARAFAC model (58) whose matrix factors are

$$\mathbf{A} = \mathbf{A}^{(1)}\boldsymbol{\Psi}^{(1)}, \quad \mathbf{B} = \mathbf{A}^{(2)}\boldsymbol{\Psi}^{(2)}, \quad \mathbf{F} = \left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T, \quad \mathbf{D} = \mathbf{C}_{I_4 \times R_1 R_2}$$

with Ψ(1) and Ψ(2) defined in (60). Due to the repetition of some columns of $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$, and assuming that these matrices do not contain an all‐zero column, we have $k_A = k_B = 1$, and application of Kruskal’s condition (76), with $N = 4$, gives

$$k_A + k_B + k_F + k_D \ge 2R_1 R_2 + 3 \ \Longrightarrow \ k_F + k_D \ge 2R_1 R_2 + 1,$$

which can never be satisfied. However, more relaxed sufficient conditions can be established for essential uniqueness of the PARATUCK‐(2,4) model. For that purpose, we consider the contracted constrained PARAFAC model obtained by combining the first two modes and using (67), which leads to a third‐order PARAFAC model with matrix factors

$$\left( \mathbf{A} \diamond \mathbf{B},\ \mathbf{F},\ \mathbf{D} \right) = \left( \mathbf{A}^{(1)} \otimes \mathbf{A}^{(2)},\ \left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T,\ \mathbf{C}_{I_4 \times R_1 R_2} \right)$$
(83)

Note that uniqueness of the matrix factors of the contracted PARAFAC model (83) implies uniqueness of the matrix factors A(1) and A(2) of the original PARATUCK‐(2,4) model. This comes from the fact that A(1) and A(2) can be recovered (up to a scaling factor) from their Kronecker product[95]. Application of the conditions (80) to the contracted PARAFAC model (83) allows deriving the following theorem.

1.4.4.0 Theorem

The PARATUCK‐(2,4) model defined by (57) is essentially unique:

  • 1) When $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ are full column rank (so that $r_{A^{(1)} \otimes A^{(2)}} = k_{A^{(1)} \otimes A^{(2)}} = R_1 R_2$), if $k_{(\Phi^{(1)} \diamond \Phi^{(2)})^T} \ge 2$, $k_{C_{I_4 \times R_1 R_2}} \ge 2$, and

    $$r_{(\Phi^{(1)} \diamond \Phi^{(2)})^T} + k_{C_{I_4 \times R_1 R_2}} \ge R_1 R_2 + 2 \quad \text{or} \quad r_{C_{I_4 \times R_1 R_2}} + k_{(\Phi^{(1)} \diamond \Phi^{(2)})^T} \ge R_1 R_2 + 2;$$

  • 2) When $\left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T$ is full column rank, if $k_{A^{(1)} \otimes A^{(2)}} \ge 2$, $k_{C_{I_4 \times R_1 R_2}} \ge 2$, and

    $$r_{A^{(1)}}\, r_{A^{(2)}} + k_{C_{I_4 \times R_1 R_2}} \ge R_1 R_2 + 2 \quad \text{or} \quad r_{C_{I_4 \times R_1 R_2}} + k_{A^{(1)} \otimes A^{(2)}} \ge R_1 R_2 + 2;$$

  • 3) When $\mathbf{C}_{I_4 \times R_1 R_2}$ is full column rank, if $k_{A^{(1)} \otimes A^{(2)}} \ge 2$, $k_{(\Phi^{(1)} \diamond \Phi^{(2)})^T} \ge 2$, and

    $$r_{A^{(1)}}\, r_{A^{(2)}} + k_{(\Phi^{(1)} \diamond \Phi^{(2)})^T} \ge R_1 R_2 + 2 \quad \text{or} \quad r_{(\Phi^{(1)} \diamond \Phi^{(2)})^T} + k_{A^{(1)} \otimes A^{(2)}} \ge R_1 R_2 + 2.$$

In[67], an application of the PARATUCK‐(2,4) model to tensor space‐time (TST) coding is considered. Therein, the matrix factors $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ represent the symbol and channel matrices to be estimated, while the constraint matrices $\boldsymbol{\Phi}^{(1)}$ and $\boldsymbol{\Phi}^{(2)}$ play the role of allocation matrices of the transmission system, and the tensor $\mathcal{C}$ is the coding tensor. In this context, $\boldsymbol{\Phi}^{(1)}$, $\boldsymbol{\Phi}^{(2)}$, and $\mathcal{C}$ can be properly designed to satisfy the sufficient conditions of item 1) of the theorem.
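
As an illustration, the following sketch (ours; it repeats the khatri_rao and krank helpers from the k‐rank sketch above, and the binary allocation matrices are an arbitrary admissible design, not the one of[67]) checks the sufficient conditions of item 1) for a candidate design $(\boldsymbol{\Phi}^{(1)}, \boldsymbol{\Phi}^{(2)}, \mathcal{C})$.

```python
import numpy as np
from itertools import combinations

def khatri_rao(U, V):                                 # column-wise Kronecker product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def krank(A, tol=1e-10):                              # brute-force Kruskal rank
    R = A.shape[1]
    for k in range(R, 0, -1):
        if all(np.linalg.matrix_rank(A[:, list(c)], tol=tol) == k
               for c in combinations(range(R), k)):
            return k
    return 0

def item1_holds(A1, A2, F, D):
    """Item 1): A^(1), A^(2) full column rank, plus the k-rank/rank tests
    on F = (Phi1 ⋄ Phi2)^T and D = C_{I4 x R1R2}."""
    R = F.shape[1]
    full = np.linalg.matrix_rank(np.kron(A1, A2)) == R
    return bool(full and krank(F) >= 2 and krank(D) >= 2 and
                (np.linalg.matrix_rank(F) + krank(D) >= R + 2 or
                 np.linalg.matrix_rank(D) + krank(F) >= R + 2))

rng = np.random.default_rng(6)
A1, A2 = rng.standard_normal((4, 2)), rng.standard_normal((3, 2))
Phi1 = np.array([[1, 0, 1, 1, 0, 1], [0, 1, 1, 0, 1, 1]], float)
Phi2 = np.array([[1, 1, 0, 1, 1, 0], [0, 1, 1, 1, 0, 1]], float)
D = rng.standard_normal((7, 4))                       # C_{I4 x R1R2}, I4 = 7
print(item1_holds(A1, A2, khatri_rao(Phi1, Phi2).T, D))   # True for this design
```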

The sufficient conditions of this theorem can easily be extended to the case of PARATUCK‐$(N_1,N)$ models by replacing $\mathbf{A}^{(1)} \otimes \mathbf{A}^{(2)}$, $\boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)}$, $\mathbf{C}_{I_4 \times R_1 R_2}$, and $R_1 R_2$ with $\otimes_{n=1}^{N_1} \mathbf{A}^{(n)}$, $\diamond_{n=1}^{N_1} \boldsymbol{\Phi}^{(n)}$, $\mathbf{C}_{I_{N_1+2} \cdots I_N \times R}$, and $R = \prod_{n=1}^{N_1} R_n$, respectively.

2 Conclusions

Several tensor models, among which some are new, have been presented in a general and unified framework. The use of the index notation for mode combination based on Kronecker products provides an original and concise way to derive vectorized and matricized forms of tensor models. A particular focus has been put on constrained tensor models with a perspective of designing MIMO communication systems with resource allocation. A link between PARATUCK models and constrained PARAFAC models has been established, which makes it possible to apply results concerning PARAFAC models to derive uniqueness properties and parameter estimation algorithms for PARATUCK models. In a companion paper, several tensor‐based MIMO systems are presented in a unified way based on constrained PARAFAC models, and a new tensor‐based space‐time‐frequency (TSTF) MIMO transmission system with a blind receiver is proposed using a generalized PARATUCK model[96]. Even if this presentation of constrained tensor models has been made with the aim of designing MIMO transmission systems, we believe that such tensor models can be applied to areas other than telecommunications, like, for instance, biomedical signal processing, and more particularly ECG and EEG signal modeling, with spatial constraints taking into account the relative weights of the contributions of different surface areas to the electrodes. The considered constrained tensor models make it possible to take constraints into account either independently on each matrix factor of a PARAFAC decomposition, in the case of PARALIND/CONFAC models, or between factors, in the case of PARATUCK models. A perspective of this work is to incorporate constraints into tensor networks, which decompose high‐order tensors into lower‐order tensors for big data processing[97]. In this case, the constraints could act either separately on each tensor component, to facilitate their physical interpretability, or between tensor components, to make their interactions explicit.

Appendices

Appendix 1

Some matrix formulae

For $\mathbf{A}^{(n)} \in \mathbb{C}^{I_n \times R_n}$, $\mathbf{B}^{(n)} \in \mathbb{C}^{R_n \times J_n}$, $\boldsymbol{\Phi}^{(n)} \in \mathbb{C}^{R_n \times R}$, and $\boldsymbol{\Psi}^{(n)} \in \mathbb{C}^{R_n \times Q}$, $n = 1,\ldots,N$:

$$\left( \underset{n=1}{\overset{N}{\otimes}} \mathbf{A}^{(n)} \right)^T = \underset{n=1}{\overset{N}{\otimes}} \mathbf{A}^{(n)T} \in \mathbb{C}^{R_1 \cdots R_N \times I_1 \cdots I_N}$$
(84)
$$(\text{Associative property}) \quad \left( \underset{n=1}{\overset{N}{\otimes}} \mathbf{A}^{(n)} \right) \left( \underset{n=1}{\overset{N}{\otimes}} \mathbf{B}^{(n)} \right) = \underset{n=1}{\overset{N}{\otimes}} \left( \mathbf{A}^{(n)} \mathbf{B}^{(n)} \right) \in \mathbb{C}^{I_1 \cdots I_N \times J_1 \cdots J_N}$$
(85)
$$\left( \underset{n=1}{\overset{N}{\otimes}} \mathbf{A}^{(n)} \right) \left( \underset{n=1}{\overset{N}{\diamond}} \boldsymbol{\Phi}^{(n)} \right) = \underset{n=1}{\overset{N}{\diamond}} \left( \mathbf{A}^{(n)} \boldsymbol{\Phi}^{(n)} \right) \in \mathbb{C}^{I_1 \cdots I_N \times R}$$
(86)
$$\left( \underset{n=1}{\overset{N}{\diamond}} \boldsymbol{\Psi}^{(n)} \right)^T \left( \underset{n=1}{\overset{N}{\diamond}} \boldsymbol{\Phi}^{(n)} \right) = \underset{n=1}{\overset{N}{\odot}} \left( \boldsymbol{\Psi}^{(n)T} \boldsymbol{\Phi}^{(n)} \right) \in \mathbb{C}^{Q \times R}.$$

For $\mathbf{A}^{(n)} \in \mathbb{C}^{I \times J}$, $n = 1,\ldots,N$, and $\mathbf{B}^{(p)} \in \mathbb{C}^{K \times L}$, $p = 1,\ldots,P$:

$$(\text{Distributive property}) \quad \left( \sum_{n=1}^{N} \mathbf{A}^{(n)} \right) \otimes \left( \sum_{p=1}^{P} \mathbf{B}^{(p)} \right) = \sum_{n=1}^{N} \sum_{p=1}^{P} \mathbf{A}^{(n)} \otimes \mathbf{B}^{(p)} \in \mathbb{C}^{IK \times JL}$$
(87)

In particular, for $\mathbf{A} \in \mathbb{C}^{I \times M}$, $\mathbf{B} \in \mathbb{C}^{J \times N}$, $\mathbf{C} \in \mathbb{C}^{M \times P}$, $\mathbf{D} \in \mathbb{C}^{N \times Q}$, $\mathbf{E} \in \mathbb{C}^{P \times J}$, $\boldsymbol{\Phi} \in \mathbb{C}^{M \times R}$, $\boldsymbol{\Psi} \in \mathbb{C}^{N \times R}$, $\boldsymbol{\Omega} \in \mathbb{C}^{M \times Q}$, $\boldsymbol{\Xi} \in \mathbb{C}^{N \times Q}$, and $\mathbf{x} \in \mathbb{C}^{M \times 1}$, we have

$$(\mathbf{A} \otimes \mathbf{B})^T = \mathbf{A}^T \otimes \mathbf{B}^T,$$
(88)
$$(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = \mathbf{A}\mathbf{C} \otimes \mathbf{B}\mathbf{D},$$
(89)
$$(\mathbf{A} \otimes \mathbf{B})(\boldsymbol{\Phi} \diamond \boldsymbol{\Psi}) = \mathbf{A}\boldsymbol{\Phi} \diamond \mathbf{B}\boldsymbol{\Psi},$$
(90)
$$(\boldsymbol{\Omega} \diamond \boldsymbol{\Xi})^T (\boldsymbol{\Phi} \diamond \boldsymbol{\Psi}) = \boldsymbol{\Omega}^T\boldsymbol{\Phi} \odot \boldsymbol{\Xi}^T\boldsymbol{\Psi},$$
(91)
$$\mathrm{vec}(\mathbf{A}\mathbf{C}\mathbf{E}) = \left( \mathbf{E}^T \otimes \mathbf{A} \right) \mathrm{vec}(\mathbf{C}),$$
(92)
$$\mathrm{vec}\left( \mathbf{A}\, \mathrm{diag}(\mathbf{x})\, \mathbf{C} \right) = \left( \mathbf{C}^T \diamond \mathbf{A} \right) \mathbf{x}.$$
(93)
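
These identities are easy to check numerically. The following sketch (ours; the khatri_rao helper, the dimensions, and the column‐major vec convention are our own choices) verifies (88)‐(93) with random matrices of compatible sizes.

```python
import numpy as np

def khatri_rao(U, V):                                  # column-wise Kronecker product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

rng = np.random.default_rng(7)
I, J, M, N, P, Q, R = 2, 3, 4, 5, 2, 3, 4
A = rng.standard_normal((I, M)); B = rng.standard_normal((J, N))
C = rng.standard_normal((M, P)); D = rng.standard_normal((N, Q))
E = rng.standard_normal((P, J)); Phi = rng.standard_normal((M, R))
Psi = rng.standard_normal((N, R)); Om = rng.standard_normal((M, Q))
Xi = rng.standard_normal((N, Q)); x = rng.standard_normal(M)

checks = [
    np.allclose(np.kron(A, B).T, np.kron(A.T, B.T)),                      # (88)
    np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)),    # (89)
    np.allclose(np.kron(A, B) @ khatri_rao(Phi, Psi),
                khatri_rao(A @ Phi, B @ Psi)),                            # (90)
    np.allclose(khatri_rao(Om, Xi).T @ khatri_rao(Phi, Psi),
                (Om.T @ Phi) * (Xi.T @ Psi)),                             # (91)
    np.allclose((A @ C @ E).reshape(-1, order='F'),
                np.kron(E.T, A) @ C.reshape(-1, order='F')),              # (92)
    np.allclose((A @ np.diag(x) @ C).reshape(-1, order='F'),
                khatri_rao(C.T, A) @ x),                                  # (93)
]
print(all(checks))                                                        # True
```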

Appendix 2

Tensor extension of a matrix

Following the same demonstration as for (21) and (22), it is easy to deduce the following more general formula for the extension of $\mathbf{B} \in \mathbb{C}^{I \times R_n}$ into a tensor $\mathcal{A} \in \mathbb{C}^{I \times R_1 \times \cdots \times R_N}$ such that $a_{i,r_1,\ldots,r_n,\ldots,r_N} = b_{i,r_n}$ for all $r_k = 1,\ldots,R_k$, $k = 1,\ldots,n-1,n+1,\ldots,N$. Defining $R = \prod_{n=1}^{N} R_n$, we have

$$\mathbf{A}_{I \times R} = \mathbf{B} \left( \mathbf{1}_{R_1}^T \otimes \cdots \otimes \mathbf{1}_{R_{n-1}}^T \otimes \mathbf{I}_{R_n} \otimes \mathbf{1}_{R_{n+1}}^T \otimes \cdots \otimes \mathbf{1}_{R_N}^T \right) \in \mathbb{C}^{I \times R}.$$
(94)

Similarly, for the extension of $\mathbf{B} \in \mathbb{C}^{I_n \times R}$ into a tensor $\mathcal{A} \in \mathbb{C}^{I_1 \times \cdots \times I_N \times R}$ such that $a_{i_1,\ldots,i_n,\ldots,i_N,r} = b_{i_n,r}$ for all $i_k = 1,\ldots,I_k$, $k = 1,\ldots,n-1,n+1,\ldots,N$, we have

$$\mathbf{A}_{I \times R} = \left( \mathbf{1}_{I_1} \otimes \cdots \otimes \mathbf{1}_{I_{n-1}} \otimes \mathbf{I}_{I_n} \otimes \mathbf{1}_{I_{n+1}} \otimes \cdots \otimes \mathbf{1}_{I_N} \right) \mathbf{B} \in \mathbb{C}^{I \times R},$$
(95)

where $I = \prod_{n=1}^{N} I_n$.

For instance, if we consider the following tensor extension of $\mathbf{B} \in \mathbb{C}^{I \times J}$:

$$a_{m,n,i,j,k,l} = b_{i,j}, \quad m = 1,\ldots,M,\ n = 1,\ldots,N,\ k = 1,\ldots,K,\ l = 1,\ldots,L,$$

the combination of formulae (94) and (95) gives

$$\mathbf{A}_{MNI \times JKL} = \left( \mathbf{1}_{MN} \otimes \mathbf{I}_I \right) \mathbf{B} \left( \mathbf{I}_J \otimes \mathbf{1}_{KL}^T \right)$$
(96)

which can be written as

$$\mathbf{A}_{MNI \times JKL} = \mathbf{B} \times_1 \boldsymbol{\Psi}_1 \times_2 \left( \boldsymbol{\Psi}_2 \right)^T$$

with $\boldsymbol{\Psi}_1 = \mathbf{1}_{MN} \otimes \mathbf{I}_I$ and $\boldsymbol{\Psi}_2 = \mathbf{I}_J \otimes \mathbf{1}_{KL}^T$.
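
A numerical check of the extension formula (96) (our own sketch, with arbitrary small dimensions): the tensor with $a_{m,n,i,j,k,l} = b_{i,j}$ is built from $\mathbf{B}$ by pre‐ and post‐multiplication with Kronecker products of identity matrices and all‐ones vectors, and compared against a direct broadcast construction.

```python
import numpy as np

rng = np.random.default_rng(9)
M, N, I, J, K, L = 2, 2, 3, 4, 2, 3
B = rng.standard_normal((I, J))

# (1_MN (x) I_I) B (I_J (x) 1_KL^T), as in (96)
A_unf = (np.kron(np.ones((M * N, 1)), np.eye(I))
         @ B
         @ np.kron(np.eye(J), np.ones((1, K * L))))

# Same tensor built by broadcasting: a_{m,n,i,j,k,l} = b_{i,j}
A_bcast = np.broadcast_to(B[None, None, :, :, None, None],
                          (M, N, I, J, K, L)).reshape(M * N * I, J * K * L)
print(np.allclose(A_unf, A_bcast))                    # True
```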

Appendix 3

Proof of (27)

Defining $(\mathbb{I}_1, \mathbb{I}_2)$ and $(\mathbb{R}_1, \mathbb{R}_2)$ as the sets of indices $i_n$ and $r_n$ associated respectively with the sets $(S_1, S_2)$ of indices $n$, the formula (20) allows writing the element $g_{r_1,\ldots,r_N}$ of the core tensor as

$$g_{r_1,\ldots,r_N} = \mathbf{e}_{\mathbb{R}_1}^T\, \mathbf{G}_{S_1;S_2}\, \mathbf{e}_{\mathbb{R}_2},$$
(97)

where $\mathbb{R}_1 = \{r_n, n \in S_1\}$ and $\mathbb{R}_2 = \{r_n, n \in S_2\}$. Substituting $x_{i_1,\ldots,i_N}$ and $g_{r_1,\ldots,r_N}$ by their expressions (24) and (97) into (19) gives, with implicit summation over repeated indices,

$$\mathbf{X}_{S_1;S_2} = x_{i_1,\ldots,i_N}\, \mathbf{e}^{\mathbb{I}_1 \mathbb{I}_2} = \mathbf{e}_{\mathbb{I}_1}\, x_{i_1,\ldots,i_N}\, \mathbf{e}_{\mathbb{I}_2}^T = \mathbf{e}_{\mathbb{I}_1}\, g_{r_1,\ldots,r_N} \left( \prod_{n=1}^{N} a^{(n)}_{i_n,r_n} \right) \mathbf{e}_{\mathbb{I}_2}^T = \left( \prod_{n=1}^{N} a^{(n)}_{i_n,r_n} \right) \mathbf{e}_{\mathbb{I}_1} \mathbf{e}_{\mathbb{R}_1}^T\, \mathbf{G}_{S_1;S_2}\, \mathbf{e}_{\mathbb{R}_2} \mathbf{e}_{\mathbb{I}_2}^T = \left( \prod_{n \in S_1} a^{(n)}_{i_n,r_n}\, \mathbf{e}^{\mathbb{I}_1 \mathbb{R}_1} \right) \mathbf{G}_{S_1;S_2} \left( \prod_{n \in S_2} a^{(n)}_{i_n,r_n}\, \mathbf{e}^{\mathbb{R}_2 \mathbb{I}_2} \right)$$
(98)

Applying the general Kronecker formula (17) in terms of the index notation allows rewriting this matrix unfolding as

$$\mathbf{X}_{S_1;S_2} = \left( \underset{n \in S_1}{\otimes} \mathbf{A}^{(n)} \right) \mathbf{G}_{S_1;S_2} \left( \underset{n \in S_2}{\otimes} \mathbf{A}^{(n)} \right)^T.$$

Appendix 4

Proof of (37)

Substituting the expression (34) of $x_{i_1,\ldots,i_N}$ into (19) and using the identities (14) and (15) gives, with implicit summation over the repeated index $r$,

$$\mathbf{X}_{S_1;S_2} = x_{i_1,\ldots,i_N}\, \mathbf{e}^{\mathbb{I}_1 \mathbb{I}_2} = \left( \prod_{n \in S_1} a^{(n)}_{i_n,r}\, \mathbf{e}_{\mathbb{I}_1} \right) \left( \prod_{n \in S_2} a^{(n)}_{i_n,r}\, \mathbf{e}_{\mathbb{I}_2} \right)^T = \left( \underset{n \in S_1}{\otimes} \mathbf{A}^{(n)}_{.r} \right) \left( \underset{n \in S_2}{\otimes} \mathbf{A}^{(n)}_{.r} \right)^T = \left( \underset{n \in S_1}{\diamond} \mathbf{A}^{(n)} \right) \left( \underset{n \in S_2}{\diamond} \mathbf{A}^{(n)} \right)^T,$$
(99)

which ends the proof of (37).

Appendix 5

Proof of (59) and (60)

Let us define the third‐order tensors $\mathcal{A} \in \mathbb{C}^{I_1 \times R_1 \times R_2}$, $\mathcal{B} \in \mathbb{C}^{I_2 \times R_1 \times R_2}$, $\mathcal{F} \in \mathbb{C}^{I_3 \times R_1 \times R_2}$, and $\mathcal{D} \in \mathbb{C}^{I_4 \times R_1 \times R_2}$ such that

$$a_{i_1,r_1,r_2} = a^{(1)}_{i_1,r_1} \ \forall r_2 = 1,\ldots,R_2; \quad b_{i_2,r_1,r_2} = a^{(2)}_{i_2,r_2} \ \forall r_1 = 1,\ldots,R_1; \quad f_{i_3,r_1,r_2} = \phi^{(1)}_{r_1,i_3}\, \phi^{(2)}_{r_2,i_3}; \quad d_{i_4,r_1,r_2} = c_{r_1,r_2,i_4}.$$
(100)

The tensor model (57) can then be rewritten as

$$x_{i_1,i_2,i_3,i_4} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} a_{i_1,r_1,r_2}\, b_{i_2,r_1,r_2}\, f_{i_3,r_1,r_2}\, d_{i_4,r_1,r_2}.$$
(101)

Defining the change of variables $r = (r_1 - 1)R_2 + r_2$, which corresponds to a combination of the last two modes of the tensors $\mathcal{A}$, $\mathcal{B}$, $\mathcal{F}$, and $\mathcal{D}$, (101) can be rewritten as the constrained PARAFAC‐4 model (58), where $a_{i_1,r}$, $b_{i_2,r}$, $f_{i_3,r}$, and $d_{i_4,r}$ are entries of mode‐1 matrix unfoldings of these tensors, i.e., entries of $\mathbf{A} = \mathbf{A}_{I_1 \times R_1 R_2}$, $\mathbf{B} = \mathbf{B}_{I_2 \times R_1 R_2}$, $\mathbf{F} = \mathbf{F}_{I_3 \times R_1 R_2}$, and $\mathbf{D} = \mathbf{D}_{I_4 \times R_1 R_2}$, respectively. Using the formulae (21) and (22), we can directly deduce the following expressions of $\mathbf{A}$ and $\mathbf{B}$:

$$\mathbf{A} = \mathbf{A}^{(1)} \otimes \mathbf{1}_{R_2}^T = \mathbf{A}^{(1)} \left( \mathbf{I}_{R_1} \otimes \mathbf{1}_{R_2}^T \right) = \mathbf{A}^{(1)} \boldsymbol{\Psi}^{(1)},$$
(102)
$$\mathbf{B} = \mathbf{1}_{R_1}^T \otimes \mathbf{A}^{(2)} = \mathbf{A}^{(2)} \left( \mathbf{1}_{R_1}^T \otimes \mathbf{I}_{R_2} \right) = \mathbf{A}^{(2)} \boldsymbol{\Psi}^{(2)}.$$
(103)

For the matrix $\mathbf{F}$, using the index notation with the definition (100) gives

$$\mathbf{F} = f_{i_3,r_1,r_2}\, \mathbf{e}^{i_3\, r_1 r_2} = \phi^{(1)}_{r_1,i_3}\, \phi^{(2)}_{r_2,i_3}\, \mathbf{e}^{i_3\, r_1 r_2}.$$

Applying the formula (16), we directly obtain

$$\mathbf{F} = \left( \boldsymbol{\Phi}^{(1)} \diamond \boldsymbol{\Phi}^{(2)} \right)^T.$$

References

  1. McCullagh P: Tensor Methods in Statistics. Chapman and Hall, New York; 1987.
  2. Comon P: Tensor decompositions: state of the art and applications. In Mathematics in Signal Processing V. Edited by: McWhirter JG, Proudler IK. Clarendon Press, Oxford; 2002:1-24.
  3. Tucker LR: Some mathematical notes on three‐mode factor analysis. Psychometrika 1966, 31:279-311.
  4. Harshman RA: Foundations of the PARAFAC procedure: model and conditions for an “explanatory” multimodal factor analysis. UCLA Working Pap. Phon 1970, 16:1-84.
  5. Carroll JD, Chang J: Analysis of individual differences in multidimensional scaling via an N‐way generalization of “Eckart‐Young” decomposition. Psychometrika 1970, 35(3):283-319.
  6. Kiers HAL: Towards a standardized notation and terminology in multiway analysis. J. Chemometrics 2000, 14(2):105-122.
  7. Kroonenberg PM: Applied Multiway Data Analysis. Wiley, Hoboken; 2008.
  8. Bro R: Multi‐way analysis in the food industry: models, algorithms and applications. Ph.D. dissertation, University of Amsterdam, Amsterdam; 1998.
  9. Smilde A, Bro R, Geladi P: Multi‐way Analysis: Applications in the Chemical Sciences. Wiley, Chichester; 2004.
  10. Cardoso J‐F: Eigen‐structure of the fourth‐order cumulant tensor with application to the blind source separation problem. In Proc. of IEEE ICASSP’90. Albuquerque; 1990:2655-2658.
  11. Cardoso J‐F, Comon P: Tensor‐based independent component analysis. In Proc. of EUSIPCO’90. Barcelona; 1990:673-676.
  12. Cardoso J‐F: Super‐symmetric decomposition of the fourth‐order cumulant tensor: blind identification of more sources than sensors. In Proc. of IEEE ICASSP’91. Toronto; 1991:3109-3112.
  13. De Lathauwer L: Signal processing based on multilinear algebra. Ph.D. dissertation, KU Leuven, Leuven; 1997.
  14. Comon P, Jutten C: Handbook of Blind Source Separation. Independent Component Analysis and Applications. Elsevier, Oxford; 2010.
  15. Sidiropoulos ND, Giannakis GB, Bro R: Blind PARAFAC receivers for DS‐CDMA systems. IEEE Trans. Signal Process 2000, 48(3):810-823.
  16. Cichocki A, Zdunek R, Phan AH, Amari S‐I: Nonnegative Matrix and Tensor Factorizations. Applications to Exploratory Multi‐way Data Analysis and Blind Source Separation. Wiley, Chichester; 2009.
  17. Kolda TG, Bader BW: Tensor decompositions and applications. SIAM Rev 2009, 51(3):455-500.
  18. Acar E, Yener B: Unsupervised multiway data analysis: a literature survey. IEEE Trans. Knowledge Data Eng 2009, 21(1):6-20.
  19. Morup M: Applications of tensor (multiway array) factorizations and decompositions in data mining. In Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. Wiley, Chichester; 2011:24-40.
  20. Sorensen M, De Lathauwer L, Comon P, Icart S, Deneire L: Canonical polyadic decomposition with a columnwise orthonormal factor matrix. SIAM J. Matrix Anal. Appl 2012, 33(4):1190-1213.
  21. Shashua A, Hazan T: Non‐negative tensor factorization with applications to statistics and computer vision. In Proc. of 22nd Int. Conf. on Machine Learning. Bonn; 2005:792-799.
  22. Hazan S, Polak S, Shashua A: Sparse image coding using a 3D non‐negative tensor factorization. In Proc. of 10th IEEE Int. Conf. on Computer Vision (ICCV’2005). Beijing; 2005:50-57.
  23. Friedlander MP, Hatz K: Computing nonnegative tensor factorizations. Optim. Meth. Software 2008, 23(4):631-647.
  24. Zhang Q, Wang H, Plemmons R, Pauca P: Tensor methods for hyperspectral data processing: a space object identification study. J. Opt. Soc. Am. A 2008, 25(12):3001-3012.
  25. Benetos E, Kotropoulos C: Non‐negative tensor factorization applied to music genre classification. IEEE Trans. Audio, Speech, Language Proc 2010, 18(8):1955-1967.
  26. Ozerov A, Févotte C, Blouet R, Durrieu G: Multichannel nonnegative tensor factorization with structured constraints for user‐guided audio source separation. In Proc. of ICASSP’2011. Prague; 2011.
  27. Acar E, Dunlavy DM, Kolda TG, Morup M: Scalable tensor factorizations with missing data. In Proc. of 10th SIAM Int. Conf. on Data Mining. Columbus; 2010:701-712.
  28. Royer J‐P, Thirion‐Moreau N, Comon P: Computing the polyadic decomposition of nonnegative third order tensors. Signal Process 2011, 91:2159-2171.
  29. Phan A‐H, Cichocki A: Extended HALS algorithm for nonnegative Tucker decomposition and its applications for multiway analysis and classification. Neurocomputing 2011, 74:1956-1969.
  30. Welling M, Weber M: Positive tensor factorization. Pattern Recogn. Lett 2001, 22(12):1255-1261.
  31. Morup M, Hansen LK: Algorithms for sparse non‐negative Tucker decompositions. Neural Comput 2008, 20:2112-2131.
  32. Favier G, Bouilloc T: A constrained tensor based approach for MIMO NL‐CDMA systems. In Proc. of EUSIPCO’2010. Aalborg; 2010.
  33. Favier G, Bouilloc T, de Almeida ALF: Blind constrained block‐Tucker2 receiver for multiuser SIMO NL‐CDMA communication systems. Signal Process 2012, 92(7):1624-1636.
  34. Favier G, Kibangou AY, Bouilloc T: Nonlinear system modeling and identification using Volterra‐PARAFAC models. Int. J. of Adaptive Control and Sig. Proc 2012, 26:30-53.
  35. Bouilloc T, Favier G: Nonlinear channel modeling and identification using bandpass Volterra‐PARAFAC models. Signal Process 2012, 92(6):1492-1498.
  36. Kibangou AY, Favier G: Identification of parallel‐cascade Wiener systems using joint diagonalization of third‐order Volterra kernel slices. IEEE Signal Proc. Lett 2009, 16(3):188-191.
  37. Favier G: Nonlinear system modeling and identification using tensor approaches. In Proc. of 10th Int. Conf. on Sciences and Techniques of Automatic Control and Computer Engineering (STA’2009). Hammamet; 2009.
  38. Fernandes CER, Favier G, Mota JCM: Blind channel identification algorithms based on the Parafac decomposition of cumulant tensors: the single and multiuser cases. Signal Process 2008, 88:1382-1401.
  39. Fernandes CER, Favier G, Mota JCM: Parafac‐based blind identification of convolutive MIMO linear systems. In Proc. of 15th IFAC Symp. on System Identification (SYSID’2009). Saint‐Malo; 2009.
  40. Brachat J, Comon P, Mourrain B, Tsigaridas E: Symmetric tensor decomposition. Lin. Algebra Appl 2010, 433(11–12):1851-1872.
  41. Nion D, De Lathauwer L: A block component model‐based blind DS‐CDMA receiver. IEEE Trans. Signal Proc 2008, 56(11):5567-5579.
  42. Kibangou AY, Favier G: Non‐iterative solution for PARAFAC with a Toeplitz matrix factor. In Proc. of EUSIPCO’2009. Glasgow; 2009:691-695.
  43. Sorensen M, De Lathauwer L: Blind signal separation via tensor decomposition with Vandermonde factor: canonical polyadic decomposition. IEEE Trans. Signal Process 2013, 61(22):5507-5519.
  44. Goulart JH, Favier G: An algebraic solution for the Candecomp/PARAFAC decomposition with circulant factors. SIAM J. Matrix Analysis and Appl 2014. http://hal.archives-ouvertes.fr/docs/00/96/72/63/PDF/RR-2014-02_I3S.pdf
  45. Comon P, Sorensen M, Tsigaridas E: Decomposing tensors with structured matrix factors reduces to rank‐1 approximations. In Proc. of IEEE ICASSP’2010. Dallas; 2010:14-19.
  46. Sorensen M, Comon P: Tensor decompositions with banded matrix factors. Lin. Algebra Appl 2013, 438:919-941.
  47. Carroll JD, Pruzansky S, Kruskal JB: Candelinc: a general approach to multidimensional analysis of many‐way arrays with linear constraints on parameters. Psychometrika 1980, 45(1):3-24.
  48. Pollock DSG: On Kronecker products, tensor products and matrix differential calculus. Working paper 11/34, Univ. of Leicester, Dept. of Economics, UK; 2011. http://www.le.ac.uk/ec/research/RePEc/lec/leecon/dp11-34.pdf
  49. De Lathauwer L, De Moor B, Vandewalle J: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl 2000, 21(4):1253-1278.
  50. Cattell RB: “Parallel proportional profiles” and other principles for determining the choice of factors by rotation. Psychometrika 1944, 9:267-283.
  51. Hitchcock FL: The expression of a tensor or a polyadic as a sum of products. J. Math. Phys 1927, 6(3):164-189.
  52. Kruskal JB: Three‐way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Lin. Algebra Appl 1977, 18(2):95-138.
  53. Comon P, ten Berge JMF, De Lathauwer L, Castaing J: Generic and typical ranks of multi‐way arrays. Lin. Algebra Appl 2009, 430(11):2997-3007.
  54. Comon P, Golub G, Lim L‐H, Mourrain B: Symmetric tensors and symmetric tensor rank. SIAM J. Matrix Anal. Appl 2008, 30(3):1254-1279.
  55. Bro R, Kiers HAL: A new efficient method for determining the number of components in PARAFAC models. J. Chemometrics 2003, 17(5):274-286.
  56. da Costa JPCL, Haardt M, Roemer F: Robust methods based on HOSVD for estimating the model order in PARAFAC models. In Proc. of 5th IEEE Sensor Array and Multich. Signal Proc. Workshop (SAM 2008). Darmstadt; 2008:510-514.
  57. da Costa JPCL, Roemer F, Weis M, Haardt M: Robust R‐D parameter estimation via closed‐form PARAFAC. In Proc. of ITG Workshop on Smart Antennas (WSA 2010). Bremen; 2010:99-106.
  58. da Costa JPCL, Roemer F, Haardt M, de Sousa RT: Multi‐dimensional model order selection. EURASIP J. Adv. Signal Process 2011, 26.
  59. Harshman RA, Lundy ME: Uniqueness proof for a family of models sharing features of Tucker’s three‐mode factor analysis and PARAFAC/CANDECOMP. Psychometrika 1996, 61:133-154.
  60. Bro R, Harshman RA, Sidiropoulos ND: Modeling multi‐way data with linearly dependent loadings. KVL tech. report 176; 2005.
  61. Bro R, Harshman RA, Sidiropoulos ND, Lundy ME: Modeling multi‐way data with linearly dependent loadings. J. Chemometrics 2009, 23(7–8):324-340.
  62. Kibangou AY, Favier G: Blind joint identification and equalization of Wiener‐Hammerstein communication channels using PARATUCK‐2 tensor decomposition. In Proc. of EUSIPCO’2007. Poznan; Sept 2007.
  63. Xu L, Ting J, Longxiang Y, Hongbo Z: PARALIND‐based identifiability results for parameter estimation via uniform linear array. EURASIP J. Adv. Sig. Proc 2012, 2012:154.
  64. Xu L, Liang G, Longxiang Y, Hongbo Z: PARALIND‐based blind joint angle and delay estimation for multipath signals with uniform linear array. EURASIP J. Adv. Sig. Proc 2012, 2012:130.
  65. de Almeida ALF, Favier G, Mota JCM: A constrained factor decomposition with application to MIMO antenna systems. IEEE Trans. Signal Process 2008, 56(6):2429-2442.
  66. Favier G, da Costa MN, de Almeida ALF, Romano JMT: Tensor coding for CDMA‐MIMO wireless communication systems. In Proc. of EUSIPCO’2011. Barcelona; 29 Aug–2 Sept 2011.
  67. Favier G, da Costa MN, de Almeida ALF, Romano JMT: Tensor space‐time (TST) coding for MIMO wireless communication systems. Signal Process 2012, 92(4):1079-1092.
  68. de Almeida ALF, Favier G, Mota JCM: Space‐time spreading‐multiplexing for MIMO wireless communication systems using the PARATUCK‐2 tensor model. Signal Process 2009, 89(11):2103-2116.
  69. de Almeida ALF, Luciani X, Stegeman A, Comon P: CONFAC decomposition approach to blind identification of underdetermined mixtures based on generating function derivatives. IEEE Trans. Signal Process 2012, 60(11):5698-5713.
  70. de Almeida ALF, Favier G, Mota JCM: Generalized PARAFAC model for multidimensional wireless communications with application to blind multiuser equalization. In Proc. of Asilomar Conf. Sig. Syst. Comp. Pacific Grove; Nov 2005.
  71. de Almeida ALF, Favier G, Mota JCM: PARAFAC models for wireless communication systems. In Proc. of Int. Conf. on Physics in Signal and Image Processing (PSIP). Toulouse; 31 Jan–2 Feb 2005.
  72. de Almeida ALF, Favier G, Mota JCM: PARAFAC‐based unified tensor modeling for wireless communication systems with application to blind multiuser equalization. Signal Process 2007, 87:337-351.
  73. de Almeida ALF, Favier G, Mota JCM: Tensor‐based space‐time multiplexing codes for MIMO‐OFDM systems with blind detection. In Proc. of 17th IEEE Symp. Pers. Ind. Mob. Radio Com. (PIMRC’2006). Helsinki; Sept 2006.
  74. de Almeida ALF, Favier G, Mota JCM: Constrained Tucker‐3 model for blind beamforming. Signal Process 2009, 89:1240-1244.
  75. De Lathauwer L: Decompositions of a higher‐order tensor in block terms‐part II: definitions and uniqueness. SIAM J. Matrix Anal. Appl 2008, 30(3):1033-1066.
  76. Cichocki A, Mandic D, Phan A‐H, Caiafa C, Zhou G, Zhao Q, De Lathauwer L: Tensor decompositions for signal processing applications. From two‐way to multiway component analysis. IEEE Signal Process. Mag 2014. arXiv:1403.4462v1
  77. de Almeida ALF, Favier G, Mota JCM: Constrained tensor modeling approach to blind multiple‐antenna CDMA schemes. IEEE Trans. Signal Process 2008, 56(6):2417-2428.
  78. Salmi J, Richter A, Koivunen V: Sequential unfolding SVD for tensors with applications in array signal processing. IEEE Trans. Signal Process 2009, 57(12):4719-4733.
  79. De Lathauwer L: Blind separation of exponential polynomials and the decomposition of a tensor in rank‐(L_r, L_r, 1) terms. SIAM J. Matrix Anal. Appl 2011, 32(4):1451-1474.
  80. Domanov I, De Lathauwer L: On the uniqueness of the canonical polyadic decomposition of third‐order tensors‐part I: basic results and uniqueness of one factor matrix. SIAM J. Matrix Anal. Appl 2013, 34(3):855-875. arXiv:1301.4602v1
  81. Domanov I, De Lathauwer L: Generic uniqueness conditions for the canonical polyadic decomposition and INDSCAL. arXiv:1405.6238v1; KU Leuven, Belgium, 2014.
  82. Sidiropoulos ND, Bro R: On the uniqueness of multilinear decomposition of N‐way arrays. J. Chemometrics 2000, 14:229-239.
  83. ten Berge JMF, Sidiropoulos ND: On uniqueness in CANDECOMP/PARAFAC. Psychometrika 2002, 67(3):399-409.
  84. Harshman RA: Determination and proof of minimum uniqueness conditions for PARAFAC1. UCLA Working Pap. Phon 1972, 22:111-117.
  85. Stegeman A, Sidiropoulos ND: On Kruskal’s uniqueness condition for the CANDECOMP/PARAFAC decomposition. Lin. Algebra Appl 2007, 420:540-552.
  86. Jiang T, Sidiropoulos ND: Kruskal’s permutation lemma and the identification of CANDECOMP/PARAFAC and bilinear models with constant modulus constraints. IEEE Trans. Signal Process 2004, 52(9):2625-2636.
  87. De Lathauwer L: A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization. SIAM J. Matrix Anal. Appl 2006, 28(3):642-666.
  88. Stegeman A: On uniqueness conditions for CANDECOMP/PARAFAC and INDSCAL with full column rank in one mode. Lin. Algebra Appl 2008, 431(1–2):211-227.
  89. Guo X, Miron S, Brie D, Zhu S, Liao X: A CANDECOMP/PARAFAC perspective on uniqueness of DOA estimation using a vector sensor array. IEEE Trans. Signal Process 2011, 59(7):3475-3481.
  90. ten Berge JMF: Partial uniqueness in CANDECOMP/PARAFAC. J. Chemometrics 2004, 18:12-16.
  91. Brie D, Miron S, Caland F, Mustin C: A uniqueness condition for the 4‐way CANDECOMP/PARAFAC model with collinear loadings in three modes. In Proc. of IEEE ICASSP’2011. Prague; May 2011.
  92. Stegeman A, de Almeida ALF: Uniqueness conditions for constrained three‐way factor decompositions with linearly dependent loadings. SIAM J. Matrix Anal. Appl 2009, 31(3):1469-1490.
  93. Stegeman A, Lam TTT: Improved uniqueness conditions for canonical tensor decompositions with linearly dependent loadings. SIAM J. Matrix Anal. Appl 2012, 33(4):1250-1271.
  94. Guo X, Miron S, Brie D, Stegeman A: Uni‐mode and partial uniqueness conditions for CANDECOMP/PARAFAC of three‐way arrays with linearly dependent loadings. SIAM J. Matrix Anal. Appl 2012, 33(1):111-129.
  95. Van Loan CF, Pitsianis N: Approximation with Kronecker products. In Linear Algebra for Large Scale and Real‐Time Applications. Edited by: Moonen MS, Golub GH, de Moor BLR. Kluwer, The Netherlands; 1993:293-314.
  96. Favier G, de Almeida ALF: Tensor space‐time‐frequency coding with semi‐blind receivers for MIMO wireless communication systems. IEEE Trans. Signal Process 2014, in press.
  97. Cichocki A: Era of big data processing: a new approach via tensor networks and tensor decompositions. In Proc. of International Workshop on Smart Info‐Media Systems in Asia (SISA‐2013). Nagoya; 30 Sept–2 Oct 2013.


Acknowledgements

This work has been developed under the FUNCAP/CNRS bilateral cooperation project (2013‐2014). André L. F. de Almeida is partially supported by CNPq. The authors are thankful to A. Cichocki for his useful comments and suggestions.

Author information


Correspondence to Gérard Favier.


Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Favier, G., de Almeida, A.L. Overview of constrained PARAFAC models. EURASIP J. Adv. Signal Process. 2014, 142 (2014). https://doi.org/10.1186/1687-6180-2014-142
