PARALIND-based identifiability results for parameter estimation via uniform linear array
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 154 (2012)
Abstract
This article applies the PARAllel profiles with LINear Dependencies (PARALIND) model to analyze the identifiability of parameter estimation in the presence of incoherent multipath via a uniform linear array (ULA). New identifiability results are derived from the uniqueness property of the PARALIND model and the structural properties of the ULA. Owing to the strong properties of the trilinear model, the proposed identifiability conditions for propagation parameter identification are less restrictive than those in earlier studies. We give a new trade-off between the number of receiving antennae and the sampling diversity that ensures parameter identification. Furthermore, a new lower bound on the number of receiving antennae required for identifiability is derived. We also show that identifiability is determined not only by traditional factors, such as the number of receiving antennae, the oversampling factor, or the total number of transmitting paths, but also by the multipath structure of the sources.
Introduction
Deterministic parameter estimation is a major problem in multisensor array systems: to effectively locate and track various types of signals, minimizing interference and maximizing intended signal reception, one capitalizes on structural properties of the source signals and/or the received signals [1–4]. The identifiability issue of parameter determination signifies the existence of a unique desired solution under ideal operating conditions and lays the foundation for the capability of estimation techniques. Identifiability results are usually tied to the analysis method of the data model and to a given algorithm. The ESPRIT algorithm, which takes advantage of the rotational invariance property of the uniform linear array (ULA), can estimate directions of arrival (DOAs) uniquely only if the number of calibrated receiving antennae exceeds the number of sources and all single-path signals arrive from distinct directions at the receiving end [5]. Forward/backward spatial smoothing techniques showed that 3K/2 sensor elements are enough to identify K DOAs of coherent signals [6, 7]. In a multipath scenario, a wireless channel is characterized not only by the DOAs but also by the time delays of the different propagation paths. Van der Veen proposed a joint angle and delay estimation algorithm based on the smoothing method and a joint diagonalization technique; a lower bound on the number of receiving antennae and the oversampling diversity for parameter identification has been presented for that algorithm [1, 8, 9]. Recently, Sidiropoulos and Liu [10] linked trilinear decomposition to array signal processing and established several improved identifiability results for parameter estimation based on PARAFAC analysis, which introduces a new perspective on parameter estimation.
Trilinear data analysis models, such as Tucker3, PARAFAC, and PARAllel profiles with LINear Dependencies (PARALIND), have been applied in the signal processing area in recent years [11–17]. PARALIND is a trilinear model first proposed by Bro et al. [18–20]. It can be viewed as a new family of PARAFAC models, developed to extend their usage to problems with linearly dependent factors. Later, De Lathauwer and de Almeida introduced the ‘block term decomposition’ and ‘constrained block PARAFAC’, respectively, which have similar formulations and are natural extensions of PARALIND [21, 22]. This article links PARALIND analysis to the model identifiability of parameter estimation via a ULA. Received signals of the ULA, transmitted through incoherent multipath rays of sources with distinct angles and delays, are cast into a PARALIND model. New identifiability results are presented based on the uniqueness of PARALIND. The main contributions of this article are the following:

(i)
A new ‘space-time’ trade-off between the number of receiving antennae and the sampling diversity for parameter identification is derived, based on the strong uniqueness properties of the trilinear model.

(ii)
We give a new lower bound on the number of receiving antennae needed to identify parameters in a multipath propagation scenario, which improves on earlier studies.

(iii)
Our work shows that identifiability is determined not only by traditional factors, such as the number of receiving antennae, the sampling diversity, or the number of paths, but also by the multipath structure of the sources, which was not considered in previous work.
The rest of this article is organized as follows. Section “Data model” presents the data model of array signals in a multipath propagation channel. Section “Uniqueness of PARALIND” gives the basic uniqueness property of the PARALIND model. Section “PARALIND-based identifiability results for parameter estimation” proposes the main parameter identifiability results; several lemmas and theorems are established and analyzed. In the last section, we draw the conclusion.
Some notation will be used in this article. diag([a, b, …]) denotes the diagonal matrix with scalar entries a, b, …, while blockdiag([A, B, …]) denotes the block-diagonal matrix with matrix entries A, B, …. (·)^T and (·)^† stand for transpose and pseudo-inverse, respectively; vec(·) stacks the columns of its matrix argument in a vector; unvec(·) is the inverse operation of vec(·), with unvec(c, I, J) = [c(1:J), c(J+1:2J), …, c((I−1)J+1:IJ)]. ⊗ is the Kronecker product; ⊙ denotes the Khatri-Rao product, which is a column-wise Kronecker product. Define A = [a_1, …, a_R] ∈ ℂ^{I×R} and B = [b_1, …, b_R] ∈ ℂ^{J×R}. The Khatri-Rao product of A and B is A ⊙ B = [a_1 ⊗ b_1, …, a_R ⊗ b_R].
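As a numerical illustration of this notation, the Khatri-Rao product can be sketched with NumPy (an illustrative helper, not taken from the article):

```python
import numpy as np

# Khatri-Rao product: column-wise Kronecker product of two matrices
# with the same number of columns R (illustrative sketch).
def khatri_rao(A, B):
    I, R = A.shape
    J, R2 = B.shape
    assert R == R2, "A and B must have the same number of columns"
    return np.column_stack([np.kron(A[:, r], B[:, r]) for r in range(R)])

A = np.array([[1, 2],
              [3, 4]])            # I x R = 2 x 2
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])            # J x R = 3 x 2
C = khatri_rao(A, B)              # (I*J) x R = 6 x 2
print(C[:, 0])                    # -> [1 0 1 3 0 3]
```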
Data model
Figure 1 gives a schematic communication scenario with a multipath channel. F sources transmit to an array with K antennae through a multipath scattering propagation channel. g(t) is the impulse response, which collects all temporal aspects, such as pulse shaping and the transmitting and receiving filters. The signal of the f th user follows r_f distinct paths on its way from source to receiver, referred to as multipath rays with distinct DOA, transmission delay, and attenuation. The j th path of source f is parameterized by a triple (θ_{f,j}, β_{f,j}, τ_{f,j}), where

θ_{f,j}: DOA;

β_{f,j}: complex path attenuation;

τ_{f,j}: transmission delay.
Assume that a ULA is used at the receiving end and that the distance d between adjacent elements is equal to (or less than) half the wavelength of the signals. Define r to be the total number of paths of all sources, r = ∑_{f=1}^{F} r_f. Let us conveniently index the multipath rays of the sources from 1 to r, starting with all rays associated with the first source, then the rays associated with the second source, and so on. Index the r parameter triples as {(θ_1, β_1, τ_1), …, (θ_{r_1}, β_{r_1}, τ_{r_1}), …, (θ_r, β_r, τ_r)}. The array manifold matrix A_θ, the time manifold matrix G_τ, and the path attenuation matrix Γ are defined as:
where
Received signals can be formulated as follows [1]:
where
is a KP × N space-time data matrix collecting samples during N symbol periods with oversampling factor P at the receiving end. x is a K × 1 array-received signal. S is a data matrix of size N × F, collecting N symbols of all users. J is a selection matrix that joins the multipath rays associated with a given source.
where 1_m denotes an m × 1 all-ones vector. Equation (3) is a classical parameterized data model named “incoherent multipath with small delay spread” [1, 10]. The propagation parameters θ_i and τ_i, i = 1, …, r, are contained in the array manifold matrix A_θ and the time manifold matrix G_τ. The multipath structure is captured by J.
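The role of the selection matrix J can be sketched numerically. The shapes below are assumptions for illustration: J is taken as blockdiag(1_{r_1}^T, …, 1_{r_F}^T), so that the product with S repeats each source's symbol stream across its paths:

```python
import numpy as np

# Sketch of the selection matrix J (assumed F x r block structure):
# row f holds ones over the r_f columns of source f's paths, so the
# product S @ J (N x r) repeats source f's symbols across its r_f paths.
def selection_matrix(path_counts):
    F, r = len(path_counts), sum(path_counts)
    J = np.zeros((F, r))
    col = 0
    for f, r_f in enumerate(path_counts):
        J[f, col:col + r_f] = 1.0
        col += r_f
    return J

J = selection_matrix([2, 3, 1])          # F=3 sources, r=6 paths
rng = np.random.default_rng(0)
S = rng.standard_normal((8, 3))          # N=8 symbols for F=3 users
C = S @ J                                # N x r, with collinear columns
```

The collinear columns of C produced this way are exactly what later motivates moving beyond plain PARAFAC uniqueness arguments.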
The time delay τ is usually difficult to estimate from g(t − τ) directly. An alternative approach is to map τ into a phase shift ϕ in the frequency domain via the discrete Fourier transform (DFT) [8]. Assume that g(t) is band-limited and that the sampling rate is at or above the Nyquist rate. Take a P-point DFT of each antenna output over a single symbol period. The following model is then obtained [1]:
where
The advantage of (7) over (3) is that, by using the DFT, delays are transformed into phase progressions and G_τ is converted into a Vandermonde matrix F_ϕ, which facilitates parameter estimation. Although the DFT method may introduce some extra error during parameter estimation, van der Veen et al. [8] have shown that this kind of error is very small compared to the estimation errors that occur in the presence of noise.
According to [18], Equation (6) can be viewed as a one-slice formulation of the PARALIND model. The link to PARALIND implies that generic PARALIND model-fitting algorithms are directly applicable to deterministic parameter estimation [20]. However, the identifiability of the model pertains to the capability of recovering the parameters in the absence of noise. The main work of this article is to investigate new identifiability results for parameter estimation from the PARALIND decomposition perspective. Some novel results, such as the trade-off between the number of receiving antennae and the sampling diversity, and the lower bound on the number of receiving antennae for parameter identification, are also derived. First, we give the basic uniqueness of the PARALIND model.
Uniqueness of PARALIND
The uniqueness of the PARALIND model lays the foundation for its applications. Because of the linear dependence of the loading factors, the uniqueness of PARALIND does not follow directly from the uniqueness property of PARAFAC; the model only has partial uniqueness (or essential uniqueness, defined in [23]), which depends on the specifics of the imposed dependency structure along with the adequacy of the factor variation information provided by a given set of data [19]. The uniqueness property of PARALIND was first proposed in [18] and improved by Stegeman and de Almeida [24]. De Lathauwer [23] has given an essential uniqueness theorem more quantitatively. Two concepts are needed for this theorem.
Definition 1
(k-rank) [25]: Consider a matrix B of size I × J. If every l columns of B are linearly independent, but this does not hold for every l + 1 columns, then the k-rank of B is l, denoted k_B = l.
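Definition 1 can be checked by exhaustive search over column subsets; a brute-force sketch (exponential cost, small examples only):

```python
import numpy as np
from itertools import combinations

# Brute-force k-rank: the largest l such that EVERY set of l columns
# of B is linearly independent (Definition 1).
def k_rank(B):
    J = B.shape[1]
    for l in range(1, J + 1):
        for cols in combinations(range(J), l):
            if np.linalg.matrix_rank(B[:, list(cols)]) < l:
                return l - 1
    return J

B = np.array([[1., 0., 1.],
              [0., 1., 1.]])
# any 2 of the 3 columns are independent, but 3 columns in a
# 2-row matrix cannot be, so the k-rank of B is 2
```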
Definition 2
(k’-rank) [23]: Assume a partitioned matrix A = [A_1, …, A_M]. The k’-rank of A, denoted rank_{k’}(A) or k’_A, is the maximal number r such that any set of r submatrices of A yields a set of linearly independent columns.
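Similarly, the k’-rank of a partitioned matrix can be computed by brute force over block subsets; the Vandermonde example below (generators chosen for illustration) previews Lemma 2:

```python
import numpy as np
from itertools import combinations

# Brute-force k'-rank of A = [A_1, ..., A_M] (Definition 2): the largest
# m such that ANY m submatrices together have independent columns.
def k_prime_rank(blocks):
    M = len(blocks)
    for m in range(1, M + 1):
        for idx in combinations(range(M), m):
            C = np.hstack([blocks[i] for i in idx])
            if np.linalg.matrix_rank(C) < C.shape[1]:
                return m - 1
    return M

# 4 x 5 Vandermonde matrix, distinct generators, split into widths 2, 1, 2:
# any two blocks give at most 4 columns (independent), all three give 5
# columns in 4 rows (dependent), so the k'-rank is 2.
gens = np.array([2., 3., 5., 7., 11.])
V = np.vander(gens, 4, increasing=True).T      # rows: gens**0 .. gens**3
blocks = [V[:, :2], V[:, 2:3], V[:, 3:]]
```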
Theorem 1
[23]: Rewrite one slice matrix of the PARALIND model
where A ∈ ℂ^{I×r}, B ∈ ℂ^{J×r}, C ∈ ℂ^{K×F}, and H is the dependence matrix
where H_f ∈ ℂ^{F×r_f}, f = 1, …, F, are submatrices of H and r = ∑_{f=1}^{F} r_f. A and B are partitioned as A = [A_1, …, A_F] and B = [B_1, …, B_F], with submatrices A_f ∈ ℂ^{I×r_f}, B_f ∈ ℂ^{J×r_f}, f = 1, …, F, compatible with the block structure of H. Suppose that the condition
holds, and that we have an alternative decomposition of X, represented by (Â, B̂, Ĉ), with k’_{Â} and k’_{B̂} maximal under the given dimensionality constraints. Then Â = A π_a Δ_a and B̂ = B π_b Δ_b, where π_a, π_b are block permutation matrices and Δ_a, Δ_b are square nonsingular block-diagonal matrices, compatible with the block structure of A and B.
Theorem 1 presents the uniqueness properties of A and B. The uniqueness of the matrix C has also been studied in [18, 21]. Bro et al. [18] demonstrate the uniqueness property of C, provided that A, B, and C are full rank. Furthermore, De Lathauwer [21] gives the identifiability result for C more quantitatively in terms of higher-order block tensor decomposition.
Consider Theorem 1 in submatrix formulation. Partition Â and B̂ to be compatible with the block structure of A and B, as Â = [Â_1, …, Â_F] and B̂ = [B̂_1, …, B̂_F]. From Theorem 1 it directly follows that:
where U_f ∈ ℂ^{r_f×r_f}, V_f ∈ ℂ^{r_f×r_f}, f = 1, …, F, are 2F nonsingular square matrices. It follows that
Equation (11) gives another representation of the uniqueness property of the PARALIND model. It shows that the column spaces of A_f and B_f are unique. However, it also implies that, when condition (9) is satisfied, the mode matrices A and B suffer from a rotation ambiguity, characterized by U_f and V_f. Bro et al. [20] have pointed out that the PARALIND model can still yield uniqueness results if some of its mode matrices have theoretically motivated structural constraints. Exploiting the structural properties of the multisensor array, we give the PARALIND-based identifiability results for parameter estimation in the next section.
PARALIND-based identifiability results for parameter estimation
As mentioned, data model (6) can be linked to PARALIND analysis. The array manifold A_θ, the time manifold F_ϕ, the data matrix S, and the selection matrix J play the roles of A, B, C, and H in Theorem 1, respectively. Since the attenuation matrix Γ only leads to a column scaling of A_θ and F_ϕ, it does not affect the identifiability results; we therefore take Γ to be an identity matrix in the following discussion. The data model then simplifies to:
Given the structure of A_θ and F_ϕ, if these two matrices are uniquely determined, the parameters θ and τ are determined as well. According to Theorem 1, the k-rank and k’-rank play important roles in the uniqueness of PARALIND. First, we present two lemmas that determine the k-rank and k’-rank of a Vandermonde matrix.
Lemma 1
(k-rank of a Vandermonde matrix) [26]
Consider an I × r Vandermonde matrix A with distinct nonzero generators α_1, α_2, …, α_r ∈ ℂ.
A is not only full rank but also full k-rank, with k_A = r_A = min(I, r).
Lemma 2
(k’-rank of a Vandermonde matrix)
Consider the Vandermonde matrix A in (13). Partition A into F submatrices, A = [A_1, A_2, …, A_F], where A_f is of size I × r_f and r = r_1 + r_2 + ··· + r_F. Sort r_1, …, r_F in descending order and assume that r_1 > r_2 > ··· > r_F. The k’-rank of A can then be determined as:
Note that the k’-rank of A is determined not only by its dimensions but also by the partition structure. If r_1 = r_2 = ··· = r_F = 1, then k_A = k’_A.
Proof
See Appendix. □
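Since the displayed equation (14) is not reproduced here, the closed form below is the rule implied by the Appendix proof (stated as an assumption): the k’-rank is the largest K whose K largest block widths still fit within the I rows. It agrees with a brute-force check of Definition 2:

```python
import numpy as np
from itertools import combinations

# Rule implied by the Appendix proof of Lemma 2 (assumed form of (14)):
# k'-rank = largest K with r_1 + ... + r_K <= I, r_f sorted descending.
def k_prime_vandermonde(I, path_counts):
    K, total = 0, 0
    for r_f in sorted(path_counts, reverse=True):
        if total + r_f > I:
            break
        total, K = total + r_f, K + 1
    return K

# Brute-force k'-rank (Definition 2) for comparison.
def k_prime_brute(blocks):
    M = len(blocks)
    for m in range(1, M + 1):
        for idx in combinations(range(M), m):
            C = np.hstack([blocks[i] for i in idx])
            if np.linalg.matrix_rank(C) < C.shape[1]:
                return m - 1
    return M

I, counts = 4, [3, 2, 1]
gens = np.array([2., 3., 5., 7., 11., 13.])
V = np.vander(gens, I, increasing=True).T          # 4 x 6 Vandermonde
blocks, col = [], 0
for r_f in counts:
    blocks.append(V[:, col:col + r_f])
    col += r_f
```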
PARALIND-based identifiability result
According to (6), both A_θ and F_ϕ are Vandermonde matrices. Capitalizing on the properties of the PARALIND model and the Vandermonde structure, we have the following theorem.
Theorem 2
Consider the data model (12).
Partition F_ϕ and A_θ into F submatrices compatible with the structure of J: F_ϕ = [F_ϕ^1, …, F_ϕ^F] and A_θ = [A_θ^1, …, A_θ^F], where F_ϕ^f is P × r_f and A_θ^f is K × r_f. Suppose that the condition
holds. Then F_ϕ and A_θ can be uniquely determined from X̄. The related parameters, the DOAs θ and the delay spreads τ, are identifiable.
Proof
See Appendix. □
Although condition (15) in Theorem 2 and condition (9) in Theorem 1 are identical, the identifiability results of the two theorems are different. Theorem 1 shows that A_θ and F_ϕ only have “column-space” uniqueness, due to the rotation ambiguity in their submatrices, when condition (9) is satisfied. Theorem 2, however, indicates that the Vandermonde matrices A_θ and F_ϕ can be uniquely determined from X̄ (no rotation ambiguity) under condition (15). Owing to the structure of the array manifold matrix A_θ and the time manifold F_ϕ, the elements of the first row of these two matrices are equal to 1. The scaling ambiguity of the estimated matrices can therefore be removed by normalizing the elements of A_θ and F_ϕ with respect to the elements of the first row during parameter estimation.
Remark 1
As a special case of (15), assume that the data matrix S is full k-rank, i.e., k_S = F. This is achievable when the receiving antennae collect enough symbols for parameter estimation. Condition (15) then becomes k’_{A_θ} + k’_{F_ϕ} ≥ F + 2. According to the definition of the k’-rank, the maximum k’-rank of A_θ or F_ϕ is F. It is therefore required that min(k’_{A_θ}, k’_{F_ϕ}) ≥ 2. This lower bound is similar to the identifiability requirement of the PARAFAC model, which uses the k-rank instead of the k’-rank (see [11]). Furthermore, according to Lemma 2, the k’-ranks of A_θ and F_ϕ can be expressed in terms of r_1, …, r_F. The minimum values of P and K can then be determined as:
where r_1, …, r_F are sorted in descending order. Condition (16) shows the interesting result that, to ensure the identifiability of parameter estimation, the minimum number of receiving antennae K and the oversampling factor P are related not only to the number of sources, but also to the number of paths of “some” of the sources.
Remark 2
Condition (15) shows that the identifiability of the data model is determined by k’_{A_θ} and k’_{F_ϕ}. Lemma 2 also indicates that the k’-ranks of A_θ and F_ϕ are related to the structure of their submatrices, which is compatible with the multipath structure of the sources. Therefore, the identifiability result based on PARALIND analysis is determined not only by traditional factors, such as P, K, and r, but also by the multipath structure, captured by J. The following example illustrates this phenomenon:
Assume that the number of receiving antennae is K = 4 and the oversampling factor is P = 6. Six paths from three sources arrive at the receiving end. Consider the following two cases:

(1)
The number of paths of each source is r_1 = 2, r_2 = 2, r_3 = 2. In this case, k’_{A_θ} = 2 and k’_{F_ϕ} = 3, so
k’_{A_θ} + k’_{F_ϕ} = 5 ≥ F + 2 = 5. According to Theorem 2, parameter identification is achievable.

(2)
The number of paths of each source is r_1 = 3, r_2 = 2, r_3 = 1. In this case, k’_{A_θ} = 1 and k’_{F_ϕ} = 3, so
k’_{A_θ} + k’_{F_ϕ} = 4 < F + 2 = 5, and Theorem 2 is violated.
This shows that, although the number of receiving antennae, the oversampling factor, and the total number of paths remain the same, the identifiability results may differ due to the multipath structure. Furthermore, [20] has shown that the dependence matrix J can be uniquely obtained in the PARALIND model by the trilinear decomposition method; the receive array can therefore obtain the multipath information directly from the data model.
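The two cases above can be reproduced with the Lemma 2 rule, used here in the form derived in the Appendix proof (the largest m whose m largest path counts sum to at most the number of rows):

```python
# k'-rank of an I-row Vandermonde manifold partitioned by path counts,
# following the rule from the Appendix proof of Lemma 2.
def k_prime(I, path_counts):
    m, total = 0, 0
    for r_f in sorted(path_counts, reverse=True):
        if total + r_f > I:
            break
        total, m = total + r_f, m + 1
    return m

K, P, F = 4, 6, 3                       # antennae, oversampling, sources
results = {}
for counts in ((2, 2, 2), (3, 2, 1)):
    k_a = k_prime(K, counts)            # k'-rank of A_theta (K rows)
    k_f = k_prime(P, counts)            # k'-rank of F_phi   (P rows)
    results[counts] = (k_a, k_f, k_a + k_f >= F + 2)   # condition (15)
print(results)   # {(2, 2, 2): (2, 3, True), (3, 2, 1): (1, 3, False)}
```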
PARALIND-based identifiability result with smoothing techniques
The identifiability condition (15) can be relaxed by introducing spatial and temporal smoothing techniques that take advantage of the Vandermonde structure of the array manifold matrix A_θ and the time manifold matrix F_ϕ. Take ‘temporal smoothing’ as an example. Rewrite (15)
Construct L matrices of size MK × N
where A(a:b, :) stands for rows a to b (inclusive) of A. L is the smoothing factor and M = P − L + 1. Due to the Vandermonde structure, it holds that
where ϕ^{l−1} denotes [ϕ_1^{l−1}, ϕ_2^{l−1}, …, ϕ_r^{l−1}] and F_ϕ^M = F_ϕ(1:M, :). Substituting (18) into (17):
Lay out L matrices ${\stackrel{\u0304}{\mathbf{X}}}^{\left(l\right)},l=1,\dots ,L$ vertically and construct a new matrix $\stackrel{~}{\mathbf{X}}$ of size LMK × N
where ${\mathbf{F}}_{\varphi}^{L}={\mathbf{F}}_{\varphi}(1:L,:)$. Define ${\mathbf{A}}_{\theta}^{M}={\mathbf{F}}_{\varphi}^{M}\odot {\mathbf{A}}_{\theta}$. It follows that
The smoothed data X̃ can also be modeled as PARALIND. The main difference between (21) and (15) is that the model matrices F_ϕ and A_θ in (15) are replaced by F_ϕ^L and A_θ^M. The parameters can likewise be determined if F_ϕ^L and A_θ^M are uniquely decomposed from X̃. Before discussing the identifiability of this smoothed model, we need the following lemma:
Lemma 3
(k’-rank of the Khatri-Rao product of Vandermonde matrices)
Consider two Vandermonde matrices A ∈ ℂ^{I×r} and B ∈ ℂ^{J×r} with distinct nonzero generators. A and B are partitioned as A = [A_1, A_2, …, A_F] and B = [B_1, B_2, …, B_F], where A_f is of size I × r_f, B_f is of size J × r_f, and r = r_1 + r_2 + ··· + r_F. Sort r_1, …, r_F in descending order, so that r_1 > r_2 > ··· > r_F. If
then k’_{A⊙B} ≥ K.
Proof
See Appendix. □
Theorem 3
Consider the smoothed data model (21)
where A_θ^M = F_ϕ^M ⊙ A_θ. F_ϕ^L and F_ϕ^M, given in (31) and (28), are of size L × ∑_{f=1}^{F} r_f and M × ∑_{f=1}^{F} r_f, respectively. Assume that L is selected as L = ∑_{f=1}^{R} r_f, R ∈ [2, F], and that S is full k-rank. Suppose that the conditions
hold. Then ${\mathbf{F}}_{\varphi}^{L}$, ${\mathbf{F}}_{\varphi}^{M}$ andA_{ θ } can be uniquely determined from $\stackrel{~}{\mathbf{X}}$. Parameters are identifiable.
Proof
See Appendix. □
Remark 3
Theorem 3 is established by smoothing the matrix F_ϕ based on its Vandermonde structure. Note that the array manifold A_θ is also a Vandermonde matrix. By symmetry, a similar formulation can be obtained by smoothing A_θ instead, known as ‘spatial smoothing’:
where F_ϕ^M = F_ϕ ⊙ A_θ^M, A_θ^M = A_θ(1:M, :), and A_θ^L = A_θ(1:L, :). Note that the data model (24) has the same formulation as (21). This implies that the identifiability results of Theorem 3 also hold when A_θ is smoothed instead, with R the k’-rank of A_θ^L.
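Both smoothing variants rest on the same Vandermonde row-shift property used in (18): each window of M consecutive rows equals the first window scaled column-wise by powers of the generators. A numeric sketch (sizes are illustrative assumptions):

```python
import numpy as np

# Vandermonde row-shift identity behind temporal/spatial smoothing:
# F[l-1 : l-1+M, :] == F[0:M, :] * phi**(l-1) for each shift l = 1..L.
rng = np.random.default_rng(1)
P, L, r = 8, 3, 5
M = P - L + 1                                    # window length
phi = np.exp(2j * np.pi * rng.uniform(size=r))   # unit-modulus generators
F_phi = np.vander(phi, P, increasing=True).T     # P x r, row i = phi**i

ok = all(
    np.allclose(F_phi[l - 1:l - 1 + M, :], F_phi[:M, :] * phi ** (l - 1))
    for l in range(1, L + 1)
)
```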
Remark 4
Condition (23) gives a new trade-off between the number of sensors K and the oversampling factor P, referred to as the “space-time” trade-off, to achieve parameter identifiability. As a special case of (23), two antennae are sufficient for r paths when the oversampling factor P is at least ∑_{i=1}^{R} r_i + ∑_{i=1}^{F+2−R} r_i − 2. The lower bound on the number of receiving antennae K and the sampling diversity P in Theorem 3 is much better than that in Theorem 2, discussed in Remark 1. This implies that the smoothing technique can further improve the identifiability of the data model. It also shows that the system is capable of supporting many more paths than sensors, provided enough sampling diversity is available. Owing to the complete symmetry in the roles of P and K, limited samples also suffice for r paths when enough antennae are used at the receiving end.
Remark 5
Rewrite condition (23) as
Similar to Remark 2, the value of P + K is related to the multipath structure, represented by r_1, …, r_F. Moreover, it is of interest that the lower bound on P + K varies with R, the k’-rank of F_ϕ^L (or A_θ^L in (24)). We now prove that the minimum lower bound in (25) is achieved at R = 2 or R = F. Define a function of the variable R: f(R) = ∑_{i=1}^{R} r_i + ∑_{i=1}^{F+2−R} r_i. Let Δf(R) = f(R) − f(2), R ∈ (2, F]. We wish to prove that
It follows
Note that r_1 ≥ r_2 ≥ ··· ≥ r_F and R ≤ F, so r_{R−i+1} ≥ r_{F−i+1} and therefore Δf(R) ≥ 0. Since Δf(F) = ∑_{i=3}^{F} r_i − ∑_{i=3}^{F} r_i = 0, we have f(F) = f(2). This result gives a relationship between the smoothing factor R and the minimum value of P + K in parameter estimation when r paths are considered. Since the cost of parameter estimation is usually related to the number of receiving antennae and the oversampling factor, the result also implies that this cost can be decreased when the smoothing factor is properly selected.
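The argument above can be checked numerically for a small multipath profile (the r_i values are chosen for illustration):

```python
# f(R) = (sum of R largest r_i) + (sum of the F+2-R largest r_i), as in
# Remark 5; its minimum over R in [2, F] is attained at R = 2 and R = F.
def f(R, r_sorted):
    F = len(r_sorted)
    return sum(r_sorted[:R]) + sum(r_sorted[:F + 2 - R])

r_sorted = [5, 3, 2, 1]                       # F = 4, descending order
values = {R: f(R, r_sorted) for R in range(2, len(r_sorted) + 1)}
print(values)                                 # {2: 19, 3: 20, 4: 19}
```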
Remark 6
If the product SJ is considered as a single matrix, the data model (12) can be modeled as PARAFAC. However, the matrix product SJ has collinear columns due to the structure of J; according to the uniqueness property of PARAFAC [25], uniqueness of the given model cannot be guaranteed, so meaningful parameter identifiability results cannot be derived directly from the PARAFAC model. Sidiropoulos and Liu [10] utilize a smoothing technique to improve the k-rank of SJ and give identifiability results for (12) based on the PARAFAC model. Here we show that condition (23) is less restrictive than the one in [10]. Define the matrix C = SJ. Sidiropoulos and Liu [10] showed that model (12) is identifiable provided that
Note that C has collinear columns, so by the definition of the k-rank, k_C = 1 and condition (26) becomes K + P ≥ 2r + 1. Since r = ∑_{i=1}^{F} r_i, we have
It can be concluded that condition (23) is more relaxed than condition (26). The following example illustrates this improvement. Assume that four users are considered, with multipath counts r_1 = 5, r_2 = 3, r_3 = 2, r_4 = 1, respectively. Figure 2 depicts the minimum number of receiving antennae K required under the two conditions as the oversampling factor P varies.
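For this example, the two conditions can be compared numerically. Since the displayed inequalities are not reproduced in this excerpt, the sketch assumes condition (25) reads K + P ≥ min_R f(R) (consistent with Remarks 4 and 5), while condition (26) with k_C = 1 reads K + P ≥ 2r + 1 as stated above:

```python
# Compare the PARALIND-based bound (assumed form K + P >= min_R f(R),
# with f as in Remark 5) against the PARAFAC-based bound K + P >= 2r + 1
# for the Remark 6 example r = (5, 3, 2, 1).
r = sorted([5, 3, 2, 1], reverse=True)
F, r_total = len(r), sum(r)

paralind = min(sum(r[:R]) + sum(r[:F + 2 - R]) for R in range(2, F + 1))
parafac = 2 * r_total + 1

print(paralind, parafac)       # the PARALIND bound on K + P is smaller
```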
Conclusion
This article has discussed the identifiability of deterministic parameter estimation via a multisensor array based on trilinear decomposition theory. Using the uniqueness property of the PARALIND model, new identifiability results are established that are less restrictive than those of earlier studies. Based on the proposed identifiability conditions, a new “space-time” trade-off between the number of receiving antennae and the sampling diversity for parameter identification is presented, and it is shown that even two receiving antennae are sufficient for identifying the parameters of r paths, provided sufficient sampling diversity is available. Moreover, we find that the identifiability conditions are determined not only by traditional factors, such as the number of receiving antennae, the oversampling factor, or the number of paths, but also by the multipath structure of each source, which was not considered in previous work.
Appendix
Proof of Lemma 2
According to the definition of the k’-rank, k’_A = K means that any K submatrices of A yield a set of linearly independent columns, but this does not hold for K + 1 submatrices. Let Ã be an I × ∑_{i=1}^{K} r̃_i matrix containing any K submatrices of A, Ã = [Ã_1, …, Ã_K], where Ã_1, …, Ã_K are selected from A_1, …, A_F with Ã_i ≠ Ã_j for i ≠ j, i, j ∈ [1, F]. Note that Ã_k is of size I × r̃_k with Vandermonde structure, and in general Ã_k ≠ A_k and r̃_k ≠ r_k. With the assumption r_1 > r_2 > ··· > r_F, it follows that ∑_{i=1}^{K} r̃_i ≤ ∑_{i=1}^{K} r_i, so I ≥ ∑_{f=1}^{K} r_f guarantees I ≥ ∑_{f=1}^{K} r̃_f. According to Lemma 1, Ã is full column rank. This implies that any K submatrices of A yield a set of linearly independent columns, so k’_A ≥ K. On the other hand, define Â = [A_1, …, A_K, A_{K+1}], an I × ∑_{i=1}^{K+1} r_i Vandermonde matrix built from K + 1 submatrices of A, with k_Â = min(I, ∑_{i=1}^{K+1} r_i) = I. This means that we can find K + 1 submatrices of A that yield a set of linearly dependent columns, so k’_A < K + 1.
Therefore, k’_A = K. The proof is complete. □
Proof of Theorem 2
Before proving Theorem 2, we need the following Lemma:
Lemma 4
[27] Consider a matrix decomposition X = AB^T, where A ∈ ℂ^{I×F} is a Vandermonde matrix with distinct nonzero generators and B ∈ ℂ^{J×F} is a ‘tall’ or ‘square’ matrix with full column rank. Suppose that the condition I ≥ F + 1 holds; then A and B can be uniquely decomposed from X up to permutation and scaling ambiguity. That is, any alternative decomposition of X, denoted X = ĀB̄^T, in which Ā ∈ ℂ^{I×F} has Vandermonde structure and B̄ ∈ ℂ^{J×F} is full column rank, is related to A and B via Ā = A π_A Δ_A, B̄ = B π_B Δ_B, where π_A, π_B are permutation matrices and Δ_A, Δ_B are diagonal scaling matrices with nonzero elements.
According to Theorem 1, when condition (15) holds, we have F̂_ϕ^f = F_ϕ^f U_f and Â_θ^f = A_θ^f V_f, f = 1, …, F, where U_1, V_1, …, U_F, V_F are 2F nonsingular square matrices. Note that any subset of columns of a Vandermonde matrix forms a Vandermonde matrix; therefore, F_ϕ^f, A_θ^f, f = 1, …, F, all have Vandermonde structure. Lemma 4 provides that F_ϕ^1, A_θ^1, …, F_ϕ^F, A_θ^F can be uniquely determined from F̂_ϕ^1, Â_θ^1, …, F̂_ϕ^F, Â_θ^F only if the following conditions are satisfied:
Assume that ${r}_{1}>{r}_{2}>\cdots >{r}_{F}$. Then condition (27) becomes
Remark 1 shows that the minimum of P and K should be larger than ${r}_{1}+{r}_{2}$. Since ${r}_{2}$ is no less than 1 (multiple-source assumption), it follows that $min(P,K)\ge {r}_{1}+1$, so that (15) is a sufficient condition for (28). Therefore, ${\mathbf{F}}_{\varphi}^{f},{\mathbf{A}}_{\mathit{\theta}}^{f}$ can be uniquely determined from ${\widehat{\mathbf{F}}}_{\varphi}^{f},{\widehat{\mathbf{A}}}_{\mathit{\theta}}^{f}$ when condition (15) is satisfied. Then ${\mathbf{F}}_{\varphi},{\mathbf{A}}_{\mathit{\theta}}$ can be uniquely obtained from ${\mathbf{F}}_{\varphi}^{f},{\mathbf{A}}_{\mathit{\theta}}^{f}$. The proof is complete. $\u220e$
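As a numerical side note (not part of the proof), the unique recovery guaranteed by Lemma 4 can be illustrated with a short sketch: the distinct generators of the Vandermonde factor A can be read off from X alone via a shift-invariance step. The dimensions and generator values below are hypothetical.

```python
import numpy as np

# Lemma 4 setting: X = A B^T with A (I x F) Vandermonde with distinct
# nonzero generators, B (J x F) full column rank, and I >= F + 1.
# Deleting the first and last row of X gives X1 = A1 B^T, X2 = A1 D B^T,
# where D = diag(generators); the nonzero eigenvalues of pinv(X1) @ X2
# are therefore the generators, which determine A's columns.
rng = np.random.default_rng(0)
I, J, F = 5, 4, 3
g = np.array([0.9 * np.exp(1j * 0.4),
              1.1 * np.exp(-1j * 1.0),
              1.0 * np.exp(1j * 2.1)])              # distinct nonzero generators
A = np.vander(g, N=I, increasing=True).T            # I x F, A[i, f] = g_f**i
B = rng.standard_normal((J, F)) + 1j * rng.standard_normal((J, F))
X = A @ B.T

X1, X2 = X[:-1, :], X[1:, :]                        # share the factor B^T
ev = np.linalg.eigvals(np.linalg.pinv(X1) @ X2)
g_hat = ev[np.argsort(-np.abs(ev))][:F]             # F dominant eigenvalues

# Up to permutation, the generators (hence A, and then B) are recovered.
print(np.sort_complex(g_hat))
```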
Proof of Lemma 3
We need the following Lemma:
Lemma 5
(full rank of Khatri-Rao product) [28]
Consider $\mathbf{A}\odot \mathbf{B}:=\left[{\mathbf{a}}_{1}\otimes {\mathbf{b}}_{1},\dots ,{\mathbf{a}}_{F}\otimes {\mathbf{b}}_{F}\right]$, where A is of size $I\times F$, B is of size $J\times F$, and ${\mathbf{a}}_{f},{\mathbf{b}}_{f},f=1,\dots ,F$ are the columns of A, B. If ${r}_{\mathbf{A}}+{k}_{\mathbf{B}}\ge F+1$ or ${r}_{\mathbf{B}}+{k}_{\mathbf{A}}\ge F+1$ holds, then $\mathbf{A}\odot \mathbf{B}$ has full column rank, i.e., ${r}_{\mathbf{A}\odot \mathbf{B}}=F$.
Assume A, B are Vandermonde matrices. According to Lemma 1, ${r}_{\mathbf{A}}={k}_{\mathbf{A}}=min(I,F),{r}_{\mathbf{B}}={k}_{\mathbf{B}}=min(J,F)$. As a special case of Lemma 5, the full-column-rank condition of $\mathbf{A}\odot \mathbf{B}$ under the Vandermonde assumption is
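This special case can be checked numerically. The sketch below (hypothetical dimensions and generators) builds two Vandermonde factors that are individually rank-deficient yet whose Khatri-Rao product has full column rank, since $min(I,F)+min(J,F)\ge F+1$ holds.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: [a_1 (x) b_1, ..., a_F (x) b_F]."""
    I, F = A.shape
    J, F2 = B.shape
    assert F == F2, "A and B must have the same number of columns"
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, F)

I, J, F = 3, 3, 4                                   # note I < F and J < F
g_a = np.exp(1j * np.array([0.3, 1.1, 1.9, 2.7]))   # distinct generators
g_b = np.exp(1j * np.array([0.5, 1.3, 2.1, 2.9]))
A = np.vander(g_a, N=I, increasing=True).T          # I x F Vandermonde
B = np.vander(g_b, N=J, increasing=True).T          # J x F Vandermonde

# min(I, F) + min(J, F) = 3 + 3 = 6 >= F + 1 = 5, so A (.) B has full
# column rank even though rank(A) = rank(B) = 3 < F = 4.
rank_kr = np.linalg.matrix_rank(khatri_rao(A, B))
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B), rank_kr)
```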
Randomly select K submatrices $\stackrel{~}{{\mathbf{\text{C}}}_{1}},\dots ,\stackrel{~}{{\mathbf{\text{C}}}_{K}}$ from C and construct a new matrix
where $\stackrel{~}{{\mathbf{A}}_{f}}$ is $I\times {\stackrel{~}{r}}_{f}$, $\stackrel{~}{{\mathbf{B}}_{f}}$ is $J\times {\stackrel{~}{r}}_{f}$, $\stackrel{~}{\mathbf{A}}$ is $I\times \sum _{f=1}^{K}{\stackrel{~}{r}}_{f}$ and $\stackrel{~}{\mathbf{B}}$ is $J\times \sum _{f=1}^{K}{\stackrel{~}{r}}_{f}$. Note that here ${\stackrel{~}{\mathbf{A}}}_{f}\in \left\{{\mathbf{A}}_{1},\dots ,{\mathbf{A}}_{F}\right\},{\stackrel{~}{\mathbf{B}}}_{f}\in \left\{{\mathbf{B}}_{1},\dots ,{\mathbf{B}}_{F}\right\}$, and ${\stackrel{~}{\mathbf{A}}}_{i}\ne {\stackrel{~}{\mathbf{A}}}_{j},{\stackrel{~}{\mathbf{B}}}_{i}\ne {\stackrel{~}{\mathbf{B}}}_{j}$ for $i\ne j$. As in the proof of Lemma 2, we only need to show that $\stackrel{~}{\mathbf{C}}$ has full column rank under condition (22), which is equivalent to proving:
In light of (30), consider the following cases:

(1)
$I\ge \sum _{f=1}^{K}{\stackrel{~}{r}}_{f},J\ge \sum _{f=1}^{K}{\stackrel{~}{r}}_{f}$. Then
$$\begin{array}{ll}\phantom{\rule{12.0pt}{0ex}}min\left(I,\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}\right)& +min\left(J,\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}\right)\phantom{\rule{2em}{0ex}}\\ =\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}+\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}\ge \sum _{f=1}^{K}{\stackrel{~}{r}}_{f}+1\phantom{\rule{2em}{0ex}}\end{array}$$Condition (30) is satisfied.

(2)
$I<\sum _{f=1}^{K}{\stackrel{~}{r}}_{f},J\ge \sum _{f=1}^{K}{\stackrel{~}{r}}_{f}$ or $I\ge \sum _{f=1}^{K}{\stackrel{~}{r}}_{f},J<\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}$. Then
$$\begin{array}{ll}\phantom{\rule{6.0pt}{0ex}}min\left(I,\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}\right)& +min\left(J,\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}\right)\phantom{\rule{2em}{0ex}}\\ =min\left(I,J\right)+\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}\ge \sum _{f=1}^{K}{\stackrel{~}{r}}_{f}+1\phantom{\rule{2em}{0ex}}\end{array}$$Condition (30) is satisfied.

(3)
$I<\sum _{f=1}^{K}{\stackrel{~}{r}}_{f},J<\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}$. Then, by condition (22),
$$min\left(I,\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}\right)+min\left(J,\sum _{f=1}^{K}{\stackrel{~}{r}}_{f}\right)=I+J\ge \sum _{f=1}^{K}{r}_{f}+1$$Since ${r}_{1}\ge {r}_{2}\ge \cdots \ge {r}_{F}$ and $\{{\stackrel{~}{r}}_{1},\dots ,{\stackrel{~}{r}}_{K}\}\subset \{{r}_{1},\dots ,{r}_{F}\}$, it holds that $\sum _{f=1}^{K}{r}_{f}+1\ge \sum _{f=1}^{K}{\stackrel{~}{r}}_{f}+1$. Condition (30) is satisfied.
Therefore, $\stackrel{~}{\mathbf{C}}$ has full column rank, so that ${k}_{\mathbf{C}}^{\prime\prime}\ge K$. The proof is complete. $\u220e$
Proof of Theorem 3
Note that $P=L+M-1$ and $L=\sum _{i=1}^{R}{r}_{i}\le P$. According to Lemma 2, the k′-rank of ${\mathbf{F}}_{\varphi}^{L}$ is R. Condition (23) becomes
According to Lemma 3, condition (31) guarantees that the k′-rank of ${\mathbf{A}}_{\theta}^{M}$ is no less than $F+2-R$. Then we have
With the assumption of ${k}_{S}=F$ and Theorem 1, condition (32) shows that the partial uniqueness of model (21) is achieved. Then we have
where ${\mathbf{V}}_{f},{\mathbf{U}}_{f}$ are nonsingular square matrices of size ${r}_{f}\times {r}_{f}$. Because $L=\sum _{i=1}^{R}{r}_{i}\ge {r}_{f}+1$, Lemma 4 implies that ${\mathbf{F}}_{\varphi ,f}^{L}$ can be uniquely determined from ${\widehat{\mathbf{F}}}_{\varphi ,f}^{L}$. We next show that ${\mathbf{F}}_{\varphi ,f}^{M}$ and ${\mathbf{A}}_{\theta ,f}$ can be uniquely determined from ${\widehat{\mathbf{A}}}_{\theta ,f}^{M}$. Consider the following cases with different values of ${r}_{f}$:

(1)
${r}_{f}=1$. In this case, ${\mathbf{F}}_{\varphi ,f}^{M}$ and ${\mathbf{A}}_{\theta ,f}$ degenerate to vectors ${\mathbf{f}}_{\varphi ,f}^{M}$, ${\mathbf{a}}_{\theta ,f}$, and ${\mathbf{U}}_{f}$ degenerates to a scalar ${u}_{f}$. Equation (34) becomes
$${\widehat{\mathbf{a}}}_{\theta ,f}^{M}={u}_{f}\left({\mathbf{f}}_{\varphi ,f}^{M}\odot {\mathbf{a}}_{\theta ,f}\right)={u}_{f}\left({\mathbf{f}}_{\varphi ,f}^{M}\otimes {\mathbf{a}}_{\theta ,f}\right)$$where ${\widehat{\mathbf{a}}}_{\theta ,f}^{M}\in {\mathbb{C}}^{\mathrm{MK}\times 1}$, ${\mathbf{f}}_{\varphi ,f}^{M}\in {\mathbb{C}}^{M\times 1}$ and ${\mathbf{a}}_{\theta ,f}\in {\mathbb{C}}^{K\times 1}$. Because ${u}_{f}$ only introduces a scaling of ${\mathbf{f}}_{\varphi ,f}^{M}$ and ${\mathbf{a}}_{\theta ,f}$, it does not affect the identifiability result; we set ${u}_{f}=1$ without loss of generality. Rearrange ${\widehat{\mathbf{a}}}_{\theta ,f}^{M}$ into a matrix Ω
$$\begin{array}{ll}\phantom{\rule{6.0pt}{0ex}}\mathit{\Omega}& \phantom{\rule{0.3em}{0ex}}=\phantom{\rule{0.3em}{0ex}}\text{unvec}\left({\widehat{\mathbf{a}}}_{\theta ,f}^{M},M,K\right)\phantom{\rule{0.3em}{0ex}}=\phantom{\rule{0.3em}{0ex}}\text{unvec}\left({\mathbf{f}}_{\varphi ,f}^{M}\otimes {\mathbf{a}}_{\theta ,f},M,K\right)\phantom{\rule{2em}{0ex}}\\ ={\mathbf{a}}_{\theta ,f}{\left({\mathbf{f}}_{\varphi ,f}^{M}\right)}^{T}\phantom{\rule{2em}{0ex}}\end{array}$$Then ${\mathbf{f}}_{\varphi ,f}^{M}$ and ${\mathbf{a}}_{\theta ,f}$ can easily be determined from Ω by a rank-one singular value decomposition (SVD), up to a scaling ambiguity.

(2)
${r}_{f}\ge 2$. Note that Equation (34) is a standard slice-matrix formulation of the PARAFAC model when ${r}_{f}\ge 2$, whose three mode matrices are ${\mathbf{F}}_{\varphi ,f}^{M}$, ${\mathbf{A}}_{\theta ,f}$ and ${\mathbf{U}}_{f}$. Recall that the uniqueness condition of the PARAFAC model is [11, 25, 29–31]
$${k}_{{\mathbf{F}}_{\varphi ,f}^{M}}+{k}_{{\mathbf{A}}_{\theta ,f}}+{k}_{{\mathbf{U}}_{f}}\ge 2{r}_{f}+2$$(35)${\mathbf{U}}_{f}$ is an ${r}_{f}\times {r}_{f}$ nonsingular square matrix with full k-rank, i.e., ${k}_{{\mathbf{U}}_{f}}={r}_{f}$. ${\mathbf{F}}_{\varphi ,f}^{M}\in {\mathbb{C}}^{M\times {r}_{f}}$ and ${\mathbf{A}}_{\theta ,f}\in {\mathbb{C}}^{K\times {r}_{f}}$ are Vandermonde matrices, and their k-ranks are ${k}_{{\mathbf{F}}_{\varphi ,f}^{M}}=min(M,{r}_{f})$, ${k}_{{\mathbf{A}}_{\theta ,f}}=min(K,{r}_{f})$. Then condition (35) becomes:
$$min(M,{r}_{f})+min(K,{r}_{f})\ge {r}_{f}+2$$(36)We now prove that condition (31) is sufficient for (36). Four cases need to be discussed:

(2.1)
$M\ge {r}_{f}$, $K\ge {r}_{f}$: then $min(M,{r}_{f})+min(K,{r}_{f})={r}_{f}+{r}_{f}\ge {r}_{f}+2$, since ${r}_{f}\ge 2$ in this case. Condition (36) is satisfied.

(2.2)
$M<{r}_{f}$, $K\ge {r}_{f}$: then $min(M,{r}_{f})+min(K,{r}_{f})=M+{r}_{f}$. Condition (36) is satisfied when $M\ge 2$. However, if $M=1$, then $L=P$, and the structure of ${\mathbf{F}}_{\varphi ,f}^{M}$ shows that model (21) degenerates to $\stackrel{~}{\mathbf{X}}=\left({\mathbf{F}}_{\varphi}\odot {\mathbf{A}}_{\theta}\right){\left(\mathbf{SJ}\right)}^{\text{T}}$. According to (31), $K\ge \sum _{i=1}^{F+2-R}{r}_{i}$, so the k′-rank of ${\mathbf{A}}_{\theta}$ is no less than $F+2-R$. With the assumption of $L=\sum _{i=1}^{R}{r}_{i}$, the k′-rank of ${\mathbf{F}}_{\varphi}$ is R. It holds that
$${k}_{{\mathbf{F}}_{\varphi}}^{\prime\prime}+{k}_{{\mathbf{A}}_{\theta}}^{\prime\prime}+{k}_{S}\ge R+F-R+2+F=2F+2$$(37)Theorem 2 shows that ${\mathbf{A}}_{\theta}$ and ${\mathbf{F}}_{\varphi}$ can be uniquely determined from $\stackrel{~}{\mathbf{X}}$ under condition (37).

(2.3)
$M\ge {r}_{f}$, $K<{r}_{f}$: then $min(M,{r}_{f})+min(K,{r}_{f})={r}_{f}+K$. Since $min(K,P)\ge 2$, we have $K\ge 2$, so $min(M,{r}_{f})+min(K,{r}_{f})\ge {r}_{f}+2$. Condition (36) is satisfied.

(2.4)
$M<{r}_{f}$, $K<{r}_{f}$: then $min(M,{r}_{f})+min(K,{r}_{f})=M+K\ge \sum _{i=1}^{F+2-R}{r}_{i}+1$. Because $2\le R\le F$, $\sum _{i=1}^{F+2-R}{r}_{i}+1\ge {r}_{1}+{r}_{2}+1\ge {r}_{f}+2$. Condition (36) is satisfied.

Therefore, ${\mathbf{F}}_{\varphi ,f}^{L}$, ${\mathbf{F}}_{\varphi ,f}^{M}$ and ${\mathbf{A}}_{\theta ,f}$ can be uniquely determined from ${\widehat{\mathbf{F}}}_{\varphi ,f}^{L}$ and ${\widehat{\mathbf{A}}}_{\theta ,f}^{M}$ under condition (23). This completes the proof. $\u220e$
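The rank-one recovery step used in case (1) of the proof above can be sketched numerically: with ${u}_{f}=1$, unvec of the Kronecker product is the rank-one matrix ${\mathbf{a}}_{\theta ,f}{({\mathbf{f}}_{\varphi ,f}^{M})}^{T}$, whose factors are read off from the leading singular vectors up to a scaling ambiguity. The dimensions and generators below are hypothetical.

```python
import numpy as np

# Case (1) of the proof: observe x = f (x) a (Kronecker product), then
# unvec(x) = a f^T is rank one, and a truncated SVD returns a and f up to
# a (reciprocal) scaling ambiguity.
M, K = 4, 5
f_vec = np.exp(1j * 0.7) ** np.arange(M)       # Vandermonde vector f
a_vec = np.exp(-1j * 1.2) ** np.arange(K)      # Vandermonde vector a

x = np.kron(f_vec, a_vec)                      # MK x 1 observed slice
Omega = x.reshape(M, K).T                      # unvec: K x M, equals a f^T

U, s, Vh = np.linalg.svd(Omega)
a_hat = s[0] * U[:, 0]                         # proportional to a
f_hat = Vh[0, :]                               # proportional to f

# The rank-one outer product reconstructs Omega; a_hat and f_hat match
# a_vec and f_vec up to mutually inverse complex scalings.
print(np.allclose(np.outer(a_hat, f_hat), Omega))
```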
References
 1.
van der Veen AJ: Algebraic methods for deterministic blind beamforming. Proc. IEEE 1998, 86: 1987-2008. 10.1109/5.720249
 2.
Krim H, Viberg M: Two decades of array signal processing research, the parametric approach. IEEE Signal Process. Mag. 1996, 13: 67-94. 10.1109/79.526899
 3.
Liu ZT, He J, Liu Z: Computationally efficient DOA and polarization estimation of coherent sources with linear electromagnetic vector-sensor array. EURASIP J. Adv. Signal Process. 2010. doi:10.1155/2011/490289
 4.
Sohl A, Klein A: Semiblind channel estimation for IFDMA in case of channels with large delay spreads. EURASIP J. Adv. Signal Process. 2010. doi:10.1155/2011/857859
 5.
Roy R, Kailath T: ESPRIT: estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 1989, 37: 984-995. 10.1109/29.32276
 6.
Pillai S, Kwon B: Forward/backward spatial smoothing techniques for the coherent signal identification. IEEE Trans. Acoust. Speech Signal Process. 1989, 37: 8-15. 10.1109/29.17496
 7.
Linebarger DA, DeGroat RD, Dowling EM: Efficient direction-finding methods employing forward/backward averaging. IEEE Trans. Signal Process. 1994, 42(8): 2136-2145. 10.1109/78.301848
 8.
van der Veen AJ, van der Veen MC, Paulraj A: Joint angle and delay estimation using shift-invariance techniques. IEEE Trans. Signal Process. 1998, 46(2): 405-418. 10.1109/78.655425
 9.
van der Veen MC, van der Veen AJ: Estimation of multipath parameters in wireless communications. IEEE Trans. Signal Process. 1998, 46(3): 682-690. 10.1109/78.661335
 10.
Sidiropoulos ND, Liu X: Identifiability results for blind beamforming in incoherent multipath with small delay spread. IEEE Trans. Signal Process. 2001, 49(1): 228-238. 10.1109/78.890366
 11.
Sidiropoulos ND, Giannakis GB, Bro R: Blind PARAFAC receivers for DS-CDMA systems. IEEE Trans. Signal Process. 2000, 48(3): 810-823. 10.1109/78.824675
 12.
de Almeida ALF, Favier G, Mota JCM: Constrained Tucker-3 model for blind beamforming. Signal Process. 2009, 89: 1240-1244. 10.1016/j.sigpro.2008.11.016
 13.
Sidiropoulos ND, Dimic GZ: Blind multiuser detection in W-CDMA systems with large delay spread. IEEE Signal Process. Lett. 2001, 8(3): 87-89.
 14.
Zhang XF, Xu DZ: Blind PARAFAC signal detection for polarization sensitive array. EURASIP J. Adv. Signal Process. 2007. doi:10.1155/2007/12025
 15.
Liu X, Xu ZZ: A PARALIND-based blind multiuser detection algorithm in MIMO-CDMA system. J. Syst. Eng. Electron. (China) 2011, 33(2): 404-410.
 16.
Liang JL, Yang SY, Zhang JY: 4D near-field source localization using cumulant. EURASIP J. Adv. Signal Process. 2007. doi:10.1155/2007/17820
 17.
Carvalho LC, Roemer F, Haardt M: Multidimensional model order selection. EURASIP J. Adv. Signal Process. 2011. doi:10.1186/1687-6180-2011-26
 18.
Bro R, Harshman RA, Sidiropoulos ND, Lundy ME: Modeling multi-way data with linearly dependent loadings. KVL Technical Report, 2005.
 19.
Bahram M, Bro R: A novel strategy for solving matrix effect in three-way data using parallel profiles with linear dependencies. Anal. Chim. Acta 2007, 584: 397-402. 10.1016/j.aca.2006.11.070
 20.
Bro R, Harshman RA, Sidiropoulos ND, Lundy ME: Modeling multi-way data with linearly dependent loadings. J. Chemometr. 2009, 23(7-8): 324-340. 10.1002/cem.1206
 21.
de Lathauwer L: Decomposition of a higher-order tensor in block terms, part I: lemmas for partitioned matrices. SIAM J. Matrix Anal. Appl. 2008, 30(3): 1022-1032. 10.1137/060661685
 22.
de Almeida ALF, Favier G, Mota JCM: Constrained tensor modeling approach to blind multiple-antenna CDMA schemes. IEEE Trans. Signal Process. 2008, 56(6): 2417-2428.
 23.
de Lathauwer L: Decomposition of a higher-order tensor in block terms, part II: definitions and uniqueness. SIAM J. Matrix Anal. Appl. 2008, 30(3): 1033-1066. 10.1137/070690729
 24.
Stegeman A, de Almeida ALF: Uniqueness conditions for constrained three-way factor decompositions with linearly dependent loadings. SIAM J. Matrix Anal. Appl. 2009, 31: 1469-1490.
 25.
Kruskal JB: Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra Appl. 1977, 18: 95-138. 10.1016/0024-3795(77)90069-6
 26.
Sidiropoulos ND, Giannakis GB, Bro R: Parallel factor analysis in sensor array processing. IEEE Trans. Signal Process. 2000, 48(8): 2377-2388. 10.1109/78.852018
 27.
Liu X, Xu ZZ, Lei L: Identification of array signal parameters based on matrix decomposition. J. Appl. Sci. (China) 2010, 28(1): 49-55.
 28.
Guo X, Brie D, Zhu S, Liao X: A CANDECOMP/PARAFAC perspective on uniqueness of DOA estimation using a vector sensor array. IEEE Trans. Signal Process. 2011, 59: 3475-3481.
 29.
De Lathauwer L: A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization. SIAM J. Matrix Anal. Appl. 2006, 28: 642-666. 10.1137/040608830
 30.
Stegeman A: On uniqueness conditions for Candecomp/Parafac and Indscal with full column rank in one mode. Linear Algebra Appl. 2009, 431: 211-227. 10.1016/j.laa.2009.02.025
 31.
Jiang T, Sidiropoulos ND: Kruskal's permutation lemma and the identification of Candecomp/Parafac and bilinear models with constant modulus constraints. IEEE Trans. Signal Process. 2004, 52: 2625-2636. 10.1109/TSP.2004.832022
Acknowledgements
This work is supported by the China NSF Grants (61101104, 61071090, 61100195), the National Science and Technology Major Project (2011ZX0300500403), the Jiangsu "973" Project (BK2011027), a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (Information and Communication Engineering), and the Nanjing University of Posts & Telecommunications Project (NY211010). The authors wish to thank the anonymous reviewers for their valuable suggestions on improving this article.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Liu, X., Jiang, T., Yang, L. et al. PARALINDbased identifiability results for parameter estimation via uniform linear array. EURASIP J. Adv. Signal Process. 2012, 154 (2012). https://doi.org/10.1186/168761802012154
Keywords
 Trilinear model
 Identifiability
 Parameter estimation
 Array signal processing