• Research
• Open Access

Two accurate phase-difference estimators for dual-channel sine-wave model

EURASIP Journal on Advances in Signal Processing 2013, 2013:122

https://doi.org/10.1186/1687-6180-2013-122

• Accepted: 16 June 2013

Abstract

Two algorithms are proposed for estimating the phase-difference between two real sinusoids with common frequency in the presence of white noise. The first estimator applies the maximum likelihood criterion to find the phase of each channel output separately, and the phase-shift estimate is then given by their difference. The extension of the algorithm to unknown frequency and/or noise powers is also studied. The second method is developed from the linear prediction approach with a properly chosen sampling frequency. Furthermore, variance expressions for the two estimators are derived. Computer simulations are included to corroborate the theoretical calculations and to compare the performance of the proposed schemes with several existing phase-difference estimators as well as with the Cramér-Rao lower bound.

Keywords

• Phase-difference; Maximum likelihood estimation; Linear prediction; Time delay estimation; Sine-wave model

1 Introduction

Finding the phase-difference between two noisy sinusoids with common frequency has applications such as particle size and velocity estimation in laser anemometry, impedance measurements, and electric power calibration. In this work, we consider the following dual-channel discrete-time sine-wave model:
$$x_i(n) = s_i(n) + v_i(n), \quad i = 1, 2, \quad n = 1, \dots, N$$
(1)
where
$$s_i(n) = A_i \sin(\omega n + \phi_i).$$
(2)

The $A_i > 0$ and $\phi_i \in [0, 2\pi)$ denote the sinusoidal amplitude and initial phase at the $i$-th channel, respectively, while $\omega = \Omega T \in [0, \pi)$ is the common frequency, with $\Omega$ and $T$ being the frequency of the continuous-time counterpart and the sampling period. The additive noises $v_1(n)$ and $v_2(n)$ are uncorrelated white Gaussian processes with variances $\sigma_1^2$ and $\sigma_2^2$. The task is to estimate the phase-shift, denoted by $\theta = \phi_1 - \phi_2$, from $x_1(n)$ and $x_2(n)$.
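For concreteness, the dual-channel model above can be simulated as follows. This is a minimal sketch; the function name `dual_channel` and the use of Python's `random` module are illustrative choices, not part of the original work:

```python
import math
import random

def dual_channel(N, omega, A1, A2, phi1, phi2, sigma1, sigma2, seed=0):
    """Generate N samples of the model (1)-(2):
    x_i(n) = A_i*sin(omega*n + phi_i) + v_i(n), n = 1, ..., N,
    with zero-mean white Gaussian noise of standard deviation sigma_i."""
    rng = random.Random(seed)
    x1 = [A1 * math.sin(omega * n + phi1) + rng.gauss(0.0, sigma1)
          for n in range(1, N + 1)]
    x2 = [A2 * math.sin(omega * n + phi2) + rng.gauss(0.0, sigma2)
          for n in range(1, N + 1)]
    return x1, x2
```

Setting `sigma1 = sigma2 = 0` yields the noiseless signals $s_i(n)$, which is convenient for sanity-checking any estimator built on this model.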

Handel has proposed the nonlinear least squares approach for (1) and (2) with an additional DC offset term at each channel output, which corresponds to the estimation of seven unknown parameters. Alternatively, the seven unknowns can be solved by the ellipse-fitting technique. When $\Omega$ in (2) is known, we have proposed the unbiased quadratic delay estimator (UQDE) and a discrete-time Fourier transform-based method for accurately estimating $\theta$. The derivation of both algorithms is based on the in-phase and quadrature-phase components of $\{s_i(n)\}$, but they give optimum estimation performance only when $\pi/(2\Omega T) = \pi/(2\omega)$ is a positive integer. Building on this idea, the modified simple algorithm (MSAL) has recently been developed for phase-difference estimation even if $\pi/(2\omega)$ is not an integer.

The main contribution of this work is to develop and analyze two accurate phase-shift estimation approaches for the signal model of (1) and (2). In Section 2, the maximum likelihood (ML) estimator for $\theta$ is presented. It is proved that its variance equals the Cramér-Rao lower bound (CRLB) in the presence of white Gaussian noise. We also extend the ML algorithm to scenarios with unknown frequency and/or noise powers. By properly choosing $T$ such that $\omega = \pi/2$, a linear prediction (LP)-based phase-difference estimator is derived, and its performance is investigated in Section 3. Section 4 contains numerical examples corroborating our theoretical development and comparing the performance of the ML and LP algorithms with the UQDE and MSAL, as well as with the CRLB. Finally, conclusions are drawn in Section 5.

2 Maximum likelihood estimator

First, we consider that $\Omega$ and the ratio of noise powers, denoted by $r = \sigma_1^2/\sigma_2^2$, are known a priori. By letting $\alpha_i = A_i \cos(\phi_i)$ and $\beta_i = A_i \sin(\phi_i)$, $s_i(n)$ can be written as
$$s_i(n) = \alpha_i \sin(\omega n) + \beta_i \cos(\omega n), \quad i = 1, 2.$$
(3)
The logarithm of the probability density function of {x i (n)} parameterized by α i and β i is
$$\ln p\left(\{x_i(n)\} \,|\, \alpha_i, \beta_i\right) = -\frac{N}{2}\ln\left(2\pi\sigma_i^2\right) - \frac{1}{2\sigma_i^2}\sum_{n=1}^{N}\left(x_i(n) - \alpha_i \sin(\omega n) - \beta_i \cos(\omega n)\right)^2.$$
(4)
Discarding the terms in (4) that do not depend on $\alpha_i$ and $\beta_i$, the ML estimates are given by the minimizer of the following function, denoted by $\Lambda_i(\alpha_i, \beta_i)$:
$$\Lambda_i(\alpha_i, \beta_i) = \sum_{n=1}^{N}\left(x_i(n) - \alpha_i \sin(\omega n) - \beta_i \cos(\omega n)\right)^2.$$
(5)
Taking the partial derivatives of (5) with respect to α i and β i , and setting the resultant expressions to zero lead to
$$\hat{\alpha}_i \sum_{n=1}^{N}\sin^2(\omega n) + \hat{\beta}_i \sum_{n=1}^{N}\sin(\omega n)\cos(\omega n) = \sum_{n=1}^{N}x_i(n)\sin(\omega n)$$
(6)
and
$$\hat{\alpha}_i \sum_{n=1}^{N}\sin(\omega n)\cos(\omega n) + \hat{\beta}_i \sum_{n=1}^{N}\cos^2(\omega n) = \sum_{n=1}^{N}x_i(n)\cos(\omega n),$$
(7)

where $\hat{\alpha}_i$ and $\hat{\beta}_i$ are the ML estimates of $\alpha_i$ and $\beta_i$, respectively. Solving (6) and (7), we obtain:

$$\hat{\alpha}_i = \frac{\sum_{n=1}^{N}x_i(n)\sin(\omega n)\cdot\sum_{n=1}^{N}\cos^2(\omega n) - \sum_{n=1}^{N}x_i(n)\cos(\omega n)\cdot\sum_{n=1}^{N}\sin(\omega n)\cos(\omega n)}{\sum_{n=1}^{N}\sin^2(\omega n)\cdot\sum_{n=1}^{N}\cos^2(\omega n) - \left(\sum_{n=1}^{N}\sin(\omega n)\cos(\omega n)\right)^2}$$
(8)
and
$$\hat{\beta}_i = \frac{\sum_{n=1}^{N}x_i(n)\cos(\omega n)\cdot\sum_{n=1}^{N}\sin^2(\omega n) - \sum_{n=1}^{N}x_i(n)\sin(\omega n)\cdot\sum_{n=1}^{N}\sin(\omega n)\cos(\omega n)}{\sum_{n=1}^{N}\sin^2(\omega n)\cdot\sum_{n=1}^{N}\cos^2(\omega n) - \left(\sum_{n=1}^{N}\sin(\omega n)\cos(\omega n)\right)^2}.$$
(9)
The ML estimate of $\phi_i$ then follows as
$$\hat{\phi}_i = \tan^{-1}\left(\frac{\hat{\beta}_i}{\hat{\alpha}_i}\right), \quad i = 1, 2.$$
(10)
As a result, the ML estimate of θ is:
$$\hat{\theta} = \hat{\phi}_1 - \hat{\phi}_2.$$
(11)

Note that this estimation procedure corresponds to a simplified form of the ML formulation for the seven-parameter model with unknown frequency and additional DC offsets.
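Since (8) and (9) involve only scalar sums, the whole procedure (8) to (11) can be sketched in a few lines. The helper names below are illustrative, and `atan2` is used as a four-quadrant implementation of the arctangent in (10):

```python
import math

def ml_phase_difference(x1, x2, omega):
    """ML phase-difference estimate via (8)-(11) for known frequency omega:
    solve the per-channel normal equations (6)-(7) in closed form, then
    take the difference of the two channel phase estimates."""
    N = len(x1)
    n_range = range(1, N + 1)
    ss = sum(math.sin(omega * n) ** 2 for n in n_range)  # sum of sin^2
    cc = sum(math.cos(omega * n) ** 2 for n in n_range)  # sum of cos^2
    sc = sum(math.sin(omega * n) * math.cos(omega * n) for n in n_range)
    den = ss * cc - sc ** 2                              # common denominator

    def channel_phase(x):
        xs = sum(x[n - 1] * math.sin(omega * n) for n in n_range)
        xc = sum(x[n - 1] * math.cos(omega * n) for n in n_range)
        alpha = (xs * cc - xc * sc) / den                # (8)
        beta = (xc * ss - xs * sc) / den                 # (9)
        return math.atan2(beta, alpha)                   # four-quadrant (10)

    return channel_phase(x1) - channel_phase(x2)         # (11)
```

On noiseless data the least-squares fit is exact, so the estimate recovers $\theta$ up to floating-point error.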

In Appendix 1, we derive the variance of $\hat{\theta}$, denoted by $\text{var}(\hat{\theta})$, which has the form:
$$\text{var}(\hat{\theta}) = \frac{1}{C}\left(\frac{\sigma_1^2 C_1}{A_1^2} + \frac{\sigma_2^2 C_2}{A_2^2}\right),$$
(12)

where $C = \sum_{n=1}^{N}\sin^2(n\omega)\cdot\sum_{n=1}^{N}\cos^2(n\omega) - \left(\sum_{n=1}^{N}\sin(n\omega)\cos(n\omega)\right)^2$ and $C_i = \cos^2(\phi_i)\sum_{n=1}^{N}\sin^2(n\omega) + \sin^2(\phi_i)\sum_{n=1}^{N}\cos^2(n\omega) + 2\sin(\phi_i)\cos(\phi_i)\sum_{n=1}^{N}\sin(n\omega)\cos(n\omega)$, $i = 1, 2$. This variance is identical to the CRLB (see Appendix 2).

When $N \to \infty$, we simply obtain $C = (N/2)^2$ and $C_1 = C_2 = N/2$. As a result, the asymptotic variance is:
$$\text{var}(\hat{\theta}) = \frac{1}{N}\left(\frac{1}{\text{SNR}_1} + \frac{1}{\text{SNR}_2}\right),$$
(13)

where ${\text{SNR}}_{i}={A}_{i}^{2}/\left(2{\sigma }_{i}^{2}\right)$, i=1,2, is the signal-to-noise ratio at the i-th channel.
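The limit behind (13) is easy to check numerically. The sketch below (helper name hypothetical) evaluates the constants of (12) and confirms that $C$ approaches $(N/2)^2$ while $C_1$ and $C_2$ approach $N/2$ for large $N$:

```python
import math

def variance_constants(N, omega, phi1, phi2):
    """Evaluate the constants C, C_1, C_2 of the variance expression (12)."""
    n_range = range(1, N + 1)
    ss = sum(math.sin(omega * n) ** 2 for n in n_range)
    cc = sum(math.cos(omega * n) ** 2 for n in n_range)
    sc = sum(math.sin(omega * n) * math.cos(omega * n) for n in n_range)
    C = ss * cc - sc ** 2
    C1, C2 = (math.cos(p) ** 2 * ss + math.sin(p) ** 2 * cc
              + 2.0 * math.sin(p) * math.cos(p) * sc
              for p in (phi1, phi2))
    return C, C1, C2
```

With these asymptotic values substituted into (12), the expression collapses to (13) term by term.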

Analogous to (5), when the frequency is unknown, ML estimation is achieved by minimizing
$$\Lambda(\omega, \alpha_1, \beta_1, \alpha_2, \beta_2) = \sum_{n=1}^{N}\left(x_1(n) - \alpha_1\sin(\omega n) - \beta_1\cos(\omega n)\right)^2 + r\sum_{n=1}^{N}\left(x_2(n) - \alpha_2\sin(\omega n) - \beta_2\cos(\omega n)\right)^2$$
(14)
or in matrix form:
$$\Lambda(\omega, \boldsymbol{\kappa}_1, \boldsymbol{\kappa}_2) = (\mathbf{x}_1 - \boldsymbol{\Xi}\boldsymbol{\kappa}_1)^T(\mathbf{x}_1 - \boldsymbol{\Xi}\boldsymbol{\kappa}_1) + r(\mathbf{x}_2 - \boldsymbol{\Xi}\boldsymbol{\kappa}_2)^T(\mathbf{x}_2 - \boldsymbol{\Xi}\boldsymbol{\kappa}_2),$$
(15)
where
$$\boldsymbol{\Xi} = \begin{bmatrix} \sin(\omega) & \cdots & \sin(N\omega) \\ \cos(\omega) & \cdots & \cos(N\omega) \end{bmatrix}^T$$
(16)
$$\mathbf{x}_i = \left[x_i(1) \;\; \cdots \;\; x_i(N)\right]^T, \quad i = 1, 2$$
(17)
$$\boldsymbol{\kappa}_i = \left[\alpha_i \;\; \beta_i\right]^T, \quad i = 1, 2.$$
(18)
Expressing $\boldsymbol{\kappa}_i$ in terms of $\omega$ as $(\boldsymbol{\Xi}^T\boldsymbol{\Xi})^{-1}\boldsymbol{\Xi}^T\mathbf{x}_i$, which corresponds to (8) and (9), the ML frequency estimate is given by
$$\hat{\omega} = \arg\max_{\omega}\; g(\omega),$$
(19)
where
$$g(\omega) = \mathbf{x}_1^T\boldsymbol{\Xi}\left(\boldsymbol{\Xi}^T\boldsymbol{\Xi}\right)^{-1}\boldsymbol{\Xi}^T\mathbf{x}_1 + r\,\mathbf{x}_2^T\boldsymbol{\Xi}\left(\boldsymbol{\Xi}^T\boldsymbol{\Xi}\right)^{-1}\boldsymbol{\Xi}^T\mathbf{x}_2.$$
(20)

Once $\stackrel{̂}{\omega }$ is obtained by solving (19), the phase-difference is estimated as in (8) to (11).

The proposed methodology can also be generalized to the scenario where neither $r$ nor $\Omega$ is known a priori; in this case, we estimate the noise powers and frequency in an iterative manner as follows:
• Step 1. Set r=1.

• Step 2. Compute $\stackrel{̂}{\omega }$ using (19).

• Step 3. Use $\omega = \hat{\omega}$ to estimate the noise powers as $\hat{\sigma}_i^2 = \mathbf{x}_i^T\left(\mathbf{I}_N - \boldsymbol{\Xi}(\boldsymbol{\Xi}^T\boldsymbol{\Xi})^{-1}\boldsymbol{\Xi}^T\right)\mathbf{x}_i/N$, $i = 1, 2$, where $\mathbf{I}_N$ is the $N \times N$ identity matrix, and update $r = \hat{\sigma}_1^2/\hat{\sigma}_2^2$.

• Step 4. Repeat steps 2 and 3 until a stopping criterion is reached.

• Step 5. Compute the phase-difference estimate $\stackrel{̂}{\theta }$ based on (8) to (11).
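Because $\boldsymbol{\Xi}^T\boldsymbol{\Xi}$ is only $2\times 2$, $g(\omega)$ in (20) reduces to scalar sums, and the one-dimensional maximization (19) can be approximated by a grid search. The following sketch assumes a grid search is an acceptable substitute for a full optimizer; the function names are illustrative:

```python
import math

def projection_energy(x, omega):
    """One channel term of (20): x^T Xi (Xi^T Xi)^{-1} Xi^T x, written
    with scalar sums using the explicit 2x2 inverse of Xi^T Xi."""
    N = len(x)
    n_range = range(1, N + 1)
    ss = sum(math.sin(omega * n) ** 2 for n in n_range)
    cc = sum(math.cos(omega * n) ** 2 for n in n_range)
    sc = sum(math.sin(omega * n) * math.cos(omega * n) for n in n_range)
    p = sum(x[n - 1] * math.sin(omega * n) for n in n_range)
    q = sum(x[n - 1] * math.cos(omega * n) for n in n_range)
    return (cc * p * p - 2.0 * sc * p * q + ss * q * q) / (ss * cc - sc * sc)

def ml_frequency(x1, x2, r, grid):
    """Grid-search version of (19): maximize g(omega) = g_1 + r * g_2."""
    return max(grid, key=lambda w: projection_energy(x1, w)
               + r * projection_energy(x2, w))
```

In practice the grid estimate would be refined by a local search (e.g. a few Newton or golden-section steps) around the best grid point.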

3 Linear prediction estimator

The basic idea is to approximate $x_2(n+1)$ by a linear combination of $x_1(n)$ and $x_1(n+1)$, and then minimize a least squares cost function of the resultant LP error vector. To facilitate the development of the LP approach, the sampling interval is properly chosen such that the discrete-time frequency is $\omega_0 = \pi/2$. In doing so, $x_1(n)$ and $x_1(n+1)$ are
$$x_1(n) = A_1\sin(\omega_0 n + \phi_1) + v_1(n)$$
(21)
and
$$\begin{aligned} x_1(n+1) &= A_1\sin(\omega_0(n+1) + \phi_1) + v_1(n+1) \\ &= A_1\sin(\omega_0 n + \phi_1)\cos(\omega_0) + A_1\cos(\omega_0 n + \phi_1)\sin(\omega_0) + v_1(n+1) \\ &= A_1\cos(\omega_0 n + \phi_1) + v_1(n+1). \end{aligned}$$
(22)
On the other hand, x 2(n+1) can be written as
$$\begin{aligned} x_2(n+1) &= A_2\sin(\omega_0(n+1) + \phi_2) + v_2(n+1) \\ &= A_2\sin(\omega_0 n + \omega_0 + \phi_1)\cos(\theta) - A_2\cos(\omega_0 n + \omega_0 + \phi_1)\sin(\theta) + v_2(n+1) \\ &= A_2\sin(\omega_0 n + \phi_1)\cos(\omega_0)\cos(\theta) + A_2\cos(\omega_0 n + \phi_1)\sin(\omega_0)\cos(\theta) \\ &\quad - A_2\cos(\omega_0 n + \phi_1)\cos(\omega_0)\sin(\theta) + A_2\sin(\omega_0 n + \phi_1)\sin(\omega_0)\sin(\theta) + v_2(n+1) \\ &= A_2\cos(\omega_0 n + \phi_1)\cos(\theta) + A_2\sin(\omega_0 n + \phi_1)\sin(\theta) + v_2(n+1). \end{aligned}$$
(23)
From (21) to (23), we construct the LP error vector for $n = 1, \dots, N-1$:
$$\mathbf{e} = \boldsymbol{\mu} - \tilde{c}_1\boldsymbol{\nu}_1 - \tilde{c}_2\boldsymbol{\nu}_2,$$
(24)
where $\boldsymbol{\mu} = [x_2(2) \;\cdots\; x_2(N)]^T$, $\boldsymbol{\nu}_1 = [x_1(1) \;\cdots\; x_1(N-1)]^T$, $\boldsymbol{\nu}_2 = [x_1(2) \;\cdots\; x_1(N)]^T$, and $\tilde{\mathbf{c}} = [\tilde{c}_1 \;\; \tilde{c}_2]^T$ is the optimization variable for the LP coefficient vector $\mathbf{c} = [c_1 \;\; c_2]^T = [\sin(\theta)A_2/A_1 \;\; \cos(\theta)A_2/A_1]^T$. The optimum estimate of $\mathbf{c}$, denoted by $\hat{\mathbf{c}} = [\hat{c}_1 \;\; \hat{c}_2]^T$, is obtained via weighted least squares:
$$\hat{\mathbf{c}} = \arg\min_{\tilde{\mathbf{c}}}\; \mathbf{e}^T\mathbf{W}\mathbf{e} = \left(\mathbf{X}_1^T\mathbf{W}\mathbf{X}_1\right)^{-1}\left(\mathbf{X}_1^T\mathbf{W}\boldsymbol{\mu}\right),$$
(25)
where $\mathbf{X}_1 = [\boldsymbol{\nu}_1 \;\; \boldsymbol{\nu}_2]$, and the weighting matrix $\mathbf{W}$ is computed according to the Gauss-Markov theorem as the inverse of the covariance of the LP error evaluated at $\tilde{\mathbf{c}} = \mathbf{c}$:
$$\mathbf{W} = \left(E\left\{\mathbf{e}\mathbf{e}^T\right\}\right)^{-1} = \left(\sigma_1^2\,\mathbf{A}\mathbf{A}^T + \sigma_2^2\,\mathbf{I}_{N-1}\right)^{-1},$$
(26)

where $E$ denotes the expectation operator and $\mathbf{A} = \text{Toeplitz}\left([-c_1 \;\; \mathbf{0}_{1\times(N-2)}]^T,\; [-c_1 \;\; {-c_2} \;\; \mathbf{0}_{1\times(N-3)}]\right)$, with $\mathbf{0}_{M\times N}$ being the $M \times N$ zero matrix. Moreover, $\text{Toeplitz}(\mathbf{u}, \mathbf{v}^T)$ stands for the Toeplitz matrix with $\mathbf{u}$ and $\mathbf{v}^T$ as its first column and first row, respectively.
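The banded structure of A makes its construction, and that of the matrix whose inverse gives the simplified weighting W = (r A A^T + I)^{-1}, straightforward; a pure-Python sketch with illustrative names:

```python
def toeplitz_A(c1, c2, N):
    """A = Toeplitz([-c1, 0, ..., 0]^T, [-c1, -c2, 0, ..., 0]):
    row i holds -c1 at column i and -c2 at column i+1, matching the noise
    combination v2(n+1) - c1*v1(n) - c2*v1(n+1) in the LP error vector."""
    A = [[0.0] * N for _ in range(N - 1)]
    for i in range(N - 1):
        A[i][i] = -c1
        A[i][i + 1] = -c2
    return A

def w_inverse(c1, c2, N, r):
    """Return r*A*A^T + I_{N-1}; the weighting matrix W is its inverse."""
    A = toeplitz_A(c1, c2, N)
    return [[r * sum(A[i][k] * A[j][k] for k in range(N))
             + (1.0 if i == j else 0.0)
             for j in range(N - 1)] for i in range(N - 1)]
```

The result is tridiagonal: the diagonal entries are $r(c_1^2 + c_2^2) + 1$ and the first off-diagonal entries are $r c_1 c_2$, so in practice $\mathbf{W}$ can be obtained with a banded solver rather than a dense inversion.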

It is observed from (25) that a scaled version of $\mathbf{W}$ can be used. As a result, we simplify the weighting matrix to $\mathbf{W} = (r\mathbf{A}\mathbf{A}^T + \mathbf{I}_{N-1})^{-1}$ so that only the ratio $r = \sigma_1^2/\sigma_2^2$ is required. As $\mathbf{W}$ is a function of the unknown $\mathbf{c}$, we propose the following iterative procedure to solve for $\hat{\theta}$:
• Step 1. Set W=I N−1.

• Step 2. Compute $\stackrel{̂}{\mathbf{c}}$ using (25).

• Step 3. Use $\mathbf{c}=\stackrel{̂}{\mathbf{c}}$ to construct W.

• Step 4. Repeat steps 2 and 3 until a stopping criterion is reached.

• Step 5. Compute the phase-difference estimate as $\stackrel{̂}{\theta }={tan}^{-1}\left({ĉ}_{1}/{ĉ}_{2}\right)$.
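The first pass of this procedure, with $\mathbf{W} = \mathbf{I}_{N-1}$, is ordinary least squares on the two-column regressor $\mathbf{X}_1$ and reduces to a 2x2 normal-equation solve. A minimal sketch for $\omega_0 = \pi/2$ (the function name is illustrative; `atan2` serves as a four-quadrant $\tan^{-1}$, which is valid here since $A_2/A_1 > 0$):

```python
import math

def lp_phase_difference(x1, x2):
    """One LP iteration with W = I_{N-1}: fit
    x2(n+1) ~ c1*x1(n) + c2*x1(n+1) in the least squares sense (24)-(25),
    then theta_hat = tan^{-1}(c1/c2) as in step 5."""
    N = len(x1)
    nu1, nu2, mu = x1[:N - 1], x1[1:], x2[1:]
    a = sum(v * v for v in nu1)                  # nu1^T nu1
    b = sum(u * v for u, v in zip(nu1, nu2))     # nu1^T nu2
    d = sum(v * v for v in nu2)                  # nu2^T nu2
    p = sum(u * m for u, m in zip(nu1, mu))      # nu1^T mu
    q = sum(v * m for v, m in zip(nu2, mu))      # nu2^T mu
    det = a * d - b * b
    c1 = (d * p - b * q) / det                   # solve 2x2 normal equations
    c2 = (a * q - b * p) / det
    return math.atan2(c1, c2)
```

Subsequent iterations would rebuild $\mathbf{W}$ from the current $\hat{\mathbf{c}}$ as in steps 2 to 4; on noiseless data even this first unweighted pass recovers $\theta$ exactly.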

For small error conditions such that $\hat{\mathbf{c}}$ is sufficiently close to $\mathbf{c}$, we follow Appendix 1 (first-order perturbation analysis) to compute the variance of the phase-difference estimate. The first-order estimation error $\Delta\theta = \hat{\theta} - \theta$ is
$$\Delta\theta = \frac{\partial \tan^{-1}(c_1/c_2)}{\partial c_1}\Delta c_1 + \frac{\partial \tan^{-1}(c_1/c_2)}{\partial c_2}\Delta c_2,$$
(27)
where the covariance matrix of $\hat{\mathbf{c}}$, denoted by $\mathbf{C}$, is approximated as
$$\mathbf{C} \approx \sigma_2^2\left(\mathbf{S}_1^T\mathbf{W}\mathbf{S}_1\right)^{-1}$$
(28)
with $\mathbf{S}_1$ being the signal component of $\mathbf{X}_1$. As a result, from (27) and (28), the variance is
$$\text{var}(\hat{\theta}) = E\left\{(\Delta\theta)^2\right\} \approx \mathbf{a}^T\mathbf{C}\mathbf{a},$$
(29)
where
$$\mathbf{a} = \frac{A_1}{A_2}\left[\cos\theta \;\; {-\sin\theta}\right]^T.$$
(30)

When $N \to \infty$, (29) is equivalent to (13). That is, the performance of the LP estimator is optimum in the asymptotic sense. Note that when $r$ is not known a priori, we can use the procedure in Section 2 to estimate the noise levels.

4 Simulation results

Extensive computer simulations are carried out to evaluate the mean square error (MSE) performance of the two proposed phase-difference estimators by comparing with the UQDE and MSAL as well as the CRLB. The validity of the theoretical calculations of (12), (13), and (29) is also investigated. For the LP method, the stopping criterion is a fixed number of 15 iterations. The amplitude and phase parameters are assigned as $A_1 = 3$, $A_2 = 1$, $\phi_1 = 2$, and $\phi_2 = 1$. The noises $\{v_1(n)\}$ and $\{v_2(n)\}$ are uncorrelated zero-mean white Gaussian processes with variances $\sigma_1^2$ and $\sigma_2^2$, respectively. Unless stated otherwise, $N = 10$, $\sigma_1^2 = 0.5\sigma_2^2$, and the frequency as well as the power ratio are assumed known. All the results provided are averages of 500 independent runs.

In the first test, the sampling frequency is properly chosen such that $\omega = \pi/2$. Figure 1 shows the MSEs of the four estimators versus $\sigma_1^2$. It is seen that the performance of all estimators is very close to the CRLB when $\sigma_1^2$ is sufficiently small. Moreover, the analytical variances of the ML and LP methods are verified. We observe that the former is an optimum estimator, while the difference between (29) and the CRLB is very small. This experiment is repeated with $N = 100$, and the results are shown in Figure 2. The findings are similar to those of Figure 1, except that (29) now equals the CRLB, confirming the asymptotic optimality of the LP algorithm. We repeat the test with $\sigma_1^2 = 0.1\sigma_2^2$, and Figure 3 shows the MSE performance. Again, the findings are similar to those of Figure 1, but the MSEs of the four methods are close to the CRLB only for smaller noise conditions, namely, when $\sigma_1^2 < 0.1$.

Figure 1. Mean square phase-difference estimation performance versus $\sigma_1^2$ at $\omega = \pi/2$, $\sigma_1^2 = 0.5\sigma_2^2$, and $N = 10$.

Figure 2. Mean square phase-difference estimation performance versus $\sigma_1^2$ at $\omega = \pi/2$, $\sigma_1^2 = 0.5\sigma_2^2$, and $N = 100$.

Figure 3. Mean square phase-difference estimation performance versus $\sigma_1^2$ at $\omega = \pi/2$, $\sigma_1^2 = 0.1\sigma_2^2$, and $N = 10$.
In the second test, $\omega = 0.3$ is employed. Note that the LP estimator and UQDE are not designed to work at this frequency. The MSEs of the ML method and MSAL are plotted in Figure 4. We see that the ML algorithm gives optimum performance for sufficiently small noise conditions and outperforms the MSAL, particularly when $\sigma_1^2 < 0.1$, indicating that the latter is a biased estimator. Note that when the proposed conditions of the MSAL are satisfied, the bias will be eliminated.

Figure 4. Mean square phase-difference estimation performance versus $\sigma_1^2$ at $\omega = 0.3$, $\sigma_1^2 = 0.5\sigma_2^2$, and $N = 10$.
In the third test, we compare the performance of the ML method and MSAL for $\omega \in [0.1, 0.9]\pi$ at $\text{SNR}_i = (\alpha_1^2 + \alpha_2^2)/(2\sigma_i^2) = 0.01$, $i = 1, 2$, and the results are shown in Figure 5. We again see the optimality of the ML scheme at different frequencies, while the performance of the MSAL is close to the CRLB only when $\omega$ is around $0.5\pi$. Nevertheless, the MSE of the MSAL can approach the CRLB for all admissible frequencies when proper conditions are satisfied.

Figure 5. Mean square phase-difference estimation performance versus $\omega$.
In the final test, we consider the scenario where $\omega = 0.3$ and $r = 0.5$ are unknown. Figure 6 shows the MSE performance of the ML method versus $\sigma_1^2$. Note that known frequency information is required by the remaining three schemes, and thus their results are not included. It is seen that the results are similar to those of Figure 4, although larger estimation errors occur at smaller $\sigma_1^2$. Moreover, we observe that the CRLB with unknown $\Omega$ and $r$ is only slightly larger than (12). Nevertheless, it is worth pointing out that the proposed algorithms are more computationally demanding than the UQDE and MSAL. The ML estimator requires a one-dimensional search in (19). Moreover, both the ML and LP solutions need matrix operations involving matrices of size around $N \times N$. As a result, the complexity of the proposed methods increases with $N$.

Figure 6. Mean square phase-difference estimation performance versus $\sigma_1^2$ with unknown $\omega$ and $r$.

5 Conclusion

Two algorithms for accurate phase-difference estimation between two discrete-time real-valued sinusoids with common frequency have been developed and analyzed. The first computes the ML solution for the phase at each channel output, and the phase-shift estimate is given by the difference between the two ML phase estimates. We have also extended the method to scenarios where the frequency and/or noise powers are unknown. When the discrete-time frequency is properly chosen as $\pi/2$, one channel output can be represented as a linear combination of the other, where the corresponding LP coefficients have a simple relationship with the phase-difference parameter. The second estimator utilizes this LP relationship and applies weighted least squares for phase-difference estimation. The variance expressions for the two methods are derived and confirmed by computer simulations. It is shown that the ML and LP estimators perform comparably with conventional methods when the frequency equals $\pi/2$. For other frequencies, even if they are unknown, the ML algorithm can still achieve optimum performance when the noise is sufficiently small. Nevertheless, the proposed algorithms are more computationally demanding, particularly for larger data lengths.

Appendices

Appendix 1

Based on $\theta = \tan^{-1}(\beta_1/\alpha_1) - \tan^{-1}(\beta_2/\alpha_2)$, we utilize the first-order perturbation analysis to obtain
$$\begin{aligned} d\theta &= \frac{\partial \tan^{-1}(\beta_1/\alpha_1)}{\partial \alpha_1}d\alpha_1 + \frac{\partial \tan^{-1}(\beta_1/\alpha_1)}{\partial \beta_1}d\beta_1 - \frac{\partial \tan^{-1}(\beta_2/\alpha_2)}{\partial \alpha_2}d\alpha_2 - \frac{\partial \tan^{-1}(\beta_2/\alpha_2)}{\partial \beta_2}d\beta_2 \\ &= \frac{\cos(\phi_1)d\beta_1 - \sin(\phi_1)d\alpha_1}{A_1} - \frac{\cos(\phi_2)d\beta_2 - \sin(\phi_2)d\alpha_2}{A_2} \\ \Rightarrow \Delta\theta &\approx \frac{\cos(\phi_1)\Delta\beta_1 - \sin(\phi_1)\Delta\alpha_1}{A_1} - \frac{\cos(\phi_2)\Delta\beta_2 - \sin(\phi_2)\Delta\alpha_2}{A_2}. \end{aligned}$$
(31)
The variance is computed by squaring both sides of (31) and taking the expected value. Since $\hat{\boldsymbol{\kappa}}_1 = [\hat{\alpha}_1 \;\; \hat{\beta}_1]^T$ and $\hat{\boldsymbol{\kappa}}_2 = [\hat{\alpha}_2 \;\; \hat{\beta}_2]^T$ are uncorrelated, we have
$$\begin{aligned} \text{var}(\hat{\theta}) &= E\left\{\left(\frac{\cos(\phi_1)\Delta\beta_1 - \sin(\phi_1)\Delta\alpha_1}{A_1}\right)^2\right\} + E\left\{\left(\frac{\cos(\phi_2)\Delta\beta_2 - \sin(\phi_2)\Delta\alpha_2}{A_2}\right)^2\right\} \\ &= \frac{\sin^2(\phi_1)\text{cov}(\hat{\alpha}_1, \hat{\alpha}_1) + \cos^2(\phi_1)\text{cov}(\hat{\beta}_1, \hat{\beta}_1) - 2\sin(\phi_1)\cos(\phi_1)\text{cov}(\hat{\alpha}_1, \hat{\beta}_1)}{A_1^2} \\ &\quad + \frac{\sin^2(\phi_2)\text{cov}(\hat{\alpha}_2, \hat{\alpha}_2) + \cos^2(\phi_2)\text{cov}(\hat{\beta}_2, \hat{\beta}_2) - 2\sin(\phi_2)\cos(\phi_2)\text{cov}(\hat{\alpha}_2, \hat{\beta}_2)}{A_2^2}, \end{aligned}$$
(32)

where $\text{cov}(l_1, l_2)$ denotes the covariance of $l_1$ and $l_2$.

To determine the covariance matrices of $\hat{\boldsymbol{\kappa}}_i$, denoted by $\mathbf{C}_{\hat{\boldsymbol{\kappa}}_i}$, $i = 1, 2$, we express (1) in the matrix form $\mathbf{x}_i = \boldsymbol{\Xi}\boldsymbol{\kappa}_i + \mathbf{v}_i$, which is linear in $\boldsymbol{\kappa}_i$, where $\mathbf{v}_i = [v_i(1) \;\; v_i(2) \;\; \cdots \;\; v_i(N)]^T$ is the noise vector with zero mean and covariance $\sigma_i^2\mathbf{I}_N$. According to the Gauss-Markov theorem, $\mathbf{C}_{\hat{\boldsymbol{\kappa}}_i}$ is simply
$$\mathbf{C}_{\hat{\boldsymbol{\kappa}}_i} = \sigma_i^2\left(\boldsymbol{\Xi}^T\boldsymbol{\Xi}\right)^{-1}, \quad i = 1, 2.$$
(33)

Substituting the corresponding entries of (33) into (32) yields (12).

Appendix 2

We first compute the Fisher information matrix for $\boldsymbol{\kappa} = [\alpha_1 \;\; \beta_1 \;\; \alpha_2 \;\; \beta_2]^T$, denoted by $\mathbf{F}$. As the signal model (1) is linear in the parameters, the $(i,j)$ entry of $\mathbf{F}$ in the presence of white Gaussian noise is
$$[\mathbf{F}]_{ij} = {\mathbf{s}'_i}^T\mathbf{C}^{-1}\mathbf{s}'_j,$$
(34)
where $\mathbf{s} = [\mathbf{s}_1^T \;\; \mathbf{s}_2^T]^T$ with $\mathbf{s}_1 = [s_1(1) \;\cdots\; s_1(N)]^T$ and $\mathbf{s}_2 = [s_2(1) \;\cdots\; s_2(N)]^T$, and $\mathbf{s}'_i$ stands for the partial derivative of $\mathbf{s}$ with respect to the $i$-th parameter of $\boldsymbol{\kappa}$, $i = 1, \dots, 4$. Here, $\mathbf{C}$ is the noise covariance matrix of $[\mathbf{x}_1^T \;\; \mathbf{x}_2^T]^T$, which has the form
$$\mathbf{C} = \begin{bmatrix} \sigma_1^2\mathbf{I}_N & \mathbf{0}_{N\times N} \\ \mathbf{0}_{N\times N} & \sigma_2^2\mathbf{I}_N \end{bmatrix}.$$
(35)
Using (34) and (35), we obtain:
$$\mathbf{F} = \begin{bmatrix} \sigma_1^{-2}\boldsymbol{\Gamma} & \mathbf{0}_{2\times 2} \\ \mathbf{0}_{2\times 2} & \sigma_2^{-2}\boldsymbol{\Gamma} \end{bmatrix},$$
(36)
where
$$\boldsymbol{\Gamma} = \begin{bmatrix} \sum_{n=1}^{N}\sin^2(n\omega) & \sum_{n=1}^{N}\sin(n\omega)\cos(n\omega) \\ \sum_{n=1}^{N}\sin(n\omega)\cos(n\omega) & \sum_{n=1}^{N}\cos^2(n\omega) \end{bmatrix}.$$
(37)
Since $\boldsymbol{\Gamma} = \boldsymbol{\Xi}^T\boldsymbol{\Xi}$, the inverse of (36) is
$$\mathbf{F}^{-1} = \begin{bmatrix} \sigma_1^2\left(\boldsymbol{\Xi}^T\boldsymbol{\Xi}\right)^{-1} & \mathbf{0}_{2\times 2} \\ \mathbf{0}_{2\times 2} & \sigma_2^2\left(\boldsymbol{\Xi}^T\boldsymbol{\Xi}\right)^{-1} \end{bmatrix}.$$
(38)
Let $g(\boldsymbol{\kappa}) = \theta = \tan^{-1}(\beta_1/\alpha_1) - \tan^{-1}(\beta_2/\alpha_2)$. With the use of the transformation formula, the CRLB for the phase-difference, denoted by $\text{CRLB}(\theta)$, is
$$\text{CRLB}(\theta) = \left(\frac{\partial g(\boldsymbol{\kappa})}{\partial \boldsymbol{\kappa}}\right)^T\mathbf{F}^{-1}\,\frac{\partial g(\boldsymbol{\kappa})}{\partial \boldsymbol{\kappa}},$$
(39)
where
$$\frac{\partial g(\boldsymbol{\kappa})}{\partial \boldsymbol{\kappa}} = \left[\frac{-\beta_1}{\alpha_1^2 + \beta_1^2} \;\; \frac{\alpha_1}{\alpha_1^2 + \beta_1^2} \;\; \frac{\beta_2}{\alpha_2^2 + \beta_2^2} \;\; \frac{-\alpha_2}{\alpha_2^2 + \beta_2^2}\right]^T = \left[\frac{-\sin(\phi_1)}{A_1} \;\; \frac{\cos(\phi_1)}{A_1} \;\; \frac{\sin(\phi_2)}{A_2} \;\; \frac{-\cos(\phi_2)}{A_2}\right]^T.$$
(40)

Substituting (40) into (39) yields (12).
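As a numerical cross-check of this appendix, the quadratic form (39)-(40) can be evaluated directly using the explicit 2x2 inverse of $\boldsymbol{\Gamma}$ and compared against the closed form (12). The helper name is illustrative; note that negating one channel's block of (40) leaves the quadratic form unchanged, since $\mathbf{F}^{-1}$ is block-diagonal:

```python
import math

def crlb_theta(N, omega, A1, A2, phi1, phi2, var1, var2):
    """CRLB(theta) via (39)-(40): sum of per-channel quadratic forms
    with sigma_i^2 * Gamma^{-1}, exploiting the block-diagonal F^{-1}."""
    n_range = range(1, N + 1)
    ss = sum(math.sin(omega * n) ** 2 for n in n_range)
    cc = sum(math.cos(omega * n) ** 2 for n in n_range)
    sc = sum(math.sin(omega * n) * math.cos(omega * n) for n in n_range)
    det = ss * cc - sc * sc
    bound = 0.0
    for A, phi, var in ((A1, phi1, var1), (A2, phi2, var2)):
        g = (-math.sin(phi) / A, math.cos(phi) / A)  # channel block of (40)
        # g^T Gamma^{-1} g with Gamma^{-1} = [[cc, -sc], [-sc, ss]] / det
        quad = (g[0] * g[0] * cc - 2.0 * g[0] * g[1] * sc
                + g[1] * g[1] * ss) / det
        bound += var * quad
    return bound
```

Expanding each per-channel quadratic form reproduces $\sigma_i^2 C_i/(A_i^2 C)$, so the returned value agrees with (12) to machine precision.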

Authors’ Affiliations

(1)
Department of Electronic Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong
