Open Access

A general solution to the continuous-time estimation problem under widely linear processing

  • Ana María Martínez-Rodríguez1,
  • Jesús Navarro-Moreno1,
  • Rosa María Fernández-Alcalá1 and
  • Juan Carlos Ruiz-Molina1
EURASIP Journal on Advances in Signal Processing 2011, 2011:119

https://doi.org/10.1186/1687-6180-2011-119

Received: 22 November 2010

Accepted: 28 November 2011

Published: 28 November 2011

Abstract

A general problem of continuous-time linear mean-square estimation of a signal under widely linear processing is studied. The main characteristic of the proposed estimator is the generality of its formulation, which is applicable to a broad variety of situations: finite or infinite intervals, different types of noise (additive and/or multiplicative, white or colored, noiseless observation data, etc.), the three standard estimation problems (smoothing, filtering, and prediction), and the estimation of functionals of the signal of interest (derivatives, integrals, etc.). Its feasibility from a practical standpoint and its better performance with respect to the conventional estimator obtained from strictly linear processing are also illustrated.

Keywords

Continuous-time processing; Linear mean-square estimation problem; Widely linear processing

1 Introduction

In most engineering systems, the state variables represent some physical quantity that is inherently continuous in time (ground-motion parameters, atmospheric or oceanographic flow, turbulence, etc.). Thus, the formulation of realistic models to represent a signal processing problem is one of the major challenges facing engineers and mathematicians today. Given that in many problems the incoming information is constituted by continuous-time series, the use of a continuous-time model will be a more realistic description of the underlying phenomena we are trying to model. For example, [1] gives techniques of continuous-time linear system identification, and [2] illustrates the use of stochastic differential equations for modeling dynamical phenomena (see also the references therein). Continuous-time processing is especially suitable when data are recorded continuously, as an approximation for discrete-time sampled systems when the sampling rate is high [3], and when data are sampled irregularly [4]. It is also necessary in applications that require high-frequency signal processing and/or very fast initial convergence rates. Analog realizations also result in a smaller integrated circuit, lower power dissipation, and freedom from clocking and aliasing effects [5, 6]. In such cases, the continuous-time solution becomes an adequate alternative to the discrete one since it allows real-time processing and alleviates the overload problem, assuring more reliable overall operation of the system [7]. Moreover, the analytical tools developed in the continuous-time case might bring new insights to the analysis that are not possible with their discrete-time counterparts. In particular, [8] illustrates this fact in the problem of sorting continuous-time signals, [9] in the problem of nonfragile H∞ filtering for a class of continuous-time fuzzy systems, and [10] in the study of the behavior of the continuous-time spectrogram.

The estimation problem is a topic of great interest in the statistical signal processing community. This problem has traditionally been solved by using conventional or strictly linear (SL) processing. For instance, [11, 12] deal with classical estimation problems (e.g., the Kalman-Bucy filter) under a real formalism, [13] tackles similar problems in the complex field, and [14] uses factorizable kernels for solving such problems. The main characteristic of the SL treatment is that it takes into account only the autocorrelation of the complex-valued observation process, ignoring its complementary function. That is, the only information considered for building the estimator is that supplied by the observation process, while the information provided by its conjugate is ignored. Cambanis [15] provided the most general solution to the problem of continuous-time linear mean-square (MS) estimation of a complex-valued signal on the basis of noisy complex-valued observations under SL processing. In fact, Cambanis's approach is valid for any type of second-order signals and observation intervals, and it is not necessary to impose conditions such as stationarity, Gaussianity or continuity on the processes involved, nor restrictions to finite intervals.

Recently, it has been proved that treating the linear MS estimation problem through widely linear (WL) processing, which takes into account both the observation process and its conjugate, leads to estimators with better performance than the SL ones in the sense that they show a lower error variance. Specifically, and from a discrete-time perspective, the WL regression problem was tackled in [16], the prediction problem in a complex autoregressive modeling setting was addressed in [17, 18] and later extended to autoregressive moving-average prediction in [19]. Also, an augmented1 affine projection algorithm based on the full second-order statistical information has newly been devised in [20]. Among the wide range of applications of WL processing are the analysis of communication systems [21], ICA models [22], the quaternion domain [23], adaptive filters [24–26], etc.

The study of continuous-time estimation problems is also interesting because it provides precise information on some structural properties of the system under study [8, 9]. For instance, an explicit expression of the MS error associated with the optimal estimator can be derived in this approach (e.g., see [12, 13]). Notice that this well-known result is independent of the number of available observations. In addition, the continuous-time solution becomes an excellent alternative to the discrete one when the number of available data is large. Discrete-time solutions involve the explicit calculation of matrix inverses whose dimensions depend on the number of observations (see, e.g., [16]). In practice, the process would be cumbersome or even prohibitive if this number were large (as occurs, e.g., in a major earthquake where the workload of the system increases suddenly).

The WL estimation problem under a continuous-time formulation was initially dealt with in [27, 28] and [29]. More precisely, the particular problem of estimating a complex signal in additive complex white noise is solved in [27] or [28] through an improper version of the Karhunen-Loève expansion. A general result comparing the performance of WL and SL processing is also presented, in which it is shown that the performance gain, measured by MS error, can be as large as a factor of 2. Finally, [29] provides an extension of the previous problem to the case in which the additive noise is made up of the sum of a colored component plus a white one. The handicaps of both solutions are: i) they are limited to MS continuous signals, ii) the signals must be defined on finite intervals, iii) the model for the observation process involves additive noise (white noise in the case of [27] and [28]), and iv) they are only devoted to solving a smoothing problem.

In this paper, we address a more general estimation problem than those solved in [27–29]. For that, we consider the general formulation of the estimation problem given in [15], and we solve it by using WL processing. The generality of this formulation allows the solution of a wide range of problems, including general second-order processes, infinite observation intervals, additive and/or multiplicative noise, noiseless observations, estimation of functionals of the signal, etc. It also brings under a single framework three different kinds of estimation problems: prediction, filtering, and smoothing. Hence, all the above handicaps are avoided with the proposed solution. Specifically, we present two forms of the WL estimator depending on the nature, either proper or improper, of the observation process. Then, we state conditions under which such an estimator can be expressed in closed form. Closed-form expressions for the estimator are convenient from a computational point of view [11, 12, 15]. Three numerical examples show that the proposed solution is feasible and demonstrate the aforementioned generality. The first compares the performance of the WL estimator with that of the SL one by considering an observation process defined on an infinite interval and with multiplicative noise. The second concerns the problem of estimating a signal in nonwhite noise and illustrates its application with discrete data. Lastly, the third example considers the earthquake ground-motion representation problem and illustrates a possible real application.

The rest of this paper is organized as follows. In Section 2, we review the SL solution proposed in [15]. Section 3 presents the main results: we derive the new estimator and its associated MS error, prove its better performance relative to the SL estimator, and give conditions to obtain a closed form of the WL estimator. The results obtained in this section are first stated and then proved rigorously in an Appendix. This section also includes a brief description of how the technique can be implemented in practice. Finally, Section 4 contains three numerical examples illustrating the application of the suggested estimator, and a performance comparison between WL and SL estimation is carried out.

Throughout this paper, all the processes involved are complex, measurable and of second order. Next, we introduce the basic notation. The real part of a complex number will be denoted by ℜ{·}, the complex conjugate by (·)*, the conjugate transpose by (·)^H, and the orthogonality of two complex-valued random variables, say a and b, by a ⊥ b. Also, a.s. stands for almost surely and a.e. for almost everywhere.

2 Strictly Linear Estimation

A core problem in signal processing theory is the estimation of a signal from the information supplied by another signal. A very general formulation of this problem was provided by Cambanis in [15]. Specifically, let F and G be two functionals and {s(t), t ∈ S} be a random signal, where S is any interval of the real line. Suppose that s(t) is not observed directly and that we observe the process

x(t) = F(s(τ), τ ∈ S; t),   t ∈ T

where T is any interval of the real line. Based on the observations {x(t), t ∈ T}, the aim is to estimate a functional of s(t)

ξ(t) = G(s(τ), τ ∈ S; t),   t ∈ S′

S′ being any interval of the real line.

As noted above, this formulation is very general and contains as particular cases a great number of classical estimation problems, such as estimation of signals in additive and/or multiplicative noise, estimation of signals observed through random channels, random channel identification, etc. [15]. It can also be adapted to treat filtering, prediction, and smoothing problems.

In order to proceed with the building of the Cambanis estimator, the second-order statistics of the processes involved are needed. Let r_x(t, τ) and r_ξ(t, τ) be the respective autocorrelation functions of x(t) and ξ(t). Let c_x(t, τ) = E[x(t)x(τ)] denote the complementary autocorrelation function of x(t). Moreover, we denote the cross-correlation functions of ξ(t) with x(t) and x*(t) by ρ_1(t, τ) = E[ξ(t)x*(τ)] and ρ_2(t, τ) = E[ξ(t)x(τ)], respectively.
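As a toy illustration of these augmented second-order statistics, the sketch below estimates r_x and c_x from Monte Carlo realizations of a phase-rotated real signal, x(t) = e^{jθ}s(t) with θ ~ N(0, 1); this model is of the kind used in the examples of Section 4, but the covariance chosen for s here is our own illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 200_000, 16                    # Monte Carlo realizations, grid points
t = np.linspace(0.0, 1.0, N)

# Illustrative improper observation: x(t) = exp(j*theta) s(t), with s a real
# zero-mean Gaussian process and theta ~ N(0, 1) independent of s.
r_s = np.exp(-np.abs(t[:, None] - t[None, :]))    # assumed covariance of s
L = np.linalg.cholesky(r_s + 1e-10 * np.eye(N))
s = rng.standard_normal((M, N)) @ L.T
x = np.exp(1j * rng.standard_normal(M))[:, None] * s

# Sample versions of the augmented second-order statistics:
r_x = (x.T @ x.conj()) / M            # autocorrelation   E[x(t) x*(tau)]
c_x = (x.T @ x) / M                   # complementary     E[x(t) x(tau)]
# For this model, r_x = r_s and c_x = E[exp(2j*theta)] r_s = exp(-2) r_s.
```

The complementary function c_x is clearly nonzero here, i.e., the observation process is improper.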

The weakness of the hypotheses imposed on the processes and the possibility of considering infinite intervals force us to construct measures other than the Lebesgue measure. To avoid an excess of mathematical formalism, we do not follow the Cambanis exposition literally. Changing the measure is equivalent to searching for a function F(t) such that

∫_T r_x(t, t) F(t) dt < ∞
(1)

This function F(t) can be selected by a trial-and-error method or by using the procedure given in [30], and in addition, it does not have to be unique. This freedom of choice is to be exploited appropriately in every particular case under consideration. For example, if T = [T_i, T_f] and x(t) is MS continuous, then we can select F(t) = 1. Some practical examples can be consulted in [31].

Condition (1) guarantees the existence of the eigenvalues and eigenfunctions, {λ_k} and {ϕ_k(t)}, respectively, of r_x(t, τ). Next, we need an orthogonal basis of random variables built from the observation process and the Hilbert space spanned by it. The elements of such a basis take the form ε_k = ∫_T x(t) ϕ_k^*(t) F(t) dt a.s., and let H(ε_k) be the Hilbert space spanned by the random variables {ε_k}. By using SL processing, the estimator ξ̂_SL(t) proposed in [15] is calculated by projecting the process ξ(t) onto H(ε_k). As a consequence, ξ̂_SL(t) is given by

ξ̂_SL(t) = Σ_{k=1}^∞ b_k(t) ε_k,   t ∈ S′

with b_k(t) = (1/λ_k) ∫_T ρ_1(t, τ) ϕ_k(τ) F(τ) dτ. Moreover, its associated MS error is

P_SL(t) = E[|ξ(t) − ξ̂_SL(t)|²] = r_ξ(t, t) − Σ_{k=1}^∞ λ_k b_k(t) b_k^*(t),   t ∈ S′.
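In a finite-dimensional analogue (a random observation vector standing in for {x(t)}, with sums in place of integrals — our own sketch, not the paper's continuous setting), the SL estimator and its MS error can be computed exactly from the eigendecomposition of r_x; the eigen-form coincides with the familiar LMMSE solution ρ_1 r_x^{-1} x:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12

# Discrete analogue of Section 2: observation vector x, scalar target xi.
# Random jointly Gaussian model with known second-order statistics.
A = rng.standard_normal((N + 1, N + 1)) + 1j * rng.standard_normal((N + 1, N + 1))
K = A @ A.conj().T                 # joint covariance of [xi, x]
r_xi = K[0, 0].real                # E[|xi|^2]
rho1 = K[0, 1:]                    # E[xi x*(tau)]
R = K[1:, 1:]                      # E[x(t) x*(tau)]

lam, Phi = np.linalg.eigh(R)       # eigenvalues/eigenvectors of r_x
g = (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)) / np.sqrt(2)
x = A[1:] @ g                      # one realization of the observations

eps = Phi.conj().T @ x             # eps_k = <x, phi_k>: orthogonal basis
b = (rho1 @ Phi) / lam             # b_k = (1/lambda_k) <rho1(t, .), phi_k>
xi_sl = b @ eps                    # SL estimate: sum_k b_k eps_k
P_sl = r_xi - np.sum(lam * np.abs(b) ** 2)   # its MS error
```

Because r_x^{-1} = Φ Λ^{-1} Φ^H, the series form Σ b_k ε_k equals ρ_1 r_x^{-1} x, and P_SL equals the Schur complement r_ξ − ρ_1 r_x^{-1} ρ_1^H.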

3 Widely Linear Estimation

In general, complex-valued random processes are improper [24], and then the appropriate processing is WL processing. In this section, we provide a new estimator, ξ̂_WL(t), by using WL processing and calculate its corresponding MS error, P_WL(t) = E[|ξ(t) − ξ̂_WL(t)|²]. To this end, we consider, together with the information supplied by the observation process, x(t), the information provided by its conjugate, x*(t). Both processes are stacked in a vector giving rise to the augmented observation process, x(t) = [x(t), x*(t)]′, whose autocorrelation function is denoted by r_x(t, τ) = E[x(t) x^H(τ)]. Notice that ξ̂_WL(t) is called the WL estimator because, in contrast with the conventional estimator, it depends linearly not only on x(t) but also on x*(t).

In order to find an explicit form of the estimator and its error, we have to distinguish two possibilities in relation to the nature of x(t): proper or improper. If x(t) is proper, i.e., c_x(t, τ) = 0, then the expression for the estimator is

ξ̂_WL(t) = ξ̂_SL(t) + Σ_{k=1}^∞ b̄_k(t) ε_k^*,   t ∈ S′
(2)

where b̄_k(t) = (1/λ_k) ∫_T ρ_2(t, τ) ϕ_k^*(τ) F(τ) dτ, and with associated MS error

P_WL(t) = P_SL(t) − Σ_{k=1}^∞ λ_k b̄_k(t) b̄_k^*(t),   t ∈ S′
(3)

Expressions (2) and (3) are derived in Theorem 1 in the "Appendix". They reduce to the SL ones: if ρ_2(t, τ) = 0, then ξ̂_WL(t) = ξ̂_SL(t) and P_WL(t) = P_SL(t).

On the other hand, in the improper case (c_x(t, τ) ≠ 0), and unlike the proper case, it is not as straightforward to calculate an explicit and easily implemented expression of ξ̂_WL(t). The main difference between both cases is that now the members of the set {ε_k} ∪ {ε_k^*} are not orthogonal. In fact, we have

E[ε_k ε_l] = ∫_T ∫_T c_x(t, τ) ϕ_k^*(t) ϕ_l^*(τ) F(t) F(τ) dt dτ ≠ 0,   k ≠ l

Thus, the goal will be to calculate an orthogonal basis of the Hilbert space generated by {ε_k} and {ε_k^*}, H(ε_k, ε_k^*), which avoids this serious problem. This objective is attained in Lemma 1 in the "Appendix" by means of the eigenvalues, {α_k}, and the corresponding eigenfunctions, φ_k(t), of the augmented autocorrelation function r_x(t, τ). Following a reasoning similar to [28], it can be shown that the eigenfunctions φ_k(t) have the particular structure φ_k(t) = [f_k(t), f_k^*(t)]′ and are orthonormal in the sense of (10). The elements of this new set are real random variables of the form

w_k = ∫_T φ_k^H(t) x(t) F(t) dt = 2ℜ{∫_T x(t) f_k^*(t) F(t) dt}   a.s.
(4)
verifying that E[w_n w_m] = α_n δ_nm. By using this new set of variables, we can obtain the WL estimator explicitly

ξ̂_WL(t) = Σ_{k=1}^∞ ψ_k(t) w_k,   t ∈ S′
(5)

where ψ_k(t) = (1/α_k) (∫_T ρ_1(t, τ) f_k(τ) F(τ) dτ + ∫_T ρ_2(t, τ) f_k^*(τ) F(τ) dτ), and its corresponding MS error is

P_WL(t) = r_ξ(t, t) − Σ_{k=1}^∞ α_k ψ_k(t) ψ_k^*(t),   t ∈ S′
(6)

Theorem 2 in the "Appendix" proves these assertions.
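The structure φ_k(t) = [f_k(t), f_k^*(t)]′ and the realness of the variables w_k can be checked numerically in a finite-dimensional analogue. The sketch below (our own, assuming the augmented covariance has simple eigenvalues) uses the fact that if r_x v = αv, then swapping the two halves of v* gives another eigenvector for the same α, so each eigenvector can be rephased into the [f; f*] form:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6

# A random improper augmented covariance, built so that it is exactly of the
# block form [[R, C], [C*, R*]]: start from a real covariance of [Re x; Im x].
B = rng.standard_normal((2 * N, 2 * N))
S = B @ B.T
T2 = np.block([[np.eye(N), 1j * np.eye(N)],
               [np.eye(N), -1j * np.eye(N)]])      # [Re x; Im x] -> [x; x*]
Rxx = T2 @ S @ T2.conj().T                          # augmented autocorrelation

alpha, V = np.linalg.eigh(Rxx)                      # alpha_k are real

# If Rxx v = alpha v, then J v* (J swaps the two halves of v) is also an
# eigenvector with the same alpha; for a simple alpha, v can therefore be
# rephased so that J v* = v, i.e., so that v = [f; f*].
J = np.block([[np.zeros((N, N)), np.eye(N)],
              [np.eye(N), np.zeros((N, N))]])
Phi = np.empty_like(V)
for k in range(2 * N):
    v = V[:, k]
    c = v.conj() @ (J @ v.conj())                   # J v* = c v, with |c| = 1
    Phi[:, k] = np.sqrt(c) * v

xr = rng.standard_normal(2 * N)                     # a sample of [Re x; Im x]
w = Phi.conj().T @ (T2 @ xr)                        # w_k = phi_k^H x: all real
```

With this structure, w_k = φ_k^H x = 2ℜ{f_k^H x} is real, and Φ^H r_x Φ is diagonal, matching E[w_n w_m] = α_n δ_nm.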

From a practical standpoint, it would be interesting to obtain a closed form for ξ̂_WL(t). For that, it is necessary to restrict the kind of processes considered so far. Theorem 3 in the "Appendix" gives conditions under which the estimator can be expressed in the following way

ξ̂_WL(t) = ∫_T h_1(t, τ) x(τ) F(τ) dτ + ∫_T h_2(t, τ) x*(τ) F(τ) dτ   a.s.
(7)
for some square-integrable functions h_1(t, ·) and h_2(t, ·). Expression (7) is computationally more amenable than (2) or (5). The key question is whether the conditions of Theorem 3 are fulfilled. An example of the latter is the classical problem of estimating an improper complex-valued random signal in colored noise with an additive white part, addressed in [29]. Specifically, the observation process considered is

x(t) = s(t) + n_c(t) + v(t),   T_i ≤ t ≤ T_f < ∞

where s(t) is an improper complex-valued MS continuous random signal, the colored noise component, n_c(t), is a complex-valued MS continuous stochastic process uncorrelated with v(t), and v(t) is a complex white noise uncorrelated with the signal s(t). Note that the formulation of the estimation problem treated in [29] is much more restrictive than that studied in the present paper.

Finally, a remarkable advantage of the proposed estimator appears when ξ(t) is a real process and x(t) is still complex. In this case, ξ̂_WL(t) is real too. However, there is no reason for the SL estimator to be real, which is not convenient when we estimate a real functional. Moreover, if x(t) is proper, then ξ̂_WL(t) = 2ℜ{ξ̂_SL(t)} and its associated MS error is

P_WL(t) = r_ξ(t, t) − 2 Σ_{k=1}^∞ λ_k b_k(t) b_k^*(t),   t ∈ S′

which shows that the error reduction achieved is twice that of the SL estimator.

Notice also that the Hilbert space approach we have followed to derive the WL estimators allows us to give an alternative proof of the well-known fact that WL estimation outperforms SL estimation. The estimator ξ̂_WL(t) is obtained by projecting the functional ξ(t) onto the Hilbert space H(ε_k, ε_k^*). Observe that H(ε_k) ⊆ H(ε_k, ε_k^*), and then, trivially by the projection theorem of Hilbert spaces2 [[12], Proposition VII.C.1], we have P_WL(t) ≤ P_SL(t), for t ∈ S′; hence, the WL estimator outperforms the SL one as regards its MS error.
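The inclusion H(ε_k) ⊆ H(ε_k, ε_k^*), and hence P_WL ≤ P_SL, can be illustrated in a finite-dimensional analogue by comparing the SL error r_ξ − ρ_1 r_x^{-1} ρ_1^H with its augmented counterpart (our own sketch, with randomly generated improper statistics):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10

# Joint improper second-order description of (xi, x): build the covariance of
# the augmented vector [xi, x, xi*, x*] from a random real covariance.
B = rng.standard_normal((2 * (N + 1), 2 * (N + 1)))
S = B @ B.T
T2 = np.block([[np.eye(N + 1), 1j * np.eye(N + 1)],
               [np.eye(N + 1), -1j * np.eye(N + 1)]])
K = T2 @ S @ T2.conj().T

r_xi = K[0, 0].real                      # E[|xi|^2]
rho1 = K[0, 1:N + 1]                     # E[xi x*(tau)]
R = K[1:N + 1, 1:N + 1]                  # E[x(t) x*(tau)]
obs = np.r_[1:N + 1, N + 2:2 * (N + 1)]  # indices of [x; x*] in the vector
Ra = K[np.ix_(obs, obs)]                 # augmented autocorrelation of [x; x*]
ra = K[0, obs]                           # [rho1, rho2]

P_sl = (r_xi - rho1 @ np.linalg.solve(R, rho1.conj())).real
P_wl = (r_xi - ra @ np.linalg.solve(Ra, ra.conj())).real
```

The WL error uses strictly more regressors (x and x*), so it can never exceed the SL error.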

3.1 Practical Implementation of the Estimator

We enumerate the steps needed to implement the estimation technique proposed for the estimator (5). Nevertheless, some comments are made on how the algorithm can be adapted to obtain (2). Moreover, the role played by (7) becomes clear at the end of the procedure. The steps are the following:

  1) Determine the augmented statistics of the processes involved. In some practical applications, the second-order structure is initially known. In fact, it may be derived from experimental measurements or mathematical models. For instance, the information-bearing signal in the communications problem is purposely designed to have desired statistical properties [32]. Other examples can be consulted in [33, 34].

  2) Select a function F(t) such that condition (1) holds. As noted above, this function F(t) can be selected by a trial-and-error method or by using the procedure given in [30]. Notice that this function is not unique and, in general, many specifications are possible.

  3) Obtain the eigenvalues {α_k} and eigenfunctions {φ_k(t)} associated with r_x(t, τ). In general, the determination of eigenvalues and eigenfunctions is, except for a few cases, a very involved problem, if not an impossible one. However, we can avoid the calculation of the true eigenvalues and eigenfunctions by means of the Rayleigh-Ritz (RR) method, a procedure for numerically solving operator equations that involves only elementary calculus and simple linear algebra (see [31, 35] for a detailed study of the practical application of the RR method).

  4) Truncate expressions (5) and (6) at n terms and substitute, if necessary, the true eigenvalues and eigenfunctions by the RR ones. This truncated version of the estimator, which is in fact a suboptimum estimator, can be calculated via expression (7) with

  h_1(t, τ) = Σ_{k=1}^n ψ_k(t) f_k^*(τ)   and   h_2(t, τ) = Σ_{k=1}^n ψ_k(t) f_k(τ),

and where both functions satisfy the conditions of Theorem 3.

Thus, we have replaced the computation of 2n integrals in the truncated version of (5) (or n integrals in the finite series obtained from (2)) by the computation of two integrals in (7); hence, the amount of computation required for a given precision is reduced.

Note that both the precision and the amount of computation required in applying this method depend heavily on the number n. An easy criterion3 for determining an adequate truncation level n without an unnecessary excess of computation can be the following: select n in such a way that Σ_{k=1}^n α_k represents at least 95% of the total variance of the process, Σ_{k=1}^∞ α_k = 2 ∫_T r_x(t, t) F(t) dt (see the proof of Lemma 1 in the "Appendix").
  5) Finally, from a discrete set of observations, x_1, ..., x_N, we can compute the integrals in (7) by means of

  ∫_T h_1(t, τ) x(τ) F(τ) dτ ≈ Σ_{k=1}^N g_1(t, k) x_k,   ∫_T h_2(t, τ) x*(τ) F(τ) dτ ≈ Σ_{k=1}^N g_2(t, k) x_k^*,

where the weights g_1(t, k) and g_2(t, k) are obtained via a suitable method that performs numerical integration with integrands defined at discrete points, for example, the Gill-Miller quadrature method [36] implemented by the subroutine d01gaf from the NAG Toolbox for MATLAB, or the trapezoidal rule (trapz function in MATLAB).

The only changes needed to implement the estimator (2) are in steps 1 and 3, where we have to use r_x(t, τ) and its associated eigenvalues and eigenfunctions, {λ_k} and {ϕ_k(t)}, instead.
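The five steps above can be sketched end-to-end on a toy problem of our own (not one of the paper's examples): estimate ξ(t) = e^{jθ}s_1(t) from x(t) = e^{jθ}s_1(t) + n(t) on a grid, with trapezoidal weights playing the role of F(τ) dτ. Because the second-order statistics of this model are real-valued, the augmented eigenvectors [f_k; f_k^*] split into two real symmetric eigenproblems for r_x ± c_x (f_k real and f_k purely imaginary, respectively); this shortcut is specific to the toy model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy problem: xi(t) = e^{j theta} s1(t), x(t) = xi(t) + n(t), theta ~ N(0,1),
# s1 a Wiener process, n a proper complex colored noise, grid over [0.02, 1].
N = 50
t = np.linspace(0.02, 1.0, N)
h = t[1] - t[0]
q = np.full(N, h); q[0] = q[-1] = h / 2          # trapezoid weights, F(t) = 1

# Step 1: augmented second-order statistics (all real matrices here).
r_s1 = np.minimum(t[:, None], t[None, :])        # Wiener covariance
r_n = 0.3 * np.exp(-5.0 * np.abs(t[:, None] - t[None, :]))
E2 = np.exp(-2.0)                                # E[e^{2j theta}]
r_x, c_x = r_s1 + r_n, E2 * r_s1                 # n is proper: c_n = 0
rho1, rho2 = r_s1, E2 * r_s1                     # E[xi x*], E[xi x]

# Step 3 (RR/Nystrom-style): weighted eigenproblems for r_x + c_x and r_x - c_x.
def weighted_eig(K):
    Msym = np.sqrt(q)[:, None] * K * np.sqrt(q)[None, :]
    a, U = np.linalg.eigh(Msym)
    return a[::-1], (U / np.sqrt(q)[:, None])[:, ::-1]

ap, Up = weighted_eig(r_x + c_x)
am, Um = weighted_eig(r_x - c_x)
alpha = np.concatenate([ap, am])
f = np.hstack([Up / np.sqrt(2), 1j * Um / np.sqrt(2)])
order = np.argsort(alpha)[::-1]
alpha, f = alpha[order], f[:, order]

# Step 4: truncate at 95% of the total variance 2 * int r_x(t,t) F(t) dt.
total = 2.0 * np.sum(q * np.diag(r_x))
n = int(np.searchsorted(np.cumsum(alpha), 0.95 * total)) + 1
a_n, f_n = alpha[:n], f[:, :n]
psi = ((rho1 * q) @ f_n + (rho2 * q) @ f_n.conj()) / a_n

# Step 5: estimate by quadrature; check the truncated error by Monte Carlo.
M = 4000
s1 = rng.standard_normal((M, N)) @ np.linalg.cholesky(r_s1 + 1e-10 * np.eye(N)).T
g = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
nse = g @ np.linalg.cholesky(r_n + 1e-10 * np.eye(N)).T
phase = np.exp(1j * rng.standard_normal(M))[:, None]
xi, x = phase * s1, phase * s1 + nse

w = 2.0 * np.real(x @ (q[:, None] * f_n.conj()))   # w_k: real by construction
xi_hat = w @ psi.T
P_emp = np.mean(np.abs(xi - xi_hat) ** 2, axis=0)
P_theo = t - np.sum(a_n * np.abs(psi) ** 2, axis=1)  # r_xi(t, t) = t here
```

The Monte Carlo mean-square error tracks the truncated expression (6), and the 95% criterion of the footnoted rule fixes n automatically.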

4 Numerical Examples

Three examples illustrate the implementation of the proposed solution and show its capability to solve very general estimation problems. Example 1 shows a situation where true eigenvalues and eigenfunctions are available and aims at comparing the performance of WL processing in relation to SL processing. Example 2 applies the RR method to approximate the eigenexpansion and also illustrates its implementation with discrete data. Finally, Example 3 considers an application in seismic signal processing in which the ground-motion velocity is estimated from seismic ground acceleration data.

4.1 Example 1

Assume that a real waveform s(t) is transmitted over a channel that rotates it by some random phase θ and adds a noise n(t). Unlike [28] and [29], we consider infinite observation intervals and a multiplicative quadratic noise in the observations. More precisely, s(t) is defined on the real line, S = ℝ, with zero mean and r_s(t, τ) = e^{−(t−τ)²}. Thus, the observation process is given by

x(t) = e^{jθ} s(t) n²(t),   t ∈ T = ℝ
(8)

where j = √(−1) and the noise n(t) is a zero-mean Gaussian process with r_n(t, τ) = 3^{−1/2} p^{1/4}(t) p^{1/4}(τ), where p(t) = √(2/π) e^{−2t²} (this type of process is studied in [34]). Three different probability distributions for θ are considered: a uniform distribution on (−σ, σ), a zero-mean normal distribution with variance σ, and a Laplace distribution with zero mean and variance σ. Several choices of σ will be used to show how the advantages of WL processing vary with the level of improperness of the observations. Finally, mutual independence of θ, s(t) and n(t) is assumed. The objective is to estimate ṡ(t), t ∈ [0, 1], where ṡ(t) denotes the MS derivative of s(t).

We first notice that ∫_{−∞}^{∞} r_x(t, t) dt < ∞, where F(t) = 1 has been selected by a trial-and-error method; thus, condition (1) is verified. This example is one of the particular cases where the calculation of the true eigenvalues and eigenfunctions is possible. In fact, r_x(t, τ) has eigenvalues (1 + E[e^{2jθ}]) λ̄_k and (1 − E[e^{2jθ}]) λ̄_k with respective associated eigenfunctions [ϕ_k(t)/√2, ϕ_k(t)/√2]′ and [jϕ_k(t)/√2, −jϕ_k(t)/√2]′, k = 0, 1, ..., where

λ̄_k = √(2/(2+√3)) (1/(2+√3))^k,   ϕ_k(t) = (2^k k!)^{−1/2} 3^{1/4} e^{−(√3−1)t²} H_k(√(2√3) t),

and H_k(t) = (−1)^k e^{t²} (d^k/dt^k) e^{−t²} are the Hermite polynomials. Moreover, we can check that the associated MS errors are the following:

P_SL(t) = 2 − (E[e^{jθ}])² Σ_{k=0}^∞ l_k²(t)/λ̄_k   and   P_WL(t) = 2 − (2(E[e^{jθ}])²/(1 + E[e^{2jθ}])) Σ_{k=0}^∞ l_k²(t)/λ̄_k

with l_k(t) = 3^{−1/2} ∫_T (∂/∂t) r_s(t, τ) p^{1/2}(τ) ϕ_k(τ) dτ.

We use the measure

I = ∫_0^1 P_SL(t) dt / ∫_0^1 P_WL(t) dt

which is closely related to the performance measure considered in [29], to compare the performance of WL processing with that of SL processing. For that, we have truncated the series in P_SL(t) and P_WL(t) at n = 10 terms (this approximate expansion explains 99.86% of the total variance of the process). The performance of both the SL and the WL estimators for n = 10 does not vary substantially from the case n > 10. Figure 1a depicts the measure I as a function of σ for the three probability distributions considered for θ. It turns out that the advantages of WL processing decrease both as σ tends toward zero and as σ tends toward infinity. However, this occurs for different reasons. Another performance measure which helps in the interpretation is
Figure 1

Performance comparison between WL and SL estimation through the measures I (a) and L (b) for a normal phase ( solid line ), a uniform phase ( dashed line ), and a Laplace phase ( bold solid line ).

L = |c_x(t, s)| / |r_x(t, s)|

which, for this example, takes the value L = |E[e^{2jθ}]|. Figure 1b shows the index L as a function of σ for the three probability distributions considered for θ. On the one hand, as σ tends toward zero, the index L tends to one since in that limit the observation process becomes a real signal4. On the other hand, when σ increases, L tends toward zero since x(t) becomes a proper signal. The faster convergence to zero in the normal case and the slower one for the Laplace distribution are also observed.
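Since L = |E[e^{2jθ}]| is just the modulus of the characteristic function of θ evaluated at 2, the three curves of Figure 1b admit closed forms. A small sketch (the distribution parameterizations follow the text: uniform on (−σ, σ), normal and Laplace with variance σ):

```python
import numpy as np

def L_index(sigma, law):
    """|E[e^{2j theta}]| for the three phase distributions of Example 1."""
    s = np.asarray(sigma, dtype=float)
    if law == "uniform":      # uniform on (-sigma, sigma): sin(2s)/(2s)
        return np.abs(np.sinc(2.0 * s / np.pi))   # np.sinc(x) = sin(pi x)/(pi x)
    if law == "normal":       # N(0, sigma): char. fn exp(-sigma w^2 / 2) at w = 2
        return np.exp(-2.0 * s)
    if law == "laplace":      # Laplace with variance sigma: 1/(1 + b^2 w^2), b^2 = s/2
        return 1.0 / (1.0 + 2.0 * s)
    raise ValueError(law)
```

All three indices tend to 1 as σ → 0 and to 0 as σ → ∞, with the normal phase decaying fastest and the Laplace phase slowest, as observed in Figure 1b.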

4.2 Example 2

We study a generalization of the classical communication example addressed in [28] and [29]. Assume that a real waveform s_1(t) is transmitted over a channel that rotates it by a standard normal phase θ_1 and adds a nonwhite noise n(t). More precisely, s_1(t) is defined on the interval [0, 1], with zero mean and r_{s_1}(t, τ) = min{t, τ}. Thus, the observation process is

x(t) = e^{jθ_1} s_1(t) + n(t),   t ∈ [0, 1]

where the nonwhite noise n(t) is obtained from a linear time-invariant system of the form n(t) = e^{jθ_2} ∫_0^1 r_{s_1}(t, τ) s_2(τ) dτ, with θ_2 a zero-mean normal random variable with variance 2 and s_2(t) a standard Wiener process (these types of noises appear in [[37], p. 357]). Moreover, we assume that θ_1, θ_2, s_1(t), and s_2(t) are mutually independent. This example extends the cases studied in [28] and [29] since the noise considered here does not have a white component and thus the previous solutions cannot be applied. The observations have been taken at the time instants i/1000, i = 1, ..., 1000. The objective is to estimate s(t) = e^{jθ_1} s_1(t), t ∈ [0, 1].

We first notice that ∫_0^1 r_x(t, t) dt < ∞, where F(t) = 1 has been selected since the processes involved are continuous; thus, condition (1) is verified. Now, to apply the RR method, we choose the Fourier basis of complex exponentials on [0, 1], {exp{2πjkt}}_{k=−∞}^{∞}. Following the recommendations in step 5 of Section 3.1, we compute the integrals in (7) via the subroutine d01gaf and via trapz (there were no significant differences between the two methods).
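Step 5 with the trapezoidal rule amounts to fixed quadrature weights g(t, k) applied to the 1000 samples. A minimal sketch, with a hypothetical kernel standing in for h_1(t, ·) of (7) (our own choice, purely for the sanity check):

```python
import numpy as np

# Observations at tau_k = k/1000, k = 1..1000, as in this example.
tau = np.arange(1, 1001) / 1000.0
dt = tau[1] - tau[0]
g = np.full(tau.size, dt); g[0] = g[-1] = dt / 2   # trapezoid weights g(t, k)

def quad(h_row, x_obs):
    """~ int h(t, tau) x(tau) dtau computed from the discrete samples."""
    return np.sum(g * h_row * x_obs)

# Sanity check on a smooth integrand with a hypothetical kernel h1(t, .).
h_row = np.exp(-tau)
x_obs = np.sin(2 * np.pi * tau)
val = quad(h_row, x_obs)
```

In practice h_row would be the truncated sum h_1(t, τ) = Σ_{k=1}^n ψ_k(t) f_k^*(τ) evaluated on the sample grid, and a second call with h_2 and x_obs.conj() completes (7).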

Figure 2 depicts the MS error P_WL(t) together with the MS errors of the WL estimator obtained from the RR method with n = 25 and n = 50 terms in step 5 of the algorithm, which have been generated by Monte Carlo simulation (a total of 10,000 simulations were performed). We can see that the method may yield a sufficiently accurate solution with a small number n of terms while reducing the complexity of the problem significantly. Note that a truncated expansion with n = 25 terms explains 88.77% of the total variance of the process, and the expansion with n = 50 terms, 95.81%.
Figure 2

MS errors of the WL estimator (5) (solid line) and the estimator calculated in step 5 with n = 25 terms (dotted line) and with n = 50 terms (dashed line).

4.3 Example 3

The seismic ground acceleration can be represented by a uniformly modulated nonstationary process [33]. The modulated nonstationary process is obtained in the following way

s(t) = a(t) z(t)

where a(t) is a time-modulating function that could be a complex function, and z(t) is a stationary process with zero mean and known second-order moments. In general, the so-called exponential modulating function is adopted [38, 39]. A common choice for z(t) is the standard Ornstein-Uhlenbeck process with a particular version of the exponential modulating function given by a(t) = e^{−t} [[33], p. 38]. Thus, the seismic ground acceleration can be modeled as a stochastic signal {s(t), t ∈ S = ℝ₊} with r_s(t, τ) = e^{−(t+τ)} e^{−|t−τ|}. Consider the observation process
x(t) = e^{jθ} s(t),   t ∈ T = ℝ₊

where θ is a standard normal phase independent of s(t). Now, the objective is to estimate the seismic ground velocity at instants t ≥ 2, i.e., ξ(t) = ∫_0^t s(τ) dτ, with t ∈ S′ = [2, ∞). A justification for considering infinite intervals on the basis of the stationarity property of z(t) can be found in [40].
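The covariance r_s(t, τ) above follows directly from the modulation: r_s(t, τ) = a(t) a(τ) r_z(t, τ), with r_z(t, τ) = e^{−|t−τ|} for the standard Ornstein-Uhlenbeck process. A quick numerical check of this identity on a grid:

```python
import numpy as np

t = np.linspace(0.0, 6.0, 121)
a = np.exp(-t)                                   # exponential modulating function
r_z = np.exp(-np.abs(t[:, None] - t[None, :]))   # standard OU covariance
r_s = a[:, None] * r_z * a[None, :]              # covariance of s(t) = a(t) z(t)
```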

By using a trial-and-error method, we select F(t) = e^{−t}, and then (1) holds. For the case of infinite intervals, T = ℝ₊, the true eigenvalues and eigenfunctions of r_x(t, τ) are not known. We approximate them by means of the RR method. The RR eigenvalues and eigenfunctions of the augmented autocorrelation function r_x(t, τ) are (1 ± e^{−2}) λ̃_k, with [ϕ̃_k(t)/√2, ϕ̃_k(t)/√2]′ and [jϕ̃_k(t)/√2, −jϕ̃_k(t)/√2]′, where λ̃_k and ϕ̃_k(t) are the RR eigenvalues and eigenfunctions, respectively, of r_x(t, τ) obtained from the following trigonometric basis

{1, √2 cos(2πe^{−t}), √2 sin(2πe^{−t}), √2 cos(4πe^{−t}), √2 sin(4πe^{−t}), ...}
In Figure 3, we compare the MS error of the SL estimator calculated with n = 10 terms with the MS errors of the WL estimator with n = 2, 4, and 10 terms (which account for 57.60, 82.30, and 93.88% of the total variance of x(t), respectively). We have limited the estimation interval to [2, 6] because of the observed stabilization of the MS errors for t ≥ 4. Apart from the better performance of the WL estimator with respect to the SL estimator (as was to be expected), the rapid convergence of the RR estimators is also confirmed.
Figure 3

MS errors for the SL estimator with n = 10 terms (crossed line) and for the WL estimator with n = 2 terms (dashed line) , with n = 4 terms (dotted line) , and with n = 10 terms (solid line).

5 Concluding Remarks

A new WL estimator has been given for solving general continuous-time estimation problems. The formulation considered can be adapted to include as particular cases a great number of estimation problems of interest. The proposed estimator avoids the explicit calculation of matrix inverses altogether and can be applied provided that the second-order characteristics of the processes involved are known. Such knowledge is usual in some practical problems in fields as diverse as seismic signal processing, signal detection, finite element analysis, etc. An alternative procedure is the stochastic gradient-based iterative solution called the augmented complex least mean-square algorithm (see, e.g., [24]), in which the second-order statistics are estimated from data. However, if we wish to take advantage of the knowledge of the second-order characteristics and the number of observation data is very large, then the continuous-time solution is a recommended option.

Appendix

This Appendix follows a rigorous mathematical formalism parallel to [15] or [30]. Condition (1) is indeed more restrictive than the one imposed in the works of Cambanis. Specifically, let $\mu$ be a measure on $(T, \mathcal{B}(T))$, where $\mathcal{B}(T)$ is the $\sigma$-algebra of Lebesgue measurable subsets of $T$, which is equivalent to the Lebesgue measure and satisfies
$$\int_T r_x(t,t)\, d\mu(t) < \infty \qquad (9)$$

The existence of $\mu$ satisfying (9) is proved in [30]. Cambanis also shows that (9) allows us to select a function $F(t)$ such that $d\mu(t)/dt = F(t)$ and (1) holds.

Theorem 1 If x(t) is proper, then
$$\hat{\xi}_{\mathrm{WL}}(t) = \hat{\xi}_{\mathrm{SL}}(t) + \sum_{k=1}^{\infty} \bar{b}_k(t)\,\varepsilon_k^*, \quad t \in S$$
with $\bar{b}_k(t) = \frac{1}{\lambda_k}\int_T \rho_2(t,\tau)\phi_k^*(\tau)\, d\mu(\tau)$. Moreover, its associated MS error is
$$P_{\mathrm{WL}}(t) = P_{\mathrm{SL}}(t) - \sum_{k=1}^{\infty} \lambda_k \bar{b}_k(t)\bar{b}_k^*(t), \quad t \in S$$

Proof: First, notice that if x(t) is proper, then the random variables in the set $\{\varepsilon_k\} \cup \{\varepsilon_k^*\}$ are orthogonal. Thus, the estimator $\hat{\xi}_{\mathrm{WL}}(t)$ is obtained by projecting the functional $\xi(t)$ onto the Hilbert space generated by $\{\varepsilon_k\}$ and $\{\varepsilon_k^*\}$, denoted $H(\varepsilon_k, \varepsilon_k^*)$. Hence, the estimator can be expressed in the form $\hat{\xi}_{\mathrm{WL}}(t) = \sum_{k=1}^{\infty} b_k(t)\varepsilon_k + \sum_{k=1}^{\infty} \bar{b}_k(t)\varepsilon_k^*$, where the coefficients $b_k(t)$ and $\bar{b}_k(t)$ are determined via the projection theorem in Hilbert spaces. This result assures that $\xi(t) - \hat{\xi}_{\mathrm{WL}}(t) \perp \{\varepsilon_k\} \cup \{\varepsilon_k^*\}$; that is, $E[\xi(t)\varepsilon_k^*] = E[\hat{\xi}_{\mathrm{WL}}(t)\varepsilon_k^*]$ and $E[\xi(t)\varepsilon_k] = E[\hat{\xi}_{\mathrm{WL}}(t)\varepsilon_k]$ for all $k$. Since $E[\xi(t)\varepsilon_k^*] = \int_T \rho_1(t,\tau)\phi_k(\tau)\,d\mu(\tau)$, $E[\hat{\xi}_{\mathrm{WL}}(t)\varepsilon_k^*] = \lambda_k b_k(t)$, $E[\xi(t)\varepsilon_k] = \int_T \rho_2(t,\tau)\phi_k^*(\tau)\,d\mu(\tau)$, and $E[\hat{\xi}_{\mathrm{WL}}(t)\varepsilon_k] = \lambda_k \bar{b}_k(t)$, the first part of the result follows.

On the other hand, the corresponding MS error is
$$P_{\mathrm{WL}}(t) = E\left[\left|\xi(t) - \hat{\xi}_{\mathrm{WL}}(t)\right|^2\right] = r_\xi(t,t) - \sum_{k=1}^{\infty}\lambda_k b_k(t)b_k^*(t) - \sum_{k=1}^{\infty}\lambda_k \bar{b}_k(t)\bar{b}_k^*(t)$$
and the second part follows since $P_{\mathrm{SL}}(t) = r_\xi(t,t) - \sum_{k=1}^{\infty}\lambda_k b_k(t)b_k^*(t)$.   ■
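The structure of Theorem 1 can be checked in the simplest finite-dimensional analogue: estimating a complex variable $\xi$ from a single proper complex observation $x$, where the SL coefficient is $b = E[\xi x^*]/\lambda$ and the WL correction is $\bar{b} = E[\xi x]/\lambda$. The joint model below is hypothetical, chosen only so that $\rho_2 = E[\xi x]$ is nonzero.

```python
import numpy as np

# Finite-dimensional analogue of Theorem 1: xi estimated from one proper
# complex observation x.  SL coefficient b = E[xi x*]/lambda; WL correction
# coefficient b_bar = E[xi x]/lambda (orthogonality of x and x* holds
# because x is proper: E[x x] = 0).

rng = np.random.default_rng(0)
N = 200_000

# Proper complex Gaussian observation: E[x x] = 0, lambda = E[|x|^2] = 1
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
xi = 0.8 * x + 0.5 * np.conj(x) + 0.3 * noise   # hypothetical joint model

lam = np.mean(np.abs(x) ** 2)
b = np.mean(xi * np.conj(x)) / lam       # strictly linear coefficient
b_bar = np.mean(xi * x) / lam            # widely linear correction

P_sl = np.mean(np.abs(xi - b * x) ** 2)                       # SL MS error
P_wl = np.mean(np.abs(xi - b * x - b_bar * np.conj(x)) ** 2)  # WL MS error
```

The theorem's identity $P_{\mathrm{WL}} = P_{\mathrm{SL}} - \lambda\,\bar{b}\bar{b}^*$ is then recovered empirically, with the WL error strictly smaller because $\bar{b} \neq 0$.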

We need the following Lemma before proving Theorem 2.

Lemma 1
$$H(w_k) = H(\varepsilon_k, \varepsilon_k^*)$$
Proof: From (9), we get that $r_x(t,\tau)$ is the kernel of an integral operator from $L^2(\mu\times\mu)$ into $L^2(\mu\times\mu)$, which is linear, self-adjoint, nonnegative-definite, and compact. Let $\{\alpha_k\}$ be its eigenvalues and $\{\varphi_k(t)\}$ the corresponding eigenfunctions. The eigenfunctions $\varphi_k(t) = [f_k(t), f_k^*(t)]$ are orthonormal in the following sense
$$\int_T \varphi_n^H(t)\varphi_m(t)\, d\mu(t) = 2\,\Re\int_T f_n^*(t)f_m(t)\, d\mu(t) = \delta_{nm} \qquad (10)$$

Thus, the real random variables given by (4) are trivially orthogonal, i.e., $E[w_n w_m] = \alpha_n\delta_{nm}$.

First, we prove that $H(w_k) \subseteq H(\varepsilon_k, \varepsilon_k^*)$. Let $H(\varepsilon_k^*)$ be the Hilbert space spanned by the random variables $\{\varepsilon_k^*\}$. From Theorem 6 of [30], we have $\int_T x(t)f_k^*(t)\,d\mu(t) \in H(\varepsilon_k)$ a.s. and $\int_T x^*(t)f_k(t)\,d\mu(t) \in H(\varepsilon_k^*)$ a.s., and hence it is trivial that $w_k \in H(\varepsilon_k, \varepsilon_k^*)$.

Now, we demonstrate that $H(\varepsilon_k, \varepsilon_k^*) \subseteq H(w_k)$. To this end, we first check that $\varepsilon_k \in H(w_k)$. By projecting $x(t)$ onto $H(w_k)$, we obtain $x(t) = y(t) + v(t)$, with $y(t) = \sum_{k=1}^{\infty} f_k(t)w_k$ and $y(t)$ orthogonal to $v(t)$. Thus, $r_x(t,\tau) = r_y(t,\tau) + r_v(t,\tau)$, where $r_y(t,\tau) = E[y(t)y^*(\tau)]$ and $r_v(t,\tau) = E[v(t)v^*(\tau)]$. By the monotone convergence theorem and (10), we get $\int_T r_x(t,t)\,d\mu(t) = \frac{1}{2}\sum_{k=1}^{\infty}\alpha_k + \int_T r_v(t,t)\,d\mu(t)$.

On the other hand, $\int_T r_x(t,t)\,d\mu(t) = \frac{1}{2}\mathrm{Tr}(r_x) = \frac{1}{2}\sum_{k=1}^{\infty}\alpha_k$, where $\mathrm{Tr}(r_x)$ is the trace of the integral operator on $L^2(\mu\times\mu)$ with kernel $r_x(t,\tau)$.

Thus,
$$\int_T r_v(t,t)\,d\mu(t) = 0 \qquad (11)$$
and hence
$$r_x(t,\tau) = r_y(t,\tau) \quad \text{a.e. } [\mathrm{Leb}\times\mathrm{Leb}] \text{ on } T\times T \qquad (12)$$
Now, we consider the integral $\eta_k = \int_T y(t)\phi_k^*(t)\,d\mu(t)$ a.s. From (12), we have
$$\int_T\int_T r_y(t,\tau)\phi_k^*(t)\phi_k(\tau)\,d\mu(t)\,d\mu(\tau) = \lambda_k$$
and then $\eta_k \in H(w_k)$. Moreover, it follows that $E[|\varepsilon_k - \eta_k|^2] = 0$ and then $\varepsilon_k = \eta_k \in H(w_k)$.

Similarly, it can be proved that $\varepsilon_k^* \in H(w_k)$.   ■

Theorem 2 If x(t) is improper, then
$$\hat{\xi}_{\mathrm{WL}}(t) = \sum_{k=1}^{\infty}\psi_k(t)\,w_k, \quad t \in S$$
where $\psi_k(t) = \frac{1}{\alpha_k}\left(\int_T \rho_1(t,\tau)f_k(\tau)\,d\mu(\tau) + \int_T \rho_2(t,\tau)f_k^*(\tau)\,d\mu(\tau)\right)$. Moreover, its corresponding MS error is
$$P_{\mathrm{WL}}(t) = r_\xi(t,t) - \sum_{k=1}^{\infty}\alpha_k\psi_k(t)\psi_k^*(t), \quad t \in S$$

Proof: Following a reasoning similar to that in the proof of Theorem 1 and taking Lemma 1 into account, the result is immediate.   ■

In the next result, we provide conditions under which (7) holds.

Theorem 3 The WL estimator can be expressed in the closed form
$$\hat{\xi}_{\mathrm{WL}}(t) = \int_T h_1(t,\tau)x(\tau)\,d\mu(\tau) + \int_T h_2(t,\tau)x^*(\tau)\,d\mu(\tau) \quad \text{a.s.} \qquad (13)$$
for some $h_1(t,\cdot), h_2(t,\cdot) \in L^2(\mu)$ if and only if, for some $h_1(t,\cdot), h_2(t,\cdot) \in L^2(\mu)$, it is satisfied that
$$\begin{aligned} \rho_1(t,\tau) &= \int_T h_1(t,u)\,r_x(u,\tau)\,d\mu(u) + \int_T h_2(t,u)\,c_x^*(u,\tau)\,d\mu(u)\\ \rho_2(t,\tau) &= \int_T h_1(t,u)\,c_x(u,\tau)\,d\mu(u) + \int_T h_2(t,u)\,r_x^*(u,\tau)\,d\mu(u) \end{aligned} \qquad (14)$$
for $t \in S$, for almost all $\tau$ [Leb].

Proof: From (11), we have
$$x(t),\ x^*(t) \in H(w_k) \quad \text{for almost all } t \in T\ [\mathrm{Leb}] \qquad (15)$$

Suppose that $\hat{\xi}_{\mathrm{WL}}(t)$ satisfies (13). It follows from $\xi(t) - \hat{\xi}_{\mathrm{WL}}(t) \perp H(w_k)$ and (15) that $E[\xi(t)x^*(\tau)] = E[\hat{\xi}_{\mathrm{WL}}(t)x^*(\tau)]$ and $E[\xi(t)x(\tau)] = E[\hat{\xi}_{\mathrm{WL}}(t)x(\tau)]$ for almost all $\tau \in T$ [Leb], and thus we obtain (14).

Conversely, suppose that (14) holds. Define the process
$$\eta(t) = \int_T h_1(t,\tau)x(\tau)\,d\mu(\tau) + \int_T h_2(t,\tau)x^*(\tau)\,d\mu(\tau) \quad \text{a.s.}$$

Theorem 6 of [30] guarantees that $\eta(t) \in H(w_k)$. Moreover, from (14), we obtain that $\xi(t) - \eta(t) \perp x(\tau)$ and $\xi(t) - \eta(t) \perp x^*(\tau)$ for almost all $\tau \in T$ [Leb]. Hence, by the projection theorem in Hilbert spaces, $\hat{\xi}_{\mathrm{WL}}(t) = \eta(t)$ a.s.   ■
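Discretizing the pair of integral equations (14) on a finite grid turns them into an augmented linear system that can be solved directly. The sketch below uses synthetic second-order statistics, not taken from the paper: a latent real Gaussian model $x = Az$ guarantees a valid (and improper) covariance structure.

```python
import numpy as np

# Discretized version of the integral equations (14): with a row-vector
# convention for a single functional xi, they read
#   rho1 = h1 R + h2 conj(C),   rho2 = h1 C + h2 conj(R),
# i.e.,  [h1 h2] [[R, C], [C*, R*]] = [rho1, rho2].

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, 3 * n)) + 1j * rng.standard_normal((n, 3 * n))
B = rng.standard_normal(3 * n) + 1j * rng.standard_normal(3 * n)

# Second-order statistics of x = A z (z real standard normal):
R = A @ A.conj().T        # grid values of r_x = E[x x^H]
C = A @ A.T               # grid values of c_x = E[x x^T] (nonzero: x improper)

# Cross statistics for a hypothetical scalar functional xi = B z
rho1 = B @ A.conj().T     # E[xi x^*(tau)]
rho2 = B @ A.T            # E[xi x(tau)]

# Augmented system; solving against big.T reflects h acting on the left
big = np.block([[R, C], [C.conj(), R.conj()]])
h = np.linalg.solve(big.T, np.concatenate([rho1, rho2]))
h1, h2 = h[:n], h[n:]
```

The recovered pair (h1, h2) satisfies both discretized equations simultaneously, which is exactly the closed-form condition of Theorem 3 in finite dimensions.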

Declarations

Acknowledgements

This work was supported in part by Project MTM2007-66791 of the Plan Nacional de I+D+I, Ministerio de Educación y Ciencia, Spain, jointly financed by FEDER funds.

Authors’ Affiliations

(1)
Department of Statistics and Operations Research, University of Jaén, Jaén, Spain

References

  1. Marelli D, Fu M: A continuous-time linear system identification method for slowly sampled data. IEEE Trans Signal Process 2010, 58(5):2521-2533.
  2. Murray L, Storkey A: Particle smoothing in continuous time: a fast approach via density estimation. IEEE Trans Signal Process 2011, 59(3):1017-1026.
  3. Guo L, Pasik-Duncan B: Adaptive continuous-time linear quadratic Gaussian control. IEEE Trans Autom Control 1999, 44(9):1653-1662.
  4. Mossberg M: High-accuracy instrumental variable identification of continuous-time autoregressive processes from irregularly sampled noisy data. IEEE Trans Signal Process 2008, 56(8):4087-4091.
  5. Edmonson W, Palacios JC, Lai CA, Latchman H: A global optimization method for continuous-time adaptive recursive filters. IEEE Signal Process Lett 1999, 6(8):199-201.
  6. Schell B, Tsividis Y: Analysis and simulation of continuous-time digital signal processors. Signal Process 2009, 89(10):2013-2026.
  7. Savkin AV, Petersen IR, Moheimani SOR: Model validation and state estimation for uncertain continuous-time systems with missing discrete-continuous data. Comput Electr Eng 1999, 25:29-43.
  8. Ferreira P: Sorting continuous-time signals and the analog median filter. IEEE Signal Process Lett 2000, 7(10):281-283.
  9. Chang X, Yang G: Nonfragile H∞ filtering of continuous-time fuzzy systems. IEEE Trans Signal Process 2011, 59(4):1528-1538.
  10. Bizarro JPS: On the behavior of the continuous-time spectrogram for arbitrarily narrow windows. IEEE Trans Signal Process 2007, 55(5):1793-1802.
  11. Van Trees HL: Detection, Estimation, and Modulation Theory, Part I. Wiley, New York; 1968.
  12. Poor HV: An Introduction to Signal Detection and Estimation. 2nd edition. Springer, New York; 1994.
  13. Kailath T, Sayed AH, Hassibi B: Linear Estimation. Prentice Hall, New Jersey; 2000.
  14. Fernández-Alcalá RM, Navarro-Moreno J, Ruiz-Molina JC: A solution to the linear estimation problem with correlated signal and observation noise. Signal Process 2004, 84:1973-1977.
  15. Cambanis S: A general approach to linear mean-square estimation problems. IEEE Trans Inf Theory 1973, IT-19(1):110-114.
  16. Picinbono B, Chevalier P: Widely linear estimation with complex data. IEEE Trans Signal Process 1995, 43(8):2030-2033.
  17. Picinbono B, Bondon P: Second-order statistics of complex signals. IEEE Trans Signal Process 1997, 45(2):411-420.
  18. Rubin-Delanchy P, Walden AT: Kinematics of complex-valued time series. IEEE Trans Signal Process 2008, 56(9):4189-4198.
  19. Navarro-Moreno J: ARMA prediction of widely linear systems by using the innovations algorithm. IEEE Trans Signal Process 2008, 56(7):3061-3068.
  20. Xia Y, Took CC, Mandic DP: An augmented affine projection algorithm for the filtering of noncircular complex signals. Signal Process 2010, 90(6):1788-1799.
  21. Gerstacker H, Schober R, Lampe RA: Receivers with widely linear processing for frequency-selective channels. IEEE Trans Commun 2003, 51(9):1512-1523.
  22. Eriksson J, Koivunen V: Complex random vectors and ICA models: identifiability, uniqueness, and separability. IEEE Trans Inf Theory 2006, 52(3):1017-1029.
  23. Vía J, Ramírez D, Santamaría I: Properness and widely linear processing of quaternion random vectors. IEEE Trans Inf Theory 2010, 56(7):3502-3515.
  24. Mandic DP, Goh VSL: Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models. Wiley, New York; 2009.
  25. Adali T, Haykin S: Adaptive Signal Processing: Next Generation Solutions. Wiley-IEEE Press; 2010.
  26. Took CC, Mandic DP: A quaternion widely linear adaptive filter. IEEE Trans Signal Process 2010, 58(8):4427-4431.
  27. Schreier PJ, Scharf LL: Statistical Signal Processing of Complex-Valued Data. Cambridge University Press, Cambridge; 2010.
  28. Schreier PJ, Scharf LL, Clifford TM: Detection and estimation of improper complex random signals. IEEE Trans Inf Theory 2005, 51(1):306-312.
  29. Navarro-Moreno J, Estudillo MD, Fernández-Alcalá RM, Ruiz-Molina JC: Estimation of improper complex random signals in colored noise by using the Hilbert space theory. IEEE Trans Inf Theory 2009, 55(6):2859-2867.
  30. Cambanis S: Representation of stochastic processes of second-order and linear operations. J Math Anal Appl 1973, 41:603-620.
  31. Navarro-Moreno J, Ruiz-Molina JC, Fernández-Alcalá RM: Approximate series representations of linear operations on second-order stochastic processes: application to simulation. IEEE Trans Inf Theory 2006, 52(4):1789-1794.
  32. Gardner WA, Franks LE: An alternative approach to linear least squares estimation of continuous random processes. In 5th Annual Princeton Conference on Information Sciences and Systems; 1971.
  33. Ghanem RG, Spanos PD: Stochastic Finite Elements: A Spectral Approach. Springer, New York; 1991.
  34. Rasmussen CE, Williams CKI: Gaussian Processes for Machine Learning. The MIT Press; 2006. [http://www.gaussianprocess.org/gpml/]
  35. Oya A, Navarro-Moreno J, Ruiz-Molina JC: A numerical solution for multichannel detection. IEEE Trans Commun 2009, 57(6):1734-1742.
  36. Gill PE, Miller GF: An algorithm for the integration of unequally spaced data. Comput J 1972, 15:80-83.
  37. Proakis JG: Digital Communications. McGraw-Hill, New York; 1989.
  38. Liu SC: Evolutionary power spectral density of strong-motion earthquakes. Bull Seismol Soc Am 1970, 60(3):891-900.
  39. Wang SS, Hong HP: Quantiles of critical separation distance for non-stationary seismic excitations. Eng Struct 2006, 28:985-991.
  40. Zerva A, Zervas V: Spatial variation of seismic ground motions: an overview. Appl Mech Rev 2002, 55(3):271-297.

Copyright

© Martínez-Rodríguez et al; licensee Springer. 2011

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.