# A general solution to the continuous-time estimation problem under widely linear processing

Ana María Martínez-Rodríguez^{1}, Jesús Navarro-Moreno^{1}, Rosa María Fernández-Alcalá^{1} (email author) and Juan Carlos Ruiz-Molina^{1}

**2011**:119

https://doi.org/10.1186/1687-6180-2011-119

© Martínez-Rodríguez et al; licensee Springer. 2011

**Received: **22 November 2010

**Accepted: **28 November 2011

**Published: **28 November 2011

## Abstract

A general problem of continuous-time linear mean-square estimation of a signal under widely linear processing is studied. The main characteristic of the proposed estimator is the generality of its formulation, which is applicable to a broad variety of situations: finite or infinite intervals; different types of noise (additive and/or multiplicative, white or colored, noiseless observation data, etc.); the three estimation problems (smoothing, filtering, and prediction); and the estimation of functionals of the signal of interest (derivatives, integrals, etc.). Its feasibility from a practical standpoint, and its better performance with respect to the conventional estimator obtained from strictly linear processing, are also illustrated.


## 1 Introduction

In most engineering systems, the state variables represent some physical quantity that is inherently continuous in time (ground-motion parameters, atmospheric or oceanographic flow, and turbulence, etc.). Thus, the formulation of realistic models to represent a signal processing problem is one of the major challenges facing engineers and mathematicians today. Given that in many problems the incoming information is constituted by continuous-time series, the use of a continuous-time model will be a more realistic description of the underlying phenomena we are trying to model. For example, [1] gives techniques of continuous-time linear system identification, and [2] illustrates the use of stochastic differential equations for modeling dynamical phenomena (see also the references therein). Continuous-time processing is especially suitable when data are recorded continuously, as an approximation for discrete-time sampled systems when the sampling rate is high [3] and when data are sampled irregularly [4]. It is also necessary with applications that require high-frequency signal processing and/or very fast initial convergence rates. Analog realizations also result in a smaller integrated circuit, lower power dissipation, and freedom from clocking and aliasing effects [5, 6]. In such cases, the continuous-time solution becomes an adequate alternative to the discrete one since it allows real-time processing and alleviates the overload problem assuring more reliable overall operation of the system [7]. Moreover, the analytical tools developed in the continuous-time case might bring new insights to the analysis which are not possible in their discrete-time counterparts. In particular, [8] illustrates this fact in the problem of sorting continuous-time signals, [9] in the problem of nonfragile *H*_{∞} filtering for a class of continuous-time fuzzy systems, and [10] in the study of the behavior of the continuous-time spectrogram.

The estimation problem is a topic of great interest in the statistical signal processing community. This problem has traditionally been solved by using conventional or strictly linear (SL) processing. For instance, [11, 12] deal with classical estimation problems (e.g., the Kalman-Bucy filter) under a real formalism, [13] tackles similar problems in the complex field, and [14] uses factorizable kernels for solving such problems. The main characteristic of the SL treatment is that it takes into account only the autocorrelation of the complex-valued observation process, ignoring its complementary function. That is, the only information considered for building the estimator is that supplied by the observation process, while the information provided by its conjugate is ignored. Cambanis [15] provided the most general solution to the problem of continuous-time linear mean-square (MS) estimation of a complex-valued signal on the basis of noisy complex-valued observations under SL processing. In fact, Cambanis's approach is valid for any type of second-order signals and observation intervals: it is not necessary to impose conditions such as stationarity, Gaussianity, or continuity on the processes involved, nor restrictions to finite intervals.

Recently, it has been proved that the treatment of the linear MS estimation problem through widely linear (WL) processing, which takes into account both the observation process and its conjugate, leads to estimators with better performance than the SL ones in the sense that they show lower error variance. Specifically, and from a discrete-time perspective, the WL regression problem was tackled in [16], the prediction problem in a complex autoregressive modeling setting was addressed in [17, 18] and later extended to autoregressive moving average prediction in [19]. Also, an augmented^{1} affine projection algorithm based on the full second-order statistical information has been newly devised in [20]. Among the wide range of applications of WL processing is the analysis of communication systems [21], ICA models [22], quaternion domain [23], adaptive filters [24–26], etc.

The study of continuous-time estimation problems is also interesting because it provides precise information on some structural properties of the system under study [8, 9]. For instance, an explicit expression of the MS error associated with the optimal estimator can be derived in this approach (e.g., see [12, 13]). Notice that this well-known result is independent of the number of available observations. In addition, the continuous-time solution becomes an excellent alternative to the discrete one when the number of available data is large. Discrete-time solutions involve the explicit calculation of matrix inverses whose dimensions depend on the number of observations (see, e.g., [16]). In practice, the process would be cumbersome or even prohibitive if this number were large (as occurs, e.g., in a major earthquake where the workload of the system increases suddenly).

The WL estimation problem under a continuous-time formulation was initially dealt with in [27, 28] and [29]. More precisely, the particular problem of estimating a complex signal in additive complex white noise is solved in [27] or [28] through an improper version of the Karhunen-Loève expansion. A general result comparing the performance of WL and SL processing is also presented in which it is shown that the performance gain, measured by MS error, can be as large as 2. Finally, [29] provides an extension of the previous problem to the case in which the additive noise is made up of the sum of a colored component plus a white one. The handicaps of both solutions are: *i)* they are limited to MS continuous signals, *ii)* the signals must be defined on finite intervals, *iii)* the model for the observation process involves additive noise (white noise in the case of [27] and [28]), and *iv)* they are only devoted to solving a smoothing problem.

In this paper, we address a more general estimation problem than those solved in [27–29]. For that, we consider the general formulation of the estimation problem given in [15], and we solve it by using WL processing. The generality of this formulation allows the solution of a wide range of problems, including general second-order processes, infinite observation intervals, additive and/or multiplicative noise, noiseless observations, estimation of functionals of the signal, etc. It also brings under a single framework three different kinds of estimation problems: prediction, filtering, and smoothing. Hence, all the above handicaps are avoided with the proposed solution. Specifically, we present two forms of the WL estimator depending on the nature, either proper or improper, of the observation process. Then, we state conditions to express such an estimator in closed form. Closed form expressions for the estimator are convenient from a computational point of view [11, 12, 15]. Three numerical examples show that the proposed solution is feasible and demonstrate the aforementioned generality. The first one compares the performance of the WL estimator in relation to the SL one by considering an observation process defined on an infinite interval and with multiplicative noise. The second concerns the problem of estimating a signal in nonwhite noise and illustrates its application with discrete data. Lastly, the third example considers the earthquake ground-motion representation problem and illustrates a possible real application.

The rest of this paper is organized as follows. In Section 2, we review the SL solution proposed in [15]. Section 3 presents the main results: we derive the new estimator and its associated MS error, we prove its better performance in relation to the SL estimator, and we give conditions to obtain a closed form of the WL estimator. The results obtained in this section are first stated and then proved rigorously in an Appendix. This section also includes a brief description of how the technique can be implemented in practice. Finally, Section 4 contains three numerical examples illustrating the application of the suggested estimator, and a performance comparison between WL and SL estimation is carried out.

Throughout this paper, all the processes involved are complex, measurable, and of second order. Next, we introduce the basic notation. The real part of a complex number will be denoted by $\mathcal{R}\left\{\cdot \right\}$, the complex conjugate by (·)*, the conjugate transpose by (·)^{H}, and the orthogonality of two complex-valued random variables, say *a* and *b*, by *a* ⊥ *b*. Also, a.s. stands for almost surely and a.e. for almost everywhere.

## 2 Strictly Linear Estimation

Let *F* and *G* be two functionals and let {*s*(*t*)*, t* ∈ *S*} be a random signal, where *S* is any interval of the real line. Suppose that *s*(*t*) is not observed directly and that we observe the process {*x*(*t*)*, t* ∈ *T*}, obtained from the signal through the functional *F*, where *T* is any interval of the real line. Based on the observations {*x*(*t*)*, t* ∈ *T*}, the aim is to estimate a functional of *s*(*t*), say *ξ*(*t*) for *t* ∈ *S'*, given by *G*, with *S'* being any interval of the real line.

As noted above, this formulation is very general and contains as particular cases a great number of classical estimation problems, such as estimation of signals in additive and/or multiplicative noise, estimation of signals observed through random channels, random channel identification, etc. [15]. It can also be adapted to treat filtering, prediction, and smoothing problems.

In order to proceed with the building of the Cambanis estimator, the second-order statistics of the processes involved are needed. Let *r*_{x}(*t, τ*) and *r*_{ξ}(*t, τ*) be the respective autocorrelation functions of *x*(*t*) and *ξ*(*t*). Let *c*_{x}(*t, τ*) = *E*[*x*(*t*)*x*(*τ*)] denote the complementary autocorrelation function of *x*(*t*). Moreover, we denote the cross-correlation functions of *ξ*(*t*) with *x*(*t*) and *x**(*t*) by *ρ*_{1}(*t, τ*) = *E*[*ξ*(*t*)*x**(*τ*)] and *ρ*_{2}(*t, τ*) = *E*[*ξ*(*t*)*x*(*τ*)], respectively.
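These two second-order functions can be illustrated numerically. The following sketch uses a toy scalar model of our own (not one from the paper): *x* = e^{jθ}*s* with *s* real, for which *c*_{x} = *E*[e^{2jθ}]*E*[*s*²], so the complementary function vanishes when *θ* is uniform on (−π, π) and does not otherwise.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200_000                              # independent realizations

def second_order(theta):
    """Sample estimates of r_x = E[x x*] and c_x = E[x x] for x = e^{j theta} s."""
    s = rng.standard_normal(M)           # real signal samples, E[s^2] = 1
    x = np.exp(1j * theta) * s
    return np.mean(x * np.conj(x)), np.mean(x * x)

# Uniform phase on (-pi, pi): E[e^{2j theta}] = 0, so x is proper (c_x = 0)
r_u, c_u = second_order(rng.uniform(-np.pi, np.pi, M))
# Concentrated phase N(0, 0.09): E[e^{2j theta}] = e^{-0.18} != 0, so x is improper
r_n, c_n = second_order(rng.normal(0.0, 0.3, M))

print(abs(c_u), abs(c_n))                # ~0 versus ~e^{-0.18}
```

The autocorrelation is the same in both cases; only the complementary function distinguishes the proper observation from the improper one.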

First, we must find a function *F*(*t*) such that

$${\int}_{T}{r}_{x}\left(t,t\right)F\left(t\right)\mathsf{\text{d}}t<\mathrm{\infty}. \qquad (1)$$

This function *F*(*t*) can be selected by a trial-and-error method or by using the procedure given in [30]; in addition, it does not have to be unique. This freedom of choice is to be exploited appropriately in each particular case under consideration. For example, if *T* = [*T*_{i}*, T*_{f}] and *x*(*t*) is MS continuous, then we can select *F*(*t*) = 1. Some practical examples can be consulted in [31].

Consider the eigenvalues {*λ*_{k}} and eigenfunctions {*ϕ*_{k}(*t*)}, respectively, of *r*_{x}(*t, τ*). Next, we need an orthogonal basis of random variables built from the observation process and the Hilbert space spanned by it. The elements of such a basis take the form ${\epsilon}_{k}={\int}_{T}x\left(t\right){\varphi}_{k}^{*}\left(t\right)F\left(t\right)\mathsf{\text{d}}t$ a.s., and let *H*(*ε*_{k}) be the Hilbert space spanned by the random variables {*ε*_{k}}. By using SL processing, the estimator ${\widehat{\xi}}_{\mathsf{\text{SL}}}\left(t\right)$ proposed in [15] is calculated by projecting the process *ξ*(*t*) onto *H*(*ε*_{k}). As a consequence, ${\widehat{\xi}}_{\mathsf{\text{SL}}}\left(t\right)$ is given by

$${\widehat{\xi}}_{\mathsf{\text{SL}}}\left(t\right)=\sum_{k=1}^{\mathrm{\infty}}{b}_{k}\left(t\right){\epsilon}_{k},\qquad {b}_{k}\left(t\right)=\frac{1}{{\lambda}_{k}}{\int}_{T}{\rho}_{1}\left(t,\tau \right){\varphi}_{k}\left(\tau \right)F\left(\tau \right)\mathsf{\text{d}}\tau .$$
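When the eigenpairs are not known analytically they can be approximated by discretizing the kernel on a grid. A minimal sketch of this standard construction, using the Wiener kernel *r*(*t, τ*) = min(*t, τ*) on [0, 1] as the example because its eigenpairs λ_{k} = ((k − 1/2)π)^{−2}, ϕ_{k}(*t*) = √2 sin((k − 1/2)π*t*) are known in closed form (the grid size is an arbitrary choice):

```python
import numpy as np

N = 800
t = (np.arange(N) + 0.5) / N            # midpoint grid on [0, 1]
dt = 1.0 / N

K = np.minimum.outer(t, t)              # Wiener-process kernel r(t, tau) = min(t, tau)

# Discretize the integral operator: (K f)(t) ~ sum_j K(t, tau_j) f(tau_j) dt
w, V = np.linalg.eigh(K * dt)
w, V = w[::-1], V[:, ::-1]              # eigenvalues in descending order
phi = V / np.sqrt(dt)                   # eigenfunctions normalized in L2[0, 1]

# Analytic eigenvalues lambda_k = ((k - 1/2) pi)^{-2} for comparison
lam_true = 1.0 / (((np.arange(1, 6) - 0.5) * np.pi) ** 2)
print(w[:5], lam_true)

# Projections eps_k = int x(t) phi_k(t) F(t) dt (F = 1 here), by quadrature,
# for one simulated Wiener path x
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(N)) * np.sqrt(dt)
eps = phi.T[:5] @ x * dt
```

The same discretization applies to any continuous covariance kernel; only the line building `K` changes.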

## 3 Widely Linear Estimation

In general, complex-valued random processes are improper [24], and then the appropriate processing is WL processing. In this section, we provide a new estimator, ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)$, by using WL processing and calculate its corresponding MS error, ${P}_{\mathsf{\text{WL}}}\left(t\right)=E\left[|\xi \left(t\right)-{\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right){|}^{2}\right]$. To this end, we consider, together with the information supplied by the observation process, *x*(*t*), the information provided by its conjugate, *x**(*t*). Both processes are stacked in a vector giving rise to the augmented observation process, **x**(*t*) = [*x*(*t*)*, x**(*t*)]*'*, whose autocorrelation function is denoted by **r**_{x}(*t, τ*) = *E*[**x**(*t*)**x**^{H}(*τ*)]. Notice that ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)$ is called the WL estimator because, in contrast with the conventional estimator, it depends linearly not only on *x*(*t*) but also on *x**(*t*).

Two cases must be distinguished according to the nature of *x*(*t*): proper or improper. If *x*(*t*) is proper, i.e., *c*_{x}(*t, τ*) = 0, then the expression for the estimator is

$${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)=\sum_{k=1}^{\mathrm{\infty}}{b}_{k}\left(t\right){\epsilon}_{k}+\sum_{k=1}^{\mathrm{\infty}}{\stackrel{\u0304}{b}}_{k}\left(t\right){\epsilon}_{k}^{*}. \qquad (2)$$

Expressions (2) and (3) are derived in Theorem 1 in the "Appendix". These expressions reduce to the SL ones: if *ρ*_{2}(*t, τ*) = 0, then ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)={\widehat{\xi}}_{\mathsf{\text{SL}}}\left(t\right)$ and *P*_{WL}(*t*) = *P*_{SL}(*t*).

Suppose now that *x*(*t*) is improper (i.e., *c*_{x}(*t, τ*) ≠ 0). Unlike the proper case, it is not as quick to calculate an explicit and easily implemented expression of ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)$. The main difference between the two cases is that now the members of the set $\left\{{\epsilon}_{k}\right\}\cup \left\{{\epsilon}_{k}^{*}\right\}$ are not orthogonal. Our objective is thus to obtain an orthogonal basis of the Hilbert space generated by {*ε*_{k}} and $\left\{{\epsilon}_{k}^{*}\right\}$, $H\left({\epsilon}_{k},{\epsilon}_{k}^{*}\right)$, which avoids this serious problem. This objective is attained in Lemma 1 in the "Appendix" by means of the eigenvalues, {*α*_{k}}, and the corresponding eigenfunctions, **φ**_{k}(*t*), of **r**_{x}(*t, τ*). Following a reasoning similar to [28], it can be shown that the eigenfunctions **φ**_{k}(*t*) have the particular structure ${\mathbf{\phi}}_{k}\left(t\right)={\left[{f}_{k}\left(t\right),{f}_{k}^{*}\left(t\right)\right]}^{\prime}$ and are orthonormal in the sense of (10). The elements of this new set are real random variables of the form

$${w}_{k}={\int}_{T}\left(x\left(t\right){f}_{k}^{*}\left(t\right)+{x}^{*}\left(t\right){f}_{k}\left(t\right)\right)F\left(t\right)\mathsf{\text{d}}t,$$

with *E*[*w*_{n}*w*_{m}] = *α*_{n}*δ*_{nm}. By using this new set of variables, we can obtain the WL estimator explicitly.

Theorem 2 in the "Appendix" proves these assertions.

Under the conditions stated in Theorem 3 in the "Appendix", the WL estimator admits a closed form in terms of two kernels *h*_{1}(*t*, ·) and *h*_{2}(*t*, ·). Expression (7) is computationally more amenable than (2) or (5). The key question is whether the conditions of Theorem 3 are fulfilled. An example of the latter is the classical problem of estimating an improper complex-valued random signal in colored noise with an additive white part, addressed in [29]. Specifically, the observation process considered is

where *s*(*t*) is an improper complex-valued MS continuous random signal, the colored noise component, *n*_{c}(*t*), is a complex-valued MS continuous stochastic process uncorrelated with *v*(*t*), and *v*(*t*) is a complex white noise uncorrelated with the signal *s*(*t*). Note that the formulation of the estimation problem treated in [29] is much more restrictive than that studied in the present paper.

A case of special interest arises when *ξ*(*t*) is a real process while *x*(*t*) is still complex. In this case, ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)$ is real too. However, there is no reason for the SL estimator to be real, which is not convenient when we estimate a real functional. Moreover, if *x*(*t*) is proper, then ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)=2\mathcal{R}\left\{{\widehat{\xi}}_{\mathsf{\text{SL}}}\left(t\right)\right\}$, and the associated reduction in the MS error is twice as great as that achieved by the SL estimator.

Notice also that the Hilbert space approach we have followed to derive the WL estimators allows us to give an alternative proof of the well-known fact that WL estimation outperforms SL estimation. The estimator ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)$ is really obtained by projecting the functional *ξ*(*t*) onto the Hilbert space $H\left({\epsilon}_{k},{\epsilon}_{k}^{*}\right)$. Observe that $H\left({\epsilon}_{k}\right)\subseteq H\left({\epsilon}_{k},{\epsilon}_{k}^{*}\right)$ and then trivially by the projection theorem of the Hilbert spaces^{2} [[12], Proposition VII.C.1], we have *P*_{WL}(*t*) ≤ *P*_{SL}(*t*), for *t* ∈ *S'*, and hence, the WL estimator outperforms the SL one as regards its MS error.
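The inequality *P*_{WL}(*t*) ≤ *P*_{SL}(*t*) can also be checked numerically. Below is a toy scalar sketch of our own construction (the signal and noise variances are arbitrary choices): a real signal observed in improper complex noise, with the SL coefficient and the WL coefficients both computed from sample second-order statistics via the augmented normal equations.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 200_000

# Toy model: real signal xi observed in improper complex noise
xi = rng.standard_normal(M)
noise = 0.3 * rng.standard_normal(M) + 1j * rng.standard_normal(M)
x = xi + noise

# SL estimator: xi_hat = a x, with a = E[xi x*] / E[|x|^2]
a = np.mean(xi * np.conj(x)) / np.mean(np.abs(x) ** 2)
p_sl = np.mean(np.abs(xi - a * x) ** 2)

# WL estimator: xi_hat = h^H [x, x*]', solving R h = p with the
# augmented correlation matrix R = E[x_a x_a^H] and p = E[x_a xi]
xa = np.stack([x, np.conj(x)])
R = (xa @ xa.conj().T) / M
p = (xa @ xi) / M
h = np.linalg.solve(R, p)
p_wl = np.mean(np.abs(xi - h.conj() @ xa) ** 2)

print(p_sl, p_wl)        # WL error is smaller: it exploits the conjugate channel
```

Here the WL solution effectively recovers the real part of the observation, where the signal lives, while the SL coefficient is forced to average over both channels.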

### 3.1 Practical Implementation of the Estimator

- 1) Determine the augmented statistics of the processes involved. In some practical applications, the second-order structure is initially known. In fact, it may be derived from experimental measurements or mathematical models. For instance, the information-bearing signal in the communications problem is purposely designed to have desired statistical properties [32]. Other examples can be consulted in [33, 34].

- 2) Select a function *F*(*t*) such that condition (1) holds. As noted above, this function can be selected by a trial-and-error method or by using the procedure given in [30]. Notice that this function is not unique and, in general, there are many possible specifications.

- 3) Obtain the eigenvalues {*α*_{k}} and eigenfunctions {**φ**_{k}(*t*)} associated with **r**_{x}(*t, τ*). In general, the determination of eigenvalues and eigenfunctions is, except for a few cases, a very involved problem, if not an impossible one. However, we can avoid the calculation of the true eigenvalues and eigenfunctions by means of the Rayleigh-Ritz (RR) method, a procedure for numerically solving operator equations that involves only elementary calculus and simple linear algebra (see [31, 35] for a detailed study of the practical application of the RR method).

- 4) Truncate expressions (5) and (6) at *n* terms and substitute, if necessary, the true eigenvalues and eigenfunctions by the RR ones. This truncated version of the estimator, which is in fact a suboptimum estimator, can be calculated via expression (7) with ${h}_{1}\left(t,\tau \right)=\sum _{k=1}^{n}{\psi}_{k}\left(t\right){f}_{k}^{*}\left(\tau \right)\phantom{\rule{1em}{0ex}}\mathsf{\text{and}}\phantom{\rule{1em}{0ex}}{h}_{2}\left(t,\tau \right)=\sum _{k=1}^{n}{\psi}_{k}\left(t\right){f}_{k}\left(\tau \right)$, where both functions satisfy the conditions of Theorem 3.

Thus, we have replaced the computation of 2*n* integrals in the truncated version of (5) (or *n* integrals in the finite series obtained from (2)) by the computation of two integrals in (7), and hence, it entails a reduction in the error of approximation for a given precision.
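The RR method of step 3 is, in essence, a Galerkin projection of the eigenproblem onto a finite basis. A minimal sketch under our own choices (a shifted Legendre basis and the kernel min(*t, τ*), whose largest eigenvalue is 4/π²; the paper's examples use trigonometric bases instead):

```python
import numpy as np
from numpy.polynomial import legendre

# Rayleigh-Ritz (Galerkin) sketch for the eigenproblem of a covariance kernel.
N, m = 1000, 12
t = (np.arange(N) + 0.5) / N
dt = 1.0 / N

# B[k] = sqrt(2k + 1) P_k(2t - 1): orthonormal basis on [0, 1], on the grid
B = np.stack([np.sqrt(2 * k + 1) * legendre.legval(2 * t - 1, [0] * k + [1])
              for k in range(m)])

K = np.minimum.outer(t, t)               # example kernel: r(t, tau) = min(t, tau)

# A_il = double integral of b_i(t) K(t, tau) b_l(tau); basis is orthonormal,
# so the RR eigenvalues are simply the eigenvalues of A
A = B @ K @ B.T * dt * dt
lam, C = np.linalg.eigh(A)
lam, C = lam[::-1], C[:, ::-1]
phi_rr = C.T @ B                         # RR eigenfunctions on the grid

print(lam[0], 4 / np.pi ** 2)            # largest RR eigenvalue vs analytic value
```

With a smooth basis the leading RR eigenvalues converge very quickly, which is why a small *m* suffices in practice.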

The accuracy of the suboptimum estimator depends on the truncation level *n*. An easy criterion^{3} for determining an adequate level of truncation *n* without an unnecessary excess of computation is the following: select *n* in such a way that ${\sum}_{k=1}^{n}{\alpha}_{k}$ represents at least 95% of the total variance of the process, ${\sum}_{k=1}^{\mathrm{\infty}}{\alpha}_{k}=2{\int}_{T}{r}_{x}\left(t,t\right)F\left(t\right)\mathsf{\text{d}}t$ (see the proof of Lemma 1 in the "Appendix").
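This truncation rule takes only a few lines of code. A sketch with an assumed eigenvalue sequence, λ_{k} = ((k − 1/2)π)^{−2} with known total sum (trace) 1/2 — an illustrative choice, not the α_{k} of any example in the paper:

```python
import numpy as np

# Keep the smallest n whose eigenvalues capture at least 95% of the total variance
k = np.arange(1, 10_000)
lam = 1.0 / (((k - 0.5) * np.pi) ** 2)   # assumed eigenvalue sequence
total = 0.5                              # known trace: sum of all eigenvalues

ratio = np.cumsum(lam) / total
n = int(np.searchsorted(ratio, 0.95) + 1)  # first n with cumulative ratio >= 0.95
print(n, ratio[n - 1])
```

For this particular sequence the first eigenvalue alone accounts for about 81% of the variance, and five terms are enough to pass the 95% threshold.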

- 5) Finally, from a discrete set of observations, *x*_{1}, ..., *x*_{N}, we can compute the integrals in (7) by means of $\begin{array}{c}\underset{T}{\int}{h}_{1}\left(t,\tau \right)x\left(\tau \right)F\left(\tau \right)\mathsf{\text{d}}\tau \approx \sum _{k=1}^{N}{g}_{1}\left(t,k\right){x}_{k}\\ \underset{T}{\int}{h}_{2}\left(t,\tau \right){x}^{*}\left(\tau \right)F\left(\tau \right)\mathsf{\text{d}}\tau \approx \sum _{k=1}^{N}{g}_{2}\left(t,k\right){x}_{k}^{*}\end{array}$

where the weights *g*_{1}(*t, k*) and *g*_{2}(*t, k*) are obtained via a suitable method that performs numerical integration with integrands defined at discrete points; for example, the Gill-Miller quadrature method [36] implemented by subroutine d01gaf from the NAG Toolbox for MATLAB, or the trapezoidal rule (the trapz function in MATLAB).
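Step 5 thus amounts to replacing each integral by a weighted sum of the samples. A sketch with trapezoidal weights playing the role of *g*(*t, k*) and hypothetical constant kernels standing in for *h*_{1} and *h*_{2} (all names and values below are illustrative only, not taken from the paper):

```python
import numpy as np

N = 1001
tau = np.linspace(0.0, 1.0, N)
dtau = tau[1] - tau[0]
F = np.ones(N)                           # F(t) = 1 on a finite interval
x = np.exp(1j * tau)                     # toy (noiseless) observation record

# Trapezoidal quadrature weights: dtau everywhere, halved at the endpoints
g = np.full(N, dtau)
g[0] *= 0.5
g[-1] *= 0.5

def wl_estimate(t, h1, h2):
    """xi_hat(t) = int h1(t,tau) x F dtau + int h2(t,tau) x* F dtau, by quadrature."""
    i1 = np.sum(g * h1(t, tau) * x * F)
    i2 = np.sum(g * h2(t, tau) * np.conj(x) * F)
    return i1 + i2

# Hypothetical constant kernels standing in for the truncated sums of step 4
xi_hat = wl_estimate(0.5, lambda t, tau: np.ones_like(tau),
                     lambda t, tau: 0.5 * np.ones_like(tau))
print(xi_hat)
```

With these constant kernels both integrals have closed forms, which makes it easy to verify that the quadrature is accurate to roughly O(1/N²).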

The only changes needed to implement the estimator (2) are in steps 1 and 3, where we have to use *r*_{x}(*t, τ*) and its associated eigenvalues and eigenfunctions, {*λ*_{k}} and {*ϕ*_{k}(*t*)}, instead.

## 4 Numerical Examples

Three examples illustrate the implementation of the proposed solution and show its capability to solve very general estimation problems. Example 1 shows a situation where true eigenvalues and eigenfunctions are available and aims at comparing the performance of WL processing in relation to SL processing. Example 2 applies the RR method to approximate the eigenexpansion and also illustrates its implementation with discrete data. Finally, Example 3 considers an application in seismic signal processing in which the ground-motion velocity is estimated from seismic ground acceleration data.

### 4.1 Example 1

Suppose that a signal *s*(*t*) is transmitted over a channel that rotates it by some random phase *θ* and adds a noise *n*(*t*). Unlike [28] and [29], we consider infinite observation intervals and a multiplicative quadratic noise in the observations. More precisely, *s*(*t*) is defined on the real line, *S* = ℝ, with zero mean and ${r}_{s}\left(t,\tau \right)={\mathsf{\text{e}}}^{-{\left(t-\tau \right)}^{2}}$. Thus, the observation process is given by

where $\mathsf{\text{j}}=\sqrt{-1}$ and the noise *n*(*t*) is a zero-mean Gaussian process with *r*_{n}(*t, τ*) = 3^{-1/2}*p*^{1/4}(*t*)*p*^{1/4}(*τ*), where $p\left(t\right)=\sqrt{2/\pi}{\mathsf{\text{e}}}^{-2{t}^{2}}$ (this type of process is studied in [34]). Three different probability distributions for *θ* are considered: a uniform distribution on (-*σ, σ*), a zero-mean normal with variance *σ*, and a Laplace distribution with zero mean and variance *σ*. Several choices of *σ* will be used to show how the advantages of WL processing vary with the level of improperness of the observations. Finally, mutual independence of *θ*, *s*(*t*), and *n*(*t*) is assumed. The objective is to estimate $\u1e61\left(t\right)$, *t* ∈ [0, 1], where $\u1e61\left(t\right)$ denotes the MS derivative of *s*(*t*).

Here, *F*(*t*) = 1 has been selected by a trial-and-error method, and thus condition (1) is verified. This example is one of the particular cases where calculation of the true eigenvalues and eigenfunctions is possible. In fact, **r**_{x}(*t, τ*) has eigenvalues $\left(1+E\left[{\mathsf{\text{e}}}^{\mathsf{\text{2j}}\theta}\right]\right){\stackrel{\u0304}{\lambda}}_{k}$ and $\left(1-E\left[{\mathsf{\text{e}}}^{\mathsf{\text{2j}}\theta}\right]\right){\stackrel{\u0304}{\lambda}}_{k}$, with respective associated eigenfunctions ${[{\varphi}_{k}\left(t\right)/\sqrt{2},{\varphi}_{k}\left(t\right)/\sqrt{2}]}^{\prime}$ and ${[\mathsf{\text{j}}{\varphi}_{k}\left(t\right)/\sqrt{2},-\mathsf{\text{j}}{\varphi}_{k}\left(t\right)/\sqrt{2}]}^{\prime}$, *k* = 0, 1, ..., where ${\stackrel{\u0304}{\lambda}}_{k}=\sqrt{\frac{2}{2+\sqrt{3}}}{\left(\frac{1}{2+\sqrt{3}}\right)}^{k}$, ${\varphi}_{k}\left(t\right)={2}^{k}k!\frac{1}{3}{3}^{3/4}{\mathsf{\text{e}}}^{-\left(\sqrt{3}-1\right){t}^{2}}{H}_{k}\left(\sqrt{2\sqrt{3}}t\right)$, and ${H}_{k}\left(t\right)={\left(-1\right)}^{k}{\mathsf{\text{e}}}^{{t}^{2}}\frac{{\partial}^{k}}{\partial {t}^{k}}{\mathsf{\text{e}}}^{-{t}^{2}}$ are the Hermite polynomials. Moreover, we can check that the associated MS errors are the following:

We have evaluated *P*_{SL}(*t*) and *P*_{WL}(*t*) truncated at *n* = 10 terms (this approximate expansion explains 99.86% of the total variance of the process); the performance of both the SL and the WL estimators for *n* = 10 does not vary substantially from the case of *n* > 10. Figure 1a depicts the measure *I* as a function of *σ* for the three probability distributions considered for *θ*. It turns out that the advantages of WL processing decrease both as *σ* tends toward zero and as *σ* tends toward infinity. However, this occurs for different reasons. Another performance measure which helps in the interpretation is

which, for this example, takes the value *L* = *|E*[e^{2jθ}]*|*. Figure 1b shows the index *L* as a function of *σ* for the three probabilistic distributions considered for *θ*. On the one hand, as *σ* tends toward zero, then the index *L* tends to one since in that limit the observation process becomes a real signal^{4}. On the other hand, when *σ* increases, then *L* tends toward zero since *x*(*t*) becomes a proper signal. The faster convergence to zero in the normal case and the slower one for the Laplace distribution are also observed.
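The index *L* = |*E*[e^{2jθ}]| admits closed forms for the three phase distributions: sin(2σ)/(2σ) for the uniform on (−σ, σ), e^{−2σ} for the normal with variance σ, and 1/(1 + 2σ) for the Laplace with variance σ. A Monte Carlo check (σ = 2 is our arbitrary choice) reproduces the ordering seen in Figure 1b:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 500_000
sigma = 2.0

def L_index(theta):
    """Sample estimate of L = |E[e^{2j theta}]|."""
    return abs(np.mean(np.exp(2j * theta)))

L_unif = L_index(rng.uniform(-sigma, sigma, M))            # support (-sigma, sigma)
L_norm = L_index(rng.normal(0.0, np.sqrt(sigma), M))       # variance sigma
L_lapl = L_index(rng.laplace(0.0, np.sqrt(sigma / 2), M))  # variance sigma

print(L_unif, L_norm, L_lapl)
# Normal decays fastest toward the proper case, Laplace slowest
```

The exponential decay e^{−2σ} of the normal case dominates the algebraic decay 1/(1 + 2σ) of the Laplace case, which is exactly the convergence behavior described in the text.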

### 4.2 Example 2

A signal *s*_{1}(*t*) is transmitted over a channel that rotates it by a standard normal phase *θ*_{1} and adds a nonwhite noise *n*(*t*). More precisely, *s*_{1}(*t*) is defined on the interval [0, 1], with zero mean and ${r}_{{s}_{1}}\left(t,\tau \right)=min\left\{t,\tau \right\}$. Thus, the observation process is

where the nonwhite noise *n*(*t*) is obtained from a linear time-invariant system of the form $n\left(t\right)={\mathsf{\text{e}}}^{\mathsf{\text{j}}{\theta}_{\mathsf{\text{2}}}}\underset{0}{\overset{1}{\int}}{r}_{{s}_{1}}\left(t,\tau \right){s}_{2}\left(\tau \right)\mathsf{\text{d}}\tau $, with *θ*_{2} being a zero-mean normal random variable with variance 2 and *s*_{2}(*t*) a standard Wiener process (these types of noises appear in [[37], p. 357]). Moreover, we assume that *θ*_{1}, *θ*_{2}, *s*_{1}(*t*), and *s*_{2}(*t*) are mutually independent. This example extends the cases studied in [28] and [29] since the noise considered here does not have a white component, and thus the previous solutions cannot be applied. The observations have been taken at the time instants *i/*1000, *i* = 1, ..., 1000. The objective is to estimate $s\left(t\right)={\mathsf{\text{e}}}^{\mathsf{\text{j}}{\theta}_{\mathsf{\text{1}}}}{s}_{1}\left(t\right)$, *t* ∈ [0, 1].

We first notice that ${\int}_{0}^{1}{r}_{x}\left(t,t\right)\mathsf{\text{d}}t<\mathrm{\infty}$, where *F*(*t*) = 1 has been selected since the processes involved are continuous, and thus condition (1) is verified. Now, to apply the RR method, we choose the Fourier basis of complex exponentials on [0, 1], ${\left\{exp\left\{2\pi \mathsf{\text{j}}kt\right\}\right\}}_{k=-\mathrm{\infty}}^{\mathrm{\infty}}$. Following the recommendations in step 5 of Section 3.1, we compute the integrals in (7) via the subroutine d01gaf and via trapz (there were no significant differences between the two methods).

The theoretical MS error *P*_{WL}(*t*) has been compared with the MS errors of the WL estimator obtained from the RR method with *n* = 25 and *n* = 50 terms in step 5 of the algorithm, generated by Monte Carlo simulation (a total of 10,000 simulations were performed). We can see that the method may yield a sufficiently accurate solution with a small number *n* of terms while reducing the complexity of the problem significantly. Note that a truncated expansion at *n* = 25 terms explains 88.77% of the total variance of the process, and the expansion with *n* = 50 terms 95.81%.

### 4.3 Example 3

The seismic ground acceleration is usually represented as a modulated nonstationary process *s*(*t*) = *a*(*t*)*z*(*t*), where *a*(*t*) is a time modulating function that could be a complex function, and *z*(*t*) is a stationary process with zero mean and known second-order moments. In general, the so-called exponential modulating function is adopted [38, 39]. A common choice for *z*(*t*) is the standard Ornstein-Uhlenbeck process, with a particular version of the exponential modulating function given by *a*(*t*) = e^{-t} [[33], p. 38]. Thus, the seismic ground acceleration can be modeled as a stochastic signal {*s*(*t*)*, t* ∈ *S* = ℝ^{+}} with *r*_{s}(*t*, *τ*) = e^{-(t+τ)}e^{-|t-τ|}. Consider the observation process

where *θ* is a standard normal phase independent of *s*(*t*). Now, the objective is to estimate the seismic ground velocity at instant *t* ≥ 2, i.e., $\xi \left(t\right)={\int}_{0}^{t}s\left(\tau \right)\mathsf{\text{d}}\tau $, with *t* ∈ *S*' = [2, ∞). A justification for considering infinite intervals on the basis of the stationarity property of *z*(*t*) can be found in [40].
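The covariance *r*_{s}(*t, τ*) = e^{−(t+τ)}e^{−|t−τ|} of the modulated model can be checked by simulation: an exact AR(1) recursion generates the standard Ornstein-Uhlenbeck process on a grid, and e^{−t} modulates it (the grid and sample sizes below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 100_000, 51
t = np.linspace(0.0, 2.0, N)
dt = t[1] - t[0]

# Exact stationary AR(1) simulation of the standard OU process z(t),
# r_z(t, tau) = e^{-|t - tau|}; then s(t) = a(t) z(t) with a(t) = e^{-t}
rho = np.exp(-dt)
z = np.empty((M, N))
z[:, 0] = rng.standard_normal(M)
for i in range(1, N):
    z[:, i] = rho * z[:, i - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal(M)
s = np.exp(-t) * z

# Sample check of r_s(t, tau) = e^{-(t + tau)} e^{-|t - tau|} near (t, tau) = (1, 0.5)
i, j = np.searchsorted(t, 1.0), np.searchsorted(t, 0.5)
r_emp = np.mean(s[:, i] * s[:, j])
r_th = np.exp(-(t[i] + t[j])) * np.exp(-abs(t[i] - t[j]))
print(r_emp, r_th)
```

The AR(1) coefficients e^{−dt} and √(1 − e^{−2dt}) make the discretization exact at the grid points, so the empirical covariance matches the closed form up to Monte Carlo error.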

We select *F*(*t*) = e^{-t}, and then (1) holds. For the case of infinite intervals, *T* = ℝ^{+}, the true eigenvalues and eigenfunctions of **r**_{x}(*t, τ*) are not known, so we approximate them by means of the RR method. The RR eigenvalues and eigenfunctions of **r**_{x}(*t, τ*) are $\left(1\phantom{\rule{2.77695pt}{0ex}}\pm \phantom{\rule{2.77695pt}{0ex}}{\mathsf{\text{e}}}^{-2}\right){\stackrel{\u0303}{\lambda}}_{k}$ and ${\left[{\stackrel{\u0303}{\varphi}}_{k}\left(t\right)/\sqrt{2},{\stackrel{\u0303}{\varphi}}_{k}\left(t\right)/\sqrt{2}\right]}^{\prime}$ and ${\left[\mathsf{\text{j}}{\stackrel{\u0303}{\varphi}}_{k}\left(t\right)/\sqrt{2},-\mathsf{\text{j}}{\stackrel{\u0303}{\varphi}}_{k}\left(t\right)/\sqrt{2}\right]}^{\prime}$, where ${\stackrel{\u0303}{\lambda}}_{k}$ and ${\stackrel{\u0303}{\varphi}}_{k}\left(t\right)$ are the RR eigenvalues and eigenfunctions, respectively, of *r*_{x}(*t, τ*) obtained from the following trigonometric basis

The MS errors of the SL estimator at *n* = 10 terms have been compared with the MS errors of the WL estimator with *n* = 2, 4, and 10 terms (which account for 57.60, 82.30, and 93.88% of the total variance of *x*(*t*), respectively). We have limited the estimation interval to [2, 6] because of the observed stabilization of the MS errors for *t* ≥ 4. Apart from the better performance of the WL estimator with respect to the SL estimator (as was to be expected), the rapid convergence of the RR estimators is also confirmed.

## 5 Concluding Remarks

A new WL estimator has been presented for solving general continuous-time estimation problems. The formulation considered can be adapted to include as particular cases a great number of estimation problems of interest. The proposed estimator avoids the explicit calculation of matrix inverses altogether and can be applied provided that the second-order characteristics of the processes involved are known. Such knowledge is usual in some practical problems in fields as diverse as seismic signal processing, signal detection, finite element analysis, etc. An alternative procedure is the stochastic gradient-based iterative solution called the augmented complex least mean-square algorithm (see, e.g., [24]), in which the second-order statistics are estimated from data. However, if we wish to take advantage of the knowledge of the second-order characteristics and the number of observation data is very large, then the continuous-time solution is a recommended option.

## Appendix

Let *μ* be a measure on $\left(T,\mathcal{B}\left(T\right)\right)$ ($\mathcal{B}\left(T\right)$ being the *σ*-algebra of Lebesgue measurable subsets of *T*) which is equivalent to the Lebesgue measure and verifies

The existence of *μ* satisfying (9) is proved in [30]. Cambanis also shows that (9) allows us to select a function *F*(*t*) such that d*μ*(*t*)*/* d*t* = *F*(*t*) and (1) holds.

**Theorem 1**

*If x*(*t*) *is proper, then*

$${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)=\sum_{k=1}^{\mathrm{\infty}}{b}_{k}\left(t\right){\epsilon}_{k}+\sum_{k=1}^{\mathrm{\infty}}{\stackrel{\u0304}{b}}_{k}\left(t\right){\epsilon}_{k}^{*},$$

*with* ${b}_{k}\left(t\right)=\frac{1}{{\lambda}_{k}}{\int}_{T}{\rho}_{1}\left(t,\tau \right){\varphi}_{k}\left(\tau \right)\mathsf{\text{d}}\mu \left(\tau \right)$ *and* ${\stackrel{\u0304}{b}}_{k}\left(t\right)=\frac{1}{{\lambda}_{k}}{\int}_{T}{\rho}_{2}\left(t,\tau \right){\varphi}_{k}^{*}\left(\tau \right)\mathsf{\text{d}}\mu \left(\tau \right)$.

*Moreover, its associated MS error is*

*Proof:* Firstly, notice that if *x*(*t*) is proper, then the members of the set of random variables $\left\{{\epsilon}_{k}\right\}\cup \left\{{\epsilon}_{k}^{*}\right\}$ are orthogonal. Thus, the estimator ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)$ is obtained by projecting the functional *ξ*(*t*) onto the Hilbert space generated by {*ε*_{k}} and $\left\{{\epsilon}_{k}^{*}\right\}$, denoted $H\left({\epsilon}_{k},{\epsilon}_{k}^{*}\right)$. Hence, the estimator can be expressed in the form ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)={\sum}_{k=1}^{\mathrm{\infty}}{b}_{k}\left(t\right){\epsilon}_{k}+{\sum}_{k=1}^{\mathrm{\infty}}{\stackrel{\u0304}{b}}_{k}\left(t\right){\epsilon}_{k}^{*}$, where the coefficients *b*_{k}(*t*) and ${\stackrel{\u0304}{b}}_{k}\left(t\right)$ are determined via the projection theorem of Hilbert spaces. This theorem assures that $\xi \left(t\right)-{\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)\perp \left\{{\epsilon}_{k}\right\}\cup \left\{{\epsilon}_{k}^{*}\right\}$; that is, $E\left[\xi \left(t\right){\epsilon}_{k}^{*}\right]=E\left[{\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right){\epsilon}_{k}^{*}\right]$ and $E\left[\xi \left(t\right){\epsilon}_{k}\right]=E\left[{\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right){\epsilon}_{k}\right]$ for all *k*. Since $E\left[\xi \left(t\right){\epsilon}_{k}^{*}\right]={\int}_{T}{\rho}_{1}\left(t,\tau \right){\varphi}_{k}\left(\tau \right)\mathsf{\text{d}}\mu \left(\tau \right)$, $E\left[{\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right){\epsilon}_{k}^{*}\right]={\lambda}_{k}{b}_{k}\left(t\right)$, $E\left[\xi \left(t\right){\epsilon}_{k}\right]={\int}_{T}{\rho}_{2}\left(t,\tau \right){\varphi}_{k}^{*}\left(\tau \right)\mathsf{\text{d}}\mu \left(\tau \right)$, and $E\left[{\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right){\epsilon}_{k}\right]={\lambda}_{k}{\stackrel{\u0304}{b}}_{k}\left(t\right)$, the first part of the result follows.

■
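In practice, the coefficients of Theorem 1 must be computed numerically. The following sketch approximates *b*_{k}(*t*) = (1/*λ*_{k}) ∫_{T} *ρ*_{1}(*t*, τ)φ_{k}(τ) dμ(τ) by quadrature, writing dμ(τ) = *F*(τ) dτ; the kernel *ρ*_{1}, the eigenpair (*λ*_{k}, φ_{k}), and the density *F* below are illustrative placeholders, not the paper's model.

```python
import numpy as np

# Quadrature approximation of the coefficient
#   b_k(t) = (1 / lambda_k) * \int_T rho_1(t, tau) phi_k(tau) d mu(tau),
# with d mu(tau) = F(tau) d tau on a uniform grid (Riemann sum).
def b_coefficient(t, rho1, phi_k, lam_k, F, grid):
    dtau = grid[1] - grid[0]                       # uniform grid spacing
    integrand = rho1(t, grid) * phi_k(grid) * F(grid)
    return integrand.sum() * dtau / lam_k

# Toy check: with rho_1(t, tau) = lambda_k * phi_k(t) * phi_k(tau) and
# phi_k orthonormal, b_k(t) should reduce to phi_k(t).
grid = np.linspace(0.0, 1.0, 20001)
phi = lambda tau: np.sqrt(2.0) * np.sin(np.pi * tau)  # orthonormal on [0, 1]
lam = 0.7
rho1 = lambda t, tau: lam * phi(t) * phi(tau)
F = lambda tau: np.ones_like(tau)                     # mu = Lebesgue measure
print(b_coefficient(0.3, rho1, phi, lam, F, grid))    # ~ phi(0.3)
```

The same quadrature applies verbatim to the coefficient $\stackrel{\u0304}{b}_{k}(t)$, replacing *ρ*_{1} by *ρ*_{2} and φ_{k} by its conjugate.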

We need the following Lemma before proving Theorem 2.

**Lemma 1**

*Proof:* From (9), we get that *r*_{x}(*t*, *τ*) is the kernel of an integral operator from *L*_{2}(*μ* × *μ*) into *L*_{2}(*μ* × *μ*), which is linear, self-adjoint, nonnegative-definite, and compact. Let {*α*_{k}} be its eigenvalues and {φ_{k}(*t*)} the corresponding eigenfunctions. The eigenfunctions ${\phi}_{k}\left(t\right)={\left[{f}_{k}\left(t\right),{f}_{k}^{*}\left(t\right)\right]}^{\prime}$ are orthonormal in the following sense

Thus, the real random variables given by (4) are trivially orthogonal, i.e., *E*[*w*_{n}*w*_{m}] = *a*_{n}*δ*_{nm}.

First, we prove that $H\left({w}_{k}\right)\subseteq H\left({\epsilon}_{k},{\epsilon}_{k}^{*}\right)$. Let $H\left({\epsilon}_{k}^{*}\right)$ be the Hilbert space spanned by the random variables $\left\{{\epsilon}_{k}^{*}\right\}$. From Theorem 6 of [30], we have ${\int}_{T}x\left(t\right){f}_{k}^{*}\left(t\right)\mathsf{\text{d}}\mu \left(t\right)\in H\left({\epsilon}_{k}\right)$ a.s. and ${\int}_{T}{x}^{*}\left(t\right){f}_{k}\left(t\right)\mathsf{\text{d}}\mu \left(t\right)\in H\left({\epsilon}_{k}^{*}\right)$ a.s., and hence it is immediate that ${w}_{k}\in H\left({\epsilon}_{k},{\epsilon}_{k}^{*}\right)$.

Now, we demonstrate that $H\left({\epsilon}_{k},{\epsilon}_{k}^{*}\right)\subseteq H\left({w}_{k}\right)$. To that end, we first check that *ε*_{k} ∈ *H*(*w*_{k}). By projecting *x*(*t*) onto *H*(*w*_{k}), we obtain that *x*(*t*) = *y*(*t*) + *v*(*t*) with $y\left(t\right)={\sum}_{k=1}^{\mathrm{\infty}}{f}_{k}\left(t\right){w}_{k}$ and *y*(*t*) orthogonal to *v*(*t*). Thus, we have that *r*_{x}(*t*, *τ*) = *r*_{y}(*t*, *τ*) + *r*_{v}(*t*, *τ*), where *r*_{y}(*t*, *τ*) = *E*[*y*(*t*)*y**(*τ*)] and *r*_{v}(*t*, *τ*) = *E*[*v*(*t*)*v**(*τ*)]. By the monotone convergence theorem and (10), we get that ${\int}_{T}{r}_{x}\left(t,t\right)\mathsf{\text{d}}\mu \left(t\right)=\frac{1}{2}{\sum}_{k=1}^{\mathrm{\infty}}{\alpha}_{k}+{\int}_{T}{r}_{v}\left(t,t\right)\mathsf{\text{d}}\mu \left(t\right)$.

On the other hand, ${\int}_{T}{r}_{x}\left(t,t\right)\mathsf{\text{d}}\mu \left(t\right)=\frac{1}{2}\mathsf{\text{Tr}}\left({r}_{x}\right)=\frac{1}{2}{\sum}_{k=1}^{\mathrm{\infty}}{\alpha}_{k}$, where $\mathsf{\text{Tr}}\left({r}_{x}\right)$ is the trace of the integral operator on *L*_{2}(*μ* × *μ*) with kernel *r*_{x}(*t*, *τ*).

and then *η*_{k} ∈ *H*(*w*_{k}). Moreover, it follows that *E*[|*ε*_{k} - *η*_{k}|^{2}] = 0, and therefore *ε*_{k} = *η*_{k} ∈ *H*(*w*_{k}).

Similarly, it can be proved that ${\epsilon}_{k}^{*}\in H\left({w}_{k}\right)$. ■
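The trace identity used in the proof of Lemma 1 (the eigenvalue sum of a self-adjoint, nonnegative-definite kernel operator equals the integral of the kernel along the diagonal) can be checked numerically. A small sketch with an illustrative kernel, min(*t*, *τ*) on [0, 1], not the paper's *r*_{x}:

```python
import numpy as np

# Discretize the integral operator with kernel r(t, tau) = min(t, tau)
# on [0, 1]: the matrix R * dt approximates the operator, so the sum of
# its eigenvalues approximates Tr(r) = \int_0^1 r(t, t) dt = 1/2.
n = 500
t = (np.arange(n) + 0.5) / n              # midpoint grid on [0, 1]
dt = 1.0 / n
R = np.minimum.outer(t, t)                # kernel matrix, symmetric PSD
eig_sum = np.linalg.eigvalsh(R * dt).sum()
trace_integral = (np.diag(R) * dt).sum()  # midpoint rule for the diagonal integral
print(eig_sum, trace_integral)
```

At the matrix level the two quantities agree exactly (the trace of a symmetric matrix is its eigenvalue sum); the discretization step only affects how well either approximates the continuous operator.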

**Theorem 2**

*If x*(*t*) *is improper, then*

*where*${\psi}_{k}\left(t\right)=\frac{1}{{\alpha}_{k}}\left({\int}_{T}{\rho}_{1}\left(t,\tau \right){f}_{k}\left(\tau \right)\mathsf{\text{d}}\mu \left(\tau \right)+{\int}_{T}{\rho}_{2}\left(t,\tau \right){f}_{k}^{*}\left(\tau \right)\mathsf{\text{d}}\mu \left(\tau \right)\right)$.

*Moreover, its corresponding MS error is*

*Proof:* Following a reasoning similar to that of proof of Theorem 1 and taking Lemma 1 into account, the result is immediate. ■

In the next result, we provide conditions under which (7) holds.

**Theorem 3**

*The WL estimator can be expressed in the following closed form*

*for some h*_{1}(*t*, ·), *h*_{2}(*t*, ·) ∈ *L*_{2}(*μ*) *if and only if, for some h*_{1}(*t*, ·), *h*_{2}(*t*, ·) ∈ *L*_{2}(*μ*), *it is satisfied that*

*for t* ∈ *S*′*, a.e. τ* [Leb].

*Proof:* From (11), we have

Suppose that ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)$ satisfies (13). It follows from $\xi \left(t\right)-{\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)\perp H\left({w}_{k}\right)$ and (15) that $E\left[\xi \left(t\right){x}^{*}\left(\tau \right)\right]=E\left[{\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right){x}^{*}\left(\tau \right)\right]$ and $E\left[\xi \left(t\right)x\left(\tau \right)\right]=E\left[{\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)x\left(\tau \right)\right]$, for almost all *τ* ∈ *T* [Leb], and thus we obtain (14).

Theorem 6 of [30] guarantees that *η*(*t*) ∈ *H*(*w*_{k}). Moreover, from (14), we obtain that *ξ*(*t*) - *η*(*t*) ⊥ *x*(*τ*) and *ξ*(*t*) - *η*(*t*) ⊥ *x**(*τ*) for almost all *τ* ∈ *T* [Leb]. Hence, from the projection theorem of Hilbert spaces, ${\widehat{\xi}}_{\mathsf{\text{WL}}}\left(t\right)=\eta \left(t\right)$ a.s. ■
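To make Theorem 3 concrete: on a discrete grid, a Wiener-Hopf-type condition such as (14) becomes a linear system for the estimator kernel. The sketch below is hedged: it uses a real, proper toy model in which the *h*_{2} term vanishes, and placeholder covariances (a Wiener signal kernel plus white observation noise), not the paper's equations.

```python
import numpy as np

# Discretized Wiener-Hopf-type equation for the estimator kernel h:
#   \int h(t, sigma) R(sigma, tau) d sigma = rho(t, tau)
# becomes  h @ (R * dtau) = rho,  solved row by row as a linear system.
n = 200
tau = (np.arange(n) + 0.5) / n
dtau = 1.0 / n
r_x = np.minimum.outer(tau, tau)          # illustrative signal covariance
R = r_x + 0.1 * np.eye(n) / dtau          # observation covariance: signal + white noise
rho = r_x.copy()                          # cross-covariance for xi(t) = x(t)
h = np.linalg.solve((R * dtau).T, rho.T).T   # h[i, j] ~ h(t_i, tau_j)
print(h.shape)
```

The white-noise term is discretized as a scaled identity (a Dirac delta on the grid), which also keeps the system well conditioned; for an improper complex model, an analogous augmented system would yield *h*_{1} and *h*_{2} jointly.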

## Declarations

### Acknowledgements

This work was supported in part by Project MTM2007-66791 of the Plan Nacional de I+D+I, Ministerio de Educación y Ciencia, Spain. The project is jointly financed by FEDER.


## References

1. Marelli D, Fu M: **A continuous-time linear system identification method for slowly sampled data.** *IEEE Trans Signal Process* 2010, **58**(5):2521-2533.
2. Murray L, Storkey A: **Particle smoothing in continuous time: a fast approach via density estimation.** *IEEE Trans Signal Process* 2011, **59**(3):1017-1026.
3. Guo L, Pasik-Duncan B: **Adaptive continuous-time linear quadratic Gaussian control.** *IEEE Trans Autom Control* 1999, **44**(9):1653-1662. 10.1109/9.788532
4. Mossberg M: **High-accuracy instrumental variable identification of continuous-time autoregressive processes from irregularly sampled noisy data.** *IEEE Trans Signal Process* 2008, **56**(8):4087-4091.
5. Edmonson W, Palacios JC, Lai CA, Latchman H: **A global optimization method for continuous-time adaptive recursive filters.** *IEEE Signal Process Lett* 1999, **6**(8):199-201. 10.1109/97.774864
6. Schell B, Tsividis Y: **Analysis and simulation of continuous-time digital signal processors.** *Signal Process* 2009, **89**(10):2013-2026. 10.1016/j.sigpro.2009.04.005
7. Savkin AV, Petersen IR, Moheimani SOR: **Model validation and state estimation for uncertain continuous-time systems with missing discrete-continuous data.** *Comput Electr Eng* 1999, **25**:29-43. 10.1016/S0045-7906(98)00024-X
8. Ferreira P: **Sorting continuous-time signals and the analog median filter.** *IEEE Signal Process Lett* 2000, **7**(10):281-283. 10.1109/97.870681
9. Chang X, Yang G: **Nonfragile H_{∞} filtering of continuous-time fuzzy systems.** *IEEE Trans Signal Process* 2011, **59**(4):1528-1538.
10. Bizarro JPS: **On the behavior of the continuous-time spectrogram for arbitrarily narrow windows.** *IEEE Trans Signal Process* 2007, **55**(5):1793-1802.
11. Van Trees HL: *Detection, Estimation, and Modulation Theory. Part I*. Wiley, New York; 1968.
12. Poor HV: *An Introduction to Signal Detection and Estimation*. 2nd edition. Springer, New York; 1994.
13. Kailath T, Sayed AH, Hassibi B: *Linear Estimation*. Prentice Hall, New Jersey; 2000.
14. Fernández-Alcalá RM, Navarro-Moreno J, Ruiz-Molina JC: **A solution to the linear estimation problem with correlated signal and observation noise.** *Signal Process* 2004, **84**:1973-1977. 10.1016/j.sigpro.2004.06.017
15. Cambanis S: **A general approach to linear mean-square estimation problems.** *IEEE Trans Inf Theory* 1973, **IT-19**(1):110-114.
16. Picinbono B, Chevalier P: **Widely linear estimation with complex data.** *IEEE Trans Signal Process* 1995, **43**(8):2030-2033. 10.1109/78.403373
17. Picinbono B, Bondon P: **Second-order statistics of complex signals.** *IEEE Trans Signal Process* 1997, **45**(2):411-420. 10.1109/78.554305
18. Rubin-Delanchy P, Walden AT: **Kinematics of complex-valued time series.** *IEEE Trans Signal Process* 2008, **56**(9):4189-4198.
19. Navarro-Moreno J: **ARMA prediction of widely linear systems by using the innovations algorithm.** *IEEE Trans Signal Process* 2008, **56**(7):3061-3068.
20. Xia Y, Took CC, Mandic DP: **An augmented affine projection algorithm for the filtering of noncircular complex signals.** *Signal Process* 2010, **90**(6):1788-1799. 10.1016/j.sigpro.2009.11.026
21. Gerstacker H, Schober R, Lampe RA: **Receivers with widely linear processing for frequency-selective channels.** *IEEE Trans Commun* 2003, **51**(9):1512-1523. 10.1109/TCOMM.2003.816992
22. Eriksson J, Koivunen V: **Complex random vectors and ICA models: identifiability, uniqueness, and separability.** *IEEE Trans Inf Theory* 2006, **52**(3):1017-1029.
23. Vía J, Ramírez D, Santamaría I: **Properness and widely linear processing of quaternion random vectors.** *IEEE Trans Inf Theory* 2010, **56**(7):3502-3515.
24. Mandic DP, Goh VSL: *Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models*. Wiley, New York; 2009.
25. Adali T, Haykin S: *Adaptive Signal Processing: Next Generation Solutions*. Wiley-IEEE Press; 2010.
26. Took CC, Mandic DP: **A quaternion widely linear adaptive filter.** *IEEE Trans Signal Process* 2010, **58**(8):4427-4431.
27. Schreier PJ, Scharf LL: *Statistical Signal Processing of Complex-Valued Data*. Cambridge University Press, Cambridge; 2010.
28. Schreier PJ, Scharf LL, Clifford TM: **Detection and estimation of improper complex random signals.** *IEEE Trans Inf Theory* 2005, **51**(1):306-312. 10.1109/TIT.2004.839538
29. Navarro-Moreno J, Estudillo MD, Fernández-Alcalá RM, Ruiz-Molina JC: **Estimation of improper complex random signals in colored noise by using the Hilbert space theory.** *IEEE Trans Inf Theory* 2009, **55**(6):2859-2867.
30. Cambanis S: **Representation of stochastic processes of second order and linear operations.** *J Math Anal Appl* 1973, **41**:603-620.
31. Navarro-Moreno J, Ruiz-Molina JC, Fernández-Alcalá RM: **Approximate series representations of linear operations on second-order stochastic processes: application to simulation.** *IEEE Trans Inf Theory* 2006, **52**(4):1789-1794.
32. Gardner WA, Franks LE: **An alternative approach to linear least squares estimation of continuous random processes.** In *5th Annual Princeton Conference on Information Sciences and Systems*; 1971.
33. Ghanem RG, Spanos PD: *Stochastic Finite Elements: A Spectral Approach*. Springer, New York; 1991.
34. Rasmussen CE, Williams CKI: *Gaussian Processes for Machine Learning*. The MIT Press; 2006. [http://www.gaussianprocess.org/gpml/]
35. Oya A, Navarro-Moreno J, Ruiz-Molina JC: **A numerical solution for multichannel detection.** *IEEE Trans Commun* 2009, **57**(6):1734-1742.
36. Gill PE, Miller GF: **An algorithm for the integration of unequally spaced data.** *Comput J* 1972, **15**:80-83.
37. Proakis JG: *Digital Communications*. McGraw-Hill, New York; 1989.
38. Liu SC: **Evolutionary power spectral density of strong-motion earthquakes.** *Bull Seismol Soc Am* 1970, **60**(3):891-900.
39. Wang SS, Hong HP: **Quantiles of critical separation distance for non-stationary seismic excitations.** *Eng Struct* 2006, **28**:985-991. 10.1016/j.engstruct.2005.11.003
40. Zerva A, Zervas V: **Spatial variation of seismic ground motions: an overview.** *Appl Mech Rev* 2002, **55**(3):271-297. 10.1115/1.1458013

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.