New graphical models for sequential data and the improved state estimations by data-conditioned driving noises
EURASIP Journal on Advances in Signal Processing volume 2024, Article number: 50 (2024)
Abstract
A prevalent problem in statistical signal processing, applied statistics, and time series analysis arises from the attempt to identify the hidden state of a Markov process on the basis of a set of available noisy observations. In the context of sequential data, filtering refers to the probability distribution of the underlying Markovian system conditioned on the measurements made at or before the time of the estimated state. In addition to filtering, the smoothing distribution is obtained by incorporating measurements made after the time of the estimated state into the filtered solution. This work proposes a number of new filters and smoothers that, in contrast to the traditional schemes, systematically make use of the process noises to achieve enhanced performance in the state estimation problem. Our approaches are characterized by the application of graphical models; the graph-based framework not only provides a unified perspective on the existing filters and smoothers but also leads us to design new algorithms in a consistent and comprehensible manner. Moreover, the graph models facilitate the implementation of the suggested algorithms through message passing on the graph.
1 Introduction
In a wide range of data science applications, such as tracking [1], navigation [2], and audio signal processing [3], one deals with a hidden Markov model that governs the dynamics of the latent process \(x_n\) and establishes the relationship between the unobserved variable \(x_n\) and the observation \(y_n\). Here \(n \in {\mathbb {N}}\) denotes the discrete time. Let \(Y_{n'}:= \lbrace {y}_1,\cdots ,{y}_{n'} \rbrace\) be the historical accumulation of data and let \({x}_{n\mid n'}:= {x}_n \mid Y_{n'}\) be the conditioned random variable; then the Bayesian resolution for the unknown instance of \(x_n\) is given by the probability \({P}(x_{n\mid n'})\), which is called filtering when \(n=n'\), smoothing when \(n< n'\), and prediction when \(n> n'\) [4]. In this paper, attention is confined to the filtering and smoothing distributions.
In the simplest case of linear dynamics together with linear observations corrupted by an independent Gaussian noise, the filtering/smoothing problem can explicitly be solved by the Kalman filter/smoother, which describes how the mean and covariance of the conditional probability evolve over time [5,6,7]. For nonlinear systems, where the conditioned measure cannot be characterized by a Gaussian probability, the target distributions do not admit closed-form expressions. In this case, various efforts have been made to obtain approximate solutions, see e.g., [5, 8,9,10,11,12,13,14,15,16,17,18,19,20]. Examples of traditional algorithms for filtering include the extended Kalman filter [14], the ensemble Kalman filter [15], the unscented Kalman filter [21], the cubature Kalman filter [22], the Gaussian particle filter [23], the bootstrap filter and its variants [13, 16, 24, 25], and the Gaussian mixture filter [17, 18, 26, 27]. As for smoothing, one can solve the problem by direct application of the Kalman filtering results, because the smoothing problem is a Kalman filtering problem in disguise (see, for instance, [4] and references therein). In recent years, significant attention has been directed toward filtering/smoothing for state estimation due to its remarkable success in various applications, including medical tomography [28], geological tomography [29, 30], hydrology [31], petroleum engineering [32, 33], as well as a host of other physical, biological, or social systems [34,35,36,37].
Let \(w_n\) be the independent system noise that affects \(x_n\); then the dependency among the relevant variables can typically be visualized through the directed graphical model in Fig. 1. This diagrammatic representation of the relationships among the random variables allows for exact or approximate inference by turning the complex computation into a number of reduced operations on the graph [38, 39]. Notice that the graph-based approach has already been introduced to address problems of sequential data (see, for instance, [19, 40,41,42] and references therein). Here it is important to point out that the classical filter and smoother never pay particular attention to the system noise \(w_n\) when estimating the hidden signal given observations. One can verify this tendency using the graph model; in general, any inference method can be associated with a suitable graphical model, and the class of data assimilation algorithms that can be associated with the graph in Fig. 1 without the noises \(w_n\) encompasses most of the existing filters and smoothers [38, 39].
The goal of this work is to investigate the utility of the previously ignored system noises in improving the state estimation skill of some well-known methods. Specifically, we build a new family of algorithms for which the driving noises play a more prominent and active role compared to the traditional schemes. The key idea for the development is to design a graphical model where, unlike the conventional graph, the process noises are explicitly visible and further integrated into the system variables so that they cannot simply be marginalized out but constitute an essential and integral part of the whole algorithm constructed according to the new graph. For instance, from the graph in Fig. 1, the graph model depicted in Fig. 2 can be obtained by augmenting the driving noise \(w_{n}\) to the system variable \(x_{n-1}\). The resultant graph has the same form as the original graph without noises, and thus one can obtain the counterpart of a classic algorithm by using the variable \(X_n:=(x_{n-1},w_n)^T\) in place of \(x_n\). Here the superscript T denotes transpose.
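As a toy illustration of this augmentation (our own sketch, not from the paper; we assume a scalar linear map \(\phi^n(x,w)=ax+w\) purely for concreteness), propagating the augmented variable \(X_n=(x_{n-1},w_n)^T\) reproduces the original dynamics exactly:

```python
import numpy as np

# Hypothetical scalar model: x_n = a x_{n-1} + w_n, w_n ~ N(0, q).
a, q = 0.9, 0.1
rng = np.random.default_rng(0)

def phi(x, w):
    """Forward map acting on the original state."""
    return a * x + w

def phi_aug(X):
    """The same map, read off the augmented variable X_n = (x_{n-1}, w_n)."""
    x_prev, w = X
    return phi(x_prev, w)

x, traj, traj_aug = 0.5, [], []
for _ in range(5):
    w = rng.normal(0.0, np.sqrt(q))
    traj.append(phi(x, w))          # direct propagation
    traj_aug.append(phi_aug((x, w)))  # propagation via the augmented variable
    x = traj[-1]
```

The two trajectories coincide step by step; the augmentation changes the bookkeeping, not the law of the process.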
The move to the other graphical model appears to bring no advantage, since the basic scenario is unchanged; on the contrary, we will show that the inference schemes derived from the new graph yield more accurate and stable estimations of the unobserved underlying system state. For this, we address two principal modes of data assimilation, and the presentation is organized into two parts accordingly. Section 2 discusses batch assimilation, which involves processing the entire training set in one go, while Sect. 3 studies data assimilation techniques in the sequential fashion. Unlike the batch setting, where one has the opportunity to reuse the data points many times and to obtain an answer independent of the order of the data, sequential Bayesian updating uses each data point on arrival and then discards it before receiving the next point. Section 4 contains a summary of our contributions and a discussion of future work.
2 Batch data assimilation
Section 2.1 introduces the variational inference scheme known as expectation propagation, or EP for short (readers familiar with EP may skip this section). After building a new smoother in Sect. 2.2, we provide theoretical arguments on why the proposed method is likely to outperform the original scheme in Sect. 2.3, and present numerical simulation results supporting our arguments in Sect. 2.4.
2.1 Inference using EP
Suppose that the Markov chain \({\mathcal {X}}_n\) and the associated data \(y_n\) are governed by the transition density and the observation distribution:
Then the conditional distribution \(P({\mathcal {X}}_n \mid Y_N)\) can be approximated using the EP scheme as follows.
Together with the notations \({\mathcal {X}}_{p:q}:= \{ {\mathcal {X}}_p,{\mathcal {X}}_{p+1},\cdots , {\mathcal {X}}_q \}\) and \({\mathcal {X}}_{0:1}:={\mathcal {X}}_1\), one has the factorization \(P({\mathcal {X}}_{1:N} \mid Y_{N}) \propto \prod _{n=1}^N \varphi _n({\mathcal {X}}_{n-1:n})\), where the factor function \(\varphi _n({\mathcal {X}}_{n-1:n}):= P({\mathcal {X}}_n \mid {\mathcal {X}}_{n-1}) P(y_n \mid {\mathcal {X}}_n)\) is partially evaluated at the realization of \(y_n\). The EP method then seeks an approximation of the (conditioned) joint distribution, which is of the form
where \({ {\widehat{\varphi }}_n }\) is a Gaussian approximation of \({ \varphi }_n\). The algorithm further assumes a Gaussian factorization of the element \({ {\widehat{\varphi }}_n=\beta _{n-1}({\mathcal {X}}_{n-1})\alpha _n({\mathcal {X}}_n) }\) so that \(q({\mathcal {X}}_{1:N}) \propto \prod _{n=1:N}q_n( {\mathcal {X}}_n)\) is fully factorized and \(q_n = \alpha _n \beta _n\).
Suppose that all of the factors in (2) are given by some Gaussian functions. Then EP recursively updates the factor \({ {\widehat{\varphi }}_n }\) to a new Gaussian function \({\widehat{\varphi }}'_n\) (hereafter the prime denotes the revised quantity). To do this, one first removes the relevant factor from the approximate distribution (2) and then multiplies in the exact factor \({ {\varphi }_n }\) to obtain \({\widehat{q}}: = \varphi _n \prod _{j \ne n} {\widehat{\varphi }}_j\). One next evaluates the new posterior \(q'\) by minimizing the KL divergence of \({\widehat{q}}\) against \(q'\), given by \(\int {\widehat{q}} \ln \left( {\widehat{q}}/q' \right)\). The result is that \(q'\) comprises the product of factors in which each factor is given by the corresponding marginal of \({\widehat{q}}\). To obtain the refined factor \({\widehat{\varphi }}'_n (= \beta '_{n-1}\alpha '_n)\), one simply divides \(q'\) by \(\prod _{j \ne n} {\widehat{\varphi }}_j\).
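For reference, the KL minimization in this step reduces to moment matching on the Gaussian family, a standard fact in exponential-family variational inference:

\[
q^{*} = \underset{q \ \text{Gaussian}}{\arg \min }\ \int {\widehat{q}} \, \ln \frac{{\widehat{q}}}{q}
\quad \Longleftrightarrow \quad
{\mathbb {E}}_{q^{*}}[{\mathcal {X}}] = {\mathbb {E}}_{{\widehat{q}}}[{\mathcal {X}}]
\ \ \text {and} \ \
\text {Cov}_{q^{*}}({\mathcal {X}}) = \text {Cov}_{{\widehat{q}}}({\mathcal {X}}).
\]

That is, the optimal Gaussian matches the mean and covariance of \({\widehat{q}}\), which is why each factor of \(q'\) is obtained from the corresponding marginal moments.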
Let \({ p'_n({\mathcal {X}}_{n-1:n}):= \alpha _{n-1} \varphi _n \beta _n }\); then the procedure involves an approximate marginalization denoted by
where \(\backslash\) denotes the set difference. Here \(m\) is either \(m=n-1\) or \(m=n\). The “collapse” operator \(\int\) performs projection to a Gaussian and marginalization over the states \({\mathcal {X}}_{n}\) or \({\mathcal {X}}_{n-1}\). From \(q'_m = \alpha _{m}'\beta _m'\), the knowledge of (3) allows one to update \(\beta '_{n-1}\) and \(\alpha '_n\).
The difficulty encountered here is that, when \(p'_n\) is a non-Gaussian function, computation (3) is usually intractable. A particular way of resolving this issue leads to one instance of the EP algorithm. Here we introduce two state-of-the-art techniques for the Gaussian approximation of \(q'_m\). One method is the use of Gaussian cubature [41, 43]. Let \({ Q_n({\mathcal {X}}_{n-1:n}):=\alpha _{n-1}\beta _{n-1} \alpha _n \beta _n }\) be the proposal distribution and let \({ \mu _{Q_n} = \sum _j \lambda _j \delta _{{\mathcal {X}}_{n-1:n}^j} }\) be a cubature rule for this Gaussian function. By virtue of the reweighting \({ \int g {p'_n } \, \textrm{d}{\mathcal {X}}_{n-1:n} = \int g \frac{p'_n }{Q_n } Q_n \, \textrm{d}{\mathcal {X}}_{n-1:n} }\), the discrete measure
is an approximation of the distribution \({ {p}'_n }\). For the approximation of \({q_m'}\), we use the appropriate marginalization of the joint Gaussian distribution whose mean and covariance are given by the ones from (4). We call this scheme EP-GC (expectation propagation–Gaussian cubature). The other method is to collapse the non-Gaussian two-slice posterior belief \(p'_n\) to a Gaussian form by the Laplace approximation [40, 42, 44]. This can be achieved by truncating the Taylor expansion of \(p'_n\) at the second-order term [38]. We call the resulting scheme EP-Laplace.
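The reweighting step can be sketched in one dimension as follows (our own minimal illustration: the "target" standing in for \(p'_n\) is a hypothetical non-Gaussian belief, and the proposal plays the role of \(Q_n\)):

```python
import numpy as np

# Degree-3 spherical-radial cubature for a 1-D Gaussian N(m, P):
# the 2k points m +/- sqrt(k P) with equal weights 1/(2k); here k = 1.
def cubature_points(m, P):
    s = np.sqrt(P)
    return np.array([m + s, m - s]), np.array([0.5, 0.5])

# Hypothetical (unnormalized) non-Gaussian belief standing in for p'_n.
def p_target(x):
    return np.exp(-0.5 * x**2) * np.exp(np.sin(x))

# Gaussian proposal Q_n = N(m_q, P_q).
m_q, P_q = 0.0, 1.0
def q_proposal(x):
    return np.exp(-0.5 * (x - m_q)**2 / P_q) / np.sqrt(2 * np.pi * P_q)

# Reweight the cubature points by p'_n / Q_n, then moment-match to a Gaussian.
pts, lam = cubature_points(m_q, P_q)
w = lam * p_target(pts) / q_proposal(pts)
w = w / w.sum()
mean = np.sum(w * pts)
cov = np.sum(w * (pts - mean)**2)
```

The resulting `mean` and `cov` are then used for the joint Gaussian whose marginalization yields the approximation of \(q'_m\) in the EP-GC scheme.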
Importantly, the update procedure can be cast in terms of local message passing on the graphical model. More precisely, one can regard \({ \alpha _n }\) and \({ \beta _{n-1} }\) as the messages between the factor function and the variable in the graph illustrated in Fig. 3. Although no particular order is required, the usual EP implementation iteratively performs the message updates via forward and backward passes. During the forward pass, the \(\alpha\) messages are updated while the \(\beta\) messages remain fixed; during the backward pass, the roles are reversed. The resulting algorithm reads as follows:

1. One initializes \(\alpha _n({\mathcal {X}}_n)\) and \(\beta _n({\mathcal {X}}_n)\) by suitable Gaussian functions (\(\beta _N \equiv 1\));
2. Until possible convergence, one continues to perform multiple forward–backward passes:
Forward pass: update \({ \alpha _n }\) as \({ \alpha '_n \propto q_n'/\beta _{n} }\) in the ascending order of \({ n \in [1, N] }\);
Backward pass: update \({ \beta _{n-1} }\) as \({ \beta '_{n-1} \propto q_{n-1}'/\alpha _{n-1}}\) in the descending order of \({ n \in [2, N] }\).

No convergence guarantees can be given for EP. It is, however, known that, in case of convergence, the solution minimizes the Bethe free energy, which takes into account two-point correlations between neighboring variables in the chain [38, 39, 45]. Eventually, the distribution \(P({\mathcal {X}}_n \mid Y_N)\) is approximated by \(q_n = \alpha _{n}\beta _n\), that is, the product of the two incoming messages into the circle node associated with the variable \({\mathcal {X}}_n\).
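Although the collapse operation is generally approximate, in a linear-Gaussian chain it is exact and the forward–backward scheme above reduces to Gaussian belief propagation, which converges in a single sweep. The following self-contained sketch (our own illustration; the scalar model and parameter values are hypothetical) stores each message \(\alpha_n\), \(\beta_n\) as an unnormalized Gaussian \(\exp(-Jx^2/2 + hx)\):

```python
import numpy as np

# Linear-Gaussian chain: x_n = a x_{n-1} + w_n, w_n ~ N(0, q);
#                        y_n = x_n + v_n,       v_n ~ N(0, r).
a, q, r = 0.9, 0.2, 0.5
N = 6
rng = np.random.default_rng(1)
y = rng.normal(size=N)
m1, P1 = 0.0, 1.0                        # prior on the first state

J_alpha = np.zeros(N); h_alpha = np.zeros(N)
J_beta = np.zeros(N); h_beta = np.zeros(N)   # beta_N = 1 (flat message)

def two_slice(n):
    """Precision and linear terms of the two-slice belief alpha_{n-1} phi_n beta_n."""
    J11 = J_alpha[n-1] + a**2 / q
    J12 = -a / q
    J22 = 1.0 / q + 1.0 / r + J_beta[n]
    h1 = h_alpha[n-1]
    h2 = y[n] / r + h_beta[n]
    return J11, J12, J22, h1, h2

# Forward pass: alpha_1 absorbs the prior and first likelihood, then recursion.
J_alpha[0] = 1.0 / P1 + 1.0 / r
h_alpha[0] = m1 / P1 + y[0] / r
for n in range(1, N):
    J11, J12, J22, h1, h2 = two_slice(n)
    Jm = J22 - J12**2 / J11              # marginal of x_n (Schur complement)
    hm = h2 - J12 * h1 / J11
    J_alpha[n] = Jm - J_beta[n]          # alpha'_n = q'_n / beta_n
    h_alpha[n] = hm - h_beta[n]

# Backward pass: beta'_{n-1} = q'_{n-1} / alpha_{n-1}.
for n in range(N - 1, 0, -1):
    J11, J12, J22, h1, h2 = two_slice(n)
    Jm = J11 - J12**2 / J22              # marginal of x_{n-1}
    hm = h1 - J12 * h2 / J22
    J_beta[n-1] = Jm - J_alpha[n-1]
    h_beta[n-1] = hm - h_alpha[n-1]

# Smoothing marginals q_n = alpha_n beta_n: means of the Gaussian products.
smooth_mean = (h_alpha + h_beta) / (J_alpha + J_beta)
```

For general non-Gaussian factors, the marginalization lines are replaced by one of the collapse approximations of the previous section (EP-GC or EP-Laplace), and several passes may be needed.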
2.2 Proposed algorithm
Let the forward map for the hidden process and the likelihood function be given by
and let the law of \(x_1:=x_{1\mid 0}\) be known. Here we intend to make use of EP in calculating the smoothing distribution \(P(x_{n\mid N})\) for \(n\) ranging from 1 to \(N\).
Notice that, in order to solve the estimation problem arising from Eq. (5), one typically applies EP to the graph in Fig. 3, for which \({\mathcal {X}}_n\) is given by \(x_n\). It is worth mentioning that this particular graph model can be viewed as the one converted from the graph in Fig. 1 without driving noises. Motivated by the relationship between Figs. 1 and 2, we develop a new version of EP by converting the graph in Fig. 2 into the one in Fig. 3, for which \({\mathcal {X}}_n\) is given by \(X_n =(x_{n-1},w_n)^T\). More precisely, using the notation \({x}_{n} =\phi ^n( {X}_{n} )\) instead of Eq. (5a), state space model (5) is reformulated into
and our proposed algorithm uses the transition kernel and likelihood governing state space model (6) in applying the EP method described in the preceding section.
One difference between the two methods is that, while the naive EP directly approximates \(P(x_n \mid Y_N)\), our suggested scheme yields an approximation of \(P(x_n,w_{n+1} \mid Y_N)\). Hence, in order to obtain the smoothing distribution of the system variable, an additional marginalization needs to be performed. Because the task involves integration against the noises conditioned on data, we refer to the new method as conditioned-noise EP, or CNEP for short.
2.3 Discussion on the prospective performances
Based on an analytic approximation to the distribution function of interest by assuming that it factorizes in a particular way, the EP method carries out variational inference through the iterated local optimization of a Kullback–Leibler (KL) divergence [19, 40]. For the problem of data assimilation, the original EP scheme seeks an approximate factorization of the probability distribution of the variables \(x_n\), given observations. By contrast, our CNEP explicitly takes into account the noise \(w_n\) and regards it as part of the system variable, so that the new method corresponds to seeking an approximate factorization of the joint distribution expressed in terms of the augmented variable \(X_n =(x_{n-1},w_n)^T\), conditioned on data. The critical effect of CNEP is that the reference measure in the KL divergence is replaced by one containing more information on the whole system state, and that the space of candidate functions for the optimization using the KL divergence is extended relative to the one for EP. It is therefore our belief that CNEP enables a closer approach to the true distribution function, leading to an accuracy enhancement.
Furthermore, unlike the conventional EP, where the driving noises are simply integrated out at an early stage before conditioning on data, our CNEP produces solutions through averaging with respect to the conditioned noises; the procedure effectively weakens the potential information loss caused by the marginalization, giving rise to an effect similar to the fully Bayesian approach in machine learning, where all of the involved variables are modeled and estimated according to Bayes’ rule to yield a conditionally averaged result that is robust to the particular set of data [38, 39]. We thus anticipate an improvement of the batch assimilation performance by CNEP, compared to the use of EP, in the aspect of stability.
2.4 Numerical experiment
Let us consider the Poisson tracking model [41, 46]. The dynamic equation governs neural activity unfolding over time, and the spike counts within short time bins are observed. The state space model is given by
where erf\((\cdot )\) represents the error function. Here the transition kernel can be obtained from the forward map \(\phi ^n(x,w)=\Phi (x)+w\). Note that the data \(y_n \in \{ 0,1,2, \cdots \}\) take nonnegative integer values according to a Poisson distribution, and that the likelihood is given by
Because Eq. (8) is a non-Gaussian function, the EP method is well suited to calculating the smoothing distribution, whereas many other algorithms are not directly applicable [41, 46]. Equation (7) thus serves as a good example for comparing the performances of EP and CNEP.
For the parameter values \(\alpha =0.9\), \(\beta =0.2\), \(Q_n=0.1\), \(x_1 \sim {\mathcal {N}}(0.1, 0.1)\), and \(N=60\), we implement EP-GC and the corresponding CNEP method, which we call CNEP-GC. To do this, we use the standard Gaussian cubature formulas of degrees 3 and 5 that can be found in [47], whose support sizes are \(2k\) and \(2k^2+2\), respectively, in dimension \(k\). For the same problem data, we also implement EP-Laplace and the corresponding CNEP method, which we call CNEP-Laplace. We perform a total of 40 independent simulations, and we use the root-mean-square error (RMSE) to compare the various reconstructions of the evolving system state. Note that the RMSE between \(A=\{ A_i\}_{i=1}^L\) and \(B=\{ B_i\}_{i=1}^L\) is defined by
where \(A_i\) and \(B_i\) are vectors.
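In code, one common reading of this error measure (our assumption, since the displayed equation is not reproduced here: the square root of the mean squared Euclidean distance between corresponding vectors) is:

```python
import numpy as np

# RMSE between two length-L sequences of d-dimensional vectors,
# assumed to be sqrt( (1/L) * sum_i ||A_i - B_i||^2 ).
def rmse(A, B):
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    return float(np.sqrt(np.mean(np.sum((A - B) ** 2, axis=-1))))
```

For scalar sequences, each \(A_i\) is a one-dimensional vector and the formula reduces to the usual scalar RMSE.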
We first calculate the RMSE between the true trajectories and the mean of the smoothing solutions for each time \(n\) and depict the resulting RMSE distances as functions of time in the top panels of Fig. 4. This result shows that CNEP yields more accurate estimates than EP. While one can see a notable improvement in the case of EP-GC and CNEP-GC, the difference between the accuracies of EP-Laplace and CNEP-Laplace is less pronounced. It also shows that the use of higher-order cubature is advantageous in the case of the classical EP-GC, but the benefit in the case of CNEP-GC is not significant.
We next calculate the RMSEs averaged over the entire time interval, denoted by aRMSE, for the independent simulations. The mean and variance of the aRMSE are presented in the bottom panels of Fig. 4. The reduced mean further confirms the accuracy improvement of CNEP over EP. We use the variance to address the stability issue; the variance reduction due to CNEP implies that the state estimation results are more robust than those obtained by EP.
Because the simulation shows that the outperformance of CNEP-Laplace over EP-Laplace is not significant in either aspect of accuracy or stability, our noise-conditioned framework appears less advantageous for EP-Laplace. By contrast, it is indeed useful to apply CNEP-GC, rather than EP-GC, for the performance improvement. Considering the increased computational complexity of implementing CNEP-GC using Gaussian cubature of degree 5, we conclude that CNEP-GC using Gaussian cubature of degree 3 is the optimal choice among the smoothing algorithms under consideration in the example of state space model (7).
3 Recursive Bayesian estimation
Section 3.1 describes the representative recursive data assimilation schemes known as the (nonlinear) Kalman filter and smoother. Section 3.2 discusses a number of graph models and the associated filtering/smoothing algorithms. Section 3.3 provides new techniques for the sequential filter and smoother. Section 3.4 specifies how to implement the algorithms concerned, and their performances are compared in Sect. 3.5.
3.1 Sequential filter and smoother
In what follows, our presentation is in the context of the state space model given by
where \(w_n\) and \(\eta _n\) are independently distributed centered Gaussians. Given the law of \(x_1:=x_{1\mid 0}\), the goal is to estimate the distribution of the conditioned variable \(x_{n\mid n'}\). Here \(n\) ranges from 1 to \(N\), and the case of either \(n'=n\) or \(n'=N\) is considered, corresponding to the filtering and smoothing problems, respectively.
3.1.1 Forward filter
The typical approach adopted in most filters for the sequential estimation of the underlying system state is to recursively alternate the time update and the measurement update of the probability distribution [4, 8, 9]. Hereafter, the notation \(\nearrow\)/\(\searrow\) is used to increase/decrease the first index of the conditioned variable by one, and the notation \(\Rightarrow\) to increase the second index by one. For instance, \(x_{n\mid m}\nearrow x_{n+1\mid m}\) and \(x_{n\mid m}\Rightarrow x_{n\mid m+1}\). The pseudocode for the filter then reads as follows.
To achieve a Gaussian approximation of the target distribution, the forward time update is implemented in such a way that one obtains a Gaussian approximation of \((x_{n-1\mid n-1}, x_{n\mid n-1})\) and performs a suitable marginalization. As for the measurement update, one approximates the joint distribution of \((x_{n\mid n-1},y_{n})\) by a Gaussian and applies Bayes’ rule (A1), described in Appendix A, in order to perform the conditioning \(x_{n\mid n-1} \mid y_n =: x_{n\mid n}\).
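The conditioning step uses the standard formula for conditioning a joint Gaussian; a minimal sketch (our own, playing the role of the rule cited as (A1)) is:

```python
import numpy as np

# Conditioning a joint Gaussian: given (x, y) jointly Gaussian with mean
# blocks (mx, my) and covariance blocks (Pxx, Pxy, Pyy), the conditional
# x | y = y_obs is Gaussian with
#   mean = mx + Pxy Pyy^{-1} (y_obs - my),  cov = Pxx - Pxy Pyy^{-1} Pxy^T.
def condition(mx, my, Pxx, Pxy, Pyy, y_obs):
    K = Pxy @ np.linalg.inv(Pyy)   # gain matrix
    return mx + K @ (y_obs - my), Pxx - K @ Pxy.T
```

In the measurement update, `(mx, my, Pxx, Pxy, Pyy)` come from the Gaussian approximation of \((x_{n\mid n-1},y_{n})\), and `y_obs` is the realized observation \(y_n\).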
3.1.2 Backward smoother
While the sequential filter is in progress, the approximate distributions of \(x_{n\mid n}\) are recursively obtained from \(n=1\) to \(n=N\) in increasing order. Once the filtering process is over, the nonlinear Kalman smoother given in Appendix B can be applied to yield Gaussian approximations of \(x_{n\mid N}\) in order of decreasing index, from \(n=N\) to \(n=1\). The pseudocode reads as follows.
Note that the implementation requires the knowledge of the joint Gaussian distribution of \((x_{n\mid n}, x_{n+1\mid n})\), which was obtained and stored during the preceding filtering procedure.
3.2 One-step ahead filter and smoother
Notice that the forward filter and backward smoother introduced in the preceding section are directly relevant to the graph in Fig. 1 without the noises \(w_n\). More precisely, the forward–backward algorithm is the general graph-based method for statistical inference via message passing and reduces to the standard filter and smoother when it complies with the designated graph [38, 39].
In view of the identical form of the two directed graph models in Fig. 1 without noises and in Fig. 2, one is naturally interested in the forward–backward algorithm with respect to the graph in Fig. 2. Let us first describe the corresponding methods. To do so, state space model (9) governing \((x_n,y_n)\) is recast in terms of the variables \((X_{n},y_{n})\):
Denoting \({X}_{n\mid n'}:= {X}_n \mid Y_{n'}\), one makes use of state space model (10) to perform the iteration as follows.
From suitable marginalizations of the outcomes \(P(X_{n+1\mid n})\), the filtering distributions \(P(x_{n\mid n})\) are obtained for \(1 \le n \le N\). Likewise, the smoothing distributions \(P(x_{n\mid N})\) can be calculated according to the following backward iteration.
We next remark that the forward filter making use of reformulation (10) was proposed in the author’s prior work [48]. In the present paper, the algorithm is called the one-step ahead filter because, unlike the standard filter conducting the step \(x_{n\mid n-1} \Rightarrow x_{n\mid n}\), its variant using Eq. (10) produces the filtering law of \(x_{n\mid n}\) from the one-step ahead smoothing law of \(x_{n-1\mid n}\) (similarly, the smoother using Eq. (10) is called the one-step ahead smoother). A body of work has demonstrated the advantage of this alternative path toward the filtering distribution in addressing estimation problems arising in the geophysical sciences [49,50,51,52]. Inspired by the practical relevance of the smoothing-based filter, we would like to improve the current version of our one-step lookahead algorithms. This can be achieved by repeating essentially the same procedure as the one used when Fig. 1 was converted to Fig. 2, and here we can take advantage of the graph-model framework.
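To make the recursion concrete, the following is our own sketch of the one-step ahead filter on a scalar linear-Gaussian model (hypothetical parameter values), where every conditioning step is exact and the scheme must therefore reproduce the standard Kalman filter:

```python
import numpy as np

# Scalar model: x_n = a x_{n-1} + w_n, w_n ~ N(0, q);
#               y_n = c x_n + v_n,     v_n ~ N(0, r).
# Augmented state X_n = (x_{n-1}, w_n)^T, so that x_n = (a, 1) . X_n.
a, c, q, r = 0.9, 1.0, 0.2, 0.5
m0, P0 = 0.0, 1.0                       # prior on the pre-initial state x_0
rng = np.random.default_rng(2)
y = rng.normal(size=8)                  # synthetic observations

A = np.array([[a, 1.0], [0.0, 0.0]])    # time update: X_{n+1} = A X_n + (0, w_{n+1})^T
Qa = np.diag([0.0, q])
g = c * np.array([a, 1.0])              # observation: y_n = g . X_n + v_n

m, P = np.array([m0, 0.0]), np.diag([P0, q])   # X_1 | Y_0
filt_mean, filt_var = [], []
for yn in y:
    # Measurement update X_{n|n-1} => X_{n|n}: condition the joint Gaussian
    # of (X_n, y_n) on the realized observation. This conditions the noise
    # w_n on data, which is the distinguishing feature of the scheme.
    S = g @ P @ g + r
    K = P @ g / S
    m = m + K * (yn - g @ m)
    P = P - np.outer(K, g @ P)
    # Time update X_{n|n} -> X_{n+1|n}; the first component of X_{n+1|n}
    # is x_n | Y_n, i.e., the filtering marginal is read off here.
    m, P = A @ m, A @ P @ A.T + Qa
    filt_mean.append(m[0])
    filt_var.append(P[0, 0])
```

In the nonlinear case the exact Gaussian steps are replaced by the cubature approximations of Sect. 3.4, and the conditioned noise mean is then generally nonzero, which produces the nudging effect discussed below.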
Specifically, one can read from the graph in Fig. 2 that, in carrying out the time update of the one-step ahead filter to quantify the uncertainty propagated by the statistical model, the prediction \(x_{n-1\mid n} \nearrow x_{n\mid n}\) is pushed forward according to the law of \(w_{n}\mid y_{n}\). This driving noise conditioned on the observation possesses a nonzero mean, giving rise to an effective form of importance sampling, so that the filtered solution tends to be nudged toward the true system trajectory [48]. Now the transition from Fig. 1 to Fig. 2 creates a momentum for us to proceed to the new graphical model presented in Fig. 5, and the philosophy behind this attempt is to strengthen the nudging effect; unlike the one-step ahead filter, the time update in the new graph drives the evolving estimate with not only the conditioned noises \(w_{n}\mid y_{n}\) but \(w_{n-1}\mid y_{n}\) as well. Our concern is whether the noises conditioned on further future observations can give rise to a more accurate estimation of the hidden state. We address this issue after building the insight into a practical algorithm in the next section.
3.3 Proposed algorithm: Two-step ahead filter and smoother
Here we formulate the forward filter and backward smoother associated with the graph in Fig. 5. Let \({\mathbb {X}}_n:= ({X}_{n-1}, w_{n})^T =(x_{n-2}, w_{n-1},w_{n})^T\) and let Eq. (10a) be denoted by \({X}_{n}:= \Phi ^n({\mathbb {X}}_{n})\); then state space models (9), (10) can be posed as the one governing the joint variable \(({\mathbb {X}}_{n},y_n)\):
Using the notation \({\mathbb {X}}_{n\mid n'}:= {\mathbb {X}}_n \mid Y_{n'}\), the filtering of \(x_{n\mid n}\) can be obtained via the following two-layer procedure. On the one hand, the recursive estimation \({\mathbb {X}}_{n\mid n-1} \Rightarrow {\mathbb {X}}_{n\mid n} \nearrow {\mathbb {X}}_{n+1\mid n}\) is performed. On the other hand, the time update \({\mathbb {X}}_{n+1\mid n} \nearrow {\mathbb {X}}_{n+2\mid n}\) is carried out for filtering. This additional step is necessary because the law of \(x_{n\mid n'}\) comes from the marginalization of \({\mathbb {X}}_{n+2\mid n'}\). Putting it together, the pseudocode reads as follows.
The fixed-interval smoothing distributions \(P(x_{n\mid N})\) can be obtained from the following iteration and from the marginalization of \({\mathbb {X}}_{n+2\mid N}\).
The proposed algorithms are called the two-step ahead filter and smoother. In Fig. 6, the conditioned variables \(x_{n\mid n'}\) resulting from our one-step and two-step ahead filters are illustrated to emphasize the algorithmic difference. As in [53], the two-step ahead filter produces the law of \(x_{n\mid n}\) through the smoothing distributions of \(x_{n-2\mid n}\) and \(x_{n-1\mid n}\). However, the presence of conditioned system noises distinguishes our method from the prior works, and the framework can account for the prospective outperformance of the derived algorithms in a more comprehensible fashion. Though the approach can straightforwardly be generalized to higher-order procedures, we do not proceed further due to the increasing complexity.
3.4 Implementation using cubature measure
Notice that the equations in state space models (9), (10), and (11) can be cast into the common form \({\textsf{Y}}=\Phi ({\textsf{X}})\) for an appropriate function \(\Phi (\cdot )\) and a Gaussian random variable \({\textsf{X}}\). For instance, one can regard \({\textsf{X}} = ( X_{n-1},w_n)\) and \({\textsf{Y}} = X_{n}\) in the case of Eq. (10a), and \({\textsf{X}} = ( X_{n},\eta _n)\) and \({\textsf{Y}} = y_{n}\) in the case of Eq. (10b). Therefore, for the implementation of the algorithms discussed so far, it is sufficient to define how to obtain a Gaussian approximation of \({\textsf{Z}}=({\textsf{X}},{\textsf{Y}})\).
The method adopted here is to pass a set of weighted points known as cubature points through the function and fit a Gaussian to the resulting transformed points. To be precise, let \(\mathsf { \mu _X = \sum _i \lambda _i \delta _{X^i} }\) be the cubature with respect to \({\textsf{X}}\), that is, a discrete measure possessing the same moments as the distribution of \({\textsf{X}}\) up to a certain degree. Let \(\mathsf { Z^i=(X^i,\Phi (X^i)) }\); then the mean and covariance of \(\mathsf { \mu _{Z} = \sum _i \lambda _i \delta _{Z^i} }\) are given by
The probability of \({\textsf{Z}}\) is approximated by the normal distribution \({\mathcal {N}}({\textsf{M}},\mathsf {\Sigma })\). The method is easy to implement, which offers a computational advantage, and it further ensures a good degree of accuracy in filtering and smoothing applications [22, 47, 54, 55].
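A minimal sketch of this transform, assuming the degree-3 spherical-radial cubature rule (the \(2k\)-point rule for dimension \(k\); the function name is ours):

```python
import numpy as np

# Degree-3 spherical-radial cubature approximation of Z = (X, Phi(X)):
# for X ~ N(m, P) in dimension k, propagate the 2k equally weighted points
# m +/- sqrt(k) S e_i (with P = S S^T) through Phi, then moment-match.
def cubature_transform(m, P, Phi):
    k = len(m)
    S = np.linalg.cholesky(P)
    U = np.sqrt(k) * np.hstack([S, -S])            # k x 2k offsets
    X_pts = m[:, None] + U                         # cubature points for X
    Z_pts = np.vstack([X_pts,
                       np.array([Phi(x) for x in X_pts.T]).T])
    M = Z_pts.mean(axis=1)                         # equal weights 1/(2k)
    D = Z_pts - M[:, None]
    Sigma = D @ D.T / (2 * k)
    return M, Sigma
```

For a linear map \(\Phi\), the rule is exact for the mean and covariance, so the transform reproduces the usual Kalman prediction blocks; for nonlinear maps it provides the Gaussian approximation used throughout this section.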
In the present paper, we call the filter and smoother introduced in Sect. 3.1 that use the cubature measure the cubature Kalman filter (CKF) and the cubature Kalman smoother (CKS). Similarly, we call the filters introduced in Sects. 3.2 and 3.3 that use the cubature measure the conditioned-noise cubature Kalman filter of the first order (CNCKF) and of the second order (CN2CKF). The corresponding smoothers are named CNCKS and CN2CKS.
3.5 Numerical simulation
Here we numerically perform a comparative analysis of the algorithms defined in the preceding section. Specifically, we are interested in the accuracy and stability of the cubature-based Gaussian approximation filters (CKF, CNCKF, CN2CKF) and smoothers (CKS, CNCKS, CN2CKS). Our testbed consists of two benchmark examples in the context of sequential filtering and smoothing.
3.5.1 Target tracking
Consider a model air-traffic monitoring scenario, in which an aircraft executes a maneuvering turn in a horizontal plane at an unknown turn rate \(\Omega _n\) at time \(n\) [22, 48, 55]. The dynamical system is given by
where \({x}_n =( \texttt{x}_n, \dot{\texttt{x}}_n, \texttt{y}_n, \dot{\texttt{y}}_n, \Omega _n )^T\); \((\texttt{x}_n, \texttt{y}_n)\) and \((\dot{\texttt{x}}_n, \dot{\texttt{y}}_n)\) are the position and velocity at time n, respectively. The system noise \(w_n\) is distributed according to a centered Gaussian with covariance
The measurement equation is given by
and the noise covariance is \(R_n = \text {diag}(10^2, 10^{-5} )\). The interobservation time is \(\Delta t = 1\) (hereafter the units of physical quantities are omitted for brevity).
The simulation results are based on 200 independent Monte Carlo runs. In each run, the initial state of the true signal is a draw from the normal distribution \(x_1 \sim {\mathcal {N}}\left( ( 10^3, 3 \cdot 10^2, 10^3, 0, \frac{3\pi }{180})^T, \text {diag}( [10^2, 10, 10^2, 10, 10^{-4} ] ) \right)\), and the sequential observations over the time interval \(1 \le n \le 200\) are randomly generated.
We obtain a single trajectory of \(x_n\) over time by solving (12) and calculate the filtered/smoothed estimations using the observational data from (13). Figure 7 shows the RMSEs at each time step with respect to position, velocity, and turn rate. We see that (i) the performance of CNCKF/CNCKS is less sensitive to the cubature order than that of CKF/CKS, and that (ii) the algorithms obtained by applying the conditioned-noise framework uniformly outperform the naive methods. Figure 8 depicts the mean and variance of the aRMSE. From the reduced means in the case of our algorithms, compared to the classical schemes, we conclude that there are improvements in accuracy. Similarly, the reduced variances imply that the new algorithms are more stable than the classical schemes. Because the performances of CNCKS and CN2CKS are similar to each other, taking into account the computational burden, CNCKS is preferred among the suggested algorithms.
3.5.2 Ballistic target
Let us consider the problem of tracking a ballistic target under the influence of drag and gravity [54, 56, 57]. Let \(x_n=(x_n^1,x_n^2,x_n^3)^T\) be the state vector, where \(x_n^1\) and \(x_n^2\) are the altitude and velocity, respectively, and \(x_n^3\) is a constant ballistic coefficient. The equation of motion is given by
where \(\delta =0.5\) is the integration time, and \(\gamma = 1.49\times 10^{-4}\) and \(g = 9.81\) are the drag and gravity constants, respectively.
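A common discretization of this falling-body benchmark, included here as an illustrative assumption rather than a reproduction of (14), reads

\[
x_{n+1} = \begin{pmatrix}
x_n^1 - \delta \, x_n^2 \\
x_n^2 - \delta \left( e^{-\gamma x_n^1} (x_n^2)^2 \, x_n^3 - g \right) \\
x_n^3
\end{pmatrix},
\]

with the altitude decreasing at the (downward) velocity \(x_n^2\), a drag deceleration proportional to \(e^{-\gamma x_n^1} (x_n^2)^2\), and no explicit driving noise.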
The measurement equation is given by
where M is the horizontal distance, and H determines the radar location. The system is characterized by the parameters \(H=10^3\), \(M=10^4\) and \(R_n=30^2\). The true initial state is \(x_1^*=(61 \cdot 10^3, 3048, 4.49 \cdot 10^{4})^T\), and the initial state density is \({\mathcal {N}}\left( (62 \cdot 10^3, 3400, 10^{5})^T, \text {diag}([10^6, 10^4,10^{4}]) \right)\).
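With a radar at altitude H and horizontal distance M from the target's line of fall, the standard range measurement, stated here as the presumed form of (15), is

\[
y_n = \sqrt{M^2 + (x_n^1 - H)^2} + v_n, \qquad v_n \sim {\mathcal {N}}(0, R_n).
\]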
We generate a single trajectory of \(x_n\) over time by solving (14) and obtain the filtered/smoothed estimates using the observational data from (15). We perform 1800 independent simulations. Figures 9 and 10 demonstrate the outperformance of the conditioned-noise framework over the classical methods. Figure 9 depicts the RMSE values as a function of time, showing that (i) CN2CKF is always better than CNCKF, and (ii) CN2CKS is in general better than CNCKS but occasionally does not lead to an improved performance. Quite interestingly, this numerical simulation reveals that the outperformance of our framework holds even for a system without explicit driving noise. This result accords with the prior work in [53].
4 Conclusion
This paper considers the design of batch smoothers and sequential filters/smoothers for discrete-time nonlinear systems with Gaussian noise. The new family of state estimation algorithms is proposed as follows:

1.
For state space model (5), we develop the conditioned-noise version of expectation propagation in Sect. 2.2;

2.
For state space model (9), we develop the conditioned-noise filters and smoothers in Sect. 3.

Note that the development is achieved by reformulating the original state space model into one governing the augmented variable comprising the system variable and the driving noise. The implementation of the proposed algorithms is essentially the same as that of the existing methods; the difference is that the conditioned-noise expectation propagation makes use of (6) in place of (5), and the conditioned-noise filter/smoother makes use of (10) and (11) in place of (9). The numerical simulations performed in Sects. 2.4 and 3.5 confirm that, in all of the benchmark examples studied in the areas of batch and sequential data assimilation, the filters and smoothers developed within the conditioned-noise framework uniformly outperform the corresponding classical methods in both accuracy and stability. We emphasize that this numerical result is in accordance with the theoretical reasoning on the role of the conditioned noise in Sects. 2.3 and 3.2.
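As a minimal illustration of the augmentation idea (a sketch on a toy linear model, not the paper's algorithm), one can run an ordinary Kalman filter on the augmented variable \(z_n = (x_n, w_n)\), so that the driving noise is carried as a state component and is conditioned on the data alongside the system variable:

```python
import numpy as np

# Toy scalar random walk (illustrative assumption, not the paper's benchmark):
#   x_{n+1} = x_n + w_n,   y_n = x_n + v_n
A = np.array([[1.0]]); Q = np.array([[0.5]])   # dynamics and driving-noise covariance
H = np.array([[1.0]]); R = np.array([[1.0]])   # observation model

# Augmented model for z_n = (x_n, w_n): z_{n+1} = Fa z_n + (0, w_{n+1})
Fa = np.block([[A, np.eye(1)], [np.zeros((1, 2))]])
Qa = np.block([[np.zeros((1, 2))], [np.zeros((1, 1)), Q]])
Ha = np.hstack([H, np.zeros((1, 1))])          # y_n observes only the x-component

rng = np.random.default_rng(0)
m, P = np.zeros(2), np.diag([1.0, 0.5])        # Gaussian prior on (x_1, w_1)
x_true = 0.0
for n in range(50):
    y = x_true + rng.normal(0.0, np.sqrt(R[0, 0]))
    # measurement update of the augmented state
    S = Ha @ P @ Ha.T + R
    K = P @ Ha.T @ np.linalg.inv(S)
    m = m + K @ (np.array([y]) - Ha @ m)
    P = P - K @ S @ K.T
    p_upd = P[0, 0]                            # filtered variance of x_n
    # time update: truth and filter both advance one step
    x_true += rng.normal(0.0, np.sqrt(Q[0, 0]))
    m = Fa @ m
    P = Fa @ P @ Fa.T + Qa
```

In this linear toy model the filtered marginal of \(w_n\) stays at its prior, since no correlation with \(y_n\) has built up yet; it is in smoothing and in the nonlinear sequential setting, where later observations do inform \(w_n\), that the conditioned-noise algorithms depart from the classical ones.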
Throughout the text, we investigate Gaussian approximations of the target probability distribution. We believe that the conditioned-noise framework remains competitive even when the distribution function is parametrized differently. In future work, therefore, we plan to develop Gaussian mixture and sequential Monte Carlo algorithms for filtering and smoothing that are similar in spirit to the conditioned-noise Gaussian filters and smoothers proposed in this work. With the goal of extending the applicability of the graph-based method beyond the scope explored in this paper, we also plan to pursue the direction in which an improved performance of data assimilation is sought by introducing new graphical models with particular characteristics and considering inference schemes on the graph.
Finally, we discuss the pathway from this work to impact in academia and industry. Our effort to shed new light on the driving noises can benefit the academic community by creating research momentum in devising data assimilation techniques on the basis of graphical models. By enriching the library of filters/smoothers directly applicable to real-world problems, our results will also be beneficial for industrial progress, where the need for accurate and stable filtering/smoothing schemes is paramount.
Availability of data and materials
Not applicable.
References
S. Särkkä, V.V. Viikari, M. Huusko, K. Jaakkola, Phase-based UHF RFID tracking with nonlinear Kalman filtering and smoothing. IEEE Sens. J. 12(5), 904–910 (2012)
D. Titterton, J.L. Weston, Strapdown Inertial Navigation Technology, vol. 17 (IET, 2004)
S. Godsill, P. Rayner, O. Cappé, Digital Audio Restoration (Springer, 2002)
B.D.O. Anderson, J.B. Moore, Optimal Filtering, vol. 11 (Prentice-Hall, Englewood Cliffs, NJ, 1979)
R.E. Kalman et al., A new approach to linear filtering and prediction problems. J. Basic Eng. 82(1), 35–45 (1960)
R.E. Kalman, R.S. Bucy, New results in linear filtering and prediction theory. J. Basic Eng. 83(3), 95–108 (1961)
S. Särkkä, Bayesian Filtering and Smoothing, vol. 3 (Cambridge University Press, 2013)
H. Kushner, Approximations to optimal nonlinear filters. IEEE Trans. Autom. Control 12(5), 546–556 (1967)
A. Jazwinski, Stochastic Processes and Filtering Theory, Mathematics in science and engineering, vol. 64. (Academic Press, San Diego, California, 1970)
A.F. Bennett, Inverse Methods in Physical Oceanography (Cambridge University Press, 1992)
E. Kalnay, Atmospheric Modeling, Data Assimilation, and Predictability (2003)
D.S. Oliver, A.C. Reynolds, N. Liu, Inverse Theory for Petroleum Reservoir Characterization and History Matching (Cambridge University Press, 2008)
A. Doucet, N. De Freitas, N. Gordon, Sequential Monte Carlo Methods in Practice (Springer Verlag, 2001)
A. Gelb, Applied Optimal Estimation (MIT Press, 1974)
G. Evensen, Data Assimilation: The Ensemble Kalman Filter (Springer Verlag, 2009)
N.J. Gordon, D.J. Salmond, A.F.M. Smith, in Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation, IEE Proceedings F Radar and Signal Processing, vol. 140 (IET, 1993), pp. 107–113
R. Chen, J.S. Liu, Mixture Kalman filters. J. R. Stat. Soc. Ser. B (Statistical Methodology) 62(3), 493–508 (2000)
A.S. Stordal, H.A. Karlsen, G. Nævdal, H.J. Skaug, B. Vallès, Bridging the ensemble Kalman filter and particle filters: the adaptive Gaussian mixture filter. Comput. Geosci. 15(2), 293–305 (2011)
T.P. Minka, in Expectation Propagation for Approximate Bayesian inference, Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, (Morgan Kaufmann Publishers Inc, 2001), pp. 362–369
S. Razali, K. Watanabe, S. Maeyama, K. Izumi, in An Unscented Rauch-Tung-Striebel Smoother for a Bearing-Only Tracking Problem, ICCAS 2010, (IEEE, 2010), pp. 1281–1286
S.J. Julier, J.K. Uhlmann, Unscented filtering and nonlinear estimation. Proc. IEEE 92(3), 401–422 (2004)
I. Arasaratnam, S. Haykin, Cubature Kalman filters. IEEE Trans. Autom. Control 54(6), 1254–1269 (2009)
J.H. Kotecha, P.M. Djuric, Gaussian sum particle filtering. IEEE Trans. Signal Process. 51(10), 2602–2612 (2003)
M.S. Arulampalam, S. Maskell, N. Gordon, T. Clapp, A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 50(2), 174–188 (2002)
O. Cappé, E. Moulines, T. Rydén, Inference in Hidden Markov Models (Springer, 2005)
I. Hoteit, D. Pham, G. Triantafyllou, G. Korres, A new approximate solution of the optimal nonlinear filter for data assimilation in meteorology and oceanography. Mon. Weather Rev. 136(1), 317–334 (2008)
I. Hoteit, X. Luo, D.T. Pham, Particle Kalman filtering: a nonlinear Bayesian framework for ensemble Kalman filters*. Mon. Weather Rev. 140, 528–542 (2012)
I.S. Weir, Fully Bayesian reconstructions from singlephoton emission computed tomography data. J. Am. Stat. Assoc. 92(437), 49–60 (1997)
R.P. Barry, J.M. Ver Hoef, Black-box kriging: spatial prediction without specifying variogram models. J. Agric. Biol. Environ. Stat. 297–322 (1996)
R. Glaser, G. Johannesson, S. Sengupta, B. Kosovic, S. Carle, G. Franz, R. Aines, J. Nitao, W. Hanley, A. Ramirez, et al., Stochastic engine final report: applying Markov chain Monte Carlo methods with importance sampling to large-scale data-driven simulation. Technical report (Lawrence Livermore National Lab., Livermore, CA, 2004)
H.K. Lee, D.M. Higdon, Z. Bi, M.A. Ferreira, M. West, Markov random field models for high-dimensional parameters in simulations of fluid flow in porous media. Technometrics 44(3), 230–241 (2002)
P.S. Craig, M. Goldstein, J.C. Rougier, A.H. Seheult, Bayesian forecasting for complex systems using computer simulators. J. Am. Stat. Assoc. 96(454), 717–729 (2001)
B.K. Hegstad, O. Henning et al., Uncertainty in production forecasts based on well observations, seismic data, and production history. Spe J. 6(04), 409–424 (2001)
P.K. Kitanidis, Parameter uncertainty in estimation of spatial functions: Bayesian analysis. Water Resour. Res. 22(4), 499–507 (1986)
F. Liu, M. Bayarri, J. Berger, R. Paulo, J. Sacks, A Bayesian analysis of the thermal challenge problem. Comput. Methods Appl. Mech. Eng. 197(29–32), 2457–2466 (2008)
D.M. Schmidt, J.S. George, C.C. Wood, Bayesian inference applied to the electromagnetic inverse problem. Hum. Brain Mapp. 7(3), 195–212 (1999)
J. Wang, N. Zabaras, Hierarchical Bayesian models for inverse problems in heat conduction. Inverse Probl. 21(1), 183 (2004)
C.M. Bishop et al., Pattern Recognition and Machine Learning, vol. 4 (Springer, New York, 2006)
K.P. Murphy, Machine Learning: A Probabilistic Perspective (MIT press, 2012)
A. Ypma, T. Heskes, Novel approximations for inference in nonlinear dynamical systems using expectation propagation. Neurocomputing 69(1), 85–99 (2005)
B.M. Yu, K.V. Shenoy, M. Sahani, in Expectation Propagation for Inference in Nonlinear Dynamical Models with Poisson Observations, Nonlinear Statistical Signal Processing Workshop, (IEEE, 2006) pp. 83–86
M. Deisenroth, S. Mohamed, Expectation propagation in Gaussian process dynamical systems. Adv. Neural Inf. Process. Syst. 25 (2012)
O. Zoeter, A. Ypma, T. Heskes, in Improved Unscented Kalman Smoothing for Stock Volatility Estimation, Proceedings of the 2004 14th IEEE Signal Processing Society Workshop Machine Learning for Signal Processing, (IEEE, 2004) pp. 143–152
M.P. Deisenroth, R.D. Turner, M.F. Huber, U.D. Hanebeck, C.E. Rasmussen, Robust filtering and smoothing with Gaussian processes. IEEE Trans. Autom. Control 57(7), 1865–1871 (2011)
T.P. Minka, From hidden Markov models to linear dynamical systems. Tech. Rep. 531 (Vision and Modeling Group of Media Lab, MIT, 1999)
B. Yu, A. Afshar, G. Santhanam, S.I. Ryu, K. Shenoy, M. Sahani, Extracting dynamical structure embedded in neural activity. Adv. Neural Inf. Process. Syst. 18, 1545 (2006)
B. Jia, M. Xin, Y. Cheng, High-degree cubature Kalman filter. Automatica 49(2), 510–518 (2013)
W. Lee, C. Farmer, Data assimilation by conditioning of driving noise on future observations. IEEE Trans. Signal Process. 62(15), 3887–3896 (2014)
M.E. Gharamti, B. Ait-El-Fquih, I. Hoteit, in A One-Step-Ahead Smoothing-Based Joint Ensemble Kalman Filter for State-Parameter Estimation of Hydrological Models, Dynamic Data-Driven Environmental Systems Science, (Springer, 2015), pp. 207–214
M. Gharamti, B. Ait-El-Fquih, I. Hoteit, An iterative ensemble Kalman filter with one-step-ahead smoothing for state-parameters estimation of contaminant transport models. J. Hydrol. 527, 442–457 (2015)
B. Ait-El-Fquih, M. El Gharamti, I. Hoteit, A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology. Hydrol. Earth Syst. Sci. 20, 3289–3307 (2016)
N.F. Raboudi, B. Ait-El-Fquih, I. Hoteit, Ensemble Kalman filtering with one-step-ahead smoothing. Mon. Weather Rev. 146(2), 561–581 (2018)
F. Desbouvries, Y. Petetin, B. Ait-El-Fquih, Direct, prediction- and smoothing-based Kalman and particle filter algorithms. Signal Process. 91(8), 2064–2077 (2011)
I. Arasaratnam, S. Haykin, Cubature Kalman smoothers. Automatica 47(10), 2245–2250 (2011)
B. Jia, M. Xin, Rauch-Tung-Striebel High-Degree Cubature Kalman Smoother, American Control Conference (ACC), (IEEE, 2013), pp. 2472–2477
S.J. Julier, J.K. Uhlmann, A general method for approximating nonlinear transformations of probability distributions. Technical report, Robotics Research Group, Department of Engineering Science (University of Oxford, 1996)
B. Ristic, S. Arulampalam, N. Gordon, Beyond the Kalman filter. IEEE Aerosp. Electron. Syst. Mag. 19(7), 37–38 (2004)
Acknowledgements
The author thanks the anonymous referees for their helpful comments and suggestions, which indeed contributed to improving the quality of the publication.
Funding
This work is supported by the Startup Research Grant of Beijing Normal University (No. 28704310432107), the United International College Startup Research Fund (No. UICR070004022) and the National Natural Science Fund of China (NSFC) Research Fund for International Excellent Young Scientists (No. 12250610190).
Author information
Contributions
Not applicable.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: Bayes’ rule for conditional Gaussian
Let \(Z = \left[ \begin{array}{c} X \\ Y \end{array}\right]\) be distributed as a Gaussian with mean \(\left[ \begin{array}{c} {\bar{x}} \\ {\bar{y}} \end{array}\right]\) and covariance \(\left[ \begin{array}{cc} \Sigma _{xx} & \Sigma _{xy} \\ \Sigma _{yx} & \Sigma _{yy} \end{array}\right]\). Then Bayes' rule asserts that the law of \(X\vert Y\) with \(Y=y\) is Gaussian with mean and covariance given by
\[ \bar{x} + \Sigma _{xy}\Sigma _{yy}^{-1}(y-\bar{y}) \quad \text {and} \quad \Sigma _{xx} - \Sigma _{xy}\Sigma _{yy}^{-1}\Sigma _{yx}, \]
respectively [4].
Appendix B: Nonlinear Kalman smoother
Let \({\textsf{x}}_{n|n'}:={\textsf{x}}_{n} \vert Y_{n'}\); then, based on the Gaussian assumption \({\textsf{x}}_{n|n'} \sim {\mathcal {N}}(\bar{{\textsf{x}}}_{n|n'},C_{n|n'})\), one can derive the backward recursion
\[ \bar{{\textsf{x}}}_{n|N} = \bar{{\textsf{x}}}_{n|n} + G_n \left( \bar{{\textsf{x}}}_{n+1|N} - \bar{{\textsf{x}}}_{n+1|n} \right), \qquad C_{n|N} = C_{n|n} + G_n \left( C_{n+1|N} - C_{n+1|n} \right) G_n^T, \tag{B2} \]
where \(G_n = D_{n+1}C_{n+1|n}^{-1}\) and \(D_{n+1} = \text {cov}({\textsf{x}}_{n|n}, {\textsf{x}}_{n+1|n})\) [4, 7]. Recurrence relation (B2) enables the backward time update \({\textsf{x}}_{n+1|N}\searrow {\textsf{x}}_{n|N}\), provided the joint Gaussian law of \(({\textsf{x}}_{n|n}, {\textsf{x}}_{n+1|n})\) is known.
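In the linear-Gaussian case, where \(D_{n+1} = C_{n|n}A^T\), recursion (B2) reduces to the familiar Rauch-Tung-Striebel smoother. A minimal scalar sketch follows; the model and its numerical values are illustrative assumptions.

```python
import numpy as np

# Toy scalar model: x_{n+1} = a x_n + w_n,  y_n = x_n + v_n  (illustrative values)
a, q, r, N = 0.9, 0.3, 1.0, 60
rng = np.random.default_rng(1)
x, ys = 0.0, []
for _ in range(N):
    ys.append(x + rng.normal(0.0, np.sqrt(r)))   # noisy observation of x_n
    x = a * x + rng.normal(0.0, np.sqrt(q))      # propagate the truth

# forward Kalman filter, storing the filtered moments (mean, variance) at each n
m_f, P_f = np.zeros(N), np.zeros(N)
m, P = 0.0, 1.0                                  # Gaussian prior on x_1
for n in range(N):
    k = P / (P + r)                              # Kalman gain
    m, P = m + k * (ys[n] - m), (1.0 - k) * P    # measurement update
    m_f[n], P_f[n] = m, P
    m, P = a * m, a * a * P + q                  # time update to n+1

# backward pass of (B2): G_n = D_{n+1} C_{n+1|n}^{-1} = a C_{n|n} / C_{n+1|n}
m_s, P_s = m_f.copy(), P_f.copy()
for n in range(N - 2, -1, -1):
    P_pred = a * a * P_f[n] + q                  # predicted variance C_{n+1|n}
    G = a * P_f[n] / P_pred
    m_s[n] = m_f[n] + G * (m_s[n + 1] - a * m_f[n])
    P_s[n] = P_f[n] + G * G * (P_s[n + 1] - P_pred)
```

As expected from the theory, the smoothed variances never exceed the filtered ones, since each backward step incorporates the observations made after time n.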
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Lee, W. New graphical models for sequential data and the improved state estimations by data-conditioned driving noises. EURASIP J. Adv. Signal Process. 2024, 50 (2024). https://doi.org/10.1186/s13634-024-01145-z