Online sequential Monte Carlo smoother for partially observed diffusion processes
 Pierre Gloaguen^{1},
 Marie-Pierre Étienne^{1} and
 Sylvain Le Corff^{2}
https://doi.org/10.1186/s13634-018-0530-3
© The Author(s) 2018
Received: 5 March 2017
Accepted: 11 January 2018
Published: 2 February 2018
Abstract
This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. It relies on a new sequential Monte Carlo method which computes such approximations online, i.e., as the observations are received, with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.
1 Introduction
This paper introduces a new algorithm to solve the smoothing problem for hidden Markov models (HMMs) whose hidden state is a solution to a stochastic differential equation (SDE). These models are referred to as partially observed diffusion (POD) processes in [27]. The hidden state process (X_{ t })_{t≥0} is assumed to be a solution to an SDE, and the only available information is given by noisy observations (Y_{ k })_{0≤k≤n} of the states (X_{ k })_{0≤k≤n} (where X_{ k } stands for \(X_{t_{k}}\)) at some discrete time points (t_{ k })_{0≤k≤n}. The bivariate stochastic process {(X_{ k },Y_{ k })}_{0≤k≤n} is a state-space model such that, conditionally on the state sequence (X_{ k })_{0≤k≤n}, the observations (Y_{ k })_{0≤k≤n} are independent and, for all 0≤ℓ≤n, the conditional distribution of Y_{ ℓ } given {X_{ k }}_{0≤k≤n} depends on X_{ ℓ } only.
Statistical inference for HMMs often requires solving Bayesian filtering and smoothing problems, i.e., computing the posterior distributions of sequences of hidden states given observations. The filtering problem refers to the estimation, for each 0≤k≤n, of the distribution of the hidden state X_{ k } given the observations (Y_{0},…,Y_{ k }). Smoothing stands for the estimation of the distribution of the sequence of states (X_{ k },…,X_{ p }) given observations (Y_{0},…,Y_{ ℓ }) with 0≤k≤p≤ℓ≤n. These posterior distributions are crucial to compute maximum likelihood estimators of unknown parameters using the observations (Y_{0},…,Y_{ n }) only. For instance, the E-step of the EM algorithm introduced in [9] boils down to the computation of a conditional expectation of an additive functional of the hidden states given all the observations up to time n. Similarly, by Fisher’s identity, recursive maximum likelihood estimates may be computed using the gradient of the log-likelihood, which can be written as the conditional expectation of an additive functional of the hidden states. See [7, Chapters 10 and 11], [19, 23, 24, 31] for further references on the use of these smoothed expectations of additive functionals for maximum likelihood parameter inference in latent data models.
However, in most cases, these expectations cannot be computed explicitly. Sequential Monte Carlo (SMC) methods are popular algorithms to approximate smoothing distributions with random particles associated with importance weights. [17, 22] introduced the first particle filters and smoothers for state-space models by combining importance sampling steps to propagate particles with resampling steps to duplicate or discard particles according to their importance weights. In the case of HMMs, approximations of the smoothing distributions may be obtained using the forward filtering backward smoothing algorithm (FFBS) and the forward filtering backward simulation algorithm (FFBSi) developed respectively in [11, 18, 22] and [16]. Both algorithms first require a forward pass which produces a set of particles and weights approximating the sequence of filtering distributions up to time n. Then, a backward pass is performed to compute new weights (FFBS) or sample trajectories (FFBSi) in order to approximate the smoothing distributions. Recently, [28] proposed a new SMC algorithm, the particle-based rapid incremental smoother (PaRIS), to approximate on-the-fly (i.e., using the observations as they are received) smoothed expectations of additive functionals. Unlike the FFBS algorithm, its complexity grows only linearly with the number of particles N and, contrary to the FFBSi algorithm, no backward pass is required. One of the best features of the PaRIS algorithm is that it may be implemented online, using the observations (Y_{ k })_{k≥0} as they are received, without any increasing storage requirements.
Unfortunately, these methods cannot be applied directly to POD processes since some elementary quantities, such as the transition densities of the hidden states, are not available explicitly. In the context of SDEs, discretization procedures may be used to approximate transition densities: for instance, the classical Euler-Maruyama method; the Ozaki discretization, which proposes a linear approximation of the drift coefficient between two observations [29, 32]; or Gaussian-based approximations using Taylor expansions of the conditional mean and variance of an observation given the observation at the previous time step [20, 21, 33]. Other approaches based on Hermite polynomial expansions were also introduced by [1–3] and were recently extended in several directions, see [25] and all the references on the approximation of transition densities therein. However, even the most recent discretization-based methods induce a systematic bias in the approximation of the transition densities, see for instance [8].
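For concreteness, such a discretization-based approximation can be sketched in a few lines. The snippet below is a minimal illustration (with hypothetical function names of our own) of the Euler-Maruyama Gaussian approximation of the transition density of a scalar SDE; it carries exactly the systematic bias mentioned above, which the method of this paper avoids.

```python
import numpy as np

def euler_transition_density(x, x_next, delta, alpha, gamma):
    """Gaussian Euler-Maruyama approximation of q(x, x') for a scalar SDE
    dX_t = alpha(X_t) dt + gamma(X_t) dW_t over a time step of length delta.

    The true transition density is approximated by a Gaussian with mean
    x + alpha(x) * delta and variance gamma(x)^2 * delta; the approximation
    error does not vanish as the number of Monte Carlo samples grows."""
    mean = x + alpha(x) * delta
    var = gamma(x) ** 2 * delta
    return np.exp(-0.5 * (x_next - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)
```

For a zero drift and unit diffusion coefficient, this reduces to the standard Gaussian density, which provides an elementary sanity check.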
To overcome this difficulty, [13] proposed to solve the filtering problem by combining SMC methods with an unbiased estimate of the transition densities based on the generalized Poisson estimator (GPE). In this case, only the Monte Carlo error has to be controlled, as no Taylor expansion or discretization scheme is used to approximate the unknown transition densities. The only solution to solve the smoothing problem for POD processes using SMC methods without any discretization procedure was proposed in [27] and extends the fixed-lag smoother of [26]. Using forgetting properties of the hidden chain, this algorithm improves on the performance of [13] for approximating smoothing distributions, but at the cost of a bias, this time due to the fixed-lag approximation, that does not vanish as the number of particles grows to infinity.
In this paper, we propose to use SMC methods to obtain consistent approximations of smoothed expectations of POD processes by extending the PaRIS algorithm. The proposed algorithm approximates smoothed expectations of additive functionals online, with a complexity growing only linearly with the number of particles and without any discretization procedure or Taylor expansion of the transition densities. The crucial and simple result (Lemma 1) underlying the application of the PaRIS algorithm to POD processes is that the acceptance-rejection mechanism introduced in [10], which ensures the linear complexity of the procedure, remains valid when the transition densities are replaced by unbiased estimates. The usual FFBS and FFBSi algorithms cannot be extended as easily since they both require the computation of weights defined as ratios involving the transition densities; replacing these unknown quantities by unbiased estimates therefore does not lead to unbiased estimators of the weights. The linear version of the FFBSi algorithm proposed in [10] could be extended in a similar way as the PaRIS algorithm, but it would still require a backward pass and would not be an online smoother. The proposed generalized random version of the PaRIS algorithm, hereafter named the GRand PaRIS algorithm, may be applied not only to POD processes but also to any general state-space model in which the transition density of the hidden chain admits a positive and unbiased estimator.
Section 2 describes the model and the smoothing quantities to be estimated. Section 3 provides the algorithm to approximate smoothed additive functionals using unbiased estimates of the transition density of the hidden states. This section also details the application of this algorithm when the transition densities are approximated using a GPE. In Section 4, classical convergence results for SMC smoothers are extended to the setting of this paper; they are illustrated with numerical experiments in Section 5. All proofs are postponed to the Appendix.
2 Model and framework
where (W_{ t })_{t≥0} is a standard Brownian motion on \(\mathbb {R}^{d}\), \(\alpha : \mathbb {R}^{d}\to \mathbb {R}^{d}\), and \(\Gamma : \mathbb {R}^{d}\to \mathbb {R}^{d\times d}\). The solution to (1) is supposed to be partially observed at times t_{0}=0,…,t_{ n } through an observation process (Y_{ k })_{0≤k≤n} in \(\left (\mathbb {R}^{m}\right)^{n+1}\). In the following, for all 0≤k≤n, the state \(X_{t_{k}}\) at time t_{ k } is referred to as X_{ k }. For all 0≤k≤n, the distribution of Y_{ k } given X_{ k } has a density with respect to a reference measure λ on \(\mathbb {R}^{m}\) given by g(X_{ k },·). For the sake of simplicity, the shorthand notation g_{ k }(X_{ k }) is used for g(X_{ k },Y_{ k }). The distribution of X_{0} has a density with respect to a reference measure μ on \(\mathbb {R}^{d}\) given by χ. For all 0≤k≤n−1, the conditional distribution of X_{k+1} given X_{ k } has a density q_{ k }(X_{ k },·) with respect to μ.
where \(\{h_{k}\}_{k=0}^{n-1}\) are given functions on \(\mathbb {R}^{d}\times \mathbb {R}^{d}\). Smoothed additive functionals such as (2) are crucial for maximum likelihood inference in latent data models. These quantities appear naturally when computing the Fisher score in hidden Markov models or the intermediate quantity of the expectation maximization algorithm (see Section 5). They are also pivotal to design online expectation-maximization-based algorithms, which motivates the method introduced in this paper, as it does not require growing storage and can process observations online.
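In the notation above, the smoothed additive functionals of interest have the following generic form (this restates (2) in a standard formulation; the shorthand \(Y_{0:n}\) for \((Y_{0},\ldots,Y_{n})\) is ours):

```latex
S_n \;=\; \mathbb{E}\left[\,\sum_{k=0}^{n-1} h_k\left(X_k, X_{k+1}\right) \,\middle|\, Y_{0:n}\right].
```

For instance, choosing \(h_{k}(x,x')=xx'\) gives a typical smoothed sufficient statistic appearing in the E-step of an EM algorithm for models with a linear state equation.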
The algorithm proposed in this paper is based on sequential Monte Carlo methods, which offer a flexible framework to approximate such distributions with weighted empirical measures associated with random samples. At each time step, the samples are moved randomly in \(\mathbb {R}^{d}\) and associated with importance weights. In general situations, the computation of these importance weights involves the unknown transition density of the process (1). The solution introduced in Section 3 requires an unbiased estimator of these unknown transition densities. Moreover, this estimator must be almost surely positive and upper bounded. Statistical inference for stochastic differential equations is an active area of research, and several solutions have been proposed to design unbiased estimates of these transition densities. These estimators require different assumptions on the model (1); we describe below several solutions that can be investigated.
 i)
α is of the form α(x)=∇_{ x }A(x), where \(A: \mathbb {R}^{d} \to \mathbb {R}\) is a twice continuously differentiable function;
 ii)
the function x↦(∥α(x)∥^{2}+△A(x))/2 is lower bounded, where △ is the Laplace operator.
Assumption (i) is somewhat restrictive as it requires α to derive from a scalar potential; however, it arises naturally in many fields such as movement ecology, see [15]. Assumption (ii) is a technical condition which ensures that processes solution to (1) can be sampled exactly using acceptance-rejection methods, see for instance [4, 5, 13]. In addition to providing an unbiased estimate of the transition density, the GPE ensures that this estimate is almost surely positive. Moreover, as detailed below, under additional conditions, a GPE that is almost surely upper bounded can be defined.
Continuous importance sampling-based estimators In the case where the previous assumptions are not fulfilled, in particular assumption (i), alternatives to GPEs are given by continuous-time importance sampling procedures for SDEs. In [34], for each 0≤k≤n−1, the transition density between t_{ k } and t_{k+1} is expressed as an infinite expansion obtained using the Kolmogorov backward operator associated with (1). This analytical expression of the transition density is not tractable and is estimated by updating random samples at random times between t_{ k } and t_{k+1} using tractable proposal distributions (for instance, based on an Euler discretization of the original SDE). These samples are then associated with random weights ensuring that the proposed estimator is unbiased. More recently, [14] extended the discrete-time importance sampling estimator by introducing updates at random times associated with a renewal process. The random samples are weighted using the Kolmogorov forward operator associated with the SDE, which relies on the first two derivatives of the drift and diffusion coefficients (and is therefore tractable).
The unbiasedness of these procedures and the control of the variability of the estimates require moment assumptions and Hölder-type conditions on the parameters of the SDE (1). Their efficiency requires a fair amount of tuning, as it depends heavily on the proposal densities used to obtain the Monte Carlo samples and on the point processes generating the underlying random times. In addition to unbiasedness, the algorithm proposed in this work requires the estimator of the transition density to be almost surely positive and upper bounded. This implies additional assumptions on the SDE, depending on the chosen estimate, and could lead to interesting perspectives.
3 The generalized random PaRIS algorithm
The approximation of (5) requires first to approximate the sequence of filtering distributions. Sequential Monte Carlo methods provide an efficient and simple solution to obtain these approximations using sets of particles \(\left \{\xi ^{\ell }_{k}\right \}_{\ell =1}^{N}\) associated with weights \(\left \{\omega ^{\ell }_{k}\right \}_{\ell =1}^{N}\), 0≤k≤n.
 
choose a particle index \(I^{\ell }_{k}\) at time k−1 in {1,…,N} with probabilities proportional to \(\omega _{k-1}^{j} \vartheta _{k} \left (\xi ^{j}_{k-1}\right)\), for j in {1,…,N};
 
sample \(\xi ^{\ell }_{k}\) using this chosen particle according to \(\xi ^{\ell }_{k} \sim p_{k-1}\left (\xi ^{I^{\ell }_{k}}_{k-1},\cdot \right)\);
 associate the particle \(\xi ^{\ell }_{k}\) with the importance weight:$$ \omega^{\ell}_{k} := \frac{q_{k-1}\left(\xi_{k-1}^{I^{\ell}_{k}},\xi^{\ell}_{k}\right)g_{k}\left(\xi^{\ell}_{k}\right)}{\vartheta_{k}\left(\xi^{I^{\ell}_{k}}_{k-1}\right) p_{k-1} \left(\xi_{k-1}^{I^{\ell}_{k}},\xi^{\ell}_{k}\right)}\;. $$(6)
The simplest choice for p_{k−1} and 𝜗_{ k } is the bootstrap filter proposed by [17], which sets p_{k−1}=q_{k−1} and, for all \(x\in \mathbb {R}^{d}\), 𝜗_{ k }(x)=1. In the case of POD processes, q_{k−1} is unknown, but since any choice of p_{k−1} can be made, it can be replaced by any approximation to sample the particles. The approximation can be obtained using a discretization scheme such as the Euler method or a Poisson-based approximation as detailed below. A more appealing choice is the fully adapted particle filter, which sets for all \(x,x'\in \mathbb {R}^{d}\), p_{k−1}(x,x^{′})∝q_{k−1}(x,x^{′})g_{ k }(x^{′}) and, for all \(x\in \mathbb {R}^{d}\), \(\vartheta _{k}(x) = \int q_{k-1}\left (x,x'\right)g_{k}\left (x'\right)\mu \left (\mathrm {d} x'\right)\). Here again, q_{k−1} has to be replaced by an approximation. In Section 5, it is replaced by the Gaussian approximation provided by an Euler scheme, which leads to a Gaussian proposal density p_{k−1} since the observation model is linear and Gaussian.
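The particle filter step above can be sketched as follows. This is a minimal one-dimensional illustration under our own naming conventions, not the authors' implementation: `propose` samples from the proposal kernel \(p_{k-1}\), `proposal_pdf` evaluates its density, `q_hat` stands for the (estimated) transition density \(q_{k-1}\), `g` for the local likelihood \(g_k\), and `theta` for the adjustment weights \(\vartheta_k\).

```python
import numpy as np

rng = np.random.default_rng(0)

def apf_step(particles, weights, propose, proposal_pdf, q_hat, g, theta):
    """One auxiliary-particle-filter step: select ancestors with probabilities
    proportional to weights * theta, propagate them with the proposal kernel,
    and reweight according to Eq. (6), with q_{k-1} possibly replaced by an
    unbiased estimate."""
    N = particles.shape[0]
    probs = weights * theta(particles)
    probs /= probs.sum()
    ancestors = rng.choice(N, size=N, p=probs)     # indices I_k^ell
    new_particles = propose(particles[ancestors])  # xi_k^ell ~ p_{k-1}
    new_weights = (q_hat(particles[ancestors], new_particles)
                   * g(new_particles)
                   / (theta(particles[ancestors])
                      * proposal_pdf(particles[ancestors], new_particles)))
    return new_particles, new_weights
```

With the bootstrap choice (theta ≡ 1 and proposal equal to the transition kernel), the weights reduce to the local likelihood, as in the text.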
The PaRIS algorithm uses the same decomposition as the FFBS algorithm introduced in [12] and the FFBSi algorithm proposed by [16] to approximate smoothing distributions. It combines the forward-only version of the FFBS algorithm with the sampling mechanism of the FFBSi algorithm. It does not produce an approximation of the smoothing distributions themselves but of the smoothed expectation of a fixed additive functional, and thus may be used to approximate (2). Its crucial property is that it does not require a backward pass: the smoothed expectation is computed on-the-fly with the particle filter, and no storage of the particles or weights is needed.
 (i)
Run one step of a particle filter to produce \(\left \{\left (\xi ^{\ell }_{k}, \omega ^{\ell }_{k}\right)\right \}\) for 1≤ℓ≤N.
 (ii)
For all 1≤i≤N, sample independently \(J_{k}^{i,\ell }\) in {1,…,N} for \(1\le \ell \le \widetilde N\) with probabilities \(\Lambda _{k}^{N}(i,\cdot)\), given by (8).
 (iii)Set$$\tau^{i}_{k+1} := \frac{1}{\widetilde{N}} \sum^{\widetilde{N}}_{\ell=1} \left\{ \tau^{J_{k}^{i,\ell}}_{k} + h_{k} \left(\xi^{J_{k}^{i,\ell}}_{k}, \xi^{i}_{k+1}\right) \right\}\;. $$
It is clear from steps (i) to (iii) that each time a new observation Y_{n+1} is received, the quantities \(\left (\tau _{n+1}^{i}\right)_{1\le i \le N}\) can be updated using only Y_{n+1}, \(\left (\tau _{n}^{i}\right)_{1\le i \le N}\), and the particle filter at time n. This means that storage requirements do not increase as additional data are processed.
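Steps (ii) and (iii) above can be summarized in the following sketch (the names are ours; `sample_backward_index` stands for a draw of \(J_k^{i,\ell}\) from \(\Lambda_k^N(i,\cdot)\), whose implementation for POD processes is the subject of Lemma 1):

```python
import numpy as np

def paris_update(tau, xi_prev, xi_new, h, sample_backward_index, N_tilde=2):
    """Online PaRIS statistic update: for each new particle i, draw N_tilde
    backward indices J_k^{i,ell} with the backward probabilities and average
    the updated auxiliary statistics tau."""
    N = len(xi_new)
    tau_new = np.empty(N)
    for i in range(N):
        acc = 0.0
        for _ in range(N_tilde):
            j = sample_backward_index(i)      # J_k^{i,ell} ~ Lambda_k^N(i, .)
            acc += tau[j] + h(xi_prev[j], xi_new[i])
        tau_new[i] = acc / N_tilde
    return tau_new
```

The update touches only the current particles and the previous statistics, which is precisely why the storage requirements stay constant over time.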
As proved in [28], the algorithm is asymptotically consistent (as N goes to infinity) for any precision parameter \(\widetilde{N}\). However, there is a significant qualitative difference between the cases \(\widetilde {N} = 1\) and \(\widetilde {N} \geq 2\). As for the FFBSi algorithm, when there exists σ_{+} such that 0<q_{ k }<σ_{+}, the PaRIS algorithm may be implemented with \(\mathcal {O}(N)\) complexity using the accept-reject mechanism of [10].
In Algorithm 1, M independent copies \(\left (\zeta ^{m}_{k-1}\right)_{1\le m \le M}\) of ζ_{k−1} are sampled, and the empirical mean of the associated estimates of the transition density is used to compute \(\widehat {\omega }^{\ell }_{k}\) instead of a single realization. Therefore, to obtain a generalized random version of the PaRIS algorithm, we only need to be able to sample from the discrete probability distribution \(\Lambda _{k}^{N}(i,\cdot)\) in the case of POD processes.
Lemma 1
Assume that (10) holds for some 0≤k≤n−1. For all 1≤i≤N, define the random variable \(J_{k}^{i}\) as follows:
Then, the conditional probability distribution given \(\mathcal {G}_{k+1}^{N}\) of \(J_{k}^{i}\) is \(\Lambda _{k}^{N}(i,\cdot)\).
Proof
See the Appendix. □
If only assumption (12) holds, the algorithm has quadratic complexity. The bound in (10) is uniform (it does not depend on the particles) and can be used for every particle 1≤i≤N; however, it can be large (with respect to the simulated set of particles) for the algorithm of Lemma 1. The bound in (12) requires N computations per particle (hence N^{2} computations overall). However, this second bound is sharper than the one in (10) for the acceptance-rejection procedure and may lead to a computationally more efficient algorithm.
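The accept-reject mechanism of Lemma 1 can be sketched as follows. This is a minimal illustration with our own naming, under assumption (10): `q_hat_sampler` draws a fresh, almost surely positive unbiased estimate of \(q_{k-1}\) at each trial, and `sigma_plus` is the almost sure upper bound on that estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_backward_index(i, xi_prev, weights, xi_new_i, q_hat_sampler,
                          sigma_plus):
    """Accept-reject draw of J_k^i with distribution Lambda_k^N(i, .):
    propose an ancestor index proportionally to the filtering weights, then
    accept with probability q_est / sigma_plus, where q_est is a fresh
    unbiased estimate of q_{k-1}(xi_prev[j], xi_new_i). By Lemma 1, the
    randomness of q_est does not break the correctness of the draw."""
    probs = weights / weights.sum()
    while True:
        j = rng.choice(len(weights), p=probs)
        q_est = q_hat_sampler(xi_prev[j], xi_new_i)
        if rng.uniform() * sigma_plus < q_est:
            return j
```

The expected number of trials is driven by the ratio between the typical estimate and `sigma_plus`, which is why the sharper per-particle bounds of (12) can pay off despite their quadratic cost.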
Assume that there exist random variables L_{ w } and U_{ w } such that for all 0≤s≤Δ_{ k }, L_{ w }≤ϕ(w_{ s })≤U_{ w }. The performance of the estimator depends on the choice of L_{ w } and U_{ w }, which is specific to the SDE. In the case of the models analyzed in Section 5, these bounds are discussed in [13] for the SINE model and in [27] for the log-growth model. Note that in the case where ϕ is not upper bounded, [5] proposed the EA3 algorithm. This layered Brownian bridge construction first samples random variables to determine in which layer the Brownian bridge lies before simulating the bridge conditionally on the event that it belongs to this layer. By continuity of ϕ, L_{ w } and U_{ w } can then be computed easily.
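A minimal sketch of the Poisson estimator underlying the GPE may help fix ideas. For κ ~ Poisson(λΔ) and τ_1,…,τ_κ i.i.d. uniform on [0,Δ], the product ∏_{j}(λ − ϕ(w_{τ_j}))/λ is an unbiased estimator of exp(−∫_0^Δ ϕ(w_s) ds), and it is almost surely positive as soon as λ ≥ U_{ w }. The code below is our own illustration (not the full GPE); in the actual estimator, the path values w_{τ_j} would come from a Brownian bridge simulated between the two particle positions.

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_estimator(phi_on_path, delta, lam, rng=rng):
    """Unbiased Poisson estimator of exp(-int_0^delta phi(w_s) ds).
    kappa ~ Poisson(lam * delta), tau_j ~ U[0, delta]; the product
    prod_j (lam - phi(w_{tau_j})) / lam has the desired expectation.
    phi_on_path(t) returns phi(w_t) for a path simulated elsewhere."""
    kappa = rng.poisson(lam * delta)
    taus = rng.uniform(0.0, delta, size=kappa)
    out = 1.0
    for t in taus:
        out *= (lam - phi_on_path(t)) / lam
    return out
```

For a constant ϕ ≡ c, the expectation is exp(−cΔ), which provides an easy sanity check; choosing λ below the path-wise supremum of ϕ would make the estimator signed, which the algorithm of this paper rules out.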
is upper bounded almost surely by \(\hat {\sigma }^{k}_{+}\). In particular, if L_{ w } is bounded from below almost surely, (14) always satisfies assumption (12) and Algorithm 1 can be used. This condition is always satisfied for models in the domains required for the application of the exact algorithms EA1, EA2, and EA3 defined in [6].
When (10) or (11) holds, it can nonetheless be of practical interest to choose the bounds \(\hat {\sigma }^{k,i}_{+}\), 1≤i≤N, corresponding to (12). Indeed, this might significantly increase the acceptance rate of the algorithm and therefore reduce the number of draws of the random variable ζ_{ k }, which are much more costly than the computation of \(\rho _{\Delta _{k}}\), as they require simulations of Brownian bridges. Moreover, this option avoids numerical optimization when no analytical expression of \(\hat {\sigma }_{+}^{k}\) is available. In practice, this seems more efficient in terms of computational time when N takes moderate values.
4 Convergence results

H1 (i) For all k≥0 and all \(x\in \mathbb {R}^{d}\), g_{ k }(x)>0.

(ii) \(\sup _{k\geq 0}\|g_{k}\|_{\infty } < \infty \).

H2 \(\sup _{k\geq 1}\|\vartheta _{k}\|_{\infty } < \infty \), \(\sup _{k\geq 1}\|p_{k}\|_{\infty } < \infty \), and \(\sup _{k\geq 1}\|\widehat {\omega }_{k}\|_{\infty } < \infty \), where$$\begin{array}{*{20}l} \widehat{\omega}_{0}(x) &= \frac{\chi(x)g_{0}(x)}{\eta_{0}(x)} \quad\text{and for}\; k\ge1,\\ \widehat{\omega}_{k}\left(x,x';z\right) &= \frac{\widehat{q}_{k-1}\left(x,x';z\right)g_{k}(x')}{\vartheta_{k}(x) p_{k-1} (x,x')}\;. \end{array} $$
Assumption H2 depends on the algorithm used to estimate the transition densities and on the tuning parameters of the SMC filter. The most common choice is 𝜗_{ k }=1, so that under H1 the only requirement is to control \(\widehat {q}_{k-1}\) and p_{k−1}. For instance, in the case of the GPE-1, as explained in Section 3, H2 is satisfied if ϕ is upper bounded (as for the EA1).
Lemma 2
Proof
See the Appendix. □
Proposition 1
Proof
See the Appendix. □
5 Numerical experiments
5.1 The SINE model
In the case of the SINE model, the estimator \(\widehat {q}_{k}\) defined by Eq. (14) satisfies both (10) and (11). The corresponding bound \(\widehat {\sigma }_{+}^{k}\) can be obtained using numerical optimization. If this bound is chosen, the GRand PaRIS algorithm has linear complexity in the number of particles. Alternatively, it is worth noting here that the bounds \(\widehat {\sigma }_{+}^{k,i}\), 1≤i≤N, defined by (12) can also be used. This method has a quadratic cost in the number of particles but provides the optimal bound for the algorithm of Lemma 1, which may significantly reduce the expected time before acceptance, in particular when the time step Δ_{ k } is large. In the experimental configuration presented here, both bounds resulted in equivalent computational times.
The GRand PaRIS algorithm outperforms the fixed-lag methods for any value of the lag: its bias is the lowest (already negligible for N = 400), and its variance is lower than that of fixed-lag estimates with negligible bias (i.e., in this case, lags larger than 10). Small lags lead to strongly biased estimates for the fixed-lag method, and unbiased estimates come at the cost of a large variance. It is worth noting that the lag for which the bias is small is model dependent.
5.2 Log-growth model
In this case, the conditions of the exact Algorithm 2 defined in [6] are satisfied, as for any \(m \in \mathbb {R}\) there exists U_{ m } such that for all x≥m, ψ(x):=α^{2}(x)+α^{′}(x)≤U_{ m }. Moreover, ψ is uniformly lower bounded by L. Then, GPE estimators may be computed by simulating the minimum of a Brownian bridge and simulating Bessel bridges conditionally on this minimum, as proposed by [6].
The results for the fixed-lag technique are similar to those presented in [27, Figure 1] for the same model. For small lags, the variance of the estimates is small but the estimation is highly biased. The bias decreases rapidly as the lag increases, at the cost of a sharp increase in variance. Again, the GRand PaRIS algorithm outperforms the fixed-lag smoother: it shows a similar (vanishing) bias to the fixed-lag estimate with the largest lag and a smaller variance than the fixed-lag estimates with negligible bias.
Note that in this case, the Lamperti transform used to obtain a diffusion with a unitary diffusion term depends on σ. The process (X_{ t })_{t≥0} is a function of σ and is not directly observed if σ is unknown, which prevents a direct use of an EM algorithm to estimate σ. Following [6, Section 8.2], this may be overcome with a two-step transformation of the process (Z_{ t })_{t≥0}.
6 Conclusions
This paper presents a new online SMC smoother for partially observed stochastic differential equations. The algorithm relies on an acceptance-rejection procedure inspired by the recent PaRIS algorithm. The main result of the article for practical applications is that the mechanism of this procedure remains valid when the transition density is replaced by an unbiased positive estimator. The proposed procedure therefore extends the PaRIS algorithm to HMMs whose transition density is unknown but can be estimated without bias. The GRand PaRIS algorithm outperforms the existing fixed-lag smoother for POD processes of [27], as it does not introduce any intrinsic nonvanishing bias. In addition, numerical simulations highlight a lower variance on data from two different models. It can be implemented for the class of models for which the exact algorithms of [6] are valid, with a complexity linear in N in the best cases and at worst quadratic in N.
7 Appendix
7.1 Proofs
Proof of Lemma 1
Proof of Lemma 2
Proof of Proposition 1
The proof is completed using Lemma 3. □
Lemma 3
 (i)
\(a_{N}/b_{N}\leq M\), \(\mathbb {P}\)-a.s., and \(b\geq \beta \), \(\mathbb {P}\)-a.s.,
 (ii)
For all ε>0 and all N≥1, \(\mathbb {P}\left [\left |b_{N}-b\right |>\epsilon \right ]\leq B \exp \left (-C N \epsilon ^{2}\right)\),
 (iii)
For all ε>0 and all N≥1, \(\mathbb {P} \left [ \left |a_{N}\right |>\epsilon \right ]\leq B \exp \left (-C N \left (\epsilon /M\right)^{2}\right)\).
Proof
See [10]. □
Declarations
Funding
This work has been developed during a 1-year postdoc funded by the Paris-Saclay Center for Data Science.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All the authors have contributed to the conception of the algorithms, the analysis of the proposed estimator, and to the redaction of the manuscript. PG provided the simulations displayed in the final version. All authors read and approved the final manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Y Aït-Sahalia, Transition densities for interest rate and other nonlinear diffusions. J. Financ. 54, 1361–1395 (1999).
 Y Aït-Sahalia, Maximum likelihood estimation of discretely sampled diffusions: a closed-form approximation approach. Econometrica. 70, 223–262 (2002).
 Y Aït-Sahalia, Closed-form likelihood expansions for multivariate diffusions. Ann. Stat. 36, 906–937 (2008).
 A Beskos, O Papaspiliopoulos, GO Roberts, Retrospective exact simulation of diffusion sample paths with applications. Bernoulli. 12(6), 1077–1098 (2006).
 A Beskos, O Papaspiliopoulos, GO Roberts, A factorisation of diffusion measure and finite sample path constructions. Methodol. Comput. Appl. Probab. 10(1), 85–104 (2008).
 A Beskos, O Papaspiliopoulos, GO Roberts, P Fearnhead, Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion). J. Roy. Statist. Soc. Ser. B. 68(3), 333–382 (2006).
 O Cappé, E Moulines, T Rydén, Inference in hidden Markov models (Springer-Verlag, New York, 2005).
 P Del Moral, J Jacod, P Protter, The Monte Carlo method for filtering with discrete-time observations. Probab. Theory Relat. Fields. 120, 346–368 (2001).
 AP Dempster, NM Laird, DB Rubin, Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Statist. Soc. B. 39(1), 1–38 (1977) (with discussion).
 R Douc, A Garivier, E Moulines, J Olsson, Sequential Monte Carlo smoothing for general state space hidden Markov models. Ann. Appl. Probab. 21(6), 2109–2145 (2011).
 A Doucet, S Godsill, C Andrieu, On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput. 10, 197–208 (2000).
 A Doucet, S Godsill, C Andrieu, On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput. 10, 197–208 (2000).
 P Fearnhead, O Papaspiliopoulos, GO Roberts, Particle filters for partially observed diffusions. J. Roy. Statist. Soc. Ser. B. 70(4), 755–777 (2008).
 P Fearnhead, K Latuszynski, GO Roberts, G Sermaidis, Continuous-time importance sampling: Monte Carlo methods which avoid time-discretisation error (2017). Technical report.
 P Gloaguen, M-P Étienne, S Le Corff, Stochastic differential equation based on a multimodal potential to model movement data in ecology. To appear in the Journal of the Royal Statistical Society: Series C. http://onlinelibrary.wiley.com/doi/10.1111/rssc.12251/abstract.
 SJ Godsill, A Doucet, M West, Monte Carlo smoothing for nonlinear time series. J. Am. Stat. Assoc. 99(465), 156–168 (2004).
 N Gordon, D Salmond, AF Smith, Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F Radar Signal Process. 140, 107–113 (1993).
 M Hürzeler, HR Künsch, Monte Carlo approximations for general state-space models. J. Comput. Graph. Stat. 7, 175–193 (1998).
 N Kantas, A Doucet, SS Singh, J Maciejowski, N Chopin, On particle methods for parameter estimation in state-space models. Stat. Sci. 30(3), 328–351 (2015).
 M Kessler, Estimation of an ergodic diffusion from discrete observations. Scand. J. Stat. 24(2), 211–229 (1997).
 M Kessler, A Lindner, M Sørensen, Statistical methods for stochastic differential equations (CRC Press, Boca Raton, 2012).
 G Kitagawa, Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. J. Comput. Graph. Stat. 5(1), 1–25 (1996).
 S Le Corff, G Fort, Convergence of a particle-based approximation of the block online Expectation Maximization algorithm. ACM Trans. Model. Comput. Simul. 23(1), 2 (2013).
 S Le Corff, G Fort, Online expectation maximization based algorithms for inference in hidden Markov models. Electron. J. Stat. 7, 763–792 (2013).
 C Li, Maximum-likelihood estimation for diffusion processes via closed-form density expansions. Ann. Stat. 41(3), 1350–1380 (2013).
 J Olsson, O Cappé, R Douc, E Moulines, Sequential Monte Carlo smoothing with application to parameter estimation in nonlinear state space models. Bernoulli. 14(1), 155–179 (2008).
 J Olsson, J Ströjby, Particle-based likelihood inference in partially observed diffusion processes using generalised Poisson estimators. Electron. J. Stat. 5, 1090–1122 (2011).
 J Olsson, J Westerborn, Efficient particle-based online smoothing in general hidden Markov models: the PaRIS algorithm. Bernoulli. 23(3), 1951–1996 (2017).
 T Ozaki, A bridge between nonlinear time series models and nonlinear stochastic dynamical systems: a local linearization approach. Stat. Sin. 2, 113–135 (1992).
 MK Pitt, N Shephard, Filtering via simulation: auxiliary particle filters. J. Am. Stat. Assoc. 94(446), 590–599 (1999).
 G Poyiadjis, A Doucet, SS Singh, Particle approximations of the score and observed information matrix in state space models with application to parameter estimation. Biometrika. 98, 65–80 (2011).
 I Shoji, T Ozaki, Estimation for nonlinear stochastic differential equations by a local linearization method. Stoch. Anal. Appl. 16(4), 733–752 (1998).
 M Uchida, N Yoshida, Adaptive estimation of an ergodic diffusion process based on sampled data. Stoch. Process. Appl. 122(8), 2885–2924 (2012).
 W Wagner, Unbiased Monte Carlo estimators for functionals of weak solutions of stochastic differential equations. Stochast. Stochast. Rep. 28, 1–20 (1989).