
Robust error-state Kalman-type filters for attitude estimation

Abstract

State estimation techniques appear in a plethora of engineering fields, in particular for the attitude estimation application of interest in this contribution. A number of filters have been devised for this problem, notably Kalman-type ones, but in their standard form they are known to be fragile against outliers. In this work, we focus on error-state filters, designed for states living on a manifold, here unit-norm quaternions. We propose extensions based on robust statistics, leading to two robust M-type filters able to tackle outliers in the measurements, in the system dynamics, or in both. The performance and robustness of these filters are explored in a numerical experiment. We first assess the outlier ratio that they manage to mitigate, and second the type of dynamics outliers that they can detect, showing that the filter performance depends on the measurements’ properties.

1 Introduction

State estimation is a fundamental task in several engineering applications, for instance in robotics, guidance, navigation or tracking systems [1,2,3]. Standard filtering techniques, such as the Kalman filter (KF) for linear systems, the extended or sigma-point KFs for nonlinear/Gaussian state-space models (SSMs) [4], or even the family of sequential Monte Carlo methods [5], assume perfect system knowledge: known system matrices/functions, known noise statistics and parameters, known inputs and perfect initialization. However, such an assumption does not typically hold in real-life applications, in which case the performance of these filters may be severely degraded, which is why it is mandatory to develop robust filtering techniques. A number of alternatives exist in the literature to deal with possibly misspecified systems, for instance: i) robust statistics-based filters, which are nearly optimal under nominal Gaussian conditions, while outlier insensitive under normal mixture noise distributions [6,7,8]; ii) variational inference-based filters, designed for state estimation under heavy-tailed parametric distributions [9, 10], or able to detect and reject outliers [11]; iii) fault detection and exclusion algorithms, which identify faulty measurements based on statistical tests [12, 13]; iv) the smooth variable structure filter [14], a hybrid filter designed to constrain state estimates within a boundary surrounding the true state trajectory (it provides robustness against model uncertainty but falls short when it comes to coping with dynamics and measurement noise, so it is considered a sub-optimal alternative to the KF); v) the robust decision-making framework based on decision trees [15]; or vi) recent contributions on linearly constrained filtering that provide a new paradigm for protecting against model mismatch in prediction and observation models [16,17,18], whose extension to manifold-based estimation is also available [19].
From a methodological point of view, the main goal of this contribution is to design new robust filtering methods, able to cope with outliers in both state evolution and measurements, for SSMs with states living on a manifold.

The underlying problem of interest is attitude estimation, which refers to finding the relative orientation between two Cartesian frames or, in other words, to determining the spatial orientation of a platform. Orientation information is fundamental to aid vehicles with large inertia, such as airplanes or ships, and to operate systems equipped with pointing devices, such as satellites. The attitude problem also arises in computer vision applications, where it is used to relate keyframes or to register point clouds [20, 21]. Within the context of navigation, the attitude varies with time, and its recursive estimation is addressed with nonlinear extensions of the original KF, such as the error-state KF (ESKF) [22, 23] or the invariant KF (IKF) [24, 25]. These filters preserve the geometrical properties of the attitude estimates or, equivalently, ensure that the sequence of estimates remains on the corresponding manifold. The underlying concept is to estimate a vector of perturbations around the state on the algebra associated with the target manifold. For three-dimensional attitude systems, for instance, that manifold is SO(3), while for the rotation-translation pair it is SE(3). When it comes to sensor integration, attitude systems typically combine relative (i.e. angular rate) and absolute (i.e. observation models for the orientation) attitude information. Relevant works on recursive attitude estimation include the use of star-trackers and magnetometers for space applications [26,27,28], or visual sensors and Global Navigation Satellite Systems (GNSS) antenna arrays for vehicular platforms [29,30,31,32]. Unfortunately, outlying measurements are often captured by attitude-related sensor modalities, due to signal reflection and multipath in GNSS, wrong data association in visual systems or unexpected magnetic fields in magnetometers.
Moreover, gyroscope systems may experience scale issues or misalignments, with the subsequent deterioration of the angular rate integration.

As already stated for standard filtering techniques, in the presence of contaminated data and/or system inputs, the performance of conventional attitude filters also rapidly degrades and the estimates become unreliable. With the exception of a few examples [33,34,35], the use of robust filters for attitude estimation has not been addressed in the literature. More specifically, a connection between robust estimation and the most widespread attitude filters (i.e. the ESKF and IKF) has not been drawn. In this work, we focus on robust statistics-based filters, which typically make use of a robust score function to mitigate the impact of outlying measurements. This leads to a family of so-called (linear or nonlinear) M-type KFs. In this contribution, we propose two new nonlinear robust iterated ESKFs for states living on a manifold (i.e. the 3D unit sphere \(\mathcal {S}^3\) that represents unit-norm quaternions), which overcome some of the limitations of existing solutions; one of them copes not only with outliers in the observation equation, but also with state evolution outliers. Numerical experiments investigate the robustness of these filters to outliers in the attitude estimation context. In particular, we focus on the outlier ratio that they can tackle, and on whether they can properly mitigate outliers in the system dynamics. For the latter, we discuss the cases in which the filter succeeds.

The rest of the paper is structured as follows. Section 2 introduces the discrete and non-Euclidean SSM. The main methodological contribution of this work is introduced in Sect. 3, which first recalls the basics of the ESKF, before deriving two robust M-type filters for the attitude problem. Section 4 explores the robustness of the various filters. Finally, Sect. 5 presents the concluding remarks.

2 System model and problem formulation

2.1 Attitude estimation problem

Let us consider a general attitude problem and define the state of the system as

$$\begin{aligned} {\textbf{x}} \triangleq \begin{bmatrix} {\textbf{q}} \\ \textbf{a} \end{bmatrix} \in \mathcal {M}, \ \mathcal {M} \triangleq \mathcal {S}^{3} \times \mathbb {R}^{p}, \end{aligned}$$
(1)

where \({\textbf{q}}\) is a unit-norm quaternion that encodes the attitude of interest and \(\textbf{a}\) is a generic vector that encapsulates the remaining unknowns. The goal is then to recursively estimate the state at time k, \({\textbf{x}}_k\), from the measurements available up to and including time k, \({\textbf{y}}_{1:k}\). In other words, the problem is to recursively compute or approximate the marginal posterior \(p({\textbf{x}}_k|{\textbf{y}}_{1:k})\). In addition, the system may be corrupted by unknown measurement and/or input outliers, in which case a robust solution is required.

To this end, attitude estimation can be formulated in a general manner with the following SSM,

$$\begin{aligned} {\textbf{x}}_k&= {\textbf{f}}\left( {\textbf{x}}_{k-1}, {\varvec{\omega }}_{k}, {\textbf{v}}_{k} \right) , \end{aligned}$$
(2)
$$\begin{aligned} {\textbf{y}}_k&= {\textbf{h}}\left( {\textbf{x}}_k \right) + {\varvec{\eta }}_k , \end{aligned}$$
(3)

where both process and observation functions, \(\textbf{f}_{}(\cdot )\) and \(\textbf{h}(\cdot )\), are assumed to be known; \({\textbf{v}}_{k}\) and \({\varvec{\eta }}_k\) are the process and measurement noise sequences; and \({\varvec{\omega }}_{k}\) is a system input (e.g. inertial measurements). Regarding the noise sequences, the process noise is typically assumed to be zero-mean Gaussian with known covariance matrix \(\textbf{Q}_{k}\), \({\textbf{v}}_{k}\sim {\mathcal {N}}({\textbf{0}},{\textbf{Q}}_{k})\), while the measurement noise density depends on the system operation mode: under nominal conditions, one may assume a zero-mean Gaussian noise with known covariance matrix \(\mathbf {\Sigma }_k\), \({\varvec{\eta }}_k\sim {\mathcal {N}}({\textbf{0}},{\mathbf {\Sigma }}_k)\), but under non-nominal conditions (i.e. corrupted measurements), the system is affected by outliers and the measurement noise density is typically unknown, heavy-tailed and/or skewed.
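A common way to model such non-nominal conditions is an \(\epsilon\)-contaminated Gaussian mixture, where a fraction of the samples is drawn from an inflated-covariance component. The following NumPy sketch illustrates this; the contamination ratio `eps` and inflation factor `kappa` are illustrative values, not taken from this paper.

```python
import numpy as np

def contaminated_noise(rng, Sigma, eps=0.1, kappa=100.0, size=1):
    """Draw measurement noise from an epsilon-contaminated Gaussian:
    N(0, Sigma) with prob. 1 - eps, N(0, kappa * Sigma) with prob. eps.
    eps and kappa are illustrative, not values from the paper."""
    d = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    z = rng.standard_normal((size, d)) @ L.T      # nominal N(0, Sigma) samples
    outlier = rng.random(size) < eps              # contamination mask
    z[outlier] *= np.sqrt(kappa)                  # inflate the outlying samples
    return z
```

Such a generator is a simple way to stress-test a filter under a controlled outlier ratio.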

This general formulation can represent a plethora of engineering problems, such as simultaneous localization and mapping (SLAM) [36,37,38], GNSS-based navigation [30, 39, 40], spacecraft control systems [41,42,43] or computer vision tasks [44]. For the remainder of the manuscript, we keep the generic formulation of unknowns living in the real and unit-norm quaternion spaces. A practical example on orientation estimation based on gyroscope integration and unit line-of-sight vectors is detailed in Sect. 4.

2.2 Filtering on a manifold space

The traditional KF and its nonlinear extensions must be adapted in order to embrace the non-Euclidean structure of some spaces, such as the one introduced in (1). The inherent constraints of these objects (e.g. having unit norm) are typically addressed with geometric tools such as Lie group theory [45]. Among the possible solutions, it is common practice to resort to the so-called ESKF [22]. The ESKF separates the state \(\textbf{x}_k\), which belongs to a manifold (here a Lie group), from its perturbations \(\delta \textbf{x}_k\), which live in the tangent space of that manifold (i.e. the algebra of the manifold), a vector space.

The composition operator \(\oplus\) (“oplus”) and its inverse \(\ominus\) (“ominus”) provide a bijective mapping between a local neighbourhood on a manifold \(\mathcal {M}\) and its algebra \(\mathbb {R}^p\):

$$\begin{aligned} \oplus : \mathcal {M} \times \mathbb {R}^p \rightarrow \mathcal {M}; \quad \; \ominus : \mathcal {M}\times \mathcal {M} \rightarrow \mathbb {R}^p . \end{aligned}$$
(4)

In our case, the manifold of interest is \({\mathcal {S}}^{3} \times {\mathbb {R}}^{p}\), the product of the unit sphere (quaternion space) with a vector space. We can go from the algebra to the manifold through the exponential map; thus, we can associate a quaternion \(\textbf{q}\) with a perturbation \(\textbf{t} \in \mathbb {R}^3\) through the following functions:

$$\begin{aligned} \begin{aligned} \textbf{t} \in {\mathbb {R}}^3 \mathop {\longmapsto }\limits ^{{ \text {Exp}(\cdot ) }} \textbf{q} \in {\mathcal {S}}^3; \quad \text {Exp}(\textbf{t}): \begin{bmatrix} \cos \left( \Vert \textbf{t} \Vert / 2 \right) \\ \textbf{t} / \Vert \textbf{t} \Vert \sin \left( \Vert \textbf{t} \Vert / 2 \right) \end{bmatrix}. \end{aligned} \end{aligned}$$
(5)

This operation can be reversed. Taking \(\textbf{q}^\top = \begin{bmatrix} q_w&\textbf{q}_u^\top \end{bmatrix}\), we then have

$$\begin{aligned} \begin{aligned} \textbf{q} \in \mathcal {S}^3 \mathop {\longmapsto }\limits ^ {{\text {Log}(\cdot )} } \textbf{t} \in \mathbb {R}^3; \quad \text {Log}(\textbf{q}): \textbf{t} = \frac{ 2 \textbf{q}_u }{ \Vert \textbf{q}_u \Vert } \text {atan}\left( \frac{\Vert \textbf{q}_u \Vert }{q_w} \right) . \end{aligned} \end{aligned}$$
(6)
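The mappings (5) and (6) can be sketched in NumPy as follows; this is a minimal sketch assuming a scalar-first quaternion layout \([q_w, \textbf{q}_u]\), with a guard for the small-angle case (an implementation detail not spelled out in the equations).

```python
import numpy as np

def quat_exp(t, eps=1e-12):
    """Exp: R^3 -> S^3, Eq. (5); scalar-first quaternion [q_w, q_u]."""
    angle = np.linalg.norm(t)
    if angle < eps:
        # First-order expansion near the identity to avoid dividing by zero.
        return np.concatenate(([1.0], t / 2.0))
    axis = t / angle
    return np.concatenate(([np.cos(angle / 2.0)], axis * np.sin(angle / 2.0)))

def quat_log(q, eps=1e-12):
    """Log: S^3 -> R^3, Eq. (6); inverse of quat_exp on a local neighbourhood."""
    qw, qu = q[0], q[1:]
    n = np.linalg.norm(qu)
    if n < eps:
        return 2.0 * qu  # small-angle limit
    return 2.0 * qu / n * np.arctan2(n, qw)
```

A quick sanity check is the round trip \(\text{Log}(\text{Exp}(\textbf{t})) = \textbf{t}\) for small \(\textbf{t}\).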

Now, we can apply the “oplus” and “ominus” operators to the general state estimate in (1) as

$$\begin{aligned} \textbf{x}&= \widehat{\textbf{x}} \oplus \delta \textbf{x} = \begin{bmatrix} \widehat{\textbf{q}} \\ \hat{\textbf{a}} \end{bmatrix} \oplus \begin{bmatrix} \textbf{t} \\ \textbf{b} \end{bmatrix} = \begin{bmatrix} \widehat{\textbf{q}} \otimes \text {Exp}\left( \textbf{t} \right) \\ \hat{\textbf{a}} + \textbf{b} \end{bmatrix} , \; \delta \textbf{x} = {\textbf{x}} \ominus \widehat{\textbf{x}} = \begin{bmatrix} \textbf{q} \\ \textbf{a} \end{bmatrix} \ominus \begin{bmatrix} \hat{\textbf{q}} \\ \hat{\textbf{a}} \end{bmatrix} = \begin{bmatrix} \text {Log}( \hat{\textbf{q}}^{-1} \otimes \textbf{q} ) \\ \textbf{a} - \hat{\textbf{a}} \end{bmatrix} \end{aligned}$$
(7)

with \(\otimes\) and \((\cdot )^{-1}\) the quaternion product and quaternion inverse, respectively, and \(\textbf{b}\) being a generic error vector for the remaining unknowns. Refer to [45,46,47] for a thorough and detailed discussion on quaternions and Lie group theory.
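The quaternion product \(\otimes\) and inverse used in (7) can be sketched as follows; a minimal scalar-first (Hamilton) implementation, the convention being an assumption consistent with (5) and (6).

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product p (x) q, scalar-first convention [q_w, q_u]."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate((
        [pw * qw - pv @ qv],                       # scalar part
        pw * qv + qw * pv + np.cross(pv, qv),      # vector part
    ))

def quat_inv(q):
    """Inverse of a unit quaternion is its conjugate."""
    return np.concatenate(([q[0]], -q[1:]))
```

For a unit quaternion, \(\textbf{q} \otimes \textbf{q}^{-1}\) recovers the identity \([1, \textbf{0}]\).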

As with the standard KF, there is no reason why error-state filters should be robust against outliers. The main contribution of this article is to study this problem and to propose a solution based on robust statistics.

3 Methodology

3.1 Background: error-state KFs and robust filtering

3.1.1 Standard error-state extended KF

For the problem at hand, in order to preserve the unit-norm quaternion constraint, the ESKF uses the \(\oplus\) operator in (7) (i.e. instead of the standard addition) to linearize and update the system. This ensures that the state estimate stays on the smooth (usually Riemannian) manifold. If \(\widehat{\textbf{x}}_{k-1|k-1}\) denotes the state estimate at discrete time \(k-1\), and \({\textbf{P}}_{k-1|k-1}\) the corresponding estimation error covariance, then the ESKF is

$$\begin{aligned} \widehat{\textbf{x}}_{k|k-1}&= {\textbf{f}}\left( \widehat{\textbf{x}}_{k-1|k-1}, {\varvec{\omega }}_{k} \right) ,\end{aligned}$$
(8a)
$$\begin{aligned} {\textbf{P}}_{k|k-1}&= {\textbf{F}}_{k} {\textbf{P}}_{k-1|k-1} {\textbf{F}}_{k}^\top + {\textbf{Q}}_{k}, \end{aligned}$$
(8b)
$$\begin{aligned} {\textbf{K}}_k&= {\textbf{P}}_{k|k-1} {\textbf{H}}_k^\top \left( {\textbf{H}}_k {\textbf{P}}_{k|k-1} {\textbf{H}}_k^{\top } + {\mathbf {\Sigma }}_k \right) ^{-1}, \end{aligned}$$
(8c)
$$\begin{aligned} \widehat{\textbf{x}}_{k|k}&= \widehat{\textbf{x}}_{k|k-1} \oplus {\textbf{K}}_k ({\textbf{y}}_k - {\textbf{h}}(\widehat{\textbf{x}}_{k|k-1})), \end{aligned}$$
(8d)
$$\begin{aligned} {\textbf{P}}_{k|k}&= ({\textbf{I}} - {\textbf{K}}_k {\textbf{H}}_k) {\textbf{P}}_{k|k-1}, \end{aligned}$$
(8e)

where \({\textbf{F}}_{k}\) and \({\textbf{H}}_k\) are the Jacobians of the process and observation functions, \({\textbf{f}}(\cdot )\) and \({\textbf{h}}(\cdot )\), taken w.r.t. the error state (i.e. at \(\textbf{x}_k = \widehat{\textbf{x}}_k\oplus \delta \textbf{x}_k\)). Applying the chain rule, the Jacobian matrices can be expressed as

$$\begin{aligned} \textbf{F}_{k}&= \left. \frac{\partial \textbf{f}(\textbf{x} \oplus \delta \textbf{x}, \varvec{\omega }_{k})}{\partial \delta \textbf{x}} \right| _{\hat{\textbf{x}}_{k-1|k-1}, \varvec{\omega }_{k}} = \left. \frac{\partial \textbf{f}}{\partial \textbf{x}} \right| _{\hat{\textbf{x}}_{k-1|k-1}, \varvec{\omega }_{k}} \left. \frac{\partial \textbf{x} \oplus \delta \textbf{x}}{\partial \delta \textbf{x}} \right| _{\hat{\textbf{x}}_{k-1|k-1}} , \end{aligned}$$
(9)
$$\begin{aligned} \textbf{H}_k&= \left. \frac{\partial \textbf{h}(\textbf{x} \oplus \delta \textbf{x})}{\partial \delta \textbf{x}} \right| _{\hat{\textbf{x}}_{k|k-1}} = \left. \frac{\partial \textbf{h}}{\partial \textbf{x}} \right| _{\hat{\textbf{x}}_{k|k-1}} \left. \frac{\partial \textbf{x} \oplus \delta \textbf{x}}{\partial \delta \textbf{x}} \right| _{\hat{\textbf{x}}_{k|k-1}} . \end{aligned}$$
(10)
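In practice, analytic error-state Jacobians such as (9) and (10) are often verified with finite differences on the perturbation. The sketch below does this generically; the function name `error_state_jacobian` and the `oplus` argument (standing for the composition operator of Sect. 2.2) are illustrative, not from the paper.

```python
import numpy as np

def error_state_jacobian(h, x_hat, oplus, d, eps=1e-6):
    """Numerical Jacobian of h w.r.t. the error state, cf. Eq. (10):
    column j is the forward difference of h(x_hat (+) delta_x) along
    the j-th error-state direction."""
    h0 = np.asarray(h(x_hat), dtype=float)
    J = np.zeros((h0.size, d))
    for j in range(d):
        dx = np.zeros(d)
        dx[j] = eps
        J[:, j] = (np.asarray(h(oplus(x_hat, dx))) - h0) / eps
    return J
```

In the Euclidean special case, `oplus` is ordinary addition and the result matches the standard Jacobian.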

3.1.2 Iterated error-state extended KF

It is known that the standard EKF loses the optimality guarantees of the linear KF mainly due to linearization errors. A possible solution to such problem is to use the iterated EKF, which aims at finding the best linearization point for the measurement Jacobian at each update using a Gauss-Newton scheme [48]. Therefore, iterated schemes are particularly useful when the linearization errors of the measurement function are the main source of instability. In the ESKF context and considering states living on manifolds, the iterative sequence \((\widehat{\textbf{x}}_{k|k}^{(i)})_{i \ge 0}\) to compute \(\widehat{\textbf{x}}_{k|k}\) leading to an iterated ESKF (I-ESKF) is (using the associated \(\oplus , \ominus\) operators and \(\widehat{\textbf{x}}_{k|k}^{(0)} = \widehat{\textbf{x}}_{k|k-1}\)) [49, 50],

$$\begin{aligned}&\widehat{\textbf{x}}_{k|k}^{(i+1)} = \widehat{\textbf{x}}_{k|k-1}\oplus {\textbf{K}}_k^{(i)} \left( {\textbf{y}}_k - {\textbf{h}}\left( \widehat{\textbf{x}}_{k|k}^{(i)}\right) + {\textbf{H}}_k^{(i)} \left( \widehat{\textbf{x}}_{k|k}^{(i)} \ominus \widehat{\textbf{x}}_{k|k-1}\right) \right) , \end{aligned}$$
(11)

where \({\textbf{H}}_k^{(i)}\) is the Jacobian computed at \(\widehat{\textbf{x}}_{k|k}^{(i)}\), and \({\textbf{K}}_k^{(i)}\) the Kalman gain computed from \({\textbf{P}}_{k|k-1}\) and \({\textbf{H}}_k^{(i)}\). Such an iterated correction step is sketched in Algorithm 1. Hereinafter, we define the convergence criterion as the norm of the intrinsic difference between consecutive state estimates over the iterations, such that

$$\begin{aligned} \left\| \widehat{\textbf{x}}_{k|k}^{(i+1)} \ominus \widehat{\textbf{x}}_{k|k}^{(i)} \right\| \le T \quad \text {or} \quad i=i_{\text {max}}, \end{aligned}$$
(12)

where T and \(i_{\text {max}}\) are the stopping threshold and the maximum number of iterations, respectively. The latter is needed since there are only local convergence guarantees for Gauss–Newton methods on Lie groups [51, 52].

Algorithm 1: Iterated ESKF correction step
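The iterated update (11) with stopping rule (12) can be sketched in the Euclidean special case, where \(\oplus\) and \(\ominus\) reduce to ordinary addition and subtraction; the function and argument names below are illustrative, and the on-manifold version would swap in the operators of Sect. 2.2.

```python
import numpy as np

def iterated_correction(x_pred, P_pred, y, h, H_jac, Sigma, T=1e-9, i_max=20):
    """Iterated EKF correction, Eqs. (11)-(12), Euclidean special case
    (a sketch; oplus/ominus are replaced by +/-)."""
    x = np.array(x_pred, dtype=float)
    for _ in range(i_max):
        H = H_jac(x)                          # Jacobian at the current iterate
        S = H @ P_pred @ H.T + Sigma          # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain, cf. Eq. (8c)
        x_new = x_pred + K @ (y - h(x) + H @ (x - x_pred))   # Eq. (11)
        converged = np.linalg.norm(x_new - x) <= T           # Eq. (12)
        x = x_new
        if converged:
            break
    P = (np.eye(len(x)) - K @ H) @ P_pred     # cf. Eq. (8e), final gain
    return x, P
```

For a linear measurement function, the loop converges after one effective iteration and reproduces the standard KF update.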

3.1.3 M-estimation regression-based robust filtering

Notice that the previous ESKF and I-ESKF are not robust against outliers or model mismatch (i.e. the filters assume perfect system knowledge). In the Euclidean case, robust statistics appear as a natural tool. Indeed, the correction step of the standard iterated EKF corresponds to solving a maximum a posteriori (MAP) problem, under Gaussian assumptions, through optimization,

$$\begin{aligned} \widehat{\textbf{x}}_{k|k}&= \arg \min _{{\textbf{x}}_k} \left( \Vert {\textbf{x}}_k - \widehat{\textbf{x}}_{k|k-1}\Vert _{{\textbf{P}}_{k|k-1}}^2 + \Vert {\textbf{y}}_k-{\textbf{H}}({\textbf{x}}_k) \Vert _{{\mathbf {\Sigma }}_k}^2 \right) . \end{aligned}$$

Since Gaussian assumptions are fragile against outliers, filters based on robust statistics, for instance using M-estimation, were proposed [6, 53]. In particular, such filters consider modified minimization problems:

$$\begin{aligned} \widehat{\textbf{x}}_{k|k} = \arg \min _{{\textbf{x}}_k} \left( \Vert {\textbf{x}}_k - \widehat{\textbf{x}}_{k|k-1}\Vert _{{\textbf{P}}_{k|k-1}}^2 + \Vert {\textbf{y}}_k- {\textbf{H}}({\textbf{x}}_k) \Vert _{{\bar{\mathbf {\Sigma }}}_k}^2 \right) , \end{aligned}$$
(13)

where the modified measurement noise covariance matrix \(\bar{{\mathbf {\Sigma }}}_k\) accounts for outlying observations. This matrix is built from robust score functions, in order to mitigate the effect of outliers on the final state estimate, and is given by

$$\begin{aligned} \bar{{\mathbf {\Sigma }}}_k = {\mathbf {\Sigma }}_k^{1/2} {\textbf{W}}_k^{-1} {\mathbf {\Sigma }}_k^{\top /2}, \end{aligned}$$
(14)

where \({\mathbf {\Sigma }}_k^{1/2}\) is the Cholesky factorization of \({\mathbf {\Sigma }}_k\) and \({\textbf{W}}_k\) is a weighting matrix depending on the observations’ residuals

$$\begin{aligned} {\textbf{W}}_k = \text {diag} \left[ {\textbf{w}} \left( {\mathbf {\Sigma }}_k^{-1/2} \big ( {\textbf{y}}_k - {\textbf{h}}({\textbf{x}_k}) \big ) \right) \right] , \end{aligned}$$
(15)

with \({\textbf{w}}(\cdot )\) a function derived from a robust score function (e.g. the Huber, Tukey or IGG weighting functions in Fig. 1) [6, 53]. The most representative monotone M-estimator is the Huber family of functions, which is the one used in this contribution. Relying on a parameter \(c_{\text {Hub}}\), the loss function \(\rho _{\text {Hub}}\) and the associated weight \(w_{\text {Hub}}\) are given by

$$\begin{aligned}&\rho _{\text {Hub}}(x) = \left\{ \begin{array}{cll} x^2 / 2 &{} \ \text {if} &{} |x|\le c_{\text {Hub}} \\ c_{\text {Hub}} |x| - \frac{1}{2} c_{\text {Hub}}^2 &{} \ \text {if} &{} |x|> c_{\text {Hub}} \end{array}\right. , \end{aligned}$$
(16)
$$\begin{aligned}&w_{\text {Hub}}(x) = \left\{ \begin{array}{cll} 1 &{} \qquad \quad \ \text {if} &{} |x|\le c_{\text {Hub}} \\ c_{\text {Hub}} / |x| &{} \qquad \quad \ \text {if} &{} |x|> c_{\text {Hub}} \end{array}\right. . \end{aligned}$$
(17)
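Huber's loss (16) and weight (17) can be sketched as vectorized NumPy functions; the default \(c_{\text{Hub}} = 1.345\) is the usual tuning for roughly 95% efficiency at the nominal Gaussian, matching the setting of Fig. 1.

```python
import numpy as np

def huber_rho(x, c=1.345):
    """Huber loss, Eq. (16): quadratic for inliers, linear beyond c."""
    ax = np.abs(x)
    return np.where(ax <= c, 0.5 * ax ** 2, c * ax - 0.5 * c ** 2)

def huber_w(x, c=1.345):
    """Huber weight, Eq. (17), i.e. rho'(x)/x: unit weight for inliers,
    downweighting proportional to 1/|x| beyond c."""
    ax = np.abs(x)
    return np.where(ax <= c, 1.0, c / np.maximum(ax, 1e-300))
```

These reproduce the Huber curves of Fig. 1 when plotted over a grid of normalized residuals.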

The application of robust statistics’ weighting functions within conventional KF schemes can be done by exploiting the information KF form [53], or by reformulating the KF as a regression problem (M-EKF) [6, 7]. While the robust information filter was recently extended to states with elements living on a manifold (e.g. the quaternion) in [34], the usual formulation of the M-EKF via a regression problem is not easily applicable to on-manifold systems. This is why another formulation, based on a modification of the Kalman update, was presented in [54]. For consistency of the presentation, we formulate both the standard M-EKF and the on-manifold one in the same way.

Fig. 1: Score (left) and weighting (right) functions for conventional LS (\(\ell _2\)), Huber, Tukey and IGG functions, with tuning parameters set for 95% efficiency (i.e. w.r.t. the nominal Gaussian distribution)

Although (13) is a weighted least-squares problem where the weights depend on the state, it can be solved through an iterative procedure. Starting with \(\widehat{\textbf{x}}_{k|k}^{(0)} = \widehat{\textbf{x}}_{k|k-1}\), at step (i) the correction is updated as

$$\begin{aligned}&\widehat{\textbf{x}}_{k|k}^{(i+1)} = \widehat{\textbf{x}}_{k|k-1} + \bar{\textbf{K}}_k^{(i)} \left( {\textbf{y}}_k - {\textbf{h}} \left( \widehat{\textbf{x}}_{k|k}^{(i)}\right) + {\textbf{H}}_k \left( \widehat{\textbf{x}}_{k|k}^{(i)} - \widehat{\textbf{x}}_{k|k-1}\right) \right) , \end{aligned}$$
(18a)

where \(\bar{\textbf{K}}_k^{(i)}\) is the modified Kalman gain, computed as

$$\begin{aligned} {\textbf{W}}^{(i)}_k&= \text {diag} \left[ {\textbf{w}} \left( {\mathbf {\Sigma }}_k^{-1/2} \big ( {\textbf{y}}_k - {\textbf{h}}(\widehat{\textbf{x}}_{k|k}^{(i)}) \big ) \right) \right] ,\end{aligned}$$
(18b)
$$\begin{aligned} \bar{{\mathbf {\Sigma }}}^{(i)}_k&= {\mathbf {\Sigma }}_k^{1/2} ({\textbf{W}}_k^{(i)})^{-1} {\mathbf {\Sigma }}_k^{\top /2},\end{aligned}$$
(18c)
$$\begin{aligned} \bar{\textbf{K}}_k^{(i)}&= {\textbf{P}}_{k|k-1} ({\textbf{H}}_k)^\top \left( {\textbf{H}}_k {\textbf{P}}_{k|k-1} ({\textbf{H}}_k)^{\top } + \bar{\mathbf {\Sigma }}^{(i)}_k \right) ^{-1}. \end{aligned}$$
(18d)

Once convergence is reached, the updated covariance matrix associated with the estimate is \(\textbf{P}_{k|k} = (\textbf{I} - \bar{\textbf{K}}_k^{(*)} \textbf{H}_k) {\textbf{P}}_{k|k-1}\), \((*)\) denoting the final step in the iteration.
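The IRLS-style update (18a)-(18d) can be sketched in the Euclidean case as follows; this is a sketch under the assumption that \(\oplus, \ominus\) reduce to \(+, -\), with a constant Jacobian `H` passed in (no inner relinearization loop) and `w` standing for any robust weight function such as Huber's.

```python
import numpy as np

def robust_correction(x_pred, P_pred, y, h, H, Sigma, w, T=1e-9, i_max=50):
    """M-type iterated correction, Eqs. (18a)-(18d), Euclidean case
    (a sketch; the on-manifold variant swaps +/- for oplus/ominus)."""
    L = np.linalg.cholesky(Sigma)                  # Sigma^{1/2}
    x = np.array(x_pred, dtype=float)
    for _ in range(i_max):
        r = np.linalg.solve(L, y - h(x))           # normalized residuals
        W = np.diag(w(r))                          # Eq. (18b)
        Sigma_bar = L @ np.linalg.inv(W) @ L.T     # Eq. (18c)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + Sigma_bar)  # (18d)
        x_new = x_pred + K @ (y - h(x) + H @ (x - x_pred))              # (18a)
        converged = np.linalg.norm(x_new - x) <= T
        x = x_new
        if converged:
            break
    return x, (np.eye(len(x)) - K @ H) @ P_pred    # final covariance

def hub_w(r, c=1.345):
    """Huber weight, Eq. (17)."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-300))
```

With one grossly outlying measurement, the robust estimate stays close to the inlier consensus, whereas the non-robust update would be dragged towards the outlier.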

3.2 Robust iterated error-state extended KFs

3.2.1 An ESKF robust to measurement outliers

Using the formulation proposed above is the easiest way to merge the robust weighting function with an ESKF. However, note that for nonlinear systems, the previous M-EKF formulation assumes that the first-order Taylor approximation of the measurement function is good enough (i.e. no iterative procedure over the measurement Jacobian linearization point). Therefore, it was proposed in [54] to also take advantage of the I-ESKF of Algorithm 1, leading to a double-loop procedure based on (11) and (18a). The outer loop on (i) computes the robust weights, while the inner one on (j) carries out an I-ESKF update with those weights. That is, \(\widehat{\textbf{x}}_{k|k}^{(i+1)} = \widehat{\textbf{x}}_{k|k}^{(i, *)}\), with,

$$\begin{aligned}&\widehat{\textbf{x}}_{k|k}^{(i, j+1)} = \widehat{\textbf{x}}_{k|k-1} \oplus \bar{\textbf{K}}_k^{(i,j)} \left( {\textbf{y}}_k - {\textbf{h}}(\widehat{\textbf{x}}_{k|k}^{(i,j)}) + {\textbf{H}}_k^{(j)} (\widehat{\textbf{x}}_{k|k}^{(i,j)} \ominus \widehat{\textbf{x}}_{k|k-1}) \right) , \end{aligned}$$
(19a)

where \(\bar{\textbf{K}}_k^{(i,j)}\) is again a modified Kalman gain, depending on the weights, but also on the Jacobian \({\textbf{H}}_k^{(j)}\) computed at \(\widehat{\textbf{x}}_{k|k}^{(i,j)}\):

$$\begin{aligned} {\textbf{W}}^{(i)}_k&= \text {diag} \left[ {\textbf{w}} \left( {\mathbf {\Sigma }}_k^{-1/2} \big ( {\textbf{y}}_k - {\textbf{h}}(\widehat{\textbf{x}}_{k|k}^{(i)}) \big ) \right) \right] , \end{aligned}$$
(19b)
$$\begin{aligned} \bar{{\mathbf {\Sigma }}}^{(i)}_k&= {\mathbf {\Sigma }}_k^{1/2} ({\textbf{W}}_k^{(i)})^{-1} {\mathbf {\Sigma }}_k^{\top /2}, \end{aligned}$$
(19c)
$$\begin{aligned} \bar{\textbf{K}}_k^{(i,j)}&= {\textbf{P}}_{k|k-1} ({\textbf{H}}_k^{(j)})^\top \left( {\textbf{H}}_k^{(j)} {\textbf{P}}_{k|k-1} ({\textbf{H}}_k^{(j)})^{\top } + \bar{\mathbf {\Sigma }}^{(i)}_k \right) ^{-1}. \end{aligned}$$
(19d)

Note that the weights are fixed inside the inner loop on (j), meaning that this loop actually corresponds to Algorithm 1, and that the initial weights are all one. The overall procedure of this Robust Iterated ESKF (RI-ESKF) is given in Algorithm 2. The convergence criterion follows the same formulation as in Algorithm 1. For further reference, the convergence of iteratively reweighted least-squares (IRLS) procedures, including their application to Lie groups, has been addressed in detail in [55].

Algorithm 2: RI-ESKF correction step

3.2.2 Taking into account prediction outliers

The RI-ESKF proposed in Sect. 3.2.1 is designed to automatically detect outliers in the observations. However, it does not shield against unexpected values in the prediction step. To this end, a modification of the RI-ESKF, denoted \(R^2\)I-ESKF, was also proposed in a preliminary work [54]. Its main idea is to apply the same weighting procedure to the prior as to the measurements, as is done in the regression-based M-EKF. This means that the prior covariance \(\textbf{P}_{k|k-1}\) is also weighted, so that (19) is modified as

$$\begin{aligned} {\textbf{W}}^{(i)}_{k,x}&= \text {diag} \left[ {\textbf{w}} \left( \textbf{P}_{k|k-1}^{-1/2} \left( \widehat{\textbf{x}}_{k|k-1} \ominus \widehat{\textbf{x}}_{k|k}^{(i)} \right) \right) \right] , \end{aligned}$$
(20a)
$$\begin{aligned} \bar{\textbf{P}}_{k|k-1}^{(i)}&= \textbf{P}_{k|k-1}^{1/2} (\textbf{W}^{(i)}_{k,x})^{-1} \textbf{P}_{k|k-1}^{\top /2}, \end{aligned}$$
(20b)
$$\begin{aligned} {\textbf{W}}^{(i)}_{k,y}&= \text {diag} \left[ {\textbf{w}} \left( {\mathbf {\Sigma }}_k^{-1/2} \big ( {\textbf{y}}_k - {\textbf{h}}(\widehat{\textbf{x}}_{k|k}^{(i)}) \big ) \right) \right] ,\end{aligned}$$
(20c)
$$\begin{aligned} \bar{{\mathbf {\Sigma }}}^{(i)}_k&= {\mathbf {\Sigma }}_k^{1/2} ({\textbf{W}}_{k,y}^{(i)})^{-1} {\mathbf {\Sigma }}_k^{\top /2}, \end{aligned}$$
(20d)
$$\begin{aligned} \bar{\textbf{K}}_k^{(i,j)}&= \bar{\textbf{P}}_{k|k-1}^{(i)} ({\textbf{H}}_k^{(j)})^\top \left( {\textbf{H}}_k^{(j)} \bar{\textbf{P}}_{k|k-1}^{(i)} ({\textbf{H}}_k^{(j)})^{\top } + \bar{\mathbf {\Sigma }}^{(i)}_k \right) ^{-1}. \end{aligned}$$
(20e)

The correction step of the R\(^2\)I-ESKF is then computed following Algorithm 3.

Algorithm 3: R\(^2\)I-ESKF correction step
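The prior reweighting (20a)-(20b) can be sketched in isolation; a minimal Euclidean sketch, where the `prior_residual` argument stands for the intrinsic difference \(\widehat{\textbf{x}}_{k|k-1} \ominus \widehat{\textbf{x}}_{k|k}^{(i)}\) and `w` is any robust weight function (the names are illustrative).

```python
import numpy as np

def reweight_prior(P_pred, prior_residual, w):
    """Prior reweighting, Eqs. (20a)-(20b): inflate the prior covariance
    along directions whose normalized residuals look like prediction
    outliers, so the filter trusts the measurements there instead."""
    L = np.linalg.cholesky(P_pred)             # P^{1/2}
    r = np.linalg.solve(L, prior_residual)     # normalized prior residual
    W = np.diag(w(r))                          # Eq. (20a)
    return L @ np.linalg.inv(W) @ L.T          # Eq. (20b), P-bar

def hub_w(r, c=1.345):
    """Huber weight, Eq. (17)."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-300))
```

Components with small normalized residuals keep their nominal prior variance, while an outlying component sees its variance inflated by \(|r|/c_{\text{Hub}}\).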

4 Experimental results

We consider the recursive Wahba’s problem throughout the paper [56]. The state consists of the 3D orientation and the gyroscope bias, with the dynamical model determined by the integration of angular velocity measurements as

$$\begin{aligned} \textbf{x}_k = \begin{bmatrix} \textbf{q}_k \\ \textbf{b}_{\omega ,k} \end{bmatrix}, \ \textbf{f}(\cdot ) : {\left\{ \begin{array}{ll} &{}\textbf{q}_k = \textbf{q}_{k-1} \otimes \text {Exp}(\varvec{\omega }_{k-1} - \textbf{b}_{\omega , k-1} + \textbf{v}_{k-1}) \\ &{}\textbf{b}_{\omega , k} = \textbf{b}_{\omega , k-1} \end{array}\right. } \end{aligned}$$
(21)

where \(\text {Exp}(\cdot )\) represents the exponential mapping from the algebra to the manifold, and \(\otimes\) the quaternion product, both defined in Sect. 2. The observation model consists of a set of N line-of-sight (LOS) observations:

$$\begin{aligned} \textbf{y}_k = \begin{bmatrix} (\textbf{R}_k \textbf{l}_1)^\top&\ldots&(\textbf{R}_k \textbf{l}_N)^\top \end{bmatrix}^\top + \varvec{\eta }_k \end{aligned}$$
(22)

where \(\textbf{l}_1, \dots , \textbf{l}_N\) are three-dimensional baselines in the local frame, and \(\textbf{R}_k\) is the local-to-inertial frame rotation associated with \(\textbf{q}_k\). Note that this observation function is indeed nonlinear, because of the rotation involved.
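The noise-free part of the stacked LOS observation function (22) can be sketched as follows; a minimal sketch assuming a scalar-first unit quaternion and the standard quaternion-to-rotation-matrix formula.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix of a scalar-first unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

def h_los(q, los):
    """Stacked LOS observation function, noise-free part of Eq. (22):
    rotate each baseline and concatenate the results."""
    R = quat_to_rot(q)
    return np.concatenate([R @ l for l in los])
```

With the identity quaternion the baselines are returned unchanged; a 90-degree rotation about z maps the x-axis baseline onto the y-axis.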

4.1 Simulation setup

We show the performance of the different robust M-type KFs presented in this contribution, w.r.t. the standard I-ESKF, using the recursive Wahba’s problem introduced above. The number of Monte Carlo (MC) runs is set to \(M=100\), the total duration of the experiment is 1000 s, and the prediction and correction steps are processed at 1 Hz. Outliers are present for three periods of 150 s along the trajectory, which are illustrated as grey zones in the various figures. The default values used in simulation are given in Table 1.

Table 1 MC default simulation parameters

Four different filters are compared:

  • An ideal I-ESKF, which uses the correct covariances when outliers are present.

  • The standard (not robust) I-ESKF.

  • The RI-ESKF.

  • The \(R^2\)I-ESKF.

We rely on two metrics to compare these filters, both based on the intrinsic error \(\varvec{\varepsilon }_k = \widehat{\textbf{x}}_{k|k} \ominus \textbf{x}_k\). First, the intrinsic root-mean-squared error (IRMSE):

$$\begin{aligned} \text{ IRMSE}_k = \sqrt{\mathbb {E} \left[ \Vert \varvec{\varepsilon }_k \Vert ^2 \right] }. \end{aligned}$$

Second, the intrinsic version of the average normalized estimation error squared (IANEES), which compares the estimated MSE with the one given by the estimated covariance. Note that the IANEES should revolve around 1 for consistent estimators:

$$\begin{aligned} \text{ IANEES}_k = \mathbb {E} \left[ \varvec{\varepsilon }_k^\top \textbf{P}_{k|k}^{-1} \varvec{\varepsilon }_k \right] . \end{aligned}$$
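Both metrics can be estimated from the MC runs by replacing the expectations with sample means; the sketch below assumes, for simplicity, a matrix of intrinsic errors at a given time step and a single common estimated covariance (in the paper each run has its own \(\textbf{P}_{k|k}\)).

```python
import numpy as np

def irmse(errors):
    """IRMSE_k: errors has shape (M, d), one intrinsic error per MC run."""
    return np.sqrt(np.mean(np.sum(errors ** 2, axis=1)))

def ianees(errors, P):
    """IANEES_k with a common estimated covariance P (d x d);
    should stay close to 1 for a consistent filter."""
    Pinv = np.linalg.inv(P)
    # per-run quadratic form eps^T P^{-1} eps, then the MC average
    return np.mean(np.einsum('md,de,me->m', errors, Pinv, errors))
```

An IANEES well above 1 indicates overconfidence (covariance too small), well below 1 underconfidence.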

4.2 Discussion on prediction outliers

Before providing a comprehensive performance analysis of both robust filters proposed in this contribution, we briefly discuss outliers appearing in the prediction step (i.e. state evolution outliers), which are much less common in the robust filtering literature than outliers appearing in the measurements. Notice that the weighting scheme used in Algorithm 3 to mitigate prediction outliers assumes that they appear in the normalized residuals \(\textbf{P}_{k|k-1}^{-1/2} ( \widehat{\textbf{x}}_{k|k-1} \ominus \widehat{\textbf{x}}_{k|k}^{(i)})\), in contrast to the standard robust weighting applied to the normalized innovation vector, \({\mathbf {\Sigma }}_k^{-1/2} \big ( {\textbf{y}}_k - {\textbf{h}}(\widehat{\textbf{x}}_{k|k}^{(i)}) \big )\). Theoretically, this would be the case if one could directly compare \(\widehat{\textbf{x}}_{k|k-1}\) with the true state \({\textbf{x}}_{k}\), and normalize with \(\textbf{Q}_k\). Of course, within the filter one only has access to estimates of the true state. Moreover, it is important to notice that prediction outliers have a recursive and cumulative impact: they may only be detected once the accumulated error exceeds the predicted uncertainty, thus with a given delay, whereas measurement outliers are detected immediately. It is therefore likely that prediction outliers are first interpreted by the filter as observation outliers.

Fig. 2

Comparison of the mean weights of \(\textbf{W}_{k,x}\) (prop. weights) and \(\textbf{W}_{k,y}\) (obs. weights) along the trajectory, for three pairs of values of \(\sigma _y\) and \(\alpha _\omega\)

To confirm the previous statement, we conducted an experiment with only prediction outliers, appearing at \(t=150\) s. Figure 2 shows the mean prediction and observation weights (using the \(R^2\)I-ESKF) along a trajectory for three different pairs of values of \(\sigma _y\) and \(\alpha _\omega\); the other parameters take the default values from Table 1. The grey zone indicates where outliers are present. On the top subplot, for \(\sigma _y=0.01\) m and \(\alpha _\omega = 20\), one can clearly see that both observations and prediction are downweighted at the beginning of the outlier zone, before the observation weights return to their nominal value. However, the middle subplot, with \(\sigma _y=0.1\) m and \(\alpha _\omega =20\), highlights a shortcoming of the proposed methodology: if the prediction outliers are too small compared to the measurement accuracy, they may be interpreted by the filter as observation outliers at all times; in that case, the prediction is never downweighted. Therefore, the performance of the \(R^2\)I-ESKF (and likely of the standard M-EKF in regression form) depends on the ratio between the size of the prediction outliers and the nominal accuracy of the observations. This is further illustrated by the bottom subplot, with \(\sigma _y=0.1\) m and \(\alpha _\omega =100\): the downweighting is first applied to the observations, before gradually switching to the propagation as outliers accumulate. This behaviour is also observed in the experiments presented in Sect. 4.3.

4.3 Assessing the filters’ robustness

4.3.1 Case I: nominal conditions

We first consider an experiment in nominal conditions, i.e. without outliers, since M-estimator-based filters are expected to be sub-optimal in this configuration compared to the standard KFs (here the I-ESKF). We thus measure this gap and check whether the RI- and \(R^2\)I-ESKF behave differently.

Fig. 3

Case I: nominal conditions (no outliers). Evolution over time of the IRMSE and IANEES for the attitude estimates. IANEES of RI-ESKF overlaps with Ideal

As can be observed in Fig. 3, both robust filters behave almost identically, with only a small loss of performance compared to the I-ESKF. More importantly, they show very good consistency, on par with the standard filter.

4.3.2 Case II: measurement outliers only

The second experiment only considers measurement outliers. This setup is expected to favour the RI-ESKF, but two questions remain open: what outlier ratio can it withstand, and does the \(R^2\)I-ESKF also perform well?

To this end, we considered four outlier ratios \(\varepsilon \in \{25, 50, 75, 90\}\, \%\). For this experiment, we increased the number of measurements to \(N = 10\); with \(90\%\) outliers, the system would otherwise become unobservable.
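A contamination of this kind can be sketched as an \(\varepsilon\)-mixture of the nominal measurement noise and an inflated one; the inflation factor `kappa` and the Gaussian mixture form below are our illustrative assumptions, since the paper's exact outlier distribution is not restated in this section:

```python
import numpy as np

def contaminate(h_x, sigma_y, eps, kappa=10.0, rng=None):
    # Draw each measurement component from N(h(x), sigma_y^2) with
    # probability 1 - eps, and from the inflated N(h(x), (kappa*sigma_y)^2)
    # with probability eps; also return the ground-truth outlier flags
    rng = np.random.default_rng() if rng is None else rng
    is_outlier = rng.random(h_x.shape) < eps
    scale = np.where(is_outlier, kappa * sigma_y, sigma_y)
    return h_x + scale * rng.standard_normal(h_x.shape), is_outlier
```

The returned flags are what an "ideal" filter, which knows which measurements are outliers, would use to exclude them.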

Fig. 4

Case II: measurement outliers only. Evolution over time of the IRMSE and IANEES for the attitude estimates, for \(\sigma _{y} = 0.1\) and \(\alpha _{\omega } = 100\)

Looking at Fig. 4, not only the RI-ESKF but also the \(R^2\)I-ESKF withstands up to 50\(\%\) of outliers in the measurements very well. Even under extremely harsh conditions, such as 90% contamination, the performance of the RI-ESKF remains relatively close to that of the ideal I-ESKF (which knows which measurements are outliers), although there is a loss in consistency.

In contrast, the \(R^2\)I-ESKF fails for large outlier ratios. This is not due to the ambiguity issue of Sect. 4.2: the filter first stays close to the RI-ESKF, but both accumulate error (their IRMSE grows). This error acts as a kind of propagation outlier, which in turn gets downweighted. However, looking at (13), if both terms receive equivalent weights, they may cancel out, so the filter cannot mitigate large outliers. Hence, the \(R^2\)I-ESKF cannot cope with extreme outlier ratios.

4.3.3 Case III: prediction outliers only

The third experiment only considers outliers in the prediction. After providing, in Sect. 4.2, our insights into how prediction outliers impact the filter and how they are detected, we can now analyse the performance of the proposed robust filters under these circumstances.

In this case, we compare the behaviour of the RI-ESKF and the \(R^2\)I-ESKF for several pairs of values of \(\sigma _{y}\) and \(\alpha _{\omega }\), as in Sect. 4.2.

Fig. 5

Case III: prediction outliers only. Evolution over time of the IRMSE and IANEES for the attitude estimates, for \((\sigma _{y}, \alpha _{\omega }) = (0.1, 100)\), \((0.01, 100)\), \((0.1, 20)\) and \((0.01, 20)\), respectively

As expected, the RI-ESKF behaves very poorly, even worse than the I-ESKF. The performance of the \(R^2\)I-ESKF, on the other hand, confirms the analysis presented in Fig. 2. When the prediction outliers are too small compared to the measurement accuracy, as in the second subfigure of Fig. 5, the filter does not recognize them as prediction outliers but as observation outliers, so the robust mechanism in the prediction plays no role and the \(R^2\)I-ESKF acts as the RI-ESKF. Nonetheless, as long as the prediction outliers are large enough with respect to the nominal observation noise, the \(R^2\)I-ESKF performs almost as well as the ideal filter, even in terms of consistency, as depicted in the first, third and fourth subfigures of Fig. 5. Interestingly, the convergence time also varies with \(\sigma _{y}\) and \(\alpha _{\omega }\), but it is always below 60 s. Equally important, the filter also detects the nominal zones and is able to reduce its error while staying consistent, like the ideal one.

4.3.4 Case IV: outliers in both prediction and measurements

Finally, to complete our analysis of the performance of the proposed robust filters, a fourth experiment considers both prediction and observation outliers simultaneously.

Fig. 6

Case IV: prediction and measurement outliers. Evolution over time of the IRMSE and IANEES for the attitude estimates, for \((\sigma _{y}, \alpha _{\omega }) = (0.1, 100)\), \((0.01, 100)\), \((0.1, 20)\) and \((0.01, 20)\), respectively

The analysis for this experiment uses the same pairs of values of \(\sigma _{y}\) and \(\alpha _{\omega }\) as in Sects. 4.2 and 4.3.3. In this case, we can notice in the first, second and fourth subfigures of Fig. 6 that, provided the prediction outliers are large enough with respect to the observation ones to be correctly detected, the behaviour of the \(R^2\)I-ESKF is outstanding. While the RI-ESKF is not able to counteract the impact of the outliers whatsoever, the \(R^2\)I-ESKF reacts after a short amount of time, recognizes the presence of outliers and manages to mitigate them correctly.

4.4 Impact of outliers on the convergence

We investigated the impact of outliers on the convergence of Algorithms 1–3. More precisely, we tracked the number of times (11) was computed. The average number of required iterations was evaluated over 100 Monte Carlo runs with the parameters of Table 1, for cases II and IV, using \(\sigma _y = 0.1\), \(\alpha _\omega = 100\) and \(25\%\) of outliers. The results are presented in Fig. 7. The number of iterations appears to be a good indicator of both the presence of outliers and the filter's robustness. Indeed, the required iterations rise during the outlier periods before returning to the nominal level, as seen in Case II; this return suggests that the filter successfully mitigated the outliers. In Case IV, the RI-ESKF is not able to mitigate propagation outliers, which transpires in the required iterations. On the contrary, the \(R^2\)I-ESKF is able to mitigate them, but at the cost of many more iterations when outliers are present, due to the ambiguity it suffers, as discussed in Sect. 4.2. Note that the iteration count may exceed \(i_{max}\) (see Table 1) simply because Algorithm 1 is called multiple times in the update step, due to the cascaded structure of the algorithms.
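The iteration count tracked above comes from an iteratively reweighted least squares (IRLS) loop at the core of the robust update. A generic sketch (linear regression with Huber weights and unit noise scale; an illustrative stand-in, not the paper's Algorithm 1) shows how such a counter is obtained and why outliers inflate it:

```python
import numpy as np

def irls_huber(A, y, c=1.345, tol=1e-8, i_max=50):
    # Iteratively reweighted least squares for y ~ A x with Huber weights;
    # returns the estimate and the number of reweighting iterations used
    x = np.linalg.lstsq(A, y, rcond=None)[0]    # plain LS initialization
    for i in range(1, i_max + 1):
        r = y - A @ x
        w = c / np.maximum(np.abs(r), c)        # Huber IRLS weights
        Aw = A * w[:, None]                     # row-weighted design matrix
        x_new = np.linalg.solve(A.T @ Aw, Aw.T @ y)   # A^T W A x = A^T W y
        if np.linalg.norm(x_new - x) < tol:
            return x_new, i                     # converged after i iterations
        x = x_new
    return x, i_max                             # hit the iteration cap
```

On clean data the loop stops almost immediately, whereas gross outliers force several reweighting passes before the weights stabilize, which is exactly the behaviour read off from Fig. 7.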

Fig. 7

Average number of computations of (11) in Algorithms 2 and 3 over time. Top: Case II. Bottom: Case IV

5 Conclusion

In this paper, we investigated the capability of two new robust counterparts of the iterated ESKF, for states living on a manifold (unit-norm quaternions), to mitigate the impact of outliers in both the propagation and observation models, which may otherwise cause performance breakdowns. These filters were obtained by applying the M-estimation methodology to attitude estimation. To handle the nonlinearities arising from the observation model and the M-estimator, a structure of two cascaded iterative processes was proposed. This approach was used to derive a robust iterated ESKF (RI-ESKF), aimed at mitigating measurement outliers, and an outlier-free prediction robust I-ESKF (\(R^2\)I-ESKF), which aims to be robust to both prediction and observation outliers. Numerical experiments illustrated the impact of outliers on the standard I-ESKF and the capacity of the proposed filters to mitigate them in various cases. The results highlight that prediction outliers can only be mitigated if they are large enough with respect to the nominal measurement accuracy, which points to a future research direction. In addition, the filters were able to handle large outlier proportions, up to 50%, although the \(R^2\)I-ESKF may fail for larger proportions.

Availability of data and materials

Data supporting the findings of this study are available on request from the corresponding author.

References

  1. Y. Bar-Shalom, X. Li, T. Kirubarajan, Estimation with applications to tracking and navigation: theory, algorithms and software. (2001). https://api.semanticscholar.org/CorpusID:108666793

  2. P.S.R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation (Springer, New York, 2006). https://doi.org/10.1007/978-0-387-68606-6


  3. D. Simon, Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches (Wiley InterScience, Hoboken, 2006)


  4. S. Särkkä, Bayesian Filtering and Smoothing (Cambridge University Press, Cambridge, 2013). https://doi.org/10.1017/9781108917407


  5. A. Doucet, A.M. Johansen, A tutorial on particle filtering and smoothing: fifteen years later, in Handbook of Nonlinear Filtering. ed. by D. Crisan, B. Rozovsky (Cambridge University Press, Cambridge, 2009)


  6. A.M. Zoubir, V. Koivunen, E. Ollila, M. Muma (eds.), Robust Statistics for Signal Processing (Cambridge University Press, London, 2018)


  7. M.A. Gandhi, L. Mili, Robust Kalman filter based on a generalized maximum-likelihood-type estimator. IEEE Trans. Signal Process. 58(5), 2509–2520 (2009)


  8. D. Medina, H. Li, J. Vilà-Valls, P. Closas, Robust filtering techniques for RTK positioning in harsh propagation environments. Sensors 21(4), 1250 (2021)


  9. Y. Huang, Y. Zhang, Y. Zhao, J.A. Chambers, A novel robust gaussian-student’s t mixture distribution based Kalman filter. IEEE Trans. Signal Process. 67(13), 3606–3620 (2019)


  10. G. Agamennoni, J.I. Nieto, E.M. Nebot, Approximate inference in state-space models with heavy-tailed noise. IEEE Trans. Signal Process. 60(10), 5024–5037 (2012)


  11. H. Li, D. Medina, J. Vilà-Valls, P. Closas, Robust variational-based Kalman filter for outlier rejection with correlated measurements. IEEE Trans. Signal Process. 69, 357–369 (2020)


  12. I. Hwang, S. Kim, Y. Kim, C.E. Seah, A survey of fault detection, isolation, and reconfiguration methods. IEEE Trans. Control Syst. Technol. 18(3), 636–653 (2009)


  13. F. Gustafsson, Adaptive Filtering and Change Detection, vol. 1 (Wiley, New York, 2000). https://api.semanticscholar.org/CorpusID:61489326

  14. W. Youn, S. Andrew Gadsden, Combined quaternion-based error state Kalman filtering and smooth variable structure filtering for robust attitude estimation. IEEE Access 7, 148989–149004 (2019). https://doi.org/10.1109/ACCESS.2019.2946609


  15. R.V. Vitali, R.S. McGinnis, N.C. Perkins, Robust error-state Kalman filter for estimating IMU orientation. IEEE Sens. J. 21(3), 3561–3569 (2021). https://doi.org/10.1109/JSEN.2020.3026895


  16. J. Vilà-Valls, E. Chaumette, F. Vincent, P. Closas, Robust linearly constrained Kalman filter for general mismatched linear state-space models. IEEE Trans. Autom. Control 67(12), 6794–6801 (2022)


  17. E. Hrustic, R. Ben Abdallah, J. Vilà-Valls, G. Pagés, E. Chaumette, Robust linearly constrained extended Kalman filter for mismatched nonlinear systems. Int. J. Robust Nonlinear Control 31(3), 787–805 (2021). https://doi.org/10.1002/rnc.5305


  18. R. Ben Abdallah, G. Pagés, D. Vivet, J. Vilà-Valls, E. Chaumette, Robust linearly constrained square-root cubature Kalman filter for mismatched nonlinear dynamic systems. IEEE Control Syst. Lett. 6, 2335–2340 (2022)


  19. P. Chauchat, J. Vilà-Valls, E. Chaumette, Robust linearly constrained invariant filtering for a class of mismatched nonlinear systems. IEEE Control Syst. Lett. 6, 223–228 (2021)


  20. C. Li, Z. Shi, Y. Liu, T. Liu, L. Xu, Efficient and robust direct image registration based on joint geometric and photometric lie algebra. IEEE Trans. Image Process. 27(12), 6010–6024 (2018). https://doi.org/10.1109/TIP.2018.2864895


  21. S. Labsir, G. Pages, D. Vivet, Lie group modelling for an EKF-based monocular slam algorithm. Remote Sens. 14(3), 571 (2022)


  22. J. Solà, Quaternion kinematics for the error-state Kalman filter. CoRR abs/1711.02508 (2017) arXiv:1711.02508

  23. N. Trawny, S.I. Roumeliotis, Indirect Kalman filter for 3D attitude estimation. Tech. Rep. 2, University of Minnesota, Department of Computer Science and Engineering (2005)

  24. A. Barrau, S. Bonnabel, Invariant Kalman filtering. Ann. Rev. Control Robot. Auton. Syst. 1, 237–257 (2018)


  25. A. Barrau, S. Bonnabel, The invariant extended Kalman filter as a stable observer. IEEE Trans. Autom. Control 62(4), 1797–1812 (2016)


  26. A.M. Rad, J.H. Nobari, A.A. Nikkhah, Optimal attitude and position determination by integration of INS, star tracker, and horizon sensor. IEEE Aerosp. Electron. Syst. Mag. 29(4), 20–33 (2014)


  27. C.C. Liebe, Star trackers for attitude determination. IEEE Aerosp. Electron. Syst. Mag. 10(6), 10–16 (1995)


  28. J. Wu, Z. Zhou, J. Chen, H. Fourati, R. Li, Fast complementary filter for attitude estimation using low-cost MARG sensors. IEEE Sens. J. 16(18), 6997–7007 (2016)


  29. A.E.R. Shabayek, C. Demonceaux, O. Morel, D. Fofi, Vision based UAV attitude estimation: progress and insights. J. Intell. Rob. Syst. 65, 295–308 (2012)


  30. D. Medina, J. Vilà-Valls, A. Heßelbarth, R. Ziebold, J. García, On the recursive joint position and attitude determination in multi-antenna GNSS platforms. Remote Sens. 12(12), 1955 (2020)


  31. P.J.G. Teunissen, G. Giorgi, P.J. Buist, Testing of a new single-frequency GNSS carrier phase attitude determination method: land, ship and aircraft experiments. GPS Solut. 15(1), 15–28 (2011)


  32. D. Medina, Robust GNSS carrier phase-based position and attitude estimation theory and applications. PhD thesis, Universidad Carlos III de Madrid (2022)

  33. H. Yang, L. Carlone, A quaternion-based certifiably optimal solution to the Wahba problem with outliers. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1665–1674 (2019)

  34. A. Bellés, D. Medina, P. Chauchat, J. Vilà-Valls, Reliable GNSS joint position and attitude estimation in harsh environments via robust statistics. In: Proc. of the IEEE Aerospace Conference (2022)

  35. A.P. Bustos, T.-J. Chin, Guaranteed outlier removal for point cloud registration with correspondences. IEEE Trans. Pattern Anal. Mach. Intell. 40(12), 2868–2882 (2017)


  36. T. Shan, B. Englot, LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4758–4765 (2018). IEEE

  37. R. Mur-Artal, J.D. Tardós, ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Rob. 33(5), 1255–1262 (2017)


  38. T. Qin, P. Li, S. Shen, VINS-Mono: a robust and versatile monocular visual-inertial state estimator. IEEE Trans. Rob. 34(4), 1004–1020 (2018)


  39. G. Giorgi, P.J. Teunissen, S. Verhagen, P.J. Buist, Testing a new multivariate GNSS carrier phase attitude determination method for remote sensing platforms. Adv. Space Res. 46(2), 118–129 (2010)


  40. S.T. Goh, K.-S. Low, Survey of global-positioning-system-based attitude determination algorithms. J. Guid. Control. Dyn. 40(6), 1321–1335 (2017). https://doi.org/10.2514/1.G002504


  41. F.L. Markley, J.L. Crassidis, Fundamentals of Spacecraft Attitude Determination and Control, vol. 1286 (Springer, New York, 2014)


  42. J.A. Christian, A tutorial on horizon-based optical navigation and attitude determination with space imaging systems. IEEE Access 9, 19819–19853 (2021)


  43. J.A. Christian, J.L. Crassidis, Star identification and attitude determination with projective cameras. IEEE Access 9, 25768–25794 (2021)


  44. H. Yang, J. Shi, L. Carlone, TEASER: fast and certifiable point cloud registration. IEEE Trans. Rob. 37(2), 314–333 (2020)


  45. T.D. Barfoot, State Estimation for Robotics (Cambridge University Press, Cambridge, 2017). https://doi.org/10.1017/9781316671528


  46. J. Stillwell, Naive Lie Theory (Springer, New York, 2008)


  47. J. Solà, J. Deray, D. Atchuthan, A micro lie theory for state estimation in robotics. arXiv preprint arXiv:1812.01537 (2018)

  48. B.M. Bell, F.W. Cathey, The iterated Kalman filter update as a Gauss–Newton method. IEEE Trans. Autom. Control 38(2), 294–297 (1993)


  49. G. Bourmaud, R. Mégret, A. Giremus, Y. Berthoumieu, From intrinsic optimization to iterated extended Kalman filtering on lie groups. J. Math. Imaging Vis. 55(3), 284–303 (2016). https://doi.org/10.1007/s10851-015-0622-8


  50. P. Chauchat, A. Barrau, S. Bonnabel, Invariant smoothing on lie groups. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain (2018). https://hal.archives-ouvertes.fr/hal-01725847

  51. G. Bourmaud, R. Mégret, A. Giremus, Y. Berthoumieu, From intrinsic optimization to iterated extended Kalman filtering on lie groups. J. Math. Imaging Vis. 55, 284–303 (2016)


  52. P.-A. Absil, R. Mahony, R. Sepulchre, Optimization Algorithms on Matrix Manifolds (Princeton University Press, Princeton, 2008)


  53. L. Chang, K. Li, Unified form for the robust Gaussian information filtering based on M-estimate. IEEE Signal Process. Lett. 24(4), 412–416 (2017)


  54. A. Bellés, D. Medina, P. Chauchat, S. Labsir, J. Vilà-Valls, Robust M-type error-state Kalman filters for attitude estimation. In: Proc. of the European Signal Processing Conference (EUSIPCO) (2023)

  55. K. Aftab, R. Hartley, Convergence of iteratively re-weighted least squares to robust m-estimators. In: 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 480–487 (2015). https://doi.org/10.1109/WACV.2015.70

  56. G. Wahba, A least squares estimate of satellite attitude. SIAM Rev. 7(3), 409–409 (1965). https://doi.org/10.1137/1007077



Funding

Open Access funding enabled and organized by Projekt DEAL. This research was partially supported by DLR’s FuturePorts and HiGAIN project and the DGA/AID project 2022.65.0082.

Author information


Contributions

All authors contributed equally. All authors have read and agreed to the published version of the manuscript. The authors declare this work is original and has not been published nor submitted for publication elsewhere.

Corresponding author

Correspondence to Andrea Bellés.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no conflict of interest nor issues related to journal policies to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Bellés, A., Medina, D., Chauchat, P. et al. Robust error-state Kalman-type filters for attitude estimation. EURASIP J. Adv. Signal Process. 2024, 75 (2024). https://doi.org/10.1186/s13634-024-01172-w
