Robust error-state Kalman-type filters for attitude estimation
EURASIP Journal on Advances in Signal Processing volume 2024, Article number: 75 (2024)
Abstract
State estimation techniques appear in a plethora of engineering fields, in particular for the attitude estimation application of interest in this contribution. A number of filters have been devised for this problem, in particular Kalman-type ones, but in their standard form they are known to be fragile against outliers. In this work, we focus on error-state filters, designed for states living on a manifold, here unit-norm quaternions. We propose extensions based on robust statistics, leading to two robust M-type filters able to tackle outliers in the measurements, in the system dynamics, or in both. The performance and robustness of these filters are explored in a numerical experiment. We first assess the outlier ratio that they manage to mitigate, and second the type of dynamics outliers that they can detect, showing that the filter performance depends on the measurements’ properties.
1 Introduction
State estimation is a fundamental task in several engineering applications, for instance in robotics, guidance, navigation or tracking systems [1,2,3]. Standard filtering techniques, such as the Kalman filter (KF) for linear systems, the extended or sigma-point KFs for nonlinear/Gaussian state-space models (SSMs) [4], or even the family of sequential Monte Carlo methods [5], assume a perfect system knowledge: known system matrices/functions, known noise statistics and parameters, known inputs and perfect initialization. However, such an assumption does not typically hold in real-life applications, in which case the performance of these filters may be severely degraded, which is why it is mandatory to develop robust filtering techniques. A number of alternatives exist in the literature to deal with possibly misspecified systems, for instance: i) robust statistics-based filters, which are nearly optimal under nominal Gaussian conditions, while outlier-insensitive under normal mixture noise distributions [6,7,8]; ii) variational inference-based filters, designed for state estimation under heavy-tailed parametric distributions [9, 10], or able to detect and reject outliers [11]; iii) fault detection and exclusion algorithms, which identify the faulty measurements based on statistical tests [12, 13]; iv) the smooth variable structure filter [14], a hybrid filter designed to constrain state estimates within a boundary surrounding the true state trajectory (it provides robustness against model uncertainty but falls short when it comes to coping with dynamics and measurement noise, so it is considered a suboptimal alternative to the KF); v) the robust decision-making framework based on decision trees [15]; or vi) recent contributions on linearly constrained filtering that provide a new paradigm for protecting against model mismatch in the prediction and observation models [16,17,18], whose extension to manifold-based estimation is also available [19].
From a methodological point of view, the main goal of this contribution is to design new robust filtering methods, able to cope with outliers in both state evolution and measurements, for SSMs with states living on a manifold.
The underlying problem of interest is attitude estimation, which refers to finding the relative orientation between two Cartesian frames or, in other words, to the determination of the spatial orientation of a platform. Orientation information is fundamental to aid vehicles with large inertia, such as airplanes or ships, and to operate systems equipped with pointing devices, such as satellites. The attitude problem also arises in computer vision applications, being used to relate keyframes or register point clouds [20, 21]. Within the context of navigation, the attitude varies with time, and its recursive estimation is addressed with nonlinear extensions of the original KF, such as the error-state KF (ESKF) [22, 23] or the invariant KF (IKF) [24, 25]. The aforementioned filters preserve the geometrical properties of the attitude estimates or, equivalently, ensure that the sequence of estimates remains on the corresponding manifold. The underlying concept relates to estimating a vector of perturbations around the state on the algebra associated with the target manifold. For three-dimensional attitude systems, for instance, that manifold is SO(3), while for the rotation-translation pair it is SE(3). When it comes to sensor integration, attitude systems typically include relative (i.e. angular rate) and absolute (i.e. observation models for the orientation) attitude information. Relevant works on recursive attitude estimation include the use of star trackers and magnetometers for space applications [26,27,28], or visual sensors and Global Navigation Satellite Systems (GNSS) antenna arrays for vehicular platforms [29,30,31,32]. Unfortunately, outlying measurements are often captured by attitude-related sensor modalities. This is related to signal reflection and multipath in GNSS, wrong data association in visual systems or unexpected magnetic fields in magnetometers.
Moreover, gyroscope systems may experience scale issues or misalignments, with the subsequent deterioration of the angular rate integration.
As already stated for standard filtering techniques, under the presence of contaminated data and/or system inputs, the performance of conventional attitude filters also rapidly degrades and the estimates become unreliable. With the exception of a few examples [33,34,35], the use of robust filters for attitude estimation has not been addressed in the literature. More specifically, a connection between robust estimation and the most widespread attitude filters (i.e. the ESKF and IKF) has not been drawn. In this work, we lay the focus on robust statistics-based filters, which typically make use of a robust score function to mitigate the impact of outlying measurements. This leads to a family of so-called (linear or nonlinear) M-type KFs. In this contribution, we propose two new nonlinear robust iterated ESKFs for states living on a manifold (i.e. the 3D unit sphere \(\mathcal {S}^3\) that represents unit-norm quaternions), which overcome some of the limitations of existing solutions; one of them copes not only with outliers in the observation equation, but also with state evolution outliers. Numerical experiments investigate the robustness of these filters to outliers in the attitude estimation context. In particular, we focus on the outlier ratio that they can tackle, and on whether they can properly mitigate outliers in the system dynamics. For the latter, we discuss the cases in which the filter succeeds.
The rest of the paper is structured as follows. Section 2 introduces the discrete and non-Euclidean SSM. The main methodological contribution of this work is introduced in Sect. 3, which first recalls the basics of the ESKF, before deriving two robust M-type filters for the attitude problem. Section 4 presents the numerical experiments and explores the robustness of the various filters. Finally, Sect. 5 presents the concluding remarks.
2 System model and problem formulation
2.1 Attitude estimation problem
Let us consider a general attitude problem and define the state of the system as
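In a common formulation (our sketch; the exact arrangement in the paper's display may differ), the state stacks the quaternion and the remaining unknowns:

```latex
\mathbf{x} = \begin{bmatrix} \mathbf{q}^{\top} & \mathbf{a}^{\top} \end{bmatrix}^{\top},
\qquad \mathbf{q} \in \mathcal{S}^{3}, \;\; \mathbf{a} \in \mathbb{R}^{p},
```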
where \({\textbf{q}}\) is a unit-norm quaternion that encodes the attitude of interest and \(\textbf{a}\) is a generic vector that encapsulates the remaining unknowns. The goal is then to recursively estimate the state at time k, \({\textbf{x}}_k\), from the measurements available up to and including time k, \({\textbf{y}}_{1:k}\). In other words, the problem is to recursively compute or approximate the marginal posterior \(p({\textbf{x}}_k \mid {\textbf{y}}_{1:k})\). In addition, the system may be corrupted by unknown measurement and/or input outliers, and thus a robust solution is required.
To this end, attitude estimation can be formulated in a general manner with the following SSM,
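A generic on-manifold SSM of this kind can be sketched as follows (our reconstruction; the process noise is shown injected through the \(\oplus\) composition introduced in Sect. 2.2, although other conventions exist):

```latex
\mathbf{x}_{k} = \mathbf{f}\!\left(\mathbf{x}_{k-1}, \boldsymbol{\omega}_{k}\right) \oplus \mathbf{v}_{k},
\qquad
\mathbf{y}_{k} = \mathbf{h}\!\left(\mathbf{x}_{k}\right) + \boldsymbol{\eta}_{k},
```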
where both process and observation functions, \(\textbf{f}_{}(\cdot )\) and \(\textbf{h}(\cdot )\), are assumed to be known; \({\textbf{v}}_{k}\) and \({\varvec{\eta }}_k\) are the process and measurement noise sequences; and \({\varvec{\omega }}_{k}\) is a system input (e.g. inertial measurements). Regarding the noise sequences, the process noise is typically assumed to be zero-mean Gaussian with known covariance matrix \(\textbf{Q}_{k}\), \({\textbf{v}}_{k}\sim {\mathcal {N}}({\textbf{0}},{\textbf{Q}}_{k})\), while the measurement noise density depends on the system operation mode. Under nominal conditions, one may assume a zero-mean Gaussian noise with known covariance matrix \(\mathbf {\Sigma }_k\), \({\varvec{\eta }}_k\sim {\mathcal {N}}({\textbf{0}},{\mathbf {\Sigma }}_k)\), but under non-nominal conditions (i.e. corrupted measurements), the system is affected by outliers, in which case the measurement noise density is typically unknown, heavy-tailed and/or skewed.
This general formulation allows one to represent a plethora of engineering problems, such as simultaneous localization and mapping (SLAM) [36,37,38], GNSS-based navigation [30, 39, 40], spacecraft control systems [41,42,43] or computer vision tasks [44]. For the remainder of the manuscript, we keep the generic formulation of unknowns living in the real and unit-norm quaternion spaces. A practical example on orientation estimation based on gyroscope integration and unit line-of-sight vectors is detailed in Sect. 4.
2.2 Filtering on a manifold space
The traditional KF and its nonlinear extensions must be adapted in order to embrace the non-Euclidean structure of some spaces, such as the one introduced in (1). The inherent constraints of these objects (e.g. having unit norm) are typically addressed with geometric tools such as Lie group theory [45]. Among the possible solutions, it is common practice to resort to the so-called ESKF [22]. The ESKF separates the state \(\textbf{x}_k\), which belongs to a manifold (here a Lie group), from its perturbations \(\delta \textbf{x}_k\), which live in the tangent space of that manifold (i.e. the algebra of the manifold), which is a vector space.
The composition operator \(\oplus\) (“oplus”) and its inverse \(\ominus\) (“ominus”) allow for a bijective mapping between a local neighbourhood of a manifold \(\mathcal {M}\) and its algebra, a vector space.
In our case, the manifold of interest is \({\mathcal {S}}^{3} \times {\mathbb {R}}^{p}\), the product of the unit sphere (quaternion space) with a vector space. We can go from the algebra to the manifold thanks to the exponential function, thus we can associate a quaternion \(\textbf{q}\) to a perturbation \(\textbf{t} \in \mathbb {R}^3\) through the following functions:
This operation can be reversed. Taking \(\textbf{q}^\top = \begin{bmatrix} q_w & \textbf{q}_u^\top \end{bmatrix}\), we then have
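As an executable illustration of these two maps (a minimal pure-Python sketch under the Hamilton convention; the function names are ours and the paper's exact conventions may differ):

```python
import math

def quat_exp(t):
    """Exponential map: rotation vector t in R^3 -> unit quaternion [qw, qx, qy, qz]."""
    theta = math.sqrt(sum(c * c for c in t))
    if theta < 1e-12:
        # First-order (small-angle) approximation near the identity
        return [1.0, 0.5 * t[0], 0.5 * t[1], 0.5 * t[2]]
    s = math.sin(theta / 2.0) / theta
    return [math.cos(theta / 2.0), s * t[0], s * t[1], s * t[2]]

def quat_log(q):
    """Logarithmic map: unit quaternion -> rotation vector in R^3 (inverse of quat_exp)."""
    qw, qu = q[0], q[1:]
    nu = math.sqrt(sum(c * c for c in qu))
    if nu < 1e-12:
        return [2.0 * c for c in qu]  # small-angle limit
    angle = 2.0 * math.atan2(nu, qw)
    return [angle * c / nu for c in qu]
```

A round trip `quat_log(quat_exp(t))` recovers the perturbation `t`, which is precisely the bijection between a neighbourhood of the identity and the algebra discussed above.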
Now, we can apply the “oplus” and “ominus” operators to the general state estimate in (1) as
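A common choice for these composite operators, acting blockwise on the quaternion and vector parts with \(\delta \textbf{x} = [\delta \boldsymbol{\theta }^\top , \textbf{b}^\top ]^\top\) (our sketch; the paper's display may differ in ordering), is:

```latex
\mathbf{x} \oplus \delta\mathbf{x} =
\begin{bmatrix} \mathbf{q} \otimes \mathrm{Exp}(\delta\boldsymbol{\theta}) \\ \mathbf{a} + \mathbf{b} \end{bmatrix},
\qquad
\mathbf{x}_{1} \ominus \mathbf{x}_{2} =
\begin{bmatrix} \mathrm{Log}\!\left(\mathbf{q}_{2}^{-1} \otimes \mathbf{q}_{1}\right) \\ \mathbf{a}_{1} - \mathbf{a}_{2} \end{bmatrix},
```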
with \(\otimes\) and \((\cdot )^{-1}\) the quaternion product and quaternion inverse, respectively, and \(\textbf{b}\) being a generic error vector for the remaining unknowns. Refer to [45,46,47] for a thorough and detailed discussion on quaternions and Lie group theory.
As with the standard KF, there is no reason why error-state filters should be robust against outliers. The main contribution of this article is to study this problem and propose a solution based on robust statistics.
3 Methodology
3.1 Background: errorstate KFs and robust filtering
3.1.1 Standard errorstate extended KF
For the problem at hand, in order to preserve the unit-norm quaternion constraint, the ESKF uses the \(\oplus\) operator in (7) (i.e. instead of the standard addition) to linearize and update the system. This ensures that the state estimate stays on the smooth (usually Riemannian) manifold. If \(\widehat{\textbf{x}}_{k-1|k-1}\) denotes the state estimate at discrete time \(k-1\), and \({\textbf{P}}_{k-1|k-1}\) the corresponding estimation error covariance, then the ESKF is
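A standard ESKF recursion consistent with this description (our sketch; the elided display may differ in detail) is:

```latex
\begin{aligned}
\widehat{\mathbf{x}}_{k|k-1} &= \mathbf{f}\big(\widehat{\mathbf{x}}_{k-1|k-1}, \boldsymbol{\omega}_{k}\big),
&\qquad
\mathbf{P}_{k|k-1} &= \mathbf{F}_{k-1}\,\mathbf{P}_{k-1|k-1}\,\mathbf{F}_{k-1}^{\top} + \mathbf{Q}_{k},\\
\mathbf{K}_{k} &= \mathbf{P}_{k|k-1}\mathbf{H}_{k}^{\top}
\big(\mathbf{H}_{k}\mathbf{P}_{k|k-1}\mathbf{H}_{k}^{\top} + \boldsymbol{\Sigma}_{k}\big)^{-1},
&&\\
\widehat{\mathbf{x}}_{k|k} &= \widehat{\mathbf{x}}_{k|k-1} \oplus
\mathbf{K}_{k}\big(\mathbf{y}_{k} - \mathbf{h}(\widehat{\mathbf{x}}_{k|k-1})\big),
&\qquad
\mathbf{P}_{k|k} &= (\mathbf{I} - \mathbf{K}_{k}\mathbf{H}_{k})\,\mathbf{P}_{k|k-1},
\end{aligned}
```

i.e. the usual EKF structure, with the correction injected through \(\oplus\) so that the estimate stays on the manifold.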
where \({\textbf{F}}_{k-1}, {\textbf{H}}_k\) represent the Jacobians of the process and observation models, \({\textbf{f}}_{k-1}, \textbf{h}_k\) (i.e. w.r.t. \(\textbf{x}_k = \widehat{\textbf{x}}_k\oplus \delta \textbf{x}_k\)). Applying the chain rule, the Jacobian matrices can be expressed as
3.1.2 Iterated errorstate extended KF
It is known that the standard EKF loses the optimality guarantees of the linear KF mainly due to linearization errors. A possible solution to this problem is the iterated EKF, which aims at finding the best linearization point for the measurement Jacobian at each update using a Gauss–Newton scheme [48]. Therefore, iterated schemes are particularly useful when the linearization errors of the measurement function are the main source of instability. In the ESKF context, and considering states living on manifolds, the iterative sequence \((\widehat{\textbf{x}}_{k|k}^{(i)})_{i \ge 0}\) to compute \(\widehat{\textbf{x}}_{k|k}\), leading to an iterated ESKF (IESKF), is (using the associated \(\oplus , \ominus\) operators and \(\widehat{\textbf{x}}_{k|k}^{(0)} = \widehat{\textbf{x}}_{k|k-1}\)) [49, 50],
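This Gauss–Newton iterate commonly takes the following on-manifold form (our sketch; the paper's Eq. (11) may differ in detail):

```latex
\widehat{\mathbf{x}}_{k|k}^{(i+1)} = \widehat{\mathbf{x}}_{k|k-1} \oplus
\mathbf{K}_{k}^{(i)}\Big(\mathbf{y}_{k} - \mathbf{h}\big(\widehat{\mathbf{x}}_{k|k}^{(i)}\big)
+ \mathbf{H}_{k}^{(i)}\big(\widehat{\mathbf{x}}_{k|k}^{(i)} \ominus \widehat{\mathbf{x}}_{k|k-1}\big)\Big),
```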
where \({\textbf{H}}_k^{(i)}\) is the Jacobian computed at \(\widehat{\textbf{x}}_{k|k}^{(i)}\), and \({\textbf{K}}_k^{(i)}\) the Kalman gain computed from \({\textbf{P}}_{k|k-1}\) and \({\textbf{H}}_k^{(i)}\). Such an iterated correction step is sketched in Algorithm 1. Hereinafter, we define the convergence criterion as the norm of the intrinsic difference between consecutive state estimates over the iterations, such that
where T and \(i_{\text {max}}\) are the threshold for the stopping criterion and the maximum number of iterations, respectively. The latter is needed since there are only local convergence guarantees for Gauss–Newton methods on Lie groups [51, 52].
3.1.3 Mestimation regressionbased robust filtering
Notice that the previous ESKF and IESKF are not robust against outliers or model mismatch (i.e. the filters assume a perfect system knowledge). It turns out that, in the Euclidean case, robust statistics appear as a natural tool. Indeed, the correction step of the standard iterated EKF corresponds to solving a maximum a posteriori (MAP) problem, under Gaussian assumptions, through optimization,
Gaussian assumptions being fragile against outliers, filters based on robust statistics, for instance using Mestimation, were proposed [6, 53]. In particular, filters based on the robust statistics framework consider modified minimization problems:
where a modified measurement noise covariance matrix \(\bar{{\mathbf {\Sigma }}}_k\) allows one to account for outlying observations. This matrix is obtained using robust score functions, which mitigate the effect of outliers on the final state estimate, and is given by
where \({\mathbf {\Sigma }}_k^{1/2}\) is the Cholesky factorization of \({\mathbf {\Sigma }}_k\) and \({\textbf{W}}_k\) is a weighting matrix depending on the observations’ residuals
with \({\textbf{w}}(\cdot )\) a function derived from a robust score function (e.g. Huber, Tukey or IGG weighting functions in Fig. 1) [6, 53]. The most representative monotone M-estimator is given by Huber's family of functions, which is the one used in this contribution. Relying on a parameter \(c_{\text {Hub}}\), the loss function \(\rho _{\text {Hub}}\) and associated weight \(w_{\text {Hub}}\) are given by
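In their standard form, \(\rho _{\text {Hub}}(r) = r^2/2\) for \(|r| \le c_{\text {Hub}}\) and \(c_{\text {Hub}}|r| - c_{\text {Hub}}^2/2\) otherwise, with weight \(w_{\text {Hub}}(r) = \min (1, c_{\text {Hub}}/|r|)\). A minimal sketch (the function names and the default \(c_{\text {Hub}} = 1.345\), a common tuning for 95% Gaussian efficiency, are our assumptions):

```python
def huber_weight(r, c=1.345):
    """Huber weight w_Hub(r): 1 inside the central region, c/|r| in the tails."""
    return 1.0 if abs(r) <= c else c / abs(r)

def huber_loss(r, c=1.345):
    """Huber loss rho_Hub(r): quadratic near zero, linear (slope c) in the tails."""
    return 0.5 * r * r if abs(r) <= c else c * abs(r) - 0.5 * c * c
```

Small residuals keep full weight (quadratic, i.e. least-squares behaviour), while large residuals are downweighted proportionally to their magnitude, which is what bounds the influence of outliers.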
The application of robust statistics’ weighting functions within conventional KF schemes can be done by exploiting the information KF form [53], or by reformulating the KF as a regression problem (M-EKF) [6, 7]. While the extension of the robust information filter to the case where some state elements live on a manifold (e.g. the quaternion) was recently done in [34], the usual formulation of the M-EKF via a regression problem is not easily applicable to on-manifold systems. This is why another formulation, based on a modification of the Kalman update, was presented in [54]. For consistency of presentation, we formulate both the standard M-EKF and the on-manifold one in the same way.
Since (13) is a weighted least-squares problem where the weights depend on the state, it can be solved through an iterative procedure. Starting with \(\widehat{\textbf{x}}_{k|k}^{(0)} = \widehat{\textbf{x}}_{k|k-1}\), at step (i) the correction is updated as
where \(\bar{\textbf{K}}_k^{(i)}\) is the modified Kalman gain, computed as
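This modified gain presumably follows the usual expression with the reweighted noise covariance of (15) (our sketch under the definitions above):

```latex
\bar{\mathbf{K}}_{k}^{(i)} = \mathbf{P}_{k|k-1}\mathbf{H}_{k}^{\top}
\big(\mathbf{H}_{k}\mathbf{P}_{k|k-1}\mathbf{H}_{k}^{\top} + \bar{\boldsymbol{\Sigma}}_{k}^{(i)}\big)^{-1},
\qquad
\bar{\boldsymbol{\Sigma}}_{k}^{(i)} = \boldsymbol{\Sigma}_{k}^{1/2}\big(\mathbf{W}_{k}^{(i)}\big)^{-1}\boldsymbol{\Sigma}_{k}^{\top/2},
```

where the weighting matrix \(\mathbf{W}_{k}^{(i)}\) is recomputed from the residuals at each iteration.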
Once convergence is reached, the updated covariance matrix associated with the estimate is \(\textbf{P}_{k|k} = (\textbf{I} - \bar{\textbf{K}}_k^{(*)} \textbf{H}_k) {\textbf{P}}_{k|k-1}\), with \((*)\) denoting the final step in the iteration.
3.2 Robust iterated errorstate extended KFs
3.2.1 An ESKF robust to measurement outliers
Using the formulation proposed above is the easiest way to merge the robust weighting function with an ESKF. Note, however, that for nonlinear systems the previous M-EKF formulation assumes that the first-order Taylor approximation of the measurement function is good enough (i.e. there is no iterative procedure over the measurement Jacobian linearization point). Therefore, it was proposed in [54] to also take advantage of the IESKF of Algorithm 1, leading to a double-loop procedure based on (11) and (18a). The outer loop on (i) computes the robust weights, while the inner loop on (j) carries out an IESKF update with those weights. That is, \(\widehat{\textbf{x}}_{k|k}^{(i+1)} = \widehat{\textbf{x}}_{k|k}^{(i, *)}\), with,
where \(\bar{\textbf{K}}_k^{(i,j)}\) is again a modified Kalman gain, depending on the weights, but also on the Jacobian \({\textbf{H}}_k^{(j)}\) computed at \(\widehat{\textbf{x}}_{k|k}^{(i,j)}\):
Note that the weights are fixed inside the inner loop on (j), meaning that this loop actually corresponds to Algorithm 1, and that the initial weights are all one. The overall procedure of this Robust Iterative ESKF (RIESKF) is given in Algorithm 2. The convergence criterion follows the same formulation as for Algorithm 1. For further reference, the convergence of iteratively reweighted least-squares (IRLS) procedures has been addressed in detail in [55], including its application to Lie groups.
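To give intuition for such IRLS procedures, the outer reweighting loop can be sketched on a toy Euclidean problem, robust estimation of a scalar location (our illustrative example, not the on-manifold filter itself; names and the Huber tuning are our assumptions):

```python
def irls_location(ys, c=1.345, iters=50, tol=1e-9):
    """Robust location estimate via IRLS with Huber weights:
    recompute the weights from the residuals, solve the weighted
    least-squares problem, and repeat until the update stalls."""
    mu = sum(ys) / len(ys)  # non-robust (plain least-squares) initialization
    for _ in range(iters):
        # Huber weights of the current residuals (outer loop of the filter)
        w = [1.0 if abs(y - mu) <= c else c / abs(y - mu) for y in ys]
        # Weighted least-squares solution with the weights held fixed (inner loop)
        mu_new = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu
```

Starting from the contaminated least-squares estimate, each pass shrinks the influence of the gross residuals, mirroring how the RIESKF's outer loop progressively downweights outlying observations.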
3.2.2 Taking into account prediction outliers
The RIESKF proposed in Sect. 3.2.1 is designed to automatically detect outliers in the observations. However, it does not shield against unexpected values in the prediction step. To this end, a modification of the RIESKF, denoted \(R^2\)IESKF, was also proposed in a preliminary work [54]. Its main idea is to apply the same weighting procedure to the prior as to the measurements, as is done in the regression-based M-EKF. This means that the prior covariance \(\textbf{P}_{k|k-1}\) will also be weighted, so that (19) is modified as
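A natural form of this modification (our sketch; the paper's display may differ) reweights the predicted covariance analogously to the measurement one:

```latex
\bar{\mathbf{P}}_{k|k-1}^{(i)} =
\mathbf{P}_{k|k-1}^{1/2}\big(\mathbf{W}_{x,k}^{(i)}\big)^{-1}\mathbf{P}_{k|k-1}^{\top/2},
```

where \(\mathbf{W}_{x,k}^{(i)}\) (our notation) collects the robust weights of the normalized prior residuals, and \(\bar{\mathbf{P}}_{k|k-1}^{(i)}\) replaces \(\mathbf{P}_{k|k-1}\) in the gain computation.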
The correction step of the R\(^2\)IESKF is then computed following Algorithm 3.
4 Experimental results
We consider the recursive Wahba's problem throughout the paper [56]. The state consists of the 3D orientation and the gyroscope bias, with the dynamical model determined by the integration of angular velocity measurements as
where \(\text {Exp}(\cdot )\) represents the exponential mapping from the algebra to the manifold, and \(\otimes\) refers to the quaternion product, both defined in Sect. 2. The observation model consists of a set of N line-of-sight (LOS) observations:
where \(\textbf{l}_1, \dots , \textbf{l}_N\) are a set of three-dimensional baselines in the local frame, and \(\textbf{R}_k\) is the local-to-inertial frame rotation associated with \(\textbf{q}_k\). Note that this observation function is indeed nonlinear, because of the rotation involved.
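Putting both elided models together, a plausible sketch (our reconstruction; the exact frame conventions, noise terms and the direction of the rotation may differ) is a bias-corrected gyroscope integration together with rotated baselines:

```latex
\mathbf{q}_{k} = \mathbf{q}_{k-1} \otimes
\mathrm{Exp}\big((\boldsymbol{\omega}_{k} - \mathbf{b}_{k-1})\,\Delta t\big),
\qquad
\mathbf{y}_{n,k} = \mathbf{R}_{k}\,\mathbf{l}_{n} + \boldsymbol{\eta}_{n,k},
\quad n = 1,\dots,N,
```

where \(\mathbf{b}_{k}\) denotes the gyroscope bias and \(\Delta t\) the integration step.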
4.1 Simulation setup
We show the performance of the different robust M-type KFs presented in this contribution, w.r.t. the standard IESKF, using the recursive Wahba's problem introduced above. The number of Monte Carlo (MC) experiments is set to \(M=100\), and the total duration of the experiment is 1000 s, with the prediction and correction steps being processed at 1 Hz. Outliers are present for three periods of 150 s along the trajectory, which are illustrated as grey zones in the various figures. The default values used in simulation are given in Table 1.
Four different filters are compared:

- An ideal IESKF, which uses the correct covariances when outliers are present.
- The standard (non-robust) IESKF.
- The RIESKF.
- The \(R^2\)IESKF.
We rely on two metrics to compare these filters. Both rely on the intrinsic error given by \(\varvec{\varepsilon }_k = \widehat{\textbf{x}}_{k|k} \ominus \textbf{x}_k\). First, the intrinsic root-mean-squared error (IRMSE):
Second, the intrinsic version of the average normalized estimation error squared (IANEES), which compares the estimated MSE with the one given by the estimated covariance. Note that the IANEES should revolve around 1 for consistent estimators:
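These metrics presumably take the following form (our sketch, with superscript (m) indexing the M Monte Carlo runs and d the dimension of the intrinsic error):

```latex
\mathrm{IRMSE}_{k} =
\sqrt{\frac{1}{M}\sum_{m=1}^{M}\big\|\boldsymbol{\varepsilon}_{k}^{(m)}\big\|^{2}},
\qquad
\mathrm{IANEES}_{k} =
\frac{1}{M\,d}\sum_{m=1}^{M}
\big(\boldsymbol{\varepsilon}_{k}^{(m)}\big)^{\top}
\big(\mathbf{P}_{k|k}^{(m)}\big)^{-1}\boldsymbol{\varepsilon}_{k}^{(m)},
```

so that a consistent estimator (error in agreement with its reported covariance) yields an IANEES close to 1.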
4.2 Discussion on prediction outliers
Before providing a comprehensive performance analysis of both robust filters proposed in this contribution, we briefly discuss outliers appearing in the prediction step (i.e. state evolution outliers), which are much less common in the robust filtering literature than those appearing in the measurements. Notice that the weighting scheme used in Algorithm 3 to mitigate prediction outliers assumes that they appear in the normalized residuals \(\textbf{P}_{k|k-1}^{-1/2} ( \widehat{\textbf{x}}_{k|k-1} \ominus \widehat{\textbf{x}}_{k|k}^{(i)})\), in contrast to the standard robust weighting applied to the normalized innovation vector, \({\mathbf {\Sigma }}_k^{-1/2} \big ( {\textbf{y}}_k - {\textbf{h}}_k(\widehat{\textbf{x}}_{k|k}^{(i)}) \big )\). Theoretically, this would be the case if one could directly compare \(\widehat{\textbf{x}}_{k|k-1}\) and the true state \({\textbf{x}}_{k}\), and normalize with \(\textbf{Q}_k\). Of course, this is not possible and, within the filter, one only has access to estimates of the true state. Moreover, it is important to notice that prediction outliers have a recursive and cumulative impact: they may only be detected once the accumulated error exceeds the predicted uncertainty, thus with a given delay. On the contrary, measurement outliers are detected immediately. It is therefore likely that prediction outliers are first interpreted by the filter as observation outliers.
To confirm the previous statement, we conducted an experiment with only prediction outliers, appearing at \(t=150\) s. Figure 2 shows the mean prediction and observation weights (i.e. using the \(R^2\)IESKF) along a trajectory for three different pairs of values of \(\sigma _y, \alpha _\omega\). The other values are the default ones from Table 1. The grey zone indicates where outliers are present. On the top subplot, for \(\sigma _y=0.01\) m and \(\alpha _\omega = 20\), one can clearly see that both observations and prediction are downweighted at the beginning of the outlier zone, before the observation weights go back to the nominal value. However, the middle subplot, with \(\sigma _y=0.1\) m and \(\alpha _\omega =20\), highlights another shortcoming of the proposed methodology: if the prediction outliers are too small compared to the measurement accuracy, they may be interpreted by the filter as observation outliers all the time. One can clearly see in that case that the prediction is never downweighted. Therefore, it appears that the performance of the \(R^2\)IESKF (and likely the standard M-EKF in regression form) will depend on the ratio between how large the prediction outliers are and how accurate the nominal observations are. This is further outlined by the last case (bottom subplot), with \(\sigma _y=0.1\) m and \(\alpha _\omega =100\). Indeed, the downweighting is first applied to the observations, before gradually switching to the propagation as outliers accumulate. This behaviour is also observed in the experiments presented in Sect. 4.3.
4.3 Assessing the filters’ robustness
4.3.1 Case I: nominal conditions
We first consider an experiment in nominal conditions, i.e. without outliers. Indeed, M-estimator-based filters are expected to be suboptimal in this configuration, compared to standard KFs (here the IESKF). We thus measure this gap, and check whether the RIESKF and \(R^2\)IESKF behave differently.
As can be observed in Fig. 3, both robust filters behave almost exactly the same, with only a small loss of performance compared to the IESKF. More importantly, they show very good consistency, on par with that of the standard filter.
4.3.2 Case II: measurement outliers only
The second experiment only considers measurement outliers. This setup is expected to favour the RIESKF, but there are two open questions: What outlier ratio can it withstand, and does the \(R^2\)IESKF also perform well?
To this end, we considered four outlier ratio values, \(\varepsilon \in \{25, 50, 75, 90\}\%\). For this experiment, we increased the number of measurements by setting \(N = 10\); indeed, with \(90\%\) outliers the system would otherwise become unobservable.
Looking at Fig. 4, it can be noticed that not only the RIESKF, but also the \(R^2\)IESKF, resists very well up to 50% of outliers in the measurements. Even under extremely harsh conditions, as in the case of 90% contamination, the performance of the RIESKF is relatively close to the ideal IESKF, which knows which measurements are outliers, although there is a loss in consistency.
On the contrary, the \(R^2\)IESKF fails for large outlier ratios. This is not due to the ambiguity issue of Sect. 4.2. Rather, it first stays close to the RIESKF, but both accumulate error (their IRMSE grows). This error creates a kind of propagation outlier, which in turn gets downweighted. However, looking at (13), if both terms get equivalent weights, these may cancel out, and the filter will not be able to mitigate large outliers. Hence, it appears that the \(R^2\)IESKF cannot cope with extreme outlier ratios.
4.3.3 Case III: prediction outliers only
The third experiment only considers outliers in the prediction. After providing our insights into how prediction outliers impact the filter and how the filter detects them in Sect. 4.2, we can proceed to analyse the performance of the proposed robust filters under these circumstances.
In this case, we compare the behaviour of the RIESKF and the \(R^2\)IESKF for two pairs of values of \(\sigma _{y}\) and \(\alpha _{\omega }\), as in Sect. 4.2.
As expected, the RIESKF behaves very poorly, even worse than the IESKF. On the other hand, the performance of the \(R^2\)IESKF observed in the second subfigure of Fig. 5 confirms the analysis presented in Fig. 2. When the prediction outliers are too small compared to the measurement accuracy, as in that subfigure, the filter does not recognize them as prediction outliers but rather as observation outliers, so the robust mechanism in the prediction plays no role. Therefore, the \(R^2\)IESKF acts as the RIESKF. Nonetheless, as long as the prediction outliers are large enough with respect to the measurement accuracy, the \(R^2\)IESKF performs almost as well as the ideal filter, even in terms of consistency, as depicted in the first, third and fourth subfigures of Fig. 5. Interestingly, the convergence time also varies depending on the values of \(\sigma _{y}\) and \(\alpha _{\omega }\); however, it is always less than 60 s. Of equal importance, the filter also detects the nominal zones and is able to reduce its error while staying consistent, like the ideal one.
4.3.4 Case IV: outliers in both prediction and measurements
Finally, in order to complete our analysis on the performance of the proposed robust filters, a fourth experiment is carried out which considers both prediction and observation outliers simultaneously.
The analysis of the filters for this experiment has been made using the same pairs of values of \(\sigma _{y}\) and \(\alpha _{\omega }\) as in Sects. 4.2 and 4.3.3. In this case, we can notice in the first, second and fourth subfigures of Fig. 6 that, provided the prediction outliers are large enough with respect to the observation ones to be correctly detected, the behaviour of the \(R^2\)IESKF is outstanding. While the RIESKF is not able to counteract the impact of the outliers whatsoever, the \(R^2\)IESKF reacts after a short amount of time, recognizes the presence of outliers, and manages to mitigate them correctly.
4.4 Impact of outliers on the convergence
We investigated the impact of outliers on the convergence of Algorithms 1–3. More precisely, we tracked the number of times (11) was computed. The average number of required iterations was evaluated over 100 Monte Carlo runs with the parameters of Table 1, for cases II and IV, using \(\sigma _y = 0.1\), \(\alpha _\omega = 100\) and \(25\%\) of outliers. The results are presented in Fig. 7. We can observe that the number of iterations seems to give a good measure of the presence of outliers and of the filter's robustness. Indeed, the required iterations rise in the outlier periods before going back to a nominal level, as seen in Case II. This ability to go back down seems to show that the filter successfully mitigated the outliers. Indeed, in Case IV the RIESKF is not able to mitigate propagation outliers, which is reflected in the required iterations. On the contrary, the \(R^2\)IESKF is able to mitigate them, but at the cost of many more iterations when outliers are present. This is due to the ambiguity it suffers from, as discussed in Sect. 4.2. Note that the iterations may exceed \(i_{\text {max}}\) (refer to Table 1), simply because Algorithm 1 is called multiple times in the update step due to the cascaded structure of the algorithms.
5 Conclusion
In this paper, we investigated the capability of two new robust counterparts to the iterated ESKF, for states living on a manifold (unit-norm quaternions), to mitigate the impact of outliers in both the propagation and observation models, which may otherwise cause performance breakdowns. These filters were obtained by leveraging the M-estimation methodology applied to attitude estimation. To handle the nonlinearities arising from the observation model and the M-estimator, a structure of two cascaded iterative processes was proposed. This approach was used to derive a robust iterated ESKF (RIESKF), aiming at mitigating measurement outliers, and an outlier-free prediction robust IESKF (\(R^2\)IESKF), which aims at being robust to both prediction and observation outliers. Numerical experiments illustrated the impact of outliers on the standard IESKF and the capacity of the proposed filters to mitigate them in various cases. Results highlight that prediction outliers can only be mitigated if they are large enough with respect to the nominal measurement accuracy, which points to a future work direction. In addition, the filters were able to correctly handle large outlier proportions, up to 50%, but the \(R^2\)IESKF may fail with larger proportions.
Availability of data and materials
Data supporting the findings of this study are available on request from the corresponding author.
References
Y. Bar-Shalom, X. Li, T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software (Wiley, New York, 2001). https://api.semanticscholar.org/CorpusID:108666793
P.S.R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation (Springer, New York, 2006). https://doi.org/10.1007/9780387686066
D. Simon, Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches (Wiley InterScience, Hoboken, 2006)
S. Särkkä, Bayesian Filtering and Smoothing (Cambridge University Press, Cambridge, 2013). https://doi.org/10.1017/9781108917407
A. Doucet, A.M. Johansen, A tutorial on particle filtering and smoothing: fifteen years later, in Handbook of Nonlinear Filtering. ed. by D. Crisan, B. Rozovsky (Cambridge University Press, Cambridge, 2009)
A.M. Zoubir, V. Koivunen, E. Ollila, M. Muma (eds.), Robust Statistics for Signal Processing (Cambridge University Press, London, 2018)
M.A. Gandhi, L. Mili, Robust Kalman filter based on a generalized maximum-likelihood-type estimator. IEEE Trans. Signal Process. 58(5), 2509–2520 (2010)
D. Medina, H. Li, J. Vilà-Valls, P. Closas, Robust filtering techniques for RTK positioning in harsh propagation environments. Sensors 21(4), 1250 (2021)
Y. Huang, Y. Zhang, Y. Zhao, J.A. Chambers, A novel robust Gaussian-Student's t mixture distribution based Kalman filter. IEEE Trans. Signal Process. 67(13), 3606–3620 (2019)
G. Agamennoni, J.I. Nieto, E.M. Nebot, Approximate inference in statespace models with heavytailed noise. IEEE Trans. Signal Process. 60(10), 5024–5037 (2012)
H. Li, D. Medina, J. Vilà-Valls, P. Closas, Robust variational-based Kalman filter for outlier rejection with correlated measurements. IEEE Trans. Signal Process. 69, 357–369 (2020)
I. Hwang, S. Kim, Y. Kim, C.E. Seah, A survey of fault detection, isolation, and reconfiguration methods. IEEE Trans. Control Syst. Technol. 18(3), 636–653 (2009)
F. Gustafsson, Adaptive Filtering and Change Detection, vol. 1 (Wiley, New York, 2000). https://api.semanticscholar.org/CorpusID:61489326
W. Youn, S. Andrew Gadsden, Combined quaternionbased error state Kalman filtering and smooth variable structure filtering for robust attitude estimation. IEEE Access 7, 148989–149004 (2019). https://doi.org/10.1109/ACCESS.2019.2946609
R.V. Vitali, R.S. McGinnis, N.C. Perkins, Robust error-state Kalman filter for estimating IMU orientation. IEEE Sens. J. 21(3), 3561–3569 (2021). https://doi.org/10.1109/JSEN.2020.3026895
J. Vilà-Valls, E. Chaumette, F. Vincent, P. Closas, Robust linearly constrained Kalman filter for general mismatched linear state-space models. IEEE Trans. Autom. Control 67(12), 6794–6801 (2022)
E. Hrustic, R. Ben Abdallah, J. Vilà-Valls, G. Pagés, E. Chaumette, Robust linearly constrained extended Kalman filter for mismatched nonlinear systems. Int. J. Robust Nonlinear Control 31(3), 787–805 (2021). https://doi.org/10.1002/rnc.5305
R. Ben Abdallah, G. Pagés, D. Vivet, J. Vilà-Valls, E. Chaumette, Robust linearly constrained square-root cubature Kalman filter for mismatched nonlinear dynamic systems. IEEE Control Syst. Lett. 6, 2335–2340 (2022)
P. Chauchat, J. Vilà-Valls, E. Chaumette, Robust linearly constrained invariant filtering for a class of mismatched nonlinear systems. IEEE Control Syst. Lett. 6, 223–228 (2021)
C. Li, Z. Shi, Y. Liu, T. Liu, L. Xu, Efficient and robust direct image registration based on joint geometric and photometric Lie algebra. IEEE Trans. Image Process. 27(12), 6010–6024 (2018). https://doi.org/10.1109/TIP.2018.2864895
S. Labsir, G. Pages, D. Vivet, Lie group modelling for an EKF-based monocular SLAM algorithm. Remote Sens. 14(3), 571 (2022)
J. Solà, Quaternion kinematics for the error-state Kalman filter. arXiv preprint arXiv:1711.02508 (2017)
N. Trawny, S.I. Roumeliotis, Indirect Kalman filter for 3D attitude estimation. University of Minnesota, Department of Computer Science and Engineering, Tech. Rep. 2 (2005)
A. Barrau, S. Bonnabel, Invariant Kalman filtering. Ann. Rev. Control Robot. Auton. Syst. 1, 237–257 (2018)
A. Barrau, S. Bonnabel, The invariant extended Kalman filter as a stable observer. IEEE Trans. Autom. Control 62(4), 1797–1812 (2016)
A.M. Rad, J.H. Nobari, A.A. Nikkhah, Optimal attitude and position determination by integration of INS, star tracker, and horizon sensor. IEEE Aerosp. Electron. Syst. Mag. 29(4), 20–33 (2014)
C.C. Liebe, Star trackers for attitude determination. IEEE Aerosp. Electron. Syst. Mag. 10(6), 10–16 (1995)
J. Wu, Z. Zhou, J. Chen, H. Fourati, R. Li, Fast complementary filter for attitude estimation using low-cost MARG sensors. IEEE Sens. J. 16(18), 6997–7007 (2016)
A.E.R. Shabayek, C. Demonceaux, O. Morel, D. Fofi, Vision based UAV attitude estimation: progress and insights. J. Intell. Rob. Syst. 65, 295–308 (2012)
D. Medina, J. Vilà-Valls, A. Heßelbarth, R. Ziebold, J. García, On the recursive joint position and attitude determination in multi-antenna GNSS platforms. Remote Sens. 12(12), 1955 (2020)
P.J.G. Teunissen, G. Giorgi, P.J. Buist, Testing of a new single-frequency GNSS carrier phase attitude determination method: land, ship and aircraft experiments. GPS Solut. 15(1), 15–28 (2011)
D. Medina, Robust GNSS carrier phase-based position and attitude estimation: theory and applications. PhD thesis, Universidad Carlos III de Madrid (2022)
H. Yang, L. Carlone, A quaternionbased certifiably optimal solution to the Wahba problem with outliers. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1665–1674 (2019)
A. Bellés, D. Medina, P. Chauchat, J. Vilà-Valls, Reliable GNSS joint position and attitude estimation in harsh environments via robust statistics. In: Proc. of the IEEE Aerospace Conference (2022)
A.P. Bustos, T.J. Chin, Guaranteed outlier removal for point cloud registration with correspondences. IEEE Trans. Pattern Anal. Mach. Intell. 40(12), 2868–2882 (2017)
T. Shan, B. Englot, LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4758–4765 (IEEE, 2018)
R. Mur-Artal, J.D. Tardós, ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Rob. 33(5), 1255–1262 (2017)
T. Qin, P. Li, S. Shen, VINS-Mono: a robust and versatile monocular visual-inertial state estimator. IEEE Trans. Rob. 34(4), 1004–1020 (2018)
G. Giorgi, P.J. Teunissen, S. Verhagen, P.J. Buist, Testing a new multivariate GNSS carrier phase attitude determination method for remote sensing platforms. Adv. Space Res. 46(2), 118–129 (2010)
S.T. Goh, K.S. Low, Survey of global-positioning-system-based attitude determination algorithms. J. Guid. Control. Dyn. 40(6), 1321–1335 (2017). https://doi.org/10.2514/1.G002504
F.L. Markley, J.L. Crassidis, Fundamentals of Spacecraft Attitude Determination and Control, vol. 1286 (Springer, New York, 2014)
J.A. Christian, A tutorial on horizonbased optical navigation and attitude determination with space imaging systems. IEEE Access 9, 19819–19853 (2021)
J.A. Christian, J.L. Crassidis, Star identification and attitude determination with projective cameras. IEEE Access 9, 25768–25794 (2021)
H. Yang, J. Shi, L. Carlone, TEASER: fast and certifiable point cloud registration. IEEE Trans. Rob. 37(2), 314–333 (2020)
T.D. Barfoot, State Estimation for Robotics (Cambridge University Press, Cambridge, 2017). https://doi.org/10.1017/9781316671528
J. Stillwell, Naive Lie Theory (Springer, New York, 2008)
J. Solà, J. Deray, D. Atchuthan, A micro Lie theory for state estimation in robotics. arXiv preprint arXiv:1812.01537 (2018)
B.M. Bell, F.W. Cathey, The iterated Kalman filter update as a Gauss–Newton method. IEEE Trans. Autom. Control 38(2), 294–297 (1993)
G. Bourmaud, R. Mégret, A. Giremus, Y. Berthoumieu, From intrinsic optimization to iterated extended Kalman filtering on Lie groups. J. Math. Imaging Vis. 55(3), 284–303 (2016). https://doi.org/10.1007/s10851-015-0622-8
P. Chauchat, A. Barrau, S. Bonnabel, Invariant smoothing on Lie groups. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain (2018). https://hal.archives-ouvertes.fr/hal-01725847
G. Bourmaud, R. Mégret, A. Giremus, Y. Berthoumieu, From intrinsic optimization to iterated extended Kalman filtering on Lie groups. J. Math. Imaging Vis. 55, 284–303 (2016)
P.A. Absil, R. Mahony, R. Sepulchre, Optimization Algorithms on Matrix Manifolds (Princeton University Press, Princeton, 2008)
L. Chang, K. Li, Unified form for the robust Gaussian information filtering based on M-estimate. IEEE Signal Process. Lett. 24(4), 412–416 (2017)
A. Bellés, D. Medina, P. Chauchat, S. Labsir, J. Vilà-Valls, Robust M-type error-state Kalman filters for attitude estimation. In: Proc. of the European Signal Processing Conference (EUSIPCO) (2023)
K. Aftab, R. Hartley, Convergence of iteratively reweighted least squares to robust M-estimators. In: 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 480–487 (2015). https://doi.org/10.1109/WACV.2015.70
G. Wahba, A least squares estimate of satellite attitude. SIAM Rev. 7(3), 409 (1965). https://doi.org/10.1137/1007077
Funding
Open Access funding enabled and organized by Projekt DEAL. This research was partially supported by DLR's FuturePorts and HiGAIN projects and the DGA/AID project 2022.65.0082.
Author information
Authors and Affiliations
Contributions
All authors contributed equally. All authors have read and agreed to the published version of the manuscript. The authors declare this work is original and has not been published nor submitted for publication elsewhere.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests and no issues related to journal policies to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Bellés, A., Medina, D., Chauchat, P. et al. Robust error-state Kalman-type filters for attitude estimation. EURASIP J. Adv. Signal Process. 2024, 75 (2024). https://doi.org/10.1186/s13634-024-01172-w