A novel robust iterated CKF for GNSS/SINS integrated navigation applications

Abstract

In challenging circumstances, the estimation of integrated navigation parameters for tightly coupled GNSS/SINS is degraded by outlier measurements. This paper proposes an effective solution that employs a novel iterative sigma-point structure with a modified robustness optimization approach to enhance the error compensation and robustness of filters used in GNSS-challenged conditions. The proposed method modifies the cubature Kalman filter (CKF) scheme by incorporating nonlinear regression and multiple iteration processes to improve error compensation. A loss function and penalty mechanism are then implemented to enhance the filter's robustness to outlier measurements. Furthermore, to fully exploit the valid information in the innovation and speed up the proposed method, outlier measurement detection criteria are established that bypass the penalty mechanism on measurement weights when the GNSS measurements contain no outliers. Field experiments demonstrate that the proposed method outperforms traditional methods in mitigating navigation errors, particularly when multipath errors and non-line-of-sight (NLOS) reception are increased.

1 Introduction

Both the strapdown inertial navigation system (SINS) and the global navigation satellite system (GNSS) have certain limitations when used separately. The GNSS signal is vulnerable to instantaneous interference and attenuation, and the inertial instruments of SINS are susceptible to drift and noise, which accumulate errors rapidly over time [1]. The complementary qualities of the two systems have led to widespread adoption of their integrated system in the navigation field [2, 3]. For tightly coupled GNSS/SINS integrated navigation applications, parameter estimation is usually formulated as a nonlinear filtering process fusing multiple information sources, owing to the nonlinearity of the error equations.

For the above nonlinear model, the extended Kalman filter (EKF) extends the original Kalman filter to nonlinear models by linearizing the system model at the latest estimate, and it has a wide range of applications in the navigation field [4,5,6]. Nonetheless, in systems with significant nonlinearity, the EKF can yield substantial approximation errors, and computing the Jacobian matrix is mandatory during implementation, which can be computationally expensive, especially for large systems. The unscented Kalman filter (UKF) incorporates the unscented transform (UT) into the Kalman filter process [7]. Experiments have demonstrated that the UKF achieves superior nonlinear approximation accuracy over the EKF without requiring a Jacobian matrix. Furthermore, the quadrature Kalman filter (QKF) utilizes Gauss–Hermite quadrature points and substantially improves the accuracy of the conditional density calculation [8]. Nevertheless, the UKF requires specific sampling models to guarantee filtering reliability, and the QKF has high computational complexity. Motivated by the weaknesses of the above filters, the cubature Kalman filter (CKF) was proposed, based on a deterministic numerical integration criterion, to propagate the state and covariance matrix through the nonlinear system model. In comparison with the previous Kalman filter variants, the cubature technique provides a more accurate estimate of the state distribution by approximating the probability density function with cubature points [9].

It is noteworthy that obtaining parameter estimates via the Bayesian estimation framework requires the corresponding filter to calculate the posterior probability distribution, regardless of whether sigma-point filtering technology is used. In the face of the challenges posed by urban canyons, increased multipath errors and non-line-of-sight (NLOS) reception lead to outlier measurements and time-varying noise. As a result, the accuracy of the probability distribution approximation in the measurement update is impaired, and the convergence speed and robustness of the whole process are diminished in GNSS-challenged conditions.

In order to accelerate the filter's convergence, Skoglund first examined the iterated EKF method from an optimization standpoint and demonstrated that the iterated update process outperforms the traditional one [10]. Meanwhile, a central difference Kalman filter that also used an iterative approach was applied to obtain fast convergence with favorable performance, although the convergence is significantly influenced by the chosen iteration length [11]. Additionally, some sigma-point iteration approaches have been recommended for addressing particle degradation in particle filters. These approaches offer advantages over non-iterative algorithms when dealing with significant noise and accurate measurements.

In order to enhance the filter's robustness, several robust techniques have been recommended for mitigating the impact of GNSS outliers. Instead of using additional 3D building models or maps, researchers have utilized robust models [12, 13] to diminish outlier interference. From the perspective of the noise distribution, outliers are regarded as measurements from unreliable sensors, which are polluted by heavy-tailed noise [14, 15]. The Student's t distribution is then applied in place of the Gaussian distribution to approximate the posterior probability density function (PDF) in the measurement update process. In addition, a dynamic covariance estimation method has been proposed, which regards the covariance of the observations as a crucial parameter that the estimator must estimate [13]. The filtering procedure then modifies the weights of outliers to lessen their influence. Furthermore, our previous work also designed robust filters for suppressing the influence of unknown interference on the system in challenging circumstances, including an H∞ control theory-based filter [16] and Krein space theory-based filters [17, 18]. In conclusion, incorporating robust M-estimation techniques can help minimize the significant influence of outlier measurements [19]. Regrettably, a trade-off between convergence speed and robustness is required, since many robust kernel functions used in robust estimation lack global convexity. For these reasons, M-estimation techniques can obtain only a suboptimal solution.

Motivated by the previous work, a novel robust estimation-based iterated CKF is proposed to minimize the significant influence of outlier measurements on the GNSS/SINS integrated system operating in challenging circumstances. The proposed method modifies the CKF scheme by incorporating nonlinear regression and multiple iteration processes to improve error compensation. Besides, the filter's reliability is strengthened by implementing a loss function and penalty mechanism, which reduce the weights of outlier measurements. Furthermore, outlier measurement detection criteria are established that bypass the penalty mechanism on measurement weights when the GNSS measurements contain no outliers, in order to fully absorb the valid information in the innovation and speed up the operation of the proposed method.

The primary innovations of this work are as follows:

  1. Based on maximum a posteriori (MAP) estimation, a new sigma-point structure is established, compared with the original EKF-based MAP, to weaken the effect of the prior state error in the posterior updating process. The valid information in the innovation is extracted sufficiently to speed up the state estimation through the iterative update framework.

  2. The robust M-estimation approach and penalty mechanism are designed to suppress GNSS outlier interference and ensure the stability of the filtering process.

  3. The outlier measurement detection criteria are established to fully absorb the effective information in the innovation and speed up the operation of the proposed algorithm when there are no outliers in the GNSS measurements.

The remainder of this paper is organized as follows. The iterative process of the novel CKF framework and the robust estimation are derived in Sect. 2. The outlier measurement detection criteria are described in Sect. 3. The field test for performance evaluation of the proposed algorithm is presented in Sect. 4. Finally, conclusions are summarized in Sect. 5.

2 Novel iterated CKF framework

2.1 Problem statement

Consider a discrete-time system of the form:

$$\left\{ {\begin{array}{*{20}l} {x_{k} = f(x_{k - 1} ) + \omega_{k - 1} } \hfill \\ {y_{k} = h(x_{k} ) + \eta_{k} } \hfill \\ \end{array} } \right.$$
(1)

where k denotes the discrete time index, xk and yk are the states and observations, and ωk−1, ηk denote Gaussian noises with covariances Qk−1 and Rk. f(·), h(·) represent the nonlinear models. Since the posterior PDF \(\rho (x_{k - 1} |y_{1,k - 1} )\sim N(x_{k - 1} |\hat{x}_{k - 1|k - 1} ,P_{k - 1|k - 1} )\) of the previous moment is available, the prior PDF \(\rho (x_{k} |y_{1,k - 1} )\sim N(x_{k} |\hat{x}_{k|k - 1} ,P_{k|k - 1} )\) can be calculated at the next moment via an approximation method, where \(y_{1,k - 1} = \left\{ {y_{1} ,y_{2} , \ldots ,y_{k - 1} } \right\}\) denotes the observation set and \(\hat{x}_{k|k - 1}\) denotes the estimate of x propagated from time k − 1 to k. When the two subscripts coincide, the quantity represents the optimal (posterior) estimate. In addition, \(P_{k - 1|k - 1} ,P_{k|k - 1}\) represent the error covariances of the posterior estimate and the prior prediction, respectively.

The posterior update stage based on nonlinear Bayesian estimation is written as:

$$\begin{aligned} \hat{y}_{k|k - 1} & = \int_{{\Re^{n} }} {h(t)} \frac{1}{{((2\pi )^{n} \det P_{k|k - 1} )^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}} }} \\ & \quad \times \exp \left[ { - \frac{1}{2}(t - \hat{x}_{k|k - 1} )^{T} P_{k|k - 1}^{ - 1} (t - \hat{x}_{k|k - 1} )} \right]{\text{d}}t \\ \end{aligned}$$
(2)
$$\begin{aligned} P_{k|k - 1}^{zz} & = \int_{{\Re^{n} }} {(h(t) - \hat{y}_{k|k - 1} )} (h(t) - \hat{y}_{k|k - 1} )^{T} \frac{1}{{((2\pi )^{n} \det P_{k|k - 1} )^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}} }} \\ & \quad \times \exp \left[ { - \frac{1}{2}(t - \hat{x}_{k|k - 1} )^{T} P_{k|k - 1}^{ - 1} (t - \hat{x}_{k|k - 1} )} \right]{\text{d}}t \\ \end{aligned}$$
(3)
$$\begin{aligned} P_{k|k - 1}^{xz} & = \int_{{\Re^{n} }} {(t - \hat{x}_{k|k - 1} )} (h(t) - \hat{y}_{k|k - 1} )^{T} \frac{1}{{((2\pi )^{n} \det P_{k|k - 1} )^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}} }} \\ & \quad \times \exp \left[ { - \frac{1}{2}(t - \hat{x}_{k|k - 1} )^{T} P_{k|k - 1}^{ - 1} (t - \hat{x}_{k|k - 1} )} \right]dt \\ \end{aligned}$$
(4)

where det(·) returns the determinant of a matrix, \(\Re^{n}\) is the n-dimensional real space, and \(P_{k|k - 1}^{zz} ,P_{k|k - 1}^{xz}\) denote the error covariance of the observations and the cross-covariance of the states and observations, respectively. As a result, the estimated PDF with the new measurement is \(\rho (y_{k} |x_{k} )\sim N(y_{k} |\hat{y}_{k|k - 1} ,P_{k|k - 1}^{zz} )\).
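As a concrete illustration, the Gaussian-weighted integrals in Eqs. (2)–(4) are what the CKF approximates with its 2n cubature points. The sketch below is a minimal NumPy version (function names are ours, not from the paper); for a linear h the cubature rule is exact, which makes it easy to check.

```python
import numpy as np

def cubature_points(x_hat, P):
    """Generate the 2n cubature points of N(x_hat, P): x_hat + S*sqrt(n)*(+/- e_i)."""
    n = x_hat.size
    S = np.linalg.cholesky(P)                               # P = S S^T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])    # unit cubature directions
    return x_hat[:, None] + S @ xi                          # shape (n, 2n)

def cubature_moments(x_hat, P, h):
    """Approximate Eqs. (2)-(4): y_hat, P_zz and P_xz (noise term R_k not included)."""
    n = x_hat.size
    X = cubature_points(x_hat, P)
    Z = np.column_stack([h(X[:, i]) for i in range(2 * n)])
    y_hat = Z.mean(axis=1)                                  # Eq. (2)
    dZ, dX = Z - y_hat[:, None], X - x_hat[:, None]
    P_zz = dZ @ dZ.T / (2 * n)                              # Eq. (3)
    P_xz = dX @ dZ.T / (2 * n)                              # Eq. (4)
    return y_hat, P_zz, P_xz
```

For a linear h(x) = Hx, the routine returns exactly H x̂, H P Hᵀ, and P Hᵀ, matching the closed-form Gaussian moments.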

Due to the nonlinearity of the observation equation h(·) in Eq. (1), a closed-form solution of Eqs. (2)–(4) is difficult to obtain via conventional methods. To address this issue, the posterior PDF is approximated using the MAP approach and the linear minimization approach in references [10, 20], which can obtain effective estimates. However, the EKF-based MAP approach in Ref. [10] struggles to ensure the stability of the filter in applications with high nonlinearity, and the linear minimization approach in Ref. [20] has a high computational cost and struggles to achieve an accurate posterior PDF in the presence of significant errors.

2.2 Iterated CKF based on Bayesian MAP estimation method

The above-mentioned MAP approach is adopted to establish the novel sigma-point structure. The posterior PDF is then described as:

$$p(x_{k} |y_{1,k} ) \propto p(x_{k} |y_{1,k - 1} )p(y_{k} |x_{k} ) \propto \exp \left[ { - \frac{1}{2}\begin{gathered} (y_{k} - h(x_{k} ))^{T} R_{k}^{ - 1} (y_{k} - h(x_{k} )) \hfill \\ + (\hat{x}_{k|k - 1} - x_{k} )^{T} P_{k|k - 1}^{ - 1} (\hat{x}_{k|k - 1} - x_{k} ) \hfill \\ \end{gathered} } \right]$$
(5)

where exp(·) is the exponential arithmetic with a natural number base.

The optimal state is estimated by maximizing the posterior PDF under this criterion, which is similar to solving the parameters of a nonlinear regression problem. Therefore, the objective function can be converted to the following form:

$$\begin{aligned} \hat{x}_{k|k}^{{{\text{MAP}}}} = \arg \max \rho (x_{k} |y_{1,k} ) & = \arg \mathop {\min }\limits_{{x_{k} \in \Re^{n \times 1} }} \frac{1}{2}\left[ \begin{gathered} (y_{k} - h(x_{k} ))^{T} R_{k}^{ - 1} (y_{k} - h(x_{k} )) + \hfill \\ (\hat{x}_{k|k - 1} - x_{k} )^{T} P_{k|k - 1}^{ - 1} (\hat{x}_{k|k - 1} - x_{k} ) \hfill \\ \end{gathered} \right] \\ & = \arg \mathop {\min }\limits_{{x_{k} \in \Re^{n \times 1} }} \frac{1}{2}V(x_{k} ) \, = \, \arg \mathop {\min }\limits_{{x_{k} \in \Re^{n \times 1} }} \frac{1}{2}r^{T} (x_{k} )r(x_{k} ) \\ \end{aligned}$$
(6)

where V(xk) denotes the quadratic cost function. For the nonlinear regression equation established by Eq. (6), an iterative least squares method, such as the Newton iteration rule, is usually utilized to achieve a local approximation. The iterative form of the local approximation is:

$$x_{i + 1} = x_{i} - (\nabla^{2} V(x_{i} ))^{ - 1} \nabla V(x_{i} )$$
(7)

where i is the iteration index. Suppose the partial derivative \(H_{i} = \frac{\partial h(x)}{{\partial x}}|_{{x = \hat{x}_{i|i - 1} }}\) holds; then the Hessian matrix and gradient of the cost function are \(\nabla^{2} V(x_{i} ) \approx J^{T} (x_{i} )J(x_{i} )\) and \(\nabla V(x_{i} ) = J^{T} (x_{i} )r(x_{i} )\), where \(J(x_{i} ) = \frac{\partial r(x)}{{\partial x}}|_{{x = \hat{x}_{i|i - 1} }} = - \left[ {\begin{array}{*{20}c} {R_{k}^{ - 1/2} H_{i} } & {P_{k|k - 1}^{ - 1/2} } \\ \end{array} } \right]^{T}\) and \(r(x_{i} ) = \left[ {\begin{array}{*{20}c} {R_{k}^{ - 1/2} \left( {y_{k} - h(x_{k} )} \right)} & {P_{k|k - 1}^{ - 1/2} \left( {\hat{x}_{k|k - 1} - x_{k} } \right)} \\ \end{array} } \right]^{T}\). As a result, the iterative form in Eq. (7) can be refined as:

$$x_{i + 1} = x_{i} - (J^{T} (x_{i} )J(x_{i} ))^{ - 1} J^{T} (x_{i} )r(x_{i} )$$
(8)
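Expanding Eq. (8) with the J and r defined above gives an information-form Gauss–Newton update whose fixed point zeroes the gradient of V. The following is a minimal sketch (names are ours), not the paper's full filter:

```python
import numpy as np

def iterated_map_estimate(y, h, jac, x_prior, P, R, n_iter=50):
    """Gauss-Newton iteration of Eq. (8) for the MAP cost V(x) in Eq. (6)."""
    Pi, Ri = np.linalg.inv(P), np.linalg.inv(R)
    x = x_prior.copy()
    for _ in range(n_iter):
        H = jac(x)
        # Normal equations (J^T J) dx = -J^T r, written in terms of P and R:
        info = Pi + H.T @ Ri @ H                     # approximate Hessian J^T J
        grad = H.T @ Ri @ (y - h(x)) + Pi @ (x_prior - x)
        x = x + np.linalg.solve(info, grad)
    return x
```

At convergence the term grad vanishes, i.e., \(H^{T}R_{k}^{-1}(y_{k}-h(x)) + P_{k|k-1}^{-1}(\hat{x}_{k|k-1}-x) = 0\), which is exactly the stationarity condition of Eq. (6).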

To make the formula easier to interpret and apply, it is rewritten in the general form of Bayesian estimation:

$$\begin{aligned} x_{i + 1} & = x_{i} + (P_{k|k - 1}^{ - 1} + H_{i}^{T} R_{k}^{ - 1} H_{i}^{{}} )^{ - 1} (H_{i}^{T} R_{k}^{ - 1} \tilde{y}_{k} + P_{k|k - 1}^{ - 1} \tilde{x}_{k} ) \\ & = \hat{x}_{k|k - 1} + K_{i} (y_{k} - h(x_{i} ) - H_{i} (\hat{x}_{k|k - 1} - x_{i} )) \\ \end{aligned}$$
(9)

where \(\tilde{y}_{k} = y_{k} - \hat{y}_{k}\), \(\tilde{x}_{k} = x_{k} - \hat{x}_{k}\), and \(K_{i} = P_{k|k - 1} H_{i}^{T} \left( {H_{i} P_{k|k - 1} H_{i}^{T} + R_{k} } \right)^{ - 1}\) is the gain matrix; \(\tilde{y}\) denotes the estimation error of y. In the CKF structure, in order to use the iterative Eq. (9) and eliminate the linearization error, the covariance form is substituted for the likelihood form as below.

$$\begin{aligned} P_{yy,k|k - 1} & = E\left\{ {(y_{k} - H_{k} \hat{x}_{k|k - 1} )(y_{k} - H_{k} \hat{x}_{k|k - 1} )^{T} } \right\} \\ & = H_{k} P_{k|k - 1} H_{k}^{T} + R_{k} \\ \end{aligned}$$
(10)
$$\begin{aligned} P_{xy,k|k - 1} & = E\left\{ {(x_{k} - \hat{x}_{k|k - 1} )(y_{k} - H_{k} \hat{x}_{k|k - 1} )^{T} } \right\} \\ & = P_{k|k - 1} H_{k}^{T} \\ \end{aligned}$$
(11)

Substituting Eqs. (10)–(11) into Eq. (9) yields:

$$x_{i + 1} = \hat{x}_{k|k - 1} + \left\{ {P_{xy,k|k - 1} (P_{yy,k|k - 1} )^{ - 1} [y_{k} - h(x_{i} ) - P_{xy,k|k - 1}^{T} P_{k|k - 1}^{ - 1} (\hat{x}_{k|k - 1} - x_{i} )]} \right\}$$
(12)

Compared with the original EKF-based MAP estimation, the impact of the prior state error in the posterior update process is weakened by the novel sigma-point structure. Moreover, the linearization truncation error is further reduced in each iterative update of Eq. (12), and the valid information in the innovation is extracted sufficiently to speed up the state estimation through the iterative update framework.
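Using the statistical linearization \(H_{k} = (P_{xy,k|k-1})^{T}P_{k|k-1}^{-1}\) implied by Eq. (11), the update in Eq. (12) can be sketched as follows. This is a hypothetical helper (names ours) that assumes P_xy and P_yy have already been computed, e.g., from cubature points; for a linear model it reproduces the standard Kalman update and reaches its fixed point after the first iteration.

```python
import numpy as np

def iterated_ckf_update(x_prior, P, y, h, P_xy, P_yy, n_iter=5):
    """Iterated measurement update of Eq. (12)."""
    K = P_xy @ np.linalg.inv(P_yy)      # gain P_xy P_yy^{-1}
    Ht = P_xy.T @ np.linalg.inv(P)      # statistically linearized H_k
    x = x_prior.copy()
    for _ in range(n_iter):
        x = x_prior + K @ (y - h(x) - Ht @ (x_prior - x))
    return x
```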

2.3 Robust estimation approach

The minimized sum of squared errors is usually used as the cost function in conventional state estimation, but it is very sensitive to outliers exceeding three times the standard deviation. Therefore, to suppress outlier interference and ensure the stability of the filtering process, robust M-estimation approaches are considered an effective solution to this issue. The basic concept of M-estimation revolves around constructing robust kernel functions, substituting nonlinear damping components for the squared components to formulate the optimization rule. The M-estimation objective constructed for suppressing outliers is written as:

$${\mathbf{X}}^{ * } = \mathop {\arg \min }\limits_{{\mathbf{X}}} \left( {{\rm T}\left( {\left\| {\tilde{y}}_{ki} \right\|_{{P_{yy,k|k - 1} }}^{2} } \right)} \right)$$
(13)

where X* is the robust estimated state, T(·) is the robust kernel function, and \(\left\| \cdot \right\|_{P}^{2}\) is the squared Mahalanobis distance with covariance P. Further, restricting the weights of the measurements during estimation guarantees smoothness while enhancing outlier suppression, yielding a more robust solution. Thus, through duality [21], the robust estimation objective in Eq. (13) can be rewritten as:

$${\mathbf{\rm X}}^{ * } = \mathop {\arg \min }\limits_{{{\mathbf{\rm X}},{\mathbf{W}}}} \sum\limits_{k,N} {\left( {\omega_{k,N} \left\| {\tilde{y}}_{ki} \right\|_{{P_{yy,k|k - 1} }}^{2} + \vartheta \left( {\omega_{k,N} } \right)} \right)}$$
(14)

where ωk,N represents the penalty weight of the Nth observation at time k, and \(\vartheta \left( {\omega_{k,N} } \right)\) represents the penalty mechanism, which restricts GNSS measurements containing outliers. The optimal robust state estimate and the optimal weights are obtained from Eq. (14).

Specifically, in this paper the GM (Geman–McClure) function is adopted as the robust kernel function in Eq. (13). As a result, \({\rm T}\left( {\left\| {\tilde{y}}_{ki} \right\|_{{P_{yy,k|k - 1} }}^{2} } \right)\) and \(\vartheta \left( {\omega_{k,N} } \right)\) are described as:

$${\rm T}\left( {\left\| {\tilde{y}}_{ki} \right\|_{{P_{yy,k|k - 1} }}^{2} } \right) = \frac{{e_{G}^{2} \left\| {\tilde{y}}_{ki} \right\|_{{P_{yy,k|k - 1} }}^{2} }}{{e_{G}^{2} + \left\| {\tilde{y}}_{ki} \right\|_{{P_{yy,k|k - 1} }}^{2} }}$$
(15)
$$\vartheta \left( {\omega_{k,N} } \right) = e_{G}^{2} \left( {\sqrt {\omega_{k,N} } - 1} \right)^{2}$$
(16)

where eG is the modulation index, used to adjust the disturbance resistance and sensitivity of the robust optimization process. In addition, the gradient \(\delta \left( {\omega_{k,N} } \right)\) with respect to the weight ωk,N can be calculated by taking the partial derivative of Eq. (14):

$$\delta \left( {\omega_{k,N} } \right) = \left\| {\tilde{y}}_{ki} \right\|_{{P_{yy,k|k - 1} }}^{2} + e_{G}^{2} \left( {1 - \frac{1}{{\sqrt {\omega_{k,N} } }}} \right)$$
(17)

To find the optimal penalty weight, set the gradient \(\delta \left( {\omega_{k,N} } \right) = 0\). The closed-form weight is then:

$$\omega_{k,N} = \left( {\frac{{e_{G}^{2} }}{{e_{G}^{2} + \left\| {\tilde{y}}_{ki} \right\|_{{P_{yy,k|k - 1} }}^{2} }}} \right)^{2} \in \left( {0,1} \right)$$
(18)
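The closed-form weight of Eq. (18) is easy to evaluate directly; the short sketch below (helper name ours) shows the intended behavior: innovations that are small relative to eG keep a weight near 1, while large outliers are strongly down-weighted.

```python
import numpy as np

def gm_weight(innov, P_yy, e_g=1.0):
    """Closed-form GM penalty weight, Eq. (18)."""
    d2 = float(innov @ np.linalg.solve(P_yy, innov))  # squared Mahalanobis distance
    return (e_g**2 / (e_g**2 + d2))**2                # always in (0, 1]
```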

3 Robust strategy for mitigating outlier measurements

Before applying the robust techniques, it is crucial to detect in advance whether the measurement contains outliers. If the measurement quality is poor, the robust estimation equations in Eqs. (14)–(18) are used to suppress the outlier interference. Otherwise, the penalty function \(\vartheta \left( {\omega_{k,N} } \right)\) is bypassed to accelerate the filter's convergence. According to the hypothesis in the literature [22], outlier measurements are regarded as bad data from unreliable sensors, polluted by heavy-tailed noise. In other words, in the GNSS/SINS navigation field specifically, the observed GNSS noise distribution tends to depart from the Gaussian distribution and exhibits a heavy tail in GNSS-challenged conditions.

The noise distribution of the GNSS measurement sequence can be fitted with a Gaussian mixture model (GMM) based on the expectation–maximization (EM) clustering scheme to detect whether the measurement contains many outliers. If only one Gaussian fitting component exists, the quality of the GNSS measurement sequence is good. On the contrary, if two or more Gaussian fitting components exist and their means are far from zero, the noise distribution is heavy-tailed, which further indicates that there are many outliers in the GNSS measurement sequence.

3.1 Outlier measurement detection criteria

Suppose there exists a set of GNSS error sequences D, and that the random variable \({\tilde{\mathbf{Y}}} = \left\{ {\tilde{y}_{1} ,\tilde{y}_{2} , \ldots ,\tilde{y}_{m} } \right\}\) follows a Gaussian mixture distribution with parameters \({{\varvec{\uprho}}} = \left\{ {{{\varvec{\uprho}}}_{1} ,{{\varvec{\uprho}}}_{2} , \ldots ,{{\varvec{\uprho}}}_{n} } \right\} = ({\mathbf{u}},{{\varvec{\Sigma}}})\), where \(\tilde{y}_{k}\) is a multidimensional vector. The PDF of the Gaussian mixture distribution \(f(\left. {{\tilde{\mathbf{Y}}}} \right|{\mathbf{u}},{{\varvec{\Sigma}}})\) is:

$$f(\left. {{\tilde{\mathbf{Y}}}} \right|{\mathbf{u}},{{\varvec{\Sigma}}}) = \sum\limits_{j = 1}^{n} {\varpi_{j} f_{j} (\left. {\tilde{y}_{k} } \right|u_{j} ,\Sigma_{j} )}$$
(19)

where \(f_{j} (\left. {\tilde{y}_{k} } \right|u_{j} ,\Sigma_{j} )\) denotes the probability density of the jth Gaussian model component; uj, Σj are the mean and covariance of the jth component, respectively, which together characterize it; \(\varpi_{j}\) is the mixture weight, indicating the proportion of samples contained in the jth component relative to the whole sample set; n is the number of Gaussian model components in the final clustering.

The probability density of each Gaussian component is specifically described as:

$$f_{j} (\left. {\tilde{y}_{k} } \right|u_{j} ,\Sigma_{j} ) = \frac{{\exp \left[ { - \tfrac{1}{2}\left( {\tilde{y}_{k} - u_{j} } \right)^{T} \Sigma_{j}^{ - 1} \left( {\tilde{y}_{k} - u_{j} } \right)} \right]}}{{((2\pi )^{m} \det \Sigma_{j} )^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}} }}$$
(20)
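Equations (19)–(20) can be evaluated directly. The sketch below (helper names ours) computes one component density and the resulting mixture density:

```python
import numpy as np

def gauss_pdf(y, u, Sigma):
    """Component density of Eq. (20)."""
    d = y - u
    q = d @ np.linalg.solve(Sigma, d)   # Mahalanobis quadratic form
    return np.exp(-0.5 * q) / np.sqrt((2 * np.pi)**y.size * np.linalg.det(Sigma))

def gmm_pdf(y, weights, means, covs):
    """Mixture density of Eq. (19)."""
    return sum(w * gauss_pdf(y, u, S) for w, u, S in zip(weights, means, covs))
```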

The original measured sequence \({\tilde{\mathbf{Y}}} = \left\{ {\tilde{y}_{1} ,\tilde{y}_{2} , \ldots ,\tilde{y}_{m} } \right\}\) is expanded into a complete dataset \({{\varvec{\Theta}}} = \left\{ {{\tilde{\mathbf{Y}}},{{\varvec{\Gamma}}}} \right\}\), where \(\Theta_{k} = \left( {\tilde{y}_{k} ,\Gamma_{k} } \right)\) and \(\Gamma_{k} = \left\{ {\Gamma_{k1} ,\Gamma_{k2} , \ldots ,\Gamma_{kn} } \right\}\) is the implicit (latent) sequence, whose elements are defined as:

$$\Gamma_{kj} = \left\{ {\begin{array}{*{20}l} {0,} \hfill & {\quad {\kern 1pt} \tilde{y}_{k} \notin {{\varvec{\uprho}}}_{j} } \hfill \\ {1,} \hfill & {\quad {\kern 1pt} \tilde{y}_{k} \in {{\varvec{\uprho}}}_{j} } \hfill \\ \end{array} \quad } \right. \, j = 1,2, \ldots ,n$$
(21)

The maximum likelihood function for the complete dataset can then be written as:

$$L\left( {{{\varvec{\uprho}}}_{j} ,\varpi_{j} ,\left. {\Gamma_{kj} } \right|{\tilde{\mathbf{Y}}}} \right) = \sum\limits_{k = 1}^{m} {\sum\limits_{j = 1}^{n} {\Gamma_{kj} \log_{2} \left[ {\varpi_{j} f_{j} (\left. {\tilde{y}_{k} } \right|{{\varvec{\uprho}}}_{j} )} \right]} }$$
(22)

In the EM clustering scheme, the final maximum likelihood estimate is obtained by iteratively optimizing the lower bound of the log-likelihood function in Eq. (22). The EM clustering scheme includes the following two steps:

  (1) E-Step: Compute the expectation Q of the conditional probability distribution function \(p\left( {\left. {{\varvec{\Gamma}}} \right|{\tilde{\mathbf{Y}}},{{\varvec{\uprho}}}} \right)\) with respect to the implicit parameters Γkj, and estimate the implicit parameters Γkj.

    $$\begin{aligned} Q\left( {{{\varvec{\uprho}}},{{\varvec{\uprho}}}^{k} } \right) & = E\left( {\log_{2} \left[ {L\left( {\left. {{\varvec{\uprho}}} \right|{\tilde{\mathbf{Y}}},{{\varvec{\Gamma}}}} \right)} \right]} \right) = \sum\limits_{k = 1}^{m} {\sum\limits_{j = 1}^{n} {\Gamma_{kj} \log_{2} \left[ {\varpi_{j} f_{j} (\left. {\tilde{y}_{k} } \right|{{\varvec{\uprho}}}_{j} )} \right]} } \\ & = \int_{{{\varvec{\Gamma}}}} {\log_{2} \left[ {L\left( {\left. {{\varvec{\uprho}}} \right|{\tilde{\mathbf{Y}}},{{\varvec{\Gamma}}}} \right)} \right]}^{T} f\left( {\left. {{\varvec{\Gamma}}} \right|{{\varvec{\uprho}}}^{{\left( {k - 1} \right)}} } \right){\text{d}}{{\varvec{\Gamma}}} \\ \end{aligned}$$
    (23)
    $$\hat{\Gamma }_{kj} = {{\left[ {\varpi_{j} f_{j} (\left. {\tilde{y}_{k} } \right|{{\varvec{\uprho}}}_{j} )} \right]} \mathord{\left/ {\vphantom {{\left[ {\varpi_{j} f_{j} (\left. {\tilde{y}_{k} } \right|{{\varvec{\uprho}}}_{j} )} \right]} {\sum\limits_{l = 1}^{n} {\varpi_{l} f_{l} (\left. {\tilde{y}_{k} } \right|{{\varvec{\uprho}}}_{l} )} }}} \right. \kern-0pt} {\sum\limits_{l = 1}^{n} {\varpi_{l} f_{l} (\left. {\tilde{y}_{k} } \right|{{\varvec{\uprho}}}_{l} )} }}$$
    (24)
  (2) M-Step: Maximize \(Q\left( {{{\varvec{\uprho}}},{{\varvec{\uprho}}}^{k} } \right)\) and solve for the optimal parameter \({{\varvec{\uprho}}}^{*} = \arg \max \, Q\left( {{{\varvec{\uprho}}},{{\varvec{\uprho}}}^{k} } \right)\) of the Gaussian model components.

The mixture weight \(\varpi_{j}\), mean uj and covariance Σj of the jth Gaussian component (j = 1, …, n) are calculated as follows:

$$\varpi_{j} = \sum\limits_{k = 1}^{m} {{{\hat{\Gamma }_{kj} } \mathord{\left/ {\vphantom {{\hat{\Gamma }_{kj} } m}} \right. \kern-0pt} m}} ,\quad j = 1,2, \ldots ,n$$
(25)
$$u_{j} = \left( {\sum\limits_{k = 1}^{m} {\hat{\Gamma }_{kj} \tilde{y}_{k} } } \right)\Bigg/\sum\limits_{k = 1}^{m} {\hat{\Gamma }_{kj} } ,\quad j = 1,2, \ldots ,n$$
(26)
$$\Sigma_{j} = {{\left( {\sum\limits_{k = 1}^{m} {\hat{\Gamma }_{kj} \left( {\tilde{y}_{k} - u_{j} } \right)\left( {\tilde{y}_{k} - u_{j} } \right)^{T} } } \right)} \mathord{\left/ {\vphantom {{\left( {\sum\limits_{k = 1}^{m} {\hat{\Gamma }_{kj} \left( {\tilde{y}_{k} - u_{j} } \right)\left( {\tilde{y}_{k} - u_{j} } \right)^{T} } } \right)} {\sum\limits_{k = 1}^{m} {\hat{\Gamma }_{kj} } }}} \right. \kern-0pt} {\sum\limits_{k = 1}^{m} {\hat{\Gamma }_{kj} } }},\quad j = 1,2, \ldots ,n$$
(27)
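One full E-step/M-step cycle (Eqs. (24)–(27)) can be sketched as below. This is a minimal self-contained version (names ours) that takes samples as a list of vectors; on two well-separated clusters it recovers the cluster centers in a few iterations.

```python
import numpy as np

def gauss_pdf(y, u, Sigma):
    """Gaussian component density, Eq. (20)."""
    d = y - u
    q = d @ np.linalg.solve(Sigma, d)
    return np.exp(-0.5 * q) / np.sqrt((2 * np.pi)**y.size * np.linalg.det(Sigma))

def em_step(Y, weights, means, covs):
    """One EM iteration: responsibilities (Eq. (24)), then updates (25)-(27)."""
    m, n = len(Y), len(weights)
    # E-step: posterior responsibility of component j for sample k
    G = np.array([[weights[j] * gauss_pdf(y, means[j], covs[j]) for j in range(n)]
                  for y in Y])
    G /= G.sum(axis=1, keepdims=True)                       # Eq. (24)
    # M-step
    Nj = G.sum(axis=0)
    new_w = Nj / m                                          # Eq. (25)
    new_means = [sum(G[k, j] * Y[k] for k in range(m)) / Nj[j] for j in range(n)]
    new_covs = [sum(G[k, j] * np.outer(Y[k] - new_means[j], Y[k] - new_means[j])
                    for k in range(m)) / Nj[j] for j in range(n)]  # Eq. (27)
    return new_w, new_means, new_covs
```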

The traditional EM clustering scheme requires the number of Gaussian model components to be fixed in advance, which cannot meet the requirement of adaptively judging the number of components in outlier detection. Thus, a fast-greedy EM clustering strategy is proposed in this section to best match the measurement sequence to the fitting model. The fast-greedy strategy initializes the original dataset as a GMM containing only one Gaussian fitting component and gradually adds new components to the GMM until the corresponding GMM log-likelihood function reaches its maximum, i.e., \(L_{n * } > L_{n * + 1} \& L_{n * } > L_{n * - 1}\), where n* is the ideal number of GMM components.

Suppose the probability density of a GMM with n Gaussian components is \(f_{n} \left( {\tilde{y}} \right)\). After adding a new Gaussian component \({\text{G}} \left( {\tilde{y};\rho } \right)\), a new GMM is formed whose probability density function is:

$$f_{n + 1} \left( {\tilde{y}} \right) = \left( {1 - \Lambda } \right)f_{n} \left( {\tilde{y}} \right) + \Lambda {\text{G}} \left( {\tilde{y};\rho } \right)$$
(28)

where Λ is the mixing weight of the new Gaussian component, and the mixing weights of the original GMM components become \(\left( {1 - \Lambda } \right)\varpi_{n}\). Then, the new GMM log-likelihood function is:

$$L_{n + 1} = \sum\limits_{k = 1}^{m} {\log_{2} \left[ {\left( {1 - \Lambda } \right)f_{n} \left( {\tilde{y}} \right) + \Lambda {\text{G}} \left( {\tilde{y};\rho } \right)} \right]}$$
(29)

To maximize the new GMM log-likelihood function Ln+1, a global search method is used to optimize the parameters \(\rho_{n + 1} = (u_{n + 1} ,\Sigma_{n + 1} )\) and the weight Λ of the new Gaussian component.

Suppose the new component follows a standard normal distribution. Then expand the second-order Taylor series of Ln+1 at \(\Lambda_{0} = {1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}\) and maximize the resulting quadratic function with respect to Λ:

$$\hat{L}_{n + 1} = L_{n + 1} \left( {\left. \Lambda \right|\Lambda_{0} } \right) - \left[ {L^{\prime}_{n + 1} \left( {\left. \Lambda \right|\Lambda_{0} } \right)} \right]^{2} /2L^{\prime\prime}_{n + 1} \left( {\left. \Lambda \right|\Lambda_{0} } \right)$$
(30)

Define \(\delta \left( {\tilde{y};\rho } \right) = \frac{{f_{n} \left( {\tilde{y}} \right) - {\text{G}} \left( {\tilde{y};\rho } \right)}}{{f_{n} \left( {\tilde{y}} \right) + {\text{G}} \left( {\tilde{y};\rho } \right)}}\). The local optimum of log-likelihood function Ln+1 near \(\Lambda_{0} = {1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}\) can be written as:

$$\hat{L}_{n + 1} = \sum\limits_{k = 1}^{m} {\frac{{\log_{2} \left[ {f_{n} \left( {\tilde{y}} \right) + {\text{G}} \left( {\tilde{y};\rho } \right)} \right]}}{2}} + \frac{{\left[ {\sum\limits_{k = 1}^{m} {\delta \left( {\tilde{y};\rho } \right)} } \right]^{2} }}{{2\sum\limits_{k = 1}^{m} {\delta^{2} \left( {\tilde{y};\rho } \right)} }}$$
(31)

Based on Eq. (31), the weight of the new component is estimated by global search as follows:

$$\hat{\Lambda } = \frac{1}{2} - \frac{{\sum\nolimits_{k = 1}^{m} {\delta \left( {\tilde{y};\rho } \right)} }}{{2\sum\nolimits_{k = 1}^{m} {\delta^{2} \left( {\tilde{y};\rho } \right)} }}$$
(32)

Then, the optimal parameters \(\rho_{n + 1} = (u_{n + 1} ,\Sigma_{n + 1} )\) of the new component can be obtained through Eqs. (23)–(27) of the EM clustering scheme, and the new GMM log-likelihood function can be further calculated. The ideal number n* of GMM components is obtained by repeatedly adding new Gaussian components until the condition \(L_{n * } > L_{n * + 1} \& L_{n * } > L_{n * - 1}\) is satisfied.
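The closed-form weight of Eq. (32) can be sketched as below (function names ours); f_n and the candidate component G are assumed to be given as density callables. When the data sit mostly under the new component, δ ≈ −1 for every sample and the estimated weight approaches 1.

```python
import numpy as np

def new_component_weight(Y, f_n, g_new):
    """Closed-form mixing weight of the new component, Eq. (32)."""
    delta = np.array([(f_n(y) - g_new(y)) / (f_n(y) + g_new(y)) for y in Y])
    return 0.5 - delta.sum() / (2.0 * (delta**2).sum())
```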

When the measured sequence \({\tilde{\mathbf{Y}}} = \left\{ {\tilde{y}_{1} ,\tilde{y}_{2} , \ldots ,\tilde{y}_{m} } \right\}\) is very large, the optimal parameters and weights take many iterations to converge in the EM clustering scheme, resulting in a higher time cost than expected. The implicit parameter \(\hat{\Gamma }_{kj}\) is the posterior probability, obtained in the E-Step of the EM clustering scheme, that the sample \(\tilde{y}_{k}\) belongs to the jth Gaussian component; in other words, its size represents the probability that \(\tilde{y}_{k}\) belongs to that component. Further, the implicit parameters must satisfy the constraint \(\sum\nolimits_{j = 1}^{n} {\hat{\Gamma }_{kj} } = 1\), and the final optimized value of \(\hat{\Gamma }_{kj}\) indicates clearly whether or not the sample belongs to the jth component, i.e., \(\hat{\Gamma }_{kj} \gg {1 \mathord{\left/ {\vphantom {1 n}} \right. \kern-0pt} n}\) or \(\hat{\Gamma }_{kj} \ll {1 \mathord{\left/ {\vphantom {1 n}} \right. \kern-0pt} n}\). Thus, most of the data in the measurement sequence have an obvious tendency toward the Gaussian component to which they belong, implying a clear bifurcation of the implicit parameters. Through multiple iterations, the probability of a sample belonging to one Gaussian component becomes very close to 1, and the probabilities for the other components become very close to 0.

To accelerate the convergence of the EM clustering scheme, it is necessary to quickly eliminate implicit parameters with small probabilities and amplify the implicit parameter with the maximum probability during the iteration process. The following restrictions on the implicit parameters are considered:

$$\hat{\Gamma }_{kj}^{new} = \left\{ {\begin{array}{*{20}l} {0,} \hfill & {\quad \hat{\Gamma }_{kj}^{old} < \gamma_{low} } \hfill \\ {\hat{\Gamma }_{kj}^{old} ,} \hfill & {\quad \gamma_{low} < \hat{\Gamma }_{kj}^{old} < \gamma_{high} } \hfill \\ {\hat{\Gamma }_{kj}^{old} \cdot L_{n}^{(t)} /L_{n}^{(t - 1)} ,} \hfill & {\quad \hat{\Gamma }_{kj}^{old} > \gamma_{high} } \hfill \\ \end{array} } \right.$$
(33)
$$\hat{\Gamma }_{kj}^{new} = {{\hat{\Gamma }_{kj}^{new} } \mathord{\left/ {\sum\limits_{j = 1}^{n} {\hat{\Gamma }_{kj}^{new} } }\right.} }$$
(34)

where \(\gamma_{high}\) and \(\gamma_{low}\) are the pre-set upper and lower restriction thresholds, and t denotes the iteration index. Note that to ensure the constraint \(\sum\nolimits_{j = 1}^{n} {\hat{\Gamma }_{kj} } = 1\) holds at every step of the iterative process, it is necessary to apply Eq. (34).

The proposed fast-greedy strategy for the EM clustering scheme does not need to know the number of Gaussian mixture components in advance. It automatically determines the number of components, improves component selection, and reduces the computational cost.
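As an illustration, the restriction of Eqs. (33)–(34) amounts to only a few array operations. The sketch below is a minimal implementation under assumed example thresholds (\(\gamma_{low}\) = 0.05, \(\gamma_{high}\) = 0.9) and illustrative likelihood values; it is not the authors' code:

```python
import numpy as np

def fast_greedy_update(gamma, L_t, L_prev, g_low=0.05, g_high=0.9):
    """Fast-greedy restriction of Eqs. (33)-(34): zero out tiny
    responsibilities, scale dominant ones by the likelihood ratio
    L_n^(t) / L_n^(t-1), then renormalize each row so that the
    constraint sum_j Gamma_kj = 1 still holds (Eq. (34))."""
    g = gamma.copy()
    g[gamma < g_low] = 0.0                   # first branch of Eq. (33)
    g[gamma > g_high] *= L_t / L_prev        # third branch of Eq. (33)
    return g / g.sum(axis=1, keepdims=True)  # Eq. (34)

# two samples, three components: sample 0 clearly belongs to
# component 0, sample 1 is still ambiguous and is left untouched
gamma = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25]])
new = fast_greedy_update(gamma, L_t=110.0, L_prev=100.0)
```

After the update, the first sample's responsibilities collapse to a single component, while the ambiguous second sample keeps its original distribution, which is how the scheme accelerates the bifurcation described above.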

Finally, this improved EM clustering scheme can quickly and accurately detect whether the GNSS measurement sequence contains many outliers. If only one fitted Gaussian component is present, the GNSS measurement sequence is of good quality. In contrast, the presence of two or more Gaussian mixture components with means far from zero shows that the noise distribution is heavy-tailed, which further indicates that the GNSS measurement sequence contains many outliers. Therefore, when to apply the robust estimation method can be adaptively determined by the outlier measurement detection criteria. If the measurement quality is poor, the robust estimation equations in Eqs. (15)–(19) are used to suppress the outlier interference. Conversely, the penalty function \(\vartheta \left( {\omega_{k,N} } \right)\) is bypassed to accelerate the filter's convergence.
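This detection criterion can be sketched as a simple decision rule. Note that the 1-m tolerance on the component means and the requirement of at least two far-from-zero components below are illustrative assumptions, not thresholds specified by the paper:

```python
import numpy as np

def is_heavy_tailed(component_means, mean_tol=1.0, min_far=2):
    """Outlier-detection criterion sketch: flag the measurement
    sequence as heavy-tailed (i.e., containing many outliers) when
    at least `min_far` fitted Gaussian components have means far
    from zero (|mu| > mean_tol). Both thresholds are assumptions."""
    means = np.asarray(component_means, dtype=float)
    if means.size == 1:  # a single component: good measurement quality
        return False
    return int((np.abs(means) > mean_tol).sum()) >= min_far

# the component means found in the static test of Sect. 4.1
flag = is_heavy_tailed([-2.13, 0.1, 3.89])  # two means far from zero
```

When the flag is set, the robust update of Eqs. (15)–(19) would be engaged; otherwise the penalty function is bypassed.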

The steps of the fast-greedy strategy-based EM clustering scheme for GNSS outlier measurements detection are shown in Table 1.

Table 1 Outlier measurements detection criteria

3.2 Novel robust iterated CKF structure

According to the descriptions in Sect. 2 and Sect. 3.1, the novel robust estimation-based iterated CKF (IRCKF) method is designed to minimize the impact of outlier measurements on system operation in GNSS challenge conditions. The whole calculation structure of the IRCKF is described in Table 2. Supposing that the number of posterior-estimation iterations is i and the number of clustering-scheme iterations is t, the computational complexity of the proposed novel robust iterated CKF and the traditional methods is compared in Table 3.

Table 2 The whole novel robust estimation-based Iterated CKF (IRCKF) method
Table 3 The comparison of computational complexity

The computational complexity of the proposed method is an important consideration when evaluating its efficiency. The traditional CKF, which performs the time update and measurement update only once, mainly involves matrix operations such as multiplication and inversion. Its computational complexity is generally determined by the state dimension d and measurement dimension m, and is approximately O(d³ + m³). In contrast, the ICKF, as an iterative variant of the CKF, performs multiple iterations of the same measurement update, increasing the complexity to O(id³ + m³). Moreover, the additional outlier detection further increases the computational complexity of the proposed novel robust iterated CKF, since it involves every measurement point and includes the recursive convergence process of the E- and M-steps.

Although the outlier detection and iterative posterior estimation introduce additional computational complexity O((t + 1)m³) compared with the CKF and ICKF, investing computational resources in these steps is worthwhile to address the challenges of measurement outliers and improve performance in poor measurement environments. Meanwhile, the proposed method still meets the real-time requirement of GNSS/SINS integrated navigation applications, since the EM clustering scheme and the posterior estimation require only small numbers of clustering iterations t and posterior iterations i, respectively.

4 Performance evaluation

To analyze the performance of the proposed robust iterated CKF (IRCKF) in GNSS challenge conditions, static and moving tests of GNSS/SINS integrated navigation were implemented. Note that the static test is used to verify the effectiveness of the proposed outlier measurement detection criterion in the presence of outliers (Sect. 3), and the moving test is used to examine the overall superiority of the proposed algorithm over the traditional algorithms. The instruments and moving path used in the performance evaluation are shown in Figs. 1 and 2; they mainly include a SPAN-IGM unit for collecting inertial data and GNSS pseudo-range information and a high-precision RTK receiver for collecting position and velocity references. The specifications of the equipment are listed in Table 4.

Fig. 1
figure 1

The instruments in the performance evaluation

Fig. 2
figure 2

Moving path in performance evaluation

Table 4 The parameters of involved instruments

4.1 Static test

The location of the static test in Fig. 3a is close to buildings and trees to simulate the GNSS multipath errors and NLOS reception caused by a severe urban canyon effect. As can be seen from the sky-view map in Fig. 3b, satellite signals arriving below the 50-degree masking angle are difficult for the GNSS receiver to receive directly, which increases the number of GNSS outlier measurements. After comparing the collected GNSS single-point positioning information with the reference RTK positioning information, the GNSS positioning error histogram obtained through post-processing is shown in Fig. 4.

Fig. 3
figure 3

Visualization details of the static test in the performance evaluation. Note that the numbers denote the unique PRNs of GPS satellites

Fig. 4
figure 4

GNSS positioning error histogram in static test

In the post-processing statistics, it is clearly observed that the GNSS measurement errors contain many outliers and present a heavy-tailed distribution. However, it is not easy to determine in real time whether a heavy-tailed noise distribution occurs, which complicates deciding when to apply the robust estimation proposed in Sect. 2. For real-time GNSS outlier measurement detection, the proposed fast-greedy strategy-based EM clustering scheme in Sect. 3 is used to fit the GNSS error sequence. The results show that the fitted GMM has three Gaussian components: \(N_{1} \sim N\left( { - 2.13,3.70^{2} } \right)\), \(N_{2} \sim N\left( {0.1,0.62^{2} } \right)\) and \(N_{3} \sim N\left( {3.89,1.84^{2} } \right)\). The components N1 and N3, whose means are far from zero, show that the noise distribution is heavy-tailed, which is consistent with the post-processing analysis. Therefore, this improved EM clustering scheme can quickly and accurately detect whether the GNSS measurement sequence contains many outliers. If the GNSS measurement quality is poor, the robust estimation equations in Eqs. (15)–(19) are used to suppress the outlier interference. Conversely, the penalty function is bypassed to accelerate the filter's convergence.
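The fitting step itself can be reproduced with a plain EM recursion. The following self-contained sketch uses a textbook 1-D EM (not the authors' implementation) on synthetic data loosely shaped like the Fig. 4 histogram; the mixture proportions and random seeds are assumptions for illustration only:

```python
import numpy as np

def em_gmm_1d(y, n, iters=200, seed=0):
    """Textbook EM for a 1-D Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(y, size=n, replace=False)  # init means on data points
    var = np.full(n, np.var(y))
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        # E-step: responsibilities Gamma_kj of each component per sample
        d = y[:, None] - mu[None, :]
        p = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2.0 * np.pi * var)
        g = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = g.sum(axis=0)
        w = nk / len(y)
        mu = (g * y[:, None]).sum(axis=0) / nk
        var = (g * (y[:, None] - mu[None, :])**2).sum(axis=0) / nk
    return w, mu, var

# synthetic heavy-tailed error sequence: a dominant component near zero
# plus two offset components, loosely mimicking the static-test result
rng = np.random.default_rng(42)
y = np.concatenate([rng.normal(0.1, 0.6, 800),
                    rng.normal(-2.1, 3.7, 100),
                    rng.normal(3.9, 1.8, 100)])
w, mu, var = em_gmm_1d(y, n=3)
```

The fitted means, weights, and variances can then be handed to the detection criterion of Sect. 3 to decide whether the robust update should be engaged.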

4.2 Moving test

To validate the effect of the algorithm in an actual GNSS/SINS integrated navigation application, the moving test covers both poor-quality GNSS measurements caused by multipath error and NLOS reception and good-quality GNSS measurements. The moving test lasts 150 s. The first 45 s and the 61st to 114th seconds of the test are in an open area, while the 45th to 61st seconds and the 114th to 150th seconds are in an area with a severe urban canyon effect; the moving path is shown in Fig. 2. After post-processing, the navigation errors of the different algorithms are plotted in Fig. 5.

Fig. 5
figure 5

Navigation errors in moving test after post-processing

In the face of the challenges posed by urban canyons, increased multipath errors and NLOS reception lead to outlier measurements and time-varying noise during the 45th to 61st and 114th to 150th seconds of the test. Under the Bayesian estimation framework, the probability distributions of the state prior and measurement posterior need to be pre-computed in the corresponding filters to obtain the estimated parameters, with or without the sigma-point filtering technique.

Since the measurement noise does not conform to the assumed single Gaussian distribution but instead presents a heavy-tailed distribution, the accurate approximation of the probability distribution in the measurement update process is impaired. As a result, in Fig. 5, the traditional CKF cannot cope with the effect of GNSS outliers: its robustness in the filtering process is reduced, and the estimation of navigation parameters is inadequate for actual needs in GNSS challenge conditions. The ICKF, improved by the iterative algorithm and the MAP criterion, incorporates nonlinear regression and multiple iteration processes to ameliorate error compensation. It reduces the linearization truncation error in the nonlinear measurement update and exploits the innovation effectively to speed up the estimation. However, GNSS measurements with outliers are incorrectly absorbed by the iterated filter during the iterative process, so the posterior PDF remains inaccurate. Thus, outliers continue to affect the navigation system despite the ICKF's attempts to restrict them.

To address the aforementioned problems in GNSS challenge conditions, the proposed novel robust estimation-based iterated CKF (IRCKF) introduces the GM loss function in Sect. 2.3 to construct a penalty mechanism against GNSS measurements with outliers. As shown in Fig. 6a, the weights of all measurement sequences are updated in real time. GNSS measurements with large outliers are given small weights to limit their impact on the filter; conversely, clean measurements are given large weights so that their effective information is absorbed as soon as possible. It is clear from Fig. 5 that the IRCKF has very good resistance to outliers. In addition, the outlier measurement detection criteria are established to fully absorb the effective information of the innovation and speed up the operation of the proposed algorithm when there are no outliers in the GNSS measurements. Therefore, as shown in Fig. 6b, the penalty mechanism is only used to calculate the measurement weights when the noise of the GNSS measurement sequence is detected not to follow a single Gaussian distribution, based on the analysis in Sect. 4.1. The IRCKF's statistical navigation errors in the area with GNSS outliers are the smallest in Table 5, which indicates that the IRCKF outperforms the traditional methods in GNSS challenge conditions. Additionally, the GNSS measurement update rate is limited to 1–10 Hz, so the measurement interval (0.1–1 s) is substantially longer than the filter processing time of about 0.0384 s. As a result, the proposed IRCKF can still fulfill real-time processing requirements even though it has a higher time cost than the CKF and ICKF.
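For reference, one common form of the Geman-McClure (GM) weight used in such penalty mechanisms can be sketched as follows. The scale parameter and the exact loss form here are assumptions for illustration; the paper's Eqs. (15)–(19) define the actual scheme:

```python
import numpy as np

def gm_weight(residual, sigma=1.0):
    """IRLS weight derived from a common Geman-McClure loss
    rho(e) = e^2 / (sigma^2 + e^2), for which
    w(e) = rho'(e) / (2e) = sigma^2 / (sigma^2 + e^2)^2.
    Small residuals get weight ~1; large outliers are driven to ~0."""
    e2 = np.square(np.asarray(residual, dtype=float))
    return sigma**2 / (sigma**2 + e2) ** 2

# a clean residual, a moderate one, and a large outlier
weights = gm_weight(np.array([0.0, 0.5, 5.0]))
```

The weight decays quartically with the residual, which matches the qualitative behavior shown in Fig. 6a: outlier measurements are rapidly de-weighted while clean measurements keep near-unit weight.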

Fig. 6
figure 6

The weights of GNSS measurements under penalty mechanism

Table 5 The comparison of statistical errors

5 Conclusions

Faced with the challenges posed by urban canyons, traditional nonlinear Kalman filters primarily address truncation errors while overlooking the potential impact of the outlier measurements induced by increased multipath errors and NLOS reception. As a result, the accurate approximation of the probability distribution in the measurement update process is impaired and the robustness of the whole process is diminished in GNSS challenge conditions. This paper presents an iterated CKF utilizing a novel robust estimation technique to enhance the robustness of the nonlinear filter employed in a tightly coupled GNSS/SINS system operating in GNSS challenge conditions. However, a trade-off between disturbance rejection and sensitivity is still required, since the chosen robust kernel functions lack global convexity due to the fixed modulation index. In forthcoming research, the adaptive modulation index and the outlier detection criteria embedded in the robust kernel function deserve deeper investigation to cater to the navigation demands of various GNSS challenge conditions.

Availability of data and materials

The data used in this paper are available from the corresponding author on reasonable request.

Abbreviations

CKF:

Cubature Kalman filter

GNSS:

Global navigation satellite system

SINS:

Strapdown inertial navigation system

GM:

Geman-McClure

NLOS:

Non-line-of-sight

EKF:

Extended Kalman filter

UT:

Unscented transform

QKF:

Quadrature Kalman filter

PDF:

Probability density function

MAP:

Maximum a posteriori

LMMSE:

Linear minimization mean square error

GMM:

Gaussian mixture model

EM:

Expectation–maximization

IRCKF:

The novel robust estimation-based iterated CKF

References

  1. R. Sun, Z. Zhang, Q. Cheng, W.Y. Ochieng, Pseudorange error prediction for adaptive tightly coupled GNSS/IMU navigation in urban areas. GPS Solut. 26(1), 1–13 (2022)


  2. S. Li, S. Wang, Y. Zhou, Z. Shen, X. Li, Tightly coupled integration of GNSS, INS, and LiDAR for vehicle navigation in urban environments. IEEE Internet Things J. 9(24), 24721–24735 (2022)


  3. Y. Gao, G. Li, A slowly varying spoofing algorithm avoiding tightly-coupled GNSS/IMU with multiple anti-spoofing techniques. IEEE Trans. Veh. Technol. 71(8), 8864–8876 (2022)


  4. S.Y. Chen, Kalman filter for robot vision: a survey. IEEE Trans. Ind. Electron. 59(11), 4409–4420 (2012)


  5. Y. Wang, L. Xu, F. Zhang, H. Dong, Y. Liu, G. Yin, An adaptive fault-tolerant EKF for vehicle state estimation with partial missing measurements. IEEE ASME Trans. Mechatron. 26(3), 1318–1327 (2021)


  6. L. Herrera, M.C. Rodriguez-Linan, E. Clemente, M. Meza-Sanchez, L. Monay-Arredondo, Evolved extended Kalman filter for first-order dynamical systems with unknown measurements noise covariance. Appl. Soft Comput. 115, 108174 (2022)


  7. S.J. Julier, J.K. Uhlmann, H.F. Durrant-Whyte, A new approach for filtering nonlinear systems, in 1995 American Control Conference (1995), pp. 1628–1632

  8. W.I. Tam, D. Hatzinakos, An efficient radar tracking algorithm using multidimensional Gauss–Hermite quadratures, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (1997), pp. 3777–3780

  9. I. Arasaratnam, S. Haykin, Cubature Kalman filters. IEEE Trans. Autom. Contr. 54(6), 1254–1269 (2009)


  10. M.A. Skoglund, G. Hendeby, D. Axehill, Extended Kalman filter modifications based on an optimization viewpoint, in 2015 18th International Conference on Information Fusion (Fusion) (2015), pp. 1856–1861

  11. G. Geng, B. Wei, C. Duan, H. Jiang, Y. Hua, A strong robust observer of distributed drive electric vehicle states based on strong tracking-iterative central difference Kalman filter algorithm. Adv. Mech. Eng. 10(6), 1–12 (2018)


  12. T. Pfeifer, P. Protzel, Expectation-Maximization for Adaptive Mixture Models in Graph Optimization, in 2019 International Conference on Robotics and Automation (ICRA) (2019), pp. 3151–3157

  13. T. Pfeifer, S. Lange, P. Protzel, Dynamic covariance estimation—a parameter free approach to robust sensor fusion, in 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), (2017), pp. 359–365

  14. Y. Huang, Y. Zhang, N. Li, J. Chambers, Robust student’s t based nonlinear filter and smoother. IEEE Trans. Aerosp. Electron. Syst. 52(5), 2586–2596 (2016)


  15. G. Wang, X. Xu, T. Zhang, M–M estimation-based robust cubature Kalman filter for INS/GPS integrated navigation system. IEEE Trans. Instrum. Meas. 70, 9501511 (2021)


  16. J. Wang, X. Chen, P. Yang, Adaptive H-infinite Kalman filter based on multiple fading factors and its application in unmanned underwater vehicle. ISA Trans. 108, 295–304 (2021)


  17. J. Wang, X. Chen, J. Liu, X. Zhu, Y. Zhong, A robust backtracking CKF based on Krein space theory for in-motion alignment process. IEEE Trans. Intell. Transp. Syst. 24(2), 1909–1925 (2023)


  18. J. Wang, X. Chen, X. Shao, An adaptive multiple backtracking UKF method based on Krein space theory for marine vehicles alignment process. IEEE Trans. Veh. Technol. (2023). https://doi.org/10.1109/TVT.2022.3220243


  19. T. Ho, Robust localization in cellular networks via reinforced iterative M-estimation and fuzzy adaptation. IEEE Trans. Wirel. Commun. 21(6), 4269–4281 (2022)


  20. A.F. Garcia-Fernandez, L. Svensson, M.R. Morelande, S. Sarkka, Posterior linearization filter: principles and implementation using sigma points. IEEE Trans. Signal Process. 63(20), 5561–5573 (2015)


  21. M.J. Black, A. Rangarajan, On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. Int. J. Comput. Vis. 19(1), 57–91 (1996)


  22. M. Bai, Y. Huang, B. Chen, Y. Zhang, A novel robust Kalman filtering framework based on normal-skew mixture distribution. IEEE Trans. Syst. Man Cybern. Syst. 52(11), 6789–6805 (2022)



Funding

This work was supported by the National Natural Science Foundation of China [No. 61873064], the National Key Research and Development Project [No. 2017YFC0306303], the Postgraduate Research & Practice Innovation Program of Jiangsu Province [KYCX21_0087], Jiangsu Province Agriculture Innovation Project under Grant [CX(21)2015], and the Suzhou City Research and Development Project under Grant [SNG2020039].

Author information


Contributions

All authors contributed to this study, including the methodology, experiments, and analysis. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xiyuan Chen.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wang, J., Chen, X. & Shi, C. A novel robust iterated CKF for GNSS/SINS integrated navigation applications. EURASIP J. Adv. Signal Process. 2023, 83 (2023). https://doi.org/10.1186/s13634-023-01044-9



  • DOI: https://doi.org/10.1186/s13634-023-01044-9

Keywords

  • GNSS challenge conditions
  • Iterated CKF
  • Robust estimation
  • Outlier measurement