# Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates

Shunyi Zhao^{1}, Yuriy S. Shmaliy^{2} (corresponding author), Fei Liu^{1}, Oscar Ibarra-Manzano^{2} and Sanowar H. Khan^{3}

**2015**:83

https://doi.org/10.1186/s13634-015-0268-0

© Zhao et al. 2015

**Received: **29 April 2015

**Accepted: **21 August 2015

**Published: **17 September 2015

## Abstract

Unbiased estimation is an efficient alternative to optimal estimation when the noise statistics are not fully known and/or the model undergoes temporary uncertainties. In this paper, we investigate the effect of embedded unbiasedness (EU) on optimal finite impulse response (OFIR) filtering estimates of linear discrete time-invariant state-space models. A new OFIR-EU filter is derived by minimizing the mean square error (MSE) subject to the unbiasedness constraint. We show that the OFIR-EU filter is equivalent to the minimum variance unbiased (MVU) FIR filter. Unlike the OFIR filter, the OFIR-EU filter does not require the initial conditions. In terms of accuracy, the OFIR-EU filter occupies an intermediate place between the unbiased FIR (UFIR) and OFIR filters. Contrary to the UFIR filter, whose MSE is minimized by the optimal horizon of *N*_{opt} points, the MSEs in the OFIR-EU and OFIR filters diminish with *N*, and these filters are thus full-horizon. Based upon several examples, we show that the OFIR-EU filter has higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR and Kalman filters.

### Keywords

State estimation · Unbiased FIR filter · Optimal FIR filter · Kalman filter

## 1 Introduction

Beginning with the works by Gauss [1], unbiasedness has played the role of a necessary condition used to derive linear and nonlinear estimators [2]. In statistics and signal processing, the ordinary least squares (OLS) estimator proposed by Gauss in 1795 is an unbiased estimator. By the Gauss-Markov theorem [3], this estimator is also the best linear unbiased estimator (BLUE) [4] if the noise is white and has the same variance at each time step [5]. Unbiasedness is expressed by the condition \(E \{ \hat {\mathbf {x}}_{k} \} = E \{\mathbf {x}_{k} \}\), which means that the average of the estimate \(\hat {\mathbf {x}}_{k}\) is equal to that of the model state \(\mathbf{x}_{k}\). It leads to the unbiased finite impulse response (UFIR) estimator [6]. Of practical importance is that neither OLS nor UFIR requires the noise statistics, which are not always known to engineers [7]. The unbiasedness condition, however, does not guarantee a "good estimate" [8]. Therefore, the sufficient condition of minimized noise variance is often applied along with it to produce different kinds of estimators which are optimal in the minimum mean square error (MSE) sense or suboptimal: Bayesian, maximum likelihood (MLE), minimum variance unbiased (MVU), etc. In recent decades, a new class of FIR estimators (filters, smoothers, and predictors) was developed to have optimal or suboptimal properties.

The FIR filter utilizes finite measurements over the most recent time interval (horizon) of *N* discrete points. Compared to filters with infinite impulse response (IIR), such as the Kalman filter (KF) [9], the FIR filter exhibits some useful engineering features such as bounded input/bounded output (BIBO) stability [10], robustness against temporary model uncertainties and round-off errors [11], and lower sensitivity to noise [12]. The most noticeable early works on optimal FIR (OFIR) filtering are [13–15]. At that time, FIR filters were not commonly used for state estimation due to their analytical complexity and large computational burden. Nowadays, interest in FIR estimators has grown owing to the tremendous progress in computational resources. Accordingly, we find a number of new solutions on FIR filtering [16–21], smoothing [22–24], and prediction [25–27] as well as efficient applications [28–30].

Basically, the unbiasedness can be satisfied in two different strategies: (1) one may test an estimator by the unbiasedness condition or (2) one may embed the unbiasedness constraint into the design. We therefore recognize below the checked (tested) unbiasedness (CU) and the embedded unbiasedness (EU). Accordingly, we denote the FIR filter with CU as FIR-CU and the FIR filter with EU as FIR-EU.

In state estimation, signal processing, tracking, and control, two different state-space models are commonly used. The prediction model, which is basic in control, is \(\mathbf{x}_{k+1}=\mathbf{A}\mathbf{x}_{k}+\mathbf{B}\mathbf{w}_{k}\) and \(\mathbf{y}_{k}=\mathbf{C}\mathbf{x}_{k}+\mathbf{D}\mathbf{v}_{k}\), in which \(\mathbf{w}_{k}\) and \(\mathbf{v}_{k}\) are noise vectors, and **A**, **B**, **C**, and **D** are relevant matrices. Employing this model, receding horizon FIR estimators were proposed for different types of unbiasedness. In [16], the receding horizon FIR-CU filter was derived from the KF with no requirements for the initial state. Soon after, a receding horizon FIR-EU filter was proposed by Kwon, Kim, and Han in [17], where the unbiasedness condition was considered as a constraint to the optimization problem. Later, receding horizon FIR smoothers were found in [22] for CU by employing the maximum likelihood and in [24] for EU by minimizing the error variance.

The real-time state model \(\mathbf{x}_{k}=\mathbf{A}\mathbf{x}_{k-1}+\mathbf{B}\mathbf{w}_{k}\) is used in signal processing when prediction is not required (different time index) [31, 32]. Employing this model, the FIR-CU filter and smoother were proposed by Shmaliy in [23, 33] for polynomial systems. In [12], a *p*-shift unbiased FIR filter (UFIR) was derived as a special case of the OFIR filter. Here, the unbiasedness was checked a posteriori, and the solution thus belongs to CU. Soon after, the UFIR filter [12] was extended to time-variant systems [18, 34]. For nonlinear models, an extended UFIR filter was proposed in [35], and unified forms for FIR filtering and smoothing were discussed in [36]. An important advantage of the UFIR filter over the OFIR filter is that the noise statistics are not required. Because noise reduction in FIR structures is provided by averaging, *N*≫1 makes the UFIR filter as successful in accuracy as the OFIR filter.

It has to be remarked now that all of the aforementioned FIR estimators related to the real-time state-space model belong to the CU solutions. No optimal FIR estimator of the EU type has been addressed so far. It is thus unclear which kind of FIR estimator serves better in particular applications [37–39]. So, there is still room for discussion of the best FIR filter.

In this paper, we systematically investigate the effect of embedded unbiasedness on OFIR estimates. To this end, we derive a new FIR filter, called the OFIR-EU filter, by minimizing the MSE subject to the unbiasedness constraint. We also study the properties of the OFIR-EU filter in comparison with the OFIR and UFIR filters and the KF. The remaining part of the paper is organized as follows. In Section 2, we describe the model and formulate the problem. The OFIR-EU filter is derived in Section 3. Here, we also consider a unified form for different kinds of OFIR filters. In Section 4, we generalize several FIR filters and discuss special cases of the OFIR-EU filter. The MSEs are compared analytically in Section 5. Extensive simulations are provided in Section 6, and concluding remarks are drawn in Section 7.

The following notations are used: \(\mathbb R^{n}\) denotes the *n*-dimensional Euclidean space; *E*{·} denotes the expected value; diag(**e**_{1}⋯**e**_{m}) represents a diagonal matrix with diagonal elements **e**_{1},⋯,**e**_{m}; tr **M** is the trace of **M**; and **I** is the identity matrix of proper dimensions.

## 2 Preliminaries and problem formulation

Consider the linear discrete time-invariant state-space model

\(\mathbf{x}_{k}=\mathbf{A}\mathbf{x}_{k-1}+\mathbf{B}\mathbf{w}_{k},\) (1)

\(\mathbf{y}_{k}=\mathbf{C}\mathbf{x}_{k}+\mathbf{D}\mathbf{v}_{k},\) (2)

in which *k* is the discrete time index, \(\mathbf {x}_{k} \in {{\mathbb {R}}^{n}}\) is the state vector, and \(\mathbf {y}_{k} \in {{\mathbb {R}}^{p}}\) is the measurement vector. Matrices \(\mathbf {A} \in \mathbb R^{n \times n}\), \(\mathbf {B} \in \mathbb R^{n \times u}\), \(\mathbf {C} \in \mathbb R^{p\times n}\), and \(\mathbf {D} \in \mathbb R^{p \times v}\) are time-invariant and known. We suppose that the process noise \(\mathbf {w}_{k} \in \mathbb R^{u}\) and the measurement noise \(\mathbf {v}_{k} \in \mathbb R^{v}\) are zero mean, \(E\{\mathbf{w}_{k}\}=\mathbf{0}\) and \(E\{\mathbf{v}_{k}\}=\mathbf{0}\), mutually uncorrelated, and have arbitrary distributions and known covariances \(\mathbf {Q}(i,j) = E\left \{{{\mathbf {w}_{i}}\mathbf {w}_{j}^{T}}\right \}\) and \(\mathbf {R}(i,j) = E\left \{ {{\mathbf {v}_{i}}\mathbf {v}_{j}^{T}}\right \}\) for all *i* and *j*, meaning that \(\mathbf{w}_{k}\) and \(\mathbf{v}_{k}\) are not necessarily white Gaussian.
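Under these assumptions, the model can be simulated directly. The sketch below uses a hypothetical constant-velocity model with zero-mean uniform (hence non-Gaussian) noise to emphasize that only zero mean and known covariance are required; all matrices and noise levels are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Hypothetical two-state (constant-velocity) model; A, B, C, D are assumptions.
tau = 0.05                                   # sampling interval, s
A = np.array([[1.0, tau],
              [0.0, 1.0]])
B = np.eye(2)
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])

rng = np.random.default_rng(0)
x = np.array([1.0, 0.01])
X, Y = [], []
for _ in range(400):
    # zero-mean uniform noise: the model requires zero mean and known
    # covariance, not Gaussianity
    w = rng.uniform(-0.5, 0.5, 2)
    v = rng.uniform(-0.5, 0.5, 1)
    x = A @ x + B @ w                        # state equation
    X.append(x)
    Y.append(C @ x + D @ v)                  # measurement equation
X, Y = np.array(X), np.array(Y)
```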

The model (1) and (2) can be extended on a horizon [*l*, *k*] with recursively computed forward-in-time solutions, where *l*=*k*−*N*+1 is the start point of the averaging horizon. The time-variant state vector \(\mathbf {X}_{k,l}\in {{\mathbb {R}}^{Nn \times 1}}\), observation vector \(\mathbf {Y}_{k,l}\in {{\mathbb {R}}^{Np \times 1}}\), process noise vector \(\mathbf {W}_{k,l}\in {{\mathbb {R}}^{Nu \times 1}}\), and observation noise vector \(\mathbf {V}_{k,l}\in {{\mathbb {R}}^{Nv \times 1}}\) are specified on a horizon of *N* points, and the model (1) and (2) suggests that the corresponding extended matrices can be written accordingly.

Note that at the start horizon point we have the equation \(\mathbf{x}_{l}=\mathbf{x}_{l}+\mathbf{B}\mathbf{w}_{l}\), which is satisfied uniquely with zero-valued \(\mathbf{w}_{l}\), provided that **B** is nonzero. The initial state \(\mathbf{x}_{l}\) must thus be known in advance or estimated optimally.

The FIR filtering estimate utilizing *N* past neighboring measurement points on a horizon [*l*, *k*] can be specified with \(\hat{\mathbf{x}}_{k|k}=\mathbf{K}_{k}\mathbf{Y}_{k,l}\), where \(\hat {\mathbf {x}}_{k|k}\) is the estimate^{1} and \(\mathbf{K}_{k}\) is the FIR filter gain determined using a given cost criterion. Note that a distinctive difference between the FIR and IIR filters is that only one nearest past measurement is used in the recursive IIR (Kalman) filter to provide the estimate, while the convolution-based batch FIR filter requires the *N* most recent measurements.
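The batch convolution form \(\hat{\mathbf{x}}_{k|k}=\mathbf{K}_{k}\mathbf{Y}_{k,l}\) can be illustrated with the simplest possible case: for a scalar constant state (*A*=1, *C*=1), the FIR gain reduces to simple averaging over the *N* most recent samples. The toy model and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20                                        # horizon length
x_true = 5.0
y = x_true + rng.normal(0.0, 1.0, 200)        # noisy measurements of a constant

# for A = 1, C = 1 the unbiased FIR gain is a moving average over the horizon
K = np.full(N, 1.0 / N)

# batch estimate at each k >= N-1 from the horizon [k-N+1, k]
x_hat = np.array([K @ y[k - N + 1:k + 1] for k in range(N - 1, len(y))])
```

A recursive IIR (Kalman-like) filter would instead update the previous estimate with one new measurement, rather than re-reading the whole horizon.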

The state \(\mathbf{x}_{k}\) can be specified as in (17) with the extended matrix \(\bar{\mathbf{B}}_{k-l}\). By substituting (15) and (17) into (16), replacing the term \(\mathbf{Y}_{k,l}\) with (4), and providing the averaging, one arrives at the unbiasedness constraint (18).

The estimation error is defined as \(\mathbf{e}_{k}=\mathbf{x}_{k}-\hat{\mathbf{x}}_{k|k}\).

We also wish to investigate the effect of the unbiasedness constraint (18) on the OFIR-EU estimate, compare errors in different kinds of FIR filters, and analyze the trade-off between the OFIR-EU filter derived in this paper, the UFIR filter [33], the OFIR filter [34], and the KF under diverse operation conditions.

## 3 OFIR-EU filter

In the derivation of the OFIR-EU filter, the following lemma will be used.

### **Lemma 1.**

Given the cost (21), where **H**=**H**^{T}>**0**, **P**=**P**^{T}>**0**, and **S**=**S**^{T}>**0**, tr **M** is the trace of **M**, and *θ* denotes the constraint indication parameter such that *θ*=1 if the constraint exists and *θ*=0 otherwise. Here, **F**, **G**, **H**, **L**, **M**, **P**, **S**, **U**, and **Z** are constant matrices of appropriate dimensions. The solution to (21) is (22) with \(\mathbf {\Pi } = \mathbf {I} - \theta \mathbf {U}\left (\mathbf {U}^{T}\mathbf {\Xi }^{-1}\mathbf {U}\right)^{-1}\mathbf {U}^{T}\mathbf {\Xi }^{-1}\) and **Ξ** specified by (23).

### *Proof.*

The proof is provided in Appendix A.

### 3.1 The gain for OFIR-EU filter

Replacing \(\mathbf{x}_{k}\) with (17) and \(\hat {\mathbf {x}}_{k|k}\) with (15), the cost function becomes (26), where the system noise vector \(\mathbf{W}_{k,l}\) and the measurement noise vector \(\mathbf{V}_{k,l}\) are pairwise independent, and the auxiliary matrices are defined accordingly. With *θ*=1, the solution to the optimization problem (26) can be obtained by neglecting **L**, **M**, and **P** and using the replacements \(\mathbf{F} \leftarrow \mathbf{H}_{k-l}\), \(\mathbf {G} \leftarrow {\bar {\mathbf {B}}_{k-l}}\), \(\mathbf{H} \leftarrow \mathbf{\Theta}_{w}\), \(\mathbf{U} \leftarrow \mathbf{C}_{k-l}\), \(\mathbf{Z} \leftarrow \mathbf{A}^{N-1}\), and \(\mathbf{S} \leftarrow \mathbf{\Delta}_{v}\). We thus have the gain (29) with the components (30) and (31).

The OFIR-EU filter structure can now be summarized in the following theorem.

### **Theorem 1.**

Given the model (1) and (2) with zero-mean mutually uncorrelated noise vectors \(\mathbf{w}_{k}\) and \(\mathbf{v}_{k}\), the OFIR-EU filter utilizing measurements from *l* to *k* is stated by (35),

where \(\mathbf {K}_{k}^{\text {OEU}} = \mathbf {K}_{k}^{\text {OEUa}} + \mathbf {K}_{k}^{\text {OEUb}}\), \(\mathbf {Y}_{k,l} \in \mathbb R^{Np \times 1}\) is the measurement vector given by (6), and \(\mathbf {K}_{k}^{\text {OEUa}}\) and \(\mathbf {K}_{k}^{\text {OEUb}}\) are given by (30) and (31) with \(\mathbf{C}_{k-l}\) and \(\bar {\mathbf {B}}_{k-l}\) specified by (11) and the first row vector of (10), respectively.

The averaging horizon *N* for (35) should be chosen such that the first inverse in (30) exists. In general, *N* can be set as \(N \geqslant n\), where *n* is the number of the model states. Table 1 summarizes the steps in the OFIR-EU estimation algorithm, in which the noise statistics are assumed to be known for measurements available from *l* to *k*.

Given *N*, compute \(\mathbf {K}_{k}^{\text {OEUa}}\) and \(\mathbf {K}_{k}^{\text {OEUb}}\) according to (30) and (31), respectively, then the OFIR-EU estimate can be obtained at time index *k* by (35).

### 3.2 Unified form for OFIR and OFIR-EU filters

Consider now the unified gain (37), which makes no assumptions about the initial state \(\mathbf{x}_{l}\). Using Lemma 1 and substituting *θ*=1, (37) reduces to a form in which the relevant extended matrix is given by (38) and \(\bar {\mathbf {\Delta }}_{x+w+v}\) is specified by (39) with *θ*=1. Referring to (30) and (31) and taking into consideration that the second term on the right-hand side of (42) equals zero, we deduce that this gain coincides with the OFIR-EU gain specified by Theorem 1.

With *θ*=0, (37) transforms to (44). Multiplying \(\mathbf{\Theta}_{x}\) by the identity \(\left (\mathbf {C}_{k-l}^{T}\mathbf {C}_{k-l}\right)^{-1}\mathbf {C}_{k-l}^{T}\mathbf {C}_{k-l}\) from the left-hand side, (44) turns into (45). We thus infer that this case corresponds to the OFIR filter whose gain was found in [34]. At this point, we notice that (37) is a unified generalized form for the OFIR filter gain which minimizes the MSE in the estimate of the discrete time-invariant state-space model. In this regard, the OFIR filter gain derived in [34] and the OFIR-EU filter gain specified by Theorem 1 can be considered as special cases of (37).

## 4 MVU FIR filter

Owing to its unique properties, the unbiasedness constraint (18) has been employed extensively to derive different kinds of FIR filters [6, 15–17, 23]. The UFIR filter was shown in [12] to be a special case of the OFIR filter with the unbiased gain specified by (46), where *N* is chosen as \(N \geqslant n\) to guarantee the invertibility of \(\mathbf {C}_{k-l}^{T}\mathbf {C}_{k-l}\). The gain (46) can also be obtained by multiplying \(\mathbf{A}^{N-1}\) in the constraint (18) from the right-hand side with the identity matrix \(\left (\mathbf {C}_{k-l}^{T}\mathbf {C}_{k-l}\right)^{-1}\mathbf {C}_{k-l}^{T}\mathbf {C}_{k-l}\) and neglecting \(\mathbf{C}_{k-l}\) on both sides. In this sense, the UFIR filter is akin to Gauss's OLS. On the other hand, (46) does not guarantee optimality in the MSE sense. An optimized solution can be provided by minimizing the error variance, which leads to the minimum variance unbiased (MVU) FIR filter [40]. Since the properties of the MVU FIR filter are in-between the UFIR and OFIR filters, a unified form for the UFIR filter can also be assumed. Below, we specify the MVU FIR filter and show a unified relationship between the UFIR, MVU FIR, and OFIR-EU filter gains.

### 4.1 Identity of MVU FIR and OFIR-EU filters

The MVU FIR filter gain does not depend on the initial state matrix \(\mathbf{\Delta}_{x}\). Any \(\mathbf{\Delta}_{x}\) can thus be supposed in (50), provided that the inverse in (50) exists. This fundamental property was postulated in many papers [11, 17, 23, 33] and, based upon it, \(\mathbf {K}_{k}^{\text {MVU}}\) can be rewritten equivalently in terms of \(\mathbf{\Omega}_{k-l}\) given by (32). Referring to (31) and making some rearrangements, we arrive at the equality \(\mathbf {K}_{k}^{\text {MVU}} = \mathbf {K}_{k}^{\text {OEU}}\), which is formalized below with a theorem.

### **Theorem 2.**

The MVU FIR filter gain is identical to the OFIR-EU filter gain, \(\mathbf {K}_{k}^{\text {MVU}} = \mathbf {K}_{k}^{\text {OEU}}\).

### *Proof.*

The proof is given in Section 4.1.

It follows from Theorem 2 that the gain \(\mathbf {K}_{k}^{\text {MVU}}\) is not unique. One may suppose any initial state matrix \(\mathbf{\Delta}_{x}\), compute it by solving the discrete algebraic Riccati equation (DARE) as in [12], or even neglect \(\mathbf{\Delta}_{x}\) as we have done above. Although each of these cases requires a particular algorithm, Lemma 1 suggests that the estimation accuracy will not be affected by \(\mathbf{\Delta}_{x}\). We notice that this property of the MVU FIR filter was previously unknown. We use it below while comparing different kinds of unbiased FIR filters.

### 4.2 Unified form for UFIR and MVU FIR filters

One can multiply \(\mathbf{A}^{N-1}\) in the constraint (18) from the right-hand side with an appropriate identity matrix and remove \(\mathbf{C}_{k-l}\) from both sides. The unbiased gain \(\mathbf {K}_{k}^{\text {UU}}\) produced in such a way depends on an auxiliary matrix \(\mathbf{Z}_{k-l}\), provided that its inverse exists. However, a class of UFIR filters associated with \(\mathbf{Z}_{k-l}\) must have some reasonable formulation, which can be the following. Let us combine \(\mathbf {K}_{k}^{\text {UU}}\) with two additive components of the same class as in (55), where *κ* can be either 0 or 1. Depending on *κ* and \(\mathbf{\Pi}_{k-l}\), the following special cases can be recognized:

- If *κ*=0 and \(\mathbf{\Pi}_{k-l}=\lambda\mathbf{I}\) with *λ* constant, then \(\mathbf {K}_{k}^{\text {UU}} = \mathbf {K}_{k}^{\mathrm U}\).
- If *κ*=1 and \(\mathbf {\Pi }_{k-l}=\mathbf {\Delta }_{w+v}^{-1}\), then \(\mathbf {K}_{k}^{\text {UU}} = \mathbf {K}_{k}^{\text {OEU}}\).

Several other generalizations can also be made regarding the types of systems:

#### 4.2.1 Deterministic state model

For the deterministic state model, \(\mathbf{\Theta}_{w}\) should be omitted in (30) and (31), and (29) reduces to the gain (59), which becomes equal to \(\mathbf {K}_{k}^{\text {UU}}\) with *κ*=0 and \(\mathbf {\Pi }_{k-l}=\mathbf {\Delta }_{v}^{-1}\). This gain corresponds to the traditional BLUE and MLE for Gaussian models [5]. The batch form (59) was also shown in [11] for the receding horizon FIR filter with embedded unbiasedness and minimized variance.

#### 4.2.2 Deterministic measurement model

For the deterministic measurement model, the gain is a special case of (55) with *κ*=1 and \(\mathbf {\Pi }_{k-l}=\mathbf {\Delta }_{w}^{-1}\).

#### 4.2.3 Deterministic state-space model

By the constraint (18), the terms in the parentheses of (61) become identically zero. Hence, the solution to (61) is the unbiased gain \(\mathbf{K}_{k}\) given by (46). It then follows that

*The UFIR filter is a deadbeat filter for deterministic systems*.

Multiplying \(\mathbf{\Theta}_{x}\) by \(\left (\mathbf {C}_{k-l}^{T}\mathbf {C}_{k-l}\right)^{-1}\mathbf {C}_{k-l}^{T}\mathbf {C}_{k-l}\) from the left-hand side in (62) yields (63), which can also be obtained by setting the terms \(\mathbf{\Delta}_{w}\) and \(\mathbf{\Delta}_{v}\) in (45) to zero. We thus infer that

*The OFIR filter is a deadbeat filter for deterministic systems*.

**Table 2** Different FIR filter gains

| Filter | Gain |
|---|---|
| UFIR | \(\mathbf {K}_{k}^{\mathrm U}=\mathbf {A}^{N-1}\left (\mathbf {C}_{k-l}^{T}\mathbf {C}_{k-l}\right)^{-1}\mathbf {C}_{k-l}^{T}\) |
| OFIR-EU | \(\mathbf {K}_{k}^{\text {OEU}}= \mathbf {K}_{k}^{\text {OEUa}}+\mathbf {K}_{k}^{\text {OEUb}}\) |
| OFIR | \(\mathbf {K}_{k}^{\mathrm O}= \left (\mathbf {K}_{k}^{\mathrm U}\mathbf {\Delta }_{x}+\bar {\mathbf {B}}_{k-l}\mathbf {\Theta }_{w}\mathbf {H}_{k-l}^{T}\right)\mathbf {\Delta }_{x+w+v}^{-1}\) |
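Assuming \(\mathbf{C}_{k-l}\) stacks \(\mathbf{C}, \mathbf{C}\mathbf{A}, \ldots, \mathbf{C}\mathbf{A}^{N-1}\) (its exact definition is in (11), which is not reproduced here), the UFIR gain from the table and the unbiasedness constraint \(\mathbf{K}_{k}^{\mathrm U}\mathbf{C}_{k-l}=\mathbf{A}^{N-1}\) can be checked numerically; the model matrices are illustrative assumptions:

```python
import numpy as np

tau = 0.05
A = np.array([[1.0, tau], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
N = 10                                        # N >= n for invertibility

# assumed extended observation matrix: stacks C, CA, ..., CA^{N-1}
C_stack = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(N)])

A_pow = np.linalg.matrix_power(A, N - 1)
K_U = A_pow @ np.linalg.inv(C_stack.T @ C_stack) @ C_stack.T  # gain (46)

# unbiasedness constraint: K^U C_{k-l} = A^{N-1}
assert np.allclose(K_U @ C_stack, A_pow)

# deadbeat behavior: noise-free measurements give the exact state at k
x_l = np.array([1.0, 0.01])
Y = C_stack @ x_l                             # noise-free horizon measurements
assert np.allclose(K_U @ Y, A_pow @ x_l)
```

The second assertion illustrates the deadbeat property stated in Section 4.2.3: for a deterministic system, the UFIR estimate recovers the state exactly.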

## 5 Estimation errors

Given the correspondence between the OFIR, OFIR-EU (MVU FIR), and UFIR filter gains (Table 2), in this section we proceed with an analysis of the estimation errors. We compare the MSEs of these filters and point out their common features and differences.

### 5.1 Mean square errors

The MSE \(\mathbf{J}_{k}\) at the estimator output can be defined as \(\mathbf{J}_{k} = E\left\{\mathbf{e}_{k}\mathbf{e}_{k}^{T}\right\}\). Since \(\mathbf{x}_{k}\) is inherently unbiased, we write \(E\{ \mathbf {x}_{k} \mathbf {x}_{k}^{T} \} = \text {Var}(\mathbf {x}_{k})\) and \(E\left \{ \hat {\mathbf {x}}_{k|k} \hat {\mathbf {x}}_{k|k}^{T} \right \} = \text {Bias}^{2} (\hat {\mathbf {x}}_{k|k}) + \text {Var}(\hat {\mathbf {x}}_{k|k})\). We further decompose the estimate as \(\hat {\mathbf {x}}_{k|k} = \text {Bias}(\hat {\mathbf {x}}_{k|k}) + \tilde {\mathbf {x}}_{k|k}\), where \(\tilde {\mathbf {x}}_{k|k}\) is the random part of \(\hat {\mathbf {x}}_{k|k}\), and find (65), in which \(\text{Var}(\mathbf{x}_{k})\) is specified by the model noise covariances.

Based upon (65), below we specify the MSEs for the above considered FIR filters.
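The bias-variance decomposition behind (65) can be verified numerically with a deliberately biased toy estimator (a shrunk sample mean); all numbers here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
x_true, sigma, N, trials = 5.0, 2.0, 20, 200000

samples = x_true + sigma * rng.normal(size=(trials, N))
x_hat = 0.9 * samples.mean(axis=1)            # shrinkage introduces bias

mse = np.mean((x_hat - x_true) ** 2)          # empirical MSE
bias = x_hat.mean() - x_true                  # empirical bias
var = x_hat.var()                             # empirical variance

# the decomposition MSE = Bias^2 + Var holds exactly for empirical moments
assert abs(mse - (bias ** 2 + var)) < 1e-6
```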

#### 5.1.1 MSE in the UFIR estimate

Since \(\mathbf{W}_{k,l}\) and \(\mathbf{V}_{k,l}\) are mutually independent, the covariance \(\text {Cov}(\mathbf {x}_{k},\hat {\mathbf {x}}_{k|k})\) can be obtained in terms of \(\mathbf {K}_{k}^{\mathrm U}\) given by (46), which leads to the UFIR MSE (70). The MSE (70) was first studied in [18].

#### 5.1.2 MSE in the OFIR-EU estimate

With \(\mathbf{\Upsilon}_{k-l}\) given by (53), we transform (75) to (76), in which \(\mathbf {J}_{k}^{\mathrm {U}}\) is provided by (70).

#### 5.1.3 MSE in the OFIR estimate

The above-provided relations (70), (76), and (80) allow us to analyze the effect of the unbiasedness constraint on the OFIR filtering estimates, which we provide below.

### 5.2 Correspondence between the MSEs

A general relationship between the MSEs associated with different FIR filters is ascertained by the following theorem.

### **Theorem 3.**

The MSEs of the OFIR, OFIR-EU, and UFIR filters are related by

\(\text {tr}\,\mathbf {J}_{k}^{\mathrm O} \leqslant \text {tr}\,\mathbf {J}_{k}^{\text {OEU}} \leqslant \text {tr}\,\mathbf {J}_{k}^{\mathrm U},\)

and the inequalities become equalities when the state-space model is deterministic.

### *Proof.*

The proof is given in [40], and we support it with a simple analysis. The UFIR filter is designed to obtain zero bias. Although the noise variance is reduced here as \(\propto \frac {1}{N}\), optimality is not guaranteed. Therefore, the MSE in the UFIR filter generally exceeds those in the two other filters. The MSE in the OFIR filter is minimal among the three. The OFIR-EU filter minimizes the MSE subject to the embedded unbiasedness; its error is thus in between those of the UFIR and OFIR filters.
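The \(\propto \frac{1}{N}\) variance reduction invoked in this argument is straightforward to verify empirically: averaging *N* independent zero-mean samples scales the variance by 1/*N*.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2, trials = 4.0, 100000

for N in (5, 20, 80):
    # variance of the mean of N iid samples should be sigma2 / N
    means = np.sqrt(sigma2) * rng.normal(size=(trials, N)).mean(axis=1)
    assert abs(means.var() - sigma2 / N) < 0.05 * sigma2 / N
```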

## 6 Applications

The reader can also find some other comparisons of the KF and FIR filters in [16, 18, 34, 41].

**Accurate model: ideal case**

In an ideal case, one may think that the model represents a process accurately and the noise statistics are known exactly. The goal then is to learn the effect of the horizon length *N* on the FIR estimates. We set the measurement noise variance as \({\sigma _{v}^{2}} = 10\) and the initial states as *x*_{10}=1 and *x*_{20}=0.01/s.

We compute the MSE \(\mathbf{J}_{k}\) as a function of *N*. The results are illustrated in Fig. 1 for \({\sigma _{w}^{2}}=1\) and in Fig. 2 for \({\sigma _{w}^{2}}=0.1\). What we can see here is that the MSE of the UFIR filter traditionally has a minimum at the optimal horizon *N*_{opt} [42]: with *N*<*N*_{opt}, noise reduction is inefficient and, if *N*>*N*_{opt}, the bias error dominates. On the other hand, the KF is *N*-invariant and its MSE is thus constant. The following generalizations can also be made:

- The embedded unbiasedness puts the OFIR-EU filter error in between the UFIR and OFIR filters: the OFIR-EU filter behaves essentially as the UFIR filter when *N*<*N*_{opt} and as the OFIR filter when *N*>*N*_{opt}.
- The OFIR and OFIR-EU estimates converge to the KF estimate as the averaging horizon *N* increases; the estimates become practically indistinguishable when *N*≫*N*_{opt}.
- An increase in *N*_{opt} diminishes the error difference between the OFIR and UFIR filters (compare Fig. 1 with *N*_{opt}=33 and Fig. 2 with *N*_{opt}=47).
- Because the MSEs in the OFIR and OFIR-EU filters diminish with *N*, these filters are full-horizon [18].
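A rough sketch of this horizon-length experiment for the UFIR filter alone, assuming a constant-velocity model with \(\sigma_{w}^{2}=1\) and \(\sigma_{v}^{2}=10\) and an assumed process-noise gain (the exact simulation setup behind Figs. 1 and 2 is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
tau = 0.05
A = np.array([[1.0, tau], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

# simulate the process; the process-noise gain [tau^2/2, tau] is an assumption
K_steps = 2000
x = np.array([1.0, 0.01])
X, y = [], []
for _ in range(K_steps):
    x = A @ x + rng.normal(0.0, 1.0, 2) * np.array([tau**2 / 2, tau])
    X.append(x.copy())
    y.append((C @ x)[0] + rng.normal(0.0, np.sqrt(10.0)))
X, y = np.array(X), np.array(y)

def ufir_mse(N):
    """Empirical MSE of the batch UFIR estimate for horizon length N."""
    Cs = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(N)])
    K_U = np.linalg.matrix_power(A, N - 1) @ np.linalg.pinv(Cs)
    errs = [K_U @ y[k - N + 1:k + 1] - X[k] for k in range(N - 1, K_steps)]
    return float(np.mean([e @ e for e in errs]))

mse = {N: ufir_mse(N) for N in (2, 5, 10, 20, 40, 80)}
```

Plotting `mse` against *N* reproduces the qualitative behavior described above: very short horizons average too little noise, while the best horizon depends on the noise ratio.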

**Filtering with errors in the noise statistics**

To simulate errors in the noise statistics, we introduce a correction coefficient *p*, substitute **Q** and **R** with *p*^{2}**Q** and **R**/*p*^{2}, vary *p* from 0.1 to 10, and plot the RMSE \(\sqrt {\text {tr}\,\mathbf {J}_{k}}\) as shown in Fig. 3.

Note that the MSE functions of the optimal filters inherently have a minimum at *p*=1, while the MSE of the UFIR filter is *p*-invariant.

Setting *p*=1 makes the OFIR filter, OFIR-EU filter, and KF a bit more accurate than the UFIR filter. However, it is only within a narrow range of *p* (0.6<*p*<1.5 in Fig. 3) that the KF slightly outperforms the UFIR filter; otherwise, the UFIR filter demonstrates smaller errors. Referring to practical difficulties in the determination of noise statistics [7], the latter can be considered an important engineering advantage of the UFIR filter. Some other generalizations also emerge from Fig. 3:

- The embedded unbiasedness makes the OFIR-EU filter *p*-invariant for *p*<1. In this sense, the OFIR-EU filter is equal here to the UFIR filter, and this can be considered a particularly meaningful property of the proposed approach.
- With *p*<1, the KF is more sensitive to errors in the noise statistics than the FIR filters.
- With *p*>1, the MSEs in the KF, OFIR filter, and OFIR-EU filter grow and converge.

Overall, we conclude that the OFIR-EU filter inherits the robustness of the UFIR filter against errors in the noise statistics and has better performance than the OFIR filter and KF.

**Filtering with model uncertainties**

To learn the effect of temporary model uncertainties on the filtering accuracy, in this section we set *τ*=0.1 s when 160≤*k*≤180 and *τ*=0.05 s otherwise. The noise variances are set to \(\sigma _{w1}^{2} = 1\), \(\sigma _{w2}^{2} = 1/ \mathrm {s}^{2}\), and \({\sigma _{v}^{2}} = 10\). The process is simulated over 400 subsequent points.

Here, the OFIR-EU filter (with *p*=0.2) and the UFIR filter produce almost equal errors and demonstrate good robustness against the uncertainties. On the contrary, the KF demonstrates much worse robustness for any *p*≤1.

## 7 Conclusions

Summarizing, we notice that the unbiasedness embedded into the OFIR filter instills several useful properties. Unlike the OFIR filter, the OFIR-EU filter completely ignores the initial conditions. The OFIR-EU filter is equivalent to the MVU FIR filter. In terms of accuracy, the OFIR-EU filter is in between the UFIR and OFIR filters. Unlike the UFIR filter, whose MSE is minimized by *N*_{opt}, the MSEs in the OFIR-EU and OFIR filters diminish with *N*, and these filters are thus full-horizon. The performance of the OFIR-EU filter was investigated by varying the horizon *N* around *N*_{opt} and by ranging the correction coefficient *p* around *p*=1. Accordingly, the OFIR-EU filter in general demonstrates higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR filter and KF.

Referring to the fact that optimal FIR filters are essentially full-horizon filters but their batch forms are computationally inefficient, we now focus our attention on a fast iterative form of the OFIR-EU filter and plan to report the results in the near future.

## 8 Endnote

^{1} \(\hat {\mathbf {x}}_{k|k}\) means the estimate at *k* via measurements from the past to *k*.

## 9 Appendix A: Proof of Lemma 1

Representing the gain **K** as \(\mathbf {K}^{T} = [\mathbf {k}_{1}\;\mathbf {k}_{2} \cdots \mathbf {k}_{m}]\), where *m* is the dimension of **K**, rewrite the cost *ϕ* columnwise, where \(\mathbf {g}_{i}\) and \(\mathbf {m}_{i}\) are the *i*th column vectors of **G** and **M**, respectively, and *i*=1,2,…,*m*. Reasoning along similar lines, the *i*th constraint can be specified by (85). Both *ϕ*_{i} and \({\mathfrak {L}^{i}_{\left \{ {\mathbf {U}^{T}\mathbf {k}_{i}} = \mathbf {z}_{i}\right \} \left | \theta \right.}}\) are independent of \(\mathbf {k}_{j}\), *j*≠*i*, and the optimization problem (21) can thus be reduced to *m* independent optimization problems, *i*=1,2,…,*m*. Now, define an auxiliary Lagrangian function \(\varphi _{i|\theta }\), in which \(\boldsymbol {\lambda }_{i}\) denotes the *i*th vector of Lagrange multipliers. Note that \(\varphi _{i|\theta }\) depends on *θ*, which governs the existence of the constraint. Setting *θ*=1, first consider the general case of **F**≠**U**, **L**≠**U**, **G**≠**Z**, and **M**≠**Z**, which is denoted as case (a). Taking the derivatives of \(\varphi _{i|a}\) with respect to \(\mathbf {k}_{i}\) and \(\boldsymbol {\lambda }_{i}\), respectively, and equating them to zero yields the stationarity conditions,

where **H**>**0**, **P**>**0**, and **S**>**0**. Multiplying both sides of (89) by \(\mathbf {U}^{T}\) from the left-hand side, using the constraint (85), and arranging the terms, we arrive at the solution for the multipliers. Since **H**=**H**^{T}, **P**=**P**^{T}, **S**=**S**^{T}, and \(\mathbf {\Xi }_{a}=\mathbf {\Xi }_{a}^{T}\), this transforms \(\mathbf {k}_{i}^{T}\), and combining the rows gives \(\mathbf {K}_{a}\).

With *θ*=1, **F**=**U**, and **G**=**Z**, which is denoted as case (b), or with *θ*=1, **L**=**U**, and **M**=**Z**, which is denoted as case (c), the solutions (93) and (94) can be obtained similarly to case (a). Note that (93) and (94) are equal to the results found in [11] for receding horizon FIR filtering via the prediction state model.

In the unconstrained case of *θ*=0, which is denoted as case (d), the derivative of \(\varphi _{i|d}\) with respect to \(\mathbf {k}_{i}\) is set to zero with \(\mathbf {\Xi }_{d}=\mathbf {\Xi }_{a}\), and yields \(\mathbf {k}_{i}\), from which \(\mathbf {K}_{d}\) can be found.

Letting **F**=**U** and **L**=**U**, and using *θ* as an indicating parameter of the constraint, the matrices \(\mathbf {K}_{a}\), \(\mathbf {K}_{b}\), \(\mathbf {K}_{c}\), and \(\mathbf {K}_{d}\) can be unified with (100), where **Ξ** is specified by (23). An equivalent form of (100) is (22), and the proof is complete.
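The Lagrange-multiplier machinery of the proof can be sanity-checked on its simplest member: minimizing \(\mathbf{k}^{T}\mathbf{H}\mathbf{k}\) subject to \(\mathbf{U}^{T}\mathbf{k}=\mathbf{z}\), whose stationary point exhibits the same \(\left(\mathbf{U}^{T}\mathbf{\Xi}^{-1}\mathbf{U}\right)^{-1}\) structure as (22). All matrices below are random test data, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)
n, c = 6, 2
M = rng.normal(size=(n, n))
H = M @ M.T + n * np.eye(n)          # H = H^T > 0
U = rng.normal(size=(n, c))
z = rng.normal(size=c)

# stationary point of the Lagrangian k^T H k + 2 lambda^T (U^T k - z)
Hi = np.linalg.inv(H)
k_opt = Hi @ U @ np.linalg.solve(U.T @ Hi @ U, z)

assert np.allclose(U.T @ k_opt, z)   # the constraint holds

# every other feasible point k_opt + d with U^T d = 0 has a larger cost
Q, _ = np.linalg.qr(U, mode='complete')
null = Q[:, c:]                      # basis of the null space of U^T
for _ in range(100):
    d = null @ rng.normal(size=n - c)
    assert k_opt @ H @ k_opt <= (k_opt + d) @ H @ (k_opt + d) + 1e-9
```

The cost difference expands to \(\mathbf{d}^{T}\mathbf{H}\mathbf{d}\geq 0\) for any feasible perturbation **d**, which is why the stationary point is the constrained minimum.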

## Declarations

### Acknowledgements

This investigation was supported by the Royal Academy of Engineering under the Newton Research Collaboration Programme NRCP/1415/140.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

- CF Gauss,
*Theory of the combination of observations least subject to errors.*(SIAM Publ, Philadelphia, 1995). Transl. by Stewart GW.View ArticleGoogle Scholar - H Stark, JW Woods,
*Probability, random processes, and estimation theory for engineers*, 2nd edn (Prentice Hall, Upper Saddle River, NJ, 1994).Google Scholar - JH Stapleton,
*Linear statistical models*, 2nd edn (Wiley, New York, 2009).MATHGoogle Scholar - AC Aitken, On least squares and linear combinations of observations. Proc. R. Soc. Edinb. 55, 42–48 (1935).View ArticleGoogle Scholar
- SM Kay,
*Fundamentals of statistical signal processing*(Prentice Hall, New York, 2001).Google Scholar - YS Shmaliy, An unbiased FIR filter for TIE model of a local clock in applications to GPS-based timekeeping. IEEE Trans. Ultrason. Ferroelec. Freq. Control. 53(5), 862–870 (2006).View ArticleGoogle Scholar
- BP Gibbs,
*Advanced Kalman filtering, least-squares and modeling*(John Wiley & Sons, Hoboken, NJ, 2011).View ArticleGoogle Scholar - M Hardy, An illuminating counterexample. Am. Math. Mon. 110(3), 232–238 (2003).View ArticleGoogle Scholar
- D Simon,
*Optimal state estimation: Kalman, Hinf, and nonlinear approaches*(John Wiley & Sons, Honboken, NJ, 2006).View ArticleGoogle Scholar - AH Jazwinski,
*Stochastic processes and filtering theory*(Academic, New York, 1970).MATHGoogle Scholar - WH Kwon, S Han,
*Receding horizon control: model predictive control for state models*(Springer, London, 2005).Google Scholar - YS Shmaliy, Linear optimal FIR estimation of discrete time-invariant state-space models. IEEE Trans. Signal Process. 58(6), 3086–2010 (2010).MathSciNetView ArticleGoogle Scholar
- KR Johnson, Optimum, linear, discrete filtering of signals containing a nonrandom component. IRE Trans. Inf. Theory. 2(2), 49–55 (1956).View ArticleGoogle Scholar
- AH Jazwinski, Limited memory optimal filtering. IEEE Trans. Autom. Contr. 13(10), 558–563 (1968).View ArticleGoogle Scholar
- CK Ahn, S Han, WH Kwon, FIR filters for linear continuous-time state-space systems. IEEE Signal Process. Lett. 13(9), 557–560 (2006).View ArticleGoogle Scholar
- WH Kwon, PS Kim, P Park, A receding horizon Kalman FIR filter for discrete time-invariant systems. IEEE Trans. Autom. Contr. 99(9), 1787–1791 (1999).MathSciNetView ArticleGoogle Scholar
- WH Kwon, PS Kim, S Han, A receding horizon unbiased FIR filter for discrete-time state space models. Automatica. 38(3), 545–551 (2002).MATHView ArticleGoogle Scholar
- YS Shmaliy, An iterative Kalman-like algorithm ignoring noise and initial conditions. IEEE Trans. Signal Process. 59(6), 2465–2473 (2011).MathSciNetView ArticleGoogle Scholar
- YS Shmaliy, Optimal gains of FIR estimators for a class of discrete-time state-space models. IEEE Signal Process. Lett. 15, 517–520 (2008).
- CK Ahn, Strictly passive FIR filtering for state-space models with external disturbance. Int. J. Electron. Commun. 66(11), 944–948 (2012).
- JM Park, CK Ahn, MT Lim, MK Song, Horizon group shift FIR filter: alternative nonlinear filter using finite recent measurement. Measurement 57, 33–45 (2014).
- CK Ahn, PS Kim, Fixed-lag maximum likelihood FIR smoother for state-space models. IEICE Electron. Express 5(1), 11–16 (2008).
- YS Shmaliy, LJ Morales-Mendoza, FIR smoothing of discrete-time polynomial signals in state space. IEEE Trans. Signal Process. 58(5), 2544–2555 (2010).
- BK Kwon, S Han, OK Kim, WH Kwon, Minimum variance FIR smoothers for discrete-time state space models. IEEE Signal Process. Lett. 14(8), 557–560 (2007).
- L Danyang, L Xuanhuang, Optimal state estimation without the requirement of a priori statistics information of the initial state. IEEE Trans. Autom. Contr. 39(10), 2087–2091 (1994).
- KV Ling, KW Lim, Receding horizon recursive state estimation. IEEE Trans. Autom. Contr. 44(9), 1750–1753 (1999).
- J Makhoul, Linear prediction: a tutorial review. Proc. IEEE 63, 561–580 (1975).
- J Levine, The statistical modeling of atomic clocks and the design of time scales. Rev. Sci. Instrum. 83, 021101-1–021101-28 (2012).
- Y Kou, Y Jiao, D Xu, M Zhang, Ya Liu, X Li, Low-cost precise measurement of oscillator frequency instability based on GNSS carrier observation. Adv. Space Res. 51(6), 969–977 (2013).
- JW Choi, S Han, JM Cioffi, An FIR channel estimation filter with robustness to channel mismatch condition. IEEE Trans. Broadcast. 54(1), 127–130 (2008).
- J Salmi, A Richter, V Koivunen, Detection and tracking of MIMO propagation path parameters using state-space approach. IEEE Trans. Signal Process. 57(4), 1538–1550 (2009).
- I Nevat, J Yuan, Joint channel tracking and decoding for BICM-OFDM systems using consistency test and adaptive detection selection. IEEE Trans. Veh. Technol. 58(8), 4316–4328 (2009).
- YS Shmaliy, Unbiased FIR filtering of discrete-time polynomial state-space models. IEEE Trans. Signal Process. 57(4), 1241–1249 (2009).
- YS Shmaliy, O Ibarra-Manzano, Time-variant linear optimal finite impulse response estimator for discrete state-space models. Int. J. Adapt. Control Signal Process. 26(2), 95–104 (2012).
- YS Shmaliy, Suboptimal FIR filtering of nonlinear models in additive white Gaussian noise. IEEE Trans. Signal Process. 60(10), 5519–5527 (2012).
- D Simon, YS Shmaliy, Unified forms for Kalman and finite impulse response filtering and smoothing. Automatica 49(6), 1892–1899 (2013).
- YL Wei, J Qiu, HR Karimi, M Wang, A new design of *H*_{∞} filtering for continuous-time Markovian jump systems with time-varying delay and partially accessible mode information. Signal Process. 93(9), 2392–2407 (2013).
- YL Wei, M Wang, J Qiu, New approach to delay-dependent *H*_{∞} filtering for discrete-time Markovian jump systems with time-varying delay and incomplete transition descriptions. IET Control Theory Appl. 7(5), 684–696 (2013).
- J Qiu, YL Wei, HR Karimi, New approach to delay-dependent *H*_{∞} control for continuous-time Markovian jump systems with time-varying delay and deficient transition descriptions. J. Frankl. Inst. 352(1), 189–215 (2015).
- S Zhao, YS Shmaliy, B Huang, F Liu, Minimum variance unbiased FIR filter for discrete time-variant models. Automatica 53, 355–361 (2015).
- PS Kim, An alternative FIR filter for state estimation in discrete-time systems. Digit. Signal Process. 20(3), 935–943 (2010).
- FR Echeverria, A Sarr, YS Shmaliy, Optimal memory for discrete-time FIR filters in state-space. IEEE Trans. Signal Process. 62, 557–561 (2014).