
Optimal and unbiased FIR filtering in discrete time state space with smoothing and predictive properties

Abstract

We address p-shift finite impulse response optimal (OFIR) and unbiased (UFIR) algorithms for predictive filtering (p > 0), filtering (p = 0), and smoothing filtering (p < 0) at a discrete point n over N neighboring points. The algorithms are designed for linear time-invariant state-space signal models with white Gaussian noise. The OFIR filter self-determines the initial mean square state function by solving the discrete algebraic Riccati equation. The UFIR filter, represented both in the batch and iterative Kalman-like forms, requires neither the noise covariances nor the initial errors. An example of application is given for smoothing and predictive filtering of a two-state polynomial model. Based upon this example, we show that exact optimality is redundant when N ≫ 1 and that a fairly good suboptimal estimate can still be provided with a UFIR filter at a much lower cost.

Introduction

There is a class of estimation problems requiring optimal filtering at a discrete-time current point n employing measurements on an averaging interval (horizon) of N preceding and/or succeeding neighboring, but not necessarily nearest, points. To solve such problems, filtering is usually organized employing finite impulse response (FIR) structures. Because the averaging interval can be placed with an arbitrary time shift p with respect to n, three kinds of p-shift FIR filters can be recognized, as shown in Figure 1: the p-step predictive filter (p > 0), the filter (p = 0), and the |p|-lag smoothing filter (p < 0).

Figure 1

FIR filtering at a discrete point n. (a) p-step predictive filtering (p > 0), (b) filtering (p = 0), and (c) |p|-lag smoothing filtering (p < 0). Measurement is organized on an interval of N points from m − p to n − p, where m = n − N + 1.

Predictive FIR filtering is fundamental for discrete-time feedback systems and is required in signal processing when measurement is temporarily unavailable in the nearest past of p points. The one-step predictive filter, known as the receding horizon filter, underlies the concept of receding horizon (or model predictive) control [1, 2]. For polynomial signals, an unbiased predictive FIR filter was proposed by Heinonen and Neuvo in [3]. This filter was further investigated by many authors [4] and developed in state space to p-step predictive filtering [5].

Smoothing FIR filtering is a key solution whenever denoising of signals is required with the highest efficiency. The Savitzky-Golay smoothing filter [6] is one of the most popular here. In recent decades, several substantial new results have appeared. Linear FIR smoothers were developed and used by Zhou and Wang in the FIR-median hybrid filters [7]. In state space, order-recursive FIR smoothers were proposed by Yuan and Stuller in [8]. Most recently, the general receding horizon FIR smoother theory has been developed in [9, 10] and, for polynomial signals, the |p|-lag smoothing FIR filter theory has been addressed in [11].

It follows from the above short survey that authors prefer solving the problems of filtering, smoothing, and prediction with different algorithms. In [12, 13], a universal scheme has been proposed for the p-shift FIR estimators (filters, predictors, and smoothers). Still, no universal solution has been addressed in state space for FIR filtering with smoothing and prediction properties.

In this article, we follow the approach developed in [12] and address universal p-shift optimal FIR (OFIR) and unbiased FIR (UFIR) filters for predictive filtering (p > 0), filtering (p = 0), and smoothing filtering (p < 0) at a current point n of linear discrete time-invariant state-space models with white noise. The rest of the article is organized as follows. In Section ‘Signal model and problem formulation’, we describe the model and formulate the problem. The p-shift OFIR filter is derived in Section ‘p-Shift OFIR filter with predictive and smoothing properties’. Here, we also find its gain and estimate the initial mean square state function. The UFIR filter is considered in detail in Section ‘p-shift UFIR filter with predictive and smoothing properties’ along with the estimation error. An application to the two-state model is given in Section ‘Applications’, and concluding remarks are drawn in Section ‘Conclusion’.

Signal model and problem formulation

Consider a class of discrete time-invariant linear signal models represented in state space with the state and observation equations, respectively,

$x_n = A x_{n-1} + B w_n,$
(1)
$y_n = C x_n + D v_n,$
(2)

where $x_n \in \mathbb{R}^K$ and $y_n \in \mathbb{R}^M$ are the state and observation vectors, respectively,

$x_n = [x_{1n}\ x_{2n}\ \cdots\ x_{Kn}]^T,$
(3)
$y_n = [y_{1n}\ y_{2n}\ \cdots\ y_{Mn}]^T.$
(4)

Here, $A \in \mathbb{R}^{K \times K}$, $B \in \mathbb{R}^{K \times P}$, $C \in \mathbb{R}^{M \times K}$, and $D \in \mathbb{R}^{M \times M}$. The system noise vector $w_n \in \mathbb{R}^P$ and the measurement noise vector $v_n \in \mathbb{R}^M$, respectively,

$w_n = [w_{1n}\ w_{2n}\ \cdots\ w_{Pn}]^T,$
(5)
$v_n = [v_{1n}\ v_{2n}\ \cdots\ v_{Mn}]^T,$
(6)

are white Gaussian with zero-mean components, $E\{w_n\} = 0$ and $E\{v_n\} = 0$. It is implied that $w_n$ and $v_n$ are mutually uncorrelated, $E\{w_i v_j^T\} = 0$ for all $i$ and $j$, and have the covariances, respectively,

$R = E\{w_n w_n^T\},$
(7)
$Q = E\{v_n v_n^T\}.$
(8)

The problem can now be formulated as follows. Given the model (1) and (2), we would like to derive a p-shift OFIR filter covering the problems of predictive filtering (p > 0), filtering (p = 0), and smoothing filtering (p < 0), as shown in Figure 1. We also wish to find its unbiased version, represent it in the iterative Kalman-like form, and investigate the errors based on a typical example.
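Before proceeding, it may help to see (1) and (2) in executable form. The sketch below simulates the two-state polynomial model used later in the ‘Applications’ section; the matrix values are taken from that section, the helper name `simulate` is ours, and Gaussian measurement noise of the same variance stands in for the uniform noise used in the experiment.

```python
import numpy as np

# Model matrices of (1)-(2) for the two-state polynomial example
# (tau = 1, sigma_x = 0.1, sigma_y = 1e-3, sigma_v = 2/sqrt(3));
# these values come from the 'Applications' section, not the theory.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # state transition
B = np.eye(2)                     # system noise gain
C = np.array([[1.0, 0.0]])        # observation matrix
D = np.eye(1)                     # measurement noise gain
R = np.diag([0.1**2, 1e-6])       # system noise covariance (7)
Q = np.array([[4.0 / 3.0]])       # measurement noise variance (8)

def simulate(n_steps, x0, rng):
    """Propagate x_n = A x_{n-1} + B w_n and y_n = C x_n + D v_n."""
    x = np.asarray(x0, dtype=float)
    xs, ys = [], []
    for _ in range(n_steps):
        w = rng.multivariate_normal(np.zeros(2), R)
        v = rng.multivariate_normal(np.zeros(1), Q)
        x = A @ x + B @ w
        y = C @ x + D @ v
        xs.append(x.copy())
        ys.append(y.copy())
    return np.array(xs), np.array(ys)

xs, ys = simulate(100, [1.0, 0.01], np.random.default_rng(0))
```

All filters discussed below operate on horizons of N consecutive rows of `ys`.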

p-Shift OFIR filter with predictive and smoothing properties

A p-shift OFIR filter can be derived following Figure 1 by representing (1) and (2) on a horizon of N points, similarly to [11], with recursively computed forward-in-time solutions [14] as follows, respectively,

$X_{n-p,m-p} = A_{N-1} x_{m-p} + B_{N-1} W_{n-p,m-p},$
(9)
$Y_{n-p,m-p} = C_{N-1} x_{m-p} + G_{N-1} W_{n-p,m-p} + D_{N-1} V_{n-p,m-p},$
(10)

where $X_{n,m} \in \mathbb{R}^{KN}$, $Y_{n,m} \in \mathbb{R}^{MN}$, $W_{n,m} \in \mathbb{R}^{PN}$, and $V_{n,m} \in \mathbb{R}^{MN}$ are given by, respectively,

$X_{n-p,m-p} = [x_{n-p}^T\ x_{n-1-p}^T\ \cdots\ x_{m-p}^T]^T,$
(11)
$Y_{n-p,m-p} = [y_{n-p}^T\ y_{n-1-p}^T\ \cdots\ y_{m-p}^T]^T,$
(12)
$W_{n-p,m-p} = [w_{n-p}^T\ w_{n-1-p}^T\ \cdots\ w_{m-p}^T]^T,$
(13)
$V_{n-p,m-p} = [v_{n-p}^T\ v_{n-1-p}^T\ \cdots\ v_{m-p}^T]^T.$
(14)

The matrices $A_{N-1} \in \mathbb{R}^{KN \times K}$, $B_{N-1} \in \mathbb{R}^{KN \times PN}$, $C_{N-1} \in \mathbb{R}^{MN \times K}$, $G_{N-1} \in \mathbb{R}^{MN \times PN}$, and $D_{N-1} \in \mathbb{R}^{MN \times MN}$ are specified with, respectively,

$A_i = [(A^i)^T\ (A^{i-1})^T\ \cdots\ A^T\ I]^T,$
(15)
$B_i = \begin{bmatrix} B & AB & \cdots & A^{i-1}B & A^{i}B \\ 0 & B & \cdots & A^{i-2}B & A^{i-1}B \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & B & AB \\ 0 & 0 & \cdots & 0 & B \end{bmatrix},$
(16)
$C_i = \bar{C}_i A_i,$
(17)
$G_i = \bar{C}_i B_i,$
(18)
$D_i = \mathrm{diag}(\underbrace{D\ D\ \cdots\ D}_{i+1}),$
(19)

where we have assigned $\bar{C}_i = \mathrm{diag}(\underbrace{C\ C\ \cdots\ C}_{i+1})$.

In this model, the initial state $x_{m-p}$ is supposed to be known exactly, and $w_{m-p}$ is thus allowed to have zero components.
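The extended matrices (15)-(19) are straightforward to assemble numerically. The following sketch is our own reading of that construction (the function name is an assumption); blocks are ordered newest-first, as in (11)-(14).

```python
import numpy as np

def extended_matrices(A, B, C, N):
    """Build A_{N-1}, B_{N-1}, C_{N-1}, G_{N-1} of (15)-(18) for a
    horizon of N points; i = N - 1, newest time on top as in (11)."""
    K, P = B.shape
    i = N - 1
    # (15): A_i = [(A^i)^T (A^{i-1})^T ... A^T I]^T
    A_i = np.vstack([np.linalg.matrix_power(A, i - r) for r in range(N)])
    # (16): block (r, c) = A^{c-r} B for c >= r, zero otherwise
    B_i = np.zeros((K * N, P * N))
    for r in range(N):
        for c in range(r, N):
            B_i[r*K:(r+1)*K, c*P:(c+1)*P] = np.linalg.matrix_power(A, c - r) @ B
    Cbar = np.kron(np.eye(N), C)        # diag(C, ..., C), N blocks
    return A_i, B_i, Cbar @ A_i, Cbar @ B_i   # (15)-(18)
```

The last block row of $B_i$ is $[0\ \cdots\ 0\ B]$, which is consistent with the zero components allowed in $w_{m-p}$ above.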

Optimal gain

One can now assign the gain matrix $H(p) \triangleq H(p,n,m) \in \mathbb{R}^{K \times MN}$ implementing the convolution principle and find the filtering estimate^a of $x_n$ as

$\tilde{x}_{n|n-p} = H(p) Y_{n-p,m-p}$
(20)
$= H(p)[C_{N-1} x_{m-p} + G_{N-1} W_{n-p,m-p} + D_{N-1} V_{n-p,m-p}].$
(21)

For H(p) to be optimal in the minimum mean square error (MSE) sense, the following cost function needs to be minimized,

$J(p) = E\{(x_n - \tilde{x}_{n|n-p})(x_n - \tilde{x}_{n|n-p})^T\} = E\{[x_n - H(p)(C_{N-1} x_{m-p} + G_{N-1} W_{n-p,m-p} + D_{N-1} V_{n-p,m-p})][x_n - H(p)(C_{N-1} x_{m-p} + G_{N-1} W_{n-p,m-p} + D_{N-1} V_{n-p,m-p})]^T\},$
(22)

where $E\{x\}$ denotes the expectation of $x$. The minimization can be provided employing the orthogonality condition [14] in the form of [12],

$0 = E\{[x_n - \hat{H}(p)(C_{N-1} x_{m-p} + G_{N-1} W_{n-p,m-p} + D_{N-1} V_{n-p,m-p})](C_{N-1} x_{m-p} + G_{N-1} W_{n-p,m-p} + D_{N-1} V_{n-p,m-p})^T\},$
(23)

to produce the optimal gain matrix $\hat{H}(p)$. In doing so, one substitutes $x_n$ with the first vector row^b in (9); that is,

$x_n = A^{N-1+p} x_{m-p} + \bar{B}_{N-1} W_{n-p,m-p},$
(24)

where $\bar{B}_{N-1}$ is the first vector row in (16).

Substituting (24) into (23) and supposing that the initial state and measurement noise are mutually uncorrelated for all p, we provide the averaging in (23) and arrive at the optimal gain matrix

$\hat{H}(p) = (A^{N-1+p} R_{m-p} C_{N-1}^T + \bar{Z}_w)(Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1},$
(25)

in which $\bar{Z}_w = \bar{B}_{N-1} \Psi G_{N-1}^T$,

$Z_{m-p} = C_{N-1} R_{m-p} C_{N-1}^T,$
(26)
$\tilde{Z}_w = G_{N-1} \Psi G_{N-1}^T,$
(27)
$\tilde{Z}_v = D_{N-1} \Phi D_{N-1}^T,$
(28)

the mean square initial state is specified by

$R_{m-p} = E\{x_{m-p} x_{m-p}^T\},$
(29)

and the signal and measurement white noise covariance matrices are formed as, respectively,

$\Psi = E\{W_{n-p,m-p} W_{n-p,m-p}^T\} = \mathrm{diag}(\underbrace{R\ R\ \cdots\ R}_{N}),$
(30)
$\Phi = E\{V_{n-p,m-p} V_{n-p,m-p}^T\} = \mathrm{diag}(\underbrace{Q\ Q\ \cdots\ Q}_{N}).$
(31)

By multiplying $R_{m-p}$ in (25) from the left-hand side with the identity matrix $(C_{n-m}^T C_{n-m})^{-1} C_{n-m}^T C_{n-m}$, we finally have

$\hat{H}(p) = [\bar{H}(p) Z_{m-p} + \bar{Z}_w](Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1},$
(32)

where, by n − m = N − 1, the unbiased gain attains two equivalent forms,

$\bar{H}(p) = A^{N-1+p}(C_{n-m}^T C_{n-m})^{-1} C_{n-m}^T$
(33)
$= A^{N-1+p}(C_{N-1}^T C_{N-1})^{-1} C_{N-1}^T.$
(34)

Note that $\bar{H}(p)$ satisfies the unbiasedness condition

$E\{\hat{x}_{n|n-p}\} = E\{x_n\}$
(35)

and has an important applied property: it does not depend on noise and initial errors, although it is p- and N-dependent.
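In code, the unbiased gain (34) reduces to a few lines. The sketch below (our naming) also verifies the identity $\bar{H}(p) C_{N-1} = A^{N-1+p}$, which is what makes the unbiasedness condition (35) hold for any zero-mean noise and any initial error.

```python
import numpy as np

def ufir_gain(A, C, N, p):
    """Unbiased gain (34): A^{N-1+p} (C_{N-1}^T C_{N-1})^{-1} C_{N-1}^T.
    For smoothing (p < 0) the power N - 1 + p stays nonnegative as long
    as |p| <= N - 1."""
    A_stack = np.vstack([np.linalg.matrix_power(A, N - 1 - r) for r in range(N)])
    C_N1 = np.kron(np.eye(N), C) @ A_stack          # eq. (17)
    Anp = np.linalg.matrix_power(A, N - 1 + p)
    # (C^T C)^{-1} C^T via a linear solve rather than an explicit inverse
    H = Anp @ np.linalg.solve(C_N1.T @ C_N1, C_N1.T)
    return H, C_N1

A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
H, C_N1 = ufir_gain(A, C, N=8, p=2)
# Deadbeat identity behind (35): H_bar(p) C_{N-1} = A^{N-1+p}
```

Since $E\{Y_{n-p,m-p}\} = C_{N-1} E\{x_{m-p}\}$ for zero-mean noise, this identity gives $E\{\bar{H}(p) Y_{n-p,m-p}\} = A^{N-1+p} E\{x_{m-p}\} = E\{x_n\}$.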

As shown in the Appendix, the matrix $Z_{m-p}$, representing in (32) the mean square initial state $R_{m-p}$ on an averaging interval of N points, can optimally be estimated by solving the discrete algebraic Riccati equation (DARE)

$0 = Z_{m-p}(\tilde{Z}_w + \tilde{Z}_v)^{-1} Z_{m-p} + 2 Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v - Y_{n-p,m-p} Y_{n-p,m-p}^T (\tilde{Z}_w + \tilde{Z}_v)^{-1} Z_{m-p},$
(36)

whose analytic solution can be found following [15]. We notice that this equation also serves for filtering out all of the noise components [12].

Optimal filtering estimate

With $Z_{m-p}$ determined by (36), the p-shift OFIR filtering estimate $\hat{x}_{n|n-p}$ can now be generalized as follows. Given (1) and (2) with uncorrelated zero-mean white noise vectors $w_n$ and $v_n$, p-step OFIR predictive filtering (p > 0), filtering (p = 0), and |p|-lag smoothing filtering (p < 0) can be provided at n, employing data taken from m − p to n − p, by

$\hat{x}_{n|n-p} = \hat{H}(p) Y_{n-p,m-p}$
(37)
$= [\bar{H}(p) Z_{m-p} + \bar{Z}_w](Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1} Y_{n-p,m-p}$
(38)
$= [A^{N-1+p}(C_{N-1}^T C_{N-1})^{-1} C_{N-1}^T Z_{m-p} + \bar{Z}_w](Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1} Y_{n-p,m-p},$
(39)

where $Y_{n-p,m-p}$ is the data vector (12), $C_i$ is given by (17), and $\bar{Z}_w$, $\tilde{Z}_w$, and $\tilde{Z}_v$ are specified for (25). The algorithm should be applied with N ≥ K in order to avoid singularities. Note that K is typically not large in state-space modeling.
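To make the assembly of (37)-(39) concrete, the sketch below builds the OFIR gain from the pieces (26)-(31). It is our own illustration: $R_{m-p}$ is passed in as an assumption rather than self-determined through the DARE (36), and the $A^p$ factor applied to the first block row of (16) is our reading of $\bar{B}_{N-1}$ for $p \neq 0$.

```python
import numpy as np

def ofir_gain(A, B, C, D, R, Q, N, p, Rmp):
    """OFIR gain (25)/(32) with Z_{m-p} = C_{N-1} R_{m-p} C_{N-1}^T (26).
    Rmp is supplied by the caller (an assumption; the article obtains
    it via the DARE (36))."""
    K, P = B.shape
    i = N - 1
    A_st = np.vstack([np.linalg.matrix_power(A, i - r) for r in range(N)])
    B_st = np.zeros((K * N, P * N))
    for r in range(N):
        for c in range(r, N):
            B_st[r*K:(r+1)*K, c*P:(c+1)*P] = np.linalg.matrix_power(A, c - r) @ B
    Cbar = np.kron(np.eye(N), C)
    C_N1, G_N1 = Cbar @ A_st, Cbar @ B_st           # (17), (18)
    D_N1 = np.kron(np.eye(N), D)                    # (19)
    Psi, Phi = np.kron(np.eye(N), R), np.kron(np.eye(N), Q)  # (30), (31)
    Bbar = np.linalg.matrix_power(A, p) @ B_st[:K, :]  # first row of (16), shifted
    Zbar_w = Bbar @ Psi @ G_N1.T
    Zmp = C_N1 @ Rmp @ C_N1.T                       # (26)
    Zt_w = G_N1 @ Psi @ G_N1.T                      # (27)
    Zt_v = D_N1 @ Phi @ D_N1.T                      # (28)
    Anp = np.linalg.matrix_power(A, i + p)
    H = (Anp @ Rmp @ C_N1.T + Zbar_w) @ np.linalg.inv(Zmp + Zt_w + Zt_v)
    return H, C_N1
```

When $R_{m-p}$ strongly dominates the noise terms, the resulting gain satisfies $\hat{H}(p) C_{N-1} \approx A^{N-1+p}$, i.e., it behaves as an unbiased gain.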

p-shift UFIR filter with predictive and smoothing properties

There are at least two cases when exact optimality is redundant and the OFIR filter can fairly be substituted with the UFIR one at a much lower cost, still producing a near optimal estimate [12]. In fact, if $Z_{m-p}$ substantially dominates $\bar{Z}_w$, $\tilde{Z}_w$, and $\tilde{Z}_v$ by orders of magnitude for all p, we have $\hat{H}(p) \cong \bar{H}(p)$. The same effect is achieved with N ≫ 1.

Thus, the UFIR filter should also be generalized. Given (1) and (2) with uncorrelated zero-mean white noise components $w_n$ and $v_n$, p-step UFIR predictive filtering (p > 0), filtering (p = 0), and |p|-lag smoothing filtering (p < 0) can be provided at n, employing data taken from m − p to n − p, by

$\bar{x}_{n|n-p} = \bar{H}(p) Y_{n-p,m-p}$
(40)
$= A^{N-1+p}(C_{N-1}^T C_{N-1})^{-1} C_{N-1}^T Y_{n-p,m-p}.$
(41)

Note that both (40) and (41) follow from (38) and (39) straightforwardly by referring to the unbiasedness condition (35) and neglecting $\bar{Z}_w$, $\tilde{Z}_w$, and $\tilde{Z}_v$.

Kalman-like UFIR filtering algorithm

Noticing that the UFIR filter (41), ignoring the noise covariances and initial error, is highly attractive for engineering applications, one also notes that a computational problem may arise in its batch form when N ≫ 1. To circumvent this problem, a fast iterative Kalman-like form has been addressed in [13] for filters, predictors, and smoothers. Introducing a time shift p into $\bar{x}_{n+p|n}$ stated by Theorem 2 in [13] for time-invariant models, and taking into consideration that the initial $F_l$ is time-invariant, the iterative Kalman-like form of (41) appears as follows:

$\Upsilon = (C_{s-m}^T C_{s-m})^{-1},$
(42)
$F_s = A^{s-m+p}\,\Upsilon\,(A^{s-m+p})^T,$
(43)
$\bar{x}_{s|s-p} = A^{s-m+p}\,\Upsilon\,C_{s-m}^T Y_{s-p,m-p},$
(44)
$F_l = [C^T C + (A F_{l-1} A^T)^{-1}]^{-1},$
(45)
$\bar{x}_{l|l-p} = A\bar{x}_{l-1|l-1-p} + K_l(y_{l-p} - C A^{1-p}\bar{x}_{l-1|l-1-p}),$
(46)

where $K_l \triangleq K_l(p) = A^p F_l C^T$ is the bias correction gain, m = n − N + 1, s = α − 1, and the iterative variable l ranges from α = max(m + K, m + 2, m + 2 − p) to n. The true estimate corresponds to l = n.

As in the case of the UFIR estimator [13], the gain $K_l$ in (46) does not depend on noise and initial errors. In this algorithm, we have two batch forms, (43) and (44), which can be computed fast for typically small K. To avoid singularities, the computation starts with l = α and finishes at l = n. The last value is used as the true one, and the procedure is repeated iteratively for each new measurement. The iterative p-shift Kalman-like algorithm (42)-(46) is listed in Table 1 in a convenient computational form.
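For readers who prefer code, a minimal Python sketch of (42)-(46) follows (naming is ours; time indices are taken relative to the horizon, so m = 0 and n = N − 1, and p may be positive, zero, or negative). On noise-free polynomial data the estimate is exact, which is a convenient self-check of the recursion.

```python
import numpy as np

def ufir_estimate(A, C, ys, p=0):
    """Iterative p-shift UFIR estimate (42)-(46).

    ys -- the N horizon measurements y_{m-p} ... y_{n-p} as an (N, M)
          array, oldest first; returns x_bar_{n|n-p}.
    """
    Kdim = A.shape[0]
    N = ys.shape[0]
    m, n = 0, N - 1
    alpha = max(m + Kdim, m + 2, m + 2 - p)
    s = alpha - 1
    j = s - m
    # short batch (42)-(44) over the first j + 1 points, newest on top
    A_stack = np.vstack([np.linalg.matrix_power(A, j - r) for r in range(j + 1)])
    C_sm = np.kron(np.eye(j + 1), C) @ A_stack
    Y = ys[s::-1].reshape(-1, 1)                    # y_{s-p} ... y_{m-p}
    Ups = np.linalg.inv(C_sm.T @ C_sm)              # (42)
    Asp = np.linalg.matrix_power(A, j + p)
    F = Asp @ Ups @ Asp.T                           # (43)
    x = Asp @ Ups @ C_sm.T @ Y                      # (44)
    Ap = np.linalg.matrix_power(A, p)
    A1p = np.linalg.matrix_power(A, 1 - p)
    for l in range(alpha, n + 1):
        F = np.linalg.inv(C.T @ C + np.linalg.inv(A @ F @ A.T))  # (45)
        Kl = Ap @ F @ C.T                           # bias correction gain
        yl = ys[l - m].reshape(-1, 1)               # y_{l-p}
        x = A @ x + Kl @ (yl - C @ A1p @ x)         # (46)
    return x.ravel()
```

For a noise-free ramp, the output reproduces the true state for filtering, prediction, and smoothing alike, illustrating the deadbeat property of the unbiased gain.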

Table 1 Iterative p-shift Kalman-like UFIR filtering algorithm

Estimation error

Although the estimation error is not involved in the algorithm (42)-(46), which is its most remarkable property, the MSE may be required to characterize the filter performance.

The MSE in the p-shift FIR filtering estimate can be evaluated by the matrix

$P_l = E\{(x_l - \bar{x}_{l|l-p})(x_l - \bar{x}_{l|l-p})^T\}.$
(47)

Substituting $\bar{x}_{l|l-p}$ with (46), assigning $\bar{x}_l \triangleq \bar{x}_{l|l-p}$ and $\varepsilon_l = x_l - \bar{x}_l$, and employing (1) and (2), we first write

$P_l = E\{(x_l - A\bar{x}_{l-1} - K_l y_{l-p} + K_l C A^{1-p}\bar{x}_{l-1})(x_l - A\bar{x}_{l-1} - K_l y_{l-p} + K_l C A^{1-p}\bar{x}_{l-1})^T\} = E\{(A\varepsilon_{l-1} + B w_l - K_l C x_{l-p} - K_l D v_{l-p} + K_l C A^{1-p}\bar{x}_{l-1})(A\varepsilon_{l-1} + B w_l - K_l C x_{l-p} - K_l D v_{l-p} + K_l C A^{1-p}\bar{x}_{l-1})^T\}.$
(48)

As a next step, we need to express $x_{l-p}$ via $x_{l-1}$. That can be done by writing (1) forward and backward in time for different p and providing the transformations in order to finally have

$x_{l-p} = A^{1-p} x_{l-1} + \beta_l,$
(49)

where

$\beta_l = \begin{cases} \sum_{i=0}^{|p|} A^{|p|-i} B w_{l+i}, & p \leqslant 0, \\ 0, & p = 1, \\ -\sum_{i=1}^{p-1} A^{i-p} B w_{l-i}, & p > 1. \end{cases}$
(50)

By substituting (49) with (50) into (48), taking into consideration that $E\{\varepsilon_{l-1}\beta_l^T\}$ and $E\{\beta_l\varepsilon_{l-1}^T\}$ have zero components, and providing the averaging, we finally come up with

$P_l = E\{[(I - K_l C A^{-p})A\varepsilon_{l-1} + B w_l - K_l C\beta_l - K_l D v_{l-p}][(I - K_l C A^{-p})A\varepsilon_{l-1} + B w_l - K_l C\beta_l - K_l D v_{l-p}]^T\} = (I - K_l C A^{-p})A P_{l-1} A^T (I - K_l C A^{-p})^T + B R B^T - B\bar{R}C^T K_l^T - K_l C\hat{R}B^T + K_l C\tilde{R}C^T K_l^T + K_l D Q D^T K_l^T,$
(51)

where $\bar{R} = E\{w_l\beta_l^T\}$, $\hat{R} = E\{\beta_l w_l^T\}$, and $\tilde{R} = E\{\beta_l\beta_l^T\}$ are given by, respectively,

$\bar{R} = \begin{cases} R B^T (A^{|p|})^T, & p \leqslant 0, \\ 0, & p > 0, \end{cases}$
(52)
$\hat{R} = \begin{cases} A^{|p|} B R, & p \leqslant 0, \\ 0, & p > 0, \end{cases}$
(53)
$\tilde{R} = \begin{cases} \sum_{i=0}^{|p|} A^{|p|-i} B R B^T (A^{|p|-i})^T, & p \leqslant 0, \\ 0, & p = 1, \\ \sum_{i=1}^{p-1} A^{i-p} B R B^T (A^{i-p})^T, & p > 1. \end{cases}$
(54)

By (51), the estimation error can thus be computed iteratively along with the estimate (46). As can be seen, $P_l$ inherently diminishes in smoothing filtering (p < 0), by $\bar{R}$ and $\hat{R}$. It rises at a higher rate in predictive filtering (p > 0), owing to the effect of $\tilde{R}$. In the case of filtering (p = 0), $P_l$ is computed by

$P_l = (I - K_l C)(A P_{l-1} A^T + B R B^T)(I - K_l C)^T + K_l D Q D^T K_l^T.$
(55)

In all of the cases, $P_n$ becomes zero if the model is deterministic and the filter order is exactly that of the system.
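A numerical sketch of the recursion (51) with the noise moments (52)-(54) follows (our naming). For p = 0, one step of (51) must coincide with the compact form (55), which makes a convenient check.

```python
import numpy as np

def noise_moments(A, B, R, p):
    """R_bar, R_hat, R_tilde of (52)-(54)."""
    K = A.shape[0]
    P = B.shape[1]
    if p <= 0:
        Ap = np.linalg.matrix_power(A, -p)          # A^{|p|}
        Rbar = R @ B.T @ Ap.T                       # (52)
        Rhat = Ap @ B @ R                           # (53)
        Rtil = sum(np.linalg.matrix_power(A, -p - i) @ B @ R @ B.T @
                   np.linalg.matrix_power(A, -p - i).T
                   for i in range(-p + 1))          # (54), p <= 0
    else:
        Rbar = np.zeros((P, K))
        Rhat = np.zeros((K, P))
        Rtil = np.zeros((K, K)) if p == 1 else \
            sum(np.linalg.matrix_power(A, i - p) @ B @ R @ B.T @
                np.linalg.matrix_power(A, i - p).T
                for i in range(1, p))               # (54), p > 1
    return Rbar, Rhat, Rtil

def mse_step(P_prev, Kl, A, B, C, D, R, Q, p):
    """One step of the MSE recursion (51)."""
    Rbar, Rhat, Rtil = noise_moments(A, B, R, p)
    T = (np.eye(A.shape[0]) - Kl @ C @ np.linalg.matrix_power(A, -p)) @ A
    return (T @ P_prev @ T.T + B @ R @ B.T
            - B @ Rbar @ C.T @ Kl.T - Kl @ C @ Rhat @ B.T
            + Kl @ C @ Rtil @ C.T @ Kl.T
            + Kl @ D @ Q @ D.T @ Kl.T)
```

The p = 0 equality below is exact: the cross terms collapse into $(I - K_l C) B R B^T (I - K_l C)^T$, reproducing (55).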

Note that $P_n$ computed in such a way exceeds the true value due to the accumulating effect caused by the noise covariances. Alternatively, $P_n$ can be well bounded with the error bound (EB) specified in [16] in the three-sigma sense via the noise power gain (NPG) as

$\beta_k^{(vg)}(N,p) = 3\sigma_k\sqrt{g_k^{(vg)}(N,p)}$
(56)

to characterize the noise standard deviation in the v-to-g filter channel via measurement of the kth state in the presence of white noise having the variance $\sigma_k^2$. The NPG coefficient $g_k^{(vg)} \triangleq g_k^{(vg)}(N,p)$ is defined here as a component of the NPG matrix $\mathcal{K}_k \triangleq \mathcal{K}_k(N,p)$,

$\mathcal{K}_k = H_k H_k^T$
(57)
$= \begin{bmatrix} g_k^{(11)} & \cdots & g_k^{(1k)} & \cdots & g_k^{(1K)} \\ \vdots & & \vdots & & \vdots \\ g_k^{(k1)} & \cdots & g_k^{(kk)} & \cdots & g_k^{(kK)} \\ \vdots & & \vdots & & \vdots \\ g_k^{(K1)} & \cdots & g_k^{(Kk)} & \cdots & g_k^{(KK)} \end{bmatrix} = A^{N-1+p}(\tilde{C}_{N-1}^T \tilde{C}_{N-1})^{-1}(A^{N-1+p})^T,$
(58)

where the thinned gain $H_k \triangleq \bar{H}(p)_k \in \mathbb{R}^{K \times N}$ is composed of every Kth column of $\bar{H}(p)$ given by (34), starting with the kth one, as

$H_k = A^{N-1+p}(\tilde{C}_{N-1}^T \tilde{C}_{N-1})^{-1}\tilde{C}_{N-1}^T$
(59)

and $\tilde{C}_i \triangleq (C_i)_k$ is the kth row of $C_i$.

To avoid the computational problem with N ≫ 1, the NPG $\mathcal{K}_k$ can be computed iteratively [12] as

$\mathcal{K}_{kj} = A[(A^{1-p})^T C^T C A^{1-p} + \mathcal{K}_{k(j-1)}^{-1}]^{-1} A^T,$
(60)

by changing the iterative variable j from j = γ ≥ K to N − 1. The initial value $\mathcal{K}_{k(\gamma-1)}$ is provided by (58) by substituting N with γ, and the true $\mathcal{K}_k$ is taken when j = N − 1.

Applications

A comparison of errors in the FIR and Kalman filters has been provided in many articles [2, 9, 10, 12, 13]. Much less attention has been paid to the trade-off between the OFIR and UFIR filter outputs. To investigate errors in the proposed p-shift OFIR and UFIR filters and thereby learn their capabilities, below we exploit a two-state model represented with (1) and (2) having $A = \begin{bmatrix} 1 & \tau \\ 0 & 1 \end{bmatrix}$, $C = [1\ 0]$, B and D identity, $x_0 = 1$, $y_0 = 0.01\ \mathrm{s}^{-1}$, τ = 1, $\sigma_x = 0.1$, and $\sigma_y = 10^{-3}/\mathrm{s}$. The covariances (7) and (8) of the zero-mean noise components $w_n$ and $v_n$ are allowed to be $R = \begin{bmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_y^2 \end{bmatrix}$ and $Q = [\sigma_v^2]$, respectively.

Measurement has been provided in the presence of the zero-mean noise $v_n$ uniformly distributed from −2 to 2 with the standard deviation $\sigma_v = 2/\sqrt{3}$.

Both the OFIR algorithm and the UFIR one (Table 1) have been tested, and the filtering errors have been evaluated in the first state at a current point n for different p and fixed N. Errors were bounded with the EB $\beta(p) \triangleq \beta_1^{(11)}(N,p)$ calculated by (56).

Errors in predictive FIR filtering

Supposing that the estimate is required at n = 50 and assuming that measurement may not be available at the nearest past points (as sometimes occurs in wireless systems), we let 0 ≤ p ≤ 30 and find the predictive filtering estimate for N = 10 and N = 20. Figure 2 sketches the errors in the OFIR and UFIR estimates as functions of p.

Figure 2

Errors in OFIR (circle) and UFIR (cross) predictive filtering estimates at n = 50 as functions of p > 0. (a) N = 10 and (b) N = 20. EBs are dashed.

Here, we also show the bounds ±β(p) for each N as functions of p. Note that for the model in question, the EB can also be calculated via the NPG $g_1(N,p)$ found in [5] as

$\beta_1^{(11)}(N,p) = 3\sigma_v\sqrt{g_1(N,p)}$
(61)
$= 3\sigma_v\left[\frac{2(2N-1)(N-1) + 12p(N-1+p)}{N(N^2-1)}\right]^{0.5}.$
(62)

Observing Figure 2, one infers that the estimates are closely related and that the errors range well within the EBs, which stretch with growing p. Inherently, the prediction error is reduced by increasing N, as can be seen by comparing Figure 2a,b.
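The EB (62) is easy to tabulate. The short sketch below (function name ours) also exposes two properties consistent with Figures 2 and 3: the bound grows with p > 0 and is symmetric about the horizon center p = −(N − 1)/2, where it is minimal.

```python
import numpy as np

def eb(N, p, sigma_v=2 / np.sqrt(3)):
    """Error bound (62) for the two-state polynomial model."""
    g = (2 * (2*N - 1) * (N - 1) + 12 * p * (N - 1 + p)) / (N * (N**2 - 1))
    return 3 * sigma_v * np.sqrt(g)
```

At p = 0, (62) reduces to the familiar ramp-filter NPG $g_1 = 2(2N-1)/(N(N+1))$, since $2(2N-1)(N-1)/(N(N^2-1)) = 2(2N-1)/(N(N+1))$.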

Errors in smoothing FIR filtering

In the second experiment, we change p from zero to −N + 1 and evaluate errors in the smoothing filters at n = 30. Figure 3 illustrates the results for N = 10 (Figure 3a) and N = 20 (Figure 3b).

Figure 3

Errors in OFIR (circle) and UFIR (cross) smoothing filtering estimates at n = 50 as functions of p < 0. (a) N = 10 and (b) N = 20. EBs are dashed.

The first conclusion that can be made is that errors reach a minimum at the center of the averaging horizon, with p = −N/2. This should not be surprising, since the ramp impulse response associated with linear models reduces at this point to the uniform one associated with simple averaging [11], producing the noise minimum among all other filters [6]. It can also be seen that the difference between the optimal and unbiased estimates exists, but it is not large in view of the scale in Figures 2 and 3. And we notice again that errors in the smoothing filter range well within the gap between the EBs.

Conclusion

In this article, the p-shift OFIR and UFIR algorithms with the properties of predictive filtering (p > 0), filtering (p = 0), and smoothing filtering (p < 0) have been addressed for linear discrete time-invariant state-space models. The OFIR filter is shown to self-determine the mean square initial state function by solving the DARE. The UFIR filter, represented both in the batch and iterative Kalman-like forms, ignores covariances and initial errors, unlike the Kalman filter. As an example of application, we have exploited the two-state polynomial model and investigated errors in the OFIR and UFIR filters. Based upon this example, we have confirmed the statement made earlier for FIR filters, predictors, and smoothers: the UFIR estimate converges to the OFIR one with increasing N, and the estimation errors are well bounded with the EB. That means that exact optimality may be redundant when N ≫ 1, and a fairly good suboptimal estimate can still be provided with UFIR filters at a much lower cost.

The importance of the proposed OFIR and UFIR filtering algorithms resides in the fact that they are both general for linear discrete time-invariant state-space models. The algorithms virtually generalize the well-known Savitzky-Golay solution for smoothing and predictive filtering in state space. However, unlike the latter, both OFIR and UFIR filters have convolution-based forms more familiar to electronics engineers. Moreover, the convolution can efficiently be computed in the frequency domain, which is another benefit. Finally, engineers should certainly appreciate the iterative Kalman-like algorithm. Paying attention to these advantages, our current investigations are focused on several applied problems associated with signal and image processing.

Endnotes

^a $\tilde{x}_{n|k}$ is the filtering estimate at n via measurements from the past up to k; $\hat{x}_{n|k}$ is optimal and $\bar{x}_{n|k}$ unbiased.

^b The case of filtering out all of the noise components is considered in [12].

Appendix

Mean square initial state function

Consider the estimate provided by (20) with the gain (32),

$\hat{x}_{n|n-p} = [A^{N-1+p}(C_{N-1}^T C_{N-1})^{-1} C_{N-1}^T Z_{m-p} + \bar{Z}_w](Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1} Y_{n-p,m-p}.$
(A.1)

Following [12], find the smoothing estimate at the initial point n − N + 1 of the averaging interval. By letting p = −(n − m) = −(N − 1), we arrive at

$\hat{x}_{n|n+N-1} = [(C_{N-1}^T C_{N-1})^{-1} C_{N-1}^T Z_n + \bar{Z}_w](Z_n + \tilde{Z}_w + \tilde{Z}_v)^{-1} Y_{n+N-1,n}.$
(A.2)

Substitute n with m − p and find the estimate at m − p

$\hat{x}_{m-p|m+N-1-p} = [(C_{N-1}^T C_{N-1})^{-1} C_{N-1}^T Z_{m-p} + \bar{Z}_w](Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1} Y_{m+N-1-p,m-p} = (R_{m-p} C_{N-1}^T + \bar{Z}_w)(Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1} Y_{m+N-1-p,m-p}.$
(A.3)

Now, substitute the unknown $x_{m-p}$ with its optimal estimate $\hat{x}_{m-p|m+N-1-p}$ and recall that $x_{m-p}$ is supposed to be known exactly. This allows providing the following transformations:

$R_{m-p} = E\{x_{m-p} x_{m-p}^T\} = x_{m-p} x_{m-p}^T$
(A.4)
$\cong E\{\hat{x}_{m-p|m+N-1-p}\,\hat{x}_{m-p|m+N-1-p}^T\}$
(A.5)
$\cong \hat{x}_{m-p|m+N-1-p}\,\hat{x}_{m-p|m+N-1-p}^T.$
(A.6)

By employing (A.6) and taking into account that $R_{m-p}$, $\bar{Z}_w$, $Z_{m-p}$, $\tilde{Z}_w$, and $\tilde{Z}_v$ are all symmetric, transform (A.6) to

$R_{m-p} \cong \hat{x}_{m-p|m+N-1-p}\,\hat{x}_{m-p|m+N-1-p}^T$
(A.7)
$= (R_{m-p} C_{N-1}^T + \bar{Z}_w)(Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1} Y_{m+N-1-p,m-p} Y_{m+N-1-p,m-p}^T (Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1}(C_{N-1} R_{m-p} + \bar{Z}_w).$
(A.8)

A supposition that $x_{m-p}$ is deterministic makes both $R_{m-p}$ and $Z_{m-p}$ singular. However, if we allow an equality in (A.7) and solve (A.8) for $Z_{m-p}$, these matrices will be found approximately in the minimum MSE sense and thus no longer be singular. Next, observe that the second term in the first parentheses of (A.8) represents the noise variance on the averaging interval and is commonly much smaller than the first term representing the gained initial state. Then neglect $\bar{Z}_w$, accept an equality in (A.7), multiply (A.8) with $C_{n-m}$ and $C_{n-m}^T$ from the left-hand and right-hand sides, respectively, invoke (26), remove the nonsingular $Z_{m-p}$ from both sides, and substitute m + N − 1 with n. That leads to

$I = (Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1} Y_{n-p,m-p} Y_{n-p,m-p}^T (Z_{m-p} + \tilde{Z}_w + \tilde{Z}_v)^{-1} Z_{m-p}.$
(A.9)

By rearranging the terms, (A.9) becomes the DARE (36), whose solution with respect to $Z_{m-p}$ can be found following [15].

References

1. Camacho EF, Bordons C: Model Predictive Control. Springer-Verlag, Berlin, 2004.

2. Kwon WH, Han S: Receding Horizon Control: Model Predictive Control for State Models. Springer, Berlin, 2005.

3. Heinonen P, Neuvo Y: FIR-median hybrid filters with predictive FIR structures. IEEE Trans. Acoust. Speech Signal Process. 1988, 36(6):892-899.

4. Campbell TG, Neuvo Y: Predictive FIR filters with low computational complexity. IEEE Trans. Circ. Syst. 1991, 38(9):1067-1071.

5. Shmaliy YS: An unbiased p-step predictive FIR filter for a class of noise-free discrete-time models with independently observed states. Signal Image Video Process. 2009, 3(2):127-135.

6. Savitzky A, Golay MJE: Smoothing and differentiation of data by simplified least squares procedures. Anal. Chem. 1964, 36(8):1627-1639.

7. Zhou X, Wang X: FIR-median hybrid filters with polynomial fitting. Digital Signal Process. 2004, 39(2):112-124.

8. Yuan J-T, Stuller JA: Order-recursive FIR smoothers. IEEE Trans. Signal Process. 1994, 42(5):1242-1246.

9. Kwon BK, Han S, Kwon OK, Kwon WH: Minimum variance FIR smoothers for discrete-time state space models. IEEE Signal Process. Lett. 2007, 14(8):557-560.

10. Han S, Kwon WH: L2-E FIR smoothers for deterministic discrete-time state-space signal models. IEEE Trans. Autom. Control 2007, 52(5):927-932.

11. Shmaliy YS, Morales-Mendoza L: FIR smoothing of discrete-time polynomial signals in state space. IEEE Trans. Signal Process. 2010, 58(5):2544-2555.

12. Shmaliy YS: Linear optimal FIR estimation of discrete time-invariant state-space models. IEEE Trans. Signal Process. 2010, 58(6):3086-3096.

13. Shmaliy YS: An iterative Kalman-like algorithm ignoring noise and initial conditions. IEEE Trans. Signal Process. 2011, 59(6):2465-2473.

14. Stark H, Woods JW: Probability, Random Processes, and Estimation Theory for Engineers. 2nd edn. Prentice Hall, Upper Saddle River, 1994.

15. Lancaster P, Rodman L: Algebraic Riccati Equations. Oxford University Press, New York, 1995.

16. Shmaliy YS, Ibarra-Manzano O: Noise power gain for discrete-time FIR estimators. IEEE Signal Process. Lett. 2011, 18(4):207-210.


Author information

Corresponding author

Correspondence to Yuriy S Shmaliy.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Shmaliy, Y.S., Ibarra-Manzano, O. Optimal and unbiased FIR filtering in discrete time state space with smoothing and predictive properties. EURASIP J. Adv. Signal Process. 2012, 163 (2012). https://doi.org/10.1186/1687-6180-2012-163


Keywords

  • State estimation
  • Optimal FIR filter
  • Unbiased FIR filter
  • Kalman-like algorithm