
Multiple descriptions for packetized predictive control

Abstract

In this paper, we propose to use multiple descriptions (MDs) to achieve a high degree of robustness towards random packet delays and erasures in networked control systems. In particular, we consider the scenario where a data-rate limited channel is located between the controller and the plant input. This forward channel also introduces random delays and dropouts. The feedback channel from the plant output to the controller is assumed noiseless. We show how to design MDs for packetized predictive control (PPC) in order to enhance the robustness. In the proposed scheme, a quantized control vector with future tentative control signals is transmitted to the plant at each discrete time instant. This control vector is then transformed into M redundant descriptions (packets) such that when receiving any 1≤J≤M packets, the current control signal as well as J−1 future control signals can be reliably reconstructed at the plant side. For the particular case of LTI plant models and i.i.d. channels, we show that the overall system forms a Markov jump linear system. We provide conditions for mean square stability and derive upper bounds on the operational bit rate of the quantizer to guarantee a desired performance level. Simulations reveal that a significant gain over conventional PPC can be achieved when combining PPC with suitably designed MDs.

1 Introduction

In networked control systems (NCSs), the controller communicates with the plant via a general purpose communication network [1, 2]. When compared to using dedicated hardwired control networks, the use of general purpose and possibly wireless communication technology brings significant benefits in terms of efficiency, interoperability, deployment costs, etc. However, the use of practical communication technology also leads to new challenges, since the network needs to be taken into account in the overall design, see also [17].

In this paper, we will focus on the existence of a digital network between the controller and the plant input. This network contains either a single channel that introduces i.i.d. packet delays and erasures or multiple independent channels with i.i.d. packet delays and erasures. The channel between the plant output and the controller is considered ideal, i.e., noiseless and instantaneous. For example, this could be a situation where the controller and plant communicate over wireless channels. The controller could be battery driven and therefore have limited transmission power. On the other hand, the plant might not have a limitation on the transmission power. In this case, the reverse channel from the plant to the controller has a significantly greater SNR than the forward channel between the controller and the plant. There are many other practical situations with wireless controller-actuator links but direct sensor-controller connections, e.g., groups of agents/vehicles/robots/drones, whose positions/formation are sensed via a system comprising a camera and attached controller. Activation commands are then sent wirelessly to the agents.

The main contributions of this work are the theoretical analysis and practical design of the quantized control signals. In particular, we propose to combine a recent robust control strategy known as (quantized) packetized predictive control (PPC) [8–11] with a joint source-channel coding strategy based on multiple description (MD) coding [12, 13]. We provide computable upper bounds on the operational bit rate required for coding the quantized control signals (descriptions) and provide a practical design based on our theoretical analysis. The simulation study shows that the combination of MDs and PPC provides a significant improvement over PPC in the case of large packet loss ratios.

In quantized PPC, a control vector with the current and N−1 future predicted plant inputs is constructed at the controller side to compensate for random delays and packet dropouts in the channel. Thus, in the case of packet erasures (and if not too many consecutive dropouts occur), the buffer will feed the plant with the appropriate future predicted control value [8]. The key principle of MDs is to encode a source signal into a number of descriptions (packets) that are transmitted over separate channels. Each description is able to approximate the source signal to within a prescribed quality. Moreover, if several descriptions are received, they can be combined to further improve the reconstruction quality. Thus, in the case of packet erasures, it is possible to achieve a graceful degradation of the reconstruction quality [13].

The design of optimal quantized control strategies subject to data rate limitations is a complicated problem that lies at the intersection of signal processing and control. In particular, if the quantizers are designed using conventional open-loop source-coding strategies, it cannot be guaranteed that the overall system will be stable when used in closed-loop control. Indeed, the resulting data rate could exceed the bandwidth of the digital channel, the data rate could be too low to capture the plant uncertainty and thereby not guarantee stability, or the non-linear effects due to quantization could have a negative impact on the overall stability when fed back into the system [8, 9, 14, 15].

The combination of MDs and PPC has, to the best of the authors’ knowledge, not been considered before (except in the conference contributions of the authors [16–18]). In [16], MDs were used for power control in wireless sensor networks. The quantizers were designed under high-resolution assumptions, and no stability assessment was provided. In [17, 18], the preliminary ideas for the current work (without analysis and proofs) were presented. MDs for state estimation were considered in [19, 20] under high-resolution quantization assumptions. The design of lattice quantizers for PPC without MDs was treated in [16, 21] for the cases of entropy-constrained and resolution-constrained quantization, respectively.

In this work, we will focus on LTI plant models, which are (possibly) open-loop unstable. Thus, it is necessary to provide quantized control signals to the plant in a reliable way to guarantee stability in the presence of data rate limitations, random packet delays, and erasures. Our key idea is to design and use MDs in a novel way that differs from their traditional use. Traditionally, when the received descriptions are combined at the decoder, the approximation of a given source signal is improved. In the proposed work, on the other hand, combining the received descriptions at the decoder does not improve existing control signals; instead, new future control signals are recovered.

There exists a vast amount of literature on Markov jump linear systems (MJLSs) with delays, cf. [22–25]. In the present work, we show that the overall system with delays, erasures, quantization effects, and multiple descriptions can be cast as an MJLS, which makes it possible to use general stability results from the MJLS literature [26, 27].

The paper is organized as follows. Section 2 contains background information on quantized PPC. Section 3 contains the system analysis of a theoretical joint PPC and MD scheme. Section 4 presents the design of the combined practical PPC and MD scheme. Section 5 provides a simulation study of the proposed scheme. Section 6 contains the conclusions. Proofs of lemmas and theorems are deferred to the appendices.

1.1 Notation

Let \(S^{\downarrow}\) be the down-shift-by-one matrix operator, which replaces the jth row of an N×M matrix by its (j−1)th row for \(j=N,\dotsc, 2\). Similarly, define \(S^{\uparrow}\) as the up-shift-by-one matrix operator. Let e i denote the unit vector aligned with the ith axis of the Cartesian coordinate system, e.g., \(e_{2}=[0,1,0,\dotsc,0]^{T}\), where the dimension of e i will be clear from the context. Let \(\boldsymbol {1}_{i}\in \mathbb {R}^{i}\) be the all-ones vector of dimension i. Let γ i be the matrix operator that takes the ith diagonal of an N×N matrix, where i=1 is the main diagonal and i>1 are diagonals above the main diagonal. Thus, \(\gamma _{i}(A) \in \mathbb {R}^{N-i+1}\) if \(A \in \mathbb {R}^{N\times N}\). We will use σ r (A) to denote the spectral radius of the matrix A, and A⊗B denotes the usual Kronecker product between the matrices A and B. The squared and weighted l 2-norm of a vector, say x, is written as \(\|x \|_{P}^{2} = x^{T} P x\), where \(P\succeq 0\), i.e., P is a positive semidefinite matrix.
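For illustration, the following minimal NumPy sketch (ours, not part of the paper) implements these operators; zeroing the vacated row is an assumption consistent with how \(S^{\downarrow}\) is used in (20).

```python
import numpy as np

def shift_down(X):
    # S_down: row j <- row (j-1) for j = N, ..., 2; vacated first row set to zero (assumed)
    Y = np.zeros_like(X)
    Y[1:, :] = X[:-1, :]
    return Y

def shift_up(X):
    # S_up: row j <- row (j+1); vacated last row set to zero (assumed)
    Y = np.zeros_like(X)
    Y[:-1, :] = X[1:, :]
    return Y

def gamma(i, A):
    # gamma_i(A): the i-th diagonal of A; i = 1 is the main diagonal
    return np.diag(A, k=i - 1)

def spectral_radius(A):
    # sigma_r(A): largest absolute eigenvalue
    return np.max(np.abs(np.linalg.eigvals(A)))
```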

2 Quantized packetized control over erasure channels

In this section, we provide a summary of existing results on quantized PPC and relate them to the present situation. The system considered is shown in Fig. 1. For a more detailed presentation of quantized PPC, see [11].

Fig. 1 System setup. The PPC communicates with the plant via a data-rate limited (digital) erasure channel with delays

2.1 System model

We consider the following discrete-time stochastic linear time invariant (LTI) possibly unstable dynamical plant with state \(x_{t}\in \mathbb {R}^{z}\), z≥1 and scalar input \(u_{t}\in \mathbb {R}\):

$$ x_{t+1} = {Ax}_{t} + B_{1}u_{t} + B_{2}w_{t},\quad t\in \mathbb{N}. $$
((1))

In (1), \(w_{t}\in \mathbb {R}^{z'}\), z′≥1, is an unmeasured disturbance, modeled as an arbitrarily distributed (and with possibly unbounded support) zero-mean stochastic process with bounded covariance matrix Σ w , and \(B_{1}\in \mathbb {R}^{z}\) and \(B_{2}\in \mathbb {R}^{z\times z'}\). We do not assume that \(A\in \mathbb {R}^{z\times z}\) is stable; however, we will assume that the pair (A,B 1) is stabilizable. The initial state x 0 is arbitrarily distributed with bounded variance.

2.2 Cost function

In MPC, at each time instant t and for a given plant state x t , one often uses a linear quadratic cost function of the form [28]:

$$ V(\bar{u}'\!,x_{t}) \triangleq \|x'_{N}\|^{2}_{P} +\sum_{\ell=0}^{N-1} \big(\|x'_{\ell}\|^{2}_{Q} + \lambda (u'_{\ell})^{2}\big), $$
((2))

where N≥1 is the horizon length, and the design variables \(P\succeq 0\), \(Q\succeq 0\), and λ>0 allow one to trade off control performance versus control effort. The variables \(x'_{\ell}\) and \(\bar {u}'\) denote tentative variables and are defined below. The final state weighting \( \|x'_{N}\|^{2}_{P}\) in (2) aids in stabilizing the feedback loop by approximating the effect of the infinite-horizon behavior [28]. For example, one may choose P as the unique positive semidefinite solution to the discrete algebraic Riccati equation:

$$ P = A^{T}PA + Q - A^{T}{PB}_{1} \left(\lambda+{B_{1}^{T}}{PB}_{1}\right)^{-1}{B_{1}^{T}}PA, $$
((3))

which exists if the system (1) is stabilizable [28].
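As a sketch of how P may be computed in practice (our illustration; SciPy's solver convention matches (3) with scalar input weight R = λ, and the plant matrix below is a placeholder):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
z, lam = 5, 1 / 20
A = rng.standard_normal((z, z))     # placeholder plant matrix (stand-in for the paper's A)
B1 = np.ones((z, 1))                # B_1 = 1_z, as in the simulation study
Q = np.eye(z)

# Solves P = A^T P A + Q - A^T P B1 (lam + B1^T P B1)^{-1} B1^T P A, i.e., (3)
P = solve_discrete_are(A, B1, Q, np.array([[lam]]))
```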

The cost function in (2) examines a prediction of the plant model over a finite horizon of length N. It is common to assume that the predicted state trajectories at time t are independent of the buffer contents at the decoder (i.e., they are independent of what has been received at the plant input side), network effects, and the external disturbances w t , and are generated by

$$ x'_{\ell+1} = Ax'_{\ell} +B_{1} u'_{\ell}, $$
((4))

\(x^{\prime }_{0}=x_{t},\) while the entries in \( \bar {u}\,'=\big [ u'_{0}, \dotsc, u'_{N-1}\big ]^{T}\) represent the associated predicted plant inputs. Thus, the current control vector

$$\bar{u}_{t} =\,[u_{t}(1), \dotsc, u_{t}(N)]^{T} $$

contains the control signal u t (1) for the current time instant t as well as N−1 future predictive control signals for time up to t+N−1.

One may include the effect of the channel delays in the cost function (2) by, for example, formulating the individual stage costs in terms of their expected stage costs, i.e., weighting by the probabilities of control signals being delayed:

$$ \mathbb{E}\sum_{\ell=1}^{N-1} \big(\|x'_{\ell}\|^{2}_{Q} + \lambda (u'_{\ell})^{2}\big) = \sum_{\ell=1}^{N-1} \big(\|x'_{\ell}\|^{2}_{Q} + \lambda (u'_{\ell})^{2}\big) p_{\ell}, $$
((5))

where \(p_{\ell}\) denotes the probability of using the control signal \(u'_{\ell}\). Moreover, in this work, we will also model the effect of the quantizer directly in the design of the control signals \(u'_{\ell}\), see Section 2.4 for details.

Following the ideas underlying PPCs, see, e.g., [29], at each time instant t, and for current state x t , the controller sends the entire optimizing sequence, \(\bar {u}_{t}\), to the actuator node. Depending upon future packet dropout scenarios, a subsequence of \(\bar {u}_{t}\) will be applied at the plant input, or not. Following the receding horizon paradigm, at the next time instant, x t+1 is used to carry out another optimization, yielding \(\bar {u}_{t+1}\), etc.

2.3 Network effects

As illustrated in Fig. 1, we shall assume that the backward channel of the network is noiseless and instantaneous, whereas the forward channel is a packet erasure channel, where packets can be delayed and also be received out-of-order. In fact, we allow the delay to be unbounded, which means that packets can be lost. In our setup, if a transmitted packet has not been received within N consecutive time slots, it is considered lost. In MD coding, it is common to assume the availability of either M separate and independent channels or a single (compound) channel where the M packets can be sent simultaneously and yet be subject to independent erasures and delays [13]. Formally, we define \({\tau _{t}^{i}} \in \mathbb {N}_{0}\cup \{\infty\} \) to be the delay experienced by the ith packet that is constructed at time t. Thus, \({\tau _{t}^{i}}\) is a property of the ith channel. We will assume that the delays \(\{{\tau _{t}^{i}}\}\) experienced by the different packets are independent and identically distributed (i.i.d.). With this notation, we model transmission effects via the discrete processes \(\left \{d_{t,t'}^{i}\right \}_{t'=t}^{\infty }\), where 0≤t≤t′ and \(i =1, \dotsc, M\), defined via:

$$d_{t,t'}^{i}\triangleq\left\{ \begin{array}{ll} 1, &\text{if}\; {\tau_{t}^{i}} \leq t' - t,\\ 0, &\text{else,}\\ \end{array}\right. $$

where \({\tau _{t}^{i}} \leq t' - t\) implies that the ith packet constructed and transmitted at time t has experienced a delay of no more than t′−t time instances. We note that even though \({\tau _{t}^{i}}, \forall t,\) are mutually independent, the processes \(d_{t,t'}^{i}\) are generally not i.i.d., since if a packet constructed at time t experiences a delay of \({\tau _{t}^{i}}\), then \(d_{t,t'}^{i}=1\) for all \(t' \geq t + {\tau _{t}^{i}}\). However, for t′=t, the outcomes \(d_{t,t}^{i}, i=1,\dotsc, M, t\geq 0\), are assumed mutually independent. We will also assume that the packet reception at time t′ is conditionally independent of the packet receptions prior to time t′−N+1, given the knowledge of the packet receptions between times t′−1 and t′−N+1. Specifically, for t′ ≥ t+N,

$$\begin{array}{*{20}l} &\text{Prob}\left(d^{i}_{t,t'} = 1 | d^{i}_{t,t'-1}, d^{i}_{t,t'-2}, \dotsc, d^{i}_{t,t}\right) \\ &\quad = \text{Prob}\left(d^{i}_{t,t'} = 1 | d^{i}_{t,t'-1}, d^{i}_{t,t'-2}, \dotsc, d^{i}_{t,t' - N+1}\right). \end{array} $$

Finally, we assume that the channel statistics are stationary so that \(\text {Prob}\left (d^{i}_{t,t'} = 1 | d^{i}_{t,t'-1}, d^{i}_{t,t'-2}, \dotsc, d^{i}_{t,t' - N+1}\right)\) does not depend upon t. We will make explicit use of the above stationarity and Markov assumptions in Lemma 3.2.
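As a toy illustration (ours) of the delay model, the sketch below draws i.i.d. delays and evaluates the indicators \(d_{t,t'}^{i}\); the geometric distribution is only an example choice and is not mandated by the model.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T, N = 3, 10, 3
q = 0.3                          # per-slot probability of further delay (assumed example value)

# tau[t, i]: delay of the i-th packet constructed at time t; values in {0, 1, 2, ...}
tau = rng.geometric(1 - q, size=(T, M)) - 1

def d(t, t_prime, i):
    # d_{t,t'}^i = 1 iff packet i sent at time t has arrived by time t'
    return int(tau[t, i] <= t_prime - t)

# Packets not received within N consecutive time slots are considered lost
lost = tau >= N
```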

2.4 Quantization constraints

We consider a bit-rate limited digital network between the controller output and the plant input; all data to be transmitted therefore needs to be quantized. This introduces a quantization constraint into the problem of minimizing \(V(\bar {u}\,'\!,x_{t})\).

Let \(\overline {Q} \triangleq \text {diag}(Q,\dots,Q,P) \in \mathbb {R}^{zN\times zN}\) and define:

$$\begin{array}{*{20}l} \Phi&\triangleq \left[\begin{array}{llll} B_{1}&0&\dots&0\\ {AB}_{1}&B_{1}&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ A^{N-1}B_{1}&A^{N-2}B_{1}&\dots&B_{1} \end{array}\right] \in \mathbb{R}^{zN\times N}, \end{array} $$
((6))
$$\begin{array}{*{20}l} \Upsilon &\triangleq \left[\begin{array}{l} A\\A^{2}\\\vdots\\A^{N} \end{array}\right] \in \mathbb{R}^{zN\times z}, \end{array} $$
((7))
$$\begin{array}{*{20}l} F &\triangleq \Upsilon^{T} \overline{Q} \Phi \in \mathbb{R}^{z\times N}, \end{array} $$
((8))
$$\begin{array}{*{20}l} \Gamma &\triangleq - \Psi^{-T} F^{T} \in \mathbb{R}^{N\times z}, \quad\\\notag &~\qquad\Psi^{T}\Psi = \Phi^{T}\overline{Q}\Phi+{\lambda} I \in\mathbb{R}^{N\times N}, \end{array} $$
((9))
$$\begin{array}{*{20}l} \xi_{t} &\triangleq \Gamma x_{t} \in \mathbb{R}^{N}. \end{array} $$
((10))

Then using the above and (4), the cost function (2) can be rewritten as

$$ V(\bar{u}\,'\!,x_{t}) = {x_{t}^{T}} \Upsilon^{T} \overline{Q} \Upsilon x_{t} + \bar{u}'^{T} \Psi^{T}\Psi \bar{u}' + 2{x_{t}^{T}}F\bar{u}', $$
((11))

which has the unique (unquantized) minimizer \(\bar {u}^{*}\) given by

$$\begin{array}{*{20}l} \bar{u}^{*} &= -(\Psi^{T}\Psi)^{-1}F^{T} x_{t} \end{array} $$
((12))
$$\begin{array}{*{20}l} &= \Psi^{-1}\Gamma x_{t} = \Psi^{-1}\xi_{t} \in \mathbb{R}^{N}. \end{array} $$
((13))

We note that Ψ is fixed, and we may at this point either directly quantize \(\bar {u}^{*}\) or instead quantize ξ t and then apply the mapping Ψ −1 in order to obtain the quantized control vector. Since Ψ is invertible, and we are transmitting the entire quantized control vector, the resulting coding rate is not affected by this operation [30].
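To make the construction concrete, the following NumPy sketch (ours, not the authors' code) assembles the matrices of (6)–(13); taking Ψ as the Cholesky factor is one valid choice satisfying \(\Psi^{T}\Psi = \Phi^{T}\overline{Q}\Phi + \lambda I\).

```python
import numpy as np

def ppc_matrices(A, B1, Q, P, lam, N):
    z = A.shape[0]
    Qbar = np.kron(np.eye(N), Q)
    Qbar[-z:, -z:] = P                         # Qbar = diag(Q, ..., Q, P)
    Phi = np.zeros((z * N, N))                 # (6)
    for r in range(N):
        for c in range(r + 1):
            Phi[r * z:(r + 1) * z, c] = (np.linalg.matrix_power(A, r - c) @ B1).ravel()
    Upsilon = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])  # (7)
    F = Upsilon.T @ Qbar @ Phi                 # (8)
    Psi = np.linalg.cholesky(Phi.T @ Qbar @ Phi + lam * np.eye(N)).T  # Psi^T Psi, (9)
    Gamma = -np.linalg.solve(Psi.T, F.T)       # Gamma = -Psi^{-T} F^T, (9)
    return Phi, Upsilon, F, Psi, Gamma

# Usage, cf. (10) and (13): xi_t = Gamma @ x_t; u_star = np.linalg.solve(Psi, xi_t)
```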

When using entropy-constrained (subtractively) dithered (lattice) quantization (ECDQ), a dither vector ζ t is added to the input prior to quantization and then subtracted again at the decoder to obtain the reconstruction [31]. Specifically, let \(\mathcal {Q}_{\Lambda }\) denote an ECDQ with underlying lattice Λ. Then the discrete output ξ t′ of the ECDQ is given by \(\xi _{t}' = \mathcal {Q}_{\Lambda }(\xi _{t} + \zeta _{t})\). Furthermore, the reconstruction \(\hat {\xi }_{t}\) at the decoder is then obtained by subtracting the dither, i.e., by forming \(\hat {\xi }_{t} = \xi _{t}' - \zeta _{t}\). Interestingly, this quantization operation may be exactly modeled by an additive noise channel, i.e., we have \(\hat {\xi }_{t} = \xi _{t} + n_{t}\), where the noise n t is zero-mean with variance \({\sigma _{n}^{2}}\) and independent of ξ t , see [31] for details. With this, the quantized (and reconstructed) control variable \(\vec {u}_{t}\) can be written as

$$\begin{array}{*{20}l} \vec{u}_{t} = \Psi^{-1}(n_{t} + \xi_{t}), \end{array} $$
((14))

where n t and ξ t are mutually independent and ξ t =Γ x t . We note that \(\vec {u}_{t}\) is the quantized (and reconstructed) control signal, which has been found by using an ECDQ on ξ t . Thus, \(\vec {u}_{t}\) is a continuous variable whereas \(\tilde {u}_{t} = \Psi ^{-1}\xi _{t}'\) is the corresponding discrete valued variable, which is entropy coded and thereby converted into a bit-stream (to be transmitted over the network), see Fig. 1. Throughout this work, we will use u t (i) to refer to the ith element of the vector \(\vec {u}_{t}\).
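As an illustration of the ECDQ model described above, here is a minimal scalar-lattice sketch (ours), assuming the dither is shared between encoder and decoder, e.g., via a synchronized seed:

```python
import numpy as np

def ecdq_encode(xi, dither, step):
    # Quantize xi + dither onto the lattice step*Z^N; this is the discrete output xi'
    return step * np.round((xi + dither) / step)

def ecdq_decode(xi_prime, dither):
    # Subtract the dither: xi_hat = xi + n, with n uniform, zero mean, variance step^2/12
    return xi_prime - dither

rng = np.random.default_rng(1)          # stands in for a dither seed shared with the decoder
step = 25.0
xi = 50.0 * rng.standard_normal(3)
dither = rng.uniform(-step / 2, step / 2, size=xi.shape)
xi_hat = ecdq_decode(ecdq_encode(xi, dither, step), dither)
```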

2.5 MD coding for PPC

We design the MDs by explicitly exploiting the layered construction of the control signals. In particular, we first generate a quantized control vector based on the principles of PPC. This vector contains the current control signal and N−1 future control signals. Then, we construct M descriptions based on this control vector. The descriptions are constructed so that the current control signal and J−1 future control signals can be obtained by combining any subset of \(J\in \{1, \dotsc, M\}\) descriptions. Thus, the more packets that are received at the plant, the more future plant predictions become available. Note that on reception of at least one packet out of the M packets, the current quantized control signal can be completely recovered at the plant input side. When receiving and combining more descriptions, the quality of this control signal is not improved. Instead new control signals become available. With this approach, we thus avoid the issue of having to guarantee stability subject to a probabilistic and time-varying accuracy of the control signals. Instead, we can use ideas from quantized PPC, when assessing the stability. A detailed design of the MDs is provided in Section 4.

3 Theoretical analysis of the PPC-MDC scheme

3.1 Markov jump linear system

Let \(\bar {x}_{t} \in \mathbb {R}^{zN\times 1}\) be the N−1 past and the present system state vectors, i.e.,

$$ \bar{x}_{t} \triangleq\, \left[{x_{t}^{T}},\dotsc, x_{t+1-N}^{T}\right]^{T}, $$
((15))

where x t is given by (1), and let \(\bar {n}_{t}\) be the N−1 past and the present quantization noise vectors, i.e.,

$$ \bar{n}_{t}=\,\left[{n_{t}^{T}},\dotsc, n_{t+1-N}^{T}\right]^{T} \in \mathbb{R}^{N^{2}\times 1}, $$
((16))

where n t is introduced in (14). Moreover, let \(\Xi _{t}\in \mathbb {R}^{N(z+1)\times 1}\) be the augmented state variable given by

$$ \Xi_{t} \triangleq \left[\begin{array}{l} \bar{x}_{t} \\ \bar{f}_{t-1} \end{array}\!\!\right], $$
((17))

where \(\bar {f}_{t}=[f_{t}(1),\dotsc, f_{t}(N)]^{T}\in \mathbb {R}^{N\times 1}\) represents the buffer with the control signals to be applied by the actuator at the plant input side. This buffer holds the present and the N−1 tentative future control values. In particular, f t (1) is the control value to be applied at current time t, and f t (i) is to be applied at time t+i−1. In addition, there is also a buffer \(\bar {f}'_{t}\) at the plant side, which holds all received packets that are no older than tN+1 time instances.

Let \(\Delta _{t}\in \mathbb {R}^{N\times N}\) be an indicator matrix with binary elements {0,1} indicating the complete buffer contents of \(\bar {f}'_{t}\) at time t. In particular, if Δ t has a “1” at entry (i,j), it shows that at least j packets from time t−i+1 have been received and the buffer therefore contains at least \(u_{t-i+1}(1), u_{t-i+1}(2), \dotsc, u_{t-i+1}(j)\). If, in addition, entry (i,j+1)=1, it further means that the buffer also contains \(u_{t-i+1}(j+1)\). To better illustrate the relationship between Δ t and the buffers \(\bar {f}'_{t}\) and \(\bar {f}_{t}\), consider the following example.

Example 3.1.

Let N=3 and assume that \(\bar {f}'_{t}\) is empty and that \(\bar {f}_{t}\) is initialized to zero. Moreover, let the three packets constructed at time t be denoted by \(s_{t}(i), i =1,\dotsc, 3\). Then at time t, assume that two packets, say \(s_{t}(1)\) and \(s_{t}(3)\), constructed at time t are received, which implies that \(u_{t}(1)\) and \(u_{t}(2)\) can be recovered. At time t+1, a single packet, say s t+1(1), from time t+1 is received. Finally, at time t+2, the third and remaining packet s t (2) from time t is received. This leads to the following sequence of variables:

$$\begin{array}{*{20}l} \bar{f}'_{t} &= \{s_{t}(1), \ s_{t}(3)\} \Rightarrow \Delta_{t} = \left[\begin{array}{lll} 1 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right] \\ &\Rightarrow \bar{f}_{t} = \left[\begin{array}{c} u_{t}(1) \\ u_{t}(2) \\ 0 \end{array}\right] \\ \bar{f}'_{t+1} &= \{ s_{t+1}(1), s_{t}(1), s_{t}(3) \} \Rightarrow \Delta_{t+1} = \left[\begin{array}{lll} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right] \\ &\Rightarrow \bar{f}_{t+1} = \left[\begin{array}{c} u_{t+1}(1) \\ 0 \\ 0 \end{array}\right] \\ \bar{f}'_{t+2} &= \{ s_{t+1}(1),s_{t}(1), s_{t}(2),s_{t}(3) \} \\ &\Rightarrow \Delta_{t+2} = \left[\begin{array}{lll} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}\right] \Rightarrow \bar{f}_{t+2} = \left[\begin{array}{c} u_{t}(3) \\ 0 \\ 0 \end{array}\right]. \end{array} $$

\(\square \)

In order to present a formal relationship between Δ t and the buffer \(\bar {f}_{t}\), we introduce U t as the upper triangular matrix containing the relevant control signals, that is

$$ U_{t}\!\! =\!\!\! \left[\begin{array}{ccccc} u_{t}(1) & u_{t}(2) & u_{t}(3) & \cdots & u_{t}(N) \\ 0 & u_{t-1}(2) & u_{t-1}(3) & \cdots & u_{t-1}(N) \\ \vdots & 0 & & \ddots & \vdots \\ 0 & 0 & \dots & 0 & u_{t-N+1}(N) \end{array}\right]. $$
((18))

The control signal to be applied at time t is given by one of the elements on the main diagonal of U t , and the control signal to be applied at time t+j is an element on the jth diagonal above the main diagonal (unless the buffer is changed in the meantime). Let

$$\gamma_{i}(U_{t}) = [u_{t}(i), u_{t-1}(i+1), \cdots, u_{t-N+i}(N)]^{T} $$

and let \(\delta _{i} \triangleq \gamma _{i}(\Delta _{t})\). Moreover, let

$$\tilde{\delta}_{i}(k) \triangleq \delta_{i}(k)\prod_{j=1}^{k-1}(1-\delta_{i}(j)), $$

where δ i (k) is the kth element of the vector δ i . Thus, for a given i, at most one element of the vector \(\tilde {\delta }_{i} = [\tilde {\delta }_{i}(1), \dotsc, \tilde {\delta }_{i}(N-i+1)]^{T}\) is 1 and the others are 0. The control signal to be applied at time t+i−1 is, thus, given by \(\tilde {\delta }_{i}^{T} \gamma _{i}(U_{t})\), which could be zero if \(\tilde {\delta }_{i}\) is the all zero vector. With this notation, it follows that

$$ \bar{f}_{t}(i) = \tilde{\delta}_{i}^{T} \gamma_{i}(U_{t}), \quad i=1,\dotsc,N. $$
((19))

To avoid updating the buffer \(\bar {f}_{t}\) with information from packets that were already received in previous time instances, it is useful to look only at the changes between Δ t and Δ t−1. Towards that end, let \(\Delta _{t}' \in \{0,1\}^{N\times N}\) be the difference indicator matrix that only indicates the packets that are received at current time t, i.e.,

$$ \Delta_{t}'= \Delta_{t} - S^{\downarrow}\Delta_{t-1}. $$
((20))
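A small sketch (ours) of how (19) and (20) can be implemented, assuming the convention that the first nonzero entry along each diagonal of Δ t determines \(\tilde{\delta}_{i}\):

```python
import numpy as np

def shift_down(X):
    Y = np.zeros_like(X)
    Y[1:, :] = X[:-1, :]
    return Y

def delta_prime(Delta_t, Delta_prev):
    # Difference indicator (20): packets received at the current time only
    return Delta_t - shift_down(Delta_prev)

def buffer_contents(Delta_t, U_t):
    # f_bar_t(i) = delta_tilde_i^T gamma_i(U_t), cf. (19): newest usable control per slot
    N = Delta_t.shape[0]
    f = np.zeros(N)
    for i in range(1, N + 1):
        delta = np.diag(Delta_t, k=i - 1)     # delta_i = gamma_i(Delta_t)
        u_diag = np.diag(U_t, k=i - 1)        # gamma_i(U_t)
        for k in range(delta.size):
            if delta[k]:                      # first 1 along the diagonal wins (delta_tilde_i)
                f[i - 1] = u_diag[k]
                break
    return f
```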

In the following, we will show that the number of distinct difference indicator matrices is finite for bounded N, and that the sequence of difference indicator matrices {Δ t′} is stationary Markov and ergodic. These properties will be helpful in the subsequent analysis.

Lemma 3.1.

The number L of distinct difference indicator matrices is upper bounded by:

$$ L \leq (N+1)\bigg(1+\frac{1}{2}N(N+1)\bigg)^{N-1} $$
((21))

with equality if N=M, i.e., if the number of packets is equal to the horizon length.

Proof.

See Appendix 1.

Lemma 3.2.

The sequence of difference indicator matrices {Δ t′} is stationary Markov and ergodic.

Proof.

See Appendix 2.

Example 3.2.

Let us briefly consider the special case without delays, i.e., where we do not allow for late packet arrivals but simply discard late packets. Let us assume that M=N, i.e., the number of packets equals the horizon length. In this case, the difference indicator matrices Δ t′ take the form of the all-zero matrix except for the first row, which has J t consecutive ones starting at the beginning of the row. Here J t denotes the number of packets received at the current time (excluding any late packets). Thus, the number of distinct difference indicator matrices reduces to L=M+1. Let J t−1 denote the number of packets received in the previous time slot. Then the transition probability \(p_{J_{t}|J_{t-1}}\), i.e., the probability of receiving J t packets conditioned upon receiving J t−1 packets in the previous time slot does not depend upon J t−1. Indeed, in this particular case:

$$ p_{J_{t}|J_{t-1}}=\binom{N}{J_{t}} (1-p)^{J_{t}} p^{N-J_{t}}, \quad J_{t}=0,\dotsc, N. $$
((22))

\(\square \)

We are now in a position to introduce the main technical result of this section, which shows that the sequence of augmented state variables {Ξ t } in (17) and the sequence of difference indicator matrices {Δ t′} in (20) are jointly Markovian and form a Markov jump linear system.

Theorem 3.1.

Let \(\nu _{t} =\, \left [{w_{t}^{T}}, \bar {n}_{t}^{T}\right ]^{T}\) be the vector containing the external disturbances and quantization noises. Moreover, let \(\delta '_{i} \triangleq \gamma _{i}(\Delta _{t}') \in \mathbb {R}^{N-i+1}\) and let

$$\tilde{\delta}_{i}'(k) \triangleq \delta_{i}'(k)\prod_{j=1}^{k-1}(1-\delta_{i}'(j)), $$

where δ i′(k) is the kth element of the vector δ i′. Then, {Ξ t ,Δ t′} forms a Markov jump linear system with a state recursion that can be written in the following form:

$$ \Xi_{t+1} = \mathcal{A}(\Delta_{t}') \Xi_{t} + \mathcal{B}(\Delta_{t}') \nu_{t}, $$
((23))

where the two switching matrices

$$\mathcal{A}(\Delta_{t}')\triangleq \left[\begin{array}{cc} \mathcal{A}_{1}(\Delta_{t}') & \mathcal{A}_{2}(\Delta_{t}') \\ \mathcal{A}_{3}(\Delta_{t}') & \mathcal{A}_{4}(\Delta_{t}') \end{array}\right] \in\mathbb{R}^{(zN+N)\times(zN+N)}$$

and

$$\mathcal{B}(\Delta_{t}') \triangleq \left[\begin{array}{cc} \mathcal{B}_{1}(\Delta_{t}') & \mathcal{B}_{2}(\Delta_{t}') \\ \mathcal{B}_{3}(\Delta_{t}') & \mathcal{B}_{4}(\Delta_{t}') \end{array}\right] \in\mathbb{R}^{(zN+N)\times (z'+N^{2}) } $$

are given by:

$$\begin{array}{*{20}l}\notag \mathcal{A}_{1}(\Delta_{t}^{\prime}) &=\left[ \begin{array}{cc} A & \mathbf{0}_{z\times z(N-1)} \\ \mathbf{0}_{z(N-1)\times z} & \mathbf{0}_{z(N-1)\times z(N-1)} \end{array}\right] \\ &+ \left[\begin{array}{cc} B_{1}\tilde{\delta}_{1}^{\prime T} E_{1} \\ \mathbf{0}_{z(N-1) \times zN} \end{array}\right] \in \mathbb{R}^{zN \times zN} \end{array} $$
((24))
$$\begin{array}{*{20}l} \mathcal{A}_{2}(\Delta_{t}^{\prime}) &=\left[ \begin{array}{cc} B_{1}\left(1- \boldsymbol{1}_{N}^{T}\tilde{\delta}_{1}^{\prime}\right){e_{1}^{T}}S^{\uparrow} \\ \mathbf{0}_{z(N-1) \times N} \end{array}\right] \in \mathbb{R}^{zN \times N} \end{array} $$
((25))
$$\begin{array}{*{20}l} \mathcal{A}_{3}(\Delta_{t}^{\prime}) &=\left[ \begin{array}{ccccc} \tilde{\delta}_{1}^{\prime T} E_{1} \\ \tilde{\delta}_{2}^{\prime T} E_{2} & 0 \\ \vdots \\ \tilde{\delta}_{N}^{\prime T} E_{N} & 0 & \cdots & 0\\ \end{array}\right] \in \mathbb{R}^{N \times zN} \end{array} $$
((26))
$$\begin{array}{*{20}l} \mathcal{A}_{4}(\Delta_{t}^{\prime}) &=\left[ \begin{array}{cc} \left(1-\boldsymbol{1}_{N}^{T}\tilde{\delta}^{\prime}_{1}\right){e_{1}^{T}} \\ \vdots \\ \left(1-\boldsymbol{1}_{1}^{T}\tilde{\delta}^{\prime}_{N}\right){e_{N}^{T}} \end{array}\right] S^{\uparrow} \in \mathbb{R}^{N \times N} \end{array} $$
((27))

and

$$\begin{array}{*{20}l} \mathcal{B}_{1}(\Delta_{t}') &=\left[ \begin{array}{c} B_{2} \\ \mathbf{0}_{z(N-1)\times z'} \end{array}\right]\in \mathbb{R}^{zN \times z'} \end{array} $$
((28))
$$\begin{array}{*{20}l} \mathcal{B}_{2}(\Delta_{t}') &=\left[ \begin{array}{c} B_{1}\tilde{\delta}_{1}^{\prime T}E_{1}' \\ \mathbf{0}_{z(N-1)\times N^{2}} \end{array}\right]\in \mathbb{R}^{zN \times N^{2}} \end{array} $$
((29))
$$\begin{array}{*{20}l} \mathcal{B}_{3}(\Delta_{t}') &= \mathbf{0}_{N\times z'} \end{array} $$
((30))
$$\begin{array}{*{20}l} \mathcal{B}_{4}(\Delta_{t}') &= \left[\begin{array}{cccc} \tilde{\delta}_{1}^{\prime T} E'_{1} \\ \tilde{\delta}_{2}^{\prime T} E'_{2} & 0 \\ \vdots \\ \tilde{\delta}_{N}^{\prime T} E'_{N} & 0 & \cdots & 0\\ \end{array}\right]\in \mathbb{R}^{N \times N^{2}}, \end{array} $$
((31))

where \(E_{i} \in \mathbb {R}^{(N-i+1)\times (N-i+1)z}\) and \(E_{i}' \in \mathbb {R}^{(N-i+1)\times (N-i+1)N}\) are given by

$$\begin{array}{*{20}l} E_{i} &= \left[\begin{array}{cccc} {e_{i}^{T}} \Psi^{-1}\Gamma \\ & e_{i+1}^{T} \Psi^{-1}\Gamma \\ && \ddots \\ &&& {e_{N}^{T}} \Psi^{-1}\Gamma \\ \end{array}\right] \end{array} $$
((32))
$$\begin{array}{*{20}l} E_{i}' &= \left[\begin{array}{cccc} {e_{i}^{T}} \Psi^{-1} \\ & e_{i+1}^{T} \Psi^{-1} \\ && \ddots \\ &&& {e_{N}^{T}} \Psi^{-1} \\ \end{array}\right]. \end{array} $$
((33))

Proof.

See Appendix 3.

3.2 Stability and steady state system analysis

At time step t+1, the switching variable jumps from some particular state, say Δ t′=Δ to some state, say \(\Delta _{t+1}'=\tilde {\Delta }\), where it is possible that \(\Delta =\tilde {\Delta }\). Let the number of distinct states be L, see Lemma 3.1. Thus, without loss of generality, we can enumerate the L (not necessarily distinct) pairs of system matrices that are associated with the L states by \(\{(\mathcal {A}(1), \mathcal {B}(1)), (\mathcal {A}(2), \mathcal {B}(2)), \cdots, (\mathcal {A}(L), \mathcal {B}(L))\}\). We note that even though some of the system matrices might be identical, there is a bijection between the state Δ and the index i of the pair of system matrices. Let p i|j =Prob(Δ t′=i|Δ t−1′=j), i.e., the transition probability due to jumping from state j to state i, where we note that p i|j is independent of t due to stationarity of the switching sequence, see Lemma 3.2.

In order to assess the stability of the MJLS in (23) and find its stationary first- and second-order moments, we will first introduce some new notation and then directly invoke Proposition 3.37 in [27], which we for completeness include as Lemma 3.3 below.

Define \(\mathfrak {A}\) and \(\mathfrak {B}\) as in (34) and (35), respectively.

$${} \mathfrak{A} =\! \left[\begin{array}{cccc} p_{1|1}\mathcal{A}(1) \otimes \mathcal{A}(1) & p_{1|2}\mathcal{A}(2) \otimes \mathcal{A}(2) & \cdots &p_{1|L} \mathcal{A}(L) \otimes \mathcal{A}(L) \\ p_{2|1}\mathcal{A}(1) \otimes \mathcal{A}(1) & p_{2|2}\mathcal{A}(2) \otimes \mathcal{A}(2) & \cdots &p_{2|L} \mathcal{A}(L) \otimes \mathcal{A}(L) \\ \vdots & & \ddots & \vdots \\ p_{L|1}\mathcal{A}(1) \otimes \mathcal{A}(1) & p_{L|2}\mathcal{A}(2) \otimes \mathcal{A}(2) & \cdots & p_{L|L} \mathcal{A}(L) \otimes \mathcal{A}(L) \end{array}\right]. $$
((34))
$$ \mathfrak{B} = \left[\begin{array}{cccc} p_{1|1} \mathcal{A}(1) & p_{1|2} \mathcal{A}(2) & \cdots & p_{1|L} \mathcal{A}(L)\\ p_{2|1} \mathcal{A}(1) & p_{2|2} \mathcal{A}(2) & \cdots & p_{2|L} \mathcal{A}(L)\\ \vdots & & \ddots \\ p_{L|1} \mathcal{A}(1)& p_{L|2}\mathcal{A}(2) & \cdots & p_{L|L} \mathcal{A}(L) \end{array}\right]. $$
((35))

Moreover, let \(q =\, [\!q_{1}, \dotsc, q_{L}] \triangleq (I-\mathfrak {B})^{-1}\psi \), \(\psi \triangleq [\!\psi _{1}, \dotsc, \psi _{L}]\), where

$$ \psi_{j} \triangleq \sum_{i=1}^{L} p_{j|i} \mathcal{B}(i) \gamma \pi_{i}, $$
((36))

where \(\gamma = {\lim }_{\textit {t}\to \infty }\mathbb {E}[\nu _{t}]\) and π i are the state priors. Define the operators ϕ and \(\hat {\phi }\) as follows:

$$ \phi(V_{i}) \triangleq \left[\begin{array}{c} \bar{v}_{i,1} \\ \bar{v}_{i,2} \\ \vdots \\ \bar{v}_{i,L} \end{array}\right], \quad \hat{\phi}(V) \triangleq \left[\begin{array}{c} \phi(V_{1}) \\ \phi(V_{2}) \\ \vdots \\ \phi(V_{L}) \end{array}\right], $$
((37))

where \(\bar {v}_{i,j} \in \mathbb {R}^{m}\), \(V_{i} =\, [\bar {v}_{i,1}, \dotsc, \bar {v}_{i,L}]\) and \(V=\,[\!V_{1},\dotsc, V_{L}]\). Then define

$$ Q \triangleq \hat{\phi}^{-1}((I - \mathfrak{A})^{-1}\hat{\phi}(R(q))), $$
((38))

where

$$\begin{array}{*{20}l} R(q) &\triangleq [R_{1}(q), \dotsc, R_{L}(q)], \end{array} $$
((39))
$$\begin{array}{*{20}l} \notag R_{j}(q) &\triangleq \sum_{i=1}^{L} p_{j|i}(\mathcal{B}(i)W\mathcal{B}(i)^{*}\pi_{i} \\ &+ \mathcal{A}(i) q_{i} \gamma^{*} \mathcal{B}(i)^{*} + \mathcal{B}(i) \gamma q_{i}^{*} \mathcal{A}(i)^{*}), \end{array} $$
((40))

where

$$\begin{array}{*{20}l} W &= {\lim}_{\textit{t}\to\infty}\mathbb{E}[\!\nu_{t} {\nu_{t}^{T}}] \\ &=\text{diag}\left(\Sigma_{w}, {\sigma_{n}^{2}}, \dotsc, {\sigma_{n}^{2}}\right)\in \mathbb{R}^{(z' + N^{2})\times (z'+N^{2})}. \end{array} $$

Definition 3.1 (Definitions 3.8 and 3.32 in [27]).

The MJLS in (23) is mean square stable (MSS) if and only if for any initial condition (Ξ 0,Δ0′) and ergodic Markov jump sequence {Δ t′}, there exist μ Ξ and Σ Ξ such that

$$\begin{array}{*{20}l} \| \mathbb{E}[\Xi_{t}] - \mu_{\Xi} \|_{2} &\to 0 \quad \text{as} \quad t\to \infty, \end{array} $$
((41))
$$\begin{array}{*{20}l} \| \mathbb{E}[\Xi_{t}{\Xi_{t}^{T}}] - \Sigma_{\Xi} \|_{2} &\to 0 \quad \text{as} \quad t\to \infty. \end{array} $$
((42))

Lemma 3.3 (Proposition 3.37 in [27]).

If \(\sigma _{r}(\mathfrak {A}) < 1\), then the system in (23) is MSS.

Remark 1.

Lemma 3.3 shows that there is an upper limit on the spectral radius of the matrix \(\mathfrak {A}\) given by (34) above which the system cannot be stabilized. This matrix \(\mathfrak {A}\) depends on the packet loss rates via p i|j and on the delays via the different switching matrices \(\mathcal {A}(i), i=1,\dotsc, L\).

The MJLS in (23) is in general not stationary. However, as can be observed from Definition 3.1, if the system is MSS then asymptotically as \(t\to \infty \), its first- and second-order moments do not depend on t. This observation is formalized in ([27] Theorem 3.33), which we include in part below.

Theorem 3.2 ([27] Theorem 3.33).

If the MJLS in (23) is MSS, then it is also asymptotically wide sense stationary (AWSS) and vice versa.

Lemma 3.4 (Proposition 3.37 in [27]).

If the MJLS is AWSS, then its first- and second-order asymptotically stationary (non-centralized) moments are given by:

$$\begin{array}{*{20}l} \mu_{\Xi} \triangleq {\lim}_{\textit{t}\to\infty} \mathbb{E}[\Xi_{t}] &= \sum_{i=1}^{L} q_{i}, \end{array} $$
((43))
$$\begin{array}{*{20}l} \Sigma_{\Xi} \triangleq {\lim}_{\textit{t}\to\infty}\mathbb{E}[\Xi_{t}{\Xi_{t}^{T}}] &= \sum_{i=1}^{L} Q_{i}. \end{array} $$
((44))

In our case, we note that γ=0 since the external disturbance w t and the quantization noise n t are both zero mean. This implies that \(q_{i}=0, \forall i\), in (43).

3.3 Assessing the coding rate of the quantizer

Recall from Section 2.4 that the quantized control vector \(\tilde {u}_{t}\) is obtained by quantizing ξ t to get the quantized vector ξ t′ and then using \(\tilde {u}_{t}=\Psi ^{-1}\xi _{t}'\). The following result establishes an upper bound on the bit rate required for transmitting ξ t′.

Theorem 3.3.

Let the system (23) be AWSS. Then, for a given horizon length N, the total coding rate using M=N descriptions of the quantized control vector \(\tilde {u}_{t}\), can be upper bounded by R u :

$$\begin{array}{*{20}l} \notag R_{u} &\triangleq \frac{N}{2}\log_{2}\left(\prod_{i=1}^{N} \bigg(1 + \frac{\sigma^{2}_{\bar{\xi}(i)|\bar{\xi}(1),\cdots,\bar{\xi}(i-1)}}{{\sigma_{n}^{2}}} \bigg)^{\frac{1}{i}} \right) \\ & + \frac{N}{2}\log_{2}\bigg(\frac{\pi e}{6}\bigg) + 1, \end{array} $$
((45))

where \(\sigma ^{2}_{\bar {\xi }(i)|\bar {\xi }(1),\cdots,\bar {\xi }(i-1)}\) denotes the conditional variance of \(\bar {\xi }(i)\) given \((\bar {\xi }(1),\cdots,\bar {\xi }(i-1))\), and where \(\bar {\xi }\) denotes Gaussian random variables with the same first- and second-order moments as the asymptotically stationary moments of ξ t .

Proof.

See Appendix 4.

Remark 2.

It is straightforward to extend Theorem 3.3 to the case of M≤N descriptions by considering M (instead of N) subsets of the vector ξ t . For example, if N=4 and M=3, one could make the split {ξ t′(1),ξ t′(2),(ξ t′(3),ξ t′(4))}, where upon receiving a single description only ξ t′(1) is recovered, receiving any two descriptions makes it possible to recover ξ t′(1) and ξ t′(2), and receiving all M=3 descriptions, the entire vector \(\xi '_{t}(1),\dotsc, \xi '_{t}(4)\) is recovered. \(\square \)

Remark 3.

In (45), the conditional variances can easily be obtained using Schur’s complement on the covariance matrix Σ ξ of ξ, which is implicitly given via Σ Ξ in (44) using (10), that is

$$ \Sigma_{\xi} = [\Gamma \quad 0]\Sigma_{\Xi} [\Gamma \quad 0]^{T}. $$
((46))

This makes the upper bound on the bit rate in (45) computable and thereby relevant from a practical perspective. Indeed, we show in the simulation study in Section 5 that the bound in (45) is very close to (only 1 bit above) the resulting operational bit rate. \(\square \)
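For concreteness, a sketch (ours) of evaluating the bound (45) along the lines of Remark 3, with Σ ξ and \({\sigma_{n}^{2}}\) assumed to be given:

```python
import numpy as np

def rate_upper_bound(Sigma_xi, sigma_n2):
    # Evaluates (45); conditional variances come from Schur complements of Sigma_xi
    N = Sigma_xi.shape[0]
    total = 0.0
    for i in range(1, N + 1):
        if i == 1:
            cond_var = Sigma_xi[0, 0]
        else:
            S11 = Sigma_xi[:i - 1, :i - 1]
            s12 = Sigma_xi[:i - 1, i - 1]
            # var(xi(i) | xi(1), ..., xi(i-1)) via the Schur complement
            cond_var = Sigma_xi[i - 1, i - 1] - s12 @ np.linalg.solve(S11, s12)
        total += np.log2(1.0 + cond_var / sigma_n2) / i
    return 0.5 * N * total + 0.5 * N * np.log2(np.pi * np.e / 6.0) + 1.0
```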

4 Practical design of the PPC-MDC scheme

In this section, we design a scheme that satisfies the theoretical analysis provided in the previous section. We first present the idea behind our design of MDs and then show the connection to PPC that was sketched in Section 2.5. The proposed scheme is illustrated in Fig. 2.

Fig. 2 The proposed combined PPC and MD scheme. The PPC-MDC (controller-encoder) communicates with the plant via multiple independent data-rate limited (digital) erasure channels with delays. The received descriptions are decoded and combined in the buffer

There are many ways to design MD coding schemes, for example, by use of lattice quantization and index assignment techniques [32, 33], frame expansions followed by quantization [34], oversampling and delta-sigma quantization [35], or layered source coding followed by unequal error protection [36, 37]. In this work, we will be using the latter technique, where the source is decomposed into a number of layers and encoded in such a way that upon reception of, say, k descriptions, all layers up to the kth layer are revealed [36]. In particular, we rely on a common practical implementation of this strategy, which is based on conventional forward error correction (FEC) codes that are applied on the individual source layers [37]. It will be shown that there exists a natural connection between PPC and MD based on FEC codes, in the sense that a quantized control vector \(\tilde {u}_{t}\) with N−1 future predictions can be split into M≤N “layers”, where each layer contains at least one control value. Then, based on these M “layers”, we construct M packets \(s_{t}(i), i=1,\dotsc, M\), so that upon reception of any k≤M packets, the control signals \(\tilde {u}_{t}(1), \dotsc, \tilde {u}_{t}(k)\) can be exactly obtained at the decoder. Thus, as more packets are received, more information about future predicted control signals will become available at the plant input side.

4.1 Forward error correction codes

Consider an (n,k)-erasure code, which as input takes k symbols \({y_{t}^{k}}=(y_{t}(1),\dotsc, y_{t}(k))\) and outputs n symbols \(\tilde {y}_{t}^{n}=(\tilde {y}_{t}(1),\dotsc, \tilde {y}_{t}(n))\), where n≥k, and where \(y_{t}, \tilde {y}_{t}\) belong to some (yet to be specified) discrete alphabets. With an (n,k)-erasure code, the original k input symbols can be completely recovered using any subset of at least k output symbols. For example, a (3,2)-erasure code may be constructed by letting \(\tilde {y}_{t}(1) = y_{t}(1), \tilde {y}_{t}(2) = y_{t}(2)\), and \(\tilde {y}_{t}(3) = y_{t}(1)\; \textrm {XOR}\; y_{t}(2)\), where the XOR operation is performed on, e.g., the binary expansions of y t (1) and y t (2). Thus, using any two \(\tilde {y}_{t}(i),\tilde {y}_{t}(j), i\neq j\), both y t (1) and y t (2) may be perfectly recovered. This principle extends to any n>k by using, e.g., erasure codes that are maximum distance separable, cf. [38].
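The (3,2) example above can be written out directly (a toy sketch of ours; integers stand in for the binary expansions):

```python
def encode_3_2(y1: int, y2: int):
    # Systematic (3,2)-erasure code; the third symbol is the XOR parity
    return [y1, y2, y1 ^ y2]

def decode_3_2(received):
    # received: dict {channel index: symbol}; any two symbols recover (y1, y2)
    if 0 in received and 1 in received:
        return received[0], received[1]
    if 0 in received and 2 in received:
        return received[0], received[0] ^ received[2]
    return received[1] ^ received[2], received[1]

y1, y2 = 0b1011, 0b0110
out = encode_3_2(y1, y2)
assert decode_3_2({0: out[0], 2: out[2]}) == (y1, y2)
assert decode_3_2({1: out[1], 2: out[2]}) == (y1, y2)
```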

4.2 Combining PPC- and FEC-based MDs

For the NCS studied, we apply a sequence of erasure codes on the quantized control vector \(\tilde {u}_{t} = (\tilde {u}_{t}(1), \dotsc, \tilde {u}_{t}(N))\) in order to obtain M packets. This process is illustrated in Fig. 3 and described in detail below. We first split \(\tilde {u}_{t}\) into M subsets. For example, if M=N, the kth set consists of the kth control signal (i.e., \(\tilde {u}_{t}(k)\)). In general, we allow several control signals within the same set so that M<N. To simplify the exposition and without loss of generality, we will in the following assume that M=N. Due to quantization, each distinct \(\tilde {u}_{t}(k)\) can be mapped (entropy coded) to a unique bit stream (codeword), say b t (k). The bitstream is then split into k non-overlapping sub-bitstreams \(b_{t}^{(i)}(k), i=1,\dotsc, k\), of equal length. These k bitstreams (whose union yields b t (k)) are now considered as input to an (M,k)-erasure code, whose M outputs are denoted by \(\phi _{t}^{(i)}(k), i=1,\dotsc, M\). To summarize, \(\tilde {u}_{t}(1)\) is first mapped to bits b t (1) and then an (M,1)-erasure code is applied, which outputs M symbols \(\phi _{t}^{(i)}(1), i=1,\dotsc, M\). Then, the second control signal \(\tilde {u}_{t}(2)\) is mapped to b t (2). Hereafter, b t (2) is split into two bitstreams \(b_{t}^{(1)}(2)\) and \(b_{t}^{(2)}(2)\) and an (M,2)-erasure code is applied, which outputs \(\phi _{t}^{(i)}(2), i=1,\dotsc, M\). This process is repeated for all the M control signals.

Fig. 3 MPC control vector conversion into MDs. The kth control signal at time t is mapped into M output symbols

The M packets \(s_{t}(i), i = 1,\dotsc, M\), to be sent over the network at time t are then finally constructed as:

$$\begin{array}{*{20}l} s_{t}(i) = (\phi_{t}^{(i)}(1),\phi_{t}^{(i)}(2), \dotsc, \phi_{t}^{(i)}(M)), i=1,\dotsc, M. \end{array} $$

To further illustrate the usefulness of the above approach, consider the case where M=5 and where the decoder receives three packets, say s t (2), s t (3), and s t (5). From, say, s t (2), we first recover \(\phi _{t}^{(2)}(1)\), which is identical to \(\tilde {u}_{t}(1)\). Then, from s t (2) and s t (3), we recover \(\phi _{t}^{(2)}(2)\) and \(\phi _{t}^{(3)}(2)\), from which we can decode \(\tilde {u}_{t}(2)\). Finally, using all three received packets, we recover \(\phi _{t}^{(2)}(3), \phi _{t}^{(3)}(3)\), and \(\phi _{t}^{(5)}(3)\), which can be uniquely decoded to obtain \(\tilde {u}_{t}(3)\).
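The same packetization can be mirrored in code for M=N=3 (a sketch of ours, reusing encode_3_2/decode_3_2 from the earlier snippet): a (3,1) repetition code for the first layer, the (3,2) XOR code for the second, and a trivial (3,3) split for the third.

```python
def make_packets(b1, b2_halves, b3_thirds):
    # b1: bits of u_t(1); b2_halves: the two sub-bitstreams of u_t(2);
    # b3_thirds: the three sub-bitstreams of u_t(3)
    phi1 = [b1, b1, b1]                    # (3,1) repetition: any 1 packet recovers u_t(1)
    phi2 = encode_3_2(*b2_halves)          # (3,2): any 2 packets recover u_t(2)
    phi3 = list(b3_thirds)                 # (3,3): all 3 packets needed for u_t(3)
    return [(phi1[i], phi2[i], phi3[i]) for i in range(3)]   # s_t(i), i = 1, 2, 3

s = make_packets(0b1011, (0b01, 0b10), (0b1, 0b0, 0b1))
# Receiving packets 1 and 3 (indices 0 and 2): u_t(1) from either packet;
# u_t(2) via decode_3_2({0: s[0][1], 2: s[2][1]}); u_t(3) is not recoverable.
```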

The foregoing discussion shows that the presence of packet dropouts together with the use of MDs makes the length of the received control packets stochastic and time-varying, while the prediction horizon N is fixed. This aspect makes the analysis of the resultant NCS significantly more involved than that of earlier PPC schemes, as presented in [11]. For example, the number of switching states L, as given by Lemma 3.1, grows exponentially in the horizon length N, whereas in [11] it was enough to consider only two states irrespective of the horizon length.

4.3 Buffering and reconstruction of control signals

At time t, the buffer at the plant input side contains all received packets that are no older than t−N+1. These will be used for obtaining the current control signal \(\hat {u}_{t}\), giving preference to newer data. For example, assume the buffer is initially empty. Then, for the case of M=N=3, if we at time t receive s t (2), then clearly we obtain \(\hat {u}_{t}=u_{t}(1)\). If we then at time t+1 receive s t+1(1) and the delayed packet s t (3), then we should form \(\hat {u}_{t+1}=u_{t+1}(1)\) from s t+1(1) and, thus, simply ignore s t (3). However, if we now at time t+2 only receive the very late s t (1), then we recover \(\hat {u}_{t+2} = u_{t}(3)\). Thus, we use the older packets to obtain the control signal. This process is clarified in Table 1 for M=N=3.

Table 1 Control value \(\hat {u}_{t}\) at time t from available buffer contents

4.4 Quantization and coding rates

In order to construct the MDs, we need to split the quantized control vector into individual components. It is therefore not possible to directly quantize the vector ξ t by use of vector quantization as we have done in our previous work on NCS [11], which did not include the use of MDs. Instead, we will in this work use a scalar quantizer separately along each dimension of the vector ξ t . Of course, a scalar quantizer is not as efficient as a vector quantizer, but the gap from optimality, which is given by \(N/2\log _{2}(\pi e/6)\), is included in the upper bound in (45). Interestingly enough, we can still do vector entropy coding by making use of conditional entropy coding. In particular, we first entropy code the first element of the quantized control vector, i.e., \(\tilde {u}_{t}(1)\). This results in an average discrete entropy of \(H(\tilde {u}_{t}(1)|\zeta _{t})\). Next, we conditionally entropy code the second element \(\tilde {u}_{t}(2)\), which results in an average entropy of \(H(\tilde {u}_{t}(2)|\tilde {u}_{t}(1),\zeta _{t})\). This procedure is repeated for the entire vector \(\tilde {u}_{t}\). The FEC code is now applied on the outputs of the conditional entropy coders following the approach described in Section 4.2.

As pointed out in Section 2.4, we transmit the elements of \(\tilde {u}_{t}\) and not those of ξ t′. The reason for this is that if we receive ξ t′(1) for the case of N>1, then we are actually not able to reconstruct \(\tilde {u}_{t}(1)\), since \(\tilde {u}_{t} = \Psi ^{-1} \xi _{t}'\). Thus, \(\tilde {u}_{t}(1)\) depends upon the whole vector ξ t′ and not just the first element. Since Ψ −1 is fixed and full rank, it simply maps elements from one discrete set into another discrete set. Thus, the coding rate is not affected by sending \(\tilde {u}_{t}(i)\) instead of ξ t′(i).

The size R (in bits) of a single packet is then on average given by:

$$\begin{array}{*{20}l} \notag R&=H(\tilde{u}_{t}(1)|\zeta_{t}) + \frac{1}{2}H(\tilde{u}_{t}(2)|\tilde{u}_{t}(1),\zeta_{t}) + \cdots \\ &+ \frac{1}{M}H(\tilde{u}_{t}(N)|\tilde{u}_{t}(1),\dotsc, \tilde{u}_{t}(N-1),\zeta_{t}). \end{array} $$
((47))

Since we have M of these packets, i.e., we have M descriptions, the resulting coding rate is RM.

5 Simulation study

We will now use the analysis and design presented in Sections 3 and 4 in a simulation study in MATLAB.

5.1 System setup

In the state recursion given in (1), we let z=5 and randomly select the system matrix \(A\in \mathbb {R}^{z\times z}\) to be

$$\begin{array}{*{20}l} {}A \!= \! \left[\begin{array}{ccccc} -0.1065 & -0.4330 & -0.0006 & -0.8232 & -0.9397 \\ -1.0164 & -1.0668 & -0.1995 & 0.1945 & -0.8169 \\ -1.3309 & 0.8582 & 0.3173 & -1.0053 & -0.3214 \\ -0.5629 & -0.5697 & -0.2112 & -0.2778 & 0.1390 \\ 0.2247 & -0.0090 & -1.3312 & -0.7531 & -0.0929 \end{array}\right], \end{array} $$

where the absolute values of the eigenvalues of A are {1.9829,1.2265,1.2265,0.9455,0.9455}. Thus, the system is open-loop unstable. We let the external disturbance \(w_{t}\in \mathbb {R}^{2}\) in (1) be Gaussian distributed with zero mean and covariance matrix Σ w =I 2, where I 2 denotes the 2×2 identity matrix. The remaining constants in (1) are set to B 1=1 z and B 2=[B 1,B 1]. In these simulations, we have used T=4×106 vectors each of dimension z=5 in the sequence \(\{x_{t}\}_{t=0}^{T}\) in (1). x 0 is initialized to the zero vector.

5.2 Cost function

For the cost function in (2), we let Q=I 5,λ=1/20, and P is found by (3) and given by:

$$\begin{array}{*{20}l} &P = \\ &\left[\begin{array}{ccccc} 259.5872 & -100.8986 & -76.8526 & -63.0725 & -59.5344 \\ -100.8986 & 46.9687 &40.4038 & 15.9182 & 15.0465 \\ -76.8526 & 40.4038 & 73.9883 & -10.3694 & -32.3071 \\ -63.0725 & 15.9182 & -10.3694 & 34.9787 & 42.6824 \\ -59.5344 & 15.0465 & -32.3071 & 42.6824 & 68.9741 \end{array}\right]. \end{array} $$

5.3 Horizon length and number of packets

We consider the cases where N=1,2,3 and compare the proposed scheme, which includes multiple descriptions, with the same scheme without multiple descriptions, i.e., that of our earlier work [11]. The two schemes are hereafter referred to as PPC-MDC and PPC, respectively. For the case of PPC-MDC, we let the number of packets M be equal to the horizon length N. For the case of PPC, the entire N-horizon vector is encoded into a single packet. For the case of N=1, the two schemes are identical.

5.4 Network

To simplify the simulations and to be able to compare to existing works on PPC, we will not consider delayed or out-of-order packets. Specifically, if a packet \(s_{t-\ell}\), ℓ>0, is received at time t, it is discarded. This means that for the case of N=M=3, the number of jump states reduces to L=4 instead of L=196 as given by Lemma 3.1. Note that even though we do not consider late packet arrivals, control signals can still be applied out of order. To see this, assume that M=N=3, and that all three packets {s t (1),s t (2),s t (3)} are received at time t. Then, at time t+1, a single packet is received, say s t+1(1), and at time t+2 no packets are received. Then, the control signal u t+1(1) applied at time t+1 is constructed later than the control signal u t (3) to be applied at time t+2.

We let the packet losses be mutually independent and identically distributed with probability p that a packet is lost (erased). For this case, the state transition probabilities are given by (22).

5.5 Stability

To assess the stability of the system, we need to compute the spectral radius \(\sigma _{r}(\mathfrak {A})\) of \(\mathfrak {A}\) in (34). In order to compute \(\mathfrak {A}\), we simply insert the above presented system and network parameters into (24) – (27) and (34). We then obtain the spectral radius by using MATLAB to find the eigenvalue of \(\mathfrak {A}\) with the largest absolute value. For the cases of N=1,2,3, we have in Fig. 4 shown the spectral radius \(\sigma _{r}(\mathfrak {A})\) as a function of the packet loss probability p ∈ [0, 0.5]. According to Lemma 3.3, the MJLS is MSS and AWSS if \(\sigma _{r}(\mathfrak {A})<1\). As can be observed from Fig. 4, the MJLS is guaranteed to be MSS for p<0.06, p<0.3, and p<0.5 for the cases of N=1, N=2, and N=3, respectively. Thus, choosing a larger horizon brings stability benefits.
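A sketch (ours) of this stability check in Python: build \(\mathfrak{A}\) from (34) for given switching matrices and transition probabilities and test \(\sigma_{r}(\mathfrak{A})<1\); the A_list entries below are placeholders for the matrices of Theorem 3.1.

```python
import numpy as np
from math import comb

def frak_A(A_list, P_trans):
    # (34): block (i, j) is p_{i|j} * kron(A(j), A(j)), with P_trans[i, j] = p_{i|j}
    L = len(A_list)
    kron = [np.kron(Aj, Aj) for Aj in A_list]
    return np.vstack([np.hstack([P_trans[i, j] * kron[j] for j in range(L)])
                      for i in range(L)])

def is_mss(A_list, P_trans):
    # Lemma 3.3: sigma_r(frak_A) < 1 is sufficient for mean square stability
    return np.max(np.abs(np.linalg.eigvals(frak_A(A_list, P_trans)))) < 1.0

# i.i.d. losses without delays (Section 5.4): L = N + 1 states with transitions (22)
N, p = 2, 0.1
pi = np.array([comb(N, J) * (1 - p) ** J * p ** (N - J) for J in range(N + 1)])
P_trans = np.tile(pi[:, None], (1, N + 1))        # p_{i|j} independent of j

A_list = [0.5 * np.eye(3) for _ in range(N + 1)]  # placeholder switching matrices
print(is_mss(A_list, P_trans))
```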

Fig. 4 Spectral radius. Spectral radius \(\sigma _{r}(\mathfrak {A})\) of \(\mathfrak {A}\) in (34) for N=1,2,3 and as a function of the packet loss probability p ∈ [0, 0.5]

5.6 Quantization

Each scalar control value in the control vector \(\bar {u}_{t}\) is quantized using a uniform scalar quantizer with some step size δ. Specifically, for the case of PPC, we simply keep the step size fixed at δ=10. On the other hand, for the case of PPC-MDC, we need to use a larger step size than what is used for PPC, since PPC-MDC introduces redundancy across the M=N descriptions. Thus, to keep the bit rate from growing too much as a function of N, we have experimentally found δ=25N 2 to be a suitable choice, i.e., δ=25,100,225, for N=1,2,3, respectively.

5.7 Bit-rates

In order to compute the upper bound (45) on the bit-rate, we need to estimate the conditional variances \(\sigma ^{2}_{\bar {\xi }(i)|\bar {\xi }(1),\cdots,\bar {\xi }(i-1)}\) for \(i=1,\dotsc, N\), and the quantization noise variance \({\sigma _{n}^{2}}\). To find \(\sigma ^{2}_{\bar {\xi }(i)|\bar {\xi }(1),\cdots,\bar {\xi }(i-1)}\), we first find Σ Ξ in (44) by use of (34) – (40). Then, we use (46) to obtain Σ ξ from Σ Ξ , where Σ ξ is the steady state covariance matrix of ξ t . Finally, we simply use the Schur complement [39] of Σ ξ to obtain the desired conditional variances. To estimate the quantization noise variance \({\sigma _{n}^{2}}\), we use the relationship \({\sigma _{n}^{2}} \approx \delta ^{2}/12\), which is exact for a dithered uniform quantizer and a good approximation for a non-dithered scalar uniform quantizer. We have plotted the theoretical upper bound (45) in Fig. 5 as a function of the packet loss probability and for N=1,2,3.

Fig. 5 Average entropy. Average entropy as a function of the packet loss probability

To estimate the bit-rate of the quantized control signals, we use (47), which requires the computation of discrete conditional entropies. To estimate these conditional entropies, we use a histogram-based entropy estimation on the sequence of discrete (quantized) control signals \(\{\tilde {u}_{t}\}_{t=0}^{T}\). Specifically, we first estimate \(H(\tilde {u}_{t}(1))\) directly from \(\{\tilde {u}_{t}(1)\}_{t=0}^{T}\). Then, we estimate \(H(\tilde {u}_{t}(1), \tilde {u}_{t}(2))\) from \(\{\tilde {u}_{t}(1),\tilde {u}_{t}(2)\}_{t=0}^{T}\) and use that \(H(\tilde {u}_{t}(2)|\tilde {u}_{t}(1)) = H(\tilde {u}_{t}(1), \tilde {u}_{t}(2)) - H(\tilde {u}_{t}(1))\). We obtain \(H(\tilde {u}_{t}(3)|\tilde {u}_{t}(1),\tilde {u}_{t}(2)) = H(\tilde {u}_{t}) - H(\tilde {u}_{t}(1),\tilde {u}_{t}(2))\) in a similar way. Finally, these estimates of the conditional entropies are inserted into (47) in order to approximate the resulting operational bit-rate R. The resulting total discrete entropy R T =RM, obtained by adding the entropies of the M descriptions, is shown in Fig. 5 as a function of the packet loss rate for M=N.
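These entropy estimates can be computed with a plug-in (histogram) estimator. The sketch below (ours) assumes a T×N array u_seq of quantized control vectors and omits the conditioning on the dither ζ t , matching the non-dithered quantizer of Section 5.6.

```python
import numpy as np

def entropy_est(samples):
    # Plug-in (histogram) entropy in bits, treating each row as one joint symbol
    _, counts = np.unique(samples, axis=0, return_counts=True)
    pmf = counts / counts.sum()
    return float(-(pmf * np.log2(pmf)).sum())

def packet_rate(u_seq):
    # Average per-packet rate R of (47) with M = N, via the chain rule:
    # H(u(k) | u(1), ..., u(k-1)) = H(u(1), ..., u(k)) - H(u(1), ..., u(k-1))
    T, N = u_seq.shape
    R, H_prev = 0.0, 0.0
    for k in range(1, N + 1):
        H_joint = entropy_est(u_seq[:, :k])
        R += (H_joint - H_prev) / k
        H_prev = H_joint
    return R
```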

It may be noticed in Fig. 5 that the upper bound (45) is approximately 1 bit above the estimate R T of the operational bit-rates, except in the region where the packet loss rate approaches and exceeds the critical point at which the system becomes unstable. This excess 1 bit accounts for the theoretical loss of an entropy coder. While we have not applied actual entropy coding, it is well known that the loss of the entropy coder diminishes at moderate to large bit rates.

Note that the multiple descriptions of PPC-MDC carry a certain amount of controlled redundancy, and one might therefore expect that the total coding rate for all M=N descriptions would be much greater than what is used for the single description in PPC. However, since the system operates in closed loop, packet losses affect the variance of the input to the quantizer. Consequently, the resulting coding rates for PPC as well as for PPC-MDC also depend upon the packet loss rate.

5.8 Performance

We have measured the performance of the system in terms of the average state power \(\frac {1}{T}\sum _{t=1}^{T} \| x_{t}\|_{2}^{2}\). This is shown in Fig. 6. For smaller packet loss rates, the performance of PPC is better than that of PPC-MDC for N>1. This is because the negative impact on the performance due to quantization in PPC-MDC outweighs the impact due to using future predicted control values in PPC in case of packet losses. Recall that the quantizer in PPC-MDC is coarser than that used in PPC. When the packet loss rate is increased, PPC-MDC is often able to apply the most recent control value \(\tilde {u}_{t}(1)\) due to the construction of the MDs. On the other hand, PPC will frequently be applying the future predicted control values \(\tilde {u}_{t}(2)\) and \(\tilde {u}_{t}(3)\) due to the packet dropouts. This leads to a significant performance gain of PPC-MDC at higher packet loss rates.

Fig. 6: Average state power as a function of the packet loss probability

5.9 Complexity

The analysis of the MJLS in Section 3 does not readily reveal the computational burden incurred when using the proposed system in practice. In this section, we provide a brief overview of the complexity of the encoder and the decoder. The encoder comprises the controller, quantizer, entropy coder, and channel (FEC) coder. The decoder comprises the channel decoder, entropy decoder, buffering, and selection of the control values:

Encoder

  1. At any given time, say t, the control vector \(\bar {u}_{t}\) is constructed as in (10) and (13), which amounts to a few matrix-vector multiplications. The matrices in question are \(\Gamma \in \mathbb {R}^{N\times z}\) and \(\Psi \in \mathbb {R}^{N\times N}\), where N is the horizon length and z is the state dimension. For many applications, both the horizon length and the state dimension are moderately small.

  2. Each scalar element in either the control vector \(\bar {u}_{t} \in \mathbb {R}^{N}\) or in \(\xi _{t} \in \mathbb {R}^{N}\) is quantized using a scalar quantizer as described in Section 5.6. This amounts to N simple rounding operations, which can be done efficiently in hardware.

  3. The quantized elements are entropy encoded either independently, conditionally, or jointly. In each case, this is done in practice by look-up tables and is therefore of low complexity, i.e., \(\mathcal {O}(N)\).

  4. The resulting bitstream after entropy coding is converted into M packets by applying M FEC codes, which amounts to matrix-vector multiplications over finite fields [40]. If we use M=N packets, and thereby split the control vector into N “layers”, then the ith layer uses i×N multiplications due to the (N,i) FEC code. Thus, the total number of multiplications is \(N\times (1+2+\cdots + N) = \mathcal {O}(N^{3})\); a toy encoding sketch follows this list.

Decoder

  1. At the decoder at time t, all received packets that are no older than time t−N+1 are stored in a buffer. Moreover, all decoded control values that are no older than time t−N+1 are stored in another buffer. Thus, since there can be up to M packets in each time slot, the storage complexity is \(\mathcal {O}(MN)\).

  2. Decoding of received packets involves decoding the FEC code and decoding the entropy code. Decoding the FEC code can be done by, e.g., Gaussian elimination, which has complexity \(\mathcal {O}(N^{3})\) per layer, and therefore at most \(\mathcal {O}(N^{4})\) for decoding the entire control vector; a matching decoding sketch is given after this list. Decoding of the entropy code is done by a look-up table and has, thus, complexity \(\mathcal {O}(N)\), since the control vector contains N elements.

  3. If the decoded control signals are stored in \(U_{t}\) (18), then the selection of the control signal from the buffer can be done as suggested in (19). This includes constructing the vector \(\tilde {\delta }_{i}\) and forming the inner product of \(\tilde {\delta }_{i}\) and the diagonal of \(U_{t}\) indexed by \(\gamma _{i}\). The inner product has complexity \(\mathcal {O}(N)\).

6 Conclusions

We have shown how to combine multiple description coding with quantized packetized predictive control in order to achieve a high degree of robustness towards packet delays and erasures in networked control systems. We focused on a digital network located between the controller and the plant input. In our scheme, when any single packet is received, the most recent control value becomes available at the plant input. Moreover, when any J out of M packets are received, the most recent control value and J−1 future predicted control values become available at the plant input. These future predicted control values can then be applied at time instants where no packets are received. The key motivation for this design was twofold. From a practical point of view, it was shown that a significant gain over existing packetized predictive control was possible in the range of large packet loss rates. Moreover, from a theoretical point of view, computable guarantees for stability and upper bounds on the operational bit rate could be established. Indeed, a simulation study revealed that the upper bounds on the bit rate were a good indicator of the operational bit rate of the system in the range of packet loss probabilities not too close to the region of system instability.

Future work could include source coding in the feedback channel as well as in the forward channel, which is a non-trivial extension. Indeed, the design and analysis of optimal joint controllers, encoders, and decoders in both forward and backward channels is an open problem even in the absence of erasures and delays. The main difficulty is that the design of the source coder in the forward channel hinges heavily on the design of the source coder in the backward channel as well as on the controller. Another interesting open research direction is to establish lower bounds on the bit rates, which would then make it possible to assess the optimality of the overall system architecture from an information-theoretic point of view.

7 Endnotes

1 For the case of quantized MPC with fixed-rate quantization and without dithering, it was shown in [41] that the optimal quantized control vector is given by nearest-neighbour quantization of \(\xi_{t}\) in (10).

2 It follows that we require the dither sequence to be known both at the encoder and at the decoder.

3 We will explicitly make use of (34)–(40) and Lemma 3.3 when assessing the stability of the system in the simulation study in Section 5.

4 The excess 1 bit is due to the conservative estimate of the loss of the entropy coder, which we take to be 1 bit.

5 If they are not of equal length, it is always possible to augment one of the sub-bitstreams with a fixed (known) bit pattern to make them of equal length.

6 Matlab code to reproduce all results (figures and tables) will be made available online on the authors' webpage.

7 Of course, information about the time instants at which the packets were received can be learned from past \(\Delta\)'s. However, we are not exploiting this knowledge here.

8 Appendix 1: Proof of Lemma 3.1

Let us first consider the case M=N. In this case, each row of \(\Delta_{t}\) can take on N+1 distinct patterns, i.e.,

$$\overbrace{[1 \cdots 1}^{m} \overbrace{0 \cdots 0}^{N-m}],\quad m=0,\dotsc, N, $$

where m describes the number of packets received for the time slot corresponding to that particular row. The first row of \(\Delta_{t}'\) is equivalent to the first row of \(\Delta_{t}\). The remaining rows of \(\Delta_{t}'\) can each either be the zero vector or any one of the following:

$$ \overbrace{[0 \cdots 0}^{m-k}\ \overbrace{1 \cdots 1}^{k} \ \overbrace{0 \cdots 0}^{N-m}],\quad m=1,\dotsc, N,\ k=1,\dotsc, m, $$

where k describes the number of packets received at time t that contain control signals for that particular row in the buffer. Thus, the number of distinct patterns for each of these rows is \(1+ \sum _{m=1}^{N} m\). Since there is a total of N−1 such rows, the total number of distinct difference matrices is

$$(N+1)\left(1+ \sum_{m=1}^{N} m\right)^{N-1} = (N+1)\left(1+\frac{1}{2}N(N+1)\right)^{N-1}. $$

For example, N=2 gives \((2+1)(1+3)^{1}=12\) distinct matrices.
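The per-row count can be verified by direct enumeration (our own sketch):

```python
def row_patterns(N):
    """Distinct patterns of a non-first row of Delta_t': the zero vector
    plus [0^(m-k) 1^k 0^(N-m)] for m = 1..N, k = 1..m."""
    pats = {(0,) * N}
    for m in range(1, N + 1):
        for k in range(1, m + 1):
            pats.add(tuple([0] * (m - k) + [1] * k + [0] * (N - m)))
    return pats

# len(row_patterns(N)) == 1 + N * (N + 1) // 2, so the total count is
# (N + 1) * (1 + N * (N + 1) // 2) ** (N - 1), e.g., 12 for N = 2.
```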

The case of M<N follows easily from the above analysis. In this case, each row of \(\Delta_{t}\) can take on only M+1 distinct patterns, i.e., the zero vector or a vector containing a number of consecutive ones corresponding to the number of control values that are recovered when receiving J out of the M packets, where \(J=1,\dotsc,M\). It follows immediately that the number of possible difference indicator matrices is smaller for M<N than for M=N.

\(\square \)

9 Appendix 2: Proof of Lemma 3.2

We first prove ergodicity. Clearly, from the all-zero difference indicator matrix, it is possible to get to any other difference indicator matrix in a finite number of steps. Moreover, the probability of not receiving any packets in N consecutive time steps is positively bounded away from zero for any finite N. The all-zero difference indicator matrix can therefore be reached in a finite number of steps from any other difference indicator matrix. Thus, it is possible to jump between any two difference indicator matrices in a finite number of steps. We may therefore view the difference indicator matrices as the nodes of a fully connected graph. In this graph, any node can be reached at irregular times. Thus, the nodes are recurrent and aperiodic, which implies that they are ergodic, and the sequence \(\{\Delta_{t}'\}\) of difference indicator matrices is therefore also ergodic.

We now prove the Markovian property. Observe that the matrices in the sequence \(\{\Delta_{t}\}\) are not mutually independent. However, the sequence does satisfy a first-order Markov condition due to the Markov assumption on the data reception, see Section 2.3, i.e.,

$$ \Delta_{0}^{t-1} \leftrightarrow \Delta_{t} \leftrightarrow \Delta_{t+1},\quad \forall t, $$
(48)

which implies that knowledge of the buffer \(\bar {f}'_{t-1}\) does not bring more useful information (see endnote 7) about the buffer \(\bar {f}'_{t+1}\) if the buffer \(\bar {f}'_{t}\) is already known. Similarly, it is easy to see that the sequence \(\{\Delta_{t}'\}\) of difference matrices forms a Markov chain similar to (48), i.e.,

$$ \{\Delta_{i}'\}_{i=0}^{t-1} \leftrightarrow \Delta_{t}' \leftrightarrow \Delta_{t+1}', \quad \forall t. $$
(49)

Finally, the stationarity of the channel, see Section 2.3, implies that the sequence of difference matrices \(\{\Delta_{t}'\}\) is stationary. This proves the lemma. \(\square \)

10 Appendix 3: Proof of Theorem 3.1

In Lemma 3.2, we established the ergodicity and Markov properties of the switching sequence \(\{\Delta_{t}'\}\), as required by Lemma 3.3. We then need to derive a recursive form for the system evolution, which guarantees that the combined system \(\{\Xi_{t},\Delta_{t}'\}\) is Markovian.

Recall from (19) that \(\bar {f}_{t}(i) = \tilde {\delta }_{i}^{T} \gamma _{i}(U_{t})\) for \(i=1,\dotsc,N\). However, to avoid updating the buffer with information about packets that were already received at previous time instants, we need to look only at the changes between \(\Delta_{t}\) and \(\Delta_{t-1}\). Towards that end, let \(\delta _{i}' \triangleq \gamma _{i}(\Delta _{t} - S^{\downarrow }\Delta _{t-1})\), and define \(\tilde {\delta }'_{i}\) in a similar manner as \(\tilde {\delta }_{i}\). If \(\delta_{i}'\) is the all-zero vector for some i, then no new control signals to be used at time t+i−1 have been received yet. The ith element of the buffer is then simply obtained by using older control signals, i.e., \(\bar {f}_{t}(i) = \bar {f}_{t-1}(i+1)\). With this, we obtain the following recursion:

$$ \bar{f}_{t}(i) = \tilde{\delta}_{i}^{\prime T} \gamma_{i}(U_{t}) + \left(1- \boldsymbol{1}_{N-i+1}^{T}\tilde{\delta}_{i}'\right)\bar{f}_{t-1}(i+1). $$
(50)
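In words, a plain sketch of this recursion (ours; we assume, as in (19), that \(\tilde{\delta}'_{i}\) selects the newest newly received value, so the inner product reduces to a single look-up, and we shift in 0 when the buffer runs out):

```python
import numpy as np

def update_buffer(f_prev, U_cols, delta_new):
    """Buffer recursion (50): entry i takes a newly received control value
    for time t+i-1 when one exists; otherwise it shifts in the older
    value f_prev[i+1] (0 when the buffer runs out -- an assumption of
    this sketch).

    U_cols[i]    -- candidate control values gamma_i(U_t), newest first
    delta_new[i] -- 0/1 vector marking which of them arrived at time t"""
    N = len(f_prev)
    f = np.zeros(N)
    for i in range(N):
        d = np.asarray(delta_new[i])
        if d.any():
            f[i] = U_cols[i][int(np.argmax(d))]  # newest newly received value
        else:
            f[i] = f_prev[i + 1] if i + 1 < N else 0.0
    return f
```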

Using (10) and (14), it is possible to write \(\gamma_{i}(U_{t})\) as a function of \(\bar {x}_{t}\) and \(\bar {n}_{t}\), that is:

$$\begin{array}{*{20}l} \gamma_{i}(U_{t}) &= [u_{t}(i), u_{t-1}(i+1), \cdots, u_{t-N+i}(N)]^{T} \\ &= \left[\begin{array}{c} {e_{i}^{T}} \Psi^{-1}\Gamma x_{t}\\ e_{i+1}^{T} \Psi^{-1}\Gamma x_{t-1} \\ \vdots \\ {e_{N}^{T}} \Psi^{-1}\Gamma x_{t-N+i} \end{array}\right] + \left[\begin{array}{c} {e_{i}^{T}} \Psi^{-1}n_{t}\\ e_{i+1}^{T} \Psi^{-1}n_{t-1} \\ \vdots \\ {e_{N}^{T}} \Psi^{-1}n_{t-N+i} \end{array}\right]. \end{array} $$

With the above notation, the system state vector recursions can be written as:

$$\begin{array}{*{20}l} \notag x_{t+1} &= A x_{t} + B_{1}(\tilde{\delta}_{1}^{\prime T}\gamma_{1}(U_{t}) \\ &+ (1- \boldsymbol{1}_{N}^{T}\tilde{\delta}_{1}'){e_{1}^{T}}S^{\uparrow}\bar{f}_{t-1}) +B_{2} w_{t}. \end{array} $$
(51)

Using \(\Xi _{t} = \left [\begin {array}{c} \bar {x}_{t} \\ \bar {f}_{t} \end {array}\right ]\), combining (51) and (50), and using the matrix definitions in (24)–(33) yields (23). This proves the theorem. \(\square \)

11 Appendix 4: Proof of Theorem 3.3

In order to provide an upper bound on the bit rate required for transmitting the quantized control vector \(\xi_{t}'\), we assume that the system is designed such that the loop is AWSS. For such a system, the bit rate R of the ECDQ is related to the discrete entropy \(H(\xi_{t}'|\zeta_{t})\) of the quantized signal \(\xi_{t}'\), conditioned upon the dither signal \(\zeta_{t}\) [31]. That is,

$$ H(\xi_{t}' | \zeta_{t}) \leq R \leq H(\xi_{t}' | \zeta_{t}) + 1/N, $$
(52)

where the term 1/N is the loss due to using entropy coding on finite-dimensional vectors [42]. At this point, we could continue by upper bounding \(H(\xi_{t}'|\zeta_{t})\), which would then provide an upper bound on the bit rate required for coding the entire control vector. However, recall that we need to send \(\xi_{t}'\) using M descriptions such that, upon receiving any \(0<J\leq M\) descriptions, the J control signals \(\tilde {u}_{t}(1),\dotsc, \tilde {u}_{t}(J)\) can be reliably recovered. This is clearly not possible if \(\xi_{t}'\) is arbitrarily split into M sub-streams having a total bit rate of \(H(\xi_{t}'|\zeta_{t})\). In Section 4.2, we introduced a practical scheme for MDs based on forward error correction codes. With this scheme, a description is constructed by concatenating the entire bitstream used for representing the encoded version of \(\tilde {u}_{t}(1)\) with half the bitstream used for \(\tilde {u}_{t}(2)\), one third of the bits allocated for \(\tilde {u}_{t}(3)\), and so on. With such a scheme in mind, we first invoke the chain rule of entropies in order to expand \(H(\xi_{t}'|\zeta_{t})\) in (52) as:

$$\begin{array}{*{20}l} \notag H(\xi_{t}' | \zeta_{t}) &= H(\xi_{t}' (1) | \zeta_{t}) + H(\xi_{t}' (2) |\xi_{t}' (1), \zeta_{t}) + \cdots \\ &+ H(\xi_{t}' (N) |\xi_{t}' (1),\dotsc,\xi_{t}' (N-1), \zeta_{t}). \end{array} $$
(53)

The ith term on the r.h.s. of (53) describes the minimum bit rate required for conditionally encoding \(\xi_{t}'(i)\). With this, and using the MD construction sketched above, the total rate \(R_{T}\) required for all M=N descriptions is given by:

$$\begin{array}{*{20}l} \notag R_{T} &\leq M \bigg(H(\xi_{t}' (1) | \zeta_{t}) + \frac{1}{2}H(\xi_{t}' (2) |\xi_{t}' (1), \zeta_{t}) + \cdots \\ &+ \frac{1}{M}H(\xi_{t}' (N) |\xi_{t}' (1),\dotsc,\xi_{t}' (N-1), \zeta_{t})\bigg) +1. \end{array} $$
(54)

Since we are using an ECDQ, the discrete entropy of the quantized variables satisfies:

$$\begin{array}{*{20}l} H(\xi_{t}' | \zeta_{t}) &= I(\xi_{t}; \hat{\xi}_{t}) \end{array} $$
(55)
$$\begin{array}{*{20}l} &= I(\xi_{t}; \xi_{t} + n_{t}) \end{array} $$
(56)
$$\begin{array}{*{20}l} &\leq I(\bar{\xi}_{t}; \bar{\xi}_{t} + \bar{n}_{t}) + \mathcal{D}(n_{t} \| \bar{n}_{t}), \end{array} $$
(57)

where the equality in (55) follows from [31] and where \(I(\xi _{t}; \hat {\xi }_{t})\) denotes the mutual information [30] between the input \(\xi_{t}\) and the output \(\hat {\xi }_{t}\) of the ECDQ [31]. In (56), the equality follows by replacing the quantization operation by its additive noise model, which is exact from a statistical point of view [31]. The upper bound in (57) follows from ([43] Lemma 2) by replacing the variables in play by their Gaussian counterparts, i.e., \(\bar {\xi }_{t}\) and \(\bar {n}_{t}\) are Gaussian distributed with the same first and second moments as \(\xi_{t}\) and \(n_{t}\), respectively. The divergence \(\mathcal {D}(n_{t} \| \bar {n}_{t})\) denotes the Kullback-Leibler distance (in bits) between the distribution of the quantization noise \(n_{t}\) and that of a Gaussian distribution [30] and is in our case upper bounded by \(\mathcal {D}(n_{t} \| \bar {n}_{t})\leq N/2\log _{2}(\pi e/6)\). This upper bound is achieved if \(n_{t}\) is uniformly distributed over an N-dimensional cube [44]. We may now express the mutual information in terms of differential entropies, provided the latter exist [30]. Thus, we obtain \(I(\bar {\xi }_{t}; \bar {\xi }_{t} + \bar {n}_{t}) = h(\bar {\xi }_{t} + \bar {n}_{t}) - h(\bar {n}_{t})\). Using the same idea on the conditional variables \(\bar{\xi}_{t}(i)|\bar{\xi}_{t}(1),\dotsc,\bar{\xi}_{t}(i-1)\) instead of the entire vector \(\bar{\xi}_{t}\) leads to:

$$\begin{array}{*{20}l}\notag &H(\xi_{t}' (i) |\xi_{t}' (1), \dotsc, \xi_{t}'(i-1), \zeta_{t}) \\ \notag &\leq h(\bar{\xi}_{t}(i) + \bar{n}_{t}(i)|\bar{\xi}_{t}(1),\cdots,\bar{\xi}_{t}(i-1)) - h(\bar{n}_{t}(i)) \\ \notag &+ 1/2\log_{2}(\pi e/6)\\ &= \frac{1}{2}\log_{2}\left(1 + \frac{\sigma^{2}_{\bar{\xi}(i)|\bar{\xi}(1),\cdots,\bar{\xi}(i-1)}}{{\sigma_{n}^{2}}} \right) + \frac{1}{2}\log_{2}\left(\frac{\pi e}{6}\right), \end{array} $$
(58)

where the last equality follows from the definition of differential entropy of a Gaussian variable [30]. Inserting (58) into (54) and (52) yields (45). This completes the proof. \(\square \)

References

  1. JP Hespanha, P Naghshtabrizi, Y Xu, A survey of recent results in networked control systems. Proc. IEEE. 1(95), 138–162 (2007).


  2. GN Nair, F Fagnani, S Zampieri, RJ Evans, Feedback control under data rate constraints: an overview. Proc. IEEE. 95(1), 108–137 (2007).


  3. WS Wong, RW Brockett, Systems with finite communication bandwidth constraints II: stabilization with limited information feedback. IEEE Trans. Autom. Control. 44(5), 1049–1053 (1999).


  4. M Trivellato, N Benvenuto, State control in networked control systems under packet drops and limited transmission bandwidth. IEEE Trans. Commun. 58(2), 611–622 (2010).


  5. DE Quevedo, A Ahlén, J Østergaard, Energy efficient state estimation with wireless sensors through the use of predictive power control and coding. IEEE Trans. Signal Process. 58(9), 4811–4823 (2010).


  6. T Wang, Y Zhang, J Qiu, H Gao, Adaptive fuzzy backstepping control for a class of nonlinear systems with sampled and delayed measurements. IEEE Trans. Fuzzy Syst.23(2), 302–312 (2015).


  7. T Wang, H Gao, J Qiu, A combined adaptive neural network and nonlinear model predictive control for multirate networked industrial process control. IEEE Trans. Neural Netw. Learn. Syst. 27(2), 416–425 (2016).


  8. PL Tang, CW de Silva, Compensation for transmission delays in an Ethernet-based control network using variable-horizon predictive control. IEEE Trans. Contr. Syst. Technol. 14(4), 707–718 (2006).


  9. G Liu, Y Xia, J Chen, D Rees, W Hu, Networked predictive control of systems with random network delays in both forward and feedback channels. IEEE Trans. Ind. Electron. 54(3), 1282–1297 (2007).


  10. DE Quevedo, D Nešić, Input-to-state stability of packetized predictive control over unreliable networks affected by packet-dropouts. IEEE Trans. Autom. Control. 56(2), 370–375 (2011).


  11. DE Quevedo, J Østergaard, D Nešić, Packetized predictive control of stochastic systems over digital channels with random packet loss. IEEE Trans. Autom. Control. 56(12), 2854–2868 (2011).


  12. AA El Gamal, TM Cover, Achievable rates for multiple descriptions. IEEE Trans. Inf. Theory. IT-28(6), 851–857 (1982).


  13. VK Goyal, Multiple description coding: compression meets the network. IEEE Signal Proc. Mag. 18(5), 74–93 (2001).


  14. EI Silva, MS Derpich, J Østergaard, An achievable data-rate region subject to a stationary performance constraint for LTI plants. IEEE Trans. Autom. Control. 56:, 1968–1973 (2011).


  15. AS Leong, S Dey, GN Nair, Quantized filtering schemes for multi-sensor linear state estimation: stability and performance under high rate quantization. IEEE Trans. Signal Process. 61(15), 3852–3865 (2013).


  16. J Østergaard, DE Quevedo, A Ahlen, in IEEE Int. Conference on Audio, Speech and Signal Processing. Predictive power control and multiple-description coding for wireless sensor networks (IEEE, 2009), pp. 2785–2788.

  17. J Østergaard, DE Quevedo, in 9th IEEE International Conference on Control & Automation. Multiple descriptions for packetized predictive control over erasure channel (IEEE, Santiago, Chile, 2011).


  18. J Østergaard, DE Quevedo, in IEEE Data Compression Conference. Multiple description coding for closed loop systems over erasure channels (IEEE, 2013).

  19. Z Jin, V Gupta, R Murray, State estimation over packet dropping networks using multiple description coding. Automatica. 42(9), 1441–1452 (2006).


  20. DE Quevedo, J Østergaard, A Ahlen, A power control and coding formulation for state estimation with wireless sensors. IEEE Trans. Control Syst. Technol. 22(2), 413–427 (2014).


  21. E Peters, DE Quevedo, J Østergaard, Shaped Gaussian dictionaries for quantized networked control systems with correlated dropouts. IEEE Trans. Signal Process. 64(1), 203–213 (2016).


  22. Y Ge, Q Chen, M Jiang, Y Huang, J. Control Sci. Eng. 2013: (2013).

  23. Y Wei, J Qiu, H Karimi, M Wang, A new design of \({H}_{\infty }\) filtering for continuous-time Markovian jump systems with time-varying delay and partially accessible mode information. Signal Process. 93(9), 2392–2407 (2013).


  24. Y Wei, J Qiu, H Karimi, M Wang, Filtering design for two-dimensional Markovian jump systems with state-delays and deficient mode information. Inf. Sci. 269(10), 316–331 (2014).


  25. J Qiu, Y Wei, H Karimi, New approach to delay-dependent \({H}_{\infty }\) control for continuous-time Markovian jump systems with time-varying delay and deficient transition descriptions. J. Frankl. Inst. 352:, 189–215 (2015).


  26. Y Fang, KA Loparo, Stochastic stability of jump linear systems. IEEE Trans. Autom. Control. 47(7), 1204–1208 (2002).


  27. OLV Costa, MD Fragoso, RP Marques, Discrete-time Markov Jump Linear Systems (Springer, 2005).

  28. JB Rawlings, DQ Mayne, Model Predictive Control: Theory And Design (Nob Hill Publishing, 2009).

  29. DE Quevedo, E Silva, GC Goodwin, Packetized predictive control over erasure channels. Proc. Amer. Contr. Conf, 1003–1008 (2007).

  30. TM Cover, JA Thomas, Elements of Information Theory, 2nd edn. (Wiley-Interscience, 2006).

  31. R Zamir, M Feder, On universal quantization by randomized uniform/lattice quantizers. IEEE Trans. Inform. Theory. 38(2), 428–436 (1992).


  32. VA Vaishampayan, NJA Sloane, SD Servetto, Multiple-description vector quantization with lattice codebooks: design and analysis. IEEE Trans. Inf. Theory. 47(5), 1718–1734 (2001).


  33. J Østergaard, J Jensen, R Heusdens, n-channel entropy-constrained multiple-description lattice vector quantization. IEEE Trans. Inf. Theory. 52(5), 1956–1973 (2006).


  34. J Kovačević, PL Dragotti, VK Goyal, Filter bank frame expansions with erasures. IEEE Trans. Inf. Theory. 48(6), 1439–1450 (2002).


  35. J Østergaard, R Zamir, Multiple description coding by dithered delta-sigma quantization. IEEE Trans. Inf. Theory. 55(10), 4661–4675 (2009).


  36. R Yeung, R Zamir, in Proceedings of IEEE International Symposium on Information Theory. Multilevel diversity coding via successive refinement (IEEE, Ulm, Germany, 1996).


  37. R Puri, K Ramchandran, Multiple description source coding using forward error correction codes. Conf. Record Thirty-Third Asilomar Conf. Signals Syst. Comput. 1, 342–346 (1999).

  38. RC Singleton, Maximum distance q-nary codes. IEEE Trans. Inf. Theory. 10(2), 116–118 (1964).


  39. S Boyd, L Vandenberghe, Convex Optimization (Cambridge University Press, 2004). Appendix A.5.5.

  40. J Sundararajan, D Shah, M Medard, S Jakubczak, M Mitzenmacher, J Barros, Network coding meets TCP: theory and implementation. Proc. IEEE. 99(3), 490–512 (2011).


  41. DE Quevedo, GC Goodwin, JA De Doná, Finite constraint set receding horizon quadratic control. Int. J. Robust Nonlin. Contr. 14(4), 355–377 (2004).


  42. CE Shannon, A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 and 623–656 (1948).

  43. MS Derpich, J Østergaard, DE Quevedo. Achieving the quadratic Gaussian rate-distortion function for source uncorrelated distortions, (2008). Electronically available on arXiv.org: http://arxiv.org/pdf/0801.1718.pdf. Accessed Oct 2015.

  44. R Zamir, M Feder, On lattice quantization noise. IEEE Trans. Inf. Theory. 42(4), 1152–1159 (1996).



Acknowledgements

This research was partially supported by VILLUM FONDEN Young Investigator Programme, Project No. 10095. The authors would like to thank the reviewers for pointing us to references [23–25] and to [6, 7].

Author information


Correspondence to Jan Østergaard.


Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Østergaard, J., Quevedo, D. Multiple descriptions for packetized predictive control. EURASIP J. Adv. Signal Process. 2016, 45 (2016). https://doi.org/10.1186/s13634-016-0343-1
