
Stochastic analysis of neural network modeling and identification of nonlinear memoryless MIMO systems

Abstract

Neural network (NN) approaches have been widely applied for modeling and identification of nonlinear multiple-input multiple-output (MIMO) systems. This paper proposes a stochastic analysis of a class of these NN algorithms. The class of MIMO systems considered in this paper is composed of a set of single-input nonlinearities followed by a linear combiner. The NN model consists of a set of single-input memoryless NN blocks followed by a linear combiner. A gradient descent algorithm is used for the learning process. Here we give analytical expressions for the mean squared error (MSE), explore the stationary points of the algorithm, evaluate the misadjustment error due to weight fluctuations, and derive recursions for the mean weight transient behavior during the learning process. The paper shows that in the case of independent inputs, the adaptive linear combiner identifies the linear combining matrix of the MIMO system (to within a scaling diagonal matrix) and that each NN block identifies the corresponding unknown nonlinearity to within a scale factor. The paper also investigates the particular case of linear identification of the nonlinear MIMO system. It is shown in this case that, for independent inputs, the adaptive linear combiner identifies a scaled version of the unknown linear combining matrix. The paper is supported with computer simulations which confirm the theoretical results.

Introduction

Neural network[1] approaches have been extensively used in the past few years for nonlinear MIMO system modeling, identification and control, where they have shown very good performance compared to classical techniques[2–6].

If these NN approaches are to be used in real systems, it is important for the algorithm designer and the user to understand their learning behavior and performance capabilities. Several authors have analyzed NN algorithms during the last two decades, which has considerably helped the neural network community to better understand the mechanisms of neural networks[1, 7–15]. For example, the authors in[13] studied a simple structure consisting of two inputs and a single neuron. The authors in[8] studied a memoryless single-input single-output (SISO) system identification model for the single neuron case. In[9] the authors proposed a stochastic analysis of gradient adaptive identification of nonlinear Wiener systems composed of a linear filter followed by a zero-memory nonlinearity. The model was composed of a linear adaptive filter followed by an adaptive parameterized version of the nonlinearity. This study was later generalized[16] to the analysis of stochastic gradient tracking of time-varying polynomial Wiener systems. In[12] the author analyzed NN identification of nonlinear SISO Wiener systems with memory for the case where the adaptive nonlinearity is a memoryless NN with an arbitrary number of neurons. The case of a nonlinear SISO Wiener-Hammerstein system (i.e., an adaptive filter followed by an adaptive zero-memory NN followed by an adaptive filter) was analyzed in[11].

This paper deals with a typical class of nonlinear MIMO systems (Figure 1) which is composed of M inputs, M memoryless nonlinearities, a linear combiner, and L outputs. This corresponds, for example, to MIMO channels used in wireless terrestrial communications[17–22], satellite communications[23, 24], amplifier modeling[25], control of nonlinear MIMO systems[6], etc. Recently, a neural network approach has been proposed to adaptively identify the overall input–output transfer function of this class of MIMO systems and to characterize each component of the system (i.e., the memoryless nonlinearities and the linear combiner)[4]. The proposed NN model is composed of a set of memoryless NN blocks followed by an adaptive linear combiner. Each part of the adaptive system aims at identifying the corresponding part in the unknown MIMO system. The algorithm has been successfully applied to system modeling, channel tracking, and fault detection.

Figure 1. Nonlinear MIMO system.

The purpose of this paper is to provide a stochastic analysis of NN modeling of this class of MIMO systems. The paper provides a general methodology that may be used to solve other problems in stochastic NN learning analysis. The methodology consists of splitting the study into simple structures before studying the complete structure. Here, as a first step, we start by analyzing a simple linear adaptive MIMO scheme (consisting of an adaptive matrix) that identifies the nonlinear MIMO system (i.e., the problem of linear identification of a nonlinear MIMO system). Then we analyze a nonlinear adaptive system in which the nonlinearities are assumed to be known and frozen during the learning process; only the linear combiner is made adaptive. Finally, the complete adaptive scheme is analyzed taking into account the insights given by the analysis of the simpler structures. In our analytical approach, we derive the general formulas and recursions, which we then apply to a case study that we believe is illustrative for the reader.

The paper is organized as follows. The problem statement is given in Section 2. The study of the simple structures is detailed in Section 3. Section 4 presents the analysis for the complete structure. Simulation results and illustrations are given in Section 5. Finally, conclusions and future work are given in Section 6.

Problem statement

Nonlinear MIMO system

The class of nonlinear MIMO systems discussed in this paper is presented in Figure 1. Each input $x_i(n)$ (i = 1,…,M) is nonlinearly transformed by a memoryless nonlinearity $g_i(\cdot)$. The outputs of these nonlinearities are then linearly combined by an L × M matrix $H = [h_{ji}]$ (assumed in this paper to be constant). Matrix H is defined by the unknown system to be identified. For example, in wireless MIMO communication systems, H is the propagation matrix representing the channel between M transmitting antennas and L receiving antennas.

The jth output of the MIMO system is expressed as:

$y_j(n) = \sum_{i=1}^{M} h_{ji}\, g_i\big(x_i(n)\big) + N_j(n)$
(1)

where $N_j$ is a white Gaussian noise with variance $\sigma_0^2$. Let

$X(n) = [x_1(n)\ x_2(n)\ \ldots\ x_M(n)]^t$, $g(X(n)) = [g_1(x_1(n))\ g_2(x_2(n))\ \ldots\ g_M(x_M(n))]^t$, $Y(n) = [y_1(n)\ y_2(n)\ \ldots\ y_L(n)]^t$, and $N(n) = [N_1(n)\ N_2(n)\ \ldots\ N_L(n)]^t$.

The system input–output relationship can be expressed in matrix form as:

$Y(n) = H\, g\big(X(n)\big) + N(n).$
(2)

Neural Network identification structure and algorithm

The neural network (Figure 2) is composed of M blocks. Each block k has a scalar input x k (n) (k = 1,…,M), N neurons and a scalar output. The output of the kth block is expressed as:

$NN_k(n) = \sum_{i=1}^{N} c_{ki}\, f\big(a_{ki}x_k(n) + b_{ki}\big), \quad k = 1,\ldots,M$
(3)
Figure 2. NN identification structure.

where f is the NN activation function, and $a_{ki}$, $b_{ki}$, $c_{ki}$ are, respectively, the input weight, bias term, and output weight of the ith neuron in the kth block. The output $NN_k$ of the kth block is connected to the jth output of the system through weight $w_{jk}$. The jth system output is then expressed as:

$s_j(n) = \sum_{k=1}^{M} w_{jk}\, NN_k(n), \quad j = 1,\ldots,L$
(4)

Weights $w_{jk}$ will be represented by an L × M matrix: $W = [w_{jk}]$. Let

$S(n) = [s_1(n)\ s_2(n)\ \ldots\ s_L(n)]^t$ and $NN(n) = [NN_1(n)\ NN_2(n)\ \ldots\ NN_M(n)]^t$.

Equation (4) can then be expressed in matrix form as:

$S(n) = W \times NN(n).$
(5)

For the learning process, the NN parameters are updated so as to minimize the sum of the squared errors between the unknown system outputs and the corresponding outputs of the model (Figure 3):

$\|e(n)\|^2 = \sum_{j=1}^{L} e_j^2(n).$
(6)
Figure 3. Adaptive learning scheme.

Here $e_j(n) = y_j(n) - s_j(n)$ and $e(n) = [e_1(n)\ e_2(n)\ \ldots\ e_L(n)]^t$.

The gradient descent recursions for weight adaptation are:

$W(n+1) = W(n) + 2\mu\, e(n)\, NN^t(n)$
(7)
$c_{ki}(n+1) = c_{ki}(n) + 2\mu\, f\big(a_{ki}x_k(n)+b_{ki}\big)\sum_{l=1}^{L} w_{lk}\, e_l(n)$
(8)
$a_{ki}(n+1) = a_{ki}(n) + 2\mu\, c_{ki}\, x_k(n)\, f'\big(a_{ki}x_k(n)+b_{ki}\big)\sum_{l=1}^{L} w_{lk}\, e_l(n)$
(9)
$b_{ki}(n+1) = b_{ki}(n) + 2\mu\, c_{ki}\, f'\big(a_{ki}x_k(n)+b_{ki}\big)\sum_{l=1}^{L} w_{lk}\, e_l(n)$
(10)

where μ is a small positive constant and $f'$ represents the derivative: $f'(x) = \frac{\partial f(x)}{\partial x}$.
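To make the model (3)-(5) and the updates (7)-(10) concrete, here is a minimal NumPy sketch of one adaptation step. It assumes the erf activation of the case study below; the dimensions, step size, and initializations are illustrative choices, not values prescribed by the paper.

```python
# One stochastic-gradient step of the NN model (3)-(5) with updates (7)-(10).
import numpy as np
from scipy.special import erf

M, L, N = 2, 2, 3            # inputs, outputs, neurons per block (assumed)
mu = 0.005                   # learning rate (assumed)
rng = np.random.default_rng(0)

# a, b, c are N x M (column k holds block k); W is the L x M linear combiner
a = rng.normal(scale=0.5, size=(N, M))
b = np.zeros((N, M))
c = rng.normal(scale=0.5, size=(N, M))
W = np.zeros((L, M))

f  = lambda u: erf(u / np.sqrt(2.0))                        # activation f
fp = lambda u: np.sqrt(2.0 / np.pi) * np.exp(-u**2 / 2.0)   # derivative f'

def adapt_step(x, y, a, b, c, W):
    """x: length-M input vector, y: length-L desired output (one sample)."""
    u  = a * x + b                      # N x M pre-activations
    fu = f(u)
    NN = np.sum(c * fu, axis=0)         # Eq. (3): block outputs NN_k(n)
    s  = W @ NN                         # Eq. (5): model outputs
    e  = y - s                          # output error e(n)
    back = W.T @ e                      # sum_l w_lk e_l(n), length M
    dW = 2 * mu * np.outer(e, NN)               # Eq. (7)
    dc = 2 * mu * fu * back                     # Eq. (8)
    da = 2 * mu * c * x * fp(u) * back          # Eq. (9)
    db = 2 * mu * c * fp(u) * back              # Eq. (10)
    W += dW; c += dc; a += da; b += db          # in-place updates
    return e
```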

Case study

After the derivation of the general formulas, it is important that we apply them to special cases in order to obtain closed-form expressions of the different recursions that can be illustrated to the reader. We have chosen here a case study that we think illustrates our results well. In this case study, the inputs $x_i(n)$ will be assumed uncorrelated zero-mean Gaussian variables with variance $\sigma_{x_i}^2$. The NN activation function will be taken as the erf function. The unknown nonlinear transfer functions are taken from a family of nonlinear functions of the form $g_i(x) = \alpha_i\, x\, \exp\!\big(-\beta_i x^2/2\big)$, where $\alpha_i$ and $\beta_i$ are positive constants. These nonlinear functions are reasonable models for amplitude conversions of nonlinear high power amplifiers (HPA) used in digital communications[12, 25, 26]. Note that other nonlinear functions may be considered; however, explicit closed-form solutions of the different derivations may not be possible.

Study of simplified structures: Linear adaptation

Before analyzing the full structure, we will analyze the following simplified schemes which will help us understand the complete structure:

  1. The adaptive system is composed of an adaptive linear combiner W (Section 3.1).

  2. The adaptive system is composed of W and scaled versions of the unknown nonlinearities (Section 3.2).

Linear adaptive system

This section studies the linear adaptive system that tries to model the nonlinear MIMO system (Figure 4):

Figure 4. Linear adaptive system.

Mean weight behavior and Wiener solution

Since matrix W is linear, it will not be able to identify the nonlinear blocks. However, we will see that it is able to identify matrix H to within a diagonal scaling matrix if the inputs are zero-mean and independent.

The gradient descent update of matrix W is expressed as:

$W(n+1) = W(n) + 2\mu\, e(n)\, X^t(n) = W(n) + 2\mu\big(H g(X(n)) + N(n) - W(n)X(n)\big)X^t(n)$
(11)

Averaging both sides of (11) and using the standard LMS assumption of small μ[10], we obtain:

$E(W(n+1)) \approx E(W(n)) + 2\mu\big(H R_{g(X)X} - E(W(n))R_{XX}\big) = E(W(n))\big(I - 2\mu R_{XX}\big) + 2\mu H R_{g(X)X}$
(12)

where $R_{XX} = E\big(X X^t\big)$ and $R_{g(X)X} = E\big(g(X) X^t\big)$.

By setting the updating gradient term to zero, it can be shown that this equation has a single stationary point (Wiener solution[10]) which is expressed by:

$W_\infty = W_0 = H U, \quad \text{where } U = R_{g(X)X} R_{XX}^{-1}$
(13)

Following Equation (12), the mean weights can be expressed as a function of the initial condition as:

$E(W(n)) = W(0)\big(I - 2\mu R_{XX}\big)^n + 2\mu H R_{g(X)X}\sum_{p=0}^{n-1}\big(I - 2\mu R_{XX}\big)^p$
(14)

If μ is sufficiently small, the first term converges to 0 and the second term converges to $H R_{g(X)X} R_{XX}^{-1}$.

Hence, the mean weights converge to the Wiener solution:

$W_\infty = W_0 = H R_{g(X)X} R_{XX}^{-1}$
(15)

It can be easily shown that the stability condition on μ is[10]:

$0 < \mu < \frac{1}{\lambda_{\max}}$
(16)

where $\lambda_{\max}$ is the largest eigenvalue of the covariance matrix $R_{XX}$.

Note that for zero-mean independent inputs, U is a diagonal matrix:

$U = R_{g(X)X} R_{XX}^{-1} = \mathrm{diag}\!\left(\frac{E(g_1(x_1)x_1)}{\sigma_{x_1}^2},\ \frac{E(g_2(x_2)x_2)}{\sigma_{x_2}^2},\ \ldots,\ \frac{E(g_M(x_M)x_M)}{\sigma_{x_M}^2}\right)$
(17)

In this case, the linear adaptation allows the identification of matrix H to within a scaling matrix, which depends on the nonlinearities and the input signals. As expected, the scaling matrix reduces to the identity matrix if $g_k(x_k) = x_k$.
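As a quick numerical illustration of (11), (13), and (17), the sketch below runs the linear LMS adaptation against the case-study system and compares the converged W with H·U. The system parameters (H, α_i, β_i, σ_x, σ_0, μ) are illustrative values, chosen to match the simulation section later in the paper.

```python
# Linear LMS identification (Eq. 11) of the nonlinear MIMO system, compared
# with the theoretical Wiener solution W0 = H*U (Eqs. 13, 17, 20).
import numpy as np

rng = np.random.default_rng(1)
M, L = 2, 2
H = np.array([[1.0, 0.3], [0.3, 1.0]])
alpha = np.array([1.0, 1.0]); beta = np.array([1.0, 2.0])
sigma_x = np.array([1.0, 1.0]); sigma_0 = 1e-3
mu = 0.01
g = lambda x: alpha * x * np.exp(-beta * x**2 / 2.0)    # case-study nonlinearity

W = np.zeros((L, M))
for n in range(200_000):
    x = rng.normal(scale=sigma_x)
    y = H @ g(x) + sigma_0 * rng.normal(size=L)
    W += 2 * mu * np.outer(y - W @ x, x)                # Eq. (11)

U = np.diag(alpha * (1 + beta * sigma_x**2) ** -1.5)    # Eq. (20)
print(np.round(W, 3))        # close to H @ U up to weight fluctuations
print(np.round(H @ U, 3))
```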

Application to the case study:

For the particular nonlinear functions given in the case study (see Section 2.3), it is easy to show:

$E\big(x_i(n)g_i(x_i(n))\big) = \alpha_i\sigma_{x_i}^2\big(1+\beta_i\sigma_{x_i}^2\big)^{-3/2} \quad \text{and} \quad E\big(g_i^2(x_i(n))\big) = \alpha_i^2\sigma_{x_i}^2\big(1+2\beta_i\sigma_{x_i}^2\big)^{-3/2}$
(18)

The mean weight transient recursions are expressed as:

$E\big(w_{jk}(n+1)\big) = E\big(w_{jk}(n)\big)\big(1 - 2\mu\sigma_{x_k}^2\big) + 2\mu h_{jk}\alpha_k\sigma_{x_k}^2\big(1+\beta_k\sigma_{x_k}^2\big)^{-3/2}$
(19)

Matrix U reduces to the following diagonal matrix:

$U = \mathrm{diag}\!\left(\frac{\alpha_1}{\big(1+\beta_1\sigma_{x_1}^2\big)^{3/2}},\ \frac{\alpha_2}{\big(1+\beta_2\sigma_{x_2}^2\big)^{3/2}},\ \ldots,\ \frac{\alpha_M}{\big(1+\beta_M\sigma_{x_M}^2\big)^{3/2}}\right)$
(20)

Transient MSE and Wiener MSE

The transient MSE is determined by:

$E\|e(n)\|^2 = E\big\|H g(X(n)) + N(n) - W(n)X(n)\big\|^2 = \sum_{j=1}^{L} E\big(e_j^2(n)\big)$
(21)

where:

$E\big(e_j^2(n)\big) = E\big(H_j^t g(X(n)) + N_j(n) - W_j^t(n)X(n)\big)^2$
(22)

where $W_j(n) = [w_{j1}(n)\ w_{j2}(n)\ \ldots\ w_{jM}(n)]^t$ and $H_j = [h_{j1}\ h_{j2}\ \ldots\ h_{jM}]^t$.

Using the independence of noise and weights at time n, we get:

$E\big(e_j^2(n)\big) = \sigma_0^2 + E\big(H_j^t g(X(n)) - W_j^t(n)X(n)\big)^2 = \sigma_0^2 + H_j^t R_{g(X)g(X)} H_j - 2H_j^t R_{g(X)X} E\big(W_j(n)\big) + E\big(W_j^t(n)R_{XX}W_j(n)\big)$
(23)

The total MSE is therefore expressed as:

$E\|e(n)\|^2 = L\sigma_0^2 + \sum_{j=1}^{L}\Big[H_j^t R_{g(X)g(X)} H_j - 2H_j^t R_{g(X)X} E\big(W_j(n)\big) + E\big(W_j^t(n)R_{XX}W_j(n)\big)\Big].$
(24)

Wiener MSE:

The Wiener MSE, $\zeta_0 = E\|e_{W_0}(n)\|^2$, is the minimum MSE that can be reached by the system if W is equal to the Wiener solution $W_0 = HU$. It can be easily shown that:

$\zeta_0 = E\|e_{W_0}(n)\|^2 = L\sigma_0^2 + E\big\|H g(X(n)) - W_0 X(n)\big\|^2 = L\sigma_0^2 + E\big\|H\big(g(X(n)) - U X(n)\big)\big\|^2$
(25)

It is clear from this equation that if the unknown functions are linear, then the Wiener MSE reduces to the noise power. The MSE is always larger than ζ0 because of the misadjustment error introduced by the weight fluctuations.

Now we can write the MSE as a function of the Wiener MSE:

$E\|e(n)\|^2 = E\big\|H g(X(n)) + N(n) - W(n)X(n)\big\|^2 = E\big\|e_{W_0}(n) - \big(W(n) - W_0\big)X(n)\big\|^2$
(26)

Let the instantaneous deviation of the matrix weights with respect to the Wiener solution be denoted by:

$V(n) = [v_{jk}(n)] = W(n) - W_0.$
(27)

We have:

$E\|e(n)\|^2 = E\big\|e_{W_0}(n) - V(n)X(n)\big\|^2.$
(28)

This expression is similar to that of the well-known LMS algorithm[10], and can be evaluated as the sum of the minimum error and excess error (or misadjustment) as:

$E\|e(n)\|^2 = \zeta_0 + \sum_{j=1}^{L} tr\big(R_{XX} K_{V_jV_j}(n)\big)$
(29)

where $V_j(n) = [v_{j1}\ v_{j2}\ \ldots\ v_{jM}]^t$ and $K_{V_jV_j}(n) = E\big(V_j(n)V_j^t(n)\big)$.

The misadjustment is expressed as:

$\Delta(n) = tr\Big(R_{XX}\sum_{j=1}^{L} K_{V_jV_j}(n)\Big).$
(30)

At convergence, we have:

$E\|e(\infty)\|^2 = \zeta_0 + \Delta(\infty).$
(31)

Derivation of the misadjustment:

From Equation (11) it is easy to show that the weight fluctuations follow the recursion:

$V(n+1) = V(n) + 2\mu\big(e_{W_0}(n) - V(n)X(n)\big)X^t(n)$
(32)

Taking the mean of this equation and applying the orthogonality principle between the input vector and the Wiener error, we get:

$E\big(V(n+1)\big) = E\big(V(n)\big)\big(I - 2\mu R_{XX}\big)$
(33)

Thus, as expected, if μ is sufficiently small E( V (n)) converges to 0.

Similarly, for each vector V j we can obtain the following recursion:

$V_j(n+1) = V_j(n) + 2\mu\big(e_{W_0 j}(n) - X^t(n)V_j(n)\big)X(n)$
(34)

The evaluation of the covariance matrix of the weight fluctuations is obtained by multiplying both sides of Equation (34) by $V_j^t(n+1)$ and averaging:

$K_{V_jV_j}(n+1) = K_{V_jV_j}(n) - 2\mu R_{XX}K_{V_jV_j}(n) - 2\mu K_{V_jV_j}(n)R_{XX} + 2\mu E\big[e_{W_0 j}(n)\,X\,V_j^t\big(I - 2\mu XX^t\big)\big] + 2\mu E\big[e_{W_0 j}(n)\,X\,V_j^t\big(I - 2\mu XX^t\big)\big]^t + 4\mu^2 E\big[XX^t K_{V_jV_j} XX^t\big] + 4\mu^2 E\big[e_{W_0 j}^2(n)\,XX^t\big]$
(35)

These expectations are derived in Appendix III, which yields:

$K_{V_jV_j}(n+1) = K_{V_jV_j}(n) - 2\mu R_{XX}K_{V_jV_j}(n) - 2\mu K_{V_jV_j}(n)R_{XX} - 4\mu^2\Big[E\big(H_j^t g(X)\,X\,E(V_j^t(n))\,XX^t\big) - tr\big(R_{XX}W_{0j}E(V_j^t(n))\big)R_{XX} - 2R_{XX}W_{0j}E(V_j^t(n))R_{XX}\Big] - 4\mu^2\Big[E\big(H_j^t g(X)\,X\,E(V_j^t(n))\,XX^t\big) - tr\big(R_{XX}W_{0j}E(V_j^t(n))\big)R_{XX} - 2R_{XX}W_{0j}E(V_j^t(n))R_{XX}\Big]^t + 4\mu^2\Big[tr\big(R_{XX}K_{V_jV_j}(n)\big)R_{XX} + 2R_{XX}K_{V_jV_j}(n)R_{XX}\Big] + 4\mu^2\Big[E\big(g(X)g^t(X)H_jH_j^t XX^t\big) + \sigma_0^2 R_{XX} - tr\big(R_{XX}W_{0j}W_{0j}^t\big)R_{XX} - 2R_{XX}W_{0j}W_{0j}^tR_{XX}\Big]$
(36)

Taking into account that $E(V_j(\infty)) = 0$, $K_{V_jV_j}(\infty)$ can be obtained by solving the following equation:

$R_{XX}K_{V_jV_j}(\infty) + K_{V_jV_j}(\infty)R_{XX} - 2\mu\, tr\big(R_{XX}K_{V_jV_j}(\infty)\big)R_{XX} - 4\mu R_{XX}K_{V_jV_j}(\infty)R_{XX} = 2\mu E\big(g(X)g^t(X)H_jH_j^tXX^t\big) + 2\mu\sigma_0^2R_{XX} - 2\mu\, tr\big(R_{XX}W_{0j}W_{0j}^t\big)R_{XX} - 4\mu R_{XX}W_{0j}W_{0j}^tR_{XX}$
(37)

This expression holds for any input signal. It can be simplified if $R_{XX} = \sigma_x^2 I$. In this case we have:

$tr\big(R_{XX}K_{V_jV_j}(\infty)\big) = \mu\,\frac{\sigma_0^2\sigma_x^2 M + tr\big(E\big(g(X)g^t(X)H_jH_j^tXX^t\big)\big) - \sigma_x^4(M+2)\,tr\big(W_{0j}W_{0j}^t\big)}{1 - \mu\sigma_x^2(M+2)}$
(38)

It is now easy to determine the total misadjustment:

$\Delta(\infty) = \sum_{j=1}^{L} tr\big(R_{XX}K_{V_jV_j}(\infty)\big) = \mu\,\frac{\sigma_0^2\sigma_x^2 ML + tr\big(E\big(g(X)g^t(X)\sum_{j=1}^{L}H_jH_j^tXX^t\big)\big) - \sigma_x^4(M+2)\,tr\big(W_0W_0^t\big)}{1 - \mu\sigma_x^2(M+2)}$
(39)

Note that, as expected, in the case of linear functions Δ(∞) reduces to:

$\Delta(\infty)\big|_{g(X)=X} = \frac{\mu\sigma_0^2\sigma_x^2 ML}{1 - \mu\sigma_x^2(M+2)}.$
(40)

The additional terms are due to the nonlinearities and they should be calculated specifically for each nonlinearity.

Application to the case study:

The MSE is expressed as:

$E\|e(n)\|^2 = L\sigma_0^2 + \sum_{j=1}^{L}\sum_{k=1}^{M}\sigma_{x_k}^2\Big[\alpha_k^2\big(1+2\beta_k\sigma_{x_k}^2\big)^{-3/2}h_{jk}^2 - 2\alpha_k\big(1+\beta_k\sigma_{x_k}^2\big)^{-3/2}h_{jk}\,E\big(w_{jk}(n)\big) + E\big(w_{jk}^2(n)\big)\Big]$
(41)

The Wiener MSE is expressed in this case as:

$\zeta_0 = L\sigma_0^2 + \sum_{i=1}^{M}\alpha_i^2\sigma_{x_i}^2\Big[\big(1+2\beta_i\sigma_{x_i}^2\big)^{-3/2} - \big(1+\beta_i\sigma_{x_i}^2\big)^{-3}\Big]\sum_{j=1}^{L}h_{ji}^2$
(42)
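The closed-form recursions above can be iterated directly. The short sketch below propagates the mean-weight recursion (19) and evaluates the transient MSE (41) with $E(w_{jk}^2(n))$ approximated by $\big(E(w_{jk}(n))\big)^2$, i.e., neglecting the weight-fluctuation (misadjustment) term of (39); all numerical values are illustrative.

```python
# Theoretical mean-weight transient (Eq. 19) and transient MSE (Eq. 41),
# with weight fluctuations neglected (no misadjustment term).
import numpy as np

M, L = 2, 2
H = np.array([[1.0, 0.3], [0.3, 1.0]])
alpha = np.array([1.0, 1.0]); beta = np.array([1.0, 2.0])
s2 = np.array([1.0, 1.0])                          # sigma_{x_k}^2
sigma_0 = 1e-3; mu = 0.01

G2 = alpha**2 * s2 * (1 + 2 * beta * s2) ** -1.5   # E(g_k^2(x_k)), Eq. (18)
G1 = alpha * s2 * (1 + beta * s2) ** -1.5          # E(x_k g_k(x_k)), Eq. (18)

Ew, mse = np.zeros((L, M)), []
for n in range(2000):
    mse.append(L * sigma_0**2 + np.sum(G2 * H**2 - 2 * G1 * H * Ew + s2 * Ew**2))
    Ew = Ew * (1 - 2 * mu * s2) + 2 * mu * H * G1  # Eq. (19)
print(mse[0], mse[-1])   # the final value approaches the Wiener MSE (42)
```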

Adaptive W, the nonlinearities are frozen and known with scale factors

In this section, matrix W is adaptive, the nonlinearities are frozen and known with scale factors (Figure 5).

Figure 5. Simplified scheme: the nonlinearities are frozen and assumed known with a scaling factor.

Mean weight behavior and stationary points

The gradient descent update of matrix W is expressed as:

$W(n+1) = W(n) + 2\mu\, e(n)\big(\Omega g(X(n))\big)^t = W(n) + 2\mu\big[H g(X(n)) + N(n) - W(n)\Omega g(X(n))\big]\big(\Omega g(X(n))\big)^t$
(43)

where $\Omega = \mathrm{diag}\big(\eta_1,\ \eta_2,\ \ldots,\ \eta_M\big)$.

Averaging both sides of (43) and using the standard LMS assumption of small μ, we obtain:

$E\big(W(n+1)\big) \approx E\big(W(n)\big) + 2\mu\big(H\Omega R_{g(X)g(X)} - E(W(n))\Omega^2 R_{g(X)g(X)}\big) = E\big(W(n)\big)\big(I - 2\mu\Omega^2 R_{g(X)g(X)}\big) + 2\mu H\Omega R_{g(X)g(X)}$
(44)

These recursions have a single stationary point (Wiener solution) which is:

$W_\infty = W_0 = H\times\Omega^{-1}$
(45)

Following Equation (44), the mean weight behavior can be expressed as a function of the initial condition as:

$E\big(W(n)\big) = W(0)\big(I - 2\mu\Omega^2 R_{g(X)g(X)}\big)^n + 2\mu H\Omega R_{g(X)g(X)}\sum_{p=0}^{n-1}\big(I - 2\mu\Omega^2 R_{g(X)g(X)}\big)^p$
(46)

Hence, if μ is sufficiently small, it can be shown that the mean weights converge to the Wiener solution:

$W_\infty = W_0 = H\Omega^{-1}.$
(47)

The stability condition on μ is $0 < \mu < \frac{1}{\lambda_{\max}}$,

where $\lambda_{\max}$ is the largest eigenvalue of the matrix $\Omega^2 R_{g(X)g(X)}$.

Thus, if each nonlinear function g k (.) is known with a scaling factor η k , then weights h jk will be identified by w jk (to the inverse of the scaling factor).

MSE

We have:

$E\|e(n)\|^2 = E\big\|H g(X(n)) + N(n) - W(n)\Omega g(X(n))\big\|^2 = \sum_{j=1}^{L} E\big(e_j^2(n)\big)$
(48)

where:

$E\big(e_j^2(n)\big) = E\big(H_j^t g(X(n)) + N_j(n) - W_j^t(n)\Omega g(X(n))\big)^2$
(49)

Using the independence of noise and weights at time n, we obtain:

$E\big(e_j^2(n)\big) = \sigma_0^2 + E\big(\big(H_j^t - W_j^t(n)\Omega\big)g(X(n))\big)^2 = \sigma_0^2 + H_j^t R_{g(X)g(X)} H_j - 2H_j^t\Omega R_{g(X)g(X)} E\big(W_j(n)\big) + E\big(W_j^t(n)\Omega^2 R_{g(X)g(X)} W_j(n)\big)$

The MSE is therefore expressed as:

$E\|e(n)\|^2 = L\sigma_0^2 + \sum_{j=1}^{L}\Big[H_j^t R_{g(X)g(X)} H_j - 2H_j^t\Omega R_{g(X)g(X)} E\big(W_j(n)\big) + E\big(W_j^t(n)\Omega^2 R_{g(X)g(X)} W_j(n)\big)\Big]$
(50)

Wiener MSE:

The Wiener MSE can be easily expressed as:

$\zeta_0 = E\|e_{W_0}(n)\|^2 = L\sigma_0^2 + E\big\|\big(H - W_0\Omega\big)g(X(n))\big\|^2 = L\sigma_0^2$
(51)

Therefore the Wiener MSE is equal to the noise floor: there are no terms due to the nonlinearities. This is expected since the nonlinearities are known with a scaling matrix Ω (we have seen that the scaling matrix is canceled by $W_0$ since $W_0 = H\Omega^{-1}$).

Let Z (n) = Ω g(X(n)), we can then express the MSE as a function of ζ0, the weight fluctuation vector V(n) = W(n)–W0, and Z(n):

$E\|e(n)\|^2 = E\big\|e_{W_0}(n) - \big(W(n) - W_0\big)Z(n)\big\|^2 = \zeta_0 + E\big\|V(n)Z(n)\big\|^2 = \zeta_0 + \sum_{j=1}^{L} tr\big(R_{ZZ}K_{V_jV_j}(n)\big)$
(52)

Similarly to Equation (29), the misadjustment is expressed as:

$\Delta(n) = tr\Big(R_{ZZ}\sum_{j=1}^{L} K_{V_jV_j}(n)\Big).$
(53)

The steady state MSE is then expressed as:

$E\|e(\infty)\|^2 = \zeta_0 + \Delta(\infty).$
(54)

Derivation of the misadjustment:

It is easy to show that the weight fluctuations follow the recursion:

$V(n+1) = V(n) + 2\mu\big(e_{W_0}(n) - V(n)Z(n)\big)Z^t(n).$
(55)

Taking the mean of this equation and applying the orthogonality principle between the input vector and the Wiener error, we obtain:

$E\big(V(n+1)\big) = E\big(V(n)\big)\big(I - 2\mu R_{ZZ}\big).$
(56)

Thus, as expected, if μ is sufficiently small, E(V(n)) converges to 0.

For each vector V j we have similar recursions:

$V_j(n+1) = V_j(n) + 2\mu\big(e_{W_0 j}(n) - Z^t(n)V_j(n)\big)Z(n).$
(57)

The evaluation of the covariance matrix of the weight fluctuations is obtained by multiplying both sides of Equation (57) by $V_j^t(n+1)$ and averaging:

$K_{V_jV_j}(n+1) = K_{V_jV_j}(n) - 2\mu R_{ZZ}K_{V_jV_j}(n) - 2\mu K_{V_jV_j}(n)R_{ZZ} + 2\mu E\big[e_{W_0 j}(n)\,Z(n)\,V_j^t\big(I - 2\mu Z(n)Z^t(n)\big)\big] + 2\mu E\big[e_{W_0 j}(n)\,Z(n)\,V_j^t\big(I - 2\mu Z(n)Z^t(n)\big)\big]^t + 4\mu^2 E\big[Z(n)Z^t(n)K_{V_jV_j}Z(n)Z^t(n)\big] + 4\mu^2 E\big[e_{W_0 j}^2(n)\,Z(n)Z^t(n)\big]$
(58)

In a similar way as in Appendix III, K VjVj (∞) can be obtained by solving the following equation:

$R_{ZZ}K_{V_jV_j}(\infty) + K_{V_jV_j}(\infty)R_{ZZ} - 2\mu\, tr\big(R_{ZZ}K_{V_jV_j}(\infty)\big)R_{ZZ} - 4\mu R_{ZZ}K_{V_jV_j}(\infty)R_{ZZ} = 2\mu E\big(g(X)g^t(X)H_jH_j^tZZ^t\big) + 2\mu\sigma_0^2R_{ZZ} - 2\mu\, tr\big(R_{ZZ}W_{0j}W_{0j}^t\big)R_{ZZ} - 4\mu R_{ZZ}W_{0j}W_{0j}^tR_{ZZ}$
(59)

This expression cannot be further simplified because $R_{ZZ}$ is not necessarily of the form $\sigma_z^2 I$.

Therefore, $tr\big(R_{ZZ}K_{V_jV_j}(\infty)\big)$ should be calculated for each nonlinearity and for each Ω.

It is interesting to study the case where the nonlinearities are known with the same scaling factor, i.e., Ω = ηI. In this case, and if the input vectors are independent and the outputs of the nonlinearities are zero-mean and of equal variance $\sigma_g^2$, we have:

$tr\big(R_{ZZ}K_{V_jV_j}(\infty)\big) = \frac{\mu\sigma_0^2\eta^2\sigma_g^2 M}{1 - \mu\eta^2\sigma_g^2(M+2)}$
(60)

As expected, the total misadjustment reduces to:

$\Delta(\infty) = \sum_{j=1}^{L} tr\big(R_{ZZ}K_{V_jV_j}(\infty)\big) = \frac{\mu\sigma_0^2\eta^2\sigma_g^2 ML}{1 - \mu\eta^2\sigma_g^2(M+2)}.$
(61)

Here the value of the misadjustment is similar to that of linear identification of a linear system (LMS algorithm). This is expected since in this case there are no errors due to the approximation of the nonlinearities.
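For a quick numerical feel of Eq. (61), the short snippet below evaluates the steady-state misadjustment for a set of placeholder parameter values; it is the expression one would use to pick μ for a target excess MSE.

```python
# Steady-state misadjustment of Eq. (61) for Omega = eta*I (placeholder values).
mu, sigma_0, eta, sigma_g2, M, L = 5e-3, 1e-3, 1.0, 0.5, 2, 2
delta = (mu * sigma_0**2 * eta**2 * sigma_g2 * M * L) / (1 - mu * eta**2 * sigma_g2 * (M + 2))
print(delta)   # excess MSE added to the Wiener MSE L*sigma_0^2
```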

Case study

For the case study, it is easy to show that $G_{ii} = E\big(g_i^2(x_i(n))\big) = \alpha_i^2\sigma_{x_i}^2\big(1+2\beta_i\sigma_{x_i}^2\big)^{-3/2}$. This yields:

$E\big(e_j^2(n)\big) = \sigma_0^2 + \sum_{i=1}^{M}\alpha_i^2\sigma_{x_i}^2\big(1+2\beta_i\sigma_{x_i}^2\big)^{-3/2}\big(h_{ji} - \eta_i w_{ji}(n)\big)^2$
(62)

Study of the full structure

This section deals with the full structure (Figures 2 and 3). All the NN and matrix W weights are updated.

Mean weight transient behavior

We take the following notations for the mean weights: $E\big(w_{jk}(n)\big) = \bar w_{jk}(n)$, $E\big(c_{ki}(n)\big) = \bar c_{ki}(n)$, $E\big(a_{ki}(n)\big) = \bar a_{ki}(n)$, $E\big(b_{ki}(n)\big) = \bar b_{ki}(n)$.

The update of matrix W is expressed as:

$W(n+1) = W(n) + 2\mu\, e(n)\, NN(X(n))^t = W(n) + 2\mu\big[H g(X(n)) + N(n) - W(n)NN(X(n))\big]NN(X(n))^t$
(63)

Averaging both sides of (63) and using the standard LMS assumption of small μ, we obtain:

$E\big(W(n+1)\big) \approx E\big(W(n)\big) + 2\mu\big[H R_{g(X)NN(X)}(n) - E(W(n))R_{NN(X)NN(X)}(n)\big] = E\big(W(n)\big)\big(I - 2\mu R_{NN(X)NN(X)}(n)\big) + 2\mu H R_{g(X)NN(X)}(n)$
(64)

where $R_{NN(X)NN(X)}(n) = E\big(NN(X(n))\,NN(X(n))^t\big)$ and $R_{g(X)NN(X)}(n) = E\big(g(X(n))\,NN(X(n))^t\big)$.

These matrices are time-dependent since they depend on the NN block weights which are updated through time.

Using the scalar notation we have:

$\bar w_{jk}(n+1) = \bar w_{jk}(n) + 2\mu E\Big[\Big(\sum_{i=1}^{M}h_{ji}g_i(x_i(n)) - \sum_{l=1}^{M}w_{jl}NN_l(n)\Big)\sum_{m=1}^{N}c_{km}f\big(a_{km}x_k(n)+b_{km}\big)\Big] \approx \bar w_{jk}(n) + 2\mu\Big[\sum_{i,m}^{M,N}h_{ji}\bar c_{km}E\big(g_i(x_i(n))f(\bar a_{km}x_k(n)+\bar b_{km})\big) - \sum_{l,m,i}^{M,N,N}\bar w_{jl}\bar c_{li}\bar c_{km}E\big(f(\bar a_{li}x_l(n)+\bar b_{li})f(\bar a_{km}x_k(n)+\bar b_{km})\big)\Big]$
(65)

Let $K_i(a_{km},b_{km}) = E\big(g_i(x_i(n))f(a_{km}x_k(n)+b_{km})\big)$ and $F_{lk}(a_{li},b_{li},a_{km},b_{km}) = E\big(f(a_{li}x_l(n)+b_{li})f(a_{km}x_k(n)+b_{km})\big)$.

With these notations we have:

$\bar w_{jk}(n+1) = \bar w_{jk}(n) + 2\mu\Big[\sum_{i,m}^{M,N}h_{ji}\bar c_{km}K_i(\bar a_{km},\bar b_{km}) - \sum_{l,i,m}^{M,N,N}\bar w_{jl}\bar c_{li}\bar c_{km}F_{lk}(\bar a_{li},\bar b_{li},\bar a_{km},\bar b_{km})\Big]$
(66)

For the NN block weights we have:

$\bar c_{ki}(n+1) = \bar c_{ki}(n) + 2\mu E\Big[\sum_{l=1}^{L}w_{lk}e_l(n)\,f\big(a_{ki}x_k(n)+b_{ki}\big)\Big] \approx \bar c_{ki}(n) + 2\mu\sum_{l=1}^{L}\bar w_{lk}\Big[\sum_{p=1}^{M}h_{lp}E\big(g_p(x_p(n))f(\bar a_{ki}x_k(n)+\bar b_{ki})\big) - \sum_{m=1}^{M}\bar w_{lm}\sum_{q=1}^{N}\bar c_{mq}E\big(f(\bar a_{mq}x_m(n)+\bar b_{mq})f(\bar a_{ki}x_k(n)+\bar b_{ki})\big)\Big] = \bar c_{ki}(n) + 2\mu\sum_{l=1}^{L}\bar w_{lk}\Big[\sum_{p=1}^{M}h_{lp}K_p(\bar a_{ki},\bar b_{ki}) - \sum_{m=1}^{M}\bar w_{lm}\sum_{q=1}^{N}\bar c_{mq}F_{mk}(\bar a_{mq},\bar b_{mq},\bar a_{ki},\bar b_{ki})\Big]$
(67)
$\bar a_{ki}(n+1) = \bar a_{ki}(n) + 2\mu E\Big[c_{ki}\sum_{l=1}^{L}w_{lk}e_l(n)\,x_k(n)f'\big(a_{ki}x_k(n)+b_{ki}\big)\Big] \approx \bar a_{ki}(n) + 2\mu\bar c_{ki}\sum_{l=1}^{L}\bar w_{lk}\Big[\sum_{p=1}^{M}h_{lp}\frac{\partial K_p(\bar a_{ki},\bar b_{ki})}{\partial\bar a_{ki}} - \sum_{(m,q)\neq(k,i)}^{M,N}\bar w_{lm}\bar c_{mq}\frac{\partial F_{mk}(\bar a_{mq},\bar b_{mq},\bar a_{ki},\bar b_{ki})}{\partial\bar a_{ki}} - \frac{1}{2}\bar w_{lk}\bar c_{ki}\frac{\partial F_{kk}(\bar a_{ki},\bar b_{ki},\bar a_{ki},\bar b_{ki})}{\partial\bar a_{ki}}\Big]$
(68)
$\bar b_{ki}(n+1) = \bar b_{ki}(n) + 2\mu E\Big[c_{ki}\sum_{l=1}^{L}w_{lk}e_l(n)\,f'\big(a_{ki}x_k(n)+b_{ki}\big)\Big] \approx \bar b_{ki}(n) + 2\mu\bar c_{ki}\sum_{l=1}^{L}\bar w_{lk}\Big[\sum_{p=1}^{M}h_{lp}\frac{\partial K_p(\bar a_{ki},\bar b_{ki})}{\partial\bar b_{ki}} - \sum_{m,q}^{M,N}\bar w_{lm}\bar c_{mq}\frac{\partial F_{mk}(\bar a_{mq},\bar b_{mq},\bar a_{ki},\bar b_{ki})}{\partial\bar b_{ki}}\Big]$
(69)

These equations hold for any nonlinearity. In the following, we will calculate them explicitly for the case study described in Section 2.3.

Application to the case study:

Since the inputs are independent and zero-mean, we have $K_i(a_{km},b_{km}) = 0$ for $i \neq k$, and (see Appendix I)

$K_i(a_{im},b_{im}) = E\big(g_i(x_i(n))f(a_{im}x_i(n)+b_{im})\big) \approx \sqrt{\tfrac{2}{\pi}}\,\frac{\alpha_i\sigma_x^2}{1+\beta_i\sigma_x^2}\cdot\frac{a_{im}}{\sqrt{1+\sigma_x^2\big(a_{im}^2+\beta_i\big)}} - \frac{1}{2}\sqrt{\tfrac{2}{\pi}}\,\frac{\alpha_i\sigma_x^2\,b_{im}^2\,a_{im}}{\big(1+\sigma_x^2\big(\beta_i+a_{im}^2\big)\big)^{3/2}}$
(70)

On the other hand, we have $F_{lk}(a_{li},b_{li},a_{km},b_{km}) = 0$ for $l \neq k$, and (see Appendix I)

$F_{kk}(a_{ki},b_{ki},a_{km},b_{km}) = E\big(f(a_{ki}x_k(n)+b_{ki})f(a_{km}x_k(n)+b_{km})\big) \approx \frac{2}{\pi}\sin^{-1}\!\frac{a_{ki}a_{km}\sigma_x^2}{\sqrt{1+\sigma_x^2a_{ki}^2+\sigma_x^2a_{km}^2+\sigma_x^4a_{ki}^2a_{km}^2}} - \frac{1}{\pi}\,\frac{b_{ki}^2\sigma_x^2a_{ki}a_{km}}{\big(1+\sigma_x^2a_{ki}^2\big)\sqrt{1+\sigma_x^2\big(a_{ki}^2+a_{km}^2\big)}} - \frac{1}{\pi}\,\frac{b_{km}^2\sigma_x^2a_{ki}a_{km}}{\big(1+\sigma_x^2a_{km}^2\big)\sqrt{1+\sigma_x^2\big(a_{ki}^2+a_{km}^2\big)}} + \frac{2}{\pi}\,\frac{b_{ki}b_{km}}{\sqrt{1+\sigma_x^2\big(a_{ki}^2+a_{km}^2\big)}}$
(71)

Inserting these expressions in equations (66)-(69), we obtain:

$\bar w_{jk}(n+1) = \bar w_{jk}(n) + 2\mu\Big[h_{jk}\sum_{m=1}^{N}\bar c_{km}K_k(\bar a_{km},\bar b_{km}) - \bar w_{jk}\sum_{m,i}^{N,N}\bar c_{ki}\bar c_{km}F_{kk}(\bar a_{ki},\bar b_{ki},\bar a_{km},\bar b_{km})\Big]$
(72)
$\bar c_{ki}(n+1) = \bar c_{ki}(n) + 2\mu\sum_{l=1}^{L}\bar w_{lk}\Big[h_{lk}K_k(\bar a_{ki},\bar b_{ki}) - \bar w_{lk}\sum_{q=1}^{N}\bar c_{kq}F_{kk}(\bar a_{kq},\bar b_{kq},\bar a_{ki},\bar b_{ki})\Big]$
(73)
$\bar a_{ki}(n+1) = \bar a_{ki}(n) + 2\mu\bar c_{ki}\sum_{l=1}^{L}\bar w_{lk}\Big[h_{lk}\frac{\partial K_k(\bar a_{ki},\bar b_{ki})}{\partial\bar a_{ki}} - \bar w_{lk}\sum_{q\neq i}^{N}\bar c_{kq}\frac{\partial F_{kk}(\bar a_{kq},\bar b_{kq},\bar a_{ki},\bar b_{ki})}{\partial\bar a_{ki}} - \frac{1}{2}\bar w_{lk}\bar c_{ki}\frac{\partial F_{kk}(\bar a_{ki},\bar b_{ki},\bar a_{ki},\bar b_{ki})}{\partial\bar a_{ki}}\Big]$
(74)
$\bar b_{ki}(n+1) = \bar b_{ki}(n) + 2\mu\bar c_{ki}\sum_{l=1}^{L}\bar w_{lk}\Big[h_{lk}\frac{\partial K_k(\bar a_{ki},\bar b_{ki})}{\partial\bar b_{ki}} - \bar w_{lk}\sum_{q=1}^{N}\bar c_{kq}\frac{\partial F_{kk}(\bar a_{kq},\bar b_{kq},\bar a_{ki},\bar b_{ki})}{\partial\bar b_{ki}}\Big]$
(75)

The explicit expressions of the different derivatives are detailed in Appendix II.
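The closed forms (70)-(71), as reconstructed above, can be coded directly and checked against Monte Carlo estimates. The sketch below does so for the erf activation and the case-study nonlinearity; the weight and variance values are arbitrary test points, not values from the paper.

```python
# Closed forms K_k (Eq. 70) and F_kk (Eq. 71) for the case study,
# checked against Monte Carlo estimates (second-order Taylor in the biases).
import numpy as np
from scipy.special import erf

f = lambda u: erf(u / np.sqrt(2.0))

def K_closed(a, b, alpha, beta, s2):
    """E[g(x) f(a x + b)], x ~ N(0, s2), g(x) = alpha*x*exp(-beta*x^2/2)."""
    k0 = np.sqrt(2/np.pi) * alpha * s2 * a / ((1 + beta*s2) * np.sqrt(1 + s2*(a**2 + beta)))
    return k0 - 0.5*np.sqrt(2/np.pi) * alpha * s2 * b**2 * a / (1 + s2*(beta + a**2))**1.5

def F_closed(a1, b1, a2, b2, s2):
    """E[f(a1 x + b1) f(a2 x + b2)], x ~ N(0, s2)."""
    S = 1 + s2*(a1**2 + a2**2)
    out = (2/np.pi) * np.arcsin(a1*a2*s2 / np.sqrt(1 + s2*a1**2 + s2*a2**2 + s2**2*a1**2*a2**2))
    out -= (1/np.pi) * b1**2 * s2*a1*a2 / ((1 + s2*a1**2) * np.sqrt(S))
    out -= (1/np.pi) * b2**2 * s2*a1*a2 / ((1 + s2*a2**2) * np.sqrt(S))
    out += (2/np.pi) * b1*b2 / np.sqrt(S)
    return out

rng = np.random.default_rng(2)
x = rng.normal(size=1_000_000)                  # sigma_x^2 = 1
alpha, beta = 1.0, 1.0
g = alpha * x * np.exp(-beta * x**2 / 2)
print(K_closed(0.8, 0.1, alpha, beta, 1.0), np.mean(g * f(0.8*x + 0.1)))
print(F_closed(0.8, 0.1, 1.2, -0.2, 1.0), np.mean(f(0.8*x + 0.1) * f(1.2*x - 0.2)))
```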

Stationary points

We obtain the stationary points by setting to zero the expected updating gradient terms in (64) and (67)-(69).

For W, we obtain:

$W_0 = H\times R_{g(X)NN(X)}R_{NN(X)NN(X)}^{-1} = H\times U, \quad \text{where } U = R_{g(X)NN(X)}R_{NN(X)NN(X)}^{-1}.$
(76)

For c ki we obtain the equations:

$\sum_{l=1}^{L}w_{lk}\Big[h_{lk}K_k(a_{ki},b_{ki}) - w_{lk}\sum_{q=1}^{N}c_{kq}F_{kk}(a_{kq},b_{kq},a_{ki},b_{ki})\Big] = 0$
(77)

For a ki we obtain the equations:

$\sum_{l=1}^{L}w_{lk}\Big[h_{lk}\frac{\partial K_k(a_{ki},b_{ki})}{\partial a_{ki}} - w_{lk}\sum_{q\neq i}^{N}c_{kq}\frac{\partial F_{kk}(a_{kq},b_{kq},a_{ki},b_{ki})}{\partial a_{ki}} - \frac{1}{2}w_{lk}c_{ki}\frac{\partial F_{kk}(a_{ki},b_{ki},a_{ki},b_{ki})}{\partial a_{ki}}\Big] = 0$
(78)

For b ki we obtain:

$\sum_{l=1}^{L}w_{lk}\Big[h_{lk}\frac{\partial K_k(a_{ki},b_{ki})}{\partial b_{ki}} - w_{lk}\sum_{q=1}^{N}c_{kq}\frac{\partial F_{kk}(a_{kq},b_{kq},a_{ki},b_{ki})}{\partial b_{ki}}\Big] = 0$
(79)

The above equations are nonlinear in the NN variables. They can be solved numerically, but they are very difficult to solve analytically.

Convergence of the algorithm to the stationary points:

It is always interesting to show whether an algorithm is capable of converging to its stationary points or not. In our case it is difficult to establish this, since the updating equations of the weights are nonlinear, except for W.

In the case where the NN weights are frozen we can establish the convergence condition for W.

In this case we have:

$E\big(W(n+1)\big) = E\big(W(n)\big)\big(I - 2\mu R_{NN(X)NN(X)}\big) + 2\mu H R_{g(X)NN(X)}$
(80)

The covariance matrices are fixed, since in this case the NN weights are frozen.

E(W(n)) can be expressed as a function of the initial condition as:

$E\big(W(n)\big) = W(0)\big(I - 2\mu R_{NN(X)NN(X)}\big)^n + 2\mu H R_{g(X)NN(X)}\sum_{p=0}^{n-1}\big(I - 2\mu R_{NN(X)NN(X)}\big)^p$
(81)

If μ is sufficiently small, the steady state solution to (81) is:

$W_\infty = W_0 = H R_{g(X)NN(X)}R_{NN(X)NN(X)}^{-1}.$
(82)

Hence, the mean weights converge to the stationary point W0, and the stability condition on μ is:

$0 < \mu < \frac{1}{\lambda_{\max}}$
(83)

where $\lambda_{\max}$ is the largest eigenvalue of the correlation matrix $R_{NN(X)NN(X)}$.

Application to the case study:

For the case study it can be shown that U reduces to a diagonal matrix:

$U = R_{g(X)NN(X)}R_{NN(X)NN(X)}^{-1} = \mathrm{diag}\big(\gamma_1,\ \gamma_2,\ \ldots,\ \gamma_M\big),$
(84)

where:

$\gamma_k = \frac{\sum_{m=1}^{N}c_{km}K_k(a_{km},b_{km})}{\sum_{m,i}^{N,N}c_{ki}c_{km}F_{kk}(a_{ki},b_{ki},a_{km},b_{km})}$
(85)

This indicates that weights $w_{jk}$ are scaled versions of the unknown weights $h_{jk}$; the scale factor $\gamma_k$ is the same for all the weights connecting the kth NN block to the outputs and depends only on the weights of block k. If the error is sufficiently small, the kth NN block will approximate the kth nonlinearity to within the inverse of the scale factor.

MSE expression

The transient MSE is determined by:

$E\|e(n)\|^2 = \sum_{j=1}^{L}E\big(e_j^2(n)\big) = E\big\|H g(X(n)) + N(n) - W(n)NN(X(n))\big\|^2$
(86)

where:

$E\big(e_j^2(n)\big) = E\big(H_j^t g(X(n)) + N_j(n) - W_j^t(n)NN(X(n))\big)^2 = E\Big(\sum_{i=1}^{M}h_{ji}g_i(x_i(n)) + N_j(n) - \sum_{k=1}^{M}w_{jk}\sum_{i=1}^{N}c_{ki}f\big(a_{ki}x_k(n)+b_{ki}\big)\Big)^2$
(87)

Which can be expressed as:

$E\big(e_j^2(n)\big) = \sigma_0^2 + \sum_{i,l}^{M,M}h_{ji}h_{jl}E\big(g_i(x_i(n))g_l(x_l(n))\big) - 2\sum_{i,k}^{M,M}h_{ji}w_{jk}\sum_{m=1}^{N}c_{km}E\big(g_i(x_i(n))f(a_{km}x_k(n)+b_{km})\big) + \sum_{k,l}^{M,M}\sum_{i,m}^{N,N}w_{jl}w_{jk}c_{li}c_{km}E\big(f(a_{li}x_l(n)+b_{li})f(a_{km}x_k(n)+b_{km})\big)$
(88)

Let $G_{il} = E\big(g_i(x_i(n))g_l(x_l(n))\big)$. Using the notations of Section 4.1, we have:

$E\big(e_j^2(n)\big) = \sigma_0^2 + \sum_{i,l}^{M,M}h_{ji}h_{jl}G_{il} - 2\sum_{i,k}^{M,M}h_{ji}w_{jk}\sum_{m=1}^{N}c_{km}K_i(a_{km},b_{km}) + \sum_{k,l}^{M,M}\sum_{i,m}^{N,N}w_{jl}w_{jk}c_{li}c_{km}F_{lk}(a_{li},b_{li},a_{km},b_{km})$
(89)

Application to the case study:

It can be easily shown that:

$E\big(e_j^2(n)\big) = \sigma_0^2 + \sum_{i=1}^{M}h_{ji}^2G_{ii} - 2\sum_{k=1}^{M}h_{jk}w_{jk}\sum_{m=1}^{N}c_{km}K_k(a_{km},b_{km}) + \sum_{k=1}^{M}w_{jk}^2\sum_{i,m}^{N,N}c_{ki}c_{km}F_{kk}(a_{ki},b_{ki},a_{km},b_{km}).$
(90)

The 1st term of $E\big(e_j^2(n)\big)$ represents the noise power, the 2nd term is the signal power of the jth MIMO output, the 3rd term is the sum of the individual contributions of the neurons weighted by the W and H weights, and the 4th term represents the sum of the coupling terms between neurons inside the same block weighted by W. Note that since the inputs are zero-mean and independent, there are no coupling terms between neurons in different blocks (as in Eq. (89)).

The total MSE is then expressed as:

$E\|e(n)\|^2 = L\sigma_0^2 + \sum_{j,i}^{L,M}h_{ji}^2G_{ii} - 2\sum_{j,k}^{L,M}h_{jk}w_{jk}\sum_{m=1}^{N}c_{km}K_k(a_{km},b_{km}) + \sum_{j,k}^{L,M}w_{jk}^2\sum_{i,m}^{N,N}c_{ki}c_{km}F_{kk}(a_{ki},b_{ki},a_{km},b_{km}).$
(91)

Case of frozen NN weights:

It is interesting to see the behavior of the MSE in the case where the NN weights are frozen.

In this case we have:

$\zeta_0 = E\|e_{W_0}(n)\|^2 = L\sigma_0^2 + E\big\|H g(X(n)) - W_0 NN(X(n))\big\|^2.$
(92)

Here the minimum MSE depends on the noise floor and on the NN approximation error of the nonlinearities. It is clear from this equation and from Section 3.2 that, if the NN blocks ideally identify the nonlinearities (to within scale factors), then ζ0 reduces to the noise floor.

The MSE can be written as a function of ζ0 as:

$E\|e(n)\|^2 = E\big\|e_{W_0}(n) - \big(W(n) - W_0\big)NN(X(n))\big\|^2 = E\big\|e_{W_0}(n) - V(n)NN(X(n))\big\|^2$
(93)

The steady state MSE is in this case:

$E\|e(\infty)\|^2 = \zeta_0 + \sum_{j=1}^{L} tr\big(R_{NN(X)NN(X)}K_{V_jV_j}(\infty)\big).$
(94)

The misadjustment can be derived in the same way as in Sections 3.1 and 3.2. We obtain an equation similar to (53) by replacing $R_{ZZ}$ with $R_{NN(X)NN(X)}$. The equation cannot be simplified further.

It is interesting to notice, however, that if the NN blocks perfectly identify the nonlinearities and if the conditions above equation (60) are fulfilled, then:

$\Delta(\infty) = \sum_{j=1}^{L} tr\big(R_{NN(X)NN(X)}K_{V_jV_j}(\infty)\big) = \frac{\mu\sigma_0^2\sigma_g^2 ML}{1 - \mu\sigma_g^2(M+2)}$
(95)

Simulation examples

In this section we present some simulation results for the case study described in Section 2.3. In these simulations, we have considered a 2 × 2 MIMO system (i.e., M = L = 2). For the parameterized nonlinearities we have chosen $\alpha_1 = \alpha_2 = 1$, $\beta_1 = 1$, $\beta_2 = 2$. Unless otherwise specified, the inputs are uncorrelated zero-mean white Gaussian processes with $\sigma_{x_i} = 1$. In the simulations, the unknown combining matrix was fixed and was taken as $H = \begin{pmatrix}1 & 0.3\\ 0.3 & 1\end{pmatrix}$. For example, in a MIMO communication system, H can be seen as the propagation matrix between 2 transmitting antennas and 2 receiving antennas.

Linear adaptation

For the linear adaptation case (Section 3.1), the adaptive system is composed of a 2 × 2 matrix W. For the noise we have taken $\sigma_0 = 0.001$. The mean weight recursions and the MSE transient behavior (Figures 6, 7) have been estimated over 20 Monte Carlo (MC) simulations and compared to the theoretical derivations (Equations (19) and (41)). This number of MC runs already shows an excellent fit between theory and MC estimates, which confirms the validity of the different assumptions made. A larger number of MC simulations allows better smoothing of the curves, but the conclusions remain the same.
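For reference, the Monte Carlo curves reported here can be produced by averaging the per-iteration squared error and weight trajectories over independent runs, as sketched below. The run() function is a placeholder for one full adaptation experiment (e.g., built from the earlier sketches); it is assumed, not defined in the paper.

```python
# Averaging MSE and weight trajectories over independent Monte Carlo runs.
import numpy as np

def monte_carlo(run, n_runs=20):
    """run(seed) -> (squared_error_trajectory, weight_trajectory); both arrays."""
    errs, weights = zip(*(run(seed) for seed in range(n_runs)))
    return np.mean(errs, axis=0), np.mean(weights, axis=0)
```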

Figure 6. Smoothed MSE vs. iteration number, linear adaptation case.

Figure 7. Mean weight transient behavior, linear adaptation case.

Matrix W converges to a scaled version of H: $W_0 = \begin{pmatrix}0.3536 & 0.0577\\ 0.1061 & 0.1925\end{pmatrix} = HU$, with $U = \begin{pmatrix}0.3536 & 0\\ 0 & 0.1925\end{pmatrix}$. Note the typical behavior of the LMS algorithm: a single time constant controls the transient part of the learning curve and of the mean weight curves. This is fundamentally different from the full NN system learning, which is governed by several time constants and presents plateau regions (Section 5.2). It should be noted that the steady-state MSE is high because of the error caused by the fact that the nonlinearities are not approximated (they are in effect modeled by the identity function) (Equation (25)).

MSE surface for the full NN algorithm

We move now to the study of the full NN algorithm. In this simulation, we have taken N = 3 neurons in each of the two NN blocks. The learning rate was fixed to μ = 0.0045. Figure 8 shows the MSE surface (i.e., Eq. (90), with no time dependence) as a function of w11 and w21 (the other parameters were fixed). It is clear that the MSE is quadratic in w11 and w21. It presents a single global minimum (as shown in Equations (76) and (84)).

Figure 8. MSE surface, MSE(w11, w21).

Figure 9 (resp. Figure 10) shows the MSE surface as a function of a11 and c11 (resp. a11 and a12), with the other parameters fixed. Note the flat areas (plateau regions) around the minima of the MSE surface. This explains the slow evolution of the NN weights when the algorithm gets close to its convergence point.

Figure 9. MSE surface, MSE(a11, c21).

Figure 10. MSE surface, MSE(a11, a12).

The MSE evolution during the learning process (Equation (90)) has been compared to 20 MC estimations (Figure 11a): the theory shows very good fit with the simulation results. In the figure, we notice that the MSE presents several phases (each phase controlled by a time constant), ending in a plateau phase where the MSE decreases very slowly. This is a typical behavior of the backpropagation algorithm[1], which is fundamentally different from that of the linear adaptation scheme (Figure 6). Here the MSE is much smaller. This is expected, since the additional MSE due to the nonlinearities (Eq. 92) is highly reduced because the NN blocks have correctly identified the unknown nonlinearities (Figures 12, 13). Here we are in a situation close to that of Section 3.2 (Equations 51-52).

Figure 11. (a) MSE during the learning process (theory and simulation), σ0 = 0.001 and μ = 0.0045. (b) MSE for different values of the noise variance, with learning rate μ = 0.0045 (theoretical results). (c) MSE for different values of the learning rate μ, with σ0 = 0.001 (theoretical results).

Figure 12. Identification of g1(x) by the 1st NN block.

Figure 13. Identification of g2(x) by the 2nd NN block.

Figure 11b shows the MSE evolution during the learning process for different values of the noise variance σ0. It can be seen that, as the noise variance decreases, the MSE decreases. However, below a certain value of σ0 (here σ0=0.0005), the MSE curves are almost identical. This is because in this case, the weight misadjustment error (for the linear part) and the nonlinear approximation error (of the nonlinear memoryless part) are much higher than the error caused by the presence of noise (see Eqs. 92-93).

Figure 11c investigates the influence of the learning rate μ. It can be seen that as μ increases (up to μ=0.002), the algorithm is faster and the MSE is lower at the end of the simulation time. However, for μ>0.002, as μ increases, the algorithm is faster at the beginning of the learning process, but the MSE is higher at the end of the simulation time. This is due to the misadjustment error which is higher for higher μ (see, e.g., Eq. 95).

Mean weight transient behavior for the full NN algorithm

Here we keep the system described in Section 5.2. The mean weight recursions for the linear combiner W and the two NN blocks are shown in Figures 14, 15, and 16 for both theory and MC estimations. The theoretical and estimated curves are indistinguishable. This confirms the validity of the different assumptions made in Sections 4.1 and 4.2.

Figure 14. Mean weight behavior: matrix W (theory and simulation).

Figure 15. Mean weight behavior: first NN block (theory and simulation).

Figure 16. Mean weight behavior: second NN block (theory and simulation vs. iterations).

Notice that, in Figure 14, the W weights evolve quickly at the beginning of the learning process (with values approaching H × U(n), where U is a diagonal matrix). They then evolve slowly until the end of the learning process. The slow evolution is explained by the plateau regions of the MSE surface. At the end of this simulation, matrix U was close to a diagonal matrix: $U_{Sim} = \begin{pmatrix}1.2702 & 0.0001\\ 0.0003 & 1.0946\end{pmatrix}$ (and $U_{Theory} = \begin{pmatrix}1.270 & 0\\ 0 & 1.095\end{pmatrix}$). This result is expected since the inputs are uncorrelated (Equations 84-85).

Figures 12 and 13 show that functions g1(x) and g2(x) have been correctly identified by the corresponding NN blocks (the NN outputs are normalized by the scaling factors 1/1.2702 and 1/1.0946, respectively).

Impact of correlated inputs

In the simulations below we study the impact of correlated inputs. The input signal vector is chosen here as a 2D Gaussian process with covariance matrix of the form $R_{XX} = \begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}$. The number of neurons in each NN block was taken as N = 5. The learning rate was μ = 0.0075. We have run several simulations for different values of the cross-correlation. Note that ρ = 0 corresponds to independent inputs, and ρ = 1 corresponds to the same input (i.e., x1 = x2). The values of matrix U and the MSE after $n = 2\times10^5$ iterations are shown in Table 1. It can be seen from Table 1 that, in practice, matrix U remains very close to a diagonal matrix, even for high correlation between inputs. This indicates that the system is capable of correctly identifying the nonlinearities even when the inputs are highly correlated. The identification performance for the cases ρ = 0.6 and ρ = 0.99 is illustrated in Figures 17, 18 and 19, 20, respectively. As expected, the MSE increases as the correlation between inputs increases (Table 1). When ρ = 1 (i.e., the two inputs are the same) the system is capable of correctly identifying the overall MIMO input–output transfer function. However, in this case, it is not capable of separating the nonlinearities (as U is not diagonal). The reason is that the system is then seen by the learning algorithm as a 1 × 2 SIMO system, which has several equivalent structures. Figure 21 shows an example of two equivalent structures. Therefore, the adaptive system is structurally unable to separate the nonlinearities. It is worth noting that for the case ρ = 0.999, the inputs look like noisy versions of each other (i.e., this is equivalent to a 1 × 2 system identification problem with noisy inputs). Thus, the MSE for ρ = 0.999 is larger than the MSE for ρ = 1.
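For completeness, correlated input pairs with the covariance used in this subsection can be generated from white Gaussian samples through a Cholesky factor, as in the short sketch below (the value of ρ is illustrative).

```python
# Zero-mean 2-D Gaussian inputs with covariance [[1, rho], [rho, 1]].
import numpy as np

rho = 0.6
R = np.array([[1.0, rho], [rho, 1.0]])
Lc = np.linalg.cholesky(R)
rng = np.random.default_rng(3)
X = rng.standard_normal(size=(100_000, 2)) @ Lc.T   # each row is one input vector
print(np.cov(X, rowvar=False))                      # approximately R
```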

Table 1. Effect of correlated inputs.

Figure 17. Identification of g1(x), case ρ = 0.6.

Figure 18. Identification of g2(x), case ρ = 0.6.

Figure 19. Identification of g1(x), case ρ = 0.99.

Figure 20. Identification of g2(x), case ρ = 0.99.

Figure 21. Example of two equivalent structures when the inputs are the same.

Conclusion and future work

The paper provides a statistical analysis of NN modeling and identification of a class of nonlinear MIMO systems. The study investigates the MSE, mean weight behavior, stationary points, misadjustment error, and stability conditions. The unknown system is composed of a set of single-input memoryless nonlinearities followed by a combining matrix. The NN model is composed of a set of single-input memoryless NN blocks followed by an adaptive linear combiner. The paper is supported with simulation results which show good agreement between the theoretical recursions and MC simulations. Future work will focus on three research directions. The first will explore the theoretical findings in order to express the effect of the number of neurons on the transient and steady-state behavior of the algorithm. The second research axis will investigate the case where matrix H is time-varying and/or with memory (this may have applications, for example, in adaptive control of nonlinear dynamical MIMO systems). Finally, we will study the algorithm behavior and performance for specific inputs (such as space-time coded signals used in wireless communications and their impact on the system capacity).

Appendix I

1) Calculation of F_kk

Let $x_1$ and $x_2$ be two zero-mean Gaussian variables such that $\sigma_{x_1}^2 = \sigma_{x_2}^2 = \sigma_x^2$ and $E(x_1x_2) = \rho$. Then $F_{kk}(a_{ki},b_{ki},a_{km},b_{km}) = E\big[f(a_{ki}x_1+b_{ki})f(a_{km}x_2+b_{km})\big]$ evaluated at $\rho = \sigma_x^2$. Using Price's theorem we have:

$E\left[\frac{\partial^2\big(f(a_{ki}x_1+b_{ki})f(a_{km}x_2+b_{km})\big)}{\partial x_1\,\partial x_2}\right] = \frac{\partial E\big[f(a_{ki}x_1+b_{ki})f(a_{km}x_2+b_{km})\big]}{\partial\rho}$

Let $U(\rho) = E\left[\frac{\partial^2\big(f(a_{ki}x_1+b_{ki})f(a_{km}x_2+b_{km})\big)}{\partial x_1\,\partial x_2}\right]$. Then:

$E\big[f(a_{ki}x_1+b_{ki})f(a_{km}x_2+b_{km})\big]_{\rho=\sigma_x^2} - E\big[f(a_{ki}x_1+b_{ki})f(a_{km}x_2+b_{km})\big]_{\rho=0} = \int_0^{\sigma_x^2}U(\rho)\,d\rho.$

Since $x_1$ and $x_2$ are uncorrelated for $\rho = 0$, we have $E\big[f(a_{ki}x_1+b_{ki})f(a_{km}x_2+b_{km})\big]_{\rho=0} = E\big[f(a_{ki}x_1+b_{ki})\big]E\big[f(a_{km}x_2+b_{km})\big]$. Thus:

$F_{kk}(a_{ki},b_{ki},a_{km},b_{km}) = E\big[f(a_{ki}x_1+b_{ki})\big]E\big[f(a_{km}x_2+b_{km})\big] + \int_0^{\sigma_x^2}U(\rho)\,d\rho.$

We have:

$U(\rho) = \frac{2}{\pi}\,a_{ki}a_{km}\,\frac{1}{2\pi|R|^{1/2}}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}e^{-\frac{1}{2}(a_{ki}x_1+b_{ki})^2}\,e^{-\frac{1}{2}(a_{km}x_2+b_{km})^2}\,e^{-\frac{1}{2}X^tR^{-1}X}\,dx_1\,dx_2$

where $X = [x_1\ x_2]^t$ and $R = \begin{pmatrix}\sigma_x^2 & \rho\\ \rho & \sigma_x^2\end{pmatrix}$. Combining the terms in the exponentials and completing the squares, the integrals can be calculated:

$U(\rho) = \frac{2}{\pi}\,\frac{a_{ki}a_{km}}{\sqrt{1+\sigma_x^2a_{ki}^2+\sigma_x^2a_{km}^2+\big(\sigma_x^4-\rho^2\big)a_{ki}^2a_{km}^2}}\,\exp\!\left(-\frac{b_{ki}^2\big(1+\sigma_x^2a_{km}^2\big)-2b_{ki}b_{km}a_{ki}a_{km}\rho+b_{km}^2\big(1+\sigma_x^2a_{ki}^2\big)}{2\big[1+\sigma_x^2a_{ki}^2+\sigma_x^2a_{km}^2+\big(\sigma_x^4-\rho^2\big)a_{ki}^2a_{km}^2\big]}\right).$

Note that in the biasless case (i.e., all the bias terms are set to 0) this expression reduces to:

$U(\rho) = \frac{2}{\pi}\,\frac{a_{ki}a_{km}}{\sqrt{1+\sigma_x^2a_{ki}^2+\sigma_x^2a_{km}^2+\big(\sigma_x^4-\rho^2\big)a_{ki}^2a_{km}^2}}.$

The integral is then simple to calculate:

$\int_0^{\sigma_x^2}U(\rho)\,d\rho = \frac{2}{\pi}\sin^{-1}\!\frac{a_{ki}a_{km}\sigma_x^2}{\sqrt{1+\sigma_x^2a_{ki}^2+\sigma_x^2a_{km}^2+\sigma_x^4a_{ki}^2a_{km}^2}}$

On the other hand, since $E\big[f(a_{ki}x_1)\big] = E\big[f(a_{km}x_2)\big] = 0$, we have:

$F_{kk}(a_{ki},0,a_{km},0) = \frac{2}{\pi}\sin^{-1}\!\frac{a_{ki}a_{km}\sigma_x^2}{\sqrt{1+\sigma_x^2a_{ki}^2+\sigma_x^2a_{km}^2+\sigma_x^4a_{ki}^2a_{km}^2}}.$

When the bias terms are not set to 0, a Taylor series expansion on the bias terms can be used in order to avoid the calculation of the integral.

2) Calculation of K

$K_k(a_{km},b_{km}) = E\big[g_k(x_k)f(a_{km}x_k+b_{km})\big] = \frac{1}{\sqrt{2\pi}\,\sigma_x}\int_{-\infty}^{+\infty}\alpha_k\,x\,e^{-\frac{\beta_kx^2}{2}}\,e^{-\frac{x^2}{2\sigma_x^2}}\left[\sqrt{\tfrac{2}{\pi}}\int_0^{a_{km}x+b_{km}}e^{-\frac{u^2}{2}}\,du\right]dx.$

The inside integral can be eliminated by integrating by parts on variable x. The integral is then evaluated by combining the terms in the exponentials and completing the squares. This yields:

$K_k(a_{km},b_{km}) = \sqrt{\tfrac{2}{\pi}}\,\frac{\alpha_k}{\frac{1}{\sigma_x^2}+\beta_k}\cdot\frac{a_{km}}{\sqrt{\sigma_x^2\big(a_{km}^2+\beta_k\big)+1}}\,\exp\!\left(-\frac{b_{km}^2}{2}\cdot\frac{1+\sigma_x^2\beta_k}{1+\sigma_x^2\big(\beta_k+a_{km}^2\big)}\right).$

Again, a Taylor series expansion can be used to simplify this expression. Note that in the biasless case we have:

$K_k(a_{km},0) = \sqrt{\tfrac{2}{\pi}}\,\frac{\alpha_k}{\frac{1}{\sigma_x^2}+\beta_k}\cdot\frac{a_{km}}{\sqrt{\sigma_x^2\big(a_{km}^2+\beta_k\big)+1}}$

Appendix II

The derivatives needed to compute the different recursions are expressed as follows:

$\frac{\partial K_k(a_{km},0)}{\partial a_{km}} = \sqrt{\tfrac{2}{\pi}}\,\frac{\alpha_k\sigma_x^2}{\big(\sigma_x^2\big(a_{km}^2+\beta_k\big)+1\big)^{3/2}}$

$\frac{\partial K_k(a_{km},b_{km})}{\partial a_{km}} = \frac{\partial K_k(a_{km},0)}{\partial a_{km}} - \sqrt{\tfrac{2}{\pi}}\,\frac{\alpha_k\sigma_x^2b_{km}^2}{2}\cdot\frac{1+\sigma_x^2\beta_k-2\sigma_x^2a_{km}^2}{\big(1+\sigma_x^2\big(\beta_k+a_{km}^2\big)\big)^{5/2}}$

$\frac{\partial F_{kk}(a_{ki},0,a_{ki},0)}{\partial a_{ki}} = \frac{4}{\pi}\,\frac{\sigma_x^2a_{ki}}{\big(\sigma_x^2a_{ki}^2+1\big)\sqrt{1+2\sigma_x^2a_{ki}^2}}$

$\frac{\partial F_{kk}(a_{ki},b_{ki},a_{ki},b_{ki})}{\partial a_{ki}} = \frac{\partial F_{kk}(a_{ki},0,a_{ki},0)}{\partial a_{ki}} - \frac{4}{\pi}\,b_{ki}^2\sigma_x^2a_{ki}\,\frac{3\sigma_x^2a_{ki}^2+2}{\big(1+\sigma_x^2a_{ki}^2\big)^2\big(1+2\sigma_x^2a_{ki}^2\big)^{3/2}}$

$\frac{\partial F_{kk}(a_{ki},0,a_{km},0)}{\partial a_{ki}} = \frac{2}{\pi}\,\frac{\sigma_x^2a_{km}}{\big(\sigma_x^2a_{ki}^2+1\big)\sqrt{1+\sigma_x^2\big(a_{ki}^2+a_{km}^2\big)}}$

$\frac{\partial F_{kk}(a_{ki},b_{ki},a_{km},b_{km})}{\partial a_{ki}} = \frac{\partial F_{kk}(a_{ki},0,a_{km},0)}{\partial a_{ki}} - \frac{1}{\pi}\,\frac{b_{ki}^2\sigma_x^2a_{km}}{\big(1+\sigma_x^2a_{ki}^2\big)\sqrt{1+\sigma_x^2\big(a_{km}^2+a_{ki}^2\big)}}\left[\frac{1+\sigma_x^2a_{km}^2}{1+\sigma_x^2\big(a_{km}^2+a_{ki}^2\big)}-\frac{2\sigma_x^2a_{ki}^2}{1+\sigma_x^2a_{ki}^2}\right] - \frac{1}{\pi}\,\frac{b_{km}^2\sigma_x^2a_{km}}{\big(1+\sigma_x^2\big(a_{km}^2+a_{ki}^2\big)\big)^{3/2}} - \frac{2}{\pi}\,\frac{b_{ki}b_{km}\sigma_x^2a_{ki}}{\big(1+\sigma_x^2\big(a_{km}^2+a_{ki}^2\big)\big)^{3/2}}$

$\frac{\partial F_{kk}(a_{ki},b_{ki},a_{km},b_{km})}{\partial b_{ki}} = -\frac{2}{\pi}\,\frac{b_{ki}\sigma_x^2a_{ki}a_{km}}{\big(1+\sigma_x^2a_{ki}^2\big)\sqrt{1+\sigma_x^2\big(a_{km}^2+a_{ki}^2\big)}} + \frac{2}{\pi}\,\frac{b_{km}}{\sqrt{1+\sigma_x^2\big(a_{km}^2+a_{ki}^2\big)}}$

Appendix III

$K_{V_jV_j}(n+1) = K_{V_jV_j}(n) - 2\mu R_{XX}K_{V_jV_j}(n) - 2\mu K_{V_jV_j}(n)R_{XX} + 2\mu E\big[e_{W_0 j}(n)\,X\,V_j^t\big(I - 2\mu XX^t\big)\big] + 2\mu E\big[e_{W_0 j}(n)\,X\,V_j^t\big(I - 2\mu XX^t\big)\big]^t + 4\mu^2 E\big[XX^t K_{V_jV_j} XX^t\big] + 4\mu^2 E\big[e_{W_0 j}^2(n)\,XX^t\big]$
(96)

The calculations are similar to those of the Appendix of[9]; the main difference is that here we deal with a multi-dimensional input. Therefore, we follow the same methodology as in[9]. Following[10], the second-to-last expectation can be calculated as:

$E\big(XX^t K_{V_jV_j}(n)XX^t\big) = tr\big(R_{XX}K_{V_jV_j}(n)\big)R_{XX} + 2R_{XX}K_{V_jV_j}(n)R_{XX}.$
(97)
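The factoring identity (97) is the standard Gaussian fourth-moment result; it can be checked numerically for an arbitrary covariance and symmetric matrix, as in the short sketch below (sizes and matrices are arbitrary test data).

```python
# Numerical check of the Gaussian moment identity (97): E[X X' K X X'] = tr(R K) R + 2 R K R.
import numpy as np

rng = np.random.default_rng(4)
M = 3
A = rng.normal(size=(M, M)); R = A @ A.T / M            # a covariance matrix
K = rng.normal(size=(M, M)); K = (K + K.T) / 2          # a symmetric matrix
X = rng.multivariate_normal(np.zeros(M), R, size=500_000)
lhs = np.einsum('ni,nj,jk,nk,nl->il', X, X, K, X, X) / len(X)
rhs = np.trace(R @ K) * R + 2 * R @ K @ R
print(np.max(np.abs(lhs - rhs)))                        # small, up to Monte Carlo error
```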

The first expectation is expressed as:

$E\big[e_{W_0 j}(n)\,X\,V_j^t\big(I - 2\mu XX^t\big)\big] = E\big(e_{W_0 j}(n)\,X\,V_j^t\big) - 2\mu E\big(e_{W_0 j}(n)\,X\,V_j^t XX^t\big)$
(98)

The first term is zero (orthogonality principle). The second term is:

$E\big(e_{W_0 j}(n)\,X\,V_j^t XX^t\big) = E\big[\big(H_j^t g(X(n)) + N_j(n) - W_{0j}^t X(n)\big)X\,V_j^t XX^t\big]$
(99)

The middle term is zero (zero-mean white noise); the last expectation is:

$E\big(W_{0j}^t X(n)\,X\,V_j^t XX^t\big) = E\big(XX^t W_{0j}V_j^t(n)XX^t\big) \approx tr\big(R_{XX}W_{0j}E(V_j^t(n))\big)R_{XX} + 2R_{XX}W_{0j}E(V_j^t(n))R_{XX}$
(100)

The first expectation in Eq. (99), $E\big(H_j^t g(X)\,X\,V_j^t(n)XX^t\big) \approx E\big(H_j^t g(X)\,X\,E(V_j^t(n))\,XX^t\big)$, involves the nonlinearities and should be evaluated explicitly. The remaining expectation in (96) is $E\big(e_{W_0 j}^2(n)XX^t\big)$:

$E\big(e_{W_0 j}^2(n)XX^t\big) = E\big[\big(H_j^t g(X) - W_{0j}^t X\big)^2 XX^t\big] + \sigma_0^2 R_{XX}$
(101)

We have

$E\big[\big(W_{0j}^t X\big)^2 XX^t\big] = E\big(XX^t W_{0j}W_{0j}^t XX^t\big) = tr\big(R_{XX}W_{0j}W_{0j}^t\big)R_{XX} + 2R_{XX}W_{0j}W_{0j}^t R_{XX}$
(102)

The first term in (101) is:

$E\big[\big(H_j^t g(X)\big)^2 XX^t\big] = E\big(H_jH_j^t g(X)g^t(X)XX^t\big) = E\big(g(X)g^t(X)H_jH_j^t XX^t\big)$

Equation (96) is then expressed as:

$K_{V_jV_j}(n+1) = K_{V_jV_j}(n) - 2\mu R_{XX}K_{V_jV_j}(n) - 2\mu K_{V_jV_j}(n)R_{XX} - 4\mu^2\Big[E\big(H_j^t g(X)\,X\,E(V_j^t(n))\,XX^t\big) - tr\big(R_{XX}W_{0j}E(V_j^t(n))\big)R_{XX} - 2R_{XX}W_{0j}E(V_j^t(n))R_{XX}\Big] - 4\mu^2\Big[E\big(H_j^t g(X)\,X\,E(V_j^t(n))\,XX^t\big) - tr\big(R_{XX}W_{0j}E(V_j^t(n))\big)R_{XX} - 2R_{XX}W_{0j}E(V_j^t(n))R_{XX}\Big]^t + 4\mu^2\Big[tr\big(R_{XX}K_{V_jV_j}(n)\big)R_{XX} + 2R_{XX}K_{V_jV_j}(n)R_{XX}\Big] + 4\mu^2\Big[E\big(g(X)g^t(X)H_jH_j^t XX^t\big) + \sigma_0^2 R_{XX} - tr\big(R_{XX}W_{0j}W_{0j}^t\big)R_{XX} - 2R_{XX}W_{0j}W_{0j}^t R_{XX}\Big]$

Endnotes

This work has been supported by The Natural Sciences and Engineering Research Council of Canada (NSERC).

The time index of the weights has been omitted from the right hand side of the equations to make them easier to read.

References

  1. Haykin S: Neural Networks: A Comprehensive Foundation. Prentice Hall; 1999.

  2. Gao Y, Er M: Online adaptive fuzzy neural identification and control of a class of MIMO nonlinear systems. IEEE Trans. Fuzzy Systems 2003: 462-476.

  3. Ge SS, Wang C: Adaptive neural control of uncertain nonlinear MIMO systems. IEEE Trans. Neural Networks 2004, 15: 674-692. 10.1109/TNN.2004.826130

  4. Ibnkahla M, Al-Hinai A: Adaptive modeling and identification of nonlinear MIMO channels using neural networks. In Adaptive Signal Processing in Wireless Communications. Edited by: Ibnkahla M. CRC Press, Boca Raton, FL, USA; 2008.

  5. Narendra KS, Parthasarathy K: Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Networks 1990, 1: 4-27. 10.1109/72.80202

  6. Xu H, Ioannou P: Robust adaptive control for a class of MIMO nonlinear systems with guaranteed error bounds. IEEE Trans. Automatic Control 2003: 718-742.

  7. Amari S: Mathematical foundations of neurocomputing. Proc. IEEE September 1990, 78(9): 1443-1463. 10.1109/5.58324

  8. Bershad NJ, Ibnkahla M, Castanié F: Statistical analysis of a two-layer backpropagation algorithm used for modeling nonlinear memoryless channels: The single neuron case. IEEE Trans. Signal Processing March 1997, 45(3): 747-756. 10.1109/78.558493

  9. Bershad N, Celka P, Vesin JM: Stochastic analysis of gradient adaptive identification of nonlinear systems with memory for Gaussian data and noisy input and output measurements. IEEE Trans. Signal Processing March 1999, 47(3): 675-689. 10.1109/78.747775

  10. Haykin S: Adaptive Filter Theory. Prentice Hall; 1996.

  11. Ibnkahla M, Bershad NJ, Sombrin J, Castanié F: Neural network modeling and identification of nonlinear channels with memory: Algorithms, applications and analytic models. IEEE Trans. Signal Processing 1998, 46: 5.

  12. Ibnkahla M: Statistical analysis of neural network modeling and identification of nonlinear channels with memory. IEEE Trans. Signal Processing 2002: 1508-1517.

  13. Shynk J, Roy S: Convergence properties and stationary points of a perceptron learning algorithm. Proc. IEEE Oct. 1990, 70: 1599-1604.

  14. Taylor JG: Mathematical Approaches to Neural Networks. North-Holland, Amsterdam; 1993.

  15. White H: Learning in artificial neural networks: A statistical perspective. Neural Comput. 1989, 1: 425-464. 10.1162/neco.1989.1.4.425

  16. Bershad N, Celka P, Vesin JM: Analysis of stochastic gradient tracking of time-varying polynomial Wiener systems. IEEE Trans. Signal Processing June 2000, 48(6): 1676-1686. 10.1109/78.845925

  17. Bolcskei H: MIMO systems: Principles and trends. In Signal Processing for Mobile Communications Handbook. Edited by: Ibnkahla M. CRC Press; 2004.

  18. Javornik T, Kandus G, Plevel S, White G, Burr A: V-BLAST algorithm performance in non-linear channel. IEEE Computer as a Tool Conference September 2003, 1: 183-187.

  19. Poitau G, Kouki A: Impact of realistic amplification models on dynamic VBLAST optimization. Proc. Vehicular Technology Conference Spring 2004: 894-897.

  20. Woo S, Lee D, Kim K, Hur H, Lee C, Laskar J: Combined effects of RF impairments in the future IEEE 802.11n WLAN systems. Proc. IEEE Vehicular Technology Conference Spring May 2005, 2: 1346-1349.

  21. Yang S, Xi J, Mu X: Decision aided joint compensation of clipping noise and nonlinearity for MIMO-OFDM systems. Proc. IEEE International Symposium on Communications and Information Technology (ISCIT) 2005, 1: 725-728.

  22. Yang S, Xi J, Wang F, Mu X, Kobayashi H: Decision aided compensation of residual frequency offset for MIMO-OFDM systems with nonlinear channel. Proc. International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS) 2005: 113-116.

  23. Sulyman A, Ibnkahla M: Performance of MIMO systems with antenna selection over nonlinear fading channels. IEEE Journal of Selected Topics in Signal Processing April 2008, 2: 159-170.

  24. Sulyman A, Ibnkahla M: Performance analysis of nonlinearly amplified M-QAM signals in MIMO channels. European Transactions in Communications January 2008, 19(1): 15-22.

  25. Pedro J, Maas S: A comparative overview of microwave and wireless power amplifier behavioral modeling approaches. IEEE Trans. Microwave Theory and Techniques 2005, 53(4): 1150-1163.

  26. Saleh A: Frequency-independent and frequency-dependent nonlinear models of TWT amplifiers. IEEE Trans. Communications 1981, 29: 11. 10.1109/TCOM.1981.1094876


Author information

Correspondence to Mohamed Ibnkahla.

Competing interests

The author declares that he has no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Ibnkahla, M. Stochastic analysis of neural network modeling and identification of nonlinear memoryless MIMO systems. EURASIP J. Adv. Signal Process. 2012, 179 (2012). https://doi.org/10.1186/1687-6180-2012-179
