
Multi-stream continuous hidden Markov models with application to landmine detection

Abstract

We propose a multi-stream continuous hidden Markov model (MSCHMM) framework that can learn from multiple modalities. We assume that the feature space is partitioned into subspaces generated by different sources of information. To fuse the different modalities, the proposed MSCHMM introduces stream relevance weights. First, we modify the probability density function (pdf) that characterizes the standard continuous HMM to include state- and component-dependent stream relevance weights. The resulting pdf is approximated by a linear combination of pdfs characterizing the multiple modalities. Second, we formulate the CHMM objective function to allow for the simultaneous optimization of all model parameters, including the relevance weights. Third, we generalize the maximum likelihood based Baum-Welch algorithm and the minimum classification error/gradient probabilistic descent (MCE/GPD) learning algorithm to include stream relevance weights. We propose two versions of the MSCHMM: the first introduces the relevance weights at the state level, while the second introduces them at the component level. We illustrate the performance of the proposed MSCHMM structures using synthetic data sets. We also apply them to the problem of landmine detection using ground penetrating radar. We show that when the multiple sources of information are equally relevant across all training data, the performance of the proposed MSCHMM is comparable to that of the baseline CHMM. However, when the relevance of the sources varies, the MSCHMM outperforms the baseline CHMM because it can learn the optimal relevance weights. We also show that our approach outperforms existing multi-stream HMMs because they cannot optimize all model parameters simultaneously.

1 Introduction

Hidden Markov models (HMMs) have emerged as a powerful paradigm for modeling stochastic processes and pattern sequences. HMMs were originally applied to speech recognition, where they became the dominant technology [1]. In recent years, they have attracted growing interest in automatic target detection and classification [2], computational molecular biology [3], bioinformatics [4], mine detection [5], handwritten character/word recognition [6], and other computer vision applications [7]. HMMs are categorized into discrete and continuous models: an HMM is called continuous if its observation probability density functions are continuous, and discrete if they are discrete.

Continuous probability density functions have the advantage of covering the entire feature space when dealing with continuous attributes: each data point is assigned its own probability density value that represents its likelihood. The discrete HMM, on the other hand, reduces the feature space to a finite set of prototypes or representatives. This quantization is typically accompanied by a loss of information that tends to reduce the generalization accuracy. Therefore, in this article, we focus on the continuous version of the HMM for classification.

For complex classification problems involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. In these cases, multiple features extracted from different modalities and sensors may be needed. HMM approaches that combine multiple features can be divided into three main categories: feature fusion or direct identification; decision fusion or separate identification (also known as late integration); and model fusion (early/intermediate integration) [8]. In feature fusion, multiple features are concatenated into a large feature vector and a single HMM is trained [9]. This type of fusion has the drawback of treating heterogeneous features as equally important. Moreover, it cannot easily represent the loose timing synchronicity between different modalities. In decision fusion, the modalities are processed separately to build independent models [10]. This approach ignores the correlation between features and allows complete asynchrony between the streams. In addition, it is computationally heavy since it involves two layers of decisions. In the third category, model fusion, a more complex HMM than the standard one is sought. The additional complexity is needed to handle the correlation between modalities and the loose synchronicity between sequences when needed. Several HMM structures have been proposed for this purpose. Examples include the factorial HMM [11], the coupled HMM [12], and the multi-stream HMM [13]. Both factorial and coupled HMM structures assign a state sequence to each stream and allow asynchrony between sequences [14]. However, the parameter estimation of these models is not trivial and only approximate solutions can be obtained. In particular, the parameters of factorial and coupled HMMs could be estimated via the EM (Baum-Welch) algorithm; however, the E-step is computationally intractable and approximation approaches are used instead [11, 12]. The multi-stream HMM (MSHMM) is an HMM-based structure that handles multiple modalities for temporal data. It is used when the modalities (streams) are synchronous and independent.

Multi-stream HMM techniques have been proposed for both the discrete and the continuous cases [15–17]. In our earlier study [17], we proposed a multi-stream HMM framework for the discrete case with two distinct structures that integrate a stream relevance weight for each symbol in each state. For each structure, we generalized the Baum-Welch [1] and the minimum classification error (MCE) [18] training algorithms. In particular, we modified the objective function to include the stream relevance weights and derived the necessary conditions to optimize all of the model parameters simultaneously.

For the continuous case, multi-stream HMM was originally introduced to fuse audio and visual streams in speech recognition using continuous HMMs [15, 16]. In these methods, the feature space is partitioned into subspaces and different probability density functions (pdfs) are learned for the different streams. The relevance of the different streams was encoded by exponent weights, and a weighted geometric mean of the streams is used to approximate the pdf. This geometric approximation of the pdf makes it impossible to derive the maximum likelihood estimates of the stream relevance weights [16], unless the model is restricted to include only one Gaussian component per state [15]. Consequently, a two-step learning mechanism was adopted to learn all model parameters. In the first step, the MLE (standard Baum-Welch algorithm) [1] is used to learn all model parameters except the stream relevance weights. In the second step, a discriminative training algorithm is used to learn the exponent weights. The main drawback of this approach is its inability to provide an optimization framework that learns all the HMM parameters simultaneously, unless the number of components per state is limited to one, which can be too restrictive for most real applications. In addition, solving this issue using two layers of training that optimize two different types of parameters is susceptible to local optima. To alleviate these limitations, the authors in [19] proposed an MSHMM structure that allows for simultaneous learning of all model parameters, including the stream relevance weights, by linearizing the approximation of the pdf. In this approach, the stream relevance weights were introduced at the mixture level, and the Baum-Welch (BW) learning algorithm was generalized to derive the necessary conditions to learn all parameters simultaneously.

In this article, we extend the MSHMM structure proposed in [19] to state-level stream weighting and generalize the MLE learning algorithm for this structure. We also generalize the minimum classification error (MCE) learning to both mixture-level and state-level streaming.

The organization of the rest of the article is as follows. In Section 2, we outline the baseline CHMM with maximum likelihood and discriminative training. We also provide an overview of existing HMM based structures for multi-sensor fusion. In Section 3, we present our continuous multi-stream HMM structures and we derive the necessary conditions to optimize all parameters simultaneously using both MLE and MCE/GPD learning approaches. Section 4 has the experimental results that compare the proposed multi-stream HMM with existing HMM approaches. Finally, Section 5 contains the conclusions and future directions.

2 Related study

2.1 Baseline continuous HMM

An HMM is a model of a doubly stochastic process that produces a sequence of random observation vectors at discrete times according to an underlying Markov chain. At each observation time, the Markov chain may be in one of N_s states, s_1, …, s_{N_s}, and given that the chain is in a certain state, there are probabilities of moving to other states. These probabilities are called the transition probabilities. An HMM is characterized by three sets of probability density functions: the initial probabilities (π), the transition probabilities (A), and the state probability density functions (B). Let T be the length of the observation sequence (i.e., the number of time steps), O = [o_1, …, o_T] be the observation sequence, where each observation vector o_t is characterized by p features (i.e., o_t ∈ R^p), and Q = [q_1, …, q_T] be the state sequence. The compact notation

λ=(π,A,B)
(1)

is generally used to indicate the complete parameter set of the HMM model. In (1), π = [π_i], where π_i = Pr(q_1 = s_i) are the initial state probabilities; A = [a_{ij}] is the state transition probability matrix, where a_{ij} = Pr(q_t = j | q_{t-1} = i) for i, j = 1, …, N_s; and B = {b_i(o_t), i = 1, …, N_s}, where b_i(o_t) = Pr(o_t | q_t = i) is the set of observation probability distributions in state i. For the continuous HMM, b_i(o_t) is defined by a mixture of parametric probability density functions (pdfs). The most common parametric pdf used in continuous HMMs is the mixture of Gaussian densities, where

b_i(o_t) = \sum_{j=1}^{M_i} u_{ij}\, b_{ij}(o_t), \quad \text{for } i = 1, \ldots, N_s.
(2)

In (2), M_i is the number of components in state i, b_{ij}(o_t) is a p-dimensional multivariate Gaussian density with mean μ_{ij} and covariance matrix Σ_{ij}, and u_{ij} is the mixture coefficient for the j-th mixture component in state i, and satisfies the constraints

u_{ij} \geq 0, \quad \text{and} \quad \sum_{j=1}^{M_i} u_{ij} = 1, \quad \text{for } i = 1, \ldots, N_s.
(3)
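To make (2) and (3) concrete, the following minimal sketch (Python with NumPy; array shapes and names are our own illustration, not part of the model definition) evaluates the observation density b_i(o_t) of one state as a Gaussian mixture with diagonal covariances:

    import numpy as np

    def gaussian_diag(o, mu, var):
        # diagonal-covariance Gaussian density N(o; mu, diag(var))
        d = o - mu
        return np.exp(-0.5 * np.sum(d * d / var)) / np.sqrt(np.prod(2.0 * np.pi * var))

    def state_density(o, u, mus, variances):
        # b_i(o_t) = sum_j u_ij * b_ij(o_t) as in (2), with sum_j u_ij = 1 as in (3)
        return sum(u_j * gaussian_diag(o, mu_j, var_j)
                   for u_j, mu_j, var_j in zip(u, mus, variances))

    # toy example: one state with M_i = 2 components in a p = 3 dimensional feature space
    o_t = np.array([0.2, -0.1, 1.3])
    u = np.array([0.6, 0.4])                      # mixture coefficients, satisfying (3)
    mus = [np.zeros(3), np.ones(3)]               # component means
    variances = [np.ones(3), 2.0 * np.ones(3)]    # diagonal covariances
    print(state_density(o_t, u, mus, variances))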

For a C-class classification problem, each random sequence O is to be classified into one of the C classes. Each class, c, is modeled by a CHMM λ_c. Let O = [O^{(1)}, …, O^{(R)}] be a set of R sequences drawn from these C different classes, and let g_c(O) be a discriminant function associated with classifier c that indicates the degree to which O belongs to class c. The classifier Γ(O) defines a mapping from the sample space (O ∈ O) to the discrete categorical set {1, 2, …, C}. That is,

\Gamma(O) = I \quad \text{iff} \quad I = \arg\max_{c = 1, \ldots, C} g_c(O).
(4)

Two main approaches have been considered for learning the HMM parameters. The first is based on learning the model parameters that maximize the likelihood of the training data. The second is based on discriminative training that minimizes the classification error over all classes.

2.1.1 CHMM with maximum likelihood estimation (MLE)

The Baum-Welch (BW) algorithm [1] is an MLE algorithm that is commonly used to learn the HMM parameters. It consists of adjusting the parameters of each model λ independently to maximize the likelihood Pr(O|λ). Maximizing Pr(O|λ) is equivalent to maximizing the auxiliary function:

Q(\lambda, \bar{\lambda}) = \sum_{Q} \sum_{E} \Pr(Q, E \mid O, \lambda) \ln \Pr(O, Q, E \mid \bar{\lambda}),
(5)

where λ is the initial guess and λ̄ is the subject of optimization. In fact, it was proven [20] that ∂Pr(O|λ)/∂λ = ∂Q(λ, λ̄)/∂λ̄ evaluated at λ̄ = λ. In (5), Q = [q_1, q_2, …, q_T] is a random vector representing the underlying state sequence, and E = [e_1, e_2, …, e_T] is a random vector, where each e_t represents the index of the mixture component within the underlying state that is responsible for the generation of the observation o_t.

Using a mixture of Gaussian densities with diagonal covariance matrices, it can be shown that the HMM parameters A and B need to be updated iteratively using [1]:

\bar{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)},
(6)
\bar{u}_{ij} = \frac{\sum_{t=1}^{T} \kappa_t(i,j)}{\sum_{t=1}^{T} \gamma_t(i)},
(7)
\bar{\mu}_{ijd} = \frac{\sum_{t=1}^{T} \kappa_t(i,j)\, o_{td}}{\sum_{t=1}^{T} \kappa_t(i,j)},
(8)
\bar{\Sigma}_{ij} = \frac{\sum_{t=1}^{T} \kappa_t(i,j)\, (o_t - \bar{\mu}_{ij})(o_t - \bar{\mu}_{ij})^{\top}}{\sum_{t=1}^{T} \kappa_t(i,j)}.
(9)

In the above,

\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{\sum_{i'=1}^{N_s} \sum_{j'=1}^{N_s} \alpha_t(i')\, a_{i'j'}\, b_{j'}(o_{t+1})\, \beta_{t+1}(j')}, \qquad
\gamma_t(i) = \frac{\alpha_t(i)\, \beta_t(i)}{\sum_{j=1}^{N_s} \alpha_t(j)\, \beta_t(j)}, \qquad
\kappa_t(i,j) = \gamma_t(i)\, \frac{u_{ij}\, b_{ij}(o_t)}{b_i(o_t)}.

The variables α t (j) and β t (j) are computed using the Forward and Backward algorithms [1], respectively.
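As an illustration only (not the implementation used in this article), the sketch below computes the forward and backward variables for precomputed state densities b_j(o_t) and applies the re-estimation of the transition matrix in (6); variable names and shapes are assumptions made for the example:

    import numpy as np

    def forward_backward(pi, A, B):
        # pi: (N,), A: (N, N), B: (T, N) with B[t, j] = b_j(o_t); returns unscaled alpha, beta
        T, N = B.shape
        alpha = np.zeros((T, N))
        beta = np.zeros((T, N))
        alpha[0] = pi * B[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[t]
        beta[T - 1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[t + 1] * beta[t + 1])
        return alpha, beta

    def reestimate_A(alpha, beta, A, B):
        # one Baum-Welch update of a_ij, following (6)
        T, N = B.shape
        xi_sum = np.zeros((N, N))
        gamma_sum = np.zeros(N)
        for t in range(T - 1):
            xi_t = alpha[t][:, None] * A * (B[t + 1] * beta[t + 1])[None, :]
            xi_t /= xi_t.sum()
            xi_sum += xi_t
            gamma_t = alpha[t] * beta[t]
            gamma_sum += gamma_t / gamma_t.sum()
        return xi_sum / gamma_sum[:, None]

    # toy usage with random densities for a 3-state model and T = 10 observations
    rng = np.random.default_rng(0)
    pi, A = np.full(3, 1 / 3), np.full((3, 3), 1 / 3)
    B = rng.random((10, 3))
    alpha, beta = forward_backward(pi, A, B)
    A_new = reestimate_A(alpha, beta, A, B)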

2.1.2 CHMM with discriminative training

The optimality of the MLE training criterion is conditioned on the availability of an infinite amount of training data and the correct choice of the model. Indeed, it was shown in [21] that, if the true distribution of the samples to be classified can be accurately described by the assumed statistical model and if the size of the training set tends to infinity, the MLE tends to be optimal. However, in practice, neither of these conditions is satisfied, as the available training data are limited and the assumptions made on the HMM structure are often inaccurate. As a consequence, likelihood-based training may not be effective. In this case, minimization of the classification error rate is a more suitable objective than minimization of the error of the parameter estimates. A common discriminative training method is the MCE [18]. In fact, it has been reported since the mid-1990s that discriminative training techniques are more successful [18]. The optimization of the error function is generally carried out by the GPD algorithm [18], a gradient descent-based optimization, and results in a classifier with minimum error probability. Let

g_c(O, \Lambda) = \log\left[\max_{Q} g_c(O, Q, \Lambda)\right],
(10)

be the discriminant function associated with classifier c that indicates the degree to which O belongs to class c. In (10), Q is a state sequence corresponding to the observation sequence O, Λ includes the model parameters, and

g_c(O, Q, \Lambda) = \Pr(O, Q; \lambda_c) = \pi_{q_0}^{(c)} \prod_{t=1}^{T-1} a_{q_t q_{t+1}}^{(c)} \prod_{t=1}^{T} b_{q_t}^{(c)}(o_t) = \pi_{q_0}^{(c)} \prod_{t=1}^{T-1} a_{q_t q_{t+1}}^{(c)} \prod_{t=1}^{T} \sum_{j=1}^{M} u_{q_t j}^{(c)}\, b_{q_t j}^{(c)}(o_t).
(11)

Assuming that Q̄ = (q̄_0, q̄_1, …, q̄_T) is the optimal state sequence that achieves max_Q g_c(O, Q, Λ), which could be computed using the Viterbi algorithm [22], Equation (10) can be rewritten as

g_c(O, \Lambda) = \log\left[g_c(O, \bar{Q}, \Lambda)\right].

The misclassification measure of sequence O is defined by:

d_c(O) = -g_c(O, \Lambda) + \log\left[\frac{1}{C-1} \sum_{j, j \neq c} \exp\left[\eta\, g_j(O, \Lambda)\right]\right]^{\frac{1}{\eta}},
(12)

where η is a positive number. A positive d c (O) indicates misclassification, while a negative d c (O) indicates correct decision.

The misclassification measure is embedded in a smoothed zero-one function, referred to as loss function, defined as:

l c (O,Λ)=l( d c (O)),
(13)

where l is a sigmoid function, one example of which is:

l(d) = \frac{1}{1 + \exp(-\zeta d + \theta)}.
(14)

In (14), θ is normally set to zero, and ζ is set to a number larger than one. Correct classification corresponds to loss values in [0, 1/2), and misclassification corresponds to loss values in (1/2, 1]. The shape of the sigmoid loss function varies with the parameter ζ > 0: the larger the ζ, the narrower the transition region. Finally, for any unknown sequence O, the classifier performance is measured by:

l(O; \Lambda) = \sum_{c=1}^{C} l_c(O; \Lambda)\, I(O \in C_c),
(15)

where I(.) is the indicator function. Given a set of training observation sequences O (r), r=1,2,…,R, an empirical loss function on the training data can be defined as

L(\Lambda) = \sum_{r=1}^{R} \sum_{c=1}^{C} l_c(O^{(r)}; \Lambda)\, I(O^{(r)} \in C_c).
(16)
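A small numerical sketch of (12)-(16) follows (our own illustration; the discriminant values g_c are assumed to be given, e.g., by (10)):

    import numpy as np

    def misclassification_measure(g, c, eta=2.0):
        # d_c(O) of (12): g is the vector of discriminants g_1..g_C for one sequence
        others = np.delete(g, c)
        return -g[c] + (1.0 / eta) * np.log(np.mean(np.exp(eta * others)))

    def sigmoid_loss(d, zeta=2.0, theta=0.0):
        # smoothed zero-one loss (14)
        return 1.0 / (1.0 + np.exp(-zeta * d + theta))

    def empirical_loss(G, labels, eta=2.0, zeta=2.0):
        # empirical loss (16): G[r, c] = g_c(O^(r), Lambda), labels[r] = true class of O^(r)
        return sum(sigmoid_loss(misclassification_measure(G[r], labels[r], eta), zeta)
                   for r in range(len(labels)))

    # toy example with C = 3 classes and R = 2 training sequences (log-likelihood scores)
    G = np.array([[-10.0, -12.5, -11.0],
                  [-9.0,  -8.0,  -13.0]])
    labels = [0, 1]
    print(empirical_loss(G, labels))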

Minimizing the empirical loss is equivalent to minimizing the total misclassification error. The CHMM parameters are therefore estimated by carrying out a gradient descent on L(Λ). In order to ensure that the estimated CHMM parameters satisfy the stochastic constraints a_{ij} ≥ 0, \sum_{j=1}^{N_s} a_{ij} = 1, u_{ij} ≥ 0, \sum_{j=1}^{M} u_{ij} = 1, μ_{ijd} ≥ 0, and Σ_{ij} ≥ 0, these parameters are mapped using

a_{ij} \rightarrow \tilde{a}_{ij} = \log a_{ij}, \quad u_{ij} \rightarrow \tilde{u}_{ij} = \log u_{ij}, \quad \mu_{ijd} \rightarrow \tilde{\mu}_{ijd} = \frac{\mu_{ijd}}{\sigma_{ijd}}, \quad \text{and} \quad \Sigma_{ij} \rightarrow \tilde{\Sigma}_{ij} = \log \Sigma_{ij}.
(17)

Then, the parameters are updated with respect to Λ ~ . After updating, the parameters are mapped back using

a_{ij} = \frac{\exp \tilde{a}_{ij}}{\sum_{j'=1}^{N_s} \exp \tilde{a}_{ij'}}, \quad u_{ij} = \frac{\exp \tilde{u}_{ij}}{\sum_{j'=1}^{M} \exp \tilde{u}_{ij'}}, \quad \mu_{ijd} = \tilde{\mu}_{ijd}\, \sigma_{ijd}, \quad \text{and} \quad \Sigma_{ij} = \exp \tilde{\Sigma}_{ij}.
(18)
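The following sketch (illustrative names, not the original code) shows the log/softmax re-parameterization of (17) and (18) that keeps the mixture coefficients stochastic and the variances positive during gradient descent; here the diagonal covariance is represented by the per-dimension variances sigma**2:

    import numpy as np

    def to_unconstrained(u, mu, sigma):
        # forward mapping (17) for one state: u_tilde, mu_tilde, Sigma_tilde
        return np.log(u), mu / sigma, np.log(sigma ** 2)

    def to_constrained(u_tilde, mu_tilde, Sigma_tilde):
        # inverse mapping (18): softmax restores sum_j u_ij = 1, exp restores positive variances
        u = np.exp(u_tilde) / np.sum(np.exp(u_tilde))
        sigma = np.sqrt(np.exp(Sigma_tilde))
        return u, mu_tilde * sigma, sigma

    # round trip on toy parameters (M = 2 components, p = 3 features, diagonal covariances)
    u = np.array([0.7, 0.3])
    mu = np.array([[0.0, 1.0, -1.0], [2.0, 0.5, 0.0]])
    sigma = np.array([[1.0, 2.0, 0.5], [1.5, 1.0, 1.0]])
    u2, mu2, sigma2 = to_constrained(*to_unconstrained(u, mu, sigma))
    print(np.allclose(u, u2), np.allclose(mu, mu2), np.allclose(sigma, sigma2))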

Using a batch estimation mode, it can be shown that the CHMM parameters ã_{ij}^{(c)}, ũ_{ij}^{(c)}, μ̃_{ijd}^{(c)}, and Σ̃_{ij}^{(c)} need to be updated using [18]:

\tilde{\Lambda}(\tau+1) = \tilde{\Lambda}(\tau) - \epsilon \left. \frac{\partial L(\Lambda)}{\partial \tilde{\Lambda}} \right|_{\tilde{\Lambda} = \tilde{\Lambda}(\tau)},
(19)

where

\frac{\partial L(\Lambda)}{\partial \tilde{a}_{ij}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, \delta(q_t^r = i,\, q_{t+1}^r = j)\, (1 - a_{ij}^{(c)})\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},

\frac{\partial L(\Lambda)}{\partial \tilde{u}_{ij}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, u_{ij}^{(c)} (1 - u_{ij}^{(c)})\, \delta(q_t, i)\, \frac{b_{q_t j}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},

\frac{\partial L(\Lambda)}{\partial \tilde{\mu}_{ijd}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, \sigma_{ijd}^{(c)}\, \delta(q_t, i)\, u_{ij}^{(c)} (o_{td}^{(r)} - \mu_{ijd}^{(c)})\, \frac{b_{q_t j}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},

\frac{\partial L(\Lambda)}{\partial \tilde{\Sigma}_{ij}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, (\sigma_{ijd}^{(c)})^{-1}\, \delta(q_t, i)\, u_{ij}^{(c)} \left[(o_{td}^{(r)} - \mu_{ijd}^{(c)})\, \Sigma_{ij}^{-1}\, (o_{td}^{(r)} - \mu_{ijd}^{(c)})^{\top} - 1\right] \frac{b_{q_t j}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)}.

In the above,

\frac{\partial d_c(O)}{\partial g_m(O, \Lambda)} =
\begin{cases}
-1 & \text{if } c = m, \\
\dfrac{\exp[\eta\, g_m(O, \Lambda)]}{\sum_{j, j \neq c} \exp[\eta\, g_j(O, \Lambda)]} & \text{if } c \neq m.
\end{cases}
(20)

2.2 HMM structures for multiple streams

For complex classification systems, data is usually gathered from multiple sources of information that have varying degrees of reliability. Within the context of hidden Markov models, different modalities could contribute to the generation of the sequence. These sources of information usually represent heterogeneous types of data. Assuming that the different sources are equally important in describing all the data might lead to suboptimal solutions.

Multiple modalities appear in several applications and can be broadly grouped into natural modalities and synthetic modalities. The first category consists of naturally available modalities, such as the audio and video used in automatic audio-visual speech recognition (AAVSR) systems [14]. Both speech and lip movement (possibly captured by video) are available when someone speaks. Natural modalities also appear in sign language recognition, where a multi-stream HMM based on hand position and movement has been used [23]. In the second category, the modalities are synthesized by several feature extraction techniques with different characteristics and expressiveness. For instance, for automatic speech recognition (ASR), Mel-frequency cepstral coefficients (MFCC) and formant-like features have been used as different sources within HMM classifiers [24]. Synthesized modalities have also been used to combine upper contour features and lower contour features as two streams for off-line handwritten word recognition [25].

Under the assumption of synchronicity and independence, the streams are handled using a multi-stream HMM (MSHMM). The MSHMM assumes that at each time slot there is a single hidden state, from which the different streams interpret the observations. The independence of the streams means that their interpretation of the hidden states and their generation of the observations are performed independently. Multi-stream HMM techniques have been proposed for both the discrete and the continuous cases [15–17]. In our earlier study [17], we proposed a multi-stream HMM framework for the discrete case that integrates a stream relevance weight for each symbol in each state, and we generalized the BW and the MCE/GPD training algorithms for this structure.

For the continuous case, a few types of MSHMM have been proposed in the literature to learn audio and visual stream relevance weights in speech recognition using continuous HMMs [15, 16]. In these methods, the feature space is partitioned into subspaces generated by the different streams, and different probability density functions (pdfs) are learned for each subspace. The relevance weights for each stream can be fixed a priori by an expert [13] or learned via the minimum classification error/generalized probabilistic descent (MCE/GPD) approach [16]. In [15], the authors adapted the Baum-Welch algorithm [26] to learn the stream relevance weights. However, to derive the maximum likelihood equations, the model was restricted to include only one Gaussian component per state.

In the above approaches, the stream relevance weighting was introduced within the pdf characterizing the continuous HMM either at the mixture level or at the state level. The mixture level weighting is based on factorizing each mixture into a product of weighted streams [16]. In particular, in [16] each component of the MFCC feature vector is considered as a separate stream. This is reflected in the observation probability as

b_i(o_t) = \sum_{j=1}^{M} u_{ij} \prod_{k=1}^{L} \left[\phi(o_t^{(k)}, \mu_{ijk}, \Sigma_{ijk})\right]^{w_{ijk}},
(21)

subject to

\sum_{j=1}^{M} u_{ij} = 1 \quad \text{and} \quad \sum_{k=1}^{L} w_{ijk} = 1,
(22)

where w_{ijk} is the relevance weight of stream k within component j of state i. It is learned via the minimum classification error (MCE) approach with generalized probabilistic descent (GPD) [16]. There is no method to learn these weights using the maximum likelihood (ML) approach. In the rest of the article, we refer to this method as MSCHMM GM.

On the other hand, the state level weighting treats the pdf as a product of exponent weighted mixture of Gaussians [27]. In [27], the streams are the audio and visual modalities of the speech signal, and the observation probability is given by

b_i(o_t) = \prod_{k=1}^{L} \left[\sum_{j=1}^{M} u_{ijk}\, \phi(o_t^{(k)}, \mu_{ijk}, \Sigma_{ijk})\right]^{w_{ik}},
(23)

subject to

\sum_{j=1}^{M} u_{ijk} = 1, \quad \text{and} \quad \sum_{k=1}^{L} w_{ik} = 1,
(24)

where w_{ik} is the relevance weight of each stream k within state i. For this approach, it was shown [16] that it is not possible to derive an update equation for the exponent weights using maximum likelihood learning. As an alternative, the authors in [28] proposed an algorithm where these weights are learned via the MCE/GPD approach while the remaining HMM parameters are estimated by means of traditional maximum likelihood techniques.

We should note here that, in general, (21) and (23) do not represent probability distributions, and are therefore referred to as “scores”. In the rest of the article, we refer to this method as MSCHMM GS.
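For comparison with the linear forms introduced in Section 3, the following sketch (our own illustration; the per-stream component densities are assumed to be precomputed) evaluates the two geometric “scores” (21) and (23) for a single state:

    import numpy as np

    def mixture_level_geometric_score(phi, u, w):
        # Eq. (21): phi[j, k] = N(o_t^(k); mu_ijk, Sigma_ijk), u[j] mixing, w[j, k] exponents
        return np.sum(u * np.prod(phi ** w, axis=1))

    def state_level_geometric_score(phi, u, w):
        # Eq. (23): u[k, j] mixing within stream k, w[k] stream exponents
        return np.prod(np.sum(u * phi, axis=1) ** w)

    # toy example: M = 2 components, L = 2 streams
    phi = np.array([[0.30, 0.05],
                    [0.10, 0.20]])                               # phi[j, k] for (21)
    u21, w21 = np.array([0.6, 0.4]), np.array([[0.5, 0.5], [0.7, 0.3]])
    print(mixture_level_geometric_score(phi, u21, w21))

    phi_s = phi.T                                                # phi_s[k, j] for (23)
    u23, w23 = np.array([[0.6, 0.4], [0.5, 0.5]]), np.array([0.5, 0.5])
    print(state_level_geometric_score(phi_s, u23, w23))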

Even though existing MSCHMM structures provide a solution to combine multiple sources of information and were shown to outperform the baseline HMM, they are not general enough and have several limitations. In particular, they do not provide an optimization framework that learns all the HMM parameters simultaneously. In general, a two-step training approach is needed. First, the BW learning algorithm is used to learn the parameters of the HMM relative to each subspace. Then, the MCE/GPD algorithm is used to learn the relevance weights. This two-step approach is due to the difficulty that arises when using the proposed pdf within the BW learning algorithm. Consequently, the feature relevance weights learned with MCE/GPD may not correspond to local minima of the ML optimization. The only approach that extends BW learning was derived for the special case that limits the number of components per state to one, which can be too restrictive for many applications.

To overcome the above limitations, we propose a generic approach that integrates stream discrimination within the CHMM classifier. In particular, we propose linear “scores" instead of the geometric ones in (21) and (23). We show that all parameters of the proposed model could be optimized simultaneously and we derive the necessary conditions to optimize them for both the MLE and MCE training approaches.

3 Multi-stream continuous HMM

We assume that we have L streams of information. These streams could have been generated by different sensors and/or different feature extraction algorithms. Each stream is represented by a different subset of features. We propose two multi-stream continuous HMM (MSCHMM) structures that integrate stream relevance weights and alleviate the limitations of existing MSCHMM structures. In particular, we generalize the objective function to include stream relevance weights and derive the necessary conditions to update all parameters simultaneously. This is achieved by linearizing the “score” or the pdf approximation of the observation. We use the compact notation

λ=(π,A,B,W),
(25)

to indicate the complete set of parameters of the proposed model. This includes the initial probabilities π, the transition probability A, the observation probability distribution B, and the stream relevance weights W. The distributions π and A are defined in the same way as for the baseline CHMM. However, B and W are defined differently and depend on whether the streaming is at the mixture or at the state level.

In this article, we propose two forms of pdf approximations. The first one is a mixture level streaming pdf that integrates local stream relevance weights that depend on the states and their mixture components. We will refer to this model as MSCHMM Lm. The second version uses a state level streaming pdf where the relevance weights depend only on the states. We will refer to this model as MSCHMM Ls.

3.1 Multi-stream HMM with mixture level streaming

Let N(o_t^{(k)}, μ_{ijk}, Σ_{ijk}) be a normal pdf with mean μ_{ijk} and covariance matrix Σ_{ijk} that represents the j-th component in state i, taking into account only the feature subset generated by stream k. Let w_{ijk} be the relevance weight of stream k in the j-th component of state i. To cover the aggregate feature space generated by the L streams, we use a mixture of L normal pdfs, i.e.,

b_{ij}(o_t) = \sum_{k=1}^{L} w_{ijk}\, b_{ijk}(o_t^{(k)}) = \sum_{k=1}^{L} w_{ijk}\, N(o_t^{(k)}, \mu_{ijk}, \Sigma_{ijk}).
(26)

To model each state by multiple components, we let

b_i(o_t) = \sum_{j=1}^{M} u_{ij}\, b_{ij}(o_t),
(27)

subject to

\sum_{j=1}^{M} u_{ij} = 1, \quad \text{and} \quad \sum_{k=1}^{L} w_{ijk} = 1.
(28)

In (27), u i j is the mixing coefficient as defined in the standard CHMM (3). This linear form of the probability density function is motivated by the following probabilistic reasoning:

b_i(o_t) = \Pr(o_t \mid q_t = i; \lambda) = \sum_{j=1}^{M} \Pr(o_t \mid q_t = i, e_t = j; \lambda)\, \Pr(e_t = j \mid q_t = i; \lambda),

where e t is a random variable representing the index of the component occurring at time t. By introducing a random variable, f t , that represents the index of the most relevant stream at time t, we can rewrite b i (o t ) as:

b_i(o_t) = \sum_{j=1}^{M} \Pr(e_t = j \mid q_t = i; \lambda) \sum_{k=1}^{L} \Pr(o_t \mid q_t = i, e_t = j, f_t = k; \lambda)\, \Pr(f_t = k \mid q_t = i, e_t = j; \lambda).

We assume that at time t one of the L streams is significantly more relevant than the others. In other words, the fusion of the L sources of information is performed in a mutually exclusive manner, and not in a “collective” way where all the sources contribute (each with a small portion) to the characterization of the raw data. Then,

b_i(o_t) \approx \sum_{j=1}^{M} \Pr(e_t = j \mid q_t = i; \lambda) \sum_{k=1}^{L} \Pr(f_t = k \mid q_t = i, e_t = j; \lambda)\, \Pr(o_t^{(k)} \mid q_t = i, e_t = j, f_t = k; \lambda).

It follows then that:

b_{ijk}(o_t^{(k)}) = \Pr(o_t^{(k)} \mid q_t = i, e_t = j, f_t = k; \lambda), \qquad
w_{ijk} = \Pr(f_t = k \mid q_t = i, e_t = j; \lambda), \qquad
u_{ij} = \Pr(e_t = j \mid q_t = i; \lambda).
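The linear mixture-level approximation of (26) and (27) can be evaluated as in the following sketch (Python/NumPy, with illustrative names; the stream feature subsets o_t^(k) are assumed to be already split out):

    import numpy as np

    def gaussian_diag(o, mu, var):
        # diagonal-covariance Gaussian density
        d = o - mu
        return np.exp(-0.5 * np.sum(d * d / var)) / np.sqrt(np.prod(2.0 * np.pi * var))

    def b_ij(o_streams, w_ij, mu_ij, var_ij):
        # Eq. (26): b_ij(o_t) = sum_k w_ijk N(o_t^(k); mu_ijk, Sigma_ijk)
        return sum(w_ij[k] * gaussian_diag(o_streams[k], mu_ij[k], var_ij[k])
                   for k in range(len(o_streams)))

    def b_i(o_streams, u_i, w_i, mu_i, var_i):
        # Eq. (27): b_i(o_t) = sum_j u_ij b_ij(o_t), with the constraints (28)
        return sum(u_i[j] * b_ij(o_streams, w_i[j], mu_i[j], var_i[j])
                   for j in range(len(u_i)))

    # toy state: M = 2 components, L = 2 streams of dimension 2 each
    o_streams = [np.array([0.1, -0.2]), np.array([1.0, 0.4])]
    u_i = np.array([0.5, 0.5])
    w_i = np.array([[0.8, 0.2], [0.3, 0.7]])     # w_i[j, k], each row sums to 1
    mu_i = np.zeros((2, 2, 2))                   # mu_i[j, k] in R^2
    var_i = np.ones((2, 2, 2))
    print(b_i(o_streams, u_i, w_i, mu_i, var_i))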

The MLE learning algorithm is an iterative approach that is prone to local minima. Therefore, it is important to provide good initial estimates of the parameters. For our approach, we propose the following initialization scheme. First, we use the SCAD algorithm [29] to cluster the training data into N_s clusters. The prototype of each of the N_s clusters is taken as the state representative vector. Next, we partition the observations assigned to each state cluster into M clusters to learn the M Gaussian components within each state. One advantage of using SCAD to perform this partitioning is that this algorithm learns feature relevance weights for each cluster. These relevance weights and the cardinality, mean, and covariance of each cluster are then used to initialize the MSCHMM parameters. After initialization, the model parameters are tuned using the maximum likelihood or the discriminative learning approaches. In the following, we generalize these learning methods for the proposed MSCHMM architectures.

3.1.1 Learning model parameters with generalized MLE

Given a sequence of training observations O = [o_1, …, o_T], the parameters of λ could be learned by maximizing the likelihood of the observation sequence O, i.e., Pr(O|λ). We achieve this by generalizing the continuous Baum-Welch algorithm to include a stream relevance weight component. We define the generalized Baum-Welch algorithm through the following auxiliary function:

Q(\lambda, \bar{\lambda}) = \sum_{Q} \sum_{E} \sum_{F} \Pr(Q, E, F \mid O, \lambda) \ln \Pr(O, Q, E, F \mid \bar{\lambda}),
(29)

where E = [e_1, …, e_T] and F = [f_1, …, f_T] are two sequences of random variables representing the component and stream indices at each time step. It can be shown that a critical point of Pr(O|λ), with respect to λ, is a critical point of the new auxiliary function Q(λ, λ̄) with respect to λ̄ when λ̄ = λ, that is, ∂Pr(O|λ)/∂λ = ∂Q(λ, λ̄)/∂λ̄ evaluated at λ̄ = λ. Maximizing the likelihood of the training data results in the following update equations (see Appendix 2):

w_{ijk} = \frac{\sum_{t=1}^{T} \nu_t(i,j,k)}{\sum_{t=1}^{T} \kappa_t(i,j)},
(30)
u_{ij} = \frac{\sum_{t=1}^{T} \kappa_t(i,j)}{\sum_{t=1}^{T} \gamma_t(i)},
(31)
\mu_{ijkd} = \frac{\sum_{t=1}^{T} \nu_t(i,j,k)\, o_{td}^{(k)}}{\sum_{t=1}^{T} \nu_t(i,j,k)},
(32)
\Sigma_{ijk} = \frac{\sum_{t=1}^{T} \nu_t(i,j,k)\, (o_t^{(k)} - \mu_{ijk})(o_t^{(k)} - \mu_{ijk})^{\top}}{\sum_{t=1}^{T} \nu_t(i,j,k)}.
(33)

In the above,

\gamma_t(i) = \frac{\alpha_t(i)\, \beta_t(i)}{\sum_{j=1}^{N_s} \alpha_t(j)\, \beta_t(j)},
(34)
\kappa_t(i,j) = \gamma_t(i)\, \frac{u_{ij}\, b_{ij}(o_t)}{b_i(o_t)},
(35)

and

\nu_t(i,j,k) = \gamma_t(i)\, \frac{u_{ij}\, w_{ijk}\, N(o_t^{(k)}, \mu_{ijk}, \Sigma_{ijk})}{b_i(o_t)}.
(36)

In the case of multiple observations [O (1),…,O (R)], it can be shown that the update equations become:

w_{ijk} = \frac{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,j,k)}{\sum_{r=1}^{R} \sum_{t=1}^{T} \kappa_t^r(i,j)},
(37)
\mu_{ijkd} = \frac{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,j,k)\, o_{td}^{(k)(r)}}{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,j,k)},
(38)
\Sigma_{ijk} = \frac{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,j,k)\, (o_t^{(k)(r)} - \mu_{ijk})(o_t^{(k)(r)} - \mu_{ijk})^{\top}}{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,j,k)}.
(39)
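As an illustration of the re-estimation formulas (30)-(33) and (37)-(39), the following sketch performs one update of the mixture-level parameters under the assumption that the posteriors gamma, kappa, and nu of (34)-(36) have already been computed from the forward/backward variables; names and shapes are our own:

    import numpy as np

    def reestimate_mixture_level(nu, kappa, gamma, O):
        # nu: (T, N, M, L), kappa: (T, N, M), gamma: (T, N), O: list of L arrays (T, p_k)
        # returns updated w[i, j, k], u[i, j], and per-stream means mu[k][i, j, :]
        w = nu.sum(axis=0) / kappa.sum(axis=0)[:, :, None]       # (30) / (37)
        u = kappa.sum(axis=0) / gamma.sum(axis=0)[:, None]       # (31)
        mu = [np.einsum('tij,td->ijd', nu[:, :, :, k], O[k]) /
              nu[:, :, :, k].sum(axis=0)[:, :, None]             # (32) / (38)
              for k in range(len(O))]
        return w, u, mu

    # toy shapes: T = 5 time steps, N = 2 states, M = 3 components, L = 2 streams with p_k = 2
    rng = np.random.default_rng(1)
    nu = rng.random((5, 2, 3, 2)); kappa = nu.sum(axis=3); gamma = kappa.sum(axis=2)
    O = [rng.random((5, 2)) for _ in range(2)]
    w, u, mu = reestimate_mixture_level(nu, kappa, gamma, O)
    print(w.sum(axis=2))   # each row sums to 1, consistent with (28)
    print(u.sum(axis=1))   # each row sums to 1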

Algorithm 1 outlines the steps of the proposed generalized BW algorithm to learn all of the MSCHMM Lm parameters simultaneously.

Algorithm 1 Generalized BW training for the mixture level MSCHMM

3.1.2 Learning model parameters with generalized MCE/GPD

As an alternative training approach, we generalize the MCE/GPD to develop a discriminative training procedure for the proposed MSCHMM Lm. In particular, we extend the discriminant function in (10) to accommodate the stream relevance weights using:

g_c(O, Q, \Lambda) = \Pr(O, Q; \lambda_c) = \pi_{q_0}^{(c)} \prod_{t=1}^{T-1} a_{q_t q_{t+1}}^{(c)} \prod_{t=1}^{T} b_{q_t}^{(c)}(o_t) = \pi_{q_0}^{(c)} \prod_{t=1}^{T-1} a_{q_t q_{t+1}}^{(c)} \prod_{t=1}^{T} \sum_{j=1}^{M} u_{q_t j}^{(c)} \sum_{k=1}^{L} w_{q_t jk}^{(c)}\, b_{q_t jk}^{(c)}(o_t^{(k)}).
(40)

In the above, b_{ijk}(o_t^{(k)}) = N(o_t^{(k)}, μ_{ijk}, Σ_{ijk}), where N(o_t^{(k)}, μ_{ijk}, Σ_{ijk}) represents the normal density function with mean μ_{ijk} and covariance Σ_{ijk}. We assume that the covariance matrix Σ_{ijk} is diagonal; hence, Σ_{ijk} = diag((σ_{ijkd})^2)_{d=1}^{p}. Thus, g_c(O, Λ) = log[g_c(O, Q̄, Λ)], where Q̄ = (q̄_0, q̄_1, …, q̄_T) is the optimal state sequence that achieves max_Q g_c(O, Q, Λ), which could be computed using the Viterbi algorithm [22].

The misclassification measure of sequence O is defined by:

d_c(O) = -g_c(O, \Lambda) + \log\left[\frac{1}{C-1} \sum_{j, j \neq c} \exp\left[\eta\, g_j(O, \Lambda)\right]\right]^{\frac{1}{\eta}},
(41)

where η is a positive number. A positive d c (O) implies misclassification and a negative d c (O) implies correct decision.

The misclassification measure is embedded in a smoothed zero-one function, referred to as loss function, defined as:

l c (O,Λ)=l( d c (O)),
(42)

where l is the sigmoid function in (14).

For an unknown sequence O, the classifier performance is measured by:

l(O; \Lambda) = \sum_{c=1}^{C} l_c(O; \Lambda)\, I(O \in C_c),
(43)

where I(.) is the indicator function. Given a set of training observation sequences O (r), r=1,2,…,R, an empirical loss function on the training data, that can approximate the true Bayes risk is defined as:

L(\Lambda) = \sum_{r=1}^{R} \sum_{c=1}^{C} l_c(O^{(r)}; \Lambda)\, I(O^{(r)} \in C_c).
(44)

The MSCHMM Lm parameters are estimated by applying a steepest descent optimization to L(Λ). In order to ensure that the estimated MSCHMM Lm parameters satisfy the stochastic constraints, we map them using (17) and

w_{ijk} \rightarrow \tilde{w}_{ijk} = \log w_{ijk}.
(45)

Then, the parameters are updated with respect to Λ ~ . After updating, we map them back using (18) and

w_{ijk} = \frac{\exp \tilde{w}_{ijk}}{\sum_{k'=1}^{L} \exp \tilde{w}_{ijk'}}.
(46)

Using a batch estimation mode, it can be shown that the MSCHMM Lm parameters ũ_{ij}^{(c)}, w̃_{ijk}^{(c)}, μ̃_{ijkd}^{(c)}, and σ̃_{ijkd}^{(c)} need to be updated iteratively using:

\tilde{\Lambda}(\tau+1) = \tilde{\Lambda}(\tau) - \epsilon \left. \frac{\partial L(\Lambda)}{\partial \tilde{\Lambda}} \right|_{\tilde{\Lambda} = \tilde{\Lambda}(\tau)},
(47)

where

\frac{\partial L(\Lambda)}{\partial \tilde{u}_{ij}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, u_{ij}^{(c)} (1 - u_{ij}^{(c)})\, \delta(q_t, i)\, \frac{b_{q_t j}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},
(48)
\frac{\partial L(\Lambda)}{\partial \tilde{w}_{ijk}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, w_{ijk}^{(c)} (1 - w_{ijk}^{(c)})\, \delta(q_t, i)\, \frac{b_{q_t j}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},
(49)
\frac{\partial L(\Lambda)}{\partial \tilde{\mu}_{ijkd}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, \sigma_{ijkd}^{(c)}\, \delta(q_t, i)\, u_{ij}^{(c)} (o_{td}^{(r)} - \mu_{ijkd}^{(c)})\, \frac{b_{q_t j}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},
(50)

and

\frac{\partial L(\Lambda)}{\partial \tilde{\Sigma}_{ijk}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, (\Sigma_{ijk}^{(c)})^{-1}\, \delta(q_t, i)\, u_{ij}^{(c)} \left[(o_t^{(k)(r)} - \mu_{ijk}^{(c)})\, \Sigma_{ijk}^{-1}\, (o_t^{(k)(r)} - \mu_{ijk}^{(c)})^{\top} - 1\right] \frac{b_{q_t j}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)}.
(51)

In the above, ∂d_c(O)/∂g_m(O, Λ) is as defined in (20). The update equation for ã_{ij}^{(c)} remains the same as that given by (19).

Algorithm 2 outlines the steps needed to learn the parameters of all the models λ c using the MCE/GPD framework.

Algorithm 2 MCE/GPD training of the mixture level MSCHMM

3.2 Multi-stream HMM with state level streaming

For the MSCHMM Ls structure, we assume that the streaming is performed at the state level, i.e., each state is generated by L different streams, and each stream embodies M Gaussian components. Let b i k be the probability density function of state i within stream k. Since stream k is modeled by a mixture of M components, b i k can be written as:

b_{ik}(o_t^{(k)}) = \sum_{j=1}^{M} u_{ikj}\, b_{ikj}(o_t^{(k)}) = \sum_{j=1}^{M} u_{ikj}\, N(o_t^{(k)}, \mu_{ikj}, \Sigma_{ikj}).
(52)

Let w i k be the relevance weight of stream k in state i. The probability density function covering the entire feature space is then approximated by:

b_i(o_t) = \sum_{k=1}^{L} w_{ik}\, b_{ik}(o_t^{(k)}),
(53)

subject to

\sum_{k=1}^{L} w_{ik} = 1, \quad \text{and} \quad \sum_{j=1}^{M} u_{ikj} = 1.
(54)

The linear form of the probability density function in (53) is motivated by the following probabilistic reasoning:

b_i(o_t) = \Pr(o_t \mid q_t = i; \lambda) = \sum_{k=1}^{L} \Pr(o_t \mid q_t = i, f_t = k; \lambda)\, \Pr(f_t = k \mid q_t = i; \lambda),

where f_t is a random variable representing the most relevant stream at time t. Similar to the component level case, we assume that the fusion of the L sources of information is performed in a mutually exclusive manner. Hence, we have the following approximation:

\Pr(o_t \mid q_t = i, f_t = k; \lambda) = \Pr(o_t^{(k)} \mid q_t = i, f_t = k; \lambda).

It follows that:

b_i(o_t) \approx \sum_{k=1}^{L} \Pr(o_t^{(k)} \mid q_t = i, f_t = k; \lambda)\, \Pr(f_t = k \mid q_t = i; \lambda) = \sum_{k=1}^{L} \Pr(f_t = k \mid q_t = i; \lambda) \sum_{j=1}^{M} \Pr(o_t^{(k)} \mid q_t = i, f_t = k, e_t = j; \lambda)\, \Pr(e_t = j \mid q_t = i, f_t = k; \lambda),

where e_t is a random variable that represents the index of the component that occurs at time t. It follows then that

b_{ikj}(o_t^{(k)}) = \Pr(o_t^{(k)} \mid q_t = i, f_t = k, e_t = j; \lambda), \qquad
u_{ikj} = \Pr(e_t = j \mid q_t = i, f_t = k; \lambda), \qquad
w_{ik} = \Pr(f_t = k \mid q_t = i; \lambda).
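Analogously to the mixture-level case, a minimal sketch of the state-level approximation (52)-(53) is given below (illustrative Python, with the stream feature subsets assumed to be given):

    import numpy as np

    def gaussian_diag(o, mu, var):
        d = o - mu
        return np.exp(-0.5 * np.sum(d * d / var)) / np.sqrt(np.prod(2.0 * np.pi * var))

    def b_ik(o_k, u_ik, mu_ik, var_ik):
        # Eq. (52): per-stream mixture b_ik(o_t^(k)) = sum_j u_ikj N(o_t^(k); mu_ikj, Sigma_ikj)
        return sum(u_ik[j] * gaussian_diag(o_k, mu_ik[j], var_ik[j]) for j in range(len(u_ik)))

    def b_i(o_streams, w_i, u_i, mu_i, var_i):
        # Eq. (53): b_i(o_t) = sum_k w_ik b_ik(o_t^(k)), subject to (54)
        return sum(w_i[k] * b_ik(o_streams[k], u_i[k], mu_i[k], var_i[k])
                   for k in range(len(o_streams)))

    # toy state: L = 2 streams, M = 2 components per stream, 2 features per stream
    o_streams = [np.array([0.3, 0.1]), np.array([-0.5, 0.9])]
    w_i = np.array([0.6, 0.4])
    u_i = np.array([[0.5, 0.5], [0.7, 0.3]])     # u_i[k, j]
    mu_i = np.zeros((2, 2, 2)); var_i = np.ones((2, 2, 2))
    print(b_i(o_streams, w_i, u_i, mu_i, var_i))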

3.2.1 Learning model parameters with generalized MLE

Using similar steps to those used in the MSCHMM Lm, it can be shown (see Appendix 2) that the model parameters need to be updated iteratively using:

w_{ik} = \frac{\sum_{t=1}^{T} \kappa_t(i,k)}{\sum_{t=1}^{T} \gamma_t(i)},
(55)
u_{ikj} = \frac{\sum_{t=1}^{T} \nu_t(i,k,j)}{\sum_{t=1}^{T} \kappa_t(i,k)},
(56)
\mu_{ikjd} = \frac{\sum_{t=1}^{T} \nu_t(i,k,j)\, o_{td}^{(k)}}{\sum_{t=1}^{T} \nu_t(i,k,j)},
(57)
\Sigma_{ikj} = \frac{\sum_{t=1}^{T} \nu_t(i,k,j)\, (o_t^{(k)} - \mu_{ikj})(o_t^{(k)} - \mu_{ikj})^{\top}}{\sum_{t=1}^{T} \nu_t(i,k,j)}.
(58)

In the above,

\gamma_t(i) = \Pr(q_t = i \mid O, \lambda), \qquad
\kappa_t(i,k) = \gamma_t(i)\, \frac{w_{ik}\, b_{ik}(o_t^{(k)})}{b_i(o_t)}, \qquad
\nu_t(i,k,j) = \gamma_t(i)\, \frac{w_{ik}\, u_{ikj}\, N(o_t^{(k)}, \mu_{ikj}, \Sigma_{ikj})}{b_i(o_t)}.
(59)

The updating equation for a_{ij} remains the same as in the standard Baum-Welch algorithm (i.e., as in (6)). In the case of multiple observations [O^{(1)}, …, O^{(R)}], it can be shown that the parameters need to be updated using:

w_{ik} = \frac{\sum_{r=1}^{R} \sum_{t=1}^{T} \kappa_t^r(i,k)}{\sum_{r=1}^{R} \sum_{t=1}^{T} \gamma_t^r(i)},
(60)
u_{ikj} = \frac{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,k,j)}{\sum_{r=1}^{R} \sum_{t=1}^{T} \kappa_t^r(i,k)},
(61)
\mu_{ikjd} = \frac{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,k,j)\, o_{td}^{(k)(r)}}{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,k,j)},
(62)
\Sigma_{ikj} = \frac{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,k,j)\, (o_t^{(k)(r)} - \mu_{ikj})(o_t^{(k)(r)} - \mu_{ikj})^{\top}}{\sum_{r=1}^{R} \sum_{t=1}^{T} \nu_t^r(i,k,j)}.
(63)

Algorithm 3 outlines the steps of the MLE training procedure for the different parameters of the MSCHMM Ls.

Algorithm 3 Generalized BW training for the state level MSCHMM

3.2.2 Learning model parameters with generalized MCE/GPD

We generalize the MCE/GPD training approach for the MSCHMM Ls by extending the discriminant function in (10) to accommodate the stream relevance weights using:

g_c(O, Q, \Lambda) = \Pr(O, Q; \lambda_c) = \pi_{q_0}^{(c)} \prod_{t=1}^{T-1} a_{q_t q_{t+1}}^{(c)} \prod_{t=1}^{T} b_{q_t}^{(c)}(o_t) = \pi_{q_0}^{(c)} \prod_{t=1}^{T-1} a_{q_t q_{t+1}}^{(c)} \prod_{t=1}^{T} \sum_{k=1}^{L} w_{q_t k}^{(c)} \sum_{j=1}^{M} u_{q_t kj}^{(c)}\, b_{q_t kj}^{(c)}(o_t^{(k)}).
(64)

Defining the misclassification measure as in the component level streaming (Equation (41)) and following similar steps to minimize it, it can be shown that the MSCHMM Ls parameters need to be updated iteratively using

\tilde{\Lambda}(\tau+1) = \tilde{\Lambda}(\tau) - \epsilon \left. \frac{\partial L(\Lambda)}{\partial \tilde{\Lambda}} \right|_{\tilde{\Lambda} = \tilde{\Lambda}(\tau)},
(65)

where

\frac{\partial L(\Lambda)}{\partial \tilde{a}_{ij}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, \delta(q_t^r = i,\, q_{t+1}^r = j)\, (1 - a_{ij}^{(c)})\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},

\frac{\partial L(\Lambda)}{\partial \tilde{w}_{ik}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, w_{ik}^{(c)} (1 - w_{ik}^{(c)})\, \delta(q_t, i)\, \frac{b_{q_t k}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},

\frac{\partial L(\Lambda)}{\partial \tilde{u}_{ikj}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, u_{ikj}^{(c)} (1 - u_{ikj}^{(c)})\, w_{ik}^{(c)}\, \delta(q_t, i)\, \frac{b_{q_t kj}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},

\frac{\partial L(\Lambda)}{\partial \tilde{\mu}_{ikjd}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, \sigma_{ikjd}^{(c)}\, \delta(q_t, i)\, w_{ik}^{(c)}\, u_{ikj}^{(c)} (o_{td}^{(r)} - \mu_{ikjd}^{(c)})\, \frac{b_{q_t kj}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)},

\frac{\partial L(\Lambda)}{\partial \tilde{\Sigma}_{ikj}^{(c)}} = \sum_{r=1}^{R} \sum_{m=1}^{C} \sum_{t=1}^{T} \zeta\, l_m(O_r, \Lambda)\, (1 - l_m(O_r, \Lambda))\, (\Sigma_{ikj}^{(c)})^{-1}\, \delta(q_t, i)\, w_{ik}^{(c)}\, u_{ikj}^{(c)} \left[(o_t^{(k)(r)} - \mu_{ikj}^{(c)})\, \Sigma_{ikj}^{-1}\, (o_t^{(k)(r)} - \mu_{ikj}^{(c)})^{\top} - 1\right] \frac{b_{q_t kj}^{(c)}(o_t)}{b_{q_t}^{(c)}(o_t)}\, \frac{\partial d_c(O_r)}{\partial g_m(O_r, \Lambda)}.

In the above, ∂d_c(O)/∂g_m(O, Λ) is as defined in (20). Algorithm 4 outlines the steps of the MCE/GPD training procedure for the different parameters of the MSCHMM Ls.

Algorithm 4 MCE/GPD training of the state level MSCHMM

4 Experimental results

To illustrate the performance of the proposed MSCHMM architectures, we first use synthetically generated data sets to outline the advantages of the proposed structures and their learning algorithms. Then, we apply them to the problem of landmine detection using ground penetrating radar (GPR) sensor data.

4.1 Synthetic data

4.1.1 Data generation

We generate two synthetic data sets. The first one is a single-stream sequential data set, and the second is a multi-stream one. Both sets are generated using two continuous HMMs to simulate a two-class problem. We follow an approach similar to the one used in [30] to generate sequential data using a continuous HMM with N_s = 4 states, M = 3 components per state, and 4D observations. We start by fixing μ_k ∈ R^4, k = 1, …, N_s, to represent the different states. Then, we randomly generate M vectors from each normal distribution, with mean μ_k and identity covariance matrix, to form the mixture components of each state. The mixture weights of the components within each state are randomly generated and then normalized. The covariance of each mixture component is set to the identity matrix. The initial state probability distribution and the state transition probability distribution are generated randomly from a uniform distribution in the interval [0,1]. The randomly generated values are then scaled to satisfy the stochastic constraints. For more information about the data generation procedure, we refer the reader to [30].

For the single stream sequential data, we generate R sequences of length T=15 vectors for each of the two classes. We start by generating a continuous HMM with N s states and M components as described above. Then, we generate the single stream sequences using Algorithm 5.

Algorithm 5 Single stream sequential data generation for each class
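Algorithm 5 itself is not reproduced here; the following minimal sketch (Python/NumPy, with illustrative names) only shows the kind of sampling it performs, i.e., drawing one observation sequence from a CHMM with the randomly generated parameters and identity-covariance components described above:

    import numpy as np

    def sample_sequence(pi, A, u, means, T=15, rng=None):
        # draw T observations from a CHMM; pi: (N,), A: (N, N), u: (N, M), means: (N, M, p)
        rng = np.random.default_rng() if rng is None else rng
        N, M, p = means.shape
        obs, q = [], rng.choice(N, p=pi)
        for t in range(T):
            j = rng.choice(M, p=u[q])                    # pick a component of the current state
            obs.append(rng.normal(means[q, j], 1.0))     # identity covariance
            q = rng.choice(N, p=A[q])                    # move to the next state
        return np.array(obs)

    # toy model: N = 4 states, M = 3 components, p = 4 features (as in the synthetic data)
    rng = np.random.default_rng(0)
    N, M, p = 4, 3, 4
    pi = np.full(N, 1 / N)
    A = rng.random((N, N)); A /= A.sum(axis=1, keepdims=True)
    u = rng.random((N, M)); u /= u.sum(axis=1, keepdims=True)
    means = rng.normal(size=(N, M, p))
    print(sample_sequence(pi, A, u, means, rng=rng).shape)   # (15, 4)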

For the multi-stream case, we assume that the sequential data is synthesized from L = 2 streams and that each stream k is described by N_s states, where each state is represented by a vector μ_n^k of dimension p_k = 2. For each state i, three components are generated from each stream k and concatenated to form double-stream components. To simulate components with various relevance weights, we use three combinations of components in each state. The first combination concatenates a component from each stream by simply appending the features (i.e., both streams are relevant). The second combination concatenates noise (instead of stream 2 features) to stream 1 (i.e., stream 1 is relevant and stream 2 is irrelevant). The last combination concatenates noise (instead of stream 1 features) to stream 2 (i.e., stream 1 is irrelevant and stream 2 is relevant). Thus, for each state i we have a set of double-stream components where the streams have different degrees of relevance. Once the set of double-stream components is generated, a state transition probability distribution is generated, and the double-stream sequential data is generated using Algorithm 5.

4.1.2 Results

First, we apply the baseline CHMM and the proposed multi-stream CHMM structures to the single stream sequential data, where the features are generated from one homogeneous source of information. The MSCHMM architectures treat the single stream sequential data as a double-stream one (each stream is assumed to have 2D observation vectors). In this experiment, all models are trained using the standard Baum-Welch (for the baseline CHMM), the generalized Baum-Welch (for the MSCHMM), the standard and generalized MCE/GPD algorithms, or a combination of the two (Baum-Welch followed by MCE/GPD). The results of this experiment are reported in Table 1. As can be seen, the performance of the proposed MSCHMM structures and the baseline CHMM are comparable for most training methods. This is because when both streams are equally relevant for the entire data, the different streams receive nearly equal weights in all states' components and the MSCHMM reduces to the baseline CHMM. Figure 1 displays the weights of the stream 1 components. As can be seen, most weights are clustered around 0.5 (the maximum weight is less than 0.6 and the minimum weight is more than 0.4). Since the weights of both streams must sum to 1, both streams are equally important for all components.

Table 1 Classification rates of the different CHMM structures of the single stream data
Figure 1 Stream 1 relevance weights of the mixture components in all four states, learned by the MSCHMM Lm model for the single-stream sequential data.

The second experiment involves applying both the baseline CHMM and the proposed MSCHMM to the double stream sequential data, where the features are generated from two different streams. In this experiment, the various models are trained using the Baum-Welch, MCE, and Baum-Welch followed by MCE training algorithms. First, we note that using stream relevance weights, the generalized Baum-Welch and MCE training algorithms converge faster and the MCE results in a smaller training error. Figure 2 displays the number of misclassified samples versus the number of iterations for the baseline CHMM and the proposed MSCHMM using MCE/GPD training. As can be seen, learning stream relevance weights causes the error to drop faster. In fact, at each iteration, the classification error for the MSCHMM is lower than that of the baseline CHMM. However, as shown in Table 2, for each iteration, the computational complexity of the proposed MSCHMM is about 2.5 times that of the baseline CHMM.

Figure 2 Number of misclassified samples versus training iteration number for the standard and multi-stream CHMMs.

Table 2 CPU time per iteration for the MCE/GPD training

The testing results are reported in Table 3. First, we note that all proposed multi-stream CHMMs outperform the baseline CHMM for all training methods. This is because the data set used for this experiment was generated from two streams with different degrees of relevance, and the baseline CHMM treats both streams as equally important. The proposed MSCHMM structures, on the other hand, learn the optimal relevance weights for each component within each state. The stream 1 weights learned by the MSCHMM Lm are displayed in Figure 3. As can be seen, some components are highly relevant (weights close to 1) in some states, while others are completely irrelevant (weights close to 0). The latter correspond to the components where stream 1 features were replaced by noise in the data generation. We should note here that, in theory, we assumed that at time t one of the L streams is significantly more relevant than the others in order to derive update equations for all parameters using the Baum-Welch algorithm (refer to Section 3.1). However, in practice, the performance of the algorithm does not break down if this assumption does not hold. For instance, in Figure 1 the weights are equal when all streams are relevant, while in Figure 3 the weights are different but not binary.

Table 3 Performance of the different CHMM structures on the multi-stream data
Figure 3 Stream 1 relevance weights of the mixture components in all four states, learned by the MSCHMM Lm model for the double-stream sequential data.

In Table 3, we also compare our approach to the two state-of-the-art MSCHMMs that were discussed in Section 2.2. The proposed multi-stream CHMMs outperform both of these methods. This is mainly due to the fact that the proposed MSCHMM structures allow all parameters to be updated simultaneously for both Baum-Welch and MCE/GPD training, whereas for the MSCHMM G structures, the parameters are learned separately by two different algorithms with two different objective functions.

From Table 3, we also notice that using the generalized Baum-Welch followed by the MCE to learn the model parameters is a better strategy. This is consistent with what has been reported for the baseline HMM [18].

4.2 Application to landmine detection

4.2.1 Data collection

We apply the proposed multi-stream CHMM structures to the problem of detecting buried landmines. We use data collected using a robotic mine detection system. This system includes a ground penetrating radar (GPR) and a Wideband Electro-Magnetic Induction (WEMI) sensor and is shown in Figure 4. Each sensor collects data as the system moves. Only data collected by the GPR sensor is used in our experiments. The GPR sensor [31] collects 24 channels of data. Adjacent channels are spaced approximately 5 cm apart in the cross-track direction, and sequences (or scans) are taken at approximately 1 centimeter down-track intervals. The system uses a V-dipole antenna that generates a wide-band pulse ranging from 200 MHz to 7 GHz. Each A-scan, that is, the measured waveform collected in one channel at one down-track position, contains 516 time samples at which the GPR signal return is recorded. We model an entire collection of input data as a 3D matrix of sample values, S(z,x,y); z=1,…,516;x=1,…,24;y=1,…,T, where T is the total number of collected scans, and the indices z, x, and y represent depth, cross-track position, and down-track positions, respectively.

Figure 4 NIITEK autonomous mine detection system.

The autonomous mine detection system (shown in Figure 4) was used to acquire large collections of GPR data from two geographically distinct test sites in the eastern U.S. with natural soil. The two sites are partitioned into grids with known mine locations. Twenty-eight distinct mine types, which can be classified into four categories: anti-tank metal (ATM), anti-tank with low metal content (ATLM), anti-personnel metal (APM), and anti-personnel with low metal content (APLM), were used. All targets were buried up to 5 inches deep. Multiple data collections were performed at each site, resulting in a large and diverse collection of signatures. In addition to mines, clutter signatures were used to test the robustness of the detectors. Clutter arises from two different processes. One type of clutter is emplaced and surveyed. Objects used for this clutter can be classified into two categories: high metal clutter (HMC) and non-metal clutter (NMC). High metal clutter, such as steel scraps, bolts, and soft-drink cans, was emplaced and surveyed in an effort to test the robustness of the detection algorithms. Non-metal clutter, such as concrete blocks and wood blocks, was emplaced and surveyed in an effort to test the robustness of the GPR-based detection algorithms. The other type of clutter, referred to as blank, is caused by disturbing the soil.

For our experiment, we use a subset of the data collection that includes 600 mine and 600 clutter signatures. The raw GPR data are first preprocessed to enhance the mine signatures for detection. We identify the location of the ground bounce as the signal's peak and align the multiple signals with respect to their peaks. This alignment is necessary because the mounted system cannot maintain the radar antenna at a fixed distance above the ground. Since the system is looking for buried objects, the early time samples of each signal, up to a few samples beyond the ground bounce, are discarded so that only data corresponding to regions below the ground surface are processed.

Figure 5 displays several preprocessed B-scans (sequences of A-scans), both down-track (formed from a time sequence of A-scans from a single sensor channel) and cross-track (formed from each channel's response at a single down-track position), the latter taken at the position indicated by a line in the down-track image. The objects scanned are (a) a high-metal content anti-tank mine, (b) a high-metal content anti-personnel mine, and (c) a wood block. The reflections between depths 50 and 125 in these figures are artifacts of the preprocessing and data alignment. The strong reflections between cross-track scans 15 and 20 are due to electromagnetic interference (EMI). The preprocessing artifacts and the EMI can add considerable amounts of noise to the signatures and make the detection problem more difficult.

Figure 5 NIITEK radar down-track and cross-track (at the position indicated by a line in the down-track) B-scan pairs for (a) an anti-tank (AT) mine, (b) an anti-personnel (AP) mine, and (c) a non-metal clutter alarm.

4.2.2 Feature extraction

As it can be seen in Figure 6, landmines (and other buried objects) appear in time domain GPR as hyperbolic shapes (corrupted by noise), usually preceded and followed by a background area. Thus, the feature representation adopted by the HMM is based on the degree to which edges occur in the diagonal and antidiagonal directions, and the features are extracted to accentuate these edges.

Figure 6 Shape of a typical mine signature and the interpretation of the four states of the DHMM structure.

Each alarm extends over 516 depth values; however, the mine signature is not expected to cover all of them. Typically, depending on the mine type and burial depth, the mine signature may extend over 40–200 depth values, i.e., it may cover no more than 10% of the extracted data cube. For example, in Figure 5b, the signature essentially extends from depth index 170 to depth index 200. There is little or no evidence that a mine is present in depth bins above or below this region. Thus, extracting one global feature from the alarm may not discriminate between mine and clutter signatures effectively. To avoid this limitation, we extract the features from a small window with W_d = 45 depth values. Since the ground truth for the depth value (z_s) is not provided, we visually inspect all training mine signatures and estimate this value. For the clutter signatures, this process is not trivial as clutter objects can have different characteristics and their signatures can extend over a different number of samples. Instead, for each clutter signature, we extract five training signatures at equally spaced depths covering the entire depth range. Also, out of the 24 GPR channels, we process only the middle 7 channels, as it is unlikely that the target signatures extend beyond this range. Thus, each training signature consists of a 45 (depth) × 15 (scans) × 7 (channels) volume extracted from the aligned GPR data.

Figure 6 displays a hyperbolic curve superimposed on a preprocessed mine signature (only 45 depths) to illustrate the features of a typical mine signature. This figure also justifies the choice of N s =4 states in the adopted CHMM structure. State 1 corresponds to non-edge activity (i.e., background), state 2 corresponds to diagonal edge, state 3 corresponds to a flat edge, and state 4 corresponds to an anti-diagonal edge.

We adopt the Homogeneous Texture Descriptor [32] to capture the spatial distribution of the edges within the 3D GPR alarms. We extract features by expanding the signature's B-scan using a bank of Gabor filters at 4 scales and 4 orientations. Let S(x,y,z) denote the 3D GPR data volume of an alarm. To keep the computation simple, we use 2D filters (in the yz plane) and average the response over the third dimension. Let S_x(y,z) be the x-th plane of the 3D signature S(x,y,z), and let S_{G_x}^{(k)}(y,z), k = 1, …, 16, denote the response of S_x(y,z) to the 16 Gabor filters. Figure 7 displays a strong signature of a typical metal mine and its response to the 16 Gabor filters. As can be seen, the signature has a strong response to the θ_2 (45°) filters (especially at scale 1, and at scale 2 to a lesser degree) on the left part of the signature (rising edge), and a strong response to the θ_4 (135°) filters on the right part of the signature (falling edge). Similarly, the middle of the signature has a strong response to the θ_3 (horizontal) filters (flat edge). Figure 7b displays a weak mine signature and its response to the Gabor filters. For this signature, the edges are not as strong as those in Figure 7a. As a result, it has a weaker response at all scales (scale 2 has the strongest response), especially for the falling edge. Figure 7c displays a clutter signature (with high energy) and its response. As can be seen, this signature has a strong response to the θ_4 (135°) filters. However, this response is not localized on the right side of the signature.

Figure 7 Response of three alarms to the 16 Gabor filters at different scales and orientations. (a) Strong mine signature, (b) weak mine signature, and (c) clutter signature with high energy.

In our HMM models, we take the down-track dimension as the time variable (i.e., y corresponds to time in the HMM model). Our goal is to produce a confidence that a mine is present at various positions, (x,y), on the surface being traversed. To fit into the HMM context, a sequence of observation vectors must be produced for each signature. We define the observation sequence of S_x(y,z), at a fixed depth z, as the sequence

$$\left[\,O(x,y-7,z),\; O(x,y-6,z),\; \ldots,\; O(x,y-1,z),\; O(x,y,z),\; O(x,y+1,z),\; \ldots,\; O(x,y+7,z)\,\right],$$
(66)

where

$$O(x,y,z) = \left[\,O_1(x,y,z),\; \ldots,\; O_{16}(x,y,z)\,\right],$$
(67)

and

$$O_k(x,y,z) = \frac{1}{45}\sum_{z=1}^{45} SG_x^{(k)}(y,z),$$
(68)

encodes the response of S(x,y,z) to the k-th Gabor filter.
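The following hedged sketch illustrates how such an observation sequence could be computed for one plane S_x(y,z), as in Eqs. (66)–(68). The filter frequencies and the helper name observation_sequence are illustrative assumptions, not the exact filter bank or code used by the authors.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel

# Hedged sketch of Eqs. (66)-(68) for one plane S_x(y, z): filter a (n_scans x 45) B-scan with
# 16 Gabor kernels (4 scales x 4 orientations), average the response magnitude over the 45
# depth bins, and collect the 15 observation vectors centred at down-track position y.
def observation_sequence(b_scan, y, half_len=7):
    kernels = [gabor_kernel(frequency=f, theta=np.pi * o / 4.0)
               for f in (0.05, 0.1, 0.2, 0.4)          # 4 assumed scales (illustrative values)
               for o in range(4)]                       # orientations 0, 45, 90, 135 degrees
    feats = np.stack([np.abs(fftconvolve(b_scan, np.real(k), mode='same')).mean(axis=1)
                      for k in kernels], axis=1)        # Eq. (68): shape (n_scans, 16)
    return feats[y - half_len:y + half_len + 1]         # Eq. (66): 15 consecutive observations
```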

4.2.3 Learning HMM parameters

We construct and train multiple landmine detectors using the proposed HMM structures. Each detector has one model for background (learned using non-mine training signatures) and another for mines (learned using mine training signatures). Each model produces a probability value by backtracking through the model states using the Viterbi algorithm. The probability value produced by the mine (background) model can be thought of as an estimate of the probability of the observation sequence given that a mine (background) is present.

For all CHMM structures, we assume that each model has N_s = 4 states. The state representatives, v_k, are obtained by clustering the training data into four clusters using the Fuzzy C-Means (FCM) algorithm [33]. The learning procedures used for the other parameters depend on the HMM structure and are outlined below.
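As a small, hedged illustration of this initialization step (using scikit-fuzzy as a stand-in for the FCM algorithm of [33]; the function name and parameter values are assumptions):

```python
import numpy as np
import skfuzzy as fuzz

# Hedged sketch: obtain the N_s = 4 state representatives v_k by fuzzy C-means clustering
# of the training observation vectors.
def state_representatives(train_obs, n_states=4, m=2.0):
    data = np.asarray(train_obs).T               # skfuzzy expects shape (n_features, n_samples)
    centers = fuzz.cmeans(data, c=n_states, m=m, error=1e-5, maxiter=200)[0]
    return centers                               # shape: (n_states, n_features)
```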

Baseline (single stream) CHMM

For the baseline CHMM, we treat all features (responses of the 16 Gabor filters) as equally important. To generate the state components, we cluster the training data assigned to each state into M = 4 clusters using the FCM algorithm [33]. The transition probabilities A, the mixing coefficients U, and the component parameters can be estimated using the Baum-Welch algorithm [1], the MCE/GPD algorithm [18], or a few iterations of Baum-Welch followed by the MCE/GPD algorithm. Our results indicate that the combination of the two learning algorithms provides the best classification accuracy. Thus, due to space constraints, only those results are reported in this article.
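As a rough, hedged illustration of this two-model setup (not the authors' implementation, and without the MCE/GPD refinement), one could train one Gaussian-mixture HMM per class with Baum-Welch and score test sequences by their log-likelihood ratio; hmmlearn's GMMHMM is used here only as a stand-in for the baseline CHMM:

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

# Hedged sketch: one 4-state, 4-component Gaussian-mixture HMM per class (mine / background),
# trained with Baum-Welch only.
def train_class_model(sequences, n_states=4, n_mix=4):
    X = np.vstack(sequences)                      # each sequence: (T, 16) Gabor observations
    lengths = [len(s) for s in sequences]
    model = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type='diag', n_iter=20)
    model.fit(X, lengths)
    return model

def log_likelihood_ratio(obs_seq, mine_model, background_model):
    # Higher values indicate stronger evidence for a mine (cf. Eq. (69) below).
    return mine_model.score(obs_seq) - background_model.score(obs_seq)
```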

Multi-stream CHMM

The Gabor features used within the baseline continuous HMM assume that all scales and orientations contribute equally to characterizing alarm signatures. However, this assumption may not hold in many cases. For instance, some alarms may be better characterized at a lower scale, while others may be better characterized at a higher scale. The different scales can then be treated as different sources of information, i.e., different streams.

Since it is not possible to know a priori which scale is more discriminative, we propose considering the different Gabor scales as different streams of information and use the training data to learn multi-stream CHMMs (mixture and state level). Thus, we use four streams where each stream (Gabor response at a fixed scale) produces a 4D feature vectors (Gabor response at the different orientations). To generate the state components, we cluster the training data relative to each state in M=4 clusters using SCAD [29] and learn initial stream relevance weights for each state and component. The state transition probabilities A, the mixing coefficients U, and the component parameters and the observation probabilities B are learned using the generalized Baum-Welch (see Sections 3.1.1 and 3.2.1), the generalized MCE/GPD (see Sections 3.1.2 and 3.2.2), or a combination of the two.
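As a small illustration of this layout (assuming the 16-D observation vectors of Eq. (67) are stacked scale-major, which is an assumption about the ordering), each frame can be viewed as L = 4 streams of 4 orientations each:

```python
import numpy as np

# Hypothetical layout: a (T, 16) observation sequence reshaped into (T, 4 streams, 4-D per stream),
# one stream per Gabor scale.
def split_into_streams(obs_seq, n_streams=4):
    obs_seq = np.asarray(obs_seq)
    return obs_seq.reshape(obs_seq.shape[0], n_streams, -1)
```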

4.2.4 Confidence value assignment

The confidence value assigned to each observation sequence, Conf(O), depends on: (1) the probability assigned by the mine model (λ m), Pr(O|λ m); (2) the probability assigned by the background model (λ c), Pr(O|λ c); and (3) the optimal state sequence. In particular, we use:

$$\mathrm{Conf}(O) = \begin{cases} \max\!\left(\log\dfrac{\Pr(O\,|\,\lambda^{m})}{\Pr(O\,|\,\lambda^{c})},\; 0\right) & \text{if } \#\{\,s_t = 1,\ t = 1,\ldots,T\,\} \le T_{\max} \\[4pt] 0 & \text{otherwise} \end{cases}$$
(69)

Since each alarm has over 300 depth values (after preprocessing) and only 45 depths are processed at a time, we divide each test alarm into 10 overlapping sub-alarms and test each one independently to obtain 10 partial confidence values. These values could be combined using various fusion methods such as averaging, artificial neural networks [34], or an ordered weighted average (OWA) [35]. In this article, we report the results using the average of the top three confidences. This simple approach has been used successfully in [36].
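A minimal sketch of this fusion step, assuming the 10 partial confidences have already been computed with Eq. (69) (the helper name is hypothetical):

```python
import numpy as np

# Hypothetical helper: keep the average of the top-k partial confidences from the
# overlapping sub-alarms (k = 3 in the experiments reported here).
def fuse_partial_confidences(partial_confs, top_k=3):
    partial_confs = np.sort(np.asarray(partial_confs, dtype=float))
    return float(partial_confs[-top_k:].mean())

# Example: fuse_partial_confidences([0.1, 3.2, 4.8, 0.0, 2.9, 1.1, 0.3, 4.1, 0.2, 0.5])
# returns the mean of 4.8, 4.1, and 3.2.
```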

4.2.5 Experimental results

We use a 5-fold cross-validation scheme to evaluate the proposed MSCHMM structures and compare them to the baseline CHMM and to the MSCHMMG (Section 2.2). For each fold, we train on a different subset containing 80% of the alarms and test on the remaining 20%. The scoring is performed in terms of the probability of detection (PD) versus the probability of false alarm (PFA). Confidence values are thresholded at different levels to produce the receiver operating characteristic (ROC) curve.
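For reference, a generic way to obtain such a curve from the fused alarm confidences and ground-truth labels (1 for mine, 0 for false alarm) is sketched below; this is a standard ROC computation, not the scoring tool used in the experiments:

```python
import numpy as np

# Generic ROC sketch: sweep a threshold over the alarm confidences and record (PFA, PD) pairs.
def roc_points(confidences, labels):
    confidences = np.asarray(confidences, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-confidences)           # descending confidence
    sorted_labels = labels[order]
    tp = np.cumsum(sorted_labels)              # mines detected as the threshold is lowered
    fp = np.cumsum(1 - sorted_labels)          # false alarms accepted as the threshold is lowered
    pd = tp / max(sorted_labels.sum(), 1)
    pfa = fp / max((1 - sorted_labels).sum(), 1)
    return pfa, pd
```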

Figure 8 compares the ROC curves generated using each of the four streams (Gabor features at each scale) and their combination using simple concatenation (baseline CHMM), the proposed MSCHMM structures, and the MSCHMMG (Section 2.2). We only display the ROC segments where the PD is larger than 0.5 to magnify the interesting and practical regions. All results were obtained when the model parameters were learned using Baum-Welch followed by MCE/GPD training. First, we note that the CHMMs with Gabor features at scales 2 and 4 outperform those using the other individual scales (for FAR ≤ 40). Second, the baseline CHMM that concatenates all 4 scales is not much better than the CHMMs at scales 2 and 4, especially for FAR ≤ 30. In fact, for some FAR values, its performance can be worse. This is mainly because the four scales are combined with equal weights. Third, we note that all MSCHMM structures outperform the baseline CHMM. Moreover, the MSCHMM with mixture-level streaming outperforms the other structures. Fourth, the proposed MSCHMM structures outperform the MSCHMMG (Section 2.2). This is because, in the latter approach, the stream relevance weights are learned separately from the rest of the model parameters. These results are consistent with those obtained with the synthetic data in Table 3. Figure 8 also compares the performance of the proposed continuous MSCHMM structures with our previously published discrete version [17]. As expected with most HMM classifiers, the continuous versions perform slightly better.

Figure 8

Performance of the proposed multi-stream CHMM compared to the baseline CHMM and the state-of-the-art MSCHMM for PD ≥ 0.5.

To illustrate the advantages of combining the different Gabor scales into an MSCHMM structure and learning stream-dependent relevance weights, in Figure 9 we display a scatter plot of the confidence values generated by two baseline CHMMs that use Gabor features at scale 1 and scale 2, respectively. As can be seen, for many alarms the confidence values generated by both CHMMs are comparable (i.e., alarms along the diagonal). However, there are different regions of the confidence space where one scale is more reliable than the other. For instance, the alarms highlighted in region R_3 include more mine signatures than false alarms, and these signatures have higher confidence values using scale 1. Thus, for this region, scale 1 is a better detector than scale 2. The alarm shown in Figure 7a is one of those alarms, and as can be seen, its response to the scale 1 Gabor filters is dominant. Similarly, region R_1 includes mainly mine signatures that have high confidence values using scale 2 and low confidence values using scale 1. Thus, for this group of alarms, the scale 2 detector is more reliable than the scale 1 detector. The alarm shown in Figure 7b is one of those alarms and has a stronger response to scale 2. This difference in behavior exists for both target and non-target alarms. For instance, region R_2 highlights both target and non-target alarms that are detected at scale 2 but not at scale 1 using the threshold corresponding to 80% PD (= 4.2).

Figure 9

Scatter plot of the confidence values generated using two baseline CHMMs that use Gabor features at scales 1 and 2.

5 Conclusions

We have proposed novel multi-stream continuous hidden Markov model structures that integrate a stream relevance weighting component for the classification of temporal data. These structures allow learning component- or state-dependent stream relevance weights. In particular, we modified the probability density function that characterizes the standard continuous HMM to include state- and component-dependent stream relevance weights. For both methods, we generalized the Baum-Welch and MCE/GPD learning algorithms and derived the update equations for all model parameters. Results on synthetic data sets and on a library of GPR signatures show that the proposed multi-stream CHMM structures improve the discriminative power, and thus the classification accuracy, of the CHMM. The introduction of stream relevance weights also causes the training error to decrease faster and the training algorithm to converge faster.

The discriminative training performed in this article uses batch-mode training. Sequential training could be investigated and combined with a boosting framework. In order to control the complexity of the proposed structures, a regularization mechanism could be investigated. In addition, this study could be extended to the Bayesian setting, which is relevant in situations where training data are limited. The application to landmine detection could be extended to include streams from different feature extraction methods or even from different sensors.

Appendix 1

Generalized Baum-Welch for the mixture level MSCHMM

The objective function in (29) involves the quantity Pr(O,Q,E,F| λ ̄ ) which could be expressed analytically as:

$$\Pr(O,Q,E,F\,|\,\bar\lambda^{(c)}) = \pi_{q_0}^{(c)} \prod_{t=1}^{T-1} a_{q_t q_{t+1}}^{(c)} \prod_{t=1}^{T} u_{q_t e_t}^{(c)}\, w_{q_t e_t f_t}^{(c)}\, b_{q_t e_t f_t}^{(c)}(o_t)$$
(70)

Thus, the objective function in (29) can be expanded as follows:

$$\begin{aligned} Q(\lambda,\bar\lambda) = {} & \sum_{Q}\sum_{E}\sum_{F} \Pr(Q,E,F\,|\,O,\lambda)\,\log \bar\pi_{q_1} + \sum_{t=1}^{T-1}\sum_{Q}\sum_{E}\sum_{F} \Pr(Q,E,F\,|\,O,\lambda)\,\log \bar a_{q_t q_{t+1}} \\ & + \sum_{t=1}^{T}\sum_{Q}\sum_{E}\sum_{F} \Pr(Q,E,F\,|\,O,\lambda)\,\log \bar u_{q_t e_t} + \sum_{t=1}^{T}\sum_{Q}\sum_{E}\sum_{F} \Pr(Q,E,F\,|\,O,\lambda)\,\log \bar w_{q_t e_t f_t} \\ & + \sum_{t=1}^{T}\sum_{Q}\sum_{E}\sum_{F} \Pr(Q,E,F\,|\,O,\lambda)\,\log \mathcal{N}\!\left(o_t^{(f_t)}, \bar\mu_{q_t e_t f_t}, \bar\Sigma_{q_t e_t f_t}\right) \end{aligned}$$
(71)

After the estimation step, the maximization step consists of finding the parameters of λ̄ that maximize the function in (71). The expanded form of Q(λ, λ̄) in (71) has five terms involving π̄, ā, ū, w̄, and (μ̄, Σ̄) independently. To find the values of π̄_i, ā_ij, ū_ij, w̄_ijk, μ̄_ijk, and Σ̄_ijk that maximize Q(λ, λ̄), we consider the terms in (71) that depend on each of these parameters. In particular, the first and second terms in (71) depend on π̄ and ā, and they have the same analytical expressions sketched in the case of the baseline CHMM (see Section 2.1). It follows that the update equations for π̄_i, ā_ij, and ū_ij are the same as in the standard CHMM. That is,

$$\bar\pi_i = \gamma_1(i), \qquad \bar a_{ij} = \frac{\sum_{t=1}^{T} \xi_t(i,j)}{\sum_{t=1}^{T} \gamma_t(i)},$$

and

$$\bar u_{ij} = \frac{\sum_{t=1}^{T} \Pr(q_t = i,\, e_t = j \,|\, O, \lambda)}{\sum_{t=1}^{T} \Pr(q_t = i \,|\, O, \lambda)}.$$

To find the value of w ¯ ijk that maximizes the auxiliary function Q(.,.), only the fourth term of the expression in (71) is considered since it is the only part of Q(.,.) that depends on w ¯ ijk . This term can be expressed as:

$$\sum_{t=1}^{T}\sum_{Q}\sum_{E}\sum_{F} \Pr(Q,E,F\,|\,O,\lambda)\,\log \bar w_{q_t e_t f_t} = \sum_{t=1}^{T}\sum_{i}\sum_{j}\sum_{k} \log(\bar w_{ijk}) \sum_{Q}\sum_{E}\sum_{F} \Pr(Q,E,F\,|\,O,\lambda)\,\delta(i,q_t)\,\delta(j,e_t)\,\delta(k,f_t),$$
(72)

where δ(i,q_t)δ(j,e_t)δ(k,f_t) keeps only those cases for which q_t = i, e_t = j, and f_t = k. That is,

$$\sum_{Q}\sum_{E}\sum_{F} \Pr(Q,E,F\,|\,O,\lambda)\,\delta(i,q_t)\,\delta(j,e_t)\,\delta(k,f_t) = \Pr(q_t = i,\, e_t = j,\, f_t = k \,|\, O, \lambda),$$
(73)

therefore:

$$\sum_{t=1}^{T}\sum_{Q}\sum_{E}\sum_{F} \Pr(Q,E,F\,|\,O,\lambda)\,\log \bar w_{q_t e_t f_t} = \sum_{t=1}^{T}\sum_{i=1}^{N_s}\sum_{j=1}^{M}\sum_{k=1}^{L} \Pr(q_t = i,\, e_t = j,\, f_t = k \,|\, O, \lambda)\,\log \bar w_{ijk}$$
(74)

To find the update equation for w̄_ijk, we use Lagrange multiplier optimization with the constraint in (28), and obtain

$$\bar w_{ijk} = \frac{\sum_{t=1}^{T} \nu_t(i,j,k)}{\sum_{t=1}^{T} \kappa_t(i,j)},$$
(75)

where

$$\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{j=1}^{N_s} \alpha_t(j)\,\beta_t(j)},$$
(76)
$$\kappa_t(i,j) = \gamma_t(i)\,\frac{u_{ij}\, b_{ij}(o_t)}{b_i(o_t)},$$
(77)

and

$$\nu_t(i,j,k) = \gamma_t(i)\,\frac{u_{ij}\, w_{ijk}\, \mathcal{N}\!\left(o_t^{(k)}, \mu_{ijk}, \Sigma_{ijk}\right)}{b_i(o_t)}.$$
(78)
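For completeness, the Lagrange multiplier step behind (75) can be sketched as follows, assuming the constraint in (28) is the usual sum-to-one condition on the stream weights of each component; the multiplier η is introduced here only for illustration:

$$\frac{\partial}{\partial \bar w_{ijk}} \left[\, \sum_{t=1}^{T} \nu_t(i,j,k)\,\log \bar w_{ijk} \;-\; \eta\left(\sum_{k'=1}^{L} \bar w_{ijk'} - 1\right) \right] = 0 \;\;\Longrightarrow\;\; \bar w_{ijk} = \frac{1}{\eta}\sum_{t=1}^{T} \nu_t(i,j,k).$$

Summing over k and using $\sum_{k=1}^{L} \bar w_{ijk} = 1$ together with $\sum_{k=1}^{L} \nu_t(i,j,k) = \kappa_t(i,j)$ gives $\eta = \sum_{t=1}^{T} \kappa_t(i,j)$, which yields (75).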

Similarly, it can be shown that the update equations for the rest of the parameters are:

$$\bar\mu_{ijkd} = \frac{\sum_{t=1}^{T} \nu_t(i,j,k)\, o_{td}^{(k)}}{\sum_{t=1}^{T} \nu_t(i,j,k)},$$
(79)

and

$$\bar\Sigma_{ijk} = \frac{\sum_{t=1}^{T} \nu_t(i,j,k)\, \left(o_t^{(k)} - \bar\mu_{ijk}\right)^{\!\top} \left(o_t^{(k)} - \bar\mu_{ijk}\right)}{\sum_{t=1}^{T} \nu_t(i,j,k)}.$$
(80)
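To make these re-estimation formulas concrete, the following hedged sketch applies the updates (75), (79), and (80) for a fixed state i and component j, assuming the posteriors ν_t(i,j,k) and the per-stream observations o_t^{(k)} have already been collected from the forward-backward pass (the array names and shapes are assumptions, not the authors' implementation):

```python
import numpy as np

# Hedged sketch of the M-step updates (75), (79), (80) for one state i and component j.
# nu:  array of shape (T, L) holding nu_t(i, j, k) for fixed (i, j)
# obs: array of shape (T, L, D) holding the stream observations o_t^{(k)}
def m_step_component(nu, obs, eps=1e-12):
    kappa = nu.sum(axis=1)                                    # kappa_t(i, j) = sum_k nu_t(i, j, k)
    w = nu.sum(axis=0) / max(kappa.sum(), eps)                # Eq. (75): stream relevance weights
    mu = (nu[:, :, None] * obs).sum(axis=0) / (nu.sum(axis=0)[:, None] + eps)   # Eq. (79)
    sigma = []
    for k in range(obs.shape[1]):                             # Eq. (80): one covariance per stream
        diff = obs[:, k, :] - mu[k]
        sigma.append((nu[:, k, None, None] *
                      np.einsum('td,te->tde', diff, diff)).sum(axis=0) / (nu[:, k].sum() + eps))
    return w, mu, np.stack(sigma)
```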

Appendix 2

Generalized Baum-Welch for the state level MSCHMM

The state-level MSCHMM model parameters can be learned using a maximum likelihood approach. Given a training observation sequence O = [o_1, …, o_T], the parameters of λ can be learned by maximizing the likelihood of the observation sequence O, i.e., Pr(O|λ). We achieve this by generalizing the Baum-Welch algorithm to include a stream relevance weight component. We define the generalized Baum-Welch algorithm by extending the auxiliary function in (5) to

$$Q(\lambda,\bar\lambda) = \sum_{Q}\sum_{F}\sum_{E} \Pr(Q,F,E\,|\,O,\lambda)\,\ln \Pr(O,Q,F,E\,|\,\bar\lambda),$$
(81)

where F=[f 1,…,f T ] and E=[e 1,…,e T ] are two sequences of random variables representing, respectively, the stream and component indices for each time step. It can be shown that a critical point of Pr(O|λ), with respect to λ, is a critical point of the new auxiliary function Q(λ, λ ̄ ) with respect to λ ̄ when λ ̄ =λ, that is:

$$\frac{\partial \Pr(O\,|\,\lambda)}{\partial \lambda} = \left.\frac{\partial Q(\lambda,\bar\lambda)}{\partial \bar\lambda}\right|_{\bar\lambda=\lambda}.$$
(82)

Similar to the discrete and mixture level cases, it could be shown that the formulation of the maximization of the likelihood Pr(O|λ) through maximizing the auxiliary function Q(λ, λ ̄ ) is an EM [37] type optimization that is performed in two steps: the estimation step and the maximization step. The estimation step consists of computing the conditional expectation in (81) and writing it in an analytical form. The objective function in (81) involves the quantity Pr(O,Q,F,E| λ ̄ ) which could be expressed analytically as

$$\Pr(O,Q,F,E\,|\,\bar\lambda^{(c)}) = \pi_{q_0}^{(c)} \prod_{t=1}^{T-1} a_{q_t q_{t+1}}^{(c)} \prod_{t=1}^{T} w_{q_t f_t}^{(c)}\, u_{q_t f_t e_t}^{(c)}\, b_{q_t f_t e_t}^{(c)}(o_t)$$
(83)

Thus, the objective function in (81) can be expanded as

$$\begin{aligned} Q(\lambda,\bar\lambda) = {} & \sum_{Q}\sum_{F}\sum_{E} \Pr(Q,F,E\,|\,O,\lambda)\,\log \bar\pi_{q_1} + \sum_{t=1}^{T-1}\sum_{Q}\sum_{F}\sum_{E} \Pr(Q,F,E\,|\,O,\lambda)\,\log \bar a_{q_t q_{t+1}} \\ & + \sum_{t=1}^{T}\sum_{Q}\sum_{F}\sum_{E} \Pr(Q,F,E\,|\,O,\lambda)\,\log \bar w_{q_t f_t} + \sum_{t=1}^{T}\sum_{Q}\sum_{F}\sum_{E} \Pr(Q,F,E\,|\,O,\lambda)\,\log \bar u_{q_t f_t e_t} \\ & + \sum_{t=1}^{T}\sum_{Q}\sum_{F}\sum_{E} \Pr(Q,F,E\,|\,O,\lambda)\,\log \mathcal{N}\!\left(o_t^{(f_t)}, \bar\mu_{q_t f_t e_t}, \bar\Sigma_{q_t f_t e_t}\right) \end{aligned}$$
(84)

After the estimation step, the maximization step consists of finding the parameters of λ̄ that maximize the function in (84). The expanded form of Q(λ, λ̄) in (84) has five terms involving π̄, ā, w̄, ū, and (μ̄, Σ̄). To find the values of π̄_i, ā_ij, w̄_ik, ū_ikj, μ̄_ikjd, and σ̄_ikjd that maximize Q(λ, λ̄), we consider the terms in (84) that depend on π̄, ā, w̄, ū, and (μ̄, Σ̄). In particular, the first and second terms in (84) depend on π̄ and ā, and they have the same analytical expressions sketched in the case of the baseline CHMM in (5). It follows that the update equations for π̄_i and ā_ij are the same as in the standard CHMM. That is,

$$\bar\pi_i = \gamma_1(i),$$

and

$$\bar a_{ij} = \frac{\sum_{t=1}^{T} \xi_t(i,j)}{\sum_{t=1}^{T} \gamma_t(i)}.$$

To find the value of w ¯ ik that maximizes the auxiliary function Q(.,.), only the third term of the expression in (84) is considered since it is the only part of Q(.,.) that depends on w ¯ ik . This term can be expressed as:

$$\sum_{t=1}^{T}\sum_{Q}\sum_{F}\sum_{E} \Pr(Q,F,E\,|\,O,\lambda)\,\log \bar w_{q_t f_t} = \sum_{t=1}^{T}\sum_{Q}\sum_{F} \Pr(Q,F\,|\,O,\lambda)\,\log \bar w_{q_t f_t} = \sum_{t=1}^{T}\sum_{i}\sum_{k} \log(\bar w_{ik}) \sum_{Q}\sum_{F} \Pr(Q,F\,|\,O,\lambda)\,\delta(i,q_t)\,\delta(k,f_t),$$
(85)

where δ(i,q_t)δ(k,f_t) keeps only those cases for which q_t = i and f_t = k. That is,

$$\sum_{Q}\sum_{F} \Pr(Q,F\,|\,O,\lambda)\,\delta(i,q_t)\,\delta(k,f_t) = \Pr(q_t = i,\, f_t = k \,|\, O, \lambda),$$
(86)

therefore:

$$\sum_{t=1}^{T}\sum_{Q}\sum_{F} \Pr(Q,F\,|\,O,\lambda)\,\log \bar w_{q_t f_t} = \sum_{t=1}^{T}\sum_{i=1}^{N_s}\sum_{k=1}^{L} \Pr(q_t = i,\, f_t = k \,|\, O, \lambda)\,\log \bar w_{ik}$$
(87)

To find the update equation for w̄_ik, we use Lagrange multiplier optimization with the constraint in (54), and obtain

$$\bar w_{ik} = \frac{\sum_{t=1}^{T} \kappa_t(i,k)}{\sum_{t=1}^{T} \gamma_t(i)},$$
(88)

where

$$\gamma_t(i) = \Pr(q_t = i \,|\, O, \lambda),$$

and

$$\kappa_t(i,k) = \gamma_t(i)\,\frac{w_{ik}\, b_{ik}(o_t)}{b_i(o_t)}.$$

Similarly, it can be shown that the update equations for the rest of the parameters are:

$$\bar u_{ikj} = \frac{\sum_{t=1}^{T} \nu_t(i,k,j)}{\sum_{t=1}^{T} \kappa_t(i,k)},$$
(89)
$$\bar\mu_{ikjd} = \frac{\sum_{t=1}^{T} \nu_t(i,k,j)\, o_{td}^{(k)}}{\sum_{t=1}^{T} \nu_t(i,k,j)},$$
(90)
$$\bar\Sigma_{ikj} = \frac{\sum_{t=1}^{T} \nu_t(i,k,j)\, \left(o_t^{(k)} - \bar\mu_{ikj}\right)^{\!\top} \left(o_t^{(k)} - \bar\mu_{ikj}\right)}{\sum_{t=1}^{T} \nu_t(i,k,j)},$$
(91)

where

$$\nu_t(i,k,j) = \gamma_t(i)\,\frac{w_{ik}\, u_{ikj}\, \mathcal{N}\!\left(o_t^{(k)}, \mu_{ikj}, \Sigma_{ikj}\right)}{b_i(o_t)}.$$

References

  1. Rabiner L: A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 257-286.

  2. Runkle P, Bharadwaj P, Carin L: Hidden Markov model multi-aspect target classification. IEEE Trans. Signal Process. 1999, 47: 2035-2040.

  3. Baldi P, Chauvin Y, Hunkapiller T, McClure M: Hidden Markov models of biological primary sequence information. Proc. Natl. Acad. Sci. USA 1994: 1059-1063.

  4. Koski T: Hidden Markov Models for Bioinformatics. Netherlands: Kluwer Academic Publishers; 2001.

  5. Frigui H, Ho K, Gader P: Real-time landmine detection with ground-penetrating radar using discriminative and adaptive hidden Markov models. EURASIP J. Appl. Signal Process. 2005, 2005: 1867-1885.

  6. Mohamed M, Gader P: Generalized hidden Markov models part 2: applications to handwritten word recognition. IEEE Trans. Fuzzy Syst. 2000, 8: 186-194.

  7. Bunke H, Caelli T: Hidden Markov Models: Applications in Computer Vision. Singapore: World Scientific Publishing Co; 2001.

  8. Zhiyong W, Lianhong C, Helen M: Multi-level fusion of audio and visual features for speaker identification. Adv. Biometrics 2005, 493-499.

  9. Chibelushi CC, Mason JSD, Deravi F: Feature-level data fusion for bimodal person recognition. In Sixth International Conference on Image Processing and Its Applications. IET; 1997: 399-403.

  10. Chatziz V, Bors A, Pitas I: Multimodal decision level fusion for person authentication. IEEE Trans. Syst. Man Cybern. A 1999, 29: 674-680.

  11. Jordan MI, Ghahramani Z: Factorial hidden Markov models. In Advances in Neural Information Processing Systems 8: Proceedings of the 1995 Conference. MIT Press; 1996: 472.

  12. Ara N, Liang L, Fu T, Liu X: A Bayesian approach to audio-visual speaker identification. In Audio- and Video-Based Biometric Person Authentication. Berlin/Heidelberg: Springer; 2003: 1056.

  13. Dupont S, Luettin J: Audio-visual speech modeling for continuous speech recognition. IEEE Trans. Multimedia 2000, 2(3): 141-151.

  14. Gerasimos P, Chalapathy N, Juergen L, Iain M: Audio-visual automatic speech recognition: an overview. In Audio-Visual Speech Processing. Edited by Vatikiotis-Bateson E, Bailly G, Perrier P. MIT Press; 2009: 356-396.

  15. Hernando J: Maximum likelihood weighting of dynamic speech features for CDHMM speech recognition. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Munich; 1997: 1267-1270.

  16. Torre A, Peinado A, Rubio A, Segura J, Benitez C: Discriminative feature weighting for HMM-based continuous speech recognizers. Speech Commun. 2002, 38: 267-286.

  17. Missaoui O, Frigui H, Gader P: Landmine detection with ground penetrating radar using multistream discrete hidden Markov models. IEEE Trans. Geosci. Remote Sens. 2011, 49: 2080-2099.

  18. Juang BH, Chou W, Lee CH: Minimum classification error rate methods for speech recognition. IEEE Trans. Speech Audio Process. 1997, 5(3): 257-265.

  19. Missaoui O, Frigui H: Optimal feature weighting for continuous HMM. In International Conference on Pattern Recognition. Florida, USA; 2008: 1-4.

  20. Li X, Parizeau M, Plamondon R: Training hidden Markov models with multiple observations--a combinatorial method. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22(4): 371-377.

  21. Nadas A: A decision theoretic formulation of a training problem in speech recognition and a comparison of training by unconditional versus conditional maximum likelihood. IEEE Trans. Acoust. Speech Signal Process. 1983, 31(4): 814-817.

  22. Forney G: The Viterbi algorithm. Proc. IEEE 1973, 61: 268-278.

  23. Masaru M, Iori S, Masafumi N, Yasuo H, Shingo K: Sign language recognition based on position and movement using multi-stream HMM. In Second International Symposium on Universal Communication (ISUC'08). IEEE; 2008: 478-481.

  24. Atta N, Sid-Ahmed S, Hesham T, Douglas O: Incorporating phonetic knowledge into a multi-stream HMM framework. In Canadian Conference on Electrical and Computer Engineering (CCECE 2008). IEEE; 2008: 1705-1708.

  25. Yousri K, Thierry P, AbdelMajid B: A multi-stream approach to off-line handwritten word recognition. In Ninth International Conference on Document Analysis and Recognition (ICDAR 2007). IEEE; 2007: 317-321.

  26. Kapadia S: Discriminative training of hidden Markov models. PhD thesis, University of Cambridge; 1998.

  27. Potamianos G, Graf H: Discriminative training of HMM stream exponents for audio-visual speech recognition. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing. Seattle; 1998: 3733-3736.

  28. Potamianos G, Potamianos A: Speaker adaptation for audio-visual speech recognition. In Proc. EUROSPEECH. Budapest; 1999: 1291-1294.

  29. Frigui H, Salem S: Fuzzy clustering and subset feature weighting. In The 12th IEEE International Conference on Fuzzy Systems (FUZZ'03). IEEE; 2003: 857-862.

  30. Ghahramani Z, Jordan MI: Factorial hidden Markov models. Mach. Learn. 1997, 29(2): 245-273.

  31. Hintz KJ: SNR improvements in NIITEK ground penetrating radar. In Proceedings of the SPIE Conference on Detection and Remediation Technologies for Mines and Minelike Targets. Orlando, FL, USA; 2004: 399-408.

  32. Frigui H, Missaoui O, Gader P: Landmine detection using discrete hidden Markov models with Gabor features. In Proc. SPIE. Orlando; 2007. http://dx.doi.org/10.1117/12.722241

  33. Bezdek J: Pattern Recognition with Fuzzy Objective Function Algorithms. New York: Plenum Press; 1981.

  34. Duda RO, Hart PE, Stork DG: Pattern Classification, 2nd edition. Wiley-Interscience; 2000.

  35. Gader P, Grandhi R, Lee W, Wilson J, Ho D: Feature analysis for the NIITEK ground-penetrating radar using order weighted averaging operators for landmine detection. In SPIE Conference on Detection and Remediation Technologies for Mines and Minelike Targets. Orlando, FL; 2004: 953-962.

  36. Frigui H, Gader P: Detection and discrimination of land mines in ground-penetrating radar based on edge histogram descriptors and a possibilistic K-nearest neighbor classifier. IEEE Trans. Fuzzy Syst. 2009, 17(1): 185-199.

  37. Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B (Methodological) 1977, 39(1): 1-38.


Acknowledgements

This study was supported in part by U.S. Army Research Office Grants Number W911NF-08-0255 and by a grant from the Kentucky Science and Engineering Foundation as per Grant Agreement No. KSEF-2079-RDE-013 with the Kentucky Science and Technology Corporation. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office, or the U.S. Government.

Author information

Corresponding author

Correspondence to Hichem Frigui.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Missaoui, O., Frigui, H. & Gader, P. Multi-stream continuous hidden Markov models with application to landmine detection. EURASIP J. Adv. Signal Process. 2013, 40 (2013). https://doi.org/10.1186/1687-6180-2013-40
