It can be assumed that speech frames carry different amounts of speaker information according to their acoustic-phonetic classes [13] as well as the speaker’s voice characteristics. Under this assumption, a speech frame *x*_{t} can be classified into its corresponding acoustic-phonetic class if some classification scheme is provided in advance. Among the variety of classification methods, we employ a hard classification approach based on vector quantization with a GMM, for algorithmic simplicity. The unsupervised clustering capability of the GMM automatically provides a number of acoustic-phonetic classes covering the whole acoustic space spanned by the training data. The vector quantization-based hard classification approach is defined by

Q\left({x}_{t}\right)=arg\underset{m}{max}P\left({\psi}_{m}|{x}_{t}\right),

(4)

where *ψ*_{m} denotes the set of Gaussian model parameters of the *m* th acoustic-phonetic class among a total of *M* acoustic-phonetic classes, given by a GMM estimated from training data that are assumed to cover the whole acoustic-phonetic space of speech signals.
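The hard classification rule (4) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper’s implementation: `psi_means`, `psi_vars`, and `psi_weights` are hypothetical names for diagonal-covariance GMM parameters assumed to have been estimated from training data beforehand.

```python
import numpy as np

def log_gaussian(x, mean, var):
    """Log density of a diagonal-covariance Gaussian at frame x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify_frame(x, psi_means, psi_vars, psi_weights):
    """Q(x_t): index of the acoustic-phonetic class with the largest posterior.
    Since the evidence P(x_t) is common to all classes, maximizing the
    posterior P(psi_m | x_t) reduces to maximizing the joint log-score."""
    scores = [np.log(w) + log_gaussian(x, mu, v)
              for w, mu, v in zip(psi_weights, psi_means, psi_vars)]
    return int(np.argmax(scores))

# Toy example (illustrative values): two well-separated classes
means = np.array([[0.0, 0.0], [5.0, 5.0]])
variances = np.ones((2, 2))
weights = np.array([0.5, 0.5])
m = classify_frame(np.array([4.8, 5.1]), means, variances, weights)  # class 1
```

In practice the GMM parameters would come from an unsupervised fit on the whole training corpus, as described above.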

Then, the speaker identification rule in (3) can be expressed in terms of acoustic-phonetic classes by classifying each frame into its acoustic-phonetic class and accumulating the probabilistic scores class by class as

\widehat{S}=arg\underset{1\le j\le S}{max}{\displaystyle \sum _{m=1}^{M}{\displaystyle \sum _{\forall {x}_{t},{x}_{t}\in m}logP\left({x}_{t}|{\mathrm{\Lambda}}_{j}\right)}}.

(5)

Under this framework, each frame-level log-likelihood score can be weighted discriminatively according to both its acoustic-phonetic class and the speaker, so that the class-dependent distribution of speaker information and the speaker’s voice characteristics are both reflected in speaker identification. The speaker identification rule based on this discriminative score weighting (DSW) scheme is given by

{\widehat{S}}_{\mathrm{DSW}}=arg\underset{1\le j\le S}{max}{\displaystyle \sum _{m=1}^{M}{w}_{\mathit{jm}}{\displaystyle \sum _{\forall {x}_{t},{x}_{t}\in m}logP\left({x}_{t}|{\Lambda}_{j}\right)}},

(6)
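The DSW rule (6) can be sketched as follows, under assumed array conventions: `frame_loglik[j, t]` holds log P(*x*_{t} | Λ_{j}), `frame_class[t]` is the class index Q(*x*_{t}) from (4), and `W[j, m]` are the discriminative weights. All names and toy values are illustrative.

```python
import numpy as np

def dsw_scores(frame_loglik, frame_class, W):
    """Weighted per-speaker scores of (6): for each speaker j, sum the
    log-likelihoods within each class m, then weight the class sums by w_jm."""
    S, T = frame_loglik.shape
    M = W.shape[1]
    class_sum = np.zeros((S, M))
    for t in range(T):
        class_sum[:, frame_class[t]] += frame_loglik[:, t]
    return np.sum(W * class_sum, axis=1)

def identify(frame_loglik, frame_class, W):
    """Hat S_DSW: the speaker index maximizing the weighted score."""
    return int(np.argmax(dsw_scores(frame_loglik, frame_class, W)))

# Toy example: 2 speakers, 3 frames, 2 classes, uniform weights
ll = np.array([[-1.0, -2.0, -1.5],
               [-3.0, -0.5, -2.5]])
cls = np.array([0, 1, 0])
W = np.full((2, 2), 0.5)
winner = identify(ll, cls, W)  # speaker 0
```

With uniform weights *w*_{jm} = 1/*M*, this reduces (up to the constant factor) to the unweighted class-based rule (5).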

where *w*_{jm} stands for the discriminative weight for the *m* th acoustic-phonetic class and the *j* th speaker model. The optimal weights for this speaker identification scheme can be obtained by using the MCE-based discriminative training algorithm [5, 12, 14–16], which aims at deriving a set of speaker models that minimizes classification errors, that is, speaker identification errors on the training data. To train these weights discriminatively under the MCE criterion for speaker identification, we define a discriminant function for each speaker, representing the log-likelihood of the feature vector sequence *X* given model Λ_{j} of speaker *j*, as

{g}_{j}\left(X,{\mathrm{\Phi}}_{W}\right)={\displaystyle \sum _{m=1}^{M}{w}_{\mathit{jm}}{\displaystyle \sum _{\forall {x}_{t},{x}_{t}\in m}logP\left({x}_{t}|{\Lambda}_{j}\right)}},

(7)

where Φ_{W} stands for the weight parameters. In (7), the weights represent the amount of score contribution from their corresponding classes, and their sum needs to be normalized to avoid ill-conditioned training of the weights. Accordingly, the weights must satisfy the constraints [5, 16]

{\displaystyle \sum _{m=1}^{M}{w}_{\mathit{jm}}=1},\phantom{\rule{0.5em}{0ex}}{w}_{\mathit{jm}}\ge 0.

(8)

Then, a misclassification measure is defined for the true speaker *k*, that is, the label of the input feature vector sequence, to quantify how severely the input sequence spoken by the true speaker is misclassified:

{d}_{k}\left(X,{\mathrm{\Phi}}_{W}\right)=-{g}_{k}\left(X,{\mathrm{\Phi}}_{W}\right)+{G}_{k}\left(X,{\mathrm{\Phi}}_{W}\right),

(9)

with

{G}_{k}\left(X,{\mathrm{\Phi}}_{W}\right)=log{\left(\frac{1}{S-1}{\displaystyle \sum _{j\ne k}exp\left[\eta {g}_{j}\left(X,{\mathrm{\Phi}}_{W}\right)\right]}\right)}^{\frac{1}{\eta}},

(10)
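The misclassification measure (9)–(10) can be sketched as below, assuming `g` is a length-*S* array holding the discriminant scores *g*_{j}(*X*, Φ_{W}) from (7); the variable names are illustrative.

```python
import numpy as np

def misclassification(g, k, eta=1.0):
    """d_k = -g_k + G_k, where G_k is the eta-smoothed maximum of the
    competing speakers' scores (10). d_k < 0 means a correct decision."""
    g = np.asarray(g, dtype=float)
    competitors = np.delete(g, k)
    # G_k(X, Phi_W) = (1/eta) * log( mean over j != k of exp(eta * g_j) )
    G_k = np.log(np.mean(np.exp(eta * competitors))) / eta
    return -g[k] + G_k

# If the true speaker's score is the highest, d_k comes out negative
d = misclassification([2.0, -1.0, 0.5], k=0, eta=2.0)
```

As *η* grows, *G*_{k} approaches the single best competing score, so *d*_{k} approaches the margin between the true speaker and the strongest competitor.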

where *η* is a positive constant that controls the relative weighting of the competing speaker classes.

A loss function for approximating the empirical loss related to the soft count of classification errors is defined as

{l}_{k}\left(X,{\mathrm{\Phi}}_{W}\right)=\frac{1}{1+exp\left[-\gamma {d}_{k}\left(X,{\mathrm{\Phi}}_{W}\right)\right]},

(11)

where *γ* is a positive constant used to control the slope of the sigmoid function.
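The sigmoid loss in (11) acts as a smooth, differentiable stand-in for the 0/1 error count. A minimal sketch, with illustrative values for *γ*:

```python
import numpy as np

def mce_loss(d_k, gamma=1.0):
    """l_k = 1 / (1 + exp(-gamma * d_k)): near 0 when d_k << 0 (clearly
    correct), near 1 when d_k >> 0 (clearly misidentified), 0.5 at d_k = 0."""
    return 1.0 / (1.0 + np.exp(-gamma * d_k))

correct_loss = mce_loss(-5.0, gamma=2.0)  # confidently correct decision
error_loss = mce_loss(5.0, gamma=2.0)     # confidently wrong decision
```

A larger *γ* sharpens the transition around the decision boundary *d*_{k} = 0, making the loss a closer approximation of the hard error count at the cost of a steeper, more localized gradient.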

To satisfy the constraints in (8) during training, we transform the weights into the logarithmic domain as

{\tilde{w}}_{\mathit{jm}}=log{w}_{\mathit{jm}}.

(12)

This new parameter set \left\{{\tilde{w}}_{\mathit{jm}}\right\} is trained by using the generalized probabilistic descent (GPD) algorithm [16] as

{\tilde{w}}_{\mathit{jm}}^{\left(n+1\right)}={\tilde{w}}_{\mathit{jm}}^{\left(n\right)}-\u03f5\nabla {l}_{k}\left({\tilde{w}}_{\mathit{jm}}^{\left(n\right)}\right),

(13)

where *ϵ* is the step size of the GPD algorithm and \nabla {l}_{k}\left({\tilde{w}}_{\mathit{jm}}\right) is derived by the chain rule as

\nabla {l}_{k}\left({\tilde{w}}_{\mathit{jm}}\right)=\frac{\partial {l}_{k}}{\partial {d}_{k}}\frac{\partial {d}_{k}}{\partial {g}_{j}}\frac{\partial {g}_{j}}{\partial {\tilde{w}}_{\mathit{jm}}},

(14)

where

\frac{\partial {l}_{k}}{\partial {d}_{k}}=\gamma {l}_{k}\left(1-{l}_{k}\right),

(15)

\frac{\partial {d}_{k}}{\partial {g}_{j}}=\left\{\begin{array}{ll}-1,\hfill & \mathrm{if}\phantom{\rule{0.25em}{0ex}}j=k\hfill \\ \frac{exp\left[\eta {g}_{j}\left(X,{\mathrm{\Phi}}_{W}\right)\right]}{{\displaystyle \sum _{i\ne k}exp\left[\eta {g}_{i}\left(X,{\mathrm{\Phi}}_{W}\right)\right]}},\hfill & \mathrm{otherwise},\hfill \end{array}\right.

(16)

\frac{\partial {g}_{j}}{\partial {\tilde{w}}_{\mathit{jm}}}={w}_{\mathit{jm}}{\displaystyle \sum _{\forall {x}_{t},{x}_{t}\in m}logP\left({x}_{t}|{\mathrm{\Lambda}}_{j}\right)}.

(17)
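The chain-rule gradient (14)–(17) can be assembled numerically as sketched below. The array conventions are assumptions for illustration: `class_sum[j, m]` is the per-class log-likelihood sum of (7), `W[j, m]` the current weights, `g` the discriminant scores, and `k` the true speaker index.

```python
import numpy as np

def weight_gradient(W, class_sum, g, k, gamma=1.0, eta=1.0):
    """Return dl_k / dw~_{jm} as an (S, M) array via the chain rule (14)."""
    S, M = W.shape
    # (9)-(10): misclassification measure and its sigmoid loss (11)
    d_k = -g[k] + np.log(np.mean(np.exp(eta * np.delete(g, k)))) / eta
    l_k = 1.0 / (1.0 + np.exp(-gamma * d_k))
    dl_dd = gamma * l_k * (1.0 - l_k)          # (15)
    # (16): -1 for the true speaker, softmax share among competitors otherwise
    dd_dg = np.exp(eta * g)
    dd_dg[k] = 0.0
    dd_dg = dd_dg / dd_dg.sum()
    dd_dg[k] = -1.0
    dg_dw = W * class_sum                      # (17)
    return dl_dd * dd_dg[:, None] * dg_dw

# Toy check: 2 speakers, 2 classes, speaker 0 is the true speaker
W = np.full((2, 2), 0.5)
class_sum = np.array([[-1.0, -2.0], [-3.0, -4.0]])
g = np.sum(W * class_sum, axis=1)              # (7): [-1.5, -3.5]
grad = weight_gradient(W, class_sum, g, k=0)
```

Note the opposite signs of the true-speaker and competitor rows in (16): the GPD step (13) pushes the scores of the true and competing speakers apart.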

After {\tilde{w}}_{\mathit{jm}} is updated, *w*_{jm} is obtained by using the following transformation to satisfy the constraints in (8):

{w}_{\mathit{jm}}=\frac{exp\left({\tilde{w}}_{\mathit{jm}}\right)}{{\displaystyle \sum _{m=1}^{M}exp\left({\tilde{w}}_{\mathit{jm}}\right)}}.

(18)

The pseudocode of this training algorithm for the discriminative score weights is given in Algorithm 1.
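The training procedure above can be summarized end to end as in the sketch below. This is an illustrative reconstruction under the stated equations, not a transcription of Algorithm 1: each training utterance is assumed to be pre-summarized as a pair `(class_sum, k)`, where `class_sum[j, m]` holds the per-class log-likelihood sums of (7) and `k` is the true speaker label, and all hyperparameter values are placeholders.

```python
import numpy as np

def softmax_rows(W_tilde):
    """(18): map log-domain weights back to the simplex, per speaker."""
    e = np.exp(W_tilde - W_tilde.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def grad_W_tilde(W, class_sum, g, k, gamma, eta):
    """Chain-rule gradient (14)-(17) of the loss w.r.t. w~_{jm}."""
    d_k = -g[k] + np.log(np.mean(np.exp(eta * np.delete(g, k)))) / eta
    l_k = 1.0 / (1.0 + np.exp(-gamma * d_k))
    dd_dg = np.exp(eta * g)
    dd_dg[k] = 0.0
    dd_dg = dd_dg / dd_dg.sum()
    dd_dg[k] = -1.0
    return gamma * l_k * (1.0 - l_k) * dd_dg[:, None] * (W * class_sum)

def train_weights(utterances, S, M, epochs=20, eps=0.5, gamma=1.0, eta=1.0):
    """GPD loop (13): update log-domain weights, renormalize via (18)."""
    W_tilde = np.zeros((S, M))                 # uniform weights initially
    for _ in range(epochs):
        for class_sum, k in utterances:
            W = softmax_rows(W_tilde)          # (18)
            g = np.sum(W * class_sum, axis=1)  # (7)
            W_tilde -= eps * grad_W_tilde(W, class_sum, g, k, gamma, eta)
    return softmax_rows(W_tilde)

# Toy data: 2 speakers, 2 classes; class 1 is the discriminative one
utts = [
    (np.array([[-1.0, -1.0], [-1.0, -4.0]]), 0),  # spoken by speaker 0
    (np.array([[-1.0, -4.0], [-1.0, -1.0]]), 1),  # spoken by speaker 1
]
W = train_weights(utts, S=2, M=2)
```

By construction, the returned weights satisfy the constraints in (8): each row of `W` is nonnegative and sums to one.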