Multi-task hidden Markov modeling of spectrogram feature from radar high-resolution range profiles

In radar high-resolution range profile (HRRP)-based statistical target recognition, one of the most challenging tasks is feature extraction. This article utilizes the spectrogram feature of HRRP data to improve recognition performance; the spectrogram is a two-dimensional feature that captures how the frequency domain characteristics of the signal vary with time. A new radar HRRP target recognition method is then presented via a truncated stick-breaking hidden Markov model (TSB-HMM). Moreover, multi-task learning (MTL) is employed, from which a full posterior distribution on the number of states associated with each target can be inferred, and the target-dependent state information is shared among the multiple target-aspect frames of each target. The TSB-HMM framework allows efficient variational Bayesian inference, which is of interest for large-scale problems. Experimental results on measured data show that the spectrogram feature has significant advantages over time domain samples in both recognition and rejection performance, and that MTL provides better recognition performance.

Several studies [8][9][10][11][12][13][14][15][16] show that statistical recognition is an efficient approach to RATR. Figure 2 shows a typical flow chart of radar HRRP statistical recognition. By statistical recognition we mean that the feature vector y extracted from a test HRRP sample x is assigned to the class with maximum posterior probability p(c|y), where c ∈ {1, ..., C} denotes the class membership. By Bayes' rule, p(c|y) ∝ p(y|c)p(c), where p(y|c) is the class-conditional likelihood and p(c) is the prior class probability. Since the prior class probability is usually assumed to be uniform, estimating the posterior probability p(c|y) of each class reduces to estimating the class-conditional likelihood p(y|c) of each class. The statistical recognition procedure usually has two stages (training and classification). We suppose the class-conditional likelihood p(y|c) can be described by a model with a set of parameters (i.e., a parametric model). In the training phase, these parameters are estimated from training data (known as statistical modeling). In the classification phase, as discussed above, given a test sample x, we first extract the feature vector y, then calculate the class-conditional likelihood p(y|c) for each target c; finally, the test sample is assigned to target c′ = arg max_c p(c|y). The focus of this article is on feature extraction and statistical modeling.
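As a minimal illustration of this decision rule (not the authors' code; the log-likelihood values below are made up for the example), the classification step under a uniform class prior can be sketched in Python:

```python
import numpy as np

# Hypothetical per-class log-likelihoods log p(y|c): rows = test samples,
# columns = targets. In the article these would come from the learned
# class-conditional models; the numbers here are invented.
log_likelihoods = np.array([
    [-120.3, -118.7, -131.0],   # sample 1
    [-98.5,  -110.2, -101.4],   # sample 2
])

# With a uniform prior p(c), p(c|y) ∝ p(y|c), so the MAP decision reduces
# to picking the class with the largest class-conditional log-likelihood.
predicted_class = np.argmax(log_likelihoods, axis=1)
print(predicted_class)
```

For the values above, the rule assigns sample 1 to class index 1 and sample 2 to class index 0.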
Feature extraction from HRRP is a key step of our recognition system. One general feature extraction method is feature dimensionality reduction [5]. This method is generally supervised, and some discriminative information may be lost during the dimensionality reduction procedure. Another general feature extraction method is feature transformation. The study in [6] investigates various higher-order spectra and shows that the power spectrum gives the best recognition performance. Rather than utilizing only the frequency domain feature as in [6], this study exploits the spectrogram feature of HRRP data to combine time domain and frequency domain features; the spectrogram is a two-dimensional feature that captures the variation of the frequency domain feature with time. Several statistical models [8][9][10][11][12][13][14][15][16] have been developed for HRRP-based RATR, of which [14][15][16] successfully utilized the hidden Markov model (HMM) for modeling feature vectors from HRRP sequences. Since the HRRP sample is a typical high-dimensional distributed signal, it is computationally prohibitive to build such a high-dimensional HMM to describe HRRP sequences directly. To avoid this problem, dimensionality reduction methods have been utilized. For example, in [14,15], the RELAX algorithm is employed to extract the waveform constituents from the HRRP radar echoes, and an HMM is then utilized to characterize the features; in [16], a nonstationary HMM is utilized to characterize two features, i.e., the location information of scattering centers extracted via the multi-RELAX algorithm, and the moments of the HRRP radar echoes. Nevertheless, some information contained in the HRRP samples is inevitably lost during the dimensionality reduction procedure.
Moreover, since multiple aspect-dependent looks at a single target are utilized in the classification phase, the angular velocity of the target relative to the LOS of the radar is required to remain the same for the training and classification phases, which can hardly be satisfied for a noncooperative target.
The study reported here seeks an alternative way of exploiting HMM, in which we characterize the spectrogram feature from a single HRRP sample via the hidden Markov structure. In our model, the spectrogram feature extracted from each single HRRP is viewed as a d-dimensional sequence (d is the length of spectrogram feature in the frequency dimension), thus only a single HRRP sample is required for the classification phase rather than an aspect-dependent sequence. The main contribution of this study can be summarized as follows.
(a) Spectrogram feature: Time domain HRRP samples only characterize the time domain feature of the target, which is too limited to obtain good performance. By contrast, the spectrogram feature introduced in this article is a time-frequency representation of HRRP data. Physically, the spectrogram feature in each time bin characterizes the frequency domain property of a fragment of the target, which can reflect the scattering properties of different physical structures. Therefore, the spectrogram feature should be a better choice for the recognition problem. (b) Nonparametric model selection via stick-breaking construction: In the context of target recognition using HMMs, a key issue is to develop a methodology for defining an appropriate set of states so as to avoid over- or under-fitting. A Bayesian nonparametric infinite HMM (iHMM) constituted by the hierarchical Dirichlet process (HDP) has proven effective for inferring the number of states in acoustic sensing scenarios [17,18]. However, the lack of conjugacy between the two levels of the HDP means that a truly variational Bayesian (VB) solution is difficult for the HDP-HMM, which makes it computationally prohibitive for large data problems. A recent study [19] proposes another way to construct an iHMM, in which each row of the transition matrix and the initial state probability vector is given a fully conjugate infinite-dimensional stick-breaking prior. This construction can accommodate an infinite number of states, with the statistical property that only a subset of these states is used with substantial probability; it is referred to as the stick-breaking HMM (SB-HMM). We utilize the truncated version of this stick-breaking construction in our model to characterize the HMM states, which is referred to as the truncated stick-breaking HMM (TSB-HMM).
(c) Multi-task learning (MTL): A limitation of statistical models is that they usually require substantial training data, assumed to be similar to the data on which the model is tested. However, in radar target recognition problems one may have limited training data. In addition, the test data may be obtained under different motion circumstances. Rather than building a model for each data subset associated with a different target-aspect individually, it is desirable to appropriately share information among these related data, thus offering the potential to improve overall recognition performance. If the modeling of one data subset is termed one learning task, learning models for all tasks jointly is referred to as MTL [20]. We here extend TSB-HMM learning to a multi-task setting for spectrogram feature-based radar HRRP recognition. (d) Full Bayesian inference: We present a fully conjugate Bayesian model structure, which admits an efficient VB solution.
The remainder of this article is organized as follows. We introduce spectrogram feature of HRRP data and analyze its advantages over time domain samples in Section 2. Section 3 briefly reviews the traditional HMMs. In Section 4, the proposed model construction is introduced, and the model learning and classification are implemented based on VB inference. We present experimental results on both single-task and multi-task TSB-HMM with time domain feature and spectrogram feature of measured HRRP data in Section 5. Finally, the conclusions are addressed in Section 6.

Definition of the spectrogram
The spectrogram analysis is a common signal processing procedure in spectral analysis and other fields. It is a view of a signal represented over both time and frequency domains, and has widely been used in the fields of radar signal processing, and speech processing [21,22], etc.
Spectrograms can readily be created by calculating the short-time Fourier transform (STFT) of the time signal. The STFT may be represented as

STFT{x(t)}(τ, ω) = X(τ, ω) = ∫ x(u) w(u − τ) e^(−jωu) du,  (1)

where x(u) is the signal to be transformed and w(·) is the window function.
The spectrogram is given by the squared magnitude of the STFT of the signal:

Spectrogram{x(t)}(τ, ω) = |X(τ, ω)|².  (2)

From (1) and (2), we can see that the spectrogram shows how the spectral density of the signal varies with time.
In RATR problems, employing some nonlinear transformation (e.g., the power transform metric) in the feature domain may correct for departures of the samples from the normal distribution to some extent and improve the average recognition performance of learning models [2]. The power transform metric is defined as

y_a = y^a,  (3)

where a is the power parameter.
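The feature extraction in (1)-(3) can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes a Hamming window, and the default power parameter a = 0.4 matches the optimal value reported later in the article.

```python
import numpy as np

def spectrogram(x, win_len, overlap, a=0.4):
    """Squared-magnitude STFT of a real signal (Eqs. (1)-(2)),
    followed by the element-wise power transform y^a (Eq. (3))."""
    hop = win_len - overlap
    w = np.hamming(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len] * w       # windowed segment
        spec = np.abs(np.fft.rfft(seg)) ** 2     # |STFT|^2
        frames.append(spec ** a)                 # power transform
    return np.array(frames).T                    # (freq bins, time bins)

# Toy 128-cell "HRRP" signal; window width 33 and overlap 16 follow the
# optimal settings found in Section 5.
x = np.cos(2 * np.pi * 0.1 * np.arange(128))
S = spectrogram(x, win_len=33, overlap=16)
print(S.shape)  # (17, 6): 17 frequency bins, 6 time bins
```

Each column of S is the observation vector for one time chunk, which is exactly the d-dimensional sequence the TSB-HMM models later in the article.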

Spectrogram feature of HRRP data
The sequential relationship across the range cells within a single HRRP echo can reflect the physical composition of the target. This is illustrated by Figure 3, which presents the HRRP samples and corresponding spectrogram features of three plane targets, i.e., Yark-42, An-26, and Cessna Citation S/II. The advantages of the spectrogram are as follows: (i) HRRP scattering from a complex target is a strong function of the target–sensor orientation, and even a slight variation of the target-aspect may cause the scatterers at the edges of the target to move across range cells [23]. When the target-aspect changes slightly, the scatterers within several continuous range cells (referred to as a chunk) are more robust than the scatterers in a single range cell. Therefore, the sequential relationship across the chunks in the spectrogram of a single HRRP echo, rather than that across the individual range cells within a single HRRP, reflects the target's physical composition more robustly. (ii) The spectrogram is a time-frequency representation of a signal. It describes not only the time domain feature but also the spectral density varying with time. (iii) At each discrete time (each chunk or each range cell), the observation of the spectrogram feature is a vector, while that of the time domain feature (HRRP sample) is a scalar. Thus, the high-dimensional feature vector may carry more discriminative detail than a single point.

Review of traditional HMM: finite HMM and infinite HMM
The HMM [24] has widely been used in speech recognition and target recognition. It is a generative representation of sequential data with an underlying Markovian process selecting state-dependent distributions from which observations are drawn. Specifically, for a sequence of length T, a state sequence s = (s_1, s_2, ..., s_T) is drawn from P(s_t | s_{t−1}). Given the observation model f(·), the observation sequence x = (x_1, x_2, ..., x_T) is then drawn as x_t ~ f(θ_{s_t}), where θ_{s_t} is the set of parameters of the observation model indexed by the state at time t.
An HMM can be parameterized as Φ = {w_0, w, θ}, where w_0(i) = p(s_1 = i) is the initial state probability, w(i, j) = p(s_t = j | s_{t−1} = i) is the state transition probability, and θ collects the state-dependent observation parameters. Given the model parameters Φ, the probability of the complete data can be expressed as

p(x, s | Φ) = w_0(s_1) f(x_1 | θ_{s_1}) ∏_{t=2}^{T} w(s_{t−1}, s_t) f(x_t | θ_{s_t}),

and the data likelihood p(x | Φ) can be obtained by summing over the states using the forward algorithm [24]. In a classical HMM [24], the number of states is initialized and fixed; therefore, the model structure must be specified before learning. However, in many practical applications an expensive model selection process is needed to obtain a correct model structure. To avoid model selection, a fully nonparametric Bayesian approach with a countably infinite state space can be employed, first proposed by Beal and termed the infinite hidden Markov model (iHMM) [25].
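The forward algorithm that yields the data likelihood p(x|Φ) can be sketched as follows, using a toy 2-state HMM whose parameters are invented for illustration (the log-sum-exp trick is added for numerical stability and is not part of the article):

```python
import numpy as np

def forward_log_likelihood(w0, w, log_emis):
    """Forward algorithm: log p(x | Phi) for an HMM with initial
    probabilities w0 (I,), transition matrix w (I, I), and per-step
    log emission probabilities log_emis (T, I)."""
    alpha = np.log(w0) + log_emis[0]
    for t in range(1, log_emis.shape[0]):
        # alpha_t(j) = log sum_i exp(alpha_{t-1}(i)) w(i,j) + log f(x_t|theta_j)
        m = alpha.max()
        alpha = m + np.log(np.exp(alpha - m) @ w) + log_emis[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

# toy 2-state HMM with made-up parameters
w0 = np.array([0.6, 0.4])
w = np.array([[0.7, 0.3], [0.2, 0.8]])
log_emis = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]]))
ll = forward_log_likelihood(w0, w, log_emis)
print(ll)
```

The same recursion, run over the test sequence for each target's model, supplies the class-conditional likelihoods used in the classification phase.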
A recent study [19] proposes an iHMM with stick-breaking priors (SB-HMM), which can be used to develop an HMM with an unknown number of states. In this model, each row of the infinite state transition matrix w is given a stick-breaking prior. The model is expressed as

G_i = Σ_{j=1}^{∞} w_{i,j} δ(θ_j),
w_{i,j} = v_{i,j} ∏_{k=1}^{j−1} (1 − v_{i,k}),
v_{i,j} ~ Beta(1, b_i), b_i ~ Ga(a_α, b_α), θ_j ~ H,

where the mixture distribution G_i has weights w_i = [w_{i,1}, w_{i,2}, ..., w_{i,j}, ...], δ(θ_j) is a point measure concentrated at θ_j, Beta(1, b_i) is the Beta distribution with hidden variable b_i, the drawn variables {v_{i,j}}_{j=1}^{∞} are independent and identically distributed (i.i.d.), Ga(a_α, b_α) is the Gamma distribution with preset parameters a_α, b_α, and H denotes the prior distribution from which the set {θ_j}_{j=1}^{∞} is drawn i.i.d. The initial state probability mass function w_0 is also constructed according to an infinite stick-breaking construction. When w_i is truncated at some finite number I [26], the prior can be written as v_i ~ Beta(1_{(I−1)×1}, [b_i]_{(I−1)×1}) with v_{i,I} = 1, where 1_{(I−1)×1} denotes an (I − 1)-length vector of ones, [b_i]_{(I−1)×1} is an (I − 1)-length vector of b_i, and I represents the truncation number of states.
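A minimal sketch of drawing one row of the transition matrix from the truncated stick-breaking prior (with truncation I = 40, the level used later for STL) might look like this; it is an illustration of the construction, not the authors' sampler:

```python
import numpy as np

def truncated_stick_breaking(b, I, rng):
    """One draw from a truncated stick-breaking prior:
    v_j ~ Beta(1, b) for j < I, v_I = 1, and
    w_j = v_j * prod_{k<j} (1 - v_k). The weights sum to 1 exactly."""
    v = rng.beta(1.0, b, size=I)
    v[-1] = 1.0  # truncation: the last break takes all remaining mass
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(0)
w = truncated_stick_breaking(b=1.0, I=40, rng=rng)
print(w.sum())            # exactly 1 up to floating point
print((w > 1e-3).sum())   # only a subset of states carries substantial mass
```

Smaller values of b concentrate the mass on fewer sticks, which is how the prior on b_i controls the effective number of states.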
The key advantage of the stick-breaking construction is that the state-dependent parameters θ_j are drawn separately, effectively decoupling the construction of θ from the construction of the initial-state probability w_0 and the state transition matrix w. This is in contrast with HDP priors [27], where these quantities are linked through a two-level construction. The stick-breaking construction therefore makes fast variational inference feasible. In addition, the SB-HMM has a good sparseness property, which promotes sparse utilization of the underlying states [19].

MTL model construction
According to the scattering center model [4], for a high-resolution radar system a radar target no longer appears as a "point target", but consists of many scatterers distributed over range cells along the radar LOS. For a given target, the scattering center model varies across the whole target-aspect. Therefore, pre-processing techniques should be applied to the raw HRRP data. In our previous work [11][12][13], we divide the HRRPs into frames according to aspect-sectors within which most scatterers do not move through resolution cells (MTRC), and use a distinct parametric model for the statistical characterization of each HRRP frame; these are referred to as aspect-frames and the corresponding single-task learning (STL) models in our articles.
For the HRRP recognition problems of interest here, we utilize the TSB-HMM to analyze spectrogram features extracted from HRRP data. For a multi-aspect HRRP sequence of target c (c ∈ {1, ..., C}, with C denoting the number of targets), we divide the data into M_c aspect-frames; the mth frame (m ∈ {1, ..., M_c}) is denoted X^(c,m) = {x^(c,m,n)}_{n=1}^{N}, where N denotes the number of samples in the frame, and x^(c,m,n) = [x^(c,m,n)(1), ..., x^(c,m,n)(L_x)]^T represents the nth HRRP sample in the mth frame, with L_x denoting the number of range cells in an HRRP sample. Each aspect-frame corresponds to a small aspect-sector avoiding scatterers' MTRC [13], and the HRRP samples within each target-aspect frame can be assumed i.i.d. We extract the spectrogram feature of each HRRP sample, and Y^(c,m,n) = [y^(c,m,n)(1), ..., y^(c,m,n)(L_y)] denotes the spectrogram feature of x^(c,m,n) as defined in (2), with L_y denoting the number of time bins in the spectrogram feature.
Learning a separate TSB-HMM for each frame of a target, i.e., for {Y^(c,m,n)}_{n=1}^{N}, is termed single-task TSB-HMM (STL TSB-HMM). Here, we instead wish to learn a TSB-HMM for all aspect-frames (tasks) of one target jointly, which is referred to as multi-task TSB-HMM (MTL TSB-HMM). MTL is an approach to inductive transfer that improves generalization by using the domain information contained in the training samples of related tasks as an inductive bias [20]. In our learning problems, the aspect-frames of one target may be viewed as a set of related learning tasks. Rather than building a model for each aspect-frame individually (due to target-aspect sensitivity), it is desirable to appropriately share information among these related data. The effective training data for each task are thereby strengthened, and overall recognition performance is potentially improved.
The construction of the MTL TSB-HMM for target c is represented as

s^(c,m,n)_1 ~ w_0^(c,m),  s^(c,m,n)_l ~ w^(c,m)_{s^(c,m,n)_{l−1}} (l = 2, ..., L_y),
y^(c,m,n)(l) ~ f(θ^(c)_{s^(c,m,n)_l}),  (6)

with truncated stick-breaking priors on w_0^(c,m) and on each row of w^(c,m), b_i^(c,m) ~ Ga(a_α, b_α), and θ_j^(c) ~ H, where y^(c,m,n)(l) is the lth time chunk of the spectrogram of the nth sample in the mth aspect-frame of the cth target, s^(c,m,n)_l denotes the corresponding state indicator, and (a_α, b_α) are the preset hyperparameters. Here, the observation model f(·) is defined as a product of independent normal distributions, and each corresponding element of H(·) is a normal-Gamma distribution to preserve conjugacy. Since each time bin of a plane's spectrogram feature corresponds to a fragment of the plane, the HMM states can characterize the frequency domain properties of different fragments of the plane target, i.e., the scattering properties of different physical structures. A graphical representation of this model is shown in Figure 4a, and Figure 4b depicts how the sequential dependence across time chunks within a given aspect-frame is characterized by an HMM structure.
The main difference between MTL TSB-HMM and STL TSB-HMM is that, in the proposed MTL TSB-HMM, all multi-aspect frames of one target are learned jointly: each of the M_c tasks of target c is assumed to have its own independent state-transition statistics, but the state-dependent observation statistics are shared across these tasks, i.e., the observation parameters are learned from all aspect-frames. In the STL TSB-HMM, by contrast, each aspect-frame of target c is learned separately; each target-aspect frame builds its own model, and the corresponding parameters are learned only from that aspect-frame.
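This sharing pattern can be illustrated with a toy generative sketch, in which each task draws its own initial and transition probabilities while all tasks draw observations from shared state-dependent parameters. For brevity the stick-breaking priors are replaced here by symmetric Dirichlet draws, and all dimensions are made up; this is not the article's model code.

```python
import numpy as np

rng = np.random.default_rng(1)
I, D, M, L = 5, 16, 3, 6   # states, feature dim, tasks (aspect-frames), seq length

# Shared state-dependent observation parameters (one Gaussian mean per
# state): in MTL these are learned jointly from all tasks of a target.
theta = rng.normal(size=(I, D))

def sample_task(theta, I, L, rng):
    """Each task keeps its own initial/transition statistics but draws
    observations from the shared emission parameters theta."""
    w0 = rng.dirichlet(np.ones(I))           # task-specific initial probs
    w = rng.dirichlet(np.ones(I), size=I)    # task-specific transition rows
    s = [rng.choice(I, p=w0)]
    for _ in range(1, L):
        s.append(rng.choice(I, p=w[s[-1]]))
    y = theta[s] + 0.1 * rng.normal(size=(L, theta.shape[1]))
    return np.array(s), y

sequences = [sample_task(theta, I, L, rng) for _ in range(M)]
print(len(sequences), sequences[0][1].shape)  # 3 tasks, each (6, 16)
```

In the STL counterpart, each task would additionally draw its own theta, which is exactly what the MTL construction avoids.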

Model learning
The parameters of the proposed MTL TSB-HMM are treated as random variables, and the model can readily be implemented via the Markov chain Monte Carlo (MCMC) method [28]. However, to approximate the posterior distribution over the parameters, MCMC requires large computational resources to assess the convergence and reliability of estimates. In this article, we employ VB inference [19,29,30], which does not produce a single point estimate of the parameters but instead estimates the full posterior density over the model parameters, as a compromise between accuracy and computational cost for large-scale problems.
The goal of Bayesian inference is to estimate the posterior distribution of the model parameters Φ. Given the observed data X and hyperparameters γ, by Bayes' rule the posterior density of the model parameters may be expressed as

p(Φ | X, γ) = p(X | Φ, γ) p(Φ | γ) / p(X | γ),

where the denominator p(X | γ) = ∫ p(X | Φ, γ) p(Φ | γ) dΦ is the model evidence (marginal likelihood).
VB inference provides a computationally tractable alternative that seeks a variational distribution q(Φ) to approximate the true posterior p(Φ | X, γ). We have the decomposition

log p(X | γ) = L(q(Φ)) + KL(q(Φ) || p(Φ | X, γ)),
with L(q(Φ)) = ∫ q(Φ) log [p(X, Φ | γ) / q(Φ)] dΦ,

where KL(q(Φ) || p(Φ | X, γ)) is the Kullback-Leibler (KL) divergence between the variational distribution q(Φ) and the true posterior p(Φ | X, γ). Since KL(q(Φ) || p(Φ | X, γ)) ≥ 0, with equality when q(Φ) = p(Φ | X, γ), L(q(Φ)) forms a lower bound on log p(X | γ), i.e., log p(X | γ) ≥ L(q(Φ)). Minimizing the KL divergence between the variational distribution and the true posterior is therefore equivalent to maximizing this lower bound, known as the negative free energy in statistical physics. Since the bound is intractable to optimize directly, for computational convenience we assume a factorized form q(Φ) = ∏_k q_k(φ_k), with each factor taking the same functional form as the corresponding factor in p(Φ | X, γ). With this mean-field assumption, the variational distribution for the proposed MTL TSB-HMM of target c factorizes over the latent variables Φ = {s, w_0, w, b, θ, β} of the MTL model.
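The nonnegativity of the KL term, which is what makes L(q) a valid lower bound, can be checked numerically. The closed-form KL divergence between two univariate Gaussians used below is standard and is included only as an illustration; it is not part of the article's model:

```python
import numpy as np

def kl_gauss(mu_q, var_q, mu_p, var_p):
    """KL(q || p) for two univariate Gaussians (closed form)."""
    return 0.5 * (np.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# KL is zero iff q = p and positive otherwise, so
# L(q) = log p(X|gamma) - KL(q || posterior) is a lower bound that is
# tight exactly when q matches the posterior.
print(kl_gauss(0.0, 1.0, 0.0, 1.0))   # 0.0
print(kl_gauss(1.0, 2.0, 0.0, 1.0))   # strictly positive
```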
A general method for performing variational inference in conjugate-exponential Bayesian networks, outlined in [17], is as follows: for a given node in the graphical model, write out the posterior as though everything were known, take the logarithm and then the expectation with respect to all the other variables, and exponentiate the result. Variational inference can thus be implemented with an expectation-maximization (EM)-style algorithm in which the lower bound increases at each iteration until the algorithm converges. In the following experiments, we terminate the algorithm when the change of the lower bound becomes negligible (the threshold is 10^-6). Since each iteration requires computational resources comparable to the EM algorithm, variational inference is faster than MCMC methods. The detailed update equations for the latent variables and hyperparameters of the MTL TSB-HMM with the HRRP spectrogram feature are summarized in the Appendix.

Main procedure of radar HRRP target recognition based on the proposed MTL TSB-HMM algorithm
The main procedure of radar HRRP target recognition based on the proposed MTL TSB-HMM algorithm is shown as follows.

Classification phase
(1) The amplitude-normalized HRRP test sample is time-shift compensated with respect to the averaged HRRP of each frame model via slide correlation processing [23]. (5) As discussed in Section 1, the test HRRP sample is assigned to the class with the maximum class-conditional likelihood, under the assumption that the prior class probabilities are the same for all targets of interest: c′ = arg max_c p(y | c).
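Step (1) can be sketched as follows. This is a minimal illustration of slide-correlation time-shift compensation with a toy one-scatterer template, not the exact processing chain of [23]:

```python
import numpy as np

def slide_correlation_align(test_hrrp, template):
    """Circularly shift the test HRRP so that its correlation with the
    frame's averaged HRRP template is maximized."""
    corrs = [np.dot(np.roll(test_hrrp, k), template)
             for k in range(len(test_hrrp))]
    best = int(np.argmax(corrs))
    return np.roll(test_hrrp, best), best

template = np.zeros(8); template[2] = 1.0   # averaged HRRP of a frame (toy)
test = np.zeros(8); test[5] = 1.0           # same profile, time-shifted
aligned, shift = slide_correlation_align(test, template)
print(shift, int(np.argmax(aligned)))       # shift that moves the peak to cell 2
```

After alignment, the spectrogram feature is extracted from the compensated sample and scored against each frame model.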

Measured data
We examine the performance of the TSB-HMM on 3-class measured data from three actual aircraft (a propeller plane An-26, a small jet plane Cessna Citation S/II, and a large jet plane Yark-42). The radar works in C band with a bandwidth of 400 MHz, so the range resolution of the HRRP is about 0.375 m. The parameters of the targets and radar are shown in Table 1, and the projections of the target trajectories onto the ground plane are displayed in Figure 5, from which the aspect angle of each airplane can be estimated from its position relative to the radar. As shown in Figure 5, all target aspects were measured repeatedly several times in this dataset. The requirements for choosing training and test data are that they come from different data segments, and that the training data cover almost all target-aspect angles of the test data while differing in elevation angle. The second and fifth segments of Yark-42, the sixth and seventh segments of Cessna Citation S/II, and the fifth and sixth segments of An-26 are taken as training samples, while the remaining data are left for testing. These training data cover almost all target-aspect angles. We also need test data from a different target to measure the rejection performance of our model. Here, we use 18,000 truck HRRP samples generated by the electromagnetic simulation software XPATCH as a confuser target. The HRRP samples are 128-dimensional vectors.
As discussed in the literature [11,12], dealing with the target-aspect, time-shift, and amplitude-scale sensitivity is a prerequisite for radar target recognition. According to the radar parameters and the condition that aspect-sectors avoid MTRC, the training data from the 3 targets yield 135 HRRP frames in total: 35 from Yark-42, 50 from Cessna Citation S/II, and 50 from An-26. As in previous studies [11][12][13], the HRRP training samples are aligned by the time-shift compensation techniques used in ISAR imaging [23] to remove the time-shift sensitivity, and each HRRP sample is normalized by its L2 norm to remove the amplitude-scale sensitivity. In the rest of this article, the training HRRPs in each frame are assumed to have been aligned and normalized.
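The amplitude-scale compensation can be sketched as a one-line L2 normalization (alignment is assumed to have been done beforehand); the numbers below are a toy example:

```python
import numpy as np

def l2_normalize(hrrp):
    """Remove amplitude-scale sensitivity by dividing an HRRP sample
    by its L2 norm; the result is invariant to overall gain."""
    return hrrp / np.linalg.norm(hrrp)

x = np.array([3.0, 4.0, 0.0])
y = l2_normalize(x)
print(np.linalg.norm(y))                      # unit norm after normalization
print(np.allclose(l2_normalize(10.0 * x), y)) # gain-invariant
```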
Nine training dataset sizes are considered. Due to the limited memory of our computer, we do not consider the 1024 × 135 training dataset in MTL. Since there is no prior knowledge of how many states we should use or how to set them, the HMM states are not set manually as in [31]; instead, we set a large truncation number in our model to learn the meaningful states automatically. In the following experiments, we set the truncation level I to 40 for both the spectrogram feature and the time domain feature in STL, and to 60 for both features in MTL. Similar results were found for larger truncations. In our model, since the parameter b_i controls the prior distribution on the number of states, we set the hyperparameters a_α = b_α = 10^-6 for each b_i to promote sparseness over the states.

Time domain feature versus spectrogram feature
In this experiment, STL TSB-HMMs and MTL TSB-HMMs are learned on the HRRP training data within each frame, and the two features, i.e., the time domain and spectrogram features, are compared. When using the HRRP time domain feature, we simply substitute the scalar x^(c,m,n)(l) for the vector y^(c,m,n)(l) in (6), where x^(c,m,n)(l) is the lth range cell of the nth HRRP sample in the mth frame of target c. Figure 6 shows that the performance of STL TSB-HMMs based on the time domain feature is better than that based on the spectrogram feature when the training dataset is no larger than 32 × 135. The reason is that more parameters need to be estimated for the model with the spectrogram feature than for the model with the time domain feature. For example, for a target-aspect frame with 32 training samples, we need to estimate 40 16-dimensional states for the model with the spectrogram feature, versus 25 1-dimensional states for the model with the time domain feature. When the training data size is larger than 32 × 135, the spectrogram feature obtains clearly better performance. Table 2 further compares the confusion matrices and average recognition rates of the time domain and spectrogram features with 1024 × 135 training samples via STL. The average recognition rate obtained by the spectrogram feature is about 6.6 percentage points higher than that obtained by the time domain feature. The performance of MTL TSB-HMMs is shown in Figure 7. Since MTL shares states between the different tasks of a target, which benefits parameter learning with small training data sizes, the spectrogram feature outperforms the time domain feature even with few training data.
The posterior state distributions of the MTL TSB-HMM with the spectrogram feature for all three plane targets with 128 × 135 training data are shown in Figure 8. In this example, a state truncation level of I = 60 is employed for each plane. For each plane, the 60 hidden states are shared across all aspect-frames; there are 46, 48, and 49 meaningful states with posterior state usage larger than zero for Yark-42, Cessna Citation S/II, and An-26, respectively, while the usage of the remaining 14, 12, and 11 states is zero, which justifies using the truncated stick-breaking prior for our data.
Next, we consider the target rejection problem. The three plane targets are considered "in-class targets", while the 18,000 simulated truck HRRP samples are considered confuser targets. Two examples of confuser target HRRP samples are shown in Figure 9. Our goal is to test whether a new sample belongs to the family of in-class targets or not. Figure 10 presents the rejection performance evaluated by receiver operating characteristic (ROC) curves, which depict the detection probability versus the false alarm probability; for a fixed false alarm probability, the method with the higher detection probability is better. The dataset size is 128 × 135 in the training phase. The spectrogram feature-based TSB-HMMs outperform the time domain feature-based TSB-HMMs, especially for Yark-42. Figure 11 shows the test likelihoods of 1,200 Yark-42 samples and 18,000 confuser target samples obtained with the STL TSB-HMM. As shown in Figure 11a, when using the time domain feature, the test likelihoods of the Yark-42 samples are relatively low, and many confuser samples have higher test likelihoods than Yark-42 samples. That is to say, when we set a high discrimination threshold, the detection probability is very low and the false alarm probability is high. By contrast, Figure 11b shows that when using the spectrogram feature, the test likelihoods of the Yark-42 samples are higher than most of those of the confuser samples. Therefore, the detection performance of the spectrogram feature is much better than that of the time domain feature for Yark-42.
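The ROC evaluation can be sketched by sweeping a discrimination threshold over the test log-likelihoods. The score distributions below are synthetic, chosen only to mimic separated in-class and confuser likelihoods; this is not the article's data:

```python
import numpy as np

def roc_curve(in_class_scores, confuser_scores):
    """Sweep a threshold over the scores and return (false-alarm
    probability, detection probability) pairs."""
    thresholds = np.sort(np.concatenate([in_class_scores,
                                         confuser_scores]))[::-1]
    pd = np.array([(in_class_scores >= t).mean() for t in thresholds])
    pfa = np.array([(confuser_scores >= t).mean() for t in thresholds])
    return pfa, pd

rng = np.random.default_rng(2)
in_class = rng.normal(0.0, 1.0, 500)    # synthetic in-class likelihoods
confuser = rng.normal(-2.0, 1.0, 500)   # synthetic confuser likelihoods
pfa, pd = roc_curve(in_class, confuser)
auc = float(np.sum(np.diff(pfa) * (pd[1:] + pd[:-1]) / 2))  # trapezoid AUC
print(round(auc, 2))
```

The better the separation between the two likelihood distributions (as in Figure 11b), the closer the AUC is to 1.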

STL versus MTL
In order to model the spectrogram feature, two parameters need to be set first, i.e., the width of the window function and the overlap length between adjacent windows.
In the feature space, the spectrogram varies with the width of the window function and the overlap across the windows. For HRRP data analysis, a wide window function provides better frequency resolution but worse time resolution, and vice versa. Physically, since the width of the window function determines the length of the target segments, the longer the segments into which we divide a target, the more physical composition of the target is contained in each component of the observation vector; meanwhile, the overlap across the windows determines the redundancy of the segments.
We build a set of MTL TSB-HMMs to search over these two parameters. The width of the window function is varied from 10 to 40 range cells with an increment of 1 range cell, and the overlap length is fixed at the typical value of half the window width. In this experiment, we use 64 × 135 training samples. As demonstrated in Figure 12, the optimal width of the window function is 33 range cells. We then fix the optimal window width and vary the overlap length from 1 to 29 range cells to determine the optimal overlap. From Figure 13, the optimal overlap length is 16 range cells. Therefore, we extract the spectrogram feature with a window width of 33 range cells and an overlap of 16 range cells for training. We compare two methods for the spectrogram feature-based TSB-HMM: (i) the proposed MTL TSB-HMM method, in which the target-aspect frames of a target are learned collectively; and (ii) the STL TSB-HMM method, in which each target-aspect frame is modeled separately. As shown in Figure 14, the proposed MTL TSB-HMM method consistently outperforms the STL TSB-HMM method, and the improvement is more significant when only a small amount of training data is available. This is because MTL shares states between the different tasks and uses the shared information to enhance overall performance. In addition, a state truncation level of I = 60 is employed for each of the three planes; since the 60 states are shared across the aspect-frames of each plane, we impose only 60 × 3 = 180 states in the MTL TSB-HMMs. In the training phase of the STL model, by contrast, a state truncation level of I = 40 is employed for each aspect-frame; as discussed in Section 5.1, we have 50 + 50 + 35 = 135 aspect-frames from the three targets, so we impose 40 × 135 = 5400 states in total in the STL TSB-HMMs. The corresponding results are summarized in Table 3. We also consider the target rejection problem here. The ROC curves of the MTL TSB-HMMs are presented in Figure 15.
Compared with Figure 10, the area under the curve (AUC) of STL is slightly larger than that of MTL.
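The sliding-window spectrogram extraction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the HRRP sample is a real-valued 1-D array, uses a Hamming window (an assumption; the paper does not specify the window type here), and takes the magnitude of the short-time Fourier transform as the feature.

```python
import numpy as np

def spectrogram_feature(hrrp, width=33, overlap=16):
    """Short-time Fourier magnitude of an HRRP sample.

    `width` is the window length in range cells and `overlap` the number
    of cells shared by consecutive windows (33 and 16 are the optimal
    values found in the search described in the text).
    """
    hop = width - overlap                      # window shift per frame
    window = np.hamming(width)                 # assumed window type
    n_frames = 1 + (len(hrrp) - width) // hop
    frames = np.stack([hrrp[i * hop : i * hop + width] * window
                       for i in range(n_frames)])
    # One spectral observation vector per time position
    return np.abs(np.fft.rfft(frames, axis=1))

# Toy example: a 256-cell range profile
x = np.random.rand(256)
S = spectrogram_feature(x)
print(S.shape)  # (14, 17): 14 time positions, 17 frequency bins
```

Each row of `S` is one observation vector fed to the HMM, so the time axis of the spectrogram becomes the state-sequence axis of the model.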

MTL with power transformed spectrogram feature
We apply the power transform metric to all the observation vectors (the vectors at each fixed time) of the spectrogram. Figure 16 shows that the optimal parameter is a* = 0.4. Figure 17 shows that MTL TSB-HMMs outperform STL TSB-HMMs on the power transformed spectrogram feature. Compared with the average recognition rates of the original spectrogram feature in Figure 14, those of the power transformed spectrogram feature shown in Figure 17 are much larger, especially for small training data sets.
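The power transformation itself is a simple element-wise operation; a minimal sketch, assuming non-negative spectrogram magnitudes and the optimal exponent a* = 0.4 found above:

```python
import numpy as np

def power_transform(spec, a=0.4):
    # Element-wise power transform applied to each observation vector
    # of the spectrogram; a = 0.4 is the optimal exponent reported
    # in the text (Figure 16).
    return np.power(spec, a)

# Exponents a < 1 compress large values more than small ones,
# reducing the dynamic range of the spectrogram amplitudes.
print(power_transform(np.array([4.0]), a=0.5))  # [2.]
```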
The confusion matrices and average recognition rates of STL TSB-HMMs and MTL TSB-HMMs with the power transformed spectrogram feature are shown in Table 4, where 2 × 135 and 128 × 135 training samples are used for learning the models. Note that when 2 × 135 training samples are used for MTL TSB-HMMs based on the spectrogram feature with the optimal power transformation, the average recognition rate is nearly equivalent to that obtained with 128 × 135 training samples and the original spectrogram feature. When 128 training samples per target-aspect frame are used, the power transformation improves the average recognition rate of STL TSB-HMMs by 4.3% and that of MTL TSB-HMMs by 2.5%.
Similarly, as shown in Figure 18, we obtain the ROC curves for the power transformed spectrogram feature in the same experimental setting as in Section 5.3. The AUCs of STL and MTL with the transformed spectrogram feature increase by 3.3 and 5.0%, respectively; the transformed spectrogram feature therefore also improves the rejection performance.
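The AUC values quoted above can be computed from an ROC curve by trapezoidal integration. A minimal sketch (not tied to the paper's data; the hypothetical inputs are false/true positive rates obtained by sweeping a rejection threshold):

```python
import numpy as np

def auc(fpr, tpr):
    """Area under an ROC curve by trapezoidal integration.

    `fpr` and `tpr` are false- and true-positive rates collected
    while sweeping the decision threshold.
    """
    fpr, tpr = np.asarray(fpr, float), np.asarray(tpr, float)
    order = np.argsort(fpr)              # integrate along increasing FPR
    f, t = fpr[order], tpr[order]
    # Sum of trapezoid areas between consecutive ROC points
    return float(np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2.0))

print(auc([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # 1.0 (perfect rejection)
print(auc([0.0, 1.0], [0.0, 1.0]))            # 0.5 (chance level)
```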

Computation burden
All experiments were performed with non-optimized MATLAB code on a Pentium PC with a 3.2-GHz CPU and 2 GB RAM. In our VB algorithm, convergence is declared when the relative change of the lower bound between two consecutive iterations falls below the threshold 10^-6. In general, a larger training dataset incurs a heavier computational burden in the training phase. When the training dataset contains 128 × 135 training samples, the VB algorithm of the MTL TSB-HMM with the time domain feature and truncation number I = 60 converges after about 400 iterations and requires about 14 h, while that with the spectrogram feature and the same truncation number converges after about 200 iterations and requires about 2 h. Although this training cost is considerable, it can be ignored for an off-line learning (training) system; the computation cost in the classification phase is the more important quantity. The MTL TSB-HMMs with the time domain feature and the spectrogram feature require 0.6680 and 1.5893 s, respectively, to match a test sample against all frame models. The computation times given here are averaged over ten runs.
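The stopping rule used above can be sketched as a relative-change test on the variational lower bound; a minimal illustration (the threshold 10^-6 is the one stated in the text, the numeric bound values are hypothetical):

```python
def converged(lb_prev, lb_curr, tol=1e-6):
    # Stop the VB iterations when the relative change of the lower
    # bound between two consecutive iterations drops below `tol`.
    return abs(lb_curr - lb_prev) / abs(lb_prev) < tol

# Hypothetical lower-bound values from consecutive VB iterations
print(converged(-1000.0, -1000.0005))  # True:  relative change 5e-7
print(converged(-1000.0, -999.0))      # False: relative change 1e-3
```

Because the variational lower bound is guaranteed to be non-decreasing over VB iterations, this test is a standard and safe termination criterion.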

Conclusion
We have utilized the spectrogram feature of HRRP data and presented an MTL-based hidden Markov model with a truncated stick-breaking prior (MTL TSB-HMM) for radar HRRP target recognition. The construction of this model allows VB inference, which greatly reduces the computational burden.
After resolving the three sensitivity problems of HRRP, i.e., the target-aspect, time-shift, and amplitude-scale sensitivities, we first compare the spectrogram feature of HRRP with the time domain feature of HRRP data via single-task and multi-task learning-based hidden Markov models with truncated stick-breaking priors (STL and MTL TSB-HMM). Second, we measure the performance of STL TSB-HMM and MTL TSB-HMM with the spectrogram feature, where in MTL TSB-HMM the multiple tasks are linked by different target-aspects. Finally, we introduce the power transformation metric to improve the recognition performance of the spectrogram feature. It is shown that using the spectrogram feature yields not only a better ROC but also a better recognition performance than using the HRRP time domain feature. MTL shares the underlying state information among different target-aspects and provides better recognition performance than STL. In addition, the power transformation metric enhances both the average recognition rate and the ROC. It is worth pointing out that our MTL model with the spectrogram feature obtains good recognition performance with much less training data than conventional radar HRRP-based statistical recognition methods, which is a desirable property for RATR.