Distance-based features in pattern classification

Abstract

In data mining and pattern classification, feature extraction and representation methods are a very important step since the extracted features have a direct and significant impact on classification accuracy. In the literature, a number of novel feature extraction and representation methods have been proposed. However, many of them only focus on specific domain problems. In this article, we introduce a novel distance-based feature extraction method for various pattern classification problems. Specifically, two features are extracted, based on (1) the distance between the data and its intra-cluster center and (2) the distances between the data and its extra-cluster centers. Experiments based on ten datasets containing different numbers of classes, samples, and dimensions are examined. The experimental results using naïve Bayes, k-NN, and SVM classifiers show that concatenating the original features provided by the datasets with the distance-based features improves classification accuracy, except on image-related datasets. In particular, the distance-based features are suitable for datasets with smaller numbers of classes, smaller numbers of samples, and lower feature dimensionality. Moreover, two further datasets with similar characteristics are used to validate this finding. The result is consistent with the first experiment: adding the distance-based features improves classification performance.

1. Introduction

Data mining has received unprecedented attention in recent years. It can be utilized to analyze huge amounts of data and find valuable information. In particular, data mining can extract useful knowledge from the collected data and provide useful information for decision making [1, 2]. With the rapid increase in the size of organizations' databases and data warehouses, developing efficient and accurate mining techniques has become a challenging problem.

Pattern classification is an important research topic in the fields of data mining and machine learning. In particular, it focuses on constructing a model so that the input data can be assigned to the correct category. Here, the model is also known as a classifier. Classification techniques, such as support vector machine (SVM) [3], can be used in a wide range of applications, e.g., document classification, image recognition, web mining, etc. [4]. Most of the existing approaches perform data classification based on a distance measure in a multivariate feature space.

Because of the importance of classification techniques, the focus of our attention is placed on the approach for improving classification accuracy. For any pattern classification problem, it is very important to choose appropriate or representative features since they have a direct impact on the classification accuracy. Therefore, in this article, we introduce novel distance-based features to improve classification accuracy. Specifically, the distances between the data and cluster centers are considered. This leads to the intra-cluster distance between the data and the cluster center in the same cluster, and the extra-cluster distance between the data and other cluster centers.

The idea behind the distance-based features is to extend and take the advantage of the centroid-based classification approach [5], i.e., all the centroids over a given dataset usually have their discrimination capabilities for distinguishing data between different classes. Therefore, the distance between a specific data and its nearest centroid and other distances between the data and other centroids should be able to provide valuable information for classification.

The rest of the article is organized as follows. Section 2 briefly describes feature selection and several classification techniques, and reviews related work focusing on extracting novel features. Section 3 introduces the proposed distance-based feature extraction method. Section 4 presents the experimental setup and results. Finally, the conclusion is provided in Section 5.

2. Literature review

2.1. Feature selection

Feature selection can be considered as a combinatorial optimization problem. The goal of feature selection is to select the most discriminative features from the original features [6]. In many pattern classification problems, we are often confronted with the curse of dimensionality, i.e., the raw data contain too many features. Therefore, it is a common practice to remove redundant features so that efficiency and accuracy can be improved [7, 8].

To perform appropriate feature selection, the following considerations should be taken into account [9]:

  1. Accuracy: Feature selection can help us exclude irrelevant features from the raw data. These irrelevant features usually have a disrupting effect on the classification accuracy. Therefore, classification accuracy can be improved by filtering out the irrelevant features.

  2. Operation time: In general, the operation time is proportional to the number of selected features. Therefore, we can effectively improve classification efficiency using feature selection.

  3. Sample size: The more samples we have, the more features can be selected.

  4. Cost: Since it takes time and money to collect data, excessive features would definitely incur additional cost. Therefore, feature selection can help us to reduce the cost of collecting data.

In general, there are two approaches for dimensionality reduction, namely, feature selection and feature extraction. In contrast to the feature selection, feature extraction performs transformation or combination on the original features [10]. In other words, feature selection finds the best feature subset from the original feature set. On the other hand, feature extraction projects the original feature to a subspace where classification accuracy can be improved.

In the literature, there are many approaches for dimensionality reduction. Principal component analysis (PCA) is one of the most widely used techniques for this task [11-13].

The origin of PCA can be traced back to 1901 [14], and it is an approach for multivariate analysis. In a real-world application, the features from different sources are more or less correlated. Therefore, one can develop a more efficient solution by taking these correlations into account. The PCA algorithm is based on the correlations between features and finds a lower-dimensional subspace where the variance is maximized. The goal of PCA is to use a few extracted features to represent the distribution of the original data. The PCA algorithm can be summarized in the following steps:

  1. Compute the mean vector μ and the covariance matrix S of the input data.

  2. Compute the eigenvalues and eigenvectors of S. The eigenvectors are sorted according to their corresponding eigenvalues.

  3. The transformation matrix contains the sorted eigenvectors. The number of eigenvectors preserved in the transformation matrix can be adjusted by users.

  4. A lower-dimensional feature vector is obtained by subtracting the mean vector μ from an input datum and then multiplying the result by the transformation matrix.
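
To illustrate, the four steps above can be written as a minimal Python/NumPy sketch (ours, not part of the original article; the data matrix and the number of retained components are arbitrary):

    import numpy as np

    def pca_transform(X, n_components):
        # Step 1: mean vector and covariance matrix of the input data
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False)
        # Step 2: eigenvalues/eigenvectors of S, sorted by decreasing eigenvalue
        eigvals, eigvecs = np.linalg.eigh(S)
        order = np.argsort(eigvals)[::-1]
        eigvecs = eigvecs[:, order]
        # Step 3: keep a user-chosen number of eigenvectors as the transformation matrix
        W = eigvecs[:, :n_components]
        # Step 4: subtract the mean and multiply by the transformation matrix
        return (X - mu) @ W

    # Example: project 100 random 10-dimensional samples onto 2 components
    X = np.random.rand(100, 10)
    Z = pca_transform(X, n_components=2)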

2.2. Pattern clustering

The aim of clustering analysis is to find groups of data samples having similar properties. This is an unsupervised learning method because it does not require the category information associated with each sample [15]. In particular, the clustering algorithms can be divided into five categories [16], namely, hierarchical, partitioning, density-based, grid-based, and model-based methods.

The k-means algorithm is a representative approach belonging to the partitioning category. In addition, it is a simple, efficient, and widely used clustering method. Given k clusters, each sample is randomly assigned to a cluster. By doing so, we can find the initial locations of the cluster centers. We can then reassign each sample to the nearest cluster center. After the reassignment, the locations of the cluster centers are updated. The previous steps are iterated until some termination condition is satisfied.
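
As a reference, the iteration just described can be sketched as follows (our own minimal NumPy version; it starts from k randomly chosen samples as centers, a common variant of the random start, and library implementations such as scikit-learn's KMeans are normally preferable):

    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        # initialization: k randomly chosen samples serve as the first cluster centers
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # reassign every sample to its nearest cluster center
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # update each center as the mean of its assigned samples (keep the old center if empty)
            new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            if np.allclose(new_centers, centers):   # termination condition
                break
            centers = new_centers
        return centers, labels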

2.3. Pattern classification

The goal of pattern classification is to predict the category of the input data using its attributes. In particular, a certain number of training samples are available for each class, and they are used to train the classifier. In addition, each training sample is represented by a number of measurements (i.e., a feature vector) corresponding to a specific class. This is called supervised learning [15, 17].

In this article, we will utilize three popular classification techniques, namely, naïve Bayes, SVMs, and k-nearest neighbor (k-NN), to evaluate the proposed distance-based features.

2.3.1. Naïve Bayes

The naïve Bayes classifier is a probabilistic classifier based on Bayes' theorem [15]. It requires all assumptions to be explicitly built into models, which are then used to derive 'optimal' decision/classification rules. It can be used to represent the dependence between random variables (features) and to give a concise and tractable specification of the joint probability distribution for a domain. It is constructed using the training data to estimate the probability of each class given the feature vectors of a new instance. Given an example represented by the feature vector X, Bayes' theorem provides a method to compute the probability that X belongs to class C_i, denoted as P(C_i | X):

P(C_i \mid X) = \prod_{j=1}^{N} P(x_j \mid C_i)
(1)

i.e., the naïve Bayes classifier learns the conditional probability of each attribute x_j (j = 1, 2, ..., N) of X given the class label C_i. Therefore, the classification problem can be stated as 'given a set of observed features x_j from an object X, classify X into one of the classes'.
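
To make the decision rule concrete, the sketch below (ours, not from the article) assumes Gaussian per-attribute densities, as in the experiments of Section 4, and classifies a sample by evaluating the product in Equation (1) for each class; priors and normalization are omitted, as in Equation (1):

    import numpy as np
    from scipy.stats import norm

    def fit_gaussian_nb(X, y):
        # per-class, per-attribute means and standard deviations (maximum likelihood estimates);
        # the small constant avoids zero variance and is our own choice
        classes = np.unique(y)
        params = {c: (X[y == c].mean(axis=0), X[y == c].std(axis=0) + 1e-9) for c in classes}
        return classes, params

    def predict(x, classes, params):
        # pick the class maximizing prod_j P(x_j | C_i), as in Equation (1)
        scores = [np.prod(norm.pdf(x, loc=params[c][0], scale=params[c][1])) for c in classes]
        return classes[int(np.argmax(scores))]

    # usage on synthetic two-class data with four attributes
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(2.0, 1.0, (50, 4))])
    y = np.array([0] * 50 + [1] * 50)
    classes, params = fit_gaussian_nb(X, y)
    print(predict(X[0], classes, params))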

2.3.2. Support vector machines

An SVM [3] has been widely applied in many pattern classification problems. It is designed to separate a set of training vectors belonging to two different classes, (x_1, y_1), (x_2, y_2), ..., (x_m, y_m), where x_i ∈ R^d denotes a vector in a d-dimensional feature space and y_i ∈ {-1, +1} is its class label. In particular, the input vectors are mapped into a new, higher-dimensional feature space denoted as Φ: R^d → H^f, where d < f. Then, an optimal separating hyperplane in the new feature space is constructed using a kernel function K(x_i, x_j), which computes the inner product between the mapped input vectors x_i and x_j, i.e., K(x_i, x_j) = Φ(x_i) · Φ(x_j).

All vectors lying on one side of the hyperplane are labeled '-1', and all vectors lying on the other side are labeled '+1'. The training instances that lie closest to the hyperplane in the transformed space are called support vectors.
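
For illustration (not part of the original experiments), an RBF-kernel SVM can be trained and its support vectors inspected with scikit-learn; the dataset and the gamma and C values below are arbitrary stand-ins:

    from sklearn.datasets import load_breast_cancer
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # a two-class dataset stands in for the training vectors (x_i, y_i)
    X, y = load_breast_cancer(return_X_y=True)
    X = StandardScaler().fit_transform(X)

    # RBF kernel: K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
    svm = SVC(kernel='rbf', gamma=0.1, C=1.0).fit(X, y)
    print(svm.support_vectors_.shape)   # the training instances lying closest to the hyperplane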

2.3.3. K-nearest neighbor

The k-NN classifier is a conventional non-parametric classifier [15]. To classify an unknown instance, represented by some feature vector as a point in the feature space, the k-NN classifier calculates the distances between this point (i.e., the unknown instance) and the points in the training dataset. Then, it assigns the point to the most common class among its k nearest neighbors (where k is an integer).

In the process of creating a k-NN classifier, k is an important parameter, and different k values will lead to different performances. If k is very large, the large number of neighbors used for classification increases the classification time and can degrade the classification accuracy.
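
The sensitivity to k can be checked directly; the following sketch (ours) scores k = 1, ..., 15 with tenfold cross-validation, mirroring the parameter search described later in Section 4.1.2, on a stand-in dataset:

    from sklearn.datasets import load_wine
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_wine(return_X_y=True)       # a stand-in for the UCI datasets used in Section 4
    for k in range(1, 16):                  # the same k range examined in Section 4.1.2
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=10).mean()
        print(f"k={k:2d}  mean accuracy={acc:.3f}")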

2.4. Related work of feature extraction

In this study, the main focus is placed on extracting novel distance-based features so that classification accuracy can be improved. The following summarizes related studies that propose new feature extraction and representation methods for specific pattern classification problems. In addition, the contributions of these studies are briefly discussed.

Tsai and Lin [18] propose a triangle area-based nearest neighbor approach and apply it to the problem of intrusion detection. Each data sample is represented by a number of triangle areas as its feature vector, in which each triangle is formed by the data sample, its cluster center, and one of the other cluster centers. Their approach achieves a high detection rate and a low false positive rate on the KDD-Cup 99 dataset.

Lin [19] proposes an approach called centroid-based and nearest neighbor (CANN). This approach uses cluster centers and their nearest neighbors to yield a one-dimensional feature and can effectively improve the performance of an intrusion detection system. The experimental results over the KDD CUP 99 dataset indicate that CANN can improve the detection rate and reduce computational cost.

Zeng et al. [20] propose a novel feature extraction method based on Delaunay triangle. In particular, a topological structure associated with the handwritten shape can be represented by the Delaunay triangle. Then, an HMM-based recognition system is used to demonstrate that their representation can achieve good performance in the handwritten recognition problem.

Xue et al. [21] propose a Bayesian shape model for facial feature extraction. Their model can tolerate local and global deformation on a human face. The experimental results demonstrate that their approach provides better accuracy in locating facial features than the active shape model.

Choi and Lee [22] propose a feature extraction method based on the Bhattacharyya distance. They consider the classification error as a criterion for extracting features and an iterative gradient descent algorithm is utilized to minimize the estimated classification error. Their feature extraction method performs favorably with conventional methods over remotely sensed data.

To sum up, the limitation of much of the related work on extracting novel features is that it focuses only on specific domain problems. In addition, these studies use their proposed features to compare directly against the original features in terms of classification accuracy and/or errors; i.e., they do not consider 'fusing' the original and novel features into another new feature representation for further comparison. Therefore, the novel distance-based features proposed in this article are examined over a number of different pattern classification problems, and the distance-based features and the original features are concatenated into another new feature representation for classification.

3. Distance-based features

In this section, we will describe the proposed method in detail. The aim of our approach is to augment new features to the raw data so that the classification accuracy can be improved.

3.1. The extraction process

The proposed distance-based feature extraction method can be divided into three main steps. In the first step, given a dataset, the cluster center (or centroid) of every class is identified. In the second step, the distances between each data sample and the centroids are calculated. The final step is to extract the two distance-based features from the distances calculated in the second step. The first distance-based feature is the distance between the data sample and its own cluster center. The second one is the sum of the distances between the data sample and the other cluster centers.

As a result, each of the data samples in the dataset can be represented by the two distance-based features. There are two strategies to examine the discrimination power of these two distance-based features. The first one is to use the two distance-based features alone for classification. The second one is to combine the original features with the new distance-based features to form a longer feature vector for classification.

3.2. Cluster center identification

To identify the cluster centers from a given dataset, the k-means clustering algorithm is used to cluster the input data in this article. It is noted that the number of clusters is determined by the number of classes or categories in the dataset. For example, if the dataset consists of three categories, then the value of k in the k-means algorithm is set to 3.

3.3. Distances from intra-cluster center

After the cluster center for each class is identified, the distance between a data sample and its cluster center (or intra-cluster center) can be calculated. In this article, the Euclidean distance is utilized. Given two data points A = [a1, a2,...,a n ] and B = [b1,b2,...,b n ], the Euclidean distance between A and B is given by

dis(A, B) = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + \cdots + (a_n - b_n)^2}
(2)

Figure 1 shows an example for the distance between a data sample and its cluster center, where cluster centers are denoted by {C j |j = 1, 2, 3} and data samples are denoted by {D i |i = 1,2,...,8}. In this example, data point D7 is assigned to the third cluster (C3) by the k-means algorithm. As a result, the distance from D7 to its intra-cluster center (C3) is determined by the Euclidean distance from D7 to C3.

Figure 1. The distance between the data sample and its intra-cluster center.

In this article, we will utilize the distance between a data sample and its intra-cluster center as a new feature, called Feature 1. Given a datum D i belonging to C j , its Feature 1 is given by

\text{Feature 1} = dis(D_i, C_j)
(3)

where dis(D i , C j ) denotes the Euclidean distance from D i to C j .

3.4. Distances from extra-cluster center

On the other hand, we also calculate the sum of the distances between the data sample and its extra-cluster centers and use it as the second feature. Let us look at the graphical example shown in Figure 2, where cluster centers are denoted by {C_j | j = 1, 2, 3} and data samples are denoted by {D_i | i = 1, 2, ..., 8}. Since the datum D6 is assigned to the second cluster (C2) by the k-means algorithm, the distances between D6 and its extra-cluster centers are dis(D6, C1) and dis(D6, C3).

Figure 2. The distance between the data sample and its extra-cluster center.

Here, we define another new feature, called Feature 2, as the sum of the distances between a data sample and its extra-cluster centers. Given a datum D i belonging to C j , its Feature 2 is given by

\text{Feature 2} = \sum_{j=1}^{k} dis(D_i, C_j) - \text{Feature 1}
(4)

where k is the number of clusters identified and dis(D_i, C_j) denotes the Euclidean distance from D_i to C_j.
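
Putting Sections 3.2, 3.3, and 3.4 together, the feature extraction can be sketched as follows (ours, not the authors' original implementation; scikit-learn's k-means is used for the clustering step, and the dataset and variable names are illustrative):

    import numpy as np
    from sklearn.cluster import KMeans

    def add_distance_features(X, n_classes, random_state=0):
        # cluster the data with k set to the number of classes (Section 3.2)
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=random_state).fit(X)
        centers = km.cluster_centers_
        # Euclidean distances from every sample to every centroid, Equation (2)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        feat1 = d[np.arange(len(X)), km.labels_]     # Feature 1, Equation (3)
        feat2 = d.sum(axis=1) - feat1                # Feature 2, Equation (4)
        # the '+2D' representation: original features concatenated with the two new ones
        return np.hstack([X, feat1[:, None], feat2[:, None]])

    # usage: an Iris-sized random stand-in (150 samples, 4 features, 3 classes)
    X = np.random.rand(150, 4)
    X_aug = add_distance_features(X, n_classes=3)
    print(X_aug.shape)                               # (150, 6)

The augmented matrix (or the two new columns alone, for the '2D' setting of Section 4) can then be fed to any of the classifiers described in Section 2.3.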

3.5. Theoretical analysis

To justify the use of the distance-based features, it is necessary to analyze their impacts on classification accuracy. For the sake of simplicity, let us consider the results when the proposed features are applied to two-category classification problems. The generalization of these results to multi-category cases is straightforward, though much more involved. The classification accuracy can readily be evaluated if the class-conditional densities {p(x | C_k)}, k = 1, 2, are multivariate normal with identical covariance matrices, i.e.,

p(x \mid C_k) \sim N(\mu^{(k)}, \Sigma),
(5)

where x is a d-dimensional feature vector, μ^{(k)} is the mean vector associated with class k, and Σ is the covariance matrix. If the prior probabilities are equal, it follows that the Bayes error rate is given by

P_e = \frac{1}{\sqrt{2\pi}} \int_{r/2}^{\infty} e^{-u^2/2} \, du,
(6)

where r is the Mahalanobis distance:

r = \sqrt{(\mu^{(1)} - \mu^{(2)})^T \Sigma^{-1} (\mu^{(1)} - \mu^{(2)})}
(7)

In case d features are conditionally independent, the Mahalanobis distance between two means can be simplified to

r = \sqrt{\sum_{i=1}^{d} \frac{(\mu_i^{(1)} - \mu_i^{(2)})^2}{\sigma_i^2}},
(8)

where μ_i^{(k)} denotes the mean of the i-th feature for class k, and σ_i^2 denotes the variance of the i-th feature. This shows that adding a new feature whose mean values for the two categories are different can help to reduce the error rate.

Now we can calculate the expected values of the proposed features and see what the implications of this result are for the classification performance. We know that Feature 1 is defined as the distance between each data point and its class mean; for analytical convenience, we work with its squared form, i.e.,

\text{Feature 1} = (x - \mu^{(k)})^T (x - \mu^{(k)}) = \sum_{i=1}^{d} (x_i - \mu_i^{(k)})^2.
(9)

Thus, the mean of Feature 1 is given by

E[\text{Feature 1}] = \sum_{i=1}^{d} E\left[(x_i - \mu_i^{(k)})^2\right] = \mathrm{Tr}(\Sigma_k).
(10)

This reveals that the mean value of Feature 1 is determined by the trace of the covariance matrix associated with each category. In practical applications, the covariance matrices are generally different for each category. Naturally, one can expect to improve classification accuracy by augmenting the raw data with Feature 1. The more differently the class-conditional densities are distributed, the more Feature 1 contributes to reducing the error rate.

Similarly, Feature 2 is defined as the sum of the distances from a data point to the centroids of other categories. Given a data point x belonging to class k, we obtain

\text{Feature 2} = \sum_{k' \ne k} (x - \mu^{(k')})^T (x - \mu^{(k')}) = \sum_{k' \ne k} (x - \mu^{(k)} + \mu^{(k)} - \mu^{(k')})^T (x - \mu^{(k)} + \mu^{(k)} - \mu^{(k')}) = \sum_{k' \ne k} \left[ (x - \mu^{(k)})^T (x - \mu^{(k)}) + 2 (x - \mu^{(k)})^T (\mu^{(k)} - \mu^{(k')}) + (\mu^{(k)} - \mu^{(k')})^T (\mu^{(k)} - \mu^{(k')}) \right]
(11)

This allows us to write the mean of Feature 2 as

E[\text{Feature 2}] = (K - 1)\,\mathrm{Tr}(\Sigma_k) + \sum_{k' \ne k} \left\| \mu^{(k)} - \mu^{(k')} \right\|^2,
(12)

where K denotes the number of categories and ||·|| denotes the L2 norm. As mentioned before, the first term in Equation 12 usually differs for each category. On the other hand, the distances between class means are unlikely to be identical in real-world applications and thus the second term in Equation 12 tends to be different for different classes. So, we may conclude that Feature 2 also contributes to reducing the probability of classification error.

4. Experiments

4.1. Experimental setup

4.1.1. The datasets

To evaluate the effectiveness of the proposed distance-based features, ten different datasets from UCI Machine Learning Repository http://archive.ics.uci.edu/ml/index.html are considered for the following experiments. They are Abalone, Balance Scale, Corel, Tic-Tac-Toe Endgame, German, Hayes-Roth, Ionosphere, Iris, Optical Recognition of Handwritten Digits, and Teaching Assistant Evaluation. More details regarding the downloaded datasets, including the number of classes, the number of data samples, and the dimensionality of feature vectors, are summarized in Table 1.

Table 1 Information of the ten datasets

4.1.2. The classifiers

For pattern classification, three popular classification algorithms are applied, namely, SVM, k-NN, and naïve Bayes. These classifiers are trained and tested by tenfold cross-validation. One research objective is to investigate whether different classification approaches yield consistent results. It is worth noting that the parameter values associated with each classifier have a direct impact on the classification accuracy. To perform a fair comparison, one should carefully choose appropriate parameter values to construct a classifier. The selection of the optimal parameter values for these classifiers is described below.

For SVM, we utilized the LibSVM package [23]. It has been documented in the literature that the radial basis function (RBF) kernel achieves good classification performance in a wide range of applications. For this reason, the RBF kernel is used to construct the SVM classifier. For the RBF kernel, five gamma ('γ') values, i.e., 0, 0.1, 0.3, 0.5, and 1, are examined so that the best SVM classifier, i.e., the one providing the highest classification accuracy, can be identified.

For the k-NN classifier, the choice of k is a critical step. In this article, the k values from 1 to 15 are examined. Similar to SVM, the value of k with the highest classification accuracy is used to compare with SVM and naïve Bayes.

Finally, the parameter values of naïve Bayes, i.e., mean and covariance of Gaussian distribution, are estimated by maximum likelihood estimators.
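
For reference, the setup just described can be sketched with scikit-learn instead of the LibSVM command-line tools (our own approximation; the zero gamma value from the grid is omitted because γ = 0 makes the RBF kernel constant, and a built-in dataset stands in for the UCI data):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # SVM: RBF kernel, best gamma chosen from the grid by tenfold cross-validation
    svm = GridSearchCV(SVC(kernel='rbf'), {'gamma': [0.1, 0.3, 0.5, 1.0]}, cv=10)
    # k-NN: best k in 1..15 chosen by tenfold cross-validation
    knn = GridSearchCV(KNeighborsClassifier(), {'n_neighbors': list(range(1, 16))}, cv=10)
    # naive Bayes: Gaussian parameters fitted by maximum likelihood
    nb = GaussianNB()

    for name, clf in [('SVM', svm), ('k-NN', knn), ('naive Bayes', nb)]:
        acc = cross_val_score(clf, X, y, cv=10).mean()
        print(f"{name}: {acc:.3f}")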

4.2. Pre-test analyses

4.2.1. Principal component analysis

Before examining the classification performance, PCA [24] is used to analyze the level of variance (i.e., discrimination power) of the proposed distance-based features. In particular, the communality, which is the output of PCA, is used to analyze and compare the discrimination power of the distance-based features (also called variables here). The communality measures the percent of variance in a given variable explained by all the factors jointly and may be interpreted as the reliability of the indicator. In this experiment, we use the Euclidean distance to calculate the distance-based features. Table 2 shows the analysis result.

Table 2 The average of communalities of the original and distance-based features

Regarding Table 2, adding the distance-based features improves the discrimination power over most of the chosen datasets, i.e., the average communality obtained using the distance-based features is higher than that obtained using the original features alone. In addition, using the distance-based features yields an average communality above 0.7.

On the other hand, since the PCA result of Feature 1 is lower than that of Feature 2, the average standard deviation when using the distance-based features is slightly higher than when using the original features alone. However, since using the two distance-based features provides a higher level of variance over most of the datasets, they are considered together in this article as the main research focus.

4.2.2. Class separability

Furthermore, class separability [25] is considered before examining the classification performance. The class separability is given by

\mathrm{Tr}\{ S_W^{-1} S_B \}
(13)

where

S_W = \sum_{j=1}^{k} \sum_{D_i \in C_j} (D_i - \bar{D}_j)(D_i - \bar{D}_j)^T
(14)

and

S_B = \sum_{j=1}^{k} N_j (\bar{D}_j - \bar{C})(\bar{D}_j - \bar{C})^T
(15)

where \bar{D}_j denotes the mean of the samples in class C_j, N_j is the number of samples in class C_j, and \bar{C} is the mean of the total dataset. The class separability is large when the between-class scatter is large and the within-class scatter is small. Therefore, it can be regarded as a reasonable indicator of classification performance.
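
As a concrete sketch (ours), the criterion of Equations (13)-(15) can be computed from labeled data as follows:

    import numpy as np

    def class_separability(X, y):
        # Tr(S_W^{-1} S_B); np.linalg.solve raises an error when S_W is singular,
        # which is the small sample size problem noted later in this section
        d = X.shape[1]
        overall_mean = X.mean(axis=0)
        S_W = np.zeros((d, d))
        S_B = np.zeros((d, d))
        for c in np.unique(y):
            Xc = X[y == c]
            class_mean = Xc.mean(axis=0)
            S_W += (Xc - class_mean).T @ (Xc - class_mean)    # within-class scatter, Equation (14)
            diff = (class_mean - overall_mean)[:, None]
            S_B += len(Xc) * (diff @ diff.T)                  # between-class scatter, Equation (15)
        return np.trace(np.linalg.solve(S_W, S_B))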

Besides examining the impact of the proposed distance-based features using the Euclidean distance on the classification performance, the chi-squared and Mahalanobis distances are considered. This is because they have quite natural and useful interpretation in discriminant analysis. Consequently, we will calculate the proposed distance-based features by utilizing the three distance metrics for the analysis.

For the chi-squared distance, given n-dimensional vectors a and b, the chi-squared distance between them can be defined as

dis_{\chi_1^2}(a, b) = \frac{(a_1 - b_1)^2}{a_1} + \cdots + \frac{(a_n - b_n)^2}{a_n}
(16)

or

dis_{\chi_2^2}(a, b) = \frac{(a_1 - b_1)^2}{a_1 + b_1} + \cdots + \frac{(a_n - b_n)^2}{a_n + b_n}
(17)

On the other hand, the Mahalanobis distance from D i to C j is given by

dis_{\mathrm{Mah}}(D_i, C_j) = \sqrt{(D_i - C_j)^T \Sigma_j^{-1} (D_i - C_j)}
(18)

where Σ_j is the covariance matrix of the j-th cluster. It is particularly useful when each cluster has an asymmetric distribution.
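
The three metrics can be sketched side by side as follows (ours; the chi-squared forms assume the feature values keep the denominators nonzero, which is our simplifying assumption):

    import numpy as np

    def dis_euclidean(a, b):
        return np.sqrt(np.sum((a - b) ** 2))                # Equation (2)

    def dis_chi2_1(a, b):
        return np.sum((a - b) ** 2 / a)                     # Equation (16)

    def dis_chi2_2(a, b):
        return np.sum((a - b) ** 2 / (a + b))               # Equation (17)

    def dis_mahalanobis(d_i, c_j, cov_j):
        diff = d_i - c_j
        return np.sqrt(diff @ np.linalg.inv(cov_j) @ diff)  # Equation (18)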

In Table 3, the effect of using different distance-based features is rated in terms of class separability. It is noted that for the high-dimensional datasets, we encounter the small sample size problem, which results in a singular within-class scatter matrix S_W [26]. For this reason, we cannot calculate the class separability for the high-dimensional datasets. 'Original' denotes the original feature vectors provided by the UCI Machine Learning Repository, and '+2D' means that Features 1 and 2 are added to the original features.

Table 3 Results of class separability

As shown in Table 3, the class separability is consistently improved over that in the original space by adding the Euclidean distance-based features. For the chi-squared distance metric, the results of using di s x 1 2 and di s x 2 2 are denoted by 'chi-square 1' and 'chi-square 2', respectively. Evidently, the classification performance can always be further enhanced by replacing the Euclidean distance with one of the chi-squared distances. Moreover, reliable improvement can be achieved by augmenting the Mahalanobis distance-based feature to the original data.

4.3. Classification results

4.3.1. Classification accuracy

Table 4 shows the classification performance of naïve Bayes, k-NN, and SVM over the ten datasets, based on the original features, the combined original and distance-based features, and the distance-based features alone, respectively. The distance-based features are calculated using the Euclidean distance. It is noted that in Table 4, '2D' denotes that the two distance-based features are used alone for classifier training and testing. For the column of dimensions, the numbers in parentheses indicate the dimensionality of the feature vectors utilized in a particular experiment.

Table 4 Classification accuracy of naïve Bayes, k-NN, and SVM over the ten datasets

Regarding Table 4, we observe that using the distance-based features alone yields the worst results. In other words, classification accuracy cannot be improved by utilizing the two new features and discarding the original features. However, when the original features are concatenated with the new distance-based features, on average the rate of classification accuracy is improved. It is worth noting that the improvement is observed across different classifiers. Overall, these experimental results agree well with our expectation, i.e., classification accuracy can be effectively improved by including the new distance-based features into the original features.

In addition, the results indicate that the distance-based features do not perform well on high-dimensional image-related datasets, such as the Corel, Iris, and Optical Recognition of Handwritten Digits datasets. This is primarily due to the curse of dimensionality [15]. In particular, the demand for training samples grows exponentially with the dimensionality of the feature space. Therefore, adding new features beyond a certain limit would have the consequence of insufficient training. As a result, we obtain worse rather than better performance on the high-dimensional datasets.

4.3.2. Comparisons and discussions

Table 5 compares different classification performances using the original features and the combined original and distance-based features. It is noted that the classification accuracy by the original features is the baseline for the comparison. This result clearly shows that considering the distance-based features can provide some level of performance improvements over the chosen datasets except the high-dimensional ones.

Table 5 Comparisons between the 'original' feature and the '+2D' features

We also calculate the proposed features using different distance metrics. By choosing a fixed classifier (1-NN), we can evaluate the classification performance of different distance metrics over different datasets. The results are summarized in Table 6. Once again, we observe that the classification accuracy is generally improved by concatenating the distance-based features to the original feature. In some cases, e.g., Abalone, Balance Scale, German, and Hayes-Roth, the proposed features have led to significant improvements in classification accuracy.

Table 6 Comparison of classification accuracies obtained using different distance metrics

Since we observe consistent improvement across three different classifiers over five datasets, which are the Balance Scale, German, Ionosphere, Teaching Assistant Evaluation, and Tic-Tac-Toe Endgame datasets, the relationship between classification accuracy and these datasets' characteristics is examined. Table 7 shows the five datasets, which yield classification improvements using the distance-based features. Here, another new feature is obtained by adding the two distance-based features together. Thus, we use '+3D' to denote that the original feature has been augmented with the two distance-based features and their sum. It is noted that the distance-based features are calculated using the Euclidean distance.

Table 7 Classification accuracy versus the dataset's characteristics

Among these five datasets, the number of classes is smaller than or equal to 3, the dimensionality of the original features is smaller than or equal to 34, and the number of samples is smaller than or equal to 1,000. This indicates that the proposed distance-based features are suitable for datasets whose numbers of classes, numbers of samples, and feature dimensionalities are relatively small.

4.4. Further validations

Based on our observations in the previous section, two further datasets with characteristics similar to those of these five datasets are used to verify our conjecture. These two datasets are the Australian and Japanese datasets, which are also available from the UCI Machine Learning Repository. Table 8 shows the information of these two datasets.

Table 8 Information of the Australian and Japanese datasets

Table 9 shows the rate of classification accuracy obtained by naïve Bayes, k-NN, and SVM using the 'original' and '+2D' features, respectively. Similar to the finding in the previous sections, classification accuracy is improved by concatenating the original features to the distance-based features.

Table 9 Classification accuracy of naïve Bayes, k-NN, and SVM over the Australian and Japanese datasets

5. Conclusion

Pattern classification is one of the most important research topics in the fields of data mining and machine learning, and improving classification accuracy is a major research objective. Since feature extraction and representation have a direct and significant impact on classification performance, we introduce novel distance-based features to improve classification accuracy over various domain datasets. In particular, the novel features are based on the distances between the data and its intra- and extra-cluster centers.

First of all, we show the discrimination power of the distance-based features through the PCA and class separability analyses. Then, the experiments using naïve Bayes, k-NN, and SVM classifiers over ten datasets from various domains show that concatenating the original features with the distance-based features provides some level of classification improvement over the chosen datasets, except for the high-dimensional image-related ones. In addition, the datasets that yield higher rates of classification accuracy using the distance-based features have smaller numbers of data samples, smaller numbers of classes, and lower dimensionalities. Two validation datasets with similar characteristics are further used, and the result is consistent with this finding.

To sum up, the experimental results (see Table 7) have shown the applicability of our method to several real-world problems, especially when the datasets are relatively small. In other words, our method is very useful for problems whose datasets contain about 4-34 features and 150-1,000 data samples, e.g., bankruptcy prediction and credit scoring. However, many other problems, such as text classification, involve very large numbers of features and data samples. For such problems, our method can be applied after performing feature selection and instance selection to reduce the dimensionality and the number of data samples, respectively; this issue will be considered in our future study. For example, given a large-scale dataset, some feature selection method, such as a genetic algorithm, can be employed to reduce its dimensionality. Once more representative features are selected, the next stage is to extract the proposed distance-based features from these selected features. Then, the classification performance can be examined using the original dataset, the dataset with feature selection, and the dataset with the combination of feature selection and our method.

References

  1. Fayyad UM, Piatetsky-Shapiro G, Smyth P: From data mining to knowledge discovery in databases. AI Mag 1996, 17(3):37-54.

  2. Frawley WJ, Piatetsky-Shapiro GS, Matheus CJ: Knowledge discovery in databases: an overview. In Knowledge Discovery in Databases. AAAI Press, Menlo Park, CA; 1991:1-27.

  3. Vapnik VN: The Nature of Statistical Learning Theory. Springer, New York; 1995.

  4. Keerthi S, Chapelle O, DeCoste D: Building support vector machines with reduced classifier complexity. J Mach Learn Res 2006, 7:1493-1515.

  5. Cardoso-Cachopo A, Oliveira A: Semi-supervised single-label text categorization using centroid-based classifiers. In Proceedings of the ACM Symposium on Applied Computing 2007, 844-851.

  6. Liu H, Motoda H: Feature Selection for Knowledge Discovery and Data Mining. Kluwer Academic Publishers, Boston; 1998.

  7. Blum A, Langley P: Selection of relevant features and examples in machine learning. Artif Intell 1997, 97(1-2):245-271. doi:10.1016/S0004-3702(97)00063-5

  8. Koller D, Sahami M: Toward optimal feature selection. In Proceedings of the Thirteenth International Conference on Machine Learning 1996, 284-292.

  9. Yang J, Honavar V: Feature subset selection using a genetic algorithm. IEEE Intell Syst 1998, 13(2):44-49. doi:10.1109/5254.671091

  10. Jain AK, Duin RPW, Mao J: Statistical pattern recognition: a review. IEEE Trans Pattern Anal Mach Intell 2000, 22(1):4-37. doi:10.1109/34.824819

  11. Canbas S, Cabuk A, Kilic SB: Prediction of commercial bank failure via multivariate statistical analysis of financial structures: the Turkish case. Eur J Oper Res 2005, 166:528-546. doi:10.1016/j.ejor.2004.03.023

  12. Min SH, Lee J, Han I: Hybrid genetic algorithms and support vector machines for bankruptcy prediction. Exp Syst Appl 2006, 31:652-660. doi:10.1016/j.eswa.2005.09.070

  13. Tsai C-F: Feature selection in bankruptcy prediction. Knowledge Based Syst 2009, 22(2):120-127. doi:10.1016/j.knosys.2008.08.002

  14. Pearson K: On lines and planes of closest fit to systems of points in space. Philos Mag 1901, 2:559-572.

  15. Duda RO, Hart PE, Stork DG: Pattern Classification. 2nd edition. Wiley, New York; 2001.

  16. Han J, Kamber M: Data Mining: Concepts and Techniques. 2nd edition. Morgan Kaufmann Publishers, USA; 2001.

  17. Baralis E, Chiusano S: Essential classification rule sets. ACM Trans Database Syst 2004, 29(4):635-674. doi:10.1145/1042046.1042048

  18. Tsai CF, Lin CY: A triangle area based nearest neighbors approach to intrusion detection. Pattern Recogn 2010, 43:222-229. doi:10.1016/j.patcog.2009.05.017

  19. Lin J-S: CANN: combining cluster centers and nearest neighbors for intrusion detection systems. Master's Thesis, National Chung Cheng University, Taiwan; 2009.

  20. Zeng W, Meng XX, Yang CL, Huang L: Feature extraction for online handwritten characters using Delaunay triangulation. Comput Graph 2006, 30:779-786. doi:10.1016/j.cag.2006.07.007

  21. Xue Z, Li SZ, Teoh EK: Bayesian shape model for facial feature extraction and recognition. Pattern Recogn 2003, 36:2819-2833. doi:10.1016/S0031-3203(03)00181-X

  22. Choi E, Lee C: Feature extraction based on the Bhattacharyya distance. Pattern Recogn 2003, 36:1703-1709. doi:10.1016/S0031-3203(03)00035-9

  23. Chang CC, Lin CJ: LIBSVM: a library for support vector machines. 2001. [http://www.csie.ntu.edu.tw/~cjlin/libsvm]

  24. Hotelling H: Analysis of a complex of statistical variables into principal components. J Educ Psychol 1933, 24:498-520.

  25. Fukunaga K: Introduction to Statistical Pattern Recognition. Academic Press; 1990.

  26. Huang R, Liu Q, Lu H, Ma S: Solving the small sample size problem of LDA. In Proceedings of the International Conference on Pattern Recognition 2002, 3:30029.

Acknowledgements

The authors have been partially supported by the National Science Council, Taiwan (Grant No. 98-2221-E-194-039-MY3 and 99-2410-H-008-033-MY2).

Author information

Corresponding author

Correspondence to Wei-Yang Lin.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Tsai, CF., Lin, WY., Hong, ZF. et al. Distance-based features in pattern classification. EURASIP J. Adv. Signal Process. 2011, 62 (2011). https://doi.org/10.1186/1687-6180-2011-62
