Performance analysis of wavelet transforms and morphological operator-based classification of epilepsy risk levels

The objective of this paper is to compare the performance of singular value decomposition (SVD), expectation maximization (EM), and modified expectation maximization (MEM) as postclassifiers for the classification of epilepsy risk levels from features extracted through wavelet transforms and morphological filters from electroencephalogram (EEG) signals. The code converter acts as a level-one classifier. Seven features, namely energy, variance, positive and negative peaks, spike and sharp waves, events, average duration, and covariance, are extracted from the EEG signals. Four of these parameters, namely positive and negative peaks, spike and sharp waves, events, and average duration, are extracted using Haar, dB2, dB4, and Sym8 wavelet transforms with hard and soft thresholding methods. The same four features are also extracted through morphological filters. The performance of the code converter and the classifiers is then compared in terms of performance index (PI) and quality value (QV). The PI and QV of the code converter are at low values of 33.26% and 12.74, respectively. The highest PI of 98.03% and QV of 23.82 are attained with the dB2 wavelet under hard thresholding for the SVD classifier. All the postclassifiers settle at a PI of more than 90% at a QV of about 20.


Introduction
The electroencephalogram (EEG) is a measure of the cumulative firing of neurons in various parts of the brain [1]. It contains information regarding changes in the electrical potential of the brain obtained from a given set of recording electrodes. These data include the characteristic waveforms with accompanying variations in amplitude, frequency, phase, etc., as well as brief occurrences of electrical patterns such as spindles, sharps, and spike waveforms [2]. EEG patterns have been shown to be modified by a wide range of variables, including biochemical, metabolic, circulatory, hormonal, neuroelectric, and behavioral factors [3]. In the past, the encephalographer, by visual inspection, was able to qualitatively distinguish normal EEG activity from either localized or generalized abnormalities contained within relatively long EEG records [4]. The most important activity that can be detected from the EEG is epilepsy [5]. Epilepsy is characterized by uncontrolled excessive activity or potential discharge by either a part or all of the central nervous system [5]. The different types of epileptic seizures are characterized by different EEG waveform patterns [6]. With real-time monitoring to detect epileptic seizures gaining widespread recognition, the advent of computers has made it possible to effectively apply a host of methods to quantify the changes occurring in the EEG signals [4]. The EEG is an important clinical tool for diagnosing, monitoring, and managing neurological disorders related to epilepsy [7]. This disorder is characterized by sudden, recurrent, and transient disturbances of mental function and/or movements of the body resulting from the excessive discharge of a group of brain cells [8]. The presence of epileptiform activity in the EEG confirms the diagnosis of epilepsy, which may otherwise be confused with other disorders producing similar seizure-like activity [9].
Between seizures, the EEG of a patient with epilepsy may be characterized by occasional epileptiform transients, i.e., spikes and sharp waves [10]. Seizures are characterized by short episodic neuronal synchronous discharges with considerably enlarged amplitude. This abnormal synchrony may happen locally in the brain, so that partial seizures are visible only in a few channels of the EEG signal, or it may involve the entire brain, so that generalized seizures are seen in every channel of the EEG signal [11,12].

Related works
In the last three decades, the analysis and classification of epilepsy from the EEG signal has become a fascinating research area. A huge volume of research has already been performed, covering spike detection, classification of epileptic seizures, ictal and interictal analysis, nonlinear and linear analysis, and soft computing methods. Gotman [9] discussed the improvement of epileptic seizure detection and evaluation. Pang et al. [10] summarized the history and evaluation of various spike detection algorithms. The authors in [13] discussed different neural networks as function approximators and universal approximators for epilepsy diagnosis. Rezasarang [14] encapsulated the performance of spike detection algorithms in terms of sensitivity, specificity, and average detection, and ranked them in terms of good detection ratio (GDR). McSharry et al. [8] discussed and enumerated nonlinear methods and their relevance to predicting epilepsy by considering EEG samples as time series. Majumdar [15] reviewed various soft computing approaches for EEG signals with an emphasis on pattern recognition techniques. That paper mainly focuses on dimensionality reduction, SNR problems, and linear and soft computing techniques for EEG signal processing. Majumdar concludes that neural networks and Bayesian approaches are two popular choices, even though linear statistical discriminants are easier to implement; support vector machines (SVM) are also discussed at length for their classification accuracy. Hence, the EEG signal carries a great deal of information regarding the working of the brain. However, classification and estimation of the signals remain inadequate. As there is no explicit category suggested by the experts, visual examination of EEG signals in the time domain may be deficient. Routine clinical diagnosis necessitates the analysis of EEG signals [13].
Hence, automation and computer methods have been utilized for this purpose. A current multicenter clinical analysis indicates confirmation of premonitory symptoms in 6.2% of 500 patients with epilepsy [16]. Another interview-based study found that 50% of 562 patients felt 'auras' before seizures. These clinical data provide a motivation to search for premonitory alterations in EEG recordings from the brain and to employ a device that can act without human intervention to forewarn the patient [17]. On the other hand, despite decades of research, existing techniques do not yield satisfactory performance. This paper addresses the application and comparison of singular value decomposition (SVD), expectation maximization (EM), and modified expectation maximization (MEM) classifiers for optimizing code converter outputs in the classification of epilepsy risk levels.
Weber et al. [18] have proposed a three-stage design of an EEG seizure detection system. The first stage of the seizure detector compresses the raw data stream and transforms the data into variables which represent the state of the subject's EEG. These state measures are referred to as context parameters. The second stage of the system is a neural network that transforms the state measures into a smaller number of parameters intended to represent measures of recognized phenomena, such as a small seizure in the EEG [9,10]. The third stage consists of a few simple rules that confirm the existence of the phenomena under consideration. Similarly, this paper also presents a three-stage design for epilepsy risk level classification. The first stage extracts the seven required distinct features from the raw EEG data stream of the patient in the time domain. The next stage transforms these features into a code word through a code converter with seven alphabets, which represents the patient's state in five distinct risk levels for a 2-s epoch of EEG signal per channel. The last stage is an SVD, EM, or MEM postclassifier which optimizes the epilepsy risk level of the patient. The organization of the paper is as follows. Section 1 introduces the paper; the materials and methods are discussed in Section 2. Section 3 describes the SVD, EM, and MEM as postclassifiers for epilepsy risk level classification. Results are discussed in Section 4, and the paper is concluded in Section 5.

Data acquisition of EEG signals
For a comparative study and to analyze the performance of the pre- and postclassifiers, we have obtained the raw EEG data of 20 epileptic patients in European data format (EDF) who underwent treatment in the Neurology Department of Sri Ramakrishna Hospital, Coimbatore. Great attention has been given to the preprocessing stage of the EEG signals, because it is important to use the best technique to extract the useful information embedded in these nonstationary biomedical signals. The obtained EEG records were continuous for about 30 s, and each was divided into epochs of 2-s duration. A 2-s epoch is long enough to detect any significant changes in activity and the presence of artifacts, and short enough to avoid any redundancy in the signal [19]. For each patient, there are 16 channels over three epochs. With a signal bandwidth of 50 Hz, each epoch was sampled at a frequency of 200 Hz. Each sample corresponds to the instantaneous amplitude value of the signal, totaling 400 values per epoch. Figure 1 shows the flow diagram of the epilepsy risk level classification system. Four types of artifacts were present in our data: eye blink, electromyography (EMG), chewing, and motion artifacts [20]. Approximately 1% of the data were artifacts. We did not attempt to select a certain number of artifacts or artifacts of a specific nature. The objective of including artifacts was to have spike versus nonspike categories of waveforms; the latter could be normal background EEG and/or artifacts [21]. In order to train and test the feature extractor and classifiers, we need to select a suitable segment of EEG data. In our experiment, the training and testing segments were selected through a short sampling window, and all EEG signals were visually examined by a qualified EEG technologist. A neurologist's decision regarding EEG features (or normal EEG segments) was used as the gold standard.
We choose a sample window of 400 points corresponding to 2 s of the EEG data. This width can cover almost all types of transient epileptic patterns in the EEG signal, even though seizure often lasts longer [22].
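The epoch segmentation described above can be sketched in Python; the synthetic signal and function name are illustrative, while the 200-Hz sampling rate, 2-s epochs, and 400-sample window come from the text.

```python
import numpy as np

FS = 200                           # sampling rate in Hz (from the paper)
EPOCH_SEC = 2                      # epoch length in seconds
SAMPLES_PER_EPOCH = FS * EPOCH_SEC # 400 samples per 2-s epoch

def split_into_epochs(channel: np.ndarray) -> np.ndarray:
    """Split one EEG channel into non-overlapping 2-s epochs of 400 samples."""
    n_epochs = len(channel) // SAMPLES_PER_EPOCH
    return channel[: n_epochs * SAMPLES_PER_EPOCH].reshape(n_epochs, SAMPLES_PER_EPOCH)

# Example: a synthetic 30-s, single-channel recording
signal = np.random.randn(30 * FS)
epochs = split_into_epochs(signal)
print(epochs.shape)  # (15, 400)
```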
In order to classify the risk level of the patients, the following parameters were chosen:

1. For every epoch, the energy is calculated as [4]

E = \sum_{i=1}^{n} x_i^2

where x_i is the signal sample value and n is the number of such samples.

2. One of the simplest linear statistics that may be used for investigating the dynamics underlying the EEG is the variance of the signal calculated in consecutive nonoverlapping windows. The variance (σ²) is given by

\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2

where μ is the average amplitude of the epoch.

3. From the average duration D, the covariance of duration is determined by using the equation

CD = \frac{\sum_{i=1}^{p} (D - t_i)^2}{p D^2}

The following are the four parameters which are extracted using morphological filters and wavelet transforms:

1. The total number of positive and negative peaks found above the threshold.
2. For a zero-crossing function, if the crossing interval lies between 20 and 70 ms, a spike is detected; if it lies between 70 and 200 ms, a sharp wave is detected.
3. Once detected, the total number of spikes and sharp waves is counted as the events.
4. The average duration of these waves is determined by the relation

D = \frac{1}{p} \sum_{i=1}^{p} t_i

where t_i is the peak-to-peak duration and p is the number of such durations.
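The time-domain features above can be sketched as follows; the exact form of the covariance-of-duration expression is a reconstruction from the surrounding definitions (average duration D and durations t_i), so treat it as illustrative rather than the authors' exact formula.

```python
import numpy as np

def energy(x):
    """Total energy of an epoch: E = sum(x_i^2)."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def variance(x):
    """Variance of the epoch around its mean amplitude mu."""
    x = np.asarray(x, dtype=float)
    mu = np.mean(x)
    return float(np.mean((x - mu) ** 2))

def average_duration(t):
    """Average peak-to-peak duration D = (1/p) * sum(t_i)."""
    return float(np.mean(np.asarray(t, dtype=float)))

def covariance_of_duration(t):
    """Normalized spread of durations (reconstructed form, an assumption):
    CD = sum((D - t_i)^2) / (p * D^2)."""
    t = np.asarray(t, dtype=float)
    d = np.mean(t)
    return float(np.sum((d - t) ** 2) / (len(t) * d ** 2))
```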

Wavelet transforms for feature extraction
The brain signals are nonstationary in nature. In order to capture the transients and events of the waveforms, we need to visualize time and frequency simultaneously. Hence, the wavelet transforms are the better choice to extract the transient features and events from the EEG signals. The wavelet transform-based feature extraction is discussed as follows. Let us consider a function f(t). The wavelet transform of this function is defined as [23]

W_f(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt

where ψ*(t) is the complex conjugate of the wavelet function ψ(t).
With this set of analyzing functions, the wavelet family is deduced from the mother wavelet ψ(t) by [24]

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - b}{a}\right)

where a is the dilation parameter and b is the translation parameter.
The feature extraction process is initialized by studying the effect of the simple Haar threshold. The Haar wavelet function can be represented as [25]

\psi(t) = \begin{cases} 1, & 0 \le t < 1/2 \\ -1, & 1/2 \le t < 1 \\ 0, & \text{otherwise.} \end{cases}
Wavelet thresholding is a signal estimation technique that exploits the capabilities of wavelet transform for signal denoising or smoothing. It depends on the choice of a threshold parameter which determines to a great extent the efficacy of denoising.
Typical threshold operators for denoising include the hard threshold, soft threshold, and affine (firm) threshold, where T is the threshold level. Hard thresholding is defined as [24]

\eta_T(w) = \begin{cases} w, & |w| > T \\ 0, & |w| \le T \end{cases}

and soft thresholding (wavelet shrinkage) is given by

\eta_T(w) = \operatorname{sgn}(w)\, \max(|w| - T, 0).

Haar, dB2, dB4, and Sym8 wavelets with hard thresholding and four types of soft thresholding methods, namely heursure, minimax, rigsure, and sqtwolog, are used to extract the parameters from the EEG signals. With the help of an expert's knowledge and our experience with the references [5,20,26], we have identified the parametric ranges for five linguistic risk levels (very low, low, medium, high, and very high) in the clinical description for the patients, as shown in Table 1.
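A minimal sketch of one Haar decomposition level and the two threshold operators, using plain NumPy rather than a wavelet library; in practice a package such as PyWavelets would supply the dB2, dB4, and Sym8 filters and the heursure/minimax/rigsure/sqtwolog threshold selection rules as well.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass (detail)
    return a, d

def hard_threshold(w, T):
    """Keep coefficients whose magnitude exceeds T, zero the rest."""
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) > T, w, 0.0)

def soft_threshold(w, T):
    """Shrink coefficients toward zero by T (wavelet shrinkage)."""
    w = np.asarray(w, dtype=float)
    return np.sign(w) * np.maximum(np.abs(w) - T, 0.0)
```

Transient features such as spikes concentrate in the detail coefficients, which is why thresholding them separates events from the background EEG.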
The output of the code converter is encoded into a string of seven codes corresponding to the seven EEG signal parameters, based on the epilepsy risk level threshold values set in Table 1. The expert-defined threshold values contain noise in the form of overlapping ranges. Therefore, we encode the patient risk level into the next higher level of risk instead of the lower one. For example, if the input energy is at 3.4, then the code converter output will be at the medium risk level instead of the low level [26].

Code converter as a preclassifier
The encoding method processes the sampled output values as individual codes. Since working with definite alphabets is easier than processing numbers with large decimal accuracy, we encode the outputs as a string of alphabets. The alphabetical representation of the five classifications of the outputs is shown in Table 2.
The ease of operation in using the character representation is obviously greater than that of performing cumbersome operations on numbers. By encoding each risk level as one of the five states, a string of seven characters is obtained for each of the 16 channels of each epoch. A sample output with actual patient readings is shown in Table 3 for eight channels over three epochs.
It can be seen that channel 1 shows low-risk levels while channel 7 shows high-risk levels. Also, the risk level classification varies between adjacent epochs. There are 16 different channels for input to the system at three epochs, giving a total of 48 input-output pairs. Since we deal with known cases of epileptic patients, it is necessary to find the exact level of epilepsy risk in the patient. This will also aid the development of automated systems that can precisely classify the risk level of the epileptic patient under observation. Hence, an optimization is necessary. This will improve the classification of the patient and can provide a clear picture of the EEG [20]. The outputs from each epoch are not identical and vary, for instance from [YYZXXXX] to [WYZYYYY] to [YYZZYYY]. In this case, the energy factor is predominant and thus results in a high-risk level for two epochs and a low-risk level for the middle epoch. Channels 5 and 6 settle at the high-risk level. Due to this type of mixed-state output, we cannot come to a proper conclusion. Therefore, we group four adjacent channels and optimize the risk level. Frequently repeated patterns show the average risk level of the group of channels, while identical individual patterns depict a constant risk level within a particular epoch. Whether a group of channels is at the high-risk level or not is identified by the occurrence of at least one Z pattern in an epoch. It is also true that the variation of the risk level is abrupt across epochs and, eventually, across channels. Hence, we cannot come up with a final verdict from the code converter alone. The five risk levels are encoded as Z > Y > X > W > U in binary strings of length five bits using a weighted positional representation, as shown in Table 4. Encoding each output risk level gives us a string of seven alphabets, the fitness of which is calculated as the sum of the probabilities of the individual alphabets.
For example, if the output of an epoch is encoded as ZZYXWZZ, its fitness would be 0.419352.
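The fitness computation can be sketched as below; the per-alphabet probabilities in `PROB` are hypothetical stand-ins, since the actual weighted positional values and probabilities are defined in Table 4 of the paper.

```python
# Hypothetical per-alphabet probabilities standing in for Table 4;
# the paper's actual weighted positional values differ.
PROB = {'Z': 0.1, 'Y': 0.08, 'X': 0.06, 'W': 0.04, 'U': 0.02}

def fitness(code: str) -> float:
    """Fitness of a seven-character risk-level string: the sum of the
    probabilities of its individual alphabets."""
    return sum(PROB[c] for c in code)

print(round(fitness("ZZYXWZZ"), 4))  # 0.58 under the stand-in table
```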
The performance index of the code converter is given as [19]

PI = \frac{PC - MC - FA}{PC} \times 100

where PI is the performance index, PC is the perfect classification, MC is the missed classification, and FA is the false alarm.
The performance of the code converter is 44.81%. Perfect classification means that the physician agrees with the epilepsy risk level; missed classification reports a high level as a low level; false alarm reports a low level as a high level with respect to the physician's diagnosis. The other performance measures are defined as follows. The sensitivity Se and specificity Sp are represented as [19]

Se = \frac{PC}{PC + FA} \times 100, \qquad Sp = \frac{PC}{PC + MC} \times 100

Relative risk = Sensitivity / Specificity = 1.166

The relative risk factor indicates the stability and sensitivity of the classifier. For an ideal classifier, the relative risk will be unity. A more sensitive classifier will have this factor slightly above unity, whereas a slow-response classifier makes this factor lower than unity. We have obtained a low value of just 40% for the performance index and 83.33%, 71.42%, 78.87%, and 1.166 for sensitivity, specificity, average detection, and relative risk of the code converter. Due to these low performance measures, it is essential to optimize the output of the code converter. The performance indices of the code converter outputs using different wavelet transforms with hard thresholding methods are tabulated in Table 5.
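Assuming the standard definitions used in this line of work (PI from PC, MC, and FA; Se and Sp from false alarms and missed classifications, as given in [19]), the measures can be sketched as:

```python
def performance_index(pc, mc, fa):
    """PI = ((PC - MC - FA) / PC) * 100."""
    return (pc - mc - fa) / pc * 100.0

def sensitivity(pc, fa):
    """Se = PC / (PC + FA) * 100: perfect classifications vs. false alarms."""
    return pc / (pc + fa) * 100.0

def specificity(pc, mc):
    """Sp = PC / (PC + MC) * 100: perfect vs. missed classifications."""
    return pc / (pc + mc) * 100.0

def relative_risk(se, sp):
    """Relative risk = sensitivity / specificity (unity for an ideal classifier)."""
    return se / sp
```

With, e.g., PC = 5, FA = 1, MC = 2, the sensitivity is 83.33% and the specificity 71.43%, matching the order of magnitude of the figures reported above.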

Rhythmicity of code converter
Now we identify the rhythmicity of the code converter, which is associated with the nonlinearities of the epilepsy risk levels. Let the rhythmicity be defined as [10]

R = \frac{C}{D}

where C is the number of categories of patterns and D is the total number of patterns, which is 960 in our case. For an ideal classifier, C is to be one and R = 0.001042. Table 6 shows the rhythmicity of the code converter classifier for hard thresholding of each wavelet. As Table 6 shows, the value of R deviates greatly from its ideal value. Hence, it is necessary to optimize the code converter output to arrive at a singleton risk level. In the following section, we discuss the morphological filtering of EEG signals.
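The rhythmicity measure reduces to a one-line ratio:

```python
def rhythmicity(num_categories: int, num_patterns: int = 960) -> float:
    """R = C / D: number of pattern categories over total patterns."""
    return num_categories / num_patterns

print(rhythmicity(1))  # ideal classifier: 1/960, about 0.001042
```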

Morphological filtering for feature extraction of EEG signals
Morphological filtering was chosen over other methods, such as the temporal approach to the EEG signal and the wavelet-based approach, because morphological filtering can determine spikes with a very high accuracy rate [14]. Let us call the signal a function f(t), and let us also consider a structuring element g(t) which, together with f(t), is a subset of the Euclidean space E. Accordingly, the Minkowski addition and subtraction [6] for the function f(t) are given by the relations

Addition (dilation): (f \oplus g)(t) = \max_{\tau} \{ f(\tau) + g(t - \tau) \}

Subtraction (erosion): (f \ominus g)(t) = \min_{\tau} \{ f(\tau) - g(\tau - t) \}

The opening and closing functions of the morphological filter are given as

Opening: f \circ g = (f \ominus g) \oplus g

Closing: f \bullet g = (f \oplus g) \ominus g

The above equations help us determine the peaks and valleys in the original recording [7]. The opening function (erosion-dilation) is used for smoothing the convex peaks of the original signal, and the closing function (dilation-erosion) is used for smoothing the concave peaks. Combinations of the opening and closing functions lead to a new filter which, when fed with the original signal, divides it into two parts: the first signal is defined by the structuring element, and the second is the residue of f(t). When considered separately, the OC (opening-closing) and CO (closing-opening) functions result in a variation in amplitude: OC results in lower amplitude, while CO yields higher amplitude. For easier interpretation and calculation, we take the average of the two, defined as the opening-closing and closing-opening (OCCO) function:

OCCO(f) = \frac{OC(f) + CO(f)}{2}

where f(t) is the original signal, represented as f(t) = x(t) + r(t), with x(t) the spiky part of the signal and r(t) the residue. The performance index, sensitivity, and specificity of the code converter outputs through morphological filter-based feature extraction arrived at the low values of 33.46%, 76.23%, and 77.42%, respectively. This motivates the optimization of the code converter outputs using a postclassifier to accomplish a singleton result.
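A sketch of flat (1-D) grayscale morphology and the OCCO filter, assuming an odd-length flat structuring element; the edge-padding choice is an implementation detail not specified in the text.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _pad(x, k):
    # Edge-replicate so the output keeps the input length (k must be odd).
    return np.pad(np.asarray(x, dtype=float), k // 2, mode="edge")

def erode(x, k):
    """Erosion with a flat structuring element of length k (running min)."""
    return sliding_window_view(_pad(x, k), k).min(axis=1)

def dilate(x, k):
    """Dilation with a flat structuring element of length k (running max)."""
    return sliding_window_view(_pad(x, k), k).max(axis=1)

def opening(x, k):
    """Erosion followed by dilation: smooths convex peaks."""
    return dilate(erode(x, k), k)

def closing(x, k):
    """Dilation followed by erosion: smooths concave peaks."""
    return erode(dilate(x, k), k)

def occo(x, k):
    """Average of the opening-closing and closing-opening filters."""
    oc = closing(opening(x, k), k)
    co = opening(closing(x, k), k)
    return (oc + co) / 2.0
```

Subtracting the OCCO output from the original signal leaves the spiky residue x(t), which is what the peak, event, and duration features are computed from.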
The following section describes the outcomes of the SVD, EM, and MEM techniques as postclassifiers.
Singular value decomposition, expectation maximization, and modified EM as postclassifiers for classification of epilepsy risk levels
In this section, we discuss the possible usage of SVD, EM, and MEM as postclassifiers for the classification of epilepsy risk levels. The SVD was established in the 1870s by Beltrami and Jordan for real square matrices [27]. It is used mainly for dimensionality reduction and for determining the modes of a complex linear dynamical system [27]. Since then, SVD has been regarded as one of the most important tools of modern numerical analysis and numerical linear algebra.

SVD theorem
Let A = [a_1, a_2, a_3, …, a_n] be an m × n matrix. The SVD theorem states that [28]

A = U S V^T

where A ∈ R^{m×n} (with m ≥ n), U ∈ R^{m×n}, V ∈ R^{n×n}, and S is a diagonal matrix of size R^{n×n}.

Equation 24 can be further expanded as

A = \sum_{i=1}^{p} \sigma_i u_i v_i^T

The columns of U are called the left singular vectors of matrix A, and the columns of V are called the right singular vectors of A. Here p = min(m, n), and Σ is called the singular value matrix, with the singular values σ_1 ≥ σ_2 ≥ … ≥ σ_p ≥ 0 along its diagonal.
We have taken the EEG records of 20 patients for our study. Each patient's sample is composed of a 16 × 3 matrix of code converter outputs, as depicted in Table 3. Considering this to be matrix A, the SVD is computed. The dominant singular value so obtained is regarded as the patient's epilepsy risk level. The same procedure is carried out to find the singular values for the other patients as well.
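The per-patient SVD step might look like this; the matrix below is a random stand-in for the numerically encoded 16 × 3 code converter output of Table 3.

```python
import numpy as np

# Hypothetical 16x3 matrix of numerically encoded code converter outputs
# (one row per channel, one column per epoch), standing in for Table 3.
rng = np.random.default_rng(0)
A = rng.integers(1, 6, size=(16, 3)).astype(float)

# Thin SVD: A = U @ diag(s) @ Vt, singular values sorted in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(U.shape, s.shape, Vt.shape)  # (16, 3) (3,) (3, 3)

# The paper takes the dominant singular value as the patient's risk measure.
risk = s[0]

# Sanity check: A is exactly reconstructed from its singular triplets.
assert np.allclose(A, U @ np.diag(s) @ Vt)
```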

Expectation maximization as a postclassifier
The EM is often described as a statistical technique for maximizing complex likelihoods and handling incomplete-data problems. The EM algorithm consists of two steps.
Expectation step (E step): given the observed data and an estimate of the parameter, the expected value of the missing data is computed [29]. For a given measurement y_1 and the current estimate of the parameter θ̂, the expected value of x_1 is computed as

x_1 = E[x_1 \mid y_1, \hat{\theta}]

Maximization step (M step): using the data that were actually measured together with the expectation from the E step, the maximum likelihood (ML) estimate of the parameter is determined.
Considering the code converter output, let us take a set of unit vectors X. We have to find the parameters μ and κ of the distribution Md(μ, κ). Accordingly, we can write [30]

X = \{ x_i \mid x_i \sim Md(\mu, \kappa), \; 1 \le i \le n \}

Considering x_i ∈ X, the likelihood of X is

P(X \mid \mu, \kappa) = \prod_{i=1}^{n} f(x_i \mid \mu, \kappa)

and the log likelihood is obtained by taking its logarithm. In order to obtain the likelihood parameters μ and κ, we maximize the log likelihood with the help of a Lagrange multiplier λ enforcing the unit-norm constraint on μ. Differentiating with respect to μ, λ, and κ and equating the derivatives to zero yields the parameter estimate

\hat{\mu} = \frac{\kappa}{2\lambda} \sum_{i=1}^{n} x_i

In the expectation step, the threshold data are estimated, given the observed data and the current estimate of the model parameters [31]. This is achieved using the conditional expectation, which explains the choice of terminology. In the M step, the likelihood function is maximized under the assumption that the threshold data are known. The estimate of the missing data from the E step is used in lieu of the actual threshold data.
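As an illustration of the E/M alternation, here is EM for a two-component 1-D Gaussian mixture; the paper's distribution Md(μ, κ) is directional, so this is a simplified stand-in for the same iterative structure, not the authors' exact model.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100, tol=1e-6):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])           # initial means
    var = np.array([x.var(), x.var()]) + 1e-6   # initial variances
    w = np.array([0.5, 0.5])                    # initial mixing weights
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: responsibility of each component for each sample
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        weighted = w * pdf
        ll = np.log(weighted.sum(axis=1)).sum()  # current log likelihood
        resp = weighted / weighted.sum(axis=1, keepdims=True)
        # M step: maximum-likelihood updates given the responsibilities
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        if abs(ll - prev_ll) < tol:  # stop once the log likelihood settles
            break
        prev_ll = ll
    return mu, var, w
```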

Modified expectation maximization algorithm
In this paper, an ML approach uses a modified EM algorithm for pattern optimization. Similar to the conventional EM algorithm, this algorithm alternates between the estimation of the complete log-likelihood function (E step) and the maximization of this estimate over values of the unknown parameters (M step) [32]. Because of the difficulties in evaluating the ML function [33], modifications are made to the EM algorithm as follows.
The method of maximum likelihood corresponds to many well-known estimation methods in statistics. For example, one may be interested in the heights of adult female giraffes, but be unable, due to cost or time constraints, to measure the height of every single giraffe in a population. Assuming that the heights are normally (Gaussian) distributed with some unknown mean and variance, the mean and variance can be estimated with MLE while knowing only the heights of some sample of the overall population.
Given a set of samples X = {x_1, x_2, …, x_n}, the complete data set S = (X, Y) consists of the sample set X and a set Y of variables indicating from which component of the mixture each sample came. The description given below explains how to estimate the parameters of the Gaussian mixture with the maximization algorithm. After optimization of the patterns, maximum likelihood is adopted to redesign the intracranial area into two clusters. Basically, the maximum likelihood algorithm is a statistical estimation algorithm used for finding log likelihood estimates of parameters in probabilistic models [30].
1. Find the initial values of the maximum likelihood parameters, which are the means, covariances, and mixing weights.
2. Assign each x_i to its nearest cluster center c_k by Euclidean distance d.
3. In the maximization step, maximize the likelihood function, written as

L(\theta) = \sum_{i=1}^{n} \log \sum_{k} w_k\, \mathcal{N}(x_i \mid \mu_k, \sigma_k^2)

4. Repeat the iterations, and do not stop the loop until the change in the log likelihood becomes small enough.
The algorithm terminates when the difference between the log likelihood of the previous iteration and that of the current iteration falls within the tolerance. For μ = 0 and σ = 1, the likelihood function was applied to the 16 × 3 matrix of the code converter output, truncated to the known endpoints.
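Step 2 of the algorithm, the nearest-center assignment by Euclidean distance, can be sketched for the 1-D case as:

```python
import numpy as np

def assign_to_nearest(x, centers):
    """Assign each sample to its nearest cluster center by Euclidean
    distance (1-D case; absolute difference)."""
    x = np.asarray(x, dtype=float)
    centers = np.asarray(centers, dtype=float)
    d = np.abs(x[:, None] - centers[None, :])  # pairwise sample-center distances
    return d.argmin(axis=1)

labels = assign_to_nearest([0.1, 4.9, 0.3, 5.2], [0.0, 5.0])
print(labels.tolist())  # [0, 1, 0, 1]
```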

Results and discussion
To study the relative performance of the code converter and the SVD, EM, and MEM classifiers, we measure two parameters: the performance index and the quality value. These parameters are calculated for each set of 20 patients and compared.

Performance index
A sample of the performance indices of morphological filter-based feature extraction with the code converter, singular value decomposition, EM, and MEM, averaged over the 20 known epilepsy data sets, is shown in Table 7. As shown in Table 7, morphological filter-based feature extraction with SVD optimization ranks first with a high PI of 89.48%, against 80.1% and 83.35% for the EM and MEM methods, respectively. However, the morphological filter may incur more missed classifications rather than fewer false alarms, which is a dangerous trend; this method is therefore considered a lazy, high-threshold classifier. Table 8 depicts the performance analysis of the wavelet transforms with the hard thresholding method. In the case of hard thresholding, while the code converter has an average classification rate of 62.68% and a false alarm rate of 18.105%, the EM optimizer achieves 87.39% perfect classification with a false alarm rate of 4.43%. Without much deviation, MEM has 89.36% average perfect classification and a 4.46% false alarm rate. SVD optimization has the highest perfect classification rate of 96.58% with zero false alarms. Hence, the SVD optimizer can be regarded as the best postclassifier. For all four wavelet transforms, the SVD postclassifier is the best suited to achieve a high classification rate; the EM and MEM techniques fail to achieve comparable classification accuracy. Table 9 presents the performance analysis of the wavelet transforms with soft thresholding with the code converter, SVD, EM, and MEM. It can be found that, in soft thresholding, the code converter has an average perfect classification of 65.6% and a false alarm rate of 11.94%. SVD has a classification rate of over 85% with comparatively higher false alarm values. The MEM optimizer proves to be the best, with a classification rate of 93.97% and a false alarm rate of only 3.5%.
This is obtained when Haar wavelet is used with minimax soft thresholding.

Quality value
This parameter determines the overall quality of the classifiers used. The relation for the quality value is given by [19]

QV = \frac{C}{(R_{fa} + 0.2)\,(T_{dly} \cdot P_{dct} + 6 \cdot P_{msd})}

where C is a scaling constant, R_fa is the number of false alarms per set, T_dly is the average delay of onset classification, P_dct is the percentage of perfect classification, and P_msd is the percentage of perfect risk levels missed.
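A sketch of the quality value computation; the exact arrangement of terms in the denominator is reconstructed from the parameter list and should be treated as an assumption to be checked against [19].

```python
def quality_value(r_fa, t_dly, p_dct, p_msd, c=10.0):
    """QV = C / ((R_fa + 0.2) * (T_dly * P_dct + 6 * P_msd)).
    Reconstructed form (an assumption); C is the scaling constant,
    set to 10 as in the text."""
    return c / ((r_fa + 0.2) * (t_dly * p_dct + 6.0 * p_msd))

# Example: no false alarms, 2-s delay, perfect classification, nothing missed
print(quality_value(0.0, 2.0, 1.0, 0.0))  # 25.0
```

Low false alarms, short onset delay, high perfect classification, and few misses all push QV up, which matches how the tables below rank the classifiers.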
By setting the value of 'C' to a constant value, say 10, the classifier with the highest QV is the better one. Table 10 depicts the quality value of wavelet transforms with hard thresholding and SVD, EM, and MEM optimization methods. It was observed that SVD with dB2 wavelet in hard thresholding attained the maximum value of QV at 23.82, and EM with Haar wavelet has the low value of QV at 18.32. Table 11 shows the performance analysis of 20 patients using dB2 wavelet hard thresholding with SVD, EM, and MEM as postclassifiers. The evaluation parameters achieved an appreciable value in the case of SVD postclassifier when compared to the other two classifiers. Hence, we can choose SVD as a good postclassifier for epilepsy risk level classification. All the three postclassifiers are bestowed with the best sensitivity and specificity measures. EM and MEM classifiers are plugged into the higher false alarm rate, and this leads to the lower QV and PI for the system.
Since the Haar wavelet is a predominant wavelet, we chose this wavelet for the four types of soft thresholding methods, as depicted in Table 12. As seen in Table 12, the highest QV of 22.54 is attained with minimax soft thresholding and MEM as the postclassifier. Table 13 exhibits the performance analysis of the 20 patients using the Haar wavelet in soft thresholding with the SVD, EM, and MEM postclassifiers. The MEM postclassifier with minimax soft thresholding reaches better QV and PI when compared to the SVD and EM classifiers. A slight incremental tradeoff in the weighted delay for MEM is responsible for this performance when compared with the SVD and EM classifiers. SVD fails to achieve a good performance in this methodology due to a higher false alarm rate, while EM is stuck in the middle as far as the performance index is concerned. Table 14 shows the performance analysis of the 20 patients using morphological filters with the SVD, EM, and MEM postclassifiers. In this method, SVD outperforms the other classifiers in terms of QV and PI. Morphological filtering is inherently slow in response and is considered a high-threshold classifier; the SVD classifier combines low false alarms and low weighted delays. All these methods are, on average, positioned at more than 90% performance index and around a quality value of 18. Since the weighted delay obtained for all these classifiers is more than 2 s, the result is a larger threshold and a slow-response system. We also wish to analyze the time complexity of the postclassifiers in terms of weighted delay and quality value. Table 15 shows the performance analysis of the postclassifiers in terms of weighted delay and quality value. It is observed that the four types of wavelet transforms in the hard thresholding method along with the SVD postclassifier attain a low weighted delay and a high QV.
As Table 15 shows, the EM and MEM classifiers suffer from either more missed classifications or more false alarms, which pushes the QV below 20 for most of the wavelet transforms. In the case of soft thresholding, the dB2 wavelet with rigrsure thresholding and the MEM postclassifier outperforms the other fifteen methods. Morphological filters are stuck at a higher delay, with QV settling near 20. Table 16 summarizes recent work in this area; a good collection of recent papers is given in the review paper by Rajendra Acharya et al. [34]. Studies that presented techniques for two-class (normal, ictal) epilepsy activity classification include that of Nigam and Graupe [35], who used a multistage nonlinear pre-processing filter along with an artificial neural network (ANN) for the automated detection of epileptic signals and obtained an accuracy of around 97.20%. Nonlinear parameters like CD, LLE, H, and entropy were used to characterize the EEG signal and discriminate epileptic and alcoholic EEG from normal EEG with more than 90% accuracy [36]. Using the same dataset, the same group automatically classified EEG signals into normal and epileptic using different entropies with an adaptive neuro-fuzzy inference system (ANFIS) and obtained an accuracy of 92.22% [37]. Time-domain and frequency-domain EEG features combined with an Elman network were used to classify the two classes with an accuracy, sensitivity, and specificity of 99.6% [38]. Normal and epileptic EEG signals were automatically identified with a classification accuracy of 85.9% using discrete wavelet transform (DWT) sub-band energy as input features to an adaptive neural fuzzy network [39]. Srinivasan et al. [40] developed an automated epileptic EEG detection system using approximate entropy as the feature in Elman and probabilistic neural networks; the Elman network yielded an overall accuracy of 100%. Tzallas et al.
[41] employed time-frequency methods to analyze selected segments of EEG signals for automated seizure detection using a neural network and obtained accuracies ranging from 97.72% to 100%. Subasi [42] applied the DWT to EEG signals and decomposed them into frequency sub-bands. The DWT coefficients were converted into four statistical features, which were fed to a modular neural network called a mixture of experts (ME); they classified normal and epileptic EEG signals with an accuracy of 94.5%, sensitivity of 95%, and specificity of 94%. Polat and Gunes [43] classified EEG signals into epileptic and normal using the fast Fourier transform (FFT)-based Welch method and a decision tree classifier and achieved a maximum classification accuracy of 98.72%, sensitivity of 99.4%, and specificity of 99.31%. The same group [44] used the Welch FFT method for feature extraction, PCA for dimensionality reduction, and a new hybrid automated identification system based on an artificial immune recognition system (AIRS) with a fuzzy resource allocation mechanism for classification of normal and epileptic segments; they reported an accuracy of 100%. The same group [45] used autoregressive (AR) modeling for feature extraction and a C4.5 decision tree classifier for classification and reported an accuracy of 99.32%. Ocak [46] developed a method for automated seizure detection based on ApEn and the DWT and was able to distinguish seizures with more than 96% accuracy. Guo et al. conducted several studies using ANNs for classification and reported an accuracy of 95.2%, sensitivity of 98.17%, and specificity of 92.12% using relative wavelet energy-based features [47]; an accuracy of 99.85%, sensitivity of 100%, and specificity of 99.2% using wavelet transform and ApEn features [48]; an accuracy of 99.60% using wavelet transform and the line length feature [49]; and an accuracy of 99% using genetic programming-based features in a K-nearest neighbor (KNN) classifier [50].
The DWT features were reduced using PCA, ICA, and LDA, and the resultant features were used to classify normal and epileptic EEG signals with an SVM classifier [51]; they obtained an accuracy of 98.85% using the PCA method, 99.5% using the ICA method, and 100% using the LDA method. Ubeyli [52] used AR methods for feature extraction and an SVM for classification and reported an accuracy of 99.56%. In other recent studies, 100% classification accuracy was achieved by Lima et al. [53] (wavelet transform and SVM), Wang et al. [54] (wavelet packet entropy and KNN), Iscan et al. [55] (cross correlation, PSD, and SVM), and Orhan et al. [56] (DWT and ANN). Finally, the system proposed by the authors used seven parameters along with wavelet hard thresholding and obtained an accuracy of 99.03%, sensitivity of 99.05%, and specificity of 99.1%. Table 16 gives a summary of the above listed studies for automated detection of normal and epileptic classes. It can be observed that a variety of methods such as FFT, time-frequency analysis, DWT, morphological filtering, wavelets, statistical measures, nonlinear, chaotic, and entropy measures, and dimension reduction methods like PCA, ICA, SVD, EM, MEM, and LDA are used to analyze EEG and detect the epileptic state from the normal state.

Conclusions
The objective of this paper is to classify the risk level of epileptic patients from EEG signals. The aim is to obtain a high classification rate, performance index, and quality value with low false alarm and missed classification rates. Due to the nonlinearity and the poor performance found in the code converters, optimization was vital for the effective classification of the signals. We opted for SVD, EM, and MEM as postclassifiers; morphological filters were also used for feature extraction from the EEG signals. After computing the values of PI and QV discussed in the results section, we found that SVD performed best, with a high classification rate of 91.22% and a false alarm rate as low as 1.42. Therefore, SVD was chosen as the best postclassifier. The accuracy of the results can be improved further by using an extreme learning machine as a postclassifier, and further research will proceed in this direction.