Compression of ECG signals using variable-length classified vector sets and wavelet transforms
© Gurkan; licensee Springer. 2012
Received: 13 May 2011
Accepted: 31 May 2012
Published: 31 May 2012
In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines the modeling of ECG signals by variable-length classified signature and envelope vector sets (VL-CSEVS) with residual error coding via the wavelet transform. In particular, we form the VL-CSEVS from the ECG signals by exploiting the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths obtained by an energy-based segmentation method; they are then stored at both the transmitter and the receiver of the proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Compression Test Database, and its performance is evaluated by using evaluation metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results imply that the proposed algorithm achieves high compression ratios with a low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, which has been supported by the clinical tests that we have carried out.
Keywords: electrocardiogram, data compression, variable-length classified vector sets, energy-based ECG segmentation
An electrocardiogram (ECG) signal, which is a graphical display of the electrical activity of the heart, is one of the essential biological signals for the monitoring and diagnosis of heart diseases. ECG signals recorded by digital equipment are widely used in applications such as monitoring, cardiac diagnosis, real-time transmission over telephone networks, patient databases, and long-term recording. Key parameters such as the sampling rate, sampling precision, number of leads, and recording time play an important role in the growth of the amount of data collected from an ECG signal. Evidently, to process the huge amount of continuously generated ECG data, we need equipment with high storage capacity; when that equipment is used for remote monitoring, it must also have a wide transmission band. Therefore, to remove the redundant information from the ECG signal while retaining all clinically significant features, including the P-wave, QRS complex, and T-wave [1, 2], we need an effective ECG compression algorithm.
In recent years, studies dealing with the modeling and compression of ECG signals have essentially utilized one of the following methods: (i) direct time-domain methods, (ii) transform-based methods, and (iii) parameter extraction methods [2, 3].
The direct time-domain methods [4–10], such as AZTEC, CORTES, SAPA, FAN, SAIES, the mean-shape vector quantization method, and gain-shape vector quantization, use the actual samples of the original signal. In the transform-based methods [11–22], the original signal is transformed into another domain by orthogonal transformations such as principal component analysis (PCA) [11, 12], the discrete cosine transformation (DCT), singular value decomposition (SVD), and the wavelet transformation (WT) [15–22]. The appropriate inverse transformation is then applied to the transformed signal to reconstruct the original signal in its original domain with an acceptable reconstruction error. The parameter extraction methods [23, 24], such as linear prediction and neural-network-based methods, generally rely on generating a set of parameters extracted from the original signal.
Among the methods proposed in the literature, one of the best-known and most powerful algorithms is the set partitioning in hierarchical trees (SPIHT) compression algorithm. Another efficient ECG compression method uses cosine-modulated filter banks to reconstruct the original ECG signals. A further ECG compression method has been proposed that is based on adaptive wavelet coefficient quantization using a modified two-role encoder. Most recently, a wavelet-based ECG data compression system with a linear quality control scheme was proposed.
In some previously published articles [26, 27], it has been shown that predefined signature and envelope vector sets describe speech and ECG signals well. It has also been demonstrated in [26, 27] that, through a systematic procedure called SYMPES, these predefined signature and envelope vector sets can be used to model speech and ECG signals frame by frame. In this procedure, each frame of the reconstructed speech or ECG signal is represented by the product of three major quantities: a gain factor, a signature vector, and an envelope vector.
A novel EEG compression method has also been proposed based on the construction of classified signature and envelope vector sets (CSEVS). The signature and envelope vector sets obtained for speech and ECG signals in [26, 27] were extended to EEG signals, and these vector sets were then classified by the k-means clustering algorithm to determine the centroid vectors of each class, which were used in constructing the CSEVS. The main advantage of that method is that it reduces the size of the vector sets and the computational complexity of the searching and matching processes. The method also proved to have advantages over the wavelet transform coding technique in terms of the average RMSE, average PRD, average PRD1, and CR(%).
In a related study, a block-based image compression scheme was presented based on the generation of classified energy and pattern blocks (CEPBs). In that method, the classified energy block (CEB) and classified pattern block (CPB) sets are first constructed, and any image can then be reconstructed block by block using a block scaling coefficient and the index numbers of the CEPBs stored in the CEB and CPB. The CEB and CPB sets were constructed for different image block sizes, such as 8 × 8 or 16 × 16, according to the desired compression ratios (CRs). A series of experiments showed that the method provides high CRs, such as 21.33:1 and 85.33:1, while preserving the image quality at a 27-30.5 dB level on average. When the CR versus image quality (PSNR) results of that method are compared with other works, the method appears superior to the DCT and DWT, particularly at low bit rates or high CRs.
In the current article, we propose a new and more efficient ECG compression algorithm which relies on variable-length CSEVS (VL-CSEVS) and the wavelet transform. In the proposed algorithm, we first use an energy-based segmentation method to represent an ECG frame with high energy by short segments and an ECG frame with low energy by long segments. The unique patterns of the VL-CSEVS are then generated from these ECG segments of two different lengths. Compared with the previous results obtained in [26–28], our new method significantly improves the CR, and the wavelet-transform-based residual error coding further enhances the quality of the reconstructed signal. In order to check the performance of our new method for different classes of ECG signals, with the original unique-pattern VL-CSEVS kept unchanged, we have used the MIT-BIH compression test database, called the worst-case database by its developers.
The parameters PRD, MPRD, and maximum error (MAXERR) are measured by changing both the training set and the test set at each round of a 4-fold cross-validation method, and their average values are used to determine the performance of the proposed method. We should point out here that the sampling frequency, resolution, mean value, and amplitude range of the ECG signals in the test database are different from those of the ECG signals used to construct the unique-pattern VL-CSEVS.
The article is organized as follows. Section 2 describes the details of the newly proposed compression algorithm. In Section 3, we present the experimental results obtained with the proposed compression algorithm and compare them with several successful ECG compression methods reported in [21, 22, 25]. Section 4 concludes the article.
2 Proposed compression algorithm
In this article, an efficient ECG compression algorithm is proposed which models ECG signals via VL-CSEVS and employs residual error coding using the wavelet transform. One of the main advantages of our method is that it ensures the quality of the reconstructed ECG signal.
We use a variable-length approach to generate the CSEVS. In this context, an ECG frame with high energy, carrying useful information such as the QRS complex, is represented by short segments, while an ECG frame with low energy, with or without clinical information, is represented by long segments. The length of the short segments is set to 16 samples and that of the long segments to 64 samples.
To determine the segment lengths, we first examined the relationship between segment length and the blocking effect for various segment lengths, and then chose the lengths that minimize the blocking effect on the reconstructed ECG signal.
After the variable-length segmentation process, the signature and envelope vectors are extracted from many thousands of ECG segments. They are then classified by the k-means algorithm, which eliminates similar signature and envelope vectors. The VL-CSEVS are thus constructed from non-similar signature and envelope patterns, so the VL-CSEVS contain unique patterns.
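The classification step can be sketched with a minimal k-means implementation. The cluster count, iteration budget, and initialisation below are illustrative choices, not the paper's settings:

```python
import numpy as np

def kmeans(vectors, k, iters=50, seed=0):
    """Minimal k-means: returns k centroid vectors and a label per input.

    A sketch of the clustering step only; the paper's cluster counts and
    initialisation details are not given here, so k, iters, and the seed
    are illustrative.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(vectors, dtype=float)
    # Initialise centroids with k distinct input vectors.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned vectors.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels
```

Similar signature (or envelope) vectors collapse onto one centroid, which is how the classified sets become much smaller than the raw vector collections.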
As a result, ECG segments with low energy can be compressed more than ECG segments with high energy, which allows us to significantly increase the total CR of ECG signals. On the other hand, some ECG frames containing a P-wave or T-wave that carries valuable clinical information may have low energy. In the reconstruction of these types of ECG frames, the reconstruction error is substantially decreased by the wavelet-based residual error coding technique. The proposed algorithm is superior to powerful wavelet-based ECG compression methods, especially at low bit rates.
The newly proposed algorithm consists of three processing stages: the preprocessing stage, the construction of the VL-CSEVS, and the reconstruction of an ECG signal. In the following subsections, each stage is explained in detail.
2.1 Preprocessing stage
The preprocessing is one of the most important stages of an ECG compression method because it plays a crucial role in enhancing the compression performance of the algorithm. The preprocessing stage is carried out in three steps.
The final step of this stage is the segmentation process. There are two traditional ECG segmentation methods in the literature. The first is based on a QRS detection algorithm, in which each QRS peak or each R-R interval is identified as a segment; due to heart rate variability, this method increases the computational cost of the compression process. The other is fixed-length segmentation, one of the most widely used methods in the literature. In our previous work, we employed fixed-length segmentation to split ECG signals into short, quasi-periodic segments. In this work, an energy-based segmentation method, which splits the ECG signal into segments of two different lengths according to the energy variation of the signal, is used to improve the compression performance of the proposed algorithm. This method divides ECG frames with high energy into short segments of 16 samples, while ECG frames with low energy are divided into long segments of 64 samples each.
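A sketch of such an energy-based splitter follows. The look-ahead window and the energy threshold are assumptions, since the exact decision rule is not spelled out here:

```python
import numpy as np

SHORT, LONG = 16, 64  # segment lengths used in the paper

def segment_by_energy(x, threshold):
    """Split x into 16-sample (high-energy) and 64-sample (low-energy)
    segments.  The look-ahead window and the threshold rule are
    illustrative assumptions."""
    segments, pos = [], 0
    while pos < len(x):
        # Inspect the next LONG samples; if their energy is high,
        # emit a SHORT segment, otherwise a LONG one.
        window = x[pos:pos + LONG]
        energy = float(np.sum(window ** 2))
        length = SHORT if energy > threshold else LONG
        segments.append(x[pos:pos + length])
        pos += length
    return segments
```

A flat stretch of signal thus consumes one 64-sample segment, while an energetic region such as a QRS complex is cut into 16-sample segments.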
When the preprocessing stage is completed, normalized ECG segments of two different lengths are obtained and used to construct the VL-CSEVS, which are explained in detail in the next subsection.
2.2 Construction of the VL-CSEVS
in which L_F is the number of samples in an ECG segment, equal to either 16 or 64.
Since the autocorrelation matrix R_i is a positive semi-definite, real, symmetric Toeplitz matrix, its eigenvalues λ_ik are real and non-negative and its eigenvectors v_ik are orthonormal.
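These properties can be checked numerically. The sketch below builds a biased sample autocorrelation matrix for a segment and eigendecomposes it; taking the dominant eigenvector as the signature vector follows the construction in [26, 27], while the test segment itself is arbitrary:

```python
import numpy as np

def autocorr_matrix(seg):
    """Toeplitz autocorrelation matrix R_i of an ECG segment, using the
    biased sample autocorrelation (which guarantees positive
    semi-definiteness)."""
    seg = np.asarray(seg, dtype=float)
    n = len(seg)
    r = np.correlate(seg, seg, mode="full")[n - 1:] / n  # lags 0..n-1
    return np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])

seg = np.sin(np.linspace(0, np.pi, 16))   # arbitrary 16-sample segment
R = autocorr_matrix(seg)
lam, V = np.linalg.eigh(R)                # real eigenvalues, orthonormal vectors
signature = V[:, -1]                      # eigenvector of the largest eigenvalue
```

Here `lam` is non-negative up to round-off and the columns of `V` are orthonormal, in line with the properties stated above.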
After determining the centroid vectors for each cluster of the signature and envelope vectors, two types of sets are constructed from these centroid vectors. The centroid vectors obtained from the signature vectors and the envelope vectors are renamed classified signature vectors (CSV) and classified envelope vectors (CEV), respectively. The CSVs are collected in either the Classified Signature Set-16 (CSS16) or the Classified Signature Set-64 (CSS64) according to their segment length. The CSVs are represented by Ψ_NS(n), NS = 1, 2, ..., R, ..., N_S, where the integer n indexes the samples of each CSV and the integer N_S designates the total number of CSVs in CSS16 and CSS64, individually. In the same way, the CEVs are collected in either the CES16 or the CES64 according to their segment length. The CEVs are represented by Φ_NE(n), NE = 1, 2, ..., K, ..., N_E, where the integer n indexes the samples of each CEV and the integer N_E denotes the number of CEVs in CES16 and CES64, individually. Afterwards, CSS16, CES16, CSS64, and CES64 are collected in the VL-CSEVS. Details of the reconstruction of measured ECG signals by means of the VL-CSEVS are given step by step in the following subsection.
2.3 Reconstruction process of ECG signals by using VL-CSEVS
Step 1: The original ECG signal is first normalized and then segmented in the preprocessing stage. If the segment length is 16, the switch-codebook bit bSWCB is set to 1; otherwise, bSWCB is set to 0.
Step 2b: The index number R that refers to CSV is stored.
Step 3a: An appropriate CEV is pulled out from either CES16 or CES64, according to the value of bSWCB, such that the error shown below is minimized.
Step 8: The residual error is down-sampled by two using a cubic spline interpolation technique, and a three-level discrete wavelet transform with the biorthogonal wavelet (bior4.4) is applied to the down-sampled residual signal.
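As a rough sketch of this residual coding step, the fragment below decimates the residual and applies a three-level DWT. The orthonormal Haar wavelet and plain decimation are stand-ins for the paper's bior4.4 wavelet and cubic-spline down-sampling (in practice a wavelet library such as PyWavelets would supply bior4.4):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def residual_transform(err, levels=3):
    """Down-sample the residual by two, then apply a 'levels'-level DWT.

    Haar and plain decimation are used here for brevity; the paper uses
    cubic-spline down-sampling and the bior4.4 wavelet.
    """
    x = np.asarray(err, dtype=float)[::2]   # decimation stand-in
    coeffs = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        coeffs.append(d)                    # detail coefficients per level
    coeffs.append(x)                        # final approximation
    return coeffs
```

Because the Haar transform here is orthonormal, the coefficient energy equals the energy of the down-sampled residual, which makes the subsequent quantization and coding well-behaved.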
Step 9: The modified two-role encoder is employed to code the obtained wavelet coefficients, yielding the encoded residual bit stream.
Step 10: The encoded bit stream of the index number R is obtained by Huffman coding.
Step 11: The encoded bit stream of the index number K is obtained by Huffman coding.
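Steps 10 and 11 apply standard Huffman coding to the index streams. Below is a minimal sketch of building such a code table; the heap-based construction is the textbook method, not the paper's specific implementation:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bit string) from a symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries are (weight, tie-breaker, tree); a tree is either a
    # symbol (leaf) or a pair of subtrees.
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix
    walk(heap[0][2], "")
    return code
```

Frequent CSV/CEV indices receive short codewords, which is what makes Huffman coding of R and K worthwhile when a few dictionary entries dominate.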
Step 12: The new gain coefficients C_i are coded using 6 bits.
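A minimal sketch of the 6-bit gain coding in Step 12, assuming a uniform quantizer over the normalized amplitude range [0, 1] (the paper does not state the quantizer design, so this is an illustrative choice):

```python
def quantize_gain(c, bits=6):
    """Uniformly quantize a gain coefficient in [0, 1] to 'bits' bits.

    The 6-bit width comes from Step 12; the [0, 1] range matches the
    amplitude normalisation used in preprocessing.  The uniform design
    itself is an assumption.
    """
    levels = (1 << bits) - 1                  # 63 steps for 6 bits
    c = min(max(float(c), 0.0), 1.0)          # clamp to the valid range
    return round(c * levels)                  # integer code, 0..63

def dequantize_gain(q, bits=6):
    """Map a quantizer code back to a gain value in [0, 1]."""
    return q / ((1 << bits) - 1)
```

With 63 uniform steps, the worst-case round-trip error of a gain value is half a step, about 0.008 on the normalized scale.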
Step 1: The encoded bit streams of the index numbers R and K are decoded by the Huffman decoder.
Step 2: For each segment, the index numbers R and K are used to pull the appropriate CSV and CEV out of the VL-CSEVS according to the switch-codebook bit bSWCB.
Step 5: The encoded bit stream of the residual signal is decoded by the modified two-role decoder.
Step 6: The reconstructed residual signal errrec is produced by applying the inverse WT and then up-sampling by a factor of two.
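The decoder steps above can be sketched as follows, assuming the element-wise gain × signature × envelope frame model of SYMPES [26, 27]. The dictionary layout and variable names are illustrative, not the paper's data structures:

```python
import numpy as np

def reconstruct_segment(bSWCB, R, K, C, vlcsevs, err_rec):
    """Rebuild one ECG segment from its switch-codebook bit, CSV index R,
    CEV index K, gain C, and decoded residual err_rec.

    The element-wise product form follows the SYMPES frame model
    [26, 27]; the 'vlcsevs' dictionary layout is an assumption.
    """
    css = vlcsevs["CSS16"] if bSWCB == 1 else vlcsevs["CSS64"]
    ces = vlcsevs["CES16"] if bSWCB == 1 else vlcsevs["CES64"]
    psi, phi = css[R], ces[K]                 # classified signature / envelope
    return C * (psi * phi) + err_rec          # gain x signature x envelope + residual
```

Only bSWCB, R, K, C, and the residual bit stream travel over the channel; the vector sets themselves are fixed at both ends, which is where the compression comes from.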
In the following section, the simulation results for the proposed compression algorithm are presented.
3 Simulation results
3.1 Evaluation metrics to measure the performance of the proposed compression algorithm
where borg and brec represent the number of bits required for the original and reconstructed signals, respectively.
where xorg(n) refers to the original signal, xrec(n) denotes the reconstructed signal and N represents the length of the frame.
where the overbarred term denotes the mean value of the original signal.
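The metrics above can be computed directly from their definitions. The sketch below assumes the standard CR, PRD, and MPRD formulas, with the original-signal mean removed from the MPRD denominator as described:

```python
import numpy as np

def cr(b_org, b_rec):
    """Compression ratio: bits of the original over bits of the compressed."""
    return b_org / b_rec

def prd(x_org, x_rec):
    """Percentage root-mean-square difference."""
    x_org, x_rec = np.asarray(x_org, float), np.asarray(x_rec, float)
    return 100 * np.sqrt(np.sum((x_org - x_rec) ** 2) / np.sum(x_org ** 2))

def mprd(x_org, x_rec):
    """Modified PRD: the mean of the original is subtracted in the
    denominator, so a large baseline cannot mask the distortion."""
    x_org, x_rec = np.asarray(x_org, float), np.asarray(x_rec, float)
    x0 = x_org - np.mean(x_org)
    return 100 * np.sqrt(np.sum((x_org - x_rec) ** 2) / np.sum(x0 ** 2))
```

Because its denominator is never larger than the PRD's, the MPRD is always at least as large as the PRD for the same pair of signals, which is the point made in the comparison with Hilton below.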
All of the evaluation criteria explained above are employed in our experiments. We will compare the results of our algorithm with the results of the algorithms given [21, 22, 25] as far as the above mentioned evaluation criteria are concerned.
3.2 Mean opinion score test
The MOS test
ECG Signal Name:####
A. The measure of similarity between the original ECG signal and reconstructed ECG signal.
B. Would you give a different diagnosis based on the reconstructed signal if you had not seen the original signal?
C. The measure of segment based similarity between the original ECG signal and reconstructed ECG signal.
where a, an integer ranging from 1 to 5, is the measure of the similarity between the original and reconstructed signals, and b is the answer to section B related to the diagnosis: if the answer is YES, b is equal to 0; otherwise, b is equal to 1.
The SMOS, defined as the second distortion measure, shows the similarity between the important segments and waves of the original and reconstructed ECG signals, specifically the QRS segment and the P and T waves. In this test, the SMOS is determined for the QRS segment and the P and T waves separately, and the results are represented by SMOSQRS, SMOSP, and SMOST, respectively. We should point out here that lower values of MOSERROR represent better signal quality, while higher values of SMOS indicate better signal quality.
3.3 Experimental results and comparisons
The compression algorithm explained in the previous section was first implemented on the Matlab 7.0.1 platform and then tested with ECG recordings on an Intel Core2 Quad 2.66 GHz processor. To evaluate the performance of the proposed compression algorithm, the MIT-BIH Arrhythmia Database and the MIT-BIH Compression Test Database were used. The MIT-BIH Arrhythmia Database consists of 48 ECG recordings sampled at 360 Hz and quantized at 11-bit resolution, while the MIT-BIH Compression Test Database consists of 168 ECG recordings sampled at 250 Hz and quantized at 12-bit resolution. Each record in both databases was first resampled at 500 Hz using a cubic spline interpolation technique, and the amplitudes of the records were then normalized between 0 and 1.
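This resample-and-normalize preprocessing can be sketched with SciPy's cubic-spline interpolator; the function name and argument layout are illustrative, not the paper's code:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_and_normalize(x, fs_in, fs_out=500):
    """Resample an ECG record to fs_out Hz with a cubic spline and
    normalise its amplitude to [0, 1], mirroring the preprocessing
    described for both databases."""
    x = np.asarray(x, dtype=float)
    t_in = np.arange(len(x)) / fs_in           # original sample times (s)
    t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)
    y = CubicSpline(t_in, x)(t_out)            # cubic-spline resampling
    y = (y - y.min()) / (y.max() - y.min())    # amplitude to [0, 1]
    return y
```

For example, one second of a 360 Hz MIT-BIH record becomes roughly 500 samples with amplitudes spanning exactly [0, 1].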
The selection of the appropriate database is very important in order to construct the VL-CSEVS. The MIT-BIH arrhythmia database was selected as the training set because it contains a large set of ECG beats and many different examples of cardiac pathologies. Then, VL-CSEVS having the unique patterns were generated by analyzing a huge number of the ECG segments obtained from this database.
The number of CSV, CEV, and the required total bit in the VL-CSEVS
1+6+3+6 = 16
1+6+3+7 = 17
In this table, bSWCB refers to the switch-codebook bit that controls the length of an incoming segment; bCi, bR, and bK are the minimum numbers of bits required to represent the gain coefficient C_i and the integers N_S and N_E, respectively.
The performance of the proposed algorithm tested on the MIT-BIH Arrhythmia Database with respect to average CR, PRD, MAXERR, and encoding and decoding times
Encoding time (s)
Decoding time (s)
The proposed compression algorithm achieves average CRs from 4:1 to 20:1 with an average PRDa varying between 1.2 and 5.6%. Since acceptable values of the PRD have been reported to be less than 9% in the literature, the proposed compression algorithm provides high CRs at very low PRD levels. Furthermore, the average encoding and decoding times of the proposed compression algorithm are 0.687 and 0.318 s, respectively.
The performance of the proposed algorithm tested on the MIT-BIH Compression Test Database with respect to average CR, MPRD, MAXERR, encoding and decoding times
Encoding time (s)
Decoding time (s)
As can be seen from Table 4, the proposed algorithm achieves average CRs from 4:1 to 20:1 with an average MPRD in the range of 1.627-8.631%. Moreover, the MAXERR, representing the local distortion, varies between 1.015 and 4.209%. Furthermore, the average encoding and decoding times of the proposed algorithm are 0.619 and 0.279 s, respectively. Figure 9 shows that the compression performance of our previous method is significantly improved by employing the VL-CSEVS in this work. It is also clearly seen from Figure 9 that the compression performance of the proposed algorithm is significantly better than the results of Hilton in terms of the MPRD.
It is important to note that in Hilton, the PRD was used as the distortion measure. Although PRD results are always smaller than MPRD results because of the mean value of the signal, the MPRD results obtained by the proposed algorithm are smaller than the PRD results reported by Hilton.
3.4 Clinical evaluation and discussion
In the clinical evaluation of our results, we have used 11 original ECG signals from the MIT-BIH Arrhythmia Database and 11 original ECG signals from the MIT-BIH Compression Test Database. These 22 original ECG signals were reconstructed at 4:1, 6:1, 8:1, 10:1, 12:1, 14:1, 16:1, 18:1, and 20:1 CRs using the proposed method. The 22 original and 198 reconstructed ECG signals were then evaluated by the cardiologists in order to validate the performance of the proposed algorithm from a clinical point of view.
In the first step of the clinical evaluation, the cardiologistb expressed his opinions by examining the original and reconstructed ECG signals without applying any formal test. He explained that the onset, offset, and duration of the segments (or intervals) of the ECG signals, such as PR, QRS, and ST, are correctly determined in the reconstructed ECG signals obtained by the proposed algorithm, even at a 20:1 CR. He pointed out that the proposed algorithm provides nearly perfect reconstruction of the QRS segments at a 20:1 CR. Although the P-wave and T-wave of the reconstructed ECG signals have more reconstruction error than the QRS segments, these distortions are not critically important in terms of diagnosis. He also explained that the quality of the reconstructed ECG signals remains acceptable at low bit rates.
On the other hand, he also emphasized that it is very difficult to obtain high CRs with low reconstruction errors when compressing Holter or stress ECGs, which are recorded during movement or exercise, since these types of ECG records contain more variation and artifacts than ECG signals recorded at rest. Therefore, the CR has to be selected by the cardiologist, depending on the ECG signal being compressed, so as to preserve the clinical information. In this context, it is an important advantage that the CR of the proposed algorithm can easily be adjusted to any desired value from 1 to 20 or higher.
Furthermore, an average opinion score was requested from the cardiologist to determine the clinical quality of the reconstructed ECG signals, and he rated the clinical quality of the proposed compression algorithm at a 20:1 CR as 4 out of 5. As a result, the clinical operational range of the proposed compression algorithm extends up to a 20:1 CR.
The average results of the clinical test of the proposed compression algorithm with respect to the CR, MOS, SMOSQRS, SMOST, SMOSP, and MOSERROR
When analyzing the MOS values given in Table 5, it is clearly seen that the quality of all reconstructed ECG signals is acceptable even at a CR of 20:1. Furthermore, the SMOSQRS results show that the proposed compression algorithm provides nearly perfect reconstruction of the QRS segments even at a CR of 20:1. In the light of the MOS and SMOSQRS results, the cardiologist pointed out that the proposed compression algorithm provides useful CRs ranging from 4:1 to 20:1. On the other hand, the SMOST and SMOSP results are lower than the SMOSQRS results, as shown in Figure 14. This is expected, since the proposed compression algorithm compresses the ECG segments with low energy further than the ECG segments with high energy.
The diagnostic performance of the proposed compression algorithm for the original ECG signals used in the clinical test
The number of original ECG signals
In conclusion, the useful range of the proposed compression algorithm is from 4:1 to 20:1 CR, depending on the ECG signal to be compressed.
We have introduced an efficient compression algorithm for ECG signals. The proposed algorithm is based on modeling ECG signals via VL-CSEVS and on residual error coding by the wavelet transform to ensure reconstruction quality. The main advantage of the proposed compression algorithm is that it provides low reconstruction errors at high CRs while preserving the diagnostic information in the reconstructed ECG signals, which has been supported by the clinical tests we have carried out. In particular, at a CR of 20:1, the proposed compression algorithm achieves almost 13% lower PRD values than the other ECG compression methods given in [21, 22, 25]. In this work, the unique-pattern VL-CSEVS are specifically designed for ECG signals by using the relationship between energy variation and clinical information.
In this work, ECG signals are segmented by energy-based segmentation so that ECG frames with high energy are represented by short segments while frames with low energy are represented by long segments. Therefore, both the size of the VL-CSEVS and the computational complexity of the searching and matching process are reduced significantly in comparison with the predefined signature and envelope vector sets proposed in our previous works [26, 27].
In conclusion, the CR of the proposed algorithm is significantly improved in comparison with our previous method. Besides the good average-CR performance, a low reconstruction error is ensured by the residual error coding.
The performance of the proposed algorithm is evaluated and compared with the three well-known ECG compression methods given in [21, 22, 25]. The results of the performance evaluations show that the proposed algorithm provides better results than the other methods in terms of the average CR, the average PRD, the average MPRD, and the MAXERR, which are well-known objective evaluation criteria. Moreover, the computational complexity of the proposed algorithm is very low, with average encoding and decoding times of almost 0.7 and 0.3 s, respectively.
In the experiments, 4-fold cross-validation is employed to expose the relationship between the CR and the PRD at different levels. The results obtained at each round show that there is almost no change in the PRD levels corresponding to the same CR values. Furthermore, the performance of the VL-CSEVS is also tested on ECG signals from a different database, the MIT-BIH Compression Test Database. During the experiments, we observed only small differences in the PRD levels at the same CR values in the worst-case condition employing this database. These experimental results show that the proposed algorithm does not need any adaptation process to reconstruct ECG signals with different characteristics. That is to say, the proposed VL-CSEVS do not need to be re-created for a specific ECG database, since they are constructed from the unique patterns extracted by examining many thousands of ECG segments and are then fixed.
We finally point out that the generation of the VL-CSEVS is carried out off-line, and the unique VL-CSEVS are fixed and located at the receiver side of the system. In other words, the unique VL-CSEVS do not need to be redesigned in order to compress and reconstruct any ECG signal. On the other hand, the encoding and decoding parts of the proposed method are on-line procedures. Considering the average encoding and decoding times, the proposed method is appropriate for real-time applications.
aEach signal in the MIT-BIH Arrhythmia Database includes a baseline of 1024 added for storage purposes. Consequently, the PRD given in (27) is computed by subtracting 1024 from each data sample. bThe clinical evaluation was carried out by Prof. Osman Akdemir, a cardiologist in the Department of Cardiology at T.C. Maltepe University, Istanbul, Turkey. cThe clinical test was carried out by Dr. Ruken Bengi Bakal, a cardiologist in the Department of Cardiology at the Kartal Kosuyolu Yuksek Ihtisas Education and Research Hospital, Istanbul, Turkey.
The author would like to offer special thanks to Prof. Siddik Yarman, Board of Trustees Chairman of ISIK University, and to Umit Guz, Assistant Professor at ISIK University, for their valuable contributions and continuous interest in this article. The author also thanks Prof. Osman Akdemir, a cardiologist in the Department of Cardiology at T.C. Maltepe University, and Dr. Ruken Bengi Bakal, a cardiologist in the Department of Cardiology at the Kartal Kosuyolu Yuksek Ihtisas Education and Research Hospital, for their valuable clinical contributions and suggestions, and the reviewers for their constructive comments, which improved the technical quality and presentation of the article. The present work was supported by the Scientific Research Fund of ISIK University, Project number 06B302.
- Rangayyan R: Biomedical Signal Analysis: A Case Study Approach. Wiley, New York; 2002.Google Scholar
- Sornmo L, Laguna P: Bioelectrical Signal Processing in Cardiac and Neurological Applications. Elsevier Academic Press, London; 2005.Google Scholar
- Jalaleddine SMS, Hutchens CG, Strattan RD, Coberly WA: Ecg data compression techniques--a unified approach. IEEE Trans Biomed Eng 1990, 37(4):329-343. 10.1109/10.52340View ArticleGoogle Scholar
- Cox JR, Nolle FM, Fozzard A, Oliver G: Aztec, a preprocessing program for real-time ecg rhythm analysis. IEEE Trans Biomed Eng 1968, 15(4):128-129.View ArticleGoogle Scholar
- Abenstein JP, Tompkins WJ: New data reduction algorithm for real-time ecg analysis. IEEE Trans Biomed Eng 1982, 29(1):43-48.View ArticleGoogle Scholar
- Ishijima M, Shin SB, Hostetter GH, Sklansky J: Scan-along polygon approximation for data compression of electrocardiograms. Med Biol Eng Comput 1983, 30(11):723-729.Google Scholar
- Dipersio DA, Barr RC: Evaluation of the fan method of adaptive sampling on human electrocardiogram. Med Biol Eng Comput 1985, 23(5):401-410. 10.1007/BF02448926View ArticleGoogle Scholar
- Jalaleddine SMS, Hutchens CG: Saies--a new ecg data compression algorithm. J Clin Eng 1990, 15(1):45-51.View ArticleGoogle Scholar
- Cardenas-Barrera JL, Lorenzo-Ginori JV: Mean-shape vector quantizer for ecg signal compression. IEEE Trans Biomed Eng 1999, 46(1):62-70. 10.1109/10.736756View ArticleGoogle Scholar
- Sun C-C, Tai S-C: Beat-based ecg compression using gain-shape vector quantization. IEEE Trans Biomed Eng 2005, 52(11):1182-1888.View ArticleGoogle Scholar
- Blanchett T, Kember GC, Fenton GA: Klt-based quality controlled compression of single-lead ecg. IEEE Trans Biomed Eng 1998, 45(7):942-945. 10.1109/10.686803View ArticleGoogle Scholar
- Castells F, Laguna P, Srnmo L, Bollmann A, Roig JM: Principle component analysis in ecg signal porcessing. EURASIP J. Appl. Signal Process. (EURASIP JASP), Hindawi, Special issue on Adv. Electrocardiogr. Signal Process Anal 2007, 2007: 1-21. (Article ID 74580), doi:10.1155/2007/74580Google Scholar
- Batista LV, Melcher EUK, Carvalho LC: Compression of ECG signals by optimized quantization of discrete cosine transform coefficients. Med Eng Phys 2001, 23(2):127-134. doi:10.1016/S1350-4533(01)00030-3
- Wei J-J, Chang C-J, Chou N-K, Jan G-J: ECG data compression using truncated singular value decomposition. IEEE Trans Inf Technol Biomed 2001, 5(4):290-299. doi:10.1109/4233.966104
- Hilton ML: Wavelet and wavelet packet compression of electrocardiograms. IEEE Trans Biomed Eng 1997, 44(5):394-402. doi:10.1109/10.568915
- Miaou S-G, Yen H-L, Lin C-L: Wavelet-based ECG compression using dynamic vector quantization with tree codevectors in single codebook. IEEE Trans Biomed Eng 2002, 49(7):671-680. doi:10.1109/TBME.2002.1010850
- Tai S-C, Sun C-C, Yan W-C: A 2-D ECG compression method based on wavelet transform and modified SPIHT. IEEE Trans Biomed Eng 2005, 52(6):999-1008. doi:10.1109/TBME.2005.846727
- Miaou S-G, Chao S-N: Wavelet-based lossy-to-lossless ECG compression in a unified vector quantization framework. IEEE Trans Biomed Eng 2005, 52(3):539-543. doi:10.1109/TBME.2004.842791
- Kim BS, Yoo SK, Lee MH: Wavelet-based low-delay ECG compression algorithm for continuous ECG transmission. IEEE Trans Inf Technol Biomed 2006, 10(1):77-83. doi:10.1109/TITB.2005.856854
- Ku C-T, Hung K-C, Wu T-C, Wang H-S: Wavelet-based ECG data compression system with linear quality control scheme. IEEE Trans Biomed Eng 2010, 57(6):1399-1409.
- Lu Z, Kim DY, Pearlman WA: Wavelet compression of ECG signals by the set partitioning in hierarchical trees (SPIHT) algorithm. IEEE Trans Biomed Eng 2000, 47(7):849-856. doi:10.1109/10.846678
- Benzid R, Marir F, Bouguechal N: Electrocardiogram compression method based on the adaptive wavelet coefficients quantization combined to a modified two-role encoder. IEEE Signal Process Lett 2007, 14(6):373-376.
- Nave G, Cohen A: ECG compression using long-term prediction. IEEE Trans Biomed Eng 1993, 40(9):877-885. doi:10.1109/10.245608
- Zigel Y, Cohen A, Katz A: ECG signal compression using analysis by synthesis coding. IEEE Trans Biomed Eng 2000, 47(10):1308-1316. doi:10.1109/10.871403
- Blanco-Velasco M, Cruz-Roldan F, Godino-Llorente J, Barner K: ECG compression with retrieved quality guaranteed. Electron Lett 2004, 40(23):1466-1467. doi:10.1049/el:20046382
- Guz U, Gurkan H, Yarman BS: A new method to represent speech signals via predefined signature and envelope sequences. EURASIP J Appl Signal Process 2007, 2007:1-17 (Article ID 56382). doi:10.1155/2007/56382
- Gurkan H, Guz U, Yarman BS: Modeling of electrocardiogram signals using predefined signature and envelope vector sets. EURASIP J Appl Signal Process 2007, 2007:1-12 (Article ID 12071). doi:10.1155/2007/12071
- Gurkan H, Guz U, Yarman BS: EEG signal compression based on classified signature and envelope vector sets. Int J Circuit Theory Appl 2009, 37(2):351-363.
- Guz U: A novel image compression method based on classified energy and pattern building blocks. EURASIP J Adv Signal Process 2011, 2011:1-20 (Article ID 730694). doi:10.1155/2011/730694
- Blanco-Velasco M, Cruz-Roldan F, Godino-Llorente J, Blanco-Velasco J, Armiens-Aparicio C, Ferreras F: On the use of PRD and CR parameters for ECG compression. Med Eng Phys 2005, 27:798-802. doi:10.1016/j.medengphy.2005.02.007
- Zigel Y, Cohen A, Katz A: The weighted diagnostic distortion (WDD) measure for ECG signal compression. IEEE Trans Biomed Eng 2000, 47(11):1422-1430. doi:10.1109/TBME.2000.880093
- Moody G: The MIT-BIH Arrhythmia Database CD-ROM. 2nd edition. Harvard-MIT Division of Health Sciences and Technology, Cambridge; 1992.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.