- Research Article
- Open Access
Medical Image Fusion via an Effective Wavelet-Based Approach
© Yong Yang et al. 2010
- Received: 30 July 2009
- Accepted: 10 March 2010
- Published: 21 April 2010
A novel wavelet-based approach for medical image fusion is presented, developed by taking into account not only the characteristics of the human visual system (HVS) but also the physical meaning of the wavelet coefficients. After the medical images to be fused are decomposed by the wavelet transform, different fusion schemes are proposed for combining the coefficients: coefficients in the low-frequency band are selected with a visibility-based scheme, and coefficients in the high-frequency bands are selected with a variance-based method. To overcome the presence of noise and guarantee the homogeneity of the fused image, all the coefficients are subsequently subjected to a window-based consistency verification process. The fused image is finally constructed by the inverse wavelet transform with all the composite coefficients. To quantitatively evaluate the performance of the proposed method, a series of experiments and comparisons with some existing fusion methods are carried out in the paper. Experimental results on simulated and real medical images indicate that the proposed method is effective and achieves satisfactory fusion results.
- Discrete Wavelet Transform
- Image Fusion
- Wavelet Coefficient
- Fusion Rule
- Cross Entropy
Nowadays, with the rapid development of high technology and modern instrumentation, medical imaging has become a vital component of a large number of applications, including diagnosis, research, and treatment. In order to provide more accurate clinical information for physicians to deal with medical diagnosis and evaluation, multimodality medical images are needed, such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), and positron emission tomography (PET) images. These multimodality medical images usually provide complementary and occasionally conflicting information. For example, the CT image can depict dense structures like bones and implants with less distortion, but it cannot detect physiological changes, while the MR image can provide information on normal and pathological soft tissues, but it does not capture bone information. In this case, one kind of image alone may not be sufficient to meet the clinical requirements of the physicians. Therefore, the fusion of multimodal medical images is necessary, and it has become a promising and very challenging research area in recent years [2, 3].
Image fusion can be broadly defined as the process of combining multiple input images, or some of their features, into a single image without the introduction of distortion or loss of information. The aim of image fusion is to integrate complementary as well as redundant information from multiple images to create a fused output image. The new image generated should therefore contain a more accurate description of the scene than any of the individual source images and be more suitable for human visual and machine perception or for further image processing and analysis tasks. For medical image fusion, fusing the images can often yield additional clinical information not apparent in the separate images. Another advantage is that the storage cost can be reduced by storing just the single fused image instead of the multisource images.
So far, many techniques for image fusion have been proposed in the literature, and a thorough overview of these methods is available. According to the stage at which the combination mechanism takes place, image fusion methods can generally be grouped into three categories, namely, pixel (or sensor) level, feature level, and decision level. Since pixel-level fusion has the advantage that the images used contain the original measured quantities, and its algorithms are computationally efficient and easy to implement, most image fusion applications employ pixel-level methods. Therefore, in this paper we are concerned with pixel-level fusion, and when the terms "image fusion" or "fusion" are used, pixel-level fusion is intended.
The simplest way to fuse images is to take the average of the two images pixel by pixel. However, this method usually leads to undesirable side effects such as reduced contrast. A more robust algorithm for pixel-level fusion is the weighted average approach, in which the fused pixel is estimated as the weighted average of the corresponding input pixels. However, the weight estimation usually requires a user-specified threshold. Other methods have been developed, such as intensity-hue-saturation (IHS), principal component analysis (PCA), and the Brovey transform. These techniques are easy to understand and implement. However, although the fused images obtained by these methods have high spatial quality, they usually suffer from spectral degradation; that is, they can yield a fused image of high spatial resolution but overlook the quality of the spectral information, which is especially crucial for remote sensing image fusion. Artificial neural networks (ANNs) have also been introduced for image fusion. However, the performance of an ANN depends on the sample images, which is not an appealing characteristic. Yang et al. used a statistical approach to fuse the images; however, in their method the distortion is modeled as a mixture of Gaussian probability density functions (pdfs), which is a limiting assumption. Because real-world objects usually contain structures at many different scales or resolutions, and multiresolution or multiscale approaches provide a means to exploit this fact, multiresolution techniques have attracted more and more interest in image fusion.
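As a concrete illustration of the simplest scheme above, pixel-by-pixel averaging takes only a few lines of NumPy. This is a minimal sketch; the function name and the toy arrays are ours, purely for illustration:

```python
import numpy as np

def average_fusion(img_a, img_b):
    # Pixel-by-pixel average of two registered source images.
    # Cast to float first to avoid uint8 overflow in the sum.
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0

# Toy 2x2 "images"
a = np.array([[100, 200], [50, 250]], dtype=np.uint8)
b = np.array([[0, 100], [150, 50]], dtype=np.uint8)
fused = average_fusion(a, b)  # [[50., 150.], [100., 150.]]
```

Note how the bright pixel 250 is pulled down to 150: exactly the contrast reduction mentioned above.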
Multiresolution techniques fall into two families: pyramid transforms and wavelet transforms. In pyramid fusion, the input images are first transformed into their multiresolution pyramid representations. The fusion process then creates a new fused pyramid from the input image pyramids according to a certain fusion rule. The fused image is finally reconstructed by performing an inverse multiresolution transform. Examples of this approach include the Laplacian pyramid, the gradient pyramid, the contrast pyramid, the ratio-of-low-pass pyramid, and the morphological pyramid. However, because the pyramid decomposition fails to introduce any spatial orientation selectivity, the above-mentioned methods often cause blocking effects in the fusion results. Matsopoulos et al. earlier applied the morphological pyramid method to fuse MR and CT images, but this method can occasionally create many undesired edges. Another family of multiresolution fusion techniques is the wavelet-based methods, which usually use the discrete wavelet transform (DWT). Since the DWT of image signals produces a nonredundant image representation, it can provide better spatial and spectral localization of image information than other multiresolution representations. Research results reveal that DWT schemes have some advantages over pyramid schemes, such as increased directional information, no blocking artifacts of the kind that often occur in pyramid-fused images, and better signal-to-noise ratios. Therefore, wavelet-based methods have been widely used for image fusion [5, 18, 20–23], and two detailed surveys can be found in [24, 25]. Although there is considerable wavelet-based fusion work today, most of it concerns remote sensing images, multifocus images, and infrared images, while less work has been done on medical images. Yu et al.
fused medical images by the wavelet-based method with a maximum-selection fusion rule, which is similar to Burt's method. However, this method suffers from noise and artifacts, as these tend to have higher contrast. Qu et al. used the modulus maxima selection criterion for the wavelet transform coefficients in medical image fusion. The disadvantage of this method is that it considers only wavelet coefficient (pixel) values when making decisions about constructing the fused image. More recently, Cheng et al. proposed a weighted wavelet-based method for the fusion of PET and CT images. However, their method confronts the problem of selecting the weight parameters; that is, it depends on the weights given by the user, and different weights will lead to different fused results.
In this paper, a novel and fully automated wavelet-based method for medical image fusion is proposed. The main contribution of this work is that, after the source images are decomposed by the wavelet transform, the coefficients of the low-frequency portion and the high-frequency portions are combined with different fusion schemes. This new technique is developed by taking into account not only the characteristics of the human visual system (HVS) with respect to the wavelet coefficients but also the physical meaning of the coefficients. Therefore, the coefficients of the low-frequency and high-frequency bands are treated in different ways: the former are selected with a visibility-based scheme, and the latter are selected by a maximum local variance scheme. In addition, to suppress noise and guarantee the homogeneity of the fused image, all the coefficients are finally subjected to a consistency verification. The fused image is then obtained by an inverse wavelet transform of the coefficients from all frequency bands. Both qualitative and quantitative performance evaluations are carried out in the paper.
The remainder of the paper is organized as follows. The related wavelet-based image fusion technique is reviewed in Section 2. The proposed method for fusing multimodal medical images is described in Section 3. Experimental results and analysis are presented in Section 4, and conclusions are given in Section 5.
The original concept and theory of wavelet-based multiresolution analysis came from Mallat. The wavelet transform is a mathematical tool that can detect local features in a signal. It can also be used to decompose two-dimensional (2D) signals, such as 2D gray-scale images, into different resolution levels for multiresolution analysis. The wavelet transform has been widely used in many areas, such as texture analysis, data compression, feature detection, and image fusion. In this section, we briefly review and analyze the wavelet-based image fusion technique.
2.1. Wavelet Transform
Wavelet transforms provide a framework in which a signal is decomposed, with each level corresponding to a coarser resolution, that is, a lower-frequency band together with the associated higher-frequency bands. There are two main groups of transforms, continuous and discrete. Of particular interest is the DWT, which applies a two-channel filter bank (with downsampling) iteratively to the low-pass band (initially the original signal). The wavelet representation then consists of the low-pass band at the lowest resolution and the high-pass bands obtained at each step. This transform is invertible and nonredundant.
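A minimal sketch of one DWT analysis/synthesis level follows, using the Haar filter pair for brevity (the experiments in this paper use db8; the function names are ours). One analysis step splits an image into one low-pass (LL) and three high-pass (LH, HL, HH) subbands, and the synthesis step reconstructs the image exactly, which illustrates that the transform is invertible and nonredundant:

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar DWT; x must have even dimensions."""
    # Filter and downsample along rows (columns of pixels).
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Filter and downsample along columns, yielding the four subbands.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: perfect reconstruction from the subbands."""
    lo = np.zeros((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2, :] = (ll + lh) / np.sqrt(2)
    lo[1::2, :] = (ll - lh) / np.sqrt(2)
    hi = np.zeros_like(lo)
    hi[0::2, :] = (hl + hh) / np.sqrt(2)
    hi[1::2, :] = (hl - hh) / np.sqrt(2)
    x = np.zeros((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2] = (lo + hi) / np.sqrt(2)
    x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x
```

Iterating `haar_dwt2` on the returned `ll` band yields the multilevel decomposition used in fusion; note that the four subbands together contain exactly as many coefficients as the input (nonredundancy).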
In order to develop a multiresolution analysis, a scaling function φ(t) is needed, together with its dilated and translated versions φ_{j,k}(t) = 2^{-j/2} φ(2^{-j}t - k). According to the characteristics of the scale spaces spanned by the scaling function and the wavelet function, the signal can be decomposed into its coarse part and details of various sizes by projecting it onto the corresponding spaces.
2.2. Fusion with Wavelet Transform
Step 1. The images to be fused are registered to ensure that the corresponding pixels are aligned.
Step 2. These images are decomposed by the wavelet transform. An N-level decomposition will include one low-frequency portion (the low-low band) and 3N high-frequency portions (low-high, high-low, and high-high bands).
Step 3. The transform coefficients of the different portions or bands are combined according to a certain fusion rule.
Step 4. The fused image is constructed by performing an inverse wavelet transform on the combined transform coefficients from Step 3.
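As a deliberately simple illustration of Step 3, the sketch below combines one subband from each source image with two conventional rules: averaging for the low-frequency (LL) band and absolute-maximum selection for the detail bands. These are the baseline rules surveyed in this section, not the visibility- and variance-based rules proposed in Section 3, and the function name is ours:

```python
import numpy as np

def fuse_coefficients(c_a, c_b, band):
    """Combine one wavelet subband from each registered source image."""
    if band == "LL":
        # Low-frequency band: simple average of the two approximations.
        return (c_a + c_b) / 2.0
    # Detail bands: keep the coefficient of larger magnitude, since large
    # wavelet coefficients signal salient features such as edges.
    return np.where(np.abs(c_a) >= np.abs(c_b), c_a, c_b)
```

Applying this function to every subband of both decompositions, then inverting the transform (Step 4), completes a basic fusion pipeline.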
As shown in the fusion block of Figure 2, it is easy to see that the core step in wavelet-based image fusion is coefficient combination, namely, the fusion rule, because it decides how to merge the coefficients so that a high-quality fused image can be obtained. For this kind of image fusion method, the key issue is therefore the design of the fusion rule, and it deserves particular attention. Over the past years, various fusion rules have been proposed, which can be divided into pixel-based and window-based methods. The most widely used pixel-based fusion rule is the aforementioned maximum-selection scheme. This method can select the salient features from the source images; however, it is sensitive to noise and artifacts, as these tend to have higher contrast. As a result, this method easily introduces noise and artifacts into the fused image, which consequently reduces the resultant image quality. The averaging fusion rule is another pixel-based method, and it can lead to a stabilization of the fusion result. However, this scheme tends to blur images and reduce the contrast of features appearing in only one image. More complex fusion rules, such as window-based or region-based ones, have also been proposed because these types of schemes are more robust than pixel-based schemes against image misregistration. Burt and Kolczynski proposed a window-based weighted average fusion rule. However, the weights in this scheme rely on a user-predefined threshold. Li et al. used an area-based maximum-selection rule to determine which of the inputs is likely to contain the most useful information by considering the maximum absolute variance value of the central coefficients within a window. Although this method has been proved better than the pyramid-based methods, its disadvantage is that it treats the wavelet coefficients of the low-frequency band and the high-frequency bands in the same way.
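The window-based idea of Li et al. can be sketched as follows: at each position, compute the variance of the coefficients in a small surrounding window in each source, and keep the coefficient from the source whose window has the larger variance. This is a generic reconstruction of the scheme, not the authors' exact code; the edge padding and the tie-breaking rule are our assumptions:

```python
import numpy as np

def local_variance(c, radius=1):
    """Variance over a (2*radius+1)^2 window around each position,
    computed with edge padding so the output matches the input shape."""
    p = np.pad(c, radius, mode="edge")
    win = 2 * radius + 1
    # Stack every shifted view of the padded array; each view is one
    # window offset, so the variance along axis 0 is the window variance.
    views = [p[i:i + c.shape[0], j:j + c.shape[1]]
             for i in range(win) for j in range(win)]
    return np.stack(views).var(axis=0)

def variance_select(c_a, c_b, radius=1):
    """Window-based rule: at each position keep the coefficient whose
    surrounding window shows the larger variance (more local activity)."""
    va, vb = local_variance(c_a, radius), local_variance(c_b, radius)
    return np.where(va >= vb, c_a, c_b)
```

Given a textured region in one source and a flat region in the other, this rule consistently picks the textured source, regardless of argument order.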
However, as we know, in many applications the ultimate user or interpreter of the fused image is a human, so human perception should be considered in image fusion. According to theoretical models of the HVS, the human eyes have different sensitivities to the wavelet coefficients of the low-resolution band and the high-resolution bands [32, 33]. Hence, the above fusion rules, which treat all the coefficients in the same way, have some disadvantages.
3.1. Low-Frequency Band Fusion
In this paper, to simplify the description of the different alternatives available in forming a fusion rule, as in [5, 24] we consider only two source images, A and B, and the fused image F. The method can of course be easily extended to more than two images. Generally, an image I has a multiscale decomposition (MSD) representation denoted D_I; hence we will encounter D_A, D_B, and D_F. Let p = (m, n, k, l) indicate the index corresponding to a particular MSD coefficient, where m and n indicate the spatial position in a given frequency band, k is the decomposition level, and l is the frequency band of the MSD representation. Therefore, D_I(p) denotes the MSD value of the corresponding coefficient at position (m, n), decomposition level k, and frequency band l.
3.2. High Frequency Bands Fusion
It is worth noting again that the high-frequency bands referred to here include the vertical, horizontal, and diagonal high-frequency components of the image, respectively. Therefore, the fusion process should be performed in all these domains.
3.3. Consistency Verification
Through the above three procedures, the combined coefficients are subjected to an inverse wavelet transform, and the fused image is consequently achieved. Thus, the steps of our fusion approach can be briefly summarized as follows.
Step 1. Register the multimodal medical images.
Step 2. Decompose the images into 3-4 wavelet planes (resolution levels).
Step 3. Select the wavelet coefficients of the low-frequency band by (6) and (7), and the wavelet coefficients of the high-frequency bands by (8).
Step 4. Subject the coefficients of both the low-frequency and high-frequency bands to the consistency verification of (10) and (18).
Step 5. Perform the inverse wavelet transform with the combined coefficients obtained from Step 4.
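Equations (10) and (18), which define the consistency verification of Step 4, are not reproduced here; a common window-based form is a majority filter on the binary decision map, and the sketch below follows that assumption (all names are illustrative). If the center coefficient was taken from one image while most of its neighbors were taken from the other, the center decision is flipped:

```python
import numpy as np

def consistency_verification(decision, radius=1):
    """Majority filter on a binary decision map (True = take the
    coefficient from image A). A decision is kept only if it agrees
    with the majority of the (2*radius+1)^2 window around it."""
    p = np.pad(decision.astype(int), radius, mode="edge")
    win = 2 * radius + 1
    # Count the "True" votes inside each window (center included).
    votes = sum(p[i:i + decision.shape[0], j:j + decision.shape[1]]
                for i in range(win) for j in range(win))
    return votes > (win * win) // 2
```

An isolated decision surrounded by opposite ones is flipped, which removes salt-and-pepper artifacts from the decision map and improves the homogeneity of the fused image.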
In this section, the application results of the proposed wavelet-based method for medical image fusion are presented. The performance of the proposed method is compared with those of the pixel averaging method, the gradient pyramid method, and the conventional DWT method with the maximum-selection rule. Since image registration is beyond the scope of this paper, as in most of the literature [5, 36], in all test cases we assume the source medical images to be in perfect registration. A thorough survey of image registration techniques is available. We use Daubechies' db8 wavelet, with a decomposition level of 3, as the wavelet basis for both the DWT method and the proposed method. The window size used for calculating the variance is one that has been proved effective by many researchers [38, 39]. We carried out comparisons with different values of the visual constant and found that the fusion result is insensitive to this parameter; therefore, the parameter is chosen to be 0.7 in this paper. Furthermore, we invited a radiologist (Associate Professor Xianjun Zeng, Department of Medical Imaging, the First Affiliated Hospital of Nanchang University) to perform a subjective evaluation (visual assessment) of all the experiments.
Therefore, in order to better evaluate the above fusion methods, a quantitative assessment of the performance of the four methods is carried out. However, for image fusion it is often hard to obtain an ideal or reference composite image, so the above MI metric cannot be used here. Consequently, four other evaluation criteria are introduced and employed in this paper [41, 43].
(i) Standard Deviation
where F(i, j) is the pixel value of the fused image at position (i, j) and μ is the mean value of the image. The standard deviation is the most common measure of statistical dispersion and can be used to evaluate how widely the gray values are spread in an image. Thus, the larger the standard deviation, the better the result.
(ii) Average Gradient
where F(i, j) has the same meaning as in the standard deviation. The average gradient reflects the clarity of the fused image and is used to measure its spatial resolution; that is, a larger average gradient means a higher resolution.
(iii) Information Entropy
where L is the number of gray levels and p_i equals the ratio between the number of pixels whose gray value is i and the total number of pixels in the image. The information entropy measures the richness of information in an image. Thus, the higher the entropy, the better the performance.
(iv) Cross Entropy (CE)
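Since the formulas for the four criteria are not reproduced above, the sketch below implements their standard definitions in NumPy; the base-2 logarithm, the 256-level gray scale, and the function names are our assumptions:

```python
import numpy as np

def std_dev(img):
    """Standard deviation: spread of the gray values around the mean."""
    return np.sqrt(np.mean((img - img.mean()) ** 2))

def average_gradient(img):
    """Mean of sqrt((dFx^2 + dFy^2)/2) over the image: clarity measure."""
    f = img.astype(np.float64)
    gx = np.diff(f, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(f, axis=0)[:, :-1]   # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cross_entropy(src, fused, levels=256):
    """Cross entropy between the source and fused gray-level histograms;
    smaller values mean the fused image differs less from the source."""
    eps = 1e-12  # avoid division by zero for empty fused bins
    hs = np.bincount(src.astype(np.uint8).ravel(), minlength=levels) / src.size
    hf = np.bincount(fused.astype(np.uint8).ravel(), minlength=levels) / fused.size
    mask = hs > 0
    return np.sum(hs[mask] * np.log2(hs[mask] / (hf[mask] + eps)))
```

A constant image gives zero standard deviation, zero average gradient, and zero entropy, while an image split evenly between two gray levels has an entropy of exactly one bit.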
Quantitative evaluation results of the four different fusion methods in Figure 7.
Quantitative evaluation results of the four different fusion methods in Figure 8.
The fusion of multimodal medical images plays an important role in many clinical applications because it can provide more accurate information than any individual source image. This paper presents a novel wavelet-based approach for medical image fusion, which consists of three steps. In the first step, the medical images to be fused are decomposed into subimages by the wavelet transform. In the second step, after considering the characteristics of the HVS and the physical meaning of the wavelet coefficients, the coefficients of the low-frequency band and the high-frequency bands are combined with different fusion strategies: the former are selected using a maximum-visibility scheme, and the latter are selected by a maximum local variance rule. To improve the quality of the resultant image, all the combined coefficients are then subjected to a window-based consistency verification. In the last step, the fused image is constructed by the inverse wavelet transform with the composite coefficients. The performance of the proposed method is qualitatively and quantitatively compared with some existing fusion approaches. Experimental results show that the proposed method preserves more useful information in the fused image, with higher spatial resolution and less difference from the source images.
This work was supported by the National Natural Science Foundation of China under Grant no. 60963012, by the Ministry of Education, Science and Technology (MEST) and the Korea Industrial Technology Foundation (KOTEF) through the Human Resource Training Project for Regional Innovation, by the second stage of Brain Korea 21, by the China Postdoctoral Special Science Foundation under Grant no. 200902614, and by the Science and Technology Research Project of the Education Department of Jiangxi Province under Grants no. GJJ10125 and no. GJJ09287. The authors also thank the anonymous referees for their valuable suggestions.
- Maes F, Vandermeulen D, Suetens P: Medical image registration using mutual information. Proceedings of the IEEE 2003, 91(10):1699-1721. 10.1109/JPROC.2003.817864View ArticleMATHGoogle Scholar
- Barra V, Boire J-Y: A general framework for the fusion of anatomical and functional medical images. NeuroImage 2001, 13(3):410-424. 10.1006/nimg.2000.0707View ArticleGoogle Scholar
- Zhu Y-M, Cochoff SM: An object-oriented framework for medical image registration, fusion, and visualization. Computer Methods and Programs in Biomedicine 2006, 82(3):258-267. 10.1016/j.cmpb.2006.04.007View ArticleGoogle Scholar
- Petrovic VS, Xydeas CS: Gradient-based multiresolution image fusion. IEEE Transactions on Image Processing 2004, 13(2):228-237. 10.1109/TIP.2004.823821View ArticleMATHGoogle Scholar
- Zhang Z, Blum RS: A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proceedings of the IEEE 1999, 87(8):1315-1326. 10.1109/5.775414View ArticleGoogle Scholar
- Wang Y, Lohmann B: Multisensor image fusion: concept, method and applications. Institute of Automatic Technology, University of Bremen, Bremen, Germany; 2000.Google Scholar
- Shivappa ST, Rao BD, Trivedi MM: An iterative decoding algorithm for fusion of multimodal information. EURASIP Journal on Advances in Signal Processing 2008: 10 pages.Google Scholar
- Redondo R, Sroubek F, Fischer S, Cristobal G: Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique. Information Fusion 2009, 10(2):163-171. 10.1016/j.inffus.2008.08.006View ArticleGoogle Scholar
- Li S, Yang B: Multifocus image fusion using region segmentation and spatial frequency. Image and Vision Computing 2008, 26(7):971-979. 10.1016/j.imavis.2007.10.012View ArticleGoogle Scholar
- Pradhan PS, King RL, Younan NH, Holcomb DW: Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion. IEEE Transactions on Geoscience and Remote Sensing 2006, 44(12):3674-3686.View ArticleGoogle Scholar
- Li S, Kwok JT, Wang Y: Multifocus image fusion using artificial neural networks. Pattern Recognition Letters 2002, 23(8):985-997. 10.1016/S0167-8655(02)00029-6View ArticleMATHGoogle Scholar
- Yang J, Blum RS: A statistical signal processing approach to image fusion for concealed weapon detection. Proceedings of the IEEE International Conference on Image Processing, 2002 1: 513-516.View ArticleGoogle Scholar
- Burt PJ, Adelson EH: The Laplacian pyramid as a compact image code. IEEE Transactions on Communications 1983, 31(4):532-540. 10.1109/TCOM.1983.1095851View ArticleGoogle Scholar
- Burt PJ, Kolczynski RJ: Enhanced image capture through fusion. Proceedings of the 4th IEEE International Conference on Computer Vision (ICCV '93), 1993 173-182.Google Scholar
- Toet A, van Ruyven JJ, Valeton JM: Merging thermal and visual images by a contrast pyramid. Optical Engineering 1989, 28(7):789-792.View ArticleGoogle Scholar
- Toet A: Image fusion by a ratio of low-pass pyramid. Pattern Recognition Letters 1989, 9(4):245-253. 10.1016/0167-8655(89)90003-2View ArticleMATHGoogle Scholar
- Toet A: A morphological pyramidal image decomposition. Pattern Recognition Letters 1989, 9(4):255-261. 10.1016/0167-8655(89)90004-4View ArticleMATHGoogle Scholar
- Li H, Manjunath BS, Mitra SK: Multisensor image fusion using the wavelet transform. Graphical Models and Image Processing 1995, 57(3):235-245. 10.1006/gmip.1995.1022View ArticleGoogle Scholar
- Matsopoulos GK, Marshall S: Application of morphological pyramids: fusion of MR and CT phantoms. Journal of Visual Communication and Image Representation 1995, 6(2):196-207. 10.1006/jvci.1995.1018View ArticleGoogle Scholar
- Chipman LJ, Orr TM, Graham LN: Wavelets and image fusion. Proceedings of the IEEE International Conference on Image Processing, 1995 3: 248-251.Google Scholar
- Pu T, Ni G: Contrast-based image fusion using the discrete wavelet transform. Optical Engineering 2000, 39(8):2075-2082. 10.1117/1.1303728View ArticleGoogle Scholar
- Ma H, Jia CY, Liu S: Multisource image fusion based on wavelet transform. International Journal of Information Technology 2005, 11(7):81-91.Google Scholar
- Acerbi-Junior FW, Clevers JGPW, Schaepman ME: The assessment of multi-sensor image fusion using wavelet transforms for mapping the Brazilian Savanna. International Journal of Applied Earth Observation and Geoinformation 2006, 8(4):278-288. 10.1016/j.jag.2006.01.001View ArticleGoogle Scholar
- Pajares G, Cruz JMDL: A wavelet-based image fusion tutorial. Pattern Recognition 2004, 37(9):1855-1872. 10.1016/j.patcog.2004.03.010View ArticleGoogle Scholar
- Amolins K, Zhang Y, Dare P: Wavelet based image fusion techniques—an introduction, review and comparison. ISPRS Journal of Photogrammetry & Remote Sensing 2007, 62(4):249-263. 10.1016/j.isprsjprs.2007.05.009View ArticleGoogle Scholar
- Yu LF, Zu DL, Wang WD, Bao SL: Multi-modality medical image fusion based on wavelet analysis and quality evaluation. Journal of Systems Engineering and Electronics 2001, 12(1):42-48.Google Scholar
- Qu GH, Zhang DL, Yan PF: Medical image fusion by wavelet transform modulus maxima. Optics Express 2001, 9(4):184-190. 10.1364/OE.9.000184View ArticleGoogle Scholar
- Garg S, Kiran KU, Mohan R, Tiwary US: Multilevel medical image fusion using segmented image by level set evolution with region competition. Proceedings of the 27th Annual International Conference of the IEEE Engineering in Medicine and Biology (EMBS '06), 2006 7680-7683.Google Scholar
- Cheng SL, He JM, Lv ZW: Medical image of PET/CT weighted fusion based on wavelet transform. Proceedings of the 2nd International Conference on Bioinformatics and Biomedical Engineering (iCBBE '08), 2008 2523-2525.Google Scholar
- Mallat SG: A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 1989, 11(7):674-693. 10.1109/34.192463View ArticleMATHGoogle Scholar
- Nikolov SG, Hill P, Bull DR, Canagarajah CN: Wavelets for image fusion. In Wavelets in Signal and Image Analysis, Computational Imaging and Vision Series. Kluwer Academic Publishers, Dordrecht, The Netherlands; 2001:213-244.View ArticleGoogle Scholar
- Lewis AS, Knowles G: Image compression using the 2-D wavelet transform. IEEE Transactions on Image Processing 1992, 1(2):244-250. 10.1109/83.136601View ArticleGoogle Scholar
- Barni M, Bartolini F, Piva A: Improved wavelet-based watermarking through pixel-wise masking. IEEE Transactions on Image Processing 2001, 10(5):783-791. 10.1109/83.918570View ArticleMATHGoogle Scholar
- Huang JW, Yun QS, Dai XH: A segmentation-based image coding algorithm using the features of human vision system. Journal of Image and Graphics 1999, 4(5):400-404.Google Scholar
- Watson AB: Efficiency of a model human image code. Journal of the Optical Society of America. A 1987, 4(12):2401-2417. 10.1364/JOSAA.4.002401View ArticleGoogle Scholar
- Mitianoudis N, Stathaki T: Pixel-based and region-based image fusion schemes using ICA bases. Information Fusion 2007, 8(2):131-142. 10.1016/j.inffus.2005.09.001View ArticleMATHGoogle Scholar
- Zitova B, Flusser J: Image registration methods: a survey. Image and Vision Computing 2003, 21(11):977-1000. 10.1016/S0262-8856(03)00137-9View ArticleGoogle Scholar
- Liu G-X, Yang W-H: A wavelet-decomposition-based image fusion scheme and its performance evaluation. Acta Automatica Sinica 2002, 28(6):927-934.Google Scholar
- Li M, Zhang XY, Mao J: Neighboring region variance weighted mean image fusion based on wavelet transform. Foreign Electronic Measurement Technology 2008, 27(1):5-6.Google Scholar
- Piella G: A general framework for multiresolution image fusion: from pixels to regions. Information Fusion 2003, 4(4):259-280. 10.1016/S1566-2535(03)00046-0View ArticleGoogle Scholar
- Shi WZ, Zhu CQ, Tian Y, Nichol J: Wavelet-based image fusion and quality assessment. International Journal of Applied Earth Observation and Geoinformation 2005, 6(3-4):241-251. 10.1016/j.jag.2004.10.010View ArticleGoogle Scholar
- Zheng YF, Essock EA, Hansen BC, Haun AM: A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Information Fusion 2007, 8(2):177-192. 10.1016/j.inffus.2005.04.003View ArticleGoogle Scholar
- Li M, Cai W, Tan Z: A region-based multi-sensor image fusion scheme using pulse-coupled neural network. Pattern Recognition Letters 2006, 27(16):1948-1956. 10.1016/j.patrec.2006.05.004View ArticleGoogle Scholar
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.