 Research
 Open access
Balloon energy based on parametric active contour and directional Walsh–Hadamard transform and its application in tracking of texture object in texture background
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 253 (2012)
Abstract
One of the popular approaches to object boundary detection and tracking is active contour models (ACM). This article presents a new balloon energy in a parametric active contour for tracking a texture object against a texture background. By adding the balloon energy to the energy function of the parametric ACM, precise detection and tracking of a texture target in a texture background is achieved. The texture features of contour and object points are calculated using the directional Walsh–Hadamard transform, a modified version of the Walsh–Hadamard transform. By comparing the texture features of the contour points with those of the target object, the movement direction of the balloon is determined, whereupon the contour curve is expanded or shrunk to adapt to the target boundaries. The tracking process is repeated until the last frame. A comparison between our method and the moment-based active contour method demonstrates that our method is more effective in tracking object boundary edges in video streams with a changing background. Consequently, the tracking precision of our method is higher; in addition, it converges more rapidly due to its lower complexity.
1. Introduction
Object tracking is one of the most interesting topics in many computer vision applications such as traffic monitoring in intelligent transportation systems, video surveillance, medical applications, military object tracking, object-based video compression, etc. [1–4]. Detecting and computing the motion of an object in an image sequence or video is called tracking. Various tracking methods have been proposed and improved, from simple, rigid object tracking with a static camera to complex, nonrigid object tracking with a moving camera [5]. These methods are categorized into five groups [6, 7], namely region-based tracking [8], feature-based tracking [9], mesh-based tracking [10, 11], model-based tracking [12], and active contour model (ACM)-based tracking [13].
The active contour method was introduced by Kass in 1987 [14]. In general, ACM can be classified into two main types: parametric and geometric active contours. A parametric ACM is an initial curve in a two- or three-dimensional image that is deformed by internal and external forces and stops at the real boundaries of the image. Although this method has been applied to segmentation and video object tracking, it faces problems of speed and accuracy [15].
Geometric ACM, presented by Caselles and Malladi, is based on the theory of curve evolution and level set techniques, in which curves and levels are evolved according to geometric criteria [16, 17]. Simultaneous detection of several object boundaries is one of the great advantages of this method. However, due to its higher computational cost and complexity, the geometric active contour is slower than the parametric ACM.
On the other hand, in the absence of strong edges, the traditional parametric ACM cannot detect object boundaries correctly. To overcome this problem, Ivins and Porrill [18] and Schaub and Smith [19] proposed different color active contours. In their methods, the contour curves are directed toward a target object of a specified color, which makes it possible to detect and track targets with weak edges. The most noticeable disadvantage is that the colors of the object and background must be simple; the method cannot track targets with a complex color or texture.
In the method proposed by Vard et al. [15], tracking of a texture object in a texture background is achieved by adding a novel pressure energy, named texture pressure energy, to the energy function of the parametric active contour. The texture features of contour and object points are calculated by a moment-based method. Then, according to the difference between these features and those of the target object, the contour curve is shrunk or expanded to adapt to the target object boundaries. When the background texture changes considerably, this method cannot track the texture object with high accuracy. In addition, calculating the texture pressure energy requires heavy computation, which leads to a low convergence speed. In another study, Vard et al. [20] used a texture pressure based on the directional Walsh–Hadamard transform (DWHT) for segmenting a texture object in a texture background; there, the texture images were synthetic images selected from the Brodatz album. In [15, 20], the number of iterations is not set automatically and must be selected by the user, so the algorithms are slow.
Compared with [20], in which the texture feature based on the DWHT is used to segment a textured object, in this article we modify that method so that tracking a textured object against a textured background becomes possible automatically and accurately. The DWHT algorithm is a multiscale, multidirectional decomposition in the ordinary Walsh–Hadamard transform (WHT) domain that was first introduced by Monadjemi and Moallem [21]. It is based on a particular rearrangement of the input image before the WHT is applied. The DWHT preserves all the features of the WHT with the extra advantage of preserving the directional properties of the texture. Another advantage of the DWHT is its lower computational cost.
In summary, our focus in this article is on object boundary detection and object tracking, achieved using a parametric ACM. By adding a new balloon energy based on texture features to the energy function of the parametric ACM, we can detect texture objects in a texture background more accurately than the moment-based active contour, and the tracking process is accelerated. These improvements come from calculating the texture features of both contour and object points with the DWHT-based method. In addition, the parameters required for calculating the balloon energy and the number of iterations in each frame are obtained automatically, which improves the speed of the algorithm.
This article is organized as follows: in the next section, we review the mathematical description of ACM; in Section 3, the DWHT is explained; the proposed method is discussed in Section 4; in Section 5, we compare the experimental results of the proposed method with those of the moment-based active contour in terms of accuracy and convergence speed; finally, conclusions are given in Section 6.
2. Mathematical description of ACM
In a parametric ACM, the snake is a parametric curve defined as follows [14]:

S(u) = (x(u), y(u)), u ∈ [0, 1] (1)
where I is the image intensity at (x, y). For implementation, the vector function S(u) is sampled discretely at {u_i}, i = 0, 1, …, M, where M is the number of points on the contour; the continuous curve is then obtained by interpolating these points. The traditional parametric method is based on a deformable contour that minimizes the weighted sum of the internal and external energies. Therefore, the final contour is defined by minimizing the following energy function.
where E_{int} is the internal energy of the contour defined as follows:
In the above equation, the first and second terms prevent the contour from excessive stretching and bending while preserving its coherence and smoothness. The weighting parameters α and β adjust the elasticity and rigidity, respectively. The image energy directs the contour curve toward desirable features such as edges, lines, and corners. In the initial formulation of ACM, this energy is defined to detect edges and is calculated as [22]
where G_{σ} is a 2D Gaussian kernel with standard deviation σ, and ∇ and * denote the gradient and convolution operators, respectively. P is a constant weighting parameter that controls the image energy. In Equation (4), the Gaussian filtering serves to reduce noise.
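As an illustrative sketch (not the authors' implementation), the edge-based image energy of Equation (4) can be computed by Gaussian smoothing followed by a squared gradient magnitude; the kernel size and the use of NumPy's central differences are assumptions.

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    """1D Gaussian kernel covering roughly +/- 3 sigma (odd length)."""
    size = max(3, int(6 * sigma) | 1)
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def image_energy(image, sigma=1.0, p=1.0):
    """Edge-based image energy: E_img = -p * |grad(G_sigma * I)|^2."""
    k = gaussian_kernel_1d(sigma)
    # Separable Gaussian smoothing: convolve rows, then columns.
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                                   image.astype(float))
    smoothed = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0,
                                   smoothed)
    gy, gx = np.gradient(smoothed)        # derivatives along rows, columns
    return -p * (gx**2 + gy**2)           # minima attract the contour to edges
```

The energy is everywhere non-positive, and its minima sit on strong edges, which is what pulls the snake toward object boundaries.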
Consequently, the total energy of active contour is defined as follows:
Texture pressure energy was proposed to track a texture object in a texture background; this pressure energy replaces the edge energy in the energy function of the ACM [15]. Texture features are extracted using a moment-based method. Figure 1 shows the six masks that correspond to the moments up to order two with a window size of 3 × 3.
Texture features are then extracted by convolving the image with these masks. From each moment mask, a moment image M_{1}, M_{2}, …, M_{6} is acquired. The texture features corresponding to these moment images are obtained using the nonlinear transform:
where L × L is the window size centered at pixel (i, j), and ε is a user-determined parameter that controls the shape of the logistic function. For each pixel of the image, a texture feature vector of the form F(i, j) = [F_{1}, F_{2}, F_{3}, F_{4}, F_{5}, F_{6}] is generated, which can be used for image segmentation or for target object detection in tracking applications.
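A minimal sketch of this moment-based feature pipeline follows. The paper's exact logistic function is not reproduced here; a |tanh| saturating transform, common in moment-based texture analysis, stands in for it, and the window sizes are illustrative assumptions.

```python
import numpy as np

def moment_masks(w=3):
    """w x w masks of the geometric moments x^p y^q with p + q <= 2 (six masks)."""
    c = np.arange(w) - w // 2
    x, y = np.meshgrid(c, c)
    orders = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
    return [(x**p * y**q).astype(float) for p, q in orders]

def correlate2_same(img, mask):
    """'Same'-size 2D correlation with edge padding (loop kept for clarity)."""
    w = mask.shape[0]
    pad = np.pad(img, w // 2, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + w, j:j + w] * mask)
    return out

def moment_texture_features(img, eps=0.01, L=3):
    """One feature map F_k per moment image M_k: a saturating nonlinear
    transform averaged over an L x L window."""
    img = img.astype(float)
    avg = np.ones((L, L)) / (L * L)
    feats = []
    for m in moment_masks():
        M = correlate2_same(img, m)                  # moment image M_k
        T = np.abs(np.tanh(eps * (M - M.mean())))    # nonlinear transform
        feats.append(correlate2_same(T, avg))        # local average -> F_k
    return np.stack(feats, axis=-1)                  # (H, W, 6) feature vectors
```

Each pixel thus receives a six-component feature vector bounded in [0, 1), which is what the texture pressure energy compares against the target object's statistics.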
Texture pressure energy is defined as
where ρ and S are the weighting parameter and the snake curve, respectively. The ⊥ indicates that the texture pressure is applied perpendicularly to the tangent of the contour. T is defined as
where F is the texture feature vector of the contour, and O_{μ} and O_{σ} are the mean and standard deviation of the texture feature vector of the target object points, respectively. K is a parameter defined as follows:
where B_{μ} is the mean of the texture feature vector of the background.
As the experiments presented later in this article show, this method does not perform well when the texture complexity increases.
3. DWHT
The WHT is known for its computational advantages. For instance, it is a real (not complex) transform; it needs only addition and subtraction operations; and if the input signal is a set of integer-valued data (as in digital images), it uses only integer operations. Furthermore, there is a fast algorithm for Walsh transforms [23]. The transform matrix, usually referred to as the Hadamard matrix, can also be stored in binary format, reducing memory requirements [24]. Moreover, the hardware implementation of the WHT is easier than that of other transforms [25].
Inspired by the oriented, multiband structure of Gabor filters [26], the DWHT was proposed by Monadjemi and Moallem [21]. The DWHT algorithm can extract texture features in different directions and sequency scales. As mentioned before, the DWHT keeps all the advantages of the WHT while also preserving the directional properties of the texture. The DWHT can be defined as
where H is the sequency-ordered Hadamard (SOH) matrix [25, 27], whose rows (and columns) are ordered according to their sequency. In other words, while there is no sign change in the first row, there are n − 1 sign changes in the nth row. As an example, Figure 2 shows the SOH matrix of rank 3 (i.e., 8 × 8).
In fact, a Hadamard matrix H always equals its transpose, H^{′}. In this article, we use the Hadamard matrix of rank 2 (4 × 4).
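As a sketch of how an SOH matrix can be built (the Sylvester construction followed by a sequency sort is a standard approach, not necessarily the authors' code):

```python
import numpy as np

def sequency_ordered_hadamard(rank):
    """Sequency-ordered Hadamard (SOH) matrix of size 2^rank x 2^rank:
    rows sorted so that row k has exactly k sign changes."""
    H = np.array([[1.0]])
    for _ in range(rank):                      # Sylvester construction
        H = np.block([[H, H], [H, -H]])
    sign_changes = (np.diff(H, axis=1) != 0).sum(axis=1)
    return H[np.argsort(sign_changes)]         # ascending sequency order
```

For rank 2 this yields the 4 × 4 matrix used in the article, with rows having 0, 1, 2, and 3 sign changes, and it satisfies H H′ = N I as expected for a Hadamard matrix.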
In Equation (10), A_{α}, α = {0°, 45°, 90°, 135°}, is the rotated version of A. The rotation starts from each element in the top row of the image matrix. At border pixels, the corresponding elements are taken from a repeated imaginary copy of the same image matrix (i.e., the image is wrapped around vertically and horizontally).
The full rotation set for α = {0°, 45°, 90°, 135°} can be written for a simple 4 × 4 image matrix as follows:
Note that this is not an ordinary geometric rotation. For example, the rows of A_{45°} are created from the pixels that sit along a 45° direction in A_{0°}, and so on; the resulting horizontal rows thus capture the information at the specified angles. In fact, it is a pixel rearrangement rather than a geometric rotation.
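The pixel rearrangement can be sketched as follows. The starting point (top-row element), step direction for 135°, and column wrap-around are assumptions consistent with the description above, not a verbatim reproduction of the authors' convention.

```python
import numpy as np

def rotate_pixels(A, alpha):
    """Directional pixel rearrangement (not a geometric rotation): row j of
    the output walks through A along direction `alpha`, starting from the
    top-row element A[0, j] and wrapping the column index at the borders."""
    A = np.asarray(A)
    n = A.shape[0]
    if alpha == 0:
        return A.copy()
    if alpha == 90:
        return A.T.copy()                 # rows collect vertical columns
    step = 1 if alpha == 45 else -1       # 135 deg walks the other diagonal
    out = np.empty_like(A)
    for j in range(n):                    # one output row per top-row pixel
        for i in range(n):                # step down, shifting the column
            out[j, i] = A[i, (j + step * i) % n]
    return out
```

Row 0 of the 45° output, for instance, collects the main diagonal of A, so a subsequent row-wise transform sees the 45°-oriented pixel sequence.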
This rotation means that, after applying the DWHT, we need only extract the row sequency information corresponding to the direction used. As Equation (12) shows, the operation DWHT_{α}(A) = A_{α} × H^{′} gathers the sequency information of the input matrix rows into the columns of the transformed matrix. Hence, the same half transform applied to a rotated matrix (e.g., A_{45°}) gives the sequency information of pixels along a 45° orientation, again in the columns of the transformed matrix. In the transformed matrix, the number of sign changes is the same within each column and increases from left to right; in other words, the columns from left to right correspond to lower to higher sequency elements.
In the Hadamardbased feature extraction procedure, we exploited the mentioned rotation and transformation for four different orientations:
Since the relative arrangement of pixels is essential in texture analysis [28], sequency-based features, which represent the number of zero-crossings of pixels in a particular direction, can convey a notable amount of textural information. We measure the DWHT energy in DWHT_{α}(A) as the absolute value of the DWHT output along each column. The columns can be divided into a few groups representing different sequency bands, and the statistics of each band can then be extracted to build a feature vector of reasonable dimensionality. A DWHT output and feature vector can thus be defined as
where H is the transform's output matrix, N is the matrix size, F is the feature vector, M indicates the applied statistical function, and b is the desired sequency band. Log₂ or semi-log₂ bandwidth scales could be applied; however, we mostly use a simpler {1/4, 1/4, 1/2} division for three-band and a {1/4, 1/4, 1/4, 1/4} division for four-band feature sets. For example, in the three-band division of a four-column transform matrix, band b1 is formed by the first (lowest-sequency) column, band b2 by the second column, and band b3 by the third and fourth columns. The sequency bands of DWHT_{0°}(A) are thus defined as follows:
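A sketch of the half transform and the {1/4, 1/4, 1/2} band grouping for the 4 × 4 (rank 2) case; the choice of the mean as the band statistic is one of the options the text allows.

```python
import numpy as np

# 4 x 4 sequency-ordered Hadamard matrix (rank 2), rows with 0..3 sign changes.
H_SOH = np.array([[1,  1,  1,  1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1],
                  [1, -1,  1, -1]], dtype=float)

def dwht_band_features(A_alpha, stat=np.mean):
    """Half transform T = A_alpha @ H', then group the column-wise absolute
    energies into {1/4, 1/4, 1/2} sequency bands b1, b2, b3."""
    T = np.abs(np.asarray(A_alpha, float) @ H_SOH.T)
    b1 = stat(T[:, 0])       # lowest-sequency column
    b2 = stat(T[:, 1])
    b3 = stat(T[:, 2:4])     # two highest-sequency columns
    return b1, b2, b3
```

A constant patch puts all of its energy into b1, while a rapidly alternating pattern excites the high-sequency band b3, which is exactly the discriminative behavior texture features need.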
4. Proposed method
In this section, we first explain the feature extraction method based on the DWHT. Then, in Section 4.2, we introduce the DWHT-based balloon energy. In Section 4.3, the tracking algorithm based on the proposed method is presented. Finally, the criterion for stopping the contour is explained in Section 4.4.
4.1. Feature extraction using DWHT
The procedure for feature extraction using the DWHT is as follows.

1.
Determine a local window (A) of size 4 × 4 around the object and contour points.

2.
The matrices A_{0°}, A_{45°}, A_{90°}, A_{135°} are generated by rotating A in the four orientations α = {0°, 45°, 90°, 135°}.

3.
For each matrix from step 2, we use a {1/4, 1/4, 1/2} division (see Equation (15)) and obtain the sequency bands b1, b2, b3.

4.
The mean of each band is calculated as the texture feature vector F, as follows: F(i, j) = [F_{1}, …, F_{12}].
This procedure is also illustrated in the block diagram of Figure 3.
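The four steps above can be sketched end-to-end as follows. The rearrangement convention and the use of the mean as the band statistic are assumptions carried over from the earlier description; the result is the 12-component feature vector F(i, j) = [F_1, …, F_12] (three bands per orientation).

```python
import numpy as np

H_SOH = np.array([[1, 1, 1, 1], [1, 1, -1, -1],
                  [1, -1, -1, 1], [1, -1, 1, -1]], dtype=float)

def rotate_pixels(A, alpha):
    """Pixel rearrangement along alpha with column wrap-around."""
    n = A.shape[0]
    if alpha == 0:
        return np.asarray(A, float)
    if alpha == 90:
        return np.asarray(A, float).T
    step = 1 if alpha == 45 else -1
    return np.array([[A[i, (j + step * i) % n] for i in range(n)]
                     for j in range(n)], dtype=float)

def dwht_feature_vector(patch):
    """Steps 1-4: for each orientation, transform the 4 x 4 patch and take
    the mean absolute energy of the {1/4, 1/4, 1/2} bands -> F_1..F_12."""
    feats = []
    for alpha in (0, 45, 90, 135):
        T = np.abs(rotate_pixels(patch, alpha) @ H_SOH.T)
        feats += [T[:, 0].mean(), T[:, 1].mean(), T[:, 2:4].mean()]
    return np.array(feats)
```

Evaluating this at every contour point and at the object points yields the F and F_ob vectors used by the balloon energy in the next subsection.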
4.2. Balloon energy based on DWHT
Balloon energy was introduced by Cohen in 1991 [29]. In this study, we apply the balloon energy to texture features calculated by the DWHT. The external energy is calculated as
The first energy term is obtained from Equation (4); the second, a texture-based balloon energy, is defined as
where n⃗(s) is the unit normal vector and B is a threshold function defined as
where F is the texture feature vector of the contour, and O_{μ} and O_{σ} are the mean and standard deviation of the vector F_ob, respectively; F_ob is the texture feature vector of the target object points (see Figure 3). K is defined as follows:
where B_{μ} is the mean of the feature vector of the background. Evaluating Equation (19) yields 12 K parameters. The variance and the maximum of the K vector are then calculated, and the difference between them is taken as a threshold: every K parameter larger than this threshold is selected and used in Equation (18). Note that, in contrast with [20], the K parameters are obtained automatically.
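A literal reading of this automatic selection rule can be sketched as follows; whether the paper computes the population variance over all 12 components exactly this way is an assumption.

```python
import numpy as np

def select_k_parameters(K):
    """Keep the K components larger than max(K) - var(K), mirroring the
    automatic selection described for Equation (19)."""
    K = np.asarray(K, dtype=float)
    threshold = K.max() - K.var()   # distance between maximum and variance
    mask = K > threshold
    return K[mask], mask
```

Components well below the maximum fall under the threshold and are discarded, so only the most discriminative K parameters drive the balloon direction.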
The balloon force allows the contour curve to expand or shrink in order to fit the target boundaries.
Consequently, the new energy of the active contour is
4.3. Tracking algorithm based on proposed method
The steps of tracking an object in a textured background are shown as a flowchart in Figure 4.
As shown in this flowchart, in the first step, the initial contour is determined by the user by inserting several points around the target (B1, B2, B3, B4). The center of these points is determined, and several points around the central point (O) are taken as the object points (O1, …, On) (see Figure 5).
In the next step, the texture feature vectors of the contour points (F) and of the object points (F_ob) are determined. Then, the related mean and standard deviation vectors B_{μ}, O_{σ}, and O_{μ} are calculated. Accordingly, the K parameters are determined using Equation (19) for the first frame.
Next, the texture-based balloon energy is obtained from Equation (17), and the total energy function of the active contour is calculated using Equation (20). The new location of the contour points is then found by minimizing the energy function. For the next frames, we use these contour points as initial points and follow the same procedure until the optimized location of the contour points is obtained. This process is repeated until the last frame.
4.4. The criterion to stop the contour
Unlike in [15, 20], in our method the number of iterations in each frame is not selected by the user but is obtained automatically. The procedure for stopping the contour in every frame is as follows:

(1)
The distances between the contour points of the current iteration and those of the previous and next iterations are calculated, and then Equation (21) is evaluated:

M = |(d″ − d′) / d′| (21)

where d′ is the distance between the contour points in the current and previous iterations, and d″ is the distance between the contour points in the current and next iterations.
(2) If M is less than 0.01, the iteration stops. The threshold value of 0.01 was obtained by trial and error.
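The stopping rule of Equation (21) can be sketched as below; taking the mean point-wise Euclidean distance as "the distance between contour points" is an assumption, since the paper does not spell out the aggregation.

```python
import numpy as np

def relative_contour_motion(prev_pts, curr_pts, next_pts):
    """M = |(d'' - d') / d'|, with d' the mean point-wise distance between
    the previous and current iterations and d'' between current and next."""
    prev_pts, curr_pts, next_pts = map(np.asarray, (prev_pts, curr_pts, next_pts))
    d1 = np.mean(np.linalg.norm(curr_pts - prev_pts, axis=1))   # d'
    d2 = np.mean(np.linalg.norm(next_pts - curr_pts, axis=1))   # d''
    return abs((d2 - d1) / d1)

def should_stop(prev_pts, curr_pts, next_pts, tol=0.01):
    """Stop iterating once the step size has stabilized (M < 0.01)."""
    return relative_contour_motion(prev_pts, curr_pts, next_pts) < tol
```

When successive steps are nearly the same size, M is close to zero and the iteration halts; a step that suddenly grows or shrinks keeps the iteration alive.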
5. The result of the experiments
In this section, we implement the proposed method (balloon energy based on the DWHT added to a parametric active contour) and the moment-based active contour method in MATLAB, and compare the results in terms of speed and accuracy. In this article, we have tried to make the images more realistic than in [20] by preparing different texture environments and then filming those environments, so that at least the filming process is realistic.
All experiments were performed on a 2.5 GHz Intel Core 2 Duo processor running Windows Vista. In these experiments, different texture objects in various texture backgrounds were used. The frame size is 352 × 288 pixels.
Figure 6 shows the tracking results of the proposed method and of the moment-based active contour in the first experiment. In this experiment, the background is made of two different cloth materials, satin and silk.
As Figure 6A shows, the moment-based active contour cannot detect the object boundary correctly. In contrast, the proposed method detects and tracks the object boundaries in all frames with high accuracy (Figure 6B).
Figure 7 shows the placement of the initial snake and its evolution over the iterations until the snake adapts to the object boundary in the first frame. The number of iterations is adjusted automatically, which decreases the tracking time.
To measure the tracking accuracy, we employ the error criterion introduced in [30]:
where n is the number of obtained contour points and SCB(n) is the number of contour points on the correct boundary. This measure converges to 100% when correct boundary detection is achieved. Figure 8 shows the comparative diagram of E_{SCB} for the two methods. Table 1 reports the average E_{SCB} and the convergence speed of the two methods over 41 frames.
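This accuracy measure can be sketched as a percentage of contour points that land on (or near) the ground-truth boundary. The pixel tolerance `tol` is an assumption, since the paper does not state how "on the correct boundary" is decided.

```python
import numpy as np

def e_scb(contour_pts, boundary_mask, tol=1):
    """Percentage of contour points lying within `tol` pixels of the true
    boundary; reaches 100% when every point sits on the correct boundary."""
    hits = 0
    for r, c in contour_pts:
        r0, c0 = max(0, r - tol), max(0, c - tol)      # clip the window
        if boundary_mask[r0:r + tol + 1, c0:c + tol + 1].any():
            hits += 1
    return 100.0 * hits / len(contour_pts)
```

With a ground-truth boundary mask per frame, averaging this value over all frames gives the per-sequence figures reported in Table 1.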
In the second experiment (Figure 9), the performance of the proposed method is compared with the moment-based active contour when the texture of the background changes. The experiment is designed so that a texture object moves from one texture background to another: the former texture is made of fabric, and the latter is a wooden texture. As shown in Figure 9A, the moment-based active contour cannot detect the object boundary in the frames where the background transition occurs, and the contour expands in these frames. Since these false contours become the initial contours of the next frames, the error propagates and the contour no longer fits the edges. In contrast, our proposed method succeeds in detecting the object boundary and correctly tracking the target object in all frames despite the changes in the texture background (Figure 9B).
Figure 10 shows the placement of the initial snake and its evolution over the iterations until the snake adapts to the object boundary in the first frame.
Figure 11 shows the comparative diagram of E_{SCB} for the two methods. Table 1 reports the average E_{SCB} and the convergence speed of the two methods over 50 frames.
In the third experiment (Figure 12), we evaluate the performance of the proposed and moment-based active contour methods when a toy bus moves in front of a teddy bear. The moment-based active contour cannot detect the target object when the bus moves in front of the teddy bear, and the contour curve expands; the proposed method detects the toy bus better, even when it passes in front of the teddy bear.
Figure 13 shows the placement of the initial snake and its evolution over the iterations until the snake adapts to the object boundary in the first frame.
Again, this experiment convincingly shows the superior performance of the proposed method. Figure 14 shows the comparative diagram of E_{SCB} for the two methods. Table 1 reports the average E_{SCB} and the convergence speed of the two methods over 66 frames.
Over these three experiments, the accuracy of the proposed method is 33% higher than that of the moment-based method, and the convergence speed of the DWHT-based method is 74.18% higher.
5.1. Noise robustness of tracking algorithms
In the last experiment, the proposed method is examined in the presence of Gaussian noise. Gaussian noise is added to each frame of the sequence of experiment 1 to obtain a noisy sequence. Figure 15 demonstrates the performance of the proposed method for different signal-to-noise ratios (SNR).
SNR is defined as follows.
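Assuming the conventional power-ratio definition SNR = 10 log₁₀(P_signal / P_noise) in dB (the original equation was lost in extraction), a noisy test sequence at a target SNR can be generated as follows:

```python
import numpy as np

def add_gaussian_noise(frame, snr_db, seed=0):
    """Add zero-mean Gaussian noise so that the resulting frame has the
    requested SNR = 10 * log10(P_signal / P_noise) in dB."""
    rng = np.random.default_rng(seed)
    frame = frame.astype(float)
    p_signal = np.mean(frame ** 2)            # mean signal power
    p_noise = p_signal / 10 ** (snr_db / 10.0)
    return frame + rng.normal(0.0, np.sqrt(p_noise), frame.shape)
```

Applying this per frame at SNR values like 2.70 dB or 0.49 dB reproduces the kind of degraded sequences used in this experiment.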
Figure 16 shows the error measure E_{SCB} for different values of SNR. The average E_{SCB} over 41 frames for different SNR values is given in Table 2. The proposed method keeps a high accuracy for SNR ≥ 2.70 dB but cannot track the target object at SNR = 0.49 dB; beyond frame 27, there is no contour left for tracking the texture object in the texture background.
Figure 17 demonstrates the average of the E_{SCB} for different SNR values.
6. Conclusion
A parametric active contour based on moments was previously introduced for tracking a texture object in a texture background. That model cannot detect the target object correctly when the texture background changes; in such cases, the contour curve around the target expands. In this article, a new balloon energy based on texture features is added to the parametric active contour. The texture features are calculated with the DWHT to identify the direction of the balloon. Experimental results demonstrate that our method achieves higher tracking precision than the moment-based method, and its convergence speed is also higher. We used different levels of Gaussian noise to evaluate the robustness of the method, and the results show that the proposed method is highly robust to noise. As a result, the target object is tracked well, and detected better, against a texture background.
References
Eng H, Thida M, Chew B, Leman K, Anggrelly S: Model-based detection and segmentation of vehicles for intelligent transportation system. In 3rd IEEE Conference on Industrial Electronics and Applications. Singapore; 2008:2127–2132.
Huang K, Tan T: Vs-star: a visual interpretation system for visual surveillance. Pattern Recognit. Lett. 2010, 31(14):2265–2285. 10.1016/j.patrec.2010.05.029
Sirakov N, Kojouharov H, Sirakova N: Tracking neutrophil cells by active contours with coherence and boundary improvement filter. In Image Analysis & Interpretation (SSIAI). Austin, TX, USA; 2010:5–8.
Lee M, Chen W, Lin C, Gu C, Markoc T, Zabinsky S, Szeliski R: A layered video object coding system using sprite and affine motion model. IEEE Trans. Circuits Syst. Video Technol. 1997, 7(1):130–145. 10.1109/76.554424
Sun Q, Heng P, Xia D: Two-stage object tracking method based on kernel and active contour. IEEE Trans. Circuits Syst. Video Technol. 2010, 20(4):605–609.
Hariharakrishnan K, Schonfeld D: Fast object tracking using adaptive block matching. IEEE Trans. Multimed. 2005, 7(5):853–859.
Cavallaro A: From visual information to knowledge: semantic video object segmentation, tracking and description. PhD thesis, EPFL, Switzerland; 2002.
Salembier P, Marqués F: Region-based representations of image and video: segmentation tools for multimedia services. IEEE Trans. Circuits Syst. Video Technol. 1999, 9(8):1147–1167. 10.1109/76.809153
Moallem P, Memmarmoghaddam A, Ashourian M: Robust and fast tracking algorithm in video sequences by adaptive window sizing using a novel analysis on spatiotemporal gradient powers. Int. J. Circuit Syst. Comput. 2007, 16(2):305–317. 10.1142/S0218126607003617
Celasun I, Tekalp A: Optimal 2D hierarchical content-based mesh design and update for object-based video. IEEE Trans. Circuits Syst. Video Technol. 2000, 10(7):1135–1153. 10.1109/76.875517
Dufour A, Thibeaux R, Labruyère E, Guillén N, Olivo-Marin J: 3D active meshes: fast discrete deformable models for cell tracking in 3D time-lapse microscopy. IEEE Trans. Image Process. 2011, 20(7):1925–1937.
Comport A, Marchand E, Chaumette F: Efficient model-based tracking for robot vision. Adv. Robot. 2005, 19(10):1097–1113. 10.1163/156855305774662226
Bing X, Wei Y, Charocnsak C: Face contour tracking in video using active contour model. In Proceedings of the IEEE International Conference on Image Processing. Singapore; 2005:1021–1024.
Kass M, Witkin A, Terzopoulos D: Snakes: active contour models. Int. J. Comput. Vis. 1987, 1(4):321–331.
Vard A, Moallem P, Naghsh A: Texture-based parametric active contour for target detection and tracking. Int. J. Imag. Syst. Technol. 2009, 19(3):187–198. 10.1002/ima.20194
Caselles V, Catte F, Coll T: A geometric model for active contours. In International Conference on Image Processing. Washington, DC; 1995:131.
Malladi R, Sethian J, Vemuri B: Shape modeling with front propagation: a level set approach. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17(2):158–175. 10.1109/34.368173
Ivins J, Porrill J: Active region models for segmenting medical images. In Proceedings of the 1st International Conference on Image Processing. Austin, TX, USA; 1994:227–231.
Schaub H, Smith C: Color snakes for dynamic lighting conditions on mobile manipulation platforms. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Albuquerque, NM, USA; 2003:1272–1277.
Vard A, Monadjemi A, Jamshidi K, Movahhedinian N: Fast texture energy based image segmentation using directional Walsh–Hadamard transform and parametric active contour models. Expert Syst. Appl. 2011, 38(9):11722–11729. 10.1016/j.eswa.2011.03.058
Monadjemi A, Moallem P: Texture classification using a novel Walsh/Hadamard transform. In 10th WSEAS International Conference on Computers. Athens, Greece; 2006:1055–1060.
Prince J, Xu C: A new external force model for snakes. In Image and Multidimensional Signal Processing Workshop. 1996:30–31.
Monadjemi A: Towards efficient texture classification and abnormality detection. PhD thesis, Bristol University; 2004.
Gonzalez R, Woods R: Digital Image Processing. 3rd edition. Pearson/Prentice Hall; 2008.
Eghbali H: Image enhancement using a high sequency-ordered Hadamard transform filtering. Comput. Graph. 1980, 5(1):23–29. 10.1016/0097-8493(80)90004-7
Jain A, Farrokhnia F: Unsupervised texture segmentation using Gabor filters. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Los Angeles, CA; 1990:14–19.
Beauchamp K: Applications of Walsh and Related Functions. Academic Press, New York; 1984.
Williams D, Julesz B: Filters versus textons in human and machine texture discrimination. Academic Press, San Diego; 1992:145–175.
Cohen LD: On active contour models and balloons. CVGIP: Image Understanding 1991, 53(2):211–218.
Seo K, Lee J: Real-time object tracking and segmentation using adaptive color snake model. Int. J. Control Autom. Syst. 2006, 4(2):236–246.
Additional information
Competing interest
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Tahvilian, H., Moallem, P. & Monadjemi, A. Balloon energy based on parametric active contour and directional Walsh–Hadamard transform and its application in tracking of texture object in texture background. EURASIP J. Adv. Signal Process. 2012, 253 (2012). https://doi.org/10.1186/1687-6180-2012-253
DOI: https://doi.org/10.1186/1687-6180-2012-253