Open Access

Balloon energy based on parametric active contour and directional Walsh–Hadamard transform and its application in tracking of texture object in texture background

EURASIP Journal on Advances in Signal Processing 2012, 2012:253

https://doi.org/10.1186/1687-6180-2012-253

Received: 7 November 2011

Accepted: 2 November 2012

Published: 8 December 2012

Abstract

One of the popular approaches to object boundary detection and tracking is the active contour model (ACM). This article presents a new balloon energy in a parametric active contour for tracking a texture object in a texture background. In the proposed method, adding the balloon energy to the energy function of the parametric ACM enables precise detection and tracking of a texture target in a texture background. Texture features of contour and object points are calculated using the directional Walsh–Hadamard transform (DWHT), a modified version of the Walsh–Hadamard transform. By comparing the texture features of the contour points with those of the target object, the movement direction of the balloon is determined, whereupon the contour curve expands or shrinks to adapt to the target boundaries. The tracking process is iterated up to the last frame. A comparison between our method and a moment-based active contour method demonstrates that our method tracks object boundary edges more effectively in video streams with a changing background. Consequently, the tracking precision of our method is higher; in addition, it converges more rapidly due to its lower complexity.

Keywords

Tracking; Active contour models; Energy function; Directional Walsh–Hadamard transform (DWHT); Texture feature; Moment; Balloon energy

1. Introduction

Object tracking is one of the most interesting topics in many computer vision applications, such as traffic monitoring in intelligent transportation systems, video surveillance, medical applications, military object tracking, and object-based video compression [1–4]. Detection and computation of object motion in a sequence of images or video is called tracking. Various tracking methods have been proposed and improved, from simple, rigid object tracking with a static camera to complex, non-rigid object tracking with a moving camera [5]. These methods are categorized into five groups [6, 7]: region-based tracking [8], feature-based tracking [9], mesh-based tracking [10, 11], model-based tracking [12], and active contour model (ACM)-based tracking [13].

The active contour method was introduced by Kass et al. in 1987 [14]. In general, ACMs can be classified into two main types: parametric and geometric active contours. A parametric ACM is an initial curve in a two- or three-dimensional image that is modified by internal and external forces and stops at the real boundaries of the image. Although this method was proposed for segmentation and video object tracking, it faces problems with speed and accuracy [15].

Geometric ACMs, which were presented by Caselles and Malladi, are based on the theory of curve evolution and level set techniques, in which curves are evolved according to geometric criteria [16, 17]. Simultaneous detection of several object boundaries is one of the great advantages of this method. However, due to its higher computational cost and complexity, the geometric active contour is slower than the parametric ACM.

On the other hand, in the absence of strong edges, the traditional parametric ACM cannot detect object boundaries correctly. To overcome this problem, Ivins and Porrill [18] and Schaub and Smith [19] proposed color active contours, in which contour curves are directed toward a target object of a specified color. This approach makes it possible to detect and track targets with weak edges. Its most noticeable disadvantage is that the colors of the object and background must be simple; the method is not capable of tracking targets with a complex color or texture.

In the method proposed by Vard et al. [15], a texture object is tracked in a texture background by adding a novel pressure energy, named texture pressure energy, to the energy function of the parametric active contour. Texture features of contour and object points are calculated by a moment-based method. Then, according to the difference between these features and those of the target object, the contour curve shrinks or expands to adapt to the target object boundaries. When the background texture changes considerably, however, this method cannot track the texture object with high accuracy. In addition, calculating the texture pressure energy requires heavy computation, which leads to a low convergence speed. In another study, Vard et al. [20] used a texture pressure based on the directional Walsh–Hadamard transform (DWHT) for segmenting a texture object in a texture background; there, the texture images were synthetic images selected from the Brodatz album. In [15, 20], the number of iterations is not determined automatically and must be selected by the user, so the algorithm is slow.

Compared with [20], in which a texture feature based on the DWHT is used to segment a textured object, in this article we modify that method so that tracking a textured object against a textured background becomes possible automatically and accurately. The DWHT algorithm is a multi-scale, multi-directional decomposition in the ordinary Walsh–Hadamard transform (WHT) domain that was first introduced by Monadjemi and Moallem [21]. It is based on a particular rearrangement of the input image before the WHT is applied. The DWHT preserves all the features of the WHT, with the extra advantage of preserving the directional properties of the texture; it also has a lower computational cost.

In summary, our focus in this article is on object boundary detection and object tracking, achieved using a parametric ACM. By adding a new balloon energy based on texture features to the energy function of the parametric ACM, we are able to detect texture objects in a texture background more accurately than the moment-based active contour, and the tracking process is accelerated. These improvements are accomplished by calculating the texture features of both contour and object points with the DWHT-based method. Moreover, the parameters required for calculating the balloon energy and the number of iterations in each frame are obtained automatically, so the speed of the algorithm is improved.

This article is organized as follows: Section 2 reviews the mathematical description of the ACM; Section 3 explains the DWHT; Section 4 presents the proposed method; Section 5 compares the experimental results of the proposed method with those of the moment-based active contour in terms of accuracy and convergence speed; and Section 6 concludes.

2. Mathematical description of ACM

In the parametric ACM, the snake is a parametric curve defined as follows [14]:
S(u) = I(x(u), y(u)), u ∈ [0, 1]
(1)
where I is the image intensity at (x, y). For implementation, the vector function S(u) is approximated discretely at {u_i}, i = 0, 1, …, M, in which M is the number of points on the contour; a continuous curve is then obtained by interpolating these points. The traditional parametric method evolves the contour so as to minimize the weighted sum of internal and external energies. Therefore, the final contour is defined by minimizing the following energy function:
E(S(u)) = E_int(S(u)) + E_img(S(u)),
(2)
where Eint is the internal energy of the contour defined as follows:
E_int = (α/2)|∂S(u)/∂u|² + (β/2)|∂²S(u)/∂u²|²
(3)
In the above equation, the first and second terms prevent the contour from excessive stretching and bending while preserving its coherence and smoothness. The weighting parameters α and β adjust the elasticity and rigidity properties. The image energy directs the contour curve toward desirable features such as edges, lines, and corners. In the initial formulation of the ACM, this energy is defined to detect edges and is calculated as [22]
E_img = E_edge = −P |∇(G_σ(s) * I(s))|²
(4)

where G_σ is a 2D Gaussian kernel with standard deviation σ, and ∇ and * represent the gradient and convolution operators, respectively. P is a constant weighting parameter that controls the image energy. The Gaussian filtering in Equation (4) also serves to reduce noise.

Consequently, the total energy of active contour is defined as follows:
E = ∫(α/2)|∂S/∂u|² du + ∫(β/2)|∂²S/∂u²|² du + ∫E_edge(S(u)) du
(5)
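To make Equation (5) concrete, the following sketch evaluates the discrete total energy of a closed contour using wrapped finite differences; `edge_energy` is a hypothetical stand-in for the image term, and the discretization details are assumptions rather than the authors' implementation.

```python
import numpy as np

def snake_energy(pts, alpha, beta, edge_energy):
    """Discrete total energy of a closed snake (Eq. 5).

    pts: (M, 2) array of contour points; edge_energy(x, y) is the image term.
    """
    d1 = np.roll(pts, -1, axis=0) - pts                                # dS/du
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)  # d2S/du2
    e_int = 0.5 * alpha * np.sum(d1 ** 2) + 0.5 * beta * np.sum(d2 ** 2)
    e_img = sum(edge_energy(x, y) for x, y in pts)
    return e_int + e_img
```

Minimizing this sum over candidate point positions (for instance, by greedy local search around each point) is what drives the contour toward the boundary.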
In [15], a texture pressure energy is proposed to track a texture object in a texture background; this pressure energy replaces the edge energy in the energy function of the ACM, and texture features are extracted using a moment-based method. Figure 1 shows the six masks corresponding to the moments up to order two with a window size of 3 × 3.
Figure 1

The masks corresponding to the moments up to order two with a window size of 3 × 3 [15].

Texture features are then extracted by convolving the image with those masks. Each moment mask yields a moment image M1, M2, M3, M4, M5, M6. The texture feature corresponding to each moment image is obtained using the nonlinear transform:
F_t(i, j) = (1/L²) Σ_{a=−L…L} Σ_{b=−L…L} tanh(ε M_t(i + a, j + b)), t = 1, …, 6
(6)

where L × L is the size of the window centered on pixel (i, j) and ε is a user-defined parameter that controls the shape of the logistic function. For each pixel of the image, a texture feature vector F(i, j) = [F1, F2, F3, F4, F5, F6] is generated, which can be used for image segmentation or for target object detection in tracking applications.
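As a sketch of this moment-based pipeline (the wrap-around padding and the mask scaling are assumptions here, and the local averaging of Equation (6) is omitted for brevity):

```python
import numpy as np

def moment_texture_features(img, eps=0.01):
    """6-channel texture features from 3x3 moment masks (orders p+q <= 2),
    followed by an Eq. (6)-style tanh nonlinearity."""
    x = np.array([-1.0, 0.0, 1.0])
    # Pad by wrapping so every pixel has a full 3x3 neighbourhood.
    pad = np.pad(img, 1, mode="wrap")
    # All 3x3 neighbourhoods, shape (H, W, 3, 3).
    win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    feats = []
    for p, q in [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]:
        mask = np.outer(x ** p, x ** q)          # monomial moment mask
        M = np.einsum("ijab,ab->ij", win, mask)  # moment image M_t
        feats.append(np.tanh(eps * M))           # nonlinear transform
    return np.stack(feats, axis=-1)              # (H, W, 6) feature vectors
```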

Texture pressure energy is defined as
E_texture = ρ T(I(S)) n(S(u))
(7)
where ρ and S are the weighting parameter and the snake curve, respectively, and n is the unit normal to the contour, indicating that the texture pressure is applied perpendicularly to the contour tangent. T is defined as
T(I(S)) = 1 − (1/K) Σ_{i=1…6} ((F_i(I(S)) − Oμ_i) / Oσ_i)²
(8)
where F is the texture feature vector of the contour, and Oμ and Oσ are the mean and standard deviation of the texture feature vector of the target object points, respectively. K is a parameter defined as follows:
K = Σ_{i=1…6} ((Bμ_i − Oμ_i) / Oσ_i)²
(9)

where Bμ is the mean of the texture feature vector of the background.

As the experiments presented later in this article show, this method does not perform well when the texture complexity increases.

3. DWHT

The WHT is known for its important computational advantages. For instance, it is a real (not complex) transform; it only needs addition and subtraction operations; and if the input signal is integer-valued (as in the case of digital images), it only uses integer operations. Furthermore, a fast algorithm for Walsh transforms is proposed in [23]. The transform matrix, usually referred to as the Hadamard matrix, can also be stored in binary format, reducing memory requirements [24]. Moreover, hardware implementation of the WHT is easier than that of other transforms [25].

Inspired by the oriented, multi-band structure of Gabor filters [26], the novel DWHT was proposed by Monadjemi and Moallem [21]. The DWHT algorithm is capable of extracting texture features in different directions and sequency scales. As mentioned before, the DWHT keeps all the advantages of the WHT and additionally preserves the directional properties of the texture. The DWHT is defined as
DWHT_α(A) = A_α × H
(10)
in which H is the sequency-ordered Hadamard (SOH) matrix [25, 27], whose rows (and columns) are ordered according to their sequency. In other words, while there is no sign change in the first row, there are n − 1 sign changes in the nth row. As an example, Figure 2 shows the SOH matrix of rank 3 (8 × 8).
Figure 2

Sequency-ordered 8 × 8 Hadamard (left). Sequency bands of SOH in a transform domain (right).

In fact, a Hadamard matrix is always equal to its transpose, H = Hᵀ. In this article, we use the Hadamard matrix of rank 2 (4 × 4).
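The SOH matrix can be obtained by generating the Sylvester-ordered Hadamard matrix and re-sorting its rows by their number of sign changes; a minimal sketch:

```python
import numpy as np

def soh(n):
    """Sequency-ordered Hadamard matrix of size 2**n x 2**n: Sylvester
    construction, then rows sorted by number of sign changes (sequency)."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    changes = (np.diff(H, axis=1) != 0).sum(axis=1)  # sign changes per row
    return H[np.argsort(changes)]
```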

In Equation (10), A_α, α ∈ {0°, 45°, 90°, 135°}, is the rotated version of A. At border pixels, the corresponding elements are taken from a repeated imaginary version of the same image matrix (i.e., the image is wrapped around vertically and horizontally).

The full rotation set where α = {0°, 45°, 90°, 135°} can be defined for a simple 4 × 4 image matrix as follows
A_0°   = [a b c d; e f g h; i j k l; m n o p]
A_45°  = [a f k p; b g l m; c h i n; d e j o]
A_90°  = [a e i m; b f j n; c g k o; d h l p]
A_135° = [a h k n; b e l o; c f i p; d g j m]
(11)

Note that this is not an ordinary geometrical rotation. For example, the rows of A_45° are formed from the pixels that sit along a 45° direction in image A, and so on; the resulting horizontal rows thus capture the information at the specified angle. It is more a pixel rearrangement than a geometrical rotation.
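The rearrangement of Equation (11) can be written with modular column indexing; the index formulas below are inferred from the 4 × 4 example and should be treated as an assumption for general N:

```python
import numpy as np

def rotate_dwht(A, angle):
    """Pixel rearrangement of Eq. (11): row r of the result collects the
    pixels of A lying along `angle`, wrapping around at the borders."""
    N = A.shape[0]
    j = np.arange(N)
    if angle == 0:
        return A.copy()
    if angle == 90:
        return A.T.copy()
    rows = []
    for r in range(N):
        if angle == 45:
            rows.append(A[j, (r + j) % N])   # 45-degree diagonals
        elif angle == 135:
            rows.append(A[j, (r - j) % N])   # 135-degree diagonals
    return np.array(rows)
```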

This rotation means that after applying the DWHT we need only extract the row sequency information corresponding to the direction used. As Equation (12) shows, the operation DWHT_α(A) = A_α × H gathers the sequency information of the input matrix rows into the columns of the transformed matrix. Hence, the same transform applied to a rotated matrix (e.g., A_45°) gives the sequency information of pixels along a 45° orientation, again in the columns of the transformed matrix. In the transformed matrix, the number of sign changes is the same within each column and increases from left to right; in other words, the columns from left to right correspond to the lower to higher sequency elements.
DWHT_0°(A) = A_0° × H =
[a b c d; e f g h; i j k l; m n o p] × [1 1 1 1; 1 1 −1 −1; 1 −1 −1 1; 1 −1 1 −1] =
[a+b+c+d  a+b−c−d  a−b−c+d  a−b+c−d;
 e+f+g+h  e+f−g−h  e−f−g+h  e−f+g−h;
 i+j+k+l  i+j−k−l  i−j−k+l  i−j+k−l;
 m+n+o+p  m+n−o−p  m−n−o+p  m−n+o−p]
(12)
In the Hadamard-based feature extraction procedure, we exploited the mentioned rotation and transformation for four different orientations:
DWHT_0°(A)   = A_0° × H
DWHT_45°(A)  = A_45° × H
DWHT_90°(A)  = A_90° × H
DWHT_135°(A) = A_135° × H
(13)
Since the relative arrangement of pixels is essential in texture analysis [28], sequency-based features which represent the number of zero-crossing of pixels in a particular direction can convey a notable amount of textural information. We can measure the DWHT energy in DWHT α (A) as the absolute value of the DWHT output along each column. Columns can be divided into a few groups that represent different sequency bands. Then the statistics of each band can be extracted to configure a feature vector with reasonable dimensionality. So, a DWHT output and feature vector can be defined as
H_{α,b} = |DWHT_α(A)(i, j)|, 1 ≤ i ≤ N, j ∈ b, and F_DWHT = M(H_{α,b})
(14)
where H_{α,b} is the transform's output restricted to band b, N is the matrix size, F is the feature vector, M denotes the applied statistical function, and b is the desired sequency band. Log₂ or semi-log₂ bandwidth scales could be applied; however, we mostly use a simpler (1/4, 1/4, 1/2) division for three-band feature sets and a (1/4, 1/4, 1/4, 1/4) division for four-band feature sets. For example, in the three-band division of a four-column transform matrix, band b1 is formed by the first column, band b2 by the second column, and band b3 by the third and fourth columns. The sequency bands of DWHT_0°(A) are thus defined as follows:
b1 = [a+b+c+d; e+f+g+h; i+j+k+l; m+n+o+p],
b2 = [a+b−c−d; e+f−g−h; i+j−k−l; m+n−o−p],
b3 = [a−b−c+d, a−b+c−d; e−f−g+h, e−f+g−h; i−j−k+l, i−j+k−l; m−n−o+p, m−n+o−p]
(15)
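Putting Equations (10), (14), and (15) together for one orientation, a sketch of the band statistics (the mean is used here as the statistic M):

```python
import numpy as np

def dwht_band_features(A_alpha, H, stat=np.mean):
    """DWHT along one orientation (Eq. 10) followed by the 1/4, 1/4, 1/2
    three-band split of Eq. (15); returns one statistic per sequency band."""
    T = np.abs(A_alpha @ H)          # sequency energy; columns run low -> high
    N = T.shape[1]
    bands = [T[:, :N // 4], T[:, N // 4:N // 2], T[:, N // 2:]]
    return [stat(b) for b in bands]
```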

4. Proposed method

In this section, first, we explain the method used for feature extraction by DWHT. Then, in Section 4.2, we introduce the DWHT-based balloon energy. After that, in Section 4.3, tracking algorithm based on the proposed method is presented. Finally, the criterion to stop the contour is explained in Section 4.4.

4.1. Feature extraction using DWHT

The procedure of feature extraction using DWHT is as follows:

1. Determine a local window A of size 4 × 4 around the object and contour points.

2. Generate the matrices A_0°, A_45°, A_90°, A_135° by rotating A in the four orientations α = {0°, 45°, 90°, 135°}.

3. For each matrix from step 2, apply the (1/4, 1/4, 1/2) division (see Equation (15)) to obtain the sequency bands b1, b2, b3.

4. Calculate the mean of each band as the texture feature vector F(i, j) = [F1, …, F12].
This procedure is also illustrated in the block diagram of Figure 3.
Figure 3

The block diagram of texture feature extraction using DWHT for both active contour and target object.
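The four steps above can be sketched end to end for a single 4 × 4 window; the rotation index formulas and the use of the mean as the band statistic follow the earlier sections, and the details should be treated as assumptions:

```python
import numpy as np

# Sequency-ordered 4x4 Hadamard matrix (rank 2, as used in this article).
H = np.array([[1, 1, 1, 1],
              [1, 1, -1, -1],
              [1, -1, -1, 1],
              [1, -1, 1, -1]])

def texture_feature_vector(A):
    """12-dimensional DWHT feature vector for a 4x4 window A:
    4 orientations x 3 sequency bands, band statistic = mean (step 4)."""
    N = 4
    j = np.arange(N)
    rotations = [
        A,                                                 # 0 degrees
        np.array([A[j, (r + j) % N] for r in range(N)]),   # 45 degrees
        A.T,                                               # 90 degrees
        np.array([A[j, (r - j) % N] for r in range(N)]),   # 135 degrees
    ]
    feats = []
    for A_alpha in rotations:
        T = np.abs(A_alpha @ H)                            # sequency low -> high
        for band in (T[:, :1], T[:, 1:2], T[:, 2:]):       # 1/4, 1/4, 1/2 split
            feats.append(band.mean())
    return np.array(feats)                                 # F = [F1, ..., F12]
```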

4.2. Balloon energy based on DWHT

Balloon energy was introduced by Cohen in 1991 [29]. In this study, we apply the balloon energy to texture features calculated by the DWHT. The external energy is calculated as
E_ext = E_img + E_bal
(16)
E_img is obtained from Equation (4). E_bal, the texture-based balloon energy, is defined as
E_bal = B(I(S)) × n(s)
(17)
where n(s) is the unit normal vector to the contour and B is a threshold function defined as
B(I(s)) = { +1 if |F(I(s)) − Oμ| / Oσ < K
          { −1 otherwise
(18)
where F is the texture feature vector of the contour, and Oμ and Oσ are the mean and standard deviation of F_ob, the texture feature vector of the target object points (see Figure 3). K is defined as follows:
k_i = 2 |Bμ_i − Oμ_i| / Oσ_i
(19)

where Bμ is the mean of the feature vector of the background. Equation (19) yields 12 K parameters. The variance and the maximum of the K vector are then calculated, and the distance between them is obtained; each K parameter larger than this value is selected and used in Equation (18). Note that, in contrast to [20], the K parameters are obtained automatically.
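A sketch of this automatic selection, with Equation (19) reconstructed as k_i = 2|Bμ_i − Oμ_i|/Oσ_i; both the exact formula and the per-feature comparison in Equation (18) are assumptions here:

```python
import numpy as np

def select_k(o_mu, o_sigma, b_mu):
    """Automatic K selection: Eq. (19) per feature, then keep the k's that
    exceed the distance between the maximum and the variance of the k vector."""
    k = 2 * np.abs(b_mu - o_mu) / o_sigma   # Eq. (19), one k per feature
    thresh = k.max() - k.var()              # distance between max and variance
    keep = k > thresh                       # selected K parameters
    return k, keep

def balloon_sign(f_contour, o_mu, o_sigma, k, keep):
    """Threshold function B of Eq. (18): +1 expands the contour, -1 shrinks
    it, decided on the selected features only."""
    dist = np.abs(f_contour - o_mu) / o_sigma
    return 1 if np.all(dist[keep] < k[keep]) else -1
```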

Balloon force allows the contour curve to expand or to shrink in order to fit to the target boundaries.

Consequently, the new energy of active contour is
E = ∫[(α/2)|∂S/∂u|² + (β/2)|∂²S/∂u²|²] du (internal energy) + ∫E_ext(S(u)) du (external energy)
(20)

4.3. Tracking algorithm based on proposed method

Different steps for tracking the object in textured background are demonstrated as a flowchart in Figure 4.
Figure 4

Tracking flowchart based on the proposed method.

As shown in the flowchart, in the first step the initial contour is determined by the user by inserting several points around the target (B1, B2, B3, B4). The center of these points is determined, and several points around the central point O are computed as the object points (O1, …, On) (see Figure 5).
Figure 5

Calculate object point and K parameter.

In the next step, the texture feature vectors of the contour points, F, and of the object points, F_ob, are determined. Then, the related mean and standard deviation vectors Oμ, Oσ, and Bμ are calculated. Accordingly, the K parameters are determined using Equation (19) for the first frame.

Next, the balloon energy based on the texture features is obtained from Equation (17), and the total energy function of the active contour is calculated using Equation (20). The new locations of the contour points are then found by minimizing the energy function. For each subsequent frame, these contour points serve as the initial points, and the same procedure is followed until the optimized locations of the contour points are obtained. This process is repeated up to the last frame.

4.4. The criterion to stop the contour

Unlike [15, 20], in our method the number of iterations in each frame is not selected by the user but is obtained automatically. The procedure for stopping the contour in each frame is as follows:
(1) The distances between the contour points of the current iteration and those of the previous and next iterations are calculated, and Equation (21) is evaluated:

M = |d′ − d″| / d′
(21)

where d′ is the distance between the contour points in the current and previous iterations, and d″ is the distance between the contour points in the current and next iterations.

If M is less than 0.01, the iteration stops. The threshold value of 0.01 was obtained by trial and error.
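A minimal sketch of this stopping rule, assuming M is the relative change |d′ − d″|/d′:

```python
def should_stop(d_prev, d_next, tol=0.01):
    """Stop iterating when the contour displacement stabilizes (Eq. 21).

    d_prev: distance between current and previous iterations' contour points.
    d_next: distance between current and next iterations' contour points.
    """
    M = abs(d_prev - d_next) / d_prev
    return M < tol
```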

5. The result of the experiments

In this section, the proposed method (parametric active contour with the DWHT-based balloon energy) and the moment-based active contour method are implemented in MATLAB, and the results are compared in terms of speed and accuracy. We have tried to make the images more realistic than in [20] by preparing different texture environments and then capturing video of those environments; in this way, at least the filming process is done realistically.

All experiments were performed on a 2.5 GHz Intel Core 2 Duo processor running Windows Vista. Different texture objects in various texture backgrounds were used, and the frame size was 352 × 288 pixels.

Figure 6 shows the tracking results of the proposed method and the moment-based active contour in the first experiment, in which the background is made of two different cloth materials, satin and silk.
Figure 6

Tracking texture target in texture background using moment-based active contour (top) and proposed method (bottom). Frames from left to right: 1, 41.

As Figure 6A shows, the moment-based active contour cannot detect the object boundary correctly. In contrast, the proposed method detects and tracks the object boundary in all frames with high accuracy (Figure 6B).

Figure 7 shows the initial position of the snake and its evolution over the iterations until it adapts to the object boundary in the first frame. The number of iterations is adjusted automatically, which decreases the tracking time.
Figure 7

The place of initializing the snake and its evolution in different iteration until the snake is adapted in object boundary for the first frame.

To measure the accuracy of tracking, we employed the error criterion introduced in [30]:
E_SCB = (SCB(n) / n) × 100%
(22)
where n is the number of contour points obtained and SCB(n) is the number of those points lying on the correct boundary. This measure reaches 100% when the boundary is detected correctly. Figure 8 shows the comparative diagram of E_SCB for the two methods, and Table 1 reports the average E_SCB and the convergence speed of both methods over 41 frames.
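Computing E_SCB for one frame, given the contour points and a ground-truth boundary mask (the mask representation is an assumption):

```python
import numpy as np

def e_scb(contour_pts, boundary_mask):
    """E_SCB of Eq. (22): percentage of contour points on the true boundary.

    contour_pts: iterable of (x, y) pixel coordinates.
    boundary_mask: 2D boolean array, True on the correct boundary.
    """
    on_boundary = sum(bool(boundary_mask[y, x]) for x, y in contour_pts)
    return 100.0 * on_boundary / len(contour_pts)
```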
Figure 8

Comparative diagram of E SCB for two methods calculated for each frame.

Table 1

The average of E_SCB and convergence speed for the two methods in the three experiments

Experiment                 Tracking method   Average E_SCB (%)   Total tracking time (s)   Speed improvement (%)
Experiment 1 (41 frames)   Proposed method   96.44               10.89                     67.80
                           Moment_ACM        71.34               33.82
Experiment 2 (50 frames)   Proposed method   94.88               9.96                      74.54
                           Moment_ACM        59.28               39.12
Experiment 3 (66 frames)   Proposed method   94.91               14.14                     77.42
                           Moment_ACM        61.41               62.62
In the second experiment (Figure 9), the performance of the proposed method is compared with the moment-based active contour when the texture of the background changes. The experiment is designed so that a texture object moves from one texture background to another; the former texture is fabric and the latter is wood. As shown in Figure 9A, the moment-based active contour cannot detect the object boundary in the frames where the background transition occurs, and the contour expands in these frames. Since these false contours become the initial contours of the next frames, the error propagates and the contour no longer fits the edges. In contrast, our proposed method succeeds in detecting the object boundary and correctly tracking the target in all frames despite the change of texture background (Figure 9B).
Figure 9

Tracking textured target in textured background while the texture of background is changing, moment-based active contour (top) and proposed method (bottom). Frames from left to right: 1, 25, 50.

Figure 10 shows the initial position of the snake and its evolution over the iterations until it adapts to the object boundary in the first frame.
Figure 10

The place of initializing the snake and its evolution in different iteration.

Figure 11 demonstrates the comparative diagram of ESCB for these two methods. Table 1 shows the average of ESCB and the convergence speed of two methods for 50 frames.
Figure 11

Comparative diagram of E SCB for two methods calculated for each frame.

In the third experiment (Figure 12), we evaluated the performance of the proposed and moment-based active contour methods when a toy bus moves in front of a teddy bear. The moment-based active contour cannot detect the target when the bus moves in front of the teddy bear, and the contour curve expands; the proposed method detects the toy bus better, even as it passes in front of the teddy bear.
Figure 12

Tracking of toy bus using moment-based active contour (top) and proposed method (bottom). Frames from left to right: 1,40, 66.

Figure 13 shows the initial position of the snake and its evolution over the iterations until it adapts to the object boundary in the first frame.
Figure 13

The place of initializing the snake and its evolution in different iteration.

Again, this experiment convincingly shows the superior performance of the proposed method. Figure 14 shows the comparative diagram of E_SCB for the two methods, and Table 1 gives the average E_SCB and the convergence speed of both methods over 66 frames.
Figure 14

Comparative diagram of E SCB for two methods calculated for each frame.

Over these three experiments, the average accuracy of the proposed method is 33% higher than that of the moment-based method, and the convergence speed of the DWHT-based method is 74.18% higher.

5.1. Noise robustness of tracking algorithms

In the last experiment, the proposed method is examined in the presence of Gaussian noise, which is added to each frame of the first experiment's sequence to obtain a noisy sequence. Figure 15 demonstrates the performance of the proposed method for different signal-to-noise ratios (SNR).
Figure 15

Evaluating the effect of noise on the proposed method for tracking a texture object in a texture background at different SNRs. SNR = 14.47 dB (1st row), 7.48 dB (2nd row), 4.47 dB (3rd row), 2.70 dB (4th row), 1.46 dB (5th row), and 0.49 dB (6th row); frames from left to right: 1, 20, 30, 41.

SNR is defined as follows.
SNR_dB = 10 × log₁₀(P_Signal / P_Noise)
(23)
Figure 16 shows the error measure E_SCB for different values of SNR, and the average of E_SCB over the 41 frames is given in Table 2. The proposed method maintains high accuracy for SNR ≥ 2.70 dB; at SNR = 0.49 dB it cannot track the target object, and beyond frame 27 there is no contour left for tracking the texture object in the texture background.
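Generating the noisy sequences of this experiment amounts to scaling white Gaussian noise to a target SNR from Equation (23); a sketch:

```python
import numpy as np

def add_gaussian_noise(frame, snr_db, rng=None):
    """Add white Gaussian noise so the result has the requested SNR (Eq. 23)."""
    rng = np.random.default_rng(0) if rng is None else rng
    p_signal = np.mean(np.asarray(frame, dtype=float) ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))   # invert Eq. (23)
    return frame + rng.normal(0.0, np.sqrt(p_noise), np.shape(frame))
```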
Figure 16

E SCB diagram of the proposed method for the sequence including 41 frames in different value of SNR.

Table 2

The average of E_SCB for different values of SNR

SNR (dB)             14.47   7.48    4.47   2.70    1.46    0.49
Average E_SCB (%)    91.27   84.18   75     64.07   37.49   18.83

Figure 17 demonstrates the average of the ESCB for different SNR values.
Figure 17

The average of the E SCB for different SNR values.

6. Conclusion

A parametric active contour based on moments was previously introduced for tracking a texture object in a texture background. That model cannot detect the target object correctly when the texture background changes; in such cases, the contour curve around the target expands. In this article, a new balloon energy based on texture features is added to the parametric active contour; the texture features are calculated with the DWHT and determine the direction of the balloon. Experimental results demonstrate that our method achieves higher tracking precision and a higher convergence speed than the moment-based method. We also evaluated robustness by adding different levels of Gaussian noise, and the proposed method showed high robustness to noise. As a result, the target object is tracked well against a texture background.

Declarations

Authors’ Affiliations

(1)
Department of Electrical Engineering, Najafabad Branch, Islamic Azad University, Esfahan, Iran
(2)
Department of Electrical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
(3)
Department of Computer Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran

References

  1. Eng H, Thida M, Chew B, Leman K, Anggrelly S: Model-based detection and segmentation of vehicles for intelligent transportation system. In 3rd IEEE Conference on Industrial Electronics and Applications. Singapore; 2008:2127-2132.Google Scholar
  2. Huang K, Tan T: Vs-star: A visual interpretation system for visual surveillance. Pattern Recognit. Lett. 2010, 31(14):2265-2285. 10.1016/j.patrec.2010.05.029View ArticleGoogle Scholar
  3. Sirakov N, Kojouharov H, Sirakova N: Tracking neutrophil cells by active contours with coherence and boundary improvement filter. In Image Analysis & Interpretation (SSIAI). Austin, TX, USA; 2010:5-8.Google Scholar
  4. Lee M, Chen W, Lin C, Gu C, Markoc T, Zabinsky S, Szeliski R: A layered video object coding system using sprite and affine motion model. IEEE Trans. Circuits Syst. Video Technol. 1997, 7(1):130-145. 10.1109/76.554424View ArticleGoogle Scholar
  5. Sun Q, Heng P, Xia D: Two-stage object tracking method based on kernel and active contour. IEEE Trans. Circuits Syst. Video Technol. 2010, 20(4):605-609.View ArticleGoogle Scholar
  6. Hariharakrishnan K, Schonfeld D: Fast object tracking using adaptive block matching. IEEE Trans. Multimed. 2005, 7(5):853-859.View ArticleGoogle Scholar
  7. Cavallaro A: From visual information to knowledge: semantic video object segmentation, tracking and description, PhD thesis. EPFL Switzerland; 2002.Google Scholar
  8. Salembier P, Marqúes F: Region-based representations of image and video: segmentation tools for multimedia services. IEEE Trans. Circuit Syst. Video Technol. 1999, 9(8):1147-1167. 10.1109/76.809153View ArticleGoogle Scholar
  9. Moallm P, Memmarmoghaddam A, Ashourian M: Robust and fast tracking algorithm in video sequences by adaptive window sizing using a novel analysis on spatiotemporal gradient powers. Int. J. Circuit Syst. Comput. 2007, 16(2):305-317. 10.1142/S0218126607003617View ArticleGoogle Scholar
  10. Celasun I, Tekalp A: Optimal 2-D hierarchical content-based mesh design and update for object-based video. IEEE Trans. Circuit Syst. Video Technol. 2000, 10(7):1135-1153. 10.1109/76.875517View ArticleGoogle Scholar
  11. Dufour A, Thibeaux R, Labruyère E, Guillén N, Olivo-Marin J: 3-D active meshes: fast discrete deformable models for cell tracking in 3-D time-lapse microscopy. IEEE Trans. Image Process 2011, 20(7):1925-1937.MathSciNetView ArticleGoogle Scholar
  12. Comport A, Marchand E, Chaumette F: Efficient model-based tracking for robot vision. Adv. Robot. 2005, 19(10):1097-1113. 10.1163/156855305774662226View ArticleGoogle Scholar
  13. Bing X, Wei Y, Charocnsak C: Face contour tracking in video using active contour model. In Proceedings of the IEEE International Conference on Image Processing. Singapore; 2005:1021-1024.Google Scholar
  14. Kass M, Witkin A, Terzopoulos D: Snakes: active contour models. International Journal of Computer Vision, London 1987, 1(4):321-331.View ArticleGoogle Scholar
  15. Vard A, Moallem P, Naghsh A: Texture based parametric active contour target detection and tracking. Int. J. Imag. Syst. Technol. Wiley 2009, 19(3):187-198. 10.1002/ima.20194View ArticleGoogle Scholar
  16. Caselles V, Catte F, Coll T: A geometric model for active contours. In International Conference on Image Processing. Washington, DC; 1995:1-31.Google Scholar
  17. Malladi R, Sethian J, Vemuri B: Shape modeling with front propagation: a level set approach. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17(2):158-175. 10.1109/34.368173View ArticleGoogle Scholar
  18. Ivins J, Porrill J: Active region models for segmenting medical images. In Proceedings of the 1st international conference on image processing. Austin TX, USA; 1994:227-231.Google Scholar
  19. Schaub H, Smith C: Color snakes for dynamic lighting conditions on mobile manipulation platforms. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems. Albuquerque, NM, USA; 2003:1272-1277.Google Scholar
  20. Vard A, Monadjemi A, Jamshidi K, Movahhedinian N: Fast texture energy based image segmentation using directional Walsh–Hadamard transform and parametric active contour models. Expert Syst. Appl. 2011, 38(9):11722-11729. 10.1016/j.eswa.2011.03.058View ArticleGoogle Scholar
  21. Monadjemi A, Moallem P: Texture classification using a novel Walsh/Hadamard transform. In 10th WSEAS International Conference on Computers. Athens, Greece; 2006:1055-1060.Google Scholar
  22. Prince J, Xu C: A new external force model for snakes. Image and Multidimensional Signal Processing Workshop 1996, 30-31.Google Scholar
  23. Monadjemi A: Towards efficient texture classification and abnormality detection, PhD thesis. Bristol University; 2004.Google Scholar
  24. Gonzalez R, Woods R: Digital Image Processing. 3rd edition. Pearson/Prentice Hall; 2008.Google Scholar
  25. Eghbali H: Image enhancement using a high sequency-ordered Hadamard transform filtering. Comput. Graph. 1980, 5(1):23-29. 10.1016/0097-8493(80)90004-7MathSciNetView ArticleGoogle Scholar
  26. Jain A, Farrokhnia F: Unsupervised texture segmentation using Gabor filters. In Proceeding of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Los Angeles, CA; 1990:14-19.Google Scholar
  27. Beauchamp K: Applications of Walsh and Related Functions. Academic Press, New York; 1984.Google Scholar
  28. Williams D, Julesz B: Filters versus textons in human and machine texture discrimination. Academic Press, San Diego; 1992:145-175.Google Scholar
  29. Cohen LD: On active contour models and balloons. Comput. Vis. Graph. Image Process. Image Understand. 1991, 53(2):211-218.MATHGoogle Scholar
  30. Seo K, Lee J: Real-time object tracking and segmentation using adaptive color snake model. Int. J. Control. Autom. Syst. 2006, 4(2):236-246.MathSciNetGoogle Scholar

Copyright

© Tahvilian et al.; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
