 Research Article
 Open Access
Video Frames Reconstruction Based on Time-Frequency Analysis and Hermite Projection Method
EURASIP Journal on Advances in Signal Processing volume 2010, Article number: 970105 (2010)
Abstract
A method for temporal analysis and reconstruction of video sequences based on time-frequency analysis and the Hermite projection method is proposed. The S-method-based time-frequency distribution is used to characterize stationarity within the sequence. Namely, a sequence of DCT coefficients along the time axis is used to create a frequency-modulated signal. The reconstruction of nonstationary sequences is done using the Hermite expansion coefficients. Here, a small number of Hermite coefficients can be used, which may provide significant savings for some video-based applications. The results are illustrated with video examples.
1. Introduction
Video signal exchange and storage are very important in multimedia applications. For this purpose, different kinds of video processing techniques are needed, such as video compression algorithms, video denoising methods, and scene analysis [1–4]. Depending on the video quality and bit-rate constraints, various compression algorithms have been developed [5–10]. These algorithms commonly employ motion-compensated differential coding (known as P- and B-frames), that is, interframe prediction based on the reference frames (I-frames). I-frames are set at user-defined intervals (e.g., one key frame for every 5 frames, or 15 frames, etc.). Thus, the algorithm compares two images and sends only the parts of the following images (B- and P-frames) that differ from the reference image [5]. Examples of such algorithms are MPEG-2 compression and its improved version MPEG-4 [6]. A good implementation of MPEG-4 can additionally reduce the bit rate by approximately 15%, but it requires high processing power. Furthermore, the H.264 standard improves compression in comparison to MPEG-4 [6–8]. It offers many additional but optional tools, so the compression ratio varies significantly across implementations. The most popular Baseline Profile provides a bit-rate reduction of 10%–30% over MPEG-4, but it requires almost twice the CPU power; an overly simple H.264 implementation may produce worse results than an MPEG-4 implementation, while the Main Profile is computationally heavy. Finally, some applications use the Motion JPEG (MJPEG) multimedia format, where video frames are separately compressed as JPEG images [9]. It does not include interframe prediction, which results in a lower compression ratio. However, it has been commonly used by digital still cameras for the unified treatment of still and video compression. Also, it has been used by IP-based video cameras via HTTP streams.
Here, we propose a method for video sequence reconstruction based on time-frequency analysis and Hermite projections. The main goal of this paper is not to provide a specific compression solution for video applications, but rather an auxiliary tool for other video processing algorithms, such as video surveillance, motion tracking, and video compression. Combined with the existing compression algorithms, this approach can additionally reduce the amount of data required for high-quality video reconstruction. It does not use exhaustive search procedures for motion estimation, spatial or temporal prediction, or the computationally demanding advanced options included in other approaches. The proposed procedure can be applied to the coefficients of a raw video format, to the reference frames (I-frames) of coded video, or to the coefficients within a sequence of JPEG images. Therefore, the possibility to merge it with the existing techniques could be interesting for researchers and could provide additional improvements of the compression ratio.
The procedure consists of two parts. The first one employs time-frequency analysis to examine the temporal stationarity/nonstationarity of the coefficients over time. When observing a sequence of video frames, one may distinguish between stationary scene regions that do not change over time and dynamic scene regions containing moving objects (nonstationary regions). Video sequences usually contain noise, causing coefficients to vary even in the absence of moving objects. In order to reduce the noise influence, we propose a time-frequency-based procedure for the characterization of temporally stationary and nonstationary coefficients. Various time-frequency distributions have been used for the analysis of noisy nonstationary signals with different instantaneous frequency laws [11, 12]. Here, we focus on a computationally efficient quadratic distribution called the S-method [13, 14]. To characterize temporal behaviour, the sequence of coefficients at a given position is analyzed by using the S-method.
The second part of the proposed procedure deals with the high-quality reconstruction of the coefficients. The reconstruction of a stationary sequence is based on its first coefficient. On the other hand, the efficient reconstruction of nonstationary sequences of coefficients is obtained by using the Hermite projection method [15]. Namely, by using a certain number of Hermite coefficients, a nonstationary sequence can be reconstructed. This number can be considerably smaller than the length of the original sequence. Although the quality of the reconstructed video depends on the number of Hermite functions, significant savings can be achieved even if a high video quality is required.
The paper is organized as follows. Section 2 describes the theory behind the timefrequency analysis and its application for characterizing the temporal stationarity. In Section 3, the reconstruction procedure based on the Hermite projection method is proposed. In Section 4, the proposed method is applied to the examples. Concluding remarks are given in Section 5.
2. Theoretical Background
A brief theoretical background on the S-method-based time-frequency analysis and the Hermite projection method is presented in this section. The time-frequency analysis will be used to characterize the stationarity of video coefficients over time, while the Hermite projection method reduces the amount of data for high-quality video reconstruction.
2.1. Time-Frequency Analysis: The S-Method
Time-frequency representations have been used to analyze the time-varying spectral properties of nonstationary signals. The commonly used approaches are obtained by introducing time dependency into the Fourier analysis using the time-windowing technique. Hence, the short-time Fourier transform (STFT) is defined as follows [12]:

$$\mathrm{STFT}(t,\omega)=\int_{-\infty}^{\infty} f(t+\tau)\,w(\tau)\,e^{-j\omega\tau}\,d\tau,$$

where f(t) is the signal and w(t) is a window function. The spectrogram is the energetic version of the STFT and it is defined as $\mathrm{SPEC}(t,\omega)=|\mathrm{STFT}(t,\omega)|^{2}$. The main drawback of the spectrogram is a low time-frequency resolution. Therefore, quadratic distributions have been introduced to improve time-frequency concentration. An efficient quadratic time-frequency distribution is obtained by the S-method. It is defined as follows [13]:
$$\mathrm{SM}(t,\omega)=\int_{-\infty}^{\infty} P(\theta)\,\mathrm{STFT}(t,\omega+\theta)\,\mathrm{STFT}^{*}(t,\omega-\theta)\,d\theta,$$

where P(θ) is a finite frequency-domain window. The S-method preserves the auto-components concentration as in the Wigner distribution, but significantly reduces or removes the cross-terms. Unlike the Wigner distribution, oversampling in the time domain is not necessary, because the aliasing components are removed in the same way as the cross-terms. The discrete form of the S-method can be written as follows:

$$\mathrm{SM}(n,k)=\sum_{l=-L}^{L} \mathrm{STFT}(n,k+l)\,\mathrm{STFT}^{*}(n,k-l),$$
where n and k denote discrete time and frequency, respectively, while a rectangular window P(l) is assumed. The parameter L determines the frequency window width, which is 2L + 1. By windowing the product STFT(n, k + l)STFT*(n, k − l) in the convolution through the narrow window P(l), the cross-terms are reduced or even removed. Thus, by choosing an appropriate value of L, the sharpness of the Wigner distribution can be preserved while avoiding the cross-terms. Namely, a high auto-terms concentration is obtained with only a few summation terms, due to the fast convergence within the window. Hence, in many practical applications a small L is a suitable choice. Also, as shown in the sequel, a lower L value requires fewer computations.
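The discrete S-method above can be sketched numerically; the rectangular-window STFT helper and the circular frequency shifts via `np.roll` are simplifying assumptions of this sketch:

```python
import numpy as np

def stft(x, win_len):
    """STFT with a rectangular window and a hop of one sample."""
    frames = np.stack([x[i:i + win_len] for i in range(len(x) - win_len + 1)])
    return np.fft.fft(frames, axis=1)

def s_method(x, win_len=64, L=3):
    """SM(n, k) = sum_{l=-L}^{L} STFT(n, k+l) * conj(STFT(n, k-l)),
    with a rectangular frequency window P(l) of width 2L + 1."""
    F = stft(x, win_len)
    sm = np.zeros(F.shape)
    for l in range(-L, L + 1):
        # np.roll implements the frequency shifts k+l and k-l (circularly)
        sm += np.real(np.roll(F, -l, axis=1) * np.conj(np.roll(F, l, axis=1)))
    return sm
```

For L = 0 the sum reduces to the spectrogram, while increasing L sharpens the auto-terms toward the Wigner distribution, as described above.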
The S-method is computationally less demanding than other quadratic distributions: for a window of N samples, it requires considerably fewer complex multiplications and additions than the Wigner distribution. Also, the S-method allows a simple and efficient hardware realization, which has already been reported [14].
2.2. Fast Hermite Projection Method
The Hermite projection method has been introduced in various image and speech processing applications [15–19]. Namely, it has been shown that this method can be efficient in image database retrieval, image filtering, texture analysis, text-independent speaker identification, and so forth. The expansion into Hermite functions provides good localization in both the signal and the transform domain. Although the computation of Hermite functions seems to be a demanding task, they can be easily obtained using the recursive realization as follows:

$$\psi_0(t)=\pi^{-1/4}e^{-t^{2}/2},\qquad \psi_1(t)=\sqrt{2}\,t\,\pi^{-1/4}e^{-t^{2}/2},$$
$$\psi_p(t)=t\sqrt{\frac{2}{p}}\,\psi_{p-1}(t)-\sqrt{\frac{p-1}{p}}\,\psi_{p-2}(t),\quad p\ge 2.$$
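The recursion can be implemented directly; a minimal sketch (the function name is ours):

```python
import numpy as np

def hermite_functions(N, t):
    """Evaluate the first N Hermite functions on the grid t using the
    stable three-term recursion (avoids factorials and overflow)."""
    t = np.asarray(t, dtype=float)
    psi = np.zeros((N, t.size))
    psi[0] = np.pi ** (-0.25) * np.exp(-t ** 2 / 2)
    if N > 1:
        psi[1] = np.sqrt(2.0) * t * psi[0]
    for p in range(2, N):
        psi[p] = (t * np.sqrt(2.0 / p) * psi[p - 1]
                  - np.sqrt((p - 1) / p) * psi[p - 2])
    return psi
```

The functions produced this way are orthonormal on the real line, which is the property the projection method relies on.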
The first step in the Hermite projection method is to remove the baseline, since the Hermite functions vanish at the ends of the expansion interval while the signal values generally do not. The baseline is defined as follows:

$$f_b(x)=f(a)+\frac{x-a}{b-a}\bigl(f(b)-f(a)\bigr),\quad x\in[a,b],$$

where f is the considered signal on the interval [a, b]; for a two-dimensional signal, the baseline is computed along one coordinate for a fixed value of the other. Further, the baseline is subtracted from the original values as follows:

$$\tilde{f}(x)=f(x)-f_b(x).$$
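Assuming the baseline is the straight line joining the first and last samples (the usual choice in the fast Hermite projection method [15]), its removal can be sketched as:

```python
import numpy as np

def remove_baseline(f):
    """Subtract the line through the first and last samples so that the
    residual vanishes at both ends of the interval."""
    f = np.asarray(f, dtype=float)
    k = np.arange(f.size)
    baseline = f[0] + (f[-1] - f[0]) * k / (f.size - 1)
    return f - baseline, baseline
```

Adding the baseline back after reconstruction restores the original signal range.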
The decomposition into N Hermite functions is defined as

$$\tilde{f}(x)\approx\sum_{p=0}^{N-1} c_p\,\psi_p(x),$$

where, for a two-dimensional signal, the expansion holds along one coordinate for a fixed value of the other, while the coefficients of the Hermite expansion are

$$c_p=\int_{-\infty}^{\infty}\tilde{f}(x)\,\psi_p(x)\,dx.$$
The fast Hermite projection method uses the Gauss-Hermite quadrature to calculate the Hermite expansion coefficients as follows [15, 16]:

$$c_p\approx\frac{1}{M}\sum_{m=1}^{M}\mu_m\,\psi_p(x_m)\,\tilde{f}(x_m),$$

where x_m are the zeros of the Mth-order Hermite polynomial H_M(x). The constants μ_m are obtained using the Hermite functions as follows:

$$\mu_m=\frac{1}{\bigl[\psi_{M-1}(x_m)\bigr]^{2}}.$$
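The quadrature can be checked numerically with NumPy's Gauss-Hermite nodes; folding the Gaussian weight back in as w_m e^{x_m^2} is an algebraically equivalent form of the constants above (a sketch):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def hermite_functions(N, t):
    """First N Hermite functions via the three-term recursion."""
    t = np.asarray(t, dtype=float)
    psi = np.zeros((N, t.size))
    psi[0] = np.pi ** (-0.25) * np.exp(-t ** 2 / 2)
    if N > 1:
        psi[1] = np.sqrt(2.0) * t * psi[0]
    for p in range(2, N):
        psi[p] = (t * np.sqrt(2.0 / p) * psi[p - 1]
                  - np.sqrt((p - 1) / p) * psi[p - 2])
    return psi

def hermite_coeffs(f, M):
    """c_p = ∫ f ψ_p dx approximated at the zeros x_m of H_M(x)."""
    x, w = hermgauss(M)              # nodes and weights for ∫ e^{-x^2} g(x) dx
    psi = hermite_functions(M, x)    # ψ_p evaluated at the quadrature nodes
    return psi @ (w * np.exp(x ** 2) * f(x))
```

Because the quadrature is exact for polynomials of degree up to 2M − 1, projecting ψ_0 itself recovers the unit coefficient vector to machine precision.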
3. Video Analysis and Reconstruction Using Time-Frequency Representations and Fast Hermite Projection Method
3.1. Analysis of Temporal Stationarity within the Video Sequence
By observing a video scene over time, there are usually some blocks that do not change (the box marked by 1 in Figure 1) while others vary, for example, due to the presence of moving objects (the box marked by 2 in Figure 1). These two types of blocks will be referred to as stationary and nonstationary blocks, respectively. For example, a temporal sequence of pixels belonging to a stationary block should represent a constant-amplitude signal, unlike a sequence of pixels from a nonstationary block. The same holds when a sequence of frequency coefficients, for example, Discrete Cosine Transform (DCT) coefficients, is observed instead of pixels. Thus, in order to analyze the stationarity/nonstationarity within the sequence of frames, the procedure described in the sequel can be applied to different coefficients. We focus on the DCT coefficients, since they are usually employed in image and video processing algorithms.
The video frames are split into 8 × 8 blocks and the DCT coefficients are calculated. Further, the sequence of DC coefficients within K consecutive frames is considered:

$$V_{DC}(x,y)=\{DC_1(x,y),DC_2(x,y),\ldots,DC_K(x,y)\},$$

where the block position (x, y) is determined by the position of its first coefficient, while the subscripts 1, ..., K indicate the frame numbers.
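Extracting such a sequence is straightforward; a sketch with a small orthonormal 2-D DCT (the helper names are ours):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of an N x N block."""
    N = block.shape[0]
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N)) * np.sqrt(2.0 / N)
    C[0, :] = np.sqrt(1.0 / N)   # DC row normalization
    return C @ block @ C.T

def dc_sequence(frames, x, y):
    """Temporal sequence V_DC(x, y) of DC coefficients of the 8x8 block
    whose top-left corner is at (x, y), over K frames."""
    return np.array([dct2(f[x:x + 8, y:y + 8])[0, 0] for f in frames])
```

With the orthonormal scaling, the DC coefficient of an 8 × 8 block equals its mean value multiplied by 8.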
The temporal sequence of coefficients may contain nonstationarities due to motion, noise, or luminance variations. Thus, a stationary sequence becomes slightly nonstationary even in the presence of a small amount of noise, and a direct comparison between consecutive coefficients may lead to an incorrect conclusion. Consequently, the raw sequence V_DC cannot be used to indicate whether a sequence is stationary or not. In order to eliminate the influence of noise, time-frequency analysis is employed. Therefore, the examination of stationarity is performed by using time-frequency-based instantaneous frequency estimation. The instantaneous frequency is estimated as the position of the time-frequency distribution maxima, as explained below.
Based on V_DC(x, y), a frequency-modulated signal is created [17], whose instantaneous frequency is proportional to the values of the coefficient sequence; μ is a constant that controls the time-frequency resolution and t is a time vector. Thus, for each block, 64 frequency-modulated signals are created. Further, for each such signal, the time-frequency distribution is obtained by using the S-method.
One may note that the position of the maxima of the S-method along the frequency axis follows the values of the coefficient sequence. Therefore, if the maxima position remains constant over time, the block at the position (x, y) is stationary and will remain unaltered within the K consecutive frames. Otherwise, the observed block is nonstationary.
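One way to sketch this test (the cumulative-phase FM construction and the plain spectrogram maxima are simplifying assumptions of this sketch; the paper uses the S-method):

```python
import numpy as np

def is_stationary(coeffs, mu=0.5, win=32):
    """Map a coefficient sequence to an FM signal whose instantaneous
    frequency tracks the coefficients, then check whether the
    time-frequency maxima stay on a single frequency bin."""
    x = np.exp(1j * mu * np.cumsum(np.asarray(coeffs, dtype=float)))
    peaks = [np.argmax(np.abs(np.fft.fft(x[n:n + win])))
             for n in range(0, len(x) - win + 1, win // 2)]
    return bool(np.all(np.array(peaks) == peaks[0]))
```

A constant coefficient sequence yields a flat ridge (stationary), while a jump in the coefficients moves the spectral peak to another bin (nonstationary).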
The AC components (the alternating components, that is, the remaining 63 components in the DCT block) within a stationary block are stationary as well. The AC components within a nonstationary block should be analyzed separately. The S-method of a sequence of DC components belonging to a nonstationary and a stationary block is given in Figures 2(a) and 2(b), respectively. Also, time-frequency representations of two AC components are included.
The time-frequency representation of a stationary sequence should be robust to a certain amount of noise, meaning that it should be flat even in the presence of noise. Otherwise, the nonstationarities caused by the noise may be interpreted as nonstationarities due to motion. Note that additive noise within the sequence becomes multiplicative after the frequency-modulated signal is formed. The performance of time-frequency distributions in the presence of multiplicative noise has been studied in the literature [20–23], where various analyses and optimality conditions have been derived. Here, numerous experiments have been performed to prove the good characteristics of the proposed approach in a noisy environment.
It has been shown (Figure 3) that the proposed method is robust in the presence of additional Gaussian noise (zero mean and variance up to 0.001) and impulse noise (noise density up to 0.002) added to the video frames. In particular, three cases are observed for a stationary sequence:

(i) Figure 3(a): no additional noise (just the noise caused by luminance variations),

(ii) Figure 3(b): with Gaussian noise,

(iii) Figure 3(c): with impulse noise.
In each case, one sample frame is illustrated (left), as well as the noisy sequence of DC coefficients and its time-frequency representation (right), which is flat even in the presence of noise.
In order to speed up the procedure, the S-method can be calculated for several components at the same time. Namely, the frequency-modulated signal can be modified into a multicomponent signal, where each additional component corresponds to an AC component within the block. The S-method provides a cross-terms-free representation, but the components have to be spaced apart from each other by using shifting constants. Namely, these constants are used to shift the components up and down from the central frequency, so that they do not overlap. They are integers whose values depend on the window width and can be chosen experimentally.
3.2. Hermite Projection-Based Temporal Reconstruction of Nonstationary Pixels within the Sequence of Video Frames
The Hermite functions are used as the basis functions for the video sequence expansion method due to their favorable properties. They represent an independent set of orthogonal functions with good localization. Therefore, they can provide a unique representation of signals, while the coefficients of the expansion are easily computed. Hence, the Hermite-function-based transform has been used in many applications for different types of signals, especially for images [15, 16]. Besides the Hermite functions, some other possible basis functions with desirable properties are the Legendre polynomials, Laguerre polynomials, Bessel functions, and so forth [18]. For instance, the Legendre polynomials are defined on normalized intervals and their Fourier transform has infinite spread; thus, it is difficult to determine the expansion coefficients when the original signal is not explicitly given. The uncertainty inequalities for Laguerre polynomials cannot be easily reduced to a form that involves only expansion coefficients. In the case of Bessel functions, the derivation of the coefficients from explicit or implicit information about the signal is very complicated [18].
Furthermore, by using the Hermite expansion, the signal energy is approximated by a numerical integral of the Gauss-Hermite type, which converges more rapidly than the rectangle rule used in the case of the DCT [19]. Therefore, the Hermite functions allow a higher concentration of signal energy at lower frequencies and lead to better compression.
Consider the pixels p(x, y) whose intensity varies over time. For K frames, we can observe a nonstationary sequence in the following form:

$$V(x,y)=\{p_1(x,y),p_2(x,y),\ldots,p_K(x,y)\},$$

where p_k(x, y) represents the pixel value in the kth frame. The sequence can be decomposed into N Hermite functions. A sequence of K elements can be reconstructed even by a small number of Hermite coefficients c_p, that is, for N < K. An error, depending on the value of N, is introduced by the reconstruction. Thus, with a suitable choice of N, a sequence with K pixels can be represented using a smaller number (N) of coefficients without significant quality degradation.
Instead of pixels, one can reconstruct the DCT coefficients within the blocks. For instance, a temporal sequence of DC components is formed from the blocks whose central pixels are at the position (x, y), in the same way as above.
The original nonstationary sequence V_DC for K = 360 video frames is illustrated in Figure 4(a). Its time-frequency representation is given in Figure 2(a) (frames from 224 to 584). Two reconstructed sequences, obtained with two different numbers of Hermite coefficients, are illustrated in Figures 4(b) and 4(c), respectively. An additional moving-average smoothing procedure is applied as well,
where the smoothing is applied to the kth element of the sequence reconstructed by N Hermite coefficients. Namely, the moving-average smoothing is used to reduce the errors introduced by the reconstruction when the number of Hermite coefficients is significantly lower than the number of the original coefficients.
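A sketch of this reconstruction step (a discrete least-squares projection stands in for the Gauss-Hermite quadrature of Section 2.2, and the time span of the grid is an assumption):

```python
import numpy as np

def hermite_functions(N, t):
    """First N Hermite functions via the three-term recursion."""
    t = np.asarray(t, dtype=float)
    psi = np.zeros((N, t.size))
    psi[0] = np.pi ** (-0.25) * np.exp(-t ** 2 / 2)
    if N > 1:
        psi[1] = np.sqrt(2.0) * t * psi[0]
    for p in range(2, N):
        psi[p] = (t * np.sqrt(2.0 / p) * psi[p - 1]
                  - np.sqrt((p - 1) / p) * psi[p - 2])
    return psi

def hermite_reconstruct(seq, N, span=4.0):
    """Represent a length-K sequence by N < K Hermite coefficients
    and rebuild it; returns (coefficients, reconstruction)."""
    seq = np.asarray(seq, dtype=float)
    t = np.linspace(-span, span, seq.size)
    basis = hermite_functions(N, t).T            # K x N basis matrix
    c, *_ = np.linalg.lstsq(basis, seq, rcond=None)
    return c, basis @ c

def moving_average(x, w=5):
    """Moving-average smoothing of the reconstructed sequence."""
    return np.convolve(x, np.ones(w) / w, mode='same')
```

The compression comes from storing only the N coefficients; a slowly varying sequence of 120 samples is recovered almost exactly from 10 of them.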
Therefore, in the first case, the sequence is reconstructed by using a number of Hermite coefficients that is half the number of original coefficients, that is, N = K/2. In the second case, the saving rate is even higher.
The previously described procedure should be done for all AC components, as well.
4. Examples
Example 1.
A video sequence with 1200 frames (48 seconds) is considered. It was recorded by a video surveillance camera in a shopping center. It is split into three parts in order to illustrate different moving objects. Several frames from each of them are merged in Figure 5.
First, the temporal stationarity of the blocks is analyzed. For this purpose, the frames are divided into 8 × 8 blocks and the DCT is performed. Then, the DC sequences are obtained for all block positions.
In the time-frequency analysis, the window width influences the resolution in the time-frequency domain. A narrow window produces good time resolution, while a wide window produces good frequency resolution. In practical applications, the window width should be chosen to provide a good trade-off between the resolutions along the two axes. Here, window widths of 32, 64, and 128 samples were analyzed, and it has been shown experimentally that the width of 64 samples is the most appropriate for the considered sequence length. Thus, the stationarity of a DC sequence is analyzed by using the S-method with a window width of 64 samples. An appropriate value of L is chosen to produce a smoothed representation of stationary coefficients, while keeping the variations of nonstationary (dynamic) coefficients still intensive.
Here, three representative cases are observed as follows:

(i) a stationary block (e.g., box 1 in Figure 5),

(ii) a partly nonstationary block (e.g., box 2), and

(iii) a nonstationary block (e.g., box 3).
The blocks with DC sequences producing a constant value in the time-frequency domain (Figure 6(a)) are stationary over the considered time and can be reconstructed from the first frame. Therefore, a temporally stationary sequence of DC components is reconstructed over time by a single coefficient. The same holds for the AC components from a stationary block.
Furthermore, we have considered a sequence which is a combination of stationary and nonstationary ones. Namely, a sequence of blocks that is mostly stationary over time and has just a couple of short nonstationary parts (Figure 6(b)) will be called partly nonstationary. Here, we assume that a partly nonstationary sequence has at least 2/3 of stationary coefficients over time (800 out of 1200 coefficients). In other words, the time-frequency representation of a partly nonstationary sequence is flat along at least 2/3 of the sequence length. For instance, the partly nonstationary sequence presented by the S-method in Figure 6(b) can be reconstructed as follows:

(i) stationary part (frames 1:360): 1 coefficient,

(ii) nonstationary part (frames 361:450): 60 Hermite coefficients, that is, K/N = 1.5,

(iii) stationary part (frames 451:900): 1 coefficient,

(iv) nonstationary part (frames 901:1200): 200 Hermite coefficients, that is, K/N = 1.5.
Thus, the total number of coefficients required for the reconstruction of the partly nonstationary sequence (Figure 6(b)) of length 1200 is 262. Note that two coefficients should be added for the baseline calculation of each nonstationary part. However, they do not have a significant influence on the total number of coefficients.
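The bookkeeping for this sequence can be checked directly:

```python
# Coefficient budget for the partly nonstationary sequence of Figure 6(b):
# stationary runs cost one coefficient each; nonstationary runs are
# reconstructed with K/N = 1.5 Hermite coefficients.
segments = [
    ("stationary", 1, 360, 1),
    ("nonstationary", 361, 450, 60),
    ("stationary", 451, 900, 1),
    ("nonstationary", 901, 1200, 200),
]
total_frames = segments[-1][2]
total_coeffs = sum(n for _, _, _, n in segments)
print(total_frames, total_coeffs)   # 1200 frames -> 262 coefficients
```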
The block whose DC sequence is mostly made of nonstationary segments is called a nonstationary block. An illustrative example is given in Figure 6(c). Due to its complexity and dynamics, the reconstruction of such a sequence requires a higher number of coefficients:

(i) nonstationary part (frames 1:360): 257 Hermite coefficients,

(ii) stationary part (frames 361:460): 1 coefficient,

(iii) nonstationary part (frames 461:520): 42 coefficients,

(iv) stationary part (frames 521:690): 1 coefficient,

(v) nonstationary part (frames 691:1100): 230 coefficients,

(vi) stationary part (frames 1101:1200): 1 coefficient.
The total number of coefficients is 532 (without the baseline ones). For the three observed sequences, the average number of coefficients required for the reconstruction is 265 per sequence. This provides an average saving ratio of approximately 4.5 : 1 (1200/265).
Note that, if the DC component is nonstationary, most of the AC components are also nonstationary. The S-method obtained for a few AC components within the nonstationary block is shown in Figures 7(a)–7(d). In the case of the AC components reconstruction, a high quality is achieved with an even smaller number of Hermite coefficients. Although the block is nonstationary, some coefficients (e.g., AC(4, 4) in Figure 7(d)) can be partly nonstationary and require just a partial reconstruction with Hermite coefficients.
The total number of stationary, partly nonstationary, and nonstationary blocks within the 1200 frames of the observed sequences is given in Table 1. For the sake of simplicity, it is assumed that all 64 components within the block have almost the same temporal behavior. Nevertheless, there could be slight variations for some of the AC components.
From the presented statistics, we can calculate the total number of coefficients for video reconstruction, which is approximately 20% of the number of original coefficients.
Some of the reconstructed and original nonstationary blocks are illustrated in Figure 8. Each row presents a reconstructed block (left) versus its original version (right). The blocks are chosen randomly from different frames to illustrate the quality of reconstruction. Note that the difference between the original and reconstructed blocks is imperceptible. Additionally, an original frame and the corresponding reconstructed frame are shown in Figure 9. It can be seen that the reconstructed frame preserves the quality of the original one.
The peak signal-to-noise ratio (PSNR) is approximately 47 dB, which is significantly higher than in other compression algorithms [10]. As previously estimated, the proposed method requires approximately 20% of the original coefficients for such a high-quality reconstruction, entailing a compression ratio of 5 : 1. Thus, if combined with the existing algorithms, it may significantly improve the total compression ratio without degrading the quality. The estimated compression ratio can be further increased, which will produce a lower PSNR.
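PSNR here is the standard measure; a minimal sketch for 8-bit frames:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    diff = (np.asarray(original, dtype=float)
            - np.asarray(reconstructed, dtype=float))
    mse = np.mean(diff ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

For 8-bit data, an average error of one grey level per pixel corresponds to roughly 48 dB, so PSNR ≈ 47 dB indicates an essentially imperceptible reconstruction error.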
Note that the number of Hermite coefficients N used for the reconstruction has been set empirically, based on a large number of tests. Namely, in the experiments we have K/N = 1.5 with PSNR ≈ 47 dB. By increasing the ratio K/N, the PSNR between the original and reconstructed frames slowly decreases (e.g., K/N = 1.8 ⇒ PSNR ≈ 43 dB, K/N = 2.2 ⇒ PSNR ≈ 40 dB, etc.).
Example 2.
This example aims to show that the proposed method can be applied even to a set of nonconsecutive frames, such as the I-frames in an MPEG sequence. For this purpose, we made a new sequence of frames, called the I sequence, by selecting every 13th frame from the starting video sequence (we assumed that the I-frame rate is set at every 13 frames). However, without loss of generality, we could also use every 5th, 12th, or 15th frame, depending on the I-frame refresh rate, which can be user defined. The total number of frames within the sequence is 126. Due to the smaller number of coefficients than in the previous case, the window width for the calculation of the S-method is 42 samples.
In order to optimize the processing time, the S-method is calculated for several components at once. The illustrations are given in Figure 10, where the multicomponent time-frequency representation is given for four DCT components from two image blocks. Note that the DCT components within the first block (Figure 10(a)) are mostly stationary, unlike the components from the second block.
The reconstruction procedure is performed for each coefficient as described in the previous example. The stationary segments are reconstructed by a single coefficient, while the nonstationary parts of the DC components and the nonstationary segments of the AC sequences are reconstructed with appropriate K/N ratios. An example with the original and the corresponding reconstructed sequence is shown in Figure 11. The reconstructed and the corresponding original blocks from different frames are zoomed in Figure 12. The same blocks as in Example 1 are observed. Although the I sequence contains significant discontinuities compared to the case when each frame is used, the proposed approach again provides a high-quality reconstruction, with a slightly lower PSNR than in the previous example.
Example 3 (Performance comparison with MJPEG).
In this example, we discuss a simple solution for combining the proposed approach with the Motion JPEG algorithm in order to improve the compression ratio. A part of a video sequence with 126 JPEG frames (as a basis of the MJPEG format), of total size 1.38 MB, is used. All frames have the same size, while the average number of bits per coded block is denoted by No.
The proposed approach classifies the DCT blocks into stationary (S) and nonstationary (NS) ones, and the numbers of S and NS blocks in the considered sequence are determined. All the coefficients from the S blocks are constant over time and can be reconstructed from the corresponding blocks of the first frame. Thus, while the set of 126 JPEG frames requires No × 126 bits per S-block position, the proposed approach needs only No bits to represent the coefficients of each S block.
Each NS block of DCT coefficients during the 126 frames forms a matrix of size 64 × 126. Using the proposed approach, it is represented by a matrix of Hermite coefficients of size 64 × 70. In other words, instead of 126 DCT blocks, we have 70 blocks of Hermite coefficients. The blocks of Hermite coefficients (rounded to integer values) look very similar to the quantized DCT blocks, having the same range and distribution of values. Thus, they can be treated and coded in the same way as the DCT blocks in the JPEG algorithm (zigzag scan, lossless entropy coding, etc.). The total number of bits (for the observed sequence) can be calculated as follows:

(i) for Motion JPEG: No × 126 bits per block position,

(ii) for the combined (proposed + MJPEG) approach: No bits per S-block position and No × 70 bits per NS-block position.
In this example, the combined approach leads to an approximately 10 times smaller size of the video sequence.
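The accounting above can be sketched with hypothetical block counts (n_s, n_ns, and bits_per_block are illustrative values, not the paper's):

```python
# Hypothetical bit budget: every coded 8x8 block costs bits_per_block bits.
n_s, n_ns = 840, 160            # assumed stationary / nonstationary counts
bits_per_block = 400            # assumed average coded block size in bits
frames, hermite_blocks = 126, 70

# MJPEG codes every block position in every frame.
mjpeg_bits = (n_s + n_ns) * bits_per_block * frames
# The combined scheme codes S blocks once and NS blocks as 70 Hermite blocks.
combined_bits = n_s * bits_per_block + n_ns * bits_per_block * hermite_blocks
print(round(mjpeg_bits / combined_bits, 1))
```

With these assumed counts the ratio lands near 10 : 1; the actual gain depends on the proportion of stationary blocks in the scene.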
5. Conclusion
The proposed method for video sequence reconstruction employs two different signal processing techniques: time-frequency analysis and the Hermite projection method. The time-frequency distribution provides an efficient analysis of the temporal variations of coefficients. In that sense, it is used to distinguish stationary from nonstationary coefficients. Temporally nonstationary coefficients are reconstructed using a smaller number of Hermite expansion coefficients. The results have shown that a high-quality video reconstruction can be achieved by using a significantly reduced number of coefficients. An additional improvement can be obtained by using the JPEG compression to reduce the number of AC components that should be reconstructed. Future work could include the time-frequency-based analysis of temporal stationarity in video surveillance applications to detect the appearance of moving objects. For instance, the surveillance system may ignore nonstationarities of short duration (e.g., a bird flying over), while attention should be paid when nonstationary segments last longer (meaning that significant movements appear). To make the proposed method fast enough for possible real-time applications, it would be necessary to develop a special-purpose hardware implementation.
References
 1.
Sullivan GJ, Wiegand T: Video compression - from concepts to the H.264/AVC standard. Proceedings of the IEEE 2005, 93(1):18-31.
 2.
Mitchell JL, Pennebaker WB, Fogg CE, LeGall DJ: MPEG Video Compression Standard. Chapman & Hall, Boca Raton, Fla, USA; 1997.
 3.
Pižurica A, Zlokolica V, Philips W: Noise reduction in video sequences using wavelet-domain and temporal filtering. Wavelet Applications in Industrial Processing, October 2003, Proceedings of SPIE 5266: 48-59.
 4.
Zlokolica V, Pižurica A, Philips W: Wavelet-domain video denoising based on reliability measures. IEEE Transactions on Circuits and Systems for Video Technology 2006, 16(8):993-1007.
 5.
Sikora T: MPEG digital video coding standards. In Digital Electronics Consumer Handbook. McGraw-Hill, New York, NY, USA; 1997.
 6.
Richardson IE: H.264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia. John Wiley & Sons, New York, NY, USA; 2003.
 7.
Sullivan GJ, Topiwala P, Luthra A: The H.264/AVC advanced video coding standard: overview and introduction to the fidelity range extensions. Applications of Digital Image Processing XXVII, August 2004, Proceedings of SPIE 5558: 454-474.
 8.
Wiegand T, Sullivan GJ, Bjøntegaard G, Luthra A: Overview of the H.264/AVC video coding standard. IEEE Transactions on Circuits and Systems for Video Technology 2003, 13(7):560-576.
 9.
Pearson G, Gill M: An evaluation of Motion JPEG 2000 for video archiving. Proceedings of Archiving 2005, April 2005, Washington, DC, USA, 237-243.
 10.
Hakeem A, Shafique K, Shah M: An object-based video coding framework for video sequences obtained from static cameras. Proceedings of the 13th Annual ACM International Conference on Multimedia (MULTIMEDIA '05), November 2005, Singapore, 608-617.
 11.
Cohen L: Time-Frequency Analysis. Prentice Hall, Upper Saddle River, NJ, USA; 1995.
 12.
Boashash B: Estimating and interpreting the instantaneous frequency of a signal - Part 1: fundamentals. Proceedings of the IEEE 1992, 80(4):520-538. 10.1109/5.135376
 13.
Stanković L: A method for time-frequency analysis. IEEE Transactions on Signal Processing 1994, 42(1):225-229. 10.1109/78.258146
 14.
Stanković S, Stanković L, Ivanović V, Stojanović R: An architecture for the VLSI design of systems for time-frequency analysis and time-varying filtering. Annales des Télécommunications 2002, 57(9-10):974-995.
 15.
Krylov A, Korchagin D: Fast Hermite projection method. Proceedings of the 3rd International Conference on Image Analysis and Recognition (ICIAR '06), September 2006, Póvoa de Varzim, Portugal, Lecture Notes in Computer Science 4141: 329-338.
 16.
Kortchagine DN, Krylov AS: Projection filtering in image processing. Proceedings of the International Conference on Computer Graphics and Vision (Graphicon '00), 42-45.
 17.
Stanković S, Orović I, Žarić N: An application of multidimensional time-frequency analysis as a base for the unified watermarking approach. IEEE Transactions on Image Processing 2010, 19(3):736-745.
 18.
Venkatesh YV: Hermite polynomials for signal reconstruction from zero-crossings. Part 1: one-dimensional signals. IEE Proceedings, Part I 1992, 139(6):587-596.
 19.
Lazaridis P, Debarge G, Gallion P, et al.: Signal compression method for biomedical image using the discrete orthogonal Gauss-Hermite transform. Proceedings of the 6th WSEAS International Conference on Signal Processing, Computational Geometry & Artificial Vision, August 2006, 34-38.
 20.
Barkat B: Analysis of frequency-modulated signals in multiplicative noise. Proceedings of the 6th International Symposium on Signal Processing and Its Applications, 2001, 2: 753-756.
 21.
Barkat B: Instantaneous frequency estimation of nonlinear frequency-modulated signals in the presence of multiplicative and additive noise. IEEE Transactions on Signal Processing 2001, 49(10):2214-2222. 10.1109/78.950777
 22.
Boashash B, Ristic B: Polynomial time-frequency distributions and time-varying higher order spectra: application to the analysis of multicomponent FM signals and to the treatment of multiplicative noise. Signal Processing 1998, 67(1):1-23. 10.1016/S0165-1684(98)00018-8
 23.
Nguyen LT: Estimation and separation of linear frequency-modulated signals in wireless communications using time-frequency signal processing, Ph.D. thesis. Signal Processing Research Center, Queensland University of Technology, Brisbane, Australia; 2004.
Acknowledgments
The authors are thankful to the anonymous reviewers for their valuable comments and suggestions. The test video data used in the experiments come from the EC Funded CAVIAR Project/IST 2001 37540, found at URL: http://homepages.inf.ed.ac.uk/rbf/CAVIAR/.
Cite this article
Stanković, S., Orović, I. & Krylov, A. Video Frames Reconstruction Based on Time-Frequency Analysis and Hermite Projection Method. EURASIP J. Adv. Signal Process. 2010, 970105 (2010). https://doi.org/10.1155/2010/970105
Keywords
 Discrete Cosine Transform
 Discrete Cosine Transform Coefficient
 Hermite Function
 Short Time Fourier Transform
 Discrete Cosine Transform Block