
Error analysis and implementation considerations of decoding algorithms for time-encoding machine

Abstract

Time-encoding circuits operate in an asynchronous mode and are therefore well suited to ultra-wideband applications. However, this asynchronous mode leads to nonuniform sampling, which requires computationally complex decoding algorithms to recover the input signals. In the encoding and decoding process, many non-idealities in the circuits and in the computing system can affect the final signal recovery. In this article, the sources of this distortion are analyzed to guide proper parameter setting. In the analysis, the decoding problem is generalized as a function approximation problem, and the characteristics of the bases used in existing algorithms are examined. These bases typically require long time support to reach good frequency properties. Long time support not only increases computational complexity, but also increases the approximation error when the signal is reconstructed in short patches. Hence, a new approximation basis, the Gaussian basis, which is more compact in both the time and frequency domains, is proposed. The reconstruction results from the different bases under different parameter settings are compared.

Introduction

Time encoding is an asynchronous process for mapping the amplitude information of a band-limited signal x(t) into a sequence of strictly increasing time points (t_k). Well-known nonlinear asynchronous analog circuits can be used to build a time-encoding machine (TEM). For example, the TEM shown in Figure 1 consists of an input transconductance amplifier, a feedback 1-bit DAC, an integrator, and a hysteresis quantizer. The output of such a TEM is a train of asynchronous pulses that change sign at the t_k. Using a TEM to transform the input analog signal into such pulses avoids the clock jitter that limits traditional ADCs in ultra-wideband applications [1] and provides better timing resolution. Furthermore, all the information in the original signal is preserved in the durations of the pulses. Hence, time encoding can also be used as a modulation scheme that modulates input analog signals onto pulses. The input signal can then be processed through the pulse durations in a manner similar to conventional signal processing on uniformly clocked, amplitude-quantized samples. Processing the asynchronous pulses has two main advantages: (1) it overcomes the limits of voltage resolution of analog signals in deep-submicron processes; (2) it overcomes the limits on the programmability of traditional analog processors. The pulses can also be used for direct communication of signals in very wideband systems as an alternative to existing UWB signals.

Figure 1. Time-encoding system.

Lazar and Tóth [2] have proved that band-limited signals encoded by TEM can be perfectly recovered in theory. However, the reconstruction algorithm requires the inversion of an infinite matrix. This problem can be solved by reconstructing small intervals of the signal ("clips") and stitching these "clips" together [3]. A Toeplitz formulation of the reconstruction problem was proposed by Lazar and co-workers [4] to increase the speed of the reconstruction algorithm. In both reconstruction algorithms, the inversion of an infinite matrix is replaced by a finite matrix inversion, but the recovered signal is no longer a perfect reconstruction. In addition, numerical errors and circuit noise in real systems limit the reconstruction accuracy. These non-idealities replace clock jitter as the limiting sources of errors. Understanding the effect of these non-idealities is necessary for determining the optimal design parameters for applying the TEM in a real system. In addition, by analyzing the non-idealities, we can determine the circuit specifications based on the system performance requirements, which is an important step in many applications.

The reconstruction process can be thought of as a generalized function approximation problem, and the choice of a proper basis is critical to the quality of the approximation. In this article, a new basis is proposed that overcomes some shortcomings of the bases used in the existing decoding algorithms.

A detailed study of the non-idealities encountered in the encoding and decoding process is carried out and reported in this article. We base our analysis on the system model in Figure 1. In particular, we analyze the error sources and their effects on the final reconstruction SNR. On the encoding side, the errors mainly come from circuit imperfections, including the nonlinearity of the amplifier, the deviation of circuit parameters from their set values in the hysteresis quantizer, and the quantization noise of the ADC. On the decoding side, the major error contributors are the basis approximation error and the numerical computation errors, including matrix inversion errors and matrix boundary problems. Since these errors originate in the software algorithms and the theoretical approximations, they are all incorporated in the Decoding Process block in Figure 1. Each error source is analyzed individually.

This article is organized as follows. The next section reviews existing reconstruction algorithms and introduces the new reconstruction basis. In "Non-ideality analysis," the origins of the non-idealities are analyzed and their effects are examined. Concluding remarks are given in the final section.

Reconstruction algorithms

Before discussing reconstruction algorithms, it is useful to understand the sampling process. Unlike traditional sampling processes, the TEM does not measure the amplitude of the input signal directly. Instead, it converts the amplitude information into time information through the nonlinear components in the TEM. For the circuit model in Figure 1, the operation equation of the TEM can be expressed as:

g_1 ∫_{t_k}^{t_{k+1}} x(u) du = (-1)^k [2δ - g_3 (t_{k+1} - t_k)],   (1)

where g_1 is the transconductance of the input amplifier, g_3 is the output level of the feedback DAC, and ±δ are the triggering levels of the hysteresis quantizer.

For a signal with maximum amplitude c, the interval between two time points, T_k = t_{k+1} - t_k, satisfies:

2δ/(g_3 + g_1 c) ≤ T_k ≤ 2δ/(g_3 - g_1 c).   (2)

Lazar and Tóth [2] proved that a finite-energy signal band-limited to [-Ω, Ω] can be perfectly recovered once the following condition is satisfied: the maximum interval between time points in (2) must be less than half of the minimum signal period, π/Ω, i.e.,

r = 2δΩ/(π(g_3 - g_1 c)) < 1.   (3)

The oversampling ratio (OSR) of a time-encoding system is the Nyquist period π/Ω divided by the average of the T_k's; according to (2), it is therefore bounded by the ratios of π/Ω to the maximum and minimum interval lengths. In fact, when time-limited bases are used for function approximation, there is no perfect signal reconstruction. The parameter r often plays an important role in the performance of the reconstruction algorithms. Hence, a comparison of algorithms is fair only if each operates at the same OSR, and we therefore report the OSR in many of our comparison results.
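To make the sampling model concrete, the following Python sketch simulates a TEM of the type shown in Figure 1 under simple assumptions: an ideal integrator, quantizer output levels of +1/-1, and illustrative values for g_1, g_3, δ, and the simulation step (none of these values are taken from the article).

    import numpy as np

    def tem_encode(x, dt, g1=1.0, g3=1.0, delta=0.01):
        """Integrate-and-toggle time encoder: a sketch of the TEM in Figure 1.

        x  : input signal sampled on a fine uniform grid (much finer than T_k)
        dt : spacing of that grid
        Returns the transition times t_k, found by linear interpolation.
        """
        y, z, t_k = 0.0, 1.0, []                    # integrator state, quantizer output
        for n, xn in enumerate(x):
            y_new = y + dt * (g1 * xn - g3 * z)     # forward-Euler integration
            # the hysteresis quantizer toggles when y crosses -z * delta
            target = -delta if z > 0 else delta
            if (z > 0 and y_new <= target) or (z < 0 and y_new >= target):
                frac = (target - y) / (y_new - y)   # locate the crossing instant
                t_k.append((n + frac) * dt)
                z = -z
            y = y_new
        return np.array(t_k)

    # Example: a tone with amplitude c = 0.33 < g3/g1, so the recovery condition can hold
    dt = 1e-4
    t = np.arange(0.0, 2.0, dt)
    tk = tem_encode(0.33 * np.sin(2 * np.pi * 3 * t), dt)
    Tk = np.diff(tk)   # the intervals T_k carry all the amplitude information

With these illustrative values, the intervals T_k fall between 2δ/(g_3 + g_1c) ≈ 0.015 and 2δ/(g_3 - g_1c) ≈ 0.030, consistent with (2).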

Reconstruction method I: iterative algorithm

Since time-encoding sampling is an asynchronous process, the time points do not lie on a uniform time grid. Hence, time encoding has many similarities to other non-uniform sampling processes. Typical non-uniform sampling reconstruction algorithms involve an iterative process [5]. Similarly, Lazar et al. proved that the iterative operation in (4) achieves perfect reconstruction:

(4)

where x_l is the reconstructed signal at the l-th iteration and A is an operator defined as:

(5)

Reconstruction method II: sinc basis

In the iterative algorithm, the result of the l-th iteration can be expressed as [2]

(6)

Taking the limit as l goes to infinity, the final reconstruction result can be expressed in matrix form as

(7)

where

(8)

and G^+ denotes the pseudo-inverse of G.
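As a numerical illustration of this matrix formulation, the sketch below builds a Gram matrix from scaled sinc functions centered at the interval midpoints and solves for the coefficients with a pseudo-inverse. It assumes g(t) = sin(Ωt)/(πt), centers s_k = (t_k + t_{k+1})/2, and measurements q_k already computed from the T_k via the TEM equation; the exact normalizations used in [2] may differ.

    import numpy as np
    from scipy.integrate import quad

    def sinc_basis_reconstruct(tk, q, Omega, t_eval):
        """Reconstruction method II (sketch): pseudo-inverse of a sinc Gram matrix."""
        g = lambda t: (Omega / np.pi) * np.sinc(Omega * t / np.pi)  # sin(Omega*t)/(pi*t)
        s = 0.5 * (tk[:-1] + tk[1:])          # basis centers at interval midpoints
        n = len(s)
        G = np.empty((n, n))
        for l in range(n):
            for k in range(n):
                # entry (l, k): integral of g(t - s_k) over the interval [t_l, t_{l+1}]
                G[l, k] = quad(lambda t: g(t - s[k]), tk[l], tk[l + 1])[0]
        c = np.linalg.pinv(G) @ q             # coefficients from the pseudo-inverse
        return sum(c[k] * g(t_eval - s[k]) for k in range(n))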

Reconstruction method III: Toeplitz formulation

Replacing the scaled sinc function g(t) by its trigonometric-polynomial approximation, the recovered signal in (7) can be expressed as [4]:

(9)

The coefficients c are obtained through the equation:

(10)

where

(11)

This method is so named because the matrix SS^H is a Hermitian Toeplitz matrix.

For a given space, any signal in the space can be expressed as a linear combination of basis functions of that space. In essence, the reconstruction process is therefore a function approximation problem, i.e., finding the coefficients associated with the basis functions. Uniformly spaced sinc functions form a complete basis for the space of band-limited signals. In traditional uniform sampling, these basis functions are orthogonal to each other [6], and the sampled values are the coefficients of the expansion. However, once the samples are taken non-uniformly, the sinc basis functions are no longer orthogonal, so the sampled values cannot be used directly as coefficients; instead, the coefficients must be solved for. Viewed this way, the major difference between methods II and III is that the basis functions of method II are scaled sinc functions while those of method III are scaled sine waves.

Using the same basis, the reconstruction process can also be formulated by a Vandermonde system as in [3]:

The coefficients c can then be obtained through the equation

where q is the same as in (11) and

Algorithms exist for solving linear systems involving a Vandermonde matrix [7] that avoid explicit matrix inversion. Hence, the Vandermonde formulation is numerically more stable. This advantage will be discussed further in "Non-ideality analysis".

Remark: The scaled sine basis is one type of trigonometric polynomial kernel. Other similar trigonometric polynomial kernels, such as the Dirichlet kernel, can also be used. One advantage of these kernels is that their integrals have closed forms, which reduces computational complexity.

Reconstruction method IV: Gaussian basis

The bases in methods II and III are both infinite in time, but in practice we have to use a finite basis, so the bases must be truncated. Although the infinite sinc functions can faithfully represent the signal, the same is no longer true for the truncated basis, which means that the sinc basis may not be the best basis for signal reconstruction. Similarly, the trigonometric polynomial kernels approximate periodic signals very well, but they can generate large errors when approximating general nonperiodic band-limited signals. Instead, a basis that is more compact than the sinc basis may be a better candidate for our application. Since we focus on band-limited signals, we also want the basis to be compact in the frequency domain. This motivates us to use the Gaussian function, which has the smallest time-frequency window [8]. The Gabor transform, which uses the Gaussian function as its basis, also finds wide use for expanding functions that are simultaneously limited in both time and frequency [9]. A basis derived from the Gaussian function, which has a flatter frequency response while exhibiting similar properties, is given by [10]:

(12)

Many different basis functions have been studied in the approximation literature, each with its own merits. Research by Lehmann et al. [11] shows that this Gaussian basis has the flattest passband and the smallest side lobes among all the finite-time bases they compared. Hence, we develop reconstruction method IV, which uses the Gaussian basis to reconstruct the signal as:

(13)

The coefficients c are obtained through the equation:

(14)

where

(15)

For all these methods, we make the bases finite by applying a window function w(t) to cut the signal into clips as in [3]:

(16)

Within each window, we solve equations to get the coefficients c as before.

For convenience of expression, the matrices G in (8), SS^H in (10), and K in (15) will all be designated hereafter as "Basis" matrices.
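All three non-iterative methods therefore share one computational skeleton: build a "Basis" matrix by integrating the chosen kernel over the measured intervals, solve a linear system for the coefficients c, and evaluate the expansion inside each window. The sketch below shows that skeleton with a plain Gaussian kernel standing in for the basis in (12); the exact basis of [10], the window handling, and the width parameter are not reproduced here and should be treated as illustrative assumptions.

    import numpy as np
    from scipy.integrate import quad

    def basis_matrix_reconstruct(tk, q, kernel, t_eval, rcond=1e-12):
        """Generic 'Basis'-matrix reconstruction shared by methods II-IV (sketch)."""
        s = 0.5 * (tk[:-1] + tk[1:])                  # kernel centers
        n = len(s)
        K = np.array([[quad(lambda t: kernel(t - s[k]), tk[l], tk[l + 1])[0]
                       for k in range(n)] for l in range(n)])
        c = np.linalg.pinv(K, rcond=rcond) @ q        # tolerance-controlled inverse
        return sum(c[k] * kernel(t_eval - s[k]) for k in range(n))

    # Stand-in Gaussian kernel; sigma trades time compactness against bandwidth
    sigma = 0.5
    gaussian_kernel = lambda t: np.exp(-t**2 / (2.0 * sigma**2))

Within each window of (16), the same routine is applied to the time points and measurements that fall inside the window, and the resulting clips are stitched together.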

Non-ideality analysis

Although in certain theoretical cases the signal sampled by the TEM can be perfectly recovered, in all practical applications there are multiple non-idealities that lead to reconstruction errors, both in the encoding and in the decoding processes. In this section, several common non-idealities are analyzed. Some reconstruction errors are affected by the choice of parameters used in the system. Sometimes a parameter has opposite effects on two different types of non-idealities, and a tradeoff study is required to find the optimal parameters. In previous error analyses, the authors assume an OSR of 2-3 [2]. Here, we are interested in a system with a much smaller OSR, because when sampling ultra-wideband signals a smaller OSR means smaller bandwidth requirements on the TEM and decoder circuitry. In our analysis and simulations, we restrict the OSR to be less than 2. In this case, the parameter r in (3) is close to 1, and hence reconstruction method I converges slowly. Measurement errors caused by non-idealities in the TEM circuit accumulate over iterations, and this limits the reconstruction accuracy. In our tests, whenever there is appreciable quantization noise in the measured time intervals, this method produces a high reconstruction mean square error (MSE). It is therefore not included in the following error analysis and comparison.

Sensitivity analysis and parameter selection

Since the TEM runs asynchronously, it has no clock and thus avoids the clock jitter that currently is one of the major limitations in high-rate, high-resolution ADCs [1]. However, two other common types of ADC non-idealities still exist: quantization noise (which includes thermal noise, comparator ambiguity, etc.) and circuit nonlinearity. There are also numerical errors in calculating the coefficients for the bases in the reconstruction process. Another circuit non-ideality is the implementation error of the circuit parameters.

Circuit parameter mismatch

Several circuit parameters are involved in the decoding process, including the gain of the amplifiers, the triggering level δ of the hysteresis quantizer, and the output voltage level of the quantizer. The effect of the amplifier will be analyzed later; here, we focus on the parameters of the hysteresis quantizer. In the previous analysis, we assumed that the output voltage of the quantizer is +1/-1. In a real circuit, this value will be some voltage b. The exact value of b does not affect the result as long as it is known accurately. However, a mismatch between the positive and negative levels, as well as imperfect knowledge of the value, increases the decoding error. The same holds for the triggering level δ of the quantizer, whose effect has been thoroughly analyzed in [2]. Notice that δ and b only appear in the calculation of the measurement q for all non-iterative methods, as can be seen in Equations 8, 11, and 15. Rewriting these equations using the real voltage value b, we get

(17)

Following the compensation principle in [2], by summing up the consecutive measurements as

(18)

we obtain a reconstruction algorithm that is insensitive to δ as

(19)

where the elements of B = [B_kl] are given by B_kl = 1 for k = l or k = l + 1 and zero otherwise. The matrix K here refers to the "Basis" matrix of any of the three methods. By applying the compensation principle, imperfect knowledge of δ will not affect the reconstruction result. Hence, we will assume perfect knowledge of δ.

From Equation 18, we can see that a mismatch between the positive and negative voltage levels of the quantizer, b_1 and b_2, further increases the inaccuracy of the time interval measurements. To an extent, this mismatch can be incorporated into the quantization noise discussed next. Since it is a multiplicative factor, its effect on the reconstruction result is complicated and is left for future study.
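A small sketch of the compensation step follows. It assumes, consistently with (18) and the definition of B above, that the δ-insensitive system is obtained by left-multiplying both the measurement vector q and the "Basis" matrix K by B before solving; the exact form of (19) is not reproduced here.

    import numpy as np

    def delta_insensitive_coefficients(K, q, rcond=1e-12):
        """Sum consecutive measurements so the unknown threshold delta cancels.

        B has ones on its main diagonal and first subdiagonal (B_kl = 1 for
        k = l or k = l + 1), so (B q)_k = q_k + q_{k-1}: the alternating
        +/- 2*delta contributions cancel in each sum (except in the first row).
        """
        n = len(q)
        B = np.eye(n) + np.eye(n, k=-1)      # main diagonal plus first subdiagonal
        return np.linalg.pinv(B @ K, rcond=rcond) @ (B @ q)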

Quantization noise

The quantization noise mainly comes from the ADC that is used to measure the interval between the transition time points t_k. Equation 2 can be used to determine the ADC's voltage range and DC bias. By removing the DC bias, we can set the ADC voltage range to 4δg_1c/(g_3^2 - g_1^2c^2) instead of 2δ/(g_3 - g_1c). This is equivalent to adding one extra bit to the ADC for g_3 = 1, g_1c = 0.33.

In the decoding machine analysis of [2], the authors analyzed the effect of quantization noise on the T_k's, but did not include the accumulation of noise with increasing k. Since the time points t_k are monotonically increasing, it is not realistic to measure the time points themselves. Instead, what is measured in a real circuit is the time intervals, i.e., the T_k's; the time points are then calculated from the measured intervals. The quantization noise in each measurement is independent and identically distributed [12]. Since the time points are calculated as the sum of measured time intervals, the variance of the quantization error of the time points increases with time. To overcome this problem, we developed a "resynchronization" scheme. After every N_r time intervals are measured, the difference δt between the calculated time points and the true time point is measured; the true time points can be obtained from a highly accurate external clock. This difference is then used to calibrate the measured time intervals. In this way, we can reduce or eliminate the accumulation of quantization error. The effect of the resynchronization period N_r is plotted in Figure 2. From this figure, we can see that the reconstruction SNR decreases linearly with the size of the resynchronization period. Since the resynchronization process requires extra measurements, the optimal resynchronization period is determined by a tradeoff between efficiency and reconstruction SNR.

Figure 2. The reconstruction error versus the resynchronization period. The resynchronization period is the number of time intervals T_k between resynchronizations.
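The resynchronization step can be sketched as follows. The article's exact correction formula is not shown above, so the sketch assumes the simplest choice: the accumulated error δt observed at each resynchronization point is spread uniformly over the preceding N_r intervals.

    import numpy as np

    def resync_intervals(T_meas, t_true_marks, Nr):
        """Correct quantized interval measurements using an accurate external clock.

        T_meas       : measured (quantized) intervals T_k
        t_true_marks : true time points from the external clock, one per block
                       of Nr intervals (so len(t_true_marks) = ceil(len(T_meas)/Nr))
        """
        T_corr = np.asarray(T_meas, dtype=float).copy()
        t_calc = 0.0
        for m, start in enumerate(range(0, len(T_corr), Nr)):
            block = slice(start, min(start + Nr, len(T_corr)))
            t_calc += T_corr[block].sum()
            delta_t = t_calc - t_true_marks[m]                      # accumulated error
            T_corr[block] -= delta_t / (block.stop - block.start)   # spread correction
            t_calc -= delta_t                                       # running total re-anchored
        return T_corr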

Amplifier nonlinearity

Although the TEM is a nonlinear system, its linear components still need to maintain high linearity to avoid distortion in the measurements. An important linear component in the system is the amplifier (the Gm cell in Figure 1). When the amplifier is nonlinear, not only does it fail to amplify the signal by the assumed gain, it also generates harmonics of the signal. We use a simple hyperbolic function to model the nonlinearity of the amplifier, with n_l representing the strength of the nonlinearity. When the input is composed of two tones, the output of the amplifier is:

(20)

The effect of the amplifier nonlinearity is simulated and shown in Figure 3. The reconstruction signal-to-noise and distortion ratio (SNDR) is converted to effective number of bits (ENOB) through the equation

Figure 3
figure 3

Effect of amplifier nonlinearity. Red line is calculated from the reconstruction SNDR of a traditional ADC; black line is from the reconstruction SNDR of the TEM system.

(21)

At low nonlinearity, the TEM system performs much better than the traditional ADC. When nonlinearity increases, the performance of the TEM system deteriorates quickly and is worse than that of the traditional ADC at high nonlinearity.
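The two ingredients of this comparison can be sketched as follows, under simple assumptions: a hyperbolic-tangent amplifier model (the exact two-tone expression in (20) is not reproduced here; tanh is used as one common hyperbolic choice) and the standard SNDR-to-ENOB conversion of (21).

    import numpy as np

    def nonlinear_amp(x, nl):
        """Memoryless amplifier model; nl sets the nonlinearity strength.
        tanh(nl*x)/nl approaches the ideal unity-gain amplifier as nl -> 0."""
        return np.tanh(nl * x) / nl if nl > 0 else x

    def sndr_to_enob(sndr_db):
        """Standard conversion from SNDR in dB to effective number of bits."""
        return (sndr_db - 1.76) / 6.02

    # Example: a two-tone input passed through the nonlinear amplifier
    t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
    x = 0.2 * np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 7 * t)
    y = nonlinear_amp(x, nl=0.5)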

Basis approximation error

The uniformly spaced, infinite-length sinc functions form a complete basis for the space of band-limited signals. However, when the sinc functions are time-limited and non-uniformly spaced, they are no longer a complete basis. The bases used in the other reconstruction methods are not complete for the space of band-limited signals either, so using any of these bases to approximate the input signal generates an approximation error. Intuitively, we want the basis to closely resemble the boxcar shape of the infinite sinc function's frequency response. To compare how well the bases approximate, the time and frequency responses of the three bases used in reconstruction methods II-IV are shown in Figure 4a,b. All bases are cut off at t = 5 to make them time-limited. The frequency axis in the plots is normalized so that the signal bandwidth 2Ω maps to the interval from -0.5 to 0.5. The approximate sinc basis, which is the basis used in the Toeplitz formulation, is evaluated with N = 12.

Figure 4. The time and frequency responses of the three basis functions in reconstruction methods II-IV: (a) time response; (b) frequency response.

As can be seen from Figure 4a, the envelopes of the sinc and approximate sinc bases decay slowly. Note that a long time window is necessary for these bases to have a good frequency response. However, a long time window increases the condition number of the basis matrix, resulting in higher numerical error, which will be discussed next. The Gaussian basis is compact in the time domain; hence, its basis matrix has a much lower condition number, resulting in smaller numerical error. However, it is not very flat in the passband from -0.5 to 0.5 in Figure 4b, and the transition from passband to stopband is not very sharp either. By sacrificing time compactness through increasing γ in (12), we can narrow the transition band. But expanding the basis in time corresponds to reducing its bandwidth. Hence, the Gaussian basis typically requires a higher OSR than the other two bases for the same recovery error.
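A comparison like Figure 4b can be reproduced numerically by sampling each truncated basis on a fine grid and examining the magnitude of its discrete Fourier transform. The grid, the cut-off at |t| = 5, and the Gaussian width below are illustrative choices; a plain truncated sinc and a plain Gaussian are used as stand-ins for the exact bases of [4] and [10].

    import numpy as np

    T_cut, dt = 5.0, 0.01                        # truncate at |t| = 5 as in Figure 4
    t = np.arange(-T_cut, T_cut, dt)
    sinc_basis = np.sinc(t)                      # truncated sinc (stand-in)
    gauss_basis = np.exp(-t**2 / (2 * 0.6**2))   # plain Gaussian (stand-in for (12))

    def magnitude_response_db(psi, dt):
        """FFT magnitude (in dB) of a truncated basis function."""
        H = np.fft.fftshift(np.fft.fft(psi)) * dt
        f = np.fft.fftshift(np.fft.fftfreq(len(psi), d=dt))
        return f, 20 * np.log10(np.abs(H) + 1e-12)

    f, H_sinc = magnitude_response_db(sinc_basis, dt)
    _, H_gauss = magnitude_response_db(gauss_basis, dt)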

Matrix inversion

All three reconstruction methods used in our study require inversion of the basis matrix. Unfortunately, the basis matrices usually have large condition numbers, especially when the matrices are large, and the inverse of such a matrix usually has very large elements that amplify the noise in the measurements. There may also be catastrophic cancellation that introduces computational error [7]. Using a short window is one way to control the noise amplification, but a shorter window adversely affects the frequency response of the basis. The base-2 logarithm of the condition numbers of the three basis matrices at different window sizes (measured in numbers of the minimum signal period 2π/Ω) is listed in Table 1. In this test, we set the oversampling ratio to 1.55. The Gaussian basis is therefore expanded slightly in time to improve its performance, which also enlarges its condition number. Nevertheless, as can be seen in Table 1, the Gaussian basis still has a much smaller condition number than the other two bases. We found that setting the window size to four times the minimum signal period generally gives the best results.

Table 1 Log2 of the condition numbers of coefficient matrices

Another way to control the noise amplification is to use the pseudo-inverse of the coefficient matrix. By setting a tolerance level, the pseudo-inverse procedure treats any singular value of the matrix that is less than the tolerance as noise and sets it to zero, so the inverse matrix does not contain very large elements. However, a high tolerance level is beneficial only when the quantization noise is high. When the quantization noise is small, the error introduced by the truncated matrix inversion dominates and hurts the reconstruction result. Because of its low condition number, the Gaussian basis is not very sensitive to the choice of the tolerance level.
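In NumPy terms, the tolerance level corresponds to the rcond argument of the pseudo-inverse: singular values below rcond times the largest singular value are discarded before inversion. The short sketch below also computes the base-2 logarithm of the condition number reported in Table 1 (the matrix K and tolerance value are placeholders).

    import numpy as np

    def solve_with_tolerance(K, q, tol):
        """Pseudo-inverse solve; singular values below tol * sigma_max are zeroed."""
        return np.linalg.pinv(K, rcond=tol) @ q

    def log2_condition_number(K):
        """Base-2 logarithm of the 2-norm condition number, as in Table 1."""
        return np.log2(np.linalg.cond(K))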

As mentioned in "Reconstruction method III: Toeplitz formulation," the Toeplitz reconstruction method can be replaced by a Vandermonde formulation that avoids matrix inversion completely. Under this formulation, a pseudo-inverse is not necessary. However, the condition number of the Vandermonde matrix still affects the reconstruction error, as in the other methods, although to a lesser extent. Quantifying the gain of the Vandermonde formulation, and recasting the other methods in a similar fashion, will be an extension of this article.

Boundary effect

At the boundary of each reconstruction window, the reconstruction result is very inaccurate; this is known as the Runge phenomenon. Employing 2M time points outside the reconstruction window is suggested in [3]. Setting M to a large value reduces the boundary effect and improves the reconstruction result, but the improvement levels off quickly. In addition, increasing M also increases the condition number of the basis matrix and the computational complexity of the reconstruction algorithm. Hence, the value of M should be kept small. In our simulations, we found M = 3 to be a good choice.

Reconstruction method comparison

Based on the previous analysis, we can balance the different error sources by setting parameters properly. To compare the reconstruction methods, we try to set their parameters to have the same value unless a different value significantly improves the result. The values of the aforementioned parameters for the different methods are listed in Table 2.

Table 2 Simulation parameters

Figure 5a,b shows the output ENOB as a function of the ADC quantization ENOB at two different OSRs. The matrix inversion tolerance level (MITL) is set to balance the low-noise and high-noise performance. The output ENOB clearly levels off when the quantization noise is low and the matrix inversion error dominates. At OSR = 1.55, the Gaussian basis cannot approximate the signal well, and hence its output ENOB saturates at low quantization noise. But when the OSR is increased to 1.9, the ENOB for the Gaussian basis does not saturate as a function of quantization ENOB, while the results for the other two bases saturate because of the low tolerance level. In contrast, if we set the tolerance level of the other two methods to a low value to boost the low-noise performance, their performance becomes much worse at high noise levels, as shown in Figure 5c (for example, the performance of the sinc basis, the blue curve, is 7.7 dB worse than in Figure 5a when the input ENOB is 6). An interesting observation from Figure 5c is that even though the Toeplitz matrix also has a large condition number, it is not sensitive to the tolerance level until a critical level is reached, because of its robustness against small noise [3, 7]. When the tolerance is below 2.5e-13, its output ENOB cannot exceed 10.5 bits.

Figure 5. Output ENOB vs. quantization noise: (a) OSR = 1.9; (b) OSR = 1.55; (c) OSR = 1.9, MITL = 1e-12.

Conclusion and discussion

In this article, several reconstruction algorithms for the TEM are reviewed and generalized as a function approximation problem. Based on this generalization, a new reconstruction method using Gaussian basis functions is derived. Compared to the other bases, this basis has the smallest time-frequency window, which is particularly important in ultra-wideband applications. Sources of reconstruction error are analyzed, and TEM circuit and reconstruction parameters are selected to minimize the recovery error by balancing the different error sources. Finally, the results from the different reconstruction methods are compared. The sinc and approximate sinc bases have poorly conditioned basis matrices, but by properly controlling the matrix inversion procedure they can still perform well at high noise levels, although their low-noise performance is sacrificed. The Vandermonde formulation of the approximate sinc basis, which avoids matrix inversion completely, may remove this tradeoff; however, large entries arising from the division operations in solving the Vandermonde system may still amplify the quantization noise contained in the measurements. The exact gain of the Vandermonde formulation is still under investigation. The Gaussian basis, on the other hand, is more robust to quantization noise, but because of its poorer frequency response it usually requires a higher OSR to reach good results. Overall, the best results for ENOB less than about 14 bits are obtained using the sinc basis at an OSR of 1.9. In this case, the output ENOB of the TEM is only a few tenths of a bit worse than the theoretical limit given by the quantization ENOB.

Endnotes

1. The theoretical analysis in [2] shows that the MSE caused by quantization error is inversely proportional to δ and to (1 - r)^2. When r is close to 1, this value can be very large. Although the MSE in the simulation results given in [2] is much smaller than the theoretical bound, our simulations, which use a different signal model and a longer signal period, show that the MSE with r = 0.91 reaches -53 dB. With no other sources of error, this MSE translates into an SNR of 36 dB, which is too low for our applications.

Abbreviations

ENOB: effective number of bits

MITL: matrix inversion tolerance level

MSE: mean square error

OSR: oversampling ratio

SNDR: signal-to-noise and distortion ratio

TEM: time-encoding machine

References

1. Walden RH: Analog-to-digital converter survey and analysis. IEEE J Sel Areas Commun 1999, 17: 539-550. doi:10.1109/49.761034

2. Lazar AA, Tóth LT: Perfect recovery and sensitivity analysis of time encoded bandlimited signals. IEEE Trans Circuits Syst I 2004, 51: 2060-2073. doi:10.1109/TCSI.2004.835026

3. Lazar AA, Simonyi EK, Tóth LT: An over-complete stitching algorithm for time decoding machines. IEEE Trans Circuits Syst I 2008, 55: 2619-2630.

4. Lazar AA, Simonyi EK, Tóth LT: A Toeplitz formulation of a real-time algorithm for time decoding machines. In Proceedings of the Conference on Telecommunication Systems, Modeling and Analysis (ICTSM'05); 2005.

5. Feichtinger HG, Gröchenig K: Theory and practice of irregular sampling. In Wavelets: Mathematics and Applications. Edited by Benedetto JJ, Frazier MW. CRC Press, Boca Raton; 1994: 305-363.

6. Unser M: Sampling-50 years after Shannon. Proceedings of the IEEE 2000, 88: 569-587. doi:10.1109/5.843002

7. Golub GH, Van Loan CF: Matrix Computations. 3rd edition. The Johns Hopkins University Press, Baltimore; 1996.

8. Sarkar TK, Su C: A tutorial on wavelets from an electrical engineering perspective, Part 2: the continuous case. IEEE Antennas Propag Mag 1998, 40: 36-49. doi:10.1109/74.739190

9. Gabor D: Theory of communication. J IEE 1946, 93: 429-457.

10. Appledorn CR: A new approach to the interpolation of sampled data. IEEE Trans Med Imaging 1996, 15: 369-376. doi:10.1109/42.500145

11. Lehmann TM, Gönner C, Spitzer K: Survey: interpolation methods in medical image processing. IEEE Trans Med Imaging 1999, 18(11): 1049-1075. doi:10.1109/42.816070

12. Bennett WR: Spectra of quantized signals. Bell Syst Tech J 1948, 27: 446-472.


Acknowledgements

This work was supported by DARPA under the Analog-to-Information program through grant DARPA N00014-09-C-0324. Approved for Public Release, Distribution Unlimited. The views, opinions, and/or findings contained in this article/presentation are those of the author/presenter and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense.

Author information

Correspondence to Xiangming Kong.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Kong, X., Valley, G.C. & Matic, R. Error analysis and implementation considerations of decoding algorithms for time-encoding machine. EURASIP J. Adv. Signal Process. 2011, 1 (2011). https://doi.org/10.1186/1687-6180-2011-1