Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

The recent sophistication of mobile systems and sensor networks demands ever more processing resources. In order to maintain system autonomy, energy saving has become one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded system design and battery technology, but very few studies aim to exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a nonconventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. The principle is to intelligently exploit the signal local characteristics, which are usually never considered, in order to filter only the relevant signal parts by employing filters of the relevant order. This idea leads to a drastic gain in computational efficiency, and hence in processing power, when compared to classical techniques.


Introduction
This work is part of a larger project aimed at enhancing the signal processing chain implemented in mobile systems. The motivation is to reduce their size, cost, processing noise, electromagnetic emission, and especially power consumption, as they are most often powered by batteries. This can be achieved by intelligently reorganizing the associated signal processing theory and architecture. The idea is to combine event-driven signal processing with asynchronous circuit design, in order to reduce the system processing activity and energy cost.
Almost all natural signals, like speech, seismic, and biomedical signals, are time varying in nature. Moreover, man-made signals like Doppler, Amplitude Shift Keying (ASK), and Frequency Shift Keying (FSK) lie in the same category. The spectral contents of these signals vary with time, which is a direct consequence of the signal generation process [1]. Classical systems are based on Nyquist signal processing architectures. These systems do not exploit the signal variations: they sample the signal at a fixed rate without taking the intrinsic signal nature into account. Moreover, they are highly constrained by the Shannon theory, especially in the case of low-activity sporadic signals like the electrocardiogram, the phonocardiogram, seismic signals, and so forth. This leads to the capture and processing of a large number of samples carrying no relevant information, and to a useless increase of the system activity and power consumption.
The power efficiency can be enhanced by intelligently adapting the system processing load to the signal local variations. To this end, a signal-driven sampling scheme based on level crossing is employed. The Level Crossing Sampling Scheme (LCSS) [2] adapts the sampling rate by following the local characteristics of the input signal [3,4]. Hence, it drastically reduces the activity of the post-processing chain, because it captures only the relevant information [5,6]. In this context, LCSS-based Analog to Digital Converters (LCADCs) have been developed [7-9]. Algorithms for processing [6,10-12] and analysis [3,5,13,14] of the nonuniformly spaced-in-time sampled data obtained with the LCSS have also been developed.

Filtering is a basic operation required in almost every signal processing chain. Therefore, this paper focuses on the development of efficient Finite Impulse Response (FIR) filtering techniques. The idea is to pilot the system processing activity by the input signal variations. Following this idea, an efficient solution is proposed by intelligently combining the features of both nonuniform and uniform signal processing tools, which promises a drastic computational gain of the proposed techniques over the classical one.
Section 2 briefly reviews the nonuniform signal processing tools employed in the proposed approach. The complete functionality of the proposed filtering techniques is described in Section 3. Section 4 demonstrates the appealing features of the proposed techniques with the help of an illustrative example. The computational complexities of both proposed techniques are deduced and compared, with each other and with the classical case, in Section 5. Section 6 discusses the processing error. In Section 7, the performance of the proposed techniques is evaluated for a speech signal. Section 8 finally concludes the article.

2.1. LCSS (Level Crossing Sampling Scheme).
The LCSS belongs to the signal-dependent sampling schemes, like zero-crossing sampling [15], Lebesgue sampling [16], and reference signal crossing sampling [17]. The concept of LCSS is not new and has been known at least since the 1950s [18]. It is also known as event-based sampling [19,20]. In recent years, there has been considerable interest in the LCSS across a broad spectrum of technologies and applications. In [21-24], the authors have employed it for monitoring and control systems. It has also been suggested in the literature for compression [2], random processes [25], and band-limited Gaussian random processes [26].
The LCSS is a natural choice for sampling time-varying signals. It lets the signal dictate the sampling process [4]. The nonuniformity of the sampling process represents the signal local variations [3]. In the case of LCSS, a sample is captured only when the input analog signal x(t) crosses one of the predefined thresholds. The samples are not uniformly spaced in time because they depend on the x(t) variations, as is clear from Figure 1.
Let a set of levels span the analog signal amplitude range ΔV_in. These levels are equally spaced by a quantum q. When x(t) crosses one of these predefined levels, a sample is taken [2]. This sample is the couple (x_n, t_n) of an amplitude x_n and a time t_n. Here, x_n is exactly equal to one of the levels, and t_n can be computed by employing

$t_n = t_{n-1} + dt_n.$  (1)

In (1), t_n is the current sampling instant, t_{n-1} is the previous one, and dt_n is the time elapsed between the current and the previous sampling instants.
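To make the sampling process concrete, the following Python sketch emulates an ideal level-crossing sampler on a synthetic signal; the function name lcss_sample, the linear interpolation of crossing instants, and the test signal are our illustrative assumptions, not part of the original design.

```python
import numpy as np

def lcss_sample(x, t, levels):
    """Minimal level-crossing sampler (sketch): emit a couple (x_n, t_n)
    each time the signal crosses one of the predefined levels."""
    samples = []
    for i in range(1, len(x)):
        lo, hi = sorted((x[i - 1], x[i]))
        for lv in levels[(levels > lo) & (levels <= hi)]:
            # linear interpolation of the crossing instant inside the step
            tc = t[i - 1] + (t[i] - t[i - 1]) * (lv - x[i - 1]) / (x[i] - x[i - 1])
            samples.append((lv, tc))
    samples.sort(key=lambda s: s[1])   # crossings in chronological order
    return samples

# 2^M equally spaced levels over a 1.8 V range, as in the example of Section 4
M, dV = 3, 1.8
levels = np.linspace(-dV / 2, dV / 2, 2**M)   # quantum q = dV / (2^M - 1)
t = np.linspace(0.0, 1.0, 100000)
x = 0.9 * np.sin(2 * np.pi * 5 * t)
print(len(lcss_sample(x, t, levels)))         # sample count tracks signal activity
```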

2.2. LCADC (LCSS-Based Analog to Digital Converter).
Classically, during an ideal A/D conversion process, the sampling instants are exactly known, whereas the sample amplitudes are quantized at the ADC resolution [27], which is defined by the ADC number of bits. This error is characterized by the Signal to Noise Ratio (SNR) [27], which can be expressed as

$SNR_{dB} = 1.76 + 6.02\,M.$
Here, M is the ADC number of bits. It follows that the SNR of an ideal ADC depends only on M, and that it improves by 6.02 dB for each increment of M. The A/D conversion process which occurs in the LCADCs [7-9] is dual in nature. Ideally, in this case, the sample amplitudes are exactly known, since they are exactly equal to one of the predefined levels, while the sampling instants are quantized at the timer resolution T_timer. According to [7, 8], the SNR in this case is given by

$SNR_{dB} = 10\log_{10}\!\left(\dfrac{3\,P_x}{P_{\dot{x}}\, T_{timer}^2}\right).$

Here, $P_x$ and $P_{\dot{x}}$ are the powers of x(t) and of its derivative, respectively. It shows that in this case the SNR no longer depends on M, but on the x(t) characteristics and on T_timer. An improvement of 6.02 dB in the SNR can be achieved by simply halving T_timer. The choice of M is nevertheless crucial: it should be taken large enough to ensure a proper reconstruction of the signal. This problem has been addressed in [28-31]. In particular, in [31], it is shown that a band-limited signal can be ideally reconstructed from nonuniformly spaced samples if the average number of samples satisfies the Nyquist criterion. In the case of LCADCs, the average sampling frequency depends on M and on the signal characteristics [7-9]. Thus, for a given application, an appropriate M should be chosen in order to respect the reconstruction criterion [31].
In [7-9], the authors have shown advantages of the LCADCs over the classical ones. The major advantages are the reduced activity, the power saving, the reduced electromagnetic emission, and the processing noise reduction. Inspired by these interesting features, the Asynchronous Analog to Digital Converter (AADC) [7] is employed to digitize x(t) in the studied case. The characteristics of the filtering techniques described in the sequel are largely determined by the characteristics of the nonuniformly sampled signal produced by the AADC. We have already defined the AADC amplitude range ΔV_in, the number of bits M, and the quantum q. They are linked by the following relation:

$q = \dfrac{\Delta V_{in}}{2^M - 1}.$

This quantum, together with the AADC processing delay for one sample δ, yields the upper limit on the input signal slope which can be captured properly:

$\left|\dfrac{dx(t)}{dt}\right|_{max} \leq \dfrac{q}{\delta}.$

In order to respect the reconstruction criterion [31] and the tracking condition [7], a band-pass filter with pass-band [f_min; f_max] is employed at the AADC input. This, together with a given M, induces the AADC maximum and minimum sampling frequencies [6, 11], defined by

$Fs_{max} = 2\, f_{max}\,(2^M - 1), \qquad Fs_{min} = 2\, f_{min}\,(2^M - 1).$
Here, f_max and f_min are the x(t) bandwidth and fundamental frequencies, and Fs_max and Fs_min are the AADC maximum and minimum sampling frequencies, respectively.
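The relations above can be checked numerically. The sketch below (the helper name and the value of δ are our assumptions) reproduces the figures of the illustrative example of Section 4:

```python
def aadc_params(M, dV_in, f_min, f_max, delta):
    """Derive AADC design quantities from the relations above (sketch)."""
    q = dV_in / (2**M - 1)            # quantum between adjacent levels
    slope_max = q / delta             # steepest properly trackable input slope
    Fs_min = 2 * f_min * (2**M - 1)   # minimum average sampling frequency
    Fs_max = 2 * f_max * (2**M - 1)   # maximum average sampling frequency
    return q, slope_max, Fs_min, Fs_max

# delta is a hypothetical converter delay, only needed for the slope bound
q, slope_max, Fs_min, Fs_max = aadc_params(M=3, dV_in=1.8,
                                           f_min=5.0, f_max=1000.0, delta=1e-6)
print(q, Fs_min, Fs_max)   # 0.2571... V, 70 Hz, 14000 Hz (cf. Section 4)
```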

2.3. ASA (Activity Selection Algorithm).
The nonuniformly sampled signal obtained with the AADC can be used for further nonuniform digital processing [3,10,13]. However, in the studied case, the nonuniformity of the sampling process, which yields information on the signal local features, is employed to select only the relevant signal parts. Furthermore, the characteristics of each selected signal part are analyzed and later employed to adapt the proposed system parameters accordingly. This selection and local-features extraction process is named the ASA.
For activity selection, the ASA exploits the information lying in the nonuniformity of the level-crossing sampled signal [5]. This selection process corresponds to an adaptive-length rectangular windowing. It defines a series of selected windows within the whole signal length. The ability to select activity is extremely important to reduce the proposed system processing activity and, consequently, its power consumption. Indeed, in the proposed case, no processing is performed during idle signal parts, which is one of the reasons for the achieved computational gain compared to the classical case. The ASA is defined as follows: as long as $dt_n \leq T_0/2$ and $T_i < T_{ref}$, the current sample is appended to the ith selected window, with $N_i \leftarrow N_i + 1$ and $T_i \leftarrow T_i + dt_n$. Here, dt_n is clear from (1). T_0 = 1/f_min is the fundamental period of the band-limited signal x(t); T_0 and dt_n detect parts of the nonuniformly sampled signal with activity. If the measured time delay dt_n is greater than T_0/2, x(t) is considered to be idle. The condition dt_n ≤ T_0/2 is chosen to ensure the Nyquist sampling criterion for f_min. T_ref is the reference window length. Its choice depends on the input signal characteristics and the system resources. The upper bound on T_ref is posed by the maximum number of samples that the system can treat at once, whereas the lower bound on T_ref is posed by the condition T_ref ≥ T_0, which should be respected in order to achieve a proper spectral representation [5].
T_i represents the length in seconds of the ith selected window W_i; T_ref poses the upper bound on T_i. N_i represents the number of nonuniform samples lying in W_i, which belongs to the jth active part of the nonuniformly sampled signal. Both i and j belong to the set of natural numbers N*. The jth signal activity can be longer than T_ref; in this case, it is split into more than one selected window.
The above-described loop repeats for each selected window occurring during the observation length of x(t). Every time before starting the next loop, i is incremented and N_i and T_i are initialized to zero.
The maximum number of samples N_max which can take place within a chosen T_ref can be calculated by employing

$N_{max} = T_{ref} \cdot Fs_{max}.$  (9)

The ASA displays interesting features which are not available in the classical case. It selects only the active parts of the nonuniformly sampled signal. Moreover, it correlates the length of the selected window with the input signal activity lying in it. In addition, it provides an efficient reduction of the spectral leakage phenomenon in the case of transient signals. The leakage reduction is achieved by avoiding the signal truncation problem with a simple and efficient algorithm, instead of employing a smoothing (cosine) window function as used in classical schemes [5]. These abilities make the ASA extremely effective in reducing the overall system processing activity, especially in the case of low-activity sporadic signals [5,6,11,12,14].
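A minimal sketch of the ASA, as described above, is given below. It assumes the nonuniform samples arrive as (x_n, dt_n) couples and that the local sampling frequency is estimated as Fs_i = N_i/T_i; the function name and the data layout are ours:

```python
def asa(samples, f_min, T_ref):
    """Activity Selection Algorithm (sketch): group level-crossing samples
    (x_n, dt_n) into selected windows W_i and extract their local features."""
    T0 = 1.0 / f_min                   # fundamental period of x(t)
    windows, current, T_i = [], [], 0.0
    for x_n, dt_n in samples:
        active = dt_n <= T0 / 2        # otherwise x(t) is considered idle
        if active and T_i + dt_n <= T_ref:
            current.append(x_n)        # sample joins the current window W_i
            T_i += dt_n
            continue
        if current:                    # close W_i: store N_i, T_i, Fs_i
            windows.append({"N_i": len(current), "T_i": T_i,
                            "Fs_i": len(current) / T_i})
        # an activity longer than T_ref is split; an idle gap resets the window
        current, T_i = ([x_n], dt_n) if active else ([], 0.0)
    if current:
        windows.append({"N_i": len(current), "T_i": T_i,
                        "Fs_i": len(current) / T_i})
    return windows
```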

Proposed Adaptive Rate Filtering
3.1. General Principle. Two techniques are described to filter the selected signal obtained at the ASA output. The signal processing chain common to both filtering techniques is shown in Figure 2.
The activity selection and the local features extraction are the bases of the proposed techniques. They make it possible to achieve adaptive rate sampling (only relevant samples to process) along with adaptive rate filtering (only relevant operations to deliver a filtered sample). Such an achievement assures a drastic computational gain of the proposed filtering techniques compared to the classical one. The steps realizing these ideas are detailed in the following subsections. Each selected window W_i contains N_i samples spread over T_i seconds; it follows that the local sampling frequency Fs_i can be specific to W_i. According to [5], Fs_i can be calculated by employing

$Fs_i = \dfrac{N_i}{T_i}.$

The upper and lower bounds on Fs_i are posed by Fs_max and Fs_min, respectively. In order to perform a classical filtering algorithm, the selected signal lying in W_i is uniformly resampled before proceeding to the filtering stage (cf. Figure 2). The characteristics of the selected signal part lying in W_i are employed to choose its resampling frequency Frs_i. Once the resampling is done, there are Nr_i samples in W_i. The choice of Frs_i is crucial, and this procedure is detailed in the following subsection.

Adaptive Rate Filtering.
It is known that, for fixed design parameters (cut-off frequency, transition-band width, pass-band and stop-band ripples), the FIR filter order varies as a function of the operational sampling frequency. For a high sampling frequency, the order is high, and vice versa. In the classical case, the sampling frequency and the filter order both remain fixed regardless of the input signal variations, so they have to be chosen for the worst case. This time-invariant nature of classical filtering causes a useless increase of the computational load. This drawback has been resolved to a certain extent by employing multirate filtering techniques [32-34].
The proposed filtering techniques of this paper are the intelligent alternatives to the multirate filtering techniques. They achieve computational efficiency by adapting the sampling frequency and the filter order according to the input signal local variations. Both techniques have some common features, which are described in the following.
In both cases, a reference FIR filter is designed offline for a reference sampling frequency F_ref. Its impulse response is h_k, where k indexes the reference filter coefficients. F_ref is chosen in order to satisfy the Nyquist sampling criterion for the band-limited signal x(t), that is, F_ref ≥ 2 f_max. During online computation, F_ref and the local sampling frequency Fs_i of window W_i are used to define the local resampling frequency Frs_i and a decimation factor d_i. Frs_i is employed to uniformly resample the selected signal lying in W_i, whereas d_i is employed to decimate h_k for filtering W_i.
Frs_i can be specific depending upon Fs_i [11,12]. If Fs_i ≥ F_ref, then Frs_i = F_ref is chosen and the reference filter is employed as it is. In the opposite case, that is, Fs_i < F_ref, Frs_i = Fs_i is chosen and h_k is decimated online in order to reduce F_ref to Frs_i. In this case, the reference filter order is reduced for W_i, which reduces the number of operations needed to deliver a filtered sample [6,11]. Hence, it improves the computational efficiency of the proposed techniques. Here, it appears that Frs_i may be lower than the Nyquist frequency of x(t), and so could cause aliasing. According to [6,11], if the local signal amplitude is of the order of the maximal range ΔV_in, then for a suitable choice of M (application dependent) the signal crosses enough consecutive thresholds. Thus, it is locally oversampled with respect to its local bandwidth, and so there is no aliasing problem. This statement is further illustrated with the results summarized in Table 3.
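This case analysis can be summarized in a few lines; the helper below is a sketch under the stated assumptions, with names of our choosing:

```python
def choose_frs(Fs_i, F_ref):
    """Select the local resampling frequency Frs_i for window W_i."""
    if Fs_i >= F_ref:
        return F_ref, 1.0    # resample down to F_ref, keep the reference filter
    d_i = F_ref / Fs_i       # decimation factor for h_k (may be fractional)
    return Fs_i, d_i         # resample at Fs_i, decimate h_k by d_i
```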
In order to decimate h_k, the decimation factor d_i for W_i is calculated online by employing

$d_i = \dfrac{F_{ref}}{Frs_i}.$  (11)

D_i denotes the integer part of d_i. The decimated filter impulse response for the ith selected window, h_i^j, is obtained by picking every (D_i)th coefficient from h_k:

$h_i^j = h_{j \cdot D_i}.$  (12)

Here, j indexes the decimated filter coefficients. If the order of h_k is P, then the order of h_i^j is given as

$P_i = \left\lceil \dfrac{P}{D_i} \right\rceil.$

Figure 3: Flowchart of the ARD.

Figure 4: Flowchart of the ARR.

A simple decimation causes a reduction of the decimated filter energy compared to that of the reference one, which would lead to an attenuated version of the filtered signal. D_i is a good approximation of the ratio between the energy of the reference filter and that of the decimated one. Thus, this effect of decimation is compensated by scaling h_i^j with D_i:

$h_i^j \leftarrow D_i \cdot h_i^j.$

The two techniques mainly differ in the way of decimating h_k for a fractional d_i. The process is explained in the following sections.

ARD (Activity Reduction by Filter Decimation).
In the ARD technique, h_k is decimated by employing D_i even when d_i is fractional. This calls for an adjustment of Frs_i, which is achieved as

$Frs_i = \dfrac{F_{ref}}{D_i}.$

The complete procedure of obtaining Frs_i and h_i^j for the ARD is described in Figure 3.

In the ARR technique, d_i itself is employed as the decimation factor: Frs_i = Fs_i is kept, and the fractional decimation of h_k is achieved by resampling it at Frs_i (cf. Section 5.2). The complete procedure of obtaining Frs_i and h_i^j for the ARR is described in Figure 4.
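The two adaptations of h_k can be contrasted in code. The sketch below assumes h is a NumPy array of reference filter taps and Fs_i < F_ref; the nearest-neighbour resampling of the taps stands in for the NNRI described in Section 5, and all names are ours:

```python
import math
import numpy as np

def adapt_filter_ard(h, F_ref, Fs_i):
    """ARD (sketch): integer decimation of h_k by D_i = floor(d_i),
    with Frs_i adjusted to F_ref / D_i to stay coherent with F_ref."""
    D_i = math.floor(F_ref / Fs_i)
    Frs_i = F_ref / D_i
    h_i = D_i * h[::D_i]              # pick every D_i-th tap, scale by D_i
    return h_i, Frs_i

def adapt_filter_arr(h, F_ref, Fs_i):
    """ARR (sketch): fractional decimation of h_k by d_i, realized by
    nearest-neighbour resampling of the taps at Frs_i = Fs_i."""
    d_i = F_ref / Fs_i
    D_i = math.floor(d_i)
    P_i = math.ceil(len(h) / d_i)     # decimated filter length
    idx = np.minimum(np.round(np.arange(P_i) * d_i).astype(int), len(h) - 1)
    h_i = D_i * h[idx]                # resampled taps, scaled by D_i
    return h_i, Fs_i
```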

Illustrative Example
In order to illustrate the ARD and the ARR filtering techniques, an input signal x(t), shown on the left part of Figure 5, is employed. Its total duration is 20 seconds, and it consists of three active parts. A summary of the x(t) activities is given in Table 1. Table 1 shows that x(t) is band limited between f_min = 5 Hz and f_max = 1 kHz. In this case, x(t) is digitized by employing a 3-bit resolution AADC. Thus, for the given resolution, the corresponding minimum and maximum sampling frequencies are Fs_min = 70 Hz and Fs_max = 14 kHz. The AADC amplitude range ΔV_in = 1.8 V is chosen, which results in a quantum q = 0.2571 V. Each activity contains a low- and a high-frequency component (cf. Table 1). In order to filter out the high-frequency parts from each activity, a low-pass reference FIR filter is implemented by employing the standard Parks-McClellan algorithm. The reference filter parameters are summarized in Table 2.
For this example, the reference window length T_ref = 1 second is chosen. It satisfies the boundary conditions discussed in Section 2.3. The given T_ref delivers N_max = 14000 samples in this case (cf. Equation (9)). The ASA delivers three selected windows for the whole 20-second x(t) span, which are shown on the right part of Figure 5. The selected window parameters are displayed in Table 3. Table 3 shows that the first window is an example of the Fs_i ≥ F_ref case, so it is tackled similarly by both techniques. In the other windows, Fs_i < F_ref holds, so the online h_k decimation is employed. As d_2 and d_3, calculated by employing Equation (11), are fractional, this case is tackled in a different way by the ARD and the ARR.
Values of Frs_i, D_i, Nr_i, and P_i are calculated for the ARD and the ARR by employing the methods shown in Figures 3 and 4, respectively. The obtained results are summarized in Tables 4 and 5; the values obtained for the ARD (Table 4) are:

Table 4: Window parameters for the ARD.
W_i   Frs_i (Hz)   Nr_i   D_i   P_i
1     2500         1250   1     127
2     1250         1250   2     64
3     500          500    5     26

Tables 3, 4, and 5 jointly exhibit the interesting features of the proposed filtering techniques, which are achieved by an intelligent combination of the nonuniform and the uniform signal processing tools (cf. Figure 2). Fs_i represents the sampling frequency adaptation following the local variations of x(t). N_i shows that the relevant signal parts are locally over-sampled in time with respect to their local bandwidths [6,11]. Frs_i shows the adaptation of the resampling frequency for each selected window; it further adds to the computational gain of the proposed techniques by avoiding unnecessary interpolations during the resampling process. Nr_i shows how the adjustment of Frs_i avoids the processing of unnecessary samples during the post-filtering process. P_i represents how the adaptation of h_k for W_i avoids unnecessary operations to deliver the filtered signal. T_i exhibits the dynamic feature of the ASA, which is to correlate T_ref with the signal activity lying in it [5].
These results are to be compared with the corresponding classical case. If F_ref is chosen as the sampling frequency, then the total x(t) span is sampled at 2500 Hz. This makes N = 20 × 2500 = 50000 samples to process with the 127th-order FIR filter. On the other hand, in both proposed techniques, the total number of resampled data points is much lower: 3000 and 2794 for the ARD and the ARR, respectively. Moreover, the local filter orders in W_2 and W_3 are also lower than 127. This promises the computational efficiency of the proposed techniques compared to the classical one. A detailed complexity comparison is made in the following section.

Computational Complexity
In the classical case, with a Pth-order filter, it is well known that P multiplications and P additions are required to compute each filtered sample. If N is the number of samples, then the total computational complexity C can be calculated by employing

$C = \underbrace{P \cdot N}_{\text{multiplications}} + \underbrace{P \cdot N}_{\text{additions}}.$  (14)

In the adaptive techniques presented here, the adaptation process requires extra operations for each selected window. The computational complexities of both techniques, C_ARD and C_ARR, are deduced as follows.
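As a worked instance of (14), for the full 20-second span of the illustrative example of Section 4 (N = 50000 samples, P = 127):

$C = \underbrace{127 \times 50000}_{\text{multiplications}} + \underbrace{127 \times 50000}_{\text{additions}} = 6.35 \times 10^{6} \text{ multiplications and } 6.35 \times 10^{6} \text{ additions}.$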
The following steps are common to both the ARD and the ARR techniques. The choice of Frs_i is a common operation for both proposed techniques; it requires one comparison between F_ref and Fs_i. The data resampling operation is also required in both techniques before filtering. In the studied case, the resampling process is performed by employing Nearest Neighbour Resampling Interpolation (NNRI). The NNRI is chosen because of its simplicity, as it employs only one nonuniform observation for each resampled one. Moreover, it provides an unbiased estimate of the original signal variance; for this reason, it is also known as a robust interpolation method [35,36]. The detailed reasons for choosing the NNRI are discussed in [5,35,36]. The NNRI is performed as follows.
For each interpolation instant tr_n, the interval of nonuniform samples [t_n, t_{n+1}] within which tr_n lies is determined. Then, the distance of tr_n to each of t_n and t_{n+1} is computed, and a comparison among the computed distances is performed to decide the smaller of them. For W_i, the complexity of the first step is N_i + Nr_i comparisons, and the complexity of the second step is 2Nr_i additions and Nr_i comparisons. Hence, the total NNRI complexity for W_i is N_i + 2Nr_i comparisons and 2Nr_i additions.
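A minimal NNRI sketch, assuming sorted nonuniform instants t with values x as NumPy arrays (our helper, not the authors' implementation):

```python
import numpy as np

def nnri(t, x, Frs_i):
    """Nearest Neighbour Resampling Interpolation (sketch): for each uniform
    instant, keep the value of the closest nonuniform observation."""
    t, x = np.asarray(t), np.asarray(x)
    tr = np.arange(t[0], t[-1], 1.0 / Frs_i)     # uniform resampling instants
    right = np.clip(np.searchsorted(t, tr), 1, len(t) - 1)
    left = right - 1                             # t[left] <= tr <= t[right]
    nearest = np.where(tr - t[left] <= t[right] - tr, left, right)
    return tr, x[nearest]
```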
In the case when Fs_i < F_ref, the decimation of h_k is performed in both techniques. In order to do so, d_i is computed by performing a division between F_ref and Frs_i. D_i is calculated by employing a floor operation on d_i. A comparison is made between D_i and d_i. In the case when D_i = d_i, the process of obtaining h_i^j is similar for both techniques (cf. Figures 3 and 4): the decimator simply picks every (D_i)th coefficient from h_k. This has a negligible complexity compared to operations like addition and multiplication, which is why it is not taken into account during the complexity evaluation process. In both techniques, the decimated filter impulse response is scaled, which requires P_i multiplications. The fractional d_i is tackled in a different way by each filtering technique, as detailed in the following subsections.

5.1. Complexity of the ARD Technique.
Even if d_i is fractional, in the ARD technique the h_k decimation is performed by employing D_i. Frs_i is then modified in order to keep it coherent with F_ref, which requires one division (cf. Figure 3). Finally, a P_i-order filter performs P_i·Nr_i multiplications and P_i·Nr_i additions for W_i. The combined computational complexity of the ARD technique, C_ARD, is given by

$C_{ARD} = \sum_{i} \Big[ \underbrace{(1 + N_i + 2Nr_i + \alpha)}_{\text{comparisons}} + \underbrace{(2Nr_i + P_i Nr_i)}_{\text{additions}} + \underbrace{(P_i Nr_i + \alpha P_i)}_{\text{multiplications}} + \underbrace{(\alpha + \beta)}_{\text{divisions}} + \underbrace{\alpha}_{\text{floors}} \Big].$  (15)

5.2. Complexity of the ARR Technique.
In the ARR technique, d_i is employed as the decimation factor. The fractional decimation is achieved by resampling h_k at Frs_i. The resampling is performed by employing the NNRI, which performs P + 2P_i comparisons and 2P_i additions to deliver h_i^j. The remaining operation cost is common to the ARD and the ARR. The combined computational complexity of the ARR technique, C_ARR, is given by

$C_{ARR} = \sum_{i} \Big[ \underbrace{(1 + N_i + 2Nr_i + \alpha + \beta(P + 2P_i))}_{\text{comparisons}} + \underbrace{(2Nr_i + P_i Nr_i + 2\beta P_i)}_{\text{additions}} + \underbrace{(P_i Nr_i + \alpha P_i)}_{\text{multiplications}} + \underbrace{\alpha}_{\text{divisions}} + \underbrace{\alpha}_{\text{floors}} \Big].$  (16)

In (15) and (16), i = 1, 2, 3, ..., I indexes the selected windows. α and β are multiplying factors: α is 0 in the case when Fs_i ≥ F_ref and 1 otherwise; β is 0 in the case when d_i = D_i and 1 otherwise.

5.3. Complexity Comparison of the ARD and the ARR with the Classical Filtering.
From (14), (15), and (16), it is clear that there are operations not in common between the classical and the proposed adaptive rate filtering techniques. In order to make them approximately comparable, it is assumed that a comparison has the same processing cost as an addition, and that a division or a floor has the same processing cost as a multiplication. By following these assumptions, comparisons are merged into additions, and divisions and floors into multiplications, so that (15) and (16) can be written as follows:

$C_{ARD} = \sum_{i} \Big[ \underbrace{(1 + N_i + 4Nr_i + P_i Nr_i + \alpha)}_{\text{additions}} + \underbrace{(P_i Nr_i + \alpha(P_i + 2) + \beta)}_{\text{multiplications}} \Big],$  (17)

$C_{ARR} = \sum_{i} \Big[ \underbrace{(1 + N_i + 4Nr_i + P_i Nr_i + \alpha + \beta(P + 4P_i))}_{\text{additions}} + \underbrace{(P_i Nr_i + \alpha(P_i + 2))}_{\text{multiplications}} \Big].$  (18)
By employing the results of the example studied in the previous section, computational comparisons of the ARD and the ARR with the classical case are made in terms of additions and multiplications. The results are computed for different x(t) time spans and are summarized in Tables 6 and 7.
Gains in additions and multiplications of the proposed techniques over the classical one are clear from the above results. In the case of W_1, where the resampling frequency and the filter order are the same as in the classical case (cf. Tables 4 and 5), a gain is still achieved by the proposed adaptive techniques. This is only due to the fact that the ASA correlates the window length with the activity (0.5 second), while the classical case computes during the total duration of T_ref = 1 second. Gains are of course much larger in the other windows, since the proposed techniques take benefit of processing fewer samples along with lower filter orders. When treating the whole 20-second x(t) span, the proposed techniques also take advantage of the idle x(t) parts, which induces additional gains compared to the classical case.
The above results confirm that the proposed filtering techniques lead to a drastic reduction in the number of operations compared to the classical one. This reduction is achieved through the joint benefits of the AADC, the ASA, and the resampling, as they enable the adaptation of the sampling frequency and the filter order following the input signal local variations.

5.4. Complexity Comparison between the ARD and the ARR.
The main difference between the two proposed techniques occurs when Fs_i < F_ref and d_i is fractional (cf. Section 3).
The ARD increases Frs_i in order to keep it coherent with F_ref. The increase in Frs_i increases Nr_i and also P_i. Thus, in comparison to the ARR, this technique increases the computational load of the post-filtering operation, while keeping the decimation process of h_k simple.
The ARR performs h k resampling at Frs i . Thus, in comparison to the ARD, this technique increases the complexity of the decimation process of h k , while keeping the computational load of the post-filtering process lower.
In continuation of Section 5.3, a complexity comparison between the ARD and the ARR is made in terms of additions and multiplications by employing Equations (17) and (18), respectively. It shows that the ARR remains computationally efficient compared to the ARD, in terms of additions and multiplications, as long as the conditions given by expressions (19) and (20) remain true. Note that Nr_i and P_i can be different for the ARD and the ARR (cf. Tables 4 and 5). For the studied example, d_2 and d_3 are fractional, thus the ARD and the ARR proceed differently. Conditions (19) and (20) remain true for both W_2 and W_3 (cf. Tables 4 and 5). Hence, the gains in additions and multiplications of the ARR are higher than those of the ARD for W_2 and W_3 (cf. Tables 6 and 7). It shows that, except for very specific situations, the ARR technique will remain less expensive than the ARD. The ARR achieves this computational performance by employing the fractional decimation of h_k, which may lead to a quality compromise of the ARR compared to the ARD. This issue is addressed in the following section.

6.1. Approximation Error.
In the proposed techniques, the approximation error has two sources: the time quantization error, due to the AADC finite timer precision, and the interpolation error, which occurs in the course of the uniform resampling process. After these two operations, the mean approximation error for W_i can be computed by employing

$MAE_i = \dfrac{1}{Nr_i} \sum_{n=1}^{Nr_i} |xo_n - xr_n|.$  (21)

Here, xr_n is the nth resampled observation, interpolated with respect to the time instant tr_n, and xo_n is the original sample value which would be obtained by sampling x(t) at tr_n. In the example discussed in Section 4, x(t) is analytically known, so it is possible to compute its original sample value at any given time instant. This allows us to compute the approximation error introduced by the proposed adaptive rate techniques by employing Equation (21). The results obtained for each selected window, for both the ARD and the ARR, are summarized in Table 8. Table 8 shows the approximation error introduced by the proposed techniques; the process is accurate enough for a 3-bit AADC. For higher-precision applications, the approximation accuracy can be improved by increasing the AADC resolution M and the interpolation order [6,8,37,38]. Thus, an increased accuracy can be achieved at the cost of an increased computational load. Therefore, by making a suitable compromise between the accuracy level and the computational load, an appropriate solution can be devised for a specific application.
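A sketch of this error measurement for one window, assuming the test signal is available in analytic form (names are ours):

```python
import numpy as np

def mean_approximation_error(x_analytic, tr, xr):
    """Mean approximation error (21): compare resampled values xr at the
    uniform instants tr against the analytically known original signal."""
    xo = x_analytic(tr)               # exact samples at the uniform instants
    return np.mean(np.abs(xo - xr))

# hypothetical usage, reusing the NNRI sketch of Section 5:
#   tr, xr = nnri(t, x, Frs_i)
#   mae = mean_approximation_error(lambda u: 0.9 * np.sin(2*np.pi*5*u), tr, xr)
```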
For a given M and interpolation order, the approximation accuracy can be further improved by exploiting symmetry during the interpolation process, which results in a reduced resampling error [38,39]. The pros and cons of this approach are under investigation, and a description is given in [40].

6.2. Filtering Error.
In the proposed filtering techniques, a reference filter h_k is employed and then decimated online for W_i, depending on the chosen Frs_i. This online decimation can degrade the filtering precision. In order to evaluate this phenomenon on our test signal, the following procedure is adopted.
A reference filtered signal is generated. In this case, instead of decimating h_k to obtain h_i^j, a specific filter h_i^m is directly designed for W_i by using the Parks-McClellan algorithm. It is designed for Frs_i by employing the same design parameters summarized in Table 2. The signal activity corresponding to W_i is sampled at Frs_i with a high-precision classical ADC. This sampled signal is filtered by employing h_i^m. The filtered signal obtained in this way is used as the reference for W_i, and it is compared with the results obtained by the proposed techniques.
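Such a window-specific design can be sketched with SciPy's Parks-McClellan implementation (signal.remez); the band edges and tap count below are placeholders, not the values of Table 2:

```python
from scipy import signal

def reference_filter(Frs_i, numtaps=128, f_pass=30.0, f_stop=50.0):
    """Design the window-specific low-pass reference filter h_i^m for W_i
    with the Parks-McClellan (remez) algorithm (placeholder parameters)."""
    return signal.remez(
        numtaps,
        bands=[0, f_pass, f_stop, 0.5 * Frs_i],  # pass-band and stop-band edges
        desired=[1, 0],                          # low-pass response
        fs=Frs_i,
    )

h_ref = reference_filter(Frs_i=500.0)            # e.g., W_3 of the ARD example
```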
Let y_n be the nth reference-filtered sample and ŷ_n be the nth filtered sample obtained by one of the proposed techniques. The mean filtering error of both proposed techniques is calculated for each x(t) activity by employing

$MFE_i = \dfrac{1}{Nr_i} \sum_{n=1}^{Nr_i} |y_n - \hat{y}_n|.$  (22)

The results are summarized in Table 9. Table 9 shows that the online decimation of h_k in the proposed techniques causes a loss of the desired filtering quality. Indeed, the filtering error increases with the increase in d_i. The measure of this error can be used to decide an upper bound on d_i (by performing an offline calculation) for which the decimated and scaled filters provide results with an acceptable level of accuracy. The acceptable level of accuracy is application dependent. Moreover, for high-precision applications, an appropriate filter can be calculated online for each selected window at the cost of an increased computational load; the process is the same as the one used to generate the reference filtered signal y_n, discussed above. Table 9 also shows that MFE_2 and MFE_3 for the ARR are higher than those of the ARD. This is due to the resampling of h_k in the ARR to deliver h_2^j and h_3^j: interpolated coefficients of h_k are employed for filtering the resampled data lying in W_2 and W_3, respectively, which results in an increased filtering error of the ARR compared to the ARD. Similar to Section 6.1, this resampling error can also be reduced to a certain extent by employing a higher-order interpolator [37,38]. In conclusion, a certain increase in accuracy can be achieved at a certain loss of processing efficiency.

Speech Signal as a Case Study
In order to evaluate the performance of the ARD and the ARR for real-life signals, a speech signal x(t), shown in Figure 6(a), is employed. x(t) is a 1.6-second, [50 Hz; 5000 Hz] band-limited signal corresponding to a three-word sentence. The goal is to determine the pitch (fundamental frequency) of x(t) in order to determine the speaker's gender. For a male speaker, the pitch lies in the frequency range [100 Hz; 150 Hz], whereas for a female speaker, it lies in the frequency range [200 Hz; 300 Hz] [41].
The reference frequency is chosen as F_ref = 11.2 kHz, which is a common sampling frequency for speech. A 4-bit resolution AADC is used for digitizing x(t); therefore, Fs_min = 1.5 kHz and Fs_max = 150 kHz. The amplitude range is again set to ΔV_in = 1.8 V, which leads to a quantum q = 0.12 V. The amplitude of x(t) is normalized to 0.9 V in order to avoid AADC saturation. The studied signal is part of a conversation, and during a dialog the speech activity is 25% of the total dialog time [42]. A classical filtering system would remain active during the total dialog duration, whereas the proposed LCSS-based filtering techniques remain active only during the 25% of the dialog time span containing speech, which reduces the system power consumption.
A speech signal mainly consists of vowels and consonants. Consonants have lower amplitude than vowels [41,43]. In order to determine the speaker's pitch, vowels are the relevant parts of x(t). For q = 0.12 V, consonants are ignored during the signal acquisition process and are treated as low-amplitude noise. In contrast, vowels are locally over-sampled like any harmonic signal [6,10,11]. This intelligent signal acquisition further avoids the processing of useless samples within the 25% of x(t) activity, and so further improves the computational efficiency of the proposed techniques.
In order to apply the ASA, T_ref = 0.5 second is chosen. It results in N_max = T_ref · Fs_max = 75000 in this case (cf. Equation (9)). The ASA delivers three selected windows, which are shown in Figure 6(b). The parameters of each selected window are summarized in Table 10.
Although the consonants are partially filtered out during the data acquisition process, proper pitch estimation still requires filtering out the remaining effect of the high frequencies present in x(t). To this aim, a reference low-pass filter is designed with the standard Parks-McClellan algorithm; its characteristics are summarized in Table 11. To find the pitch, we now focus on W_2, which corresponds to the vowel "a". A zoom on this signal part is plotted in Figure 6(c). The condition Fs_2 < F_ref holds, and d_2 is fractional (cf. Equation (11)). Thus, the filtering process differs between the proposed techniques, which makes it possible to compare their performances. The values of Frs_2, Nr_2, D_2, and P_2 for both techniques are given in Table 12.
Computational gains of the proposed filtering techniques over the classical one are computed by employing Equations (14), (17), and (18). The results show gains of 8.62 and 13.17 times in additions, and of 8.71 and 13.26 times in multiplications, for the ARD and the ARR, respectively, for W_2. This confirms the computational efficiency of the proposed techniques compared to the classical one. It is gained firstly by achieving an intelligent signal acquisition, and secondly by adapting the sampling frequency and the filter order to the local variations of x(t).
Once more, conditions (19) and (20) remain true for W_2, so the ARR technique remains computationally more efficient than the ARD one. Spectra of the filtered signal lying in W_2, obtained with the reference filtering (cf. Section 6.2), with the ARD, and with the ARR techniques, are plotted, respectively, in Figures 6(d), 6(e), and 6(f).
The spectra in Figure 6 show that the fundamental frequency is about 215 Hz. Thus, one can easily conclude that the analyzed sentence is pronounced by a female speaker. Although the reference filter has to be decimated 3 times and 3.7 times, respectively, for the ARD and the ARR, the spectra of the filtered signal obtained with the proposed techniques are quite comparable to the spectrum of the reference-filtered signal. It shows that, even after such a level of decimation, the results delivered by the proposed techniques are of acceptable quality for the studied speech application.
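As an illustration of this pitch check, the sketch below locates the fundamental as the dominant spectral peak of a filtered window within a plausible pitch band, then applies the gender ranges of [41]; the helper and its default band are our assumptions:

```python
import numpy as np

def estimate_pitch(y, Frs_i, f_lo=80.0, f_hi=400.0):
    """Locate the fundamental frequency as the strongest spectral peak
    inside a plausible pitch band (sketch)."""
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / Frs_i)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(spectrum[band])]

# e.g., with y2 the filtered samples of W_2 resampled at Frs_2:
#   pitch = estimate_pitch(y2, Frs_2)   # about 215 Hz here
#   gender = "female" if 200 <= pitch <= 300 else \
#            "male" if 100 <= pitch <= 150 else "undetermined"
```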
The above discussion shows the suitability of the proposed techniques for low-activity time-varying signals like the electrocardiogram, the phonocardiogram, seismic signals, and speech. Speech is a common and easily accessible signal; therefore, the performance of the proposed techniques is studied for a speech application, though they can be applied to other appropriate real signals like the electrocardiogram, the phonocardiogram, and seismic signals. The versatility of the devised approach lies in the appropriate choice of the system parameters, like the AADC resolution M, the distribution of the level-crossing thresholds, and the interpolation order. These parameters should be tactfully chosen for a targeted application, so that they ensure an attractive tradeoff between the system computational complexity and the delivered output quality.

Conclusion
Two novel adaptive rate filtering techniques have been devised. They are well suited for low-activity sporadic signals like electrocardiogram, phonocardiogram, and seismic signals. For both filtering techniques, a reference filter is designed offline, taking into account the input signal statistical characteristics and the application requirements.
The complete procedure for obtaining the resampling frequency Frs_i and the decimated filter coefficients h_i^j for W_i has been described for both proposed techniques. The computational complexities of the ARD and the ARR have been deduced and compared with the classical case. It is shown that the proposed techniques result in a gain of more than one order of magnitude in terms of additions and multiplications over the classical one. This is achieved thanks to the joint benefits of the AADC, the ASA, and the resampling, as they allow the online adaptation of the parameters (Fs_i, Frs_i, N_i, Nr_i, D_i, and P_i) by exploiting the input signal local variations. It drastically reduces the total number of operations and, therefore, the energy consumption compared to the classical case.
A complexity comparison between the ARD and the ARR has also been made. It is shown that the ARR outperforms the ARD in most cases. The performances of the ARD and the ARR have also been demonstrated for a speech application. The results obtained in this case are coherent with those obtained for the illustrative example.
Methods to compute the approximation and filtering errors of the proposed techniques have also been devised. It is shown that the errors made by the proposed techniques are minor in the studied case. A higher precision can be achieved by increasing the AADC resolution and the interpolation order. Thus, a suitable solution can be proposed for a given application by making an appropriate tradeoff between the accuracy level and the computational load.
A detailed study of the computational complexity of the proposed filtering techniques, taking into account the real processing cost at circuit level, is in progress. Future work will focus on the optimization of these filtering techniques and their employment in real-life applications.