# Fast algorithms for band-limited extrapolation by over sampling and Fourier series

## Abstract

In this paper, fast algorithms for the extrapolation of band-limited signals are presented, based on the sampling theorem and Fourier series in the case of over sampling. Assume the band-limited signal is known in a finite interval. We update the signal outside the interval by the Shannon sampling theorem in the case of over sampling. Then we obtain a fast algorithm in the form of Fourier series instead of the Fourier transform in the Papoulis–Gerchberg algorithm. The Gibbs phenomenon is analyzed in the method. An algorithm is presented to control the Gibbs phenomenon, and some examples are given in the experimental results.

## 1 Introduction

The extrapolation of band-limited signals is widely applied in engineering, seismology, industrial electronics, and other fields [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19].

In [19], it is pointed out that the extrapolation of bandlimited signals is one of the essential research objects in signal processing, wireless communication, and positioning scenarios, where the transmitted signals are always bandlimited.

In [1] and [2], Papoulis and Gerchberg presented an iterative algorithm in which the iteration converges to the extrapolation of the band-limited signal. In [3], an improved version of the Papoulis–Gerchberg algorithm is given. In [4], the application of the extrapolation in spectral estimation is described. In [5,6,7,8,9,10,11,12,13,14], other extrapolation algorithms are presented. In [15, 16], applications of extrapolation and the regularization method in image restoration are given. In [20], the discrete iteration of the Papoulis–Gerchberg algorithm is studied. Fast iterative and noniterative methods are presented in [21]. In [22], the connections between two approaches to band-limited interpolation are clarified. In [19], a piece-wise extrapolation method is proposed to solve the problems arising from the error variation and error accumulation within the iterations of the Papoulis–Gerchberg algorithm. The extrapolation process is divided into several pieces, and the minimum energy least squares error extrapolation result based on the Papoulis–Gerchberg algorithm is given to reduce the accumulated error. In [23], the Papoulis–Gerchberg algorithm is extended for extrapolation of signals in higher dimensions, and the regularization method is applied in the noisy case due to the instability. In [24] and [25], a fast convergence algorithm is presented. This algorithm has been proved to converge faster than the Papoulis–Gerchberg algorithm. In [26], the fast convergence algorithm is taken to be a generalization of the Papoulis–Gerchberg algorithm. In [27], it is cited among the nonpatent citations.

In the extrapolation algorithms described above, we need to compute the integrals in the Fourier transform. The novelty of this paper is to revise the fast convergence algorithm in [24] and [25] by the Shannon Sampling Theorem. The formula of the iteration can be simplified in the case of over sampling. We replace the Fourier transform [1, 2, 24, 25] by a Fourier series in the new algorithm in this paper. In this way, we obtain a fast algorithm in the form of Fourier series instead of the Fourier transform in the Papoulis–Gerchberg algorithm, so the computation of the integrals in the previous algorithms [1, 2, 24, 25] is omitted. The amount of computation is reduced and the accuracy is even better. We also consider the truncation of the Fourier series, in which the Gibbs phenomenon occurs, and a fast algorithm is presented to remove the Gibbs phenomenon.

In this paper, we will improve the fast convergence algorithm in [24] by the Shannon Sampling Theorem in the case of over sampling [28]. By over sampling, we will present a fast algorithm which is more effective than the fast convergence algorithm in [24] and [25]. In Sect. 2, we describe the band-limited extrapolation problem. In Sect. 3, we describe how the sampling theorem is used to obtain a fast algorithm and explain why we can obtain more accurate solutions by the fast algorithm. In Sect. 4, the method and experiment for extrapolation are given. In Sect. 5, the Gibbs phenomenon is analyzed and shown by some examples, and another algorithm is presented to control the Gibbs phenomenon. In Sect. 6, we discuss the experimental results. Finally, the conclusion is given in Sect. 7.

## 2 Extrapolation of band-limited signals

In this section, we introduce the problem of extrapolation of band-limited signals.

The definition of band-limited functions is as follows:

### Definition

For $$\Omega =const.>0$$, a function $$f\in L^2({\textbf {R}})$$ is said to be $$\Omega$$-band-limited if $${\hat{f}}(\omega )=0 \,\,\forall \omega \in {\textbf {R}}\backslash [-\Omega ,\Omega ]$$. Here $${\hat{f}}$$ is the Fourier transform of f [29]:

\begin{aligned} {\mathcal{F}} (f)(\omega )=\hat{f}(\omega ):=\int _{-\infty }^{+\infty }f(t)e^{-j\omega t}\textrm{d}t,\, \omega \in {\textbf {R}}, \end{aligned}
(1)

where $${\textbf {R}}$$ is the set of all real numbers.

We then have the inversion formula:

\begin{aligned} \mathcal{F}\ ^{-1} ({\hat{f}})(t) = f(t)=\frac{1}{2\pi } \int _{-\Omega }^{\Omega }{\hat{f}}(\omega )e^{j\omega t}\textrm{d}\omega ,\,\mathrm{a.e.}\, t\in {\textbf {R}}. \end{aligned}
(2)

By the derivative

\begin{aligned} f'(t)=\frac{1}{2\pi } \int _{-\Omega }^{\Omega }(j\omega ){\hat{f}}(\omega )e^{j\omega t}\textrm{d}\omega \end{aligned}

we can see that f(t) is an analytic function if we take t to be a complex variable. For analytic functions, we have the uniqueness theorem [30].

The Uniqueness Theorem. Let f(z) and g(z) be analytic in a region $$\mathcal R$$. If the set of points z in $$\mathcal R$$ where $$f(z) = g(z)$$ has a limit point in $$\mathcal R$$, then $$f(z) = g(z)$$ for all $$z \in \mathcal R$$. In particular, if the set of zeros of f(z) in $$\mathcal R$$ has a limit point in $$\mathcal R$$, then f(z) is identically zero in $$\mathcal R$$.

In this paper, we only consider t in the set of real numbers.

By the uniqueness theorem for analytic functions, f(t) is uniquely determined by its values on $$[-T,\,T]$$, where $$T=const.>0$$, since any point in $$[-T,\,T]$$ is a limit point. So we present the band-limited extrapolation problem:

Assume that $$f:{\textbf {R}}\rightarrow {\textbf {R}}$$ is a band-limited function and T is a positive constant,

\begin{aligned} \textrm{given}\, f(t)\,\,\, t\in [-T,T],\, \textrm{find}\,\, f(t)\,\,\, t\in {\textbf {R}}\backslash [-T,T]. \end{aligned}
(3)

### Remark 1

The problem (3) is the problem of extrapolation in the time domain. In some papers, the problem of extrapolation in the frequency domain to obtain a higher resolution signal in the time domain is discussed [31, 32].

Here we give an example of band-limited signal.

### Example

Suppose

\begin{aligned} f(t):=\frac{1-\cos t}{\pi t^2} \end{aligned}

is the signal in Fig. 1.

Then construct

\begin{aligned} {\hat{f}}(\omega )= \left\{ \begin{array}{ll} 1- |\omega |, &{} \quad \omega \in [-\Omega ,\Omega ] \\ 0,&{} \quad \omega \in {\textbf {R}}\backslash [-\Omega ,\Omega ]. \end{array} \right. \end{aligned}

Here $$\Omega =1$$. f(t) is a band-limited signal, and it is uniquely determined by f(t), $$t\in [-T,T]$$.
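As a side check (the test point $$t=0.7$$ and the integration grid below are assumptions for illustration, not from the paper), this transform pair can be verified numerically from the inversion formula (2):

```python
import numpy as np

# Check f(t) = (1 - cos t)/(pi t^2) against (1/2pi) int_{-1}^{1} (1-|w|) e^{jwt} dw.
t = 0.7
w = np.linspace(-1.0, 1.0, 200001)
dw = w[1] - w[0]
# The integrand's imaginary part is odd in w, so only the cosine part contributes.
f_from_spectrum = np.sum((1.0 - np.abs(w)) * np.cos(w * t)) * dw / (2.0 * np.pi)
f_direct = (1.0 - np.cos(t)) / (np.pi * t ** 2)
```

The two values agree to high precision, confirming that the triangle spectrum on $$[-1,1]$$ corresponds to this signal.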

The Papoulis–Gerchberg algorithm in [1, 2] is as follows:

\begin{aligned} f_0:=\mathcal{P}_Tf. \end{aligned}

For $$k=0,1,2,...$$,

\begin{aligned} f_{k+1}:=\mathcal{P}_Tf+(\mathcal{I}-\mathcal{P}_T)\mathcal{F} ^{-1} \mathcal{P}_{\Omega } \mathcal{F} f_k, \end{aligned}

where $$\mathcal I$$ is the identity operator,

\begin{aligned} \mathcal{P}_Tf(t):= \left\{ \begin{array}{ll} f(t), &{} \quad t\in [-T,T] \\ 0,&{} \quad t\in {\textbf {R}}\backslash [-T,T] \end{array} \right. \end{aligned}

and

\begin{aligned} \mathcal{P}_{\Omega }{\hat{f}}(\omega ):= \left\{ \begin{array}{ll} {\hat{f}}(\omega ), &{} \quad \omega \in [-\Omega ,\Omega ] \\ 0,&{} \quad \omega \in {\textbf {R}}\backslash [-\Omega ,\Omega ]. \end{array} \right. \end{aligned}

The convergence $$||f_k -f||_{L^2} \rightarrow 0$$ is proven in [1] by Papoulis.

## 3 A fast extrapolation algorithm

In this section, we present a fast extrapolation algorithm in the case of over sampling by the sampling theorem. We first describe the Shannon Sampling Theorem in [29].

Shannon Sampling Theorem. The $$\Omega _0$$-band-limited signal $$f(t)\in L^2({\textbf {R}})$$ can be exactly reconstructed from its samples $$f(nh_0)$$, and

\begin{aligned} f(t)=\sum _{n=-\infty }^\infty f(nh_0)\frac{\sin \Omega _0(t-nh_0)}{\Omega _0(t-nh_0)} \end{aligned}

where $$h_0:= \pi /\Omega _0$$.

### Remark 2

Since $$\Omega _0$$-band-limited signals are also $$\Omega$$-band-limited for $$\Omega \ge \Omega _0$$, we have the formula for oversampling

\begin{aligned} f(t)=\sum _{n=-\infty }^\infty \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} f(nh) \end{aligned}

where $$h:= \pi /\Omega$$ and $$h\le h_0$$.
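As an illustration of the oversampled series (the signal, sample range, and evaluation points below are assumptions for illustration), a truncated version of the sum can be evaluated directly:

```python
import numpy as np

def sinc_reconstruct(f, h, N, t):
    """Truncated oversampled Shannon series:
    f(t) ~= sum_{|n|<=N} f(nh) * sin(Omega(t-nh))/(Omega(t-nh)), with Omega = pi/h."""
    Omega = np.pi / h
    n = np.arange(-N, N + 1)
    # np.sinc(x) = sin(pi x)/(pi x), so sin(Omega u)/(Omega u) = np.sinc(Omega u / pi)
    return np.sum(f(n * h) * np.sinc(Omega * (t - n * h) / np.pi))

# f(t) = (1 - cos t)/(pi t^2) is 1-band-limited; sample with Omega = 4 (over sampling)
f = lambda t: np.sinc(t / (2 * np.pi)) ** 2 / (2 * np.pi)
approx = sinc_reconstruct(f, h=np.pi / 4, N=200, t=0.3)
```

With $$\Omega =4>\Omega _0=1$$ this is the oversampled case; the error comes only from discarding the tail samples, which is exactly the quantity bounded in Theorem 1 below.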

In [24] and [25], a fast convergence algorithm is presented (Chen Fast Convergence Algorithm):

\begin{aligned} f^C_0:=\mathcal{P}_Tf. \end{aligned}

For $$k=0,1,2,...$$

\begin{aligned} f^C_{k+1}:=\mathcal{P}_Tf+(\mathcal{I}-\mathcal{P}_T){\mathcal{F}} ^{-1}\mathcal{P}_{\Omega }{\hat{f}}^C_k \end{aligned}

where

\begin{aligned} {\hat{f}}^C_{k+1}(\omega ):= & {} \int _{-T}^T f(t)e^{-j\omega t}\textrm{d}t\nonumber \\{} & {} +\sum _{|nh|\le T} f(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] \nonumber \\{} & {} + \sum _{|nh|>T} g^C_k(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] \end{aligned}
(4)

in which $$g^C_k={\mathcal{F}} ^{-1}\mathcal{P}_{\Omega }{\hat{f}}^C_k.$$

For each $$\omega$$, in

\begin{aligned} \int _{-T}^T f(t)e^{-j\omega t}\textrm{d}t \approx \sum _{i=-M}^{M-1} f(t_i)e^{-j\omega t_i}\Delta t \end{aligned}
(4a)

where $$t_i=i\Delta t$$ and $$\Delta t = T /M$$, we need $$8M+1$$ real multiplications and $$6M-1$$ real additions. Here M is a large positive integer.

We assume we have $$2N_T+1$$ terms in

\begin{aligned} \sum _{|nh|\le T} f(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] \end{aligned}

then we need $$(2N_T+1)(8M+1)+2N_T+1$$ real multiplications and $$(2N_T+1)(6M+1)+2N_T$$ real additions. We assume we have chosen $$2N_1$$ terms in

\begin{aligned} \sum _{|nh|>T} g^C_k(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] , \end{aligned}

then we need $$(2N_1)(8M+1)+2N_1$$ real multiplications and $$(2N_1)(6M+1)+2N_1-1$$ real additions.

Now, we try to find a fast algorithm by the Shannon Sampling Theorem.

In (4), we need the computation of the integrals

\begin{aligned} \int _{-T}^T f(t)e^{-j\omega t}\textrm{d}t \end{aligned}

for extrapolations. We can use the rectangular formula, the trapezoidal formula, or Simpson's formula. In (4a), we use the rectangular formula.
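The rectangular formula (4a) can be sketched as follows (the test function and node count in the usage are assumptions for illustration):

```python
import numpy as np

def fourier_integral_rect(f, T, M, omega):
    """Left-endpoint rectangular rule for int_{-T}^{T} f(t) e^{-j omega t} dt,
    with nodes t_i = i * dt, i = -M, ..., M-1, and dt = T/M, as in (4a)."""
    dt = T / M
    t = np.arange(-M, M) * dt
    return np.sum(f(t) * np.exp(-1j * omega * t)) * dt
```

Each evaluation costs O(M) operations per frequency $$\omega$$, which is the per-frequency cost that the fast algorithm of this paper avoids.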

If we replace f(t) in (4) by its approximation

\begin{aligned} f(t)\approx \sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} \end{aligned}
(5)

and we choose $$\Omega$$ large enough such that $$\Delta t = h= \pi /\Omega$$, then we obtain the same result in computation, since the function values of f(t) at $$t=t_i,\, i= -M, -M+1,...,M-1, M$$ are not changed.

In (5), we have taken

\begin{aligned} \sum _{|nh|> T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}=0. \end{aligned}

However, in the k-th step of the iteration, the signal outside $$[-T,T]$$ has been updated by $$g^C_k$$. We can add

\begin{aligned} \sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} \end{aligned}
(6)

into (5) and obtain another approximation of f(t)

\begin{aligned} f(t)\approx \sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} + \sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}. \end{aligned}
(7)

We can replace f(t) in (4) by (7):

\begin{aligned} {\hat{f}}^C_{k+1}(\omega ):= & {} \int _{-T}^T \left[ \sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} + \sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right] e^{-j\omega t}\textrm{d}t\\{} & {} +\sum _{|nh|\le T} f(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] \\{} & {} + \sum _{|nh|>T} g^C_k(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] . \end{aligned}

By canceling like terms, we obtain another expression of $${\hat{f}}^C_{k+1}(\omega )$$:

\begin{aligned} {\hat{f}}^C_{k+1}(\omega ):=\sum _{|nh|\le T}f(nh) he^{-jnh\omega } + \sum _{|nh|> T} g^C_k(nh) he^{-jnh\omega }. \end{aligned}
(8)

This is a Fourier series in the case of over sampling and it is much simpler than (4).

Based on the formula (8), we present a fast algorithm:

\begin{aligned} {\hat{f}}^C_0(\omega ):=\sum _{|nh|\le T} f(nh) he^{-jnh\omega }. \end{aligned}
(9)

For $$k=0,1,2,...$$,

\begin{aligned} {\hat{f}}^C_{k+1}(\omega ):={\hat{f}}^C_0(\omega ) + \sum _{|nh|> T} g^C_k(nh) he^{-jnh\omega } \end{aligned}
(10)

where

\begin{aligned} g^C_k=\mathcal{F}^{-1}\mathcal{P}_{\Omega _0}{\hat{f}}^C_k. \end{aligned}
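A minimal numerical sketch of the iteration (9)-(10) follows; the function name, grid sizes, the rectangular quadrature for the inverse transform, and the truncation of the sums are assumptions, not the paper's exact implementation:

```python
import numpy as np

def chen_fast_alg1(f, T, Omega, Omega0, N, L, iters):
    """Sketch of the iteration (9)-(10): the Fourier-series coefficients are
    the samples f(nh) (known, |nh| <= T) and g_k(nh) (updated, |nh| > T)."""
    h = np.pi / Omega                        # oversampling step, Omega >= Omega0
    n = np.arange(-N, N + 1)
    inner = np.abs(n * h) <= T               # positions of the known samples
    s = np.where(inner, f(n * h), 0.0)       # current samples of the iterate
    w = (np.arange(L) + 0.5) / L * 2 * Omega0 - Omega0  # midpoint grid on [-Omega0, Omega0]
    dw = 2 * Omega0 / L
    for _ in range(iters):
        # hat f_k(w) = sum_n s(nh) h e^{-jnhw}: the truncated Fourier series
        F = h * (np.exp(-1j * np.outer(w, n * h)) @ s)
        # g_k = F^{-1} P_{Omega0} hat f_k, evaluated at the outer sample points
        g = (np.exp(1j * np.outer(n[~inner] * h, w)) @ F).real * dw / (2 * np.pi)
        s[~inner] = g                        # the update (10)
    return n * h, s

# illustrative run with f(t) = (1 - cos t)/(pi t^2), known on [-pi/5, pi/5]
f = lambda t: np.sinc(t / (2 * np.pi)) ** 2 / (2 * np.pi)
ts, s = chen_fast_alg1(f, T=np.pi / 5, Omega=10.0, Omega0=1.0, N=50, L=200, iters=6)
```

In a quick test, the extrapolated samples are closer to the true signal than the zero initial guess, in line with the error-decrease property proved below.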

This algorithm should give a more accurate approximation, since we have added (6) to (5). This can be seen from the theorem below.

### Theorem 1

The error energy of (5) is

\begin{aligned} \left\| f(t)-\sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right\| ^2 =h\sum _{|nh|> T} |f(nh)|^2. \end{aligned}

The error energy of (7) is

\begin{aligned} \left\| f(t)-\sum _{|nh|\le T} f(nh)\frac{\sin \Omega (t-nh)}{\Omega (t-nh)} -\sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right\| ^2 =h\sum _{|nh|> T} |f(nh)- g^C_k(nh)|^2 \end{aligned}

where $$h=\frac{\pi }{\Omega }$$, and $$g^C_k$$ is the k-th approximation of f for $$|t|>T$$.

### Proof

Assume s(t) is $$\Omega$$-band-limited. By Parsevalâ€™s theorem for Fourier transform, we have

\begin{aligned} \int _{-\infty }^{\infty }|s(t)|^2 \textrm{d}t=\frac{1}{2\pi }\int _{-\Omega }^{\Omega }|{\hat{s}}(\omega )|^2 \textrm{d}\omega . \end{aligned}

By Parsevalâ€™s theorem for Fourier series, we have

\begin{aligned} \int _{-\Omega }^{\Omega }|{\hat{s}}(\omega )|^2 \textrm{d}\omega =\frac{2\pi ^2}{\Omega }\sum _{n=-\infty }^{\infty } |s(nh)|^2. \end{aligned}

Then

\begin{aligned} \int _{-\infty }^{\infty }|s(t)|^2 \textrm{d}t =h \sum _{n=-\infty }^{\infty } |s(nh)|^2. \end{aligned}

Let

\begin{aligned} s(t)=f(t)-\sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} =\sum _{|nh|> T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} \end{aligned}

we have

\begin{aligned} \left\| f(t)-\sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right\| ^2 =h\sum _{|nh|> T} |f(nh)|^2. \end{aligned}

Let

\begin{aligned} s(t)= & {} f(t)-\sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} -\sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\\= & {} \sum _{|nh|> T} (f(nh)-g^C_k(nh)) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} \end{aligned}

we have

\begin{aligned} \left\| f(t)-\sum _{|nh|\le T} f(nh)\frac{\sin \Omega (t-nh)}{\Omega (t-nh)} -\sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right\| ^2 =h\sum _{|nh|> T} |f(nh)- g^C_k(nh)|^2. \end{aligned}

$$\square$$
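The discrete energy identity $$\int |s(t)|^2\,\textrm{d}t=h\sum _n |s(nh)|^2$$ used in this proof can be checked numerically; the test signal $$s(t)=\sin (\Omega t)/(\Omega t)$$ and the truncation ranges below are assumptions for illustration:

```python
import numpy as np

Omega = 2.0
h = np.pi / Omega
s = lambda t: np.sinc(Omega * t / np.pi)   # sin(Omega t)/(Omega t), Omega-band-limited

# left side: int |s(t)|^2 dt, truncated to a large window
t = np.linspace(-400.0, 400.0, 2_000_001)
lhs = np.sum(s(t) ** 2) * (t[1] - t[0])

# right side: h * sum_n |s(nh)|^2; here s(nh) = 0 for all n != 0, so the sum is h
n = np.arange(-1000, 1001)
rhs = h * np.sum(s(n * h) ** 2)
```

Both sides equal $$\pi /\Omega$$ up to truncation and quadrature error, as the identity predicts.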

### Theorem 2

The error energy

\begin{aligned} ||f^C_k(t)-f(t)||^2\ge ||f^C_{k+1}(t)-f(t)||^2. \end{aligned}

### Proof

By Parsevalâ€™s identity

\begin{aligned} ||f^C_k(t)-f(t)||^2= & {} \frac{1}{2\pi }||{\hat{f}}^C_k(\omega )-{\hat{f}}(\omega )||^2\\= & {} \frac{1}{2\pi }\int _{-\infty } ^{\infty }|{\hat{f}}^C_k(\omega )-{\hat{f}}(\omega )|^2 d\omega \ge \frac{1}{2\pi }\int _{-\Omega _0} ^{\Omega _0}|{\hat{f}}^C_k(\omega )-{\hat{f}}(\omega )|^2\textrm{d}\omega \\= & {} \frac{1}{2\pi }\int _{-\Omega _0} ^{\Omega _0}|{\hat{g}}^C_k(\omega )-{\hat{f}}(\omega )|^2\textrm{d}\omega =||g^C_k(t)-f(t)||^2. \end{aligned}

By the Shannon Sampling Theorem

\begin{aligned} g_k ^C(t)=\sum _{n=-\infty }^\infty g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}. \end{aligned}

By the inversion of the Fourier transform on (10)

\begin{aligned} f_{k+1} ^C(t)= & {} \sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\\{} & {} + \sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}. \end{aligned}

By Parsevalâ€™s theorem

\begin{aligned}{} & {} ||g^C_k(t)-f(t)||^2=h\sum _{n=-\infty }^\infty | g^C_k(nh)-f(nh)|^2 \\{} & {} ||f^C_{k+1}(t)-f(t)||^2=h\sum _{|nh|>T} | g^C_k(nh)-f(nh)|^2. \end{aligned}

So

\begin{aligned} ||g^C_k(t)-f(t)||^2\ge ||f^C_{k+1}(t)-f(t)||^2. \end{aligned}

$$\square$$

### Remark 3

This theorem shows that the error energy is decreasing in each step of the iterations. The "=" can be attained only if $$f^C_k(t)=f(t)$$. So, before we have reached the exact extrapolation, we have

\begin{aligned} ||f^C_k(t)-f(t)||^2>||f^C_{k+1}(t)-f(t)||^2. \end{aligned}

## 4 Method and experiment for extrapolation

In this section, we give the method for the algorithm. In computation we can only use finite terms in (10). We choose a large integer $$N>0$$ to approximate (10):

\begin{aligned} {\hat{f}}^C_{k+1}(\omega )={\hat{f}}^C_0(\omega ) + \sum _{|nh|> T \,\textrm{and}\, |n|\le N} g^C_k(nh) he^{-jnh\omega }. \end{aligned}
(11)

We refer to it as Chen Fast Algorithm 1.

We have $$2M+1$$ terms in $${\hat{f}}^C_0(\omega _l)$$. For each $$\omega _l$$, in $${\hat{f}}^C_0(\omega _l)$$ we need 8M real multiplications and 6M real additions. In $$g^C_k(t_m)$$, for each $$t_m$$, we need 8L real multiplications and 6L real additions. In $${\hat{f}}^C_{k+1}(\omega _l)$$, for each $$\omega _l$$, we need $$8(N-M)$$ real multiplications and $$6(N-M)$$ real additions.

Now we give an example of an application of Chen Fast Algorithm 1 and compare it with the Chen Fast Convergence Algorithm and the Papoulis–Gerchberg algorithm.

### Example 1

Suppose

\begin{aligned} f(t):=\frac{1-\cos t}{\pi t^2}. \end{aligned}

Then

\begin{aligned} {\hat{f}}(\omega )=\mathcal{P}_{\Omega _0}(1- |\omega |). \end{aligned}

Here $$\Omega _0=1$$.

Suppose the signal is known on $$[-T, T]=[-\pi /5, \pi /5]$$. In Chen Fast Algorithm 1, we input: $$T=\pi /5$$, $$\Omega =10$$, $$L=50$$, $$N=10$$ and $$NumofIterations=6$$. Since $$\Omega =10\gg \Omega _0=1$$ and the step size of sampling is $$h=\pi /\Omega$$, it is the case of over sampling.

The numerical results by the Papoulis–Gerchberg algorithm, the Chen Fast Convergence Algorithm and Chen Fast Algorithm 1 for the iterations 1, 2, 3, 4 are in Figs. 2, 3, 4, 5, 6, 7, 8 and 9.

The result of the Papoulis–Gerchberg algorithm is given by the dotted lines in black. The result of the Chen Fast Convergence Algorithm is given by the dash-dot lines in red. The result of Chen Fast Algorithm 1 is given by the dashed lines in blue.

The error energy

\begin{aligned} \int _{-\Omega _0}^{\Omega _0}|{\hat{f}}_k(\omega )-{\hat{f}}(\omega )|^2\textrm{d}\omega \end{aligned}

of the Papoulis–Gerchberg algorithm, the Chen Fast Convergence Algorithm and Chen Fast Algorithm 1 for the iterations 1–6 is in Table 1.

In the first iteration, the results of the Papoulis–Gerchberg algorithm and the Chen Fast Convergence Algorithm are the same, but Chen Fast Algorithm 1 is better. In the iterations 2–4, the Papoulis–Gerchberg algorithm is the worst and Chen Fast Algorithm 1 is the best. This can also be seen in Table 1.

## 5 Method to remove Gibbs phenomena

In this section, we give some examples to show the Gibbs phenomenon. In Chen Fast Algorithm 1, we truncated the Fourier series. This gives rise to the Gibbs phenomenon in the frequency domain if $${\hat{f}}(\omega )$$ is not continuous at $$\omega =\Omega _0$$ and $$\omega =-\Omega _0$$. This happens when $$f\in L^2$$ but $$f\notin L^1$$.

### Example 2

Suppose

\begin{aligned} f(t)=\frac{\sin t}{\pi t}. \end{aligned}

Then

\begin{aligned} {\hat{f}}(\omega ):= \left\{ \begin{array}{l} 1,\,\,\,\,\,\omega \in [-\Omega _0,\Omega _0] \\ 0,\,\,\,\,\,\omega \in {\textbf {R}}\backslash [-\Omega _0,\Omega _0]. \end{array} \right. \end{aligned}

Here $$\Omega _0=1$$.

Suppose the signal is known on $$[-T, T]=[-\pi /5, \pi /5]$$. In Chen Fast Algorithm 1, we input $$T=\pi /5$$, $$\Omega =10$$, $$L=50$$, $$N=200$$ and $$NumofIterations=6$$. The numerical results by the algorithm for the iterations 1, 2, 3, 4 are in Fig. 10. The results of Chen Fast Algorithm 1 are given by dashed lines.
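The Gibbs overshoot for a unit-rectangle spectrum on $$[-1,1]$$ (so that $$f(t)=\sin t/(\pi t)$$) can be reproduced directly from a truncated oversampled Fourier series; the frequency grid and thresholds below are illustrative assumptions:

```python
import numpy as np

# Truncated Fourier series for a spectrum equal to 1 on [-1, 1] and 0 elsewhere;
# the jump of the spectrum at w = +-1 produces the Gibbs overshoot.
h = np.pi / 10                        # Omega = 10: over sampling
N = 200                               # truncation of the series
n = np.arange(1, N + 1)
fnh = np.sinc(n * h / np.pi) / np.pi  # f(nh) = sin(nh)/(pi nh)
w = np.linspace(0.8, 1.2, 4001)       # frequencies around the jump at w = 1
# f is even, so the series reduces to a cosine sum; f(0) = 1/pi
S = h * (1.0 / np.pi + 2.0 * (np.cos(np.outer(w, n * h)) @ fnh))
overshoot = S.max()                   # exceeds the true plateau value 1
```

The maximum of the partial sum overshoots the jump by roughly 9%, the classical Gibbs constant, independently of how large N is taken.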

Now, we present a fast algorithm without the truncation error from computing infinitely many terms of the Fourier series. We can change (10) to

\begin{aligned} {\hat{f}}^C_{k+1}(\omega )= & {} {\hat{f}}^C_0(\omega ) - \sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega } + \sum _{n=-\infty }^\infty g^C_k(nh) he^{-jnh\omega } \nonumber \\= & {} {\hat{f}}^C_0(\omega )- \sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega } + {\hat{g}}^C_k(\omega ) \nonumber \\= & {} {\hat{f}}^C_0(\omega )- \sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega } + \mathcal{P}_{\Omega _0}\hat{f^C_k}(\omega ). \end{aligned}
(12)

So we present another method, and we refer to it as Chen Fast Algorithm 2:

We have $$2M+1$$ terms in $${\hat{f}}^C_0(\omega _l)$$. For each $$\omega _l$$, in $${\hat{f}}^C_0(\omega _l)$$ we need 8M real multiplications and 6M real additions. In $$g^C_k(t_m)$$, for each $$t_m$$, we need 8L real multiplications and 6L real additions. In $${\hat{f}}^C_{k+1}(\omega _l)$$, for each $$\omega _l$$, we need 8M real multiplications and 6M real additions.

To compare the complexity of Chen Fast Algorithm 1 and Chen Fast Algorithm 2: if the number of terms in $$\sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega }$$ is less than N, then Chen Fast Algorithm 2 has lower complexity.
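A minimal sketch of the update (12) follows; the function name, frequency grid, quadrature, and parameters are assumptions for illustration. Since the iterate is only needed on $$[-\Omega _0,\Omega _0]$$, the projection $$\mathcal{P}_{\Omega _0}{\hat{f}}^C_k$$ is simply the stored grid values:

```python
import numpy as np

def chen_fast_alg2(f, T, Omega, Omega0, L, iters):
    """Sketch of the update (12): no outer sum is truncated at a finite N."""
    h = np.pi / Omega
    m = int(np.floor(T / h + 1e-12))                     # |nh| <= T  <=>  |n| <= m
    n_in = np.arange(-m, m + 1)
    w = (np.arange(L) + 0.5) / L * 2 * Omega0 - Omega0   # midpoint grid on [-Omega0, Omega0]
    dw = 2 * Omega0 / L
    E = np.exp(-1j * np.outer(w, n_in * h))              # e^{-jnhw} for the inner samples
    F0 = h * (E @ f(n_in * h))                           # hat f_0 of (9) on the grid
    Fk = F0.copy()
    for _ in range(iters):
        # g_k(nh) at the inner points: inverse transform of P_{Omega0} hat f_k
        g_in = (np.conj(E).T @ Fk).real * dw / (2 * np.pi)
        # (12): on the grid inside [-Omega0, Omega0], P_{Omega0} hat f_k is Fk itself
        Fk = F0 - h * (E @ g_in) + Fk
    return w, Fk
```

Because no outer sum is cut off at a finite N, no artificial jump is introduced at the ends of the period, in line with Theorem 3 below; in a quick test the grid error against the true spectrum decreases with the iterations.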

### Theorem 3

For each integer $$k\ge 0$$, the limits

\begin{aligned} \lim _{\omega \rightarrow \Omega ^-}{\hat{f}}^C _k(\omega ) \,\,\,\textrm{and} \,\, \lim _{\omega \rightarrow -\Omega ^+}{\hat{f}}^C _k(\omega ) \end{aligned}

exist.

### Proof

We use mathematical induction. First, for $$k=0$$,

\begin{aligned} {\hat{f}}^C_0(\omega )=\sum _{|nh|\le T} f(nh) he^{-jnh\omega }, \end{aligned}

we can see that the limits exist. Assume the limits exist for $${\hat{f}}^C_k(\omega )$$. By (12)

\begin{aligned} {\hat{f}}^C_{k+1}(\omega )={\hat{f}}^C_0(\omega )- \sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega } + \mathcal{P}_{\Omega _0}\hat{f^C_k}(\omega ), \end{aligned}

we can see the limits exist for $${\hat{f}}^C_{k+1}(\omega )$$. $$\square$$

### Remark 4

By this theorem, we can see that the limits exist and are independent of the N in (11), so the Gibbs phenomenon disappears.

### Example 3

We choose the function in Example 2.

Suppose the signal is known on $$[-T, T]=[-\pi /5, \pi /5]$$.

In Chen Fast Algorithm 2, we input $$T=\pi /5$$, $$\Omega =10$$, $$L=50$$ and $$NumofIterations=6$$.

Since $$\Omega =10\gg \Omega _0=1$$ and the step size of sampling is $$h=\pi /\Omega$$, it is the case of over sampling.

The numerical results by Chen Fast Algorithm 2 for the iterations 1, 2, 3, 4 are in Fig. 11.

The error energy

\begin{aligned} \int _{-\Omega _0}^{\Omega _0}|{\hat{f}}_k(\omega )-{\hat{f}}(\omega )|^2\textrm{d}\omega \end{aligned}

of Chen Fast Algorithm 1 and 2 for the iterations 1–6 is in Table 2. We give the results of Chen Fast Algorithm 1 for the case $$T=10$$ in Fig. 12.

## 6 Results and discussion

In (11) and (12), the computation of the integrals in (4) is omitted. Therefore the amount of computation is greatly reduced. In the computation of (11), the FFT can be used. The amount of computation is of the order $$O(N\log N)$$.

In Example 1, we can see that the numerical result of Chen Fast Algorithm 1 is more accurate than the numerical results of the Papoulis–Gerchberg algorithm and the Chen Fast Convergence Algorithm. The amount of computation is less in Chen Fast Algorithm 1 since there is no computation of integrals in it.

In Example 2, the Gibbs phenomenon occurs. The Gibbs phenomenon can be controlled by Chen Fast Algorithm 2.

In Example 3, we can see that the Gibbs phenomenon affects the accuracy of the numerical results severely even if we choose a large enough T. This is not due to the error energy in the residual sequence beyond $$|t| = T$$; it is due to the truncation error in the Fourier series. And this occurs when $${\hat{f}}(-\Omega _0+0)\not = 0$$ or $${\hat{f}}(\Omega _0-0)\not = 0$$.

## 7 Conclusion

Fast extrapolation algorithms for band-limited signals are introduced by the Shannon Sampling Theorem in this paper. By adding the updated term into the Shannon sampling series in the procedure of iteration, we can obtain more accurate extrapolated signals and reduce the amount of computation. This can be seen in the experimental results. The Gibbs phenomenon can be controlled without requiring the computation of infinitely many terms of the Fourier series.


## References

1. A. Papoulis, A new algorithm in spectral analysis and band-limited extrapolation. IEEE Trans. Circuits Syst. CAS–22, 735–742 (1975)

2. R.W. Gerchberg, Super-resolution through error energy reduction. Opt. Acta 21(9), 709–720 (1974)

3. C.C. Chamzas, W.Y. Xu, An improved version of Papoulis–Gerchberg algorithm on band-limited extrapolation. IEEE Trans. Acoust. Speech Signal Process. ASSP–32(2), 437–440 (1984)

4. A.K. Jain, S. Ranganath, Extrapolation algorithms for discrete signals with application in spectral estimation. IEEE Trans. Acoust. Speech Signal Process. ASSP–29, 830–845 (1981)

5. V. Vaibhav, New method of bandlimited extrapolation, v2, arXiv:1804.04713v2 (2020)

6. S. Xu, S. Tao, Y. Chai, X. Yang, Y. He, The extrapolation of bandlimited signals in the offset linear canonical transform domain. Optik 180, 626–634 (2019)

7. S. Xu, L. Feng, Y. Chai et al., Extrapolation theorem for bandlimited signals associated with the offset linear canonical transform. Circuits Syst. Signal Process 39, 1699–1712 (2020)

8. X.-G. Xia, C.C.J. Kuo, Z. Zhang, Signal extrapolation in wavelet subspaces. SIAM J. Sci. Comput. 16(1), 50–73 (1995)

9. T. Strohmer, On discrete band-limited signal extrapolation. AMS Contemp. Math. 190, 323–337 (1995)

10. X. Zhou, X.-G. Xia, A Sanz–Huang conjecture on band-limited signal extrapolation with noise. IEEE Trans. Acoust. Speech Signal Process. 37(9), 1468–1472 (1989)

11. J.L.C. Sanz, T.S. Huang, Some aspects of band-limited signal extrapolation: models, discrete approximation, and noise. IEEE Trans. Acoust. Speech Signal Process. ASSP–31, 1492–1501 (1983)

12. J. Cadzow, Observations on the extrapolation of band-limited signal problem. IEEE Trans. Acoust. Speech Signal Process. ASSP–29, 1208–1209 (1981)

13. A. Sano, T. Furuya, H. Tsuji, H. Onmori, Simultaneous optimization method of regularization and singular-value decomposition in least squares parameter identification, in Proceedings of IEEE Conference on Acoustics, Speech, and Signal Processing, Glasgow, UK, pp. 2290–2293 (1989)

14. W. Chen, A new extrapolation algorithm for band-limited signals using regularization method. IEEE Trans. Signal Process. SP–41, 1048–1060 (1993)

15. S.J. Reeves, R.M. Mersereau, Optimal estimation of the regularization parameter and stabilizing functional for regularized image restoration. Opt. Eng. 29, 446–454 (1990)

16. K. Drouiche, D. Kateb, C. Noiret, Regularization of the ill-posed problem of extrapolation with the Malvar-Wilson wavelets. Inverse Probl. 17, 1513–1533 (2001)

17. W. Chen, An efficient method for an ill-posed problem – band-limited extrapolation by regularization. IEEE Trans. Signal Process. 54(12), 4611–4618 (2006)

18. J. Weng, One-step band-limited extrapolation using empirical orthogonal functions, in 2006 1st IEEE Conference on Industrial Electronics and Applications

19. Y. Zhang, X. Sha, X. Fang, X. Lin, Piecewise iterative extrapolation method for bandlimited signal. Electronics 11, 1175 (2022)

20. P. Ferreira, Interpolation and the discrete Papoulis–Gerchberg algorithm. IEEE Trans. Signal Process. 42(10), 2596–2606 (1994)

21. P. Ferreira, Noniterative and faster iterative methods for interpolation and extrapolation. IEEE Trans. Signal Process. 42(11), 3278–3282 (1994)

22. P. Ferreira, Interpolation in the time and frequency domains. IEEE Signal Process Lett. 3(6), 176–178 (1996)

23. C. Frankenbach, P. Martinez-Nuevo, M. Moller, W. Kellermann, Extrapolation of bandlimited multidimensional signals from continuous measurements, in 2020 28th European Signal Processing Conference (EUSIPCO), pp. 2309–2313 (2021)

24. W. Chen, A fast convergence algorithm for band-limited extrapolation by sampling. IEEE Trans. Signal Process. 57(1), 161–167 (2009)

25. W. Chen, Some aspects of band-limited extrapolations. IEEE Trans. Signal Process. 58(5), 2647–2653 (2010)

26. D. Batenkov, L. Demanet, Soft extrapolation of bandlimited functions, in 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)

27. C.P. Wang, T.T. Chen, Y.L. Liu, C.F. Dai, H.K. Fu, P.T. Chou, Testing apparatus, https://patents.google.com/patent/US9341669B2/en

28. P. Ferreira, Incomplete sampling series and the recovery of missing samples from oversampled band-limited signals. IEEE Trans. Signal Process. 40(1), 225–227 (1992)

29. A. Steiner, Plancherel's theorem and the Shannon series derived simultaneously. Am. Math. Mon. 87, 193–197 (1980)

30. J.P.S. Kung, C.C. Yang, Complex analysis, in Encyclopedia of Physical Science and Technology, 3rd edn. (2003)

31. E.J. Candes, C. Fernandez-Granda, Towards a mathematical theory of super-resolution. Commun. Pure Appl. Math. 67(6), 906–956 (2014)

32. E.J. Candes, C. Fernandez-Granda, Super-resolution from noisy data. J. Fourier Anal. Appl. 19(6), 1229–1254 (2013)


## Author information


### Corresponding author

Correspondence to Weidong Chen.




Chen, W. Fast algorithms for band-limited extrapolation by over sampling and Fourier series. EURASIP J. Adv. Signal Process. 2023, 107 (2023). https://doi.org/10.1186/s13634-023-01060-9