
Analysis and performance of coded symbol recovery loop using oversampling

Abstract

In this work, we propose a closed-loop analog system to detect the source information of a binary data stream coded by a flexible finite automaton. We initially consider the double sideband suppressed-carrier modulation of a base-band binary amplitude waveform. The automaton introduces symbol redundancy as a phase contribution of the modulated signal through a simple mapping scheme. The proposed recovery system performs a coherent demodulation, presenting the base-band binary wave to a maximum likelihood hard detector, a simple analog trigger that estimates the source data within the symbol period. This wave is over-sampled, and the final decision comes from counting the positive samples and a majority vote. We validate our approach by answering the most important concerns: the stability of the closed loop, a first analytical expression of the error rate when a Markov birth process models the counting phase, and finally the role of this loop in lowering the bit error rate compared to a simple Costas loop. The analysis continues by solving the problem of carrier and symbol rate recovery and by assessing the impact of non-linearity and noise in the basic analog blocks. Behavioral simulations describe a competitive scenario in terms of error rate, comparing the proposed approach to the Costas loop and to the basic convolutional decoding strategies based on the Viterbi algorithm, both in the hard (Hamming metric) and soft (Euclidean metric) versions.

1 Introduction

The need for efficient utilization of the radio channel under additive and multiplicative noise sources has stimulated the investigation of advanced digital modulations and coding techniques. Because highly stable oscillators are available for practical applications, it has been possible to detect digital phase-modulated signals, and over the last 60 years many communication systems have been developed around such modulations. Furthermore, coding theory increases the error-correcting capability of transmitted information by symbol redundancy, requiring more bandwidth for the complete demodulation and decoding [1]. Maximum likelihood (ML) and maximum a posteriori probability (MAP) [2] detection methods require the log-likelihood ratio (LLR) computation, which is hardware expensive for energy-constrained applications [3]. The problem of area reduction in electronic systems concerns the cost of silicon wafers and therefore has an economic impact. Low power dissipation, in turn, prolongs the system lifetime of battery-operated devices such as wireless sensor networks, implantable devices, radio frequency identification, and much more. However, low-power CMOS and radio frequency (RF) design is not exclusively for portable systems; today, reducing power dissipation in electronic circuits is a mandatory target in consumer, industrial, space, and military applications [4].

The relatively simple characterization of a digital communication system is an important advantage over analog communication, where there are many more ways to degrade a transmission. Analog decoding systems have gained much interest in the research community since the contributions by Hagenauer [5] and Loeliger et al. [6]. The main advantages are the extremely low power dissipation and an ML algorithm execution up to 1000 times faster than on a common digital signal processor (DSP). This approach uses bipolar transistors and diodes, which realize the exponential and logarithmic functions, respectively [7]. These basic components calculate the log-likelihood ratio, the main operation in detection theory. The recent literature reports some industrial applications [8–10] that use the analog decoder, although process variability and device mismatches are the most important drawbacks that affect the precision of the LLR estimation [11].

In this work, we seek an alternative way to recover binary information from the modulated waveform that does not need the LLR computation [12]. The related hardware is therefore tailored to the chosen modulation, and we expect some difficulty in supporting different waveforms with the same system. We extend the application of recovery loops, used mainly in digital transmission over a serial link [13, 14] (e.g., cable), to the radio frequency domain. Debugging of test cases, using a rate-1/3 convolutional code [15] and a double sideband suppressed-carrier amplitude modulation (DSB-SC AM), indicated the encoder output matrix and a symmetric phase mapping as responsible for poor error performance. Consequently, we consider flexible finite automata whose loop stable points may correspond, with proper settings, to the correct decoding. The amplitude modulation (AM) part of the used modulated waveform is an antipodal binary base-band signal, mapping the information bit (0, 1) to uk ∈ (−1, +1). Equation (1) shows the transmitted pass-band waveform in the current time interval [kT,(k+1)T], where k is the discrete time step and T is the symbol period; f0 is the carrier frequency, and θ0 the initial phase.

$$ s(t) = \sqrt{E_{s}} \cdot u_{k} \cdot \text{cos} \left(2 \cdot \pi \cdot f_{0} \cdot t +\theta_{k}+\theta_{0} \right) $$
(1)

This wave, measured in engineering units (EU), is functionally equivalent to the well-known trellis-coded modulation (TCM) illustrated in [16–18]. The variable Es is the signal’s energy, measured in joules. The finite automaton receives the binary information, generating the code word ck; Eq. (2) shows the mapping scheme used to calculate the contribution θk in (1).

$$ \theta_{k} = \frac{2 \cdot \pi}{M} \cdot c_{k} $$
(2)

The variable M represents the mapping order, i.e., the number of allowed symmetric phases. The automaton has rate 1/R such that M = 2^R. The proposed recovery loop receives the signal (1) corrupted by a pass-band additive white Gaussian noise (AWGN) [19]. A proper coherent demodulation, by a multi-phase voltage-controlled oscillator (MP-VCO), a mixer, and finally a loop filter, presents to a simple ML hard detector (e.g., a trigger [20]) the amplitude signal corrupted by base-band additive white Gaussian noise. The trigger’s output is an analog estimation within the current symbol interval [kT,(k+1)T]. Finally, the decoded source symbol is chosen by collecting the positive samples at rate T/S, where S is generally a power of two; the current output is 1 if the counter is greater than or equal to S/2 and 0 otherwise.

The loop’s role is to electrically remove the cosine in (1), so it is composed of a mixer, an MP-VCO at frequency f0 and initial phase θv, a loop filter with Laplace function H(s), and a copy of the used finite automaton to generate the current estimated phase \(\hat {\theta }_{k}\left (t\right)\). Since finite automata are discrete-time linear time-invariant (LTI) systems, it is impossible to track the current phase in the analog domain. We solve this apparent problem by using a hybrid finite automaton [21], where the output network works in the continuous-time domain, receiving the source symbol’s analog estimation and tracking the current code word. The internal state update network works in the discrete-time domain, receiving the final estimation at the current time step and preparing the hardware to decode the next symbol.

We analyze the Costas loop as our reference approach, completed internally by a mechanism of triggering, sampling, and finally a majority vote. We also include, as additional benchmarks, the Viterbi algorithms applied to an M-ary phase shift keying (M-PSK) modulation and a convolutional encoder. We use the SystemC/SystemC-AMS class library to model these systems in the scenario of a point-to-point link over an AWGN channel. The loop’s bit error rate (BER) is better than that of our reference system working with perfect phase and timing recovery. Next, we simulate the recovery loop when the carrier’s initial phase and the timing reference introduce a jitter. Then, we include the effects of non-linearity in the mixer and of phase noise [22] in the MP-VCO. Typical architectures of high-frequency MP-VCOs use a closed loop of elementary voltage-controlled oscillators (VCOs), as shown in [23] and [24].

The paper is organized as follows. We analyze in depth the behavior and the necessary conditions for the complete feasibility of our solution in Section 2, where we illustrate in detail the structure and the behavior of the proposed system. Section 3 tackles the noiseless stability of the closed loop, followed by a first analytical expression of the bit error rate in Section 4, where a Markov chain models the counting process. Section 5 addresses the problem of phase and timing recovery. Section 6 shows our conducted simulations. Finally, we consider the single-sideband suppressed-carrier amplitude modulation (SSB-SC AM), which ideally introduces a spectrum efficiency of 100%. Our conclusions underline the importance of this approach and discuss the main telecommunication problems as future directions.

2 Proposed implementation method

In this study, we demonstrate that our analog/mixed-signal system accomplishes the coded data recovery algorithm without the LLR computation. Our closed loop uses a hybrid finite automaton, whose output network works in the transient state. This approach is valid when the symbol time T is greater than the latency of the automaton’s output network. We extend the information illustrated in the introduction to complete the hardware description of our recovery loop. Figure 1 illustrates the proposed transmission system. The finite automaton, with rate 1/R, receives N binary symbols in the set (0, 1), generating serially the current code word ck. The same source binary digit is represented in antipodal form in the set uk ∈ (−1, +1). The pass-band modulator receives the amplitude signal \(m(t)=\sum _{k=0}^{N-1}u_{k}\cdot p(t-kT)\), where p(t) is the rectangular pulse on [0, T]; a final amplifier drives the transmitter antenna.

Fig. 1 The proposed transmitter: a DSB-SC AM modulator
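As a concrete illustration of Eqs. (1) and (2) and of the amplitude signal m(t), the plain C++ sketch below generates the pass-band samples for a short bit stream. It is only a behavioral sketch: the bit values, code words, and the number of samples per symbol are illustrative assumptions (in the real transmitter the code words come from the automaton of Eq. (7)).

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double PI = std::acos(-1.0);
    const double Es = 1.0, f0 = 400e6, T = 80e-9, theta0 = 0.0;   // values from Section 6
    const int M = 8, samplesPerSymbol = 256;                      // illustrative sampling of the carrier
    std::vector<int> u = { 1, -1, 1, 1, -1 };                     // antipodal source symbols u_k (example)
    std::vector<int> c = { 4,  3, 7, 2,  1 };                     // code words c_k (illustrative values)
    for (size_t k = 0; k < u.size(); ++k) {
        double thetak = 2.0 * PI * c[k] / M;                      // phase mapping, Eq. (2)
        for (int n = 0; n < samplesPerSymbol; ++n) {
            double t = k * T + n * (T / samplesPerSymbol);
            double s = std::sqrt(Es) * u[k] *
                       std::cos(2.0 * PI * f0 * t + thetak + theta0);  // pass-band wave, Eq. (1)
            std::printf("%e %e\n", t, s);
        }
    }
    return 0;
}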

Figure 2 shows the proposed coded symbol recovery loop using oversampling. The antenna receives the wave (1), scaled by the channel attenuation and corrupted by AWGN. The low-noise amplifier (LNA) magnifies the signal-to-noise ratio (SNR), presenting the DSB-SC AM signal to the demodulation section. We assume the LNA is an ideal band-pass filter, with a flat spectrum in the considered bandwidth. The proposed loop uses a mixer, modeled originally as a simple ideal multiplier, an MP-VCO that receives the current code word estimation \(\hat {\theta }_{k}(t),\) and finally a loop filter with an internal state reset port. Equation (3) shows the MP-VCO I/O characteristic when the oscillation has the same frequency f0 and is aligned in phase with the transmitter modulator (θ0=θv).

$$ v(t) = \text{cos} \left(2 \cdot \pi \cdot f_{0} \cdot t + \hat{\theta}_{k}\left(t\right) + \theta_{v} \right) $$
(3)
Fig. 2 The proposed coded symbol recovery loop using oversampling. This mixed-signal system recovers the source data by analog estimation, oversampling, and majority voting

The mixer’s output is the sum of an oscillation at twice the carrier frequency, removed by the low-pass loop filter, and the product of the AM wave m(t) and the cosine of the phase difference, as illustrated in Eq. (4). If n(t) is the base-band additive white Gaussian noise and the differential phase is \(\Delta _{k}(t)=\theta _{k} - \hat {\theta }_{k}(t)\), we have at the loop filter’s output, modeled as a simple integrator (1/s), this waveform:

$$ \begin{aligned} y(t) =& 0.5 \cdot \sqrt{E_{s}} \cdot u_{k} \cdot (t/T) \cdot \text{cos} \left(\Delta_{k} \right) +\\ & + n(t) \quad \left(H(s)=\frac{1}{sT}\right) \end{aligned} $$
(4)

The lowercase letter s is the Laplace variable. We consider in this work a single-pole low-pass filter (LPF). Let fp be the filter’s pole measured in hertz and ωp = 2·π·fp measured in rad/s (fp·T ≫ 1); the signal at the trigger input is therefore:

$$ \begin{aligned} y(t) \approx& 0.5 \cdot \sqrt{E_{s}} \cdot u_{k} \cdot \text{cos} \left(\Delta_{k}\right) +\\ &+ n(t) \quad \left(H(s)=\frac{\omega_{p}}{s+\omega_{p}}\right) \end{aligned} $$
(5)

The noise at the antenna is modeled as a band-pass, zero-mean additive Gaussian random process with two-sided power spectral density N0/2, measured in watt/hertz. The noise random process n(t) in (5) is low-pass, zero-mean Gaussian, with energy:

$$ \begin{aligned} \sigma^{2} &= \frac{N_{0}}{2} \cdot T \cdot \int_{-B/2 \cdot f_{p}}^{B/2 \cdot f_{p}} \frac{f_{p}}{1+x^{2}} dx = \\ &= N_{0} \cdot f_{p} \cdot T \cdot \arctan \left(\frac{B}{2 \cdot f_{p}}\right) \quad [\text{Joule}] \end{aligned} $$
(6)
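As a quick numeric check of the closed form in Eq. (6), the snippet below evaluates σ² for the bandwidth, filter pole, and symbol period used later in Section 6; the N0 value is an illustrative assumption.

#include <cmath>
#include <cstdio>

int main() {
    const double N0 = 1.0e-1;   // two-sided density is N0/2 [W/Hz]; value chosen only for illustration
    const double T  = 80e-9;    // symbol period [s]
    const double B  = 2e6;      // LNA bandwidth [Hz]
    const double fp = 20e6;     // loop filter pole [Hz]
    double sigma2 = N0 * fp * T * std::atan(B / (2.0 * fp));   // Eq. (6), closed form
    std::printf("sigma^2 = %e J\n", sigma2);
    return 0;
}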

Figure 3 shows the loop filter’s output under the models (4) and (5): the integrator generates positive and negative ramps, whereas the one-pole low-pass filter generates an exponential smoothing. A deterministic finite automaton (DFA) is represented by a digraph called a state diagram. In fact, a DFA can be described by a 7-tuple (Q, Σ, δ, q0, F, O, X):

  • Q is a finite set of states.

    Fig. 3 Loop filter’s output: low-pass (top) and integrator (bottom). The filter’s output is a continuous function in the current time interval with the polarity aligned to the antipodal source symbol. The threshold for ML detection is zero

  • Σ is the alphabet.

  • δ is the transition function, δ: Q × Σ → Q.

  • q0 is the initial state, q0 ∈ Q.

  • F is the set of final states (F ⊆ Q).

  • O is a finite set of symbols called the output alphabet.

  • X is the output transition function, X: Q × Σ → O.

The set Σ has two elements (−1, +1), whereas the output alphabet O has M different values, ck ∈ {0, 1, …, M−1}. We use a flexible finite automaton instead of a convolutional code to have the freedom to properly select the output alphabet O and the output transition function X. A preliminary noiseless stability analysis, discussed later, drove the final values of the X function in (7). A random output alphabet may cause oscillations due to loop instabilities and an unacceptable global error rate. The cardinality of Q is eight, and δ and X are two matrices with eight rows and two columns. The output values in X limit the possible differential phase Δk, when the two automata have the same internal state, to two different values: zero (loop in-lock) and 2π/M (loop out-of-lock).

$$ \delta : \left| \begin{array}{cc} 0 & 4 \\ 0 & 4 \\ 1 & 5 \\ 1 & 5 \\ 2 & 6 \\ 2 & 6 \\ 3 & 7 \\ 3 & 7 \\ \end{array} \right| X : \left| \begin{array}{cc} 4 & 3 \\ 7 & 6 \\ 2 & 1 \\ 0 & 7 \\ 6 & 5 \\ 5 & 4 \\ 1 & 0 \\ 3 & 2 \end{array} \right| $$
(7)
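The tables in (7) translate directly into two look-up arrays. The C++ sketch below encodes one antipodal symbol per step, as the transmitter automaton and the analog path of the hybrid encoder do; mapping column 0 to uk = −1 and column 1 to uk = +1 is an assumed convention, and the input sequence is illustrative.

#include <cstdio>

static const int DELTA[8][2] = { {0,4},{0,4},{1,5},{1,5},{2,6},{2,6},{3,7},{3,7} }; // next state, Eq. (7)
static const int XOUT [8][2] = { {4,3},{7,6},{2,1},{0,7},{6,5},{5,4},{1,0},{3,2} }; // code word c_k, Eq. (7)

int main() {
    int state = 0;                                // q0, the initial state
    const int u[] = { +1, -1, +1, +1, -1, -1 };   // illustrative antipodal source symbols
    for (int k = 0; k < 6; ++k) {
        int col = (u[k] > 0) ? 1 : 0;             // assumed column convention
        int ck  = XOUT[state][col];               // output transition function X
        std::printf("k=%d state=%d u=%+d c=%d\n", k, state, u[k], ck);
        state = DELTA[state][col];                // state transition function delta
    }
    return 0;
}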

In Eqs. (4) and (5), when the receiver is in the in-lock state, the differential phase is zero, so the ML hard detector (trigger) estimates the source symbol at the minimum error probability:

$$ \hat{u}_{k}\left(t\right) = \text{sign} \left(y(t)\right) $$
(8)

When the differential phase is in the interval [π/2, 3π/2], the cosine is negative, so the wrong decision event increases its probability. This simple issue suggests an implementation of the automaton output function X that avoids a negative cosine. The choice (7) satisfies this criterion under the hypothesis that the two automata (in the transmitter and the recovery loop) have the same internal state. We show later that a positive cosine also matches the loop’s noiseless stability criterion. The sample and hold (S/H) samples the analog estimation (8) at rate T/S. Finally, a log2(S)-bit binary counter counts the positive samples, and the final decision \(\hat {u}_{k}\) is based on a majority vote. This theory applies to binary, identically distributed source symbols. An important point: Viterbi decoders deliver the output estimated source symbol with a delay proportional to the depth of the traceback path, whereas the proposed recovery loop always estimates the source symbol with one symbol delay (T).
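A behavioral sketch of the decision chain described above (trigger of Eq. (8), sample-and-hold at rate T/S, counter, and majority vote) with S = 8; the in-lock signal model and the noise level are illustrative values, and the synthetic samples serve only to show the voting rule.

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const int    S = 8;
    const double Es = 1.0, sigma = 0.5;                      // illustrative signal energy and noise std
    std::mt19937 gen(1);
    std::normal_distribution<double> noise(0.0, sigma);
    int uk = -1;                                             // transmitted antipodal symbol (example)
    int counter = 0;
    for (int m = 0; m < S; ++m) {
        double ym = 0.5 * std::sqrt(Es) * uk + noise(gen);   // in-lock signal at the trigger input
        if (ym > 0.0) ++counter;                             // trigger output sampled by the S/H
    }
    int u_hat = (counter >= S / 2) ? +1 : -1;                // majority vote on the counter value
    std::printf("positive samples = %d, decided symbol = %+d\n", counter, u_hat);
    return 0;
}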

The hybrid automaton uses the final decision \(\hat {u}_{k}\) to update its internal state (Q), preparing the recovery loop for the next decoding. Figure 4 shows a possible implementation of the hybrid automaton based on D-type flip-flops and combinational logic. Hereafter, we call this block the hybrid encoder. Finally, the proposed analog system requires a form of synchronization: the hybrid encoder receives the clock with period T, whereas the counter receives a clock at an S-times higher frequency (period T/S). Additionally, a reset pulse with period T and duty cycle 1/S, at the beginning of the current symbol period, puts all the subsystems with memory in a known initial state. It clears the internal counters, the hybrid encoder, and the loop filter internal states.

Fig. 4 The register-based hybrid encoder. This hybrid automaton allows the code word tracking

3 The noiseless stability analysis

The proposed recovery system uses the analog loop to track the current phase code word, applying the AM section of the supported modulation to an ideal ML hard detector, a simple trigger. For this reason, stability represents one of the most important concerns for the concrete deployment of such a system. In this section, we tackle the problem of the loop’s stability, demonstrating that, under specific hypotheses, the correct decoding is a stable point for the whole system. We analyze the stability of the proposed loop by removing the effect of the AWGN noise and modeling the analog loop as a switching system made of the mixer, the loop filter, the ML hard detector, and finally the analog path of the hybrid encoder. Figure 5 shows this hypothetical equivalent description of the tracking process. Let x(t) be the loop filter’s internal state under the model (5) and y(t)=x(t); the switching system then obeys these constituent equations:

$$ \begin{array}{l} f_{1}(x) = - \thinspace \omega_{p} \cdot x + \omega_{p} \cdot u_{k} \quad x \ge 0 \\ f_{2}(x) = - \thinspace \omega_{p} \cdot x + a \cdot \omega_{p} \cdot u_{k} \quad x<0\\ \end{array} $$
(9)
Fig. 5 The closed-loop mathematical model as switched system. The loop is stable when the correct detection is the unique statistical equilibrium point

In (9), we replaced the cosine with a generic variable a which is either positive or negative. The Lyapunov stability criterion for the single equation in (9) is not a sufficient condition for the stability of the whole switching system [25, 26]. Let uk be 1 for simplicity; equilibrium points are calculated by placing \(\dot {x}= f_{i}(x) = 0, i =1,2\) in both equations in (9) so they are:

$$ \begin{array}{l} x_{eq}^{1} = 1 \quad x_{eq}^{2} = a \\ \end{array} $$
(10)

When a is positive, the correct detection is the unique equilibrium point; instead, if the variable is negative, the wrong detection, and therefore the out-of-lock internal state, is an additional equilibrium point. We conclude that the cosine of the phase difference must be positive for the stability of the whole system. This result implies the correct selection of the automata output transition function (X); the choice (7) matches our stability condition. A positive cosine and the phase mapping in (2) imply that the transmitted (ck) and received (\(\hat {c}_{k}(t)\)) code words satisfy:

$$ -\frac{M}{4} < \hat{c}_{k}\left(t \right) - c_{k} < \frac{M}{4} \qquad \forall k,\; t \in \left[kT,(k+1)T\right] $$
(11)

This last result justifies the use of a flexible automaton instead of a convolutional or turbo code [27, 28]. As a consequence of Eq. (11), a stable loop supporting M generic symmetric phases, similarly to (7), has an output transition matrix X where, in each row, the two possible values xi,1 and xi,2 generate a positive cosine.

$$ X : \left| \begin{array}{cc} x_{1,1} & x_{1,2} \\ x_{2,1} & x_{2,2} \\ \dots & \dots \\ x_{i,1} & x_{i,2} \\ \dots & \dots \\ \end{array} \right| $$
(12)

Each row of the matrix X therefore satisfies the following condition:

$$ \cos \left(\frac{2\cdot\pi}{M} \cdot \left(x_{i,1}-x_{i,2}\right) \right) > 0, \quad \forall\, x_{i,1},x_{i,2} \in \{0,1,\dots,M-1\} $$
(13)

The recovery loop with two symmetric phases (M = 2) has the wrong detection as an additional equilibrium point. If our loop supports four symmetric phases (M = 4), the point zero is not an additional equilibrium point for this switching system. However, this choice implies the highest out-of-lock probability, responsible for an unacceptable global error rate. So, M = 8 is the minimum for the concrete deployment of our solution.
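Condition (13) can be verified row by row for any candidate output matrix. The small C++ check below, a sketch rather than part of the design flow, confirms that the X matrix of Eq. (7) keeps the cosine of the differential phase strictly positive for M = 8.

#include <cmath>
#include <cstdio>

int main() {
    const int M = 8;
    const int X[8][2] = { {4,3},{7,6},{2,1},{0,7},{6,5},{5,4},{1,0},{3,2} };   // Eq. (7)
    const double PI = std::acos(-1.0);
    bool stable = true;
    for (int i = 0; i < 8; ++i) {
        double c = std::cos(2.0 * PI / M * (X[i][0] - X[i][1]));   // left-hand side of Eq. (13)
        std::printf("row %d: cos = %.3f\n", i, c);
        if (c <= 0.0) stable = false;                              // Eq. (13) violated
    }
    std::printf("condition (13) %s\n", stable ? "satisfied" : "violated");
    return 0;
}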

4 The lower bound analytical bit error rate

A secondary goal, in the analysis of the proposed recovery loop, is to compare the BER from behavioral simulations with a predicted model. The detection algorithm, after a preliminary coherent demodulation, is mainly a counting process analyzing the positive samples of (8). For this reason, a Markov chain, specialized to a pure birth process, represents the optimal mathematical description. For the loop state (in-lock or out-of-lock), we consider a two-state Markov chain. The internal states S0 and S1 represent the in-lock and out-of-lock values, respectively. In this section, we approximate the single-pole loop filter’s step response with a series of rectangular pulses, and we consider a phase mapping with eight different values (M = 8) and a phase difference Δk(t) aligned to the model (7). The equations below describe the sampled wave at the trigger’s input when the loop is in-lock or out-of-lock.

$$ \left\{ \begin{array}{l} S_{0} \rightarrow y_{m} = 0.5 \cdot \sqrt{E_{s}} \cdot u_{k} + n_{m} \\ S_{1} \rightarrow y_{m} = 0.5 \cdot \sqrt{E_{s}} \cdot u_{k} \cdot \text{cos}\left(\Delta_{k,m}\right) + n_{m} \\ \end{array}\right. $$
(14)

The random processes ym and nm represent the signal and the noise at the trigger’s input respectively at sampling step m=0,1,2,..,S−1. Samples of the differential phase use the variable Δk,m. The Markov chain’s transition probabilities (PR) p(S0→S1) and q(S1→S0) come from a statistical analysis; they are the result of binary antipodal amplitude modulation analysis:

$$ \begin{aligned} P_{R} \{ S_{m+1} &= S_{0} | S_{m} = S_{0} \} = \\ &=1- Q_{f} \left(0.5 \cdot \sqrt{\frac{E_{s}}{\sigma^{2}}}\right) = 1 - p\\ \end{aligned} $$
(15)
$$ \begin{aligned} P_{R} \{ S_{m+1} &= S_{1} | S_{m} = S_{1} \} = \\ &=Q_{f} \left(0.5 \cdot \text{cos}(\Delta_{k,m}) \cdot \sqrt{\frac{E_{s}}{\sigma^{2}}}\right) = 1-q\\ \end{aligned} $$
(16)

The term Qf is the tail probability (complementary cumulative distribution function) of the standardized normal random variable. We solve the Chapman-Kolmogorov equations for the stationary distribution, deriving the in-lock probability PL:

$$ P_{L} = P(S0) = \frac{1}{1+\frac{p}{q}} $$
(17)
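A small sketch of Eqs. (15)–(17): the Q function is written through the standard erfc, and PL is evaluated for an illustrative Es/σ² and the worst-case differential phase 2π/M with M = 8; both numeric choices are assumptions for illustration.

#include <cmath>
#include <cstdio>

double Qf(double x) { return 0.5 * std::erfc(x / std::sqrt(2.0)); }   // Gaussian tail function

int main() {
    const int    M = 8;
    const double EsOverSigma2 = 4.0;                         // illustrative Es/sigma^2
    const double PI = std::acos(-1.0);
    const double delta = 2.0 * PI / M;                       // out-of-lock differential phase
    double a = std::sqrt(EsOverSigma2);
    double p = Qf(0.5 * a);                                  // S0 -> S1, from Eq. (15)
    double q = 1.0 - Qf(0.5 * std::cos(delta) * a);          // S1 -> S0, from Eq. (16)
    double PL = 1.0 / (1.0 + p / q);                         // stationary in-lock probability, Eq. (17)
    std::printf("p=%.4f q=%.4f PL=%.4f\n", p, q, PL);
    return 0;
}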

The decision unit is a binary counter, so we use an S-state pure birth process. We identify four different scenarios that correspond to four different transition probabilities λ:

  • E0 is loop in-lock and source symbol positive.

  • E1 is loop out-of-lock and source symbol positive.

  • E2 is loop in-lock and source symbol negative.

  • E3 is loop out-of-lock and source symbol negative.

We specialized the model (14) in these four different scenarios, deriving four birth rates as the probability \(\lambda _{E_{i}} = P_{R} \{ y_{m} > 0 | E_{i}\}\):

$$ \begin{array}{l} E_{0} \rightarrow \lambda_{E_{0}} = 1-Q_{f}\left(0.5 \cdot \sqrt{E_{s}} / \sigma\right) \\ E_{1} \rightarrow \lambda_{E_{1}} = 1-Q_{f}\left(0.5 \cdot \text{cos}(\Delta_{k,m}) \cdot \sqrt{E_{s}} / \sigma\right) \\ E_{2} \rightarrow \lambda_{E_{2}} = Q_{f}\left(0.5 \cdot \sqrt{E_{s}} / \sigma\right) \\ E_{3} \rightarrow \lambda_{E_{3}} = Q_{f}\left(0.5 \cdot \text{cos}(\Delta_{k,m}) \cdot \sqrt{E_{s}} / \sigma\right) \\ \end{array} $$
(18)

The binary counter receives S clock impulses, so BER is the result of the chain’s stochastic model at step S. Let C be the counter value after S steps; the approximate error rate is therefore:

$$ \begin{aligned} BER \approx\; & 0.5 \cdot \left(\left(P_{R} \{ C < S/2 | E_{0}\} + P_{R} \{ C \ge S/2 | E_{2}\}\right) \cdot P_{L}\right. \\ & \left. + \left(P_{R} \{C < S/2 | E_{1}\} + P_{R} \{ C \ge S/2 | E_{3}\}\right) \cdot (1-P_{L})\right) \end{aligned} $$
(19)

Conditional probabilities in (19) require the counter’s stochastic model:

$$ \begin{array}{l} P_{R} \{ C < S/2 | E_{i}\} = \sum_{n=0}^{S/2-1} P_{R}\{ C = n | E_{i}\} \\ \\ P_{R} \{ C \ge S/2 | E_{i}\} = \sum_{n=S/2}^{S-1} P_{R}\{ C = n | E_{i}\} \\ \end{array} $$
(20)

This error rate assumes the two finite automata are aligned at the current discrete time step; for this reason, we expect the proposed figure to be a lower bound. The use of four symmetric phases (M = 4) pushes the highest value of probability (16) to 0.5. Additionally, M = 4 influences the in-lock probability in (17) and the profile of the four birth rates shown in (18). Globally, although there are no oscillations in the noiseless switched system, the statistical effect of M = 4 is the highest bit error rate.
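A sketch of the lower bound (19): under each scenario Ei the counter after S steps is treated here as a binomial variable with success probability λEi from (18), so the conditional probabilities of (20) reduce to binomial tails. This binomial simplification and the numeric values are assumptions for illustration, not the author's exact chain model.

#include <cmath>
#include <cstdio>

double Qf(double x) { return 0.5 * std::erfc(x / std::sqrt(2.0)); }
double binom(int n, int k) { return std::round(std::tgamma(n + 1.0) / (std::tgamma(k + 1.0) * std::tgamma(n - k + 1.0))); }
double tailBelow(int S, double lam) {                 // P{C < S/2} for a binomial counter
    double p = 0.0;
    for (int n = 0; n < S / 2; ++n) p += binom(S, n) * std::pow(lam, n) * std::pow(1.0 - lam, S - n);
    return p;
}

int main() {
    const int S = 8, M = 8;
    const double snr = std::sqrt(4.0);                // illustrative sqrt(Es)/sigma
    const double PI = std::acos(-1.0), d = 2.0 * PI / M;
    double lE0 = 1.0 - Qf(0.5 * snr);                              // birth rates, Eq. (18)
    double lE1 = 1.0 - Qf(0.5 * std::cos(d) * snr);
    double lE2 = Qf(0.5 * snr);
    double lE3 = Qf(0.5 * std::cos(d) * snr);
    double p = Qf(0.5 * snr), q = 1.0 - Qf(0.5 * std::cos(d) * snr);
    double PL = 1.0 / (1.0 + p / q);                               // Eq. (17)
    double ber = 0.5 * ((tailBelow(S, lE0) + (1.0 - tailBelow(S, lE2))) * PL +
                        (tailBelow(S, lE1) + (1.0 - tailBelow(S, lE3))) * (1.0 - PL));   // Eq. (19)
    std::printf("lower-bound BER = %e\n", ber);
    return 0;
}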

5 Phase and timing recovery

The proposed theory assumes perfect carrier and symbol timing recovery. In this section, we tackle the problem of phase and timing estimation so as to make the proposed solution feasible. There is a wide literature on these matters, and therefore we present only the additional efforts needed to realize carrier and timing recovery. Our approach considers the transmission of a finite-length sequence of pilot symbols (PS) to align the receiver’s internal clock. The same pilot sequence allows phase alignment between the transmitter oscillator and the MP-VCO. Symbol timing recovery is a combination of a proper selection of the pilot sequence and the use of hardware known since the 1970s [29, 30], based on a closed loop made of a digital phase detector, an analog loop filter, a VCO, and finally a frequency divider. We transmit a repeated and periodic sequence of +1 and −1 to realize a clock at frequency 1/2T. This reference pulse has the same phase as the transmitter’s internal clock. Let L(s) be the Laplace function of this loop filter; it is a one-pole filter where we set these important constraints: ωF=2πfF, fF·T ≫ 1 and KF strictly positive:

$$ L(s) = K_{F} \cdot \frac{\omega_{F}}{s+\omega_{F}} $$
(21)

Timing jitter jk at step k is a function of the loop filter direct current (DC) gain KF and the VCO sensitivity K0. The output of the digital phase detector is a rectangular pulse, within the timing interval [kT,(k+1)T], of duration jk−1. Therefore, let I(s) be the Laplace function of this impulse according to the equation below:

$$ I(s) = \frac{1}{s} \cdot \left(1-e^{-s \cdot j_{k-1}}\right) \cdot e^{-s \cdot \tau_{0}} $$
(22)

We assume for simplicity the delay τ0=0. Let u(t) be the step function; the loop filter output wave f(t) uses the function α defined as:

$$ \alpha(t) = u(t) \cdot \left(1.0 - e^{-\omega_{F} \cdot t} \right) \approx u(t) $$
(23)

Under these approximations, the function f(t) is therefore:

$$ f(t) = K_{F} \cdot u(t) - K_{F} \cdot u(t-j_{k-1}) $$
(24)

The digital VCO uses the function f(t) as follows:

$$ \begin{aligned} j_{k} &= j_{k-1} - K_{0} \cdot K_{F} \cdot \int_{0}^{T} \left[ u(\tau) - u(\tau-j_{k-1}) \right] d\tau = \\ &= j_{k-1} - j_{k-1} \cdot K_{0} \cdot K_{F} \end{aligned} $$
(25)

Therefore, the jitter after N pilot symbols is a function of the initial value j0:

$$ j_{N} = j_{0} \cdot \left(1 - K_{0} \cdot K_{F} \right)^{N} $$
(26)
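The recursion (25)–(26) can be checked numerically. The sketch below reproduces the geometric jitter decay for the gain product K0·KF = 0.052 and the initial delay used later in Section 6.3; it is only a numerical illustration of the formula.

#include <cstdio>

int main() {
    const double K0KF = 0.052;   // loop filter DC gain times VCO sensitivity (value from Section 6.3)
    const int    N    = 64;      // number of pilot symbols
    double j = 20e-9;            // initial timing delay j0 [s], as in Section 6.3
    for (int k = 1; k <= N; ++k)
        j = j - j * K0KF;        // Eq. (25): j_k = j_{k-1} * (1 - K0*KF)
    std::printf("jitter after %d pilot symbols: %e s (relative to T = 80 ns: %.2f%%)\n",
                N, j, 100.0 * j / 80e-9);
    return 0;
}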

The proposed loop tracks the clock phase until perfect timing recovery is achieved. Phase recovery uses the same pilot sequence; now the MP-VCO generates a new waveform with the pilot symbol, the related code word, and a further phase rotation of π/2, resulting in the sine function according to the equation below:

$$ v(t) = \sqrt{E_{s}} \cdot u_{k} \cdot \text{sin} \left(2 \cdot \pi \cdot f_{0} \cdot t + \theta_{k} + \theta_{v} + \theta_{A} \right) $$
(27)

The MP-VCOs indicated in the first section have difficulties generating the signal (27). The contribution [31] shows an MP-VCO realized with a daisy chain of M differential, frequency-tunable VCOs. This phase generator works at high frequency with a low phase noise. The M VCOs generate the digital phases θk, while the tune port receives a signal proportional to the derivative of the ML trigger signal y(t) to achieve the phase recovery. Generally, a frequency-tunable VCO allows the carrier recovery. A small varactor can be used in combination with MOS transistors that switch a fixed capacitor in and out of the VCO, as suggested in [32]. This approach allows large tuning ranges, reducing the translation of signal and noise disturbances into phase noise and sidebands. This VCO therefore mitigates the tuning sensitivity, reducing the disturbances to the signal voltage and the sensitivity to noise. Polyphase filters receive the tuned VCO output, generating the required M symmetric phases [33]. The signal after the mixer is now a composition of two waveforms: the sine at twice the carrier frequency, removed by the loop filter, and the sine of the differential phase, so the signal y(t) under the model (4) is therefore:

$$ y\left(t\right) = E_{s} \cdot (t/T) \cdot \text{sin}\left(\theta_{0}-\theta_{v} -\theta_{A}\right) + n\left(t\right) $$
(28)

The additional phase θA is the result of a closed loop with the signal (28) and a VCO with sensitivity Kv>0. The equation below shows the accumulated phase in the VCO:

$$ \theta_{A} = K_{v} \int_{0}^{t} y\left(\tau \right) d \tau $$
(29)

At the end of the pilot sequence, the phase θA achieves the carrier recovery: θA = θ0 − θv.
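A discrete-time behavioral sketch of the phase acquisition of Eqs. (28)–(29): the integral is accumulated once per pilot symbol and θA converges toward θ0 − θv. The loop gain, the noiseless model, and the initial phases are illustrative assumptions.

#include <cmath>
#include <cstdio>

int main() {
    const double Es = 1.0;
    const double theta0 = 1.2, thetav = 0.0;   // unknown carrier phase and MP-VCO phase (example)
    const double KvT = 0.3;                    // Kv times symbol period: illustrative loop gain
    double thetaA = 0.0;
    for (int k = 0; k < 64; ++k) {             // 64 pilot symbols, as in Section 6.3
        double y = Es * std::sin(theta0 - thetav - thetaA);   // Eq. (28), noiseless, end of symbol
        thetaA  += KvT * y;                                   // Eq. (29), rectangular integration
    }
    std::printf("thetaA = %.4f rad, target theta0 - thetav = %.4f rad\n", thetaA, theta0 - thetav);
    return 0;
}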

6 Results and discussion

We prove the validity of our assumptions by simulating the proposed decoder through a mathematical description using the SystemC/SystemC-AMS C++ library. Assuming no inter-symbol interference and perfect knowledge of carrier frequency and symbol timing, the carrier frequency is f0 = 400 MHz, Es = 1 J, and the symbol period is T = 80 ns; the LNA bandwidth is B = 2 MHz, and the loop filter cutoff frequency is fp = 20 MHz.

6.1 Error rate comparing the proposed recovery loop with known digital and analog approaches

Our preliminary goal is to demonstrate that the proposed closed-loop recovery system, jointly with the supported modulation, has a better error rate with respect to other radio frequency architectures. Figure 6 shows our main competitor to recover data without the LLR computation; it does not require any coding scheme. The Costas loop uses a binary phase shift keying (B-PSK) digital modulation; it also has an ML hard decision unit, an oversampling mechanism, and a unit for the majority-vote decision. In this way, the proposed circuit and the Costas loop apply the same modulation to the ML trigger. Additionally, our loop and the reference Costas recovery system have the same over-sample rate: S = 8. Although the figure does not show the radio frequency front-end, both approaches have a low-noise amplifier, the same carrier frequency, and the same symbol rate. We consider our mixed-signal system, the Costas loop, Viterbi hard (VH), and Viterbi soft (VS); the latter two use an 8-state internal trellis, a VLSI architecture with a traceback path of depth 10, and 8-PSK modulation. Figure 7 shows how our approach is competitive against the state of the art in digital convolutional decoding. Our recovery loop gains more than 1 dB in signal-to-noise ratio over soft Viterbi when BER is 10−4. Additionally, the SNR gain reaches 2.5 dB compared to the Costas loop at the same error rate. Furthermore, we also confirm that the use of a DSB-SC AM modulation (equivalent to the trellis-coded modulation) has many benefits, such as the same bandwidth as an uncoded phase shift keying (PSK), improved bandwidth efficiency, and lastly a high coding gain without compromising bandwidth efficiency or reducing the data rate.

Fig. 6 The Costas loop as our reference oversampling data recovery system. This loop and the proposed recovery system have the same modulation at the ML trigger’s input

Fig. 7 Simulated error rate of proposed recovery loop (loop), Costas’ loop, Viterbi hard (VH), and soft (VS). The proposed recovery loop gains 2.5 dB in SNR when BER is 10−4

6.2 Performance estimation using the lower bound

Our secondary goal is to demonstrate that a high-order phase mapping achieves the best error probability. For this purpose, we use the lower bound found in Section 4, when SNR is 2 dB, varying the parameter Δk from 0 to 360 degrees. The results are in Fig. 8, where the error rate using eight distinct phases is lower than with two or four symmetric phases; in general, a higher M significantly improves the error rate. Figure 9 shows the error rate of the proposed closed-loop system (loop S = 8) and the lower bound bit error rates when the sampling rate is S = 8 (loop bound S = 8) and S = 16 (loop bound S = 16), respectively. The analytical error bound wrongly suggests that we can improve the error probability without limit by increasing the over-sample rate (S). Instead, this limit arises in the receiver physical design: the decision unit, a pure digital system clocked at T/S, the sample and hold’s latency, the trigger’s bandwidth, and more. Figure 9 also shows the Costas loop analytical BER from (19) when PL = 1.0 in a context of B-PSK modulation. An accurate analysis of the lower bound BER in Eq. (19) is shown in Fig. 10 when SNR is 0 dB. We found that the presence of a loop with an in-lock probability (PL) less than 1.0 lowers the total error rate, although the Costas loop removes the second half of Eq. (19). Table 1 reports in detail the two contributions to the BER: P1, linked to the in-lock probability, and P2, linked to the coefficient (1.0−PL).

$$ \begin{aligned} P1 &= \left(P_{R} \{ C < S/2 | E_{0}\} + P_{R} \{ C \ge S/2 | E_{2}\}\right) \cdot P_{L} \\ P2 &= \left(P_{R} \{ C < S/2 | E_{1}\} + P_{R} \{ C \ge S/2 | E_{3}\}\right) \cdot (1-P_{L}) \end{aligned} $$
(30)
Fig. 8 Analytical BER varying the parameter Δk (degrees). Higher phase mappings allow better error rate

Fig. 9 Simulated error rate of the recovery loop (S = 8), theoretical lower bound BER for S = 8, 16, and Costas’ loop bound BER (S = 8). A high oversampling rate ideally improves the error rate, but the hardware implementation could be difficult to achieve

Fig. 10 Error rate of Costas’ loop and some recovery loops varying the parameter M when Eb/N0 = 0 dB. The presence of an analog loop, with in-lock probability PL below 1.0, improves the error rate compared to Costas’ approach

Table 1 Lower bound BER as the sum of two contributions varying M. SNR = 0 dB

At SNR = 0 dB, the term P2 dominates the lower bound BER when using our recovery loop. Instead, P1 is higher in the Costas loop, since the signal-to-noise ratio is aligned to a B-PSK modulation.

6.3 Effects of phase and clock jitter

This subsection shows the performance of the phase and clock recovery sub-systems. We send N = 64 pilot symbols, alternating the binary values 0 and 1. The timing recovery sub-system is a specialization of the hardware introduced in Section 5 and illustrated in Fig. 11. This hardware is a closed loop made of a phase detector, a voltage pump, a low-pass filter, a VCO, and finally a frequency divider by 2. In particular, the phase detector acquires binary data for both the pilot sequence and the RX-side clock, sampling these waveforms at clock 1/8T by using eight different clocks, CLK1, CLK2, ..., CLK8. Moreover, the digital phase detector measures the leading one of the two 8-bit registers at the CLK8 positive edge, P[7:0] (linked to the pilot sequence) and C[7:0] (linked to the RX oscillation), scanning from the most significant bit (MSB) to the least significant bit (LSB) and deriving two numbers CP and CC, respectively. When CC < CP, the positive edge of the RX clock anticipates the positive edge of the pilot sequence, so the pump receives an impulse to increase the voltage at the loop filter input. When CC > CP, the pump receives the command to decrease the output value. Finally, when CP = CC, the two waveforms are aligned in phase, the pump clears the output value, and the VCO runs with the final offset, acquiring the timing recovery. We simulate with a voltage step of 1.0 V for the analog pump, a loop filter with pole at 100 MHz, and a VCO with sensitivity K0 such that KF·K0 = 0.052. This sub-system achieves the timing recovery with a relative jitter of 1.05%, starting from an initial timing delay of 20 ns when the master clock has a delay of 80 ns.

Fig. 11 The timing recovery subsystem based on a digital closed loop. This architecture uses a phase detector that samples the pilot sequence and the RX clock. The leading one in the 8-bit registers measures the current jitter between these two waveforms
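A minimal sketch of the leading-one comparison performed by the digital phase detector of Fig. 11: the positions of the first '1', counted from the MSB, in the two 8-bit registers are compared to derive the pump command. The register contents and the position-counting convention are hypothetical illustrations, not the exact hardware encoding.

#include <cstdio>

int leadingOne(unsigned char reg) {            // position of the first '1' from the MSB; 8 means none
    for (int i = 7; i >= 0; --i)
        if (reg & (1u << i)) return 7 - i;
    return 8;
}

int main() {
    unsigned char P = 0x1F;                    // pilot-sequence register P[7:0] (hypothetical value)
    unsigned char C = 0x3F;                    // RX-clock register     C[7:0] (hypothetical value)
    int CP = leadingOne(P), CC = leadingOne(C);
    if      (CC < CP) std::printf("RX clock leads: pump increases the loop-filter input\n");
    else if (CC > CP) std::printf("RX clock lags:  pump decreases the loop-filter input\n");
    else              std::printf("aligned: pump output cleared, VCO keeps the acquired offset\n");
    return 0;
}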

Next, we compare our phase recovery strategy with the Cramer-Rao bound (CRB); it is a lower bound for the variance of any unbiased estimator under the optimal (in the log-likelihood sense) estimation theory. Under the hypothesis of a linear transformation s[k; θ0] over an AWGN noise nk, we can approximate the CRB with the well-known modified Cramer-Rao bound (MCRB) introduced in [34] and [35]. Our bound considers N = 64 independent pilot symbols and N samples of the signal model s[k; θ0], so the estimation of the initial phase \(\hat {\theta }_{0}\) introduces a variance with a lower limit of:

$$ \begin{aligned} Var\{\hat{\theta}_{0}\} &\geq \frac{\sigma^{2}}{\sum_{k=0}^{N-1}\left(\ \frac{\partial s[k;\theta_{0}]}{\partial \theta_{0}} \right)^{2}} \geq \\ &\geq \frac{1}{2 \cdot N \cdot E_{s}/N_{0}} \end{aligned} $$
(31)

The real transformation is not linear since s[k; θ0] = \(\text {cos}(\theta _{0}-\hat {\theta }_{0}) + n_{k}\); then, we can estimate the initial phase with a variance lower than the number found in (31). We measure the error variance in degrees², over the SNR range from 0 to 10 dB. We consider two different experiments where the phase recovery sub-system works with the product of the symbol period T, the loop filter DC gain, and Kv in (29) equal to 0.028 (KV1) and 0.284 (KV2), respectively. Figure 12 shows how the scenario KV2 has better performance than the bound (31). At high SNR, our carrier recovery algorithm introduces an asymptote. The authors in [36] underline how the use of PS only and the lack of data symbols (DS) in the carrier recovery algorithm is the reason for the poor performance we found at high SNR. Finally, Fig. 13 shows the efficacy of the proposed phase acquisition subsystem, under the model KV2, when the initial phase in (1) covers the full range from 0 to 360 degrees and the SNR is 2 dB. Instead, Fig. 14 shows the performance of our timing recovery algorithm, measuring the relative final jitter while varying the filter’s DC gain and VCO sensitivity product when N = 64.
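The bound (31) is straightforward to tabulate; the snippet below converts it to degrees² for N = 64 over the simulated SNR range, interpreting the SNR as Es/N0 in dB (the step of 2 dB is an arbitrary choice for the printout).

#include <cmath>
#include <cstdio>

int main() {
    const int    N  = 64;
    const double PI = std::acos(-1.0);
    for (int snrdB = 0; snrdB <= 10; snrdB += 2) {
        double EsN0 = std::pow(10.0, snrdB / 10.0);
        double varRad2 = 1.0 / (2.0 * N * EsN0);               // Eq. (31), lower limit in rad^2
        double varDeg2 = varRad2 * std::pow(180.0 / PI, 2.0);   // conversion to degrees^2
        std::printf("SNR=%2d dB  MCRB=%.4f deg^2\n", snrdB, varDeg2);
    }
    return 0;
}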

Fig. 12 Performance of our carrier recovery sub-system. Estimated phase variance for two different scenarios

Fig. 13 Convergence of our carrier recovery sub-system. The initial phase can vary over the full range [0, 360] degrees, SNR = 2 dB

Fig. 14 Convergence of our symbol timing recovery sub-system. We plot the final relative jitter, when N = 64, varying the filter DC gain and VCO sensitivity product, SNR = 2 dB

6.4 Effects of non-linearity and noise in the critical blocks

Effects of the non-ideal modeling of critical blocks worsen the total bit error rate. For instance, the mixer is not an ideal four-quadrant multiplier; it introduces non-linearity and bias effects according to the model below, valid for low carrier frequency [37]:

$$ v_{d}(t) = V_{OO} + K_{m} \cdot \left(r(t)+V_{IO}\right) \cdot \left(v(t)+V_{IO}\right) $$
(32)

Additionally, we model the multi-phase VCO, as a typical oscillator, with a phase noise given by the sum of thermal and shot noise, with flat spectral density at medium frequencies. We also consider the flicker noise, with power spectral density proportional to the inverse of the frequency f, for the low spectral components. The frequency value below which flicker noise dominates cannot be calculated; it must be measured. It depends on the construction, materials, and environment of the oscillator, and we set this value to 4 kHz. The results, simulated with VIO = 0.2 V, VOO = 2.0 V, and Km = 10.0 V−1, are shown in Fig. 15, where we measure an SNR loss of 0.3 dB when BER is 10−2.
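A one-line sketch of the mixer model (32) with the bias and gain values used in the simulations (VIO = 0.2 V, VOO = 2.0 V, Km = 10 V−1); the instantaneous input values r(t) and v(t) are illustrative assumptions.

#include <cstdio>

int main() {
    const double VOO = 2.0;      // output offset [V]
    const double VIO = 0.2;      // input offset  [V]
    const double Km  = 10.0;     // conversion gain [1/V]
    double r = 0.5, v = -0.7;    // illustrative instantaneous values of the two mixer inputs [V]
    double vd = VOO + Km * (r + VIO) * (v + VIO);   // non-ideal mixer output, Eq. (32)
    std::printf("mixer output vd = %.3f V\n", vd);
    return 0;
}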

Fig. 15 Performance of our recovery loop under non-linearity and phase noise. MP-VCO’s model includes the flicker noise and thermal and shot noise. The mixer model includes bias and non-linearity

6.5 Use of SSB-SC AM modulation

In this work, we use the DSB-SC AM modulation to transmit information from the source to the destination, the latter being a mixed-signal recovery loop using oversampling. The AM signal is \(m(t) = \sum _{k=0}^{N-1} m_{k}(t) = \sum _{k=0}^{N-1} u_{k} \cdot p(t-kT)\). The same hardware could be used to recover the source symbol from an SSB-SC AM modulation, with a better spectral efficiency than the double sideband signal. Therefore, the analytic representation ma(t) uses the message m(t) and its Hilbert transform \(\hat {m}\left (t\right)=\sum _{k=0}^{N-1} \hat {m}_{k}(t)\) (j represents the imaginary unit):

$$ m_{a}\left(t\right) = m\left(t\right) + j \cdot \hat{m}\left(t\right) $$
(33)

The SSB-SC AM waveform, in the current interval [kT,(k+1)T], is therefore:

$$ \begin{aligned} s_{SSB}(t) =& \sqrt{E_{S}} \cdot m_{k}(t) \cdot \text{cos} \left(2 \cdot \pi \cdot f_{0} \cdot t +\theta_{k}+\theta_{0} \right) + \\ & - \sqrt{E_{S}} \cdot \hat{m}_{k}(t) \cdot \text{sin} \left(2 \cdot \pi \cdot f_{0} \cdot t +\theta_{k}+\theta_{0} \right) \end{aligned} $$
(34)

The process of frequency conversion is ideally the multiplication of the signal (34), corrupted by AWGN noise, with the MP-VCO output (3). After the mixer, we have four components: the cosine and sine at twice the carrier frequency, removed by our loop filter, and finally the sine and cosine of the differential phase Δk. When the system is in the out-of-lock state, the real behavior results in a waveform y(t) such as:

$$ \begin{aligned} y(t) \approx& 0.5 \cdot \sqrt{E_{S}} \cdot m_{k}(t) \cdot \text{cos} \left(\Delta_{k}\right) + \\ & - 0.5 \cdot \sqrt{E_{S}} \cdot \hat{m}_{k}(t) \cdot \text{sin} \left(\Delta_{k}\right) + n(t) \end{aligned} $$
(35)

Analog infinite impulse response (IIR) filters approximate the Hilbert function. These IIR filters introduce inter-symbol interference (ISI), which has an impact on the BER performance and raises stability issues, so the proposed approach could fail. In fact, the structure of Eq. (35) suggests how our loop could lose asymptotic stability when the new value of the constant a is negative.

$$ a\left(t\right) = \cos\left(\frac{2 \cdot \pi}{M} \right) \pm p(t) \otimes h(t) \cdot \sin\left(\frac{2 \cdot \pi}{M} \right) $$
(36)

In this last equation, the variable a is time-variant and the function h(t) is the impulse response of the IIR filter used to approximate the Hilbert function. However, a high-order phase mapping (M ≫ 1) achieves the quasi-stability of the equivalent double sideband model.
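The quasi-stability argument can be made concrete by looking at the two coefficients in (36): as M grows, cos(2π/M) approaches 1 while sin(2π/M), which weights the ISI term introduced by the Hilbert approximation, shrinks, so the time-variant a(t) is increasingly unlikely to become negative. The short sketch below simply tabulates this margin; it does not model the (unspecified) IIR impulse response h(t).

#include <cmath>
#include <cstdio>

int main() {
    const double PI = std::acos(-1.0);
    for (int M = 2; M <= 64; M *= 2) {
        double c = std::cos(2.0 * PI / M);   // in-lock contribution in Eq. (36)
        double s = std::sin(2.0 * PI / M);   // weight of the Hilbert/ISI disturbance term
        std::printf("M=%2d  cos=%+.3f  sin=%.3f\n", M, c, s);
    }
    return 0;
}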

7 Conclusion and future directions

In this work, we propose a coded symbol data recovery loop, in a radio transmission environment over additive Gaussian noise, with minimal hardware. An analysis based on maximum-likelihood detection theory would require a complex system implementation with an analog-to-digital converter and fixed-precision digital logic. Our proposed system is midway between a pure analog decoder and a coded symbol recovery loop. It differs from the known decoders since it does not need the LLR computation. Finally, this approach differs from the known literature on data recovery loops using oversampling, since the current state of the art applies to digital modulations over serial links. We show in this paper how the proposed approach requires the proper selection of the supported modulation and coding scheme. In particular, the use of a rate-1/R encoder limits the loop state to two possible values: in-lock and out-of-lock. Applying the proposed approach to a simple modulation without symbol redundancy, the recovery loop ideally introduces M different loop states, so the stability condition, i.e., the correct decoding as a unique equilibrium point, is difficult to achieve. The analysis of theoretical and simulated BER justifies the superior performance of our proposed system with respect to the considered Costas recovery circuit. The SNR gain of our loop with respect to the Costas approach is 2.5 dB when the error rate is around 10−4. The proposed approaches, based on closed and open (Costas) loops, have an execution time that depends on the maximum symbol frequency, which is technology dependent. Instead, the ML algorithm computation in DSP and Viterbi decoders requires an execution time that depends on the technology and the very large-scale integration (VLSI) architecture (e.g., parallelism). Future directions concern the application to the most important communications problems: coding, wireless channels, channel estimation, modulations, and multiple access systems.

References

  1. J. G. Proakis, Digital Communications 5th Edition Electrical engineering series (McGraw-Hill, New York, 2007).


  2. H. Yoshikawa, in 2016 International Symposium on Information Theory and Its Applications (ISITA). On the bit error probability for constant log-MAP decoding of convolutional codes (IEEE, Monterey, 2016), pp. 502–506.


  3. P. Robertson, E. Villebrun, P. Hoeher, in Communications, 1995. ICC ’95 Seattle, ’Gateway to Globalization’, 1995 IEEE International Conference On, 2. A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain (IEEE, Seattle, 1995), pp. 1009–1013.


  4. C. Enz, A. Pezzotta, in 2016 MIXDES - 23rd International Conference Mixed Design of Integrated Circuits and Systems. Nanoscale MOSFET modeling for the design of low-power analog and RF circuits (IEEE, Lodz, 2016), pp. 21–26.


  5. J. Hagenauer, in Information Theory Workshop. Decoding of binary codes with analog networks (IEEE, Killarney, 1998), pp. 13–14.


  6. H. -A. Loeliger, F. Tarkoy, F. Lustenberger, M. Helfenstein, Decoding in analog VLSI. Commun. Mag. IEEE. 37(4), 99–101 (1999).


  7. J. Hagenauer, M. Winklhofer, in Information Theory, 1998. Proceedings. 1998 IEEE International Symposium On. The analog decoder, (1998), p. 145.

  8. A. Marcone, M. Pierobon, M. Magarini, in IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. A parity check analog decoder for molecular communication based on biological circuits, (2017), pp. 1–9.

  9. H. Zheng, Z. Zhao, X. Li, H. Han, Design of a (480, 240) CMOS analog low-density parity-check decoder. China Commun.14(8), 41–53 (2017).


  10. Z. Zhao, K. Yang, H. Zheng, F. Gao, X. Bu, Design, simulation, and implementation of a CMOS analog decoder for (480,240) low-density parity-check code. IEEE Access. 5:, 17381–17391 (2017).


  11. C. Winstead, C. Schlegel, in Information Theory, 2004. ISIT 2004. Proceedings. International Symposium On. Density evolution analysis of device mismatch in analog decoders (IEEE, Chicago, 2004), p. 293.


  12. G. Visalli, F. Pappalardo, G. Avellone, F. Rimi, A. Galluzzo, Method and system for coding/decoding signals and computer program product therefor (US Patents, Agrate Brianza, 2008). https://patents.google.com/patent/US7424068B2. Patent 7, US,424,068.


  13. T. Bartley, S. Tanaka, Y. Nonomura, T. Nakayama, M. Muroyama, in 2015 IEEE International Symposium on Circuits and Systems (ISCAS). Delay window blind oversampling clock and data recovery algorithm with wide tracking range (IEEE, Lisbon, 2015), pp. 1598–1601.


  14. B. Jiang, C. Hung, B. Chen, K. Cheng, in 2012 IEEE International Symposium on Circuits and Systems. A 6-Gb/s 3x-oversampling-like clock and data recovery in 0.13-μm CMOS technology (IEEE, Seoul, 2012), pp. 2597–2600.


  15. H. B. Thameur, B. L. Gal, N. Khouja, F. Tlili, C. Jego, in 2017 IEEE Symposium on Computers and Communications (ISCC). A survey on decoding schedules of LDPC convolutional codes and associated hardware architectures (IEEE, Heraklion, 2017), pp. 898–905.


  16. G. Ungerboeck, Trellis-coded modulation with redundant signal sets part I: Introduction. Commun. Mag. IEEE. 25(2), 5–11 (1987).


  17. C. An, H. Ryu, S. B. Ryu, S. G. Lee, in 2017 International Conference on Information and Communication Technology Convergence (ICTC). Turbo equalizer design and performance evaluation of 8PSK-TCM based satellite receiver system (IEEE, Jeju, 2017), pp. 904–907.


  18. J. Nargis, D. Vaithiyanathan, R. Seshasayanan, in 2013 International Conference on Information Communication and Embedded Systems (ICICES). Design of high speed low power Viterbi decoder for TCM system (IEEE, Chennai, 2013), pp. 185–190.


  19. A. Papoulis, U. Pillai, Probability, Random Variables and Stochastic Processes 4th edn. (McGraw-Hill Companies, New York, 2015).


  20. W. Ibrahim, V. Beiu, M. Tache, F. Kharbash, in 2013 IEEE 20th International Conference on Electronics, Circuits, and Systems (ICECS). On Schmitt trigger and other inverters (IEEE, Abu Dhabi, 2013), pp. 29–32.


  21. O. Azzabi, C. B. Njima, H. Messaoud, in 2017 International Conference on Control, Automation and Diagnosis (ICCAD). Modeling a system with hybrid automata and multi-models, (2017), pp. 087–091.

  22. D. Murphy, J. J. Rael, A. A. Abidi, Phase noise in LC oscillators: a phasor-based analysis of a general result and of loaded Q. IEEE Trans. Circ. Syst. I: Regular Pap.57(6), 1187–1203 (2010).


  23. J. H. Lee, B. K. Han, S. S. Ahn, S. H. Lee, H. D. Kim, in 2007 International Conference on Electromagnetics in Advanced Applications. A low phase noise octa-phase LC VCO for multi-band direct conversion receiver (IEEE, Turin, 2007), pp. 411–414.


  24. Y. -. Liao, C. -. R. Shi, in 2008 IEEE International Symposium on Circuits and Systems. A 6–11GHz multi-phase VCO design with active inductors (IEEE, Seattle, 2008), pp. 988–991.


  25. M. S. Branicky, in Decision and Control, 1997., Proceedings of the 36th IEEE Conference On, 1. Stability of hybrid systems: state of the art (IEEE, San Diego, 1997), pp. 120–125.


  26. M. S. Branicky, in Proceedings of 1994 33rd IEEE Conference on Decision and Control, 4. Stability of switched and hybrid systems (IEEE, Lake Buena Vista, 1994), pp. 3498–3503.


  27. C. Berrou, A. Glavieux, P. Thitimajshima, in Communications, 1993. ICC ’93 Geneva. Technical Program, Conference Record, IEEE International Conference On, 2. Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1 (IEEE, Geneva, 1993), pp. 1064–1070.


  28. H. Chen, R. G. Maunder, L. Hanzo, A survey and tutorial on low-complexity turbo coding techniques and a holistic hybrid ARQ design example. IEEE Commun. Surv. Tutor.15(4), 1546–1566 (2013).


  29. R. B. Sepe, Frequency multiplier and frequency waveform generator (US Patents, Lexington, 1970). https://www.google.it/patents/US3551826. US Patent 3,551,826.


  30. M. Rau, T. Oberst, R. Lares, A. Rothermel, R. Schweer, N. Menoux, Clock/data recovery PLL using half-frequency clock. IEEE J. Solid-State Circ.32(7), 1156–1159 (1997).


  31. L. Zhang, A. A. Sawchuk, in 2002 IEEE International Symposium on Circuits and Systems. Proceedings (Cat. No.02CH37353), 2. Monolithic multi-phase LC-VCO in ultra-thin silicon-on-insulator (UTSI®-SOI) CMOS technology (IEEE, Phoenix-Scottsdale, 2002).


  32. H. Sjoland, Improved switched tuning of differential CMOS VCOs. IEEE Trans. Circ. Syst. II: Analog. Digit. Signal Process.49(5), 352–355 (2002).


  33. D. I. Sanderson, R. M. Svitek, S. Raman, A 5-6-GHz polyphase filter with tunable I/Q phase balance. IEEE Microw. Wirel. Components Lett.14(7), 364–366 (2004).


  34. A. N. D’Andrea, U. Mengali, R. Reggiannini, The modified Cramer-Rao bound and its application to synchronization problems. IEEE Trans. Commun.42(234), 1391–1399 (1994).


  35. F. Gini, R. Reggiannini, U. Mengali, The modified Cramer-Rao bound in vector parameter estimation. IEEE Trans. Commun.46(1), 52–60 (1998).


  36. N. Noels, H. Steendam, M. Moeneclaey, H. Bruneel, Carrier phase and frequency estimation for pilot-symbol assisted transmission: bounds and algorithms. IEEE Trans. Signal Process.53(12), 4578–4587 (2005).


  37. D. H. Wolaver, Phase-Locked Loop Circuit Design (Prentice Hall, Upper Saddle River, New Jersey, 1991). https://books.google.it/books?id=C2V5QgAACAAJ.



Acknowledgments

Not applicable

Funding

Not applicable

Author information


Contributions

The author read and approved the final manuscript.

Corresponding author

Correspondence to Giuseppe Visalli.

Ethics declarations

Consent for publication

Not applicable

Competing interests

The author declares that he has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

Availability of data and materials

Please contact the author for data requests or visit his homepage at http://www.visalli.it.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Visalli, G. Analysis and performance of coded symbol recovery loop using oversampling. EURASIP J. Adv. Signal Process. 2019, 26 (2019). https://doi.org/10.1186/s13634-019-0623-7



