
A novel approach to extracting useful information from noisy TFDs using 2D local entropy measures

Abstract

The paper proposes a novel approach for the extraction of useful information and blind source separation of signal components from noisy data in the time-frequency domain. The method is based on the local Rényi entropy calculated inside adaptive, data-driven 2D regions, the sizes of which are calculated using the improved, relative intersection of confidence intervals (RICI) algorithm. One of the advantages of the proposed technique is that it does not require any prior knowledge of the signal, its components, or the noise; instead, the processing is performed directly on the noisy signal mixtures. It is also shown that the method is robust to the selection of time-frequency distributions (TFDs). It has been tested for different signal-to-noise ratios (SNRs), both for synthetic and real-life data. When compared to fixed TFD thresholding, adaptive TFD thresholding based on the RICI rule, and the 1D entropy-based approach, the proposed adaptive method significantly increases classification accuracy (by up to 11.53%) and F1 score (by up to 7.91%). Hence, this adaptive, data-driven, entropy-based technique is an efficient tool for extracting useful information from noisy data in the time-frequency domain.

1 Introduction

Various real-life phenomena produce signals that carry information about the systems of their origin. Most of these signals are non-stationary, meaning that their spectral content varies with time (e.g., biomedical signals and signals from radars, sonars, seismic activity, or audio). In addition, many real-life signals are multicomponent and may be decomposed into multiple amplitude- and/or frequency-modulated components.

When dealing with signal interpretation, signals are commonly represented in one of two domains, namely time domain or frequency domain. In classical representations, the variables representing time and frequency are mutually exclusive. The time-frequency distribution (TFD) of the signal, when the signal has time-varying frequency content and dynamical spectral behavior, allows us to represent the signal jointly in time and frequency domain and to detect frequency components at each time instant [1]. TFDs are used in various fields, such as nautical studies [2], medicine [3, 4], electrical engineering [5, 6], and image processing [7, 8].

One of the simplest TFDs is the short-time Fourier transform (STFT), proposed by Gabor in 1946, which introduces a moving window and applies the Fourier transform (FT) to the signal inside the window [9]. However, the performance of the STFT is highly dependent on the window size and, according to the Heisenberg uncertainty principle, there is a compromise between time and frequency resolution (increasing the window size increases frequency resolution but reduces time resolution, and vice versa). This has motivated the development of numerous other high-resolution TFDs, many of which are quadratic. The main shortcoming of the quadratic class of TFDs is the inevitable appearance of cross-terms, or interferences, caused by their quadratic nature, which has led to the development of a wide range of reduced-interference quadratic TFDs.

In nonstationary signal analysis in the time-frequency domain, one of the fundamental problems is measuring the signal information content, both globally and locally (e.g., complexity and the number of signal components). Knowing the information content allows efficient pre-processing and dynamic memory allocation prior to signal features extraction (e.g., instantaneous frequency and amplitude estimation) in blind source separation, machine learning, automatic classification systems, etc.

A challenging problem in signal analysis is blind source separation, i.e., separating signal components from a noisy mixture without any a priori knowledge about the signal. Algorithms considered standard for this problem include the greedy approach [10, 11], the relaxation approach [12, 13], the smoothed approach [14], and component analysis methods [15–17]. A time-frequency approach has been proposed in [18]. A variety of other methods exist, and several new approaches have been studied in recent years [19–22]. Methods exploring the use of entropy measures for separating the source signal have also been investigated in many studies.

Flandrin et al. [23], in their paper from 1994, gave a detailed discussion of the Rényi information measure for deterministic and random signals, indicating the general utility of the Rényi entropy as an indicator of signal complexity in the time-frequency plane. Extensive research has shown that the most suitable entropy measure for the TFD of a signal is the Rényi entropy [24].

In [25, 26], and later in [27, 28], the authors present and analyze a Rényi entropy-based method for blind source separation, along with an extensive comparison of the proposed method with several other methods, and state that the Rényi entropy-based method should be preferred. The methods in these papers do not operate on the signal's TFD.

A modification of sparse component analysis based on the time-frequency domain was given in [29]. The blind source separation problem in the time-frequency domain has also been investigated in [30] and [31], where the mixed signals were transformed from the time domain to the time-frequency domain. Both the effectiveness and superiority of the proposed algorithm were verified, but under the assumptions that several sensors are available and that single-source points exist. Both methods depend on the number of sensors. Other sensor-count-dependent methods for blind source separation based on the mixing matrix are presented in [32, 33].

A method of combining wavelet transform with time-frequency blind source separation based on the smooth pseudo-Wigner-Ville distribution is investigated in [34] to extract electroencephalogram characteristic waves, and the result is used to construct the support vector machine. In the paper written by Saulig et al. [35], the authors propose an automatic adaptive method for identification and separation of the useful information contained in TFDs. The main idea behind the method is based on the K-means clustering algorithm that performs a 1D partitioning of the data set. Instead of hard thresholding, authors use blind separation of useful information from background noise with the local Rényi entropy. The advantage of this approach is that there is no need for any prior knowledge of the signal. The results show that this method acts as a near-to-optimal automatic hard-threshold selector.

Combining a data-driven method for adaptive Rényi entropy calculation with the relative intersection of confidence intervals (RICI) method could allow the user to extract useful content without any information about the signal source, with the method automatically adapting to the data obtained from the signal's TFD. In this paper, we present a method for blind source separation based on the local 2D windowed Rényi entropy of the signal's TFD. The method is self-adaptive in terms of choosing the appropriate window for the entropy calculation. It has been tested on the spectrogram and on the reduced interference distribution (RID) based on the Bessel function. Results are obtained for multicomponent signals and compared to both fixed TFD thresholding and RICI-based selection of fixed TFD thresholds without entropy calculations. In addition, a comparison to the recently introduced entropy-based method [35] is performed. The method is adaptive, requires no prior knowledge of the signal, and can be applied to various multicomponent frequency-modulated signals in both noisy and noise-free environments. This blind source separation method could potentially be applied to different real-life problems, such as biomedical signals (EEG, ECG, etc.) and seismology (earthquake seismograms). The method's performance remains stable across different TFDs.

The rest of the paper is structured as follows. Section 2.1 provides a brief overview of time-frequency signal representations, starting from the spectrogram and focusing on the RID with the Bessel function. Entropy measures, in particular the Rényi entropy, are defined in Section 2.2. Next, the proposed method is described in Section 2.3, followed by the RICI-based adaptive thresholding procedure given in Section 2.4. Section 3 elaborates in detail the numerical results achieved by the proposed technique. Finally, conclusions are given in Section 4. The nomenclature used in the paper is given in Table 1.

Table 1 Nomenclature

2 Methodology

2.1 Time-frequency distributions

The majority of real-life signals are non-stationary, meaning that their frequency content changes with time. Classic time or frequency representations do not display the dependencies between the two.

TFDs represent the signal's frequency content with respect to time, allowing the analyst to see the start and end time of each signal component in the time-frequency domain. Unlike classical representations, a TFD can show whether the signal is monocomponent or multicomponent, which can be hard to determine with spectral analysis alone.

Two different distributions were used for the algorithm validation, namely the spectrogram and the reduced interference distribution (RID) based on the Bessel function.

2.1.1 The spectrogram

Computation of the spectrogram from the signal’s time domain essentially corresponds to the squared magnitude of the STFT of the signal [1, 36, 37].

$$ S_{x}(t,f)=\left|STFT_{x}(t,f)\right|^{2} = \left|\int_{-\infty}^{\infty}x(\tau)\,\omega(t-\tau)\,e^{-j2\pi f\tau}\, d\tau\right|^{2} $$
(1)

Here, x is the analyzed signal and ω is the smoothing window. The spectrogram introduces nonlinearity into the time-frequency representation: the spectrogram of the sum of two signals is not simply the sum of their individual spectrograms, but contains a third term when the two components share time-frequency support. The representation also depends on the window function ω(t). A shorter window produces better time resolution, while a wider window gives better frequency resolution. In other words, the observation window ω(t) localizes the spectrum in time but smears it in frequency.

2.1.2 The reduced interference distribution (RID) based on Bessel function

The RID is a quadratic TFD in which the cross-terms are suppressed with respect to the auto-terms. In this paper, the Bessel function of the first kind has been used [38]. The distribution is defined as

$$ RIDB_{x}(t,f)=\int_{-\infty}^{+\infty} h(\tau)R_{x}(t,\tau)e^{-j2\pi f \tau}d\tau, $$
(2)

where h is the frequency smoothing window and Rx represents the kernel

$$ R_{x}(t,\tau)=\int_{t-|\tau|}^{t+|\tau|}\frac{2g(\upsilon)}{\pi|\tau|}\sqrt{1-\left(\frac{\upsilon-t}{\tau}\right)^{2}}\, x\left(\upsilon+\frac{\tau}{2}\right)x^{*}\left(\upsilon-\frac{\tau}{2}\right) d\upsilon, $$
(3)

g is the time smoothing window and x* denotes the complex conjugate of x. The paper provides a comparison of the results for the simple spectrogram and the high-resolution RID. Note, however, that other quadratic, high-resolution TFDs can also be used with similar performance.

2.2 The Rényi entropy

Entropy measures are most commonly used in the analysis of medical signals such as EEG, heart-rate variability, blood pressure, and similar.

The entropy estimation is a calculation of the time density of the average information in a stochastic process.

Shannon [39] introduced the concept of information of a discrete source without memory as a function that quantifies the uncertainty of a random variable at each discrete time. The average of that information is known as the Shannon entropy, which is restricted to random variables taking discrete values. A discrete random variable s, which can take a finite number M of possible values si ∈ {s1,…,sM} with corresponding probabilities pi ∈ {p1,…,pM}, has the Shannon entropy defined as

$$ H(s)=-\sum_{i=1}^{M}p_{i}\log_{2}(p_{i}). $$
(4)

From the Shannon entropy, many other entropy measures have emerged. One of the extensions of the Shannon entropy has been presented by Rényi [40].

The Rényi entropy of order α, where α≥0 and α≠1 [23], is defined as

$$ H(s)=\frac{1}{1-\alpha}\log_{2}\sum_{i=1}^{M}p^{\alpha}_{i}. $$
(5)

Depending on the chosen α, different entropy measures are obtained. For α=0, the resulting entropy is known as the Hartley entropy. As \(\alpha \xrightarrow {} 1\), H(s) tends to the Shannon entropy, while α=2 gives the collision entropy, which is used in quantum information theory and bounds the collision probability of the distribution.

When \(\alpha \xrightarrow {}+\infty \), the obtained entropy is known as the min-entropy.

When the entropy of a TFD is calculated, odd integer values are suggested for the parameter α, as the contributions of the cross-terms' oscillatory structures cancel under integration with odd powers [24, 40].

The definition of Rényi entropy can be extended to continuous random variables by

$$ H(s)=\frac{1}{1-\alpha}\log_{2}\int_{-\infty}^{+\infty}p^{\alpha}(x)\, dx. $$
(6)

When it is applied to a normalized TFD, the Rényi entropy is defined as

$$ H_{\alpha,(t,f)}=\frac{1}{1-\alpha}\log_{2}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}{C^{\alpha}}(t, f)\, dt\, df, $$
(7)

where C(t,f) is the TFD of the signal.
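A minimal discrete sketch of Eqs. (5) and (7): the TFD is normalized to unit energy and then treated as a 2D probability distribution. The 2x2 toy TFD and the choice α = 3 below are illustrative assumptions, not data from the paper.

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy (base 2) of a discrete distribution, cf. Eq. (5); alpha != 1."""
    return math.log2(sum(pi ** alpha for pi in p if pi > 0)) / (1 - alpha)

def tfd_renyi_entropy(C, alpha=3):
    """Discrete counterpart of Eq. (7): normalize the TFD, then measure it."""
    total = sum(v for row in C for v in row)
    return renyi_entropy([v / total for row in C for v in row], alpha)

# A uniform 2x2 TFD carries log2(4) = 2 bits for any order alpha.
H_uniform = tfd_renyi_entropy([[1.0, 1.0], [1.0, 1.0]], alpha=3)  # -> 2.0
```

The more the TFD energy concentrates on a few time-frequency cells, the lower this value drops, which is what makes it usable as a complexity indicator.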

2.3 The proposed method

The proposed method, aimed at extracting useful information from noisy signals, relies on the hypothesis that a two-dimensional entropy map can provide a more suitable substrate for a sensitive extraction procedure than classical extraction procedures applied to TFDs. After obtaining the TFD of the signal, for each point in the distribution the local entropy is calculated over square windows whose sizes range from 2×2 up to one tenth of the signal length per side, as

$$\begin{array}{@{}rcl@{}} H_{\rho(t,f)}^{\Delta}=\frac{1}{1-\alpha}\log_{2}\int_{t-\Delta/2}^{t+\Delta/2}\int_{f-\Delta/2}^{f+\Delta/2}{C^{\alpha}}(t, f)\, dt\, df. \end{array} $$
(8)

The different window sizes are defined as

$$ \Delta=\{\Delta_{1},\Delta_{2},\ldots,\Delta_{n}\}, $$
(9)

where

$$\Delta_{1}=2 \times 2$$

and

$$\Delta_{n}=\frac{\text{signal length}}{10}\times \frac{\text{signal length}}{10}.$$

The entropy values \(H_{\rho (t,f)}^{\Delta }(t,f)\) for each window size are given as input to the RICI algorithm, which determines the window size for the given point based on the entropy changes. The window chosen by the RICI algorithm corresponds, in this case, to the first inflection point when the entropy values are modeled as a curve, suggesting that a change in entropy behavior has occurred. Here, the change in entropy behavior indicates the point at which noise starts to influence the entropy measure.
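A discrete sketch of Eq. (8): the local Rényi entropy at a fixed (t,f) point, evaluated over growing square windows, yields the profile that would be fed to the RICI selector. The toy diagonal-ridge TFD, the window sizes, and α = 3 are illustrative assumptions.

```python
import math

def local_renyi(C, t0, f0, half, alpha=3):
    """Rényi entropy of the TFD restricted to a window of side 2*half
    centred at (t0, f0), renormalised inside the window (cf. Eq. (8))."""
    vals = [C[i][j]
            for i in range(max(0, t0 - half), min(len(C), t0 + half))
            for j in range(max(0, f0 - half), min(len(C[0]), f0 + half))]
    total = sum(vals)
    p = [v / total for v in vals if v > 0]
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

# Toy TFD: a strong diagonal ridge over a weak noise floor.  Growing the
# window admits more near-uniform noise, so the local entropy grows; the
# RICI rule looks for the change in the behavior of this growth.
C = [[1.0 if i == j else 0.01 for j in range(20)] for i in range(20)]
profile = [local_renyi(C, 10, 10, half) for half in (1, 2, 3, 4, 5)]
```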

For every t and f the Rényi entropy \(H_{\rho (t,f)}^{\text {RICI}}(t,f)\) is calculated so that

$$ H_{\rho(t,f)}^{\text{RICI}}(t,f)=\text{RICI}\Big\{ H_{\rho(t,f)}^{\Delta}(t,f)\Big\}, $$
(10)

where

$$ H_{\rho(t,f)}^{\Delta}=\Big\{ H_{\rho(t,f)}^{\Delta_{1}},H_{\rho(t,f)}^{\Delta_{2}},\ldots,H_{\rho(t,f)}^{\Delta_{n}}\Big\}. $$
(11)

HΔ represents the entropy calculation at the desired point for a specified window size.

The algorithm results are produced by observing the intersection of confidence intervals of the signal entropy for the given window size in comparison with the confidence intervals of the other proposed window sizes. The aim of applying the RICI rule to \(H_{\rho (t,f)}^{\Delta }(t,f)\) is to track the interval in which the change in the growth of the entropy occurs.

After the calculation is performed for every pair of t and f, the optimal entropy map is obtained

$$ M(t,f)= H_{\rho(t,f)}^{\text{RICI}}(t,f), \quad t=1,\ldots,N, \; f=1,\ldots,K, $$
(12)

where N represents the number of time samples and K the number of frequency bins. The RICI algorithm selects the desired window size for the entropy calculation by tracking the existence of an intersection of the confidence intervals and estimating its amount.

In the RICI algorithm, the amount of overlap between confidence intervals is calculated in order to reduce the estimation bias. The method computes a confidence interval for each window size at every point M(n). To produce a function M(n) with a noticeable difference between the signal and noise entropies, the overlap of the confidence intervals is calculated, and Δ+(n) denotes the ideal window, i.e., the largest index yielding the lowest estimation error [41]. The estimation error is calculated as the pointwise mean squared error (MSE) as

$$ \text{MSE}(n,\Delta)=(\sigma(n,\Delta))^{2}+(\omega(n,\Delta))^{2}, $$
(13)

where σ(n,Δ) represents the estimation variance and ω(n,Δ) is the estimation bias.

In [42–44], the asymptotic estimation error is shown to satisfy the following property, where β is a constant that is not signal-dependent:

$$ \frac{|\omega(n,\Delta^{+})|}{\sigma(n,\Delta^{+})}=\beta. $$
(14)

For Δ>Δ+ it holds that β>1, while β<1 for Δ<Δ+. The ideal window size Δ+ is the one providing the optimal bias-to-variance trade-off, resulting in the best estimate M(n,Δ+).

Every confidence interval is defined by its lower and upper limits

$$\begin{array}{@{}rcl@{}} D(n,\Delta)=[L(n,\Delta),U(n,\Delta)]. \end{array} $$
(15)

The lower confidence interval L(n,Δ) limit is defined as

$$\begin{array}{@{}rcl@{}} L(n,\Delta)=M(n,\Delta)-\Gamma \times \sigma(n,\Delta), \end{array} $$
(16)

and upper confidence interval limit U(n,Δ) is defined as

$$\begin{array}{@{}rcl@{}} U(n,\Delta)=M(n,\Delta)+\Gamma \times \sigma(n,\Delta), \end{array} $$
(17)

where Γ is the threshold parameter of the confidence intervals.

The RICI rule, when compared to the original intersection of confidence interval (ICI) rule, introduces additional tracking of the amount of overlapping of confidence intervals, defined as

$$\begin{array}{@{}rcl@{}} O(n,\Delta) &=& \underline{U}(n,\Delta) - \overline{L}(n,\Delta), \end{array} $$
(18)

for Δ=1,2,…,L. To obtain a value belonging to the finite interval [0,1], O(n,Δ) is divided by the size of the confidence interval D(n,Δ), resulting in R(n,Δ) defined as

$$\begin{array}{@{}rcl@{}} R(n,\Delta) &=& \frac{\underline{U}(n,\Delta) - \overline{L}(n,\Delta)}{U(n,\Delta)-L(n,\Delta)}. \end{array} $$
(19)

For the optimal window width selection by the RICI rule, the previously described procedure can be expressed as

$$\begin{array}{@{}rcl@{}} R(n,\Delta) &\geq& R_{c}, \end{array} $$
(20)

where Rc is a chosen threshold [41, 45, 46]. The window width Δ+ obtained by the RICI rule is defined as

$$\begin{array}{@{}rcl@{}} \Delta^{+}=\max \left\{ \Delta : R(n,\Delta) \geq R_{c} \right\}. \end{array} $$
(21)
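The selection rule of Eqs. (15)-(21) can be sketched as follows. The inputs are the per-window estimates and their standard deviations; Γ = 2.0 and Rc = 0.85 are illustrative parameter choices, not values prescribed by the paper.

```python
def rici_select(estimates, sigmas, gamma=2.0, r_c=0.85):
    """Return the index of the largest window whose confidence interval
    still overlaps the running intersection of the previous intervals by
    at least r_c (sketch of Eqs. (15)-(21))."""
    lower = [m - gamma * s for m, s in zip(estimates, sigmas)]  # Eq. (16)
    upper = [m + gamma * s for m, s in zip(estimates, sigmas)]  # Eq. (17)
    best = 0
    lo, hi = lower[0], upper[0]  # running overline{L} and underline{U}
    for k in range(1, len(estimates)):
        lo, hi = max(lo, lower[k]), min(hi, upper[k])
        width = upper[k] - lower[k]
        r = (hi - lo) / width if width > 0 else 0.0  # Eq. (19)
        if r >= r_c:                                 # Eqs. (20)-(21)
            best = k
    return best

# Estimates stay consistent over the first four windows, then jump as the
# larger windows start mixing in noise: the rule keeps window index 3.
chosen = rici_select([1.0, 1.01, 0.99, 1.0, 3.0, 5.0],
                     [0.5, 0.4, 0.3, 0.25, 0.25, 0.25])  # -> 3
```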

This results in an image of the signal entropy. The flowchart of the algorithm is reported in Fig. 1.

Fig. 1
figure 1

Flowchart of the proposed algorithm

Next, the mask for the original signal is extracted from the previously obtained time-frequency entropy map, again by using the RICI thresholding method.

2.4 The RICI thresholding method

To extract a mask from the optimal entropy map, the RICI method is used once again. Namely, the threshold is defined as

$$ \tau =\{0.01\times\max(M),\,0.02\times \max(M),\ldots,0.99\times \max(M)\}. $$
(22)

For every τ, E(Mρ(t,f,τ)), the energy of the entropy map after applying the threshold τ, is calculated. The energy values for all thresholds are given as input to the RICI algorithm

$$ \tau^{+}=\text{RICI}\left\{ E(M_{\rho}(t,f,\tau))\right\}. $$
(23)

With that, the entropy mask is extracted

$$ \chi=M_{\rho}(t,f,\tau^{+}). $$
(24)
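A minimal sketch of the sweep in Eqs. (22)-(24): the surviving energy is recorded for each candidate threshold, and the resulting curve is what would be handed to the RICI selector. The 2x2 toy map and the fixed demonstration threshold are illustrative assumptions.

```python
def energy_vs_threshold(M, taus):
    """E(M_rho(t,f,tau)) of Eq. (23): total energy surviving each threshold."""
    return [sum(v for row in M for v in row if v >= tau) for tau in taus]

def apply_mask(M, tau):
    """Binary mask chi of Eq. (24) for a chosen threshold."""
    return [[1 if v >= tau else 0 for v in row] for row in M]

M = [[0.9, 0.1], [0.5, 0.05]]                      # toy entropy map
m_max = max(v for row in M for v in row)
taus = [0.01 * k * m_max for k in range(1, 100)]   # Eq. (22)
E = energy_vs_threshold(M, taus)                   # non-increasing curve
mask = apply_mask(M, 0.2)                          # -> [[1, 0], [1, 0]]
```

The energy curve decreases monotonically with τ; the RICI selector looks for the change in the rate of that decrease rather than for a fixed percentage.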

The next section evaluates the performance of the proposed approach.

3 Results and discussion

3.1 Experimental setup

The method has been tested on four different types of signals, two of which were synthetic. The resulting error map shows the difference between the non-zero elements when the mask of the noise-free signal is subtracted from the mask obtained by the tested method. A correct extraction yields all zeros in the resulting error map, where 1 marks a false negative and −1 a false positive. Two measures were used to evaluate the performance of the proposed method. The first is accuracy, calculated as the agreement between a given result and the correct result. In this case, the points where the signal and noise were correctly classified are the 0 elements in the subtraction mask; in the metric calculations, they constitute the true positives (TP) and true negatives (TN). True positives (TP) are correctly classified signal points, and true negatives (TN) are correctly classified points where the signal is not present. False negatives (FN) are points where the signal is present but the mask obtained by the method discarded them as noise; their value in the subtraction matrix is 1. False positives (FP) are points where the method misclassified noise as signal; their value in the subtraction matrix is −1. A description of these point types is given in Table 2. Accuracy is then calculated as follows

$$ \text{accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}} $$
(25)
Table 2 Explanation of points in the map used for validation

As can be seen from the expression above, accuracy is not suitable for unbalanced data sets, and in mask extraction the useful signal occupies only a portion of the whole set. The F1 score is more suitable when the class distribution is uneven; in this specific case, it is more appropriate because the useful signal occupies only a small portion of the signal's TFD. The F1 score considers both the precision and the recall of the result, being the harmonic mean of the two

$$ F1= 2\times \frac{\text{precision}\times \text{recall}}{\text{precision}+\text{recall}}, $$
(26)

where

$$ \text{precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}, $$
(27)

and

$$ \text{recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}. $$
(28)
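The evaluation in Eqs. (25)-(28) can be sketched directly from two binary masks; the tiny masks below are illustrative only.

```python
def mask_metrics(truth, pred):
    """Accuracy and F1 score (Eqs. (25)-(28)) from ground-truth and
    extracted binary masks of equal size."""
    tp = fp = fn = tn = 0
    for t_row, p_row in zip(truth, pred):
        for t, p in zip(t_row, p_row):
            if t and p:
                tp += 1       # signal point kept by the mask
            elif p:
                fp += 1       # noise misclassified as signal (-1 in error map)
            elif t:
                fn += 1       # signal discarded as noise (1 in error map)
            else:
                tn += 1       # noise correctly rejected
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, f1

truth = [[1, 1], [0, 0]]      # mask of the noise-free signal
pred = [[1, 0], [0, 0]]       # extracted mask with one false negative
accuracy, f1 = mask_metrics(truth, pred)  # -> (0.75, 0.666...)
```

Note how the single missed signal point costs far more F1 (1.0 to 0.667) than accuracy (1.0 to 0.75), which is why the F1 score is preferred for the sparse masks considered here.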

Accuracy and F1 score are commonly used metrics in machine learning for evaluating classification models. In this case, they have been used to determine how well the obtained mask fits the given noise-free signal. FP and FN are the classification counterparts of type 1 and type 2 statistical errors. These metrics are used in several papers dealing with image [47, 48] and signal processing [49], such as EEG signals [50, 51]. In addition to the numerical results, images of the obtained signal masks are shown in Figs. 3, 4, 5, 6, 7, and 8, where the obtained masks are emphasized in yellow.

Fig. 2
figure 2

TFD of the first noise-free signal (a), TFD of the second noise-free signal (b), RIDB distribution

Fig. 3
figure 3

Results for the first tested signal with SNR=-3dB, TFD of the noisy signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), and mask from applying fixed threshold of 15% on signal spectrogram (f)

Fig. 4
figure 4

Results for the first tested signal with SNR=3 dB, TFD of the noisy signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), and mask from applying fixed threshold of 5% on signal spectrogram (f)

Fig. 5
figure 5

Results for the second tested signal with SNR=-3 dB, TFD of the noisy signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), mask from applying fixed threshold of 15% on signal spectrogram (f)

Fig. 6
figure 6

Results for the second tested signal with SNR=3 dB, TFD of the noisy signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), mask from applying fixed threshold of 10% on signal spectrogram (f)

Fig. 7
figure 7

Results for the first real signal of the dolphin sound, TFD of the original signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), and mask from applying fixed threshold of 10% on signal spectrogram (f)

Fig. 8
figure 8

Results for the second real seismology signal, TFD of the original signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), and mask from applying fixed threshold of 10% on signal spectrogram (f)

3.2 Simulation results

The first tested signal was a combination of three atoms, as shown in Fig. 2. Noise was added at different signal-to-noise ratios (SNRs), and the useful information content extracted from the signals for SNRs of −3 dB and 3 dB is shown in Figs. 3 and 4.

Results are shown in Tables 3 and 4 for the spectrogram distribution and in Tables 5 and 6 for the RIDB distribution. The methods are compared by means of accuracy and F1 score for SNRs from −3 dB to 10 dB.

Table 3 Comparison of the accuracy measure applied on the spectrogram
Table 4 Comparison of the F1 measure applied on the spectrogram
Table 5 Comparison of the accuracy measure applied on the RIDB
Table 6 Comparison of the F1 measure applied on the RIDB

The proposed method was compared to the state-of-the-art algorithm based on local entropy in one dimension described in [35] as well as to the RICI thresholding of the TFD and the fixed thresholding of the signal TFD. The RICI TFD thresholding is performed similarly to the described procedure in Section 2.4 with the only difference that the input to the RICI operator in Eq. 23 is not the energy calculation for different τ of the optimal entropy map, but the energy calculation for different τ of the signal TFD

$$ \tau^{+}=\text{RICI}\left\{ E(\rho(t,f,\tau))\right\}. $$
(29)

The extracted mask is then

$$ \chi=\rho(t,f,\tau^{+}). $$
(30)

A comparison of the results obtained for the signal spectrogram shows that the proposed method outperforms the fixed TFD thresholding, the local entropy-based approach, and the RICI TFD threshold method in most cases.

Figure 3 shows the results obtained for the first synthetic signal with SNR=-3 dB. Figure 3a shows the spectrogram of the noisy signal. Figure 3b and c represent the optimal entropy maps for the spectrogram and the RIDB, respectively. Results for the RICI thresholding are shown in Fig. 3d for the spectrogram and in Fig. 3e for the RIDB. The result of fixed thresholding is shown in Fig. 3f.

A comparison of the methods' metrics for the spectrogram distribution is reported in Tables 3 and 4. The fixed thresholding has the highest error, while the proposed method gives results similar to the RICI TFD thresholding. The local entropy-based algorithm does not perform as well as the proposed method. While the proposed method has a higher accuracy by 0.001, the RICI TFD threshold has a slightly higher F1 score. The local entropy-based method performs worse than both the fixed threshold and the proposed method when applied to the spectrogram in this case of SNR=−3 dB.

The proposed method performs far better on the RIDB distribution, as shown in Tables 5 and 6. The local entropy-based algorithm does not appear to be suitable for the RIDB distribution. When the proposed method and the RICI TFD threshold are compared, the differences between the methods' measures are much greater than in the case of the spectrogram: the proposed method has a higher accuracy by 0.102 and a higher F1 score by 0.018 than the RICI TFD thresholding.

The fixed threshold method has a lower score in comparison to both the proposed method and the RICI TFD threshold in the case of both spectrogram and RIDB distribution. The local entropy-based algorithm does not perform as well as the proposed method or the RICI TFD thresholding for low SNR values.

The representation of obtained results for SNR=3 dB can be seen in Fig. 4. Figure 4a reports the spectrogram of the noisy signal. The optimal entropy map for the spectrogram and RIDB are in Fig. 4b and c. The results for the RICI thresholding are in Fig. 4d, for the spectrogram, and in Fig. 4e for the RIDB. The result of fixed thresholding is in Fig. 4f.

For the spectrogram (Tables 3 and 4), the RICI TFD threshold achieves the best result, with an F1 score higher than that of the proposed method by 0.066 in the case of the first signal's spectrogram. The local entropy-based method has an accuracy lower than the proposed method by 0.015, but its F1 score is better by 0.011.

The proposed method still gives better results when applied to the RIDB distribution. It outperforms the RICI TFD threshold by 0.055 in accuracy and by 0.054 in the F1 score and fixed threshold by 0.094 in accuracy and by 0.051 in the F1 score. The entropy-based method has lower accuracy by 0.041 and F1 score by 0.073.

As can be seen, the proposed method produces results similar to the RICI TFD thresholding. Namely, it presents slightly better performance in all cases except for the first signal when SNR=3 dB (in this case, the F1 score is highest for the RICI thresholding, while the accuracy measure is still higher for the proposed method). Differences in the accuracy, for the first signal, range from 0.001 in the case of the spectrogram to 0.102 in the case of the RIDB distribution.

The results for the second multi-component synthetic signal with added noise with SNR= −3 dB are reported in Fig. 5.

Figure 5a reports the spectrogram of the noisy signal. Figure 5b shows the obtained optimal entropy map from the spectrogram, and Fig. 5c represents the obtained optimal entropy map from the RIDB distribution. The map obtained from the RICI threshold is in Fig. 5d for the spectrogram, and in Fig. 5e for the RIDB. The result of fixed thresholding is in Fig. 5f.

The results for the spectrogram are presented in Tables 3 and 4. The accuracy of the proposed method is larger by 0.008 when compared to the RICI TFD threshold and by 0.019 when compared to the best fixed threshold. The F1 measure of the proposed method is 0.001 higher than that of the RICI TFD threshold and 0.005 higher than the best F1 score of the fixed threshold. The local entropy-based method, in this case, has the highest accuracy value but the lowest F1 score when compared to the proposed method and the RICI TFD threshold.

From Tables 5 and 6, it is visible that, just as for the first signal, the proposed method outperforms the other three. The accuracy differences are 0.058 and 0.091 with respect to the RICI TFD threshold and the fixed threshold, respectively. Even though the accuracy is higher for the proposed method, the F1 score is in favor of the RICI TFD threshold by 0.081. The local entropy-based method has a slightly higher accuracy measure than the proposed method, but it also has a very low F1 score. In terms of the measures, the proposed method, in this case, has higher accuracy than the RICI TFD threshold and a higher F1 score than the entropy-based method.

The results for SNR=3 dB are in Fig. 6. Figure 6a represents the spectrogram of the noisy signal. The optimal entropy map is in Fig. 6b for the spectrogram and in Fig. 6c for the RID. Figure 6d and e report the RICI TFD threshold result for the spectrogram and RID while in Fig. 6f the results of the fixed threshold are reported.

The difference in accuracy between the proposed method and the RICI TFD threshold for the spectrogram is 0.002 (Table 3), while between the proposed method and fixed threshold, it is 0.011. The F1 score (Table 4) is higher for the proposed method, compared to all other methods.

When the methods are applied to the RID distribution, the accuracy (Table 5) of the proposed method is higher by 0.058 when compared to the RICI TFD threshold and by 0.012 when compared to the entropy-based method.

A considerably larger improvement by the proposed method can be observed for the RID distribution, where it exceeds all other approaches. Specifically, compared to the RICI thresholding, the proposed optimal entropy map increases the accuracy across SNRs by 0.055 to 0.1 for the first signal and by 0.017 to 0.058 for the second signal, i.e., improvements of 5.86% to 11.53% and of 1.83% to 6.79%, respectively. Compared to the local entropy-based algorithm, the accuracy increases by 0.041 to 0.062 for the first signal and by up to 0.026 for the second signal, i.e., improvements of 4.03% to 6.65% and of up to 2.77%, respectively.

Differences between the results obtained with the spectrogram and the RID distribution are substantial. The RICI thresholding performs considerably better on the signal spectrogram, regardless of the tested signal. The proposed method performs better on the RID for the first signal, while for the second signal, the results are better for the spectrogram.

The optimal entropy map provides similar results to the local entropy-based method and RICI TFD threshold when applied to the spectrogram, but it outperforms them when applied to the RID distribution. All three methods are preferred to the fixed thresholding.
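As a baseline for the comparisons above, fixed TFD thresholding simply zeroes all TFD points whose magnitude falls below a chosen fraction of the global maximum. A minimal sketch, with our own naming, assuming the TFD is given as a 2D NumPy array:

```python
import numpy as np

def fixed_threshold(tfd, fraction):
    """Keep TFD points whose magnitude is at least `fraction` of the
    global maximum; everything below is treated as noise and zeroed."""
    mask = np.abs(tfd) >= fraction * np.abs(tfd).max()
    return np.where(mask, tfd, 0.0), mask
```

The threshold fraction (e.g., the 30% and 5% values used for the real-life signals below) must be tuned per signal and per SNR, which is precisely the drawback the adaptive, data-driven methods avoid.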

3.3 Real-life examples

The proposed method has been applied to real-life signals, namely a dolphin sound and a seismic signal.

In Fig. 7, the results obtained by the methods for the first real-life signal are displayed. Figure 7a shows the RID distribution of the original signal. The optimal entropy maps are shown in Fig. 7b and c for the spectrogram and the RID distribution. Figure 7d and e present the maps obtained on the same distributions by means of the RICI TFD threshold. The result for the fixed threshold of 30% is shown in Fig. 7f.

In Fig. 8, the extracted maps for a seismic signal are reported. Figure 8a shows the original signal’s TFD. Figure 8b shows the optimal entropy map extracted from the spectrogram and Fig. 8c shows the optimal entropy map extracted from the RID distribution. Figure 8d and e present the maps obtained from the RICI TFD threshold for the same distributions. The result for the fixed threshold of 5% is reported in Fig. 8f.

It is difficult to draw conclusions in the case of the real-life signals, as no numerical results can be obtained. Visually, the results of all tested methods are similar in the case of the dolphin sound signal.

For the seismic signal, the largest difference between the signal maps obtained by the different approaches appears in the case of the spectrogram, where the optimal map preserves more of the signal. Unlike in the case of the dolphin sound, the RID distribution seems to preserve more of the signal.

4 Conclusion

Here, we introduced a method for blind source separation of signal components and extraction of useful information from noisy TFDs based on the 2D local Rényi entropy. The method uses adaptive windows whose sizes are calculated utilizing the RICI rule. One of the advantages of the approach is that it does not require any specific knowledge of the signal or noise. Also, the proposed technique performs well for different TFDs, as shown in the paper for different SNRs. The method has been applied to both synthetic and real-world signals. When compared to fixed TFD thresholding, the adaptive approach in which the RICI rule is applied directly to TFD thresholding, and the existing 1D local entropy-based method, the proposed adaptive 2D Rényi entropy approach is shown to significantly increase classification accuracy and F1 score. Hence, the method can be used as an efficient tool for extracting useful information from noisy data in the time-frequency domain. Future work will explore combining the proposed approach with machine learning techniques to yield further classification improvements.
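To make the summary concrete, the core quantity of the described method, a local Rényi entropy map computed over 2D TFD regions, can be sketched as follows. This is a simplified illustration with a fixed square window and our own naming; in the actual method, the window sizes are selected adaptively, per TF point, by the RICI rule:

```python
import numpy as np

def local_renyi_entropy(tfd, win=9, alpha=3):
    """Sliding-window Rényi entropy map of a TFD (sketch: fixed square
    window; the paper instead adapts the window per point via RICI).
    Each window is normalized to a probability distribution before
    H_alpha = log2(sum(p**alpha)) / (1 - alpha) is evaluated."""
    tfd = np.abs(tfd)
    half = win // 2
    padded = np.pad(tfd, half, mode="edge")  # replicate border values
    H = np.zeros_like(tfd, dtype=float)
    rows, cols = tfd.shape
    for i in range(rows):
        for j in range(cols):
            patch = padded[i:i + win, j:j + win]
            total = patch.sum()
            if total == 0:
                continue  # empty window: entropy left at zero
            p = patch / total
            H[i, j] = np.log2(np.sum(p ** alpha)) / (1 - alpha)
    return H
```

For a uniform (noise-like) window the entropy attains its maximum, log2(win**2), whereas windows dominated by a concentrated signal component score lower; this contrast is what allows signal regions to be separated from noise in the entropy map.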

Availability of data and materials

Please contact the authors for data requests.

Abbreviations

FT: Fourier transform

ICI: Intersection of confidence intervals

MSE: Mean squared error

RID: Reduced interference distribution

RICI: Relative intersection of confidence intervals

STFT: Short-time Fourier transform

SNR: Signal-to-noise ratio

TFD: Time-frequency distribution

References

  1. B. Boashash, Time-frequency Signal Analysis and Processing: a Comprehensive Reference (Elsevier Academic Press, Australia, 2016).

  2. Z. Hong, W. Qing-ping, P. Yu-jian, T. Ning, Y. Nai-chang, in 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). A sea corner-reflector jamming identification method based on time-frequency feature (Ningbo, 2015).

  3. P. A. Karthick, G. Venugopal, S. Ramakrishnan, Analysis of surface EMG signals under fatigue and non-fatigue conditions using B-distribution based quadratic time-frequency distribution. J. Mech. Med. Biol. 15(2) (2015).

  4. M. A. Colominas, M. E. S. H. Jomaa, N. Jrad, A. Humeau-Heurtier, P. Van Bogaert, Time-varying time-frequency complexity measures for epileptic EEG data analysis. IEEE Trans. Biomed. Eng. 65(8), 1681–1688 (2018).

  5. M. Noor Muhammad Hamdi, A. Z. Sha’ameri, Time-frequency representation of radar signals using Doppler-lag block searching Wigner-Ville distribution. Adv. Electr. Electron. Eng. 16 (2018).

  6. Z. Wang, Y. Wang, L. Xu, in Communications, Signal Processing, and Systems. CSPS 2017. Lecture Notes in Electrical Engineering. Time-frequency ridge-based parameter estimation for sinusoidal frequency modulation signals (Springer, Singapore, 2019).

  7. A. Mjahad, A. Rosado-Muñoz, J. F. Guerrero-Martínez, M. Bataller-Mompeán, J. V. Francés-Villora, M. K. Dutta, Detection of ventricular fibrillation using the image from time-frequency representation and combined classifiers without feature extraction. Appl. Sci.8(11) (2018).

  8. Y. Zhao, S. Han, J. Yang, L. Zhang, H. Xu, J. Wang, A novel approach of slope detection combined with Lv’s distribution for airborne SAR imagery of fast moving targets. Remote Sens.10:, 764 (2018).

  9. D. Gabor, Theory of communication. Part 1: The analysis of information. J. Inst. Electr. Eng. Part III Radio Commun. 93, 429–457 (1946).

  10. S. G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries. IEEE Trans. Sig. Process. 41(12), 3397–3415 (1993).

  11. J. A. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory. 50(10), 2231–2242 (2004).

  12. S. Chen, D. Donoho, M. Saunders, Atomic decomposition by basis pursuit. SIAM Rev.43(1), 129–159 (2001).

  13. I. F. Gorodnitsky, B. D. Rao, Sparse signal reconstruction from limited data using focuss: a re-weighted minimum norm algorithm. IEEE Trans. Sig. Process.45(3), 600–616 (1997).

  14. H. Mohimani, M. Babaie-Zadeh, C. Jutten, A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm. IEEE Trans. Sig. Process. 57(1), 289–301 (2009).

  15. J. Wen, H. Liu, S. Zhang, M. Xiao, A new fuzzy K-EVD orthogonal complement space clustering method. Neural Comput. Appl.24(1), 147–154 (2014).

  16. E. Eqlimi, B. Makkiabadi, in 2015 23rd European Signal Processing Conference (EUSIPCO). An efficient K-SCA based underdetermined channel identification algorithm for online applications, (2015), pp. 2661–2665.

  17. P. Addabbo, C. Clemente, S. L. Ullo, in 2017 IEEE International Workshop on Metrology for AeroSpace (MetroAeroSpace). Fourier independent component analysis of radar micro-doppler features, (2017), pp. 45–49.

  18. A. Belouchrani, M. Amin, Blind source separation based on time-frequency signal representations. IEEE Trans. Sig. Process.46(11), 2888–2897 (1998).

  19. F. Feng, M. Kowalski, Underdetermined reverberant blind source separation: sparse approaches for multiplicative and convolutive narrowband approximation. IEEE/ACM Tran. Audio Speech. Lang. Process.27(2), 442–456 (2019).

  20. T. -H. Yi, X. -J. Yao, C. -X. Qu, H. -N. Li, Clustering number determination for sparse component analysis during output-only modal identification. J. Eng. Mech.145:, 04018122 (2019).

  21. P. Zhou, Y. Yang, S. Chen, Z. Peng, K. Noman, W. Zhang, Parameterized model based blind intrinsic chirp source separation. Digit Sig. Process.83:, 73–82 (2018).

  22. S. Senay, Time-frequency BSS of biosignals. Healthc. Technol. Lett. 5(6), 242–246 (2018).

  23. P. Flandrin, R. G. Baraniuk, O. Michel, in Proc. IEEE Int. Conf. Acoustics Speech and Signal Processing ICASSP’94. Time-frequency complexity and information, (1994), pp. 329–332.

  24. R. G. Baraniuk, P. Flandrin, A. J. E. M. Janssen, O. J. J. Michel, Measuring time-frequency information content using the Renyi entropies. IEEE Trans. Inf. Theory. 47(4), 1391–1409 (2001).

  25. K. E. Hild, D. Erdogmus, J. Príncipe, Blind source separation using Renyi’s mutual information. IEEE Sig. Process. Lett.8(6), 174–176 (2001).

  26. D. Erdogmus, K. E. Hild Ii, J. C. Principe, Blind source separation using Renyi’s α-marginal entropies. Neurocomputing. 49(1–4), 25–38 (2002).

  27. K. E. Hild, D. Pinto, D. Erdogmus, J. C. Principe, Convolutive blind source separation by minimizing mutual information between segments of signals. IEEE Trans. Circ. Syst. I Regular Papers. 52(10), 2188–2196 (2005).

  28. K. E. Hild II, D. Erdogmus, J. C. Principe, An analysis of entropy estimators for blind source separation. Sig. Process.86(1), 182–194 (2006).

  29. X. Yao, T. Yi, C. Qu, H. Li, Blind modal identification using limited sensors through modified sparse component analysis by time–frequency method. Comput-Aided Civil Infrastruct Eng. 33: (2018).

  30. F. Ye, J. Chen, L. Gao, W. Nie, Q. Sun, A mixing matrix estimation algorithm for the time-delayed mixing model of the underdetermined blind source separation problem. Circ. Syst. Sig. Process., 1–18 (2018).

  31. Q. Guo, G. Ruan, L. Qi, A complex-valued mixing matrix estimation algorithm for underdetermined blind source separation. Circ. Syst. Sig. Process.37(8), 3206–3226 (2018).

  32. F. Ye, J. Chen, L. Gao, W. Nie, Q. Sun, A mixing matrix estimation algorithm for the time-delayed mixing model of the underdetermined blind source separation problem. Circ. Syst. Sig. Process.38:, 1–18 (2018).

  33. Q. Guo, C. Li, R. Guoqing, Mixing matrix estimation of underdetermined blind source separation based on data field and improved fcm clustering. Symmetry. 10:, 21 (2018).

  34. X. -Y. Zhang, W. -R. Wang, C. -Y. Shen, Y. Sun, L. -X. Huang, in Advances in intelligent information hiding and multimedia signal processing, ed. by J. -S. Pan, P. -W. Tsai, J. Watada, and L. C. Jain. Extraction of EEG components based on time-frequency blind source separation (Springer, Cham, 2018), pp. 3–10.

  35. N. Saulig, Z. Milanovic, C. Ioana, A local entropy-based algorithm for information content extraction from time-frequency distributions of noisy signals. Digit. Sig. Process.70: (2017).

  36. F. Hlawatsch, G. F. Boudreaux-Bartels, Linear and quadratic time-frequency signal representations. IEEE Sig. Process. Mag.9(2), 21–67 (1992).

  37. L. Cohen, Time-frequency distributions-a review. Proc. IEEE. 77(7), 941–981 (1989).

  38. Z. Guo, L.-G. Durand, H. C. Lee, The time-frequency distributions of nonstationary signals based on a Bessel kernel. IEEE Trans. Sig. Process. 42(7), 1700–1707 (1994).

  39. C. E. Shannon, A mathematical theory of communication. Bell Syst. Tech. J.27(3), 379–423 (1948).

  40. A. Rényi, in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. On measures of entropy and information (University of California Press, Berkeley, 1961), pp. 547–561.

  41. J. Lerga, M. Vrankic, V. Sucic, A signal denoising method based on the improved ICI rule. IEEE Sig. Process. Lett.15:, 601–604 (2008).

  42. A. Goldenshluger, A. Nemirovski, On spatial adaptive estimation of nonparametric regression. Math. Methods Stat.6: (1997).

  43. V. Katkovnik, A new method for varying adaptive bandwidth selection. IEEE Trans. Sig. Process.47:, 2567–2571 (1999).

  44. K. Egiazarian, V. Katkovnik, L. Astola, in 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, Proceedings (Cat. No.01CH37221), vol. 3. Adaptive window size image denoising based on ICI rule, (2001), pp. 1869–1872.

  45. G. Segon, J. Lerga, V. Sucic, Improved LPA-ICI-based estimators embedded in a signal denoising virtual instrument. Sig. Image Video Process.11: (2016).

  46. J. Lerga, M. Franušić, V. Sucic, Parameters analysis for the time-varying automatically adjusted LPA based estimators. J. Autom. Control Eng.2:, 203–208 (2014).

  47. G. Blanco, A. J. M. Traina, C. T. Jr., P. M. Azevedo-Marques, A. E. S. Jorge, D. de Oliveira, M. V. N. Bedo, A superpixel-driven deep learning approach for the analysis of dermatological wounds. Comput. Methods Prog. Biomed.183:, 105079 (2020).

  48. H. Li, H. Li, J. Kang, Y. Feng, J. Xu, Automatic detection of parapapillary atrophy and its association with children myopia. Comput. Methods Prog. Biomed.183:, 105090 (2020).

  49. F. M. Bayer, A. J. Kozakevicius, R. J. Cintra, An iterative wavelet threshold for signal denoising. Sig. Process.162:, 10–20 (2019).

  50. M. Sharma, S. Singh, A. Kumar, R. S. Tan, U. R. Acharya, Automated detection of shockable and non-shockable arrhythmia using novel wavelet-based ECG features. Comput. Biol. Med.115:, 103446 (2019).

  51. J. S. Lee, S. J. Lee, M. Choi, M. Seo, S. W. Kim, QRS detection method based on fully convolutional networks for capacitive electrocardiogram. Expert Syst. Appl.134:, 66–78 (2019).

Funding

This work was fully supported by the Croatian Science Foundation under the projects IP-2018-01-3739 and IP-2020-02-4358, Center for Artificial Intelligence and Cybersecurity - University of Rijeka, University of Rijeka under the projects uniri-tehnic-18-17 and uniri-tehnic-18-15, and European Cooperation in Science and Technology (COST) under the project CA17137.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization, A.V., J.L., and N.S.; methodology, A.V. and J.L.; software, A.V.; validation, A.V. and N.S.; investigation, A.V. and J.L.; resources, A.V. and J.L.; data curation, J.L.; writing—original draft preparation, A.V.; writing—review and editing, J.L. and N.S.; supervision, J.L.; project administration, J.L.; funding acquisition, N.S. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Jonatan Lerga.

Ethics declarations

Consent for publication

This research does not contain any individual person’s data in any form (including individual details, images, or videos).

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Vranković, A., Lerga, J. & Saulig, N. A novel approach to extracting useful information from noisy TFDs using 2D local entropy measures. EURASIP J. Adv. Signal Process. 2020, 18 (2020). https://doi.org/10.1186/s13634-020-00679-2
