
Efficient methods for joint estimation of multiple fundamental frequencies in music signals

Abstract

This study presents efficient techniques for multiple fundamental frequency estimation in music signals. The proposed methodology can infer harmonic patterns from a mixture, considering the interactions with other sources, and evaluate them in a joint estimation scheme. For this purpose, a set of fundamental frequency candidates is first selected at each frame, and several hypothetical combinations of them are generated. The combinations are independently evaluated, and the most likely one is selected taking into account the intensity and spectral smoothness of its inferred patterns. The method is extended by considering adjacent frames in order to smooth the detection in time, and a pitch tracking stage is finally performed to increase the temporal coherence. The proposed algorithms were evaluated in MIREX contests, yielding state-of-the-art results with a very low computational burden.

1 Introduction

The goal of a multiple fundamental frequency (f0) estimation method is to infer the number of simultaneous harmonic sounds present in an acoustic signal and their fundamental frequencies. This problem is relevant in speech processing, structural audio coding, and several music information retrieval (MIR) applications, like automatic music transcription, compression, instrument separation and chord estimation, among others.

In this study, a multiple f0 estimation method is presented for the analysis of pitched musical signals. The core methodology introduced in [1] is described and extended by considering information from neighboring frames.

Most multiple f0 estimation methods are complex systems. The decomposition of a signal into multiple simultaneous sounds is a challenging task due to harmonic overlaps and inharmonicity (when partial frequencies are not exact multiples of the f0). Many different techniques have been proposed in the literature to face this task. Recent reviews of multiple f0 estimation in music signals can be found in [2–4].

Some techniques rely on the mid-level representation, trying to emphasize the underlying fundamental frequencies by applying signal processing transformations to the input signal [5–7]. Supervised [8, 9] and unsupervised [10, 11] learning techniques have also been investigated for this task. The matching pursuit algorithm, which approximates a solution for decomposing a signal into linear functions (atoms), is also adopted in some approaches [12, 13]. Methods based on statistical inference within parametric signal models [3, 14, 15] have also been studied for this task.

Heuristic approaches can also be found in the literature. Iterative cancellation methods estimate the most prominent f0, subtract it from the mixture, and repeat the process until a termination criterion is met [16–18]. Joint estimation methods [19–21] evaluate a set of possible f0 hypotheses, consisting of f0 combinations, selecting the most likely one at each frame without corrupting the residual as occurs with iterative cancellation.

Some existing methods can be cast into another framework. For example, iterative cancellation methods can be viewed from a matching pursuit perspective, and many unsupervised learning methods like [11] can be reformulated in a statistical framework.

Statistical inference provides an elegant framework to deal with this problem, but these methods are usually intended for single instrument f0 estimation (typically piano), as exact inference often becomes computationally intractable for complex and very different sources.

Similarly, supervised learning methods can infer models of pitch combinations seen in the training stage, but they are currently constrained to monotimbral sounds with almost constant spectral profiles [4].

In music, consonant chords include harmonic components of different sounds which coincide in some of their partial frequencies (harmonic overlaps). This situation is very frequent and introduces ambiguity in the analysis, being the main challenge in multiple f0 estimation. When two harmonics overlap, two sinusoids of the same frequency are summed in the waveform, resulting in a signal with the same frequency and whose magnitude depends on their phase difference.

The contribution of each harmonic to the mixture cannot be properly estimated without considering the interactions with the other sources. Joint estimation methods provide an adequate framework to deal with this problem, as they do not assume that sources are mutually independent, and individual pitch models can be inferred taking into account their interactions. However, they tend to have high computational costs due to the number of possible combinations to be evaluated.

Novel efficient joint estimation techniques are presented in this study. In contrast to previous joint approaches, the proposed algorithms have a very low computational cost. They were evaluated and compared to other studies in MIREX [22, 23] multiple f0 estimation and tracking contests, yielding competitive results with very efficient runtimes.

The core process, introduced in [1], relies on the inference and evaluation of spectral patterns from the mixture. For a proper inference, source interactions must be considered in order to estimate the amplitudes of their overlapped harmonics. This is accomplished by evaluating independent combinations consisting of hypothetical patterns (f0 candidates). The evaluation criterion favors those patterns with high intensity and smoothness. This way, the method takes advantage of the spectral properties of most harmonic sounds, in which the first harmonics usually carry most of the energy and the spectral profile tends to be smooth.

Evaluating many possible combinations can be computationally intractable. In this study, the efficiency is boosted by reducing the spectral information to be considered for the analysis, adding an f0 candidate selection process, and pruning unlikely combinations by applying some constraints, like a minimum intensity for a pattern.

One of the main contributions of this study is the extension of the core algorithm to increase the temporal coherence. Instead of considering isolated frames, the combinations sharing the same pitches across neighboring frames are grouped to smooth the detection in time. A novel pitch tracking stage is finally presented to favor smooth transitions of pitch intensities.

The proposed algorithms are publicly available at http://grfia.dlsi.ua.es/cm/projects/drims/software.php.

The overall scheme of the system can be seen in Figure 1. The core methodology performing a frame by frame analysis is described in Sec. 2, whereas the extended method which considers temporal information is presented in Sec. 3. The evaluation results are described in Sec. 4, and the conclusions and perspectives are finally discussed in Sec. 5.

Figure 1. General overview of the core method and its extension.

2 Methodology

Joint estimation methods generate and evaluate competing sets of f0 combinations in order to select the most plausible combination directly. This scheme, recently introduced in [24, 25], has the advantage that the amplitudes of overlapping partials can be approximated taking into account the partials of the other candidates in a given combination. Therefore, partial amplitudes can depend on the particular combination to be evaluated, in contrast to an iterative estimation scheme like matching pursuit, where a wrong estimate may produce cumulative errors.

The core method performs a frame-by-frame analysis, selecting the most likely combination of fundamental frequencies at each instant. For this purpose, a set of f0 candidates is first identified from the spectral peaks. Then, a set of possible combinations of candidates, C(t), is generated, and a joint algorithm is used to find the most likely combination.

In order to evaluate a combination, hypothetical partial sequences (HPS, a term proposed in [26] to refer to a vector containing hypothetical partial amplitudes) are inferred for its candidates. In order to build these patterns, harmonic interactions with the partials of the other candidates in the combination are considered. The overlapped partials are first identified, and their amplitudes are estimated by linear interpolation using the non-overlapped harmonic amplitudes.

Once the patterns are inferred, they are evaluated taking into account the sum of their hypothetical harmonic amplitudes and a novel smoothness measure.

Combinations are analysed considering their individual candidate scores, and the most likely combination is selected at the target frame.

The method assumes that the spectral envelopes of the analysed sounds tend to vary smoothly as a function of frequency. The spectral smoothness principle has been used successfully in different ways in the literature [7, 26–29]. A novel smoothness measure based on the convolution of the hypothetical harmonic pattern with a Gaussian window is proposed.

The processing stages, shown in Figure 1, are described below.

2.1 Preprocessing

The analysis is performed in the frequency domain, computing the magnitude spectrogram using 93 ms Hanning-windowed frames with a 9.28 ms hop size. This frame size is typically chosen for multiple f0 estimation of music signals in order to achieve a suitable frequency resolution, and it was experimentally shown to be adequate. The selected frame overlap ratio may seem high from a practical point of view, but it was required to compare the method with other studies in MIREX (see Sec. 4.3).

To get a more precise estimation of the lower frequencies, zero padding is used: the original window size is multiplied by a factor z and completed with zeroes before computing the FFT.

In order to increase the efficiency, many unnecessary spectral bins are discarded for the subsequent analysis, using a simple peak picking algorithm to extract the hypothetical partials. At each frame, only those spectral peaks with an amplitude higher than a threshold μ are selected, removing the rest of the spectral information and obtaining this way a sparse representation containing a subset of spectral bins. It is important to note that this thresholding does not have a significant effect on the results, as values of μ are quite low, but it considerably increases the efficiency of the method.
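As an illustration, the sketch below computes such a sparse peak representation under the settings stated above (93 ms Hanning window, 9.28 ms hop, zero-padding factor z, amplitude threshold μ). It is only a minimal NumPy/SciPy sketch; the function name, the default parameter values and the data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import get_window, find_peaks

def sparse_spectral_peaks(x, sr, win_ms=93.0, hop_ms=9.28, z=2, mu=1e-3):
    """Return, per frame, the (frequencies, magnitudes) of spectral peaks above mu."""
    win_len = int(round(win_ms * 1e-3 * sr))
    hop = int(round(hop_ms * 1e-3 * sr))
    window = get_window("hann", win_len, fftbins=True)
    n_fft = z * win_len                                  # zero padding (Sec. 2.1)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)

    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        spec = np.abs(np.fft.rfft(x[start:start + win_len] * window, n=n_fft))
        peaks, _ = find_peaks(spec, height=mu)           # simple peak picking
        frames.append((freqs[peaks], spec[peaks]))       # sparse representation
    return frames
```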

2.2 Candidate selection

The evaluation of all possible f0 combinations in a mixture is computationally intractable; therefore, a reduced subset of candidates must be chosen before generating their combinations. For this, candidates are first selected from the spectral peaks within the range [fmin, fmax] corresponding to the musical pitches of interest. Harmonic sounds with missing fundamentals are not considered, although they seldom appear in practical situations. A minimum spectral peak amplitude ε for the first partial (f0) can also be required at this stage.

The spectral magnitudes at the candidate partial positions are considered as a criterion for candidate selection as described next.

2.2.1 Partial search

Slight deviations from the ideal partial frequencies are common in music sounds; therefore, inharmonicity must be considered for partial search. For this, a constant margin f_h ± f_r is set around each harmonic frequency. If there are no spectral peaks within this margin, the harmonic is considered to be missing. Besides the constant margin, frequency-dependent margins were also tested, assuming that partial deviations in high frequencies are larger than those in low frequencies. However, the results worsened, mainly because many false positive harmonics (most of them corresponding to noise) can be found in high frequencies.

Different strategies were also tested for partial search and finally, as in [30], the harmonic spectral location and spectral interval principles [31] were chosen in order to take inharmonicity into account. The ideal frequency f_h of the first harmonic is initialized to f_h = 2f_0. The next ones are searched at f_{h+1} = (f_x + f_0) ± f_r, where f_x = f_i if the previous harmonic h was found at the frequency f_i, or f_x = f_h if the previous partial was missing.

In many studies, the closest peak to f_h within a given region is identified as a partial. A novel variation, which experimentally (although not significantly) improved the performance of the proposed method, is the inclusion of a triangular window. This window, centered at f_h with a bandwidth of 2f_r and unity amplitude, is used to weight the partial magnitudes within this range (see Figure 2). The spectral peak with the maximum weighted value is selected as a partial. The advantage of this scheme is that low amplitude peaks are penalized and, besides the harmonic spectral location, intensity is also considered to correlate the most important spectral peaks with partials.

Figure 2. Partial selection example. The selected peak is the one with the greatest weighted value.
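A sketch of this partial search is shown below, assuming a constant margin f_r and the triangular weighting described above. The peak arrays are assumed to come from the sparse representation of Sec. 2.1; the default values of H and f_r are illustrative only.

```python
import numpy as np

def find_partials(peak_freqs, peak_mags, f0, H=10, fr=11.0):
    """Search H partials of candidate f0 above the fundamental, using the
    spectral location/interval principles and triangular weighting (Sec. 2.2.1)."""
    amps = np.zeros(H)
    fh = 2.0 * f0                          # ideal frequency of the first searched harmonic
    for h in range(H):
        in_margin = np.where(np.abs(peak_freqs - fh) <= fr)[0]
        if in_margin.size == 0:
            fx = fh                        # missing partial: keep the ideal frequency
        else:
            # triangular window centered at fh, bandwidth 2*fr, unity amplitude
            weights = 1.0 - np.abs(peak_freqs[in_margin] - fh) / fr
            best = in_margin[np.argmax(weights * peak_mags[in_margin])]
            amps[h] = peak_mags[best]
            fx = peak_freqs[best]          # found partial frequency f_i
        fh = fx + f0                       # next partial searched at (f_x + f0) ± f_r
    return amps
```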

2.2.2 Selection of F candidates

Once the hypothetical partials for all possible candidates have been searched, the candidates are sorted in decreasing order of the sum of their partial amplitudes and, at most, only the first F candidates of this ordered list are kept for the following processing stages.

Harmonic summation is a simple criterion for candidate selection, and other alternatives can be found in the literature, including a harmonicity criterion [30], partial beating [30], or the product of harmonic amplitudes in the power spectrum [20]. Evaluating alternative criteria for candidate selection is left as future study.

2.3 Generation of candidate combinations

All the possible combinations of the F selected candidates are calculated and evaluated, and the combination with the highest score is yielded at the target frame. The combinations consist of different numbers of fundamental frequencies. In contrast to studies like [26], there is no need for an a priori estimation of the number of concurrent sounds before detecting the fundamental frequencies; the polyphony is implicitly calculated in the f0 estimation stage by choosing the combination with the highest score independently of its number of candidates.

At each frame t, a set of combinations C(t) = {C_1, C_2, ..., C_N} is obtained. For efficiency, as in [20], only the combinations with a maximum polyphony P are generated from the F candidates. The number of combinations without repetition (N) can be calculated as:

N = \sum_{n=1}^{P} \binom{F}{n} = \sum_{n=1}^{P} \frac{F!}{n!\,(F-n)!}
(1)

Therefore, N combinations are evaluated at each frame, so the adequate selection of F and P is critical for the computational efficiency of the algorithm. An experimental discussion on this issue is presented in Sec. 4.2.
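The combinatorics of this stage can be sketched as follows; the values F = 10 and P = 6 are only illustrative (the actual values are part of the parameterization in Sec. 4.2), and Eq. (1) gives the number of generated combinations.

```python
from itertools import combinations
from math import comb

def generate_combinations(candidates, P):
    """All combinations of the selected candidates with polyphony 1..P (Sec. 2.3)."""
    combs = []
    for n in range(1, min(P, len(candidates)) + 1):
        combs.extend(combinations(candidates, n))
    return combs

# Example with illustrative values: F = 10 candidates and P = 6 give N = 847 per frame
F, P = 10, 6
N = sum(comb(F, n) for n in range(1, P + 1))   # Eq. (1)
assert N == len(generate_combinations(list(range(F)), P)) == 847
```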

2.4 Evaluation of combinations

In order to evaluate a combination C_i ∈ C(t), a hypothetical pattern is first estimated for each of its candidates. Then, these patterns are evaluated in terms of their intensity and smoothness, assuming that music sounds have a perceivable intensity and smooth spectral shapes, as occurs for most harmonic instruments. The combination Ĉ(t) whose patterns maximize these measures is yielded at the target frame t.

2.4.1 Inference of hypothetical patterns

The intention of this stage is to infer harmonic patterns for the candidates. This is performed taking into account the interactions with the other candidates in the analysed combination, assuming that they have smooth spectral envelopes. A pattern (HPS) is a vector p_c, estimated for each candidate c ∈ C, consisting of the hypothetical harmonic amplitudes of the first H harmonics:

\mathbf{p}_c = \left( p_{c,1}, p_{c,2}, \ldots, p_{c,h}, \ldots, p_{c,H} \right)^T
(2)

where p_{c,h} is the amplitude of the h-th harmonic of the candidate c. The partials are searched in the same way as previously described for the candidate selection stage. If a particular harmonic is not found within the search margin, then the corresponding value p_{c,h} is set to zero. As in music sounds the first harmonics are usually the most representative and contain most of the sound energy, only the first H partials are considered to build the patterns. Once the partials of a candidate are identified, the HPS values are estimated taking into account the hypothetical source interactions. For this task, the harmonics are identified and labeled with the candidates they belong to (see Figure 3). After the labeling process, some harmonics will only belong to one candidate (non-overlapped harmonics), whereas others will belong to more than one candidate (overlapped harmonics).

Figure 3. HPS estimation in a combination of two candidates separated by one octave. The HPS of f1 is estimated by interpolation using the non-overlapped partials.

Assuming that interactions between non-coincident partials (beating) do not significantly alter the original spectral amplitudes, the non-overlapped amplitudes are directly assigned to the HPS. However, the contribution of each source to an overlapped partial amplitude must be estimated.

An accurate estimate of the amplitudes of colliding partials cannot be obtained reliably from the spectral magnitude information alone. In this study, the additivity of linear spectra is assumed, as in most approaches in the literature. Assuming additivity and spectral smoothness, the amplitudes of overlapped partials can be estimated, similarly to [26, 32], by linear interpolation of the neighboring non-overlapped partials, as shown in Figure 3 (bottom).

If there are two or more consecutive overlapped partials, then the interpolation is done the same way using the available non-overlapped values. For instance, if harmonics 2 and 3 of a pattern are overlapped, then the amplitudes of harmonics 1 and 4 are used to estimate them by linear interpolation.

After the interpolation, the estimated contribution of each partial to the mixture is subtracted before processing the next candidates. This calculation (see Figure 3) is done as follows:

  • If the interpolated (expected) value is greater than the corresponding overlapped harmonic amplitude, then p_{c,h} is set to the original harmonic amplitude, and the spectral peak is completely removed from the residual, setting it to zero for the candidates that share that partial.

  • If the interpolated value is smaller than the corresponding overlapped harmonic amplitude, then p_{c,h} is set to the interpolated amplitude, and this value is linearly subtracted from the residual for the candidates that share the harmonic.

The residual harmonic amplitudes after this process are iteratively analysed for the rest of the candidates in the combination in ascending frequency order.
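The sketch below illustrates this inference procedure (overlap detection, linear interpolation from the non-overlapped partials, and residual subtraction) for a single combination. The data layout, a dict of partial bin indices per candidate processed in ascending f0 order, is an assumption made for the example, not the original implementation.

```python
import numpy as np

def infer_hps(cand_bins, residual):
    """Infer the HPS of each candidate of a combination (Sec. 2.4.1).
    cand_bins: {f0: list of spectral-bin indices of its H partials, -1 if missing},
    ordered by ascending f0. residual: spectral peak amplitudes, updated in place."""
    claims = {}                                   # how many candidates share each bin
    for bins in cand_bins.values():
        for b in bins:
            if b >= 0:
                claims[b] = claims.get(b, 0) + 1

    hps = {}
    for f0, bins in cand_bins.items():            # ascending frequency order
        amps = np.array([residual[b] if b >= 0 else 0.0 for b in bins])
        overlapped = np.array([b >= 0 and claims[b] > 1 for b in bins])
        clean = np.where(~overlapped & (amps > 0))[0]
        for h in np.where(overlapped)[0]:
            # expected amplitude interpolated from the non-overlapped partials
            expected = np.interp(h, clean, amps[clean]) if clean.size else amps[h]
            if expected >= amps[h]:
                residual[bins[h]] = 0.0           # take the whole peak
            else:
                amps[h] = expected
                residual[bins[h]] -= expected     # leave the rest for other candidates
        hps[f0] = amps
    return hps
```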

2.4.2 Candidate evaluation

The intensity l(c) of a candidate c is a measure of the strength of a source obtained by summing its HPS amplitudes:

l(c) = \sum_{h=1}^{H} p_{c,h}
(3)

Assuming that a pattern should have a minimum loudness, those combinations having any candidate with a very low absolute (l(c) < η) or relative (l(c) < γ·L_C, where L_C = max_c {l(c)}) intensity are discarded.

The underlying hypothesis assumes that a smooth spectral pattern is more probable than an irregular one. This is assessed through a novel smoothness measure s(c) which is based on Gaussian smoothing.

To compute it, the HPS of a candidate is first normalized by dividing the amplitudes by their maximum value, obtaining p̄_c. The aim is to compare p̄_c with a smooth model p̃_c built from it, in such a way that the similarity between p̄_c and p̃_c gives an estimation of the smoothness.

For this purpose, p̄_c is smoothed using a truncated normalized Gaussian window N_{0,1}, which is convolved with the HPS to obtain p̃_c:

\tilde{\mathbf{p}}_c = N_{0,1} * \bar{\mathbf{p}}_c
(4)

Only three components were chosen for the Gaussian window of unity variance, N_{0,1} = (0.21, 0.58, 0.21)^T, due to the small size of p_c, which is limited by H. Typical values for H are within the range H ∈ [5, 20], as only the first harmonics contain most of the energy of a harmonic source.

Then, as shown in Figure 4, a roughness measure r(c) is computed by summing up the absolute differences between p̃_c and the actual normalized HPS amplitudes:

Figure 4. Spectral smoothness measure example. The normalized HPS vector p̄ and the smooth version p̃ of two candidates c1 (top) and c2 (bottom) are shown. In this example, r(c1) = 0.13 and r(c2) = 1.23.

r(c) = \sum_{h=1}^{H} \left| \tilde{p}_{c,h} - \bar{p}_{c,h} \right|
(5)

The roughness r(c) is normalized into r̄(c) to make it independent of the intensity:

\bar{r}(c) = \frac{r(c)}{1 - N_{0,1}(\bar{x})}
(6)

And finally, the smoothness s(c) ∈ [0, 1] of a HPS is calculated as:

s(c) = 1 - \frac{\bar{r}(c)}{H_c}
(7)

where H_c is the index of the last harmonic found for the candidate. This factor was introduced to prevent high-frequency candidates, which have fewer partials than those at low frequencies, from obtaining higher smoothness values. This way, the smoothness is considered to be more reliable when there are more partials to estimate it.

A candidate score is computed taking into account the HPS smoothness and intensity:

S(c) = l(c) \, s^{\kappa}(c)
(8)

where κ is a factor that allows the contribution of the smoothness to be balanced experimentally.
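Putting Eqs. (3)-(8) together, a candidate score can be computed as sketched below. This is a minimal sketch: it assumes the Gaussian window of Eq. (4), the normalization of Eq. (6) with N_{0,1}(x̄) = 0.58, and the H_c factor of Eq. (7); the function name is illustrative and κ must be supplied (its tuned value is discussed in Sec. 4.2).

```python
import numpy as np

GAUSS = np.array([0.21, 0.58, 0.21])     # truncated N(0,1) window used in Eq. (4)

def candidate_score(hps, kappa):
    """Intensity, smoothness and score of a candidate HPS (Eqs. (3)-(8)).
    hps: vector of hypothetical harmonic amplitudes p_{c,h}, with at least one nonzero."""
    intensity = hps.sum()                                # l(c), Eq. (3)
    p_bar = hps / hps.max()                              # normalized HPS
    p_tilde = np.convolve(p_bar, GAUSS, mode="same")     # smoothed model, Eq. (4)
    r = np.abs(p_tilde - p_bar).sum()                    # roughness, Eq. (5)
    r_bar = r / (1.0 - GAUSS[1])                         # normalization, Eq. (6)
    Hc = int(np.nonzero(hps)[0].max()) + 1               # index of the last found harmonic
    s = max(0.0, 1.0 - r_bar / Hc)                       # smoothness, Eq. (7)
    return intensity * s ** kappa                        # S(c), Eq. (8)
```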

2.4.3 Combination selection

Once all the candidates have been evaluated, a salience measure S(C_i) for a combination C_i is computed as:

S(C_i) = \sum_{c=1}^{|C_i|} S(c)^2
(9)

When there are overlapped partials, their amplitudes are estimated by interpolation; therefore, the HPS smoothness tends to increase. To partially compensate for this effect in S(C_i), the candidate scores are squared in order to boost the highest values. This favors a sparse representation, as it is convenient to explain the mixture with the minimum number of sources. Experimentally, it was found that this square factor was important to improve the success rate of the method (more details can be found in [4, p. 148]). Once S(C_i) has been computed for all the combinations in C(t), the one with the highest score is selected:

\hat{C}(t) = \arg\max_i S(C_i(t))
(10)

3 Extension using neighbor frames

In the previously described method, each frame was independently analysed, yielding the combination of fundamental frequencies that maximizes a given measure. One of the main limitations of this approach is that the window size (93 ms) is relatively short to perceive the pitches in a complex mixture, even for an expert musician. Context is very important in music to disambiguate certain situations. In this section, the core method is extended by considering information from adjacent frames to produce a smoothed detection across time.

3.1 Temporal smoothing

A simple and effective novel technique is presented in order to smooth the detection across time. Instead of selecting the most likely combination at isolated frames, adjacent frames are also analysed to get the score of each combination.

The method aims to enforce the pitch continuity in time. For this, the fundamental frequencies of each combination C are mapped into musical pitches, obtaining a pitch combination C′. For instance, the combination C_i = {261 Hz, 416 Hz} is mapped into C′_i = {C4, G♯4}.

If more than one combination maps to the same pitches (for instance, C_1 = {260 Hz} and C_2 = {263 Hz} are both C′ = {C4}), only the combination with the highest score is kept.

Then, at each frame t, a new smoothed score function S̃(C′_i(t)) for a pitch combination C′_i(t) is computed using the neighboring frames:

\tilde{S}(C'_i(t)) = \sum_{j=t-K}^{t+K} S(C'_i(j))
(11)

where C′_i are the pitch combinations that appear at least once in the 2K + 1 frames considered. Note that the score values for the same pitch combination are summed over the 2K + 1 frames centered at t to obtain S̃(C′_i(t)). An example of this procedure is displayed in Figure 5 for K = 1. If C′_i is missing at any frame j ∈ [t − K, t + K], it does not contribute to the sum at that frame.

Figure 5. Example of combination fusion across adjacent frames using K = 1.
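A sketch of this smoothing is given below: frequencies are mapped to pitches (MIDI numbers in equal temperament with A4 = 440 Hz, which is an assumption of the example), the best-scored combination per pitch set is kept within each frame, and the scores are accumulated over the 2K + 1 frames of Eq. (11). The data layout and function names are illustrative.

```python
import math
from collections import defaultdict

def midi_pitch(freq_hz):
    """Map a frequency to the nearest MIDI pitch (equal temperament, A4 = 440 Hz)."""
    return int(round(69 + 12 * math.log2(freq_hz / 440.0)))

def smoothed_scores(frame_combs, t, K=1):
    """Eq. (11): accumulate scores of combinations sharing the same pitch set over
    the 2K+1 frames centred at t. frame_combs[j] = list of (frequencies, score)."""
    acc = defaultdict(float)
    for j in range(max(0, t - K), min(len(frame_combs), t + K + 1)):
        best = {}                       # keep only the best combination per pitch set
        for freqs, score in frame_combs[j]:
            pitches = frozenset(midi_pitch(f) for f in freqs)
            best[pitches] = max(best.get(pitches, 0.0), score)
        for pitches, score in best.items():
            acc[pitches] += score
    return acc                          # Eq. (12): max(acc, key=acc.get) if acc else a rest
```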

In this new situation, the pitch combination at the target frame t is selected as:

\hat{C}'(t) = \arg\max_i \tilde{S}(C'_i(t))
(12)

If Ĉ′(t) does not contain any combination because there are no valid candidates at frame t, then a rest is yielded without evaluating the adjacent frames.

This technique smoothes the detection in the temporal dimension. For a visual example, let us consider the smoothed intensity of a given candidate c as:

\tilde{l}(c(t)) = \sum_{j=t-K}^{t+K} l(c(j))
(13)

When the temporal evolution of the smoothed intensity l̃(c(t)) of the candidates of the winning combination is plotted in a three-dimensional representation (see Figures 6 and 7), it can be seen that correct estimates usually show smooth temporal curves. An abrupt change (a sudden note onset or offset, represented by a vertical cut in the smoothed intensity 3D plot) means that the energy of some harmonic components of a given candidate was suddenly assigned to another candidate in the next frame. Therefore, vertical lines in the plot usually indicate errors in the assignment of harmonic components.

Figure 6. 3D intensity representation (oboe). Top: ground-truth evolution of pitch along time for an oboe melody. Bottom: 3D temporal representation of l̃(c(t)) for the candidates of the winning combination at each frame. In this example, all the pitches were correctly detected.

Figure 7. 3D intensity representation (piano). Top: ground-truth evolution of pitch along time for a piano piece. Bottom: 3D temporal representation of l̃(c(t)) for the candidates of the winning combination at each frame. Most errors occur where there are steep intensity transitions, which mean that harmonics of a candidate were wrongly assigned to another candidate.

3.2 Pitch tracking

A basic pitch tracking method is introduced in order to favor smooth transitions of l̃(c(t)). The proposed technique aims to increase the temporal coherence using a layered weighted directed acyclic graph (wDAG).

The idea is to minimize abrupt changes in the intensities of the pitch estimates. For that, a graph layered by frames is built with the pitch combinations, where the weights consider the differences between the smoothed intensities of the candidates in adjacent frames and their combination scores. Let G = (V, v_I, E, w, t) be a layered wDAG with vertex set V, initial vertex v_I, edge set E, and edge weight function w, where w(v_i, v_j) is the weight of the edge from vertex v_i to v_j. The position function t: V → {0, 1, 2, ..., T} associates each node with an input frame, T being the total number of frames. Each vertex v_i ∈ V represents a combination C′_i. The vertices are organized in layers (see Figure 8), in such a way that all vertices in a given layer have the same value t(v) = τ, and they represent the M most likely combinations at the time frame τ.

Figure 8. Layered wDAG example for M = 3 combinations at each time. Each layer t(v) = τ represents a time frame, and each vertex is a combination C′_i(t). Weights have been multiplied by 10^4 for visual clarity. The grayed nodes are the pitch combinations selected at each frame in this example.

The edges connect all the vertices of a layer with all the vertices of the next layer, so if (v_i, v_j) ∈ E, then t(v_i) = τ and t(v_j) = τ + 1. The weights w(v_i, v_j) between two combinations are computed as follows:

w(v_i, v_j) = \frac{D(v_i, v_j)}{\tilde{S}(v_j) + 1}
(14)

where S̃(v_j) is the salience of the combination in v_j, and D(v_i, v_j) is a distance measure between the two combinations v_i and v_j, corresponding to the sum of the absolute differences between the intensities of all the candidates in both combinations:

D(v_i, v_j) = \sum_{c \in v_i \cap v_j} \left| \tilde{l}_{v_i,c} - \tilde{l}_{v_j,c} \right| + \sum_{c \in v_i \setminus v_j} \tilde{l}_{v_i,c} + \sum_{c \in v_j \setminus v_i} \tilde{l}_{v_j,c}
(15)

Using this scheme, the transition weight between two combinations considers the score of the target combination and the differences between the candidate intensities.
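Eqs. (14)-(15) can be sketched as follows, representing each combination by a mapping from its candidate pitches to their smoothed intensities l̃; the function and argument names are illustrative.

```python
def transition_weight(int_i, int_j, S_tilde_j):
    """Edge weight between two combinations, Eqs. (14)-(15). int_i, int_j map each
    candidate pitch to its smoothed intensity; S_tilde_j is the smoothed score of
    the target combination v_j."""
    shared = int_i.keys() & int_j.keys()
    D = sum(abs(int_i[c] - int_j[c]) for c in shared)        # common candidates
    D += sum(int_i[c] for c in int_i.keys() - int_j.keys())  # only in v_i
    D += sum(int_j[c] for c in int_j.keys() - int_i.keys())  # only in v_j
    return D / (S_tilde_j + 1.0)                             # Eq. (14)
```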

Once the graph is generated, the shortest path, i.e., the one that minimizes the sum of weights from the starting node to the final state across the wDAG, is found using Dijkstra's algorithm [33]. The vertices that belong to the shortest path are the pitch combinations yielded at each time frame.

Building the wDAG for all possible combinations at all frames could be computationally intractable, but considering only the M most likely combinations at each frame keeps almost the same runtime as without tracking for small values of M.
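The tracking itself can be sketched with a simple dynamic program over the layers, which for a layered DAG finds the same shortest path as the Dijkstra search used in the paper. Here layers[t] is assumed to hold the M best combinations at frame t, and weight() can be, for instance, the transition_weight sketch above.

```python
def track_shortest_path(layers, weight):
    """Shortest path through the layered wDAG (Sec. 3.2) by per-layer dynamic
    programming, equivalent to Dijkstra here since edges only link consecutive layers."""
    cost = [0.0] * len(layers[0])          # best cumulative weight per node, first layer
    back = []                              # back-pointers for path recovery
    for t in range(1, len(layers)):
        new_cost, ptrs = [], []
        for cj in layers[t]:
            c, i = min((cost[i] + weight(ci, cj), i) for i, ci in enumerate(layers[t - 1]))
            new_cost.append(c)
            ptrs.append(i)
        cost = new_cost
        back.append(ptrs)
    idx = min(range(len(cost)), key=lambda i: cost[i])
    path = [idx]
    for ptrs in reversed(back):            # walk the back-pointers to the first layer
        idx = ptrs[idx]
        path.append(idx)
    path.reverse()
    return [layers[t][i] for t, i in enumerate(path)]
```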

4 Evaluation

Initial experiments were done using a data set of random mixtures to perform a first evaluation and set up the parameters. Then, the proposed approaches were publicly evaluated and compared by a third party to other studies in the MIREX [22, 23] multiple f0 estimation and tracking contest.

4.1 Evaluation metrics

Different metrics for multiple f0 estimation can be found in the literature. The evaluation can be done both at frame and note levels. The first mode evaluates the estimation on a frame-by-frame basis, whereas note tracking also considers the temporal coherence of the detection, adding more restrictions for a note to be considered correct. For instance, in the MIREX note tracking contest, a note is correct if its f0 is closer than half a semitone to the ground-truth pitch and its onset is within ±50 ms of the ground-truth note onset.

A false positive (FP) is a detected pitch (or note, if evaluation is performed at note level) which is not present in the signal, and a false negative (FN) is a missing pitch. Correctly detected pitches (OK) are those estimates that are also present in the ground-truth at the detection time.

A commonly used metric for frame-based evaluation is the accuracy, defined as:

Acc = \frac{\sum OK}{\sum OK + \sum FP + \sum FN}
(16)

Alternatively, the performance can be assessed using precision/recall terms. Precision is related to the fidelity whereas recall is a measure of completeness.

Prec = \frac{\sum OK}{\sum OK + \sum FP}
(17)
Rec = \frac{\sum OK}{\sum OK + \sum FN}
(18)

The balance between precision and recall, or F-measure, is computed as their harmonic mean:

F\text{-measure} = \frac{2 \cdot Prec \cdot Rec}{Prec + Rec} = \frac{\sum OK}{\sum OK + \frac{1}{2}\sum FP + \frac{1}{2}\sum FN}
(19)

An alternative metric, based on the speaker diarization error score from NIST^a, was proposed by Poliner and Ellis [34] to evaluate multiple f0 estimation methods. The NIST metric consists of a single error score which takes into account substitution errors (mislabeling an active voice, E_subs), miss errors (when a voice is truly active but results in no transcript, E_miss), and false alarm errors (when an active voice is reported without any underlying source, E_fa).

This metric avoids counting errors twice as classical metrics do in some situations. For instance, using accuracy, if there is a C3 pitch in the reference ground-truth but the system reports a C4, two errors (a false positive and a false negative) are counted. However, if no pitch was detected, only one error would be reported.

To compute the total error (E_tot) over T frames, the number of estimated pitches at each frame is denoted as N_sys, the number of ground-truth pitches as N_ref, and the number of correctly detected pitches as N_corr, which is the intersection between N_sys and N_ref.

E_{tot} = \frac{\sum_{t=1}^{T} \max\left(N_{ref}(t), N_{sys}(t)\right) - N_{corr}(t)}{\sum_{t=1}^{T} N_{ref}(t)}
(20)

Poliner and Ellis [34] state that, as is the universal practice in the speech recognition community, this is probably the most adequate measure, since it gives a direct feel for the quantity of errors that will occur as a proportion of the total quantity of notes present.
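For reference, the frame-level metrics of Eqs. (16)-(20) can be computed from per-frame pitch sets as sketched below (a minimal example assuming non-degenerate inputs, so no division by zero occurs).

```python
def frame_metrics(ref_frames, sys_frames):
    """Accuracy, precision, recall, F-measure and E_tot (Eqs. (16)-(20)).
    ref_frames, sys_frames: lists of sets of pitches, one set per frame."""
    ok = fp = fn = 0
    e_num = e_den = 0
    for ref, sys in zip(ref_frames, sys_frames):
        n_corr = len(ref & sys)
        ok += n_corr
        fp += len(sys - ref)                         # reported but not in the reference
        fn += len(ref - sys)                         # missed pitches
        e_num += max(len(ref), len(sys)) - n_corr    # numerator of Eq. (20)
        e_den += len(ref)
    acc = ok / (ok + fp + fn)                        # Eq. (16)
    prec = ok / (ok + fp)                            # Eq. (17)
    rec = ok / (ok + fn)                             # Eq. (18)
    f_measure = 2 * prec * rec / (prec + rec)        # Eq. (19)
    e_tot = e_num / e_den                            # Eq. (20)
    return acc, prec, rec, f_measure, e_tot
```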

4.2 Parameterization

A data set of random pitch combinations, also used in the evaluation of Klapuri's method [35], was used to tune the algorithm parameters. The data set consists of 4000 mixtures with polyphony^b 1, 2, 4, and 6. The 2842 audio samples from 32 musical instruments used to generate the mixtures come from the McGill University master samples collection^c, the University of Iowa^d, IRCAM studio online^e, and recordings of an acoustic guitar. In order to respect the copyright restrictions, only the first 185 ms of each mixture^f were used for evaluation. In this dataset, the range of valid pitches is [fmin = 38 Hz, fmax = 2100 Hz], and the maximum polyphony is P = 6.

The values for the free parameters of the method were experimentally evaluated. Their impact on the performance and efficiency can be seen in Figures 9 and 10, and it is extensively analysed in [4, pp. 141-156]. In these figures, the cross point represents the values selected for the parameters. Lines represent the impact of tuning each individual parameter while keeping the selected values for the rest of the parameters.

Figure 9. Accuracy for the core method on the random pitch dataset when adjusting the individual parameters. The abscissa axis is not labeled since its values depend on each particular parameter (the first and last values for each parameter are displayed in each plot to indicate the grid step).

Figure 10. Core method runtime when adjusting the parameters. Frame-by-frame runtime in seconds over the entire random mixtures database for the parameters that have some influence on the computational cost.

In the parameterization stage, the selected parameter values were not those that achieved the highest accuracy in the test set, but those that obtained a good trade-off between accuracy and low computational cost.

The chosen parameter values for the core method are shown in Table 1. For the extended method, when considering K adjacent frames, different values for the parameters H = 15, η = 0.15, κ = 4, and ε = 0 were found to perform slightly better; therefore, they were selected for comparing the method to other studies (see Sec. 4.3). A detailed analysis of the parameterization process can be found in [4].

Table 1 Parameter values experimentally selected

4.3 Evaluation and comparison with other methods

The core method was externally evaluated and compared with other approaches in the MIREX 2007 [22] multiple f0 estimation and tracking contest, whereas the extended method was submitted to MIREX 2008 [23]. The data sets used in both MIREX editions were essentially the same; therefore, the results can be directly compared. The details of the evaluation and the ground-truth labeling are described in [36]. Accuracy, precision, recall and E_tot were reported for frame-by-frame estimation, whereas precision, recall and F-measure were used for the note tracking task.

The core method (PI1-07) was evaluated using the parameters specified in Table 1. For this contest, a final postprocessing stage was added. Once the fundamental frequencies were estimated, they were converted into music pitches, and pitch series shorter than d = 56 ms were removed to avoid some local discontinuities.

The extended method was submitted with pitch tracking (PI1-08) and without it (PI2-08) for comparison. In the non-tracking case, a procedure similar to that of the core method was adopted, removing notes shorter than a minimum duration and merging notes separated by short rests. With pitch tracking, the methodology described in Sec. 3.2 was performed instead, increasing the temporal coherence of the estimate with the wDAG using M = 5 combinations at each layer.

Table 2 shows all the evaluated methods. The proposed approaches were submitted to both the frame-by-frame and note tracking contests, although the only submission that performs pitch tracking is PI1-08.

Table 2 MIREX 07-08 methods submitted for frame by frame (FBF) and note tracking (NT) evaluation

In the review by Bay et al. [36], the results of the algorithms evaluated in both MIREX editions are analysed. As shown in Figure 11, the proposed methods achieved a high overall accuracy and the highest precision rates. The extended method also obtained the lowest error rate E_tot of all the methods submitted in both editions (see Figure 12).

Figure 11. Frame-by-frame MIREX accuracy. Figure from [36]. Frame-by-frame precision, recall and accuracy for the MIREX 07-08 multiple f0 estimation methods.

Figure 12. Frame-by-frame MIREX E_tot. Figure from [36], showing E_subs, E_miss and E_fa for the MIREX 07-08 frame-by-frame evaluation, ordered by E_tot.

In the note tracking evaluation considering only onsets, the proposed approaches showed lower accuracies (Figure 13), as only the extended method can perform pitch tracking. The inclusion of the tracking stage did not improve the results for frame-by-frame estimation, but in the note tracking task it outperformed the same method without tracking. The proposed methods were also very efficient with respect to the other state-of-the-art algorithms presented (see Table 3), especially considering that they are based on a joint estimation scheme.

Figure 13. MIREX note tracking F-measure. Figure from [36]. Precision, recall, average F-measure and average overlap based on note onsets for MIREX 07-08 note tracking.

Table 3 MIREX 07-08 frame by frame and note tracking runtimes in seconds of the top ten accuracy methods

While the proposed approaches achieved the lowest E_tot score, they produced very few false alarms compared to miss errors. On the other hand, the methods from Ryynänen and Klapuri [17] and Yeh et al. [37] had a better balance between precision and recall, as well as a good balance among the three error types, and, as a result, high accuracies.

Quoting Bay et al. [36], "Inspecting the methods used and their performances, we can not make generalized claims as to what type of approach works best. In fact, statistical significance testing showed that the top three methods (YRC, PI, and RK) were not significantly different."

5 Conclusions and discussion

In this study, an efficient methodology is proposed for multiple f0 estimation of real music signals assuming spectral smoothness and strong harmonic content without any other a priori knowledge of the sources.

The method can infer and evaluate hypothetical spectral patterns from the analysis of different hypotheses taking into account the interactions with other sources.

The algorithm is extended considering adjacent frames to smooth the detection in time. In order to increase the temporal coherence of the detection, a novel pitch tracking stage based on a wDAG has been included. The proposed algorithms were evaluated and compared to other works by a third party in a public contest (MIREX), obtaining a high accuracy, the highest precision and the lowest E_tot among all the multiple f0 methods submitted. Although many possible combinations of candidates are evaluated at each frame, the presented approach has a very low computational cost, showing that it is possible to build an efficient joint estimation method by applying some constraints, like the sparse representation with only certain spectral peaks, the candidate filtering stage, and the combination pruning process.

The pitch tracking stage could be replaced by a more reliable method in a future study. For instance, the transition weights could be learned from a labeled data set, or a more complex tracking method like the high-order HMM scheme from Chang et al. [38] could be used instead. Besides intensity, the centroid of an HPS should also have temporal coherence when it belongs to the same source; therefore, this feature could also be considered for tracking. Using stochastic models, a probability could be assigned to each pitch in order to remove those that are less probable given their context. Musical probabilities can also be taken into account, as in [17], to remove very unlikely notes. The adaptation to polyphonic music of the stochastic approach from Pérez-Sancho [39] is also planned as future study, in order to complement the multiple f0 estimation method and obtain a musically coherent detection. Besides frame-by-frame analysis and the analysis of adjacent frames, the capability of the extended method to combine similar information across frames allows different alternative architectures to be considered.

This novel methodology permits interesting schemes. For example, the beginnings of musical events can be estimated using an onset detection algorithm like [40]. Then, the combinations of the frames that lie between two consecutive onsets can be merged to yield the pitches within the inter-onset interval. This technique is close to segmentation, and it can obtain reliable results when the onsets are correctly estimated, as happens with sharp-attack sounds like the piano, but a wrong estimate in the onset detection stage will affect the results.

Beats, which can be defined as a sense of equally spaced temporal units [41], can also be detected to merge combinations using a quantization grid. Once the beats are estimated (for example, with a beat tracking algorithm like BeatRoot [42]), a grid with a given beat divisor 1/q can be used, assuming that the minimum note duration is 1/q of a beat. For instance, if q = 4, each inter-beat interval can be split into q sections. Then, the combinations of the frames that belong to each quantization unit can be merged to obtain the results at each minimum grid unit. As in the onset detection scheme, the success rate of this approach depends on the success of the beat estimation. The extended method can be applied using any of these schemes, and the adequate choice of architecture depends on the signal to be analysed. For instance, for timbres with sharp attacks, it is recommended to use onset information, which is very reliable for these kinds of sounds. These alternative architectures have been perceptually evaluated using some example real songs, but a more rigorous evaluation of these schemes is left for future study, since an aligned dataset of real musical pieces with symbolic data is required for this task.

Endnotes

^a National Institute of Standards and Technology.

^b There are 1000 mixtures for each polyphony.

^c http://www.music.mcgill.ca/resources/mums/html/index.htm

^d http://theremin.music.uiowa.edu/MIS.html

^e http://forumnet.ircam.fr/402.html?&L=1

^f The authors would like to thank A. Klapuri for providing this data set for evaluation.

References

  1. Pertusa A, Iñesta JM: Multiple fundamental frequency estimation using Gaussian smoothness. In Proc of the IEEE Int Conf on Acoustics, Speech, and Signal Processing (ICASSP). Volume I. Las Vegas, NV; 2008:105-108.


  2. Klapuri A, Davy M: Signal Processing Methods for Music Transcription. Springer Science+Business Media LCC, New York; 2006.


  3. Christensen MG, Jakobsson A: Multi-Pitch estimation. In Synthesis Lectures on Speech and Audio Processing. Morgan & Claypool publishers, Seattle, WA, USA; 2009.


  4. Pertusa A: Computationally efficient methods for polyphonic music transcription. PhD thesis. Universidad de Alicante; 2010.


  5. Tolonen A, Karjalainen M: A computationally efficient multipitch analysis model. IEEE Trans Speech Audio Process 2000, 8(6):708-716. 10.1109/89.876309


  6. Peeters G: Music pitch representation by periodicity measures based on combined temporal and spectral representations. In Proc of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Volume V. Toulouse, France; 2006:53-56.


  7. Zhou R, Reiss JD, Mattavelli M, Zoia G: A computationally efficient method for polyphonic pitch estimation. EURASIP J Adv Signal Process 2009., 2009(28):

  8. Marolt M: Networks of adaptive oscillators for partial tracking and transcription of music recordings. J New Music Res 2004, 33: 49-59. 10.1076/jnmr.33.1.49.35391


  9. Pertusa A, Iñesta JM: Polyphonic monotimbral music transcription using dynamic networks. Pattern Recogn Lett 2005, 26(12):1809-1818. 10.1016/j.patrec.2005.03.001


  10. Cont A: Realtime multiple pitch observation using sparse non-negative constraints. In Proc of the 7th International Symposium on Music Information Retrieval (ISMIR). Victoria, Canada; 2006:206-211.


  11. Bertin N, Badeau R, Vincent E: Enforcing harmonicity and smoothness in bayesian non-negative matrix factorization applied to polyphonic music transcription. IEEE Trans Audio Speech Language Process 2010, 18(3):538-549.


  12. Carabias-Orti JJ, Vera-Candeas P, Cañadas-Quesada FJ, Ruiz-Reyes N: Music scene-adaptive harmonic dictionary for unsupervised note-event detection. IEEE Trans Audio Speech Language Process 2010, 18(3):473-486.


  13. Leveau P, Vincent E, Richard G, Daudet L: Instrument-specific harmonic atoms for mid-level music representation. IEEE Trans Audio Speech Language Process 2008, 16: 116-128.


  14. Cemgil AT: Bayesian Music Transcription. PhD thesis. Radboud University of Nijmegen, Netherlands; 2004. [http://www-sigproc.eng.cam.ac.uk/~atc27/papers/cemgil-thesis.pdf]


  15. Davy M: An introduction to signal processing. In Signal Processing methods for music transcription. Springer Science+Business Media LCC, New York; 2006.


  16. Klapuri A: Multipitch analysis of polyphonic music and speech signals using an auditory model. IEEE Trans Audio, Speech and Language Process 2008, 16(2):255-266.


  17. Ryynänen M, Klapuri A: Polyphonic Music Transcription Using Note Event Modeling. In Proc IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). New Paltz, New York, USA; 2005:319-322.


  18. Cao C, Li M, Liu J, Yan Y: Multiple F0 estimation in polyphonic music. In Proc of the 3rd Music Information Retrieval Evaluation eXchange (MIREX). Vienna, Austria; 2007.


  19. Yeh C, Roebel A, Rodet X: Multiple fundamental frequency estimation and polyphony inference of polyphonic music signals. IEEE Trans Audio, Speech Language Process 2010, 18(6):1116-1126.


  20. Emiya V, Badeau R, David B: Automatic transcription of piano music based on HMM tracking of jointly-estimated pitches. In Proc European Signal Processing Conference (EUSIPCO). Rhodes, Greece; 2008.


  21. Cañadas-Quesada FJ, Vera-Candeas P, Ruiz-Reyes N, Carabias-Orti JJ: Polyphonic transcription based on temporal evolution of spectral similarity of Gaussian Mixture Models. In 17th European Signal Processing Conference (EUSIPCO). Glasgow, Scotland; 2009:10-14.


  22. MIREX, Music Information Retrieval Evaluation eXchange. Multiple fundamental frequency estimation and tracking contest2007. [http://www.music-ir.org/mirex/wiki/2007:MultipleFundamentalFrequencyEstimation&TrackingResults]

  23. MIREX, Music Information Retrieval Evaluation eXchange. Multiple fundamental frequency estimation and tracking contest2008. [http://www.music-ir.org/mirex/wiki/2008:MultipleFundamentalFrequencyEstimation&TrackingResults]

  24. Yeh C: Multiple F0 estimation for MIREX 2007. In Proc of the 3rd Music Information Retrieval Evaluation eXchange (MIREX). Vienna, Austria; 2007.


  25. Pertusa A, Iñesta JM: Multiple fundamental frequency estimation based on spectral pattern loudness and smoothness. In Proc of the 3rd Music Information Retrieval Evaluation eXchange (MIREX). Vienna, Austria; 2007.


  26. Yeh C, Röbel A, Rodet X: Multiple fundamental frequency estimation of polyphonic music signals. In IEEE, Int Conf on Acoustics, Speech and Signal Processing (ICASSP). Volume III. Philadelphia, PA; 2005:225-228.


  27. Cañadas-Quesada FJ, Vera-Candeas P, Ruiz-Reyes N, Mata-Campos R, Carabias-Orti JJ: Note-event detection in polyphonic musical signals based on Harmonic matching pursuit and spectral smoothness. J New Music Res 2008, 37(3):167-183. 10.1080/09298210802474475


  28. Klapuri A: Multiple fundamental frequency estimation based on harmonicity and spectral smoothness. IEEE Trans Speech Audio Process 2003, 11(6):804-816. 10.1109/TSA.2003.815516


  29. Badeau R, Emiya V, David B: Expectation-maximization algorithm for multi-pitch estimation and separation of overlapping harmonic spectra. In International Conference on Acoustics, Speech and Signal Processing (ICASSP). Volume I. Taipei, Taiwan; 2009:3073-3076.


  30. Yeh C: Multiple fundamental frequency estimation of polyphonic recordings. PhD thesis. Université Paris VI - Pierre et Marie Curie; 2008.


  31. Klapuri A: Signal processing methods for the automatic transcription of music. PhD thesis. Tampere Univ. of Technology; 2004.


  32. Maher RC: Evaluation of a method for separating digitized duet signals. J Audio Eng Soc 1990, 38: 956-979.


  33. Dijkstra EW: A note on two problems in connexion with graphs. Numerische Mathematik 1959, 1: 269-271. 10.1007/BF01386390


  34. Poliner GE, Ellis DPW: A Discriminative Model for Polyphonic Piano Transcription. EURASIP J Adv Signal Process 2007: 2007.

  35. Klapuri A: Multiple Fundamental Frequency Estimation by Summing Harmonic Amplitudes. In Proc of the Int Conference on Music Information Retrieval (ISMIR). Victoria, Canada; 2006:216-221.


  36. Bay M, Ehmann AF, Downie JS: Evaluation of multiple-F0 estimation and tracking systems. In Proc of the 10th International Conference on Music Information Retrieval (ISMIR). Kobe, Japan; 2009:315-320.


  37. Yeh C, Roebel A, Chang WC: Multiple F0 estimation for MIREX 08. In Proc of the 4th Music Information Retrieval Evaluation eXchange (MIREX). Philadelphia, PA; 2008.


  38. Chang WC, Su AWY, Yeh C, Roebel A, Rodet X: Multiple-F0 tracking based on a high-order HMM model. In Proc of the 11th Int Conference on Digital Audio Effects (DAFx). Espoo, Finland; 2008:379-386.


  39. Pérez-Sancho C: Stochastic Language Models for Music Information Retrieval. PhD thesis. Universidad de Alicante, Spain; 2009.


  40. Pertusa A, Klapuri A, Iñesta JM: Recognition of note onsets in digital music using semitone bands. Lecture Notes in Computer Science 2005, 3773: 869-879. 10.1007/11578079_90


  41. Handel S: Listening: An introduction to the perception of auditory events. Bradford Books MIT Press, Cambridge; 1989.


  42. Dixon S: Onset detection revisited. In Proc of the Int Conf on Digital Audio Effects (DAFx). Montreal, Canada; 2006:133-137.


  43. Cont A: Real-time transcription of music signals: MIREX 2007 submission description. In Proc of the 3rd Music Information Retrieval Evaluation eXchange (MIREX). Vienna, Austria; 2007.


  44. Cao C, Li M: Multiple F0 estimation in polyphonic music (MIREX 2008). In Proc of the 4th Music Information Retrieval Evaluation eXchange (MIREX). Philadelphia, PA; 2008.


  45. Durrieu JL, Richard G, David B: Singer melody extraction in polyphonic signals using source separation methods. In Proc of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Volume I. Las Vegas, NV; 2008:169-172.


  46. Emiya V, Badeau R, David B: Automatic transcription of piano music based on HMM tracking of jointly-estimated pitches. In Proc of the 4th Music Information Retrieval Evaluation eXchange (MIREX). Philadelphia, PA; 2008.


  47. Egashira K, Ono N, Sagayama S: Sequential estimation of multiple fundamental frequencies through Harmonic-Temporal-Structured clustering. In Proc of the 4th Music Information Retrieval Evaluation eXchange (MIREX). Philadelphia, PA; 2008.


  48. Kameoka H, Nishimoto T, Sagayama S: A Multipitch analyser based on harmonic temporal structured clustering. IEEE Trans Audio Speech Language Process 2007, 5(3):982-994.


  49. Groble M: Multiple fundamental frequency estimation. In Proc of the 4th Music Information Retrieval Evaluation eXchange (MIREX). Philadelphia, PA; 2008.


  50. Lidy T, Rauber A, Pertusa A, Iñesta JM: Improving genre classification by combination of audio and symbolic descriptors using a transcription system. In Proc of the 8th International Conference on Music Information Retrieval (ISMIR). Vienna, Austria; 2007:61-66.


  51. Leveau P: A multipitch detection algorithm using a sparse decomposition with instrument-specific harmonic atoms. In Proc of the 3rd Music Information Retrieval Evaluation eXchange (MIREX). Vienna, Austria; 2007.


  52. Reis G, Fernandez F, Ferreira A: Genetic algorithm approach to polyphonic music transcription for MIREX 2008. In Proc of the 4th Music Information Retrieval Evaluation eXchange (MIREX). Philadelphia, PA; 2008.


  53. Raczynski SA, Ono N, Sagayama S: Multipitch analisys with harmonic nonnegative matrix approximation. In Proc of the 8th Int Conference on Music Information Retrieval (ISMIR). Vienna, Austria; 2007:381-386.


  54. Vincent E, Bertin N, Badeau R: Two nonnegative matrix factorization methods for polyphonic pitch transcription. In Proc of the 3rd Music Information Retrieval Evaluation eXchange (MIREX). Vienna, Austria; 2007.


  55. Zhou R, Reiss JD: A real-time polyphonic music transcription system. In Proc of the 4th Music Information Retrieval Evaluation eXchange (MIREX). Philadelphia, PA; 2008.



Acknowledgements

This study was supported by the project DRIMS (code TIN2009-14247-C02), the Consolider Ingenio 2010 research programme (project MIPRCV, CSD2007-00018), and the PASCAL2 Network of Excellence, IST-2007-216886.

Author information



Corresponding author

Correspondence to Antonio Pertusa.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Pertusa, A., Iñesta, J.M. Efficient methods for joint estimation of multiple fundamental frequencies in music signals. EURASIP J. Adv. Signal Process. 2012, 27 (2012). https://doi.org/10.1186/1687-6180-2012-27

