Adaptive independent sticky MCMC algorithms

Monte Carlo methods have become essential tools to solve complex Bayesian inference problems in different fields, such as computational statistics, machine learning, and statistical signal processing. In this work, we introduce a novel class of adaptive Monte Carlo methods, called adaptive independent sticky Markov Chain Monte Carlo (MCMC) algorithms, to sample efficiently from any bounded target probability density function (pdf). The new class of algorithms employs adaptive non-parametric proposal densities, which become closer and closer to the target as the number of iterations increases. The proposal pdf is built using interpolation procedures based on a set of support points which is constructed iteratively from previously drawn samples. The algorithm’s efficiency is ensured by a test that supervises the evolution of the set of support points. This extra stage controls the computational cost and the convergence of the proposal density to the target. Each part of the novel family of algorithms is discussed and several examples of specific methods are provided. Although the novel algorithms are presented for univariate target densities, we show how they can be easily extended to the multivariate context by embedding them within a Gibbs-type sampler or the hit and run algorithm. The ergodicity is ensured and discussed. An overview of the related works in the literature is also provided, emphasizing that several well-known existing methods (like the adaptive rejection Metropolis sampling (ARMS) scheme) are encompassed by the new class of algorithms proposed here. Eight numerical examples (including the inference of the hyper-parameters of Gaussian processes, widely used in machine learning for signal processing applications) illustrate the efficiency of sticky schemes, both as stand-alone methods to sample from complicated one-dimensional pdfs and within Gibbs samplers in order to draw from multi-dimensional target distributions.


Introduction
Markov chain Monte Carlo (MCMC) methods [1,2] are very important tools for Bayesian inference and numerical approximation, which are widely employed in signal processing [3][4][5][6][7] and other related fields [1,8]. A crucial issue in MCMC is the choice of a proposal probability density function (pdf), as this can strongly affect the mixing of the MCMC chain when the target pdf has a complex structure, e.g., multimodality and/or heavy tails. Thus, in the last decade, a remarkable stream of literature has focused on adaptive proposal pdfs, which allow for self-tuning procedures of the MCMC algorithms, flexible movements within the state space, and improved acceptance rates [9,10].

*Correspondence: lukatotal@gmail.com. Image Processing Lab., University of Valencia, Valencia, Spain. Full list of author information is available at the end of the article.
Adaptive MCMC algorithms are used in many statistical applications and several schemes have been proposed in the literature [8][9][10][11]. There are two main families of methods: parametric and non-parametric. The first strategy consists in adapting the parameters of a parametric proposal pdf according to the past values of the chain [10]. However, even if the parameters are perfectly adapted, a discrepancy between the target and the proposal pdfs remains. The second strategy attempts to adapt the entire shape of the proposal density using non-parametric procedures [12,13]. Most authors have paid more attention to the first family, designing local adaptive random-walk algorithms [9,10], due to the difficulty of approximating the full target distribution by non-parametric schemes with any degree of generality.
In this work, we describe a general framework to design suitable adaptive MCMC algorithms with non-parametric proposal densities. After describing the different building blocks and the general features of the novel class, we introduce two specific algorithms. Firstly, we describe the adaptive independent sticky Metropolis (AISM) algorithm to draw efficiently from any bounded univariate target distribution. Then, we also propose a more efficient scheme that is based on the multiple try Metropolis (MTM) algorithm: the adaptive independent sticky multiple try Metropolis (AISMTM) method. The ergodicity of the adaptive sticky MCMC methods is ensured and discussed. The underlying theoretical support is based on the approach introduced in [14]. The new schemes are particularly suitable for sampling from complicated full-conditional pdfs within a Gibbs sampler [5][6][7].
Moreover, the new class of methods encompasses different well-known algorithms available in the literature: the griddy Gibbs sampler [15], the adaptive rejection Metropolis sampler (ARMS) [12,16], and the independent doubly adaptive Metropolis sampler (IA2RMS) [13,17]. Other related or similar approaches are also discussed in Section 6. The main contributions of this paper are the following:
1. A very general framework that allows practitioners to design proper adaptive MCMC methods by employing a non-parametric proposal.
2. Two algorithms (AISM and AISMTM) that can be used off-the-shelf in signal processing applications.
3. An exhaustive overview of the related algorithms proposed in the literature, showing that several well-known methods (such as ARMS) are encompassed by the proposed framework.
4. A theoretical analysis of the AISM algorithm, proving its ergodicity and the convergence of the adaptive proposal to the target.
The structure of the paper is the following. Section 2 introduces the generalities of the class of sticky MCMC methods and the AISM scheme. Sections 3 and 4 present the general properties, together with specific examples, of the proposal constructions and the update control tests. Section 5 introduces some theoretical results. Section 6 discusses several related works and highlights some specific techniques belonging to the class of sticky methods. Section 7 introduces the AISMTM method. Section 8 describes the range of applicability of the proposed framework, including its use within other Monte Carlo methods (like the Gibbs sampler or the hit and run algorithm) to sample from multivariate distributions. Eight numerical examples (including the inference of hyper-parameters of Gaussian processes) are then provided in Section 9. Finally, Section 10 contains some conclusions and possible future lines.

Adaptive independent sticky MCMC algorithms

Let π̄(x) ∝ π(x) > 0, with x ∈ X ⊆ R, be a bounded target density known up to a normalizing constant, c_π = ∫_X π(x)dx, from which direct sampling is unfeasible. In order to draw from it, we employ an MCMC algorithm with an independent adaptive proposal, q̄_t(x|S_t) ∝ q_t(x|S_t) > 0, x ∈ X, where t is the iteration index of the corresponding MCMC algorithm, and S_t = {s_1, ..., s_{m_t}} with m_t > 0 is the set of support points used for building q_t. At the t-th iteration, an adaptive independent sticky MCMC method is conceptually formed by three stages (see Fig. 1):
1. Construction of the non-parametric proposal: given the nodes in S_t, the function q_t is built using a suitable non-parametric procedure that provides a function which becomes closer and closer to the target as the number of points m_t increases. Section 3 describes the general properties that must be fulfilled by a suitable proposal construction, as well as specific procedures to build this proposal.
Fig. 1 Graphical representation of a generic adaptive independent sticky MCMC algorithm

2. MCMC stage: some MCMC method is applied in order to produce the next state of the chain, x_t, employing q̄_t(x|S_t) as (part of the) proposal pdf. This stage also produces an auxiliary variable z (see Tables 1 and 4), used in the following update stage.
3. Update stage: a statistical test on the auxiliary variable z is performed in order to decide whether to increase the number of points in S_t or not, defining a new support set, S_{t+1}, which is used to construct the proposal at the next iteration. The update stage controls the computational cost and ensures the ergodicity of the generated chain (see Appendix A). Section 4 is devoted to the design of different suitable update rules.
In the following section, we describe the simplest possible sticky method, obtained by using the MH algorithm, whereas in Section 7 we consider a more sophisticated technique that employs the MTM scheme.

Adaptive independent sticky Metropolis
The simplest adaptive independent sticky method is the adaptive independent sticky Metropolis (AISM) technique, outlined in Table 1. In this case, the proposal pdf q̄_t(x|S_t) changes over the iterations (see step 1 of Table 1), following an adaptation scheme that relies upon a suitable interpolation given the set of support points S_t (see Section 3).
Step 3 of Table 1 applies a statistical control to update the set S_t. The point z, rejected at the current iteration of the algorithm in the MH test, is added to S_t with probability

P_a(z) = η_t(z, d_t(z)),   (1)

Table 1 Adaptive independent sticky Metropolis (AISM)
For t = 0, ..., T − 1:
1 Construction of the proposal: build a proposal function q_t(x|S_t) via a suitable procedure using a set of support points S_t (see Section 3).
2 MH step:
  2.1 Draw x′ ∼ q̄_t(x|S_t) ∝ q_t(x|S_t).
  2.2 Set x_t = x′ and z = x_{t−1} with probability
      α = min{1, [π(x′) q_t(x_{t−1}|S_t)] / [π(x_{t−1}) q_t(x′|S_t)]}.
      Otherwise, set x_t = x_{t−1} and z = x′.
3 Test to update S_t: set S_{t+1} = S_t ∪ {z} with probability P_a(z) given in Eq. (1); otherwise, set S_{t+1} = S_t.
where η_t(z, d) : X × R+ → [0, 1] is an increasing test function w.r.t. the variable d, such that η_t(z, 0) = 0, and

d = d_t(z) = |π(z) − q_t(z|S_t)|

is the point distance between π and q_t at z. The rationale behind this test is to use information from the target density in order to include in the support set only those points where the proposal pdf differs substantially from the target value at z. Note that, since z is always different from the current state (i.e., z ≠ x_t for all t), the proposal pdf is independent of the current state according to Holden's definition [14], and thus the theoretical analysis is greatly simplified.
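As an illustration, the MH stage of Table 1 (step 2) can be sketched in Python as follows; the callables log_target, sample_proposal, and proposal_pdf are hypothetical stand-ins for the unnormalized target π and the current proposal q_t(·|S_t):

```python
import numpy as np

def aism_mh_step(x_prev, log_target, sample_proposal, proposal_pdf, rng):
    """One MH step with an independent proposal (step 2 of Table 1, sketch).

    Returns the next state x_t and the auxiliary point z (the discarded one).
    """
    x_new = sample_proposal(rng)                       # x' ~ q_t(x | S_t)
    # acceptance ratio for an independent proposal:
    # alpha = min{1, [pi(x') q_t(x_prev)] / [pi(x_prev) q_t(x')]}
    log_alpha = (log_target(x_new) + np.log(proposal_pdf(x_prev))
                 - log_target(x_prev) - np.log(proposal_pdf(x_new)))
    if rng.uniform() < np.exp(min(0.0, log_alpha)):
        return x_new, x_prev      # accepted: z is the previous state
    return x_prev, x_new          # rejected: z is the discarded candidate
```

The returned pair (x_t, z) feeds the update test of step 3, which decides whether z enters S_{t+1}.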

Construction of the sticky proposals
There are many alternatives available for the construction of a suitable sticky proposal (SP).However, in order to be able to provide some theoretical results in Section 5, let us define precisely what we understand here by a sticky proposal.
Definition 1 (Valid Adaptive Proposal) Let us consider a target density, π̄(x) ∝ π(x) > 0 for any x ∈ X ⊆ R (the target's support), and a set of m_t = |S_t| support points, S_t = {s_1, ..., s_{m_t}} with s_i ∈ X for all i = 1, ..., m_t. An adaptive proposal built using S_t via some non-parametric interpolation approach is considered valid if the following four properties are satisfied:
1. The proposal function is positive, i.e., q_t(x|S_t) > 0 for all x ∈ X and for all possible sets S_t with t ∈ N.
2. Samples can be drawn directly and easily from the resulting proposal, q̄_t(x|S_t) ∝ q_t(x|S_t), using some exact sampling procedure.
3. For any bounded target, π(x), the resulting proposal function, q_t(x|S_t), is also bounded.
4. The proposal function, q_t(x|S_t), has heavier tails than the target, i.e., the ratio π(x)/q_t(x|S_t) remains bounded as |x| → ∞.
Condition 1 guarantees that the function q_t(x|S_t) leads to a valid pdf, q̄_t(x|S_t), that covers the entire support of the target. Condition 2 is required from a practical point of view to obtain efficient algorithms. Finally, Conditions 3 and 4 are required by the proofs of Theorems 3 and 1, respectively, and also make sense from a practical point of view: if the target is bounded, we would expect the proposal learnt from it to be also bounded, and this proposal should be heavier tailed than the target in order to avoid under-sampling the tails. Now we can define precisely what we understand by a "sticky" proposal.

Definition 2 (Sticky Proposal (SP))
Let us consider a valid proposal pdf according to Definition 1. Let us assume also that the i-th support point is distributed according to p_i(x) (i.e., s_i ∼ p_i(x)), such that p_i(x) > 0 for any x ∈ X and i = 1, ..., m_t. Then, a sticky proposal is any valid proposal pdf such that the L1 distance between q̄_t(x) and π̄(x) vanishes as the number of support points increases, i.e.,

D_1(π̄, q̄_t) = ∫_X d_t(z) dz → 0, as m_t → ∞,   (2)

where d_t(z) = |π̄(z) − q̄_t(z|S_t)| is the L1 distance between π̄(x) and q̄_t(x) evaluated at z ∈ X, and (2) implies almost everywhere (a.k.a., almost surely) convergence of q̄_t(x) to π̄(x).
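The "sticky" condition can be checked numerically. The following toy sketch (assumptions: a compact interval, tails omitted, simple linear interpolation of the target through the nodes as an illustrative proposal) shows the L1 distance shrinking as the number of support points grows:

```python
import numpy as np

def l1_distance(target_pdf, proposal_pdf, grid):
    """Numerical D_1 = integral of |pi(x) - q(x)| dx on a uniform grid."""
    dx = grid[1] - grid[0]
    return np.sum(np.abs(target_pdf(grid) - proposal_pdf(grid))) * dx

def interp_proposal(support, target_pdf):
    """Illustrative proposal on a compact interval: linear interpolation of
    the target through the support nodes, then normalization."""
    s = np.sort(np.asarray(support, dtype=float))
    vals = target_pdf(s)
    grid = np.linspace(s[0], s[-1], 2001)
    dx = grid[1] - grid[0]
    z = np.sum(np.interp(grid, s, vals)) * dx      # normalizing constant c_t
    return lambda x: np.interp(x, s, vals) / z
```

With a standard Gaussian target on [−4, 4], refining the support set from 5 to 21 equally spaced nodes visibly reduces D_1, in line with Definition 2.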
In the following, we provide some examples of constructions that fulfill all the conditions in Definitions 1 and 2. All of them approximate the target pdf by interpolating points that belong to the graph of the target function π.

Examples of constructions
Given S_t = {s_1, ..., s_{m_t}} at the t-th iteration, let us define a sequence of m_t + 1 intervals: I_0 = (−∞, s_1], I_j = (s_j, s_{j+1}] for j = 1, ..., m_t − 1, and I_{m_t} = (s_{m_t}, +∞). The simplest possible procedure uses piecewise constant (uniform) pieces in I_i, 1 ≤ i ≤ m_t − 1, with two exponential tails in the first and last intervals [13,15,18]. Mathematically,

q_t(x|S_t) = max{π(s_i), π(s_{i+1})}, for x ∈ I_i,   (3)

where 1 ≤ i ≤ m_t − 1, and E_0(x), E_{m_t}(x) represent two exponential pieces in I_0 and I_{m_t}. These two exponential tails can be obtained simply by constructing two straight lines in the log-domain, as shown in [12,13,19]. For instance, defining V(x) = log[π(x)], we can build the straight line w_0(x) passing through the points (s_1, V(s_1)) and (s_2, V(s_2)), and the straight line w_{m_t}(x) passing through the points (s_{m_t−1}, V(s_{m_t−1})) and (s_{m_t}, V(s_{m_t})). Hence, the proposal function is defined as E_0(x) = exp(w_0(x)) for x ∈ I_0 and E_{m_t}(x) = exp(w_{m_t}(x)) for x ∈ I_{m_t}. Other kinds of tails can be built, e.g., using Pareto pieces as shown in Appendix E.2. Alternatively, we can use piecewise linear pieces [20]. The basic idea is to build straight lines, L_{i,i+1}(x), passing through the points (s_i, π(s_i)) and (s_{i+1}, π(s_{i+1})) for i = 1, ..., m_t − 1, and two exponential pieces, E_0(x) and E_{m_t}(x), for the tails:

q_t(x|S_t) = L_{i,i+1}(x), for x ∈ I_i,   (4)

with i = 1, ..., m_t − 1. Note that drawing samples from these trapezoidal pdfs inside I_i = (s_i, s_{i+1}] is straightforward [20,21]. Figure 2 shows examples of the construction of q_t(x|S_t) using Eq. (3) or (4) with different numbers of points, m_t = 6, 8, 9, 11. See Appendix A for further considerations.
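A minimal sketch of the piecewise constant construction with exponential tails follows. It assumes the log-target is increasing at the two leftmost nodes and decreasing at the two rightmost nodes (so both tail lines integrate to a finite area); all function names are illustrative:

```python
import numpy as np

def build_pwc_proposal(support, target):
    """PWC construction in the style of Eq. (3): a constant piece
    max{pi(s_i), pi(s_{i+1})} on each inner interval, plus two exponential
    tails fitted in the log-domain through the outermost pairs of nodes."""
    s = np.sort(np.asarray(support, dtype=float))
    p = target(s)
    v = np.log(p)                                   # V(x) = log pi(x)
    b0 = (v[1] - v[0]) / (s[1] - s[0])              # left tail slope, > 0
    bm = (v[-1] - v[-2]) / (s[-1] - s[-2])          # right tail slope, < 0
    heights = np.maximum(p[:-1], p[1:])             # piece value on each I_i
    areas = np.concatenate(([p[0] / b0],            # left exponential tail
                            heights * np.diff(s),   # uniform inner pieces
                            [-p[-1] / bm]))         # right exponential tail
    return s, b0, bm, areas

def sample_pwc(s, b0, bm, areas, rng):
    """Exact sampling: pick a piece proportionally to its area, then invert
    the piece's cdf (uniform inside, exponential in the tails)."""
    k = rng.choice(len(areas), p=areas / areas.sum())
    u = 1.0 - rng.uniform()                         # u in (0, 1]
    if k == 0:
        return s[0] + np.log(u) / b0                # left tail, x <= s_1
    if k == len(areas) - 1:
        return s[-1] + np.log(u) / bm               # right tail, x >= s_m
    return s[k - 1] + u * (s[k] - s[k - 1])         # uniform piece in I_k
```

Since each piece is an elementary density, the exact-sampling requirement of Definition 1 (Condition 2) is met by construction.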
A more sophisticated and costly construction has been proposed for the ARMS method in [12].However, note that this construction does not fulfill Condition 3 in Definition 1.A similar construction based on B-spline interpolation methods has been proposed in [22,23] to build a non-adaptive random walk proposal pdf for an MH algorithm.Other alternative procedures can also be found in the literature [13,16,[18][19][20].

Fig. 2 Examples of the proposal construction q_t considering a bimodal target π, using the procedures described in Eq. (3) for a-d and in Eq. (4) for e-h, with m_t = 6, 8, 9, 11 support points, respectively

Update of the set of support points
In AISM, a suitable choice of the function η t (z, d) is required.Although more general functions could be employed, we concentrate on test functions that fulfill the conditions provided in the following definition.
Definition 3 (Test Function) Let us denote the L1 distance between the target and the proposal at the t-th iteration, for any z ∈ X, as d = d_t(z) = |π(z) − q_t(z)|. A valid test function, η_t(z, d), is any function that fulfills all of the following properties:
1. η_t(z, d) : X × R+ → [0, 1];
2. η_t(z, 0) = 0 for all z ∈ X;
3. η_t(z, d) is an increasing function of d for any fixed z;
4. η_t(z, d) > 0 for all d > 0.
The first condition ensures that we obtain a valid probability for the addition of new support points, P_a(z) = η_t(z, d), whereas the remaining three conditions imply that support points are more likely to be added in those areas where the proposal is further away from the target, with a non-null probability of adding new points in places where d > 0. In particular, Condition 4 is required by several theoretical results provided in the Appendix. However, update rules that do not fulfill this condition can also be useful, as discussed in the following. Figure 3 depicts an example of the function η_t when η_t(z, d) = η_t(d). Note that, for a given value of z, η_t satisfies all the properties of a continuous distribution function (cdf) associated to a positive random variable. Therefore, any pdf for positive random variables can be used to define a valid test function η_t through its corresponding cdf.

Examples of update rules
Below, we provide three different possible update rules. First of all, we consider the simplest case: η_t(z, d) = η(d). As a first example, we propose

η(d) = 1 − exp(−βd),   (5)

where β > 0 is a constant parameter. Note that this is the cdf associated to an exponential random variable.
A second possibility is

η_t(z, d) = 1, if d ≥ ε_t;   η_t(z, d) = 0, if d < ε_t,   (6)

where 0 < ε_t < M_π, with M_π = max_{z∈X} π(z), is some appropriate time-varying threshold that can either follow some user pre-defined rule or be updated automatically. Alternatively, we can also set this threshold to a fixed value, ε_t = ε, as done in the simulations. In this case, setting ε ≥ M_π implies that the update of S_t never happens (i.e., new support points are never added to the support set), whereas candidate nodes would be incorporated into S_t almost surely by setting ε → 0. For any other value of ε (i.e., 0 < ε < M_π), the adaptation would eventually stop and no support points would be added after some random number of iterations. Note that this update rule does not fulfill Condition 4 in Definition 3, implying that some of the theoretical results of Section 5 (e.g., Conjecture 1) are not applicable. However, we have included it here because it is a very simple rule that has shown a good performance in practice and can be useful to limit the number of support points by using a fixed value of ε. Finally, note also that Eq. (6) corresponds to the cdf associated to a Dirac's delta located at ε_t. A third alternative is

η_t(z, d) = d_t(z) / max{π(z), q_t(z|S_t)},   (7)

which is well defined for z ∈ X and 0 ≤ d ≤ max{π(z), q_t(z|S_t)}, since

d_t(z) = max{π(z), q_t(z|S_t)} − min{π(z), q_t(z|S_t)} ≤ max{π(z), q_t(z|S_t)}.   (8)

This rule appears in other related algorithms, as discussed in Section 6.1. Furthermore, it corresponds to the cdf of a uniform random variable defined in the interval [0, max{π(z), q_t(z|S_t)}]. Hence, for a given value of z, the update test can be implemented as follows: (a) draw a sample v′ uniformly distributed in the interval [0, max{π(z), q_t(z|S_t)}]; (b) if v′ ≤ d_t(z), add z to the set of support points. A graphical representation of this rule is given in Fig. 4, whereas Table 2 summarizes all the previously described update rules.

Fig. 3 Graphical representation of the underlying idea behind the update control test. For simplicity, in this figure, we have assumed η_t(z, d) = η_t(d). As the proposal function q_t becomes closer and closer to π, the probability of adding a new node to S_t decreases
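The three update rules above translate directly into code. The sketch below implements Rule 1 (Eq. (5)), Rule 2 (Eq. (6)), and the uniform-draw form of Rule 3 (Eq. (7)); target and proposal stand for the unnormalized π and q_t:

```python
import numpy as np

def rule_1(d, beta=1.0):
    """Eq. (5): cdf of an exponential random variable, eta(d) = 1 - e^{-beta d}."""
    return 1.0 - np.exp(-beta * d)

def rule_2(d, eps=0.1):
    """Eq. (6): step function, i.e., the cdf of a Dirac delta located at eps."""
    return 1.0 if d >= eps else 0.0

def rule_3_test(z, target, proposal, rng):
    """Eq. (7) via the equivalent uniform draw: add z iff
    v' ~ U([0, max{pi(z), q_t(z)}]) falls below d_t(z) = |pi(z) - q_t(z)|."""
    pz, qz = target(z), proposal(z)
    v = rng.uniform(0.0, max(pz, qz))
    return v <= abs(pz - qz)          # True -> add z to the support set
```

When the proposal matches the target at z (d_t(z) = 0), Rule 3 essentially never adds the point, which is the "sticky" behavior described in Fig. 3.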

Theoretical results
In this section, we provide some theoretical results regarding the ergodicity of the proposed approach, the convergence of a sticky proposal to the target, and the expected growth of the number of support points of the proposal.First of all, regarding the ergodicity of the AISM, we have the following theorem.
Theorem 1 (Ergodicity of AISM) Let x_1, x_2, ..., x_{T−1} be the set of states generated by the AISM algorithm in Table 1, using a valid adaptive proposal function, q̄_t(x|S_t) = (1/c_t) q_t(x|S_t), constructed according to Definition 1, and a test rule fulfilling the conditions in Definition 3. The pdf of x_t, p_t(x), converges geometrically in total variation (TV) norm to the target, π̄(x) = (1/c_π) π(x), i.e.,

‖π̄(x) − p_t(x)‖_TV ≤ ∏_{ℓ=0}^{t−1} (1 − a_ℓ),   (9)

where

a_ℓ = (c_π / c_ℓ) inf_{x∈X} [q_ℓ(x|S_ℓ) / π(x)],

with c_π and c_ℓ denoting the normalizing constants of π(x) and q_ℓ(x|S_ℓ), respectively.
Proof See Appendix A.
Theorem 1 ensures that the pdf of the states of the Markov chain becomes closer and closer to the target pdf as t increases, since 0 ≤ 1 − a_t ≤ 1 and thus the product on the right hand side of (9) is a decreasing function of t. This theorem is a direct consequence of Theorem 2 in [14], and ensures the ergodicity of the proposed adaptive MCMC approach. Regarding the convergence of a sticky proposal to the target, we consider the following conjecture.
Conjecture 1 (Convergence of SP to the target) Let π̄(x) = (1/c_π) π(x) be a continuous and bounded target pdf that has bounded first and second derivatives for all x ∈ X. Let q̄_t(x|S_t) = (1/c_t) q_t(x|S_t) be a sticky proposal pdf, constructed according to Definition 1 by using either a piecewise constant (PWC) or piecewise linear (PWL) approximation (given by Eqs. (3) and (4), respectively). Let us also assume that the support points have been obtained by applying a test rule according to Definition 3 within the AISM algorithm described in Table 1. Then, it is reasonable to assume that q̄_t(x|S_t) converges in L1 distance to π̄(x) as t increases (i.e., as the number of support points grows), i.e., as t → ∞,

D_1(π̄, q̄_t) = ∫_X |π̄(x) − q̄_t(x|S_t)| dx → 0.

An intuitive argumentation is provided in Appendix A.

Fig. 4 Graphical interpretation of the third rule in Eq. (7) for the update control test. Given a point z, this test can be implemented as follows: (1) draw a sample v′ ∼ U([0, max{π(z), q_t(z|S_t)}]); (2) then, if v′ ≤ d_t(z), add z to the set of support points, i.e., S_{t+1} = S_t ∪ {z}. a The interval [0, max{π(z), q_t(z|S_t)}] and the distance d_t(z). b The case when v′ ≤ d_t(z), so that the point is incorporated into the set of support points, whereas c illustrates the case when v′ > d_t(z); hence, S_{t+1} = S_t. Note that, as the proposal function q_t becomes closer and closer to π (i.e., d_t(z) decreases for any z), the probability of adding a new node to S_t decreases. In the first and second rules, we have η_t(z, d) = η_t(d)

Note that Conjecture 1 essentially shows that the "sticky" condition is fulfilled for PWC and PWL proposals and continuous, bounded targets with bounded first and second derivatives. Note also that this conjecture implies that q̄_t(x|S_t) → π̄(x) almost everywhere. Combining Theorem 1 and Conjecture 1, we get the following corollary.
Corollary 2 Let x_1, x_2, ..., x_{T−1} be the set of states generated by the AISM algorithm in Table 1, using either a PWC or a PWL sticky proposal function, q̄_t(x|S_t) = (1/c_t) q_t(x|S_t), constructed according to Definition 2, and a test rule fulfilling the conditions in Definition 3. Let π̄(x) = (1/c_π) π(x) be a continuous and bounded target pdf that has bounded first and second derivatives for all x ∈ X. Then, ‖π̄(x) − q̄_t(x)‖_TV → 0 as t → ∞.
Finally, we also have a bound on the expected growth of the number of support points, as provided by the following theorem.
Theorem 3 Let d_t(z) = |π(z) − q_t(z)| be the L1 distance between the bounded target, π(x), and an arbitrary sticky proposal function, q_t(x), constructed according to Definition 2. Let also η_t(z, d) = η_t(d) be an acceptance function that only depends on z through d = d_t(z) and fulfills the conditions in Definition 3. The expected probability of adding a new support point in the AISM algorithm of Table 1 at the t-th iteration satisfies

E[P_a] ≤ η_t((C / c_t) D_1(π, q_t)),

where D_1(π, q_t) = ∫_X d_t(z) dz, and C = max_{z∈X} q_t(z|S_t) is a constant that depends on the sticky proposal used. Furthermore, under the conditions of Conjecture 1,

E[P_a] → 0, as t → ∞.

Theorem 3 sets a bound on the expected probability of adding new support points, and thus on the expected rate of growth of the number of support points. Furthermore, under certain smoothness assumptions for the target (i.e., that π(x) is twice continuously differentiable), it also guarantees that this expectation tends to zero as the number of iterations increases, hence implying that fewer points are added as the algorithm evolves. Finally, note that Theorem 3 has been derived only for η_t(z, d) = η_t(d). However, under certain mild assumptions, it can be easily extended to more general test functions, as stated in the following corollary.

Corollary 4 Let η_t(z, d_t(z)) be a concave function of d_t(z). Then, if the rest of the conditions in Theorem 3 are satisfied, the expected probability of adding a new support point in the AISM algorithm of Table 1 at the t-th iteration satisfies

E[P_a] ≤ η_t((C / c_t) D_1(π, q_t)),
where D_1(π, q_t) = ∫_X d_t(z) dz and C = max_{z∈X} q_t(z|S_t). Furthermore, under the conditions of Conjecture 1, E[P_a] → 0 as t → ∞. Note that Corollary 4 allows us to extend the results of Theorem 3 to update rule 3, which corresponds to η_t(z, d) = d / max{π(z), q_t(z|S_t)}, a concave (indeed, linear) function of d.

Related works

Other examples of sticky MCMC methods
The novel class of adaptive independent MCMC methods encompasses several existing algorithms already available in the literature, as shown in Table 3. We denote the proposal pdf employed in these methods as p_t(x) and, for simplicity, we have removed the dependence on S_t in the function q_t(x). The griddy Gibbs sampler [15] builds a proposal pdf as in Eq. (3), which is never adapted later. ARMS [12] and IA2RMS [13] use as proposal density p_t(x) ∝ min{q_t(x), π(x)}, where q_t(x) is built using different alternative methods [12,13,16,18].

Table 3 Existing algorithms as special cases of the sticky framework
  Griddy Gibbs [15]: proposal pdf p_t(x) built as in Eq. (3); never updated, i.e., Rule 2 with ε = ∞, so that P_a(z) = 0 for all z.
  ARMS [12,16]: proposal pdf p_t(x) ∝ min{q_t(x), π(x)}; update rule: Rule 3 if q_t(z) ≥ π(z); no update, i.e., P_a(z) = 0, if q_t(z) < π(z).
  IA2RMS [13]: proposal pdf p_t(x) ∝ min{q_t(x), π(x)}, with q_t built as in Eqs. (3)-(4); update rule: Rule 3.
The ARS method in [19] is a special case of ARMS and IA2RMS, so that ARS can also be considered as belonging to the new class of techniques.

Note that it is possible to draw easily from p_t(x) ∝ min{q_t(x), π(x)} using the rejection sampling principle [24,25], i.e., using the following procedure (in order to draw one sample x_a): (1) draw x′ ∼ q̄_t(x) and u′ ∼ U([0, 1]); (2) if u′ ≤ min{q_t(x′), π(x′)}/q_t(x′), accept x_a = x′; otherwise, repeat from 1. The accepted sample x_a has pdf p_t(x) ∝ min{q_t(x), π(x)}. Moreover, ARMS adds new points to S_t using the update Rule 3 only when q_t(z) ≥ π(z), so that P_a(z) = 1 − π(z)/q_t(z). Otherwise, if q_t(z) < π(z), ARMS does not add new nodes (see the discussion in [13] about the issues in ARMS mixing). Then, the update rule for ARMS can be written as P_a(z) = max{0, 1 − π(z)/q_t(z)}. Furthermore, the double update check used in IA2RMS coincides exactly with Rule 3 when p_t(x) ∝ min{q_t(x), π(x)} is employed as proposal pdf. Finally, note that ARMS and IA2RMS contain the ARS method in [19] as a special case when q_t(x) ≥ π(x), ∀x ∈ X and ∀t ∈ N. Hence, ARS can also be considered a special case of the new class of algorithms.
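The rejection sampling step used by ARMS and IA2RMS to draw from p_t(x) ∝ min{q_t(x), π(x)} can be sketched as follows; sample_q, q_pdf, and target are hypothetical callables for q̄_t, q_t, and the unnormalized π:

```python
import numpy as np

def draw_from_min(sample_q, q_pdf, target, rng):
    """Rejection sampler for p_t(x) proportional to min{q_t(x), pi(x)}.

    q_t is a valid envelope since min{q_t, pi} <= q_t everywhere, so a draw
    x' ~ q_t is accepted with probability min{q_t(x'), pi(x')} / q_t(x')."""
    while True:
        x = sample_q(rng)
        if rng.uniform() <= min(q_pdf(x), target(x)) / q_pdf(x):
            return x
```

When q_t(x) ≥ π(x) everywhere (the ARS regime), the accepted samples are exact draws from π̄, which is why ARS appears as a limit case of this scheme.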

Related algorithms
Other related methods, using non-parametric proposals, can be found in the literature. Samplers for drawing from univariate pdfs, using similar proposal constructions, have been proposed in [20], but the sequence of adaptive proposals does not converge to the target. Interpolation procedures for building the proposal pdf are also employed in [22,23]. The authors in [22,23] suggest building the proposal by B-spline procedures. However, in this case, the resulting proposal is of random-walk type (not independent) and the resulting algorithm is not adaptive. Furthermore, the shape of the proposal does not converge to the shape of the target; only local approximations via B-spline interpolation are obtained. The methods in [12,13,15] are included in the sticky class of algorithms, as pointed out in Section 6.1. In [16], the authors suggest an alternative proposal construction considering second-order polynomial pieces, in order to be used within the ARMS structure [12]. The adaptive rejection sampling (ARS) method [19,26] is not an MCMC technique, but it is strongly related to the sticky approach, since it also employs an adaptive non-parametric proposal pdf. ARS needs to be able to build a proposal such that q_t(x) ≥ π(x), ∀x ∈ X and ∀t ∈ N. This is possible only when additional requirements on the target are assumed (for instance, log-concavity). For this reason, several extensions of the standard ARS have also been proposed [25,27,28] for tackling wider classes of target distributions. In [29], the non-parametric proposal is still adapted, but in this case the number of support points remains constant, fixed in advance by the user. Different non-parametric construction procedures addressing multivariate distributions have also been presented [21,30,31].
Other techniques have been developed to be applied specifically in the Monte Carlo-within-Gibbs scenario, when it is not possible to draw directly from the full-conditional pdfs. In [32], an importance sampling approximation of the univariate target pdf is employed, and a resampling step is performed in order to provide an "approximate" sample from the full-conditional. In [18], the authors suggest a non-adaptive strategy for building a suitable non-parametric proposal via interpolation. In that work, the interpolation procedure is first performed using a huge number of nodes, and then many of them are discarded according to a suitable criterion. Several other alternatives involving MH-type algorithms have been used for sampling efficiently from the full-conditional pdfs within a Gibbs sampler [5-7, 15, 33-35].

Adaptive independent sticky MTM
In this section, we consider an alternative MCMC structure for the second stage described in Section 2: using a multiple-try Metropolis (MTM) approach [36,37]. The resulting technique, adaptive independent sticky MTM (AISMTM), is an extension of AISM that considers multiple candidates as possible new states at each iteration. This improves the ability of the chain to explore the state space [37]. At iteration t, AISMTM builds the proposal density q_t(x|S_t) (step 1 of Table 4) using the current set of support points S_t. Let x_t = x be the current state of the chain and x′_j (j = 1, ..., M) a set of i.i.d. candidates simulated from q̄_t(x|S_t) (see step 2 of Table 4). Note that AISMTM uses an independent proposal [2], just like AISM. As a consequence, the auxiliary points in step 2.3 of Table 4 can be deterministically set ([1], pp. 119-120), [37].
The selected candidate is then accepted or rejected according to the acceptance probability α given in step 2. Finally, step 3 updates the set S_t, possibly including a new point z ∈ Z = {z_1, ..., z_M}; since these auxiliary points are always different from the new state, AISMTM is an independent MCMC algorithm according to Holden's definition [14].

Table 4 Adaptive independent sticky multiple-try Metropolis (AISMTM)
For t = 0, ..., T − 1:
1 Construction of the proposal: build a proposal function q_t(x|S_t) via a suitable interpolation procedure using the set of support points S_t (see Section 3).
2 MTM step:
  2.1 Draw M i.i.d. candidates x′_1, ..., x′_M ∼ q̄_t(x|S_t).
  2.2 Select one of them, x′ = x′_j, with probability proportional to suitable importance weights (e.g., w(x′_i) = π(x′_i)/q_t(x′_i|S_t)).
  2.3 Set the auxiliary points deterministically from the non-selected candidates and the previous state.
  2.4 Set x_t = x′ and z_j = x_{t−1} with the MTM acceptance probability α. Otherwise, set x_t = x_{t−1} and z_j = x′.
3 Test to update S_t: perform a statistical test on the auxiliary points in Z and set S_{t+1} accordingly (see Section 7.1).

For the sake of simplicity, we only consider the case where a single point can be added to S_t at each iteration. However, this update step can be easily extended to allow more than one sample to be included into the set of support points. Note also that AISMTM becomes AISM for M = 1. AISMTM provides a better choice of the new support points than AISM (see Section 9). The price to pay for this increased efficiency is the higher computational cost per iteration. However, since the proposal quickly approaches the target, it is possible to design strategies with a decreasing number of tries in order to reduce the computational cost.
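As an illustration, a standard independent MTM step with importance weights w(x) = π(x)/q_t(x) can be sketched as follows (this is the usual I-MTM form from the MTM literature; the exact variant used in Table 4 may differ in details):

```python
import numpy as np

def imtm_step(x_prev, log_w, sample_proposal, M, rng):
    """One independent multiple-try Metropolis step (sketch).

    log_w(x) = log[pi(x) / q_t(x|S_t)] is the log importance weight.
    Returns the next state and the list of discarded points (candidate z's
    for the update stage)."""
    cands = [sample_proposal(rng) for _ in range(M)]
    lw = np.array([log_w(x) for x in cands])
    w = np.exp(lw - lw.max())                        # stabilized weights
    j = rng.choice(M, p=w / w.sum())                 # select one candidate
    # auxiliary set: the M-1 non-selected candidates plus the previous state
    w_prev = np.exp(log_w(x_prev) - lw.max())
    alpha = min(1.0, w.sum() / (w.sum() - w[j] + w_prev))
    aux = [c for i, c in enumerate(cands) if i != j]
    if rng.uniform() < alpha:
        return cands[j], aux + [x_prev]              # accepted
    return x_prev, aux + [cands[j]]                  # rejected
```

The discarded points returned by the step are exactly the auxiliary variables that the AISMTM update stage tests for inclusion in S_t.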

Update rules for AISMTM
The update rules presented above require changes that take into account the multiple samples available when used in AISMTM. As an example, let us consider the update scheme in Eq. (7). Considering for simplicity that only a single point can be incorporated into S_t, the update step for S_t can be split into two parts: choose a "bad" point among Z = {z_1, ..., z_M}, and then test whether it should be added or not. Thus, first a z′ = z_i is selected among the samples in Z with probability proportional to d_t(z_i) = |π(z_i) − q_t(z_i|S_t)|, for i = 1, ..., M. This step selects (with high probability) a sample where the proposal value is far from the target. Then, the point z′ is included in S_t with probability P_a(z′) = d_t(z′)/max{π(z′), q_t(z′|S_t)}, exactly as in Eq. (7). Therefore, the probability of adding a point z_i to S_t is the product of the selection and inclusion probabilities, which defines a probability mass function over M + 1 elements: z_1, ..., z_M and the event {no addition} that, for simplicity, we denote with the empty set symbol ∅. Thus, the update rule in step 3 of Table 4 can be rewritten as a unique step.
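The two-part update just described can be sketched directly; target and proposal are hypothetical callables for the unnormalized π and q_t(·|S_t), both vectorized over arrays of points:

```python
import numpy as np

def aismtm_update(z_points, target, proposal, rng):
    """AISMTM update stage (sketch): select one 'bad' auxiliary point with
    probability proportional to d_t(z_i) = |pi(z_i) - q_t(z_i)|, then run
    the Rule-3 test of Eq. (7) on it.  Returns the point to add, or None."""
    z = np.asarray(z_points, dtype=float)
    d = np.abs(target(z) - proposal(z))
    if d.sum() == 0.0:
        return None                        # proposal matches target at all z_i
    zi = z[rng.choice(len(z), p=d / d.sum())]
    pz, qz = target(zi), proposal(zi)
    if rng.uniform(0.0, max(pz, qz)) <= abs(pz - qz):
        return zi                          # add this point to S_t
    return None                            # event {no addition}
```

The None return corresponds to the {no addition} event ∅ of the (M + 1)-element probability mass function described above.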

Range of applicability and multivariate generation
The range of applicability of the sticky MCMC methods is briefly discussed below. On the one hand, sticky MCMC methods can be employed as stand-alone algorithms. Indeed, in many applications, it is necessary to draw samples from complicated univariate target pdfs (for example, in signal processing; see [38]). In this case, the sticky schemes provide virtually independent samples (i.e., with correlation close to zero) very efficiently. It is also important to remark that AISM and AISMTM automatically provide an estimation of the normalizing constant of the target (a.k.a. marginal likelihood or Bayesian evidence), since, with a suitable choice of the update test, the proposal approaches the target pdf almost everywhere. This is usually a hard task using MCMC methods [1,2,11]. AISM and AISMTM can also be applied directly to draw from a multivariate distribution if a suitable construction procedure for the multivariate sticky proposal is designed (e.g., see [30,31,39,40] and ([21], Chapter 11)). However, devising and implementing such procedures in high-dimensional state spaces are not easy tasks. Therefore, in this paper, we focus on the use of the sticky schemes within other Monte Carlo techniques (such as Gibbs sampling or the hit and run algorithm) to draw from multivariate densities. More generally, Bayesian inference often requires drawing samples from complicated multivariate posterior pdfs, π(x|y) with x = [x_1, ..., x_L] ∈ R^L. For instance, this happens in blind equalization and source separation, or spectral analysis [3,4]. For simplicity, in the following we denote the target pdf as π(x). When direct sampling from π(x) in the space R^L is unfeasible, a common approach is the use of Gibbs-type samplers [2]. This type of method splits the complex sampling problem into simpler univariate cases. Below, we briefly summarize some well-known Gibbs-type algorithms.
Gibbs sampling. Let us denote as x^(0) a randomly chosen starting point. At iteration k ≥ 1, a Gibbs sampler obtains the ℓ-th component (ℓ = 1, ..., L) of x, x_ℓ, drawing from the full conditional pdf of x_ℓ given all the information available, namely x_ℓ^(k) ∼ π_ℓ(x_ℓ | x_1^(k), ..., x_{ℓ−1}^(k), x_{ℓ+1}^(k−1), ..., x_L^(k−1)). The steps above are repeated for k = 1, ..., N_G, where N_G is the total number of Gibbs iterations. However, even sampling from π_ℓ can often be complicated. In some specific situations, rejection samplers [41–45] and their adaptive versions, adaptive rejection sampling (ARS) algorithms, are employed to generate (one) sample from π_ℓ [12, 19, 25, 27–29, 40, 46, 47]. The ARS algorithms are very appealing techniques since they construct a non-parametric proposal in order to mimic the shape of the target pdf, yielding in general excellent performance (i.e., independent samples from π_ℓ with a high acceptance rate). However, their range of application is limited to some specific classes of densities [19, 47].
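The sweep described above can be sketched as a generic Python skeleton (a sketch, not the paper's implementation; the conditional samplers are user-supplied callables, one per component):

```python
import random

def gibbs(conditional_samplers, x0, n_iter):
    # Generic Gibbs sampler: conditional_samplers[l](x) must return a draw of
    # x_l from its full conditional given the current state x (components
    # 1..l-1 have already been refreshed within the current sweep).
    x = list(x0)
    chain = []
    for _ in range(n_iter):
        for l, draw in enumerate(conditional_samplers):
            x[l] = draw(x)
        chain.append(tuple(x))
    return chain
```

Each entry of the returned chain is one complete state x^(k), k = 1, ..., N_G.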
More generally, when it is impossible to draw directly from a full-conditional pdf π_ℓ (and a rejection sampler cannot be applied either), an additional MCMC sampler is required in order to draw from π_ℓ [33]. Thus, in many practical scenarios, we have an MCMC method (e.g., an MH sampler) inside another MCMC scheme (i.e., the Gibbs sampler). In the so-called MH-within-Gibbs approach, only one MH step is often performed within each Gibbs iteration in order to draw from each complicated full-conditional. This hybrid approach preserves the ergodicity of the Gibbs sampler and provides good performance in many cases. On the other hand, several authors have noticed that using a single MH step for the internal MCMC is not always the best solution in terms of performance (cf. [48]). Other approximate approaches have also been proposed, considering the application of importance sampling within the Gibbs sampler [32].
Using a larger number of iterations for the MH algorithm increases the probability of overcoming the "burn-in" period, so that the last sample is distributed as the full-conditional [33–35]. Thus, this case is closer to the ideal situation, i.e., sampling directly from the full-conditional pdf. Moreover, unless the proposal is very well tailored to the target, a properly designed adaptive MCMC algorithm should provide less correlated samples than a standard MH algorithm. Several more sophisticated (adaptive or not) MH schemes for application "within-Gibbs" have been proposed in the literature [12, 13, 16, 18, 20, 23, 49, 50]. In general, these techniques employ a non-parametric proposal pdf in the same fashion as the ARS schemes (and as the sticky MCMC methods). It is important to remark that performing more steps of a standard or adaptive MH method within a Gibbs sampler can provide better performance than running a longer Gibbs chain applying only one MH step (see, e.g., [12, 13, 16, 17]).
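For illustration, an inner random-walk MH kernel performing T steps on a univariate log-density might look as follows (a minimal sketch; within Gibbs, the last element of the returned chain would become the new component value):

```python
import math
import random

def mh_chain(logpdf, x0, n_steps, scale=1.0):
    # Random-walk Metropolis-Hastings targeting a univariate log-density.
    # Returns the whole inner chain; standard MH-within-Gibbs keeps only the
    # last sample as the new value of the component being updated.
    x, lp = x0, logpdf(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, scale)
        lp_prop = logpdf(prop)
        # Accept with probability min(1, pi(prop) / pi(x))
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```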

Recycling Gibbs sampling.
Recently, an alternative Gibbs scheme, called the Recycling Gibbs (RG) sampler, has been proposed in the literature [51]. The combined use of RG with a sticky algorithm is particularly interesting since RG recycles and employs all the samples drawn from each full-conditional pdf in the final estimators. Clearly, this scheme fits specially well the use of an adaptive sticky MCMC algorithm, where several MCMC steps are performed for each full-conditional pdf.
Hit and Run. The Gibbs sampler only allows movements along the axes. In certain scenarios, e.g., when the variables x_ℓ are highly correlated, this can be an important limitation that slows down the convergence of the chain to the stationary distribution. The Hit and Run sampler is a valid alternative. Starting from x^(0), at the k-th iteration, it applies the following steps: 1. Choose uniformly a direction d^(k) in R^L. For instance, this can be done by drawing L samples v_ℓ from a standard Gaussian N(0, 1) and setting d^(k) = v/||v||, with v = [v_1, ..., v_L]^⊤. 2. Set x^(k) = x^(k−1) + λ^(k) d^(k), where λ^(k) is drawn from the univariate pdf p(λ) ∝ π(x^(k−1) + λ d^(k)), i.e., a slice of the target pdf along the direction d^(k).
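A single Hit-and-Run move can be sketched as follows (`sample_lambda` is a placeholder for whatever univariate sampler, e.g., a sticky MCMC scheme, draws λ from p(λ) ∝ π(x^(k−1) + λ d^(k))):

```python
import math
import random

def hit_and_run_step(x, sample_lambda):
    # One Hit-and-Run move: draw a uniform random direction on the unit
    # sphere, then move along it by a step lambda drawn from the 1-D slice
    # of the target (delegated to the user-supplied sample_lambda).
    d = [random.gauss(0.0, 1.0) for _ in x]
    norm = math.sqrt(sum(v * v for v in d))
    d = [v / norm for v in d]
    lam = sample_lambda(x, d)
    return [xi + lam * di for xi, di in zip(x, d)]
```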
Also in this case, we need to be able to draw from the univariate pdf p(λ) using either some direct sampling technique or another Monte Carlo method (e.g., see [50]).
There are several methods similar to the Hit and Run where drawing from a univariate pdf is required; for instance, the most popular one is the Adaptive Direction Sampling [52].
Sampling from univariate pdfs is also required inside other types of MCMC methods.For instance, this is the case of exchange-type MCMC algorithms [53] for handling models with intractable partition functions.In this case, efficient techniques for generating artificial observations are needed.Techniques which generalize the ARS method, using non-parametric proposals, have been applied for this purpose (see [54]).

Numerical simulations
In this section, we provide several numerical results comparing the sticky methods with several well-known MCMC schemes, such as the ARMS technique [12], the adaptive MH method in [10], and the slice sampler [55].^9 The first two experiments (which can be easily reproduced by interested users) correspond to bi-modal one-dimensional and two-dimensional targets, respectively, and are used as benchmarks to compare different variants of the AISM and AISMTM methods with other techniques. They allow us to show the benefits of the non-parametric proposal construction, even in these two simple experiments. Then, in the third example, we approximate the hyper-parameters of a Gaussian process (GP) [56], which is often used for regression purposes in machine learning for signal processing applications.

Multimodal target distribution
We study the ability of different algorithms to draw from multimodal densities (which are clearly non-log-concave).
As an example, we consider a mixture of Gaussians as target density, π(x) = 0.5 N(x; 7, 1) + 0.5 N(x; −7, 0.1), where N(x; μ, σ²) denotes the normal distribution with mean μ and variance σ². The two modes are so separated that ordinary MCMC methods either fail to visit one of them or remain indefinitely trapped in one of them. The goal is to approximate the expected value of the target (E[X] = 0 with X ∼ π(x)) via Monte Carlo. We test the ARMS method [12] and the proposed AISM and AISMTM algorithms. For AISM and AISMTM, we consider different construction procedures for the proposal pdf:
• P1: the construction given in [12], formed by exponential pieces, specifically designed for ARMS.
• P2: an alternative construction formed by exponential pieces obtained by a linear interpolation in the log-pdf domain, given in [13].
• P3: the construction using uniform pieces in Eq. (3).
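To make the setup concrete, the following Python sketch implements the target above and a simplified version of construction P3 (uniform pieces taking the larger of the two endpoint values on each interval; the exponential tails used in the paper's constructions are omitted here for brevity):

```python
import bisect
import math
import random

def target(x):
    # Bimodal target pi(x) = 0.5 N(x; 7, 1) + 0.5 N(x; -7, 0.1)
    def gauss_pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    return 0.5 * gauss_pdf(x, 7.0, 1.0) + 0.5 * gauss_pdf(x, -7.0, 0.1)

def build_pwc(support):
    # Simplified construction P3: on each interval (s_i, s_{i+1}] the proposal
    # is constant, taking the value max(pi(s_i), pi(s_{i+1})).
    heights = [max(target(a), target(b)) for a, b in zip(support, support[1:])]
    areas = [h * (b - a) for h, a, b in zip(heights, support, support[1:])]
    return heights, areas

def sample_pwc(support, areas):
    # Draw from the piecewise-constant proposal: pick an interval with
    # probability proportional to its area, then sample uniformly inside it.
    cum, tot = [], 0.0
    for a in areas:
        tot += a
        cum.append(tot)
    i = bisect.bisect_left(cum, random.random() * tot)
    return random.uniform(support[i], support[i + 1])
```

As the support set grows, the proposal built this way "sticks" to the target on the covered interval.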
Furthermore, for AISM and AISMTM, we consider the Update Rule 1 (R1) with different values of the parameter β, the Update Rule 2 (R2) with different values of the parameter ε, and the Update Rule 3 (R3) for the inclusion of a new node in the set S_t (see Section 4). More specifically, we first test AISM and AISMTM with all the construction procedures P1, P2, P3, and P4 jointly with the rule R3. Then, we test AISM with the construction P4 and the update test R2 with ε ∈ {0.005, 0.01, 0.1, 0.2}. For Rule 1, we consider β ∈ {0.3, 0.5, 0.7, 2, 3, 4}. All the algorithms start with S_0 = {−10, −8, 5, 10} and initial state x_0 = −6.6. For AISMTM, we have set M ∈ {10, 50}. For each independent run, we perform T = 5000 iterations of the chain.
The results given in Table 5 are the averages over 2000 runs, without removing any sample to account for the initial burn-in period. Table 5 shows the mean square error (MSE) in the estimation of E[X], the autocorrelation function ρ(τ) at different lags τ ∈ {1, 10, 50} (normalized, i.e., ρ(0) = 1), the approximated effective sample size (ESS) of the produced chain ([57], Chapter 4) (clearly, ESS ≤ T), the final number of support points m_T, and the computing time normalized with respect to the time spent by ARMS [12]. For simplicity, in Table 5 we have reported only the case of R2 with ε ∈ {0.005, 0.01}; other results are shown in Fig. 5. AISM and AISMTM outperform ARMS, providing a smaller MSE and correlation (both close to zero). This is because ARMS does not allow a complete adaptation of the proposal pdf, as highlighted in [13]. The adaptation in AISM and AISMTM provides a better approximation of the target than ARMS, as also indicated by the ESS, which is substantially higher for the proposed methods. ARMS is in general slower than AISM for two main reasons. Firstly, the construction P1 (used by ARMS) is more costly, since it requires the computation of several intersection points [12]; this is not required by the procedures P2, P3, and P4. Secondly, the effective number of iterations in ARMS is higher than T = 5000 (the averaged value is ≈ 5057.83) due to the discarded samples in the rejection step (in this case, the chain is not moved forward).
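For reference, the autocorrelation and ESS diagnostics reported in Table 5 can be approximated from a chain as in the following sketch (using the common truncated estimate ESS ≈ T/(1 + 2 Σ_τ ρ(τ)); the paper follows [57], Chapter 4, which may differ in details):

```python
def autocorr(chain, lag):
    # Normalized empirical autocorrelation rho(lag), so that rho(0) = 1.
    n = len(chain)
    mu = sum(chain) / n
    var = sum((x - mu) ** 2 for x in chain) / n
    cov = sum((chain[t] - mu) * (chain[t + lag] - mu) for t in range(n - lag)) / n
    return cov / var

def effective_sample_size(chain, max_lag=100):
    # ESS ~= T / (1 + 2 * sum_tau rho(tau)); the sum is truncated at the
    # first negative autocorrelation estimate.
    s = 0.0
    for lag in range(1, max_lag + 1):
        r = autocorr(chain, lag)
        if r < 0:
            break
        s += r
    return len(chain) / (1.0 + 2.0 * s)
```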
Figure 6a-d depicts the averaged autocorrelation function ρ(τ) for τ = 1, ..., 100 for the different techniques and constructions. Figure 6e-h shows the average acceptance probability (AAP; the value of α of the MH-type techniques) of accepting a new state as a function of the iteration t. We can see that, with AISM and AISMTM, the AAP approaches 1, since q_t becomes closer and closer to π. Figure 7 shows the evolution of the number of support points, m_t, as a function of t = 1, ..., T = 5000, again for the different techniques and constructions. Note that, with AISMTM and P3-P4, the AAP approaches 1 so quickly, and the correlation is so small (virtually zero), that it is difficult to recognize the corresponding curves, which are almost constant and close to one or zero, respectively. The constructions P3 and P4 provide the best results. In this experiment, P4 seems to provide the best compromise between performance and computational cost. We also test AISM with update R2 for different values of ε (and different constructions). The number of nodes m_t and the AAP as a function of t for these cases are shown in Fig. 5. These figures and the results given in Table 5 show that AISM-P4-R2 provides extremely good performance with a small computational cost (e.g., the final number of points is only m_T ≈ 43 with ε = 0.005). This shows that the update rule R2 is a very promising choice given the obtained results. Moreover, we can observe that the update rule R1 is very parsimonious in adding new points, even considering a wide range of values of β, from 0.3 to 4. The results with R1 are also good in this case, so that this rule seems to be a robust and interesting alternative to R2 (which seems more dependent on the choice of ε). Finally, Fig. 8 shows the histograms of the 5000 samples obtained by one run of AISM-P3-R1 with β = 0.1 and β = 3. The target pdf is depicted with a solid line and the final constructed proposal pdf is shown with a dashed line.

Missing mode experiment
Let us consider again the previous bimodal target pdf, assuming now that only limited information about the range of the target pdf is provided. We test the robust implementation described in Appendix E.1, i.e., we employ a proposal density combining q_1(x) = N(x; 0, σ_p²) and a sticky proposal q_2(x|S_t) constructed using the procedure P3 in Eq. (3) (we use the update rule 1 with β = 0.1). We consider the most defensive strategy, setting α_t = α_0 = 0.5 for all t. We test σ_p ∈ {2, 3, 8, 10}. We compute the mean absolute error (MAE) in estimating the variance Var[X] = 49.55, where X ∼ π(x), with different MCMC methods generating chains of length T = 10^4. We compare this Robust AISM-P3-R1 scheme with a standard MH method using q_1(x) as proposal pdf and with the adaptive MH technique where the scale parameter σ_p is adapted online. The results, averaged over 10^3 independent runs, are given in Table 6.

Heavy-tailed target distribution
In this section, we test the AISM method for drawing from a target with heavy tails. We show that the sticky MCMC schemes can be applied in this scenario even when using a proposal pdf with exponential (i.e., "light") tails. However, we recall that an alternative construction of the tails is always possible, as suggested in Appendix E.2, using Pareto tails, for instance. More specifically, we consider the Lévy density, i.e., π(x) ∝ (x − λ)^{−3/2} exp(−ν/(2(x − λ))), ∀x ≥ λ. Given a random variable X ∼ π(x), we have that E[X] = ∞ and Var[X] = ∞ due to the heavy tail of the Lévy distribution. However, the normalizing constant 1/c_π, such that (1/c_π) π(x) integrates to one, can be determined analytically, and is given by 1/c_π = sqrt(ν/(2π)). Our goal is estimating the normalizing constant 1/c_π via Monte Carlo simulation when λ = 0 and ν = 2. In general, it is difficult to estimate a normalizing constant using MCMC outputs [2, 58, 59]. However, in the sticky MCMC algorithms (with update rules such as R1 and R3 in Table 2), the normalizing constant of the adaptive non-parametric proposal approaches the normalizing constant of the target. We compare AISM-P4-R3 with different multiple-try Metropolis (MTM) schemes. For the MTM schemes, we use the following procedure: given the MTM outputs obtained in one run, we use these samples as nodes, construct the approximating function using the construction P4 (considering these nodes), and finally compute the normalizing constant of this approximated function. Note that we use the same construction procedure P4 for a fair comparison.
(Fig. 8 (Ex-Sect-9.1): a Histogram of the 5000 samples obtained by one run of AISM-P3-R1 with β = 0.1 (28 final points). b Histogram of the 5000 samples obtained by one run of AISM-P3-R1 with β = 3 (79 final points). The target pdf, π(x), is depicted with a solid line and the final constructed proposal pdf, q_T(x), with a dashed line.)
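The closed-form value of the normalizing constant can be checked numerically; the sketch below integrates the unnormalized Lévy density after the change of variable u = 1/(x − λ), which turns the integral into ∫_0^∞ e^{−νu/2} u^{−1/2} du = sqrt(2π/ν):

```python
import math

def levy_norm_const(nu=2.0, upper=60.0, n=2_000_000):
    # Integral of the unnormalized Levy density exp(-nu/(2(x-l))) / (x-l)^{3/2}
    # over x > l. With u = 1/(x - l), the integrand becomes
    # exp(-nu*u/2) * u^{-1/2}, integrated over u > 0 (midpoint rule on (0, upper]).
    h = upper / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h
        total += math.exp(-nu * u / 2.0) / math.sqrt(u)
    return total * h

# For nu = 2: c_pi = sqrt(pi), hence 1/c_pi = 1/sqrt(pi), about 0.5642.
```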
For AISM, we start with only m_0 = 3 support points, S_0 = {s_1 = 0, s_2, s_3}, where the two nodes s_2, s_3 are randomly chosen at each run, i.e., s_2, s_3 ∼ U([1, 10]) with s_2 < s_3. We also test three different MTM techniques, two of them using an independent proposal pdf (MTM-ind) and the last one a random walk proposal pdf (MTM-rw). For the MTM schemes, we set M = 1000 tries and importance weights designed to choose the best candidate at each step [37]. We set T = 5000 for all the methods. Note that the total number of target evaluations E of AISM is only E = T = 5000, whereas E = MT = 5 · 10^6 for the MTM-ind schemes and E = 2MT = 10^7 for the MTM-rw algorithm (see [37] for further details). For the MTM-ind methods, we use an independent proposal q(x) ∝ exp(−(x − μ)²/(2σ²)) with μ ∈ {10, 100} and σ² = 2500. In MTM-rw, we have a random walk proposal q(x|x_{t−1}) ∝ exp(−(x − x_{t−1})²/(2σ²)) with σ² = 2500. Note that we need to choose huge values of σ² due to the heavy-tailed nature of the target.
The results, averaged over 2000 runs, are summarized in Table 7. Note that the real value of 1/c_π when ν = 2 is 1/√π = 0.5642. AISM-P4-R3 provides better results than all of the MTM approaches tested, with only a fraction of their computational cost. Furthermore, AISM-P4-R3 avoids the critical issue of parameter selection (choosing a small value of σ² in this case can easily lead to very poor performance).

Sticky MCMC methods within Gibbs sampling

Example 1: comparing different MCMC-within-Gibbs schemes
In this example, we show that, even in a simple bivariate scenario, AISM schemes can be useful within a Gibbs sampler. Let us consider a bimodal target density π(x_1, x_2); densities with this non-linear analytic form have been used in the literature (cf. [10]) to compare the performance of different Monte Carlo algorithms. We apply N_G steps of a Gibbs sampler to draw from π(x_1, x_2), using ARMS [12], AISM-P4-R3, and AISMTM-P4-R3 within the Gibbs sampler to generate samples from the full-conditionals, always starting with the initial support set S_0 = {−10, −6, −4.3, 0, 3.2, 3.8, 4.3, 7, 10}. From each full-conditional pdf, we draw T samples and take the last one as the output of the Gibbs sampler. We also apply a standard MH algorithm with a random walk proposal q(x_{ℓ,t}|x_{ℓ,t−1}) ∝ exp(−(x_{ℓ,t} − x_{ℓ,t−1})²/(2σ_p²)) for ℓ ∈ {1, 2}, σ_p ∈ {1, 2, 10}, 1 ≤ t ≤ T. Furthermore, we test an adaptive parametric approach (as suggested in [8]). Specifically, we apply the adaptive MH method in [10], where the scale parameter of q(x_{ℓ,t}|x_{ℓ,t−1}) is adapted online, i.e., σ_{p,t} varies with t (we set σ_{p,0} = 3). We also consider the application of the slice sampler [55] and the Hamiltonian Monte Carlo (HMC) method [60]. For the standard MH and slice samplers, we have used the functions mhsample.m and slicesample.m directly provided by MATLAB. For HMC, we consider the code provided in [61] with ǫ_d = 0.01 as discretization parameter and L = 1 as length of the trajectory.^10 We recall that a preliminary code of AISM is also available at the Matlab-FileExchange webpage.
We consider two different initializations, In1 and In2, for all the methods-within-Gibbs. We use all the samples x_{ℓ,T}^(k), for k = 1, ..., N_G, to estimate four statistics that involve the first four moments of the target: mean, variance, skewness, and kurtosis. Table 8 provides the mean absolute error (MAE; averaged over 500 independent runs) for each of the four statistics estimated, and the time required by the Gibbs sampler (normalized by considering 1.0 to be the time required by ARMS with T = 50).
The results are provided in Table 8. (All the techniques are used within a Gibbs sampler: N_G is the number of iterations of the Gibbs sampler, whereas T is the number of iterations of the technique within Gibbs, so that T × N_G is the global number of MCMC iterations. The best results, in each column and in each panel, are highlighted with italics.) First of all, we notice that AISM outperforms ARMS and the slice sampler for all values of T and N_G, in terms of both performance and computational time. Regarding the use of the MH algorithm within Gibbs, the results depend largely on the choice of the variance of the proposal, σ_p², and on the initialization, showing the need for adaptive MCMC strategies. For a fixed value of T × N_G, the AISM schemes provide results close to the smallest averaged MAE for In1 and the best results for In2, with a slight increase in computing time w.r.t. the standard MH algorithm. Finally, Table 8 shows the advantage of the non-parametric adaptive independent sticky approach w.r.t. the parametric adaptive approach [8, 10].

Example 2: comparison with an ideal Gibbs sampler
The ideal scenario for the Gibbs sampling scheme is to be able to draw samples directly from the full-conditional pdfs (using a transformation or a direct method). In this section, we compare the performance of the MH- and AISM-within-Gibbs schemes with this ideal case. Let us consider two Gaussian full-conditional densities, with ξ_1 = 1 and ξ_2 = 0.2. The joint pdf is a bivariate Gaussian pdf with mean vector μ = [0, 0]^⊤ and covariance matrix Σ = [1.08 0.54; 0.54 0.31]. We apply a Gibbs sampler with N_G iterations to estimate both the mean and the covariance of the joint pdf. Then, we calculate the MSE in the estimation of all the elements of μ and Σ, averaged over 2000 independent runs. We use this simple case, where we can draw directly from the full-conditionals, to check the performance of MH- and AISM-P3-R3-within-Gibbs as a function of T and N_G. For the MH scheme, we use a Gaussian random walk proposal. We set N_G = 10^3 and x_{ℓ,0}^(i) = 1 (both for MH and AISM-P3-R3), and increase the value of T. The results can be seen in Fig. 9. AISM-within-Gibbs easily reaches the same performance as the ideal case (sampling directly from the full-conditionals) even for small values of T, whereas MH-within-Gibbs needs a substantially larger value of T (up to T = 500 for σ_p = 0.1) to attain similar performance. Note the importance of using a proper parameter σ_p for attaining good performance. This observation shows the importance of employing an adaptive technique within Gibbs.
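For reference, the ideal sampler for this example can be sketched directly, deriving the Gaussian full-conditionals from the stated joint covariance via standard conditioning (a sketch; the paper parameterizes the conditionals through ξ_1 and ξ_2):

```python
import random

# Bivariate Gaussian of the example: zero mean, covariance
# Sigma = [[1.08, 0.54], [0.54, 0.31]].
S11, S12, S22 = 1.08, 0.54, 0.31

def draw_x1_given_x2(x2):
    # Standard conditioning: x1 | x2 ~ N(S12/S22 * x2, S11 - S12^2/S22)
    return random.gauss((S12 / S22) * x2, (S11 - S12 ** 2 / S22) ** 0.5)

def draw_x2_given_x1(x1):
    # x2 | x1 ~ N(S12/S11 * x1, S22 - S12^2/S11)
    return random.gauss((S12 / S11) * x1, (S22 - S12 ** 2 / S11) ** 0.5)

def ideal_gibbs(n_iter):
    x1 = x2 = 1.0   # initial state set to 1, as in the experiment
    chain = []
    for _ in range(n_iter):
        x1 = draw_x1_given_x2(x2)
        x2 = draw_x2_given_x1(x1)
        chain.append((x1, x2))
    return chain
```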

Sticky MCMC methods within Recycling Gibbs sampling
In this section, we test the sticky MCMC methods within the Recycling Gibbs (RG) sampling scheme, where the intermediate samples drawn from each full-conditional pdf are used in the final estimator [51]. We consider a simple numerical simulation (easily reproducible by any practitioner) involving a bi-dimensional target pdf π(x_1, x_2) which is bimodal and non-Gaussian. The goal is to approximate the expected value E[X], with X ∼ π(x_1, x_2), via Monte Carlo. We test different Gibbs techniques: the MH [2] and AISM-P3-R3 algorithms (with update rule 3 and the proposal construction in Eq. (3)), within both the Standard Gibbs (SG) and the RG sampling schemes. For the MH method, we use a Gaussian random walk proposal, for k = 1, ..., N_G, in all schemes.
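The difference between the two estimators can be sketched as follows, where `inner_chains[k]` holds the T inner samples drawn for the component of interest at the k-th Gibbs sweep (illustrative helper names):

```python
def recycling_estimate(inner_chains):
    # Recycling Gibbs (sketch): the estimator averages ALL inner samples
    # drawn from the full conditionals, not only the last one per sweep.
    total = sum(sum(chain) for chain in inner_chains)
    count = sum(len(chain) for chain in inner_chains)
    return total / count

def standard_estimate(inner_chains):
    # Standard Gibbs: only the last inner sample of each sweep is kept.
    last = [chain[-1] for chain in inner_chains]
    return sum(last) / len(last)
```

With T inner steps per sweep, the recycling estimator uses T × N_G samples instead of N_G.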

Optimal scale parameter for MH
First of all, we obtain the MSE in the estimation of E[X] for different values of the parameter σ for MH-within-SG (with T = 1 and N_G = 1000). Figure 10a shows the results averaged over 10^5 independent runs. The performance of the Standard Gibbs (SG) sampler depends strongly on the choice of σ in the internal MH method. We can observe that there exists an optimal value σ* ≈ 3. This shows the need for an adaptive scheme for drawing from the full-conditional pdfs. In the following, we compare the performance of AISM with that of this optimized MH method using the optimal scale parameter σ* = 3, in order to show the capability of the non-parametric adaptation employed in AISM with respect to a standard adaptation procedure [10].

Comparison among different schemes
For AISM-P3-R3, we start with the set of support points S_0 = {−10, −6, −2, 2, 6, 10}. We have averaged the MSE values over 10^5 independent runs for each Gibbs scheme.
In Fig. 10b (represented in log-scale), we fix N_G = 1000 and vary T. As T grows, when a standard Gibbs (SG) sampler is used, the curves show a horizontal asymptote, since the internal chains converge after some value T ≥ T*. Considering an RG scheme, increasing T yields lower MSE, since now we recycle the internal samples. Figure 10b shows the advantage of using AISM-P3-R3 even when compared with the optimized MH method. The advantage of AISM-P3-R3 is clearer for small T values (10 < T < 30; recall that in this experiment N_G = 1000 is kept fixed). The performance of AISM-P3-R3 and the optimized MH method (within Gibbs) becomes more similar as T increases. This is due to the fact that, with a high enough value of T, the MH chain is able to exceed its burn-in period and eventually converges. (Fig. 10 (Ex-Sect-9.5): a MSE (log-scale) as a function of σ for MH-within-SG (T = 1 and N_G = 1000). b MSE (log-scale) as a function of T for different MCMC-within-Gibbs schemes, with N_G = 1000 kept fixed. Note that MH uses the optimal scale value σ* = 3 for the (Gaussian) parametric proposal density.)

Tuning of the hyper-parameters of a Gaussian process (GP)

Exponential Power kernel function
Let us assume that we observe the data pairs {y_j, z_j}_{j=1}^P, with y_j ∈ R and z_j ∈ R^{d_Z}, and denote the corresponding vectors as y = [y_1, ..., y_P] and Z = [z_1, ..., z_P]. We address the regression problem of inferring the hidden function f, y = f(z), linking the variables y and z. For this goal, we assume the model y = f(z) + e, where e ∼ N(e; 0, σ²). For simplicity, we set d_Z = 1. We assume that f is a Gaussian process (GP) [56], i.e., we place a GP prior over f, so f ∼ GP(μ(z), κ(z, r)), where μ(z) = 0 and the kernel function κ is of the exponential power family given in Eq. (22). Therefore, the vector f = [f(z_1), ..., f(z_P)] is distributed as p(f|Z, κ, β, δ) = N(f; 0, K), where 0 is a 1 × P vector and K := [κ(z_i, z_j)], for i, j = 1, ..., P, is a P × P matrix; we have expressed explicitly the dependence on the choice of the kernel family κ in Eq. (22). Moreover, we denote the hyper-parameters of the model as θ = [θ_1 = σ, θ_2 = β, θ_3 = δ], i.e., the standard deviation of the observation noise and the two parameters of the kernel κ(z, r). We assume a prior with independent truncated positive Gaussian components for the hyper-parameters, p(θ) = p(σ, β, δ) = N(σ; 0, 5) N(β; 0, 5) N(δ; 0, 5) I_σ I_β I_δ, where I_v = 1 if v > 0 and I_v = 0 if v ≤ 0. To simplify the expression of the posterior pdf, let us focus on the filtering problem and the tuning of the parameters, namely, we desire to infer f and θ. Hence, the posterior pdf involves p(y|f, Z, θ, κ) = N(y; 0, σ²I) and p(f|y, Z, θ, κ) = N(f; μ_p, Σ_p), with mean μ_p = K(K + σ²I)^{−1} y^⊤ and covariance matrix Σ_p = K − K(K + σ²I)^{−1} K^⊤, representing the solution of the GP given the specific choice of the hyper-parameters θ. The marginal posterior of the hyper-parameters [56] is p(θ|y, Z, κ) ∝ p(y|Z, θ, κ) p(θ), where p(y|Z, θ, κ) = ∫ p(y|Z, f, θ, κ) p(f|Z, θ, κ) df.
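The log marginal likelihood log p(y|Z, θ, κ) = −½ y (K + σ²I)^{−1} y^⊤ − ½ log det(K + σ²I) − (P/2) log 2π is the key quantity entering the marginal posterior above. A pure-Python sketch follows (the kernel parameterization κ(z, r) = exp(−|z − r|^β/δ) is an assumption for illustration, since Eq. (22) is not reproduced here):

```python
import math

def kernel(z1, z2, beta, delta):
    # Assumed exponential power kernel: k(z, r) = exp(-|z - r|^beta / delta)
    return math.exp(-abs(z1 - z2) ** beta / delta)

def log_marginal(y, z, sigma, beta, delta):
    # log p(y | Z, theta): Gaussian log-density of y under N(0, K + sigma^2 I),
    # computed via a Cholesky factorization K + sigma^2 I = L L^T.
    n = len(y)
    A = [[kernel(z[i], z[j], beta, delta) + (sigma ** 2 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    # Solve L a = y, then L^T b = a, so that b = (K + sigma^2 I)^{-1} y.
    a = [0.0] * n
    for i in range(n):
        a[i] = (y[i] - sum(L[i][k] * a[k] for k in range(i))) / L[i][i]
    b = [0.0] * n
    for i in reversed(range(n)):
        b[i] = (a[i] - sum(L[k][i] * b[k] for k in range(i + 1, n))) / L[i][i]
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    quad = sum(y[i] * b[i] for i in range(n))
    return -0.5 * quad - 0.5 * logdet - 0.5 * n * math.log(2.0 * math.pi)
```

Within the samplers of this section, the univariate log-target for each hyper-parameter would be log p(y|Z, θ, κ) + log p(θ), evaluated as a function of one component of θ at a time.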

Automatic Relevant Determination kernel function
Here we consider the estimation of the hyper-parameters of the Automatic Relevance Determination (ARD) covariance ([62], Chapter 6). Let us assume again P observed data pairs {y_j, z_j}_{j=1}^P, with y_j ∈ R and z_j ∈ R^{d_Z}, where d_Z is the dimension of the input features. We also denote the corresponding P × 1 output vector as y = [y_1, ..., y_P]^⊤ and the d_Z × P input matrix Z = [z_1, ..., z_P].
We again address the regression problem of inferring the unknown function f which links the variables y and z.
Thus, the assumed model is y = f(z) + e, where e ∼ N(e; 0, σ²), and f(z) is a realization of a Gaussian process (GP) [56]. Hence f(z) ∼ GP(μ(z), κ(z, r)), where μ(z) = 0, z, r ∈ R^{d_Z}, and we consider the ARD kernel function with a different length-scale δ_ℓ for each input dimension ℓ = 1, ..., d_Z. Note that we have a different hyper-parameter δ_ℓ for each input component z_ℓ; hence, we also define δ = δ_{1:d_Z}. Unlike in the previous section, note that here β is assumed known (β = 2). This type of kernel function is often employed to perform an automatic relevance determination (ARD) of the input components with respect to the output variable ([62], Chapter 6). Namely, using ARD allows us to infer the relative importance of the different components of the input: a small value of δ_ℓ means that a variation of the ℓ-th component z_ℓ impacts the output more, while a high value of δ_ℓ indicates virtual independence between the ℓ-th component and the output. Therefore, the complete vector containing all the hyper-parameters of the model is θ = [δ_{1:d_Z}, σ], i.e., all the parameters of the kernel function in Eq. (22) and the standard deviation σ of the observation noise. We assume a prior p(θ) with independent positive components, as in the previous section. More specifically, we apply AISM-P4-R3 within Gibbs (with S_0 = {0.01, 0.5, 1, 2, 5, 8, 10, 15}) and the Single Component Adaptive Metropolis (SCAM) algorithm [63] within Gibbs to draw from π(θ) ∝ p(θ|y, Z, κ). Note that the dimension of the problem is D = d_Z + 1, since θ ∈ R^D. For SCAM, we use the Gaussian random walk proposal q(x_{ℓ,t}|x_{ℓ,t−1}) ∝ exp(−(x_{ℓ,t} − x_{ℓ,t−1})²/(2γ_{ℓ,t}²)). In SCAM, the scale parameters γ_{ℓ,t} are adapted (one for each component) considering all the corresponding previous samples (starting with γ_{ℓ,0} = 1).
We have averaged the results over 10^3 independent runs. We consider N_G = 1000 and T = 20 for both schemes, AISM-within-Gibbs and SCAM-within-Gibbs. The results are provided in Table 11.

Conclusions
In this work, we have introduced a new class of adaptive MCMC algorithms for any-purpose stochastic simulation. We have discussed the general features of the novel family, describing the different parts which form a generic sticky adaptive MCMC algorithm. The proposal density used in the new class is adapted on-line and constructed by employing non-parametric procedures. The name "sticky" highlights that the proposal pdf becomes progressively more and more similar to the target; namely, a complete adaptation of the shape of the proposal is obtained (unlike with parametric proposals). The role of the update control test for the inclusion of new support points has been investigated. The design of this test is extremely important, since it controls the trade-off between the computational cost and the efficiency of the resulting algorithm. Moreover, we have discussed how the combined design of a suitable proposal construction and a proper update test ensures the ergodicity of the generated chain. Two specific sticky schemes, AISM and AISMTM, have been proposed and tested exhaustively in different numerical simulations. The numerical results show the efficiency of the proposed algorithms with respect to other state-of-the-art adaptive MCMC methods. Furthermore, we have shown that other well-known algorithms already introduced in the literature are encompassed by the novel class of methods proposed. A detailed description of the related works in the literature and their ranges of applicability has also been provided, which is particularly useful for interested practitioners and researchers. The novel methods can be applied both as stand-alone algorithms and within any Monte Carlo approach that requires sampling from univariate densities (e.g., the Gibbs sampler, the hit-and-run algorithm, or adaptive direction sampling). A promising future line is the design of suitable constructions of the proposal density in order to allow direct sampling from multivariate target distributions (similarly to [21, 30, 31, 39, 40]). However, we remark that the structure of the novel class of methods is valid regardless of the dimension of the target.
Endnotes
1. The adjective "sticky" highlights the ability of the proposed schemes to generate a sequence of proposal densities that progressively "stick" to the target.
2. The purpose of this work is to provide a family of methods applicable to a wide range of signal processing problems. A generic Matlab code (not focusing on any specific application) is provided at http://www.lucamartino.altervista.org/STICKY.zip.
3. A preliminary version of this work has been published in [64]. With respect to that paper, the following major changes have been made: we discuss exhaustively the general structure of the new family (not just a particular algorithm); we perform a complete theoretical analysis of the AISM algorithm; we extend substantially the discussion about related works; we introduce the AISMTM algorithm; we show how sticky methods can be used to sample from multivariate pdfs by embedding them within a Gibbs sampler or the hit and run algorithm; and we provide additional numerical simulations (including comparisons with other benchmark sampling algorithms and the estimation of the hyper-parameters of a Gaussian process).
4. For simplicity, we assume that π(x) is bounded. However, the case of unbounded target pdfs can also be tackled by designing a suitable proposal construction that takes into account the vertical asymptotes of the target function. Similarly, we consider a target function defined on a continuous space X for the sake of simplicity, although the support domain could also be discrete.
5. Note that any other MCMC technique could be used.
6. Note that d_t(z) ≤ max{π(z), q_t(z|S_t)} ≤ M_π, since M_t = max_{z∈X} q_t(z|S_t) ≤ M_π for all of the constructions described in Section 3 for the proposal function. Therefore, all the ε_t ≥ M_π lead to equivalent update rules.
7. Regarding the definition of ε_t, this threshold should decrease over time (to guarantee that new support points can always be added), but not too fast (to avoid adding too many points and thus increasing the computational cost). Selecting the optimum threshold can be very challenging, but many of the rules that have been used in the area of stochastic filtering for the update parameter could be used here. For instance, good update rules could be ε_t = κ M_π e^{−γt} or ε_t = κ M_π / (t + 1), for some appropriate values of 0 < κ < 1 and γ > 0. Exploring this issue is out of the scope of this paper, but we plan to address it in future works.
8. We have used the equality d_t(z_i) = |π(z_i) − q_t(z_i|S_t)| = max{π(z_i), q_t(z_i|S_t)} − min{π(z_i), q_t(z_i|S_t)}.
9. Preliminary Matlab code for the AISM algorithm, with the constructions described in Section 3.1 and the update control rule R3, is provided at https://www.mathworks.com/matlabcentral/fileexchange/

Appendix A: Proof of Theorem 1

Note that Eq. (9) in Theorem 1 is a direct consequence of Theorem 2 in [14], which requires x_t ∼ q(x|S_t) to be independent of the current state, x_{t−1}, and the satisfaction of the strong Doeblin condition. Regarding the first issue, x_t is independent of x_{t−1} by construction of the algorithm, so we only need to focus on the second issue. The strong Doeblin condition is satisfied if, given a normalized proposal pdf, q̄_t(x|S_t) = (1/c_t) q_t(x|S_t), and a normalized target, π̄(x) = (1/c_π) π(x), with support X ⊆ R, there exists some a_t ∈ (0, 1] such that, for all x ∈ X and t ∈ N,

    (1/a_t) q̄_t(x|S_t) ≥ π̄(x).    (28)
First of all, note that Eq. (28) can be rewritten as

a_t ≤ (c_π/c_t) · q_t(x|S_t)/π(x), for all x ∈ X.   (29)

Then, note also that

min{1, (c_π/c_t) q_t(x|S_t)/π(x)} ≤ (c_π/c_t) q_t(x|S_t)/π(x),

where the last inequality is due to the fact that min{1, x} ≤ x. Therefore, a possible value of a_t that allows us to satisfy Eq. (29) is

a_t = min{1, (c_π/c_t) min_{x∈X} q_t(x|S_t)/π(x)}.   (30)

From Eq. (30) it is clear that a_t ≤ 1, so all that remains to be shown is that a_t > 0. Let us recall that I_t = (s_1, s_{m_t}], where s_1 and s_{m_t} are the smallest and largest support points in S_t = {s_1, ..., s_{m_t}}, respectively. Then, since q_t(x|S_t) > 0 for all x ∈ X (condition 1 in Definition 1) and t ∈ N, and π(x) is assumed to be bounded, we have

min{1, (c_π/c_t) min_{x∈I_t} q_t(x|S_t)/π(x)} > 0.
And regarding the tails, note that q_t(x|S_t) must have uniformly heavier tails by construction (condition 4 in Definition 1),^12 so q_t(x|S_t) ≥ π(x) for all x ∈ I_t^c = (−∞, s_1] ∪ (s_{m_t}, ∞), and we also have

min{1, (c_π/c_t) min_{x∈I_t^c} q_t(x|S_t)/π(x)} > 0.
Therefore, we conclude that 0 < a_t ≤ 1, so the strong Doeblin condition is satisfied and thus all the conditions of Theorem 2 in [14] are fulfilled.
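As a numerical illustration of this proof (not part of it), the Doeblin constant a_t = min{1, (c_π/c_t) min_{x∈X} q_t(x|S_t)/π(x)} can be evaluated on a finite grid for a toy unnormalized target and a heavier-tailed proposal; the function names and the grid are illustrative choices:

```python
import numpy as np

def doeblin_constant(q, pi, grid, c_q=1.0, c_pi=1.0):
    """a_t = min{1, (c_pi / c_q) * min_x q(x) / pi(x)}, evaluated on a finite grid."""
    ratio = q(grid) / pi(grid)
    return min(1.0, (c_pi / c_q) * ratio.min())

# Toy check: a Gaussian-like unnormalized target under a heavier-tailed proposal.
grid = np.linspace(-5, 5, 1001)
pi = lambda x: np.exp(-0.5 * x**2)    # unnormalized target
q = lambda x: np.exp(-0.25 * x**2)    # heavier-tailed unnormalized proposal
a_t = doeblin_constant(q, pi, grid)
# Here q(x) >= pi(x) everywhere, so 0 < a_t <= 1 and (1/a_t) q(x) >= pi(x) pointwise.
assert 0.0 < a_t <= 1.0
assert np.all(q(grid) / a_t >= pi(grid))
```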

Appendix B: Argumentation for Conjecture 1
Let us define I_t = (s_1, s_{m_t}] and I_t^c = (−∞, s_1] ∪ (s_{m_t}, ∞), where s_1 and s_{m_t} are the smallest and largest points of the set of support points at time step t, S_t = {s_1, ..., s_{m_t}} with s_1 < ... < s_{m_t}. Then, the L1 distance between the target and the proposal can be expressed as D_1(π, q_t) = D_{I_t}(π, q_t) + D_{I_t^c}(π, q_t), where D_{I_t}(π, q_t) = ∫_{I_t} d_t(x) dx and D_{I_t^c}(π, q_t) = ∫_{I_t^c} d_t(x) dx, with d_t(x) = |π(x) − q_t(x)|. Let us focus first on D_{I_t}(π, q_t). Since q_t(x) is constructed as a piecewise polynomial approximation on the intervals I_{t,i} = (s_i, s_{i+1}], we have D_{I_t}(π, q_t) = Σ_{i=1}^{m_t−1} D_{I_{t,i}}(π, q_t), where D_{I_{t,i}}(π, q_t) = ∫_{I_{t,i}} d_t(x) dx is the L1 distance between the target and the proposal in the i-th interval. Now, using Theorem 3.1.1 in [65], we can easily bound d_t(x) for the ℓ-th order interpolation polynomial (with ℓ ∈ {0, 1} in this case) used within the i-th interval. For ℓ = 0 and assuming that π(s_i) ≥ π(s_{i+1}) (and thus q_t(x) = π(s_i) for all x ∈ I_{t,i}) without loss of generality,^13

d_t(x) = |π(x) − π(s_i)| ≤ (s_{i+1} − s_i) |π′(ξ)|,

where π′(ξ) denotes the first derivative of π(x) evaluated at x = ξ, with ξ ∈ (s_i, s_{i+1}] some point inside the interval whose value depends on x, s_i, and π(x); this bound is finite since we assume that the first derivative of π(x) is bounded. Therefore, for the PWC approximation we have

D_{I_{t,i}}(π, q_t) ≤ (s_{i+1} − s_i)^2 max_{ξ∈I_{t,i}} |π′(ξ)|.

C.1 Proof of Theorem 3
The expected probability of adding a support point at the t-th iteration can be written as

E[P_a(z)|x_{t−1}, S_t] = ∫_X η_t(z, d_t(z)) p_t(z|x_{t−1}, S_t) dz,

where d_t(z) = |π(z) − q_t(z|S_t)| and

p_t(z|x_{t−1}, S_t) = ∫_X p_t(z|x′, x_{t−1}, S_t) p_t(x′|x_{t−1}, S_t) dx′,   (38)

represents the kernel function of AISM given x_{t−1} and S_t. Since candidate points x′ ∈ X are directly drawn from the proposal pdf, we have p_t(x′|x_{t−1}, S_t) = q_t(x′|S_t), and from the structure of the AISM in Table 1 it is straightforward to see that

p_t(z|x′, x_{t−1}, S_t) = α(x_{t−1}, x′) δ(z − x_{t−1}) + (1 − α(x_{t−1}, x′)) δ(z − x′),

where α(x_{t−1}, x′) = min{1, π(x′) q_t(x_{t−1}|S_t) / (π(x_{t−1}) q_t(x′|S_t))}. Inserting these two expressions in Eq.
(38), the kernel function of AISM becomes

p_t(z|x_{t−1}, S_t) = [∫_X α(x_{t−1}, x′) q_t(x′|S_t) dx′] δ(z − x_{t−1}) + (1 − α(x_{t−1}, z)) q_t(z|S_t).   (39)

Let us recall now the integral form of Jensen's inequality for a concave function ϕ(x) with support X ⊆ R [66]:

∫_X ϕ(x) f(x) dx ≤ ϕ(∫_X x f(x) dx),

which is valid for any non-negative function f(x) such that ∫_X f(x) dx = 1. Then, since we assume that η_t(z, d) = η_t(d), η_t(d) is a concave function of d by condition 4 of Definition 3, and ∫_X p_t(z|x_{t−1}, S_t) dz = 1, we have

E[P_a(z)|x_{t−1}, S_t] ≤ η_t(∫_X d_t(z) p_t(z|x_{t−1}, S_t) dz),   (40)

with

∫_X d_t(z) p_t(z|x_{t−1}, S_t) dz = d_t(x_{t−1}) ∫_X α(x_{t−1}, x′) q_t(x′|S_t) dx′ + ∫_X (1 − α(x_{t−1}, z)) d_t(z) q_t(z|S_t) dz,   (41)

where we have used (39) to obtain the final expression in (41). Now, for the first term on the right hand side of (41), note that ∫_X α(x_{t−1}, x′) q_t(x′|S_t) dx′ ≤ 1, since 0 ≤ α(x_{t−1}, x′) ≤ 1 and ∫_X q_t(x′|S_t) dx′ = 1. And for the second term, we have

∫_X (1 − α(x_{t−1}, z)) d_t(z) q_t(z|S_t) dz ≤ ∫_X d_t(z) q_t(z|S_t) dz ≤ C D_1(π, q_t),

where we recall that D_1(π, q_t) = ∫_X d_t(z) dz = ∫_X |π(z) − q_t(z|S_t)| dz and C = max_{z∈X} q_t(z|S_t) < ∞, since we have assumed that π(x) is bounded and thus, by condition 4 in Definition 1, q_t(z|S_t) is also bounded. Therefore, we obtain

∫_X d_t(z) p_t(z|x_{t−1}, S_t) dz ≤ d_t(x_{t−1}) + C D_1(π, q_t),   (42)

and inserting (42) into (40) we have the following bound for the expected probability of adding a support point at the t-th iteration:

E[P_a(z)|x_{t−1}, S_t] ≤ η_t(d_t(x_{t−1}) + C D_1(π, q_t)).   (43)

Finally, noting that C < ∞, that both d_t(x_{t−1}) → 0 and D_1(q_t, π) → 0 as t → ∞ by Conjecture 1, and that η_t(0) = 0 by condition 2 in Definition 3, we have E[P_a(z)|x_{t−1}, S_t] → 0 as t → ∞.
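The acceptance probability α(x_{t−1}, x′) used throughout this appendix is the standard independent Metropolis-Hastings rule. A minimal Python sketch (illustrative only: it uses fixed toy densities and omits the adaptation of the proposal and the support-point test that make AISM sticky) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha(x_prev, x_new, pi, q):
    """Independent MH acceptance probability:
    alpha = min{1, pi(x') q(x_prev) / (pi(x_prev) q(x'))}."""
    return min(1.0, (pi(x_new) * q(x_prev)) / (pi(x_prev) * q(x_new)))

def mh_step(x_prev, pi, q, sample_q):
    """One independent MH step: propose from q, accept with probability alpha."""
    x_new = sample_q()
    if rng.uniform() <= alpha(x_prev, x_new, pi, q):
        return x_new      # accept the candidate
    return x_prev         # reject: keep the current state

# Toy run targeting a standard normal with a wider (heavier) normal proposal.
pi = lambda x: np.exp(-0.5 * x**2)                 # unnormalized target
q = lambda x: np.exp(-0.5 * (x / 2.0) ** 2) / 2.0  # proportional to N(0, 2^2)
sample_q = lambda: rng.normal(0.0, 2.0)
x, xs = 0.0, []
for _ in range(5000):
    x = mh_step(x, pi, q, sample_q)
    xs.append(x)
# The chain mean should be near 0 for this symmetric target.
assert abs(np.mean(xs)) < 0.2
```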

C.2 Proof of Corollary 4
First of all, recall that a semi-metric fulfills all the properties of a metric except for the triangle inequality. Therefore, we have d_t(π(z), q_t(z)) ≥ 0, d_t(π(z), q_t(z)) = 0 ⟺ π(z) = q_t(z), and d_t(π(z), q_t(z)) = d_t(q_t(z), π(z)). Now, from the proof of Theorem 3 (see Appendix C.1), we can see that η_t is not used until Eq. (40). Since η_t(d_t(z)) is a concave function of d_t(z), we can still use Jensen's inequality and this equation becomes

E[P_a(z)|x_{t−1}, S_t] ≤ η_t(∫_X d_t(z) p_t(z|x_{t−1}, S_t) dz),

where, following the same procedure as in Appendix C.1 (which is still valid due to the fact that d_t(π(z), q_t(z)) is a semi-metric), the term inside η_t can now be bounded by

∫_X d_t(z) p_t(z|x_{t−1}, S_t) dz ≤ d_t(x_{t−1}) + C D_t(π, q_t),

with D_t(π, q_t) = ∫_X d_t(z) dz. Therefore, we have

E[P_a(z)|x_{t−1}, S_t] ≤ η_t(d_t(x_{t−1}) + C D_t(π, q_t)),

with E[P_a(z)|x_{t−1}, S_t] → 0 as t → ∞ under the conditions of Conjecture 1.

E.1 Sampling from the proposal pdf
The normalized proposal pdf can be expressed as the mixture

q̄_t(x|S_t) = Σ_{i=0}^{m_t} η_i φ_i(x),

with Σ_{i=0}^{m_t} η_i = 1, whereas φ_i(x) is a linear pdf or a uniform pdf (depending on the employed construction; see Eqs. (3)-(4)) defined in the interval I_i, and φ_i(x) = 0 for x ∉ I_i. The tails, φ_0(x) and φ_{m_t}(x), are truncated exponential pdfs (or Pareto tails; see Appendix E.2). Hence, in order to draw a sample from q̄_t(x|S_t) ∝ q_t(x|S_t), it is necessary to perform the following steps:

1. Compute the area A_i below each piece composing q_t(x|S_t), for i = 0, ..., m_t. This is straightforward for the construction procedures in Eqs. (3)-(4), since the function q_t(x|S_t) is formed by linear or constant pieces, so it can be easily done analytically. Moreover, since the tails are exponential functions, in this case we can also compute the areas A_0 and A_{m_t} analytically. Then, we need to normalize them, η_i = A_i / Σ_{j=0}^{m_t} A_j, for i = 0, ..., m_t.
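The area-normalization step above, followed by picking a piece according to its weight and drawing uniformly within it, can be sketched as follows for a piecewise-constant proposal. The function name and the toy support points and heights are illustrative, and the unbounded exponential tails are omitted for simplicity (i.e., the sketch samples from a truncated proposal):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pwc_proposal(support, heights, n=1):
    """Sample from an unnormalized piecewise-constant proposal defined by support
    points s_0 < ... < s_m and a constant height on each interval (s_i, s_{i+1}]."""
    widths = np.diff(support)
    areas = heights * widths                   # A_i: area below each piece
    weights = areas / areas.sum()              # normalized mixture weights eta_i
    idx = rng.choice(len(weights), size=n, p=weights)  # pick a piece
    u = rng.uniform(size=n)
    return support[idx] + u * widths[idx]      # uniform draw inside the chosen piece

# Toy proposal on [0, 4] with three constant pieces.
s = np.array([0.0, 1.0, 2.0, 4.0])
h = np.array([2.0, 1.0, 0.5])
x = sample_pwc_proposal(s, h, n=20000)
assert np.all((x >= s[0]) & (x <= s[-1]))
# Probability of landing in (0, 1] should be A_0 / sum(A_j) = 2 / (2 + 1 + 1) = 0.5.
assert abs(np.mean(x <= 1.0) - 0.5) < 0.02
```

For the linear (trapezoidal) pieces of Eqs. (3)-(4), the uniform draw inside the piece would be replaced by inverse-CDF sampling of a linear density; the piece-selection logic is identical.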

Fig. (Ex-Sect-9.1). Evolution of the number of support points m_t and of the average acceptance probability (AAP), as a function of t = 1, ..., T for AISM, for different constructions, and update rule R2 with ε = 0.005 (square), ε = 0.01 (cross), ε = 0.1 (triangle), and ε = 0.2 (circle). Moreover, in a-d the evolution of m_t for AISM with the update rule R3 is also shown with a solid line. Note that the range of values in a-d is different. (e)-(f)-(g)-(h) Acceptance rate as a function of the iteration t

Table 2
Examples of test functions η_t(z, d) for different update rules (recall that d_t(z) = |π(z) − q_t(z|S_t)|)

Table 3
Special cases of sticky MCMC algorithms

Table 5 (Ex-Sect-9.1). For each algorithm, the table shows the mean square error (MSE), the autocorrelation ρ(τ) at different lags, the effective sample size (ESS), the final number of support points (m_T), and the computing times normalized w.r.t. ARMS (Time)

Table 7
Estimation of the normalizing constant 1/c_π = 0.5642 for the Lévy distribution (T = 5000)

Table 8 (Ex-Sect-9.4.1). Mean absolute error (MAE) in the estimation of four statistics (first component) and normalized computing time

Table 8 (Ex-Sect-9.4.1). Mean absolute error (MAE) in the estimation of four statistics (first component) and normalized computing time (Continued)

Table 9 (Ex-Sect-9.6.1). MSE in the estimation of the hyper-parameters θ* with N_G = 2000. Note that IA2RMS is a special case of AISM which employs the equivalent proposal p_t(x) ∝ min{q_t(x), π(x)} and the rule R3 (see Section 6.1). In IA2RMS, we have used the construction procedure P4 in order to build q_t(x). The computing times are normalized w.r.t. the time spent by MH

Table 10
The computing times are normalized w.r.t. the time spent by MH in Table 9

Table 11 (Ex-Sect-9.6.2). MSE for different techniques and different dimensions D = d_Z + 1 of the inference problem (number of hyper-parameters), with T = 20 and N_G = 1000 for both schemes (columns: Algorithm, D = 2, D = 4, D = 6, D = 8, D = 10)