oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Geometric Ergodicity & Scanning Strategies For Two-Component Gibbs Samplers  [PDF]
Alicia A. Johnson, Owen Burbank
Statistics , 2012,
Abstract: In any Markov chain Monte Carlo analysis, rapid convergence of the chain to its target probability distribution is of practical and theoretical importance. A chain that converges at a geometric rate is geometrically ergodic. In this paper, we explore geometric ergodicity for two-component Gibbs samplers which, under a chosen scanning strategy, evolve by combining one-at-a-time updates of the two components. We compare convergence behaviors between and within three such strategies: composition, random sequence scan, and random scan. Our main results are twofold. First, we establish that if the Gibbs sampler is geometrically ergodic under any one of these strategies, so too are the others. Further, we establish a simple and verifiable set of sufficient conditions for the geometric ergodicity of the Gibbs samplers. Our results are illustrated using two examples.
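For concreteness, here is a minimal Python sketch of the three scanning strategies compared above (composition, random sequence scan, and random scan), using an assumed bivariate normal target with correlation rho as a stand-in; the target, parameter values, and helper names are illustrative choices, not the paper's examples.

```python
import numpy as np

# Hedged sketch, not the paper's examples: a two-component Gibbs sampler for a
# bivariate normal target with correlation rho, run under three scanning strategies.

rng = np.random.default_rng(1)
rho = 0.8
sd = np.sqrt(1.0 - rho ** 2)

def update_x(y):
    return rng.normal(rho * y, sd)     # exact draw from X | Y = y

def update_y(x):
    return rng.normal(rho * x, sd)     # exact draw from Y | X = x

def gibbs(strategy, n_iter=10_000):
    x = y = 0.0
    out = np.empty((n_iter, 2))
    for i in range(n_iter):
        if strategy == "composition":          # deterministic order: X then Y
            x = update_x(y)
            y = update_y(x)
        elif strategy == "random_sequence":    # random order, both components updated
            if rng.random() < 0.5:
                x = update_x(y); y = update_y(x)
            else:
                y = update_y(x); x = update_x(y)
        elif strategy == "random_scan":        # one randomly chosen component per iteration
            if rng.random() < 0.5:
                x = update_x(y)
            else:
                y = update_y(x)
        out[i] = (x, y)
    return out

for s in ("composition", "random_sequence", "random_scan"):
    print(s, gibbs(s).mean(axis=0))
```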
Uniform Ergodicity of the Iterated Conditional SMC and Geometric Ergodicity of Particle Gibbs samplers  [PDF]
Christophe Andrieu, Anthony Lee, Matti Vihola
Mathematics , 2013,
Abstract: We establish quantitative bounds for rates of convergence and asymptotic variances for iterated conditional sequential Monte Carlo (i-cSMC) Markov chains and associated particle Gibbs samplers. Our main finding is that essential boundedness of the potential functions associated with the i-cSMC algorithm provides necessary and sufficient conditions for the uniform ergodicity of the i-cSMC Markov chain, as well as quantitative bounds on its (uniformly geometric) rate of convergence. Furthermore, we show that in many applications of interest, where this essential boundedness does not hold, the i-cSMC Markov chain fails to be even geometrically ergodic. Our sufficiency and quantitative bounds rely on a novel non-asymptotic analysis of the expectation of a standard normalizing constant estimate with respect to a "doubly conditional" SMC algorithm. In addition, our results for i-cSMC imply that the rate of convergence can be improved arbitrarily by increasing N, the number of particles in the algorithm, and that under mixing assumptions the rate of convergence can be kept constant by increasing N linearly with the time horizon. We translate the sufficiency of the boundedness condition for i-cSMC into sufficient conditions for the particle Gibbs Markov chain to be geometrically ergodic, together with quantitative bounds on its geometric rate of convergence, which imply convergence of properties of the particle Gibbs Markov chain to those of its corresponding Gibbs sampler. These results complement recently discovered, and related, conditions for the particle marginal Metropolis-Hastings (PMMH) Markov chain.
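As a rough illustration of the i-cSMC Markov chain itself (not of the paper's quantitative bounds), the following hedged Python sketch runs conditional SMC with a bootstrap proposal and multinomial resampling on an assumed linear-Gaussian state-space model; the model, the parameter values, and the helper names (log_g, csmc) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hedged sketch of the i-cSMC Markov chain for an assumed linear-Gaussian
# state-space model x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r), with a
# bootstrap proposal and multinomial resampling.

rng = np.random.default_rng(0)
a, q, r, T, N = 0.9, 1.0, 1.0, 25, 50

# Synthetic observations, purely for the demo.
x_true, y = np.zeros(T), np.zeros(T)
for t in range(T):
    x_true[t] = a * (x_true[t - 1] if t > 0 else 0.0) + rng.normal(0.0, np.sqrt(q))
    y[t] = x_true[t] + rng.normal(0.0, np.sqrt(r))

def log_g(xt, yt):
    """Log observation density g(y_t | x_t) for the assumed Gaussian model."""
    return -0.5 * np.log(2 * np.pi * r) - 0.5 * (yt - xt) ** 2 / r

def csmc(x_ref):
    """One conditional SMC sweep; particle N-1 is pinned to the reference path."""
    X = np.zeros((T, N))
    A = np.zeros((T, N), dtype=int)                       # ancestor indices
    logw = np.zeros(N)
    for t in range(T):
        if t == 0:
            X[t, :-1] = rng.normal(0.0, np.sqrt(q), N - 1)
        else:
            w = np.exp(logw - logw.max()); w /= w.sum()
            A[t, :-1] = rng.choice(N, size=N - 1, p=w)    # resample the free particles
            A[t, -1] = N - 1                              # reference keeps its own ancestor
            X[t, :-1] = a * X[t - 1, A[t, :-1]] + rng.normal(0.0, np.sqrt(q), N - 1)
        X[t, -1] = x_ref[t]                               # pin the reference particle
        logw = log_g(X[t], y[t])
    # Draw a surviving trajectory and trace its ancestry backwards.
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(N, p=w)
    path = np.zeros(T)
    for t in reversed(range(T)):
        path[t] = X[t, k]
        k = A[t, k]
    return path

# The i-cSMC Markov chain: repeatedly refresh the reference trajectory.
ref = np.zeros(T)
for _ in range(200):
    ref = csmc(ref)
```

Increasing N in a sketch like this corresponds to the abstract's point that the rate of convergence improves with the number of particles.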
Variable transformation to obtain geometric ergodicity in the random-walk Metropolis algorithm  [PDF]
Leif T. Johnson, Charles J. Geyer
Statistics , 2013, DOI: 10.1214/12-AOS1048
Abstract: A random-walk Metropolis sampler is geometrically ergodic if its equilibrium density is super-exponentially light and satisfies a curvature condition [Stochastic Process. Appl. 85 (2000) 341-361]. Many applications, including Bayesian analysis with conjugate priors of logistic and Poisson regression and of log-linear models for categorical data, result in posterior distributions that are not super-exponentially light. We show how to apply the change-of-variable formula for diffeomorphisms to obtain new densities that do satisfy the conditions for geometric ergodicity. Sampling the new variable and mapping the results back to the old gives a geometrically ergodic sampler for the original variable. This method of obtaining geometric ergodicity has very wide applicability.
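A hedged one-dimensional illustration of this change-of-variable idea: the target pi(x) proportional to exp(-|x|) is not super-exponentially light, but under the assumed illustrative transformation x = h(gamma) = sign(gamma) * gamma^2 the induced density is proportional to 2|gamma| exp(-gamma^2), which is. A random-walk Metropolis sampler is run in gamma-space and its draws are mapped back through h; the paper's transformation families and regularity conditions are more general and more delicate than this toy.

```python
import numpy as np

# Hedged 1-D toy: pi(x) ∝ exp(-|x|) is not super-exponentially light, so we run
# random-walk Metropolis on gamma, where x = h(gamma) = sign(gamma) * gamma**2.
# The induced density pi(h(gamma)) * |h'(gamma)| ∝ 2|gamma| * exp(-gamma**2) has
# Gaussian-type tails, and draws are mapped back to x through h.

rng = np.random.default_rng(2)

def log_induced(g):
    # log pi(h(g)) + log|h'(g)| up to a constant; the density is zero exactly at g = 0
    return -g ** 2 + np.log(2.0 * abs(g)) if g != 0.0 else -np.inf

def rwm_transformed(n_iter=50_000, step=1.0):
    g = 1.0
    lp = log_induced(g)
    xs = np.empty(n_iter)
    for i in range(n_iter):
        prop = g + rng.normal(0.0, step)          # random-walk proposal in gamma-space
        lp_prop = log_induced(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            g, lp = prop, lp_prop
        xs[i] = np.sign(g) * g ** 2               # map back to the original variable
    return xs

x = rwm_transformed()
print("sample mean (target mean is 0):", x.mean())
```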
Comment: On Random Scan Gibbs Samplers  [PDF]
Richard A. Levine, George Casella
Statistics , 2008, DOI: 10.1214/08-STS252B
Abstract: Comment on "On Random Scan Gibbs Samplers" [arXiv:0808.3852]
Adaptive Gibbs samplers  [PDF]
Krzysztof Latuszynski, Jeffrey S. Rosenthal
Statistics , 2010,
Abstract: We consider various versions of adaptive Gibbs and adaptive Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run, by learning as they go in an attempt to optimise the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various positive results guaranteeing convergence of adaptive Gibbs samplers under certain conditions.
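A hedged Python sketch of the basic mechanism, not the authors' specific algorithms: a random-scan Gibbs sampler whose coordinate-selection probabilities are adapted on the fly from running scale estimates, with a diminishing step size and probabilities clipped away from zero (the kind of safeguard under which positive convergence results can hold). The toy target, a product of normals with assumed standard deviations 1 and 20, is chosen so that the full conditionals are trivial.

```python
import numpy as np

# Hedged sketch of an adaptive random-scan Gibbs sampler: coordinate-selection
# probabilities are adapted on the fly from running scale estimates, with a
# diminishing 1/n step size and probabilities clipped away from 0.

rng = np.random.default_rng(3)
sigma = np.array([1.0, 20.0])          # assumed marginal standard deviations
d, eps = 2, 0.1

x = np.zeros(d)
probs = np.full(d, 1.0 / d)            # initial selection probabilities
scale_est = np.ones(d)                 # running per-coordinate scale estimates

for n in range(1, 20_001):
    i = rng.choice(d, p=probs)
    x[i] = rng.normal(0.0, sigma[i])   # exact draw from the i-th full conditional
    # Adapt: nudge the scale estimate toward |x_i|, then renormalise and clip.
    scale_est[i] += (abs(x[i]) - scale_est[i]) / n
    p = np.clip(scale_est / scale_est.sum(), eps, 1.0 - eps)
    probs = p / p.sum()

print("adapted selection probabilities:", probs)
```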
Quantum Gibbs Samplers: the commuting case  [PDF]
Michael J. Kastoryano, Fernando G. S. L. Brandao
Physics , 2014,
Abstract: We analyze the problem of preparing quantum Gibbs states of lattice spin Hamiltonians with local and commuting terms on a quantum computer and in nature. Our central result is an equivalence between the behavior of correlations in the Gibbs state and the mixing time of the semigroup which drives the system to thermal equilibrium (the Gibbs sampler). We introduce a framework for analyzing the correlation and mixing characteristics of quantum Gibbs states and quantum Gibbs samplers, which is rooted in the theory of non-commutative Lp spaces. We consider two distinct classes of Gibbs samplers, one of which is the well-studied class of Davies generators, which model the dynamics of a system weakly coupled to a large Markovian environment. We show that their gap is independent of system size if, and only if, a certain strong form of clustering of correlations holds in the Gibbs state. As concrete applications of our formalism, we show that for every one-dimensional lattice system, or for systems in lattices of any dimension at high enough temperatures, the Gibbs samplers of commuting Hamiltonians are always gapped, giving an efficient way of preparing these states on a quantum computer.
Convergence Rates for Hierarchical Gibbs Samplers  [PDF]
Oliver Jovanovski, Neal Madras
Mathematics , 2014,
Abstract: We establish some results for the rate of convergence in total variation of a Gibbs sampler to its equilibrium distribution. This sampler is motivated by a hierarchical Bayesian inference construction for a gamma random variable. Our results apply to a wide range of parameter values in the case that the hierarchical depth is 3 or 4, and are more restrictive for depth greater than 4. Our method involves showing a relationship between the total variation of two ordered copies of our chain and the maximum of the ratios of their respective co-ordinates. We construct auxiliary stochastic processes to show that this ratio does converge to 1 at a geometric rate.
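For intuition only, here is a hedged Python sketch of a Gibbs sampler for a depth-3 gamma hierarchy; the specific model below (exponential data, gamma levels linked through their rate parameters, and the chosen hyperparameters) is an assumed stand-in rather than the construction analyzed in the paper, and the coupling argument with ordered copies of the chain is not reproduced.

```python
import numpy as np

# Hedged sketch of a depth-3 gamma-hierarchy Gibbs sampler. Assumed stand-in
# model (not necessarily the paper's): Y_i | lam1 ~ Exponential(rate=lam1),
# lam1 | lam2 ~ Gamma(a1, rate=lam2), lam2 | lam3 ~ Gamma(a2, rate=lam3),
# lam3 ~ Gamma(a3, rate=b3). Conjugacy then gives gamma full conditionals.

rng = np.random.default_rng(4)
a1, a2, a3, b3 = 2.0, 2.0, 2.0, 1.0      # assumed hyperparameters
y = rng.exponential(scale=1.0, size=30)  # synthetic data for the demo
n, s = len(y), y.sum()

def gamma_rate(shape, rate):
    return rng.gamma(shape, 1.0 / rate)  # numpy's gamma is parameterised by scale

lam1 = lam2 = lam3 = 1.0
draws = []
for _ in range(10_000):
    lam1 = gamma_rate(a1 + n, lam2 + s)       # lam1 | lam2, y
    lam2 = gamma_rate(a1 + a2, lam3 + lam1)   # lam2 | lam1, lam3
    lam3 = gamma_rate(a2 + a3, b3 + lam2)     # lam3 | lam2
    draws.append(lam1)

print("posterior mean of lam1:", np.mean(draws))
```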
Adaptive Gibbs samplers and related MCMC methods  [PDF]
Krzysztof Łatuszyński, Gareth O. Roberts, Jeffrey S. Rosenthal
Statistics , 2011, DOI: 10.1214/11-AAP806
Abstract: We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run by learning as they go in an attempt to optimize the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various positive results guaranteeing convergence of adaptive Gibbs samplers under certain conditions.
Exact thresholds for Ising-Gibbs samplers on general graphs  [PDF]
Elchanan Mossel, Allan Sly
Mathematics , 2009, DOI: 10.1214/11-AOP737
Abstract: We establish tight results for rapid mixing of Gibbs samplers for the ferromagnetic Ising model on general graphs. We show that if $(d-1)\tanh\beta<1$, then there exists a constant C such that the discrete time mixing time of Gibbs samplers for the ferromagnetic Ising model on any graph of n vertices and maximal degree d, where all interactions are bounded by $\beta$ and the external fields are arbitrary, is bounded by $Cn\log n$. Moreover, the spectral gap is uniformly bounded away from 0 for all such graphs, as well as for infinite graphs of maximal degree d. We further show that when $d\tanh\beta<1$, with high probability over the Erdos-Renyi random graph $G(n,d/n)$, the mixing time of Gibbs samplers is $n^{1+\Theta(1/\log\log n)}$. Both results are tight, as it is known that the mixing time for random regular and Erdos-Renyi random graphs is, with high probability, exponential in n when $(d-1)\tanh\beta>1$ and $d\tanh\beta>1$, respectively. To our knowledge, our results give the first tight sufficient conditions for rapid mixing of spin systems on general graphs. Moreover, our results are the first rigorous results establishing exact thresholds for dynamics on random graphs in terms of spatial thresholds on trees.
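The dynamics in question are standard single-site heat-bath (Gibbs) updates for the Ising model on a graph; a minimal Python sketch, with an assumed uniform interaction strength beta, an external field h, and a small example graph, is below.

```python
import numpy as np

# Hedged sketch of single-site heat-bath (Gibbs) dynamics for the ferromagnetic
# Ising model on a general graph, with an assumed uniform interaction strength
# beta and external field h.

rng = np.random.default_rng(5)

def ising_gibbs(adj, beta, h=0.0, steps=50_000):
    """adj: list of neighbour lists; returns the final +/-1 spin configuration."""
    n = len(adj)
    spins = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        v = rng.integers(n)                               # pick a site uniformly at random
        field = beta * sum(spins[u] for u in adj[v]) + h  # local field at v
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))       # P(spin_v = +1 | rest)
        spins[v] = 1 if rng.random() < p_plus else -1
    return spins

# Example: a cycle on 20 vertices (maximal degree d = 2).
n = 20
adj = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
print(ising_gibbs(adj, beta=0.3).sum())
```

In this notation, the result above bounds the number of such single-site updates needed to approach equilibrium by roughly $Cn\log n$ whenever $(d-1)\tanh\beta<1$.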
Sufficient burn-in for Gibbs samplers for a hierarchical random effects model  [PDF]
Galin L. Jones, James P. Hobert
Mathematics , 2004, DOI: 10.1214/009053604000000184
Abstract: We consider Gibbs and block Gibbs samplers for a Bayesian hierarchical version of the one-way random effects model. Drift and minorization conditions are established for the underlying Markov chains. The drift and minorization are used in conjunction with results from J. S. Rosenthal [J. Amer. Statist. Assoc. 90 (1995) 558-566] and G. O. Roberts and R. L. Tweedie [Stochastic Process. Appl. 80 (1999) 211-229] to construct analytical upper bounds on the distance to stationarity. These lead to upper bounds on the amount of burn-in that is required to get the chain within a prespecified (total variation) distance of the stationary distribution. The results are illustrated with a numerical example.
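A hedged Python sketch of such a Gibbs sampler for the one-way random effects model, with an assumed conjugate prior specification (a normal prior on the overall mean and gamma priors on the precisions) that may differ from the paper's; the drift and minorization constants, and hence the burn-in bounds, are not computed here.

```python
import numpy as np

# Hedged sketch of a Gibbs sampler for the one-way random effects model
# y_ij = theta_i + e_ij, theta_i ~ N(mu, 1/lam_t), e_ij ~ N(0, 1/lam_e), with an
# assumed prior specification mu ~ N(m0, 1/lam0), lam_t ~ Gamma(a1, b1),
# lam_e ~ Gamma(a2, b2).

rng = np.random.default_rng(6)
K, m = 5, 10                                           # groups and observations per group
y = rng.normal(loc=rng.normal(0.0, 1.0, size=(K, 1)), scale=1.0, size=(K, m))
m0, lam0, a1, b1, a2, b2 = 0.0, 1e-2, 2.0, 2.0, 2.0, 2.0
ybar = y.mean(axis=1)

theta, mu, lam_t, lam_e = ybar.copy(), y.mean(), 1.0, 1.0
mus = []
for _ in range(10_000):
    # theta_i | rest: normal with precision lam_t + m*lam_e
    prec = lam_t + m * lam_e
    theta = rng.normal((lam_t * mu + lam_e * m * ybar) / prec, 1.0 / np.sqrt(prec))
    # mu | rest: normal with precision lam0 + K*lam_t
    prec_mu = lam0 + K * lam_t
    mu = rng.normal((lam0 * m0 + lam_t * theta.sum()) / prec_mu, 1.0 / np.sqrt(prec_mu))
    # precisions | rest: gamma (numpy's gamma takes a scale = 1/rate)
    lam_t = rng.gamma(a1 + K / 2, 1.0 / (b1 + 0.5 * ((theta - mu) ** 2).sum()))
    lam_e = rng.gamma(a2 + K * m / 2, 1.0 / (b2 + 0.5 * ((y - theta[:, None]) ** 2).sum()))
    mus.append(mu)

print("posterior mean of mu:", np.mean(mus))
```

The amount of burn-in required before a chain like this is within a prescribed total variation distance of its stationary distribution is exactly what the drift and minorization bounds in the paper quantify.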