oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
The Distribution of Mixing Times in Markov Chains  [PDF]
Jeffrey J. Hunter
Mathematics , 2011, DOI: 10.1142/S0217595912500455
Abstract: The distribution of the "mixing time" or the "time to stationarity" in a discrete time irreducible Markov chain, starting in state i, can be defined as the number of trials to reach a state sampled from the stationary distribution of the Markov chain. Expressions for the probability generating function, and hence the probability distribution, of the mixing time starting in state i are derived and special cases explored. This extends the results of the author regarding the expected time to mixing [J.J. Hunter, Mixing times with applications to perturbed Markov chains, Linear Algebra Appl. 417 (2006) 108-123], and the variance of the times to mixing [J.J. Hunter, Variances of first passage times in a Markov chain with applications to mixing times, Linear Algebra Appl. 429 (2008) 1135-1162]. Some new results for the distribution of recurrence and first passage times in a three-state Markov chain are also presented.
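A minimal numerical sketch of the definition above, not taken from the paper: the three-state transition matrix is invented for illustration, and the convention used when the sampled target equals the starting state is one of several possible choices.

```python
# Illustrative sketch (not from the paper): Monte Carlo estimate of the
# "time to stationarity" described above, for a made-up 3-state chain.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

def mixing_time_sample(i, n_states=3):
    """One draw of the mixing time from state i: sample a target state
    from pi, then run the chain from i until that target is hit.
    If the target equals i, at least one step is still taken (one of
    several possible conventions)."""
    target = rng.choice(n_states, p=pi)
    state, steps = i, 0
    while True:
        steps += 1
        state = rng.choice(n_states, p=P[state])
        if state == target:
            return steps

samples = [mixing_time_sample(0) for _ in range(20000)]
print("estimated E[mixing time from state 0]:", np.mean(samples))
```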
Elementary Bounds On Mixing Times for Decomposable Markov Chains  [PDF]
Natesh S. Pillai,Aaron Smith
Mathematics , 2015,
Abstract: Many finite-state reversible Markov chains can be naturally decomposed into "projection" and "restriction" chains. In this paper we provide bounds on the total variation mixing times of the original chain in terms of the mixing properties of these related chains. This paper is in the tradition of existing bounds on Poincaré and log-Sobolev constants of Markov chains in terms of similar decompositions [JSTV04, MR02, MR06, MY09]. Our proofs are simple, relying largely on recent results relating hitting and mixing times of reversible Markov chains [PS13, Oli12]. We give examples in which our results give substantially better bounds than those obtained by applying existing decomposition results.
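The following sketch is not the paper's method; it simply computes the total variation mixing time of a small reversible chain by brute-force matrix powering, the kind of baseline against which decomposition bounds of this type can be checked. The lazy cycle walk is a made-up example.

```python
# Illustrative sketch: exact total variation mixing time by matrix powering.
import numpy as np

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def tv_mixing_time(P, eps=0.25, t_max=10_000):
    """Smallest t with max_i ||P^t(i,.) - pi||_TV <= eps."""
    pi = stationary(P)
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        if 0.5 * np.abs(Pt - pi).sum(axis=1).max() <= eps:
            return t
    raise RuntimeError("not mixed within t_max steps")

# A toy lazy random walk on a 6-cycle (reversible), just as an example.
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25
print("t_mix(1/4) =", tv_mixing_time(P))
```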
Stable Adiabatic Times for Markov Chains  [PDF]
Kyle Bradford,Yevgeniy Kovchegov,Thinh Nguyen
Mathematics , 2012,
Abstract: In this paper we continue our work on the adiabatic time of time-inhomogeneous Markov chains, first introduced in Kovchegov (2010) and Bradford and Kovchegov (2011). Our study is an analog of the well-known Quantum Adiabatic (QA) theorem, which characterizes the quantum adiabatic time for the evolution of a quantum system under a series of Hamiltonian operators, each a linear combination of two given initial and final Hamiltonian operators, i.e. $\mathbf{H}(s) = (1-s)\mathbf{H_0} + s\mathbf{H_1}$. Informally, the quantum adiabatic time of a quantum system specifies the speed at which the Hamiltonian operator may change so that the ground state of the system at any time $s$ always remains $\epsilon$-close to that induced by the Hamiltonian operator $\mathbf{H}(s)$ at time $s$. Analogously, we derive a sufficient condition for the stable adiabatic time of a time-inhomogeneous Markov evolution specified by a series of transition probability matrices, each a linear combination of two given irreducible and aperiodic transition probability matrices, i.e., $\mathbf{P_{t}} = (1-t)\mathbf{P_{0}} + t\mathbf{P_{1}}$. In particular, we show that the stable adiabatic time satisfies $t_{sad}(\mathbf{P_{0}}, \mathbf{P_{1}}, \epsilon) = O\big(t_{mix}^{4}(\epsilon/2)/\epsilon^{3}\big)$, where $t_{mix}$ denotes the maximum mixing time over all $\mathbf{P_{t}}$ for $0 \leq t \leq 1$.
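A hedged numerical sketch of the interpolation $\mathbf{P_{t}} = (1-t)\mathbf{P_{0}} + t\mathbf{P_{1}}$ described above, with invented three-state matrices: it evaluates $\max_t t_{mix}(\epsilon/2)$ on a grid and the order-of-magnitude quantity $t_{mix}^{4}(\epsilon/2)/\epsilon^{3}$ from the stated bound (constants omitted; this is not the paper's computation).

```python
# Illustrative sketch: mixing times along the interpolated family P_t.
import numpy as np

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def tv_mixing_time(P, eps, t_max=100_000):
    pi = stationary(P)
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        if 0.5 * np.abs(Pt - pi).sum(axis=1).max() <= eps:
            return t
    raise RuntimeError("not mixed")

# Two made-up irreducible, aperiodic chains on 3 states.
P0 = np.array([[0.6, 0.2, 0.2], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]])
P1 = np.array([[0.2, 0.5, 0.3], [0.4, 0.4, 0.2], [0.25, 0.25, 0.5]])

eps = 0.1
t_mix_max = max(tv_mixing_time((1 - t) * P0 + t * P1, eps / 2)
                for t in np.linspace(0, 1, 21))
print("max t_mix(eps/2) over the path:", t_mix_max)
print("bound scale t_mix^4/eps^3     :", t_mix_max**4 / eps**3)
```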
Adiabatic times for Markov chains and applications  [PDF]
Kyle Bradford,Yevgeniy Kovchegov
Mathematics , 2010, DOI: 10.1007/s10955-011-0219-6
Abstract: We state and prove a generalized adiabatic theorem for Markov chains and provide examples and applications related to Glauber dynamics of the Ising model over Z^d/nZ^d. The theorems derived in this paper describe a type of adiabatic dynamics for l^1(R_+^n) norm-preserving, time-inhomogeneous Markov transformations, while quantum adiabatic theorems deal with l^2(C^n) norm-preserving ones, i.e. gradually changing unitary dynamics in C^n.
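The sketch below is only a generic illustration of single-site Glauber (heat-bath) dynamics for the Ising model on the two-dimensional torus; the lattice size, inverse temperature, and number of sweeps are arbitrary, and the code is not taken from the paper.

```python
# Illustrative sketch: single-site Glauber dynamics for the Ising model
# on the torus (Z/nZ)^2 at inverse temperature beta.
import numpy as np

rng = np.random.default_rng(1)

def glauber_sweep(spins, beta):
    """One pass of random single-site heat-bath (Glauber) updates."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        # Sum of the four torus neighbours.
        s = (spins[(i - 1) % n, j] + spins[(i + 1) % n, j]
             + spins[i, (j - 1) % n] + spins[i, (j + 1) % n])
        # Heat-bath probability of setting the spin to +1.
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
        spins[i, j] = 1 if rng.random() < p_plus else -1
    return spins

n, beta = 16, 0.3
spins = rng.choice([-1, 1], size=(n, n))
for _ in range(100):
    spins = glauber_sweep(spins, beta)
print("magnetisation per site:", spins.mean())
```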
Coalescence and meeting times on $n$-block Markov chains  [PDF]
Kathleen Lan,Kevin McGoff
Mathematics , 2014,
Abstract: We consider finite state, discrete-time, mixing Markov chains $(V,P)$, where $V$ is the state space and $P$ is the transition matrix. To each such chain $(V,P)$, we associate a sequence of chains $(V_n,P_n)$ by coding trajectories of $(V,P)$ according to their overlapping $n$-blocks. The chain $(V_n,P_n)$, called the $n$-block Markov chain associated to $(V,P)$, may be considered an alternate version of $(V,P)$ having memory of length $n$. Along such a sequence of chains, we characterize the asymptotic behavior of coalescence times and meeting times as $n$ tends to infinity. In particular, we define an algebraic quantity $L(V,P)$ depending only on $(V,P)$, and we show that if the coalescence time on $(V_n,P_n)$ is denoted by $C_n$, then the quantity $\frac{1}{n} \log C_n$ converges in probability to $L(V,P)$ with exponential rate. Furthermore, we fully characterize the relationship between $L(V,P)$ and the entropy of $(V,P)$.
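One natural reading of the $n$-block construction above, sketched in code (not from the paper): a block $(v_1,\dots,v_n)$ with positive probability moves to $(v_2,\dots,v_n,w)$ with probability $P(v_n,w)$; the two-state example chain is invented.

```python
# Illustrative sketch: building the n-block chain (V_n, P_n) from (V, P).
import itertools
import numpy as np

def n_block_chain(P, n):
    m = P.shape[0]
    # Keep only n-blocks that the original chain can actually traverse.
    blocks = [b for b in itertools.product(range(m), repeat=n)
              if all(P[b[k], b[k + 1]] > 0 for k in range(n - 1))]
    index = {b: k for k, b in enumerate(blocks)}
    Pn = np.zeros((len(blocks), len(blocks)))
    for b in blocks:
        for w in range(m):
            if P[b[-1], w] > 0:
                # Shift the block window and append the new state w.
                Pn[index[b], index[b[1:] + (w,)]] = P[b[-1], w]
    return blocks, Pn

P = np.array([[0.0, 1.0], [0.5, 0.5]])   # a toy 2-state mixing chain
blocks, P2 = n_block_chain(P, 2)
print(blocks)
print(P2)
```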
Mixing times of Markov chains on a cycle with additional long range connections  [PDF]
Balázs Gerencsér
Mathematics , 2014,
Abstract: We develop Markov chain mixing time estimates for a class of Markov chains with restricted transitions. We assume transitions may occur along a cycle of $n$ nodes and on $n^\gamma$ additional edges, where $\gamma < 1$. We find that the mixing times of reversible Markov chains properly interpolate between the mixing times of the cycle with no added edges and of the cycle with $cn$ added edges (which is in turn a Small World Network model). In the case of non-reversible Markov chains, a considerable gap remains between the lower and upper bounds, but simulations suggest that a significant speedup over the reversible case can still be expected.
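A rough simulation sketch in the spirit of the model above, with made-up parameters and not the paper's construction: a lazy random walk on an $n$-cycle with roughly $n^\gamma$ random chords, comparing relaxation times (inverse spectral gaps) with and without the added edges.

```python
# Illustrative sketch: lazy random walk on an n-cycle with extra chords.
import numpy as np

rng = np.random.default_rng(2)

def relaxation_time(P):
    """1 / spectral gap for a lazy (hence aperiodic) reversible walk."""
    eigs = np.sort(np.real(np.linalg.eigvals(P)))[::-1]
    return 1.0 / (1.0 - eigs[1])

def lazy_walk_on_cycle(n, extra_edges=0):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    added = 0
    while added < extra_edges:
        i, j = rng.integers(n, size=2)
        if i != j and A[i, j] == 0:
            A[i, j] = A[j, i] = 1
            added += 1
    deg = A.sum(axis=1, keepdims=True)
    return 0.5 * np.eye(n) + 0.5 * A / deg   # lazy simple random walk

n, gamma = 200, 0.5
print("cycle only        :", relaxation_time(lazy_walk_on_cycle(n)))
print("with n^gamma edges:", relaxation_time(lazy_walk_on_cycle(n, int(n**gamma))))
```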
Moments of recurrence times for Markov chains  [PDF]
Frank Aurzada,Hanna Doering,Marcel Ortgiese,Michael Scheutzow
Mathematics , 2011,
Abstract: We consider moments of the return times (or first hitting times) in a discrete time discrete space Markov chain. It is classical that the finiteness of the first moment of a return time of one state implies the finiteness of the first moment of the first return time of any other state. We extend this statement to moments with respect to a function $f$, where $f$ satisfies a certain, best possible condition. This generalizes results of K. L. Chung (1954), who considered the functions $f(n)=n^p$ and wondered "[...] what property of the power $n^p$ lies behind this theorem [...]" (see Chung (1967), p. 70). We show that exactly those functions that do not increase exponentially -- neither globally nor locally -- satisfy the above statement.
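A small Monte Carlo sketch of the quantity in question, not from the paper: it estimates $E[f(T_0)]$ for the first return time $T_0$ to a state, with $f(n)=n^p$, in an invented three-state chain where all moments are trivially finite.

```python
# Illustrative sketch: Monte Carlo moments of the first return time to state 0.
import numpy as np

rng = np.random.default_rng(3)

P = np.array([[0.1, 0.9, 0.0],
              [0.0, 0.5, 0.5],
              [0.6, 0.2, 0.2]])

def return_time(start=0):
    """Number of steps until the chain first returns to its starting state."""
    state, steps = start, 0
    while True:
        steps += 1
        state = rng.choice(3, p=P[state])
        if state == start:
            return steps

p = 2.0
samples = np.array([return_time() for _ in range(50000)], dtype=float)
print("estimate of E[T_0^p]:", np.mean(samples ** p))
```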
Mixing times of lozenge tiling and card shuffling Markov chains  [PDF]
David Bruce Wilson
Mathematics , 2001, DOI: 10.1214/aoap/1075828054
Abstract: We show how to combine Fourier analysis with coupling arguments to bound the mixing times of a variety of Markov chains. The mixing time is the number of steps a Markov chain takes to approach its equilibrium distribution. One application is to a class of Markov chains introduced by Luby, Randall, and Sinclair to generate random tilings of regions by lozenges. For an L X L region we bound the mixing time by O(L^4 log L), which improves on the previous bound of O(L^7), and we show the new bound to be essentially tight. In another application we resolve a few questions raised by Diaconis and Saloff-Coste, by lower bounding the mixing time of various card-shuffling Markov chains. Our lower bounds are within a constant factor of their upper bounds. When we use our methods to modify a path-coupling analysis of Bubley and Dyer, we obtain an O(n^3 log n) upper bound on the mixing time of the Karzanov-Khachiyan Markov chain for linear extensions.
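The sketch below is not the paper's Fourier/coupling analysis; it only computes, by exact enumeration, the total variation mixing time of a lazy random adjacent-transposition shuffle on a tiny deck, as a concrete instance of the card-shuffling chains mentioned above.

```python
# Illustrative sketch: exact TV mixing time of a lazy adjacent-transposition
# shuffle on a deck small enough to enumerate all permutations.
import itertools
import numpy as np

def adjacent_transposition_chain(n):
    perms = list(itertools.permutations(range(n)))
    index = {p: k for k, p in enumerate(perms)}
    P = np.zeros((len(perms), len(perms)))
    moves = n  # n-1 adjacent swaps plus "do nothing" (laziness), each prob 1/n
    for p in perms:
        P[index[p], index[p]] += 1.0 / moves
        for i in range(n - 1):
            q = list(p)
            q[i], q[i + 1] = q[i + 1], q[i]
            P[index[p], index[tuple(q)]] += 1.0 / moves
    return P

def tv_mixing_time(P, eps=0.25):
    N = P.shape[0]
    pi = np.full(N, 1.0 / N)          # uniform is stationary (doubly stochastic)
    Pt = np.eye(N)
    t = 0
    while True:
        t += 1
        Pt = Pt @ P
        if 0.5 * np.abs(Pt - pi).sum(axis=1).max() <= eps:
            return t

P = adjacent_transposition_chain(4)   # 4 cards, 24 permutations
print("t_mix(1/4) for 4 cards:", tv_mixing_time(P))
```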
The combinational structure of non-homogeneous Markov chains with countable states  [cached]
A. Mukherjea,A. Nakassis
International Journal of Mathematics and Mathematical Sciences , 1983, DOI: 10.1155/s0161171283000320
Abstract: Let P(s,t) denote a non-homogeneous continuous parameter Markov chain with countable state space E and parameter space [a,b], and let R(s,t) be the associated relation on the state space. It is shown in this paper that R(s,t) is reflexive, transitive, and independent of (s,t).
Keywords: non-homogeneous Markov chains; reflexive and transitive relations; homogeneity condition.
Geometric ergodicity for families of homogeneous Markov chains  [PDF]
Leonid Galtchouk,Serguei Pergamenchtchikov
Statistics , 2010,
Abstract: In this paper we find nonasymptotic exponential upper bounds for the deviation in the ergodic theorem for families of homogeneous Markov processes. We also give sufficient conditions for geometric ergodicity that hold uniformly over a parametric family, and we apply this property to the nonasymptotic nonparametric estimation problem for ergodic diffusion processes.
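As a finite-chain analogue of the property discussed above (not the paper's diffusion setting), the sketch below compares the worst-case total variation distance to stationarity with a geometric envelope $\rho^t$, where $\rho$ is the second-largest eigenvalue modulus of an invented transition matrix.

```python
# Illustrative sketch: geometric decay of TV distance for a toy chain,
# with the rate read off from the second-largest eigenvalue modulus.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])

w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

rho = sorted(np.abs(w))[-2]          # second-largest eigenvalue modulus
Pt = np.eye(3)
for t in range(1, 11):
    Pt = Pt @ P
    tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()
    print(f"t={t:2d}  TV={tv:.6f}  rho^t={rho**t:.6f}")
```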