oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Markov degree of the three-state toric homogeneous Markov chain model  [PDF]
David Haws,Abraham Martín del Campo,Akimichi Takemura,Ruriko Yoshida
Statistics , 2012,
Abstract: We consider the three-state toric homogeneous Markov chain model (THMC) without loops and initial parameters. At time $T$, the size of the design matrix is $6 \times 3\cdot 2^{T-1}$ and the convex hull of its columns is the model polytope. We study the behavior of this polytope for $T\geq 3$ and show that it is defined by 24 facets for all $T\ge 5$. Moreover, we give a complete description of these facets. From this, we deduce that the toric ideal associated with the design matrix is generated by binomials of degree at most 6. Our proof is based on a result due to Sturmfels, who gave a bound on the degree of the generators of a toric ideal, assuming normality of the corresponding toric variety. In our setting, we establish the normality of the toric variety associated to the THMC model by studying the geometric properties of the model polytope.
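The column count $3\cdot 2^{T-1}$ in this abstract corresponds to the number of length-$T$ state sequences over three states with no self-loops (each step must change state). A small sketch confirming this count (the helper `thmc_paths` is hypothetical, not from the paper):

```python
from itertools import product

def thmc_paths(T, states=3):
    """Enumerate state sequences of length T over `states` states
    with no self-loops (no two consecutive states equal)."""
    return [p for p in product(range(states), repeat=T)
            if all(a != b for a, b in zip(p, p[1:]))]

# Each of the 3 starting states is followed by T-1 forced changes,
# with 2 choices per change: 3 * 2^(T-1) sequences in total.
for T in range(3, 8):
    assert len(thmc_paths(T)) == 3 * 2 ** (T - 1)
```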
A Markov basis for two-state toric homogeneous Markov chain model without initial parameters  [PDF]
Hisayuki Hara,Akimichi Takemura
Statistics , 2010,
Abstract: We derive a Markov basis consisting of moves of degree at most three for the two-state toric homogeneous Markov chain model of arbitrary length without parameters for initial states. Our basis consists of moves of degree three and degree one, which alter the initial frequencies, in addition to the moves of degree two and degree one for the toric homogeneous Markov chain model with parameters for initial states.
Putting Markov Chains Back into Markov Chain Monte Carlo  [PDF]
Richard J. Barker,Matthew R. Schofield
Advances in Decision Sciences , 2007, DOI: 10.1155/2007/98086
Abstract: Markov chain theory plays an important role in statistical inference, both in the formulation of models for data and in the construction of efficient algorithms for inference. The use of Markov chains in modeling data has a long history; however, the use of Markov chain theory in developing algorithms for statistical inference has only recently become popular. Using mark-recapture models as an illustration, we show how Markov chains can be used both for developing demographic models and for developing efficient algorithms for inference. We anticipate that a major area of future research involving mark-recapture data will be the development of hierarchical models that lead to better demographic models accounting for all uncertainties in the analysis. A key issue is determining when the chains produced by Markov chain Monte Carlo sampling have converged.
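The convergence question raised at the end of this abstract is commonly assessed with the Gelman-Rubin diagnostic, which compares between-chain and within-chain variance across parallel runs. A minimal sketch of that standard diagnostic (not from the paper itself):

```python
import statistics

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor (R-hat).

    `chains` is a list of equal-length lists of scalar MCMC draws.
    Values near 1 suggest the chains agree; values well above 1
    indicate the chains have not yet converged to a common target.
    """
    m = len(chains)            # number of chains
    n = len(chains[0])         # draws per chain
    means = [statistics.fmean(c) for c in chains]
    grand = statistics.fmean(means)
    B = n * sum((mu - grand) ** 2 for mu in means) / (m - 1)   # between-chain
    W = statistics.fmean(statistics.variance(c) for c in chains)  # within-chain
    var_hat = (n - 1) / n * W + B / n
    return (var_hat / W) ** 0.5
```

When all chains share the same mean, B = 0 and R-hat dips slightly below 1; disagreeing chains inflate B and push R-hat above 1.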
Parallel and interacting Markov chains Monte Carlo method  [PDF]
Fabien Campillo,Vivien Rossi
Mathematics , 2006,
Abstract: In many situations it is important to be able to propose $N$ independent realizations of a given distribution law. We propose a strategy for making $N$ parallel Markov chain Monte Carlo (MCMC) chains interact in order to obtain an approximation of an independent $N$-sample of a given target law. In this method each individual chain proposes candidates for all other chains. We prove that the set of interacting chains is itself an MCMC method for the product of $N$ target measures. Compared to independent parallel chains this method is more time-consuming, but we show through concrete examples that it possesses many advantages: it can speed up convergence toward the target law as well as handle the multi-modal case.
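The independent-parallel-chains baseline that this abstract compares against can be sketched as $N$ random-walk Metropolis chains run side by side. This is an illustrative sketch of the baseline only; the paper's interaction mechanism, where each chain proposes candidates for the others, is not reproduced here:

```python
import math
import random

def parallel_metropolis(logpdf, n_chains=4, n_iter=1000, step=1.0, seed=0):
    """N independent random-walk Metropolis chains run in parallel --
    the non-interacting baseline the abstract compares against."""
    rng = random.Random(seed)
    chains = [[rng.gauss(0.0, 1.0)] for _ in range(n_chains)]
    for _ in range(n_iter):
        for c in chains:
            x = c[-1]
            cand = x + rng.gauss(0.0, step)  # symmetric proposal
            # Metropolis accept step: ratio reduces to pi(cand)/pi(x).
            if rng.random() < math.exp(min(0.0, logpdf(cand) - logpdf(x))):
                c.append(cand)
            else:
                c.append(x)
    return chains

# Standard-normal target; the final states approximate N independent
# draws only once each chain has mixed.
chains = parallel_metropolis(lambda x: -0.5 * x * x)
```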
The combinatorial structure of non-homogeneous Markov chains with countable states  [cached]
A. Mukherjea,A. Nakassis
International Journal of Mathematics and Mathematical Sciences , 1983, DOI: 10.1155/s0161171283000320
Abstract: Let P(s,t) denote a non-homogeneous continuous-parameter Markov chain with countable state space E and parameter space [a,b]. It is shown in this paper that the relation R(s,t) is reflexive, transitive, and independent of (s,t). Keywords: non-homogeneous Markov chains --- reflexive and transitive relations --- homogeneity condition.
Mixing times for Markov chains on wreath products and related homogeneous spaces  [PDF]
James Allen Fill,Clyde H. Schoolfield, Jr.
Mathematics , 2000,
Abstract: We develop a method for analyzing the mixing times for a quite general class of Markov chains on the complete monomial group G \wr S_n (the wreath product of a group G with the permutation group S_n) and a quite general class of Markov chains on the homogeneous space (G \wr S_n) / (S_r \times S_{n - r}). We derive an exact formula for the L^2 distance in terms of the L^2 distances to uniformity for closely related random walks on the symmetric groups S_j for 1 \leq j \leq n or for closely related Markov chains on the homogeneous spaces S_{i + j} / (S_i \times S_j) for various values of i and j, respectively. Our results are consistent with those previously known, but our method is considerably simpler and more general.
Geometric ergodicity for families of homogeneous Markov chains  [PDF]
Leonid Galtchouk,Serguei Pergamenchtchikov
Statistics , 2010,
Abstract: In this paper we find nonasymptotic exponential upper bounds for the deviation in the ergodic theorem for families of homogeneous Markov processes. We find some sufficient conditions for geometric ergodicity uniformly over a parametric family. We apply this property to the nonasymptotic nonparametric estimation problem for ergodic diffusion processes.
Degree Bounds for a Minimal Markov Basis for the Three-State Toric Homogeneous Markov Chain Model  [PDF]
David Haws,Abraham Martín del Campo,Ruriko Yoshida
Statistics , 2011,
Abstract: We study the three-state toric homogeneous Markov chain model and three special cases of it, namely: (i) when the initial state parameters are constant, (ii) without self-loops, and (iii) when both conditions hold at the same time. Using as a key tool a directed multigraph associated to the model, the state-graph, we give a bound, independent of time, on the number of vertices of the polytope associated to the model. Based on our computations, we also conjecture the stabilization of the f-vector of the polytope, analyze the normality of the semigroup, and give conjectural bounds on the degree of the Markov bases.
Central limit theorem for triangular arrays of Non-Homogeneous Markov chains  [PDF]
Magda Peligrad
Mathematics , 2010,
Abstract: In this paper we obtain the central limit theorem for triangular arrays of non-homogeneous Markov chains under a condition imposed to the maximal coefficient of correlation. The proofs are based on martingale techniques and a sharp lower bound estimate for the variance of partial sums. The results complement an important central limit theorem of Dobrushin based on the contraction coefficient.
Perfect Sampling of Markov Chains with Piecewise Homogeneous Events  [PDF]
Ana Bušić,Bruno Gaujal,Furcy Pin
Computer Science , 2010,
Abstract: Perfect sampling is a technique that uses coupling arguments to provide a sample from the stationary distribution of a Markov chain in finite time without ever computing the distribution. This technique is very efficient if all the events in the system have the monotonicity property. However, in the general (non-monotone) case, it must consider the whole state space, which limits its application to chains with a state space of small cardinality. We propose here a new approach for the general case that only needs to consider two trajectories. Instead of the original chain, we use two bounding processes (envelopes), and we show that whenever they couple, one obtains a sample under the stationary distribution of the original chain. We show that this new approach is particularly effective when the state space can be partitioned into pieces where envelopes can be easily computed. We further show that most Markovian queueing networks have this property, and we propose efficient algorithms for some of them.
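The monotone case this abstract builds on is classically handled by Propp-Wilson coupling from the past: run the chain from the top and bottom states with shared randomness from further and further in the past until the trajectories coalesce. A minimal sketch for a small monotone birth-death chain (illustrative only; this is the classical algorithm, not the authors' envelope construction):

```python
import random

def monotone_update(x, u, p=0.5, n=4):
    """Monotone random-function representation of a birth-death chain
    on {0,...,n}: move up if u < p, else down, clamped at the ends."""
    return min(x + 1, n) if u < p else max(x - 1, 0)

def cftp(n=4, seed=0):
    """Propp-Wilson coupling from the past for a monotone chain.

    Run trajectories from the extreme states n (top) and 0 (bottom)
    starting at time -T, reusing the same randomness on each retry;
    when they coalesce, the common value is an exact stationary sample.
    """
    rng = random.Random(seed)
    us = []   # randomness for times -len(us), ..., -1 (oldest first)
    T = 1
    while True:
        while len(us) < T:
            us.insert(0, rng.random())  # extend further into the past
        lo, hi = 0, n
        for u in us[-T:]:               # same u drives both trajectories
            lo = monotone_update(lo, u, n=n)
            hi = monotone_update(hi, u, n=n)
        if lo == hi:
            return lo
        T *= 2                          # go further back and retry
```

Monotonicity (the same `u` never lets the lower trajectory overtake the upper one) is exactly what makes two trajectories suffice; without it, the general-case techniques of the paper are needed.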
Copyright © 2008-2017 Open Access Library. All rights reserved.