Abstract:
We prove some limit properties of the harmonic mean of a random transition probability for finite Markov chains indexed by a homogeneous tree in a nonhomogeneous Markovian environment with finite state space. In particular, we extend the method of studying tree-indexed processes in deterministic environments to the case of random environments.

1. Introduction
A tree is a connected graph that contains no circuits. Given any two vertices $\sigma$ and $t$, let $\overline{\sigma t}$ be the unique path connecting $\sigma$ and $t$, and define the graph distance $d(\sigma, t)$ to be the number of edges contained in the path $\overline{\sigma t}$. Let $T$ be an infinite tree with root $o$. The set of all vertices at distance $n$ from the root is called the $n$th generation of $T$, which is denoted by $L_n$. We denote by $T^{(n)}$ the union of the first $n$ generations of $T$. For each vertex $t$, there is a unique path from $o$ to $t$, and we write $|t|$ for the number of edges on this path. We denote the first predecessor of $t$ by $1_t$. The degree of a vertex is defined to be the number of its neighbors. If every vertex of the tree has degree $N + 1$, we call it a Cayley tree, which is denoted by $T_{C,N}$. Thus, the root vertex has $N + 1$ neighbors in the first generation and every other vertex has $N$ neighbors in the next generation. For any two vertices $\sigma$ and $t$ of the tree $T$, write $\sigma \le t$ if $\sigma$ is on the unique path from the root $o$ to $t$. We denote by $\sigma \wedge t$ the farthest vertex from $o$ satisfying $\sigma \wedge t \le \sigma$ and $\sigma \wedge t \le t$. We use the notation $X^A = \{X_t,\ t \in A\}$ and denote by $|A|$ the number of vertices of $A$. In the following, we always let $T$ denote the Cayley tree $T_{C,N}$.

A tree-indexed Markov chain is a particular case of a Markov random field on a tree. Kemeny et al. [1] and Spitzer [2] introduced two special finite tree-indexed Markov chains with a finite transition matrix, assumed to be positive and reversible with respect to its stationary distribution; these tree-indexed Markov chains ensure that the cylinder probabilities are independent of the direction we travel along a path.
In this paper, we omit this assumption and adopt another version of the definition of tree-indexed Markov chains, put forward by Benjamini and Peres [3]. Yang and Ye [4] extended it to the case of nonhomogeneous Markov chains indexed by an infinite Cayley tree, and we restate it here as follows.

Definition 1 ($T$-indexed nonhomogeneous Markov chains (see [4])). Let $T$ be an infinite Cayley tree, $S$ a finite state space, and $\{X_t,\ t \in T\}$ a stochastic process defined on a probability space $(\Omega, \mathcal F, P)$, which takes values in the finite set $S$. Let

(1) $\mu = (\mu(x),\ x \in S)$

be a distribution on $S$ and

(2) $P_t = (P_t(y, x)),\quad x, y \in S,\ t \in T,$

be transition probability matrices on $S^2$. If, for any vertex $t$,

(3) $P(X_t = x \mid X_{1_t} = y \text{ and } X_\sigma \text{ for } \sigma \wedge t \ne t) = P(X_t = x \mid X_{1_t} = y) = P_t(y, x),\quad x, y \in S,$

then $\{X_t,\ t \in T\}$ will be called an $S$-valued nonhomogeneous Markov chain indexed by the infinite Cayley tree with initial distribution (1) and transition matrices (2).
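As a toy illustration of the object in Definition 1, the following sketch simulates a finite-state nonhomogeneous Markov chain on the first generations of a Cayley tree, where each vertex draws its state from its first predecessor's state through a vertex-dependent transition matrix. All names here (`cayley_tree_chain`, `mu`, `P_of`) are our own illustrative choices, not notation from the paper, and we assume the convention that the root has N + 1 children while every other vertex has N.

```python
import random

def cayley_tree_chain(n, N, states, mu, P_of, seed=0):
    """Simulate a tree-indexed nonhomogeneous Markov chain.

    Returns {vertex: state} for the first n generations.  Vertices are
    tuples: () is the root, and a vertex t has children t + (i,).
    mu is the initial distribution {state: prob}; P_of(t) returns the
    transition matrix {y: {x: prob}} used at vertex t (nonhomogeneity:
    the matrix may depend on the vertex).
    """
    rng = random.Random(seed)
    root = ()
    X = {root: rng.choices(states, weights=[mu[s] for s in states])[0]}
    level = [root]
    for _ in range(n):
        nxt = []
        for t in level:
            # assumed convention: root has N + 1 children, others have N
            kids = N + 1 if t == root else N
            for i in range(kids):
                c = t + (i,)
                y = X[t]  # state of the first predecessor of c
                P = P_of(c)
                X[c] = rng.choices(states, weights=[P[y][x] for x in states])[0]
                nxt.append(c)
        level = nxt
    return X
```

For instance, with N = 2 and a vertex-independent matrix the call `cayley_tree_chain(2, 2, ['a', 'b'], mu, lambda t: P)` produces states for the 1 + 3 + 6 = 10 vertices of the first two generations.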

Abstract:
We consider a sequence of Markov chains $(\mathcal X^n)_{n=1,2,\dots}$ with $\mathcal X^n = (X^n_\sigma)_{\sigma\in\mathcal T}$, indexed by the full binary tree $\mathcal T = \mathcal T_0 \cup \mathcal T_1 \cup \dots$, where $\mathcal T_k$ is the $k$th generation of $\mathcal T$. In addition, let $(\Sigma_k)_{k=0,1,2,\dots}$ be a random walk on $\mathcal T$ with $\Sigma_k \in \mathcal T_k$ and $\widetilde{\mathcal R}^n = (\widetilde R_t^n)_{t\geq 0}$ with $\widetilde R_t^n := X^n_{\Sigma_{[tn]}}$, arising by observing the Markov chain $\mathcal X^n$ along the random walk. We present a law of large numbers concerning the empirical measure process $\widetilde{\mathcal Z}^n = (\widetilde Z_t^n)_{t\geq 0}$, where $\widetilde{Z}_t^n = \frac{1}{2^{[tn]}}\sum_{\sigma\in\mathcal T_{[tn]}} \delta_{X_\sigma^n}$, as $n\to\infty$. Precisely, we show that if $\widetilde{\mathcal R}^n \to \mathcal R$ for some Feller process $\mathcal R = (R_t)_{t\geq 0}$ with deterministic initial condition, then $\widetilde{\mathcal Z}^n \to \mathcal Z$ with $Z_t = \delta_{\mathcal L(R_t)}$.
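The two observation schemes in this abstract (the chain watched along a single random-walk lineage, and the per-generation empirical measure) can be illustrated with a toy two-state chain on the binary tree. This is our own minimal sketch with invented names (`binary_tree_chain`, `p_flip`), not the construction of the paper: each child copies its parent's state and flips it with a fixed probability.

```python
import random
from collections import Counter

def binary_tree_chain(depth, p_flip, rng):
    """Two-state chain on the full binary tree: X[v] for vertices v encoded
    as 0/1 tuples; each child flips the parent's state with prob. p_flip."""
    X = {(): 0}
    for k in range(depth):
        for v in [t for t in X if len(t) == k]:
            for b in (0, 1):
                X[v + (b,)] = X[v] ^ (rng.random() < p_flip)
    return X

rng = random.Random(1)
X = binary_tree_chain(10, 0.25, rng)

# (i) the chain observed along a random walk Sigma_0, Sigma_1, ...
#     with Sigma_k in generation k (uniform choice of child at each step)
walk = ()
R = [X[walk]]
for _ in range(10):
    walk = walk + (rng.randrange(2),)
    R.append(X[walk])

# (ii) the normalized empirical measure of generation k
k = 10
gen = [s for v, s in X.items() if len(v) == k]
emp = {s: c / len(gen) for s, c in Counter(gen).items()}
```

Here `R` is one realization of the observed-along-the-walk process and `emp` is the empirical distribution of states over the 2^k vertices of generation k.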

Abstract:
We study the size properties of a general model of fractal sets based on a tree-indexed family of random compact sets and a tree-indexed Markov chain. These fractals may be regarded as a generalization of those resulting from the Moran-like deterministic or random recursive constructions considered by various authors. Among other applications, we consider various extensions of Mandelbrot's fractal percolation process.

Abstract:
Let P(s,t) denote a non-homogeneous continuous-parameter Markov chain with countable state space E and parameter space [a, b]. A relation R(s,t) on E is defined in terms of the positivity of the transition probabilities p_ij(s,t). It is shown in this paper that R(s,t) is reflexive, transitive, and independent of (s,t), s < t.

Keywords: non-homogeneous Markov chains; reflexive and transitive relations; homogeneity condition.
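The definition of the relation is only partially legible in this copy. Purely as a hypothetical illustration (an assumption on our part, not necessarily the paper's relation), take i R j to mean that the one-step transition probability p_ij is positive for a fixed stochastic matrix, and check reflexivity and transitivity directly:

```python
# Hypothetical relation: i R j  iff  P[i][j] > 0 (positivity pattern of a
# fixed stochastic matrix with positive diagonal, chosen so R is transitive).
P = [[0.4, 0.4, 0.2],
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]]
n = len(P)
R = [[P[i][j] > 0 for j in range(n)] for i in range(n)]

# reflexive: i R i for every state i
reflexive = all(R[i][i] for i in range(n))

# transitive: i R j and j R k  imply  i R k
transitive = all(
    R[i][k] or not (R[i][j] and R[j][k])
    for i in range(n) for j in range(n) for k in range(n)
)
```

For this upper-triangular positivity pattern the relation is both reflexive and transitive; a matrix with a zero diagonal entry, or with reachability that is not closed under composition, would fail the respective check.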

Abstract:
We develop a method for analyzing the mixing times for a quite general class of Markov chains on the complete monomial group G \wr S_n (the wreath product of a group G with the permutation group S_n) and a quite general class of Markov chains on the homogeneous space (G \wr S_n) / (S_r \times S_{n - r}). We derive an exact formula for the L^2 distance in terms of the L^2 distances to uniformity for closely related random walks on the symmetric groups S_j for 1 \leq j \leq n or for closely related Markov chains on the homogeneous spaces S_{i + j} / (S_i \times S_j) for various values of i and j, respectively. Our results are consistent with those previously known, but our method is considerably simpler and more general.

Abstract:
In this paper we find nonasymptotic exponential upper bounds for the deviation in the ergodic theorem for families of homogeneous Markov processes. We find some sufficient conditions for geometric ergodicity uniformly over a parametric family. We apply this property to the nonasymptotic nonparametric estimation problem for ergodic diffusion processes.

Abstract:
Given a finite typed rooted tree $T$ with $n$ vertices, the {\em empirical subtree measure} is the uniform measure on the $n$ typed subtrees of $T$ formed by taking all descendants of a single vertex. We prove a large deviation principle in $n$, with explicit rate function, for the empirical subtree measures of multitype Galton-Watson trees conditioned to have exactly $n$ vertices. In the process, we extend the notions of shift-invariance and specific relative entropy--as typically understood for Markov fields on deterministic graphs such as $\mathbb Z^d$--to Markov fields on random trees. We also develop single-generation empirical measure large deviation principles for a more general class of random trees including trees sampled uniformly from the set of all trees with $n$ vertices.
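The empirical subtree measure defined above is concrete enough to compute by hand for a tiny typed tree. The following sketch (our own illustration; the encoding of typed trees as nested tuples is an assumption) places mass 1/n on the subtree of descendants of each of the n vertices:

```python
from collections import Counter

# A typed rooted tree with n = 4 vertices: {vertex: (type, [children])}.
tree = {
    0: ('a', [1, 2]),
    1: ('b', [3]),
    2: ('a', []),
    3: ('b', []),
}

def subtree(v):
    """Canonical form (type, tuple of child subtrees) of the subtree at v."""
    typ, kids = tree[v]
    return (typ, tuple(subtree(c) for c in kids))

# empirical subtree measure: uniform over the n subtrees, one per vertex
n = len(tree)
counts = Counter(subtree(v) for v in tree)
measure = {t: c / n for t, c in counts.items()}
```

Here all four subtrees happen to be distinct, so the measure assigns weight 1/4 to each; repeated isomorphic typed subtrees would accumulate weight.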

Abstract:
Markov chain models are used in various fields, such as the behavioral sciences and econometrics. Although the goodness of fit of the model is usually assessed by large-sample approximation, it is desirable to use conditional tests when the sample size is not large. We study Markov bases for performing conditional tests of the toric homogeneous Markov chain model, which is the envelope exponential family for the usual homogeneous Markov chain model. We give a complete description of a Markov basis for the following cases: i) two states, arbitrary length; ii) arbitrary finite state space and length three. The general case remains a conjecture. We also present a numerical example of conditional tests based on our Markov basis.

Abstract:
In this paper we obtain the central limit theorem for triangular arrays of non-homogeneous Markov chains under a condition imposed on the maximal correlation coefficient. The proofs are based on martingale techniques and a sharp lower bound for the variance of partial sums. The results complement an important central limit theorem of Dobrushin based on the contraction coefficient.

Abstract:
Perfect sampling is a technique that uses coupling arguments to produce a sample from the stationary distribution of a Markov chain in finite time without ever computing that distribution. The technique is very efficient when all the events in the system have the monotonicity property. In the general (non-monotone) case, however, it must track the whole state space, which limits its application to chains with state spaces of small cardinality. We propose a new approach for the general case that needs to consider only two trajectories. Instead of the original chain, we use two bounding processes (envelopes), and we show that whenever they couple, one obtains a sample from the stationary distribution of the original chain. We show that this new approach is particularly effective when the state space can be partitioned into pieces on which the envelopes are easy to compute. We further show that most Markovian queueing networks have this property, and we propose efficient algorithms for some of them.
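For readers unfamiliar with perfect sampling, here is a minimal coupling-from-the-past sketch in the easy monotone case that the abstract contrasts with (a standard textbook construction, not the envelope algorithm of the paper). Two trajectories started from the top and bottom states are driven by the same randomness; once they meet, the common value is an exact sample from the stationary distribution, which for this clamped lazy walk is uniform on {0, ..., M}.

```python
import random

def update(x, u, M):
    """Monotone update for a lazy walk on {0,...,M}: down if u < 1/3,
    up if u > 2/3, hold otherwise (moves clamped at the boundaries)."""
    if u < 1 / 3:
        return max(x - 1, 0)
    if u > 2 / 3:
        return min(x + 1, M)
    return x

def cftp(M, seed=0):
    """Coupling from the past with the two extreme trajectories.

    Doubles the lookback horizon T until the trajectories started at 0
    and M at time -T have coupled by time 0, reusing the randomness for
    the more recent steps (essential for exactness)."""
    rng = random.Random(seed)
    us = []          # update variables for times -len(us), ..., -1
    T = 1
    while True:
        # prepend fresh randomness for the newly added earlier times
        us = [rng.random() for _ in range(T - len(us))] + us
        lo, hi = 0, M
        for u in us:
            lo, hi = update(lo, u, M), update(hi, u, M)
        if lo == hi:  # coupled: every start state gives this same value
            return lo
        T *= 2
```

Monotonicity is what lets two trajectories suffice: any state started between `lo` and `hi` stays sandwiched under the common updates, so coupling of the extremes certifies coupling of the whole state space, which is exactly the property the paper's envelopes recover in the non-monotone case.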