Abstract:
The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to its merits, and the paper is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clean for time-reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted here quantifies the relaxation time for all finite ergodic chains (cf. the discussion of Q1(t) below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.

Abstract:
Motivated by a model presented by S. Gudder, we study a quantum generalization of Markov chains and discuss the relation between these maps and open quantum random walks, a class of quantum channels described by S. Attal et al. We consider processes which are nonhomogeneous in time, i.e., a possibly distinct evolution kernel acts at each time step. Inspired by a spectral technique described by L. Saloff-Coste and J. Z\'u\~niga, we define a notion of ergodicity for nonhomogeneous quantum Markov chains and describe a criterion for ergodicity of such objects in terms of singular values. As a consequence we obtain a quantum version of the classical probability result concerning the behavior of the columns (or rows) of the iterates of a stochastic matrix induced by a finite, irreducible, aperiodic Markov chain. We are also able to relate the ergodic property presented here with the notions of weak and uniform ergodicity known in the literature of noncommutative $L^1$-spaces.
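The classical result referred to above can be illustrated numerically: for a finite, irreducible, aperiodic chain, the rows of the iterates of its (row-)stochastic matrix all converge to the same stationary distribution. A minimal NumPy sketch, using a hypothetical 3-state matrix chosen only for illustration (not taken from the paper):

```python
import numpy as np

# Row-stochastic matrix of a finite, irreducible, aperiodic Markov chain
# (hypothetical 3-state example; all entries positive, so aperiodicity
# and irreducibility are immediate).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

Pn = np.linalg.matrix_power(P, 50)

# Every row of P^n approaches the same stationary distribution pi.
pi = Pn[0]
print(np.allclose(Pn, np.tile(pi, (3, 1)), atol=1e-10))  # rows agree
print(np.allclose(pi @ P, pi))                           # pi is stationary
```

Both checks print `True`: after 50 iterations the rows of `P^n` coincide to high precision, and the common row is a fixed point of `P`.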

Abstract:
We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all $L^2$ functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis--Hastings algorithms.

Abstract:
It is well known that for a strictly stationary, reversible, Harris recurrent Markov chain, the $\rho$-mixing condition is equivalent to geometric ergodicity and to a "spectral gap" condition. In this note, it will be shown with an example that for that class of Markov chains, the "interlaced" variant of the $\rho$-mixing condition fails to be equivalent to those conditions.

Abstract:
We study properties of the Laplace transforms of non-negative additive functionals of Markov chains. We are particularly interested in a multiplicative ergodicity property used in [18] to study bifurcating processes with ancestral dependence. We develop a general approach based on the operator perturbation method. We apply our general results to two examples of Markov chains, including a linear autoregressive model. In these two examples the operator-type assumptions reduce to some expected finite moment conditions on the functional (no exponential moment conditions are assumed in this work).

Abstract:
We consider a bivariate stationary Markov chain $(X_n,Y_n)_{n\ge0}$ in a Polish state space, where only the process $(Y_n)_{n\ge0}$ is presumed to be observable. The goal of this paper is to investigate the ergodic theory and stability properties of the measure-valued process $(\Pi_n)_{n\ge0}$, where $\Pi_n$ is the conditional distribution of $X_n$ given $Y_0,...,Y_n$. We show that the ergodic and stability properties of $(\Pi_n)_{n\ge0}$ are inherited from the ergodicity of the unobserved process $(X_n)_{n\ge0}$ provided that the Markov chain $(X_n,Y_n)_{n\ge0}$ is nondegenerate, that is, its transition kernel is equivalent to the product of independent transition kernels. Our main results generalize, subsume and in some cases correct previous results on the ergodic theory of nonlinear filters.

Abstract:
We apply Doeblin's ergodicity coefficient as a computational tool to approximate the occupancy distribution of a set of states in a homogeneous but possibly non-stationary finite Markov chain. Our approximation is based on new properties satisfied by this coefficient, which allow us to approximate a chain of duration n by independent and short-lived realizations of an auxiliary homogeneous Markov chain of duration of order ln(n). Our approximation may be particularly useful when exact calculations via first-step methods or transfer matrices are impractical, and asymptotic approximations may not yet be reliable. Our findings may find applications to pattern problems in Markovian and non-Markovian sequences that are treatable via embedding techniques.
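As background for the coefficient used above, a minimal sketch of the standard Doeblin ergodicity coefficient and the total-variation contraction it yields; the matrix and the two initial distributions are hypothetical examples, not taken from the paper:

```python
import numpy as np

def doeblin_coefficient(P):
    """Doeblin's ergodicity coefficient of a row-stochastic matrix P:
    the sum over columns of the smallest entry in each column."""
    return P.min(axis=0).sum()

def tv(mu, nu):
    """Total-variation distance between two probability vectors."""
    return 0.5 * np.abs(mu - nu).sum()

# Hypothetical 3-state chain for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
alpha = doeblin_coefficient(P)  # here 0.2 + 0.3 + 0.2 = 0.7

mu = np.array([1.0, 0.0, 0.0])
nu = np.array([0.0, 0.0, 1.0])

# One step of the chain contracts total-variation distance by (1 - alpha):
#   d_TV(mu P, nu P) <= (1 - alpha) * d_TV(mu, nu).
print(tv(mu @ P, nu @ P) <= (1 - alpha) * tv(mu, nu))  # True
```

Iterating this bound is what gives geometric convergence at rate `(1 - alpha)**n`, which is the kind of quantitative control the approximation scheme above exploits.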

Abstract:
By comparing the transition matrices of two nonhomogeneous Markov chains, we discuss the relations between the ergodicity of the two chains and obtain some sufficient conditions for a nonhomogeneous Markov chain to be strongly ergodic. The relations between uniform strong ergodicity and uniform weak ergodicity for a nonhomogeneous Markov chain are analysed, and some sufficient conditions for uniform strong ergodicity of a nonhomogeneous Markov chain are obtained.

Abstract:
In this paper we prove a sharp quantitative version of Kendall's theorem. Kendall's theorem states that under some mild conditions imposed on a probability distribution on the positive integers (i.e., a probability sequence) one can prove convergence of its renewal sequence. Due to the well-known first entrance last exit decomposition, such results are of interest in the stability theory of time-homogeneous Markov chains. In particular, the approach may be used to measure rates of convergence of geometrically ergodic Markov chains and consequently implies estimates on the convergence of MCMC estimators.

Abstract:
We consider a population with non-overlapping generations, whose size goes to infinity. It is described by a discrete genealogy which may be time non-homogeneous, and we pay special attention to branching trees in varying environments. A Markov chain models the dynamics of the trait of each individual along this genealogy and may also be time non-homogeneous. Such models are motivated by transmission processes in cell division, reproduction-dispersion dynamics, or sampling problems in evolution. We want to determine the evolution of the distribution of the traits among the population, namely the asymptotic behavior of the proportion of individuals with a given trait. We prove some quenched laws of large numbers which rely on the ergodicity of an auxiliary process, in the same vein as \cite{guy,delmar}. Applications to time-inhomogeneous Markov chains lead us to derive a backward (with respect to the environment) law of large numbers and a law of large numbers on the whole population until generation $n$. A central limit theorem is also established in the transient case.