Math 777 — Stochastic Processes: Quantitative Aspects of Finite Markov Chains
Fall 2002
Instructor: Laurent Saloff-Coste
Time: TR 8:40-9:55
Room: Malott 205
This will be an introduction to some of the modern techniques used to study Markov chains, mostly on finite state spaces. Finite Markov chains are simple stochastic processes evolving on a finite set. A familiar example comes from shuffling a deck of n cards, which can be modeled by certain Markov chains on the symmetric group on n objects. One typical behavior of a finite Markov chain is convergence towards an equilibrium measure; this is what happens when one shuffles cards to mix them up, in which case the equilibrium measure is the uniform probability on all n! possible decks. Finite Markov chains are used in many computational algorithms in the sciences (Markov chain Monte Carlo algorithms). In these applications, a crucial question is to understand how long it takes for a given finite Markov chain to become close to its equilibrium measure.
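The shuffling example above can be simulated directly. The sketch below (an illustration, not part of the course material) shuffles a tiny deck of n = 3 cards by repeated random transpositions and estimates the total variation distance between the distribution of the shuffled deck and the uniform distribution on all n! = 6 orderings; the chain, deck size, and sample sizes are illustrative choices.

```python
import itertools
import random
from collections import Counter

# Illustrative sketch: shuffle a deck of n = 3 cards by random
# transpositions (the two positions are chosen independently and
# uniformly, so they may coincide) and estimate the total variation
# distance to the uniform distribution on the n! = 6 orderings.
n = 3
decks = [tuple(p) for p in itertools.permutations(range(n))]

def random_transposition_shuffle(deck, steps, rng):
    """Apply `steps` random transpositions to a deck."""
    deck = list(deck)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        deck[i], deck[j] = deck[j], deck[i]
    return tuple(deck)

def tv_distance_to_uniform(steps, samples=100_000, seed=0):
    """Monte Carlo estimate of the total variation distance after `steps`."""
    rng = random.Random(seed)
    counts = Counter(
        random_transposition_shuffle(range(n), steps, rng)
        for _ in range(samples)
    )
    return 0.5 * sum(
        abs(counts[d] / samples - 1 / len(decks)) for d in decks
    )

for steps in (1, 2, 5, 10):
    print(steps, round(tv_distance_to_uniform(steps), 3))
```

The estimated distance decreases towards 0 as the number of shuffles grows, which is the convergence to equilibrium described above; quantifying the rate of this decrease is the subject of the course.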
In this course we will discuss a number of techniques to obtain explicit bounds on the convergence of certain finite Markov chains to equilibrium. Probabilistic methods such as coupling and strong stationary times will be presented as well as analytic tools including diagonalization, spectral gap, logarithmic Sobolev inequalities, hypercontractivity, comparison techniques, etc.
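As a small illustration of the analytic side (not an example from the course itself), the sketch below computes the spectral gap of one concrete reversible chain, the lazy simple random walk on a cycle of N = 6 points; the choice of chain and of N is an assumption made purely for illustration.

```python
import numpy as np

# Hedged sketch: for a reversible chain, the eigenvalues of the
# transition matrix P control the speed of convergence, and the
# spectral gap 1 - lambda_2 governs the exponential rate.
# P below is the lazy simple random walk on a cycle of N = 6 points:
# hold with probability 1/2, step left or right with probability 1/4.
N = 6
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5
    P[i, (i + 1) % N] += 0.25
    P[i, (i - 1) % N] += 0.25

# P is symmetric here, so eigvalsh applies; eigenvalues are real and
# equal to (1 + cos(2*pi*k/N))/2 for k = 0, ..., N-1.
eigenvalues = np.sort(np.linalg.eigvalsh(P))[::-1]
spectral_gap = 1.0 - eigenvalues[1]
print("eigenvalues:", np.round(eigenvalues, 4))
print("spectral gap:", round(spectral_gap, 4))
```

For this chain the second eigenvalue is 3/4, so the spectral gap is 1/4; larger gaps correspond to faster convergence, and comparison techniques of the kind listed above are used to bound such gaps for chains too large to diagonalize directly.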
Prerequisites are minimal. Familiarity with basic functional analysis and probability theory is a plus.
References:
Aldous D. and Fill J. (2000) Reversible Markov Chains and Random Walks on Graphs, preliminary draft of a book, available at www.stat.berkeley.edu/users/aldous
Bremaud P. (2000) Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer.
Diaconis P. (1986) Group Representations in Probability and Statistics. IMS Lecture Notes-Monograph Series.
Kemeny J.G. and Snell J.L. (1960) Finite Markov Chains. Van Nostrand.
Saloff-Coste L. (1997) Lectures on finite Markov chains. Lecture Notes in Mathematics 1665, Springer.