
Markov Chains and Mixing Times, second edition

David A. Levin and Yuval Peres, with contributions by Elizabeth L. Wilmer. University of Oregon.


Transcription of Markov Chains and Mixing Times, second edition

Contents

Preface to second edition
Preface to first edition
Overview
For the Reader
For the Instructor
For the Expert
Acknowledgements

Part I: Basic Methods and Examples

Chapter 1. Introduction to Finite Markov Chains (Finite Markov Chains; Random Mapping Representation; Irreducibility and Aperiodicity; Random Walks on Graphs; Stationary Distributions; Reversibility and Time Reversals; Classifying the States of a Markov Chain*)
Chapter 2. Classical (and Useful) Markov Chains (Gambler's Ruin; Coupon Collecting; The Hypercube and the Ehrenfest Urn Model; The Pólya Urn Model; Birth-and-Death Chains; Random Walks on Groups; Random Walks on Z and Reflection Principles)
Chapter 3. Markov Chain Monte Carlo: Metropolis and Glauber Chains (Metropolis Chains; Glauber Dynamics)
Chapter 4. Introduction to Markov Chain Mixing (Total Variation Distance; Coupling and Total Variation Distance; The Convergence Theorem; Standardizing Distance from Stationarity; Mixing Time; Mixing and Time Reversal; ℓp Distance and Mixing)
Chapter 5. Coupling (Bounding Total Variation Distance; Grand Couplings)
Chapter 6. Strong Stationary Times (Top-to-Random Shuffle; Markov Chains with Filtrations; Stationary Times; Strong Stationary Times and Bounding Distance; Stationary Times and Cesàro Mixing Time; Optimal Strong Stationary Times*)
Chapter 7. Lower Bounds on Mixing Times (Counting and Diameter Bounds; Bottleneck Ratio; Distinguishing Statistics; Examples)
Chapter 8. The Symmetric Group and Shuffling Cards (The Symmetric Group; Random Transpositions; Riffle Shuffles)
Chapter 9. Random Walks on Networks (Networks and Reversible Markov Chains; Harmonic Functions; Voltages and Current Flows; Effective Resistance; Escape Probabilities on a Square)
Chapter 10. Hitting Times (Random Target Times; Commute Time; Hitting Times on Trees; Hitting Times for Eulerian Graphs; Hitting Times for the Torus; Bounding Mixing Times via Hitting Times; Mixing for the Walk on Two Glued Graphs)
Chapter 11. Cover Times (The Matthews Method; Applications of the Matthews Method; Spanning Tree Bound for Cover Time; Waiting for All Patterns in Coin Tossing)
Chapter 12. Eigenvalues (The Spectral Representation of a Reversible Transition Matrix; The Relaxation Time; Eigenvalues and Eigenfunctions of Some Simple Random Walks; Product Chains; Spectral Formula for the Target Time; An ℓ2 Bound; Time Averages)

Part II: The Plot Thickens

Chapter 13. Eigenfunctions and Comparison of Chains (Bounds on Spectral Gap via Contractions; The Dirichlet Form and the Bottleneck Ratio; Simple Comparison of Markov Chains; The Path Method; Wilson's Method for Lower Bounds; Expander Graphs*)
Chapter 14. The Transportation Metric and Path Coupling (The Transportation Metric; Path Coupling; Rapid Mixing for Colorings; Approximate Counting)
Chapter 15. The Ising Model (Fast Mixing at High Temperature; The Complete Graph; The Cycle; The Tree; Block Dynamics; Lower Bound for Ising on Square*)
Chapter 16. From Shuffling Cards to Shuffling Genes (Random Adjacent Transpositions; Shuffling Genes)
Chapter 17. Martingales and Evolving Sets (Definition and Examples; Optional Stopping Theorem; Evolving Sets; A General Bound on Return Probabilities; Harmonic Functions and the Doob h-Transform; Strong Stationary Times from Evolving Sets)
Chapter 18. The Cutoff Phenomenon (Examples of Cutoff; A Necessary Condition for Cutoff; Separation Cutoff)
Chapter 19. Lamplighter Walks (Relaxation Time Bounds; Mixing Time Bounds; Examples)
Chapter 20. Continuous-Time Chains* (Continuous-Time Mixing; Spectral Gap; Product Chains)
Chapter 21. Countable State Space Chains* (Recurrence and Transience; Infinite Networks; Positive Recurrence and Convergence; Null Recurrence and Convergence; Bounds on Return Probabilities)
Chapter 22. Monotone Chains (Stochastic Domination; Definition and Examples of Monotone Markov Chains; Positive Correlations; The Second Eigenfunction; Censoring Inequality; Proof of Strassen's Theorem)
Chapter 23. The Exclusion Process (Mixing Time of k-Exclusion; Biased Exclusion)
Chapter 24. Cesàro Mixing Time, Stationary Times, and Hitting Large Sets (Equivalence of t_stop, t_Ces, and t_G for Reversible Chains; Halting States and Mean-Optimal Stopping Times; Regularity Properties of Geometric Mixing Times; Upward Skip-Free Chains; Application to Robustness of Mixing)
Chapter 25. Coupling from the Past (Monotone CFTP; Perfect Sampling via Coupling from the Past; The Hardcore Model; Random State of an Unknown Markov Chain)
Chapter 26. Open Problems (The Ising Model; Other Problems; Update: Previously Open Problems)

Appendix A. Background Material (Probability Spaces and Random Variables; Conditional Expectation; Strong Markov Property; Metric Spaces; Linear Algebra; Miscellaneous)
Appendix B. Introduction to Simulation (What Is Simulation?; Von Neumann Unbiasing*; Simulating Discrete Distributions; Inverse Distribution Function; Acceptance-Rejection Sampling; Simulating Normal Random Variables; Sampling from the Simplex; About Random Numbers; Sampling from Large Sets*)
Appendix C. Ergodic Theorem (Ergodic Theorem*)
Appendix D. Solutions to Selected Exercises
Bibliography
Notation Index
Index

Preface to second edition

Since the publication of the first edition, the field of mixing times has continued to enjoy rapid expansion.

In particular, many of the open problems posed in the first edition have been solved. The book has been used in courses at numerous universities, motivating us to update it. In the eight years since the first edition appeared, we have made corrections and improvements throughout the book. We added three new chapters: Chapter 22 on monotone chains, Chapter 23 on the exclusion process, and Chapter 24, which relates mixing times and hitting time parameters to stationary stopping times. Chapter 4 now includes an introduction to mixing times in ℓp, which reappear in Chapter 10. The latter chapter has several new topics, including estimates for hitting times on trees and Eulerian digraphs. A bound for cover times using spanning trees has been added to Chapter 11, which also now includes a general bound on cover times for regular graphs.

The exposition in Chapter 6 and Chapter 17 now employs filtrations rather than relying on the random mapping representation. To reflect the key developments since the first edition, especially breakthroughs on the Ising model and the cutoff phenomenon, the Notes to the chapters and the open problems have been updated. We thank the many careful readers who sent us comments and corrections: Anselm Adelmann, Amitabha Bagchi, Nathanael Berestycki, Olena Bormashenko, Krzysztof Burdzy, Gerandy Brito, Darcy Camargo, Varsha Dani, Sukhada Fadnavis, Tertuliano Franco, Alan Frieze, Reza Gheissari, Jonathan Hermon, Ander Holroyd, Kenneth Hu, John Jiang, Svante Janson, Melvin Kianmanesh Rad, Yin Tat Lee, Zhongyang Li, Eyal Lubetzky, Abbas Mehrabian, R. Misturini, L. Morgado, Asaf Nachmias, Fedja Nazarov, Joe Neeman, Ross Pinsky, Anthony Quas, Miklos Racz, Dinah Shender, Sloane, Jeff Steif, Izabella Stuhl, Jan Swart, Ryokichi Tanaka, Daniel Wu, and Zhen Zhu.

We are particularly grateful to Daniel Jerison, Pawel Pralat, and Perla Sousi, who sent us long lists of insightful comments.

Preface to first edition

Markov first studied the stochastic processes that came to be named after him in 1906. Approximately a century later, there is an active and diverse interdisciplinary community of researchers using Markov chains in computer science, physics, statistics, bioinformatics, engineering, and many other areas.

The classical theory of Markov chains studied fixed chains, and the goal was to estimate the rate of convergence to stationarity of the distribution at time t, as t → ∞. In the past two decades, as interest in chains with large state spaces has increased, a different asymptotic analysis has emerged. Some target distance to the stationary distribution is prescribed; the number of steps required to reach this target is called the mixing time of the chain.
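In the notation developed in Chapter 4, writing π for the stationary distribution and ‖·‖_TV for total variation distance, this definition reads as follows (a standard formulation, restated here for the reader's convenience):

\[
d(t) = \max_{x \in \Omega} \bigl\| P^t(x,\cdot) - \pi \bigr\|_{\mathrm{TV}},
\qquad
t_{\mathrm{mix}}(\varepsilon) = \min\{\, t : d(t) \le \varepsilon \,\}.
\]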

Now, the goal is to understand how the mixing time grows as the size of the state space grows. The modern theory of Markov chain mixing is the result of the convergence, in the 1980s and 1990s, of several threads. (We mention only a few names here; see the chapter Notes for references.) For statistical physicists, Markov chains became useful in Monte Carlo simulation, especially for models on finite grids. The mixing time can determine the running time for simulation. However, Markov chains are used not only for simulation and sampling purposes, but also as models of dynamical processes. Deep connections were found between rapid mixing and spatial properties of spin systems, e.g., by Dobrushin, Shlosman, Stroock, Zegarlinski, and Martinelli, among others. In theoretical computer science, Markov chains play a key role in sampling and approximate counting algorithms.
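To make the simulation setting concrete, here is a minimal, self-contained sketch (not taken from the book) of Glauber dynamics for the Ising model on a finite grid, the kind of chain treated in Chapters 3 and 15; the grid size, inverse temperature, and step count below are arbitrary illustrative choices.

```python
import math
import random

def glauber_step(spins, n, beta):
    """One Glauber update for the Ising model on an n x n grid with free
    boundary: pick a uniform site and resample its spin from its conditional
    distribution given the neighboring spins."""
    i, j = random.randrange(n), random.randrange(n)
    s = 0  # sum of neighboring spins
    if i > 0:
        s += spins[i - 1][j]
    if i < n - 1:
        s += spins[i + 1][j]
    if j > 0:
        s += spins[i][j - 1]
    if j < n - 1:
        s += spins[i][j + 1]
    # Conditional probability that the chosen spin is +1 under the Gibbs measure.
    p_plus = math.exp(beta * s) / (math.exp(beta * s) + math.exp(-beta * s))
    spins[i][j] = 1 if random.random() < p_plus else -1

def run_chain(n=16, beta=0.3, steps=50_000):
    """Run the chain from the all-plus configuration (illustrative parameters,
    not from the book) and return the final average magnetization."""
    spins = [[1] * n for _ in range(n)]
    for _ in range(steps):
        glauber_step(spins, n, beta)
    return sum(sum(row) for row in spins) / (n * n)

if __name__ == "__main__":
    print(run_chain())
```

Each step resamples a single spin from its conditional distribution given its neighbors, which is the Glauber update rule; how many such steps are needed before the configuration is close to the Gibbs distribution is precisely the mixing time question studied in this book.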

