Introduction to Multi-Armed Bandits

Aleksandrs Slivkins, Microsoft Research NYC. First draft: January 2017. Published: November 2019. This version: January 2022.

Abstract. Multi-armed bandits is a simple but very powerful framework for algorithms that make decisions over time under uncertainty. An enormous body of work has accumulated over the years, covered in several books and surveys. This book provides a more introductory, textbook-like treatment of the subject. Each chapter tackles a particular line of work, providing a self-contained, teachable technical introduction and a brief review of the further developments; many of the chapters conclude with exercises.

The book is structured as follows. The first four chapters are on IID rewards, from the basic model to impossibility results to Bayesian priors to Lipschitz rewards. The next three chapters cover adversarial rewards, from the full-feedback version to adversarial bandits to extensions with linear rewards and combinatorially structured actions. Chapter 8 is on contextual bandits, a middle ground between IID and adversarial bandits in which the change in reward distributions is completely explained by observable contexts. The last three chapters cover connections to economics, from learning in repeated games to bandits with supply/budget constraints to exploration in the presence of incentives.
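As a minimal, self-contained illustration of the basic stochastic (IID-reward) model that the first chapters study, the sketch below simulates Bernoulli arms and runs UCB1, a standard adaptive-exploration algorithm of the kind introduced in Chapter 1. It is not code from the book; the arm means, horizon, and function names are arbitrary choices made for this example.

import math
import random

def pull(mean):
    # One round of IID feedback: a Bernoulli reward with the arm's true mean.
    return 1.0 if random.random() < mean else 0.0

def ucb1(means, horizon):
    # UCB1: play the arm maximizing (empirical mean + confidence bonus).
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:  # first, pull each arm once to initialize the estimates
            arm = t - 1
        else:
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = pull(means[arm])
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

if __name__ == "__main__":
    random.seed(0)
    means = [0.3, 0.5, 0.7]   # hypothetical arm means
    T = 10_000
    collected = ucb1(means, T)
    print(f"UCB1 reward: {collected:.0f}; best fixed arm in hindsight: {max(means) * T:.0f}")

Running this, the total reward should come out close to that of always playing the best arm, which is exactly the sense in which such algorithms are said to have low regret.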

The appendix provides sufficient background on concentration and KL-divergence. The chapters on bandits with similarity information, bandits with knapsacks, and bandits and agents can also be consumed as standalone surveys on the respective topics. Published with Foundations and Trends® in Machine Learning, November 2019. This online version is a revision of the Foundations and Trends publication. It contains numerous edits for presentation and accuracy (based in part on readers' feedback), some new exercises, and updated and expanded literature reviews.

Further comments, suggestions and bug reports are very welcome! © 2017-2022: Aleksandrs Slivkins. Author's webpage: . Email: slivkins at .

Preface

Multi-armed bandits is a rich, multi-disciplinary research area which receives attention from computer science, operations research, economics and statistics. It has been studied since (Thompson, 1933), with a big surge of activity in the past 15-20 years. An enormous body of work has accumulated over time, various subsets of which have been covered in several books (Berry and Fristedt, 1985; Cesa-Bianchi and Lugosi, 2006; Gittins et al., 2011; Bubeck and Cesa-Bianchi, 2012). This book provides a more textbook-like treatment of the subject, based on the following principles. The literature on multi-armed bandits can be partitioned into a dozen or so lines of work. Each chapter tackles one line of work, providing a self-contained introduction and pointers for further reading. We favor fundamental ideas and elementary proofs over the strongest possible results. We emphasize accessibility of the material: while exposure to machine learning and probability/statistics would certainly help, a standard undergraduate course on algorithms, e.g., one based on (Kleinberg and Tardos, 2005), should suffice for background.

With the above principles in mind, the choice of specific topics and results is based on the author's subjective understanding of what is important and teachable, i.e., presentable in a relatively simple manner. Many important results have been deemed too technical or advanced to be presented in detail. The book is based on a graduate course at University of Maryland, College Park, taught by the author in Fall 2016. Each chapter corresponds to a week of the course. Five chapters were used in a similar course at Columbia University, co-taught by the author in Fall 2017.

Some of the material has been updated since then, to improve presentation and reflect the latest developments. To keep the book manageable, and also more accessible, we chose not to dwell on the deep connections to online convex optimization. A modern treatment of this fascinating subject can be found, e.g., in Shalev-Shwartz (2012); Hazan (2015). Likewise, we do not venture into reinforcement learning, a rapidly developing research area and subject of several textbooks such as Sutton and Barto (1998); Szepesvári (2010); Agarwal et al. (2020). A course based on this book would be complementary to graduate-level courses on online convex optimization and reinforcement learning.

Also, we do not discuss Markovian models of multi-armed bandits; this direction is covered in depth in Gittins et al. (2011). The author encourages colleagues to use this book in their courses. A brief email regarding which chapters have been used, along with any feedback, would be appreciated.

A simultaneous book. An excellent recent book on bandits, Lattimore and Szepesvári (2020), has evolved over several years simultaneously and independently with mine. Their book is longer, provides deeper treatment for some topics (esp. for adversarial and linear bandits), and omits some others (e.g., Lipschitz bandits, bandits with knapsacks, and connections to economics).

Reflecting the authors' differing tastes and presentation styles, the two books are complementary to one another.

Acknowledgements. Most chapters originated as lecture notes from my course at UMD; the initial versions of these lectures were scribed by the students. Presentation of some of the fundamental results is influenced by (Kleinberg, 2007). I am grateful to Alekh Agarwal, Bobby Kleinberg, Akshay Krishnamurthy, Yishay Mansour, John Langford, Thodoris Lykouris, Rob Schapire, and Mark Sellke for discussions, comments, and advice. Chapters 9 and 10 have benefited tremendously from numerous conversations with Karthik Abinav Sankararaman.

Special thanks go to my PhD advisor Jon Kleinberg and my postdoc mentor Eli Upfal; Jon has shaped my taste in research, and Eli has introduced me to multi-armed bandits back in 2006. Finally, I wish to thank my parents and my family for love, inspiration and support.

Contents

Introduction: Scope and Motivation
1 Stochastic Bandits
    Model and examples
    Simple algorithms: uniform exploration
    Advanced algorithms: adaptive exploration
    Forward look: bandits with initial information
    Literature review and discussion.

