### Transcription of INTRODUCTION MACHINE LEARNING - ai.stanford.edu

**INTRODUCTION TO MACHINE LEARNING**

AN EARLY DRAFT OF A PROPOSED TEXTBOOK

Nils J. Nilsson
Robotics Laboratory
Department of Computer Science
Stanford University
Stanford, CA 94305
e-mail:

November 3, 1998

Copyright © 2005 Nils J. Nilsson. This material may not be copied, reproduced, or distributed without the written permission of the copyright holder.

**Contents**

**1 Preliminaries**
- Introduction
- What is Machine Learning?
- Wellsprings of Machine Learning
- Varieties of Machine Learning

- Learning Input-Output Functions
- Types of Learning
- Input Vectors
- Outputs
- Training Regimes
- Noise
- Performance Evaluation
- Learning Requires Bias
- Sample Applications
- Sources
- Bibliographical and Historical Remarks

**2 Boolean Functions**
- Representation
- Boolean Algebra
- Diagrammatic Representations
- Classes of Boolean Functions
- Terms and Clauses
- DNF Functions
- CNF Functions
- Decision Lists

- Symmetric and Voting Functions
- Linearly Separable Functions
- Summary
- Bibliographical and Historical Remarks

**3 Using Version Spaces for Learning**
- Version Spaces and Mistake Bounds
- Version Graphs
- Learning as Search of a Version Space
- The Candidate Elimination Method
- Bibliographical and Historical Remarks

**4 Neural Networks**
- Threshold Logic Units
- Definitions and Geometry
- Special Cases of Linearly Separable Functions

- Error-Correction Training of a TLU
- Weight Space
- The Widrow-Hoff Procedure
- Training a TLU on Non-Linearly-Separable Training Sets
- Linear Machines
- Networks of TLUs
- Motivation and Examples
- Madalines
- Piecewise Linear Machines
- Cascade Networks
- Training Feedforward Networks by Backpropagation
- Notation
- The Backpropagation Method
- Computing Weight Changes in the Final Layer
- Computing Changes to the Weights in Intermediate Layers

- Variations on Backprop
- An Application: Steering a Van
- Synergies Between Neural Network and Knowledge-Based Methods
- Bibliographical and Historical Remarks

**5 Statistical Learning**
- Using Statistical Decision Theory
- Background and General Method
- Gaussian (or Normal) Distributions
- Conditionally Independent Binary Components
- Learning Belief Networks
- Nearest-Neighbor Methods
- Bibliographical and Historical Remarks

**6 Decision Trees**

- Definitions
- Supervised Learning of Univariate Decision Trees
- Selecting the Type of Test
- Using Uncertainty Reduction to Select Tests
- Non-Binary Attributes
- Networks Equivalent to Decision Trees
- Overfitting and Evaluation
- Overfitting
- Validation Methods
- Avoiding Overfitting in Decision Trees
- Minimum-Description-Length Methods
- Noise in Data
- The Problem of Replicated Subtrees
- The Problem of Missing Attributes

- Comparisons
- Bibliographical and Historical Remarks

**7 Inductive Logic Programming**
- Notation and Definitions
- A Generic ILP Algorithm
- An Example
- Inducing Recursive Programs
- Choosing Literals to Add
- Relationships Between ILP and Decision Tree Induction
- Bibliographical and Historical Remarks

**8 Computational Learning Theory**
- Notation and Assumptions for PAC Learning Theory
- PAC Learning
- The Fundamental Theorem

- Examples
- Some Properly PAC-Learnable Classes
- The Vapnik-Chervonenkis Dimension
- Linear Dichotomies
- Capacity
- A More General Capacity Result
- Some Facts and Speculations About the VC Dimension
- VC Dimension and PAC Learning
- Bibliographical and Historical Remarks

**9 Unsupervised Learning**
- What is Unsupervised Learning?
- Clustering Methods
- A Method Based on Euclidean Distance
- A Method Based on Probabilities

- Hierarchical Clustering Methods
- A Method Based on Euclidean Distance
- A Method Based on Probabilities
- Bibliographical and Historical Remarks

**10 Temporal-Difference Learning**
- Temporal Patterns and Prediction Problems
- Supervised and Temporal-Difference Methods
- Incremental Computation of the (ΔW)i
- An Experiment with TD Methods
- Theoretical Results
- Intra-Sequence Weight Updating
- An Example Application: TD-gammon

- Bibliographical and Historical Remarks

**11 Delayed-Reinforcement Learning**
- The General Problem
- An Example
- Temporal Discounting and Optimal Policies
- Q-Learning
- Discussion, Limitations, and Extensions of Q-Learning
- An Illustrative Example
- Using Random Actions
- Generalizing Over Inputs
- Partially Observable States
- Scaling Problems
- Bibliographical and Historical Remarks

**12 Explanation-Based Learning**