
11. Parameter Estimation - Stanford University

Chris Piech and Mehran Sahami, May 2017

We have learned many different distributions for random variables, and all of those distributions had parameters: the numbers that you provide as input when you define a random variable. So far when we were working with random variables, we either were explicitly told the values of the parameters, or we could divine the values by understanding the process that was generating the random variables. What if we don't know the values of the parameters and we can't estimate them from our own expert knowledge? What if instead of knowing the random variables, we have a lot of examples of data generated from the same underlying distribution? In this chapter we are going to learn formal ways of estimating parameters from data. These ideas are critical for artificial intelligence. Almost all modern machine learning algorithms work like this: (1) specify a probabilistic model that has parameters, and (2) learn the values of those parameters from data.

Maximum Likelihood

Our first algorithm for estimating parameters is called Maximum Likelihood Estimation (MLE). The central idea behind MLE is to select the parameters (θ) that make the observed data the most likely. The data that we are going to use to estimate the parameters are going to be n independent and identically distributed (IID ...
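As a concrete illustration of the idea, here is a minimal sketch of MLE for the parameter p of a Bernoulli random variable, assuming the data are n IID samples in {0, 1}. The sample values are hypothetical. For a Bernoulli, the log-likelihood is maximized in closed form by the sample mean; the grid search below is only a sanity check that no other value of p makes the data more likely.

```python
import math

def log_likelihood(p, data):
    """Log-likelihood of Bernoulli parameter p for IID samples in {0, 1}."""
    k = sum(data)   # number of successes
    n = len(data)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def mle_bernoulli(data):
    """Closed-form MLE for a Bernoulli: the sample mean maximizes the likelihood."""
    return sum(data) / len(data)

# Hypothetical observations (n = 10 IID Bernoulli samples).
data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
p_hat = mle_bernoulli(data)   # 0.7

# Sanity check: no p on a fine grid achieves a higher log-likelihood
# than the closed-form estimate.
best_grid = max((i / 100 for i in range(1, 100)),
                key=lambda p: log_likelihood(p, data))
```

The same recipe (write down the likelihood of the data as a function of θ, then choose the θ that maximizes it) applies to the other distributions in these notes, though most require calculus or numerical optimization rather than a simple sample mean.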
