
Parametric vs Nonparametric Models - Max Planck Society





Transcription of Parametric vs Nonparametric Models - Max Planck Society

Parametric models assume some finite set of parameters θ. Given the parameters, future predictions, x, are independent of the observed data, D:

    P(x | θ, D) = P(x | θ)

The parameters θ therefore capture everything there is to know about the data, so the complexity of the model is bounded even if the amount of data is unbounded. This makes parametric models not very flexible.

Non-parametric models assume that the data distribution cannot be defined in terms of such a finite set of parameters, but they can often be defined by assuming an infinite-dimensional θ. Usually we think of θ as a function. The amount of information that θ can capture about the data D can grow as the amount of data grows. This makes non-parametric models more flexible. Nonparametrics thus give a simple framework for modelling complex data sets: the models can be viewed as having infinitely many parameters.

Examples of non-parametric models:

    Parametric                      Non-parametric                   Application
    polynomial regression           Gaussian processes               function approximation
    logistic regression             Gaussian process classifiers     classification
    mixture models, k-means         Dirichlet process mixtures       clustering
    hidden Markov models            infinite HMMs                    time series
    factor analysis / pPCA / PMF    infinite latent factor models    feature discovery

Regression and Gaussian processes

Consider the problem of nonlinear regression: you want to learn a function f with error bars from data D = {X, y}. A Gaussian process defines a distribution over functions, p(f), which can be used for Bayesian regression:

    p(f | D) = p(f) p(D | f) / p(D)

Let f = (f(x1), f(x2), ..., f(xn)) be an n-dimensional vector of function values evaluated at n points xi ∈ X. Note that f is a random variable. Definition: p(f) is a Gaussian process if, for any finite subset {x1, ..., xn} ⊂ X, the marginal distribution over that subset, p(f), is multivariate Gaussian.

[Diagram: linear regression extends along two axes. Adding the kernel trick gives kernel regression; adding a Bayesian treatment gives Bayesian linear regression; combining both gives GP regression. Likewise, logistic regression extends to kernel classification and Bayesian logistic regression, which combine into GP classification.]

Neural networks and Gaussian processes

[Diagram: a Bayesian neural network mapping inputs x through weights and hidden units to outputs y.]

Data: D = {(x(n), y(n))} for n = 1, ..., N, i.e. D = (X, y). The parameters θ are the weights of the neural net:

    parameter prior:      p(θ | α)
    parameter posterior:  p(θ | α, D) ∝ p(y | X, θ) p(θ | α)
    prediction:           p(y' | D, x', α) = ∫ p(y' | x', θ) p(θ | D, α) dθ

A Gaussian process models functions y = f(x). A multilayer perceptron (neural network) with infinitely many hidden units and Gaussian priors on the weights converges to a GP (Neal, 1996). See also recent work on Deep Gaussian Processes (Damianou and Lawrence, 2013).
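For Gaussian observation noise, the GP posterior p(f | D) described above has a closed form. The following is a minimal NumPy sketch of GP regression with error bars; the squared-exponential kernel and the particular lengthscale, variance, and noise values are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance k(x, x') = v * exp(-(x - x')^2 / (2 l^2))
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xstar, noise=1e-2):
    # Standard GP regression equations:
    #   mean = K*^T (K + noise I)^-1 y
    #   cov  = K** - K*^T (K + noise I)^-1 K*
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    Kss = rbf_kernel(Xstar, Xstar)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    # Clip tiny negative diagonal entries caused by floating-point error
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy data: noisy-free samples of a smooth function
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(X)
mean, std = gp_posterior(X, y, np.array([-1.5, 0.5, 4.0]))
```

With a smooth target such as sin, the posterior mean interpolates the training points and the posterior standard deviation grows away from the data, which is exactly the "function with error bars" the transcription describes.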

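The Neal (1996) correspondence mentioned above can be checked empirically: sampling from the prior of a wide one-hidden-layer network with Gaussian weights gives function values whose distribution approaches a Gaussian as the width grows. A minimal sketch, assuming a tanh activation and unit-variance priors (both choices are illustrative, not from the slides):

```python
import numpy as np

def random_mlp_output(x, hidden_units, rng):
    # One-hidden-layer tanh network with iid Gaussian priors on all weights.
    # Scaling the output weights by 1/sqrt(H) keeps the prior variance of the
    # output finite as H grows, the scaling used in Neal's argument.
    W1 = rng.normal(size=hidden_units)
    b1 = rng.normal(size=hidden_units)
    W2 = rng.normal(size=hidden_units) / np.sqrt(hidden_units)
    return W2 @ np.tanh(W1 * x + b1)

rng = np.random.default_rng(0)
# Each draw is one random wide network evaluated at the same input, so the
# samples trace out the prior over f(0.5) induced by the weight priors.
samples = np.array([random_mlp_output(0.5, 1000, rng) for _ in range(2000)])
```

A histogram of `samples` is close to a zero-mean Gaussian; evaluating the same random networks at several inputs would show jointly Gaussian function values, i.e. a Gaussian process prior.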
