Transcription of The EM Algorithm for Gaussian Mixtures
The EM Algorithm for Gaussian Mixtures
Probabilistic Learning: Theory and Algorithms, CS 274A

Finite Mixture Models

We are given a data set D = {x_1, ..., x_N}, where x_i is a d-dimensional vector measurement. Assume that the points are generated in an IID fashion from an underlying density p(x). We further assume that p(x) is defined as a finite mixture model with K components:

p(x | Θ) = Σ_{k=1}^{K} α_k p_k(x | z_k, θ_k)

where:

- The p_k(x | z_k, θ_k) are mixture components, 1 ≤ k ≤ K. Each is a density or distribution defined over x, with parameters θ_k.
- z = (z_1, ..., z_K) is a vector of K binary indicator variables that are mutually exclusive and exhaustive (i.e., one and only one of the z_k's is equal to 1, and the others are 0). z is a K-ary random variable representing the identity of the mixture component that generated x.
- The α_k = p(z_k = 1) are the mixture weights, satisfying α_k ≥ 0 and Σ_{k=1}^{K} α_k = 1.
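The finite-mixture definition above can be sketched numerically. This is a minimal illustration, assuming a hypothetical 1-D mixture of K = 2 Gaussian components; the names `alphas`, `mus`, `sigmas`, and the specific values are made up for illustration and do not come from the text.

```python
import numpy as np

# Hypothetical K = 2 component mixture (illustrative values only).
alphas = np.array([0.3, 0.7])    # mixture weights alpha_k, nonnegative, sum to 1
mus = np.array([-2.0, 1.0])      # component means
sigmas = np.array([0.5, 1.0])    # component standard deviations

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density N(x | mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_pdf(x, alphas, mus, sigmas):
    """p(x | Theta) = sum_k alpha_k * p_k(x | theta_k)."""
    return sum(a * gaussian_pdf(x, m, s)
               for a, m, s in zip(alphas, mus, sigmas))
```

Because the weights are nonnegative and sum to 1, the mixture is a convex combination of densities and is therefore itself a valid density: nonnegative everywhere and integrating to 1.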
Gaussian Mixture Models

For x ∈ R^d we can define a Gaussian mixture model by making each of the K components a multivariate Gaussian density with its own parameters θ_k = {μ_k, Σ_k}:

p_k(x | θ_k) = (1 / ((2π)^{d/2} |Σ_k|^{1/2})) exp( −(1/2) (x − μ_k)^T Σ_k^{−1} (x − μ_k) )

The EM Algorithm ...
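The multivariate Gaussian density above can be evaluated directly in NumPy. The sketch below also computes the posterior membership probabilities p(z_k = 1 | x), the standard E-step quantity in EM for Gaussian mixtures (Bayes' rule applied to the mixture); the function names `mvn_pdf` and `responsibilities` are my own, not from the text.

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Multivariate Gaussian density p_k(x | theta_k) for x in R^d."""
    d = mu.shape[0]
    diff = x - mu
    norm_const = (2.0 * np.pi) ** (d / 2.0) * np.sqrt(np.linalg.det(Sigma))
    # Quadratic form (x - mu)^T Sigma^{-1} (x - mu), computed with a
    # linear solve instead of an explicit matrix inverse.
    quad = diff @ np.linalg.solve(Sigma, diff)
    return np.exp(-0.5 * quad) / norm_const

def responsibilities(x, alphas, mus, Sigmas):
    """Posterior membership weights p(z_k = 1 | x) for one data point:
    alpha_k p_k(x | theta_k) normalized over all K components."""
    weighted = np.array([a * mvn_pdf(x, m, S)
                         for a, m, S in zip(alphas, mus, Sigmas)])
    return weighted / weighted.sum()
```

Solving Σ_k v = (x − μ_k) rather than inverting Σ_k is both cheaper and numerically safer when Σ_k is nearly singular.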