Several papers and references related to the expectation-maximization (EM) algorithm
* Arthur Dempster, Nan Laird, and Donald Rubin. "Maximum likelihood from incomplete data via the EM algorithm". Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
* Robert Hogg, Joseph McKean, and Allen Craig. Introduction to Mathematical Statistics, pp. 359–364. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.
* Radford Neal and Geoffrey Hinton. "A view of the EM algorithm that justifies incremental, sparse, and other variants". In Michael I. Jordan (editor), Learning in Graphical Models, pp. 355–368. Cambridge, MA: MIT Press, 1999.
* The online textbook Information Theory, Inference, and Learning Algorithms, by David J. C. MacKay, includes simple examples of the EM algorithm such as clustering using the soft K-means algorithm, and emphasizes the variational view of EM.
* A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models, by J. Bilmes, includes a simplified derivation of the EM equations for Gaussian mixtures and Gaussian mixture hidden Markov models.
* Information Geometry of the EM and em Algorithms for Neural Networks, by Shun-Ichi Amari, gives an information-geometric view of the EM algorithm.
An expectation-maximization (EM) algorithm is used in statistics to find maximum likelihood estimates of parameters in probabilistic models that depend on unobserved latent variables. EM alternates between an expectation (E) step, which computes the expected log-likelihood with the latent variables filled in according to their conditional distribution given the observed data and the current parameter estimates, and a maximization (M) step, which re-estimates the parameters by maximizing the expected log-likelihood found in the E step. The parameters found in the M step are then used to begin another E step, and the process repeats until convergence.
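To make the two steps concrete, here is a minimal Python sketch of EM for a two-component univariate Gaussian mixture, in the spirit of the Bilmes tutorial listed above. It treats the unobserved component label of each point as the latent variable; the function names (`normal_pdf`, `em_gmm_1d`), the initialization scheme, and the fixed iteration count are illustrative choices, not taken from any of the references.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    # Gaussian density, written out so the example needs only NumPy.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def em_gmm_1d(x, n_iter=100, seed=0):
    """Fit a two-component 1-D Gaussian mixture by EM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    pi = 0.5                                   # mixing weight of component 1
    mu = rng.choice(x, size=2, replace=False)  # initial means: two random points
    sigma = np.array([x.std(), x.std()])       # initial standard deviations
    for _ in range(n_iter):
        # E step: responsibility of component 1 for each data point, i.e. the
        # posterior probability of the latent label under current parameters.
        p1 = pi * normal_pdf(x, mu[0], sigma[0])
        p2 = (1.0 - pi) * normal_pdf(x, mu[1], sigma[1])
        r = p1 / (p1 + p2)
        # M step: maximize the expected complete-data log-likelihood; for
        # Gaussians this reduces to responsibility-weighted sample statistics.
        pi = r.mean()
        mu = np.array([(r * x).sum() / r.sum(),
                       ((1 - r) * x).sum() / (1 - r).sum()])
        sigma = np.array([np.sqrt((r * (x - mu[0]) ** 2).sum() / r.sum()),
                          np.sqrt(((1 - r) * (x - mu[1]) ** 2).sum() / (1 - r).sum())])
    return pi, mu, sigma

# Usage: data from a known mixture; EM should roughly recover its parameters.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 700)])
print(em_gmm_1d(data))
```

Note that each iteration is guaranteed not to decrease the observed-data likelihood, but EM converges only to a local optimum, so results can depend on the initialization.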