Expectation-maximization algorithms for inference in Dirichlet processes mixture

Authors
Abstract


Similar references

Mixture Models and Expectation-Maximization

This tutorial attempts to provide a gentle introduction to EM by way of simple examples involving maximum-likelihood estimation of mixture-model parameters. Readers familiar with ML parameter estimation and clustering may want to skip directly to Sections 5.2 and 5.3.
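As a concrete illustration of the kind of example such a tutorial works through, the sketch below runs EM on a two-component one-dimensional Gaussian mixture; the initialisation heuristic and iteration count are assumptions, not taken from the tutorial.

```python
# Minimal EM sketch for a two-component 1-D Gaussian mixture (illustration only,
# not code from the tutorial). E-step: responsibilities; M-step: weighted ML updates.
import numpy as np

def em_gmm_1d(x, n_iter=50):
    mu = np.array([x.min(), x.max()])          # crude initialisation (assumption)
    var = np.array([x.var(), x.var()])
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = weights * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood estimates weighted by the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        weights = nk / len(x)
    return weights, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_gmm_1d(x))
```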


Expectation-Maximization for Learning Determinantal Point Processes

A determinantal point process (DPP) is a probabilistic model of set diversity compactly parameterized by a positive semi-definite kernel matrix. To fit a DPP to a given task, we would like to learn the entries of its kernel matrix by maximizing the log-likelihood of the available data. However, log-likelihood is non-convex in the entries of the kernel matrix, and this learning problem is conjec...
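For orientation, the sketch below evaluates the quantity such learning maximises under the usual L-ensemble parameterisation, where log P(Y) = log det(L_Y) − log det(L + I); the toy kernel and observed subsets are made up for illustration, and none of this is code from the paper.

```python
# Sketch: DPP log-likelihood of observed subsets under an L-ensemble kernel L
# (illustration only; the kernel and subsets below are toy assumptions).
import numpy as np

def dpp_log_likelihood(L, subsets):
    n_items = L.shape[0]
    _, log_norm = np.linalg.slogdet(L + np.eye(n_items))       # log det(L + I)
    log_lik = 0.0
    for Y in subsets:
        idx = np.asarray(Y)
        _, log_det_y = np.linalg.slogdet(L[np.ix_(idx, idx)])  # log det(L_Y)
        log_lik += log_det_y - log_norm
    return log_lik

# Toy positive semi-definite kernel over four items and two observed subsets.
A = np.random.default_rng(1).normal(size=(4, 4))
L = A @ A.T
print(dpp_log_likelihood(L, [[0, 2], [1, 3]]))
```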


Distributed Inference for Dirichlet Process Mixture Models

Bayesian nonparametric mixture models based on the Dirichlet process (DP) have been widely used for solving problems like clustering, density estimation and topic modelling. These models make weak assumptions about the underlying process that generated the observed data. Thus, when more data are collected, the complexity of these models can change accordingly. These theoretical properties often...
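As a rough illustration of this adaptivity (not of the distributed inference scheme the paper proposes), the sketch below fits scikit-learn's truncated variational approximation to a DP Gaussian mixture and counts the components that end up with non-negligible weight; the truncation level, concentration parameter, and threshold are assumptions.

```python
# Sketch: truncated variational DP Gaussian mixture via scikit-learn
# (illustration of DP-mixture behaviour, not the paper's distributed algorithm).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(5.0, 1.0, (200, 2))])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level (assumption)
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,                    # DP concentration (assumption)
    max_iter=500,
    random_state=0,
).fit(X)

# Components the data actually uses: those with non-negligible posterior weight.
print(int(np.sum(dpgmm.weights_ > 0.01)))
```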


Noise Benefits in Expectation-Maximization Algorithms

This dissertation shows that careful injection of noise into sample data can substantially speed up Expectation-Maximization algorithms. Expectation-Maximization algorithms are a class of iterative algorithms for extracting maximum likelihood estimates from corrupted or incomplete data. The convergence speed-up is an example of a noise benefit or "stochastic resonance" in statistical signal proce...
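The sketch below is only a schematic of that idea on a 1-D Gaussian mixture: each iteration perturbs the samples with small, decaying noise before the E-step. The decay schedule and noise scale are assumptions; the dissertation's precise noise-benefit conditions are not reproduced here.

```python
# Schematic noisy-EM sketch for a 1-D Gaussian mixture (illustration only;
# the noise schedule below is an assumption, not the dissertation's condition).
import numpy as np

def noisy_em_step(x, mu, var, weights, noise_scale, rng):
    x_noisy = x + rng.normal(0.0, noise_scale, size=x.shape)   # noise injection
    # E-step on the perturbed data.
    dens = np.exp(-0.5 * (x_noisy[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = weights * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: standard weighted maximum-likelihood updates.
    nk = resp.sum(axis=0)
    new_mu = (resp * x_noisy[:, None]).sum(axis=0) / nk
    new_var = (resp * (x_noisy[:, None] - new_mu) ** 2).sum(axis=0) / nk
    return new_mu, new_var, nk / len(x)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
mu, var, weights = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for t in range(50):
    # Noise decays toward zero so later iterations approach ordinary EM.
    mu, var, weights = noisy_em_step(x, mu, var, weights, 1.0 / (t + 1), rng)
print(mu, var, weights)
```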


Online Expectation Maximization based algorithms for inference in hidden Markov models

The Expectation Maximization (EM) algorithm is a versatile tool for model parameter estimation in latent data models. When processing large data sets or data streams, however, EM becomes intractable since it requires the whole data set to be available at each iteration of the algorithm. In this contribution, a new generic online EM algorithm for model parameter inference in general Hidden Markov ...
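The HMM recursion itself is involved, so the sketch below only illustrates the general online-EM idea on a simpler latent-variable model (a 1-D Gaussian mixture): each new observation updates running averages of sufficient statistics with a decaying step size, and parameters are re-estimated from those averages. The step-size schedule and initialisation are assumptions, and this is not the paper's HMM algorithm.

```python
# Online-EM sketch on a 1-D Gaussian mixture (illustrating the stochastic-
# approximation idea only; not the paper's HMM-specific recursion).
import numpy as np

def online_em_gmm(stream):
    mu = np.array([-1.0, 1.0])                 # crude initialisation (assumption)
    var = np.ones(2)
    w = np.array([0.5, 0.5])
    # Running sufficient statistics: E[1{z=k}], E[1{z=k} x], E[1{z=k} x^2].
    s0, s1, s2 = w.copy(), w * mu, w * (var + mu ** 2)
    for t, x in enumerate(stream, start=1):
        gamma = (t + 10) ** -0.6               # decaying step size (assumption)
        dens = np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum()                     # E-step for the single new point
        s0 = (1 - gamma) * s0 + gamma * resp   # stochastic-approximation updates
        s1 = (1 - gamma) * s1 + gamma * resp * x
        s2 = (1 - gamma) * s2 + gamma * resp * x ** 2
        w, mu = s0, s1 / s0                    # M-step from running statistics
        var = np.maximum(s2 / s0 - mu ** 2, 1e-6)
    return w, mu, var

rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(-2.0, 1.0, 5000), rng.normal(3.0, 0.5, 5000)])
rng.shuffle(stream)
print(online_em_gmm(stream))
```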



Journal

Journal title: Pattern Analysis and Applications

Year: 2011

ISSN: 1433-7541, 1433-755X

DOI: 10.1007/s10044-011-0256-4