Search results for: em algorithm

Number of results: 1,052,416

2004
Nikolaos Nasios Adrian G. Bors

The approach proposed in this paper takes into account the uncertainty in colour modelling by employing variational Bayesian estimation. Mixtures of Gaussians are considered for modelling colour images. Distributions of parameters characterising colour regions are inferred from data statistics. The Variational Expectation-Maximization (VEM) algorithm is used for estimating the hyperparameters c...
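For readers who want to try a variational Bayesian Gaussian mixture on colour data in the same spirit, a minimal library sketch follows. This is not the authors' VEM implementation; the pixel data and hyperparameter choices below are made up for illustration, and scikit-learn's BayesianGaussianMixture is used as a stand-in variational estimator.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic "colour" data: RGB pixel values from two rough colour regions
# (purely illustrative, not the image data used in the paper).
rng = np.random.default_rng(0)
pixels = np.vstack([
    rng.normal([200, 30, 30], 10, size=(500, 3)),   # reddish region
    rng.normal([30, 30, 200], 10, size=(500, 3)),   # bluish region
])

# Variational Bayesian Gaussian mixture: distributions over the component
# parameters are inferred rather than point-estimated, and unneeded
# components are effectively switched off.
model = BayesianGaussianMixture(n_components=4, max_iter=200, random_state=0)
model.fit(pixels)
print(model.weights_)   # effective mixture weights after fitting
```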

2018
Jianxin Wu

3 The Expectation-Maximization algorithm; 3.1 Jointly-non-concave incomplete log-likelihood; 3.2 (Possibly) Concave complete data log-likelihood; 3.3 The general EM derivation; 3.4 The E- & M-steps; 3.5 The EM algorithm ...

Journal: :Foundations and Trends in Signal Processing 2010
Maya R. Gupta Yihua Chen

This introduction to the expectation–maximization (EM) algorithm provides an intuitive and mathematically rigorous understanding of EM. Two of the most popular applications of EM are described in detail: estimating Gaussian mixture models (GMMs), and estimating hidden Markov models (HMMs). EM solutions are also derived for learning an optimal mixture of fixed models, for estimating the paramete...
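A minimal EM loop for the GMM case mentioned in this entry might look like the sketch below, assuming a one-dimensional, two-component mixture; all function and variable names are illustrative and not taken from the survey.

```python
import numpy as np

def em_gmm_1d(x, n_components=2, n_iter=100, seed=0):
    """Minimal EM for a 1-D Gaussian mixture (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialise weights, means, and variances.
    w = np.full(n_components, 1.0 / n_components)
    mu = rng.choice(x, n_components, replace=False)
    var = np.full(n_components, np.var(x))
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the weighted sufficient statistics.
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Usage on synthetic data drawn from two Gaussians.
x = np.concatenate([np.random.normal(-2, 1, 300), np.random.normal(3, 0.5, 200)])
print(em_gmm_1d(x))
```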

2002
Shane M. Haas

The Expectation-Maximization (EM) algorithm is a hill-climbing approach to finding a local maximum of a likelihood function [7, 8]. The EM algorithm alternates between finding a greatest lower bound to the likelihood function (the “E Step”), and then maximizing this bound (the “M Step”). The EM algorithm belongs to a broader class of alternating minimization algorithms [6], which includes the A...
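The lower-bound alternation described in this entry corresponds to the standard decomposition, stated generically here rather than quoted from the paper: for any distribution $q$ over the hidden variables $z$, Jensen's inequality gives

$$
\log p(x \mid \theta) \;=\; \log \sum_{z} p(x, z \mid \theta)
\;\ge\; \sum_{z} q(z) \log \frac{p(x, z \mid \theta)}{q(z)} \;=\; \mathcal{L}(q, \theta),
$$

with equality when $q(z) = p(z \mid x, \theta)$. The E-step sets $q$ to this posterior, making the bound tight at the current $\theta$; the M-step then maximizes $\mathcal{L}(q, \theta)$ over $\theta$, which cannot decrease $\log p(x \mid \theta)$.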

Journal: :Communications in Statistics - Simulation and Computation 2016
Takeshi Emura Shau-Kai Shiu

In lifetime analysis of electric transformers, maximum likelihood estimation via the EM algorithm has been proposed. However, it is not clear whether the EM algorithm offers a better solution than the simpler Newton-Raphson algorithm. In this paper, the first objective is a systematic comparison of the EM algorithm with the Newton-Raphson algorithm in terms of convergence performanc...
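For intuition about what the Newton-Raphson alternative looks like, here is a generic sketch with a toy Poisson likelihood; the transformer lifetime model studied in the paper is not reproduced, and the score/Hessian functions are placeholders.

```python
import numpy as np

def newton_raphson_mle(score, hessian, theta0, n_iter=50, tol=1e-10):
    """Generic Newton-Raphson iteration on a scalar log-likelihood.

    score(theta)   -> first derivative of the log-likelihood
    hessian(theta) -> second derivative of the log-likelihood
    """
    theta = theta0
    for _ in range(n_iter):
        step = score(theta) / hessian(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Toy usage: MLE of a Poisson mean (the closed form is the sample mean,
# so the iteration can be checked against it).
x = np.array([2, 3, 1, 4, 2, 5])
score = lambda lam: x.sum() / lam - len(x)
hessian = lambda lam: -x.sum() / lam ** 2
print(newton_raphson_mle(score, hessian, theta0=1.0))  # ~ x.mean()
```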

Journal: :CoRR 2018
Osonde Osoba Bart Kosko

We present a noise-injected version of the Expectation-Maximization (EM) algorithm: the Noisy Expectation Maximization (NEM) algorithm. The NEM algorithm uses noise to speed up the convergence of the EM algorithm. The NEM theorem shows that injected noise speeds up the average convergence of the EM algorithm to a local maximum of the likelihood surface if a positivity condition holds. The gener...
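A loose illustration of the noise-injection idea, without the positivity condition that the NEM theorem actually requires, is to perturb the data seen by the E-step with noise whose scale is annealed over iterations. The function below is a hypothetical drop-in replacement for the plain E-step of a 1-D Gaussian-mixture EM like the one sketched earlier; it is not the NEM algorithm as specified in the paper.

```python
import numpy as np

def noisy_e_step(x, w, mu, var, iteration, noise_scale=0.5):
    """E-step on noise-perturbed data with an annealed noise variance.

    Purely illustrative: the NEM positivity condition is not checked here.
    """
    rng = np.random.default_rng(iteration)
    noise = rng.normal(0.0, noise_scale / (iteration + 1), size=x.shape)
    x_noisy = x + noise
    # Responsibilities computed exactly as in the plain E-step,
    # but on the perturbed observations.
    dens = np.exp(-0.5 * (x_noisy[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = w * dens
    return resp / resp.sum(axis=1, keepdims=True)
```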

1999
Gavin Smith João F. G. de Freitas Tony Robinson Mahesan Niranjan

The speech waveform can be modelled as a piecewise-stationary linear stochastic state space system, and its parameters can be estimated using an expectation-maximisation (EM) algorithm. One problem is the initialisation of the EM algorithm. Standard initialisation schemes can lead to poor forma...

Journal: :Neural Networks 1995
Shun-ichi Amari

In order to realize an input-output relation given by noise-contaminated examples, it is effective to use a stochastic model of neural networks. A model network includes hidden units whose activation values are not specified nor observed. It is useful to estimate the hidden variables from the observed or specified input-output data based on the stochastic model. Two algorithms, the EM and em-algor...

2010

The Expectation-Maximization (EM) algorithm is a popular tool for determining maximum likelihood estimates (MLE) when a closed form solution does not exist. It is often used for parametric density estimation, that is, to estimate the parameters of a density function when knowledge of the parameters is equivalent to knowledge of the density. The most famous case is the Gaussian distribution, whi...
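To make the "no closed-form solution" point concrete with the Gaussian-mixture case (a standard observation, not quoted from this entry's full text): the log-likelihood of a $K$-component mixture is

$$
\ell(\theta) \;=\; \sum_{i=1}^{n} \log \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x_i \mid \mu_k, \sigma_k^2),
$$

and the sum inside the logarithm prevents the stationarity equations from separating by component, so the parameters cannot be solved for in one shot; the iterative E- and M-step updates are used instead.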

1996
Yoshitaka KAMEYA Taisuke SATO

We have been developing a general symbolic-statistical modeling language [6, 19, 20] based on the logic programming framework that semantically unifies (and extends) major symbolic-statistical frameworks such as hidden Markov models (HMMs) [18], probabilistic context-free grammars (PCFGs) [23] and Bayesian networks [16]. The language, PRISM, is intended to model complex symbolic phenomena governe...

[Chart: number of search results per year]