Search results for: expectation maximum algorithm

Number of results: 1032475

2000
Richard Perry Kevin Buckley

In this paper we introduce a new algorithm for the estimation of source location parameters from array data given prior distributions on unknown nuisance source signal parameters. The conditional maximum-likelihood (CML) formulation is employed, and ML estimation is obtained by marginalizing over the nuisance parameters. In general, direct solution of this marginalization ML problem is intracta...

2017
Shervin Shahryari Prashant Doshi

We consider the problem of performing inverse reinforcement learning when the trajectory of the expert is not perfectly observed by the learner. Instead, a noisy continuous-time observation of the trajectory is provided to the learner. This problem exhibits wide-ranging applications and the specific application we consider here is the scenario in which the learner seeks to penetrate a perimeter ...

Journal: :Computational Statistics & Data Analysis 2011
Alice Lemos Morais Wagner Barreto-Souza

In this paper we introduce the Weibull power series (WPS) class of distributions, obtained by compounding the Weibull and power series distributions, where the compounding procedure follows the same approach previously carried out by Adamidis and Loukas (1998). This new class of distributions has as a particular case the two-parameter exponential power series (EPS) class of distributions, which w...

2015
Naveen Kumar Shrikanth S. Narayanan

Many computational paralinguistic tasks need to work with noisy human annotations that are inherently challenging for the human annotator to provide. In this paper, we propose a discriminative model to account for the inherent heterogeneity in the reliability of annotations associated with a sample while training automatic classification models. Reliability is modeled as a latent factor that go...

2003
Dmitry Pavlov Alexandrin Popescul David M. Pennock Lyle H. Ungar

Driven by successes in several application areas, maximum entropy modeling has recently gained considerable popularity. We generalize the standard maximum entropy formulation of classification problems to better handle the case where complex data distributions arise from a mixture of simpler underlying (latent) distributions. We develop a theoretical framework for characterizing data as a mixtu...

Journal: :IEEE Trans. Automat. Contr. 2000
Charalambos D. Charalambous Andrew Logothetis

This paper is concerned with maximum likelihood (ML) parameter estimation of continuous-time nonlinear partially observed stochastic systems, via the expectation maximization (EM) algorithm. It is shown that the EM algorithm can be executed efficiently, provided the unnormalized conditional density of nonlinear filtering is either explicitly solvable or numerically implemented. The methodology ...

2004
William T. Morgan Warren R. Greiff John C. Henderson

We describe an algorithm for choosing term weights to maximize average precision. The algorithm performs successive exhaustive searches through single directions in weight space. It makes use of a novel technique for considering all possible values of average precision that arise in searching for a maximum in a given direction. We apply the algorithm and compare it to a maximum en...

2017
Chanjin Zheng Xiangbin Meng Shaoyang Guo Zhengguang Liu

Stable maximum likelihood estimation (MLE) of item parameters in 3PLM with a modest sample size remains a challenge. The current study presents a mixture-modeling approach to 3PLM based on which a feasible Expectation-Maximization-Maximization (EMM) MLE algorithm is proposed. The simulation study indicates that EMM is comparable to the Bayesian EM in terms of bias and RMSE. EMM also produces sm...

Journal: :Neurocomputing 2004
Mingjun Zhong Huanwen Tang Hongjun Chen Yiyuan Tang

An expectation-maximization (EM) algorithm for learning sparse and overcomplete representations is presented in this paper. We show that the estimation of the conditional moments of the posterior distribution can be accomplished by maximum a posteriori estimation. The approximate conditional moments enable the development of an EM algorithm for learning the overcomplete basis vectors and inferr...

2005
Kevin Duh

In this lecture, we will address problems 3 and 4. First, continuing from the previous lecture, we will view Baum-Welch re-estimation as an instance of the Expectation-Maximization (EM) algorithm and prove why the EM algorithm maximizes data likelihood. Then, we will proceed to discuss discriminative training under the maximum mutual information estimation (MMIE) framework. Specifically, we will...
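Several of the results above (Charalambous & Logothetis 2000; Zhong et al. 2004; Duh 2005) apply the same underlying EM iteration: an E-step that computes posterior quantities under the current parameters, followed by an M-step with closed-form updates. As a minimal illustration of that iteration, not any specific paper's method, here is a sketch of EM for a two-component one-dimensional Gaussian mixture (all names and the initialization scheme are illustrative assumptions):

```python
import math
import random

def em_gaussian_mixture(data, n_iter=50):
    """Illustrative EM for a two-component 1-D Gaussian mixture.

    E-step: compute posterior responsibilities for each point;
    M-step: re-estimate weights, means, and variances from them.
    """
    # Crude initialization from the data quartiles (an assumption,
    # not a recommended scheme for real use).
    xs = sorted(data)
    mu = [xs[len(xs) // 4], xs[3 * len(xs) // 4]]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(n_iter):
        # E-step: responsibilities r[k] = P(component k | x) for each point.
        resp = []
        for x in data:
            w = [pi[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: closed-form updates from the responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse

    return pi, mu, var

# Synthetic bimodal data: two well-separated Gaussian clusters.
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])
weights, means, variances = em_gaussian_mixture(data)
```

Each iteration provably does not decrease the data likelihood, which is exactly the property the Duh (2005) lecture proves in the Baum-Welch setting.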
