Search results for: expectationmaximization

Number of results: 273

2002
David Chiang Daniel M. Bikel

Many recent statistical parsers rely on a preprocessing step which uses hand-written, corpus-specific rules to augment the training data with extra information. For example, head-finding rules are used to augment node labels with lexical heads. In this paper, we provide machinery to reduce the amount of human effort needed to adapt existing models to new corpora: first, we propose a flexible no...

2016
Jianan Sun Yunxiao Chen Jingchen Liu

We develop a latent variable selection method for multidimensional item response theory models. The proposed method identifies latent traits probed by items of a multidimensional test. Its basic strategy is to impose an L1 penalty term on the log-likelihood. The computation is carried out by the expectation-maximization algorithm combined with the coordinate descent algorithm. To the authors’ be...
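The coordinate-descent machinery behind such L1-penalized likelihoods reduces each one-dimensional subproblem to soft-thresholding. A minimal generic sketch (an ordinary lasso solved by coordinate descent, standing in for the penalized M-step; `lasso_cd` and its arguments are illustrative names, not the paper's code):

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: the closed-form minimizer of the
    one-dimensional L1-penalized quadratic subproblem."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1,
    the kind of penalized subproblem solved inside each M-step."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ b + X[:, j] * b[j]
            b[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return b
```

With `lam = 0` this recovers ordinary least squares; large `lam` zeroes out coefficients, which is what drives the trait selection described above.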

Journal: :IEEE/ACM transactions on audio, speech, and language processing 2022

This paper describes heavy-tailed extensions of a state-of-the-art versatile blind source separation method called fast multichannel nonnegative matrix factorization (FastMNMF) from a unified point of view. The common way of deriving such an extension is to replace the multivariate complex Gaussian distribution in the likelihood function with its generalization, e.g., Student's t and leptokurtic generalize...

1997
Timothy A. Barton Steven T. Smith

Adaptive algorithms require a good estimate of the interference covariance matrix. In situations with limited sample support such an estimate is not available unless there is structure to be exploited. In applications such as space-time adaptive processing (STAP) the underlying covariance matrix is structured (e.g., block Toeplitz), and it is possible to exploit this structure to arrive at impr...
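One simple way to exploit block-free Toeplitz structure (a generic diagonal-averaging projection, illustrative only and not necessarily the estimator developed in this paper) is to average the sample covariance along its diagonals, assuming a real wide-sense-stationary process:

```python
import numpy as np

def toeplitz_covariance(samples):
    """Toeplitz-constrained covariance estimate: form the sample
    covariance, then average each diagonal so every offset |i - j|
    shares a single autocovariance estimate (stationarity assumed,
    zero-mean samples assumed)."""
    n, p = samples.shape
    S = samples.T @ samples / n
    # one autocovariance lag per diagonal offset
    r = np.array([np.mean(np.diag(S, k)) for k in range(p)])
    T = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            T[i, j] = r[abs(i - j)]
    return T
```

Pooling the entries along each diagonal uses p samples' worth of structure per parameter, which is exactly the kind of gain that matters in the limited-sample-support STAP setting described above.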

Journal: :IEEE Trans. Systems, Man, and Cybernetics, Part A 1999
Nikos A. Vlassis Aristidis Likas

We address the problem of probability density function estimation using a Gaussian mixture model updated with the expectation-maximization (EM) algorithm. To deal with the case of an unknown number of mixing kernels, we define a new measure for Gaussian mixtures, called total kurtosis, which is based on the weighted sample kurtoses of the kernels. This measure provides an indication of how well ...
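The EM updates this abstract builds on alternate between computing responsibilities and re-estimating weights, means, and variances. A minimal univariate sketch (generic textbook EM with an illustrative quantile-based initialization, not the paper's kurtosis-based kernel-selection procedure):

```python
import numpy as np

def em_gmm(X, k, n_iter=100):
    """Plain EM for a univariate k-component Gaussian mixture."""
    n = X.shape[0]
    mu = np.quantile(X, (np.arange(k) + 0.5) / k)  # deterministic init
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] proportional to
        # pi_j * N(x_i | mu_j, var_j), computed in log space for stability
        d2 = (X[:, None] - mu[None, :]) ** 2
        log_r = np.log(pi) - 0.5 * (np.log(2 * np.pi * var) + d2 / var)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * X[:, None]).sum(axis=0) / nk
        var = (r * (X[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

The paper's total-kurtosis measure would sit on top of a loop like this, scoring fits for different values of k.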

2016
Johannes Blömer Kathrin Bujna

This paper is a pre-print of a paper that has been accepted for publication in the Proceedings of the 20th Pacific Asia Conference on Knowledge Discovery and Data Mining (PAKDD) 2016. The final publication is available at link.springer.com (http://link.springer.com/chapter/10.1007/978-3-319-31750-2_24). Abstract. We present new initialization methods for the expectation-maximization algorithm fo...
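A common baseline for initializing EM on Gaussian mixtures is k-means++-style seeding: each new center is drawn with probability proportional to its squared distance from the nearest center already chosen. A generic univariate illustration (this is the standard seeding scheme, not the specific initialization methods the paper proposes):

```python
import numpy as np

def kmeanspp_init(X, k, rng):
    """k-means++-style seeding for EM initialization: spread the
    initial centers out by sampling proportionally to squared
    distance from the nearest already-chosen center."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min((X[:, None] - np.array(centers)[None, :]) ** 2, axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```

Because EM only converges to a local optimum, spreading the seeds like this typically matters more than the number of EM iterations that follow.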

2016
Zhao Song Ricardo Henao David E. Carlson Lawrence Carin

Belief networks are commonly used generative models of data, but require expensive posterior estimation to train and test the model. Learning typically proceeds by posterior sampling, variational approximations, or recognition networks, combined with stochastic optimization. We propose using an online Monte Carlo expectation-maximization (MCEM) algorithm to learn the maximum a posteriori (MAP) e...

2005
Jen-Tzung Chien Meng-Sung Wu Chia-Sheng Wu

Probabilistic latent semantic analysis (PLSA) is a popular approach to text modeling where the semantics and statistics in documents can be effectively captured. In this paper, a novel Bayesian PLSA framework is presented. We focus on an incremental learning algorithm that solves the problem of updating the model with articles from new domains. This algorithm is developed to improve text modeling by inc...

Journal: :CoRR 2017
Kejun Huang Nikos D. Sidiropoulos

We study the problem of nonnegative rank-one approximation of a nonnegative tensor, and show that the globally optimal solution that minimizes the generalized Kullback-Leibler divergence can be efficiently obtained, i.e., it is not NP-hard. This result works for arbitrary nonnegative tensors with an arbitrary number of modes (including two, i.e., matrices). We derive a closed-form expression fo...
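The closed form referred to here can be sketched directly: under generalized KL divergence, the optimal nonnegative rank-one approximation is built from the mode-wise marginal sums, scaled by the total sum (for matrices this is the familiar independence model, row sums ⊗ column sums / total). A hedged sketch of that formula, not a transcription of the paper's derivation:

```python
import numpy as np

def kl_rank_one(T):
    """Nonnegative rank-one approximation minimizing generalized KL
    divergence D(T || T_hat): the outer product of the mode-wise
    marginal sums of T, divided by total**(ndim - 1)."""
    total = T.sum()
    approx = np.ones_like(T, dtype=float)
    for mode in range(T.ndim):
        axes = tuple(a for a in range(T.ndim) if a != mode)
        marginal = T.sum(axis=axes)        # sum over all other modes
        shape = [1] * T.ndim
        shape[mode] = -1
        approx = approx * marginal.reshape(shape)
    return approx / total ** (T.ndim - 1)
```

Note the approximation preserves the total mass of the tensor, and an exactly rank-one input is recovered exactly.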

Journal: :Computational Statistics & Data Analysis 2017
Alessandro Chiancone Florence Forbes Stéphane Girard

Sliced Inverse Regression (SIR) has been extensively used to reduce the dimension of the predictor space before performing regression. SIR is originally a model-free method, but it has been shown to actually correspond to the maximum likelihood of an inverse regression model with Gaussian errors. This intrinsic Gaussianity of standard SIR may explain its high sensitivity to outliers as observed ...
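Textbook SIR (the baseline whose outlier sensitivity the abstract motivates, not the robust extension the paper develops) can be sketched as: whiten the predictors, slice the response, average the whitened predictors within each slice, and take the leading eigenvectors of the between-slice covariance of those means:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Standard SIR estimate of the effective dimension-reduction
    directions (illustrative sketch; function name is ours)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # whiten: Z = Xc @ cov^{-1/2}
    cov = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ W
    # slice by the order of y; between-slice covariance of slice means
    order = np.argsort(y)
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        m = Z[chunk].mean(axis=0)
        M += (len(chunk) / n) * np.outer(m, m)
    evals, evecs = np.linalg.eigh(M)
    # top eigenvectors, mapped back to the original predictor scale
    B = W @ evecs[:, ::-1][:, :n_dirs]
    return B / np.linalg.norm(B, axis=0)
```

A single gross outlier in one slice shifts that slice's mean and hence the leading eigenvector, which is the fragility the Gaussian-error interpretation above helps explain.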
