Search results for: maximum likelihood

Number of results: 357813

2002
J. Řeháček

Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered a useful alternative in the extreme cases where the standard ill-posed direct-inversion method...
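
The abstract leaves the iteration unspecified; as a hedged sketch, projected gradient ascent on the Poisson transmission log-likelihood shows one generic way the illuminating beam's counting statistics can enter an iterative ML reconstruction (the function name, step size, and iteration count are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def ml_absorption_recon(A, counts, I0, n_iter=500, step=1e-4):
    """Projected gradient ascent on the Poisson transmission log-likelihood.

    A      : (n_rays, n_pixels) system matrix of ray/pixel intersection lengths
    counts : (n_rays,) detected counts, N_i ~ Poisson(I0 * exp(-(A @ mu)_i))
    I0     : mean intensity of the illuminating beam (Poisson statistics assumed)
    """
    mu = np.zeros(A.shape[1])                   # attenuation image, start at zero
    for _ in range(n_iter):
        expected = I0 * np.exp(-A @ mu)         # expected counts under current mu
        grad = A.T @ (expected - counts)        # gradient of the log-likelihood
        mu = np.maximum(mu + step * grad, 0.0)  # ascend; keep attenuation >= 0
    return mu
```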

2004
Clayton Scott, Robert Nowak

This module introduces the maximum likelihood estimator. We show how the MLE implements the likelihood principle. Methods for computing the MLE are covered. Properties of the MLE are discussed, including asymptotic efficiency and invariance under reparameterization. The maximum likelihood estimator (MLE) is an alternative to the minimum variance unbiased estimator (MVUE). For many estimation proble...
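
As a concrete illustration of two points in this snippet, here is a minimal sketch of the closed-form Gaussian MLE and of invariance under reparameterization; note the variance MLE divides by n, whereas the unbiased (MVUE) estimator divides by n − 1:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=10_000)  # synthetic i.i.d. sample

mu_hat = x.mean()                     # MLE of the mean
var_hat = ((x - mu_hat) ** 2).mean()  # MLE of the variance (divisor n, biased)

# Invariance under reparameterization: the MLE of g(theta) is g(theta_hat),
# so the MLE of the standard deviation is simply the square root of var_hat.
sigma_hat = np.sqrt(var_hat)
print(mu_hat, var_hat, sigma_hat)     # roughly 2.0, 9.0, 3.0
```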

Journal: Journal of Vision, 2003
Laurence T. Maloney, Joong Nam Yang

We present a stochastic model of suprathreshold perceptual differences based on difference measurement. We develop a maximum likelihood difference scaling (MLDS) method for estimating its parameters and evaluate the reliability and distributional robustness of the fitting method. We also describe a method for testing whether the difference measurement model is appropriate as a description of hu...
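
Formulations of MLDS differ in detail; the sketch below assumes the common setup in which an observer reports which of two stimulus pairs, (a, b) or (c, d), looks more different, with Gaussian decision noise, and the scale values are estimated by maximizing the Bernoulli likelihood (the endpoint anchoring and all names here are assumptions, not taken from the abstract):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, quads, resp, p):
    # Anchor psi_1 = 0 and psi_p = 1 for identifiability; interior values free.
    psi = np.concatenate(([0.0], params[:p - 2], [1.0]))
    sigma = np.exp(params[-1])                 # decision-noise sd, kept positive
    a, b, c, d = quads.T
    delta = np.abs(psi[d] - psi[c]) - np.abs(psi[b] - psi[a])
    prob = norm.cdf(delta / sigma).clip(1e-9, 1 - 1e-9)
    return -(resp * np.log(prob) + (1 - resp) * np.log(1 - prob)).sum()

def fit_mlds(quads, resp, p):
    """quads: (n, 4) integer stimulus indices; resp: 1 if (c, d) judged larger."""
    x0 = np.concatenate((np.linspace(0, 1, p)[1:-1], [np.log(0.2)]))
    return minimize(neg_log_lik, x0, args=(quads, resp, p), method="Nelder-Mead")
```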

2000

The method of maximum likelihood (ML), introduced by Fisher (1921), is widely used in human and quantitative genetics and we draw upon this approach throughout the book, especially in Chapters 13–16 (mixture distributions) and 26–27 (variance component estimation). Weir (1996) gives a useful introduction with genetic applications, while Kendall and Stuart (1979) and Edwards (1992) provide more ...
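
Mixture distributions, the subject of the Chapters 13–16 reference, are a standard case where the ML solution has no closed form and is usually found by expectation maximization; here is a minimal EM sketch for a two-component univariate Gaussian mixture (the initialization heuristic is an assumption):

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """EM for a two-component univariate Gaussian mixture."""
    pi, mu1, mu2 = 0.5, x.min(), x.max()   # crude starting point from the data
    var1 = var2 = x.var()
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        p1 = pi * np.exp(-(x - mu1) ** 2 / (2 * var1)) / np.sqrt(2 * np.pi * var1)
        p2 = (1 - pi) * np.exp(-(x - mu2) ** 2 / (2 * var2)) / np.sqrt(2 * np.pi * var2)
        r = p1 / (p1 + p2)
        # M-step: responsibility-weighted ML updates of all parameters.
        pi = r.mean()
        mu1, mu2 = (r * x).sum() / r.sum(), ((1 - r) * x).sum() / (1 - r).sum()
        var1 = (r * (x - mu1) ** 2).sum() / r.sum()
        var2 = ((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum()
    return pi, mu1, var1, mu2, var2
```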

2007
Mark Siskind

This paper presents a novel framework, based on maximum likelihood, for training models to recognise simple spatial-motion events, such as those described by the verbs pick up, put down, push, pull, drop, and throw, and for classifying novel observations into previously trained classes. The model that we employ does not presuppose prior recognition or tracking of 3D object pose, shape, or identity....

1998
Charalambos D. Charalambous

The problem of estimating the parameters for continuous-time partially observed systems is discussed. New exact filters for obtaining Maximum Likelihood (ML) parameter estimates via the Expectation Maximization algorithm are derived. The methodology exploits relations between incomplete and complete data likelihood and gradient of likelihood functions, which are derived using Girsanov's measure t...

2000
Clark F. Olson

In image matching applications such as tracking and stereo matching, it is common to use the sum-of-squared-differences (SSD) measure to determine the best match for an image template. However, this measure is sensitive to outliers and is not robust to template variations. We describe a robust measure and efficient search strategy for template matching with a binary or greyscale template using a ma...
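
The snippet truncates before the measure itself, so the sketch below contrasts plain SSD with one standard robust alternative, a truncated quadratic, which can be read as ML matching under an inlier/outlier mixture noise model (the threshold tau and the exhaustive search are illustrative assumptions, not the paper's method):

```python
import numpy as np

def ssd_score(patch, template):
    return ((patch - template) ** 2).sum()   # sensitive to outlier pixels

def robust_score(patch, template, tau=20.0):
    # Cap each pixel's cost at tau**2 so a few outliers cannot dominate.
    return np.minimum((patch - template) ** 2, tau ** 2).sum()

def best_match(image, template, score=robust_score):
    th, tw = template.shape
    best, best_xy = np.inf, None
    for y in range(image.shape[0] - th + 1):   # exhaustive search, for clarity
        for x in range(image.shape[1] - tw + 1):
            s = score(image[y:y + th, x:x + tw], template)
            if s < best:
                best, best_xy = s, (y, x)
    return best_xy, best
```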

1989
Steven J. Nowlan

One popular class of unsupervised algorithms is competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as Gaussians) to a set of data points. The maximum likelihood fit of a model of this type suggests a "softer" form ...
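
A minimal sketch of the contrast the abstract draws: the hard winner-take-all update of traditional competition versus the "softer" responsibility-weighted update suggested by an ML fit of a blend of Gaussians (the inverse temperature beta and the learning rate are hypothetical knobs, not from the paper):

```python
import numpy as np

def competitive_step(x, centers, lr=0.05, soft=True, beta=4.0):
    """One adaptation step for a single data point x (shape (d,))."""
    d2 = ((centers - x) ** 2).sum(axis=1)   # squared distances to each center
    if soft:
        w = np.exp(-beta * d2)
        w /= w.sum()                        # softmax responsibilities: all adapt
    else:
        w = np.zeros(len(centers))
        w[d2.argmin()] = 1.0                # hard competition: winner takes all
    return centers + lr * w[:, None] * (x - centers)
```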

2004
Eunmo Kang

1. Summary of Lecture 12: In the last lecture we derived a risk (MSE) bound for regression problems; i.e., select an f ∈ F so that E[(f(X) − Y)^2] − E[(f*(X) − Y)^2] is small, where f*(x) = E[Y | X = x]. The result is summarized below. Theorem 1 (Complexity Regularization with Squared Error Loss). Let X = R, Y = [−b/2, b/2], {(X_i, Y_i)}_{i=1}^n i.i.d., P_XY unknown, F = {collection of candidate functions}, f : R → ...
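
For squared-error loss the excess risk in this bound has a clean closed form: it equals the L2 distance from f to the regression function f*, because the cross term vanishes after conditioning on X. A short derivation of this standard identity:

```latex
\begin{aligned}
\mathbb{E}\big[(f(X)-Y)^2\big] - \mathbb{E}\big[(f^*(X)-Y)^2\big]
  &= \mathbb{E}\big[(f(X)-f^*(X))^2\big]
     + 2\,\mathbb{E}\big[(f(X)-f^*(X))(f^*(X)-Y)\big] \\
  &= \mathbb{E}\big[(f(X)-f^*(X))^2\big],
\end{aligned}
% the cross term is zero: condition on X and use E[Y - f*(X) | X] = 0,
% which is exactly the definition f*(x) = E[Y | X = x].
```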

[Chart: number of search results per year]