Search results for: change point maximum likelihood estimator mle step change simple linear profile within

Number of results: 3,258,606

2015
Bander Al-Zahrani, M. A. Ali, Anwar Joarder

In this paper, we derive recurrence relations for the moments of functions of single and two order statistics from the Lindley distribution. We also consider maximum likelihood estimation (MLE) of the parameter of the distribution based on multiply type-II censoring. The maximum likelihood estimator is computed numerically because it does not have an explicit form for the parameter. Then, a ...
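
Since the abstract notes that this MLE must be computed numerically, a minimal sketch of the numerical approach may help. It maximizes the complete-sample Lindley log-likelihood as an illustration only (the paper's multiply type-II censored likelihood adds censoring terms); all names and the toy sampler below are assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lindley_neg_loglik(theta, x):
    """Negative log-likelihood of the Lindley distribution,
    f(x; theta) = theta^2 / (theta + 1) * (1 + x) * exp(-theta * x)."""
    n = len(x)
    return -(2 * n * np.log(theta) - n * np.log(theta + 1)
             + np.sum(np.log(1 + x)) - theta * np.sum(x))

# toy data: Lindley(theta) is a mixture of Exp(theta) and Gamma(2, theta)
rng = np.random.default_rng(0)
theta_true, n = 1.5, 200
is_exp = rng.random(n) < theta_true / (theta_true + 1)
x = np.where(is_exp,
             rng.exponential(1 / theta_true, n),
             rng.gamma(2, 1 / theta_true, n))

# maximize the log-likelihood numerically over a bounded interval
res = minimize_scalar(lindley_neg_loglik, bounds=(1e-6, 50.0),
                      args=(x,), method="bounded")
print("theta_hat =", res.x)
```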

Journal: Electronic Journal of Statistics, 2021

We revisit the problem of estimating the center of symmetry θ of an unknown symmetric density f. Although Stone (1975), Van Eeden (1970), and Sacks (1975) constructed adaptive estimators in this model, their estimators depend on external tuning parameters. In an effort to reduce the burden of tuning parameters, we impose the additional restriction of log-concavity and construct truncated one-step estimators which are adaptive under this assumption. Our simulations in...

2009
R. Dennis Cook, Bing Li, Francesca Chiaromonte

We propose a new parsimonious version of the classical multivariate normal linear model, yielding a maximum likelihood estimator (MLE) that is asymptotically less variable than the MLE based on the usual model. Our approach is based on the construction of a link between the mean function and the covariance matrix, using the minimal reducing subspace of the latter that accommodates the former. T...

Journal: Communications in Statistics, 2021

In cluster-specific studies, ordinary logistic regression and conditional logistic regression for binary outcomes provide the maximum likelihood estimator (MLE) and the conditional maximum likelihood estimator (CMLE), respectively. In this paper, we show that the CMLE approaches the MLE asymptotically when each individual data point is replicated infinitely many times. Our theoretical derivation is based on the observation that a term appearing in the average log-likelihood function c...

2016
Li Chou, Somdeb Sarkhel, Nicholas Ruozzi, Vibhav Gogate

The maximum likelihood estimator (MLE) is generally asymptotically consistent but is susceptible to overfitting. To combat this problem, regularization methods which reduce the variance at the cost of (slightly) increasing the bias are often employed in practice. In this paper, we present an alternative variance reduction (regularization) technique that quantizes the MLE estimates as a post pro...
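
As a loose illustration only (not the authors' specific technique), the sketch below quantizes a vector of MLE parameter estimates to a small set of levels as a post-processing step; the function name quantize and the num_levels grid are hypothetical.

```python
import numpy as np

def quantize(params, num_levels=8):
    """Map each parameter to the nearest of num_levels evenly spaced
    values between the observed min and max (illustrative scheme only)."""
    lo, hi = params.min(), params.max()
    levels = np.linspace(lo, hi, num_levels)
    idx = np.abs(params[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

# toy example: noisy MLE estimates of model parameters
rng = np.random.default_rng(1)
mle_params = rng.normal(0.0, 1.0, size=20)
print(quantize(mle_params, num_levels=4))
```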

2017
Victor-Emmanuel Brunel, Ankur Moitra, Philippe Rigollet, John Urschel

Determinantal point processes (DPPs) have wide-ranging applications in machine learning, where they are used to enforce the notion of diversity in subset selection problems. Many estimators have been proposed, but surprisingly the basic properties of the maximum likelihood estimator (MLE) have received little attention. In this paper, we study the local geometry of the expected log-likelihood f...
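
For context, the likelihood in question comes from the L-ensemble parameterization, P(A) = det(L_A) / det(L + I). The sketch below only evaluates that log-likelihood on observed subsets with a toy kernel; it is not the authors' analysis, and the kernel and subsets are made up.

```python
import numpy as np

def dpp_log_likelihood(L, samples):
    """Log-likelihood of observed subsets under an L-ensemble DPP:
    P(A) = det(L_A) / det(L + I). `samples` is a list of index arrays,
    and L is a symmetric positive semidefinite kernel matrix."""
    n = L.shape[0]
    log_norm = np.linalg.slogdet(L + np.eye(n))[1]
    ll = 0.0
    for A in samples:
        sub = L[np.ix_(A, A)]
        ll += np.linalg.slogdet(sub)[1] - log_norm
    return ll

# toy kernel and two observed subsets
rng = np.random.default_rng(4)
B = rng.standard_normal((5, 3))
L = B @ B.T + 0.1 * np.eye(5)      # positive definite kernel
print(dpp_log_likelihood(L, [np.array([0, 2]), np.array([1, 3, 4])]))
```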

Journal: IEEE Transactions on Information Theory, 2022

In real life we often deal with independent but not identically distributed observations (i.n.i.d.o.), for which the most well-known statistical model is the multiple linear regression model (MLRM) with non-random covariates. While classical methods are based on the maximum likelihood estimator (MLE), its lack of robustness to small deviations from the assumed conditions is well known. In this paper, and based on Rényi's pseudodi...

Journal: IEEE Trans. Signal Processing, 2000
Steven M. Kay, Supratim Saha

Estimation of signals with nonlinear as well as linear parameters in noise is studied. Maximum likelihood estimation has been shown to perform the best among all the methods. In such problems, joint maximum likelihood estimation of the unknown parameters reduces to a separable optimization problem, where first, the nonlinear parameters are estimated via a grid search, and then, the nonlinear pa...
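
A minimal sketch of the separable strategy described here, assuming a single sinusoid in white Gaussian noise as a stand-in signal model: the frequency (nonlinear parameter) is found by grid search, and the in-phase/quadrature amplitudes (linear parameters) follow in closed form by least squares. Names and the toy data are illustrative, not the paper's exact setup.

```python
import numpy as np

def separable_mle(y, t, freq_grid):
    """Grid-search the nonlinear parameter (frequency); for each candidate,
    the linear parameters have a closed-form least-squares solution.
    Under white Gaussian noise, maximizing the likelihood is equivalent
    to minimizing the residual power."""
    best = (np.inf, None, None)
    for f in freq_grid:
        H = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        theta, *_ = np.linalg.lstsq(H, y, rcond=None)
        resid = y - H @ theta
        cost = resid @ resid
        if cost < best[0]:
            best = (cost, f, theta)
    return best[1], best[2]

# toy data: single tone in noise
rng = np.random.default_rng(2)
t = np.arange(128)
y = 1.3 * np.cos(2 * np.pi * 0.12 * t + 0.7) + 0.5 * rng.standard_normal(128)
f_hat, amps = separable_mle(y, t, np.linspace(0.01, 0.49, 2000))
print("estimated frequency:", f_hat)
```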

2004
Jungsoo Woo

The uniformly minimum variance unbiased estimator (UMVUE) and the maximum likelihood estimator (MLE) are derived for samples from the uniform distribution in the presence of outliers, where the outliers are generated from a generalized uniform distribution (GUD), when one parameter of the GUD is known. In this case it has been shown that the UMVUE is better than the MLE. When both the parameters are unknown then...
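
The comparison rests on standard facts in the outlier-free case: for a sample from U(0, θ), the MLE is the sample maximum and the UMVUE is the bias-corrected maximum (n + 1)/n · max. The sketch below contrasts the two by simulation under that simpler assumption; the paper's GUD-outlier setting adds further structure not shown here.

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true, n, reps = 10.0, 20, 5000

mle = np.empty(reps)
umvue = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0, theta_true, n)
    mle[r] = x.max()                    # MLE for U(0, theta)
    umvue[r] = (n + 1) / n * x.max()    # bias-corrected maximum = UMVUE

# the MLE is biased low; the UMVUE removes that bias and has smaller MSE
print("mean MLE  :", mle.mean())
print("mean UMVUE:", umvue.mean())
print("MSE  MLE  :", ((mle - theta_true) ** 2).mean())
print("MSE  UMVUE:", ((umvue - theta_true) ** 2).mean())
```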

Journal: Mathematics, 2023

Multicollinearity negatively affects the efficiency of the maximum likelihood estimator (MLE) in both linear and generalized linear models. The Kibria-Lukman estimator (KLE) was developed as an alternative to the MLE to handle multicollinearity for the linear regression model. In this study, we proposed the logistic Kibria-Lukman estimator (LKLE) for the logistic regression model. We theoretically established the superiority condition of the new estimator over the MLE, the logistic ridge estimator (LRE), the logistic Liu estimator (LLE), the Liu...
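
For reference, the original linear-model Kibria-Lukman estimator shrinks the OLS/MLE coefficients as beta_KL = (X'X + kI)^{-1} (X'X - kI) beta_OLS; the logistic variant studied in the paper is typically built from the weighted information matrix of the logistic fit instead of X'X. The sketch below implements only the linear-model version, with a hypothetical collinear toy design.

```python
import numpy as np

def kibria_lukman(X, y, k):
    """Linear-model Kibria-Lukman estimator:
    beta_KL = (X'X + kI)^{-1} (X'X - kI) beta_OLS,
    which shrinks the OLS/MLE coefficients to curb the variance
    inflation caused by multicollinearity."""
    XtX = X.T @ X
    p = XtX.shape[0]
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + k * np.eye(p), (XtX - k * np.eye(p)) @ beta_ols)

# toy example with two highly correlated columns
rng = np.random.default_rng(3)
z = rng.standard_normal(100)
X = np.column_stack([z, z + 0.01 * rng.standard_normal(100)])
y = X @ np.array([1.0, 1.0]) + rng.standard_normal(100)
print("OLS:", np.linalg.solve(X.T @ X, X.T @ y))
print("KL :", kibria_lukman(X, y, k=0.5))
```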

Chart: number of search results per year