Search results for: variance reduction technique

Number of results: 1,160,769

2010
Vibhav Gogate Rina Dechter

In this paper, we consider two variance reduction schemes that exploit the structure of the primal graph of the graphical model: Rao-Blackwellised w-cutset sampling and AND/OR sampling. We show that the two schemes are orthogonal and can be combined to further reduce the variance. Our combination yields a new family of estimators which trade time and space with variance. We demonstrate experime...
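The Rao-Blackwellisation idea in this abstract can be illustrated on a toy Gaussian pair (a minimal sketch, not the authors' graphical-model w-cutset setting): replacing a raw sample of X with the closed-form conditional expectation E[X | Y] never increases, and here halves, the per-sample variance.

```python
import random
import statistics

random.seed(0)

def estimators(n=50_000):
    """Compare a crude estimator of E[X] with its Rao-Blackwellised version.

    Toy model: Y ~ N(0,1) and X | Y ~ N(Y, 1), so E[X] = 0.
    Crude:          average the sampled X values     (variance per sample = 2)
    Rao-Blackwell:  average E[X | Y] = Y instead     (variance per sample = 1)
    """
    crude, rb = [], []
    for _ in range(n):
        y = random.gauss(0.0, 1.0)
        x = random.gauss(y, 1.0)
        crude.append(x)  # raw sample of X
        rb.append(y)     # E[X | Y=y], known in closed form for this model
    return crude, rb

crude, rb = estimators()
print(statistics.variance(crude), statistics.variance(rb))  # ≈ 2 and ≈ 1
```

Both averages estimate the same quantity; conditioning simply integrates out one source of randomness analytically, which is the essence of Rao-Blackwellised sampling.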

2015
Guo Liu Qiang Zhao

In this paper we present a simple control variate method for option pricing under stochastic volatility models via the risk-neutral pricing formula; it is based on matching an order moment of the stochastic volatility factor Yt when choosing a non-random factor Y(t) with the same order moment. We construct the control variate using a stochastic differential equation with a determi...
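The control-variate mechanism this abstract relies on can be sketched on a toy expectation (estimating E[exp(Z)] for Z ~ N(0,1), not the authors' stochastic-volatility pricing problem): subtract a correlated quantity with a known mean, scaled by the estimated optimal coefficient.

```python
import math
import random

random.seed(0)

def control_variate_estimate(n=100_000):
    """Estimate E[exp(Z)], Z ~ N(0,1), using Z itself as a control variate.

    The true value is exp(0.5) ≈ 1.6487, and E[Z] = 0 is known exactly.
    """
    zs = [random.gauss(0.0, 1.0) for _ in range(n)]
    ys = [math.exp(z) for z in zs]

    # Optimal coefficient b* = Cov(Y, Z) / Var(Z), estimated from the sample.
    my, mz = sum(ys) / n, sum(zs) / n
    cov = sum((y - my) * (z - mz) for y, z in zip(ys, zs)) / (n - 1)
    var = sum((z - mz) ** 2 for z in zs) / (n - 1)
    b = cov / var

    # Adjusted estimator: mean of Y - b*(Z - E[Z]); same mean, lower variance.
    return sum(y - b * (z - 0.0) for y, z in zip(ys, zs)) / n

print(control_variate_estimate())  # close to exp(0.5) ≈ 1.6487
```

In the pricing setting of the abstract, Y would be the discounted payoff and the control the analytically tractable quantity built from the deterministic factor Y(t).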

2001
Stefan Heinrich

We study Monte Carlo approximations to high dimensional parameter dependent integrals. We survey the multilevel variance reduction technique introduced by the author in [4] and present extensions and new developments of it. The tools needed for the convergence analysis of vector-valued Monte Carlo methods are discussed, as well. Applications to stochastic solution of integral equations are give...
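The multilevel idea surveyed in this abstract can be sketched on a toy parameter-dependent integral (my own example, not one from the paper): write the fine-level estimator as a telescoping sum, spend many samples on the cheap coarse level, and few on the small-variance corrections.

```python
import math
import random

random.seed(1)

def midpoint(theta, level):
    """Midpoint-rule approximation of the integral of θ·exp(x) over [0,1]
    using 2**level subintervals."""
    m = 2 ** level
    h = 1.0 / m
    return sum(theta * math.exp((i + 0.5) * h) * h for i in range(m))

def mlmc(max_level=5, n0=20_000):
    """Multilevel estimator of E_θ[∫₀¹ θ·eˣ dx] with θ ~ U(0,1).

    E[P_L] = E[P_0] + Σ_l E[P_l - P_{l-1}]: level 0 gets many cheap samples,
    while each correction has small variance and needs far fewer samples.
    """
    total = 0.0
    for level in range(max_level + 1):
        n = max(n0 // 4 ** level, 100)  # geometrically fewer samples per level
        acc = 0.0
        for _ in range(n):
            theta = random.random()  # the same θ couples the two resolutions
            fine = midpoint(theta, level)
            coarse = midpoint(theta, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

print(mlmc())  # close to (e - 1) / 2 ≈ 0.8591
```

The true value here is E_θ[θ(e - 1)] = (e - 1)/2, so the sketch can be checked directly; the variance reduction comes from the strong coupling between adjacent levels.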

2017

A. Additional Applications and Experimental Results
In this section, we present the application of our generic framework to one-bit matrix completion as well as additional experimental results for matrix sensing. A.1. One-bit Matrix Completion Compared with matrix completion, we only observe the sign of each noisy entry of the unknown low-rank matrix X* in one-bit matrix completion (Davenport...

2004
Masashi Sugiyama Motoaki Kawanabe Klaus-Robert Müller

A well-known result by Stein shows that regularized estimators with small bias often yield better estimates than unbiased estimators. In this paper, we adapt this spirit to model selection, and propose regularizing unbiased generalization error estimators for stabilization. We trade a small bias in a model selection criterion against a larger variance reduction which has the beneficial effect o...

Journal: :Computer Physics Communications 2007
Tommy Burch Christian Hagen

Applying domain decomposition to the lattice Dirac operator and the associated quark propagator, we arrive at expressions which, with the proper insertion of random sources therein, can provide improvement to the estimation of the propagator. Schemes are presented for both open and closed (or loop) propagators. In the end, our technique for improving open contributions is similar to the “maxima...

2017
Amr Sharaf Hal Daumé

We present an algorithm for structured prediction under online bandit feedback. The learner repeatedly predicts a sequence of actions, generating a structured output. It then observes feedback for that output and no others. We consider two cases: a pure bandit setting in which it only observes a loss, and more fine-grained feedback in which it observes a loss for every action. We find that the ...

2006
Werner Sandmann

Importance sampling is a variance reduction technique that, in its optimal case, yields a zero-variance estimator. It has been successfully applied in a variety of settings ranging from Monte Carlo methods for static models to simulations of complex dynamical systems governed by stochastic processes. We demonstrate the applicability of Importance Sampling to the simulation of coupl...
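The core mechanism behind this abstract can be sketched on a classic rare-event toy problem (my own example, not the coupled-system simulations the paper studies): estimating P(Z > 4) for a standard normal by sampling from a shifted proposal and reweighting with the likelihood ratio.

```python
import math
import random

random.seed(2)

def rare_event_is(n=100_000, a=4.0):
    """Estimate P(Z > a), Z ~ N(0,1), by importance sampling.

    Sampling from the shifted proposal N(a, 1) places nearly all draws in
    the rare region; the likelihood ratio φ(x)/φ(x-a) = exp(a²/2 - a·x)
    corrects the resulting bias.
    """
    acc = 0.0
    for _ in range(n):
        x = random.gauss(a, 1.0)                # draw from the proposal
        if x > a:
            acc += math.exp(a * a / 2 - a * x)  # density ratio weight
    return acc / n

est = rare_event_is()
true = 0.5 * math.erfc(4.0 / math.sqrt(2.0))
print(est, true)  # both ≈ 3.17e-5
```

A crude Monte Carlo estimator would see an exceedance only about 3 times per 100,000 draws; the shifted proposal concentrates every sample where the event happens, which is what drives the variance reduction.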

2003
Giorgio Valentini Thomas G. Dietterich

Theoretical and experimental analyses of bagging indicate that it is primarily a variance reduction technique. This suggests that bagging should be applied to learning algorithms tuned to minimize bias, even at the cost of some increase in variance. We test this idea with Support Vector Machines (SVMs) by employing out-of-bag estimates of bias and variance to tune the SVMs. Experiments indicate...
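The variance-reduction claim about bagging can be checked empirically with a toy sketch (a deliberately unstable 1-nearest-neighbour regressor on pure-noise data, not the SVM experiments of the abstract): averaging predictors fit on bootstrap resamples shrinks the prediction variance across independent training sets.

```python
import random
import statistics

random.seed(3)

def one_nn_predict(train, x0):
    """1-nearest-neighbour regression: return the y of the closest x."""
    return min(train, key=lambda p: abs(p[0] - x0))[1]

def bagged_predict(train, x0, n_bags=25):
    """Average 1-NN predictions over bootstrap resamples of the training set."""
    preds = []
    for _ in range(n_bags):
        boot = [random.choice(train) for _ in train]
        preds.append(one_nn_predict(boot, x0))
    return sum(preds) / n_bags

# Pure-noise target (the true function is 0), so prediction variance
# at a fixed query point is exactly the predictor's excess error.
single_preds, bagged_preds = [], []
for _ in range(300):  # 300 independent training sets
    train = [(random.random(), random.gauss(0.0, 1.0)) for _ in range(30)]
    single_preds.append(one_nn_predict(train, 0.5))
    bagged_preds.append(bagged_predict(train, 0.5))

print(statistics.variance(single_preds), statistics.variance(bagged_preds))
```

The single 1-NN prediction inherits the full noise variance of one training point, while the bagged prediction is a weighted average over several neighbours, which is the variance-smoothing effect the abstract exploits.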
