Search results for: variance reduction
Number of results: 591,030
Ensemble classification techniques such as bagging (Breiman, 1996a), boosting (Freund & Schapire, 1997), and arcing algorithms (Breiman, 1997) have received much attention in the recent literature. Such techniques have been shown to lead to reduced classification error on unseen cases. Even when the ensemble is trained well beyond zero training set error, the ensemble continues to exhibit improved ...
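A minimal sketch of the bagging idea behind such ensembles, using scikit-learn's BaggingClassifier with its default decision-tree base learner; the dataset and settings below are illustrative and not taken from the abstract:

```python
# Bagging sketch: bootstrap-aggregated decision trees vs a single tree.
# Dataset and hyperparameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# BaggingClassifier's default base learner is a decision tree; each of the
# n_estimators trees is fit on a bootstrap resample of the training set and
# predictions are combined by voting, which mainly reduces variance.
bagged = BaggingClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("single tree accuracy:    ", single.score(X_te, y_te))
print("bagged ensemble accuracy:", bagged.score(X_te, y_te))
```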
A standard approach to computing expectations with respect to a given target measure is to introduce an overdamped Langevin equation which is reversible with respect to the target distribution, and to approximate the expectation by a time-averaging estimator. As has been noted in recent papers [30, 37, 61, 72], introducing an appropriately chosen nonreversible component to the dynamics is benef...
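A minimal sketch of the time-averaging estimator with an optional nonreversible perturbation, assuming a standard Gaussian target and an Euler-Maruyama discretisation; the step size, the antisymmetric matrix J, and the run lengths are illustrative choices, and discretisation bias is ignored:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: standard 2-D Gaussian, so grad log pi(x) = -x.
def grad_log_pi(x):
    return -x

# Antisymmetric matrix giving the nonreversible perturbation; gamma = 0
# recovers the reversible overdamped Langevin dynamics.
gamma = 1.0
J = np.array([[0.0, gamma], [-gamma, 0.0]])

def time_average(h, n_steps=200_000, dt=1e-2, burn_in=10_000):
    """Euler-Maruyama discretisation of dX = (I + J) grad log pi(X) dt + sqrt(2) dW,
    followed by a time-averaging estimator of E[h(X)]."""
    x = np.zeros(2)
    total, count = 0.0, 0
    for k in range(n_steps):
        drift = (np.eye(2) + J) @ grad_log_pi(x)
        x = x + dt * drift + np.sqrt(2 * dt) * rng.standard_normal(2)
        if k >= burn_in:
            total += h(x)
            count += 1
    return total / count

# Example: E[x_0^2] = 1 for the standard Gaussian target.
print(time_average(lambda x: x[0] ** 2))
```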
Suppose as usual that we wish to estimate $\theta := \mathbb{E}[h(X)]$. Then the standard simulation algorithm is: 1. Generate i.i.d. samples $X_1, \ldots, X_n$. 2. Estimate $\theta$ with $\hat{\theta}_n = \frac{1}{n}\sum_{j=1}^{n} Y_j$, where $Y_j := h(X_j)$. 3. Approximate $100(1-\alpha)\%$ confidence intervals are then given by $\left[\hat{\theta}_n - z_{1-\alpha/2}\,\frac{\sigma_n}{\sqrt{n}},\ \hat{\theta}_n + z_{1-\alpha/2}\,\frac{\sigma_n}{\sqrt{n}}\right]$, where $\sigma_n^2$ is the usual estimate of $\operatorname{Var}(Y)$ based on $Y_1, \ldots, Y_n$. One way to measure the quality of the estimator $\hat{\theta}_n$ is b...
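A minimal sketch of this crude Monte Carlo estimator and its approximate confidence interval; the test function and sampler used below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mc_estimate(h, sample_X, n, alpha=0.05):
    """Crude Monte Carlo estimate of theta = E[h(X)] with an approximate
    100(1 - alpha)% confidence interval."""
    Y = np.array([h(x) for x in sample_X(n, rng)])
    theta_n = Y.mean()
    sigma_n = Y.std(ddof=1)                          # estimate of sqrt(Var(Y))
    half = norm.ppf(1 - alpha / 2) * sigma_n / np.sqrt(n)
    return theta_n, (theta_n - half, theta_n + half)

# Illustrative example (not from the text): theta = E[exp(Z)] with Z ~ N(0, 1),
# whose true value is exp(1/2).
theta, ci = mc_estimate(np.exp, lambda n, rng: rng.standard_normal(n), n=100_000)
print(theta, ci)
```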
Conjugate gradient methods are an important class of methods for solving linear equations and nonlinear optimization problems. In our work, we propose a new stochastic conjugate gradient algorithm with variance reduction (CGVR) and prove its linear convergence with the Fletcher-Reeves update for strongly convex and smooth functions. We experimentally demonstrate that the CGVR algorithm converges fast...
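A rough sketch of the general idea, combining SVRG-style variance-reduced gradients with Fletcher-Reeves conjugate directions; this is not the authors' exact CGVR algorithm, and the function and parameter names below are assumptions:

```python
import numpy as np

def cgvr_sketch(grad_i, w0, n, m=50, epochs=30, lr=0.05):
    """Illustrative stochastic conjugate gradient with SVRG-style variance
    reduction and Fletcher-Reeves updates (a sketch, not the paper's method).
    grad_i(w, i) returns the gradient of the i-th component function at w."""
    rng = np.random.default_rng(0)
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)  # full gradient at snapshot
        g_prev = mu.copy()
        d = -g_prev                                  # restart with steepest descent each epoch
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced gradient estimate (SVRG correction term).
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            beta = (g @ g) / (g_prev @ g_prev + 1e-12)   # Fletcher-Reeves coefficient
            d = -g + beta * d
            w = w + lr * d
            g_prev = g
    return w
```

For a finite-sum objective f(w) = (1/n) sum_i f_i(w), grad_i(w, i) would return the gradient of f_i at w; the step size and inner-loop length would of course need tuning, and a line search could replace the fixed learning rate as in the deterministic Fletcher-Reeves method.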
In this paper, we consider two variance reduction schemes that exploit the structure of the primal graph of the graphical model: Rao-Blackwellised w-cutset sampling and AND/OR sampling. We show that the two schemes are orthogonal and can be combined to further reduce the variance. Our combination yields a new family of estimators that trade time and space for variance. We demonstrate experime...
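A minimal illustration of the Rao-Blackwellisation idea behind cutset sampling, on a toy two-variable discrete model (the model and numbers are made up): sample the cutset variable and sum out the remaining variable exactly instead of sampling it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny discrete model: cutset variable X in {0, 1}, remaining variable Y in {0, 1, 2}.
p_x = np.array([0.3, 0.7])                       # P(X)
p_y_given_x = np.array([[0.2, 0.5, 0.3],         # P(Y | X = 0)
                        [0.6, 0.1, 0.3]])        # P(Y | X = 1)
g = np.array([[0.0, 1.0, 4.0],                   # g(x, y) whose expectation we want
              [2.0, 3.0, 5.0]])

def crude(n):
    xs = rng.choice(2, size=n, p=p_x)
    ys = np.array([rng.choice(3, p=p_y_given_x[x]) for x in xs])
    return g[xs, ys].mean()

def rao_blackwellised(n):
    xs = rng.choice(2, size=n, p=p_x)
    # For each sampled cutset value, sum out Y exactly instead of sampling it.
    return np.array([p_y_given_x[x] @ g[x] for x in xs]).mean()

print("crude:", crude(5000), "Rao-Blackwellised:", rao_blackwellised(5000))
```

By the Rao-Blackwell theorem, conditioning in this way can only reduce the variance of the estimator.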
Monte-Carlo Tree Search (MCTS) has proven to be a powerful, generic planning technique for decision-making in single-agent and adversarial environments. The stochastic nature of the Monte-Carlo simulations introduces errors in the value estimates, both in terms of bias and variance. Whilst reducing bias (typically through the addition of domain knowledge) has been studied in the MCTS literature...
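A minimal sketch of one generic variance reduction device for such simulation-based value estimates, a control variate applied to rollout returns; the synthetic returns and baseline below are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for Monte-Carlo rollout returns R and a cheap correlated
# baseline B (e.g. a heuristic evaluation recorded during the same rollout)
# whose mean E[B] is known. Numbers are synthetic.
n = 10_000
B = rng.normal(0.0, 1.0, n)                     # baseline statistic, E[B] = 0 by construction
R = 0.4 + 0.8 * B + rng.normal(0.0, 1.0, n)     # rollout returns correlated with B

plain = R.mean()                                 # plain Monte-Carlo value estimate

cov = np.cov(R, B)
c = cov[0, 1] / cov[1, 1]                        # estimated optimal control-variate coefficient
cv = (R - c * (B - 0.0)).mean()                  # variance-reduced value estimate

print("plain estimate:", plain, "control-variate estimate:", cv)
```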
In this paper we present a simple control variate method for option pricing under stochastic volatility models via the risk-neutral pricing formula. The method is based on the order moment of the stochastic volatility factor $Y_t$, choosing a non-random factor $Y(t)$ with the same order moment. We construct the control variate using a stochastic differential equation with a determi...
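As a point of comparison, a minimal sketch of control-variate Monte Carlo pricing under a Heston-type stochastic volatility model, using the discounted terminal asset price (whose risk-neutral expectation is the spot price) rather than the paper's moment-matched deterministic-volatility factor as the control variate; all parameters below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative European call pricing under Heston-type stochastic volatility.
S0, K, r, T = 100.0, 100.0, 0.02, 1.0
v0, kappa, theta, xi, rho = 0.04, 1.5, 0.04, 0.5, -0.7
n_paths, n_steps = 50_000, 200
dt = T / n_steps

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)                           # full-truncation scheme for v
    S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)   # log-Euler step for S
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

payoff = np.exp(-r * T) * np.maximum(S - K, 0.0)         # discounted call payoff
control = np.exp(-r * T) * S                             # discounted S_T, E[control] = S0

cov = np.cov(payoff, control)
c = cov[0, 1] / cov[1, 1]
plain = payoff.mean()
cv = (payoff - c * (control - S0)).mean()
print("plain MC price:", plain, "control-variate price:", cv)
```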
We study Monte Carlo approximations to high dimensional parameter dependent integrals. We survey the multilevel variance reduction technique introduced by the author in [4] and present extensions and new developments of it. The tools needed for the convergence analysis of vector-valued Monte Carlo methods are discussed as well. Applications to the stochastic solution of integral equations are give...
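A minimal sketch of a multilevel Monte Carlo telescoping estimator, illustrated on an SDE functional rather than the parameter-dependent integrals treated in the paper; the levels and per-level sample sizes are hand-picked for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Multilevel estimate of E[max(S_T - K, 0)] for geometric Brownian motion,
# discretised by Euler-Maruyama with 2^l steps on level l, coupling fine and
# coarse paths through shared Brownian increments.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def payoff_pair(level, n_paths):
    """Return coupled (fine, coarse) discounted payoffs on the given level."""
    n_fine = 2 ** level
    dt = T / n_fine
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_fine))
    S_f = np.full(n_paths, S0)
    for k in range(n_fine):                      # fine Euler path
        S_f = S_f * (1 + r * dt + sigma * dW[:, k])
    P_f = np.exp(-r * T) * np.maximum(S_f - K, 0.0)
    if level == 0:
        return P_f, np.zeros(n_paths)
    S_c = np.full(n_paths, S0)
    dW_c = dW[:, 0::2] + dW[:, 1::2]             # coarse increments from the same noise
    for k in range(n_fine // 2):                 # coarse Euler path with step 2*dt
        S_c = S_c * (1 + r * 2 * dt + sigma * dW_c[:, k])
    P_c = np.exp(-r * T) * np.maximum(S_c - K, 0.0)
    return P_f, P_c

# Telescoping sum over levels of the mean corrections (fine - coarse).
levels, n_per_level = range(6), [100_000, 50_000, 25_000, 12_000, 6_000, 3_000]
estimate = 0.0
for l, n in zip(levels, n_per_level):
    P_f, P_c = payoff_pair(l, n)
    estimate += (P_f - P_c).mean()
print("MLMC estimate of the discounted call payoff:", estimate)
```

In a full multilevel implementation the per-level sample sizes would be chosen from estimated level variances and costs rather than hand-picked.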
A. Additional Applications and Experimental Results
In this section, we present the application of our generic framework to one-bit matrix completion, as well as additional experimental results for matrix sensing.
A.1. One-bit Matrix Completion
Compared with matrix completion, in one-bit matrix completion we only observe the sign of each noisy entry of the unknown low-rank matrix $X^*$ (Davenport...
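A minimal sketch of the one-bit observation model described here, generating sign observations of noisy entries of a low-rank matrix on a random index set; the dimensions, rank, and noise level are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-bit matrix completion data: a low-rank matrix X_star, a random
# set of observed indices, and binary observations equal to the sign of each
# noisy observed entry.
d1, d2, rank, obs_frac, noise_std = 60, 50, 3, 0.3, 0.5
U = rng.standard_normal((d1, rank))
V = rng.standard_normal((d2, rank))
X_star = U @ V.T                                    # unknown low-rank matrix

mask = rng.random((d1, d2)) < obs_frac              # indices of observed entries
noise = noise_std * rng.standard_normal((d1, d2))
Y = np.sign(X_star + noise)                         # one-bit (sign) observations
Y_observed = np.where(mask, Y, 0)                   # 0 marks unobserved entries

print("observed entries:", mask.sum(), "of", d1 * d2)
```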