Search results for: importance sampling

Number of results: 590784

2007
Roman Liesenfeld, Guilherme V. Moura, Jean-François Richard

We use panel probit models with unobserved heterogeneity and serially correlated errors in order to analyze the determinants and the dynamics of current-account reversals for a panel of developing and emerging countries. The likelihood evaluation of these models requires high-dimensional integration for which we use a generic procedure known as Efficient Importance Sampling (EIS). Our empirical...
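
For context, the likelihood integrals such models require can be approximated by importance sampling; the sketch below is a minimal one-dimensional illustration in Python (the probit setup, parameter names, and the fixed Gaussian proposal are illustrative assumptions, not the paper's EIS procedure, which instead fits the proposal through a sequence of auxiliary regressions):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical setup: a scalar latent effect eta ~ N(0, 1) and a probit-style
# observation y, so the likelihood is L = E_eta[ p(y | eta) ].
def p_y_given_eta(eta, y=1.0, beta=0.5):
    # probit link: P(y = 1 | eta) = Phi(beta + eta)
    p1 = norm.cdf(beta + eta)
    return p1 if y == 1.0 else 1.0 - p1

# Plain importance sampling with a fixed Gaussian proposal q = N(mu, sigma^2);
# EIS would instead *fit* (mu, sigma) to minimize the variance of the weights.
mu, sigma, n = 1.0, 1.5, 100_000
eta = rng.normal(mu, sigma, size=n)

# weight = prior(eta) / proposal(eta)
w = norm.pdf(eta, 0.0, 1.0) / norm.pdf(eta, mu, sigma)
L_hat = np.mean(p_y_given_eta(eta) * w)
# closed form for this toy: Phi(beta / sqrt(2)) ~= 0.638
print(f"IS likelihood estimate: {L_hat:.4f}")
```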

Journal: IEEE Signal Processing Letters, 2021

Importance sampling (IS) is a powerful Monte Carlo (MC) methodology for approximating integrals, for instance in the context of Bayesian inference. In IS, samples are simulated from a so-called proposal distribution, and the choice of this proposal is key to achieving high performance. In adaptive IS (AIS) methods, the set of proposals is iteratively improved. AIS is relevant and timely, although many limitations remain yet to be overcome, e.g., ...
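
A minimal self-normalized IS sketch in Python, assuming an unnormalized Gaussian target and a heavier-tailed Student-t proposal (both illustrative choices, not from the paper; AIS methods would go on to adapt the proposal's parameters):

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(1)

# Toy target: a standard normal known only up to a normalizing constant.
def log_target(x):
    return -0.5 * x**2          # unnormalized log density of N(0, 1)

# Proposal: a heavier-tailed Student-t, a common safe choice in IS.
df = 3
x = student_t.rvs(df, size=50_000, random_state=rng)

# Self-normalized importance weights w_i proportional to p(x_i) / q(x_i).
log_w = log_target(x) - student_t.logpdf(x, df)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Estimate E_p[f(X)] for f(x) = x^2 (true value is 1 for N(0, 1)).
est = np.sum(w * x**2)
ess = 1.0 / np.sum(w**2)       # effective sample size, a standard IS diagnostic
print(f"estimate = {est:.3f}, ESS = {ess:.0f}")
```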

Journal: ACM Transactions on Graphics, 2022

As scenes become ever more complex and real-time applications embrace ray tracing, path sampling algorithms that maximize quality at low sample counts become vital. Recent resampling algorithms building on Talbot et al.'s [2005] resampled importance sampling (RIS) reuse paths spatiotemporally to render surprisingly complex light transport with a few samples per pixel. These reservoir-based spatiotemporal resamplers (ReSTIR) and their...
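
A rough sketch of the RIS step these methods build on, in a one-dimensional toy (the integrand, target, and uniform source distribution are illustrative assumptions; ReSTIR additionally chains and reuses such resampled samples across pixels and frames via reservoirs):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D stand-in for path sampling: integrate f, using M candidates drawn
# from a cheap source distribution q and resampled toward a target p_hat.
def f(x):     return np.exp(-x**2)           # integrand
def p_hat(x): return np.exp(-0.5 * x**2)     # unnormalized resampling target
def q_pdf(x): return np.full_like(x, 1.0 / 8.0)   # uniform on [-4, 4]

def ris_estimate(M=32):
    x = rng.uniform(-4.0, 4.0, size=M)       # candidates from q
    w = p_hat(x) / q_pdf(x)                  # resampling weights
    i = rng.choice(M, p=w / w.sum())         # pick one proportionally to w
    y = x[i]
    # Talbot-style RIS estimator: f(y) / p_hat(y) * mean(w)
    return f(y) / p_hat(y) * w.mean()

est = np.mean([ris_estimate() for _ in range(20_000)])
print(f"RIS estimate: {est:.4f}")            # truth: sqrt(pi) ~= 1.772
```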

Journal: Scandinavian Journal of Statistics, 2012

Journal: SSRN Electronic Journal, 2013

Journal: IEEE Signal Processing Letters, 2016

Journal: The Annals of Applied Probability, 2010

Journal: Statistics and Computing, 2021

Adaptive importance sampling is a class of techniques for finding good proposal distributions for importance sampling. Often the proposal distributions are standard probability distributions whose parameters are adapted based on the mismatch between the current proposal and the target distribution. In this work, we present an implicit adaptive importance sampling method that applies to complicated distributions which are not available in closed form. The method iteratively matches the moments of a set of Monte Carlo ...
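
A simplified sketch of adaptive importance sampling by weighted moment matching, assuming a target density that can be evaluated up to a constant (the paper's implicit setting drops even that assumption; the banana-shaped target and all names below are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(3)

# Assumed target: an unnormalized 2-D banana-shaped log density.
def log_target(x):
    a, b = x[:, 0], x[:, 1]
    return -0.5 * (a**2 / 4.0 + (b - a**2 / 4.0) ** 2)

# Start from a poor Gaussian proposal and adapt by weighted moment matching.
mean, cov, n = np.zeros(2), 25.0 * np.eye(2), 4_000
for it in range(15):
    x = rng.multivariate_normal(mean, cov, size=n)
    log_w = log_target(x) - mvn.logpdf(x, mean, cov)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Match the proposal's moments to the weighted sample moments.
    mean = w @ x
    diff = x - mean
    cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(2)

print("adapted mean:", np.round(mean, 3))
print("adapted cov:\n", np.round(cov, 3))
```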

Journal: CoRR, 2015
Nan Jiang, Lihong Li

We study the problem of evaluating a policy that is different from the one that generates the data. Such a problem, known as off-policy evaluation in reinforcement learning (RL), is encountered whenever one wants to estimate the value of a new solution from historical data before actually deploying it in the real system, a critical step in applying RL to most real-world applications...
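
The classic importance-sampling estimator for this problem reweights logged rewards by the ratio of target to behavior action probabilities. A minimal bandit-style sketch (the policies and reward model are made-up illustrations, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical two-armed bandit: a behavior policy logs data, and we evaluate
# a different target policy from that log via importance sampling.
true_means = np.array([0.3, 0.8])      # expected reward of each arm
pi_b = np.array([0.7, 0.3])            # behavior policy (generates the data)
pi_e = np.array([0.1, 0.9])            # target policy (to be evaluated)

# Log n interactions under the behavior policy.
n = 100_000
actions = rng.choice(2, size=n, p=pi_b)
rewards = rng.binomial(1, true_means[actions])

# IS estimator of the target policy's value:
# V(pi_e) ~ mean( pi_e(a)/pi_b(a) * r ), unbiased when pi_b covers pi_e.
rho = pi_e[actions] / pi_b[actions]
v_is = np.mean(rho * rewards)

true_value = pi_e @ true_means
print(f"IS estimate: {v_is:.3f}  (true value: {true_value:.3f})")
```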

[Chart: number of search results per year]