Parametric Learning and Monte Carlo Optimization

Authors

  • David H. Wolpert
  • Dev G. Rajnarayan
Abstract

This paper uncovers and explores the close relationship between Monte Carlo Optimization of a parametrized integral (MCO), Parametric machine-Learning (PL), and ‘blackbox’ or ‘oracle’-based optimization (BO). We make four contributions. First, we prove that MCO is mathematically identical to a broad class of PL problems. This identity potentially provides a new application domain for all broadly applicable PL techniques: MCO. Second, we introduce immediate sampling, a new version of the Probability Collectives (PC) algorithm for blackbox optimization. Immediate sampling transforms the original BO problem into an MCO problem. Accordingly, by combining these first two contributions, we can apply all PL techniques to BO. In our third contribution we validate this way of improving BO by demonstrating that cross-validation and bagging improve immediate sampling. Finally, conventional MC and MCO procedures ignore the relationship between the sample point locations and the associated values of the integrand; only the values of the integrand at those locations are considered. We demonstrate that one can exploit the sample location information using PL techniques, for example by forming a fit of the sample locations to the associated values of the integrand. This provides an additional way to apply PL techniques to improve MCO.
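The final contribution can be illustrated with a minimal sketch (this is an illustration of the general idea, not the paper's algorithm; the integrand, sample size, and cubic fit below are assumptions made for the example): a conventional MC estimate of an expectation uses only the integrand values, while a regression fit of the sample locations to those values can be exploited as a control variate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical integrand (an illustrative stand-in, not from the paper):
# we want E_p[f(x)] under p = N(0, 1), whose true value is 0.1
# (E[sin X] = 0 and E[X^2] = 1 for a standard normal).
def f(x):
    return np.sin(x) + 0.1 * x**2

n = 50
x = rng.standard_normal(n)   # sample locations drawn from p
y = f(x)                     # integrand values at those locations

# Conventional MC: uses only the values y, discarding the locations x.
mc_estimate = y.mean()

# PL-style estimate: fit the sample locations to the integrand values
# (a cubic least-squares fit here), then use the fit as a control
# variate, averaging it over a large fresh sample from p that needs
# no further evaluations of f.
g = np.poly1d(np.polyfit(x, y, deg=3))
x_big = rng.standard_normal(100_000)
fit_estimate = g(x_big).mean() + (y - g(x)).mean()

print(mc_estimate, fit_estimate)
```

Because the fit reuses information that conventional MC throws away, the corrected estimate typically has much lower variance than the plain sample mean at no extra cost in integrand evaluations.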

Similar Articles

Bayesian Optimization with Robust Bayesian Neural Networks

Bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions that is widely applied to tuning the hyperparameters of machine learning algorithms. Despite its successes, the prototypical Bayesian optimization approach – using Gaussian process models – does not scale well to either many hyperparameters or many function evaluations. Attacking this lack of sc...

An Adaptive Scheme for Real Function Optimization Acting as a Selection Operator

We propose an adaptive scheme for real function optimization whose dynamics is driven by selection. The method is parametric and relies explicitly on the Gaussian density seen as an infinite search population. We define two gradient flows acting on the density parameters, in the spirit of neural network learning rules, which maximize either the function expectation relatively to the density or ...

An efficient method for parametric yield gradient estimation

A novel method to improve the yield gradient estimation in parametric yield optimization is proposed. By introducing some deterministic information into the conventional Monte Carlo method and fully utilizing the samples, it is possible to obtain yield gradient estimation with significantly smaller variance. The additional computation is almost negligible. Examples are presented to indicate the...

A Revenue Maximizing Strategy Based on Bayesian Analysis of Demand Dynamics

For firms in an oligopoly service network, demand learning based dynamics pricing is an efficient way to maximize their revenues. This paper introduces a Bayesian method to learn demand behavior from the perspective of game-theoretic dynamics, where non-parametric techniques for nonlinear time series are incorporated, such that stringent parametric assumptions are removed. We determine the unkn...

Bias-Variance Techniques for Monte Carlo Optimization: Cross-validation for the CE Method

In this paper, we examine the CE method in the broad context of Monte Carlo Optimization (MCO) [Ermoliev and Norkin, 1998, Robert and Casella, 2004] and Parametric Learning (PL), a type of machine learning. A well-known overarching principle used to improve the performance of many PL algorithms is the bias-variance tradeoff [Wolpert, 1997]. This tradeoff has been used to improve PL algorithms r...
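The bias-variance tradeoff invoked above can be sketched with a generic example (a hypothetical setup, not the CE method itself): k-fold cross-validation scores polynomial fits of increasing degree, where low degrees underfit (high bias) and high degrees overfit the noise (high variance).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: noisy observations of an unknown function.
# The polynomial degree controls the bias-variance tradeoff.
x = rng.uniform(-2, 2, 60)
y = np.sin(x) + 0.2 * rng.standard_normal(60)

def cv_error(degree, k=5):
    """Mean held-out squared error of a k-fold cross-validated polyfit."""
    idx = rng.permutation(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coeffs = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coeffs, x[fold]) - y[fold]) ** 2))
    return np.mean(errs)

scores = {d: cv_error(d) for d in (1, 3, 9)}
best = min(scores, key=scores.get)
print(best, scores)
```

On data like this, the held-out error of the linear fit is dominated by bias, so cross-validation steers the choice toward a moderate degree; the same selection principle applies when choosing among parametric families in MCO.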

Journal:
  • CoRR

Volume: abs/0704.1274  Issue:

Pages: -

Publication date: 2007