Search results for: stochastic optimization
Number of results: 429,961
The recent surge of breakthroughs in machine learning and artificial intelligence has sparked renewed interest in large-scale stochastic optimization problems that are universally considered hard. One of the most widely used methods for solving such problems is distributed asynchronous stochastic gradient descent (DASGD), a family of algorithms that results from parallelizing stochastic gradient descent on distributed computing architectures, (possibly) asynchronously. ...
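A minimal sketch of the asynchronous update pattern behind DASGD-style methods, assuming a shared parameter vector updated by workers that compute gradients from possibly stale reads; the quadratic objective, noise level, and step size are illustrative, not from the paper:

```python
import threading
import numpy as np

# Illustrative objective: f(x) = 0.5 * ||x - target||^2 with noisy gradients.
target = np.array([1.0, -2.0, 3.0])
x = np.zeros(3)          # shared iterate, read without synchronization
lock = threading.Lock()  # guards only the write, mimicking an atomic update

def worker(steps, lr=0.05):
    for _ in range(steps):
        x_stale = x.copy()                                    # possibly stale read
        grad = (x_stale - target) + 0.1 * np.random.randn(3)  # stochastic gradient
        with lock:
            x[:] = x - lr * grad                              # apply to latest iterate

threads = [threading.Thread(target=worker, args=(200,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("final iterate:", x)  # close to target despite stale gradients
```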
Most global optimization problems are nonlinear and thus difficult to solve, and they become even more challenging when uncertainties are present in the objective functions and constraints. This paper provides a new two-stage hybrid search method, called Eagle Strategy, for stochastic optimization. This strategy intends to combine random search using Lévy walks with the firefly algorithm in an i...
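A hedged sketch of the two-stage pattern the abstract describes: Lévy-flight steps for global exploration followed by a local refinement stage. A plain gradient-free local search stands in for the firefly algorithm here, and the test function and all parameters are illustrative:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(42)

def objective(x):
    # Illustrative multimodal test function (not from the paper).
    return np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for heavy-tailed Levy-stable step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

best_x, best_f = rng.uniform(-5, 5, 2), np.inf
for _ in range(200):
    # Stage 1: global exploration via a Levy walk around the incumbent.
    cand = best_x + levy_step(2)
    # Stage 2: cheap local refinement (stand-in for the firefly stage).
    for _ in range(20):
        trial = cand + rng.normal(0, 0.1, 2)
        if objective(trial) < objective(cand):
            cand = trial
    if objective(cand) < best_f:
        best_x, best_f = cand, objective(cand)

print("best point:", best_x, "value:", best_f)
```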
This paper introduces the newly developed Stochastic Optimization Algorithm with Probability Vector (PSV). It is related to the Stochastic Learning Algorithm with Probability Vector for artificial neural networks. Both algorithms are inspired by the stochastic iterated function system (SIFS) used to generate statistically self-similar fractals. The PSV is a gradient method...
We consider a non-stationary variant of a sequential stochastic optimization problem, in which the underlying cost functions may change along the horizon. We propose a measure, termed variation budget, that controls the extent of said change, and study how restrictions on this budget impact achievable performance. We identify sharp conditions under which it is possible to achieve long-run avera...
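One standard way to exploit a variation budget in this line of work is to restart a stochastic gradient method on fixed-length batches, so that information accumulated under outdated costs is discarded. A minimal sketch with a synthetic drifting quadratic as the non-stationary cost; the batch length and step sizes are illustrative, not the theoretically tuned choices:

```python
import numpy as np

rng = np.random.default_rng(1)
T, dim = 3000, 2
batch_len = 300  # restart epoch length; in theory tied to the variation budget

def grad(x, t):
    # Drifting quadratic: the minimizer moves slowly along the horizon.
    target = np.array([np.sin(2 * np.pi * t / T), np.cos(2 * np.pi * t / T)])
    return (x - target) + 0.1 * rng.normal(size=dim)

x = np.zeros(dim)
for t in range(T):
    if t % batch_len == 0:
        x = np.zeros(dim)  # restart: forget history gathered under old costs
    step = 1.0 / np.sqrt((t % batch_len) + 1)
    x -= step * grad(x, t)

print("final iterate:", x)
```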
We study the problem of global maximization of a function f given a finite number of evaluations perturbed by noise. We consider a very weak assumption on the function, namely that it is locally smooth (in some precise sense) with respect to some semi-metric, around one of its global maxima. Compared to previous works on bandits in general spaces (Kleinberg et al., 2008; Bubeck et al., 2011a) o...
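A simplified 1-D sketch in the spirit of noisy optimistic optimization under a local-smoothness assumption: keep a partition of the domain into cells, score each by an optimistic upper bound (empirical mean, plus a noise-confidence term, plus a cell-size term standing in for the smoothness bonus), and repeatedly re-sample or split the highest-scoring cell. All constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
f = lambda x: -(x - 0.3) ** 2      # unknown function; global max at x = 0.3
noise = lambda: 0.05 * rng.normal()

# Each cell: (left, right, list of noisy evaluations at the midpoint).
cells = [(0.0, 1.0, [f(0.5) + noise()])]

def score(c):
    # Optimistic score: mean + noise confidence + size bonus (smoothness proxy).
    l, r, vals = c
    return np.mean(vals) + 1.0 / np.sqrt(len(vals)) + (r - l)

for _ in range(400):
    l, r, vals = max(cells, key=score)
    cells.remove((l, r, vals))
    if len(vals) < 10:
        vals.append(f((l + r) / 2) + noise())  # re-sample to shrink the noise term
        cells.append((l, r, vals))
    else:
        m = (l + r) / 2                        # split once confident enough
        cells.append((l, m, [f((l + m) / 2) + noise()]))
        cells.append((m, r, [f((m + r) / 2) + noise()]))

best = max(cells, key=lambda c: np.mean(c[2]))
print("estimated maximizer near:", (best[0] + best[1]) / 2)
```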
Consider the stochastic composition optimization problem where the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, namely the accelerated stochastic compositional proximal gradient (ASC-PG) method, which updates based on queries to the sampling oracle using two different timescales. The ASC-PG is the first proximal gradient method for t...
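A minimal sketch of the two-timescale structure the abstract describes for min_x E[f(E[g(x)])] + h(x): an auxiliary variable tracks the inner expectation on a fast timescale while the iterate takes proximal gradient steps on a slow one. The toy linear/quadratic composition, step-size schedules, and soft-thresholding prox for h = λ‖·‖₁ are illustrative, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 5
A = rng.normal(size=(dim, dim)) / np.sqrt(dim)
b = rng.normal(size=dim)
lam = 0.01

# Toy composition: g(x) = A x + noise (inner expectation), f(y) = 0.5||y - b||^2,
# regularizer h(x) = lam * ||x||_1 handled via its proximal operator.
def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(dim)
y = np.zeros(dim)                  # running estimate of the inner expectation E[g(x)]
for t in range(1, 3001):
    beta = 1.0 / t ** 0.6          # fast timescale for the inner tracker
    alpha = 0.5 / t ** 0.9         # slow timescale for the iterate
    g_sample = A @ x + 0.1 * rng.normal(size=dim)  # noisy inner-oracle query
    y = (1 - beta) * y + beta * g_sample           # track the inner expectation
    grad = A.T @ (y - b)           # chain rule: (dg/dx)^T f'(y), with sampled y
    x = prox_l1(x - alpha * grad, alpha * lam)

print("x approximates argmin 0.5||Ax - b||^2 + lam*||x||_1:", x)
```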
Simulated annealing (SA) and deterministic continuation are well-known generic approaches to global optimization. Deterministic continuation is computationally attractive but produces suboptimal solutions, whereas SA is asymptotically optimal but converges very slowly. In this paper, we introduce a new class of hybrid algorithms which combines the theoretical advantages of SA with the practical...
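A hedged sketch of one way such a hybrid can be organized: run a Metropolis-style SA loop on a smoothed version of the objective, and let the smoothing width shrink together with the temperature so the method "continues" from an easy surrogate toward the true rugged objective. The 1-D test function, Monte Carlo smoothing, and schedules are illustrative, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(11)

def f(x):
    # Illustrative rugged 1-D objective with many local minima.
    return x ** 2 + 0.5 * np.sin(25 * x)

def f_smoothed(x, sigma, n=20):
    # Monte Carlo Gaussian smoothing: the "continuation" family f_sigma.
    return np.mean(f(x + sigma * rng.normal(size=n)))

x = 3.0
for k in range(2000):
    temp = 1.0 / (1 + 0.01 * k)  # annealing schedule (illustrative)
    sigma = 0.5 * temp           # continuation: smoothing shrinks with temperature
    cand = x + 0.3 * rng.normal()
    delta = f_smoothed(cand, sigma) - f_smoothed(x, sigma)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        x = cand                 # Metropolis acceptance on the smoothed objective

print("final point:", x, "f(x):", f(x))
```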
In this paper we introduce new algorithms for optimizing noisy plants in which each experiment is very expensive. The algorithms build a global non-linear model of the expected output while using Bayesian linear regression analysis of locally weighted polynomial models. The local model answers queries about confidence, noise, gradients, and Hessians, and uses them to make automated de...
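A minimal sketch of the kind of locally weighted Bayesian linear model the abstract describes, assuming Gaussian kernel weights and a degree-one local polynomial (the 1-D plant, bandwidth, and priors are illustrative): the posterior mean yields a predicted output and a local gradient, and the posterior covariance yields the confidence estimate used to decide where the next expensive experiment pays off.

```python
import numpy as np

rng = np.random.default_rng(5)

# Expensive noisy "plant": unknown to the optimizer, queried sparingly.
plant = lambda x: np.sin(3 * x) + 0.05 * rng.normal()
X = rng.uniform(-1, 1, 15)               # past expensive experiments
y = np.array([plant(x) for x in X])

def local_model(xq, bandwidth=0.3, prior_prec=1.0, noise_var=0.05 ** 2):
    # Bayesian linear regression on features [1, x - xq], weighted by a Gaussian
    # kernel centered at the query point xq.
    w = np.exp(-0.5 * ((X - xq) / bandwidth) ** 2)
    Phi = np.stack([np.ones_like(X), X - xq], axis=1)
    A = prior_prec * np.eye(2) + (Phi * w[:, None]).T @ Phi / noise_var
    mean = np.linalg.solve(A, (Phi * w[:, None]).T @ y / noise_var)
    cov = np.linalg.inv(A)
    return mean[0], mean[1], np.sqrt(cov[0, 0])  # value, gradient, std at xq

val, grad, std = local_model(0.2)
print(f"predicted value {val:.3f}, local gradient {grad:.3f}, confidence +/-{std:.3f}")
```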