Search results for: modified subgradient method
Number of results: 1,831,354
An Inexact Bundle Method for Solving Large Structured Linear Matrix Inequalities
We present a subgradient method for minimizing non-smooth, non-Lipschitz convex optimization problems. The only structure assumed is that a strictly feasible point is known. We extend the work of Renegar [1] by taking a different perspective, leading to an algorithm which is conceptually more natural, has notably improved convergence rates, and for which the analysis is surprisingly simple. At ...
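As a point of reference for the snippet above, here is a minimal sketch of a plain subgradient iteration for a non-smooth convex objective; the piecewise-linear objective, the 1/√k step rule, and the data are illustrative assumptions, not the radially-transformed scheme analyzed in the paper.

```python
import numpy as np

# Minimal subgradient sketch for the non-smooth convex objective
# f(x) = max_i (a_i^T x + b_i). Objective, data, and step rule are assumptions.
def subgradient_method(A, b, x0, iters=1000):
    x = x0.copy()
    best_x, best_f = x.copy(), np.max(A @ x + b)
    for k in range(1, iters + 1):
        vals = A @ x + b
        i = np.argmax(vals)               # active affine piece
        g = A[i]                          # a subgradient of the max of affine functions
        x = x - g / (np.sqrt(k) * max(np.linalg.norm(g), 1e-12))
        f = np.max(A @ x + b)
        if f < best_f:                    # subgradient steps are not monotone, keep the best
            best_x, best_f = x.copy(), f
    return best_x, best_f

rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
print(subgradient_method(A, b, np.zeros(5))[1])
```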
Lagrangian relaxation (LR) based methods are commonly used to solve the thermal unit commitment (UC) problem, an important subject in power system engineering. The main drawback of this group of methods is the gap between the dual and primal solutions, which significantly degrades the quality of the feasible solutions. In this paper, a new approach, feasible modified...
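To make the Lagrangian-relaxation machinery concrete, the following toy sketch dualizes the coupling constraint of a small 0-1 problem and updates the multipliers by projected subgradient ascent; the data, step rule, and model are assumptions and do not represent the UC formulation or the feasibility modification the paper proposes.

```python
import numpy as np

# Toy Lagrangian relaxation: min c^T x  s.t.  A x <= d,  x in {0,1}^n.
# The coupling constraint is dualized with multipliers lam >= 0.
rng = np.random.default_rng(1)
n, m = 8, 3
c = rng.normal(size=n)
A = rng.uniform(0, 1, size=(m, n))
d = np.full(m, 2.0)

lam = np.zeros(m)
for k in range(1, 201):
    # The relaxed problem separates: set x_j = 1 iff its reduced cost is negative.
    reduced = c + A.T @ lam
    x = (reduced < 0).astype(float)
    dual_val = reduced @ x - lam @ d       # L(lam) = min_x c^T x + lam^T (A x - d)
    g = A @ x - d                          # a supergradient of the concave dual
    lam = np.maximum(0.0, lam + g / k)     # projected ascent step, lam >= 0
print(dual_val, lam)
```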
A Lagrangian-based heuristic is proposed for many-to-many assignment problems, taking into account capacity limits for tasks and agents. A modified Lagrangian bound studied earlier by the authors is presented, and a greedy heuristic is then applied to obtain a feasible Lagrangian-based solution. The latter is also used to speed up the subgradient scheme for solving the modified Lagrangian dual problem. ...
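A hypothetical illustration of the "greedy heuristic on top of a Lagrangian solution" idea: given reduced costs for agent-task pairs, pick the cheapest admissible pairs subject to capacities. The data and selection rule below are assumptions, not the authors' modified bound or repair procedure.

```python
import numpy as np

# Greedy repair sketch: build a feasible many-to-many assignment from Lagrangian
# reduced costs, respecting per-agent and per-task capacities.
def greedy_assign(reduced_cost, agent_cap, task_cap):
    m, n = reduced_cost.shape
    agent_left, task_left = agent_cap.copy(), task_cap.copy()
    pairs = sorted((reduced_cost[i, j], i, j) for i in range(m) for j in range(n))
    chosen = []
    for cost, i, j in pairs:               # cheapest admissible pairs first
        if agent_left[i] > 0 and task_left[j] > 0:
            chosen.append((i, j))
            agent_left[i] -= 1
            task_left[j] -= 1
    return chosen

rng = np.random.default_rng(2)
rc = rng.normal(size=(4, 6))
print(greedy_assign(rc, np.array([2, 2, 2, 2]), np.array([1, 1, 1, 1, 2, 2])))
```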
This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms applied to minimizing a sum of functions, where each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorit...
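A rough sketch of a cyclic incremental subgradient pass over a sum of component functions, with an optional additive error on each component subgradient to mimic the stochastic-error setting and a box projection to reflect the constrained case; the component functions and step rule are assumptions, not the algorithms analyzed in the paper.

```python
import numpy as np

# Cyclic incremental subgradient for min_x sum_i |a_i^T x - b_i| over a box,
# with optional noise added to each component subgradient.
def incremental_subgradient(A, b, x0, cycles=200, noise=0.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(1, cycles + 1):
        step = 1.0 / k
        for a_i, b_i in zip(A, b):                    # one pass = one step per component
            g = np.sign(a_i @ x - b_i) * a_i          # subgradient of |a_i^T x - b_i|
            g = g + noise * rng.normal(size=g.shape)  # stochastic error on the subgradient
            x = np.clip(x - step * g, -1.0, 1.0)      # projection onto the constraint box
    return x

rng = np.random.default_rng(3)
A, b = rng.normal(size=(10, 4)), rng.normal(size=10)
print(incremental_subgradient(A, b, np.zeros(4), noise=0.1))
```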
We study optimization for collaborative multi-agent planning in factored Markov decision processes (MDPs) with shared resource constraints. Following past research, we derive a distributed planning algorithm for this setting based on Lagrangian relaxation: we optimize a convex dual function which maps a vector of resource prices to a bound on the achievable utility. Since the dual function is n...
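The price-based decomposition can be sketched as follows: each agent independently maximizes its priced-out utility, and the residual resource capacity serves as a subgradient for updating the prices. The tiny plan lists below are hypothetical and stand in for the per-agent planning subproblems of the paper.

```python
import numpy as np

# Toy resource-pricing decomposition: two agents, each choosing one of three plans;
# the dual bound D(p) = p^T cap + sum_i max_plan (utility - p^T usage) is minimized
# over prices p >= 0 by projected subgradient steps.
utilities = [np.array([3.0, 5.0, 6.5]), np.array([2.0, 4.0, 7.0])]
usages = [np.array([[1.0], [2.0], [3.0]]), np.array([[1.0], [2.0], [4.0]])]
capacity = np.array([4.0])

prices = np.zeros(1)
for k in range(1, 101):
    total_use = np.zeros_like(capacity)
    dual_val = prices @ capacity
    for u, r in zip(utilities, usages):
        scores = u - r @ prices                # priced-out utility of each plan
        j = np.argmax(scores)                  # each agent plans independently
        dual_val += scores[j]
        total_use += r[j]
    g = capacity - total_use                   # a subgradient of the convex dual in prices
    prices = np.maximum(0.0, prices - g / k)   # projected step to tighten the bound
print(prices, dual_val)
```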
In this paper we present a subgradient method with non-monotone line search for the minimization of convex functions subject to simple constraints. Unlike standard prefixed step sizes, the new method selects step sizes in an adaptive way. Under mild conditions, asymptotic convergence results and iteration-complexity bounds are obtained. Preliminary numerical experiments illustrate the relative efficiency of the proposed method.
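For contrast with prefixed step sizes, here is a projected subgradient sketch with an adaptive Polyak-style step over box constraints; this is only one adaptive rule and is not the non-monotone line-search procedure proposed in the paper.

```python
import numpy as np

# Projected subgradient with a Polyak-style adaptive step for min ||Ax - b||_1
# over a box; f_target = 0 is a valid lower bound since the objective is a norm.
def adaptive_subgradient(A, b, lo, hi, x0, iters=500, f_target=0.0):
    x = x0.copy()
    best_x, best_f = x.copy(), np.sum(np.abs(A @ x - b))
    for _ in range(iters):
        r = A @ x - b
        f = np.sum(np.abs(r))
        if f < best_f:                                # record the best iterate so far
            best_x, best_f = x.copy(), f
        g = A.T @ np.sign(r)                          # subgradient of ||Ax - b||_1
        step = (f - f_target) / max(g @ g, 1e-12)     # adaptive Polyak step
        x = np.clip(x - step * g, lo, hi)             # projection onto the box constraints
    return best_x, best_f

rng = np.random.default_rng(4)
A, b = rng.normal(size=(15, 5)), rng.normal(size=15)
print(adaptive_subgradient(A, b, -1.0, 1.0, np.zeros(5)))
```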