Search results for: mollifier subgradient

Number of results: 1200

Journal: Bulletin of the Australian Mathematical Society, 1988

1999
Xing Zhao, Peter B. Luh

A major issue in Lagrangian relaxation for integer programming problems is maximizing the dual function, which is piecewise linear and consists of many facets. Available methods include the subgradient method, the bundle method, and the recently developed surrogate subgradient method. Each of the above methods, however, has its own limitations. Based on the insights obtained from these method...
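
As a rough illustration of the setting this abstract describes, the sketch below runs plain subgradient ascent on the Lagrangian dual of a tiny integer program; the toy data, the explicit enumeration of X, and the 1/k step size are illustrative assumptions, not the authors' method.

```python
# Subgradient ascent on a piecewise-linear Lagrangian dual (illustrative sketch).
# q(lam) = min_{x in X} c.x + lam.(b - A x) is concave and piecewise linear;
# b - A x*(lam) is a subgradient of q at lam.
import numpy as np

# Toy integer program: min c.x  s.t.  A x >= b,  x in X (X enumerated explicitly).
c = np.array([3.0, 2.0, 4.0])
A = np.array([[1.0, 2.0, 1.0]])
b = np.array([3.0])
X = [np.array(x) for x in
     [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
      (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]]

def dual_value_and_subgradient(lam):
    """Evaluate q(lam) by enumerating X; return its value and one subgradient."""
    best_val, best_x = None, None
    for x in X:
        val = c @ x + lam @ (b - A @ x)
        if best_val is None or val < best_val:
            best_val, best_x = val, x
    return best_val, b - A @ best_x

lam = np.zeros(1)
for k in range(1, 201):
    q, g = dual_value_and_subgradient(lam)
    lam = np.maximum(lam + (1.0 / k) * g, 0.0)   # ascent step, projected onto lam >= 0
print("approximate dual bound:", dual_value_and_subgradient(lam)[0])
```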

Journal: CoRR, 2017
Patrick R. Johnstone, Pierre Moulin

The purpose of this manuscript is to derive new convergence results for several subgradient methods for minimizing nonsmooth convex functions with Hölderian growth. The growth condition is satisfied in many applications and includes functions with quadratic growth and functions with weakly sharp minima as special cases. To this end there are four main contributions. First, for a constant and su...
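
For reference, the growth condition mentioned here can be written as follows in standard notation; the constant μ, the exponent p, and the set names are assumed for illustration, not quoted from the manuscript.

```latex
% Hölderian growth of order p >= 1 on a set S containing the solution set X*:
%   p = 2 recovers quadratic growth, p = 1 recovers weak sharp minima.
\[
  f(x) - f^{*} \;\ge\; \mu\,\mathrm{dist}(x, X^{*})^{p}
  \qquad \forall x \in S,
\]
\[
  p = 2 \;\text{(quadratic growth)}, \qquad p = 1 \;\text{(weak sharp minima)}.
\]
```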

1999
Luiz Antonio N. Lorena, Marcelo Gonçalves Narciso

The Traveling Salesman Problem (TSP) is a classical, intensively studied combinatorial optimization problem. Lagrangean relaxation was first applied to the TSP in 1970. Its limit approximates what is known today as the HK (Held and Karp) bound, a very good bound (less than 1% from optimal) for a large class of symmetric instances. It became a reference bound for new heurist...
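
A hedged sketch of the 1-tree Lagrangian dual behind the Held and Karp bound, in standard notation; the symbols below are assumed, not taken from this paper.

```latex
% Relax the degree-2 constraints of the symmetric TSP into the objective:
\[
  w(\lambda) \;=\; \min_{T \in \mathcal{T}_1}
     \Big( c(T) + \sum_{i} \lambda_i \,\big(d_i(T) - 2\big) \Big),
  \qquad
  \mathrm{HK} \;=\; \max_{\lambda \in \mathbb{R}^{n}} w(\lambda)
  \;\le\; \mathrm{TSP}^{*},
\]
% where \mathcal{T}_1 is the set of 1-trees, c(T) the total edge cost of T, and
% d_i(T) the degree of node i in T; the vector (d_i(T^*) - 2)_i is the
% subgradient used in the multiplier update.
```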

2016
Li Xiao, Junjie Bao, Xi Shi

In this paper, we present an improved subgradient algorithm for solving a general multi-agent convex optimization problem in a distributed way, where the agents are to jointly minimize a global objective function subject to a global inequality constraint, a global equality constraint and a global constraint set. The global objective function is a combination of local agent objective functions a...
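
A minimal sketch of the consensus-plus-subgradient step underlying this kind of distributed algorithm; the mixing matrix W, the quadratic local objectives, and the box constraint are illustrative assumptions, and the global inequality and equality constraints treated in the paper are omitted here.

```python
# Distributed (consensus-based) projected subgradient sketch for
# min_x sum_i f_i(x) over a common convex set.
import numpy as np

n_agents, dim = 4, 2
rng = np.random.default_rng(0)
targets = rng.normal(size=(n_agents, dim))         # f_i(x) = 0.5 * ||x - t_i||^2

W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic mixing weights
x = np.zeros((n_agents, dim))                      # one local iterate per agent

def project_box(z, lo=-1.0, hi=1.0):
    return np.clip(z, lo, hi)

for k in range(1, 501):
    step = 1.0 / k
    mixed = W @ x                                  # consensus (averaging) step
    grads = mixed - targets                        # subgradient of each local f_i
    x = project_box(mixed - step * grads)          # local subgradient step + projection

print("agent iterates (should agree):", x)
print("average of targets (optimum): ", project_box(targets.mean(axis=0)))
```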

Journal: Computational Optimization and Applications, 2008

Journal: IEEE Control Systems Letters, 2022

In this letter we consider a distributed stochastic optimization framework in which agents in a network aim to cooperatively learn an optimal network-wide policy. The goal is to compute local functions that minimize the expected value of a given cost, subject to individual constraints and average coupling constraints. In order to handle the challenges of this context, we resort to a Lagrangian duality approach that allows us to derive the assoc...
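
A generic way to write such a coupled problem and its Lagrangian, under assumed notation (N agents, local sets X_i, coupling functions g_i); this is a standard dual-decomposition template, not the letter's exact formulation.

```latex
% Local expected costs coupled only through an average constraint:
\[
  \min_{x_i \in X_i}\; \sum_{i=1}^{N} \mathbb{E}\big[f_i(x_i;\xi_i)\big]
  \quad \text{s.t.} \quad \frac{1}{N}\sum_{i=1}^{N} g_i(x_i) \le 0,
\]
\[
  \mathcal{L}(x,\lambda) \;=\; \sum_{i=1}^{N}
     \Big( \mathbb{E}\big[f_i(x_i;\xi_i)\big]
           + \tfrac{1}{N}\,\lambda^{\top} g_i(x_i) \Big),
  \qquad \lambda \ge 0,
\]
% so the dual function separates across agents and can be maximized with
% (stochastic) subgradient steps on the multiplier \lambda.
```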

Journal: SIAM Journal on Optimization, 2014
Angelia Nedic, Soomin Lee

This paper considers a stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate-averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is the choice of weights used to construct the averages. Through the use o...
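
A small sketch of stochastic mirror descent with weighted iterate averaging on the probability simplex; the entropic mirror map, the toy linear objective, and the 1/k weights are illustrative choices and may differ from the weight sequence analyzed in the paper.

```python
# Stochastic subgradient mirror descent on the simplex with weighted averaging.
import numpy as np

rng = np.random.default_rng(1)
dim = 5
c = rng.normal(size=dim)                 # minimize E[(c + noise) . x] over the simplex

x = np.full(dim, 1.0 / dim)              # start at the simplex center
x_avg, weight_sum = np.zeros(dim), 0.0

for k in range(1, 2001):
    g = c + 0.1 * rng.normal(size=dim)   # stochastic subgradient of the objective
    step = 0.5 / np.sqrt(k)
    x = x * np.exp(-step * g)            # entropic mirror-descent (multiplicative) update
    x /= x.sum()                         # normalize back onto the simplex
    w = 1.0 / k                          # per-iterate weight for the running average
    x_avg += w * x
    weight_sum += w

x_avg /= weight_sum
print("weighted average iterate:", np.round(x_avg, 3))
print("true optimum puts all mass on coordinate:", int(np.argmin(c)))
```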

[Chart: number of search results per year]