Search results for: mollifier subgradient

Number of results: 1200

2003
M. V. Solodov P. Tseng

We consider the proximal form of a bundle algorithm for minimizing a nonsmooth convex function, assuming that the function and subgradient values are evaluated approximately. We show how these approximations should be controlled in order to satisfy the desired optimality tolerance. For example, this is relevant in the context of Lagrangian relaxation, where obtaining exact information about the...
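
As a loose illustration only (not the authors' proximal bundle algorithm), the sketch below runs a plain subgradient method on an invented nonsmooth convex test function while forcing the simulated oracle error below a per-iteration tolerance, to convey the idea of controlling inexact function and subgradient values against a target accuracy.

```python
import numpy as np

def inexact_subgradient_descent(f_and_subgrad, x0, steps=200, tol=1e-3, rng=None):
    """Toy sketch: a plain subgradient iteration whose simulated oracle error
    shrinks with the iteration count, so the accuracy of the returned value
    is compatible with the target tolerance.  Not the paper's method."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), np.inf
    for k in range(1, steps + 1):
        eps_k = tol / k                               # allowed oracle error at iteration k
        f_val, g = f_and_subgrad(x)
        f_val += eps_k * rng.uniform(-1, 1)           # simulated inexact function value
        g = g + eps_k * rng.standard_normal(g.shape)  # simulated inexact subgradient
        if f_val < best_f:
            best_f, best_x = f_val, x.copy()
        x = x - (1.0 / k) * g                         # diminishing step size
    return best_x, best_f

# Invented test problem: f(x) = ||x||_1 + 0.5*||x - 1||^2
def f_and_subgrad(x):
    ones = np.ones_like(x)
    return np.abs(x).sum() + 0.5 * np.sum((x - ones) ** 2), np.sign(x) + (x - ones)

print(inexact_subgradient_descent(f_and_subgrad, x0=np.full(5, 3.0)))
```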

Journal: J. Optimization Theory and Applications 2016
Andrea Simonetto Hadi Jamali Rad

Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel subgradient calculations and a global subgradient update performed by a master node. In this paper, we...
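
To make the dual decomposition pattern described above concrete, here is a minimal sketch under assumed data (the quadratic costs, the coupling budget b, and the step size alpha are invented for illustration): each node solves its local subproblem in closed form, and a master node performs the subgradient update on the single dual variable.

```python
import numpy as np

# Toy instance: n nodes minimize sum_i 0.5*(x_i - c_i)^2 subject to the
# coupling constraint sum_i x_i = b.  The cost is separable, so each node can
# minimize its Lagrangian term in parallel, while a master node updates the
# dual variable with a subgradient (here gradient) step on the dual function.
c = np.array([4.0, 1.0, 3.0, 2.0])   # invented local data
b = 6.0                              # invented coupling budget

lam, alpha = 0.0, 0.2
for _ in range(200):
    # local, embarrassingly parallel subproblems: argmin_x 0.5*(x - c_i)^2 + lam*x
    x = c - lam
    # master update: a subgradient of the dual function is the constraint residual
    lam += alpha * (x.sum() - b)

print(x, x.sum())   # primal allocation; sum(x) should be close to b
```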

1993
R. A. Poliquin R. T. Rockafellar

Subgradient mappings associated with various convex and nonconvex functions are a vehicle for stating optimality conditions, and their proto-differentiability plays a role therefore in the sensitivity analysis of solutions to problems of optimization. Examples of special interest are the subgradients of the max of finitely many C functions, and the subgradients of the indicator of a set defined...
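
As a small numerical aside on the "max of finitely many smooth functions" example mentioned above, the following sketch (an assumed toy, not taken from the paper) lists the gradients of the pieces active at a point; any convex combination of them is a subgradient of the pointwise max.

```python
import numpy as np

# Illustrative sketch (invented example): for h(x) = max_i f_i(x) with smooth
# f_i, the convex subdifferential at x is the convex hull of the gradients of
# the functions that attain the maximum at x.
def max_subdifferential_vertices(x, fs, grads, tol=1e-9):
    vals = np.array([f(x) for f in fs])
    active = np.flatnonzero(vals >= vals.max() - tol)
    # every convex combination of these gradients is a valid subgradient
    return [grads[i](x) for i in active]

fs    = [lambda x: x[0] + x[1], lambda x: x[0] - x[1], lambda x: -2 * x[0]]
grads = [lambda x: np.array([1.0, 1.0]),
         lambda x: np.array([1.0, -1.0]),
         lambda x: np.array([-2.0, 0.0])]

# at the origin all three pieces are active, so three vertices are returned
print(max_subdifferential_vertices(np.array([0.0, 0.0]), fs, grads))
```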

2009
Adil Bagirov Marko M. Mäkelä Napsu Karmitsa

Most nonsmooth optimization methods may be divided into two main groups: subgradient methods and bundle methods. Usually, when developing new algorithms and testing them, the comparison is made between similar kinds of methods. In this report we test and compare both different bundle methods and different subgradient methods, as well as some methods which may be considered as hybrids of the...

B. Farhadinia

Recently, Gasimov and Yenilmez proposed an approach for solving two kinds of fuzzy linear programming (FLP) problems. In this approach, each FLP problem is first defuzzified into an equivalent crisp problem, which is non-linear and even non-convex. The crisp problem is then solved using the modified subgradient method. In this paper we take another look at the earlier defuzzifi...

2010
Ion Matei John S. Baras

We investigate collaborative optimization in a multi-agent setting, where the agents execute in a distributed manner using local information, while the communication topology used to exchange messages and information is modeled by a graph-valued random process that is independent across time instances. Specifically, we study the performance of the consensus-based multi-agent subgradient method, for t...
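
A hedged sketch of the consensus-based multi-agent subgradient idea described above, with invented local costs f_i(x) = |x - a_i| and a crude stand-in for the random communication graph (an average of random permutation matrices, which is doubly stochastic):

```python
import numpy as np

# Each agent i holds a local estimate x_i, mixes it with the other agents'
# estimates through a randomly drawn doubly-stochastic matrix (a stand-in for
# the random graph), then descends along a subgradient of its private cost
# f_i(x) = |x - a_i|.  The network objective sum_i |x - a_i| is minimized at
# a median of the a_i.
rng = np.random.default_rng(1)
a = np.array([1.0, 2.0, 6.0, 7.0, 9.0])     # invented private data of 5 agents
x = np.zeros_like(a)                        # local estimates

def random_doubly_stochastic(n, rng):
    # an average of a few random permutation matrices is doubly stochastic
    perms = [np.eye(n)[rng.permutation(n)] for _ in range(3)]
    return sum(perms) / len(perms)

for k in range(1, 501):
    W = random_doubly_stochastic(len(a), rng)     # random graph at time k
    x = W @ x                                     # consensus (mixing) step
    x = x - (1.0 / np.sqrt(k)) * np.sign(x - a)   # local subgradient step
print(x)   # estimates cluster near a median of the a_i
```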

2015
Kazuhiro Hishinuma

In this paper, we consider the problem of minimizing the sum of nondifferentiable, convex functions over a closed convex set in a real Hilbert space, which is simple in the sense that the projection onto it can be easily calculated. We present a parallel subgradient method for solving it, together with two convergence analyses of the method. One analysis shows that the parallel method with a small con...
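
The following is a generic parallel projected-subgradient sketch in the spirit of the abstract, not the paper's exact method: the feasible set is a Euclidean ball (whose projection is cheap), the component functions and anchor points are invented, and each component's subgradient could in principle be computed by a separate worker.

```python
import numpy as np

# Minimize sum_i ||x - a_i||_1 over a closed ball, a "simple" set with a cheap
# projection.  Each component f_i is handled by its own worker that returns a
# subgradient; the subgradients are averaged and a projected step is taken.
def project_ball(x, radius=1.0):
    norm = np.linalg.norm(x)
    return x if norm <= radius else radius * x / norm

anchors = [np.array([2.0, -1.0]), np.array([-0.5, 3.0]), np.array([1.0, 1.0])]
subgrads = [lambda x, a=a: np.sign(x - a) for a in anchors]   # invented components

x = np.zeros(2)
for k in range(1, 1000):
    g = np.mean([sg(x) for sg in subgrads], axis=0)  # parallelizable evaluations
    x = project_ball(x - (1.0 / k) * g)              # diminishing-step projected update
print(x)
```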

Journal: CoRR 2017
Thomas Holding Ioannis Lestas

In part I we considered the problem of convergence to a saddle point of a concave-convex function via gradient dynamics, and an exact characterization of their asymptotic behaviour was given. In part II we consider a general class of subgradient dynamics that provide a restriction to an arbitrary convex domain. We show that despite the nonlinear and nonsmooth character of these dynamics their ω-...
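
For intuition about saddle-point dynamics restricted to a convex domain, here is a simple Euler discretization with an invented strongly convex-concave function and a box domain; the paper itself studies the continuous-time subgradient dynamics, so this is only a rough illustration.

```python
import numpy as np

# Projected gradient saddle-point iteration for the invented function
# phi(x, y) = 0.5*x^2 + x*y - 0.5*y^2 on the box [-2, 2] x [-2, 2]:
# x descends in phi, y ascends, and both are projected back into the domain.
def proj(z, lo=-2.0, hi=2.0):
    return np.clip(z, lo, hi)

x, y, step = 1.5, -1.8, 0.1
for _ in range(500):
    gx = x + y                # d(phi)/dx
    gy = x - y                # d(phi)/dy
    x = proj(x - step * gx)   # descent in the convex variable
    y = proj(y + step * gy)   # ascent in the concave variable
print(x, y)                   # approaches the saddle point (0, 0)
```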

Journal: Oper. Res. Lett. 2000
Hanif D. Sherali Gyunghyun Choi Cihan H. Tuncbilek

This paper presents a new variable target value method (VTVM) that can be used in conjunction with pure or deflected subgradient strategies. The proposed procedure assumes no a priori knowledge regarding bounds on the optimal value. The target values are updated iteratively whenever necessary, depending on the information obtained in the process of the algorithm. Moreover, convergence of the seq...
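
A simplified illustration of the target-value idea (the paper's actual VTVM update rules differ): the step length is the Polyak-type quantity (f(x_k) - T_k)/||g_k||^2, and the target is revised from the best value observed so far rather than from a priori bounds on the optimum; the test function and update constants below are invented.

```python
import numpy as np

def target_value_subgradient(f, subgrad, x0, iters=300, gap0=1.0):
    """Sketch of a subgradient method with an adaptively revised target value."""
    x = np.asarray(x0, dtype=float)
    best = f(x)
    gap = gap0                       # current optimistic gap below the best value
    for _ in range(iters):
        target = best - gap          # variable target value
        g = subgrad(x)
        step = max(f(x) - target, 0.0) / (np.dot(g, g) + 1e-12)   # Polyak-type step
        x = x - step * g
        fx = f(x)
        if fx < best - 0.5 * gap:    # enough progress: be more ambitious
            best, gap = fx, 1.5 * gap
        else:                        # target too ambitious: shrink the gap
            best, gap = min(best, fx), 0.5 * gap
    return x, best

f = lambda x: np.abs(x).sum()        # invented nonsmooth convex test function
subgrad = lambda x: np.sign(x)
print(target_value_subgradient(f, subgrad, x0=[3.0, -2.0, 5.0]))
```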

2003
Júlíus Atlason Marina A. Epelman Shane G. Henderson

We study the problem of approximating a subgradient of a convex (or concave) discrete function that is evaluated via simulation. This problem arises, for instance, in optimization problems such as finding the minimal cost staff schedule in a call center subject to a service level constraint. There, subgradient information can be used to significantly reduce the search space. The problem of appr...
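
One naive way to estimate such a subgradient from simulation, shown below as a hedged sketch with an invented toy staffing model (the paper develops more careful estimators): evaluate a sample-average approximation at y and at y + e_i with common random numbers and take forward differences.

```python
import numpy as np

# Toy model (invented): expected shortfall plus a linear staffing cost, with
# the expectation replaced by a sample average over a fixed set of simulated
# demands (common random numbers).  For an integer-convex function, forward
# differences give subgradient-like information.
rng = np.random.default_rng(0)
demand = rng.poisson(lam=20, size=5000)          # common random numbers

def sim_cost(y):
    served = y.sum()
    return np.maximum(demand - served, 0).mean() + 0.5 * y.sum()

def finite_difference_subgradient(y):
    base = sim_cost(y)
    g = np.zeros(len(y))
    for i in range(len(y)):
        e = np.zeros(len(y), dtype=int)
        e[i] = 1
        g[i] = sim_cost(y + e) - base            # forward difference in direction e_i
    return g

y = np.array([5, 5, 5])
print(finite_difference_subgradient(y))
```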
