Search results for: mollifier subgradient

Number of results: 1200

Journal: :CoRR 2011
Sangkyun Lee Stephen J. Wright

Subgradient algorithms for training support vector machines have been quite successful for solving large-scale and online learning problems. However, they have been restricted to linear kernels and strongly convex formulations. This paper describes efficient subgradient approaches without such limitations. Our approaches make use of randomized low-dimensional approximations to nonlinear kernels,...
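The "randomized low-dimensional approximations to nonlinear kernels" mentioned in this abstract can be illustrated with random Fourier features (Rahimi and Recht), one standard construction of this kind; whether this is the paper's exact scheme is an assumption, and the function name below is illustrative:

```python
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, rng=None):
    """Map X to a low-dimensional feature space whose inner products
    approximate the Gaussian (RBF) kernel exp(-gamma * ||x - y||^2).
    Illustrative sketch of a randomized low-dimensional kernel
    approximation; not necessarily the construction used in the paper."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density (Gaussian).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the approximate kernel matrix to the exact RBF kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
Z = random_fourier_features(X, n_features=5000, gamma=0.5, rng=1)
K_approx = Z @ Z.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq)
err = np.abs(K_approx - K_exact).max()
```

With such features, a linear SVM trained on `Z` by a subgradient method approximates a nonlinear kernel machine while keeping per-iteration cost linear in the feature dimension.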

2010
Peter Wolenski Padmanabhan Sundar Robert Perlis Stephen Shipman

In this thesis, we have two distinct but related subjects: optimal control and nonlinear programming. In the first part of this thesis, we prove that the value function, propagated from initial or terminal costs and constraints in the form of a differential equation, satisfies a subgradient form of the Hamilton-Jacobi equation in which the Hamiltonian is measurable with respect to time. In the ...

Journal: :CoRR 2016
Kaihong Lu Gangshan Jing Long Wang

In this paper, a class of convex feasibility problems (CFPs) is studied for multi-agent systems through local interactions. The objective is to search for a feasible solution to the convex inequalities with some set constraints in a distributed manner. Distributed control algorithms, involving subgradient and projection steps, are proposed for continuous- and discrete-time systems, respectively. ...

1997
S Dempe S Vogel

If a strong sufficient optimality condition of second order together with the Mangasarian-Fromovitz and the constant rank constraint qualifications are satisfied for a parametric optimization problem, then a local optimal solution is strongly stable in the sense of Kojima and the corresponding optimal solution function is locally Lipschitz continuous. In the article the possibilities for the comput...

2010
Ofer Meshi David Sontag Tommi S. Jaakkola Amir Globerson

Many structured prediction tasks involve complex models where inference is computationally intractable, but where it can be well approximated using a linear programming relaxation. Previous approaches for learning for structured prediction (e.g., cutting-plane, subgradient methods, perceptron) repeatedly make predictions for some of the data points. These approaches are computationally demanding...

2016
Qingguo Lü Huaqing Li Li Xiao

This paper considers a distributed constrained optimization problem, where the objective function is the sum of local objective functions of distributed nodes in a network. The estimate of each agent is restricted to different convex sets. To solve this optimization problem which is not necessarily smooth, we study a novel distributed projected subgradient algorithm for multi-agent optimization...
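A distributed projected subgradient iteration of the kind this abstract describes can be sketched as follows (Nedić-Ozdaglar style): each agent averages its neighbors' estimates with a doubly stochastic weight matrix, takes a subgradient step on its local objective, then projects onto its local constraint set. The function names and the diminishing step size are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def distributed_projected_subgradient(subgrads, projects, A, x0, iters=2000):
    """Minimal sketch: subgrads[i] and projects[i] are agent i's local
    subgradient oracle and projection; A is a doubly stochastic
    consensus weight matrix; x holds one scalar estimate per agent."""
    x = np.array(x0, dtype=float)
    for k in range(iters):
        alpha = 1.0 / (k + 1)            # diminishing step size
        x = A @ x                        # consensus: neighbor averaging
        for i in range(len(x)):
            # Local subgradient step followed by local projection.
            x[i] = projects[i](x[i] - alpha * subgrads[i](x[i]))
    return x

# Example: 3 agents, f_i(x) = |x - c_i| with c = [0, 1, 2]; each agent's
# constraint set is the interval [0, 2]. The sum is minimized at x = 1.
c = [0.0, 1.0, 2.0]
subgrads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
projects = [lambda x: min(max(x, 0.0), 2.0)] * 3
A = np.full((3, 3), 1.0 / 3.0)           # doubly stochastic weights
x_final = distributed_projected_subgradient(subgrads, projects, A,
                                            x0=[0.0, 2.0, 0.5])
```

Under standard assumptions (connected network, doubly stochastic weights, diminishing steps), all agents' estimates reach consensus at a minimizer of the sum of the local objectives.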

2001
Alexandre Belloni Abilio Lucena

Two heuristics for the Linear Ordering Problem are investigated in this paper. Both heuristics are embedded within a Lagrangian Relaxation framework and are initiated with a construction phase. In this process, some Lagrangian (dual) information is used as an input to guide the construction of initial Linear Orderings. Solutions thus obtained are then submitted to local improvement in an overall pro...

2012
G. Johnson M. Ortiz S. Leyendecker

A subdifferentiable global contact detection algorithm, the Supporting Separating Hyperplane (SSH) algorithm, based on the signed distance between supporting hyperplanes of two convex sets is developed. It is shown that for polyhedral sets, the SSH algorithm may be evaluated as a linear program, and that this linear program is always feasible and always subdifferentiable with respect to the con...

2010
Meritxell Vinyals Marc Pujol Juan A. Rodríguez-Aguilar Jesús Cerquides

In this paper we investigate an approach to provide approximate, anytime algorithms for DCOPs that can provide quality guarantees. To this end, we propose the divide-and-coordinate (DaC) approach. This approach amounts to solving a DCOP by iterating (1) a divide stage in which agents divide the DCOP into a set of simpler local subproblems and solve them; and (2) a coordinate stage in which agen...

Journal: :Math. Program. 1999
Jean-Louis Goffin Krzysztof C. Kiwiel

We study the subgradient projection method for convex optimization with Brännlund's level control for estimating the optimal value. We establish global convergence in objective values without additional assumptions employed in the literature.
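The subgradient projection method with a level-based step can be sketched as below. A Polyak-type step is taken with respect to a target level slightly below the best value found so far; Brännlund's level control, as studied in the paper, adaptively updates this level, whereas the fixed shrinking gap here is an illustrative simplification:

```python
import numpy as np

def projected_subgradient_level(f, subgrad, project, x0, f_lev_gap=0.5,
                                iters=500):
    """Minimal sketch of subgradient projection with a level-based
    Polyak step. The level update rule is a simplification, not
    Brännlund's actual control scheme."""
    x = np.asarray(x0, dtype=float)
    f_best = f(x)
    for k in range(iters):
        g = subgrad(x)
        gn = np.dot(g, g)
        if gn == 0.0:
            break                                    # x is optimal
        level = f_best - f_lev_gap / np.sqrt(k + 1)  # target level
        step = max(f(x) - level, 0.0) / gn           # Polyak-type step
        x = project(x - step * g)                    # projection step
        f_best = min(f_best, f(x))
    return x, f_best

# Example: minimize f(x) = ||x - c||_1 over the box [0, 1]^3.
c = np.array([0.2, 1.5, -0.3])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)
project = lambda x: np.clip(x, 0.0, 1.0)
x_opt, f_opt = projected_subgradient_level(f, subgrad, project,
                                           x0=np.zeros(3))
# The minimizer over the box is clip(c, 0, 1) = [0.2, 1.0, 0.0],
# with optimal value |0| + |1 - 1.5| + |0 + 0.3| = 0.8.
```

The point of level control is that the Polyak step needs an estimate of the optimal value; the level acts as a surrogate for it and is tightened as better objective values are found.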
