Search results for: convex optimization

Number of results: 358281

Journal: :Operations Research 2014
Wolfram Wiesemann Daniel Kuhn Melvyn Sim

Distributionally robust optimization is a paradigm for decision-making under uncertainty where the uncertain problem data is governed by a probability distribution that is itself subject to uncertainty. The distribution is then assumed to belong to an ambiguity set comprising all distributions that are compatible with the decision maker’s prior information. In this paper, we propose...

2015
Daniel Hsu

1.1 Definitions. We say a set S ⊆ R^d is convex if for any two points x, x′ ∈ S, the line segment conv{x, x′} := {(1−α)x + αx′ : α ∈ [0, 1]} between x and x′ (also called the convex hull of {x, x′}) is contained in S. Overloading terms, we say a function f : S → R is convex if its epigraph epi(f) := {(x, t) ∈ S × R : f(x) ≤ t} is a convex set (in R^d × R). Proposition 1. A function f : S → R is convex ...
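The chord definition of convexity above, f((1−α)x + αx′) ≤ (1−α)f(x) + αf(x′), can be spot-checked numerically. The sketch below is illustrative only (it is not from the excerpted notes) and samples the inequality at random point pairs; a passing result is evidence of convexity on the interval, not a proof.

```python
import random

def is_convex_on_samples(f, lo, hi, trials=1000, tol=1e-9):
    """Sample the chord inequality f((1-a)x + a*x2) <= (1-a)f(x) + a*f(x2)
    for random x, x2 in [lo, hi] and a in [0, 1]. Returns False as soon as a
    violation (beyond a small tolerance) is found."""
    for _ in range(trials):
        x, x2 = random.uniform(lo, hi), random.uniform(lo, hi)
        a = random.random()
        if f((1 - a) * x + a * x2) > (1 - a) * f(x) + a * f(x2) + tol:
            return False
    return True

random.seed(0)  # reproducible sampling
print(is_convex_on_samples(lambda x: x * x, -5, 5))   # x^2 is convex
print(is_convex_on_samples(lambda x: x ** 3, -5, 5))  # x^3 is not convex on [-5, 5]
```

Checking the epigraph definition would amount to the same test, since (x, f(x)) points on a chord lie in epi(f) exactly when the chord inequality holds.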

Journal: :CoRR 2016
Mohammad Gheshlaghi Azar Eva L. Dyer Konrad P. Körding

Finding efficient and provable methods to solve non-convex optimization problems is an outstanding challenge in machine learning. A popular approach used to tackle non-convex problems is to use convex relaxation techniques to find a convex surrogate for the problem. Unfortunately, convex relaxations typically must be found on a problem-by-problem basis. Thus, providing a general-purpose strategy...

2010
D. Drusvyatskiy A. S. Lewis

We show that minimizers of convex functions subject to almost all linear perturbations are nondegenerate. An analogous result holds more generally, for lower-C2 functions.

2012
V. Jeyakumar G. Li S. Suthaharan

In this paper we study Support Vector Machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality. We show that the knowledge-based SVM, where prior knowled...

2016
Hideaki Iiduka

This paper considers the fixed point problem for a nonexpansive mapping on a real Hilbert space and proposes novel line search fixed point algorithms to accelerate the search. The termination conditions for the line search are based on the well-known Wolfe conditions that are used to ensure the convergence and stability of unconstrained optimization algorithms. The directions to search for fixe...
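As a rough illustration of the Wolfe conditions the abstract refers to (this is not code from the paper), the sketch below checks the sufficient-decrease (Armijo) and curvature inequalities for a candidate step size in the scalar case; the constants c1 and c2 are conventional defaults, not values taken from the paper.

```python
def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for step size alpha along direction d,
    for scalar f and grad:
      Armijo (sufficient decrease): f(x + a*d) <= f(x) + c1*a*grad(x)*d
      Curvature:                    grad(x + a*d)*d >= c2*grad(x)*d
    """
    fx, gx = f(x), grad(x)
    armijo = f(x + alpha * d) <= fx + c1 * alpha * gx * d
    curvature = grad(x + alpha * d) * d >= c2 * gx * d
    return armijo and curvature

# Quadratic f(x) = x^2 at x = 1, steepest-descent direction d = -f'(1) = -2.
f = lambda x: x * x
g = lambda x: 2 * x
print(satisfies_wolfe(f, g, x=1.0, d=-2.0, alpha=0.25))  # True: step lands at x = 0.5
print(satisfies_wolfe(f, g, x=1.0, d=-2.0, alpha=2.0))   # False: overshoots to x = -3
```

A line search accepts a step only when both inequalities hold, which is what gives descent methods their convergence and stability guarantees.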

2009
Jacob Mattingley Stephen Boyd

This chapter concerns the use of convex optimization in real-time embedded systems, in areas such as signal processing, automatic control, real-time estimation, real-time resource allocation and decision making, and fast automated trading. By 'embedded' we mean that the optimization algorithm is part of a larger, fully automated system that executes automatically with newly arriving data or c...

2009
Shai Shalev-Shwartz Ohad Shamir Nathan Srebro Karthik Sridharan

For supervised classification problems, it is well known that learnability is equivalent to uniform convergence of the empirical risks and thus to learnability by empirical minimization. Inspired by recent regret bounds for online convex optimization, we study stochastic convex optimization, and uncover a surprisingly different situation in the more general setting: although the stochastic conv...

2015
John C. Duchi Sorathan Chaturapruek Christopher Ré

We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from...
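For context, here is a minimal sketch of the baseline (synchronous) stochastic gradient procedure that the asynchronous result above is measured against; it is illustrative only, makes no attempt at asynchrony, and the 1-D least-squares objective and 1/√t step-size schedule are assumptions, not details from the paper.

```python
def sgd_least_squares(data, lr0=0.5, epochs=200):
    """Plain stochastic gradient descent on the 1-D least-squares objective
    F(w) = (1/2n) * sum_i (w*x_i - y_i)^2, with step size lr0 / sqrt(t)."""
    w, t = 0.0, 1
    for _ in range(epochs):
        for x, y in data:
            grad = (w * x - y) * x          # gradient of one sample's loss
            w -= (lr0 / t ** 0.5) * grad
            t += 1
    return w

# Noiseless data generated from y = 3x; SGD recovers w close to 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
print(sgd_least_squares(data))
```

The paper's point is that, under mild conditions, running updates like these asynchronously across workers degrades this convergence rate only by constant factors.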

2017
Boris Houska Moritz Diehl

This paper presents novel convergence results for the Augmented Lagrangian based Alternating Direction Inexact Newton method (ALADIN) in the context of distributed convex optimization. It is shown that ALADIN converges for a large class of convex optimization problems from any starting point to minimizers without needing line-search or other globalization routines. Under additional regularity a...

[Chart: number of search results per year]