Search results for: strongly convex function
Number of results: 1435527
Subgradients. A function $f : Q \subseteq \mathbb{R} \to \mathbb{R}$ defined on a convex domain $Q$ is said to be convex if every point $x \in Q$ has a non-empty subgradient set $\partial f(x) = \{g \in \mathbb{R} : f(y) \geq f(x) + g^\top (y - x),\ \forall y \in Q\}$. Geometrically, this means that a function is convex iff it is the maximum of all its supporting hyperplanes, i.e. $f(x) = \max_{x',\, g \in \partial f(x')} f(x') + g^\top (x - x')$. When there is a unique element in $\partial f(x)$ we call it the g...
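A quick numerical illustration of this definition (my own sketch, not from the paper): for the convex, non-differentiable function $f(x) = |x|$, any $g \in [-1, 1]$ is a subgradient at $x = 0$, and the subgradient inequality $f(y) \geq f(x) + g(y - x)$ can be verified on a grid.

```python
import numpy as np

def f(x):
    # f(x) = |x|: convex everywhere, non-differentiable at 0.
    return abs(x)

def subgradient(x):
    # sign(x) away from 0; at 0 any g in [-1, 1] works (0.3 is an arbitrary choice).
    return float(np.sign(x)) if x != 0 else 0.3

# Check the subgradient inequality f(y) >= f(x) + g*(y - x) on a grid of points.
for x in [-2.0, 0.0, 1.5]:
    g = subgradient(x)
    assert all(f(y) >= f(x) + g * (y - x) - 1e-12 for y in np.linspace(-5, 5, 101))
print("subgradient inequality verified at all tested points")
```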
Let $\Omega_X$ be a bounded, circular and strictly convex domain of a Banach space $X$ and $\mathcal{H}(\Omega_X)$ denote the space of all holomorphic functions defined on $\Omega_X$. The growth space $\mathcal{A}^{\omega}(\Omega_X)$ is the space of all $f \in \mathcal{H}(\Omega_X)$ for which $$|f(x)| \leqslant C\,\omega(r_{\Omega_X}(x)), \quad x \in \Omega_X,$$ for some constant $C > 0$, where $r_{\Omega_X}$ is the m...
We consider the problem of strongly convex online optimization in the presence of adversarial delays [1]; in a $T$-iteration online game, the feedback of the player's query at time $t$ is arbitrarily delayed by an adversary for $d_t$ rounds and delivered before the game ends, at iteration $t + d_t - 1$. Specifically, for the online-gradient-descent algorithm we show it has a simple regret bound of $O\big(\sum_{t=1}^{T} \log(1$...
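A schematic sketch of the setting (the toy quadratic losses and all names are my assumptions, not the paper's): the learner queries a gradient each round, but it is only applied once the adversary delivers it, $d_t - 1$ rounds later.

```python
import numpy as np

# Toy setup: f_t(x) = 0.5*lam*(x - c_t)^2, which is lam-strongly convex;
# the gradient queried at round t is delivered at round t + d_t - 1.
rng = np.random.default_rng(0)
T, lam = 200, 1.0
centers = rng.uniform(-1, 1, T)
delays = rng.integers(1, 10, T)            # d_t >= 1; d_t = 1 means no delay

x = 0.0
inbox = {t: [] for t in range(T)}          # gradients scheduled for delivery
for t in range(T):
    g = lam * (x - centers[t])             # gradient queried now ...
    inbox[min(t + delays[t] - 1, T - 1)].append(g)  # ... delivered d_t rounds later
    eta = 1.0 / (lam * (t + 1))            # standard strongly convex step size
    for grad in inbox[t]:                  # apply whatever feedback arrived
        x -= eta * grad
print(f"final iterate {x:.3f} vs. mean target {centers.mean():.3f}")
```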
In this paper, we address strongly convex programming for principal component pursuit with reduced linear measurements, which decomposes a superposition of a low-rank matrix and a sparse matrix from a small set of linear measurements. We first provide sufficient conditions under which the strongly convex models lead to exact low-rank and sparse matrix recovery; second, we also give suggesti...
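For orientation, a strongly convex variant of principal component pursuit is typically obtained by adding small squared Frobenius terms to the usual nuclear-norm-plus-$\ell_1$ objective (a standard formulation in this literature; the paper's exact model may differ):

$$\min_{L,\,S}\ \|L\|_* + \lambda \|S\|_1 + \frac{\tau}{2}\left(\|L\|_F^2 + \|S\|_F^2\right) \quad \text{s.t.} \quad \mathcal{A}(L + S) = b,$$

where $\mathcal{A}$ is the linear measurement operator, $b$ the reduced measurements, and $\tau > 0$ the strong convexity parameter.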
Strongly convex sets in Hilbert spaces are characterized by local properties. One quantity which is used for this purpose is a generalization of the modulus of convexity $\delta_\Omega$ of a set $\Omega$. We also show that $\lim_{\varepsilon \to 0} \delta_\Omega(\varepsilon)/\varepsilon^2$ exists whenever $\Omega$ is closed and convex.
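A worked example of why the $\varepsilon^2$ scaling is natural (my own illustration, using the common "depth of chord midpoints" reading of the modulus of convexity of a set, which may differ from the paper's generalization): if $\Omega$ is a Euclidean ball of radius $R$, the midpoint of a chord of length $\varepsilon$ lies at depth

$$R - \sqrt{R^2 - \varepsilon^2/4} = \frac{\varepsilon^2}{8R} + O(\varepsilon^4)$$

below the boundary, so $\lim_{\varepsilon \to 0} \delta_\Omega(\varepsilon)/\varepsilon^2 = 1/(8R)$.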
Under the strong convexity assumption, several recent works studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a non-smooth convex function. In this paper, under the quadratic growth condition, a strictly weaker condition than strong convexity, we derive a...
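A minimal sketch of a PIAG-style iteration (the toy problem $f_i(x) = \tfrac{1}{2}(a_i^\top x - b_i)^2$ with $h(x) = \lambda\|x\|_1$, the cyclic update order, and the step size are my assumptions for illustration):

```python
import numpy as np

def soft_threshold(v, thr):
    # Proximal operator of thr*||.||_1, the non-smooth term in this sketch.
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

rng = np.random.default_rng(1)
n, d, lam, gamma = 20, 5, 0.1, 0.01
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

x = np.zeros(d)
# Aggregated gradient: store the most recently evaluated gradient per component.
grads = np.array([A[i] * (A[i] @ x - b[i]) for i in range(n)])
for k in range(2000):
    i = k % n                              # refresh one component per iteration;
    grads[i] = A[i] * (A[i] @ x - b[i])    # the others remain stale (delayed)
    x = soft_threshold(x - gamma * grads.sum(axis=0), gamma * lam)  # prox step
```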
Conic optimization is the minimization of a differentiable convex objective function subject to conic constraints. We propose a novel primal–dual first-order method for conic optimization, named the proportional–integral projected gradient method (PIPG). PIPG ensures that both the primal–dual gap and the constraint violation converge to zero at the rate $O(1/k)$, where $k$ is the number of iterations. If the objective function is strongly convex, PIPG improves the convergence rate to $O(1/k^2)$. Fu...
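The sketch below is a generic primal–dual projected-gradient scheme in the spirit of PIPG, not the authors' exact iteration or step-size rule (the problem data, the box constraint, and the step sizes are all assumed for illustration):

```python
import numpy as np

# Assumed problem: min 0.5*||z - c||^2  s.t.  H z = g,  z in the box [-1, 1]^d.
rng = np.random.default_rng(2)
d, m = 6, 2
H, g, c = rng.normal(size=(m, d)), rng.normal(size=m), rng.normal(size=d)

z, w = np.zeros(d), np.zeros(m)           # primal and dual iterates
alpha = beta = 0.1                        # primal/dual step sizes (assumed)
for k in range(5000):
    grad = (z - c) + H.T @ w              # gradient of the Lagrangian in z
    z = np.clip(z - alpha * grad, -1, 1)  # projected primal gradient step
    w = w + beta * (H @ z - g)            # dual ascent on constraint violation
print("constraint violation:", np.linalg.norm(H @ z - g))
```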
A number of learning problems can be cast as an Online Convex Game: on each round, a learner makes a prediction x from a convex set, the environment plays a loss function f, and the learner's long-term goal is to minimize regret. Algorithms have been proposed by Zinkevich, when f is assumed to be convex, and Hazan et al., when f is assumed to be strongly convex, that have provably low regret. ...
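A compact sketch of the two regimes the abstract contrasts (the toy quadratic losses are my assumption): online gradient descent with $\eta_t \propto 1/\sqrt{t}$ for general convex losses versus $\eta_t = 1/(\lambda t)$ for $\lambda$-strongly convex losses, the latter achieving logarithmic regret.

```python
import numpy as np

# Toy lam-strongly convex losses: f_t(x) = 0.5*lam*(x - c_t)^2.
rng = np.random.default_rng(3)
T, lam = 10_000, 1.0
centers = rng.uniform(-1, 1, T)

def run_ogd(step_rule):
    x, regret = 0.0, 0.0
    x_star = centers.mean()               # fixed comparator minimizing total loss
    for t in range(1, T + 1):
        c = centers[t - 1]
        regret += 0.5 * lam * ((x - c) ** 2 - (x_star - c) ** 2)
        x -= step_rule(t) * lam * (x - c) # gradient step with schedule eta_t
    return regret

print("eta = 1/sqrt(t):  regret =", round(run_ogd(lambda t: 1 / np.sqrt(t)), 2))
print("eta = 1/(lam*t):  regret =", round(run_ogd(lambda t: 1 / (lam * t)), 2))
```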
In this manuscript, we introduce the concepts of $(m_1, m_2)$-logarithmically convex (AG-convex) functions and establish some Hermite–Hadamard type inequalities for these classes of functions.
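For context, the classical inequality these results generalize: for any convex $f : [a, b] \to \mathbb{R}$,

$$f\!\left(\frac{a+b}{2}\right) \;\leq\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\leq\; \frac{f(a)+f(b)}{2},$$

and Hermite–Hadamard type results replace ordinary convexity with weaker or structured notions such as $(m_1, m_2)$-logarithmic convexity.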