Search results for: nonconvex vector optimization

Number of results: 506,335

Journal: Machine Learning, 2021

The success of deep learning has led to a rising interest in the generalization properties of the stochastic gradient descent (SGD) method, and stability is one popular approach to studying them. Existing stability-based bounds do not incorporate the interplay between the optimization of SGD and the underlying data distribution, and hence cannot even capture the effect of randomized labels on generalization performance. In this paper, we establish generalization error bounds for SGD by char...
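
A minimal sketch of the plain SGD update whose stability such analyses study, assuming a least-squares loss on synthetic data (the data and constants here are hypothetical placeholders, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # hypothetical features
y = rng.normal(size=100)             # hypothetical labels
w = np.zeros(5)                      # model parameters
eta = 0.01                           # step size

for t in range(1000):
    i = rng.integers(len(X))         # draw one training example at random
    grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5*(x_i^T w - y_i)^2
    w -= eta * grad                  # SGD update: w <- w - eta * grad
```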

2016
Saeed Ghadimi

In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a strongly convex term. While including this strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its convergence rate for solving strongly convex problems, it d...
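
For context, a minimal sketch of one step of the classical conditional gradient (CG, or Frank-Wolfe) method that the CGT method builds on, here minimizing a hypothetical quadratic over the probability simplex so the linear subproblem has a closed-form vertex solution (a generic illustration, not the paper's CGT scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
b = rng.normal(size=20)
x = np.ones(10) / 10                 # start at the simplex barycenter

for k in range(200):
    grad = A.T @ (A @ x - b)         # gradient of 0.5*||Ax - b||^2
    s = np.zeros(10)
    s[np.argmin(grad)] = 1.0         # linear subproblem over the simplex: a vertex
    gamma = 2.0 / (k + 2)            # standard open-loop step size
    x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
```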

Journal: Computational Optimization and Applications, 2008

Journal: Journal of Mathematical Sciences, 2022

The paper deals with three numerical approaches that allow one to construct computational technologies for solving nonconvex optimization problems. We propose to use the developed algorithms based on modifications of the tunnel search algorithm, the Luus–Jaakola method, and an expert algorithm. The presented techniques are implemented within the framework of a software package used for solving problems of various classes, in particular...
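
As a reference point, a minimal sketch of the basic Luus–Jaakola random search that such modifications start from: sample a candidate in a box around the incumbent, keep improvements, and contract the box (the objective and constants are hypothetical, not taken from the paper):

```python
import numpy as np

def f(x):                            # hypothetical nonconvex test objective
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=2)       # incumbent point
d = np.full(2, 5.0)                  # half-widths of the search box
best = f(x)

for _ in range(500):
    cand = x + rng.uniform(-1, 1, size=2) * d
    if (val := f(cand)) < best:      # keep only improving candidates
        x, best = cand, val
    d *= 0.97                        # contract the search region
```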

Journal: CoRR, 2014
Gesualdo Scutari, Francisco Facchinei, Lorenzo Lampariello, Peiran Song

In this two-part paper, we propose a general algorithmic framework for the minimization of a nonconvex smooth function subject to nonconvex smooth constraints. The algorithm solves a sequence of (separable) strongly convex problems and maintains feasibility at each iteration. Convergence to a stationary solution of the original nonconvex optimization problem is established. Our framework is very general...
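
A minimal sketch of one simple instance of this framework, assuming a convex box feasible set so that feasibility is preserved trivially: each iteration minimizes a strongly convex surrogate (linearization of f plus a proximal term), whose solution is a projected gradient step, followed by a damped update (the paper's handling of nonconvex constraints is not reproduced here):

```python
import numpy as np

def grad_f(x):                       # gradient of hypothetical nonconvex f(x) = sum(sin(x_i))
    return np.cos(x)

lo, hi = -2.0, 2.0                   # convex box feasible set
x = np.zeros(5)                      # feasible starting point
tau = 1.0                            # proximal weight: makes the surrogate strongly convex

for _ in range(100):
    # surrogate subproblem: min_y grad_f(x)^T (y - x) + (tau/2)*||y - x||^2
    # over the box; its solution is a projected gradient step
    y = np.clip(x - grad_f(x) / tau, lo, hi)
    x = x + 0.5 * (y - x)            # damped update stays feasible (box is convex)
```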

2015
Martin Arjovsky

Nonconvex optimization problems, such as those arising in training deep neural networks, suffer from a phenomenon called saddle point proliferation: the loss function contains a vast number of high-error saddle points. Second-order methods have been tremendously successful and widely adopted in the convex optimization community, while their usefulness in deep learning remai...
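
A minimal illustration of the difficulty (not the paper's method): on the classic saddle f(x, y) = x^2 - y^2, a plain Newton step from any starting point jumps straight to the saddle at the origin rather than decreasing the objective:

```python
import numpy as np

def grad(p):                          # gradient of f(x, y) = x^2 - y^2
    x, y = p
    return np.array([2 * x, -2 * y])

H = np.array([[2.0, 0.0],             # Hessian of f: indefinite, so the
              [0.0, -2.0]])           # stationary point (0, 0) is a saddle

p = np.array([1.0, 1.0])
p = p - np.linalg.solve(H, grad(p))   # one Newton step
print(p)                              # -> [0. 0.]: lands exactly on the saddle
```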

2002
B. V. Babu, Rakesh Angira

The global optimization of mixed-integer non-linear programming (MINLP) problems is an active research area in many engineering fields. In this work, Differential Evolution (DE), a hybrid Evolutionary Computation method, is used for the optimization of nonconvex MINLP problems, and a comparison is made among algorithms based on a hybrid of Simplex & Simulated Annealing (MSIMPSA), Genetic Algor...
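
A minimal sketch of the DE/rand/1/bin scheme underlying such studies, applied to a hypothetical continuous test function; the paper's MINLP specifics (integer variables, constraint handling) are not reproduced:

```python
import numpy as np

def f(x):                                    # hypothetical nonconvex objective
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
NP, D, F, CR = 20, 5, 0.8, 0.9               # population, dim, scale, crossover rate
pop = rng.uniform(-5, 5, size=(NP, D))
fit = np.array([f(p) for p in pop])

for _ in range(200):
    for i in range(NP):
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])   # differential mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True             # guarantee one mutated gene
        trial = np.where(cross, mutant, pop[i])   # binomial crossover
        if (fv := f(trial)) < fit[i]:             # greedy one-to-one selection
            pop[i], fit[i] = trial, fv
```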

Journal: J. Global Optimization, 2008
Ramkumar Karuppiah, Ignacio E. Grossmann

In this work we present a global optimization algorithm for solving a class of large-scale nonconvex optimization models that have a decomposable structure. Such models are frequently encountered in two-stage stochastic programming problems, engineering design, and also in planning and scheduling. A generic formulation and reformulation of the decomposable models is given. We propose a speciali...
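
A minimal sketch of the scenario-decomposition idea behind such algorithms, under strong simplifying assumptions: the first-stage variable is duplicated per scenario, the nonanticipativity constraints are dualized, and the resulting one-dimensional nonconvex subproblems are solved independently by grid search (all problem data here are hypothetical):

```python
import numpy as np

grid = np.linspace(-2, 2, 401)               # candidate first-stage values
costs = [lambda x, c=c: (x - c)**2 + np.sin(3 * x)   # hypothetical nonconvex
         for c in (-1.0, 0.5, 1.2)]                  # scenario costs
S = len(costs)
lam = np.zeros(S)                            # multipliers for x_s = mean(x)

for k in range(100):
    # each dualized scenario subproblem min_x f_s(x) + lam_s * x
    # is solved independently (here by brute-force grid search)
    xs = np.array([grid[np.argmin(f(grid) + l * grid)]
                   for f, l in zip(costs, lam)])
    lam += (1.0 / (k + 1)) * (xs - xs.mean())  # subgradient step toward consensus
```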

2016
Yan Kaganovsky, Ikenna Odinaka, David E. Carlson, Lawrence Carin

We propose an optimization framework for nonconvex problems based on majorization-minimization that is particularly well-suited for parallel computing. It reduces the optimization of a high-dimensional nonconvex objective function to successive optimizations of locally tight and convex upper bounds which are additively separable into low-dimensional objectives. The original problem is then brok...
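
A minimal sketch of the majorization-minimization pattern described here, assuming an L-smooth objective: the separable quadratic upper bound at each iterate can be minimized coordinate-wise (each coordinate independently, hence parallelizable), which reduces to a gradient step with step size 1/L (f and L here are hypothetical):

```python
import numpy as np

def grad_f(x):                        # gradient of hypothetical f(x) = sum(sin(x_i)) + 0.05*||x||^2
    return np.cos(x) + 0.1 * x

L = 1.1                               # smoothness constant: |f''| <= 1.1 here
x = np.full(4, 2.0)

for _ in range(100):
    # majorizer at x: f(x) + grad^T (y - x) + (L/2)*||y - x||^2 is a tight,
    # convex, coordinate-separable upper bound; its minimizer is a gradient step
    x = x - grad_f(x) / L             # monotone: f(x) never increases
```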
