Search results for: nonconvex optimization

Number of results: 320,278

Journal: :CoRR 2014
Gesualdo Scutari, Francisco Facchinei, Lorenzo Lampariello, Peiran Song

In this two-part paper, we propose a general algorithmic framework for the minimization of a nonconvex smooth function subject to nonconvex smooth constraints. The algorithm solves a sequence of (separable) strongly convex problems and maintains feasibility at each iteration. Convergence to a stationary solution of the original nonconvex problem is established. Our framework is very general...
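The "sequence of strongly convex subproblems" idea can be illustrated with a minimal one-dimensional sketch (an unconstrained special case; the paper's framework also handles nonconvex constraints). The objective, its gradient, and the proximal weight tau below are illustrative choices, not taken from the paper:

```python
def sca_minimize(grad, x0, tau=25.0, iters=300):
    """Minimize a smooth nonconvex function by solving, at each iterate
    x_k, the strongly convex surrogate
        min_x  f(x_k) + grad(x_k) * (x - x_k) + (tau/2) * (x - x_k)**2,
    whose closed-form minimizer is x_k - grad(x_k) / tau."""
    x = x0
    for _ in range(iters):
        x = x - grad(x) / tau
    return x

# Illustrative nonconvex objective: f(x) = x**4 - 3*x**2 + x.
grad_f = lambda x: 4 * x**3 - 6 * x + 1
x_star = sca_minimize(grad_f, x0=-2.0)  # converges to a stationary point
```

The key property mirrored from the abstract is that each subproblem is strongly convex and solved exactly, and the iterates converge to a stationary point of the original nonconvex objective.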

2015
Martin Arjovsky

Nonconvex optimization problems such as the ones in training deep neural networks suffer from a phenomenon called saddle point proliferation. This means that there are a vast number of high-error saddle points present in the loss function. Second-order methods have been tremendously successful and widely adopted in the convex optimization community, while their usefulness in deep learning remai...
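The saddle-point pathology is easy to reproduce in two dimensions. The quadratic f(x, y) = x² − y² below is a toy stand-in (not from the abstract): gradient descent initialized on the saddle's stable manifold converges to the saddle itself, while an arbitrarily small perturbation escapes it.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, iters=500):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

# f(x, y) = x**2 - y**2 has a saddle point at the origin:
# curvature is positive along x and negative along y.
grad_f = lambda p: np.array([2 * p[0], -2 * p[1]])

# Initialized with y = 0 (the stable manifold), descent converges
# to the saddle; a 1e-6 perturbation in y is enough to escape it.
stuck = gradient_descent(grad_f, [1.0, 0.0])
escaped = gradient_descent(grad_f, [1.0, 1e-6])
```

On this toy function escaping means diverging (f is unbounded below); on a real loss surface, escaping a saddle lets the iterate reach lower basins, which is why curvature information matters.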

2002
B. V. Babu, Rakesh Angira

The global optimization of mixed integer non-linear programming (MINLP) problems is an active research area in many engineering fields. In this work, Differential Evolution (DE), a hybrid Evolutionary Computation method, is used for the optimization of nonconvex MINLP problems and a comparison is made among the algorithms based on hybrid of Simplex & Simulated Annealing (MSIMPSA), Genetic Algor...
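A minimal sketch of the classic DE/rand/1/bin scheme on a continuous multimodal test function (the Rastrigin function, chosen here for illustration; the paper's MINLP setting additionally requires handling integer variables and constraints):

```python
import math
import random

def differential_evolution(f, bounds, pop_size=30, F=0.5, CR=0.9,
                           gens=300, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomially cross over with the target, keep the better."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = list(pop[i])
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial[j] = min(max(v, lo), hi)
            if f(trial) <= f(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=f)

# Rastrigin function: global minimum 0 at the origin, many local minima.
def rastrigin(x):
    return sum(xi**2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

best = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 2)
```

The population-based differencing lets DE tunnel out of the local minima that trap single-trajectory descent, which is what makes it attractive for nonconvex problems.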

Journal: :J. Global Optimization 2008
Ramkumar Karuppiah, Ignacio E. Grossmann

In this work we present a global optimization algorithm for solving a class of large-scale nonconvex optimization models that have a decomposable structure. Such models are frequently encountered in two-stage stochastic programming problems, engineering design, and also in planning and scheduling. A generic formulation and reformulation of the decomposable models is given. We propose a speciali...

2016
Yan Kaganovsky, Ikenna Odinaka, David E. Carlson, Lawrence Carin

We propose an optimization framework for nonconvex problems based on majorization-minimization that is particularly well-suited for parallel computing. It reduces the optimization of a high-dimensional nonconvex objective function to successive optimizations of locally tight and convex upper bounds which are additively separable into low-dimensional objectives. The original problem is then brok...
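The separable-upper-bound idea can be sketched with a Lipschitz quadratic majorizer, which splits coordinate-wise and could therefore be handed to parallel workers (the objective below is an illustrative choice, not the paper's):

```python
import numpy as np

def mm_separable(grad, x0, L, iters=300):
    """Majorization-minimization with the separable surrogate
        f(x_k) + grad(x_k)^T (x - x_k) + (L/2) * ||x - x_k||**2,
    which decouples into n independent 1-D problems; each coordinate
    update x_j = x_k[j] - grad(x_k)[j] / L could run on its own worker."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = x - grad(x) / L  # closed-form minimizer of the surrogate
    return x

# Illustrative nonconvex objective: f(x) = ||x||**4 / 4 - ||x||**2,
# whose minimizers form the sphere ||x|| = sqrt(2).
grad_f = lambda x: (x @ x - 2.0) * x
x_star = mm_separable(grad_f, x0=[2.0, 0.5], L=12.0)
```

Because the surrogate is an upper bound that is tight at x_k, each (parallelizable) minimization is guaranteed not to increase the original objective.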

2016
Hongyi Zhang, Sashank J. Reddi, Suvrit Sra

We study optimization of finite sums of geodesically smooth functions on Riemannian manifolds. Although variance reduction techniques for optimizing finite sums have witnessed tremendous attention in recent years, existing work is limited to vector-space problems. We introduce Riemannian SVRG (RSVRG), a new variance-reduced Riemannian optimization method. We analyze RSVRG for both geodesica...
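For intuition, here is the vector-space special case the abstract generalizes — plain Euclidean SVRG; the Riemannian version replaces these linear updates with exponential maps and parallel transport. The finite-sum data below are an illustrative least-squares example, not from the paper.

```python
import numpy as np

def svrg(component_grads, x0, lr=0.05, epochs=30, seed=0):
    """SVRG: each epoch computes one full gradient at an anchor point,
    then takes cheap stochastic steps whose noise is reduced by the
    control variate grads[i](anchor) - full_grad."""
    rng = np.random.default_rng(seed)
    n = len(component_grads)
    x = np.array(x0, dtype=float)
    for _ in range(epochs):
        anchor = x.copy()
        full = sum(g(anchor) for g in component_grads) / n
        for _ in range(2 * n):  # inner loop length m = 2n
            i = rng.integers(n)
            v = component_grads[i](x) - component_grads[i](anchor) + full
            x = x - lr * v
    return x

# Illustrative finite sum: f_i(x) = 0.5 * (a_i*x - b_i)**2 with b_i = 2*a_i,
# so every component (and therefore the sum) is minimized at x = 2.
a = [1.0, 1.5, 2.0]
grads = [lambda x, ai=ai: ai * (ai * x - 2.0 * ai) for ai in a]
x_star = svrg(grads, x0=10.0)
```

The control variate vanishes as the iterate approaches the anchor, which is what restores a fast (linear, in the strongly convex case) rate without full gradients at every step.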

Journal: :IEEE transactions on image processing : a publication of the IEEE Signal Processing Society 2012
Xiaojun Chen, Michael K. Ng, Chao Zhang

Nonsmooth nonconvex regularization has remarkable advantages for the restoration of piecewise constant images. Constrained optimization can improve the image restoration using a priori information. In this paper, we study regularized nonsmooth nonconvex minimization with box constraints for image restoration. We present a computable positive constant θ for using nonconvex nonsmooth regularizat...
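The role of the box constraints can be sketched with generic projected gradient descent; this is a simplified stand-in, not the paper's algorithm, and a smooth double-well objective replaces their nonsmooth regularized one.

```python
import numpy as np

def projected_gradient(grad, x0, lo, hi, lr=0.02, iters=400):
    """Gradient step followed by Euclidean projection onto the box
    [lo, hi]^n; np.clip is exactly that projection."""
    x = np.clip(np.array(x0, dtype=float), lo, hi)
    for _ in range(iters):
        x = np.clip(x - lr * grad(x), lo, hi)
    return x

# Smooth nonconvex stand-in: f(x) = sum((x_i**2 - 1)**2), whose
# minimizers inside the box [0, 2]^2 sit at x_i = 1. Note that x_i = 0
# is also a stationary point, so the starting point matters.
grad_f = lambda x: 4.0 * x * (x * x - 1.0)
# The first coordinate of x0 lies outside the box and is projected in.
x_star = projected_gradient(grad_f, x0=[2.5, 0.2], lo=0.0, hi=2.0)
```

As in the abstract, the a priori information (here, pixel-range bounds in image restoration) enters purely through the projection, leaving the descent step unchanged.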

2007
Hong Xia YIN, Dong Lei DU

The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximation matrices of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is...
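In inverse-Hessian form, the scaling step reads τ_k = (s_kᵀy_k) / (y_kᵀH_k y_k), applied before the standard BFGS update (an Oren–Luenberger-type choice). A minimal sketch with Armijo backtracking, on an illustrative ill-conditioned quadratic:

```python
import numpy as np

def self_scaling_bfgs(f, grad, x0, iters=60):
    """Quasi-Newton iteration: rescale the inverse-Hessian approximation
    H by tau = (s^T y) / (y^T H y) before each BFGS update, damping
    large eigenvalues that would otherwise accumulate in H."""
    x = np.array(x0, dtype=float)
    n = x.size
    H = np.eye(n)
    g = grad(x)
    for _ in range(iters):
        p = -H @ g
        # Armijo backtracking line search.
        alpha = 1.0
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:  # curvature condition keeps H positive definite
            H *= sy / (y @ H @ y)  # self-scaling factor tau
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Ill-conditioned quadratic: f(x) = 0.5 * x^T diag(1, 100) x.
D = np.array([1.0, 100.0])
f = lambda x: 0.5 * np.sum(D * x * x)
grad_f = lambda x: D * x
x_star = self_scaling_bfgs(f, grad_f, [1.0, 1.0])
```

The scaling leaves the secant condition H_{k+1} y_k = s_k intact, so the usual quasi-Newton convergence machinery still applies.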

2015
Yuancheng Zhu, Rina Foygel Barber

Sparse Gaussian graphical models characterize sparse dependence relationships between random variables in a network. To estimate multiple related Gaussian graphical models on the same set of variables, we formulate a hierarchical model, which leads to an optimization problem with a nonconvex log-shift penalty function. We show that under mild conditions the optimization problem is convex despit...

Journal: :CoRR 2015
Shubao Zhang, Hui Qian, Zhihua Zhang

Sparse learning is an important topic in many areas such as machine learning, statistical estimation, and signal processing. Recently, there has been growing interest in structured sparse learning. In this paper we focus on the lq-analysis optimization problem for structured sparse learning (0 < q ≤ 1). Compared to previous work, we establish weaker conditions for exact recovery in noiseless ...

Chart: number of search results per year

Click on the chart to filter results by publication year.