Search results for: nonconvex problem
Number of results: 882,470
Based originally on the work of McCormick, a number of recent global optimization algorithms have relied on replacing an original nonconvex nonlinear program by convex or linear relaxations. Such linear relaxations can be generated automatically through an automatic differentiation process. This process decomposes the objective and constraints (if any) into convex and nonconvex unary and binary ope...
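As a rough illustration of the kind of linear relaxation such a decomposition produces, the sketch below builds the standard McCormick envelope for a single bilinear term w = x*y over a box; the function name and the box bounds are illustrative and not taken from the work cited above.

```python
# Standard McCormick envelope for w = x*y with x in [xL, xU], y in [yL, yU]:
# four linear inequalities that any feasible (x, y, w = x*y) must satisfy.
import numpy as np

def mccormick_rows(xL, xU, yL, yU):
    """Return (A, b) with A @ [x, y, w] <= b encoding the McCormick relaxation."""
    A = np.array([
        [ yL,  xL, -1.0],   # w >= yL*x + xL*y - xL*yL   (underestimator)
        [ yU,  xU, -1.0],   # w >= yU*x + xU*y - xU*yU   (underestimator)
        [-yL, -xU,  1.0],   # w <= yL*x + xU*y - xU*yL   (overestimator)
        [-yU, -xL,  1.0],   # w <= yU*x + xL*y - xL*yU   (overestimator)
    ])
    b = np.array([xL * yL, xU * yU, -xU * yL, -xL * yU])
    return A, b

# Sanity check: every point (x, y, x*y) in the box satisfies all four rows.
A, b = mccormick_rows(0.0, 2.0, -1.0, 3.0)
for x in np.linspace(0.0, 2.0, 5):
    for y in np.linspace(-1.0, 3.0, 5):
        assert np.all(A @ np.array([x, y, x * y]) <= b + 1e-9)
```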
We describe a general scheme for solving nonconvex optimization problems, where in each iteration the nonconvex feasible set is approximated by an inner convex approximation. The latter is defined using an upper bound on the nonconvex constraint functions. Under appropriate conditions on this upper-bounding convex function, monotone convergence to a KKT point is established. The scheme is app...
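A minimal one-dimensional sketch of the inner-convex-approximation idea, assuming a hand-picked objective and constraint that are not taken from the abstract above: the nonconvex constraint g(x) = 1 - x^2 <= 0 is replaced at each iteration by its linearization, which upper-bounds g because -x^2 is concave, so every subproblem's feasible set lies inside the true one and the objective decreases monotonically.

```python
def inner_convex_approximation(x0=2.0, iters=20):
    f = lambda x: (x - 0.2) ** 2           # convex objective
    x = x0                                  # feasible start: g(2) = 1 - 4 <= 0
    for _ in range(iters):
        # Convex (here: linear) upper bound of g at the current iterate x_k:
        #   g_hat(x) = 1 + x_k**2 - 2*x_k*x >= g(x)   =>   x >= (1 + x_k**2) / (2*x_k)
        lower = (1.0 + x * x) / (2.0 * x)
        # Minimize f over the inner approximation {x : g_hat(x) <= 0}.
        x = max(0.2, lower)
    return x

print(inner_convex_approximation())   # converges to x = 1, a KKT point of the original problem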
In this paper, we generalize the well-known Nesterov’s accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using ...
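For orientation, here is a bare-bones sketch of a Nesterov-style accelerated gradient loop applied to a smooth nonconvex function; the constant stepsize and momentum used here are illustrative assumptions, not the stepsize policy analyzed in the abstract above.

```python
# Accelerated gradient on the nonconvex double well f(x) = x**4/4 - x**2/2.
grad = lambda x: x ** 3 - x            # gradient of f

def accelerated_gradient(x0, alpha=0.1, beta=0.9, iters=200):
    x_prev, x = x0, x0
    for _ in range(iters):
        y = x + beta * (x - x_prev)     # extrapolation (momentum) step
        x_prev, x = x, y - alpha * grad(y)
    return x

print(accelerated_gradient(x0=2.0))     # settles near one of the two minima x = 1 or x = -1
```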
The discrete moment problem aims to find a worst-case distribution that satisfies a given set of moments. This paper studies the problem with additional shape constraints that guarantee the distribution is either log-concave (LC), has an increasing failure rate (IFR), or has an increasing generalized failure rate (IGFR). These classes are useful in practice, with applications in revenue management, reliability, and inventory control. The authors characterize st...
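As a point of reference, the sketch below solves the classical, shape-unconstrained discrete moment problem as a linear program with scipy.optimize.linprog: the worst-case tail probability given the first two moments. The LC/IFR/IGFR shape constraints that are the focus of the paper are not modeled here, and the support, moments, and threshold are made-up values.

```python
import numpy as np
from scipy.optimize import linprog

support = np.arange(0, 21)             # candidate support points 0..20 (assumed)
mean, second_moment = 5.0, 30.0        # assumed moment information
threshold = 10                         # event of interest: P(X >= 10)

# Maximize the tail probability = minimize its negative over distributions p.
c = -(support >= threshold).astype(float)
A_eq = np.vstack([np.ones_like(support, dtype=float), support, support ** 2])
b_eq = np.array([1.0, mean, second_moment])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print("worst-case P(X >= 10):", -res.fun)
```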
We describe a primal-dual application of the proximal point algorithm to nonconvex minimization problems. Motivated by the work of Spingarn and, more recently, by the work of Kaplan and Tichatschke on the proximal point methodology in nonconvex optimization, this paper discusses some local results in two directions. The first one concerns the application of the proximal method of multipliers t...
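A small sketch of the plain (primal) proximal point iteration on a nonconvex function, with each regularized subproblem solved numerically; the test function, the parameter lam, and the use of scipy for the inner solve are illustrative assumptions rather than details of the paper above.

```python
from scipy.optimize import minimize

f = lambda x: 0.25 * x ** 4 - x ** 2 + 0.3 * x   # nonconvex, coercive test function

def proximal_point(x0, lam=0.5, iters=30):
    x = x0
    for _ in range(iters):
        # x_{k+1} = argmin_z f(z) + (1/(2*lam)) * (z - x_k)^2, warm-started at x_k;
        # with lam = 0.5 the regularized subproblem is convex and easy to solve.
        prox_obj = lambda z, xk=x: f(z[0]) + (z[0] - xk) ** 2 / (2.0 * lam)
        x = float(minimize(prox_obj, x0=[x]).x[0])
    return x

print(proximal_point(x0=3.0))   # settles at a stationary point of f
```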
In this paper, the estimation problem for the sparse reduced rank regression (SRRR) model is considered. The SRRR model is widely used for dimension reduction and variable selection, with applications in signal processing, econometrics, etc. The problem is formulated as minimizing the least squares loss with a sparsity-inducing penalty, subject to an orthogonality constraint. Convex sparsity-inducing ...
In machine learning, nonconvex optimization problems with multiple local optima are often encountered. The Graduated Optimization Algorithm (GOA) is a popular heuristic method for obtaining global optima of nonconvex problems by progressively minimizing a series of increasingly accurate convex approximations to the nonconvex problem. Recently, an algorithm based on GOA, called GradOpt, was propos...
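A toy sketch of the graduated-optimization idea, assuming a one-dimensional test function whose Gaussian smoothing has a closed form; this shows the general coarse-to-fine scheme, not the GradOpt algorithm referenced above.

```python
import numpy as np

# f(x) = x**2 + 2*sin(3x); its Gaussian smoothing E[f(x + sigma*u)], u ~ N(0,1),
# has the closed form x**2 + sigma**2 + 2*sin(3x)*exp(-4.5*sigma**2).
def smoothed_grad(x, sigma):
    return 2.0 * x + 6.0 * np.cos(3.0 * x) * np.exp(-4.5 * sigma ** 2)

x = 0.0
for sigma in [2.0, 1.0, 0.5, 0.25, 0.0]:        # coarse-to-fine smoothing schedule
    for _ in range(200):                        # warm-started gradient descent on the surrogate
        x -= 0.05 * smoothed_grad(x, sigma)
print(x)   # ends close to the global minimizer of f, near x = -0.47
```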
Motivated by the recent developments of nonconvex penalties in sparsity modeling, we propose a nonconvex optimization model for handling the low-rank matrix recovery problem. Different from the famous robust principal component analysis (RPCA), we suggest recovering the low-rank and sparse matrices via a nonconvex loss function and a nonconvex penalty. The advantage of the nonconvex approach lies in...
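For context, the sketch below shows one simple nonconvex heuristic for low-rank-plus-sparse recovery: alternating a rank-r SVD projection with hard thresholding of the residual. It illustrates the general idea of replacing convex nuclear-norm/l1 surrogates with nonconvex steps; it is not the specific loss function or penalty proposed in the paper above, and the synthetic data are made up.

```python
import numpy as np

def lowrank_plus_sparse(M, rank, thresh, iters=50):
    S = np.zeros_like(M)
    for _ in range(iters):
        # Project M - S onto the (nonconvex) set of matrices of the given rank.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Hard-threshold the residual (nonconvex sparsity step, vs. soft/l1 thresholding).
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S

# Synthetic test: rank-2 matrix plus a few large sparse corruptions.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S_true = np.zeros((50, 50))
S_true.flat[rng.choice(2500, size=50, replace=False)] = 10.0
L_hat, S_hat = lowrank_plus_sparse(L_true + S_true, rank=2, thresh=5.0)
print(np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))   # relative recovery error
```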
This paper presents a canonical dual approach for minimizing the sum of a quadratic function and a ratio of nonconvex functions in R. By introducing a parameter, the problem is first equivalently reformulated as a nonconvex polynomial minimization with an elliptic constraint. It is proved that under certain conditions, the canonical dual is a concave maximization problem in R that exhibits no duality gap....