Search results for: Nonconvex optimization

Number of results: 320278

By p-power (or partial p-power) transformation, the Lagrangian function in a nonconvex optimization problem becomes locally convex. In this paper, we present a neural network based on an NCP function for solving the nonconvex optimization problem. An important feature of this neural network is the one-to-one correspondence between its equilibria and the KKT points of the nonconvex optimizatio...

Journal: Journal of Mathematical Modeling, 2015
Maziar Salahi, Arezo Zare

In this paper, we study the problem of minimizing the ratio of two quadratic functions subject to a quadratic constraint. First, we introduce a parametric equivalent of the problem. Then a bisection algorithm and a generalized Newton-based algorithm are presented to solve it. In order to solve the quadratically constrained quadratic minimization problem within both algorithms, a semidefinite optim...
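The parametric-plus-bisection scheme this abstract describes can be illustrated on a hypothetical one-dimensional instance (the example functions and the closed-form inner minimization are my own, not the paper's): the optimal ratio is the root of phi(lam) = min_x [f(x) - lam*g(x)], and phi is monotone, so bisection applies.

```python
import math

# Hypothetical instance: minimize f(x)/g(x) over the reals, with
#   f(x) = x^2 + 2x + 5   and   g(x) = x^2 + 1.
# The parametric equivalent is phi(lam) = min_x [ f(x) - lam * g(x) ];
# the optimal ratio lam* is the root of phi(lam) = 0.

def phi(lam):
    # f - lam*g = (1 - lam) x^2 + 2x + (5 - lam); for lam < 1 this is a
    # strictly convex quadratic whose minimum value is c - b^2 / (4a).
    a, b, c = 1.0 - lam, 2.0, 5.0 - lam
    return c - b * b / (4.0 * a)

def bisect_ratio(lo=0.0, hi=0.999, tol=1e-10):
    # phi is strictly decreasing in lam, so bisect for the sign change.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam_star = bisect_ratio()
print(lam_star)  # close to 3 - sqrt(5), about 0.76393
```

In the paper's constrained setting the inner minimization is a quadratically constrained quadratic problem rather than a closed-form quadratic, which is where the semidefinite machinery mentioned in the abstract comes in.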

Journal: Bulletin of the Iranian Mathematical Society
B. Soleimani, Institute of Mathematics, Martin-Luther-University Halle-Wittenberg, Theodor-Lieser Str. 5, 06120 Halle, Germany; C. Tammer, Institute of Mathematics, Martin-Luther-University Halle-Wittenberg, Theodor-Lieser Str. 5, 06120 Halle, Germany.

We consider nonconvex vector optimization problems with variable ordering structures in Banach spaces. Under certain boundedness and continuity properties we present necessary conditions for approximate solutions of these problems. Using a generic approach to subdifferentials we derive necessary conditions for approximate minimizers and approximately minimal solutions of vector optimizatio...

2006
David Y. Gao, Hanif D. Sherali, Mung Chiang

Nonlinear convex optimization has provided both an insightful modeling language and a powerful solution tool for the analysis and design of communication systems over the last decade. A main challenge today is nonconvex problems in these applications. This chapter presents an overview of some of the important nonconvex optimization problems in communication networks. Four typical applications...

Journal: Advances in Neural Information Processing Systems, 2015
Tuo Zhao, Zhaoran Wang, Han Liu

We study the estimation of low rank matrices via nonconvex optimization. Compared with convex relaxation, nonconvex optimization exhibits superior empirical performance for large scale instances of low rank matrix estimation. However, the understanding of its theoretical guarantees is limited. In this paper, we define the notion of projected oracle divergence, based on which we establish suffic...
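The nonconvexity in this line of work comes from the factored formulation: fitting M with a product U V^T makes the least-squares objective nonconvex in (U, V) jointly. A toy rank-1 sketch (the data and the alternating-minimization solver are my own illustration, not the paper's estimator):

```python
import numpy as np

# Toy instance: recover a rank-1 matrix M = u0 v0^T through the
# nonconvex factored objective  minimize_{u,v} || u v^T - M ||_F^2,
# using alternating minimization; each half-update is least squares.

M = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])  # exactly rank 1

v = np.ones(3)
for _ in range(20):
    u = M @ v / (v @ v)      # best u for fixed v
    v = M.T @ u / (u @ u)    # best v for fixed u

err = np.linalg.norm(np.outer(u, v) - M)
print(err)  # essentially zero for an exactly rank-1 target
```

For noisy or partially observed targets the same factored objective is used, and results like the one abstracted here explain when such nonconvex iterations still enjoy statistical guarantees.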

2013
Abram L. Friesen, Pedro Domingos

Difficult nonconvex optimization problems contain a combinatorial number of local optima, making them extremely challenging for modern solvers. We present a novel nonconvex optimization algorithm that explicitly finds and exploits local structure in the objective function in order to decompose it into subproblems, exponentially reducing the size of the search space. Our algorithm’s use of decom...

2008
Zhi-Bin Liu, Jong Kyu Kim, Nan-Jing Huang

We consider the weakly efficient solution for a class of nonconvex and nonsmooth vector optimization problems in Banach spaces. We show the equivalence between the nonconvex and nonsmooth vector optimization problem and the vector variational-like inequality involving set-valued mappings. We prove some existence results concerning the weakly efficient solution for the nonconvex and nonsmoot...

Journal: Math. Program., 2016
Saeed Ghadimi, Guanghui Lan

In this paper, we generalize the well-known Nesterov’s accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using ...

Journal: Math. Program., 2009
Warren Hare, Claudia A. Sagastizábal

The proximal point mapping is the basis of many optimization techniques for convex functions. By means of variational analysis, the concept of proximal mapping was recently extended to nonconvex functions that are prox-regular and prox-bounded. In such a setting, the proximal point mapping is locally Lipschitz continuous and its set of fixed points coincides with the critical points of the origi...
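A sketch of the proximal point iteration x_{k+1} = prox_{tf}(x_k) on a nonconvex function where the prox is still single-valued (the example function and the choice of t are my own; this illustrates the fixed-point property the abstract mentions, not the paper's algorithm):

```python
# f(x) = x^4 - 3 x^2 has f'' >= -6 everywhere, so for t < 1/6 the
# prox subproblem  min_y f(y) + (1/(2t)) (y - x)^2  is strongly convex
# and the proximal mapping is single-valued.

def fprime(y):
    return 4.0 * y**3 - 6.0 * y

def prox(x, t=0.1):
    # Solve the subproblem's stationarity equation f'(y) + (y - x)/t = 0
    # by Newton's method; its derivative 12 y^2 - 6 + 1/t is positive for t < 1/6.
    y = x
    for _ in range(50):
        h = fprime(y) + (y - x) / t
        hp = 12.0 * y**2 - 6.0 + 1.0 / t
        y -= h / hp
    return y

x = 2.5
for _ in range(100):
    x = prox(x)        # fixed points are exactly the critical points of f

print(x)  # near sqrt(1.5), a critical point of f
```

At a fixed point the correction term (y - x)/t vanishes, so f'(x) = 0: the fixed points of the prox coincide with the critical points, which is precisely the property the abstract highlights for prox-regular, prox-bounded functions.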
