Search results for: nonconvex vector optimization

Number of results: 506335

Journal: J. Global Optimization, 2008
Angelia Nedic, Asuman E. Ozdaglar

We provide a unifying geometric framework for the analysis of general classes of duality schemes and penalty methods for nonconvex constrained optimization problems. We present a separation result for nonconvex sets via general concave surfaces. We use this separation result to provide necessary and sufficient conditions for establishing strong duality between geometric primal and dual problems...
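For orientation (background, not the paper's own construction): classical Lagrangian duality separates the primal problem's image set with a hyperplane, and the geometric framework above replaces that hyperplane with a general concave surface. The classical baseline, in standard notation:

```latex
% Background sketch: classical (linear) Lagrangian duality, the baseline
% that the geometric framework above generalizes via concave surfaces.
\[
  q(\mu) = \inf_{x \in X} \bigl\{ f(x) + \mu^{\top} g(x) \bigr\},
  \qquad
  q^{*} = \sup_{\mu \ge 0} q(\mu)
  \;\le\;
  f^{*} = \inf_{x \in X,\; g(x) \le 0} f(x).
\]
```

Weak duality (q* ≤ f*) always holds; the separation result referenced above gives, roughly, conditions under which the nonconvex duality gap f* − q* closes once linear supports are replaced by concave ones.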

2009
Marko M. Mäkelä, Yury Nikulin, József Mezei

We consider a general multiobjective optimization problem with five basic optimality principles: efficiency, weak and proper Pareto optimality, strong efficiency, and lexicographic optimality. We generalize the concept of tradeoff directions, defining them as an optimal surface of appropriate cones. In convex optimization, the contingent cone can be used for all optimality principles except lex...
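For reference, the contingent cone mentioned here is the standard Bouligand tangent cone (a textbook definition, not specific to this paper):

```latex
% Contingent (Bouligand tangent) cone of a set S at a point \bar{x}.
\[
  T(S,\bar{x}) = \bigl\{\, d \;:\; \exists\, t_k \downarrow 0,\ d_k \to d
  \ \text{such that}\ \bar{x} + t_k d_k \in S \ \text{for all } k \,\bigr\}.
\]
```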

Journal: Comp. Opt. and Appl., 2010
Yixin Chen, Minmin Chen

Duality is an important notion in nonlinear programming (NLP): it provides a theoretical foundation for many optimization algorithms. Duality can be used to solve NLPs directly as well as to derive lower bounds on solution quality, which have wide use in high-level search techniques such as branch and bound. However, conventional duality theory has the fundamental limit that it le...
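A minimal sketch of how such dual lower bounds are consumed by branch and bound, assuming hypothetical problem-specific routines dual_bound, feasible_value, and branch (these names are illustrative, not from the paper):

```python
import heapq
import itertools

def branch_and_bound(root, dual_bound, feasible_value, branch, tol=1e-6):
    """Minimize: keep the best feasible value found so far and discard any
    node whose dual (lower) bound cannot beat it."""
    best = float("inf")
    tie = itertools.count()                      # tiebreaker so nodes never compare
    heap = [(dual_bound(root), next(tie), root)]
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best - tol:                  # weak duality: bound <= node optimum
            continue                             # node cannot improve the incumbent
        best = min(best, feasible_value(node))   # any feasible point -> upper bound
        for child in branch(node):               # split the node's feasible region
            b = dual_bound(child)
            if b < best - tol:
                heapq.heappush(heap, (b, next(tie), child))
    return best
```

The pruning step is exactly where a tighter dual bound pays off: the smaller the duality gap, the more of the search tree can be discarded without exploration.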

2017
Mingyi Hong, Davood Hajinezhad, Ming-Min Zhao

In this paper we consider nonconvex optimization and learning over a network of distributed nodes. We develop a Proximal Primal-Dual Algorithm (Prox-PDA), which enables the network nodes to distributedly and collectively compute the set of first-order stationary solutions at a globally sublinear rate of O(1/r), where r is the iteration counter. To the best of our knowledge, this i...
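Our reading of the template behind such proximal primal-dual methods, sketched in augmented-Lagrangian form (the paper's exact matrices and step rules may differ): with a consensus constraint Ax = 0 over the network, a penalty β > 0, and a proximal weighting B,

```latex
% Sketch of a proximal primal-dual (augmented-Lagrangian) iteration for
% distributed consensus, min f(x) s.t. Ax = 0; details are our assumption.
\[
\begin{aligned}
  x^{r+1} &= \operatorname*{arg\,min}_{x}\;
     f(x) + \langle \lambda^{r}, Ax \rangle
     + \tfrac{\beta}{2}\lVert Ax \rVert^{2}
     + \tfrac{\beta}{2}\lVert x - x^{r} \rVert_{B^{\top}B}^{2}, \\
  \lambda^{r+1} &= \lambda^{r} + \beta\, A x^{r+1}.
\end{aligned}
\]
```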

2016
Chunshan Xue, Hongwei Jiao, Jingben Yin, Yongqiang Chen

This paper presents a novel range division and contraction approach for globally solving nonconvex quadratic programs with quadratic constraints. By constructing new underestimating linear relaxation functions, we transform the initial nonconvex quadratic program into a linear programming relaxation. By employing a branch and bound scheme with a range contraction approach, we des...
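One standard way to build the kind of linear relaxation described here (a textbook construction; the paper's underestimators may differ): over a box [l, u], each quadratic term admits linear bounds

```latex
% Linear under- and overestimators of x^2 over an interval [l, u]:
% tangent lines minorize the convex function, the secant majorizes it.
\[
  2cx - c^{2} \;\le\; x^{2} \;\le\; (l+u)\,x - lu
  \qquad \text{for all } x \in [l,u],\ c \in [l,u].
\]
```

Tightening [l, u] (the "range contraction" above) shrinks the gap between the secant and the tangents, so the linear programming relaxation approaches the quadratic program on small boxes.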

2006
Warren Hare, Claudia Sagastizábal

The major focus of this work is to compare, via numerical testing, several methods for computing the proximal point of a nonconvex function. To do this, we introduce two techniques for randomly generating challenging nonconvex test functions, as well as two very specific test functions that should be of future interest for nonconvex optimization benchmarking. We then compare the effectiveness of ...
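For reference, the proximal point being computed is defined by the standard Moreau proximal mapping; for nonconvex f the minimizer may be nonunique unless the prox parameter is small relative to the curvature of f:

```latex
% Proximal point of f at x with parameter \lambda > 0.
\[
  \operatorname{prox}_{\lambda f}(x)
  = \operatorname*{arg\,min}_{y}
    \Bigl\{ f(y) + \tfrac{1}{2\lambda}\,\lVert y - x \rVert^{2} \Bigr\}.
\]
```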

Journal: CoRR, 2017
Tsz Kit Lau, Yuan Yao

Nonconvex optimization problems arise in many research fields and have attracted considerable attention in signal processing, statistics, and machine learning. In this work, we explore the accelerated proximal gradient method and some of its variants, which have recently been shown to converge in the nonconvex setting. We show that a novel variant proposed here, which exploits adaptive momentum and block ...
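A minimal sketch of the accelerated proximal gradient template the abstract builds on, for min f(x) + g(x) with f L-smooth and g prox-friendly. This is the generic FISTA-style scheme, not the paper's adaptive-momentum variant; grad_f and prox_g are assumed callables:

```python
import numpy as np

def apg(x0, grad_f, prox_g, L, iters=500):
    """FISTA-style accelerated proximal gradient for min f(x) + g(x)."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)     # proximal gradient step at y
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

Here prox_g would be, e.g., soft-thresholding when g is an l1 penalty; the nonconvex analyses referenced above typically add safeguards (monitoring the objective) before accepting the extrapolated point.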

Journal: CoRR, 2015
Abram L. Friesen, Pedro M. Domingos

Continuous optimization is an important problem in many areas of AI, including vision, robotics, probabilistic inference, and machine learning. Unfortunately, most real-world optimization problems are nonconvex, causing standard convex techniques to find only local optima, even with extensions like random restarts and simulated annealing. We observe that, in many cases, the local modes of the o...

Journal: Journal of Machine Learning Research (JMLR), 2016
Rina Foygel Barber, Emil Y. Sidky

Many optimization problems arising in high-dimensional statistics decompose naturally into a sum of several terms, where the individual terms are relatively simple but the composite objective function can only be optimized with iterative algorithms. In this paper, we are interested in optimization problems of the form F(Kx) + G(x), where K is a fixed linear transformation, while F and G are fun...
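For orientation, the classical primal-dual splitting for this problem class (the convex-case Chambolle–Pock iteration; the paper studies settings where the convexity assumptions on F and G are relaxed) alternates

```latex
% Primal-dual splitting for min_x F(Kx) + G(x), with step sizes
% \tau, \sigma > 0 satisfying \tau \sigma \lVert K \rVert^{2} < 1,
% and F^{*} the convex conjugate of F.
\[
\begin{aligned}
  x^{k+1} &= \operatorname{prox}_{\tau G}\bigl(x^{k} - \tau K^{\top} y^{k}\bigr), \\
  y^{k+1} &= \operatorname{prox}_{\sigma F^{*}}\bigl(y^{k} + \sigma K (2x^{k+1} - x^{k})\bigr).
\end{aligned}
\]
```

The appeal of this form is that K enters only through matrix-vector products, while F and G are handled separately through their proximal mappings.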

2017
Cong Fang, Zhouchen Lin

Asynchronous parallel algorithms have received much attention in optimization due to the demands of modern large-scale problems. However, most asynchronous algorithms focus on convex problems, and analysis for nonconvex problems is lacking. For the Asynchronous Stochastic Gradient Descent (ASGD) algorithm, the best result from (Lian et al., 2015) can only achieve an ...
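The update model usually analyzed in this line of work (our summary of the standard setup, with a bounded-staleness assumption): each step applies a stochastic gradient evaluated at a delayed iterate,

```latex
% Asynchronous SGD with stale reads: step size \gamma, sampled index i_k,
% and staleness \tau_k bounded by \tau_{\max}.
\[
  x^{k+1} = x^{k} - \gamma\, \nabla f_{i_k}\!\bigl(x^{\,k - \tau_k}\bigr),
  \qquad 0 \le \tau_k \le \tau_{\max}.
\]
```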
