Search results for: saddle point problem

Number of results: 1,332,511

Journal: SIAM J. Control and Optimization, 2017
Ashish Cherukuri, Bahman Gharesifard, Jorge Cortés

This paper considers continuously differentiable functions of two vector variables that have (possibly a continuum of) min-max saddle points. We study the asymptotic convergence properties of the associated saddle-point dynamics (gradient descent in the first variable and gradient ascent in the second one). We identify a suite of complementary conditions under which the set of saddle points is a...
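The saddle-point dynamics this abstract describes can be sketched with a simple Euler discretization: descend the gradient in the first variable and ascend it in the second. This is a minimal illustration, not the paper's continuous-time analysis; the test function f(x, y) = x² − y² and the step size are illustrative choices.

```python
def saddle_point_dynamics(grad_x, grad_y, x0, y0, step=0.05, iters=2000):
    """Discretized saddle-point dynamics: gradient descent in x,
    gradient ascent in y (a simple forward-Euler sketch)."""
    x, y = float(x0), float(y0)
    for _ in range(iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x -= step * gx  # descend in the first variable
        y += step * gy  # ascend in the second variable
    return x, y

# f(x, y) = x**2 - y**2 has its unique min-max saddle point at (0, 0).
x_star, y_star = saddle_point_dynamics(
    grad_x=lambda x, y: 2 * x,
    grad_y=lambda x, y: -2 * y,
    x0=1.0, y0=-1.0)
```

For this convex-concave test function the iterates contract toward the origin; for general non-convex functions, convergence is exactly the kind of question the paper studies.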

Journal: J. Computational and Applied Mathematics, 2010
Alejandro Balbás, Beatriz Balbás, Raquel Balbás

The minimization of risk functions is becoming a very important topic due to its interesting applications in Mathematical Finance and Actuarial Mathematics. This paper addresses this issue in a general framework. Many types of risk function may be involved. A general representation theorem of risk functions is used in order to transform the initial optimization problem into an equivalent one tha...

Journal: Math. Program., 2016
Dan Garber, Elad Hazan

We consider semidefinite optimization in a saddle point formulation where the primal solution is in the spectrahedron and the dual solution is a distribution over affine functions. We present an approximation algorithm for this problem that runs in sublinear time in the size of the data. To the best of our knowledge, this is the first algorithm to achieve this. Our algorithm is also guaranteed ...

Journal: Pattern Recognition, 2014
Aditya Tayal, Thomas F. Coleman, Yuying Li

Embedding feature selection in nonlinear SVMs leads to a challenging non-convex minimization problem, which can be prone to suboptimal solutions. This paper develops an effective algorithm to directly solve the embedded feature selection primal problem. We use a trust-region method, which is better suited for non-convex optimization compared to line-search methods, and guarantees convergence to...

1994
Jie Sun, Jishan Zhu, Gongyun Zhao

An interior path-following algorithm is proposed for solving the nonlinear saddle point problem min_x max_y c^T x + φ(x) + b^T y − ψ(y) − y^T Ax subject to (x, y) ∈ X × Y ⊆ R^n × R^m, where φ(x) and ψ(y) are smooth convex functions and X and Y are boxes (hyper-rectangles). This problem is closely related to models in stochastic programming and optimal control studied by Rockafellar and Wets. Existence condi...
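A box-constrained saddle problem of the form min_x max_y c^T x + φ(x) + b^T y − ψ(y) − y^T Ax can be approached with a simple projected gradient descent-ascent sketch. This is not the paper's interior path-following method; the quadratic choices φ(x) = ||x||²/2 and ψ(y) = ||y||²/2, the random data, and the step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((m, n))
c, b = rng.standard_normal(n), rng.standard_normal(m)
lo_x, hi_x = -1.0, 1.0  # box X = [-1, 1]^n
lo_y, hi_y = -1.0, 1.0  # box Y = [-1, 1]^m

x, y = np.zeros(n), np.zeros(m)
step = 0.05
for _ in range(5000):
    gx = c + x - A.T @ y  # gradient of the saddle function in x
    gy = b - y - A @ x    # gradient of the saddle function in y
    x = np.clip(x - step * gx, lo_x, hi_x)  # descent step, projected onto X
    y = np.clip(y + step * gy, lo_y, hi_y)  # ascent step, projected onto Y
```

With strongly convex φ and ψ, the projected iterates contract to the unique constrained saddle point; interior-point methods like the one in the paper instead follow a central path and achieve much faster (superlinear) local convergence.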

Journal: SIAM Journal on Optimization, 2016
Michal Kocvara, Yurii Nesterov, Yu Xia

A small improvement in the structure of a material can save the manufacturer a lot of money. Free material design can be formulated as an optimization problem. However, due to its large scale, second-order methods cannot solve the free material design problem at a reasonable size. We formulate the free material optimization (FMO) problem in a saddle-point form in which the inverse of the ...

Khalil Khalili, Morteza, Soheili, Saeed

The number of neutrons emitted by the compound nucleus before reaching the saddle point (ν_pre) is calculated for the ^16O + ^208Pb, ^12C + ^236U, ^11B + ^237Np, and ^18O + ^197Au heavy-ion induced fission reaction systems. The behavior of the angular anisotropies of the fission fragments is normal for the ^16O + ^208Pb and ^18O + ^197Au reaction systems, since the targets have spherical shapes. For th...

2011
S. K. Bisoi, G. Devi, Arabinda Rath

This paper presents a neural network for solving a non-linear minimax multiobjective fractional programming problem subject to nonlinear inequality constraints. The neural model is designed for optimization under constraints. The methodology is based on the Lagrange multiplier method with saddle-point optimization.
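The Lagrange-multiplier saddle-point idea behind such methods can be illustrated on a toy constrained problem (this is a hedged sketch, not the paper's neural network): minimize f(x) = (x − 3)² subject to g(x) = x − 1 ≤ 0 by descending the Lagrangian L(x, λ) = f(x) + λ·g(x) in x and ascending it in λ ≥ 0.

```python
# Gradient descent-ascent on the Lagrangian of
#   min (x - 3)^2  s.t.  x - 1 <= 0,
# whose KKT point is x* = 1 with multiplier lam* = 4.
x, lam = 0.0, 0.0
step = 0.01
for _ in range(20000):
    x -= step * (2 * (x - 3) + lam)       # dL/dx: descend in the primal
    lam = max(0.0, lam + step * (x - 1))  # dL/dlam: ascend, projected on lam >= 0
```

The multiplier grows until the constraint is satisfied with equality, which is exactly the saddle-point characterization of the KKT conditions.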

2013
Moon Hee Kim

In this paper, Mond-Weir type duality results for an uncertain multiobjective robust optimization problem are given under generalized invexity assumptions. Also, weak vector saddle-point theorems are obtained under convexity assumptions.

2016
Guanghui Lan, Yuyuan Ouyang

Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the summation of two smooth convex functions with different Lipschitz constants. We show that the AGS method can skip the gradient...
