Search results for: nonconvex optimization

Number of results: 320278

Journal: SIAM Journal on Optimization 2009
Stefan Bundfuss Mirjam Dür

We study linear optimization problems over the cone of copositive matrices. These problems appear in nonconvex quadratic and binary optimization; for instance, the maximum clique problem and other combinatorial problems can be reformulated as such problems. We present new polyhedral inner and outer approximations of the copositive cone which we show to be exact in the limit. In contrast to prev...
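As a standard illustration of the kind of copositive reformulation this abstract refers to (a textbook result of de Klerk and Pasechnik, not text from the paper): the stability number α(G) of a graph G with adjacency matrix A_G, and hence the clique number of its complement, solves a linear optimization problem over the copositive cone.

```latex
% Stability number as linear optimization over the copositive cone
% COP_n = { symmetric M : x^T M x >= 0 for all x >= 0 }; J is the all-ones matrix.
\alpha(G) \;=\; \min_{\lambda \in \mathbb{R}}
  \Bigl\{ \lambda \;:\; \lambda\,(I + A_G) - J \in \mathrm{COP}_n \Bigr\}.
```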

2004
P. K. Polisetty E. P. Gatzke

This paper presents a parallel algorithm for obtaining global solutions to general mathematical programming problems with nonconvex constraints involving continuous variables. The proposed algorithm implements an optimization-based bound tightening technique (Smith [1996], Ryoo and Sahinidis [1995], Adjiman et al. [2000]) in parallel on the root node of the branch-and-bound tree structure. Upon...
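A minimal sketch of optimization-based bound tightening over a linear relaxation, the generic technique the abstract cites (sequential here, not the paper's parallel implementation; the relaxation data and function name are illustrative assumptions):

```python
# Hedged OBBT sketch: for each variable x_i, minimize and maximize x_i over a
# linear relaxation of the feasible set and shrink its bounds accordingly.
import numpy as np
from scipy.optimize import linprog

def obbt(A_ub, b_ub, bounds):
    """Return tightened (lower, upper) bounds for each variable."""
    n = A_ub.shape[1]
    tightened = list(bounds)
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=tightened)    # min x_i
        hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=tightened)   # max x_i
        if lo.success and hi.success:
            tightened[i] = (lo.fun, -hi.fun)
    return tightened

# Toy relaxation: x0 + x1 <= 1, x0 - x1 <= 0.5, with 0 <= x0, x1 <= 10.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([1.0, 0.5])
print(obbt(A, b, [(0.0, 10.0), (0.0, 10.0)]))
```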

Journal: Neurocomputing 2014
Vittorio Latorre David Yang Gao

Radial Basis Function Neural Networks (RBFNNs) are tools widely used in regression problems. One of their principal drawbacks is that the formulation corresponding to training with supervision of both the centers and the weights is a highly non-convex optimization problem, which leads to fundamental difficulties for traditional optimization theory and methods. This paper present...
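A minimal sketch of the training objective the abstract describes, assuming Gaussian kernels and a squared loss (standard choices; the names and toy data are illustrative, not taken from the paper):

```python
# Nonconvex RBFNN training objective when both centers and output weights
# are optimized jointly.
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Gaussian RBF features phi_j(x) = exp(-gamma * ||x - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def training_loss(X, y, centers, weights, gamma=1.0):
    """Squared regression loss; convex in weights alone, nonconvex jointly."""
    pred = rbf_features(X, centers, gamma) @ weights
    return 0.5 * np.mean((pred - y) ** 2)

# Toy data to evaluate the objective at a random (centers, weights) point.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
centers = rng.normal(size=(5, 2))
weights = rng.normal(size=5)
print(training_loss(X, y, centers, weights))
```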

2009
DAVID YANG GAO

It is known that in convex optimization, the Lagrangian associated with a constrained problem is usually a saddle function, which leads to the classical saddle Lagrange duality (i.e., the monoduality) theory. In nonconvex optimization, a so-called superLagrangian was introduced in [1], which leads to a nice biduality theory in convex Hamiltonian systems and in so-called d.c. programming.
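For reference, the classical saddle property alluded to here can be stated in textbook form (my notation, not the author's): for a convex program with Lagrangian L(x, λ) = f(x) + λᵀg(x), strong duality holds under a constraint qualification and an optimal primal-dual pair is a saddle point of L.

```latex
% Saddle Lagrange (mono)duality for a convex program \min_x f(x) s.t. g(x) <= 0:
\inf_{x} \sup_{\lambda \ge 0} L(x,\lambda)
  \;=\;
\sup_{\lambda \ge 0} \inf_{x} L(x,\lambda),
\qquad
L(x^{*},\lambda) \;\le\; L(x^{*},\lambda^{*}) \;\le\; L(x,\lambda^{*})
\quad \forall x,\ \forall \lambda \ge 0 .
```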

Journal: SIAM Journal on Mathematics of Data Science 2022

Adaptivity is an important yet under-studied property in modern optimization theory. The gap between the state-of-the-art theory and current practice is striking in that algorithms with desirable theoretical guarantees typically involve drastically different settings of hyperparameters, such as step size schemes and batch sizes, in different regimes. Despite appealing theoretical results, such divisive strategies provide little, if a...
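As one concrete instance of the hyperparameters mentioned here, an AdaGrad-style rule adapts the step size per coordinate to the gradients observed so far; this is a generic sketch, not the method analyzed in the paper.

```python
# AdaGrad-style adaptive step size: the effective step for each coordinate
# shrinks with the accumulated squared gradients, so no regime-specific
# step-size schedule has to be hand-tuned.
import numpy as np

def adagrad(grad, x0, lr=0.5, eps=1e-8, steps=200):
    x = np.asarray(x0, dtype=float)
    accum = np.zeros_like(x)                   # running sum of squared gradients
    for _ in range(steps):
        g = grad(x)
        accum += g * g
        x -= lr * g / (np.sqrt(accum) + eps)   # per-coordinate adaptive step
    return x

# Minimize the simple nonconvex test function f(x) = x0^4 - x0^2 + x1^2.
grad = lambda x: np.array([4 * x[0] ** 3 - 2 * x[0], 2 * x[1]])
print(adagrad(grad, [1.5, -1.0]))
```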

Journal: Automatica 2023

Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning involving sensitive data. Despite some recent advances addressing each of them separately, no results have been reported that provide theoretical guarantees on both privacy and saddle/maximum avoidance in nonconvex optimization. We propose a new algorithm for decentralized nonconvex optimization that can enable rigorous differential privacy and saddle/maximum-avoiding perform...
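A minimal sketch of one generic mechanism behind differentially private decentralized optimization, namely perturbing the iterates shared with neighbors before consensus averaging; this is not the algorithm proposed in the paper, and the mixing matrix, noise level, and toy objectives are illustrative assumptions.

```python
# Generic noise-plus-consensus step: each agent adds Gaussian noise to the
# state it shares, averages with a mixing matrix W, and takes a local
# gradient step.
import numpy as np

def dp_decentralized_step(X, grads, W, step=0.1, noise_std=0.05, rng=None):
    """X: (n_agents, dim) local iterates; grads: list of local gradient functions."""
    rng = rng or np.random.default_rng()
    shared = X + rng.normal(scale=noise_std, size=X.shape)  # perturb before sharing
    mixed = W @ shared                                       # consensus averaging
    local = np.stack([g(x) for g, x in zip(grads, X)])       # local gradients
    return mixed - step * local

# Toy example: 3 agents, doubly stochastic mixing matrix, quadratic objectives.
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
targets = np.array([[1.0], [2.0], [3.0]])
grads = [lambda x, t=t: x - t for t in targets]              # grad of 0.5*||x - t||^2
X = np.zeros((3, 1))
for _ in range(100):
    X = dp_decentralized_step(X, grads, W)
print(X.ravel())   # iterates hover near the average of the targets
```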

Journal: J. Global Optimization 2015
Mengwei Xu Jane J. Ye Liwei Zhang

In this paper, we propose a smoothing augmented Lagrangian method for finding a stationary point of a nonsmooth and nonconvex optimization problem. We show that any accumulation point of the iteration sequence generated by the algorithm is a stationary point provided that the penalty parameters are bounded. Furthermore, we show that a weak version of the generalized Mangasarian-Fromovitz constr...
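For context, the classical augmented Lagrangian such methods build on, written here for an equality-constrained problem in my notation (the paper's smoothed, nonsmooth-aware variant differs):

```latex
% Augmented Lagrangian for \min_x f(x) s.t. c(x) = 0, with multiplier \lambda
% and penalty parameter \rho > 0:
L_{\rho}(x,\lambda) \;=\; f(x) \;+\; \lambda^{\top} c(x)
  \;+\; \frac{\rho}{2}\,\lVert c(x) \rVert^{2}.
```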
