Search results for: nonconvex vector optimization
Number of results: 506,335
Nonsmooth nonconvex regularization has remarkable advantages for the restoration of piecewise constant images. Constrained optimization can further improve the restoration by exploiting a priori information. In this paper, we study regularized nonsmooth nonconvex minimization with box constraints for image restoration. We present a computable positive constant θ for using nonconvex nonsmooth regularizat...
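The box-constrained, nonconvex-regularized minimization this abstract describes can be sketched with projected gradient descent. Everything below is illustrative: the smooth nonconvex potential φ(t) = αt²/(1 + αt²), the 1-D signal setting, and all parameter values are assumptions, not the paper's algorithm or its constant θ.

```python
import numpy as np

def restore_box_constrained(y, lam=0.1, alpha=5.0, lo=0.0, hi=1.0,
                            step=0.5, iters=200):
    """Projected-gradient sketch: fit a 1-D signal y with a smooth
    nonconvex edge-preserving penalty on first differences, under
    box constraints lo <= x <= hi. Illustrative only."""
    x = np.clip(y.copy(), lo, hi)
    for _ in range(iters):
        d = np.diff(x)
        # derivative of phi(t) = alpha*t^2 / (1 + alpha*t^2)
        gphi = 2 * alpha * d / (1 + alpha * d ** 2) ** 2
        # gradient of 0.5*||x - y||^2 + lam * sum_i phi(x_{i+1} - x_i)
        grad = (x - y) + lam * (np.concatenate(([0.0], gphi))
                                - np.concatenate((gphi, [0.0])))
        x = np.clip(x - step * grad, lo, hi)   # project onto the box
    return x
```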
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method achieves global and superlinear convergence when the objective function is...
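A minimal numpy sketch of one self-scaling quasi-Newton scheme, assuming the classical Oren-Luenberger scaling factor τ = sᵀy / (yᵀHy) applied to the inverse Hessian approximation before a standard BFGS update; the specific scaling rule and line search in the cited work may differ.

```python
import numpy as np

def self_scaling_bfgs(f, grad, x0, iters=100, tol=1e-8):
    """Sketch of a self-scaling BFGS method: the inverse Hessian
    approximation H is rescaled by tau = (s.y)/(y.H.y) before each
    update, which damps overly large eigenvalues."""
    n = x0.size
    x, H = x0.copy(), np.eye(n)
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g
        # crude backtracking line search (illustrative, not the paper's)
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                    # curvature condition holds
            tau = sy / (y @ H @ y)        # self-scaling factor
            H *= tau
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```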
Sparse Gaussian graphical models characterize sparse dependence relationships between random variables in a network. To estimate multiple related Gaussian graphical models on the same set of variables, we formulate a hierarchical model, which leads to an optimization problem with a nonconvex log-shift penalty function. We show that under mild conditions the optimization problem is convex despit...
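If the log-shift penalty has the generic shape log(ε + ·) applied to edge magnitudes shared across the K related graphs, the joint estimation problem might look as follows; this display is an illustrative guess at the general form, with S_k the sample covariances, and is not the paper's exact hierarchical formulation.

```latex
\min_{\Theta_1,\dots,\Theta_K \succ 0}\;
\sum_{k=1}^{K}\Big(\operatorname{tr}(S_k\Theta_k) - \log\det\Theta_k\Big)
\;+\; \lambda \sum_{i\neq j} \log\!\Big(\epsilon + \sum_{k=1}^{K}\big|(\Theta_k)_{ij}\big|\Big)
```

Note that for entries small relative to ε the penalty is approximately linear, i.e., lasso-like and convex, which is consistent with the abstract's claim of convexity under mild conditions.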
Sparse learning is an important topic in many areas such as machine learning, statistical estimation, and signal processing. Recently, there has been growing interest in structured sparse learning. In this paper we focus on the lq-analysis optimization problem for structured sparse learning (0 < q ≤ 1). Compared to previous work, we establish weaker conditions for exact recovery in noiseless ...
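A common way to attack lq-analysis minimization is iteratively reweighted least squares (IRLS). The sketch below, including the ε smoothing schedule and the assumed noiseless equality constraints Ax = b, is a generic IRLS scheme, not the method of this paper.

```python
import numpy as np

def irls_lq_analysis(A, b, Psi, q=0.5, iters=30, eps=1.0):
    """IRLS sketch for the noiseless lq-analysis problem
    min ||Psi x||_q^q  s.t.  A x = b  (0 < q <= 1).
    Weights w_i = (|(Psi x)_i|^2 + eps)^(q/2 - 1); eps is shrunk
    gradually, a common smoothing heuristic."""
    m, n = A.shape
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # feasible starting point
    for _ in range(iters):
        w = (np.abs(Psi @ x) ** 2 + eps) ** (q / 2 - 1)
        Q = Psi.T @ (w[:, None] * Psi)          # Psi^T diag(w) Psi
        # KKT system of: min x^T Q x  s.t.  A x = b
        K = np.block([[2 * Q, A.T],
                      [A, np.zeros((m, m))]])
        rhs = np.concatenate([np.zeros(n), b])
        x = np.linalg.solve(K, rhs)[:n]
        eps = max(eps * 0.5, 1e-10)
    return x
```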
We study linear optimization problems over the cone of copositive matrices. These problems appear in nonconvex quadratic and binary optimization; for instance, the maximum clique problem and other combinatorial problems can be reformulated as such problems. We present new polyhedral inner and outer approximations of the copositive cone which we show to be exact in the limit. In contrast to prev...
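By definition, M is copositive iff xᵀMx ≥ 0 for every x ≥ 0, equivalently for every x on the unit simplex. The sampling heuristic below can only certify non-copositivity (by finding a negative value); it is a toy check, not the polyhedral inner and outer approximations described above.

```python
import numpy as np

def copositive_heuristic(M, samples=100_000, seed=0):
    """Evaluate x^T M x at random points of the unit simplex.
    A negative minimum proves M is NOT copositive; a nonnegative
    minimum is only (weak) evidence of copositivity."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    X = rng.dirichlet(np.ones(n), size=samples)   # simplex samples
    vals = np.einsum('ij,jk,ik->i', X, M, X)      # x^T M x per sample
    return vals.min()

# example: identity plus the all-ones matrix is copositive
M = np.eye(3) + np.ones((3, 3))
print(copositive_heuristic(M))   # should print a value >= 0
```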
This paper presents a parallel algorithm for obtaining global solutions to general mathematical programming problems with nonconvex constraints involving continuous variables. The proposed algorithm implements an optimization-based bound tightening technique (Smith [1996], Ryoo and Sahinidis [1995], Adjiman et al. [2000]) in parallel on the root node of the branch-and-bound tree structure. Upon...
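Optimization-based bound tightening solves, for each variable, two auxiliary problems (minimize and maximize that variable over a relaxation of the feasible set); the optima are valid tighter bounds. A sequential scipy sketch assuming a purely linear relaxation {A_ub x ≤ b_ub}; the cited works embed this in branch-and-bound and run the 2n independent subproblems in parallel.

```python
import numpy as np
from scipy.optimize import linprog

def obbt(A_ub, b_ub, bounds):
    """Tighten each variable's bounds over the LP relaxation
    {A_ub x <= b_ub, bounds}. The 2n LPs are independent and thus
    embarrassingly parallel (solved sequentially here for clarity)."""
    n = len(bounds)
    new_bounds = list(bounds)
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=new_bounds)
        hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=new_bounds)
        if lo.success and hi.success:
            # min of x_i and max of x_i over the relaxation
            new_bounds[i] = (lo.fun, -hi.fun)
    return new_bounds
```

Tightened bounds for variable i feed back into the LPs for later variables, which is why the sweep can be iterated until no bound improves.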
Radial Basis Function Neural Networks (RBFNNs) are tools widely used in regression problems. One of their principal drawbacks is that training with supervision of both the centers and the weights is a highly non-convex optimization problem, which leads to fundamental difficulties for traditional optimization theory and methods. This paper present...
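The nonconvexity arises from fitting centers and output weights jointly; a classical workaround fixes the centers first, which makes the weight fit a convex least-squares problem. A numpy sketch of that two-stage scheme, assuming random center selection and a Gaussian kernel; it is not the method this paper proposes.

```python
import numpy as np

def train_rbfnn(X, y, n_centers=10, gamma=1.0, seed=0):
    """Two-stage RBF network training sketch: fix centers (random
    selection here; k-means is also common), then solve a convex
    least-squares problem for the output weights."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    # Gaussian RBF design matrix: Phi[i, j] = exp(-gamma ||x_i - c_j||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    def predict(Xq):
        d2q = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2q) @ w
    return predict
```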
It is known that in convex optimization, the Lagrangian associated with a constrained problem is usually a saddle function, which leads to the classical saddle Lagrange duality (i.e., monoduality) theory. In nonconvex optimization, a so-called superLagrangian was introduced in [1], which leads to a nice biduality theory in convex Hamiltonian systems and in so-called d.c. programming.
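For reference, the classical saddle structure reads as follows; this is the standard textbook statement, not the superLagrangian of [1]. For a convex problem min f(x) s.t. g(x) ≤ 0 with Lagrangian L(x, λ) = f(x) + λᵀg(x), a KKT pair (x̄, λ̄) is a saddle point:

```latex
L(\bar x, \lambda) \;\le\; L(\bar x, \bar\lambda) \;\le\; L(x, \bar\lambda)
\qquad \text{for all } x \text{ and all } \lambda \ge 0,
```

so that min over x of max over λ ≥ 0 equals max over λ ≥ 0 of min over x, which is exactly the monoduality mentioned above.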