Search results for: quasi newton method
Number of results: 1,708,814
We propose a novel algorithm for learning a geometric combination of Gaussian kernels jointly with an SVM classifier. This problem is the product counterpart of MKL, restricted to Gaussian kernels. Our algorithm finds a local solution by alternating a quasi-Newton gradient descent over the kernels and a classical SVM solver over the instances. We show promising results on well-known data se...
Minimax problems have recently gained tremendous attention across the optimization and machine learning communities. In this paper, we introduce a new quasi-Newton method for minimax problems, which we call the J-symmetric method. The method is obtained by exploiting the structure of the second-order derivative of the objective function in the minimax problem. We show that the Hessian estimation (as well as its inverse) can be updated by a rank...
Analyses of the convergence properties of general quasi-Newton methods are presented, particular attention being paid to how the approximate solutions and the iteration matrices approach their final values. It is further shown that when Broyden's algorithm is applied to linear systems, the error norms are majorised by a superlinearly convergent sequence of an unusual kind.
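The abstract above concerns Broyden's algorithm applied to linear systems. As a minimal sketch of the classical "good Broyden" rank-one secant update (the standard textbook formulation, not the specific analysis of the paper), applied to a small linear system where finite termination is expected:

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method for solving F(x) = 0.

    B approximates the Jacobian and is corrected by a rank-one
    secant update, so no derivatives of F are ever evaluated.
    """
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                    # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)       # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        # rank-one secant update enforcing B_new @ s == y
        B += np.outer(y - B @ s, s) / (s @ s)
        x, Fx = x_new, F_new
    return x

# Example: Broyden applied to the linear system A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
sol = broyden(lambda x: A @ x - b, np.zeros(2))
```

For linear systems such as this one, Broyden's method is known to terminate in finitely many steps, which is the setting in which the abstract's superlinear majorising sequence applies.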
This paper is a continuation of our previous paper [3], where we presented generalizations of the Dennis-Moré theorem to characterize q-superlinear convergence of quasi-Newton methods for solving equations and variational inequalities in Banach spaces. Here we prove Dennis-Moré type theorems for inexact quasi-Newton methods applied to variational inequalities in finite dimensions. We first consid...
Training in the random neural network (RNN) is generally speci®ed as the minimization of an appropriate error function with respect to the parameters of the network (weights corresponding to positive and negative connections). We propose here a technique for error minimization that is based on the use of quasi-Newton optimization techniques. Such techniques oer more sophisticated exploitation ...
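The general recipe above, posing training as minimization of an error function over network weights and handing it to a quasi-Newton routine, can be sketched with SciPy's BFGS solver. The one-neuron model, data, and squared-error function below are illustrative stand-ins, not the RNN of the abstract:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: inputs and targets the model should reproduce.
X = np.array([0.0, 0.5, 1.0])
t = np.array([0.1, 0.5, 0.9])

def error(w):
    """Squared error of a single sigmoid unit with weight w[0], bias w[1]."""
    y = 1.0 / (1.0 + np.exp(-(X * w[0] + w[1])))
    return np.sum((y - t) ** 2)

# Quasi-Newton (BFGS) minimization of the error over the parameters.
res = minimize(error, x0=np.zeros(2), method="BFGS")
```

BFGS builds its curvature model from successive gradient differences, which is the "more sophisticated exploitation" of gradient information that the abstract contrasts with plain gradient descent.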
We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles that of Broyden with line searches, except that the change in gradients is replaced by a certain hybrid vector before updating the current Hessian approximation. This damped technique modifies the Hessian approximations so that they are maintained sufficiently positive defi...
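The best-known damping rule of the kind described above is Powell's, which replaces the gradient difference by a convex combination with the predicted curvature so the updated matrix stays positive definite. A minimal sketch assuming Powell's classical rule (the abstract's hybrid vector may differ):

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """Powell-damped BFGS update of a Hessian approximation B.

    The gradient difference y is replaced by a convex combination
    y_hat of y and B s so that the curvature condition s @ y_hat > 0
    holds and the updated matrix remains positive definite.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= theta_min * sBs:
        theta = 1.0                                   # no damping needed
    else:
        theta = (1.0 - theta_min) * sBs / (sBs - sy)  # Powell's damping factor
    y_hat = theta * y + (1.0 - theta) * Bs            # damped ("hybrid") vector
    return (B - np.outer(Bs, Bs) / sBs
              + np.outer(y_hat, y_hat) / (s @ y_hat))

# Example: a negative-curvature pair (s, y), where the undamped
# BFGS update would destroy positive definiteness.
B_new = damped_bfgs_update(np.eye(2),
                           np.array([1.0, 0.0]),
                           np.array([-1.0, 0.0]))
```

Here s @ y = -1 < 0, so the undamped update would make B indefinite; with damping the result is still positive definite, which is exactly the property the abstract's modification is designed to maintain.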
In this paper, we present a method for solving the finite nonlinear min-max problem. By using quasi-Newton methods, we approximately solve a sequence of differentiable subproblems where, for each subproblem, the cost function to minimize is a global regularization underestimating the finite maximum function. We show that every cluster point of the sequence generated is a stationary point of the min-...
This paper develops and analyzes a generalization of the Broyden class of quasi-Newton methods to the problem of minimizing a smooth objective function f on a Riemannian manifold. A condition on vector transport and retraction that guarantees convergence and facilitates efficient computation is derived. Experimental evidence is presented demonstrating the value of the extension to the Riemannian...
This paper is dedicated to Claude Lemaréchal on the occasion of his 65th birthday. We take this opportunity to thank him deeply for the great moments we have had in discussions with him (not only about math). His vision and his ability to put ideas into words have helped us deepen our understanding of optimization. This work builds on one of his lines of research: using convex analysis and nonlinear...