Search results for: descent method
Number of results: 1,645,212
This paper describes an approach to the use of gradient descent search in genetic programming (GP) for object classification problems. Gradient descent search is introduced into the GP mechanism and embedded into the genetic beam search, allowing the evolutionary learning process to follow the beam search globally and the gradient descent search locally. Two different methods, an on...
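The local step in this hybrid can be sketched concretely. A minimal illustration, assuming the GP individual exposes its numeric terminals as a parameter vector and using finite differences so no symbolic gradient of the tree is needed; the toy expression, `refine_constants`, and all constants are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical GP individual: a fixed expression tree whose numeric
# terminals are collected into a parameter vector theta. The tree is
# hard-coded here as f(x) = theta[0]*x + theta[1]*x**2 for illustration.
def evaluate(theta, x):
    return theta[0] * x + theta[1] * x ** 2

def fitness_error(theta, X, y):
    # Squared error against targets, standing in for the GP fitness.
    return np.mean((evaluate(theta, X) - y) ** 2)

def refine_constants(theta, X, y, lr=0.1, steps=300, eps=1e-6):
    """Local gradient descent on an individual's numeric terminals,
    via finite differences (no symbolic gradient of the tree)."""
    theta = theta.copy()
    for _ in range(steps):
        base = fitness_error(theta, X, y)
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            bumped = theta.copy()
            bumped[i] += eps
            grad[i] = (fitness_error(bumped, X, y) - base) / eps
        theta -= lr * grad   # local descent step inside the global beam search
    return theta

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = 2.0 * X + 0.5 * X ** 2           # constants the local search should recover
theta = refine_constants(rng.normal(size=2), X, y)
print(theta)                          # approaches [2.0, 0.5]
```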
In classic backpropagation nets, as introduced by Rumelhart et al. [1], the weights are modified according to the method of steepest descent. The goal of this weight modification is to minimise the error in net outputs for a given training set. Building on Jacobs' work [2], we point out drawbacks of steepest descent and suggest improvements to it. These yield a backpropagation net, which adjusts ...
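For reference, the steepest-descent weight update being criticized is the plain rule w ← w − η∇E(w). A minimal sketch for a single sigmoid unit, assuming a mean-squared-error objective; the fixed learning rate `lr` is exactly the quantity that Jacobs-style adaptive schemes replace:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))              # training inputs
w_true = np.array([1.5, -2.0, 0.5])
y = sigmoid(X @ w_true)                    # targets from a known weight vector

w = np.zeros(3)
lr = 0.5                                   # fixed step size: steepest descent
for _ in range(2000):
    out = sigmoid(X @ w)
    err = out - y                          # dE/dout for E = 0.5 * mean squared error
    grad = X.T @ (err * out * (1 - out)) / len(X)   # chain rule through sigmoid
    w -= lr * grad                         # move weights along -gradient
print(w)   # approaches w_true; the fixed lr is what adaptive schemes improve on
```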
In this paper, we first extend and analyze the steepest descent method for solving the optimal control problem for systems governed by Volterra integral equations. Then, we present some hybrid methods, based on the extended steepest descent and two-step Newton methods, to solve the problem. Global convergence results are also established under mild assumptions and conditions. Numerical re...
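The paper's Volterra setting requires discretizing the state equation, which the truncated abstract does not detail; as a generic illustration only, here is one common steepest-descent / two-step-Newton hybrid on a scalar objective, where the two-step Newton iteration reuses a single derivative evaluation for both sub-steps:

```python
import numpy as np

# Generic sketch, NOT the paper's optimal-control setting: minimize a
# smooth scalar f by steepest descent globally, then switch to a
# two-step Newton iteration on f'(x) = 0 for fast local convergence.
f   = lambda x: (x - 2.0) ** 2 + np.sin(3 * x)       # objective
df  = lambda x: 2 * (x - 2.0) + 3 * np.cos(3 * x)    # f'
d2f = lambda x: 2.0 - 9 * np.sin(3 * x)              # f''

x = -1.0
for _ in range(30):                 # phase 1: steepest descent, globally safe
    x -= 0.05 * df(x)

for _ in range(5):                  # phase 2: two-step Newton on f'(x) = 0
    h = d2f(x)                      # one derivative reused by both sub-steps
    if h <= 0:                      # fall back to descent if not locally convex
        x -= 0.05 * df(x)
        continue
    y = x - df(x) / h               # first Newton sub-step
    x = y - df(y) / h               # second sub-step with frozen derivative
print(x, df(x))                     # a stationary point: f'(x) ~ 0
```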
We analyze an infinite dimensional, geometrically constrained shape optimization problem for magnetically driven microswimmers (locomotors) in three-dimensional (3-D) Stokes flow and give a well-posed descent scheme for computing optimal shapes. The problem is inspired by recent experimental work in this area. We show the existence of a minimizer of the optimization problem using analytical too...
Conjugate gradient methods have attracted attention because they can be directly applied to large-scale unconstrained optimization problems. In order to incorporate second-order information of the objective function into conjugate gradient methods, Dai and Liao (2001) proposed a conjugate gradient method based on the secant condition. However, their method does not necessarily generate a de...
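The Dai-Liao update referred to here sets β_k = g_{k+1}ᵀ(y_k − t s_k) / (d_kᵀ y_k) with y_k = g_{k+1} − g_k and s_k = x_{k+1} − x_k, t ≥ 0. A small sketch of one such direction update, with a toy quadratic and a fixed step standing in for a line search (both illustrative assumptions):

```python
import numpy as np

def dai_liao_direction(g_new, g_old, d_old, s_old, t=0.1):
    """One Dai-Liao (2001) direction update:
    beta = g_new^T (y - t*s) / (d_old^T y),  y = g_new - g_old,
    d_new = -g_new + beta * d_old.
    As the abstract notes, d_new is not guaranteed to satisfy the
    descent condition g_new^T d_new < 0."""
    y = g_new - g_old
    beta = g_new @ (y - t * s_old) / (d_old @ y)
    return -g_new + beta * d_old

# Toy quadratic f(x) = 0.5 x^T A x, gradient g = A x.
A = np.diag([1.0, 10.0])
x_old = np.array([1.0, 1.0]); g_old = A @ x_old
d_old = -g_old                            # first direction: steepest descent
alpha = 0.09                              # fixed step, standing in for a line search
x_new = x_old + alpha * d_old; g_new = A @ x_new
d_new = dai_liao_direction(g_new, g_old, d_old, x_new - x_old)
print(d_new, g_new @ d_new)               # descent iff g_new . d_new < 0
```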
Conjugate gradient methods are widely used for unconstrained optimization, especially for large-scale problems. Many conjugate gradient methods do not always generate a descent search direction, so the descent condition is usually assumed in analyses and implementations. Dai and Yuan (1999) proposed a conjugate gradient method that generates a descent direction at every iteration. Yabe and...
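The Dai-Yuan choice is β_k = ‖g_{k+1}‖² / (d_kᵀ y_k). A sketch on a convex quadratic, where the exact line search a quadratic admits plays the role of the Wolfe line search under which the Dai-Yuan direction is provably a descent direction; the test matrix is illustrative:

```python
import numpy as np

# Dai-Yuan (1999) conjugate gradient on a convex quadratic
#   f(x) = 0.5 x^T A x - b^T x,
# using the exact line search a quadratic admits.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
g = A @ x - b
d = -g
for _ in range(10):
    alpha = -(g @ d) / (d @ A @ d)                # exact minimizer along d
    x = x + alpha * d
    g_new = A @ x - b
    if np.linalg.norm(g_new) < 1e-10:
        break
    beta = (g_new @ g_new) / (d @ (g_new - g))    # Dai-Yuan beta
    d = -g_new + beta * d
    g = g_new
print(x, np.linalg.solve(A, b))                   # both should match
```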
We propose a new method for unconstrained optimization of a smooth and strongly convex function, which attains the optimal rate of convergence of Nesterov’s accelerated gradient descent. The new algorithm has a simple geometric interpretation, loosely inspired by the ellipsoid method. We provide some numerical evidence that the new method can be superior to Nesterov’s accelerated gradient descent.
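For context, the baseline whose rate the new method matches can be sketched directly. A minimal implementation of Nesterov's accelerated gradient descent with the constant momentum used for μ-strongly convex, L-smooth functions; the quadratic and the constants μ, L are illustrative assumptions:

```python
import numpy as np

# Nesterov's accelerated gradient descent for an L-smooth, mu-strongly
# convex quadratic: the O(sqrt(L/mu) log(1/eps)) baseline the abstract's
# new geometric method matches.
A = np.diag([1.0, 100.0])              # Hessian: mu = 1, L = 100
b = np.array([1.0, -1.0])
grad = lambda x: A @ x - b
mu, L = 1.0, 100.0
kappa = L / mu
momentum = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)

x = np.zeros(2); y = x.copy()
for _ in range(200):
    x_new = y - grad(y) / L            # gradient step from the lookahead point
    y = x_new + momentum * (x_new - x) # momentum extrapolation
    x = x_new
print(x, np.linalg.solve(A, b))        # converges far faster than plain descent
```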
This article studies the classical MDS and dwMDS localization algorithms. On this basis, the steepest descent algorithm is introduced to replace the SMACOF algorithm for optimizing the objective function. The results show that the steepest descent method is simple and easy to implement as the optimizer of the objective function. Compared with the dwMDS method based on the SMACOF algorithm, the distributed MDS positioni...
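A sketch of the replacement the article describes: fixed-step steepest descent on the raw MDS stress, standing in for the SMACOF majorization step. The network size, step size, and iteration count are illustrative, and steepest descent on stress can stall in local minima:

```python
import numpy as np

# Steepest descent on the raw MDS stress
#   S(X) = sum_{i<j} (||x_i - x_j|| - delta_ij)^2.
def stress(X, D):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    return ((d - D) ** 2).sum() / 2      # each unordered pair counted twice

def stress_grad(X, D):
    G = np.zeros_like(X)
    for i in range(len(X)):
        for j in range(len(X)):
            if i == j:
                continue
            diff = X[i] - X[j]
            dist = np.linalg.norm(diff) + 1e-12
            G[i] += 2 * (dist - D[i, j]) * diff / dist
    return G

rng = np.random.default_rng(2)
true_pos = rng.uniform(0, 10, size=(6, 2))        # illustrative node positions
D = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=-1)

X = rng.uniform(0, 10, size=(6, 2))               # random initial layout
s0 = stress(X, D)
for _ in range(1000):
    X -= 0.005 * stress_grad(X, D)                # fixed-step steepest descent
print(s0, stress(X, D))   # stress should drop sharply; the true layout
                          # (up to rigid motion) is a global minimum
```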
Proximal gradient descent (PGD) and stochastic proximal gradient descent (SPGD) are popular methods for solving regularized risk minimization problems in machine learning and statistics. In this paper, we propose and analyze an accelerated variant of these methods in the mini-batch setting. This method incorporates two acceleration techniques: one is Nesterov’s acceleration method, and the othe...
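The second acceleration technique is cut off in the snippet, so the sketch below shows only what the abstract states: a mini-batch stochastic proximal gradient method with Nesterov-style extrapolation, applied to an illustrative lasso problem (the FISTA-style momentum schedule and all problem sizes are assumptions):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the lasso regularizer)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Mini-batch accelerated proximal gradient for the lasso
#   min_w 1/(2n) ||Aw - y||^2 + lam * ||w||_1.
rng = np.random.default_rng(3)
n, p, batch = 500, 20, 50
A = rng.normal(size=(n, p))
w_true = np.zeros(p); w_true[:3] = [2.0, -1.0, 0.5]   # sparse ground truth
y = A @ w_true + 0.01 * rng.normal(size=n)

lam, lr = 0.01, 0.01
w = np.zeros(p); z = w.copy(); t = 1.0
for _ in range(2000):
    idx = rng.choice(n, size=batch, replace=False)     # mini-batch
    g = A[idx].T @ (A[idx] @ z - y[idx]) / batch       # stochastic gradient
    w_new = soft_threshold(z - lr * g, lr * lam)       # proximal (SPGD) step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2           # FISTA-style momentum
    z = w_new + ((t - 1) / t_new) * (w_new - w)        # Nesterov extrapolation
    w, t = w_new, t_new
print(np.round(w, 2))   # nonzeros should sit near the first three entries
```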