Search results for: steepest descent method

Number of results: 1645898

Journal: Journal of studies in science and engineering 2021

The steepest descent method and the conjugate gradient method for minimizing nonlinear functions have been studied in this work. Algorithms for both methods are presented and implemented in Matlab software, and a comparison has been made between the two methods with respect to the obtained results and time efficiency. It is shown that one method needs fewer iterations than the other, while the other converges to the minimum of the function in less time.
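
To give a concrete picture of such a comparison, here is a minimal Python sketch (the paper's own implementation is in Matlab and is not shown in this listing): steepest descent and nonlinear conjugate gradient (Fletcher-Reeves), both with an Armijo backtracking line search, on the Rosenbrock function. The test function, starting point, and tolerances are illustrative choices, not taken from the paper.

```python
# Minimal sketch: steepest descent vs. nonlinear conjugate gradient
# on the Rosenbrock function (illustrative problem, not from the paper).
import numpy as np

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def backtracking(f, x, g, d, alpha=1.0, rho=0.5, c=1e-4):
    # Armijo backtracking line search along direction d.
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= rho
    return alpha

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=50000):
    x = x0.copy()
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                                 # steepest descent direction
        x = x + backtracking(f, x, g, d) * d
    return x, k

def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=50000):
    x = x0.copy()
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x = x + backtracking(f, x, g, d) * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)       # Fletcher-Reeves update
        d = -g_new + beta * d
        if g_new @ d >= 0:                     # safeguard: restart if not a descent direction
            d = -g_new
        g = g_new
    return x, k

x0 = np.array([-1.2, 1.0])
for name, solver in [("steepest descent", steepest_descent),
                     ("conjugate gradient", fletcher_reeves)]:
    x, iters = solver(rosenbrock, rosenbrock_grad, x0)
    print(f"{name}: x = {x}, iterations = {iters}")
```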

Journal: Journal of the Mathematical Society of Japan 1985

Journal: Foundations of Computational Mathematics 2010
Andreas Asheim, Daan Huybrechs

We propose a variant of the numerical method of steepest descent for oscillatory integrals by using a low-cost explicit polynomial approximation of the paths of steepest descent. A loss of asymptotic order is observed, but in the most relevant cases the overall asymptotic order remains higher than a truncated asymptotic expansion at similar computational effort. Theoretical results based on num...
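
As a rough illustration of the underlying technique, the sketch below applies numerical steepest descent to the model integral I = ∫₀¹ f(x) e^{iωx} dx, where the descent paths at the endpoints are known exactly (vertical lines in the complex plane), so Gauss-Laguerre quadrature applies directly. The low-cost polynomial path approximation that is the paper's actual contribution is not reproduced; the amplitude f(x) = cos x and ω = 50 are illustrative choices.

```python
# Minimal sketch of numerical steepest descent for I = \int_0^1 f(x) e^{i w x} dx.
# For phase g(x) = x, the descent path at endpoint a is h_a(p) = a + i p / w,
# along which the integrand decays like e^{-p}.
import numpy as np
from numpy.polynomial.laguerre import laggauss

def nsd_linear_phase(f, omega, n=10):
    p, wts = laggauss(n)  # Gauss-Laguerre nodes and weights on [0, inf)
    def endpoint(a):
        # (i/omega) * e^{i omega a} * \int_0^inf f(a + i p / omega) e^{-p} dp
        vals = f(a + 1j * p / omega)
        return (1j / omega) * np.exp(1j * omega * a) * np.dot(wts, vals)
    # Cauchy's theorem: the [0, 1] contour deforms into the two endpoint paths.
    return endpoint(0.0) - endpoint(1.0)

f = lambda x: np.cos(x)   # entire amplitude, so the deformation is valid
omega = 50.0
approx = nsd_linear_phase(f, omega)

# Exact reference: cos(x) e^{i w x} = (e^{i(w+1)x} + e^{i(w-1)x}) / 2.
ref = 0.5 * ((np.exp(1j * (omega + 1)) - 1) / (1j * (omega + 1))
           + (np.exp(1j * (omega - 1)) - 1) / (1j * (omega - 1)))
print(approx, ref, abs(approx - ref))
```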

Journal: Journal of Optimization Theory and Applications 1996

2016
Frank E. Curtis, Wei Guo

The limited memory steepest descent method (LMSD) proposed by Fletcher is an extension of the Barzilai-Borwein “two-point step size” strategy for steepest descent methods for solving unconstrained optimization problems. It is known that the Barzilai-Borwein strategy yields a method with an R-linear rate of convergence when it is employed to minimize a strongly convex quadratic. This paper exten...
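
A minimal sketch of the Barzilai-Borwein two-point step size on a strongly convex quadratic is given below; Fletcher's limited-memory extension analyzed in the paper is not reproduced, and the test matrix is an illustrative construction.

```python
# Minimal sketch: Barzilai-Borwein "two-point step size" gradient descent
# on a strongly convex quadratic f(x) = 0.5 x'Ax - b'x (illustrative problem).
import numpy as np

def bb_gradient_descent(A, b, x0, tol=1e-8, max_iter=1000):
    x = x0.copy()
    g = A @ x - b                        # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(A, 2)   # safe first step
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)        # BB1 step: a scalar secant condition
        x, g = x_new, g_new
    return x, k

rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.standard_normal((50, 50)))[0]
A = Q @ np.diag(np.linspace(1, 100, 50)) @ Q.T   # SPD, condition number 100
b = rng.standard_normal(50)
x, iters = bb_gradient_descent(A, b, np.zeros(50))
print("iterations:", iters, "residual:", np.linalg.norm(A @ x - b))
```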

Journal: J. Sci. Comput. 2012
Kees van den Doel, Uri M. Ascher

The steepest descent method for large linear systems is well-known to often converge very slowly, with the number of iterations required being about the same as that obtained by utilizing a gradient descent method with the best constant step size and growing proportionally to the condition number. Faster gradient descent methods must occasionally resort to significantly larger step sizes, which...
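
The slow baseline behaviour described here is easy to reproduce: the sketch below runs classical steepest descent with the exact line-search step α = rᵀr / rᵀAr on SPD systems of growing condition number (an illustrative setup, not the authors' experiments), and the iteration count grows roughly in proportion to the condition number.

```python
# Minimal sketch: classical steepest descent for an SPD system Ax = b,
# i.e. minimizing 0.5 x'Ax - b'x with exact line search along the residual.
import numpy as np

def steepest_descent_linear(A, b, x0, tol=1e-8, max_iter=100000):
    x = x0.copy()
    r = b - A @ x                    # residual = negative gradient
    for k in range(max_iter):
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)   # exact minimizer along the residual direction
        x = x + alpha * r
        r = r - alpha * Ar
    return x, k

rng = np.random.default_rng(1)
Q = np.linalg.qr(rng.standard_normal((100, 100)))[0]
for cond in (10, 100, 1000):
    A = Q @ np.diag(np.linspace(1, cond, 100)) @ Q.T   # SPD with given condition number
    b = rng.standard_normal(100)
    _, iters = steepest_descent_linear(A, b, np.zeros(100))
    print(f"condition number {cond}: {iters} iterations")
```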

Journal: CoRR 2012
Sudarshan Nandy, Partha Pratim Sarkar, Achintya Das

The present work deals with an improved back-propagation algorithm based on the Gauss-Newton numerical optimization method for fast convergence. The steepest descent method is used for the back-propagation. The algorithm is tested on various datasets and compared with the steepest descent back-propagation algorithm. In the system, optimization is carried out using a multilayer neural network. The ...
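
As a point of reference for the baseline this paper compares against, here is a minimal sketch of plain steepest-descent back-propagation for a one-hidden-layer network on XOR; the network size, data, and hyperparameters are illustrative, and the paper's Gauss-Newton acceleration is not reproduced.

```python
# Minimal sketch: steepest-descent back-propagation for a one-hidden-layer
# sigmoid network on XOR (illustrative baseline, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # steepest-descent update: step along the negative gradient
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 3))   # should approach [0, 1, 1, 0]
```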
