Search results for: bfgs method

Number of results: 1630302

In this paper, a multiobjective optimal design method for an interior permanent magnet synchronous motor (IPMSM) for traction applications is presented, with the goals of maximizing average torque and minimizing torque ripple. Based on the train motion equations and the physical properties of the train, desired specifications such as steady-state speed, rated output power, acceleration time, and rated speed of tract...

Journal: :Evolutionary computation 2017
Ilya Loshchilov

Limited-memory BFGS (L-BFGS; Liu and Nocedal, 1989) is often considered to be the method of choice for continuous optimization when first- or second-order information is available. However, the use of L-BFGS can be complicated in a black-box scenario where gradient information is not available and must therefore be estimated numerically. The accuracy of this estimation, obtained by finite di...
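
As background for the gradient-estimation issue mentioned above, here is a minimal sketch of running L-BFGS with numerically estimated gradients via SciPy. The objective (Rosenbrock), starting point, and option values are illustrative assumptions, not taken from the paper; passing jac=None makes SciPy approximate the gradient by finite differences, with the step size eps governing the accuracy of that estimate.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective (Rosenbrock); not taken from the paper.
def f(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

x0 = np.zeros(10)

# With jac=None, SciPy's L-BFGS-B estimates the gradient by forward
# finite differences; 'eps' is the step size used for that estimate.
res = minimize(f, x0, method="L-BFGS-B", jac=None,
               options={"eps": 1e-8, "maxiter": 500})
print(res.x, res.fun)
```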

2016
Ching-pei Lee Po-Wei Wang Weizhu Chen Chih-Jen Lin

II. More Experiments. In this section we present additional experimental results that are not included in the main paper. We consider the same experimental environment and the same problem being solved, and report results for different values of C to see the relative efficiency as the problems become more difficult or easier. The result for C = 10^-3 is shown in Figure (I), and the result for C = 1...

Journal: :JCP 2014
Aijia Ouyang Libin Liu Guangxue Yue Xu Zhou Kenli Li

To enable the glowworm swarm optimization (GSO) algorithm to solve multi-extremum global optimization problems more effectively, and taking into consideration the disadvantages as well as some unique advantages of GSO, the paper proposes a hybrid of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm and GSO, i.e., BFGS-GSO, obtained by adding a BFGS local optimization operator to GSO, which can solve such problems effectively ...
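
To illustrate the general idea of adding a BFGS local-optimization operator to a swarm method, the following is a rough sketch under stated assumptions: the GSO part is replaced by a crude "jitter toward the current best" stand-in (real GSO uses luciferin levels and local neighbourhoods), the Rastrigin test function is an arbitrary multi-extremum example, and none of this reflects the authors' actual BFGS-GSO implementation.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Multi-extremum test function (illustrative; not from the paper).
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
dim, swarm_size, iters = 5, 30, 20
swarm = rng.uniform(-5.12, 5.12, size=(swarm_size, dim))

for _ in range(iters):
    fitness = np.array([rastrigin(x) for x in swarm])
    best = swarm[np.argmin(fitness)].copy()

    # Crude stand-in for a GSO move: jitter each member toward the current best.
    swarm += 0.1 * (best - swarm) + 0.05 * rng.normal(size=swarm.shape)

    # BFGS local-optimization operator: refine the best candidate and let it
    # replace the worst member of the swarm.
    refined = minimize(rastrigin, best, method="BFGS").x
    fitness = np.array([rastrigin(x) for x in swarm])
    swarm[np.argmax(fitness)] = refined

print(min(rastrigin(x) for x in swarm))
```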

2007
Nicol N. Schraudolph Jin Yu Simon Günter

We develop stochastic variants of the wellknown BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperfor...
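
As a point of reference for the memory-limited form, below is a minimal sketch of the standard L-BFGS two-loop recursion applied to minibatch gradients on a toy least-squares problem. The problem, learning rate, memory size, and the choice to form curvature pairs from gradient differences on the same minibatch are illustrative assumptions; the paper's actual online algorithm includes safeguards and modifications not reproduced here.

```python
import numpy as np
from collections import deque

def lbfgs_direction(grad, pairs):
    """Two-loop recursion: returns an approximation of H * grad, where H is
    the inverse-Hessian approximation implied by the stored (s, y) pairs."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(pairs):              # newest pair first
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        q -= a * y
        alphas.append(a)
    if pairs:
        s, y = pairs[-1]
        q *= s.dot(y) / y.dot(y)              # initial scaling H0 = gamma * I
    for (s, y), a in zip(pairs, reversed(alphas)):
        b = (1.0 / y.dot(s)) * y.dot(q)
        q += (a - b) * s
    return q

# Toy online loop on a random least-squares problem (illustrative only).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(1000, 20)), rng.normal(size=1000)
w = np.zeros(20)
pairs, lr = deque(maxlen=10), 0.1
for t in range(200):
    idx = rng.choice(1000, size=50, replace=False)               # minibatch
    g = A[idx].T @ (A[idx] @ w - b[idx]) / len(idx)
    step = -lr * lbfgs_direction(g, list(pairs))
    g_new = A[idx].T @ (A[idx] @ (w + step) - b[idx]) / len(idx)
    # Curvature pair from the SAME minibatch, in the spirit of online (L-)BFGS.
    s, y = step, g_new - g
    if y.dot(s) > 1e-10:
        pairs.append((s, y))
    w += step
```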

Journal: :Neurocomputing 2003
Christian Igel Michael Hüsken

The Rprop algorithm proposed by Riedmiller and Braun is one of the best-performing first-order learning methods for neural networks. We discuss modifications of this algorithm that improve its learning speed. The new optimization methods are empirically compared to the existing Rprop variants, the conjugate gradient method, Quickprop, and the BFGS algorithm on a set of neural network benchmark ...
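
For readers unfamiliar with Rprop, the sketch below shows one common variant of the sign-based, per-parameter step-size rule (without weight backtracking). The step-size constants and the toy quadratic are illustrative assumptions; the specific improvements studied in the paper are not reproduced.

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One Rprop update: per-parameter step sizes adapted from the sign of
    successive gradients; returns (delta_w, new_step, grad_to_store)."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Where the sign flipped, suppress the gradient so the step size is not
    # adapted again on the next iteration (the variant without backtracking).
    grad = np.where(sign_change < 0, 0.0, grad)
    delta_w = -np.sign(grad) * step
    return delta_w, step, grad

# Toy usage on a quadratic (illustrative only).
w = np.array([3.0, -2.0])
prev_g = np.zeros_like(w)
step = np.full_like(w, 0.1)
for _ in range(100):
    g = 2 * w                      # gradient of f(w) = w^T w
    dw, step, prev_g = rprop_step(g, prev_g, step)
    w += dw
print(w)
```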

2006
Samuel R. Buss

The traditional quasi-Newton method for updating the approximate Hessian is based on the change in the gradient of the objective function. This paper describes a new update method that also incorporates the change in the value of the function. The method effectively uses a cubic approximation of the objective function to better approximate its directional second derivative. The cubic approximat...
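
To make the cubic idea concrete, here is one standard way a cubic Hermite model along the step yields a directional second-derivative estimate from two function values and two directional derivatives. This is generic textbook material offered as background; it is not necessarily the exact formula used in the paper.

```latex
% Cubic Hermite model p(t) along the step s_k = x_{k+1} - x_k, with
% p(0) = f(x_k), p(1) = f(x_{k+1}), p'(0) = g_k^T s_k, p'(1) = g_{k+1}^T s_k.
% Its second derivative at t = 1 gives a directional curvature estimate:
\[
  s_k^{\top} \nabla^2 f(x_{k+1})\, s_k \;\approx\; p''(1)
  \;=\; 6\bigl(f(x_k) - f(x_{k+1})\bigr) + 2\, g_k^{\top} s_k + 4\, g_{k+1}^{\top} s_k ,
\]
% whereas the usual secant information used by BFGS,
% y_k^T s_k = (g_{k+1} - g_k)^T s_k, uses only the gradient change and
% ignores the function values.
```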

Journal: :SIAM Journal on Optimization 2013
William W. Hager Hongchao Zhang

In theory, the successive gradients generated by the conjugate gradient method applied to a quadratic should be orthogonal. However, for some ill-conditioned problems, orthogonality is quickly lost due to rounding errors, and convergence is much slower than expected. A limited memory version of the nonlinear conjugate gradient method is developed. The memory is used to both detect the loss of o...
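
For reference, the orthogonality property the abstract alludes to is the following standard fact about the linear conjugate gradient method; it is background, not the paper's contribution.

```latex
% For the quadratic f(x) = \tfrac12 x^{\top} A x - b^{\top} x with A symmetric
% positive definite, the conjugate gradient method in exact arithmetic yields
% gradients g_i = A x_i - b that are mutually orthogonal:
\[
  g_i^{\top} g_j = 0 \qquad \text{for } i \neq j .
\]
% In floating-point arithmetic on ill-conditioned problems this orthogonality
% degrades, which is the loss the limited-memory variant is designed to detect.
```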

Journal: :Math. Program. 1993
Jorge Nocedal Ya-Xiang Yuan

We study the self-scaling BFGS method of Oren and Luenberger (1974) for solving unconstrained optimization problems. For general convex functions, we prove that the method is globally convergent with inexact line searches. We also show that the directions generated by the self-scaling BFGS method approach Newton's direction asymptotically. This would ensure superlinear convergence if, in additi...
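
For context, a commonly cited form of the self-scaling BFGS update with an Oren–Luenberger-type scaling factor is sketched below. This is written from standard references, so the exact scaling the paper analyzes may differ in detail.

```latex
% Self-scaling BFGS update of the Hessian approximation B_k, with
% s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k:
\[
  B_{k+1} \;=\; \tau_k \left( B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} \right)
  + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
  \qquad
  \tau_k \;=\; \frac{y_k^{\top} s_k}{s_k^{\top} B_k s_k}.
\]
% With \tau_k \equiv 1 this reduces to the ordinary BFGS update; the factor
% \tau_k rescales the accumulated curvature information at every iteration.
```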

Journal: :SIAM Journal on Optimization 1993
X. Zou Ionel Michael Navon M. Berger Paul Kang-Hoh Phua Tamar Schlick François-Xavier Le Dimet

Computational experience with several limited-memory quasi-Newton and truncated Newton methods for unconstrained nonlinear optimization is described. Comparative tests were conducted on a well-known test library [J...], on several synthetic problems allowing control of the clustering of eigenvalues in the Hessian spectrum, and on some large-scale problems in oceanography and meteorology. The result...
