Search results for: limited memory bfgs

Number of results: 672103  

Journal: SIAM Journal on Optimization 2009
Zaiwen Wen Donald Goldfarb

Abstract. We present a line search multigrid method for solving discretized versions of general unconstrained infinite dimensional optimization problems. At each iteration on each level, the algorithm computes either a “direct search” direction on the current level or a “recursive search” direction from coarser level models. Introducing a new condition that must be satisfied by a backtracking l...

2009
Mark W. Schmidt Ewout van den Berg Michael P. Friedlander Kevin P. Murphy

An optimization algorithm for minimizing a smooth function over a convex set is described. Each iteration of the method computes a descent direction by minimizing, over the original constraints, a diagonal plus low-rank quadratic approximation to the function. The quadratic approximation is constructed using a limited-memory quasi-Newton update. The method is suitable for large-scale problems wh...
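The limited-memory quasi-Newton update mentioned above is usually realized through the classic L-BFGS two-loop recursion, which applies an implicit inverse-Hessian approximation to the gradient using only the `m` most recent curvature pairs. The following is a minimal sketch (function name and structure are illustrative, not from any of the listed papers):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: return the search direction -H @ grad,
    where H approximates the inverse Hessian from curvature pairs
    (s_k = x_{k+1} - x_k, y_k = grad_{k+1} - grad_k), newest last.
    The d x d matrix H is never formed explicitly."""
    q = grad.astype(float).copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    # Initial Hessian scaling gamma = s'y / y'y from the newest pair.
    if s_list:
        s, y = s_list[-1], y_list[-1]
        q *= s.dot(y) / y.dot(y)
    # Second loop: oldest pair to newest, reusing the stored alphas.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * y.dot(q)
        q += (a - b) * s
    return -q
```

With empty memory this reduces to steepest descent; for a quadratic with Hessian `A`, one exact pair `(s, As)` already reproduces the Newton direction along that subspace.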

Journal: Physical review. E, Statistical, nonlinear, and soft matter physics 2010
C D Chau G J A Sevink J G E M Fraaije

We report a new and efficient factorized algorithm for the determination of the adaptive compound mobility matrix B in a stochastic quasi-Newton method (S-QN) that does not require additional potential evaluations. For one-dimensional and two-dimensional test systems, we previously showed that S-QN gives rise to efficient configurational space sampling with good thermodynamic consistency [C. D....

2013
A. S. Lewis S. Zhang

This paper investigates the potential behavior, both good and bad, of the well-known BFGS algorithm for smooth minimization, when applied to nonsmooth functions. We consider three very particular examples. We first present a simple nonsmooth example, illustrating how BFGS (in this case with an exact line search) typically succeeds despite nonsmoothness. We then study, computationally, the behav...

Journal: Applied Numerical Mathematics 2022

• The proposed method utilizes the linear conservation law of regularization continuation, so that, unlike previous methods and quasi-Newton updating formulas, it does not need to compute a correction step to preserve feasibility. It replaces the pre-conditioner with the inverse two-sided projection of the Lagrangian Hessian to improve its robustness in the ill-posed phase. This paper considers ...

2015
Pinghua Gong Jieping Ye

The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method has been demonstrated to be very effective in solving the l1-regularized sparse learning problem. OWL-QN extends L-BFGS from solving unconstrained smooth optimization problems to l1-regularized (non-smooth) sparse learning problems. At each iteration, OWL-QN does not involve any l1-regularized quadratic optimization subproblem and on...
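The key device that lets OWL-QN handle the non-smooth l1 term is the pseudo-gradient: at coordinates where the iterate is zero, it picks the one-sided directional derivative that permits descent, and is zero when neither side decreases the objective. A minimal sketch of that computation (the function name and vectorized layout are illustrative):

```python
import numpy as np

def pseudo_gradient(x, grad, lam):
    """Pseudo-gradient of f(x) + lam * ||x||_1 as used in OWL-QN.
    Away from zero it is the ordinary subgradient grad_i + lam*sign(x_i);
    at x_i == 0 it is the one-sided derivative that allows descent,
    or 0 if the coordinate is already optimal."""
    pg = np.where(x > 0, grad + lam,
         np.where(x < 0, grad - lam, 0.0))
    at_zero = (x == 0)
    right = grad + lam   # derivative when moving into the positive orthant
    left = grad - lam    # derivative when moving into the negative orthant
    pg = np.where(at_zero & (right < 0), right, pg)  # descend rightward
    pg = np.where(at_zero & (left > 0), left, pg)    # descend leftward
    return pg
```

A coordinate at zero whose smooth gradient lies inside [-lam, lam] gets pseudo-gradient 0, which is exactly how OWL-QN keeps iterates sparse.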

Journal: JCP 2014
Aijia Ouyang Libin Liu Guangxue Yue Xu Zhou Kenli Li

To enable the glowworm swarm optimization (GSO) algorithm to solve multi-extremum global optimization problems more effectively, and taking into consideration the disadvantages and some unique advantages of GSO, the paper proposes a hybrid of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm and GSO, i.e., BFGS-GSO, obtained by adding a BFGS local optimization operator, which can solve the problems effectively ...

Journal: SIAM Journal on Optimization 2011
Richard H. Byrd Gillian M. Chin Will Neveitt Jorge Nocedal

This paper describes how to incorporate sampled curvature information in a Newton-CG method and in a limited memory quasi-Newton method for statistical learning. The motivation for this work stems from supervised machine learning applications involving a very large number of training points. We follow a batch approach, also known in the stochastic optimization literature as a sample average appr...
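A central idea behind such sampled-curvature methods is that a Hessian-vector product can be estimated on a small subsample of the training points, which is all a Newton-CG inner solver needs. Below is a minimal sketch for a least-squares loss f(w) = ||Xw - y||^2 / (2n), where the subsampled Hessian is (1/|S|) X_S^T X_S (the function name and loss choice are illustrative, not the paper's exact formulation):

```python
import numpy as np

def subsampled_hessian_vector(X, v, idx):
    """Estimate the Hessian-vector product H @ v for the least-squares
    loss f(w) = ||Xw - y||^2 / (2n) using only the rows in idx:
        H v  ~  (1/|S|) X_S^T (X_S v).
    The d x d Hessian is never formed; cost is O(|S| * d)."""
    Xs = X[idx]
    return Xs.T @ (Xs @ v) / len(idx)
```

Using the full index set recovers the exact Hessian-vector product; a small random subsample trades accuracy for a large reduction in cost, which is the batch/sample-average idea the abstract refers to.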

Journal: Processes 2021

The dynamic simulation of the continuous catalytic reforming process is of great significance for performance prediction and optimization of the entire process. In this study, a 34-lumped mechanism model described by differential-algebraic equations was established based on actual operating conditions in China, and an efficient solution method for the simultaneous equations is proposed. First, differential–algebraic basic principles kinetics...

Thesis: Ministry of Science, Research and Technology - University of Guilan - Faculty of Mathematical Sciences 1393

The limited-memory BFGS method (L-BFGS) is an adaptation of the BFGS method for large-scale unconstrained optimization. In this thesis, a regularization strategy associated with the L-BFGS method is studied, in which, in certain sensitive cases, the regularization parameter used may play a compensating role when an estimated condition number of the Hessian leads to an ill-conditioned situation. The regularized L-BFGS method and its global convergence are then ...

[Chart: number of search results per year]