Search results for: bfgs method

Number of results: 1630302

Thesis: Ministry of Science, Research and Technology - Razi University - Faculty of Science, 1389 (2010/2011)

Abstract: Optimization can be described as the science of determining the best solution to a problem that has been defined mathematically. One of the fundamental branches of optimization is nonlinear programming, and one of its subfields is the solution of unconstrained nonlinear problems. A variety of methods have been proposed for this problem, among them the gradient, Newton, quasi-Newton, and conjugate gradient methods. Quasi-Newton methods are among the best-known methods...
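
As a concrete illustration of the quasi-Newton family mentioned in this abstract, the following minimal Python sketch (an illustration only, not taken from the thesis; it assumes SciPy and uses its standard Rosenbrock test function) minimizes an unconstrained nonlinear function with the BFGS method:

    # Minimal sketch: unconstrained minimization with BFGS via SciPy (illustrative assumption).
    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    x0 = np.array([-1.2, 1.0])                 # standard Rosenbrock starting point
    res = minimize(rosen, x0, jac=rosen_der, method="BFGS",
                   options={"gtol": 1e-8})
    print(res.x, res.nit)                      # minimizer near [1, 1] and iteration count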

Journal: Adv. Operations Research, 2009
Gonglin Yuan Shide Meng Zengxin Wei

A trust-region-based BFGS method is proposed for solving symmetric nonlinear equations. In the given algorithm, if the trial step is unsuccessful, a line-search technique is used instead of repeatedly solving the subproblem of the standard trust-region method. We establish the global and superlinear convergence of the method under suitable conditions. Numerical results show that the given ...
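
For context, the subproblem referred to above is, in its standard form, the quadratic trust-region model built from a quasi-Newton matrix B_k (the paper's exact formulation for symmetric nonlinear equations may differ in detail):

    \min_{d} \; q_k(d) = g_k^{\top} d + \tfrac{1}{2} d^{\top} B_k d
    \quad \text{subject to} \quad \|d\| \le \Delta_k,

where g_k is the current gradient, B_k the BFGS approximation, and \Delta_k the trust-region radius. As the abstract notes, when the trial step from this model is rejected, the algorithm switches to a line search along d rather than shrinking \Delta_k and re-solving the subproblem.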

Journal: CoRR, 2017
Albert S. Berahas Martin Takác

This paper describes an implementation of the L-BFGS method designed to deal with two adversarial situations. The first occurs in distributed computing environments where some of the computational nodes devoted to the evaluation of the function and gradient are unable to return results on time. A similar challenge occurs in a multi-batch approach in which the data points used to compute functio...

Journal: SIAM J. Scientific Computing, 2010
Berkant Savas Lek-Heng Lim

In this paper we proposed quasi-Newton and limited-memory quasi-Newton methods for objective functions defined on Grassmann manifolds or a product of Grassmann manifolds. Specifically, we defined BFGS and L-BFGS updates in local and global coordinates on Grassmann manifolds or a product of these. We proved that, when local coordinates are used, our BFGS updates on Grassmann manifolds share the s...

2011
Quoc V. Le Jiquan Ngiam Adam Coates Ahbik Lahiri Bobby Prochnow Andrew Y. Ng

The predominant methodology for training deep learning models advocates the use of stochastic gradient descent (SGD) methods. Despite their ease of implementation, SGD methods are difficult to tune and parallelize. These problems make it challenging to develop, debug and scale up deep learning algorithms with SGD. In this paper, we show that more sophisticated off-the-shelf optimization methods such as Limited...
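
To make the comparison concrete, here is a minimal sketch of full-batch training with L-BFGS, assuming PyTorch and its torch.optim.LBFGS optimizer (an illustrative choice, not the authors' implementation); LBFGS in PyTorch requires a closure that re-evaluates the loss on every step:

    # Minimal sketch: training a small model with L-BFGS (illustrative only; synthetic data).
    import torch

    X = torch.randn(256, 10)                   # synthetic inputs (assumption)
    y = torch.randn(256, 1)                    # synthetic targets (assumption)
    model = torch.nn.Linear(10, 1)
    opt = torch.optim.LBFGS(model.parameters(), lr=1.0,
                            history_size=10, max_iter=20)

    def closure():
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(X), y)
        loss.backward()
        return loss

    for epoch in range(10):                    # each step runs up to max_iter L-BFGS iterations
        loss = opt.step(closure)
    print(float(loss))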

2015
Zengru Cui Gonglin Yuan Zhou Sheng Wenjie Liu Xiaoliang Wang Xiabin Duan Lixiang Li

This paper proposes a modified BFGS formula using a trust-region model for solving nonsmooth convex minimization problems, combining the Moreau-Yosida regularization (smoothing) approach with a new secant equation and a BFGS update formula. Our algorithm uses both function value and gradient information to compute the Hessian approximation. The Hessian matrix is updated by the BFGS formula rather than...
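
For reference, the Moreau-Yosida regularization mentioned here is the standard construction that replaces the nonsmooth convex objective f by a continuously differentiable function with the same minimizers (standard definition; the paper's notation and parameter choices may differ):

    F_\lambda(x) = \min_{z \in \mathbb{R}^n} \Big\{ f(z) + \frac{1}{2\lambda} \|z - x\|^2 \Big\}, \qquad \lambda > 0,

with gradient \nabla F_\lambda(x) = (x - p_\lambda(x)) / \lambda, where p_\lambda(x) is the unique minimizer above (the proximal point). A BFGS-type update can then be applied to the smoothed function F_\lambda rather than to f directly.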

2005
YA-XIANG YUAN

In this paper we present a modified BFGS algorithm for unconstrained optimization. The BFGS algorithm updates an approximate Hessian which satisfies the most recent quasi-Newton equation. The quasi-Newton condition can be interpreted as the interpolation condition that the gradient value of the local quadratic model matches that of the objective function at the previous iterate. Our modified al...
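
For reference, the quasi-Newton (secant) equation referred to in this abstract, and the standard BFGS update that enforces it, are

    B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k),

    B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}.

The paper's modified algorithm adjusts this standard setup; the details are truncated in the abstract above.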

1998
Tim Oliver

A quasi-Newton algorithm using the BFGS update is one of the most widely used unconstrained numerical optimisation algorithms. We describe three parallel algorithms to perform the BFGS update on a local memory MIMD architecture such as . These algorithms are distinguished by the way in which Hessian information is stored. Cost models are developed for the algorithms and used to compare their pe...

Chart: number of search results per year