Search results for: conjugate gradient descent

Number of results: 174,860

2016
Hiroyuki Kasai, Bamdev Mishra

We propose a novel Riemannian manifold preconditioning approach for the tensor completion problem with rank constraint. A novel Riemannian metric or inner product is proposed that exploits the least-squares structure of the cost function and takes into account the structured symmetry that exists in Tucker decomposition. This metric allows the use of the versatile framework of Riemannian opt...
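The core idea above, a metric acting as a preconditioner, can be illustrated on a toy quadratic (a hypothetical example, not the paper's Tucker-decomposition model): a metric ⟨u, v⟩ = uᵀMv with SPD M turns the Euclidean gradient into the metric gradient M⁻¹∇f, which reshapes the descent direction.

```python
import numpy as np

def preconditioned_descent(A, b, M, x0, lr=1.0, steps=50):
    """Minimize f(x) = 0.5 x^T A x - b^T x under the metric induced by SPD M."""
    x = x0.copy()
    for _ in range(steps):
        grad = A @ x - b                   # Euclidean gradient
        mgrad = np.linalg.solve(M, grad)   # gradient w.r.t. the metric M
        x = x - lr * mgrad
    return x

# Ill-conditioned toy problem; choosing M = A (the ideal preconditioner for
# this quadratic) makes one unit step land exactly on the minimizer A^{-1} b.
A = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
x = preconditioned_descent(A, b, M=A, x0=np.zeros(2), lr=1.0, steps=1)
```

With M equal to the identity this reduces to plain gradient descent; the whole benefit comes from how well M matches the curvature of the cost.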

2017
Mohammad Reza Arazm, Saman Babaie-Kafaki, R. Ghanbari

Using an extension of some previously proposed modified secant equations in the Dai–Liao approach, a modified nonlinear conjugate gradient method is proposed. As interesting features, the method employs the objective function values in addition to the gradient information and satisfies the sufficient descent property with proper choices for its parameter. Global convergence of the method is est...
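The Dai–Liao family the abstract builds on computes the conjugacy parameter as β_k = (g_{k+1}ᵀy_k − t·g_{k+1}ᵀs_k)/(d_kᵀy_k), with y_k = g_{k+1} − g_k and s_k = x_{k+1} − x_k. A minimal sketch of this classical form follows; the paper's modified secant equations (which also use objective values) are not reproduced, and the backtracking line search and restart safeguard are simple illustrative choices.

```python
import numpy as np

def dai_liao_cg(f, grad, x0, t=0.1, tol=1e-8, max_iter=500):
    """Nonlinear CG with the classical Dai-Liao beta (parameter t >= 0)."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:            # safeguard: restart with steepest descent
            d = -g
        alpha = 1.0               # backtracking (Armijo) line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y, s = g_new - g, x_new - x
        denom = d @ y
        beta = (g_new @ y - t * (g_new @ s)) / denom if abs(denom) > 1e-12 else 0.0
        d = -g_new + beta * d     # Dai-Liao search direction
        x, g = x_new, g_new
    return x

# Convex quadratic test: the minimizer is A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = dai_liao_cg(f, grad, np.zeros(2))
```

Setting t = 0 recovers the Hestenes–Stiefel update; the extra t-term is what enforces an (approximate) conjugacy condition along s_k.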

Journal: SIAM Journal on Optimization, 2021

Shape optimization based on shape calculus has received a lot of attention in recent years, particularly regarding the development, analysis, and modification of efficient algorithms. In this paper we propose and investigate nonlinear conjugate gradient methods in Steklov–Poincaré-type metrics for the solution of problems constrained by partial differential equations. We embed these into a general algorithmic f...

2012
Pinghua Gong, Changshui Zhang

The trust region step problem, which solves a sphere-constrained quadratic program, plays a critical role in the trust region Newton method. In this paper, we propose an efficient Multi-Stage Conjugate Gradient (MSCG) algorithm to compute the trust region step in a multi-stage manner. Specifically, when the iterative solution is in the interior of the sphere, we perform the conjugate gradient...
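The baseline that MSCG refines is the classical Steihaug truncated-CG step: run CG on the quadratic model m(p) = gᵀp + ½pᵀBp and stop when the iterate would leave the sphere ‖p‖ ≤ δ or negative curvature is detected. The sketch below is that standard method, not the multi-stage algorithm of the paper.

```python
import numpy as np

def steihaug_cg(B, g, delta, tol=1e-10, max_iter=100):
    """Truncated CG for the trust-region subproblem min gᵀp + 0.5 pᵀBp, ||p|| <= delta."""
    p = np.zeros_like(g)
    r, d = g.copy(), -g.copy()       # residual r = Bp + g; starts at g
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        dBd = d @ B @ d
        if dBd <= 0:                 # negative curvature: follow d to the boundary
            return p + _boundary_tau(p, d, delta) * d
        alpha = (r @ r) / dBd
        if np.linalg.norm(p + alpha * d) >= delta:   # step exits the sphere
            return p + _boundary_tau(p, d, delta) * d
        p = p + alpha * d
        r_new = r + alpha * (B @ d)
        beta = (r_new @ r_new) / (r @ r)
        d = -r_new + beta * d
        r = r_new
    return p

def _boundary_tau(p, d, delta):
    # positive root tau of ||p + tau d||^2 = delta^2
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

# When the unconstrained Newton step lies inside the sphere, CG returns it:
# here -B^{-1} g = [-0.5, -0.25], well within delta = 10.
B = np.array([[2.0, 0.0], [0.0, 4.0]])
g = np.array([1.0, 1.0])
p = steihaug_cg(B, g, delta=10.0)
```

The interior/boundary branch in this sketch is exactly the case split the abstract describes: pure CG while the iterate stays inside the sphere, and a boundary handling stage once it would cross.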

2010
Neculai Andrei

In this paper we suggest another accelerated conjugate gradient algorithm for which both the descent and the conjugacy conditions are guaranteed. The search direction is selected as $d_{k+1} = -\theta_{k+1} g_{k+1} + \left[ \frac{g_{k+1}^T y_k}{y_k^T s_k} - t\,\frac{g_{k+1}^T s_k}{y_k^T s_k} \right] s_k$, where $g_{k+1} = \nabla f(x_{k+1})$, $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, and $t \ge 0$. The coefficients $\theta_{k+1}$ and $t$ in this linear combinat...

Journal: CoRR, 2017
Tianbing Xu, Qiang Liu, Jian Peng

Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems. However, the variance of the performance gradient estimates obtained from the simulation is often excessive, leading to poor sample efficiency. In this paper, we apply the stochastic variance reduced gradient descent (SVRG) technique [1] to model-free p...
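The variance-reduction idea the abstract carries over to policy gradients can be shown on a toy least-squares objective (an illustrative stand-in, not the paper's reinforcement-learning setup): SVRG replaces the raw stochastic gradient with ∇f_i(w) − ∇f_i(w̃) + μ, where w̃ is a periodic snapshot and μ the full gradient at that snapshot.

```python
import numpy as np

def svrg(X, y, lr=0.1, epochs=30, inner=50, seed=0):
    """SVRG on f(w) = (1/n) sum_i 0.5 (x_i.w - y_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]       # per-sample gradient
    for _ in range(epochs):
        w_snap = w.copy()
        mu = (X.T @ (X @ w_snap - y)) / n                # full gradient at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # variance-reduced gradient estimate: unbiased, and its variance
            # shrinks to zero as w and w_snap approach the optimum
            v = grad_i(w, i) - grad_i(w_snap, i) + mu
            w = w - lr * v
    return w

# Noiseless toy data: the least-squares solution [2, -1] is exactly recoverable.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([2.0, -1.0])
w = svrg(X, y)
```

Plain SGD with a constant step size would stall at a noise floor set by the gradient variance; the control-variate term is what lets SVRG converge with a fixed learning rate.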

Chart: number of search results per year
